---
title: "GloVe Word Embeddings"
author: "Dmitriy Selivanov"
date: "`r Sys.Date()`"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{GloVe Word Embeddings}
  %\VignetteEngine{knitr::knitr}
  %\VignetteEncoding{UTF-8}
---

```{r global_options, include=FALSE}
knitr::opts_chunk$set(echo=TRUE, eval=FALSE, warning=FALSE, message=FALSE)
```

# Word embeddings

After Tomas Mikolov et al. released the [word2vec](https://code.google.com/archive/p/word2vec) tool, there was a boom of articles about word vector representations. One of the best of these is Stanford's [GloVe: Global Vectors for Word Representation](http://nlp.stanford.edu/projects/glove/), which explains why such algorithms work and reformulates the word2vec optimization as a special kind of factorization of word co-occurrence matrices.

Here I will briefly introduce the GloVe algorithm and show how to use its text2vec implementation.

# GloVe algorithm

The GloVe algorithm consists of the following steps:

1. Collect word co-occurrence statistics in the form of a word co-occurrence matrix $X$. Each element $X_{ij}$ of this matrix represents how often word *i* appears in the context of word *j*. Usually we scan the corpus as follows: for each term we look for context terms within an area defined by a *window_size* before the term and a *window_size* after the term. We also give less weight to more distant words, usually using the formula
    $$decay = 1/offset$$

2. Define soft constraints for each word pair:
    $$w_i^Tw_j + b_i + b_j = \log(X_{ij})$$
    Here $w_i$ is the vector for the main word, $w_j$ is the vector for the context word, and $b_i$, $b_j$ are scalar biases for the main and context words.

3. Define a cost function
    $$J = \sum_{i=1}^V \sum_{j=1}^V \; f(X_{ij}) ( w_i^T w_j + b_i + b_j - \log X_{ij})^2$$
    Here $f$ is a weighting function which helps us prevent learning only from extremely common word pairs. The GloVe authors choose the following function:
    $$
    f(X_{ij}) =
    \begin{cases}
    (\frac{X_{ij}}{x_{max}})^\alpha & \text{if } X_{ij} < x_{max} \\
    1 & \text{otherwise}
    \end{cases}
    $$

# Linguistic regularities

Now let's examine how the GloVe embeddings work. As is commonly known, word2vec word vectors capture many linguistic regularities. To give the canonical example, if we take the word vectors for the words "paris", "france", and "germany" and perform the operation

$$vector("paris") - vector("france") + vector("germany")$$

the resulting vector will be close to the vector for "berlin".

Let's download the same Wikipedia data used as a demo by word2vec:

```{r}
library(text2vec)
text8_file = "~/text8"
if (!file.exists(text8_file)) {
  download.file("http://mattmahoney.net/dc/text8.zip", "~/text8.zip")
  unzip("~/text8.zip", files = "text8", exdir = "~/")
}
wiki = readLines(text8_file, n = 1, warn = FALSE)
```

In the next step we will create a vocabulary, the set of words for which we want to learn word vectors. Note that all of text2vec's functions which operate on raw text data (`create_vocabulary`, `create_corpus`, `create_dtm`, `create_tcm`) have a streaming API, and you should pass an iterator over tokens as the first argument to these functions.

```{r}
# Create iterator over tokens
tokens <- space_tokenizer(wiki)
# Create vocabulary. Terms will be unigrams (simple words).
it = itoken(tokens, progressbar = FALSE)
vocab <- create_vocabulary(it)
```

These words should not be too uncommon. For example, we cannot calculate a meaningful word vector for a word which we saw only once in the entire corpus.
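Before pruning, it can help to see how much of the vocabulary consists of such rare terms. Here is a minimal check, assuming `vocab` is returned as a data frame with a `term_count` column (as in recent text2vec versions):

```{r}
# How many terms occur only once, and how many clear a threshold of 5?
# Assumes `vocab` has a `term_count` column (recent text2vec versions).
sum(vocab$term_count == 1)
sum(vocab$term_count >= 5)
```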
Here we will take only words which appear at least five times. text2vec provides additional options to filter the vocabulary (see `?prune_vocabulary`).

```{r}
vocab <- prune_vocabulary(vocab, term_count_min = 5L)
```

Now we have 71,290 terms in the vocabulary and are ready to construct the term co-occurrence matrix (TCM).

```{r}
# Use our filtered vocabulary
vectorizer <- vocab_vectorizer(vocab)
# use window of 5 for context words
tcm <- create_tcm(it, vectorizer, skip_grams_window = 5L)
```

Now we have a TCM matrix and can factorize it via the GloVe algorithm. text2vec uses a parallel stochastic gradient descent algorithm. By default it will use all cores on your machine, but you can specify the number of cores if you wish.

Let's fit our model. (It can take several minutes to fit!)

```{r, message=TRUE}
glove = GlobalVectors$new(rank = 50, x_max = 10)
wv_main = glove$fit_transform(tcm, n_iter = 10, convergence_tol = 0.01, n_threads = 8)
# INFO [09:35:20.779] epoch 1, loss 0.1758
# INFO [09:35:28.212] epoch 2, loss 0.1223
# INFO [09:35:35.500] epoch 3, loss 0.1081
# INFO [09:35:43.100] epoch 4, loss 0.1003
# INFO [09:35:50.848] epoch 5, loss 0.0953
# INFO [09:35:58.593] epoch 6, loss 0.0917
# INFO [09:36:06.346] epoch 7, loss 0.0890
# INFO [09:36:14.123] epoch 8, loss 0.0868
# INFO [09:36:21.862] epoch 9, loss 0.0851
# INFO [09:36:29.610] epoch 10, loss 0.0836
```

And now we get the word vectors. The model learns two sets of vectors, main and context; the context vectors are stored transposed in `glove$components`, and summing the two sets is a common way to obtain the final word vectors:

```{r}
wv_context = glove$components
word_vectors = wv_main + t(wv_context)
```

We can find the closest word vectors for our *paris - france + germany* example:

```{r}
berlin <- word_vectors["paris", , drop = FALSE] -
  word_vectors["france", , drop = FALSE] +
  word_vectors["germany", , drop = FALSE]
cos_sim = sim2(x = word_vectors, y = berlin, method = "cosine", norm = "l2")
head(sort(cos_sim[,1], decreasing = TRUE), 5)
#     paris    berlin    munich    madrid   germany
# 0.7859821 0.7410693 0.6490518 0.6216343 0.6160014
```

You can achieve much better results by experimenting with `skip_grams_window` and the parameters of the `GloVe` class (including the word vector size and the number of iterations).
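As a starting point for such experiments, here is a rough sketch; the window size, rank, `x_max`, and iteration count below are illustrative values, not tuned recommendations:

```{r}
# Illustrative hyperparameters only -- tune them for your own corpus.
it2 <- itoken(tokens, progressbar = FALSE)   # fresh iterator over the same tokens
tcm_wide <- create_tcm(it2, vectorizer, skip_grams_window = 10L)

glove_big <- GlobalVectors$new(rank = 100, x_max = 100)
wv_main_big <- glove_big$fit_transform(tcm_wide, n_iter = 20,
                                       convergence_tol = 0.005, n_threads = 8)
word_vectors_big <- wv_main_big + t(glove_big$components)
```

The same `sim2()` check as above can then be repeated with `word_vectors_big` to see whether the analogy results improve.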