Word Vectors

Tags: CS 224N

What came before?

We had WordNet, which is essentially a giant dictionary that contains synonym sets and hypernyms (“is a” relationships).

This didn’t work very well because word meaning depends on context; whether two words act as synonyms is bound to how they’re used. Furthermore, new words and new meanings of existing words appear all the time.

We also tried treating words as discrete symbols, i.e., as one-hot vectors. The problem is that two similar words are just as orthogonal as two very different words. Can we make a vector that somehow encodes the semantics of a word?

We arrive at the field of distributional semantics, which rests on the assumption that a word’s meaning is given by the contexts it appears in. Or, to put it another way, words are more similar if they are used in similar contexts.

Here’s something tricky. Antonyms are pretty semantically similar: “hot” and “cold” are more similar as a pair than “hot” and “rabbit.” With a large enough embedding size, we can encode this similarity as well as the antonym relationship, which is just another relationship.

Word2Vec

The concept of Word2Vec is actually really simple. For each center word in a corpus of text, maximize the probability of seeing the surrounding words. This represents a sort of Markov Random Field where every $n$ neighboring nodes are interconnected.

There are two model variants. Predicting $p(o | c)$ (context given center) is our version, and it’s called skip-gram. We can also do continuous bag of words (CBOW), where you predict $p(c | o)$.

Concretely, we arrive at a log-probability loss function.
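The slide’s formula isn’t copied into these notes; for reference, the standard skip-gram objective (the average negative log-likelihood over a corpus of length $T$, with window size $m$, symbols introduced here just for the formula) looks like this:

$$J(\theta) = -\frac{1}{T}\sum_{t=1}^{T}\ \sum_{\substack{-m \le j \le m \\ j \ne 0}} \log P(w_{t+j} | w_t; \theta)$$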

Computing probabilities

For every word $w$, we have an embedding $v_w$, which is used when it is a center word, and an embedding $u_w$, which is used when it is a context word. We use two embeddings because it keeps the math and optimization simple. When we actually use the embeddings, we average $v_w$ and $u_w$.

We compute the probability of a context word $o$ given a center word $c$ as

$$P(o | c) = \frac{\exp(u_o^T v_c)}{\sum_{w \in V}\exp(u_w^T v_c)}$$

which is akin to taking the dot product between $v_c$ and every row of $U$, and then applying a softmax (hold onto this thought: we will look at vectorization later).
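As a small illustration of that computation, here is a minimal NumPy sketch; the matrix names `U`, `V`, the vocabulary size, and the random initialization are assumptions made up for the example, not anything prescribed by the course.

```python
import numpy as np

# Toy setup: vocabulary of 5 words, embedding dimension 4.
# U holds context ("outside") vectors, V holds center vectors, one row per word.
rng = np.random.default_rng(0)
vocab_size, dim = 5, 4
U = rng.normal(size=(vocab_size, dim))
V = rng.normal(size=(vocab_size, dim))

def p_context_given_center(center_idx):
    """P(o | c) for every candidate context word o: softmax over dot products."""
    scores = U @ V[center_idx]     # dot product of v_c with every u_w
    scores -= scores.max()         # shift for numerical stability
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()

probs = p_context_given_center(center_idx=2)
print(probs, probs.sum())  # a distribution over the vocabulary; sums to 1
```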

Computing $d/dv$ (non-vectorized)

Computing $d/du$ (non-vectorized)

TO DO (not discussed in class yet)
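These two sections aren’t filled in above; for reference only (not from the class notes), the standard gradients of $\log P(o | c)$ with respect to the center and context vectors work out to

$$\frac{\partial}{\partial v_c} \log P(o | c) = u_o - \sum_{w \in V} P(w | c)\, u_w$$

$$\frac{\partial}{\partial u_w} \log P(o | c) = \big(\mathbf{1}[w = o] - P(w | c)\big)\, v_c$$

i.e., the observed context vector minus the expected context vector under the model’s current predictions.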

Negative sampling and skip-gram

So computing $P(o | c)$ is fine for a small vocabulary, but the denominator (a sum over the entire vocabulary) is concerning. Can we reframe this problem as binary classification? Yes we can!

We just want a binary logistic regression that differentiates a true center-context word pair from a “noise” pair (the center word paired with a random word). Concretely, we use a logistic regression loss.
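The slide’s exact formula isn’t in these notes; the standard skip-gram negative-sampling loss for one $(c, o)$ pair, with $K$ noise words $w_k$ drawn from a noise distribution $P(w)$ and $\sigma$ the sigmoid function, is

$$J(o, c) = -\log \sigma(u_o^T v_c) - \sum_{k=1}^{K} \log \sigma(-u_{w_k}^T v_c), \qquad w_k \sim P(w)$$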

But what is $P(w)$? Well, we can start with $U(w)$, the unigram distribution of words (found by tallying up counts in the data). It turns out that if we use $U(w)^{3/4}$ (renormalized), we “soften” the distribution and get better performance, since rare words get sampled a bit more often.

This yields a pretty sparse gradient, which means you only need to update a few rows of your embedding matrices at each step.
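Below is a minimal, hedged NumPy sketch of one negative-sampling update, just to show why the gradient is sparse: only the rows for the center word, the true context word, and the sampled negatives get touched. The array names, counts, and hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim, K, lr = 1000, 50, 5, 0.05
U = 0.01 * rng.normal(size=(vocab_size, dim))   # context vectors
V = 0.01 * rng.normal(size=(vocab_size, dim))   # center vectors

# Noise distribution: unigram counts raised to the 3/4 power, then normalized.
counts = rng.integers(1, 100, size=vocab_size).astype(float)  # stand-in for real counts
noise_probs = counts ** 0.75
noise_probs /= noise_probs.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_update(center, context):
    """One SGD step on the negative-sampling loss for a (center, context) pair."""
    negatives = rng.choice(vocab_size, size=K, p=noise_probs)
    v_c = V[center]

    # Positive pair: push sigma(u_o . v_c) toward 1.
    g_pos = sigmoid(U[context] @ v_c) - 1.0
    grad_v = g_pos * U[context]
    U[context] -= lr * g_pos * v_c

    # Negative pairs: push sigma(u_k . v_c) toward 0.
    for k in negatives:
        g_neg = sigmoid(U[k] @ v_c)
        grad_v += g_neg * U[k]
        U[k] -= lr * g_neg * v_c

    V[center] -= lr * grad_v  # only these few rows of U and V ever change

sgns_update(center=42, context=7)
```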

Co-occurrence counting

While this all makes sense, why can’t we just count co-occurrences and build embeddings that way? We can make such a matrix.

In other words, $X_{ij}$ is the number of times word $i$ appeared within a certain window of word $j$.

You can then reduce dimensionality through SVD. By itself, this often doesn’t work very well. There are some tricks that help, including taking log frequencies, capping large counts at a maximum (e.g., 100), and ignoring function words that would otherwise saturate the co-occurrence matrix. You can also use ramped windows that weight closer words more than farther words.
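As a hedged sketch of the basic idea (before any of the tricks above), here’s how one might build a window-based co-occurrence matrix from a toy corpus and reduce it with a truncated SVD; the corpus, window size, and variable names are invented for the example.

```python
import numpy as np

corpus = ["i like deep learning", "i like nlp", "i enjoy flying"]
window = 1  # symmetric window size

# Build the vocabulary and the co-occurrence count matrix X.
tokens = [sent.split() for sent in corpus]
vocab = sorted({w for sent in tokens for w in sent})
idx = {w: i for i, w in enumerate(vocab)}
X = np.zeros((len(vocab), len(vocab)))

for sent in tokens:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                X[idx[w], idx[sent[j]]] += 1

# Reduce dimensionality with a truncated SVD: keep the top-k singular directions.
k = 2
U_svd, S, Vt = np.linalg.svd(X)
embeddings = U_svd[:, :k] * S[:k]   # one k-dimensional vector per word
print(dict(zip(vocab, embeddings.round(2))))
```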

Emergent Semantic Patterns

A particular direction in the vector space can carry meaning. For example, there is roughly a pluralization vector that turns most nouns into their plural forms, as illustrated below.
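For instance (an illustrative example with made-up word choices, not a measured result), the offsets between singular and plural forms tend to line up, so one direction works across many nouns:

$$v_{\text{apples}} - v_{\text{apple}} \approx v_{\text{cars}} - v_{\text{car}} \approx v_{\text{kings}} - v_{\text{king}}$$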

GloVe

GloVe tries to exploit these emergent semantic patterns directly. It uses a log-bilinear model that encourages linearity in the embedding space.

Concretely, it asks that dot products of word vectors capture log co-occurrence probabilities, which means that vector differences capture ratios of co-occurrence probabilities.
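The slide’s equations aren’t reproduced here; roughly, in the GloVe framing (writing $w_i$ for the vector of word $i$, a notation introduced just for these formulas), the log-bilinear model asks that

$$w_i \cdot w_j = \log P(i | j)$$

so that vector differences encode ratios of co-occurrence probabilities:

$$w_x \cdot (w_a - w_b) = \log \frac{P(x | a)}{P(x | b)}$$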

Evaluating word vectors

Intrinsic

These are evaluations run on the word vectors directly, without any downstream task. Examples include vector analogies: you take the difference vector across one side of a semantic analogy, like man → woman, which yields a sort of gender vector. Adding it to “king” should land near “queen”.
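A hedged sketch of the analogy test, assuming an `embeddings` dict mapping words to unit-length vectors; the dict here is filled with random placeholders rather than real trained vectors, so only the mechanics are meaningful.

```python
import numpy as np

# Placeholder embeddings; in practice these would come from word2vec or GloVe.
rng = np.random.default_rng(0)
embeddings = {}
for w in ["king", "queen", "man", "woman", "rabbit"]:
    v = rng.normal(size=50)
    embeddings[w] = v / np.linalg.norm(v)

def analogy(a, b, c, exclude=()):
    """Solve a : b :: c : ? by nearest cosine neighbor to (b - a + c)."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    target /= np.linalg.norm(target)
    scores = {w: v @ target for w, v in embeddings.items() if w not in exclude}
    return max(scores, key=scores.get)

# With real trained vectors this should print "queen".
print(analogy("man", "woman", "king", exclude=("man", "woman", "king")))
```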

You can also check whether the model’s similarity scores between word pairs correlate with human judgments of similarity.

Extrinsic

You can try the word vectors on a downstream task. A good example is named entity recognition, which tries to assign a category such as person, organization, or location to each word.

To do this, you just take a window centered on the target word, look up the embeddings for all the words in the window, concatenate them, and feed that low-dimensional vector as input to a classifier with a softmax output.
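A minimal, hedged sketch of such a window classifier (just a single softmax layer; the window size, dimensions, class set, and placeholder vectors are assumptions for illustration, and no training is shown):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, window, n_classes = 50, 2, 4            # classes could be e.g. PER, ORG, LOC, O
sentence = ["museums", "in", "paris", "are", "amazing"]
E = {w: rng.normal(size=dim) for w in sentence}   # pretrained word vectors (placeholders)
W = 0.01 * rng.normal(size=(n_classes, (2 * window + 1) * dim))
b = np.zeros(n_classes)

def classify(sent, position):
    """Concatenate the embeddings in a window around `position`, then apply softmax."""
    padded = ["<pad>"] * window + sent + ["<pad>"] * window
    ctx = padded[position : position + 2 * window + 1]
    x = np.concatenate([E.get(w, np.zeros(dim)) for w in ctx])  # low-dim input vector
    scores = W @ x + b
    scores -= scores.max()
    return np.exp(scores) / np.exp(scores).sum()

print(classify(sentence, position=2))  # class probabilities for "paris"
```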

Word ambiguity

Words can take on a ton of different meanings in different contexts. How can one vector express all of that?

Well, you can try to just add more vectors, one per sense! This can work, but it’s impractical: it adds more moving parts (how many vectors per word? how do you split the senses?).

It actually turns out that in a good word embedding, a word’s vector is a superposition of the vectors for its different senses. You can extract the relevant sense using the surrounding context, and because there is a degree of sparseness involved, the different components are actually recoverable.
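Concretely (this is the linear-superposition result from Arora et al. that this observation draws on, stated roughly rather than quoted), the vector for an ambiguous word comes out as approximately a frequency-weighted sum of vectors for its individual senses:

$$v_{\text{word}} \approx \alpha_1 v_{\text{sense}_1} + \alpha_2 v_{\text{sense}_2} + \cdots, \qquad \alpha_i = \frac{f_i}{\sum_j f_j}$$

where $f_i$ is how often sense $i$ occurs in the corpus.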