- GloVe: Global Vectors for Word Representation
GloVe is an unsupervised learning algorithm for obtaining vector representations for words. Training is performed on aggregated global word-word co-occurrence statistics from a corpus, and the resulting representations showcase interesting linear substructures of the word vector space.
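The "linear substructures" claim is concrete enough to demonstrate. Below is a minimal sketch, assuming a pre-trained GloVe text file in the standard "word v1 v2 ... vd" format; the file name and both helper functions are illustrative, not part of the GloVe release:

```python
import numpy as np

def load_glove(path):
    """Parse a GloVe text file into a {word: vector} dict."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def nearest(vectors, query, k=5):
    """Return the k words whose vectors are most cosine-similar to query."""
    sims = {
        w: float(np.dot(v, query) / (np.linalg.norm(v) * np.linalg.norm(query)))
        for w, v in vectors.items()
    }
    return sorted(sims, key=sims.get, reverse=True)[:k]

vecs = load_glove("glove.6B.50d.txt")  # hypothetical local copy
# Linear substructure: the offset king - man + woman should land
# near "queen" in a well-trained space.
print(nearest(vecs, vecs["king"] - vecs["man"] + vecs["woman"]))
```

In practice the query words themselves ("king", "woman") are excluded from the ranking before reporting the analogy's answer.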
- GloVe: Global Vectors for Word Representation
The result, GloVe, is a new global log-bilinear regression model for the unsupervised learning of word representations that outperforms other models on word analogy, word similarity, and named entity recognition tasks.
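For reference, the "global log-bilinear regression" is the weighted least-squares objective given in Pennington et al. (2014), fit directly to the log of the co-occurrence counts:

```latex
% w_i, \tilde{w}_j: word and context vectors; b_i, \tilde{b}_j: scalar biases;
% X_{ij}: co-occurrence count of words i and j; V: vocabulary size.
J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^{\top} \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^{2},
\qquad
f(x) =
\begin{cases}
  (x / x_{\max})^{\alpha} & \text{if } x < x_{\max} \\
  1 & \text{otherwise}
\end{cases}
```

The weighting function $f$ damps the influence of rare (noisy) and very frequent co-occurrences; the paper uses $x_{\max} = 100$ and $\alpha = 3/4$.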
- Jeffrey Pennington - Stanford University
Our unsupervised RAEs (recursive autoencoders) are based on a novel unfolding objective and learn feature vectors for phrases in syntactic trees. These features are used to measure the word- and phrase-wise similarity between two sentences.
- The Stanford Natural Language Processing Group
@inproceedings{pennington2014glove,
  author    = {Jeffrey Pennington and Richard Socher and Christopher D. Manning},
  booktitle = {Empirical Methods in Natural Language Processing (EMNLP)},
  title     = {GloVe: Global Vectors for Word Representation},
  year      = {2014},
  pages     = {1532--1543},
  url       = {http://www.aclweb.org/anthology/D14-1162},
}
- Christopher Manning, Stanford NLP
GloVe: Global Vectors for Word Representation by Jeffrey Pennington, Richard Socher, and Christopher Manning won the 10-year Test of Time Award at ACL 2024.
- Christopher Manning: Papers and publications - Stanford University
Information Spreading and Levels of Representation in LFG. CSLI Technical Report CSLI-93-176, Stanford University, Stanford, CA. http://nlp.stanford.edu/~manning/papers/proj.ps
- The Stanford Natural Language Processing Group
Our approaches go beyond learning word vectors and also learn vector representations for multi-word phrases, grammatical relations, and bilingual phrase pairs, all of which are useful for various NLP applications
- Traversing Knowledge Graphs in Vector Space - Stanford University
We initialized all models with word vectors from Pennington et al. (2014). We found that compositionally trained models outperform the neural tensor network (NTN) on WordNet, while being only slightly behind on Freebase.