Merge pull request #11 from shreyasajal/master
Add files via upload
Showing 2 changed files with 49 additions and 0 deletions.
@@ -0,0 +1,24 @@
## **GloVe Representation Model**
> **Quick Overview**
1. A new global log-bilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods.
2. The model efficiently leverages statistical information by training only on the nonzero elements of a word-word co-occurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus (see the sketch below).
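
Point 2 is the core computational idea: a weighted least-squares objective over the nonzero co-occurrence counts only. Below is a minimal NumPy sketch of that objective, not the authors' reference implementation or the linked notebook; the toy corpus, window size, vector dimension, and learning rate are illustrative choices.

```python
# Minimal sketch of the GloVe objective: train word/context vectors only on the
# nonzero entries of the co-occurrence matrix, with the paper's weighting f(x).
from collections import defaultdict
import numpy as np

corpus = ["the cat sat on the mat", "the dog sat on the rug"]
tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric-window co-occurrence counts, weighted by 1/distance.
window = 2
X = defaultdict(float)
for sent in tokens:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                X[(idx[w], idx[sent[j]])] += 1.0 / abs(i - j)

V, d = len(vocab), 8
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, d))    # word vectors
Wc = rng.normal(scale=0.1, size=(V, d))   # separate context vectors
b, bc = np.zeros(V), np.zeros(V)          # word and context biases
x_max, alpha, lr = 100.0, 0.75, 0.05

def f(x):
    # Weighting function: caps the influence of very frequent co-occurrences.
    return (x / x_max) ** alpha if x < x_max else 1.0

for epoch in range(50):
    loss = 0.0
    for (i, j), x in X.items():           # only nonzero co-occurrence entries
        diff = W[i] @ Wc[j] + b[i] + bc[j] - np.log(x)
        loss += f(x) * diff ** 2
        g = f(x) * diff
        W[i], Wc[j] = W[i] - lr * g * Wc[j], Wc[j] - lr * g * W[i]
        b[i] -= lr * g
        bc[j] -= lr * g
# After training, the embeddings are usually taken as W + Wc (or just W).
```

In the paper itself, training uses AdaGrad rather than plain SGD, and the final embedding for each word is the sum of its word and context vectors.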
> [**Presentation made for the discussion**](https://docs.google.com/presentation/d/1UZZ35_wa9pQbZEIsC77SVpzC3XrrM5LACXiS0UOnOQg/edit?usp=sharing)
> [**Implementation through transfer learning**](https://colab.research.google.com/drive/1J75hTE5UFPKeO0GcV8os9YTILhptYDrY?usp=sharing)

> **Resources**
1. [Paper](https://www.aclweb.org/anthology/D14-1162/)
2. [Video](https://www.youtube.com/watch?v=ASn7ExxLZws&t=3068s)
3. [Pretrained word vectors](https://nlp.stanford.edu/projects/glove/) (a loading sketch follows below)
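
As a quick way to try the pretrained vectors from resource 3, the sketch below loads one of the plain-text GloVe files and looks up nearest neighbours by cosine similarity. The filename `glove.6B.100d.txt` is an assumption (one of the files in the glove.6B download from the Stanford page above); substitute whichever file you fetch.

```python
# Sketch of loading pretrained GloVe vectors (plain-text format: a word followed
# by its float components, one word per line). Assumes glove.6B.100d.txt has
# been downloaded and unzipped locally.
import numpy as np

embeddings = {}
with open("glove.6B.100d.txt", encoding="utf-8") as fh:
    for line in fh:
        word, *values = line.rstrip().split(" ")
        embeddings[word] = np.asarray(values, dtype=np.float32)

def nearest(query, k=5):
    # Return the k words whose vectors are most cosine-similar to `query`.
    q = embeddings[query]
    q = q / np.linalg.norm(q)
    scores = {
        w: float(v @ q / np.linalg.norm(v))
        for w, v in embeddings.items()
        if w != query
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Example: nearest("king") typically returns related words such as "queen" or "prince".
```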
@@ -0,0 +1,25 @@
## **Word2Vec Representation Model**
> **Quick Overview**
1. The main goal of this paper is to introduce techniques for learning high-quality word vectors from huge data sets with billions of words and vocabularies of millions of words.
2. It proposes two novel model architectures, CBOW and Skip-Gram, for computing continuous vector representations of words from very large data sets (see the sketch below).
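
To make the difference between the two architectures concrete, here is a small self-contained sketch (an illustration, not code from the paper or the linked presentation) of how training examples are framed: CBOW predicts the center word from its surrounding context, while Skip-Gram predicts each context word from the center word. The sentence and window size are toy choices.

```python
# Build CBOW and Skip-Gram training pairs from a toy sentence with a
# symmetric context window of size 2.
sentence = "the quick brown fox jumps over the lazy dog".split()
window = 2

cbow_pairs, skipgram_pairs = [], []
for i, center in enumerate(sentence):
    context = [
        sentence[j]
        for j in range(max(0, i - window), min(len(sentence), i + window + 1))
        if j != i
    ]
    cbow_pairs.append((context, center))                   # context words -> center word
    skipgram_pairs.extend((center, c) for c in context)    # center word -> each context word

print(cbow_pairs[2])       # (['the', 'quick', 'fox', 'jumps'], 'brown')
print(skipgram_pairs[:4])  # [('the', 'quick'), ('the', 'brown'), ('quick', 'the'), ('quick', 'brown')]
```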
> [**Presentation made for the discussion**](https://drive.google.com/file/d/1Hwi-Iy1tgr-N3zHoRFh0pLE5PN6cuk3s/view?usp=sharing)
> **Resources**
1. [Paper](https://arxiv.org/abs/1301.3781)
2. [Video](https://www.youtube.com/watch?v=ERibwqs9p38)