
Commit 89355a4

tomaarsen, xenova, and osanseviero authored
🪆 Introduction to Matryoshka Embedding Models (huggingface#1849)
* Initial draft for Matryoshka Embedding blogpost
* Remove duplicate asset
* Remove incorrect sentence about training speed
* Include demo via iframe instead
  Co-authored-by: Joshua Lochner <[email protected]>
* Revert "Remove incorrect sentence about training speed"
  As it turns out, the sentence was correct, oops. This reverts commit 94354df.
* Add results section
* Adopt various slight rewrites from review
  Co-authored-by: Joshua Lochner <[email protected]>
* jpg -> png
* Add missing comma
  Co-authored-by: Joshua Lochner <[email protected]>
* Mention that you can store truncated embeddings + Vector Database
* Link to Colab & demo for inference
* Move assets to hf/doc-images; mention fixed size; update thumbnail
* considerably smaller latencies -> considerable speedups
  Co-authored-by: Joshua Lochner <[email protected]>
* Update phrasing in training
  Co-authored-by: Omar Sanseviero <[email protected]>
* Add Matryoshka dolls section + gif
  Co-authored-by: Joshua Lochner <[email protected]>
* Apply suggestions from code review
  Co-authored-by: Omar Sanseviero <[email protected]>
* Add newlines before/after headers & lists
* Add ToC
* Add intro above the ToC
* Remove the old thumbnail; we don't need to change it over time
* Host inference colab on the Hub instead
* Embed demo a bit larger, still centered
* Update incorrectly specified parameter
* Add Omar as third author
* Add another link to the Sentence Transformers repository
* Revert to 100% width

Co-authored-by: Joshua Lochner <[email protected]>
Co-authored-by: Omar Sanseviero <[email protected]>
1 parent e35ed7c commit 89355a4

File tree

3 files changed: +243, -0 lines changed


‎_blog.yml

+10
```diff
@@ -3508,3 +3508,13 @@
   - research
   - LLM
   - gcp
+
+- local: matryoshka
+  title: "🪆 Introduction to Matryoshka Embedding Models"
+  author: tomaarsen
+  thumbnail: /blog/assets/matryoshka/thumbnail.png
+  date: Feb 23, 2024
+  tags:
+  - nlp
+  - community
+  - guide
```

‎assets/matryoshka/thumbnail.png

201 KB

‎matryoshka.md

+233
---
title: "🪆 Introduction to Matryoshka Embedding Models"
thumbnail: /blog/assets/matryoshka/thumbnail.png
authors:
- user: tomaarsen
- user: xenova
- user: osanseviero
---

# 🪆 Introduction to Matryoshka Embedding Models
In this blog post, we will introduce you to the concept of Matryoshka Embeddings and explain why they are useful. We will discuss how these models are theoretically trained and how you can train them using Sentence Transformers.

Additionally, we will provide practical guidance on how to use Matryoshka Embedding models and share a comparison between a Matryoshka embedding model and a regular embedding model. Finally, we invite you to check out our interactive demo that showcases the power of these models.

## Table of Contents
* [Understanding Embeddings](#understanding-embeddings)
* [🪆 Matryoshka Embeddings](#-matryoshka-embeddings)
* [🪆 Matryoshka Dolls](#-matryoshka-dolls)
* [Why would you use 🪆 Matryoshka Embedding models?](#why-would-you-use-matryoshka-embedding-models)
* [How are 🪆 Matryoshka Embedding models trained?](#how-are-matryoshka-embedding-models-trained)
  + [Theoretically](#theoretically)
  + [In Sentence Transformers](#in-sentence-transformers)
* [How do I use 🪆 Matryoshka Embedding models?](#how-do-i-use-matryoshka-embedding-models)
  + [Theoretically](#theoretically-1)
  + [In Sentence Transformers](#in-sentence-transformers-1)
* [Results](#results)
* [Demo](#demo)
* [References](#references)
## Understanding Embeddings

Embeddings are one of the most versatile tools in natural language processing, enabling practitioners to solve a large variety of tasks. In essence, an embedding is a numerical representation of a more complex object, like text, images, audio, etc.

![embedding model](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/matryoshka/embedding_model.png)

The embedding model will always produce embeddings of the same fixed size. You can then compute the similarity of complex objects by computing the similarity of the respective embeddings!

![embedding similarity](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/matryoshka/embedding_similarity.png)

This has an enormous number of use cases and serves as the backbone for recommendation systems, retrieval, one-shot or few-shot learning, outlier detection, similarity search, paraphrase detection, clustering, classification, and much more!
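As a quick illustration, here is a minimal sketch of this workflow with Sentence Transformers. The model name `all-MiniLM-L6-v2` is only an example of a regular (non-Matryoshka) embedding model, and the sentences are made up:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Any regular embedding model works here; this one is just an example
model = SentenceTransformer("all-MiniLM-L6-v2")

# Every input text is mapped to an embedding of the same fixed size
embeddings = model.encode(
    [
        "How do I bake bread?",
        "Bread baking instructions",
        "The capital of France is Paris.",
    ]
)
print(embeddings.shape)  # e.g. (3, 384) for this example model

# Similar texts get similar embeddings, so we can compare them via cosine similarity
print(cos_sim(embeddings[0], embeddings[1:]))
```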
## 🪆 Matryoshka Embeddings

As research progressed, new state-of-the-art (text) embedding models started producing embeddings with increasingly higher output dimensions, i.e., every input text is represented using more values. Although this improves performance, it comes at the cost of the efficiency of downstream tasks such as search or classification.

Consequently, [Kusupati et al.](https://arxiv.org/abs/2205.13147) (2022) were inspired to create embedding models whose embeddings could reasonably be shrunk without losing too much performance.

![matryoshka model](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/matryoshka/matryoshka_model.png)

These Matryoshka embedding models are trained such that the small, truncated embeddings remain useful. In short, Matryoshka embedding models can produce useful embeddings of various dimensions.
## 🪆 Matryoshka Dolls

For those unfamiliar, "Matryoshka dolls", also known as "Russian nesting dolls", are a set of wooden dolls of decreasing size placed inside one another. In a similar way, Matryoshka embedding models aim to store more important information in earlier dimensions and less important information in later dimensions. This characteristic of Matryoshka embedding models allows us to truncate the original (large) embedding produced by the model, while still retaining enough of the information to perform well on downstream tasks.

![matryoshka models](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/matryoshka/matryoshka-small.gif)

## Why would you use 🪆 Matryoshka Embedding models?
Such variable-size embedding models can be quite valuable to practitioners, for example:

1. **Shortlisting and reranking**: Rather than performing your downstream task (e.g., nearest neighbor search) on the full embeddings, you can shrink the embeddings to a smaller size and very efficiently "shortlist" your embeddings. Afterwards, you can process the remaining embeddings using their full dimensionality (see the sketch after this list).
2. **Trade-offs**: Matryoshka models allow you to scale your embedding solutions to your desired storage cost, processing speed, and performance.
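To make the shortlisting-and-reranking idea concrete, here is a minimal sketch with NumPy. The corpus size, query, and dimension choices are made up for illustration, and a real system would typically use an approximate nearest neighbor index for the first pass:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Pretend these came from a Matryoshka embedding model: 10,000 documents, 768 dimensions
corpus_embeddings = rng.standard_normal((10_000, 768)).astype(np.float32)
query_embedding = rng.standard_normal(768).astype(np.float32)

def normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Pass 1: shortlist with cheap, truncated embeddings (first 64 dimensions)
small_corpus = normalize(corpus_embeddings[:, :64])
small_query = normalize(query_embedding[:64])
shortlist = np.argsort(-small_corpus @ small_query)[:100]  # top 100 candidates

# Pass 2: rerank only the shortlisted candidates with the full-size embeddings
full_corpus = normalize(corpus_embeddings[shortlist])
full_query = normalize(query_embedding)
reranked = shortlist[np.argsort(-full_corpus @ full_query)][:10]  # final top 10
print(reranked)
```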
## How are 🪆 Matryoshka Embedding models trained?

### Theoretically

The Matryoshka Representation Learning (MRL) approach can be adopted for almost all embedding model training frameworks. Normally, a training step for an embedding model involves producing embeddings for your training batch (of texts, for example) and then using some loss function to create a loss value that represents the quality of the produced embeddings. The optimizer will adjust the model weights throughout training to reduce the loss value.

For Matryoshka Embedding models, a training step also involves producing embeddings for your training batch, but then you use some loss function to determine not just the quality of your full-size embeddings, but also the quality of your embeddings at various different dimensionalities. For example, the output dimensionalities could be 768, 512, 256, 128, and 64. The loss values for each dimensionality are added together, resulting in a final loss value. The optimizer will then try to adjust the model weights to lower this loss value.
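As a rough sketch of that idea (not the actual Sentence Transformers implementation, which is shown in the next section), the Matryoshka loss can be written as a sum of an ordinary embedding loss applied to truncated views of the same embeddings. The `base_loss_fn` below is a stand-in for whatever loss your framework uses, and the toy cosine-similarity loss is only for demonstration:

```python
import torch
import torch.nn.functional as F

def matryoshka_loss(embeddings_a, embeddings_b, labels, base_loss_fn,
                    dims=(768, 512, 256, 128, 64), weights=None):
    """Sum a base embedding loss over several truncated embedding sizes."""
    weights = weights or [1.0] * len(dims)
    total = torch.zeros((), device=embeddings_a.device)
    for dim, weight in zip(dims, weights):
        # Truncate both sides of the pair to the first `dim` dimensions
        total = total + weight * base_loss_fn(
            embeddings_a[..., :dim], embeddings_b[..., :dim], labels
        )
    return total

# Toy usage: a cosine-similarity regression loss as the base loss
def cos_sim_mse(a, b, labels):
    sims = F.cosine_similarity(a, b, dim=-1)
    return F.mse_loss(sims, labels)

a, b = torch.randn(8, 768), torch.randn(8, 768)
labels = torch.rand(8)
print(matryoshka_loss(a, b, labels, cos_sim_mse))
```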
In practice, this incentivizes the model to frontload the most important information at the start of an embedding, such that it will be retained if the embedding is truncated.

### In Sentence Transformers

[Sentence Transformers](https://sbert.net) is a commonly used framework to train embedding models, and it recently implemented support for Matryoshka models. Training a Matryoshka embedding model using Sentence Transformers is quite elementary: rather than applying some loss function on only the full-size embeddings, we also apply that same loss function on truncated portions of the embeddings.

For example, if a model has an original embedding dimension of 768, it can now be trained on 768, 512, 256, 128, and 64. Each of these losses will be added together, optionally with some weight:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CoSENTLoss, MatryoshkaLoss

model = SentenceTransformer("microsoft/mpnet-base")

# Wrap the base loss so it is also applied to truncated embeddings at each listed dimensionality
base_loss = CoSENTLoss(model=model)
loss = MatryoshkaLoss(
    model=model,
    loss=base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)

model.fit(
    train_objectives=[(train_dataset, loss)],
    ...,
)
```
Training with `MatryoshkaLoss` does not incur a notable overhead in training time.

References:

* [`MatryoshkaLoss`](https://sbert.net/docs/package_reference/losses.html#matryoshkaloss)
* [`CoSENTLoss`](https://sbert.net/docs/package_reference/losses.html#cosentloss)
* [`SentenceTransformer`](https://sbert.net/docs/package_reference/SentenceTransformer.html)
* [`SentenceTransformer.fit`](https://sbert.net/docs/package_reference/SentenceTransformer.html#sentence_transformers.SentenceTransformer.fit)
* [Matryoshka Embeddings - Training](https://sbert.net/examples/training/matryoshka/README.html#training)

See the following complete scripts as examples of how to apply the `MatryoshkaLoss` in practice:

* **[matryoshka_nli.py](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/matryoshka/matryoshka_nli.py)**: This example uses the MultipleNegativesRankingLoss with MatryoshkaLoss to train a strong embedding model using Natural Language Inference (NLI) data. It is an adaptation of the [NLI](../nli/README) documentation.
* **[matryoshka_nli_reduced_dim.py](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/matryoshka/matryoshka_nli_reduced_dim.py)**: This example uses the MultipleNegativesRankingLoss with MatryoshkaLoss to train a strong embedding model with a small maximum output dimension of 256. It trains using Natural Language Inference (NLI) data, and is an adaptation of the [NLI](../nli/README) documentation.
* **[matryoshka_sts.py](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/matryoshka/matryoshka_sts.py)**: This example uses the CoSENTLoss with MatryoshkaLoss to train an embedding model on the training set of the STSBenchmark dataset. It is an adaptation of the [STS](../sts/README) documentation.
## How do I use 🪆 Matryoshka Embedding models?

### Theoretically

In practice, getting embeddings from a Matryoshka embedding model works the same way as with a normal embedding model. The only difference is that after receiving the embeddings, we can optionally truncate them to a smaller dimensionality. Do note that if the embeddings were normalized, then after truncating they will no longer be, so you may want to re-normalize.
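For instance, here is a minimal sketch of truncating and re-normalizing an embedding with NumPy; the 768 and 256 dimensions are just example sizes, and the random vector stands in for a real model output:

```python
import numpy as np

embedding = np.random.rand(768).astype(np.float32)
embedding /= np.linalg.norm(embedding)              # unit length, as many models output

truncated = embedding[:256]                          # keep only the first 256 dimensions
print(np.linalg.norm(truncated))                     # < 1.0: no longer normalized
truncated = truncated / np.linalg.norm(truncated)    # re-normalize before computing cosine similarities
print(np.linalg.norm(truncated))                     # 1.0 again
```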
After truncating, you can either directly apply them for your use cases or store them such that they can be used later. After all, smaller embeddings in your vector database should result in considerable speedups!

Keep in mind that although processing smaller embeddings for downstream tasks (retrieval, clustering, etc.) will be faster, getting the smaller embeddings from the model is just as fast as getting the larger ones.

### In Sentence Transformers

In Sentence Transformers, you can load a Matryoshka Embedding model just like any other model and run inference with it using [`SentenceTransformer.encode`](https://sbert.net/docs/package_reference/SentenceTransformer.html#sentence_transformers.SentenceTransformer.encode). After getting the embeddings, we can truncate them to our desired size, and we can normalize them if we want.

Let's try to use a model that I trained using [`matryoshka_nli.py`](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/matryoshka/matryoshka_nli.py) with [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("tomaarsen/mpnet-base-nli-matryoshka")

matryoshka_dim = 64
embeddings = model.encode(
    [
        "The weather is so nice!",
        "It's so sunny outside!",
        "He drove to the stadium.",
    ]
)
embeddings = embeddings[..., :matryoshka_dim]  # Shrink the embedding dimensions
print(embeddings.shape)
# => (3, 64)

# Similarity of the first sentence to the other two:
similarities = cos_sim(embeddings[0], embeddings[1:])
print(similarities)
# => tensor([[0.8910, 0.1337]])
```
* Link to the model: [tomaarsen/mpnet-base-nli-matryoshka](https://huggingface.co/tomaarsen/mpnet-base-nli-matryoshka)

Feel free to experiment with different values for `matryoshka_dim` and observe how that affects the similarities. You can do so either by running this code locally, on the cloud such as with [Google Colab](https://colab.research.google.com/#fileId=https%3A//huggingface.co/tomaarsen/mpnet-base-nli-matryoshka/blob/main/inference.ipynb), or by checking out the [demo](#demo).
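For example, a small sketch (reusing the `model` and `cos_sim` from the snippet above) that loops over a few truncation sizes and prints how the similarities change; the dimension list is just an example:

```python
# Assumes `model` and `cos_sim` from the snippet above are already available
sentences = [
    "The weather is so nice!",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
full_embeddings = model.encode(sentences)

for dim in [768, 512, 256, 128, 64]:
    truncated = full_embeddings[..., :dim]
    # Similarity of the first sentence to the other two at this truncation size
    print(dim, cos_sim(truncated[0], truncated[1:]))
```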
References:

* [`SentenceTransformer`](https://sbert.net/docs/package_reference/SentenceTransformer.html)
* [`SentenceTransformer.encode`](https://sbert.net/docs/package_reference/SentenceTransformer.html#sentence_transformers.SentenceTransformer.encode)
* [`util.cos_sim`](https://sbert.net/docs/package_reference/util.html#sentence_transformers.util.cos_sim)
* [Matryoshka Embeddings - Inference](https://sbert.net/examples/training/matryoshka/README.html#inference)

<details><summary><b>Click here to see how to use the Nomic v1.5 Matryoshka Model</b></summary>
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
import torch.nn.functional as F

model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)

matryoshka_dim = 64
embeddings = model.encode(
    [
        "search_query: What is TSNE?",
        "search_document: t-distributed stochastic neighbor embedding (t-SNE) is a statistical method for visualizing high-dimensional data by giving each datapoint a location in a two or three-dimensional map.",
        "search_document: Amelia Mary Earhart was an American aviation pioneer and writer.",
    ],
    convert_to_tensor=True,
)
# Because of its custom architecture, the Nomic team recommends applying Layer Normalization before truncation
embeddings = F.layer_norm(embeddings, normalized_shape=(embeddings.shape[1],))
embeddings = embeddings[..., :matryoshka_dim]  # Shrink the embedding dimensions

similarities = cos_sim(embeddings[0], embeddings[1:])
# => tensor([[0.7154, 0.4468]])
```
* Link to the model: [nomic-ai/nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5)

</details>

## Results

Now that Matryoshka models have been introduced, let's look at the actual performance that we can expect from a Matryoshka embedding model versus a regular embedding model. For this experiment, I have trained two models:

* [tomaarsen/mpnet-base-nli-matryoshka](https://huggingface.co/tomaarsen/mpnet-base-nli-matryoshka): Trained by running [`matryoshka_nli.py`](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/matryoshka/matryoshka_nli.py) with [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base).
* [tomaarsen/mpnet-base-nli](https://huggingface.co/tomaarsen/mpnet-base-nli): Trained by running a modified version of [`matryoshka_nli.py`](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/matryoshka/matryoshka_nli.py) where the training loss is only `MultipleNegativesRankingLoss`, rather than `MatryoshkaLoss` on top of `MultipleNegativesRankingLoss`. I also use [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) as the base model.

Both of these models were trained on the AllNLI dataset, which is a concatenation of the [SNLI](https://huggingface.co/datasets/snli) and [MultiNLI](https://huggingface.co/datasets/multi_nli) datasets. I have evaluated these models on the [STSBenchmark](https://huggingface.co/datasets/mteb/stsbenchmark-sts) test set using multiple different embedding dimensions. The results are plotted in the following figure:
![results](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/matryoshka/results.png)

In the top figure, you can see that the Matryoshka model reaches a higher Spearman similarity than the standard model at all dimensionalities, indicating that the Matryoshka model is superior for this task.

Furthermore, the performance of the Matryoshka model falls off much less quickly than that of the standard model. This is shown clearly in the second figure, which shows the performance at each embedding dimension relative to the maximum performance. **Even at 8.3% of the embedding size, the Matryoshka model preserves 98.37% of the performance**, much higher than the 96.46% preserved by the standard model.

These findings indicate that truncating embeddings produced by a Matryoshka model could 1) significantly speed up downstream tasks such as retrieval and 2) significantly reduce storage space, all without a notable hit in performance.
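If you want to run a comparison like this yourself, here is a rough sketch of one way to do it. It assumes the `mteb/stsbenchmark-sts` test split exposes `sentence1`, `sentence2`, and `score` columns, and the exact numbers will depend on the models and dimensions you pick:

```python
from datasets import load_dataset
from scipy.stats import spearmanr
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer

# Assumed dataset columns: sentence1, sentence2, score (gold similarity labels)
stsb = load_dataset("mteb/stsbenchmark-sts", split="test")
model = SentenceTransformer("tomaarsen/mpnet-base-nli-matryoshka")

emb1 = model.encode(stsb["sentence1"], convert_to_tensor=True)
emb2 = model.encode(stsb["sentence2"], convert_to_tensor=True)

for dim in [768, 512, 256, 128, 64, 32, 16, 8]:
    # Truncate both sides, then measure Spearman correlation with the gold scores
    sims = F.cosine_similarity(emb1[..., :dim], emb2[..., :dim], dim=-1).cpu().numpy()
    corr, _ = spearmanr(sims, stsb["score"])
    print(dim, corr)
```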
## Demo

In this demo, you can dynamically shrink the output dimensions of the [`nomic-ai/nomic-embed-text-v1.5`](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) Matryoshka embedding model and observe how it affects the retrieval performance. All of the embeddings are computed in the browser using [🤗 Transformers.js](https://github.com/xenova/transformers.js).

<iframe
  src="https://xenova-adaptive-retrieval-web.static.hf.space"
  frameborder="0"
  width="100%"
  height="800"
></iframe>
## References

* Kusupati, A., Bhatt, G., Rege, A., Wallingford, M., Sinha, A., Ramanujan, V., ... & Farhadi, A. (2022). Matryoshka Representation Learning. Advances in Neural Information Processing Systems, 35, 30233–30249. https://arxiv.org/abs/2205.13147
* Matryoshka Embeddings — Sentence-Transformers documentation. (n.d.). https://sbert.net/examples/training/matryoshka/README.html
* UKPLab, sentence-transformers — GitHub. (n.d.). https://github.com/UKPLab/sentence-transformers
* Unboxing Nomic Embed v1.5: Resizable Production Embeddings with Matryoshka Representation Learning. (n.d.). https://blog.nomic.ai/posts/nomic-embed-matryoshka
