In this project, we introduce BGE-M3, the first embedding model which supports:

- **Multi-Functionality**: It can simultaneously perform the three common retrieval functionalities of an embedding model: dense retrieval, multi-vector retrieval, and sparse retrieval (see the usage sketch after this list).
- **Multi-Linguality**: It supports more than 100 working languages.
- **Multi-Granularity**: It can process inputs of different granularities, spanning from short sentences to long documents of up to 8192 tokens.
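Below is a minimal usage sketch of the three retrieval modes with the `FlagEmbedding` package; the model name, flags, and output keys follow the public BGE-M3 release, but treat them as assumptions and check the BGE-M3 README for the authoritative interface.

```python
from FlagEmbedding import BGEM3FlagModel

# Load BGE-M3; use_fp16 trades a little precision for faster inference.
model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)

queries = ["What is BGE-M3?"]
passages = ["BGE-M3 is an embedding model supporting dense, sparse and multi-vector retrieval."]

# Request all three representations in a single pass.
q_out = model.encode(queries, return_dense=True, return_sparse=True, return_colbert_vecs=True)
p_out = model.encode(passages, return_dense=True, return_sparse=True, return_colbert_vecs=True)

# Dense score: inner product of the pooled vectors.
dense_score = q_out["dense_vecs"][0] @ p_out["dense_vecs"][0].T

# Sparse (lexical) score: overlap of the learned token weights.
sparse_score = model.compute_lexical_matching_score(
    q_out["lexical_weights"][0], p_out["lexical_weights"][0]
)

# Multi-vector score: ColBERT-style late interaction over token embeddings.
colbert_score = model.colbert_score(q_out["colbert_vecs"][0], p_out["colbert_vecs"][0])

print(dense_score, sparse_score, colbert_score)
```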
We propose a novel self-knowledge distillation approach to improve the performance of each single retrieval mode.
We optimize the batching strategy to enable large batch sizes, which can be used simply when fine-tuning with long texts or large language models.
We also construct a dataset for document retrieval and propose a simple strategy to improve the ability to model long texts.
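To make the self-knowledge distillation idea concrete, here is a conceptual sketch only (not the training code, which will be released later): it assumes the ensembled relevance score of the three retrieval modes acts as the teacher signal for each individual mode, and all tensor names are hypothetical.

```python
import torch
import torch.nn.functional as F

def self_knowledge_distillation_loss(dense_scores, sparse_scores, colbert_scores):
    """Conceptual sketch: each retrieval mode learns from the ensemble of all modes.

    Each *_scores tensor holds query-candidate relevance logits of shape
    (batch, num_candidates); candidate 0 is assumed to be the positive passage.
    """
    all_scores = (dense_scores, sparse_scores, colbert_scores)

    # Standard contrastive (InfoNCE-style) loss for every retrieval mode.
    labels = torch.zeros(dense_scores.size(0), dtype=torch.long)
    contrastive = sum(F.cross_entropy(s, labels) for s in all_scores)

    # Teacher signal: the integrated score of the three modes (no gradient).
    teacher = F.softmax(dense_scores + sparse_scores + colbert_scores, dim=-1).detach()

    # Distill the teacher distribution into each single mode.
    distill = sum(
        F.kl_div(F.log_softmax(s, dim=-1), teacher, reduction="batchmean")
        for s in all_scores
    )
    return contrastive + distill
```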
**The training code and fine-tuning data will be open-sourced in the near future.**
In this project, we introduce Visualized-BGE, which integrates image token embeddings into the BGE Text Embedding framework. Visualized-BGE can be used for various hybrid modal retrieval tasks, such as Multi-Modal Knowledge Retrieval, Composed Image Retrieval, and Knowledge Retrieval with Multi-Modal Queries.
Our model delivers outstanding zero-shot performance across multiple hybrid modal retrieval tasks. It can also serve as a base model for downstream fine-tuning on such tasks.
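As a rough usage sketch, composed image retrieval looks roughly like the snippet below; the `Visualized_BGE` wrapper, its constructor arguments, and the weight/image paths are assumptions taken from the Visualized-BGE README rather than guarantees, so double-check against that README.

```python
import torch
from FlagEmbedding.visual.modeling import Visualized_BGE  # assumed module path

# Placeholder paths: the Visualized-BGE weight must be downloaded separately.
model = Visualized_BGE(
    model_name_bge="BAAI/bge-base-en-v1.5",
    model_weight="./Visualized_base_en_v1.5.pth",
)
model.eval()

with torch.no_grad():
    # Composed image retrieval: the query mixes an image with a text instruction.
    query_emb = model.encode(image="./query.png", text="the same scene, but at night")
    # Candidates can be encoded from an image (or text) alone.
    cand_emb = model.encode(image="./candidate.png")

# Score candidates by the inner product of the embeddings.
print(query_emb @ cand_emb.T)
```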
We extend the context length of Llama-3-8B-Instruct from 8K to 80K via QLoRA fine-tuning. The entire training cycle is highly efficient, taking only 8 hours on a single 8xA800 (80G) GPU machine (with more computing resources, the context length could be extended far beyond 80K). The resulting model exhibits superior performance across a broad range of evaluation tasks, such as NIHS, topic retrieval, and long-context language understanding, while still preserving the original capability over short contexts. The dramatic context extension is mainly attributed to merely 3.5K synthetic training samples generated by GPT-4, which indicates LLMs' inherent (yet largely underestimated) potential to extend their original context length.
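The exact recipe is in the report; the snippet below is only a generic QLoRA fine-tuning sketch built on the `transformers`/`peft`/`bitsandbytes` stack, with the RoPE base and LoRA hyperparameters chosen for illustration rather than copied from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit quantization of the frozen base model (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    quantization_config=bnb_config,
    # Illustrative: enlarging the RoPE base is a common way to stretch the
    # usable context window before fine-tuning on long sequences.
    rope_theta=200_000_000,
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Low-rank adapters on the attention projections; rank and targets are illustrative.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# ...then train on long-context data (e.g., GPT-4 generated samples) with a standard Trainer.
```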
Model merging has been used to improve the performance of a single model.
LM-Cocktail automatically merges fine-tuned models and the base model using a simple function to compute the merging weights.
LM-Cocktail can be used to improve the performance on a target domain without decreasing the general capabilities beyond that domain,
as well as to generate a model for new tasks without fine-tuning.
You can use it to merge LLMs (e.g., Llama) or embedding models.
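A small usage sketch with the `mix_models` helper from the LM_Cocktail package is shown below; the model names and merging weights are purely illustrative.

```python
from LM_Cocktail import mix_models

# Merge a base chat model with a fine-tuned variant using fixed weights.
# model_type is "decoder" for LLMs; use "encoder" for embedding models.
merged = mix_models(
    model_names_or_paths=["meta-llama/Llama-2-7b-chat-hf", "path/to/your-finetuned-llama"],
    model_type="decoder",
    weights=[0.7, 0.3],        # illustrative weights that sum to 1
    output_path="./mixed_model",
)
```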
For more details, please refer to our report: [LM-Cocktail](https://arxiv.org/abs/2311.13534) and [code](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail).
LLM Embedder is fine-tuned based on the feedback from LLMs.
It supports the retrieval augmentation needs of large language models, including knowledge retrieval, memory retrieval, example retrieval, and tool retrieval.
It is fine-tuned over 6 tasks: Question Answering, Conversational Search, Long Conversation,
Long-Range Language Modeling, In-Context Learning, and Tool Learning.
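A brief usage sketch is given below; the `LLMEmbedder` class, the task tag, and the `encode_queries`/`encode_keys` methods are assumptions drawn from the llm_embedder README, so check that README for the exact interface.

```python
from FlagEmbedding import LLMEmbedder  # assumed helper class from the llm_embedder module

# Task-specific instructions are attached internally based on the task tag.
model = LLMEmbedder("BAAI/llm-embedder", use_fp16=False)

task = "qa"  # assumed tag for the knowledge-retrieval (Question Answering) task
queries = ["Who released the BGE-M3 embedding model?"]
keys = ["BGE-M3 is an embedding model released by BAAI."]

query_embeddings = model.encode_queries(queries, task=task)
key_embeddings = model.encode_keys(keys, task=task)

# Rank keys for each query by inner-product similarity.
similarity = query_embeddings @ key_embeddings.T
print(similarity)
```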
For more details, please refer to the [report](https://arxiv.org/abs/2310.07554) and [./FlagEmbedding/llm_embedder/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker).
We provide a new version of the cross-encoder that supports more languages and longer input lengths. The data format is similar to that of our embedding models, but it now includes prompt data for fine-tuning and inference. You can perform inference using specific intermediate layers or all layers, and you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_reranker#fine-tune).
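As a quick sketch of the two inference styles, the snippet below assumes the `FlagLLMReranker` and `LayerWiseFlagLLMReranker` classes and the `cutoff_layers` argument from the llm_reranker README; the model names and layer cutoff are illustrative.

```python
from FlagEmbedding import FlagLLMReranker, LayerWiseFlagLLMReranker

pair = ["what is panda?", "The giant panda is a bear species endemic to China."]

# Inference with all layers of the LLM-based cross-encoder.
reranker = FlagLLMReranker("BAAI/bge-reranker-v2-gemma", use_fp16=True)
print(reranker.compute_score(pair))

# Layer-wise inference: score from a specific intermediate layer for a speed/quality trade-off.
layerwise_reranker = LayerWiseFlagLLMReranker("BAAI/bge-reranker-v2-minicpm-layerwise", use_fp16=True)
print(layerwise_reranker.compute_score(pair, cutoff_layers=[28]))
```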
For more details, please refer to [./FlagEmbedding/llm_reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_reranker).
### Contributors:
We thank all our contributors for their efforts and warmly welcome new members to join us!