|[public-data](https://huggingface.co/datasets/cfli/bge-e5data)| Public data identical to [e5-mistral](https://huggingface.co/intfloat/e5-mistral-7b-instruct) |
|[full-data](https://huggingface.co/datasets/cfli/bge-full-data)| The full dataset we used for training |
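As a minimal sketch of how this data might be consumed, the snippet below formats a query in the e5-mistral instruction style (`Instruct: {task}\nQuery: {query}`), which the table says the public data matches; the task description and loading call shown in comments are illustrative assumptions, not part of this repository's documented API.

```python
def format_query(task_description: str, query: str) -> str:
    """Prepend a task instruction to a query, e5-mistral style."""
    return f"Instruct: {task_description}\nQuery: {query}"

# Loading the full training data would look roughly like this
# (requires the `datasets` package and network access):
# from datasets import load_dataset
# data = load_dataset("cfli/bge-full-data")

# Hypothetical task description used only for illustration:
print(format_query("Given a web search query, retrieve relevant passages",
                   "what is dense retrieval"))
```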
## Usage
If you find this repository useful, please give us a star ⭐.

To cite our work:
```
@misc{li2024makingtextembeddersfewshot,
      title={Making Text Embedders Few-Shot Learners},
      author={Chaofan Li and MingHao Qin and Shitao Xiao and Jianlyu Chen and Kun Luo and Yingxia Shao and Defu Lian and Zheng Liu},
      year={2024},
}
```