Commit ba9845a ("update readme"), 1 parent 456899a

1 file changed: examples/README.md (21 additions & 12 deletions)
@@ -1,30 +1,42 @@
-# 1. Introduction
+# Examples
+
+- [1. Introduction](#1-introduction)
+- [2. Installation](#2-installation)
+- [3. Inference](#3-inference)
+- [4. Finetune](#4-finetune)
+- [5. Evaluation](#5-evaluation)
+
+## 1. Introduction
 
 In this example, we show how to run **inference**, **finetuning**, and **evaluation** with the baai-general-embedding.
 
-# 2. Installation
+## 2. Installation
 
 * **with pip**
+
 ```shell
 pip install -U FlagEmbedding
 ```
 
 * **from source**
+
 ```shell
 git clone https://github.com/FlagOpen/FlagEmbedding.git
 cd FlagEmbedding
 pip install .
 ```
+
 For development, install as editable:
+
 ```shell
 pip install -e .
 ```

-# 3. Inference
+## 3. Inference
 
 We provide inference code for two types of models: the **embedder** and the **reranker**. These can be loaded using `FlagAutoModel` and `FlagAutoReranker`, respectively. For more detailed instructions on their use, please refer to the documentation for the [embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/inference/embedder) and [reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/inference/reranker).
 
-## 1. Embedder
+### 1. Embedder
 
 ```python
 from FlagEmbedding import FlagAutoModel
@@ -49,7 +61,7 @@ scores = q_embeddings @ p_embeddings.T
 print(scores)
 ```
 
-## 2. Reranker
+### 2. Reranker
 
 ```python
 from FlagEmbedding import FlagAutoReranker
@@ -65,7 +77,7 @@ scores = model.compute_score(pairs)
 print(scores)
 ```
 
-# 4. Finetune
+## 4. Finetune
 
 We support fine-tuning a variety of BGE series models, including `bge-large-en-v1.5`, `bge-m3`, `bge-en-icl`, `bge-multilingual-gemma2`, `bge-reranker-v2-m3`, `bge-reranker-v2-gemma`, and `bge-reranker-v2-minicpm-layerwise`, among others. As examples, we use the basic models `bge-large-en-v1.5` and `bge-reranker-large`. For more details, please refer to the [embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune/embedder) and [reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune/reranker) sections.
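The embedder snippet above ends by scoring every query against every passage with a matrix product (`scores = q_embeddings @ p_embeddings.T`). As a self-contained sketch of why that works, here is the same computation in plain Python, with mock hand-normalized vectors standing in for real `encode` output (the numbers are illustrative only):

```python
from math import sqrt

def normalize(v):
    """Scale a vector to unit length so dot products equal cosine similarity."""
    n = sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Mock embeddings standing in for the model's encoded queries and passages.
q_embeddings = [normalize([0.9, 0.1, 0.0]),   # query 1
                normalize([0.0, 0.8, 0.6])]   # query 2
p_embeddings = [normalize([1.0, 0.0, 0.0]),   # passage A
                normalize([0.0, 1.0, 1.0])]   # passage B

# Equivalent of `scores = q_embeddings @ p_embeddings.T`:
# scores[i][j] is the cosine similarity of query i and passage j.
scores = [[dot(q, p) for p in p_embeddings] for q in q_embeddings]
print(scores)
```

With unit-length embeddings the matrix product is exactly cosine similarity, which is why the snippet can rank passages for each query directly from `scores`.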
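The reranker snippet above prints one relevance score per (query, passage) pair; in a retrieval pipeline those scores are used to reorder a candidate list. A minimal sketch of that step, with made-up scores standing in for the output of `model.compute_score(pairs)` (the passages and numbers here are invented for illustration):

```python
# Candidate passages for one query, paired up as the reranker expects.
query = "what is a panda?"
passages = ["The giant panda is a bear native to China.",
            "Pandas is a Python data-analysis library.",
            "Bamboo makes up most of a panda's diet."]
pairs = [(query, p) for p in passages]

# Stand-in for `scores = model.compute_score(pairs)`: higher = more relevant.
scores = [7.2, -3.1, 4.8]

# Rerank: sort the candidates by score, best first.
reranked = [p for _, p in sorted(zip(scores, passages), reverse=True)]
print(reranked[0])
```

The reranker is typically applied only to a short candidate list produced by the embedder, since scoring every pair with a cross-encoder is far more expensive than a single matrix product.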

@@ -74,7 +86,7 @@ pip install deepspeed
 pip install flash-attn --no-build-isolation
 ```
 
-## 1. Embedder
+### 1. Embedder
 
 ```shell
 torchrun --nproc_per_node 2 \
@@ -109,7 +121,7 @@ torchrun --nproc_per_node 2 \
 --kd_loss_type kl_div
 ```
 
-## 2. Reranker
+### 2. Reranker
 
 ```shell
 torchrun --nproc_per_node 2 \
@@ -139,16 +151,13 @@ torchrun --nproc_per_node 2 \
 --save_steps 1000
 ```
 
-# 5. Evaluation
+## 5. Evaluation
 
 We support evaluations on [MTEB](https://github.com/embeddings-benchmark/mteb), [BEIR](https://github.com/beir-cellar/beir), [MSMARCO](https://microsoft.github.io/msmarco/), [MIRACL](https://github.com/project-miracl/miracl), [MLDR](https://huggingface.co/datasets/Shitao/MLDR), [MKQA](https://github.com/apple/ml-mkqa), [AIR-Bench](https://github.com/AIR-Bench/AIR-Bench), and custom datasets. Below is an example of evaluating MSMARCO passages. For more details, please refer to the [evaluation examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/evaluation).
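The MSMARCO evaluation below reports ranking metrics such as MRR@10. As a self-contained illustration of what that metric measures, here is a plain-Python sketch; the query IDs, document IDs, and relevance judgments are invented for the example, and the real pipeline computes this via `pytrec_eval`:

```python
def mrr_at_k(run, qrels, k=10):
    """Mean reciprocal rank: average over queries of 1/rank of the
    first relevant document within the top-k retrieved results."""
    total = 0.0
    for qid, ranked_docs in run.items():
        relevant = qrels.get(qid, set())
        for rank, doc_id in enumerate(ranked_docs[:k], start=1):
            if doc_id in relevant:
                total += 1.0 / rank
                break  # only the first relevant hit counts
    return total / len(run)

# Invented example: two queries with their ranked retrieval results.
run = {"q1": ["d3", "d7", "d1"],   # first relevant hit at rank 2
       "q2": ["d5", "d2", "d9"]}   # first relevant hit at rank 1
qrels = {"q1": {"d7"}, "q2": {"d5", "d9"}}

print(mrr_at_k(run, qrels))  # (1/2 + 1/1) / 2 = 0.75
```

A query whose top-k list contains no relevant document contributes 0, so MRR@10 rewards retrievers that surface a relevant passage as early as possible.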

 ```shell
 pip install pytrec_eval
 pip install https://github.com/kyamagu/faiss-wheels/releases/download/v1.7.3/faiss_gpu-1.7.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
-```
-
-```shell
 python -m FlagEmbedding.evaluation.msmarco \
 --eval_name msmarco \
 --dataset_dir ./data/msmarco \
