Commit 5c92602: Update README.md (parent e3699b8)
File changed: FlagEmbedding/llm_reranker/README.md (15 additions, 0 deletions)
```python
from FlagEmbedding.llm_reranker.merge import merge_layerwise_finetuned_llm

merge_layerwise_finetuned_llm('BAAI/bge-reranker-v2-minicpm-layerwise', 'lora_llm_output_path', 'merged_model_output_paths')
```

Then you can replace the `config.json` in `merged_model_output_paths` with the `config.json` from [BAAI/bge-reranker-v2-minicpm-layerwise](https://huggingface.co/BAAI/bge-reranker-v2-minicpm-layerwise/blob/main/config.json).
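The replacement step above can be sketched in a few lines. This is a minimal sketch, assuming you have a local copy of the base model's files; `replace_config` and its path arguments are illustrative helpers, not part of FlagEmbedding:

```python
# Minimal sketch of the config.json replacement step. Paths are illustrative:
# point base_model_dir at a local copy of BAAI/bge-reranker-v2-minicpm-layerwise.
import shutil
from pathlib import Path

def replace_config(merged_dir: str, base_model_dir: str) -> None:
    """Overwrite merged_dir/config.json with the base model's config.json."""
    src = Path(base_model_dir) / "config.json"
    dst = Path(merged_dir) / "config.json"
    shutil.copy(src, dst)
```

Note that `shutil.copy` silently overwrites an existing destination file, which is the behavior wanted here: the merged model's `config.json` is replaced by the base model's.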
### Load the LLM-based layerwise reranker locally

If you have downloaded `bge-reranker-v2-minicpm-layerwise`, you can load it locally as follows:

1. Make sure `configuration_minicpm_reranker.py` and `modeling_minicpm_reranker.py` are in `/path/bge-reranker-v2-minicpm-layerwise`.
2. Modify the following part of `config.json`:

```
"auto_map": {
    "AutoConfig": "configuration_minicpm_reranker.LayerWiseMiniCPMConfig",
    "AutoModel": "modeling_minicpm_reranker.LayerWiseMiniCPMModel",
    "AutoModelForCausalLM": "modeling_minicpm_reranker.LayerWiseMiniCPMForCausalLM"
},
```
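Step 2 above can also be applied programmatically, which is handy in a setup script. A minimal sketch, assuming `config.json` sits at the top level of the local model directory; `set_auto_map` is an illustrative helper, not a FlagEmbedding API:

```python
# Minimal sketch of step 2: rewrite the auto_map section of config.json so
# that the auto classes resolve to the local reranker modules.
# set_auto_map is an illustrative helper, not part of FlagEmbedding.
import json
from pathlib import Path

def set_auto_map(model_dir: str) -> dict:
    """Point config.json's auto_map at the local reranker modules and return the config."""
    cfg_path = Path(model_dir) / "config.json"
    cfg = json.loads(cfg_path.read_text())
    cfg["auto_map"] = {
        "AutoConfig": "configuration_minicpm_reranker.LayerWiseMiniCPMConfig",
        "AutoModel": "modeling_minicpm_reranker.LayerWiseMiniCPMModel",
        "AutoModelForCausalLM": "modeling_minicpm_reranker.LayerWiseMiniCPMForCausalLM",
    }
    cfg_path.write_text(json.dumps(cfg, indent=2))
    return cfg
```

This rewrites only the `auto_map` key and leaves the rest of the config untouched.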
## Evaluation

- llama-index.
