
Commit 1470fc6 (parent 681f615)

Update README.md

1 file changed, 3 additions & 0 deletions: Long_LLM/longllm_qlora/README.md
````diff
@@ -62,6 +62,9 @@ For any path specified for `train_data` and `eval_data`: if it is prefixed with
 
 
 # Training
+
+**NOTE: `unsloth` no longer supports DDP training (it did as of May 2024), so the training script won't work. You're encouraged to open a feature request in the [unsloth repo](https://github.com/unslothai/unsloth). Alternatively, you can try another framework for efficient tuning, such as Megatron-LM. More details can be found in [this issue](https://github.com/FlagOpen/FlagEmbedding/issues/919).**
+
 ```bash
 export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
 
````
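For context on the snippet in the diff: `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` opts PyTorch's CUDA caching allocator into expandable segments, which can reduce fragmentation-related out-of-memory errors during long-context training. A minimal sketch of setting and verifying the variable before a launch (the commented-out launch command is a hypothetical placeholder, not part of this commit):

```shell
#!/bin/sh
# Opt PyTorch's CUDA caching allocator into expandable segments
# (a real PyTorch environment variable) to reduce fragmentation OOMs.
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True

# Sanity-check that the value is visible to child processes.
echo "allocator config: $PYTORCH_CUDA_ALLOC_CONF"

# Hypothetical launch; substitute the repo's actual training entry point.
# torchrun --nproc_per_node=8 train.py
```

Exporting the variable in the same shell that launches training ensures every spawned worker process inherits it.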
