
Commit 5df01ae

Merge pull request #266 from Canner/feature/huggingface-text-generation
Feature: Support HuggingFace Text Generation including meta-llama2 model
2 parents 180f8a6 + 5e22e51 commit 5df01ae

13 files changed: 913 additions & 276 deletions

packages/doc/docs/extensions/huggingface/huggingface-table-question-answering.mdx

Lines changed: 7 additions & 6 deletions
@@ -67,12 +67,13 @@ SELECT {{ products.value() | huggingface_table_question_answering(query=question

 Please check [Table Question Answering](https://huggingface.co/docs/api-inference/detailed_parameters#table-question-answering-task) for further information.

-| Name | Required | Default | Description |
-| -------------- | -------- | ------- | ----------- |
-| query | Y | | The query in plain text that you want to ask the table. |
-| model | N | google/tapas-base-finetuned-wtq | The model id of a pretrained model hosted inside a model repo on huggingface.co. See: https://huggingface.co/models?pipeline_tag=table-question-answering |
-| use_cache | N | true | There is a cache layer on the inference API to speedup requests we have already seen |
-| wait_for_model | N | false | If the model is not ready, wait for it instead of receiving 503. It limits the number of requests required to get your inference done |
+| Name | Required | Default | Description |
+| -------------- | -------- | ------- | ----------- |
+| query | Y | | The query in plain text that you want to ask the table. |
+| endpoint | N | | The inference endpoint URL; when `endpoint` is used, it replaces the default value of `model`. |
+| model | N | google/tapas-base-finetuned-wtq | The model id of a pre-trained model hosted inside a model repo on huggingface.co. See: https://huggingface.co/models?pipeline_tag=table-question-answering |
+| use_cache | N | true | There is a cache layer on the inference API to speed up requests we have already seen. |
+| wait_for_model | N | false | If the model is not ready, wait for it instead of receiving a 503 error. It limits the number of requests required to get your inference done. |
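
For instance, a minimal sketch of the new `endpoint` argument (the URL is a placeholder for your own deployed inference endpoint, and the query string is illustrative; `products` is the req from the hunk context above):

```sql
SELECT {{ products.value() | huggingface_table_question_answering(
    query="How many products are listed?",
    endpoint='xxx.yyy.zzz.huggingface.cloud'
) }} as result
```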
## Examples
packages/doc/docs/extensions/huggingface/huggingface-text-generation.mdx

Lines changed: 98 additions & 0 deletions
@@ -0,0 +1,98 @@

# Text Generation

[Text Generation](https://huggingface.co/docs/api-inference/detailed_parameters#text-generation-task) is one of the Natural Language Processing tasks supported by Hugging Face.

## Using the `huggingface_text_generation` filter

The result returned by `huggingface_text_generation` is a string.

:::info
The default **Text Generation** model is **gpt2**. If you would like to use the [Meta LLama2](https://huggingface.co/meta-llama) models, there are two ways to do so:

1. Subscribe to the [Pro Account](https://huggingface.co/pricing#pro).
   - Set the Meta LLama2 model using the `model` keyword argument in `huggingface_text_generation`, e.g. `meta-llama/Llama-2-13b-chat-hf`.

2. Use an [Inference Endpoint](https://huggingface.co/inference-endpoints).
   - Select one of the [Meta LLama2](https://huggingface.co/meta-llama) models and deploy it to an [Inference Endpoint](https://huggingface.co/inference-endpoints).
   - Set the endpoint URL using the `endpoint` keyword argument in `huggingface_text_generation`.
:::

**Sample 1 - Subscribe to the [Pro Account](https://huggingface.co/pricing#pro)**:

```sql
{% set data = [
  {
    "rank": 1,
    "institution": "Massachusetts Institute of Technology (MIT)",
    "location code": "US",
    "location": "United States"
  },
  {
    "rank": 2,
    "institution": "University of Cambridge",
    "location code": "UK",
    "location": "United Kingdom"
  },
  {
    "rank": 3,
    "institution": "Stanford University",
    "location code": "US",
    "location": "United States"
  }
  -- other universities.....
] %}

SELECT {{ data | huggingface_text_generation(query="Which university is the top-ranked university?", model="meta-llama/Llama-2-13b-chat-hf") }} as result
```

**Sample 1 - Response**:

```json
[
  {
    "result": "Answer: Based on the provided list, the top-ranked university is Massachusetts Institute of Technology (MIT) with a rank of 1."
  }
]
```

**Sample 2 - Using an [Inference Endpoint](https://huggingface.co/inference-endpoints)**:

```sql
{% req universities %}
SELECT rank, institution, "location code", "location" FROM read_csv_auto('2023-QS-World-University-Rankings.csv')
{% endreq %}

SELECT {{ universities.value() | huggingface_text_generation(query="Which university located in the UK is ranked at the top of the list?", endpoint='xxx.yyy.zzz.huggingface.cloud') }} as result
```

**Sample 2 - Response**:

```json
[
  {
    "result": "Answer: Based on the list provided, the top-ranked university in the UK is the University of Cambridge, which is ranked at number 2."
  }
]
```

### Arguments

Some default values have been changed, so they may differ from the [Text Generation](https://huggingface.co/docs/api-inference/detailed_parameters#text-generation-task) defaults.

| Name | Required | Default | Description |
| -------------------- | -------- | ------- | ----------- |
| query | Y | | The query in plain text that you want to ask about the data. |
| endpoint | N | | The inference endpoint URL; when `endpoint` is used, it replaces the default value of `model`. |
| model | N | gpt2 | The model id of a pre-trained model hosted inside a model repo on huggingface.co. See: https://huggingface.co/models?pipeline_tag=text-generation |
| top_k | N | | Integer value to define the top tokens considered within the sample operation to create new text. |
| top_p | N | | Float value to define the tokens that are within the sample operation of text generation. Add tokens in the sample, from most probable to least probable, until the sum of the probabilities is greater than top_p. |
| temperature | N | 0.1 | Range: (0.0 - 100.0). The temperature of the sampling operation. 1 means regular sampling, 0 means always take the highest score, and 100.0 gets closer to uniform probability. |
| repetition_penalty | N | | Range: (0.0 - 100.0). The more a token is used within generation, the more it is penalized, so it is less likely to be picked in successive generation passes. |
| max_new_tokens | N | 250 | The number of new tokens to be generated; this does not include the input length. It is an estimate of the size of the generated text you want. Each new token slows down the request, so look for a balance between response time and the length of the text generated. |
| max_time | N | | Range: (0-120.0). The maximum amount of time in seconds that the query should take. The network can cause some overhead, so this is a soft limit. Use it in combination with max_new_tokens for best results. |
| return_full_text | N | false | If set to false, the returned results will not contain the original query, making it easier for prompting. |
| num_return_sequences | N | 1 | The number of propositions you want returned. |
| do_sample | N | | Whether or not to use sampling; use greedy decoding otherwise. |
| use_cache | N | true | There is a cache layer on the inference API to speed up requests we have already seen. |
| wait_for_model | N | false | If the model is not ready, wait for it instead of receiving a 503 error. It limits the number of requests required to get your inference done. |
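
As a rough illustration, a minimal sketch combining several of these arguments in one call (the query string and argument values are placeholders, not recommendations; `universities` is the req defined in Sample 2):

```sql
SELECT {{ universities.value() | huggingface_text_generation(
    query="Summarize this ranking list in one sentence.",
    model="gpt2",
    temperature=0.1,
    max_new_tokens=100,
    wait_for_model=true
) }} as result
```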

packages/doc/sidebars.js

Lines changed: 4 additions & 0 deletions
@@ -176,6 +176,10 @@ const sidebars = {
       type: 'doc',
       id: 'extensions/huggingface/huggingface-table-question-answering',
     },
+    {
+      type: 'doc',
+      id: 'extensions/huggingface/huggingface-text-generation',
+    },
   ]
 },
 // {
