Commit 4d8cdf5

fix: use max_completion_tokens for reasoning models in OpenAI engine
Newer OpenAI models (the o1, o3, o4, and gpt-5 series) reject the max_tokens parameter and require max_completion_tokens instead. These reasoning models also do not support the temperature and top_p sampling parameters. Conditionally set the correct token parameter and omit the unsupported sampling parameters based on the model name.

Fixes #529

Signed-off-by: majiayu000 <1835304752@qq.com>
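The model-name check the commit describes can be sketched as a small predicate; the helper name below is hypothetical, but the prefix regex mirrors the one introduced in the diff:

```typescript
// Hypothetical helper sketching the commit's model detection.
// Matches o1/o3/o4-style reasoning models and the gpt-5 series by name prefix.
const isReasoningModel = (model: string): boolean =>
  /^(o[1-9]|gpt-5)/.test(model);

// Reasoning models: send max_completion_tokens, omit temperature/top_p.
console.log(isReasoningModel("o3-mini")); // true
console.log(isReasoningModel("gpt-5"));   // true
// Older chat models keep max_tokens and the sampling parameters.
console.log(isReasoningModel("gpt-4o"));  // false (starts with "gpt-4", not "gpt-5")
```

Note that anchoring the regex at the start is what keeps gpt-4o from matching the `o[1-9]` branch.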
1 parent 40182f2 commit 4d8cdf5

File tree

1 file changed: +10 −4 lines changed


src/engine/openAi.ts

Lines changed: 10 additions & 4 deletions
```diff
@@ -36,12 +36,18 @@ export class OpenAiEngine implements AiEngine {
   public generateCommitMessage = async (
     messages: Array<OpenAI.Chat.Completions.ChatCompletionMessageParam>
   ): Promise<string | null> => {
-    const params = {
+    const isReasoningModel = /^(o[1-9]|gpt-5)/.test(this.config.model);
+
+    const params: Record<string, unknown> = {
       model: this.config.model,
       messages,
-      temperature: 0,
-      top_p: 0.1,
-      max_tokens: this.config.maxTokensOutput
+      ...(isReasoningModel
+        ? { max_completion_tokens: this.config.maxTokensOutput }
+        : {
+            temperature: 0,
+            top_p: 0.1,
+            max_tokens: this.config.maxTokensOutput
+          })
     };

     try {
```
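The conditional spread in the diff can be exercised standalone; a minimal sketch, with a hypothetical config object standing in for this.config:

```typescript
// Hypothetical config standing in for this.config in the engine.
const config = { model: "o3-mini", maxTokensOutput: 500 };

// Same prefix check as the diff introduces.
const isReasoningModel = /^(o[1-9]|gpt-5)/.test(config.model);

// The conditional spread picks exactly one parameter set:
// reasoning models get max_completion_tokens and no sampling parameters;
// older models get temperature, top_p, and max_tokens.
const params: Record<string, unknown> = {
  model: config.model,
  messages: [],
  ...(isReasoningModel
    ? { max_completion_tokens: config.maxTokensOutput }
    : { temperature: 0, top_p: 0.1, max_tokens: config.maxTokensOutput })
};

console.log("max_completion_tokens" in params); // true for o3-mini
console.log("temperature" in params);           // false: sampling params omitted
```

Typing params as Record<string, unknown> (rather than the strict SDK request type) is what lets the object carry either token parameter without a compile error.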
