Commit dc7f7f6

fix: use max_completion_tokens for reasoning models in OpenAI engine
Newer OpenAI models (o1, o3, o4, gpt-5 series) reject the max_tokens parameter and require max_completion_tokens instead. These reasoning models also do not support the temperature and top_p parameters. Conditionally set the correct token parameter and omit unsupported sampling parameters based on the model name.

Fixes #529

Signed-off-by: majiayu000 <1835304752@qq.com>
1 parent db8a22b commit dc7f7f6

File tree

1 file changed: +10 -4 lines changed


src/engine/openAi.ts

Lines changed: 10 additions & 4 deletions
@@ -43,12 +43,18 @@ export class OpenAiEngine implements AiEngine {
   public generateCommitMessage = async (
     messages: Array<OpenAI.Chat.Completions.ChatCompletionMessageParam>
   ): Promise<string | null> => {
-    const params = {
+    const isReasoningModel = /^(o[1-9]|gpt-5)/.test(this.config.model);
+
+    const params: Record<string, unknown> = {
       model: this.config.model,
       messages,
-      temperature: 0,
-      top_p: 0.1,
-      max_tokens: this.config.maxTokensOutput
+      ...(isReasoningModel
+        ? { max_completion_tokens: this.config.maxTokensOutput }
+        : {
+            temperature: 0,
+            top_p: 0.1,
+            max_tokens: this.config.maxTokensOutput
+          })
     };

     try {
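The conditional parameter construction in the diff can be exercised in isolation from the OpenAI SDK. A minimal sketch follows; the `EngineConfig` interface and the `buildParams` helper are illustrative names, not part of the actual patch, but the regex and the spread logic mirror the diff above:

```typescript
// Assumed minimal shape of the engine config used in the diff.
interface EngineConfig {
  model: string;
  maxTokensOutput: number;
}

// Reasoning models (o1/o3/o4, gpt-5 series) reject max_tokens, temperature,
// and top_p; they require max_completion_tokens instead.
const isReasoningModel = (model: string): boolean =>
  /^(o[1-9]|gpt-5)/.test(model);

// Build the request parameters, choosing the token limit key and omitting
// sampling parameters for reasoning models.
const buildParams = (config: EngineConfig): Record<string, unknown> => ({
  model: config.model,
  ...(isReasoningModel(config.model)
    ? { max_completion_tokens: config.maxTokensOutput }
    : {
        temperature: 0,
        top_p: 0.1,
        max_tokens: config.maxTokensOutput
      })
});

console.log(JSON.stringify(buildParams({ model: 'o3-mini', maxTokensOutput: 500 })));
// {"model":"o3-mini","max_completion_tokens":500}
console.log(JSON.stringify(buildParams({ model: 'gpt-4o', maxTokensOutput: 500 })));
// {"model":"gpt-4o","temperature":0,"top_p":0.1,"max_tokens":500}
```

One caveat of the `^(o[1-9]|gpt-5)` prefix match: any future model whose name happens to start with `o` followed by a digit would also be routed down the reasoning branch, so the regex trades precision for forward compatibility with new models in those series.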
