fix: use max_completion_tokens for reasoning models (o1/o3/o4/gpt-5) #544
Merged
di-sukharev merged 4 commits into di-sukharev:master on Mar 31, 2026
Conversation
Owner
@majiayu000 thank you for the contribution, please fix the tests and I'll merge it 🙏
Contributor
Author
Fixed — two issues: chalk was missing from jest transformIgnorePatterns (causing an ESM import error), and gemini.test.ts had a wrong mock path (../src -> ../../src). All 4 unit test suites pass now.
Repository owner deleted a comment from arvkvk7-crypto on Mar 29, 2026
Owner
@majiayu000 one test is still failing at https://github.com/di-sukharev/opencommit/actions/runs/23655960709/job/69058677428?pr=544 |
Newer OpenAI models (o1, o3, o4, gpt-5 series) reject the max_tokens parameter and require max_completion_tokens instead. These reasoning models also do not support temperature and top_p parameters. Conditionally set the correct token parameter and omit unsupported sampling parameters based on the model name. Fixes di-sukharev#529 Signed-off-by: majiayu000 <1835304752@qq.com>
Remove Record<string, unknown> type annotation to let TypeScript infer the params object type, preserving type checking on all properties. Cast to ChatCompletionCreateParamsNonStreaming at the create() call site to accommodate the SDK's missing max_completion_tokens type. Add unit test for reasoning model detection regex. Signed-off-by: majiayu000 <1835304752@qq.com>
- Add chalk to jest transformIgnorePatterns so ESM chalk import works - Fix wrong mock path in gemini.test.ts (../src -> ../../src) Signed-off-by: majiayu000 <1835304752@qq.com>
…ier formatting Signed-off-by: majiayu000 <1835304752@qq.com>
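The conditional parameter logic described in the commits above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: buildParams and the ChatParams interface are hypothetical stand-ins, and the non-reasoning sampling defaults are assumptions. In the real change the params object's type is inferred and cast to ChatCompletionCreateParamsNonStreaming only at the create() call site.

```typescript
// Pattern from the PR description; matches o1/o3/o4/gpt-5 series model names.
const REASONING_MODEL_PATTERN = /^(o[1-9]|gpt-5)/;

// Simplified stand-in for the SDK's chat completion params type.
interface ChatParams {
  model: string;
  messages: { role: string; content: string }[];
  max_tokens?: number;
  max_completion_tokens?: number;
  temperature?: number;
  top_p?: number;
}

function buildParams(model: string, maxTokens: number): ChatParams {
  const params: ChatParams = { model, messages: [] };

  if (REASONING_MODEL_PATTERN.test(model)) {
    // Reasoning models reject max_tokens and do not support
    // temperature/top_p, so only max_completion_tokens is set.
    params.max_completion_tokens = maxTokens;
  } else {
    // Older models keep the original parameters (values are illustrative).
    params.max_tokens = maxTokens;
    params.temperature = 0;
    params.top_p = 0.1;
  }

  return params;
}
```

With this shape, a call like buildParams('o3-mini', 500) produces an object with max_completion_tokens and no sampling parameters, while buildParams('gpt-4o', 500) keeps max_tokens, temperature, and top_p.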
Force-pushed from d4ecc49 to f74ba2d
Contributor
Author
@di-sukharev Rebased on master and fixed all CI failures. All three should be green now 🙏
Owner
thank you @majiayu000

Fixes #529
Problem
Newer OpenAI models (o1, o3, o4, gpt-5 series) reject the max_tokens parameter and require max_completion_tokens instead. These reasoning models also don't support the temperature and top_p parameters.
Changes
In src/engine/openAi.ts, added model detection (/^(o[1-9]|gpt-5)/) to conditionally:
- use max_completion_tokens instead of max_tokens for reasoning models
- omit temperature and top_p for reasoning models (they don't support these)
The detection pattern is consistent with the existing pattern in src/utils/modelCache.ts.
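As a quick sanity check on the detection pattern itself, the regex from the PR description classifies model names as follows (the specific model-name strings are illustrative examples, not an exhaustive list):

```typescript
// Detection regex quoted in the PR description.
const REASONING_MODEL_PATTERN = /^(o[1-9]|gpt-5)/;

// Names expected to be treated as reasoning models.
const reasoningModels = ['o1', 'o1-preview', 'o3-mini', 'o4-mini', 'gpt-5'];

// Names expected to keep the classic max_tokens/temperature/top_p params.
const classicModels = ['gpt-4o', 'gpt-4.1', 'gpt-3.5-turbo'];

const allReasoningMatch = reasoningModels.every((m) =>
  REASONING_MODEL_PATTERN.test(m)
);
const noClassicMatch = classicModels.every(
  (m) => !REASONING_MODEL_PATTERN.test(m)
);

console.log(allReasoningMatch, noClassicMatch); // true true
```

Note that anchoring at the start of the string is what keeps gpt-4o from matching: the "o" in its suffix never reaches the o[1-9] alternative.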