Context
I am using @copilot-kit/sdk, and I want to configure Ollama as my LLM provider.
The Issue
I would like to know whether it is possible to configure a `reasoning_effort` parameter (`low`, `medium`, `high`) through the SDK when using the Ollama provider. Ollama's own `think` parameter accepts only two values (`true`/`false`).
Currently, I cannot find a clear way to pass this specific parameter in the provider configuration or the request options.
Proposed / Asked behavior
Is there an existing way to pass provider-specific parameters such as `reasoning_effort` to Ollama? If not, are there plans to support this, to allow finer control over "thinking" tokens?
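For reference, here is a minimal sketch of the mapping I have in mind. It assumes Ollama's `/api/chat` request body with its boolean `think` field; the helper name, the `ReasoningEffort` type, and the rule of collapsing any effort level to `think: true` are my own illustration, not part of either API:

```typescript
// Hypothetical helper: map an OpenAI-style reasoning_effort value onto
// Ollama's boolean `think` flag (Ollama's /api/chat accepts only true/false).
type ReasoningEffort = "low" | "medium" | "high";

function buildOllamaChatBody(
  model: string,
  messages: { role: string; content: string }[],
  effort?: ReasoningEffort,
): Record<string, unknown> {
  return {
    model,
    messages,
    // Ollama has no graded effort levels, so any requested effort
    // degrades to simply enabling thinking; omit the field when unset.
    ...(effort !== undefined ? { think: true } : {}),
    stream: false,
  };
}

const body = buildOllamaChatBody(
  "qwen3",
  [{ role: "user", content: "Hello" }],
  "high",
);
console.log(JSON.stringify(body.think)); // prints "true"
```

This is obviously lossy (`low` and `high` become indistinguishable), which is exactly why a first-class way to forward provider-specific options from the SDK would help.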