GeminiLLMConfig
Configuration for the Gemini LLM provider.
Defined in: src/providers/llm/gemini/GeminiLLM.ts:58
Remarks
Extends `OpenAICompatibleLLMConfig` with the convenience alias `geminiApiKey`. Provide either `geminiApiKey`/`apiKey` (direct API access) or `proxyUrl` (server-side proxy); at least one must be set.
Peer dependency: `openai` (Gemini is accessed through its OpenAI-compatible chat completions endpoint).
Example
```ts
// Direct API access
const config: GeminiLLMConfig = {
  geminiApiKey: 'AIza...',
  model: 'gemini-2.0-flash',
  stream: true,
  temperature: 0.7,
};

// Via server-side proxy
const proxyConfig: GeminiLLMConfig = {
  proxyUrl: 'http://localhost:3000/api/proxy/gemini',
  model: 'gemini-1.5-pro',
};
```
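The precedence rules described in the remarks above can be made concrete with a small helper. This is a sketch under stated assumptions, not SDK code; `resolveGeminiAuth` is a hypothetical name used only for illustration:

```ts
// Hypothetical helper (not part of the SDK) illustrating the documented
// precedence: geminiApiKey wins over apiKey, and proxyUrl replaces both
// with a dummy key that the proxy server swaps for the real one.
function resolveGeminiAuth(
  config: GeminiLLMConfig,
): { apiKey: string; baseURL?: string } {
  if (config.proxyUrl) {
    // Requests go to the proxy; a dummy key is sent in place of a real one.
    return { apiKey: 'proxy', baseURL: config.proxyUrl };
  }
  // geminiApiKey takes precedence over apiKey when both are set.
  const key = config.geminiApiKey ?? config.apiKey;
  if (!key) {
    throw new Error('GeminiLLMConfig requires geminiApiKey/apiKey or proxyUrl.');
  }
  return { apiKey: key };
}
```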
See
`OpenAICompatibleLLMConfig` for inherited properties (`apiKey`, `proxyUrl`, `baseURL`, etc.).
Extends
- `OpenAICompatibleLLMConfig`
Properties
| Property | Type | Default value | Description | Inherited from | Defined in |
|---|---|---|---|---|---|
| `apiKey?` | `string` | `undefined` | API key for the provider. Required when connecting directly; omit when using `proxyUrl`. | `OpenAICompatibleLLMConfig.apiKey` | src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:79 |
| `baseURL?` | `string` | Provider-specific (set by each subclass) | Base URL for the provider’s API. Defaults differ per subclass (e.g., OpenAI uses `https://api.openai.com/v1`, Groq uses `https://api.groq.com/openai/v1`). Ignored when `proxyUrl` is set. | `OpenAICompatibleLLMConfig.baseURL` | src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:112 |
| `debug?` | `boolean` | `false` | Whether to enable debug logging for this provider. When `true`, the provider emits detailed internal logs; this is separate from the SDK-level `LoggingConfig`. | `OpenAICompatibleLLMConfig.debug` | src/core/types/providers.ts:86 |
| `endpoint?` | `string` | `undefined` | Custom endpoint URL that overrides the provider’s default API endpoint. Useful for self-hosted instances, proxy servers, or development environments. | `OpenAICompatibleLLMConfig.endpoint` | src/core/types/providers.ts:75 |
| `geminiApiKey?` | `string` | `undefined` | Google Gemini API key; convenience alias for `apiKey`. If both `geminiApiKey` and `apiKey` are set, `geminiApiKey` takes precedence. Obtain a key from Google AI Studio. | - | src/providers/llm/gemini/GeminiLLM.ts:68 |
| `maxRetries?` | `number` | `3` | Maximum number of retries for failed API requests. | `OpenAICompatibleLLMConfig.maxRetries` | src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:118 |
| `maxTokens?` | `number` | `undefined` | Maximum number of tokens to generate in the response. For voice applications, lower values (100-300) help keep responses concise and reduce TTS latency. | `OpenAICompatibleLLMConfig.maxTokens` | src/core/types/providers.ts:589 |
| `model` | `string` | `undefined` | Model identifier for the provider, e.g. `'gpt-4'`, `'llama-3.3-70b-versatile'`, `'gemini-2.0-flash'`. | `OpenAICompatibleLLMConfig.model` | src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:101 |
| `proxyUrl?` | `string` | `undefined` | URL of the CompositeVoice proxy server endpoint for this provider. When set, the OpenAI SDK sends requests to this URL instead of the provider’s native endpoint, and a dummy API key (`'proxy'`) is used; the proxy server is responsible for injecting the real API key (see the proxy sketch after this table). Example: `proxyUrl: 'http://localhost:3000/api/proxy/openai'`. | `OpenAICompatibleLLMConfig.proxyUrl` | src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:95 |
| `stopSequences?` | `string[]` | `undefined` | Sequences that cause the LLM to stop generating. When the model generates any of these sequences, generation halts; useful for controlling response boundaries. | `OpenAICompatibleLLMConfig.stopSequences` | src/core/types/providers.ts:627 |
| `stream?` | `boolean` | `undefined` | Whether to stream the LLM response token by token. When `true`, the provider yields tokens incrementally via an async iterable. Streaming is essential for low-latency voice applications because it lets TTS begin synthesizing before the full response is generated (see the streaming sketch after this table). | `OpenAICompatibleLLMConfig.stream` | src/core/types/providers.ts:618 |
| `systemPrompt?` | `string` | `undefined` | System prompt providing instructions and context to the LLM. Sets the behavior and persona of the assistant; for voice applications, include instructions to keep responses brief and conversational. | `OpenAICompatibleLLMConfig.systemPrompt` | src/core/types/providers.ts:608 |
| `temperature?` | `number` | `undefined` | Temperature for controlling generation randomness, from 0 (deterministic) to 2 (highly creative). Lower values produce more focused responses; higher values increase variety. | `OpenAICompatibleLLMConfig.temperature` | src/core/types/providers.ts:580 |
| `timeout?` | `number` | `undefined` | Request timeout in milliseconds. Applies to HTTP requests (REST providers) and connection establishment (WebSocket providers); set to 0 for no timeout. | `OpenAICompatibleLLMConfig.timeout` | src/core/types/providers.ts:95 |
| `topP?` | `number` | `undefined` | Top-P (nucleus) sampling parameter, from 0 to 1. Limits token selection to the smallest set whose cumulative probability exceeds this value; often used as an alternative to temperature. | `OpenAICompatibleLLMConfig.topP` | src/core/types/providers.ts:599 |
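As noted in the `stream` row above, a streaming provider yields tokens incrementally through an async iterable. The sketch below shows one way to consume such a stream and flush text to TTS at sentence boundaries; the `StreamingLLM` interface and `generate` method are assumptions for illustration, not the SDK’s actual surface:

```ts
// Sketch only: assumes a provider whose generate() returns an
// AsyncIterable<string> when stream is true; the real API may differ.
interface StreamingLLM {
  generate(prompt: string): AsyncIterable<string>;
}

async function streamToTTS(llm: StreamingLLM, prompt: string): Promise<void> {
  let buffer = '';
  for await (const token of llm.generate(prompt)) {
    buffer += token;
    // Flush at sentence boundaries so TTS can start synthesizing
    // before the full response has been generated.
    if (/[.!?]\s*$/.test(buffer)) {
      console.log('TTS chunk:', buffer.trim());
      buffer = '';
    }
  }
  if (buffer.trim()) console.log('TTS chunk:', buffer.trim());
}
```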
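The `proxyUrl` row above notes that the proxy server is responsible for injecting the real API key. A minimal sketch of such an endpoint, assuming an Express server and Gemini’s OpenAI-compatible base URL (both are assumptions to adapt to your stack, not part of the SDK):

```ts
// Minimal Express proxy sketch (an assumption, not shipped with the SDK).
// The client sends a dummy 'proxy' key; this endpoint injects the real one.
// Non-streaming forwarding is used here for brevity.
import express from 'express';

const app = express();
app.use(express.json());

app.post('/api/proxy/gemini/chat/completions', async (req, res) => {
  const upstream = await fetch(
    // Gemini's OpenAI-compatible endpoint; verify against current Google docs.
    'https://generativelanguage.googleapis.com/v1beta/openai/chat/completions',
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        // The real key stays server-side, read from the environment.
        Authorization: `Bearer ${process.env.GEMINI_API_KEY}`,
      },
      body: JSON.stringify(req.body),
    },
  );
  res.status(upstream.status);
  res.send(await upstream.text());
});

app.listen(3000);
```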