# LLMProviderConfig
Configuration for large language model providers.
Defined in: src/core/types/providers.ts:563
## Remarks
Extends BaseProviderConfig with LLM-specific options for model selection, generation parameters, system prompt, and streaming behavior.
## Example

```typescript
const llmConfig: LLMProviderConfig = {
  apiKey: 'your-api-key',
  model: 'claude-sonnet-4-20250514',
  temperature: 0.7,
  maxTokens: 150,
  systemPrompt: 'You are a helpful voice assistant. Keep responses brief.',
  stream: true,
};
```
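When `stream` is `true`, the provider yields tokens incrementally via an async iterable (see the `stream` row under Properties). Below is a minimal sketch of consuming such a stream; the `StreamingLLM` interface and its `generate` method are illustrative assumptions, not the SDK's actual API (see LLMProvider for that). Only the async-iterable contract comes from the documented behavior:

```typescript
// Illustrative shape only; the real interface is LLMProvider in
// src/core/types/providers.ts. Only the async-iterable contract
// comes from the documented behavior of `stream: true`.
interface StreamingLLM {
  generate(prompt: string): AsyncIterable<string>;
}

async function respond(llm: StreamingLLM, prompt: string): Promise<string> {
  let text = '';
  // Tokens arrive incrementally, so TTS can begin synthesizing
  // speech before the full response has been generated.
  for await (const token of llm.generate(prompt)) {
    text += token; // e.g. also push `token` to a TTS input buffer here
  }
  return text;
}
```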
## See
- BaseProviderConfig for inherited fields
- LLMProvider for the provider interface
## Extends

- BaseProviderConfig

## Extended by
## Properties
| Property | Type | Default value | Description | Inherited from | Defined in |
|---|---|---|---|---|---|
| `apiKey?` | `string` | `undefined` | API key or authentication token for the provider. **Remarks:** For client-side usage, consider using a proxy server to keep API keys secure. The SDK provides Express, Next.js, and Node adapters for this purpose. | `BaseProviderConfig.apiKey` | src/core/types/providers.ts:67 |
| `debug?` | `boolean` | `false` | Whether to enable debug logging for this provider. **Remarks:** When `true`, the provider emits detailed internal logs. This is separate from the SDK-level LoggingConfig. | `BaseProviderConfig.debug` | src/core/types/providers.ts:86 |
| `endpoint?` | `string` | `undefined` | Custom endpoint URL to override the provider’s default API endpoint. **Remarks:** Useful for self-hosted instances, proxy servers, or development environments. | `BaseProviderConfig.endpoint` | src/core/types/providers.ts:75 |
| `maxTokens?` | `number` | `undefined` | Maximum number of tokens to generate in the response. **Remarks:** For voice applications, lower values (100-300) help keep responses concise and reduce TTS latency. | - | src/core/types/providers.ts:589 |
| `model` | `string` | `undefined` | Model identifier to use for generation. **Remarks:** Provider-specific model name (e.g., 'claude-sonnet-4-20250514' for Anthropic, 'gpt-4' for OpenAI). | - | src/core/types/providers.ts:571 |
| `stopSequences?` | `string[]` | `undefined` | Sequences that cause the LLM to stop generating. **Remarks:** When the model generates any of these sequences, generation halts. Useful for controlling response boundaries. | - | src/core/types/providers.ts:627 |
| `stream?` | `boolean` | `undefined` | Whether to stream the LLM response token by token. **Remarks:** When `true`, the provider yields tokens incrementally via an async iterable. Streaming is essential for low-latency voice applications as it allows TTS to begin synthesizing before the full response is generated. | - | src/core/types/providers.ts:618 |
| `systemPrompt?` | `string` | `undefined` | System prompt providing instructions and context to the LLM. **Remarks:** Sets the behavior and persona of the assistant. For voice applications, include instructions to keep responses brief and conversational. | - | src/core/types/providers.ts:608 |
| `temperature?` | `number` | `undefined` | Temperature for controlling generation randomness. **Remarks:** Values range from 0 (deterministic) to 2 (highly creative). Lower values produce more focused responses; higher values increase variety. | - | src/core/types/providers.ts:580 |
| `timeout?` | `number` | `undefined` | Request timeout in milliseconds. **Remarks:** Applies to HTTP requests (REST providers) and connection establishment (WebSocket providers). Set to 0 for no timeout. | `BaseProviderConfig.timeout` | src/core/types/providers.ts:95 |
| `topP?` | `number` | `undefined` | Top-P (nucleus) sampling parameter. **Remarks:** Limits token selection to the smallest set whose cumulative probability exceeds this value. Values range from 0 to 1. Often used as an alternative to temperature. | - | src/core/types/providers.ts:599 |
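As the `apiKey` and `endpoint` remarks above suggest, client-side code should route requests through a proxy server rather than ship the API key to the browser. Below is a sketch of such a configuration, assuming a hypothetical `/api/llm` proxy route (the route name and the stop sequence value are illustrative; the SDK's Express, Next.js, and Node adapters are the intended way to serve such a route):

```typescript
// Sketch under assumptions: '/api/llm' is a hypothetical proxy route,
// and the stopSequences value is illustrative.
const proxiedConfig: LLMProviderConfig = {
  // apiKey intentionally omitted client-side; the proxy holds it.
  endpoint: '/api/llm',
  model: 'claude-sonnet-4-20250514',
  maxTokens: 200,             // 100-300 keeps voice replies concise
  stopSequences: ['\nUser:'], // halt generation at a turn boundary
  stream: true,
};
```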