LLMProviderConfig

Configuration for large language model providers.

Defined in: src/core/types/providers.ts:563

Remarks

Extends BaseProviderConfig with LLM-specific options for model selection, generation parameters, system prompt, and streaming behavior.

Example

```ts
const llmConfig: LLMProviderConfig = {
  apiKey: 'your-api-key',
  model: 'claude-sonnet-4-20250514',
  temperature: 0.7,
  maxTokens: 150,
  systemPrompt: 'You are a helpful voice assistant. Keep responses brief.',
  stream: true,
};
```
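As a further illustrative sketch (not from the source), a config could instead lean on the documented sampling and stop-sequence options; the field names below match the properties listed in the table, while the specific values and the stop sequence are hypothetical choices:

```typescript
// Hypothetical config sketch using the documented sampling and stop
// controls. Field names match LLMProviderConfig properties; the values
// and the stop sequence are illustrative assumptions.
const constrainedConfig = {
  apiKey: 'your-api-key',
  model: 'claude-sonnet-4-20250514',
  topP: 0.9,                   // nucleus sampling as an alternative to temperature
  maxTokens: 200,              // keep spoken responses short for TTS
  stopSequences: ['\nUser:'],  // halt before the model generates a user turn
  stream: true,
};
```

Using `topP` in place of `temperature` here reflects the note in the property table that the two are often used as alternatives.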

Extends

BaseProviderConfig

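Based on the property table that follows, the shape of the interface can be sketched as below. This is an illustrative reconstruction, not the actual source; the real definition lives in src/core/types/providers.ts:

```typescript
// Sketch of the interface shape implied by the property table.
// Reconstructed for illustration; see src/core/types/providers.ts
// for the authoritative definition.

interface BaseProviderConfig {
  apiKey?: string;   // auth token; prefer a proxy server for client-side use
  endpoint?: string; // override the provider's default API endpoint
  debug?: boolean;   // per-provider debug logging (default: false)
  timeout?: number;  // request timeout in ms; 0 disables the timeout
}

interface LLMProviderConfig extends BaseProviderConfig {
  model: string;            // required, e.g. 'claude-sonnet-4-20250514'
  temperature?: number;     // 0 (deterministic) to 2 (highly creative)
  topP?: number;            // nucleus sampling, 0 to 1
  maxTokens?: number;       // cap on generated tokens
  systemPrompt?: string;    // assistant instructions and persona
  stream?: boolean;         // yield tokens via an async iterable
  stopSequences?: string[]; // halt generation on these sequences
}
```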
Properties

| Property | Type | Default value | Description | Inherited from | Defined in |
| --- | --- | --- | --- | --- | --- |
| `apiKey?` | `string` | `undefined` | API key or authentication token for the provider. For client-side usage, consider using a proxy server to keep API keys secure; the SDK provides Express, Next.js, and Node adapters for this purpose. | `BaseProviderConfig.apiKey` | src/core/types/providers.ts:67 |
| `debug?` | `boolean` | `false` | Whether to enable debug logging for this provider. When `true`, the provider emits detailed internal logs. This is separate from the SDK-level `LoggingConfig`. | `BaseProviderConfig.debug` | src/core/types/providers.ts:86 |
| `endpoint?` | `string` | `undefined` | Custom endpoint URL to override the provider's default API endpoint. Useful for self-hosted instances, proxy servers, or development environments. | `BaseProviderConfig.endpoint` | src/core/types/providers.ts:75 |
| `maxTokens?` | `number` | `undefined` | Maximum number of tokens to generate in the response. For voice applications, lower values (100-300) help keep responses concise and reduce TTS latency. | - | src/core/types/providers.ts:589 |
| `model` | `string` | `undefined` | Model identifier to use for generation. Provider-specific model name (e.g., 'claude-sonnet-4-20250514' for Anthropic, 'gpt-4' for OpenAI). | - | src/core/types/providers.ts:571 |
| `stopSequences?` | `string[]` | `undefined` | Sequences that cause the LLM to stop generating. When the model generates any of these sequences, generation halts; useful for controlling response boundaries. | - | src/core/types/providers.ts:627 |
| `stream?` | `boolean` | `undefined` | Whether to stream the LLM response token by token. When `true`, the provider yields tokens incrementally via an async iterable. Streaming is essential for low-latency voice applications, as it allows TTS to begin synthesizing before the full response is generated. | - | src/core/types/providers.ts:618 |
| `systemPrompt?` | `string` | `undefined` | System prompt providing instructions and context to the LLM. Sets the behavior and persona of the assistant. For voice applications, include instructions to keep responses brief and conversational. | - | src/core/types/providers.ts:608 |
| `temperature?` | `number` | `undefined` | Temperature for controlling generation randomness. Values range from 0 (deterministic) to 2 (highly creative); lower values produce more focused responses, higher values increase variety. | - | src/core/types/providers.ts:580 |
| `timeout?` | `number` | `undefined` | Request timeout in milliseconds. Applies to HTTP requests (REST providers) and connection establishment (WebSocket providers). Set to 0 for no timeout. | `BaseProviderConfig.timeout` | src/core/types/providers.ts:95 |
| `topP?` | `number` | `undefined` | Top-P (nucleus) sampling parameter. Limits token selection to the smallest set whose cumulative probability exceeds this value. Values range from 0 to 1; often used as an alternative to temperature. | - | src/core/types/providers.ts:599 |
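The `stream` property's behavior (tokens yielded via an async iterable) can be sketched as follows. The generator below is a mock stand-in, not the SDK's actual streaming API, which may differ in names and signatures:

```typescript
// Hypothetical sketch of consuming a streamed LLM response.
// mockTokenStream stands in for a real provider call made with
// `stream: true`; the SDK's actual API may differ.

async function* mockTokenStream(): AsyncGenerator<string> {
  // A real provider would yield tokens as they arrive over the wire.
  for (const token of ['Hello', ', ', 'world', '!']) {
    yield token;
  }
}

async function collectResponse(): Promise<string> {
  let text = '';
  for await (const token of mockTokenStream()) {
    // In a voice pipeline, each token (or sentence chunk) could be
    // forwarded to TTS here, before the full response is complete.
    text += token;
  }
  return text;
}
```

This incremental consumption is what lets TTS start synthesizing early, which is why the table describes streaming as essential for low-latency voice applications.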

© 2026 CompositeVoice. All rights reserved.
