OpenAILLM
OpenAI LLM provider for GPT models.
Defined in: src/providers/llm/openai/OpenAILLM.ts:103
Remarks
A thin subclass of OpenAICompatibleLLM that configures the OpenAI SDK with OpenAI-specific options. All generation logic (streaming, non-streaming, abort handling, proxy support) is inherited from the base class.
The only customizations are:
- `providerName` is set to `'OpenAILLM'` for log/error messages.
- `buildClientOptions()` injects the `organization` field when `organizationId` is configured.
Example
import { OpenAILLM } from 'composite-voice';
const llm = new OpenAILLM({
apiKey: process.env.OPENAI_API_KEY,
model: 'gpt-4o-mini',
systemPrompt: 'You are a concise voice assistant.',
});
await llm.initialize();
const stream = await llm.generate('What causes thunder?');
for await (const chunk of stream) {
process.stdout.write(chunk);
}
await llm.dispose();
See
- OpenAILLMConfig for configuration options.
- OpenAICompatibleLLM for the base class.
- AnthropicLLM for the Anthropic alternative.
Extends
- OpenAICompatibleLLM
Constructors
Constructor
new OpenAILLM(config, logger?): OpenAILLM;
Defined in: src/providers/llm/openai/OpenAILLM.ts:114
Creates a new OpenAI LLM provider instance.
Parameters
| Parameter | Type | Description |
|---|---|---|
| config | OpenAILLMConfig | OpenAI provider configuration. Must include at least model and either apiKey or proxyUrl. |
| logger? | Logger | Optional custom logger instance. |
Returns
OpenAILLM
Overrides
OpenAICompatibleLLM.constructor
Properties
| Property | Modifier | Type | Default value | Description | Overrides | Inherited from | Defined in |
|---|---|---|---|---|---|---|---|
| config | public | OpenAILLMConfig | undefined | LLM-specific provider configuration. | OpenAICompatibleLLM.config | - | src/providers/llm/openai/OpenAILLM.ts:104 |
| initialized | protected | boolean | false | Tracks whether initialize has completed successfully. | - | OpenAICompatibleLLM.initialized | src/providers/base/BaseProvider.ts:97 |
| logger | protected | Logger | undefined | Scoped logger instance for this provider. | - | OpenAICompatibleLLM.logger | src/providers/base/BaseProvider.ts:94 |
| providerName | readonly | "OpenAILLM" | 'OpenAICompatibleLLM' | Display name used in log messages and errors. Remarks: Override this property in subclasses to provide a meaningful name (e.g., 'GroqLLM', 'GeminiLLM'). The name appears in all log output and in ProviderInitializationError messages. | OpenAICompatibleLLM.providerName | - | src/providers/llm/openai/OpenAILLM.ts:105 |
| roles | readonly | readonly ProviderRole[] | undefined | LLM providers cover the 'llm' pipeline role by default. | - | OpenAICompatibleLLM.roles | src/providers/base/BaseLLMProvider.ts:75 |
| type | readonly | ProviderType | undefined | Communication transport this provider uses ('rest' or 'websocket'). | - | OpenAICompatibleLLM.type | src/providers/base/BaseProvider.ts:74 |
Methods
assertReady()
protected assertReady(): void;
Defined in: src/providers/base/BaseProvider.ts:255
Guard that throws if the provider has not been initialized.
Returns
void
Remarks
Call at the start of any method that requires the provider to be ready.
Throws
Error Thrown with a descriptive message when initialized is false.
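Example
The guard amounts to something like the following standalone sketch (the flag and name are passed in explicitly here; the real method reads the provider's own initialized flag and providerName property, and the exact error wording is illustrative):

```typescript
// Standalone sketch of the assertReady() guard described above.
function assertReady(initialized: boolean, providerName: string): void {
  if (!initialized) {
    // Wording is illustrative; the actual message includes the provider name.
    throw new Error(`${providerName} has not been initialized. Call initialize() first.`);
  }
}
```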
Inherited from
OpenAICompatibleLLM.assertReady
buildClientOptions()
protected buildClientOptions(): Record<string, unknown>;
Defined in: src/providers/llm/openai/OpenAILLM.ts:127
Build OpenAI-specific SDK constructor options.
Returns
Record<string, unknown>
An object containing the organization key if set, or an empty object.
Remarks
Injects the organization field into the OpenAI SDK constructor when organizationId is configured.
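Example
The documented behavior reduces to a small standalone function (field names are taken from this page; this is a sketch, not the library's actual source):

```typescript
// Include `organization` only when an organizationId is configured;
// otherwise return an empty options object.
function buildClientOptions(config: { organizationId?: string }): Record<string, unknown> {
  return config.organizationId ? { organization: config.organizationId } : {};
}

buildClientOptions({ organizationId: 'org-abc123' }); // hypothetical ID → { organization: 'org-abc123' }
buildClientOptions({}); // → {}
```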
Overrides
OpenAICompatibleLLM.buildClientOptions
dispose()
dispose(): Promise<void>;
Defined in: src/providers/base/BaseProvider.ts:154
Clean up resources and dispose of the provider.
Returns
Promise<void>
Remarks
Delegates to the subclass hook onDispose and resets the initialized flag. If the provider is not initialized, the call is a no-op.
Throws
Re-throws any error raised by onDispose.
Inherited from
OpenAICompatibleLLM.dispose
generate()
generate(prompt, options?): Promise<AsyncIterable<string, any, any>>;
Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:324
Generate an LLM response from a single text prompt.
Parameters
| Parameter | Type | Description |
|---|---|---|
| prompt | string | The user’s text prompt. |
| options? | LLMGenerationOptions | Optional generation overrides (temperature, maxTokens, signal, etc.). |
Returns
Promise<AsyncIterable<string, any, any>>
An async iterable that yields text chunks. When streaming is enabled (the default), chunks arrive incrementally; otherwise, a single chunk containing the full response is yielded.
Remarks
Convenience wrapper that converts the prompt to a message array (prepending the system prompt if configured) and delegates to generateFromMessages.
Throws
Error Thrown if the provider has not been initialized or the client is unavailable.
Throws
AbortError Thrown if the provided options.signal is aborted before or during generation.
Example
const provider = new OpenAICompatibleLLM({ apiKey: 'sk-...', model: 'gpt-4' });
await provider.initialize();
const stream = await provider.generate('Explain quantum computing briefly.');
for await (const chunk of stream) {
process.stdout.write(chunk);
}
Inherited from
OpenAICompatibleLLM.generate
generateFromMessages()
generateFromMessages(messages, options?): Promise<AsyncIterable<string, any, any>>;
Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:371
Generate an LLM response from a multi-turn conversation.
Parameters
| Parameter | Type | Description |
|---|---|---|
| messages | LLMMessage[] | Array of conversation messages (system, user, assistant). |
| options? | LLMGenerationOptions | Optional generation overrides (temperature, maxTokens, signal, etc.). |
Returns
Promise<AsyncIterable<string, any, any>>
An async iterable that yields text chunks. When streaming is enabled (the default), chunks arrive incrementally; otherwise, a single chunk containing the full response is yielded.
Remarks
This is the primary generation method. It merges per-call options with the provider config defaults, converts the messages to OpenAI’s ChatCompletionMessageParam format, and dispatches to either the streaming or non-streaming code path based on config.stream.
The returned async iterable respects the options.signal abort signal. When aborted, iteration stops and an AbortError is thrown. This is used by the CompositeVoice eager/preflight pipeline to cancel speculative generations.
Throws
Error Thrown if the provider has not been initialized or the client is unavailable.
Throws
AbortError Thrown if the provided options.signal is aborted before or during generation.
Example
const provider = new OpenAICompatibleLLM({ apiKey: 'sk-...', model: 'gpt-4' });
await provider.initialize();
const messages: LLMMessage[] = [
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: 'What is the capital of France?' },
];
const stream = await provider.generateFromMessages(messages);
for await (const chunk of stream) {
process.stdout.write(chunk);
}
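The abort contract described in the remarks can also be exercised without a live provider. The following self-contained sketch substitutes a mock stream for the real client; the chunk text and the cancellation threshold are invented for illustration:

```typescript
// Mock stand-in for the provider's streaming iterable. It checks the abort
// signal between chunks and throws an AbortError, matching the documented contract.
async function* mockStream(signal: AbortSignal): AsyncIterable<string> {
  for (const chunk of ['thunder ', 'is ', 'caused ', 'by ', 'lightning']) {
    if (signal.aborted) {
      const err = new Error('The operation was aborted');
      err.name = 'AbortError';
      throw err;
    }
    yield chunk;
  }
}

async function consumeWithAbort(): Promise<string> {
  const controller = new AbortController();
  let collected = '';
  try {
    for await (const chunk of mockStream(controller.signal)) {
      collected += chunk;
      if (collected.length > 10) controller.abort(); // simulate early cancellation
    }
  } catch (err) {
    if ((err as Error).name !== 'AbortError') throw err; // swallow only AbortError
  }
  return collected; // partial output: iteration stopped at the abort
}
```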
Inherited from
OpenAICompatibleLLM.generateFromMessages
getConfig()
getConfig(): LLMProviderConfig;
Defined in: src/providers/base/BaseLLMProvider.ts:192
Get a shallow copy of the current LLM configuration.
Returns
LLMProviderConfig
A new LLMProviderConfig object.
Inherited from
OpenAICompatibleLLM.getConfig
initialize()
initialize(): Promise<void>;
Defined in: src/providers/base/BaseProvider.ts:127
Initialize the provider, making it ready for use.
Returns
Promise<void>
Remarks
Calls the subclass hook onInitialize. If the provider has already been initialized the call is a no-op.
Throws
ProviderInitializationError Thrown when onInitialize rejects. The original error is wrapped with the provider class name for diagnostics.
Inherited from
OpenAICompatibleLLM.initialize
isReady()
isReady(): boolean;
Defined in: src/providers/base/BaseProvider.ts:178
Check whether the provider has been initialized and is ready.
Returns
boolean
true when initialize has completed successfully and dispose has not yet been called.
Inherited from
OpenAICompatibleLLM.isReady
mergeOptions()
protected mergeOptions(options?): LLMGenerationOptions;
Defined in: src/providers/base/BaseLLMProvider.ts:170
Merge per-call generation options with the provider’s config defaults.
Parameters
| Parameter | Type | Description |
|---|---|---|
| options? | LLMGenerationOptions | Optional per-call overrides. |
Returns
LLMGenerationOptions
A merged LLMGenerationOptions object.
Remarks
Values supplied in options take precedence over values in config. Only defined values are included in the result, allowing providers to distinguish “not set” from explicit values.
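Example
The precedence rule can be sketched standalone (only the temperature and maxTokens fields are modeled here; the real method handles the full options shape):

```typescript
interface GenOptions {
  temperature?: number;
  maxTokens?: number;
}

// Per-call options win over config defaults; undefined values are dropped so
// "not set" stays distinguishable from an explicit value.
function mergeOptions(defaults: GenOptions, options: GenOptions = {}): GenOptions {
  const merged: GenOptions = {};
  for (const source of [defaults, options]) {
    for (const [key, value] of Object.entries(source)) {
      if (value !== undefined) {
        (merged as Record<string, unknown>)[key] = value;
      }
    }
  }
  return merged;
}

mergeOptions({ temperature: 0.7, maxTokens: 256 }, { temperature: 0.2 });
// → { temperature: 0.2, maxTokens: 256 }
```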
Inherited from
OpenAICompatibleLLM.mergeOptions
onConfigUpdate()
protected onConfigUpdate(_config): void;
Defined in: src/providers/base/BaseProvider.ts:242
Hook called after updateConfig merges new values.
Parameters
| Parameter | Type | Description |
|---|---|---|
| _config | Partial<BaseProviderConfig> | The partial configuration that was merged. |
Returns
void
Remarks
The default implementation is a no-op. Override in subclasses to react to runtime configuration changes (e.g. reconnect with a new API key).
Inherited from
OpenAICompatibleLLM.onConfigUpdate
onDispose()
protected onDispose(): Promise<void>;
Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:288
Dispose of the OpenAI client and release resources.
Returns
Promise<void>
Remarks
Nullifies the client reference so that it can be garbage-collected. Called automatically by BaseLLMProvider.dispose.
Inherited from
OpenAICompatibleLLM.onDispose
onInitialize()
protected onInitialize(): Promise<void>;
Defined in: src/providers/llm/openai-compatible/OpenAICompatibleLLM.ts:237
Initialize the OpenAI-compatible client.
Returns
Promise<void>
Remarks
Dynamically imports the openai peer dependency, resolves the base URL (preferring proxyUrl over baseURL), and creates the SDK client instance. Called automatically by BaseLLMProvider.initialize.
Throws
ProviderInitializationError Thrown if neither apiKey nor proxyUrl is configured, or if the openai package cannot be found (peer dependency not installed).
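Example
The base-URL resolution described above reduces to a one-liner (config field names from this page; a sketch of the documented precedence, not the library's code):

```typescript
// proxyUrl takes precedence over baseURL; undefined means the SDK default applies.
function resolveBaseURL(config: { proxyUrl?: string; baseURL?: string }): string | undefined {
  return config.proxyUrl ?? config.baseURL;
}
```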
Inherited from
OpenAICompatibleLLM.onInitialize
promptToMessages()
protected promptToMessages(prompt): LLMMessage[];
Defined in: src/providers/base/BaseLLMProvider.ts:141
Convert a plain-text prompt into an LLMMessage array.
Parameters
| Parameter | Type | Description |
|---|---|---|
| prompt | string | The user’s input text. |
Returns
LLMMessage[]
A messages array suitable for generateFromMessages.
Remarks
If the provider’s config includes a systemPrompt, it is prepended as a system message. The prompt itself becomes a user message.
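Example
A standalone sketch of the conversion (the message type is reduced to the fields shown on this page, and the system prompt is passed in explicitly rather than read from config):

```typescript
type Message = { role: 'system' | 'user' | 'assistant'; content: string };

// Prepend the configured system prompt (if any), then append the user prompt.
function promptToMessages(prompt: string, systemPrompt?: string): Message[] {
  const messages: Message[] = [];
  if (systemPrompt) {
    messages.push({ role: 'system', content: systemPrompt });
  }
  messages.push({ role: 'user', content: prompt });
  return messages;
}

promptToMessages('What causes thunder?', 'You are a concise voice assistant.');
// → [ { role: 'system', ... }, { role: 'user', ... } ]
```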
Inherited from
OpenAICompatibleLLM.promptToMessages
updateConfig()
updateConfig(config): void;
Defined in: src/providers/base/BaseProvider.ts:201
Merge partial configuration updates into the current config.
Parameters
| Parameter | Type | Description |
|---|---|---|
| config | Partial<BaseProviderConfig> | A partial configuration object whose keys will overwrite existing values. |
Returns
void
Remarks
After merging, the subclass hook onConfigUpdate is called so providers can react to changed values at runtime.
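Example
The merge-then-notify flow can be sketched with a simplified stand-in class (not the real BaseProvider; the hook here records what was merged, whereas the real default is a no-op):

```typescript
// Simplified stand-in for BaseProvider's update flow: shallow-merge the
// partial config, then invoke the onConfigUpdate hook with what was merged.
class SketchProvider {
  lastUpdate: Record<string, unknown> | null = null;

  constructor(private config: Record<string, unknown>) {}

  updateConfig(partial: Record<string, unknown>): void {
    Object.assign(this.config, partial); // keys in `partial` overwrite existing values
    this.onConfigUpdate(partial); // subclasses may reconnect with a new API key, etc.
  }

  protected onConfigUpdate(partial: Record<string, unknown>): void {
    this.lastUpdate = partial; // recorded here for illustration only
  }

  get current(): Record<string, unknown> {
    return { ...this.config };
  }
}
```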