OpenAI (GPT)

Use OpenAI's GPT models as the LLM provider in a CompositeVoice pipeline.

Use OpenAILLM to fill the llm slot of a CompositeVoice pipeline with a GPT model.

Prerequisites

  • An OpenAI API key or a CompositeVoice proxy server
  • Install the peer dependency:
npm install openai

Basic setup

import { CompositeVoice, OpenAILLM, NativeSTT, NativeTTS } from '@lukeocodes/composite-voice';

const agent = new CompositeVoice({
  stt: new NativeSTT({ language: 'en-US' }),
  llm: new OpenAILLM({
    proxyUrl: '/api/proxy/openai',
    model: 'gpt-4o-mini',
    systemPrompt: 'You are a concise voice assistant. Keep answers under two sentences.',
  }),
  tts: new NativeTTS(),
});

await agent.start();
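
The proxyUrl above points at a route you host yourself, so the OpenAI key never ships to the browser. Below is a minimal sketch of such a route, assuming Node 18+ (global fetch) and a chat-completions-only proxy; the path, function names, and OPENAI_API_KEY environment variable are illustrative conventions, not part of CompositeVoice:

```typescript
import type { IncomingMessage, ServerResponse } from 'node:http';

const OPENAI_URL = 'https://api.openai.com/v1/chat/completions';

// Headers forwarded upstream. The API key stays on the server and never
// reaches the browser.
export function buildProxyHeaders(apiKey: string): Record<string, string> {
  return {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${apiKey}`,
  };
}

// Forward the raw request body to OpenAI and relay the response.
// Note: this buffers the upstream response for simplicity; a production
// proxy would pipe the body through so streamed tokens arrive incrementally.
export async function handleOpenAIProxy(
  req: IncomingMessage,
  res: ServerResponse,
): Promise<void> {
  const chunks: Buffer[] = [];
  for await (const chunk of req) chunks.push(chunk as Buffer);

  const upstream = await fetch(OPENAI_URL, {
    method: 'POST',
    headers: buildProxyHeaders(process.env.OPENAI_API_KEY ?? ''),
    body: Buffer.concat(chunks),
  });

  res.writeHead(upstream.status, { 'Content-Type': 'application/json' });
  res.end(await upstream.text());
}
```

Mount the handler on whatever server framework you use, e.g. with node:http, route POST /api/proxy/openai to handleOpenAIProxy.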

Configuration options

Option          Type     Default     Description
model           string   (required)  Model identifier. See model variants below.
systemPrompt    string               System-level instructions for the assistant.
temperature     number               Randomness (0 = deterministic, 2 = creative).
maxTokens       number               Maximum tokens per response.
topP            number               Nucleus sampling threshold (0-1).
stream          boolean  true        Stream tokens incrementally.
proxyUrl        string               CompositeVoice proxy endpoint. Recommended for browsers.
apiKey          string               Direct API key. Use only in server-side code.
organizationId  string               OpenAI organization ID for multi-org accounts.
maxRetries      number   3           Retry count for failed requests.
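
Put together, a fully specified options object looks roughly like this. The option names and types come from the table above; the OpenAILLMOptions type itself is an illustrative sketch, not the library's published typings:

```typescript
// Sketch of the OpenAILLM constructor options, based on the table above.
// Everything besides `model` is optional.
type OpenAILLMOptions = {
  model: string;           // required, e.g. 'gpt-4o-mini'
  systemPrompt?: string;
  temperature?: number;    // 0 (deterministic) to 2 (creative)
  maxTokens?: number;
  topP?: number;           // 0-1 nucleus sampling threshold
  stream?: boolean;        // defaults to true
  proxyUrl?: string;       // recommended in browsers
  apiKey?: string;         // server-side code only
  organizationId?: string;
  maxRetries?: number;     // defaults to 3
};

const options: OpenAILLMOptions = {
  model: 'gpt-4o-mini',
  temperature: 0.3,
  maxTokens: 200,
  proxyUrl: '/api/proxy/openai',
};
```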

Model variants

Model          Speed     Notes
gpt-4o-mini    Fast      Good balance of speed and quality for voice
gpt-4o         Moderate  High capability, multimodal
gpt-4-turbo    Moderate  Large context window
gpt-3.5-turbo  Fast      Lower cost, lower capability
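
If you choose the model at runtime, the trade-offs in the table can be encoded in a small helper. This is a hypothetical convenience for your own code, not something OpenAILLM provides:

```typescript
// Hypothetical helper: pick a model from the trade-offs in the table above.
// Large-context needs win over speed; otherwise prefer gpt-4o-mini for
// low-latency voice, or gpt-4o when capability matters more.
function pickModel(prioritizeSpeed: boolean, needLargeContext: boolean): string {
  if (needLargeContext) return 'gpt-4-turbo';
  return prioritizeSpeed ? 'gpt-4o-mini' : 'gpt-4o';
}
```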

Complete example

import {
  CompositeVoice,
  OpenAILLM,
  DeepgramSTT,
  DeepgramTTS,
} from '@lukeocodes/composite-voice';

const agent = new CompositeVoice({
  stt: new DeepgramSTT({
    proxyUrl: '/api/proxy/deepgram',
    language: 'en',
    options: { model: 'nova-3', smartFormat: true },
  }),
  llm: new OpenAILLM({
    proxyUrl: '/api/proxy/openai',
    model: 'gpt-4o-mini',
    temperature: 0.7,
    maxTokens: 256,
    systemPrompt: 'You are a friendly voice assistant. Answer briefly.',
  }),
  tts: new DeepgramTTS({
    proxyUrl: '/api/proxy/deepgram',
    voice: 'aura-2-thalia-en',
  }),
  conversationHistory: { enabled: true, maxTurns: 10 },
});

await agent.start();
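
conversationHistory with maxTurns: 10 caps how much past dialogue is resent to the model on each turn. Conceptually (a hypothetical sketch, not CompositeVoice's actual internals), the trimming amounts to:

```typescript
type Turn = { role: 'user' | 'assistant'; content: string };

// Hypothetical sketch of history trimming: keep only the most recent
// `maxTurns` turns so the prompt sent to the model stays bounded.
function trimHistory(history: Turn[], maxTurns: number): Turn[] {
  return history.slice(-maxTurns);
}
```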

Tips

  • model is required. OpenAILLM does not set a default, so you must always specify one.
  • gpt-4o-mini is ideal for voice. It offers low latency and good quality for conversational use cases.
  • Use organizationId for multi-org accounts. If your API key belongs to multiple organizations, set this to route requests correctly.
  • OpenAILLM extends OpenAICompatibleLLM. All streaming, abort, and proxy logic is inherited from the base class.

© 2026 CompositeVoice. All rights reserved.
