# OpenAI-Compatible API Integration

Located in `Scripts/Runtime/OpenAI/`

A unified integration that works with any OpenAI-compatible server — OpenRouter, OpenAI, LocalAI, LM Studio, Ollama (via its OpenAI endpoint), vLLM, and more.

## Key Scripts

| Script | Purpose |
|--------|---------|
| `OpenAI.cs` (`OpenAIAPI` class) | Static API class. Handles chat, streaming, image generation, completions, model listing, and history. |
| `OpenAIConfig.cs` | MonoBehaviour configuration component. Manages server preset, API key, model, temperature, etc. |
| `OpenAIPayload.cs` | Request/response data structures for the OpenAI-compatible API. |

## Server Presets

The `ServerPreset` enum handles provider switching:

| Preset | URL | Use Case |
|--------|-----|----------|
| Custom | User-defined | LocalAI, LM Studio, Ollama, vLLM, or any OpenAI-compatible server |
| OpenRouter | https://openrouter.ai/api/ | 100+ cloud models (OpenAI, Anthropic, Google, Meta, etc.) |
| OpenAI | https://api.openai.com/ | Official OpenAI API (GPT-4o, DALL-E, etc.) |
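
For a local server, keep the Custom preset and point it at the server's base URL. A minimal sketch, assuming LM Studio and Ollama on their usual default ports (1234 and 11434, respectively) and that the integration appends the standard `v1/...` request paths to the base URL, as the preset URLs above suggest:

```csharp
// Pick one, depending on your local server:

// LM Studio typically listens on port 1234 by default
OpenAIAPI.SetServerPreset(ServerPreset.Custom, "http://localhost:1234/");

// Ollama's OpenAI-compatible endpoint typically listens on port 11434
OpenAIAPI.SetServerPreset(ServerPreset.Custom, "http://localhost:11434/");
```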

## Basic Usage

```csharp
// Configure via preset
OpenAIAPI.SetServerPreset(ServerPreset.OpenRouter);
OpenAIAPI.SetApiKey("sk-or-v1-your-api-key");

// Or use a custom local server
OpenAIAPI.SetServerPreset(ServerPreset.Custom, "http://localhost:8080/");

// Initialize a chat session
OpenAIAPI.InitChat(historyLimit: 8, system: "You are a helpful assistant.");

// Send a chat message (streaming)
await OpenAIAPI.ChatStream(
    onTextReceived: (text) => responseText.text += text,
    model: "openai/gpt-4o-mini",
    prompt: "Tell me a story",
    temperature: 0.7f,
    maxTokens: 1024
);

// Image generation
Texture2D image = await OpenAIAPI.GenerateImage(prompt, model, "512x512");

// List available models
var models = await OpenAIAPI.GetModels();
```
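
Tying it together in a scene: a minimal controller sketch that initializes the session once and streams replies into a UI label. Only the `OpenAIAPI` calls shown above are used; the component itself, its field names, and the UI wiring are illustrative.

```csharp
using UnityEngine;
using UnityEngine.UI;

public class ChatController : MonoBehaviour
{
    [SerializeField] private Text responseText; // assumed uGUI Text output
    [SerializeField] private string apiKey;     // set in the Inspector

    private void Start()
    {
        // One-time setup: provider, key, and a bounded chat session
        OpenAIAPI.SetServerPreset(ServerPreset.OpenRouter);
        OpenAIAPI.SetApiKey(apiKey);
        OpenAIAPI.InitChat(historyLimit: 8, system: "You are a helpful assistant.");
    }

    // Wire this to a UI button or InputField submit event
    public async void SendPrompt(string prompt)
    {
        responseText.text = "";
        await OpenAIAPI.ChatStream(
            onTextReceived: text => responseText.text += text,
            model: "openai/gpt-4o-mini",
            prompt: prompt,
            temperature: 0.7f,
            maxTokens: 1024);
    }
}
```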

## Configuration

Use the `OpenAIConfig` component in your scene:

| Setting | Description |
|---------|-------------|
| Server Preset | Quick provider selection (Custom, OpenRouter, OpenAI) |
| Custom Server URL | For local/self-hosted servers (only when preset is Custom) |
| API Key | Required for cloud providers, optional for local |
| Model | e.g. `llama-3.2-1b-instruct` (local), `openai/gpt-4o` (OpenRouter), `gpt-4o` (OpenAI) |
| Temperature | 0.0 = deterministic, 2.0 = very random |
| Max Tokens | Max tokens to generate (0 = model default) |
| Top P / Top K | Sampling parameters (0 = disabled) |
| History Limit | Messages to keep in chat history |
| System Prompt | Optional system prompt for AI behavior |
| Image Model / Size | Settings for image generation |
| Image Timeout | Timeout in seconds (default: 300) |
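
If you want a scene component to drive the static API rather than hard-coding values, a pattern like the following works. Note that the `OpenAIConfig` accessor names below (`ServerPreset`, `ApiKey`, `HistoryLimit`, `SystemPrompt`) are assumptions for illustration; check the component for its actual fields.

```csharp
using UnityEngine;

public class ChatBootstrap : MonoBehaviour
{
    [SerializeField] private OpenAIConfig config;

    private void Awake()
    {
        // All accessor names below are hypothetical -- verify against OpenAIConfig
        OpenAIAPI.SetServerPreset(config.ServerPreset);
        OpenAIAPI.SetApiKey(config.ApiKey);
        OpenAIAPI.InitChat(historyLimit: config.HistoryLimit,
                           system: config.SystemPrompt);
    }
}
```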

## Popular Models by Provider

| Provider | Model ID | Description |
|----------|----------|-------------|
| OpenRouter | `openai/gpt-4o` | GPT-4 Omni via OpenRouter |
| OpenRouter | `anthropic/claude-3.5-sonnet` | Claude 3.5 Sonnet via OpenRouter |
| OpenRouter | `google/gemini-pro` | Gemini Pro via OpenRouter |
| OpenAI | `gpt-4o` | GPT-4 Omni direct |
| OpenAI | `gpt-4o-mini` | Fast, affordable GPT-4o variant |
| Local | `llama-3.2-1b-instruct` | Lightweight local model |
| Local | `mistral-7b-instruct` | Strong general-purpose local model |
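
Note the provider prefix: the same underlying model is `openai/gpt-4o` on OpenRouter but `gpt-4o` when talking to OpenAI directly. A small hypothetical helper can keep the per-preset default in one place:

```csharp
// Illustrative defaults drawn from the table above; adjust to taste.
static string DefaultModelFor(ServerPreset preset) => preset switch
{
    ServerPreset.OpenRouter => "openai/gpt-4o-mini",
    ServerPreset.OpenAI     => "gpt-4o-mini",
    _                       => "llama-3.2-1b-instruct", // Custom / local server
};
```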

## Chat History

```csharp
// Save chat to disk
OpenAIAPI.SaveChatHistory("mychat.dat");

// Load chat from disk
OpenAIAPI.LoadChatHistory("mychat.dat", historyLimit: 8);
```
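
To persist a conversation across play sessions, tie these calls to Unity's lifecycle hooks. A minimal sketch (whether `LoadChatHistory` tolerates a missing file on first run is not documented here, so guard accordingly):

```csharp
using UnityEngine;

public class ChatPersistence : MonoBehaviour
{
    private const string HistoryFile = "mychat.dat";

    private void Start()
    {
        // Restore the previous session, if any
        OpenAIAPI.LoadChatHistory(HistoryFile, historyLimit: 8);
    }

    private void OnApplicationQuit()
    {
        OpenAIAPI.SaveChatHistory(HistoryFile);
    }
}
```
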
> 💡 **Editor Chat Window:** The AI Chat Window (Window → Velesio → AI Chat Window) also supports this backend. Configure preset, key, and model in the settings panel.