Delph

API Reference

Use llmchat as a backend for your own apps. Generate your API key from the API tab in the app sidebar.

Authentication

All requests require a Bearer token in the Authorization header.

Authorization: Bearer llmchat-YOUR_API_KEY

Chat

POST /api/v1/chat

Request body

| Field | Type | Required | Description |
|---|---|---|---|
| messages | array | Yes | Array of `{role, content}` objects. Roles: `user`, `assistant`, `system` |
| model | string | No | Model ID (default: `gemini-flash-2.5`) |
| web_search | boolean | No | Enable web search (default: `false`) |
| custom_instructions | string | No | Extra system instructions |
| thread_id | string | No | Optional ID to group messages |

Examples

cURL

curl -X POST https://delph.tech/api/v1/chat \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini-flash-2.5",
    "messages": [
      { "role": "user", "content": "Explain quantum entanglement simply." }
    ],
    "web_search": false
  }'

JavaScript

const response = await fetch('https://delph.tech/api/v1/chat', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gemini-flash-2.5',
    messages: [
      { role: 'user', content: 'Explain quantum entanglement simply.' }
    ],
  }),
});

// The response is a Server-Sent Events (SSE) stream
const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  // { stream: true } handles multi-byte characters split across chunks
  const chunk = decoder.decode(value, { stream: true });
  // Parse SSE events from chunk
  console.log(chunk);
}
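The loop above logs raw SSE text. A minimal sketch of extracting event payloads from a chunk, assuming standard SSE framing (`data: ...` lines, events separated by a blank line); the function name is illustrative, and the exact payload format inside each `data:` line is not specified here. Note that a real stream can split an event across chunks, so production code should buffer partial events:

```javascript
// Illustrative helper (not part of the API): pull the `data:` payloads
// out of one SSE chunk, assuming standard SSE framing.
function parseSSEChunk(chunk) {
  const events = [];
  // Events are separated by a blank line
  for (const block of chunk.split('\n\n')) {
    for (const line of block.split('\n')) {
      // Each event payload follows a "data: " prefix
      if (line.startsWith('data: ')) {
        events.push(line.slice('data: '.length));
      }
    }
  }
  return events;
}
```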

Available Models

Credits are deducted per request based on the model used. Credits reset daily.

| Model ID | Name | Credits | Notes |
|---|---|---|---|
| gemini-flash-2.5 | Gemini 2.5 Flash | 1 | Fast, free tier |
| gemini-2.5-pro | Gemini 2.5 Pro | 3 | 1M context |
| gpt-4o-mini | GPT-4o Mini | 1 | Good balance |
| gpt-4.1-nano | GPT-4.1 Nano | 1 | Lightweight |
| llama-4-scout | Llama 4 Scout | 1 | Open source |
| llama-3.3-70b | Llama 3.3 70B | 1 | Open source |
| claude-haiku-4.5 | Haiku 4.5 | 10 | Fast Claude |
| claude-sonnet-4.6 | Sonnet 4.6 | 10 | High quality |
| claude-opus-4.6 | Opus 4.6 | 10 | Most capable |
| gpt-4.1 | GPT-4.1 | 5 | High quality |
| deepseek-r1 | DeepSeek R1 | 5 | Reasoning model |
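Since credits are deducted per request, you can estimate a batch's cost client-side before sending it. A sketch, with credit values copied from the table above; the map and function names are illustrative, not part of the API:

```javascript
// Credits per request, copied from the model table (illustrative helper)
const MODEL_CREDITS = {
  'gemini-flash-2.5': 1,
  'gemini-2.5-pro': 3,
  'gpt-4o-mini': 1,
  'gpt-4.1-nano': 1,
  'llama-4-scout': 1,
  'llama-3.3-70b': 1,
  'claude-haiku-4.5': 10,
  'claude-sonnet-4.6': 10,
  'claude-opus-4.6': 10,
  'gpt-4.1': 5,
  'deepseek-r1': 5,
};

// Estimate how many credits a batch of requests will consume
function estimateCredits(modelId, requestCount) {
  const perRequest = MODEL_CREDITS[modelId];
  if (perRequest === undefined) throw new Error(`Unknown model: ${modelId}`);
  return perRequest * requestCount;
}
```

For example, three requests to `claude-sonnet-4.6` cost 30 credits, while the same three requests to `gemini-flash-2.5` cost 3.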

Error Codes

| Status | Meaning |
|---|---|
| 401 | Invalid or missing API key |
| 400 | Invalid request body |
| 429 | Daily credit limit reached |
| 500 | Internal server error |
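Because the chat endpoint streams its response, check the status code before reading the body. A minimal sketch mapping statuses to the meanings in the table above; the helper name and the retry hint are illustrative, not part of the API:

```javascript
// Illustrative helper: map an HTTP status from the API to the meanings
// in the error table. "retryable" is a hint, not an API guarantee:
// 429 clears when credits reset daily; 500 may succeed on a later attempt.
function describeApiError(status) {
  switch (status) {
    case 400: return { meaning: 'Invalid request body', retryable: false };
    case 401: return { meaning: 'Invalid or missing API key', retryable: false };
    case 429: return { meaning: 'Daily credit limit reached', retryable: true };
    case 500: return { meaning: 'Internal server error', retryable: true };
    default:  return { meaning: `Unexpected status ${status}`, retryable: false };
  }
}
```

For example, before calling `response.body.getReader()`, you might check `if (!response.ok) throw new Error(describeApiError(response.status).meaning)`.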

Need help? Open an issue on GitHub.