API Documentation

Free OpenAI-compatible API powered by Claude models. Full reference for chat, images, streaming, and thinking modes.

Base URL: https://ai.drafterplus.nl/api/v1
Authentication: Bearer token (API key) in the Authorization header.
Format: OpenAI-compatible - works as a drop-in replacement for OpenAI SDK.

Authentication

Include your API key in every request as a Bearer token:

Authorization: Bearer YOUR_API_KEY

Get your free API key from the Dashboard under API Keys. No credit card required.

Tip: Never expose your API key in client-side code. Always make API calls from your backend server.

Models Reference

DrafterPlus AI gives you access to Anthropic's Claude models and our own image generation model, all through a single API key.

Chat Models

| Model ID | Description | Speed | Plan |
|---|---|---|---|
| claude-haiku-4-5 | Fast, efficient. Great for simple tasks, classification, quick answers | Fastest | Free |
| claude-sonnet-4-6 | Balanced intelligence and speed. Best for most applications | Medium | Pro |
| claude-opus-4-6 | Most capable. Complex reasoning, analysis, creative writing | Slower | Plus |

Model Aliases

You can also use full version identifiers:

| Alias | Full Model ID |
|---|---|
| claude-haiku-4-5 | claude-haiku-4-5-20250506 |
| claude-sonnet-4-5 | claude-sonnet-4-5-20250514 |
| claude-sonnet-4-6 | claude-sonnet-4-6-20250610 |
| claude-opus-4-5 | claude-opus-4-5-20250514 |
| claude-opus-4-6 | claude-opus-4-6-20250610 |

Image Model

| Model ID | Description |
|---|---|
| nano-banano-pro | Image generation model by Google. Used for all image generation requests. |

Note: Nano Banano Pro is a single model from Google, not three separate models. It handles all image generation requests on DrafterPlus AI.

Chat Completions

POST /api/v1/chat/completions

Send messages and receive AI responses. Fully compatible with the OpenAI chat completions format.

Request Body

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model ID (see Models Reference above) |
| messages | array | Yes | Array of message objects with role and content |
| max_tokens | integer | No | Maximum tokens to generate (default: 1024) |
| temperature | float | No | Randomness, 0-1 (default: 0.7). Lower = more focused |
| stream | boolean | No | Enable streaming response (default: false) |
| thinking | string | No | Thinking mode: "medium", "high", or "max" (see Thinking Modes) |
| top_p | float | No | Nucleus sampling, 0-1 (default: 1) |
| stop | string/array | No | Stop sequence(s) to end generation |

Message Roles

Each message object has a role and a content string. Supported roles: system (instructions that steer the model's behavior), user (your input), and assistant (prior model replies, used to carry multi-turn context).

Basic Request

{
  "model": "claude-haiku-4-5",
  "messages": [
    { "role": "system", "content": "You are a helpful coding assistant." },
    { "role": "user", "content": "Write a Python function to reverse a string." }
  ],
  "max_tokens": 1024,
  "temperature": 0.5
}

Response

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1710100000,
  "model": "claude-haiku-4-5",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Here's a Python function to reverse a string:\n\n```python\ndef reverse_string(s):\n    return s[::-1]\n```"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 28,
    "completion_tokens": 45,
    "total_tokens": 73
  }
}

Streaming

Stream responses token-by-token using Server-Sent Events (SSE). This creates a real-time typing effect in your UI, dramatically reducing perceived latency.

Enable Streaming

Add stream: true to your request:

{
  "model": "claude-haiku-4-5",
  "messages": [{ "role": "user", "content": "Tell me a story" }],
  "stream": true
}

Stream Response Format

The API sends events line by line. Each line starts with data: followed by a JSON chunk:

data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"role":"assistant"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"Once"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":" upon"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":" a"},"finish_reason":null}]}

data: [DONE]

Handle Streaming in JavaScript

const response = await fetch('https://ai.drafterplus.nl/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'claude-haiku-4-5',
    messages: [{ role: 'user', content: 'Tell me a story' }],
    stream: true
  })
});

const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = '';
let result = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // A network chunk can end mid-line, so buffer and only parse complete lines
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split('\n');
  buffer = lines.pop(); // keep the (possibly incomplete) last line for next read

  for (const line of lines) {
    if (line.startsWith('data: ') && line !== 'data: [DONE]') {
      const data = JSON.parse(line.slice(6));
      const content = data.choices[0]?.delta?.content;
      if (content) {
        result += content;
        // Update your UI here
        document.getElementById('output').textContent = result;
      }
    }
  }
}

Handle Streaming in Python

import requests
import json

response = requests.post(
    'https://ai.drafterplus.nl/api/v1/chat/completions',
    headers={'Authorization': 'Bearer YOUR_API_KEY'},
    json={
        'model': 'claude-haiku-4-5',
        'messages': [{'role': 'user', 'content': 'Tell me a story'}],
        'stream': True
    },
    stream=True
)

for line in response.iter_lines():
    if line:
        line = line.decode('utf-8')
        if line.startswith('data: ') and line != 'data: [DONE]':
            data = json.loads(line[6:])
            content = data['choices'][0].get('delta', {}).get('content', '')
            if content:
                print(content, end='', flush=True)

When to Use Streaming

Use streaming for chat interfaces and long responses, where showing tokens as they arrive dramatically reduces perceived latency. Skip it when you need the complete response before acting on it, for example when parsing structured output.

Thinking Modes

Thinking modes let the model "think" before responding, producing better results for complex tasks like math, logic, code analysis, and multi-step reasoning. The model's thinking process is hidden from the output but improves answer quality.

Available Thinking Levels

| Mode | Budget | Best For | Speed Impact |
|---|---|---|---|
| medium | ~2,000 thinking tokens | Quick reasoning tasks, simple math, basic logic | Slight slowdown |
| high | ~8,000 thinking tokens | Complex analysis, code review, detailed explanations | Moderate slowdown |
| max | ~32,000 thinking tokens | Research, complex coding, multi-step problems | Significant slowdown |
Note: Thinking tokens do not count toward your usage quota. They improve quality without extra cost.

Usage

{
  "model": "claude-sonnet-4-6",
  "messages": [
    {
      "role": "user",
      "content": "Solve this step by step: If a train travels at 60mph for 2.5 hours, then at 80mph for 1.5 hours, what is the total distance and average speed?"
    }
  ],
  "thinking": "high",
  "max_tokens": 2048
}

Thinking + Streaming

Thinking works with streaming. The model will think first (during which you'll see no output), then stream the final answer. Combine both for the best user experience:

{
  "model": "claude-sonnet-4-6",
  "messages": [{ "role": "user", "content": "Analyze this algorithm's time complexity..." }],
  "thinking": "medium",
  "stream": true
}

When to Use Thinking

Use thinking for math, logic, code analysis, and multi-step reasoning, where the extra thinking budget measurably improves answers. Skip it for simple questions and quick classification tasks, where it only slows the response.

Image Generation

POST /api/v1/images/generations

Generate images from text descriptions using the Nano Banano Pro model.

Request Body

| Parameter | Type | Required | Description |
|---|---|---|---|
| prompt | string | Yes | Text description of the image to generate |
| n | integer | No | Number of images (default: 1, max: 4) |
| size | string | No | "256x256", "512x512", or "1024x1024" (default: "1024x1024") |

Example Request

{
  "prompt": "A minimalist logo for a tech startup, black and white, clean lines",
  "size": "1024x1024"
}

Response

{
  "created": 1710100000,
  "data": [
    {
      "url": "https://ai.drafterplus.nl/images/generated/abc123.png"
    }
  ]
}
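The same request in Python, sketched with requests; `generate_image` and `first_url` are illustrative helper names, not part of the API:

```python
import requests

BASE_URL = "https://ai.drafterplus.nl/api/v1"

def first_url(payload):
    """Pure helper: pull the first image URL out of a generations response."""
    return payload["data"][0]["url"]

def generate_image(api_key, prompt, size="1024x1024"):
    """POST /images/generations and return the hosted image URL."""
    resp = requests.post(
        f"{BASE_URL}/images/generations",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt, "size": size},
    )
    resp.raise_for_status()
    return first_url(resp.json())

# Usage (requires a valid key):
# url = generate_image("YOUR_API_KEY", "A minimalist logo for a tech startup")
# with open("logo.png", "wb") as f:
#     f.write(requests.get(url).content)  # download the generated PNG
```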

Usage API

GET /api/v1/usage

Check your current daily usage and remaining credits. Useful for building dashboards or managing rate limits in your application.

Response

{
  "plan": "free",
  "today": {
    "chat": 5,
    "imageGen": 1,
    "imageEdit": 0,
    "apiCalls": 6,
    "tokens": 15420
  },
  "limits": {
    "chat": 20,
    "imageGen": 3
  },
  "remaining": {
    "chat": 15,
    "imageGen": 2
  }
}
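A small sketch of reading this endpoint from Python, using the response shape above; `fetch_usage` and `remaining_chat` are illustrative names:

```python
import requests

BASE_URL = "https://ai.drafterplus.nl/api/v1"

def fetch_usage(api_key):
    """GET /usage with the API key as a Bearer token."""
    resp = requests.get(
        f"{BASE_URL}/usage",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    resp.raise_for_status()
    return resp.json()

def remaining_chat(usage):
    """Pure helper: chat requests left today, per the `remaining` field."""
    return usage["remaining"]["chat"]

# Usage (requires a valid key):
# usage = fetch_usage("YOUR_API_KEY")
# print(f"{remaining_chat(usage)} chat requests left on the {usage['plan']} plan")
```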

Rate Limits

| Plan | Chat/day | Images/day | Price |
|---|---|---|---|
| Free | 20 requests | 3 images | Free forever |
| Pro | 100 requests | 25 images | 7.49 EUR/week |
| Plus | 500 requests | 50 images | 14.99 EUR/week |

Rate limits reset daily at midnight UTC. Exceeded limits return HTTP 429 with a Retry-After header.

Error Codes

| Code | Meaning | What to Do |
|---|---|---|
| 400 | Bad Request | Check your request body format and parameters |
| 401 | Unauthorized | Invalid or missing API key. Check your Authorization header |
| 403 | Forbidden | Account blocked or model not available on your plan |
| 429 | Rate Limited | Daily quota exceeded. Wait for reset or upgrade your plan |
| 500 | Server Error | Temporary issue. Retry with exponential backoff |
| 503 | Unavailable | Service temporarily down. Check /status |

Error Response Format

{
  "error": {
    "message": "Rate limit exceeded. Daily quota: 20/20 used.",
    "type": "rate_limit_error",
    "code": 429
  }
}
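Per the table above, 429, 500, and 503 are retryable. A minimal sketch of a retry wrapper that honors the Retry-After header sent with 429s and falls back to exponential backoff; `backoff_delay` and `post_with_retry` are hypothetical helper names, not part of the API:

```python
import time
import requests

def backoff_delay(attempt, retry_after=None):
    """Seconds to wait: the server's Retry-After hint if given, else 1s, 2s, 4s..."""
    return float(retry_after) if retry_after is not None else float(2 ** attempt)

def post_with_retry(url, *, headers, json, max_retries=4):
    """POST, retrying retryable status codes with backoff."""
    for attempt in range(max_retries):
        resp = requests.post(url, headers=headers, json=json)
        if resp.status_code not in (429, 500, 503):
            return resp
        time.sleep(backoff_delay(attempt, resp.headers.get("Retry-After")))
    return requests.post(url, headers=headers, json=json)  # final attempt
```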

Code Examples

cURL

curl -X POST https://ai.drafterplus.nl/api/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-haiku-4-5",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 256
  }'

Node.js (fetch)

const response = await fetch('https://ai.drafterplus.nl/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'claude-haiku-4-5',
    messages: [{ role: 'user', content: 'Hello!' }],
    max_tokens: 256
  })
});

const data = await response.json();
console.log(data.choices[0].message.content);

Node.js (OpenAI SDK)

Use the official OpenAI Node.js package with DrafterPlus AI as a drop-in replacement:

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://ai.drafterplus.nl/api/v1'
});

const response = await client.chat.completions.create({
  model: 'claude-haiku-4-5',
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log(response.choices[0].message.content);

Python (requests)

import requests

response = requests.post(
    'https://ai.drafterplus.nl/api/v1/chat/completions',
    headers={'Authorization': 'Bearer YOUR_API_KEY'},
    json={
        'model': 'claude-haiku-4-5',
        'messages': [{'role': 'user', 'content': 'Hello!'}],
        'max_tokens': 256
    }
)

data = response.json()
print(data['choices'][0]['message']['content'])

Python (OpenAI SDK)

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://ai.drafterplus.nl/api/v1"
)

response = client.chat.completions.create(
    model="claude-haiku-4-5",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)

Python (Streaming)

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://ai.drafterplus.nl/api/v1"
)

stream = client.chat.completions.create(
    model="claude-sonnet-4-6",
    messages=[{"role": "user", "content": "Write a poem about the ocean"}],
    stream=True
)

for chunk in stream:
    content = chunk.choices[0].delta.content
    if content:
        print(content, end="", flush=True)

Express.js Middleware

const express = require('express');
const app = express();
app.use(express.json());

const API_KEY = process.env.DRAFTERPLUS_API_KEY;

app.post('/api/chat', async (req, res) => {
  const { message } = req.body;
  
  const response = await fetch('https://ai.drafterplus.nl/api/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'claude-haiku-4-5',
      messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: message }
      ]
    })
  });
  
  const data = await response.json();
  res.json({ reply: data.choices[0].message.content });
});

app.listen(3000);

OAuth2 Applications

Build third-party applications that access DrafterPlus AI on behalf of users. OAuth2 lets users authorize your app without sharing their API key.

OAuth2 Flow

  1. Register your application in the Dashboard
  2. Redirect users to /oauth/authorize?client_id=YOUR_ID&redirect_uri=YOUR_URL&response_type=code
  3. User authorizes your app on our consent page
  4. We redirect back with a code: YOUR_URL?code=AUTHORIZATION_CODE
  5. Exchange the code for an access token via POST /oauth/token
  6. Use the access token as a Bearer token in API requests

Exchange Code for Token

POST /oauth/token
{
  "grant_type": "authorization_code",
  "code": "AUTHORIZATION_CODE",
  "client_id": "YOUR_CLIENT_ID",
  "client_secret": "YOUR_CLIENT_SECRET",
  "redirect_uri": "YOUR_REDIRECT_URI"
}
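Step 5 in Python, assuming the token endpoint accepts the JSON body shown above and lives on the same host; `exchange_code` is an illustrative helper, not part of an SDK:

```python
import requests

def exchange_code(code, client_id, client_secret, redirect_uri):
    """Swap the authorization code for an access token via POST /oauth/token."""
    resp = requests.post(
        "https://ai.drafterplus.nl/oauth/token",
        json={
            "grant_type": "authorization_code",
            "code": code,
            "client_id": client_id,
            "client_secret": client_secret,
            "redirect_uri": redirect_uri,
        },
    )
    resp.raise_for_status()
    return resp.json()  # expected to contain the access token

# Usage (inside your redirect handler, with real credentials):
# token = exchange_code(request_code, CLIENT_ID, CLIENT_SECRET, REDIRECT_URI)
```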

Read the full OAuth2 Guide on our blog for a complete walkthrough with code examples.

SDK & Libraries

Since DrafterPlus AI is OpenAI-compatible, you can use the official OpenAI SDKs for Node.js and Python directly: just point the base URL at https://ai.drafterplus.nl/api/v1 (see the Code Examples above).

No custom SDK needed. If it works with OpenAI, it works with DrafterPlus AI.

Need help? Start with the getting started guide, check the blog for tutorials, or reach out through the support page.

Background Removal

Remove backgrounds from images using AI. Returns a transparent PNG.

POST /api/v1/image/remove-bg

Request

Send a multipart/form-data request with the image file.

| Field | Type | Required | Description |
|---|---|---|---|
| image | file | Yes | The image file (PNG, JPG, WebP). Max 10MB. |

Headers

Authorization: Bearer YOUR_API_KEY

Example (cURL)

curl -X POST https://ai.drafterplus.nl/api/v1/image/remove-bg \
  -H "Authorization: Bearer dp_your_api_key" \
  -F "image=@photo.jpg"

Example (Node.js)

const fs = require('fs');
const FormData = require('form-data');

const form = new FormData();
form.append('image', fs.createReadStream('photo.jpg'));

const response = await fetch('https://ai.drafterplus.nl/api/v1/image/remove-bg', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer dp_your_api_key',
    ...form.getHeaders()
  },
  body: form
});

const buffer = await response.arrayBuffer();
fs.writeFileSync('output.png', Buffer.from(buffer));

Example (Python)

import requests

response = requests.post(
    'https://ai.drafterplus.nl/api/v1/image/remove-bg',
    headers={'Authorization': 'Bearer dp_your_api_key'},
    files={'image': open('photo.jpg', 'rb')}
)

with open('output.png', 'wb') as f:
    f.write(response.content)

Response

Returns the image as an image/png binary with a transparent background.

Rate Limits

| Plan | Background removals/day |
|---|---|
| Free | 5 |
| Pro | 25 |
| Plus | 100 |