Private alpha documentation

OctynX documentation

Read-only product documentation for the current OctynX alpha. This page covers only currently implemented behavior.

Overview

OctynX is a European-oriented control layer for governed AI model access. In this alpha, it provides one controlled place to send AI requests, inspect safe request metadata, and use an approved model/provider catalog.

Quickstart

Use short prompts and low token limits during alpha evaluation.

curl -sS -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "X-API-Key: <client-api-key>" \
  -d '{
    "model": "mistral-small-3.2-24b-instruct-2506",
    "target": "scaleway",
    "messages": [{"role": "user", "content": "<short prompt>"}],
    "max_tokens": 120
  }'
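The same Quickstart call can be assembled from Python. The sketch below only builds the headers and body; the endpoint, header name, and payload fields are taken from the curl example above, and the actual POST is left as a comment so no network access is assumed:

```python
API_KEY = "<client-api-key>"  # placeholder, as in the curl example

def build_chat_request(prompt, max_tokens=120):
    """Assemble headers and body matching the Quickstart curl call."""
    headers = {
        "Content-Type": "application/json",
        "X-API-Key": API_KEY,
    }
    body = {
        "model": "mistral-small-3.2-24b-instruct-2506",
        "target": "scaleway",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return headers, body

headers, body = build_chat_request("<short prompt>")
# POST these to http://localhost:8000/v1/chat/completions with any HTTP client.
```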

Authentication
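The Quickstart example authenticates each request with an X-API-Key header. A minimal sketch of keeping the key out of source code; the OCTYNX_API_KEY environment variable name is a hypothetical choice for illustration, only the header name comes from this page:

```python
import os

def auth_headers():
    # OCTYNX_API_KEY is a hypothetical variable name for illustration;
    # the X-API-Key header name comes from the Quickstart example above.
    key = os.environ.get("OCTYNX_API_KEY", "<client-api-key>")
    return {"Content-Type": "application/json", "X-API-Key": key}
```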

Chat completions
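The /v1/chat/completions path suggests an OpenAI-compatible response shape; that shape is an assumption here, not confirmed by this page. Under that assumption, pulling the first reply out of a response looks like:

```python
def first_reply(response):
    # Assumes an OpenAI-style shape: {"choices": [{"message": {"content": ...}}]}.
    # This shape is an assumption; this page does not document the response body.
    return response["choices"][0]["message"]["content"]

sample = {"choices": [{"message": {"role": "assistant", "content": "Hello."}}]}
```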

Approved model catalog
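A client can mirror the approved catalog locally to fail fast before sending a request. The sketch below is hypothetical; the only model/target pair taken from this page is the one in the Quickstart example:

```python
# Hypothetical local mirror of the approved catalog; only this one
# model/target pair appears on this page (in the Quickstart example).
APPROVED = {("mistral-small-3.2-24b-instruct-2506", "scaleway")}

def is_approved(model, target):
    """Check a model/target pair against the local catalog mirror."""
    return (model, target) in APPROVED
```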

Guardrails and abuse controls
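Server-side guardrail behavior is not documented on this page. As a client-side complement to the "short prompts and low token limits" advice in the Quickstart, a request body could be clamped before sending; the cap of 120 below mirrors the Quickstart example and is not a documented server limit:

```python
ALPHA_MAX_TOKENS = 120  # mirrors the Quickstart example; not a documented server limit

def clamp_request(body):
    """Cap max_tokens client-side before sending, without mutating the input."""
    body = dict(body)
    body["max_tokens"] = min(body.get("max_tokens", ALPHA_MAX_TOKENS), ALPHA_MAX_TOKENS)
    return body
```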

Dashboard visibility

Request visibility is metadata-only: the dashboard/request view shows safe request metadata and nothing else.

It does not show prompt bodies, response bodies, API keys, raw provider errors, or internal admin tokens.
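The metadata-only rule above can be sketched as a redaction step. The field names below are hypothetical; only the excluded categories (prompt bodies, response bodies, API keys, raw provider errors, admin tokens) come from this page:

```python
# Hypothetical field names; the exclusion list itself is from this page.
SENSITIVE = {"prompt", "response", "api_key", "provider_error", "admin_token"}

def to_dashboard_view(record):
    """Keep only metadata fields for the dashboard/request view."""
    return {k: v for k, v in record.items() if k not in SENSITIVE}
```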

Demo notes

Known alpha limitations

The current alpha does not include: