European-oriented private alpha

Control layer for governed AI model access.

OctynX gives technical teams one controlled place to send AI requests, see what happened, and manage approved model/provider access before usage becomes fragmented.

Protected access

Client API keys protect the OpenAI-compatible chat-completions endpoint for controlled alpha usage.

Safe request visibility

Inspect request status, latency, model, provider, and token metadata without exposing prompts, responses, or keys.
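The kind of per-request record this implies can be sketched as follows. The field names and values here are illustrative, not OctynX's actual schema; the point is that only operational metadata is retained, never prompt text, response text, or keys.

```python
# Illustrative dashboard record for one request (hypothetical field names).
request_view = {
    "id": "req_abc123",
    "status": "success",          # e.g. success / rejected / provider_error
    "latency_ms": 412,
    "model": "mistral-small-3.2-24b-instruct-2506",
    "provider": "scaleway",
    "total_tokens": 24,
}

# Nothing sensitive appears among the stored keys.
sensitive = {"prompt", "response", "api_key"}
assert sensitive.isdisjoint(request_view)
```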

Approved catalog

Start with one validated Scaleway model and explicit model selection, not marketplace discovery or automatic routing.


Usage guardrails

Max-token, prompt-size, and request-body caps are enforced before provider calls.
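A pre-call check of this shape could look like the sketch below. The cap values and error category names are assumptions for illustration, not OctynX's actual limits.

```python
# Hypothetical caps; OctynX's real limits are not published here.
MAX_TOKENS_CAP = 1024        # assumed max_tokens ceiling
MAX_PROMPT_CHARS = 8_000     # assumed combined prompt-size cap
MAX_BODY_BYTES = 32_000      # assumed raw request-body cap

def check_guardrails(body, raw_size):
    """Return an error category string, or None if the request may proceed.

    Runs before any provider call, so a rejected request never leaves
    the control layer.
    """
    if raw_size > MAX_BODY_BYTES:
        return "request_too_large"
    if body.get("max_tokens", 0) > MAX_TOKENS_CAP:
        return "max_tokens_exceeded"
    prompt_chars = sum(len(m.get("content", "")) for m in body.get("messages", []))
    if prompt_chars > MAX_PROMPT_CHARS:
        return "prompt_too_large"
    return None
```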

Safer public errors

Client-facing failures stay short and generic while operational detail remains internal.
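One common way to implement this split is a fixed map from internal failure categories to short public messages, with full detail going only to internal logs. The category names and messages below are illustrative, not OctynX's actual error taxonomy.

```python
import logging

# Hypothetical category-to-message map; clients only ever see the values.
PUBLIC_ERRORS = {
    "auth": "Unauthorized.",
    "guardrail": "Request rejected by usage limits.",
    "provider": "Upstream request failed.",
}

def public_error(category, internal_detail):
    """Log full detail internally; return only a short generic client body."""
    logging.error("category=%s detail=%s", category, internal_detail)
    return {"error": PUBLIC_ERRORS.get(category, "Request failed.")}
```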

Demo-ready flow

Show blocked unauthenticated calls, approved model access, dashboard visibility, and guardrail rejection.

Protected API contract preview

POST /v1/chat/completions HTTP/1.1
Host: octynx.com
Content-Type: application/json
X-API-Key: <client-api-key>

{
  "model": "mistral-small-3.2-24b-instruct-2506",
  "messages": [{"role": "user", "content": "What is OctynX?"}],
  "max_tokens": 100
}

HTTP/1.1 200 OK
Content-Type: application/json

{
  "id": "req_abc123",
  "choices": [{
    "message": {"role": "assistant", "content": "OctynX is a control layer for governed AI model access."},
    "finish_reason": "stop"
  }],
  "usage": {"total_tokens": 24}
}
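A minimal client for this contract can be built with only the Python standard library. The endpoint path, header name, model name, and payload fields are taken from the preview above; the API key is a placeholder, and the request is constructed but not sent here.

```python
import json
import urllib.request

payload = {
    "model": "mistral-small-3.2-24b-instruct-2506",
    "messages": [{"role": "user", "content": "What is OctynX?"}],
    "max_tokens": 100,
}
req = urllib.request.Request(
    "https://octynx.com/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json", "X-API-Key": "<client-api-key>"},
    method="POST",
)
# urllib.request.urlopen(req) would perform the call; the reply text is
# at resp["choices"][0]["message"]["content"] in the decoded JSON body.
```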

How the alpha works

1. Protect access: Issue client API keys and keep provider credentials behind the OctynX layer.

2. Approve models: Expose only the alpha catalog your team has approved, starting with Scaleway.

3. Review usage: Use dashboard metadata to understand request flow without leaking sensitive prompt or response content.

Alpha security posture

  • Client API key protection for chat completions
  • Dashboard authentication for operational visibility
  • No prompt bodies, response bodies, API keys, or raw provider errors in public views
  • Short generic public error categories for safer demos
  • Pre-public hardening still requires rate limiting and deployment security review

Current alpha scope

  • OpenAI-style /v1/chat/completions endpoint
  • Approved Scaleway catalog with explicit model selection
  • SQLite-backed request and lead metadata logging
  • Health/readiness endpoints for local operation
  • No advanced routing, marketplace, billing, or enterprise policy engine yet
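The SQLite-backed metadata logging in the scope above could be sketched like this, using an in-memory database. The table and column names are illustrative, not OctynX's actual schema; note that only operational metadata is stored, never prompt or response text.

```python
import sqlite3

# In-memory database stands in for the alpha's on-disk SQLite file.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE requests (
           id TEXT PRIMARY KEY,
           model TEXT,
           provider TEXT,
           status TEXT,
           latency_ms INTEGER,
           total_tokens INTEGER
       )"""
)
conn.execute(
    "INSERT INTO requests VALUES (?, ?, ?, ?, ?, ?)",
    ("req_abc123", "mistral-small-3.2-24b-instruct-2506", "scaleway",
     "success", 412, 24),
)
rows = conn.execute("SELECT status, total_tokens FROM requests").fetchall()
```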

Request access to the private alpha

OctynX is being prepared with design partners who need governed AI access, practical visibility, and a disciplined path toward AI governance. Tell us what you want to control first.