
Overview

Here is a high-level overview of the entire process, from a user’s query to the final rendered UI:
  1. User enters a prompt on the UI
  2. UI sends the prompt to your backend API
  3. Backend calls the C1 API with the prompt, conversation history, and system instructions
  4. C1 API returns a UI specification object (the C1 DSL), generated by the underlying LLM based on the model selected in the call
  5. Backend relays the C1 Response to the UI
  6. UI renders the C1 Response using <C1Component />
Let’s see how integration with C1 works.

The Backend API Call

The core integration pattern involves your backend acting as an intermediary between your UI client and the C1 API. This lets you add business logic, prepare data before calling C1, and keep your API keys secure. The C1 API is OpenAI-compatible, so you can use the official openai client library; if you already have openai integrated, the only change required is to configure the client with your Thesys API key and the C1 baseURL. Before making the call to C1, your server can enrich the user’s prompt with additional context, such as conversation history, system instructions, or data from your database.
main.py
from openai import OpenAI
import os

# Create OpenAI client with Thesys endpoint
client = OpenAI(
    api_key=os.getenv('THESYS_API_KEY'),
    base_url='https://api.thesys.dev/v1/embed'
)

# Now use the client for your AI requests
response = client.chat.completions.create(
    model='<model-name>',
    messages=[
        # Prepend system instructions and conversation history here
        {'role': 'user', 'content': 'Hello, world!'}
    ]
)
The response from the C1 API is a structured UI specification (C1 DSL) that is then passed back to your UI.
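In practice you will usually stream the C1 Response to the client as it is generated. The sketch below shows a minimal relay endpoint; it assumes a FastAPI backend and reuses the client created above, and the /chat route and ChatRequest shape are illustrative rather than part of the C1 API.
main.py (continued)
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    prompt: str

@app.post('/chat')
def chat(req: ChatRequest):
    # Ask C1 for a streamed response (reusing the client created above)
    stream = client.chat.completions.create(
        model='<model-name>',
        messages=[{'role': 'user', 'content': req.prompt}],
        stream=True,
    )

    # Forward each text chunk to the UI as it arrives
    def generate():
        for chunk in stream:
            delta = chunk.choices[0].delta.content
            if delta:
                yield delta

    return StreamingResponse(generate(), media_type='text/plain')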

UI Rendering

The C1 React SDK provides <C1Component> to handle the rendering. It is responsible for taking the response returned from C1 and rendering it as interactive React components. If you are building a chat interface, you can use <C1Chat> instead, which provides a complete chat experience out of the box, including chat history and user thread management, drastically reducing development time.
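Below is a minimal rendering sketch in TSX. The package name (@thesysai/genui-sdk) and the c1Response prop follow the React SDK’s published examples; treat them as assumptions and check the SDK reference for your version.
ChatResponse.tsx
// A minimal sketch — component and prop names (C1Component, c1Response)
// are assumed from the React SDK docs.
import { C1Component, ThemeProvider } from "@thesysai/genui-sdk";
import "@crayonai/react-ui/styles/index.css";

export function ChatResponse({ c1Response }: { c1Response: string }) {
  return (
    // ThemeProvider supplies the default styling for the generated UI
    <ThemeProvider>
      <C1Component c1Response={c1Response} />
    </ThemeProvider>
  );
}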

C1 Response

Unlike standard LLM responses, which are plain text or markdown, a C1 Response is a string containing a structured payload of rich content. It uses an XML-like structure to package multiple types of content into a single string, and it is typically stored as the assistant message content in your database.
<!-- A C1 Response can contain multiple content types -->
<thinking>...</thinking><content>...</content><artifact>...</artifact>
The different tags represent different parts of the response:
  • <thinking>: Data for displaying real-time thinking-state indicators.
  • <content>: The primary Generative UI response.
  • <artifact>: Document-style content, such as Slides or Reports.
C1 provides backend helpers for creating and streaming this response payload. For a detailed guide on this topic, refer to the C1 response documentation. The C1 Response can be streamed to the UI and rendered using the <C1Component> or <C1Chat> component.
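As a sketch of the streaming path on the client, the hook below accumulates chunks from the /chat endpoint sketched earlier so that <C1Component> can re-render progressively; the hook name and endpoint are illustrative, and it assumes the component tolerates a partially streamed payload.
useC1Stream.ts
// A hypothetical hook that accumulates a streamed C1 Response.
import { useState } from "react";

export function useC1Stream() {
  const [c1Response, setC1Response] = useState("");

  const send = async (prompt: string) => {
    setC1Response("");
    const res = await fetch("/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt }),
    });
    const reader = res.body!.getReader();
    const decoder = new TextDecoder();
    // Append each chunk; pass c1Response to <C1Component> to render
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      setC1Response((prev) => prev + decoder.decode(value, { stream: true }));
    }
  };

  return { c1Response, send };
}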

Selecting a model

The C1 API supports multiple models, and you can select the one that fits your use case. Currently supported models are:
  1. Anthropic - Claude Sonnet 4
  2. OpenAI - GPT-5
To see the full list of supported models, please refer to the Models page.

Summary

C1 gives you full control over the end-to-end experience. Your backend orchestrates the call, adding context and business logic, while your UI fetches the resulting C1 Response and uses the C1 SDK to render the final interactive UI.