Cedar provides a flexible system for creating custom agent flows that can work with different backend providers. This guide explains how Cedar prepares requests, routes calls, and handles responses from your agent backend.

Preparing the Request Body

Before making any call to your agent backend, Cedar automatically assembles all the necessary context and data. This preparation process ensures your agent receives comprehensive information to generate meaningful responses.
1

User prompt extraction

Cedar extracts the current user input from the chat editor, converting the rich text content into a clean string format.
// Cedar automatically calls stringifyEditor() to extract text
const editorContent = state.stringifyEditor();
// Result: "What's the status of the user authentication feature?"
2

State and context gathering

Cedar collects all subscribed state and additional context that has been registered with the agent input system.
// Cedar automatically gathers additional context
const additionalContext = state.stringifyAdditionalContext();
// Result includes: subscribed state, custom setters, mention data, etc.
The agentInputContextSlice manages all context gathering. See Agent Input Context for details on how to subscribe state and provide context to your agents.
3

Context tokenization

Cedar serializes the collected context into a structured format that your agent backend can parse and understand.
// Example of tokenized additional context
{
  "todos": [
    { "id": "1", "title": "Fix login bug", "completed": false }
  ],
  "setters": {
    "addTodo": {
      "name": "addTodo",
      "stateKey": "todos",
      "description": "Add a new todo item",
      "parameters": ["title: string"]
    }
  }
}
4

Network Request Body Structure

The network request body varies significantly between providers, allowing Cedar to work with different backend architectures while maintaining a consistent internal API. For Mastra and custom backends, Cedar passes additionalContext, state, and other data as separate fields, giving you control over how you use the context and state. For direct LLM calls, Cedar automatically tokenizes the additionalContext and prompt into a single string for optimal responses.
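As a rough sketch of what the direct-LLM path does, the snippet below combines a prompt and context object into one string. The function name and exact formatting are illustrative, not Cedar's actual implementation:

```typescript
// Illustrative only: combine the user prompt and additional context into a
// single string, roughly what Cedar does for direct LLM calls. The exact
// format Cedar produces may differ.
function tokenizeForDirectLLM(
	prompt: string,
	additionalContext: Record<string, unknown>
): string {
	const contextBlock = JSON.stringify(additionalContext, null, 2);
	return `Context:\n${contextBlock}\n\nUser: ${prompt}`;
}

const combined = tokenizeForDirectLLM(
	"What's the status of the user authentication feature?",
	{ todos: [{ id: '1', title: 'Fix login bug', completed: false }] }
);
```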

Request Body Differences

{
  "prompt": "What's the status of the user authentication feature?",
  "systemPrompt": "You are a helpful project manager...",
  "additionalContext": {
    "todos": [...],
    "setters": {...}
  },
  "states": {
    "emails": [...]
  },
  "route": "/chat",
  "resourceId": "user123",
  "threadId": "thread456"
}
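To make the shape above concrete, here is a sketch of a TypeScript interface for the Mastra/custom request body, along with a small runtime check a custom backend might perform. Field names mirror the example above; which fields are optional is an assumption:

```typescript
// Sketch of the request body a Mastra or custom backend receives.
// Field names follow the example above; optionality is an assumption.
interface AgentRequestBody {
	prompt: string;
	systemPrompt?: string;
	additionalContext?: Record<string, unknown>;
	states?: Record<string, unknown>;
	route?: string;
	resourceId?: string;
	threadId?: string;
}

// Minimal runtime validation a backend handler might run before processing.
function isAgentRequestBody(body: unknown): body is AgentRequestBody {
	return (
		typeof body === 'object' &&
		body !== null &&
		typeof (body as AgentRequestBody).prompt === 'string'
	);
}
```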

Creating Your Own AI Workflow

Beyond using Cedar’s built-in agent connections, you can create custom AI workflows that leverage Cedar’s context system while implementing your own processing logic.
1

Define your prompt

Create a function that defines a specific prompt for your workflow. This allows you to customize the AI’s behavior for your specific use case.
function createSummaryWorkflow() {
	const systemPrompt = `You are a project summary assistant. 
  Analyze the provided todos and create a concise project status report. 
  Focus on completion rates and upcoming priorities.`;

	return systemPrompt;
}
2

Extract specific additional context

Use Cedar’s context system to get exactly the data your workflow needs. You can filter and transform the context to match your requirements.
function getProjectContext(state: CedarState) {
	// stringifyAdditionalContext() returns a serialized string, so parse it
	// before reading individual fields.
	const additionalContext = JSON.parse(state.stringifyAdditionalContext());

	// Extract only the data relevant to project summaries
	const projectContext = {
		todos: additionalContext.todos || [],
		deadlines: additionalContext.deadlines || [],
		teamMembers: additionalContext.teamMembers || [],
	};

	return projectContext;
}
3

Call the LLM

Make the API call to your chosen LLM provider with your custom prompt and filtered context.
async function callLLM(prompt: string, context: any) {
	const response = await fetch('https://api.openai.com/v1/chat/completions', {
		method: 'POST',
		headers: {
			Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
			'Content-Type': 'application/json',
		},
		body: JSON.stringify({
			model: 'gpt-4o-mini',
			messages: [
				{
					role: 'system',
					// Use the prompt passed in rather than rebuilding it here
					content: prompt,
				},
				{
					role: 'user',
					content: `Project Context: ${JSON.stringify(
						context,
						null,
						2
					)}\n\nGenerate a project summary.`,
				},
			],
		}),
	});

	if (!response.ok) {
		throw new Error(`LLM request failed: ${response.status}`);
	}

	return response;
}
4

Handle the LLM response

Process the raw LLM response using Cedar’s response handling system to ensure proper integration with your application.
async function handleLLMResponse(response: Response, state: CedarState) {
	const data = await response.json();
	const content = data.choices[0]?.message?.content;

	// Use Cedar's response processor
	const processor = state.agentConnection.responseProcessors.get('message');
	if (processor && content) {
		await processor(content, state);
	}

	return content;
}
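The processor lookup above follows a simple registry pattern: handlers keyed by response type. A generic, Cedar-independent sketch of that pattern (all names here are illustrative, not Cedar's actual API):

```typescript
// Generic sketch of a response-processor registry keyed by response type.
// This mirrors the lookup pattern used above but is not Cedar's actual API.
type ResponseProcessor = (content: string) => Promise<void> | void;

const responseProcessors = new Map<string, ResponseProcessor>();

responseProcessors.set('message', (content) => {
	// e.g. append the assistant message to the chat UI
	console.log('assistant:', content);
});

async function dispatchResponse(type: string, content: string) {
	const processor = responseProcessors.get(type);
	if (!processor) {
		throw new Error(`No processor registered for type "${type}"`);
	}
	await processor(content);
}
```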
5

Custom parsing and return

Add your own parsing logic to extract structured data from the LLM response and return it in the format your application needs.
function parseProjectSummary(llmResponse: string) {
	// Extract structured data from the response
	const summaryMatch = llmResponse.match(/Summary: (.*?)(?:\n|$)/);
	const prioritiesMatch = llmResponse.match(/Priorities: (.*?)(?:\n|$)/);
	const completionMatch = llmResponse.match(/Completion: (\d+)%/);

	return {
		summary: summaryMatch?.[1] || '',
		priorities: prioritiesMatch?.[1]?.split(',').map((p) => p.trim()) || [],
		completionRate: completionMatch?.[1] ? parseInt(completionMatch[1], 10) : 0,
		generatedAt: new Date().toISOString(),
	};
}

// Complete workflow function
export async function runProjectSummaryWorkflow(state: CedarState) {
	try {
		const context = getProjectContext(state);
		const response = await callLLM(createSummaryWorkflow(), context);
		const llmContent = await handleLLMResponse(response, state);
		if (!llmContent) {
			throw new Error('LLM returned an empty response');
		}
		const structuredResult = parseProjectSummary(llmContent);

		return structuredResult;
	} catch (error) {
		console.error('Project summary workflow failed:', error);
		throw error;
	}
}
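As a quick sanity check of the parsing step, here is parseProjectSummary applied to a sample response. The function body is repeated verbatim so this snippet runs on its own; the sample response text is invented for illustration:

```typescript
// Standalone check of the parsing logic above (function repeated so this
// snippet runs on its own).
function parseProjectSummary(llmResponse: string) {
	const summaryMatch = llmResponse.match(/Summary: (.*?)(?:\n|$)/);
	const prioritiesMatch = llmResponse.match(/Priorities: (.*?)(?:\n|$)/);
	const completionMatch = llmResponse.match(/Completion: (\d+)%/);

	return {
		summary: summaryMatch?.[1] || '',
		priorities: prioritiesMatch?.[1]?.split(',').map((p) => p.trim()) || [],
		completionRate: completionMatch?.[1] ? parseInt(completionMatch[1], 10) : 0,
		generatedAt: new Date().toISOString(),
	};
}

// Hypothetical LLM response in the format the regexes expect
const sample =
	'Summary: Auth work is on track.\n' +
	'Priorities: login bug, session refresh\n' +
	'Completion: 60%';

const result = parseProjectSummary(sample);
// result.summary → "Auth work is on track."
// result.priorities → ["login bug", "session refresh"]
// result.completionRate → 60
```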
For more advanced response processing patterns and custom response handlers, see Custom Response Processing to learn how to create sophisticated response processors that can handle streaming, actions, and complex data transformations.