It's React
Same reconciler, same hooks, same JSX. The render target is model context instead of DOM. If you know React, you know this.
React, but the render target is model context instead of DOM. Build AI applications with the tools you already know.
If you've used React, you've already learned 80% of agentick. The remaining 20% is AI-specific primitives — tools, timeline, model context — built on the same component model.
```tsx
function TodoApp() {
  const [todos, setTodos] = useState<string[]>([]);
  const [input, setInput] = useState("");
  return (
    <div>
      <h1>Todo List</h1>
      <input value={input} onChange={(e) => setInput(e.target.value)} />
      <button
        onClick={() => {
          setTodos((t) => [...t, input]);
          setInput("");
        }}
      >
        Add
      </button>
      <ul>
        {todos.map((t) => (
          <li key={t}>{t}</li>
        ))}
      </ul>
    </div>
  );
}
```

```tsx
function TodoAgent() {
  const [todos, setTodos] = useState<string[]>([]);
  return (
    <>
      <System>You are a todo manager.</System>
      <Tool
        name="add_todo"
        description="Add a todo item"
        input={z.object({ text: z.string() })}
        handler={({ text }) => {
          setTodos((t) => [...t, text]);
          return `Added: ${text}`;
        }}
      />
      <Section title="Current Todos">
        <List>
          {todos.map((t) => (
            <ListItem key={t}>{t}</ListItem>
          ))}
        </List>
      </Section>
      <Timeline />
    </>
  );
}
```

Same `useState`, same JSX, same component model. The React app renders a `<div>` for a browser. The agent renders `<System>`, `<Tool>`, and `<Section>` for an LLM's context window. When state changes, the reconciler diffs and recompiles, just like a DOM update.
```sh
npm install agentick @agentick/openai
```

```ts
import { createAgent } from "agentick";
import { openai } from "@agentick/openai";

const agent = createAgent({
  model: openai({ model: "gpt-4o" }),
  system: "You are a helpful assistant.",
  tools: [SearchTool, CalculatorTool],
});

const result = await agent.run({
  messages: [{ role: "user", content: "Hello!" }],
});
```

Five lines to a working agent. No JSX required. Want more control? Keep reading.
When you need composition, hooks, and full control over the context tree:
```tsx
import { createApp, useKnob } from "agentick";
import { OpenAIModel } from "@agentick/openai";

const app = createApp(() => {
  const [mode, setMode] = useKnob("mode", "helpful", {
    options: ["helpful", "concise", "creative"],
    description: "Response style",
  });
  return (
    <>
      <OpenAIModel model="gpt-4o" />
      <System>You are a {mode} assistant.</System>
      <SearchTool />
      <CalculatorTool />
      <Knobs />
      <Timeline />
    </>
  );
});
```

`createAgent` and full JSX are the same thing underneath: `createAgent` just wraps `<Agent>` in a `createApp` call. Start simple, eject when you need to.
Tools aren't just functions the model calls. They're components in the fiber tree with their own render output:
```tsx
const TodoTool = createTool({
  name: "manage_todos",
  description: "Add, complete, or list todos",
  input: z.object({
    action: z.enum(["add", "complete", "list"]),
    text: z.string().optional(),
  }),
  handler: async ({ action, text }, ctx) => {
    if (action === "add") {
      todos.push({ text, done: false });
      ctx?.setState("lastAction", `Added: ${text}`);
    }
    // ...
    return { success: true };
  },
  render: () => (
    <Section id="todos" audience="model">
      <List title="Current Todos" task>
        {todos.map((t) => (
          <ListItem key={t.text} checked={t.done}>{t.text}</ListItem>
        ))}
      </List>
    </Section>
  ),
});
```

The render function lives in the fiber tree. `<List task>` and `<ListItem checked>` are semantic primitives: the compiler renders them as markdown checkboxes, structured content, whatever the model needs. When tool state changes, the reconciler diffs and recompiles. No string templates.
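As an illustration of what "renders them as markdown checkboxes" could mean, here is a hypothetical sketch of a task-list-to-markdown step. The `renderTaskList` helper and its shapes are assumptions for illustration, not agentick's actual compiler:

```typescript
// Hypothetical sketch: rendering a task-list primitive to markdown
// checkboxes. Names and shapes are illustrative, not agentick's API.
type Item = { text: string; done: boolean };

function renderTaskList(title: string, items: Item[]): string {
  const lines = items.map((i) => `- [${i.done ? "x" : " "}] ${i.text}`);
  return [`### ${title}`, ...lines].join("\n");
}

console.log(
  renderTaskList("Current Todos", [
    { text: "write docs", done: true },
    { text: "ship v1", done: false },
  ]),
);
```

The point is that the component tree, not string concatenation, is the source of truth; a renderer like this runs on every recompile.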
One hook creates reactive state, renders it to model context, and registers a tool — the model can adjust its own behavior mid-conversation:
```tsx
function ResearchAgent() {
  const [depth, setDepth] = useKnob("search_depth", 3, {
    min: 1,
    max: 10,
    description: "How many search results to analyze",
  });
  const [style] = useKnob("writing_style", "academic", {
    options: ["academic", "casual", "technical"],
    description: "Output writing style",
  });
  return (
    <>
      <System>
        You are a research assistant. Analyze the top {depth} results. Write in a {style} style.
      </System>
      <SearchTool maxResults={depth} />
      <Knobs />
      <Timeline />
    </>
  );
}
```

The model sees the knobs as form controls in its context and gets a `set_knob` tool to change them. The agent decides mid-conversation that it needs more search depth? It sets the knob, the state updates, the context recompiles, next tick sees the new value.
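To make the `set_knob` flow concrete, here is a hypothetical sketch of how a knob update might be validated and applied. The `KnobSpec` shape and `applySetKnob` helper are illustrative assumptions, not agentick's internals:

```typescript
// Hypothetical sketch of set_knob semantics. KnobSpec and applySetKnob
// are illustrative assumptions, not agentick's actual internals.
type KnobSpec = {
  value: number | string;
  options?: string[];
  min?: number;
  max?: number;
};

function applySetKnob(
  knobs: Record<string, KnobSpec>,
  name: string,
  value: number | string,
): Record<string, KnobSpec> {
  const knob = knobs[name];
  if (!knob) throw new Error(`Unknown knob: ${name}`);
  // Enumerated knobs only accept one of their declared options.
  if (knob.options && !knob.options.includes(String(value))) {
    throw new Error(`Invalid option for ${name}: ${value}`);
  }
  // Numeric knobs get clamped into their declared range.
  if (typeof value === "number") {
    if (knob.min !== undefined) value = Math.max(knob.min, value);
    if (knob.max !== undefined) value = Math.min(knob.max, value);
  }
  return { ...knobs, [name]: { ...knob, value } };
}

// e.g. the model requests more depth mid-conversation
const next = applySetKnob(
  { search_depth: { value: 3, min: 1, max: 10 } },
  "search_depth",
  7,
);
```

Whatever the real validation looks like, the key property is the last line of the prose above: the update is ordinary state, so the next compile pass simply sees the new value.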
Hooks control the tick loop — how many times the model runs, what happens between turns, and when to stop:
```tsx
function DeepResearchAgent() {
  const [sources, setSources] = useState<Source[]>([]);

  // Keep running until we have enough sources
  useContinuation((result) => sources.length < 5);

  // Log after each model turn
  useOnTickEnd((result, ctx) => {
    console.log(`Tick ${ctx.tick}: ${sources.length} sources`);
  });

  return (
    <>
      <System>
        Find and analyze sources. Use the search tool repeatedly until you have at least 5 quality
        sources.
      </System>
      <SearchTool onResult={(s) => setSources((prev) => [...prev, s])} />
      <Section title="Sources Found">
        <List>
          {sources.map((s) => (
            <ListItem key={s.url}>{s.title}</ListItem>
          ))}
        </List>
      </Section>
      <Timeline />
    </>
  );
}
```

`useContinuation` controls whether the agent keeps running. `result.shouldContinue` shows the framework's default; return nothing to defer, or override with a boolean or `{ stop/continue: true, reason? }`. Same lifecycle model as React effects: `useOnMount`, `useOnTickStart`, `useOnTickEnd`, `useAfterCompile`.
Serve agents over HTTP with sessions, auth, and real-time streaming:
```ts
import { createGateway } from "@agentick/gateway";

const gateway = createGateway({
  port: 3000,
  apps: {
    assistant: createApp(() => <AssistantAgent />),
    research: createApp(() => <DeepResearchAgent />),
  },
  defaultApp: "assistant",
  auth: {
    type: "token",
    token: process.env.API_TOKEN,
  },
});

await gateway.start();
```

Gateway manages sessions, handles SSE streaming to clients, and supports custom RPC methods. Use `@agentick/client` to connect from browsers, or `@agentick/express` and `@agentick/nestjs` to embed into existing apps.
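For clients that consume the stream without `@agentick/client`, standard SSE framing applies: events are separated by blank lines and payloads arrive on `data:` lines. Here is a minimal generic parser (plain SSE, not tied to agentick's event schema, which may differ):

```typescript
// Minimal generic SSE frame parser (standard `data:` framing; the
// payload schema agentick sends is not assumed here).
function parseSSE(chunk: string): string[] {
  return chunk
    .split("\n\n") // events are separated by blank lines
    .flatMap((event) =>
      event
        .split("\n")
        .filter((line) => line.startsWith("data:"))
        .map((line) => line.slice(5).trimStart()),
    );
}
```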