In this example, we will use the dynamic ContextMode, which automatically maintains a continuous conversation history instead of resetting the context with each new input. Every new message is added to the ongoing chat, allowing the model to remember what has already been said and respond with full context.
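Before touching the SDK, it may help to see what "continuous conversation history" means mechanically. The sketch below does not use the uzu API at all; it is a minimal, self-contained illustration of a dynamic context as a growing message list, where every run sees all prior messages:

```typescript
// Minimal illustration (no uzu dependency): a dynamic context keeps the full
// history between runs, while a reset-per-request mode would start from an
// empty list each time.
type Message = { role: 'user' | 'assistant'; content: string };

class DynamicContext {
  private history: Message[] = [];

  run(userText: string, reply: (history: Message[]) => string): string {
    this.history.push({ role: 'user', content: userText });
    const answer = reply(this.history); // the model sees everything said so far
    this.history.push({ role: 'assistant', content: answer });
    return answer;
  }

  size(): number {
    return this.history.length;
  }
}

const ctx = new DynamicContext();
ctx.run('Tell about London', (h) => `seen ${h.length} message(s)`);
const second = ctx.run('Compare with New York', (h) => `seen ${h.length} message(s)`);
console.log(second); // the second run already sees 3 messages: 2 user + 1 assistant
```

This is why the follow-up prompts later in this example ("Compare with New York") work without restating the subject: the earlier exchange is still in the context.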
Make a new project folder
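For example (the folder name is just a suggestion, not prescribed by the SDK):

```shell
# Create and enter the project directory
mkdir uzu-chat-demo
cd uzu-chat-demo
```

Then run `pnpm init` inside the folder to create the package.json that the next step's installs will be recorded in.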
Install dependencies
```shell
pnpm add typescript ts-node @types/node -D
pnpm add @trymirai/uzu
```
Initialize a tsconfig.json
```json
{
  "compilerOptions": {
    "target": "es2020",
    "module": "commonjs",
    "moduleResolution": "node",
    "strict": true,
    "esModuleInterop": true,
    "outDir": "dist",
    "types": ["node"]
  },
  "include": ["*.ts"]
}
```
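Optionally, you can add a start script to package.json so the example can be launched with `pnpm start` (the script name here is our own choice, not part of the SDK):

```json
{
  "scripts": {
    "start": "ts-node main.ts"
  }
}
```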
Create main.ts
Don't forget to replace the 'API_KEY' placeholder with your actual API key before running.
```typescript
import Engine, { Config, ContextMode, Input, RunConfig } from '@trymirai/uzu';

async function main() {
  // Authorize with your API key
  const engine = await Engine.load('API_KEY');

  // Resolve the model and download it locally, logging progress along the way
  const model = await engine.chatModel('Qwen/Qwen3-0.6B');
  await engine.downloadChatModel(model, (update) => {
    console.log('Progress:', update.progress);
  });

  // Dynamic context mode keeps the full conversation history between runs
  const config = Config
    .default()
    .withContextMode(ContextMode.dynamic());
  const session = engine.chatSession(model, config);

  // Each request builds on the answers to the previous ones
  const requests = [
    'Tell about London',
    'Compare with New York',
    'Compare the population of the two',
  ];

  const runConfig = RunConfig
    .default()
    .withTokensLimit(1024)
    .withEnableThinking(false);

  for (const request of requests) {
    // Returning true from the callback tells the session to keep generating
    const output = session.run(Input.text(request), runConfig, (partialOutput) => {
      return true;
    });
    console.log('Request:', request);
    console.log('Response:', output.text.original.trim());
    console.log('-------------------------');
  }
}

main().catch((error) => {
  console.error(error);
});
```
Now that we’ve tried the simplest snippet, let’s take a look at the step-by-step integration guide.