This guide covers only the simplest steps to run an LLM on your device. For a detailed, step-by-step explanation of how everything works, see the full integration guide.
1
Make a new project folder
mkdir demo && cd demo
2
Initialize with pnpm
pnpm init
3
Install dependencies
pnpm add typescript ts-node @types/node -D
pnpm add @trymirai/uzu
4
Initialize a tsconfig.json
{
  "compilerOptions": {
    "target": "es2020",
    "module": "commonjs",
    "moduleResolution": "node",
    "strict": true,
    "esModuleInterop": true,
    "outDir": "dist",
    "types": ["node"]
  },
  "include": ["*.ts"]
}
5
Create main.ts
Don’t forget to add your API key.
import Engine, { ClassificationFeature, Preset, SamplingMethod } from '@trymirai/uzu';

async function main() {
  const feature = new ClassificationFeature('sentiment', [
    'Happy',
    'Sad',
    'Angry',
    'Fearful',
    'Surprised',
    'Disgusted',
  ]);

  const textToDetectFeature =
    "Today's been awesome! Everything just feels right, and I can't stop smiling.";
  const prompt =
    `Text is: "${textToDetectFeature}". Choose ${feature.name} from the list: ${feature.values.join(', ')}. ` +
    "Answer with one word. Don't add a dot at the end.";

  const output = await Engine.create('API_KEY')
    .chatModel('Qwen/Qwen3-0.6B')
    .download((update) => {
      console.log('Progress:', update.progress);
    })
    .preset(Preset.classification(feature))
    .session()
    .tokensLimit(32)
    .enableThinking(false)
    .samplingMethod(SamplingMethod.greedy())
    .reply(prompt);

  console.log('Prediction:', output.text.original);
  console.log('Stats:', output.stats);
}

main().catch((error) => {
  console.error(error);
});
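The snippet above passes the key as a hardcoded string. As a sketch of a safer alternative, you could read it from an environment variable instead; the variable name MIRAI_API_KEY and the helper below are assumptions for illustration, not part of the SDK:

```typescript
// Resolve the API key from the environment instead of hardcoding it.
// MIRAI_API_KEY is an assumed variable name; use whatever your setup defines.
function resolveApiKey(env: Record<string, string | undefined> = process.env): string {
  const key = env.MIRAI_API_KEY;
  if (!key) {
    // Fail fast with a clear message if the key is missing.
    throw new Error('Set the MIRAI_API_KEY environment variable first');
  }
  return key;
}
```

You would then call `Engine.create(resolveApiKey())` instead of passing the literal string, which keeps the key out of source control.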
6
Run the snippet
pnpm ts-node main.ts
Now that we’ve run the simplest possible snippet, take a look at the step-by-step integration guide for a detailed explanation of each part.