1. Update the registry and choose the model you want to use

// Refresh the model registry and inspect the models it exposes.
try await engine.updateRegistry()
let localModels = engine.localModels

// Identifier of the model we want to download.
let localModelId = "Meta-Llama-3.2-1B-Instruct-bfloat16"

You can choose a supported model by checking the localModels list. Alternatively, you can check the models list in the Platform and copy the corresponding identifier.
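
For instance, you could confirm that the identifier you plan to use is actually listed in the registry before starting a download. The sketch below is only an illustration: it assumes each entry in localModels exposes an identifier property, which may be named differently in the SDK's model type.

// Minimal sketch: verify that the chosen identifier is listed in the registry.
// Assumption: each element of localModels exposes an `identifier` property;
// the real property name depends on the SDK's model type.
let wantedId = "Meta-Llama-3.2-1B-Instruct-bfloat16"
if engine.localModels.contains(where: { $0.identifier == wantedId }) {
    print("Model \(wantedId) is available in the registry.")
} else {
    print("Model \(wantedId) was not found; pick another identifier.")
}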

2. Start the download and wait until it completes

// Create a download handle for the chosen model and start the transfer.
let handle = try engine.downloadHandle(identifier: localModelId)
try handle.start()

// Consume progress updates until the download completes.
let progressStream = try handle.progress()
while let downloadProgress = await progressStream.next() {
    handleDownloadProgress(downloadProgress)
}
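
The handleDownloadProgress function above is not part of the SDK; it is a placeholder for your own progress handling. A minimal sketch, assuming you only want to log each update and keep the loop from blocking the caller, could look like this (the concrete progress type is SDK-specific, so it is printed as-is here):

// Hypothetical helper: log each progress update.
// The concrete progress type depends on the SDK, so it is treated generically.
func handleDownloadProgress(_ progress: Any) {
    print("Download progress: \(progress)")
}

// Run the download loop inside a Task so the caller is not blocked.
Task {
    do {
        let handle = try engine.downloadHandle(identifier: localModelId)
        try handle.start()
        let progressStream = try handle.progress()
        while let downloadProgress = await progressStream.next() {
            handleDownloadProgress(downloadProgress)
        }
        print("Download completed.")
    } catch {
        print("Download failed: \(error)")
    }
}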

You can also start the download directly, control it manually, and observe its state:
// Start the download without a handle and query its current state.
try engine.download(identifier: localModelId)
let modelDownloadState = engine.downloadState(identifier: localModelId)

// Pause, resume, stop, or delete the download as needed:
// engine.pause(identifier: localModelId)
// engine.resume(identifier: localModelId)
// engine.stop(identifier: localModelId)
// engine.delete(identifier: localModelId)
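
As an illustration of how these calls could fit together, the sketch below pauses or resumes the download depending on its current state. The .downloading and .paused cases are assumptions about the state type returned by downloadState(identifier:); match them to the enum the SDK actually exposes.

// Sketch: toggle the download based on its current state.
// The state cases below are assumed and may be named differently in the SDK.
let state = engine.downloadState(identifier: localModelId)
switch state {
case .downloading:
    engine.pause(identifier: localModelId)   // temporarily stop the transfer
case .paused:
    engine.resume(identifier: localModelId)  // continue where it left off
default:
    break
}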

Now let’s talk about the different configurations.