Choose Your AI Model

Select a language model to power your voice assistant

Recommended Models

TinyLlama 1.1B

Fast, lightweight model (~650MB)

Speed: ⚡⚡⚡ Quality: ⭐⭐

DistilGPT2

Very fast, small model (~350MB)

Speed: ⚡⚡⚡⚡ Quality: ⭐⭐

GPT-2

Classic model, good quality (~550MB)

Speed: ⚡⚡⚡ Quality: ⭐⭐⭐

SmolLM2 135M (Fast)

4-bit quantized model (~118MB)

Speed: ⚡⚡⚡⚡⚡ Quality: ⭐⭐⭐

SmolLM2 135M (Quality)

16-bit model, slower (~270MB)

Speed: ⚡⚡⚡ Quality: ⭐⭐⭐⭐
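Taken together, the list above is a size/speed/quality tradeoff table. A minimal sketch of how it could be captured as a config structure, assuming TypeScript; every field name, model key, and the helper below are illustrative, not the app's actual code:

```typescript
// Illustrative catalog of the recommended models above; the field names,
// rating scale, and keys are assumptions for this sketch, not real app code.
interface ModelOption {
  key: string;     // internal identifier
  sizeMB: number;  // approximate download size
  speed: number;   // 1-5, higher is faster
  quality: number; // 1-5, higher is better
}

const RECOMMENDED_MODELS: ModelOption[] = [
  { key: "tinyllama-1.1b",    sizeMB: 650, speed: 3, quality: 2 },
  { key: "distilgpt2",        sizeMB: 350, speed: 4, quality: 2 },
  { key: "gpt2",              sizeMB: 550, speed: 3, quality: 3 },
  { key: "smollm2-135m-q4",   sizeMB: 118, speed: 5, quality: 3 },
  { key: "smollm2-135m-fp16", sizeMB: 270, speed: 3, quality: 4 },
];

// Example: pick the highest-quality model under a download budget.
function bestUnder(budgetMB: number): ModelOption | undefined {
  return RECOMMENDED_MODELS
    .filter((m) => m.sizeMB <= budgetMB)
    .sort((a, b) => b.quality - a.quality)[0];
}
```

Encoding the ratings as numbers makes the picker easy to sort or filter, e.g. `bestUnder(300)` selects the 16-bit SmolLM2 as the best model that downloads in under 300MB.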

🚀 MediaPipe-Powered Models (Google's Optimized Inference)

These models run on Google's MediaPipe LLM inference runtime for faster, more memory-efficient on-device generation.

📋 MediaPipe Information:
• MediaPipe models require the MediaPipe Tasks LLM library (experimental feature)
• Q4 Quantization: Optimized models use 4-bit quantization for 40% faster inference
• Auto-fallback: If MediaPipe is unavailable, these models automatically switch to proven ONNX alternatives (see the sketch after this list)
• Gemma 3 1B Q4 → SmolLM2 1.7B Q4 or TinyLlama (seamless transition)
• For Brave browser: Allow scripts and disable strict shields if needed
• Q4 models require less memory and provide faster response times
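A minimal sketch of the load-then-fall-back flow described above, assuming the `@mediapipe/tasks-genai` web package for MediaPipe and Transformers.js (`@huggingface/transformers`) for the ONNX path; the model URL, fallback repo ID, and function names are placeholders, not the app's real values:

```typescript
import { FilesetResolver, LlmInference } from "@mediapipe/tasks-genai";
import { pipeline } from "@huggingface/transformers";

// Placeholder model asset URL -- not the app's real path.
const MODEL_URL = "https://example.com/models/gemma3-1b-it-q4.task";

// Returns a simple (prompt) => Promise<string> generator, whichever runtime loads.
async function loadGenerator(): Promise<(prompt: string) => Promise<string>> {
  try {
    // MediaPipe path: resolve the WASM fileset, then create the LLM task.
    const genai = await FilesetResolver.forGenAiTasks(
      "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-genai/wasm"
    );
    const llm = await LlmInference.createFromOptions(genai, {
      baseOptions: { modelAssetPath: MODEL_URL },
      maxTokens: 512,
    });
    return (prompt) => llm.generateResponse(prompt);
  } catch (err) {
    // MediaPipe unavailable (unsupported browser, blocked scripts, etc.):
    // fall back to an ONNX model. The repo ID below is a placeholder.
    console.warn("MediaPipe load failed, falling back to ONNX:", err);
    const generator = await pipeline(
      "text-generation",
      "HuggingFaceTB/SmolLM2-1.7B-Instruct"
    );
    return async (prompt) => {
      const out = await generator(prompt, { max_new_tokens: 128 });
      return (out as Array<{ generated_text: string }>)[0].generated_text;
    };
  }
}
```

Catching the failure at load time is what makes the fallback seamless: the rest of the assistant only ever sees a `(prompt) => Promise<string>` generator, regardless of which runtime ended up serving it.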

Gemma 3 1B Instruct Q4 🚀

Google's optimized Gemma 3 1B with Q4 quantization (~600MB)
⚡ 40% faster inference with Q4 quantization
Auto-fallback: SmolLM2 1.7B Q4 or TinyLlama if MediaPipe unavailable

Speed: ⚡⚡⚡⚡⚡⚡ Quality: ⭐⭐⭐⭐
Q4 Optimized

Gemma 3 1B Instruct 🔥

Google's Gemma 3 1B standard precision (~1.0GB)
Auto-fallback: SmolLM2 1.7B or TinyLlama if MediaPipe unavailable

Speed: ⚡⚡⚡⚡⚡ Quality: ⭐⭐⭐⭐
MediaPipe Optimized

Gemma 7B Instruct 🔥

Larger Gemma model via MediaPipe (~5.2GB)
Auto-fallback: TinyLlama 1.1B if MediaPipe unavailable

Speed: ⚡⚡⚡⚡ Quality: ⭐⭐⭐⭐⭐
MediaPipe Optimized

Custom Hugging Face Model

Enter the repository ID of any ONNX text-generation model from Hugging Face (see the sketch below)
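A minimal sketch of loading a user-supplied model ID, assuming Transformers.js (`@huggingface/transformers`) as the ONNX runtime; the example repo ID and `dtype` choice are illustrative:

```typescript
import { pipeline } from "@huggingface/transformers";

// Load any ONNX text-generation model from the Hugging Face Hub by repo ID.
// The repo ID and dtype below are examples; the repo must ship ONNX weights.
const generator = await pipeline(
  "text-generation",
  "Xenova/TinyLlama-1.1B-Chat-v1.0",
  { dtype: "q4" } // prefer 4-bit weights when the repo provides them
);

const output = await generator("Hello! What can you do?", {
  max_new_tokens: 64,
});
console.log(output); // [{ generated_text: "..." }]
```

Models without ONNX weights in the repo will fail to load, so it is worth validating the ID before downloading in a picker like this one.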