# Translation Engines
LocaleKit supports multiple translation engines. Choose based on your needs:
## Comparison
| Engine | Type | European | Asian | Context | Speed | API Key |
|---|---|---|---|---|---|---|
| DeepL | Cloud | ★★★★★ | ★★★☆☆ | ★★★☆☆ | 2.5 req/s | `DEEPL_API_KEY` |
| OpenAI | Cloud | ★★★★★ | ★★★★☆ | ★★★★★ | 20 req/s | `OPENAI_API_KEY` |
| MLX | On-device | ★★★★☆ | ★★★★☆ | ★★★★☆ | 0.5 req/s | — |
| Apple Intelligence | 🖥 macOS app only | ★★★★☆ | ★★★★☆ | ★★★★★ | 1 req/s | — |
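The request-rate column translates into rough wall-clock estimates. A back-of-the-envelope sketch (the `estimated_seconds` helper is illustrative, not part of LocaleKit; real throughput also depends on network latency and key length, and batching applies only to the cloud engines):

```python
def estimated_seconds(total_keys: int, req_per_s: float, batch_size: int = 1) -> float:
    """Rough wall-clock estimate: requests needed divided by request rate."""
    requests = -(-total_keys // batch_size)  # ceiling division
    return requests / req_per_s

# 1,000 keys through a batching cloud engine (50 keys/batch at 2.5 req/s):
# 20 requests -> 8 seconds
print(estimated_seconds(1000, 2.5, 50))

# The same 1,000 keys one at a time on-device at 0.5 req/s: 2,000 seconds
print(estimated_seconds(1000, 0.5))
```

This is why the batching engines feel dramatically faster on large catalogs even when their per-request rate looks modest.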
## DeepL
Best for European languages. Fast batch processing (50 keys/batch). Requires a paid API key.
```shell
export DEEPL_API_KEY="your-key"
localekit translate --engine deepl
```
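The 50-keys-per-batch behavior amounts to plain chunking before requests go out. A minimal sketch of that idea (illustrative only; the `chunked` helper is an assumption, not LocaleKit's actual implementation):

```python
from itertools import islice

def chunked(keys: list[str], batch_size: int = 50):
    """Yield successive batches of at most batch_size keys."""
    it = iter(keys)
    while batch := list(islice(it, batch_size)):
        yield batch

# 120 keys -> 3 requests instead of 120
batches = list(chunked([f"key_{i}" for i in range(120)]))
print(len(batches), [len(b) for b in batches])  # 3 [50, 50, 20]
```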
## OpenAI
Best context awareness — understands UI strings, placeholders, and tone. Fast batch processing (50 keys/batch). Supports all languages GPT supports.
```shell
export OPENAI_API_KEY="your-key"
localekit translate --engine openai
```
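Placeholder handling is worth verifying no matter which engine you use. A minimal post-translation check (the regex and `placeholders_preserved` helper are illustrative assumptions; the pattern covers printf-style `%@`/`%d`/`%s` and brace-style `{name}` tokens only):

```python
import re

PLACEHOLDER = re.compile(r"%[@ds]|\{[A-Za-z_]\w*\}")

def placeholders_preserved(source: str, translated: str) -> bool:
    """True if both strings contain the same multiset of placeholders."""
    return sorted(PLACEHOLDER.findall(source)) == sorted(PLACEHOLDER.findall(translated))

print(placeholders_preserved("Hello, %@! You have {count} items.",
                             "Bonjour, %@ ! Vous avez {count} articles."))  # True
print(placeholders_preserved("Delete %d files?",
                             "Supprimer les fichiers ?"))  # False: %d lost
```

A check like this catches the most common translation regression: a placeholder dropped or mangled, which crashes or garbles the UI at runtime.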
## MLX
Runs entirely on your Mac using Apple Silicon. No API key, no cloud, no cost. Models download automatically on first use and are cached at `~/Library/Caches/huggingface/`.
```shell
localekit translate --engine mlx

# Or specify a model:
localekit translate --engine mlx --mlx-model mlx-community/Qwen3-8B-4bit
```

### Available Models
| Parameters | Size | RAM | Languages |
|---|---|---|---|
| 4B | 2.5 GB | 8 GB | 119 |
| 7B | 4.1 GB | 8 GB | 15 |
| 8B | 5 GB | 16 GB | 119 |
| 12B | 8 GB | 16 GB | 35 |
| 30B (3B active, MoE) | 16 GB | 16 GB | 119 |
| 32B | 18 GB | 32 GB | 119 |
### Recommendation
- 8 GB Mac → Qwen3 4B (default)
- 16 GB+ Mac → Qwen3 30B-A3B (best quality-to-speed ratio thanks to MoE)
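The recommendation above reduces to a simple memory check. A hypothetical helper (the function and the exact `mlx-community` repo names are assumptions, not part of LocaleKit; the model choices follow the recommendation above):

```python
def recommend_mlx_model(ram_gb: int) -> str:
    """Suggest an MLX model repo for a given amount of unified memory."""
    if ram_gb >= 16:
        # MoE: 30B total parameters, ~3B active per token
        return "mlx-community/Qwen3-30B-A3B-4bit"
    # Default for 8 GB machines
    return "mlx-community/Qwen3-4B-4bit"

print(recommend_mlx_model(8))   # the 4B default
print(recommend_mlx_model(32))  # the 30B-A3B MoE model
```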
## Apple Intelligence
> 🖥 **macOS app only.** Apple Intelligence is available in the LocaleKit macOS app but not in the CLI. Use DeepL, OpenAI, or MLX for CLI workflows.

Requires macOS 26+ with Apple Intelligence enabled. Offers the best context awareness (5/5) but processes one key at a time, so throughput is limited to about 1 req/s.
