
Translation Engines

LocaleKit supports multiple translation engines. Choose based on your needs:

Comparison

| Engine | Type | European | Asian | Context | Speed | API Key |
|---|---|---|---|---|---|---|
| DeepL | Cloud | ★★★★★ | ★★★☆☆ | ★★★☆☆ | 2.5 req/s | DEEPL_API_KEY |
| OpenAI | Cloud | ★★★★★ | ★★★★★ | ★★★★★ | 20 req/s | OPENAI_API_KEY |
| MLX | On-device | ★★★★☆ | ★★★★☆ | ★★★★☆ | 0.5 req/s | — |
| Apple Intelligence | 🖥 macOS app only | ★★★★☆ | ★★★★☆ | ★★★★★ | 1 req/s | — |
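
Request rate and batch size multiply: DeepL and OpenAI both send 50 keys per request (see their sections below), so effective throughput is far higher than the raw request rate suggests. A quick back-of-envelope sketch:

```shell
# Effective throughput ≈ keys per batch × requests per second
# (batch sizes from the DeepL/OpenAI sections; request rates from the table above)
echo "DeepL:  $(awk 'BEGIN{print 50*2.5}') keys/s"   # 125 keys/s
echo "OpenAI: $((50 * 20)) keys/s"                   # 1000 keys/s
```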

DeepL

Best for European languages. Fast batch processing (50 keys/batch). Requires a paid API key.

```bash
export DEEPL_API_KEY="your-key"
localekit translate --engine deepl
```

OpenAI

Best context awareness — understands UI strings, placeholders, and tone. Fast batch processing (50 keys/batch). Supports all languages GPT supports.

```bash
export OPENAI_API_KEY="your-key"
localekit translate --engine openai
```

MLX

Runs entirely on your Mac using Apple Silicon. No API key, no cloud, no cost. Models download automatically on first use and are cached at ~/Library/Caches/huggingface/.

```bash
localekit translate --engine mlx
# Or specify a model:
localekit translate --engine mlx --mlx-model mlx-community/Qwen3-8B-4bit
```
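
To see how much disk the cached models occupy (or to reclaim the space), you can inspect the cache directory directly; the snippet below is plain shell and assumes the default cache location noted above:

```shell
# Report the size of the MLX model cache, if any models have been downloaded
CACHE="$HOME/Library/Caches/huggingface"
if [ -d "$CACHE" ]; then
  du -sh "$CACHE"          # total size of downloaded models
else
  echo "no models cached yet"
fi
```

Deleting the directory frees the space; since models download automatically on first use, they will simply be re-fetched the next time you translate.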

Available Models

| Model | Parameters | Size | RAM | Languages |
|---|---|---|---|---|
| Qwen3 4B (default) | 4B | 2.5 GB | 8 GB | 119 |
| Mistral 7B | 7B | 4.1 GB | 8 GB | 15 |
| Qwen3 8B | 8B | 5 GB | 16 GB | 119 |
| Gemma 3 12B | 12B | 8 GB | 16 GB | 35 |
| Qwen3 30B-A3B | 30B (3B active, MoE) | 16 GB | 16 GB | 119 |
| Qwen3 32B | 32B | 18 GB | 32 GB | 119 |

Recommendation

8 GB Mac → Qwen3 4B (the default). 16 GB+ Mac → Qwen3 30B-A3B (best quality-to-speed ratio: its MoE design activates only 3B parameters per token).
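
This rule of thumb is easy to script: read installed RAM (on macOS, `sysctl -n hw.memsize` reports bytes) and pick a model. The helper name and the exact mlx-community repo IDs below are illustrative assumptions, not LocaleKit features:

```shell
# Hypothetical helper: map installed RAM (in GB) to a model per the recommendation above
pick_mlx_model() {
  if [ "$1" -ge 16 ]; then
    echo "mlx-community/Qwen3-30B-A3B-4bit"   # assumed repo id
  else
    echo "mlx-community/Qwen3-4B-4bit"        # assumed repo id
  fi
}

# On macOS: ram_gb=$(( $(sysctl -n hw.memsize) / 1073741824 ))
pick_mlx_model 8    # → mlx-community/Qwen3-4B-4bit
pick_mlx_model 32   # → mlx-community/Qwen3-30B-A3B-4bit
```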

Apple Intelligence

🖥 macOS App Only

Apple Intelligence is available in the LocaleKit macOS app but not in the CLI. Use DeepL, OpenAI, or MLX for CLI workflows.

Requires macOS 26+ with Apple Intelligence enabled. Best context awareness (5/5), processes one key at a time.

LocaleKit CLI 0.7.2 · Built by Hexagone Studio