AI translation

The AI Translation feature lets you automatically generate translations for the open TS file using large language models (LLMs). You can use either a local LLM server (such as Ollama, LM Studio, or llama.cpp) or cloud-based APIs that support the OpenAI-compatible REST protocol (such as OpenAI, Groq, or Anthropic).

Qt Linguist supports two API modes:

  • Ollama - Ollama's native REST API. Use this when running Ollama locally.
  • OpenAI Compatible - The standard OpenAI REST API format. Use this for:
    • Local servers: LM Studio, llama.cpp, or Ollama (which also supports OpenAI-compatible mode)
    • Cloud providers: OpenAI, Groq, DeepSeek, and others
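The two modes differ mainly in the endpoint path and payload shape. A minimal sketch of the request shapes, assuming the default local ports; the model name `qwen3:14b` and the exact request Linguist sends are illustrative assumptions:

```python
# Sketch of the two request shapes (assumed example, not Linguist's internal code).

# Ollama's native chat endpoint:
ollama_url = "http://localhost:11434/api/chat"
ollama_payload = {
    "model": "qwen3:14b",
    "messages": [{"role": "user", "content": "Translate 'File' to German."}],
    "stream": False,  # request a single complete response
}

# OpenAI-compatible chat endpoint (LM Studio, llama.cpp, cloud providers):
openai_url = "http://localhost:8080/v1/chat/completions"
openai_payload = {
    "model": "qwen3:14b",
    "messages": [{"role": "user", "content": "Translate 'File' to German."}],
}
```

Note that the OpenAI-compatible path includes the `/v1` prefix; clients append it themselves, which is why the Server URL field below takes only the base URL.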

Setting up a local LLM server

To use AI Translation with a local server, install one of the following and download at least one model:

  • Ollama - Easy to use, manages models automatically
  • LM Studio - GUI application with model browser
  • llama.cpp - Lightweight, runs GGUF models directly

Note: Running LLMs locally requires sufficient memory depending on the model size and quantization level. Refer to your LLM server's documentation for specific requirements, for example LM Studio System Requirements.

For Ollama, pull a model using the command line:

ollama pull qwen3:14b
ollama serve

For LM Studio, download models through the application's interface and start the local server:

lms server start

For llama.cpp, you can either use one of the built-in model presets:

llama-server --fim-qwen-7b-default

Or download a GGUF model file and start the server manually:

llama-server -m model.gguf --port 8080
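To check that a local server is reachable before opening the dialog, you can query its model-listing endpoint. A sketch using only the standard library, assuming the default ports shown above; the `api_type` values are this sketch's own convention:

```python
import json
import urllib.request

def models_endpoint(api_type: str, base_url: str) -> str:
    """Return the model-listing URL for the given API mode."""
    if api_type == "ollama":
        return base_url.rstrip("/") + "/api/tags"   # Ollama's native API
    return base_url.rstrip("/") + "/v1/models"      # OpenAI-compatible servers

def list_models(api_type: str, base_url: str):
    """Fetch available model names, or return None if the server is unreachable."""
    try:
        with urllib.request.urlopen(models_endpoint(api_type, base_url), timeout=5) as r:
            data = json.load(r)
    except OSError:
        return None
    if api_type == "ollama":
        return [m["name"] for m in data.get("models", [])]
    return [m["id"] for m in data.get("data", [])]

# Example: list_models("ollama", "http://localhost:11434")
```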

Using cloud APIs

To use cloud-based translation services, select OpenAI Compatible as the API type, enter the provider's API endpoint URL, and provide your API key.

Enter only the base URL without the /v1 path suffix. For example, for OpenRouter use https://openrouter.ai/api. Consult your provider's documentation for the correct endpoint URL.
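The client appends the versioned path itself, so including /v1 in the Server URL would produce a doubled path segment. A sketch of the kind of URL joining involved (a hypothetical helper, not Linguist's actual code; shown here tolerating an accidental /v1 suffix):

```python
def chat_url(base_url: str) -> str:
    """Join a base URL with the OpenAI-compatible chat path,
    tolerating a trailing slash or an accidental /v1 suffix."""
    base = base_url.rstrip("/")
    if base.endswith("/v1"):              # user pasted the /v1 form anyway
        base = base[: -len("/v1")]
    return base + "/v1/chat/completions"

# The OpenRouter base URL from above yields the full chat endpoint:
print(chat_url("https://openrouter.ai/api"))
```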

Using the AI Translation dialog

In Linguist, choose Tools > AI Translation to open the AI Translation dialog:

The Configuration part of the dialog provides:

  • API Type: choose Ollama for local Ollama servers using Ollama's native API, or OpenAI Compatible for LM Studio, llama.cpp, cloud APIs, or Ollama in OpenAI-compatible mode.
  • Server URL: the base URL where the server listens. Do not include the /v1 path suffix. Default: http://localhost:11434 for Ollama, http://localhost:8080 for OpenAI Compatible.
  • API Key: authentication key for cloud APIs (optional for local servers).
  • Model: drop-down list of available models.
  • Context: optional application context to improve translation accuracy (e.g., "medical software", "video game", "financial application").
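The optional Context value typically ends up in the prompt sent to the model alongside the source text and target language. A hedged sketch of such a prompt; the wording Linguist actually uses is internal, and `build_prompt` is this sketch's own helper:

```python
def build_prompt(source: str, target_lang: str, context: str = "") -> str:
    """Assemble a translation instruction for the LLM.
    `context` corresponds to the optional Context field in the dialog."""
    parts = [f"Translate the following UI string into {target_lang}."]
    if context:
        parts.append(f"The application domain is: {context}.")
    parts.append("Return only the translation, with no explanation.")
    parts.append(f'Source: "{source}"')
    return "\n".join(parts)

print(build_prompt("Save As...", "German", context="medical software"))
```

Supplying a domain like "medical software" helps the model disambiguate terms ("operation", "chart") that translate differently across domains.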

Screenshot of Qt Linguist AI translation dialog - Configuration

The Selection part of the dialog provides:

  • File: the TS file to translate.
  • Filter: limit translation to specific groups (contexts or labels).
  • Translate: start the AI translation.

Screenshot of Qt Linguist AI translation dialog - Selection

The Progress part of the dialog provides:

  • Stop: stop the translation in progress.
  • Apply Translations: apply the translated items to the TS file.

Screenshot of Qt Linguist AI translation dialog - Progress

During translation, progress messages appear in the Translation Log. When complete, review the translated texts in the log. Click Apply Translations to insert the AI-generated translations into the TS file.
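Applying translations updates the <translation> elements in the TS file, which is an XML format. A simplified sketch of that update on a minimal TS fragment, using Python's standard XML module (the German text is an assumed AI result, and whether Linguist clears the "unfinished" marker or leaves entries for review is simplified here):

```python
import xml.etree.ElementTree as ET

# A minimal TS fragment with one untranslated message.
ts = ET.fromstring("""<TS version="2.1" language="de_DE">
  <context>
    <name>MainWindow</name>
    <message>
      <source>Open File</source>
      <translation type="unfinished"></translation>
    </message>
  </context>
</TS>""")

# Fill each empty translation and drop the "unfinished" marker.
for msg in ts.iter("message"):
    tr = msg.find("translation")
    if tr is not None and tr.get("type") == "unfinished":
        tr.text = "Datei \u00f6ffnen"   # assumed AI-generated translation
        del tr.attrib["type"]

print(ET.tostring(ts, encoding="unicode"))
```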

Recommended models

The following models are recommended for translation tasks, balancing quality, speed, and resource usage. They are available from:

  • Ollama - search by model name (e.g., ollama pull qwen3:14b)
  • LM Studio - search in the model browser
  • Hugging Face - download GGUF files for llama.cpp
  • Mistral Small 24B (14 GB): High translation quality with strong multilingual support. Requires >16 GB VRAM for optimal performance.
  • Qwen3 14B (9 GB): Balance of quality and resource usage. Supports 100+ languages.
  • Qwen3 30B (19 GB): High-quality translations. Uses a MoE architecture for efficient inference.
  • Qwen2.5 14B (9 GB): Strong multilingual support for 29+ languages, including CJK languages.
  • Gemma 3 12B (8 GB): Supports 140+ languages. Good for resource-constrained systems.
  • 7shi/llama-translate 8B (5 GB): Specialized translation model for English, French, Chinese, and Japanese. Lightweight option for limited hardware. Available on Ollama only.

For systems with limited resources, smaller variants like Qwen3 8B (5 GB) or Qwen2.5 7B (5 GB) provide reasonable translation quality while requiring less memory.
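The sizes above reflect each model's default quantization. A rough rule of thumb for estimating a quantized model's footprint is parameters × bits-per-weight / 8, plus some overhead. A back-of-envelope sketch; the 10% overhead factor and the ~4.5 bits/weight figure for a typical Q4 quantization are assumptions:

```python
def approx_size_gb(params_billion: float, bits_per_weight: float,
                   overhead: float = 1.1) -> float:
    """Rough on-disk/in-memory size of a quantized model in GB.
    overhead covers embeddings and metadata (assumed 10%)."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8 * overhead
    return round(bytes_total / 1e9, 1)

# A 14B model at ~4.5 bits/weight: roughly in line with the 9 GB
# figure listed for Qwen3 14B above.
print(approx_size_gb(14, 4.5))
```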

Note: Translation quality varies by language pair and model. Test different models to find the best combination of speed, quality, and resource usage for your specific translation needs.

© 2026 The Qt Company Ltd. Documentation contributions included herein are the copyrights of their respective owners. The documentation provided herein is licensed under the terms of the GNU Free Documentation License version 1.3 as published by the Free Software Foundation. Qt and respective logos are trademarks of The Qt Company Ltd. in Finland and/or other countries worldwide. All other trademarks are property of their respective owners.