🌐 macOS · Apple Silicon

Live audio translation. Entirely on your Mac.

Whisper transcribes. Mistral 7B translates. Both run locally on Apple Silicon — no cloud, no API keys, no data leaves your machine. Pick any audio source, choose a target language, and go.

Whisper (5 models) · Mistral 7B 4-bit MLX · Apple Neural Engine · 101 Languages · 100% Local · Floating Overlay · Transcript & SRT · Text-to-Speech

LLT menu bar app — audio source, languages, overlay controls
LLT floating translation overlay — original text and translation
LLT Mobile
Now on iPhone & iPad
LLT Mobile — the iOS companion to LLT Desktop
Stream audio from your iPhone to your Mac running LLT Desktop, for a fully local translation pipeline. Or pick one of four direct cloud engines for when you don't have your Mac with you.
101 Languages · 100% Local / Private · 2 AI Models · 0 Cloud Calls
💬

Overlay

Floating overlay shows original text and translation in real time

📝

Transcribe

Full session transcripts saved as TXT with timestamps

🎬

SRT Export

Export subtitles in SRT format for video post-production

🔊

Text-to-Speech

Hear translations read aloud via macOS system voice

Mic on the table. Zoom call. Any app. All work.

Choose your audio source and LLT captures it. WebRTC, Teams, a mic on a conference table — it doesn't matter. If your Mac can hear it, LLT can translate it.

🎙️

Microphone CoreAudio

Any hardware mic, USB mic, or audio interface. Select your device from the dropdown. Put a mic on a table and translate a meeting room in real time.

📱

Single App Audio ScreenCaptureKit

Capture audio from one specific app — Zoom, Teams, Chrome, Discord, FaceTime, or any other app. Only that app's audio gets translated, nothing else.

🔊

System Audio ScreenCaptureKit

Capture all system audio at once. Everything playing on your Mac gets transcribed and translated. Useful as a catch-all for multi-source scenarios.

Audio in, translation out. All local.

The backend starts with the app and runs a Python server on localhost. Audio is captured, chunked, sent via WebSocket, transcribed by Whisper, translated by Mistral, and displayed as an overlay — all without leaving your Mac.

01

Audio Capture

Mic, app, or system audio → 3-second chunks via WebSocket to local backend

02

Whisper STT

Speech-to-text with auto language detection. 5 models from tiny (~75 MB) to large (~3 GB) — choose in Settings > Whisper. Runs on Neural Engine.

03

Mistral Translation

Mistral 7B Instruct (4-bit MLX) translates transcribed text to target language

04

Output

Floating overlay with original + translation (auto-hides after 6s), optional text-to-speech, and full transcript save as TXT or SRT subtitles.
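The four steps above can be sketched end to end. A minimal illustration of the chunking and per-chunk processing, where `transcribe` and `translate` are hypothetical stand-ins for Whisper and Mistral, and the 16 kHz sample rate is an assumption:

```python
# Sketch of the chunking stage: accumulate incoming audio frames into
# fixed 3-second chunks, the unit LLT sends over WebSocket to the backend.
# transcribe/translate below are hypothetical stand-ins for Whisper/Mistral.

SAMPLE_RATE = 16_000            # assumption: Whisper's native sample rate
CHUNK_SECONDS = 3
CHUNK_SAMPLES = SAMPLE_RATE * CHUNK_SECONDS

def chunker(frames):
    """Yield fixed-size 3-second chunks from an iterable of sample lists."""
    buf = []
    for frame in frames:
        buf.extend(frame)
        while len(buf) >= CHUNK_SAMPLES:
            yield buf[:CHUNK_SAMPLES]
            buf = buf[CHUNK_SAMPLES:]

def process(chunk, transcribe, translate):
    """One pipeline step: speech-to-text, then translation, then the
    payload the overlay displays (original + translation)."""
    original = transcribe(chunk)
    return {"original": original, "translation": translate(original)}
```

Trailing audio shorter than a full chunk stays buffered, which is why pause detection (below) matters for clean sentence boundaries.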

LLT floating translation overlay
Floating overlay — original text (gray) + translation (bold white)

Lives in your menu bar. Starts on demand.

LLT is a menu bar app — no dock icon, no window clutter. Left-click opens the control panel, right-click shows status and quit. The backend starts with the app (or on demand), but translation only begins when you press Start.

🟢

Backend Lifecycle

Backend auto-starts with the app (configurable) and auto-stops on quit. Loads your selected Whisper model + Mistral on startup. Status visible in the control panel and menu bar icon.

▶️

Translation On Demand

Backend running ≠ translating. You manually press Start when you need translation. This keeps resource usage minimal until you actually need it — no background processing when idle.

LLT Settings — General tab with backend and display options
LLT Settings — multilingual interface (DE, EN, ES, FR, IT)
Settings — 8 tabs: General, Languages, Engines, Whisper, vMix, Storage, License, Info

Everything you need. Nothing in the cloud.

Overlay

Floating overlay at the bottom of your screen. Shows original text (small) and translation (large). Auto-hides after 6 seconds. Rounded corners, dark semi-transparent background.

Show Original Text

Display the original transcribed text above the translation in the overlay. See exactly what Whisper heard and how Mistral translated it.

Save Transcript

Save the full session as a timestamped TXT file in ~/Documents/LiveTranslator/. Original + translation for every segment. Also supports SRT subtitle export.
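For reference, an SRT cue is just an index, a time range, and the text. A minimal sketch of the format (an illustration of the standard layout, not LLT's actual exporter):

```python
def srt_timestamp(seconds):
    """Format seconds as the SRT timestamp HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def srt_cue(index, start, end, text):
    """One numbered SRT cue: index line, time range, text, blank separator."""
    return f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n\n"
```

Concatenating cues in order yields a file any NLE or video player accepts.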

Text-to-Speech

Read translations aloud using macOS system speech. Useful when you can't look at the screen — the translation plays through your speakers or headphones.

Auto-Gain

Automatically adjusts input gain to maintain consistent audio levels for Whisper. Targets -14 dB RMS with smooth adaptation. Manual gain also available (0.5x–4x).
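The general technique behind auto-gain can be sketched as: measure the RMS level in dBFS and step the gain toward the -14 dB target. The smoothing constant here is an assumption, not LLT's actual tuning:

```python
import math

TARGET_DB = -14.0   # target RMS level, from the feature description
SMOOTHING = 0.1     # assumption: fraction of the correction applied per chunk

def rms_db(samples):
    """RMS level of float samples (range -1..1) in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-10))   # floor avoids log10(0)

def update_gain(gain, samples):
    """Nudge the gain a small step toward the -14 dB RMS target."""
    error_db = TARGET_DB - rms_db(samples)
    gain *= 10 ** (SMOOTHING * error_db / 20)
    return min(max(gain, 0.5), 4.0)           # clamp to the manual gain range
```

Applying only a fraction of the correction each chunk gives the smooth adaptation described above instead of pumping.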

Auto-Start Backend

Optionally start the backend automatically when the app launches. Otherwise, start it manually from the control panel. Backend always stops cleanly on quit (SIGTERM → SIGINT → SIGKILL).

Whisper Model Selection

Choose from 5 Whisper models: tiny (~75 MB), base (~150 MB), small (~470 MB), medium (~1.5 GB), or large (~3 GB). Download any model on demand, switch anytime in Settings > Whisper, and delete unused models to save disk space. Tip: small is usually enough and fast. Set the source language explicitly — auto-detection can cause Whisper to hallucinate on short or quiet segments.

Smart Pause Detection

Intelligent voice activity detection that waits for natural pauses before sending audio to Whisper. Produces cleaner, more complete sentences instead of choppy fragments. Dramatically improves translation quality — especially for longer speech and interviews.
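Energy-based pause detection, the general technique behind features like this, can be sketched as: flush the buffer only after N consecutive quiet frames. Threshold and frame count below are illustrative assumptions, not LLT's actual tuning:

```python
SILENCE_THRESHOLD = 0.01   # assumption: mean |amplitude| below this = silence
PAUSE_FRAMES = 8           # assumption: consecutive silent frames ending an utterance

def segment_on_pauses(frames):
    """Group frames into utterances, splitting at sustained silence."""
    utterance, silent_run, out = [], 0, []
    for frame in frames:
        level = sum(abs(s) for s in frame) / len(frame)
        if level < SILENCE_THRESHOLD:
            silent_run += 1
            if silent_run >= PAUSE_FRAMES and utterance:
                out.append(utterance)   # natural pause: send to Whisper
                utterance = []
        else:
            silent_run = 0
            utterance.append(frame)
    if utterance:
        out.append(utterance)           # flush whatever remains at the end
    return out
```

Waiting for a sustained pause rather than cutting at fixed intervals is what turns choppy fragments into complete sentences.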

Network Access & LLT-Mobile

Enable network access (0.0.0.0) so other devices can connect to LLT over your local network. Use the free LLT-Mobile iOS app as a wireless microphone — or as a standalone audio source feeding into LLT via Wi-Fi. No cables, no Bluetooth pairing issues.

STT Engines

Choose your speech-to-text engine in Settings > Engines: Whisper (local, offline), Deepgram, AssemblyAI, Google Cloud (REST or gRPC Streaming), Microsoft Azure, or Vosk (self-hosted). VAD is set automatically per engine — streaming engines detect pauses server-side.

vMix Title Output

Send translations directly to vMix as live title updates via the vMix Web API. Configure host, port, input number, and custom field names for translation and original text. The translated text appears in your vMix title generator — format and style it however you want inside vMix.

Fully portable: use your iPhone as a wireless mic, run LLT on a MacBook, and send translations to vMix over Wi-Fi — live interview translation without cables.
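The vMix Web API accepts SetText requests as plain HTTP GETs; a sketch of building such a request in Python (the field name and port are examples, matching whatever you configure in your title):

```python
from urllib.parse import urlencode

def vmix_settext_url(host, port, input_number, field, text):
    """Build a vMix Web API SetText request URL for a title input field."""
    params = urlencode({
        "Function": "SetText",
        "Input": input_number,
        "SelectedName": field,   # e.g. "Translation.Text" — your field name
        "Value": text,
    })
    return f"http://{host}:{port}/api/?{params}"
```

Fetching the resulting URL (e.g. with `urllib.request.urlopen`) updates the title field live; styling stays entirely inside vMix.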

⬇ Download vMix Lower Third Template (.xaml)
LLT Settings — Whisper model selection: tiny, base, small, medium, large
LLT Settings — vMix Title Output with host, port, input, and field mapping
LLT Settings — STT Engines: Whisper, Deepgram, AssemblyAI, Google Cloud, Azure, Vosk
LLT Settings — App info and version
Settings — Whisper model selection, vMix Title Output, and app info

Pick the engine that fits your workflow.

Each engine pairs a speech-to-text source with a translation provider. Some run fully local, some call the cloud directly from the app, and some stream through the local Python backend. The matrix shows which engines need the backend running.

Engine | STT | Translation | Backend?
Whisper | Whisper (local) | Mistral (local) | yes
Google REST | Google STT (direct) | Google Translate (direct) | no
Google gRPC | Google gRPC Streaming (via backend) | Google Translate (direct) | yes
Deepgram | Deepgram Streaming (direct) | Google Translate (direct) | no
AssemblyAI | AssemblyAI Streaming (direct) | Google Translate (direct) | no
Azure | Azure STT + Translation (direct, one call) | Azure | no
Vosk | Vosk (local, via backend) | Mistral (local) | yes

Google gRPC in detail: audio → Python backend → Google gRPC STT (true streaming, no VAD needed) → backend returns the recognized text (skipTranslation: true, so Mistral is skipped) → the app calls Google Translate REST directly from Swift (same API key) → original + translation are displayed.

Requirements: backend running (for gRPC) · google-cloud-speech installed in the Python venv · Service Account JSON · Google API key (for Translate). Restart the backend after enabling.
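For illustration, the direct Translate step uses Google's Cloud Translation v2 REST endpoint with key-based auth. A Python sketch of building that request (the app itself makes this call from Swift):

```python
from urllib.parse import urlencode

TRANSLATE_ENDPOINT = "https://translation.googleapis.com/language/translate/v2"

def translate_request_url(api_key, text, target, source=None):
    """Build a Google Cloud Translation v2 REST request URL (key-based auth)."""
    params = {"key": api_key, "q": text, "target": target}
    if source:
        params["source"] = source   # omit to let Google detect the language
    return f"{TRANSLATE_ENDPOINT}?{urlencode(params)}"
```

A GET on this URL returns JSON with the translation under `data.translations`, which the app pairs with the original text in the overlay.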

101 languages. Auto-detection included.

Whisper detects the source language automatically. Mistral translates to any of 101 target languages. Set source to "Auto" and just let it figure out what's being spoken.

🇩🇪 🇬🇧 🇪🇸 🇫🇷 🇮🇹 🇵🇹 🇳🇱 🇵🇱 🇷🇺 🇺🇦 🇹🇷 🇬🇷 🇨🇳 🇯🇵 🇰🇷 🇸🇦 🇮🇳 🇮🇱 🇹🇭 🇻🇳 🇮🇩 🇲🇾 🇸🇪 🇩🇰 🇳🇴 🇫🇮 🇨🇿 🇸🇰 🇭🇺 🇷🇴 🇧🇬 🇭🇷 🇷🇸 🇸🇮 🇱🇹 🇱🇻 🇪🇪 🇮🇸 🇬🇪 🇦🇲 🇦🇿 🇰🇿 🇿🇦 🇰🇪 🇪🇹 🇳🇬 🇧🇩 🇵🇭 🇭🇰
LLT — 101 languages available, Whisper recognizes, Mistral translates
101 built-in languages — no installation required, auto-detection included

Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Bashkir, Basque, Belarusian, Bengali, Bosnian, Breton, Bulgarian, Burmese, Cantonese, Catalan, Chinese, Croatian, Czech, Danish, Dutch, English, Estonian, Faroese, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Lao, Latin, Latvian, Lingala, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Māori, Marathi, Mongolian, Nepali, Norwegian, Nynorsk, Occitan, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tagalog, Tajik, Tamil, Tatar, Telugu, Thai, Tibetan, Turkish, Turkmen, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, Yiddish, Yoruba

What you need.

LLT runs AI models on your Mac. This requires Apple Silicon and enough RAM for the models.

System

macOS 13 (Ventura) or newer. Apple Silicon (M1, M2, M3, M4 — any variant). 8 GB RAM minimum (base model), 16 GB recommended (medium/large). Whisper models need 1–10 GB RAM depending on size. Python 3.11+ for the backend.

Installation

DMG with the app + Install-Backend.command script. The script creates a Python venv, installs Whisper, MLX, Mistral, and all dependencies. Choose your Whisper model during install (tiny to large) — change anytime in Settings > Whisper. One-time setup.

LLT DMG installer — drag to Applications LLT Settings — Storage management for AI models
Simple DMG installation + model storage management

One version. Everything included.

LLT — Local Live Translator for macOS

macOS · Apple Silicon
$29 one-time

🚀 Launch Price — $49 after launch

  • 101 languages (Whisper + Mistral)
  • Auto language detection
  • Mic, app audio, or system audio
  • Floating translation overlay
  • Text-to-speech output
  • Transcript & SRT export
  • Auto-gain / manual gain
  • Menu bar app — always accessible
  • Multi-language interface (DE, EN, ES, FR, IT)
  • 100% local — no cloud, no API keys
  • Smart Pause Detection (VAD)
  • vMix Title Output + Lower Third template
  • Free LLT-Mobile iOS app included (wireless mic & audio)
  • 7 STT engines (Whisper, Deepgram, AssemblyAI, Google REST, Google gRPC, Azure, Vosk)
  • All v1.x updates included

Version 1.2.1 · 14 days full use, then license required

Requires Apple Silicon Mac (M1/M2/M3/M4); 8 GB RAM minimum, 16 GB recommended. Backend installs automatically. Models download on first run (~4 GB).

More tools by Adelvo

LLT — Local Live Translator for macOS is part of the Adelvo family of professional media tools.

Real-time translation. Zero cloud dependency.

Whisper + Mistral running locally on Apple Silicon. Just works.

⬇ macOS · Apple Silicon 🛒 Buy
Version 1.2.1 · 14 days free trial — $29 one-time (regular $49)