Why this page exists. UAIE is offered to users in 31 languages. We are a small publishing house with one inventor, one technical director, and a worldwide audience. The honest position about the localisation behind UAIE is that some of it is native-grade, some of it is machine translation cross-checked through documented resources, and some of it is best-effort awaiting native-speaker review. This page tells you which is which, names the resources we have used, and invites you to suggest corrections where you find them. We do not pretend to be more than we are.
Translation in UAIE happens at two distinct surfaces, and the methodology behind each is different. The page that follows is organised accordingly.
Surface 1 — Interface translation. This surface covers the static product strings: module titles, button labels, status messages, footer text, and the words you are reading right now if you have switched UAIE into a non-English language. These 31 localisations are produced through the four-skill methodology described below, classified at one of four verification levels, and shown in the language selector as a small coloured badge. This is the surface most of this page covers.
Surface 2 — Document translation. When a user selects text inside a Microsoft Word document and runs the Translate module, the actual translation of the user's content is performed dynamically by Anthropic's Claude through the user's own API key. This is a different quality profile, governed by Claude's training rather than by our editorial verification. We document the relevant honest expectations for this surface in its own section near the foot of this page.
Conflating the two would be misleading. The badges and verification levels described in the next sections refer only to interface translation. Document translation has its own separate quality discussion further down.
Every supported language is classified at one of four verification levels, shown as a small coloured badge next to the language selector inside UAIE; the Status column of the table further down records each language's current level.
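To make the classification concrete, here is a minimal sketch of how a locale bundle and its badge level could be modelled. Everything in it is an illustrative assumption rather than UAIE's actual source: the type names, the string keys, and the Marathi rendering are ours for this page only, while the level names are the ones that appear in the status table further down.

```typescript
// Illustrative sketch only; the layout, type, and key names are assumptions,
// not UAIE's source. Three of the four verification levels named on this
// page appear in the status table below.
type VerificationLevel = "native" | "verified" | "best-effort";

interface LocaleBundle {
  code: string;                    // e.g. "mr" for Marathi
  level: VerificationLevel;        // drives the coloured badge in the selector
  strings: Record<string, string>; // the static interface strings, keyed by ID
}

const marathi: LocaleBundle = {
  code: "mr",
  level: "verified", // per the table below: Bhashini-verified, Phase 2 in v1.2.10
  strings: {
    "translate.selection": "निवड भाषांतरित करा", // illustrative rendering, not the shipped string
  },
};
```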
Translation work in UAIE is organised by language family, with a documented methodology for each. The methodologies are formalised as internal skills used by the publishing house's editorial workflow and are summarised here.
For Hindi, Marathi, Tamil, Telugu, Bengali, Kannada, and Urdu, the principal verification resource is Bhashini, the Government of India's machine-translation initiative covering 23+ Indian languages. Bhashini renders Indian-standard register more faithfully than international tools.
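The cross-check itself is an editorial step, not an automated pipeline, but its shape can be sketched. The sketch below is entirely illustrative: `fetchReference` is a hypothetical placeholder standing in for a Bhashini lookup, not Bhashini's documented API, and in practice the comparison ends in human judgement rather than string equality.

```typescript
// Sketch of the cross-check step. fetchReference is a hypothetical
// placeholder for a Bhashini lookup; it does not reproduce any real API.
async function crossCheck(
  sourceEn: string,
  candidate: string,
  targetLang: string,
  fetchReference: (text: string, lang: string) => Promise<string>,
): Promise<{ reference: string; flagged: boolean }> {
  const reference = await fetchReference(sourceEn, targetLang);
  // A crude divergence test: it only decides which strings a human
  // reviewer looks at first, it does not decide correctness.
  const flagged = reference.trim() !== candidate.trim();
  return { reference, flagged };
}
```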
For Sanskrit, the principal resource is the IIT Kanpur Gita Supersite, which carries the Bhagavad Gita, the principal Upanishads, and the Brahma Sutras with the commentaries of Shankara, Ramanuja, Madhva, and Sayana, in Devanagari, IAST transliteration, and English. Where a Sanskrit phrase has been rendered in scriptural register, the Gita Supersite is the gold-standard reference. We do not use general-purpose translators (Google Translate, DeepL, Microsoft Translator) for Sanskrit because they are not trained on Vedic register and produce unreliable output for classical texts.
For Yoruba, Igbo, Amharic, Zulu, Hausa, Swahili, and Afrikaans, the named first-stop tool is Lekhak, a translator that specialises in African languages. Specialist agencies (Lughayangu, White Globe, Stepes) are used for documents that need a native-quality pass beyond first-draft machine translation. Government-recognised certified services (RushTranslate, ImmiTranslate, Certified Translation India, Novatia Translations for Nigerian Yorùbá) are used for any document headed for a court, embassy, or ministry.
For Arabic, Chinese, English, French, Russian, Spanish, German, and Portuguese — the eight languages best supported by United Nations resources — the principal terminology source is UNTERM, the public multilingual UN terminology database. The Official Document System (ODS) is used to find precedent translations of phrases that have already been rendered officially. For first-draft translations and contextual verification, DeepL is the named first stop for European pairs, and Google Translate for Arabic and Chinese: Google Translate's training set includes UN documents directly, which makes it unusually accurate on diplomatic phrasing in those two pairs.
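As an illustration of the first-draft step for a European pair, the sketch below calls DeepL's public v2 REST API. The endpoint, header, and payload follow DeepL's published documentation, but whether our editorial workflow scripts the call this way is an assumption; the terminology pass against UNTERM and ODS happens afterwards, by hand.

```typescript
// First-draft sketch against DeepL's documented v2 API. How the editorial
// workflow actually invokes DeepL is an assumption; only the API shape here
// comes from DeepL's public documentation.
async function firstDraft(
  apiKey: string,
  text: string,
  targetLang: "FR" | "ES" | "DE" | "PT" | "RU",
): Promise<string> {
  const res = await fetch("https://api-free.deepl.com/v2/translate", {
    method: "POST",
    headers: {
      "Authorization": `DeepL-Auth-Key ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ text: [text], source_lang: "EN", target_lang: targetLang }),
  });
  if (!res.ok) throw new Error(`DeepL request failed: ${res.status}`);
  const data = await res.json();
  return data.translations[0].text; // the draft, before any UNTERM/ODS check
}
```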
For Gulf-jurisdiction Arabic, the principal authoritative sources are the UAE government portal U.ae, the Dubai government portal Dubai.ae, the Ministry of Human Resources and Emiratisation (MOHRE), and the Saudi government portal Saudi.sa. Gulf register differs from UN-Modern-Standard register; for institutional outreach in the Gulf, the government portal takes priority. Microsoft Translator is named first among AI tools because it is documented as integrated into Gulf government systems. Lekhak handles Urdu-to-Arabic with OCR for scanned material; Bluente handles legal documents where formatting must be preserved.
The classification below applies to the interface strings only — the static localisation of UAIE itself. For the Translate module's quality on these languages, see the document-translation section further down.
| Language | Status | Verification route |
|---|---|---|
| English (en) | Native | Source language. |
| Hindi (hi) | Verified | Bhashini. |
| Marathi (mr) | Verified | Bhashini; Phase 2 completed in v1.2.10. |
| Tamil (ta) | Best-effort | Bhashini, Anuvadini AI — Phase 2 committed in v1.2.10 via Claude-direct multilingual capability; Bhashini-routed verification still recommended. |
| Telugu (te) | Best-effort | Bhashini, Anuvadini AI — Phase 2 committed in v1.2.10 via Claude-direct multilingual capability; Bhashini-routed verification still recommended. |
| Bengali (bn) | Best-effort | Bhashini, Anuvadini AI — Phase 2 committed in v1.2.10 via Claude-direct multilingual capability; Bhashini-routed verification still recommended. |
| Kannada (kn) | Best-effort | Bhashini, Anuvadini AI — Phase 2 committed in v1.2.10 via Claude-direct multilingual capability; Bhashini-routed verification still recommended. |
| Sanskrit (sa) | Verified | IIT Kanpur Gita Supersite + modern technical-Sanskrit register; Phase 2 completed in v1.2.10. |
| Spanish (es) | Verified | UNTERM, DeepL. |
| French (fr) | Verified | UNTERM, DeepL. |
| Portuguese (pt) | Verified | UNTERM, DeepL. |
| Chinese (zh) | Verified | UNTERM, Google Translate (UN-trained); Phase 2 completed in v1.2.10. Native review welcomed. |
| Swahili (sw) | Best-effort | Lekhak, Lughayangu — Phase 2 committed in v1.2.10 as best-effort; native review pending. |
| Japanese (ja) | Verified | DeepL, Reverso Context. |
| German (de) | Verified | UNTERM, DeepL. |
| Arabic (ar) | Verified | UNTERM (Modern Standard) — Gulf-register variant pending. |
| Urdu (ur) | Verified | Bhashini (Indian-standard) — UAE-resident and Pakistani variants pending. |
| Italian (it) | Verified | DeepL, Reverso Context. |
| Dutch (nl) | Verified | DeepL. |
| Polish (pl) | Verified | DeepL. |
| Russian (ru) | Verified | UNTERM, DeepL. |
| Greek (el) | Verified | DeepL, Reverso Context. |
| Romanian (ro) | Best-effort | DeepL, Reverso Context — native review pending. |
| Ukrainian (uk) | Best-effort | DeepL — native review pending. |
| Turkish (tr) | Best-effort | Microsoft Translator, DeepL — native review pending. |
| Hausa (ha) | Best-effort | Lekhak — native review pending. |
| Yoruba (yo) | Best-effort | Lekhak, Lughayangu, Novatia — native review pending. |
| Igbo (ig) | Best-effort | Lekhak, Lughayangu, White Globe — native review pending. |
| Amharic (am) | Best-effort | Lekhak, RushTranslate, ImmiTranslate — native review pending. |
| Zulu (zu) | Best-effort | Lekhak, White Globe, RushTranslate — native review pending. |
| Afrikaans (af) | Best-effort | Google Translate (well-supported) — native review still recommended. |
The Translate module is a separate feature from interface translation. The user selects text in their Word document, presses Translate Selection or Entire Document, and receives the translated text inserted at the chosen location. The quality and the methodology behind this feature are different from the interface translations described above.
The Translate module sends the selected text to Anthropic's Claude through the user's own API key, which the user provides in the optional Settings panel inside UAIE. The translated text returns directly to the user's machine and is inserted into the document. The publishing house has no servers; nothing about the source text, the target language, or the translated output passes through any system controlled by Fiza Pathan Publishing. The Privacy Policy describes this in full.
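The shape of that round trip can be sketched with the two public interfaces involved: Office.js on the document side and Anthropic's Messages API on the model side. This is not UAIE's actual source; the prompt wording and the model name are placeholders, though the endpoint, headers, and insertion call are the documented public APIs.

```typescript
// Sketch of the Translate-module round trip: selection out, translation in.
// Not UAIE's source. The prompt and model name are placeholders; the
// Anthropic endpoint/headers and the Office.js calls are the public APIs.
async function translateSelection(apiKey: string, targetLanguage: string): Promise<void> {
  await Word.run(async (context) => {
    const selection = context.document.getSelection();
    selection.load("text");
    await context.sync();

    const res = await fetch("https://api.anthropic.com/v1/messages", {
      method: "POST",
      headers: {
        "x-api-key": apiKey,                 // the user's own key, from the Settings panel
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
        "anthropic-dangerous-direct-browser-access": "true", // needed for browser-side calls
      },
      body: JSON.stringify({
        model: "claude-sonnet-4-20250514",   // placeholder; the shipped model choice may differ
        max_tokens: 4096,
        messages: [{
          role: "user",
          content: `Translate the following text into ${targetLanguage}. Return only the translation.\n\n${selection.text}`,
        }],
      }),
    });
    if (!res.ok) throw new Error(`Claude request failed: ${res.status}`);
    const data = await res.json();

    // Replace the selection in place; no intermediate server of ours exists.
    selection.insertText(data.content[0].text, Word.InsertLocation.replace);
    await context.sync();
  });
}
```

The point of the sketch is the topology: the only two parties on the wire are the user's Word session and Anthropic's endpoint.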
The engine is therefore Claude — a large language model trained extensively on multilingual text. It is not a custom-built translation engine, and it is not an aggregator of multiple translation tools. The quality the user experiences is the quality of Claude's translation capability for the language pair in question, on that day.
Claude's translation capability is generally strong, often very strong, but it is not uniform across languages. Honest expectations for the Translate module, organised by language family:
The Translate module is appropriate for:
The Translate module is not appropriate for, and must not be relied upon for:
For any document that needs professional or certified translation, the same Tier 2 (specialist agency) and Tier 3 (government-recognised certified service) routes from the four interface-translation methodologies apply:
The verification badges on the language selector refer to the interface translation only. There is no equivalent badge for the Translate module's output, because the Translate module's quality is dynamic — it depends on the language pair, the source register, the length of the passage, and Claude's current capabilities. A user receiving a translation from the Translate module is responsible for judging whether that translation is fit for the purpose at hand, with the guidance above as the honest reference.
Every panel of UAIE for Microsoft Word carries a "Suggest a translation correction" link in its footer. The link opens a pre-filled email to uaie@gmail.com with the language already named and a template body asking for the current text, the suggested correction, and any notes on register or dialect.
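For the curious, the pre-filled link amounts to nothing more exotic than a `mailto:` URL. The subject line and template wording below are our assumptions for this sketch; the address and the three requested fields are the ones named above.

```typescript
// Sketch of the "Suggest a translation correction" link. Subject and body
// wording are assumptions; the address and fields are as described above.
function correctionMailto(languageName: string): string {
  const subject = `Translation correction: ${languageName}`;
  const body = [
    `Language: ${languageName}`,
    "Current text:",
    "Suggested correction:",
    "Notes on register or dialect:",
  ].join("\n");
  return `mailto:uaie@gmail.com?subject=${encodeURIComponent(subject)}&body=${encodeURIComponent(body)}`;
}
```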
The feedback link is for interface strings — the localisations classified by the four-tier verification model above. It is not a channel for reporting individual document-translation outputs from the Translate module, because those are dynamic and not stored on our side. If a user finds the Translate module weak in a particular language, the right response is to use one of the escalation routes above for that document, and to take the weakness as a signal that the language is in the variable band rather than the strong band.
Native-speaker corrections to interface strings are read by Blaise Martis (Technical Director) and applied in the next patch release. Where a contributor wishes to be credited, the credit appears in the changelog of the release that adopts the correction. Where the contributor prefers to remain anonymous, no credit is attached and the change is recorded simply as "native-speaker correction adopted."
Three things are independent of the localisation work and are unaffected by translation status: