Swahili to English Translation

This repository provides pre-trained multilingual translation models designed for fast and accurate translation between English and a range of languages, including Kurdish, Samoan, Xhosa, Lao, Corsican, Cebuano, Galician, Yiddish, Swahili, and Yoruba. The models translate text from these languages into English and vice versa, making them suitable for machine translation tasks, language localization projects, and custom translation tools.

Key Features:

  • Swahili to English Translation
  • English to Swahili Translation
  • Support for multiple languages (see full list below)
  • Pre-trained and optimized for accuracy
  • Easy integration into existing translation workflows

Other Languages:

  • Kurdish
  • Samoan
  • Xhosa
  • Lao
  • Corsican
  • Cebuano
  • Galician
  • Yiddish
  • Swahili
  • Yoruba

Use Cases:

  • Machine translation of texts from underrepresented languages
  • Localization of websites, apps, or documents into multiple languages
  • Developing multilingual NLP tools for research and production environments

Download Models

Requirements:

To run the models, you need to install ctranslate2 and sentencepiece:

pip install ctranslate2 sentencepiece

Simple Usage Example

The following code demonstrates how to load and use a model for translation from English to Kurdish (en → ku).

import sentencepiece as spm
from ctranslate2 import Translator

path_to_model = '<here_is_your_path_to_the_model>'  # directory containing the CTranslate2 model and the *.spm.model tokenizer files
source = 'en'
target = 'ku'

# Load the quantized CTranslate2 model and the SentencePiece tokenizers for both languages.
translator = Translator(path_to_model, compute_type='int8')
source_tokenizer = spm.SentencePieceProcessor(f'{path_to_model}/{source}.spm.model')
target_tokenizer = spm.SentencePieceProcessor(f'{path_to_model}/{target}.spm.model')

text = [
  'I need to make a phone call.',
  'Can I help you prepare food?',
  'We want to go for a walk.'
]

# Tokenize the source sentences into subword pieces and translate the whole batch.
input_tokens = source_tokenizer.EncodeAsPieces(text)
translator_output = translator.translate_batch(
  input_tokens,
  batch_type='tokens',
  beam_size=2,
  max_input_length=0,   # 0 disables input truncation
  max_decoding_length=256
)

# Keep the best hypothesis for each sentence and detokenize it back into plain text.
output_tokens = [item.hypotheses[0] for item in translator_output]
translation = target_tokenizer.DecodePieces(output_tokens)
print('\n'.join(translation))
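Since this page covers the Swahili model, here is a minimal sketch of the same pipeline wrapped in a reusable helper and run in the Swahili to English direction (sw → en). It assumes the Swahili model directory follows the same layout as above, i.e. it contains sw.spm.model and en.spm.model tokenizer files next to the CTranslate2 model; the path placeholder, the 'sw' language code, and the translate helper are illustrative, not part of the released package.

import sentencepiece as spm
from ctranslate2 import Translator

def translate(texts, path_to_model, source, target):
    # Load the CTranslate2 model and the source/target SentencePiece tokenizers
    # (assumes the directory contains <source>.spm.model and <target>.spm.model).
    translator = Translator(path_to_model, compute_type='int8')
    source_tokenizer = spm.SentencePieceProcessor(f'{path_to_model}/{source}.spm.model')
    target_tokenizer = spm.SentencePieceProcessor(f'{path_to_model}/{target}.spm.model')

    # Tokenize, translate, and detokenize the whole batch.
    input_tokens = source_tokenizer.EncodeAsPieces(texts)
    results = translator.translate_batch(input_tokens, beam_size=2, max_decoding_length=256)
    output_tokens = [result.hypotheses[0] for result in results]
    return target_tokenizer.DecodePieces(output_tokens)

# 'Habari za asubuhi.' is a common Swahili greeting meaning 'Good morning.'
print('\n'.join(translate(['Habari za asubuhi.'], '<here_is_your_path_to_the_model>', 'sw', 'en')))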

Keywords:

Kurdish to English Translation, Samoan to English Translation, Xhosa Translation, Lao to English, Corsican Translation, Cebuano Translation, Galician to English Translation, Yiddish to English Translation, Swahili Translation, Yoruba to English Translation, Multilingual Machine Translation, NLP, Neural Networks, eLearning

License

This project is licensed under the MIT License.

Contact:

If you have any questions, please email [email protected]