lingua

1. What does this library do?

Its task is simple: it tells you which language some provided textual data is written in. This is very useful as a preprocessing step for linguistic data in natural language processing applications such as text classification and spell checking. Other use cases include, for instance, routing e-mails to the right geographically located customer service department, based on the e-mails' languages.

2. Why does this library exist?

Language detection is often done as part of large machine learning frameworks or natural language processing applications. In cases where you don't need the full-fledged functionality of those systems or don't want to learn the ropes of those, a small flexible library comes in handy.

Python is widely used in natural language processing, so there are several comprehensive open source libraries for this task, such as Google's CLD 2 and CLD 3, langid, fastText and langdetect. Unfortunately, all of them except the last one have two major drawbacks:

  1. Detection only works with quite lengthy text fragments. For very short text snippets such as Twitter messages, they do not provide adequate results.
  2. The more languages take part in the decision process, the less accurate the detection results become.

Lingua aims at eliminating these problems. It requires almost no configuration and yields pretty accurate results on both long and short text, even on single words and phrases. It draws on both rule-based and statistical methods but does not use any dictionaries of words. It does not need a connection to any external API or service either. Once the library has been downloaded, it can be used completely offline.

3. Which languages are supported?

Compared to other language detection libraries, Lingua's focus is on quality over quantity, that is, getting detection right for a small set of languages first before adding new ones. Currently, the following 75 languages are supported:

  • Afrikaans
  • Albanian
  • Arabic
  • Armenian
  • Azerbaijani
  • Basque
  • Belarusian
  • Bengali
  • Norwegian Bokmal
  • Bosnian
  • Bulgarian
  • Catalan
  • Chinese
  • Croatian
  • Czech
  • Danish
  • Dutch
  • English
  • Esperanto
  • Estonian
  • Finnish
  • French
  • Ganda
  • Georgian
  • German
  • Greek
  • Gujarati
  • Hebrew
  • Hindi
  • Hungarian
  • Icelandic
  • Indonesian
  • Irish
  • Italian
  • Japanese
  • Kazakh
  • Korean
  • Latin
  • Latvian
  • Lithuanian
  • Macedonian
  • Malay
  • Maori
  • Marathi
  • Mongolian
  • Norwegian Nynorsk
  • Persian
  • Polish
  • Portuguese
  • Punjabi
  • Romanian
  • Russian
  • Serbian
  • Shona
  • Slovak
  • Slovene
  • Somali
  • Sotho
  • Spanish
  • Swahili
  • Swedish
  • Tagalog
  • Tamil
  • Telugu
  • Thai
  • Tsonga
  • Tswana
  • Turkish
  • Ukrainian
  • Urdu
  • Vietnamese
  • Welsh
  • Xhosa
  • Yoruba
  • Zulu

4. How good is it?

Lingua is able to report accuracy statistics for some bundled test data available for each supported language. The test data for each language is split into three parts:

  1. a list of single words with a minimum length of 5 characters
  2. a list of word pairs with a minimum length of 10 characters
  3. a list of complete grammatical sentences of various lengths

Both the language models and the test data have been created from separate documents of the Wortschatz corpora offered by Leipzig University, Germany. Data crawled from various news websites have been used for training, each corpus comprising one million sentences. For testing, corpora made of arbitrarily chosen websites have been used, each comprising ten thousand sentences. From each test corpus, random unsorted subsets of 1000 single words, 1000 word pairs and 1000 sentences have been extracted.

Given the generated test data, I have compared the detection results of Lingua, fastText, langdetect, langid, CLD 2 and CLD 3 running over the data of Lingua's 75 supported languages. Languages that are not supported by one of the other detectors are simply ignored for that detector during the detection process.

Each of the following sections contains two plots. The bar plot shows the detailed accuracy results for each supported language. The box plot illustrates the distribution of the accuracy values for each classifier. Each box spans the range within which the middle 50 % of the data lie. Within the colored boxes, the horizontal lines mark the median of the distributions.

4.1 Single word detection


[Box plot: Single Word Detection Performance]

[Bar plot: Single Word Detection Performance]



4.2 Word pair detection


[Box plot: Word Pair Detection Performance]

[Bar plot: Word Pair Detection Performance]



4.3 Sentence detection


[Box plot: Sentence Detection Performance]

[Bar plot: Sentence Detection Performance]



4.4 Average detection


[Box plot: Average Detection Performance]

[Bar plot: Average Detection Performance]

5. Why is it better than other libraries?

Every language detector uses a probabilistic n-gram model trained on the character distribution in some training corpus. Most libraries only use n-grams of size 3 (trigrams), which is satisfactory for detecting the language of longer text fragments consisting of multiple sentences. For short phrases or single words, however, trigrams are not enough. The shorter the input text is, the fewer n-grams are available. The probabilities estimated from such few n-grams are not reliable. This is why Lingua makes use of n-grams of sizes 1 up to 5, which results in much more accurate prediction of the correct language.
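
As a rough illustration (a minimal sketch, not Lingua's actual implementation), consider how few higher-order n-grams a single short word yields:

def char_ngrams(text, n):
    # Slide a window of length n across the text.
    return [text[i:i + n] for i in range(len(text) - n + 1)]

# char_ngrams("cat", 1)  ->  ['c', 'a', 't']
# char_ngrams("cat", 3)  ->  ['cat']
# char_ngrams("cat", 5)  ->  []
# A single trigram is far too little evidence for a reliable estimate,
# whereas the unigrams and bigrams still contribute useful signal.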

A second important difference is that Lingua does not only use such a statistical model, but also a rule-based engine. This engine first determines the alphabet of the input text and searches for characters which are unique to one or more languages. If exactly one language can be reliably chosen this way, the statistical model is not necessary anymore. In any case, the rule-based engine filters out languages that do not satisfy the conditions of the input text. Only then, in a second step, is the probabilistic n-gram model taken into consideration. This makes sense because loading fewer language models means less memory consumption and better runtime performance.
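
The following is a simplified sketch of the idea, with a made-up character table rather than Lingua's actual rule set:

# Hypothetical table: characters assumed unique to one language
# within this small candidate set.
UNIQUE_CHARS = {
    "ß": "GERMAN",
    "ő": "HUNGARIAN",
}

def filter_candidates(text, candidates):
    # If a unique character occurs, the decision is settled immediately.
    for char, language in UNIQUE_CHARS.items():
        if char in text and language in candidates:
            return {language}
    # Otherwise all candidates remain and the statistical model decides.
    return candidates

# filter_candidates("straße", {"GERMAN", "ENGLISH", "HUNGARIAN"})
# -> {'GERMAN'}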

In general, it is always a good idea to restrict the set of languages to be considered in the classification process using the respective API methods. If you know beforehand that certain languages are never to occur in an input text, do not let those take part in the classification process. The filtering mechanism of the rule-based engine is quite good; however, filtering based on your own knowledge of the input text is always preferable.

6. How to add it to your project?

Lingua is available in the Python Package Index and can be installed with:

pip install lingua-language-detector

7. How to use?

7.1 Basic usage

>>> from lingua import Language, LanguageDetectorBuilder
>>> languages = [Language.ENGLISH, Language.FRENCH, Language.GERMAN, Language.SPANISH]
>>> detector = LanguageDetectorBuilder.from_languages(*languages).build()
>>> detector.detect_language_of("languages are awesome")
Language.ENGLISH

7.2 Minimum relative distance

By default, Lingua returns the most likely language for a given input text. However, there are certain words that are spelled the same in more than one language. The word 'prologue', for instance, is both a valid English and French word. Lingua would output either English or French, which might be wrong in the given context. For cases like that, it is possible to specify a minimum relative distance that the logarithmized and summed up probabilities for each possible language have to satisfy. It can be stated in the following way:

>>> from lingua import Language, LanguageDetectorBuilder
>>> languages = [Language.ENGLISH, Language.FRENCH, Language.GERMAN, Language.SPANISH]
>>> detector = LanguageDetectorBuilder.from_languages(*languages).with_minimum_relative_distance(0.9).build()
>>> print(detector.detect_language_of("languages are awesome"))
None

Be aware that the distance between the language probabilities is dependent on the length of the input text. The longer the input text, the larger the distance between the languages. So if you want to classify very short text phrases, do not set the minimum relative distance too high. Otherwise, None will be returned most of the time as in the example above. This is the return value for cases where language detection is not reliably possible.
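
As a counterpart to the example above, a longer input pushes the probabilities further apart, so the same detector may well produce a definite result again (a sketch; the exact outcome depends on the input text):

# With the 0.9 minimum relative distance from above, a longer and
# clearly English sentence is likely to clear the threshold and
# return Language.ENGLISH instead of None:
detector.detect_language_of("languages are awesome and well worth studying")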

7.3 Confidence values

Knowing about the most likely language is nice, but how reliable is the computed likelihood? And how much less likely are the other examined languages in comparison to the most likely one? These questions can be answered as well:

>>> from lingua import Language, LanguageDetectorBuilder
>>> languages = [Language.ENGLISH, Language.FRENCH, Language.GERMAN, Language.SPANISH]
>>> detector = LanguageDetectorBuilder.from_languages(*languages).build()
>>> confidence_values = detector.compute_language_confidence_values("languages are awesome")
>>> for language, value in confidence_values:
...     print(f"{language.name}: {value:.2f}")
ENGLISH: 0.93
FRENCH: 0.04
GERMAN: 0.02
SPANISH: 0.01

In the example above, a list is returned containing those languages which the calling instance of LanguageDetector has been built from, sorted by their confidence value in descending order. Each value is a probability between 0.0 and 1.0. The probabilities of all languages will sum to 1.0. If the language is unambiguously identified by the rule engine, the value 1.0 will always be returned for this language. The other languages will receive a value of 0.0.
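
For instance, building a detector from languages written in different scripts lets the rule engine settle the decision on its own, which, following the behavior described above, yields the values 1.0 and 0.0:

>>> detector = LanguageDetectorBuilder.from_languages(Language.ENGLISH, Language.GREEK).build()
>>> for language, value in detector.compute_language_confidence_values("Γειά σου Κόσμε"):
...     print(f"{language.name}: {value:.2f}")
GREEK: 1.00
ENGLISH: 0.00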

There is also a method for returning the confidence value for one specific language only:

>>> from lingua import Language, LanguageDetectorBuilder
>>> languages = [Language.ENGLISH, Language.FRENCH, Language.GERMAN, Language.SPANISH]
>>> detector = LanguageDetectorBuilder.from_languages(*languages).build()
>>> confidence_value = detector.compute_language_confidence("languages are awesome", Language.FRENCH)
>>> print(f"{confidence_value:.2f}")
0.04

The value that this method computes is a number between 0.0 and 1.0. If the language is unambiguously identified by the rule engine, the value 1.0 will always be returned. If the given language is not supported by this detector instance, the value 0.0 will always be returned.
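
For example, with the detector from above, which has not been built from Italian:

>>> detector.compute_language_confidence("languages are awesome", Language.ITALIAN)
0.0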

7.4 Eager loading versus lazy loading

By default, Lingua uses lazy-loading to load only those language models on demand which are considered relevant by the rule-based filter engine. For web services, for instance, it is rather beneficial to preload all language models into memory to avoid unexpected latency while waiting for the service response. If you want to enable the eager-loading mode, you can do it like this:

LanguageDetectorBuilder.from_all_languages().with_preloaded_language_models().build()

Multiple instances of LanguageDetector share the same language models in memory which are accessed asynchronously by the instances.

7.5 Low accuracy mode versus high accuracy mode

Lingua's high detection accuracy comes at the cost of being noticeably slower than other language detectors. The large language models also consume significant amounts of memory. These requirements might not be feasible for systems running low on resources. If you want to classify mostly long texts or need to save resources, you can enable a low accuracy mode that loads only a small subset of the language models into memory:

LanguageDetectorBuilder.from_all_languages().with_low_accuracy_mode().build()

The downside of this approach is that detection accuracy for short texts consisting of less than 120 characters will drop significantly. However, detection accuracy for texts which are longer than 120 characters will remain mostly unaffected.

In high accuracy mode (the default), the language detector consumes approximately 800 MB of memory if all language models are loaded. In low accuracy mode, memory consumption is reduced to approximately 60 MB.

An alternative for a smaller memory footprint and faster performance is to reduce the set of languages when building the language detector. In most cases, it is not advisable to build the detector from all supported languages. When you have knowledge about the texts you want to classify you can almost always rule out certain languages as impossible or unlikely to occur.

7.6 Detection of multiple languages in mixed-language texts

In contrast to most other language detectors, Lingua is able to detect multiple languages in mixed-language texts. This feature can yield quite reasonable results, but it is still in an experimental state and the detection result is therefore highly dependent on the input text. It works best in high-accuracy mode with multiple long words for each language. The shorter the phrases and their words are, the less accurate the results become. Reducing the set of languages when building the language detector can also improve accuracy for this task if the languages occurring in the text match the languages supported by the respective language detector instance.

>>> from lingua import Language, LanguageDetectorBuilder
>>> languages = [Language.ENGLISH, Language.FRENCH, Language.GERMAN]
>>> detector = LanguageDetectorBuilder.from_languages(*languages).build()
>>> sentence = "Parlez-vous français? " + \
...            "Ich spreche Französisch nur ein bisschen. " + \
...            "A little bit is better than nothing."
>>> for result in detector.detect_multiple_languages_of(sentence):
...     print(f"{result.language.name}: '{sentence[result.start_index:result.end_index]}'")
FRENCH: 'Parlez-vous français? '
GERMAN: 'Ich spreche Französisch nur ein bisschen. '
ENGLISH: 'A little bit is better than nothing.'

In the example above, a list of DetectionResult is returned. Each entry in the list describes a contiguous single-language text section, providing start and end indices of the respective substring.
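
Since each DetectionResult also exposes a word_count attribute, the sections can be summarized without slicing the input text, for example:

# word_count reports how many words belong to each section:
for result in detector.detect_multiple_languages_of(sentence):
    print(f"{result.language.name}: {result.word_count} words")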

7.7 Methods to build the LanguageDetector

There might be classification tasks where you know beforehand that your language data is definitely not written in Latin, for instance. The detection accuracy can become better in such cases if you exclude certain languages from the decision process or just explicitly include relevant languages:

from lingua import LanguageDetectorBuilder, Language, IsoCode639_1, IsoCode639_3

# Include all languages available in the library.
LanguageDetectorBuilder.from_all_languages()

# Include only languages that are not yet extinct (= currently excludes Latin).
LanguageDetectorBuilder.from_all_spoken_languages()

# Include only languages written with Cyrillic script.
LanguageDetectorBuilder.from_all_languages_with_cyrillic_script()

# Exclude only the Spanish language from the decision algorithm.
LanguageDetectorBuilder.from_all_languages_without(Language.SPANISH)

# Only decide between English and German.
LanguageDetectorBuilder.from_languages(Language.ENGLISH, Language.GERMAN)

# Select languages by ISO 639-1 code.
LanguageDetectorBuilder.from_iso_codes_639_1(IsoCode639_1.EN, IsoCode639_1.DE)

# Select languages by ISO 639-3 code.
LanguageDetectorBuilder.from_iso_codes_639_3(IsoCode639_3.ENG, IsoCode639_3.DEU)

8. API reference

class ConfidenceValue(typing.NamedTuple):

This class describes a language's confidence value.

Attributes:
    language (Language): The language associated with this confidence value.
    value (float): The language's confidence value, which lies between 0.0 and 1.0.
ConfidenceValue(language: lingua.Language, value: float)

Create a new instance of ConfidenceValue(language, value). Being a named tuple, its fields can also be accessed by index: language is field 0 and value is field 1.

class DetectionResult(typing.NamedTuple):

This class describes a contiguous single-language text section within a possibly mixed-language text.

Attributes:
    start_index (int): The start index of the identified single-language substring.
    end_index (int): The end index of the identified single-language substring.
    word_count (int): The number of words being part of the identified single-language substring.
    language (Language): The detected language of the identified single-language substring.
DetectionResult(start_index: int, end_index: int, word_count: int, language: lingua.Language)

Create a new instance of DetectionResult(start_index, end_index, word_count, language). Being a named tuple, its fields can also be accessed by index, in the order listed above.

class LanguageDetectorBuilder:

This class configures and creates an instance of LanguageDetector.

LanguageDetectorBuilder(languages: FrozenSet[lingua.Language])
@classmethod
def from_all_languages(cls) -> lingua.LanguageDetectorBuilder:

Create and return an instance of LanguageDetectorBuilder with all built-in languages.

@classmethod
def from_all_spoken_languages(cls) -> lingua.LanguageDetectorBuilder:

Create and return an instance of LanguageDetectorBuilder with all built-in spoken languages.

@classmethod
def from_all_languages_with_arabic_script(cls) -> lingua.LanguageDetectorBuilder:

Create and return an instance of LanguageDetectorBuilder with all built-in languages supporting the Arabic script.

@classmethod
def from_all_languages_with_cyrillic_script(cls) -> lingua.LanguageDetectorBuilder:

Create and return an instance of LanguageDetectorBuilder with all built-in languages supporting the Cyrillic script.

@classmethod
def from_all_languages_with_devanagari_script(cls) -> lingua.LanguageDetectorBuilder:

Create and return an instance of LanguageDetectorBuilder with all built-in languages supporting the Devanagari script.

@classmethod
def from_all_languages_with_latin_script(cls) -> lingua.LanguageDetectorBuilder:

Create and return an instance of LanguageDetectorBuilder with all built-in languages supporting the Latin script.

@classmethod
def from_all_languages_without(cls, *languages: lingua.Language) -> lingua.LanguageDetectorBuilder:

Create and return an instance of LanguageDetectorBuilder with all built-in languages except those passed to this method.

@classmethod
def from_languages(cls, *languages: lingua.Language) -> lingua.LanguageDetectorBuilder:

Create and return an instance of LanguageDetectorBuilder with the languages passed to this method.

@classmethod
def from_iso_codes_639_1(cls, *iso_codes: lingua.IsoCode639_1) -> lingua.LanguageDetectorBuilder:

Create and return an instance of LanguageDetectorBuilder with the languages specified by the ISO 639-1 codes passed to this method.

Raises:
    ValueError: if fewer than two ISO codes are specified

@classmethod
def from_iso_codes_639_3(cls, *iso_codes: lingua.IsoCode639_3) -> lingua.LanguageDetectorBuilder:

Create and return an instance of LanguageDetectorBuilder with the languages specified by the ISO 639-3 codes passed to this method.

Raises:
    ValueError: if fewer than two ISO codes are specified

def with_minimum_relative_distance(self, distance: float) -> lingua.LanguageDetectorBuilder:

Set the desired value for the minimum relative distance measure.

By default, Lingua returns the most likely language for a given input text. However, there are certain words that are spelled the same in more than one language. The word 'prologue', for instance, is both a valid English and French word. Lingua would output either English or French which might be wrong in the given context. For cases like that, it is possible to specify a minimum relative distance that the logarithmized and summed up probabilities for each possible language have to satisfy.

Be aware that the distance between the language probabilities is dependent on the length of the input text. The longer the input text, the larger the distance between the languages. So if you want to classify very short text phrases, do not set the minimum relative distance too high. Otherwise, None will be returned most of the time; this is the return value for cases where language detection is not reliably possible.

Raises:
    ValueError: if distance is smaller than 0.0 or greater than 0.99

def with_preloaded_language_models(self) -> lingua.LanguageDetectorBuilder:

Preload all language models when creating the LanguageDetector instance.

By default, Lingua uses lazy-loading to load only those language models on demand which are considered relevant by the rule-based filter engine. For web services, for instance, it is rather beneficial to preload all language models into memory to avoid unexpected latency while waiting for the service response. This method allows switching between these two loading modes.

def with_low_accuracy_mode(self) -> lingua.LanguageDetectorBuilder:

Disable the high accuracy mode in order to save memory and increase performance.

By default, Lingua's high detection accuracy comes at the cost of loading large language models into memory which might not be feasible for systems running low on resources.

This method disables the high accuracy mode so that only a small subset of language models is loaded into memory. The downside of this approach is that detection accuracy for short texts consisting of less than 120 characters will drop significantly. However, detection accuracy for texts which are longer than 120 characters will remain mostly unaffected.

def build(self) -> lingua.LanguageDetector:

Create and return the configured LanguageDetector instance.

@dataclass
class LanguageDetector:

This class detects the language of text.

LanguageDetector(_languages: FrozenSet[lingua.Language], _minimum_relative_distance: float, _is_low_accuracy_mode_enabled: bool, _languages_with_unique_characters: FrozenSet[lingua.Language], _one_language_alphabets: Dict[lingua.language._Alphabet, lingua.Language], _unigram_language_models: Dict[lingua.Language, numpy.ndarray], _bigram_language_models: Dict[lingua.Language, numpy.ndarray], _trigram_language_models: Dict[lingua.Language, numpy.ndarray], _quadrigram_language_models: Dict[lingua.Language, numpy.ndarray], _fivegram_language_models: Dict[lingua.Language, numpy.ndarray], _cache: Dict[lingua.Language, Dict[str, Optional[float]]])
def detect_language_of(self, text: str) -> Optional[lingua.Language]:

Detect the language of text.

Args:
    text (str): The text whose language should be identified.

Returns:
    The identified language. If the language cannot be reliably detected, None is returned.

def detect_multiple_languages_of(self, text: str) -> List[lingua.DetectionResult]:

Attempt to detect multiple languages in mixed-language text.

This feature is experimental and under continuous development.

A list of DetectionResult is returned containing an entry for each contiguous single-language text section as identified by the library. Each entry consists of the identified language, a start index and an end index. The indices denote the substring that has been identified as a contiguous single-language text section.

Args:
    text (str): The text whose language should be identified.

Returns:
    A list of detection results. Each result contains the identified language, the start index and end index of the identified single-language substring.

def compute_language_confidence_values(self, text: str) -> List[lingua.ConfidenceValue]:

Compute confidence values for each language supported by this detector for the given text.

The confidence values denote how likely it is that the given text has been written in any of the languages supported by this detector.

A list is returned containing those languages which the calling instance of LanguageDetector has been built from. The entries are sorted by their confidence value in descending order. Each value is a probability between 0.0 and 1.0. The probabilities of all languages will sum to 1.0. If the language is unambiguously identified by the rule engine, the value 1.0 will always be returned for this language. The other languages will receive a value of 0.0.

Args:
    text (str): The text for which to compute confidence values.

Returns:
    A list of 2-element tuples. Each tuple contains a language and the associated confidence value.

def compute_language_confidence(self, text: str, language: lingua.Language) -> float:

Compute the confidence value for the given language and input text.

The confidence value denotes how likely it is that the given text has been written in the given language. The value that this method computes is a number between 0.0 and 1.0. If the language is unambiguously identified by the rule engine, the value 1.0 will always be returned. If the given language is not supported by this detector instance, the value 0.0 will always be returned.

Args:
    text (str): The text for which to compute the confidence value.
    language (Language): The language for which to compute the confidence value.

Returns:
    A float value between 0.0 and 1.0.

class IsoCode639_1(enum.Enum):

This enum specifies the ISO 639-1 code representations for the supported languages.

ISO 639 is a standardized nomenclature used to classify languages.

class IsoCode639_3(enum.Enum):

This enum specifies the ISO 639-3 code representations for the supported languages.

ISO 639 is a standardized nomenclature used to classify languages.

@total_ordering
class Language(enum.Enum):

This enum specifies the 75 currently supported languages which can be detected by Lingua.

AFRIKAANS = Language.AFRIKAANS
ALBANIAN = Language.ALBANIAN
ARABIC = Language.ARABIC
ARMENIAN = Language.ARMENIAN
AZERBAIJANI = Language.AZERBAIJANI
BASQUE = Language.BASQUE
BELARUSIAN = Language.BELARUSIAN
BENGALI = Language.BENGALI
BOKMAL = Language.BOKMAL
BOSNIAN = Language.BOSNIAN
BULGARIAN = Language.BULGARIAN
CATALAN = Language.CATALAN
CHINESE = Language.CHINESE
CROATIAN = Language.CROATIAN
CZECH = Language.CZECH
DANISH = Language.DANISH
DUTCH = Language.DUTCH
ENGLISH = Language.ENGLISH
ESPERANTO = Language.ESPERANTO
ESTONIAN = Language.ESTONIAN
FINNISH = Language.FINNISH
FRENCH = Language.FRENCH
GANDA = Language.GANDA
GEORGIAN = Language.GEORGIAN
GERMAN = Language.GERMAN
GREEK = Language.GREEK
GUJARATI = Language.GUJARATI
HEBREW = Language.HEBREW
HINDI = Language.HINDI
HUNGARIAN = Language.HUNGARIAN
ICELANDIC = Language.ICELANDIC
INDONESIAN = Language.INDONESIAN
IRISH = Language.IRISH
ITALIAN = Language.ITALIAN
JAPANESE = Language.JAPANESE
KAZAKH = Language.KAZAKH
KOREAN = Language.KOREAN
LATIN = Language.LATIN
LATVIAN = Language.LATVIAN
LITHUANIAN = Language.LITHUANIAN
MACEDONIAN = Language.MACEDONIAN
MALAY = Language.MALAY
MAORI = Language.MAORI
MARATHI = Language.MARATHI
MONGOLIAN = Language.MONGOLIAN
NYNORSK = Language.NYNORSK
PERSIAN = Language.PERSIAN
POLISH = Language.POLISH
PORTUGUESE = Language.PORTUGUESE
PUNJABI = Language.PUNJABI
ROMANIAN = Language.ROMANIAN
RUSSIAN = Language.RUSSIAN
SERBIAN = Language.SERBIAN
SHONA = Language.SHONA
SLOVAK = Language.SLOVAK
SLOVENE = Language.SLOVENE
SOMALI = Language.SOMALI
SOTHO = Language.SOTHO
SPANISH = Language.SPANISH
SWAHILI = Language.SWAHILI
SWEDISH = Language.SWEDISH
TAGALOG = Language.TAGALOG
TAMIL = Language.TAMIL
TELUGU = Language.TELUGU
THAI = Language.THAI
TSONGA = Language.TSONGA
TSWANA = Language.TSWANA
TURKISH = Language.TURKISH
UKRAINIAN = Language.UKRAINIAN
URDU = Language.URDU
VIETNAMESE = Language.VIETNAMESE
WELSH = Language.WELSH
XHOSA = Language.XHOSA
YORUBA = Language.YORUBA
ZULU = Language.ZULU
@classmethod
def all(cls) -> FrozenSet[lingua.Language]:

Return a set of all supported languages.

@classmethod
def all_spoken_ones(cls) -> FrozenSet[lingua.Language]:

Return a set of all supported spoken languages.

@classmethod
def all_with_arabic_script(cls) -> FrozenSet[lingua.Language]:

Return a set of all languages supporting the Arabic script.

@classmethod
def all_with_cyrillic_script(cls) -> FrozenSet[lingua.Language]:

Return a set of all languages supporting the Cyrillic script.

@classmethod
def all_with_devanagari_script(cls) -> FrozenSet[lingua.Language]:

Return a set of all languages supporting the Devanagari script.

@classmethod
def all_with_latin_script(cls) -> FrozenSet[lingua.Language]:

Return a set of all languages supporting the Latin script.

@classmethod
def from_iso_code_639_1(cls, iso_code: lingua.IsoCode639_1) -> lingua.Language:

Return the language associated with the ISO 639-1 code passed to this method.

Raises:
    ValueError: if there is no language for the given ISO code

@classmethod
def from_iso_code_639_3(cls, iso_code: lingua.IsoCode639_3) -> lingua.Language:

Return the language associated with the ISO 639-3 code passed to this method.

Raises:
    ValueError: if there is no language for the given ISO code

class LanguageModelFilesWriter:

This class creates language model files and writes them to a directory.

LanguageModelFilesWriter()
@classmethod
def create_and_write_language_model_files(cls, input_file_path: pathlib.Path, output_directory_path: pathlib.Path, language: lingua.Language, char_class: str):

Create language model files for accuracy report generation and write them to a directory.

Args:
    input_file_path: The path to a txt file used for language model creation. The assumed encoding of the txt file is UTF-8.
    output_directory_path: The path to an existing directory where the language model files are to be written.
    language: The language for which to create language models.
    char_class: A regex character class such as \p{L} to restrict the set of characters that the language models are built from.

Raises:
    Exception: if the input file path is not absolute or does not point to an existing txt file; if the input file's encoding is not UTF-8; if the output directory path is not absolute or does not point to an existing directory; if the character class cannot be compiled to a valid regular expression
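
A usage sketch under assumed inputs; the corpus file and output directory below are hypothetical and must exist as absolute paths:

from pathlib import Path
from lingua import Language, LanguageModelFilesWriter

LanguageModelFilesWriter.create_and_write_language_model_files(
    input_file_path=Path("/data/corpora/english_corpus.txt"),  # hypothetical path
    output_directory_path=Path("/data/models/english"),        # hypothetical path
    language=Language.ENGLISH,
    char_class=r"\p{L}",
)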

class TestDataFilesWriter:

This class creates test data files for accuracy report generation and writes them to a directory.

TestDataFilesWriter()
@classmethod
def create_and_write_test_data_files(cls, input_file_path: pathlib.Path, output_directory_path: pathlib.Path, char_class: str, maximum_lines: int):

Create test data files for accuracy report generation and write them to a directory.

Args:
    input_file_path: The path to a txt file used for test data creation. The assumed encoding of the txt file is UTF-8.
    output_directory_path: The path to an existing directory where the test data files are to be written.
    char_class: A regex character class such as \p{L} to restrict the set of characters that the test data are built from.
    maximum_lines: The maximum number of lines each test data file should have.

Raises:
    Exception: if the input file path is not absolute or does not point to an existing txt file; if the input file's encoding is not UTF-8; if the output directory path is not absolute or does not point to an existing directory; if the character class cannot be compiled to a valid regular expression
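
A matching usage sketch under the same assumptions; both paths below are hypothetical and must exist as absolute paths:

from pathlib import Path
from lingua import TestDataFilesWriter

TestDataFilesWriter.create_and_write_test_data_files(
    input_file_path=Path("/data/corpora/english_corpus.txt"),  # hypothetical path
    output_directory_path=Path("/data/test-data/english"),     # hypothetical path
    char_class=r"\p{L}",
    maximum_lines=1000,
)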