TextDistance

TextDistance logo


TextDistance -- a Python library for comparing distance between two or more sequences using many algorithms.

Features:

  • 30+ algorithms
  • Pure python implementation
  • Simple usage
  • Comparison of more than two sequences at once
  • Some algorithms have more than one implementation in one class
  • Optional numpy usage for maximum speed

Algorithms

Edit based

| Algorithm | Class | Functions |
| --- | --- | --- |
| Hamming | Hamming | hamming |
| MLIPNS | MLIPNS | mlipns |
| Levenshtein | Levenshtein | levenshtein |
| Damerau-Levenshtein | DamerauLevenshtein | damerau_levenshtein |
| Jaro-Winkler | JaroWinkler | jaro_winkler, jaro |
| Strcmp95 | StrCmp95 | strcmp95 |
| Needleman-Wunsch | NeedlemanWunsch | needleman_wunsch |
| Gotoh | Gotoh | gotoh |
| Smith-Waterman | SmithWaterman | smith_waterman |

Token based

| Algorithm | Class | Functions |
| --- | --- | --- |
| Jaccard index | Jaccard | jaccard |
| Sørensen-Dice coefficient | Sorensen | sorensen, sorensen_dice, dice |
| Tversky index | Tversky | tversky |
| Overlap coefficient | Overlap | overlap |
| Tanimoto distance | Tanimoto | tanimoto |
| Cosine similarity | Cosine | cosine |
| Monge-Elkan | MongeElkan | monge_elkan |
| Bag distance | Bag | bag |

Sequence based

| Algorithm | Class | Functions |
| --- | --- | --- |
| longest common subsequence similarity | LCSSeq | lcsseq |
| longest common substring similarity | LCSStr | lcsstr |
| Ratcliff-Obershelp similarity | RatcliffObershelp | ratcliff_obershelp |

Compression based

Normalized compression distance with different compression algorithms.

Classic compression algorithms:

| Algorithm | Class | Function |
| --- | --- | --- |
| Arithmetic coding | ArithNCD | arith_ncd |
| RLE | RLENCD | rle_ncd |
| BWT RLE | BWTRLENCD | bwtrle_ncd |

Normal compression algorithms:

| Algorithm | Class | Function |
| --- | --- | --- |
| Square Root | SqrtNCD | sqrt_ncd |
| Entropy | EntropyNCD | entropy_ncd |

Work-in-progress algorithms that compare two strings as arrays of bits:

| Algorithm | Class | Function |
| --- | --- | --- |
| BZ2 | BZ2NCD | bz2_ncd |
| LZMA | LZMANCD | lzma_ncd |
| ZLib | ZLIBNCD | zlib_ncd |

See blog post for more details about NCD.
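
A hedged sketch of calling the compression-based distances above (the function names are taken from the tables; exact values depend on the compressor, so only the expected ordering is noted in the comments):

import textdistance

textdistance.zlib_ncd.distance('test', 'test')
# identical strings: expected to give a small distance

textdistance.zlib_ncd.distance('test', 'nothing in common')
# unrelated strings: expected to give a noticeably larger distance

textdistance.entropy_ncd.distance('test', 'text')
# entropy-based variant with the same interface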

Phonetic

| Algorithm | Class | Functions |
| --- | --- | --- |
| MRA | MRA | mra |
| Editex | Editex | editex |

Simple

| Algorithm | Class | Functions |
| --- | --- | --- |
| Prefix similarity | Prefix | prefix |
| Postfix similarity | Postfix | postfix |
| Length distance | Length | length |
| Identity similarity | Identity | identity |
| Matrix similarity | Matrix | matrix |

Installation

Stable

Only pure python implementation:

pip install textdistance

With extra libraries for maximum speed:

pip install "textdistance[extras]"

With all libraries (required for benchmarking and testing):

pip install "textdistance[benchmark]"

With algorithm specific extras:

pip install "textdistance[Hamming]"

Algorithms with available extras: DamerauLevenshtein, Hamming, Jaro, JaroWinkler, Levenshtein.

Dev

Via pip:

pip install -e git+https://github.com/life4/textdistance.git#egg=textdistance

Or clone repo and install with some extras:

git clone https://github.com/life4/textdistance.git
pip install -e ".[benchmark]"

Usage

All algorithms have 2 interfaces:

  1. Class with algorithm-specific params for customizing.
  2. Class instance with default params for quick and simple usage.

All algorithms have some common methods:

  1. .distance(*sequences) -- calculate distance between sequences.
  2. .similarity(*sequences) -- calculate similarity for sequences.
  3. .maximum(*sequences) -- maximum possible value for distance and similarity. For any sequences: distance + similarity == maximum (see the short check after this list).
  4. .normalized_distance(*sequences) -- normalized distance between sequences. The return value is a float between 0 and 1, where 0 means the sequences are equal and 1 means they are totally different.
  5. .normalized_similarity(*sequences) -- normalized similarity for sequences. The return value is a float between 0 and 1, where 0 means the sequences are totally different and 1 means they are equal.
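
As a quick check of the distance + similarity == maximum relation, here is a minimal sketch using the prebuilt Hamming instance from the Examples section below (the values in the comments follow from the definitions above):

import textdistance

textdistance.hamming.maximum('test', 'text')
# 4 -- the length of the longer sequence

textdistance.hamming.distance('test', 'text') + textdistance.hamming.similarity('test', 'text')
# 4 -- distance (1) + similarity (3) == maximum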

Most common init arguments:

  1. qval -- q-value for splitting sequences into q-grams. Possible values:
    • 1 (default) -- compare sequences by chars.
    • 2 or more -- transform sequences into q-grams.
    • None -- split sequences by words.
  2. as_set -- for token-based algorithms (see the sketch after this list):
    • True -- t and ttt are equal.
    • False (default) -- t and ttt are different.
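
A hedged sketch of qval and as_set in action, using Jaccard as an example token-based class; the commented values are what the definitions above imply, not verified outputs:

import textdistance

# qval=None: sequences are split into words before comparison
textdistance.Jaccard(qval=None).similarity('a cat sat', 'a cat ran')
# 0.5 expected: 2 shared words out of 4 distinct words

# as_set=True: repeated tokens are collapsed, so 't' and 'ttt' compare as equal
textdistance.Jaccard(as_set=True).normalized_similarity('t', 'ttt')
# 1.0 expected

textdistance.Jaccard(as_set=False).normalized_similarity('t', 'ttt')
# less than 1.0 expected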

Examples

For example, Hamming distance:

import textdistance

textdistance.hamming('test', 'text')
# 1

textdistance.hamming.distance('test', 'text')
# 1

textdistance.hamming.similarity('test', 'text')
# 3

textdistance.hamming.normalized_distance('test', 'text')
# 0.25

textdistance.hamming.normalized_similarity('test', 'text')
# 0.75

textdistance.Hamming(qval=2).distance('test', 'text')
# 2

All other algorithms have the same interface.
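
For instance, the same calls with the prebuilt Levenshtein instance (the two strings differ by a single substitution, so the expected values in the comments mirror the Hamming example above):

import textdistance

textdistance.levenshtein.distance('test', 'text')
# 1

textdistance.levenshtein.similarity('test', 'text')
# 3

textdistance.levenshtein.normalized_similarity('test', 'text')
# 0.75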

Articles

A few articles with examples of how to use textdistance in the real world:

Extra libraries

For the main algorithms, textdistance tries to call known external libraries (fastest first) if they are available (installed on your system) and applicable (the external implementation can compare the given type of sequences). Install textdistance with extras to enable this feature.

You can disable this behaviour by passing external=False on init:

import textdistance
hamming = textdistance.Hamming(external=False)
hamming('text', 'testit')
# 3
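
A hedged sketch comparing the pure python and external implementations with timeit (external=True is assumed here to be the default-enabled counterpart of external=False; absolute timings depend on your system and installed extras):

import timeit
import textdistance

slow = textdistance.Hamming(external=False)  # always use the pure python implementation
fast = textdistance.Hamming(external=True)   # allow faster external libraries when installed

print(timeit.timeit(lambda: slow('text', 'testit'), number=10_000))
print(timeit.timeit(lambda: fast('text', 'testit'), number=10_000))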

Supported libraries:

  1. Distance
  2. jellyfish
  3. py_stringmatching
  4. pylev
  5. Levenshtein
  6. pyxDamerauLevenshtein

Algorithms:

  1. DamerauLevenshtein
  2. Hamming
  3. Jaro
  4. JaroWinkler
  5. Levenshtein

Benchmarks

Without extras installation:

| algorithm | library | time |
| --- | --- | --- |
| DamerauLevenshtein | rapidfuzz | 0.00312 |
| DamerauLevenshtein | jellyfish | 0.00591 |
| DamerauLevenshtein | pyxdameraulevenshtein | 0.03335 |
| DamerauLevenshtein | textdistance | 0.83524 |
| Hamming | Levenshtein | 0.00038 |
| Hamming | rapidfuzz | 0.00044 |
| Hamming | jellyfish | 0.00091 |
| Hamming | distance | 0.00812 |
| Hamming | textdistance | 0.03531 |
| Jaro | rapidfuzz | 0.00092 |
| Jaro | jellyfish | 0.00191 |
| Jaro | textdistance | 0.07365 |
| JaroWinkler | rapidfuzz | 0.00094 |
| JaroWinkler | jellyfish | 0.00195 |
| JaroWinkler | textdistance | 0.07501 |
| Levenshtein | rapidfuzz | 0.00099 |
| Levenshtein | Levenshtein | 0.00122 |
| Levenshtein | jellyfish | 0.00254 |
| Levenshtein | pylev | 0.15688 |
| Levenshtein | distance | 0.28669 |
| Levenshtein | textdistance | 0.53902 |

Total: 24 libs.

Yeah, so slow. Use TextDistance in production only with extras.

TextDistance uses these benchmark results for algorithm optimization and tries to call the fastest external library first (if possible).

You can run the benchmark manually on your system:

pip install "textdistance[benchmark]"
python3 -m textdistance.benchmark

TextDistance shows the benchmark results table for your system and saves the library priorities into a libraries.json file in TextDistance's folder. This file is then used by textdistance to call the fastest algorithm implementation. A default libraries.json is already included in the package.

Running tests

All you need is task. See Taskfile.yml for the list of available commands. For example, to run tests including third-party library usage, execute task pytest-external:run.

Contributing

PRs are welcome!

  • Found a bug? Fix it!
  • Want to add more algorithms? Sure! Just give it the same interface as the other algorithms in the lib and add some tests.
  • Can you make something faster? Great! Just avoid external dependencies and remember that everything should work not only with strings.
  • Something else you think would be good? Do it! Just make sure that CI passes and that everything in the README is still applicable (interface, features, and so on).
  • Have no time to code? Tell your friends and subscribers about textdistance. More users, more contributions, more amazing features.

Thank you ❤️
