Multi-Template-Matching is a Python package to perform object recognition in images using one or several smaller template images.
The main function, MTM.matchTemplates, returns the best predicted locations given a score threshold (score_threshold) and/or the expected number of objects in the image.
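As an illustration, here is a minimal sketch of a call to MTM.matchTemplates; the parameter names (score_threshold, N_object, maxOverlap) follow the package documentation but may differ slightly between versions, so treat them as indicative rather than definitive:

```python
import cv2
import MTM

# Load the search image and one or several smaller template images
image = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

# Templates are provided as a list of (label, template_image) pairs
listTemplates = [("object", template)]

# Keep detections above a score threshold and/or limit the number of returned objects
hits = MTM.matchTemplates(
    listTemplates,
    image,
    score_threshold=0.6,  # minimum matching score to keep a detection
    N_object=5,           # expected number of objects in the image
    maxOverlap=0.25,      # maximum allowed overlap between detections
)

print(hits)  # one entry per detection: template label, bounding box and score
```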
The opencl branch contains some tests using the UMat object to run the matching on the GPU, but it is actually slower. This can be expected for small datasets, as transferring the data between the CPU and GPU is itself slow.
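As a rough sketch of the idea (not the code from that branch), OpenCV's transparent OpenCL API lets matchTemplate operate on UMat objects, but the upload and download steps add overhead:

```python
import cv2

image = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

# Upload the arrays to the OpenCL device as UMat objects; this transfer has a cost
u_image = cv2.UMat(image)
u_template = cv2.UMat(template)

# The matching itself can run on the GPU when OpenCL is available
u_scores = cv2.matchTemplate(u_image, u_template, cv2.TM_CCOEFF_NORMED)

# Download the score map back to the CPU; for small images these transfers
# can outweigh the speed-up of the matching step
scores = u_scores.get()
```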
Using pip in a Python environment: pip install Multi-Template-Matching
Once installed, import MTM should work.
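For example, a quick sanity check that the installation succeeded (a sketch; the distribution name is the one used with pip above):

```python
# Verify that the package imports and report the installed version
import importlib.metadata
import MTM

print(MTM.__file__)                                           # location of the installed package
print(importlib.metadata.version("Multi-Template-Matching"))  # installed version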
Example Jupyter notebooks can be downloaded from the tutorial folder of the GitHub repository and executed in the newly configured Python environment.
The wiki section of the repository contains a mini API documentation describing the key functions of the package.
The website of the project contains some more general documentation.
Check out the Jupyter notebook tutorials for examples of how to use the package.
You can run the tutorials online using Binder, no configuration needed! (Click the Binder banner at the top of this page.)
To run the tutorials locally, install the package using pip as described above, then clone the repository (or download and unzip it).
Finally, open a Jupyter notebook session in the repository folder to open and execute the notebooks.
The wiki section of this related repository also provides some information about the implementation.
If you use this implementation for your research, please cite:
Thomas, L.S.V., Gehrig, J. Multi-template matching: a versatile tool for object-localization in microscopy images.
BMC Bioinformatics 21, 44 (2020). https://doi.org/10.1186/s12859-020-3363-7
Download the citation as a .ris file from the journal website, here.
Previous GitHub releases were archived to Zenodo, but the recommended way to install a specific version is via pip.
See this repo for the implementation as a Fiji plugin.
See here for a KNIME workflow using Multi-Template-Matching.
This work was part of the PhD project of Laurent Thomas under the supervision of Dr. Jochen Gehrig at ACQUIFER.
This project has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 721537 ImageInLife.