A repository of resources used in our tutorials and guides ⚡️
This library is a collection of useful scripts for integrating with our platform tools or for general computer vision (CV) applications. The scripts are written in various programming languages and are available under the MIT License.
First, users should clone this repository and change into the `resources` directory:

```bash
git clone https://github.com/datature/resources.git
cd resources
```
In each folder, there will be a `requirements.txt` file that contains the dependencies required for the Python scripts to run. Users can install the dependencies by running the following command:

```bash
pip install -r requirements.txt
```
It is recommended to use a virtual environment to install the dependencies. For more information on virtual environments, please refer to Python venv.
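For example, a minimal setup on Linux or macOS might look like the following (the environment folder name `.venv` is just an illustrative choice):

```bash
# create and activate an isolated virtual environment (folder name is illustrative)
python3 -m venv .venv
source .venv/bin/activate

# install the dependencies listed for that folder's scripts
pip install -r requirements.txt
```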
Each folder contains a `README.md` file with instructions for running the scripts. Please refer to the `README.md` file for more information.
We welcome contributions to this repository. Please refer to `CONTRIBUTING.md` for more information on the areas you can contribute to and on coding best practice guidelines.
This section contains example scripts that can be used for integrating with our platform tools or for general CV applications.
| Topic | Description |
|---|---|
| Active Learning | For performing active learning on your dataset. |
| Data Preprocessing | Useful tools for preprocessing your data. |
| Inference Dashboard | For easy visualizations of inference results. |
| Learning | Sample scripts for one-shot and few-shot learning. |
| Tracking | For single- and multi-object tracking in videos. |
This section contains guides and code snippets on how to use our Datature Python SDK for automating tasks without having to interact with our Nexus platform. The SDK is available on PyPI. It can be installed by running the following command:
```bash
pip install -U datature
```
The SDK can be invoked either in Python or through the command-line interface (CLI). For more information on the SDK and its advanced features, please refer to the SDK documentation.
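As a rough sketch of what calling the SDK from Python might look like, the snippet below authenticates a client and lists a project's assets. The import path, `Client` class, and method names here are assumptions for illustration only; the SDK documentation is the authoritative reference.

```python
# Hypothetical sketch: the import path, Client class, and method names
# below are assumptions for illustration, not the documented interface.
from datature.nexus import Client

# authenticate with your project's secret key (placeholder value)
client = Client("YOUR_SECRET_KEY")

# retrieve a project and list its assets (hypothetical method names)
project = client.get_project("YOUR_PROJECT_ID")
for asset in project.assets.list():
    print(asset)
```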
This section contains scripts showing how to deploy your models trained on Nexus for inference. We currently support the following deployment methods:
| Method | Description |
|---|---|
| Edge Deployment | For deploying models on edge devices such as Raspberry Pi & NVIDIA Jetson. |
| Inference API | Where models are hosted on our servers and inference can be performed through API calls. |
| Local Inference | For running simple inference scripts on your local machine. |
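As an illustration of the local inference workflow, here is a minimal sketch that runs a TensorFlow SavedModel on a single image. The export format, file paths, input size, and signature name are assumptions for illustration; the scripts in the Local Inference folder document the exact steps for models exported from Nexus.

```python
# Minimal local-inference sketch; the SavedModel export format, paths,
# input size, and dtype are assumptions for illustration only.
import numpy as np
import tensorflow as tf
from PIL import Image

# load the exported model (hypothetical path)
model = tf.saved_model.load("path/to/saved_model")
infer = model.signatures["serving_default"]

# load and preprocess an image (hypothetical input size and dtype)
image = Image.open("sample.jpg").convert("RGB").resize((640, 640))
input_tensor = tf.convert_to_tensor(np.array(image)[np.newaxis, ...], dtype=tf.uint8)

# run inference and inspect the raw output tensors
outputs = infer(input_tensor)
for name, tensor in outputs.items():
    print(name, tensor.shape)
```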