πŸ‘‹ hello

We write your reusable computer vision tools. Whether you need to load your dataset from your hard drive, draw detections on an image or video, or count how many detections are in a zone, you can count on us! 🀝

πŸ’» install

Pip install the supervision package in a Python 3.8 to 3.11 environment.

pip install supervision[desktop]

Read more about desktop, headless, and local installation in our guide.

πŸ”₯ quickstart

>>> import supervision as sv
>>> from ultralytics import YOLO

>>> model = YOLO('yolov8s.pt')
>>> result = model(IMAGE)[0]
>>> detections = sv.Detections.from_ultralytics(result)

>>> len(detections)
5
πŸ‘‰ more detections utils
  • Easily switch inference pipeline between supported object detection/instance segmentation models

    >>> import supervision as sv
    >>> from segment_anything import sam_model_registry, SamAutomaticMaskGenerator
    
    >>> sam = sam_model_registry[MODEL_TYPE](checkpoint=CHECKPOINT_PATH).to(device=DEVICE)
    >>> mask_generator = SamAutomaticMaskGenerator(sam)
    >>> sam_result = mask_generator.generate(IMAGE)
    >>> detections = sv.Detections.from_sam(sam_result=sam_result)
  • Advanced filtering

    >>> detections = detections[detections.class_id == 0]
    >>> detections = detections[detections.confidence > 0.5]
    >>> detections = detections[detections.area > 1000]
  • Image annotation

    >>> import supervision as sv
    
    >>> box_annotator = sv.BoxAnnotator()
    >>> annotated_frame = box_annotator.annotate(
    ...     scene=IMAGE,
    ...     detections=detections
    ... )
>>> import supervision as sv

>>> dataset = sv.DetectionDataset.from_yolo(
...     images_directory_path='...',
...     annotations_directory_path='...',
...     data_yaml_path='...'
... )

>>> dataset.classes
['dog', 'person']

>>> len(dataset)
1000
πŸ‘‰ more dataset utils
  • Load object detection/instance segmentation datasets in one of the supported formats

    >>> dataset = sv.DetectionDataset.from_yolo(
    ...     images_directory_path='...',
    ...     annotations_directory_path='...',
    ...     data_yaml_path='...'
    ... )
    
    >>> dataset = sv.DetectionDataset.from_pascal_voc(
    ...     images_directory_path='...',
    ...     annotations_directory_path='...'
    ... )
    
    >>> dataset = sv.DetectionDataset.from_coco(
    ...     images_directory_path='...',
    ...     annotations_path='...'
    ... )
  • Loop over dataset entries

    >>> for name, image, labels in dataset:
    ...     print(labels.xyxy)
    
    array([[404.      , 719.      , 538.      , 884.5     ],
           [155.      , 497.      , 404.      , 833.5     ],
           [ 20.154999, 347.825   , 416.125   , 915.895   ]], dtype=float32)
  • Split dataset for training, testing, and validation

    >>> train_dataset, test_dataset = dataset.split(split_ratio=0.7)
    >>> test_dataset, valid_dataset = test_dataset.split(split_ratio=0.5)
    
    >>> len(train_dataset), len(test_dataset), len(valid_dataset)
    (700, 150, 150)
  • Merge multiple datasets

    >>> ds_1 = sv.DetectionDataset(...)
    >>> len(ds_1)
    100
    >>> ds_1.classes
    ['dog', 'person']
    
    >>> ds_2 = sv.DetectionDataset(...)
    >>> len(ds_2)
    200
    >>> ds_2.classes
    ['cat']
    
    >>> ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
    >>> len(ds_merged)
    300
    >>> ds_merged.classes
    ['cat', 'dog', 'person']
  • Save object detection/instance segmentation datasets in one of the supported formats

    >>> dataset.as_yolo(
    ...     images_directory_path='...',
    ...     annotations_directory_path='...',
    ...     data_yaml_path='...'
    ... )
    
    >>> dataset.as_pascal_voc(
    ...     images_directory_path='...',
    ...     annotations_directory_path='...'
    ... )
    
    >>> dataset.as_coco(
    ...     images_directory_path='...',
    ...     annotations_path='...'
    ... )
  • Convert labels between supported formats

    >>> sv.DetectionDataset.from_yolo(
    ...     images_directory_path='...',
    ...     annotations_directory_path='...',
    ...     data_yaml_path='...'
    ... ).as_pascal_voc(
    ...     images_directory_path='...',
    ...     annotations_directory_path='...'
    ... )
  • Load classification datasets in one of the supported formats

    >>> cs = sv.ClassificationDataset.from_folder_structure(
    ...     root_directory_path='...'
    ... )
  • Save classification datasets in one of the supported formats

    >>> cs.as_folder_structure(
    ...     root_directory_path='...'
    ... )
>>> import numpy as np
>>> import supervision as sv

>>> dataset = sv.DetectionDataset.from_yolo(...)

>>> def callback(image: np.ndarray) -> sv.Detections:
...     ...

>>> confusion_matrix = sv.ConfusionMatrix.benchmark(
...     dataset = dataset,
...     callback = callback
... )

>>> confusion_matrix.matrix
array([
    [0., 0., 0., 0.],
    [0., 1., 0., 1.],
    [0., 1., 1., 0.],
    [1., 1., 0., 0.]
])
πŸ‘‰ more metrics
  • Mean average precision (mAP) for object detection tasks.

    >>> import numpy as np
    >>> import supervision as sv
    
    >>> dataset = sv.DetectionDataset.from_yolo(...)
    
    >>> def callback(image: np.ndarray) -> sv.Detections:
    ...     ...
    
    >>> mean_average_precision = sv.MeanAveragePrecision.benchmark(
    ...     dataset = dataset,
    ...     callback = callback
    ... )
    
    >>> mean_average_precision.map50_95
    0.433
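Once predictions have been matched to ground truth, filling the matrix is a simple accumulation of (true class, predicted class) pairs. A toy sketch (the IoU-based matching step, which `ConfusionMatrix.benchmark` performs internally, is assumed to have already produced the pairs; the data is hypothetical):

```python
import numpy as np

num_classes = 3
# One extra row/column collects unmatched detections and missed ground truths.
matrix = np.zeros((num_classes + 1, num_classes + 1))

# (true_class, predicted_class) pairs; index num_classes means "no match".
pairs = [
    (0, 0),            # correct prediction for class 0
    (1, 1),            # correct prediction for class 1
    (1, 2),            # class 1 object misclassified as class 2
    (2, num_classes),  # class 2 object missed entirely (false negative)
]

for true_c, pred_c in pairs:
    matrix[true_c, pred_c] += 1

print(matrix)
```

Diagonal entries count correct classifications; off-diagonal entries in the last row and column separate false positives from false negatives.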

🎬 tutorials

Traffic Analysis with YOLOv8 and ByteTrack - Vehicle Detection and Tracking

Created: 6 Sep 2023 | Updated: 6 Sep 2023

In this video, we explore real-time traffic analysis using YOLOv8 and ByteTrack to detect and track vehicles on aerial images. Harnessing the power of Python and Supervision, we delve deep into assigning cars to specific entry zones and understanding their direction of movement. By visualizing their paths, we gain insights into traffic flow across bustling roundabouts...


SAM - Segment Anything Model by Meta AI: Complete Guide

Created: 11 Apr 2023 | Updated: 11 Apr 2023

Discover the incredible potential of Meta AI's Segment Anything Model (SAM)! We dive into SAM, an efficient and promptable model for image segmentation, which has revolutionized computer vision tasks. With over 1 billion masks on 11M licensed and privacy-respecting images, SAM's zero-shot performance is often competitive with or even superior to prior fully supervised results...

πŸ’œ built with supervision

Did you build something cool using supervision? Let us know!

football-players-tracking-25.mp4
traffic_analysis_result.mov

πŸ“š documentation

Visit our documentation page to learn how supervision can help you build computer vision applications faster and more reliably.

πŸ† contribution

We love your input! Please see our contributing guide to get started. Thank you πŸ™ to all our contributors!

