Merge branch 'develop' into feature/add-opacity-to-polygon-zone-annotator
LinasKo committed Sep 19, 2024
2 parents 7ad0bc3 + e3eeb5f commit 85e053e
Showing 28 changed files with 999 additions and 1,047 deletions.
9 changes: 8 additions & 1 deletion CONTRIBUTING.md
@@ -6,7 +6,7 @@ We are actively improving this library to reduce the amount of work you need to

## Code of Conduct

Please read and adhere to our [Code of Conduct](CODE_OF_CONDUCT.md). This document outlines the expected behavior for all participants in our project.
Please read and adhere to our [Code of Conduct](https://supervision.roboflow.com/latest/code_of_conduct/). This document outlines the expected behavior for all participants in our project.

## Table of Contents

@@ -86,6 +86,7 @@ Use conventional commit messages to clearly describe your changes. The format is
<type>[optional scope]: <description>

Common types include:

- feat: A new feature
- fix: A bug fix
- docs: Documentation only changes
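As an illustrative sketch (not an official tool), the `<type>[optional scope]: <description>` format above can be checked with a small regular expression. The accepted types beyond the `feat`, `fix`, and `docs` listed here are assumptions based on common conventional-commit usage:

```python
import re

# Hypothetical checker for the `<type>[optional scope]: <description>` format.
# Types beyond feat/fix/docs are assumed, not taken from this guide.
COMMIT_RE = re.compile(
    r"^(?P<type>feat|fix|docs|style|refactor|test|chore)"
    r"(?:\((?P<scope>[\w\-]+)\))?"  # optional scope in parentheses
    r": (?P<description>.+)$"
)

def is_conventional(message: str) -> bool:
    """Return True if the first line of `message` matches the format."""
    first_line = message.splitlines()[0]
    return COMMIT_RE.match(first_line) is not None
```

For example, `is_conventional("feat(annotators): add opacity to PolygonZoneAnnotator")` accepts, while a message with no type prefix is rejected.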
@@ -128,13 +129,16 @@ PRs must pass all tests and linting requirements before they can be merged.
Before starting your work on the project, set up your development environment:

1. Clone your fork of the project:

```bash
git clone https://github.com/YOUR_USERNAME/supervision.git
cd supervision
```

Replace `YOUR_USERNAME` with your GitHub username.

2. Create and activate a virtual environment:

```bash
python3 -m venv .venv
source .venv/bin/activate
@@ -143,17 +147,20 @@ Before starting your work on the project, set up your development environment:
3. Install Poetry:

Using pip:

```bash
pip install -U pip setuptools
pip install poetry
```

Or using pipx (recommended for global installation):

```bash
pipx install poetry
```

4. Install project dependencies:

```bash
poetry install
```
2 changes: 1 addition & 1 deletion demo.ipynb
@@ -1357,7 +1357,7 @@
}
],
"source": [
"IMAGE_NAME = list(ds.images.keys())[0]\n",
"IMAGE_NAME = next(iter(ds.images.keys()))\n",
"\n",
"image = ds.images[IMAGE_NAME]\n",
"annotations = ds.annotations[IMAGE_NAME]\n",
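The `demo.ipynb` change above swaps `list(ds.images.keys())[0]` for `next(iter(ds.images.keys()))`. A self-contained sketch of why (a plain dict stands in for `ds.images`, which is an assumption made for illustration):

```python
# Plain dict standing in for the dataset's `ds.images` mapping
# (an assumption for illustration; only the first key matters here).
images = {"frame_000.jpg": "<image 0>", "frame_001.jpg": "<image 1>"}

# Old form: builds a full list of all keys, then discards everything
# but the first element -- O(n) extra memory.
first_via_list = list(images.keys())[0]

# New form: advances an iterator once -- O(1), no intermediate list.
first_via_iter = next(iter(images))

assert first_via_list == first_via_iter == "frame_000.jpg"
```

Both rely on dicts preserving insertion order (guaranteed since Python 3.7), so they return the same key.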
74 changes: 37 additions & 37 deletions docs/changelog.md

Large diffs are not rendered by default.

4 changes: 2 additions & 2 deletions docs/deprecated.md
@@ -25,12 +25,12 @@ These features are phased out due to better alternatives or potential issues in

### 0.22.0

- [`Detections.from_froboflow`](detection/core.md/#supervision.detection.core.Detections.from_roboflow) is removed as of `supervision-0.22.0`. Use [`Detections.from_inference`](detection/core.md/#supervision.detection.core.Detections.from_inference) instead.
- [`Detections.from_roboflow`](detection/core.md/#supervision.detection.core.Detections.from_roboflow) is removed as of `supervision-0.22.0`. Use [`Detections.from_inference`](detection/core.md/#supervision.detection.core.Detections.from_inference) instead.
- The method `Color.white()` was removed as of `supervision-0.22.0`. Use the constant `Color.WHITE` instead.
- The method `Color.black()` was removed as of `supervision-0.22.0`. Use the constant `Color.BLACK` instead.
- The method `Color.red()` was removed as of `supervision-0.22.0`. Use the constant `Color.RED` instead.
- The method `Color.green()` was removed as of `supervision-0.22.0`. Use the constant `Color.GREEN` instead.
- The method `Color.blue()` was removed as of `supervision-0.22.0`. Use the constant `Color.BLUE` instead.
- The method [`ColorPalette.default()`](draw/color.md/#supervision.draw.color.ColorPalette.default) was removed as of `supervision-0.22.0`. Use the constant [`ColorPalette.DEFAULT`](draw/color.md/#supervision.draw.color.ColorPalette.DEFAULT) instead.
- The method `ColorPalette.default()` was removed as of `supervision-0.22.0`. Use the constant [`ColorPalette.DEFAULT`](draw/color/#supervision.draw.color.ColorPalette.DEFAULT) instead.
- `BoxAnnotator` was removed as of `supervision-0.22.0`, however `BoundingBoxAnnotator` was immediately renamed to `BoxAnnotator`. Use [`BoxAnnotator`](detection/annotators.md/#supervision.annotators.core.BoxAnnotator) and [`LabelAnnotator`](detection/annotators.md/#supervision.annotators.core.LabelAnnotator) instead of the old `BoxAnnotator`.
- The method [`FPSMonitor.__call__`](utils/video.md/#supervision.utils.video.FPSMonitor.__call__) was removed as of `supervision-0.22.0`. Use the attribute [`FPSMonitor.fps`](utils/video.md/#supervision.utils.video.FPSMonitor.fps) instead.
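The `Color` factory-method removals listed above all follow one pattern: a method that built a new object on every call is replaced by a shared constant. A minimal stand-in sketch of that pattern (an illustrative assumption, not the library's actual `Color` implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Color:
    """Minimal stand-in for `sv.Color` (the real class has more fields)."""
    r: int
    g: int
    b: int

# Constants replace the removed factory methods: one shared immutable
# instance instead of constructing a new object on every call.
Color.WHITE = Color(255, 255, 255)
Color.BLACK = Color(0, 0, 0)

# Migration: Color.white()  ->  Color.WHITE  (removed in supervision-0.22.0)
white = Color.WHITE
```

Because the instances are frozen, sharing a single constant is safe: no caller can mutate it out from under another.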
16 changes: 8 additions & 8 deletions docs/how_to/detect_and_annotate.md
@@ -6,7 +6,7 @@ comments: true

Supervision provides a seamless process for annotating predictions generated by various
object detection and segmentation models. This guide shows how to perform inference
with the [Inference](https://github.com/roboflow/inference),
[Ultralytics](https://github.com/ultralytics/ultralytics) or
[Transformers](https://github.com/huggingface/transformers) packages. Following this,
you'll learn how to import these predictions into Supervision and use them to annotate
@@ -69,7 +69,7 @@ Now that we have predictions from a model, we can load them into Supervision.

=== "Inference"

We can do so using the [`sv.Detections.from_inference`](detection/core/#supervision.detection.core.Detections.from_inference) method, which accepts model results from both detection and segmentation models.
We can do so using the [`sv.Detections.from_inference`](/latest/detection/core/#supervision.detection.core.Detections.from_inference) method, which accepts model results from both detection and segmentation models.

```{ .py hl_lines="2 8" }
import cv2
@@ -84,7 +84,7 @@ Now that we have predictions from a model, we can load them into Supervision.

=== "Ultralytics"

We can do so using the [`sv.Detections.from_ultralytics`](detection/core/#supervision.detection.core.Detections.from_ultralytics) method, which accepts model results from both detection and segmentation models.
We can do so using the [`sv.Detections.from_ultralytics`](/latest/detection/core/#supervision.detection.core.Detections.from_ultralytics) method, which accepts model results from both detection and segmentation models.

```{ .py hl_lines="2 8" }
import cv2
@@ -99,7 +99,7 @@ Now that we have predictions from a model, we can load them into Supervision.

=== "Transformers"

We can do so using the [`sv.Detections.from_transformers`](detection/core/#supervision.detection.core.Detections.from_transformers) method, which accepts model results from both detection and segmentation models.
We can do so using the [`sv.Detections.from_transformers`](/latest/detection/core/#supervision.detection.core.Detections.from_transformers) method, which accepts model results from both detection and segmentation models.

```{ .py hl_lines="2 19-21" }
import torch
@@ -135,7 +135,7 @@ You can load predictions from other computer vision frameworks and libraries usi

## Annotate Image with Detections

Finally, we can annotate the image with the predictions. Since we are working with an object detection model, we will use the [`sv.BoxAnnotator`](/latest/annotators/#supervision.annotators.core.BoxAnnotator) and [`sv.LabelAnnotator`](/latest/annotators/#supervision.annotators.core.LabelAnnotator) classes.
Finally, we can annotate the image with the predictions. Since we are working with an object detection model, we will use the [`sv.BoxAnnotator`](/latest/detection/annotators/#supervision.annotators.core.BoxAnnotator) and [`sv.LabelAnnotator`](/latest/detection/annotators/#supervision.annotators.core.LabelAnnotator) classes.

=== "Inference"

@@ -217,7 +217,7 @@ Finally, we can annotate the image with the predictions. Since we are working wi

## Display Custom Labels

By default, [`sv.LabelAnnotator`](/latest/annotators/#supervision.annotators.core.LabelAnnotator)
By default, [`sv.LabelAnnotator`](/latest/detection/annotators/#supervision.annotators.core.LabelAnnotator)
will label each detection with its `class_name` (if possible) or `class_id`. You can
override this behavior by passing a list of custom `labels` to the `annotate` method.
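A `labels` list is simply one string per detection, in detection order. A stdlib-only sketch of the usual pattern (the plain lists below stand in for fields of an `sv.Detections` object and are assumptions for illustration):

```python
# Plain lists standing in for detections' class names and confidences
# (assumed stand-ins for `detections["class_name"]` and
# `detections.confidence`).
class_names = ["person", "dog", "car"]
confidences = [0.94, 0.87, 0.66]

# One label per detection, in the same order the annotator draws them.
labels = [
    f"{name} {conf:.2f}"
    for name, conf in zip(class_names, confidences)
]

assert labels == ["person 0.94", "dog 0.87", "car 0.66"]
```

The resulting strings are what would be passed as the `labels` argument to the annotator's `annotate` method.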

@@ -320,9 +320,9 @@ override this behavior by passing a list of custom `labels` to the `annotate` me
## Annotate Image with Segmentations

If you are running the segmentation model
[`sv.MaskAnnotator`](/latest/annotators/#supervision.annotators.core.MaskAnnotator)
[`sv.MaskAnnotator`](/latest/detection/annotators/#supervision.annotators.core.MaskAnnotator)
is a drop-in replacement for
[`sv.BoxAnnotator`](/latest/annotators/#supervision.annotators.core.BoxAnnotator)
[`sv.BoxAnnotator`](/latest/detection/annotators/#supervision.annotators.core.BoxAnnotator)
that will allow you to draw masks instead of boxes.

=== "Inference"
4 changes: 2 additions & 2 deletions docs/how_to/track_objects.md
@@ -148,7 +148,7 @@ enabling the continuous following of the object's motion path across different f

Annotating the video with tracking IDs helps in distinguishing and following each object
distinctly. With the
[`sv.LabelAnnotator`](/latest/annotators.md/#supervision.annotators.core.LabelAnnotator)
[`sv.LabelAnnotator`](/latest/detection/annotators/#supervision.annotators.core.LabelAnnotator)
in Supervision, we can overlay the tracker IDs and class labels on the detected objects,
offering a clear visual representation of each object's class and unique identifier.

@@ -230,7 +230,7 @@ offering a clear visual representation of each object's class and unique identif

Adding traces to the video involves overlaying the historical paths of the detected
objects. This feature, powered by the
[`sv.TraceAnnotator`](/latest/annotators/#supervision.annotators.core.TraceAnnotator),
[`sv.TraceAnnotator`](/latest/detection/annotators/#supervision.annotators.core.TraceAnnotator),
allows for visualizing the trajectories of objects, helping in understanding the
movement patterns and interactions between objects in the video.
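Conceptually, a trace is a bounded history of recent positions kept per tracker ID. The sketch below shows that bookkeeping with stdlib containers (an illustration of the idea, not `sv.TraceAnnotator`'s actual internals):

```python
from collections import defaultdict, deque

TRACE_LENGTH = 3  # hypothetical cap on points kept per object

# tracker_id -> recent (x, y) centers; the oldest point is dropped
# automatically once the deque is full.
traces: dict[int, deque] = defaultdict(lambda: deque(maxlen=TRACE_LENGTH))

# Simulated per-frame detections: (tracker_id, center_x, center_y).
frames = [
    [(1, 10, 10), (2, 50, 50)],
    [(1, 12, 11), (2, 48, 52)],
    [(1, 14, 12)],              # object 2 missed in this frame
    [(1, 16, 13), (2, 44, 56)],
]

for detections in frames:
    for tracker_id, x, y in detections:
        traces[tracker_id].append((x, y))

# Only the last TRACE_LENGTH points survive for object 1.
assert list(traces[1]) == [(12, 11), (14, 12), (16, 13)]
assert list(traces[2]) == [(50, 50), (48, 52), (44, 56)]
```

Drawing a polyline through each deque per frame yields exactly the fading trail effect described above; the `maxlen` bound is what keeps traces short rather than spanning the whole video.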

42 changes: 17 additions & 25 deletions docs/index.md
@@ -35,18 +35,15 @@ You can install `supervision` in a

!!! example "pip install (recommended)"

=== "headless"
The headless installation of `supervision` is designed for environments where graphical user interfaces (GUI) are not needed, making it more lightweight and suitable for server-side applications.
=== "pip"

```bash
pip install supervision
```

=== "desktop"
If you require the full version of `supervision` with GUI support you can install the desktop version. This version includes the GUI components of OpenCV, allowing you to display images and videos on the screen.
[![version](https://badge.fury.io/py/supervision.svg)](https://badge.fury.io/py/supervision)
[![downloads](https://img.shields.io/pypi/dm/supervision)](https://pypistats.org/packages/supervision)
[![license](https://img.shields.io/pypi/l/supervision)](https://github.com/roboflow/supervision/blob/main/LICENSE.md)
[![python-version](https://img.shields.io/pypi/pyversions/supervision)](https://badge.fury.io/py/supervision)

```bash
pip install "supervision[desktop]"
pip install supervision
```

!!! example "conda/mamba install"
@@ -81,11 +78,8 @@ You can install `supervision` in a
source venv/bin/activate
pip install --upgrade pip

# headless install
# installation
pip install -e "."

# desktop install
pip install -e ".[desktop]"
```

=== "poetry"
@@ -99,60 +93,58 @@ You can install `supervision` in a
poetry env use python3.10
poetry shell

# headless install
# installation
poetry install

# desktop install
poetry install --extras "desktop"
```

## 🚀 Quickstart

<div class="grid cards" markdown>

- __Detect and Annotate__
- **Detect and Annotate**

---

Annotate predictions from a range of object detection and segmentation models

[:octicons-arrow-right-24: Tutorial](how_to/detect_and_annotate.md)

- __Track Objects__
- **Track Objects**

---

Discover how to enhance video analysis by implementing seamless object tracking

[:octicons-arrow-right-24: Tutorial](how_to/track_objects.md)

- __Detect Small Objects__
- **Detect Small Objects**

---

Learn how to detect small objects in images

[:octicons-arrow-right-24: Tutorial](how_to/detect_small_objects.md)

- > __Count Objects Crossing Line__
- **Count Objects Crossing Line**

---

Explore methods to accurately count and analyze objects crossing a predefined line

- > __Filter Objects in Zone__
[:octicons-arrow-right-24: Notebook](https://supervision.roboflow.com/latest/notebooks/count-objects-crossing-the-line/)

- **Filter Objects in Zone**

---

Master the techniques to selectively filter and focus on objects within a specific zone

- **Cheatsheet**
- **Cheatsheet**

***
---

Access a quick reference guide to the most common `supervision` functions

[:octicons-arrow-right-24: Cheatsheet](https://roboflow.github.io/cheatsheet-supervision/)


</div>
8 changes: 4 additions & 4 deletions docs/notebooks/annotate-video-with-detections.ipynb
@@ -287,10 +287,10 @@
],
"source": [
"# Create a bounding box annotator object.\n",
"bounding_box = sv.BoundingBoxAnnotator()\n",
"box_annotator = sv.BoxAnnotator()\n",
"\n",
"# Annotate our frame with detections.\n",
"annotated_frame = bounding_box.annotate(scene=frame.copy(), detections=detections)\n",
"annotated_frame = box_annotator.annotate(scene=frame.copy(), detections=detections)\n",
"\n",
"# Display the frame.\n",
"sv.plot_image(annotated_frame)"
@@ -302,7 +302,7 @@
"id": "o8SsyCid6YV3"
},
"source": [
"Notice that we create a `box_annoator` variable by initalizing a [BoundingBoxAnnotator](https://supervision.roboflow.com/latest/annotators/#boundingboxannotator). We can change the color and thickness, but for simplicity we keep the defaults. There are a ton of easy to use [annotators](https://supervision.roboflow.com/latest/annotators/) available in the Supervision package other than a bounding box that are fun to play with."
"Notice that we create a `box_annotator` variable by initalizing a [BoxAnnotator](https://supervision.roboflow.com/latest/detection/annotators/#boxannotator). We can change the color and thickness, but for simplicity we keep the defaults. There are a ton of easy to use [annotators](https://supervision.roboflow.com/latest/detection/annotators/) available in the Supervision package other than a bounding box that are fun to play with."
]
},
{
@@ -342,7 +342,7 @@
" detections = sv.Detections.from_inference(result)\n",
"\n",
" # Apply bounding box to detections on a copy of the frame.\n",
" annotated_frame = bounding_box.annotate(\n",
" annotated_frame = box_annotator.annotate(\n",
" scene=frame.copy(),\n",
" detections=detections\n",
" )\n",
