Going from yolov5 results to expected folder
#48
-
I have a fine-tuned YOLOv5 model and can get results from it just fine as a `yolov5.models.common.Detections` object. The only hint I have is that I need my annotations in the folder layout that `from_yolo_v5` expects. I tried a couple of different ways; nothing throws errors, it just looks like nothing gets parsed, i.e. `evaluator.labels` is empty.
Replies: 3 comments 4 replies
-
Just jotting down my notes while following up on this. If you follow the `from_yolo_v5` call chain, the `parser` is defined back in `from_txt`, and the glob of `.txt` annotation files (of an as-yet-unknown format) is then passed through `thread_map`:
where `it` is the glob of files and `fn` is the parser:

```python
def thread_map(
    fn: Callable[[U], V],
    it: Iterable[U],
    desc: Optional[str] = None,
    total: Optional[int] = None,
    unit: str = "it",
    verbose: bool = False,
) -> "list[V]":
    disable = not verbose
    total = total or length_hint(it)
    results = []
    with tqdm(desc=desc, total=total, unit=unit, disable=disable) as pbar:
        futures = SHARED_THREAD_POOL.map(fn, it)
        for result in futures:
            results.append(result)
            pbar.update()
    return results
```

So it is applying the parser to each file on a shared thread pool. Eventually it calls:

```python
return Annotation.from_txt(
    file_path=file,
    image_id=image_id,
    box_format=box_format,
    relative=relative,
    image_size=image_size,
    separator=separator,
    conf_last=conf_last,
)
```
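For reference, `thread_map` is essentially a progress-bar wrapper around a shared thread pool. A stdlib-only approximation (the name `simple_thread_map` is my own; no tqdm progress bar, and a per-call pool instead of a shared one) would be:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Iterable, TypeVar

U = TypeVar("U")
V = TypeVar("V")


def simple_thread_map(fn: Callable[[U], V], it: Iterable[U]) -> "list[V]":
    """Apply fn to every item of it on a thread pool, preserving input order."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(fn, it))
```

For example, `simple_thread_map(str.upper, ["a", "b"])` returns `["A", "B"]`, just as mapping the parser over the glob returns the parsed annotations in order.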
Then the `.txt` file is split into lines, and each line is iterated over to generate the bounding boxes. It then checks that each line has either 5 or 6 values, else throws a `ParsingError`. It appears lines with 5 values are treated as ground-truth annotations (i.e. `confidence=None`), so I now know my prediction files need 6 values. Continuing on: YOLOv5 results give both a class (numerical id) and a name (text string), so I need to confirm which one is expected here. Confidence should indeed be from 0 to 1, per an assertion in the parsing code. Following the return call back up the chain: so, in summary, to answer my own question, the expected folder is one of `.txt` files per image in this format.
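To make the 5-vs-6 value rule concrete, here is a small stdlib sketch (my own illustration, not the library's actual parser) of how one such line is interpreted, assuming `conf_last=True`, i.e. confidence is the last field when present:

```python
def parse_yolo_line(line: str, separator: str = " "):
    """Parse one annotation line: 'class x y w h' (ground truth, 5 values)
    or 'class x y w h conf' (prediction, 6 values, conf_last assumed)."""
    values = line.strip().split(separator)
    if len(values) not in (5, 6):
        raise ValueError(f"expected 5 or 6 values, got {len(values)}")
    label = values[0]  # numerical class id, kept as text
    coords = tuple(float(v) for v in values[1:5])
    confidence = float(values[5]) if len(values) == 6 else None
    if confidence is not None:
        # mirrors the 0-to-1 assertion mentioned above
        assert 0.0 <= confidence <= 1.0
    return label, coords, confidence
```

A ground-truth line like `"0 0.5 0.5 0.2 0.3"` comes back with `confidence=None`, while a prediction line needs that sixth value.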
-
Without a real plan for confirming which label the parser expects, I tried the text labels, but I still see all 0.00% for my dataset, which I can't imagine is true. So I narrowed it down to a single image for "unit" testing, where I know my model infers pretty well. I stopped using the COCO format for this, as it was harder to parse the single COCO json file than to just use one label file and one image file for this unit test. In doing so I found that the dataset in YOLOv5 PyTorch format from Roboflow comes with coordinates that are `relative=True`, but the coordinates from the YOLOv5 model results are `relative=False`. The only workaround I could find was to convert one set of coordinates to match the other. This gave me nonzero results as expected. Is this a known issue? Guessing it's not ideal that this mismatch fails silently.
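That mismatch can be handled by converting coordinates manually. A stdlib sketch (my own helper, assuming the model outputs absolute left/top/right/bottom pixel coordinates, as YOLOv5's `results.xyxy` does) that normalizes them to YOLO-style relative x-center/y-center/width/height:

```python
def xyxy_abs_to_xywh_rel(box, image_size):
    """Convert absolute (left, top, right, bottom) pixel coordinates to
    relative (x_center, y_center, width, height) in the 0-1 range."""
    left, top, right, bottom = box
    img_w, img_h = image_size
    return (
        (left + right) / 2.0 / img_w,   # x_center
        (top + bottom) / 2.0 / img_h,   # y_center
        (right - left) / img_w,         # width
        (bottom - top) / img_h,         # height
    )
```

With boxes normalized this way, both the Roboflow ground truth and the model predictions can be parsed with the same `relative=True` setting.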
-
Hi @topherbuckley,

The `from_yolo_v5` method is for parsing YOLOv5 annotations stored to disk in `.txt` files. The `folder` parameter refers to the directory containing such annotations, while the `image_folder` parameter refers to the directory containing the images corresponding to the annotations (it could be the same as `folder`). You should not use this method if you want to parse in-memory annotations / predictions such as ones of type `yolov5.models.common.Detections`. For this purpose I recommend using the `BoundingBox.create` method. Here is how you should proceed to build an `AnnotationSet`: create the boxes with `BoundingBox.create`, group them into an `Annotation` …
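Following that recommendation, here is a sketch of how the in-memory route might look. This is my own guess at the shape of the code: `BoundingBox.create`, `Annotation`, and `AnnotationSet` are named in this thread, but the exact argument names are assumptions extrapolated from the `from_txt` parameters above, so check them against the globox documentation before relying on this.

```python
from globox import Annotation, AnnotationSet, BoundingBox


def detections_to_annotation_set(results_per_image) -> AnnotationSet:
    """Build an AnnotationSet from in-memory predictions.

    results_per_image: iterable of (image_id, (width, height), detections),
    where each detection is (label, xmin, ymin, xmax, ymax, confidence)
    in absolute pixel coordinates (relative=False).
    """
    annotations = []
    for image_id, image_size, detections in results_per_image:
        boxes = [
            BoundingBox.create(
                label=str(label),            # numerical class id as text
                coords=(xmin, ymin, xmax, ymax),
                confidence=confidence,       # must be 0 to 1, per the assertion
                relative=False,              # model outputs absolute pixels
            )
            for label, xmin, ymin, xmax, ymax, confidence in detections
        ]
        annotations.append(
            Annotation(image_id=image_id, image_size=image_size, boxes=boxes)
        )
    return AnnotationSet(annotations)
```

The idea is just to skip the `.txt` round-trip entirely: one `Annotation` per image, each holding the `BoundingBox` objects created directly from the `Detections` values.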