AI Consensus Scoring (AI CS) is one of Hasty’s key features, offering an AI-powered quality assurance solution. Initially, AI CS supported Class, Object Detection, and Instance Segmentation reviews, but you can now also use it to evaluate your Image Tags annotations. This is precisely what we will cover on this page.

Let’s jump in.

AI CS for Image Tags is another option in the AI Consensus Scoring environment. So, to schedule the corresponding run, you need to:

  1. Have Image Tags in your annotations strategy and project taxonomy;
  2. Annotate some images using the tags;
  3. Navigate to the AI Consensus Scoring Environment through the project’s menu.

It might be challenging to determine the exact number of images you should annotate before scheduling an AI CS run because that number is highly case-specific. Our suggestion is to be patient, activate the Tag prediction AI assistant, and keep annotating until the assistant makes little to no mistakes. At this point, an AI CS run is almost guaranteed to produce accurate and valuable suggestions.

4. Click on the “Create new run” button and choose “Tag review” as the run type.

5. Once the run is complete, click on its name in the AI CS environment to see the suggestions.

Tag review focuses on identifying the following mistakes:

  • Misclassification:

    • For a single-choice tag group, if AI CS disagrees that your tag is correct;

  • Missing tag:

    • For multiple-choice tag groups or default tags;

    • For a single-choice tag group, if the original annotation misses any choice for it;

  • Extra tag:

    • For multiple-choice tag groups or default tags.
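The error taxonomy above can be sketched as a simple comparison between the original tags and the AI-suggested tags for one tag group. This is an illustrative sketch only, not Hasty’s actual API or implementation; the function name, the `"single"`/`"multiple"` group-type strings, and the set-based tag representation are all assumptions made for the example.

```python
def review_tags(group_type, original, suggested):
    """Classify tag suggestions for one tag group into the error types above.

    group_type: "single" for a single-choice group, "multiple" for a
                multiple-choice group (or default tags) -- hypothetical labels.
    original, suggested: sets of tag names.
    Returns a list of (error_type, tag) pairs.
    """
    errors = []
    if group_type == "single":
        if original and suggested and original != suggested:
            # AI disagrees with the chosen tag -> misclassification
            errors.append(("misclassification", next(iter(suggested))))
        elif not original and suggested:
            # The annotation misses any choice for this group -> missing tag
            errors.append(("missing", next(iter(suggested))))
    else:
        # Multiple-choice groups and default tags: set differences
        for tag in suggested - original:
            errors.append(("missing", tag))
        for tag in original - suggested:
            errors.append(("extra", tag))
    return errors
```

For example, `review_tags("single", {"cat"}, {"dog"})` flags a misclassification, while `review_tags("multiple", {"indoor"}, {"indoor", "blurry"})` flags `"blurry"` as a missing tag.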

As of today, the suggestions are displayed in the following fashion:

  • You see the whole image with all the tags (original and suggested) on the thumbnail card;

  • You can accept or reject all the suggestions in one click via the “Accept all” / “Reject all” options on the thumbnail card. It is also worth paying attention to the colors displayed after a user’s action: bright blue means the tag was added to the image, while transparent means that, after the action (accept or reject, depending on the error type), the tag was not added to the image;

  • You can also accept or reject suggestions one by one on the thumbnail card;

  • By clicking on an image, you get a more detailed overview with an additional option of adding a missing tag directly from the AI CS environment through a dropdown menu.
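The accept/reject behavior described above, where the outcome depends on both the error type and the action taken, can be sketched as a small decision function. This is a hypothetical illustration of the logic, not Hasty’s code; the error-type and action names are assumptions.

```python
def tag_on_image_after_action(error_type, action):
    """Return True if the tag ends up on the image after the user's action.

    error_type: "missing", "misclassification", or "extra" (illustrative names).
    action: "accept" or "reject".
    """
    if error_type in ("missing", "misclassification"):
        # Accepting adds the suggested tag; rejecting leaves it off.
        return action == "accept"
    if error_type == "extra":
        # Accepting agrees the tag is extra, so it is removed;
        # rejecting keeps the original tag in place.
        return action == "reject"
    raise ValueError(f"unknown error type: {error_type}")
```

So the same “accept” click adds a tag for a missing-tag suggestion but removes it for an extra-tag suggestion, which is why the color (bright blue vs. transparent) rather than the action itself tells you the final state.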

As you may have noticed, AI CS results are not divided by error type. Instead, you see all potential tag errors per image. Still, if you want to filter by error type, you can easily do so via the topbar options.

Filter options
Today, AI Consensus Scoring for Image Tags does not support the AI CS in the Annotation Environment feature. There is no way to check the Tag review results directly in the Annotation Environment, so all QA processes should be carried out in the AI CS view. However, this might change in future releases.
