When facing new annotation tools, you might wonder which one suits your task best and how to use it. To answer that question, we prepared this cheat sheet for you.

  1. For image classification, start the annotation process by applying image tags.
  2. After annotating 10 images, activate the image classification assistant and test how it performs.
  3. If it's not giving you the desired results, add tags to 10 more images manually.
  4. Rinse and repeat until the assistant is accurate enough (one rough way to judge this is sketched after this list).
  5. When you are satisfied with the assistant's suggestions, you can click "Accept all" to classify an image.
  6. Once you find yourself accepting suggestions without editing them for an hour or so, try automated labeling to batch-process the rest of your dataset.
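
There is no hard rule for when the assistant counts as "accurate enough," but a quick spot check is to compare the tags it suggests with the tags you set yourself on the latest batch of 10 images. Below is a minimal sketch of that idea in Python; the helper function and the toy tag lists are made up for illustration and are not part of Hasty's interface or API.

```python
def agreement(manual_tags, suggested_tags):
    """Fraction of images where the assistant's tag matches yours."""
    matches = sum(m == s for m, s in zip(manual_tags, suggested_tags))
    return matches / len(manual_tags)

# Tags you assigned by hand to the last batch of 10 images (toy data).
manual = ["cat", "dog", "cat", "cat", "dog", "cat", "dog", "dog", "cat", "cat"]
# Tags the classification assistant suggested for the same images (toy data).
suggested = ["cat", "dog", "cat", "dog", "dog", "cat", "dog", "dog", "cat", "cat"]

score = agreement(manual, suggested)
print(f"Assistant agreement: {score:.0%}")

# Only move on to "Accept all" / automated labeling once agreement stays high
# across several batches, not just one.
if score >= 0.9:
    print("Looking good - consider automated labeling for the rest.")
else:
    print("Keep tagging another batch of 10 images manually.")
```

Whatever threshold you pick, the point is to track agreement over several consecutive batches rather than a single lucky one.
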
  1. For Object Detection, start annotating with the Bounding box tool.
  2. After labeling 10 images, activate the Object Detection assistant and see how it performs.
  3. Are you getting good predictions? If yes, great - accept them and add any missing bounding boxes. If the predictions do not satisfy you, annotate 10 more images using the Bounding box tool and try again (one rough way to measure prediction quality is sketched after this list).
  4. As you continue to label, the Object Detection assistant will learn and improve - so keep using and testing it.
  5. When you see that the predictions made by the assistant are consistently good, you might want to apply automated labeling to batch-process the rest of your dataset.
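
A common way to judge "good predictions" for object detection is Intersection over Union (IoU) between a predicted box and the box you would have drawn yourself. The snippet below is a small, self-contained sketch of that check; the [x_min, y_min, x_max, y_max] box format is an assumption made for the example and not necessarily how Hasty stores boxes.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned [x_min, y_min, x_max, y_max] boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

manual_box = [50, 40, 200, 180]      # the box you drew yourself
predicted_box = [55, 45, 205, 175]   # the box the assistant suggested
print(f"IoU: {iou(manual_box, predicted_box):.2f}")  # ~0.87 here -> a solid suggestion
```

Detection benchmarks usually count an IoU above roughly 0.5 as a hit, but set whatever bar matches your quality requirements.
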
  1. For Instance Segmentation, start annotating with either the Polygon or the Brush tool if you want to annotate manually (a toy example of what an instance label looks like follows this list).
  2. However, do not forget to try the DEXTR and ATOM segmenter tools, as they can speed up the process considerably (especially for complex annotations).
  3. After labeling 10 images, activate the Instance Segmentation assistant and see how it performs.
  4. Are the assistant's suggestions helpful? If so, great - accept them and move on. If some suggestions are not that good, you can accept and edit them or create the labels from scratch.
  5. If the assistant's predictions are not that great, do not give up: the more data you add, the better they become. Annotate 10 more images and then test again.
  6. As you annotate more images, the Instance Segmentation assistant will learn and become more accurate - so keep using it.
  7. When the predictions made by the assistant become consistently good, give automated labeling a try to batch-process the rest of your dataset.
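
For context, an instance segmentation label is essentially one region (a polygon or mask) plus a class per object. The toy structure below illustrates that idea and adds a quick shoelace-formula sanity check; the field names are invented for the example and do not reflect Hasty's export format.

```python
# Illustration only: one polygon (or mask) plus a class per object instance.
instances = [
    {"class": "car",    "polygon": [(10, 10), (60, 10), (60, 40), (10, 40)]},
    {"class": "person", "polygon": [(70, 5), (85, 5), (85, 50), (70, 50)]},
]

def polygon_area(points):
    """Shoelace formula - a quick check that a polygon is not degenerate."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2

for inst in instances:
    print(inst["class"], "area:", polygon_area(inst["polygon"]))
```
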
  1. Before you start, make sure that the classes you annotate with can be found under "semantic classes." If not, any data you annotate will be used to train our Instance Segmentation assistant instead.
  2. Start by using the Brush tool (preferably) or the ATOM segmenter.
  3. Quick side note: you can also use the Polygon or DEXTR tool, but neither is great for annotating non-object regions. Hasty does not support grouping many polygons into one annotation, while DEXTR is pre-trained on Instance Segmentation data. A toy example of a per-pixel semantic mask follows this list.
  4. After labeling 10 images, check the performance of the Semantic Segmentation assistant.
  5. If the results are good - accept the predicted labels and edit them if needed. If the assistant's suggestions are not that great, annotate 10 more images and try again.
  6. Since our underlying AI model progressively learns while you label, continue to annotate more data to automate and speed up the process.
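
Unlike instance labels, a semantic segmentation label is a per-pixel class map: every pixel, including background and other non-object regions, carries exactly one class ID. The tiny NumPy sketch below illustrates that representation; the class names and IDs are made up for the example.

```python
import numpy as np

# Illustration only: a semantic mask assigns exactly one class ID to every pixel.
class_names = {0: "background", 1: "road", 2: "vegetation"}

mask = np.zeros((4, 6), dtype=np.uint8)   # a tiny 4x6 "image"
mask[2:, :] = 1                           # bottom rows labeled as road
mask[:2, 4:] = 2                          # top-right corner labeled as vegetation

for class_id, name in class_names.items():
    print(f"{name}: {(mask == class_id).sum()} pixels")
```
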
  1. To use Panoptic Segmentation in Hasty, you have to perform both Instance and Semantic Segmentation (the sketch after this list shows how the two outputs combine).
  2. Start by annotating images until both assistants are unlocked. A tip would be to try the ATOM segmenter, as it is well-suited to both Instance and Semantic Segmentation tasks.
  3. Use the guidance provided above to get your first annotations done.
  4. When both the Instance Segmentation Assistant and the Semantic Segmentation Assistant become available, try both and see how they perform.
  5. As usual, keep adding more data to increase the performance of both assistants.
  6. When one or both of the assistants perform well, you can use auto-labeling to label the rest of the images in one click. Please note that you need to start two separate auto-labeling runs to perform a complete Panoptic Segmentation.
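
To see why two runs are needed, it helps to picture the panoptic result as the combination of the two outputs: the semantic map covers the "stuff" regions, the instance masks cover the "things," and each pixel ends up with a class ID plus, where applicable, an instance ID. The NumPy sketch below illustrates that merge on a toy example; the class IDs and the merge rule are invented for illustration and are not Hasty's internal logic.

```python
import numpy as np

# "Stuff" from semantic segmentation: 1 = sky, 2 = road.
semantic = np.array([[1, 1, 1, 1],
                     [1, 1, 1, 1],
                     [2, 2, 2, 2]])

# "Things" from instance segmentation: one car instance covering a few pixels.
instance_mask = np.zeros_like(semantic)
instance_mask[1:, 1:3] = 1

# Merge: instance pixels take the "thing" class (3 = car), the rest keep their
# semantic class; instance IDs stay 0 for pure "stuff" pixels.
panoptic_class = np.where(instance_mask > 0, 3, semantic)
panoptic_instance = instance_mask

print("class map:\n", panoptic_class)
print("instance map:\n", panoptic_instance)
```
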
  1. To use attributes, first perform an Object Detection or Segmentation annotation task on an image.
  2. When you have added all annotations, select each annotation individually and fill out the attributes. You can access and edit the attribute labels in the right menu, in the second accordion (the sketch after this list shows the general shape of attribute data).
  3. After you have annotated 10 images, the Label Attribute Assistant will be available.
  4. As usual, activate the assistant and see how it performs. If the results need to be tuned, continue labeling more images, and the assistant will continuously improve.
  5. With sufficient data, the assistant will perform well enough to save you considerable time. You can click "Accept all" and review the annotations.
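
Conceptually, attributes are extra key-value fields attached to an existing annotation (a box or a mask) rather than standalone labels. The snippet below sketches that structure and a simple review pass; the field names are made up for the example and are not Hasty's export schema.

```python
# Illustration only: each annotation carries its class plus a dict of attributes.
annotations = [
    {"id": 1, "class": "car",    "attributes": {"color": "red",  "occluded": False}},
    {"id": 2, "class": "car",    "attributes": {"color": "blue", "occluded": True}},
    {"id": 3, "class": "person", "attributes": {"pose": "walking"}},
]

# A quick review pass, similar in spirit to reviewing the Label Attribute
# Assistant's suggestions before accepting them.
for ann in annotations:
    attrs = ", ".join(f"{k}={v}" for k, v in ann["attributes"].items())
    print(f"annotation {ann['id']} ({ann['class']}): {attrs}")
```
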
