Data annotation is the process of adding metadata to a dataset so that AI and ML models can effectively learn patterns and generalize to new data. This can be as simple as drawing boxes around cars on a street (bounding boxes) or as complex as labeling every pixel in an image (segmentation) for a computer vision model.
Tackling data labeling is a multi-faceted challenge because of its scale and variability. First, datasets for real-world AI models are enormous, both for training and testing. Second, the best approach depends heavily on context: labeling medical images to detect tumor cells requires a very different workflow (and budget) than annotating social media posts for sentiment analysis.
A major solution in recent years has been the rise of SaaS and open-source products known as data annotation platforms, such as Scale AI, Roboflow, and Unitlab AI. These platforms handle data curation, dataset management, version control, collaborative labeling, and low-code or no-code auto-annotation tools, along with many other features that make dataset preparation faster, cheaper, and more reliable.
In this guide, we will explore the potential of no-code and low-code auto-labeling tools with bounding box examples, and compare them with manual and scripting-based annotation. Our example will use Unitlab Annotate, a fully automated AI-powered annotation platform.
By the end, you will understand:
- What no-code and low-code tools mean for data annotation
- The differences between manual, scripted, and no-code approaches
- When and why coding is still essential
What are Low-Code and No-Code, Essentially?
In short, low-code and no-code platforms are software environments where users can build applications with little to no traditional programming. These tools rely on intuitive drag-and-drop interfaces and prebuilt components. As a result, they make app development accessible to business users and professionals without programming backgrounds.

With these platforms, users can build mobile apps, websites, and internal tools quickly. They are particularly useful for small projects or when time and resources are limited. In fact, many SaaS companies are now starting to adopt this approach, as shown in this showcase from Bubble.io.
Although the terms are often used interchangeably, they differ slightly:
- No-Code – Everything happens visually. Users upload data, drag and drop, and build workflows through a graphical interface. No coding required.
- Low-Code – Combines visual tools with optional scripting. Most tasks are handled in the UI, but code can extend functionality when needed.
These approaches are increasingly used in AI and ML because they drastically reduce development time. In this tutorial, we will use the no-code workflow of Unitlab AI to illustrate their application in vehicle detection. But first, let’s review manual and scripted annotation.
Manual Data Annotation
Manual labeling is exactly what it sounds like: annotators draw bounding boxes around cars, people, or other objects by hand. The software then generates machine-readable annotation files. This method produces high-quality labels but is slow, expensive, and prone to human error.
Here is a demonstration of drawing bounding boxes in Unitlab Annotate:
Manual Object Detection | Unitlab Annotate
Manual annotation works when you have 100 images. But it becomes unmanageable when you have 100,000. At scale, manual labeling fails on speed, cost, and consistency alike.
Scripting for Data Annotation
Another option is writing scripts—most often in Python—to automate labeling. Since many open-source libraries exist (such as OpenCV, Pillow, and Ultralytics), the idea is that you can automate much of the process with code.
A typical workflow looks like this:
- Write a Python script to load images
- Use libraries to draw bounding boxes
- Save annotations in JSON, XML, or COCO format
- Re-run and debug scripts whenever the dataset changes
Let's implement this in Python. The source code for this project is available on GitHub.
Demo Project
We will use YOLOv8 from ultralytics. Start by installing the required packages:
pip install ultralytics opencv-python numpy
We now write the program, starting by importing the installed packages. We use YOLOv8n as our object detection model and read the test image with OpenCV:
# main.py
import cv2
from ultralytics import YOLO
# Load YOLOv8 model
yolo_model = YOLO("yolov8n.pt")
# Load an image
image_path = "test.jpg"
image = cv2.imread(image_path)

Finally, we detect objects with YOLOv8 and draw bounding boxes with OpenCV:
# Perform detection
results = yolo_model(image)

# Draw bounding boxes and display results
for result in results:
    for box in result.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        label = result.names[int(box.cls[0].item())]
        confidence = box.conf[0]

        # Draw bounding box
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 0, 255), 5)
        cv2.putText(
            image,
            f"{label}: {confidence:.2f}",
            (x1, y1 - 10),
            cv2.FONT_HERSHEY_SIMPLEX,
            1.2,
            (0, 0, 255),
            4,
        )

cv2.imshow("Detection", image)
cv2.waitKey(0)
In the end, we get this image as the output:

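Note that the demo above only draws and displays the boxes; the third step of the workflow, saving annotations in a machine-readable format, is still on you. Continuing from main.py, here is a minimal sketch that serializes the detections into a COCO-style JSON file (the output file name and the single-image/category bookkeeping are our own illustrative choices, not part of the original demo):

# export_coco.py -- continues from main.py; a minimal COCO-style export sketch
import json

height, width = image.shape[:2]
coco = {
    "images": [{"id": 1, "file_name": image_path, "width": width, "height": height}],
    "annotations": [],
    "categories": [],
}

ann_id = 1
category_ids = {}  # maps a class name to a COCO category id

for result in results:
    for box in result.boxes:
        x1, y1, x2, y2 = map(float, box.xyxy[0])
        label = result.names[int(box.cls[0].item())]
        if label not in category_ids:
            category_ids[label] = len(category_ids) + 1
            coco["categories"].append({"id": category_ids[label], "name": label})
        # COCO bounding boxes are stored as [x, y, width, height]
        coco["annotations"].append({
            "id": ann_id,
            "image_id": 1,
            "category_id": category_ids[label],
            "bbox": [x1, y1, x2 - x1, y2 - y1],
            "score": float(box.conf[0]),
        })
        ann_id += 1

with open("annotations.json", "w") as f:
    json.dump(coco, f, indent=2)

Every time the dataset changes (step four of the workflow), you re-run detection and regenerate this file.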
This method assumes you have an experienced engineer capable of writing and maintaining complex Python programs. For small datasets, the cost of that engineer may exceed the value of the project itself. Moreover, editing boxes for accuracy is harder in code than in a GUI.
While powerful, this process demands strong coding expertise and brings maintenance hurdles, which may not be worth it for most projects.
No-Code for Data Annotation
Project Setup
First, create your free account on Unitlab AI for this tutorial:
In the Projects pane, click Add a Project:

Name the project, choose Image as the data type, and Image Bounding Box as the annotation type:

Upload the project data. You can use the same sample images we use in this tutorial; download them here:

Automation Config
A major advantage of data annotation platforms is that they provide the most common object detection, instance segmentation, and other computer vision models as built-in foundation models. This means you don't have to configure, deploy, or manage most auto-annotation models yourself; you can use them for free in your projects.
In the case of Unitlab Annotate, go to the Automation pane within your project:

Click + New Automation:

Once created, you will see this drag-and-drop dashboard:

Click Choose Model and select Vehicle Detection:

In this pane, you can also configure the confidence threshold, the IoU threshold, and the maximum number of detections per image. When you choose a foundation model, it comes with its own defaults, which you can edit for your project's needs. For this tutorial, we leave the defaults as they are.
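For intuition, these are the same three knobs you would otherwise tune by hand in the scripted approach: in ultralytics they correspond to the conf, iou, and max_det prediction arguments. A quick sketch, with illustrative values rather than Unitlab's defaults:

# The same thresholds, set manually in code (values are illustrative)
results = yolo_model(
    image,
    conf=0.25,    # confidence threshold: drop detections scored below this
    iou=0.45,     # IoU threshold used by non-maximum suppression
    max_det=100,  # maximum number of detections returned per image
)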
Give your automation a distinct name, like "Vehicle Workflow". Click Apply and Save. The no-code tool is now ready for our vehicle annotation. Go back to the Annotate pane.
No-code Annotation
Most data annotation platforms provide tools to apply no-code annotation once it is configured. In the case of Unitlab Annotate, we will use the Magic Crop Tool to detect and draw bounding boxes around cars automatically:
No-code Data Labeling | Unitlab Annotate
This project is purposefully simple. Your requirements, objectives, and tasks evolve as your project grows. So does your data labeling. With no-code tools, you can easily adjust them for your use case, instead of changing your code base. Because these tools are fully managed by your provider, you can focus on the business goal at hand, not on the specifics of a Python library or cloud deployment.
The advantage of this no-code automation and graphical user interface lies in the ability to edit bounding boxes with ease. Additional advantages include the auto-generation of machine-readable annotation files in several formats, project management, dataset organization, and versioning, as you might expect from a fully-featured data annotation platform.
On the Pro plan of Unitlab AI, you can also use Batch Auto-Annotation to label hundreds of images at once with your custom workflow:
No-code Batch Auto-Labeling | Unitlab Annotate
Comparison: Scripting or Low/No-Code?
The table below gives a quick, general comparison of scripting and low/no-code tools for data annotation:
| Aspect | Scripting (Python) | Low/No-Code (Unitlab AI) |
| --- | --- | --- |
| Setup | Manual environments, dependencies | Visual dashboard, templates |
| Annotation | Custom scripts, CLI tools | Auto-labeling, drag-and-drop UI |
| Collaboration | Git, shared folders | Roles, comments, dashboards |
| QA & Versioning | Manual checks, extra code | Built-in QA, dataset versioning |
| Scalability | Rewrite scripts for new cases | Scales seamlessly via UI |
| Speed & Cost | Developer hours, slower iteration | 15× faster, 5× cheaper (Unitlab) |
When Code is Supreme
This comparison, and this post, is not meant to dismiss coding or Python in the realm of data annotation. Only the Sith deal in absolutes. Our point is that coding and visual no-code tools each have their uses, and they are often complementary.
There are many legitimate cases where scripts remain useful:
- Custom one-off tasks not supported by platforms.
- Unique formats or specialized exports (see the sketch after this list).
- Deep research pipelines that demand custom integrations.
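To make the second case concrete: suppose a platform exports COCO JSON, but your training pipeline expects YOLO txt labels. A short conversion script bridges the gap. Here is a minimal sketch under standard COCO conventions (the file names and paths are hypothetical):

# coco_to_yolo.py -- convert COCO bounding boxes to YOLO txt labels (minimal sketch)
import json
from pathlib import Path

with open("annotations.json") as f:  # hypothetical COCO export
    coco = json.load(f)

images = {img["id"]: img for img in coco["images"]}
# COCO category ids are arbitrary; YOLO expects contiguous zero-based indices
class_index = {c["id"]: i for i, c in enumerate(coco["categories"])}

out_dir = Path("labels")
out_dir.mkdir(exist_ok=True)

for ann in coco["annotations"]:
    img = images[ann["image_id"]]
    w, h = img["width"], img["height"]
    x, y, bw, bh = ann["bbox"]  # COCO: top-left x, top-left y, width, height
    # YOLO format: class cx cy w h, all normalized to [0, 1]
    cx, cy = (x + bw / 2) / w, (y + bh / 2) / h
    row = f"{class_index[ann['category_id']]} {cx:.6f} {cy:.6f} {bw / w:.6f} {bh / h:.6f}\n"
    with open(out_dir / (Path(img["file_name"]).stem + ".txt"), "a") as out:
        out.write(row)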
For everyday tasks like object detection, segmentation, and classification, however, low-code/no-code platforms such as Unitlab Annotate are now the better default.
Conclusion
Data annotation is a central step in building practical, real-world AI systems. There are several ways to approach it, depending on the project's nature, size, manpower, and budget constraints. So the honest answer to "How should I label my dataset?" is "It depends."
That said, annotation platforms with no-code and low-code tools are gaining traction because they combine automation, collaboration, and ease of use. Scripts and manual annotation remain useful complements, but increasingly, no-code is the default path.
In 2025, the question isn't whether to adopt these tools. It’s how quickly and efficiently you can integrate them into your workflow.
References
For additional information, check out these resources:
- Frederik Hvilshøj (Mar 14, 2023). How to Use Low-Code and No-Code Tools for Computer Vision. Encord Blog: Source
- alwaysAI (Dec 3, 2024). Low-Code vs No-Code Computer Vision Platforms. IoT for All: Source