Nachos
Data Labeling Tool

Reduced users’ working time in the data labeling tool by 25%

Team
  • 1 Product Manager

  • 2 Product Designers (including me)

  • 3 Frontend Engineers

  • 1 Backend Engineer

My Role
  • UI/UX Design

  • UX Research

  • Wireframing

  • Prototyping

Duration

3 months

Impact

Reduced working time by 25%

Background

About Nachos

To build an effective computer vision model—a field of AI—a massive and well-curated dataset is required. Nachos is a data labeling platform for creating image and video datasets.

As the business gradually expanded, larger and more accurate datasets were needed.

Step 1 — Receive raw dataset

Users are given a large set of unstructured images or videos without any labels or annotations.

Step 2 — Annotate the data

Step 3 — Label the data
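The three steps above can be sketched as a minimal data shape showing what each stage adds. The type and field names below are illustrative assumptions, not Nachos’ actual schema.

```typescript
// Illustrative sketch of the data each step produces.
// Type and field names are assumptions, not Nachos' actual schema.

// Step 1: a raw item arrives with no labels or annotations.
interface RawItem {
  id: string;
  mediaUrl: string; // image or video frame
}

// Step 2: annotation adds geometry (e.g., a bounding box) to the item.
interface Annotation {
  itemId: string;
  box: { x: number; y: number; width: number; height: number };
}

// Step 3: labeling assigns a class name to each annotation.
interface LabeledAnnotation extends Annotation {
  label: string; // e.g., "car", "pedestrian"
}

function labelAnnotation(a: Annotation, label: string): LabeledAnnotation {
  return { ...a, label };
}

const raw: RawItem = { id: "img-001", mediaUrl: "https://example.com/img-001.jpg" };
const annotated: Annotation = {
  itemId: raw.id,
  box: { x: 10, y: 20, width: 100, height: 50 },
};
const labeled = labelAnnotation(annotated, "car");
```

The split between annotation (geometry) and labeling (class assignment) mirrors the two distinct user actions the tool supports.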

Challenge

Beginner users required nearly 2× the working time compared to experienced ones

To generate more datasets, we needed to shorten users’ working time. However, beginner users often struggled with both the data labeling workflow and the Nachos interface, which slowed their progress.

Research

User Observation & Interview

Goal: Identify why beginners struggle with the tool.
Date: July 28, 2020
Subjects: 2 beginners, 1 experienced user, 1 operational manager

Findings from User Observation (July 28, 2020)

I observed three users using the tool to identify and compare the challenges they faced, and also interviewed their manager.

Beginners
  • Made more frequent mistakes with limited ways to detect or correct them.

  • Frequently navigated between different areas of the interface, which slowed down their workflow.

Experienced Users
  • Developed their own workarounds when issues arose.

  • Scrolled in the label area to find a label they wanted to work on.

Both
  • Consistently reported difficulties with the workflow of navigating between the canvas and the label area.

Problem

The current UI didn’t match how users actually worked, which slowed down task completion.

Pain Point 1

Lack of Immediate and Clear Feedback

Users had to complete a multi-step annotation task, but they often forgot one of the steps, leaving the results incomplete. The only feedback alerting them to the issue appeared after submission, and it was ambiguous.

Solution Point 1

Provide feedback when users need it so users can immediately correct errors without leaving the task.

Pain Point 2

Fragmented Labeling Flow That Disrupts User Focus

Having to select a label on the canvas and then search for the same one in the label panel distracted users’ attention.

Solution Point 2

Allow direct labeling on the canvas, reducing unnecessary context switching between panels.

Design Decision #1

Real-time feedback with contextual action button

Real-time feedback helps users quickly notice and correct missed tasks. An action button inside the label group lets them directly select the tool to draw the missing object.

Initial Design

To give users clear, real-time feedback the moment an error occurred, such as a missing label or an untagged object, my initial UI solution was as follows.

Feedback on Initial Design

I asked the operations manager and my team members for feedback on whether this effectively addressed pain point 1.

After—Clear error indicator with contextual action button

  1. Replaced the vague term “Error” with “Missing”.

  2. Added an action button inside the label group that lets users draw the missing object directly, reducing unnecessary navigation.
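The “Missing” check behind this design can be sketched as a simple comparison between the labels a task requires and the objects the user has drawn so far. The function and field names here are hypothetical, used only to illustrate the idea.

```typescript
// Hypothetical sketch of the real-time "Missing" check; names are assumptions.
interface DrawnObject {
  label: string;
}

// Returns the labels in the task's required set that have no drawn object yet,
// so the UI can flag each one as "Missing" and show an action button next to it
// while the user is still working, instead of only after submission.
function findMissingLabels(required: string[], drawn: DrawnObject[]): string[] {
  const present = new Set(drawn.map((d) => d.label));
  return required.filter((label) => !present.has(label));
}

const required = ["car", "pedestrian", "traffic light"];
const drawn: DrawnObject[] = [{ label: "car" }, { label: "traffic light" }];
const missing = findMissingLabels(required, drawn); // → ["pedestrian"]
```

Running this check after every drawing action is what makes the feedback real-time rather than submission-time.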

Design Decision #2

Labeling on Canvas

The “Labeling on Canvas” feature allows users to label directly on the canvas, minimizing navigation and helping them stay focused.

Initial Design—Label auto-focus

When a user clicked an object on the canvas, the corresponding layer in the label panel would be automatically scrolled into view and highlighted.
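The lookup behind this auto-focus behavior can be sketched as a pure function (the actual scrolling and highlighting would happen in the DOM, e.g. via `scrollIntoView`). The types and names below are illustrative assumptions.

```typescript
// Illustrative sketch of the label auto-focus lookup; names are assumptions.
interface LabelEntry {
  objectId: string;
  name: string;
}

// Given the id of the object clicked on the canvas, find the index of the
// matching entry in the label panel so the real UI can scroll it into view
// and highlight it. Returns -1 when no entry matches.
function labelIndexForObject(panel: LabelEntry[], objectId: string): number {
  return panel.findIndex((entry) => entry.objectId === objectId);
}

const panel: LabelEntry[] = [
  { objectId: "obj-1", name: "car" },
  { objectId: "obj-2", name: "pedestrian" },
];
const index = labelIndexForObject(panel, "obj-2"); // → 1
```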

Feedback on Initial Design

Users still had to shift their mouse between the canvas and the label panel, which continued to disrupt their focus.

After—Labeling on canvas

To address this issue, I introduced an on-canvas labeling feature, inspired by CVAT (an open-source annotation tool widely used in computer vision), which allows users to label objects directly on the canvas.

Design System

Additionally, I developed a design system to ensure consistency across the product.

Reflection

Impact and Learnings

Impact

Reduced users' working time by 25%.

Learning 1

Finding alternative ways to gather user insights

Direct access to end users was limited, so I relied on alternative methods, such as reviewing observation videos shared by the users’ manager and interviewing the manager directly. These methods still yielded valuable insights that helped me address the core problems.

Learning 2

Need to validate technical feasibility in advance

While solving pain point 2, technical issues forced me to redo the work. From this experience, I realized it’s critical to check technical feasibility early. It also made me more proactive about discussing constraints with developers and testing assumptions before getting too deep into design.