Rizwan K.

Image and video annotator with 1.5+ years of experience

Etah, India
$4.00/hr · Intermediate · Other

Key Skills

Software

Other

Top Subject Matter

No subject matter listed

Top Data Types

Image
Video

Top Task Types

Bounding Box
Cuboid
Polygon
Polyline

Freelancer Overview

Expertise in polygon segmentation, bounding boxes, 3D cuboids, text annotation, landmark annotation, semantic segmentation, and 3D point cloud annotation, with a focus on self-driving (driverless) car projects. Performed image and video annotation for autonomous vehicle projects on the Humanloop tool, ensuring the precise data labeling crucial for AI-driven data analysis and visualization. Directed annotation tasks across diverse projects, including cuboid, scene, and segment labeling as well as LiDAR 3D vehicle annotation, bolstering data quality for robust analysis and visualization. Enforced stringent annotation guidelines, elevating data integrity and quality for AI model training.

Hindi · Urdu · English (Intermediate)

Labeling Experience

Scene Labeling with 3D Box Annotation for Multi-Class Object Detection

Other · Image · Bounding Box
This project focused on scene labeling using 3D bounding boxes to annotate multiple object classes in real-world environments. The annotations were used to train AI models for autonomous driving, urban mapping, and smart surveillance systems. Tasks involved placing 3D cuboids on objects such as vehicles, pedestrians, cyclists, trees, poles, traffic lights, and traffic signs. Project size: annotated thousands of frames/scenes with 10–40 multi-class objects per scene. Daily target: ~150–300 objects per day, depending on complexity. Team size: ~30–70 members, including QA reviewers and annotation leads. Duration: ~4–5 months, progressing through training → production → audit → rework. Maintained 97–98% accuracy against client KPIs such as 3D box fit, object classification accuracy, and spatial alignment.

2022 - 2023

Segment Labeling in Video Annotation for Scene Understanding

Other · Video · Segmentation
This project involved video annotation with segment labeling to build training datasets for autonomous driving, scene recognition, and AI-based video analysis. The main task was to watch video sequences and accurately label segments based on their content, such as urban areas, rural areas, highways, traffic zones, intersections, residential zones, and environmental features like vegetation, buildings, or roads. Project size: labeled hundreds of long video sequences, each containing thousands of frames. Daily target: ~8–12 videos or ~1,500–2,500 frames, depending on complexity and the number of labels required. Team size: ~25–60 annotators and quality analysts. Duration: ~3 months, including onboarding, production, QA cycles, and guideline updates. Maintained 99–100% labeling accuracy, based on scene correctness, boundary clarity, and frame continuity.

2022 - 2022

Top-View Freespace Annotation for Autonomous Navigation

Other · Image · Polyline
The project focused on annotating top-view freespace maps to help autonomous vehicles understand drivable areas. Using camera and/or LiDAR inputs, annotators identified and labeled the freespace region (the area where the vehicle can safely move), excluding obstacles such as vehicles, pedestrians, barriers, curbs, and non-drivable terrain. Project size: annotated thousands of top-view frames covering urban, rural, and highway scenes. Daily output per annotator: ~300–500 frames, depending on road complexity and image quality. Team size: ~20–50 members, including annotators, trainers, QA, and team leads. Duration: ~3 months, including guideline training, production, QA cycles, and rework rounds where necessary. Maintained 99%+ accuracy, based on metrics such as polygon alignment, boundary precision, and area consistency.

2022 - 2022

LiDAR Point Cloud Annotation for Autonomous Vehicles

Other · Image · Polygon · Polyline
This project involved annotating raw LiDAR point cloud data to identify and label objects in a 3D environment. The focus was on vehicles (cars, buses, trucks, two-wheelers, etc.), with the goal of supporting AI models for self-driving technology by helping them detect, classify, and track moving and static objects in real time. Annotated thousands of LiDAR sequences, each consisting of multiple frames (a single sequence could contain 1,000+ frames). Daily volume: ~500–700 frames per annotator, depending on frame complexity and scene density. Team size: 30–70 members, including annotators, reviewers, and project leads. Duration: ~3–4 months, with multiple deliverable phases (training → production → QA review → rework if needed). Maintained 98%+ accuracy on key quality metrics, including correct classification, boundary accuracy, and consistency across frames.

2022 - 2022

Cuboid Annotation for 3D Object Detection

Other · Image · Cuboid
The project involved annotating 3D bounding boxes (cuboids) around objects in images and LiDAR point cloud data to train AI models for object detection, depth estimation, and spatial recognition. Common objects included vehicles, pedestrians, traffic signs, and furniture, depending on the dataset. Worked on thousands of image frames and point cloud scenes (daily target: ~800–1,000 objects, depending on complexity). Team size ranged from 10–50 data annotators, depending on the project phase. Duration: several months, covering training, production, and quality audit phases. Maintained an accuracy rate of 97% or higher per client benchmarks, following a double-review process: initial annotation followed by QA team verification.

2021 - 2022

Education

Aligarh Muslim University

Bachelor of Technology, Engineering
2020 - 2020

Work History

Randstad

Process Executive

Hyderabad
2021 - 2023