
Frankline M.

Technical Data Annotator | AI Training | Python & Engineering Expertise

Nairobi, Kenya
$30.00/hr | Intermediate | Internal Proprietary Tooling

Key Skills

Software

Internal/Proprietary Tooling

Top Subject Matter

No subject matter listed

Top Data Types

Computer Code Programming
Image
Video

Top Task Types

Computer Programming/Coding
Fine-tuning
Prompt + Response Writing (SFT)
RLHF

Freelancer Overview

I have hands-on experience as an AI Data Trainer specializing in training data for large language models (LLMs) and multimodal AI systems. My work involves crafting effective prompts, evaluating model outputs, and debugging AI-generated code to improve accuracy, safety, and performance. I have annotated diverse datasets—including text, images, and videos—across NLP and computer vision tasks, contributing to the development of smarter, safer AI. What sets me apart is my strong foundation in Python programming and my ability to bridge human insight with machine learning workflows. I’m certified in Machine Learning with Python (IBM) and Deep Learning with Keras & TensorFlow. My proficiency in prompt engineering, reinforcement learning from human feedback (RLHF), and multilingual data labeling enables me to adapt quickly to complex AI training projects.

Swahili, English (Intermediate)

Labeling Experience

Basketball Court Mapping and Player Localization for Visual Scene Parsing

Internal Proprietary Tooling | Image | Mapping
In this project, I annotated basketball images to train computer vision models for spatial understanding and player localization. My main task was labeling court lines (e.g., free throw line, three-point arc, center circle) to establish a spatial reference system within each image, allowing the model to infer player positions and movements relative to key areas of the court. I also annotated static elements such as barriers, crowd sections, and bystanders to help the model distinguish dynamic subjects (players) from the background. The work required pixel-level precision and contextual judgment to keep annotations consistent across diverse camera angles, lighting conditions, and crowd densities, producing robust datasets for downstream tasks like player tracking, event detection, and scene segmentation in sports analytics.


2024 - 2025
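As an illustration, a minimal annotation record for this kind of court-line labeling could look like the sketch below. The schema, class name, and field names are hypothetical, chosen only to show the idea of polyline court lines plus background boxes; they are not the proprietary tool's actual format.

```python
from dataclasses import dataclass, field

@dataclass
class CourtAnnotation:
    """One labeled basketball frame (hypothetical schema)."""
    image_id: str
    # Named court lines as polylines of (x, y) pixel coordinates.
    court_lines: dict[str, list[tuple[int, int]]] = field(default_factory=dict)
    # Static background regions (label, (x, y, width, height) box in pixels).
    background_boxes: list[tuple[str, tuple[int, int, int, int]]] = field(default_factory=list)

# Example record: one court line and one crowd region in a single frame.
ann = CourtAnnotation(image_id="frame_0001")
ann.court_lines["three_point_arc"] = [(120, 430), (260, 380), (400, 360)]
ann.background_boxes.append(("crowd", (0, 0, 1280, 150)))
```

Representing lines as ordered point lists keeps the format simple while still supporting pixel-level review and downstream geometry checks.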

Video Labeling for Scene Composition and Subject Action Recognition

Internal Proprietary Tooling | Video | Action Recognition | RLHF
In this project, I performed video labeling to build training datasets for machine learning models focused on scene understanding and activity recognition. My role involved watching videos and correcting model-generated descriptions of the major actions, camera movements, lighting, and composition. For each clip, I described elements such as camera motion (e.g., panning, tracking), camera angles, environmental conditions (e.g., lighting, weather), spatial arrangement, and background details. This required close observation and consistency to capture not only what was happening but also how it was visually presented, enabling models to learn context-aware representations from human-level descriptions. I also identified the key subjects in each scene, such as people, animals, or vehicles, and described their actions over time, including interactions, posture, movement direction, and engagement with objects or the environment.


2024 - 2024
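A label record covering the fields described above might be sketched as the JSON-style structure below. All field names and values here are invented for illustration and do not reflect the internal tool's actual schema.

```python
import json

# Hypothetical label record for one video clip, mirroring the fields
# described above: camera, environment, subjects, and composition.
clip_label = {
    "clip_id": "clip_0042",
    "camera": {"motion": "panning", "angle": "eye-level"},
    "environment": {"lighting": "overcast daylight", "weather": "light rain"},
    "subjects": [
        {"type": "person", "action": "cycling", "direction": "left-to-right"},
    ],
    "composition": "subject centered, shallow background",
}

# Serialize so the record can be stored or reviewed alongside the clip.
serialized = json.dumps(clip_label, indent=2)
```

Keeping camera, environment, and subject descriptions in separate keys makes it easy to check each dimension of a clip's description independently during review.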

Python Code Generation Fine-Tuning (SFT + RLHF)

Internal Proprietary Tooling | Computer Code Programming | RLHF | Fine-Tuning
As part of a large-scale AI training initiative, I helped fine-tune a code-generating language model using Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). The goal was to improve the model's ability to interpret Python programming prompts and generate accurate, efficient, well-structured code. During the SFT phase, I crafted prompts asking the model to write Python functions or full programs, then produced correct, clean reference code with clear logic and unit tests to verify functionality and coverage. In the RLHF phase, I evaluated model responses to new Python prompts for correctness, logic, readability, and structure; when code fell short, I revised it so that it ran as intended, met the prompt requirements, and followed best practices.


2023 - 2024
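The SFT workflow described above (prompt, reference completion, unit test) can be sketched as follows. The prompt wording, function name, and tests are made up for illustration; they are not taken from the actual training data.

```python
# Hypothetical SFT example: a prompt, a reference completion, and unit tests
# used to verify the reference before it enters the training set.
prompt = (
    "Write a Python function running_mean(xs) that returns the "
    "cumulative mean of a list of numbers."
)

def running_mean(xs):
    """Reference completion: the mean after each successive element."""
    means, total = [], 0.0
    for i, x in enumerate(xs, start=1):
        total += x
        means.append(total / i)
    return means

# Unit tests checked during the SFT phase to confirm the reference is correct.
assert running_mean([2, 4, 6]) == [2.0, 3.0, 4.0]
assert running_mean([]) == []
```

In the RLHF phase, a model's answer to a prompt like this would be judged against the same criteria (correctness, logic, readability, structure) and revised if it failed the tests.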

Education


Dedan Kimathi University of Technology

Bachelor of Science, Mechatronics Engineering

2018 - 2023

Work History


Invisible Technologies

Advanced AI Data Trainer

California
2023 - Present

Devki Limited

Mechatronics Engineer

Nairobi
2022 - 2023