Math Expertise
In this project, I contributed to the development and refinement of a Large Language Model (LLM), focusing on text generation, evaluation and rating of model outputs, and prompt-and-response writing for supervised fine-tuning (SFT). Using Snorkel AI, I was responsible for creating high-quality training datasets: generating diverse, contextually relevant text, evaluating model outputs for accuracy and coherence, and crafting effective prompts to guide the model's responses. This work ensured that the LLM was trained on well-annotated data, improving its ability to understand and generate human-like text. The project required a solid grounding in natural language processing techniques, attention to detail in data labeling, and close collaboration with cross-functional teams to align annotation efforts with the model's training objectives. The outcome was a more robust and reliable LLM capable of delivering accurate, contextually appropriate responses.
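For illustration, the sketch below shows what a single SFT prompt-and-response record could look like when written out as JSON Lines. The field names (prompt, response, rating, labels) and the 1-5 rating scale are assumptions for this example only, not the actual project schema or a Snorkel AI API.

```python
import json

# Illustrative SFT training record: a human-written prompt paired with a
# reference response and an annotator rating. Field names and the rating
# scale are assumptions for illustration, not the project's real schema.
sft_records = [
    {
        "prompt": "Explain why the sum of two even integers is always even.",
        "response": (
            "Let the two even integers be 2a and 2b. Their sum is "
            "2a + 2b = 2(a + b), which is a multiple of 2, so the sum is even."
        ),
        "rating": 5,  # assumed annotator quality score on a 1-5 scale
        "labels": ["math", "proof", "coherent"],
    },
]

# Write the records as JSON Lines, a common on-disk format for SFT datasets.
with open("sft_dataset.jsonl", "w", encoding="utf-8") as f:
    for record in sft_records:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

print(f"Wrote {len(sft_records)} SFT record(s) to sft_dataset.jsonl")
```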