LLM Response Evaluator
Contributed to large language model (LLM) training by generating prompts and evaluating AI-generated responses for human-likeness, coherence, factual accuracy, and guideline compliance. Assessed outputs for tone, clarity, and relevance, and provided structured feedback to improve model performance and response quality.