
CricBench: A Multilingual Benchmark for Evaluating LLMs in Cricket Analytics

Vaibhav Devraj, Dhruv Kumar, Jagat Sesh Challa, Parth Agarwal, Navya Kommuri, Trizal Garg, Prisha Singhal, Dhruv Shah · Dec 26, 2025 · Citations: 0

How to use this paper page

Use this page to decide whether the paper is strong enough to influence an eval design. It summarizes the abstract plus the available structured metadata. If the signal is thin, use it as background context and compare it against stronger hub pages before making protocol choices.

  • Coverage: Stale
  • Metadata: Stale
  • Signals: Stale
  • Best use: Primary protocol reference for eval design
  • Trust level: High
  • What still needs checking: No major weakness surfaced.
  • Signal confidence: 0.90

Abstract

Cricket is the second most popular sport in the world, commanding a massive following of over 2.5 billion fans. Enthusiasts and analysts frequently seek advanced statistical insights, such as long-term historical performance trends or complex player comparisons, that are often unavailable through standard web searches. While Large Language Models (LLMs) have advanced significantly in Text-to-SQL tasks, their capability to handle the domain-specific nuances, complex schema variations, and multilingual requirements inherent to sports analytics remains under-explored. To investigate this potential capability gap, we present CricBench, a comprehensive benchmark suite for evaluating LLMs on specialized cricket data. To curate a "Gold Standard" dataset, we collaborate with domain experts in cricket and SQL to manually author complex queries, ensuring logical correctness. Recognizing linguistic diversity, we construct the benchmark in both English and Hindi, establishing a framework that is open for further extension to other regional languages. We evaluate six state-of-the-art models, including GPT-4o, Claude 3.7 Sonnet, and open-source models, using a strict evaluation protocol. Our results reveal that high performance on general benchmarks does not guarantee success in specialized domains. While the open-weights reasoning model DeepSeek R1 achieves state-of-the-art performance (50.6%), surpassing proprietary giants like Claude 3.7 Sonnet (47.7%) and GPT-4o (33.7%), it still exhibits a significant accuracy drop when moving from general benchmarks (BIRD) to CricBench. Furthermore, we observe that code-mixed Hindi queries frequently match or exceed the accuracy of their English counterparts, challenging the assumption that English is the optimal prompt language for specialized SQL tasks.
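The abstract reports a single accuracy figure per model under a "strict evaluation protocol" without spelling that protocol out here. Text-to-SQL benchmarks such as BIRD conventionally score by execution accuracy: the predicted query counts as correct only if it returns the same result set as the expert-authored gold query. A minimal sketch of that style of scoring, assuming a SQLite database and order-insensitive comparison (the function names and schema are illustrative, not the paper's actual harness):

```python
import sqlite3


def execution_match(conn: sqlite3.Connection, gold_sql: str, pred_sql: str) -> bool:
    """True if the predicted SQL returns the same result set as the gold SQL.

    Row order is ignored (sorted comparison), mirroring the usual
    execution-accuracy convention; an unexecutable prediction scores 0.
    """
    gold = conn.execute(gold_sql).fetchall()
    try:
        pred = conn.execute(pred_sql).fetchall()
    except sqlite3.Error:
        return False
    return sorted(gold) == sorted(pred)


def execution_accuracy(conn: sqlite3.Connection, pairs) -> float:
    """Fraction of (gold_sql, pred_sql) pairs whose results match."""
    if not pairs:
        return 0.0
    return sum(execution_match(conn, g, p) for g, p in pairs) / len(pairs)
```

Note that result-set equality is deliberately lenient: two syntactically different queries (e.g. a filter written as `>` vs `>=` against the same data) can both count as correct if they return identical rows.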

HFEPX Relevance Assessment

This paper has strong direct human-feedback and evaluation-protocol signal and is suitable as a primary eval pipeline reference.

  • Best use: Primary protocol reference for eval design
  • Use if you need: A concrete protocol example with enough signal to inform rater workflow design.
  • Main weakness: No major weakness surfaced.
  • Trust level: High
  • Eval-Fit Score: 75/100 (High). Use this as a primary source when designing or comparing eval protocols.
  • Human Feedback Signal: Detected
  • Evaluation Signal: Detected
  • HFEPX Fit: High-confidence candidate
  • Extraction confidence: High

What This Page Found In The Paper

Each field below shows whether the signal looked explicit, partial, or missing in the available metadata. Use this to judge what is safe to trust directly and what still needs full-paper validation.

Human Feedback Types

strong

Expert Verification

Confidence: High · Direct evidence

Directly usable for protocol triage.

Evidence snippet: To curate a "Gold Standard" dataset, we collaborate with domain experts in cricket and SQL to manually author complex queries, ensuring logical correctness.

Evaluation Modes

strong

Automatic Metrics

Confidence: High · Direct evidence

Includes extracted eval setup.

Evidence snippet: We evaluate six state-of-the-art models, including GPT-4o, Claude 3.7 Sonnet, and open-source models, using a strict evaluation protocol.

Quality Controls

strong

Gold Questions

Confidence: High · Direct evidence

Calibration/adjudication-style controls detected.

Evidence snippet: To curate a "Gold Standard" dataset, we collaborate with domain experts in cricket and SQL to manually author complex queries, ensuring logical correctness.

Benchmarks / Datasets

strong

DROP, BIRD, CricBench

Confidence: High · Direct evidence

Useful for quick benchmark comparison.

Evidence snippet: To investigate this potential capability gap, we present CricBench, a comprehensive benchmark suite for evaluating LLMs on specialized cricket data.

Reported Metrics

strong

Accuracy

Confidence: High · Direct evidence

Useful for evaluation criteria comparison.

Evidence snippet: While the open-weights reasoning model DeepSeek R1 achieves state-of-the-art performance (50.6%), surpassing proprietary giants like Claude 3.7 Sonnet (47.7%) and GPT-4o (33.7%), it still exhibits a significant accuracy drop when moving from general benchmarks (BIRD) to CricBench.

Rater Population

strong

Domain Experts

Confidence: High · Direct evidence

Helpful for staffing comparability.

Evidence snippet: To curate a "Gold Standard" dataset, we collaborate with domain experts in cricket and SQL to manually author complex queries, ensuring logical correctness.

Human Data Lens

  • Uses human feedback: Yes
  • Feedback types: Expert Verification
  • Rater population: Domain Experts
  • Unit of annotation: Unknown
  • Expertise required: Coding, Multilingual
  • Signal basis: Structured extraction plus abstract evidence.

Evaluation Lens

  • Evaluation modes: Automatic Metrics
  • Agentic eval: None
  • Quality controls: Gold Questions
  • Signal confidence: 0.90
  • Known cautions: None surfaced in extraction.
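
The "Gold Questions" control listed above usually means seeding the task queue with items whose answers experts have already verified, then tracking each rater's (or model's) hit rate on those seeds to catch drift or low-quality work. A minimal sketch of that bookkeeping, assuming exact-match answers and an illustrative 0.8 flagging threshold (neither is specified by the paper):

```python
def gold_hit_rates(responses, gold_answers):
    """Per-rater accuracy on gold (expert-verified) calibration items.

    responses: iterable of (rater_id, item_id, answer) tuples
    gold_answers: dict mapping item_id -> verified answer
    Responses to non-gold items are ignored.
    """
    hits, totals = {}, {}
    for rater, item, answer in responses:
        if item not in gold_answers:
            continue  # regular work item, not a calibration seed
        totals[rater] = totals.get(rater, 0) + 1
        if answer == gold_answers[item]:
            hits[rater] = hits.get(rater, 0) + 1
    return {r: hits.get(r, 0) / totals[r] for r in totals}


def flag_raters(rates, threshold=0.8):
    """Raters whose gold accuracy falls below the (assumed) threshold."""
    return sorted(r for r, acc in rates.items() if acc < threshold)
```

In practice the gold items are interleaved invisibly with regular items, so hit rates estimate ongoing quality rather than a one-time qualification test.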

Protocol And Measurement Signals

Benchmarks / Datasets

DROP, BIRD, CricBench

Reported Metrics

accuracy

Research Brief

Metadata summary

CricBench is a multilingual (English and Hindi) benchmark suite for evaluating LLMs on Text-to-SQL queries over specialized cricket data, with expert-authored gold queries.

Based on abstract + metadata only. Check the source paper before making high-confidence protocol decisions.

Key Takeaways

  • Cricket is the second most popular sport in the world, commanding a massive following of over 2.5 billion fans.
  • Enthusiasts and analysts frequently seek advanced statistical insights, such as long-term historical performance trends or complex player comparisons, that are often unavailable through standard web searches.
  • While Large Language Models (LLMs) have advanced significantly in Text-to-SQL tasks, their capability to handle the domain-specific nuances, complex schema variations, and multilingual requirements inherent to sports analytics remains under-explored.

Researcher Actions

  • Compare this paper against nearby papers in the same arXiv category before using it for protocol decisions.
  • Validate inferred eval signals (Automatic metrics) against the full paper.
  • Use related-paper links to find stronger protocol-specific references.

Caveats

  • Generated from abstract + metadata only; no PDF parsing.
  • Signals below are heuristic and may miss details reported outside the abstract.

Research Summary

Contribution Summary

  • To investigate this potential capability gap, we present CricBench, a comprehensive benchmark suite for evaluating LLMs on specialized cricket data.
  • We evaluate six state-of-the-art models, including GPT-4o, Claude 3.7 Sonnet, and open-source models, using a strict evaluation protocol.
  • While the open-weights reasoning model DeepSeek R1 achieves state-of-the-art performance (50.6%), surpassing proprietary giants like Claude 3.7 Sonnet (47.7%) and GPT-4o (33.7%), it still exhibits a significant accuracy drop when moving from general benchmarks (BIRD) to CricBench.

Why It Matters For Eval

  • To investigate this potential capability gap, we present CricBench, a comprehensive benchmark suite for evaluating LLMs on specialized cricket data.
  • We evaluate six state-of-the-art models, including GPT-4o, Claude 3.7 Sonnet, and open-source models, using a strict evaluation protocol.

Researcher Checklist

  • Pass: Human feedback protocol is explicit

    Detected: Expert Verification

  • Pass: Evaluation mode is explicit

    Detected: Automatic Metrics

  • Pass: Quality control reporting appears

    Detected: Gold Questions

  • Pass: Benchmark or dataset anchors are present

    Detected: DROP, BIRD, CricBench

  • Pass: Metric reporting is present

    Detected: accuracy

Related Papers

Papers are ranked by protocol overlap, extraction signal alignment, and semantic proximity.
