Go from paper to working code
Search by keyword, paper title, arXiv ID, DOI, or URL. We return matching papers so you can click into the right page.
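As an illustration, the four query types can be told apart with a few pattern checks before dispatching the search. This is a hypothetical sketch, not Paper2Code's actual parser; the function name and return labels are assumptions:

```python
import re

def classify_query(q: str) -> str:
    """Guess which kind of paper lookup the user typed (illustrative only)."""
    q = q.strip()
    # Full URLs (e.g. an arxiv.org or doi.org link) start with a scheme
    if q.startswith(("http://", "https://")):
        return "url"
    # Modern arXiv IDs look like 2407.02490, optionally with a version suffix
    if re.fullmatch(r"\d{4}\.\d{4,5}(v\d+)?", q):
        return "arxiv_id"
    # DOIs start with a "10." prefix followed by a registrant code and a slash
    if re.fullmatch(r"10\.\d{4,9}/\S+", q):
        return "doi"
    # Anything else is treated as a title or keyword search
    return "keyword"
```

A keyword query then goes to full-text search, while the structured forms can resolve directly to a single paper page.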
Everything you need to go from paper to code
Paper2Code aggregates implementation artifacts, reproducibility signals, and research context for over 10,000 papers — so you can spend less time searching and more time building.
Ranked Implementations
Find the most reliable GitHub repo for any paper. We score and rank by maintenance, stars, CI, licenses, and official authorship.
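A minimal sketch of how a ranking might combine those signals. The field names, weights, and thresholds here are illustrative assumptions, not Paper2Code's actual scoring:

```python
import math
from dataclasses import dataclass

@dataclass
class RepoSignals:
    # Hypothetical signal set; the live ranker's features are internal.
    stars: int
    months_since_commit: int
    has_ci: bool
    has_license: bool
    is_official: bool  # repo authored by the paper's own authors

def rank_score(r: RepoSignals) -> float:
    """Toy weighted score: official, maintained repos with CI and a license rank first."""
    score = math.log10(r.stars + 1)          # popularity, log-dampened
    score += 3.0 if r.is_official else 0.0   # official authorship dominates
    score += 1.0 if r.has_ci else 0.0
    score += 0.5 if r.has_license else 0.0
    score -= 0.1 * r.months_since_commit     # staleness penalty
    return score
```

Sorting candidates with `sorted(repos, key=rank_score, reverse=True)` then yields the ranked list; dampening stars keeps a popular but abandoned fork from outranking an actively maintained official repo.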
Hugging Face Artifacts
Direct links to models, datasets, and demo spaces associated with each paper, curated for relevance and quality.
Reproducibility Signals
Dependency manifests, Docker files, CI pipelines, and license data scored into a reproducibility verdict: Strong, Moderate, or Limited.
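One way such a verdict could be derived is by weighting the artifacts a repo ships and bucketing the total. The artifact names, weights, and cutoffs below are assumptions for illustration; the real scorer's inputs and thresholds are not public:

```python
def reproducibility_verdict(artifacts: set[str]) -> str:
    """Map detected repo artifacts to a coarse verdict (illustrative sketch).

    `artifacts` holds hypothetical flags: "manifest" (requirements.txt or
    environment.yml), "dockerfile", "ci", and "license".
    """
    weights = {"manifest": 2, "dockerfile": 2, "ci": 1, "license": 1}
    points = sum(w for name, w in weights.items() if name in artifacts)
    if points >= 4:
        return "Strong"
    if points >= 2:
        return "Moderate"
    return "Limited"
```

Under these assumed weights, a pinned dependency manifest plus a Dockerfile is already enough for a Strong verdict, while a lone license file stays Limited.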
Research Context
Citation counts, influential citations, related papers, and semantic task/method/domain classification from the research graph.
Featured Papers
High-signal implementation pages from the current Paper2Code snapshot.
- Deep High-Resolution Representation Learning for Human Pose Estimation
This is an official PyTorch implementation of Deep High-Resolution Representation Learning for Human Pose Estimation.
2 repos · implementation baseline · Benchmark trust: grounded evidence · Updated Feb 25, 2019
- Matrix Information Theory for Self-Supervised Learning
Matrix Information Theory for Self-Supervised Learning is the primary contribution described in this paper.
2 repos · implementation baseline · Benchmark trust: thin evidence · Updated May 1, 2023
- EchoNet-Synthetic: Privacy-preserving Video Generation for Safe Medical Data Sharing
EchoNet-Synthetic: Privacy-preserving Video Generation for Safe Medical Data Sharing is the primary contribution described in this paper.
1 repo · implementation baseline · Benchmark trust: grounded evidence · Updated Jun 1, 2024
- MInference 1.0: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention
MInference 1.0: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention presents a transformer method.
1 repo · implementation baseline · Benchmark trust: thin evidence · Updated Jul 1, 2024
- OpenHands: An Open Platform for AI Software Developers as Generalist Agents
OpenHands: An Open Platform for AI Software Developers as Generalist Agents is the primary contribution described in this paper.
2 repos · implementation baseline · Benchmark trust: thin evidence · Updated Jul 1, 2024
- Grounding DINO 1.5: Advance the "Edge" of Open-Set Object Detection
Grounding DINO 1.5: Advance the "Edge" of Open-Set Object Detection is the primary contribution described in this paper.
2 repos · implementation baseline · Benchmark trust: thin evidence · Updated May 1, 2024
- From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos
From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos is the primary contribution described in this paper.
2 repos · implementation baseline · Benchmark trust: grounded evidence · Updated Dec 1, 2023
- FiLM: Frequency improved Legendre Memory Model for Long-term Time Series Forecasting
FiLM: Frequency improved Legendre Memory Model for Long-term Time Series Forecasting is the primary contribution described in this paper.
2 repos · implementation baseline · Benchmark trust: grounded evidence · Updated May 1, 2022
- Data-Free Knowledge Distillation for Heterogeneous Federated Learning
Data-Free Knowledge Distillation for Heterogeneous Federated Learning is the primary contribution described in this paper.
1 repo · implementation baseline · Benchmark trust: thin evidence · Updated May 1, 2021
- Uncertainty Baselines: Benchmarks for Uncertainty & Robustness in Deep Learning
Uncertainty Baselines: Benchmarks for Uncertainty & Robustness in Deep Learning is the primary contribution described in this paper.
1 repo · implementation baseline · Benchmark trust: thin evidence · Updated Jun 1, 2021
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
1 repo · implementation baseline · Benchmark trust: grounded evidence · Updated Oct 11, 2018
- Distributionally Adversarial Attack
Recent work on adversarial attack has shown that Projected Gradient Descent (PGD) Adversary is a universal first-order adversary, and the classifier adversarially trained by PGD i…
3 repos · implementation baseline · Benchmark trust: grounded evidence · Updated Aug 16, 2018
Snapshot refreshed: Mar 7, 2026. The landing page now renders from cached snapshot data so search and internal links stay fast even when the API is under load.
Popular Research Areas
Jump directly into common implementation searches.