Go from paper to working code
Search by keyword, paper title, arXiv ID, DOI, or URL. We return matching papers so you can click through to the right implementation page.
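Detecting which kind of identifier a query is can be done with a few pattern checks. The sketch below is illustrative only; the function name and heuristics are assumptions, not Paper2Code's actual matcher.

```python
import re

def classify_query(q: str) -> str:
    """Guess which kind of search input a query is.
    Heuristics only -- a hypothetical sketch, not the production matcher."""
    q = q.strip()
    if q.startswith(("http://", "https://")):
        return "url"
    # New-style arXiv IDs look like 1810.04805, optionally with a version suffix
    if re.fullmatch(r"\d{4}\.\d{4,5}(v\d+)?", q):
        return "arxiv_id"
    # DOIs start with a "10." registrant prefix followed by a suffix
    if re.fullmatch(r"10\.\d{4,9}/\S+", q):
        return "doi"
    # Everything else is treated as free text
    return "keyword_or_title"
```

A URL or exact ID can then be resolved directly, while free text falls through to full-text search.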
Popular right now
Try quick examples
Everything you need to go from paper to code
Paper2Code aggregates implementation artifacts, reproducibility signals, and research context for over 10,000 papers — so you can spend less time searching and more time building.
Ranked Implementations
Find the most reliable GitHub repo for any paper. We score and rank by maintenance, stars, CI, licenses, and official authorship.
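A ranking like this can be expressed as a weighted score over repo signals. The field names and weights below are assumptions for illustration, not Paper2Code's actual formula.

```python
import math

def repo_score(repo: dict) -> float:
    """Combine authorship, CI, license, maintenance, and popularity
    into one rank score. Weights are illustrative assumptions."""
    score = 3.0 * bool(repo.get("is_official"))        # official authorship
    score += 2.0 * bool(repo.get("has_ci"))            # CI pipeline present
    score += 1.0 * bool(repo.get("has_license"))       # license declared
    score += 1.0 * bool(repo.get("recently_updated"))  # maintenance signal
    score += math.log10(1 + repo.get("stars", 0))      # dampened popularity
    return score

candidates = [
    {"name": "fork", "stars": 120, "has_license": True},
    {"name": "official", "stars": 900, "is_official": True,
     "has_ci": True, "has_license": True, "recently_updated": True},
]
best = max(candidates, key=repo_score)
```

Log-scaling the star count keeps a hugely popular but unmaintained fork from outranking a well-kept official repo.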
Hugging Face Artifacts
Direct links to models, datasets, and demo spaces associated with each paper, curated for relevance and quality.
Reproducibility Signals
Dependency manifests, Docker files, CI pipelines, and license data scored into a reproducibility verdict: Strong, Moderate, or Limited.
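Folding binary signals into a three-level verdict can be as simple as counting them. The signal keys and thresholds below are assumptions sketched for illustration, not the actual scoring rules.

```python
def reproducibility_verdict(signals: dict) -> str:
    """Map boolean reproducibility signals to a verdict.
    Keys and thresholds are hypothetical, for illustration only."""
    checks = ("dependency_manifest", "dockerfile", "ci_pipeline", "license")
    points = sum(bool(signals.get(k)) for k in checks)
    if points >= 3:
        return "Strong"
    if points == 2:
        return "Moderate"
    return "Limited"
```

For example, a repo with a Dockerfile and a dependency manifest but no CI or license would land at Moderate under these assumed thresholds.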
Research Context
Citation counts, influential citations, related papers, and semantic task/method/domain classification from the research graph.
Featured Papers
High-signal implementation pages from the current Paper2Code snapshot.
- Deep High-Resolution Representation Learning for Human Pose Estimation
  This is an official PyTorch implementation of Deep High-Resolution Representation Learning for Human Pose Estimation.
  2 repos · implementation starting point · Benchmark trust: unstable signal · Updated Feb 25, 2019

- From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos
  From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos is the primary contribution described in this paper.
  2 repos · implementation baseline · Benchmark trust: grounded evidence · Updated Dec 1, 2023

- FiLM: Frequency improved Legendre Memory Model for Long-term Time Series Forecasting
  FiLM: Frequency improved Legendre Memory Model for Long-term Time Series Forecasting is the primary contribution described in this paper.
  2 repos · implementation baseline · Benchmark trust: grounded evidence · Updated May 1, 2022

- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
  1 repo · implementation baseline · Benchmark trust: grounded evidence · Updated Oct 11, 2018

- Unified Latents (UL): How to train your latents
  We present Unified Latents (UL), a framework for learning latent representations that are jointly regularized by a diffusion prior and decoded by a diffusion model.
  0 repos · benchmark reference · Benchmark trust: grounded evidence · Updated Feb 19, 2026

- DrugGen: Advancing Drug Discovery with Large Language Models and Reinforcement Learning Feedback
  Traditional drug design faces significant challenges due to inherent chemical and biological complexities, often resulting in high failure rates in clinical trials.
  2 repos · implementation baseline · Benchmark trust: grounded evidence · Updated Nov 20, 2024

- Attention-based Extraction of Structured Information from Street View Imagery
  Attention-based Extraction of Structured Information from Street View Imagery presents a transformer method.
  1 repo · implementation baseline · Benchmark trust: grounded evidence · Updated Apr 1, 2017

- Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation
  Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation presents a transformer method.
  2 repos · implementation starting point · Benchmark trust: missing · Updated May 1, 2021

- Matrix Information Theory for Self-Supervised Learning
  Matrix Information Theory for Self-Supervised Learning is the primary contribution described in this paper.
  2 repos · implementation starting point · Benchmark trust: missing · Updated May 1, 2023

- Grounding DINO 1.5: Advance the "Edge" of Open-Set Object Detection
  Grounding DINO 1.5: Advance the "Edge" of Open-Set Object Detection is the primary contribution described in this paper.
  1 repo · implementation starting point · Benchmark trust: missing · Updated May 1, 2024

- Distributionally Adversarial Attack
  Recent work on adversarial attack has shown that Projected Gradient Descent (PGD) Adversary is a universal first-order adversary, and the classifier adversarially trained by PGD i…
  3 repos · implementation starting point · Benchmark trust: thin evidence · Updated Aug 16, 2018

- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
  ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators is the primary contribution described in this paper.
  1 repo · implementation starting point · Benchmark trust: missing · Updated Mar 1, 2020
Snapshot refreshed: Mar 9, 2026. The landing page now renders from cached snapshot data so search and internal links stay fast even when the API is under load.
Popular Research Areas
Jump directly into common implementation searches.