Go from paper to working code
Search by keyword, paper title, arXiv ID, DOI, or URL. We return matching papers so you can click through to the right paper page.
Popular right now
Try quick examples
Everything you need to go from paper to code
Paper2Code aggregates implementation artifacts, reproducibility signals, and research context for over 10,000 papers, so you can spend less time searching and more time building.
Ranked Implementations
Find the most reliable GitHub repo for any paper. We score and rank by maintenance, stars, CI, licenses, and official authorship.
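As a minimal sketch of how signals like these could be combined into a single ranking score: the weights, field names, and caps below are purely illustrative assumptions, not Paper2Code's actual scoring rubric.

```python
# Hypothetical repo-ranking sketch. All weights and field names are
# illustrative assumptions, not Paper2Code's real scoring model.
from dataclasses import dataclass

@dataclass
class Repo:
    stars: int
    months_since_commit: int  # maintenance recency
    has_ci: bool
    has_license: bool
    is_official: bool         # authored by the paper's authors

def rank_score(repo: Repo) -> float:
    """Combine signals into one comparable score (higher is better)."""
    score = min(repo.stars, 1000) / 1000           # cap star influence
    score += 1.0 / (1 + repo.months_since_commit)  # favor recently updated repos
    score += 0.5 if repo.has_ci else 0.0
    score += 0.25 if repo.has_license else 0.0
    score += 1.0 if repo.is_official else 0.0      # official repos get a strong boost
    return score

repos = [
    Repo(stars=120, months_since_commit=2, has_ci=True, has_license=True, is_official=True),
    Repo(stars=5000, months_since_commit=36, has_ci=False, has_license=True, is_official=False),
]
ranked = sorted(repos, key=rank_score, reverse=True)  # official, maintained repo ranks first
```

The design point is that raw popularity (stars) is capped so an actively maintained official implementation can outrank an abandoned but famous fork.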
Hugging Face Artifacts
Direct links to models, datasets, and demo spaces associated with each paper, curated for relevance and quality.
Reproducibility Signals
Dependency manifests, Dockerfiles, CI pipelines, and license data scored into a reproducibility verdict: Strong, Moderate, or Limited.
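The mapping from signals to a three-level verdict could be as simple as counting which artifacts are present. The thresholds below are an assumed illustration, not the rubric Paper2Code actually uses.

```python
def reproducibility_verdict(has_manifest: bool, has_dockerfile: bool,
                            has_ci: bool, has_license: bool) -> str:
    """Map binary reproducibility signals to a verdict.
    Thresholds here are illustrative assumptions only."""
    present = sum([has_manifest, has_dockerfile, has_ci, has_license])
    if present >= 3:
        return "Strong"
    if present == 2:
        return "Moderate"
    return "Limited"

# A repo with a dependency manifest, Dockerfile, and CI would score "Strong";
# one with only a license would score "Limited".
```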
Research Context
Citation counts, influential citations, related papers, and semantic task/method/domain classification from the research graph.
Featured Papers
High-signal implementation pages from the current Paper2Code snapshot.
- FiLM: Frequency improved Legendre Memory Model for Long-term Time Series Forecasting
  FiLM: Frequency improved Legendre Memory Model for Long-term Time Series Forecasting is the primary contribution described in this paper.
  2 repos · implementation baseline · Benchmark trust: grounded evidence · Updated May 1, 2022
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
  1 repo · implementation baseline · Benchmark trust: grounded evidence · Updated Oct 11, 2018
- Unified Latents (UL): How to train your latents
  We present Unified Latents (UL), a framework for learning latent representations that are jointly regularized by a diffusion prior and decoded by a diffusion model.
  0 repos · benchmark reference · Benchmark trust: grounded evidence · Updated Feb 19, 2026
- DrugGen: Advancing Drug Discovery with Large Language Models and Reinforcement Learning Feedback
  Traditional drug design faces significant challenges due to inherent chemical and biological complexities, often resulting in high failure rates in clinical trials.
  2 repos · implementation baseline · Benchmark trust: grounded evidence · Updated Nov 20, 2024
- Attention-based Extraction of Structured Information from Street View Imagery
  Attention-based Extraction of Structured Information from Street View Imagery presents an attention-based method.
  1 repo · implementation baseline · Benchmark trust: grounded evidence · Updated Apr 1, 2017
- Distributionally Adversarial Attack
  Recent work on adversarial attack has shown that Projected Gradient Descent (PGD) Adversary is a universal first-order adversary, and the classifier adversarially trained by PGD i…
  3 repos · implementation starting point · Benchmark trust: thin evidence · Updated Aug 16, 2018
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
  ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators is the primary contribution described in this paper.
  1 repo · implementation starting point · Benchmark trust: missing · Updated Mar 1, 2020
- UL2: Unifying Language Learning Paradigms
  UL2: Unifying Language Learning Paradigms is the primary contribution described in this paper.
  2 repos · implementation starting point · Benchmark trust: missing · Updated May 1, 2022
- Beyond Self-Supervision: A Simple Yet Effective Network Distillation Alternative to Improve Backbones
  Beyond Self-Supervision: A Simple Yet Effective Network Distillation Alternative to Improve Backbones is the primary contribution described in this paper.
  1 repo · implementation starting point · Benchmark trust: missing · Updated Mar 1, 2021
- Improving Sampling for Masked Diffusion Models via Information Gain
  Masked Diffusion Models (MDMs) offer greater flexibility in decoding order than autoregressive models but require careful planning to achieve high-quality generation.
  1 repo · context only · Benchmark trust: missing · Updated Feb 20, 2026
- Revisiting Weight Regularization for Low-Rank Continual Learning
  Continual Learning (CL) with large-scale pre-trained models (PTMs) has recently gained wide attention, shifting the focus from training from scratch to continually adapting PTMs.
  1 repo · context only · Benchmark trust: missing · Updated Feb 19, 2026
- Perspective Reconstruction of Human Faces by Joint Mesh and Landmark Regression
  Perspective Reconstruction of Human Faces by Joint Mesh and Landmark Regression is the primary contribution described in this paper.
  1 repo · implementation baseline · Benchmark trust: thin evidence · Updated Aug 1, 2022
Snapshot refreshed: Mar 9, 2026. The landing page now renders from cached snapshot data so search and internal links stay fast even when the API is under load.
Popular Research Areas
Jump directly into common implementation searches.