
My Current ML Reading List

learning · ML · resources

people ask what i read to keep up with ML. here's my current list, organized by category.

disclaimer: i haven't finished all of these. it's aspirational reading as much as completed reading.

foundational papers (everyone should read)

attention is all you need (2017) - the transformer paper. the genesis of modern AI.

scaling laws for neural language models (2020) - kaplan et al. on how scale affects capability.
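the kaplan et al. result fits loss as a power law in parameter count. a toy sketch of that form (the constants are roughly the paper's reported fits, treat them as illustrative, not authoritative):

```python
def kaplan_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """power-law scaling of loss with parameter count: L(N) = (N_c / N)^alpha.

    n_c and alpha are approximately the values reported by kaplan et al.
    (2020); they're here to show the shape of the curve, nothing more.
    """
    return (n_c / n_params) ** alpha

# loss drops smoothly, but slowly, as parameters scale up
print(kaplan_loss(1e8), kaplan_loss(1e10))
```

the takeaway: each 10x in scale buys a similar multiplicative drop in loss, which is what makes capability roughly predictable in advance.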

training compute-optimal large language models (2022) - the chinchilla paper. changed how we think about training efficiency.
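chinchilla's headline rule of thumb: train on roughly 20 tokens per parameter. a minimal sketch (the helper name is mine, and the ratio is the commonly cited approximation, not a hard law):

```python
def compute_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """chinchilla rule of thumb: ~20 training tokens per parameter.

    the exact ratio depends on compute budget and data quality;
    20 is the widely quoted approximation.
    """
    return tokens_per_param * n_params

# a 70B-parameter model would want on the order of 1.4 trillion tokens
print(f"{compute_optimal_tokens(70e9):.2e}")
```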

constitutional AI (anthropic, 2022) - the safety approach i work within.

recent papers i've enjoyed

mechanistic interpretability papers from anthropic - understanding what models actually do inside.

various RLHF papers - how we align models to human preferences.

efficiency papers - flash attention, lower-precision training, etc.
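one intuition behind the lower-precision work: halving the bits halves the memory (and often the bandwidth) per tensor. a toy numpy illustration, not how a real mixed-precision trainer works:

```python
import numpy as np

# the same weight matrix stored in float32 vs float16:
# float32 uses 4 bytes per element, float16 uses 2
w32 = np.random.randn(1024, 1024).astype(np.float32)
w16 = w32.astype(np.float16)

print(w32.nbytes // w16.nbytes)  # → 2: half the memory in half precision
```

real systems (mixed-precision training, flash attention) are more careful about where full precision is kept, but the memory and bandwidth saving is the core win.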

books

deep learning (goodfellow, bengio, courville) - the classic textbook. dense but comprehensive.

designing machine learning systems (huyen) - practical systems thinking about ML.

the alignment problem (christian) - a non-technical, accessible look at AI safety.

blogs and newsletters

anthropic research blog - i'm biased, but it's genuinely good.

the gradient - curated ML research.

import ai (jack clark) - a weekly AI news summary.

various substack newsletters - too many to list.

podcasts

gradient dissent - wandb's podcast. good interviews.

the robot brains - pieter abbeel interviewing ML people.

how i actually read

i don't read everything cover-to-cover. my approach:

  1. skim abstracts and conclusions
  2. go deep on papers relevant to current work
  3. flag things to read later (the pile grows)
  4. accept that i'll never read everything

staying current vs. going deep

the field moves fast. you can either:

  • stay broad (know a little about everything new)
  • go deep (know a lot about specific areas)

i'm trying to balance both: deep on alignment science, broad on everything else.

recommendation

if you're starting out: pick one area. go deep. build from there.

if you're established: stay curious. keep reading. the field rewards constant learners.


adding three new papers to the list as i write this. the backlog never ends.