
So There's This Company Called Anthropic

tags: AI, anthropic, career

i've been casually following anthropic for a while now. today i went deep.

this is my attempt to understand who they are, what they do, and why they interest me so much.

the basics

anthropic was founded in 2021 by dario and daniela amodei, along with several other former openai researchers. they're headquartered in san francisco. they make claude.

but the interesting part isn't the facts. it's the philosophy.

the thesis

from what i can tell, anthropic's core belief is something like:

"AI is going to be transformative. maybe the most transformative technology ever. if we build it wrong, very bad things could happen. so we need to build it right, carefully, with safety as a first-class concern."

this isn't unique (lots of people say safety matters), but anthropic seems to actually mean it. their research prioritizes understanding AI systems, not just making them more capable.

the research that caught my attention

constitutional AI: train the model to critique and revise its own outputs against a written set of principles. instead of relying only on human feedback, the model helps align itself. (i sketched the rough loop after this list.)

interpretability: trying to understand why models do what they do. not just what outputs they produce, but how they produce them.

scaling laws: understanding how capabilities emerge as models get bigger. anticipating what might happen before it happens.

red-teaming: actively trying to break their own models to find problems before users do.
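to check that i actually understood the constitutional AI loop, i tried writing it out as toy python. this is just my mental model from reading about it: `generate` is a stand-in for a real language model call, the principles are made up, and none of the names come from anthropic's actual code.

```python
# toy sketch of the constitutional AI self-critique loop, as i understand it.
# everything here is hypothetical; generate() fakes a real LM call.

PRINCIPLES = [
    "choose the response that is more helpful and honest.",
    "choose the response that is less likely to cause harm.",
]

def generate(prompt: str) -> str:
    """placeholder for a real language model call; returns a canned string."""
    return f"<model output for: {prompt!r}>"

def constitutional_revision(user_prompt: str) -> str:
    response = generate(user_prompt)
    for principle in PRINCIPLES:
        # 1. the model critiques its own answer against a principle
        critique = generate(
            f"critique this response according to the principle "
            f"'{principle}':\n{response}"
        )
        # 2. then revises the answer in light of that critique
        response = generate(
            f"revise the response to address this critique:\n{critique}\n"
            f"original response:\n{response}"
        )
    return response

if __name__ == "__main__":
    print(constitutional_revision("how do i pick a strong password?"))
```

my reading is that the (prompt, final revision) pairs then become supervised finetuning data, and a later RL stage uses model-ranked preferences instead of human labels. but again, that's my summary, not their spec.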

the culture (from outside)

everything i read suggests they're serious about their mission. job listings mention safety. blog posts discuss ethics. they publish research even when it highlights limitations.

this might be marketing. but it feels genuine?

why this matters to me

i'm a second-year CS student thinking about the future.

option A: work at a company that builds cool stuff and worries about safety later.

option B: work at a company that builds cool stuff while actively trying to make it safe.

anthropic is option B. and honestly, option B sounds like where i want to be.

the dream (probably unrealistic)

work at anthropic. contribute to AI safety research. help build systems that are beneficial and not catastrophic.

is this likely? no. they hire experienced researchers with phds and impressive track records.

but the dream is forming. and dreams can become plans.

for now

i'm going to:

  • keep reading their research
  • build relevant skills
  • work on my thesis (which is vaguely related)
  • apply when the time comes (probably years from now)

and if not anthropic, then somewhere with similar values. the field is growing.


spent 4 hours reading anthropic's research blog. no regrets. slightly obsessed.