Watching Claude Evolve From the Inside
i've been at anthropic for 15 months now. in that time, i've watched:
- claude 3.5 sonnet ship
- computer use launch
- claude code go live
- claude 4 opus and sonnet release
- claude opus 4.5 drop
that's a lot of evolution to witness from the inside.
what i've learned about model development
1. it's never done. every model is immediately followed by work on the next one. there's no "finished."
2. the eval process is extensive. so much work goes into understanding what a model can and can't do. safety testing takes time.
3. internal use is revealing. using models internally before release surfaces issues. dogfooding matters.
4. feedback loops are tight. what users say after launch influences what we work on next.
watching capability increase
from 3.5 sonnet to 4 opus: the improvements are real and noticeable.
reasoning got better. coding got better. instruction following got better.
and with each release, what seemed impossible becomes routine.
the anthropic approach
what i appreciate about how we work:
- safety is integrated, not bolted on
- we take time rather than rushing
- there's genuine belief in the mission
- criticism is welcomed internally
i'm biased, obviously. but i think the approach is right.
the emotional experience
watching something you contributed to go live: surreal and gratifying.
watching external feedback pour in: anxiety-inducing.
seeing people genuinely helped by the model: the point of all this.
claude's personality
one thing outsiders may not know: there's a lot of thought put into how claude communicates.
the helpfulness, the honesty, the harmlessness: these aren't accidents. they're intentional.
working on tools that millions interact with means thinking carefully about that interaction.
looking forward
more capabilities coming. more improvements. more questions about what AI should be.
i'm privileged to be part of this conversation from the inside.
a user told me today that claude helped them debug a tricky problem. small moment, but that's why we do this.