lawrencejones.dev
Engineer at https://incident.io/. Previously @GoCardless | Writes at http://blog.lawrencejones.dev | @lawrjones on Twitter
278 posts
616 followers
359 following
Regular Contributor
Active Commenter
comment in response to
post
Squeaky clean
comment in response to
post
Yeah we’ve been doing this recently for catching bad error handling practices or fixing small things like “all UIs should use sentence case labels”
It’s really great for that type of thing!
comment in response to
post
OTOH, AI is a great tool to audit their codebase for several of the actions suggested in the post-mortem.
You can easily find and fix unsafe nil pointer checks in the service control binary using AI tools nowadays.
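To make that concrete, here is a minimal Go sketch of the kind of pattern I mean (the names are hypothetical, not the actual Service Control code): an unguarded dereference next to the guarded version an audit would suggest.

```go
package main

import "fmt"

// Config and QuotaPolicy are stand-in types purely for illustration;
// the real service code is not public.
type Config struct {
	Quota *QuotaPolicy
}

type QuotaPolicy struct {
	Limit int
}

// unsafeLimit dereferences Quota without checking for nil: the kind of
// unsafe nil pointer access an AI pass over the codebase can flag.
func unsafeLimit(c *Config) int {
	return c.Quota.Limit // panics if c or c.Quota is nil
}

// safeLimit guards both pointers and falls back to a default.
func safeLimit(c *Config) int {
	if c == nil || c.Quota == nil {
		return 0 // sensible default instead of a crash
	}
	return c.Quota.Limit
}

func main() {
	var cfg *Config
	fmt.Println(safeLimit(cfg)) // prints 0, no panic
	// fmt.Println(unsafeLimit(cfg)) // would panic: nil pointer dereference
}
```

The right fix always depends on context, but the unguarded form is exactly the sort of thing an LLM can enumerate across a large codebase very quickly.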
comment in response to
post
Equally though, it’s right on the edge of being ‘too designed’.
I’m proud of the result, but engineers reading the posts will see our design team helped put the pages together, which can trigger people.
Hard balance, even when you care a lot about getting it right.
incident.io/building-wit...
comment in response to
post
I think we do an ok job of this at incident? Not perfect, but decent.
The AI microsite we launched recently is a good example: content is straight from eng, very behind the scenes, there is no marketing in it.
comment in response to
post
It’s really hard to preserve the tone of early stage company blogs as you grow.
You have competing pressures:
- Marketing blog posts which detract from eng
- Increasingly high bar for ‘polish’
- Fear of saying the wrong thing
These combine into a hostile environment for solid tech writing.
comment in response to
post
…we are!
Both for product engineering roles in general, but also for AI engineering roles if that’s what you want for the next chapter in your career.
Message me on here if you’re interested. This job is the most fun I’ve had in my career, I can’t recommend it enough.
comment in response to
post
One nice touch is the “Note from the team” with all our faces on the signature.
I think it really captures how the team feels, from our ‘win together’ mindset to the pride we take in our work.
I’ve attached a copy with a big “WE’RE HIRING” message because…
comment in response to
post
It is very good, well worth a read
comment in response to
post
And if you want to read more about what our AI team have been up to, we’ve created a microsite for exactly those stories.
It covers everything from prompt optimisation and speculative execution to build-vs-buy.
incident.io/building-wit...
comment in response to
post
It's taken on a new meaning recently (Claude Code etc). I've found it useful in our team to call out when we're working on gut instead of numbers.
There's a lot of this when working with LLM prompts, where changes are subjective until you have evals. Until then, it's vibes (derogatory).
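For concreteness, a tiny Go sketch of what "having evals" can look like (all names hypothetical, and the model call is stubbed): a fixed set of cases, a scoring function, and a pass rate you can compare across prompt changes.

```go
package main

import (
	"fmt"
	"strings"
)

// evalCase pairs an input with a check on the model's output.
type evalCase struct {
	input    string
	expected string // substring we expect to see in the output
}

// runPrompt is a stub standing in for a real LLM call; swap in your
// provider's client here. Entirely hypothetical.
func runPrompt(prompt, input string) string {
	return "summary: " + input // placeholder behaviour
}

// score runs every case and returns the pass rate: the number you track
// across prompt changes instead of going on gut feel.
func score(prompt string, cases []evalCase) float64 {
	passed := 0
	for _, c := range cases {
		out := runPrompt(prompt, c.input)
		if strings.Contains(out, c.expected) {
			passed++
		}
	}
	return float64(passed) / float64(len(cases))
}

func main() {
	cases := []evalCase{
		{input: "database failover at 09:12", expected: "failover"},
		{input: "elevated 5xx on api", expected: "5xx"},
	}
	fmt.Printf("pass rate: %.0f%%\n", score("Summarise this incident update.", cases)*100)
}
```

Swap the stub for a real model call and track the pass rate over time, and prompt changes stop being purely subjective.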
comment in response to
post
I don’t try to claim that LLMs are producing valuably novel data on the regular, but it definitely makes sense to me how they might.
And I’m not sure it’s intuitively just search and retrieval when the answer implicitly draws on all the training data at once and combines it into a response.
comment in response to
post
…should encode knowledge as a broad body of information that isn’t just what is in the article itself.
That knowledge is what the LLM encodes, and it’s why the model can provide an answer that is not wholly present in any one source but is constituted from many pieces of the training data, which is genuinely novel.
comment in response to
post
Wouldn’t phrase it like the original post, but viewing them as a search engine rules out the idea that key knowledge can be encoded across the enormous amount of training data without being represented specifically in the data itself.
That feels wrong to me; intuitively, training data…
comment in response to
post
Hey, I think you’ve got this sorted from what I can see internally but shout if that’s not the case!
comment in response to
post
You should use PlanetScale so you never have to hit these problems yourself!
comment in response to
post
Original write-up here: gocardless.com/blog/inciden...
I remember this incident very clearly, especially the:
- Ok, I’m on the replica, please can you reboot the primary!
- Done!
- Oh dear, why has my screen gone dead…
An exchange that will haunt me forever.
comment in response to
post
Thank you! It’s a rule I live by, and helps me catch when I’m about to try eating too much of a problem at once.
comment in response to
post
😂
comment in response to
post
We were checking all sorts of things and benchmarking the laptop.
We had checked this first, but did it through go env, which didn’t pick up the override, so it was the compiler trace showing a single thread that made us go whatttt
comment in response to
post
Sigh
comment in response to
post
Hahahaa the more people draw these comparisons the more I think well yeah, that’s how I think too!
comment in response to
post
There is definitely a panic that’s making people say “we need to hire people who have experience here”
If you ask what experience, they’ll normally say research or deep-model dev experience, which imo isn’t solving for their problem. It’s easy to pattern-match poorly for novel roles.
comment in response to
post
The best thing we’ve found is markdown files that read like READMEs, explaining how we do things and how to use our tools.
Uploaded about 20 of them and now the project knows how things work.
comment in response to
post
Yep! I hear great things about cursor but I actually like that what I do in Claude is somewhat separate from my IDE/codebase, if that makes sense?
Copy-pasting between them doesn’t bother me and I can control exactly what goes in the context window (vs AI deciding what to look at)
comment in response to
post
Nowadays I think most new frontend at incident is drafted by AI, especially if it's complicated.
Having uploaded our code conventions to Claude, it's really easy to just say what you want and get a component + stories.tsx for it.
I use AI to draft initial backend tests now too.
comment in response to
post
Mostly random posts on LinkedIn, but:
- Checking code for bugs: www.linkedin.com/posts/lawren...
- Bunch of examples from the team: www.linkedin.com/posts/lawren...
- Using Claude projects: www.reddit.com/r/Experience...
comment in response to
post
I’m just not sure of the motivation.
If there are criticisms of the technology it seems more productive to acknowledge the positives too, or people may fairly assume the person is not familiar with the subject at all!
comment in response to
post
Agree there is a lot of noise and companies talking way ahead of what’s currently (or maybe ever) possible.
But I can’t relate to posts like these that flatly deny AI has a use. How I work has been changed so much by LLMs that the me of just a year ago wouldn’t recognise it.
comment in response to
post
It is not tricky at all! Rubocop is an incredible tool and Ruby an amazing language to make this stuff easy to write.
We wrote hundreds of these at GoCardless to do all sorts of crazy stuff. I miss Rubocop more than any other tool in the Ruby ecosystem.
comment in response to
post
It’s extremely on point too.
I used to write notes about how to start my work the next day before signing off; now I start a thread in Claude so it’s ready to go in the morning.
comment in response to
post
Thank you! Appreciate the share and glad you enjoyed it.
comment in response to
post
That's a lovely thing to say, the team will really appreciate this!
comment in response to
post
I’ve outlined the maturity stages of teams adopting AI and stressed the specific tools and processes you need if you want to get past the beginner stage and make real products.
Hopefully it’s useful to teams adopting these tools, more so than another “all you need is” statement!
comment in response to
post
The community loves saying “all you need is X” as if that answers everything.
But “all you need is evals”:
- Is reductive: evals are a complex topic!
- Needs tailoring for your context
- Only has meaning if you already know
As advice, it’s pretty poor: more of an ‘iykyk’ wink than anything useful.
comment in response to
post
Hardly-knows 😂 Amongst the rest, that one was quite funny.