nateberkopec.bsky.social
126 posts 1,016 followers 48 following

@tammyeverts.com Are you aware of any data sources on what % of mobile traffic is actually conducted on 3G or worse? I imagine to get the real data you need it from the telcos.

on my way to say "fix the N+1s" to everyone who hires me to audit the perf of their Rails app
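What "fix the N+1s" means in miniature, as a Rails-free sketch: `FakeDB` and its methods are invented for illustration (not a real API), with a counter standing in for database round-trips.

```ruby
# Toy illustration of an N+1 query pattern. FakeDB stands in for a
# database; `queries` counts round-trips.
class FakeDB
  COMMENTS_BY_POST = { 1 => %w[a b], 2 => %w[c], 3 => [] }
  attr_reader :queries

  def initialize
    @queries = 0
  end

  def post_ids
    @queries += 1
    COMMENTS_BY_POST.keys
  end

  def comments_for(post_id)
    @queries += 1
    COMMENTS_BY_POST[post_id]
  end

  def all_comments
    @queries += 1
    COMMENTS_BY_POST
  end
end

# The N+1: one query for the posts, plus one more per post for its comments.
naive = FakeDB.new
naive.post_ids.each { |id| naive.comments_for(id) }
naive.queries # => 4

# The fix, "eager loading": fetch all the comments in a single second query.
eager = FakeDB.new
ids = eager.post_ids
eager.all_comments.slice(*ids)
eager.queries # => 2
```

In real ActiveRecord the same fix is usually one word: `Post.includes(:comments)` instead of looping over `post.comments`.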

It's crazy to me that so many people seem to base their _opinion_ of a language on the TIOBE index and "does it include my favorite feature X which I'm used to in this other language I know". Base your opinion on whether it's the right tool for the job!

Great work from Aaron on speeding up FFI (and potentially other things) in Ruby. https://buff.ly/4aZrxuC I want to call out the bit about Chris Seaton. Chris put a lot of great ideas in people’s heads, it’s one of the things he was best at. I miss him.

Guess when I enabled jemalloc on @hatchbox.io?

@alancouzens.bsky.social oh no! The benchmarks have broken! alancouzens.com/blog/benchma... I tell people about this page all the time...

99% of people targeting LCP as a metric in their org have no idea that Safari (desktop and mobile), which probably accounts for ~25% of their audience, doesn't support it.

It's been more than 10 years since "the RapGenius/Heroku incident" but I still see shop after shop repeating the same mistakes as the Genius team did https://genius.com/James-somers-herokus-ugly-secret-annotated Study what came before!

If you're not using Ruby-LSP, you're ngmi

We are excited to be backing veteran open source infrastructure developer Evan Phoenix and Miren on their journey to deliver cloud computing’s first major upgrade in decades. blueyard.medium.com/miren-bd717a...

I always think it's funny when organizations get this org-memory around certain numbers and strings. At Gusto all the senior platform people could quote incident numbers like it was chapter/verse of the bible. "Well in incident 132..." meanwhile we're currently on like incident 754.

Pretty wild that there's no way to configure VSCode to not send specific files to the model. You can exclude autocompletes on a particular file type, but you can't exclude them from being added as chat context. We're really YOLOing the security on this stuff eh?

Rivers Cuomo has a better looking GitHub contribution graph than I do.

In the last 6 months I'm seeing an explosion of incidents at clients re: Postgres and Multixact buffers. Short version is that internal caches in Postgres overflow under a particular kind of write load involving subtransactions, LWLocks, or multiple connections locking the same row.

I realized today that people are giving me real American dollarbucks in exchange for my engineering advice and that it’s a pretty rad job

It's extremely easy to do perf work that _feels_ impactful and gets lots of nice reacts in the team Slack, but has no impact at all. Post a single trace with 1000 SQL queries, show how you fixed it. Great, but how often was that happening? No one asks that question.
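The question "how often was that happening?" is just multiplication. A back-of-the-envelope sketch with made-up numbers: a dramatic 1000-query trace that fires rarely vs. a boring 5 ms win on a hot path.

```ruby
# Aggregate impact = time saved per occurrence * how often it occurs.
def daily_saving_ms(ms_saved_per_occurrence, occurrences_per_day)
  ms_saved_per_occurrence * occurrences_per_day
end

flashy = daily_saving_ms(2_000, 3)       # pathological trace, fires 3x/day
boring = daily_saving_ms(5, 1_000_000)   # 5 ms shaved off 1M requests/day

flashy # => 6000 (ms/day)
boring # => 5000000 (ms/day)
```

The unglamorous fix is worth almost a thousand times more, which is exactly why frequency has to be part of every perf story.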

I've seen a lot of these "public roadmap" style things where people can vote up/comment on what they want, anyone ever used one that they like a lot?

Most of the truly top-end, principal/fellow level engineers don't care about TC (they're already rich), they just want to be left alone. Lot of people end up moving just because they feel like they can't get things done w/o someone getting in the way.

New Puma release - a few goodies for JRuby, refork users (more coming for this as well), and some observability/debug-logging stuff

IME, the first scaling bottleneck of Rails applications is at ~100,000 transactions per minute (requests + jobs) +/- 50%, which is where vertical scaling the database stops working. Horizontally scaling SQL is hard, but at some point you have no other choice.
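The rule of thumb above, as a trivial check (the threshold and ±50% band are the post's numbers; the helper name is made up):

```ruby
# ~100k transactions/min (requests + jobs), plus or minus 50%: the range
# where vertical database scaling typically stops working, per the post.
BOTTLENECK_RANGE = (50_000..150_000)

def near_db_bottleneck?(requests_per_min, jobs_per_min)
  BOTTLENECK_RANGE.cover?(requests_per_min + jobs_per_min)
end

near_db_bottleneck?(40_000, 20_000) # => true  (60k tpm, inside the band)
near_db_bottleneck?(10_000, 5_000)  # => false (15k tpm, plenty of headroom)
```

Note that jobs count too: a modest 40k rpm app running 20k background jobs per minute is already in the danger zone.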

no gods no masters or as they say in australia, no rules, just right

I released v4.0 of a 14 year old gem today

Another day, another big Rails app seeing gains from turning on YJIT for the first time. This is a big app which does mostly GraphQL responses:
P90 Mean Latency: 405ms (was 521ms, a 22% improvement)
P99 Mean Latency: 1.02s (was 1.46s, a 30% improvement)
Nice!
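The quoted percentages check out with the usual improvement formula, (old - new) / old:

```ruby
# Percentage latency improvement relative to the old (slower) value.
def improvement_pct(old_ms, new_ms)
  ((old_ms - new_ms) / old_ms.to_f * 100).round
end

improvement_pct(521, 405)    # => 22  (P90: 521ms -> 405ms)
improvement_pct(1460, 1020)  # => 30  (P99: 1.46s -> 1.02s)
```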

People like to poke fun at gurus but:
1. I've been eating Tim Ferriss' lentil/egg/spinach breakfast for 15 years.
2. I've been following GTD for ~20 years.
3. Peter Attia's book refocused my life around athleticism.
Just because they have fads and misses doesn't mean you should ignore them.

Care package for our first retainer client ever, Whop.