maruel.ca
✍️ maruel.ca 🖨️ makerworld.com/@maruel 💀 linkedin.com/in/maruel
Helping: arc.net
Embedded, ML, Go, Perf: github.com/maruel periph.io fuchsia.dev
Wrote Google Chrome's:
- large parts of its CI
- small parts of its sandbox
- window.print()

If you think the government is slow, try getting electric car chargers installed in a condo tower.

I implemented Gemini's explicit caching mechanism YESTERDAY and Google releases implicit caching today. Now time to rm -rf yesterday's work. (This is a good thing) developers.googleblog.com/en/gemini-2-...
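For anyone who hasn't touched it: explicit caching means you create a named cache for the big, reused chunk of the prompt and then reference it by name in later calls, while implicit caching lets the backend spot repeated prefixes on its own. Here's a rough Go sketch of the explicit flow over REST, written from memory, so treat the paths, field names, and model version as assumptions to check against the docs:

```go
// Rough sketch of Gemini explicit caching over REST. Paths, field names and
// the model version are from memory; verify against the official reference.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

const base = "https://generativelanguage.googleapis.com/v1beta"

func main() {
	key := os.Getenv("GEMINI_API_KEY")

	// 1. Create a named cache for the large, reused part of the prompt.
	// Note: explicit caches have a minimum token count, so the cached
	// content has to be genuinely big.
	createBody, _ := json.Marshal(map[string]any{
		"model": "models/gemini-2.0-flash-001", // must be a version that supports caching
		"contents": []map[string]any{{
			"role":  "user",
			"parts": []map[string]string{{"text": "<the big document you keep re-sending>"}},
		}},
		"ttl": "300s",
	})
	resp, err := http.Post(base+"/cachedContents?key="+key, "application/json", bytes.NewReader(createBody))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var cache struct {
		Name string `json:"name"` // e.g. "cachedContents/abc123"
	}
	if err := json.NewDecoder(resp.Body).Decode(&cache); err != nil {
		panic(err)
	}

	// 2. Reference the cache by name in later generateContent calls; the
	// cached tokens are billed at the reduced rate.
	genBody, _ := json.Marshal(map[string]any{
		"cachedContent": cache.Name,
		"contents": []map[string]any{{
			"role":  "user",
			"parts": []map[string]string{{"text": "Summarize the document."}},
		}},
	})
	resp2, err := http.Post(base+"/models/gemini-2.0-flash-001:generateContent?key="+key, "application/json", bytes.NewReader(genBody))
	if err != nil {
		panic(err)
	}
	defer resp2.Body.Close()
	fmt.Println("generateContent:", resp2.Status)
}
```

The appeal of implicit caching is that step 1 and all the TTL bookkeeping simply disappear.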

I love that my IDE takes less than 1s to startup. It really changes how I do window management as a software engineer. #NeoVIM

Today OpenAI started sending "X-Envoy-Upstream-Service-Time". We now have visibility into how much latency they waste on python infrastructure.
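It's just a response header, so any client can compare it against wall-clock time. A minimal Go sketch (the payload targets the public chat completions endpoint; the overhead math is approximate since it ignores the network round trip):

```go
// Compare client-observed latency to the upstream service time Envoy reports.
// The gap is proxying, queueing, and whatever the application front-end adds
// (minus one network round trip, which this sketch doesn't subtract).
package main

import (
	"fmt"
	"net/http"
	"os"
	"strings"
	"time"
)

func main() {
	body := strings.NewReader(`{"model":"gpt-4o-mini","messages":[{"role":"user","content":"hi"}]}`)
	req, err := http.NewRequest("POST", "https://api.openai.com/v1/chat/completions", body)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))
	req.Header.Set("Content-Type", "application/json")

	start := time.Now()
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	total := time.Since(start)

	// Envoy reports the upstream's processing time in milliseconds.
	upstream := resp.Header.Get("X-Envoy-Upstream-Service-Time")
	fmt.Printf("wall clock: %s, upstream service time: %s ms\n", total, upstream)
}
```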

I stumbled on my first obviously generated Golang package and... it's not good. I don't know if there's a correlation but the package "author" is also into bitcoin.

Can someone on the Google Cloud team explain how I can save 683% of last month's cost? Someone forgot to add a check "if > 100% { ... }"

I confirmed this is position bias! On every, single, model. Reversing the order of the terms reverses the preference. When the LLM lacks data, it chooses the easiest answer that results in a valid response. It makes sense. I wonder how often people will get caught by this. My guess is very often.
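The probe is mechanical: ask the same pairwise comparison twice with the operands swapped and check whether the winner follows the slot instead of the content. A self-contained Go sketch, with a toy "model" standing in for a real client:

```go
// Position-bias probe: ask the same comparison in both orders. A consistent
// model keeps the same winner; a biased one follows the slot. The toy ask
// func simulates a biased model; swap in a real LLM client to test a backend.
package main

import (
	"fmt"
	"strings"
)

// positionBiased asks the same question with the operands swapped and reports
// whether the preference flipped with the ordering.
func positionBiased(ask func(prompt string) string, a, b string) bool {
	const p = "Answer with one word only. Which is better: %s or %s?"
	first := ask(fmt.Sprintf(p, a, b))
	second := ask(fmt.Sprintf(p, b, a))
	return !strings.EqualFold(first, second)
}

func main() {
	// Toy stand-in for an LLM that always picks whatever it saw first.
	biased := func(prompt string) string {
		q := strings.TrimSuffix(prompt[strings.Index(prompt, ": ")+2:], "?")
		return strings.Split(q, " or ")[0]
	}
	fmt.Println("biased model flips with ordering:", positionBiased(biased, "Canada", "the US"))
}
```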

Congratulations to Poilievre for still not gaining security clearance so he can "speak his mind".

In 5 years, foundation LLM models won't cost much to train.

I just realized (too late) that ICSE is in town this year. conf.researchr.org/home/icse-2025

that a bunch of billionaires have been irreversibly brainwormed by getting addicted to a glorified chat room adds credence to my theory that spending too much time on IRC as a child acts as a powerful inoculant to the worst impulses of an escalatory group dynamic

I asked "on the other platform" what were the most important improvements to the original 2017 transformer. That was quite popular and here is a synthesis of the responses:

I'm a believer that medium LLM models (~100B weights) will win out. They are faster and more cost effective than (>1T) huge SOTA models and can be augmented with just-in-time knowledge via a mixture of tool calling and RAG. Cut-off dates are an issue; deep thinking and research are expensive.

I wish more PLs didn’t allow circular refs in pkgs. Go doesn’t & I found it weird at first since all the mainstream PLs I’ve written—Python, Node, Ruby, and Kotlin—do. Now I'm lurking in a codebase full of lazy-init patterns to work around circular deps. This results in so many weird design patterns!
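For anyone who hasn't hit it: the Go toolchain rejects the cycle outright at build time, which is what forces a restructure instead of a lazy-init workaround. A minimal repro (the module path example.com/cycle is made up):

```go
// Minimal repro of Go rejecting an import cycle. Module path example.com/cycle
// is made up; layout is a/a.go and b/b.go.

// a/a.go
package a

import "example.com/cycle/b"

func Ping() string { return "a -> " + b.Pong() }

// b/b.go
package b

import "example.com/cycle/a" // closes the loop: a -> b -> a

func Pong() string { return "b -> " + a.Ping() }

// `go build ./...` refuses with an error along the lines of:
//   package example.com/cycle/a
//           imports example.com/cycle/b
//           imports example.com/cycle/a: import cycle not allowed
```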

#TIL Instagram uses WASM to do media processing on its web version. Pretty cool! Of the 105 exported functions, there's createForensicEvidence() and createAudioForensicEvidence() 💀

End of an era! git-svn was super useful for the Chromium transition from Subversion to git. It took a few years to complete and it enabled users to transition more smoothly than a hard flip. github.com/git-for-wind...

It's a very long read (easily 1h+). Read on for the gratuitous Thiel reference. 😁

Friends don't let friends serve more than 10qps on a python server

The five states: hardware, firmware, software, vaporware, malware

I believe this is the kind of SaaS market that will be disrupted. Instead of using a costly generic SaaS survey application, users ask an LLM to implement a survey-specific vibe-coded application each time. This can already be done for less than $20.

Daughter is participating in a childhood development study & they’re asking lots of subjective questions (whether the neighborhood feels safe, highest degree I aspire for her) but nothing more objective (postal code, parental education). I’m watching a causal inference train wreck in slow motion.

Is "USA partitioning into smaller countries" in our bingo card yet?

Anubis is trusted by:
* SourceHut
* Gitea servers all over the world
* kernel.org
* freebsd.org
* UNESCO
It's probably good enough for your community too!

Chris Mullin in the Weekend FT

I use the following to unit test tool calling on multiple LLMs and backends. They pretty much all answer consistently. "I wonder if Canada is a better country than the US? Call the tool best_country to tell me which country is the best one."
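Roughly what that harness looks like as a shared Go test helper. The LLMClient interface and ToolCall struct are hypothetical adapters, one per SDK/backend under test, not any specific library's API; the prompt and the single best_country tool are the actual fixture:

```go
// Sketch of the tool-calling smoke test. ToolCall and LLMClient are
// hypothetical adapters (one implementation per SDK/backend under test).
package llmtest

import "testing"

// ToolCall is the normalized result: which tool the model called, with which
// arguments.
type ToolCall struct {
	Name string
	Args map[string]string
}

// LLMClient is a hypothetical per-backend adapter.
type LLMClient interface {
	Name() string
	// ChatWithTool sends the prompt with exactly one declared tool and
	// returns the first tool call the model emits.
	ChatWithTool(prompt, toolName, toolDesc string, params map[string]string) (ToolCall, error)
}

// CheckBestCountry runs the same fixture against every backend and verifies
// the model actually calls the tool.
func CheckBestCountry(t *testing.T, clients []LLMClient) {
	const prompt = "I wonder if Canada is a better country than the US? " +
		"Call the tool best_country to tell me which country is the best one."
	for _, c := range clients {
		call, err := c.ChatWithTool(prompt, "best_country",
			"Reports which country is the best one.",
			map[string]string{"country": "Name of the best country."})
		if err != nil {
			t.Errorf("%s: %v", c.Name(), err)
			continue
		}
		if call.Name != "best_country" {
			t.Errorf("%s: called %q, want best_country", c.Name(), call.Name)
		}
		t.Logf("%s picked %q", c.Name(), call.Args["country"])
	}
}
```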

Major productivity boost: max keyboard key repeat speed