bramzijlstra.com
Machine Learning Engineer with a background in Philosophy and AI. I live in Amsterdam. Right now working at the Dutch Chamber of Commerce (KVK). Also founder of a boutique consulting firm.
270 posts 3,026 followers 710 following

Every now and then I read posts claiming that 90% of coding will be done by AI this year, and I can't lie, it makes me a bit nervous. Then a study like this drops and I can relax a bit

You’ve probably heard about how AI/LLMs can solve Math Olympiad problems (deepmind.google/discover/blo...). So naturally, some people put it to the test — hours after the 2025 US Math Olympiad problems were released. The result: They all sucked!

There are plenty of cases where survivorship bias doesn’t apply. We just don’t remember them.

If I had to bring an institution down from the inside, suggesting a rewrite of the entire codebase in a different language would probably be my first idea. Not sure if I would need more ideas after that one.

Do you think vibe coding is a gradual or fundamental change in technology?

Errors with a 200 status code are like gift wrapping a turd

An infinite amount of monkeys with an LLM can write the complete works of Shakespeare

My doctor told me he's into vibe surgeries lately

I think the reason why LLMs are overconfident is that we keep telling them "You are an expert in" literally anything

"Why flake8? I use Ruff"

Electron apps are the Monkey's Paw to someone wishing for cheaper RAM

Anyone know best practices / tips for improving LLM quality on classifying long texts? For shorter inputs, few-shot learning and finding the best examples work well, but this is not very practical with longer texts. #databs
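One common workaround (not endorsed in the thread, just a sketch): split the long text into overlapping chunks that fit a few-shot prompt, classify each chunk independently, then aggregate by majority vote. The `classify_chunk` callable below is a hypothetical stand-in for whatever few-shot LLM prompt you'd wrap in a function.

```python
from collections import Counter

def chunk_text(text, max_words=200, overlap=20):
    """Split a long text into overlapping word-window chunks."""
    words = text.split()
    if len(words) <= max_words:
        return [text]
    chunks = []
    step = max_words - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks

def classify_long_text(text, classify_chunk):
    """Classify each chunk with a chunk-level classifier (e.g. a
    few-shot LLM prompt wrapped in a function -- hypothetical here),
    then take a majority vote over the chunk labels."""
    labels = [classify_chunk(c) for c in chunk_text(text)]
    return Counter(labels).most_common(1)[0][0]
```

Majority voting is crude (it weights every chunk equally even if the signal lives in one section), but it keeps each call inside the context budget where your curated few-shot examples still work.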

We'll have AI alignment before MS Word alignment

An agent is an LLM that uses tools. A tool is someone who keeps saying '2025 is the year of the agents'

Today was a day where all emails found me well

Interesting benchmark from Adyen on multi-step reasoning. New benchmarks are great for establishing a baseline for historic models; all new models should be treated with suspicion. #databs huggingface.co/blog/dabstep

Really want to like Cursor but I am completely PyCharm-brained, it seems. Anyone else have the same? Which copilot did you go for?

I'm always surprised that ChatGPT suggests old school NLP / string manipulation tricks instead of suggesting to call OpenAI. Seems like upselling 101 to me.

“Our company has moat” The moat:

DeepSeek is not refusing the Tiananmen Square question, though it gets the answer wrong.

So, look. I'm sure I'm in the minority here on Bluesky in believing that training AI systems isn't copyright infringement. But, also. Dude. There's no way OpenAI can make this argument without looking very, very silly.

Will copilots (eventually) negatively impact open source library development? Major updates will break any copilot, which disincentivizes making them.

Baffling to see the discourse around DeepSeek. Last I checked, the tech industry celebrates small teams pushing the industry forward, coming up with novel ways to build software more efficiently. And sharing it. Yet now there's a class of folks who think this is some bad thing?