royvanrijn.com
Director of OpenValue Rotterdam, Java Champion (@Java_Champions), founder of the Rotterdam JUG, public speaking on Java, AI, math, and algorithms
2,047 posts 1,144 followers 164 following
Regular Contributor
Active Commenter
comment in response to post
If that’s how you treat (former) allies, it’ll be a lonely future; the US is no longer part of “the West” and could become very poor and isolated.
comment in response to post
Time to start shopping on this list? european-alternatives.eu/alternatives...
comment in response to post
* de-US…. America is bigger and better than just the US.
comment in response to post
I’ve done Muay Thai for a year, my friends called me Bruise Lee.
comment in response to post
🤦🏻‍♂️
comment in response to post
Two days of dinner and two packs of grapes 👌🏼
comment in response to post
Thanks 🙏🏻
comment in response to post
Red and a little bit of white… right?
comment in response to post
Turns out the cause was an EMI (electromagnetic interference) spike from the chair’s gas cylinder… 🤯
comment in response to post
Every time there is a terrorist attack or something, we expect all Muslims to distance themselves and condemn this behavior. We should now do the same: this is wrong. “The West” is dead; it’s not Europe, Australia and the US. They voted for this and it’s turning out exactly as most of us feared.
comment in response to post
It takes your project information and outputs an initial prompt you can paste into the chatbot of your choice to start coding. It works well enough for me as it is, but it's far from a polished tool... feel free to give it a spin and improve on the idea: github.com/royvanrijn/p...
comment in response to post
Things like:
- Java version
- Maven dependencies
- The README (for context)
- A couple of reference files showing what I want the code to look like

To avoid having to write this initial prompt by hand every time, for each project, I've created a very simple Maven plugin called promptsmith.
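A minimal sketch of the idea, assuming a plain Java main class rather than promptsmith's actual Maven mojo; the class name, file layout, and prompt wording are all hypothetical:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical illustration: collect a few project facts and print an
// initial prompt to paste into a chatbot. Not promptsmith's actual code.
public class InitialPromptSketch {

    public static void main(String[] args) throws IOException {
        Path projectDir = Path.of(args.length > 0 ? args[0] : ".");
        StringBuilder prompt = new StringBuilder("You are helping me write code for this project.\n\n");

        // Java version the project runs on.
        prompt.append("Java version: ").append(Runtime.version()).append("\n\n");

        // The Maven build file, so the model knows the dependencies.
        Path pom = projectDir.resolve("pom.xml");
        if (Files.exists(pom)) {
            prompt.append("pom.xml:\n").append(Files.readString(pom)).append("\n\n");
        }

        // The README, for general context.
        Path readme = projectDir.resolve("README.md");
        if (Files.exists(readme)) {
            prompt.append("README:\n").append(Files.readString(readme)).append("\n\n");
        }

        prompt.append("Follow the style of the reference files I will paste next.\n");
        System.out.println(prompt);
    }
}
```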
comment in response to post
The French use it as well? It looked really German/Austrian to me haha, look at the coat of arms of Tyrol:
comment in response to post
So my best guess would be Tyrol, near the German border somewhere
comment in response to post
That coat of arms looks very Austrian...
comment in response to post
A regular HashMap is very fast if you just get a simple entry. This theoretically optimal method has some tiny bookkeeping steps, jumping around, that add up. I recommend the YouTube recording for the general idea: www.youtube.com/watch?v=ArQN...
comment in response to post
The funnel probe at the moment is slower btw.
comment in response to post
Just realized the 20% gain was without the funnel probing approach; it was just the initial regular linear probing. I've now updated the code with the experimental funnelProbe, but it's not optimized at all yet.
comment in response to post
More information about the discovery here: "Undergraduate Upends a 40-Year-Old Data Science Conjecture" www.quantamagazine.org/undergraduat...
comment in response to post
They demonstrate it's possible to construct an open-addressed hash table with improved expected search complexities, both amortized and worst-case, without the need to reorder elements over time. My (crude) Java implementation does show some encouraging results: github.com/royvanrijn/o...
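For a concrete picture, here is a minimal open-addressed hash table with plain linear probing, the baseline the experimental funnelProbe mentioned above is measured against. This is an illustrative sketch only, not the code from the linked repository, and the class and method names are hypothetical.

```java
// Minimal sketch of an open-addressed hash table with plain linear probing,
// the baseline the experimental funnelProbe is compared against.
// Illustration only; not the code from the linked repository.
public class LinearProbingMap<K, V> {

    private Object[] keys;
    private Object[] values;
    private int size;

    public LinearProbingMap(int capacity) {
        keys = new Object[capacity];
        values = new Object[capacity];
    }

    // Map a key to its preferred slot in the table.
    private int slot(Object key) {
        return (key.hashCode() & 0x7fffffff) % keys.length;
    }

    public void put(K key, V value) {
        if (size >= keys.length / 2) resize();      // keep the load factor below 0.5
        int i = slot(key);
        while (keys[i] != null && !keys[i].equals(key)) {
            i = (i + 1) % keys.length;              // linear probe: try the next slot
        }
        if (keys[i] == null) size++;
        keys[i] = key;
        values[i] = value;
    }

    @SuppressWarnings("unchecked")
    public V get(K key) {
        int i = slot(key);
        while (keys[i] != null) {                   // probe until an empty slot
            if (keys[i].equals(key)) return (V) values[i];
            i = (i + 1) % keys.length;
        }
        return null;                                // not present
    }

    @SuppressWarnings("unchecked")
    private void resize() {
        Object[] oldKeys = keys;
        Object[] oldValues = values;
        keys = new Object[oldKeys.length * 2];
        values = new Object[oldKeys.length * 2];
        size = 0;
        for (int i = 0; i < oldKeys.length; i++) {
            if (oldKeys[i] != null) put((K) oldKeys[i], (V) oldValues[i]);
        }
    }
}
```

The funnel-probing idea replaces this single linear probe sequence with a more structured one, which is how the paper achieves the improved expected search costs described above without ever reordering elements after insertion.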
comment in response to post
It's happening haha, self-improvement, kinda.
comment in response to post
We’re pretty close to being able to generate much more and much better synthetic data; these models can already check and self-critique… we just need more compute. And of course architecture improvements will only accelerate this 🤯
comment in response to post
That’s V3 though, the base model; not R1, the reasoning model. Really curious what other teams can do with these new steps. I love that they now do RL-only first (for R1-Zero) instead of RLHF, just like AlphaZero dropped AlphaGo’s human data and got better results.
comment in response to post
Absolutely, DeepSeek is mostly doing what Microsoft already proved to be effective with their Orca series. That doesn't bring you a new frontier model, but it does show the incredible power of synthetic data. It's also why I'm a firm believer that the whole "we're running out of training data" claim is bs.
comment in response to post
Imagine you need to answer a complex question and you *need* to say the first words that come to mind... no matter what. That probably won't lead to a coherent and correct answer either.
comment in response to post
It's writing down all the things it knows before one-shotting an answer, which is very clever. It has more context and can catch mistakes early (and is allowed to make them) instead of just hallucinating some wrong answer.
comment in response to post
I believe it's doing mostly the same as Microsoft did with Orca: www.infoq.com/news/2023/12... A much more advanced model is used to create synthetic (labelled) data that explains the reasoning, and the new model learns from that. Bootstrapping the knowledge and making it _seem_ very smart.
comment in response to post
True, it's clearly been trained on more advanced models, and mimicking gets you pretty far. The reasoning steps work, but big prompts with enough clear information do the same. I usually gather information using 4o and, based on that, I let o1 come up with a good answer... that works even better.
comment in response to post
They had a crypto startup three years ago and were NFT/blockchain experts a year ago; they’ll probably find something else pretty soon.
comment in response to post
But doing it twice definitely isn’t.
comment in response to post
Once is a coincidence…
comment in response to post
Put your $VOTE coin on-chain to cast your vote for the 2028 General Election. Not in the registered voter airdrop? Visit your local DOGE admin. 💲 When we mint a new president at 24:00 you’ll receive a limited edition portrait NFT! 🖼️ 🇺🇸
comment in response to post
Wait a day, launch a second shitcoin named after your wife, signaling it’s actually all worthless, and crash both 🥲 So presidential.