hollandseuil.bsky.social
Software Engineer at JetBrains. Working on Amper. Born in Siberia, living in Amsterdam.
101 posts · 31 followers · 25 following
Regular Contributor
Active Commenter
comment in response to post
Also, it's worth noting that there are no purely functional languages as fast as you probably imagine, not because of some magical limitation about mirroring the CPU (modern imperative languages stopped mirroring it long ago), but because it's quite hard to write code in FP languages.
comment in response to post
Profile-based optimization can only make local optimizations in very restricted use cases and needs aggressive invalidation, but in most cases it works solidly and skyrockets your performance. And this is where F# beats your Rust or whatever else you consider fast.
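To make the idea above concrete, here is a toy sketch of profile-based speculation: a call site records the type it has observed, takes a guarded fast path for it, and invalidates the speculation when the guard fails. This is purely illustrative Python, not how F#/.NET actually implements it; the class and its fields are made up for the example.

```python
# Toy sketch of profile-guided speculation with invalidation.
# Real VMs do this in generated machine code; this is only an analogy.

class SpeculativeAdd:
    def __init__(self):
        self.seen_type = None      # the "profile" gathered at runtime
        self.invalidations = 0

    def __call__(self, a, b):
        if self.seen_type is None:
            self.seen_type = type(a)           # warm-up: record a profile
        if type(a) is self.seen_type is type(b):
            return a + b                        # fast path under the guard
        self.invalidations += 1                 # guard failed: deoptimize
        self.seen_type = None
        return a + b                            # generic slow path

add = SpeculativeAdd()
print(add(1, 2))        # specializes for int
print(add(3, 4))        # stays on the fast path
print(add("a", "b"))    # guard fails, speculation is invalidated
```

The guard-plus-fallback shape is the key point: the optimization is only valid while the observed profile keeps holding.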
comment in response to post
If you write anything in "performant" languages, you'll probably end up with worse performance than an undergraduate would get writing the same code in Java. You're badly underestimating all the science that lies beneath VMs. A JIT has the benefit of having execution traces at runtime.
comment in response to post
Look at the BEAM VM, btw: Erlang is faster than Python, for instance. Or take a look at F#: it has speed comparable to C# (they run on the same VM and translate to approximately similar instructions). The problem with FP is not performance and never has been.
comment in response to post
I just can't buy that: it's an abstraction layer, it shouldn't have to follow how the CPU works; we can translate the code however we want, there's no problem with that. We could also write a whole runtime that knows how to execute functional code in the most effective way.
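One concrete thing such a runtime could do: because a pure function's result depends only on its arguments, the runtime is free to cache any call and reuse the result. A minimal Python sketch, with `functools.lru_cache` standing in for that hypothetical FP-aware runtime (the `fib` example and the `calls` counter are made up for illustration):

```python
from functools import lru_cache

# Referential transparency lets a runtime memoize calls safely.
# lru_cache is a stand-in for that hypothetical runtime behaviour.

calls = 0

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    global calls
    calls += 1                 # count actual evaluations
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))   # 832040
print(calls)     # 31 evaluations instead of millions of naive calls
```

An imperative runtime can't assume this in general, because a call might have side effects.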
comment in response to post
The closest programming language that does that is Haskell. It's not completely C-like, but it is statically typed, and very concise and expressive. This is exactly where FP shines. Also, if transactional memory is introduced, you can semi-automatically parallelize some small things.
comment in response to post
You have to make sure the functions you're calling are side-effect free, and then the compiler can actually infer the execution graph itself. However, in real life: 1. it's a very hard problem to verify that a function is actually pure; 2. parallelization isn't free, and used wrongly it can slow things down.
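Both halves of that trade-off can be sketched in a few lines of Python, with `concurrent.futures` standing in for compiler-driven parallelization (the `square` function and the data sizes are made up; a real thread pool also won't speed up CPU-bound Python because of the GIL, which is itself an instance of "parallelization isn't free"):

```python
from concurrent.futures import ThreadPoolExecutor

# If `square` is known to be pure, a runtime could fan calls out to a
# pool automatically. Purity guarantees the results match the sequential
# run, but pool scheduling adds overhead, so for tiny work items the
# sequential version can easily win.

def square(x: int) -> int:    # side-effect free, so evaluation order doesn't matter
    return x * x

data = list(range(10))
sequential = [square(x) for x in data]

with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(square, data))

assert sequential == parallel   # purity makes this a safe transformation
print(parallel)
```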
comment in response to post
Europapa, europapa!!! Welcome to Europe. Stay here until I die. [Dutch in the original]
comment in response to post
But in the end you get your coding AI agent with the ability to choose whatever model you want directly, without these AI-wrapper companies. Additional benefit: you control the cost; you can upload the whole project into the context, or use caching or whatever technique. Use it!
comment in response to post
All models support OpenAI compatibility. Besides, you can use models through OpenRouter or other SaaS providers that also offer OpenAI compatibility. Yes, there will be limitations: no thinking process available, no animation of letters during the dialog, just calling MCP tools and an answer at the end ⬇️
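The "compatibility" here is just HTTP + JSON: switching providers means switching the base URL and model name while the request shape stays the same. A stdlib-only sketch of building such a request (the endpoint path and payload follow the OpenAI chat-completions shape; the base URLs, model names, and API keys below are placeholders, check your provider's docs):

```python
import json
from urllib.request import Request

# Build (not send) an OpenAI-compatible chat-completions request.
# Only the base URL and model change between providers.

def chat_request(base_url: str, api_key: str, model: str, messages: list) -> Request:
    payload = {"model": model, "messages": messages}
    return Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",   # placeholder key goes here
            "Content-Type": "application/json",
        },
        method="POST",
    )

msgs = [{"role": "user", "content": "Hello!"}]
openai_req = chat_request("https://api.openai.com/v1", "KEY", "gpt-4o", msgs)
router_req = chat_request("https://openrouter.ai/api/v1", "KEY", "some/model", msgs)
print(openai_req.full_url)
print(router_req.full_url)
```

Anything that speaks this shape can be dropped behind an agent without a wrapper company in the middle.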
comment in response to post
Is it a home office?
comment in response to post
Kotlin
comment in response to post
Take your IDE away from you, that’s it
comment in response to post
But considering that the same Sonnet solves the problem using the minimum context possible, I think Sonnet is still winning the programming battle. Everything here is based on my personal experiments, so I may be a bit subjective.
comment in response to post
The Notion subscription is totally worth the price. However, I really don’t wanna pay for collaborative groceries. Even though Apple Notes really failed and screwed things up for us a couple of times, we’ve decided to give it one more chance. And meanwhile it could be a good case for dogfooding a KMP app.
comment in response to post
What could be better than one cat? Yeah, you’re right: two.
comment in response to post
Northern countries are too damn expensive
comment in response to post
The mobile app is in fact solid; it’s a specific plugin that works awfully on iOS. My setup is not very typical, otherwise I’d just be using git.
comment in response to post
Yeah, that’s cheaper than Notion
comment in response to post
Yeah, I’m going to experiment with it. It turned out that on my iOS device, due to sandboxing, it’s just impossible to open a vault in an arbitrary place, which makes it useless for sharing and collaborating. But I’ve found that it’s possible to use Couchbase as a vault, though. There are also other options.
comment in response to post
You don’t even need to run it on your computer: you can clone and analyze the PR using remote development, so no more top-spec 64GB M4 Max MBP.
comment in response to post
This guy told us that 640KB of memory would be enough for everything. Why listen to him?
comment in response to post
What tooling support is already implemented? Like LSP? IntelliJ support? Build system?
comment in response to post
I can use the new Llama provided by Groq through the OpenAI compatibility layer, switch models, and experiment even faster. Because how much time does it take for a company with a lot of inertia to build an integration? And you can just vibe-engineer that in 30 minutes and two prompts. We live in a great time!