Seriously, this "stupid autocomplete that just predicts tokens" understands and explains AppKit and Core Animation better than probably most iOS devs would… that's good enough for me ¯\_(ツ)_/¯
Comments
Nah, it's more that the version GPT wrote had the code to clean up the animation; I removed it and saw that it still worked, and I asked if that made sense and if it was safe to remove, and then I got the message in the screenshot
The awesome thing about it (sorry about the shameless AI promotion today, I'm gonna get cancelled) is how it lets me fearlessly jump into topics I'm completely unfamiliar with and quickly build/solve things with them:
- Core Animation
- Node/React Native
- Go, Rust
- some CSS/JS/TS stuff
LLMs shouldn't struggle too hard with popular topics. But you should be careful with things that aren't that well known (not much training data for the LLMs); they will end up confidently hallucinating answers.
Zig, for example: every model I've tried struggles with it and hallucinates invalid functions and syntax. Even with math topics, they almost always make mistakes and I have to correct them.
Oh yeah, I tried to make it translate that regexp matching sample to Zig and it told me to include some third party regexp library from GitHub which didn't exist 😬
Yes, I could read a book about Core Animation / the Rust manual and I would have a much deeper understanding of them and could build more serious, complex things, but it would take so much longer, and I really just want to animate a window / make a native module to optimize regex matching and move on.
I agree. For learning it is totally great. "Give me an intro to Swift. I am an experienced Python developer, so please be terse. I will ask if I don't understand a concept." But you need to have a solid background to decide whether a provided snippet or strategy is going in the right direction or is harmful.
You are right. The issue is that when you are completely unfamiliar with something, it might create problems.
I find it better when you actually are familiar, you can review the code and ask it to make corrections to fix
It's like having a jr engineer: you mentor them through the task
Yeah I suppose there are (at least) two very different modes of working with it for programming:
- I know what I need exactly, you do it for me, robot
- I have no idea what I'm doing, plz explain to me wtf is going on / how the hell do I do it
and I think both can have their uses :)
I'm really interested in what effect it's going to have going forward on people starting e.g. a programming career, given that there are kind of two opposite forces now:
- it will be much harder to get a job as a junior to learn at, because companies will have fewer positions
- but at the same time, it will be so much easier to learn things by yourself, having this free senior developer who never gets tired of explaining things to you
This is why those studies that say access to LLMs improves learning seem completely plausible to me.
Feels like we aren't going to be able to adapt to what's coming
The caveat: Setting "removedOnCompletion = false" means the animation _never_ gets destroyed. It's more than just a leak; the view accumulates a giant pile of finished-but-still-active CAAnimations, forever applying their fillMode settings, until you hit single-digit FPS or run out of memory. :(
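For reference, the hack looks roughly like this (a minimal sketch; the layer, key path, and values here are made up for illustration):

```swift
import QuartzCore

let layer = CALayer()

// The fill-forward hack: tell the animation to hold its final frame and
// never be removed, so the presentation layer keeps showing the end state.
let slide = CABasicAnimation(keyPath: "position.x")  // hypothetical key path
slide.fromValue = 0
slide.toValue = 200
slide.duration = 0.3
slide.fillMode = .forwards            // keep rendering the last frame...
slide.isRemovedOnCompletion = false   // ...because the animation never detaches
layer.add(slide, forKey: "slide")

// The model layer still believes position.x == 0, and every animation added
// this way stays attached to the layer until something removes it explicitly.
```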
FWIW, this is a depressingly common hack in iOS apps, because Core Animation is terrible and the relationship between model/presentation layers has never been very well documented.
The only thing that reliably works across old and new iOSes atm is wrapping a fill-forward animation in a CATransaction, with a completion handler that explicitly deletes the animation and writes the end state to the model layer in one shot (see the sketch below).
Don't use CAAnimationDelegate, it has a one-frame delay :(
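Roughly what I mean, as a sketch (untested; `slideOut`, the key path, and the values are placeholders, not code from the post):

```swift
import QuartzCore

// Wrap a fill-forward animation in a CATransaction whose completion block
// commits the end state to the model layer and removes the animation.
func slideOut(_ layer: CALayer) {
    let finalX: CGFloat = 200

    CATransaction.begin()
    CATransaction.setCompletionBlock {
        // Runs once the animation finishes: write the end state to the model
        // layer and detach the now-useless animation in one shot.
        CATransaction.begin()
        CATransaction.setDisableActions(true)  // no implicit animation here
        layer.position.x = finalX
        CATransaction.commit()
        layer.removeAnimation(forKey: "slide")
    }

    let slide = CABasicAnimation(keyPath: "position.x")
    slide.fromValue = layer.position.x
    slide.toValue = finalX
    slide.duration = 0.3
    slide.fillMode = .forwards            // hold the last frame until...
    slide.isRemovedOnCompletion = false   // ...the completion block cleans up
    layer.add(slide, forKey: "slide")

    CATransaction.commit()
}
```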
Yeah that's more or less what we've arrived at, though with the delegate callback, which I insisted on after G's first version ran an asyncAfter with the exact same duration as the animation to set the transform and remove the animation, which seemed too hacky…
Only at the end did I remove the callback out of curiosity and see that it still worked fine, and I asked him if that made sense, which is when I got the message above: that it technically works but might break down later if more stuff is added (and you have a good point about the performance…)
I can’t help but feel like people who unnecessarily downplay the capabilities of LLMs have an emotional block preventing them from actually seeing what they do. IMO if you’re going to be skeptical of them there are still plenty of valid issues to criticise them for, even when giving a charitable take.
Think in the end it's perfectly fine to accept they are prediction machines while still being damn good at predicting what I wanna hear when I can't for the love of God understand why my Quarkus Lifecycle doesn't cycle