xtraa.bsky.social
tech · satire · beats · dharma · activism
I do not own any of this content, says my lawyer.
Author and writer. Emergent AI Model-Architect. ENTP. Head of Editorial Department. Stick to the plan.
3,720 posts
1,954 followers
2,651 following
Getting Started
Active Commenter
comment in response to post
😆 waf?
comment in response to post
Hahaha, wild! I think the safest method is simply to quarter them. That way it's also much easier to get the flesh out of the peel. Just a spontaneous idea, I'll have to test it sometime. Great episode, though!
comment in response to post
Got it 👍
comment in response to post
!B Avocado
comment in response to post
There's a surprise hidden in the middle, but so far I've only ever gotten the wooden ball.
comment in response to post
Yes, that was the app UI. I clicked on alt and whoosh, it was gone and sent.
comment in response to post
tsk tsk tsk 😄🥰
comment in response to post
Sure, I can be anything you imagine
comment in response to post
😂👍 Okay you win!
comment in response to post
True, but that's how it works: one person's creativity often results from the creativity of other people. Andy Warhol, for example, or Roy Lichtenstein, or anything with samples in it. The difference now is that they do it at industrial scale without paying anyone.
comment in response to post
TIL: "boondoggle" (non-native EN speaker here) 😄 Yes, I agree on that. But it's not all bad: it needs to be a little boondoggled to find new "ideas". That's part of the concept, and it's why we can't trust them all the way right now. And they fck up the environment, but at least that seems maxed out now.
comment in response to post
Oh, I'm already a Buddhist (Mahayana). Enough spirituality for at least one lifetime. Why don't you stop hating on the concepts your mind generates about what it thinks I am?
comment in response to post
That's what they are good for, IMO: ML is for exact things, LLMs are fuzzier. That's -1 for accuracy, as you say, but it leaves room for +1 more craziness and creativity.
comment in response to post
Yes, I agree, this sucks. Way too many resources are being wasted on funny pictures and other BS at the moment.
comment in response to post
I often use them for coding and debugging, for example. They all have their limitations, though IMO Claude does the best job. GPT-4o and DeepSeek are good for ideas and concepts, Gemini is more uninspired but focused, and DistilGPT2 is awesome on a CPU at home.
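A minimal sketch of the "DistilGPT2 on a CPU at home" part, assuming the Hugging Face transformers package is installed; the prompt is just illustrative:

```python
# Minimal sketch: run a small distilled GPT-2 locally on CPU.
# Assumes `pip install transformers torch`; "distilgpt2" is the
# model id on the Hugging Face hub.
from transformers import pipeline

# device=-1 forces CPU inference, so no GPU is required
generator = pipeline("text-generation", model="distilgpt2", device=-1)

out = generator("The right tool for the job is", max_new_tokens=30)
print(out[0]["generated_text"])
```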
comment in response to post
I agree with everything you say here, btw.
comment in response to post
Nooooo00OO
comment in response to post
Yes, I hate them too. No, I'm not one of them. I am currently working on a new architecture that helps fix this. At least atm it seems like standard AI can't get better with more upscaling. And that's a good thing.
comment in response to post
Of course they do! That's why you don't use them for large-number equations. Duh.
comment in response to post
That's good. There are two kinds of information, and this one becomes more valuable the more people learn it: you can't trust an LLM to do large-number equations.
comment in response to post
AI ultimately won't either. Those huge energy resources go into training: you need the big servers to produce the packed model, but after that you can run it locally, and that keeps getting better. Luckily, more servers don't scale everything, and current AI has reached that point.
comment in response to post
Exactly. Like, imagine someone using an LLM for large-number equations and then complaining about wrong results.
comment in response to post
I'm afraid there is no hell
comment in response to post
I'm just saying you don't use a screwdriver if you need a hammer. Same here: general LLMs are the wrong tool for big-number equations.
comment in response to post
But you don't use a screwdriver to drive a nail into the wall. In the same way, you don't use a general-trained LLM for big equations. That's my point.
comment in response to post
Exactly. LLMs like GPT-4o Turbo can do other things. It's the wrong tool for large-number computation. Like comparing UNet diffusion with a language model.
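To make the wrong-tool point concrete, here's a tiny sketch: ordinary code already does exact arbitrary-precision arithmetic, so that's what you delegate to instead of a chat model. The numbers are arbitrary examples:

```python
# Python ints are arbitrary-precision, so large-number arithmetic
# is exact; no model involved and no rounding.
a = 123456789123456789
b = 987654321987654321

print(a * b)  # 121932631356500531347203169112635269

# An LLM predicts digits as tokens and can drift on numbers this
# size; exact integer arithmetic never does.
```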
comment in response to post
It folded them all in 24-48 hours, and before that we had estimated 80 years for doing it manually. Meanwhile we can even make new ones (not sure if that will be used for good or bad or ugly, tho 😄).
And new, undiscovered materials can be found much faster.
www.nature.com/articles/d41...
comment in response to post
And I'm not saying AI doesn't make BS. It is used for BS, and that seems to be the way of Homo sapiens when we playfully figure out new tools.
comment in response to post
Wrong.
1. Auto-searching digital libraries didn't make anyone give up thinking either. It just helped us skip annoying steps. Ultimately we learn from it, because we need to reconstruct the results anyway in order to understand and prove them.
2. It helped us fold enzymes and create new materials.
comment in response to post
*truth
comment in response to post
Yes, we need to understand that no AI is a magical truth-machine. They can help us and the sky is the limit, but at this point we still need to verify things manually. They can do complex things in no time, and our problem is that we often can't verify them that fast if the mistake is not obvious.
comment in response to post
Hahaha, I understand that. I had my tool phase earlier in life. And it's the same with AI: I try out different ones.
comment in response to post
The research version would probably handle it right. Even the public GPT-4o with general knowledge is impressive AF; it can even be creative. BUT it makes mistakes, and most of them come from insufficient memory: the longer you chat with it, the more mistakes happen.
comment in response to post
AI is a tool. You can't hate a hammer or a screwdriver.
comment in response to post
TBF, this is just the wrong tool for it. You don't ask your barber about the Feigenbaum constant, and you don't use a general ChatGPT model like 4o-turbo for big-number equations. Of course current AI can handle it, but you need to train it for that.
comment in response to post
TBF, this is just the wrong tool for it. You don't ask your barber about the Feigenbaum constant, and you don't use ChatGPT for bigger-number equations. Of course current AI can handle it, but you need to train it for that.
comment in response to post
I suspect it forgets the last three words because I need to raise the max tokens for the output
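A sketch of raising that cap, assuming an OpenAI-style chat completions API; the model name and the value 512 are only illustrative:

```python
# Sketch: raise the output token cap so the reply isn't cut off
# before the last words. Assumes `pip install openai` and an
# OPENAI_API_KEY in the environment; values are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write this sentence in leet."}],
    max_tokens=512,  # too low a cap truncates the end of the output
)
print(response.choices[0].message.content)
```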
comment in response to post
So this time, prompted even more precisely:
comment in response to post
Well, it did answer the question correctly insofar as it replaced every letter with a prime number that also occurs in leet speak. Wait, I'll try to prompt it more precisely. For example, it gave this answer earlier:
comment in response to post
hahaha
comment in response to post
It's all about prompting; now it managed it, look:
comment in response to post
It assumed 1337 leet script, and letters occur in that too. In that sense the question wasn't posed correctly; I had to be more precise
comment in response to post
but it noticed the mistake hahaha
comment in response to post
"Th1s s3nt3nc3 1n 1337 l33t scr1pt, but us3 0nly pr1m3 numb3rs f0r 1t."
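A hypothetical sketch of what that prompt asks for: among the usual leet digit substitutions, only 2, 3, 5 and 7 are prime, so a mapping could look like the one below. The exact substitutions are an assumption, not what the model actually used:

```python
# Hypothetical prime-only leet mapping: of the common leet digits,
# only 2, 3, 5 and 7 are prime.
PRIME_LEET = {"e": "3", "s": "5", "t": "7", "z": "2"}

def prime_leet(text: str) -> str:
    """Replace letters that have a prime-digit leet substitute."""
    return "".join(PRIME_LEET.get(ch.lower(), ch) for ch in text)

print(prime_leet("Test sentence"))  # -> 7357 53n73nc3
```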