dima3.bsky.social
69 posts
32 followers
65 following
Discussion Master
comment in response to
post
How do you define "effectively"? Defeating Russia entirely? Tens of thousands. Destroying multiple facilities in Russia such as Alabuga, severely crippling drone production short to mid term? A few dozen.
comment in response to
post
The antidote to Himars was moving most frontline logistics depots beyond its reach. This change alone had a massive effect on the war.
Doesn't look like there was an antidote found for Scalp missiles. The Ukrainians simply ran out. Same goes for ATACMS.
comment in response to
post
Everything you've listed has already dramatically changed the battlefield "in the long run". Russia found a near peer enemy in Ukraine, it shouldn't have been like that.
HIMARS was a magic wand - the Ukrainians waved it, and frontline logistics went boom, along with lots of artillery.
comment in response to
post
Maybe I misunderstood your point...
Israel is extremist/nationalist for sure (then again, looking at its neighbors, each of which at some point within the last century tried to wipe it out, I get it), but generally it's not a threat to others if left alone.
comment in response to
post
23% of Israel's population is Arab.
Israel has had peace with Egypt for decades (after Egypt tried to murder them, of course, and got defeated).
Jordan even helped shoot down the wave of drones and missiles from Iran some time ago.
Yes, there is peace in select parts of that region.
comment in response to
post
With Israel - I can't say I approve of what they're doing, but, looking at their neighbors... I get it. It's bad, but I get it.
The Israelis proved they can live in peace with Arabs as long as the Arabs aren't trying to murder them.
comment in response to
post
That's normal for pretty much every religion. Even Buddhists have justified foreign aggression (ask Japan in WWII, or the Buddhists in Russia now).
Of course, Buddhism can't compete with the Abrahamic religions in terms of instilling hatred towards neighbors; those are the worst.
comment in response to
post
MAGA brings people together too.
The whole purpose of religion has always been to exploit biases in human psychology in order to control them. And you will never find a person more hateful than a religious person.
comment in response to
post
You've described a religious mindset taking hold of the population.
We've had long periods of time when you'd be tortured to death for denying the "truth" of the Church, and every state was at least authoritarian.
But then all of this somehow, for the large part, went away. How did that happen?
comment in response to
post
Generally security comes from a combination of 3 factors.
- Too costly to invade (due to powerful military or hostile terrain like Switzerland)
- Part of a strong military alliance
- Peaceful neighbors with no desire to invade
How do you see security for Ukraine if it's not due to being well-armed?
comment in response to
post
Starship hasn't started launching for real yet. It's all prototypes. You can spend years meticulously calculating and simulating every tiny detail, or you can build a piece of metal, stick sensors everywhere, send it up and measure how it behaves.
You can start complaining once they lose payloads.
comment in response to
post
Look, I hate Elon as much as every other guy, but if hatred makes you think irrationally, you're becoming just like him. Don't be like that. SpaceX may be doing things in a weird manner, but it's a private company, they decide how to do it, they don't answer to taxpayers. Their way seems to work.
comment in response to
post
I imagine NASA blew up many RS-25 prototypes in testing before they were flight-ready. And? Does destroying test articles make NASA incompetent? Starship is cheap enough that losing prototypes every week is no biggie. It may be cheaper than doing it Shuttle-style, spending $50 billion.
comment in response to
post
Regarding the Shuttle - what's the point of reusing the orbiter and boosters if an expendable rocket's cost per launch, or per kg to orbit, was much lower anyway? Given that, I can't call it a competent design; it was a mess. It did fly, but at what cost?
comment in response to
post
If you trust guys like Tom Mueller, Musk did make some key design decisions regarding F9. For example, Merlin 1D doing face shutoff. www.reddit.com/r/elonmusk/c...
comment in response to
post
Space Shuttle did reusability wrong, it cost close to a billion dollars per launch.
Falcon 9 does reusability better, costs tens of millions per launch.
Starship is a massive rocket that is supposed to cost several million dollars per launch. That's a novel goal with new challenges.
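The three cost regimes above can be put into rough per-kilogram terms. All figures below are assumed ballpark numbers for illustration only (Starship's is an aspirational target, and its payload figure is a guess), not official prices:

```python
# Rough cost-per-kg comparison under assumed ballpark figures.
# Every number here is an assumption for illustration, not an official price.
vehicles = {
    # name: (assumed cost per launch in USD, assumed payload to LEO in kg)
    "Space Shuttle": (1.0e9, 27_500),
    "Falcon 9":      (50e6, 17_500),    # reusable-mode payload, assumed
    "Starship":      (10e6, 100_000),   # aspirational target, assumed
}

for name, (cost, payload) in vehicles.items():
    print(f"{name}: ~${cost / payload:,.0f} per kg to LEO")
```

Even with generous assumptions for the Shuttle, the per-kg gap is more than an order of magnitude.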
comment in response to
post
It's not about him being inevitable. It's about his base being a cult that's not just uninterested in facts or reason, but actively opposed to them. Someone whose sense of self is tied to Trump will treat any criticism of Trump as a personal attack.
comment in response to
post
He wouldn't lose a single voter over any of that. "Fake news", "AI fake" and so on. Even with irrefutable evidence, MAGA would still be convinced that he raped children for the greater good, or that "Kamala probably did worse".
comment in response to
post
It's time to let this conspiracy theory die. There's no "kompromat". There's nothing Russia could publish that would harm Trump. Talking about piss tapes is laughable.
comment in response to
post
Pete hExit
comment in response to
post
The video is probably being circulated through pro-war Russian channels, and there are probably tons of people suggesting to cut his balls off for disrespecting a "defender of the fatherland".
comment in response to
post
I just disagree that the comedian is on the same side as the "racer". Citizenship doesn't decide who you support in a war. The comedian will immediately be arrested if he directly criticizes the war, what he did instead will likely also get him into trouble, but it's a grey area.
comment in response to
post
This doesn't mean there aren't tens of millions of Russians who want Russia to murder every Ukrainian, European and American, unfortunately.
comment in response to
post
There are literally millions of Russians who are able to see the invasion as something deeply unjust and criminal, and who want Ukraine to win, and kill as many of the invaders as possible. Who feel no sympathy towards their own thugs murdering people on foreign land.
I'm one of those Russians.
comment in response to
post
I would argue that anything in a human's thought process can be traced back to some part of the brain's "training dataset", which is everything we ever sense with any of our organs. So a lot of the time, it's of course unfeasible to identify that influence precisely.
comment in response to
post
An LLM is essentially a perfect example of emergent behavior. Ant colonies are the other classic example. One ant is super dumb, but the colony as a whole is incredible on so many levels.
comment in response to
post
I know how it works, and I'm not saying you're wrong, but it's a case of not seeing the forest for the trees. It's similar to how explaining how a small group of neurons works won't help you understand the human thought process. Andrej doesn't talk about math; he zooms out and shows the overall process.
comment in response to
post
But do they seem to have a thought process? In-universe. How can you tell if they do/don't?
Understanding the basic principles they would operate on isn't enough. A dumb neuron exchanging chemical signals with thousands of neighbors doesn't sound like something that can produce intelligence.
comment in response to
post
I don't know who 3blue1brown is, looks like some educator. I'd really recommend watching Andrej Karpathy (again, one of the founders of OpenAI) for a decent understanding of how LLMs function. He explains it in a very digestible manner.
comment in response to
post
Same way a human brain works on the same principles as a moth's; it's just more complex.
But saying this isn't particularly useful. The principle's the same, the result is vastly different.
That's why I have a problem with calling an LLM "glorified autocomplete", which it technically is.
comment in response to
post
Regarding Star Wars - look at it from an in-galaxy POV. Droids were made to be useful to humans. Apparently, it's as cheap to add human-equivalent intelligence to them as it is to put an Arduino into one of our robots. So would you say the droids are able to think?
comment in response to
post
Maybe, there's a "human consciousness", which spawns "thinking" processes all the time when it needs something analyzed, and an LLM is a crude approximation of that "thinking" process without the overarching consciousness?
comment in response to
post
There are tons of scientists working on trying to understand how LLMs "think", and how they come up with specific answers. It's an extremely complicated area.
comment in response to
post
How do you know there's not a process remotely analogous to a human one behind the LLM answering your question? It does seem to understand what you want from it remarkably well, doesn't it? The answers are so good that it fails the Turing test by being more accurate and helpful than a human.
comment in response to
post
With our current technology, we cannot scan all interconnections between all neurons in a real brain. We also don't have the compute performance to create a base, untrained virtual brain and let it learn on its own like a child's would. LLMs are only remotely similar from a very high-level POV.
comment in response to
post
The problem with your question is - how do you define "think"? Does a dog think? Does an ant think? Does an alien or a droid from Star Wars think?
You can probably faithfully replicate a human mind by virtualizing the neurons. They're doing it with much smaller brains already.
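"Virtualizing neurons", at its very simplest, looks like the classic leaky integrate-and-fire model. This is a minimal toy sketch with arbitrary illustration parameters, not what actual brain-emulation projects run:

```python
# Minimal leaky integrate-and-fire neuron: a crude "virtual neuron".
# Parameters are arbitrary illustration values, not biological constants.

def simulate(input_current, threshold=1.0, leak=0.9, steps=20):
    """Leak some charge, integrate input each step, spike at threshold."""
    v = 0.0
    spikes = []
    for t in range(steps):
        v = v * leak + input_current   # leak, then integrate the input
        if v >= threshold:
            spikes.append(t)           # emit a spike...
            v = 0.0                    # ...and reset the membrane potential
    return spikes

print(simulate(0.3))  # with weak constant drive, the neuron spikes periodically
```

One such unit is trivially dumb; the "much smaller brains" work wires hundreds of these together with mapped connectivity.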
comment in response to
post
We can probably make an AI that will be less wrong on average than a group of extremely knowledgeable SMEs from different areas. But it will still be a black box with a magic answer. And it will still help us atrophy our own brains.
comment in response to
post
That's where religion comes in; indeed, to many people a confidently incorrect answer is preferable to admitting ignorance. But if religions spewed bullshit people could easily verify, they wouldn't have hung on for so long.
comment in response to
post
My point is that you can't explain what a "thought process" is in a way that allows you to tell the difference between the "illusion" and the "real thing".
Do droids in Star Wars have a thought process, or an illusion of one? R2-D2 surely is indistinguishable from a sentient being.
comment in response to
post
I don't think there's anyone out there who welcomes the AI confidently saying random bullshit as the answer instead of "I don't know". If you check with state of the art modern LLMs, they phrase "I don't know" in a way that's still somewhat useful. "Did you mean ...?" for example.
comment in response to
post
It's clear that there absolutely is some degree of thought process going on in there. Of course, nothing to do with sentience, don't get me wrong. For now, at least. I wouldn't want to predict what happens in 5 years.
comment in response to
post
Can you define "thought process" in a way that doesn't rely on "it's something humans do"? So that we could check if AIs can replicate it close enough to call them roughly equivalent.
comment in response to
post
Within the billions of parameters, there are some that light up when the AI isn't certain, and this dataset teaches it to react accordingly in general. It starts saying "I don't know" even to completely different questions. It just accepts that saying "I don't know" is better than making shit up.
comment in response to
post
Just an example. The first LLMs were extremely prone to hallucinations, current ones - much less so. One of the ways to achieve that is to teach it to say "I don't know". For that, they train it on sets where the question is something not covered by the training, and the answer is "I don't know".
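The training trick described above can be sketched as assembling a supervised fine-tuning set. Everything here (the `known_facts` store, the field names, the toy questions) is a hypothetical illustration of the idea, not any lab's actual pipeline:

```python
# Toy sketch: build fine-tuning examples that teach a model to say
# "I don't know" for questions outside its knowledge, instead of hallucinating.
# The knowledge store and field names are hypothetical illustrations.

known_facts = {
    "capital of France": "Paris",
    "speed of light in m/s": "299792458",
}

def make_example(question: str) -> dict:
    """Return a (prompt, target) pair for supervised fine-tuning."""
    if question in known_facts:
        target = known_facts[question]   # covered by training: answer normally
    else:
        target = "I don't know."         # not covered: train the refusal
    return {"prompt": question, "target": target}

dataset = [make_example(q) for q in [
    "capital of France",
    "capital of the third moon of Kepler-42b",  # unanswerable on purpose
]]
for ex in dataset:
    print(ex)
```

The point of the second example is exactly what the comment describes: the target answer for an uncovered question is the refusal itself.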
comment in response to
post
Why would you be comparing parameters to neurons? A parameter is closer in meaning to a synapse, and a brain has hundreds of trillions of those.
You can call LLMs glorified autocomplete as much as you want, but they are able to understand instructions remarkably well for an autocomplete.
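For a sense of the scale gap, a back-of-the-envelope comparison using often-quoted order-of-magnitude estimates (both numbers are assumptions, not measurements):

```python
# Back-of-the-envelope scale comparison: LLM parameters vs. brain synapses.
# Both counts are assumed order-of-magnitude estimates, not measurements.
human_synapses = 1e14   # often-quoted estimate: ~100 trillion synapses
llm_parameters = 1e12   # assumed rough scale of a large frontier model

ratio = human_synapses / llm_parameters
print(f"The brain has roughly {ratio:.0f}x more synapses "
      f"than the model has parameters")
```

Under these assumptions the brain is still about two orders of magnitude ahead, which is why the neuron comparison undersells the gap.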
comment in response to
post
What makes you think the human mind isn't an illusion of thought process?
I would recommend watching youtu.be/7xTGNNLPyMI?... (about 15 minutes from this point) for an overview of why they make reasoning models, how they work, and what the benefits are.
The author is one of the founders of OpenAI.
comment in response to
post
It's probably within our capabilities to build an AI that would take on training by itself, and would be able to detect what's bullshit in the dataset it's training on. An AI that could have fewer misconceptions than any human, operating on a scientific approach to learning. And that's scary.
comment in response to
post
Your mind is just a bunch of neurons firing chemical signals to each other, it's the same in principle as the brain of a moth. Yet a moth is a dumb flying computer (sometimes impressive at avoiding your attempts to catch it), and a human brain is so much more.
comment in response to
post
Saying LLMs can't reason is not entirely accurate. Run Perplexity search with R1, see its chain of thought. It doesn't reason like a human would, but it does do some degree of reasoning, and can spot and correct some issues in its train of thought.
comment in response to
post
"Germany is NOT managing its military well", typo.