Foreign policy is, obviously, about advancing priorities (and dealing with the legacy of previous governments' foreign policies) while trying to hold everything else as constant as possible so those priorities can actually be pursued.
The policies themselves, and the institutions that develop and implement them, reflect that focus. To function within budget and staffing constraints, they have to assume that lower-priority areas are stable enough, or unimportant enough, not to demand much time and attention.
But that’s just not possible now. The collapsing Assad regime in Syria; Ukraine; Georgia; Gaza; Romania; chaos in French politics; sub-threshold attacks against NATO by Russia; South Korea; political uncertainty in Germany; Russia’s economic fragility.
And looming over it all, the rapidly approaching Trump administration, which is adding to current instability as well as offering the prospect of chaos at home and abroad from January.
Which state has the bandwidth to cope with all of that? Several of the countries in various degrees of crisis are also among the most powerful, complicating their own foreign policy and adding uncertainty to the international system as a whole.
That's true of the US above all, of course. The cautious pragmatism of the Biden administration (which looks much like Obama's foreign policy, yet somehow without having learned any major lessons from the Obama years) is not set up to deal with all this, and it's too late to adapt.
Comments

I'm eagerly waiting for humanity to be watched over by machines of loving grace. We are not good at managing our own affairs, and computers have no greed, no malice, none of our other psychopathic tendencies, and no dicks or vaginas to cause all the other kinds of human stupidity.
You'd think so, but unfortunately the underlying machine learning models aren't very smart; that is, they don't learn fast. Training them takes an enormous amount of data, essentially everything ever written. That's why it's essentially impossible to remove bias from the training data: a corpus that size can't be curated, so whatever biases it contains come along for the ride.
It's really a brute-force approach to AI. There's nothing particularly clever about LLM tech, although the tech bros selling it to the public pretend otherwise.
Not exactly. They run massive matrix multiplications over word vectors to predict the text that follows a prompt. But you can't predict an LLM's output by inspecting the matrices that make up the trained model, just as you can't read equations out of a slice of Einstein's brain.
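To make that concrete, here is a toy sketch of the idea in Python. Nothing here comes from any real model: the tiny vocabulary, the dimensions, and the random weights are invented for illustration, and a simple mean over embeddings stands in for the attention layers a real transformer uses. Tokens become vectors, a learned matrix maps the context vector to a score for every word in the vocabulary, and the highest-scoring word is the predicted continuation.

```python
# Toy illustration of next-token prediction as matrix multiplication.
# All names, sizes, and weights are invented for this example; real
# LLMs use billions of parameters and many stacked attention layers.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "on", "mat"]   # tiny stand-in vocabulary
d_model = 8                                  # embedding width

E = rng.normal(size=(len(vocab), d_model))   # token -> vector lookup table
W = rng.normal(size=(d_model, len(vocab)))   # vector -> per-token scores

def next_token(prompt_tokens):
    # Embed the prompt and collapse it into one context vector
    # (a real model uses attention instead of a plain mean).
    ids = [vocab.index(t) for t in prompt_tokens]
    context = E[ids].mean(axis=0)
    logits = context @ W                                # the "massive matrix multiplication"
    probs = np.exp(logits) / np.exp(logits).sum()       # softmax -> probabilities
    return vocab[int(np.argmax(probs))]

print(next_token(["the", "cat"]))   # whichever token these random weights happen to favour
```

It also shows the opacity point: E and W are just grids of numbers, and nothing in them reads as a rule about cats or mats. The behaviour only appears when the multiplication is run.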