The “pieced together fragments of Gilgamesh” story, while cool, has nothing to do with generative AI afaict. It’s just machine learning pattern recognition on a new scale. Like solving jigsaw puzzles. Cool breakthrough, but like the definition of “not really AI”
so call it "MLPR" instead of AI, or whatever the fuck. then address the actual issue, i.e., prove that the rewards of deployment outweigh the risks, and explain why a minority of risk-lovers should have the right to impose a transformed world that nobody consented to
Having read Casey's essay, it seems you are the one creating the false dichotomy. Casey wasn't talking about the whole AI-discussion space; he was talking about the groups he saw at a specific AI-focused conference. Attendees there are going to be a highly self-selecting group, not representative of the broader conversation.
I work at a giant company, and we're about to hit the panic button with generative AI. Luckily RAG and "local" LLM things are promising and have many more use cases, but I worry about SO many companies heavily investing in limited applications
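For anyone unfamiliar, RAG (retrieval-augmented generation) just means retrieving your own relevant documents and pasting them into the prompt, so the model answers from company data instead of whatever it memorized in training. A rough sketch of the idea (the `embed()`/`llm()` stand-ins and the toy hashing embedding are placeholders, not any particular library):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy hashing bag-of-words "embedding" so the sketch is self-contained;
    # a real system would call an actual embedding model here.
    v = np.zeros(256)
    for word in text.lower().split():
        v[hash(word) % 256] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def llm(prompt: str) -> str:
    # Placeholder; swap in a call to whatever local model you actually run.
    return f"[local model response; prompt was {len(prompt)} chars]"

def rag_answer(query: str, docs: list[str], k: int = 3) -> str:
    # Rank documents by cosine similarity to the query (vectors are
    # normalized, so a dot product is cosine similarity), then stuff the
    # top k into the prompt so the model answers from retrieved data
    # rather than from its weights alone.
    q = embed(query)
    top = sorted(docs, key=lambda d: float(embed(d) @ q), reverse=True)[:k]
    context = "\n---\n".join(top)
    return llm(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")

print(rag_answer("What is our refund policy?", [
    "Refunds are issued within 30 days of purchase.",
    "Our office is closed on public holidays.",
    "Shipping takes 5-7 business days.",
]))
```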
I think that because AI evangelists overlap heavily with crypto evangelists, there is a tendency to assume that AI is equally useless. AI does represent some technological advance, and is more immediately useful to corporations, even if only to occlude their crimes behind a supposedly neutral machine.
Sure. The problem is that no one is really making good utilitarian cases for AI. They just point to some nebulous possible benefits while hand-waving away obvious ethical concerns, in ways that a) prove they have very little knowledge about the field they think AI can improve…
2/…b) assume all fields work the same way as the tech sector, have similar priorities, & therefore will be improved by implementing the same solutions. Or c) tack on some inchoate, abstract ideology that doesn’t hold up to critical interrogation or graft onto the real world…
3/…the similarities with crypto/blockchain advocacy are particularly ironic in the case of c), mostly b/c the technologies are ontologically opposed (blockchain advocates argue that value comes from scarcity/ownership, while at the same time arguing that IP ownership is theft when advocating for AI)…
4/…these are positions that can only be held by people who are insulated from the material concerns of those working within some of these fields, & solutions that grossly misunderstand their problems. In some cases, the proposed solutions actually make those problems worse…
5/…All of this is to say that it’s hard to take the utilitarian arguments of AI advocates seriously if they are unwilling to meaningfully address or even acknowledge the very real trade-offs, downstream effects, & ethical issues brought about by these technologies…
I can at least offer some comfort there: we are no closer to machine cognition than we were 70 years ago, when AI was first made an official field of study in computer science.
But they speak English and walk and simulate physics and stuff. Naysay all you want, but “computers haven’t advanced since Turing” is going a bit far! Unless by “cognition” you’re referring to an ineffable human soul, in which case, fair enough
By "cognition" I'm referring to ideation and imagination directed by consciousness; I don't need to bring a soul into things to know that no level of elaboration upon simple computation can approach organic consciousness because the two are categorically different, even opposite things
Ok so the brain is doing something that no machine ever could. How? What does evolution have that machines don’t?
Separately: why care about cognition, if given such a definition? Other than “humans have it”, I guess. What do you call a machine that navigates a complex, unforeseen problem space?
> If there is one thing you take away from my essay or Alkhatib, it should be this: reach for your wallet when someone starts offering simple taxonomies for understanding artificial intelligence.
Confused on the closer - you mean, like to protect it? Or, invest because that would be impressive?
The term is slightly broader than 'technology masquerading as AI but actually teleoperated' (though there are plenty of examples of startups doing exactly that, e.g. https://outsideinsight.com/insights/ai-startup-using-human-developers-build-apps/ ) - it also encompasses, for example, Amazon's Mechanical Turk, where human labor is obscured by a digital facade
people should narrow down the definition of AI; saying "AI is bad" is like saying "electronics are bad" or "molecular biology is bad." I believe the giants are squandering hundreds of billion$ on specific applications of AI, but to say "AI is bad" seems lazy or intentionally obtuse.
I mean that's part of the grift. Take credit for applications LLMs do well and sweep under the rug that AGI isn't actually happening, but call it all AI so nobody knows what's going on.
it's fraudulent, dangerous, and being deployed without the consent of those it will affect. you can call that lazy, but maybe the laziness is in deflecting the claims
Comments
I'm just afraid that precisely *because* AI is such complex tech, people want people like Newton to provide them with simplifying analyses.
i would concur
AI is different in that there's actual technology there
the grifts are still the same tho
(and, as I note, a lotta them are literally the same grifters)
…it is? Where? “Littered”? I must be in a bubble; the only example I know of is the Tesla robot dancers