Modern-Day Oracles or Bullshit Machines?
Jevin West (@jevinwest.bsky.social) and I have spent the last eight months developing the course on large language models (LLMs) that we think every college freshman needs to take.
https://thebullshitmachines.com
Comments
neat!
thanks etc.
but also
can i haz plain text
&
"any sufficiently advanced CSS/hypertext is indistinguishable from ~~magic~~ AI"
but the graphic on the TOC shows the same image as Lesson 8, "Poisonous Mushrooms and Doggie Passports"
Great resource. Thanks!
Thank you!
here:
https://thebullshitmachines.com/lesson-16-the-first-step-fallacy/index.html
there are a few others
in
https://thebullshitmachines.com/lesson-14-authenticity/index.html
"They don't engage in logical reasoning. They don't have any sort of embodied understanding of the world. They don't even have a fundamental sense of truth and falsehood."
One comparison I've drawn: LLMs are like subconscious processes.
To me, the subconscious not only fits your description, but the mistakes our subconscious makes also remind me of the mistakes LLMs make.
What would you rather see as a name for this resource?
;-)
LLMs cannot be here to stay: they waste exponentially more energy, degenerate as they retrain on their own output, and undermine everything they're applied to.
Are you open to translations? I think it would be great to be able to offer this in multiple languages and could prob find resources to work on a Dutch version
Itβs a humanities course about how to learn and work and thrive in an AI world.
Neither instructor nor students need a technical background. Our instructor guide provides a choice of activities for each lesson that will easily fill an hour-long class.
They are suitable for self-study, but have been tailored for teaching in a flipped classroom. https://thebullshitmachines.com/table-of-contents/index.html
Large language models are both powerful tools, and mindlessβeven dangerousβbullshit machines. We want students to explore how to resolve this dialectic.
We marvel at what LLMs can do and how amazing they can seem at timesβbut we also recognize the huge potential for abuse, we chafe at the excessive hype around their capabilities, and we worry about how they will change society.
We've given it a huge amount of thought, and we don't want to lecture *at* students about the evils of genAI.
We don't think that works. Students use LLMs already; we want them to discover the problems for themselves.
I, on the other hand, think a technical background is helpful for dismantling the promises of big tech and the oligarchs getting us to worship their AI god, which is a mere tool for power.
AI: NO
Beeeeeeeep Boooooop
......1.21 Gigawatts later
Would you like to see the true nature of time?
Tells you how limited their grasp of expertise is.
The fuzzy stuff people are fascinated with is more frustrating than expert-guided algorithms in science.
I must say, the following excerpt is spot-on. I feel seen, even though I know I'm far, far, far from the only person thinking this.
Right now the whole effort is self-funded and done on spare time; I could see making a plaintext version if we can get the support.
I missed it earlier but just saw it trending #1 on hacker news.
Is there a study on that experiment?
https://www.wired.com/story/400-dollars-to-build-an-ai-disinformation-machine/
Do you think maybe wired got duped?
What do you think of the larger systems that use LLMs partnered with curated databases and RAG to ground the LLM in domain knowledge and eliminate the bullshit aspect?
(I'm on lesson 3, so apologies if this is covered later)
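To make the question above concrete, here is a minimal sketch of the RAG pattern the commenter describes: retrieve relevant snippets from a curated database, then constrain the model's answer to that context. Everything here is hypothetical and illustrative — the toy document store, the keyword-overlap `score` function (a stand-in for real vector search), and the prompt wording are my own assumptions, not anything from the course or the thread.

```python
# Minimal RAG sketch (illustrative only): ground an LLM's answer
# in a curated document store instead of its free-floating memory.
from collections import Counter

# Curated domain "database": vetted snippets the model must answer from.
DOCS = [
    "Amanita phalloides, the death cap, causes most fatal mushroom poisonings.",
    "Chanterelles have false gills that run down the stem.",
    "Morels must be cooked; raw morels are mildly toxic.",
]

def score(query: str, doc: str) -> int:
    """Keyword-overlap relevance score (a crude stand-in for vector search)."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())  # size of the multiset intersection

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most relevant curated snippets for the query."""
    return sorted(DOCS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Build a grounded prompt: answer ONLY from the retrieved context."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using only the context below; say 'I don't know' otherwise.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("Are raw morels toxic?"))
```

The grounding comes from the instruction plus the retrieved context: the model is told to refuse rather than confabulate when the curated database is silent, which is exactly the "bullshit aspect" RAG tries to suppress — though in practice models still sometimes ignore the instruction.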
Course is missing a deeper dive on how to leverage the technology correctly. Plenty of bad examples, few good ones. Many of the problems you mention have viable solutions today. Show what good looks like, not just bad.
Awesome work!
https://thebullshitmachines.com/lesson-1-autocomplete-in-overdrive/index.html
Part of my project, #onindividuality, argues #agi is here and we're too afraid to admit it.
If you ever get a chance to take a look I'd love your take
With you,
Ron
"We can only play Whac-A-Mole in trying to fix their flaws with after-the-fact patches layered atop the existing architecture." Is this not what nature is doing with biology? Isn't this evolution writ large?
Yes, this is what happens with biology. And it takes a few billion years to get anywhere while working massively in parallel.
What's particularly interesting about the biology case is that there are some things you can do playing whac-a-mole and some you can't.
(I'd love to see a @cbo.bsky.social essay on all this. He could do it much better.)
For now one defense is that it is my own privately owned and maintained website, so going after the UW does nothing.
They would have to blatantly trash the first amendment to take it down.
(I know, I know, give them a week or two… )
Thank you!
This is really good educational material.
For a technical look behind the scenes, I can highly recommend watching this new 3.5-hour explainer from Andrej Karpathy ->
it worked, for whatever weird presentation framework it is (name it and I will file the bug!)
it would be nice
and (I guess) more compatible with less powerful older hardware
as opposed to exorbitantly expensive *new* hardware
(which probably has a built-in "AI" processor besides the GPU)
thanks
more "easily understood or appreciated"
mine is more: just plain-text
https://bsky.app/profile/partickle.bsky.social/post/3lhsm3upxbs2b