People can learn to get better at searching; there are even classes. Then you can just go straight to the links without worrying about "hallucinations".
Fully agreed. Not every search has the goal of being a learning experience.
There are different words with similar meaning. When you do a string based search, it will only return the term you search for. A semantic search (even without AI) will return relevant results with different strings.
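To make the distinction concrete, here is a toy sketch (my own illustration, not a real embedding model): a string search misses documents that use a synonym, while a semantic search maps different words to a shared concept first. The `SYNONYMS` table stands in for what an actual embedding model would learn.

```python
# Toy stand-in for semantic matching: map words to a shared "concept".
# A real system would use vector embeddings; this table is a hypothetical example.
SYNONYMS = {"car": "vehicle", "automobile": "vehicle", "truck": "vehicle"}

docs = ["automobile repair tips", "truck maintenance guide", "cooking basics"]

def string_search(query, docs):
    # Exact substring match: only finds the literal term.
    return [d for d in docs if query in d]

def semantic_search(query, docs):
    # Match on concept, so different strings with similar meaning hit.
    concept = SYNONYMS.get(query, query)
    return [d for d in docs
            if any(SYNONYMS.get(w, w) == concept for w in d.split())]

print(string_search("car", docs))    # no hits: the literal string isn't there
print(semantic_search("car", docs))  # hits the automobile and truck docs
```

Searching for "car" returns nothing from the string search but both vehicle-related documents from the semantic one, which is the whole point: relevant results with different strings.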
You have pretty much zero way to make sure the chatbot isn't hallucinating data, sources, and references, and if you're gonna bother verifying all the data output, you might as well research shit normally without the extra step that gives you trash data that took 60 tons of fresh water to generate.
And what the fuck does that do when they still hallucinate shit that is entirely unrelated? What if the citations are hallucinated? AI chatbot bullshit isn't good enough for most important work yet. I can give you 15 citations that "prove" that Joe Biden is a fruit fly; just don't expect me to verify that any of said citations are even remotely related to what I say. Critical thought does 100x more for a person than any LLM slopbot 9000 that some circle-jerking techbros ejaculated in their ketamine-induced fugue states.
Also, believing it to be a "tool" is laughable. They never developed it to be a tool; it was made to be a replacement. You're not needed for its intended goal.
It can make code development faster. It works well augmenting humans.
I'm a former engineering manager, about a decade removed from a daily coding job. With an LLM, I produce more usable code in a day than my junior engineers could in a week.
Also, you're fucking hilarious. I know how to use the LLMs, and I know how to do "prompt engineering" (a pseudo-intellectual term coined by halfwits looking to legitimize the fact that they defer all thought to a hallucination machine). I dismiss it outright because IT IS NOT SECURE AT ALL.
Many people are conflating "people are doing stupid things with AI" with "AI can't do useful things."
And if you go as far as needing to research every citation that ChatGPT uses, you might as well have just written it yourself.
Yeah, they can hallucinate. Not checking its work is just reckless.
Choosing to do something the slow way is not virtue.
Why would someone drive a car when they could walk?
Doesn't that make them "lazy?"
Code functions, which are testable, are very well supported. LLMs are good at parsing unstructured information and extracting data.
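That testability is the point: you don't have to trust the model's output, you verify it. A minimal sketch of the workflow, where `llm_generated_slugify` is a hypothetical stand-in for code a chatbot produced and the assertions are checks you write yourself:

```python
import re

def llm_generated_slugify(title: str) -> str:
    # Pretend this body came from a chatbot (hypothetical example);
    # the assertions below are ours and don't depend on the model.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Verification independent of the LLM: if these pass, the code is
# usable regardless of whether the model "understood" anything.
assert llm_generated_slugify("Hello, World!") == "hello-world"
assert llm_generated_slugify("  AI  Slop  ") == "ai-slop"
```

If the generated function fails the tests, you throw it back or fix it; the tests, not the chatbot, are the source of trust.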
In what world is any of the money from that tech company going to make it to the hands of the "right" people?
There are absolutely things it can't do. There are also things it can do well.
Dismissing it outright because one use case doesn't work well is a mistake.
There are people who, after it was built and they saw what it could do, intended it to replace people.
AI is not replacing people. People that know how to use AI are displacing people that don't.