I'm shocked that we went from *rightfully* mocking an ex-Google engineer for believing an LLM was sentient after spending too much time with it, to having half of the tech industry believe the exact same thing.
Don't get high on your own supply, and don't drink the entire bottle of Kool-Aid, folks!
Comments
I usually gather information using 4o, and based on that I let o1 come up with a good answer... that works even better.
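For what it's worth, here's a minimal sketch of that two-stage pipeline, assuming the official OpenAI Python client and the public model names `gpt-4o` and `o1` (the prompts and example question are my own illustration):

```python
# Two-stage pipeline: 4o gathers/summarizes information, o1 reasons over it.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def gather_info(question: str) -> str:
    # Stage 1: use gpt-4o to collect and condense the relevant background.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Gather and summarize the key facts needed to answer the question."},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

def answer_with_o1(question: str, context: str) -> str:
    # Stage 2: hand the summary to o1, which does the heavier reasoning.
    resp = client.chat.completions.create(
        model="o1",
        messages=[
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    q = "Why does my distributed cache show stale reads after a failover?"
    print(answer_with_o1(q, gather_info(q)))
```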
But more folks should actually read the research and look at the architecture behind them.
It's not a magic box, it's very clever tech. :)
https://www.infoq.com/news/2023/12/microsoft-orca-2-llm/
It uses a much more advanced model to create synthetic (labelled) data that explains its reasoning, and the smaller model learns from that. Bootstrapping the knowledge and making it _seem_ very smart.
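To make that concrete, here's a rough sketch of that kind of synthetic-data bootstrapping: a stronger "teacher" model writes out step-by-step explanations, and the resulting pairs are dumped to JSONL as training data for a smaller "student". This is not the actual Orca 2 recipe; the teacher model name, prompts, and file format are all illustrative assumptions on my part:

```python
# Sketch of teacher->student distillation a la Orca 2: a strong teacher model
# writes step-by-step explanations; the (question, explanation) pairs become
# supervised training data for a smaller student model.
# Model name, prompts, and dataset format here are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()

TEACHER_SYSTEM = (
    "You are a careful tutor. Solve the problem step by step, "
    "explaining your reasoning, then state the final answer."
)

def teacher_explain(question: str) -> str:
    # Ask the stronger teacher model for a worked, labelled explanation.
    resp = client.chat.completions.create(
        model="gpt-4o",  # stand-in for "a much more advanced model"
        messages=[
            {"role": "system", "content": TEACHER_SYSTEM},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

def build_dataset(questions: list[str], path: str = "synthetic_train.jsonl") -> None:
    # One JSONL record per example, in a chat-style fine-tuning format.
    with open(path, "w") as f:
        for q in questions:
            record = {
                "messages": [
                    {"role": "user", "content": q},
                    {"role": "assistant", "content": teacher_explain(q)},
                ]
            }
            f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    build_dataset(
        ["If a train travels 120 km in 90 minutes, what is its average speed in km/h?"]
    )
    # The resulting file can then be used to fine-tune a smaller student model,
    # which learns to imitate the teacher's reasoning traces.
```

The student isn't discovering anything new here; it learns to reproduce the teacher's reasoning style, which is exactly why it can _seem_ smarter than it is.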