Models like o1 suggest that people won’t generally notice AGI-ish systems that are better than humans at most intellectual tasks, but which are not autonomous or self-directed
Most folks don’t regularly have tasks that bump up against the limits of human intelligence, so they won’t see it
Comments
While it’s great, all my tasks got stuck in infinite loops, so I ended up just going back to the standard model.
But overall, either I’m completely missing the power of o1 (and Ethan is totally right), or o1 is the productization of agent coordinators on top of regular LLMs (and I’m not asking it hard problems, and Ethan is right).
How often do most people find themselves "stumped" in their job? And even then, the usual response is to refer it up the management chain.
Of, for, and by the people: the legal lacuna of synthetic persons
https://link.springer.com/article/10.1007/s10506-017-9214-9