marcelorinesi.bsky.social
Niche-specific AI architect, consultant, and amateur SF writer. AI != LLM.
On https://rinesi.com there are links to my
* Blog
* Newsletter
* Short SF newsletter, including "Viral Fixpoint," a free compilation of (very) short stories
3,445 posts
299 followers
160 following
comment in response to
post
... tasks despite the "boring stuff only" discourse and the fact that they don't work. My concern is that the latter isn't fully visible in company metrics -some of it is a loss of positive externalities from expertise integrity norms- so we can end up in a Pareto-pessimal equilibrium. 2/2
comment in response to
post
Technical preferences aside, 100% agree with you that the use of [*]AI to develop expertise and autonomy is a relatively untapped space. It's where the cutting edge will be, but -talk of AGI aside- I don't think orgs are trying to push their cognitive frontiers as much as just cutting labor costs.
comment in response to
post
I'm more sanguine on repurposing tools to develop better non-NL knowledge discovery and management tools (think e.g. "package management for DAGs over standardized domain ontologies" or "the proper artifact for a meeting isn't a pile of graphs, it's a joint probability distribution") but 🤷
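A minimal sketch of the "meeting artifact as joint probability distribution" idea, with hypothetical decisions and made-up numbers (none of this is from the post): once the group's beliefs are written down as a joint distribution, marginals and conditionals can be queried mechanically instead of re-litigated.

```python
# Hypothetical sketch: a meeting's conclusions captured as a joint
# probability distribution over two binary decisions, rather than slides.
# Variable names and probabilities are illustrative assumptions.

# P(ship_q3, hire_backend) as agreed in the meeting; values sum to 1.
joint = {
    (True, True): 0.35,
    (True, False): 0.15,
    (False, True): 0.10,
    (False, False): 0.40,
}

def marginal(var_index, value):
    """P(variable at var_index == value), summing out the other variable."""
    return sum(p for outcome, p in joint.items() if outcome[var_index] == value)

def conditional(var_index, value, given_index, given_value):
    """P(var == value | given == given_value) via the chain rule."""
    num = sum(p for o, p in joint.items()
              if o[var_index] == value and o[given_index] == given_value)
    return num / marginal(given_index, given_value)

p_ship = marginal(0, True)                         # -> 0.5
p_ship_given_hire = conditional(0, True, 1, True)  # -> 0.35 / 0.45
```

The point of the sketch is that the artifact composes: downstream meetings can condition on new evidence rather than starting from a pile of static graphs.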
comment in response to
post
I'm extremely willing to be proved wrong -it has happened before *g*- but TBH I'm not entirely sure genAI offers a viable long-term path: it just doesn't compose cleanly/is not really reusable as expertise representation/repository. But I'm aware that's not the mainstream bet these days.
comment in response to
post
That's a good point - our social cues are [were? might end up not being?] tied up with pre-internet affordances for packing up and leaving. Perhaps that's similar to the rural-urban transition in terms of a dislocation of social assumptions (now made worse by deepfakes, bots, etc).
comment in response to
post
This creates a political problem in the generalized sense: some of the people best positioned to push back against systemically damaging deployments are also the ones who use them in positive ways and, worse, as a group are more used to thinking about technical capabilities than second-order impact.
comment in response to
post
That's very much the script of every good LLM use I've seen. You have expertise on both coding in general and the relevant domain: that makes you a viable LLM user, but at the same time I think the kind of people LLMs are (as business/social strategy) ultimately deployed *against.*
comment in response to
post
Now that you mention it, so do I.
comment in response to
post
We don't.
comment in response to
post
... in general I'm skeptical of crowd-sourcing without solid task factorization (although, again, there's an argument to be made that in this case it's really an atomic binary classification task; IMHO this is only true for papers you want to reject in limine but YMMV). 2/2
comment in response to
post
One of my tl;drs of the year is that, just as wild success as a tech founder doesn't mean you know squat about tech, science, or much besides making money as a tech founder, wild success in finance is entirely compatible with, or doesn't prevent you from developing, absolutely bonkers ideas. 2/2
comment in response to
post
... brutality offers, as usual, an extremely direct litmus test for the democratic commitments of organizations and individuals. It's been flunked more often than not by institutions in the US; for any journalistic organization standing up for Bertrand should be both a duty and an easy win. 2/2
comment in response to
post
... and don't require decision-makers to learn or defer to expertise they don't have, hence their immediate elevation to consigliere or even grand vizier.) 2/2
comment in response to
post
That does track as well, yes.
comment in response to
post
... proposed equations w/o the hole, most other people think they are overall worse than the ones we already have."
This is a common misperception about physics papers: reading proposed new maps as discoveries, when they are just that: proposed new maps/thought experiments. 2/2
comment in response to
post
My [layperson] understanding is that even partial hearing losses impact cognition through [probably oversimplifying] the brain redirecting resources to trying to do inference on worse inputs. I wonder if something similar could happen with smell.