tombielecki.bsky.social
@[email protected] Ops Manager, Aligned Outcomes. CEO @PrintToPeer (Techstars ’15). #roam 𐃏
13 posts
112 followers
67 following
comment in response to post
Perhaps a broader pattern:
As capability increases, the need for complex demonstration decreases.
The highest form of capability might be one that can afford to be transparent, simple, and direct about both its strengths and limitations.
comment in response to post
Recursive irony: this thread itself becomes a perfect example of valuable training data, containing nuanced technical discourse, community dynamics, and ethical debate - exactly the kind of content AI researchers would want to train on.
comment in response to post
Human language-learning strategies might serve as inspiration for designing more effective LLM training paradigms built around language games. Polyglots like Prof. Arguelles describe language learning as an integrative, discovery-based process that includes multi-modal integration.
comment in response to post
I've been doing LLM analysis of threaded online technical discussions, extracting patterns, roles, rules, etc. that represent things like sensemaking and knowledge crystallization, and seeing a single participant take turns as teacher and student. A possible language game for training?
comment in response to post
Have you tried adding MCP to datasette?
comment in response to post
Experiment synthesizing Bluesky threads
comment in response to post
Interesting. Was it actually revising the text file incrementally? Do you think it would use a document app window if one were open?
comment in response to post
Preview for planning, Mini for execution
comment in response to post
Completely agree. Around 2013 I rode in a self-driving taxi in Abu Dhabi (Masdar City), and it was the highlight of the trip!