wilbertcopeland.bsky.social
Life Science, Responsible AI, Solarpunk Dreams
11 posts 33 followers 29 following
Conversation Starter
comment in response to post
I regret nothing. Bring crackers.
comment in response to post
China’s retaliation reminds me of your earlier post about the uncertainty surrounding AI’s maturation in a competitive, multipolar world. Have you previously written about how multipolar (vs. bipolar) AI advancement might uniquely challenge well-being and social cohesion? (Or recommended reading?)
comment in response to post
The crux of my interpretation rests on the positive-sum aspect. Whether achieved primarily by policy, technology, or magic… I think it’s key to believing that we can do better for the many.
comment in response to post
Yea, policy is a major lever. And you are right: it’s complicated to do “well” for so many reasons. Regulation is key. Didn’t mean to imply otherwise. Maybe swap “let go” with “reconsider and revise”? Not all policy hinders supply, and some hindrance can be good for other reasons. Thanks!
comment in response to post
It focuses on positive-sum actions. Ezra Klein and others have been discussing this for years. Basic idea: progressives should focus on increasing supply to overcome modern problems of scarcity (e.g., energy, health, homes). And progressives need to let go of their practices that hinder supply.
comment in response to post
Yea, I appreciate this take. A new platform presents new opportunities to grow and thrive. Sometimes the fear of what you might lose can hold you back from all there is to gain. Plus you get peace by moving past cognitive dissonance.
comment in response to post
Hey, I know that guy! Did not realize DSA was on BlueSky. 😊
comment in response to post
This could be an opportunity to shift what is being measured. Maybe for the better? Gradually building and refining arguments is important, and good writers do that. If students have to complete written material in class, then is it bad to let them ponder and refine their thinking with AI at home?
comment in response to post
Why no idea? If we accept that evaluation needs to be 100% in class, then why can’t they craft the essay over multiple in-class periods? Perhaps even with assessments at milestones along the way toward the final essay (e.g., establishing a thesis, structure, etc.).
comment in response to post
Also, fallible agents making billions of transactions have the potential for significant $$ losses. Will insurance companies move into this space? 🤔 Seems wild how unprepared we are.
comment in response to post
Thoughtful! Would AI companies be incentivized against model interpretability in order to dodge responsibility? Why risk having failures conclusively traced back to their product? Maybe a solution space for AI ethicists?