carlo.pinciroli.net
Associate Prof. at Worcester Polytechnic Institute, I study #algorithms and #software #engineering for #robot #swarms. I bake stuff in my spare time. https://carlo.pinciroli.net #robotics #multirobot #multiagent #ai #risotto
54 posts 1,748 followers 1,306 following
Regular Contributor
Active Commenter
comment in response to post
3/ AI breakthroughs like AlphaFold wouldn’t be possible without decades of work on datasets. e.g., AlphaFold was trained on protein structures from the Protein Data Bank (PDB), which took 50+ years and ~$20 *billion* to create. This is the kind of foundational effort AI needs.
comment in response to post
I have tried and failed to make a similar argument for years. Every NSF proposal I submit about SoftEng for robotics gets destroyed because it's understood as mere development. It's much more than that: making the right tools yields thinking frameworks that become amplifiers of productivity and creativity.
comment in response to post
Twitter's monthly volume of tweets about research first crossed 100k in Feb 2012. It took until April 2013 for a monthly volume to exceed 200k, and Jan 2014 was the first monthly total over 300k. Public Bluesky is only "February 2024 years old" and already has 395k.
comment in response to post
There are already many articles that get more attention on Bluesky than on comparable micro-blogging sites, meaning the academic community and the general public have clearly adopted Bluesky as one of their core places to disseminate and discuss new research. A Place of Joy.
comment in response to post
Interesting. I didn't know it would be so accessible. I guess I have some reading to do...
comment in response to post
Well, then it's worse than I was expecting :-)
comment in response to post
Yes, it was a figure of speech. We agree on this.
comment in response to post
To me, defining what we're opening when we decide to open AI is core to discussing who should have access to what we're opening. I see how my way of debating has been confusing though.
comment in response to post
I just don't understand your point, I'm sorry. I'll leave it at that.
comment in response to post
But taking for granted that there's no scenario in which more powerful AI will find dangerous uses we can't foresee today is just naive. I'm not afraid of AI, I think it will be a huge net benefit, but debate and regulation must be part of its development.
comment in response to post
Think about this: 5 years ago deep nudes were not a thing, and they are widespread now. You might argue that today's AI might not be so dangerous, and that this will stay true for a while, and I agree with that.
comment in response to post
I still maintain that point. I replied to "AI being opened is fact" with another opinion that says the opposite. It's just not "fact".
comment in response to post
I guess my being an engineer with a strong humanities background makes me less enthusiastic than most about "move fast and break things" :-)
comment in response to post
More or less. It's yet another opinion that reaches the same conclusion as mine: "open AI" is a vague term, and it can't be achieved without careful regulation and debate.
comment in response to post
Nope www.nature.com/articles/s41...
comment in response to post
I agree. A possible middle ground for AI open sourcing would be to distribute not the full data, which might be infeasible, but the scripts used to fetch it and clean it up.
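The "scripts instead of data" middle ground above could be sketched roughly as follows. This is a minimal, hypothetical illustration, not any lab's actual pipeline: the URL is a placeholder, and the cleaning rules (strip markup, normalize whitespace, drop empty lines) stand in for whatever preprocessing a real corpus would need. The point is that publishing this code lets others rebuild the dataset without anyone redistributing it.

```python
# Sketch of "ship the scripts, not the data": distribute the code that
# fetches and cleans a public corpus, so others can reconstruct it.
# The URL and cleaning rules below are hypothetical placeholders.

import re
import urllib.request

DATASET_URL = "https://example.org/corpus/raw.txt"  # hypothetical source


def fetch_raw(url: str = DATASET_URL) -> str:
    """Download the raw corpus text from its public source."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")


def clean(text: str) -> list[str]:
    """Strip stray markup, normalize whitespace, drop empty lines."""
    lines = []
    for line in text.splitlines():
        line = re.sub(r"<[^>]+>", "", line)       # remove HTML remnants
        line = re.sub(r"\s+", " ", line).strip()  # collapse whitespace
        if line:
            lines.append(line)
    return lines
```

Shipping `fetch_raw` and `clean` (plus their versions and checksums of the expected output) makes the dataset reproducible without making the data itself redistributable, which is exactly the compromise suggested above.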
comment in response to post
We need to agree on what "open source" means though. To my knowledge, Chinese companies are not opening their training data, only the model weights. That makes the models hard to retrain but easy to integrate with other products. It's not really what "open source AI" means: tiny.cc/1obyzz.
comment in response to post
I don't agree with this cynical view of "well, others are doing it, so it's a free-for-all now", but at least we agree that bad actors exist who could exploit it. BTW – by "bad actors", I mean terrorist groups, homegrown bot farms, etc. Again, it's a complicated topic that needs to be treated carefully.
comment in response to post
Not the bad actors I am thinking about. And even if we were talking about the same bad actors, facilitating them is not a good idea. It also depends on what "open source AI" means – is it about the code, the weights, access to datasets, ...? Either way, it's different from the idea of open source behind Linux.
comment in response to post
Awesome! Thank you so much :-)
comment in response to post
Are my little projects enough to get included? I like this starter pack idea!
comment in response to post
I see your point, but it's complicated. Open source AI also risks democratizing a weaponized use of it by bad actors. Preventing both risks (concentration of power and multiplication of bad actors) is the biggest challenge ahead IMHO.