jzleibo.bsky.social
I can be described as a multi-agent artificial general intelligence. OK, so some people pointed out that I am not in fact artificial, contradicting my bio. To them I would reply that I am likely also a cognitive gadget. www.jzleibo.com
24 posts 2,969 followers 233 following

Looking for a principled evaluation method for ranking *general* agents or models, i.e. ones evaluated across a myriad of different tasks? I’m delighted to tell you about our new paper, Soft Condorcet Optimization (SCO) for Ranking of General Agents, to be presented at AAMAS 2025! 🧵 1/N

CAIF's new and massive report on multi-agent AI risks will be a really useful resource for the field www.cooperativeai.com/post/new-rep...

I can't believe they've just cancelled the Epidemic Intelligence Service program at CDC. This program trains the best & brightest epidemiologists, who then go on to have distinguished careers in public health, serving at CDC, in state health departments, overseas, ...

Happy to see SfN sign on to this

Well said, @carlbergstrom.com. I also feel the dismantling of our scientific institutions & funding agencies for basic science is an attack on all scientists, wherever they might be (government or corporate lab, academic institution, ...). Our collective identities are about advancing knowledge.

Video from our tutorial @NeurIPSConf 2024 is up! @dhadfieldmenell @jzl86 @rstriv and I explore how frameworks from economics, institutional and political theory, and biological and cultural evolution can advance approaches to AI alignment neurips.cc/virtual/2024...

Seriously! This pain is real

Very happy to announce the publication of our latest paper: A theory of appropriateness with applications to generative artificial intelligence arxiv.org/abs/2412.19010 And happy new year everyone!

I’ve always tried to separate substantive politics with broad human impact from the symbolic posturing of online subcultures — but it keeps getting harder, and if the GOP is going to be run by the owner of X I may give up.

Deadline for faculty research grants up to $60k is 27 Jan! “The Research Scholar Program provides unrestricted gifts to support research at institutions around the world, and is focused on funding world-class research conducted by early-career professors” google.submittable.com/submit/ac6d7...

Nice post on talking to LLMs about philosophy: theendsdontjustifythemeans.substack.com/p/why-you-sh...

Do we (as a community, whoever that includes) think "reasoning" capability is in part responsible for our intelligent behavior? If so, why? People are terrible reasoners (and also terrible planners)

The Concordia NeurIPS workshop is today 9:00 AM - 12:00 PM! West meeting room 215, 216

Exactly 💯💯 💯

Our tutorial on cross-disciplinary insights on alignment is tomorrow neurips.cc/virtual/2024...

at #neurips2024 @neuripsconf.bsky.social from Wed-Sun. Please come to our workshop on Sunday neurips.cc/virtual/2024... We'll be discussing all things multi-agent: cognitive modeling, social intelligence, computational social science, and much more!

Very cool work from the genie team!

My tutorial with @dhadfieldmenell.bsky.social @jzleibo.bsky.social and Rakshit Trivedi on "Cross-disciplinary insights for alignment in humans and machines" is Tuesday at 1:30 Pacific; scroll down to bottom of this long list of other JHU papers and workshops!

Very interesting paper here comparing a MARL model and a Concordia model on the same topic!

Great summary thread!

These two papers, taken together, really prompt a rethinking of behavioral economics. Rather than having anomalous risk preferences, it looks like people have complexity aversion to "hard" decisions, especially on valuation, which drives behavioral anomalies. Herbert Simon ftw.

Featuring @rand.org’s @toddhelmus.bsky.social:

Nice article here on the relationship between AI and other sciences, also including relevant things for social sciences too: deepmind.google/public-polic...

most people want a quick and simple answer to why AI systems encode/exacerbate societal and historical bias/injustice, and due to the reductive but common thinking of "bias in, bias out," the obvious culprit is often training data, but this is not entirely true 1/

Be there, will be wild!

Concordia is a library for generative agent-based modeling that works like a table-top role-playing game. It's open source and model agnostic. Try it today! github.com/google-deepm...