horizonevents.info
Non-profit dedicated to advancing AI safety R&D through targeted events and community initiatives. https://horizonevents.info/

AI Safety Events & Training: 2025 week 1 update aisafetyeventsandtraining.substack.com/p/ai-safety-...

AI Safety Events & Training: 2024 week 51 update – and 2024 review aisafetyeventsandtraining.substack.com/p/ai-safety-...

Guaranteed Safe AI Seminars 2024 review horizonomega.substack.com/p/guaranteed... The monthly seminar series grew to 230 subscribers in 2024, hosting 8 technical talks. We had ~490 RSVPs, and the recordings received ~900 views totaling ~76 hours of watch time. Seeking 2025 funding; plans include a bibliography and debates.

Using PDDL Planning to Ensure Safety in LLM-based Agents by Agustín Martinez Suñé Thu January 9, 18:00-19:00 UTC Join: lu.ma/08gr7mrs Part of the Guaranteed Safe AI Seminars

Compact Proofs of Model Performance via Mechanistic Interpretability by Louis Jaburi Thu December 12, 18:00-19:00 UTC Join: lu.ma/g24bvacw Last Guaranteed Safe AI seminar of the year

Our goals for 2025: - Guaranteed Safe AI Seminars - AI Safety Unconference 2025 - AI Safety Events & Training newsletter - Monthly Montréal AI safety R&D events - Grow partnerships We are looking for donations to support this work. More info: manifund.org/projects/hor...

Today in the Guaranteed Safe AI Seminars series: Bayesian oracles and safety bounds by Yoshua Bengio Relevant readings: - yoshuabengio.org/2024/08/29/b... - arxiv.org/abs/2408.05284 Join: lu.ma/4ylbvs75

Bayesian oracles and safety bounds by Yoshua Bengio, Scientific Director, Mila & Full Professor, U. Montreal November 14, 18:00-19:00 UTC Join: lu.ma/4ylbvs75 Part of the Guaranteed Safe AI Seminars

Announcing the Guaranteed Safe AI Seminars. This monthly series brings together researchers to discuss and advance the field of GS AI, which aims to produce AI systems equipped with high-assurance quantitative safety guarantees. horizonomega.substack.com/p/announcing...

Constructability: Designing plain-coded AI systems by Charbel-Raphaël Ségerie & Épiphanie Gédéon August 8, 17:00-18:00 UTC Join: lu.ma/xpf046sa Part of the Guaranteed Safe AI Seminars

You are invited to the Guaranteed Safe AI Seminars, July 2024 edition. Proving safety for narrow AI outputs – Evan Miyazono, Atlas Computing Thursday, July 11, 11:30-12:30 UTC-5 RSVP: lu.ma/2715xmzn

Introducing Horizon Events: A non-profit consultancy dedicated to advancing AI safety R&D through high-impact events and initiatives. horizonomega.substack.com/p/introducin...

Next edition of the Provable AI Safety seminars: Gaia: Distributed planetary-scale AI safety by Rafael Kaufmann, Co-founder and CTO. Thursday June 13, 13:00-14:00 Eastern, online. Join us! lu.ma/qn8p4wp4

You are invited to the 2nd edition of the Provable AI Safety Seminars: Provable AI Safety, Steve Omohundro May 9th, 13:00-14:00 EDT, online lu.ma/3fz12am7

Announcing the first edition of the Provable AI Safety Seminars. April 11th, 13:00-14:00 EDT. Monthly, on the 2nd Thursday. RSVP: lu.ma/provableaisa... Talks: - Synthesizing Gatekeepers for Safe Reinforcement Learning (Sefas) - Verifying Global Properties of Neural Networks (Soletskyi)

AI Safety Events Tracker, February 2024 edition. A newsletter listing upcoming events and open calls related to AI safety. aisafetyeventstracker.substack.com/p/ai-safety-...

AI Safety Events Tracker, December 2023 edition Listing upcoming events and open calls related to AI safety aisafetyeventstracker.substack.com/p/ai-safety-...

At Devconnect Istanbul tomorrow and interested in AI risk? You are invited to a two-hour participatory discussion tackling topics at the intersection of web3 and AI risk. lu.ma/vh9hrgme