christophmolnar.bsky.social
Author of Interpretable Machine Learning and other books. Newsletter: https://mindfulmodeler.substack.com/ · Website: https://christophmolnar.com/
131 posts · 5,685 followers · 966 following

Can an office game outperform machine learning? My most recent post on Mindful Modeler dives into the wisdom of crowds and prediction markets. Read the full story here:

A year ago, I took a risk & spent quite some time on an ML competition. It paid off: I won 4th place overall & 1st in explainability! Here's a summary of the journey, challenges, & key insights from my winning solution (water supply forecasting).

OpenAI right now

The original SHAP paper has been cited over 30k times. It showed that attribution methods like LIME and LRP compute Shapley values (with some adaptations). The paper also introduced estimation methods for Shapley values, such as KernelSHAP, which is deprecated today.
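For context, a rough sketch of how KernelSHAP is typically called through the shap package; the model and data below are toy placeholders, and the newer shap.Explainer interface is generally preferred nowadays:

import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Toy model and data, purely for illustration
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# KernelSHAP: model-agnostic Shapley value estimation via a weighted
# linear model fit on feature coalitions (the LIME connection)
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:5])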

To this day, the Interpretable Machine Learning book is still my most impactful project. But as time went on, I dreaded working on it. Fortunately, I found the motivation again and I'm working on the 3rd edition. 😁 Read more here:

How I sometimes feel working on "traditional" machine learning topics instead of generative AI stuff 😂

It's quite ironic how the people who built the best prediction models are such bad predictors themselves. They throw all their knowledge about making good predictions overboard and confidently claim things like "AI will replace radiologists in a few years", or state exactly when they expect AGI.

The problem with all these AI demos (especially for image and video generation): they show the most impressive, cherry-picked examples. That includes cherry-picking the prompts and themes that produced better results. But as a user, you want good results for every prompt and theme relevant to your use case.

Looking for a Christmas gift for a stubborn Bayesian or an over-hyped AI enthusiast? Modeling Mindsets is a short read to broaden your perspective on data modeling. christophmolnar.com/books/modeli... *Hat not included.

My personal rules for AI-assisted writing:
• Use AI only for small and specific tasks, like grammar fixes or suggestions for factual corrections.
• Never let an LLM change voice and tone.
• Review every change made by AI.

What a sad timeline, where vaccines — one of medicine's clearest wins with all upside and minimal downside — have become targets. Can't we have like an anti-knee arthroscopy movement or whatever instead?

Citing a non-deterministic, "hallucinating", and non-reproducible LLM output is wild. The norms and best practices are still evolving, but citing LLM output seems like the wrong way to go. (Even wilder when people add "ChatGPT" as a co-author.)

What are Shapley interactions and why should you care about them? This is a guest post by Julia, Max, Fabian, and Hubert on my newsletter Mindful Modeler. I also learned a lot from this post and definitely recommend checking out the shapiq package. mindfulmodeler.substack.com/p/what-are-s...
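Not the shapiq API itself, just a small brute-force sketch of the pairwise Shapley interaction index (Grabisch & Roubens) for a made-up toy game, to illustrate what an interaction value measures:

from itertools import combinations
from math import factorial

def pairwise_interaction(value, n, i, j):
    # Shapley interaction index for players i and j of the game `value`,
    # defined on coalitions of the players {0, ..., n-1}
    others = [p for p in range(n) if p not in (i, j)]
    total = 0.0
    for size in range(len(others) + 1):
        for subset in combinations(others, size):
            S = set(subset)
            weight = factorial(len(S)) * factorial(n - len(S) - 2) / factorial(n - 1)
            delta = value(S | {i, j}) - value(S | {i}) - value(S | {j}) + value(S)
            total += weight * delta
    return total

# Toy game: features 0 and 1 only pay off together (pure synergy)
v = lambda S: 1.0 if {0, 1} <= S else 0.0
print(pairwise_interaction(v, n=3, i=0, j=1))  # 1.0, a positive interaction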

Is anyone aware of a completely AI-generated book that people actually read? Excluding books that are dedicated "AI experiments", where the book is more about the experiment itself. Also excluding AI-assisted books where generative AI played only a minor role.

The unofficial GIF-based pandas library documentation. pandas.DataFrame.rolling
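In non-GIF form, rolling slides a fixed-length window over the rows (toy data, for illustration only):

import pandas as pd

df = pd.DataFrame({"level": [3.0, 4.0, 6.0, 5.0, 7.0, 8.0]})
# A window of 3 rows slides over the column; the first two results
# are NaN because the window is not yet full.
df["rolling_mean"] = df["level"].rolling(window=3).mean()
print(df)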

Without non-linear activation functions, neural networks would be linear models, no matter how many layers are stacked.
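A quick numpy check of that claim for two stacked layers (illustrative only):

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
x = rng.normal(size=3)

# Two linear layers without an activation in between ...
stacked = W2 @ (W1 @ x + b1) + b2
# ... collapse to a single linear layer with W = W2 @ W1 and b = W2 @ b1 + b2
collapsed = (W2 @ W1) @ x + (W2 @ b1 + b2)
print(np.allclose(stacked, collapsed))  # True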