Profile avatar
tdietterich.bsky.social
Safe and robust AI/ML, computational sustainability. Former President AAAI and IMLS. Distinguished Professor Emeritus, Oregon State University. https://web.engr.oregonstate.edu/~tgd/
634 posts 7,343 followers 475 following
Regular Contributor
Active Commenter
comment in response to post
I’ve seen posts saying that there was a very long line at one of the gates and yet no lines at other entrances. Wayfinding signage was evidently not good
comment in response to post
I meant writing and checking of formal proofs of correctness. And we need those because people do make mistakes when coding, especially introducing security vulnerabilities. Checking proofs in papers would also be great!
comment in response to post
I agree that generality is important, and that is what makes LLMs so interesting. But people treat AGI as an end goal as if achieving it will unlock huge economic value. I'm skeptical
comment in response to post
Each of these capabilities exceeds human performance, and that is exactly the point. People are not good at these tasks, and this is why we need computational help. We should be evaluating AI systems for these kinds of capabilities rather than giving them IQ tests. Building AGI is a distraction end/
comment in response to post
4. speeding up physical simulations such as molecular dynamics and numerical weather models, 5. maintaining situational awareness of complex organizations and systems, 6. helping journalists discover, assess, and integrate multiple information sources, and many more. 5/
comment in response to post
Examples include 1. writing and checking formal proofs (in mathematics and for software), 2. writing good tests for verifying engineered systems, 3. integrating the entire scientific literature to identify inconsistencies and opportunities 4/
comment in response to post
I think we should be building systems that complement people; systems that do well the things that people do poorly; systems that make individuals and organizations more effective and more humane. 3/
comment in response to post
It defines "intelligence" entirely in terms of human performance. It says that the most important AI system capabilities to create are exactly those things that people can do well. But is this what we want? Is this what we need? 2/
comment in response to post
You are a good follow
comment in response to post
allwitnobrevity.com/how-to-convi... ? Everything about this is *chef's kiss*
comment in response to post
One: The LDS church is famously communitarian. How much does that help entrepreneurs, either directly or as a safety net? Two: These small startups are not particularly productive (in output per hour worked). How does productivity in small businesses relate to the future of our economy?
comment in response to post
A great paper, but the abstract would have been better if they had omitted "carefully crafted" -- those are just self-congratulatory words. Nowadays, we see LLMs putting similar stuff into papers and abstracts. Abstracts like this are presumably where the LLMs learn such behaviors.
comment in response to post
Sorry to see you go. Bsky needs to find a way to address this.
comment in response to post
I think it would be cool to have the system annotate my photos with field marks. I love Merlin, and it also makes mistakes. It is still very useful.
comment in response to post
Let's think about how this AI tool can be rigorously evaluated. I hope iNaturalist deploys a good UI for collecting feedback from users. I think iNaturalist users will also want a clear accounting of the carbon emissions of this new service. Transparency!
comment in response to post
It's as old as Genesis. The migratory herders didn't trust the people in Sodom and Gomorrah
comment in response to post
Yes. I’ve also seen some work using LVMs to test user interfaces, charts, and graphs prior to testing on people. If the LVMs are confused, it MAY be a sign that people would be confused too.
comment in response to post
I suppose synthetic users may be useful in some cases, but it seems to me that you would always need a user study to validate that in each case. And if you are doing a user study, you are better off not using AI at all
comment in response to post
There would be no supply if there was no demand. We all share responsibility
comment in response to post
Fair or not, the Portland protests were a disaster for Portland. A small number of chaos agents can destroy the political effectiveness of a demonstration. Resistance requires disciplined organization including identifying and excluding chaos agents.
comment in response to post
Y = AGI, of course
comment in response to post
This is how research works. We hypothesize “If we can do X, then we will achieve Y.” But when we succeed in doing X, we don’t get Y, because we were unaware of other factors. Y turns out to be more complicated than we expected. But maybe if we do Z we can get Y? Repeat
comment in response to post
The party was also a very broad coalition. Before LBJ's presidency, it included many racist segregationists!
comment in response to post
I'm hoping those consequences are Democratic wins
comment in response to post
Proud to see Oregon State here!!
comment in response to post
Mr. President, Sir.
comment in response to post
Unlike many of the replies, I find some of these tools very useful. However, I agree about the revenue, at least in the 5-year horizon. These are mostly "nice to have" rather than "life changing" tools.
comment in response to post
Isn't the message that you need to be careful HOW you use your pencil/AI rather than that you SHOULD use it?
comment in response to post
You can get estimates of P(Y|X) from decision trees directly from leaf counts. But yes, not from SVMs without fitting a second probabilistic layer.
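A minimal sketch of that leaf-count behavior, assuming scikit-learn (the dataset and tree settings here are illustrative, not from the original post):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# For a decision tree, predict_proba returns the class fractions in the
# leaf that each query point falls into -- an estimate of P(Y|X).
print(tree.predict_proba(X[:3]))
```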
comment in response to post
Bottom line: I agree that calling something "generative AI" is not very useful and it is better to talk about specific tasks.
comment in response to post
One further sense of "generative" contrasts the ability to generate Y from a distribution (say, P(Y)) vs being able to evaluate the probability (or likelihood) P(Y) given a particular Y. Evaluating P(Y) is good for scoring, ranking, and anomaly detection. 6/
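As a rough illustration of the "evaluate P(Y)" side (my own example, with a kernel density estimator standing in for the model):

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
Y = rng.normal(0.0, 1.0, size=(500, 2))      # samples of "normal" data

kde = KernelDensity(bandwidth=0.5).fit(Y)    # density model of P(Y)
queries = np.array([[0.0, 0.0],              # typical point
                    [6.0, 6.0]])             # far-away outlier
print(kde.score_samples(queries))            # log P(Y); the outlier scores much lower
```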
comment in response to post
GANs (Generative Adversarial Networks) model P(Y) where Y is often a large object such as an image. They are generative in both senses of the word, but of course conditional GANs have been more useful, and they model P(Y|X). 5/
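For concreteness, a toy unconditional GAN loop in PyTorch (entirely my sketch; the layer sizes and 1-D "data" are arbitrary):

```python
import torch
import torch.nn as nn

# G maps noise z -> samples y (so it can sample from P(Y));
# D scores real vs. generated samples.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(256, 1) * 0.5 + 2.0       # "data" drawn from N(2, 0.5)
for step in range(200):
    fake = G(torch.randn(256, 8))
    # Discriminator: push real -> 1, fake -> 0
    d_loss = bce(D(real), torch.ones(256, 1)) + bce(D(fake.detach()), torch.zeros(256, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool D into scoring fakes as real
    g_loss = bce(D(fake), torch.ones(256, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(5, 8)).detach())         # draw new samples from the learned P(Y)
```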
comment in response to post
Another sense of generative is that these systems "generate" fairly complex outputs (a document, an image, etc.). Hence, they may model P(Y|X), but X (the prompt) is much smaller than Y (the output). 4/
comment in response to post
Today's generative models (LLMs, image generators, etc.) are primarily trained conditionally. E.g., the LLM is trained to predict P(y_(t+1) | y_1, ..., y_t). And diffusion models are trained similarly. So according to the old usage, they are not generative. 3/
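A minimal PyTorch sketch of that conditional training objective (the toy model here is mine, not anyone's actual LLM):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, d_model = 100, 32
# Stand-in "language model": embedding + linear head. A real LLM puts a
# transformer in between, but the training loss is the same.
embed = nn.Embedding(vocab_size, d_model)
head = nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (4, 16))   # batch of token sequences
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict token t+1 from tokens 1..t

logits = head(embed(inputs))                     # (batch, seq-1, vocab)
# Cross-entropy against the shifted targets = maximizing P(y_(t+1) | y_1, ..., y_t)
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
print(loss.item())
```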
comment in response to post
They concluded that discriminative models usually achieve lower asymptotic error, while generative models approach their (higher) asymptotic error faster (i.e., with less data). 2/
comment in response to post
Yes!! Originally, generative ML modeled a joint distribution, e.g., P(X,Y), whereas discriminative ML modeled a conditional distribution, e.g., P(Y|X). A famous paper by Ng and Jordan (papers.nips.cc/paper_files/...) compared naive Bayes (generative) to logistic regression (discriminative). 1/
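A quick sketch in the spirit of the Ng & Jordan comparison (the synthetic data, GaussianNB variant, and training-set sizes are my choices, not the paper's setup):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for n in (20, 100, 1000):  # grow the training set and compare test accuracy
    nb = GaussianNB().fit(X_tr[:n], y_tr[:n])                       # generative: models P(X,Y)
    lr = LogisticRegression(max_iter=1000).fit(X_tr[:n], y_tr[:n])  # discriminative: models P(Y|X)
    print(n, round(nb.score(X_te, y_te), 3), round(lr.score(X_te, y_te), 3))
```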