I haven’t polished my thoughts for part two of this projected three-part series, but I do think it’s important that our specialist objections are registered alongside the fundamental ones.
I can recognize that ‘AI’ is a misleading term here: clearly some uses are more specialized and compelling/ethical, but there is a huge wave of really unthoughtful uses of the widely available chatbots by researchers and students. ‘Oh it’s a research assistant.’ ‘Oh it can edit my writing.’ 👎👎👎
I like your idea of advocating for openness about the environmental cost. I might think about how often and in what ways I used these tools if the consequences of doing so were a bit more concrete. I think that critique has a longer shelf life than a lot of the discussions about accuracy (which changes).
What concerns me a lot about the accuracy discussion is that knowing whether or not these tools are accurate is an expert skill. I don’t think it is *better* if their accuracy increases to 90%, because most people using them don’t know which 10% is wrong. In fact, I think that’s worse.
I am not someone who thinks AI, even currently, has zero acceptable use cases. I do think the concerted push to have every monkey at every typewriter feeding inputs to these models and simply making use of them is emphatically not for our benefit. They want ubiquity for a reason.