I manage engineering teams at a heavily regulated financial institution. We are moving deliberately slowly on introducing AI, because failing to keep consumers’ financial information safe and secure carries serious consequences.
One reason is that we’re subject to Dodd-Frank, which makes unfair, deceptive, or abusive acts or practices (UDAAP) illegal. Everything we’ve heard about GenAI hallucinations alone sounds tailor-made to put a financial institution in violation of that.
There are other fields where the hallucination tendencies of GenAI have already proven to be a problem, such as legal filings. Imagine what happens when Intuit or H&R Block uses GenAI and it hallucinates you into paying more or less tax than you actually owe.
Our pace of hiring CS college graduates is *increasing*. Limiting AI usage to low-stakes, low-risk activities means we have plenty of work remaining for software engineers.