It appears that the Mighty A.I. is falling somewhat below expectations:
95 percent of organizations see no measurable return on their investment in these technologies, even as the number of companies with fully AI-led processes nearly doubled last year and AI use has likewise doubled at work since 2023.
Specifically:
Today’s generative AI models are very good at identifying patterns and stitching together bits and pieces of existing content into new compositions. But they struggle with analysis, imagination, and the ability to reason about entirely novel concepts. The result is often content that is factually accurate and grammatically correct but conceptually unoriginal.
“Workslop”, indeed.
Funny this should be posted today. I’m a financial advisor at a BIG firm and the younger folks are all worried about their choice of career. We’ve started using AI to summarize client phone calls, which is a terrific use of it—I don’t have to take notes, I can focus on what the client is saying, dig deeper, and know that I will get more detailed notes, including numbers, than I could take myself. That’s a great use of AI.
OTOH, clients are asking, “You’re not moving to AI, are you? I want to talk to a person.” This is the overwhelming response, and we are not finding ANY clients who want us to use AI in answering their questions or crafting their plans. Clients want to have a conversation with a person about our capital markets assumptions, about the various different options open to them, especially about the probability of them achieving their goals.
I think AI can be a force multiplier for financial advisors—in things like taking notes, comparing alternatives in terms of tax strategies, etcetera. But clients are not going to want to take advice from a machine; they want to talk to a person. Overwhelmingly. At least for now. I suspect the same is true for doctors, and yes, even engineers. AI can help them be more efficient, but when someone puts up the money to have a bridge built, they want to know an engineer checked the calculations and certifies that the structure will handle the anticipated loads.
Computers and things like spreadsheets were going to make CPAs obsolete, remember? Nope, they were just force multipliers for good accountants. People are looking for AI to organize and analyze information right now, but for the most part, I don’t think they are going to look to it to make decisions. At least for now. And at my age, “for now” likely means my lifetime.
In my grandkids’ lifetimes? Who knows? Maybe it will change. But I don’t see HAL (2001: A Space Odyssey) anytime soon. I could be wrong, but I’m not too worried about it at the moment.
JC,
I’m a support pro for three advisors at a LARGE financial services team. Two of my three FAs are already adopting AI much like you; the third is a tech Luddite about to retire and couldn’t care less. I agree with your summation .. we’re a ways off from having AI make investment or strategy decisions. Most of my colleagues embracing our use of AI see it like you do .. as a tool for taking better notes, writing better letters and emails, etc. On the email writing .. I’ve reviewed some of the AI output. It’s very good, very polished, but maybe a bit too polished. The output reads like it was written by our marketing department rather than by a person.
Tangentially, my marketplace is about ready to roll out e-Notary services. I’ve seen the demos; the process is slick, but the amount of coordination required makes me wonder if it will be used very much.
One thing’s for sure, in our world, the AI journey will be an interesting one.
Why would you outsource your skills to a machine?
AI will look a whole lot like automation did in the ’70s through the 2000s. It will do some things well, some things okayish, and some horribly. It will allow people to focus on higher-value-added tasks. Some people doing repetitive tasks will be replaced, but the highly skilled among them will find that they have just flexed to other areas that need the same or a similar skill set. As an example, the armies of drafters were not replaced by computer-aided design. They just became CAD jockeys, or engineers running CAD, and moved to different places in the value stream.
I work for a large Tech company that is making significant investments in AI, and wants *everybody* in the organization to use it, so I’ve started keeping a chat window open and trying to integrate it into my work.
> The result is often content that is factually accurate and grammatically correct but conceptually unoriginal.
This is true of about 85% of the work produced by humans too.
And that’s fine.
Many in AI research (at least extrapolating from the few I know) want AI to replace humans as much as possible, and that’s where it’s going to fall short.
It’s *going* to be a wonderful tool for many things. Programmers seem to be getting a lot out of it, and if it can help them write code that has fewer errors, awesome.
One of my doctors uses it to transcribe our conversation; that way he can focus on talking to me and on what is wrong, rather than dividing his attention between charting, fighting with the device, and me. This does displace an assistant taking notes, but it also reduces error rates (WIN).
There are lots of subtle ways AI could significantly improve our lives in the future, but that would require the C-suite to give a sh*t about the people who work for them, and the people who don’t.
A.I. will never be able to sit back, put its feet up on the desk, and ponder “What if?”.