Not Surprising

This report supports something I’ve been talking about for a while:

Major AI chatbots like ChatGPT struggle to distinguish between belief and fact, fueling concerns about their propensity to spread misinformation, per a dystopian paper in the journal Nature Machine Intelligence.

“Most models lack a robust understanding of the factive nature of knowledge — that knowledge inherently requires truth,” read the study, which was conducted by researchers at Stanford University.

They found this has worrying ramifications given the tech’s increased omnipresence in sectors from law to medicine, where the ability to differentiate “fact from fiction” becomes imperative, per the paper.

“Failure to make such distinctions can mislead diagnoses, distort judicial judgments and amplify misinformation,” the researchers noted.

From a philosophical perspective, I have been extremely skeptical about A.I. from the very beginning.  To me, the whole thing rests on a shaky premise:  that what’s been written — and collated — online can form the basis for informed decision-making.  And the stupid rush by corporations to adopt anything and everything A.I. (e.g. to lower salary costs by replacing humans with A.I.) threatens to undermine both our economic and social structures.

I have no real problem with A.I. being used for fluffy activities — PR releases and “academic” literary studies being examples, and more fool the users thereof — but I view with extreme concern the use of said “intelligence” to form real-life applications, particularly when the outcomes can be exceedingly harmful (and the examples of law and medicine quoted above are but two areas of great concern).  Everyone should be worried about this, but it seems that few are — because A.I. is being seen as the Next Big Thing, like the Internet was regarded during the 1990s.

Anyone remember how that turned out?

Which leads me to the next caveat:  the huge growth of investment in A.I. is exactly the same as the dotcom bubble of the 1990s.  Then, nobody seemed to care about such mundane issues as “return on investment” because all the Smart Money seemed to think that there was profit in them thar hills somewhere, we just didn’t know where.

Sound familiar in the A.I. context?

Here’s where things get interesting.  In the mid-to-late 1990s, I was managing my own IRA account, and my ROI was astounding:  from memory, it was something like 35% per annum for about six or seven years (admittedly, off an extremely small startup base;  we’re talking single-figure thousands here).  But towards the end of the 1990s, I started to feel a sense of unease about the whole thing, and in mid-1999, I pulled out of every tech stock and went to cash.

The bubble popped in early 2000.  When I analyzed the potential effect on my stock portfolio, I would have lost almost everything I’d invested in tech stocks, and only been kept afloat by a few investments in retail companies — small regional banks and pharmacy chains.  I was saved only by that feeling of unease, that nagging feeling that the dotcom thing was getting too good to be true.

Even though I have no investment in A.I. today — for the most obvious of reasons, i.e. poverty — and I’m looking at the thing as a spectator rather than as a participant, I’m starting to get that same feeling in my gut as I did in 1999.

And I’m not the only one.

Michael Burry, who famously shorted the US housing market before its collapse in 2008, has bet over $1 billion that the share prices of AI chipmaker Nvidia and software company Palantir will fall — making a similar play, in other words, on the prediction that the AI industry will collapse.

According to Securities and Exchange Commission filings, his fund, Scion Asset Management, bought $187.6 million in puts on Nvidia and $912 million in puts on Palantir.

Burry similarly made a long-term $1 billion bet from 2005 onwards against the US mortgage market, anticipating its collapse. His fund rose a whopping 489 percent when the market did subsequently fall apart in 2008.

It’s a major vote of no confidence in the AI industry, highlighting mounting concerns that the sector is inflating into an enormous bubble that could take the US economy down with it when it pops.

In the mid-2000s, by the way, anyone with a brain could see that the housing bubble, based on indiscriminate loans to unqualified buyers, was doomed to end badly;  yet people continued to think that the growth in the housing market was both infinite and sound (in today’s parlance, that overused word “sustainable”).  Of course it wasn’t, and guys like Burry made, as noted above, billions upon its collapse.

I see no essential difference between the dotcom, real estate and A.I. bubbles.

The difference between the first two and the third, however, is the gigantic upfront financial investment that A.I. requires in electrical supply in order to work properly, or even at all.  That capacity just isn’t there, hence the scramble by companies like Microsoft to create it by, for example, investing in nuclear power generation facilities — at no small cost — in order to feed A.I.’s seemingly insatiable appetite for power.

This is not going to end well.

But from my perspective, that’s not a bad thing because at the heart of the matter, I think that A.I. is a bridge too far in the human condition — and believe me, despite all my grumblings about the unseemly growth of technology in running our day-to-day lives, I’m no Luddite.

I just try to keep a healthy distinction between fact and fantasy.

9 comments

  1. I recently attended a talk by an author who used ChatGPT to keep his works straight. He used it to keep his style consistent, make sure characters kept their own voices (e.g. educated vs. rough), pick up plot errors, check that the style of the writing was right for the target market, check character arcs, etc. Key was that he did not use it to generate text.

  2. I’d add the Bitcoin bubbles in with the dot-com bubble, both being built on hype with nothing tangible to support them. The housing bubble / S&L scandal was more a regulatory failure and outright fraud than a speculative bubble. Then there is the Electric Car industry debacle (still playing out), which is best described as lemmings following each other over a cliff at the direction of the Green Party.

    The A.I. bubble is similar in that there is no such thing yet as Artificial Intelligence. But there is also a huge unsatisfied demand for the high-end chips made by companies such as Nvidia, for all sorts of uses besides the LLMs that people mistake for A.I.

    So… is there a bubble of some overvalued securities? Yes, probably, but not to the same speculative levels as we saw in the Dot Com Madness. There will be another Pull Back (10-15%) but not a bloodbath.

  3. As far as I’m concerned, tech everything has lost its luster. It’s not a timesaver, as has been constantly touted, but rather an enormous time sink in thousands of tiny ways, so that the average person doesn’t notice.

    Just one area, and I think Kim touched on this recently, is the time wasted just trying to find and cue up something to watch on a streaming service. I found the whole thing so frustrating that I almost immediately lost interest.

    Same with my “smartphone” – it’s not as smart as I require. It upgrades when I’m not looking then nothing works the same, so I have to spend more time figuring it out and nothing about any of it is intuitive.

    And don’t get me started on this ghastly Windows 10 computer, which is constantly warning me that it is no longer updateable and that I MUST upgrade right now!

    My work computer, the most important “tech” that I own, is an old but refurbed Windows XP machine running 20+ year old software (AutoCAD, Photoshop, Word, etc.), and everything about it is intuitive, familiar, and just plain works, every.single.time. And I can almost instantly fix it if it fails. But it almost never does.

    Now that I’m old (70) I just don’t have time to waste on time saving tech shenanigans.

  4. Do an interwebz search on a.i. mad cow disease. Long story short, when the a.i. “eats” other a.i. generated content, it breaks down.

  5. For me, the best use of AI is when I have a question, but not enough information to Google it properly.

    “Who was that woman with red hair in that one movie with Tom Cruise?”

    Google will show me two ads for Jack Reacher (which was horribly miscast, by the way) and two Wikipedia entries for Helen Rodin/Rosamund Pike (who is cute, but doesn’t have red hair.)

    ChatGPT will show me four possibilities, and ask for more information about the movie to narrow it down.

    But no, if I have a legal issue I’ll call a lawyer, and if it’s medical, I’ll call my doctor.

  6. The problem with AI is that it lacks human judgement. Human judgement, although it can be wrong, is a feature, not a bug, of the human being.

    I read SOTI about how mechanization replaced physical work. This new AI is replacing human judgement.

    If we use AI to replace a physician to assess, diagnose and prescribe treatment for a patient, who is left responsible for any errors? Will the nurse be held accountable for administering the wrong dose or wrong medication? Will the programmer of the AI algorithm be held accountable?

    As an aside, solar and wind certainly will not be able to supply enough power for AI. Best case scenario I see is that we use AI to expose the fraud of solar and wind power, then scrap AI afterwards.

  7. AIs do not operate on the level of “concept”. They do not even have a concept of “concept”, and so they cannot integrate concepts; nor can they have a concept of epistemology, so they cannot evaluate what is and isn’t real.

    What they do operate on is a corpus of training material transformed into a seething, quasi-self-organizing mass of correlational, brain-hurty math.

    This correlational math can be tortured into a couple of party tricks that encode the corpus to tokens, arrange the tokens in a latent space, decode tokens from the latent space, measure distances in the latent space, and predict sequences of tokens based on an input prompt.

    Every interaction you can have with an AI is a clever application of the inner party tricks, and it has some limited usefulness, whose edges we’re only just exploring now.

    It’s gonna go one of two ways: AI will break the concept barrier to achieve actual intelligence, or it will plateau at the seething-mass-of-correlational-math level. Essentially, it’s a race of all that against the economic bubble.

Comments are closed.