Not Surprising

This report supports something I’ve been talking about for a while:

Major AI chatbots like ChatGPT struggle to distinguish between belief and fact, fueling concerns about their propensity to spread misinformation, per a dystopian paper in the journal Nature Machine Intelligence.

“Most models lack a robust understanding of the factive nature of knowledge — that knowledge inherently requires truth,” read the study, which was conducted by researchers at Stanford University.

They found this has worrying ramifications given the tech’s increased omnipresence in sectors from law to medicine, where the ability to differentiate “fact from fiction, becomes imperative,” per the paper.

“Failure to make such distinctions can mislead diagnoses, distort judicial judgments and amplify misinformation,” the researchers noted.

From a philosophical perspective, I have been extremely skeptical about A.I. from the very beginning.  To me, the whole thing rests on a shaky premise:  that what’s been written — and collated — online can form the basis for informed decision-making, and the stupid rush by corporations to adopt anything and everything A.I. (e.g. to lower salary costs by replacing humans with A.I.) threatens to undermine both our economic and social structures.

I have no real problem with A.I. being used for fluffy activities — PR releases and “academic” literary studies being examples, and more fool the users thereof — but I view with extreme concern the use of said “intelligence” to form real-life applications, particularly when the outcomes can be exceedingly harmful (and the examples of law and medicine quoted above are but two areas of great concern).  Everyone should be worried about this, but it seems that few are — because A.I. is being seen as the Next Big Thing, like the Internet was regarded during the 1990s.

Anyone remember how that turned out?

Which leads me to the next caveat:  the huge growth of investment in A.I. is exactly the same as the dotcom bubble of the 1990s.  Then, nobody seemed to care about such mundane issues as “return on investment” because all the Smart Money seemed to think that there was profit in them thar hills somewhere, we just didn’t know where.

Sound familiar in the A.I. context?

Here’s where things get interesting.  In the mid-to-late 1990s, I was managing my own IRA account, and my ROI was astounding:  from memory, it was something like 35% per annum for about six or seven years (admittedly, off an extremely small startup base;  we’re talking single-figure thousands here).  But towards the end of the 1990s, I started to feel a sense of unease about the whole thing, and in mid-1999, I pulled out of every tech stock and went to cash.

The bubble popped in early 2000.  When I analyzed the potential effect on my stock portfolio, I would have lost almost everything I’d invested in tech stocks, and only been kept afloat by a few investments in retail companies — small regional banks and pharmacy chains.  I was saved only by that feeling of unease, that nagging feeling that the dotcom thing was getting too good to be true.

Even though I have no investment in A.I. today — for the most obvious of reasons, i.e. poverty — and I’m looking at the thing as a spectator rather than as a participant, I’m starting to get that same feeling in my gut as I did in 1999.

And I’m not the only one.

Michael Burry, who famously shorted the US housing market before its collapse in 2008, has bet over $1 billion that the share prices of AI chipmaker Nvidia and software company Palantir will fall — making a similar play, in other words, on the prediction that the AI industry will collapse.

According to Securities and Exchange Commission filings, his fund, Scion Asset Management, bought $187.6 million in puts on Nvidia and $912 million in puts on Palantir.

Burry similarly made a long-term $1 billion bet from 2005 onwards against the US mortgage market, anticipating its collapse. His fund rose a whopping 489 percent when the market did subsequently fall apart in 2008.

It’s a major vote of no confidence in the AI industry, highlighting growing concerns that the sector is inflating into an enormous bubble that could take the US economy down with it in a crash.

In the late 2000s, by the way, anyone with a brain could see that the housing bubble, based on indiscriminate loans to unqualified buyers, was doomed to end badly;  yet people continued to think that the growth in the housing market was both infinite and sound (in today’s parlance, that overused word “sustainable”).  Of course it wasn’t, and guys like Burry made, as noted above, a fortune upon its collapse.

I see no essential difference between the dotcom, real estate and A.I. bubbles.

One difference between the first two and the third, however, is the gigantic upfront financial investment that A.I. requires in electrical supply in order for the thing to work properly, or even at all.  That capacity just isn’t there, hence the scramble by companies like Microsoft to create the capacity by, for example, investing in nuclear power generation facilities — at no small cost — in order to feed A.I.’s seemingly insatiable appetite for power.

This is not going to end well.

But from my perspective, that’s not a bad thing because at the heart of the matter, I think that A.I. is a bridge too far in the human condition — and believe me, despite all my grumblings about the unseemly growth of technology in running our day-to-day lives, I’m no Luddite.

I just try to keep a healthy distinction between fact and fantasy.

It’s Not Just Beds

While I was tempted to headline this post with “Smart Beds, Stupid People”, there’s a much bigger issue at stake here.

You see, as much as we might laugh at the idiocy of people who would depend on something as fragile as the Internet to operate their frigging beds (FFS), just stop and think about how much else is dependent on SkyNet:  communications, banking, traffic systems, logistics, security systems, even mapping services and cars (don’t get me started)… the list goes on and on, ad nauseam.

And yet people like me, who rail against the vulnerability of this encroachment on basic daily functions, are patronized (“There there, Gramps, just take your pill and go to bed”) and called Luddites.

What about this much-lauded artificial intelligence thing?

An artificial intelligence system (AI) apparently mistook a high school student’s bag of Doritos for a firearm and called local police to tell them the pupil was armed.

Taki Allen was sitting with friends on Monday night outside Kenwood high school in Baltimore and eating a snack when police officers with guns approached him.

“At first, I didn’t know where they were going until they started walking toward me with guns, talking about, ‘Get on the ground,’ and I was like, ‘What?’” Allen told the WBAL-TV 11 News television station.

Allen said they made him get on his knees, handcuffed and searched him – finding nothing. They then showed him a copy of the picture that had triggered the alert.

“I was just holding a Doritos bag – it was two hands and one finger out, and they said it looked like a gun,” Allen said.

Yeah, it’s all funny and stuff — until one day we discover that A.I.-generated police ROE training allows for lethal shooting at suspects “to eliminate the threat”.  Oh wait… you think robot cops are just a figment of Hollywood imagination?  Given that cops are facing staff shortages (#ThankYouBLM) and falling recruitment numbers (#ThankYouWokeCityGovernments), does anyone care to bet against me about this scenario?

Here’s the thing.  Try to write a story that has an unbelievable premise about the baleful effects of technology on a distant-future society, and I’ll show you:  tomorrow.  Bloody hell, the most prophetic form of hostile future technology that you can imagine is probably being beta-tested somewhere as we speak.

Even Blade Runner is starting to look like a near-future dystopia rather than some far-off eventuality.

Having your bed controlled by SkyNet is the least of our problems.

Good Question

From the comments to yesterday’s post about A.I., this from Reader askeptic:

“I seem to recall being taught oh-so-long-ago, that every advance in technology has brought an expansion of employment, contrary to the accepted knowledge as machine replaced man. Why would not the use of A-I be an exception to that?”

Simple answer would be that machines have always worked perfectly (after improvement) in doing repetitive tasks — assembly-line activity, mathematical calculations, full-automatic shooting and so on.

What humans do is think:  about building robots to work on assembly lines, about the calculations to be performed, and about the need for massed fire.  Thinking supplies the answers for all three activities, in other words.

What seems to be getting people alarmed — and I’m one of them — is that A.I. seems to be aimed at either duplicating or indeed creating those thought processes, replacing humans in the one dimension that has created this world we live in.  (My special reservation, shared by many I suspect, is that the engine of this replacement seems to be relying on the wisdom of crowds — i.e. garnering information from previously-created content, much as philosophers have relied upon Aristotle et al. to provide the foundations of their further philosophies.)

The problem with all this is that just as Aristotle’s thoughts have sometimes proved erroneous in dealing with specific scenarios, the “wisdom of crowds” — in this particular set of circumstances — can be reshaped and reformed by the application of millions upon millions of bots (say) which can alter the terms of the discussion by making outlying or minority positions seem like the majority, in the same way that a dishonest poll (such as the 2020 U.S. election) can be corrupted into portraying a preponderance that never existed.

It’s easy to refute one of Aristotle’s scientific observations — e.g. that heavier objects fall faster than light ones — but it’s far less easy to refute the inadequacy of facial masks to prevent the spread of airborne disease when the preponderance of scientific “evidence” allows people to say that if you refuse to wear a mask you’re a potential mass murderer.  We all knew intuitively that the tiny gaps in masks’ weaving were still huge compared to the microscopic size of plague viruses, but that intuition was crushed by the weight of public pressure.

And if A.I. only looked at the part of the data that said that masks work and never looked at the evidence that they didn’t, the output would always be:  wear a mask, peasant.  And yes, that is indeed happening.

I know the above is somewhat simplistic, but my point is that when you look at how A.I. is being used (to “cheat” creative activity, for example, in writing a college essay) and at the potential for A.I. to learn from its mistakes (even if driven by erroneous input), we are justified in being very apprehensive about it.

Which brings me finally to the answer to Reader askeptic’s question:  the premise is sound, in that technology has in the past always led to an expansion of employment.  But if we acknowledge that the prime function of a human being is to think, then what price humans if that function is replaced?

No Surprises There

It appears that the Mighty A.I. is falling somewhat below expectations:

95 percent of organizations see no measurable return on their investment in these technologies, even as the number of companies with fully AI-led processes nearly doubled last year and AI use has likewise doubled at work since 2023.

Specifically:

Today’s generative AI models are very good at identifying patterns and stitching together bits and pieces of existing content into new compositions. But they struggle with analysis, imagination, and the ability to reason about entirely novel concepts. The result is often content that is factually accurate and grammatically correct but conceptually unoriginal.

“Workslop”, indeed.

Techno-Woes, Part 17

One would think that the Gods Of Technology, having bricked my new laptop (bought in January 2025) and caused me to have to buy a new one, would be done fucking with me.

One would be wrong.

Last week, I picked up my phone, to feel and see this:

Yup.  The old case, she splody like an IRA bomb or Al-Qa’eda IED.

“Oh,” said the T-Mobile tech person when I brought it in to the store, “that’s the battery.  They do that.  How long have you owned the phone?  That long?  Wow, and the battery only went phut now?  You’ve been lucky.  Anyway, you’re going to need a new one.  No, not just a new battery — a new phone, because they stopped making this model about four years ago.”

Fortunately, I long ago made the command decision to pay a little extra on my monthly bill for a replacement phone deal, should Bad Things Happen.

So I picked up the New Phone yesterday.  Why only yesterday?  Because these phone stores no longer carry any actual stock, you see — unless you’re a New Customer, in which case they’ll whip one out and empty your bank account in a flash.  But a replacement phone for existing customers?  Oh no, we’ll have to order that one, and it’ll take a week or so, sorry about that.  At least I got an upgraded model, for no extra cost.

Blessedly, the transfer of all my stuff from Old & Broken to New & Shiny only took about 5 minutes, mostly because I didn’t bother transferring any photos (having already backed them up).

I guess that 5+ years usage out of one of these “smart” phones isn’t that bad — although considering that I barely use the fucking thing (compared to everyone else in the universe), I would have thought it would last much longer.

But back to my store visit… I wanted to have a clear screen protector installed.  Sorry, we don’t keep those in stock — but we can order one for you.  One of those rubber-like protective cases?  Nope, sorry, but if we order those for you, they’ll get here in a week or so.

For fuck’s sake:  what happened to the concept of one-stop shopping and customer service?

(I should add that the staff at said store were helpful and knowledgeable in the extreme — even for Southern Nice People, they were exceptional.  They’re not to be blamed for policy decisions such as what to keep in stock.)

Anyway, I have the new thing, and it seems to be working okay.  Let’s just hope it lasts longer than that godawful ASUS piece of shit laptop.

And the next time I go to a mall (2026, if my existing shopping trend continues), I’ll just swing by one of those little kiosks and get the screen protector and safety casing there.  Life is too short to worry about shit like that.

ASUS Delenda Est

Quick recap of my laptop woes:

  • Several weeks back the thing bricked on me.  One minute typing, the next thing black screen, totally dead and unresponsive.  All efforts to revive are fruitless, including long chats with online support staff.  Off to Best Buy (an ASUS repair facility).
  • The Geek Squad informs me that they don’t do any warranty repairs on ASUS machines that they themselves have not sold.  Nice.  So I send the thing to ASUS, imagining fondly that since I only purchased this POS in January of this year, that it is still under warranty.
  • It isn’t.  [50,000 very bad words redacted.]  So I tell ASUS to return the brick to me, because I’m not comfortable having repairs done at a remote location (Indiana, incidentally) when, if I’m going to have to pay for the fucking repairs, I’d prefer to have the job done locally.  So off I go to Micro Center (Dallas).  This was yesterday (Monday) morning.
  • Micro Center gets on it right away — I mean, I got a sitrep text message only an hour after I got back home.  That’s about the only good news.
  • Apparently, the motherfuckingboard is kaput.  On a brand-new computer.  Cost to replace:  $380 (part) + $150 (labor).  For a machine that cost around $500 new.  But:
  • None of Micro’s vendors have the board in stock, and ASUS themselves are looking at a 4-17 week resupply time.

My options seem to be:

  1. Grit my teeth and have the repair done, continuing to stumble along for the next 2-4 months on my old HP laptop with its occasional freezing-up, malfunctioning keys and broken chassis.
  2. Buy a new replacement machine* from Micro Center — average cost for a similar-to-my-ASUS machine is about $600-$700, which I don’t have.
  3. Try to reinstall my whole fucking life onto some other (secondhand) laptop, a couple of which you generous souls have sent my way but which I cannot get to function.  (I have the best Readers on the Internet.)
  4. Migrate to New Wife’s desktop PC, which is tucked away in a dark corner of our tiny apartment, and has NONE of the features of any laptop, and by that I mean a decent keyboard, sufficient power and storage, Win10 (okay, I can live with that), all while I’d have to sit on an ancient office chair which will cause me to have back problems, guaranteed.

To say that I am angry does not begin to describe my mood right now.

And oh, by the way:  if anyone out there is thinking of buying an ASUS machine in the near future:  DON’T.


*New Wife has okayed this option, but it still sticks in my craw.