If You Don’t Use It…

…of course you’re going to lose it.  This post on Musk-X triggered a train of thought in me:

Just had a fascinating lunch with a 22-year-old Stanford grad. Smart kid. Perfect resume. Something felt off though. He kept pausing mid-sentence, searching for words. Not complex words – basic ones. Like his brain was buffering. Finally asked if he was okay. His response floored me.

“Sometimes I forget words now. I’m so used to having ChatGPT complete my thoughts that when it’s not there, my brain feels… slower.”

He’d been using AI for everything. Writing, thinking, communication. It had become his external brain. And now his internal one was getting weaker.

This concerns me, because it’s been an ongoing topic of conversation between the Son&Heir (a devout apostle of A.I.) and me (a very skeptical onlooker of said thing).

I have several problems with A.I., simply because I’m unsure of the value of its underlying assumption — its foundation, if you will — which holds that the accumulated knowledge on the Internet is solid:  that even if there were some inaccuracies, they would be overcome by a preponderance of correct theses.  If that’s the case, then all well and good.  But I am extremely leery of those “correct” theses:  who decides what is truth, or nonsense, or (worst of all) highly plausible nonsense which only a dedicated expert (in the truest sense of the word) would have the knowledge, time and inclination to correct?  The concept of A.I. seems to be a rather uncritical endorsement of “the wisdom of crowds” (i.e. received wisdom).

Well, pardon me if I don’t agree with that.

But returning to the argument at hand, Greg Isenberg uses the example of the calculator and its dolorous effect on mental arithmetic:

Remember how teachers said we needed to learn math because “you won’t always have a calculator”? They were wrong about that. But maybe they were right about something deeper. We’re running the first large-scale experiment on human cognition. What happens when an entire generation outsources their thinking?

And here I agree, wholeheartedly.  It’s bad enough to think that at some point, certain (and perhaps important) underpinnings of A.I. may turn out to be fallacious (whether unintended or malicious — another point to be considered) and large swathes of the A.I. inverted pyramids’ points may have been built, so to speak, on sand.

Ask yourself this:  had A.I. existed before the realities of astrophysics had been learned, we would have believed, uncritically and unshakably, that the Earth was at the center of the universe.  Well, we did.  And we were absolutely and utterly wrong.  After astrophysics came onto the scene, think how long it would have taken for all that A.I. to be overturned and corrected — as long as it actually took in the post-medieval era.  Most people at that time couldn’t be bothered to think about astrophysics and just went on with their lives, untroubled.

What’s worse, though, is that at some point in the future the human intellect, having become flabby and lazy through its dependence on A.I., may not have the basic capacity to correct itself, to go back to first principles because quite frankly, those principles would have been lost and our capacity to recreate them likewise.

Like I said, I’m sure of only two things in this discussion:  the first is the title of this post, and the second is my distrust of hearsay (my definition of A.I.).

I would be delighted to be disabused of my overall position, but I have to say it’s going to be a difficult job because I’m highly skeptical of this new wonder of science, especially as it makes our life so much easier and more convenient:

He’d been using AI for everything. Writing, thinking, communication. It had become his external brain.

It’s like losing the muscle capacity to walk, and worse still the instinctive knowledge of how to walk, simply because one has come to depend completely on an external machine to carry out that function of locomotion.


P.S.  And I don’t even want to talk about this bullshit.

16 comments

  1. Weren’t Plato and/or Socrates against writing, on the grounds that it would destroy the exercise of memory, since one could just look things up?

    I guess about the only AI I use is spellcheck, and I stay away from all else.

  2. Sturgeon’s Law applies here, as it does everywhere else: 90% of Everything is Crap.

    People are lazy and selfish. Anything that lets them (us) indulge those traits will be embraced by the vast majority who don’t want to consider the consequences of such indulgence.

  3. Many people who rely on technology today have become dumb as a pile of rocks. Many age groups too, not just the young. After the SCAMdemic it became worse. There are people who can’t even hold any kind of decent in-person conversation, or even a phone conversation. People space out. Can’t focus. You tell someone something and two seconds later they say you never told them that. It’s frustrating.

    Watch the movie Idiocracy if you have not yet watched it. I think that movie is what our real world is becoming.

  4. That young man sounds very like my post-severe-concussion brain, only much worse, and he did it to himself, by choice.

    At least there’s hope that I’ll eventually get back to baseline. It sounds like it will take him a lot longer, and a lot more work, if ever to get back to what the 20th century considered normal functioning.

  5. The calculator example is the interesting one – when younger I could do mental math almost as fast, and sometimes faster, than another person could punch the numbers into a calculator. I did math all the time in my job and my mind stayed flexible. Now, 30+ years later, I rarely need to do mental math. I had something come up a few days ago and was horribly frustrated with myself being unable to do a reasonably simple math problem in my mind. We were driving long distance and I finally had to wait until a pit stop to type it into my phone. My “math” muscle had atrophied.

    At least I had the experience of knowing and using mental math, writing original essays, etc. Younger people who’ve never had to do that and rely fully on AI will never truly develop the mental capacity. Language is “logic” in a very real sense and the ability to put thoughts on paper that are understandable to others is a first principle logic exercise. It helps develop the mind.

    Just remember, it’s all downhill from here. Our civilization has peaked.

  6. Comedian — and astute social observer — Robin Williams did a quietly touching skit about a post-apocalyptic hermit living in a cave.
    His character messed with invasive clones by turning sideways to confuse their vision monitors…
    …then brought them into his cave to make jukeboxes out of them.
    .
    He closed the segment with [from memory, because the video eludes me]:
    * “Always stay a little bit crazy, they can’t tax it or regulate it, especially full goose bazongo, that’s the only place you can be free.”
    .
    Artificial Intelligence can probably get really good at artificial, but it cannot be real.
    As much as BOLSHEVIKS and other Hive drones wish, we live in the Real-World.
    As for me, I seem to exist on this particular physical plane for one singular purpose:
    * turn sideways and make a lot of jukeboxes.
    .
    It’s all right here… in my Manifesto.
    (carn-snargled auto-destruct keeps changing my perfectly rational “in my Manifesto” to its ridiculous “I’m my Manifesto”)

  7. My company is big into AI. AI everywhere. It’s a thneed.

    Our CEO is convinced it’ll make us live longer and get smarter (he actually said it would increase IQ).

    They did a dog and pony show on it at our convention, showing how to answer an RFP.

    After watching the thing, I turned to a coworker and said, “I dunno. Looks like it’ll produce a lot of dumbshit sales execs, if they don’t have to work to answer anything.”

      1. A google search supported by AI got me this gibberish on “thneed”.

        “AI Overview
        Learn more

        A thneed is a fictional, versatile garment from Dr. Seuss’s The Lorax that can be worn in many different ways:

        Description
        A thneed is a knitted object made from the leaves of the Truffula tree. It can be reshaped to suit different purposes, but its default form is similar to a sweater. The Once-ler describes it as “a fine something that all people need”.

        Meaning
        The thneed represents what people are told they want and need, and what they work for. However, the thneed comes at a cost, such as the world around us and our relationships with loved ones.

        In the story
        In the book, the Once-ler chops down a Truffula tree and uses its leaves to create a thneed. A creature called the Lorax emerges from the tree’s stump to express his disapproval of the Once-ler’s actions.

        In the 2012 film
        In the 2012 film adaptation of The Lorax, humans visit the Once-ler after realizing how incredible his thneed invention is.

        Thneed may also refer to:

        A useless product that is advertised as something everyone needs, but that no one actually needs

        A light to mid gain drive pedal from Supercool Pedals

        A song from the 2012 film The Lorax called “Everybody Needs a Thneed”

        Generative AI is experimental.”

  8. 100% agree with your assessment above, and I am in a similar industry to “himself” on this.

    AI is now the buzzword around, before it was “Cloud”. The two go hand in hand. One of the reasons to push everything to the cloud is to build bigger datasets for AI to draw from and get more powerful, harder to do that if everything is regionalized, and isolated.

    AI is only as independent as the people programming it, so it will be a Silicon Valley mindset writ large.

    Finally everyone says AI will make it easier. Which brings my response of “Why in the hell is that a good thing?” Being able to do difficult things, solve difficult problems, accumulate experience and expertise, all gives my life purpose. Having nothing to accomplish, no mountain to climb, nothing other than pills to make me feel good, sounds like hell to me.

    I don’t understand people who just want to sit around and do nothing but shove lego bricks up their ass, and vote for team Jackass, hoping for salvation.

  9. “What’s worse, though, is that at some point in the future the human intellect, having become flabby and lazy through its dependence on A.I., may not have the basic capacity to correct itself, ….”

    Brawndo’s got what plants crave!

    1. Basically, then, AI is tribal lore. It can only observe the present and record the past: intuition, inspiration, creativity – all are beyond its horizon. Divine – programmer – intervention is still possible, but increasingly unlikely as the years roll by. Sounds familiar. Perhaps we should have a Constitution to limit its powers.
      Of course, for a sizeable part of the population, life within The Machine is pleasant.

  10. Hooboy, I could talk about this for some time.

    (Background: I’ve dabbled in AIs and their precursors for decades, and for the last couple of years, it’s been my primary professional focus)

    Kim, there are TWO distinct phenomena going on here. The more concerning, honestly, is the devolution and literal dis-integration of human cognition. AI, on the other hand, is an independent phenomenon which, as you say, threatens to occupy the increasingly vacant niche of the human cognitive ecosphere. It is not the cause of cognitive devolution, and if it hadn’t been discovered yet, something else would have taken its place.

    Regarding AI:
    In a short paragraph: appearance aside, AI absolutely does not operate at the level of concepts. It is a seething mass of self-organizing correlative math that can do a small number of party tricks, which can in turn be exploited in a surprisingly large number of ways. Without delving into it too much, everything it’s fed (“trained on”) is reduced to a number (“token”) and churned through various training-math exercises to produce a latent vector space. This space can then be used to determine how near or far each token is from the others and, given a series of tokens, to predict, within whatever likelihood thresholds you set, what the next token will be. Its creepiest trick: if you dig into the vector space about halfway between “apple” and “orange”, you’ll get something that looks like a “peach”, even if no peaches existed in the training set. As you intuit, the training-set quality has everything to do with the results. GIGO remains a ruling factor.
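
To make that “halfway between apple and orange” trick concrete, here is a toy sketch: made-up three-dimensional vectors stand in for real learned embeddings, and plain cosine similarity serves as the distance measure. Nothing here comes from an actual model; it only illustrates the nearest-token idea described above.

```python
import math

# Toy embedding table: made-up 3-d vectors standing in for learned token embeddings.
embeddings = {
    "apple":  (1.0, 0.2, 0.1),
    "orange": (0.2, 1.0, 0.1),
    "peach":  (0.6, 0.6, 0.1),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def nearest(vec, table):
    """Token whose embedding is most similar (by cosine) to vec."""
    return max(table, key=lambda tok: cosine(vec, table[tok]))

# Interpolate halfway between "apple" and "orange" in the toy space...
midpoint = tuple((x + y) / 2 for x, y in zip(embeddings["apple"], embeddings["orange"]))
# ...and the nearest token comes out as "peach".
print(nearest(midpoint, embeddings))  # -> peach
```

Real models do the same thing in hundreds or thousands of dimensions, with embeddings learned from the training data, which is exactly why training-set quality (GIGO) dominates the results.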

    Thing is, the latest OpenAI o3 model arguably qualifies as an Artificial General Intelligence (summary: roughly human levels of capability), *if* you ignore the time and economic constraints that are part of the ARC-AGI metric. As I said waaaay back in 2017, we are on track for the Singularity, and it is pants-shittingly close. Like it, hate it, fear it, question its sapience and accuracy… it’s here, and it works, for definitions of “works” that include “close enough for hand grenades”. It’s not a bell you can un-ring.

    The ugly question will be “who does AI serve?” Part of that answer will be informed by “who owns the GPUs it runs on?” The NVIDIA H100 is the basic unit of AI hardware, costing around $20k per. Big LLMs are cooked on clusters of these numbering in the thousands, for months, horking down the entire output of a nearby 800-megawatt power plant the whole time. Hardly the democratic digital revolution the personal computer ushered in, is it?
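
For scale, a bit of back-of-envelope arithmetic. The cluster size and run length below are hypothetical round numbers; only the roughly-$20k-per-H100 price and the 800-megawatt figure appear above.

```python
# Back-of-envelope cost sketch. Cluster size and run length are assumptions,
# not figures from any specific training run.
h100_unit_cost = 20_000   # dollars per GPU, the rough figure cited above
cluster_size = 10_000     # hypothetical cluster "numbering in the thousands"
training_days = 90        # hypothetical multi-month training run
power_draw_mw = 800       # megawatts, the plant-output figure cited above

hardware_cost = h100_unit_cost * cluster_size     # $200,000,000 in GPUs alone
energy_mwh = power_draw_mw * 24 * training_days   # 1,728,000 MWh over the run

print(f"GPUs alone: ${hardware_cost:,}")
print(f"Energy over the run: {energy_mwh:,} MWh")
```

Even with generous error bars on those assumptions, the entry price sits in the hundreds of millions of dollars, which is the point about who gets to own the answer.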

    Regarding Cognitive Decline:

    Human cognition and performance require education, training, and social reinforcement, all three of which have been entirely suborned in the last handful of decades for purposes of political exploitation. That is a far bigger problem. Lazy, undisciplined minds that don’t even understand that epistemology matters can be, to bastardize Voltaire, induced to commit atrocities because they believe absurdities. Without the basic ability to reason, arguments become irrelevant, no matter their virtues and flaws, and concepts of honor, integrity and character become dismissible, tenuous abstractions.
