From the comments to yesterday’s post about A.I., this from Reader askeptic:
“I seem to recall being taught oh-so-long-ago, that every advance in technology has brought an expansion of employment, contrary to the accepted knowledge as machine replaced man. Why would not the use of A-I be an exception to that?”
Simple answer would be that machines have always worked perfectly (after improvement) in doing repetitive tasks — assembly-line activity, mathematical calculations, full-automatic shooting and so on.
What humans do is think: about building robots to work on assembly lines, about which calculations to perform, and about the need for massed fire. In other words, humans supply the answers behind all three activities.
What seems to be getting people alarmed — and I’m one of them — is that A.I. seems to be aimed at either duplicating or indeed creating those thought processes, replacing humans in the one dimension that has created this world we live in. (My special reservation, shared by many I suspect, is that the engine of this replacement seems to be relying on the wisdom of crowds — i.e. garnering information from previously-created content, much as philosophers have relied upon Aristotle et al. to provide the foundations of their further philosophies.)
The problem with all this is that just as Aristotle’s thoughts have sometimes proved erroneous in dealing with specific scenarios, the “wisdom of crowds” — in this particular set of circumstances — can be reshaped and reformed by the application of millions upon millions of bots (say), which can alter the terms of the discussion by making outlying or minority positions seem like the majority, in the same way that a dishonest poll (such as the 2020 U.S. election) can portray a preponderance that never existed.
It’s easy to refute one of Plato’s scientific observations — e.g. that heavier objects fall faster than light ones — but it’s far less easy to refute the inadequacy of facial masks to prevent the spread of airborne disease when the preponderance of scientific “evidence” allows people to say that if you refuse to wear a mask you’re a potential mass murderer. We all knew intuitively that the tiny gaps in masks’ weaving were still huge compared to the microscopic size of plague viruses, but that intuition was crushed by the weight of public pressure.
And if A.I. only looked at the part of the data that said that masks work and never looked at the evidence that they didn’t, the output would always be: wear a mask, peasant. And yes, that is indeed happening.
I know the above is somewhat simplistic, but my point is that when you look at how A.I. is being used (to “cheat” creative activity, for example, in writing a college essay) and at the potential for A.I. to learn from its mistakes (even if driven by erroneous input), we are justified in being very apprehensive about it.
Which brings me finally to the answer to Reader askeptic’s question: the premise is sound, in that technology has in the past always led to an expansion of employment. But if we acknowledge that the prime function of a human being is to think, then what price humans if that function is replaced?
“But if we acknowledge that the prime function of a human being is to think, then what price humans if that function is replaced?”
I would contend that the majority of people don’t actually think. I’d be willing to place that level of non-thinkers at 80% or higher. They might think they think, they might consider themselves smart, they may actually be good at some type of work, but really they just react. They just go through the motions. They follow instructions, work by procedure, and assemble part A to part B by rote memory.
The auto mechanic at your dealership? He doesn’t “troubleshoot” the problem; he plugs in the computer and then replaces the part that the computer says is bad. That’s all. The one guy who programmed the computer they all use, well, he was a thinker. And his thinking now instructs thousands of mechanics who no longer have to think, just replace parts. Now that’s not entirely correct. The guy in the shop certainly might think, after performing the same repair dozens of times, that there’s a better way to do something. Fine and good, but still, 99% of his job is changing parts, not thinking.
The guy two office doors down from mine? He doesn’t think. He fills out monthly reports and click-clacks data all day. He stuffs that data into some statistical processor and presents the results to management periodically, saying things like “there’s a 3.5% market growth in bio-liners over the past 6 months” and other things that make it sound like he thinks. But really he doesn’t. He makes 6 figures and can’t rationally explain what he does for a living to a 5th grader.
Me? I used to think until I realized there’s not much money in it and the company doesn’t care, either way. Now I clickity-clack on my computer and look busy while counting days to my retirement. Any thinking I might do, I do on my own time for my own benefit. Thinking is highly overrated. A trained monkey could do my job. A nutless trained monkey. And I’m more than willing to step aside and let that little nutless bastard have it.
I think I need a drink.
I want AI that can clean my house, do laundry, and wash the dishes, so I’ll have time to write, paint, and make music. For some reason we seem to be making AI that writes, paints, and makes music. That’s exactly the opposite of what we want.
I’m still waiting for my Cherry 2000.
Me too.
As Robert Heinlein had his main character in “The Door into Summer” say about his robotics model, the “Hired Girl”: cleaning the house, doing laundry, and washing dishes take judgment. Modern Art, for only one example, is imagination without skill, and it shows.
Here’s the quote I was referencing: “Skill without imagination is craftsmanship and gives us many useful objects such as wickerwork picnic baskets. Imagination without skill gives us modern art.” That was said by Tom Stoppard, author of the play “Rosencrantz and Guildenstern Are Dead” and many others.
I cannot lay hands on my copy of “The Door into Summer” in anything approaching a reasonable time, even though I am reasonably sure it is within the sound of my voice. Not that that helps me now.
A.I. is somewhat misnamed. It’s not “intelligent.” It needs to be trained by someone; in other words, fed large amounts of relevant data about subjects, data that already exists. It can then sort through that data quickly and piece together parts that fit as an answer to whatever question is presented. In doing so it might appear to be making decisions, but it’s really just evaluating all the relevant information and selecting the most probable answer.
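A toy sketch of that “selecting the most probable answer” behavior. Everything here, including the probability table and its numbers, is invented for illustration; it is not how any real model is implemented:

```python
# Made-up table of candidate answers with probabilities that would,
# in a real system, come from the training data it was fed.
continuations = {
    "wear a mask": 0.72,       # heavily represented in the data
    "masks are useless": 0.05, # minority position, rarely surfaced
    "it depends": 0.23,
}

def most_probable(options):
    """Return whichever candidate answer has the highest probability."""
    return max(options, key=options.get)

print(most_probable(continuations))  # -> wear a mask
```

The point of the sketch: nothing here decides anything. Whatever dominated the input data dominates the output.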
That was HAL’s problem. It came to a logical conclusion based on the data it had. But someone forgot to tell it that the mission was NOT more important than the people on board.
What it can’t do is come up with a new way to do something. It can’t make the leap to ask why we are doing something that way. It can’t imagine a new solution to a problem that hasn’t been tried before.
For example, it can produce an image if someone describes what they envision in enough detail, but it can’t create a new image on its own.
It can’t write a new song lyric without someone suggesting a subject.
It can’t invent a new process or machine to fill a need that it decided needed to be filled.
It will not be capable of original thought… at least, not yet. If it ever is — that’s when we need to worry.
> AI is somewhat misnamed. It’s not “Intelligent”.
By any definition of “intelligent” that includes half (or more) of the humans on this planet, modern LLMs pretty much are.
> It needs to be trained by someone.
So do humans. A baby, unless “trained,” eats with its hands and pisses and craps wherever it happens to be when it’s time. Even walking is “trained”: the baby observes that other people do it and tries to emulate them.
LLMs do a lot of the same things that people do: they make stuff up, and they get things wrong in ways consistent with *what they have been taught*.
They are, like any powerful tool, very useful, but you have to be very careful with them.
> In other words, fed large amounts of relevant data about subjects.
A great many moons ago I got tasked (by a somewhat myopic manager) with “developing an AI system.”
I requested a list of requirements for such a system, which baffled the manager. Much Sturm und Drang ensued regarding exactly what constituted “an AI system.” BLUF: what he described was closer to what is often termed “an expert system” and, without his knowing who Alan Turing was, or even that he existed, closely resembled the famed “Turing machine.”
Which is nothing more than a “stimulus/response” system, quite well described by Don Curton and GT3Ted above. Those I could build in my sleep, because I had been heavily involved in supporting the “break/fix” part of the business. Which, in today’s iteration of “A.I.,” is exactly what we have and exactly what so many are orgasming about.
My opinion is that, first, we do not have a fully accepted definition of what constitutes AI, although almost everyone thinks we do; and second, from a hardware perspective (my default setting, since I spent years in semiconductor manufacturing; while not in chip design, you do have to know at least something about the particular chips to be able to manufacture them), we’re eons short of reaching human brain function and capacity with solid-state hardware. (And, no, HAL ain’t it. HAL (which some know as sitting one letter north in the alphabet from “IBM”) was, exactly as GT3Ted describes, a very large, very complex, but still severely limited stimulus/response system: stimuli outside its known range were greeted with what HAL had been programmed to interpret as “the closest acceptable response.” Which is not AI.)
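The stimulus/response behavior described in these comments, including the HAL-style “closest acceptable response” fallback, can be sketched as a simple lookup table. The fault codes and canned responses below are invented for illustration, not taken from any real diagnostic system:

```python
# Minimal stimulus/response ("expert system") sketch: known stimuli
# map directly to canned responses; anything outside the known range
# falls back to a pre-programmed default, HAL-style.
RULES = {
    "P0301": "replace ignition coil, cylinder 1",  # invented fault codes
    "P0420": "replace catalytic converter",
}

def respond(stimulus, rules=RULES, fallback="take it to the dealer"):
    """Return the canned response for a known stimulus, else the fallback."""
    return rules.get(stimulus, fallback)

print(respond("P0420"))  # -> replace catalytic converter
print(respond("U9999"))  # -> take it to the dealer
```

The thinking all happened once, up front, in whoever wrote the rule table; the system itself only ever looks things up.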
Will we ever get there? Well, I’ve learned to “never say never,” but that horizon is sufficiently distant that I think it safe to proclaim “certainly not in our lifetimes.” Maybe, possibly, our children may experience it in their dotage, but more likely it’ll be their kids. We’ll come close (if we can maintain a fully bastardized definition of AI), but there’s a rather wide chasm between “extremely large and extremely capable stimulus/response systems” and human creativity, which is dependent upon thinking.
Ever heard the phrase “thinking outside the box”? With true AI, as with true human thought and creativity, there is no box.