From the comments to yesterday’s post about A.I., this from Reader askeptic:
“I seem to recall being taught, oh so long ago, that every advance in technology has brought an expansion of employment, contrary to the accepted wisdom that the machine would replace man. Why would the use of A.I. not be an exception to that?”
The simple answer would be that machines have always excelled (once perfected) at repetitive tasks: assembly-line work, mathematical calculations, full-automatic fire and so on.
What humans do is think: about building the robots for those assembly lines, about which calculations need to be performed, and about when massed fire is called for. In other words, we supply the answers for all three activities.
What seems to be alarming people (and I’m one of them) is that A.I. appears to be aimed at duplicating, or even creating, those very thought processes, replacing humans in the one dimension that has created the world we live in. (My particular reservation, shared by many I suspect, is that the engine of this replacement relies on the wisdom of crowds, i.e. garnering information from previously-created content, much as philosophers have relied upon Aristotle et al. to provide the foundations for their further philosophies.)
The problem with all this is that just as Aristotle’s thoughts have sometimes proved erroneous when applied to specific scenarios, the “wisdom of crowds” can, in this particular set of circumstances, be reshaped and reformed by millions upon millions of bots (say) which alter the terms of the discussion, making outlying or minority positions seem like the majority. In much the same way, a dishonest poll (such as the 2020 U.S. election) can be corrupted into portraying a preponderance that never existed.
It’s easy to refute one of Aristotle’s scientific observations, e.g. that heavier objects fall faster than lighter ones, but it’s far less easy to demonstrate the inadequacy of face masks at preventing the spread of airborne disease when the preponderance of scientific “evidence” allows people to say that if you refuse to wear a mask, you’re a potential mass murderer. We all knew intuitively that the gaps in a mask’s weave, however tiny, were still huge compared to the microscopic viruses they were supposed to stop, but that intuition was crushed by the weight of public pressure.
And if A.I. looks only at the part of the data that says masks work and never at the evidence that they don’t, the output will always be: wear a mask, peasant. And yes, that is indeed happening.
I know the above is somewhat simplistic, but my point is that when you look at how A.I. is being used (to “cheat” at creative activity, for example, by writing a college essay) and at its potential to learn from its mistakes (even when fed erroneous input), we are justified in being very apprehensive about it.
Which brings me finally to the answer to Reader askeptic’s question: the premise is sound, in that technology has in the past always led to an expansion of employment. But if we acknowledge that the prime function of a human being is to think, then what price humans if that function is replaced?