Fine Motor Control

…and I’m not talking about Porsche’s new gearbox, either.

Consider this, which arrived on my recently-acquired laptop w/Windows 11:

It’s the scrolling button on the extreme right of any open window, and Alert Readers will no doubt have realized that it replaced the old square one that we all grew up with.

I have two questions about this shrunken silliness.

Firstly, as any fule kno, I use a Logitech Ergo Trackball:

…whose giant “thumb ball” controller gives one plenty of ability to steer the pointer over to the tiny space in the top right-hand side of the window with relative ease.  How do people achieve the same goal using the sloppy and imprecise finger pad of a laptop?

Secondly, and this is a question for the propeller-heads out there:  is there any way one can change the shape / size of the scrolling thingy back to its old appearance?  (I’d bet there isn’t because Microsoft, but I’ll gladly be proved wrong in this case.)
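A propeller-head's answer, hedged accordingly: Windows 11 does bury a partial fix. The thin scrollbars that collapse to a sliver can be told to stay at full width via Settings > Accessibility > Visual effects > "Always show scrollbars", and the same switch can (as far as I know) be flipped from a command prompt through the registry. It won't bring back the classic square scrollbar of old, but it does stop the shrinking act. A sketch, assuming the standard `DynamicScrollbars` value:

```shell
:: Windows 11: keep scrollbars permanently expanded (per-user setting).
:: Same effect as Settings > Accessibility > Visual effects > "Always show scrollbars".
:: DynamicScrollbars: 1 = auto-hide / shrink (default), 0 = always show full-width.
:: Sign out and back in (or restart the app) for the change to take effect.
reg add "HKCU\Control Panel\Accessibility" /v DynamicScrollbars /t REG_DWORD /d 0 /f
```

To revert, run the same command with `/d 1`, or just flip the toggle back in Settings.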

I get by okay with the Logitech trackball, but even so it’s not as easy as it used to be, which irritates the shit out of me, and I can’t be the only one thus affected.

As always with Microsoft, change seems to come not only unrequested and generally unwanted, but also in such a manner that it requires considerable effort to manage it.

Garbage Collection

For a bunch of supposed scientists, these tits seem to be remarkably unworldly [sic]:

Earth’s orbit is filling up with junk. Greenhouse gases are making the problem worse.
By the end of the century, a shrinking atmosphere could create a minefield for satellites.

I’m going to ignore the “greenhouse gases” bit because I have an abiding suspicion of headlines which require that we stop buying SUVs and generating electricity.

I’ll buy the first part, though, because that’s actual scientific observation.

Now I’m not a scientist, make no claims to be one, and I’m certainly no astrophysicist.  But I am a capitalist, and it seems to me that the solution is not to turn off all lightbulbs on Earth, but to let the market take care of the junk problem, by simply collecting it and disposing of it as we do with all our other household junk.

Here’s my suggestion:  have ol’ Elon Musk design a giant Shop-Vac that can be mounted on one of his rockets, and launch it into space to collect debris.  Then, when the receptacle is full, launch the craft in the general direction of the Sun for eventual incineration.  This action could be repeated with more Junk-X spacecraft until Earth orbit is neat and tidy again.

Now this job and technology wouldn’t be cheap, and SpaceX would need to be paid (because Elon may sometimes be a philanthropist, but he’s not a complete sucker either).  But paid by whom?

Well, considering that this would benefit mankind in general, it should not be funded by any single country — yeah, ten guesses which country would be expected to fund it — but by all nations on Earth.

Is there a global organization which should sponsor SpaceX to complete this function? Uh, lemme think… oh yeah, how about this lot?

You might think that the U.N. doesn’t have the funds to pay SpaceX, but I’ll bet that if their budget were scrutinized, there’d be a whole bunch of inefficiencies and waste which could be re-purposed towards so noble an objective.

And in a Great Circle Of Life manifestation, I bet that Elon’s DOGE whizzkids could find the dollars in about a couple of days, if they could be let loose on the United Nations’ budget…

Censorship By Algorithm

…or by A.I., the outcome is the same.

Seen SOTI:

Since when could we not say simple words like “racists” and “pedophiles”*?

Since “bad” words could be flagged by built-in website algorithms and cause the post and/or writer to be “flagged” or even “banned”, is when.

Which is why I don’t bowdlerize my writing here;  if I want to say “rapist” I’ll fucking well say “RAPIST”, and if I want to say “porn” I’ll say that too, and not “p*rn” or its pathetic ilk.

It’s too bad, because the above statement lends itself to being quite funny, provided that you don’t encounter the linguistic roadblock of having to hunt for the substitute letters for the asterisks.


*(For my Brit Readers, “paedophiles” which would emerge as “p**dophiles”, which is doubleplusunreadable.)

If You Don’t Use It…

…of course you’re going to lose it.  This post on Musk-X triggered a train of thought from me:

Just had a fascinating lunch with a 22-year-old Stanford grad. Smart kid. Perfect resume. Something felt off though. He kept pausing mid-sentence, searching for words. Not complex words – basic ones. Like his brain was buffering. Finally asked if he was okay. His response floored me.

“Sometimes I forget words now. I’m so used to having ChatGPT complete my thoughts that when it’s not there, my brain feels… slower.”

He’d been using AI for everything. Writing, thinking, communication. It had become his external brain. And now his internal one was getting weaker.

This concerns me, because it’s been an ongoing topic of conversation between the Son&Heir (a devout apostle of A.I.) and me (a very skeptical onlooker of said thing).

I have several problems with A.I., simply because I’m unsure of the value of its underlying assumption — its foundation, if you will — the belief that the accumulated knowledge on the Internet is solid:  that even if there were some inaccuracies, they would be overcome by a preponderance of correct theses.  If that’s the case, then all well and good.  But I am extremely leery of those “correct” theses:  who decides what is truth, or nonsense, or (worst of all) highly plausible nonsense which only a dedicated expert (in the truest sense of the word) would have the knowledge, time and inclination to correct?  The concept of A.I. seems to be a rather uncritical endorsement of “the wisdom of crowds” (i.e. received wisdom).

Well, pardon me if I don’t agree with that.

But returning to the argument at hand, Greg Isenberg uses the example of the calculator and its dolorous effect on mental arithmetic:

Remember how teachers said we needed to learn math because “you won’t always have a calculator”? They were wrong about that. But maybe they were right about something deeper. We’re running the first large-scale experiment on human cognition. What happens when an entire generation outsources their thinking?

And here I agree, wholeheartedly.  It’s bad enough to think that at some point, certain (and perhaps important) underpinnings of A.I. may turn out to be fallacious (whether unintended or malicious — another point to be considered) and large swathes of the A.I. inverted pyramids’ points may have been built, so to speak, on sand.

Ask yourself this:  had A.I. existed before the reality of astrophysics had been established, we would have believed, uncritically and unshakably, that the Earth was at the center of the universe.  Well, we did.  And we were absolutely and utterly wrong.  After astrophysics came onto the scene, think how long it would have taken for all that A.I. to be overturned and corrected — about as long as it actually took in the post-medieval era.  Most people at that time couldn’t be bothered to think about astrophysics and just went on with their lives, untroubled.

What’s worse, though, is that at some point in the future the human intellect, having become flabby and lazy through its dependence on A.I., may not have the basic capacity to correct itself, to go back to first principles because quite frankly, those principles would have been lost and our capacity to recreate them likewise.

Like I said, I’m sure of only two things in this discussion:  the first is the title of this post, and the second is my distrust of hearsay (my definition of A.I.).

I would be delighted to be disabused of my overall position, but I have to say it’s going to be a difficult job because I’m highly skeptical of this new wonder of science, especially as it makes our lives so much easier and more convenient:

He’d been using AI for everything. Writing, thinking, communication. It had become his external brain.

It’s like losing the muscle capacity to walk, and worse still the instinctive knowledge of how to walk, simply because one has come to depend completely on an external machine to carry out that function of locomotion.


P.S.  And I don’t even want to talk about this bullshit.

It’s Not Just Gen Z

I had no idea that this was the case:

In May a survey found that a third of Brits panic when their phone rings unexpectedly and many don’t even answer calls, with Gen Z pleading ‘just text me’.

In a time where cold callers and scammers ringing you up out of the blue happens more often than not, almost 37 per cent of those asked said they are less likely to answer when they receive a call without notice than they were five years ago.

Some 12 per cent of those surveyed said it has been a week – or even longer – since they last spoke to someone on the phone.

And Gen Z have flocked to TikTok to beg people ‘text me’ and telling their viewers how they just sit ‘watching my phone ring’ if ‘absolutely anyone’ calls. 

Yeah, I don’t ever answer my phone either, unless the number is in my address book, or else it’s an identified call from a company or person I already know.

As it is, I get two to three text messages a day from some unidentified source or other, saying they found my number in their callers’ list and don’t know who I am (or similar nonsense).  And even worse are the texts that say junk like “Hi!  We haven’t chatted for ages.  Can you call me?” (#Trashdump #Unacknowledged)

I did look up the area codes listed by a few of these text callers, and imagine my surprise when I discovered that all of them are commonly-used fronts for spam calls which originate in exotic locales like the Philippines, China or Central Europe.  (They’re the new Nigeria of email fame.)

Hell, I don’t even answer unidentified calls from my own area code.

It’s a minefield out there, folks, and ignoring this bullshit is not paranoia, but prudence.


Parallel thought:  this panic comes from, of all places, the BritGov, who calls people to collect statistics and now can’t get the info they want.  Let us all remember the immortal words of Sir John Cowperthwaite, Hong Kong’s Financial Secretary, talking about his refusal to let his government collect data from the population:

“If I let them compute those statistics, they’ll want to use them for planning.”

Wiser words were seldom spoken.

Skynet Hates You

Here’s one consequence of putting your trust in technology, this time from India:

Three men have died in a road accident after their car’s sat-nav sent them careening off the 30ft-high edge of an unfinished bridge. 

Their bodies, trapped inside the mangled car, weren’t discovered until 9:30am the next day, local media reported. 

Investigators found that the trio had been following an out-of-date map on Google Maps at speed. 

The mapping service allegedly told them to travel down the bridge, which had no signs indicating it was out of use after it suffered a collapse in 2022 following heavy flooding. 

Oops.

One would think that there should have been warning signposts about the bridge being down, but then again, this is Third World India.  Let’s review:

  • Trust technology:  not always wise.
  • Trust government:  also not always wise.

A life (-or-death) lesson, there.