So Much For That Stereotype

My buddy (whom I’ll call “Brian” because, well, that’s his name) was once married to a gorgeous but rather empty-headed girl named Irene (also her real name).  Over the course of his twenty-odd-year marriage, he would unfailingly buy her a new Honda Accord every two years or so.  When I asked Brian why always an Accord, his answer was quite succinct:

“Because not even Irene can fuck up a Honda.”

Well, that may have been true back then, but apparently it’s not so true anymore:

The National Highway Traffic Safety Administration (NHTSA) has launched an investigation into more than 1.4 million Honda and Acura vehicles over defective connecting rod bearings that can cause complete engine failure. The probe targets 3.5-liter V6 engines in popular models including the Honda Pilot, Odyssey, and Ridgeline, along with several Acura vehicles.

The investigation underscores growing safety concerns about widespread engine problems that could leave drivers stranded or create hazardous situations on busy roadways.

Federal regulators opened the probe on August 20. They are focusing on the J35 V6 engine used across multiple Honda and Acura model lines. The investigation covers 2016–2020 Acura MDX vehicles, 2018–2020 Acura TLX models, 2018–2020 Honda Odyssey minivans, 2016–2020 Honda Pilot SUVs, and 2017–2019 Honda Ridgeline pickup trucks.

NHTSA has received at least 414 complaints involving engine failure tied to the defective connecting rod bearings.

Oops.

Strange that this problem should surface in their V6 engines;  I always thought they’d be bulletproof compared to the smaller 2-liter 4-bangers, but there ya go.

Readers thinking of buying a new-model Honda with said engine:  caveat emptor.

Quote Of The Day

…from Jim Treacher, talking about Grok:

“This thing is just telling me what I want to hear. Which is a nice feeling, but that’s all it is. The user is being manipulated, by design. People are now learning the hard way that these machines are programmed to give an answer, not necessarily the answer. They’re incredibly sophisticated, but they literally don’t know what they’re talking about. They don’t know anything.”

It’s received, not actual “wisdom”, because it’s only as good as what’s been fed into it.  Moreover, there are no footnotes to say where they got it, and there’s no telling how many hands may have played with it, massaged it and directed it before it reaches the end user.

Caveat lector.

Fine Motor Control

…and I’m not talking about Porsche’s new gearbox, either.

Consider this, which arrived on my recently-acquired laptop w/Windows 11:

It’s the scrolling button on the extreme right of any open window, and Alert Readers will no doubt have realized that it replaced the old square one that we all grew up with.

I have two questions about this shrunken silliness.

Firstly, as any fule kno, I use a Logitech Ergo Trackball:

…whose giant “thumb ball” controller gives one plenty of ability to steer the pointer over to the tiny space in the top right-hand side of the window with relative ease.  How do people achieve the same goal using the sloppy and imprecise finger pad of a laptop?

Secondly, and this is a question for the propeller-heads out there:  is there any way one can change the shape / size of the scrolling thingy back to its old appearance?  (I’d bet there isn’t because Microsoft, but I’ll gladly be proved wrong in this case.)
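One partial answer for the propeller-heads, hedged because Microsoft could change this in any update:  current Windows 11 builds don’t appear to offer a way back to the old square scrollbar, but there is an accessibility setting that at least stops the skinny bars from collapsing to a sliver when idle — Settings → Accessibility → Visual effects → “Always show scrollbars”.  The equivalent registry tweak, as a .reg config fragment (back up your registry first; sign out and back in for it to take effect):

```reg
Windows Registry Editor Version 5.00

; Stop Windows 11's scrollbars from auto-hiding / shrinking when idle.
; Equivalent to Settings > Accessibility > Visual effects >
; "Always show scrollbars".  Sign out and back in to apply.
[HKEY_CURRENT_USER\Control Panel\Accessibility]
"DynamicScrollbars"=dword:00000000
```

Note the limits:  this keeps the bars full-width and always visible, but it does not bring back the old square arrow buttons.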

I get by okay with the Logitech mouse, but even so it’s not as easy as it used to be, which irritates the shit out of me, and I can’t be the only one thus affected.

As always with Microsoft, change seems to come not only unrequested and generally unwanted, but also in such a manner that it requires considerable effort to manage it.

Garbage Collection

For a bunch of supposed scientists, these tits seem to be remarkably unworldly [sic]:

Earth’s orbit is filling up with junk. Greenhouse gases are making the problem worse.
By the end of the century, a shrinking atmosphere could create a minefield for satellites.

I’m going to ignore the “greenhouse gases” bit because I have an abiding suspicion of headlines which require that we stop buying SUVs and generating electricity.

I’ll buy the first part, though, because that’s actual scientific observation.

Now I’m not a scientist, make no claims to be one, and I’m certainly no astrophysicist.  But I am a capitalist, and it seems to me that the solution is not to turn off all lightbulbs on Earth, but to let the market take care of the junk problem, by simply collecting it and disposing of it as we do with all our other household junk.

Here’s my suggestion:  have ol’ Elon Musk design a giant Shop-Vac that can be mounted on one of his rockets, and launch it into space to collect debris.  Then, when the receptacle is full, launch the craft in the general direction of the Sun for eventual incineration.  This action could be repeated with more Junk-X spacecraft until our orbital neighborhood is neat and tidy again.

Now this job and technology wouldn’t be cheap, and SpaceX would need to be paid (because Elon may sometimes be a philanthropist, but he’s not a complete sucker either).  But paid by whom?

Well, considering that this would benefit mankind in general, it should not be funded by any single country — yeah, ten guesses which country would be expected to fund it — but by all nations on Earth.

Is there a global organization which should sponsor SpaceX to complete this function? Uh, lemme think… oh yeah, how about this lot?

You might think that the U.N. doesn’t have the funds to pay SpaceX, but I’ll bet that if their budget were scrutinized, there’d be a whole bunch of inefficiencies and waste which could be re-purposed towards so noble an objective.

And in a Great Circle Of Life manifestation, I bet that Elon’s DOGE whizzkids could find the dollars in about a couple of days, if they could be let loose on the United Nations’ budget…

Censorship By Algorithm

…or by A.I., the outcome is the same.

Seen SOTI:

Since when could we not say simple words like “racists” and “pedophiles”*?

Since “bad” words could be flagged by built-in website algorithms and cause the post and/or writer to be “flagged” or even “banned”, is when.
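For the curious, the mechanism is nothing exotic.  Here’s a minimal sketch (in Python, with an illustrative word list — real platform filters are far bigger and fuzzier) of how a blocklist flags a post, and of the asterisk-masking that posters use to try to slip past it:

```python
# Illustrative blocklist -- real filters run to thousands of entries
# and use fuzzy matching, not a plain set lookup.
BLOCKLIST = {"racists", "pedophiles", "paedophiles", "rapist", "porn"}

def is_flagged(post: str) -> bool:
    """Flag a post if any blocklisted word appears as a whole word."""
    words = post.lower().split()
    return any(w.strip(".,!?\"'") in BLOCKLIST for w in words)

def bowdlerize(word: str) -> str:
    """Mask a word's vowels (after the first letter) with asterisks,
    the way self-censoring posters do."""
    return word[0] + "".join("*" if c in "aeiou" else c for c in word[1:])
```

So `is_flagged("Those racists again!")` trips the filter, while the masked form from `bowdlerize("porn")` — “p*rn” — sails right past the naive lookup, which is exactly why the practice took hold.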

Which is why I don’t bowdlerize my writing here;  if I want to say “rapist” I’ll fucking well say “RAPIST”, and if I want to say “porn” I’ll say that too, and not “p*rn” or its pathetic ilk.

It’s too bad, because the above statement lends itself to being quite funny, provided that you don’t encounter the linguistic roadblock of having to hunt for the substitute letters for the asterisks.


*(For my Brit Readers, “paedophiles” which would emerge as “p**dophiles”, which is doubleplusunreadable.)

If You Don’t Use It…

…of course you’re going to lose it.  This post on Musk-X triggered a train of thought in me:

Just had a fascinating lunch with a 22-year-old Stanford grad. Smart kid. Perfect resume. Something felt off though. He kept pausing mid-sentence, searching for words. Not complex words – basic ones. Like his brain was buffering. Finally asked if he was okay. His response floored me.

“Sometimes I forget words now. I’m so used to having ChatGPT complete my thoughts that when it’s not there, my brain feels… slower.”

He’d been using AI for everything. Writing, thinking, communication. It had become his external brain. And now his internal one was getting weaker.

This concerns me, because it’s been an ongoing topic of conversation between the Son&Heir (a devout apostle of A.I.) and me (a very skeptical onlooker of said thing).

I have several problems with A.I., simply because I’m unsure of the value of its underlying assumption — its foundation, if you will — which holds that the accumulated knowledge on the Internet is solid:  that even if there were some inaccuracies, they would be overcome by a preponderance of correct theses.  If that’s the case, then all well and good.  But I am extremely leery of those “correct” theses:  who decides what is truth, or nonsense, or (worst of all) highly plausible nonsense which only a dedicated expert (in the truest sense of the word) would have the knowledge, time and inclination to correct?  The concept of A.I. seems to be a rather uncritical endorsement of “the wisdom of crowds” (i.e. received wisdom).

Well, pardon me if I don’t agree with that.

But returning to the argument at hand, Greg Isenberg uses the example of the calculator and its dolorous effect on mental arithmetic:

Remember how teachers said we needed to learn math because “you won’t always have a calculator”? They were wrong about that. But maybe they were right about something deeper. We’re running the first large-scale experiment on human cognition. What happens when an entire generation outsources their thinking?

And here I agree, wholeheartedly.  It’s bad enough to think that at some point, certain (and perhaps important) underpinnings of A.I. may turn out to be fallacious (whether unintended or malicious — another point to be considered), and that the points on which A.I.’s inverted pyramids balance may have been built, so to speak, on sand.

Ask yourself this:  had A.I. existed before the realities of astrophysics had been learned, we would have believed, uncritically and unshakably, that the Earth was at the center of the universe.  Well, we did.  And we were absolutely and utterly wrong.  After astrophysics came onto the scene, think how long it would have taken for all that A.I. to be overturned and corrected — just as the actual correction took centuries in the post-medieval era.  Most people at that time couldn’t be bothered to think about astrophysics and just went on with their lives, untroubled.

What’s worse, though, is that at some point in the future the human intellect, having become flabby and lazy through its dependence on A.I., may not have the basic capacity to correct itself, to go back to first principles because quite frankly, those principles would have been lost and our capacity to recreate them likewise.

Like I said, I’m sure of only two things in this discussion:  the first is the title of this post, and the second is my distrust of hearsay (my definition of A.I.).

I would be delighted to be disabused of my overall position, but I have to say it’s going to be a difficult job because I’m highly skeptical of this new wonder of science, especially as it makes our life so much easier and more convenient:

He’d been using AI for everything. Writing, thinking, communication. It had become his external brain.

It’s like losing the muscle capacity to walk, and worse still the instinctive knowledge of how to walk, simply because one has come to depend completely on an external machine to carry out that function of locomotion.


P.S.  And I don’t even want to talk about this bullshit.