For once, there’s an article worth reading at National Review, and for once, I find myself somewhat in agreement with the rabid Leftoids (albeit for different reasons).
[T]here’s a consistent and surprisingly effective effort to convince you that the biggest threat to your community is the plans for a new AI data center on the other side of town. Read on.
Democrats’ Data Center Obsession
Back in 2024, I observed that when some of America’s biggest tech companies realized that they needed significantly more electrical power to run their data centers in the decades to come, they decided that restarting decommissioned nuclear plants was the best, most cost-effective, and most reliable option. And with the seeming snap of their fingers, a slew of those closed nuclear plants were scheduled to start operating again in the coming years.
And it wasn’t just Republican governors like Glenn Youngkin of Virginia eager to re-embrace nuclear power; Democrats like Pennsylvania Governor Josh Shapiro, Washington Governor Jay Inslee, Michigan Governor Gretchen Whitmer, and Virginia Senators Mark Warner and Tim Kaine all jumped on board. It was a case of the right policy finally being enacted after decades of foot-dragging and fearmongering, but more than a little frustrating that years of conservatives winning the policy argument and being right on the facts didn’t move the needle on the issue; it was Microsoft, Amazon, Google, and other big companies simply saying, “We want this.”
We should have known that eventually the progressive wing of the Democratic Party would wake up and galvanize opposition; now an increasingly loud swath of Americans, mostly on the left, seem to hate data centers the way they used to hate your SUV, your Big Mac, and, well, you.
Of course, the reason the Watermelons are being stirred to violence is because electricity is eeeevil, as is nuuuuclear powerrrr etc. etc.
I don’t care about any of that.
My concerns about A.I. are more philosophical in nature, because while I can see many benefits in having computing power save humans a lot of grunt work and so on, I am profoundly disturbed by the implications of letting A.I. run things, and more especially, run the activities and affairs of humans. As long as it remains a tool, therefore, I think I can get behind it; but as a management system, I remain deeply skeptical.
And my skepticism stems from two sources.
Firstly, I think it's all too easy, through laziness or indifference, to hand over the reins to outside control; we need only look at how cars are being transformed in exactly that way. And as far as I'm concerned, the jury is still out (way far out) on whether this is a good, bad, or evil thing.
My second concern stems from the basic premise of A.I.: that, as I've said before, the collective [sic] wisdom can form a secure foundation for intelligence. As someone who has often used and manipulated data myself, I am intimately familiar with how that process can be corrupted by what we might call malevolent forces. And whereas in the past one could rely on some kind of human element to act as a firewall against this, we now face the prospect of A.I.-driven bots that not only speed up the process massively, but also conceal what's actually going on.
I'm not going to do anything stupid like bomb some data center, of course, nor would I ever support the assholes who do that kind of thing. If they do something vile like this, or even plan to and get caught, then by all means hang them, bury them under the prison, or stick them in some deep dark jail cell forever.
I do think that we aren't being careful enough with the drive toward A.I., because the guys who are building it are obsessed with performance/generation. As with all science, though, we need to keep asking ourselves the question: "Just because we can, are we sure that we should?"
And I see very few people asking that question of A.I. — which means that the field of resistance is being left open for the loony Leftoids.