WC.com

Sunday, March 17, 2024

Artificial Extinction?

The news recently reported on a United States State Department-commissioned study on artificial intelligence. The results reportedly include some quite dire predictions. Reading the report, I was reminded of a quote from the 1980s classic Ghostbusters, discussing the safe use of their technology:
"Spengler: It would be bad.
Venkman: I'm fuzzy on the whole good/bad thing. What do you mean, "bad"?
Spengler: Try to imagine all life as you know it stopping instantaneously and every molecule in your body exploding at the speed of light.
Venkman: Right. That's bad. Okay. All right. Important safety tip. Thanks, Egon."
Ok, so this AI thing could be "bad." We got that part. 

I was fortunate, in my youth, to engage in some philosophical conversations with individuals of great intellect. One I recall was fascinated and engaged by the demise of the dinosaurs. Left to his own devices, he would often turn his thoughts and conversation to the theories of the great extinction. He enjoyed propounding his perspectives on "extinction events."

One of his perceptions, often voiced, was the unpredictability of cataclysmic environmental events. He was a subscriber to the meteor theory of dinosaur extinction. I can recall him several times uttering the phrase "they never knew what hit them," and similar observations, such as "there's no way they could've seen it coming."

I recall biting my tongue. All my petite brain could muster was, essentially, "Yeah, dinosaurs didn't have telescopes." To me, the observation that they were unprepared and could not have predicted it seemed axiomatic and irrelevant. Whether their demise was cataclysmic or evolutionary, they no more saw it coming than they mapped the dinosaur genome, built self-driving dino-vehicles, or put a dinosaur on the moon. That is flippant, and a bit sarcastic; apologies. That said, I see little relevance in discussing our future with reverence for, or reference to, the demise of dinosaurs. The parallels escape me.

That said, there are some noted in the State Department report who believe the human race may be headed for a similarly cataclysmic "extinction" event. No, not a meteor, but an extinction nonetheless. The fears do not seem centered upon computers impersonating Tom Hanks, inappropriate or malignant photographs (see Deep Fakes in Florida, March 2024), or even disproportionate or discriminatory censorship of viewpoints and perspectives (yes, there is some evidence that some platforms suppress or exalt various viewpoints).

The significant fears in the State Department report are two-fold. First is the potential that miscreant humans will engage in a "heist" of AI-enabled weapons, and thus use the appropriated weapons in a manner that will disable and disrupt the very underpinnings of our societal existence. This is a hacker fear, and in fairness, there have been some intriguing instances in which miscreants used technology against those who created it. As I read the report, I wondered how this "theft of AI" threat is potentially more destructive than nuclear weapon theft.

The second seemingly popular threat is that artificial intelligence will become sentient and, without the help of miscreants, will itself engage in destructive behavior. This might occur through the intent of its designers, the inadvertence of programmers, or other accidents.

As I read, I wondered aloud: "Is this WarGames (MGM 1983) or The Terminator (Orion 1984)?"

Unfortunately, the human race seems inclined to sloth. Increasingly, the benefits and virtues of hard work, dedication, and focus have become punchlines rather than goals. Young people are increasingly drawn to the virtual world, with its great benefits, friendship proxies, and unfortunate risks. Societally, humans of the most ardent independence and liberty seem repeatedly inclined to yield their inherent God-given rights ("endowed by their Creator") to the government in exchange for security or protection. They seem as eager to yield to social media for convenience, collectivity, and acceptance.

Why my mind is drawn toward Hollywood in these regards, I cannot explain. But I wonder if we are creeping toward the ruined world portrayed in WALL-E (Pixar 2008). In that classic, the good folks have deteriorated into a codependent relationship with technology that is clearly and humorously toxic. They persist in pointless lives serving computers that serve them. The humans have become obese, ignorant, and, frankly, irrelevant to themselves and others. There is symbiosis and social decline, and despite the movie's humor, it is quite depressing.

Maybe, just maybe, some solutions lie in our continued intellectual development, evolution as humans, and engagement in our lives?

The good news is that government is going to solve all this. We must remember that more government and more laws are always the answer (sarcasm). Try to remember Ronald Reagan's point:
“If more government is the answer, then it was a really stupid question.”
With the Gipper in mind, let's delve into the State Department report (the executive summary is available on the internet, below). To view the whole report, you have to request it. The information overlords say that the price for this knowledge is that the government must know that you asked for and received it. Um, well, ok. Let's face it, the government can presently know virtually anything about you anyway.


Here are the five main points that the experts recommend in order to address the Extinction Level threat that we face (in their perspective):
  1. Establish interim safeguards to stabilize advanced AI development
  2. Strengthen capability and capacity for advanced AI preparedness and response
  3. Increase national investment in technical AI safety research and standards development
  4. Formalize safeguards for responsible AI development and adoption by establishing an AI regulatory agency and legal liability framework
  5. Enshrine AI safeguards in international law and secure the AI supply chain
Some will shudder at the use of "enshrine" ("to cherish as sacred") and international law. The world has witnessed a parade of challenges with international law, and there are failures here, there, and everywhere. Successes? Certainly, there are successes. But we must know in our hearts that international law is no panacea or be-all. The UN response to any variety of world threats might cause skepticism. Or the impotence of the World Court's dependence upon the Security Council (with full veto power in the hands of various nations) may give pause.

But to the substance. The recommendations are essentially: (1) we need emergency rules right now to regulate the what, who, and how of AI; (2) we build defenses against AI (and the miscreants perhaps) and decide how and when we will deploy them; (3) we spend more money (your money) on studies and rules; (4) we work out more, build muscles, and thus ready ourselves for the assault and our response; (5) we invest faith and hope in the international community, United Nations, treaties, and World Court.

In the end, the solution proposed by the government is seemingly more government. Is it possible that there are existing laws and tools in place that regulate activity? Will there be consideration regarding the prosecution of those who commit mayhem with hammers, or will the focus remain squarely on examining and defining what a hammer is or could be? Will we focus on the fact that there is now a larger hammer?

In the end, is it of any concern to anyone that number one on the Report's list is not "evaluation and implementation of current laws as regards the use or misuse of AI and other technology?" If someone builds a bigger hammer, shall we jump to a new "bigger hammer" law, or might we consider that hitting someone with a hammer already violates the law, and that the size of the hammer is largely irrelevant?