Biden Jumps on AI Bandwagon Way Too Fast

President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence and the follow-up OMB policy statement throw AI at everything. Together they mention (take a deep breath) global warming, detecting the use of opioids, healthcare, financial services, education, decreasing drug costs, housing, law, transportation, supporting American workers, deconfliction of air traffic, assuring equal opportunity, curtailing discrimination and bias in the justice system, advancing racial equity, job displacement, labor standards, workplace equity, and health.

AI will “expand [government] agencies’ capacity to regulate, govern, and disburse benefits, and … cut costs and enhance the security of government systems.” Borrowing from the Pledge of Allegiance, Biden’s order says AI can “advance civil rights, civil liberties, equity, and justice for all.”

I’m not an expert in most of these fields, and maybe AI will have an impact on many of them. It seems, though, that the author of Biden’s executive order jotted down every topic that came to mind without any filtering.

This is history repeating itself. In the mid-20th century, a Bell Labs genius, Claude Shannon, wrote a landmark paper that ushered in the digital age. Digital technology today is used for transmitting voice, video, and data over the internet, cellular networks, and other communication media.

If you’ve received a perfect picture of your granddaughter when you have one bar and it’s raining outside, thank Claude Shannon. His work has more impact on our daily lives than Albert Einstein’s.

So remarkable was Shannon’s paper on information theory that everyone tried to apply it to whatever they were doing. This troubled Shannon.

In a paper titled “The Bandwagon,” he expressed concerns that apply just as well to AI today:

“Information theory has, in the last few years, become something of a scientific bandwagon. … [It] has received an extraordinary amount of publicity in the popular as well as the scientific press. … Although this wave of popularity is certainly pleasant and exciting for those of us working in the field, it carries at the same time an element of danger… It will be all too easy for our somewhat artificial prosperity to collapse overnight when it is realized that the use of a few exciting words like information [theory] do not solve all our problems.”

Nowhere is the AI bandwagon that troubled Shannon more evident than in Biden’s executive order. To paraphrase Shannon, the use of a few exciting words like “artificial intelligence” and “machine learning” does not solve all our problems. Biden’s executive order radiates a contrary view.

AI remains an exciting, often mind-blowing technology, but hyped futuristic depictions of AI in The Terminator and The Matrix are unrealizable science fiction. Unlike humans, AI will never understand what it is doing, be creative, or experience qualia. AI is a tool. Like electricity or thermonuclear energy, it can be used for either good or evil.

Many AI evils, though, are neither new nor unique to AI.

Securing systems against hacking is of paramount importance. However, cybersecurity has long been a challenge that demands ongoing vigilance. The back-and-forth volleys between system protectors and hackers continue as an arms race.

An unexpected AI outcome can be dangerous. Early large language models (LLMs) defamed the innocent and offered vile advice to the young.

Much of this has been fixed in ChatGPT. But the number of possible outcomes grows exponentially as system complexity grows linearly: a system of just 50 independent binary components, for example, has more than a quadrillion possible states. Many LLM outcomes can be unwanted and even harmful.

Recently, I showed, with some amusement, that ChatGPT-4 cannot reliably be instructed not to do something. Tell it to draw a picture with NO ELEPHANTS, and chances are you’ll see an elephant in the generated picture.

This can be fixed, but other unexpected outcomes lurk in the deep recesses of an LLM’s trillion-plus degrees of freedom. LLM developers will keep putting Band-Aids on the cuts as they are discovered.

Another serious AI danger is the nefarious use of deep fakes. Faked photos date back more than a century to the Cottingley Fairies, where two young girls supposedly posed with fairies they found fluttering in the woods.

Sir Arthur Conan Doyle, the creator of Sherlock Holmes, believed the faked photos to be real. The AI faking of images today is exponentially more sophisticated, and, to date, there is no foolproof method of detection.

Deep fake videos of politicians and celebrities with matching deep fake voices are getting more realistic and more problematic. But solutions are actively being sought and, with proper standards, the deep fake problem will be successfully addressed.

Some perceived AI dangers are red herrings. There is a call to end bias in AI. No one wants AI to spew racial slurs or laud xenophobia.

But distinguishing between bias and legitimate social or political viewpoints can be challenging. What one person views as bias, such as opinions on climate change or COVID, may be considered convictions by others.

Eliminating bias entirely from AI is a fantasy. AI devoid of bias is like water without wetness.

With his executive order, Biden has jumped on the AI bandwagon that carries all the baggage of the media’s AI disinformation and hype.

Like Shannon’s information theory, AI is not a cure-all for every problem. A more impactful executive order could be crafted with input from cutting-edge domain experts who know what they are talking about.

Robert J. Marks, Ph.D., is Distinguished Professor at Baylor University and Senior Fellow and Director of the Bradley Center for Natural & Artificial Intelligence. He is the author of “Non-Computable You: What You Do That Artificial Intelligence Never Will” and “Neural Smithing.” Marks is a former Editor-in-Chief of the IEEE Transactions on Neural Networks.

© 2024 Newsmax. All rights reserved.