AI depends on training with good data, and that is not happening on most AI platforms. When these systems are used in medicine, more people will die.
When Microsoft incorporated ChatGPT into its Bing search engine, the large number of factual errors triggered an avalanche of criticism. A Microsoft spokesperson said the company expected “mistakes.” Having tried ChatGPT myself, I found the responses to my queries loaded with garbage. As Roger McNamee has written, most popular AI programs scrape the internet for inputs, whether those inputs are validated data or just made-up BS. Now imagine an AI platform scraping data that was itself generated by another AI platform. This is likely happening, which means the internet is filling up with synthetic, multiplicative BS. As San Francisco’s Dr. Rodney Brooks, a former MIT professor, has said about ChatGPT, “It doesn’t have any connection to the world. It is correlation between language.” Yet AI, which is just people writing code that collects data, good quality or not, synthesizes those data, and spits out a response, has been controlling many things in our lives for over a decade. From helping elect thugs like Donald Trump president of the US to crashing Tesla cars on the highway, AI is here. Be aware that many of those controlling AI companies, such as Bill Gates, are college dropouts who couldn’t be bothered to attain a broad-based liberal arts education; instead they left college to monetize, i.e., make money from, techy things. What happens when we leave these poorly educated, money-making techies in charge of the internet of things (IoT) without regulations?
Look around and listen: it’s a world full of BS and anarchy, and the rich guys, such as Bill Gates, who came from a wealthy, politically connected family, control it in a society described as one of “Economic-Elite Domination.” Most people don’t know that Gates, with his politically connected lawyer father, unscrupulously killed the world’s most advanced operating system for small-platform computers, OS/2 Warp (32-bit in 1995), a joint venture between IBM and Microsoft, for the sake of dominating the market. I was a victim of Gates’s shenanigans when my self-built server, running OS/2 in my university lab, lost support and I eventually had to adopt the shitty 16-bit Microsoft Windows operating system. I was furious with the high-pitched incel who was ruining the once rapidly advancing computer and internet technology space. I really wished he had spent more time trolling the streets of Seattle for strippers and less time screwing the computer industry; unfortunately, he only later hooked up with Jeffrey Epstein.
Now AI is moving into the medical treatment system, euphemistically called the healthcare system. You may have heard the hype about the study in which Google AI claimed to detect breast cancer better than physicians did. Let’s ask one of the questions I always teach my students to ask: “Compared to what?” The Google study found that AI performed better than radiologists who were not specifically trained in examining mammograms. So, when we ask “Compared to what?,” Google AI in some sense performed better than naive physicians who had not been trained in detecting breast cancer. But there are larger problems with using AI to screen for breast cancer. As Dr. Peter Gotzsche has published, “As screening does not reduce the incidence of advanced cancers, we would not expect screening to have an effect on breast cancer mortality today.” Further: “The fundamental error with these models is that they do not distinguish between clinically relevant cancers, which would have appeared at a later time if there had not been screening, and the overdiagnosed cancers that would never have appeared. The models include all of them, but in actual fact, the lead time of clinically relevant cancers is less than a year.” What Dr. Gotzsche is saying is that the models used to evaluate cancer screening are fundamentally flawed: cancers that would never cause problems are detected and treated, and the patient goes on to survive. The problem is, the patient would have survived anyway, without the treatment. The model then erroneously counts these cases as evidence that early detection was a success because the patient survived.
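The overdiagnosis bias Dr. Gotzsche describes can be shown with simple arithmetic. Every number below is invented purely for illustration; none comes from his papers or from any study:

```python
# Toy arithmetic illustrating overdiagnosis bias in screening statistics.
# All numbers are made up for illustration, not taken from any study.

N = 1000                   # screen-detected "cancers"
overdiagnosed = 0.30       # assumed fraction that would never have caused harm
relevant_survival = 0.70   # assumed survival rate for clinically relevant cancers

# Overdiagnosed patients survive regardless of treatment, yet the model
# counts every one of them as a treatment success.
survivors = N * overdiagnosed + N * (1 - overdiagnosed) * relevant_survival

apparent = survivors / N
print(f"Apparent survival among all screen-detected cancers: {apparent:.0%}")
print(f"Survival among the cancers that actually mattered:   {relevant_survival:.0%}")
```

Under these made-up numbers the screening program reports 79% survival while the clinically relevant cancers still have only 70% survival; the gap is manufactured entirely by counting harmless, overdiagnosed cases as cures.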
And what does the treatment do to the patient? It increases the probability of cancer, whether the treatment is chemo or irradiation. So in this case, AI will help to cause cancer! Hence, AI is neither artificial nor intelligent; rather, it is real and it is stupid. I call it Actualized Societal Stupidity, or, by acronym, ASS. A number of studies have found that these screenings by physicians cause harm to their patients. But this is big business for physicians, drug companies, insurance companies, and hospitals alike. Christie Aschwanden at Wired has more thoughts on how AI is a problem within the medical world. As John Horgan of the Stevens Institute of Technology (one of the oldest technology institutes in the US) has written in Scientific American, “Cancer medicine generates enormous revenues but marginal benefits for patients.” AI, better known as ASS (Actualized Societal Stupidity), in the hands of corporate medicine with money-incentivized physicians at the helm, will likely make the cancer business bigger with poorer outcomes. As Dr. Rodney Brooks has said, “one of the deadly sins was how we humans mistake performance for competence.”
AI can work, and work well, in many cases. When the US government controls it and puts it to good use, the results can be fantastic. Case in point: Primer, a small artificial-intelligence firm based in downtown San Francisco, one of my favorite big cities because it’s a focal point of innovation. As the NY Times reports, not long after the war in Ukraine started, Primer’s engineers, working with Western allies, tapped into a tidal wave of intercepted Russian radio communications. Primer used its software to clean up the noise, automatically translate the conversations, and, most importantly, isolate moments when Russian soldiers in Ukraine were discussing weapons systems, locations, and other tactically important information. The same work would have required hundreds of intelligence analysts to identify the few relevant clues in the mass of radio traffic. Now it was happening in a matter of minutes.
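Primer’s actual system is proprietary, but the final step the Times describes (flagging the few tactically relevant snippets in a flood of chatter) can be sketched very crudely. Everything below, from the keyword list to the sample transcripts, is invented for illustration and assumes the audio has already been transcribed and translated:

```python
# Crude sketch of the last stage of such a pipeline: keeping only the
# transcript lines that mention a tactically relevant term. The real
# system is far more sophisticated; these keywords and transcripts
# are invented for illustration.

TACTICAL_TERMS = {"artillery", "coordinates", "convoy", "ammunition"}

def flag_relevant(transcripts):
    """Return only the lines containing at least one tactical keyword."""
    flagged = []
    for line in transcripts:
        if set(line.lower().split()) & TACTICAL_TERMS:
            flagged.append(line)
    return flagged

sample = [
    "requesting resupply of ammunition at the depot",
    "the weather is clear today",
    "move the convoy north at dawn",
]
print(flag_relevant(sample))  # keeps lines 1 and 3, drops the small talk
```

The point is the shape of the task, not the method: a machine triages millions of lines down to the handful a human analyst should actually read.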
AI can be used for the greater good if society, acting through our government, understands its uses and limitations and implements these platforms in a thoughtful manner. Leaving it up to unregulated corporations and their money-hungry executives will continue the disaster that is now happening. Right now, AI is being used to deny Medicare Advantage patients (Medicare Advantage is a privatized, deregulated substitute for Medicare, brought to you by Republicans) their needed treatments. Insurance companies tweak AI programs to deny the care, and a physician employed by the company signs off on the document without ever having reviewed the claim. The physicians and the company make money; the Medicare Advantage patient is screwed. All hail unregulated AI.