AI depends on training with good data. That is not happening with many AI platforms. And when AI is used within the current paradigm of deregulated, privatized medicine, more people will die.
There’s a new boom happening in San Francisco. It’s big. It’s AI. Investors announced $10.7 billion in funding for generative AI startups within the first three months of 2023, a thirteenfold increase from a year earlier. Salesforce in San Francisco has announced that its VC arm will invest $500 million in generative AI startups. The possibilities of AI are endless, and it has been, and can be, of great value to society. Dr. Ken Goldberg, Ph.D., professor and director of a robotics lab at the University of California, Berkeley, sums it up nicely in a short, well-informed opinion piece. As Dr. Goldberg writes, “Engaging with AI’s unique form of creativity could lead to unexpected new discoveries.” Already the discoveries are diverse and dazzling. Who would have thought that a doctor of astrophysics, working in Berkeley at his company, Climax Foods, would use AI to discover a combination of plant molecules that mimics bovine casein protein, yielding a vegan Brie indistinguishable from the unsustainable, cancer-causing (e.g., liver, breast, and prostate) stuff from cows?
Some will hype AI as having emergent properties and even consciousness. This is pure dross. Computers process information entirely through mathematics to find correlations, while humans think primarily with reason. The AI in computers makes associations; it is unable to reason or to make intellectual leaps. It is simply good at processing huge data sets and then summarizing those data. As Celeste Kidd at UC Berkeley has written in Science, “Overhyped, unrealistic, and exaggerated capabilities permeate how generative AI models are presented, which contributes to the popular misconception that these models exceed human-level reasoning and exacerbates the risk of transmission of false information and negative stereotypes to people.” Further, “Once a faulty belief is fixed within a person—and especially if the same fabrication or bias is passed and then becomes fixed in many people who use the same system—it can pass among people in the population in perpetuity.” Young people can be particularly vulnerable. In other words, the AI hype goes into a positive feedback loop: people believe the hype, repeat it, and then AI amplifies what has been repeated. This is no more than false history repeating itself. Scientists at Stanford have found that so-called emergent abilities may be creations of the researchers’ choices, not a fundamental property of the model family on the specific task. We’re early in the hype curve, similar to what I’ve explained for stem cells in the biotech arena. The hype is real, but so are the benefits of AI. The hype will settle in time as we more deeply understand AI and reality spreads to a broader audience. A quick glimpse at the food industry, for example, finds AI-driven robots, made by Monarch Tractor (Livermore, CA), in the fields of California fertilizing the crops, while AI-driven robots in the kitchen at Chipotle (Newport Beach, CA) peel the avocados (Vebu Labs, Los Angeles) and fry the tortillas (Miso Robotics, Pasadena, CA).
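To make the Stanford point concrete, here is a toy sketch (my own illustration, not the Stanford team’s code) of how the choice of metric alone can conjure an “emergent” jump: score a model with an all-or-nothing metric like exact match, and smooth, gradual improvement suddenly looks like a leap.

```python
# Toy illustration (not the Stanford authors' code) of how a metric choice can
# manufacture an apparent "emergent" jump in ability. Assume per-token accuracy
# improves smoothly and gradually with model scale.

scales = [1, 2, 4, 8, 16, 32, 64, 128]          # arbitrary model-size units
per_token_accuracy = [0.50, 0.58, 0.66, 0.74, 0.82, 0.88, 0.93, 0.97]

answer_length = 10  # the task requires getting all 10 tokens exactly right

for scale, p in zip(scales, per_token_accuracy):
    exact_match = p ** answer_length   # all-or-nothing metric: every token correct
    print(f"scale={scale:4d}  per-token={p:.2f}  exact-match={exact_match:.3f}")

# The smooth per-token metric improves steadily, but the exact-match metric
# sits near zero and then "suddenly" shoots up -- an apparent emergent ability
# that is really just a property of the chosen metric.
```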
Highly educated people are flocking to San Francisco, including Drs. Bernardo Aceituno and Toni Rosinol, both of whom earned their Ph.D.s at MIT. They’ve built a platform that allows LLM (large language model) applications to be constructed more easily. This will be huge. Expect a unicorn. They moved their company from NYC to San Francisco to be in the center of the action. They began their journey in the San Francisco Bay Area at the famous startup accelerator Y Combinator, in Mountain View. Most companies in Y Combinator are located in the SF Bay Area; of a recent batch of 270 startups, 86 percent participated locally. Companies accepted into Y Combinator have a combined valuation of over $600 billion. Repeat, not a typo: $600 billion. Many small AI startups can now use core AI platforms from companies such as OpenAI in San Francisco and build their particular model on top of the OpenAI core.
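To give a flavor of what “building on top of the OpenAI core” means in practice, here is a minimal sketch using the OpenAI Python client; the model name, the prompts, and the grocery-assistant use case are purely illustrative assumptions on my part, not any particular startup’s product.

```python
# Minimal sketch of building a niche product on top of a core model via the
# OpenAI Python client. Model name and prompts are illustrative; check OpenAI's
# current documentation for available models.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def answer_customer(question: str) -> str:
    """A startup's entire 'product' can be a domain-specific wrapper like this."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "You are a produce-buying assistant for a grocery chain."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_customer("How should ripe avocados be stored before peeling?"))
```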
When Bill Gates and Microsoft incorporated ChatGPT into the Bing search engine, the large number of factual errors triggered an avalanche of criticism. A Microsoft spokesperson said the company expected “mistakes.” Having tried ChatGPT myself, what I received in response to my queries was loaded with garbage. Now the a16z-backed Character.AI app claims over 1.7 million new installs in less than a week on the market. I’ve not tried the Palo Alto-based company’s product, but many have been sucked in, and according to the company, users quickly become engaged after first use. However, compared to web-based knowledge resources such as Google Search, which return numerous results and require the searcher to synthesize information, ChatGPT can work well. Case in point: the use of ChatGPT to answer public health questions. In one 2023 study by scientists at UCSD, ChatGPT consistently provided accurate, evidence-based answers to public health questions, although it primarily offered advice rather than referrals. In other words, rather than someone having to sift through content on Google Search to find the relevant info, ChatGPT did a good job of finding that info for them.
Now for the popular, glitzy stuff. As Roger McNamee has written, most popular AI programs scrape the internet for inputs, whether those inputs are validated data or just made-up BS. Imagine an AI platform scraping data from the internet that was propagated by another AI platform. This is likely happening, and it means that stuff on the internet contains synthetic, multiplicative BS (including AI hallucinations). OpenAI in San Francisco has been a springboard for many of these startups by offering an AI platform that allows people without computer science backgrounds to generate their own startups. People with doctorates in computer science, such as John Schulman, Ph.D., who earned his degree working with Dr. Pieter Abbeel at Berkeley EECS (my old haunt back in the 80s when I was a Research Engineer at EECS, Berkeley – I remember viscerally the day in ’89 when the quake made Cory Hall undulate), have enabled people coming from healthcare to start their own AI companies. No experience required, just smarts, a good idea, a good education, and hard work. Anthropic in San Francisco, cofounded by Daniela Amodei, a UCSC graduate with a degree in English Literature, is an example. Anthropic uses a written set of principles to check and revise its model’s responses, a process it named Constitutional AI.
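A rough sketch of the critique-and-revise idea behind Constitutional AI follows; the `generate()` stub and the two principles are my own placeholders for illustration, not Anthropic’s actual constitution or code.

```python
# Rough sketch of the critique-and-revise idea behind "Constitutional AI".
# The generate() stub and the principles are placeholders, not Anthropic's.

PRINCIPLES = [
    "Do not provide medical advice that could cause harm.",
    "Do not repeat unverified claims as fact.",
]

def generate(prompt: str) -> str:
    # Stub so the sketch runs end to end; a real system would call a model here.
    return f"[model output for: {prompt[:40]}...]"

def constitutional_pass(user_prompt: str) -> str:
    """Draft an answer, then critique and revise it against each principle."""
    draft = generate(user_prompt)
    for principle in PRINCIPLES:
        critique = generate(
            f"Does the following answer violate this principle?\n"
            f"Principle: {principle}\nAnswer: {draft}\nExplain briefly."
        )
        draft = generate(
            f"Rewrite the answer so it satisfies the principle.\n"
            f"Principle: {principle}\nCritique: {critique}\nAnswer: {draft}"
        )
    return draft

if __name__ == "__main__":
    print(constitutional_pass("Is this supplement a proven cancer cure?"))
```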
There are new chipmakers for AI too. Currently Nvidia, in Santa Clara, is the major player. It makes cutting-edge graphics processing units that are priced at about $30,000 apiece and that AI startups are clamoring to get their hands on. But those chips are still built for graphics, not language, so newer AI-specific chips for LLMs are now on the market. Two basic types that I know about are: 1) chips that use a huge number of transistors to process the input extremely fast, such as those made by Cerebras in Sunnyvale, and 2) chips that separate the data ahead of time and feed through only what is needed, in sequence, such as those made by Groq in Mountain View. The Cerebras chip is huge, the size of a dinner plate. Either way, the chips are in high demand.
As San Francisco’s Dr. Rodney Brooks, Ph.D., former professor at MIT, has said about ChatGPT, “It doesn’t have any connection to the world. It is correlation between language.” It’s next-word prediction. Yet AI, which is just people writing code that collects data, good quality or not, then synthesizes those data and spits out a response, is controlling, and has been controlling, many things in our lives for over a decade. All of this information that AI spits at you can generate “context bubbles.” In other words, the algorithms give you inputs that you want to hear or that will elicit emotional responses. What’s presented is not necessarily information, and you will not necessarily learn anything. The inputs are garbage, either telling you false things you already know and believe, or feeding you false content that you don’t know but, unfortunately, will believe. The few signals available in the inputs will be hidden in the noise. This helps produce people left agape, spewing emotional nonsense, often vitriol. It happens for the sake of money; the business models are based on being paid to show ads while people spend time engaging with content, so many companies have an incentive to show you whatever content will be most engaging for you, regardless of its quality, accuracy, and impact on you or on society. Many AI leaders, such as Mustafa Suleyman, cofounder of DeepMind and Inflection AI, have brought attention to the power of AI and the need to use it for the greater good of society through cooperative efforts that involve government, society, and the AI companies. Tristan Harris, of the Center for Humane Technology in San Francisco, recently gave a thoughtful presentation on how society, acting through government, must control the use of AI and related technologies for the greater good. On top of next-word-prediction LLMs, John Schulman, Ph.D., introduced reinforcement learning from human feedback, which gives the AI an objective: its responses are rated by people, and the model is tuned toward the responses the raters prefer. Improvements have been realized. A new startup in Berkeley called Perplexity AI, cofounded by Andy Konwinski, Ph.D., another Berkeley EECS grad who also cofounded Databricks, is working on solving these problems of quality, accuracy, and impact on society.
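To see what “it’s next-word prediction” means in the simplest possible terms, here is a toy bigram model; real LLMs use neural networks over enormous corpora, but the underlying task, predicting a statistically likely next word from the words so far, is the same.

```python
# A toy bigram model to make "it's next-word prediction" concrete. Real LLMs
# use neural networks over enormous corpora, but the core task is the same:
# given the words so far, pick a statistically likely next word.
import random
from collections import defaultdict

corpus = ("the model predicts the next word the model has seen "
          "the word the model predicts is just the likeliest word").split()

# Count which word follows which word in the corpus.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def continue_text(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:          # no observed continuation; stop early
            break
        words.append(random.choice(options))
    return " ".join(words)

print(continue_text("the"))
# No understanding, no connection to the world -- only correlations between
# words that happened to appear next to each other in the corpus.
```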
When we’re thinking about AGI (artificial general intelligence) and large generative AI projects, development of these technologies favors companies with larger, proprietary data sets, which gives an edge to more established companies. Some of these data sets are available for purchase, but they are expensive and startups often can’t afford them. This is similar to what I have experienced as an entrepreneur scientist in the biotech space, where many scientists create a new technology only to lose their company to large investors or partners during the capital-intensive development of a product. The risks for AI developers are becoming like those of the biotech industry, in which research and development begins with startups but most of the benefits ultimately accrue to the parent company. Further, the capital-intensive nature of training large language models means that smaller companies creating their own large language models have few alternatives beyond making Faustian deals with tech giants. Once the corporations have you, the “bean counters” take over and all that matters is money. The startup founders’ dream often turns into a nightmare. There are many brilliant young people in the “Cerebral Valley” of San Francisco working on their AI startups, and I hope their dreams come true. However, reality can be like the generative AI used in Tesla’s autonomous driving. Put it in the hands of a corporate BS artist like Elon Musk, and he’ll tell you that AI-powered driver assistance is “full self driving” and charge you $12k for it. The result of his BS: a massacre on the streets, 736 crashes and 17 dead. BTW, congratulations to Mercedes for having been certified for Level 3 autonomous driving in California, something Tesla hasn’t achieved. Mercedes’ AI center is in Sunnyvale, and its EV design center is in the San Diego area (Carlsbad).
But these overly wealthy BS artists, like Gates and Musk, control the narrative and spread false stories of their creative genius while canceling, including through costly lawsuits, those who actually were the creators. Now, taking cues from Gates and Musk, is the newly minted BS artist working in London, UK, Emad Mostaque, who brings to AI what Musk brought to Tesla: hype, spin, and lies. According to one former employee of Mostaque, “What he is good at is taking other people’s work and putting his name on it, or doing stuff that you can’t check if it’s true.” Taking credit for what others have made, Mostaque is sucking money and attention from others in the field who are actually doing the hard work.
From electing thugs like Donald Trump to the presidency of the US, to crashing Tesla cars on the highway, AI is here. Be aware: many of those controlling AI companies, such as Bill Gates, are college dropouts who couldn’t be bothered to attain a broad-based, liberal arts education, but instead left college to pursue monetizing, i.e., making money from, techy things. What happens when we leave these poorly educated, money-making techies in charge of the internet of things (IoT) without regulations? Look around, listen: it’s a world full of BS and anarchy, and the rich guys, such as Bill Gates, who came from a wealthy, politically connected family, control it in a society described as one of “Economic-Elite Domination.” Most people don’t know that Bill Gates, with his politically connected lawyer father, unscrupulously killed the world’s most advanced operating system for small-platform computers (32-bit in 1995), called OS/2 Warp, which began as a joint venture between IBM and Microsoft, for the sake of dominating the market. I was a victim of Gates’s shenanigans when my self-made server, operating in my university lab using OS/2, no longer had support and I eventually had to adopt the shitty 16-bit Microsoft Windows operating system. I was furious with the high-pitch-voiced incel who was ruining the once rapidly advancing computer and internet technology space. I really wished he had spent more time trolling the streets of Seattle for strippers and not screwing the computer industry; unfortunately, only later did he hook up with Jeffrey Epstein.
Now AI is moving into the medical treatment system, euphemistically called the healthcare system. You may have heard the hype about the study in which Google AI claimed to detect breast cancer better than physicians did. Let’s ask one of the questions that I always teach my students to ask: “Compared to what?” The Google study found that AI performed better than radiologists who were not specifically trained in examining mammograms. So, when asking “Compared to what?,” Google AI in some sense performed better than naive physicians who had not been trained in detecting breast cancer. But there are larger problems with using AI to screen for breast cancer. As Dr. Peter Gotzsche has published, “As screening does not reduce the incidence of advanced cancers, we would not expect screening to have an effect on breast cancer mortality today.” Further, “The fundamental error with these models is that they do not distinguish between clinically relevant cancers, which would have appeared at a later time if there had not been screening, and the overdiagnosed cancers that would never have appeared. The models include all of them, but in actual fact, the lead time of clinically relevant cancers is less than a year.” What Dr. Gotzsche is saying is that the models used to evaluate cancer screening are fundamentally flawed: cancers that will never cause problems are detected and treated, and the patient goes on to survive. Problem is, the patient would have survived anyway, without the treatment. The model therefore erroneously counts the early detection as a success because the patient survived. In other words, garbage in, garbage out.
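A toy simulation makes Dr. Gotzsche’s point about overdiagnosis concrete; the numbers below are invented solely for illustration and assume, for the sake of argument, that treatment adds no benefit at all.

```python
# Toy simulation of the overdiagnosis problem Dr. Gotzsche describes. The
# numbers are invented purely for illustration and are not clinical data.
import random

random.seed(0)

N = 10_000
OVERDIAGNOSED_FRACTION = 0.30   # assumed share of screen-detected cancers that
                                # would never have caused symptoms or death
BASELINE_SURVIVAL = 0.70        # assumed survival of clinically relevant cancers
                                # (the same with or without the useless treatment)

survivors = 0
for _ in range(N):
    overdiagnosed = random.random() < OVERDIAGNOSED_FRACTION
    if overdiagnosed:
        survivors += 1                                  # never at risk anyway
    else:
        survivors += random.random() < BASELINE_SURVIVAL

print(f"Apparent post-screening survival: {survivors / N:.0%}")
# Screening-plus-treatment reports roughly 79% survival, versus the ~70% seen
# in symptom-detected cancers, even though treatment added nothing: the
# overdiagnosed cases were never at risk. Garbage in, garbage out.
```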
And what does the treatment do to the patient? It increases the probability of cancer, whether the treatment is chemo or irradiation. So in this case, AI will help to cause cancer! Hence, in this case, AI is neither artificial nor intelligent; rather, it is real and it is stupid. I call it Actualized Societal Stupidity, or, using an acronym, ASS. A number of studies have found that these screenings by physicians cause harm to their patients. But this is big business for physicians, drug companies, insurance companies, and hospitals alike. Christie Aschwanden at Wired has some more thoughts on how AI is a problem within the medical world. As John Horgan of the Stevens Institute of Technology (one of the oldest technology institutes in the US) has written in Scientific American, “Cancer medicine generates enormous revenues but marginal benefits for patients.” AI, better known as ASS (Actualized Societal Stupidity), when it is in the hands of corporate medicine with money-incentivized physicians at the helm, will likely make the cancer business bigger with poorer outcomes. As Dr. Rodney Brooks has said, “one of the deadly sins was how we humans mistake performance for competence.”
AI can work, and work well, in many cases. When the US government controls it and puts it to good use, the results can be fantastic. Case in point: Primer, a small artificial-intelligence firm based in downtown San Francisco, one of my favorite big cities because it’s a focal point of innovation. As the NY Times reports, not long after the war in Ukraine started, Primer’s engineers, working with Western allies, tapped into a tidal wave of intercepted Russian radio communications. Primer used its software to clean up the noise, automatically translate the conversations, and, most importantly, isolate moments when Russian soldiers in Ukraine were discussing weapons systems, locations, and other tactically important information. The same work would have required hundreds of intelligence analysts to identify the few relevant clues in the mass of radio traffic. Now it was happening in a matter of minutes. All of this will become better and faster for many reasons, including the new computer architectures being created for AI at startups such as Cerebras in Sunnyvale, SambaNova in Palo Alto, Habana in San Jose, and Groq in Mountain View. Cerebras has built a 2-exaflop computer system called Condor Galaxy 1. At the heart of the system is an AI-specific processor with 2.6 trillion transistors and 850,000 AI cores, made from a full wafer of silicon. It’s friggin’ huge.
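For readers curious what such a triage workflow looks like in outline, here is a schematic sketch; every function and keyword below is a hypothetical placeholder of my own, not Primer’s software.

```python
# Schematic of the kind of intercept-processing pipeline described above.
# Every function here is a hypothetical stand-in: denoise, transcribe,
# translate, then flag the tiny fraction of traffic worth an analyst's time.
KEYWORDS = {"artillery", "grid", "coordinates", "battalion", "missile"}

def denoise(audio: bytes) -> bytes:
    return audio                      # stand-in for real signal processing

def transcribe(audio: bytes) -> str:
    return "..."                      # stand-in for speech-to-text

def translate(text: str) -> str:
    return text                       # stand-in for machine translation

def is_tactically_relevant(text: str) -> bool:
    return any(word in text.lower() for word in KEYWORDS)

def triage(intercepts: list[bytes]) -> list[str]:
    """Return only the translated clips an analyst should look at."""
    flagged = []
    for clip in intercepts:
        text = translate(transcribe(denoise(clip)))
        if is_tactically_relevant(text):
            flagged.append(text)
    return flagged

if __name__ == "__main__":
    print(triage([b"clip-1", b"clip-2"]))  # stubs flag nothing; shape only
```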
And, down the street from my laboratories in San Diego, Brain Corp is using AI in its factory robots, which have been successfully deployed throughout the world. AI can be used for the greater good if society, acting through our government, understands its uses and limitations and implements these platforms in a thoughtful manner. Leaving it up to unregulated corporations and their money-hungry executives will continue the disaster that is now happening. Right now, AI is being used to deny Medicare Advantage patients (Medicare Advantage is a privatized, deregulated substitute for Medicare brought to you by Republicans) their needed treatments. Insurance companies tweak AI programs to deny the care, and a physician employed by the company signs off on the document without having reviewed the claim. The physicians and the company make money; the Medicare Advantage patient is screwed. All hail unregulated AI.
