RBS News

Written By: Alan J. Ross | 2024-08-16

What Just Happened? Initial Thoughts on the AI Revolution

For the past few months, I have devoted most of my time to learning the ins and outs of artificial intelligence, which I might add is neither artificial nor intelligent. This process will continue well into the future. I have read quite a few books on the subject, some good, some great. I have revisited calculus, differential equations, and probability and statistics, the foundations of neural networks – hey, it’s been more than fifty years since I took those courses. I have gone through a few online instruction courses. And I have become intimately familiar with various AI writers on Substack, as well as MIT and Medium posts, which provide daily links to practically every worthwhile new report on the latest developments in neural networks, machine learning, deep learning, agents, accuracy tests and benchmark results, and algorithms, to name just a few of the overwhelming number of developments that AI engineers are raining down on us. I have read the original articles that reported practically every major development since Minsky and Papert’s 1969 book put the fork in the perceptron work Rosenblatt had done at Cornell in the late 1950s, helping bring on what became known as the first AI winter. I’ve even started to learn to code in Python 3, the programming language in which most modern neural networks are built. I still have a ways to go.

My conclusion: We are about to enter the third AI winter. Why? Well, to quote Gary Marcus of Marcus on AI, machine learning models, certainly in their various current forms and perhaps irrespective of how they evolve in the future, cannot handle what he terms “outliers.” In its simplest form, an outlier is just something the model hasn’t seen before. Marcus has been saying this for thirty years, and AI engineers, data scientists, and VCs are beginning to listen. Most recently, he explained why this is such a problem. Referring to a recent Wall Street Journal video of a driverless car crashing into an overturned double trailer, he quoted Carnegie Mellon computer scientist Phil Koopman, who explained in the video that the AI model had never been trained to recognize an overturned double trailer. The model had no idea what it was and simply ignored its existence. And the car the model was driving crashed into it.

Humans have what is called generalized knowledge. AI models do not, and you cannot train an AI model on everything it is likely to encounter. Machine learning involves the creation of a mathematical function. The models operate in a space of multidimensional matrices and vectors. They don’t think. They are trained to place words, images, or whatever else they process in a multidimensional mathematical space, which they then rely upon in their operation. If they can’t find a region of that space, created during training, sufficiently similar to what they are being asked to decode, well, Houston, we have a problem. We get nonsense back, known as “hallucinations.” That’s not really a big problem when your teenage granddaughter gets a weird answer from her imaginary friend on ChatGPT, but the data clearly show that it is a problem for potential adopters who want to use AI in their businesses.
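To make that idea concrete, here is a deliberately toy sketch in Python (my own illustration, not the workings of any real model; the concepts, vectors, and numbers are all invented). It shows how a system that can only match new input against points in its trained space will return its nearest match even when the input is nothing like anything it has seen:

    # Toy illustration: each "concept" seen in training is a point in a
    # high-dimensional space, reduced here to three numbers for readability.
    from math import sqrt

    def cosine(a, b):
        """Similarity of two vectors: near 1.0 = very alike, near 0 = unrelated."""
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

    # Hypothetical training-time embeddings -- the only things this "model" knows.
    training_space = {
        "car":          [0.9, 0.1, 0.0],
        "truck":        [0.8, 0.3, 0.1],
        "traffic cone": [0.1, 0.9, 0.2],
    }

    def closest_match(query):
        """The system can only answer with whatever lies nearest in its trained space."""
        return max(training_space, key=lambda name: cosine(query, training_space[name]))

    # An input resembling the training data maps sensibly...
    print(closest_match([0.85, 0.2, 0.05]))  # 'car' -- a reasonable answer
    # ...but a true outlier is still forced onto the nearest point, however poor
    # the fit. The system answers either way; it cannot say "I don't know."
    print(closest_match([0.0, 0.05, 0.95]))  # an arbitrary, confident-looking answer

Real systems are vastly more sophisticated, but the structural point is the same: the answer always comes from the trained space, whether or not the input belongs there.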

Add to this the massive amounts of money being directed toward AI, to startups and the big players alike: with that much invested, AI must solve big problems, or the return on investment is essentially non-existent. AI in its current form, whether we’re talking straightforward deep learning models like ChatGPT in its various iterations, the newer edge models, or even the hybrid models now under development that combine symbolic and sub-symbolic AI systems, does not solve big problems. So a crash is coming, and it won’t be pleasant.

That crash will be even worse if Chairman Xi decides that reunification with Taiwan by force is necessary, thereby destroying high-end chip manufacturing for a decade or more. AI depends upon leading-edge 3 nm chips, the overwhelming majority of which are made in Taiwan, and whose production depends upon a supply chain of roughly 100 specialized suppliers, many of them located in China.

Does this mean that AI is going away, or that you won’t have to decide whether to rely on a human or a machine for any of the myriad tasks your business performs? Of course not. AI is here to stay. Machine learning and deep learning have many useful functions. The new models that seem to come out every week will find uses that make sense, cut costs, even make money. But even the companies heavily invested in AI are now calculating ROI over fifteen years, not ten as they were just a week or so ago.

What does that mean for companies eager to adopt AI? Any number of things. First, you had better understand exactly how it works. How it was designed. What choices were made at each of the myriad decision points that go into designing and engineering a neural network model. What choices were rejected? Why? How was the model trained? Where did the data come from? Will it be necessary to train the model on your business’s data? Do you own that data? What types of training will be used? What experience did the designers have? Model design is not a science. It is an art. And it requires experienced artists to make that art. Simply put, AI presents myriad issues to a business, many technical, many practical, many legal. And every situation is likely to be sui generis.
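For a sense of how many judgment calls hide inside even a trivial model, here is a minimal sketch using the PyTorch library (a hypothetical toy of my own, not any vendor’s actual design; every number in it is a choice someone had to make):

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(32, 64),   # choice: how wide each layer is
        nn.ReLU(),           # choice: which activation function (ReLU? GELU? tanh?)
        nn.Dropout(p=0.1),   # choice: whether, and how aggressively, to regularize
        nn.Linear(64, 2),    # choice: how many outputs, i.e., what the model decides
    )

    # Still more choices before the model sees a single example:
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # which optimizer, what learning rate
    loss_fn = nn.CrossEntropyLoss()                            # what counts as "wrong"

Multiply those few lines by the thousands of decisions in a production system, and the due-diligence questions above stop looking academic.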

Most of all, you had better conduct a realistic assessment of the risks you run using the model. Is the model likely to encounter outliers? What kinds of outliers? How will they affect output? It’s all in the details. Will liability to third parties or customers be an issue? What laws govern where you are selling? Will national, state, or local regulations affect a product or service touched by AI? Will customers use your service in a way that exposes you to liability?

In the past two years, the AI industry has tossed caution to the wind. It is moving at breakneck speed, introducing new models, new approaches, new chips, new companies, new, new, new! Businesses adopting this technology may well be best served by taking their time, assessing the risks, working with employees in an open, measured, and encouraging way, and recognizing that new technologies bring new risks that are often not understood until the collateral damage sets in.

The number of legal issues that AI raises at times seems infinite. Lawyers cannot even discuss this stuff with clients without first learning a vocabulary filled with terms that mean something entirely different from their everyday meanings. Although the math is not that difficult if you studied math, science, or engineering in college, most lawyers didn’t. The processes that AI machines run are now complex. Those processes are tweaked with a variety of algorithms that often operate on some, but not all, neurons in a model. The number of neurons in some of the big LLMs, and the number of math calculations these models perform, boggles the mind. A great deal of research is currently focused on determining exactly what specific neurons in a given neural network are doing. The math behind a lot of this work is quite difficult.
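The article does not name those algorithms, but one familiar example of a technique that acts on some neurons and not others is dropout. A short Python sketch using PyTorch (my own choice of illustration, offered as an assumption rather than anything identified above):

    import torch
    import torch.nn as nn

    layer = nn.Dropout(p=0.5)  # during training, each value has a 50% chance of being zeroed
    x = torch.ones(8)          # stand-in for the outputs of eight neurons

    layer.train()              # training mode: a random subset of neurons is silenced
    print(layer(x))            # e.g., tensor([2., 0., 2., 2., 0., 0., 2., 0.]) -- survivors rescaled

    layer.eval()               # inference mode: dropout is switched off entirely
    print(layer(x))            # tensor([1., 1., 1., 1., 1., 1., 1., 1.])

Note that the same layer behaves differently in training and in use, which is exactly the kind of detail a vocabulary lesson has to cover before the legal conversation can even start.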

Understanding the details of each stage in the life of a modern AI model (design, training, and, ultimately, the running of the trained model on new inputs, known as inference) is essential both to the proper functioning of the model when it is put into use and to solving the problems that are likely to arise.
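Sketched in code, those three stages look something like this (a hedged toy in PyTorch with invented data; real pipelines differ in scale, not in shape):

    import torch
    import torch.nn as nn

    # Design: the architecture is fixed before any data arrives.
    model = nn.Linear(4, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    # Training: weights are adjusted so example inputs map to example answers.
    inputs = torch.randn(100, 4)                # invented stand-in training data
    targets = inputs.sum(dim=1, keepdim=True)   # the "right answers" to learn
    for _ in range(500):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()

    # Inference: the trained function is simply evaluated on a new input.
    with torch.no_grad():                       # no learning happens here
        print(model(torch.tensor([[1.0, 2.0, 3.0, 4.0]])))  # should be close to 10.0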

When dealing with this stuff, I try to remember a quote from George Box, one of the great statistical minds of the 20th century: “All models are wrong, but some are useful.”

In the coming months, we will be posting our thoughts on developments in AI, the regulation of AI, and the laws governing AI as they develop and evolve. We hope at some point to publish a treatise of sorts that explains the constituents of AI models, their design, training, and uses, as well as detailed treatments of both the governing regulations and the law.

The literature is filled with optimistic engineers and scientists telling us that the advent of AI constitutes an inflection point in human culture comparable to the arrival of electricity. Perhaps. Many voices, however, are urging caution and deliberation. We live in interesting times. Be careful.
