From Weak AI to Organic Artificial Intelligence

  • Published on 7 October 2021
  • Bruno Maisonnier
  • 16 minutes

Artificial intelligence has taken center stage in forward-looking discourse on the city. In 2021, anticipating the advent of generative AIs, Bruno Maisonnier distinguishes between weak AI, which is less about intelligence than about computing power, and organic AI, developed on the model of the brain and social insects. Despite the risks inherent in the introduction of any new technology before its use has been regulated, Bruno Maisonnier offers an optimistic view of artificial intelligence, particularly with regard to the optimization of genetic engineering.

What are the specific features of the so-called “organic” artificial intelligence you’re developing, and how does it differ from the “weak” AI we know today?

What everyone is currently calling artificial intelligence has very little to do with intelligence; it is essentially deep learning or machine learning, that is, a means of learning through examples. By feeding a machine millions of examples and teaching it what they depict, it becomes capable of recognizing them. The pairing of what is shown with what must be associated with it is essential. This association between an image and the concept it represents is called “annotation.” Based on this, the machine becomes capable of establishing a sort of screening process that enables it to identify any new information. For instance, in order to successfully identify a tumor on a chest X-ray, millions of X-rays would first have to be fed to the machine, stating whether each image includes a tumor or not, and, if it does, whether the tumor is benign or malignant, in stage I, II, or III. Thanks to this human-prepared categorization, the machine will then proceed by analogy and discrimination, and become capable of categorizing new examples it has not encountered before.
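To make the mechanism concrete, here is a minimal sketch of this learn-by-annotation loop, using scikit-learn with synthetic feature vectors standing in for annotated X-rays; the dataset and features are invented for illustration, not AnotherBrain's method.

```python
# A minimal sketch of "learning through examples": a classifier is fitted on
# annotated data, then sorts new inputs it has never seen. Synthetic feature
# vectors stand in for the millions of annotated X-rays described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical annotated dataset: each row is an image reduced to features,
# each label is the human-provided annotation ("no tumor" = 0, "tumor" = 1).
X_train = rng.normal(size=(1000, 16))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The fitted model acts as the "screening process": it categorizes a new,
# previously unseen example by analogy with the annotated ones.
x_new = rng.normal(size=(1, 16))
print(model.predict(x_new))  # 0 or 1, with no notion of anything outside its labels
```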

This is what is currently referred to as “artificial intelligence,” but it is nothing more than statistics and computing power, not intelligence at all. For this very reason, I prefer the expression “AI” to “artificial intelligence,” to avoid having to utter the term. The limits of this technology are in fact quite obvious: simply asking it to identify something that has never been categorized before brings the system to a grinding halt. If you were to show an image of a pangolin to a machine without having previously taught it to recognize the animal, it would be unable to characterize it. Therein lies the problem with autonomous vehicles: it is impossible to draw up a comprehensive list of all the situations a self-driving car can face, so getting to zero risk of accidents is impossible. Conceptually, there is no place for surprise in the prediction system. These so-called “weak” AIs, despite being devoid of adaptability, are nevertheless immensely valuable in certain fields. In radiology, for instance, they can formulate better answers than human beings, and they will thus become increasingly prevalent in a society where the demand for results is ever-increasing.

The brain works differently. It doesn’t need billions of examples to achieve understanding; two attempts are generally enough. A child who is brought to a contemporary art museum will instantly recognize a chair and its purpose, even if it happens to have an unusual shape or colors. The brain analyzes extremely pragmatic and basic situations, then examines the structure before establishing a hierarchy and rebuilding patterns based on what it has previously seen. This is the functioning I’m trying to reproduce. AnotherBrain’s goal is to build a small brain, an integrated circuit endowed with “true” intelligence. By drawing on the way the brain works, we are trying to develop an AI that we call “organic.”

The first similarity is that it uses very little energy, just like the cerebral cortex. Our brain requires only 20 watts to perform all its functions, while the so-called “cognitive” part, responsible for intelligence, consumes only 5 watts, less than the dimmest lightbulb hanging from your ceiling! Deep learning, on the other hand, requires 500 kW simply to develop a screening process, with carbon emissions equivalent to the lifetime emissions of five conventional cars. This is a huge cost for society at large. Of course, weighing the pros and cons, it may be warranted, but the magnitude of the carbon footprint should be borne in mind.
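Taking the figures quoted here at face value, the gap is easy to quantify:

```python
# Order-of-magnitude comparison, using the interview's own figures at face
# value: 5 W for the brain's cognitive functions vs. 500 kW for developing
# a deep-learning screening process.
brain_cognitive_watts = 5
deep_learning_watts = 500_000

print(deep_learning_watts / brain_cognitive_watts)  # 100000.0, a 100,000-fold gap
```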

We are also seeking to get around the limitations of mainstream AI in terms of data. Just like a brain, organic AI doesn’t operate based on a predefined situation; it is capable of analyzing and understanding a situation experienced for the first time, which is particularly useful when data cannot be categorized and labeled, as is the case with autonomous vehicles. It is also particularly interesting in situations where data is lacking. Take observatories attempting to spot new extrasolar planets, those that revolve around stars other than the Sun: identification relies on detecting a planet’s effect on the radiation released by its parent star. In such an application, AI would prove very useful, but with the tools we have today, viz., deep learning, it would be necessary to build a screening process from the light output of a large number of stars, annotated with the nearby presence of one or more planets, which is data we do not have given that this is precisely what we’re looking for. In the vast majority of situations, it is therefore impossible to implement an effective AI, and we continue to use the usual tools: manual driving for cars, optical telescopes for planet hunting.
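As a toy illustration of the point (not AnotherBrain's method), the sketch below flags candidate transits as anomalous dips in a synthetic light curve, with no annotations involved; all numbers are invented.

```python
# Toy illustration of the labeling problem: with no annotated examples of
# "star with planet" vs. "star without planet", a screening process cannot
# be trained. A structure-seeking approach can still flag candidate transits
# as anomalous dips in an (entirely synthetic) light curve.
import numpy as np

rng = np.random.default_rng(1)
flux = 1.0 + rng.normal(0, 0.001, size=5000)   # simulated stellar brightness
flux[1000:1010] -= 0.01                         # a transit-like dip (unlabeled)
flux[3500:3510] -= 0.01                         # a second dip, one "orbit" later

# Flag any point more than 5 standard deviations below the median brightness.
threshold = np.median(flux) - 5 * np.std(flux)
candidates = np.flatnonzero(flux < threshold)
print(candidates)  # indices of the dips, found without any annotation
```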

Finally, the technology we are developing aims to solve one of the major problems AI is facing: explainability. No weak AI is currently capable of explaining how it reaches its verdict. It can state “yes” or “no,” “cat” or “dog,” “healthy” or “diseased” through the analysis of millions of data points, but it is unable to point out why. This lack of justification is intolerable. Take banks, for instance, which could use AI for the purpose of granting loans. Thanks to historical data on millions of clients, their account status, and their past credit history, they would be capable of defining customer-profile classes—“will repay the loan” and “won’t repay the loan”—to process new applications. This matters because credit currently relies on tangible and measurable criteria such as salary, age, cost of rent, job type, and so on: loans are granted based solely on the borrower’s present, standalone capacity to repay. Life, however, is much more complex than a bank account. There are people who, as a matter of principle, will always pay off their debts, perhaps with the help of their relatives, and others who will intentionally try to commit fraud. Reality isn’t simply a matter of whether individuals have the capacity to repay a loan; it is closely linked to cultural and individual factors. Credit institutions will therefore gradually supplement the financial data they have on file with data relating to behavior and daily life, which will, in turn, raise issues of information privacy.

In any event, once the system is enhanced and the “won’t repay” profile is reliably established, how can a loan rejection be justified? Is it because a given individual happens to be a woman, a person of color, a single parent with two dependent children? Is that discrimination, and how can one possibly demonstrate that it isn’t? Simply stating a rejection, with no explanation of why, isn’t acceptable. That is one of the problems with conventional AI. With our system, by contrast, since it forms an answer based on an understanding of the situation, it is capable of providing the reason underlying it. The artificial intelligence we are trying to develop is therefore a true “organic” intelligence: it uses very little energy, doesn’t need a large amount of data, and produces results that can be explained.
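A toy contrast may help; the rules and thresholds below are invented, and the point is only that a decision built from explicit, understood criteria can name the reason for a rejection, where a statistical score cannot.

```python
# Toy sketch of an explainable decision. The rules and thresholds are
# invented for illustration; what matters is that every rejection comes
# with the criterion that triggered it.
from dataclasses import dataclass

@dataclass
class Application:
    monthly_income: float
    monthly_rent: float
    existing_debt: float

def decide(app: Application) -> tuple[bool, str]:
    """Return (approved, reason) so every rejection is justified."""
    if app.monthly_rent > 0.5 * app.monthly_income:
        return False, "rent exceeds half of monthly income"
    if app.existing_debt > 10 * app.monthly_income:
        return False, "existing debt exceeds ten months of income"
    return True, "all affordability criteria met"

approved, reason = decide(Application(2500, 1500, 0))
print(approved, "-", reason)  # False - rent exceeds half of monthly income
```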


How can an AI be taught how to analyze a situation? Your cross-disciplinary team includes neurobiologists. Do you draw any inspiration from the brain’s learning process?

The example of social insects—bees, ants, or termites—is more relevant here. Termites are primitive beings with relatively low “intelligence.” Nevertheless, by teaming up, they are capable of building structures five to ten meters high, with differentiated control of temperature, humidity, and airflow velocity, depending on whether chambers are used to store eggs or to raise fungi. Even when outside temperatures reach 40°C (104°F), inside, temperatures never exceed 28°C (82°F), and all this without using any energy. But what’s really magical is that when termites are moved elsewhere, they’ll rebuild a mound with the same built-in functions, though probably with a different shape. Both reiteration and variation can therefore be observed. Yet termites don’t follow a social order organized along the lines of “overseers” and “workers,” with “leaders” and “underlings.” These animals are not very good at communicating and aren’t subject to any principle of monitoring or supervision. One may then wonder how individuals so unsophisticated from a biological standpoint can give rise to such an intelligent collective. This is what is referred to as the principle of emergence.

By giving a few elementary rules to an “agent,” and then by associating several “agents” together, behaviors emerge: something happens at the level of the wider whole. This new capacity is a product of the synergy; the “agents,” taken separately, lack it. It lies at the very core of how complex systems operate, including how neurons give rise to a brain, and cells to a body.

Though it has been observed, emergence remains highly theoretical, given that no one has yet got to the bottom of this natural law or elucidated its rules. We therefore know only its principle, not how it operates. What we do know, however, is that each “agent” follows a number of basic rules that enable it to perform some twenty different actions. All you then have to do is find the proper set of basic rules for each “mission.” Take the nest-building “mission,” which happens to be a famous coding challenge: the record for building a virtual ant nest was achieved with only three basic rules:

  1. When an ant finds a soil pellet that is steeped in pheromones at a level between two thresholds, it grasps the pellet and piles it on top of another one.
  2. When the height of the pile exceeds the ant’s body size, the pellet is deposited beside the pellet at the top of the pile, thereby closing off the passage.
  3. When the humidity level exceeds a certain threshold, the pellet is deposited beside another one.

These rules are sufficient to adapt the architecture to its environment. Since the ventilation rate changes pheromone levels, the shape of the chambers follows from it, and, when humidity levels increase, the pillars of the nest are made thicker to prevent it from collapsing. It is of the utmost simplicity.
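As a rough illustration (not the record-holding program), here is a minimal sketch of how such stigmergic rules might be simulated; the grid, thresholds, and pheromone dynamics are all invented.

```python
# A highly simplified, one-dimensional sketch of the three rules quoted
# above. The point is that structure emerges from local decisions, with
# no agent holding a plan of the whole nest.
import random

random.seed(0)

SIZE = 40
PHEROMONE_LOW, PHEROMONE_HIGH = 0.2, 0.8   # rule 1 thresholds (illustrative)
BODY_SIZE = 3                               # rule 2: max pile height before going sideways
HUMIDITY_LIMIT = 0.7                        # rule 3 threshold (illustrative)

pile = [0] * SIZE                           # pellet pile height per cell
pheromone = [random.random() for _ in range(SIZE)]
humidity = [random.random() for _ in range(SIZE)]

def step(ant_pos: int) -> int:
    """One ant applies the three rules at its current cell, then wanders."""
    here = ant_pos
    side = (here + random.choice([-1, 1])) % SIZE
    # Rule 1: pick up a pellet steeped in mid-range pheromone and pile it up.
    if PHEROMONE_LOW < pheromone[here] < PHEROMONE_HIGH:
        # Rules 2 and 3: deposit beside the pile if it is too tall or too humid.
        target = side if (pile[here] >= BODY_SIZE or humidity[here] > HUMIDITY_LIMIT) else here
        pile[target] += 1
        pheromone[target] = min(1.0, pheromone[target] + 0.05)  # deposits carry pheromone
    return (here + random.choice([-1, 1])) % SIZE

ants = [random.randrange(SIZE) for _ in range(10)]
for _ in range(500):
    ants = [step(a) for a in ants]

# Pillars and gaps emerge from purely local decisions.
print("".join(str(min(h, 9)) for h in pile))
```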

Our organic artificial intelligence operates like an ant colony, following a multi-agent approach. With a minimal set of simple rules, it is capable of carrying out a complex task. This enables us to work around the problem conventional AI faces, for instance in the case of autonomous driving. The self-driving car is capable of identifying a motorbike not because it has seen a photo of one, but because it has internalized the basic rules enabling it to do so: two large wheels, a certain type of noise, one or two people riding it, a shiny sphere where the heads are (the helmets), and so on. It doesn’t matter whether it’s a Harley-Davidson or a Triumph; it’s a motorbike because it features the invariant attributes of a motorbike. The artificial intelligence only needs to be presented with a few dozen situations to define these invariants in a completely autonomous way, which is why we need so little data.
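As a toy sketch of this invariant-based recognition (the attribute names and the threshold are invented):

```python
# Toy sketch of recognition by invariant attributes rather than by matching
# stored photos. Any object exhibiting the invariants is recognized,
# whatever its brand, color, or exact shape.
MOTORBIKE_INVARIANTS = {
    "two_large_wheels",
    "engine_noise",
    "one_or_two_riders",
    "helmet_shaped_sphere",
}

def is_motorbike(observed_attributes: set[str]) -> bool:
    """Recognize a motorbike once most invariants are present."""
    matched = MOTORBIKE_INVARIANTS & observed_attributes
    return len(matched) >= 3  # tolerate a missing or occluded attribute

harley = {"two_large_wheels", "engine_noise", "one_or_two_riders", "chrome"}
print(is_motorbike(harley))  # True: brand-specific details are irrelevant
```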

I sincerely believe intelligence doesn’t exist. Just as the termite mound is erected “as if” termites were architects, engineers, and project managers, the brain operates “as if” it were intelligent. What is really at play is that a certain number of mechanisms placed end to end, when seen from the outside, convey an impression of intelligence. Cortical columns (tubular groups of up to 110 neurons each) are, just like termites, the building blocks of the cortex. These small tubes of neurons are all strictly identical, but depending on whether they connect to the optic nerve or the auditory nerve, they process light or sound. Since the columns are strictly the same, this means that the brain treats everything (visual, auditory, tactile, everything concerning action and intelligence, strategy, cognition, and so on) in strictly the same manner.


Such an organic intelligence, capable of self-learning, nevertheless lacks one of the core attributes of the living: self-awareness, which guides decision-making, particularly in critical situations. Will we be able to arrive at real AIs that are nevertheless devoid of self-awareness? And what about contingencies and the unexpected, which play such an important part in nature?

This raises a profound question, that of belief. Either you conceive of the brain as a purely biochemical organ, in which case it is reproducible, or you believe in a soul, in which case machines will never have one. To the question “Will there always be a difference between technological achievements and the human brain?,” my answer is: “It depends on whether or not you believe in God…”

I’ve worked extensively on emotions through robotics. Emotions play an invaluable role in ensuring the survival of individuals. Fear triggers the secretion of adrenaline, alleviates pain, and makes us react quicker; it’s a very efficient defense process. Love arouses the desire to procreate and produce offspring. Chemistry is behind these very sudden behavioral changes, which typically come at a significant cost for the body, which is why they occur only infrequently, when the time is right.

If, one day, robots become sophisticated and advanced enough, if they start providing actual benefits in our daily lives, won’t we want them to be capable of ensuring their own defense? If robots perform vital caregiving functions for elderly people, if they take care of them, feed them, wash them, then of course we are going to endow them with the fear of fire and the love of the person placed under their watch. This involves going well beyond the mere mimicry of feelings that consists in displaying a fake smile on a machine’s face, and actually endowing robots with real emotions, with true feelings. We’ll reach that point, inevitably.

Just as I don’t believe in the existence of “intelligence,” I don’t believe in “randomness.” I believe that any basic mechanism can be explained, even when we don’t yet know how. By overcoming the statistical complexity of the accumulation of vast numbers of variables, it becomes possible to fully explain and analyze a phenomenon. What you call a “contingency” can therefore be anticipated. Nothing at birth dictates that a given individual will eventually start smoking. Yet analyzing their personality, how they relate to their friends, how amenable they are to external influences, their parents’ behavior, the movies they watched endorsing or stigmatizing smoking, and so on, will statistically make it possible to predict their eventual behavior.

There is nothing that nature can do that we wouldn’t be capable of reproducing, as long as we accept that the world could function without a god. We are indeed quite close to the moment when we’ll be able to recreate a brain operating exactly as it does in nature.


We are currently in a situation where new technologies face increasingly strong distrust from the public, rooted in fear and a perceived loss of control. Can we imagine something positive and reassuring regarding the outcomes of an even more sophisticated AI?

Fear is indeed a normal, natural response, which goes hand in hand with the roll-out of any significant new invention. In the case of the steam engine, for instance, Lavoisier, one of the greatest chemists of the eighteenth century, had calculated that should a steam locomotive enter a tunnel, the excess pressure would simply cause the chests of the train’s passengers to burst. Before him, other scientists had maintained that the human body couldn’t withstand speeds much higher than that of a galloping horse. Each time a new invention emerges, it generates fear by opening possibilities that haven’t yet been imagined. Fear is therefore a natural initial reaction, followed by rejection, and only then do people get to grips with the subject.

All new techniques or technologies involve significant risks in the early stages, but humanity has always managed them gradually so as to reap more benefits than harm. Accidents are what has allowed the system of safeguards (laws, codes, insurance, and other ethical regulations that improve the use of a new technology) to be built up over time, and they continue to play that role.

When electricity was invented, there were a lot of disasters. This unpredictable and invisible force of tremendous power surfaced suddenly, at a time when we were living in a world of relatively controllable craftsmanship. But, as things stand today, we can rightly say that it has brought humanity more good than harm. In the case of nuclear power, we still haven’t gained enough perspective, and we still fear atomic wars or disasters along the lines of Chernobyl and Fukushima. But that would be to discount the huge number of cancers treated every day with radiation therapy. If our priority is the very long term, we can try to find another form of energy. But if we are to stem the destruction of the environment and the planet in the short run, the smart move is to bet on nuclear energy; more environmentally friendly solutions will come at a later stage. This is what lies ahead with research on fusion power, a technology of a completely different kind that will enable humanity to generate clean energy, for free and in colossal amounts. From that point of view, nuclear fusion is the real issue of the future.

With robots, AI, and the generalized Internet, the same thing is happening. Humanity hasn’t yet had time to establish a framework that mitigates the risks and maximizes the positive aspects. This will inevitably involve a phase of accidents, when ill-intentioned people will try to take advantage of these technologies. We’re already facing cases of hacking of autonomous vehicles: a car could be hijacked while driving at 100 miles per hour on the highway, and criminals could threaten to cut the brakes unless the driver pays a ransom. So are there risks? Yes. Will we experience disasters? Yes, there will be problems. On the whole, should we discontinue research on artificial intelligence for the sake of humanity? Absolutely not. It heralds real progress for our societies.

My sole objective is to improve people’s lives. I dream of developing an artificial intelligence that could provide everyone with their own personal valet. A Jiminy Cricket of sorts, an artificial intelligence that knows its owner so intimately that it could assess their needs in real time and cater to them in the best possible way, like a sort of auxiliary brain. The world will become increasingly complex, and artificial intelligence will help us cope; otherwise we’d be completely lost. Many professions won’t be carried out by humans anymore, in fields such as law and medicine, and in light of this there will be an inevitable shift towards universal income. The value of a doctor lies in understanding the person facing them; the prescription itself follows almost algorithmically from there. But how many questions can a doctor ask a patient? Twenty? Fifty? They cannot take into account a large variety of factors. Yet, in a body, there are billions of bits of information that could be fed into a machine and help doctors make an infinitely more accurate diagnosis than any made by humans alone. It would take nothing more than a pinprick to analyze DNA alongside hormone levels, diet, past medical history, and so on.

The next step is none other than genetic engineering. Once again, we face some reluctance. Yet the outcome is already being sought by other means: in certain past societies, female babies were killed because boys were favored; in others, esthetic preferences led to extensive and dangerous surgical procedures. We know that in-vitro fertilization and genetic testing are being used to select embryos based on criteria of sex, eye color, height, potential intelligence, and so on. Strictly speaking, there is no alteration of the genome, but there certainly already is a longing for selection and for changing physical appearance. Rather than mutilating or killing, it doesn’t seem unreasonable to imagine acting directly at the root. We have to be careful, because the answers to ethical issues are never fully black or white. It seems quite obvious to me that we won’t back away from curing genetic diseases once the technology is mature, and from there on, new frameworks will have to be experimented with, because the lures of transhumanism will become increasingly compelling. At any rate, that is also why I believe that AI and robotics will have a great impact on society, by indirectly making genetic engineering more effective. But it is certainly genetics that will embody the most profound upheaval of humanity to come, and I am convinced that we’ll all be genetically manipulated within two decades or so.
