
Will Artificial Intelligence overcome Natural Stupidity*?

Let me start by saying that this is not the first version of this post. In fact, that is itself a legitimate illustration of what I call "Natural Stupidity". As embarrassing as it is, I must admit that I accidentally deleted my first write-up. And no, I was not trying to make a point and manufacture evidence; otherwise, it wouldn't be natural. But after this mishap, I felt compelled to renew my hope that AI will one day be able to address events like the one I just described. Note that I am not hoping for an undo button, but for a machine capable of learning the side effects of caffeine deprivation on me and, under those circumstances, asking me whether I really wanted to permanently delete seven paragraphs before it was too late.

I think we can agree that machines today are able to learn and perform some specialized tasks better than the majority of us. Just look at your smartphone and try to remember the last time you attempted to get somewhere without relying on your GPS, which factors in real-time traffic conditions to find the most efficient route to your destination. And, strange as it may sound to those who fear robotization, I suggest you take another serious look at your smartphone and imagine your life without that digital prosthesis. Let's face it, we are already "artificially intelligent(ed)", or cyborgs, for simplicity. Finally, if you believe that machine learning is a novelty, I have to say that ML has always been AI's powerhouse. The only thing that has changed over time is the set of methods used, largely constrained by the processing resources available: first linear, rules-based algorithms, and more recently statistical systems such as decision trees and neural networks, whose deep, many-layered variants power today's deep learning. In summary, if you think that AI is something about to come, take a good look around.

Now that we hopefully share the same time continuum, let me say that, for the sake of precision, my hoped-for sensor-equipped intelligent advisor should be filed under the AGI bucket, which stands for Artificial General Intelligence and is sometimes tagged AI 2.0 or Superintelligence. To call it simply AI would be like driving a Ford Model T and a Tesla and calling them both, indistinctly, cars. In other words, AGI refers to a machine that is not specialized in beating you at chess or Go, nor one that merely masters your tax returns, but a multidisciplinary system with the ability to reproduce the mechanisms of the mind, with all its complexities and values, in order to make decisions, and one that may even have the potential to eventually develop consciousness, leading to a technological singularity where machines and humans join in a single matrix.

From a technological perspective, the outlook is promising. An average modern computer runs at around 2 GHz, seven orders of magnitude faster than biological neurons, which fire at roughly 200 Hz. The latest Graphics Processing Units (GPUs), which, like human brains, parallelize tasks, can move on the order of 1 TB per second across their external memory interface; our brains can perform the equivalent operation, optimistically, at about 1 KB per second. Machines can now learn in less than a week what the brightest mind in the world, locked in a library, would take 20,000 years to absorb. Computers can also communicate at the speed of light, about 300,000,000 m/s, while signals in our brains travel at up to 120 m/s. And last, but not least, we are pretty much stuck with the 100 billion neurons inside our skulls, while computers are remarkably more scalable.
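The back-of-the-envelope ratios above are easy to check. A minimal sketch, using the rough estimates quoted in the text (not measurements):

```python
# Order-of-magnitude comparison of silicon vs. biological "hardware",
# using the rough estimates quoted above.

cpu_hz = 2e9        # modern CPU clock: ~2 GHz
neuron_hz = 200     # typical neuron firing rate: ~200 Hz
print(f"clock-speed ratio: {cpu_hz / neuron_hz:.0e}")  # 1e+07, i.e. seven orders of magnitude

signal_light = 3e8  # electronic/optical signalling: ~300,000,000 m/s
signal_axon = 120   # fast myelinated axon: ~120 m/s
print(f"signal-speed ratio: {signal_light / signal_axon:.1e}")  # 2.5e+06
```

The point is not the exact figures, which vary by source, but that every ratio lands several orders of magnitude in silicon's favor.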

But the truth is that, despite how cool it is to think we will experience a digital apocalypse and ultimately get to play a character from our favorite science fiction plot, whether in a "Terminator" or a "Star Trek" fashion, remember that it was only in 1997 that Deep Blue beat a human at chess, and it took another twenty years before a machine nailed Go, a game with barely one order of magnitude more opening moves than chess but, through that combinatorial explosion, a game tree roughly 200 orders of magnitude more complex. On top of the astonishing volume of combinations, the fact is that virtually all of today's industrial successes rest on supervised learning; in other words, "we teach what we trust is relevant", which by itself may compromise how faithfully the real world is represented. Moreover, as the data is sharpened at every step of the learning process, the representation of the real world degrades, and that explains why Facebook, Google, Amazon and every vendor involved with AI are avid to accumulate as much data as possible: to keep a healthy stream of representative data. And the challenges to reach AGI, if it is reachable at all, go on.
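To make "we teach what we trust is relevant" concrete, here is a deliberately tiny supervised learner, a toy 1-nearest-neighbour classifier (all names and data hypothetical). The model can only ever reflect the labels we chose to hand it:

```python
# Toy supervised learning: a 1-nearest-neighbour classifier over numbers.
# The model knows nothing beyond the (input, label) pairs we supply;
# our choice of labels IS the model's whole world view.

def nearest_label(train, x):
    """Return the label of the training point closest to x."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# Labels encode OUR framing: we decided numbers near 1-2 are "small".
train = [(1, "small"), (2, "small"), (8, "large"), (9, "large")]

print(nearest_label(train, 3))  # small
print(nearest_label(train, 7))  # large
```

If the labels we trusted were biased or incomplete, the classifier faithfully reproduces that bias, which is exactly the representational risk the paragraph above describes.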

Given all these challenges, is there any hope for our natural stupidity? Well, in order to answer that question, allow me a little digression. It is believed that the first humans appeared on Earth around 2 million years ago; however, it was not until about 70,000 years ago that we effectively put our "biological unsupervised machine learning" to work. One could say this was the launchpad of a cognitive era that prompted us to develop fictive language and, as a result, nurture our ability to think abstractly, or arguably the time when we started to develop consciousness and shape our brains into something close to what they are today. If that is an acceptable sequence of facts, the most logical conclusion is that we are not a finished product. From an evolutionary perspective, it means that we are still evolving as humans, and so are our brains. On the go. Ad hoc. Building and destroying neuronal bridges to be perfected by the next generation. What I mean is that this is not just a matter of chasing AGI, but of pursuing a fluid AI that can adapt to a moving target, to a perpetually changing mind.

In this light, and to paraphrase E.O. Wilson: what are the odds that one day we will use our god-like technology to overcome our Stone Age emotions?

* Title inspired by the Amos Tversky quote: "My colleagues, they study artificial intelligence; me, I study natural stupidity."
