Not-so-artificial Intelligence

While researching something to write a story about, I delved into a subject that has always interested me. So, today, let’s talk about a topic many people consider more sci-fi than reality: artificial intelligence. As a major geek, I get excited when I read about these kinds of things. Whether it is in fiction or a scientific journal, artificial intelligence (AI) gets the nerd heart fluttering. But can it exist outside those paperback pages?

What most people don’t realize is that AI is a lot further along than we tend to think. Most of us probably remember when Watson competed against – and crushed – contestant after contestant on Jeopardy!. Even before that, there were chess programs that could learn and adapt as they played, quickly outplaying international grandmasters. And the stock market, an utterly fascinating topic in its own right, has become dependent on AI to make trades.

High-frequency trading (HFT) is trading done exclusively through algorithm-based programs. These programs can analyze and trade securities in a fraction of a second, faster than you can even think “fraction of a second”. At its height, HFT is estimated to have accounted for well over half of all US equity trade volume, though that share has since declined. The Sharpe ratio is a measure of reward relative to risk, and HFT strategies have been reported to achieve Sharpe ratios roughly ten times higher than those of traditional trading. Individual HFT trades can net anywhere from hundreds of thousands of dollars down to mere pennies of profit; they occur so quickly, and in such volume, that the penny profits actually add up.
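To see why tiny but consistent profits score so well on this measure, here is a minimal Sharpe ratio calculation in Python. The return series are made up for illustration, not real market data:

```python
import statistics

def sharpe_ratio(returns, risk_free=0.0):
    """Sharpe ratio: mean excess return divided by its standard deviation."""
    excess = [r - risk_free for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

# Tiny but extremely consistent per-trade profits (HFT-like, hypothetical)
hft_like = [0.0010, 0.0009, 0.0011, 0.0010, 0.0010]
# Larger but far more volatile returns (traditional-style, hypothetical)
traditional = [0.05, -0.03, 0.08, -0.02, 0.04]

print(sharpe_ratio(hft_like) > sharpe_ratio(traditional))  # True
```

Even though the traditional series earns far more per trade, its volatility crushes its Sharpe ratio; the steady penny-sized returns win on a risk-adjusted basis.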

The neat thing about HFT (as if it wasn’t neat already) is that different HFT programs from different firms will compete with one another! With no human interaction required, these programs try to outthink and outperform their rivals. This, however, was one of the factors that exacerbated the “Flash Crash” of May 6, 2010. The Flash Crash refers to a span of mere minutes in which major stock indexes collapsed and then rebounded almost as quickly. The whole episode lasted less than 40 minutes, but saw the Dow Jones plunge a then-record 998.5 points, roughly 9%, making it one of the rockiest periods in the history of financial markets. It was set off in part by a single enormous sell order from a mutual fund complex, and a real, living trader was later charged on 22 criminal counts for manipulative orders that allegedly contributed. Once that sell-off began, HFT algorithms kicked in and made the problem substantially worse. So AI can make mistakes, too.

But that is just what I would consider rudimentary AI, along the same lines as the algorithms Amazon uses to personalize prices for specific customers. Such algorithms have limited parameters. Already, however, we are seeing bigger advancements in artificial intelligence: self-driving cars are on the road, and there are programs that can create original artwork, compose symphonies, build houses, and perform administrative duties; now AI is breaking into the work of analysts, radiologists, and scientists. Many experts in the field think we will see AI that can out-compete humans at every task by the year 2035, and much of the field believes it will happen within the next 50 years. To put a name to my earlier claim of exponential growth in technology: Moore’s law observes that the number of transistors on a chip doubles roughly every two years, a cycle later popularly revised to 18 months. So, just to reiterate the math from a previous post, let’s quantify the skill level of current AI as 2. Not very impressive, but not a 0. Eighteen months from now it is 4. Thirty-six months from now it is 8. Fifty-four months – four and a half years – and it is 16. 2035 is approximately 18 years away, which is twelve doublings, so today’s AI score of 2 becomes 8,192 in 2035. Fifty years from now, after roughly 33 doublings, it passes 17 billion.
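The arithmetic above fits in a few lines of Python. The “skill score” here is this post’s toy metric, not a real benchmark:

```python
def projected_skill(start, months, doubling_months=18):
    """Extrapolate a toy skill score that doubles every `doubling_months`."""
    return start * 2 ** (months // doubling_months)

print(projected_skill(2, 18))       # 4
print(projected_skill(2, 36))       # 8
print(projected_skill(2, 54))       # 16
print(projected_skill(2, 18 * 12))  # 8192 -- twelve doublings over 18 years
print(projected_skill(2, 12 * 50))  # 17179869184 -- 33 doublings over 50 years
```

Integer division keeps only completed doubling periods, which is why 50 years yields 33 doublings rather than 33⅓.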

And that is not even taking into consideration the (self-coined) Terminator Effect – the point in time when AI will be able to improve itself automatically. The idea technically already has multiple names (intelligence explosion, technological singularity, recursive self-improvement), but I think mine is better. If we are able to double capability every 18 months, how quickly could a program that never has to eat, sleep, rest, or even take a break improve itself when it has access to the entirety of humanity’s work? How rapidly will something evolve when it can run millions of tests a day to find out which path is the best, and how it can most easily get there?
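To make the distinction concrete, here is a toy Python sketch comparing plain fixed-rate doubling with a system whose growth rate itself improves each period. The `feedback` parameter is entirely made up for illustration; this is a thought experiment, not a forecast:

```python
def fixed_doubling(skill, periods):
    """Plain Moore's-law-style growth: skill doubles each period."""
    for _ in range(periods):
        skill *= 2
    return skill

def self_improving(skill, periods, feedback=0.1):
    """Toy recursive self-improvement: the growth factor itself grows
    each period, standing in for a system that improves its own improver."""
    rate = 2.0
    for _ in range(periods):
        skill *= rate
        rate += feedback  # the system gets slightly better at improving itself
    return skill

print(fixed_doubling(2, 12))                              # 8192
print(self_improving(2, 12) > fixed_doubling(2, 12))      # True
```

Even a small feedback term makes the self-improving curve pull away from plain doubling, which is the intuition behind the “explosion” in intelligence explosion.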

There are opponents to these views, though, and they are worth mentioning. Some argue against Moore’s law: they believe we will reach – and have already begun reaching – a period of saturation that will stretch the doubling period back out to two or two and a half years. There are those in the medical field who believe the human brain is simply too complex, a series of neural highways too vast to map, and that AI will never reach the level of a human being because of it. I am no expert in the field, so I cannot properly argue one way or the other beyond what I have researched, but I clearly favour one side.

And this does not even touch on what I consider the most interesting aspect of the AI debate: morality. Perhaps we cannot replicate the human brain because it will be impossible to replicate morality, to replicate humanity, to instill a sense of benevolence in a program. It is tough to apply human rights and ideals to lines of code. Last week I posed a difficult philosophical question to my partner: if we could make AI human-like, if we could give it emotions and feelings, if it could love and hate and have goals, what would that mean? Would we have robot rights? Would hitting the delete key equate to committing murder? And what if it went the other way: what if we reach a point where we can “digitize” our brains, turning our neural impulses into software? Are we human anymore? Are we even alive?

I think I rambled there. Somebody please hit my off switch.