Executive summary: Artificial Intelligence (or ‘AI’) is the science of making machines smart. As with other future trends we assess, while AI is inherently disruptive, its growing deployment in both the business and the personal sphere will enable us to do things more efficiently. The benefits of automation (allowing a machine to do what a human would otherwise have done) are estimated to outweigh the costs by three to ten times. This explains why the AI industry has the potential to at least double in size over the next decade, reaching as much as $40bn in value. Nearly all major technology software and hardware businesses recognise the AI opportunity and are correspondingly investing heavily at present. Google and IBM appear to have established strong positions, and others such as ARM, NVIDIA and Qualcomm could also be poised to benefit.
Many people love playing games, and some get very good at them. In recent years, however, champions and world records in pursuits as diverse as chess, Space Invaders, Jeopardy! and, last month, the ancient Chinese game of Go have fallen to the might of computers. Intelligent machines, though, are not a new concept, and the notion of artificial intelligence has been addressed by myth, fiction and philosophy since antiquity. What has changed is that increasing computational power and a more dedicated focus on specific problem-solving have seen the industry develop rapidly. Solutions based around the power of AI are being deployed increasingly widely and look set only to expand from here. The benefits and disruptive nature of AI described below are not the stuff of apocalyptic movie scripts, but rather reflect its impact on business.
Artificial Intelligence means different things to different people, hence some of the confusion and fear over its seemingly increasing prominence. Moreover, AI is naturally interdisciplinary, involving a number of fields including computer science, mathematics, psychology, linguistics, philosophy and neuroscience. Although there is no established unifying theory or paradigm that guides research and development within the field of AI, most accept intelligence exhibited by machines or software as being an appropriate characterisation. Put another way, AI can be thought of as the making of ‘good’ or ‘right’ decisions using a certain amount of information and time. AI does not need to emulate fully the workings of the human brain; rather, it is about the creation of software to produce intelligence.
Indeed, software and algorithms developed by researchers operating in what can be considered to be the field of AI are now integrated in applications throughout the world, even if users do not always think of them as being a product of artificial intelligence. In one sense, AI can be thought of as mundanely ubiquitous, deployed in intelligent sensors within cameras (setting aperture and shutter speed, for example) and dryers (heat and humidity probes). Furthermore, every time we check in at the airport, order books online or consult smartphones for information, some form of intelligence is being deployed to make our lives easier and more efficient. From here, it is a simple step to self-driving cars, virtual private assistants, telepresence robots and smart homes.
Marked improvements in two key areas are, however, driving the growth of the industry and a broader deployment of AI at a pace not seen previously. First, powerful and low-cost processing chips, cheap storage capacity and the corresponding growth of massive databases of information have made AI both more potent and more efficient. As a corollary, the field of machine learning (and, within it, deep learning) has expanded rapidly. This relates to the use of computer algorithms to recognise patterns in data and turn that data into knowledge. Such algorithms operate by building a model from example inputs in order to make data-driven predictions or decisions.
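As an illustration (ours, not from the original note), the idea of building a model from example inputs can be sketched in a few lines of Python: an ordinary least-squares fit that turns example (x, y) pairs into a rule for predicting new, unseen values. The data below is invented purely for demonstration.

```python
# Minimal sketch of "learning" from example inputs: fit y ≈ a*x + b by
# ordinary least squares, then use the fitted model to predict new data.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance(x, y) divided by variance(x).
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Example inputs (the "training data") and a data-driven prediction.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]     # roughly y = 2x, with noise
a, b = fit_line(xs, ys)
print(round(a * 6 + b, 1))          # predict y for an unseen x = 6
```

The principle, at toy scale, is the same one the note describes: the rule is not programmed in; it is recovered from the examples.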
Evidence of this is clear. Take the victory of DeepMind’s AlphaGo programme (developed by a subsidiary of Google) over 18-time world Go champion, Lee Se-dol. Notably, the programmers at DeepMind did not teach AlphaGo to play Go; instead, the algorithm taught itself by watching (and learning from) 160,000 professional games. Furthermore, whereas IBM’s Deep Blue programme, which famously defeated the then world champion in chess in 1997, was developed using specially customised chips, AlphaGo simply used a large number of commodity central processing units and graphics processing units, the likes of which are deployed (albeit in smaller quantities) within conventional PCs.
Against this background, the future of AI can be thought of as unsupervised learning. Cheap and powerful hardware gives machines the potential to learn by watching and correspondingly processing information. This, of course, is how humans learn. The same processes that allowed AlphaGo to defeat Lee Se-dol could be used for recognising faces, translating between languages, helping researchers to advance science and so on.
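To make the idea of learning without supervision concrete, here is a minimal sketch of our own (not drawn from the note): a one-dimensional k-means loop in Python that groups unlabelled numbers into clusters. The machine is never told what the groups are; it discovers the structure itself. The data and starting centres are invented for illustration.

```python
# Minimal sketch of unsupervised learning: 1-D k-means clustering.
# No labels are provided; the algorithm discovers group structure alone.
def kmeans_1d(points, centres, iterations=10):
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centre.
        clusters = [[] for _ in centres]
        for p in points:
            nearest = min(range(len(centres)), key=lambda i: abs(p - centres[i]))
            clusters[nearest].append(p)
        # Update step: move each centre to the mean of its cluster.
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

data = [1.0, 1.2, 0.8, 9.8, 10.1, 10.4]     # two obvious groups
print(kmeans_1d(data, centres=[0.0, 5.0]))   # centres settle near 1.0 and 10.1
```

The same assignment-and-update pattern, scaled up enormously, underlies many of the pattern-discovery systems described above.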
Many of us may already be familiar with Virtual Private Assistants such as Siri (Apple), Now (Google), M (Facebook) and Cortana (Microsoft). These systems use AI and have become increasingly intelligent, representing clear precursors to autonomous agents. Notably, Google, Microsoft and Facebook have all announced within the last six months that they will open-source their machine-learning programming libraries (respectively named TensorFlow, the Distributed Machine Learning Toolkit and Big Sur), enabling third-party programmers to create AI applications and solutions using their code. The idea is to permit the creation of a greater level of artificial intelligence at a faster pace. Google, for example, says that over 50 of its products already use TensorFlow to harness deep learning. Elsewhere, IBM has already licensed its AI solution (Watson) to a number of businesses and healthcare providers. IBM claims that Watson is capable of reading 800m pages of text a second, equivalent to ingesting 100 terabytes of data a day, and is able to suggest solutions based on an interpretation of this data.
Given the range of applications in which AI is already present and where it has the scope to be increasingly deployed, there is no clear consensus on the size of the industry. Nonetheless, all observers believe that its growth potential is significant. BCC Research, for example, calculates that the smart machine market (comprising embedded systems, digital robots, autonomous robots and neurocomputers) was worth $6.3bn at the end of 2014, but could be worth $15.3bn by the end of the decade. More optimistic consultants (MarketsandMarkets, for example) believe that the industry could be worth up to $40bn within ten years. From another perspective, Gartner estimates that by 2020 autonomous software agents outside of human control will participate in around 5% of all economic transactions.
Many, of course, fear that the rise of the intelligent machine will mean the end of employment across a variety of industries. Gartner’s study suggests that by 2020, up to 20% of business content will be authored by a machine (hopefully not Heptagon’s Future Trends research notes!). However, in the near- to intermediate-term, the deployment of AI may actually serve to increase worker productivity, redefining jobs rather than eliminating them. According to a recent McKinsey study, the benefits of automation, ranging from heightened output to higher quality and improved reliability, outweigh the costs by three to ten times. In the case of healthcare, for example, the diagnosis of many health issues could be automated, increasing accuracy for more common issues and allowing doctors to focus on more acute or unusual cases. Similarly, mortgage loan officers would need to spend less time inspecting and processing routine applications, allowing them to cover more customers and again deal with particularly complex issues. Indeed, the broader ability to staff, manage and lead automated organisations may become a key competitive differentiator for all businesses over time.
Sceptics remain and range from Stephen Hawking (AI could “spell the end of the human race”) to Elon Musk (AI is “more dangerous than nukes”). Concerns centre particularly on the idea that if software were sufficiently intelligent to reprogramme and improve itself, then it could expand exponentially and surpass humans. This, however, is somewhat to miss the point – at least for now. AI does not mean human consciousness; it simply means the ability of a machine to make appropriate decisions in a general way. AlphaGo did not make sense of the world for itself; rather than inventing a new algorithm, it used existing algorithms to achieve its specific end. Over time, regulatory, legal and ethical issues may come to play an increasing role in how the industry develops, but we do not yet appear to be close to this stage of development.
Nonetheless, navigating the transition to an increasingly automated economy will be a global challenge that will create winners and losers at every level of society. From an investment perspective, perhaps the greatest opportunities remain within the private sphere, often businesses that have been funded by and/or spun-out of universities. Nonetheless, many of the world’s largest technology companies appear to be in an effective AI arms race. At present, no publicly-listed business provides any meaningful financial disclosure on the contribution derived from AI or how they specifically intend to monetise their technology; rather, AI should be thought of as an enabler for other things. Our approach is to focus on the software and/or hardware businesses that appear to have developed a lead in the field. Elsewhere, many advanced manufacturing technology businesses are already deploying some form of commercial AI.
Our discussions with a wide range of both industry experts and academics suggest to us that Google (Alphabet) has created a lead within the software field of AI, helped by its purchases of DeepMind (spun-out of University College London), Vision Factory and Dark Blue Labs (the latter two emerged out of Oxford University). Google’s CEO Sundar Pichai has talked extensively about AI and described machine learning at a recent investor event as “a core, transformative way by which we’re rethinking everything we’re doing.” Google remains committed to growing its AI business both organically and through acquisition. IBM has also been active in the field for many years and is increasingly commercialising its technology. Microsoft and Facebook appear to be catching up, although Apple and Amazon are lagging somewhat at present. With regard to hardware, AI creates a growing range of opportunities for ARM Holdings to deploy its processing chips, and management has characterised AI as “a leading research area for us.” NVIDIA and Qualcomm have also been visible in highlighting the capabilities of their (graphics-based) processing chips and their potential for deployment in deep learning.
Alexander Gunz, Fund Manager, Heptagon Capital
The document is provided for information purposes only and does not constitute investment advice or any recommendation to buy, or sell or otherwise transact in any investments. The document is not intended to be construed as investment research. The contents of this document are based upon sources of information which Heptagon Capital believes to be reliable. However, except to the extent required by applicable law or regulations, no guarantee, warranty or representation (express or implied) is given as to the accuracy or completeness of this document or its contents, and Heptagon Capital, its affiliate companies and its members, officers, employees, agents and advisors do not accept any liability or responsibility in respect of the information or any views expressed herein. Opinions expressed, whether in general, on the performance of individual investments or in a wider economic context, represent the views of the contributor at the time of preparation. Where this document provides forward-looking statements which are based on relevant reports, current opinions, expectations and projections, actual results could differ materially from those anticipated in such statements. All opinions and estimates included in the document are subject to change without notice and Heptagon Capital is under no obligation to update or revise information contained in the document. Furthermore, Heptagon Capital disclaims any liability for any loss, damage, costs or expenses (including direct, indirect, special and consequential) howsoever arising which any person may suffer or incur as a result of viewing or utilising any information included in this document.
The document is protected by copyright. The use of any trademarks and logos displayed in the document without Heptagon Capital's prior written consent is strictly prohibited. Information in the document must not be published or redistributed without Heptagon Capital's prior written consent.
Heptagon Capital LLP, 63 Brook Street, Mayfair, London W1K 4HS
tel +44 20 7070 1800
fax +44 20 7070 1881
email [email protected]
Partnership No: OC307355 Registered in England and Wales Authorised & Regulated by the Financial Conduct Authority