Artificial Intelligence Trends 2019

WHAT'S NEXT IN AI? 2019
Table of Contents

NExTT framework

NECESSARY
Open-source frameworks
Edge AI
Facial recognition
Medical imaging & diagnostics
Predictive maintenance
E-commerce search

EXPERIMENTAL
Capsule Networks
Next-gen prosthetics
Clinical trial enrollment
Generative Adversarial Networks (GANs)
Federated learning
Advanced healthcare biometrics
Auto claims processing
Anti-counterfeiting
Checkout-free retail
Back office automation
Language translation
Synthetic training data

THREATENING
Reinforcement learning
Network optimization
Autonomous vehicles
Crop monitoring

TRANSITORY
Cyber threat hunting
Conversational AI
Drug discovery
[Chart: the AI trends covered in this report plotted on the NExTT framework, with industry adoption on the y-axis and market strength on the x-axis across the Transitory, Experimental, Threatening, and Necessary quadrants. Trends are categorized as computer vision, natural language processing/synthesis, or predictive intelligence applications, architecture, or infrastructure.]
NExTT FRAMEWORK
We evaluate each of these trends using
the CB Insights NExTT framework.
The NExTT framework educates
businesses about emerging trends and
guides their decisions in accordance with
their comfort with risk.
NExTT uses data-driven signals to
evaluate technology, product, and
business model trends from conception
to maturity to broad adoption.
The NExTT framework's 2 dimensions:
INDUSTRY ADOPTION (y-axis): Signals
include momentum of startups in the
space, media attention, customer adoption
(partnerships, customer, licensing deals).
MARKET STRENGTH (x-axis): Signals
include market sizing forecasts, quality
and number of investors and capital,
investments in R&D, earnings transcript
commentary, competitive intensity,
incumbent deal making (M&A,
strategic investments).
[Chart: an example NExTT framework populated with auto & mobility trends, plotted by industry adoption and market strength across the Transitory, Experimental, Threatening, and Necessary quadrants.]
TRANSITORY
Trends seeing adoption but
where there is uncertainty
about market opportunity.
As Transitory trends become
more broadly understood,
they may reveal additional
opportunities and markets.
NECESSARY
Trends which are seeing wide-
spread industry and customer
implementation / adoption and
where market and applications
are understood.
For these trends, incumbents
should have a clear, articulated
strategy and initiatives.
EXPERIMENTAL
Conceptual or early-stage
trends with few functional
products and which have not
seen widespread adoption.
Experimental trends are already
spurring early media interest
and proof-of-concepts.
THREATENING
Large addressable market
forecasts and notable
investment activity.
The trend has been embraced
by early adopters and may
be on the precipice of gaining
widespread industry or
customer adoption.
The NExTT framework's 2 dimensions

Industry Adoption (y-axis). Signals include:
momentum of startups in the space
media attention
customer adoption (partnerships, customer, licensing deals)

Market Strength (x-axis). Signals include:
market sizing forecasts
quality and number of investors and capital
investments in R&D
earnings transcript commentary
competitive intensity
incumbent deal making (M&A, strategic investments)
OPEN-SOURCE FRAMEWORKS
The barrier to entry in AI is lower than ever before, thanks to
open-source software.
Google open-sourced its TensorFlow machine learning library in 2015.
Open-source frameworks for AI are a two-way street: It makes AI
accessible to everyone, and companies like Google, in turn, benefit from a
community of contributors helping accelerate its AI research.
Hundreds of users contribute to TensorFlow every month on GitHub
(a software development platform where users can collaborate).
Companies using TensorFlow range from Coca-Cola to eBay to Airbnb.
Necessary
Facebook released Caffe2 in 2017, after working with researchers from
Nvidia, Qualcomm, Intel, Microsoft, and others to create "a lightweight
and modular deep learning framework" that can extend beyond the cloud
to mobile applications.
Facebook also maintained PyTorch at the time, an open-source machine
learning platform for Python. In May'18, Facebook merged the two under
one umbrella to "combine the beneficial traits of Caffe2 and PyTorch into
a single package and enable a smooth transition from
fast prototyping to fast execution."
The number of GitHub contributors to PyTorch has increased in
recent months.
Theano is another open-source library from the Montreal Institute for
Learning Algorithms (MILA). In Sep'17, leading AI researcher Yoshua
Bengio announced an end to MILA's development of Theano, as such
tools had become much more widespread.
"The software ecosystem supporting deep
learning research has been evolving quickly,
and has now reached a healthy state: open-
source software is the norm; a variety
of frameworks are available, satisfying
needs spanning from exploring novel
ideas to deploying them into production;
and strong industrial players are backing
different software stacks in a stimulating
competition."
- YOSHUA BENGIO, IN A MILA ANNOUNCEMENT

A number of open-source tools are available today for developers to choose
from, including Keras, Microsoft Cognitive Toolkit, and Apache MXNet.
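
To illustrate how low that barrier now is, the sketch below defines and
trains a small image classifier in a few lines using TensorFlow's Keras
API (a generic example for illustration, not tied to any of the
companies named above):

    # Minimal TensorFlow/Keras sketch: define, compile, and train a small
    # image classifier on the built-in MNIST digits dataset.
    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))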
EDGE AI
The need for real-time decision making is pushing AI closer to
the edge.
Running AI algorithms on edge devices like a smartphone or a car or
even a wearable device instead of communicating with a central cloud
or server gives devices the ability to process information locally and
respond more quickly to situations.
Nvidia, Qualcomm, and Apple, along with a number of emerging startups,
are focused on building chips exclusively for AI workloads at the "edge."
From consumer electronics to telecommunications to medical imaging,
edge AI has implications for every major industry.
For example, an autonomous vehicle has to respond in real-time to
what's happening on the road, and function in areas with no internet
connectivity. Decisions are time-sensitive and latency could prove fatal.
Big tech companies made huge leaps in edge AI between 2017 and 2018.
Apple released its A11 chip with a "neural engine" for iPhone 8, iPhone 8
Plus, and X in 2017, claiming it could perform machine learning tasks
at up to 600 billion operations per second. It powers new iPhone features
like Face ID, running facial recognition on the device itself to unlock the
phone.
Qualcomm launched a $100M AI fund in Q4'18 to invest in startups
"that share the vision of on-device AI becoming more powerful and
widespread," a move that it says goes hand-in-hand with its 5G vision.
As the dominant supplier of data center processors, Intel has had to play
catch-up with massive acquisitions. Intel released an on-device vision
processing chip called Myriad X (initially developed by Movidius, which
Intel acquired in 2016).
In Q4'18 Intel introduced the Intel NCS2 (Neural Compute Stick 2), which
is powered by the Myriad X vision processing chip to run computer vision
applications on edge devices, such as smart home devices and industrial
robots.
The CB Insights earnings transcript analysis tool shows mentions of
edge AI trending up for part of 2018.
Microsoft said it introduced 100 new Azure capabilities in Q3'18 alone,
"focused on both existing workloads like security and new workloads like
IoT and edge AI."
Nvidia recently released the Jetson AGX Xavier computing chip for edge
computing applications across robotics and industrial IoT.
While AI on the edge reduces latency, it also has limitations. Unlike the
cloud, edge has storage and processing constraints. More hybrid models
will emerge that allow intelligent edge devices to communicate with
each other and a central server.
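
As a rough sketch of what moving a model to the edge can involve,
TensorFlow's Lite converter shrinks and quantizes a trained model into a
file small enough to ship to a phone or embedded board (a generic
illustration, not any specific vendor's toolchain):

    # Sketch: convert a trained Keras model to a compact TensorFlow Lite
    # model that can run on-device (phone, smart camera, industrial robot).
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training optimization (weight quantization)
    tflite_model = converter.convert()

    with open("model.tflite", "wb") as f:
        f.write(tflite_model)   # deploy this file to the edge device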
FACIAL RECOGNITION
From unlocking phones to boarding flights, face recognition is
going mainstream.
When it comes to facial recognition, China's unapologetic push
towards surveillance, coupled with its AI ambitions, has hogged the
media limelight.
As the government adds a layer of artificial intelligence to its
surveillance, startups are playing a key role in providing the government
with the underlying technology. A quick search on the CB Insights
platform for face recognition startup deals in China reflects the demand
for the technology.
Unicorns like SenseTime, Face++, and more recently, CloudWalk,
have emerged from the country. (Here's our detailed report on China's
surveillance efforts.)
But even in the United States, interest in the tech is surging, according to
the CB Insights patent analysis tool.
Apple popularized the tech for everyday consumers with the introduction
of Face ID-based login on the iPhone X.
Amazon is selling its tech to law enforcement agencies.
Academic institutions like Carnegie Mellon University are also working
on technology to help enhance video surveillance.
The university was granted a patent around "hallucinating facial
features" a method to help law enforcement agencies identify masked
suspects by reconstructing a full face when only the periocular region of
the face is captured. Facial recognition may then be used to compare the
"hallucinated face" to images of actual faces to find ones with a strong
correlation.
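
Whatever the source of the face image, the comparison step typically
reduces to measuring similarity between fixed-length face embeddings. A
minimal sketch, assuming an upstream model (not shown) has already
converted each face into a vector:

    # Sketch: compare a query face embedding against a gallery of known faces.
    # Assumes an upstream model (not shown) has mapped each face to a vector.
    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def best_match(query_embedding, gallery, threshold=0.7):
        """Return the closest identity, or None if nothing is similar enough."""
        scores = {name: cosine_similarity(query_embedding, emb)
                  for name, emb in gallery.items()}
        name, score = max(scores.items(), key=lambda kv: kv[1])
        return (name, score) if score >= threshold else (None, score)

    # Toy usage with random vectors standing in for real embeddings.
    rng = np.random.default_rng(0)
    gallery = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
    print(best_match(gallery["alice"] + 0.05 * rng.normal(size=128), gallery))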
But the tech is not without glitches. Amazon was in the news for
reportedly misidentifying some Congressmen as criminals.
Smart cameras outside a Seattle school were easily tricked by a WSJ
reporter who used a picture of the headmaster to enter the premises,
when the "smile to unlock feature" was temporarily disabled.
"Smile to unlock" and other such "liveness detection" methods offer an
added layer of authentication.
For instance, Amazon was granted a patent that explores additional
layers of security, including asking users to perform certain actions
like "smile, blink, or tilt his or her head."
These actions can then be combined with "infrared image
information, thermal imaging data, or other such information"
for more robust authentication.
Early commercial applications are taking off in security, retail, and
consumer electronics, and facial recognition is fast becoming a
dominant form of biometric authentication.
MEDICAL IMAGING & DIAGNOSTICS
The FDA is greenlighting AI-as-a-medical-device.
In April 2018, the FDA approved AI software that screens patients
for diabetic retinopathy without the need for a second opinion from
an expert.
It was given a "breakthrough device designation" to expedite the process
of bringing the product to market.
The software, IDx-DR, correctly identified patients with "more than mild
diabetic retinopathy" 87.4% of the time, and identified those who did not
have it 89.5% of the time.
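
Those two figures are the software's sensitivity and specificity. A
short sketch of how such numbers are computed from a confusion matrix
(the counts below are illustrative, not IDx's trial data):

    # Sketch: sensitivity and specificity from a confusion matrix.
    # The tp/fn/tn/fp counts are made up for illustration only.
    tp, fn = 874, 126   # patients with disease: correctly flagged vs missed
    tn, fp = 895, 105   # patients without disease: correctly cleared vs false alarms

    sensitivity = tp / (tp + fn)   # ~0.874 -> "87.4% of the time"
    specificity = tn / (tn + fp)   # ~0.895 -> "89.5% of the time"
    print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")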
IDx is one of the many AI software products approved by the FDA for
clinical commercial applications in recent months.
The FDA cleared Viz LVO, a product from startup Viz.ai, to analyze CT
scans and notify healthcare providers of potential strokes in patients.
Post FDA clearance, Viz.ai closed a $21M Series A round from Google
Ventures and Kleiner Perkins Caufield & Byers.
The FDA also cleared GE Ventures-backed startup Arterys for its
Oncology AI suite initially focused on spotting lung and liver lesions.
Fast-track regulatory approval opens up new commercial pathways for
over 80 AI imaging & diagnostics companies that have raised equity
financing since 2014, accounting for a total of 149 deals.
On the consumer side, smartphone penetration and advances in image
recognition are turning phones into powerful at-home diagnostic tools.
Startup Healthy.io's first product, Dip.io, uses the traditional urinalysis
dipstick to monitor a range of urinary infections. Users take a picture
of the stick with their smartphones, and computer vision algorithms
calibrate the results to account for different lighting conditions and
camera quality. The test detects infections and pregnancy-related
complications.
Dip.io, which is already commercially available in Europe and Israel, was
cleared by the FDA.
Apart from this, a number of ML-as-a-service platforms are integrating
with FDA-approved home monitoring devices, alerting physicians when
there is an abnormality.
PREDICTIVE MAINTENANCE
From manufacturers to equipment insurers, AI-IIoT can save
incumbents millions of dollars in unexpected failures.
Field and factory equipment generate a wealth of data, yet unanticipated
equipment failure is one of the leading causes of downtime in
manufacturing.
A recent GE survey of 450 field service and IT decision makers found
that 70% of companies are not aware of when equipment is due for
an upgrade or maintenance, and that unplanned downtime can cost
companies $250K/hour.
Predicting when equipment or individual components will fail benefits
asset insurers, as well as manufacturers.
In predictive maintenance, sensors and smart cameras gather a
continuous stream of data from machines, like temperature and
pressure. The quantity and varied formats of real-time data generated
make machine learning an inseparable component of IIoT. Over time, the
algorithms can predict a failure before it occurs.
Dropping costs of industrial sensors, advances in machine learning
algorithms, and a push towards edge computing have made predictive
maintenance more widely available.
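
Stripped to its essentials, the pipeline resembles the sketch below:
windows of sensor readings labeled with whether a failure followed, fed
to a classifier that then scores live data. Synthetic data and
scikit-learn stand in here for a real deployment:

    # Sketch: train a failure-prediction model on windowed sensor readings.
    # Synthetic data stands in for real temperature/pressure/vibration streams.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    X = rng.normal(size=(2000, 3))          # features: temperature, pressure, vibration
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=2000) > 1.5).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    model = GradientBoostingClassifier().fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))

    # In production the same model would score live sensor windows and raise
    # a work order when the predicted failure probability crosses a threshold.
    new_window = np.array([[2.1, 0.3, 1.8]])
    print("failure probability:", model.predict_proba(new_window)[0, 1])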
A leading indicator of interest in the space is the sheer number of big
tech companies and startups here.
Deals to AI companies focused on industrials and energy, which includes
ML-as-a-service platforms for IIoT, are rising. Newer startups are
competing with unicorns like C3 IoT and Uptake Technologies.
GE Ventures was an active investor here in 2016, backing companies
including Foghorn Systems, Sight Machine, Maana, and Bit Stew
Systems (which it later acquired). GE is a major player in IIoT, with its
Predix analytics platform.
Competitors include Siemens and SAP, which have rolled out their own
products (Mindsphere and Hana) for IIoT.
India's Tata Consultancy announced that it's launching predictive
maintenance and AI-based solutions for energy utility companies.
Tata claimed that an early version of its "digital twin" technology,
which replicates on-ground operations or physical assets in a digital
format for monitoring, helped a power plant save ~$1.5M per gigawatt
per year.
Even big tech companies like Microsoft are extending their cloud and
edge analytics solutions to include predictive maintenance.
E-COMMERCE SEARCH
Contextual understanding of search terms is moving out of the
"experimental phase," but widespread adoption is still a long ways off.
Amazon has applied for over 35 US patents related to "search results"
since 2002.
It has an exclusive subsidiary, A9, focused on product and visual search
for Amazon. A9 has nearly 400 patent applications in the United States
(not all of them related to search optimization).
Some of the search-related patents include using convolutional neural
networks to "determine a set of items whose images demonstrate visual
similarity to the query image" and using machine learning to analyze
visual characteristics of an image and build a search query based on
those.
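
Conceptually, visual product search of this kind boils down to embedding
every catalog image with a CNN and returning the nearest neighbors of
the query image's embedding. A rough sketch using a generic pretrained
network (not Amazon's or A9's actual system):

    # Sketch: visual similarity search with a pretrained CNN as feature extractor.
    import numpy as np
    import tensorflow as tf

    # Pretrained CNN with the classification head removed; outputs one vector per image.
    extractor = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                                  input_shape=(224, 224, 3))

    def embed(images):
        images = tf.keras.applications.mobilenet_v2.preprocess_input(images.astype("float32"))
        return extractor.predict(images, verbose=0)

    # Random arrays stand in for real catalog and query photos.
    catalog = np.random.randint(0, 255, size=(50, 224, 224, 3))
    query = np.random.randint(0, 255, size=(1, 224, 224, 3))

    catalog_vecs, query_vec = embed(catalog), embed(query)[0]
    dists = np.linalg.norm(catalog_vecs - query_vec, axis=1)
    print("closest catalog items:", np.argsort(dists)[:5])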
Amazon is hiring for over 150 roles exclusively in its search division,
spanning natural language understanding, chaos engineering, and machine
learning, among other areas.
But Amazon's scale of operations and R&D in e-commerce search is the
exception among retailers.
Few retailers have discussed AI-related strategies on earnings calls, and
many haven't scaled or optimized their e-commerce operations.
But one of the earliest brands to do so was eBay.
The company first mentioned "machine learning" in its Q3'15 earnings
calls. At the time, eBay had just begun to make it compulsory for sellers
to write product descriptions, and was using machine learning to
process that data to find similar products in the catalog.
Using proper metadata to describe products on a site is a starting point
when using e-commerce search to surface relevant search results.
But describing and indexing alone is not enough. Many users search for
products in natural language (like "a magenta shirt without buttons") or
may not know how to describe what they're looking for.
This makes natural language for e-commerce search a challenge.
Early-stage SaaS startups are emerging, selling search technologies to
third-party retailers.
Image search startup ViSenze works with clients like Uniqlo, Myntra, and
Japanese e-commerce giant Rakuten. ViSenze allows in-store customers
to take a picture of something they like at a store, then upload the picture
to find the exact product online.
It has offices in California and Singapore, and raised a $10.5M Series B
in 2016 from investors including the venture arm of Rakuten. It entered
the Unilever Foundry in 2017, which allows startups in Southeast Asia to
test pilot projects with its brands.
Another startup developing AI for online search recommendations is
Israel-based Twiggle.
The Alibaba-backed company is developing a semantic API that sits on
top of existing e-commerce search engines, responding to very specific
searches by the buyer. Twiggle raised $15M in 2017 in a Series B round
and entered the Plug and Play Accelerator last year.
CAPSULE NETWORKS
Deep learning has fueled the majority of the AI applications today.
It may now get a makeover thanks to capsule networks.
Google's Geoffrey Hinton, a pioneering researcher in deep learning,
introduced a new concept called "capsules" in a paper way back in 2011,
arguing that "current methods for recognizing objects in images perform
poorly and use methods that are intellectually unsatisfying."
Those "current methods" Hinton referred to include one of the most
popular neural network architectures in deep learning today, known as
convolutional neural networks (CNN). CNN has particularly taken off in
image recognition applications. But CNNs, despite their success, have
shortcomings (more on that below).
Hinton published 2 papers during 2017-2018 on an alternative concept
called "capsule networks," also known as CapsNet, a new architecture
that promises to outperform CNNs on multiple fronts.
Without getting into the weeds, CNNs fail when it comes to precise spatial
relationships. Consider the face below. Although the relative position of
the mouth is off with respect to other facial features, a CNN would still
identify this as a human face.
Experimental
Although there are methods to mitigate the above problem, another major
issue with CNNs is the failure to understand new viewpoints.
"Now that convolutional neural networks
have become the dominant approach to
object recognition, it makes sense to
ask whether there are any exponential
inefficiencies that may lead to their demise.
A good candidate is the difficulty that
convolutional nets have in generalizing to
novel viewpoints."
PAPER ON DYNAMIC ROUTING BETWEEN CAPSULES
For instance, a CapsNet does a much better job of identifying the images
of toys in the first and second rows as belonging to the same object, only
taken from a different angle or viewpoint. CNNs would require a much
larger training dataset to identify each orientation.
(The images above are from a database called smallNORB which contains
grey-scale images of 50 toys belonging to 1 of 5 categories: four-legged
animals, human figures, airplanes, trucks, and cars. Hinton's paper found
that CapsNets reduced the error rate by 45% when tested on this dataset
compared to other algorithmic approaches.)
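
One concrete piece of the CapsNet architecture is the "squash"
nonlinearity from the dynamic-routing paper: it keeps the direction of a
capsule's output vector but compresses its length to between 0 and 1, so
the length can be read as the probability that an entity is present. A
small sketch:

    # Sketch: the "squash" nonlinearity applied to capsule output vectors.
    # Length encodes presence probability; direction encodes the entity's pose.
    import numpy as np

    def squash(s, eps=1e-9):
        norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
        return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

    capsule_outputs = np.array([[0.1, 0.2, 0.05],    # weak activation -> short output vector
                                [4.0, 3.0, 2.0]])    # strong activation -> length close to 1
    print(np.linalg.norm(squash(capsule_outputs), axis=-1))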
Hinton claims that capsule networks were tested against some
sophisticated adversarial attacks (tampering with images to confuse the
algorithms) and were found to outperform convolutional neural networks.
Hackers can introduce small variations to fool a CNN. Researchers at
Google and OpenAI have demonstrated this with several examples.
One of the more popular examples CapsNet was tested against is from a
2015 paper by Google's Ian Goodfellow and others. In that example, a
small change that is not readily noticeable to the human eye results in
a neural network identifying a panda as a gibbon, a type of ape, with
high confidence.
Research into capsule networks is in its infancy, but could challenge
current state-of-the-art approaches to image recognition.
NEXT-GEN PROSTHETICS
Very early-stage research is emerging, combining biology, physics, and
machine learning to tackle one of the hardest problems in prosthetics:
dexterity.
DARPA has spent millions of dollars on its advanced prosthetics
program, which it started in 2006 with Johns Hopkins University to help
wounded veterans. But the problem is a complex one to tackle.
For instance, giving amputees the ability to move individual fingers in
a prosthetic arm, decoding brain and muscle signals behind voluntary
movements, and translating that into robotic control all require a multi-
disciplinary approach.
As Megan Molteni explained in an article for Wired last year, take a
simple example of playing the piano. After repeated practice, playing
a chord becomes "muscle memory," but that's not how prosthetic
limbs work.
More recently, researchers have started using machine learning to
decode signals from sensors on the body and translate them into
commands that move the prosthetic device.
Johns Hopkins' Applied Physics Laboratory has an ongoing project on neural
interfaces for prosthetics using "neural decoding algorithms" to do
just that.
In June last year, researchers from Germany and Imperial College
London used machine learning to decode signals from the stump of the
amputee and power a computer to control the robotic arm. The research
on the "brain-machine interface" was published in Science Robotics.
Other papers explore intermediary solutions like using myoelectric
signals (electric activity of muscles near the stump) to activate a
camera, and running computer vision algorithms to estimate the grasp
type and size of the object before them.
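
A heavily simplified version of that decoding step might look like the
sketch below, with random numbers standing in for real myoelectric
recordings and grasp labels:

    # Sketch: decode intended grasp type from windows of myoelectric (EMG) signals.
    # Random data stands in for real electrode recordings and labels.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n_windows, n_channels = 600, 8
    X = rng.normal(size=(n_windows, n_channels))   # e.g. RMS amplitude per electrode
    y = rng.integers(0, 3, size=n_windows)         # 0=open hand, 1=pinch, 2=power grip

    clf = LogisticRegression(max_iter=1000).fit(X, y)

    live_window = rng.normal(size=(1, n_channels))
    grasp = ["open hand", "pinch", "power grip"][clf.predict(live_window)[0]]
    print("decoded command:", grasp)   # would be sent to the prosthetic controller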
Further highlighting the AI community's interest in the space, the "AI for
Prosthetics Challenge" was one of the competition tracks in NeurIPS'18
(a leading, annual machine learning conference).
The 2018 challenge was to predict the performance of a prosthetic
leg using reinforcement learning (more on reinforcement learning in
the following sections of this report). Researchers use open-source
software called OpenSim, which simulates human movement.
The previous year's focus was "Learning to Run," which saw 442
participants attempting to teach AI how to run, with sponsors including
AWS, Nvidia, and Toyota.
CLINICAL TRIAL ENROLLMENT
One of the biggest bottlenecks in clinical trials is enrolling the right
pool of patients. Apple might be able to solve this issue.
Interoperability, the ability to share information easily across
institutions and software systems, is one of the biggest issues in
healthcare, despite efforts to digitize health records.
This is particularly problematic in clinical trials, where matching the right
trial with the right patient is a time-consuming and challenging process
for both the clinical study team and the patient.
For context, there are over 18,000 clinical studies that are currently
recruiting patients in the US alone.
Patients may occasionally get trial recommendations from their doctors
if a physician is aware of an ongoing trial.
Otherwise, the onus of scouring through ClinicalTrials.gov, a
comprehensive federal database of past and ongoing clinical trials,
falls on the patient.
An ideal solution would be AI software that extracts
relevant information from a patient's medical records, compares it with
ongoing trials, and suggests matching studies.
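
A toy sketch of that matching step, ranking trials by text similarity
between a patient record and eligibility criteria (the trial IDs and
text are invented; a real system would add structured fields, negation
handling, and clinician review):

    # Sketch: rank clinical trials by text similarity to a patient's record.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    patient_record = ("64-year-old female, stage II non-small cell lung cancer, "
                      "non-smoker, prior chemotherapy")
    trials = {
        "NCT-A": "Recruiting adults with non-small cell lung cancer previously treated with chemotherapy",
        "NCT-B": "Healthy volunteers for an influenza vaccine study",
        "NCT-C": "Stage II-III breast cancer patients, no prior systemic therapy",
    }

    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([patient_record] + list(trials.values()))
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

    for (trial_id, _), score in sorted(zip(trials.items(), scores), key=lambda x: -x[1]):
        print(f"{trial_id}: similarity {score:.2f}")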
Few startups are working with clients directly in the clinical trials space.
The biggest barriers to entry for smaller startups streamlining clinical
trials are that the technologies are relatively new and the industry is slow
to adapt.
Tech giants like Apple, however, have seen success in bringing on
partners for their healthcare-focused initiatives.
Apple is changing how data flows in healthcare and is opening up new
possibilities for AI, specifically around how clinical study researchers
recruit and monitor patients.
Since 2015, Apple has launched two open-source frameworks,
ResearchKit and CareKit, to help clinical trials recruit patients and
monitor their health remotely.
The frameworks allow researchers and developers to create medical
apps to monitor people's daily lives, removing geographic barriers to
enrollment.
For example, nearly 10,000 people use the mPower app, which provides
exercises like finger tapping and gait analysis to study patients with
Parkinson's disease who have consented to share their data with the
broader research community.
Researchers at Duke University developed an Autism & Beyond app that
uses the iPhone's front camera and facial recognition algorithms to
screen children for autism.
Apple is also working with popular EHR vendors like Cerner and Epic to
solve interoperability problems.
In January 2018, Apple announced that iPhone users would have access
to all their electronic health records from participating institutions on
their iPhone's Health app.
Called "Health Records," the feature is an extension of what AI healthcare
startup Gliimpse was working on before it was acquired by Apple in
2016.
In an easy-to-use interface, users can find all the information they
need on allergies, conditions, immunizations, lab results, medications,
procedures, and vitals.
In June 2018, Apple rolled out a Health Records API for developers.
Users can now choose to share their data with third-party applications
and medical researchers, opening up new opportunities for disease
management and lifestyle monitoring.
The possibilities are seemingly endless when it comes to using AI and
machine learning for early diagnosis, enrolling the right pool of patients,
and even driving decisions in drug design.
GENERATIVE ADVERSARIAL NETWORKS
Two neural networks trying to outsmart each other are getting very
good at creating realistic images.
Can you identify which of these images are fake?
The answer is all of the above. Each of these highly realistic images was
created by generative adversarial networks, or GANs.
(Note: the bottom right image represents a "class leakage" where
the algorithm possibly confused properties of a dog with a ball and
created a "dogball")
GAN, a concept introduced by Google researcher Ian Goodfellow in 2014,
taps into the idea of "AI versus AI." There are two neural networks: the
generator, which comes up with a fake image (say a dog for instance),
and a discriminator, which compares the result to real-world images
and gives feedback to the generator on how close it is to replicating a
realistic image.
This forms a constant feedback loop between two neural networks trying
to outsmart each other.
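
That loop can be written down compactly. The sketch below trains a tiny
GAN on one-dimensional Gaussian data in PyTorch, purely as a conceptual
illustration (not the BigGAN setup discussed next):

    # Sketch: minimal GAN training loop on 1-D Gaussian data (PyTorch).
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0       # "real" data: N(3, 0.5)
        fake = G(torch.randn(64, 8))                # generator's attempt

        # Discriminator: push real samples toward 1, fakes toward 0.
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator: try to fool the discriminator into outputting 1 on fakes.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    print("generated sample mean:", G(torch.randn(1000, 8)).mean().item())  # drifts toward 3.0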
The images above are from a Sept'18 paper by Andrew Brock, an intern
at Google DeepMind, published along with other DeepMind researchers.
They trained GANs on a very large scale dataset to create "BigGANs."
One of the challenges Brock and team encountered with BigGANs:
A spider, for example, has "lots of legs." But how many is "lots"?
The primary challenge to scaling large-scale projects like GANs, however,
is computational power. Here's an excerpt from FastCompany, with a
rough estimation of the amount of computing power that went into this
research:
For GANs to scale, hardware for AI has to scale in parallel.
Brock's is not the only GAN-related paper published in recent months.
Using GANs, researchers from Lancaster University in the UK, Northwest
University in China, and Peking University in China developed a
captcha solver.
The paper demonstrated that GANs can crack text-based captchas in
just 0.05 seconds using a desktop GPU, with a relatively higher success
rate compared to previous methods.
Researchers at CMU used GANs for "face-to-face" translation in this
iteration of "deepfake" videos. In one deepfake example, John
Oliver turns into Stephen Colbert.
Researchers at the Warsaw University of Technology developed a
ComixGAN framework to turn videos into comics using GANs.
Art auction house Christie's sold its first ever GAN-generated painting for
a whopping $432,500.
And in a more recent paper on GANs, Nvidia researchers used a "style-
based generator" to create hyper-realistic images.
GANs aren't just for fun experiments. The approach also has serious
implications, including fake political videos and morphed pornography.
The Wall Street Journal is already training its researchers to spot
deepfake videos.
As the research scales, it will change the future of news, media, art, and
even cybersecurity. GANs are already changing how we train AI algorithms
(more on this in the following section on "synthetic training data.")
FEDERATED LEARNING
The new approach aims to protect privacy while training AI with
sensitive user data.
Our daily interaction with smartphones and tablets, from the choice of
words we use in messaging to the way we react to photos, generates a
wealth of data.
Training AI algorithms using our unique local datasets can vastly
improve their performance, such as more accurately predicting the next
word you're going to type into your keyboard.
As researchers from Google explain in a 2017 paper, "the use of
language in chat and text messages is generally much different than
standard language corpora, e.g., Wikipedia and other web documents;
the photos people take on their phone are likely quite different than
typical Flickr photos."
But this user data is also personal and privacy sensitive.
Google's federated learning approach aims to use this rich dataset, but
at the same time protect sensitive data.
In a nutshell, your data stays on your phone. It is not sent to or stored in
a central cloud server. A cloud server sends the most updated version of
an algorithm (called the "global state" of the algorithm) to a random
selection of user devices.
Your phone makes improvements and updates to the model based on
your localized data. Only this update (and updates from other users)
are sent back to the cloud to improve the "global state" and the process
repeats itself.
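
A bare-bones sketch of one such round, in the spirit of federated
averaging, with plain numpy arrays standing in for model weights and
simulated clients in place of real phones:

    # Sketch: one round of federated averaging with numpy arrays as "model weights".
    import numpy as np

    rng = np.random.default_rng(7)
    global_weights = np.zeros(10)                 # the server's "global state"

    def local_update(weights, n_examples):
        """Pretend each device fine-tunes the model on its own (private) data."""
        return weights + rng.normal(scale=0.1, size=weights.shape), n_examples

    # A random subset of devices participates in each round; their raw data never
    # leaves them, only weight updates (weighted by how much data each device has).
    clients = [local_update(global_weights, n) for n in (120, 40, 300, 15)]
    total = sum(n for _, n in clients)
    global_weights = sum(w * (n / total) for w, n in clients)

    print("updated global state:", global_weights[:3], "...")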
Google is testing federated learning in its Android keyboard
called Gboard.
Note that the mechanism of aggregating individual updates from each
node is not the novelty here. There are algorithms that do that already.
But unlike other distributed algorithms, the federated learning approach
takes into account two important characteristics of the dataset:
Non-IID: Data generated on each phone (or other device) is unique
based on each person's usage of the device. And so these datasets
are not "Independent and identically distributed (IID)" a common
assumption made by other distributed algorithms for the sake
of statistical inference, but not reflective of practical real-world
scenarios.
Unbalanced: Some users are more actively engaged with an app
than others, naturally generating more data. As a result, each phone,
for instance, will have varying amounts of training data.
Firefox tested out federated learning to rank suggestions that appear
when a user starts typing into the URL bar, calling it "one of the very first
implementations [of federated learning] in a major software project."
In another application of federated learning, Google Ventures-backed
AI startup OWKIN, which is focused on drug discovery, is using the
approach to protect sensitive patient data. The model allows different
cancer treatment centers to collaborate without patients' data ever
leaving the premises, according to investor Otium Venture.
ADVANCED HEALTHCARE BIOMETRICS
Using neural networks, researchers are starting to study and measure
atypical risk factors that were previously difficult to quantify.
Analysis of retinal images and voice patterns using neural networks
could potentially help identify risk of heart disease.
Researchers at Google used a neural network trained on retinal images
to find cardiovascular risk factors, according to a paper published in
Nature Biomedical Engineering in 2018.
The research found that not only was it possible to identify risk factors
such as age, gender, and smoking patterns through retinal images, it was
also "quantifiable to a degree of precision not reported before."
Similarly, the Mayo Clinic partnered with Beyond Verbal, an Israeli startup
that analyzes acoustic features in voice, to find distinct voice features
in patients with coronary artery disease (CAD). The study found 2 voice
features that were strongly associated with CAD when subjects were
describing an emotional experience.
Recent research from startup Cardiogram suggests "heart rate variability
changes driven by diabetes can be detected via consumer, off-the-shelf
wearable heart rate sensors" using deep learning. One algorithmic
approach showed 85% accuracy in detecting diabetes from heart rate.
A more futuristic use case is passive monitoring of healthcare biometrics.
In January 2018, a Google patent was published with an ambitious vision
for analyzing cardiovascular function from a person's skin color or skin
displacement.
The sensors might even be positioned (per the patent's illustrations) in a
"sensing milieu" in a patient's bathroom.
By recognizing skin color changes at the wrist and cheek, for example,
and "comparing the times [of measurement] and distance between these
regions," the system could calculate a "pulse-wave velocity (PWV)."
The velocity information could then be used to determine cardio-health
metrics such as arterial stiffness or blood pressure.
"Machine learning could be applied to create a patient specific model for
estimating blood pressure from PWV," according to the patent.
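
The arithmetic behind pulse-wave velocity itself is simple: divide the
distance between two measurement sites by the time the pulse takes to
travel between them. A sketch with synthetic signals, using
cross-correlation to estimate that transit time (all values are
illustrative):

    # Sketch: pulse-wave velocity = distance between two sites / pulse transit time.
    import numpy as np

    fs = 250.0                          # sampling rate of the optical/color signal, Hz
    t = np.arange(0, 2, 1 / fs)
    true_delay = 0.08                   # seconds for the pulse to travel between sites (illustrative)

    def pulse(shift):
        return np.sin(2 * np.pi * 1.2 * (t - shift)) + 0.05 * np.random.randn(t.size)

    site_a, site_b = pulse(0.0), pulse(true_delay)   # e.g. wrist and cheek signals

    # Estimate transit time from the lag that maximizes cross-correlation.
    lags = np.arange(-t.size + 1, t.size)
    delay = lags[np.argmax(np.correlate(site_b, site_a, mode="full"))] / fs

    distance_m = 0.55                   # assumed path length between the two skin regions
    print("pulse transit time: %.3f s, PWV: %.2f m/s" % (delay, distance_m / delay))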
Amazon applied for a similar patent for passive monitoring in 2014,
which was later granted in 2017. It combines recognition of facial
features (using neural nets or other algorithmic approaches) with heart
rate analysis.
For example, algorithms can track color changes in two areas of the
face, like regions near the eyes and cheek, using that data to detect
the heart rate.
AI's ability to find patterns will continue to pave the way for new
diagnostic methods and identification of previously unknown risk factors.
AUTO CLAIMS PROCESSING
Insurers and startups are beginning to use AI to compute a car owner's
"risk score," analyze images of accident scenes, and monitor driver
behavior.
China's Ant Financial, an Alibaba affiliate, uses deep-learning algorithms
for image processing in its "accident processing system."
Currently, car owners or drivers take their vehicles to an "adjuster," a
person who inspects the damage to the vehicle and logs the details,
which are then sent to the auto insurance company.
Advances in image processing are now allowing people to take a picture
of the vehicle and upload it to Ant Financial. Neural networks then
analyze the image and automate the damage assessment.
Another approach Ant is taking is to create a risk profile of the driver to
influence the actual pricing model of auto insurance.
"The development of technologies such as
Big Data and artificial intelligence enables
insurance companies to further leverage
the consumer data and analyze the probable
risk exposure of vehicle owners. Therefore,
risk factors for auto insurance can shift
from a "car-oriented" approach to a "car/
owner combination."

ALIBABA CLOUD BLOG
Alibaba introduced something called "Auto Insurance Points," using
machine learning to calculate a car owner's risk score based on factors
such as credit history, spending habits, and driving habits, among
other things.
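
Whatever model learns the weights, the output of such a system is a
single score that combines heterogeneous factors. A toy sketch with
hypothetical features and hand-set weights (a real system would fit
these from claims history):

    # Sketch: turn heterogeneous driver/owner factors into a single risk score.
    # Feature names and weights are hypothetical, for illustration only.
    def risk_score(driver):
        weights = {"hard_brakes_per_100km": 4.0, "night_driving_share": 10.0,
                   "prior_claims": 15.0, "credit_score_norm": -8.0}
        base = 50.0
        return base + sum(weights[k] * driver[k] for k in weights)

    print(risk_score({"hard_brakes_per_100km": 2.5, "night_driving_share": 0.3,
                      "prior_claims": 1, "credit_score_norm": 0.8}))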
Smaller startups are also getting into insurance and claims processing
but adopting a different approach.
Nexar, for instance, incentivizes drivers to use their smartphones as a
dashcam and upload the footage to the Nexar app. In return, owners get
a discount on their insurance premiums.
The app uses computer vision algorithms to monitor road conditions,
driver behavior, and accidents. It also offers a "crash recreation" feature
to reconstruct and analyze the circumstances in which accidents take
place, and works with insurance clients to process claims.
UK-based Tractable allows insurers to upload an image of the
damage and an estimate into its claims management platform. The
"AI Review" feature compares this with thousands of images to adjust
the price accordingly.
Interestingly, Tractable is targeting other players in the ecosystem as
well, such as car repairers, appraisers, vendors, and car hire companies.
ANTI-COUNTERFEITING
Fakes are getting harder to spot, and online shopping makes it easier
than ever to buy fake goods. To fight back, brands and pawnbrokers are
beginning to experiment with AI.
From drugs to handbags to smartphones, counterfeiting is a problem
that affects all types of retail.
Some product imitations look so authentic that they are classified as
"super fakes."
China's rapidly growing e-commerce platform Pinduoduo mentioned
"counterfeit" 11 times in its Q3'18 earnings call, describing "a very hard
fight against counterfeit goods and problematic merchants."
"In 2017, weproactively removed a total
of 10.7 million problematic products
and blocked 40 million links thatraised
infringement issuesWe have also
partnered with over 400 brands to work
together on combating counterfeit."
COLIN HUANG, FOUNDER AND CEO OF PINDUODUO
Brands are fighting the war against fakes on two fronts:

In the online world, identifying and removing online listings that
infringe on brand trademarks like brand name, logo, and slogans

In the physical world, identifying fake goods like luxury handbags
that are rip offs
Online counterfeiting is vast and complex in scope and scale.
E-commerce giant Alibaba, which has been under some fire for not
doing enough to counter fake goods on its sites, reported that it's using
deep learning to continuously scan its platform for IP infringements. It
uses image recognition to identify characters in images, coupled with
semantic recognition, possibly to monitor brand names or slogans in
images of products listed on its sites.
Counterfeiters use keywords and images very similar to the original
brand listing to sell fake goods on fake websites, fake goods on
legitimate marketplaces, and promote fake goods on social media sites
like Instagram.
When one listing is taken down, counterfeiters may repost the same fake
product with a different string of keywords.
Barcelona-based startup Red Points is using machine learning to scan
websites for potential infringements and find patterns in the choice of
keywords counterfeiters use. It boasts clients in the cosmetics, luxury
watch, home goods, and apparel industries, including MVMT, DOPE, and
Paul Hewitt.
Spotting fakes is trickier and more manual in the physical world.
When a seller posts a second-hand luxury handbag for sale, or goes
to a pawnbroker to trade it, the verification process usually involves an
authentication expert physically examining the bag, including the make,
material, and stitching pattern.
eBay and others charge fees to authenticate a single luxury
handbag using identification experts.
But with the rise of "super fakes" or "triple-A fakes," it's becoming nearly
impossible to tell the difference with the naked eye.
Building a database of fake and authentic goods, extracting their
features, and training an AI algorithm to tell the difference is a
cumbersome process.
Startup Entrupy worked with authentication experts to build a database
of fake vs. real goods for training its algorithms for 2 years. The process
is harder for rare vintage luxury goods.
Entrupy developed a portable microscope that attaches to a smartphone.
When users take and upload a picture of the product (handbag, watch,
etc), AI algorithms analyze microscopic signatures that are unique to
each product, and verify it against a database of known and authentic
products.
The database is growing, but there isn't a complete set of products out
in the market. A paper published by Entrupy highlights some other
operating assumptions and limitations.
The key idea is that objects manufactured using standard or prescribed
methods will have visually similar characteristics, compared to the
manufacturing process a counterfeiter would use (non-standardized,
inexpensive mass production). Secondly, the tech may not work for
things like electronic chips that are nano-fabricated (variations at a scale
that Entrupy's microscope cannot detect).
Cypheme is taking a different approach. Its ink-based technology
can be used as a sticker on the product, or directly printed onto labels
and packaging.
Nikkei Asian Review detailed the tech in an interview with the CEO: A
random pattern is generated from a drop of ink, the pattern is surrounded
by another circle of orange ink that Cypheme claims is proprietary to
the company and impossible to replicate, then each unique pattern is
associated with a specific product on a database.
It uses a smartphone camera and neural networks for pattern recognition
to verify the ink pattern for the specific product against its database.
This means Cypheme has to work directly with brand manufacturers to
make sure products are shipped with the tracing ink. It recently entered
into a partnership with AR Packaging, a leading packaging company in
Europe working with food brands like Unilever and Nestle.
While printing ink on packaging is efficient for tracking an item from
the manufacturing plant and along the distribution chain, the tech
doesn't work for secondhand purchase authentication. For instance,
a buyer may remove Cypheme's sticker from the packaging of a luxury
watch, and decide to resell it at a broker shop or online. In this case,
verifying authenticity is not possible unless the printing is part of the
product itself.
The solution for luxury brands and other high-stakes retailers, moving
forward, may be to identify or add unique fingerprints to physical goods
at the site of manufacturing and track them through the supply chain.
CHECKOUT-FREE RETAIL
Entering a store, picking what you want, and walking out almost "feels"
like shoplifting. AI could make actual theft a thing of the past and
check-out free retail much more common.
Amazon Go did away with
the entire checkout process,
allowing shoppers to grab
items and walk out.
Amazon has no public plans
to sell its tech-as-a-service
to other retailers yet, and has
been tight-lipped about the
operations, success, and pain
points, only revealing that
it uses sensors, cameras,
computer vision, and deep learning algorithms. It has denied using facial
recognition algorithms.
Startups like Standard Cognition and AiFi have seized the opportunity,
stepping in to democratize Amazon Go for other retailers.
A challenge for grab-and-go stores is charging the right amount to the
right shopper.
Loss of inventory due to shoplifting and paperwork error, among other
things, cost US retailers around $47B in 2017, according to the National
Retail Federation.
"Stealing is buying," Steve Gu, co-founder and CEO of startup AiFi, said in
an interview with The AI Podcast, discussing the technology behind grab-
and-go stores.
So far, Amazon Go is the only successful commercial deployment, but
the parameters of success are tightly controlled.
The chance of someone shoplifting is minimized when you control who
enters the store, and automatically charge them.
Amazon already has an established base of Prime members. All the
Go stores so far have been restricted to members, with other retail
operations like the Kindle store, which is open to the general public, still
relying on a manual checkout process.
Smaller bodegas, convenience stores, and even several established
supermarkets have to build that membership base from scratch.
Steve Gu hinted in the same podcast that there could be a "grab-and-go"
section for people willing to download the app, and a separate checkout
line for those who don't want to.
It's not clear how a store's infrastructure would support both.
That still leaves the issue of point-of-sale inventory shrinkage, such
as incorrectly billed items or POS theft. China's Yitu Technology and
Toshiba, with its intelligent camera for checkout, are some of the
companies separately working on the shrinkage problem.
The complexity of preventing theft depends on the size and scale of
operations, and type of products on the shelves.
Amazon Go stores are only about 1,800 to 3,000 sq. ft, and use hundreds
of cameras covering nearly every inch of ceiling space. In comparison,
traditional supermarkets can be 40,000 sq. ft. or more.
Go, which uses weight sensors on shelves in addition to cameras for
visual recognition, currently only offers a limited selection of items, like
prepared and packaged meal kits.
Some things to consider are how floor space will be utilized, especially in
densely packed supermarkets, to ensure cameras are optimally placed
to track people and items. Loose vegetables and other produce that
are billed per pound would presumably rely on sensor tech, but multiple
shoppers picking items simultaneously from the same carton would not
work with sensors alone. Even pre-packaged or diced vegetables have
slight variations in price from one package to another.
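
One way sensor fusion can handle that case is to ask which catalog item,
and how many units of it, best explains an observed weight change on a
shelf, falling back on camera evidence whenever several explanations
fit. A toy sketch of the weight side, with made-up catalog weights:

    # Sketch: attribute a shelf weight change to the item/quantity that best explains it.
    # Catalog weights (grams) are illustrative.
    catalog = {"sparkling water": 355, "meal kit": 620, "chocolate bar": 45}

    def explain_weight_change(delta_g, tolerance_g=10):
        candidates = []
        for item, unit in catalog.items():
            qty = round(delta_g / unit)
            if qty > 0 and abs(delta_g - qty * unit) <= tolerance_g:
                candidates.append((item, qty))
        return candidates   # more than one candidate means the cameras must break the tie

    print(explain_weight_change(710))   # ambiguous: 2 sparkling waters or 16 chocolate bars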
Apparel too is particularly hard for computer vision systems to track.
Identifying the size (S/M/L) and tracking clothes that are easily folded
and tucked away are some of the pain points.
While startup AiFi promises to utilize existing store infrastructure and
a combination of sensors and cameras, Standard Cognition claims to
completely do away with sensors, relying solely on machine vision.
Standard Cognition announced a partnership with Paltac Corporation,
Japan's largest CPG wholesaler, to outfit 3,000 Japanese stores ahead of
the Tokyo Olympics in 2020. AiFi reportedly has around 20 retail clients
in the pipeline, including a contract with a major retailer in New York.
In the near term, it comes down to what the cost of deployment and cost
of inventory loss due to potential tech glitches would be, and whether a
retailer can take on these costs and risks.
BACK OFFICE AUTOMATION
AI is automating administrative work, but the varied nature and formats
of data make it a challenging task.
Challenges for automating "back office tasks" can be unique, depending
on the industry and the application.
Take clinical trials for instance. Many trials still rely on paper diaries
for entering patient data. These diaries are stored digitally, often in
difficult-to-search formats, while handwritten clinical notes pose unique
challenges for natural language processing algorithms to extract
information (accounting for spelling errors, jargon, abbreviations, and
missing entries).
Automating auto claims processing, on the other hand, brings a different
set of challenges, in this case assessing the damage and drilling down
into the root cause.
But different sectors are beginning to adopt ML-based workflow
solutions to varying degrees.
Robotic Process Automation (RPA), a loose term for any back office
drudge work that is repetitive and can be automated by a bot, has
been the subject of much buzz. But, like AI, it's an umbrella term that
encompasses a wide range of tasks, from data entry to compliance to
transaction processing to customer onboarding, and more.
While not all RPAs are ML-based, many are beginning to integrate image
recognition and language processing into their solutions.
WorkFusion, for example, automates back-end operations like Know Your
Customer (KYC) and Anti-Money Laundering (AML) processes.
Unicorn UiPath's services have been used by over 700 enterprise clients
globally, including DHL, NASA, and HP, across industries ranging from
finance to manufacturing to retail.
Automation Anywhere is another unicorn in the RPA space. One of the
company's case studies highlights a partnership with a global bank to
use machine learning to automate human resource management. An "IQ
Bot" extracts information from forms that come in from several countries
and in many languages, cleans the data, and then automatically enters it
into a human resource management system.
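
At its simplest, the extraction step is pattern matching over
semi-structured text before the cleaned fields are pushed into the
downstream system. A toy sketch with a hypothetical form layout
(production pipelines add OCR, language detection, and validation):

    # Sketch: pull structured fields out of a semi-structured form and normalize them.
    import re

    form_text = """
    Employee Name: Maria Gonzalez
    Start date: 03/11/2018
    Country: ES
    Annual salary: 54,000 EUR
    """

    fields = {
        "name": re.search(r"Employee Name:\s*(.+)", form_text).group(1).strip(),
        "start_date": re.search(r"Start date:\s*([\d/]+)", form_text).group(1),
        "salary_eur": int(re.search(r"Annual salary:\s*([\d,]+)", form_text).group(1).replace(",", "")),
    }
    print(fields)   # cleaned record, ready to be pushed into the HR system of record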
Despite the concept of RPA being around for years, many industries
are just beginning to overcome inertia and experiment with newer
technologies. In other areas, there's a need for digitization before there
can be a layer of predictive analytics.
LANGUAGE TRANSLATION
NLP for language translation is both a challenge and an
untapped market opportunity. Big tech companies are pushing
the boundaries here.
Machine-based language translation is a huge untapped opportunity with
applications in back office automation for multinational corporations,
customer support, news & media, and other things.
Baidu recently announced that it's launching new translator earbuds,
similar to Google Pixel Buds, which can reportedly translate between 40
different languages in real-time.
Some startups like Unbabel are using human-in-the-loop machine
translation systems, with the goal that the feedback loop will train the
algorithms to get better over time.
NLP for translation has several challenges. For instance, Chinese natural
language processing alone is complex, with 130 spoken dialects and 30
written languages.
A year after Yoshua Bengio, a pioneering researcher in deep learning,
published a paper proposing a new architecture for machine translation
(a novel way of using neural networks instead of traditional statistical
approaches), Google upgraded its own algorithms for the Google
Translate tool.
"This breakthrough will help us provide even more accurate translations
for people around the world," CEO Sundar Pichai said in an earnings call
in 2016.
Google wanted to move away from its old algorithmic approach of
Phrase-Based Machine Translation (PBMT) and proposed a new Google
Neural Machine Translation (GNMT) system.
Although different papers had been published on neural machine
translation, there were limitations, like the time and computational
resources that went into training these models, and failure in translating
rare words.
Google suggested improvements to address these issues, and tested its
algorithms on English to Chinese, Chinese to English, Spanish to English,
among other examples.
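
The architectural shift is easier to see in code. The sketch below is a
bare-bones, untrained encoder-decoder of the kind neural machine
translation systems build on, with random token ids standing in for a
real sentence pair (a generic illustration, not Google's GNMT):

    # Sketch: the shape of a neural machine translation model -- an encoder RNN
    # reads the source sentence, a decoder RNN emits the target one token at a time.
    # Real systems add attention, sub-word vocabularies, and training on millions
    # of sentence pairs.
    import torch
    import torch.nn as nn

    src_vocab, tgt_vocab, emb, hidden = 1000, 1200, 64, 128

    encoder_embed = nn.Embedding(src_vocab, emb)
    encoder = nn.GRU(emb, hidden, batch_first=True)
    decoder_embed = nn.Embedding(tgt_vocab, emb)
    decoder = nn.GRU(emb, hidden, batch_first=True)
    to_vocab = nn.Linear(hidden, tgt_vocab)

    src = torch.randint(0, src_vocab, (1, 7))       # "source sentence" of 7 token ids
    _, state = encoder(encoder_embed(src))          # fixed-size summary of the source

    token = torch.zeros(1, 1, dtype=torch.long)     # start-of-sentence token id
    translation = []
    for _ in range(10):                             # greedy decoding, 10 steps max
        out, state = decoder(decoder_embed(token), state)
        token = to_vocab(out[:, -1]).argmax(dim=-1, keepdim=True)
        translation.append(token.item())
    print("decoded target token ids:", translation)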
Several research papers have been published on the topic. But the most
recent breakthrough comes from Facebook.
According to the paper, "Most research in multilingual NLP focuses
on high-resource languages like Chinese, Arabic or major European
languages, and is usually limited to a few (most often only two)
languages. In contrast, we learn joint sentence representations for 93
different languages, including under-resourced and minority languages."
As big tech companies continue devoting resources to improving
translation frameworks, efficiency and language capabilities will improve
and adoption will increase across industries.
SYNTHETIC TRAINING DATA
Access to large, labeled datasets is necessary for training AI
algorithms. Realistic fake data may solve the bottleneck.
AI algorithms are only as good as the data they are fed, and
accessing and labeling this data for different applications is time
and capital intensive.
Access to this type of real-world data may not even be feasible.
Consider an autonomous vehicle for instance. Training AVs on
dangerous, less frequent situations, such as blinding sun or a pedestrian
jumping out from behind parked cars, using real data is hard.
That's where synthetic datasets come in.
In March 2018, Nvidia launched a cloud-based photorealistic simulation
for autonomous vehicles called DRIVE Constellation. AVs can drive in
virtual reality simulation for billions of miles before hitting the roads, a
venture aimed at creating "a safer, more scalable method for bringing
self-driving cars to the roads."
Imagine AVs driving through a thunderstorm. Nvidia's solution simulates
what data the sensors in the car (like a camera or LiDAR) would generate
under these conditions. The synthetic sensor data is fed to a computer
which makes decisions as if it were driving on an actual road, sending
commands back to the virtual vehicle.
An interesting emerging trend is using AI itself to help generate more
"realistic" synthetic images to train AI.
Nvidia, for instance, used generative adversarial networks (GANs) to
create fake MRI images with brain tumors.
"Together, these results offer a potential
solution to two of the largest challenges
facing machine learning in medical imaging,
namely the small incidence of pathological
findings, and the restrictions around sharing
of patient data."
NVIDIA RESEARCH PAPER
GANs are being used to "augment" real world data, meaning AI can be
trained with a mix of real world and simulated data to have a larger, more
diverse dataset.
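
Mechanically, that kind of augmentation amounts to concatenating
generated samples with the real ones before training. A toy sketch, with
a stand-in function for whatever GAN or simulator produced the synthetic
examples:

    # Sketch: mix real and synthetic samples into one training set.
    # sample_generator stands in for a trained GAN or simulator (not shown here).
    import numpy as np

    rng = np.random.default_rng(3)

    real_X = rng.normal(loc=0.0, size=(500, 16))
    real_y = rng.integers(0, 2, size=500)

    def sample_generator(n):
        """Hypothetical stand-in for a GAN/simulation producing rare-case examples."""
        return rng.normal(loc=0.5, size=(n, 16)), np.ones(n, dtype=int)

    synth_X, synth_y = sample_generator(2000)       # many more rare cases than reality provides
    X = np.concatenate([real_X, synth_X])
    y = np.concatenate([real_y, synth_y])
    print("training set:", X.shape, "of which synthetic:", len(synth_X))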
Robotics is another field that can greatly benefit from high-fidelity
synthetic data.
Consider a simple task of teaching a robot to grasp something. In 2016,
Google researchers used 14 robotic arms tasked with learning how to
grasp different objects. Data from the failed and successful attempts
from all 14 robots were used to train a neural network to help the robots
"share their experiences" and predict the outcome of a grasp.
In all, it took 800,000 grasp attempts, "equivalent to about 3000
robot-hours of practice" to "see the beginnings of intelligent reactive
behaviors," according to the research team.
But simulations having hundreds of virtual robots practice in a virtual
environment can vastly simplify this process.
One of the challenges is creating realistic objects (like making the
simulation of an apple or pencil look as close to a real-life object as
possible). In 2017, Google researchers used generative adversarial
networks (GANs) to do just that, drastically reducing the amount of real-
world data needed to train the robot.
Early-stage startups like AI.Reverie are developing simulation platforms
to generate datasets for a variety of industries and scenarios.
As the tech scales and synthetic data mimics real-world scenarios more
accurately, it will act as a catalyst for smaller companies that don't have
access to large datasets.
Threatening
REINFORCEMENT LEARNING
From training algorithms to beat world champions in board games to
teaching AI acrobatics, researchers are pushing the boundaries with
reinforcement learning. But the need for massive datasets currently
limits practical applications.
Reinforcement learning gained media attention when Google
DeepMind's AlphaGo defeated a world champion in the complex and
strategic Chinese game of Go.
In a nutshell, the point of reinforcement learning is this: What action do
you need to take to reach your goal and maximize rewards?
Because of this approach, reinforcement learning has particularly taken
off in gaming and robotic simulation.
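A minimal way to see the "act to maximize reward" idea is tabular Q-learning on a toy task: the agent tries actions, observes rewards, and gradually learns which action to take in each state. The 5-cell corridor below (reward at the rightmost cell) is purely illustrative.

# Tabular Q-learning on a tiny corridor: learn to walk right to the reward.
import random

N_STATES, ACTIONS = 5, ["left", "right"]
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def greedy(state):
    # Best-known action, breaking ties randomly.
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

def step(state, action):
    nxt = max(0, state - 1) if action == "left" else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)  # reward only at the goal

for _ in range(300):                    # episodes of trial and error
    state = 0
    for _ in range(100):                # cap episode length
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        nxt, reward = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt
        if state == N_STATES - 1:       # reached the goal
            break

print({s: greedy(s) for s in range(N_STATES - 1)})  # learned policy: all "right"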
DeepMind's AlphaGo was initially trained using supervised learning
(using data from other human players to train the algorithm) and
reinforcement learning (AI playing against itself).
DeepMind later released AlphaGo Zero, which it claimed achieved
super-human performance. It was trained purely based on reinforcement
learning (playing against itself given just a set of rules).
Recently, researchers at UC Berkeley used computer vision and
reinforcement learning to teach algorithms acrobatic skills from YouTube
videos. Computer-simulated characters were able to replicate the moves
in the videos without the need for manually annotating poses.
With reinforcement learning, the simulated characters can apply their
skills to new environments. For example, if a man in a YouTube video did
a backflip on flat ground, the simulated character can adapt the skill to
do a backflip on uneven terrain.
Despite these rapid advances, reinforcement learning adoption hasn't yet
taken off because of how much data it requires compared to supervised
learning, which is the most prevalent AI paradigm today.
"There's a rapid fall off as you go down this
list [of different approaches to learning] as
you think of the economic value created
today Reinforcement Learning is one class
of technology where the PR excitement is
vastly disproportionate relative to the actual
deployments today."

ANDREW NG, EMTECH 2017 PRESENTATION
But research into RL applications is increasing. A keyword search of the
titles and abstracts of US patent applications shows an uptick in activity
over the last 2 years.
Top applicants include Google, IBM, Alphaics (an AI startup), Mobileye
(acquired by Intel), Microsoft, Adobe, and FANUC.
In earnings calls, Baidu actively discussed reinforcement learning,
mentioning it 7 times in its Q1'18 call.
"One highlight in Q1 is that for the first
time, we deployed a powerful reinforcement
learning based infrastructure that can
significantly improve our ability to better
match ads to our users and increase
clickthrough rates and conversions"
BAIDU ON A Q1'18 EARNINGS CALL
NETWORK OPTIMIZATION
From facilitating spectrum sharing to monitoring assets and designing
better antennas, AI is beginning to change telecommunications.
Telecommunication network optimization is a set of techniques to
improve latency, bandwidth, and design or architecture: anything that
augments the flow of data in a favorable way.
For communication service providers, optimization directly translates into
better customer experience.
One of the biggest challenges in telecommunications, apart from
bandwidth constraints, is network latency. Applications like AR/VR on
mobile phones will only function optimally with extremely low lag times.
Apple was recently granted a patent to use machine learning to form
"anticipatory networks," which anticipate what action wireless-enabled
devices like smartphones are likely to perform in the future and download
data packets in advance to reduce latency.
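The general idea can be illustrated with a toy frequency model that learns which action a device usually performs next and prefetches the associated data ahead of time. This is a hypothetical sketch, not the method claimed in Apple's patent.

# Predict the likely next action from past transitions and prefetch for it.
from collections import Counter, defaultdict

history = ["unlock", "email", "news", "unlock", "email", "music",
           "unlock", "email", "news"]           # past action sequence

transitions = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    transitions[prev][nxt] += 1                 # count observed transitions

def predict_next(action):
    # Most frequent follow-up action seen after `action`.
    counts = transitions[action]
    return counts.most_common(1)[0][0] if counts else None

def maybe_prefetch(current_action):
    nxt = predict_next(current_action)
    if nxt == "email":
        return "download new mail headers now"  # fetch data before it is requested
    if nxt == "news":
        return "cache top headlines now"
    return "no prefetch"

print(maybe_prefetch("unlock"))  # -> 'download new mail headers now'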
Another emerging application of machine learning is in spectrum sharing.
The government licenses certain frequencies of the electromagnetic
spectrum to companies like Verizon in an auction.
The Federal Communications Commission (FCC) ruled that the 3.5 to
3.7 GHz band will be shared among different users.
This means carriers can dynamically access shared frequencies based
on availability. This will allow them to scale bandwidth up and down
based on network demand. It will also provide spectrum access to
smaller commercial users that don't license a dedicated spectrum of
their own.
Parts of the 3.5 GHz band are used by the US Navy and other federal
agencies. They are given the first tier of access, and if the spectrum is
not being used by them, then access passes to tier 2 and tier 3 users.
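That tiered priority rule can be sketched as a simple assignment function: federal users get the channel first, then priority licensees, then general users. The snippet below is illustrative only and is not a real Spectrum Access System.

# Toy three-tier channel assignment: highest-priority waiting user wins.
TIERS = ["federal", "priority_access", "general_authorized"]  # tier 1, 2, 3

def assign_channel(requests_by_tier):
    """requests_by_tier maps tier name -> list of requesting users."""
    for tier in TIERS:                 # walk down the priority ladder
        users = requests_by_tier.get(tier, [])
        if users:
            return tier, users[0]      # grant the channel to the highest tier waiting
    return None, None                  # channel stays idle

print(assign_channel({"general_authorized": ["carrier_x"]}))
print(assign_channel({"federal": ["navy_radar"], "general_authorized": ["carrier_x"]}))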
Companies like Federated Wireless provide a Spectrum Access System
(SAS) that dynamically assigns spectrum among different tiers of users
and ensures there's no interference with federal signals, and they leverage
machine learning to do so.
In 2018, Federated Wireless was granted a patent to use ML to classify
radio signals into different categories, such as federal signals, noise
signals, and unknown signals. It does this while obscuring features of
federal signals (so that hackers never gain access to specific features or
weaknesses in military/defense signals).
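One way to picture this is a classifier that labels incoming radio captures as federal, noise, or unknown from coarse summary statistics, so the raw (sensitive) federal waveforms never have to be exposed. The sketch below uses synthetic data and generic features; it is not Federated Wireless's patented method.

# Classify radio captures from coarse, non-reversible summary features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
LABELS = ["federal", "noise", "unknown"]

def summarize(waveform):
    # Coarse features instead of the raw signal itself.
    return [np.mean(np.abs(waveform)), np.std(waveform), np.max(np.abs(waveform))]

# Synthetic training captures for each class.
waveforms = {"federal": rng.normal(0, 3.0, (200, 256)),
             "noise":   rng.normal(0, 0.5, (200, 256)),
             "unknown": rng.normal(1, 1.5, (200, 256))}

X = np.array([summarize(w) for label in LABELS for w in waveforms[label]])
y = np.array([label for label in LABELS for _ in range(200)])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([summarize(rng.normal(0, 3.0, 256))]))  # likely 'federal'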
DARPA wants to eventually move away from SAS players that facilitate
spectrum sharing to an automated ML-based system. To this end, it
launched the Spectrum Collaboration Challenge in 2016. Participants in
the competition have to use ML to come up with unique ways for radio
networks to "autonomously collaborate to dynamically determine how
the radio frequency (RF) spectrum should be used moment to moment."
DARPA also launched a Radio Frequency Machine Learning Systems
(RFMLS) program in 2017. Similar to the Federated Wireless patent
above, DARPA wants to use ML to differentiate between different types
of signals, especially to spot malicious signals that attempt to hack into
end devices (such as IoT devices).
Telecom players are also preparing to integrate AI-based solutions in the
next generation of wireless technology, known as 5G.
Samsung acquired AI-based network and service analytics startup
Zhilabs in preparation for the 5G era.
Samsung said in a press release that AI software will be used to "analyze
user traffic, classify applications being used, and improve overall service
quality."
Qualcomm sees AI edge computing as a crucial component of its 5G
plans (edge computing reduces bandwidth constraints and frequent
communication with the cloud, a main focus area for 5G).
Early research papers are also emerging that explore the use of neural
nets to find optimal antenna designs for telecommunication networks.
AUTONOMOUS VEHICLES
Despite a substantial market opportunity for autonomous vehicles,
the timeline for full autonomy is still unclear.
A number of big tech companies and startups are competing intensely in
the autonomous vehicles space.
Google has made a name for itself in the auto space. Its self-driving
project Waymo is the first autonomous vehicle developer to deploy a
commercial fleet of AVs.
Investors remain confident in companies developing the full autonomous
driving stack, pouring hundreds of millions of dollars into GM's Cruise
Automation ($750M from Honda in October 2018 and $900M from
SoftBank the previous May) and Zoox ($500M in July 2018). Other startups
here include Drive.ai, Pony.ai, and Nuro.
China, in particular, has ramped up its AV efforts. The Chinese science
ministry announced last year that the nation's first wave of open AI
platforms will rely heavily on Baidu for autonomous driving.
In April 2017, Baidu announced Apollo, a one-of-a-kind open platform for
autonomous driving solutions, roping in partners from across the globe.
As with other open-source platforms, the idea is to accelerate AI and
autonomous driving research by opening it up to contributions from
other players in the ecosystem. Making the source code available to
everyone allows companies to build off of existing research instead of
starting from scratch.
Alibaba also recently conducted test drives of its autonomous vehicle.
But interestingly, just over a year ago, Alibaba was skeptical about the
long-term commercial opportunity of autonomous vehicles, mentioning
in an earnings call that "nobody has figured out the long-term economic
model for this, but people are doing it because there is some very
interesting artificial intelligence-related technology" involved in building
autonomous vehicles.
Even with hesitation surrounding the future of the technology,
automakers are still working full steam ahead. The market is projected to
reach roughly $80B by 2025.
Some applications, such as logistics and fulfillment, could see earlier
adoption of fully self-driving vehicles.
Autonomous logistics, specifically autonomous last-mile delivery, is
top-of-mind for retailers and fulfillment companies, and may be the first
area where we see full autonomy. Self-driving vehicles could help tackle
the costly and arduous challenge of delivering goods over the last mile,
which can add up to nearly a third of an item's total delivery cost.

States like Arizona which have liberal laws for autonomous vehicle
deployment are emerging as test beds. In June 2018, robotics startup
Nuro partnered with Kroger, one of the largest brick-and-mortar grocers
in the US, to deliver groceries. Nuro is designed to drive on neighborhood
roads, not just sidewalks like other delivery robot and vehicle prototypes
that have been developed.


In the restaurant space, pizza companies like Domino's and Pizza Hut
have been at the forefront of testing out autonomous vehicles. Ford is
piloting autonomous delivery in Miami with pizza, groceries, and other
goods. The OEM partnered with over 70 businesses, including Domino's,
in early 2018.
CROP MONITORING
Three types of crop monitoring are taking off in agriculture:
On-ground, aerial, and geospatial.
The precision agriculture drone market is expected to reach $2.9B in 2021.
Drones can map fields for farmers, monitor moisture content using
thermal imaging, identify pest-infested crops, and spray pesticides.
Startups are focusing on adding a layer of analytics to data captured by
3rd party drones.
Taranis, for example, uses 3rd party Cessna airplanes to do this. Taranis
also acquired agtech-AI startup Mavrx Imaging last year, which was
developing ultra high resolution imaging tech to scout and monitor fields.
Taranis uses AI to stitch together images of the field and also to
identify potential issues with crops. John Deere, a farming equipment
manufacturer, tapped the startup, along with a few others, to collaborate
on potential solutions.
Deere has been reinventing itself with AI. It bought Blue River Technology,
an agricultural equipment company leveraging computer vision,
for $300M+. Among other things, Blue River was working on "smart
weeding" and "see-and-spray" solutions.
This type of individual crop monitoring can become a major disruptor for
the agricultural pesticide industry. If on-the-ground farming equipment
gets smarter with computer vision and sprays only individual crops as
needed, it will reduce the demand for non-selective weed killers that kill
everything in the vicinity. Precision spraying would also mean a reduction
in the amount of herbicide and pesticide used.
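The per-plant decision loop can be sketched as: detect each plant, classify it, and open a nozzle only over weeds. The rule-based classifier below stands in for a trained vision model and is not Blue River's system.

# "See-and-spray" sketch: spray only where a plant is classified as a weed.
from dataclasses import dataclass

@dataclass
class Plant:
    x_cm: float           # position along the spray boom
    leaf_area_cm2: float  # toy feature a vision model might output
    greenness: float

def classify(plant):
    # Stand-in for a trained computer-vision classifier.
    return "weed" if plant.leaf_area_cm2 < 20 and plant.greenness < 0.5 else "crop"

def spray_plan(detected_plants):
    # Fire individual nozzles only over plants classified as weeds.
    return [plant.x_cm for plant in detected_plants if classify(plant) == "weed"]

row = [Plant(10, 45, 0.8), Plant(38, 12, 0.3), Plant(61, 15, 0.4), Plant(90, 50, 0.7)]
print(spray_plan(row))  # -> [38, 61]: herbicide used only where needed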
Beyond the field, using computer vision to analyze satellite images
provides a macro-level understanding of agricultural practices.
Geo-spatial data can provide information on crop distribution patterns
across the globe and the impact of weather changes on agriculture.
Cargill invested in Descartes Labs, which uses satellite data to develop
a forecasting model for crops like soybean and corn. This application of
computer vision has also piqued the interest of commodities traders and
government agencies. DARPA is working with Descartes to forecast food
security.
Transitory
CYBER THREAT HUNTING
Reacting to cyber attacks is no longer enough. Proactively
"hunting" for threats using machine learning is gaining momentum in
cybersecurity.
Advancements in computing power and algorithms are turning previously
theoretical hacks into real security problems.
According to the Breach Level Index, a global database of public data
breaches, 4.5B data records were compromised worldwide in H1'18 (for
reference, the figure was 2.6B for all of 2017).
Unlike other industrial applications of AI, cyber-defense is a
cat-and-mouse game between hackers and security personnel, both
leveraging advances in machine learning to up their game and keep
ahead of the other.
Threat hunting, as the name suggests, is the practice of proactively
seeking out malicious activity instead of merely reacting to alerts or a
breach after it has occurred.
Hunting begins with a hypothesis about potential weaknesses in the
network, which is then tested with manual and automated tools in a
continuous, iterative process. The sheer volume of data in cybersecurity
makes machine learning an inseparable part of the process.
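One automated step in such a hunt might score hosts for unusual behavior with an unsupervised model and surface the outliers for a human analyst to investigate. The sketch below uses synthetic per-host features and an off-the-shelf isolation forest purely for illustration.

# Flag anomalous hosts for an analyst with an unsupervised outlier model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)

# Per-host features: [logins per hour, bytes out (MB), distinct ports touched]
normal_hosts = rng.normal([5, 50, 10], [2, 15, 3], size=(500, 3))
suspects = np.array([[40, 900, 120], [3, 45, 9]])  # one odd host, one normal-looking

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_hosts)
scores = model.decision_function(suspects)         # lower = more anomalous

for host, score in zip(["host-a", "host-b"], scores):
    flag = "investigate" if score < 0 else "looks normal"
    print(host, round(float(score), 3), flag)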
A quick search on LinkedIn for "threat hunters" shows 70+ job listings
in the United States from organizations such as Microsoft, Raytheon,
Verizon, Booz Allen Hamilton, and Dow Jones.
While this reflects an emerging demand for threat hunters across diverse
business types, it also indicates that the title itself is still niche.
"Results from the SANS 2018 Threat
Hunting Survey show that, for many
organizations, hunting is still new and
poorly defined from a process and
organizational standpoint The survey
of 600 respondents reveals that most
organizations that are hunting tend to be
larger enterprises or those that have been
heavily targeted in the past."
- SANS 2018 SURVEY SPONSORED BY IBM
As the SANS 2018 survey suggests, the stakes are higher for larger
enterprises whose differentiating factor is their access to a treasure
trove of data.
Amazon, for instance, faces mounting pressure from AWS customers
to secure the cloud. Misconfigured AWS servers have resulted in data
breaches at customers like Verizon, WWE, Dow Jones, and Accenture.
Amazon acquired threat hunting startup Sqrrl to develop a new product
for hunting hackers on AWS clients' accounts.
Cylance, another AI startup with a focus on threat hunting, was acquired
by BlackBerry last year.
The more spread out a network becomes, the more vulnerable it is.
Threat hunting is likely to gain further traction; however, it does come
with its own set of challenges, such as dealing with an ever-changing,
dynamic environment and reducing false positives.
CONVERSATIONAL AI
For many enterprises, chatbots became synonymous with AI, but the
reality isn't keeping up with the promise.
Recently, Google was in hot water over its conversational AI feature,
Duplex.
Duplex can make phone calls and reservations on behalf of the user, but
communicates like a real human (complete with "umms" and pauses).
It sparked ethical concerns over whether or not Duplex needs to identify
itself as a conversational agent when speaking to real people.
Google added Duplex to its new phone, the Pixel 3. It has turned the Pixel 3
into an AI powerhouse, including a "Call Screen" option that allows the
Google Assistant to screen calls for spam.
Google has been applying to patent the interactions between two
conversational agents since 2014. The most recent application,
"Conversational Agent Response Determined Using A Sentiment," was
filed in April 2018.
Despite FAMGA and China's big tech companies (Baidu, Alibaba, and
Tencent) focusing heavily on this space, conversational agents, both
voice- and text-based, are more feasible in some applications
than others.
One of the most widespread applications of chatbots is in customer
service. Bots form the first layer of interaction with the user (note: not all
bots use natural language processing) and hand off queries to a human
based on the level of complexity.
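A minimal sketch of that first layer looks like the snippet below: the bot answers routine queries directly and hands anything complex or urgent to a human agent. Keyword matching stands in for a real NLP intent model.

# First-layer customer-service bot: answer simple intents, escalate the rest.
ANSWERS = {
    "opening hours": "We're open 9am-6pm, Monday to Saturday.",
    "reset password": "Use the 'Forgot password' link on the sign-in page.",
}
ESCALATE = ["complaint", "urgent", "chest pain", "claim denied"]

def respond(message):
    text = message.lower()
    if any(word in text for word in ESCALATE):       # too complex/urgent for the bot
        return "HANDOFF: routing you to a human agent."
    for intent, answer in ANSWERS.items():           # simple first-layer answers
        if intent in text:
            return answer
    return "HANDOFF: routing you to a human agent."  # unknown query -> human

print(respond("What are your opening hours?"))
print(respond("This is urgent, my claim was denied"))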
This is still challenging for applications like health and insurance, where
triaging (gauging the urgency of a situation) is complex.
Similarly, shopping through voice-based conversations alone, without a
visual cue, is challenging.
Although analysts and CPG brands, from Sephora and Nestle to
Capgemini, have talked up voice shopping as the next big thing in retail, it
hasn't taken off. With the exception of reordering specific items, it fails to
provide key customer experiences that drive online commerce.
Mental healthcare is another area where chatbots seem like a potentially
disruptive force.
The high costs of mental health therapy and the appeal of round-the-clock
availability are giving rise to a new era of AI-based mental health bots.
Early-stage startups are focused on using cognitive behavioral therapy
(changing negative thoughts and behaviors) as a conversational
extension of the many mood-tracking and digital diary wellness apps on
the market.
But mental health is a spectrum. There is variability in symptoms and
subjectivity in analysis, and treatment requires a high level of emotional
cognition and human-to-human interaction.
This makes areas like mental healthcare, despite the upside of cost
and accessibility, particularly hard for algorithms.
DRUG DISCOVERY
With AI biotech startups emerging, traditional pharma companies are
looking to AI SaaS startups for innovative solutions to the long drug
discovery cycle.
In May 2018, Pfizer entered into a strategic partnership with XtalPi,
an AI startup backed by tech giants like Tencent and Google, to
predict pharmaceutical properties of small molecules and develop
"computation-based rational drug design."
But Pfizer is not alone.
Top pharmaceutical companies like Novartis, Sanofi, GlaxoSmithKline,
Amgen, and Merck have all announced partnerships in recent months
with AI startups to discover new drug candidates for a range of diseases,
from oncology to cardiology.
"The biggest opportunity where we are still
in the early stage is to use deep learning and
artificial intelligence to identify completely
new indications, completely new medicines.
"
BRUNO STRIGINI, FORMER CEO OF NOVARTIS ONCOLOGY
Interest in the space is driving the number of equity deals to AI drug
discovery startups: 20 as of Q2'18, equal to all of 2017.
While biotech AI companies like Recursion Pharmaceuticals are
investing in both AI and drug R&D, traditional pharma companies are
partnering with AI SaaS startups.
Although many of these startups are still in the early stages of funding,
they already boast a roster of pharma clients.
There are few measurable metrics of success in the drug formulation
phase, but pharma companies are betting millions of dollars on AI
algorithms to discover novel therapeutic candidates and transform the
drawn-out drug discovery process.
The CB Insights platform has the underlying data included in this report.