“Are the robots going to take our jobs?”
These are the two fears.
How do you address those fears?
I usually talk about AI to non-AI people
— businesspeople, actually. A very simple way to explain AI is that
artificial narrow intelligence [ANI] was
the beginning of AI. Why is it narrow?
Because you program a machine to
do one thing and one thing only — very
well, but one thing only. We have had AI
functioning for us and serving us for
the last 20 or 30 years. If you ask yourself
where it is in your life: well, your junk-mail
filter is an ANI. When you
land at an airport, the gate selected
for your aircraft is actually chosen
by an AI system. It's not a human being
selecting Gate 13 for you.
We live surrounded by machinery
that has been programmed to run by
itself and make decisions by itself — basically, very simple decisions. No one has
a problem with any of this, because
the value it provides to society is clear. These are jobs
that a human being would be completely
unable to do, because of the massive
computing power your brain would
need, right?
No one has a problem with ANI.
Where society starts to have a problem
is when no one knows how to explain
how we get to the next level of AI —
which is artificial general intelligence.
That is pretty much what we in the
scientific community call the singularity moment: the day we are
able to build a machine that runs AI
software and is able to think like
a human brain. We're not there yet, simply
because we are not that advanced.
This is my work. There are two
camps. There are people who say, “Let’s develop
AI as fast as we can and with the best
abilities that we can. We don’t know
what we’re going to use it for, but we’re
just going to go ahead.” That would be
Google’s standpoint. Then there are
people like Professor [Stephen] Hawking,
Elon Musk, and Bill Gates, who say,
“We need to create a digital future with
a lot of AI but still fit for humans to live
in. After all, this is our life.” How do you
create that? Well, you create that if you
separate what makes a human special
and what makes an AI system special.
You’re going to say creativity separates
the two, yes?
It’s exactly that. It’s the right brain. The
right brain is not just creativity; it’s
how human beings interpret the data
they can see. For example, you can
teach a machine to be intelligent, but
you cannot program a machine to have
common sense.
You can teach a machine to do narrow tasks,
like ANI, but you cannot teach a machine
tacit knowledge. That’s one thing that,
when I mention it at conferences or
explain it to my clients, they get. Tacit
knowledge is knowledge that you, as a human
being, accumulate over time from
doing things, from having lived through
situations, from conceptions and acquired skills,
all of which combined give you that superior knowledge. Tacit knowledge cannot
be put into words, which is why it cannot
be programmed into a machine.
Tacit knowledge is, for example, salespeople who have been selling for 10 or 20
years, who go into a meeting and within
seconds know whether the client is going
to buy or not. Professional
athletes behave and move as if they
had eyes in the back of their heads. That’s tacit
knowledge. It’s knowledge that cannot
be transferred to another human being.
This is one of the reasons why
machines will never have tacit knowledge. Machines are also very, very bad
at being human. They’re very good at
simulating being human. For example,
if you were to put a human and a
machine into the same contextual
decision-making scenarios, the human will
draw from his or her tacit knowledge,
whereas the machine can only draw upon
what it has been programmed to think.