Tag Archives: artificial intelligence

Finkel’s Law: robots won’t replace us because we still need that human touch

The Conversation

The rise of the ChiefBot. Wes Mountain/The Conversation, CC BY-ND

Alan Finkel, Office of the Chief Scientist

By now, you’ve probably been warned that a robot is coming for your job. But rather than repeat the warning, I’ve decided to throw down a challenge: man against machine.

First, I’ll imagine the best possible robot version of an Australian Chief Scientist that technologists could build, based on the technologies available today or in the foreseeable future. Call it “ChiefBot”.

Then I’ll try to persuade you that humanity still has the competitive edge.


Read more: The future of artificial intelligence: two experts disagree


Let’s begin with the basic tasks our ChiefBot would be required to do.

First, deliver speeches. Easy. There are hundreds of free text-to-voice programs that wouldn’t cost the taxpayer a cent.

Second, write speeches. Again, easy. Google has an artificial intelligence (AI) system that writes poetry. A novel by a robot was shortlisted in a Japanese literary competition. Surely speeches can’t be so hard.

Third: scan the science landscape and identify trends. Watson, developed by IBM, can already do it. Watson is not just history’s most famous Jeopardy! champion: he’s had more careers than Barbie, from talent scouting for professional sport to scanning millions of pages of scientific reports to diagnose and treat disease.

Fourth, and finally: serve on boards and make complex decisions.

ChiefBot wouldn’t be the first robot to serve in that capacity. For example, an Australian company now sells AI software that can advise company boards on financial services. There’s a company in Hong Kong that has gone one step further and actually appointed an algorithm as a director.

So, there’s ChiefBot. I admit he’s pretty good. We have to assume that he will capture all the benefits of ever-advancing upgrades – unlike me.

But let’s not abandon our faith in humanity without looking again at the selection criteria for the job, and the capabilities on the human resume.

Man vs machine

Start with the task we’re engaged in right now: communicating in fluent human.

We’re sharing abstract ideas through words that we choose with an understanding of their nuance and impact. We don’t just speak in human, we speak as humans.

A robot that says that science is fun is delivering a line. A human who says that science is fun is telling you something important about being alive.

That’s knowledge that ChiefBot will never have, and the essence of the Chief Scientist’s job. Chalk that up to Team Human.

Here’s another inbuilt advantage we take for granted: as humans we are limited by design. We are bound in time: we die. We are bound in space: we can’t be in more than one place at a time.

That means that when I speak to an audience, I am giving them something exclusive: a chunk of my time. It’s a custom-made, one-off, 100% robot-free delivery, from today’s one-and-only Australian Chief Scientist.

True, I now come in digital versions, through Twitter and Facebook and other platforms, but the availability of those tools hasn’t stopped people from inviting me to speak in person. Digital Alan seems to increase the appetite for human Alan, just as Spotify can boost the demand for a musician’s live performances.

We see the same pattern repeated across the economy. Thanks to technology, many goods and services are cheaper, better and more accessible than ever before. We like our mass-produced bread, and our on-tap lectures and our automated FitBit advice.

But automation hasn’t killed the artisan bakery. Online courses haven’t killed the bricks-and-mortar university. FitBit hasn’t killed the personal trainer. On the contrary, they’re all booming, alongside their machine equivalents.

Finkel’s Law

Call it Finkel’s Law: where there’s a robot, we’ll see renewed appreciation for the humans in the robot-free zone. Team Human, two goals up.

The real Chief Scientist, Alan Finkel, dancing with a robot. Australia’s Chief Scientist, Author provided

Let me suggest a third advantage: you and I can be flexible and effective in human settings. In our world, AI systems are the interlopers. We are the incumbents. It’s the robots who have to make sense of us. And we make it extraordinarily hard.

Think, for example, of a real estate negotiation. We could rationalise it as an exchange of one economic asset for another. In reality, we know that our actions will be swayed by sentiment, insecurity and peer pressure.

In that swirl of reason and emotion, the art of the real estate agent is to anticipate, pivot and nudge.

The human real estate agent is the package deal. She can harness AI to sharpen her perceptions and overcome cognitive biases. Then she can hit the human buttons to flatter, deflect or persuade.

That human touch is hard to replicate, and even harder to reduce to a formula and scale. Team Human, three goals to nil.

Here’s a fourth argument for the win. We humans have learned the habit of civilisation. Let me illustrate this point by a story.

The human future

A few years ago, some researchers set out to investigate the way that people interact with robots. They sent out a small robot to patrol the local mall.

That robot had a terrible time – and the villains of the story were children. They kicked him, bullied him, smacked him in the head and called him a string of indelicate names.

The point is not that the children were violent. The point is that the adults were not. They restrained whatever primitive impulse they might have felt in childhood to smack something smaller and weaker in the head, because they had absorbed the habit of living together. We call it civilisation.


Read more: Surgeons admit to mistakes in surgery and would use robots if they reduced the risks


If we want artificial intelligence for the people, of the people and by the people, we’ll need every bit of that civilising instinct we’ve honed over thousands of years.

We’ll need humans to tame the machines to our human ends. I’d say that’s Team Human, in a walkover.

Together, these points suggest to me that humanity has a powerful competitive edge. We can coexist with our increasingly capable machines and we can make space for the full breadth of human talents to flourish.

But if we want that future – that human future – we have to want it, claim it and own it. Take it from a human Chief Scientist: we’re worth it.


This article is based on a speech Alan Finkel delivered to the Institute of Electrical and Electronics Engineers (IEEE) international conference in Sydney earlier this month.

Alan Finkel, Australia’s Chief Scientist, Office of the Chief Scientist

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.


What does it mean to think and could a machine ever do it?

The Conversation

Peter Ellerton, The University of Queensland

The idea of a thinking machine is an amazing one. It would be like humans creating artificial life, only more impressive because we would be creating consciousness. Or would we?

It’s tempting to think that a machine that could think would think like us. But a bit of reflection shows that’s not an inevitable conclusion.

To begin with, we’d better be clear about what we mean by “think”. A comparison with human thinking might be intuitive, but what about animal thinking? Does a chimpanzee think? Does a crow? Does an octopus?

There may even be alien intelligences that we would not recognise as such because they are so radically different from us. Perhaps we could pass each other in close proximity, each unaware that the other existed, having no way to engage.

Certainly animals other than humans have cognitive abilities geared towards understanding tools and causal relationships, towards communication, and even towards recognising directed and purposeful thinking in others. We’d probably consider any or all of that thinking.

And let’s face it, if we built a machine that did all the above, we’d be patting ourselves on the back and saying “mission accomplished”. But could a machine go a step further and be like a human mind? What’s more, how would we know if it did?

Just because a computer acts like it has a mind, it doesn’t mean it must have one. It might be all show and no substance, an instance of a philosophical zombie.

It was this notion that motivated British codebreaker and mathematician Alan Turing to come up with his famous “Turing test”, in which a computer interacts with a human through a screen and, more often than not, leaves the human unsure whether it is a computer. For Turing, all that mattered was behaviour; there was no computational “inner life” to be concerned about.

But this inner life matters to some of us. The philosopher Thomas Nagel said that there was “something that it is like” to have conscious experiences. There’s something that it is like to see the colour red, or to go water skiing. We are more than just our brain states.

Could there ever be “something that it’s like” to be a thinking machine? In an imagined conversation with the first intelligent machine, a human might ask “Are you conscious?”, to which it might reply, “How would I know?”.

Is thinking just computation?

Under the hood of computer thinking, as we currently imagine it, is sheer computation. It’s about calculations per second and the number of potential computational pathways.

How can meat think?

But we are not at all sure that thinking or consciousness is a function of computation, at least the way a binary computer does it. Could thinking be more than just computation? What else is needed? And if it is all about computation, why is the human brain so bad at it?

Most of us are flat out multiplying a couple of two-digit numbers in our heads, let alone performing trillions of calculations a second. Or is there some deep processing of data that goes on below our awareness that ultimately results in our arithmetically impaired consciousness (the argument of so-called Strong AI)?

Generally speaking, what computers are good at, like raw data manipulation, humans are quite bad at; and what computers are bad at, such as language, poetry, voice recognition, interpreting complex behaviour and making holistic judgements, humans are quite good at.

If the analogy between human and computer “thinking” is so bad, why expect computers to eventually think like us? Or might computers of the future lose their characteristic arithmetical aptitude as the full weight of consciousness emerges?

Belief, doubt and values

Then we have words like “belief” and “doubt” that are characteristic of human thinking. But what could it possibly mean for a computer to believe something, apart from the trivial meaning that it acted in ignorance of the possibility that it could be wrong? In other words, could a computer have genuine doubt, and then go ahead and act anyway?

When it comes to questions of value, questions about what we think is important in life and why, it’s interesting to consider two things. The first is whether a thinking computer could be capable of attributing value to anything at all. The second is, if it could attribute value to anything, what would it choose? We’d want to be a bit careful here, it seems, even without getting into the possibility of mechanical free will.

It would be nice to program into computers a human-style value system. But, on the one hand, we aren’t quite sure what that is, or how it could be done, and, on the other hand, if computers started programming themselves they may decide otherwise.

While it’s great fun to think about all this, we should spend a bit of time trying to understand what we want thinking computers to be. And maybe a bit more time should be spent trying to understand ourselves before we branch out.

Peter Ellerton, Lecturer in Critical Thinking, The University of Queensland

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.


To stop the machines taking over we need to think about fuzzy logic

The Conversation

By Simon James, Deakin University

Amid all the dire warnings that machines run by artificial intelligence (AI) will one day take over from humans, we need to think more about how we program them in the first place.

The technology may be too far off to seriously entertain these worries – for now – but much of the distrust surrounding AI arises from misunderstandings of what it means to say a machine is “thinking”.

One of the current aims of AI research is to design machines, algorithms, input/output processes or mathematical functions that can mimic human thinking as much as possible.

We want to better understand what goes on in human thinking, especially when it comes to decisions that cannot be justified other than by drawing on our “intuition” and “gut-feelings” – the decisions we can only make after learning from experience.

Consider the human who hires you after first comparing you to other job applicants in terms of your work history, skills and presentation. This human-manager is able to make a decision and identify the successful candidate.

If we can design a computer program that takes exactly the same inputs as the human-manager and can reproduce their outputs, then we can make inferences about what the human-manager really values, even if he or she cannot articulate their decision on who to appoint other than to say “it comes down to experience”.
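As a minimal illustration (the data, feature names and weights below are invented for the sketch, not drawn from any real study), emulating the human-manager could be as simple as fitting a linear model to their past hire/reject decisions and reading the fitted weights as a rough proxy for what they value:

```python
# Hypothetical sketch: infer what a manager "really values" from past decisions.
import numpy as np

# Each row: [years_experience, skills_score, presentation_score]; 1 = hired, 0 = rejected.
X = np.array([[8, 7, 6],
              [2, 9, 8],
              [10, 4, 5],
              [1, 6, 9],
              [7, 8, 7],
              [3, 3, 4]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0], dtype=float)

# Least squares on standardised features gives crude "importance" weights.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
weights, *_ = np.linalg.lstsq(Xs, y - y.mean(), rcond=None)

for name, w in zip(["experience", "skills", "presentation"], weights):
    print(f"{name}: {w:+.2f}")
# A large positive weight on experience would match a manager who can only say
# "it comes down to experience".
```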

This kind of research is being carried out today and applied to understand risk-aversion and risk-seeking behaviour of financial consultants. It’s also being looked at in the field of medical diagnosis.

These human-emulating systems are not yet being asked to make decisions, but they are certainly being used to help guide human decisions and reduce the level of human error and inconsistency.

Fuzzy sets and AI

One promising area of research is to utilise the framework of fuzzy sets. Fuzzy sets and fuzzy logic were formalised by Lotfi Zadeh in 1965 and can be used to mathematically represent our knowledge pertaining to a given subject.

In everyday language what we mean when accusing someone of “fuzzy logic” or “fuzzy thinking” is that their ideas are contradictory, biased or perhaps just not very well thought out.

But in mathematics and logic, “fuzzy” is a name for a research area that has quite a sound and straightforward basis.

The starting point for fuzzy sets is this: many decision processes that can be managed by computers traditionally involve truth values that are binary: something is true or false, and any action is based on the answer (in computing this is typically encoded by 0 or 1).

For example, our human-manager from the earlier example may say to human resources:

  • IF the job applicant is aged 25 to 30
  • AND has a qualification in philosophy OR literature
  • THEN arrange an interview.

This information can all be written into a hiring algorithm, based on true or false answers, because an applicant either is between 25 and 30 or is not, and they either have the qualification or they do not.
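As a minimal sketch (the function name is invented), the crisp version of the rule reduces to ordinary boolean tests:

```python
def arrange_interview(age: int, qualifications: set) -> bool:
    """Crisp (true/false) encoding of the manager's rule: every test is binary."""
    right_age = 25 <= age <= 30
    right_degree = "philosophy" in qualifications or "literature" in qualifications
    return right_age and right_degree

print(arrange_interview(27, {"philosophy"}))  # True
print(arrange_interview(31, {"literature"}))  # False: 31 falls just outside the range
```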

But what if the human-manager is somewhat more vague in expressing their requirements? Instead, the human-manager says:

  • IF the applicant is tall
  • AND attractive
  • THEN the salary offered should be higher.

The problem HR faces in encoding these requests into the hiring algorithm is that it involves a number of subjective concepts. Even though height is something we can objectively measure, how tall should someone be before we call them tall?

Attractiveness is also subjective, even if we only account for the taste of the single human-manager.

Grey areas and fuzzy sets

In fuzzy sets research we say that such characteristics are fuzzy. By this we mean that the degree to which something belongs to a set, or the degree to which a statement is true, can increase gradually from 0 to 1 over a given range of values.

One of the hardest things in any fuzzy-based software application is how best to convert observed inputs (someone’s height) into a fuzzy degree of membership, and then further establish the rules governing the use of connectives such as AND and OR for that fuzzy set.

To this day, and likely for years or decades into the future, the rules for this transition are human-defined. For example, to specify how tall someone is, I could design a function that says a 190cm person is tall (with a truth value of 1) and a 140cm person is not tall (or tall with a truth value of 0).

Then from 140cm, for every increase of 5cm in height the truth value increases by 0.1. So a key feature of any AI system is that we, normal old humans, still govern all the rules concerning how values or words are defined. More importantly, we define all the actions that the AI system can take – the “THEN” statements.
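A minimal sketch of that rule in code, using exactly the numbers above; the min/max interpretation of the AND and OR connectives is a common convention in fuzzy logic, assumed here rather than prescribed by the article:

```python
def tall(height_cm: float) -> float:
    """Membership in the fuzzy set 'tall': 0 at 140cm, 1 at 190cm, rising linearly."""
    return min(max((height_cm - 140) / 50, 0.0), 1.0)

def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)  # one common choice of AND for fuzzy truth values

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)  # and the matching choice of OR

print(tall(140))  # 0.0
print(tall(165))  # 0.5 (every 5cm above 140cm adds 0.1)
print(tall(190))  # 1.0

# "tall AND attractive", with attractiveness supplied as a human-judged degree
print(fuzzy_and(tall(178), 0.6))  # 0.6
```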

Human–robot symbiosis

An area called computing with words takes the idea further by aiming for seamless communication between a human user and an AI computer algorithm.

For the moment, we still need to come up with mathematical representations of subjective terms such as “tall”, “attractive”, “good” and “fast”. Then we need to design a function for combining such comments or commands, followed by another mathematical definition for turning the result we get back into an output like “yes he is tall”.
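As a rough, hypothetical sketch of that last step (the thresholds and phrases below are invented purely for illustration), turning a fuzzy degree back into words might look like this:

```python
def tall(height_cm: float) -> float:
    # same membership function as in the earlier sketch
    return min(max((height_cm - 140) / 50, 0.0), 1.0)

def degree_to_words(degree: float, term: str) -> str:
    """Map a membership degree back into a rough linguistic answer."""
    if degree >= 0.8:
        return f"yes, he is {term}"
    if degree >= 0.5:
        return f"he is fairly {term}"
    if degree > 0.2:
        return f"he is not very {term}"
    return f"no, he is not {term}"

print(degree_to_words(tall(185), "tall"))  # "yes, he is tall"
```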

In conceiving the idea of computing with words, researchers envisage a time when we might have more access to base-level expressions of these terms, such as readings of brain activity when we use the term “tall”.

This would be an amazing leap, although mainly in terms of the technology required to observe such phenomena (the number of neurons in the brain, let alone synapses between them, is somewhere near the number of galaxies in the universe).

Even so, designing machines and algorithms that can emulate human behaviour to the point of mimicking communication with us is still a long way off.

In the end, any system we design will behave as it is expected to, according to the rules we have designed and the program that governs it.

An irrational fear?

This brings us back to the big fear of AI machines turning on us in the future.

The real danger is not the birth of genuine artificial intelligence, the fear that we will somehow manage to create a program that can become self-aware, such as HAL 9000 in the movie 2001: A Space Odyssey or Skynet in the Terminator series.

The real danger is that we make errors in encoding our algorithms or that we put machines in situations without properly considering how they will interact with their environment.

These risks, however, are the same as those that come with any human-made system or object.

So if we were to entrust, say, the decision to fire a weapon to AI algorithms (rather than just the guidance system), then we might have something to fear.

Not a fear that these intelligent weapons will one day turn on us, but rather that we programmed them – given a series of subjective options – to decide the wrong thing and turn on us.

Even if there is some uncertainty about the future of “thinking” machines and what role they will have in our society, a sure thing is that we will be making the final decisions about what they are capable of.

When programming artificial intelligence, the onus is on us (as it is when we design skyscrapers, build machinery, develop pharmaceutical drugs or draft civil laws), to make sure it will do what we really want it to.

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.



Searle’s Chinese Room
