Tag Archives: AI

Finkel’s Law: robots won’t replace us because we still need that human touch

The Conversation

The rise of the ChiefBot. Wes Mountain/The Conversation, CC BY-ND

Alan Finkel, Office of the Chief Scientist

By now, you’ve probably been warned that a robot is coming for your job. But rather than repeat the warning, I’ve decided to throw down a challenge: man against machine.

First, I’ll imagine the best possible robot version of an Australian Chief Scientist that technologists could build, based on the technologies available today or in the foreseeable future. Call it “ChiefBot”.

Then I’ll try to persuade you that humanity still has the competitive edge.


Read more: The future of artificial intelligence: two experts disagree


Let’s begin with the basic tasks our ChiefBot would be required to do.

First, deliver speeches. Easy. There are hundreds of free text-to-voice programs that wouldn’t cost the taxpayer a cent.

Second, write speeches. Again, easy. Google has an artificial intelligence (AI) system that writes poetry. A novel by a robot was shortlisted in a Japanese literary competition. Surely speeches can’t be so hard.

Third: scan the science landscape and identify trends. Watson, developed by IBM, can already do it. Watson is not just history’s most famous Jeopardy! champion: he’s had more careers than Barbie, from talent scouting for professional sport to scanning millions of pages of scientific reports to diagnose and treat disease.

Fourth, and finally: serve on boards and make complex decisions.

ChiefBot wouldn’t be the first robot to serve in that capacity. For example, an Australian company now sells AI software that can advise company boards on financial services. There’s a company in Hong Kong that has gone one step further and actually appointed an algorithm as a director.

So, there’s ChiefBot. I admit he’s pretty good. We have to assume that he will capture all the benefits of ever-advancing upgrades – unlike me.

But let’s not abandon our faith in humanity without looking again at the selection criteria for the job, and the capabilities on the human resume.

Man vs machine

Start with the task we’re engaged in right now: communicating in fluent human.

We’re sharing abstract ideas through words that we choose with an understanding of their nuance and impact. We don’t just speak in human, we speak as humans.

A robot that says that science is fun is delivering a line. A human who says that science is fun is telling you something important about being alive.

That’s knowledge that ChiefBot will never have, and the essence of the Chief Scientist’s job. Chalk that up to Team Human.

Here’s another inbuilt advantage we take for granted: as humans we are limited by design. We are bound in time: we die. We are bound in space: we can’t be in more than one place at a time.

That means that when I speak to an audience, I am giving them something exclusive: a chunk of my time. It’s a custom-made, one-off, 100% robot-free delivery, from today’s one-and-only Australian Chief Scientist.

True, I now come in digital versions, through Twitter and Facebook and other platforms, but the availability of those tools hasn’t stopped people from inviting me to speak in person. Digital Alan seems to increase the appetite for human Alan, just as Spotify can boost the demand for a musician’s live performances.

We see the same pattern repeated across the economy. Thanks to technology, many goods and services are cheaper, better and more accessible than ever before. We like our mass-produced bread, and our on-tap lectures and our automated FitBit advice.

But automation hasn’t killed the artisan bakery. Online courses haven’t killed the bricks-and-mortar university. FitBit hasn’t killed the personal trainer. On the contrary, they’re all booming, alongside their machine equivalents.

Finkel’s Law

Call it Finkel’s Law: where there’s a robot, we’ll see renewed appreciation for the humans in the robot-free zone. Team Human, two goals up.

The real Chief Scientist, Alan Finkel, dancing with a robot. Australia’s Chief Scientist, Author provided

Let me suggest a third advantage: you and I can be flexible and effective in human settings. In our world, AI systems are the interlopers. We are the incumbents. It’s the robots who have to make sense of us. And we make it extraordinarily hard.

Think, for example, of a real estate negotiation. We could rationalise it as an exchange of one economic asset for another. In reality, we know that our actions will be swayed by sentiment, insecurity and peer pressure.

In that swirl of reason and emotion, the art of the real estate agent is to anticipate, pivot and nudge.

The human real estate agent is the package deal. She can harness AI to sharpen her perceptions and overcome cognitive biases. Then she can hit the human buttons to flatter, deflect or persuade.

That human touch is hard to replicate, and even harder to reduce to a formula and scale. Team Human, three goals to nil.

Here’s a fourth argument for the win. We humans have learned the habit of civilisation. Let me illustrate this point with a story.

The human future

A few years ago, some researchers set out to investigate the way that people interact with robots. They sent out a small robot to patrol the local mall.

That robot had a terrible time – and the villains of the story were children. They kicked him, bullied him, smacked him in the head and called him a string of indelicate names.

The point is not that the children were violent. The point is that the adults were not. They restrained whatever primitive impulse they might have felt in childhood to smack something smaller and weaker in the head, because they had absorbed the habit of living together. We call it civilisation.


Read more: Surgeons admit to mistakes in surgery and would use robots if they reduced the risks


If we want artificial intelligence for the people, of the people and by the people, we’ll need every bit of that civilising instinct we’ve honed over thousands of years.

We’ll need humans to tame the machines to our human ends. I’d say that’s Team Human, in a walkover.

Together, these points suggest to me that humanity has a powerful competitive edge. We can coexist with our increasingly capable machines and we can make space for the full breadth of human talents to flourish.

But if we want that future – that human future – we have to want it, claim it and own it. Take it from a human Chief Scientist: we’re worth it.


This article is based on a speech Alan Finkel delivered to the Institute of Electrical and Electronics Engineers (IEEE) international conference in Sydney earlier this month.

Alan Finkel, Australia’s Chief Scientist, Office of the Chief Scientist

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.


Automation can leave us complacent, and that can have dangerous consequences

The Conversation

David Lyell

The recent fatal accident involving a Tesla car driving itself using the car’s Autopilot feature has raised questions about whether this technology is ready for consumer use.

But more importantly, it highlights the need to reconsider the relationship between human behaviour and technology. Self-driving cars change the way we drive, and we need to scrutinise the impact of this on safety.

Tesla’s Autopilot does not make the car truly autonomous and self-driving. Rather, it automates driving functions, such as steering, speed, braking and hazard avoidance. This is an important distinction. The Autopilot provides supplemental assistance to, but is not a replacement for, the driver.

In a statement following the accident, Tesla reiterated that Autopilot is still in beta. The statement emphasised that drivers must maintain responsibility for the vehicle and be prepared to take over manual control at any time.

Tesla says Autopilot improves safety, helps to avoid hazards and reduces driver workload. But with reduced workload, the question is whether the driver allocates freed-up cognitive resources to maintain supervisory control over Autopilot.

Automation bias

There is evidence to suggest that humans have trouble recognising when automation has failed and manual intervention is required. Research shows we are poor supervisors of trusted automation, with a tendency towards over-reliance.

This is known as automation bias: when people use automation such as autopilot, they may delegate full responsibility to it rather than continue to be vigilant. This reduces our workload, but it also reduces our ability to recognise when automation has failed, signalling the need to take back manual control.

Automation bias can occur any time automation is over-relied on and gets it wrong. One way this can happen is when the automation has not been set properly.

An incorrectly set GPS navigation will lead you astray. This happened to one driver who followed an incorrectly set GPS across several European countries.

More tragically, Korean Airlines flight 007 was shot down when it strayed into Soviet airspace in 1983, killing all 269 on board. Unknown to the pilots, the aircraft deviated from its intended course due to an incorrectly set autopilot.

Autocorrect is not always correct

Automation will work exactly as programmed. A spell checker will catch typing errors, but it will not flag a correctly spelt word used in the wrong place, such as mistyping “from” as “form”.

Likewise, automation isn’t aware of our intentions and will sometimes act contrary to them. This frequently occurs with predictive text and autocorrect on mobile devices. Here over-reliance can result in miscommunication with some hilarious consequences as documented on the website Damn You Autocorrect.

Sometimes automation will encounter circumstances that it can’t handle, as could have occurred in the Tesla crash.

GPS navigation has led drivers down a dead-end road when a highway was rerouted but the maps were not updated.

Over-reliance on automation can exacerbate problems by reducing situational awareness. This is especially dangerous as it limits our ability to take back manual control when things go wrong.

The captain of China Airlines flight 006 left autopilot engaged while attending to an engine failure. The loss of power from one engine caused the plane to start banking to one side.

Unknown to the pilots, the autopilot was compensating by steering as far as it could in the opposite direction. It was doing exactly what it had been programmed to do, keeping the plane as level as possible.

But this masked the extent of the problem. In an attempt to level the plane, the captain disengaged the autopilot. The result was a complete loss of control, the plane rolled sharply and entered a steep descent. Fortunately, the pilots were able to regain control, but only after falling 30,000 feet.

Humans vs automation

When automation gets it right, it can improve performance. But research findings show that when automation gets it wrong, performance is worse than if there had been no automation at all.

And tasks we find difficult are also often difficult for automation.

In medicine, computers can help radiologists detect cancers in screening mammograms by placing prompts over suspicious features. These systems are very sensitive, identifying the majority of cancers.

But in cases where the system missed cancers, human readers with computer-aided detection missed more than readers with no automated assistance. Researchers noted cancers that were difficult for humans to detect were also difficult for computers to detect.

Technology developers need to consider more than their automation technologies. They need to understand how automation changes human behaviour. While automation is generally highly reliable, it has the potential to fail.

Automation developers try to combat this risk by placing humans in a supervisory role with final authority. But automation bias research shows that relying on humans as a backup to automation is fraught with danger and a task for which they are poorly suited.

Developers and regulators must not only assess the automation technology itself, but also the way in which humans interact with it, especially in situations when automation fails. And as users of automation, we must remain ever vigilant, ready to take back control at the first sign of trouble.

David Lyell, PhD Candidate in Health Informatics

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.
 


To stop the machines taking over we need to think about fuzzy logic

The Conversation

By Simon James, Deakin University

Amid all the dire warnings that machines run by artificial intelligence (AI) will one day take over from humans we need to think more about how we program them in the first place.

The technology may be too far off to seriously entertain these worries – for now – but much of the distrust surrounding AI arises from misunderstandings in what it means to say a machine is “thinking”.

One of the current aims of AI research is to design machines, algorithms, input/output processes or mathematical functions that can mimic human thinking as much as possible.

We want to better understand what goes on in human thinking, especially when it comes to decisions that cannot be justified other than by drawing on our “intuition” and “gut-feelings” – the decisions we can only make after learning from experience.

Consider the human-manager who hires you after first comparing you with other job applicants in terms of your work history, skills and presentation. This human-manager is able to weigh those inputs and identify the successful candidate.

If we can design a computer program that takes exactly the same inputs as the human-manager and can reproduce its outputs, then we can make inferences about what the human-manager really values, even if he or she cannot articulate their decision on who to appoint other than to say “it comes down to experience”.
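
To make this concrete, here is a minimal sketch of that emulation idea in Python, assuming the scikit-learn library is available. The candidate features, past decisions and feature names below are invented for illustration; the point is simply that a fitted, inspectable model can hint at what the human-manager appears to value.

    # A minimal sketch: fit a simple, inspectable model to past hiring decisions
    # and read off what the "manager" appears to value.
    # All features, names and outcomes here are hypothetical illustrations.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Each candidate: [years of work history, skills score, presentation score]
    past_candidates = [
        [1, 60, 70],
        [5, 80, 65],
        [8, 55, 90],
        [3, 90, 85],
        [10, 70, 60],
    ]
    # 1 = the manager hired them, 0 = they did not get the job (made-up outcomes)
    manager_decisions = [0, 1, 0, 1, 1]

    model = DecisionTreeClassifier(max_depth=2, random_state=0)
    model.fit(past_candidates, manager_decisions)

    # The fitted tree is a rough, human-readable proxy for the manager's reasoning.
    print(export_text(model, feature_names=["experience", "skills", "presentation"]))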

This kind of research is being carried out today and applied to understand risk-aversion and risk-seeking behaviour of financial consultants. It’s also being looked at in the field of medical diagnosis.

These human-emulating systems are not yet being asked to make decisions, but they are certainly being used to help guide human decisions and reduce the level of human error and inconsistency.

Fuzzy sets and AI

One promising area of research is to utilise the framework of fuzzy sets. Fuzzy sets and fuzzy logic were formalised by Lotfi Zadeh in 1965 and can be used to mathematically represent our knowledge pertaining to a given subject.

In everyday language, when we accuse someone of “fuzzy logic” or “fuzzy thinking”, we mean that their ideas are contradictory, biased or perhaps just not very well thought out.

But in mathematics and logic, “fuzzy” is a name for a research area that has quite a sound and straightforward basis.

The starting point for fuzzy sets is this: many decision processes that can be managed by computers traditionally involve truth values that are binary: something is true or false, and any action is based on the answer (in computing this is typically encoded by 0 or 1).

For example, our human-manager from the earlier example may say to human resources:

  • IF the job applicant is aged 25 to 30
  • AND has a qualification in philosophy OR literature
  • THEN arrange an interview.

This information can all be written into a hiring algorithm, based on true or false answers, because an applicant either is aged between 25 and 30 or is not, and either has the qualification or does not.
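
As a rough illustration, the crisp version of this rule could be coded directly, since every condition is simply true or false. The applicant record format below is hypothetical.

    # A sketch of the binary (true/false) hiring rule described above,
    # assuming a hypothetical applicant record with an age and qualifications.
    def arrange_interview(applicant: dict) -> bool:
        """Return True if the applicant meets the manager's crisp rule."""
        right_age = 25 <= applicant["age"] <= 30
        right_degree = bool({"philosophy", "literature"} & set(applicant["qualifications"]))
        return right_age and right_degree

    print(arrange_interview({"age": 27, "qualifications": ["literature"]}))  # True
    print(arrange_interview({"age": 32, "qualifications": ["philosophy"]}))  # False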

But what if the human-manager is somewhat more vague in expressing their requirements? Instead, the human-manager says:

  • IF the applicant is tall
  • AND attractive
  • THEN the salary offered should be higher.

The problem HR faces in encoding these requests into the hiring algorithm is that they involve a number of subjective concepts. Even though height is something we can objectively measure, how tall should someone be before we call them tall?

Attractiveness is also subjective, even if we only account for the taste of the single human-manager.

Grey areas and fuzzy sets

In fuzzy sets research we say that such characteristics are fuzzy. By this we mean that the degree to which something belongs to a set, or to which a statement is true, can increase gradually from 0 to 1 over a given range of values.

One of the hardest things in any fuzzy-based software application is how best to convert observed inputs (someone’s height) into a fuzzy degree of membership, and then further establish the rules governing the use of connectives such as AND and OR for that fuzzy set.

To this day, and likely in years or decades into the future, the rules for this transition are human-defined. For example, to specify how tall someone is, I could design a function that says a 190cm person is tall (with a truth value of 1) and a 140cm person is not tall (or tall with a truth value of 0).

Then from 140cm, for every increase of 5cm in height the truth value increases by 0.1. So a key feature of any AI system is that we, normal old humans, still govern all the rules concerning how values or words are defined. More importantly, we define all the actions that the AI system can take – the “THEN” statements.
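
Here is a minimal Python sketch of that “tall” function, together with the classical Zadeh connectives (minimum for AND, maximum for OR). These connectives are one common, human-defined choice rather than the only option, and the attractiveness degree in the last line is an invented input.

    # The "tall" membership function described above: 140cm maps to 0,
    # 190cm maps to 1, rising by 0.1 for every 5cm in between (a linear ramp).
    def tall(height_cm: float) -> float:
        """Degree (0 to 1) to which a person of this height counts as 'tall'."""
        degree = (height_cm - 140) / 50  # +0.1 per 5cm above 140cm
        return max(0.0, min(1.0, degree))

    # Classical Zadeh connectives: min for AND, max for OR.
    def fuzzy_and(a: float, b: float) -> float:
        return min(a, b)

    def fuzzy_or(a: float, b: float) -> float:
        return max(a, b)

    print(tall(140))  # 0.0
    print(tall(165))  # 0.5
    print(tall(190))  # 1.0
    # Combining "tall" with a hypothetical attractiveness degree of 0.7:
    print(fuzzy_and(tall(180), 0.7))  # 0.7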

Human–robot symbiosis

An area called computing with words takes the idea further by aiming for seamless communication between a human user and an AI computer algorithm.

For the moment, we still need to come up with mathematical representations of subjective terms such as “tall”, “attractive”, “good” and “fast”. Then we need to design a function for combining such comments or commands, followed by another mathematical definition for turning the result we get back into an output like “yes he is tall”.
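
As a toy illustration of that last step, a computed truth degree could be mapped back into words like this. The thresholds and phrases are arbitrary choices for the sketch, not a standard from the computing-with-words literature.

    # Turning a fuzzy truth degree back into a linguistic answer.
    # Thresholds and wording are arbitrary illustrations.
    def describe_tallness(degree: float) -> str:
        if degree >= 0.8:
            return "yes, he is tall"
        if degree >= 0.4:
            return "he is somewhat tall"
        return "no, he is not tall"

    print(describe_tallness(0.9))  # "yes, he is tall"
    print(describe_tallness(0.5))  # "he is somewhat tall"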

In conceiving the idea of computing with words, researchers envisage a time where we might have more access to base-level expressions of these terms, such as the brain activity and readings when we use the term “tall”.

This would be an amazing leap, although mainly in terms of the technology required to observe such phenomena (the number of neurons in the brain, let alone synapses between them, is somewhere near the number of galaxies in the universe).

Even so, designing machines and algorithms that can emulate human behaviour to the point of mimicking communication with us is still a long way off.

In the end, any system we design will behave as it is expected to, according to the rules we have designed and the program that governs it.

An irrational fear?

This brings us back to the big fear of AI machines turning on us in the future.

The real danger is not the birth of genuine artificial intelligence: that we will somehow manage to create a program that can become self-aware, like HAL 9000 in the movie 2001: A Space Odyssey or Skynet in the Terminator series.

The real danger is that we make errors in encoding our algorithms or that we put machines in situations without properly considering how they will interact with their environment.

These risks, however, are the same that come with any human-made system or object.

So if we were to entrust, say, the decision to fire a weapon to AI algorithms (rather than just the guidance system), then we might have something to fear.

Not a fear that these intelligent weapons will one day turn on us, but rather that we programmed them – given a series of subjective options – to decide the wrong thing and turn on us.

Even if there is some uncertainty about the future of “thinking” machines and what role they will have in our society, a sure thing is that we will be making the final decisions about what they are capable of.

When programming artificial intelligence, the onus is on us (as it is when we design skyscrapers, build machinery, develop pharmaceutical drugs or draft civil laws), to make sure it will do what we really want it to.

This article was originally published on The Conversation. (Reblogged by permission). Read the original article.


