Tag Archives: solution
Bat and ball solution
The question was: A bat and ball cost $1.10. The bat costs one dollar more than the ball. How much does the ball cost?
Answer: We are intuitively tempted to say 10c. But if the ball costs 10c and the bat costs one dollar more, the bat must cost $1.10, so the total would be 10c + $1.10 = $1.20.
Looked at the other way: if the total is $1.10 and the ball costs 10c, the bat can only cost $1.00, which is just 90c more than the ball.
The correct answer is that the ball costs 5c (5c + $1.05 = $1.10).
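For anyone who wants to check the arithmetic mechanically, here is a minimal Python sketch (not part of the original puzzle; the variable names are mine) that tries every whole-cent price for the ball:

```python
# Try every whole-cent price for the ball; the bat always costs 100c more.
for ball in range(0, 111):
    bat = ball + 100                 # "the bat costs one dollar more than the ball"
    if ball + bat == 110:            # "a bat and ball cost $1.10"
        print(f"ball = {ball}c, bat = {bat}c")   # prints: ball = 5c, bat = 105c
```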
Filed under Puzzles
Automation can leave us complacent, and that can have dangerous consequences
The recent fatal accident involving a Tesla car while self-driving using the car’s Autopilot feature has raised questions about whether this technology is ready for consumer use.
But more importantly, it highlights the need to reconsider the relationship between human behaviour and technology. Self-driving cars change the way we drive, and we need to scrutinise the impact of this on safety.
Tesla’s Autopilot does not make the car truly autonomous and self-driving. Rather, it automates driving functions, such as steering, speed, braking and hazard avoidance. This is an important distinction. The Autopilot provides supplemental assistance to, but is not a replacement for, the driver.
In a statement following the accident, Tesla reiterated that Autopilot is still in beta. The statement emphasised that drivers must maintain responsibility for the vehicle and be prepared to take over manual control at any time.
Tesla says Autopilot improves safety, helps to avoid hazards and reduces driver workload. But with reduced workload, the question is whether the driver uses the freed-up cognitive resources to maintain supervisory control over Autopilot.
Automation bias
There is evidence to suggest that humans have trouble recognising when automation has failed and manual intervention is required. Research shows we are poor supervisors of trusted automation, with a tendency towards over-reliance.
This is known as automation bias: when people use automation such as autopilot, they may delegate full responsibility to it rather than continuing to be vigilant. That reduces our workload, but it also reduces our ability to recognise when the automation has failed and manual control needs to be taken back.
Automation bias can occur whenever automation that we over-rely on gets it wrong. One way this can happen is when the automation has not been set up properly.
An incorrectly set GPS navigation will lead you astray. This happened to one driver who followed an incorrectly set GPS across several European countries.
More tragically, Korean Airlines flight 007 was shot down when it strayed into Soviet airspace in 1983, killing all 269 on board. Unknown to the pilots, the aircraft deviated from its intended course due to an incorrectly set autopilot.
Autocorrect is not always correct
Automation will do exactly what it is programmed to do. Relying on a spell checker to catch typing errors will not reveal a wrong word that happens to be spelt correctly, such as “form” typed in place of “from”.
Likewise, automation isn’t aware of our intentions and will sometimes act contrary to them. This frequently occurs with predictive text and autocorrect on mobile devices. Here, over-reliance can result in miscommunication, sometimes with hilarious consequences, as documented on the website Damn You Autocorrect.
Sometimes automation will encounter circumstances that it can’t handle, as could have occurred in the Tesla crash.
GPS navigation has led drivers down a dead-end road when a highway was rerouted but the maps were not updated.
Over-reliance on automation can exacerbate problems by reducing situational awareness. This is especially dangerous as it limits our ability to take back manual control when things go wrong.
The captain of China Airlines flight 006 left autopilot engaged while attending to an engine failure. The loss of power from one engine caused the plane to start banking to one side.
Unknown to the pilots, the autopilot was compensating by steering as far as it could in the opposite direction. It was doing exactly what it had been programmed to do, keeping the plane as level as possible.
But this masked the extent of the problem. When the captain disengaged the autopilot in an attempt to level the plane, the result was a complete loss of control: the plane rolled sharply and entered a steep descent. Fortunately, the pilots were able to regain control, but only after falling 30,000 feet.
Humans vs automation
When automation gets it right, it can improve performance. But research findings show that when automation gets it wrong, performance is worse than if there had been no automation at all.
And tasks we find difficult are also often difficult for automation.
In medicine, computers can help radiologists detect cancers in screening mammograms by placing prompts over suspicious features. These systems are very sensitive, identifying the majority of cancers.
But in cases where the system missed cancers, human readers using computer-aided detection missed more of them than readers with no automated assistance. The researchers noted that cancers which were difficult for humans to detect were also difficult for computers to detect.
Technology developers need to consider more than their automation technologies. They need to understand how automation changes human behaviour. While automation is generally highly reliable, it has the potential to fail.
Automation developers try to combat this risk by placing humans in a supervisory role with final authority. But automation bias research shows that relying on humans as a backup to automation is fraught with danger and a task for which they are poorly suited.
Developers and regulators must not only assess the automation technology itself, but also the way in which humans interact with it, especially in situations when automation fails. And as users of automation, we must remain ever vigilant, ready to take back control at the first sign of trouble.
David Lyell, PhD Candidate in Health Informatics
This article was originally published on The Conversation. (Reblogged by permission). Read the original article.
Filed under Reblogs
Four card solution
Obviously we need to turn over the D to check that there is a 3 on the back (everybody gets this one right). And equally obviously, there’s no need to turn over the K (and again, everybody realises this). The 3 card is a tricky one. Most people think that you need to turn this card over to see whether there is a D on the other side. This would be necessary had the claim been that “Every card that has a D on one side has a 3 on the other, and vice versa”. But it wasn’t. The 7 is the other tricky one. It doesn’t occur to most people that we need to turn this card over to check that the letter on the back is not D. If it is D, then the claim is false.
This trick illustrates the phenomenon of confirmation bias. Most people, being fairly charitable sorts, want to turn over the 3, find a D on the back and confirm the claim (“Well done, you’re right!”). And so it is with homeopathy (or conspiracy theories). People who want to believe that the treatment works actively search for opportunities to confirm this belief, focusing on homeopathy patients who seem to have got better (“3 cards”) and reject opportunities to disconfirm it, by ignoring research studies (“7 cards”).
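For readers who like to see the logic spelled out, here is a minimal Python sketch (my own, not from the original post) that checks, for each visible face, whether some possible hidden face could falsify the claim “every card with a D on one side has a 3 on the other”; only those cards need to be turned over:

```python
# Each card shows a letter on one side and a number on the other.
visible = ["D", "K", "3", "7"]
letters = ["D", "K"]          # possible letter faces
numbers = ["3", "7"]          # possible number faces

def falsifies(letter, number):
    """The claim fails only for a card with D on one side and something other than 3 on the other."""
    return letter == "D" and number != "3"

for face in visible:
    # The hidden side is a number if we can see a letter, and vice versa.
    hidden_options = numbers if face in letters else letters
    must_turn = any(
        falsifies(face, h) if face in letters else falsifies(h, face)
        for h in hidden_options
    )
    print(face, "-> turn over" if must_turn else "-> leave")
# Output: D and 7 must be turned over; K and 3 can be left alone.
```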
Filed under Puzzles
Cheryl’s Birthday solution
Remember, Albert is told either May, June, July or August.
Bernard is told either 14, 15, 16, 17, 18 or 19.
Let’s go through it line by line.
Albert: I don’t know when Cheryl’s birthday is, but I know that Bernard doesn’t know too.
All Albert knows is the month, and every month has more than one possible date, so of course he doesn’t know when her birthday is. The first part of the sentence is redundant.
The only way Bernard could know the date from the number alone, however, would be if Cheryl had told him 18 or 19, since of the ten date options only these numbers appear once, as May 19 and June 18.
For Albert to know that Bernard does not know, Albert must therefore have been told July or August, since this rules out Bernard being told 18 or 19.
Line 2) Bernard: At first I don’t know when Cheryl’s birthday is, but now I know.
Bernard has deduced that Albert was told either July or August. Since Bernard now knows the full date, he must have been told 15, 16 or 17: among the remaining dates, each of these numbers points to a single month, whereas 14 could belong to either July or August.
Line 3) Albert: Then I also know when Cheryl’s birthday is.
Albert has therefore deduced that the possible dates are July 16, August 15 and August 17. For him to now know, he must have been told July, since if he had been told August he could not be certain whether the birthday was the 15th or the 17th.
The answer, therefore, is July 16.
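If you would rather let a computer do the elimination, here is a minimal Python sketch (mine, not from the original post) that applies the three statements in turn to the ten candidate dates from the puzzle:

```python
dates = [("May", 15), ("May", 16), ("May", 19),
         ("June", 17), ("June", 18),
         ("July", 14), ("July", 16),
         ("August", 14), ("August", 15), ("August", 17)]

def months_of(day, pool):
    return [m for m, d in pool if d == day]

def days_of(month, pool):
    return [d for m, d in pool if m == month]

# Statement 1: Albert (who knows the month) knows Bernard (who knows the day)
# cannot know the date, so no day in Albert's month occurs only once overall.
step1 = [(m, d) for m, d in dates
         if all(len(months_of(d2, dates)) > 1 for d2 in days_of(m, dates))]

# Statement 2: Bernard now knows, so his day appears only once among what's left.
step2 = [(m, d) for m, d in step1 if len(months_of(d, step1)) == 1]

# Statement 3: Albert now knows too, so his month appears only once among what's left.
step3 = [(m, d) for m, d in step2 if len(days_of(m, step2)) == 1]

print(step3)  # [('July', 16)]
```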
Filed under Puzzles
Green-eyed dragons solution
Let’s start with a smaller number of dragons, N, instead of one hundred, to get a feel for the problem.
If N = 1, and you tell this dragon that at least one of the dragons has green eyes, then you are simply telling him that he has green eyes, so he must turn into a sparrow at midnight.
If N = 2, let the dragons be called A and B. After your announcement that at least one of them has green eyes, A will think to himself, “If I do not have green eyes, then B can see that I don’t, so B will conclude that she must have green eyes. She will therefore turn into a sparrow on the first midnight.” Therefore, if B does not turn into a sparrow on the first midnight, then on the following day A will conclude that he himself must have green eyes, and so he will turn into a sparrow on the second midnight. The same thought process will occur for B, so they will both turn into sparrows on the second midnight.
If N = 3, let the dragons be called A, B, and C. After your announcement, C will think to himself, “If I do not have green eyes, then A and B can see that I don’t, so as far as they are concerned, they can use the reasoning for the N = 2 situation, in which case they will both turn into sparrows on the second midnight.” Therefore, if A and B do not turn into sparrows on the second midnight, then on the third day C will conclude that he himself must have green eyes, and so he will turn into a sparrow on the third midnight. The same thought process will occur for A and B, so they will all turn into sparrows on the third midnight. The pattern now seems clear.
Claim: Consider N dragons, all of whom have green eyes. If you announce to all of them that at least one of them has green eyes, they will all turn into sparrows on the Nth midnight.
Proof: We will prove this by induction. We saw above that the claim holds for N = 1, 2 and 3. Now assume it is true for N dragons; we will show that it then holds for N + 1 dragons.
Consider N + 1 dragons, and pick one of them, called A. After your announcement, she will think to herself, “If I do not have green eyes, then the other N dragons can see that I don’t, so as far as they are concerned, they can use the reasoning for the situation with N dragons, in which case they will all turn into sparrows on the Nth midnight.” Therefore, if they do not all turn into sparrows on the Nth midnight, then on the (N + 1)st day A will conclude that she herself must have green eyes, and so she will turn into a sparrow on the (N + 1)st midnight. The same thought process will occur for the other N dragons, so they will all turn into sparrows on the (N + 1)st midnight.
Hence, in our problem all one hundred dragons will turn into sparrows on the 100th midnight.
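For completeness, here is a tiny Python sketch (my own, and simply an encoding of the induction step above rather than a full epistemic simulation) that computes the transformation night for any number of green-eyed dragons:

```python
def transform_night(k):
    """Midnight on which a group of k green-eyed dragons all transform, after the announcement."""
    if k == 1:
        return 1                       # a lone dragon learns immediately that it has green eyes
    # Each dragon watches the k - 1 green-eyed dragons it can see; when they fail
    # to transform on the night a group of k - 1 would have, it concludes its own
    # eyes are green and transforms the following midnight.
    return transform_night(k - 1) + 1

for n in (1, 2, 3, 100):
    print(n, transform_night(n))       # 1 -> 1, 2 -> 2, 3 -> 3, 100 -> 100th midnight
```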
Filed under Puzzles