At first glance, the stack of medicine pumps might bear little resemblance to the flight deck of a modern aircraft. A series of white plastic boxes stacked on top of each other, they look similar to other pumps in a hospital that control the flow of drugs and fluids into the bloodstream of critically ill patients.

Yet there is one crucial difference: these pumps — which can be used for insulin, painkillers and many other medications — are set up to take over important, safety-critical tasks that were once only undertaken by doctors and nurses.

Just as aircraft autopilots have improved overall airline safety by assuming control of tasks where human pilots can make errors, the pumps have been designed to conduct some of the arduous process of checking on dosage levels, patient identity and medical records that are required before any drug is administered. The hope is that they can process this information with fewer mistakes than humans.

Mark Sujan, an associate professor at the UK’s University of Warwick who has been involved in devising the system, says that the use of artificial intelligence may eventually mean that the pumps learn how to improve the administration of the drugs based on each patient’s condition.

An Ethiopian Airlines Boeing 737 Max, the same aircraft that crashed in March killing 157 people

“It frees up the nurses from constantly having to monitor what’s going on because the AI is doing it,” he says.

The experiment with the pumps at the Royal Derby hospital in England, which involves dummies rather than humans, is one of thousands of automation projects brought into stark relief by two fatal crashes, over the past six months, involving new Boeing 737 Max 8 aircraft.

Researchers and companies across a wide range of industries, from healthcare to shipping to driverless cars, are seeking to apply artificial intelligence and automation to processes that were once conducted only by humans in a bid to make them more reliable and less vulnerable to error.

Yet these scientists are grappling with a similar dilemma to the one exposed by the Boeing crashes — how to manage the interface between humans and machines. For every human mistake that is eradicated, the risk is that new ones are being introduced. Machines can make errors through lack of information or poor programming and there is vast scope for misunderstandings between humans and machines.

The core problem, says Mr Sujan, is that while human beings no longer exercise complete control over the operation of autonomous machines, they are still meant to be sufficiently engaged to understand the machine’s functioning and to step in should it start to act unsafely.

Rolls-Royce is seeking to apply artificial intelligence and automation to sea crafts to minimise the risk of human error

“All it does really is remove the human — or, in this case, the doctor — from the manual control and transforms their role to that of supervisors or monitors of the autonomous system,” Mr Sujan says. “That introduces all sorts of problems.”

Few of the researchers involved believe there are satisfactory answers yet to all the safety questions being asked. Jacques Penders, head of the centre for automation and robotics research at Sheffield Hallam University, says there are still substantial challenges with ensuring both that humans understand what automated systems are doing and that automated systems understand nearby humans.

“It’s not something that’s really solved,” Professor Penders says.

The controversy around autonomous machines has been dramatically illustrated by the fatal Boeing crashes, both involving the 737 Max 8 aircraft: the first a Lion Air flight in Indonesia in October, the other an Ethiopian Airlines flight last month, shortly after take-off. Ethiopian officials reported on April 4 that the pilots in the second crash had been unable to overcome the tendency of the Manoeuvring Characteristics Augmentation System (MCAS) — an automated system to prevent mid-air stalls — to send the aircraft’s nose plunging dangerously downwards. A similar issue was blamed for the Indonesian tragedy.

Nevertheless, there is powerful evidence that carefully-implemented automation has saved many lives.

Steve Casner, a research psychologist for Nasa, the US aeronautics and space agency, points out that, as recently as 30 years ago, fatal airline crashes were relatively common in the US. In 1989, 259 passengers died in five separate fatal incidents involving US-owned commercial aircraft. But it is now 10 years since a Colgan Air flight crashed near Buffalo, New York, in February 2009, killing 49 people — the last fatal crash of a US-owned commercial airliner. Other advanced economies have seen similarly sharp improvements in safety.

A co-working robot in Sheffield Hallam’s research centre was developed for a food distributor. The blue blocks are proxies for wedges of cheese

Mr Casner, who was involved in developing many of the automated systems now in use, says that 20 years ago he would never have thought the present safety record possible. He believes the introduction of automation to help pilots has played a key role.

“We did something right and we spread the ideas throughout the industry, especially to pilots — they participated every step of the way,” says the California-based Mr Casner. “I feel that we did something great here. We made it work and we took safety a step forward.”

For some researchers, there is substantial potential for automation to produce such safety improvements in other fields. Mr Sujan points out that when a drug is administered manually, clinicians have to make 60 to 70 different checks — from the simple question of the patient’s identity to more complex questions such as whether the drug will react adversely with anything else the patient is taking.
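To make the idea concrete, the sketch below shows in Python how a handful of such checks might be encoded so that a pump could run them identically every time. The names, rules and interaction table are entirely hypothetical illustrations of the principle, not the system being trialled at the Royal Derby.

```python
# Hypothetical sketch of automated pre-administration checks.
# All names, fields and rules are invented for illustration only.

from dataclasses import dataclass, field


@dataclass
class Patient:
    patient_id: str
    allergies: set = field(default_factory=set)
    current_drugs: set = field(default_factory=set)


@dataclass
class Order:
    patient_id: str
    drug: str
    dose_mg: float
    max_dose_mg: float


# Toy interaction table; a real system would consult a clinical database.
INTERACTIONS = {("warfarin", "aspirin")}


def pre_administration_checks(order: Order, patient: Patient) -> list[str]:
    """Return a list of problems; an empty list means the pump may proceed."""
    problems = []
    if order.patient_id != patient.patient_id:
        problems.append("patient identity does not match the order")
    if order.drug in patient.allergies:
        problems.append(f"patient is allergic to {order.drug}")
    if order.dose_mg > order.max_dose_mg:
        problems.append("dose exceeds the prescribed maximum")
    for existing in patient.current_drugs:
        if (order.drug, existing) in INTERACTIONS or (existing, order.drug) in INTERACTIONS:
            problems.append(f"{order.drug} may interact with {existing}")
    return problems
```

The appeal of encoding the checklist this way is the point Prof Penders makes below: the machine applies every rule, in the same order, on every administration.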

“These checks are extremely error-prone,” he adds, saying that it is statistically almost certain that a nurse or doctor will make at least one mistake in their career.

The great strength of automated systems is their readiness to carry out detailed but mentally unengaging tasks repeatedly and in exactly the same way, according to Prof Penders.

“You can ask the robot 1,000 times to do the same and it will do always the same thing,” he says. “If you ask a person, maybe three times.”

Yet few of the autonomous systems critical for safety can cope entirely without human help — and there are frequent problems when humans retake control of an automated system that has reached the limits of its capabilities.

Although he says it is too early to comment on the 737 incidents, Mr Casner acknowledges that, while airline crashes have become rare, “automation surprises” have accounted for a significant proportion of recent years’ safety incidents. Among the most serious was the loss in 2009 of an Air France Airbus A330 over the south Atlantic, which killed 228 people. Pilots in that incident failed to take the correct action after an autopilot handed back control following a sensor failure.

Mr Casner, who is also a flight instructor, says he gives trainee pilots a stark warning about the systems.

“When we get in the aeroplane, the thing I like to say is, ‘the first thing you need to know about this system is it’s going to try to kill you once in a while’,” Mr Casner says. “‘The good news is it’ll try to save your life about 10 times more often’.”

Luis Benito, a senior manager at Lloyd’s Register, which certifies the safety of ships and offshore installations, says it is particularly important to ensure software has been properly tested to check its behaviour in all the circumstances it will meet.

“It can get more and more complex as you allow the software to do more and more things,” Mr Benito says.

Steve Casner, a research psychologist for Nasa

Mr Benito says Lloyd’s Register has for several years been dealing with requests to certify autonomous systems in the maritime and offshore oil and gas industries. While seagoing vessels have long used automatic systems that will maintain a set course at sea, the latest variants are designed to detect obstacles, such as other vessels, and plot a safe route round them without the crew’s intervention.

Mr Benito says that people who work with autonomous systems need to be trained carefully about their limitations. “Understanding what we are automating is very critical — and [so is] to know what we are not,” he adds. “The individuals need to understand when the technology is not designed for that particular operation.”

The challenge of making sure humans can work comfortably alongside machines is on display at Professor Penders’ centre in Sheffield. At a table in the centre’s main room, a co-working robot — one of a new breed of industrial robots designed to be safe around human colleagues — uses a jointed arm to pick up and sort chunks of blue plastic on to a tray. The machine was developed for a food distributor and the blocks are proxies for wedges of cheese.

However, the sheer strangeness of working alongside an autonomous machine becomes apparent when Alexandr Lucas, a researcher at the centre, stands next to the robot and starts sorting a pile of pens by colour — a task that requires some degree of concentration. Although he is used to operating alongside a machine, he says his performance still suffers when compared to the robot.

Humans find such situations stressful, according to Professor Penders, partly because humans use far more subtle means of communication than machines.

“Even if you don’t know it, your facial expressions and body language are still steering the conversation,” Prof Penders says of conversations between humans. “With robots, you have that a lot less.”

While machines can process data far more quickly than humans, they also typically rely on more limited types of information. Mr Sujan points out that a human clinician overseeing an infusion pump giving a diabetic patient insulin will be able to see if the patient eats a sweet snack, and discount the impending rise in blood sugar. However, an autonomous pump, aware only of the patient’s blood sugar level, might regard the natural rise as a sign that the insulin dose needs to be increased in a manner that could be dangerous.
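As a rough illustration of that blind spot, the sketch below contrasts a controller that reacts to the glucose reading alone with one that is also told about a recent meal. The thresholds and names are invented; this is not a real clinical algorithm.

```python
# Hypothetical illustration of the failure mode Mr Sujan describes:
# a controller that sees only the glucose number cannot tell a snack
# from genuine under-dosing. All values are invented.

GLUCOSE_TARGET_MMOL = 7.0


def naive_dose_adjustment(glucose_mmol: float, current_rate: float) -> float:
    """Reacts to the reading alone: any rise looks like under-dosing."""
    if glucose_mmol > GLUCOSE_TARGET_MMOL:
        return current_rate * 1.2  # could over-correct after a harmless snack
    return current_rate


def guarded_dose_adjustment(glucose_mmol: float, current_rate: float,
                            recent_meal: bool) -> float:
    """Escalates only when the rise cannot be explained by a recent meal;
    otherwise it holds the rate and defers to the human supervisor."""
    if glucose_mmol > GLUCOSE_TARGET_MMOL and not recent_meal:
        return current_rate * 1.2
    return current_rate
```

The difference between the two functions is not the arithmetic but the extra piece of context; supplying that context, or recognising when it is missing, is the hard part of the design problem.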

Prof Penders says humans’ great strength is their ability to size up novel situations quickly and cope with them.

“Humans are very strong in getting the essence out of the sensor information that they have,” he says.

Bad information from sensors appears to have played a critical role in confusing autopilots in a number of air crashes, including the Air France A330 crash in 2009 and the 737 Max 8 crashes. The MCAS system in the Ethiopian accident was misled by inaccurate information from a sensor that told it the aircraft was climbing far more steeply than it in fact was.

Mr Sujan says that recognising such limitations is a vital part of solving automation’s safety problems.

“How do we ensure that the system knows enough about the world within which it’s operating?” he says. “That’s a complex thing.”

Dr Mark Sujan, associate professor at the UK’s University of Warwick

The challenge of integrating human and machine, meanwhile, is complicated by the mystery surrounding how humans make critical decisions. Nick Jennings, professor of artificial intelligence at Imperial College London, says system developers at one time sought to devise intelligent systems by asking human experts how they made up their minds — a process that revealed how different the style of human thinking is from that of machines.

“They didn’t get very far because actually the experts couldn’t articulate why they made particular choices,” says Prof Jennings.

There is also a backlash against the idea that the best automated system is the one that gives the highest proportion of the task concerned to the computer, according to Sarah Fletcher, a senior research fellow at the Centre for Structures, Assembly and Intelligent Automation at Cranfield University. Systems that assign too little for humans to do can leave operators bored and inattentive just at the moment when the machine needs to hand back control.

“Often, there’s an assumption that reducing workload is the benefit [of automation],” Ms Fletcher says. “We’ll keep decreasing, and decreasing, and decreasing it. I say, ‘No, don’t — let’s find the optimum’.”

Yet, according to many researchers, the critical factor in avoiding further air disasters, or similar tragedies in other fields developing autonomous technology, is likely to be a more mature understanding of the relative roles of humans and machines.

Mr Casner says it is vital to understand that humans and automated systems will never work together perfectly. “The idea is to prepare aircrews to understand that automation has its limitations,” he says. “It sometimes does unexpected things. So do human beings. We’re two limited, fallible entities in the cockpit.”
