By Paulette Campbell

Medical first responders take on one of the most dangerous and demanding roles in warfare and humanitarian crisis situations. As the nature of warfare changes and disasters become more commonplace, combat and civilian medics will be called upon to treat more casualties, more extensively, over longer periods of time.

Researchers at the Johns Hopkins Applied Physics Laboratory (APL) in Laurel, Maryland, are exploring how emerging capabilities in artificial intelligence (AI), augmented reality (AR) and robotics might support collaborative intervention by teams of medics, AI-based virtual assistants and autonomous robots.

APL has been working with the Army’s Telemedicine and Advanced Technology Research Center (TATRC) on ways to combine these technologies to enable medics and machines to collaborate effectively. In April, an APL team worked alongside TATRC to demonstrate the latest technologies under development for America’s warfighters at the second annual U.S. Army Medical Research and Development Command Capability Days event at Fort Detrick, Maryland.

“Future combat situations will likely be characterized by large numbers of casualties in areas where prompt evacuation is difficult or impossible,” noted David Handelman, a senior roboticist in APL’s Research and Exploratory Development Department (REDD). “APL has a unique role to play in helping the Army achieve its vision for medical robotics and autonomous systems.”

Handelman is leading a cross-Laboratory team that is exploring and demonstrating how virtual and robotic assistants can support medics to improve outcomes. The team includes Casey Hanley from the Asymmetric Operations Sector, colleagues from REDD, and Andrew Badger.

AI, AR and robotics are poised to revolutionize field care, Handelman said. “We believe that AI-based virtual assistants can be used to give medical advice to medics and soldiers in the field,” he said. “Augmented reality can provide novel visualization of real-time information about a patient’s condition. We also want to enable robots to perform a range of tasks in support of medics.”

But no integrated solution exists yet, he added. As a test case for their medic-robot teaming methodology, Handelman and his team are working on a field care scenario where two robots assist a medic caring for multiple casualties. The medic approaches a casualty and starts bag-valve-mask ventilation to aid breathing. The medic asks robot number one — a one-armed four-legged robot — to fetch a nearby intubation kit. After intubating the patient, the medic attaches the ventilation bag to the intubation tube and hands the bag to the robot. The robot takes over bagging the patient to enable the medic to attend to other patients. The medic then asks robot number two — another four-legged robot — to assess a nearby casualty using specialized sensors. The robot finds the casualty, measures vital signs and sends data back to the medic for further decision-making.

The current project is bringing this scenario to life through a combination of semi-autonomous robot behaviors and medic-robot collaboration.
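To make the scenario concrete, the sketch below shows one way such medic-initiated robot tasks could be represented and dispatched in software. It is an illustration only, not APL's or TATRC's actual system; the task names, robot identifiers and structure are assumptions.

```python
# Hypothetical sketch of the two-robot field-care scenario expressed as discrete,
# medic-initiated tasks. Names and structure are illustrative assumptions only.
from dataclasses import dataclass
from typing import Callable


@dataclass
class RobotTask:
    robot: str                    # which robot performs the task, e.g. "robot_1"
    name: str                     # human-readable task name
    run: Callable[[], None]       # the behavior the robot executes


def fetch_intubation_kit():
    print("robot_1: locating and retrieving the intubation kit")


def take_over_bagging():
    print("robot_1: holding the ventilation bag and continuing bag-valve-mask ventilation")


def assess_second_casualty():
    print("robot_2: navigating to the casualty, measuring vital signs, relaying data to the medic")


# Tasks are issued one at a time by the medic (for example, via spoken command),
# freeing the medic to attend to other patients between requests.
scenario = [
    RobotTask("robot_1", "fetch intubation kit", fetch_intubation_kit),
    RobotTask("robot_1", "take over bagging", take_over_bagging),
    RobotTask("robot_2", "assess nearby casualty", assess_second_casualty),
]

for task in scenario:
    print(f"medic -> {task.robot}: {task.name}")
    task.run()
```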

“We are finding that the coordinated use of AI, AR and robotics provides unique opportunities to assist medics in the field,” Handelman said. “When we show where we are headed to Army medics, they seem very excited about the possibilities, and they come up with new ideas and capabilities for us to explore. This could potentially save lives and improve the overall health of patients.”

Adaptive Human-Robot Teaming

The medic-robot teaming effort leverages ongoing research at APL sponsored by the Army Research Laboratory and the Army’s Artificial Intelligence Innovation Institute on adaptive human-robot teaming. This research combines symbolic AI and neuro-inspired machine learning to emulate human skill acquisition, with the goals of adjustable autonomy (dynamically adjusting whether humans or robots act, and when) and adaptive teaming (handling new situations).
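A minimal sketch of what "adjustable autonomy" can mean in practice follows: deciding at run time whether a human or a robot should handle a given step. The thresholds, field names and decision rule here are assumptions for illustration, not part of the APL research.

```python
# Illustrative (hypothetical) adjustable-autonomy rule: allocate a task step to
# a human or a robot based on the robot's confidence in its learned skill and
# the human's current workload. Thresholds are assumptions, not APL's values.

def assign_step(step: str, robot_confidence: float, human_workload: float,
                confidence_threshold: float = 0.8) -> str:
    """Return 'robot' or 'human' for a single task step."""
    if robot_confidence >= confidence_threshold:
        return "robot"        # the robot has demonstrated this skill reliably
    if human_workload > 0.9:
        return "robot"        # the human is saturated; accept a lower-confidence robot attempt
    return "human"            # otherwise keep the human in the loop


# Example: allocation shifts toward the robot as its skill estimate improves.
for confidence in (0.4, 0.7, 0.95):
    print(assign_step("ventilate patient", confidence, human_workload=0.5))
```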

“What if we could teach robots the way we teach people — through written instructions, verbal instructions, gestures and by example?” Handelman said. “We believe that new capabilities could be enabled by a wide group of non-programmer instructors and subject-matter experts, thereby democratizing the process of robot training.”

“Importantly, given that mission plans sometimes do not survive first contact with the enemy, robot behavior could be modified in situ, in the field, to adapt to dynamic environments, enemies and missions,” he added.

The researchers want to develop a system where humans provide the outline of a task — behavioral scaffolding based on known good strategies — and allow robots to learn how to perform subtasks to minimize human workload and expand the range of potential solutions.

“If the overall task is thought of as a behavior graph, or tree, we want the human to provide the structure of the tree and the robot to learn how to accomplish selected ‘leaves,’ based on desired state transitions and an understanding of what ‘good’ performance looks like,” Handelman explained.
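The behavior-tree idea can be sketched in a few lines of code: a human authors the structure (here, a simple sequence node), while selected leaves are placeholders for behaviors the robot learns. The class names and placeholder policies below are illustrative assumptions, not the team's implementation.

```python
# Minimal behavior-tree sketch: human-authored structure, robot-learned leaves.
# All class names and policies here are hypothetical illustrations.
from typing import Callable, List


class Node:
    def tick(self) -> bool:            # True = success, False = failure
        raise NotImplementedError


class Sequence(Node):
    """Human-authored structure: run children in order until one fails."""
    def __init__(self, children: List[Node]):
        self.children = children

    def tick(self) -> bool:
        return all(child.tick() for child in self.children)


class LearnedLeaf(Node):
    """A leaf whose behavior is acquired by the robot and judged against a
    human-specified notion of the desired state transition."""
    def __init__(self, name: str, policy: Callable[[], bool]):
        self.name, self.policy = name, policy

    def tick(self) -> bool:
        success = self.policy()
        print(f"{self.name}: {'success' if success else 'failure'}")
        return success


# The human supplies the tree; the robot supplies (learns) the leaf policies.
tree = Sequence([
    LearnedLeaf("move stealthily to waypoint", policy=lambda: True),
    LearnedLeaf("scan area for adversaries", policy=lambda: True),
])
tree.tick()
```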

The team is developing a cognitive architecture for robotic skill acquisition to enable adaptive human-robot teaming. Their test case involves humans wearing AR headsets and collaborating with four-legged robots to perform reconnaissance-related tasks, such as moving stealthily through an environment or searching for adversaries.

“The research continues,” Handelman said, “but one of the early lessons learned is that shared task knowledge provided by multi-agent playbooks can help humans and robots track and predict teammate behavior, and we believe it will promote team transparency, accountability and trust.”
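One way to picture a multi-agent playbook is as a shared data structure listing each teammate's role and expected steps, so that any human or robot can look up what should happen next. The play contents below are illustrative assumptions drawn from the reconnaissance test case, not the team's actual playbook.

```python
# Hypothetical multi-agent playbook: shared task knowledge that lets humans and
# robots track and predict teammate behavior. Contents are illustrative only.
playbook = {
    "stealthy_recon": {
        "roles": ["human", "robot_1"],
        "steps": [
            ("human", "designate search area via AR headset"),
            ("robot_1", "move stealthily to vantage point"),
            ("robot_1", "search for adversaries and report"),
            ("human", "confirm or redirect based on report"),
        ],
    },
}


def expected_next_step(play: str, completed: int) -> str:
    """Any teammate can look up what should happen next, supporting
    transparency and predictability within the team."""
    actor, action = playbook[play]["steps"][completed]
    return f"expect {actor} to: {action}"


print(expected_next_step("stealthy_recon", completed=1))
```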