The Future of Robots in Medicine





Key Points





  • Robotics and artificial intelligence (AI) are radically changing rehabilitative medicine and medicine in general.



  • The near future of robotics in medicine exists now in various start-up companies across the world.



  • Robotics and AI may displace some workers but in general they are meant to enhance, extend, and complement the operations of health-care workers.



On a city side street at night a body lies motionless on the sidewalk. A passerby notes the fallen individual, a man in a coat with the top open revealing a dark suit and white shirt, and after shaking the man without any response, calls 911. The passerby waits for several minutes until an ambulance shows up with its lights flashing, easing to a stop in front of the unconscious individual. The doors in the back of the ambulance split open and from inside a stretcher is lowered to the sidewalk by two robotic arms that place it parallel to the motionless form, merely inches from the surface of the cement. The robotic arms lower a cloth over the man, and this cloth, with billions of nanosensors, immediately starts reading a multitude of physiologic parameters that are transmitted via Bluetooth to the artificial intelligence (AI) systems on the ambulance and simultaneously to a central cloud that registers in the emergency room of a nearby hospital. Pulse is rapid, blood pressure is low, but oxygen saturation levels are good. The electrocardiogram registers a potential myocardial infarction. The robotic arms lift the man swiftly onto the stretcher and then pull the stretcher into the ambulance as the back doors close automatically. Electric and driverless, the ambulance moves off without a sound. Inside the ambulance robotic arms grab the man's arm and slip it into a machine that quickly locates a vein through its ultrasound system and then inserts an intracatheter, which is connected to intravenous saline. Another robotic arm ensures oxygen is being delivered to the man through a mask.


When the ambulance arrives at the emergency room, a robotic transport vehicle is waiting, and the transfer from ambulance to the robotic vehicle goes smoothly and quickly, and soon the vehicle has taken the man into one of the treatment bays. Monitors light up with the patient’s vital statistics, which are as comprehensive as any intensive care unit. Electrocardiogram, heart rate, oxygen levels, respiratory rate, and so on are all displayed.


Within minutes the doctor arrives. The doctor has already downloaded the transport and arrival information from the cloud system and has begun to process clinical information from its databank of millions of chart records and journal articles and textbooks. The doctor is a robot with its own AI system that is also connected to a hospital AI system. Orders are nearly instantaneously transmitted to the cloud system to notify pharmacy so that treatment can begin almost immediately. Hours later the man opens his eyes to find himself in a hospital bed with a friendly and caring nurse bending over him and smiling.


"How are you feeling, Mr. Smith?" The man's identity had already been determined through a flawless facial recognition system, and his wife, already notified, stood on the other side of the bed.


The previous chapters show that robots are rapidly making inroads into the health-care industry, and that several categories of robots are affecting the sector: surgical, modular, service, social, mobile, and autonomous robots. However, these inroads are barely perceptible, if at all, to most people accessing medical care. Most likely this will change, and almost everyone will encounter robots either directly or indirectly when moving through the medical system on an inpatient or outpatient basis. Robots with autonomous or semiautonomous operating systems using AI will provide advantages for service and health delivery that will be too hard to pass up. These robotic systems could come in virtually any form, or potentially without form. They will offer benefits of speed and technical efficiency and will bring greater depth and range of scientific expertise into clinics and outpatient offices.


But exactly what do experts anticipate these advantages to be? Certainly, the seeds of the future 5, 10, and 20 years from now have already been planted. In the following pages the future of robotics can be seen through the companies that have started to create that future. An article in Fortune on March 3, 2019, by Jeremy Kahn indicated that funding for AI-assisted drug discovery had increased 4.5-fold over the previous year, to $13.8 billion. According to an article in Towards Data Science on December 2, 2019, by Henri Heinonen, revenue from AI software will grow from $9.5 billion in 2018 to $118.6 billion in 2025. An article in Cision on November 2, 2020, projected that the US robotics market would grow from $76.6 billion in 2020 to $176.8 billion in 2025, a compound annual growth rate (CAGR) of 18.2%.
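The Cision projection is internally consistent: a quick check of the compound annual growth rate implied by the endpoint figures (a sketch, using only the numbers cited above):

```python
# US robotics market projection: $76.6B in 2020 growing to $176.8B in 2025.
start, end, years = 76.6, 176.8, 2025 - 2020

# CAGR = (end / start)^(1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # prints "CAGR: 18.2%", matching the cited figure
```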


A sure indicator of the global interest, commitment, and level of development in robotics and AI is the amount of funding being poured into academic programs, as well as the businesses rushing to compete in this market. An article in National Defense on February 10, 2021, by Jon Harper cited a figure of around $6 billion in federal money being applied to these areas. Undoubtedly this amount will grow.


Of course, one could not talk about the future of robots in medicine without talking about AI; the two areas are inextricably linked, and in many cases AI provides the operating system for robots. AI itself is experiencing tremendous growth through an outpouring of funding and research activity. AI is based in computer chips and mechanistic constructions, but biomechanistic models are also in development: intelligent operating systems based both in biological constructs such as DNA and in computer chips, benefiting from the attributes of both worlds. AI has traditionally run under two models: the rules-based one and the neural networks type. The latter has produced the deep learning systems that have taken the field forward in quantum leaps. Rules-based systems operate on a series of rules such as "if x is true, then y." However, this type lacked the complexity to handle many real-world problems. The neural networks approach attempts to duplicate the actual functioning of the brain and mind: the computer structure consists of layers of neural networks, and large quantities of data are fed to the computer to allow it to establish the patterns within the data.


Three factors are key to successful deep learning: (1) computing power; (2) technical expertise; (3) the amount of data available for training. China at this point has a reservoir of data greater than the United States and Europe combined. Four factors underpin the commercial development of AI: (1) governmental support; (2) engineers with technical expertise; (3) entrepreneurial grittiness; (4) abundant data. Through its deep-rooted system of managing social and commercial life through apps such as WeChat, China has surpassed all other nations in its collection of data that can be employed in the development of AI.
In addition to elite scientists leading discoveries and innovation, there must also be a large cadre of competent, though not necessarily brilliant, engineers and scientists to provide real-world practical applications. The seven giants of the AI age are Google, Facebook, Amazon, Microsoft, Baidu, Alibaba, and Tencent.
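The two traditional models described above can be contrasted in a few lines of Python; the triage thresholds and the tiny training task below are purely illustrative, not drawn from any clinical system:

```python
# Rules-based AI: explicit "if x is true, then y" logic written by hand.
# (Thresholds here are illustrative, not from any clinical guideline.)
def rules_based_triage(heart_rate, systolic_bp):
    if heart_rate > 100 and systolic_bp < 90:
        return "urgent"
    return "routine"

# Neural-network-style AI: a single artificial neuron (perceptron) whose
# weights are learned from labeled examples rather than written by hand.
def train_neuron(examples, labels, lr=0.1, epochs=10):
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(examples, labels):
            pred = 1 if w0 * x1 + w1 * x2 + b > 0 else 0
            err = y - pred                   # learn only from mistakes
            w0 += lr * err * x1
            w1 += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w0 * x1 + w1 * x2 + b > 0 else 0

# The neuron learns the logical-OR function purely from data.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
neuron = train_neuron(inputs, [0, 1, 1, 1])
print([neuron(*x) for x in inputs])  # prints [0, 1, 1, 1]
```

The rules-based function encodes expert knowledge directly, while the neuron discovers an equivalent decision boundary from labeled examples alone, which is why the neural approach is so hungry for data.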


So let us delve into the future and see what current projections tell us about the relationship of humankind to robots and AI in health care. Undoubtedly there could be many versions of this future, but we will try to look at the most promising ones. Currently there is speculation, or perhaps concern, that robots or AI systems will replace doctors, nurses, and pharmacists. Developers stated years ago that radiologists would no longer be necessary because AI systems had achieved an accuracy in reading images that surpassed human physicians; it has not happened. Most envision a world where AI and robotic systems enhance the capabilities of humans in the delivery of health care rather than one where humans are replaced. Part of the capacity for achieving the current feats in narrow AI comes from the ability of new computers to handle vast quantities of data. Interestingly, the gaming world contributed the graphics processing unit, or GPU, that makes such feats possible, particularly in the processing of images. A current GPU has greater capacity than a supercomputer of several years ago and is small enough to be carried in your hands. Other concerns center on the machines developing social biases that would not and should not be tolerated in the workforce. Timnit Gebru, a computer scientist with undergraduate and graduate degrees from Stanford and expertise in algorithmic bias and data mining, has argued that large AI systems can develop racial biases and that AI teams of diverse racial backgrounds would help curtail that possibility.


Surgical robots have captured a great deal of media attention, and for the most part they are controlled by surgeons to ensure good technique and outcomes. However, the addition of robots brings advantages such as resistance to fatigue and tremor, scalability of movement, and greater axial range of motion. These enhancements have decreased morbidity for certain procedures. The emergence of autonomous surgical robots driven by AI systems has opened an interesting new avenue. For example, in an experimental setting, Shademan et al. showed that the Smart Tissue Autonomous Robot (STAR) could anastomose porcine intestine with a consistency beyond the reach of human surgeons. Current systems that include some level of autonomy include the da Vinci (Intuitive Surgical, Sunnyvale, California), which has a master-slave control, the TSolution-One orthopedic robot (THINK Surgical, Fremont, California), and the Mazor X spinal robot (Mazor Robotics, Caesarea, Israel). Undoubtedly, in the future certain surgical procedures will be performed autonomously by robots controlled through advanced AI systems.


Any discussion of the future of robotics in medicine would not be complete without a thorough discussion of the role of AI. As it stands now, many robotic machines are enhanced through the integration of AI and machine learning (ML). ML involves algorithms whereby the AI learns from data and experience and enhances its own functionality with the newly learned information. Therefore, the evolutionary destination of any AI system fitted with ML has a certain level of unpredictability. A neuron in the human brain can fire at a rate of about 200 Hz, whereas a computer's transistors switch in the gigahertz range. These facts raise the long-standing question of whether robots and AI will outstrip humans in autonomous intellectual skills and in the ability to manipulate and control the environment, and they feed the fear that these machines may ultimately gain control of humanity and its cultures. Certainly, it would make sense to incorporate safeguards and controls at these early stages, before such a reality could ever present itself.


AI generally falls into two categories: narrow AI and artificial general intelligence. One burgeoning area is for AI to take a hand in its own development, accelerating the advancement of AI capacities. Self-aware, intelligent machines would not naturally share human goals; AI must therefore be constructed to adopt our goals. And even with performances that outshine radiologists, an AI algorithm will eventually make a mistake, responsibility for which will fall back on the doctor.


Constance Lehman, MD, PhD, professor of radiology at Harvard Medical School, is working toward the integration of AI technology into breast imaging and sees these opportunities for the future: (1) an increase in patient access to imaging; (2) an increase in safety through the reduction of errors; (3) an increase in the opportunities for radiologists to directly engage with patients; (4) the provision of cost reductions for these services. A recent Stanford study found that AI could read fractures in wrist and finger x-rays better than radiologists, although the latter performed better with elbows, forearms, and other areas. At present AI is good at solving single problems (narrow AI); artificial general intelligence, in which more complex problems are solved, is still a long way off. One issue with using this technology is automation bias: through overreliance on the technology, doctors take the computer's advice and fail to watch for potential errors.


So currently most AI systems are specific in their functions and directed toward accomplishing very specific tasks. However, researchers are directing efforts toward creating artificial general intelligence. To this end, individuals such as Ben Goertzel, PhD, are developing open-source AI code and an AI "mind cloud" that all robots can use. Each robot can access its own individual memory but also the vastly larger memory of the cloud: one robot can learn something and then transfer that knowledge to another. Many factors are contributing to an exponentially accelerating growth in robot/AI systems that affects all aspects of our culture, including the delivery of health care. In addition to learning from one another by being linked into a type of hive mind through a cloud system, robots can train one another and can develop their own systems, as AlphaGo Zero (the successor to AlphaGo, which defeated the world's best player) did. AlphaGo was trained on copious quantities of human game data; AlphaGo Zero skipped this process and instead became far more proficient by developing its own play through self-play.


Computer engineers are also experiencing a phenomenal acceleration in learning through a global cross-pollination of ideas and technical know-how. Many AI algorithms are open source. In a joke familiar to Chinese scientists, when asked how far behind Silicon Valley the Chinese are, they answer 16 hours: the time difference between China and California. There are public forums, such as arXiv (www.arxiv.org), where new papers are posted and the latest developments can be examined by anyone in the world. Many global organizations foster the exchange of ideas and technology, such as the World Artificial Intelligence Organization, Global AI, the Global Partnership on AI, AI Global, the AI program of the Organisation for Economic Co-operation and Development, the Global AI Action Alliance of the World Economic Forum, and the European Commission's Artificial Intelligence in Society initiative.


Integrating AI and robotics with all the complexities of the health-care system represents quite a challenge. Nevertheless, the challenge is being addressed. The new systems will affect pharmacies in and out of the hospital, the delivery of medications, communications within a hospital, the robotic transport of laboratory materials, the nature of radiologic imaging and assessment, and even the provider-patient interaction. The following is an example of the latter potential. Imagine a war veteran with posttraumatic stress disorder walking into an office in a mental health clinic. A large screen before him lights up with the picture of a woman dressed conservatively and professionally, smiling encouragingly and empathically. She is not any specific human being but an AI construct designed to listen to his issues, to sympathize with them, and to offer insights and guidance. Such systems exist and are currently being refined to provide psychological help and support for those who need it.


Another area of burgeoning promise is the field that crosses over between biological systems and electrical/mechanical/computer ones. In Sweden, a new type of supercomputer being developed at Lund University combines components of muscle with more traditional electrical systems to create a computer whose power could rival the potential of quantum computers. The muscle component used is the myofibril, which propels agents through a nanofabricated logic network, yielding analytic capabilities that are fast and vast in scope. Refer to the article "Parallel Computation With Molecular-Motor-Propelled Agents in Nanofabricated Networks" by Dan V. Nicolau Jr. of the University of California, Berkeley, and several of his colleagues.


Tying into the area of robotics and AI, smart devices are improving the immediacy of early detection and intervention for health issues, as well as creating a means for predicting future health trends in any given individual. An example of such capabilities is the smart fabric of Nanowear, a company founded by the father-son team of Vijay Varadan and Venk Varadan. Nanosensors are integrated into wearable fabrics, and these nanosensors can communicate wirelessly with a mobile telephone or laptop to convey real-time physiological information to a physician or nurse for monitoring, prevention, and early intervention in disease processes such as heart or lung disease. The noninvasive sensors are augmented with AI technology to give the most precise and advanced recordings of physiological information within a fabric that is comfortable to wear, and the sensors are so small as to be virtually unnoticeable.


AI can have numerous applications in health care and affect all specialties, but the top three broad categories are (1) quality control, (2) customer care and support, and (3) monitoring and diagnostics. In "A New Rx: AI for Operations in Health Care" (MIT Technology Review Insights, July 15, 2020), the top additional benefits cited for the use of AI include inventory management, personalization of products or services, cybersecurity, asset management, fraud detection, finance processes and analysis, and pricing.


Of course, the development of AI is not only dependent on the quality of algorithms designed for its software, but also dependent on the capability of the chips for expressing algorithms. To this end numerous companies are racing to create the optimal chips for such a purpose. Leaders in this field operate both in the United States and in China and range from the large well-known companies to the smaller boutique ones. Large and familiar Silicon Valley denizens such as Facebook, Intel, Qualcomm, Nvidia, Google, and Microsoft are pushing ahead alongside lesser-known start-ups. In China the Ministry of Science and Technology is incentivizing start-ups to compete in the AI chip market and the boutique companies include Horizon Robotics, Bitmain, and Cambricon Technologies. With large amounts of investment capital these companies can make competitive strides to fill this market. For example, Bitmain Technologies advertises “advanced tensor acceleration for deep learning, Bitmain’s AI chips can be used for a wide range of applications such as facial recognition, automatic driving, smart cities, smart governance, smarter security, medical services.” Horizon Robotics has focused on AI chips for autonomous vehicles. Cambricon advertises chips that combine with cloud systems for more effective and broad-based AI computing systems.


Some of the most fascinating questions that arise as humanity watches the growth of AI concern the possibility of machines developing self-awareness and the ability to think in a manner like humans. The Turing test arose many decades ago to assess machine intelligence in this fashion. While working at the University of Manchester in 1950, Alan Turing published "Computing Machinery and Intelligence," in which he proposed a test to determine whether machines can think. In the test, an interrogator submits written questions and receives responses from both a human and a machine, and must determine solely from the responses which came from the machine and which came from the human. If the interrogator judges the machine's responses to be from a human, the machine has passed the threshold for entering the realm of human intelligence; the machine, however, must pass the test repeatedly. Turing himself predicted that a machine would pass the test by the year 2000. The test has supporters and detractors but is frequently cited.


Have machines advanced sufficiently to fool a human observer with their responses? Consider the following famous examples.


From 1964 to 1966 Joseph Weizenbaum (a computer scientist and MIT professor) created ELIZA, a natural language processing program, at the MIT Artificial Intelligence Laboratory. This program looked for keywords in typed comments and developed them into sentences. ELIZA used different scripts to process and provide responses. One of these was the DOCTOR script, which was patterned after Carl Rogers, a psychotherapist who often would simply parrot back what a patient had said to him. ELIZA in this context could fool some observers, one of whom was Weizenbaum’s secretary.
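ELIZA's keyword-and-reflection technique is simple enough to sketch in a few lines of Python; the patterns and pronoun table below are illustrative stand-ins, not Weizenbaum's original DOCTOR script:

```python
import re

# Pronoun reflections, so "my" in the input becomes "your" in the reply.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# A tiny illustrative script: keyword pattern -> response template
# ("%s" holds the reflected remainder of the user's sentence).
SCRIPT = [
    (r"i need (.*)", "Why do you need %s?"),
    (r"i am (.*)", "How long have you been %s?"),
    (r"my (.*)", "Tell me more about your %s."),
    (r"(.*)", "Please go on."),  # default, Rogerian-style prompting
]

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.split())

def respond(sentence):
    text = sentence.lower().rstrip(".!?")
    for pattern, template in SCRIPT:
        match = re.match(pattern, text)
        if match:
            if "%s" in template:
                return template % reflect(match.group(1))
            return template

print(respond("I am feeling anxious about my job"))
# prints "How long have you been feeling anxious about your job?"
```

The keyword match plus pronoun reflection is the whole trick: nothing in the program understands the conversation, yet the parroted-back responses were enough to fool some observers.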


In 1972, the chatbot PARRY emerged from the lab of the Stanford scientist Kenneth Colby, who described his creation as "ELIZA with attitude." Colby designed his program to mimic the behavior of a person with paranoid schizophrenia. PARRY performed so well during Turing-style testing that two groups of psychiatrists mistook its responses for those of patients 48% of the time.


Eugene Goostman is a chatbot developed in St. Petersburg in 2001 by three programmers (two from Russia and one from Ukraine). It presents itself as a 13-year-old boy from Odessa, Ukraine; its answers follow the pattern of a boy who does not speak English very well and whose thought patterns resemble those of a young boy. In 2014 Goostman managed to fool a panel of judges 33% of the time into believing its cover story. In other words, the bot could make at least some claim to having passed the Turing test.


In a rather stunning display of AI capabilities, the Google Duplex voice AI called a hairdresser in 2018 and secured an appointment without that individual ever recognizing that she was talking to a machine. Although the program performed well in this circumstance, it would not necessarily perform well in a significantly different context.


For the future of health care, AI systems will need to be more flexible, adjusting to context and handling novel situations and novel input. Current systems are headed in that direction, and when those goals are achieved, robotic nurses and doctors will make a more successful debut and be of greater aid in the delivery of health-care services.


Another area that has been initiated but has yet to achieve even a small part of its potential is the 3D printing of robots. Many different types of 3D printed robots already exist for a multitude of purposes, and many of these have been, or could be, adapted to different health-care environments such as rehabilitation medicine.


Let us explore some categories of these robots and their current potentials; from there it is an easy leap to the future directions of this technology. Many of these creations are open source. The types include humanoid robots, robotic arms, zoomorphic robots, and robotic hands (Fig. 12.1). There are even 3D printed robotic arms that can themselves serve as 3D printers, printing other robotic arms while performing the functions of conventional robotic arms. Some of these creations have generic functions that could easily translate into a rehabilitation environment. To some extent these robots are customizable, and that flexibility allows the machines to be adjusted to individual patient needs. Robotic arms could easily serve purposes in assistive technology. The idea of comfort robots has achieved some level of acceptance, and 3D printed robots offer possibilities here as well. An example of this type is ASPIR, an acronym that stands for Autonomous Support and Positive Inspiration Robot. The creation of John Choi, this device weighs around 30 pounds, stands 4 feet tall, and has 22 motors. It is completely open source, with an estimated build time of several months and a cost of around $2500. Jimmy the 21st Century Robot could serve the functions of emotional support and education; applications to children with special needs have been suggested. It is bipedal, with a customizable shell over an endoskeleton, employs an Intel Edison chip, and is driven by an open-source Linux C++ system.


Apr 6, 2024 | Posted in Physical Medicine & Rehabilitation
