
The Robot Will See You Now

"In Brazil and India, machines are already starting to do primary care, because there’s no labor to do it,” says Robert Kocher, an internist, “They may be better than doctors. . ." The rising costs of health care, an aging population in the United States and other nations, are spurring investments into the development of sophisticated machines that will be able to perform tasks now done by highly skilled workers. What may be the impact on the healthcare workforce?

Harley Lukov didn’t need a miracle. He just needed the right diagnosis. Lukov, a 62-year-old from central New Jersey, had stopped smoking 10 years earlier—fulfilling a promise he’d made to his daughter, after she gave birth to his first grandchild. But decades of cigarettes had taken their toll. Lukov had adenocarcinoma, a common cancer of the lung, and it had spread to his liver. The oncologist ordered a biopsy, testing a surgically removed sample of the tumor to search for particular “driver” mutations. A driver mutation is a specific genetic defect that causes cells to reproduce uncontrollably, interfering with bodily functions and devouring organs. Think of an on/off switch stuck in the “on” position. With lung cancer, doctors typically test for mutations called EGFR and ALK, in part because those two respond well to specially targeted treatments. But the tests are a long shot: although EGFR and ALK are the two driver mutations doctors typically see with lung cancer, even they are relatively uncommon. When Lukov’s cancer tested negative for both, the oncologist prepared to start a standard chemotherapy regimen—even though it meant the side effects would be worse and the prospects of success slimmer than might be expected using a targeted agent.

But Lukov’s true medical condition wasn’t quite so grim. The tumor did have a driver—a third mutation few oncologists test for in this type of case. It’s called KRAS. Researchers have known about KRAS for a long time, but only recently have they realized that it can be the driver mutation in metastatic lung cancer—and that, in those cases, it responds to the same drugs that turn it off in other tumors. A doctor familiar with both Lukov’s specific medical history and the very latest research might know to make the connection—to add one more biomarker test, for KRAS, and then to find a clinical trial testing the efficacy of KRAS treatments on lung cancer. But the national treatment guidelines for lung cancer don’t recommend such action, and few physicians, however conscientious, would think to do these things.

Did Lukov ultimately get the right treatment? Did his oncologist make the connection between KRAS and his condition, and order the test? He might have, if Lukov were a real patient and the oncologist were a real doctor. They’re not. They are fictional composites developed by researchers at the Memorial Sloan-Kettering Cancer Center in New York, in order to help train—and demonstrate the skills of—IBM’s Watson supercomputer. Yes, this is the same Watson that famously went on Jeopardy and beat two previous human champions. But IBM didn’t build Watson to win game shows. The company is developing Watson to help professionals with complex decision making, like the kind that occurs in oncologists’ offices—and to point out clinical nuances that health professionals might miss on their own.

Information technology that helps doctors and patients make decisions has been around for a long time. Crude online tools like WebMD get millions of visitors a day. But Watson is a different beast. According to IBM, it can digest information and make recommendations much more quickly, and more intelligently, than perhaps any machine before it—processing up to 60 million pages of text per second, even when that text is in the form of plain old prose, or what scientists call “natural language.”

That’s no small thing, because something like 80 percent of all information is “unstructured.” In medicine, it consists of physician notes dictated into medical records, long-winded sentences published in academic journals, and raw numbers stored online by public-health departments. At least in theory, Watson can make sense of it all. It can sit in on patient examinations, silently listening. And over time, it can learn. Just as Watson got better at Jeopardy the longer it played, so it gets better at figuring out medical problems and ways of treating them the more it interacts with real cases. Watson even has the ability to convey doubt. When it makes diagnoses and recommends treatments, it usually issues a series of possibilities, each with its own level of confidence attached.
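
To make that idea concrete, the confidence-scored output described above can be pictured as nothing more exotic than a ranked list of candidate diagnoses, each paired with a score. The short Python sketch below is purely illustrative; the diagnoses, the confidence numbers, and the rank_differential helper are invented for this example and do not represent IBM's actual Watson interface.

```python
from dataclasses import dataclass

# Hypothetical illustration only: a differential diagnosis expressed as a
# ranked list of possibilities, each with its own confidence score, as the
# article describes. Names and numbers are invented for this example.

@dataclass
class Hypothesis:
    diagnosis: str
    confidence: float  # estimated probability, 0.0 to 1.0


def rank_differential(hypotheses, floor=0.10):
    """Sort candidates by confidence, dropping anything below `floor`."""
    kept = [h for h in hypotheses if h.confidence >= floor]
    return sorted(kept, key=lambda h: h.confidence, reverse=True)


if __name__ == "__main__":
    candidates = [
        Hypothesis("Condition A", 0.62),
        Hypothesis("Condition B", 0.21),
        Hypothesis("Condition C", 0.04),  # too uncertain; filtered out
    ]
    for h in rank_differential(candidates):
        print(f"{h.diagnosis}: {h.confidence:.0%} confidence")
```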

Medicine has never before had a tool quite like this. And at an unofficial coming-out party in Las Vegas last year, during the annual meeting of the Healthcare Information and Management Systems Society, more than 1,000 professionals packed a large hotel conference hall, and an overflow room nearby, to hear a presentation by Marty Kohn, an emergency-room physician and a clinical leader of the IBM team training Watson for health care. Standing before a video screen that dwarfed his large frame, Kohn described in his husky voice how Watson could be a game changer—not just in highly specialized fields like oncology but also in primary care, given that all doctors can make mistakes that lead to costly, sometimes dangerous, treatment errors.

Drawing on his own clinical experience and on academic studies, Kohn explained that about one-third of these errors appear to be products of misdiagnosis, one cause of which is “anchoring bias”: human beings’ tendency to rely too heavily on a single piece of information. This happens all the time in doctors’ offices, clinics, and emergency rooms. A physician hears about two or three symptoms, seizes on a diagnosis consistent with those, and subconsciously discounts evidence that points to something else. Or a physician hits upon the right diagnosis, but fails to realize that it’s incomplete, and ends up treating just one condition when the patient is, in fact, suffering from several. Tools like Watson are less prone to those failings. As such, Kohn believes, they may eventually become as ubiquitous in doctors’ offices as the stethoscope.

“Watson fills in for some human limitations,” Kohn told me in an interview. “Studies show that humans are good at taking a relatively limited list of possibilities and using that list, but are far less adept at using huge volumes of information. That’s where Watson shines: taking a huge list of information and winnowing it down.”


Watson has gotten some media hype already, including articles in Wired and Fast Company. Still, you probably shouldn’t expect to see it the next time you visit your doctor’s office. Before the computer can make real-life clinical recommendations, it must learn to understand and analyze medical information, just as it once learned to ask the right questions on Jeopardy. That’s where Memorial Sloan-Kettering comes in. The famed cancer institute has signed up to be Watson’s tutor, feeding it clinical information extracted from real cases and then teaching it how to make sense of the data. “The process of pulling out two key facts from a Jeopardy clue is totally different from pulling out all the relevant information, and its relationships, from a medical case,” says Ari Caroline, Sloan-Kettering’s director of quantitative analysis and strategic initiatives. “Sometimes there is conflicting information. People phrase things different ways.” But Caroline, who approached IBM about the research collaboration, nonetheless predicts that Watson will prove “very valuable”—particularly in a field like cancer treatment, in which the explosion of knowledge is already overwhelming. “If you’re looking down the road, there are going to be many more clinical options, many more subtleties around biomarkers … There will be nuances not just in interpreting the case but also in treating the case,” Caroline says. “You’re going to need a tool like Watson because the complexity and scale of information will be such that a typical decision tool couldn’t possibly handle it all.”

The Cleveland Clinic is also helping to develop Watson, first as a tool for training young physicians and then, possibly, as a tool at the bedside itself. James Young, the executive dean of the Cleveland Clinic medical school, told The Plain Dealer, “If we can get Watson to give us information in the health-care arena like we’ve seen with more-general sorts of knowledge information, I think it’s going to be an extraordinary tool for clinicians and a huge advancement.” And WellPoint, the insurance company, has already begun testing Watson as a support tool for nurses who make treatment-approval decisions.

Whether these experiments show real, quantifiable improvements in the quality or efficiency of care remains to be seen. If Watson tells physicians only what they already know, or if they end up ordering many more tests for no good reason, Watson could turn out to be more hindrance than help. But plenty of serious people in the fields of medicine, engineering, and business think Watson will work (IBM says that it could be widely available within a few years). And many of these same people believe that this is only the beginning—that whether or not Watson itself succeeds, it is emblematic of a quantum shift in health care that’s just now getting under way.

When we think of breakthroughs in medicine, we conjure up images of new drugs or new surgeries. When we think of changes to the health-care system, byzantine legislation comes to mind. But according to a growing number of observers, the next big thing to hit medical care will be new ways of accumulating, processing, and applying data—revolutionizing medical care the same way Billy Beane and his minions turned baseball into “moneyball.” Many of the people who think this way—entrepreneurs from Silicon Valley, young researchers from prestigious health systems and universities, and salespeople of every possible variety—spoke at the conference in Las Vegas, proselytizing to the tens of thousands of physicians and administrators in attendance. They say a range of innovations, from new software to new devices, will transform the way all of us interact with the health-care system—making it easier for us to stay healthy and, when we do get sick, making it easier for medical professionals to treat us. They also imagine the transformation reverberating through the rest of the economy, in ways that may be even more revolutionary. 

Health care already represents one-sixth of America’s gross domestic product. And that share is growing, placing an ever-larger strain on paychecks, corporate profits, and government resources. Figuring out how to manage this cost growth—how to meet the aging population’s medical needs without bankrupting the country—has become the central economic-policy challenge of our time. These technology enthusiasts think they can succeed where generations of politicians, business leaders, and medical professionals have failed.

Specifically, they imagine the application of data as a “disruptive” force, upending health care in the same way it has upended almost every other part of the economy—changing not just how medicine is practiced but who is practicing it. In Silicon Valley and other centers of innovation, investors and engineers talk casually about machines’ taking the place of doctors, serving as diagnosticians and even surgeons—doing the same work, with better results, for a lot less money. The idea, they say, is no more fanciful than the notion of self-driving cars, experimental versions of which are already cruising California streets. “A world mostly without doctors (at least average ones) is not only reasonable, but also more likely than not,” wrote Vinod Khosla, a venture capitalist and co-founder of Sun Microsystems, in a 2012 TechCrunch article titled “Do We Need Doctors or Algorithms?” He even put a number on his prediction: someday, he said, computers and robots would replace four out of five physicians in the United States.

Statements like that provoke skepticism, derision, and anger—and not only from hidebound doctors who curse every time they have to turn on a computer. Bijan Salehizadeh, a trained physician and a venture capitalist, responded to reports of Khosla’s premonition and similar predictions with a tweet: “Getting nauseated reading the anti-doctor rantings of the silicon valley tech crowd.” Physicians, after all, do more than process data. They attend at patients’ bedsides and counsel families. They grasp nuance and learn to master uncertainty. For their part, the innovators at IBM make a point of presenting Watson as a tool that can help health-care professionals, rather than replace them. Think Dr. McCoy using his tricorder to diagnose a phaser injury on Star Trek, not the droid fitting Luke Skywalker with a robotic hand in Star Wars. To most experts, that’s a more realistic picture of what medicine will look like, at least for the foreseeable future.

But even if data technology does nothing more than arm health-care professionals with tablet computers that help them make decisions, the effect could still be profound. Harvey Fineberg, the former dean of the Harvard School of Public Health and now the president of the Institute of Medicine, wrote of IT’s rising promise last year in The New England Journal of Medicine, describing a health-care system that might be transformed by artificial intelligence, robotics, bioinformatics, and other advances. Tools like Watson could enhance the abilities of professionals at every level, from highly specialized surgeons to medical assistants. As a result, physicians wouldn’t need to do as much, and each class of professionals beneath them could take on greater responsibility—creating a financially sustainable way to meet the aging population’s growing need for more health care.

As an incidental benefit, job opportunities for people with no graduate degree, and in some cases no four-year-college degree, would grow substantially. For the past few decades, as IT has disrupted other industries, from manufacturing to banking, millions of well-paying middle-class jobs—those easily routinized—have vanished. In health care, this disruption could have the opposite effect. It wouldn’t be merely a win-win, but a win-win-win. It all sounds far too good to be true—except that a growing number of engineers, investors, and physicians insist that it isn’t.

One of these enthusiasts is Daniel Kraft, age 44, whose career trajectory tracks the way medicine itself is evolving. Kraft is a physician with a traditional educational pedigree: an undergraduate degree from Brown and a medical degree from Stanford. He trained in pediatrics and internal medicine at Harvard-affiliated hospitals in Boston. Then he returned to the West Coast, to Stanford University Hospital, to complete fellowships in hematology and oncology.

But Kraft always had a flair for entrepreneurship and a taste for technology: While in medical school, he started his own online bookstore, selling texts to his classmates at a discount. (He later sold the business, for considerable profit.) At Stanford, Kraft says he used his knowledge of social media to develop a better method for communication among doctors, allowing them to exchange pertinent information while making rounds, for instance, rather than simply texting phone numbers for callbacks. “Here we are at Stanford, heart of Silicon Valley, and all we had were basic SMS text pagers—they could only do phone numbers,” Kraft recalls. “So I hacked into a Yahoo Groups thing, so we could send actual text messages through servers. Then it spread to the rest of the hospital.”  

Thus began Kraft’s second, parallel career as an inventor, an entrepreneur, and a professional visionary. He audited classes in bio-design and business, hanging out with computer nerds as much as doctors. Today he holds several patents, including one for the MarrowMiner, a device that allows bone marrow to be harvested faster and less painfully. (Kraft is the chief medical officer for a company that plans to develop it commercially.) Kraft is also the chairman of the medical track at Singularity University, a think tank and educational institution in Silicon Valley. Initially, Kraft’s primary role at Singularity was to offer a few hours of instruction on medicine. But Kraft says he quickly realized that “a lot of people, in gaming, IT, Big Data, devices, virtual reality, psychology—they were all converging on health care, and interested in applying their skills to health care.” That led Singularity to establish FutureMed, an annual conference on medical innovation that brings together financiers, physicians, and engineers from around the world. Kraft is the director.

Exponential improvements in the ability of computers to process more and more data, faster and faster, are part of what has drawn this diverse crew to medicine—a field of such complexity that large parts of it have, until recently, stood outside the reach of advanced information technology. But just as significant, Kraft and his fellow travelers say, is the explosion of data available for these tools to manipulate. The Human Genome Project completed its detailed schematic of human DNA in 2003, and for the past several years, companies have provided personal genetic mapping to people with the means to pay for it. Now the price, once prohibitive, is within reach for most people and insurance plans. Researchers have only just begun figuring out how genes translate into most aspects of health, but they already know a great deal about how certain genetic sequences predispose people to conditions like heart disease and breast cancer. Many experts think we will soon enter an era of “personalized” medicine, in which physicians tailor treatments—not just for cancer, but also for conditions like diabetes and heart disease—to an individual patient’s genetic idiosyncrasies.

A potentially larger—and, in the short run, more consequential—data explosion involves the collection, transmission, and screening of relatively simple medical data on a much more frequent basis, enabling clinicians to make smarter, quicker decisions about their patients. The catalyst is a device most patients already have: the smartphone. Companies are developing, and in some cases already selling, sensors that attach to phones, to collect all sorts of biological data. The companies Withings and iHealth, for example, already offer blood-pressure cuffs that connect to an iPhone; the phone can then send the data to health-care professionals via e‑mail, or in some cases, automatically enter them into online medical records. The Withings device sells for $129; iHealth’s for $99. Other firms sell devices that diabetics can use to measure glucose levels. In the U.K., a consortium has been developing a smartphone app paired with a device that will allow users to test themselves for sexually transmitted diseases. (The test will apparently involve urinating onto a chip attached to the phone.)

AliveCor, a San Francisco–based firm, has developed an app and a thin, unobtrusive smartphone attachment that can take electrocardiogram readings. The FDA approved it for use in the U.S. in December. While the device was still in its trial phase, Eric Topol, the chief academic officer at Scripps Health in San Diego and a well-known technology enthusiast, used a prototype of the device to diagnose an incipient heart attack in a passenger on a transcontinental flight from Washington, D.C., to San Diego. The plane made an emergency landing near Cincinnati and the man survived.

As sensors shrink and improve, they will increasingly allow health to be tracked constantly and discreetly—helping people to get over illnesses faster and more reliably—and in the best of cases, to avoid getting sick in the first place. One group of researchers, based at Emory University and Georgia Tech, developed a prototype for one such device called StealthVest, which—as the name implies—embeds sensors in a vest that people could wear under their regular clothing. The group designed the vest for teenagers with chronic disease (asthma, diabetes, even sickle-cell anemia) because, by their nature, teenagers are less likely to comply with physician instructions about taking readings or medications. But the same technology can work for everyone. For instance, as Sloan-Kettering’s Ari Caroline notes, right now it’s hard for oncologists to get the detailed patient feedback they need in order to serve their patients best. “Think about prostate surgery,” he says. “You really want to check patients’ urinary and sexual function on a regular basis, and you don’t get that when they come in once every three or four months to the clinic—they’ll just say generally ‘good’ or ‘bad.’ The data will only get collected when people are inputting it on a regular basis and it captures their daily lives.”

Read the complete article at The Atlantic online.

   *        *       *       *      *      *       *      *       *       *

Related Health Care Technology News:

How Do People Respond to Being Touched by a Robotic Nurse?

From Science Daily (Original Source: Georgia Institute of Technology, http://www.gatech.edu/)

Mar. 10, 2011 — For people, being touched can initiate many different reactions, from comfort to discomfort, from intimacy to aggression. But how might people react if they were touched by a robot? Would they recoil, or would they take it in stride? In an initial study, researchers at the Georgia Institute of Technology found that people generally had a positive response to being touched by a robotic nurse, but that their perception of the robot's intent made a significant difference.

The research is being presented March 9 at the Human-Robot Interaction conference in Lausanne, Switzerland.

"What we found was that how people perceived the intent of the robot was really important to how they responded. So, even though the robot touched people in the same way, if people thought the robot was doing that to clean them, versus doing that to comfort them, it made a significant difference in the way they responded and whether they found that contact favorable or not," said Charlie Kemp, assistant professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University.

In the study, researchers looked at how people responded when a robotic nurse, known as Cody, touched and wiped a person's forearm. Although Cody touched the subjects in exactly the same way, they reacted more positively when they believed Cody intended to clean their arm versus when they believed Cody intended to comfort them.

These results echo those of similar studies involving human nurses.

"There have been studies of nurses and they've looked at how people respond to physical contact with nurses," said Kemp, who is also an adjunct professor in Georgia Tech's College of Computing. "And they found that, in general, if people interpreted the touch of the nurse as being instrumental, as being important to the task, then people were OK with it. But if people interpreted the touch as being to provide comfort … people were not so comfortable with that."

In addition, Kemp and his research team tested whether people responded more favorably when the robot verbally indicated that it was about to touch them versus touching them without saying anything.

"The results suggest that people preferred when the robot did not actually give them the warning," said Tiffany Chen, doctoral student at Georgia Tech. "We think this might be because they were startled when the robot started speaking, but the results are generally inconclusive."

Since many useful tasks require that a robot touch a person, the team believes that future research should investigate ways to make robot touch more acceptable to people, especially in healthcare. Many important healthcare tasks, such as wound dressing and assisting with hygiene, would require a robotic nurse to touch the patient's body.

"If we want robots to be successful in healthcare, we're going to need to think about how do we make those robots communicate their intention and how do people interpret the intentions of the robot," added Kemp. "And I think people haven't been as focused on that until now. Primarily people have been focused on how can we make the robot safe, how can we make it do its task effectively. But that's not going to be enough if we actually want these robots out there helping people in the real world."

In addition to Kemp and Chen, the research group consists of Andrea Thomaz, assistant professor in Georgia Tech's College of Computing, and postdoctoral fellow Chih-Hung Aaron King.

Read the complete article at ScienceDaily online.