
Philadelphia Inquirer

Posted on Mon, Jul. 18, 2011

Answers from the crowd

By Gregory Thomas

Inquirer Staff Writer

Will Dampier had a hunch last year that someone outside academia could find a more accurate way to predict how a person suffering from HIV would respond to antiretroviral drugs. He was right.

Dampier, a Ph.D. student in bioinformatics and judo instructor at Drexel University, is on the cutting edge of an emerging trend that uses crowd-sourcing – inviting the wisdom of the crowd – to help improve health care. In spring 2010, he organized a public competition, using genetic material from 1,000 people with HIV who had never received drug treatment to create a model that would predict how each person would respond to medication.

Academics who studied the same data were able to predict patients’ responses to therapy with 70 percent accuracy, Dampier says. Of the 109 individuals and teams who entered, it was a college dropout from Baltimore, Internet marketing whiz Chris Raimondi, 39, who topped the field, writing an algorithm that predicted outcomes with 78 percent accuracy.

Raimondi’s algorithm was best at linking specific mutations in HIV to how well a patient will respond to drug therapy.

“The approach of having a public competition is completely innovative,” says Richard Harrigan, an associate professor at the University of British Columbia who studies HIV drug efficacy and participated in the contest. “A number of academic approaches use some of the same data sets to address the question, so, in theory, this could be useful.”

However, Harrigan says, because Dampier’s data set drew on dated patient information, some from as far back as the 1980s, the contest results aren’t clinically useful for those patients, some of whom may have died. Still, such methods could eventually help doctors personalize patient therapy.

The point of the exercise wasn’t to find a cure for HIV, Dampier says. It was a statement about how a more inclusive approach would help solve health-care problems.

“I’m hoping the people who come into these competitions are people who have no biases like [those of academics] because they’ll have the most interesting view of the data,” says Dampier, assistant director at Drexel University’s Center for Integrated Bioinformatics. “In judo, we call it having a ‘beginner’s mind.’ ”

Crowd-sourcing could take the quest for solutions “in a novel direction,” says Robert Gross, an HIV expert at the University of Pennsylvania. “It’s very clever.”

Buoyed by the results of the HIV contest, Dampier is helping judge a second, broader contest that incorporates millions of data points showing the effects of prescription drugs, alternative medications, and lifestyle changes on more than 500 chronic ailments, ranging from eating disorders to Alzheimer’s disease. Initiated last month, the competition is hosted by CureTogether, a health-care data aggregator in Silicon Valley. The objective is to uncover effective treatments by cross-checking CureTogether’s data, from about 11,000 people, against patient-information studies.

Dampier views the contests as a step toward personalized health care, one that leans more on data sets than expert opinions. The idea is that if health issues can be predicted, they can be prevented, reducing pain along with health costs and hospital visits.

“It’s taking a health problem and turning it into a statistics problem,” he says.

The competition, which ends in October, calls for teams to pitch health-related hypotheses they’d like to test. CureTogether has gotten about a dozen submissions so far that propose looking critically at attention deficit hyperactivity disorder and the relationship between depression and exercise, CureTogether cofounder Alexandra Carmichael wrote in an e-mail.

Crowd-sourcing is being used around the world as a tool for improving business models, tracking weather, gathering news reports and, recently, gauging radioactivity in Japan. Some corporations offer cash incentives to outsiders who can sharpen their operations. Among the most high-profile examples was a competition the online DVD-rental company Netflix hosted in 2006, offering $1 million to the team that produced the most effective algorithm for predicting customer movie preferences. Another company is now hosting a contest to enhance its music recommendation algorithm.

Applying the same approach to health care is gaining some traction on the Web, thanks in part to Dampier.

“I never really thought about [the concept] the way Will did,” said Anthony Goldbloom, founder of Kaggle, the Australia-based Web company that organized the HIV competition. Launched in 2009, Kaggle facilitates data-mining contests that aim to predict everything from World Cup winners to the shapes of galaxies. “The result he got in the HIV competition was the first time we knew we were onto something serious.”

Case in point: Kaggle is now facilitating a $3.2 million contest – the site’s biggest prize yet – hosted by Heritage Provider Network, a cohort of medical groups and physicians in California. It challenges participants to predict with at least 85 percent accuracy who from a pool of 100,000 chronically ill people will visit the hospital in the future. So far the contest has drawn about 650 “players” and more than 3,200 submissions. Players can resubmit entries any number of times before the deadline, which lets them check their work against others’ and jockey for position.

Such contests make a point of masking the identity of patients – reducing them to a cluster of data points – for safety and privacy. Participants in the Heritage contest, for example, get the sex, age range, and medical history of patients, but no names, zip codes, or other personal information. They also must sign confidentiality agreements.

Releasing too much personal information can be a liability. In 2009, an anonymous Netflix user sued the company after two University of Texas researchers identified customers by matching their Netflix data with film ratings on the Internet Movie Database (IMDb). Heritage hired the winner of the Netflix contest to test the health-care data for weaknesses, says Jonathan Gluck, a Heritage senior executive.

Dampier says he prefers not knowing the identities of the people he’s studying.

As a side project for CureTogether, Dampier is modeling the effectiveness of about 100 treatments for migraines, a condition he grappled with as a teenager.

The great thing about CureTogether’s design, he says, is that “it gives you a way to test, say, wearing crystals on your hat to prevent migraines.”

On the other hand, he admits, accurately analyzing the data poses a challenge: People quantify pain and describe symptoms differently.

Still, the goal is not in dispute. “The longer someone is given a treatment that’s not helpful, the more damage they experience,” he says. “The better you choose the best one first, the longer their life expectancy.”
