If I had boots, I’d be shaking in them.
In a month I’m headed to Boston to be part of the Health 2.0 conference. I’m presenting CureTogether as part of the “Wiki vs. Expert” panel debate. I haven’t done much speaking in the last 6 years, with small children to care for. But it’s time to jump back in now.
I feel like there are millions of people in pain depending on me, wanting me to be out there trying to do whatever I can to help them. If I were in pain, I’d want someone out there trying to help me.
It’s an interesting question though. Do you trust fellow patients for health information? Or experienced doctors? Or some combination of the two? Here’s the description of the panel debate from the Health 2.0 agenda:
“User-generated content, especially when created mostly by those with little formal training, has been regarded with deep suspicion by many. But now the concept is taking hold that the crowd itself has wisdom beyond that of its members, and that wisdom might exceed that of any expert or experts.
Two leaders with extremely strong viewpoints (Denise Basow and Dan Hoch) will debate whether user-generated content, crowdsourcing and other ways of surfacing content are appropriate for creating and verifying health information. And what does this mean for the traditional roles of clinical trials and evidence-based medicine?
This one will be a real barn-burner, we promise you!
Demonstrations from: CureTogether & Healthwise.”
I asked my network of 1,200 or so Twitter and Facebook friends what they thought, and some interesting replies and discussions emerged from patients, doctors, social media advocates, and regular folk:
Issue #1: Who is an expert?
ePatientDave, a patient advocate with stage IV, grade 4 renal cell carcinoma, brought up this point, and other friends echoed it. I have certainly walked into doctors’ offices knowing more about treatment options, possible side effects, and diagnostic criteria than the doctor did.
So patients can certainly be experts. And experts can be patients. But they are overlapping circles – not all patients are experts, and not all experts are patients. Defining an expert as someone with a medical degree therefore seems to have inherent limitations.
A heated debate on this topic was sparked recently at ePatients.net: “Medpedia: Who gets to say what info is reliable?” Medpedia is like Wikipedia, but only doctors can edit the content. The blog post elicited 53 rebellious comments. Patients want to have a voice too.
Issue #2: What kind of patients?
Dr. Julio Bonis Sanz, an MD-PhD in Spain, raised this question. He differentiates between chronic and acute patients. In his view, chronic patients are more likely to be experts in their disease, and their knowledge can be a valuable complement to medical expertise. Acute patients are likely to have less knowledge about their disease, so the doctor’s experience of having seen many similar cases becomes more important.
This matches my experience as a patient. For my chronic conditions I have done extensive research on my own so I bring suggestions and ideas to the doctor; for acute incidents I tend to rely more on the doctor’s experience.
Issue #3: Does expert editing overlook valuable outlier data?
At CureTogether we don’t edit or moderate the crowdsourced data that members submit for their conditions, except for obvious errors and duplications. Here’s my worry with expert moderation of user data — I think experts would be likely to censor out “non-standard” outlier data that could yield valuable clues for asking new research questions. Yes, keeping a high signal-to-noise ratio is important, but eliminating all the noise can also be problematic.
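To make that concrete, here is a minimal sketch of what “flag it, don’t delete it” could look like. This is not CureTogether’s actual pipeline; the field names and the z-score cutoff are hypothetical, just an illustration of tagging unusual reports for review rather than removing them:

```python
# A hypothetical sketch of outlier flagging, not CureTogether's real code.
from statistics import mean, stdev

def flag_outliers(reports, threshold=1.5):
    """Tag each report as an outlier or not, keeping every data point."""
    values = [r["severity"] for r in reports]
    mu, sigma = mean(values), stdev(values)
    for r in reports:
        z = (r["severity"] - mu) / sigma if sigma else 0.0
        r["outlier"] = abs(z) > threshold  # flagged for review, never removed
    return reports

# Example: one unusually severe report gets flagged but stays in the data.
reports = [{"severity": s} for s in (2, 3, 3, 4, 2, 9)]
for report in flag_outliers(reports):
    print(report)
```

Everything stays in the dataset; the flag just tells researchers where the interesting new questions might be hiding.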
Glenn Raines, a social media analyst based in Chicago, contributed eloquently to this discussion. He suggests that how data is collected and reported is an art: showing patients how to turn data insights into strategies and tactics, rather than handing them a data dump, will be important. Statistics are a critical part of this as well. The power to analyze and find patterns in huge amounts of data, including outlier data, is incredibly valuable. But deciding which statistical methods are relevant and meaningful seems like an art in itself. Perhaps we need to define a new field: artistic statistics?
I think I’d most like to see an open collaboration platform where users and experts can both participate, without editing or censoring each other. Color-coding or ranking comments by reliability, based on community reputation points, is one idea. And full transparency of the commenter’s identity is a must. This would provide a good balance – including everyone but minimizing the noise.
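As a rough illustration of that idea, here is a sketch of reputation-based reliability bands. The comment fields, reputation scale, and color cutoffs are all hypothetical, not any real platform’s API:

```python
# A hypothetical sketch of ranking comments by community reputation.
from dataclasses import dataclass

@dataclass
class Comment:
    author: str        # full transparency: a real identity is attached
    text: str
    reputation: float  # community-earned points, e.g. 0-100

def reliability_band(rep):
    """Map a reputation score to a display color band."""
    if rep >= 75:
        return "green"   # highly reliable
    if rep >= 40:
        return "yellow"  # moderately reliable
    return "gray"        # new or unproven voice

comments = [
    Comment("dr_example", "The evidence suggests...", 92.0),
    Comment("patient_example", "In my experience...", 55.0),
]

# Rank by reputation, but display everyone: no comment is censored.
for c in sorted(comments, key=lambda c: c.reputation, reverse=True):
    print(f"[{reliability_band(c.reputation)}] {c.author}: {c.text}")
```

The design point I care about is that reputation changes how a comment is ordered and displayed, not whether it appears at all.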
So who do we trust, users or experts? The answer: it depends. But a collaboration of some kind seems win-win.
Now, to take some deep breaths, remember all the people counting on me, and get myself to Boston next month. I think the debate will be recorded. I’ll post it afterwards and you can see how I did.