


Crowdsourcing Medical Expertise Part Two: Reader Comments

September 3, 2008

Every once in a great while I find the comments on a post so thoughtful and intriguing that I decide to “upstream” them into their own blog post, both to draw more attention to what I deem valuable points and to provoke more debate…

Daniel Reda and Alexandra Carmichael, the co-founders of the very promising CureTogether.com, both posted useful comments. Here’s Daniel:

What would happen if you crowdsourced interpretation or even diagnosis? Well, the consensus interpretation of 100 amateurs on your MRI would probably not be at all helpful. What you’d want is a method to select the best interpretations and have them bubble up to the top. How do you select the best interpretations? One way is to keep historical data on how accurate those predictions were once more data became available. Ideally we’d gather data on doctors’ performance as well. It’s not about credentials – it’s about accuracy. If the doctors don’t want to participate, then their judgments will look progressively weaker compared with those of a supposed amateur who was proven to be correct 99% of the time on thousands of MRI interpretations.

“It’s not about credentials – it’s about accuracy.” That’s as concise a definition of crowdsourcing as it gets. Here’s hoping we can keep this conversation going.
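Daniel’s mechanism is concrete enough to sketch: keep historical data on how each reader’s past interpretations fared once ground truth arrived, then rank new interpretations by their authors’ track records. Below is a minimal, hypothetical Python illustration (the class and all names are my own invention, not anything from CureTogether or the original post); it uses a lightly smoothed accuracy score so a single lucky call doesn’t outrank a long, proven record.

```python
from collections import defaultdict

class AccuracyTracker:
    """Hypothetical sketch: track each reader's historical accuracy
    and rank new interpretations by their authors' track records."""

    def __init__(self):
        self.correct = defaultdict(int)
        self.total = defaultdict(int)

    def record_outcome(self, reader, was_correct):
        # Once follow-up data confirms or refutes an interpretation,
        # fold the result into the reader's running record.
        self.total[reader] += 1
        if was_correct:
            self.correct[reader] += 1

    def accuracy(self, reader):
        # Laplace smoothing: one lucky call does not outrank
        # a long, proven record.
        return (self.correct[reader] + 1) / (self.total[reader] + 2)

    def rank(self, interpretations):
        # interpretations: list of (reader, reading) pairs.
        # The best-supported readings bubble to the top.
        return sorted(interpretations,
                      key=lambda item: self.accuracy(item[0]),
                      reverse=True)

tracker = AccuracyTracker()
# An "amateur" proven right on ~99% of a thousand past calls...
for _ in range(990):
    tracker.record_outcome("amateur_99", True)
for _ in range(10):
    tracker.record_outcome("amateur_99", False)
# ...versus a credentialed reader with a single confirmed call.
tracker.record_outcome("new_radiologist", True)

ranked = tracker.rank([("new_radiologist", "possible lesion"),
                       ("amateur_99", "normal variant")])
print(ranked[0])  # amateur_99's reading ranks first
```

In this toy run the amateur with a long, mostly correct record ranks above the unproven credentialed reader, which is exactly the accuracy-over-credentials behavior Daniel describes.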

Original post at http://crowdsourcing.typepad.com/cs/2008/09/crowdsourcing-m.html