Sometimes people ask me why I work in the pharma industry. When they ask, they often have images of infamous price hikes in mind (and sometimes they’re genuinely angry). Those people don’t understand what I do at all. I help biopharma brands connect with HCPs so that those HCPs are empowered to make the best possible treatment decision for every patient in their practice.
I’ve written hundreds of scripts for medical explainer videos – the kind that are delivered by HCPs for HCPs. I can’t even count the number of times the opening of those scripts has included the phrase “double-blind, placebo-controlled, multicenter clinical trial…” Frankly, by the time information gets to me, it’s already been through such a rigorous process that I’m really just checking the box by including those qualifiers. I include them right next to the medical content so that there can be no confusion about the source of the information.
Everyone I know likes to make little jokes about “Dr. Google.” I’ll admit that last year I frantically made an appointment with a dermatologist because I had Google-convinced myself that a slightly discolored, irregular brown mark on the back of my hand was melanoma (spoiler alert – it was an age spot). But the truth is that as the quality of medical information we can find on Google has improved, it has ushered in a new era – that of the informed patient.
But what about Dr. Facebook? Or (even scarier) Dr. Pinterest – a world where putting an onion in your socks is a proven way to purify your blood and frozen lemons will kill 12 types of cancer. Have those claims been proven in a randomized, double-blind, placebo-controlled, multicenter clinical trial? Of course not. They were proven in the minds of users because they had pretty thumbnail images and a cute brush script font. I’ve reported both to the social media sites that allowed them to propagate.
But here’s the problem – when I go to report this content on Facebook, the options are nudity, unauthorized sales, harassment, hate speech, spam, false news, violence, suicide or self-injury, and “something else.” I often debate with myself whether misleading medical information counts as false news, but typically I land in the “something else” bucket.
This worries me because I know that medical content is special. It affects the lives of our friends, parents, children – everyone we know. In February 2018, Monika Bickert, Facebook’s Head of Global Policy Management, explained that “Content reviewers tend to be hired for their language expertise, and they don’t tend to come with any predetermined subject-matter expertise. Mostly they are hired, they come in, and they learn all of the Facebook policies, and then over time, they develop an expertise in one area.” It has been reported that those content reviewers receive two weeks of training. This means that people with no medical or biological training are making decisions about whether medical information is harmful.
Misleading medical information needs its own button. One that ensures a qualified person is given adequate time to check whether references are provided, whether the content is presented in an appropriate way, and whether it does harm to the people who see, like, and share it.
Experts agree: the number of patients who come in with socially sourced, misleading medical information is on the rise. In 2017, Dr. Megha Sharma, alarmed that patients misunderstood the cause of the Zika virus, led a study analyzing Facebook as a social media health information platform. The study found that “the misleading posts were far more popular than the posts dispersing accurate, relevant public health information about the disease.”
Think about it, Mark. The health of real people is riding on those catchy thumbnails and the cute brush script. Don’t let your site trick them into thinking that they don’t need to see a doctor because they put onions in their socks.