Can blockchain save journalism? The Magic 8-Ball’s best answer thus far appears to be “Outlook not so good.” Cryptocurrency hype has receded; an Ethereum token, which would have cost you $1,438 two years ago, can now be had for $166. Changing currency doesn’t seem to change anything fundamental about the news industry’s struggles.
But if it can’t save journalism, can blockchain still be helpful to journalism, in defined, targeted ways? That was the question raised last July by the News Provenance Project — a project of The New York Times’ R&D team in collaboration with IBM Garage.
Its goal was more straightforward: Can blockchain make it easier for news consumers to understand where the photos they see online came from? There’s a need for it: Pew reported last June that 46 percent of Americans say they find it difficult to recognize when images are false or have been doctored — and that was before Peak Deepfake.
In the months since then, staffers have been doing user research and building prototypes of such a tool — taking advantage of blockchain’s ability to store data immutably and to track its usage over time. Today, the News Provenance Project is releasing some of its initial findings.
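The “store data immutably and track its usage over time” property doesn’t require any distributed machinery to illustrate. Here is a minimal sketch in Python of a hash-chained, append-only ledger, the core data structure underlying blockchains; the class, field names, and metadata values are hypothetical, and a real system would add distributed consensus on top of the same idea:

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Deterministic SHA-256 hash of a ledger entry."""
    payload = json.dumps(entry, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

class ProvenanceLedger:
    """Append-only log where each entry commits to the one before it."""

    def __init__(self):
        self.entries = []

    def append(self, metadata: dict) -> dict:
        prev = entry_hash(self.entries[-1]) if self.entries else "0" * 64
        entry = {"metadata": metadata, "prev_hash": prev}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; editing any earlier entry breaks it."""
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev:
                return False
            prev = entry_hash(entry)
        return True

# Hypothetical photo lifecycle: capture, then republication.
ledger = ProvenanceLedger()
ledger.append({"photographer": "J. Doe", "date_taken": "2012-11-08",
               "location": "Anytown, NY", "event": "Residential fire"})
ledger.append({"published_by": "Example Times",
               "caption": "Homes burn in Anytown on Nov. 8, 2012."})
assert ledger.verify()

# Tampering with the earlier record is now detectable, because the
# second entry's prev_hash no longer matches the altered first entry.
ledger.entries[0]["metadata"]["date_taken"] = "2019-11-08"
assert not ledger.verify()
```

This is why “it was taken seven years ago” is a claim a platform can check rather than take on faith: any later edit to the original capture record invalidates every entry chained after it.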
To sum up: They learned a lot of interesting things about how news consumers evaluate the images they see online — and how different people value different contextual clues when determining whether or not a photo seems real and trustworthy. They made a proof-of-concept using blockchain to store more metadata for images. But they determined that a lot of things would have to change structurally about how photos work online for any solution to be widespread.
“It’s less about it being a survey about where are people more susceptible to misinformation and more about people who are more likely to be fooled by misinformation,” said Marc Lavallee, executive director of The Times’ R&D team. “What we wanted to focus on is: In a situation where if you are susceptible to that, what are the best safeguards to help head that off at the pass?”
Critical information that typically goes into a photo caption, such as the time, date, location, and accurate identification of the people and events shown, doesn’t travel with a photo when it’s posted to social media, where it can be reposted with egregious inaccuracies.
News organizations have this information. With a bit of work and some thoughtful designs, we could use it to help platforms and, more importantly, news consumers avoid inaccurate uses.
The Times conducted in-depth interviews (“qualitative user studies”) with 34 news consumers in three rounds to both understand how people currently process the news images they see and what information would help them gauge a photo’s validity.
- Round 1: Interviews with 15 people about the current state of news and misinformation and their “mental models” of trust in news photos.
- Round 2: Prototype testing with seven people and multiple prototype designs.
- Round 3: Prototype testing with 12 people and one design.
Interviewees from Round 1 fell pretty evenly into one of four categories; see if you can picture them in your head:
Distrustful news skeptic (low trust, high awareness): Seeking to call out bias in mainstream media, a person in this category may use motivated reasoning to find any evidence that confirms their belief that the media is pushing a particular agenda. Building trust in specific news outlets is difficult in these cases: it may be part of a person’s identity to be skeptical of mainstream media and hyper-alert to perceived cues of bias. Importantly, though, this skepticism applied more to the editorial framing of a story than to an outright denial of facts, such as when and where a picture was taken.
Confident digital news subscriber (high trust, high awareness): A person in this category is digitally savvy and is comfortable distinguishing between true and false news when provided information from news outlets they trust. They want to avoid appearing uninformed or misinformed about news issues.
Media-jaded localist (low trust, low awareness): This person may feel marginalized by mainstream media and uncritically accept hot takes from unofficial accounts as truths. They want news that feels local and authentic, but they don’t want to be misled by false information intended to deceive. Additionally, they need clearer cues to identify false and misleading content from unofficial accounts that they trust in good faith.
Late-adopter media traditionalist (high trust, low awareness): A person in this category may be more comfortable learning about news through older mediums such as television or newspapers, but less comfortable making sense of news online within the noise of social media. On this front, people need more education on misinformation and disinformation tactics, as well as clear cues to more readily distinguish credible content from media sources they already trust.
“What we saw was a tendency to accept almost all images at first glance, regardless of subject area,” Emily Saltz, the project’s UX lead, said. “People were more discerning the more they focused on a post…If someone was at all skeptical of a photo post and rated it as less credible, it tended to be a reaction to the perceived slant of a caption or headline, rather than a belief that it was made up or edited.”
Researchers identified two groups they felt would be most helped by better image provenance information: “those who trust the media, but lack the baseline digital literacy to reliably assess the credibility of posts, and those who are already confident in their abilities to distinguish credible news photography, but would benefit from even more context to factor into their understanding.”
They tested a number of variations on the kinds of context that could be provided with a photo. Among the things they found:
- A checkmark — like the blue one Twitter uses for verified accounts — wasn’t enough to instill confidence in an image’s credibility.
- Users preferred the term “sourced” (which let them follow up on the underlying information) to “verified” (which relied on the endorsement of other people).
- Multiple images of an event did more to convince a news consumer that the event pictured had taken place than a single photo’s edit history did.
- Users wanted to see the process and to know that there would be oversight and accountability for misinformation.
- Publishing incorrect information with a provenance signal — as might happen with something posted quickly during breaking news — can lead users to distrust that signal afterward.
After the interviews, the team went about building a “proof of concept” with IBM — it looks a lot like a Facebook News Feed or just about any other social platform. (You can see it here.) Here’s an example of how a photo might appear in the feed:
Susie has posted a photo of a horrific fire and said the world was watching it “burn in real time.” But the photo’s provenance information shows not only its caption and location, but also that it was taken seven years ago. More DVR than real time.
If you clicked on that “More,” this is what you’d see:
More context: an extended caption, credits for the photographer and the news organization, and other photos of the same event. Click on “History of This Photo” and you get its life story, recorded via blockchain:
After showing the feed to users, R&D found that, while provenance does help increase audience confidence, people were more interested in having information and resources related to the news photo: “The people we spoke to were less interested in seeing the history of how publishers used a photo and more interested in having access to other headlines, captions, summaries and links about the event depicted in the photo. This tied back to our earlier findings that interest, more than truth-seeking, drives user behavior on social platforms. People want more context because they are interested in a story, not because they are trying to prove whether a photo is real.”
“We live in a world where the most well-intentioned things — like raising awareness about climate change, or even just that particular natural disaster — can be fueled by unintentional, well-meaning misinformation. What does that mean when people actually try to do this on purpose?” Lavallee said. “We started looking at the technology angle of it and very quickly from there what we realized is: We can build a perfect technology solution, but if it doesn’t actually matter to people, then it’s not going to work.”
Lavallee emphasized that these findings are just the beginning of understanding how journalists and news organizations can start addressing visual misinformation. But effective solutions would require large-scale cooperation and commitment across several industries. He said the News Provenance Project is also working on an adjacent project, the Content Authenticity Initiative, with Adobe and Twitter. One focus of that project is making sure the tools journalists use to capture, edit, publish, and share photos carry an editing history trail.
In a post about these findings, project lead Sasha Koren lays out the players who would be key to a systemic solution:
- Camera makers to help photographers ensure time, date and location settings in cameras are exact.
- Every news publisher to modify their management processes for photo metadata so that they adhere to a common set of standards, such as those maintained by the International Press Telecommunications Council (IPTC).
- All platforms such as Google, Facebook, Twitter and Apple, as well as chat apps like WhatsApp and Signal, to ensure the consistent display of this information.
That’s a big ask. As Koren acknowledges, within “this big and complex endeavor, a proof of concept is a small drop in a very big, complicated bucket.”
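The publisher-side step, normalizing photo metadata against a shared standard, is the most mechanical piece of that bucket. A minimal sketch in Python of a pre-publication check: the field names loosely follow IPTC property names, but this particular “required” set is an illustrative assumption, not the standard itself.

```python
# Illustrative set of required fields, loosely modeled on IPTC Photo
# Metadata Standard properties. A real newsroom policy would define
# its own list against the actual standard.
REQUIRED_IPTC_FIELDS = {
    "date_created",      # when the photo was taken
    "location_created",  # where it was taken
    "creator",           # the photographer
    "credit_line",       # the organization publishing it
    "description",       # the caption
}

def missing_fields(metadata: dict) -> set:
    """Return required fields that are absent or empty in metadata."""
    return {f for f in REQUIRED_IPTC_FIELDS if not metadata.get(f)}

# Hypothetical photo record missing its location and credit line.
photo = {
    "date_created": "2012-11-08",
    "creator": "J. Doe",
    "description": "Homes burn in Anytown on Nov. 8, 2012.",
}
print(sorted(missing_fields(photo)))
# → ['credit_line', 'location_created']
```

A publishing workflow could block a photo until this set is empty, which is exactly the kind of small, unglamorous consistency work a cross-industry provenance system would depend on.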
What’s next for the News Provenance Project? The Times says its next phase “will shift from exploration to execution to show how an end-to-end solution can help users share trusted news with confidence. The team will explore other potential solutions in emerging technology and work across newsrooms and industry to imagine what a better internet might look like.”
Whatever progress can be made, it will be essential to make it workable not only for the big guys — the Timeses, Posts, and Journals of the world — but for local outlets and small publishers too.
“We do not succeed if we end up creating a two-tier information ecosystem where only large publishers have these added veracity signals, implicitly casting more scrutiny on smaller outlets,” Lavallee said. “From a technical perspective, the solution needs to be as easy to install and use as something like WordPress. From a workflow and best practices perspective, it needs to be as easy and common sense as writing web-friendly headlines for articles.
“The work ahead is making sure that metadata is captured in a consistent way, and made available to other organizations, like social platforms, to use it responsibly.”