YouTube tests new ‘crowdsourced’ fact-checking feature on videos

YouTube videos are getting the Community Notes treatment.

The platform is experimenting with a community-driven fact-checking feature, announced June 17 and designed to provide “relevant, timely, and easy-to-understand” context to YouTube videos, an apt endeavor as misinformation (and disinformation) proliferates online.

“This could include notes that clarify when a song is meant to be a parody, point out when a new version of a product being reviewed is available, or let viewers know when older footage is mistakenly portrayed as a current event,” YouTube explained in a blog post.

Early versions of notes will be written by users YouTube determines to be in “good standing,” then rated on their helpfulness by third-party evaluators. This feedback, the platform explains, will help train an in-house, bridging-based algorithm that will screen notes in the future.
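
YouTube hasn't published details of that bridging-based algorithm, but the general idea behind bridging (the approach popularized by X's Community Notes) is that a note only surfaces when raters who usually disagree with one another both find it helpful. Here is a minimal, purely illustrative Python sketch; the rater clusters, sample ratings, and scoring rule are assumptions for demonstration, not YouTube's implementation.

    # Illustrative sketch of bridging-based note ranking: NOT YouTube's
    # actual algorithm, which the company hasn't published. The core idea
    # is that a note scores highly only if raters from opposing viewpoint
    # clusters both rate it helpful.
    from collections import defaultdict

    # Hypothetical ratings: (rater_id, note_id, helpful 1/0)
    RATINGS = [
        ("rater_a", "note_1", 1), ("rater_b", "note_1", 1),
        ("rater_a", "note_2", 1), ("rater_b", "note_2", 0),
    ]

    # Hypothetical viewpoint clusters. A real system would infer these from
    # rating history (e.g., via matrix factorization), not hard-code them.
    CLUSTERS = {"rater_a": 0, "rater_b": 1}

    def bridging_score(note_id):
        """Score a note by its worst average helpfulness across clusters,
        so only notes that both clusters find helpful score highly."""
        per_cluster = defaultdict(list)
        for rater, note, helpful in RATINGS:
            if note == note_id:
                per_cluster[CLUSTERS[rater]].append(helpful)
        if len(per_cluster) < 2:  # not yet rated by both clusters
            return 0.0
        return min(sum(v) / len(v) for v in per_cluster.values())

    for note in ("note_1", "note_2"):
        print(note, bridging_score(note))  # note_1 -> 1.0, note_2 -> 0.0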

Viewers will also be asked to rate notes on a helpfulness scale and to explain their rating: “For example, whether it cites high-quality sources or is written clearly and neutrally.”

[Image: A mobile screenshot of a YouTube video with a blue box under the title, labeled “Viewers added a note.” Credit: YouTube]

X’s own Community Notes feature offers an example of how (mostly how not) to approach a community-led fact-checking system. While X owner Elon Musk has alternated between bolstering the feature and waging war on it, a Mashable investigation found that few of the platform’s users actually see approved Community Notes addressing misinformation. “Many times, misinformation on X spreads without any Community Note. Or in another common scenario, a Community Note is approved, but then later removed from the post,” reported Matt Binder. Even when a note is approved and stays attached to a post, Binder wrote, “the falsehood in the post is often viewed around 5 to 10 times more than the fact-check.”

Speaking to Poynter about the efficacy of Community Notes at curbing misinformation, former Twitter head of trust and safety Yoel Roth said there were “some areas where it’s successful,” but added that he saw “many other areas where it is not a robust solution for harm mitigation.” MediaWise director Alex Mahadevan was quoted calling the user rating system “mostly a failure,” and the feature often serves as a sitewide vessel for memes.

Still, YouTube is taking a stab at a similar real-time feature, among other efforts to create a more transparent platform. The company has previously rolled out a variety of topic-specific information panels and now requires creators to disclose the use of generative AI when it’s applied to alterations of real people, real events, or otherwise “realistic”-looking scenes.

The notes pilot will be available only in English and to select Creator Studio users in the U.S. during early tests. According to the platform: “We anticipate that there will be mistakes – notes that aren’t a great match for the video, or potentially incorrect information – and that’s part of how we’ll learn from the experiment.”
