
AI vs. Authenticity: The Growing Need for Provenance in Digital Content

The rise of AI-generated content has transformed digital authenticity, making it increasingly difficult to tell real media from fake. Incidents like the "Balenciaga Pope" and the fake Pentagon explosion images have brought this trend to light and shown how much damage AI-generated images can do when they are mistaken for real ones. Deepfakes surged 400% in 2024 and now account for 7% of all fraud cases, including impersonation and social engineering attacks.

How Businesses Are Responding to the Rise of AI Content

In response to these problems, major tech companies are working on ways to strengthen the authenticity and provenance of media. At its annual Build conference, Microsoft announced that its Bing Image Creator and Designer tools would gain new media provenance features.

Users will be able to verify whether images or videos were generated by AI: cryptographic methods embed signed metadata describing where the content came from and how it was created.

For this system to work, however, different platforms need to adopt the Coalition for Content Provenance and Authenticity (C2PA) specification.
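
To make the mechanism concrete, here is a minimal Python sketch of signed provenance, assuming the `cryptography` package is installed: the creator signs a hash of the media together with a small manifest, and anyone holding the public key can later verify both the signature and that the bytes were not altered. The function names and manifest fields are illustrative only; the real C2PA specification uses X.509 certificate chains and standardised embedded manifests.

```python
# Minimal sketch of signed provenance, loosely inspired by C2PA. Requires the
# `cryptography` package; manifest fields and function names are illustrative,
# not the real specification.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_manifest(media: bytes, generator: str, key: Ed25519PrivateKey):
    """Bind a provenance claim ('who made this') to the exact media bytes."""
    manifest = {
        "generator": generator,  # e.g. "Bing Image Creator" (illustrative)
        "sha256": hashlib.sha256(media).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return manifest, key.sign(payload)

def verify_manifest(media: bytes, manifest: dict, signature: bytes, public_key) -> bool:
    """Accept only if the media is unmodified and the signature checks out."""
    if hashlib.sha256(media).hexdigest() != manifest["sha256"]:
        return False  # the content was altered after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

key = Ed25519PrivateKey.generate()
media = b"...image bytes..."
manifest, sig = sign_manifest(media, "Bing Image Creator", key)
print(verify_manifest(media, manifest, sig, key.public_key()))         # True
print(verify_manifest(media + b"!", manifest, sig, key.public_key()))  # False
```

This also shows why cross-platform agreement matters: verification only works if the platform displaying the content knows how to read the manifest and trusts the signer's key.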

Likewise, Meta has released Meta Video Seal, a tool that adds invisible watermarks to AI-generated video clips.

This open-source tool is designed to integrate seamlessly with existing software, making AI-generated content easier to identify.

Video Seal promises to resist common edits such as blurring and cropping, unlike older watermarking technologies that struggled with video compression and manipulation.
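
Video Seal's learned neural watermark is too involved for a short excerpt, so the following is only a toy sketch of what "invisible watermarking" means in principle: hiding a bit pattern in the least significant bits of pixel values. Notably, a naive scheme like this is destroyed by compression or blurring, which is exactly the weakness Video Seal's robustness claims address.

```python
# Toy illustration of an invisible watermark: hide a bit pattern in the least
# significant bits (LSBs) of pixel values. This naive scheme breaks under
# compression or blurring; Video Seal uses a learned, far more robust mark.
import numpy as np

def embed(frame: np.ndarray, bits: list[int]) -> np.ndarray:
    """Write one payload bit into the LSB of each of the first len(bits) pixels."""
    marked = frame.copy()
    flat = marked.reshape(-1)
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit  # clear the LSB, then set it to the bit
    return marked

def extract(frame: np.ndarray, n_bits: int) -> list[int]:
    """Read the payload bits back out of the LSBs."""
    return [int(v & 1) for v in frame.reshape(-1)[:n_bits]]

frame = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)  # stand-in video frame
payload = [1, 0, 1, 1, 0, 1, 0, 0]  # e.g. an "AI-generated" marker
marked = embed(frame, payload)
print("recovered:", extract(marked, len(payload)))  # matches payload
```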

Problems and Limitations

Even with these improvements, broad adoption remains a challenge. Many developers may be hesitant to move from existing proprietary solutions to open-source options like Video Seal.

To encourage collaboration, Meta plans to hold workshops at major AI conferences and publish a public leaderboard comparing different watermarking methods.

Moreover, today's watermarking methods are often not robust or effective enough for video content.

Video Seal (Source: aidemos.meta.com)

Two Main Approaches to Fighting AI-Generated Content

In the battle against AI-generated content, two distinct strategies have emerged:

  1. Watermarking (Preventive Approach):
  • Works by adding invisible signatures to content at the moment of creation
  • Acts like a digital certificate showing “this was made by AI”
  • Tools like Meta Video Seal and Microsoft’s provenance features represent this approach
  • Main advantage is immediate identification of AI content
  2. Detection Tools (Analytical Approach):
  • Analyzes existing content to determine if it was AI-generated
  • Looks for patterns and characteristics typical of AI-created content
  • Particularly useful for content that wasn’t marked at creation
  • These tools form our second line of defense

Both approaches are necessary because they complement each other: watermarking labels AI content at the moment of creation, while detection tools help identify content that was never marked.
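
As a hedged sketch of how the two strategies might be chained in practice, consider the pipeline below; `check_watermark` and `run_detector` are hypothetical stubs standing in for a watermark decoder and a trained classifier, not real APIs from Meta or Microsoft.

```python
# Hypothetical pipeline combining both strategies. check_watermark() and
# run_detector() are illustrative stubs, not real APIs.

def check_watermark(media: bytes):
    """Stub decoder: a real system would run e.g. the Video Seal extractor."""
    return "Meta Video Seal" if media.startswith(b"WM:") else None

def run_detector(media: bytes) -> float:
    """Stub score in [0, 1]: a real system would use a trained classifier."""
    return 0.95 if b"synthetic" in media else 0.10

def classify_content(media: bytes) -> str:
    # 1. Preventive path: a decoded watermark is strong, direct evidence.
    watermark = check_watermark(media)
    if watermark is not None:
        return f"AI-generated (watermarked by {watermark})"
    # 2. Analytical fallback: score unmarked content with a detector.
    score = run_detector(media)
    if score > 0.9:
        return "likely AI-generated (no watermark, high detector score)"
    return "no evidence of AI generation"

print(classify_content(b"WM:frame-data"))         # watermark path
print(classify_content(b"synthetic frame data"))  # detector fallback
print(classify_content(b"camera frame data"))     # passes both checks
```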

Detection Tools and Technologies

Watermarking is not the only way to identify AI-generated content. New detection tools use sophisticated algorithms to analyze both text and image content.

  • Originality.ai uses deep learning algorithms to detect patterns typical of AI-generated text.
  • GPTZero analyzes linguistic structures and word frequencies to distinguish human-written content from machine-generated text.
  • Copyleaks uses N-grams and syntax comparisons to spot subtle linguistic variations that can signal AI authorship.

These tools aim to give users an accurate verdict on whether content is genuine, but their reliability varies considerably.
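
To illustrate the kind of statistical signal such tools rely on, here is a toy Python heuristic built on two of the features mentioned above, N-gram repetition and word-frequency variety. It is deliberately simplistic; real detectors combine many stronger, learned features.

```python
# Toy sketch of statistical signals in the spirit of N-gram and word-frequency
# detectors. Real tools use trained models and many more features; this only
# illustrates the general idea and is not a reliable detector.
from collections import Counter

def ngram_repetition(text: str, n: int = 3) -> float:
    """Share of word n-grams whose pattern repeats (higher = more repetitive)."""
    words = text.lower().split()
    grams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not grams:
        return 0.0
    counts = Counter(grams)
    return sum(c for c in counts.values() if c > 1) / len(grams)

def vocabulary_variety(text: str) -> float:
    """Unique words divided by total words (lower = flatter, more uniform text)."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

sample = ("the product is great and the product is reliable "
          "and the product is great value")
print(f"n-gram repetition:  {ngram_repetition(sample):.2f}")   # high -> suspicious
print(f"vocabulary variety: {vocabulary_variety(sample):.2f}")  # low -> suspicious
```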

Conclusion

As generative AI advances, protecting digital authenticity becomes increasingly crucial. Microsoft and Meta are leading the charge with groundbreaking standards for content authenticity and media provenance verification.

To combat deepfakes effectively, we need both industry-wide adoption of these tools and stronger collaboration between tech companies. The future integrity of digital content depends on detection technologies evolving faster than AI-generated deception.

In fact, we’ve recently covered how YouTube is taking similar steps by introducing new AI detection tools for creators and brands. Their approach includes synthetic voice identification and AI-generated face detection technologies, further demonstrating how major platforms are working to protect content authenticity in the AI era.
