How Businesses Are Responding to the Rise of AI Content
Because of these problems, major technology companies are working on ways to verify the authenticity and provenance of media. At its annual Build conference, Microsoft announced new media provenance features for its Bing Image Creator and Designer tools.
These features will let users check whether images or videos were generated by AI, using cryptographic methods that embed information about where the content came from.
For this system to work, however, other platforms need to adopt the Coalition for Content Provenance and Authenticity (C2PA) specification.
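Conceptually, the check comes down to verifying a cryptographic signature that binds the media to a small provenance manifest. The Python sketch below is a simplified illustration of that idea, not the actual C2PA format; the manifest fields, the key handling, and the HMAC scheme are all assumptions made for the example.

```python
# Simplified illustration of signed provenance metadata (not the real C2PA format).
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret-key"  # real systems use certificate-backed private keys

def sign_manifest(media_bytes: bytes, manifest: dict) -> str:
    """Bind a provenance manifest to the media by signing a hash of both together."""
    payload = hashlib.sha256(media_bytes).hexdigest() + json.dumps(manifest, sort_keys=True)
    return hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()

def verify_manifest(media_bytes: bytes, manifest: dict, signature: str) -> bool:
    """Recompute the signature; any edit to the pixels or the manifest invalidates it."""
    return hmac.compare_digest(sign_manifest(media_bytes, manifest), signature)

image = b"...image bytes..."
manifest = {"generator": "Bing Image Creator", "ai_generated": True}
sig = sign_manifest(image, manifest)
print(verify_manifest(image, manifest, sig))         # True: provenance intact
print(verify_manifest(image + b"x", manifest, sig))  # False: content was altered
```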
Likewise, Meta has released Meta Video Seal, a tool that adds invisible watermarks to AI-generated video clips.
This open-source tool is designed to integrate with existing software, making AI-generated content easier to identify.
Unlike older watermarking technologies that struggled with video compression and manipulation, Video Seal promises to be resistant to common edits such as blurring and cropping.
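Video Seal embeds its watermark with a trained neural model precisely so the signal survives such edits. The toy least-significant-bit sketch below only illustrates the general idea of an invisible, machine-readable mark; it is a stand-in for the example, not Video Seal's algorithm, and it would not survive compression or cropping.

```python
# Toy invisible watermark: hide a bit string in the least-significant bits of pixel values.
# A deliberately simplified stand-in; robust schemes like Video Seal embed their signal
# with a trained model so it survives compression, blurring, and cropping.
import numpy as np

def embed(frame: np.ndarray, bits: list[int]) -> np.ndarray:
    """Overwrite the least-significant bit of the first len(bits) pixels with the watermark."""
    marked = frame.copy().ravel()
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | bit
    return marked.reshape(frame.shape)

def extract(frame: np.ndarray, n_bits: int) -> list[int]:
    """Read the watermark bits back out of the least-significant bits."""
    return [int(pixel & 1) for pixel in frame.ravel()[:n_bits]]

frame = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)  # stand-in for a video frame
watermark = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed(frame, watermark)
assert extract(marked, len(watermark)) == watermark  # imperceptible change, recoverable by code
```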
Problems and Limitations
Even with these improvements, widespread adoption remains a challenge. Many developers may be hesitant to move from existing proprietary solutions to open-source options like Video Seal.
To encourage broader collaboration, Meta plans to hold workshops at major AI conferences and publish a public leaderboard comparing different watermarking methods.
In addition, current watermarking methods are not always robust or effective enough for video content.
Two Main Approaches to Fighting AI-Generated Content
In the battle against AI-generated content, two distinct strategies have emerged:
- Watermarking (Preventive Approach):
  - Works by adding invisible signatures to content at the moment of creation
  - Acts like a digital certificate showing “this was made by AI”
  - Tools like Meta Video Seal and Microsoft’s provenance features represent this approach
  - Main advantage is immediate identification of AI content
- Detection Tools (Analytical Approach):
  - Analyzes existing content to determine whether it was AI-generated
  - Looks for patterns and characteristics typical of AI-created content
  - Particularly useful for content that wasn’t marked at creation
  - These tools form our second line of defense
Both approaches are necessary because they complement each other: watermarking labels AI content at the moment of creation, while detection tools help identify content that carries no mark.
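In practice the two strategies can be chained: check for an embedded mark first and fall back to statistical detection only when none is found. The sketch below shows that decision flow; `read_watermark` and `detector_score` are hypothetical stubs for the example, not real library calls.

```python
# Hypothetical decision flow chaining both approaches; read_watermark and
# detector_score are illustrative stubs, not real APIs.
from typing import Optional

def read_watermark(media: bytes) -> Optional[str]:
    """Stand-in for a provenance check such as a C2PA manifest or watermark extractor."""
    return "Bing Image Creator" if media.startswith(b"WM:") else None

def detector_score(media: bytes) -> float:
    """Stand-in for a statistical detector returning a probability the content is AI-made."""
    return 0.9  # placeholder value

def classify_content(media: bytes) -> str:
    mark = read_watermark(media)   # preventive path: mark embedded at creation time
    if mark is not None:
        return f"AI-generated (declared by {mark})"
    # analytical path: fall back to statistical detection for unmarked content
    return "likely AI-generated" if detector_score(media) > 0.8 else "likely human-made"

print(classify_content(b"WM:...marked video bytes..."))  # caught by the watermark path
print(classify_content(b"...unmarked bytes..."))         # falls through to the detector
```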
Detection Tools and Technologies
Watermarking is not the only way to identify AI-generated content. Newer detection tools use sophisticated algorithms to analyze both text and images.
- Originality uses deep learning algorithms to spot patterns characteristic of AI-generated text.
- GPTZero analyzes linguistic structures and word frequencies to distinguish human-written content from machine-generated content.
- CopyLeaks uses N-grams and syntax comparisons to detect subtle language patterns that may indicate AI authorship.
These tools aim to give users a reliable assessment of how authentic a piece of content is, but their accuracy varies considerably.
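The vendors above keep their scoring models proprietary, so the sketch below is only a rough illustration of the N-gram style of analysis: a toy repetition heuristic, not how Originality, GPTZero, or CopyLeaks actually score text.

```python
# Toy repetition heuristic: counts how often word trigrams recur in a passage.
# For illustration only; commercial detectors use far richer statistical and
# deep-learning features than this single signal.
from collections import Counter

def trigram_repetition(text: str) -> float:
    """Return the fraction of word trigrams that occur more than once in the text."""
    words = text.lower().split()
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    if not trigrams:
        return 0.0
    counts = Counter(trigrams)
    repeated = sum(count for count in counts.values() if count > 1)
    return repeated / len(trigrams)

sample = "the quick brown fox jumps over the lazy dog " * 3
print(f"repetition score: {trigram_repetition(sample):.2f}")  # higher means more repetitive phrasing
```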
In Conclusion
As generative AI advances, protecting digital authenticity becomes increasingly crucial. Microsoft and Meta are leading the charge with groundbreaking standards for content authenticity and media provenance verification.
To combat deepfakes effectively, we need both industry-wide adoption of these tools and stronger collaboration between tech companies. The future integrity of digital content depends on detection technologies evolving faster than AI-generated deception.
In fact, we’ve recently covered how YouTube is taking similar steps by introducing new AI detection tools for creators and brands. Their approach includes synthetic voice identification and AI-generated face detection technologies, further demonstrating how major platforms are working to protect content authenticity in the AI era.