YouTube Now Lets You Request Removal of AI Mimicking Your Identity


YouTube is taking a further step to curb the abuse of AI on its platform: you can now request the removal of deepfakes that mimic your face or voice. There is a process to follow, and it runs through YouTube's privacy guidelines rather than the Community Guidelines creators usually deal with.

Image credit: Christian Wiediger

Months ago, YouTube took a disclosure-based approach, requiring creators to tick a checkbox at upload time to indicate whether their content included anything AI-generated or synthetic. That clearly wasn't going to be enough, and they knew it.

YouTube is indeed investing in tools to detect AI-generated content, but those tools may not do the job well enough, at least for now, so the company is layering on yet another policy for AI-generated content.

This new policy requires first-party claims. That is, you can only request the removal of content containing a deepfake that realistically simulates your own identity. There are exceptions that allow someone to file on behalf of minors, deceased individuals, and people without access to a computer.

When a complaint is submitted, YouTube will review it and decide based on several factors: whether the content was disclosed or labelled as synthetic (or as made with AI), whether it actually uniquely identifies a person, and whether it could be considered parody or satire or something otherwise valuable or relevant to the public.
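To make those factors concrete, here is a toy sketch in Python that models them as a checklist. The field names and the helper are my own assumptions for illustration; YouTube's actual review is human judgment, not a formula.

```python
from dataclasses import dataclass

@dataclass
class DeepfakeComplaint:
    """Hypothetical model of the factors YouTube says it weighs."""
    disclosed_as_synthetic: bool       # labelled as altered or AI-made?
    uniquely_identifies_person: bool   # realistically depicts the complainant?
    parody_or_public_interest: bool    # satire, parody, or public-interest value?

def removal_factors(c: DeepfakeComplaint) -> list[str]:
    """List the stated factors that weigh toward removal (illustrative only)."""
    factors = []
    if not c.disclosed_as_synthetic:
        factors.append("content was not disclosed as synthetic or AI-made")
    if c.uniquely_identifies_person:
        factors.append("content uniquely identifies the person")
    if not c.parody_or_public_interest:
        factors.append("no parody, satire, or public-interest value")
    return factors

print(removal_factors(DeepfakeComplaint(False, True, False)))
# -> all three factors weigh toward removal in this hypothetical case
```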

YouTube, of course, uses AI on its own platform for things like video summaries, conversational tools for gathering feedback on certain videos, and recommendations, so it is hardly against the use of AI. The point here is that content being AI-generated doesn't, by itself, protect it from removal, especially if it violates the Community Guidelines, while privacy violations are handled on a slightly different track.

“For creators, if you receive notice of a privacy complaint, keep in mind that privacy violations are separate from Community Guidelines strikes and receiving a privacy complaint will not automatically result in a strike,” YouTube shared.

When a complaint is made, the uploader has 48 hours to address it before YouTube begins its own review. If the uploader removes the content within that window, the complaint is considered resolved and YouTube takes no further action. If nothing is done within those 48 hours, YouTube reviews the complaint and makes a decision based on its guidelines.
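Here is a minimal sketch of that timeline as code. The `complaint_status` helper is hypothetical, not YouTube's actual system; it just expresses the 48-hour flow described above.

```python
from datetime import datetime, timedelta

GRACE_PERIOD = timedelta(hours=48)  # uploader's window described in the policy

def complaint_status(filed_at: datetime,
                     removed_at: datetime | None,
                     now: datetime) -> str:
    """Illustrative model of the flow: removal within 48 hours resolves
    the complaint; otherwise YouTube reviews it against its guidelines."""
    deadline = filed_at + GRACE_PERIOD
    if removed_at is not None and removed_at <= deadline:
        return "resolved: uploader removed the content, no further action"
    if now < deadline:
        return "pending: uploader still has time to act"
    return "under review: YouTube evaluates against its privacy guidelines"

# Example: uploader removes the video a day after being notified
filed = datetime(2024, 7, 1, 12, 0)
print(complaint_status(filed,
                       removed_at=datetime(2024, 7, 2, 9, 0),
                       now=datetime(2024, 7, 4, 12, 0)))
# -> resolved: uploader removed the content, no further action
```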

That seems like a fair way to handle these situations, particularly for creators, who at least get a chance to act before anything is enforced. The possible issue is the built-in delay in very sensitive and urgent cases, especially around politics these days.

Even content that complies with YouTube's Community Guidelines may still be removed under the privacy guidelines if a valid removal request is submitted.

YouTube won't penalize a creator for a privacy-based removal the way it would for a Community Guidelines violation, but it could take action against an account that repeatedly violates the privacy guidelines.

A fair balance, I would like to think. YouTube is a big platform, and whatever happens there can ripple out to affect people everywhere, so it is important to handle these matters tactfully, and that seems to be what YouTube is doing with its new policies.


Interested in more?

How a Real Photo Fooled an AI Art Contest

DeepMind's New AI Brings Videos to Life with Sound

Apple Intelligence, Siri Makeover, and ChatGPT: WWDC 2024

Posted Using InLeo Alpha


This is a healthy policy I must say. Users need to be protected from exploitation.

Glad they're doing something about it. YouTube is a world on its own.
