Hot news

YouTube announces guidelines for regulating AI content


YouTube has announced a new set of guidelines for AI-generated content on its platform, to be rolled out over the coming months.
YouTube will roll out new updates to notify viewers about AI-generated content, require creators to disclose their use of AI tools, and remove harmful artificial content when necessary, according to gadgets360.
This will be achieved in two ways: a new label in the description panel highlighting the synthetic nature of the content, and a second, more prominent label on the video player itself for videos touching on certain sensitive topics.
Generative artificial intelligence (AI) has taken off over the past year, with the market flooded with powerful chatbots, image and video generators, and other AI tools. The new technology has also presented challenges around the responsible use of AI, misinformation, impersonation, and copyright infringement, among other concerns.
The video platform also said it would take action against creators who don't follow its new guidelines on AI-generated content. Its blog post read: "Creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other penalties."
In addition, YouTube will remove certain synthetic media from its platform, whether labeled or not; this may include videos that violate YouTube's community guidelines. Creators and artists will also be able to request the removal of AI-generated content that simulates an identifiable individual, including their face or voice. YouTube said these removals will also apply to AI-generated music that mimics an artist's singing or rapping voice. These AI guidelines and removal processes will roll out across the platform in the coming months.
YouTube will also deploy AI technologies to detect content that violates its community guidelines, helping the platform identify and catch potentially harmful or violating content more quickly. The Google-owned platform added that it will develop guardrails to prevent its own AI tools from creating malicious content.