How can AI-generated images be identified? Tech companies pledge to add watermarks

AI-generated content has deceived users in recent months, but a coalition of tech giants and startups has vowed to watermark content produced by AI.
According to Business Insider, the group, which includes big tech pillars such as Google, Microsoft, Meta, and Amazon, as well as generative AI leaders OpenAI, Anthropic, and Inflection, has made "voluntary commitments" to the Biden administration in an effort to make its products safer and to curb the technology's propensity for bias and misinformation.
The pledge includes commitments by the companies to build "robust systems" that identify or watermark content produced by their AI tools.
These identifiers or watermarks would indicate which AI service was used to create the content, while omitting any information that could identify the original user.
Since OpenAI released ChatGPT last November, generative AI tools have wowed users with their ability to conjure up text and images on demand, but the emerging technology's power to produce persuasive text and realistic images has already been used to spread false information.
Markets dipped briefly in May after a fake image of smoke rising near the Pentagon circulated on social media. It was never confirmed that the image was created with artificial intelligence, but it contained many of the unrealistic elements that often appear in AI-generated images, such as physical objects merging into one another.
And a study published in June found that most people could not tell whether a tweet was written by a human or by ChatGPT; the participants surveyed even found ChatGPT's tweets more convincing than those written by humans.