Apple Intelligence Will Label AI-Generated Images in Metadata
Apple’s new artificial intelligence features, collectively called Apple Intelligence, are designed to help you create new emoji, edit photos and generate images from a simple text prompt or an uploaded photo. We now know that Apple Intelligence will also add information to each image’s metadata, helping people identify that it was created or modified with AI.
On a recent podcast hosted by blogger John Gruber, Apple executives described how the company’s teams want to ensure transparency, even with seemingly simple photo edits, such as removing an object from the background.
“We make sure to mark the metadata of the generated image to show that it has been modified,” said Craig Federighi, Apple’s senior vice president of software engineering, adding that Apple does not intend to create technology that generates realistic images of people or places.
Apple’s commitment to adding information to images touched by its AI puts it on a growing list of companies trying to help people identify when images have been manipulated. TikTok, OpenAI, Microsoft and Adobe have all started adding something like a digital watermark to help identify content created or manipulated by AI.
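Apple hasn’t published exactly which metadata fields its tools will write, so the general idea can only be illustrated loosely. The minimal Python sketch below, using the Pillow library, stamps an edit note into a photo’s standard EXIF “Software” tag; the file names and label text are hypothetical placeholders, not Apple’s actual mechanism.

```python
from PIL import Image  # Pillow; pip install Pillow

# Hypothetical illustration only: Apple has not said which metadata
# fields it writes. Here we record an edit note in the standard
# EXIF "Software" tag (0x0131) of a JPEG.
SOFTWARE_TAG = 0x0131

img = Image.open("edited_photo.jpg")        # placeholder file name
exif = img.getexif()
exif[SOFTWARE_TAG] = "Edited with generative AI tools"
img.save("edited_photo_labeled.jpg", exif=exif)

# Anyone can read the tag back later to check for the label.
print(Image.open("edited_photo_labeled.jpg").getexif().get(SOFTWARE_TAG))
```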
Media and information experts have warned that despite these efforts, the problem is likely to worsen, especially ahead of the contested 2024 US presidential election. A new term, “slop,” is gaining popularity as a way to describe the realistic-looking falsehoods and misinformation created by AI.
AI tools for creating text, video and audio have become significantly easier to use, allowing people to do all sorts of things without much technical knowledge. (See CNET’s hands-on reviews of AI image tools like Google’s ImageFX, Adobe Firefly and OpenAI’s DALL-E 3, as well as more AI tips, explainers and news on our AI Atlas resource page.)
Read more: How close is this picture to the truth? What you need to know in the age of AI
At the same time, AI-generated content has become much more convincing. Some of the biggest tech companies have started adding AI technology to the apps we use every day, with decidedly mixed results. One of the most notable blunders came from Google, whose AI Overviews summaries attached to search results began inserting wrong and potentially dangerous information, like suggesting adding glue to pizza to keep the cheese from sliding off.
Apple appears to be taking a more conservative approach to AI for now. The company said it intends to offer its AI tools in a public beta test later this year. It has also struck a partnership with leading AI startup OpenAI to add more capabilities to iPhones, iPads and Mac computers.