Indicator, an online publication dedicated to studying and exposing digital deception and manipulation, has published an updated analysis of how the output of generative AI tools is labelled on social media platforms.
The study, an update to a previous study conducted in October, shows some good news: for example, all image and video content created by OpenAI models was correctly labeled as AI-generated by LinkedIn, Pinterest, and YouTube.
However, some significant gaps remain.
For example, some images from Meta AI were recognised as AI-generated by Instagram, but they were not recognised by LinkedIn, TikTok, Pinterest or YouTube. Meta AI uses IPTC’s Digital Source Type property in the media file’s XMP header (the typical way to use IPTC Photo Metadata) to signal AI-generated content, which implies that these platforms are not examining the IPTC Digital Source Type property.
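As a minimal sketch of what such a check could look like, the snippet below scans an XMP packet string for the IPTC Digital Source Type value that marks fully AI-generated media ("trainedAlgorithmicMedia" in the IPTC NewsCodes vocabulary). The sample XMP fragment is hypothetical, and a production implementation would parse the XMP with a proper RDF/XML parser rather than a substring test:

```python
# IPTC Digital Source Type value for purely AI-generated media
# (term "trainedAlgorithmicMedia" from the IPTC NewsCodes vocabulary).
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def is_labelled_ai_generated(xmp_packet: str) -> bool:
    """Return True if the XMP packet carries the IPTC Digital Source Type
    value that marks fully AI-generated content."""
    # The property may appear as an XML attribute or an element value;
    # for this sketch a simple substring test is enough.
    return TRAINED_ALGORITHMIC_MEDIA in xmp_packet

# Hypothetical XMP fragment of the kind an AI generator might embed:
sample = (
    '<rdf:Description '
    'xmlns:Iptc4xmpExt="http://iptc.org/std/Iptc4xmpExt/2008-02-29/" '
    'Iptc4xmpExt:DigitalSourceType='
    '"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"/>'
)
print(is_labelled_ai_generated(sample))  # True
```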
OpenAI and Google Gemini both use C2PA metadata to assert digital source type (also using IPTC’s Digital Source Type vocabulary, but this time embedded in a C2PA manifest). However, Pinterest, for example, only picked up OpenAI’s version of the C2PA metadata, not Google’s. Pinterest did surface Meta AI’s content via the IPTC Digital Source Type tag, and was equal-best overall, tying with LinkedIn, which recognised all content from Google and OpenAI but not from Meta.
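To illustrate the C2PA route, here is a minimal sketch that inspects a C2PA manifest for the same IPTC digital source type value. It assumes the manifest has already been extracted and decoded to a Python dict (for example with a tool such as c2patool); the nesting follows the C2PA "c2pa.actions" assertion, and the sample manifest at the end is hypothetical:

```python
# Same IPTC vocabulary term as used in XMP, here inside a C2PA manifest.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def manifest_declares_ai(manifest: dict) -> bool:
    """Return True if a decoded C2PA manifest contains an action
    whose digitalSourceType marks the asset as AI-generated."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") == "c2pa.actions":
            for action in assertion.get("data", {}).get("actions", []):
                if action.get("digitalSourceType") == TRAINED_ALGORITHMIC_MEDIA:
                    return True
    return False

# Hypothetical decoded manifest of the kind a generator might embed:
manifest = {
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "digitalSourceType": TRAINED_ALGORITHMIC_MEDIA,
                    }
                ]
            },
        }
    ]
}
print(manifest_declares_ai(manifest))  # True
```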
IPTC Managing Director Brendan Quinn was quoted in the article as saying “Tech platforms have the talent to implement C2PA tomorrow; they simply need the will to prioritize it.”
The article noted that looming legislation from California and other jurisdictions would force platforms to implement AI surfacing properly, but in the meantime there is a risk: Maurice Jakesch, assistant professor of computational social science at Bauhaus-University in Weimar, is reported as saying that “an inconsistent and incomplete labeling setup may have unexpected consequences on online trust.”
The post AI disclosure on social media “a work in progress” appeared first on IPTC.