Introduction
- TL;DR: Poland asked the European Commission to investigate TikTok over viral AI-generated videos promoting “Polexit.”
- The case ties together DSA enforcement for VLOPs and EU AI Act Article 50 transparency obligations for AI-generated/manipulated content.
- For engineers, the practical question is: can you produce verifiable evidence—provenance, labels, and audit logs—at scale?
Why it matters: Trust & Safety is becoming a data engineering problem: reproducible decisions, evidence stores, and standardized labeling fields.
What happened: Poland’s request and TikTok’s response
Reuters reports that on 2025-12-30 Poland requested an EU-level probe after AI-generated videos featuring synthetic personas promoted Poland leaving the EU; TikTok removed content it said violated its rules, and the Commission confirmed receiving the letter. Euronews likewise described AI-generated videos of non-existent personas pushing “Polexit,” noting that the account behind them was deleted.
Why it matters: When synthetic media drives systemic-risk concerns, regulators evaluate not only policy text but operational controls and evidence.
DSA angle: VLOP obligations, enforcement, and TikTok’s history under DSA
The Commission designated the first set of VLOPs/VLOSEs on 2023-04-25, with DSA obligations applying from the end of August 2023. Under the DSA, the Commission can impose fines of up to 6% of worldwide annual turnover after a non-compliance decision. It also opened formal proceedings against TikTok on 2024-02-18 under the DSA, covering areas such as risk management and data access for researchers.
Why it matters: DSA compliance is measurable: risk assessments, mitigation actions, data access, and auditability.
EU AI Act Article 50: transparency for AI-generated or manipulated content
The Commission’s AI Act Service Desk summarizes Article 50 obligations: informing users when they interact with AI, and ensuring AI-generated/manipulated content is clearly marked and detectable; deepfakes and certain AI-generated text must be disclosed (with defined exceptions). On 2025-12-17, the Commission published a first draft Code of Practice on marking/labeling AI-generated content, and stated the relevant transparency rules become applicable on 2026-08-02.
Why it matters: Labeling becomes a compliance requirement—and therefore a schema, a pipeline, and an exportable report.
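To make that concrete, here is a minimal sketch of a machine-readable AI-content label record. The field names and enum values are assumptions for illustration, not an official Article 50 schema; the `digitalSourceType` URI is the IPTC vocabulary term for fully AI-generated media.

```python
import json
from dataclasses import dataclass, asdict

# IPTC vocabulary term for media created by a trained algorithmic model.
IPTC_TRAINED_ALGORITHMIC = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

@dataclass
class AiContentLabel:
    """Illustrative label record; not an official compliance schema."""
    content_id: str
    is_ai_generated: bool
    marking_method: str        # e.g. "c2pa", "embedded-metadata", "visible-label"
    digital_source_type: str   # IPTC digitalSourceType URI
    label_version: str = "1"

def to_export_json(label: AiContentLabel) -> str:
    """Serialize a label for audit export / compliance reporting."""
    return json.dumps(asdict(label), sort_keys=True)

label = AiContentLabel(
    content_id="vid_123",
    is_ai_generated=True,
    marking_method="c2pa",
    digital_source_type=IPTC_TRAINED_ALGORITHMIC,
)
print(to_export_json(label))
```

Keeping the label a versioned, serializable record (rather than a boolean flag buried in application state) is what makes it exportable into the compliance reports the section describes.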
C2PA / Content Credentials: turning “authenticity” into verifiable metadata
C2PA provides technical specifications to attach and verify provenance information for media (“Content Credentials”).
Reuters reported that TikTok planned to label AI-generated images/video using Content Credentials (a watermark/credential approach).
Operationally, you can extract and inspect manifests with c2patool, which outputs JSON representations of manifests by default.
Cloudflare documented preserving Content Credentials through image transformations, emphasizing provenance durability across processing steps.
Why it matters: Provenance + audit logs is stronger than classifiers alone, especially for post-incident reconstruction.
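As a sketch of the post-incident reconstruction angle: the parser below pulls the audit-relevant facts out of a c2patool-style JSON report. The top-level shape (an `active_manifest` id plus a `manifests` map) mirrors c2patool's output, but the sample payload is illustrative, not a real signed manifest.

```python
import json

# Illustrative report shaped like c2patool's JSON output; values are fake.
SAMPLE_REPORT = """
{
  "active_manifest": "urn:uuid:1234",
  "manifests": {
    "urn:uuid:1234": {
      "claim_generator": "example-generator/1.0",
      "assertions": [
        {"label": "c2pa.actions",
         "data": {"actions": [{"action": "c2pa.created"}]}}
      ]
    }
  }
}
"""

def summarize_provenance(report_json: str) -> dict:
    """Extract the facts an audit log would retain from a manifest report."""
    report = json.loads(report_json)
    active_id = report.get("active_manifest")
    manifest = report.get("manifests", {}).get(active_id, {})
    # Collect the C2PA action history (e.g. "c2pa.created", "c2pa.edited").
    actions = [
        a["action"]
        for assertion in manifest.get("assertions", [])
        if assertion.get("label") == "c2pa.actions"
        for a in assertion["data"]["actions"]
    ]
    return {
        "provenance_present": active_id is not None,
        "claim_generator": manifest.get("claim_generator"),
        "actions": actions,
    }

print(summarize_provenance(SAMPLE_REPORT))
# → {'provenance_present': True, 'claim_generator': 'example-generator/1.0', 'actions': ['c2pa.created']}
```

Storing this summary next to the moderation decision is what turns provenance from a display feature into reconstructable evidence.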
Reference pipeline (diagram) and a minimal audit schema
Pipeline (in text form): ingest → C2PA manifest extraction/verification → synthetic-media detection → labeling → enforcement decision → evidence store + audit log.
Minimal fields: content_id, provenance_present, provenance_verdict, detected_synthetic, action, reason_code, evidence_refs, decision_ts.
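The minimal fields above can be pinned down as a record type. This is a sketch under stated assumptions: the verdict/action enum values and the evidence-reference format are hypothetical choices for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModerationAuditRecord:
    """One reproducible moderation decision with its preserved evidence."""
    content_id: str
    provenance_present: bool
    provenance_verdict: str             # e.g. "valid", "invalid", "absent"
    detected_synthetic: bool
    action: str                         # e.g. "label", "remove", "no_action"
    reason_code: str
    evidence_refs: list = field(default_factory=list)  # e.g. object-store keys
    decision_ts: str = ""               # ISO 8601, UTC

record = ModerationAuditRecord(
    content_id="vid_123",
    provenance_present=True,
    provenance_verdict="valid",
    detected_synthetic=True,
    action="label",
    reason_code="AI_GENERATED_POLITICAL",
    evidence_refs=["s3://evidence/vid_123/manifest.json"],
    decision_ts=datetime.now(timezone.utc).isoformat(),
)
print(record.action, record.reason_code)
```

The point of `evidence_refs` is that the decision row stays small while the heavyweight artifacts (manifests, media snapshots) live in immutable storage it points to.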
Why it matters: Regulators ask “show your work.” Your system needs reproducible decisions with preserved evidence.
Example: extracting a C2PA manifest with c2patool
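A typical invocation, following the c2patool usage docs (the file paths are placeholders; flags and output shape can vary by tool version):

```shell
# Print the asset's manifest store as JSON (c2patool's default output)
c2patool media/video_frame.jpg

# Write a detailed report to a file for retention alongside the decision log
c2patool media/video_frame.jpg -d > evidence/video_frame.manifest.json
```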
Cloudflare’s example shows redirecting detailed output to a JSON file for review and retention.
Why it matters: Retained manifests + decision logs improve incident response and compliance reporting quality.
Conclusion
- Poland’s request highlights how AI-generated media can rapidly become a DSA-level systemic risk issue.
- EU AI Act Article 50 and the Commission’s Code of Practice workstream push labeling toward standardized, machine-readable implementations.
- Engineering priorities: provenance (C2PA), labeling, durable transformations, and forensic-grade audit logs.
Summary
- DSA: VLOP risk management and enforceable fines of up to 6% of worldwide annual turnover.
- EU AI Act Article 50: marking/labeling obligations for AI-generated content, applicable from 2026-08-02.
- C2PA/Content Credentials plus audit logs: the evidentiary backbone for reproducible moderation decisions.
Recommended Hashtags
#TikTok #AIGeneratedContent #DSA #EUAIAct #C2PA #ContentCredentials #TrustSafety #Disinformation #PlatformGovernance
References
- [Poland urges Brussels to probe TikTok over AI-generated content, 2025-12-30](https://www.reuters.com/world/china/poland-urges-brussels-probe-tiktok-over-ai-generated-content-2025-12-30/)
- [AI-generated videos showing young and attractive women promote Poland’s EU exit, 2025-12-30](https://www.euronews.com/2025/12/30/ai-generated-videos-showing-young-and-attractive-women-promote-polands-eu-exit)
- [DSA: Commission opens formal proceedings against TikTok, 2024-02-18](https://ec.europa.eu/commission/presscorner/detail/en/ip_24_926)
- [Designation decisions for the first set of VLOPs and VLOSEs, 2023-12-20](https://digital-strategy.ec.europa.eu/en/library/designation-decisions-first-set-very-large-online-platforms-vlops-and-very-large-online-search)
- [Questions and answers on the Digital Services Act, 2020-12-15](https://ec.europa.eu/commission/presscorner/detail/en/QANDA_20_2348)
- [Article 50: Transparency obligations for providers and deployers of certain AI systems, N/A](https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-50)
- [Commission publishes first draft of Code of Practice on marking and labelling of AI-generated content, 2025-12-17](https://digital-strategy.ec.europa.eu/en/news/commission-publishes-first-draft-code-practice-marking-and-labelling-ai-generated-content)
- [TikTok to label AI-generated content from OpenAI and elsewhere, 2024-05-09](https://www.reuters.com/technology/tiktok-label-ai-generated-images-video-openai-elsewhere-2024-05-09/)
- [C2PA Specifications, N/A](https://c2pa.org/specifications/specifications/2.2/index.html)
- [Using C2PA Tool, N/A](https://opensource.contentauthenticity.org/docs/c2patool/docs/usage/)
- [Preserving content provenance by integrating Content Credentials into Cloudflare Images, 2025-02-03](https://blog.cloudflare.com/preserve-content-credentials-with-cloudflare-images/)
- [Verifying C2PA manifests - MediaConvert, N/A](https://docs.aws.amazon.com/mediaconvert/latest/ug/c2pa-manifest-verification.html)
- [contentauth/c2pa-attacks: Content Authenticity Security Tool, N/A](https://github.com/contentauth/c2pa-attacks)
- [Enforcing the Digital Services Act: State of play, 2024-11-21](https://epthinktank.eu/2024/11/21/enforcing-the-digital-services-act-state-of-play/)