APRIL 27, 2026 · 11 MIN READ

EU AI Act for Online Retailers: What Shop Operators Need to Know in 2026

ProductPolish Editorial

The EU AI Act has been in force since 1 August 2024, but it only becomes practically relevant for online retailers on 2 August 2026 — the day Article 50 transparency obligations kick in. Even so, a rumour keeps circulating in DACH e-commerce forums: from August 2026, every AI-generated product description must carry an "AI-generated" sticker.

That's not what the regulation says. It doesn't require this for commercial product listings — and most of the obligations don't fall on the retailer at all, but on the AI provider (ProductPolish, ChatGPT, Claude, Gemini, etc.).

This article cuts through the noise: which deadlines apply, what Article 50 actually requires, what you have to do as a retailer, what your AI tool covers, and where existing law (UWG, DSA, consumer protection) already handles the essentials.

Deadlines — what applies when

The AI Act applies in stages. The dates that matter for online retailers:

| Date | What happens |
|---|---|
| 1 August 2024 | AI Act enters into force (Regulation (EU) 2024/1689) |
| 2 February 2025 | Prohibitions on "unacceptable risk" AI practices — relevant for social scoring etc., not for product copy |
| 2 August 2025 | Obligations for GPAI providers (General-Purpose AI) — affects OpenAI, Anthropic, Google, doesn't affect you directly |
| 2 August 2026 | Article 50 transparency obligations — the only practically relevant date for online retailers |
| 2 August 2027 | High-risk AI obligations — product copy tools are not classified as high-risk |

What kicks in on 2 August 2026 sits in Article 50. Let's look at the three relevant paragraphs one at a time.

Article 50 in detail — what the regulation actually requires

Article 50 has three paragraphs that could plausibly apply to e-commerce tools. Only one of them does in practice — and that one targets the provider of your AI tool, not you.

Article 50(1): "You're talking to an AI"

"Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious."

This is the chatbot rule. If a buyer on your shop is chatting with an AI assistant (e.g. a Shopify chatbot answering product questions), it must be apparent that they're not talking to a human. Standard solution: a small "AI assistant" or "AI bot" label in the chat window is enough.

What this is not: an obligation for static product descriptions. A product description doesn't "interact" with the buyer — it gets read.

Article 50(2): Watermarking AI output

"Providers of AI systems […] generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated."

The crucial point: the obligation sits with the provider, not the deployer. If ProductPolish, ChatGPT, or Midjourney generates content for you, it's their responsibility to add a watermark — not yours.

What does this look like in practice?

  • Images: the most likely format is C2PA metadata — cryptographically signed information embedded in the image that machine-readably discloses AI origin. OpenAI, Adobe, and Google have already adopted C2PA. The buyer never sees it.
  • Text: here, watermarking is technically immature. The regulation itself hedges with "as far as this is technically feasible". OpenAI and others have experimented with statistical watermarks, but none are deployed at scale for free text, and many observers consider the obligation practically unenforceable for text output.
  • Exception: "Assistive editing" that does not substantially alter the original input is exempt. An AI tool that only fixes typos doesn't need to watermark anything.
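If you want a quick sanity check on whether an image your tool delivers carries any C2PA provenance data at all, you can scan the raw bytes for the JUMBF/C2PA box identifiers. This is a rough heuristic sketch, not real provenance verification: it only detects the presence of a manifest and says nothing about whether its cryptographic signature is valid (that requires a proper C2PA library).

```python
from pathlib import Path

# C2PA manifests are embedded in JUMBF boxes; their identifiers
# ("jumb", "c2pa") appear as raw ASCII bytes inside the file.
# Presence check only -- this does NOT verify the signature.
C2PA_MARKERS = (b"jumb", b"c2pa")

def probably_has_c2pa(path: str) -> bool:
    """Heuristic: does this file appear to contain a C2PA manifest?"""
    data = Path(path).read_bytes()
    return any(marker in data for marker in C2PA_MARKERS)
```

A missing marker doesn't prove the image is human-made (metadata is easily stripped in export pipelines), and a present marker doesn't prove the manifest is trustworthy; treat the result as a hint, not evidence.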

Article 50(4): the "matters of public interest" clause

"Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated."

This is the clause that gets misquoted constantly. It targets journalistic reporting, political information, health communications and similar content — areas where factual accuracy is existentially important to the public.

Product descriptions are not "matters of public interest" within the meaning of this clause. They are commercial communications and fall under advertising law — governed by UWG, the DSA, and consumer protection, not by Article 50(4).

Exception within the exception: if the AI text is reviewed and approved under the editorial responsibility of a natural or legal person, the disclosure obligation falls away anyway. That's the e-commerce default: you review the AI-generated copy, click "publish", and take responsibility — the human behind the listing is still you.

What this practically means for you

Three questions, three clear answers:

Do I have to mark AI-generated product copy as "AI-generated"?

No. Neither the AI Act, nor the DSA, nor UWG requires this on product listings. What is required: the text must not mislead. If the AI claims your battery lasts 24 hours when it actually lasts 8, that's a § 5 UWG violation (misleading commercial practice) — regardless of whether a human or an AI wrote it.

Do I have to label AI-generated images?

No, not you. The watermarking obligation sits with the image generator provider (ProductPolish, OpenAI, Midjourney, etc.). The metadata travels with the image. Some marketplaces (Meta has announced it, Amazon is reviewing) will start reading these tags and may display a small label — that's their obligation, not yours.

Do I need a notice in my imprint or terms?

Not required — but recommended. Neither the AI Act nor GDPR forces a "we use AI" disclaimer. But adding one signals transparency and reduces the surface area for an opportunistic competitor cease-and-desist. A single line in your terms — "Product copy and product images are partially AI-generated and reviewed editorially before publication" — is enough.

What doesn't change — laws that already apply

The AI Act doesn't build a new wall. It adds to existing rules that online retailers must follow anyway:

  • § 5 UWG (Germany) / § 1 UWG (Austria) — misleading practices. False claims about properties, origin, or performance are prohibited. Whether the AI invented them or an intern wrote them is irrelevant.
  • DSA (Digital Services Act). Marketplaces like Amazon, Etsy, and eBay have due-diligence obligations — they must be able to vet listings and act on illegal content. AI-generated copy falls under this like anything else.
  • GDPR. If your AI is trained on buyer data (e.g. for personalised copy), all GDPR obligations apply — legal basis, data processing agreements, right to information.

These three frameworks have been in force for years. Any retailer who publishes accurate product descriptions today is already 90% of the way to AI Act compliance.

What you should do now — before 2 August 2026

A pragmatic checklist, sorted by impact:

1. Review AI output before publishing. This is the single most important step — and the only one that materially reduces risk. AI can hallucinate ("Material: stainless steel" when it's plastic), and you carry liability. A human (you or a colleague) clicks through every listing once and verifies the core facts.

2. Pick an AI tool that supports watermarking. Ask the provider explicitly: "Are you implementing Article 50(2) of the AI Act?" Serious tools answer clearly (e.g. "Yes, C2PA metadata from Q2 2026"). Vague answers suggest the tool may run into trouble after August 2026. That compliance risk isn't yours as a deployer, but a non-compliant tool could become unavailable.

3. Review supplier contracts. If you commissioned an agency or tool to build a product catalogue, get it in writing who carries AI compliance. Ideally a clause: "Provider fulfils the requirements of Article 50 of the AI Act from the date of application."

4. Add a line about AI use to your terms. Optional, but a good signal. "Product copy and product images are partially AI-generated and reviewed editorially before publication." Nothing more is needed.

5. Quarterly fact-check spot audit. Walk through your top 20 best-selling listings and verify: every spec? Every material? Every dimension? Small hallucinations accumulate when no one reviews.
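Step 5 can be semi-automated. The sketch below assumes a hypothetical shop export with `sku` and `units_sold` columns (adapt the names to whatever your platform actually exports); it ranks listings by sales and pairs each of the top 20 with the fact fields a human should verify. The field list is an illustrative assumption, not a regulatory requirement.

```python
import csv

# Fields worth verifying by hand -- hallucinations tend to hide in
# hard facts (materials, dimensions), not in marketing prose.
# Illustrative list; adjust to your product categories.
FACT_FIELDS = ["material", "dimensions", "weight", "battery_life"]

def load_listings(path):
    """Read a shop export CSV into a list of dicts (assumed columns)."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def audit_batch(rows, top_n=20):
    """Return (sku, fields-to-check) pairs for the top-N best sellers."""
    ranked = sorted(rows, key=lambda r: int(r["units_sold"]), reverse=True)
    return [(r["sku"], FACT_FIELDS) for r in ranked[:top_n]]
```

The script only picks *which* listings to review; the review itself stays manual, which is exactly the editorial-responsibility pattern Article 50(4) rewards.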

Fines — how big is the risk really?

The regulation provides for tiered penalties:

  • Up to €35M or 7% of worldwide annual turnover — for prohibited practices (social scoring, manipulative AI, etc.). Online retailers are practically never affected.
  • Up to €15M or 3% of annual turnover — for breach of Article 50 obligations (transparency). The risk sits here, but the obligation targets providers, not retailers.
  • Up to €7.5M or 1% of annual turnover — for incorrect information to authorities.
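How the cap works out for a given shop depends on turnover. The sketch below encodes one common reading of Article 99: for most companies the ceiling is whichever is *higher* of the fixed amount and the turnover percentage, while Article 99(6) gives SMEs the *lower* of the two. That reading is an assumption baked into the code; check the actual regulation (or counsel) before relying on it.

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct: float,
             is_sme: bool = False) -> float:
    """Upper bound of an AI Act fine under Article 99.

    Assumed reading: non-SMEs face whichever is HIGHER of the fixed
    amount and the turnover percentage; SMEs (Art. 99(6)) face the
    LOWER of the two. Not legal advice.
    """
    pct_cap = turnover_eur * pct
    return min(fixed_cap_eur, pct_cap) if is_sme else max(fixed_cap_eur, pct_cap)

# Article 50 transparency tier: up to EUR 15M or 3% of turnover.
# A small shop with EUR 2M turnover, treated as an SME:
small_shop_cap = fine_cap(2_000_000, 15_000_000, 0.03, is_sme=True)  # 60,000.0
```

For a shop of that size the theoretical ceiling is EUR 60,000, which puts the "up to €15M" headline number into perspective.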

For a typical online retailer with 1–10 employees, the biggest real risk isn't an AI Act fine but a UWG cease-and-desist letter, which is far more common. A single cease-and-desist typically runs to a disputed amount of €1,500–3,000 plus legal fees.


FAQ

Do I need a visible "AI-generated" watermark on every AI-produced product image? No. The regulation requires machine-readable markings (e.g. C2PA metadata), not visible watermarks. And the obligation sits with the image generator provider, not with you.

What if I generate a text with AI and then edit it myself? If you, as the responsible person, review and approve the text before publication, the editorial responsibility exception in Article 50(4) applies. The disclosure obligation falls away — even if product descriptions were considered "matters of public interest" (which they aren't).

Am I directly addressed by the AI Act as a retailer at all? Generally no. You are a "deployer" within the meaning of the regulation. Direct obligations only hit you if you yourself develop and place AI systems on the market — rare for pure online retailers.

Can I keep using ChatGPT, Claude, Gemini, ProductPolish, etc.? Yes. Providers must ensure compliance by 2 August 2026. As a deployer, you don't have to do anything beyond reviewing outputs before publication.

What happens if the AI invents a property (hallucination)? If you publish it unchecked, you carry liability under UWG (§ 5 — misleading commercial practice). A competitor cease-and-desist is more realistic than an AI Act fine. Fact-checking before publication is the most important protective measure.

Is there an obligation to mention AI use in my terms? No. But a short clause in the terms or imprint is a good transparency move — and it reduces the attack surface for opportunistic cease-and-desist letters alleging deception.

When does the transition period for existing AI systems apply? For GPAI models (large language models) that were already on the market before 2 August 2025, a transition period until 2 August 2027 applies. For application systems like ProductPolish that use GPAI models, the regular deadline of 2 August 2026 applies.