
Claude Projects and ChatGPT Agents in a marketing stack: what actually works.

2025 was the year "agents" went from demo to tool. Claude Projects shipped in June 2024, Custom GPTs matured through 2024, and ChatGPT Agents launched in July 2025. After running both across four client marketing teams for most of 2025, here are the use cases that earned their keep and the ones that did not.

What worked

1. Competitor content monitoring (Claude Project). Built a Project loaded with the competitor domain list, positioning documents, and recent blog RSS feeds. Weekly prompt: "what changed in competitor content this week, and what should we react to?" Saved roughly 4 hours of content manager time per week. Not automatic (a human still reviewed everything), but a dramatic productivity gain.

2. Ad copy iteration at scale (ChatGPT Agent). Agent with access to the brand voice document, previous winning ads (with performance), and the product feed. Generates 20 copy variations per creative brief, ranked by the agent against past winners. The lift in creative testing throughput was the biggest operational gain we saw.

3. SEO content brief generation (Claude Project). Project with SERP data (pasted in), brand voice doc, and content scoring rubric. Output: a brief that historically took a content strategist 90 minutes to produce, now done in 10 minutes of review. Quality was indistinguishable in blind review.

4. Analytics QA (Claude Project with MCP). Project wired to a warehouse view MCP server. Weekly prompt: "what anomalies do you see in last week's data vs trailing 4 weeks?" Caught two real issues in 2025 (a Meta tag firing twice, a broken UTM pattern) before anyone noticed.
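The Project reasons over the warehouse view in natural language, but the underlying comparison is simple. A minimal sketch of the same week-vs-trailing-4-weeks check as a z-score test; the metric names, threshold, and variance floor are illustrative assumptions, not what the MCP server actually runs:

```python
from statistics import mean, stdev

def flag_anomalies(current: dict, trailing: dict, z_threshold: float = 3.0):
    """Compare last week's metrics against the trailing 4 weeks.

    current:  {metric: last week's value}
    trailing: {metric: [4 weekly values]}
    Returns metrics whose value sits more than z_threshold standard
    deviations from the trailing mean, with a small variance floor so
    a nearly flat baseline still flags a genuine jump.
    """
    flags = {}
    for metric, value in current.items():
        history = trailing[metric]
        mu, sigma = mean(history), stdev(history)
        sigma = max(sigma, abs(mu) * 0.01 or 1.0)  # avoid divide-by-zero
        z = (value - mu) / sigma
        if abs(z) > z_threshold:
            flags[metric] = round(z, 1)
    return flags
```

A doubled-firing Meta tag shows up exactly this way: event counts roughly 2x the trailing mean, dozens of standard deviations out, while every other metric stays quiet.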

What did not work

1. Full campaign planning. Agents without deep context miss the brand nuances a planner has in their head. Output felt generic. Plans needed so much human editing that the time saving disappeared.

2. Autonomous posting to channels. We tried agents that drafted and published social posts. Too many off-brand, off-tone, or factually wrong outputs. Never worth the reputational risk.

3. Customer support automation labelled as "AI assistant". Enterprise-y, outside the marketing remit, but flagged here because clients keep asking about it. The brands that rolled it out saw customer satisfaction scores decline unless the handoff to a human was seamless. Almost none had seamless handoffs.

The pattern

Agents and Projects win when (a) the task has a clear human review step, (b) the context can be loaded in full (brand voice, past outputs, data), and (c) the output is structured. They lose when the task needs judgement that is not encoded in the context window.

Sources: four client engagements using Claude Projects and Custom GPTs / ChatGPT Agents, Q2-Q4 2025; Claude Projects launch; OpenAI Agents announcement, July 2025.

© 2026 8LAB. All loops reserved.
