The result before the theory
Shann's account had a name on it and a small following. There was nothing in place worth calling a system, just sporadic posts that did sporadic numbers.
Then he sat down and built the system below.
- 5M impressions in 2 weeks, from an account that had been near-silent
- 11M views and 100K bookmarks by month two
- A weekly iteration loop that's been compounding ever since
The same Content OS runs for his agency's clients via LunarStrategy. We're publishing this as a deep dive because the principles transfer to any creator or business serious about content automation tools and AI social media workflows, not just X.
The non-negotiable rule
Before any of the mechanics, the rule the whole system runs on:
Never publish unedited. Hand-finish every draft. The system is an accelerator, not autopilot. Used as automation, it decays.
The goal is not to fake a voice from prompts. It's to build a reusable operating asset from your real writing. Do the work once, keep it current, and every draft the model produces will start closer to you, so your time goes into sharpening ideas instead of filling in the obvious parts.
What βbookmarkableβ actually means
Shann's system optimises for one thing: bookmarkable content.
A bookmark is a small promise the reader makes to their future self. It says "I will need this again." That's a much higher bar than a like, and it behaves differently in the algorithm too. For a marketer running this every week, the pattern is consistent: bookmarks are a vote on future utility, and posts that earn them keep showing up in feeds long after the post date.
Before anything ships, the question is whether the post resembles one of these:
- a checklist
- a blueprint
- a folder structure
- a template
- a framework
- a step-by-step workflow
- a proof screenshot with a takeaway
- a before/after
- a reusable mental model
If a draft doesn't resemble any of those, it usually shouldn't get scheduled.
The system in one diagram
Shann doesn't run a single mega-prompt, and he doesn't run a stack of generic folders. He runs a system built around one idea: every piece of content is an object that carries its own state from idea to published.
┌────────────────────────────┐  ┌────────────────────────────┐
│ EXTERNAL: SIGNAL LAYER     │  │ INTERNAL: KNOWLEDGE GRAPH  │
│                            │  │                            │
│ X bookmarks, articles,     │  │ personal OS, notes,        │
│ transcripts, DMs,          │  │ journals, voice memos,     │
│ replies, competitor        │  │ owned content archive      │
│ posts                      │  │                            │
│                            │  │                            │
│ feeds: rewrite,            │  │ feeds: original,           │
│ research + ideate          │  │ repurpose                  │
└──────────────┬─────────────┘  └──────────────┬─────────────┘
               │                               │
               └───────────────┬───────────────┘
                               ▼
            ┌──────────────────────────────────────┐
            │ STRATEGY + VOICE + STORES            │
            │ positioning, voice profile,          │
            │ master avoid-slop, ideas, hooks,     │
            │ proof bank, feedback log             │
            └──────────────────┬───────────────────┘
                               │ feed into
                               ▼
            ┌──────────────────────────────────────┐
            │ PRODUCTION LEADER                    │
            │ opens run folder, routes via         │
            │ idea gate, enforces gates            │
            └──────────────────┬───────────────────┘
                               │ creates
                               ▼
            ┌──────────────────────────────────────┐
            │ RUN FOLDER (one per content object)  │
            │                                      │
            │ idea ──▶ brief ──▶ draft ──▶ verify  │
            │ ──▶ review ──▶ scheduler ──▶ feedback│
            └──────────────────┬───────────────────┘
                               │ updates on the way out
                               ▼
            ┌──────────────────────────────────────┐
            │ STORES                               │
            │ winners, losers, voice rules,        │
            │ banned patterns, hooks, proof        │
            └──────────────────────────────────────┘
Context lives in two places.
The signal layer is everything external you bring in: bookmarks you saved this week, content from creators on your monitor list, an article you liked.
The knowledge graph is everything internal you already own: your personal OS, notes, journals, voice memos, and the archive of content youβve already shipped.
The route decides which source feeds the brief. Strategy + Voice + Stores sits between both and the writer so context is curated, not dumped.
Every post, article, thread, or campaign opens as a new run folder. That folder is the content object. It pulls from the shared parts of the system, moves through a lifecycle of gates, and writes its learnings back when it ships.
Lifecycle of one content object
captured
  ↓ idea_review (route: original / repurpose / rewrite / research+ideate)
  ↓ brief_ready
  ↓ drafting
  ↓ verification
  ↓ draft_review
  ↓ approved
  ↓ scheduler_ready
  ↓ scheduled
  ↓ published
  ↓ feedback_24h
  ↓ feedback_72h
  ↓ learned
What sits around the run folder:
- Strategy. Positioning, audience, pillars, source watchlist. The only layer Shann edits by hand. "If you let an AI write your positioning, you do not have positioning. You have averages."
- Voice. Voice profile and master avoid-slop document. Read by every agent before it drafts a single line.
- Stores. Inbox for raw inputs, workboard for what needs attention, ideas backlog, hook bank, proof bank, feedback log. The shared memory the run folder reads from and writes back to.
- Modules. The writer skill (SKILL.md, references, templates). One module per role you give the system.
- Workflows. The playbooks that move a content object through its states: idea-to-published, verifier checklist, scheduler handoff, feedback loop.
The four routes
Before a content object enters drafting, the idea gate makes one decision: what kind of content is this?
Four routes, each with its own brief, its own references, and its own gates:
1. ORIGINAL. Create something drawn directly from you or your second brain (personal OS, notes, journals, voice memos, ideas you've been sitting on for weeks). The brief leans on your foundation: positioning, proof bank, pillars. No external source. High taste investment.
2. REPURPOSE. Take owned content and extend it. A series spinoff, a thread spun out of one of your articles, a self-QRT on a post that hit, or tweets that pull a single line from a piece you've already shipped. The spine is yours; the format changes.
3. REWRITE. Take external source material from the signal layer (a tweet worth responding to, an article worth a teardown, a transcript with a useful frame) and translate it through your point of view and voice. The brief is explicit about what to keep, what to credit, and which voice rules apply.
4. RESEARCH + IDEATE. Explore a topic, study patterns, generate candidate angles before any drafting starts. The output is not a post. It's a sharpened idea or a list of angles that feed back into stores/ideas/ for later production.
Each route still produces one run folder, with the route declared in content-object.md:
runs/active/2026-05-bookmark-flywheel/
  content-object.md    # route, current state, next action
  idea.md              # the idea gate decision
  brief.md             # writer handoff (for original, repurpose, rewrite)
  draft-package.md     # rendered draft, verifier output, review notes
  feedback.md          # 24h / 72h learnings
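For a concrete picture, a minimal content-object.md might look like the sketch below. The field names beyond route, state, and next action are illustrative guesses, not a fixed schema:

mkdir -p runs/active/2026-05-bookmark-flywheel
cat > runs/active/2026-05-bookmark-flywheel/content-object.md <<'EOF'
id: 2026-05-bookmark-flywheel
route: repurpose          # original / repurpose / rewrite / research+ideate
state: brief_ready        # one of the lifecycle states above
next_action: hand brief.md to the writer model
format: thread
pillar: content systems
EOF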
What came before this
Shann's first version was a four-agent system: researcher, idea maker, writer, and an orchestrator routing between them. Each agent had its own memory that persisted across sessions. The whole thing ran in Claude Code, with markdown prompts, a database, and CLI tools.
It worked. But it was overbuilt. Four agents for what is, structurally, three production steps and a feedback loop. Running it taught him what most blog posts on agent swarms leave out:
The agent count was not the lever. The knowledge layer feeding the writer was.
The current setup is the leaner version. Fewer agents, more workflows, the same loop, sharper output.
The folder you can build today
You don't need fancy infrastructure to start. You need a directory that holds the shared parts and a place where each content object lives until it ships.
/content-os
  /strategy
    positioning.md
    audience.md
    pillars.md
    source-watchlist.md
  /voice
    voice-profile.md
    master-avoid-slop.md
  /stores
    inbox.md
    workboard.md
    ideas/
    hooks/
    proof/
    feedback/
  /runs
    /active
      /2026-05-bookmark-flywheel
        content-object.md
        idea.md
        brief.md
        draft-package.md
        feedback.md
    /archive
  /modules
    /writer
      SKILL.md
      references/
      templates/
  /workflows
    idea-to-published-post.md
    verifier-checklist.md
    scheduler-handoff.md
    feedback-loop.md
runs/active/ is the heart. Each folder in there is one content object. One piece of content equals one run folder, and that folder carries its own state until it ships and gets archived.
Notion, Obsidian, a git repo, a shared drive β whatever you already use. The shape matters more than the tool.
V1 setup: 1-2 hours of upfront work
Plan for 1-2 hours of upfront work. You won't be done. You will be started, which is the only state worth being in. The time you spend here pays back in hours saved every week after, because the next draft starts with context instead of from scratch.
- Scaffold the structure. Create six top-level directories: strategy, voice, stores, runs, modules, workflows. Inside runs, add active/ and archive/. Leave the rest mostly empty for now. (A shell sketch that scaffolds the whole tree follows this list.)
- Fill strategy. Open positioning.md, audience.md, and pillars.md. Three to five lines each. Pillars are the three or four topics you've earned the right to talk about. Audience is one specific person, not a segment.
- Write the voice anchors. Drop voice/voice-profile.md (5 rules you always follow, 5 patterns you never use, 2-3 reference posts that sound like you on your sharpest day) and start your own voice/master-avoid-slop.md (use the eight patterns below as a starting filter, and add to it every time a draft slips a tell past you). Put ten concrete proofs in stores/proof/: numbers, names, projects you've shipped, lived examples.
- Drop ten ideas into stores/inbox.md. Half should come from things you've already said in DMs or calls this month, not made up on the spot.
- Open a run folder for one idea. Create runs/active/2026-05-{your-slug}/. Inside it, write content-object.md (id, status, format, pillar) and brief.md (the writer context packet template, below). Hand the brief to your writing model.
- Read the draft and close the loop. The model returns draft-package.md inside the same run folder. Check it against the verifier and the master avoid-slop document. Approve and queue, or send it back with one specific note. When it ships, write feedback.md and archive the folder.
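If you would rather scaffold the tree in one command than click folders into existence, a shell sketch like this creates everything the V1 setup needs (adjust the root path to taste):

#!/usr/bin/env bash
# Scaffold the Content OS directory tree described above.
set -euo pipefail

mkdir -p content-os/{strategy,voice,runs/{active,archive},workflows}
mkdir -p content-os/stores/{ideas,hooks,proof,feedback}
mkdir -p content-os/modules/writer/{references,templates}

# Empty placeholders for the foundation and workflow files.
touch content-os/strategy/{positioning,audience,pillars,source-watchlist}.md
touch content-os/voice/{voice-profile,master-avoid-slop}.md
touch content-os/stores/{inbox,workboard}.md
touch content-os/modules/writer/SKILL.md
touch content-os/workflows/{idea-to-published-post,verifier-checklist,scheduler-handoff,feedback-loop}.md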
The writer context packet
This is the part most people get wrong. They dump the whole brand doc, the whole knowledge base, and the whole feed into one prompt. The model writes safe mush because nothing in the context is load-bearing. A tight packet beats a giant context window almost every time.
The packet lives inside the run folder as brief.md. One packet per content object, written before drafting starts:
writer context packet
─────────────────────
thesis: one sentence the post must prove
reader: the specific person who should save it
proof: numbers, screenshots, stories I am allowed to use
angle: the unexpected framing
constraints: format, length, tone, banned phrases
voice anchors: 2-3 lines that sound like me
risks: what would make this read as slop or as cringe
open loops: what I do not yet know, that the writer should flag
The bookmarkability rubric
Before a draft goes near the schedule, score it. Zero, one, or two points each:
- saves the reader a future task
- includes proof (numbers, screenshot, named example)
- gives a reusable takeaway (template, checklist, frame)
- has a specific audience and job-to-be-done
- can be applied without you being in the room
- has a strong screenshot or visual
Out of 12. Shann's personal bar is 8. Below 8 it goes back to the packet, not to the trash. Most "bad" drafts are good drafts that skipped a row in the rubric. Fix the row, re-score, ship.
This rubric is also the cheapest way to train new collaborators. You don't have to teach taste in the abstract. You hand them the rubric, point at three winning posts, and let the score do the talking.
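If you want the arithmetic enforced rather than eyeballed, a small helper along these lines works; the 8-point bar is Shann's, so change it to yours:

#!/usr/bin/env bash
# Usage: ./score.sh 2 1 2 0 2 1   (six rubric rows, 0-2 points each)
bar=8
total=0
for row in "$@"; do
  total=$((total + row))
done
echo "score: $total / 12"
if (( total >= bar )); then
  echo "verdict: ship"
else
  echo "verdict: back to the packet; fix the weakest row and re-score"
fi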
The master avoid-slop document
The rubric tells you if a post is worth saving. The avoid-slop document tells you if a post sounds like a person wrote it.
Run every draft through one document before it ships. Shann's is 54 patterns, broken into three severity tiers, with concrete rewrites for each. It catches things like:
- promotional language ("groundbreaking", "game-changing")
- significance inflation ("pivotal moment", "testament to")
- vague attribution ("experts believe", "studies show")
- false agency ("the system compounds", "the data tells us")
- rhetorical setups ("the question is whether you X")
- staccato fragmentation ("no X. no Y. no Z.")
- em dash overuse (zero is the target)
- filler adverbs ("actually", "literally", "quietly")
This is the document the writer agent loads before drafting and the verifier loads before approval. It's the difference between "AI wrote this" and "a person who happens to use AI wrote this."
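A crude but useful pre-check is to keep the banned phrases one per line in a companion file and grep the draft before it goes to review. The file name here is a hypothetical convention, not part of Shann's setup:

# voice/banned-phrases.txt holds one banned phrase per line.
grep -i -n -f voice/banned-phrases.txt runs/active/2026-05-bookmark-flywheel/draft-package.md \
  && echo "slop patterns found: send the draft back" \
  || echo "no banned phrases matched: continue to review"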
Four prompts you can copy
Short, scoped, and meant to be edited. Treat them as starting shapes, not magic spells. Each one maps to one layer of the system.
Prompt 1: Brand foundation extraction
Maps to: strategy/ + voice/
ROLE
You are helping me build the foundation layer of a personal-brand
content system. You will turn raw, half-formed notes about my
work, audience, and voice into a tight set of operating documents
my writer agent can use to draft in my voice.
INPUT
I will paste raw notes covering: what I do, who I help, what I
have shipped or built, how I sound when I write, the kinds of
people I want as readers, and anything I refuse to sound like.
PROCESS
1. Read the notes. Note any contradictions or gaps.
2. Ask me up to 5 clarifying questions. Do not skip this step.
3. Once I answer, produce the six artifacts in OUTPUT FORMAT.
4. Flag anything you had to invent or guess. Mark it "assumed".
OUTPUT FORMAT
1. positioning. one sentence. the line a stranger should be able
to repeat back after one of my posts.
2. audience. one specific person, by role, situation, and stake.
not a segment.
3. pillars. 3 to 4 topics I have earned the right to talk about,
each with a one-line reason I am credible on it.
4. voice rules. 5 things I always do.
5. banned patterns. 5 things I never do.
6. proof bank. 10 concrete things I can reference (numbers,
names, shipped projects, lived examples).
RULES
- If a section is generic, mark it "missing" and tell me what
you need from me.
- Do not invent numbers, customers, or projects.
- Use my own words wherever possible.
- The output should fit on one page.
Prompt 2: Bookmarkability scoring
Maps to: stores/ideas/ (idea gate)
ROLE
You are a critic who has read 10,000 high-bookmark posts and
1,000,000 forgettable ones. You can tell, line for line, what
makes a reader save a post versus scroll past.
INPUT
A post idea or a draft.
PROCESS
1. Read it once for the surface read.
2. Score it 0, 1, or 2 on each row of the rubric below.
3. Total it out of 12.
4. If under 8, name the single row that would lift the score
most, and how.
RUBRIC
- saves the reader a future task
- includes proof (numbers, screenshot, named example)
- gives a reusable takeaway (template, checklist, frame)
- has a specific audience and job-to-be-done
- can be applied without me being in the room
- has a strong screenshot or visual
OUTPUT FORMAT
- Total score: X / 12
- Strongest row: [row] (why)
- Weakest row: [row] (specific fix, in one line)
- Verdict: ship / fix and re-score / kill
Prompt 3: Writer context packet
Maps to: runs/active/{slug}/brief.md
ROLE
You are the production lead for my content system. Turn one
approved idea into a writer context packet: a tight, shaped
brief that gives the writer enough to draft sharply without
flooding the model with my entire knowledge base.
INPUT
- One approved idea (thesis, format, target reader)
- Pointers to my foundation files (positioning, audience,
pillars, voice rules, banned patterns, proof bank)
- Any source material specific to this post
PROCESS
1. Restate the idea in one sentence to confirm you understood it.
2. Pull only the slices of my foundation files this post needs.
3. Fill in the packet template below.
4. For any field you can't fill, write "missing" and list
exactly what you need.
OUTPUT FORMAT
thesis: one sentence the post must prove
reader: the specific person who should save it
proof: numbers, screenshots, stories I am allowed to use
angle: the unexpected framing
constraints: format, length, tone, banned phrases
voice anchors: 2-3 lines that sound like me
risks: what would make this read as slop or cringe
open loops: what I do not yet know
RULES
- Smaller is better. Aim for 400-900 tokens.
- Do not paste my full foundation files.
- Refusal is allowed. If you don't have enough, say so and ask.
Prompt 4: The viral postmortem
Maps to: runs/active/{slug}/draft-package.md, the final pass before review
ROLE
It is one week from now, and you are reading a post that has
crossed 1M views and 10K bookmarks. You are not writing it. You
are explaining, after the fact, why it landed.
INPUT
A draft, ready for the final pass.
PROCESS
1. Read the draft.
2. Point at specific lines that did the work.
3. Name the hook move.
4. Name the proof that made it credible.
5. Name the line a reader would screenshot.
6. Name the line that made it save-worthy.
7. Name the line that would make someone reply or share.
OUTPUT FORMAT
- hook move: [exact line] (why it works)
- credibility: [exact line] (why a reader believes it)
- screenshottable line: [exact line]
- save-worthy line: [exact line]
- reply or share trigger: [exact line]
- weakest part: [exact line] (what to fix before shipping)
RULES
- Do not say "great post". Do not say "strong hook". Point at
specific lines or admit you can't.
- If you cannot point at a line for any of the categories above,
say so plainly. That is the row to fix before it ships.
This last one is the highest-leverage prompt in the system. The model can't say "strong hook" or "great insight." It has to point at exact lines. Most drafts that pass the verifier still fail this prompt, and the gap between the two is where the real edits live.
Two models, two roles
Once Shann started running the system at volume, he hit something obvious in retrospect:
The job of writing and the job of running the system are different jobs, and they reward different models.
So he stopped using one model for everything. The split:
Writer model handles:
- taste
- rhythm
- compression
- voice
- the actual draft
Orchestrator model handles:
- routing between layers
- packaging the right context for the writer
- deciding what gets passed in
- running the verifier
- the handoff to the publish layer
The specific model names will date fast. The principle won't: pick a high-taste, high-context model for drafting and a fast, tool-using model for orchestration.
Where the system lives
A system like this only works if the orchestrator can do the work. It needs to read and write files, call tools, run checks, and hand approved drafts to the publish layer on a schedule. The real question isn't which app you use; it's where the system lives.
Two setups that work cleanly:
- A VPS with Claude Code. Rent a small server, put your /content-os folder in a git repo, and let Claude Code run the workflow on it. Full control, a machine you own, scheduled jobs for the weekly review, scripts for the verifier, and an AI coder that can touch the same files you do. This is the route if you like owning the stack and you're comfortable on a server.
- An agent platform with persistent context. Built for this shape of work: agents, skills, tool access, file and git operations, browser and search, scheduled jobs, and context that persists across the whole workflow. Point it at your foundation, your inputs, and your packet template, and the orchestration layer carries the load between steps. Drafts still come to you before anything ships.
Neither is required. Host the system somewhere your orchestrator can read and write your files, call the tools it needs, run the verifier, and hand approved posts off to a publishing layer.
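On the VPS route, the scheduled jobs mentioned above can be as plain as a crontab entry. The paths and script name below are placeholders for whatever wraps your weekly review workflow:

# Run the weekly feedback review every Monday at 08:00, logging output.
0 8 * * 1  cd /home/you/content-os && ./workflows/weekly-review.sh >> logs/weekly-review.log 2>&1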
Postiz: the publish layer
Once a draft is approved, it goes into Postiz and gets queued.
One place to schedule across X, LinkedIn, Instagram, Threads, TikTok, YouTube, Bluesky, Reddit, Pinterest, Facebook, Discord, Slack, Telegram, Mastodon, Hashnode, Dev.to, Medium, WordPress, and more: 28+ channels in total. It is open source and self-hostable, and it was built for the agent shape of work, which is the part of the story that matters here.
Because the whole point of an AI content automation system is that the orchestrator agent can publish without you once you've approved the draft. That means three things have to be true of the publish layer:
- It exposes a stable, scriptable API the agent can call.
- The agent doesn't need to know each platform's quirks (X's character limits, Reddit's flair system, TikTok's media upload flow); the publish layer abstracts them.
- The same approved post can fan out across channels without re-formatting by hand.
Postiz handles all three. Here's what that looks like in practice if you're wiring it into your own Content OS.
Option A: The Postiz Public API
The simplest integration: hit one HTTPS endpoint with your approved draft, get back a scheduled post.
Base URL pattern:
- Cloud: https://api.postiz.com/public/v1
- Self-hosted: https://your-domain.com/api/public/v1
Auth is a single header (grab the key from Settings → Developers → Public API):
Authorization: your-api-key
A minimal scheduled-post call from the orchestrator looks like this:
curl -X POST https://api.postiz.com/public/v1/posts \
  -H "Authorization: $POSTIZ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "schedule",
    "date": "2026-05-15T10:00:00.000Z",
    "shortLink": false,
    "tags": [],
    "posts": [{
      "integration": { "id": "your-integration-id" },
      "value": [{ "content": "Hello world!", "image": [] }],
      "settings": { "__type": "x" }
    }]
  }'
Useful endpoints for an agent loop:
- GET /integrations - list connected accounts
- GET /integration-settings/:id - fetch the platform-specific schema (character limits, allowed media, helper tools)
- POST /upload - upload media (required for TikTok, Instagram, YouTube)
- POST /posts - schedule, draft, or publish
- GET /analytics/post/:postId - post-level analytics for the feedback loop
The integration-settings endpoint is the one most people miss. It's how your agent learns at runtime that an X post can be 280 characters, a LinkedIn post can be 3,000, and a Reddit post needs a flair before it'll accept the body. The orchestrator queries it, adapts the draft, and submits, with no hardcoded platform knowledge.
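In practice the lookup is one GET per connected account before the draft is adapted. The response shape varies by platform, so treat whatever fields your agent reads off it as something to confirm against the Postiz docs rather than hardcode:

curl -s https://api.postiz.com/public/v1/integration-settings/your-integration-id \
  -H "Authorization: $POSTIZ_API_KEY"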
Option B: The Postiz MCP server
If your orchestrator is an MCP-compatible client (Claude Desktop, Cursor, or anything wired up via the Model Context Protocol), the publish layer connects with one URL and zero glue code.
Cloud endpoint:
https://api.postiz.com/mcp/your-api-key
Drop it into Claude Desktop's config:
{
  "mcpServers": {
    "postiz": {
      "url": "https://api.postiz.com/mcp/your-api-key"
    }
  }
}
The server exposes the tools an agent actually needs to publish:
- integrationList - list connected accounts
- integrationSchema - discover platform rules, character limits, helper tools
- triggerTool - call platform-specific helpers (Reddit flairs, Discord channels, LinkedIn pages)
- schedulePostTool - schedule, draft, or publish (supports threads, comments, media)
- generateImageTool - generate AI images for posts
- generateVideoTool - generate videos (image-text-slides or Veo3)
For the orchestrator-as-publisher pattern, this is the cleanest interface. The agent discovers what's possible at runtime, doesn't have to maintain a hardcoded map of platform quirks, and the human stays in the loop on approval.
Option C: The Postiz Agent CLI
If your Content OS is shell-script-shaped (files in folders, cron, simple commands), the Postiz Agent CLI is the right entry point.
npm install -g postiz
postiz auth:login
Then a representative scheduled post:
postiz posts:create \
  -c "Check out this feature!" \
  -m "image.jpg" \
  -s "2026-05-15T10:00:00Z" \
  -i "twitter-id,linkedin-id,facebook-id"
The CLI outputs JSON, which means a scheduler workflow at the back of the Content OS can shell out to it and parse the result without any SDK boilerplate. Useful commands for the loop:
postiz integrations:list # connected accounts
postiz integrations:settings <id> # platform schema
postiz posts:create -c "main" -c "reply" -s "..." -i "id" # threads + comments
postiz upload <file> # media first, post second
postiz analytics:post <post-id> # post-level analytics
The same POSTIZ_API_KEY env var works across all three surfaces, and self-hosted vs cloud differs only in the base URL.
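A sketch of what the scheduler-handoff workflow could look like as a script is below. The approved-post.txt file and the .id field parsed from the CLI's JSON output are assumptions to verify against your own run folders and CLI version:

#!/usr/bin/env bash
# Hypothetical scheduler handoff: schedule an approved draft and record the result.
set -euo pipefail

run_dir="runs/active/2026-05-bookmark-flywheel"
content="$(cat "$run_dir/approved-post.txt")"   # final text pulled out of draft-package.md

result="$(postiz posts:create \
  -c "$content" \
  -s "2026-05-15T10:00:00Z" \
  -i "twitter-id")"

post_id="$(echo "$result" | jq -r '.id')"        # field name assumed; check your CLI output
echo "scheduled as post $post_id" >> "$run_dir/feedback.md"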
The feedback loop is a moat
Most people stop at publish. That's where the system starts earning.
Every week, look at:
- views
- bookmarks
- bookmark rate (the one to watch)
- replies
- DM follow-ups
Bookmark rate tells you which posts earned the save, not just the scroll. Winners get copied into inputs as examples next to their numbers. Losers update voice rules, banned patterns, or the idea filter. The next packet gets sharper because of what you learned this week.
This is the part that compounds. Six months in, your voice/ folder is genuinely yours. Your stores/proof/ is bigger than any prompt context window. Your master avoid-slop list catches things no off-the-shelf prompt knows to look for. The system becomes harder to copy than the posts it produces.
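If you want the weekly numbers pulled rather than copied by hand, the analytics endpoint from Option A covers it. The views and bookmarks field names below are assumptions about the response shape; check what your account actually returns:

analytics="$(curl -s "https://api.postiz.com/public/v1/analytics/post/$POST_ID" \
  -H "Authorization: $POSTIZ_API_KEY")"

views="$(echo "$analytics" | jq -r '.views')"           # field names assumed
bookmarks="$(echo "$analytics" | jq -r '.bookmarks')"

# Bookmark rate, the metric Shann watches most closely.
echo "$views $bookmarks" | awk '{ printf "bookmark rate: %.2f%%\n", ($2 / $1) * 100 }'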
What to build first if you're starting today
If you only build one thing this week, build the writer context packet template and run one idea through it end-to-end.
That's it. Not the four-prompt stack. Not the run folder. Not the orchestrator. The packet.
Because the packet is what forces you to answer questions you didn't realise you'd been skipping: who specifically saves this, what proof you're allowed to cite, which voice anchor lines this should sound like. Get that right once, and the rest of the system has a shape to grow into.
When you have drafts you're proud of and the part you're missing is the publish side (the queue, the schedule, the cross-channel fan-out, the agent-friendly API), that's where Postiz comes in.
Try Postiz
Postiz is the open-source social media management platform purpose-built for the AI content workflow we just walked through. Schedule across 28+ channels from one queue. Use the public API, the MCP server, or the CLI to plug it into your own orchestrator. Self-host it on your own infrastructure or use the hosted version. Either way, your agents can publish without you touching a dashboard, and you stay in control of the approval gate that matters.
Start scheduling with Postiz →