
Build an AI Social Media Agent with PostEverywhere (Full Tutorial)

Jamie Partridge
Founder·April 26, 2026·Updated April 28, 2026·15 min read
Architecture diagram of an autonomous AI social media agent calling the PostEverywhere API

When people say "AI agent for social media", they usually mean one of two things. The first is a chatbot wrapped around a vendor's UI — useful, limited. The second is what this guide is about: a real agent with two distinct layers. An intelligence layer (the LLM) decides what to post, when, and which platforms it belongs on. An execution layer (the PostEverywhere API) handles the OAuth tokens, platform quirks, media transcoding, and retry logic.

Get those two layers right and your agent stays simple. The LLM never has to understand the difference between Instagram's media endpoint and Threads' container model, because the API normalises that surface area to one POST /posts call. The API never has to make subjective decisions about content quality, because that is the LLM's job. Each layer does what it is good at.

This guide walks through three reference architectures, then builds one of them in working Node.js. By the end, you have an agent that picks the best time to post based on past engagement, drafts platform-specific content, and ships it. Around 60 lines of agent loop, deployable to Vercel, Cloudflare Workers, or AWS Lambda.

Edited by Jamie Partridge, Founder. Reviewed 26 April 2026.

Table of Contents

  1. Why Agents Need an API, Not Browser Automation
  2. Three Architectures for Social Agents
  3. Setup: API Key, SDK, Environment
  4. The Agent Loop: Working Code
  5. How the LLM Makes Decisions
  6. Deploying to Vercel, Cloudflare, or Lambda
  7. Handling Errors, Rate Limits, and Edge Cases
  8. FAQs

Why Agents Need an API, Not Browser Automation

A short detour because this question comes up every week. "Why do I need an API at all? Can't I just use Playwright and have my agent click buttons?"

You can. It will fail within a month. Three reasons:

Tokens and bot detection. Instagram, X, and TikTok all run heuristics on session behaviour. Headless browsers driven by Playwright fingerprint differently from real Chrome. Your agent will get session-flagged, then locked out, then shadow-banned. We have seen customers come over to the API after a Playwright fleet of 30 accounts went dark in one weekend.

Platform changes. Every platform ships UI changes monthly. Your selectors break. Your test suite goes red. You spend more time fixing scrapers than working on the agent's actual behaviour.

OAuth, refresh tokens, scopes. Instagram tokens expire. Threads requires a daily refresh in some cases. LinkedIn rotates session cookies aggressively. Doing this right across eight platforms is a full-time job — which is why companies like PostEverywhere exist.

The PostEverywhere API wraps all of that into one bearer-token interface. Your agent talks JSON, not DOM selectors. Reliability goes up and maintenance drops to near zero. The tradeoff is the API price ($19/mo on Starter); for any agent doing real volume, that is recouped in the first day of avoided debugging.

Three Architectures for Social Agents

There is no single "right" way to build this. Pick the one that matches how the agent fits into your stack.

Architecture 1: MCP via Claude Code (simplest)

You are already coding with Claude. You install @posteverywhere/mcp (source on GitHub), drop the JSON into your .claude/mcp.json, and Claude becomes the agent. No server, no cron, no deployment. You ask Claude to plan and post; it calls the 11 MCP tools and reports back.

Best for: solo founders, content marketers who code, anyone who already lives in Claude Code or Claude Desktop. Setup time: 5 minutes.

Tradeoff: there is no autonomous schedule. The agent only runs when you prompt it. Fine if "running the agent" means "checking it once a day". Read the full setup in our Claude Code MCP guide.
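A minimal .claude/mcp.json entry might look like the following — the npx launch command and env var name are assumptions based on common MCP server conventions, so check the MCP setup guide for the exact keys:

```json
{
  "mcpServers": {
    "posteverywhere": {
      "command": "npx",
      "args": ["-y", "@posteverywhere/mcp"],
      "env": {
        "POSTEVERYWHERE_API_KEY": "pe_live_..."
      }
    }
  }
}
```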

Architecture 2: Custom agent + Node SDK (most control)

You write a Node.js script that imports the @posteverywhere/sdk SDK, calls an LLM (OpenAI, Anthropic, Gemini, Ollama, your choice), and runs on a cron. The agent has full control: which model, which prompt template, what data to pass in, how to handle errors. You can store performance history, run A/B tests, branch logic on platform.

Best for: teams who want a long-lived production agent, agencies running this for clients, products that have an "auto-post" feature. Setup time: a day for the basics, a week for production-grade.

Tradeoff: more code to maintain. You are wiring up the LLM call, the API call, and the storage layer yourself. This guide builds exactly this.

Architecture 3: LangChain / CrewAI / AutoGen + REST (any framework)

If your team is already on an agent framework, the REST API drops in as a tool. LangChain has a Tool primitive that takes an HTTP function. CrewAI exposes the same pattern. AutoGen lets you register functions as agent capabilities. You wrap each PostEverywhere endpoint (/posts, /accounts, /media) as a tool and let the framework handle the orchestration loop.

Best for: teams already using an agent framework for other workflows, products doing multi-step planning where social media is one of many tools.

Tradeoff: you are picking up the framework's complexity, not just the API's. If you are not already on LangChain, do not adopt it just for this — Architecture 2 is lighter.
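If you want to see the shape before committing to a framework, the tool contract is the same everywhere: a name, a description the planner reads, and an HTTP function. A framework-agnostic sketch — the base URL is a placeholder, and LangChain's tool helpers, CrewAI tools, and AutoGen function registration each have their own wrapper for exactly this shape:

```typescript
type AgentTool = {
  name: string;
  description: string; // what the planner reads when choosing a tool
  run: (input: Record<string, unknown>) => Promise<unknown>;
};

// Wrap one REST endpoint as a tool. Adapt the shape to your framework's
// tool signature; the HTTP function stays the same.
function makePostTool(
  apiKey: string,
  baseUrl = "https://api.posteverywhere.example/v1", // placeholder — use the real base URL
): AgentTool {
  return {
    name: "create_post",
    description:
      "Schedule a social post. Input: { content, account_ids, scheduled_for }",
    run: async (input) => {
      const res = await fetch(`${baseUrl}/posts`, {
        method: "POST",
        headers: {
          Authorization: `Bearer ${apiKey}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify(input),
      });
      if (!res.ok) throw new Error(`create_post failed: ${res.status}`);
      return res.json();
    },
  };
}
```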

Setup: API Key, SDK, Environment

The rest of this guide builds Architecture 2.

Get an API key from the developer dashboard. The free trial includes API access. Add the ai scope if you want image generation.

# .env
POSTEVERYWHERE_API_KEY=pe_live_a1b2c3d4e5f67890abcdef1234567890
OPENAI_API_KEY=sk-proj-...

Install the SDK and a few helpers:

npm init -y
npm install @posteverywhere/sdk openai dotenv

Project skeleton:

agent/
  index.ts        # entrypoint
  decide.ts       # LLM-driven decision logic
  schedule.ts     # PostEverywhere API calls
  history.json    # persisted engagement data
  .env

The SDK is a thin typed wrapper around the REST API. One client, one bearer token, methods for every endpoint:

import { PostEverywhere } from '@posteverywhere/sdk';

const client = new PostEverywhere({
  apiKey: process.env.POSTEVERYWHERE_API_KEY!,
});

const accounts = await client.accounts.list();
const post = await client.posts.create({
  content: 'Hello world',
  account_ids: [12, 34, 56],
});

You can read the source on GitHub — it is small enough to skim in 10 minutes if you want to know exactly what the requests look like before adopting it.

The Agent Loop: Working Code

The agent does four things: read connected accounts, look at past performance, decide what and when to post, and ship it. Here it is end to end.

schedule.ts — the execution layer

// schedule.ts
import { PostEverywhere } from '@posteverywhere/sdk';

const client = new PostEverywhere({
  apiKey: process.env.POSTEVERYWHERE_API_KEY!,
});

export async function listAccounts() {
  const { data } = await client.accounts.list();
  return data; // [{ id: 12, platform: 'instagram', username: '...' }, ...]
}

export async function recentPosts(daysBack = 14) {
  const since = new Date();
  since.setDate(since.getDate() - daysBack);
  const { data } = await client.posts.list({
    status: 'published',
    since: since.toISOString(),
  });
  return data;
}

export async function schedulePost(opts: {
  content: string;
  account_ids: number[];
  scheduled_for: string; // ISO 8601 UTC
  timezone?: string;
  media_ids?: string[];
  platform_content?: Record<string, { content: string }>;
}) {
  const { data } = await client.posts.create(opts);
  return data; // { id, status: 'scheduled', scheduled_for, ... }
}

Three thin wrappers around the SDK. The TypeScript signatures match the CreatePostRequest schema from the OpenAPI spec: content and account_ids are required, the rest are optional. platform_content lets you override per platform — useful when you want a different caption on LinkedIn versus X.

decide.ts — the intelligence layer

// decide.ts
import OpenAI from 'openai';
import { recentPosts } from './schedule';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Plug in your own engagement source here — PostEverywhere's GET /posts/{id}/results
// returns per-platform publish status and URLs, but not engagement rate. For engagement
// you query each platform's native analytics, store it yourself, or use a third-party
// analytics provider. This stub returns whatever shape your data warehouse uses.
async function fetchEngagement(postId: string, platform: string): Promise<number> {
  // Replace with your real analytics call — examples: a Postgres query, the platform's
  // Insights API, or your own metrics service.
  return 0;
}

export async function decideNextPost() {
  const history = await recentPosts(14);

  // Group published posts by hour-of-day + platform, then average engagement
  const byHour: Record<string, number[]> = {};
  for (const post of history) {
    if (!post.published_at) continue;
    const hour = new Date(post.published_at).getUTCHours();
    const key = `${post.platform}:${hour}`;
    if (!byHour[key]) byHour[key] = [];
    const engagement = await fetchEngagement(post.id, post.platform);
    if (engagement > 0) byHour[key].push(engagement);
  }

  // Best hour per platform
  const bestHours: Record<string, number> = {};
  for (const [key, rates] of Object.entries(byHour)) {
    const [platform, hour] = key.split(':');
    const avg = rates.reduce((a, b) => a + b, 0) / rates.length;
    // Check the score key, not the hour: hour 0 (midnight) is falsy and
    // would otherwise be overwritten by a worse-performing slot.
    if (bestHours[`${platform}_score`] === undefined || avg > bestHours[`${platform}_score`]) {
      bestHours[platform] = parseInt(hour, 10);
      bestHours[`${platform}_score`] = avg;
    }
  }

  // Ask the LLM to draft platform-specific content
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o',
    response_format: { type: 'json_object' },
    messages: [
      {
        role: 'system',
        content: `You are a social media strategist. Generate a post about
PostEverywhere's API. Return JSON with keys: shared_content (string, the
core message), platform_overrides (object with keys instagram, linkedin,
x, threads — each a platform-tailored version).`,
      },
    ],
  });

  const content = JSON.parse(completion.choices[0].message.content!);

  return {
    content: content.shared_content,
    platform_content: content.platform_overrides,
    bestHours,
  };
}

decideNextPost pulls the last fortnight of published work via recentPosts(14), computes the best-performing hour per platform, and then asks the LLM to draft the actual content. You could swap GPT-4o for Claude Sonnet 4.6 or Gemini 2.5 with one line — the rest of the agent does not care which model does the drafting.

index.ts — the loop

// index.ts
import 'dotenv/config';
import { listAccounts, schedulePost } from './schedule';
import { decideNextPost } from './decide';

async function run() {
  console.log(`[${new Date().toISOString()}] agent starting`);

  // 1. Discover what we can post to
  const accounts = await listAccounts();
  console.log(`  ${accounts.length} accounts connected`);

  // 2. Decide content + best time per platform
  const decision = await decideNextPost();
  console.log(`  decision: ${decision.content.slice(0, 80)}...`);

  // 3. Map best hour per platform to a real timestamp
  const targetUtc = new Date();
  targetUtc.setUTCDate(targetUtc.getUTCDate() + 1); // tomorrow
  const platformHours = Object.entries(decision.bestHours)
    .filter(([k]) => !k.endsWith('_score')) // drop the score bookkeeping keys
    .map(([, h]) => h as number);
  const avgHour = platformHours.length
    ? Math.round(platformHours.reduce((sum, h) => sum + h, 0) / platformHours.length)
    : 14; // fallback 2pm UTC when there is no engagement history yet
  targetUtc.setUTCHours(avgHour, 0, 0, 0);

  // 4. Ship it
  const result = await schedulePost({
    content: decision.content,
    account_ids: accounts.map((a: any) => a.id),
    scheduled_for: targetUtc.toISOString(),
    platform_content: decision.platform_content,
  });

  console.log(`  scheduled post ${result.id} for ${result.scheduled_for}`);
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});

Run it with npx tsx index.ts. You will see one HTTP request to GET /accounts, one to GET /posts?status=published, one to OpenAI, and one to POST /posts. Total runtime on a normal connection: 4 to 8 seconds.

That is the whole agent. The "intelligence" is in decide.ts; the "execution" is in schedule.ts. Each layer is independently testable. Swap the LLM, swap the API, neither cares.

How the LLM Makes Decisions

Worth slowing down on this part because it is where most agents go wrong.

You do not want the LLM picking endpoints, timestamps, or account IDs. Those are deterministic, structured decisions — the API and your code handle them. The LLM is good at exactly two things in this loop:

1. Drafting content. Generating a Threads post that sounds different from a LinkedIn post that sounds different from an X reply. This is unambiguously LLM territory.

2. Choosing topic from history. Given that "scheduling tips" got 4.2% engagement and "case studies" got 1.1%, lean into scheduling tips next week. You feed the LLM a structured summary of past performance and ask it to recommend a topic. You do not let the LLM compute averages — your deterministic, typed code does that.

The pattern in decide.ts is intentional: deterministic stats from recentPosts(), then an LLM call for content. The reverse (LLM does stats, deterministic code drafts content) is the wrong split and produces bad agents.
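As a sketch of what that structured summary might look like — the topic labels and shape here are hypothetical, standing in for whatever your storage layer actually tracks:

```typescript
type TopicStat = { topic: string; posts: number; avgEngagement: number };

// Render past-performance stats into a compact block for the system prompt.
// The LLM sees conclusions-ready numbers; it never computes them itself.
function performanceSummary(stats: TopicStat[]): string {
  const lines = [...stats]
    .sort((a, b) => b.avgEngagement - a.avgEngagement)
    .map(
      (s) =>
        `- ${s.topic}: ${s.posts} posts, avg engagement ${(s.avgEngagement * 100).toFixed(1)}%`,
    );
  return `Past 14 days, by topic (best first):\n${lines.join("\n")}`;
}
```

The rendered string goes into the system message alongside the drafting instructions, so the topic recommendation is grounded in numbers your code computed.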

A second principle: the LLM should never have raw API access. Give it structured inputs and accept structured outputs. JSON in, JSON out. Validated against a schema (Zod, valibot, Joi — pick one). If the LLM hallucinates a platform name, your validator catches it before the API does.
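A dependency-free sketch of that validation step, matching the JSON shape the decide.ts prompt asks for (a Zod schema would replace this with a declaration, but the checks are the same):

```typescript
const ALLOWED_PLATFORMS = new Set(["instagram", "linkedin", "x", "threads"]);

type Draft = {
  shared_content: string;
  platform_overrides: Record<string, { content: string }>;
};

// Reject any LLM output that does not match the expected shape —
// including hallucinated platform keys — before it reaches the API.
// Accepts overrides either as plain strings or as { content } objects.
function parseDraft(raw: string): Draft {
  const obj = JSON.parse(raw);
  if (typeof obj.shared_content !== "string" || obj.shared_content.length === 0) {
    throw new Error("missing shared_content");
  }
  const overrides: Draft["platform_overrides"] = {};
  for (const [platform, value] of Object.entries(obj.platform_overrides ?? {})) {
    if (!ALLOWED_PLATFORMS.has(platform)) {
      throw new Error(`unknown platform from LLM: ${platform}`);
    }
    const content = typeof value === "string" ? value : (value as any)?.content;
    if (typeof content !== "string") throw new Error(`bad override for ${platform}`);
    overrides[platform] = { content };
  }
  return { shared_content: obj.shared_content, platform_overrides: overrides };
}
```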

Skip the auth and platform code entirely. PostEverywhere handles tokens, refresh, retries, transcoding. Your agent stays small. Get your API key.

Deploying to Vercel, Cloudflare, or Lambda

The agent is ~150 lines of TypeScript. Deploy options, in increasing complexity:

Vercel Cron (easiest)

Vercel supports cron-triggered serverless functions on the Pro plan. Drop the agent into app/api/agent/route.ts, add this to vercel.json:

{
  "crons": [
    {
      "path": "/api/agent",
      "schedule": "0 13 * * *"
    }
  ]
}

Set environment variables in the Vercel dashboard. Done. Your agent runs daily at 1pm UTC. Free if you stay under the cron quota; otherwise it's part of the Pro plan.

Cloudflare Workers (cheapest at scale)

Cloudflare Workers + Cron Triggers handle this for $5/mo. The Workers runtime is V8, not Node, so the SDK has to support fetch natively — ours does, and the OpenAI SDK has a Workers-compatible build. Wrangler config:

# wrangler.toml
name = "social-agent"
main = "src/index.ts"
compatibility_date = "2026-04-01"

[triggers]
crons = ["0 13 * * *"]

Read the Cloudflare Cron Triggers docs for the exact handler signature. KV or D1 for persistence if you want to store performance history.

AWS Lambda (most flexible)

Lambda + EventBridge Scheduler is the right move if your team is already on AWS. Package the agent as a single zip, create a scheduled rule, point it at the Lambda. Use Parameter Store or Secrets Manager for the API keys, RDS or DynamoDB for state.

This is overkill for a single agent. Worth it for an agency running 50 client agents on staggered schedules.

Node + cron on a server (simplest if you own one)

If you already run a server, a cron line is the lightest possible deployment:

0 13 * * * cd /opt/social-agent && /usr/bin/node dist/index.js >> /var/log/social-agent.log 2>&1

PM2 if you want process management. systemd timers if you prefer system-native.
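If you prefer system-native, the systemd equivalent is two small units (paths and names here are hypothetical):

```ini
# /etc/systemd/system/social-agent.service
[Unit]
Description=Daily social media agent run

[Service]
Type=oneshot
EnvironmentFile=/opt/social-agent/.env
ExecStart=/usr/bin/node /opt/social-agent/dist/index.js

# /etc/systemd/system/social-agent.timer
[Unit]
Description=Run social-agent daily at 13:00 UTC

[Timer]
OnCalendar=*-*-* 13:00:00 UTC
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with systemctl enable --now social-agent.timer; Persistent=true catches up on a missed run after a reboot.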

Handling Errors, Rate Limits, and Edge Cases

What goes wrong in production:

429 rate limited. PostEverywhere's API limits are 60/min, 1,000/hour, 10,000/day. The response includes Retry-After (in seconds) and X-RateLimit-Reset (Unix timestamp). The SDK throws a typed RateLimitError; catch it, sleep, retry. For a single-account agent posting once a day, you will not hit this. For an agency-scale agent, implement exponential backoff with jitter.

402 insufficient credits. Returned when AI image generation runs out of credits. Either upgrade the plan or fall back to text-only posts. Catch the error, log it, ship without the image.

401 unauthorized. API key revoked or expired. Hard fail — alert a human. Do not retry.

404 on account ID. Account was disconnected since you last cached the list. Refetch GET /accounts, retry once. Alert if it happens repeatedly.

Content too long for X. PostEverywhere's API rejects with a 400 if your content exceeds platform limits. Use platform_content to send a shorter version to X specifically.

Token expired on a connected platform. Caught by the publishing layer, surfaces as a failed status on GET /posts/{id}/results. Your agent should poll get_post_results after publish time, identify failures, alert the user to reconnect.

A production-ready agent wraps every API call in a typed error handler. The simplest version:

try {
  return await client.posts.create(opts);
} catch (err: any) {
  if (err.status === 429) {
    // Respect Retry-After; default to 5s if the header is missing or unparsable
    const waitSec = parseInt(err.headers?.['retry-after'] ?? '', 10) || 5;
    await new Promise((r) => setTimeout(r, waitSec * 1000));
    return await client.posts.create(opts);
  }
  if (err.status === 402) {
    // out of AI credits — strip media, retry text-only
    return await client.posts.create({ ...opts, media_ids: [] });
  }
  throw err; // unhandled — fail loudly
}

The full ApiEnvelope shape ({ data, error, meta: { request_id, timestamp } }) means every failure has a request_id you can include in support tickets if something genuinely breaks at the platform level.

FAQs

Can I use a different LLM than OpenAI?

Yes. The decide.ts module is the only place that touches the LLM. Swap openai for @anthropic-ai/sdk, @google/generative-ai, or any provider with a chat-completions style API. The rest of the agent is unaffected. Many teams use Claude Sonnet 4.6 for content drafting because it tends to produce more natural social copy. See our comparison of best social media APIs for tooling notes.

How is this different from Architecture 1 (MCP)?

The MCP version is interactive: Claude waits for your prompts, then calls the API. This version is autonomous: it runs on a cron, no human in the loop. Same API, different control flow. Most teams build both: MCP for ad-hoc work in Claude Code, this kind of agent for the daily schedule. Read the MCP setup guide.

What happens if my agent posts something off-brand or wrong?

The API supports a draft mode. Set status: 'draft' on create_post and it stays in your dashboard for human review instead of going live. Wire your agent to draft mode for the first two weeks while you tune prompts; flip to live publishing once you trust the output. The PostEverywhere dashboard shows draft and scheduled posts in one calendar view.

Can I run multiple agents for different brands?

Yes. Each brand gets its own PostEverywhere account (or a sub-workspace on a higher tier), each with its own API key. Your agent code is one codebase that loops through clients, using the matching key per loop iteration. Agencies running this pattern sign up to the higher-volume plans for the credit allocation; the per-call API cost is the same.
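The loop itself can be as small as this sketch — the BRANDS config and env var names are hypothetical stand-ins for wherever you store per-client keys:

```typescript
// One codebase, many brands: each entry carries its own API key, and the
// agent run is parameterised by the brand it receives.
type BrandConfig = { name: string; apiKey: string };

const BRANDS: BrandConfig[] = [
  { name: "acme", apiKey: process.env.ACME_PE_KEY ?? "" },
  { name: "globex", apiKey: process.env.GLOBEX_PE_KEY ?? "" },
];

async function runAll(runForBrand: (brand: BrandConfig) => Promise<void>) {
  // One brand failing must not block the others; log and continue.
  const results = await Promise.allSettled(BRANDS.map((b) => runForBrand(b)));
  results.forEach((r, i) => {
    if (r.status === "rejected") {
      console.error(`[${BRANDS[i].name}] agent run failed:`, r.reason);
    }
  });
}
```

Inside runForBrand you construct the SDK client with brand.apiKey and reuse the same agent loop from earlier.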

Do I need to handle media uploads myself?

The two-step media upload (POST /media/upload then POST /media/{id}/complete) is for cases where you have an existing image or video. If you want AI-generated images, the /ai/generate-image endpoint returns a media ID directly, and you pass that into create_post.media_ids. No upload needed. See the AI image generator for what the dashboard side looks like.

How do I track what the agent has done?

Every API response includes a meta.request_id. Log it. Combine with the post ID returned from create_post and you can trace any post back to the request that created it. For analytics on actual published performance, poll GET /posts/{id}/results 24 hours after publish — that is when most platforms have finalised metrics.
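A minimal tracing helper built on that envelope shape (the generic id field is an assumption; adapt to whatever your responses actually carry):

```typescript
type ApiEnvelope<T> = {
  data: T;
  error: unknown | null;
  meta: { request_id: string; timestamp: string };
};

// Log every write with its request_id so any post can be traced back
// to the exact API request that created it.
function traced<T extends { id?: string | number }>(
  label: string,
  envelope: ApiEnvelope<T>,
): T {
  console.log(
    `[agent] ${label} id=${envelope.data?.id ?? "n/a"} request_id=${envelope.meta.request_id}`,
  );
  return envelope.data;
}
```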

Is this overkill for a single-account user?

Probably yes. If you have one Instagram, one LinkedIn, and one X account, the PostEverywhere dashboard and calendar cover 95% of what you would build. The custom agent pattern wins when you have specific business logic — brand-tone enforcement, multi-client scheduling, integration with your CRM, anything an off-the-shelf product cannot reach.

Can I use this with Cursor or other AI IDEs?

For interactive use, yes — install the @posteverywhere/mcp package and Cursor can drive it the same way Claude Code does. For autonomous, scheduled use, the SDK pattern shown above is what you want. Both share a backend, so you can prototype interactively and ship the autonomous version without rewrites.

What to Build Next

You have the agent loop. The cleanest extensions:

  • Wire performance tracking back into the prompt (the "closing the loop" step in our older build a social media agent guide)
  • Add an approval workflow with Slack notifications for human-in-the-loop oversight
  • Read the Claude Code MCP guide for the interactive companion to this autonomous agent
  • Browse the API documentation for endpoints not used here (webhooks, team management, bulk operations)
  • Compare against PostEverywhere vs Hootsuite API and the Buffer API migration guide if you are still picking a vendor

Most production agents end up smaller than the version you start with. Strip out anything that does not deliver clear value, lean on the API for everything platform-related, and let the LLM do the parts only it can do.

Written by Jamie Partridge

Founder & CEO of PostEverywhere. Writing about social media strategy, publishing workflows, and analytics that help brands grow faster.
