Part 1 of 7 · Review responder series ~5 min read

A review responder on AWS for a few dollars a month

Your business has thirty new reviews this month. Twenty-two are positive, four are mixed, three are angry, and one mentions a name and a date you don’t remember. You meant to reply to all of them. You replied to two. Here’s how to design a small responder that watches every new review across your platforms, drafts replies in your voice from your own policies, posts the safe ones automatically, and hands you the rest with everything you need to respond in one sitting.

Key takeaways

  • Three review sources, three AWS pieces. Google Business Profile, Facebook, and Yelp fold into one internal queue.
  • Every review ends in one of four moves: auto-reply, draft, escalate, ignore. There is no fifth.
  • The composer writes only from your voice file and your policies file. It never invents a refund window, a phone number, or a promise.
  • Anything 1- or 2-star, safety-flagged, or that names a staff member skips auto-reply and pings a human with a draft and the matched policy excerpt attached.
  • Runs on AWS for about $3/month at typical small-business review volume.

The whole system on one page

Before any code, here’s the shape of what we’re building.

System architecture: four outside surfaces, three inside AWS At the top, four external surfaces in a row. Far left, "Your review sources" — Google Business Profile, Facebook page reviews, Yelp business listing. Centre-left, "Your voice and policies" — a Drive folder with three short docs covering tone, what you can promise, and what you must never say. Centre-right, "Your team" — where escalations land, the inbox or Slack channel you already use. Far right, "Your themes log" — a small Drive sheet where recurring complaints accumulate for monthly review. Each connects via an arrow to the AWS account container below. Review sources has a two-way arrow representing both incoming reviews and outgoing replies. Voice and policies feeds in. Team receives escalations with the original review and a draft reply attached. Themes log receives recurring complaint categories. Inside the AWS account are three components in a row, mirroring the layout above. On the left, the Review intake — receives reviews from each platform, deduplicates, screens out spam. In the middle, the Responder — reads the review, picks one of four moves: auto-reply, draft, escalate, or ignore. On the right, the Dispatch and learning — posts the reply or queues it for approval, escalates anything serious, and increments themes for the monthly log. Internal arrows flow left to right. A note at the bottom reads: every review ends in one of four outcomes, and the responder never invents a fact about your business. 
Fig 1. Four outside surfaces, three pieces inside AWS. Reviews flow in from your platforms, the Responder picks one of four moves, and replies go back out under the same accounts they were left under.

What you set up once (the outside)

  • Your review sources — Google Business Profile, your Facebook page, your Yelp listing, connected through each platform’s official integration. The responder polls or receives push events, never scrapes. You stay one click away from disconnecting any source if you don’t like a reply that went out.
  • A voice-and-policies folder — three short Google Docs in a Drive folder. How you sound (warm, brief, never defensive; example openers; signature line). What you can promise (refund window, replacement policy, escalation phone number, hours). What you must never say (specific phrases, comparative claims, anything legal or medical, any commitment beyond the policies file). Edit a doc, the responder picks up the change on the next refresh; no deploy.
  • A team destination — the inbox, Slack channel, or shared queue you already use. Anything 1- or 2-star, anything that mentions a staff name, anything that hits a safety or legal keyword, anything the responder isn’t confident about, lands here with the original review, the proposed draft, and the matching policies excerpt attached.
  • A themes log — a small Drive sheet where recurring topics accumulate. “Wait time at the front desk” mentioned in seven reviews this quarter is one row in your operations meeting, not seven separate replies that don’t fix anything.
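The voice-and-policies folder can be pictured as a small config object the responder re-reads on a schedule. The sketch below is illustrative only: the field names, values, and the refund check are assumptions, not a spec — in the real setup this content lives in three Google Docs, and editing a doc changes behaviour without a deploy.

```python
from dataclasses import dataclass, field

# Illustrative only: field names and values are assumptions, not a spec.
# In the real setup these come from three short Google Docs in a Drive
# folder, re-read on the next refresh so edits need no deploy.

@dataclass
class VoiceAndPolicies:
    # How you sound
    tone: str = "warm, brief, never defensive"
    openers: list = field(default_factory=lambda: ["Thanks so much for coming in"])
    signature: str = "— The team"
    # What you can promise
    refund_window_days: int = 14
    escalation_phone: str = "the number on your receipt"
    # What you must never say
    banned_phrases: list = field(default_factory=lambda: [
        "guaranteed", "best in town", "legal", "medical"
    ])

def can_promise_refund(policies: VoiceAndPolicies, days_since_visit: int) -> bool:
    """The composer checks the policies file before promising anything."""
    return days_since_visit <= policies.refund_window_days
```

The point of the shape: the composer never has a place to invent a refund window, because the only refund window it can see is the one you wrote down.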

What runs on every review (the inside)

  • The review intake — receives or polls each platform, deduplicates by review ID so the same review is never processed twice, screens out obvious junk (profanity floods, off-topic links, banned-words spam), and writes the cleaned review to a single internal queue. By the time the responder sees a review, it’s in one shape regardless of which platform it came from.
  • The responder — for every queued review, reads the rating and the body, extracts what the customer is praising or complaining about, and picks one of four moves: auto-reply (the obvious thank-yous), draft for review (anything with a specific complaint or named staff), escalate (anything 1-star, safety, or legal), or ignore (already replied, off-topic, junk that slipped past intake). When it composes, it pulls only from your voice file and your policies file — never invents a refund window, a phone number, or a promise.
  • Dispatch and learning — on auto-reply, posts through the platform’s API and marks the review answered. On draft, drops a package in your inbox or Slack: original review, proposed reply, matched policy excerpt, and a one-line reason it’s a draft and not auto. On escalate, the same package, marked urgent. And on every review, regardless of move, extracts the themes the customer mentioned and increments them in the monthly log.
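The four-move routing above can be sketched as one small function. Everything here is illustrative: the keyword lists, the review field names, and the complaint heuristic are assumptions, and a real responder would lean on a model call rather than substring checks (which would happily match "but" inside "butter").

```python
# Illustrative sketch of the four-move routing. Keyword lists, field
# names, and the complaint heuristic are assumptions, not the article's
# actual implementation.

SAFETY_KEYWORDS = {"food poisoning", "injury", "lawyer"}           # assumed list
COMPLAINT_CUES = {"but", "however", "disappointed", "wait", "cold", "rude"}

def pick_move(review: dict, staff_names: list[str], seen_ids: set[str]) -> str:
    """Route one normalized review to auto, draft, escalate, or ignore."""
    if review["id"] in seen_ids:            # intake dedupe: already handled
        return "ignore"
    body = review["body"].lower()
    # 1-star or safety/legal language bypasses everything.
    if review["rating"] <= 1 or any(k in body for k in SAFETY_KEYWORDS):
        return "escalate"
    # 2-star or a named staff member never auto-posts.
    if review["rating"] == 2 or any(n.lower() in body for n in staff_names):
        return "draft"
    # Obvious thank-yous: high rating, no complaint cue.
    if review["rating"] >= 4 and not any(c in body for c in COMPLAINT_CUES):
        return "auto"
    return "draft"                          # anything specific gets a human eye
```

Run the three reviews from the walkthrough below through this and you get the three moves you'd expect: the 5-star "great coffee" routes to auto, the 3-star disappointing-dish review routes to draft, and the 1-star mentioning food poisoning escalates.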

In plain words

A new review lands. The cloud reads it within a minute, decides “this is a 5-star thanks-for-the-coffee, my voice file says ‘warm and brief’ on these and the policies file doesn’t apply,” and posts the reply automatically. Another review lands. This one is 3-star and mentions a specific dish that disappointed; the cloud writes a draft that acknowledges the complaint and offers a return-visit credit (which is in your policies file), and drops the draft in your inbox. You glance at it, change one word, hit send. A third review lands and it’s 1-star with a phrase like “food poisoning”; the cloud doesn’t try to compose a reply at all. It marks the review urgent, drafts a single-sentence acknowledgement — “We’re sorry, we’d like to look into this; please reply with your visit details or call the number on your receipt” — and sends the whole package to your manager. You see all three, in one place, at the time of day you choose to deal with reviews.

Total cost runs in coffee-money territory at typical small-business volume — pennies per review, dominated by the few model calls per draft.

Design rules that shaped every decision

  • The responder composes from your voice file and your policies file only — never invents a refund window, a phone number, or a promise you don’t offer.
  • Four moves, always. Auto-reply, draft, escalate, ignore. There is no fifth.
  • Negative or specific reviews never auto-post. Drafts are for you; auto is for the obvious thank-yous.
  • Safety-and-legal keywords (food poisoning, injury, lawyer, refund-stalled, named-staff complaint) bypass everything and escalate immediately.
  • Configuration lives in Drive. Tweaking your tone or your refund window doesn’t need a deploy.
  • Recurring themes come back as a list, not as N replies. Use the list to fix the underlying problem.
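The last rule needs nothing fancier than a counter keyed by theme. A minimal sketch, assuming the responder has already extracted theme labels per review — the labels are illustrative, and the real log lives in a Drive sheet, not in memory:

```python
from collections import Counter

# Illustrative: theme labels per review come from the responder's
# extraction step; the real monthly log is a Drive sheet.

def tally_themes(reviews_themes: list[list[str]]) -> Counter:
    """Fold every review's themes into one monthly tally."""
    tally = Counter()
    for themes in reviews_themes:
        tally.update(themes)
    return tally

monthly = tally_themes([
    ["wait time at the front desk"],
    ["wait time at the front desk", "cold food"],
    ["wait time at the front desk"],
])
# "wait time at the front desk" appears once in the log with a count of 3
# — one row in the operations meeting, not three separate replies.
```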

Why this shape

Most “AI review reply” tools fall into one of two traps. The first kind auto-posts everything in a generic upbeat voice, which is fine until it auto-replies “Thank you for the kind words!” to a 1-star review about a missed delivery. The second kind drafts everything for human review, which sounds responsible but quietly trains you to skip the stack — by the third week the drafts pile up and nothing gets posted, exactly the problem you started with. Neither is what you want for the review of your dental office that arrived at 11pm Saturday from a patient who was upset about a wait time.

The setup above splits the difference. The obvious thank-yous post themselves in your voice; you don't read those, you just see the count went up. Specific complaints, mixed reviews, anything that needs a real human eye, becomes a one-tap draft — the cloud has done the writing, you keep the editorial control. And anything serious is shoved up the chain to a real person with all the context attached, because a 1-star review with a safety complaint isn't a "review" anymore, it's a triage event.

The next four posts walk through each piece in turn — how a review reaches the responder, how the responder reads it, how it picks one of four moves, and how the reply stays in your voice. One diagram per post. A cost breakdown and a final engineering reference at the end.
