Part 3 of 7 · Quote drafter series ~5 min read

How the drafter reads an RFQ

By the time the drafter sees an RFQ it’s in one shape, regardless of source. Inside that shape, though, lives the actual hard problem: turning “240 of A-12 plus 60 of A-12L for our Chicago site by the 30th, plus a couple of the long brackets if you have them in stock” into a clean list of catalog SKUs with quantities, a delivery destination, and a deadline. Three small extractors run in parallel, a catalog lookup that knows your aliases resolves each line item, and a move-picker reads it all to choose one of four moves.

Key takeaways

  • Three extractors run in parallel against every RFQ: line items, constraints, context.
  • The catalog lookup is grounded in a Bedrock Knowledge Base. Exact SKU beats alias beats fuzzy beats nothing.
  • Each extracted field carries a confidence score the move-picker reads before anything else.
  • Four moves, one pick per RFQ: auto-draft, clarify, out of scope, reject.
  • The drafter never invents a SKU. A line item without a catalog match becomes the out-of-scope move.

The three extractors

[Diagram: the cleaned RFQ from intake (Part 2) fans out to three parallel extractors (line items, constraints, context), each emitting its fields with confidence scores. A catalog lookup backed by the Bedrock Knowledge Base resolves each candidate line item (exact SKU > alias > fuzzy > none). All four feed the move-picker, which reads the scores and match strengths and picks exactly one of four moves: draft, clarify, out of scope, or reject. Confidence scores travel with every field; the move-picker never reads a value without seeing how sure the extractor was.]
Fig 3. Three extractors and a catalog lookup feed the move-picker. Each extractor emits its fields with confidence scores; the catalog lookup resolves line items via the Bedrock Knowledge Base; the move-picker reads the lot and chooses one of four moves.

Line items: what they’re actually asking for

The hardest of the three. Customers don’t write line items the way your catalog does. Your catalog has SKU A-12, “Bracket, 12-inch, standard.” The buyer writes “240 of the standard bracket” or “the 12-inch ones” or just “A-12 x 240.” Sometimes a customer who has bought from you for years writes the SKU from memory and drops the dash (A12 instead of A-12). Sometimes they write a quantity range (“maybe 200, definitely 150”). Sometimes they bundle (“the usual order plus 60 of the long ones”).

The extractor uses Claude Haiku 4.5 with a short system prompt: “Pull out a list of {quantity, name-or-SKU, variant-notes} rows. One row per visible request. Mark the confidence of each based on how clearly the buyer said it. Do not invent anything that isn’t in the message.” The output is a list of candidate lines. Quantity ranges get a range field that the move-picker will treat as ambiguous. Bundles get split into their parts. The extractor never tries to match a name to a catalog row — that’s the catalog lookup’s job.
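The output contract can be sketched in a few lines. The field names and the range handling below are illustrative assumptions, not the production schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CandidateLine:
    text: str                  # the buyer's wording, verbatim
    name_or_sku: str           # what the extractor thinks they named
    quantity: Optional[int]    # exact quantity, when one was given
    quantity_range: Optional[tuple] = None  # (low, high) when the buyer gave a range
    variant_notes: str = ""
    confidence: float = 0.0

def parse_extractor_output(rows: list) -> list:
    """Turn the model's JSON rows into typed candidate lines.
    A row carrying both 'min' and 'max' becomes a range the
    move-picker will later treat as ambiguous."""
    lines = []
    for r in rows:
        if "min" in r and "max" in r:
            qty, rng = None, (r["min"], r["max"])
        else:
            qty, rng = r["qty"], None
        lines.append(CandidateLine(r["text"], r["name_or_sku"], qty, rng,
                                   r.get("variant", ""), r["confidence"]))
    return lines
```

Note that the candidate lines carry no catalog information at all; matching names to rows is deliberately left to the lookup step.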

Constraints: the parts that aren’t negotiable

Deadline, delivery address, payment terms the buyer proposed (“net 30,” “COD,” “PO attached”), and any custom asks: rush shipping, partial deliveries, a certificate of analysis, a special invoicing format. Each is its own typed field with a confidence. Missing fields are fine. The move-picker handles “no deadline given” differently than “deadline next Tuesday” differently than “deadline ASAP.” What the extractor never does is fill in a missing field with a guess. If the RFQ doesn’t mention a destination, the extractor returns destination: null, confidence: 1.0 — sure that nothing was said.
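One way to model the “missing is not a guess” rule is to make absence itself a high-confidence value. This is a sketch under assumed names, not the production shape:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Field:
    value: Optional[Any]
    confidence: float  # how sure the extractor is about this value

KNOWN_CONSTRAINTS = ("deadline", "destination", "payment_terms", "custom_asks")

def type_constraints(extracted: dict) -> dict:
    """Wrap each extracted constraint as a typed field.
    A field the RFQ never mentioned becomes value=None at
    confidence 1.0: the extractor is sure nothing was said."""
    return {
        name: Field(**extracted[name]) if name in extracted
              else Field(value=None, confidence=1.0)
        for name in KNOWN_CONSTRAINTS
    }
```

The payoff is that the move-picker never has to distinguish “the extractor missed it” from “the buyer didn’t say it”; a null at confidence 1.0 always means the latter.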

Context: the bits that flavor the cover paragraph

Industry, company-size cues (from the email domain plus anything in the signature or project description), urgency hints (named events like “before our trade show on the 15th” or “ahead of the audit”), and project notes (one sentence on what the order is for, when the buyer mentions it). The drafter doesn’t price differently based on context. Pricing comes only from the catalog and rules. But the cover-paragraph composer (Part 5) reads the context tags to write a slightly different opening sentence for an RFQ headed to an aerospace audit versus a hardware store opening their second location.

The catalog lookup: where RAG actually earns its keep

Each candidate line item from the line-items extractor goes to the catalog lookup. The catalog lives in a Drive doc — a flat list, one row per SKU, with columns: SKU code, plain-English aliases (comma-separated, the names customers actually use), base price, unit, MOQ, lead time, and notes. A small sync Lambda copies the doc to an S3 bucket every few minutes; a Bedrock Knowledge Base indexes that bucket using Titan Text Embeddings v2 over Amazon S3 Vectors. (Bedrock KB doesn’t ship a native Google Drive connector, so the one-hop-through-S3 design is intentional — and the side benefit is that S3 versioning gives you point-in-time history of every catalog edit.) The lookup runs each candidate line item as a vector retrieval against the Knowledge Base.

The retrieval returns the top few rows. Then a strict matcher in plain Python — no model — decides what counts as a match. The order is fixed. An exact SKU match in the buyer’s text wins, even if the embedding score is mediocre. An alias match wins next: the buyer wrote “the standard bracket” and your catalog lists “standard bracket” as an alias for A-12. A close-but-not-exact SKU match (a typo, a missing dash) wins next, but only if it’s close enough by a threshold you set. Below that, the lookup marks the line as unmatched. It never invents a SKU. An unmatched line doesn’t become a quote line. It becomes a flag for the move-picker.
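A minimal sketch of that strict ordering, with an assumed row shape and an assumed fuzzy threshold; the production matcher and its threshold will differ:

```python
import re
from difflib import SequenceMatcher

FUZZY_THRESHOLD = 0.8  # assumption: tune against your own catalog

def resolve_line(line_text: str, retrieved_rows: list):
    """Decide what counts as a match among the KB's top rows.
    Returns (row, strength) or (None, 'unmatched'). Order is fixed:
    exact SKU > alias > fuzzy SKU > none. Never invents a SKU."""
    text = line_text.lower()
    tokens = re.findall(r"[a-z0-9-]+", text)
    # 1. Exact SKU appearing as a token in the buyer's text.
    for row in retrieved_rows:
        if row["sku"].lower() in tokens:
            return row, "exact"
    # 2. A catalog alias appearing verbatim in the text.
    for row in retrieved_rows:
        for alias in row["aliases"]:
            if alias.lower() in text:
                return row, "alias"
    # 3. A near-miss SKU (typo, missing dash), above the threshold only.
    best, score = None, 0.0
    for row in retrieved_rows:
        for tok in tokens:
            r = SequenceMatcher(None, tok, row["sku"].lower()).ratio()
            if r > score:
                best, score = row, r
    if best is not None and score >= FUZZY_THRESHOLD:
        return best, "fuzzy"
    return None, "unmatched"
```

Keeping this step in deterministic Python rather than a model call is the point: the ordering can be unit-tested, and an unmatched line fails closed instead of guessing.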

The move-picker: four moves, always

With confidence scores from three extractors and match strengths from the catalog lookup, the move-picker decides what happens next. It’s a small set of hand-written rules — not a model call. The rules are written out so the team can check any decision later.

  • Auto-draft. Every line resolved to a catalog row at high match strength. The constraints are clear: deadline present, destination present, no unsupported custom asks. Context is fine. Move on to pricing.
  • Clarify. Most of the RFQ looks fine but something important is unclear. A quantity range too wide to price (“200 to 500” spans two volume-break tiers, so the per-unit price changes). A SKU that resolves to two catalog rows differing only in size, with no size given. A missing deadline on a request marked “rush.” The drafter writes one specific question for the customer — never two — and parks the RFQ as “awaiting reply.” If the buyer answers, the RFQ goes back through the drafter from the top.
  • Out of scope. At least one line item didn’t resolve. Or the buyer asked for something the rules doc says you don’t do — “net 90” on a first order, delivery outside your service area. The drafter doesn’t auto-draft. It tags the contact in the CRM, pings the sales lead in Slack with what matched and what didn’t, and stops there. A human picks it up.
  • Reject. Spam signals that slipped past the screen step, vendor pitches phrased as RFQs (the screen catches most; the move-picker catches the rest by reading the message), and competitor fishing. Archived with a reason. The team never sees it.
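The rules above can be sketched as one small function. The field names, thresholds, and the choice to send fuzzy matches to clarify are illustrative assumptions; the real rule set lives in the team’s rules doc:

```python
def pick_move(lines: list, constraints: dict, looks_like_spam: bool = False) -> str:
    """lines: [{'match': 'exact'|'alias'|'fuzzy'|'unmatched',
                'quantity_range': (lo, hi) or None}, ...]
    constraints: assumed keys 'deadline', 'destination', 'rush',
                 'unsupported_ask'. Emits exactly one of four moves."""
    if looks_like_spam:
        return "reject"
    # Any unresolved line or rules-doc violation goes to a human.
    if (any(l["match"] == "unmatched" for l in lines)
            or constraints.get("unsupported_ask")):
        return "out_of_scope"
    needs_question = (
        any(l.get("quantity_range") for l in lines)   # range too wide to price
        or any(l["match"] == "fuzzy" for l in lines)  # typo-level match: confirm first
        or not (constraints.get("deadline") and constraints.get("destination"))
    )
    if needs_question:
        return "clarify"
    return "auto_draft"
```

Because it is plain code, every decision is reproducible: rerun the function on the stored inputs and you get the same move, which is what makes “check any decision later” possible.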

Next post: how an auto-draft RFQ becomes a priced quote — how the catalog and rules docs combine into one number per line, and how every applied rule cites the passage that produced it.
