March 13, 2026 · 6 min read · Diogo Hudson

The AI import, from drop to review

The whole flow, end to end, in plain language. Drop a file, watch the progress, land on a quote that's already mostly right.


Most technical writeups focus on one step — the normalization, the LLM call, the matching algorithm. But users don't experience steps. They experience a flow: drop a file, wait a moment, review the result, send the quote. Let's walk the entire AI import from the user's point of view, including the details that make the difference between a tool that feels smart and a tool that feels like a demo.

Drop

You drag a supplier document onto the page. The drop zone highlights with an amber outline — the same color used throughout the importer to signal 'attention needed' or 'AI at work.' Client-side validation runs before a single byte hits the server. It checks the file extension against the allowed set (PDF, XLSX, XLS, CSV), confirms the file is under 20 MB, and rejects anything else with an inline error message that names the problem — 'CSV files must be under 20 MB' rather than 'Invalid file.'

The client-side validation is not just politeness. It prevents the user from waiting 30 seconds for an upload only to get a server-side rejection. It also keeps garbage out of the processing queue — no corrupted binaries, no password-protected PDFs that would waste an LLM call before failing. The validation is strict at the boundary and silent downstream: if a file passes the gate, the pipeline assumes it's processable.
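The gate described above can be sketched in a few lines. This is an illustrative example, not Quotery's actual code; the function and constant names are assumptions.

```typescript
const ALLOWED_EXTENSIONS = new Set(["pdf", "xlsx", "xls", "csv"]);
const MAX_BYTES = 20 * 1024 * 1024; // 20 MB

interface ValidationResult {
  ok: boolean;
  error?: string; // names the problem, never a generic "Invalid file"
}

// Validate a dropped file entirely on the client, before any upload starts.
function validateDrop(fileName: string, sizeBytes: number): ValidationResult {
  const ext = fileName.split(".").pop()?.toLowerCase() ?? "";
  if (!ALLOWED_EXTENSIONS.has(ext)) {
    return { ok: false, error: "Unsupported file type. Use PDF, XLSX, XLS, or CSV." };
  }
  if (sizeBytes > MAX_BYTES) {
    // Say which format and which limit, per the error-message principle above.
    return { ok: false, error: `${ext.toUpperCase()} files must be under 20 MB` };
  }
  return { ok: true };
}
```

Because the check only needs the file's name and size, it runs synchronously in the drop handler with zero network cost.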

The drop zone also accepts paste. If you have a supplier spreadsheet open in another tab, you can copy the relevant rows and paste them directly. The paste handler detects tabular data, normalizes it into the same internal format as a file upload, and routes it into the same processing pipeline. Same stages, same review screen, same result. The input method doesn't change the behavior — the pipeline doesn't care how the bytes arrived.
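A spreadsheet copy puts tab-delimited text on the clipboard, which is what makes this detection tractable. A minimal sketch, with illustrative names rather than Quotery's actual internals:

```typescript
interface ImportRow {
  cells: string[];
}

// Detect tab-delimited clipboard text and normalize it into the same row
// structure a file upload would produce, so both inputs feed one pipeline.
function parsePastedTable(clipboardText: string): ImportRow[] | null {
  const lines = clipboardText.split(/\r?\n/).filter((l) => l.trim() !== "");
  // Heuristic: treat the paste as tabular only if at least one line has a tab.
  if (lines.length === 0 || !lines.some((l) => l.includes("\t"))) return null;
  return lines.map((l) => ({ cells: l.split("\t").map((c) => c.trim()) }));
}
```

Returning `null` for non-tabular text lets the caller fall back to treating the paste as plain prose or rejecting it.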

Process

Once the file is uploaded, a full-screen progress view takes over. This is the moment where most import tools lose the user. A spinner with 'Processing...' is honest but useless — it gives no signal about whether the thing is working or stuck. A percentage bar is worse — it implies precision the system doesn't have, and a bar stuck at 73% for 45 seconds feels broken even when everything is fine.

Quotery uses time-based stage cycling instead of a percentage bar. Six stages rotate in sequence: 'Reading document,' 'Understanding context,' 'Detecting groups,' 'Matching product codes,' 'Inferring products by description,' 'Building your quote.' Each stage stays visible for a duration tuned to the observed p50 of real imports — roughly 3 to 6 seconds per stage depending on file size and complexity. The stages are not fake progress. They correspond to real milestones in the pipeline: the normalization layer reads the document, the LLM reasons about structure, the grouping pass identifies sections, the matcher runs against product codes, the inference pass handles unmatched lines, and the final assembly creates the QuoteSection and QuoteLine objects.

The pacing is honest but not literal. A 14-line import might spend 4 seconds on 'Reading document' when the actual PDF parsing took 1.2 seconds, and 4 seconds on 'Matching product codes' when the deterministic matcher ran in 200ms. The extra time is buffer — it smooths out variability so the user never sees a stage flash by in 0.3 seconds followed by a 15-second stall on the next one. The experience is consistent even when the underlying timing is lumpy.
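Time-based cycling reduces to a simple mapping from elapsed wall-clock time to a stage label, decoupled from the real pipeline timing. The durations below are illustrative, not the tuned production values:

```typescript
const STAGES = [
  { label: "Reading document", minMs: 4000 },
  { label: "Understanding context", minMs: 5000 },
  { label: "Detecting groups", minMs: 3000 },
  { label: "Matching product codes", minMs: 4000 },
  { label: "Inferring products by description", minMs: 6000 },
  { label: "Building your quote", minMs: 3000 },
];

// Map elapsed time to the stage to display. The final stage holds until the
// pipeline actually reports completion, so nothing ever flashes past.
function stageAt(elapsedMs: number): string {
  let boundary = 0;
  for (const stage of STAGES) {
    boundary += stage.minMs;
    if (elapsedMs < boundary) return stage.label;
  }
  return STAGES[STAGES.length - 1].label;
}
```

The UI simply calls `stageAt` on a timer tick; there is no per-stage plumbing from the backend, which is exactly why a fast step cannot cause a 0.3-second flash.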

Behind the scenes, three LLM calls run in sequence during processing. Call A normalizes the document into structured JSON: it identifies sections, groups line items, and extracts the raw text the matcher needs. Call B runs inference on lines that couldn't be matched deterministically — it takes a product description like 'VENT-450-B 220V INDUSTRIAL' and maps it to the closest known product. Call C generates the summary banner (see post 14 for the full design rationale). The user never sees these calls — they see the stages cycling and, eventually, the result.
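The A-then-B-then-C sequencing can be sketched as a small orchestration function. Everything here is an assumption for illustration: `llm` stands in for whatever client the real pipeline uses, and the types are simplified.

```typescript
interface NormalizedDoc {
  sections: Array<{ title: string; lines: string[] }>;
}
interface MatchedLine {
  text: string;
  productId?: string;
}

async function runImport(
  rawText: string,
  llm: (prompt: string) => Promise<string>,
  matchDeterministic: (line: string) => string | undefined,
): Promise<{ lines: MatchedLine[]; banner: string }> {
  // Call A: normalize the document into structured JSON, then try the
  // deterministic code matcher on every extracted line.
  const doc: NormalizedDoc = JSON.parse(await llm(`Normalize to JSON:\n${rawText}`));
  const lines: MatchedLine[] = doc.sections.flatMap((s) =>
    s.lines.map((text) => ({ text, productId: matchDeterministic(text) })),
  );
  // Call B: run inference only on the lines the matcher missed.
  for (const line of lines.filter((l) => l.productId === undefined)) {
    line.productId = (await llm(`Closest product id for: ${line.text}`)) || undefined;
  }
  // Call C: generate the summary banner from classification counts only.
  const matched = lines.filter((l) => l.productId !== undefined).length;
  const banner = await llm(`Summarize: ${matched} matched of ${lines.length}`);
  return { lines, banner };
}
```

Note that Call B is skipped entirely when the deterministic matcher resolves every line, which keeps the common clean-import case cheap.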

If something goes wrong during processing — a password-protected PDF, a file that's actually an image scan with no extractable text, a corrupted XLSX — the progress view transitions to an error state with a specific message. 'This PDF is password-protected and cannot be processed' is actionable. 'Import failed' is not. Every error path in the pipeline has a human-readable message that tells the user what happened and what to do next.
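The pattern is a closed mapping from failure cause to actionable message. The error codes and wording below illustrate the shape, not the real set:

```typescript
type ImportErrorCode = "pdf_password" | "no_extractable_text" | "corrupt_file";

// Each code carries what happened AND what to do next.
const ERROR_MESSAGES: Record<ImportErrorCode, string> = {
  pdf_password:
    "This PDF is password-protected and cannot be processed. Remove the password and upload it again.",
  no_extractable_text:
    "This file contains no extractable text. It may be a scanned image; try an OCR'd copy.",
  corrupt_file:
    "This file appears to be corrupted and could not be opened. Try re-exporting it from the source application.",
};

function importErrorMessage(code: ImportErrorCode): string {
  return ERROR_MESSAGES[code];
}
```

Using a `Record` keyed by a union type means the compiler refuses to build if a new error code is added without a message, so 'Import failed' can never reappear by omission.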

Review

The progress view dissolves and you land on the new Quote. The transition is important: you don't land on a raw classification grid or a JSON dump. You land on the same Quote detail view you'd see if you'd built the quote manually — the same layout, the same controls, the same edit capabilities. The only difference is that the data is already populated, and a dismissible AI-generated banner sits above the items table.

The banner (covered in depth in post 14) answers the only question you have at this moment: 'What do I need to look at?' It says something like 'Imported 14 lines. 11 matched cleanly, 2 need your review, and 1 couldn't be matched — please check line 9.' The banner is generated by an LLM call that receives only classification counts and your locale — no product names, no prices, no customer data. It's constrained to two sentences maximum and disappears on navigation.
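That privacy boundary is easiest to enforce as a typed input contract: if the prompt builder only accepts counts and a locale, nothing else can leak in. The field names below are assumptions based on the description above:

```typescript
interface BannerInput {
  locale: string; // e.g. "en-US"
  exactMatches: number; // green lines
  needsReview: number; // amber lines
  notFound: number; // red lines
  notFoundLineNumbers: number[]; // lets the banner say "check line 9"
}

// Build the prompt for the banner call. No product names, prices, or
// customer data can cross this boundary: the type doesn't carry them.
function bannerPrompt(input: BannerInput): string {
  const total = input.exactMatches + input.needsReview + input.notFound;
  return (
    `Write at most two sentences for locale ${input.locale}. ` +
    `Imported ${total} lines: ${input.exactMatches} matched cleanly, ` +
    `${input.needsReview} need review, ${input.notFound} unmatched` +
    (input.notFoundLineNumbers.length > 0
      ? ` (lines ${input.notFoundLineNumbers.join(", ")}).`
      : ".")
  );
}
```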

Each line in the items table carries a classification chip. Green chips say 'Exact match' — the system found the product by code, the price was snapped from the product catalog, and the line is ready to go. Amber chips say 'AI decision — review' — the deterministic matcher couldn't find a match, the LLM made its best guess, and a human needs to confirm or correct it. Red chips say 'Not found' — neither the matcher nor the LLM could map this line to a known product, and it needs manual attention.

The chip colors create an immediate visual hierarchy. You can scan 14 lines in under 3 seconds and know exactly where to focus. Green lines are visually recessive — they're confirmed, they're fine, you don't need to think about them. Amber and red lines draw the eye. The design says 'these are the ones that need you' without making you read a single product name.
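The hierarchy rests on a three-way classification per line. A sketch of the mapping, with illustrative names:

```typescript
type LineStatus = "exact" | "ai_review" | "not_found";

const CHIP: Record<LineStatus, { color: "green" | "amber" | "red"; label: string; needsAttention: boolean }> = {
  exact: { color: "green", label: "Exact match", needsAttention: false },
  ai_review: { color: "amber", label: "AI decision - review", needsAttention: true },
  not_found: { color: "red", label: "Not found", needsAttention: true },
};

// Scan a quote's line statuses and return the 1-based numbers of the lines
// that need a human, i.e. the amber and red chips.
function linesNeedingAttention(statuses: LineStatus[]): number[] {
  return statuses
    .map((status, i) => (CHIP[status].needsAttention ? i + 1 : 0))
    .filter((n) => n > 0);
}
```

The same `needsAttention` flag can drive both the chip styling and the banner's 'check line 9' hint, so the two surfaces can never disagree.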

Tapping an amber chip opens an inline comparison: the LLM's suggested match on the left, the original supplier text on the right, and a 'Confirm' or 'Change' action below. If the AI got it right — and it does roughly 90% of the time for the lines it attempts — you tap Confirm and the chip turns green. If it got it wrong, you search for the correct product, select it, and the chip updates. Red lines require the same manual search but without a suggestion to start from.

The review experience is where the tool earns or loses trust. If the AI is right 90% of the time but the review UI makes confirming those 90% feel like work, the tool has failed. If confirming a correct AI guess takes one tap and the UI stays out of your way, the tool earns its keep. Every amber Confirm is a single tap. Every red search is a typeahead lookup against the product catalog, scoped to your tenant, with fuzzy matching on name and code.
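A minimal sketch of that tenant-scoped lookup. The scoring here is a simple substring-or-subsequence heuristic, which is an assumption; the real matcher may rank differently:

```typescript
interface Product {
  tenantId: string;
  code: string;
  name: string;
}

// Score a query against a target: substring hits rank above subsequence
// hits, and shorter targets rank above longer ones for the same match.
function fuzzyScore(query: string, target: string): number {
  const q = query.toLowerCase();
  const t = target.toLowerCase();
  if (t.includes(q)) return q.length / t.length + 1;
  let qi = 0;
  for (const ch of t) if (qi < q.length && ch === q[qi]) qi++;
  return qi === q.length ? q.length / t.length : 0;
}

// Typeahead lookup: scoped to one tenant, fuzzy over both name and code.
function searchProducts(catalog: Product[], tenantId: string, query: string, limit = 10): Product[] {
  return catalog
    .filter((p) => p.tenantId === tenantId)
    .map((p) => ({ p, score: Math.max(fuzzyScore(query, p.code), fuzzyScore(query, p.name)) }))
    .filter((r) => r.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map((r) => r.p);
}
```

The tenant filter runs before any scoring, so one customer's catalog can never surface in another's typeahead, regardless of how well it matches.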

Total time

A 14-line import with typical classification results — say 9 green, 3 amber, 2 red — takes well under two minutes from drop to send. That includes the processing time (roughly 25-40 seconds depending on file complexity), the review time (confirming 3 amber matches, manually matching 2 red lines), and any small price or quantity adjustments. The same task in a spreadsheet — typing 14 product codes, looking up prices, calculating line totals, formatting the quote — takes twenty to forty minutes depending on catalog familiarity.

The time savings compound with volume. A team processing 5 supplier quotes a day saves roughly 90 minutes daily — the equivalent of a full workday per week recovered from data entry. But the more important savings are cognitive. The importer handles the mechanical work of transcription and matching. The human handles the judgment work of review and adjustment. That's the right division of labor between a machine and a person who knows their customers.

See how AI turns supplier docs into quotes in under a minute.
