The One-Claim Test. How to Read an AI-Services Homepage.
AI-services homepages sell verbs because verbs commit to nothing. The test that filters them: name a workflow, a metric, an operator whose job changes.
Published April 28, 2026 by David Suydam
If you run operations at a mid-market company in April 2026, you are being pitched AI by firms whose homepages you can no longer tell apart. It does not matter where you are on the curve. Maybe your team has not been cleared to use LLMs yet and you are stuck on data-handling and security. Maybe you have eighteen months of pilots that produced task automations and did not change how the business runs. Most operators we talk to are somewhere between. The pitches arrive the same way regardless: open with a verb, promise something the operator cannot test, close with a meeting request.
Read the next ten of them in a row and you will see the shape.
Accenture's AI page opens, "In the last 30 years, no technology has promised to change everything across a business. Until generative AI."
Deloitte's GenAI landing reads, "The true power of GenAI comes from humans with big ideas."
EY: "Moving beyond productivity to reimagine the enterprise."
Slalom: "AI dares us to rethink the impossible."
Cognizant: "Empowering better, faster decision-making with data and AI services."
These are real, current homepages from firms that genuinely deliver work. Pasted next to each other, none of them commits to a claim a competitor would disagree with, and none names the workflow that gets redesigned, the metric that moves, or the operator whose job changes.
The category sells verbs. Verbs commit to nothing.
That is fine for tone. It is not enough to choose a vendor. A CFO weighing five AI vendors does not have a conversation budget for five of them; they have a filter budget. A homepage that does not say what changes for them gets filed under "another one of these." Most CFOs are reducing AI vendors, not adding them, and prefer capabilities embedded in platforms they already run (L.E.K. 2025).
Something is missing: look at the category's own numbers
The category's own research is the strongest evidence that something is missing. McKinsey's March 2025 State of AI tested 25 organizational attributes against EBIT impact from gen AI.
Workflow redesign had the biggest single effect on EBIT.
Only 21% of gen-AI adopters have fundamentally redesigned at least some workflows. The other ~79% have layered AI onto existing processes. More than 80% are not yet seeing enterprise-level EBIT impact.
Four out of five companies have not made the move that drives the EBIT impact. The layer between "we let people use ChatGPT" and "we changed how the company runs" is where the work is, and most enterprise AI investment skips it or defers it.
A randomized field experiment from INSEAD and HBS, published March 2026 by Hyunjin Kim, Dahyeon Kim, and Rembrand Koning, lands the same finding under controls. All firms had identical AI tool access. The treatment group also got case studies on AI-native firms reorganizing workflows around AI. Treatment firms generated 1.9× the revenue of control. The variable was workflow redesign, not AI (SSRN 6513481; Robinson 2026).
Those two findings are the spine of the claim our relaunched homepage now makes:
Workflow redesign is the missing step between task-level AI automation and the operating-model change senior leaders actually want. Most AI work skips this step or defers it, which is why most AI work does not change how the business runs.
This is Architech's framing, not a settled industry taxonomy. We name three layers:
tasks (a single step gets faster),
workflows (decisions and handoffs reallocated between people and AI), and
the operating model (cost structure, span of control, decision rights, capacity allocation).
The middle layer does most of the economic work; most AI vendors currently pitch the bottom one.
What a testable claim is, and why it filters
A testable claim has two properties. First, a reasonable competitor could disagree with it. "We partner with you on your AI journey" is not a claim. "Workflow redesign beats task automation for moving EBIT" is. Second, an operator could check it inside ninety days by asking the firm to name the workflow they would redesign and the number they would commit to moving.
Run that test on the homepages above: most describe the firm's posture, not the work that follows. We ran it on our own old homepage. It would not have passed. We had verbs.
Why the missing step gets skipped, even by serious firms
The step is not skipped because firms are unserious. It is skipped because tasks are easier to scope, demo, and invoice in ninety days. Workflow redesign needs two things AI-services firms commonly under-invest in. At the front: senior advisory that helps leadership decide which workflow to redesign, and is willing to recommend stopping. At the back: activation that gets the redesigned workflow scaled and into habitual use, not just into production.
The activation gap is measurable. One enterprise survey reports productivity gains around 5× while only 29% of organizations see significant ROI (WRITER 2026). Only 28% of employees know how to use their company's AI (WalkMe, Nov 2025). 87% of leaders say employees need more training (Forrester / Simpplr, Q1 2026). Deployed AI is not used AI.
MIT's NANDA initiative put it more plainly last summer (Fortune):
About 95% of enterprise generative AI pilots fail to achieve rapid revenue acceleration.
The 5% that succeed "pick one pain point, execute well, and partner smartly with companies who use their tools." Generic AI plugged into a workflow no one has redesigned produces task-level productivity and silence at the P&L.
Strategy, redesign, activation
Workflow redesign is the missing step, but it is not the only one. It is the middle of three, and a firm's claim only holds if it can deliver across all three.
Up front, AI Jumpstart is a senior-led advisory engagement that decides where AI matters and whether to act. It ends in a decision to redesign a specific workflow, or a decision to stop. In the middle, workflow redesign is the build: deconstructing the workflow, reallocating decisions between people and AI, engineering it into production. At the back, workflow activation is what the category routinely skips: getting the redesigned workflow into habitual use and tracking the metric it was supposed to move.
We are willing to claim this position now because we used it on ourselves first: an internal redesign of how Architech produces content, described in the customer-zero piece from April 20. We redesigned a workflow we own before recommending redesign for workflows clients own, and we will be talking about the results publicly soon.
The strongest counter-argument, taken seriously
The cleanest counter-argument comes from the RPA and task-automation side. Tasks first, the argument goes, is the cheaper, lower-political-risk on-ramp. UiPath's own implementation guidance tells operators to "begin by identifying high-value tasks that are time-consuming, prone to error, or critical to business outcomes." Tasks are meant to compound into AI-integrated processes and then into agentic workflows. Firms selling that breadth are running a legitimate business. They are also selling something other than operating-model change.
The data still cuts against it. Tasks alone do not compose into operating-model change: INSEAD shows it under controls, McKinsey shows it at scale, and the activation surveys show why. To be fair, most of the 79% have not gotten to redesign yet rather than decided against it; the framing is "skipped or deferred," not "rejected." But if your goal is operating-model change (actual movement in cost structure, span of control, or decision rights), the firm you hire has to do the middle layer. Most of the category is not selling it.
A test you can run this week
Open the homepages of the AI-services firms on your shortlist, including ours. For each, write down one sentence describing the claim, in the firm's own words. If it could be pasted onto any other firm's site without editing, it is not a claim. If it does not name a workflow, a metric, or an operator whose job changes, it is not a claim. If it cannot be checked inside ninety days, it is not a claim.
Most will not pass. That is the diagnostic. The firms that do pass will not all be Architech, and they should not be. Apply the same test to all of us, including the firm writing this piece.
We rebuilt architech.ca around one claim, the one in this piece. Test us on it. The category has stopped making claims you can test. An operator with a vendor budget has every right to make that the first filter.
Ready to apply this to your workflows?
Architech's AI Jumpstart is the structured entry point.