AI design guide

On AI in services.

AI is a tool. Like every tool the practitioner uses, it deserves the same questions: where does it belong, where doesn't it, and at what level of automation? This page sets out how this practice answers.

Five commitments.

These run through every tool this practice ships, and through the writing on the publication side.

  1. AI is a tool, not a service.

    AI belongs inside services, in support of users and practitioners. It isn't a service in itself. Nobody asked their government for a chatbot, and a service that's mostly AI by surface area has the design problem upside down.

  2. AI suggests; humans decide.

    Every use of AI in this practice's tools runs as a suggestion the user confirms or amends. The reasoning is shown. The user signs off. The model never lands a verdict alone.

  3. Disclosure is part of the design.

    If AI touches your input, the surface tells you. The disclosure isn't buried in a privacy notice. It sits next to the input it describes, in the same register as the rest of the page.

  4. Every claim is sourced.

    A claim made on this site is one of three things. The practitioner's own, named. Borrowed from public methodology, linked. Or rebuilt from public sources where the original was private. Private client material doesn't show up here.

  5. Every method is teachable.

    If a method can't be described well enough that a profession lead in another department can apply it without the practitioner in the room, the method isn't finished. The site is the public form of the methodology.

Four levels of automation.

Suggested, assisted, automated, autonomous. The frame this practice uses to read where AI fits in a piece of work.

  1. Suggested.

    AI suggests; the practitioner decides. The model proposes options or framings. The human picks. The model is consulted, not deferred to.

    Where the judgement carries the work. Drafting research questions, choosing themes, deciding what to recruit for.

  2. Assisted.

    AI drafts; the practitioner edits. The model writes a first cut. The human refines. Saves time on the input, keeps the output in the practitioner's voice.

    Where the form is generic but the substance is specific. Outreach emails, summary memos, code clusters.

  3. Automated.

    AI runs the step. The practitioner audits a sample. The model handles the bulk. The human checks a portion for quality and catches drift early.

    Where the work is well-defined and the model is mature. Transcription, format conversion, deterministic checks.

  4. Autonomous.

    AI runs the step without review. The model acts alone. Rare, and only where stakes are low and the recovery path is cheap.

    Where the cost of being wrong is bounded. Sort, filter, batch grouping of low-stakes signal.

Where this practice sits: most UCD work belongs at suggested or assisted. Some belongs at automated. Almost none belongs at autonomous, and then only where the recovery path is designed, not assumed.

AI suggests; humans confirm.

The interaction pattern this practice ships in every tool that uses AI.

You type the context. The model reads it and proposes a verdict or level, with a short reasoning. You confirm the suggestion or amend it. What lands in the report is what you signed off, not what the model produced alone.

The pattern matters more than any single feature. It says the model is in the loop, not in charge. It keeps the practitioner accountable for the read, and gives the model a job it's good at: surfacing a draft to react to. The reasoning is shown so you can argue with it. The confirm step is a choice, not a click-through.

Where AI in a service doesn't take this shape, it isn't this practice's recommendation.
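
A minimal sketch of that shape, in TypeScript. Every name in it is illustrative rather than lifted from the tools; what it shows is structural: the model's draft and the practitioner's sign-off are separate values, and only the sign-off is written into the report.

  // Illustrative sketch of the suggest-confirm pattern. All names are hypothetical.
  type Verdict = "Met" | "Partially met" | "Not met";

  interface Suggestion {
    verdict: Verdict;
    reasoning: string; // shown to the practitioner so they can argue with it
  }

  interface ReportEntry {
    question: string;
    verdict: Verdict;   // always the practitioner's sign-off
    reasoning: string;  // the reasoning that was confirmed
    amended: boolean;   // true when the practitioner overrode the draft
  }

  // askModel is any call that returns a draft verdict with its reasoning;
  // confirm is any UI step that lets the practitioner accept or amend it.
  async function answerQuestion(
    question: string,
    context: string,
    askModel: (q: string, ctx: string) => Promise<Suggestion>,
    confirm: (draft: Suggestion) => Promise<Suggestion>,
  ): Promise<ReportEntry> {
    const draft = await askModel(question, context); // the model suggests
    const signedOff = await confirm(draft);          // the human decides
    return {
      question,
      verdict: signedOff.verdict,                    // what lands is the sign-off
      reasoning: signedOff.reasoning,
      amended: signedOff.verdict !== draft.verdict,
    };
  }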

What we tell you. What we don't keep.

The transparency and privacy posture, written in plain English so it can be defended in plain English.

Transparency

What we tell you.

  • Every AI-touching surface says so, in the same place as the input.
  • The reasoning the model produced is shown alongside what you signed off.
  • Each report names the model that helped assemble it, and the methodology behind the frame.
  • If the model gets something wrong and you amend it, your amendment is what shows. The model's draft does not survive in the report.

Privacy

What we don't keep.

  • The assistant is Claude Haiku, billed via the Anthropic API.
  • Your context is read by the model to generate the suggestion, then discarded. Nothing is retained beyond the call (the sketch after this list shows the shape of that exchange).
  • Email is collected at the end to deliver the report, and only then. The newsletter is opt-in, never automatic.
  • No analytics on the answers themselves. Aggregate completion rates only.
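
A sketch of the shape of that call, again in TypeScript, using the Anthropic SDK. The model id, the token limit, and the wrapper function are illustrative, not the tools' actual code; what it shows is that the context travels in the request, the suggestion comes back in the response, and nothing is written down in between.

  import Anthropic from "@anthropic-ai/sdk";

  // The client reads ANTHROPIC_API_KEY from the environment.
  const client = new Anthropic();

  // Illustrative wrapper: the context goes out with the request, the
  // suggestion comes back, and nothing is stored on this side of the call.
  async function suggest(context: string): Promise<string> {
    const response = await client.messages.create({
      model: "claude-3-5-haiku-latest", // illustrative model id
      max_tokens: 300,
      messages: [{ role: "user", content: context }],
    });
    const first = response.content[0];
    return first.type === "text" ? first.text : "";
  }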

Cambridge HCI for AI Systems.

The methodology this practice draws on. Academic, not vendor-coded.

The frame, the pattern, and the discipline of treating AI as a tool inside the work draw on the HCI for AI Systems methodology taught by Cambridge's Department of Computer Science and Technology. The course examines how human-centred design applies when the system being designed is partly statistical: where the human-in-the-loop sits, what kinds of explainability serve practitioners versus end users, how trust is calibrated, and what a recovery path looks like when the model is wrong.

This practice is informed by the methodology, not bound to it. The four-levels frame is a working translation. The AI-suggests-humans-confirm pattern is one implementation of human-in-the-loop. Other practitioners in the same lineage will work differently. That's the point: the methodology gives the questions, not the answers.

See the position run.

Two free tools embody the five commitments, the four levels, and the pattern. Use them, argue with them.

Free tool

Service Standard Diagnostic.

Sixteen questions across the first five Service Standard points, read against AI-enabled work. The model suggests a verdict per question. You sign off. The report lands at Met, Partially met, or Not met, with the reasoning you confirmed.

Open the diagnostic

Free tool

Automation Strategy Explorer.

Pick a UCD profession, walk its typical tasks one at a time. The model suggests a level per task. You sign off. The considerations sheet covers every task in the profession with the trade-offs to weigh.

Open the explorer