Purpose
This document outlines the curriculum across all five levels — what tasks the platform contains, what patterns each task practices, what mental model it constructs. It is the content layer that sits underneath the spec's structural commitments (the five Transfer Design Principles, the near/far transfer assessment) and the pedagogy doc's intellectual commitments (the five design moves grounded in transfer-of-learning research).
It is intentionally an outline, not a fully-specified content catalog. Concrete task design will be reshaped by Phase 1 of fieldwork.md — provider visits surface failure modes and pedagogical patterns the literature does not. The personas the curriculum targets (older adult, ESL, returning-to-workforce, etc.) are deferred to that phase.
Where this fits in the mission
The pitch's thesis is transferable schemas for digital problem-solving in the age of AI — a higher-order claim. That claim collapses if the platform only teaches Level 1 ("I can use a single app") or Level 2 ("I can move information between apps") work, because:
- Level 1–2 work is already served by the public library system. Libraries have been doing intro-to-computers, intro-to-email, basic-file-handling instruction for thirty years, free of charge, with in-person human support. Northstar Digital Literacy is the established assessment; GCFGlobal is the established free curriculum. A startup that re-skins this is competing with thirty years of institutional infrastructure on its own terms.
- The novel content lives at Levels 3+. Multi-application fluency, information synthesis across sources, AI-augmented workflows, evaluating unfamiliar tools — this is where adult digital-fluency providers themselves report the gap ("multiple respondents suggested it is not clear how to train people to move from this initial level to more fluency," in the words of the Urban Institute's 2019 brief). Levels 3+ are where the field is asking for help.
The implication for our curriculum: Levels 1–2 should be a runway, not a destination. Where possible, we partner with the existing library / ABE infrastructure for Level 1–2 onboarding (or assume incoming users have completed Northstar-style basics elsewhere) and concentrate our distinctive contribution at Levels 3–5. This may mean revising the spec's "v1 MVP = Levels 1–2" framing. See Implications for the spec below.
How a curriculum entry is structured
Every task entry below carries the same four-part anatomy. Detailed worked examples are deferred to v0 build; the outline establishes the shape and tests it against the design principles.
- Scenario: plain-language framing of what the user is trying to do, written from the user's perspective ("I need to apply for…"), not the system's.
- Primary pattern: the schema this task practices. From the curriculum's controlled pattern vocabulary (see Pattern vocabulary).
- Mental model under construction: which of the four mental-model tracks this task contributes to.
- Surface form: which app(s) the task lives in. Per Transfer Design Principle #1, every pattern is practiced in ≥3 distinct surface forms before being marked taught.
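The four-part anatomy lends itself to a simple data structure. The sketch below is illustrative only — field names and the example values are hypothetical, not a committed schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskEntry:
    """One curriculum task entry. Hypothetical field names, not a committed schema."""
    scenario: str              # plain-language framing, from the user's perspective
    primary_pattern: str       # one name from the controlled pattern vocabulary
    mental_model: str          # which of the four mental-model tracks it feeds
    surfaces: tuple[str, ...]  # the app(s) the task lives in

# Example entry modeled on the Level 1 "file persistence basics" task family.
entry = TaskEntry(
    scenario="I need to find the file I downloaded yesterday.",
    primary_pattern="persistence/recall",
    mental_model="file system",
    surfaces=("file browser", "downloads", "desktop"),
)
```

Note that Transfer Design Principle #1's "≥3 distinct surface forms" is a constraint on a pattern across tasks, not necessarily on any single entry.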
Level 1 — Operational basics
Goal: construct the desktop mental model. Establish file persistence, multi-window state, basic hierarchy.
Strategic position: Probably not where our distinctive contribution lives. The likely v1 design is one of: (a) scope this level minimally and assume incoming users have acquired these skills elsewhere (Northstar / library), (b) partner with a library system that delivers Level 1 in-person and routes graduates to our platform for Level 2+, or (c) include it as a fast-track diagnostic so users without the basics aren't blocked.
Task families
- File persistence basics — Pattern: persistence/recall · Mental model: file system · Surfaces: file browser, downloads, desktop. Learner downloads a file, renames it, places it in a named folder, then finds it again later in a different session.
- Browser orientation — Pattern: navigation/wayfinding · Mental model: multi-window state · Surfaces: browser. Learner opens multiple tabs, closes the right one, restores a closed tab, distinguishes tabs from windows.
- Account creation and credential persistence — Pattern: identity/persistence · Mental model: persistence + hierarchy · Surfaces: form, email. Learner creates an account on a service, receives a confirmation email, returns the next day and logs back in without re-creating the account.
- Copy / paste across contexts — Pattern: extract/transform · Mental model: multi-window state · Surfaces: document, browser, form. Learner finds information in one app, transports it intact to another.
What "complete" means at Level 1
User can perform these tasks without coaching, and can answer the question "where is the file you just saved?" without hesitation. The diagnostic is the mental model, not the keystrokes.
Level 2 — Transactional tasks
Goal: sequencing and verification. Multi-step processes within one or two apps; introduce conditional logic and recovery from errors.
Strategic position: Bridge level. Some library systems handle this; many don't reach this depth. Our value-add starts here and compounds upward.
Task families
- Form completion with validation — Pattern: sequence/recover · Mental model: versioning/history · Surfaces: form (3 different surface forms required). Learner completes a multi-field form, encounters validation errors on submission, identifies which field failed, corrects, resubmits.
- Email composition with structure — Pattern: compose/structure · Mental model: hierarchical organization (threading) · Surfaces: email. Learner composes a reply that quotes the original, adds a clear subject, sends to a specific recipient — not just hits "reply all."
- Account management and recovery — Pattern: identity/recover · Mental model: persistence · Surfaces: email, form. Learner recovers access to an account where they've forgotten the password — uses the email-reset flow without abandoning halfway.
- Verification flows — Pattern: extract/verify · Mental model: multi-window + persistence · Surfaces: email + form, document + form. Learner enters data on one screen, confirms it via another (typically email), and proceeds.
Cross-cutting at Level 2
Errors are treated as teaching material. Per the spec's UX principle, a wrong attempt becomes a contrasting-case opportunity. The AI co-pilot's behavior here exercises the productive-struggle and pattern-naming moves before the curriculum gets harder.
Level 3 — Workflow execution
Goal: operate across applications. Compose Level 1–2 patterns into longer workflows that span apps and sustain state across them.
Strategic position: This is where the project's core mission begins. The library system does not, in general, scaffold this level; the workforce-development field has named this as the gap (Urban Institute 2019). This is where we lead.
Task families
- Research → summarize → send — Pattern: extract/synthesize/transmit · Mental model: multi-window + versioning · Surfaces: browser + document + email. Learner reads three short articles, extracts three claims, drafts a summarizing email with the claims attributed to their sources.
- Receive → locate → respond — Pattern: sequence/cross-reference · Mental model: multi-window + hierarchy · Surfaces: email + file browser + email. Learner receives a request via email, finds the relevant document on the file system, replies with a summary of the document plus the document attached.
- Apply for a service end-to-end — Pattern: sequence/recover/verify (composite) · Mental model: all four · Surfaces: browser + form + email + document. Learner navigates to a benefits portal, completes a multi-page form requiring documents from elsewhere, uploads supporting evidence, receives a confirmation, returns later to check status.
- Comparison-shop with constraints — Pattern: extract/evaluate/decide · Mental model: multi-window + versioning · Surfaces: browser + document. Learner compares three options against stated constraints (budget, deadline, feature requirement), produces a justified recommendation document.
What's hard about Level 3
The cognitive load is multi-window state held over time. The user must remember "I was looking for X, then I went to Y, now I'm back, where was I?" without losing the thread. The co-pilot's interventions calibrate around this — re-stating the goal when the user returns from a tangent is more valuable than naming a pattern they've already used.
Level 4 — AI-augmented workflows
Goal: direct AI as a tool inside a workflow. Evaluate AI output. Refine.
Strategic position: Novel. No established curriculum exists at this level for adult digital-fluency populations. The closest analog is corporate AI-literacy training, which is structured around enterprise tools (Copilot, Glean) rather than general-purpose AI direction. We can lead here without competing with anyone.
Task families
- Drafting with AI, verifying against source — Pattern: delegate/verify · Mental model: AI-as-directable-system · Surfaces: AI panel + document + browser. Learner asks the AI to draft a response (e.g., a benefit appeal), then verifies each factual claim against the source documents, identifies any hallucinations, refines.
- Explaining unfamiliar content with AI — Pattern: delegate/cross-check · Mental model: AI-as-directable-system · Surfaces: document + AI panel. Learner receives an unfamiliar form (e.g., a tax form, a lease), asks the AI to explain each field, confirms the AI's explanation against the form's actual fields and instructions before acting.
- Iterative refinement — Pattern: iterate/converge · Mental model: versioning + AI-as-directable-system · Surfaces: AI panel + document. Learner asks for a draft, finds it almost-right-but-not-quite, learns to ask for specific changes ("make it shorter," "remove the second paragraph") rather than starting over.
- Detecting AI failure modes — Pattern: evaluate/discriminate · Mental model: AI-as-directable-system · Surfaces: AI panel + browser + document. Curated tasks where the AI confidently produces a wrong answer; learner is taught to expect this category of failure and to verify rather than trust.
What's hard about Level 4
Calibrating trust. The user has to internalize that the AI is often right but unreliable in ways that matter, and that verification is non-optional. This is itself a mental model — the AI is not an oracle and not a search engine; it's a directable but fallible collaborator. Our curriculum may be the first place a low-fluency adult learner encounters this framing.
Level 5 — Adaptation layer
Goal: future-proofing. Encounter an unfamiliar tool, recognize patterns, figure it out. This is the closest thing to "I am now digitally fluent" the platform certifies.
Strategic position: The most ambitious level and the most aligned with the pitch's thesis. Tools change; AI agents proliferate; the only durable skill is the ability to meet new ones. No existing adult-education program targets this directly.
Task families
- First-encounter with an unfamiliar tool — Pattern: explore/map · Mental model: AI-as-directable-system + multi-window. Learner is given a previously-unseen tool (an unfamiliar AI agent, a niche productivity app) and a task to accomplish in it. The metric is whether they can succeed without a tutorial — by exploring, asking the AI co-pilot for orientation, and applying patterns from earlier levels.
- Comparing two AI agents on the same task — Pattern: evaluate/discriminate · Mental model: AI-as-directable-system. Learner gives the same task to two different AI agents (or the same agent with different prompts), evaluates the outputs, picks the better one, articulates why it's better. Practices the higher-order judgment Level 4 introduced.
- Decomposing and delegating — Pattern: decompose/delegate/integrate · Mental model: all four. Learner faces a complex task (e.g., plan a multi-leg trip, prepare for a difficult conversation with documentation), breaks it into sub-tasks, delegates appropriate ones to the AI, integrates the results into a coherent whole.
- Creating instructions for someone else — Pattern: abstract/transmit · Mental model: hierarchy + AI-as-directable-system. Learner who has solved a task records or writes the steps for a different person to follow. Forces the metacognitive abstraction the pedagogy doc names as the key transfer move (Salomon & Perkins' "mindful abstraction").
What "fluent" means at Level 5
The benchmark isn't that the user knows specific tools. It's that the user can be dropped into an unfamiliar digital environment and apply schemas (decomposition, verification, delegation, evaluation) to make progress. The far-transfer assessment instrument is built around this scenario.
Pattern vocabulary
The full controlled vocabulary of patterns the AI co-pilot names. Stable across the curriculum: 18 patterns. Each pattern recurs across at least three surface forms before being considered taught.
| Pattern | Plain-language description | Levels where it appears |
|---|---|---|
| persistence/recall | Save now, find later, even after time passes. | 1, 2, 3 |
| navigation/wayfinding | Know where you are; know how to get back. | 1, 3 |
| identity/persistence | Establish an account; return to the same identity. | 1, 2 |
| extract/transform | Take information from one place; reshape it for another. | 1, 3, 4 |
| sequence/recover | Multi-step process; recover when a step fails. | 2, 3, 4 |
| compose/structure | Construct a message, document, or request with appropriate structure. | 2, 3 |
| identity/recover | Regain access when an identity is lost. | 2 |
| extract/verify | Move data; confirm it's correct via a second channel. | 2, 4 |
| extract/synthesize/transmit | Read multiple sources; produce a unified output. | 3 |
| sequence/cross-reference | Workflow that spans apps and references state in another. | 3 |
| extract/evaluate/decide | Compare options; produce a reasoned choice. | 3, 5 |
| delegate/verify | Ask AI; verify its output against ground truth. | 4 |
| delegate/cross-check | Use AI to explain; confirm its explanation. | 4 |
| iterate/converge | Refine a draft via successive specific requests. | 4 |
| evaluate/discriminate | Detect when AI is wrong; choose between alternatives. | 4, 5 |
| explore/map | Build a model of an unfamiliar tool by interacting with it. | 5 |
| decompose/delegate/integrate | Break a complex task; route parts; recombine. | 5 |
| abstract/transmit | Articulate a process clearly enough for a different person to follow. | 5 |
The co-pilot uses these names verbatim. Same pattern, same name, every time. This is what makes pattern naming a durable cognitive handle (per Salomon & Perkins on mindful abstraction).
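Because pattern names are verbatim and stable, the "taught" gate is mechanically checkable. A minimal sketch, assuming completed tasks are logged as (pattern, surface) pairs — the function and log shape are hypothetical:

```python
from collections import defaultdict

def taught_patterns(task_log: list[tuple[str, str]], min_surfaces: int = 3) -> set[str]:
    """Return patterns practiced in >= min_surfaces distinct surface forms.

    Implements Transfer Design Principle #1: a pattern counts as taught
    only after it has recurred across at least three distinct surfaces.
    """
    surfaces_seen: dict[str, set[str]] = defaultdict(set)
    for pattern, surface in task_log:
        surfaces_seen[pattern].add(surface)
    return {p for p, s in surfaces_seen.items() if len(s) >= min_surfaces}

log = [
    ("persistence/recall", "file browser"),
    ("persistence/recall", "downloads"),
    ("persistence/recall", "desktop"),
    ("navigation/wayfinding", "browser"),  # only one surface so far: not yet taught
]
print(taught_patterns(log))  # {'persistence/recall'}
```

The same log could also drive the co-pilot's pattern-naming prompts, since the keys are exactly the verbatim names from the vocabulary above.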
Implications for the spec
The current product spec defines the v1 MVP scope as "Curriculum Levels 1–2 (~10–15 tasks total)." Given the strategic position above, this is probably wrong — or at minimum, it sells short the project's distinctive contribution.
Two alternatives to consider in the next spec revision:
- v1 covers Levels 2–3, with Level 1 as a fast-track diagnostic. Users who already have the basics (which is most of our target population — they have smartphones; the gap is desktop/multi-app work) skip past Level 1 in 10–20 minutes. Users who don't are routed (in-platform or via library partnership) to Level 1 acquisition before re-entering. This positions v1 against the real gap libraries don't fill.
- v1 covers Levels 3–4 only, partner with libraries for Levels 1–2. Sharper. Concedes the basics to the established infrastructure. Concentrates engineering effort and content design entirely on the novel-contribution levels. Distribution requires a library partnership, which the fieldwork program is set up to identify.
The choice between these is partly a product question (what's the right scope for a 4–6 month MVP?) and partly a distribution question (which library or workforce partner can we credibly land in the first six months?). It should be made jointly with Phase 2 of the field-research program, not now.
What's deferred
This document deliberately does not commit to:
- Specific task content (the actual benefit form, the actual three articles to research). Real-task selection requires field research with providers serving real learners — we want to teach tasks people actually do, not ones we imagine they do.
- Personas. The curriculum is structured around competencies, not user types. Persona work is a Phase 1 deliverable from fieldwork.md.
- Per-task assessment rubrics. The Assessment Engine spec calls out near vs far transfer; the per-task rubric design follows from concrete task content.
- Pacing and progression heuristics. How long should a typical task take? What's the gap between unlocks? These are calibration questions answered by deployment, not by drafting.
- Multilingual versions. v1 is English. Per the technical-approach doc §8.3, multilingual is a v2 question.