Digital Fluency

What Digital Fluency Actually Teaches

Curriculum

Purpose

This document outlines the curriculum across all five levels — what tasks the platform contains, what patterns each task practices, what mental model it constructs. It is the content layer that sits underneath the spec's structural commitments (the five Transfer Design Principles, the near/far transfer assessment) and the pedagogy doc's intellectual commitments (the five design moves grounded in transfer-of-learning research).

It is intentionally an outline, not a fully-specified content catalog. Concrete task design will be reshaped by Phase 1 of fieldwork.md — provider visits surface failure modes and pedagogical patterns the literature does not. The personas the curriculum targets (older adult, ESL, returning-to-workforce, etc.) are deferred to that phase.


Where this fits in the mission

The pitch's thesis is transferable schemas for digital problem-solving in the age of AI — a higher-order claim. That claim collapses if the platform only teaches Level 1 ("I can use a single app") or Level 2 ("I can move information between apps") work: those levels are the part of the stack existing library and ABE programs already cover, while the thesis lives at Levels 3–5.

The implication for our curriculum: Levels 1–2 should be a runway, not a destination. Where possible, we partner with the existing library / ABE infrastructure for Level 1–2 onboarding (or assume incoming users have completed Northstar-style basics elsewhere) and concentrate our distinctive contribution at Levels 3–5. This may mean revising the spec's "v1 MVP = Levels 1–2" framing. See Implications for the spec below.


How a curriculum entry is structured

Every task entry below carries the same four-part anatomy: the pattern it practices, the mental model it constructs, the surfaces it runs on, and a short task description. Detailed worked examples are deferred to the v0 build; the outline establishes the shape and tests it against the design principles.


Level 1 — Operational basics

Goal: construct the desktop mental model. Establish file persistence, multi-window state, basic hierarchy.

Strategic position: Probably not where our distinctive contribution lives. The likely v1 design is to either (a) scope this level minimally and assume incoming users have acquired these skills elsewhere (Northstar / library), (b) partner with a library system that delivers Level 1 in-person and routes graduates to our platform for Level 2+, or (c) include it as a fast-track diagnostic so users without the basics aren't blocked.

Task families

  1. File persistence basics. Pattern: persistence/recall · Mental model: file system · Surfaces: file browser, downloads, desktop. Learner downloads a file, renames it, places it in a named folder, then finds it again later in a different session.
  2. Browser orientation. Pattern: navigation/wayfinding · Mental model: multi-window state · Surfaces: browser. Learner opens multiple tabs, closes the right one, restores a closed tab, distinguishes tabs from windows.
  3. Account creation and credential persistence. Pattern: identity/persistence · Mental model: persistence + hierarchy · Surfaces: form, email. Learner creates an account on a service, receives a confirmation email, returns the next day and logs back in without re-creating the account.
  4. Copy / paste across contexts. Pattern: extract/transform · Mental model: multi-window state · Surfaces: document, browser, form. Learner finds information in one app, transports it intact to another.

What "complete" means at Level 1

User can perform these tasks without coaching, and can answer the question "where is the file you just saved?" without hesitation. The diagnostic is the mental model, not the keystrokes.


Level 2 — Transactional tasks

Goal: sequencing and verification. Multi-step processes within one or two apps; introduce conditional logic and recovery from errors.

Strategic position: Bridge level. Some library systems handle this; many don't reach this depth. Our value-add starts here and compounds upward.

Task families

  1. Form completion with validation. Pattern: sequence/recover · Mental model: versioning/history · Surfaces: form (3 different surface forms required). Learner completes a multi-field form, encounters validation errors on submission, identifies which field failed, corrects, resubmits.
  2. Email composition with structure. Pattern: compose/structure · Mental model: hierarchical organization (threading) · Surfaces: email. Learner composes a reply that quotes the original, adds a clear subject, sends to a specific recipient — not just hits "reply all."
  3. Account management and recovery. Pattern: identity/recover · Mental model: persistence · Surfaces: email, form. Learner recovers access to an account where they've forgotten the password — uses the email-reset flow without abandoning halfway.
  4. Verification flows. Pattern: extract/verify · Mental model: multi-window + persistence · Surfaces: email + form, document + form. Learner enters data on one screen, confirms it via another (typically email), and proceeds.

Cross-cutting at Level 2

Errors are treated as teaching material, not just tolerated. Per the spec's UX principle, a wrong attempt becomes a contrasting-case moment. The AI co-pilot's behavior here practices the productive-struggle and pattern-naming moves before the curriculum gets harder.


Level 3 — Workflow execution

Goal: operate across applications. Compose Level 1–2 patterns into longer workflows that span apps and sustain state across them.

Strategic position: This is where the project's core mission begins. The library system does not, in general, scaffold this level; the workforce-development field has named this as the gap (Urban Institute 2019). This is where we lead.

Task families

  1. Research → summarize → send. Pattern: extract/synthesize/transmit · Mental model: multi-window + versioning · Surfaces: browser + document + email. Learner reads three short articles, extracts three claims, drafts a summarizing email with the claims attributed to their sources.
  2. Receive → locate → respond. Pattern: sequence/cross-reference · Mental model: multi-window + hierarchy · Surfaces: email + file browser + email. Learner receives a request via email, finds the relevant document on the file system, replies with a summary of the document plus the document attached.
  3. Apply for a service end-to-end. Pattern: sequence/recover/verify (composite) · Mental model: all four · Surfaces: browser + form + email + document. Learner navigates to a benefits portal, completes a multi-page form requiring documents from elsewhere, uploads supporting evidence, receives a confirmation, returns later to check status.
  4. Comparison-shop with constraints. Pattern: extract/evaluate/decide · Mental model: multi-window + versioning · Surfaces: browser + document. Learner compares three options against stated constraints (budget, deadline, feature requirement), produces a justified recommendation document.

What's hard about Level 3

The cognitive load is multi-window state held over time. The user must remember "I was looking for X, then I went to Y, now I'm back, where was I?" without losing the thread. The co-pilot's interventions calibrate around this — re-stating the goal when the user returns from a tangent is more valuable than naming a pattern they've already used.


Level 4 — AI-augmented workflows

Goal: direct AI as a tool inside a workflow. Evaluate AI output. Refine.

Strategic position: Novel. No established curriculum exists at this level for adult digital-fluency populations. The closest analog is corporate AI-literacy training, which is structured around enterprise tools (Copilot, Glean) rather than general-purpose AI direction. We can lead here without competing with anyone.

Task families

  1. Drafting with AI, verifying against source. Pattern: delegate/verify · Mental model: AI-as-directable-system · Surfaces: AI panel + document + browser. Learner asks the AI to draft a response (e.g., a benefit appeal), then verifies each factual claim against the source documents, identifies any hallucinations, refines.
  2. Explaining unfamiliar content with AI. Pattern: delegate/cross-check · Mental model: AI-as-directable-system · Surfaces: document + AI panel. Learner receives an unfamiliar form (e.g., a tax form, a lease), asks the AI to explain each field, confirms the AI's explanation against the form's actual fields and instructions before acting.
  3. Iterative refinement. Pattern: iterate/converge · Mental model: versioning + AI-as-directable-system · Surfaces: AI panel + document. Learner asks for a draft, finds it almost-right-but-not-quite, learns to ask for specific changes ("make it shorter," "remove the second paragraph") rather than starting over.
  4. Detecting AI failure modes. Pattern: evaluate/discriminate · Mental model: AI-as-directable-system · Surfaces: AI panel + browser + document. Curated tasks where the AI confidently produces a wrong answer; learner is taught to expect this category of failure and to verify rather than trust.

What's hard about Level 4

Calibrating trust. The user has to internalize that the AI is often right but unreliable in ways that matter, and that verification is non-optional. This is itself a mental model: the AI is not an oracle and not a search engine; it is a directable but fallible collaborator. Our curriculum may be the first place a low-fluency adult learner encounters this framing.


Level 5 — Adaptation layer

Goal: future-proofing. Encounter an unfamiliar tool, recognize patterns, figure it out. This is the closest thing to "I am now digitally fluent" the platform certifies.

Strategic position: The most ambitious level and the most aligned with the pitch's thesis. Tools change; AI agents proliferate; the only durable skill is the ability to meet new ones. No existing adult-education program targets this directly.

Task families

  1. First encounter with an unfamiliar tool. Pattern: explore/map · Mental model: AI-as-directable-system + multi-window. Learner is given a previously unseen tool (an unfamiliar AI agent, a niche productivity app) and a task to accomplish in it. The metric is whether they can succeed without a tutorial — by exploring, asking the AI co-pilot for orientation, and applying patterns from earlier levels.
  2. Comparing two AI agents on the same task. Pattern: evaluate/discriminate · Mental model: AI-as-directable-system. Learner gives the same task to two different AI agents (or the same agent with different prompts), evaluates the outputs, picks the better one, articulates why it's better. Practices the higher-order judgment Level 4 introduced.
  3. Decomposing and delegating. Pattern: decompose/delegate/integrate · Mental model: all four. Learner faces a complex task (e.g., plan a multi-leg trip, prepare for a difficult conversation with documentation), breaks it into sub-tasks, delegates appropriate ones to the AI, integrates the results into a coherent whole.
  4. Creating instructions for someone else. Pattern: abstract/transmit · Mental model: hierarchy + AI-as-directable-system. Learner who has solved a task records or writes the steps for a different person to follow. Forces the metacognitive abstraction the pedagogy doc names as the key transfer move (Salomon & Perkins' "mindful abstraction").

What "fluent" means at Level 5

The benchmark isn't that the user knows specific tools. It's that the user can be dropped into an unfamiliar digital environment and apply schemas (decomposition, verification, delegation, evaluation) to make progress. The far-transfer assessment instrument is built around this scenario.


Pattern vocabulary

This is the full controlled vocabulary of patterns the AI co-pilot names. It is stable across the curriculum: 18 patterns, each recurring across at least three surface forms before being considered taught.

| Pattern | Plain-language description | Levels where it appears |
| --- | --- | --- |
| persistence/recall | Save now, find later, even after time passes. | 1, 2, 3 |
| navigation/wayfinding | Know where you are; know how to get back. | 1, 3 |
| identity/persistence | Establish an account; return to the same identity. | 1, 2 |
| extract/transform | Take information from one place; reshape it for another. | 1, 3, 4 |
| sequence/recover | Multi-step process; recover when a step fails. | 2, 3, 4 |
| compose/structure | Construct a message, document, or request with appropriate structure. | 2, 3 |
| identity/recover | Regain access when an identity is lost. | 2 |
| extract/verify | Move data; confirm it's correct via a second channel. | 2, 4 |
| extract/synthesize/transmit | Read multiple sources; produce a unified output. | 3 |
| sequence/cross-reference | Workflow that spans apps and references state in another. | 3 |
| extract/evaluate/decide | Compare options; produce a reasoned choice. | 3, 5 |
| delegate/verify | Ask AI; verify its output against ground truth. | 4 |
| delegate/cross-check | Use AI to explain; confirm its explanation. | 4 |
| iterate/converge | Refine a draft via successive specific requests. | 4 |
| evaluate/discriminate | Detect when AI is wrong; choose between alternatives. | 4, 5 |
| explore/map | Build a model of an unfamiliar tool by interacting with it. | 5 |
| decompose/delegate/integrate | Break a complex task; route parts; recombine. | 5 |
| abstract/transmit | Articulate a process clearly enough for a different person to follow. | 5 |

The co-pilot uses these names verbatim. Same pattern, same name, every time. This is what makes pattern naming a durable cognitive handle (per Salomon & Perkins on mindful abstraction).
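Because the co-pilot must use these names verbatim and the "taught" threshold is mechanical (three distinct surface forms), the vocabulary lends itself to a small machine-readable registry. A minimal sketch in Python — the `Pattern` class, field names, and `VOCABULARY` registry are hypothetical illustrations, not the platform's actual schema; the names, descriptions, and threshold come from the table above:

```python
from dataclasses import dataclass, field

@dataclass
class Pattern:
    name: str                 # verbatim name the co-pilot uses, every time
    description: str          # plain-language gloss from the vocabulary table
    levels: tuple             # curriculum levels where the pattern appears
    surfaces_seen: set = field(default_factory=set)

    def record_use(self, surface: str) -> None:
        """Log that the learner exercised this pattern on a given surface."""
        self.surfaces_seen.add(surface)

    @property
    def taught(self) -> bool:
        # A pattern counts as taught only after it recurs across
        # at least three distinct surface forms.
        return len(self.surfaces_seen) >= 3

# Two entries as examples; the full registry would hold all 18 patterns.
VOCABULARY = {
    "persistence/recall": Pattern(
        "persistence/recall",
        "Save now, find later, even after time passes.",
        (1, 2, 3),
    ),
    "delegate/verify": Pattern(
        "delegate/verify",
        "Ask AI; verify its output against ground truth.",
        (4,),
    ),
}

p = VOCABULARY["persistence/recall"]
for surface in ("file browser", "email", "form"):
    p.record_use(surface)
assert p.taught  # three distinct surfaces, so considered taught
```

Keying the registry by the verbatim name (rather than a numeric ID) makes it trivial to enforce "same pattern, same name, every time": the co-pilot's prompt layer can only emit strings that exist as keys.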


Implications for the spec

The current product spec defines the v1 MVP scope as "Curriculum Levels 1–2 (~10–15 tasks total)." Given the strategic position above, this is probably wrong — or at minimum, it sells short the project's distinctive contribution.

Two alternatives to consider in the next spec revision:

  1. v1 covers Levels 2–3, with Level 1 as a fast-track diagnostic. Users who already have the basics (which is most of our target population — they have smartphones; the gap is desktop/multi-app work) skip past Level 1 in 10–20 minutes. Users who don't are routed (in-platform or via library partnership) to Level 1 acquisition before re-entering. This positions v1 against the real gap libraries don't fill.
  2. v1 covers Levels 3–4 only, partner with libraries for Levels 1–2. Sharper. Concedes the basics to the established infrastructure. Concentrates engineering effort and content design entirely on the novel-contribution levels. Distribution requires a library partnership, which the fieldwork program is set up to identify.

The choice between these is partly a product question (what's the right scope for a 4–6 month MVP?) and partly a distribution question (which library or workforce partner can we credibly land in the first six months?). It should be made jointly with Phase 2 of the field-research program, not now.


What's deferred

This document deliberately does not commit to concrete task designs or to the personas the curriculum targets; both are deferred to Phase 1 of the fieldwork program (see Purpose).

Footnotes

  1. Hecker & Loprest (Urban Institute), Foundational Digital Skills for Career Progress, 2019, p. 14: "Multiple respondents suggested it is not clear how to train people to move from this initial level to more fluency."