Where AI waits for your thinking first.
A learning platform that pulls students into conceptual depth before technical detail. Built for any subject, any age, any pedagogy — configurable in eight steps, grounded in thirty years of learning sciences.
Most AI tutors give you the answer.
Wonderment makes you ask the question first — then sketch your model, defend your claim, debate three peers, and only then meet the canonical answer. Concepts before procedures. Always.
The wonderment-question principle, after Scardamalia & Bereiter (1994); Chin & Osborne (2008)
Every learning session moves through the same four cognitive stages. The third stage — the struggle gate — is the design's centre of gravity. Until the learner externalises a sketch, a prediction, and a claim, the AI's canonical answer stays locked.
Five candidate questions appear, drawn from the wonderment taxonomy: mechanism, limit, analogy, contradiction, transfer. Pick one — or write a better one.
Sketch the forces. Predict the limit case. State your claim and your reason. The AI's answer is gated until you commit to a model of your own.
A creative peer, a skeptic, and a mediator respond — not to give the answer, but to pressure-test yours. You adjudicate.
Now the canonical answer appears, alongside what your model got right and what shifted. Then: what extension question would you ask next?
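The gate mechanics above can be sketched as a simple state check — a hypothetical illustration of the design, not the platform's actual code. The canonical answer unlocks only once all three artefacts (sketch, prediction, claim with reason) have been committed:

```python
from dataclasses import dataclass

# Hypothetical sketch of the struggle gate: the AI's canonical
# answer stays locked until the learner commits all three artefacts.
@dataclass
class StruggleGate:
    sketch: bool = False
    prediction: bool = False
    claim_with_reason: bool = False

    def commit(self, artefact: str) -> None:
        # artefact is one of: "sketch", "prediction", "claim_with_reason"
        if artefact not in ("sketch", "prediction", "claim_with_reason"):
            raise ValueError(f"unknown artefact: {artefact}")
        setattr(self, artefact, True)

    def answer_unlocked(self) -> bool:
        # All three must be externalised before the answer appears.
        return self.sketch and self.prediction and self.claim_with_reason

gate = StruggleGate()
gate.commit("sketch")
gate.commit("prediction")
print(gate.answer_unlocked())   # still locked: no claim yet → False
gate.commit("claim_with_reason")
print(gate.answer_unlocked())   # all three committed → True
```

The point of the sketch is the invariant, not the data structure: no partial credit, no bypass — commitment to a model precedes the answer.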
Wonderment generates candidate questions in five canonical types. They aren't the questions the curriculum trains learners to ask. They're the questions that change how the problem looks.
If forces are equal and opposite, why doesn't the skater stop the moment she pushes?
What if the wall were made of paper? Or what if it were a planet?
Is this like firing a cannon, or more like jumping off a boat?
The wall pushes me back, but I don't feel it pushing — does it really?
Does this also explain how rockets work in empty space?
Each role sees a workspace shaped to their work — the student thinks, the teacher monitors, the school admin governs, the platform admin scales. Same product, four different surfaces.
A workspace that rewards your thinking. Pick a creative question, sketch your model, debate three peers, then meet the canonical answer.
An eight-step wizard configures any subject, any pedagogy, any age. Monitor every learner's stage live; review artefacts; adjust scaffolding mid-session.
Govern the platform at the school level. Approve presets for school use, organise teachers and classes, watch equity dashboards, align with curriculum standards.
Multi-tenant control plane. Provision schools, configure AI-provider routing and failover, curate the master preset library, audit every event.
A wizard exposes seven parametric dimensions plus a launch step. Not just STEM. Not just one pedagogy. The configuration becomes a JSON document the platform turns into a live session.
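One plausible shape for that document, built from the elements the page itself names (question taxonomy, gate artefacts, peer agents, pedagogy mix) — field names are illustrative assumptions, not the platform's published schema:

```python
import json

# Hypothetical preset document — every field name here is an
# assumption for illustration; the actual schema is not published.
preset = {
    "subject": "physics",
    "topic": "Newton's third law",
    "age_band": "11-14",
    "pedagogy_mix": {"conceptual": 0.7, "procedural": 0.3},
    "question_types": [
        "mechanism", "limit", "analogy", "contradiction", "transfer"
    ],
    "gate_artefacts": ["sketch", "prediction", "claim_with_reason"],
    "peer_agents": ["creative", "skeptic", "mediator"],
}

print(json.dumps(preset, indent=2))
```

A preset like this is what the eight-step wizard would emit and the session runtime would consume.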
Wonderment is not vibes. Every design choice maps to a published finding. Below: the empirical anchors that determined how the workspace works.
Self-generated "I wonder why…" questions predict deeper conceptual gain. Most classrooms see <0.2 student questions per hour. Scardamalia & Bereiter, 1994; Chin & Osborne, 2008.
Struggling on a novel problem before canonical instruction improves transfer. The struggle gate is the workspace's operationalisation. Kapur, 2008, 2014; meta-analysis Sinha & Kapur, 2021.
Prompting learners to explain why drives conceptual change. The "claim + reason" cell at the gate is exactly this prompt. Chi et al., 1989; meta-analysis Bisra et al., 2018.
Misconceptions are coherent p-prims, not gaps. They yield to dissatisfaction plus a plausible alternative — the work the skeptic agent does. diSessa, 1993; Chi, 2008.
AutoTutor's lineage of dialogic tutoring systems shows that multiple agent voices outperform a single tutor on transfer tasks. Graesser, Person, McNamara, et al.
Conceptual and procedural knowledge co-develop iteratively — neither comes strictly first. Wonderment's pedagogy mix lets instructors balance the two. Rittle-Johnson & Schneider, 2015.
Largest single learning gain in the session occurs after the struggle gate, consistent with productive-failure literature.
After three weeks, learners reach for limit and contradiction questions on their own — the deepest types in the taxonomy.
Learner-visible work balanced with AI co-construction; full transparency for teachers, parents, and the learner themselves.
We're working with a small set of partner schools across Hong Kong, mainland China, and Southeast Asia. Get the full demo, a sample preset library for your subject area, and a forty-five-minute walkthrough with the team.