I write about the intersection of AI, digital literacy, learning science, and the future of education, not as a spectator, but as someone building in it every day.

Through my work, I explore what meaningful AI integration actually looks like, how assessment must evolve in a world shaped by generative AI, and what skills will matter most in the decade ahead. I’m passionate about helping educators, leaders, and organizations think clearly about how intelligent tools are reshaping the way we learn, teach, and work. Just as important, I help them design systems that are innovative, responsible, and deeply human.

Beyond writing, I work directly with education leaders, certification bodies, and learning product teams to design AI strategies, modernize assessment models, build scalable content systems, and apply learning science in practical, operational ways.

If you’re curious about where education is headed in the age of AI, you’re in the right place.

Hello! I’m Michelle Marlowe.

Education is changing.
The question isn’t whether AI belongs in it.
The question is whether we’ll shape that future intentionally.

Let’s Build What’s Next

AI is reshaping education faster than most institutions can adapt.

I help organizations move from reaction to strategy: integrating AI thoughtfully, redesigning assessment responsibly, and building scalable learning systems grounded in cognitive science.

AI in education is not just a feature upgrade. It’s a structural shift.

If you’re rethinking assessment, literacy, governance, or learning product design, I’d love to explore how we can work together.

  • Strong learning experiences don’t happen by accident. They’re designed around how people actually think, retain, and apply information. I help education teams, product leaders, and organizations translate learning science into practical design decisions: implementation, not theory.

    This work focuses on applying principles like retrieval practice, spaced repetition, interleaving, cognitive load management, feedback timing, and scenario-based learning to real-world products and programs. I work with teams to audit existing learning experiences, identify where cognitive principles are being underutilized (or misunderstood), and redesign toward deeper retention and transfer.

    For organizations building digital products, that means aligning features to memory formation and skill development, not just engagement metrics. For assessment teams, it means measuring application and reasoning instead of surface recall. For leadership teams, it means making product and content decisions grounded in how learning actually works.

    The outcome is learning systems that are more durable, defensible, and effective, because they’re built on cognitive science, not intuition.

  • I work with executive teams who want a clear, responsible approach to AI, not a rushed implementation or a stalled initiative.

    Together, we identify where AI can meaningfully improve learning outcomes and business performance, and where it introduces unnecessary risk. This includes evaluating opportunities and vulnerabilities, defining governance and quality standards, building practical 90-day and 12-month roadmaps, and supporting build-vs-buy decisions. The outcome is alignment: AI initiatives that strengthen learning, protect credibility, and support sustainable growth.

  • AI features should enhance cognition, not replace it. I work directly with Product, Engineering, and Education teams to ensure AI integration supports learning science principles.

    That means designing feedback systems that deepen understanding, protecting productive struggle, mitigating hallucination risk, and aligning features to clear cognitive outcomes.

    The goal is simple: build AI-enabled learning products that strengthen thinking rather than shortcut it.

  • If learners can access ChatGPT, recall is no longer a reliable measure of competence. I help certification bodies, universities, and professional associations redesign assessment models to prioritize judgment, reasoning, and contextual application.

    This work includes auditing assessments for AI vulnerability, modernizing item-writing frameworks, aligning to cognitive complexity, and training reviewers and SMEs to evaluate thinking instead of surface-level correctness.

    The result is assessment systems that remain rigorous, defensible, and relevant in an AI-enabled world.

  • Digital literacy now includes collaborating with intelligent systems, not just navigating software.

    I partner with institutions and workforce organizations to define modern literacy competencies that prepare learners and professionals to evaluate AI outputs, recognize bias and hallucinations, and make informed decisions about when and how to use AI tools.

    The focus is practical implementation: clear competency models, leadership decision frameworks, and rollout strategies that connect literacy directly to workforce readiness and organizational goals.

  • As AI increases content velocity, governance becomes essential to preserving trust. I help growing EdTech companies, certification providers, and learning platforms design content systems that scale without sacrificing rigor.

    This includes auditing workflows, strengthening QA structures, clarifying ownership, optimizing SME collaboration, and building performance frameworks tied to learner outcomes and business metrics.

    The outcome is a durable content engine that supports growth while maintaining accuracy and consistency.