Anastasia Betts, PhD
10 min read · Jul 17, 2024

The Knowledge Model Imperative: Why Human Expertise is Essential for AI in Education

by Anastasia Betts, PhD.

Editor’s Note: This article is the second in a series exploring the intersection of Knowledge Space Theory, AI, and personalized learning. The series aims to demystify complex concepts in educational technology and highlight the potential and challenges of creating truly adaptive learning systems. For background, readers are encouraged to start with our first article, “The Future is Now: Accelerating Learning through Knowledge Space Theory and AI-Driven Personalization.”

Introduction

In our previous exploration of Knowledge Space Theory and AI-driven personalization (found here), we delved into the transformative potential of comprehensive knowledge models in education. These models, complex representations of how concepts, cognitive processes, and skills interrelate within a domain, form the backbone of truly adaptive learning systems. As we continue to navigate the rapidly evolving landscape of educational technology, three significant misconceptions have emerged that warrant our attention.

First, many educators and technologists mistakenly equate existing standards frameworks, such as Common Core or state-specific standards, with the kind of detailed knowledge models necessary for AI-driven adaptive learning. Second, there’s a widespread assumption that comprehensive knowledge models already exist and are readily available. Lastly, with the recent surge of interest in Large Language Models (LLMs) like ChatGPT, there’s a growing belief that these AI tools can effortlessly generate the complex knowledge models required for personalized learning.

These misconceptions, while understandable given the current buzz around AI in education, risk oversimplifying the intricate process of creating effective knowledge models. In this article, we’ll clarify the crucial differences between standards frameworks and true knowledge models, explain why comprehensive models are not yet widely available, and examine why LLMs, despite their impressive capabilities, fall short in creating these essential educational tools.

By addressing these misconceptions head-on, we aim to foster a more nuanced understanding of the challenges and opportunities in developing knowledge models for education. This understanding is crucial as we work towards harnessing the full potential of AI in creating more effective, personalized learning environments for all students, while recognizing the irreplaceable role of human expertise in this process.

Common Misconceptions in Educational AI

As artificial intelligence continues to make headlines in the education sector, three prevalent misconceptions have emerged that require careful examination and clarification.

Standards Frameworks are not Knowledge Models

A common misconception is equating educational standards frameworks, such as the Common Core or Next Generation Science Standards, with the detailed knowledge models required for AI-driven adaptive learning. While standards frameworks play a crucial role in education by outlining learning goals and expectations, they are fundamentally different from the granular, interconnected knowledge models needed to leverage AI in personalized learning systems.

Standards frameworks typically provide broad, grade-level learning objectives but often lack the detailed breakdown of knowledge components, their relationships, and the specific pathways for learning progression. In contrast, knowledge models offer a much more fine-grained representation of a domain, explicitly mapping out how concepts build upon each other and intersect.

This confusion can lead to the mistaken belief that existing standards frameworks are sufficient to power adaptive learning systems. In reality, while these frameworks can inform the development of knowledge models, they lack the necessary detail and structure to drive truly personalized learning experiences.

Comprehensive Knowledge Models are not Freely Available

A second critical misconception is the assumption that comprehensive, detailed knowledge models already exist and are readily available for use in AI-driven educational systems. This belief likely stems from the existence of various educational resources, curriculum guides, and standards frameworks. However, as we discussed above and in our previous article, standards are not knowledge models, and truly comprehensive knowledge models that capture the intricate relationships between concepts, skills, and cognitive processes across entire domains are remarkably rare.

Most existing models are either too broad (like curriculum standards) or too narrow (focusing on specific topics), lacking the granularity and interconnectedness required for truly adaptive learning. The reality is that creating such comprehensive models is an ongoing challenge in the field of educational technology. While some companies and research institutions are working on developing these models, the results are largely proprietary and will likely never be freely available. Nor are they yet sufficiently comprehensive to power the kind of personalized, AI-driven learning experiences that many envision. Recognizing this gap is crucial for understanding the current state of AI in education and the significant work that lies ahead in realizing the full potential of personalized learning.

The Assumption that LLMs Can Create Knowledge Models

Lastly, given the growing number of powerful LLMs (e.g., see this article about the best LLMs of 2024), there seems to be a belief that these AI systems can generate the complex knowledge models required for personalized learning. This assumption is understandable given the impressive text generation capabilities of LLMs. Many assume that if an AI can produce coherent essays on complex topics, surely it can map out the intricacies of a knowledge domain.

However, this belief overlooks the fundamental differences between generating text based on patterns in data (which is the main function of LLMs) and creating structured, pedagogically sound knowledge models. LLMs, while powerful, lack the deep understanding of domain-specific relationships, learning progressions, and cognitive processes that are essential for effective knowledge modeling. They can describe concepts but struggle to represent the nuanced interconnections and prerequisites that form the backbone of a comprehensive knowledge model.

By addressing these misconceptions, we can foster a more accurate understanding of the complexities involved in creating effective knowledge models for education. To fully grasp why LLMs fall short in this task, we need to delve deeper into how these AI systems actually work.

Understanding Large Language Models (LLMs)

Large Language Models (LLMs) have garnered significant attention in recent years due to their impressive capabilities in generating human-like text. However, their limitations in creating comprehensive knowledge models are often overlooked. Let’s explore what LLMs are and how they function to better understand both their potential and their constraints in educational contexts.

How do Large Language Models (LLMs / Generative AI) actually work?

Large Language Models are advanced artificial intelligence systems trained on vast amounts of text data. Examples include models like ChatGPT, Claude, and others. These models use deep learning techniques, particularly the transformer neural network architecture, to process and generate human-like text.

The training process for LLMs involves exposing the model to billions of words from various sources such as books, websites, and articles (see an informative, if somewhat alarming, explanation here). During this process, the model learns patterns in language, including grammar, context, and even some factual information. However, it’s crucial to note that LLMs don’t “understand” this information in the way humans do. Instead, they learn to predict which words are likely to come next in a sequence based on the patterns they’ve observed.
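The idea of “predicting the next word from observed patterns” can be illustrated with a deliberately tiny sketch. This is not how a real LLM works internally (real models use transformer networks trained on billions of tokens), but a simple bigram counter built on a toy corpus shows the core objective: choose a likely next word from statistics alone, with no understanding involved.

```python
from collections import Counter, defaultdict

# Toy corpus; the sentences are illustrative, not real training data.
corpus = "students learn math . students learn science . teachers teach math".split()

# "Training": count which word follows which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    # Return the most frequently observed successor of `word`, if any.
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("students"))  # "learn" — the only word ever seen after "students"
```

The predictor produces plausible continuations without any notion of what “students” or “learn” mean; scaled up enormously, that is the gap between fluent text generation and genuine understanding that this article describes.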

Capabilities and limitations of LLMs

LLMs have demonstrated remarkable capabilities in various language tasks:

1. Text generation: They can produce human-like text on a wide range of topics.

2. Translation: Many LLMs can translate between multiple languages.

3. Summarization: They can condense longer texts into shorter summaries.

4. Question answering: LLMs can provide answers to questions based on their training data.

However, these capabilities come with significant limitations:

1. Lack of true understanding: LLMs don’t comprehend the meaning of the text they process or generate. They operate based on statistical patterns (i.e. math) rather than actual understanding of language.

2. No real-world knowledge: While LLMs may appear to have knowledge, they don’t have direct experience or understanding of the real world.

3. Potential for inaccuracies: LLMs can generate information that seems correct but isn’t, often referred to as “hallucinations.”

4. Limited reasoning abilities: While they can perform some reasoning tasks, LLMs struggle with complex logical reasoning, especially when it requires integrating multiple pieces of information.

The fundamental difference between text generation and knowledge modeling

The key distinction between what LLMs do and what’s required for knowledge modeling lies in the nature of the task. LLMs excel at generating text that appears coherent and relevant based on patterns in their training data. This is fundamentally different from creating a structured representation of knowledge that captures the relationships between concepts, the prerequisites for understanding, and the cognitive processes involved in learning.

Knowledge modeling requires not just listing or describing concepts, but understanding how they interconnect, how they build upon each other, and how they relate to broader learning objectives. It involves creating a structured, hierarchical representation of a domain that can guide personalized learning paths. This level of structured understanding and representation is beyond the current capabilities of LLMs, which are designed for pattern recognition and text generation rather than for creating organized, pedagogically sound knowledge structures.

Understanding these fundamental aspects of LLMs sets the stage for exploring why, despite their impressive capabilities, they fall short in the crucial task of creating the comprehensive knowledge models required for truly personalized, AI-driven learning experiences.

Creating True Knowledge Models: A Complex Human Endeavor

While we’ve established that LLMs can’t create the knowledge models we need, the question remains: how do we create them? Let’s explore the unique challenges and processes involved in developing true knowledge models for education.

The Multidisciplinary Nature of Knowledge Modeling

· Collaboration is key: Creating effective knowledge models requires input from domain experts, cognitive scientists, educators, and data scientists.

· Bridging theory and practice: Models must integrate theoretical frameworks with practical classroom experiences and empirical research findings.

Structuring Knowledge Beyond Linear Progression

· Multi-dimensional mapping: Knowledge models must represent concepts in networks, not just linear sequences, capturing complex interdependencies.

· Adaptive pathways: Models need to accommodate multiple learning paths to suit diverse learner needs and backgrounds.

Incorporating Cognitive and Metacognitive Elements

· Beyond content: Effective models must include strategies for learning, problem-solving, and metacognition.

· Representing tacit knowledge: Capturing implicit knowledge and skills that experts often take for granted is a significant challenge.

Validation and Refinement Process

· Iterative testing: Models require extensive testing with diverse learner groups to ensure effectiveness and identify gaps.

· Continuous evolution: As new research emerges and educational needs change, models must be regularly updated and refined.

Ethical Considerations and Bias Mitigation

· Ensuring inclusivity: Models must be designed to be culturally responsive and inclusive of diverse perspectives.

· Addressing potential biases: Rigorous review processes are needed to identify and mitigate biases in the knowledge representation.

Creating true knowledge models is a complex, ongoing process that requires human expertise, collaborative effort, and a deep understanding of both the subject matter and the learning process. While AI tools can assist in this process, the core work of conceptualizing, structuring, and validating these models remains a distinctly human endeavor.

The Critical Role of Knowledge Models in Personalized Learning

At the heart of truly effective personalized learning lies a crucial component: the knowledge model. The quality and granularity of these models directly impact the effectiveness of the learning experience, acting as the foundation upon which all personalization is built.

Detailed knowledge models enable learning systems to create highly tailored learning paths, guiding learners through content in the optimal sequence for their individual needs. This precision extends to assessment as well. With a granular understanding of the relationships between concepts, systems can more accurately gauge a learner’s current understanding, identifying specific knowledge gaps and misconceptions rather than broad areas of weakness.

This level of detail allows personalized learning systems to adapt in real-time, selecting the most appropriate content for each learner based on their current knowledge state and learning goals. It also enables the provision of just-in-time support, anticipating potential stumbling blocks and offering relevant scaffolding when needed.
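As a sketch of how this real-time adaptation depends on the knowledge model, consider selecting a learner’s next concepts from their current knowledge state: the system surfaces only concepts whose prerequisites are all mastered (roughly what Knowledge Space Theory calls the outer fringe of the knowledge state). The concept names and prerequisite graph below are illustrative assumptions, not a real curriculum.

```python
# Hypothetical prerequisite graph: concept -> required prior concepts.
prerequisites = {
    "place_value": [],
    "addition": ["place_value"],
    "subtraction": ["place_value"],
    "multiplication": ["addition"],
    "division": ["multiplication", "subtraction"],
}

def next_targets(mastered):
    # Concepts not yet mastered whose prerequisites are all satisfied.
    return sorted(
        concept
        for concept, prereqs in prerequisites.items()
        if concept not in mastered and all(p in mastered for p in prereqs)
    )

# A learner who knows place value and addition is ready for subtraction
# and multiplication, but division is still out of reach.
print(next_targets({"place_value", "addition"}))  # ['multiplication', 'subtraction']
```

The selection logic itself is trivial; all of the pedagogical value lives in the graph. A coarse or inaccurate model yields coarse or inaccurate recommendations, which is precisely why the quality of the underlying knowledge model bounds the quality of the personalization.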

The efficiency of learning is significantly enhanced when a system truly understands the structure of knowledge in a domain. Learners can progress more rapidly, avoiding unnecessary repetition and focusing on areas that genuinely need attention. This targeted approach not only saves time but also maintains engagement by consistently presenting material at the right level of challenge.

It’s crucial to understand that the quality of the personalized learning experience is intrinsically tied to the quality of the underlying knowledge model. A superficial or incomplete model will result in sub-optimal learning experiences, no matter how sophisticated the technology delivering it might be. This underscores why the creation of comprehensive, granular knowledge models by human experts is so vital — it forms the bedrock upon which effective personalized learning is built.

Conclusion: The Irreplaceable Role of Human Expertise

As we’ve explored the complexities of creating true knowledge models for education, it’s clear that this task requires human expertise. While AI tools like Large Language Models have their place, they cannot replace the nuanced understanding and interdisciplinary collaboration that human experts bring to the table.

The creation of comprehensive knowledge models is a deeply human endeavor, requiring domain expertise, pedagogical understanding, and ethical judgment. As we push the boundaries of educational technology, we must remember that the power of personalization comes not just from advanced algorithms, but from the deep, structured understanding of knowledge domains that only human experts can provide.

The misconceptions we addressed underscore a crucial point: there’s still significant work to be done in this field. This work requires investment not just in technology, but in human capital — in training, supporting, and valuing the experts who can create these intricate maps of knowledge.

The future of education lies in the synergy between human expertise in knowledge modeling and technological advancements in content delivery and adaptation. By recognizing the critical role of human experts in creating high-quality knowledge models, we pave the way for educational technologies that can truly meet the diverse needs of learners and unlock the full potential of personalized education.

This article was edited in collaboration with Claude, an AI language model developed by Anthropic.

Written by Anastasia Betts, PhD

EdTech leader | Learning Scientist | Educational Problem Solver | Interests: early childhood & K-12 education, the learning sciences, curriculum design, & more.