Book Review of AI-Powered Pedagogy and Curriculum Design

BOOK REVIEW

AI-Powered Pedagogy and Curriculum Design: Practical Insights for Educators, edited by Geoff Baker and Lucy Caton, Routledge, 17 September 2025, 274 pp., £45.99 (paperback), ISBN 9781032894744

A small moment in the book captures the larger problem it wants us to face. In a university workshop, an educator tries a simple prompt to generate a scheme of learning, then tries again with clearer boundaries about learning outcomes, assessment conditions, and inclusion. The second output is sharper, more aligned to pedagogy, and more usable. But it also surfaces a quiet social fault line: the most capable versions of these systems are often locked behind paywalls, and the “good” guidance becomes unevenly distributed. That one scene is more than a teaching tip. It is a reminder that AI in education is already an issue of access, status, and power, long before we argue about whether students should be allowed to use it. For SIM scholars, that is the right starting point, because the central question is not whether AI improves teaching in the abstract, but who benefits, who bears risk, and how institutions justify the trade-offs they make.

AI-Powered Pedagogy and Curriculum Design, edited by Geoff Baker and Lucy Caton, is best read as a book about organizational choices disguised as a handbook of classroom practice. It does not treat generative AI as a neutral tool that can be added to the curriculum like a new software license. Instead, it presents AI as a force that pushes universities and colleges to clarify what they value, how they govern, and what kind of relationship they want between educators and students. This is precisely why the book belongs in SIM conversations. Education is a major pathway into work and citizenship. When AI reshapes educational practice, it reshapes the distribution of opportunity, the meanings of merit and integrity, and the dignity of professional identity for teachers. The volume repeatedly returns to these social consequences without losing the practical pulse of what educators are actually doing in classrooms, workshops, and assessment redesign meetings.

The introduction sets out the editors’ core claim with unusual clarity. Generative AI has created pressure for rapid institutional responses, yet many responses have been shaped by fear, policy gaps, and uncertain definitions of what AI competence even means. The book’s wager is that useful guidance must be grounded in real classroom settings and in educator experience, not only in lofty principles or vendor narratives. The key takeaway for SIM scholars is straightforward. If institutions respond to AI mainly through surveillance, prohibition, and deterrence, they risk damaging trust and worsening inequity. If they respond through capability building, transparent governance, and participatory sensemaking, they create conditions in which ethical practice is more likely, and the harms of rushed adoption are easier to detect and correct.

Part 1 provides historical and philosophical foundations that, read through a SIM lens, function like an inoculation against techno-solutionism (Linares-Lanzman et al., 2025). Chapter 2, a brief history of AI, traces the field's cycles of promise and disappointment, from early ideas about machine intelligence to the era of data-driven systems. Its practical value is not just timeline knowledge. It is the lesson that people are prone to over-attribute intelligence to "intelligent-seeming" systems, and institutions are prone to institutionalize those attributions as policy. For SIM scholars, the importance of this chapter lies in what it makes teachable: the cultural politics of innovation (Courvisanos, 2009), the role of hype in resource allocation (Logue & Grimes, 2022), and the temptation to treat complex social problems as technical tasks (Hewitt & Hall, 1973).

Chapter 3, a philosophical overview of ethical concepts for navigating AI in education, supplies a vocabulary that educators often lack when they are asked to write policy quickly. It helps distinguish what is merely compliant from what is ethically defensible. A SIM-oriented reading can place this chapter within business ethics debates that students already know, such as duty-based reasoning versus outcome-based reasoning, and the difference between procedural fairness and distributive fairness. The chapter’s key takeaway is that ethical practice with AI is not reducible to a rule like “declare your tool use.” Ethical practice also includes preventing harm from biased outputs, respecting privacy and consent, avoiding false authority in machine-generated claims, and ensuring that the burden of risk does not fall mainly on those with the least institutional power. Its importance is that it makes ethics usable, not ornamental (Manning & Amare, 2006). It gives educators and managers language for deciding, explaining, and revising decisions.

Part 2 turns to ethics as lived organizational work. Chapter 4, on balancing innovation and ethics, frames AI integration as a governance challenge. The chapter emphasizes the need to anticipate algorithmic bias, manage data responsibly, and invest in end-user training. Read through stakeholder theory, the chapter implicitly argues that institutions must treat students and educators as stakeholders whose interests are not identical, and whose vulnerabilities differ. The key takeaway is that ethical adoption requires infrastructures of oversight and learning, not just aspirational statements. Its importance for SIM is that it resembles what responsible firms must do with AI: set boundaries, build competence, and create accountability mechanisms that survive beyond the initial excitement (Busuioc & Lodge, 2017).

Chapter 5, on how AI is transforming the higher education experience, is one of the most SIM-relevant chapters because it shows how quickly "integrity" can become a moral panic that harms relationships. The Leeds Trinity University case describes a cross-functional working group that tried to build guidance for ethical and sustainable AI use. The chapter highlights guidance fatigue, anxiety about being accused of misconduct, and the strain placed on relational trust when students feel actively discouraged from using AI under threat of being villainized. It also challenges a common stereotype in higher education: that students are naturally AI-literate while educators lag behind. The takeaway is that access and competence are different. Many students do not understand hallucinations, citation errors, or the limits of reliability. The importance of the chapter lies in its implicit justice argument. If institutions only punish misuse without teaching critical use, they create a hidden curriculum in which the confident and well-resourced benefit, while the anxious or under-supported avoid experimentation, even when that experimentation could support learning.

Chapter 6 focuses on transparency and accountability through explainable AI. The SIM contribution here is a reminder that black-box decision making is not merely a technical concern. It is a legitimacy concern. When AI influences decisions in education, whether in student support systems, analytics, or administrative processes, explainability becomes part of procedural justice. The key takeaway is that accountability requires the ability to understand and contest decisions, not just accept them. Its importance is that it points to a governance standard that SIM scholars can generalize: institutions should not deploy systems that they cannot explain, audit, or justify to those affected by them.

Part 3 shifts from policy questions to the human experience of professional change. Chapter 7 addresses AI in teacher professional development and treats readiness as a developmental process. It implicitly draws on organizational learning: people learn new tools through experimentation, peer exchange, and iterative reflection. The takeaway is that professional development should be structured, sustained, and psychologically safe, not delivered as a one-off compliance training. Its importance for SIM is that it shows how institutions can avoid a common injustice of digital transformation: expecting individuals to absorb the costs of change alone, including emotional strain, time burdens, and the fear of public failure.

Chapter 8 strengthens that focus by proposing a staff development framework for professional readiness and ethical practice. The chapter’s key takeaway is that training must cover both practical competencies and ethical judgment, and it must be designed so that staff with different starting points are not left behind. This is where educational psychology and organizational studies meet. People adopt tools when they feel competent, supported, and able to experiment without stigma. The importance for SIM scholars is that the chapter treats capability building as an equity issue. In organizational terms, it recognizes that competence is socially produced by access, time, support, and culture, not merely by personal motivation (Schunk & Zimmerman, 1997).

Chapter 9 asks the emotionally charged question many educators hear from colleagues and sometimes from themselves: if students have ChatGPT, why do they need us? The chapter’s answer is a strong defense of the educator’s role, grounded in the idea that teaching is not content delivery. It is design, judgment, relationship, and the cultivation of thinking habits. In educational psychology terms, learning depends on motivation, feedback, and a sense of belonging, not just exposure to information. The key takeaway is that AI can support learning but cannot replace the relational and moral dimensions of education. The importance for SIM is broader than education. It provides language for any profession under automation pressure: the irreplaceable parts of work are often the ethical, relational, and interpretive parts, which organizations routinely undervalue because they are harder to measure.

Chapter 10 addresses academic identity in the AI era and explores resistance, adaptation, and traditional values. Read through an institutional lens, the chapter shows that resistance is often a signal of value conflict, not a lack of modernity. Educators may resist AI because they fear erosion of standards, misrecognition of expertise, or a shift from education to production metrics. The key takeaway is that institutions should interpret resistance as data about legitimacy, workload, and professional meaning. The importance for SIM scholars is that this is a live case of how identities are negotiated during technological change, and how poor change management can produce cynicism, compliance theatre, or disengagement.

Part 4 focuses on student experience and employability, bringing the book’s ethical and organizational arguments into the domain where inequality is most visible. Chapter 11 discusses AI and student experience, emphasizing that positive experience is shaped by teaching quality, curriculum structure, and thoughtful integration of digital tools. It also presents a management capstone case that integrates generative AI using constructivist-oriented active learning, where students engage real business issues and use AI for idea generation, planning, and aspects of data handling. The pedagogy is careful: AI is treated as a stakeholder in learning, not a source of truth, and students are trained to fact-check, validate, and interpret outputs. The key takeaway is that AI can broaden perspective taking and improve efficiency, but only when paired with information literacy and ethical boundaries. Its importance for SIM scholars is twofold. First, it provides a practical model for teaching responsible AI use as a lived practice. Second, it shows how employability narratives can be reframed: not “learn AI to be productive,” but “learn AI to remain agentic, ethical, and critically capable in AI-shaped workplaces.”

Chapter 12 extends this into employability more directly, focusing on how generative AI influences readiness for work. The SIM lens here is to treat employability as a social distribution, not merely an individual achievement. AI may widen opportunity for some by lowering barriers to communication and planning, while also deepening inequity if access to premium tools, mentoring, or institutional support is uneven. The key takeaway is that AI competence should be taught as a blend of practical skill and ethical judgment. The importance lies in positioning employability as a moral project: preparing graduates to participate responsibly in organizations that increasingly use AI to evaluate, rank, and manage people.

Chapter 13 offers a concrete case of digital assistants in an educational institution, including tools that support students, teachers, and campus teams. For SIM scholars, this is a governance case. Digital assistants promise speed and support, but they also concentrate data, shape decision pathways, and risk normalizing automated gatekeeping. The key takeaway is that assistance systems must be designed with clear purposes, limits, and ongoing monitoring. The importance is that it shows AI not as a classroom add-on but as institutional infrastructure, where social issues like surveillance, privacy, and unequal treatment can become embedded in routine support.

Chapter 14 brings the discussion back to participation through staff and student partnership. It frames partnership as a way to maximize benefits while addressing risks, and it aligns with ideas of co-creation and radical collegiality. The key takeaway is that ethical AI governance improves when those affected help shape the rules and practices. The importance for SIM scholars is that it models democratic governance in a domain often governed top-down, and it suggests a general principle relevant to organizations: inclusion is not only a value statement, it is a design choice that requires forums, processes, and shared authority.

The conclusion in Chapter 15 ties the volume together around skills, ethics, and human autonomy. It emphasizes building AI literacy among staff and students, coordinating policy through centralized but participatory frameworks, and collaborating with external partners without surrendering educational judgment. The key takeaway is that institutions should resist both extremes: panic-driven prohibition and uncritical adoption. The importance for SIM scholars is that the book presents AI integration as a test of institutional responsibility. It asks whether education providers will treat equity, trust, and autonomy as non-negotiable principles, or as costs to be managed.

Across its chapters, the volume’s contribution is not that it offers a single universal policy. It offers a way of seeing. It invites educators and institutions to interpret AI as an ethical and organizational phenomenon (Von Krogh, 2018), where the main challenges are governance, capability, and legitimacy, and where the main risks are inequity, mistrust, and the quiet displacement of human judgment. For SIM research scholars, the book becomes a bridge between two conversations that are often separated. One conversation is about pedagogical tactics, prompts, lesson plans, and assessment design. The other is about social harm, fairness, accountability, and power. This collection insists that these are the same conversation, because in an AI-shaped educational fabric, the design of pedagogy is also the design of justice.

Disclosure of interest

The author confirms that there are no financial or non-financial competing interests.

Statement of funding

No funding was received.

References

Busuioc, M., & Lodge, M. (2017). Reputation and accountability relationships: Managing accountability expectations through reputation. Public Administration Review, 77(1), 91-100.

Courvisanos, J. (2009). Political aspects of innovation. Research Policy, 38(7), 1117-1124.

Hewitt, J. P., & Hall, P. M. (1973). Social problems, problematic situations, and quasi-theories. American Sociological Review, 367-374.

Linares-Lanzman, J., Falco, E., & Çelik, Ö. (2025). Techno-solutionism and age discourse in the AI industry. Bulletin of Science, Technology & Society, 45(3-4), 89-101.

Logue, D., & Grimes, M. (2022). Living up to the hype: How new ventures manage the resource and liability of future-oriented visions within the nascent market of impact investing. Academy of Management Journal, 65(3), 1055-1082.

Manning, A., & Amare, N. (2006). Visual-rhetoric ethics: Beyond accuracy and injury. Technical Communication, 53(2), 195-211.

Schunk, D. H., & Zimmerman, B. J. (1997). Social origins of self-regulatory competence. Educational Psychologist, 32(4), 195-208.

Von Krogh, G. (2018). Artificial intelligence in organizations: New opportunities for phenomenon-based theorizing. Academy of Management Discoveries, 4(4), 404-409.

*****

Reviewed by:

Mayukh Mukhopadhyay

Executive Doctoral Scholar

Indian Institute of Management Indore

*****
