
If ChatGPT Needs Better Prompts, What Do Students Need?

Students need a behaviorist approach to teaching that responds to their needs, providing better instruction and ultimately leading them to achieve prescribed learning outcomes.

Imagine this: You’re using ChatGPT to help write a lesson plan. The output isn’t quite right. It’s too vague, too long, not what you had in mind. What do you do? 

You don’t scold the AI. You don’t assume it’s lazy, unmotivated or uninterested in your subject. You don’t second-guess its intentions. You simply adjust the prompt: try again, reframe the question, add structure and clarify your goal. Then it works. 

This interaction reveals something profound, not about artificial intelligence but about human learning. When dealing with AI, we accept a core truth that education often forgets: Learning isn’t about what we do to students; it’s about what students do as a result of instruction. That is, the only thing we can reliably assess is behavior. Not expectations, feelings, background or intentions—just behavior. 

However, in most classrooms, we focus on coverage, engagement or experience but rarely ask what students are supposed to do at the end of instruction. Answering that question is precisely the role of student learning outcomes (SLOs): to make learning visible, measurable and behavioral. Like AI prompts, well-crafted SLOs specify what students are expected to demonstrate, not just what they are exposed to.  

In fact, our entire relationship with AI is behaviorist by necessity. We only see the output. There’s no brain to probe, no feelings to consult, no internal state to empathize with. What we’re left with is action, and from that action we adjust our own behavior to improve the outcome. It’s not cold. It’s precise. It’s defined. It’s clearly articulated. And that’s exactly what SLOs are supposed to do for students. 

Behaviorism at the Core of AI Interactions 

Every time we interact with AI, we’re engaged in a feedback loop. We prompt, observe, evaluate and refine. The AI, in turn, responds based solely on probabilities, patterns extracted from vast data sets. It does not intend to please us and does not feel anything, so we never confuse its output with its identity. 

This is where B.F. Skinner’s work becomes strikingly relevant, not in a lab with pigeons but in digital spaces with large language models. Skinner argued that learning is not a mysterious transformation that happens inside the mind. It is behavior shaped by environment. As he famously wrote: “The question is not whether machines think, but whether men do” (Skinner, 1969, p. 288). In other words, if you want to understand learning, don’t look inward. Look outward at the interaction between the organism and its environment. Look at what changes. 

When we apply this insight to students, something shifts. Suddenly, it’s not about how much a student cares, or whether they understand. It’s about what they can do. And once that’s our focus, our responsibility as educators becomes clearer: Create the conditions where the desired behavior emerges reliably. 

Empathy Isn’t Enough; Design Is What Matters 

In classrooms, educators often let good intentions lead the way. We empathize, interpret and try to meet students where we, as faculty, estimate they are emotionally or intellectually. While that’s deeply human, it’s also risky, because no amount of empathy can replace the hard work of designing learning environments that work. 

Here’s where AI provides a helpful model. We don’t hope ChatGPT “gets it.” We engineer better prompts until it does. What if we approached instruction the same way, not as performance but as an evolving system of inputs designed to elicit specific, observable student behavior? 

Adopting this approach doesn’t mean abandoning compassion, but it does mean letting go of the idea that we can teach through emotional attunement alone. Students don’t learn because we care. They learn because we design tasks, feedback loops and conditions that reinforce the behaviors we want to see. As Skinner (1954) argued, environment, not willpower or internal desire, shapes behavior. If students aren’t learning, we must ask what reinforcement structure we’ve created. What cues, support and consequences are shaping their actions? 

Students as Agents, Not Projects 

This reframing also calls into question the way we think about students themselves. In higher education, we spend a great deal of time talking about students as if they are projects, entities to be fixed, retained or engaged. We focus on identity: English learner, first-gen, low-income, neurodivergent. Though we intend for these labels to support equity, they often serve to predict outcomes rather than change them. 

We can’t assess generalizations, but we keep trying. We assume that students from certain backgrounds need different expectations or more leniency. Maybe they do need more support, but what they need is not our assumptions but our action—not our intentions but our design. 

Just as we don’t ask where AI comes from, we shouldn’t center our teaching on who a student is. We should ask what a student can do now and what environments will help them do more tomorrow. Students don’t need us to lower the bar. They need us to believe they can reach it and for us to build the structure that helps them get there. 

Prompt Engineering as a Model for Teaching 

So, let’s take the analogy seriously. Prompt engineering and instructional design share a common structure. Consider the parallels: 

Prompting AI → Teaching Students

Define the task clearly → Clarify the learning outcome

Use precise language → Avoid vague verbs in SLOs

Provide examples → Model the skill

Adjust based on output → Give feedback and redesign tasks

Repeat with refinement → Allow multiple attempts

If a student fails to demonstrate a skill, we don’t label them a failure. We revise the assignment. We revisit the instructions. We improve conditions in the environment where learning occurs. The burden is not on the student to feel more inspired. The burden is on us to create better systems that make learning possible. That’s not mechanistic but humane because what’s more caring than building a world in which success is within reach? 

The Power of Observable Learning 

Once we move away from the idea that learning is something hidden inside the mind and recognize it as something expressed through action, everything changes. Assessment becomes clearer. Feedback becomes more useful. Even grading becomes fairer. As Skinner (1968) emphasized, learning isn’t defined by how a student feels or what they understand internally. It’s defined by the behavioral changes that result from experience.  

This definition doesn’t mean students are machines. It means machines have reminded us of something education too often overlooks: Behavior is the only valid evidence of learning. When we focus on what students do, we stop guessing, stop generalizing and start building real pathways toward mastery. Let’s stop asking how students feel about learning and start helping them do it again and again in a variety of contexts, through repeated articulations and applications until they can do it on their own. 

A New Kind of Care 

Ironically, by focusing less on emotion and more on behavior, we create fairer, more humane classrooms. Students don’t need us to feel sorry for them. They need environments where success is a byproduct of their participation. 

When AI fails, we don’t punish it. We redesign the prompt. When students struggle, we should do the same. Let’s make sure our students are doing the right things and that our systems are built to support them. Teaching is not about what we do to students. It’s about what students do because of us. 

 

References 

Skinner, B. F. (1954). The science of learning and the art of teaching. Harvard Educational Review, 24(2), 86–97. 

Skinner, B. F. (1968). The technology of teaching. Appleton-Century-Crofts. 

Skinner, B. F. (1969). Contingencies of reinforcement: A theoretical analysis. Appleton-Century-Crofts.