Using Artificial Intelligence in your K12 classroom
A practical analysis by a high school teacher for high school teachers
The intention of this post is to give teachers a way to assess if, when, and how they want to use Artificial Intelligence in their classrooms. The post is organized to place the most useful information first and the justifications and background for that information later.
How AI can be used by students
It should be used ONLY in situations where immediate feedback on work the student has already done will likely improve their learning experience. It should never be used as a first or foundational step for something like brainstorming.
It should be introduced only after a clear presentation of the dangers of using Artificial Intelligence to escape doing work. Fortunately, most students are already aware that Large Language Model Generative Artificial Intelligence (LLM GenAI) can damage their ability to think critically: 67% of students endorsed the statement, “The more students use AI for their schoolwork, the more it will harm their critical thinking skills” (RAND, March 2026).
A Use Case Example
The reasons why I think LLM GenAI is useful in this example
Immediate feedback is extremely useful for encouraging the student to be specific and for building familiarity with detailed, precise writing.
Because I can embed the prompt in the app, I can constrain the LLM GenAI reasonably well to stay within this very specific task. It does sometimes go a little rogue, but so far that has been limited to the formatting of the response rather than major content issues.
The specifics
I am working with students to build their capacity to manage a project of their own design over time. The class is Computer Science and they are learning the Python programming language. It is uniquely difficult to teach programming right now because novice programming projects can easily be accomplished using Large Language Model Generative Artificial Intelligence (LLM GenAI). The only way to counter this is to leverage the student's curiosity and to incentivize progress, intention, and reflection. Because I am a Computer Science teacher, I have learned to build web-based applications that students can interact with, and I can use the design of this interaction to shape the learning experience.
The first thing I do is ask the student to describe their project. This is a VERY simple activity and I tell them it will likely change as they begin to do the work. They give the project a name and describe it in a few sentences. Then I ask them to create the first milestone. The concept of a milestone is often new to students. I have not yet figured out the best way to help students define milestones other than to say it is a goal that will take a few days to achieve. The first milestone tends to be “Getting started,” where they figure out what learning materials they are going to use and what a simple first “product” might be. Thinking about an intermediate stage of a larger project is difficult for students and is useful learning in itself.
Once the project is designed and the first milestone is created the student begins to document their daily intentions and to reflect on what they accomplished relative to that intention. This is where I use LLM GenAI. At the beginning of every class a student declares an intention which should describe work that fits within the milestone and it should be very specific and measurable. An example might be “Today I will watch the first lesson of the Godot Game Engine tutorial. I will install the software and get the environment set up to begin my project.”
I have connected the Google Gemini LLM GenAI to the website, and it checks the student's intention for specificity and detail. It gives a Red, Yellow, Green assessment and provides suggestions for improvement. The student MUST redo it if they get a Red and CAN redo it if they get a Yellow. Many students hate Yellow and choose to rework their intention to get Green. This is a two-hour class. At the end of the class the student completes a very tactical reflection. They begin by giving themselves a Red, Yellow, Green assessment of their own effort, then describe what they did, with the option to include images of anything that has been created. The LLM GenAI works as before but now also ensures that the reflection is relevant to the intention. Again, they MUST redo a Red and CAN redo a Yellow.
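The shape of this check is straightforward to sketch. The prompt wording, the rating labels, and the JSON field names below are my own illustration, not the app's exact implementation; the real app sends the constraint prompt plus the student's text to the Gemini API, and the app-side logic looks roughly like this:

```python
import json

# Illustrative sketch of a Red/Yellow/Green intention check.
# CHECK_PROMPT, the JSON shape, and the field names are assumptions
# for illustration, not the actual deployed prompt.

CHECK_PROMPT = (
    "You are reviewing a student's daily intention for a project-based "
    "Computer Science class. Judge ONLY specificity and measurability. "
    'Reply with JSON exactly like: {"rating": "RED" | "YELLOW" | "GREEN", '
    '"suggestions": ["..."]}. GREEN = specific and measurable, '
    "YELLOW = on-topic but vague, RED = neither. Do not discuss anything else."
)

def parse_feedback(raw: str) -> dict:
    """Parse the model's JSON reply; fall back to RED on malformed output."""
    try:
        data = json.loads(raw)
        if isinstance(data, dict) and data.get("rating") in {"RED", "YELLOW", "GREEN"}:
            return {"rating": data["rating"],
                    "suggestions": list(data.get("suggestions", []))}
    except json.JSONDecodeError:
        pass
    # Anything unparseable is treated as a failed check, never shown raw.
    return {"rating": "RED",
            "suggestions": ["The checker returned an unreadable response; please resubmit."]}

def must_redo(rating: str) -> bool:
    """A RED intention must be reworked; YELLOW may be; GREEN stands."""
    return rating == "RED"
```

The fallback matters: if the model goes rogue on formatting, the app degrades to a forced resubmit instead of passing broken output to the student.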
What this tells me about use in the classroom
Useful attributes of an LLM GenAI app
Immediate feedback to the student
Tight constraints on the LLM GenAI, applied as thoroughly as possible, to avoid the very serious negative impacts of LLM GenAI use
How to apply this in your own classroom
As of right now, without a significant amount of coding knowledge, I don't know how I would advise a teacher. If we use electricity as a metaphor, the LLM GenAI vendors are electrical engineers who would love us to engage with their products as light-switch users, when the reality is that at this point you can't ensure this tech will be useful to students unless you are at least an electrician.
As with all of EdTech, most of the apps that get dreamed up by EdTech developers are manifestations of what the developers think should happen in a classroom. They are usually wrong. If you think about which EdTech apps have been the most successful, they are things like Learning Management Systems (Google Classroom and Canvas), quiz tools (Kahoot, Quizlet), generic tools like Google Docs, Sheets, and Slides, and now LLM GenAI chatbots. All of these are designed for the user to bring the content AND the context. What I see a need for is a tool that combines the context management of Google Classroom with the content management of Kahoot, and that allows the teacher to embed workflow and constrain the LLM GenAI responses with prompts that focus on explicit aspects of student work, with the least potential of the LLM GenAI generating text outside the intended context.
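To make the idea concrete, here is a minimal sketch of what a teacher-embedded constraint could look like as data. This is hypothetical, not an existing product; the class name, field names, and wording are all my own invention:

```python
from dataclasses import dataclass

# Hypothetical sketch of a teacher-configurable constraint: the teacher
# supplies the task, the allowed scope, and the only accepted output
# format; the app wraps every student submission in it before calling
# the LLM GenAI. Names and wording are illustrative assumptions.

@dataclass
class TaskConstraint:
    task: str            # what the model is allowed to assess
    scope: str           # what it must not stray outside of
    output_format: str   # the only shape of reply the app will accept

    def render(self, student_text: str) -> str:
        """Build the full prompt sent to the LLM for one submission."""
        return (
            f"Task: {self.task}\n"
            f"Stay strictly within: {self.scope}\n"
            f"Reply ONLY in this format: {self.output_format}\n"
            f"Student submission:\n{student_text}"
        )

# Example configuration a teacher might write for the reflection check.
reflection_check = TaskConstraint(
    task="Check that a reflection matches the day's stated intention.",
    scope="Relevance to the intention; ignore grammar, style, and opinions.",
    output_format='{"rating": "RED"|"YELLOW"|"GREEN", "suggestions": [...]}',
)
```

The point of the structure is that the teacher brings the content and the context, and the app, not the student, decides what the LLM GenAI is asked to do.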
If you can’t find a way to confidently constrain the LLM GenAI outputs and a way for the student to interact directly with the tool to receive immediate and constrained feedback, I do not think LLM GenAI is a good tool to use in a classroom.
The stated motivation of many EdTech influencers is to move away from obedience-focused pedagogies, but they don’t seem to have the knowledge, capacity, or desire to build the services that would make that possible. Don’t trust EdTech influencers who are not currently (or at least recently) in the classroom. Don’t trust EdTech influencers who rail on about “The AGE of AI” and berate you for not doing what they think is important. In my experience, the pedagogical vision of any consultant is important, but ONLY as a measuring stick to judge the tactics that they recommend. At this moment, most EdTech influencers don’t actually recommend ANY tactics. Those people you can ignore completely.
The sad truth is that the LLM GenAI tool that is designed for generalized use and the greatest flexibility, and that is currently being pushed into students’ hands, is the worst possible implementation of this powerful technology. The LLM GenAI vendors have created products that have harmed more students than they have helped, and then they tell us that we need to create LLM GenAI Literacy courses to mitigate the damage they caused.
Why assigning students to use a chatbot is … sub-optimal
Everything that I say here is well documented elsewhere. I put it here because it is the backdrop of all of this. There is a lot of magical thinking about LLM GenAI; if we choose to use these tools in our classrooms, then we must understand this side of LLM GenAI.
Cognitive Offloading – This is well studied and broadly recognized as a serious harm that can come from using LLM GenAI. The most common expression of this harm is that LLM GenAI keeps a novice from becoming an expert. We hear EdTech influencers say all the time that ‘it should not be used as a crutch,’ and that is usually the end of their advice. When I push for examples, the most common one I get is to use LLM GenAI for brainstorming, which is a perfect example of using LLM GenAI as a crutch.
The even darker side is what is being called “AI Psychosis,” which is caused by the design of LLM GenAI services. This design intentionally addicts people to the service through sycophancy. Specifically, vendors use what we have learned from the science of Behavioral Economics to addict users to the tool. This has resulted in suicide and hospitalization. The recent court cases lost by Meta and Google/YouTube are clear examples of this: Meta and YouTube Found Negligent in Landmark Social Media Addiction Case.
Racial and Gender Bias – Again, this is very well understood and very well documented, but rarely taken seriously. Unconstrained responses from these tools will express a white supremacist world. Why? Because we live in a white supremacist world and these tools are trained on the text that falls out of this world. The constraints that we can put on these tools cannot remove their racial and gender bias, but they can limit the expression of that bias by limiting the context that the LLM GenAI is responding to.
The Age of AI and LLM GenAI Inevitability – Just like the text on the bottle of the new household cleaner will tell you how amazingly clean everything will be, #AiinEdu pushers will argue that LLM GenAI must be used in your classroom because kids are already using it, because “it is already here”, because this is “The Age of AI!” This was also true when we finally started to address teen pregnancy. This was also true when we finally started to address drug addiction and violence. And yes, this is also “the age of human reproduction”. We have been forced to add curricular goals around the invasion of LLM GenAI into our classrooms. We have been derided for not being aware of the dangers AND for not using it in our classrooms. This is NOT new to teachers.
Career Readiness – Here is a very simple fact: the student who refuses to use LLM GenAI will be a more capable employee than the one who uses it regularly. The idea that students need to be taught how to use LLM GenAI as a career-readiness strategy is laughable. Whatever LLM GenAI variant is still being used when students get a job will be very different from what we have now, and what we have now is very simple to use. Understanding how LLM GenAI can be harmful is FAR more important.
Personalization – This is in every “you suck if you don’t use it” screed put out by EdTech influencers. The easiest response is just to recommend Audrey Watters’ book, “Teaching Machines: The History of Personalized Learning.” Again, it is not new. Here’s a good summary.
LLM GenAI has harmed more students than it has helped – This is the one that makes me crazy. The EdTech influencer generally agrees that this is true and responds with “this is the age of AI,” or with magical thinking about how all that will change because …
How have they been harmed?
Killed in War – Israel has killed thousands of children in Palestine. The US just slaughtered over 100 children in Iran. This was done by using LLM GenAI to analyze social media and other communications channels to choose targets.
Kidnapped and Disappeared – ICE uses the same technology (the same company, Palantir) to target families to kidnap.
Health Care and Financial Services – LLM GenAI is used to deny healthcare and financial services which predominantly impacts people of color and women.
Predictive Policing – This is known to be error prone, especially image recognition but also LLM GenAI analysis of social media etc.
Suicide and mental health degradation caused by LLM GenAI algorithms embedded in social media.