When it comes to developing policies on AI in K-12, schools are largely on their own
8:32 AM on Thursday, January 22
By Janice Mak
(The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.)
Janice Mak, Arizona State University
(THE CONVERSATION) Generative artificial intelligence technology is rapidly reshaping education in unprecedented ways. Faced with its potential benefits and risks, K-12 schools are actively trying to adapt teaching and learning.
But as schools seek to navigate into the age of generative AI, there’s a challenge: Schools are operating in a policy vacuum. While a number of states offer guidance on AI, only a couple of states require local schools to form specific policies, even as teachers, students and school leaders continue to use generative AI in countless new ways. As a policymaker noted in a survey, “You have policy and what’s actually happening in the classrooms – those are two very different things.”
As part of my lab’s research on AI and education policy, I conducted a survey in late 2025 with members of the National Association of State Boards of Education, the only nonprofit dedicated solely to helping state boards advance equity and excellence in public education. The survey of the association’s members reflects how education policy is typically formed through dynamic interactions across national, state and local levels, rather than being dictated by a single source.
But even in the absence of hard-and-fast rules and guardrails on how AI can be used in schools, education policymakers identified a number of ethical concerns raised by the technology’s spread, including student safety, data privacy and negative impacts on student learning.
They also expressed concerns over industry influence and the prospect that schools will later be charged by technology providers for large language model-based tools that are currently free. Others reported that administrators in their state are very concerned about deepfakes: “What happens when a student deepfakes my voice and sends it out to cancel school or bomb threat?”
At the same time, policymakers said teaching students to use AI technology to their benefit remains a priority.
Local actions dominate
Although chatbots have been widely available for more than three years, the survey revealed that states are in the early stages of addressing generative AI, with most yet to implement official policies. While many states are providing guidance or tool kits, or are starting to write state-level policies, local decisions dominate the landscape, with each school district primarily responsible for shaping its own plans.
When asked whether their state has implemented any generative AI policies, respondents said there was a high degree of local influence regardless of whether a state issued guidance or not. “We are a ‘local control’ state, so some school districts have banned (generative AI),” wrote one respondent. “Our (state) department of education has an AI tool kit, but policies are all local,” wrote another. One shared that their state has a “basic requirement that districts adopt a local policy about AI.”
Like other education policies, generative AI adoption occurs within the existing state education governance structures, with authority and accountability balanced between state and local levels. As with previous waves of technology in K-12 schools, local decision-making plays a critical role.
Yet there is generally a lack of evidence related to how AI will affect learners and teachers, which will take years to become more clear. That lag adds to the challenges in formulating policies.
States as a lighthouse
However, state policy can provide vital guidance by prioritizing ethics, equity and safety, and by being adaptable to changing needs. A coherent state policy can also answer key questions, such as acceptable student use of AI, and ensure more consistent standards of practice. Without such direction, districts are left to their own devices to identify appropriate, effective uses and construct guardrails.
As it stands, AI usage and policy development are uneven, depending on how well resourced a school is. Data from a RAND-led panel of educators showed that teachers and principals in higher-poverty schools are about half as likely to have been provided AI guidance. The poorest schools are also less likely to use AI tools.
When asked about foundational generative AI policies in education, policymakers focused on privacy, safety and equity. One respondent, for example, said school districts should have the same access to funding and training, including for administrators.
And rather than having the technology imposed on schools and families, many argued for grounding the discussion in human values and broad participation. As one policymaker noted, “What is the role that families play in all this? This is something that is constantly missing from the conversation and something to uplift. As we know, parents are our kids’ first teachers.”
Introducing new technology
According to a Feb. 24, 2025, Gallup Poll, 60% of teachers report using AI for their work in a range of ways. Our survey also found there is “shadow use of AI,” as one policymaker put it, where employees implement generative AI without explicit school or district IT or security approval.
Some states, such as Indiana, offer schools the opportunity to apply for a one-time competitive grant to fund a pilot of an AI-powered platform of their choosing as long as the product vendors are approved by the state. Grant proposals that focus on supporting students or professional development for educators receive priority.
In other states, schools opt in to pilot tests that are funded by nonprofits. For example, an eighth grade language arts teacher in California participated in a pilot where she used AI-powered tools to generate feedback on her students’ writing. “Teaching 150 kids a day and providing meaningful feedback for every student is not possible; I would try anything to lessen grading and give me back my time to spend with kids,” she said. “This is why I became a teacher: to spend time with the kids.” This teacher also noted the tools showed bias when analyzing the work of her students learning English, which gave her the opportunity to discuss algorithmic bias in these tools.
One initiative from the Netherlands offers a different approach than finding ways to implement products developed by technology companies. Instead, schools take the lead with questions or challenges they are facing and turn to industry to develop solutions informed by research.
Core principles
One theme that emerged from survey respondents is the need to emphasize ethical principles in providing guidance on how to use AI technology in teaching and learning. This could begin with ensuring that students and teachers learn about the limitations and opportunities of generative AI, understand when and how to leverage these tools effectively, critically evaluate their output, and ethically disclose their use.
Often, policymakers struggle to know where to begin in formulating policies. Analyzing tensions and decision-making in organizational context – or what my colleagues and I called dilemma analysis in a recent report – is an approach schools, districts and states can take to navigate the myriad ethical and societal impacts of generative AI.
Despite the confusion around AI and a fragmented policy landscape, policymakers said they recognize it is incumbent upon each school, district and state to engage their communities and families to co-create a path forward.
As one policymaker put it: “Knowing the horse has already left the barn (and that AI use) is already prevalent among students and faculty … (on) AI-human collaboration vs. outright ban, where on the spectrum do you want to be?”
This article is republished from The Conversation under a Creative Commons license. Read the original article here: https://theconversation.com/when-it-comes-to-developing-policies-on-ai-in-k-12-schools-are-largely-on-their-own-268272.