Thoughts on AI in Everyday Life and the Classroom: Opportunities, Challenges, and Ethical Considerations

We may not think about artificial intelligence (AI) every day, but it’s quietly woven into the fabric of our daily personal and professional lives. Whether we’re searching on Google, checking our emails, booking a doctor’s appointment, navigating with GPS, or receiving personalized movie and music recommendations, AI is there, working behind the scenes. As Akgun and Greenhow (2022) point out, we’ve been using AI for years, often without even realizing it.

In recent years, AI has rapidly evolved and expanded its influence, especially in education. Lee Davis, Associate Director of Software and Scanners at Keypoint Intelligence, defines AI as a branch of computer science focused on developing machines capable of performing tasks that typically require human intelligence. These tasks include language comprehension, pattern recognition, problem-solving, and adapting to new challenges. According to Davis (2024), AI can be thought of as a set of instructions that allow computers to mimic human cognitive abilities—thinking, observing, and learning—through complex algorithms and mathematical models.

Generative AI and Its Role in Education

One of the most talked-about developments in AI is “Generative AI,” which includes tools like ChatGPT and Microsoft Copilot. As Sætra (2023) explains, generative AI refers to machine learning platforms trained on vast datasets to generate content based on user input. This content can take many forms—text, images, videos, audio, and even code—and is already being used across various professions, from law and programming to marketing and HR.

However, Sætra also warns of the challenges generative AI brings. These include shifts in workplace dynamics, job displacement, digital divides, and even risks to democracy through the mass production of persuasive or misleading content. There are also concerns about over-reliance on AI, which could lead to cognitive atrophy, and the ethical implications of AI replacing human relationships or reinforcing societal biases.

Educational Technology Theories and AI Integration

As educators, it’s important to ground our use of AI in thoughtful practice rooted in critical approaches and appropriate pedagogical frameworks. Two frameworks that I have used for this exploration are David Moursund’s work on technology in project-based learning (PBL) and Mike Ribble’s concept of digital citizenship.

Moursund (1999) emphasizes the importance of moving beyond “first-order” uses of technology, which simply do old tasks in new ways, and toward “second-order” uses that foster creativity and innovation, such as using computers to create original videos or animations rather than merely digitizing worksheets.

Ribble (2017) defines digital citizenship as “the norms of appropriate, responsible behaviour concerning technology use.” He stresses the need to teach students about the risks of technology, including privacy concerns and ethical use. His work is especially relevant in subjects like English Language Arts and Social Studies, where critical thinking and ethical reflection are key.

While I don’t follow these frameworks to the letter, they’ve provided valuable guidance as we consider how to integrate AI into our classrooms responsibly. Thinking along these lines allows tools like AI to be implemented in ways that leverage their capabilities while still inviting critical evaluation of their implications. I do not believe there is a single one-size-fits-all approach; rather, these are considerations that help educate, inform, and modernize classroom practices.

Ethical Considerations in AI Use

Our discussions as educators have surfaced several ethical concerns around AI in schools. Akgun and Greenhow (2022) highlight issues such as privacy, surveillance, autonomy, bias, and discrimination. AI systems often collect vast amounts of data, and while users may consent to this, they often do so without fully understanding the implications. This can compromise both privacy and agency.

Moreover, AI algorithms can reflect and reinforce societal biases, even without malicious intent. The predictive power of these systems can also influence student behaviour and decision-making, potentially limiting autonomy and perpetuating inequality.

Building on these concerns, Adams, Pente, Lemermeyer, and Rockwell (2023) argue for AI use that is pedagogically appropriate and respectful of children’s rights. They advocate for explanations of AI that are accessible to children, ensuring informed participation and data security. They also caution against overuse of robotic tutors, which could hinder social and emotional development.

One notable gap in current discussions is the “right to be forgotten” (RTBF)—the ability for individuals, especially minors, to request the deletion of their data. The authors stress that ethics policies in K–12 education must consider children’s developmental stages, cultural backgrounds, and evolving identities.

Moving Forward with Awareness and Intention

As we continue to explore AI in education, it’s clear that we must proceed with both curiosity and caution. The potential benefits are immense—personalized learning, creative expression, and enhanced engagement—but so are the risks.

Fortunately, resources are emerging to support educators. Tools like MIT’s An Ethics of Artificial Intelligence Curriculum for Middle School Students and IBM’s Educator’s AI Classroom Kit offer practical guidance for those just beginning their AI journey.

Ultimately, the goal isn’t to avoid AI, but to use it wisely—empowering students while protecting their rights and fostering ethical, informed digital citizens.

References

Adams, C., Pente, P., Lemermeyer, G., & Rockwell, G. (2023). Ethical principles for artificial intelligence in K–12 education. Computers and Education: Artificial Intelligence, 4, 100131.

Akgun, S., & Greenhow, C. (2022). Artificial intelligence in education: Addressing ethical challenges in K–12 settings. AI and Ethics, 2(3), 431–440.

Davis, L. (2024, September 30). 10 AI tools in 2024. Forbes. https://www.forbes.com/advisor/business/ai-tools/

Moursund, D. G. (1999). Project-based learning using information technology. International Society for Technology in Education.

Ribble, M. (2017). Digital citizenship in schools: Nine elements all students should know. International Society for Technology in Education.

Sætra, H. S. (2023). Generative AI: Here to stay, but for good? Technology in Society, 75, 102372.