
AI in Education

In a recent development reflecting the challenges posed by the rise of artificial intelligence in education, teachers and administrators at Hartselle High School in Alabama have formed a committee to address the use of generative AI applications like ChatGPT in academic settings. The move comes amid growing concerns about the potential misuse of such technology by students.

This story highlights how AI, particularly language models capable of answering a wide array of questions, is affecting classrooms and academic integrity.

The committee was formed to gain a deeper understanding of AI's impact on education, following incidents in which students were found using ChatGPT and similar platforms to complete assignments.

The dilemma of AI in schools is not confined to local districts. Notably, New York City Public Schools and the Los Angeles Unified School District had initially banned ChatGPT, though New York City has since reversed its ban, and Los Angeles is revising its policy to find a middle ground.

Both the Hartselle and Morgan County school districts have shown a cautious tolerance toward text-based AI this year, but there is growing awareness of its potential to aid dishonest students. As a countermeasure, teachers are falling back on traditional methods of identifying AI-generated work, such as comparing in-class writing with assignments completed at home.

Challenges arise, however, in subjects like mathematics, where detecting AI assistance is more difficult, and there are limits to how closely schools can monitor students' online activity off campus.

Morgan County schools currently allow unrestricted use of AI tools like ChatGPT but have disciplinary measures in place for misuse, with repeat offenses potentially leading to severe academic penalties. Hartselle, by contrast, has restricted some AI applications following incidents last spring.

Despite these concerns, educators also recognize the potential benefits of AI. Teachers in both districts are exploring ways to use ChatGPT for enhancing learning experiences, such as in computer programming classes and for explaining complex concepts.

State-level discussions, led by figures like Rep. Terri Collins, chair of the House Education Policy Committee, are underway to establish guidelines for the positive and safe use of AI in schools. These initiatives aim to teach students how to use AI effectively and responsibly in their academic work.

But the larger risk, one that everyone in these districts has overlooked, affects how people will learn in the future. Had HP invented the handheld calculator in 1920 rather than 1972, the study of mathematics would have largely disappeared from school curricula by 1930.

Over-learning and Leaky Abstraction

Over-learning is a risk for GenAI itself (think of a self-reinforcing feedback loop that produces more output but less innovation), but also for the humans who use it and who no longer innovate in the areas where they can lean on GenAI.
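To make that feedback loop concrete, here is a minimal, purely illustrative sketch in Python (my own toy construction, not a description of any real model or district policy). A toy statistical "model" is refit each generation on samples of its own output; because each fit inherits the sampling noise of the previous one, the spread of what it produces tends to collapse: more output, less variety.

```python
import random
import statistics

# Toy illustration of over-learning: a "model" (here just a normal
# distribution) is retrained each generation on samples of its own output.
# The numbers are arbitrary; the point is the trend. Since each refit
# inherits the sampling noise of the last, the spread of the output
# tends to shrink over the generations.

random.seed(7)
mean, spread = 0.0, 1.0          # generation 0: the original "human" data
for generation in range(1, 31):
    samples = [random.gauss(mean, spread) for _ in range(10)]
    mean = statistics.fmean(samples)    # refit on the model's own output
    spread = statistics.stdev(samples)
    if generation % 5 == 0:
        print(f"generation {generation:2d}: spread = {spread:.3f}")
```

Run it a few times: the exact path varies, but the drift toward less variety is the norm. That is the over-learning loop in miniature.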

Leaky Abstraction is a more subtle threat, and the combination is deadly.

Leaky Abstraction, in this context, is the rapid growth of an educational capability that "takes repetitive shortcuts" through the curriculum and inevitably means its users never develop deep subject matter expertise. As GenAI, through ChatGPT and increasingly capable LLMs, continues to learn and expand its knowledge base, subject matter expertise will become less and less important, until at some near-term moment we will rely on ChatGPT or some other form of GenAI to solve problems, conduct fast research, answer all questions, and substitute for knowledge.

Just as the calculator today fills in for people who cannot do simple addition without one, GenAI will become the Uber-crutch for all learning across all subject matter, providing just-in-time answers whether the topic is history, literature, journalism, or statistics.

And we can all imagine where that future leads.

Author

Steve King

Managing Director, CyberEd

King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.
