Artificial Intelligence for Artificial Lives. Real Wisdom for Real Lives
by David Geelan | 13 December 2024
Recently I attended a professional learning session at my Catholic university, entitled “AI: Promise or Threat?” The presentations were engaging and interesting, and they got me thinking.
If you’re involved in education, you’ll know that the advent of ChatGPT in late 2022 and the avalanche of chatbots and AI tools that have followed it have hit like a hurricane. I spend quite a lot of my time as a Head of School chasing up academic integrity violations where students have handed in work that is wholly or partly produced by chatbots rather than by the students themselves.
Let me add here that I’m that pedantic guy who feels constrained to mention that what we have now is not intelligence, artificial or otherwise. What we have are “large language models” (LLMs) that use machine learning on vast bodies of data to pull together plausible-seeming answers to questions. The problems with them are that (a) they are trained on information collated from human input, and humans are often confused, ignorant, or sometimes maliciously misleading, and (b) the LLMs have no stake in the answer being true: they simply provide what they have, and if they don’t have it, they’ll make it up. Hence the frequency of “hallucinated” references to the research literature in student assignments – and academic papers – where the author has let the LLM do the work and then failed to check in with the real world.
Anyway, the event I attended was focused in particular on the Core Curriculum classes that occur at our university and at most Catholic universities. These are focused on ethics, philosophy, and theology – on engaging with the big questions of life: why we’re here, how we got here, where we’re going, how we should treat one another along the way.
The focus was on why students choose to submit work done by AI in these classes rather than actively engage with the readings, the lecture content, and the big questions themselves.
As a bit of a side point, while students do submit AI-penned work in the Core Curriculum courses, they do so at probably about the same frequency in the teacher education and science education courses that I teach in the School I lead. It seems to be a widespread issue.
It’s tempting to make it into an adversarial situation: to assume that all students, all the time, are trying to cheat, and all teachers, all the time, are trying to catch and punish them. But that’s far too simple a picture, and I think it does a disservice to both students and teachers. I think the great majority of teachers are passionate about what they are teaching and are striving to share that with students and build the students’ capacity and identity, and that the great majority of students are genuinely seeking to learn and grow – with some caveats I’ll discuss now.
Whose fault?
First, I think the technology giants – Google, Microsoft, Apple, and their ilk – bear a lot of the burden. Often students “cheat” unwittingly because the tech giant that made their software sneaked in a new AI-driven “feature”, and it played out in their assignment in a way that got the work flagged for attention without the student even noticing it had happened.
University policies are also struggling to keep up with the rapid rate of change, and it may be that in one class, using a tool such as Grammarly or Quillbot to check spelling and grammar is allowed, while in another it’s forbidden. I know there are bits and pieces of AI trying hard to intrude into this piece even as I’m writing it!
Second, I think many of our students are desperate. The material conditions of life have changed. When many of us went to university, that was our main job, and we had the leisure and time to engage in long discussions of the big questions of life – as well as to do things like play tabletop Dungeons and Dragons games and get up to various forms of mischief. Many of us either lived with or were supported by our parents, or were able to earn enough in summer jobs to study full time the rest of the year.
All that has changed for many of our students – they’re trying to work full time to support themselves while also studying full time. Wages have stayed flat while tuition and the cost of living have gone up, and many no longer have that family support (or the parents are themselves cash-strapped and desperate), so paid work is a necessity. In that kind of environment, getting a little help from the computer fairy can seem like a lifeline.
The presenters yesterday took it a step further, though: do the students really value the things they are learning, and see the assessment tasks as meaningful learning activities to help them develop and flourish as human beings? Or is it simply one more task to complete for the instrumental goal of getting the degree to get the job to get the wage to get the house to just survive? If it’s the latter, that’s not the fault of the students: we brought them up, we created the system, we trained them in it. High school became all about getting the marks to get the university place to get onto the beginning of that train. Primary/elementary school became about preparing for high school.
Hurdles vs. opportunities
The speakers drew on a 1961 article in Harper’s (yes, this crisis is older than I am, and long pre-dates even personal computers, let alone AI) by Michael Novak, entitled “God in the Colleges.” In it he argues that an instrumentalist, “technocratic” impulse pervades higher education, where learning is seen as a process of gaining a qualification rather than as a collaborative, collective process of human growth and development for human flourishing. Courses are seen as hurdles to be jumped rather than opportunities for growth, and the focus is transactional – what can others do for me? and (sometimes) what can I do for others? – rather than relational, with both God and our fellow human beings.
Novak’s conclusion was that “God will return to the colleges when [humanity] returns.” Students will engage in the search for meaning when the search is not forced into silence and blindness by the quest for technocratic success.
We try to respond to our students by emphasizing “academic integrity”: the idea that cheating is dishonest and morally wrong. It is, but I’m not sure that’s the most powerful approach. Perhaps we need to rethink the way we “do education,” and help students to see its real power for promoting human flourishing.
If you went to the gym every day but were never sore afterward, you would likely not be gaining muscle: you wouldn’t be working hard enough to let your body know it needed to change and grow. In the same way, if you study and your brain doesn’t hurt sometimes, you may not be learning enough, because learning challenges preconceptions and mental patterns and structures, and takes effort.
But if you went to the gym every day and just watched a machine lift weights, you’d gain even less. If you go to university and have a machine do your assignments, there might be some “machine learning,” but there will be precious little “you learning.”
Revitalising the value and importance of learning for human flourishing and full human development, in the context of learning together with others, seems to me a better response to AI cheating (as we also work toward making a fairer society where machines do the menial labor, freeing human minds for the things only humans can do) than a punitive approach.
I’ll leave the mental exercise of how these concepts may apply to “doing church” for others to think through, though perhaps I can paraphrase Novak: “God will return to the churches when humanity returns.”
Dr. David Geelan is Sue’s husband and Cassie and Alexandra’s dad. He started out at Avondale College, and is currently Professor and National Head of the School of Education within the Faculty of Education, Philosophy and Theology at the University of Notre Dame in Sydney, Australia.