
Recently, I had the pleasure of attending a brief lunchtime talk, “On the (Im)possibilities of Ethical AI in the Classroom,” hosted by the Geography Department and the Women’s Gender and Sexuality Studies (WGSS) program. Doing as the WGSS program does best, they managed to take what should be a straightforward and important discussion and give it an insufferable kick. As one of the professors announced at the beginning, the unsurprising twist of the discussion would be to “think through the debate in the context and ethos of WGSS practices and feminist, queer, and racial points of view, considering this topic.” Overall, the sentiment from the meeting was incredibly anti-AI, especially regarding its use within the classroom.
About half the time a comment was made, it was rather sensible; the other half, members of the group seemed to incoherently attempt to apply a sort of critical theory lens to the discussion. By the end, it seemed as though two discourses were happening simultaneously. To the credit of the group, we certainly made progress in our more straightforward conversation – that is, the one actually related to the title of the lecture.
A number of legitimate grievances were raised against generative AI. The issues raised included its unreliability due to its tendency to lie, hallucinate, or engage in confirmation bias. The increasing tendency to use AI for every question, even matters of health, is somewhat alarming. Furthermore, the College’s push for its Evergreen AI therapy chatbot found itself under harsh criticism.
While I blame the left for the increase in mental health issues since the 1960s (with the destruction of Western values, including a recognition of duty to God, family, and country), the leftists in the room raised a valid point: AI will not be a solution to their mental health crisis. Speaking to an AI chatbot about the problems in your life seems almost dystopian. We should address the mental health crisis at its root; in the meantime, therapy should not be facilitated by robots. None of this is to suggest that mental health issues are solely reserved for those on the left, but pattern recognition is not hard. For example, the average Christian girl with a loving family is likely going to need less therapy than the lady in the WGSS discussion who was wearing a KN-95 mask, a Palestinian keffiyeh under her hoodie, a “Read Black Lit” bag, and her stylish dice earrings. Both are religious, just in rather fundamentally different ways that tend to lead to different outlooks on life.
Another large part of our discussion concerned the detrimental effect of AI on learning and fairness within the classroom. This is a very legitimate concern. As one professor warned, students who use generative AI to produce writing “won’t have a chance to build critical thinking and cognitive development.” This is not to mention the fact that it is blatant cheating and unfair to those who refrain from doing so. As another group member pointed out, in using AI for research, “students go straight to an answer,” skipping out on “research skills” and “retention skills.” One professor even shared that she has caught students using AI to take notes on why they should refrain from using AI. The future of AI use in the classroom will rely heavily on the character of our students. However, one student in the group insisted that if professors want to disincentivize AI use, they need to structure their classes so that students never doubt whether the work genuinely contributes to their learning and development as people.
Unfortunately, the explicit dangers of AI and possible solutions to them were not as clear in our other discourse regarding “feminist, queer, and racial points of view.” The central question raised was how, or whether, AI can promote feminist, queer, and racial activism. Activism and the classroom are clearly closely related in the WGSS program at Dartmouth. Surprisingly, the professors don’t seem to be hiding the true agenda of the supposed “education” they teach.
According to one WGSS professor, generative AI is incompatible with the theory of her course, which seeks to understand “feminist,” “anti-colonial” basketweaving (or some similar topic). AI is apparently unable to properly understand and reflect on intersectional and marginalized perspectives. It is not clear how exactly AI could be used to undertake activism efforts; this was never clarified amid the word salads that sought to address the issue. Even if there were a clear way to use AI to perform activism, it was never explained what exactly about that activism would be impossible simply because AI is not itself a marginalized individual. Does this mean non-marginalized human beings cannot be effective in promoting change? None of this is clear, and coherent thoughts on the subject were lacking in the discussion. The problem with the types of individuals who often attend these meetings is that there is no substance to the majority of their claims; they merely speak in buzzwords. Most of what they do is virtue signal and seek recognition from the like-minded. For example, one professor spoke for about 3-5 minutes unmasked and then chose to put a mask on for the rest of the discussion. One can’t help but wonder what the point is of putting the mask on after speaking.
Apparently, not only is AI incapable of performing activism, but it actively harms activist efforts. The reason given was “environmental racism” and “digital redlining.” The professor who brought this up highlighted what she considered to be racism in the placement of AI data centers. As is typical, she never explained what racism is actually being committed, but I can only assume she is insinuating that data centers are being placed disproportionately near minority neighborhoods. Unfortunately, as she admitted, there is not yet peer-reviewed research covering the topic, so she lamented having to resort to making students read “sketchy blogs” on the subject. Since I was never told how the placement of AI data centers contributes to “environmental racism,” I am struggling to conceive of how exactly this might be. After all, in choosing the placement of data centers, I would imagine the priority would be to place them near bodies of water. This is not an inherently racist action, so far as I am aware.
I certainly left the discussion disappointed. I went into it with an open mind, willing and wanting to learn more about the ways in which we can apply a critical lens to AI. I wanted to learn more about AI and environmental racism, as well as the ways in which AI is compatible or incompatible with feminist, queer, and racial activism. Many seemed to draw pessimistic conclusions on these subjects, and now I feel even more lost than I was before the discussion began. I can assume that all the leftist claims made are true, but given that none of them were explained or defended clearly, I don’t know how I – as an activist myself – can help alleviate the problems raised. I look forward to attending the next WGSS lecture to learn more.