Can AI Be Used as a Tool for Activism? 

A Promotion for Dartmouth’s Evergreen AI | Courtesy of Dartmouth College

Recently, I had the pleasure of attending a brief lunchtime talk, "On the (Im)possibilities of Ethical AI in the Classroom," hosted by the Geography Department and the Women's Gender and Sexuality Studies (WGSS) program. Doing as the WGSS program does best, they managed to take what should be a straightforward and important discussion and give it an insufferable kick. As one of the professors announced at the beginning, the unsurprising twist of the discussion would be to "think through the debate in the context and ethos of WGSS practices and feminist, queer, and racial points of view, considering this topic." Overall, the sentiment from the meeting was incredibly anti-AI, especially regarding its use within the classroom.

About half the time a comment was made, it was rather sensible; the other half of the time, members of the group would incoherently attempt to apply a sort of critical theory lens to the discussion. By the end, it seemed as though two discourses were happening simultaneously. To the credit of the group, we certainly made progress in our more straightforward conversation, that is, the one actually related to the title of the lecture.

A number of legitimate grievances were raised against generative AI, including its unreliability due to its tendency to lie, hallucinate, or engage in confirmation bias. The increasing tendency to use AI for every question, even matters of health, is somewhat alarming. Furthermore, the College's push for its Evergreen AI therapy chatbot came under harsh criticism.

While I blame the left for the increase in mental health issues since the 1960s (with the destruction of Western values, including a recognition of duty to God, family, and country), the leftists in the room raised a valid point: AI will not be a solution to their mental health crisis. Speaking to an AI chatbot about the problems in your life seems almost dystopian. We should address the mental health crisis at its root; in the meantime, therapy should not be facilitated by robots. None of this is to suggest that mental health issues are solely reserved for those on the left, but pattern recognition is not hard. For example, the average Christian girl with a loving family is likely going to need less therapy than the lady in the WGSS discussion who was wearing a KN-95 mask, a Palestinian keffiyeh in her hoodie, a "Read Black Lit" bag, and her stylish dice earrings. Both are religious, just in rather fundamentally different ways that tend to lead to different outlooks on life.

Another large part of our discussion concerned the detrimental effect of AI on learning and fairness within the classroom. This is a very legitimate concern. As one professor worried, if students use generative AI to produce writing, they "won't have a chance to build critical thinking and cognitive development." This is not to mention the fact that it is blatant cheating and unfair to those who refrain from doing so. As another group member pointed out, in using AI for research, "students go straight to an answer," skipping out on "research skills" and "retention skills." One professor even shared that she has caught students using AI to take notes on why they should refrain from using AI. The future of AI use in the classroom will rely heavily on the character of our students. However, one student in the group insisted that if classes are to disincentivize AI use, they need to be structured in a way that leaves students no doubt about what genuinely contributes to their learning and development as a person.

Unfortunately, the explicit dangers of AI, and possible solutions to them, were not as clear in our other discourse regarding "feminist, queer, and racial points of view." The central question raised was how, or whether, AI can promote feminist, queer, and racial activism. Activism and the classroom are clearly closely related in the WGSS program at Dartmouth. Surprisingly, the professors don't seem to be hiding the true agenda of the supposed "education" they teach.

According to one WGSS professor, generative AI is incompatible with the theory of her course, which seeks to understand "feminist," "anti-colonial" basketweaving (or some similar topic). AI is apparently unable to properly understand and reflect on intersectional and marginalized perspectives. It is not clear how exactly AI could be used to undertake activism efforts; this was never clarified among the word salads that sought to address the issue. Even if there were a clear way to use AI to perform activism, it was never explained what exactly about that activism would be impossible simply because AI is not itself a marginalized individual. Does this mean non-marginalized human beings cannot be effective in promoting change? None of this is clear, and actual coherent thoughts on the subject were lacking in the discussion. The problem with the types of individuals who often attend these meetings is that there is no substance to the majority of the claims they make; they merely speak in buzzwords. Most of what they do is virtue signal and seek recognition from the like-minded. For example, there was a professor who spoke for about three to five minutes unmasked and then chose to put a mask on for the rest of the discussion. One can't help but wonder what the point is of putting the mask on after speaking.

Apparently, not only is AI incompatible with performing activism, but it actively harms activist efforts. The reasons given were "environmental racism" and "digital redlining." The professor who brought this up highlighted what she considered to be racism in the placement of AI data centers. As is typical, she never explained what racism is being committed, but I can only assume she is insinuating that data centers are being placed disproportionately near minority neighborhoods. Unfortunately, as she admitted, there is not yet peer-reviewed research covering the topic, and so she lamented having to resort to making students read "sketchy blogs" on the subject. Since I was never informed how the placement of AI data centers contributes to "environmental racism," I am struggling to conceive of how exactly this might be. After all, in choosing the placement of data centers, I would imagine the priority would be to place them near bodies of water. That is not an inherently racist action so far as I am aware.

I certainly left the discussion disappointed. I went into it with an open mind, willing and wanting to learn more about the ways in which we can apply a critical lens to AI. I wanted to learn more about AI and environmental racism, as well as the ways in which AI is compatible or incompatible with feminist, queer, and racial activism. Many seemed to draw pessimistic conclusions regarding these subjects, and now I feel even more lost than I was before the discussion began. I can assume that all the leftist claims made are true, but given that none of them were explained or defended clearly, I don't know how I, as an activist myself, can help alleviate the problems raised. I look forward to attending the next WGSS lecture to learn more.
