
5 things a trans scientist wants you to know about AI
newsisout.com
Transgender people were no strangers to the heyday of the internet, experimenting with AI text-to-image generation primarily during 2022 and 2023. Trans writers shared about the potential promise of the technology for producing affirming portraits. The same models, however, researchers found, also produced stereotypes and over-sexualization in their depictions of transgender people.

Since then, AI use has exploded, as have examples of how it fails the LGBTQ+ community. Generative AI has repeatedly been found to be biased against LGBTQ folks and other marginalized groups. Research has shown that AI surveillance systems pose a unique threat to the transgender community. In fact, anti-trans and gender-critical groups have already used AI with the intent of excluding and marginalizing the transgender community.

There are very few trans people working in and studying AI, leaving the community with limited input and guidance around the technology. To better understand navigating the technology as trans and queer, the Blade sat down with Eddie Ungless, a queer and trans researcher studying queerphobia and transphobia in AI models.

Here are five things Ungless emphasized that the LGBTQ community should consider when navigating the technology.

1: AI isn't necessarily intelligent

When most people think about AI these days, they think about ChatGPT or DALL-E. These are essentially, Ungless explains, "a system trained on a very large amount of data to identify subtle patterns," and through that it is able to mimic very particular aspects of human intelligence, like the written word.

Mimicking particular aspects of human intelligence does not translate to actual or widespread intelligence. "AI is quite good at pretending to be good at language," Ungless says, and that has led to a sense that these systems have a human-like intelligence in other regards.
For example, ChatGPT may be able to write human-sounding paragraphs, but the content may not be accurate.

Large language models like ChatGPT are just a sliver of AI technology. "It's almost more like a marketing term," Ungless says, referencing a larger line of thought in the AI community. Looking back five years, what was considered to be artificial intelligence was social media recommendation algorithms and moderation algorithms, Ungless explains; "those aren't things people think of as strongly as AI nowadays."

2: The trouble is in the data

"There are certain kinds of bias that are now salient to the AI development community," says Ungless. The problem is that AIs are often being trained on enormous amounts of unfiltered data. Ungless explains, "we are training these systems on so much data that it is impossible to be confident about what is contained in that data." This is especially true when the data comes from the internet and the content is likely to be disproportionately inaccurate or hateful.

Developers will take stop-gap measures to try to tame the data, like removing common slurs or testing the final product on common identity terms. "What they're trying to avoid is a scandal," says Ungless.

But, Ungless says, even if you attempt to remove large amounts of stereotyping, sexualizing, fetishizing, or offensive data, it's still going to creep in and affect your final product.

Ungless said that it doesn't have to be the case. They just have to build models differently. Developers are retroactively trying to undo damage that was done by [the model] being trained on the contents of the internet. Ungless asks, "why don't we feed our models differently?"

3: Smaller is often better

Developers try to feed AI models as much data as possible, with the thought that more information equals more intelligence.
This isn't necessarily the case, says Ungless, especially if the data is coming from the internet.

Ungless imagines AI trained on smaller, more accurate data sets that have had more human input in their creation. For example, scientists have used AI technology on specialized data sets to improve breast cancer detection. Scientists had to be much more diligent about the data going into their models.

Part of curating accurate data sets for AI tools in the future, Ungless argues, should be in consultation with marginalized communities affected by the tools. Instead of trying to fix a tool retroactively, including community input would ensure a better product from the beginning.

Or, at least, developers could look into what information is already out there. "A good first step would be asking the people who build these systems to reflect more on the normative decisions they make whilst developing them," he says. "A lot of decisions get made without due consideration for the impact [or] existing social science research."

Plus, there are other benefits to smaller AI models: research shows they are greener. Earlier in 2025, Chinese AI DeepSeek made headlines for being trained on less data and using less energy.

4: Use the tools thoughtfully

Large language models trained on large swathes of the internet are embedded in many people's everyday lives at this point. Ungless recommends being thoughtful about what you want to get out of AI. Ungless said it can be a useful technology. "I think that it can have convenient uses. I think it can have creative uses," they say, especially for tasks that need automation or are tricky solo.

When investigating what the LGBTQ community uses AI for, Ungless has found people use it for things like writing scripts to help explain coming out or experimenting with gender presentation.
He acknowledges that this can be useful in tailoring resources, but also encourages folks to check out community-created resources, since they may be more helpful.

Beyond the limits to AI's accuracy, Ungless points out that these systems are, to an extent, averaging machines. So when you ask one to produce a letter for a loved one, it's not going to be personalized.

They encourage everybody to engage more mindfully with AI and ask questions like: Does this task really need to be automated using AI? Could a person do it better? Could a more [low-tech] solution do it well?

5: Protect your data

Since AI's data often comes from scraping the internet, the personal data and intellectual property of anyone, not just the LGBTQ community, should be taken into consideration.

Ungless has found that this is especially a concern for LGBTQ creatives. He encourages creators to learn more about how their work may have been used in AI training, pointing users to websites like Have I Been Trained? Creators making visual content can also apply a filter, Glaze, that protects their work from being used in training sets.

Ultimately, many of these questions will come down to policy. Ungless encourages everyone to engage with the future of AI policy. They urge: "As AI gets normalized, I think all should be engaged with ensuring policy makers and regulation keeps an eye on what AI companies are doing."

(This story is part of the Digital Equity Local Voices Fellowship lab through News is Out. The lab initiative is made possible with support from Comcast NBCUniversal.)