Major Social Media Platforms Accused of Neglecting LGBTQ+ User Safety
A recent GLAAD report raises alarming concerns about the lack of protection for LGBTQ+ users on major social media platforms.
Key Findings in the GLAAD Report
The Social Media Safety Index report documents the platforms' inability or unwillingness to either adopt policies that safeguard users or enforce those policies effectively. Among the major issues the report assessed were:
– The failure to prohibit online hate speech against LGBTQ+ people,
– The platforms’ shortcomings in halting the spread of harmful stereotypes and misinformation about the LGBTQ+ community.
Platform Ratings
The report, now in its fourth year, evaluated six dominant social media platforms: Meta's Facebook, Instagram, and Threads, along with TikTok, YouTube, and X (formerly Twitter). The platforms were scored against 12 criteria, including:
– The establishment of explicit policies to protect trans, nonbinary, and gender-nonconforming users from deadnaming and misgendering,
– The option for users to add their pronouns to their profiles,
– Protection for legitimate LGBTQ+ related advertisements,
– The platforms’ records in tracking and disclosing violations of their LGBTQ+ inclusivity policies.
Overall, the platforms were found to be significantly lacking, allowing damaging speech to flourish even as they reap substantial advertising profits.
Though each platform received a failing overall grade, TikTok earned a slightly better score of D+, owing to its recent adoption of a policy prohibiting advertisers from targeting users based on sexual orientation or gender identity.
Spread of Misinformation
Despite having written policies ostensibly designed to protect LGBTQ+ users, the report emphasizes, these platforms do little in practice to stop the spread of harmful and inaccurate information.
X in particular has seen a surge in misinformation about the LGBTQ+ community promoted by anti-LGBTQ+ influencers, including content associating LGBTQ+ people with criminality or predatory behavior. Bomb threats against schools, gyms, and children’s hospitals singled out by these accounts have risen significantly.
Promotion of Anti-LGBTQ+ Sentiment
X’s owner, Elon Musk, has himself endorsed anti-trans content, including applauding restrictions on trans women participating in sports. There is tangible evidence that this kind of online targeting intensifies offline violence against the LGBTQ+ community. Meanwhile, X earned only $2.5 billion in advertising revenue in 2023, while Meta generated $134 billion last year even as it permitted content that mistreats trans people.
Targeting Legitimate LGBTQ+ Content
The report also describes how social media networks have wrongly restricted legitimate LGBTQ+ content, making their platforms less safe and accessible for the community. In one instance, photos of gay parents with their newborn were flagged as “sensitive content,” further illustrating this bias.
Artificial Intelligence Impact
The growing use of artificial intelligence tools for content moderation could amplify this targeting. A Wired investigation in April exposed bias in AI systems such as OpenAI’s Sora in how they portray queer people.
Platforms such as Facebook have at times relied solely on automated systems to review content, with no human oversight, a practice that is especially concerning for the safety of LGBTQ+ users.
Automated Gender Recognition Alarms Privacy Advocates
More worryingly, the report notes that major tech companies are developing “automated gender recognition” technology to predict a person’s gender. Although intended to improve ad targeting, privacy advocates warn that such technology could also be used to categorize and monitor people, invading their privacy.
Recommendations
The report calls for stronger, better-enforced policies to protect LGBTQ+ users, including preventing advertisers from targeting LGBTQ+ users and improving content moderation rather than relying entirely on automation. Still, in the absence of any binding restrictions on AI and social media platforms in the United States, these companies remain largely self-governed.
Conclusion
The report exposes an urgent need for reform in how our digital spaces operate, particularly when it comes to protecting the rights and safety of marginalized communities such as LGBTQ+ people. The lack of oversight and regulation of these platforms imposes new risks on users that must be addressed promptly.