ECNETNews reports that Elon Musk’s recent changes to user blocking on the platform formerly known as Twitter will allow blocked accounts to view a user’s public profile and posts, while still preventing them from interacting with that content. The shift reflects Musk’s ongoing criticism of traditional blocking and moderation practices online, which he believes are ineffective against users intent on harassing others.
Musk has argued that the existing blocking measures are insufficient, as malicious users can easily circumvent them using alternative accounts or private browsing. This new approach to blocking is raising concerns among many users who feel it undermines their ability to protect themselves from online harassment, particularly during sensitive situations involving doxxing and threats.
New Blocking Guidelines Raise Concerns Over Safety
Since Musk announced the blocking changes, users have voiced their outrage, arguing that the alteration reflects a fundamental misunderstanding of what blocking is for. Many contend that the block button serves as a vital first line of defense against online abuse, particularly for those who maintain public accounts for professional purposes. Critics see the new policy as effectively pressuring users to switch to private profiles in order to preserve their online boundaries.
Critics emphasize that these changes could particularly jeopardize the safety of marginalized groups. Recent reports, including GLAAD’s 2024 Social Media Safety Index, highlight that the platform is deemed the most unsafe for LGBTQ users, with calls from advocacy organizations for at-risk individuals to utilize blocking features freely.
The implications of these changes extend to younger users, who may already be navigating heightened risks of online abuse. Research from Thorn, a nonprofit focused on combating child sexual abuse, reveals that teens are highly reliant on online safety tools, utilizing features like blocking as essential resources to fend off unwanted contact and harassment. The report indicates that many youth prefer employing digital safety tools rather than seeking support from their immediate in-person networks.
Concerns also arise regarding the potential increase in exposure to harmful content, particularly with the ongoing complications in content moderation and the rise of AI-generated material on social media. With platforms facing growing scrutiny over their commitment to user safety and mental well-being, it remains unclear how the altered blocking features will affect user experiences.
While some social media companies are expanding their safety options, including customizable features that promote online security, the platform’s changes will likely provoke further debate about user safety and content moderation strategies in the digital landscape.