Advocacy groups warn of a more dangerous Twitter after Musk dissolves Trust and Safety Council

In the weeks since Musk bought the platform, he has systematically dismantled much of the company’s existing content moderation apparatus in the name of free speech and his objections to what he has called the “woke mind virus.” Dissolving the council, which was strictly advisory, cuts off a channel of outside input on how Twitter should be run.

Yael Eisenstat, vice president of the Center for Technology and Society at the Anti-Defamation League, said, “The elimination of the council is disappointing because its members have valuable insights into how to make the platform safer for all users.”

“Twitter will miss the opportunity to learn from a diverse group of experts, from free speech advocates to people who are fiercely protective of children’s safety and people’s privacy,” added Larry Magid, CEO of the online safety nonprofit ConnectSafely and a member of the disbanded council.

GLAAD, an LGBTQ advocacy group, said that without the council’s advice on best practices for countering hate speech, “the platform remains clearly dangerous not only for LGBTQ users, but also for brands and advertisers.”

Musk had already fired nearly all of the company’s human content moderators, personally attacked Yoel Roth, Twitter’s former head of trust and safety, and amplified reporting by writers Matt Taibbi and Bari Weiss promoting the so-called Twitter Files.

Created in 2016, the council had grown from its original 40 members to nearly 100 groups worldwide. Members met periodically to advise Twitter on product launches and new initiatives the company was working on. Even before its official dissolution, three prominent council members resigned last week, saying that “the security and well-being of Twitter users has been compromised.”

In October, Musk announced that he would create a content moderation council and met with civil rights groups to address an apparent increase in hate speech on the platform. But over the next month, Musk quickly charted a different course, instead reinstating former President Donald Trump and Rep. Marjorie Taylor Greene (R-Ga.), ending Twitter’s enforcement of its Covid-19 misinformation policy, and releasing internal documents purporting to show that “activist” employees had suppressed Trump.

Most recently, Musk personally attacked Roth, triggering a wave of threats against Roth’s personal safety that forced him and his family to flee their home. Musk also claimed that the advisory council and Twitter had failed for years to remove child sexual abuse content, a claim that former Twitter CEO and co-founder Jack Dorsey rejected.

Musk’s salvos drew criticism from politicians, including Rep. Ted Lieu (D-Calif.), who tweeted that the contents of the “Twitter Files” were “dumb” and that claims Trump was silenced in 2020 were “pure gaslighting.” Sen. Mark Kelly (D-Ariz.) also pushed back on Musk’s tweets about Anthony Fauci, the nation’s top infectious disease expert, asking Musk to “stop mocking and promoting hatred against already marginalized and at-risk members of the #LGBTQ+ community.”

Rep. Ritchie Torres (D-N.Y.) said Musk’s criticism of Fauci showed that “Elon is not a champion of free speech.”

Still, no sitting member of Congress has yet abandoned the platform, as there is no equivalent social media site for sharing their message and quickly reaching reporters.

Advocacy groups continue to warn that Musk’s own behavior, and his commitment to free speech absolutism, could cause real-world damage.

The Center for Democracy and Technology, a digital rights nonprofit that receives funding from tech companies, was a council member and pushed back on Musk’s public statements about hate speech on the platform. “We are also appalled by the irresponsible actions of Twitter leadership in spreading false information about the council, which has put council members at risk and undermined any trust in the company,” the group said in a statement.

Without the council and much of its human content moderation staff, Musk’s Twitter could come to rely on artificial intelligence and algorithms to detect and remove hate speech and extremist content, experts say. Former council members say that won’t be enough.

“It’s unclear what the platform will look like, but I’m not optimistic that AI alone can police speech without people who can judge the nuances of whether certain types of content violate the terms of service,” Magid said.

