
Analysis | What's a Bot? Why Musk and Twitter's CEO Are Fighting Over Fake Accounts


Elon Musk and Twitter Inc. chief Parag Agrawal are butting heads over how the social media giant handles so-called bots, stoking speculation that Musk may try to lower the price of his $44 billion offer for the company, or even walk away from it. Musk told a tech conference in Miami that fake users make up at least 20% of all Twitter accounts, and possibly as many as 90%. Twitter disagrees: it reports that spam accounts make up less than 5% of total users, and Agrawal posted a long thread laying out the company’s methodology. Musk replied first by asking why Twitter doesn’t simply call users to verify their identity, and then by posting a poop emoji.

1. What are Twitter bots and what are they used for?

On Twitter, bots are automated accounts that can do the same things as real human beings: send out tweets, follow other users and like and retweet postings by others. Spam bots use these abilities to engage in potentially deceptive, harmful or annoying activity. Spam bots programmed with a commercial motivation might tweet incessantly in an attempt to drive traffic to a website for a product or service. They can be used to spread misinformation and promote political messages. In the 2016 presidential election, there were concerns that Russian bots helped influence the race in favor of the winner, Donald Trump. Spam bots can also disseminate links to fake giveaways and other financial scams. After announcing his plans to acquire Twitter, Musk said one of his priorities is cracking down on spam bots that promote scams involving cryptocurrencies.
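To make the mechanics concrete, here is a rough sketch of what that kind of automation can look like in Python, using the third-party Tweepy library against Twitter’s public API. The credentials and IDs below are placeholders invented for the example, not anything referenced in this article:

```python
import tweepy

# Placeholder credentials; running a bot requires a Twitter developer account.
client = tweepy.Client(
    consumer_key="API_KEY",
    consumer_secret="API_SECRET",
    access_token="ACCESS_TOKEN",
    access_token_secret="ACCESS_TOKEN_SECRET",
)

# Post a tweet, follow a user, and like and retweet another account's posting:
# the same actions a human performs by hand, here done by a script.
client.create_tweet(text="Automated status update")
client.follow_user(20)          # follow an account by user ID (placeholder)
client.like(1234567890)         # like a tweet by ID (placeholder)
client.retweet(1234567890)      # retweet the same tweet
```

A spam bot is essentially a script like this run at scale, across many accounts and with deceptive or promotional content.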

2. Are bots and fake accounts allowed on Twitter?

Bots are allowed on Twitter, though company policy requires such accounts to indicate that they’re automated. The platform has even launched a label for “good” bots, such as @tinycarebot, an account that tweets self-care reminders. Spam bots, however, aren’t permitted, and the company has policies meant to combat them. Users are encouraged to report policy violations, and Twitter locks accounts that show suspicious activity. To get back in, users may have to provide additional information, such as a phone number, or solve a reCAPTCHA challenge, which entails completing a puzzle or typing a phrase seen in an image to confirm they’re human. Twitter can also permanently suspend spam accounts. The company estimated that fake accounts and spam made up less than 5% of its daily active users in the fourth quarter of 2021.
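The article doesn’t describe Twitter’s internal checks, but the general server-side pattern for reCAPTCHA is simple: the site forwards the token generated when a user solves the challenge to Google’s verification endpoint. A minimal sketch in Python, assuming the requests library and a placeholder secret key:

```python
import requests

def verify_recaptcha(token: str, secret_key: str) -> bool:
    """Send a reCAPTCHA response token to Google's siteverify endpoint
    and return whether the challenge was solved successfully."""
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": secret_key, "response": token},
        timeout=10,
    )
    return resp.json().get("success", False)

# Hypothetical usage: the token comes from the widget shown to the user
# after their account is locked for suspicious activity.
# if not verify_recaptcha(token_from_client, "YOUR_SECRET_KEY"):
#     keep_account_locked()  # placeholder for whatever action the site takes
```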

3. Can Elon Musk crack down on bots?

Musk certainly seems to think so. On April 25 he said he wanted to improve Twitter by, among other things, “defeating the spam bots, and authenticating all humans.” Making greater use of security methods like reCAPTCHA could help crack down on spam bots. Twitter could also expand its use of multifactor authentication, a form of identity verification in which users confirm who they are, and that they’re human, through a second channel such as a phone number or email address. And the company could lean more heavily on machine-learning algorithms that flag likely spam bots based on their activity on the platform, as in the sketch below.
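The piece doesn’t say how Twitter’s own models work, but the machine-learning idea is to train a classifier on behavioral features of known bot and human accounts and then score new ones. A minimal sketch with scikit-learn, using invented features and invented training data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-account features:
# [tweets per day, followers-to-following ratio, share of tweets with links, account age in days]
X_train = np.array([
    [400.0, 0.01, 0.95,   3.0],   # labeled spam bot
    [350.0, 0.05, 0.90,  10.0],   # labeled spam bot
    [  5.0, 1.20, 0.10, 900.0],   # labeled human
    [  2.0, 0.80, 0.05, 450.0],   # labeled human
])
y_train = np.array([1, 1, 0, 0])  # 1 = spam bot, 0 = human

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score a new account: very high tweet rate, link-heavy, created yesterday.
new_account = np.array([[300.0, 0.02, 0.88, 1.0]])
print(model.predict_proba(new_account)[0, 1])  # estimated probability it's a spam bot
```

A production system would rely on far richer signals and vastly more labeled data, but the basic scoring step looks like this.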

4. What’s at stake for Twitter?

Twitter could lose users who are frustrated, concerned or even harmed by spam bots and fraudulent activity. Persistent security issues could also draw more attention from regulators who want to rein in Twitter and the broader tech industry. On the flip side, a tougher crackdown on spam bots could hurt Twitter’s total user count by cleaning out fake accounts. More immediately, Musk, chief executive officer of Tesla Inc. and SpaceX, said on May 13 that his bid to buy Twitter was “temporarily on hold” pending details about how many spam and fake accounts are on the platform.

5. Why is security such a challenge for Twitter?

Mobile apps are often more vulnerable than websites accessed through a browser on a desktop or laptop computer. Web browsers like Google Chrome update themselves and make security improvements in the background, without the user noticing. With a mobile app, users often have to install updates themselves to ensure that a new security patch is in place. More established tech companies like Google and Microsoft also have large, dedicated security teams, putting them ahead of social media companies when it comes to security.
