30+ brands suspend their Twitter marketing campaigns after finding their ads next to child-pornography accounts

Some popular brands have paused their Twitter marketing campaigns after discovering that their ads had appeared alongside child pornography accounts.

Affected brands. Ads for more than 30 brands reportedly appeared on the profile pages of Twitter accounts peddling links to the exploitative material. Among them are a children’s hospital and PBS Kids. Other verified brands include:

  • Dyson
  • Mazda
  • Forbes
  • Walt Disney
  • NBC Universal
  • Coca-Cola
  • Cole Haan

What happened. Twitter hasn’t explained what caused the issue. But a Reuters review found tweets containing keywords related to “rape” and “teens” appearing alongside promoted tweets from corporate advertisers. In one example, a promoted tweet for shoe and accessories brand Cole Haan appeared next to a tweet in which a user said they were “trading teen/child” content.

In another example, a user tweeted that they were searching for content of “Yung girls ONLY, NO Boys,” which was immediately followed by a promoted tweet for Texas-based Scottish Rite Children’s Hospital.

How brands are reacting. “We’re horrified. Either Twitter is going to fix this, or we’ll fix it by any means we can, which includes not buying Twitter ads,” David Maddocks, brand president at Cole Haan, told Reuters.

“Twitter needs to fix this problem ASAP, and until they do, we are going to cease any further paid activity on Twitter,” said a spokesperson for Forbes.

“There is no place for this type of content online,” a spokesperson for carmaker Mazda USA said in a statement to Reuters, adding that in response, the company is now prohibiting its ads from appearing on Twitter profile pages.

A Disney spokesperson called the content “reprehensible” and said they are “doubling-down on our efforts to ensure that the digital platforms on which we advertise, and the media buyers we use, strengthen their efforts to prevent such errors from recurring.”

Twitter’s response. In a statement, Twitter spokesperson Celeste Carswell said the company “has zero tolerance for child sexual exploitation” and is dedicating more resources to child safety, including hiring for new positions to write policy and implement solutions. She added that the matter is being investigated.

An ongoing issue. A cybersecurity group called Ghost Data identified more than 500 accounts that openly shared or requested child sexual abuse material over a 20-day period, and Twitter failed to remove 70% of them. After Reuters shared a sample of those accounts with Twitter, the company removed 300 additional accounts but left more than 100 active.

Twitter’s transparency reports on its website show it suspended more than 1 million accounts last year for child sexual exploitation.

What Twitter is, and isn’t doing. A team of Twitter employees concluded in a report last year that the company needed more time to identify and remove child exploitation material at scale. The report noted that the company had a backlog of cases to review for possible reporting to law enforcement.

Traffickers often use code words such as “cp” for child pornography and are “intentionally as vague as possible” to avoid detection. The more Twitter cracks down on certain keywords, the more users are nudged toward obfuscated text, which the report said “tend[s] to be harder for Twitter to automate against.”
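The report’s point is easy to see in practice. The sketch below is purely illustrative, using a made-up placeholder term (“badterm”) and a generic substitution map; it makes no claim about how Twitter’s actual systems work. It shows why an exact-match keyword filter is trivially defeated by character substitutions and spacing tricks, and how even light text normalization recovers some variants:

```python
# Hypothetical illustration only: "badterm" stands in for a banned keyword,
# and the substitution map is a generic example, not Twitter's real pipeline.
import re

BLOCKLIST = {"badterm"}

# Common character substitutions used to dodge exact-match filters.
LEET_MAP = str.maketrans(
    {"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s"}
)

def naive_match(text: str) -> bool:
    """Exact substring match against the blocklist: any obfuscation defeats it."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def normalized_match(text: str) -> bool:
    """Fold leetspeak and strip separators before matching."""
    folded = text.lower().translate(LEET_MAP)
    collapsed = re.sub(r"[\s._\-]+", "", folded)  # defeat "b a d t e r m"-style spacing
    return any(term in collapsed for term in BLOCKLIST)

for sample in ["badterm here", "b4dt3rm here", "b.a.d.t.e.r.m here"]:
    print(f"{sample!r}: naive={naive_match(sample)} normalized={normalized_match(sample)}")
```

Even this toy normalizer catches the two obfuscated samples that the naive filter misses, yet a determined evader can always invent a variant it doesn’t cover, which is the cat-and-mouse dynamic the report describes.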

Ghost Data said such tricks would complicate efforts to hunt down the material, but noted that its small team of five researchers, with no access to Twitter’s internal resources, was able to find hundreds of accounts within 20 days.

Not just a Twitter problem. The problem isn’t isolated to Twitter. Child safety advocates say predators use Facebook and Instagram to groom victims and exchange explicit images, then instruct victims to contact them on Telegram or Discord to complete payment and receive material. The files are usually stored on cloud services like Dropbox.

Why we care. Child pornography and explicit accounts on social media are everyone’s problem. Since offenders continually try to deceive detection algorithms with code words and slang, we can never be 100% sure our ads aren’t appearing where they shouldn’t. If you’re advertising on Twitter, review your placements as thoroughly as possible.

But Twitter’s response seems to be lacking. If a watchdog group like Ghost Data can find these accounts without accessing Twitter’s internal data, then it seems pretty reasonable to assume that a child can, as well. Why isn’t Twitter removing all of these accounts? What additional data are they looking for to justify a suspension?

Like a game of Whac-A-Mole, for every account that is removed, several more pop up, and suspended users will likely go on to create new accounts behind masked IP addresses. So is this an automation issue? Is there a problem with getting local law enforcement agencies to react? Twitter spokesperson Carswell said the information in recent reports “… is not an accurate reflection of where we are today.” That is likely accurate, since the issue seems to have gotten worse.

