
Pornographic deepfakes are being weaponised at an alarming scale, with at least 104,000 women targeted by a bot operating on the messaging app Telegram since July. Thousands of people use the bot every month to create nude images of friends and family members, some of whom appear to be under the age of 18.

The still images of nude women are generated by an AI that ‘removes’ items of clothing from a non-nude photo. Every day the bot sends out a gallery of new images to an associated Telegram channel which has almost 25,000 subscribers. The sets of images are frequently viewed more than 3,000 times. A separate Telegram channel that promotes the bot has more than 50,000 subscribers.

Some of the images produced by the bot are glitchy but many could pass for genuine. “It is maybe the first time that we are seeing these at a massive scale,” says Giorgio Patrini, CEO and chief scientist at deepfake detection company Sensity, which conducted the research. The company is publicising its findings in a bid to pressure services hosting the content to remove it but is not publicly naming the Telegram channels involved.

The actual number of women targeted by the deepfake bot is likely much higher than 104,000. Sensity was only able to count images shared publicly and the bot gives people the option to generate photos privately. “Most of the interest for the attack is on private individuals,” Patrini says. “The very large majority of those are for people that we cannot even recognise.”

As a result, it is likely very few of the women who have been targeted know that the images exist. The bot and a number of Telegram channels linked to it are primarily Russian-language but also offer English-language translations. In a number of cases the images created appear to contain girls who are under the age of 18, Sensity adds, saying it has no way to verify this but has informed law enforcement of their existence.

Unlike the non-consensual explicit deepfake videos that have racked up millions of views on porn websites, these images require no technical knowledge to create. The process is automated and can be used by anyone – it’s as simple as sending a photo over a messaging app.

The images are automatically created once people upload a clothed image of the victim to the Telegram bot from their phone or desktop. Sensity’s analysis says the technology only works on images of women. The bot is free to use, although it limits people to ten images per day, and payments have to be made to remove watermarks from the images. A premium version costs around $8 for 112 images, Sensity says.

“It's a depressing validation of all the fears that those of us who had heard about this technology brought up at the beginning,” says Mary Anne Franks, a professor of law at the University of Miami. Franks provided some feedback on the Sensity research before it was published but was not involved in the report’s final findings. “Now you've got the even more terrifying reality that it doesn't matter if you've never posed for a photo naked or never shared any kind of intimate data with someone, all they need is a picture of your face”.

It’s believed that the Telegram bot is powered by a version of the DeepNude software. Vice first reported on DeepNude in June 2019. The original creator killed the app, citing fears about how it could be used, but not before it had reached 95,000 downloads in just a few days.

The code was quickly backed up and copied. The DeepNude software uses deep learning and generative adversarial networks to generate what it thinks victims’ bodies look like. The AI is trained on a set of images of clothed and naked women and is able to synthesise body parts in the final images.

“This is now something that a community has embedded into a messaging platform app and therefore they have pushed forward the usability and the ease to access this type of technology,” Patrini says. The Telegram bot is powered by external servers, Sensity says, lowering the barrier to entry. “In a way, it is literally deepfakes as a service”.

Telegram did not answer questions about the bot and the abusive images it produces. Sensity’s report also says the company did not respond when it reported the bot and channels several months ago. The company has a [limited set of terms of service](https://telegram.org/tos#:~:text=By%20signing%20up%20for%20Telegram,Telegram%20channels%2C%20bots%2C%20etc.). One of its three bullet points says that people should not “post illegal pornographic content on publicly viewable Telegram channels, bots, etc”.

In an expanded set of frequently asked questions, Telegram says it does process requests to take down “illegal public content”. It adds that Telegram chats and group chats are private and the company doesn’t process requests related to them; channels and bots, however, are publicly available. A section on takedowns says “we can take down porn bots”.

Before the publication of this article, all of the messages within the Telegram channel that pushed out daily galleries of bot-generated deepfake images were removed. It is not clear who removed them.

Unusually for this sort of activity, there is some data on who has used the bot and their intentions. Within the Telegram channels linked to the bot there is a detailed “privacy policy”, and people using the service have answered self-selecting surveys about their behaviour.

An anonymous poll posted to the Telegram channel in July 2019 was answered by more than 7,200 people, 70 per cent of whom said they were from “Russia, Ukraine, Belarus, Kazakhstan and the entire former USSR”. All other regions of the world each accounted for less than six per cent of responses. People using the bot also self-reported finding it through the Russian social media network VK. Sensity’s report says that it has found a large amount of deepfake content on the social network, and the bot also has a dedicated page on the site. VK did not respond to requests for comment.

A separate July 2019 poll answered by 3,300 people revealed people’s motivations for using the bot. It asked the question: “Who are you interested to undress in the first place?”. The overwhelming majority of respondents, 63 per cent, selected the option: “Familiar girls, whom i know in real life”. Celebrities and “stars” made up the second most selected category (16 per cent), while “models and beauties from Instagram” came third with eight per cent.

Experts fear these types of images will be used to humiliate and blackmail women. But as deepfake technology has rapidly scaled, the law has failed to keep up, focussing mostly on the technology’s future political impact.

Since deepfakes first emerged at the end of 2017, they have mostly been used to abuse women. Growth over the last year has been exponential as the technology required to make them becomes cheaper and easier to use. In July 2019 there were 14,678 deepfake videos online, previous Sensity research found. By June this year that number had climbed to 49,081. Almost all of these videos were pornographic in nature and targeted women.