Rising Popularity of AI-Based Apps Undressing Women Sparks Concerns, Researchers Find


Researchers have identified a concerning surge in the popularity of apps and websites that use artificial intelligence (AI) to undress women in photos. According to Graphika, a social network analysis company, 24 million people visited such websites in September alone. The trend is part of a broader problem of non-consensual intimate imagery enabled by AI advancements, commonly known as deepfake pornography.

These undressing services, often marketed as “nudify” apps, rely heavily on popular social networks for promotion. Since the beginning of the year, the number of links advertising such apps on platforms including X and Reddit has increased by more than 2,400%. The services use AI to manipulate photos so that the subjects appear nude, and many of them specifically target women.

The rise in popularity is attributed to the release of open-source diffusion models, AI models capable of generating highly realistic images. Santiago Lakatos, an analyst at Graphika, noted that the accessibility of these models has enabled deepfakes that are far more convincing than earlier, less sophisticated versions.

These practices raise serious ethical and legal concerns, as the images are often sourced from social media without the subject’s knowledge or consent. Researchers have also flagged the language used in the services’ advertisements and their potential to facilitate harassment, further compounding the problem.

Some of the apps charge users $9.99 a month and claim to attract a substantial customer base. This has heightened concerns that deepfake pornography will proliferate and increasingly target ordinary individuals, including students and young adults.

Major platforms, such as Google’s YouTube and Reddit, have taken steps to address the issue. Google stated that it doesn’t allow ads with sexually explicit content and is removing violative ads. Reddit prohibits the non-consensual sharing of faked explicit material and has banned several domains as a result.

Privacy experts and cybersecurity professionals are increasingly alarmed by the ease with which deepfake software can be used, potentially impacting ordinary individuals. While there is currently no federal law specifically prohibiting the creation of deepfake pornography, some cases have been prosecuted under existing laws, such as those related to child sexual abuse material.

Platforms like TikTok and Meta Platforms Inc. (formerly Facebook) have taken measures to block keywords associated with undressing apps in response to the growing concerns. Despite these efforts, the issue remains a challenge, prompting a broader conversation about the ethical use of AI and the need for regulatory measures to address deepfake-related abuses.

Kaitlin Welch

Kaitlin Welch is a freelance contributor who covers a wide range of topics, with a focus on finance and business news. She has five years of experience as a news reporter.

