A wave of pornographic deepfake photos of singer Taylor Swift went viral online, prompting the social networking platform X to block some searches for her name. A “deepfake” is a picture, voice recording, or video produced to give the impression that a person said or did something they did not. On Monday, attempts to search for her name on the site without quotation marks produced an error message prompting users to try their search again, along with the words “Don’t worry—it’s not your fault.” Posts could still be found, however, by enclosing her name in quotation marks.
The images of Swift began going viral on X a week ago, making her the most prominent victim yet of a threat that tech corporations and anti-abuse organizations have struggled to address. In a statement, Joe Benarroch, head of business operations at X, said, “This is a temporary action and done with an abundance of caution as we prioritize safety on this issue.” After the photographs spread, the singer’s devoted fans, known as “Swifties,” quickly rallied behind her, launching a defense campaign and flooding the site with more flattering pictures of the singer under the hashtag #ProtectTaylorSwift. Some said they were reporting accounts that shared the deepfakes.
According to Reality Defender, a group that detects deepfakes, Swift was repeatedly targeted with sexually explicit posts. Many of the posts appeared on X, formerly known as Twitter, and some images also surfaced on Facebook and other social media platforms. Researchers identified at least twenty distinct artificial intelligence (AI)-generated photos. They say the volume of pornographic deepfakes has grown in recent years, a trend driven by technology for creating such images that has become easier to use and more widely available.