BRISBANE, Australia – An Australian man has been fined a record AU$343,500 (approximately 317 million KRW) by a federal court in Brisbane, Queensland, for posting non-consensual deepfake pornographic images of women, including public figures, online. The ruling, delivered on September 26, is the first in Australia to impose such a substantial financial penalty for deepfake abuse, sending a powerful message against the exploitation of AI technology for image-based sexual abuse.
The Federal Court found Antonio Rotondo violated the Online Safety Act by repeatedly uploading deepfake pornographic images of six women between November 2022 and October 2023: 12 images in total, posted 14 times on a deepfake website. The court also ordered Rotondo to cover the legal costs of the Australian online safety regulator, the eSafety Commissioner, which brought the civil suit against him. The identities of the victims are to remain confidential.
Deliberate and Persistent Breach
The court determined that Rotondo’s actions were both deliberate and sustained. Evidence included his own remarks that creating the explicit deepfakes was "fun," indicating a calculated disregard for the law and the harm caused.
Rotondo had previously shown contempt for the judicial process. After being ordered by the court in early 2023 to delete the images, he ignored the directive, stating, "I am not a resident of Australia. The removal notice means nothing to me. Get an arrest warrant if you think you are right." He was subsequently arrested in December of the same year upon entering Australia from the Philippines, facing a contempt of court charge that resulted in an additional fine of AU$25,000 (about 23 million KRW).
Impact and Regulatory Response
The eSafety Commissioner, which spearheaded the legal challenge, praised the outcome, noting that the judgment delivers a "strong message about the consequences for anyone who perpetrates deepfake image-based abuse." One victim’s statement during the trial underscored the profound violation, conveying a feeling of being "violated" and "appalled" despite the content being visibly fake.
The surge in the misuse of artificial intelligence for digital sexual abuse, including 'nudify' apps that generate fake nude images, has prompted significant regulatory action in Australia. Earlier this month, the Australian Department of Communications announced it would introduce legislation to Parliament aimed at blocking apps like 'nudify' tools and online stalking software.
The need for stricter measures is evident in the data: the eSafety Commissioner reports that complaints about non-consensual, digitally altered intimate images targeting minors have more than doubled over the past 18 months, with women and girls the target in approximately 80% of cases. In a separate move to protect minors, the Australian government also plans to bar children under 16 from social media platforms such as YouTube, Facebook, and Instagram, effective from the end of this year.
This landmark fine against Rotondo solidifies Australia's position as a global leader in utilizing civil law to combat technology-facilitated abuse, establishing a clear deterrent against the non-consensual production and sharing of deepfake pornography.
[Copyright (c) Global Economic Times. All Rights Reserved.]