Last week, the United Kingdom began requiring purveyors of online porn to check IDs—and the mandate is already reverberating beyond adult websites. For example, Bluesky—a general-interest social media platform that few would call an “adult website”—will begin requiring U.K. users to prove they’re adults or lose access to direct messaging and certain content.
Platforms with U.K. users are now required to block minors from seeing not just porn but “self-harm, suicide and eating disorder content,” according to Ofcom, the U.K.’s communications regulator. The requirement is part of the U.K.’s Online Safety Act of 2023. This far-reaching law imposes rules on an array of digital services, including social media platforms, search engines, video-sharing platforms, direct messaging tools, dating apps, message boards, and more. Under the law, online platforms publishing content that authorities deem “harmful to children” must “introduce robust age checks.”
You are reading Sex & Tech, from Elizabeth Nolan Brown. Get more of Elizabeth’s sex, tech, bodily autonomy, law, and online culture coverage.
The age-verification rule isn’t aimed solely at sex sites, but at any digital entity where racy content or other “harmful” speech could be found.
In addition to Bluesky, Reddit, X, Discord, and Grindr “have now announced they will deploy age assurance” schemes, Ofcom says.
Services had until last week to start complying or face serious financial consequences.
On Bluesky, this means either providing credit card information or submitting to a facial scan.
Per Ofcom’s rules, age checks can be done in various ways, including checking users’ government-issued IDs, employing an online ID verification service, or using bank, credit card, or phone information.
But letting users self-report that they are above age will no longer suffice.
If you’re in the U.S. and thinking, “What does this have to do with me?” Well, consider the U.K. a glimpse into our inevitable surveillance-mad future.
At least 20 states have already passed rules requiring age verification for adult content. And I think we can expect most, if not all, states to follow suit now that the Supreme Court has given it the OK.
Many of these state laws on age checks and online porn have been written to exclude platforms like X and Bluesky (for instance, by applying only to platforms where more than one-third of the content is adult-oriented).
But sex work is always the canary in the coal mine for free speech and privacy, and age-check requirements aren’t stopping with online porn.
Already, some states are passing laws that require social media platforms to check IDs or otherwise verify user ages.
A federal appeals court recently gave the green light to Mississippi to start enforcing a social media age verification law.
“Around the world, a new wave of child protection laws are forcing a profound shift that could normalize rigorous age checks broadly across the web,” note Matt Burgess and Lily Hay Newman at Wired. They add: “Meanwhile, courts in France ruled last week that porn sites can check users’ ages. Ireland implemented age checking laws for video websites this week. The European Commission is testing an age-verification app. And in December, Australia’s strict social media ban for children under 16 will take effect, introducing checks for social media and people logged in to search engines.”
“Age verification impedes people’s ability to anonymously access information online,” Stanford University researcher Riana Pfefferkorn told Wired. “That includes information that adults have every right to access but might not want anyone else knowing they’re consuming—such as pornography—as well as information that kids want to access but that for political reasons gets deemed inappropriate for them, such as accurate information about sex, reproductive health information, and LGBTQ content.”
The age of online anonymity is rapidly vanishing. In its place, we get dubious “protection” measures that can be easily gamed by motivated parties, may push people to less regulated and less responsible platforms, put adults and children alike at risk of identity theft and other security breaches, and make it much easier for authorities around the world to keep tabs on their citizens.
“The Supreme Court of Canada has rejected a constitutional challenge of the criminal law on sex work, upholding the convictions of two men who argued its provisions are overly broad,” reports The Canadian Press.
The case came before Canada’s Supreme Court last November, and this newsletter covered it then:
The case is Kloubakov v. Canada. It was brought by two men—Mikhail Kloubakov and Hicham Moustaine—who were employed as drivers for women being paid for sex. Both men were found guilty of benefiting financially from, and helping to procure, people for sexual services.
In arguments before the court on November 12 and 13, lawyers for Kloubakov and Moustaine argued that certain provisions of Canada’s current sex work laws violate the Canadian Charter of Rights and Freedoms, which guarantees all people a right to life, liberty, and security of person.
In a unanimous ruling last week, Canada’s Supreme Court rejected their argument.
As The Canadian Press notes, “a third party who provides security to someone who sells sexual services could do so lawfully, the court said, as long as they do not encourage the person to sell sex and provided the benefit they receive is proportionate to the value of the services they provide.” The court said it would be up to judges to sort such things out on a case-by-case basis.
I don’t know enough about Canadian law to say for sure, but that sure sounds like it would still prevent sex workers from being able to legally pay people to be their drivers, security, etc. Who in their right mind would openly engage in such a pursuit if the only thing preventing their prosecution was a judge determining that they weren’t charging too much for their services and were appropriately disapproving of the sex taking place?
And making it difficult or dangerous for third parties to be legally employed by sex workers only leaves more opportunity for third parties who will take the risk to be exploitative.
Last Wednesday’s newsletter looked at the free speech risks posed by government crackdowns on artificial intelligence deemed too “woke,” noting that President Donald Trump was expected to soon release an order on the matter. That order—titled “Preventing Woke AI in the Federal Government”—is here. It states that the U.S. government shall:
Procure only those LLMs developed in accordance with the following two principles (Unbiased AI Principles):
(a) Truth-seeking. LLMs shall be truthful in responding to user prompts seeking factual information or analysis. LLMs shall prioritize historical accuracy, scientific inquiry, and objectivity, and shall acknowledge uncertainty where reliable information is incomplete or contradictory.
(b) Ideological Neutrality. LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI. Developers shall not intentionally encode partisan or ideological judgments into an LLM’s outputs unless those judgments are prompted by or otherwise readily accessible to the end user.
Foundation for Individual Rights and Expression (FIRE) president Greg Lukianoff notes that “the culture war framing on all of this is obvious, and the executive order plays well with voters who are exhausted by perceived left-coded tech and institutional groupthink. But once you move beyond the political theater, the implications of this order become far more serious.”
Related: Reason‘s Jack Nicastro looks at the Trump administration’s AI Action Plan.
• The Guardian profiles Chilean photographer Paz Errázuriz:
Between 1982 and 1987, Errázuriz spent time photographing life in the brothels of Santiago, as trans sex workers fixed their hair, shifted their stockings, refined their makeup and killed time waiting for male clients. It was, she says, a “beautiful” experience. “We talked or we’d have a glass of wine or a coffee. They trusted me.”
Such was her empathetic bond with her subjects, that she even developed a friendship with the mother of two brothers working in one of the brothels. “I dedicated the series to her.” She titled the project Adam’s Apple, and it characterised a career defined by an enduring love of outsiders.
Works from the series can now be seen in her first major solo UK exhibition, Paz Errázuriz: Dare to Look – Hidden Realities of Chile at MK Gallery in Milton Keynes. Other subjects of the 171 photographs on show include psychiatric patients, circus performers, boxers, political activists and the homeless, highlighting the humanity of those living under duress during the military dictatorship of Augusto Pinochet.
• The Stopping Terrorists Online Presence and Holding Accountable Tech Entities (STOP HATE) Act “would make it mandatory for social media companies to work with the federal government” by requiring “companies to provide triennial reports on their moderation policies—and violations they catch—to the U.S. attorney general,” notes Reason‘s Matthew Petti. At a press conference last week, Rep. Don Bacon (R–Neb.), one of the bill’s two sponsors, “made it clear that the STOP HATE Act was meant to push social media companies to act even more like an arm of government censorship.”
• Will AI slop make people touch grass more?