News

'Breaking Bad' Creator Hates AI, Promises New Show 'Pluribus' Was 'Made By Humans'

Slashdot - 9 November, 2025 - 15:34
The new series from Breaking Bad creator Vince Gilligan, Pluribus, was emphatically made by humans, not AI, reports TechCrunch: If you watched all the way to the end of the new Apple TV show "Pluribus," you may have noticed an unusual disclaimer in the credits: "This show was made by humans." That terse message — placed right below a note that "animal wranglers were on set to ensure animal safety" — could potentially provide a model for other filmmakers seeking to highlight that their work was made without the use of generative AI.

In fact, yesterday the former X-Files writer told Variety "I hate AI. AI is the world's most expensive and energy-intensive plagiarism machine...." He goes on about how AI-generated content is "like a cow chewing its cud — an endlessly regurgitated loop of nonsense," and how the U.S. will fail to regulate the technology because of an arms race with China. He works himself up until he's laughing again, proclaiming: "Thank you, Silicon Valley! Yet again, you've fucked up the world."

He also says "there's a very high possibility that this is all a bunch of horseshit," according to the article. "It's basically a bunch of centibillionaires whose greatest life goal is to become the world's first trillionaires. I think they're selling a bag of vapor." And earlier this week he told Polygon that he hasn't used ChatGPT "because, as of yet, no one has held a shotgun to my head and made me do it," adding "I will never use it."

Time magazine called Thursday's two-episode premiere "bonkers." Ironically, though, that premiere hit its own dystopian glitch. "After months of buildup and an omnipresent advertising campaign, Apple's much-anticipated new show Pluribus made its debut..." reports Macworld. "And the service promptly suffered a major outage across the U.S. and Canada."

As reported by Bloomberg and others, users started to report that the service had crashed at around 10:30 p.m. ET, shortly after Apple made the first two episodes of the show available to stream. There were almost 13,000 reports on Downdetector before Apple acknowledged the problem on its System Status page. Reports say the outage was brief, lasting less than an hour... [T]here remains a Resolved Outage note on Apple TV (simply saying "Some users were affected; users experienced a problem with Apple TV" between 10:29 and 11:38 p.m.), as well as on Apple Music and Apple Arcade, which also went down at the same time. Social media reports indicated that the outage was widespread.

Read more of this story at Slashdot.

Categories: Technology

New Firefox Mascot 'Kit' Unveiled On New Web Page

Slashdot - 9 November, 2025 - 13:34
"The Firefox brand is getting a refresh and you get the first look," says a new web page at Firefox.com. "Kit's our new mascot and your new companion through an internet that's private, open and actually yours." Slashdot reader BrianFagioli believes the new mascot "is meant to communicate that message in a warmer, more relatable way." And Firefox is already selling shirts with Kit over the pocket (as well as stickers)...

Read more of this story at Slashdot.

Categories: Technology

Common Crawl Criticized for 'Quietly Funneling Paywalled Articles to AI Developers'

Slashdot - 9 November, 2025 - 10:34
For more than a decade, the nonprofit Common Crawl "has been scraping billions of webpages to build a massive archive of the internet," notes the Atlantic, making it freely available for research. "In recent years, however, this archive has been put to a controversial purpose: AI companies including OpenAI, Google, Anthropic, Nvidia, Meta, and Amazon have used it to train large language models."

"In the process, my reporting has found, Common Crawl has opened a back door for AI companies to train their models with paywalled articles from major news websites. And the foundation appears to be lying to publishers about this — as well as masking the actual contents of its archives..."

Common Crawl's website states that it scrapes the internet for "freely available content" without "going behind any 'paywalls.'" Yet the organization has taken articles from major news websites that people normally have to pay for — allowing AI companies to train their LLMs on high-quality journalism for free. Meanwhile, Common Crawl's executive director, Rich Skrenta, has publicly made the case that AI models should be able to access anything on the internet. "The robots are people too," he told me, and should therefore be allowed to "read the books" for free.

Multiple news publishers have requested that Common Crawl remove their articles to prevent exactly this use. Common Crawl says it complies with these requests. But my research shows that it does not. I've discovered that pages downloaded by Common Crawl have appeared in the training data of thousands of AI models.

As Stefan Baack, a researcher formerly at Mozilla, has written, "Generative AI in its current form would probably not be possible without Common Crawl." In 2020, OpenAI used Common Crawl's archives to train GPT-3. OpenAI claimed that the program could generate "news articles which human evaluators have difficulty distinguishing from articles written by humans," and in 2022, an iteration on that model, GPT-3.5, became the basis for ChatGPT, kicking off the ongoing generative-AI boom. Many different AI companies are now using publishers' articles to train models that summarize and paraphrase the news, and are deploying those models in ways that steal readers from writers and publishers.

Common Crawl maintains that it is doing nothing wrong. I spoke with Skrenta twice while reporting this story. During the second conversation, I asked him about the foundation archiving news articles even after publishers have asked it to stop. Skrenta told me that these publishers are making a mistake by excluding themselves from "Search 2.0" — referring to the generative-AI products now widely being used to find information online — and said that, anyway, it is the publishers that made their work available in the first place. "You shouldn't have put your content on the internet if you didn't want it to be on the internet," he said.

Common Crawl doesn't log in to the websites it scrapes, but its scraper is immune to some of the paywall mechanisms used by news publishers. For example, on many news websites, you can briefly see the full text of any article before your web browser executes the paywall code that checks whether you're a subscriber and hides the content if you're not. Common Crawl's scraper never executes that code, so it gets the full articles.
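To make that mechanism concrete, here is a minimal sketch in Python of why a scraper that never runs JavaScript sees the full text: with a purely client-side paywall, the server sends the complete article and a script hides it afterward in the browser. The URL and User-Agent below are hypothetical, for illustration only.

    # Minimal sketch, assuming a purely client-side paywall: the server
    # returns the full article HTML, and browser JavaScript later hides it
    # from non-subscribers. A fetch that never executes that script keeps
    # the text. The URL and User-Agent are hypothetical, for illustration.
    import urllib.request

    url = "https://news.example.com/2025/11/some-paywalled-article"
    req = urllib.request.Request(url, headers={"User-Agent": "example-crawler/1.0"})

    with urllib.request.urlopen(req) as resp:
        html = resp.read().decode("utf-8", errors="replace")

    # In a browser, the paywall script would now check the reader's
    # subscription and blank out the article body. Nothing executes here,
    # so the complete text is already present in the fetched markup.
    print(html[:500])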
Thus, by my estimate, the foundation's archives contain millions of articles from news organizations around the world, including The Economist, the Los Angeles Times, The Wall Street Journal, The New York Times, The New Yorker, Harper's, and The Atlantic.... A search for nytimes.com in any crawl from 2013 through 2022 shows a "no captures" result, when in fact there are articles from NYTimes.com in most of these crawls.

"In the past year, Common Crawl's CCBot has become the scraper most widely blocked by the top 1,000 websites," the article points out...
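Readers who want to check a "no captures" result themselves can query Common Crawl's public per-crawl URL index over HTTP. The sketch below assumes the index server at index.commoncrawl.org and an example crawl ID; both follow Common Crawl's documented index interface, but treat the endpoint details as assumptions rather than a definitive recipe.

    # Hedged sketch: look up captures of nytimes.com pages in one Common
    # Crawl crawl via the public CDX-style index server. The endpoint
    # format and the example crawl ID are assumptions based on Common
    # Crawl's documented index API.
    import json
    import urllib.error
    import urllib.request

    CRAWL_ID = "CC-MAIN-2022-05"  # example crawl; one index exists per crawl
    query = ("https://index.commoncrawl.org/" + CRAWL_ID +
             "-index?url=nytimes.com/*&output=json")

    try:
        with urllib.request.urlopen(query) as resp:
            # One JSON record per line; print the first few captured URLs.
            for line in resp.read().decode().splitlines()[:5]:
                record = json.loads(line)
                print(record.get("url"), record.get("status"))
    except urllib.error.HTTPError as err:
        # A query with no matches comes back as an HTTP error, which a
        # search front end could surface as a "no captures" result.
        print("no captures reported:", err.code)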

Read more of this story at Slashdot.

Categories: Technology
