America's Los Alamos Lab Is Now Investing Heavily In AI For Science
Established in 1943 to coordinate America's building of the first atomic bomb, the Los Alamos National Lab in New Mexico is still "one of the world's largest and most advanced scientific institutions" notes Wikipedia.
And it now has a "National Security AI Office," where senior director Jason Pruet is working to help "prepare for a future in which AI will reshape the landscape of science and security," according to the lab's science and technology magazine 1663. "This year, the Lab invested more in AI-related work than at any point in history..."
Pruet: AI is starting to feel like the next great foundation for scientific progress. Big companies are spending billions on large machines, but the buy-in costs of working at the frontiers of AI are so high that no university has the exascale-class machines needed to run the latest AI models. We're at a place now where we, meaning the government, can revitalize that pact by investing in the infrastructure to study AI for the public good... Part of what we're doing with the Lab's machines, like Venado — which has 2500 GPUs — is giving universities access to that scale of computing. The scale is just completely different. A typical university might have 50 or 100 GPUs.
Right now, for example, we have partnerships with the University of California, the University of Michigan, and many other universities where researchers can tap into this infrastructure. That's something we want to expand on. Having university collaboration will be critical if the Department of Energy is going to have a comprehensive AI program at scale that is focused on national security and energy dominance...
There was a time when I wouldn't have advocated for government investment in AI at the scale we're seeing now. But the weight of the evidence has become overwhelming. Large models — "frontier models" — have shown such extraordinary capabilities with recent advances in areas as diverse as hypothesis generation, mathematics, biological design, and complex multiphysics simulations. The potential for transformative impact is too significant to ignore.
"He no longer views the technology as just a tool, but as a fundamental shift in how scientists approach problems and make discoveries," the article concludes.
"The global race humanity is now in... is about how to harness the technology's potential while mitigating its harms."
Thanks to Slashdot reader rabbitface25 — also a Los Alamos Lab science writer — for sharing his article.
Read more of this story at Slashdot.
Categories: Technology
Fiverr Ad Mocks Vibe Coding - with a Singing Overripe Avocado
It's a cultural milestone. Fiverr just released an ad mocking vibe coding.
The video features what its description calls a "clueless entrepreneur" building an app to tell if an avocado is ripe — who soon ends up blissfully singing with an avocado to the tune of the cheesy 1987 song "Nothing's Gonna Stop Us Now." The avocado sings joyously of "a new app on the rise in a no-code world that's too good to be true" (rhyming that with "So close. Just not tested through...")
"Let them say we're crazy. I don't care about bugs!" the entrepreneur sings back. "Built you in a minute, now I'm so high off this buzz..."
But despite her singing to the overripe avocado that "I don't need a backend if I've got the spark!" and that they can "build this app together, vibe-coding forever. Nothing's going to stop us now!" — the build suddenly fails. (And it turns out that avocado really was overripe...) Fiverr then suggests viewers instead hire one of their experts for building their apps...
The art/design site Creative Bloq acknowledges Fiverr "flip-flopping between scepticism and pro-AI marketing." (They point out a Fiverr ad last November had ended with the tagline "Nobody cares that you use AI! They care about the results — for the best ones, hire Fiverr experts who've mastered every digital skill, including AI.") But the site calls this new ad "a step in the right direction towards mindful AI usage."
Just like an avocado that looks perfect on the outside, once you inspect the insides, AI-generated code can be deceptively unripe.
Fiverr might be feeling the impact of vibe coding themselves. The freelancing site's share price fell over 14% this week, with one Yahoo Finance article saying this week's quarterly results revealed Fiverr's active buyers dropped 10.9% compared to last year, to 3.4 million buyers, a decrease which "overshadowed a 9.8% increase in spending per buyer."
Even when issuing a buy recommendation, Seeking Alpha called it "a short-term rebound play, as the company faces longer-term risks from AI and active buyer churn."
Would AI Perform Better If We Simulated Guilt?
Remember, it's all synthesized "anthropomorphizing". But with that caveat, Science News reports:
In populations of simple software agents (like characters in "The Sims" but much, much simpler), having "guilt" can be a stable strategy that benefits them and increases cooperation, researchers report July 30 in Journal of the Royal Society Interface... When we harm someone, we often feel compelled to pay a penance, perhaps as a signal to others that we won't offend again. This drive for self-punishment can be called guilt, and it's how the researchers programmed it into their agents. The question was whether those that had it would be outcompeted by those that didn't, say Theodor Cimpeanu, a computer scientist at the University of Stirling in Scotland, and colleagues.
Science News spoke to a game-theory lecturer from Australia who points out it's hard to map simulations to real-world situations — and that they end up embodying many assumptions. Here researchers were simulating The Prisoner's Dilemma, programming one AI agent that "felt guilt (lost points) only if it received information that its partner was also paying a guilt price after defecting." And that turned out to be the most successful strategy.
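The setup described above can be sketched as a toy iterated Prisoner's Dilemma in Python. This is a simplified illustration, not the paper's actual model: the payoff values, the guilt cost, and the "atone by cooperating after a guilty defection" rule are all assumptions made for the example.

```python
import random

# Standard Prisoner's Dilemma payoffs for the row player:
# mutual cooperation beats mutual defection, but unilateral defection pays best.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
GUILT_COST = 2   # points a guilt-prone agent deducts from itself after defecting
ROUNDS = 50

class Agent:
    def __init__(self, guilty, social):
        self.guilty = guilty  # pays a self-punishment penalty after defecting
        self.social = social  # only pays it if the partner also pays guilt costs
        self.score = 0
        self.last_move = "C"

    def move(self, p_defect=0.3):
        # After a guilty defection, the agent "atones" by cooperating once.
        if self.guilty and self.last_move == "D":
            return "C"
        return "D" if random.random() < p_defect else "C"

def play(a, b):
    """Run one iterated match, applying guilt penalties after defections."""
    for _ in range(ROUNDS):
        ma, mb = a.move(), b.move()
        a.score += PAYOFF[(ma, mb)]
        b.score += PAYOFF[(mb, ma)]
        # "Social" guilt: pay the penalty only if the partner is also guilt-prone,
        # mirroring the strategy the article describes as most successful.
        if ma == "D" and a.guilty and (not a.social or b.guilty):
            a.score -= GUILT_COST
        if mb == "D" and b.guilty and (not b.social or a.guilty):
            b.score -= GUILT_COST
        a.last_move, b.last_move = ma, mb
```

Pairing agents with different `guilty`/`social` settings and comparing scores across many matches gives a crude sense of when self-punishment is evolutionarily stable, which is the question the researchers studied at much larger scale.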
One of the paper's authors then raises the possibility that an evolving population of AIs could come to prefer cold logic to human warmth.
Thanks to Slashdot reader silverjacket for sharing the article.
Despite Breach and Lawsuits, Tea Dating App Surges in Popularity
The women-only app Tea now "faces two class action lawsuits filed in California" in response to a recent breach, reports NPR — even as the company boasts it has more than 6.2 million users.
A spokesperson for Tea told the CBC it's "working to identify any users whose personal information was involved" in a breach of 72,000 images (including 13,000 verification photos and images of government IDs) and a later breach of 1.1 million private messages. Tea said they will be offering those users "free identity protection services."
The company said it removed the ID requirement in 2023, but data that was stored before February 2024, when Tea migrated to a more secure system, was accessed in the breach... [Several sites have pointed out Tea's current privacy policy is telling users selfies are "deleted immediately."]
Tea was reportedly intended to launch in Canada on Friday, according to information previously posted on the App Store, but as of this week the launch date is now in February 2026. Tea didn't respond to CBC's questions about the apparent delay. Yet even amid the current turmoil, Tea's waitlist has ballooned to 1.5 million women, all eager to join, the company posted on Wednesday. A day later, Tea posted in its Instagram stories that it had approved "well over" 800,000 women into the app that day alone.
So, why is it so popular, despite the drama and risks?
Tea tapped into a perceived weakness of other dating apps, according to an associate health studies professor at Ontario's Western University interviewed by the CBC, who thinks users should avoid Tea, at least until its security is restored.
Tech blogger John Gruber called the incident "yet another data point for the argument that any 'private messaging' feature that doesn't use E2EE isn't actually private at all." (And later Gruber notes Tea's apparent absence at the top of the charts in Google's Play Store. "I strongly suspect that, although Google hasn't removed Tea from the Play Store, they've delisted it from discovery other than by searching for it by name or following a direct link to its listing.")
Besides anonymous discussions about specific men, Tea also allows its users to perform background and criminal record checks, according to NPR, as well as reverse image searches. But the recent breach, besides threatening the safety of its users, also "laid bare the anonymous, one-sided accusations against the men in their dating pools." The CBC points out there's a men's rights group on Reddit now urging civil lawsuits against Tea as part of a plan to get the app shut down. And "Cleveland lawyer Aaron Minc, who specializes in cases involving online defamation and harassment, told The Associated Press that his firm has received hundreds of calls from people upset about what's been posted about them on Tea."
Yet in response to Tea's latest Instagram post, "The comments were almost entirely from people asking Tea to approve them, so they could join the app."
Four Radioactive Wasp Nests Found Near US Nuclear Storage Site
The Washington Post reports:
In early July, a wasp nest with a radiation level 10 times what is allowed by federal regulations was found inside the grounds of a sprawling Cold War-era nuclear site in South Carolina that today partly serves as a storage area for radioactive liquid waste. Federal officials said Friday that at least three more contaminated wasp nests were found within the 310-square-mile Savannah River Site, which encompasses an area more than four times the size of the District of Columbia...
[F]ederal authorities said that the discoveries were not cause for alarm and experts noted that the discovery of radioactivity in wildlife near nuclear facilities did not necessarily indicate the likelihood of a major leak... In a statement sent to reporters, Edwin Deshong, manager of the Savannah River Site's Office of Environmental Management, said the wasp nests had "very low levels of radioactive contamination" and did not pose health risks to the site's workers, nearby residents or the environment... The Savannah River Site's 43 active underground waste tanks have more than 34 million gallons of radioactive liquid waste. The oldest tanks have previously "developed small hairline cracks" that led to small-volume leaks, the Savannah River Site says on its website.
A July report after the first nest was found said there was "no impact" from the contaminated nest, the Post reports, with the nest's high radioactivity level due to "on-site legacy radioactive contamination" rather than "a loss of contamination control."
More from the Associated Press:
The tank farm is well inside the boundaries of the site and wasps generally fly just a few hundred yards from their nests, so there is no danger they are outside the facility, according to a statement from Savannah River Mission Completion, which now oversees the site. If any wasps had been found, they would have had significantly lower levels of radiation than their nests, according to the statement, which was given to the Aiken Standard.
Thanks to long-time Slashdot reader sandbagger for sharing the news.
AI Tools Gave False Information About Tsunami Advisories
After an 8.8 earthquake off the coast of Russia, "weather authorities leapt into action," reports SFGate, by modeling the threat of a tsunami "and releasing warnings and advisories to prepare their communities..."
But some residents of Hawaii, Japan and North America's West Coast turned to AI tools for updates that "appear to have badly bungled the critical task at hand." Google's "AI Overview," for example, reportedly gave "inaccurate information about authorities' safety warnings in Hawaii and elsewhere," according to reports on social media.
Thankfully, the tsunami danger quickly subsided on Tuesday night and Wednesday morning without major damage. Still, the issues speak to the growing role of AI tools in people's information diets... and to the tools' potentially dangerous fallibility... A critic of Google — who prompted the search tool to show an AI overview by adding "+ai" to their search — called the text that showed up "dangerously wrong."
Responding to similar complaints, Grok told one user on X.com "We'll improve accuracy."
Satellites, Drones, and AI: the New 'High-Tech Quest to Fight Wildfires'
There's now an "influx" of startups fighting wildfires, reports the Washington Post.
"More than 100 new wildfire-related technologies have launched in the U.S. and around the world since 2023, according to Lori Moore-Merrell, who served as U.S. fire administrator during the Biden administration... Unmanned lookout poles that use AI to sense smoke have been erected in the West. Swarms of military-grade drones are increasingly used for wildfire detection and management. AI technology also tracks lightning strikes, which can ignite wildfires..."
As America contends with what is already a punishing year of wildfires across massive swaths of the country, new, extremely precise satellite images are being beamed from space by the initiative FireSat. In March, a satellite outfitted with infrared sensors was launched more than 370 miles into space with the sole task of detecting and monitoring fires. With the ability to loop millions of miles around the planet each day, it found active fires and burn scars using bands of infrared light, demonstrating technology that the project's leaders and its early adopters said could be integral to filling technological gaps in the way they fight burns.
The satellite initiative was launched by a nonprofit coalition called Earth Fire Alliance (EFA). Its partners include Muon Space, which is developing the satellites; Google, which is using AI to help filter through the images; the Gordon and Betty Moore Foundation; and the Environmental Defense Fund. The goal is to have 50 satellites in orbit by 2030 to capture the entire world. At full capacity, the constellation is aiming to sweep the entire Earth every 20 minutes to detect small fires. By spring or summer of next year, it plans to launch three more satellites into space that will coordinate with agencies in states including California and Colorado to help them detect and fight fire.
New Steam on Linux Market Share Stats 'Likely the Largest Surveyed Figure Ever'
"The July 2025 results of the Steam Survey were posted a few minutes ago," Phoronix reported last night, "and show a healthy 0.32% increase to put the Linux gaming marketshare at 2.89%."
That's a recent high in percentage terms and while Steam saw around 3% in the early days of Steam on Linux a decade ago, in absolute terms this is likely the largest surveyed figure ever for the Linux gaming population.
Linux was at 2.89% for July while macOS was at 1.88% and Windows at 95.23%.
There does seem to be a jagged line that's trending upward...
November: 2.03%
December: 2.29%
January: 2.06%
February: 1.45%
March: 2.33%
April: 2.27%
May: 2.69%
June: 2.57%
July: 2.89%
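For readers who want to quantify that jagged upward trend, a least-squares slope over the nine surveyed months (assumed here to run from November 2024 through July 2025) comes out positive:

```python
# Steam-survey Linux market share by month, November through July (from the list above).
shares = [2.03, 2.29, 2.06, 1.45, 2.33, 2.27, 2.69, 2.57, 2.89]

def ols_slope(ys):
    """Least-squares slope of ys against their 0-based month index."""
    n = len(ys)
    mean_x = (n - 1) / 2
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(ys))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

slope = ols_slope(shares)  # roughly +0.11 percentage points per month
```

A slope of about +0.11 points per month is small but consistent with the surveyed share drifting up despite the noisy month-to-month swings.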
Early Universe's 'Little Red Dots' May Be Black Hole Stars
After it began "peering into the distant universe" in 2022, NASA's James Webb Space Telescope "has discovered a rash of 'little red dots'," reports Science magazine. There's "hundreds of them, shining within the first billion years of the 13.8-billion-year-old universe, so small and red that they defied conventional explanation."
"Only in the past few months has a picture begun to emerge. The little red dots, astronomers say, may be an entirely new type of object: a colossal ball of bright, hot gas, larger than the Solar System, powered not by nuclear fusion, but by a black hole..."
The objects, which some astronomers are calling "black hole stars," could be a missing link in the evolution of galaxies and help explain the rapid growth of supermassive black holes that lie at their hearts. "The big breakthrough of the past 6 months is actually the realization that we can throw out all these other models we've been playing with before," says astronomer Anna de Graaff of the Max Planck Institute for Astronomy... JWST couldn't resolve the dots into a recognizable shape, which meant they must have been tiny — less than 2% of the diameter of the Milky Way. "It was a mystery ... as to why they were so spatially compact," says Caitlin Casey of the University of Texas at Austin. An impossibly dense packing of stars would be needed to explain their brightness. "I was excited," Casey says...
For Mitch Begelman, a theoretical astrophysicist at the University of Colorado Boulder, the observations are a vindication. Earlier this month, he and a colleague posted a preprint on arXiv reviving a scenario for the formation of hypothetical "quasi-stars" that he and others had proposed 20 years ago. The first generation of stars, they calculated, could have grown to colossal size in the early universe, which was made up almost entirely of hydrogen, the raw material of stars. When a giant star ran out of fuel, they said, its core would have collapsed into a black hole, but the outer envelope of hydrogen was so dense it survived the blast, enclosing the newborn black hole. As the black hole chewed at its shroud of gas, the entire system glowed as a quasi-star larger than the Solar System. "That's what the quasi-star envelope is doing, it's force-feeding the black hole by pushing matter into it," Begelman says.
Given how common little red dots appear to be in the early universe, theorists are beginning to wonder whether this giant-ball-of-gas phase is an essential part of black hole growth and the evolution of galaxies. "We're probably looking at kind of a new phase of black hole growth that we didn't know about before," de Graaff says.
"If the red dots do turn out to be black hole stars, it will be precisely the sort of breakthrough expected from JWST — and the kind of discovery astronomers live for."
Thanks to Slashdot reader sciencehabit for sharing the news.
Facing US Chip Restrictions, China Pitches Global Cooperation on AI
In Shanghai at the World Artificial Intelligence Conference (which ran until Tuesday), the Chinese government "announced an international organization for AI regulation and a 13-point action plan aimed at fostering global cooperation to ensure the technology's beneficial and responsible development," reports the Washington Post.
The theme of the conference was "Global Solidarity in the AI Era," the article notes, and "the expo is one part of Beijing's bid to establish itself as a responsible AI leader for the international community."
CNN points out that China's announcement comes "just days after the United States unveiled its own plan to promote U.S. dominance."
Chinese Premier Li Qiang unveiled China's vision for future AI oversight at the World AI Conference, an annual gathering in Shanghai of tech titans from more than 40 countries... While Li did not directly refer to the U.S. in his speech, he alluded to the ongoing trade tensions between the two superpowers, which include American restrictions on advanced semiconductor exports — a component vital for powering and training AI, which is currently causing a shortage in China. "Key resources and capabilities are concentrated in a few countries and a few enterprises," said Li in his speech on Saturday. "If we engage in technological monopoly, controls and restrictions, AI will become an exclusive game for a small number of countries and enterprises...."
Secretary-General of the Association of Southeast Asian Nations, Dr. Kao Kim Hourn, also called for "robust governance" of artificial intelligence to mitigate potential threats, including misinformation, deepfakes, and cybersecurity threats... Former Google CEO Eric Schmidt reiterated the call for international collaboration, explicitly calling on the U.S. and China to work together... "We have a vested interest to keep the world stable, keep the world not at war, to keep things peaceful, to make sure we have human control of these tools."
China's plan "called for establishing an international open-source community," reports the Wall Street Journal, "through which AI models can be freely deployed and improved by users." Industry participants said that plan "showed China's ambition to set global standards for AI and could undermine the U.S., whose leading models aren't open-source... While the world's best large language model is still American, the best model that everyone can use free is now Chinese."
"The U.S. should commit to ensuring that powerful models remain openly available," argues an opinion piece in The Hill by Stability AI's former head of public policy.
Ubiquity is a matter of national security: retreating behind paywalls will leave a vacuum filled by strategic adversaries. Washington should treat open technology not as a vector for Chinese Communist Party propaganda but as a vessel to transmit U.S. influence abroad, molding the global ecosystem around U.S. industry. If DeepSeek is China's open-source "Sputnik moment," we need a legislative environment that supports — not criminalizes — an American open-source Moon landing.
For Sale: a 1990 Airstream Trailer/NASA Command Vehicle for Space Shuttle Landings
The vehicle "once led the Space Shuttle down the runway at Edwards Air Force Base," The Drive reported in 2022, noting it was won in an auction for $21,061 (beating 18 other bidders). "I just figured the NASA brand combined with Airstream hip seemed like a can't-lose combination," the buyer says now, in a listing for the vehicle on the automotive sales site Hemmings.com asking $199,000.
They're touting it as a priceless marketing/publicity prop, "a once in a lifetime opportunity" to own what was once an "onsite command center complete with communications and atmospheric monitoring... Imagine pulling into Burning Man driving this..." The seller points out it's the only custom-built "Airstream" trailer ever sold by NASA. (The others were crushed, except for one donated to the Kennedy museum.) But for this one, "Apparently there was some miscommunication when the vehicle was decommissioned. It should have been offered to museums but the sales team did not know what it was."
"Has only 8240 miles on it as driven from Ohio to California then around the Edwards base."
The seller apparently first tried listing it on eBay in May for $50,000. ("Reserve not met," says that listing page now. "Very well maintained, minor dings on exterior...")
Thanks to long-time Slashdot reader schwit1 for sharing the news.
Top AI Salaries Dwarf Those of the Manhattan Project and the Space Race
An anonymous reader quotes a report from Ars Technica: Silicon Valley's AI talent war just reached a compensation milestone that makes even the most legendary scientific achievements of the past look financially modest. When Meta recently offered AI researcher Matt Deitke $250 million over four years (an average of $62.5 million per year) -- with potentially $100 million in the first year alone -- it shattered every historical precedent for scientific and technical compensation we can find on record. [Meta CEO Mark Zuckerberg reportedly also offered an unnamed AI engineer $1 billion in compensation to be paid out over several years.] That includes salaries during the development of major scientific milestones of the 20th century. [...]
To put these salaries in a historical perspective: J. Robert Oppenheimer, who led the Manhattan Project that ended World War II, earned approximately $10,000 per year in 1943. Adjusted for inflation using the US Government's CPI Inflation Calculator, that's about $190,865 in today's dollars -- roughly what a senior software engineer makes today. The 24-year-old Deitke, who recently dropped out of a PhD program, will earn approximately 327 times what Oppenheimer made while developing the atomic bomb. [...] The Apollo program offers another striking comparison. Neil Armstrong, the first human to walk on the moon, earned about $27,000 annually -- roughly $244,639 in today's money. His crewmates Buzz Aldrin and Michael Collins made even less, earning the equivalent of $168,737 and $155,373, respectively, in today's dollars. Current NASA astronauts earn between $104,898 and $161,141 per year. Meta's AI researcher will make more in three days than Armstrong made in a year for taking "one giant leap for mankind." The report notes that the sums being offered to some of these AI researchers top even the most popular sports athletes. "The New York Times noted that Steph Curry's most recent four-year contract with the Golden State Warriors was $35 million less than Deitke's Meta deal (although soccer superstar Cristiano Ronaldo will make $275 million this year as the highest-paid professional athlete in the world)," reports Ars.
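The inflation adjustments behind these comparisons are just CPI ratios. A rough recomputation in Python illustrates the arithmetic; the CPI index values below are approximate annual averages chosen for illustration, so the results differ slightly from the article's calculator figures:

```python
# Approximate annual-average CPI index values (assumptions for illustration only).
CPI = {1943: 17.3, 1969: 36.7, 2025: 320.6}

def to_today(amount, year, today=2025):
    """Convert a historical dollar amount to today's dollars via a CPI ratio."""
    return amount * CPI[today] / CPI[year]

oppenheimer = to_today(10_000, 1943)   # ~$185,000, vs. the article's $190,865
armstrong = to_today(27_000, 1969)     # ~$236,000, vs. the article's $244,639
ratio = 62_500_000 / oppenheimer       # Deitke's average year vs. Oppenheimer's
```

With these index values the ratio lands near 337, in the same ballpark as the article's "327 times"; the gap comes entirely from which CPI series and reference month the calculator uses.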
Researchers Map Where Solar Energy Delivers the Biggest Climate Payoff
A Rutgers-led study using advanced computational modeling reveals that expanding solar power by just 15% could reduce U.S. carbon emissions by over 8.5 million metric tons annually, with the greatest benefits concentrated in specific regions like California, Texas, and the Southwest. The study has been published in Science Advances. From the report: The study quantified both immediate and delayed emissions reductions resulting from added solar generation. For example, the researchers found that in California, a 15% increase in solar power at noon was associated with a reduction of 147.18 metric tons of CO2 in the region in the first hour and 16.08 metric tons eight hours later.
The researchers said their methods provide a more nuanced understanding of system-level impacts from solar expansion than previous studies, pinpointing where the benefits of increased solar energy adoption could best be realized. In some areas, such as California, Florida, the mid-Atlantic, the Midwest, Texas and the Southwest, small increases in solar were estimated to deliver large CO2 reductions, while in others, such as New England, the central U.S., and Tennessee, impacts were found to be minimal -- even at much larger increases in solar generation.
In addition, the researchers said their study demonstrates the significant spillover effects solar adoption has on neighboring regions, highlighting the value of coordinated clean energy efforts. For example, a 15% increase in solar capacity in California was associated with a reduction of 913 and 1,942 metric tons of CO2 emissions per day in the northwest and southwest regions, respectively. "It was rewarding to see how advanced computational modeling can uncover not just the immediate, but also the delayed and far-reaching spillover effects of solar energy adoption," said the lead author Arpita Biswas, an assistant professor with the Department of Computer Science at the Rutgers School of Arts and Sciences. "From a computer science perspective, this study demonstrates the power of harnessing large-scale, high-resolution energy data to generate actionable insights. For policymakers and investors, it offers a roadmap for targeting solar investments where emissions reductions are most impactful and where solar energy infrastructure can yield the highest returns."
Lying Increases Trust In Science, Study Finds
A new paper from Bangor University outlines the "bizarre phenomenon" known as the transparency paradox: transparency is needed to foster public trust in science, but being transparent about science, medicine and government can also reduce trust. The paper argues that while openness in science is intended to build trust, it can backfire when it reveals uncomfortable truths. Philosopher Byron Hyde, the study's author, suggests that public trust could be improved not by sugarcoating reality, but by educating people to expect imperfection and understand how science actually works. Phys.org reports: The study revealed that, while transparency about good news increases trust, transparency about bad news, such as conflicts of interest or failed experiments, decreases it. One possible solution to the paradox, then, and a way to increase public trust, is simply to lie, for example by hiding bad news so there is only ever good news to report — a solution Hyde points out is unethical and ultimately unsustainable.
Instead, he suggests that a better way forward would be to tackle the root cause of the problem, which he argues is the public overidealising science. People still overwhelmingly believe in the 'storybook image' of a scientist who makes no mistakes, which creates unrealistic expectations. Hyde is calling for a renewed effort to teach the public about scientific norms, which would be done through science education and communication to eliminate the "naive" view of science as infallible. "... most people know that global temperatures are rising, but very few people know how we know that," says Hyde. "Not enough people know that science 'infers to the best explanation' and doesn't definitively 'prove' anything. Too many people think that scientists should be free from biases or conflicts of interest when, in fact, neither of these are possible. If we want the public to trust science to the extent that it's trustworthy, we need to make sure they understand it first."
The study has been published in the journal Theory and Society.
Anthropic Revokes OpenAI's Access To Claude Over Terms of Service Violation
An anonymous reader quotes a report from Wired: Anthropic revoked OpenAI's API access to its models on Tuesday, multiple sources familiar with the matter tell WIRED. OpenAI was informed that its access was cut off due to violating the terms of service. "Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5," Anthropic spokesperson Christopher Nulty said in a statement to WIRED. "Unfortunately, this is a direct violation of our terms of service." According to Anthropic's commercial terms of service, customers are barred from using the service to "build a competing product or service, including to train competing AI models" or "reverse engineer or duplicate" the services. This change in OpenAI's access to Claude comes as the ChatGPT-maker is reportedly preparing to release a new AI model, GPT-5, which is rumored to be better at coding.
OpenAI was plugging Claude into its own internal tools using special developer access (APIs), instead of using the regular chat interface, according to sources. This allowed the company to run tests to evaluate Claude's capabilities in things like coding and creative writing against its own AI models, and check how Claude responded to safety-related prompts involving categories like CSAM, self-harm, and defamation, the sources say. The results help OpenAI compare its own models' behavior under similar conditions and make adjustments as needed. "It's industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic's decision to cut off our API access, it's disappointing considering our API remains available to them," OpenAI's chief communications officer Hannah Wong said in a statement to WIRED. Nulty says that Anthropic will "continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry."
Read more of this story at Slashdot.
Categories: Technology
Peak Energy Ships America's First Grid-Scale Sodium-Ion Battery
Longtime Slashdot reader AmiMoJo shares a report from Electrek: Peak Energy shipped out its first sodium-ion battery energy storage system, and the New York-based company says it's achieved a first in three ways: the US's first grid-scale sodium-ion battery storage system; the largest sodium iron phosphate pyrophosphate (NFPP) battery system in the world; and the first megawatt-hour scale battery to run entirely on passive cooling -- no fans, pumps, or vents. That's significant because removing moving parts and ditching active cooling systems eliminates fire risk.
According to the Electric Power Research Institute, 89% of battery fires in the US trace back to thermal management issues. Peak's design doesn't have those issues because it doesn't have those systems. Instead, the 3.5 MWh system uses a patent-pending passive cooling architecture that's simpler, more reliable, and cheaper to run and maintain. The company says its technology slashes auxiliary power needs by up to 90%, saves about $1 million annually per gigawatt hour of storage, and cuts battery degradation by 33% over a 20-year lifespan. [...]
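As a quick sanity check on those figures, a back-of-envelope calculation shows what the claimed savings imply about the baseline cost of active cooling. The only inputs are the two numbers from the article; the "implied baseline" is derived, not reported.

```python
# Back-of-envelope check of the claimed cooling savings (illustrative).
# Inputs are the article's figures; the baseline is derived from them.

claimed_annual_savings_per_gwh = 1_000_000  # dollars per GWh per year (article)
aux_reduction = 0.90                        # 90% cut in auxiliary power (article)

# If passive cooling removes 90% of the auxiliary burden and that is
# worth ~$1M/GWh/yr, the implied baseline auxiliary cost is ~$1.11M/GWh/yr.
baseline_aux_cost = claimed_annual_savings_per_gwh / aux_reduction
print(f"Implied baseline auxiliary cost: ${baseline_aux_cost:,.0f}/GWh/yr")
```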
Peak is working with nine utility and independent power producer (IPP) customers on a shared pilot this summer. That deployment unlocks nearly 1 GWh of future commercial contracts now under negotiation. The company plans to ship hundreds of megawatt hours of its new system over the next two years, and it's building its first US cell factory, which is set to start production in 2026.
Aurora's Self-Driving Trucks Are Now Driving At Night
Aurora Innovation has expanded its autonomous trucking operations with nighttime driverless runs between Dallas and Houston and a new Phoenix terminal. "Efficiency, uptime, and reliability are important for our customers, and Aurora is showing we can deliver," said Chris Urmson, co-founder and CEO of Aurora, in a press release. "Just three months after launch, we're running driverless operations day and night and we've expanded our terminal network to Phoenix. Our rapid progress is beginning to unlock the full value of self-driving trucks for our customers, which has the potential to transform the trillion-dollar trucking industry." FreightWaves reports: The expansion allows for continuous utilization, shortening delivery times and serving as part of its path to autonomous trucking profitability. Aurora notes that the unlocking of nighttime autonomous operations can also improve road safety. It cited a 2021 Federal Motor Carrier Safety Administration report on large truck and bus crashes that noted a disproportionate 37% of fatal crashes involving large trucks occurred at night. This comes despite trucks traveling fewer miles during those hours.
Aurora's SAE L4 autonomous driving system, called the Aurora Driver, can detect objects in the dark more than 450 meters away via its proprietary, long-range FirstLight Lidar. The lidar can identify pedestrians, vehicles, and debris up to 11 seconds sooner than a traditional driver, according to the company. In addition to the fleet and operations expansion, the new terminal in Phoenix, which opened in June, is part of an infrastructure-light approach. Aurora notes this design will closely resemble how the company plans to integrate with future customer endpoints, optimized for speed to market.
This expansion of the more than 15-hour Fort Worth to Phoenix route opens up opportunities to showcase the autonomous truck's ability to cut transit time in half compared to a single driver, who is limited to 11 hours of driving under federal hours-of-service rules. Aurora is piloting the autonomous trucking Phoenix lane with two customers, Hirschbach and Werner.
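The transit-time claim follows from hours-of-service arithmetic. The sketch below assumes the standard FMCSA limits for a solo driver (11 hours of driving, then a 10-hour off-duty period); the route time is the article's figure, and the timings are rough illustrations rather than Aurora's operational numbers.

```python
# Rough transit-time comparison for a ~15-hour driving route
# (illustrative; assumes the standard FMCSA 11-hour driving limit
# and 10-hour off-duty period for a single human driver).

route_hours = 15      # Fort Worth to Phoenix driving time (per article)
driving_limit = 11    # max driving hours before a mandatory rest
rest_hours = 10       # required off-duty period

# Single driver: drive 11 h, rest 10 h, then finish the remaining 4 h.
human_transit = driving_limit + rest_hours + (route_hours - driving_limit)

# Autonomous truck: no rest stop required.
autonomous_transit = route_hours

print(human_transit, autonomous_transit)  # 25 15
```

On this rough accounting a solo driver needs about 25 elapsed hours against 15 for a truck that never stops, which is where the "roughly half" framing comes from.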
Skipping Over-The-Air Car Updates Could Be Costly
Longtime Slashdot reader Mr_Blank shares a report from Autoblog: Once a new OTA update becomes available, owners of GM vehicles have 45 days to install it. After that window, the company will not cover any damages or issues caused by ignoring the update. "Damage resulting from failure to install over-the-air software updates is not covered," states the warranty booklet for 2025 and 2026 models.
This same rule applies to all of GM's brands in the US: Chevrolet, Buick, Cadillac, and GMC. However, if the software update itself causes any component damage, that will be covered by the warranty. Owners coming from older GM vehicles will have to adapt as the company continues to implement its Global B electronic architecture on newer models, which relies heavily on OTA updates. Similar policies appear in Tesla's owner's manuals. Software-defined vehicles are here to stay, even if some of them have far more tech glitches than they should -- just ask Volvo.
A Luggage Service's Web Bugs Exposed the Travel Plans of Every User
An anonymous reader quotes a report from Wired: An airline leaving all of its passengers' travel records vulnerable to hackers would make an attractive target for espionage. Less obvious, but perhaps even more useful for those spies, would be access to a premium travel service that spans 10 different airlines, has left its own detailed flight information accessible to data thieves, and seems to be favored by international diplomats. That's what one team of cybersecurity researchers found in the form of Airportr, a UK-based luggage service that partners with airlines to let its largely UK- and Europe-based users pay to have their bags picked up, checked, and delivered to their destination. Researchers at the firm CyberX9 found that simple bugs in Airportr's website allowed them to access virtually all of those users' personal information, including travel plans, or even gain administrator privileges that would have allowed a hacker to redirect or steal luggage in transit. Among even the small sample of user data that the researchers reviewed and shared with WIRED they found what appear to be the personal information and travel records of multiple government officials and diplomats from the UK, Switzerland, and the US.
Airportr's CEO Randel Darby confirmed CyberX9's findings in a written statement provided to WIRED but noted that Airportr had disabled the vulnerable part of its site's backend very shortly after the researchers made the company aware of the issues last April and fixed the problems within a few days. "The data was accessed solely by the ethical hackers for the purpose of recommending improvements to Airportr's security, and our prompt response and mitigation ensured no further risk," Darby wrote in a statement. "We take our responsibilities to protect customer data very seriously." CyberX9's researchers, for their part, counter that the simplicity of the vulnerabilities they found means there's no guarantee other hackers didn't access Airportr's data first. They found that a relatively basic web vulnerability allowed them to change the password of any user to gain access to their account if they had just the user's email address -- and they were also able to brute-force guess email addresses with no rate limitations on the site. As a result, they could access data including all customers' names, phone numbers, home addresses, detailed travel plans and history, airline tickets, boarding passes and flight details, passport images, and signatures.
By gaining access to an administrator account, CyberX9's researchers say, a hacker could also have used the vulnerabilities they found to redirect luggage, steal luggage, or even cancel flights on airline websites by using Airportr's data to gain access to customer accounts on those sites. The researchers say they could also have used their access to send emails and text messages as Airportr, a potential phishing risk. Airportr tells WIRED that it has 92,000 users and claims on its website that it has handled more than 800,000 bags for customers. [...] The researchers found that they could monitor their browser's communications as they signed up for Airportr and created a new password, and then reuse an API key intercepted from those communications to instead change another user's password to anything they chose. The site also lacked a "rate limiting" security measure that would prevent automated guesses of email addresses to rapidly change the password of every user's account. And the researchers were also able to find email addresses of Airportr administrators that allowed them to take over their accounts and gain their privileges over the company's data and operations. "Anyone would have been able to gain or might have gained absolute super-admin access to all the operations and data of this company," says Himanshu Pathak, CyberX9's founder and CEO. "The vulnerabilities resulted in complete confidential private information exposure of all airline customers in all countries who used the service of this company, including full control over all the bookings and baggage. Because once you are the super-admin of their most sensitive systems, you have the ability to do anything."
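The two flaws the researchers describe -- a reset credential not bound to the account it was issued for, and no rate limiting on repeated attempts -- can be sketched in a few lines. This is a generic illustration of the fixes those findings imply, not Airportr's actual code; the class name, limits, and accounts are invented.

```python
import time
from collections import defaultdict

# Sketch of the two mitigations the reported flaws call for:
# (1) bind each reset token to exactly one account, and
# (2) rate-limit reset attempts per client.
# All names and limits here are invented for illustration.

class PasswordResetService:
    def __init__(self, max_attempts=5, window_seconds=60):
        self.tokens = {}                   # token -> email it was issued for
        self.attempts = defaultdict(list)  # client_ip -> attempt timestamps
        self.max_attempts = max_attempts
        self.window = window_seconds

    def issue_token(self, email, token):
        self.tokens[token] = email

    def _rate_limited(self, client_ip, now):
        recent = [t for t in self.attempts[client_ip] if now - t < self.window]
        self.attempts[client_ip] = recent
        return len(recent) >= self.max_attempts

    def reset_password(self, token, email, new_password, client_ip, now=None):
        now = time.time() if now is None else now
        if self._rate_limited(client_ip, now):
            return "rate_limited"
        self.attempts[client_ip].append(now)
        # The vulnerable site effectively skipped this check: a token
        # intercepted during one signup could reset any account.
        # Binding token -> email closes that hole.
        if self.tokens.get(token) != email:
            return "forbidden"
        return "ok"

svc = PasswordResetService()
svc.issue_token("alice@example.com", "tok123")
# A token issued for alice cannot reset bob's account.
print(svc.reset_password("tok123", "bob@example.com", "x", "1.2.3.4", now=0))
```

The rate limiter also blunts the brute-force angle: without it, an attacker who can guess email addresses freely can walk the entire user base, which is exactly what the researchers warned was possible.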
Palantir Lands $10 Billion Army Software and Data Contract
Palantir has secured a massive $10 billion contract with the U.S. Army to unify 75 contracts into a single AI-focused enterprise framework, streamlining procurement and enhancing military readiness. CNBC reports: The agreement creates a "comprehensive framework for the Army's future software and data needs" that provides the government with purchasing flexibility and removes contract-related fees and procurement timelines, according to a release. Palantir co-founder and CEO Alex Karp has been a vocal proponent of protecting U.S. interests and joining forces on AI to fend off adversaries.
Earlier this year, Palantir delivered the first two AI-powered systems under its $178 million contract with the U.S. Army. In May, the Department of Defense boosted its Maven Smart System contract by $795 million to beef up AI capabilities.