News

Researchers Map Where Solar Energy Delivers the Biggest Climate Payoff

Slashdot - 2 August, 2025 - 20:00
A Rutgers-led study using advanced computational modeling reveals that expanding solar power by just 15% could reduce U.S. carbon emissions by over 8.5 million metric tons annually, with the greatest benefits concentrated in specific regions like California, Texas, and the Southwest. The study has been published in Science Advances. From the report: The study quantified both immediate and delayed emissions reductions resulting from added solar generation. For example, the researchers found that in California, a 15% increase in solar power at noon was associated with a reduction of 147.18 metric tons of CO2 in the region in the first hour and 16.08 metric tons eight hours later. The researchers said their methods provide a more nuanced understanding of system-level impacts from solar expansion than previous studies, pinpointing where the benefits of increased solar energy adoption could best be realized. In some areas, such as California, Florida, the mid-Atlantic, the Midwest, Texas, and the Southwest, small increases in solar were estimated to deliver large CO2 reductions, while in others, such as New England, the central U.S., and Tennessee, impacts were found to be minimal, even at much larger increases in solar generation. In addition, the researchers said their study demonstrates the significant spillover effects solar adoption has on neighboring regions, highlighting the value of coordinated clean energy efforts. For example, a 15% increase in solar capacity in California was associated with reductions of 913 and 1,942 metric tons of CO2 emissions per day in the Northwest and Southwest regions, respectively. "It was rewarding to see how advanced computational modeling can uncover not just the immediate, but also the delayed and far-reaching spillover effects of solar energy adoption," said lead author Arpita Biswas, an assistant professor in the Department of Computer Science at the Rutgers School of Arts and Sciences. "From a computer science perspective, this study demonstrates the power of harnessing large-scale, high-resolution energy data to generate actionable insights. For policymakers and investors, it offers a roadmap for targeting solar investments where emissions reductions are most impactful and where solar energy infrastructure can yield the highest returns."
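
The summary doesn't spell out the paper's method, but the lagged associations it reports (a large effect in the first hour, a smaller one eight hours later) are the kind of estimate a distributed-lag regression on hourly grid data would produce. Below is a minimal sketch of that style of analysis; the file name, column names, and lag structure are illustrative assumptions, not taken from the study.

```python
# Illustrative distributed-lag regression relating hourly solar
# generation to regional CO2 emissions. File, columns, and lag
# choices are hypothetical, not from the Rutgers study.
import pandas as pd
import statsmodels.api as sm

# One row per hour: 'solar_mwh' = regional solar generation,
# 'co2_tons' = regional CO2 emissions from electricity generation.
df = pd.read_csv("region_hourly.csv", parse_dates=["timestamp"])

# Lagged copies of solar generation: lag 0 captures the same-hour
# effect, lag 8 the delayed effect eight hours later.
X = pd.DataFrame(
    {f"solar_lag_{k}": df["solar_mwh"].shift(k) for k in range(9)}
)
X = sm.add_constant(X)

fit = sm.OLS(df["co2_tons"], X, missing="drop").fit()

# Coefficients: change in CO2 tons associated with one extra MWh of
# solar in the same hour (lag 0) versus eight hours later (lag 8).
print(fit.params[["solar_lag_0", "solar_lag_8"]])
```

In this framing, scaling each lag's coefficient by a hypothetical 15% increase in a region's solar output would yield hour-by-hour figures of the kind the study reports, such as the 147.18-ton first-hour reduction for California.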

Read more of this story at Slashdot.

Categories: Technology

Lying Increases Trust In Science, Study Finds

Slashdot - 2 August, 2025 - 17:00
A new paper from Bangor University outlines the "bizarre phenomenon" known as the transparency paradox: transparency is needed to foster public trust in science, but being transparent about science, medicine, and government can also reduce trust. The paper argues that while openness in science is intended to build trust, it can backfire when it reveals uncomfortable truths. Philosopher Byron Hyde, the study's author, suggests that public trust could be improved not by sugarcoating reality, but by educating people to expect imperfection and understand how science actually works. Phys.org reports: The study revealed that, while transparency about good news increases trust, transparency about bad news, such as conflicts of interest or failed experiments, decreases it. One possible solution to the paradox, and a way to increase public trust, would therefore be to lie, for example by hiding bad news so that there is only good news to report, though Hyde points out that this is unethical and ultimately unsustainable. Instead, he suggests that a better way forward would be to tackle the root cause of the problem, which he argues is the public overidealising science. People still overwhelmingly believe in the 'storybook image' of a scientist who makes no mistakes, which creates unrealistic expectations. Hyde is calling for a renewed effort to teach the public about scientific norms through science education and communication, to eliminate the "naive" view of science as infallible. "... most people know that global temperatures are rising, but very few people know how we know that," says Hyde. "Not enough people know that science 'infers to the best explanation' and doesn't definitively 'prove' anything. Too many people think that scientists should be free from biases or conflicts of interest when, in fact, neither of these are possible. If we want the public to trust science to the extent that it's trustworthy, we need to make sure they understand it first." The study has been published in the journal Theory and Society.

Read more of this story at Slashdot.

Categories: Technology

Anthropic Revokes OpenAI's Access To Claude Over Terms of Service Violation

Slashdot - 2 August, 2025 - 13:30
An anonymous reader quotes a report from Wired: Anthropic revoked OpenAI's API access to its models on Tuesday, multiple sources familiar with the matter tell WIRED. OpenAI was informed that its access had been cut off for violating Anthropic's terms of service. "Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5," Anthropic spokesperson Christopher Nulty said in a statement to WIRED. "Unfortunately, this is a direct violation of our terms of service." According to Anthropic's commercial terms of service, customers are barred from using the service to "build a competing product or service, including to train competing AI models" or "reverse engineer or duplicate" the services. This change in OpenAI's access to Claude comes as the ChatGPT-maker is reportedly preparing to release a new AI model, GPT-5, which is rumored to be better at coding. OpenAI was plugging Claude into its own internal tools using special developer access (APIs), rather than the regular chat interface, according to sources. This allowed the company to run tests evaluating Claude's capabilities in areas like coding and creative writing against its own AI models, and to check how Claude responded to safety-related prompts involving categories like CSAM, self-harm, and defamation, the sources say. The results helped OpenAI compare its own models' behavior under similar conditions and make adjustments as needed. "It's industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic's decision to cut off our API access, it's disappointing considering our API remains available to them," OpenAI's chief communications officer Hannah Wong said in a statement to WIRED. Nulty says that Anthropic will "continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry."
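
The "special developer access" here is simply the Claude API rather than the claude.ai chat interface. As a rough illustration of how an evaluation harness might drive it, here is a minimal sketch using Anthropic's Python SDK; the prompt set, model ID, and scoring step are placeholders, not OpenAI's actual setup.

```python
# Hypothetical sketch of benchmarking a model over the Anthropic API.
# Prompts and model ID are placeholders, not OpenAI's evaluation suite.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

coding_prompts = [
    "Write a Python function that reverses a singly linked list.",
    "Explain the bug in: for i in range(len(xs)): xs.pop(i)",
]

responses = []
for prompt in coding_prompts:
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model ID
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    # In a real harness, each response would be scored against a rubric
    # or test suite and compared with another model's output on the
    # same prompt.
    responses.append(msg.content[0].text)
```

Running one fixed prompt set against multiple vendors' APIs and scoring the outputs side by side is the standard benchmarking practice that both companies' statements refer to.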

Read more of this story at Slashdot.

Categories: Technology
