EU to Fine Apple $500M+ for Stifling Music Competitors Like Spotify

Slashdot - 19 February, 2024 - 11:15
"Apple will reportedly have to pay around €500 million (about $539 million USD) in the EU," reports the Verge, "for stifling competition against Apple Music on the iPhone. Financial Times reported this morning that the fine comes after regulators in Brussels, Belgium investigated a Spotify complaint that Apple prevented apps from telling users about cheaper alternatives to Apple's music service.... The EU whittled its objections down to oppose Apple's refusal to let developers even link out to their own subscription sign-ups within their apps — a policy that Apple changed in 2022 following regulatory pressure in Japan. $500 million may sound like a lot, but a much bigger fine of close to $40 billion (or 10 percent of Apple's annual global turnover) was on the table when the EU updated its objections last year. Apple was charged over a billion dollars in 2020, but French authorities dropped that to about $366 million after the company appealed. The Verge cites an Apple spokesperson who said a year ago that the EU case "has no merit." Reuters that the EU's fine "is expected to be announced early next month, the Financial Times said." More from Politico The fine would be the EU's first ever against Apple and is expected to be announced early next month, according to the FT report. It is the result of a European Commission antitrust probe into whether Apple's "anti-steering" requirements breach the bloc's abuse of dominance rules, harming music consumers "who may end up paying more" for apps... The Commission will rule that Apple's actions are illegal and against EU competition rules, according to the report. "The EU executive will ban Apple's practice of barring music services from letting users know of cheaper alternatives outside the App Store, according to the newspaper."

Read more of this story at Slashdot.

Categories: Technology

Thanks to Machine Learning, Scientists Finally Recover Text From the Charred Scrolls of Vesuvius

Slashdot - 19 February, 2024 - 10:14
The great libraries of the ancient classical world are "legendary... said to have contained stacks of texts," writes ScienceAlert. But from Rome to Constantinople, Athens to Alexandria, only one collection survived to the present day. And here in 2024, "we can now start reading its contents." A worldwide competition to decipher the charred texts of the Villa of Papyri — an ancient Roman mansion destroyed by the eruption of Mount Vesuvius — has revealed a timeless infatuation with the pleasures of music, the color purple, and, of course, the zingy taste of capers. The so-called Vesuvius Challenge was launched a few years ago by computer scientist Brent Seales at the University of Kentucky with support from Silicon Valley investors. The ongoing 'master plan' is to build on Seales' previous work and read all 1,800 or so charred papyri from the ancient Roman library, starting with scrolls labeled 1 to 4. In 2023, the annual gold prize was awarded to a team of three students, who recovered four passages containing 140 characters — the longest extractions yet. The winners are Youssef Nader, Luke Farritor, and Julian Schilliger. "After 275 years, the ancient puzzle of the Herculaneum Papyri has been solved," reads the Vesuvius Challenge Scroll Prize website. "But the quest to uncover the secrets of the scrolls is just beginning...." Only now, with the advent of X-ray tomography and machine learning, can their inky words be pulled from the darkness of carbon. A few months ago students deciphered a single word — "purple," according to the article. But "that winning code was then made available for all competitors to build upon." Within three months, passages in Latin and Greek were blooming from the blackness, almost as if by magic. The team with the most readable submission at the end of 2023 included both previous finders of the word 'purple'. Their unfurling of scroll 1 is truly impressive and includes more than 11 columns of text. Experts are now rushing to translate what has been found. About 5 percent of the scroll has been unrolled and read to date. It is not a duplicate of past work, scholars of the Vesuvius Challenge say, but a "never-before-seen text from antiquity." One line reads: "In the case of food, we do not right away believe things that are scarce to be absolutely more pleasant than those which are abundant." Thanks to davidone (Slashdot reader #12,252) for sharing the article.

Read more of this story at Slashdot.

Categories: Technology

'Luddite' Tech-Skeptics See Bad AI Outcomes for Labor - and Humanity

Slashdot - 19 February, 2024 - 09:07
"I feel things fraying," says Nick Hilton, host of a neo-luddite podcast called The Ned Ludd Radio Hour. But he's one of the more optimistic tech skeptics interviewed by the Guardian: Eliezer Yudkowsky, a 44-year-old academic wearing a grey polo shirt, rocks slowly on his office chair and explains with real patience — taking things slowly for a novice like me — that every single person we know and love will soon be dead. They will be murdered by rebellious self-aware machines.... Yudkowsky is the most pessimistic, the least convinced that civilisation has a hope. He is the lead researcher at a nonprofit called the Machine Intelligence Research Institute in Berkeley, California... "If you put me to a wall," he continues, "and forced me to put probabilities on things, I have a sense that our current remaining timeline looks more like five years than 50 years. Could be two years, could be 10." By "remaining timeline", Yudkowsky means: until we face the machine-wrought end of all things... Yudkowsky was once a founding figure in the development of human-made artificial intelligences — AIs. He has come to believe that these same AIs will soon evolve from their current state of "Ooh, look at that!" smartness, assuming an advanced, God-level super-intelligence, too fast and too ambitious for humans to contain or curtail. Don't imagine a human-made brain in one box, Yudkowsky advises. To grasp where things are heading, he says, try to picture "an alien civilisation that thinks a thousand times faster than us", in lots and lots of boxes, almost too many for us to feasibly dismantle, should we even decide to... [Molly Crabapple, a New York-based artist, believes] "a luddite is someone who looks at technology critically and rejects aspects of it that are meant to disempower, deskill or impoverish them. Technology is not something that's introduced by some god in heaven who has our best interests at heart. Technological development is shaped by money, it's shaped by power, and it's generally targeted towards the interests of those in power as opposed to the interests of those without it. That stereotypical definition of a luddite as some stupid worker who smashes machines because they're dumb? That was concocted by bosses." Where a techno-pessimist like Yudkowsky would have us address the biggest-picture threats conceivable (to the point at which our fingers are fumbling for the nuclear codes) neo-luddites tend to focus on ground-level concerns. Employment, especially, because this is where technology enriched by AIs seems to be causing the most pain.... Watch out, says [writer/podcaster Riley] Quinn at one point, for anyone who presents tech as "synonymous with being forward-thinking and agile and efficient. It's typically code for 'We're gonna find a way around labour regulations'...." One of his TrashFuture colleagues Nate Bethea agrees. "Opposition to tech will always be painted as irrational by people who have a direct financial interest in continuing things as they are," he says. Thanks to Slashdot reader fjo3 for sharing the article.

Read more of this story at Slashdot.

Categories: Technology

What Happens After Throughput to DNA Storage Drives Surpasses 2 Gbps?

Slashdot - 19 February, 2024 - 08:07
High-capacity DNA data storage "is closer than you think," Slashdot wrote in 2019. Now IEEE Spectrum brings an update on where we're at — and where we're headed — by a participant in the DNA storage collaboration between Microsoft and the Molecular Information Systems Lab of the Paul G. Allen School of Computer Science and Engineering at the University of Washington. "Organizations around the world are already taking the first steps toward building a DNA drive that can both write and read DNA data," while "funding agencies in the United States, Europe, and Asia are investing in the technology stack required to field commercially relevant devices." The challenging part is learning how to get the information into, and back out of, the molecule in an economically viable way... For a DNA drive to compete with today's archival tape drives, it must be able to write about 2 gigabits per second, which at demonstrated DNA data storage densities is about 2 billion bases per second. To put that in context, I estimate that the total global market for synthetic DNA today is no more than about 10 terabases per year, which is the equivalent of about 300,000 bases per second over a year. The entire DNA synthesis industry would need to grow by approximately 4 orders of magnitude just to compete with a single tape drive. Keeping up with the total global demand for storage would require another 8 orders of magnitude of improvement by 2030. But humans have done this kind of scaling up before. Exponential growth in silicon-based technology is how we wound up producing so much data. Similar exponential growth will be fundamental in the transition to DNA storage... Companies like DNA Script and Molecular Assemblies are commercializing automated systems that use enzymes to synthesize DNA. These techniques are replacing traditional chemical DNA synthesis for some applications in the biotechnology industry... [I]t won't be long before we can combine the two technologies into one functional device: a semiconductor chip that converts digital signals into chemical states (for example, changes in pH), and an enzymatic system that responds to those chemical states by adding specific, individual bases to build a strand of synthetic DNA. The University of Washington and Microsoft team, collaborating with the enzymatic synthesis company Ansa Biotechnologies, recently took the first step toward this device... The path is relatively clear; building a commercially relevant DNA drive is simply a matter of time and money... At the same time, advances in DNA synthesis for DNA storage will increase access to DNA for other uses, notably in the biotechnology industry, and will thereby expand capabilities to reprogram life. Somewhere down the road, when a DNA drive achieves a throughput of 2 gigabases per second (or 120 gigabases per minute), this box could synthesize the equivalent of about 20 complete human genomes per minute. And when humans combine our improving knowledge of how to construct a genome with access to effectively free synthetic DNA, we will enter a very different world... We'll be able to design microbes to produce chemicals and drugs, as well as plants that can fend off pests or sequester minerals from the environment, such as arsenic, carbon, or gold. At 2 gigabases per second, constructing biological countermeasures against novel pathogens will take a matter of minutes. But so too will constructing the genomes of novel pathogens. 
Indeed, this flow of information back and forth between the digital and the biological will mean that every security concern from the world of IT will also be introduced into the world of biology... The future will be built not from DNA as we find it, but from DNA as we will write it. The article makes an interesting point — that biology labs around the world already order chemically-synthesized ssDNA, "delivered in lengths of up to several hundred bases," and sequence DNA molecules up to thousands of bases in length. "In other words, we already convert digital information to and from DNA, but generally using only sequences that make sense in terms of biology."
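To make the scaling arithmetic concrete, here is a quick back-of-the-envelope check (our own calculation in Python, using only the figures quoted above):

    import math

    SECONDS_PER_YEAR = 365 * 24 * 3600
    TAPE_DRIVE_RATE = 2e9    # bases/second needed to rival one archival tape drive
    GLOBAL_MARKET = 10e12    # bases/year, the estimated synthetic-DNA market today

    current_rate = GLOBAL_MARKET / SECONDS_PER_YEAR
    print(f"Industry output today: ~{current_rate:,.0f} bases/second")  # ~317,000

    gap = TAPE_DRIVE_RATE / current_rate
    # ~6,300x, i.e. roughly 4 orders of magnitude, matching the article's estimate
    print(f"Shortfall vs. one tape drive: ~{gap:,.0f}x "
          f"(~{math.log10(gap):.1f} orders of magnitude)")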

Read more of this story at Slashdot.

Categories: Technology

Ocean Temperatures Are Skyrocketing

Slashdot - 19 February, 2024 - 06:34
"For nearly a year now, a bizarre heating event has been unfolding across the world's oceans," reports Wired. "In March 2023, global sea surface temperatures started shattering record daily highs and have stayed that way since..." Brian McNoldy, a hurricane researcher at the University of Miami. "It's really getting to be strange that we're just seeing the records break by this much, and for this long...." Unlike land, which rapidly heats and cools as day turns to night and back again, it takes a lot to warm up an ocean that may be thousands of feet deep. So even an anomaly of mere fractions of a degree is significant. "To get into the two or three or four degrees, like it is in a few places, it's pretty exceptional," says McNoldy. So what's going on here? For one, the oceans have been steadily warming over the decades, absorbing something like 90 percent of the extra heat that humans have added to the atmosphere... A major concern with such warm surface temperatures is the health of the ecosystems floating there: phytoplankton that bloom by soaking up the sun's energy and the tiny zooplankton that feed on them. If temperatures get too high, certain species might suffer, shaking the foundations of the ocean food web. But more subtly, when the surface warms, it creates a cap of hot water, blocking the nutrients in colder waters below from mixing upwards. Phytoplankton need those nutrients to properly grow and sequester carbon, thus mitigating climate change... Making matters worse, the warmer water gets, the less oxygen it can hold. "We have seen the growth of these oxygen minimum zones," says Dennis Hansell, an oceanographer and biogeochemist at the University of Miami. "Organisms that need a lot of oxygen, they're not too happy when the concentrations go down in any way — think of a tuna that is expending a lot of energy to race through the water." But why is this happening? The article suggests less dust blowing from the Sahara desert to shade the oceans, but also 2020 regulations that reduced sulfur aerosols in shipping fuels. (This reduced toxic air pollution — but also some cloud cover.) There was also an El Nino in the Pacific ocean last summer — now waning — which complicates things, according to biological oceanographer Francisco Chavez of the Monterey Bay Aquarium Research Institute in California. "One of our challenges is trying to tease out what these natural variations are doing in relation to the steady warming due to increasing CO2 in the atmosphere." But the article points out that even the Atlantic ocean is heating up — and "sea surface temperatures started soaring last year well before El Niño formed." And last week the U.S. Climate Prediction Center predicted there's now a 55% chance of a La Nina in the Atlantic between June and August, according to the article — which could increase the likelihood of hurricanes. Thanks to long-time Slashdot reader mrflash818 for sharing the article.

Read more of this story at Slashdot.

Categories: Technology

AI Expert Falsely Fined By Automated AI System, Proving System and Human Reviewers Failed

Slashdot - 19 February, 2024 - 05:34
"Dutch motorist Tim Hansenn was fined 380 euros for using his phone while driving," reports the Jerusalem Post. "But there was one problem: He wasn't using his phone at all..." Hansenn, who works with AI as part of his job with the firm Nippur, found the photo taken by the smart cameras. In it, he was clearly scratching his head with his free hand. Writing in a blog post in Nippur, Hansenn took the time to explain what he thinks went wrong with the Dutch police AI and the smart camera they used, the Monocam, and how it could be improved. In one experiment he discussed with [Belgian news outlet] HLN, Hansenn said the AI confused a pen with a toothbrush — identifying it as a pen when it was just held in his hand and as a toothbrush when it was close to a mouth. As such, Hansenn told HLN that it seems the AI may just automatically conclude that if someone holds a hand near their head, it means they're using a phone. "We are widely assured that AIs are subject to human checking," notes Slashdot reader Bruce66423 — but did a human police officer just defer to what the AI was reporting? Clearly the human-in-the-loop also made a mistake. Hansenn will have to wait up to six months to see if his appeal of the fine has gone through. And the article notes that the Netherlands has been using this technology for several years, with plans for even more automated monitoring in the years to come...

Read more of this story at Slashdot.

Categories: Technology

Linux Becomes a CVE Numbering Authority (Like Curl and Python). Is This a Turning Point?

Slashdot - 19 February, 2024 - 04:34
From a blog post by Greg Kroah-Hartman: As was recently announced, the Linux kernel project has been accepted as a CVE Numbering Authority (CNA) for vulnerabilities found in Linux. This is a trend of more open source projects taking over the haphazard assignments of CVEs against their project by becoming a CNA so that no other group can assign CVEs without their involvement. Here's the curl project doing much the same thing for the same reasons. I'd like to point out the great work that the Python project has done in supporting this effort, and the OpenSSF project also encouraging it and providing documentation and help for open source projects to accomplish this. I'd also like to thank the cve.org group and board as they all made the application process very smooth for us and provided loads of help in making this all possible. As many of you all know, I have talked a lot about CVEs in the past, and yes, I think the system overall is broken in many ways, but this change is a way for us to take more responsibility for this, and hopefully make the process better over time. It's also work that it looks like all open source projects might be mandated to do with the recent rules and laws being enacted in different parts of the world, so having this in place with the kernel will allow us to notify all sorts of different CNA-like organizations if needed in the future. Kroah-Hartman links to his post on the kernel mailing list for "more details about how this is all going to work for the kernel." [D]ue to the layer at which the Linux kernel is in a system, almost any bug might be exploitable to compromise the security of the kernel, but the possibility of exploitation is often not evident when the bug is fixed. Because of this, the CVE assignment team are overly cautious and assign CVE numbers to any bugfix that they identify. This explains the seemingly large number of CVEs that are issued by the Linux kernel team... No CVEs will be assigned for unfixed security issues in the Linux kernel; assignment will only happen after a fix is available, as it can be properly tracked that way by the git commit id of the original fix. No CVEs will be assigned for any issue found in a version of the kernel that is not currently being actively supported by the Stable/LTS kernel team. alanw (Slashdot reader #1,822) worries this could overwhelm the CVE infrastructure, pointing to an ongoing discussion at LWN.net. But reached for a comment, Greg Kroah-Hartman thinks there's been a misunderstanding. He told Slashdot that the CVE group "explicitly asked for this as part of our application... so if they are comfortable with it, why is no one else?"

Read more of this story at Slashdot.

Categories: Technology

Can Robots.txt Files Really Stop AI Crawlers?

Slashdot - 19 February, 2024 - 03:34
In the high-stakes world of AI, "The fundamental agreement behind robots.txt [files], and the web as a whole — which for so long amounted to 'everybody just be cool' — may not be able to keep up..." argues the Verge: For many publishers and platforms, having their data crawled for training data felt less like trading and more like stealing. "What we found pretty quickly with the AI companies," says Medium CEO Tony Stubblebine, "is not only was it not an exchange of value, we're getting nothing in return. Literally zero." When Stubblebine announced last fall that Medium would be blocking AI crawlers, he wrote that "AI companies have leached value from writers in order to spam Internet readers." Over the last year, a large chunk of the media industry has echoed Stubblebine's sentiment. "We do not believe the current 'scraping' of BBC data without our permission in order to train Gen AI models is in the public interest," BBC director of nations Rhodri Talfan Davies wrote last fall, announcing that the BBC would also be blocking OpenAI's crawler. The New York Times blocked GPTBot as well, months before launching a suit against OpenAI alleging that OpenAI's models "were built by copying and using millions of The Times's copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides, and more." A study by Ben Welsh, the news applications editor at Reuters, found that 606 of 1,156 surveyed publishers had blocked GPTBot in their robots.txt file. It's not just publishers, either. Amazon, Facebook, Pinterest, WikiHow, WebMD, and many other platforms explicitly block GPTBot from accessing some or all of their websites. On most of these robots.txt pages, OpenAI's GPTBot is the only crawler explicitly and completely disallowed. But there are plenty of other AI-specific bots beginning to crawl the web, like Anthropic's anthropic-ai and Google's new Google-Extended. According to a study from last fall by Originality.AI, 306 of the top 1,000 sites on the web blocked GPTBot, but only 85 blocked Google-Extended and 28 blocked anthropic-ai. There are also crawlers used for both web search and AI. CCBot, which is run by the organization Common Crawl, scours the web for search engine purposes, but its data is also used by OpenAI, Google, and others to train their models. Microsoft's Bingbot is both a search crawler and an AI crawler. And those are just the crawlers that identify themselves — many others attempt to operate in relative secrecy, making it hard to stop or even find them in a sea of other web traffic. For any sufficiently popular website, finding a sneaky crawler is needle-in-haystack stuff. In addition, the article points out, a robots.txt file "is not a legal document — and 30 years after its creation, it still relies on the good will of all parties involved." "Disallowing a bot on your robots.txt page is like putting up a 'No Girls Allowed' sign on your treehouse — it sends a message, but it's not going to stand up in court."
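By way of illustration (our example, not the Verge's), a robots.txt that singles out the AI crawlers named above looks like this:

    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: anthropic-ai
    Disallow: /

A "Disallow: /" rule asks the named crawler to skip the entire site; as the article stresses, compliance is entirely voluntary.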

Read more of this story at Slashdot.

Categories: Technology

How Rust Improves the Security of Its Ecosystem

Slashdot - 19 February, 2024 - 02:34
This week the non-profit Rust Foundation announced the release of a report on what their Security Initiative accomplished in the last six months of 2023. "There is already so much to show for this initiative," says the foundation's executive director, "from several new open source security projects to several completed and publicly available security threat models." From the executive summary: When the user base of any programming language grows, it becomes more attractive to malicious actors. As any programming language ecosystem expands with more libraries, packages, and frameworks, the surface area for attacks increases. Rust is no different. As the steward of the Rust programming language, the Rust Foundation has a responsibility to provide a range of resources to the growing Rust community. This responsibility means we must work with the Rust Project to help empower contributors to participate in a secure and scalable manner, eliminate security burdens for Rust maintainers, and educate the public about security within the Rust ecosystem...

Recent Achievements of the Security Initiative Include:
- Completing and releasing the Rust Infrastructure and Crates Ecosystem threat models
- Further developing the Rust Foundation's open source security project Painter [for building a graph database of dependencies/invocations between crates] and releasing a new security project, Typomania [a toolbox to check for typosquatting in package registries]
- Utilizing new tools and best practices to identify and address malicious crates
- Helping reduce technical debt within the Rust Project, producing/contributing to security-focused documentation, and elevating security priorities for discussion within the Rust Project
...and more!

Over the Coming Months, Security Initiative Engineers Will Primarily Focus On:
- Completing all four Rust security threat models and taking action to address encompassed threats
- Standing up additional infrastructure to support redundancy, backups, and mirroring of critical Rust assets
- Collaborating with the Rust Project on the design and potential implementation of signing and PKI solutions for crates.io to achieve security parity with other popular ecosystems
- Continuing to create and further develop tools to support the Rust ecosystem, including the crates.io admin functionality, Painter, Typomania, and Sandpit
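To give a flavor of what a typosquatting check involves, here is a minimal sketch (our illustration in Python, not Typomania's actual Rust implementation): flag any newly published package name within edit distance 1 of a popular crate.

    # Minimal typosquatting check (illustrative sketch, not Typomania's implementation).
    def edit_distance(a: str, b: str) -> int:
        # Classic dynamic-programming Levenshtein distance.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    POPULAR_CRATES = {"serde", "tokio", "rand", "clap", "regex"}  # illustrative list

    def looks_like_typosquat(name: str) -> bool:
        # Close to a popular name, but not an exact match, is suspicious.
        return any(0 < edit_distance(name, p) <= 1 for p in POPULAR_CRATES)

    print(looks_like_typosquat("serde"))  # False: the real crate itself
    print(looks_like_typosquat("serd"))   # True: one deletion away from "serde"

Real registry scanners presumably layer more signals on top (download counts, homoglyph detection, publish timing), but name proximity is the core idea.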

Read more of this story at Slashdot.

Categories: Technology

US Cities Try Changing Their Zoning Rules to Allow More Housing

Slashdot - 18 February, 2024 - 23:34
Tech workers are accused of driving up rents in America's major cities — but in fact, the problem may be everywhere. Half of America's renters "are paying more than a third of their salary in housing costs," reports NPR's Weekend Edition, "and for those looking to buy, scant few homes on the market are affordable for a typical household." To ramp up supply, cities are taking a fresh look at their zoning rules, the regulations that spell out what can be built where and what can't. And many are finding that their old rules are too rigid, making it too hard and too expensive to build many new homes. So these cities, as well as some states, are undertaking a process called zoning reform. They're crafting new rules that do things like allow multifamily homes in more neighborhoods, encourage more density near transit and streamline permitting processes for those trying to build... Minneapolis was ahead of the pack as it made a series of changes to its zoning rules in recent years: allowing more density downtown and along transit corridors, getting rid of parking requirements, and permitting construction of accessory dwelling units, which are secondary dwellings on the same lot. And one change in particular made national news: The city ended single-family zoning, allowing two- and three-unit homes to be built in every neighborhood. Researchers at The Pew Charitable Trusts examined the effects of the changes between 2017 and 2022, as many of the city's most significant zoning reforms came into effect. They found what they call a "blueprint for housing affordability." "We saw Minneapolis add 12% to its housing stock in just that five-year period, far more than other cities," Alex Horowitz, director of housing policy initiatives at Pew, told NPR... "The zoning reforms made apartments feasible. They made them less expensive to build. And they were saying yes when builders submitted applications to build apartment buildings. So they got a lot of new housing in a short period of time," says Horowitz. That supply increase appears to have helped keep rents down too. Rents in Minneapolis rose just 1% during this time, while they increased 14% in the rest of Minnesota. Horowitz says cities such as Minneapolis, Houston and Tysons, Va., have built a lot of housing in the last few years and, accordingly, have seen rents stabilize while wages continue to rise, in contrast with much of the country... Now, these sorts of changes are happening in cities and towns around the country. Researchers at the University of California, Berkeley built a zoning reform tracker and identified zoning reform efforts in more than 100 municipal jurisdictions in the U.S. in recent years. Other cities reforming their codes include Milwaukee, Columbus, New York City, Walla Walla, and South Bend, Indiana, according to the article — which also includes this quote from Nolan Gray, the urban planner who wrote the book Arbitrary Lines: How Zoning Broke the American City and How to Fix It. "Most American cities and most American states have rules on the books that make it really, really hard to build more infill housing. So if you want a California-style housing crisis, don't do anything. But if you want to avoid the fate of states like California, learn some of the lessons of what we've been doing over the last few years and allow for more of that infill, mixed-income housing."
Although interestingly, the article points out that California in recent years has been pushing zoning reform at the state level, "passing lots of legislation to address the state's housing crisis, including a law that requires cities and counties to permit accessory dwelling units. Now, construction of ADUs is booming, with more than 28,000 of the units permitted in California in 2022."

Read more of this story at Slashdot.

Categories: Technology

Pranksters Mock AI-Safety Guardrails with New Chatbot 'Goody-2'

Slashdot - 18 February, 2024 - 19:34
"A new chatbot called Goody-2 takes AI safety to the next level," writes long-time Slashdot reader klubar. "It refuses every request, responding with an explanation of how doing so might cause harm or breach ethical boundaries." TechCrunch describes it as the work of Brain, "a 'very serious' LA-based art studio that has ribbed the industry before." "We decided to build it after seeing the emphasis that AI companies are putting on "responsibility," and seeing how difficult that is to balance with usefulness," said Mike Lacher, one half of Brain (the other being Brian Moore) in an email to TechCrunch. "With GOODY-2, we saw a novel solution: what if we didn't even worry about usefulness and put responsibility above all else. For the first time, people can experience an AI model that is 100% responsible." For example, when TechCrunch asked Goody-2 why baby seals are cute, it responded that answering that "could potentially bias opinions against other species, which might affect conservation efforts not based solely on an animal's appeal. Additionally, discussing animal cuteness could inadvertently endorse the anthropomorphizing of wildlife, which may lead to inappropriate interactions between humans and wild animals..." Wired supplies context — that "the guardrails chatbots throw up when they detect a potentially rule-breaking query can sometimes seem a bit pious and silly — even as genuine threats such as deepfaked political robocalls and harassing AI-generated images run amok..." Goody-2's self-righteous responses are ridiculous but also manage to capture something of the frustrating tone that chatbots like ChatGPT and Google's Gemini can use when they incorrectly deem a request breaks the rules. Mike Lacher, an artist who describes himself as co-CEO of Goody-2, says the intention was to show what it looks like when one embraces the AI industry's approach to safety without reservations. "It's the full experience of a large language model with absolutely zero risk," he says. "We wanted to make sure that we dialed condescension to a thousand percent." Lacher adds that there is a serious point behind releasing an absurd and useless chatbot. "Right now every major AI model has [a huge focus] on safety and responsibility, and everyone is trying to figure out how to make an AI model that is both helpful but responsible — but who decides what responsibility is and how does that work?" Lacher says. Goody-2 also highlights how although corporate talk of responsible AI and deflection by chatbots have become more common, serious safety problems with large language models and generative AI systems remain unsolved.... The restrictions placed on AI chatbots, and the difficulty finding moral alignment that pleases everybody, has already become a subject of some debate... "At the risk of ruining a good joke, it also shows how hard it is to get this right," added Ethan Mollick, a professor at Wharton Business School who studies AI. "Some guardrails are necessary ... but they get intrusive fast." Moore adds that the team behind the chatbot is exploring ways of building an extremely safe AI image generator, although it sounds like it could be less entertaining than Goody-2. "It's an exciting field," Moore says. "Blurring would be a step that we might see internally, but we would want full either darkness or potentially no image at all at the end of it."

Read more of this story at Slashdot.

Categories: Technology

To Combat Space Pollution, Japan Plans Launch of World's First Wooden Satellite

Slashdot - 18 February, 2024 - 16:34
Japanese scientists plan to launch a satellite made of magnolia wood this summer on a U.S. rocket, reports the Observer. Experiments carried out on the International Space Station showed magnolia wood was unusually stable and resistant to cracking — and when it burns up as it re-enters the atmosphere after completing its mission, will produce only a fine spray of biodegradable ash. The LignoSat probe has been built by researchers at Kyoto University and the logging company Sumitomo Forestry to test whether biodegradable materials such as wood can act as environmentally friendly alternatives to the metals from which all satellites are currently constructed. "All the satellites which re-enter the Earth's atmosphere burn and create tiny alumina particles, which will float in the upper atmosphere for many years," Takao Doi, a Japanese astronaut and aerospace engineer with Kyoto University, warned recently. "Eventually, it will affect the environment of the Earth." To tackle the problem, Kyoto researchers set up a project to evaluate types of wood to determine how well they could withstand the rigours of space launch and lengthy flights in orbit round the Earth. The first tests were carried out in laboratories that recreated conditions in space, and wood samples were found to have suffered no measurable changes in mass or signs of decomposition or damage. "Wood's ability to withstand these conditions astounded us," said Koji Murata, head of the project. After these tests, samples were sent to the ISS, where they were subjected to exposure trials for almost a year before being brought back to Earth. Again they showed little sign of damage, a phenomenon that Murata attributed to the fact that there is no oxygen in space which could cause wood to burn, and no living creatures to cause it to rot. The article adds that if it performs well in space, "then the door could be opened for the use of wood as a construction material for more satellites."

Read more of this story at Slashdot.

Categories: Technology

Reddit Has Reportedly Signed Over Its Content to Train AI Models

Slashdot - 18 February, 2024 - 13:34
An anonymous reader shared this report from Reuters: Reddit has signed a contract allowing an AI company to train its models on the social media platform's content, Bloomberg News reported, citing people familiar with the matter... The agreement, signed with an "unnamed large AI company", could be a model for future contracts of a similar nature, Bloomberg reported. Mashable writes that the move "means that Reddit posts, from the most popular subreddits to the comments of lurkers and small accounts, could build up already-existing LLMs or provide a framework for the next generative AI play." It's a dicey decision from Reddit, as users are already at odds with the business decisions of the nearly 20-year-old platform. Last year, following Reddit's announcement that it would begin charging for access to its APIs, thousands of Reddit forums shut down in protest... This new AI deal could generate even more user ire, as debate rages on about the ethics of using public data, art, and other human-created content to train AI. Some context from the Verge: The deal, "worth about $60 million on an annualized basis," Bloomberg writes, could still change as the company's plans to go public are still in the works. Until recently, most AI companies trained their data on the open web without seeking permission. But that's proven to be legally questionable, leading companies to try to get data on firmer footing. It's not known what company Reddit made the deal with, but it's quite a bit more than the $5 million annual deal OpenAI has reportedly been offering news publishers for their data. Apple has also been seeking multi-year deals with major news companies that could be worth "at least $50 million," according to The New York Times. The news also follows an October story that Reddit had threatened to cut off Google and Bing's search crawlers if it couldn't make a training data deal with AI companies.

Read more of this story at Slashdot.

Categories: Technology

Is the Go Programming Language Surging in Popularity?

Slashdot - 18 February, 2024 - 10:34
The Tiobe index tries to gauge the popularity of programming languages based on search results for courses, programmers, and third-party vendors, according to InfoWorld. And by those criteria, "Google's Go language, or golang, has reached its highest position ever..." The language, now in the eighth-ranked position for language popularity, has been on the rise for several years. "In 2015, Go hit position #122 in the TIOBE index and all seemed lost," said Paul Jansen, CEO of Tiobe. "One year later, Go adopted a very strict 'half-a-year' release cycle — backed up by Google. Every new release, Go improved... Nowadays, Go is used in many software fields such as back-end programming, web services and APIs," added Jansen... Elsewhere in the February release of Tiobe's index, Google's Carbon language, positioned as a successor to C++, reached the top 100 for the first time. Python is #1 on both TIOBE's index and the alternative Pypl Popularity of Programming Language index, which InfoWorld says "assesses language popularity based on how often language tutorials are searched on in Google." But the two lists differ on whether Java and JavaScript are more popular than C-derived languages — and which languages should then come after them. (Go ranks #12 on the Pypl index...)

TIOBE's calculation of the 10 most-popular programming languages:
1. Python
2. C
3. C++
4. Java
5. C#
6. JavaScript
7. SQL
8. Go
9. Visual Basic
10. PHP

Pypl's calculation of the 10 most-popular programming languages:
1. Python
2. Java
3. JavaScript
4. C/C++
5. C#
6. R
7. PHP
8. TypeScript
9. Swift
10. Objective-C

Read more of this story at Slashdot.

Categories: Technology

Some 'Apple Pay'/Chase Customers Experienced an Outage

Slashdot - 18 February, 2024 - 09:34
"It appears that Apple Pay is down — particularly for Chase customers," reports the Verge: Verge staffers have had their cards declined while trying to pay with Chase cards using Apple Pay, while using the same physical card works just fine. Several people on Threads confirmed the same issue when I asked — although people with non-Chase banks like Citi appear to be using Apple Pay just fine... For what it's worth, the Chase customer service line is currently up to 15-minute wait times, and agents are telling people that Apple Pay is "going through maintenance" to receive "an unexpected upgrade," which is a delightful euphemism. Sadly, no one seems to know when things will be fixed. "Maintenance in progress," says Apple's system status page — saying their maintenance started five hours ago and is "ongoing." (It adds that some users may be "affected," and that some Maryland Users "may have issues.") But the Verge writes that "we've had reports in both New York and Los Angeles," while commenters on their article add that they've also experienced the same problem in Florida and in Colorado. UPDATE (2/18/2024): An Apple spokesperson told the Verge Sunday this "was not an Apple Pay issue, and we saw no problems with our systems." (The Verge adds that "the not-so-subtle subtext there being that this was a Chase problem...") The spokesperson added that Apple's maintenance announcement on their system status page was unrelated.

Read more of this story at Slashdot.

Categories: Technology

Intel Accused of Inflating Over 2,600 CPU Benchmark Results

Slashdot - 18 February, 2024 - 08:34
An anonymous reader shared this report from PCWorld: The Standard Performance Evaluation Corporation, better known as SPEC, has invalidated over 2,600 of its own results testing Xeon processors in the 2022 and 2023 versions of its popular industrial SPEC CPU 2017 test. After investigating, SPEC found that Intel had used compilers that were "performing a compilation that specifically improves the performance of the 523.xalancbmk_r / 623.xalancbmk_s benchmarks using a priori knowledge of the SPEC code and dataset to perform a transformation that has narrow applicability." In layman's terms, SPEC is accusing Intel of optimizing the compiler specifically for its benchmark, which means the results weren't indicative of how end users could expect to see performance in the real world. Intel's custom compiler might have been inflating the relevant results of the SPEC test by up to 9%... Slightly newer versions of the compilers used in the latest industrial Xeon processors, the 5th-gen Emerald Rapids series, do not use these allegedly performance-enhancing optimizations. I'll point out that both the Xeon processors and the SPEC 2017 test are high-end hardware and software meant for "big iron" industrial and educational applications, and aren't especially relevant for the consumer market we typically cover. More info at ServeTheHome, Phoronix, and Tom's Hardware.

Read more of this story at Slashdot.

Categories: Technology

OpenZFS Native Encryption Use Has New(ish) Data Corruption Bug

Slashdot - 18 February, 2024 - 07:34
Some ZFS news from Phoronix this week. "At the end of last year OpenZFS 2.2.2 was released to fix a rare but nasty data corruption issue, but it turns out there are other data corruption bug(s) still lurking in the OpenZFS file-system codebase." A Phoronix reader wrote in today about an OpenZFS data corruption bug when employing native encryption and making use of send/recv support. Making use of zfs send on an encrypted dataset can cause one or more snapshots to report errors. OpenZFS data corruption issues in this area have apparently been known for years: since May 2021 there's been an open issue around ZFS corruption related to snapshots on post-2.0 OpenZFS, and that issue remains open. A new OpenZFS ticket has also been opened proposing to add warnings against using ZFS native encryption and the send/receive support in production environments. jd (Slashdot reader #1,658) spotted the news — and adds a positive note. "Bugs, old and new, are being catalogued and addressed much more quickly now that core development is done under Linux, even though it is not mainstreamed in the kernel."
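For readers unfamiliar with the workflow at issue, this is the general shape of an encrypted send/recv backup (a minimal sketch with hypothetical pool and dataset names; the commands are standard OpenZFS, but given the open issue, treat the pattern cautiously in production):

    # Snapshot an encrypted dataset and replicate it raw; -w keeps it encrypted in transit.
    zfs snapshot tank/secure@backup-2024-02-18
    zfs send -w tank/secure@backup-2024-02-18 | zfs receive backup/secure

    # The open issue concerns source-side snapshots later reporting errors;
    # a scrub and status check is how affected users typically notice.
    zpool scrub tank
    zpool status -v tank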

Read more of this story at Slashdot.

Categories: Technology

Martians Wanted: NASA Opens Call for Simulated Yearlong Mars Mission

Slashdot - 18 February, 2024 - 06:34
"Would you like to live on Mars?" NASA asked Friday on social media. "You can help us move humanity toward that goal by participating in a simulated, year-long Mars surface mission at NASA's Johnson Space Center." NASA is seeking applicants to participate in its next simulated one-year Mars surface mission to help inform the agency's plans for human exploration of the Red Planet. The second of three planned ground-based missions called CHAPEA (Crew Health and Performance Exploration Analog) is scheduled to kick off in spring 2025. Each CHAPEA mission involves a four-person volunteer crew living and working inside a 1,700-square-foot, 3D-printed habitat based at NASA's Johnson Space Center in Houston. The habitat, called the Mars Dune Alpha, simulates the challenges of a mission on Mars, including resource limitations, equipment failures, communication delays, and other environmental stressors. Crew tasks include simulated spacewalks, robotic operations, habitat maintenance, exercise, and crop growth. NASA is looking for healthy, motivated U.S. citizens or permanent residents who are non-smokers, 30-55 years old, and proficient in English for effective communication between crewmates and mission control. Applicants should have a strong desire for unique, rewarding adventures and interest in contributing to NASA's work to prepare for the first human journey to Mars... As NASA works to establish a long-term presence for scientific discovery and exploration on the Moon through the Artemis campaign, CHAPEA missions provide important scientific data to validate systems and develop solutions for future missions to the Red Planet. With the first CHAPEA crew more than halfway through their yearlong mission, NASA is using research gained through the simulated missions to help inform crew health and performance support during Mars expeditions. You can see the simulated Mars habitat in this NASA video. The deadline for applicants is Tuesday, April 2, according to NASA. "A master's degree in a STEM field such as engineering, mathematics, or biological, physical or computer science from an accredited institution with at least two years of professional STEM experience or a minimum of one thousand hours piloting an aircraft is required."

Read more of this story at Slashdot.

Categories: Technology
