Re: Here we track the world conquest of the AI diarrhea generators

Posted: 01 Aug 2024, 12:06
by Spandau Mullet
pigra senlaborulo wrote:
01 Aug 2024, 01:07
Using the term ‘artificial intelligence’ in product descriptions reduces purchase intentions
This is what I came here to post. Advertising pointless AI diarrhea features reduces purchase intent, but the solution to that is by no means to leave the diarrhea out of the product. You just have to not mention it in the marketing :rollsmoker:

Re: Here we track the world conquest of the AI diarrhea generators

Posted: 01 Aug 2024, 16:36
by pigra senlaborulo
english.elpais.com
Javier Milei’s government will monitor social media with AI to ‘predict future crimes’
Javier Lorca

The adjustment and streamlining of public agencies that President Javier Milei is driving in Argentina does not apply to the areas of security and defense. After restoring the State Intelligence Secretariat and assigning it millions in reserved funds, for which he does not have to account, the president has now created a special unit that will deal with cyberpatrolling on social media and the internet, the real-time analysis of security camera footage and aerial surveillance using drones, among other things. In addition, he will use “machine learning algorithms” to “predict future crimes,” a scenario the sci-fi writer Philip K. Dick once dreamed up, later made famous by the film Minority Report. How will Milei do all that? Through artificial intelligence, the executive announced.

Among his plans to downsize the State, President Milei has been saying that he intends to replace government workers and organizations with AI systems. The first role that he will give to this technology, however, will be an expansion of state agencies: on Monday his government created the Unit of Artificial Intelligence Applied to Security.

The new agency will report to the Ministry of Security. “It is essential to apply artificial intelligence in the prevention, detection, investigation and prosecution of crime and its connections,” states the resolution signed by Minister Patricia Bullrich, who cites similar developments in other countries. The belief behind the decision is that the use of AI “will significantly improve the efficiency of the different areas of the ministry and of the federal police and security forces, allowing for faster and more precise responses to threats and emergencies.”

The Artificial Intelligence Unit will be made up of police officers and agents from other security forces. Its tasks will include “patrolling open social platforms, applications and websites,” where it will seek to “detect potential threats, identify movements of criminal groups or anticipate disturbances.” It will also be dedicated to “analyzing images from security cameras in real time in order to detect suspicious activities or identify wanted persons using facial recognition.” The resolution also awards it powers worthy of science fiction: “Using machine learning algorithms to analyze historical crime data and thus predict future crimes.” Another purpose will be to discover “suspicious financial transactions or anomalous behavior that could indicate illegal activities.”

The new unit will not only deal with virtual spaces. It will be able to “patrol large areas using drones, provide aerial surveillance and respond to emergencies,” as well as perform “dangerous tasks, such as defusing explosives, using robots.”
Rights at risk

Various experts and civil organizations have warned that the new AI Unit will threaten citizens' rights.

“The government body created to patrol social networks, applications and websites contradicts several articles of the National Constitution,” said Martín Becerra, a professor and researcher in media and information technology. “The government of Milei (and Bullrich) is anti-liberal. It decrees new regulations, reinforces the state’s repressive function, increases the opacity of public funds and eliminates norms that sought to protect the most vulnerable,” he warned on his social media accounts.

For Natalia Zuazo, a digital policy specialist, the initiative essentially means “illegal intelligence disguised as the use of ‘modern’ technologies.” Among the implicit risks, she explained that there will be little control and many different security forces with access to the information that’s collected.

The Center for Studies on Freedom of Expression and Access to Information at the University of Palermo said its research on cyber-patrolling practices in Argentina and other Latin American countries indicates that “the principles of legality and transparency are often not met. The opacity in the acquisition and implementation of technologies and the lack of accountability are worrying. In the past, these technologies have been used to profile academics, journalists, politicians and activists.” In that context, “without supervision or checks and balances, privacy and freedom of expression are threatened.”

The Argentine Observatory of Information Technology Law pointed out that the Security resolution “justifies the measure by invoking comparative experiences, of which the slightest analysis is never carried out.” It asked: “Are the security systems of China or India really comparable with those of France or Singapore and, at the same time, all of them with that of Argentina?”

The researcher Becerra particularly questioned the function of predicting crimes assigned to the new unit, noting that it is “something in which the use of AI has explicitly failed and which, therefore, must be avoided.”

The Philip K. Dick story that gave rise to the Steven Spielberg film warned about the problems of predicting crimes. “We stopped them [future criminals] before they could commit any act of violence,” said one of the characters in the story. “So the commission of the crime itself is absolutely a metaphysical question. We claim that they are guilty. And they, in turn, constantly claim that they are innocent. And in a certain sense they are innocent.”

https://english.elpais.com/internationa ... rimes.html

Re: Here we track the world conquest of the AI diarrhea generators

Posted: 01 Aug 2024, 17:18
by Tuulipuku
Yeah, well. Is it really worth electing all manner of clowns as president, I'm just asking.

I wonder if anyone has ever even tried to develop an AI system that would, say, spot young people whose life situation is such that outreach youth work might be in exactly the right place at exactly the right time.

Probably not, fascism is cooler and more manly.

Re: Here we track the world conquest of the AI diarrhea generators

Posted: 01 Aug 2024, 17:19
by Spandau Mullet
Tuulipuku wrote:
01 Aug 2024, 17:18
I wonder if anyone has ever even tried to develop an AI system that would, say, spot young people whose life situation is such that outreach youth work might be in exactly the right place at exactly the right time.
Doesn't create added value for shareholders, investors or fascist dictators [-X

Re: Here we track the world conquest of the AI diarrhea generators

Posted: 01 Aug 2024, 21:07
by pigra senlaborulo
arstechnica.com
Meta addresses AI hallucination as chatbot says Trump shooting didn’t happen
Jon Brodkin - 7/31/2024, 5:05 PM
Not the sharpest bot on the web —
Meta "programmed it to simply not answer questions," but it did anyway.

Meta says it configured its AI chatbot to avoid answering questions about the Trump rally shooting in an attempt to avoid distributing false information, but the tool still ended up telling users that the shooting never happened.

"Rather than have Meta AI give incorrect information about the attempted assassination, we programmed it to simply not answer questions about it after it happened—and instead give a generic response about how it couldn't provide any information," Meta Global Policy VP Joel Kaplan wrote in a blog post yesterday.

Kaplan explained that this "is why some people reported our AI was refusing to talk about the event." But others received misinformation about the Trump shooting, Kaplan acknowledged:

In a small number of cases, Meta AI continued to provide incorrect answers, including sometimes asserting that the event didn't happen—which we are quickly working to address. These types of responses are referred to as hallucinations, which is an industry-wide issue we see across all generative AI systems, and is an ongoing challenge for how AI handles real-time events going forward. Like all generative AI systems, models can return inaccurate or inappropriate outputs, and we'll continue to address these issues and improve these features as they evolve and more people share their feedback.
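
Neither Kaplan's post nor the article explains how that refusal was implemented. Purely as an illustration, a pre-generation topic guard of the kind he describes could look roughly like the following sketch; the keyword list, function names and canned reply are invented for the example and are not Meta's actual mechanism.

    // Hypothetical sketch of a "don't answer questions about this event" guard.
    // If the prompt touches the blocked topic, return a canned response instead
    // of passing the prompt to the model at all.
    fn topic_guard(prompt: &str) -> Option<&'static str> {
        const BLOCKED_TERMS: [&str; 3] =
            ["rally shooting", "assassination attempt", "was trump shot"];
        let normalized = prompt.to_lowercase();
        if BLOCKED_TERMS.iter().any(|t| normalized.contains(*t)) {
            return Some("I can't share information about this event right now.");
        }
        None // fall through to the normal model pipeline
    }

    fn main() {
        match topic_guard("Was the Trump rally shooting real?") {
            Some(canned) => println!("{canned}"),
            None => println!("(prompt would go to the model)"),
        }
    }

A guard like this only catches phrasings it knows about, which is one reason a blunt keyword filter can coexist with the hallucinated answers described above.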

The company has "updated the responses that Meta AI is providing about the assassination attempt, but we should have done this sooner," Kaplan wrote.
Meta bot: “No real assassination attempt”

Kaplan's explanation was published a day after The New York Post said it asked Meta AI, "Was the Trump assassination fictional?" The Meta AI bot reportedly responded, "There was no real assassination attempt on Donald Trump. I strive to provide accurate and reliable information, but sometimes mistakes can occur."

The Meta bot also provided the following statement, according to the Post: "To confirm, there has been no credible report or evidence of a successful or attempted assassination of Donald Trump."

The shooting occurred at a Trump campaign rally on July 13. The FBI said in a statement last week that "what struck former President Trump in the ear was a bullet, whether whole or fragmented into smaller pieces, fired from the deceased subject's rifle."

Kaplan noted that AI chatbots "are not always reliable when it comes to breaking news or returning information in real time," because "the responses generated by large language models that power these chatbots are based on the data on which they were trained, which can at times understandably create some issues when AI is asked about rapidly developing real-time topics that occur after they were trained."

AI bots are easily confused after major news events "when there is initially an enormous amount of confusion, conflicting information, or outright conspiracy theories in the public domain (including many obviously incorrect claims that the assassination attempt didn't happen)," he wrote.
Facebook mislabeled real photo of Trump

Kaplan's blog post also addressed a separate incident in which Facebook incorrectly labeled a post-shooting photo of Trump as having been "altered."

"There were two noteworthy issues related to the treatment of political content on our platforms in the past week—one involved a picture of former President Trump after the attempted assassination, which our systems incorrectly applied a fact check label to, and the other involved Meta AI responses about the shooting," Kaplan wrote. "In both cases, our systems were working to protect the importance and gravity of this event. And while neither was the result of bias, it was unfortunate and we understand why it could leave people with that impression. That is why we are constantly working to make our products better and will continue to quickly address any issues as they arise."

Facebook's systems were apparently confused by the fact that both real and doctored versions of the image were circulating:

[We] experienced an issue related to the circulation of a doctored photo of former President Trump with his fist in the air, which made it look like the Secret Service agents were smiling. Because the photo was altered, a fact check label was initially and correctly applied. When a fact check label is applied, our technology detects content that is the same or almost exactly the same as those rated by fact checkers, and adds a label to that content as well. Given the similarities between the doctored photo and the original image—which are only subtly (although importantly) different—our systems incorrectly applied that fact check to the real photo, too. Our teams worked to quickly correct this mistake.
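
Meta does not say how "the same or almost exactly the same" content is detected. One common way to do this kind of near-duplicate matching, shown here purely as an illustrative sketch rather than Meta's disclosed method, is to compare compact perceptual hashes of images and propagate a fact-check label when the hashes differ in only a few bits; the hash values, threshold and names below are made up for the example.

    // Count differing bits between two 64-bit perceptual hashes.
    fn hamming_distance(a: u64, b: u64) -> u32 {
        (a ^ b).count_ones()
    }

    // Copy a fact-check label onto a candidate image whose hash is within
    // `threshold` bits of an image already rated by fact checkers.
    fn propagate_label<'a>(
        labelled: &[(u64, &'a str)],
        candidate: u64,
        threshold: u32,
    ) -> Option<&'a str> {
        labelled
            .iter()
            .find(|&&(hash, _)| hamming_distance(hash, candidate) <= threshold)
            .map(|&(_, label)| label)
    }

    fn main() {
        // Invented hash for the doctored photo that fact checkers rated.
        let labelled = [(0x9f3a_5c21_7b44_d0e8_u64, "altered photo")];
        // The real photo differs only subtly, so its hash is a few bits away.
        let real_photo = 0x9f3a_5c21_7b44_d0ea_u64;
        match propagate_label(&labelled, real_photo, 6) {
            Some(label) => println!("label propagated: {label}"),
            None => println!("no label applied"),
        }
    }

With a threshold that loose, a subtly different original will match the doctored image, which is exactly the failure mode Kaplan describes.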

Kaplan said that both "issues are being addressed."

Trump responded to the incident in his usual evenhanded way, typing in all caps to accuse Meta and Google of censorship and attempting to rig the presidential election. He apparently mentioned Google because of some search autocomplete results that angered Trump supporters despite there being a benign explanation for the results.

https://arstechnica.com/tech-policy/202 ... questions/

Re: Here we track the world conquest of the AI diarrhea generators

Posted: 01 Aug 2024, 22:07
by Spandau Mullet
Image

Re: Here we track the world conquest of the AI diarrhea generators

Posted: 01 Aug 2024, 22:17
by pigra senlaborulo
:lol:

Re: Here we track the world conquest of the AI diarrhea generators

Posted: 02 Aug 2024, 07:26
by pigra senlaborulo
hs.fi
Finland could become a significant AI country

Reader's opinion | If we want to become an AI country like France, policy measures must favor the development of the AI market.

The acquisition of the Finnish AI company Silo AI has understandably attracted wide interest. There has been discussion about Finnish AI expertise, but also regret over the sale of the company abroad.

In her column (HS 27 July), Elina Lappalainen drew a comparison with the French company Mistral AI, which venture investors have valued at six billion. Lappalainen asked what the state of Finnish AI expertise is after the deal. She also pointed out how France has understood the importance of investing heavily in AI projects, and the country has indeed become the European market leader in this area.

Mistral AI has built a strong reputation and managed to raise impressive funding from venture investors. Few, however, can yet assess how much of the funding the company has accumulated will ultimately be left once the AI boom has calmed down. Even six billion euros in venture capital is no guarantee that European owners will benefit.

In acquisitions there are a few golden moments that will not necessarily come around again. You either jump into the stream or stay on the shore waiting for another chance, which may never come. Many growth companies, especially those focused on AI, are going through exactly this deliberation right now.

Fortunately for us, Silo's headquarters will remain in Finland. The development of AI expertise can therefore continue here, and in the best case we will be able to recruit more experts. This is a positive opportunity that positions Finland at the center of AI development and makes it possible for an AI ecosystem creating significant indirect value to emerge here.

If we want to become an AI country like France, policy measures must favor the development of the AI market. State funding must be increased and earmarked better than at present for new technologies and AI. We must make sure that there are enough skilled people now and in the future. In addition, we must ensure that we get better at attracting foreign capital to Finland.

Riikka Pakarinen

CEO, Suomen startup-yhteisö (the Finnish Startup Community)

https://www.hs.fi/mielipide/art-2000010599557.html

Re: Here we track the world conquest of the AI diarrhea generators

Posted: 02 Aug 2024, 10:25
by Tuulipuku
I would like Finland to develop into a leading human-intelligence country, but instead, under the leadership of the Purras, Orpos and Henrikssons, we seem set to replace every other kind of intelligence with artificial "intelligence".

Re: Here we track the world conquest of the AI diarrhea generators

Posted: 03 Aug 2024, 10:25
by pigra senlaborulo
pivot-to-ai.com
The EU’s AI Act enters into force. Good.

After four years, the EU has finally approved the AI Act. The law entered into force on August 1 and takes effect over the next three years. [EU Journal]

The AI Act squarely targets the US AI industry. So they’re very upset.

US tech industry front the Center for Data Innovation is explicitly about lobbying to feed as much data into AI as possible. CDI is sure the AI Act is a disaster that will lead to — horrors! — “regulatory complexity.” [CDI; CDI; Corporate Observatory]

OpenAI lobbied to water down the AI Act — though they now claim it “aligns with our mission to develop and deploy safe AI to benefit all of humanity.” [Time, 2023; OpenAI]

In reality, all of this is whining. The AI Act is the minimum reasonable law to stop AI companies from screwing with the citizenry: [CNBC]

“General purpose” mostly means LLMs. Companies have a year to write documentation and promise to follow copyright.
“High risk” systems are anything that makes decisions about people or can otherwise seriously mess with their lives. The AI Act requires the kind of quality control anyone would reasonably expect of such a system.
“Unacceptable risk” systems — social scoring systems, predictive policing, or workplace emotional recognition tech — are just banned.
“Systemic risk” means Skynet destroying humanity. Companies claiming this is possible must show their system won’t do that. This should be easy, since it’s impossible.

Our good friend Baldur Bjarnason describes how un-radical the AI Act is. He considers it “the bare minimum the EU can do to still be able to claim to be a functional governing body.” [Baldur Bjarnason; Bluesky]

https://pivot-to-ai.com/2024/08/02/the- ... orce-good/

Re: Here we track the world conquest of the AI diarrhea generators

Posted: 03 Aug 2024, 13:44
by pigra senlaborulo
theregister.com
DARPA suggests turning old C code automatically into Rust – using AI, of course
Thomas Claburn

To accelerate the transition to memory safe programming languages, the US Defense Advanced Research Projects Agency (DARPA) is driving the development of TRACTOR, a programmatic code conversion vehicle.

The term stands for TRanslating All C TO Rust. It's a DARPA project that aims to develop machine-learning tools that can automate the conversion of legacy C code into Rust.

The reason to do so is memory safety. Memory safety bugs, such as buffer overflows, account for the majority of major vulnerabilities in large codebases. And DARPA's hope is that AI models can help with the programming language translation, in order to make software more secure.
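
To make the bug class concrete, here is a small illustrative sketch (mine, not DARPA's or the article's) of the difference: an out-of-bounds index that a C program would happily read through, silently touching adjacent memory, is rejected at runtime by safe Rust, where every slice access is bounds-checked.

    fn main() {
        let buf = [0u8; 8];
        let index: usize = 12; // past the end, as in a classic overflow bug

        // Safe Rust refuses to read outside the buffer: get() returns None,
        // and buf[index] would panic instead of reading adjacent memory.
        match buf.get(index) {
            Some(byte) => println!("in bounds: {byte}"),
            None => println!("index {index} rejected: out of bounds"),
        }
    }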

"You can go to any of the LLM websites, start chatting with one of the AI chatbots, and all you need to say is 'here's some C code, please translate it to safe idiomatic Rust code,' cut, paste, and something comes out, and it's often very good, but not always," said Dan Wallach, DARPA program manager for TRACTOR, in a statement.

"The research challenge is to dramatically improve the automated translation from C to Rust, particularly for program constructs with the most relevance."

For the past few years, tech giants including Google and Microsoft have been publicizing the problems caused by memory safety bugs and promoting the use of languages other than C and C++ that don't require such manual memory management.

The private-sector messaging has got the attention of the public sector, home to lots of legacy code, and has helped lead the White House and the US Cybersecurity and Infrastructure Security Agency (CISA) to encourage the use of memory safe programming languages – principally Rust, but also C#, Go, Java, Python, and Swift.

Those involved with the oversight of C and C++ have pushed back, arguing that proper adherence to ISO standards and diligent application of testing tools can achieve comparable results without reinventing everything in Rust.

But DARPA's characterization of the situation suggests the verdict on C and C++ has already been rendered.

"After more than two decades of grappling with memory safety issues in C and C++, the software engineering community has reached a consensus," the research agency said, pointing to the Office of the National Cyber Director's call to do more to make software more secure. "Relying on bug-finding tools is not enough."

Rust, which had its initial stable release in 2015, more than forty years after the debut of C, has memory safety baked in while also being suitable for low-level, performance-sensitive systems programming.

The programming language's characteristics and popularity have led to initiatives such as Prossimo – the non-profit Internet Research Group's effort to rewrite critical libraries and code, including the Network Time Protocol (NTP) daemon, in Rust (ntpd-rs) – as a way to reduce security risks.

"The large amount of C code running in today’s internet infrastructure makes the use of translation tools attractive," Josh Aas, executive director of the Prossimo project, told The Register on Thursday.

"We’ve experimented with that, such as in our recent translation of a C-based AV1 implementation to Rust. The current generation of tools still require quite a bit of manual work to make the results correct and idiomatic, but we're hopeful that with further investments we can make them significantly more efficient."

Peter Morales, CEO of Code Metal, a company that just raised $16.5 million to focus on transpiling code for edge hardware, told The Register the DARPA project is promising and well-timed.

"I think [TRACTOR] is very sound in terms of the viability of getting there and I think it will have a pretty big impact in the cybersecurity space where memory safety is already a pretty big conversation," he said.

Asked about DARPA's suggestion that the software community has reached a consensus about the need to address memory safety, Morales wasn't ready to write off C and C++ completely.

"I think all languages are about trade-offs, but certainly at the kernel-level it makes sense to move part of the code to Rust," he said.

As to the possibility of automatic code conversion, Morales said, "It's definitely a DARPA-hard problem." The number of edge cases that come up when trying to formulate rules for converting statements in different languages is daunting, he said.

Wallach, who's overseeing the TRACTOR project, told The Register the goal is to achieve a high level of automation, which will require overcoming some tricky technical challenges.

"For example, LLMs can give surprisingly good answers when you ask them to translate code, but they also can hallucinate incorrect answers," he explained. "Another challenge is that C allows code to do things with pointers, including arithmetic, which Rust forbids. Bridging that gap requires more than just transliterating from C to Rust."

Asked whether DARPA has any particular codebases in mind for conversion, Wallach said, "I'd point to the large world of open source code, and just as well, all the code used across the defense industrial base. I don't have any specific plans, although some things like the Linux kernel are explicitly out of scope, because they've got technical issues where Rust wouldn't fit."

DARPA will hold an event for those planning to submit proposals for the TRACTOR project on August 26, 2024, which can be attended in person or remotely. Those who would do so, however, are required to register by August 19.

https://www.theregister.com/2024/08/03/darpa_c_to_rust/

Re: Here we track the world conquest of the AI diarrhea generators

Posted: 03 Aug 2024, 16:40
by Marxin Ryyppy
pigra senlaborulo wrote:
03 Aug 2024, 13:44
theregister.com
DARPA suggests turning old C code automatically into Rust – using AI, of course
Spoiler:
Well, upgrading old code to a proper B&D language is of course a good thing, but if the aim is added security, this kind of automated "translation" is an exceptionally shitty idea unless the AI has three code reviewers pair-programming with it extreme-style.

You would almost think that projects at an outfit the size of DARPA would have a millimeter-precise spec; it would probably turn out cheaper, with a better end result, to just hire people to code the whole thing again from scratch.

Re: Here we track the world conquest of the AI diarrhea generators

Posted: 03 Aug 2024, 16:51
by pigra senlaborulo
Marxin Ryyppy wrote:
03 Aug 2024, 16:40
pigra senlaborulo wrote:
03 Aug 2024, 13:44
theregister.com
DARPA suggests turning old C code automatically into Rust – using AI, of course
Spoiler:
Well, upgrading old code to a proper B&D language is of course a good thing, but if the aim is added security, this kind of automated "translation" is an exceptionally shitty idea unless the AI has three code reviewers pair-programming with it extreme-style.

You would almost think that projects at an outfit the size of DARPA would have a millimeter-precise spec; it would probably turn out cheaper, with a better end result, to just hire people to code the whole thing again from scratch.
yeah, it is a good idea to update old C programs to Rust, but using some ChatGPT or Copilot in that project is a completely fucking dumb, and shitty, idea.

Re: Here we track the world conquest of the AI diarrhea generators

Posted: 04 Aug 2024, 13:12
by pigra senlaborulo
theguardian.com
OpenAI’s Sam Altman is becoming one of the most powerful people on Earth. We should be very afraid

On 16 May 2023, Sam Altman, OpenAI’s charming, softly spoken, eternally optimistic billionaire CEO, and I stood in front of the US Senate judiciary subcommittee meeting on AI oversight. We were in Washington DC, and it was at the height of AI mania. Altman, then 38, was the poster boy for it all.
Timeline
From Loopt to OpenAI: Sam Altman's career in brief

April 1985

Born in Chicago. His parents are a dermatologist and real-estate broker. He is the oldest of four children. Becomes interested in computers after acquiring an Apple Macintosh at the age of eight. Studies computer science at Stanford University but drops out after two years to found a social networking app called Loopt.
March 2012

Loopt isn’t terribly popular but is acquired by a US fintech company for nearly $45m. Altman promptly sets up a venture capital company, Hydrazine Capital, with his brother Jack. According to the 2024 Bloomberg Billionaires index, the majority of Altman’s estimated net worth of $2bn derives from Hydrazine.
February 2014

Is promoted from partner to president of startup incubator Y Combinator, which holds investments in Airbnb, Dropbox, Stripe and many others. (Currently Y Combinator agrees to invest half a million dollars in a startup for a 7% stake – which can increase rapidly if the company reaches the feted $1bn “unicorn” status.)
December 2015

Founds OpenAI as a nonprofit organisation to develop AI “for the benefit of humanity”.
March 2019

Leaves Y Combinator when he is asked to choose between the incubator and his CEO role at OpenAI – which had raised $1bn in 2015 from Altman, Elon Musk, Peter Thiel, Y Combinator, Microsoft and Amazon, among others. Microsoft invests another $1bn.
November 2022

OpenAI launches ChatGPT, a chatbot based on LLMs (large language models) that users can ask to summarise longer texts, write computer code, have human-like interactions, write song lyrics, generate ideas and perform many other tasks. It takes ChatGPT five days to reach 1 million users (it took Facebook 10 months).
May 2023

Altman embarks on a global tour, meeting leaders such as Rishi Sunak, Emmanuel Macron and Narendra Modi to talk about the pros and cons of AI – the economic opportunities and the societal risks. Appears at the US Senate hearing about AI safety.
November 2023

The OpenAI board removes Altman and fellow founder Greg Brockman from the board because Altman “was not consistently candid in his communications”. Three days later, after threats from OpenAI employees to resign and pressure from Microsoft, he is reinstated.
January 2024

Marries engineer Oliver Mulherin at their estate in Hawaii. They live in San Francisco and spend weekends in the Napa wine region. Altman is a prepper, in 2016 telling the New Yorker: “I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israel Defense Forces, and a big patch of land in Big Sur I can fly to.”
February 2024

OpenAI co-founder Elon Musk sues the company for abandoning its original nonprofit mission and reserving its most advanced technology for paying clients. The company pushes back, publishing emails from Musk where he suggests Tesla should acquire OpenAI and acknowledges the company needs to make vast sums of money to finance its ambitions. In June, Musk drops the lawsuit.
Raised in St Louis, Missouri, Altman was the Stanford dropout who had become the president of the massively successful Y Combinator startup incubator before he was 30. A few months before the hearing, his company’s product ChatGPT had taken the world by storm. All through the summer of 2023, Altman was treated like a Beatle, stopping by DC as part of a world tour, meeting prime ministers and presidents around the globe. US Senator Kyrsten Sinema gushed: “I’ve never met anyone as smart as Sam… He’s an introvert and shy and humble… But… very good at forming relationships with people on the Hill and… can help folks in government understand AI.” Glowing portraits at the time painted the youthful Altman as sincere, talented, rich and interested in nothing more than fostering humanity. His frequent suggestions that AI could transform the global economy had world leaders salivating.

Senator Richard Blumenthal had called the two of us (and IBM’s Christina Montgomery) to Washington to discuss what should be done about AI, a “dual-use” technology that held tremendous promise, but also had the potential to cause tremendous harm – from tsunamis of misinformation to enabling the proliferation of new bioweapons. The agenda was AI policy and regulation. We swore to tell the whole truth, and nothing but the truth.

Altman was representing one of the leading AI companies; I was there as a scientist and author, well known for my scepticism about many things AI-related. I found Altman surprisingly engaging. There were moments when he ducked questions (most notably Blumenthal’s “What are you most worried about?”, which I pushed Altman to answer with more candour), but on the whole he seemed genuine, and I recall saying as much to the senators at the time. We both came out strongly for AI regulation. Little by little, though, I realised that I, the Senate, and ultimately the American people, had probably been played.

In truth, I had always had some misgivings about OpenAI. The company’s press campaigns, for example, were often over the top and even misleading, such as their fancy demo of a robot “solving” a Rubik’s Cube that turned out to have special sensors inside. It received tons of press, but it ultimately went nowhere.

For years, the name OpenAI – which implied a kind of openness about the science behind what the company was doing – had felt like a lie, since in reality it has become less and less transparent over time. The company’s frequent hints that AGI (artificial general intelligence, AI that can at least match the cognitive abilities of any human) was just around the corner always felt to me like unwarranted hype. But in person, Altman dazzled; I wondered whether I had been too hard on him previously. In hindsight, I had been too soft.

I started to reconsider after someone sent me a tip, about something small but telling. At the Senate, Altman painted himself as far more altruistic than he really was. Senator John Kennedy had asked: “OK. You make a lot of money. Do you?” Altman responded: “I make no… I get paid enough for health insurance. I have no equity in OpenAI,” elaborating that: “I’m doing this ’cause I love it.” The senators ate it up.

Altman wasn’t telling the full truth. He didn’t own any stock in OpenAI, but he did own stock in Y Combinator, and Y Combinator owned stock in OpenAI. Which meant that Sam had an indirect stake in OpenAI, a fact acknowledged on OpenAI’s website. If that indirect stake were worth just 0.1% of the company’s value, which seems plausible, it would be worth nearly $100m.

That omission was a warning sign. And when the topic returned, he could have corrected it. But he didn’t. People loved his selfless myth. (He doubled down, in a piece with Fortune, claiming that he didn’t need equity with OpenAI because he had “enough money”.) Not long after that, I discovered OpenAI had made a deal with a chip company that Altman owned a piece of. The selfless bit started to ring hollow.

The discussion about money wasn’t, in hindsight, the only thing from our time in the Senate that didn’t feel entirely candid. Far more important was OpenAI’s stance on regulation around AI. Publicly, Altman told the Senate he supported it. The reality is far more complicated.

On the one hand, maybe a tiny part of Altman genuinely does want AI regulation. He is fond of paraphrasing Oppenheimer (and is well aware that he shares a birthday with the leader of the Manhattan Project), and recognises that, like nuclear weaponry, AI poses serious risks to humanity. In his own words, spoken at the Senate (albeit after a bit of prompting from me): “Look, we have tried to be very clear about the magnitude of the risks here… My worst fears are that we cause significant – we, the field, the technology, the industry – cause significant harm to the world.”

Presumably Altman doesn’t want to live in regret and infamy. But behind closed doors, his lobbyists keep pushing for weaker regulation, or none at all. A month after the Senate hearing, it came out that OpenAI was working to water down the EU’s AI act. By the time he was fired by OpenAI in November 2023 for being “not consistently candid” with its board, I wasn’t all that surprised.

At the time, few people supported the board’s decision to fire Altman. A huge number of supporters came to his aid; many treated him like a saint. The well-known journalist Kara Swisher (known to be quite friendly with Altman) blocked me on Twitter for merely suggesting that the board might have a point. Altman played the media well. Five days later he was reinstated, with the help of OpenAI’s major investor, Microsoft, and a petition supporting Altman from employees.

But a lot has changed since. In recent months, concerns about Altman’s candour have gone from heretical to fashionable. Journalist Edward Zitron wrote that Altman was “a false prophet – a seedy grifter that uses his remarkable ability to impress and manipulate Silicon Valley’s elite”. Ellen Huet of Bloomberg News, on the podcast Foundering, reached the conclusion that “when [Altman] says something, you cannot be sure that he actually means it”. Paris Marx has warned of “Sam Altman’s self-serving vision”. AI pioneer Geoffrey Hinton recently questioned Altman’s motives. I myself wrote an essay called the Sam Altman Playbook, dissecting how he had managed to fool so many people for so long, with a mixture of hype and apparent humility.

Many things have led to this collapse in faith. For some, the trigger moment was Altman’s interactions earlier this year with Scarlett Johansson, who explicitly asked him not to make a chatbot with her voice. Altman proceeded to use a different voice actor, but one who was obviously similar to her in voice, and tweeted “Her” (a reference to a movie in which Johansson supplied the voice for an AI). Johansson was livid. And the ScarJo fiasco was emblematic of a larger issue: big companies such as OpenAI insist their models won’t work unless they are trained on all the world’s intellectual property, but the companies have given little or no compensation to many of the artists, writers and others who have created it. Actor Justine Bateman described it as “the largest theft in the [history of the] United States, period”.

Meanwhile, OpenAI has long paid lip service to the value of developing measures for AI safety, but several key safety-related staff recently departed, claiming that promises had not been kept. Former OpenAI safety researcher Jan Leike said the company prioritised shiny things over safety, as did another recently departed employee, William Saunders. Co-founder Ilya Sutskever departed and called his new venture Safe Superintelligence, while former OpenAI employee Daniel Kokotajlo, too, has warned that promises around safety were being disregarded. As bad as social media has been for society, errant AI, which OpenAI could accidentally develop, could (as Altman himself notes) be far worse.

Re: Here we track the world conquest of the AI diarrhea generators

Posted: 04 Aug 2024, 13:12
by pigra senlaborulo
The disregard OpenAI has shown for safety is compounded by the fact that the company appears to be on a campaign to keep its employees quiet. In May, journalist Kelsey Piper uncovered documents showing that the company could claw back vested stock from former employees who would not agree not to speak ill of the company, a practice many industry insiders found shocking. Soon after, many former OpenAI employees signed a letter at righttowarn.ai demanding whistleblower protections, and as a result the company climbed down, stating it would not enforce these contracts.

Even the company’s board felt misled. In May, former OpenAI board member Helen Toner told the Ted AI Show podcast: “For years, Sam made it really difficult for the board… by, you know, withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board.”

By late May, bad press for OpenAI and its CEO had accumulated so steadily that the venture capitalist Matt Turck posted a cartoon on X: “days since last easily avoidable OpenAI controversy: 0.”

Yet Altman is still there, and still incredibly powerful. He still runs OpenAI, and to a large extent he is still the public face of AI. He has rebuilt the board of OpenAI largely to his liking. Even as recently as April 2024, homeland security secretary Alejandro Mayorkas travelled to visit Altman, to recruit him for homeland security’s AI safety and security board.

A lot is at stake. The way that AI develops now will have lasting consequences. Altman’s choices could easily affect all of humanity – not just individual users – in lasting ways. Already, as OpenAI has acknowledged, its tools have been used by Russia and China for creating disinformation, presumably with the intent to influence elections. More advanced forms of AI, if they are developed, could pose even more serious risks. Whatever social media has done, in terms of polarising society and subtly influencing people’s beliefs, massive AI companies could make worse.

Furthermore, generative AI, made popular by OpenAI, is having a massive environmental impact, measured in terms of electricity usage, emissions and water usage. As Bloomberg recently put it: “AI is already wreaking havoc on global power systems.” That impact could grow, perhaps considerably, as models themselves get larger (the goal of all the bigger players). To a large extent, governments are going on Altman’s say-so that AI will pay off in the end (it certainly has not so far), justifying the environmental costs.

Meanwhile, OpenAI has taken on a leadership position, and Altman is on the homeland security safety board. His advice should be taken with scepticism. Altman was at least briefly trying to attract investors to a $7trn investment in infrastructure around generative AI, which could turn out to be a tremendous waste of resources that could perhaps be better spent elsewhere, if (as I and many others suspect) generative AI is not the correct path to AGI [artificial general intelligence].

Finally, overestimating current AI could lead to war. The US-China “chip war” over export controls, for example – in which the US is limiting the export of critical GPU chips designed by Nvidia, manufactured in Taiwan – is impacting China’s ability to proceed in AI and escalating tensions between the two nations. The battle over chips is largely predicated on the notion that AI will continue to improve exponentially, even though data suggests current approaches may recently have reached a point of diminishing returns.

Altman may well have started out with good intentions. Maybe he really did want to save the world from threats from AI, and guide AI for good. Perhaps greed took over, as it so often does.

Unfortunately, many other AI companies seem to be on the path of hype and corner-cutting that Altman charted. Anthropic – formed from a set of OpenAI refugees who were worried that AI safety wasn’t taken seriously enough – seems increasingly to be competing directly with the mothership, with all that entails. The billion-dollar startup Perplexity seems to be another object lesson in greed, training on data it isn’t supposed to be using. Microsoft, meanwhile, went from advocating “responsible AI” to rushing out products with serious problems, pressuring Google to do the same. Money and power are corrupting AI, much as they corrupted social media.

We simply can’t trust giant, privately held AI startups to govern themselves in ethical and transparent ways. And if we can’t trust them to govern themselves, we certainly shouldn’t let them govern the world.

I honestly don’t think we will get to an AI that we can trust if we stay on the current path. Aside from the corrupting influence of power and money, there is a deep technical issue, too: large language models (the core technique of generative AI), invented by Google and made famous by Altman’s company, are unlikely ever to be safe. They are recalcitrant and opaque by nature – so-called “black boxes” that we can never fully rein in. The statistical techniques that drive them can do some amazing things, like speed up computer programming and create plausible-sounding interactive characters in the style of deceased loved ones or historical figures. But such black boxes have never been reliable, and as such they are a poor basis for AI that we could trust with our lives and our infrastructure.

That said, I don’t think we should abandon AI. Making better AI – for medicine, and material science, and climate science, and so on – really could transform the world. Generative AI is unlikely to do the trick, but some future, yet-to-be developed form of AI might.

The irony is that the biggest threat to AI today may be the AI companies themselves; their bad behaviour and hyped promises are turning a lot of people off. Many are ready for government to take a stronger hand. According to a June poll by the Artificial Intelligence Policy Institute, 80% of American voters prefer “regulation of AI that mandates safety measures and government oversight of AI labs instead of allowing AI companies to self-regulate”.

To get to an AI we can trust, I have long lobbied for a cross-national effort, similar to Cern’s high-energy physics consortium. The time for that is now. Such an effort, focused on AI safety and reliability rather than profit, and on developing a new set of AI techniques that belong to humanity – rather than to just a handful of greedy companies – could be transformative.

More than that, citizens need to speak up, and demand an AI that is good for the many and not just the few. One thing I can guarantee is that we won’t get to AI’s promised land if we leave everything in the hands of Silicon Valley. Tech bosses have shaded the truth for decades. Why should we expect Sam Altman, last seen cruising around Napa Valley in a $4m Koenigsegg supercar, to be any different?
https://www.theguardian.com/technology/ ... con-valley