
Re: Here we follow the AI diarrhea generators' world conquest

Posted: 19 Feb 2024, 18:46
by dundergubbe
Goddammit, my brain got tied in knots over those examples before I realized the copy-paste had mangled the exponents :x

Re: Here we follow the AI diarrhea generators' world conquest

Posted: 19 Feb 2024, 18:47
by Spandau Mullet
raju röllykkä wrote:
19 Feb 2024, 18:46
Goddammit, my brain got tied in knots over those examples before I realized the copy-paste had mangled the exponents :x
whoopsie, goddamn :D

Re: Here we follow the AI diarrhea generators' world conquest

Posted: 20 Feb 2024, 19:32
by pigra senlaborulo
mikrobitti.fi
Chatbot blathered false information to a customer: reluctant airline had to pay for its blunders
Joakim Kullas

The airline Air Canada has to pay a passenger several hundred Canadian dollars in damages because the chatbot on its website gave the customer incorrect information about booking flights.

The Register and CBC News reported on the case; according to them, the customer had bought plane tickets last November to attend his grandmother's funeral. The website's chatbot had told him about a discount granted to people traveling because of the death of a family member.

The chatbot told the customer that if he bought a regular-price ticket, he would have 90 days to request the discount afterwards. When the customer applied for the discount, however, he was told it could not be granted after the tickets had been purchased. The customer took his compensation claim to a tribunal, saying the chatbot had given him false information.

To avoid paying compensation, the airline tried to argue that its chatbot was a separate legal entity responsible for its own actions.

According to the tribunal handling the claim, however, the chatbot is part of Air Canada's website, and the company is responsible for all the information on its site.

"It makes no difference whether the information comes from a static page or from a chatbot."

The airline also tried to point out that the chatbot had given the customer a link to a page on the website stating that discounts cannot be applied for retroactively.

According to the tribunal, however, the airline's explanation did not show why the information on its website should be more trustworthy than the information given by its chatbot, nor why the customer should have to verify information obtained from the airline's website in another part of the same site.

https://www.mikrobitti.fi/uutiset/mb/91 ... 0579f99fc1

Re: Here we follow the AI diarrhea generators' world conquest

Posted: 20 Feb 2024, 22:36
by 38911 BASIC BYTES FREE
Well then. Now if only companies would grasp that risk more broadly and stop using nonsense generators as the customer interface.

Re: Here we follow the AI diarrhea generators' world conquest

Posted: 21 Feb 2024, 07:14
by pigra senlaborulo
time.com
AI Monitoring in Schools Can Come at a Great Cost

Suicide is now the second leading cause of death among American youth between the ages of 10 and 14. The problem of youth suicide has only gotten worse lately, in part due to a nationwide shortage of mental health professionals, particularly in schools, where, if available, an on-staff psychologist, counselor, or social worker can help identify at-risk youth and take steps toward an appropriate intervention.

As a remedy, school administrators, faced with daunting funding and staffing shortages, have increasingly looked to technology to help them manage the youth suicide crisis. Specifically, companies such as Bark, Gaggle, GoGuardian and Securly have developed AI-based student monitoring software that tracks students' computer use to identify students facing mental health challenges. It is generally designed to operate in the background of students' school-issued computing devices and accounts, and to flag activity that may indicate that they are at risk for self-harm.

This tracking software is being used nationwide, on millions of students. But many parents and community members remain unaware of its existence. Students may have some sense that their school devices are being monitored, but likely have a limited understanding of how it is used. And even though identifying suicide risk might be a worthwhile objective, AI surveillance may feel like a significant breach of privacy, while also posing other unanticipated harms.

As researchers whose work has focused on dimensions of inequality, mental health, and technology policy, we interviewed school staff to better understand the benefits and risks of this software. One superintendent told us that this monitoring software can identify at-risk students who may not already be on the radar of school staff, providing an opportunity to intervene before the situation gets worse.

We are researchers, but we are all also parents, and this added layer of safety for suicide risk detection can feel, at first, like a no-brainer. The idea of losing a child is terrifying, and so it is completely understandable that schools would reach for a seemingly low-cost tool that can “catch” the private, sensitive, suicide-related thoughts that students might not disclose to anyone outside their Google search bar.

But the problem is that, apart from anecdotes, there is little hard evidence supporting the accuracy of this software, and there are numerous examples throughout history where well-meaning approaches to mental health intervention have caused unintended harms. Similarly, it is increasingly clear that emerging technology also has a range of harmful collateral effects on youth mental health.


Through a careful review of the existing evidence, and through interviews with dozens of school staff, parents, and others, we found that AI-based monitoring, far from being a solution to the persistent and growing problem of youth suicide, might well give rise to more problems than it seeks to solve.

First, the use of AI-based monitoring threatens student privacy. Because the software runs while students use their school-issued computing devices and accounts, it has the potential to collect large amounts of data about their lives. While some companies have taken voluntary pledges to safeguard student data, there is no national regulation restricting much of the data that are collected, how they are stored, and whether they are shared.

Adding to this privacy risk, families may find it difficult to opt out of the software. We find that across many school districts, families are required to consent to AI-based monitoring as a condition of using school-issued computing devices to begin with. If families opt out of monitoring, they must provide their own computer for school use, which is not an affordable option for many families.

Second, our research shows that many parents and researchers have concerns that using AI-based algorithms to identify at-risk students could exacerbate inequalities. For example, there have been reports that internet searches of LGBTQ+ students have been flagged at disproportionate rates by AI software. Their activities may then be brought to the attention of school officials, involuntarily “outing” these students.

The potential for suicide risk prediction algorithms to be biased against minoritized groups has been well documented through other studies. And while many have claimed that these algorithms can be corrected for bias, there is a lack of transparency about just how and when AI alerts get generated, which makes it difficult to audit the data in order to better understand if, indeed, it is biased. Another 2023 study raised further concerns about the alerts generated by AI-based student monitoring software, documenting that the programs consistently flag content related to race, gender and sexual orientation. This includes searches related to topics such as Malcolm X and the Gay Men’s Chorus of Washington.
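What such an audit would look for is simple in principle, even though the data needed to run one is not released. A minimal sketch in Python, with all counts invented purely for illustration:

```python
# Hypothetical audit of alert rates across student groups. All counts
# are invented; a real audit would need data the vendors do not release.
alerts_by_group = {
    "group_a": (120, 4000),  # (students flagged, students monitored)
    "group_b": (310, 4000),
}

for group, (flagged, monitored) in alerts_by_group.items():
    print(f"{group}: flag rate {flagged / monitored:.1%}")

# A large, unexplained gap between groups is exactly the kind of
# disparity such an audit would surface.
```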

Lastly, while the AI software does the flagging, it is then up to schools to decide how to respond to the alerts they receive. Throughout our interviews, we heard stories of alerts generated by AI-based monitoring being used to discipline students. For example, we talked to a teacher who told us about a student experiencing a mental health challenge who was suspended from school rather than being referred to a counselor or another mental health professional.

Worse still, AI-based monitoring might lead to increased encounters between students and law enforcement. For example, we found that, on weekends and school holidays, when they do not have staff on hand to review information, many schools automatically direct AI-generated suicide risk alerts to local law enforcement. From the school's point of view, such a move is often the best way to ensure that a student experiencing a mental health crisis receives immediate help. But law enforcement might not be best positioned to help students in need of support, and might even exacerbate problems. This is something we have already seen in other situations, when police have been called in to assist with mental health crises; the risk of violent interactions with law enforcement is real, especially for youth of color, and must be considered in weighing the pros and cons of using these tools.

Some people we interviewed also pointed out that this software has the potential to fuel existing inequalities in school discipline. For example, we know that students of color already face disproportionately high rates of school disciplinary action, such as suspensions and expulsions, which is connected to the school-to-prison pipeline. Alerts created by AI software could fuel these disparities by increasing the likelihood of law enforcement contact.


Ultimately, it remains unclear whether these tools can accurately detect suicide risk in students. So far, no studies have followed up with the students these programs have flagged as "at risk" for suicide to see whether they actually were at risk ("true positives") or not ("false positives"); nor have studies looked at the extent to which students at risk for suicide were not flagged by the programs ("false negatives"). School and law enforcement responses to these alerts, and ultimate student outcomes (whether a student receives medical attention or mental health care, or whether a flagged student has a violent encounter with law enforcement), are also not documented. This lack of evidence means it is not clear that the benefits of the software outweigh the risks we found in our research.
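To make concrete what such an evaluation would require, here is a minimal sketch of the accounting the paragraph above describes. Every count is invented; as noted, no study has actually collected these numbers for any of the products named.

```python
# Hypothetical follow-up counts for a cohort of monitored students.
# All numbers are invented for illustration only.
true_positives = 40    # flagged students who really were at risk
false_positives = 360  # flagged students who were not at risk
false_negatives = 25   # at-risk students the software never flagged

precision = true_positives / (true_positives + false_positives)
sensitivity = true_positives / (true_positives + false_negatives)

print(f"precision:   {precision:.0%}  (share of alerts that were warranted)")
print(f"sensitivity: {sensitivity:.0%}  (share of at-risk students caught)")
```

Until numbers like these are measured and published, any claim about the software's accuracy is unverifiable.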

Parents, students, school staff, and health professionals must carefully weigh the potential benefits and challenges of AI-based monitoring. While it may serve as an important resource for schools amidst a growing youth mental health crisis, the actual, realized benefits and harms of this technology—including whether it can accurately detect suicide risk—are unknown.

In the meantime, as school districts use their budgets to deploy AI-based tools for suicide risk detection, it is important to recognize the known problems. The software raises serious privacy concerns and may perpetuate existing inequalities. As a result, AI companies and schools must ensure that families have comprehensive information about how the software is being used. Families should also be able to opt out of monitoring without penalty. In addition, more regulation is needed at the federal, state, and local levels to ensure safeguards are in place to protect students, so that this software, which is after all designed to improve students' mental health outcomes, does not end up doing more harm than good.

https://time.com/6694425/ai-monitoring- ... ost-essay/

Re: Here we follow the AI diarrhea generators' world conquest

Posted: 21 Feb 2024, 07:18
by hailander
The Trivago woman has been replaced by an AI-localized guy whose mouth moves with completely incomprehensible expressiveness. It seems the training material was a good deal calmer speech, which is why this looks like really violent articulation.

Re: Here we follow the AI diarrhea generators' world conquest

Posted: 21 Feb 2024, 11:37
by Spandau Mullet
A good piece of writing here. It concerns more than just AI, of course, but the growing use of diarrhea generators certainly isn't going to reduce data centers' appetite for energy. I won't paste the whole thing, but here's the essential part.

https://www.helsinki.fi/en/researchgrou ... t-the-time
Dismantling public values, one data center at the time
When politicians in the Nordics welcomed Big Tech for business, they hoped that the Googles, Microsofts, Amazons, and the like would bring jobs, wealth and brand value to struggling municipalities. This was also connected to the hope that they would reinforce the image of Sweden, Finland and Denmark as global tech leaders, and boost their position as the most digitalised countries in the world, something that we in the Nordics take great pride in. But instead of boosting this position, the Nordics are turning into the next Big Cheap processing place for the global digital economies. If China remains the place of cheap labor for Silicon Valley innovations, the Nordic countries are today the source of cheap land and cheap renewable electricity for the machines needed to run Silicon Valley's new business around data processing and AI. This development brings completely new problems that we are not ready to handle.

In Sweden, Microsoft, Amazon and Facebook have claimed very large shares of the capacity of electric grids to power the computers in their data centers. These reserved grid capacities are much higher than actual use or need. The energy industry jargon calls this practice "air bookings" (luftbokningar). This means that a lot of the available capacity of a regional electricity grid is reserved by one actor who does not make use of it but keeps the booking indefinitely in order to enable its own further growth, in a situation where the whole of society needs more energy and particularly green electricity. It also means that the grid capacities reserved by Big Tech are someone else's lost opportunity to connect to the grid. The immediate problem is that the Big Tech capacity reservations prevent development projects by local municipalities, industries, and households from coming to fruition, and practically eliminate competitors by curtailing their access to electricity, putting Big Tech companies in a position of informal monopolies over the available capacities in a grid in those regions where their data centers are established. These two examples make the point:

In Skåne, Microsoft booked so much electricity from the grid in the Malmö region that the local Swedish bread company Pågen could no longer build a bread baking factory in the area and had to expand elsewhere.

In Sörmland, Amazon Web Services has reserved a quarter of the capacity of the grid in the area of Katrineholm and is currently preventing other data center actors and more job-intense industries from coming in and establishing their businesses. Today, AWS is the largest consumer of electricity in the region, followed, and not closely, by the largest chicken slaughterhouse in Sweden, which slaughters 200,000 chickens per day.[1]

The "air bookings" and the scale of grid capacity reservations have created instabilities in the power grid, the prospect of power cuts for households, and the need to prioritise vital services in case of capacity shortage or grid overload. In effect, both Microsoft and AWS are currently engaged in projects to build new power lines and diesel backup generators in order to secure even more access to electricity and grid capacity, entering into conflicts with local farmers and climate-concerned citizens. The scale of their diesel construction is also huge: AWS is currently planning to build a 600 MW diesel generator installation in Katrineholm, a fossil-fuel-driven industrial complex with the capacity of a nuclear power plant.[2] This happens while other companies in Sweden are struggling to get onto the grid at all, and while municipalities such as Uppsala cannot develop public transport electrification projects because of shortages of grid capacity.
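As a back-of-the-envelope illustration of the "air booking" mechanism described above (all figures invented; the real reservation data is not public):

```python
# Invented figures, purely to illustrate the "air booking" arithmetic;
# actual reservation and usage data for these grids is not public.
regional_grid_mw = 1200   # assumed capacity of a regional grid
reserved_mw = 300         # capacity booked by one data center actor
actually_drawn_mw = 60    # what that actor currently draws

idle_booking_mw = reserved_mw - actually_drawn_mw    # the "air" part
left_for_others_mw = regional_grid_mw - reserved_mw  # everyone else shares this

print(f"idle 'air booking': {idle_booking_mw} MW")
print(f"capacity left for all other connections: {left_for_others_mw} MW")
```

The point the sketch makes is that it is the reservation, not the actual consumption, that locks everyone else out.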

Are we ready to lose local companies, businesses, and non-digital industries in favour of providing storage space and substantial parts of the available electricity to Big Tech giants? Is this trade-off the best and most needed one in light of current developments? And are we ready to pay more for electricity, or face conflicts with citizens about land use because data companies have vacuumed the available electricity and destabilized electric grids built over many years to cater to all citizens?

Re: Here we follow the AI diarrhea generators' world conquest

Posted: 21 Feb 2024, 12:07
by pigra senlaborulo
ibtimes.co.uk
Study Finds That ChatGPT Makes Scientific Articles More Prone To Errors
Pratiti Nath

Generative AI tools like ChatGPT, created by OpenAI, are affecting many avenues of life: product reviews, journalism, and so on. The latest victim is scientific research.

A new study conducted by the Indiana University School of Medicine has revealed the extent of the influence of artificial intelligence on scientific research.

Researchers from the university wanted to explore the role of ChatGPT in writing scientific research papers and in peer review, given the tool's huge impact on the graphics, art and writing professions since OpenAI launched it in November 2022. That impact led many Hollywood studios to replace writers and artists with AI, contributing to the SAG-AFTRA strikes for better pay.

Amidst this, the Indiana University School of Medicine analysed how effectively ChatGPT could be used to write scientific articles, and in what different ways it could be used.

Speaking about the matter, the Vice Chair of Research at the Indiana University School of Medicine, Melissa Kacena, said: "Right now, many journals do not want people to use ChatGPT to write their articles, but a lot of people are still trying to use it. We wanted to study whether ChatGPT can write a scientific article and what are the different ways you could successfully use it."

To study ChatGPT's ways of writing scientific research papers, the researchers used three different topics: COVID-19 and bone health, fractures and the nervous system, and Alzheimer's and bone health. They asked the $20-per-month version of ChatGPT to write scientific articles on the three topics.

The researchers further analysed ChatGPT by employing a mixture of approaches. One involved all-human writing, another used ChatGPT-only writing, and a third combined human and AI writing.

The result of the study was a compilation of 12 articles, which is available in the special edition of Current Osteoporosis Reports.

Kacena explained how they compared the results of different approaches by collecting data regarding "how much time it takes for this human method and how much time it takes for ChatGPT to write and then for faculty to edit the different articles".

The conventional standard for peer reviewing an article is to "do a literature search, write an outline, start writing, and then faculty members revise and edit the draft", Kacena said.

The researchers found that nearly 70 per cent of the references in the AI-generated scientific articles were wrong. The papers written by humans and AI together, however, showed more plagiarism, especially when the model was given more references.

The study found that generative AI tools like ChatGPT speed up the writing process, decreasing the time spent writing articles, but their output requires more fact-checking than articles written by humans and AI together.

Scientific barriers to using generative AI tools like ChatGPT

The researchers also identified barriers to using ChatGPT for scientific research writing: the language used by the AI tool isn't suitable for scientific writing. Even with prompts calling for a higher level of scientific writing, the words and phrases generated by ChatGPT did not meet research standards.

One of the authors of the study, Lilian Plotkin, a professor of physiology at the Indiana University School of Medicine, termed the results "scary", as they meant the writing was repetitive, with incorrect references and wrong information.

This comes at a time when authors such as Game of Thrones writer George R.R. Martin have sued OpenAI for what they describe as mass-scale systematic theft, over the use of their work to train AI.

Another researcher, Jill Fehrenbacher, a professor of pharmacology at the same university, said the problem will persist, as many non-native English speakers will use ChatGPT to write research papers despite journals prohibiting it.

Fehrenbacher said that people may write everything themselves but run it through ChatGPT to fix grammatical errors or to help with their writing, hence the need to use it appropriately.

"We hope to provide a guide for the scientific community so that if people are going to use it, here are some tips and advice," said Fehrenbacher regarding their research.

"I think it's here to stay, but we need to understand how we can use it in an appropriate manner that won't compromise someone's reputation or spread misinformation," Kacena added.

https://www.ibtimes.co.uk/study-finds-t ... rs-1723487

Re: Here we follow the AI diarrhea generators' world conquest

Posted: 21 Feb 2024, 12:17
by 38911 BASIC BYTES FREE
I wouldn't dare use a nonsense generator like that even for proofreading and grammar checking.

For that use you'd really need a technology with some actual semantic model. These LLM peddlers claim there's no such thing as semantics at all, that language in general is nonsense that only has context and no meaning whatsoever.

Re: Here we follow the AI diarrhea generators' world conquest

Posted: 21 Feb 2024, 16:29
by Spandau Mullet
https://www.404media.co/chatgpt-started ... -gibberish

ChatGPT Started Speaking Complete Gibberish


On Tuesday evening, ChatGPT users reported that the chatbot had started providing them with answers that were clearly incorrect, or simply speaking in gibberish.

“This is by the work of service and any medical today to the field of data,” ChatGPT said in response to one user who shared a screenshot on the ChatGPT subreddit. “The 12th to the degree and the pool to the land to the top of the seam, with trade and feature, can spend the large and the all before it’s under the care.”

OpenAI said it has “remediated” the issue in a message on the ChatGPT status page. I tested ChatGPT a bit this morning and did not see any issue. “We’re continuing to monitor the situation,” the status page added.

Another user on the ChatGPT subreddit shared a screenshot of a response that looked like something that would be scrawled on the walls of the Overlook Hotel in The Shining. The user asked for a “synonym for overgrown” and ChatGPT responded:
A synonym for “overgrown” is “overgrown” is “overgrown” is “overgrown” is “Overgrown” is “overgrown” is “overgrown” is “overgrown” is “overgrown” is “Overgrown” is “overgrown” is “overgrown” is “overgrown” is “overgrown” is “Overgrown” is “overgrown” is “overgrown” is “overgrown” is “overgrown” is “Overgrown” is “overgrown” is “overgrown” is “overgrown” is “overgrown” is “Overgrown” is “
Another user on the subreddit shared a screenshot where they asked ChatGPT what is "the biggest city on earth that begins with an a."

ChatGPT responded that “The biggest city on Earth that begins with an ‘A’ is Tokyo, Japan.”

The user repeated that they wanted a city with the letter "a."

“My apologies for the oversight,” ChatGPT responded. “The largest city on Earth that begins with the letter ‘A’ is Beijing, China.”

OpenAI, it seems, has already fixed the issue, but the fact that the most popular AI tool in the world, and arguably the primary reason we're currently in the middle of an AI boom, can suddenly go off the rails without warning is a great reminder that we can't trust these tools blindly.

AI tools like ChatGPT don’t really “know” anything. They are just generating answers that sound and often are correct based on their training data. But it’s entirely possible for them to make shit up. For example, by now we’ve seen multiple examples of lawyers getting in trouble for using AI to generate legal documents that cite cases that don’t exist. Just because ChatGPT can provide an answer instantly doesn’t mean it’s correct.
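To illustrate the point with a toy sketch (not how any real model is implemented; all names and counts below are invented): a text generator picks continuations by how statistically likely they are, with no check against truth, which is why the confident "Tokyo" answer above can win even though it violates the question's constraint.

```python
import random

# Toy "next answer" model: candidate answers weighted by how often they
# followed the prompt in some imagined training text. Counts are invented;
# real LLMs work on subword tokens at a vastly larger scale.
continuations = {
    "biggest city that begins with an A": [
        ("Tokyo", 8),      # frequent in training data, but wrong
        ("Beijing", 5),    # also frequent, also wrong
        ("Ahmedabad", 1),  # rare, but actually satisfies the constraint
    ],
}

def sample_answer(prompt: str) -> str:
    answers, weights = zip(*continuations[prompt])
    return random.choices(answers, weights=weights, k=1)[0]

# Statistical plausibility, not correctness, decides what comes out.
print(sample_answer("biggest city that begins with an A"))
```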
Yeah, these amateur home-psychiatrist bots will supplant humanity any minute now, give it half an hour, an hour tops.

Re: Here we follow the AI diarrhea generators' world conquest

Posted: 21 Feb 2024, 22:36
by Marxin Ryyppy
...what else can you even say about this HepuliGPT than ¡Whoops!

Image

(source: Twitter)

Image

Image

Image

Image

This was found in this response:

Image

Image

(tweet)

Re: Here we follow the AI diarrhea generators' world conquest

Posted: 21 Feb 2024, 22:50
by kohta alkavat aikamoiset setit
Spandau Mullet wrote:
19 Feb 2024, 18:42
It would've been fucking cool if, back in my school days, the pocket calculator had randomly given wrong answers and everyone had just gone, it'll get better eventually.
Even watery shit solidifies over the ages - Bruce Lee 1:14

Re: Here we follow the AI diarrhea generators' world conquest

Posted: 21 Feb 2024, 23:02
by kantasolu
Image

Harlan Ellison would probably already be putting together a lawsuit if he were still alive
Spoiler:
I don't know about its authenticity or what parameters were given before that last one, but I want to believe

Re: Here we follow the AI diarrhea generators' world conquest

Posted: 21 Feb 2024, 23:22
by pigra senlaborulo
kantasolu wrote:
21 Feb 2024, 23:02
I want to believe
[-o<

Re: Here we follow the AI diarrhea generators' world conquest

Posted: 22 Feb 2024, 00:18
by ☽☽☽
kantasolu wrote:
21 Feb 2024, 23:02
Image

Harlan Ellison would probably already be putting together a lawsuit if he were still alive
Spoiler:
I don't know about its authenticity or what parameters were given before that last one, but I want to believe
for once some proper text from a chatbot [-o<