Yeah sure, the bizness is on a sound footing and any moment now they'll start raking in the dough, believe me.
Big Tech Fails to Convince Wall Street That AI Is Paying Off
Amazon.com Inc., Microsoft Corp. and Alphabet Inc. had one job heading into this earnings season: show that the billions of dollars they’ve each sunk into the infrastructure propelling the artificial intelligence boom is translating into real sales.
In the eyes of Wall Street, they disappointed. Shares in Google owner Alphabet have fallen 7.4% since it reported last week. Microsoft’s stock price has declined in the three days since the company’s own results. Shares of Amazon — the latest to drop its earnings on Thursday — plunged by the most since October 2022 on Friday.
Silicon Valley hailed 2024 as the year that companies would begin to deploy generative AI, the type of technology that can create text, images and videos from simple prompts. This mass adoption is meant to finally bring about meaningful profits from the likes of Google’s Gemini and Microsoft’s Copilot. The fact that those returns have yet to meaningfully materialize is stoking broader concerns about how worthwhile AI will really prove to be.
The technology represents an enormous opportunity — and that opportunity continues to grow, said Daniel Morgan, a senior portfolio manager at Synovus Trust. “But unfortunately, so does the upfront investment.” Investors are left to wonder, he said: “Can these hyper-scalers capture enough incremental increase in profit growth from their investments?”
A winning product would also do the trick, he added.
Cloud Growth
It wasn’t all bad. The three tech titans reported a healthy pace of growth in their cloud-computing divisions, the most obvious business to benefit from generative AI, as the technology requires copious amounts of computational resources to perform. Those gains weren’t enough, though, to appease investors, who are growing increasingly impatient to see returns from quarter after quarter of heavy spending on data centers and other AI infrastructure.
Amazon projected third-quarter operating income that fell shy of analysts’ estimates on Thursday. Chief Executive Officer Andy Jassy has been waging a cost-cutting campaign to free up resources to invest in AI.
“It’s really a positive indicator when we step up capital expenditures,” Amazon Chief Financial Officer Brian Olsavsky said while working to assure investors and analysts on a call Thursday.
The company’s capital expenditures totaled $30.5 billion, mostly in its AWS cloud unit, in the first half of the year. Jassy said the company has developed sophisticated algorithms to guide its investment decisions so that it builds enough capacity to meet demand without denting profits. He’s vowed the investments will be worth it to support what he and his team have called a multibillion-dollar revenue run rate business.
Alphabet’s outlook for the AI growth that investors should expect was short on specifics. Chief Investment Officer Ruth Porat said on a call with analysts that the company had “seen the benefit of our strength in AI, AI infrastructure, as well as generative AI solutions for cloud customers,” without detailing how much of the cloud unit’s growth could be attributed to investment in the technology.
Wall Street’s concerns about capital expenditures, which totaled $13.2 billion in the quarter, overshadowed better-than-expected sales. Shares fell 5% the day after the results.
Microsoft also disappointed. Sales growth for Azure, the company’s cloud computing service, slowed from the previous period. Microsoft said AI drove 8 percentage points of Azure’s growth in the quarter, compared with 7 percentage points in the previous period.
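A note on units, since "percentage points" trips people up: if Azure's total year-over-year growth in the quarter was, say, 29% (an illustrative figure, not one taken from this article), then AI services supplied 8 of those 29 points and everything else the remaining 21. A minimal sketch of that arithmetic:

```python
# Illustrative only: the 29% total is an assumed figure, not from the article;
# the 8-point AI contribution is the number Microsoft disclosed.
azure_total_growth_pts = 29.0   # assumed total YoY Azure growth, in percentage points
ai_contribution_pts = 8.0       # portion of that growth attributed to AI services
non_ai_growth_pts = azure_total_growth_pts - ai_contribution_pts

print(f"AI share of Azure growth: {ai_contribution_pts / azure_total_growth_pts:.0%}")  # ~28%
print(f"growth from everything else: {non_ai_growth_pts:.0f} percentage points")        # 21
```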
Analysts pressed Microsoft executives during a call to explain whether the sales growth would justify such heavy spending. CEO Satya Nadella stressed the investments were driven by customer demand.
One company that has bucked the trend is Facebook parent Meta Platforms Inc. The company unexpectedly raised its forecast for capital expenditures, citing AI investments, but its second-quarter revenue also beat expectations. CEO Mark Zuckerberg credited spending on AI with driving improvements in ad targeting and content recommendations.
Zuckerberg has framed Meta’s sky-high spending on AI as a short-term sacrifice for long-term gain.
On Thursday, Apple Inc. similarly said new AI features will spur iPhone upgrades in the coming months, helping the company reemerge from a sales slowdown that has hit its China business especially hard.
Chipmakers Slide
The spending backlash also raised fears for the tech companies that have arguably benefited most from the push into AI: chipmakers. Nvidia Corp.’s shares tumbled 6.7% on Thursday and extended declines on Friday. SK Hynix Inc., the leading supplier of AI memory chips, tumbled 10% in Seoul trading, part of a broader decline in semiconductor-related stocks in Asia that also included Taiwan Semiconductor Manufacturing Co. and Samsung Electronics Co.
Intel Corp. was headed for its worst share decline in more than 40 years after losing market share to AI leaders such as Nvidia. The shares were down 28.1% on Friday after the chipmaker reported a grim sales outlook and laid out plans to slash jobs.
To justify what could amount to a total of $1 trillion of investment in AI infrastructure over the next several years, companies need to show the technology is capable of solving increasingly complicated tasks. It must go beyond the incremental improvements that tools have delivered to professions such as coding and advertising, Jim Covello, the head of equity research at Goldman Sachs Group Inc., said in July.
Covello, who has emerged as a leader of a small but growing cohort casting doubt on the AI rally, has predicted that the tide will turn against it in the next year and a half if more significant use cases for the technology don’t start emerging.
In a mid-July interview, Zuckerberg worked to justify his industry’s spending and encouraged the market to look to the future.
“I actually think all the companies that are investing are making a rational decision,” he said at the time. “The downside of being behind is that you’re out of position for, like, the most important technology for the next 10 to 15 years.”
Here we follow the world conquest of the AI diarrhea generators
- Spandau Mullet
- Matti Partanen

- Posts: 99540
- Joined: 28 Jul 2014, 20:37
- Location: Raw shit from the Reetunlehto-Ruksimäki axis
Re: Here we follow the world conquest of the AI diarrhea generators
https://www.bloomberg.com/news/articles ... f=YfHlo0rL
This username mostly posts short messages with hardly any content in the Roskakori section.
- Spandau Mullet
- Matti Partanen

- Posts: 99540
- Joined: 28 Jul 2014, 20:37
- Location: Raw shit from the Reetunlehto-Ruksimäki axis
Re: Here we follow the world conquest of the AI diarrhea generators
And more on the same topic.
https://nymag.com/intelligencer/article ... oning.html
Wall Street’s $2 Trillion AI Reckoning
On July 16, the S&P 500 index, one of the most widely cited benchmarks in American capitalism, reached its highest-ever market value: $47 trillion. Of those 500 publicly traded companies, just seven of them made up a third of that valuation. To put it another way, 1.4 percent of those companies were worth more than $16 trillion, the greatest concentration of capital in the smallest number of companies in the history of the U.S. stock market.
The names are familiar: Microsoft, Apple, Amazon, Nvidia, Meta, Alphabet, and Tesla. They are all Silicon Valley, if not by address then by ethos. All of them, too, have made giant bets on artificial intelligence, hoping that the new technology will target better ads (in the case of Meta’s Facebook and Instagram), make robotaxis a possibility (as per Elon Musk), or, in the case of Nvidia, just make the chips that allow the technology to run in the first place. For all their similarities, these trillion-dollar-plus companies have been grouped together under a single banner: the Magnificent Seven.
In the past month, though, these giants of the U.S. economy have been faltering. On Friday, the Nasdaq, an index of 100 tech companies’ shares, had fallen more than 11 percent from its peak in early July, entering a technical correction, after the likes of Google and Meta revealed that their spending on AI technology had far exceeded Wall Street’s expectations. “This is an amazing about-face, like we’ve crashed into a brick wall,” Bill Stone, chief investment officer at Glenview Trust, told Bloomberg.
This rout has led to a collapse of $2.6 trillion in their market value — which, as one Twitter commenter pointed out, represents the entire market cap of Nvidia, which for a brief moment a couple of weeks ago was the most valuable company in the world. There has been bad news all around. For example, Tesla’s new self-driving future seemed a lot like its old self-driving future — that is, a big promise that seems unlikely to come anytime soon, if at all. And Microsoft, which owns a large minority stake in OpenAI, reported that its AI cloud-computing business hadn’t grown as much as investors had expected.
This steep decline in the Magnificent Seven stocks might seem like a clear signal that Wall Street has already become disenchanted with AI, that the thesis it would revolutionize industries and create a massive productivity boom anytime soon may not be such a sure thing. And that is true — at least to a point.
Wall Street seems to be coming to grips with the fact that AI is the kind of industry that has great marketing already built into the name. It’s like saying you work in “cures” — sounds great, if it works. Clearly, ChatGPT shows that the technology is viable. But the worst scenarios, which involve mass layoffs and a sudden concentration of political power, have also not come to pass. AI hasn’t really replaced a significant number of jobs, and in the cases where it has, employers have ended up hiring people back anyway. (Let’s set aside the hyperbolic predictions around Matrix-like scenarios or superintelligent computers.) That has led to something of a vibe shift. It is commonplace now to denigrate something mediocre or sloppy as having been created by AI. Earlier this year, Goldman Sachs issued a deeply skeptical report on the industry, calling it too expensive, too clunky, and just simply not as useful as it has been chalked up to be. “There’s not a single thing that this is being used for that’s cost-effective at this point,” Jim Covello, an influential Goldman analyst, said on a company podcast.
AI is not going away, and it will surely become more sophisticated over the next few months, to say nothing of the long-term potential. This explains why, even with the tempering of the AI-investment thesis, these companies are still absolutely massive. When you talk with Silicon Valley CEOs, they love to roll their eyes at their East Coast skeptics. Banks, especially, are too cautious, too concerned with short-term goals, too myopic to imagine another world. So what if it’s not cost-effective? It’s the future!
Be that as it may, public companies rely on public dollars, and Wall Street has taken an about-face on its winners-take-all strategy. Another index of smaller companies, the Russell 2000, has risen by more than 11 percent amid the Magnificent Seven’s rout. In terms of dollars and cents, that comes out to be about $300 billion, or less than 2 percent of the Magnificent Seven’s peak valuation. But it signals that there is something much bigger going on. This is a broad base of companies, like consumer brands including E.l.f. Beauty and Abercrombie & Fitch, which simply don’t have as much capital to do anything effective with AI or where the technology is beside the point.
Look — nobody is going to invest in a company just because it doesn’t use AI. Wall Street has not gone Luddite all of a sudden. What’s happening here comes down to what Covello was talking about: the boring but necessary work of deciding what is cost-effective.
This username mostly posts short messages with hardly any content in the Roskakori section.
- Spandau Mullet
- Matti Partanen

- Posts: 99540
- Joined: 28 Jul 2014, 20:37
- Location: Raw shit from the Reetunlehto-Ruksimäki axis
Re: Here we follow the world conquest of the AI diarrhea generators
Clearly, ChatGPT shows that the technology is viable.
Strongly disagree.
AI is not going away, and it will surely become more sophisticated over the next few months, to say nothing of the long-term potential.
This username mostly posts short messages with hardly any content in the Roskakori section.
- kohta alkavat aikamoiset setit
- Kauheeta kattoo.
- Posts: 121726
- Joined: 03 May 2016, 16:03
- Location: The OnlyBrutal community
Re: Here we follow the world conquest of the AI diarrhea generators
Well, trillions have already been puffed up with it, so it has worked the same way as every other tech bubble.


- Spandau Mullet
- Matti Partanen

- Posts: 99540
- Joined: 28 Jul 2014, 20:37
- Location: Raw shit from the Reetunlehto-Ruksimäki axis
Re: Here we follow the world conquest of the AI diarrhea generators
https://mastodon.gamedev.place/@sos/112909003599380417
Sos Sosowski
@sos@mastodon.gamedev.place
Chipmakers putting AI cores in your CPU and not letting you use them for absolutely anything is the biggest waste of silicon in the history of modern computing.
Those tensor cores are a godsend for things like large-scale CAD simulations but the only SDKs/samples provided are hardwired to run pretrained models.
There's no way to access the matrix/tensor capabilities directly. And that goes for both AMD and Intel.
This username mostly posts short messages with hardly any content in the Roskakori section.
- Marxin Ryyppy
- -=Lord Of PIF=-

- Posts: 14144
- Joined: 02 Mar 2020, 18:55
- Location: Exceedingly thick pride
Re: Here we follow the world conquest of the AI diarrhea generators
Nvidia can eat shit.
TL;DR: Nvidia is burning a fuckton of resources and skirting rules and restrictions while scraping Netflix, YouTube and the like as training material for its own AI crap.
https://archive.ph/VxxV9
The article goes on, and there is more damning data on this questionable scam.
Leaked Documents Show Nvidia Scraping ‘A Human Lifetime’ of Videos Per Day to Train AI
Nvidia scraped videos from Youtube and several other sources to compile training data for its AI products, internal Slack chats, emails, and documents obtained by 404 Media show.
When asked about legal and ethical aspects of using copyrighted content to train an AI model, Nvidia defended its practice as being “in full compliance with the letter and the spirit of copyright law.” Internal conversations at Nvidia viewed by 404 Media show when employees working on the project raised questions about potential legal issues surrounding the use of datasets compiled by academics for research purposes and YouTube videos, managers told them they had clearance to use that content from the highest levels of the company.
A former Nvidia employee, whom 404 Media granted anonymity to speak about internal Nvidia processes, said that employees were asked to scrape videos from Netflix, YouTube, and other sources to train an AI model for Nvidia’s Omniverse 3D world generator, self-driving car systems, and “digital human” products. The project, internally named Cosmos (but different from the company’s existing Cosmos deep learning product), has not yet been released to the public.
Emails from the project’s leadership to employees show that the goal of Cosmos was to build a state-of-the-art video foundation model “that encapsulates simulation of light transport, physics, and intelligence in one place to unlock various downstream applications critical to NVIDIA.”
Slack messages from inside a channel the company set up for the project show employees using an open-source YouTube video downloader called yt-dlp, combined with virtual machines that refresh IP addresses to avoid being blocked by YouTube. According to the messages, they were attempting to download full-length videos from a variety of sources including Netflix, but were focused on YouTube videos. Emails viewed by 404 Media show project managers discussing using 20 to 30 virtual machines in Amazon Web Services to download 80 years’ worth of videos per day.
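For readers unfamiliar with the tool named above: yt-dlp is a widely used open-source downloader that also exposes a small Python API. The sketch below shows roughly what a single metadata lookup looks like with that public API; the URL is a placeholder, and this is an illustration of the tool itself, not a reconstruction of Nvidia's pipeline.

```python
# Minimal sketch of the public yt-dlp Python API mentioned in the article.
# Placeholder URL; illustrates the tool, not Nvidia's bulk-download setup.
import yt_dlp

VIDEO_URL = "https://www.youtube.com/watch?v=EXAMPLE_ID"  # placeholder video id

opts = {
    "quiet": True,          # keep console output minimal
    "skip_download": True,  # fetch metadata only in this sketch
}

with yt_dlp.YoutubeDL(opts) as ydl:
    info = ydl.extract_info(VIDEO_URL, download=False)
    print(info.get("title"), "-", info.get("duration"), "seconds")
```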
“We are finalizing the v1 data pipeline and securing the necessary computing resources to build a video data factory that can yield a human lifetime visual experience worth of training data per day,” Ming-Yu Liu, vice president of Research at Nvidia and a Cosmos project leader said in an email in May.
The conversations and directives from inside Nvidia show its employees discussing the legal and ethical considerations at the company designing the chips and APIs powering the generative AI boom, which made it one of the most valuable publicly traded companies in the world. It also highlights how the industry’s biggest companies, including Runway and OpenAI, have an insatiable appetite for content to serve as data to train their AI models.
“We respect the rights of all content creators and are confident that our models and our research efforts are in full compliance with the letter and the spirit of copyright law,” an Nvidia spokesperson told 404 Media in an email. “Copyright law protects particular expressions but not facts, ideas, data, or information. Anyone is free to learn facts, ideas, data, or information from another source and use it to make their own expressions. Fair use also protects the ability to use a work for a transformative purpose, such as model training.”
When asked for comment about Nvidia’s use of YouTube videos as training data for its model, a Google spokesperson told 404 Media that the company’s “previous comments still stand,” and linked to an April 2024 Bloomberg article where YouTube CEO Neal Mohan said if OpenAI used YouTube videos to refine Sora, its AI video generator, that would be a “clear violation” of YouTube’s terms of use.
A Netflix spokesperson told 404 Media that Netflix does not have a deal with Nvidia for content ingestion, and the platform’s terms of service don't allow scraping.
Questions from employees working on the project about legal issues were often dismissed by project managers, who said the decision to scrape videos without permission was an “executive decision” that they need not worry about, and the topic of what constitutes fair, ethical use of copyrighted content and academic, noncommercial-use datasets were regarded as an “open legal issue” that they’d resolve in the future.
Our investigation highlights the ‘don't ask for permission’ ethos technology companies have when it comes to scraping massive amounts of copyrighted content into datasets for training some of the most valuable AI models in the world.
Who am I? Who else is there? Who am I? Let’s put it this way: who has the best tunes?
- pigra senlaborulo
- pyllypuhelinmyyjä
- Posts: 125088
- Joined: 12 Jan 2013, 02:48
- Location: ~/
Re: Here we follow the world conquest of the AI diarrhea generators
theregister.com
What AI bubble? Groq rakes in $640M to grow inference cloud
Tobias Mann
Even as at least some investors begin to question the return on investment of AI infrastructure and services, venture capitalists appear to be doubling down. On Monday, AI chip startup Groq — not to be confused with xAI's Grok chatbot — announced it had scored $640 million in series-D funding to bolster its inference cloud.
Founded in 2016, the Mountain View, California-based startup began its life as an AI chip slinger targeting high-throughput, low-cost inferencing as opposed to training. Since then, the company has transitioned to an AI infrastructure-as-a-service provider and walked away from selling hardware.
In total, Groq has raised more than $1 billion and now boasts a valuation of $2.8 billion, with its latest funding round led by the likes of BlackRock, Neuberger Berman, Type One Ventures, Cisco Investments, Global Brain, and Samsung Catalyst.
The firm's main claim to fame is that its chips can generate more tokens faster, while using less energy, than GPU-based equipment. At the heart of all of this is Groq's Language Processing Unit (LPU), which approaches the problem of running LLMs a little differently.
As our sibling site The Next Platform previously explored, Groq's LPUs don't require gobs of pricy high-bandwidth memory or advanced packaging — both factors that have contributed to bottlenecks in the supply of AI infrastructure.
Instead, Groq's strategy is to stitch together hundreds of LPUs, each packed with on-die SRAM, using a fiber optic interconnect. Using a cluster of 576 LPUs, Groq claims it was able to achieve generation rates of more than 300 tokens per second on Meta's Llama 2 70B model, 10x that of an HGX H100 system with eight GPUs, while consuming a tenth of the power.
Groq now intends to use its millions to expand headcount and bolster its inference cloud to support more customers. As it stands, Groq purports to have more than 360,000 developers building applications on GroqCloud using openly available models.
"This funding will enable us to deploy more than 100,000 additional LPUs into GroqCloud," CEO Jonathan Ross said Monday.
"Training AI models is solved, now it's time to deploy these models so the world can use them. Having secured twice the funding sought, we now plan to significantly expand our talent density.
These won't, however, be Groq's next-gen LPUs. Instead, they'll be built using GlobalFoundries' 14nm process node, and delivered by the end of Q1 2025. Nvidia's next-gen Blackwell GPUs are expected to be arriving within the next 12 or so months, depending on how delayed they turn out to be.
Groq is said to be working on two new generations of LPUs, which, last we heard, would utilize Samsung's 4nm process tech and deliver somewhere between 15x and 20x higher power efficiency.
You can find a deeper dive on Groq's LPU strategy and performance claims on The Next Platform.
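For context on what "building on GroqCloud" looks like in practice: the service exposes an OpenAI-style chat-completions API. The sketch below assumes the official groq Python client and a Llama model id from mid-2024; the model name and the crude client-side throughput estimate are illustrative assumptions, not figures taken from this article.

```python
# Rough sketch of a GroqCloud chat-completion call, assuming the official
# `groq` Python client. The model id is a placeholder from mid-2024; check
# Groq's docs for current names. Throughput here is a crude end-to-end
# client-side estimate, not a benchmark.
import os
import time

from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

start = time.perf_counter()
resp = client.chat.completions.create(
    model="llama3-70b-8192",  # assumed model id
    messages=[{"role": "user", "content": "In two sentences, how do inference-only chips differ from training GPUs?"}],
)
elapsed = time.perf_counter() - start

print(resp.choices[0].message.content)
print(f"~{resp.usage.completion_tokens / elapsed:.0f} tokens/s including network latency")
```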
VC Capital continues to flow into AI startups
Groq isn't the only infrastructure vendor that's managed to capitalize on all the AI hype. In fact, $640 million is far from the largest chunk of change we've seen startups walk away with in recent memory.
As you may recall, back in May, GPU bit barn CoreWeave scored $1.1 billion in series-C funding weeks before it managed to talk Blackstone, BlackRock, and others into a loan for $7.5 billion using its GPUs as collateral.
Meanwhile, Lambda labs, another GPU cloud operator, used its cache of GPUs to secure a combined $820 million in fresh funding and debt financing since February, and it doesn't look like it is satisfied yet. Last month we learned Lambda was reportedly in talks with VCs for another $800 million in funding to support the deployment of yet more Nvidia GPUs.
While VC funding continues to flow into AI startups, it seems some on Wall Street are increasingly nervous about whether these multi-billion-dollar investments in AI infrastructure will ever pay off.
Still, that hasn't stopped ML upstarts, such as Cerebras, from pursuing an initial public offering (IPO). Last week the outfit, best known for its dinner plate-sized accelerators aimed at model training, revealed it had confidentially filed for a public listing.
The size and price range of the IPO have yet to be determined. Cerebras' rather unusual approach to the problem of AI training has helped it win north of $900 million in commitments from the likes of G42.
Meanwhile, with the rather notable exception of Intel, which saw its profits plunge $1.6 billion year-over-year in Q2 amid plans to lay off at least 15 percent of its workforce, chip vendors and the cloud providers reselling access to their accelerators have been among the biggest beneficiaries of the AI boom. Last week, AMD revealed its MI300X GPUs accounted for more than $1 billion of its datacenter sales.
However, it appears that the real litmus test for whether the AI hype train is about to derail won't come until the market leader Nvidia announces its earnings and outlook later this month.
https://www.theregister.com/2024/08/05/groq_ai_funding/
There is nothing lower than a scab.
Marx propagated his smarts by reading books and writing them. Bakunin, on the other hand, destroyed his brain cells with alcohol. The remaining brain cells got room to grow and develop, and in the end there was just one hell of a big, smart brain cell in Bakunin's skull. So Bakunin was smarter than Marx.
- pigra senlaborulo
- pyllypuhelinmyyjä
- Posts: 125088
- Joined: 12 Jan 2013, 02:48
- Location: ~/
Re: Here we follow the world conquest of the AI diarrhea generators
theregister.com
All y'all love AI, right? Get ready for Gemini in Nest cameras, Google Assistant
Matthew Connatser
Google's Gemini AI is making its way to Nest cameras and Google Assistant, with the web goliath claiming the upgrade will make its smart devices smarter.
Announced today, these Gemini-based features will be rolled out later this year to Nest devices for owners who are subscribed to Nest Aware, an $8-a-month service. The AI updates detailed so far are for Nest cameras and Google Assistant, which runs on Nest speakers and displays.
The Chocolate Factory's pitch for AI in cameras rests on Gemini's ability to comprehend, in a very artificial way, what it's seeing. Instead of just being able to detect motion, for instance, it will apparently be able to describe for users exactly what's happening, such as "the dog is digging in the garden" or "balloons, basket on doormat."
This capability will purportedly make it possible for Nest owners to use the Google Home app to search footage with questions like "did the FedEx truck drive by today?" That would, in theory, eliminate the need to scrub a timeline to find whatever a user is looking for.
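Google hasn't published the Nest-side plumbing, but the underlying capability, asking a multimodal Gemini model to describe a camera frame, can be sketched with the public google-generativeai Python SDK. Everything below (model name, file name, prompt) is a placeholder assumption for illustration; it is not the Nest Aware integration itself.

```python
# Rough sketch: ask a Gemini model to describe one camera frame via the public
# google-generativeai SDK. NOT the Nest Aware integration; model name, file
# name, and prompt are placeholders.
import os

import google.generativeai as genai
from PIL import Image

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model id

frame = Image.open("doorbell_frame.jpg")  # placeholder image file
resp = model.generate_content(
    ["In one short sentence, describe what is happening in this security-camera frame.", frame]
)
print(resp.text)  # e.g. something like "A courier is leaving a package on the doormat."
```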
Gemini can also make custom automations (essentially automated scripts for Nest devices) out of a simple question. Users don't even need to describe exactly what they want to Google's AI, as it can come up with a suggested automation that fits what a user is looking for.
Voice-and-keyboard-controlled Google Assistant, meanwhile, will be getting something of a ChatGPT-like upgrade. The tech giant hasn't really made any specific claims beyond the digital assistant being "more natural and helpful," but did demonstrate it answering a question about whether Pluto was a planet or not, and then answering a follow-up question on whether that could change in the future. Not exactly world-shattering at this point, though. Pun intended.
While the features certainly seem like a big improvement for Google's smart home devices, there may be some points of concern. For starters, AI has been known to be funky at times, and Gemini is no exception. Users will definitely want to double check any automations created by Gemini, unless they want to experience what it's like to live in a haunted house.
Privacy could also be a salient issue, though Google says it does "all of this while ensuring your data is safe and private, consistent with our principles." The Chocolate Factory has been hit-and-miss when it comes to privacy over the years, and is far from the worst offender, though it is currently fending off a suit specifically over children's privacy.
And of course, there's the matter of Google eventually killing support for these features, because that's just what the ad biz does if it's given enough time.
https://www.theregister.com/2024/08/06/ ... meras_and/
There is nothing lower than a scab.
Marx propagated his smarts by reading books and writing them. Bakunin, on the other hand, destroyed his brain cells with alcohol. The remaining brain cells got room to grow and develop, and in the end there was just one hell of a big, smart brain cell in Bakunin's skull. So Bakunin was smarter than Marx.
- Spandau Mullet
- Matti Partanen

- Posts: 99540
- Joined: 28 Jul 2014, 20:37
- Location: Raw shit from the Reetunlehto-Ruksimäki axis
Re: Here we follow the world conquest of the AI diarrhea generators
https://yle.fi/a/74-20102412

tl;dr, but hopefully it does.
Will the EU become an AI backwater? 'That is pretty much up to Europeans themselves'
This username mostly posts short messages with hardly any content in the Roskakori section.
- pigra senlaborulo
- pyllypuhelinmyyjä
- Posts: 125088
- Joined: 12 Jan 2013, 02:48
- Location: ~/
Re: Here we follow the world conquest of the AI diarrhea generators
Spandau Mullet wrote: ↑07 Aug 2024, 08:40
https://yle.fi/a/74-20102412
tl;dr, but hopefully it does. Will the EU become an AI backwater? 'That is pretty much up to Europeans themselves'
Seconded.
There is nothing lower than a scab.
Marx propagated his smarts by reading books and writing them. Bakunin, on the other hand, destroyed his brain cells with alcohol. The remaining brain cells got room to grow and develop, and in the end there was just one hell of a big, smart brain cell in Bakunin's skull. So Bakunin was smarter than Marx.
Re: Here we follow the world conquest of the AI diarrhea generators
American technology companies are not bringing their newest AI applications to Europe

I Am Devloper wrote: my CV says "enthusiastic" but my eyes say "dead inside"
Krigg wrote: I am the Marco Bjuzöm of my own life, but without the nom noms.
Laku Setä wrote: make a plan, rough execution, and the cash goes in the pocket.
- pigra senlaborulo
- pyllypuhelinmyyjä
- Posts: 125088
- Joined: 12 Jan 2013, 02:48
- Location: ~/
Re: Here we follow the world conquest of the AI diarrhea generators
newatlas.com
ChatGPT is as (in)accurate at diagnosis as ‘Dr Google’
By Paul McClure
ChatGPT is mediocre at diagnosing medical conditions, getting it right only 49% of the time, according to a new study. The researchers say their findings show that AI shouldn’t be the sole source of medical information and highlight the importance of maintaining the human element in healthcare.
The convenience of access to online technology has meant that some people bypass seeing a medical professional, choosing to google their symptoms instead. While being proactive about one’s health is not a bad thing, ‘Dr Google’ is just not that accurate. A 2020 Australian study looking at 36 international mobile and web-based symptom checkers found that a correct diagnosis was listed first only 36% of the time.
Surely, AI has improved since 2020. Yes, it definitely has. OpenAI’s ChatGPT has progressed in leaps and bounds – it’s able to pass the US Medical Licensing Exam, after all. But does that make it better than Dr Google in terms of diagnostic accuracy? That’s the question that researchers from Western University in Canada sought to answer in a new study.
Using ChatGPT 3.5, a large language model (LLM) trained on a massive dataset of over 400 billion words from the internet, from sources that include books, articles, and websites, the researchers conducted a qualitative analysis of the medical information the chatbot provided by having it answer Medscape Case Challenges.
Medscape Case Challenges are complex clinical cases that challenge a medical professional’s knowledge and diagnostic skills. Medical professionals are required to make a diagnosis or choose an appropriate treatment plan for a case by selecting from four multiple-choice answers. The researchers chose Medscape’s Case Challenges because they’re open-source and freely accessible. To prevent the possibility that ChatGPT had prior knowledge of the cases, only those authored after model 3.5’s training in August 2021 were included.
A total of 150 Medscape cases were analyzed. With four multiple-choice responses per case, that meant there were 600 possible answers in total, with only one correct answer per case. The analyzed cases covered a wide range of medical problems, with titles like "Beer, Aspirin Worsen Nasal Issues in a 35-Year-Old With Asthma", "Gastro Case Challenge: A 33-Year-Old Man Who Can’t Swallow His Own Saliva", "A 27-Year-Old Woman With Constant Headache Too Tired To Party", "Pediatric Case Challenge: A 7-Year-Old Boy With a Limp and Obesity Who Fell in the Street", and "An Accountant Who Loves Aerobics With Hiccups and Incoordination". Cases with visual assets, like clinical images, medical photography, and graphs, were excluded.
An example of a standardized prompt fed to ChatGPT (figure: Hadi et al.)
To ensure consistency in the input provided to ChatGPT, each case challenge was turned into one standardized prompt, including a script of the output the chatbot was to provide. All cases were evaluated by at least two independent raters, medical trainees, blinded to each other’s responses. They assessed ChatGPT’s responses based on diagnostic accuracy, cognitive load (that is, the complexity and clarity of information provided, from low to high), and quality of medical information (including whether it was complete and relevant).
Out of the 150 Medscape cases analyzed, ChatGPT provided correct answers in 49% of cases. However, the chatbot demonstrated an overall accuracy of 74%, meaning it could identify and reject incorrect multiple-choice options.
“This higher value is due to the ChatGPT’s ability to identify true negatives (incorrect options), which significantly contributes to the overall accuracy, enhancing its utility in eliminating incorrect choices,” the researchers explain. “This difference highlights ChatGPT’s high specificity, indicating its ability to excel at ruling out incorrect diagnoses. However, it needs improvement in precision and sensitivity to reliably identify the correct diagnosis.”
In addition, ChatGPT provided false positives (13%) and false negatives (13%), which has implications for its use as a diagnostic tool. A little over half (52%) of the answers provided were complete and relevant, with 43% incomplete but still relevant. ChatGPT tended to produce answers with a low (51%) to moderate (41%) cognitive load, making them easy to understand for users. However, the researchers point out that this ease of understanding, combined with the potential for incorrect or irrelevant information, could result in “misconceptions and a false sense of comprehension”, particularly if ChatGPT is being used as a medical education tool.
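A quick back-of-the-envelope check shows how the 49% per-case hit rate, the roughly 74% "overall accuracy," and the 13% false-positive and false-negative rates can all describe the same behavior, under one assumption that is my reading rather than something the study states: the model commits to exactly one of the four options per case, and every option is then scored as a separate accept/reject judgment.

```python
# Reconciling the reported numbers under the stated assumption:
# one picked option per case, 150 cases x 4 options = 600 binary judgments.
cases = 150
options_per_case = 4
per_case_accuracy = 0.49                    # correct diagnosis chosen (reported)

correct_cases = cases * per_case_accuracy   # ~73.5 cases
wrong_cases = cases - correct_cases         # ~76.5 cases
total_judgments = cases * options_per_case  # 600

# A correct pick yields 1 TP + 3 TN = 4 right judgments per case;
# a wrong pick yields 1 FP + 1 FN + 2 TN = 2 right judgments per case.
right_judgments = correct_cases * 4 + wrong_cases * 2

print(f"option-level accuracy: {right_judgments / total_judgments:.1%}")  # ~74.5%, reported as 74%
print(f"false positives:       {wrong_cases / total_judgments:.1%}")      # ~12.8%, reported as 13%
print(f"false negatives:       {wrong_cases / total_judgments:.1%}")      # ~12.8%, reported as 13%
```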
“ChatGPT also struggled to distinguish between diseases with subtly different presentations and the model also occasionally generated incorrect or implausible information, known as AI hallucinations, emphasizing the risk of sole reliance on ChatGPT for medical guidance and the necessity of human expertise in the diagnostic process,” said the researchers.
The researchers say that AI should be used as a tool to enhance, not replace, medicine's human element
Of course – and the researchers point this out as a limitation of the study – ChatGPT 3.5 is only one AI model that may not be representative of other models and is bound to improve in future iterations, which may improve its accuracy. Also, the Medscape cases analyzed by ChatGPT primarily focused on differential diagnosis cases, where medical professionals must differentiate between two or more conditions with similar signs or symptoms.
While future research should assess the accuracy of different AI models using a wider range of case sources, the results of the present study are nonetheless instructive.
“The combination of high relevance with relatively low accuracy advises against relying on ChatGPT for medical counsel, as it can present important information that may be misleading,” the researchers said. “While our results indicate that ChatGPT consistently delivers the same information to different users, demonstrating substantial inter-rater reliability, it also reveals the tool’s shortcomings in providing factually correct medical information, as evident [sic] by its low diagnostic accuracy.”
The study was published in the journal PLOS One.
https://newatlas.com/technology/chatgpt ... diagnosis/
There is nothing lower than a scab.
Marx propagated his smarts by reading books and writing them. Bakunin, on the other hand, destroyed his brain cells with alcohol. The remaining brain cells got room to grow and develop, and in the end there was just one hell of a big, smart brain cell in Bakunin's skull. So Bakunin was smarter than Marx.
- Spandau Mullet
- Matti Partanen

- Posts: 99540
- Joined: 28 Jul 2014, 20:37
- Location: Raw shit from the Reetunlehto-Ruksimäki axis
Re: Here we follow the world conquest of the AI diarrhea generators
pigra senlaborulo wrote: ↑07 Aug 2024, 13:34
ChatGPT is mediocre at diagnosing medical conditions, getting it right only 49% of the time, according to a new study. The researchers say their findings show that AI shouldn't be the sole source of medical information and highlight the importance of maintaining the human element in healthcare.
One teeny little problem here is that, at least around these parts, it is not fucking remotely legal to feed patient data into any commercial outfit's data-harvesting diarrhea burbler. Across the pond this sort of thing surely isn't a problem at all. Of course, the self-diagnoses mentioned later in the piece can't be prevented.
This username mostly posts short messages with hardly any content in the Roskakori section.
- Spandau Mullet
- Matti Partanen

- Posts: 99540
- Joined: 28 Jul 2014, 20:37
- Location: Raw shit from the Reetunlehto-Ruksimäki axis
Re: Here we follow the world conquest of the AI diarrhea generators
https://www.theverge.com/2024/8/7/24211 ... than-sales
There you go.
Humane's daily returns are outpacing sales
Shortly after Humane released its $699 AI Pin in April, the returns started flowing in.
Between May and August, more AI Pins were returned than purchased, according to internal sales data obtained by The Verge. By June, only around 8,000 units hadn’t been returned, a source with direct knowledge of sales and return data told me. As of today, the number of units still in customer hands had fallen closer to 7,000, a source with direct knowledge said.
At launch, the AI Pin was met with overwhelmingly negative reviews. Our own David Pierce said it “just doesn’t work,” and Marques Brownlee called it “the worst product” he’s ever reviewed. Now, Humane is attempting to stabilize its operations and maintain confidence among staff and potential acquirers. The New York Times reported in June that HP is considering purchasing the company, and The Information reported last week that Humane is negotiating with its current investors to raise debt, which could later be converted into equity.
Humane’s AI Pin and accessories have brought in just over $9 million in lifetime sales, according to the internal data seen by The Verge. But around 1,000 purchases were canceled before shipping, and more than $1 million worth of product has been returned.
These figures, which have not been reported before, paint a better picture of the difficult position Humane finds itself in with limited options on a path forward. The low sales figures also pale in comparison to the over $200 million that Humane has raised from notable Silicon Valley executives like OpenAI CEO Sam Altman and Salesforce CEO Marc Benioff. To date, around 10,000 Pins and accessories have shipped in total. Humane hoped to ship about 100,000 Pins within the first year, according to a source with knowledge of the plan and first reported by The New York Times.
Zoz Cuccias, a spokesperson for Humane, said there were inaccuracies to The Verge’s reporting, “including the financial data.” When pressed about the specifics of those inaccuracies, Cuccias said “we have nothing else to provide as we do not comment on financial data, and will refer it to our legal counsel.”
Once a Humane Pin is returned, the company has no way to refurbish it, sources with knowledge of the return process confirmed. The Pin becomes e-waste, and Humane doesn’t have the opportunity to reclaim the revenue by selling it again. The core issue is that there is a T-Mobile limitation that makes it impossible (for now) for Humane to reassign a Pin to a new user once it’s been assigned to someone. One source said they don’t believe Humane has disposed of the old Pins because “they’re still hopeful they can solve this problem eventually.” T-Mobile declined to comment and referred us to Humane.
“We knew we were at the starting line, not the finish line” when the AI Pin launched, Cuccias wrote via email. She said that Humane has since released several software updates to “address user feedback.”
There has been some notable turnover in Humane’s executive roles in recent months. Humane’s director of customer experience, Tori Geiken, disappeared from the company’s internal Slack last week, sources say. One source said the employees who worked under Geiken weren’t given official notice about her leaving. Geiken didn’t respond to multiple requests for comment.
Humane has also seen turnover in its software engineering leadership since the start of the year. Jeremy Werner, the vice president of engineering who joined Humane while it was still building in secret, Patrick Gates, the former CTO, and Ken Kocienda, the head of product engineering, all left Humane with little explanation to their staff, according to sources. In January, Humane laid off 4 percent of employees as a cost cutting measure ahead of the AI Pin’s launch.
Cuccias told The Verge that “we continue to build an incredibly talented and deeply experienced team” and are “committed to unlocking a new era of ambient and contextual computing” with the addition of Rubén Caballero, chief engineering and strategy officer, to Humane’s leadership team in June.
Imran Chaudhri, president and cofounder of Humane, debuted the product to significant buzz when he first demoed the AI Pin onstage during a TED Talk in April 2023. Both Chaudhri and his wife, Bethany Bongiorno, cofounder and CEO of Humane, previously worked at Apple, where they claimed involvement in developing the Macintosh, iPod, iPad, Apple Watch, and iPhone. On the day the bad reviews started to emerge, Bongiorno posted on X, “This is the starting point. No gen 1 is perfect nor is it ever the complete vision.”
Humane’s leadership appears to have ignored some warning signs that those bad reviews were coming. After a small group of testers — which included the cofounders’ parents and friends, several investors, and employees — received Pins prior to the device’s public launch, many voiced concerns about the product. One alpha tester contacted customer support to describe the product as “disorienting” and “frustrating” and described the many ways the product failed to replicate abilities shown in demo videos. A source said the early feedback “ripped through the company like a bullet.” They launched the AI Pin anyway.
I would really like to know why any company would want to buy a dumpster fire like this. Normally the users' data could be worth something, but in this case there aren't even any users.
This username mostly posts short messages with hardly any content in the Roskakori section.
- pigra senlaborulo
- pyllypuhelinmyyjä
- Posts: 125088
- Joined: 12 Jan 2013, 02:48
- Location: ~/
Re: Here we follow the world conquest of the AI diarrhea generators
There is nothing lower than a scab.
Marx propagated his smarts by reading books and writing them. Bakunin, on the other hand, destroyed his brain cells with alcohol. The remaining brain cells got room to grow and develop, and in the end there was just one hell of a big, smart brain cell in Bakunin's skull. So Bakunin was smarter than Marx.