Future of Being: #Harari: "They Don't Need You Anymore" — The Replacement Phase Has Already Begun - YouTube https://www.youtube.com/watch?v=Hk5tONCthSY #KünstlicheIntelligenz #KI #Arbeit #Work #ArtificialIntelligence #AI
Using #AI to detect pump-and-dump scams: https://www.theregister.com/2026/02/03/south_korea_ai_stock_fraud/ #ArtificialIntelligence
Say hello to the enshittification of AI:
"Many people frame the problem of funding A.I. as choosing the lesser of two evils: restrict access to transformative technology to a select group of people wealthy enough to pay for it, or accept advertisements even if it means exploiting users’ deepest fears and desires to sell them a product. I believe that’s a false choice. Tech companies can pursue options that could keep these tools broadly available while limiting any company’s incentives to surveil, profile and manipulate its users.
OpenAI says it will adhere to principles for running ads on ChatGPT: The ads will be clearly labeled, appear at the bottom of answers and will not influence responses. I believe the first iteration of ads will probably follow those principles. But I’m worried subsequent iterations won’t, because the company is building an economic engine that creates strong incentives to override its own rules. (The New York Times has sued OpenAI for copyright infringement of news content related to A.I. systems. OpenAI has denied those claims.)
In its early years, Facebook promised that users would control their data and be able to vote on policy changes. Those commitments eroded. The company stopped holding public votes on policy. Privacy changes marketed as giving users more control over their data were found by the Federal Trade Commission to have done the opposite, in fact making private information public. All of this happened gradually, under pressure from an advertising model that rewarded engagement above all else.
The erosion of OpenAI’s own principles to maximize engagement may already be underway. It’s against company principles to optimize user engagement solely to generate more advertising revenue, but it has been reported that the company already optimizes for daily active users anyway..."
#AI #GenerativeAI #OpenAI #LLMs #ChatGPTs #Chatbots #BigTech #AdTech #Privacy #DataProtection
as somebody who has never used any #LLM coding assistant type thing (no Copilot, no Claude, no Codex, whatever they're all called) - it's absolutely *wild* what #AI vibers talk like
"today I switched from TalkFlabbity 7.1 Corpo to Misanthrop 2000 Ultra, the sub is 200€ per 10 words but it's absolutely worth it. wait, what, *read* documentation? I'll ask FloopBoop 3 Max Ultra Mini to summarize that for me. hey, just yesterday I vibed a 20 kLOC app and released it as GPLv3 ... it's a new age man"
AshutoshShrivastava (@ai_for_success)
A mention of "Deep Think" in connection with Google's large language model Gemini 3. The original post reads, in its entirety, "Gemini 3 Deep Think." — suggesting "Deep Think" is the name of a new version, high-performance mode, or feature of Gemini 3 being announced or teased.
Security concerns are a major impediment to #AI adoption: https://www.techtarget.com/searchitoperations/news/366638794/AI-security-worries-stall-enterprise-production-deployments #ArtificialIntelligence
"Launched a year ago, the Manosphere Report now follows about 80 podcasts hand-selected by reporters at the Times on desks covering politics, public health, and internet culture. That includes right-wing podcasts like The Ben Shapiro Show, Red Scare with “Dimes Square” shock jocks Dasha Nekrasova and Anna Khachiyan, and The Clay Travis & Buck Sexton Show, a successor to Rush Limbaugh’s talk radio show. It also keeps tabs on Huberman Lab, a podcast hosted by Stanford neuroscientist Andrew Huberman that has been criticized for spreading health misinformation. Seward notes the report also includes some liberal-leaning shows, like MeidasTouch, an anti-Trump podcast with a largely male audience.
When one of the shows publishes a new episode, the tool automatically downloads it, transcribes it, and summarizes the transcript. Every 24 hours the tool collates those summaries and generates a meta-summary with shared talking points and other notable daily trends. The final report is automatically emailed to journalists each morning at 8 a.m. ET. The Times is exploring how to use this workflow to launch similar AI-generated summary reports for other beats.
Seward says the emails signal when there is a growing sentiment or shift in rhetoric across the manosphere. Ultimately, it falls on Times journalists to deliver stories by chasing leads they find in the reports."
#USA #Trump #Manosphere #Podcasts #AI #LLMs #GenerativeAI #Media #News #NYTimes #Journalism #Newspapers