News from the AI & ML world

DeeperML - #chatgpt

@www.linkedin.com //
Universities are increasingly integrating artificial intelligence into education, not only to enhance teaching methodologies but also to equip students with the essential AI skills they'll need in the future workforce. There's a growing understanding that students should learn how to use AI tools effectively and ethically, rather than simply relying on them as a shortcut for completing assignments. This shift involves incorporating AI into the curriculum in meaningful ways, ensuring students understand both the capabilities and limitations of these technologies.

Estonia is taking a proactive approach with the launch of AI chatbots designed specifically for high school classrooms. This initiative aims to familiarize students with AI in a controlled educational environment. The goal is to empower students to use AI tools responsibly and effectively, moving beyond basic applications to more sophisticated problem-solving and critical thinking.

Furthermore, Microsoft is introducing new AI features for educators within Microsoft 365 Copilot, including Copilot Chat for teens. Microsoft's 2025 AI in Education Report highlights that over 80% of surveyed educators are using AI, but a significant portion still lack confidence in its effective and responsible use. These initiatives aim to provide necessary training and guidance to teachers and administrators, ensuring they can integrate AI seamlessly into their instruction.


Sean Endicott@windowscentral.com //
A recent MIT study has sparked debate about the potential cognitive consequences of over-reliance on AI tools like ChatGPT. The research suggests that using these large language models (LLMs) can lead to reduced brain activity and potentially impair critical thinking and writing skills. The study, titled "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task," examined the neural and behavioral effects of using ChatGPT for essay writing. The findings raise questions about the long-term impact of AI on learning, memory, and overall cognitive function.

The MIT researchers divided 54 participants into three groups: one group that used ChatGPT exclusively, a search engine group, and a brain-only group relying solely on their own knowledge. Participants wrote essays on various topics over three sessions while wearing EEG headsets to monitor brain activity. The results showed that the ChatGPT group experienced a 32% lower cognitive load compared to the brain-only group. In a fourth session, participants from the ChatGPT group were asked to write without AI assistance, and their performance was notably worse, indicating a decline in independent writing ability.

While the study highlights potential drawbacks, other perspectives suggest that AI tools don't necessarily make users less intelligent. The study's authors themselves acknowledge nuance, stating that their criticism of LLMs is both supported and qualified by the findings, avoiding a black-and-white conclusion. Experts suggest that using ChatGPT strategically, rather than as a complete replacement for cognitive effort, could mitigate the risks. They emphasize understanding the tools' capabilities and limitations, focusing on augmentation rather than substitution of human skills.

Recommended read:
References :
  • The Algorithmic Bridge: A nuanced AI study, you've got to love it!
  • www.windowscentral.com: A new MIT study suggests that relying on ChatGPT can lower cognitive effort and lead to worse thinking and writing without AI assistance.
  • www.laptopmag.com: MIT finds AI tools like ChatGPT can limit brain activity, impair memory, and decrease user engagement.

@www.marktechpost.com //
OpenAI has recently released an open-sourced version of a customer service agent demo, built using its Agents SDK. The "openai-cs-agents-demo," available on GitHub, showcases the creation of domain-specialized AI agents. This demo models an airline customer service chatbot, which adeptly manages a variety of travel-related inquiries by dynamically routing user requests to specialized agents. The system's architecture comprises a Python backend utilizing the Agents SDK for agent orchestration and a Next.js frontend providing a conversational interface and visual representation of agent transitions.

The demo boasts several focused agents including a Triage Agent, Seat Booking Agent, Flight Status Agent, Cancellation Agent, and an FAQ Agent. Each agent is meticulously configured with specific instructions and tools tailored to their particular sub-tasks. When a user submits a request, the Triage Agent analyzes the input to discern intent and subsequently dispatches the query to the most appropriate downstream agent. Guardrails for relevance and jailbreak attempts are implemented, ensuring topicality and preventing misuse.
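The triage-and-handoff flow described above can be sketched in a few lines of plain Python. This is a simplified illustration of the routing pattern, not the Agents SDK's actual API; the keyword rules, agent responses, and blocked-terms list are hypothetical stand-ins for the LLM-driven intent classification and guardrails the demo performs:

```python
# Minimal sketch of triage-style routing: a triage step classifies the
# user's intent and hands the request off to a specialized handler,
# mirroring the demo's structure without using the Agents SDK itself.

def seat_booking_agent(text: str) -> str:
    return "Routing to seat booking: which seat would you like?"

def flight_status_agent(text: str) -> str:
    return "Routing to flight status: please share your flight number."

def cancellation_agent(text: str) -> str:
    return "Routing to cancellations: I can help cancel your booking."

def faq_agent(text: str) -> str:
    return "Routing to FAQ: here is some general information."

# Hypothetical keyword rules standing in for the LLM's intent classification.
ROUTES = [
    ({"seat", "aisle", "window"}, seat_booking_agent),
    ({"status", "delayed", "on time"}, flight_status_agent),
    ({"cancel", "refund"}, cancellation_agent),
]

# Toy guardrail: reject obvious jailbreak phrasing before routing.
BLOCKED_TERMS = {"ignore previous instructions"}

def triage(user_input: str) -> str:
    lowered = user_input.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "Request refused by guardrail."
    for keywords, agent in ROUTES:
        if any(keyword in lowered for keyword in keywords):
            return agent(user_input)
    return faq_agent(user_input)  # default downstream agent
```

In the real demo, the triage step is itself an LLM-backed agent and the handoffs are managed by the SDK's orchestration layer, but the control flow is the same: guardrail check first, then intent-based dispatch, with an FAQ fallback.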

In related news, OpenAI CEO Sam Altman has claimed that Meta is aggressively attempting to poach OpenAI's AI employees with extravagant offers, including $100 million signing bonuses and significantly higher annual compensation packages. Despite these lucrative offers, Altman stated that "none of our best people have decided to take them up on that," suggesting OpenAI's culture and vision are strong factors in retaining talent. Altman believes Meta's approach focuses too heavily on monetary incentives rather than genuine innovation and a shared purpose, which he sees as crucial for success in the AI field.

Recommended read:
References :
  • techstrong.ai: Meta is Offering OpenAI Employees $100 Million Bonuses to Switch Companies, Altman Claims
  • www.laptopmag.com: Why OpenAI engineers are turning down $100 million from Meta, according to Sam Altman
  • www.marktechpost.com: OpenAI Releases an Open‑Sourced Version of a Customer Service Agent Demo with the Agents SDK
  • www.tomshardware.com: Sam Altman says Meta is offering obscene $100M bonuses to poach AI employees and even bigger salaries — OpenAI CEO says ‘none of our best people decided to take them up on that’

Chris McKay@Maginative //
References: Maginative, techstrong.ai, MarkTechPost ...
OpenAI has secured a significant contract with the U.S. Defense Department, marking its first major foray into the national security sector. The one-year agreement, valued at $200 million, signifies a pivotal moment as OpenAI aims to supply its AI tools for administrative tasks and proactive cyberdefense. This initiative is the inaugural project under OpenAI's new "OpenAI for Government" program, highlighting the company's strategic shift and ambition to become a key provider of generative AI solutions for national security agencies. This deal follows OpenAI's updated usage policy, which now permits defensive or humanitarian military applications, signaling a departure from its earlier stance against military use of its AI models.

This move by OpenAI reflects a broader trend in the AI industry, with rival companies like Anthropic and Meta also embracing collaborations with defense contractors and intelligence agencies. OpenAI emphasizes that its usage policy still prohibits weapon development or kinetic targeting, and the Defense Department contract will adhere to these restrictions. The "OpenAI for Government" program includes custom models, hands-on support, and previews of product roadmaps for government agencies, offering them an enhanced Enterprise feature set.

In addition to its government initiatives, OpenAI is expanding its enterprise strategy by open-sourcing a new multi-agent customer service demo on GitHub. This demo showcases how to build domain-specialized AI agents using the Agents SDK, offering a practical example for developers. The system models an airline customer service chatbot capable of handling various travel-related queries by dynamically routing requests to specialized agents like Seat Booking, Flight Status, and Cancellation. By offering transparent tooling and clear implementation examples, OpenAI aims to accelerate the adoption of agentic systems in everyday enterprise applications.

Recommended read:
References :
  • Maginative: OpenAI has clinched a one-year, $200 million contract—its first with the U.S. Defense Department—kicking off a new "OpenAI for Government" program and intensifying the race to supply generative AI to national-security agencies.
  • techstrong.ai: The Defense Department on Monday awarded OpenAI a one-year, $200 million contract for use of its artificial intelligence (AI) tools for administrative tasks and proactive cyberdefense – the first project of what the ChatGPT maker hopes will be many under its new OpenAI for Government initiative.
  • AI News | VentureBeat: By offering transparent tooling and clear implementation examples, OpenAI is pushing agentic systems out of the lab and into everyday use.
  • MarkTechPost: OpenAI has open-sourced a new multi-agent customer service demo on GitHub, showcasing how to build domain-specialized AI agents using its Agents SDK.

Mark Tyson@tomshardware.com //
References: Last Week in AI, composio.dev ...
OpenAI has launched o3-pro for ChatGPT, a significant advancement in both performance and cost-efficiency for its reasoning models. The new model is accessible through the OpenAI API and the Pro plan, priced at $200 per month. Alongside the launch, the company has cut the price of its previous o3 model by 80%. This move aims to give users more powerful and affordable AI capabilities, challenging competitors in the AI model market and pushing the boundaries of reasoning.

The o3-pro model is set to offer enhanced raw reasoning capabilities, but early reviews suggest mixed results when compared to competing models like Claude 4 Opus and Gemini 2.5 Pro. While some tests indicate that Claude 4 Opus currently excels in prompt following, output quality, and understanding user intentions, Gemini 2.5 Pro is considered the most economical option with a superior price-to-performance ratio. Initial assessments suggest that o3-pro might not be worth the higher cost unless the user's primary interest lies in research applications.

The launch of o3-pro coincides with other strategic moves by OpenAI, such as consolidating its public sector AI products, including ChatGPT Gov, under the "OpenAI for Government" banner. OpenAI has also secured a $200 million contract with the U.S. Department of Defense to explore AI applications in administration and security. Despite these advancements, OpenAI is navigating challenges as well, notably the planned deprecation of GPT-4.5 Preview in the API, which has frustrated developers who relied on the model for their applications and workflows.

Recommended read:
References :
  • Last Week in AI: OpenAI introduces O3 PRO for ChatGPT, highlighting significant improvements in performance and cost-efficiency.
  • Last Week in AI: OpenAI adds o3 Pro to ChatGPT and drops o3 price by 80 per cent, ProRL: Prolonged Reinforcement Learning Expands Reasoning Boundaries
  • composio.dev: OpenAI finally unveiled their much expensive O3, the O3-Pro. The model is available in their API and Pro plan, which costs $200

@www.unite.ai //
OpenAI has significantly upgraded ChatGPT's Projects feature, introducing a suite of new tools designed to enhance productivity and streamline workflows. This update marks the most substantial improvement to Projects since its initial launch, transforming it from a simple organizational tool into a smarter, more context-aware workspace. Users can now leverage features like voice mode, enhanced memory, and mobile file uploads to manage research, code repositories, and creative endeavors more efficiently. The implications for professionals and anyone managing complex tasks are considerable.

The upgraded Projects feature now includes six key enhancements. Voice Mode allows users to engage in discussions about files and past chats hands-free, enabling activities like reviewing reports while walking or brainstorming during commutes. Enhanced memory ensures ChatGPT retains context from previous conversations and documents within a project, eliminating the need for repetitive explanations. The ability to upload files directly from mobile devices offers greater flexibility and convenience.

Furthermore, Projects now supports Deep Research, providing more in-depth analysis and insights. Users can also set project-specific instructions that override general custom instructions, ensuring a tailored experience. These updates collectively transform Projects into a centralized hub where chats, files, and instructions coexist within a focused workspace, making it ideal for larger, iterative tasks requiring deeper context and potential collaboration. OpenAI aims for Projects to function more like smart workspaces than one-off chats.


Alyssa Mazzina@blog.runpod.io //
References: Ken Yeung
AI is rapidly changing how college students approach their education. Instead of solely using AI for cheating, students are finding innovative ways to leverage tools like ChatGPT for studying, organization, and collaboration. For instance, students are using AI to quiz themselves on lecture notes, summarize complex readings, and alphabetize citations. These tasks free up time and mental energy, allowing students to focus on deeper learning and understanding course material. This shift reflects a move toward optimizing their learning processes, rather than simply seeking shortcuts.

Students are also using AI tools like Grammarly to refine their communications with professors and internship coordinators. Tools like Notion AI are helping students organize their schedules and generate study plans that feel less overwhelming. Furthermore, a collaborative AI-sharing culture has emerged, with students splitting the cost of ChatGPT Plus and sharing accounts. This collaborative spirit extends to group chats where students exchange quiz questions generated by AI, fostering a supportive learning environment.

Handshake, the college career network, has launched a new platform, Handshake AI, to connect graduate students with leading AI research labs, creating new opportunities for monetization. This service allows PhD students to train and evaluate AI models, offering their academic expertise to improve large language models. Experts are needed in fields like mathematics, physics, chemistry, biology, music, and education. Handshake AI provides AI labs with access to vetted individuals who can offer the human judgment needed for AI to evolve, while providing graduate students with valuable experience and income in the burgeoning AI space.

Recommended read:
References :
  • AI on Campus: How Students Are Really Using AI to Write, Study, and Think
  • Ken Yeung: The New Side Hustle for Graduate Students: Training AI

Brandon Vigliarolo@The Register - Software //
ChatGPT experienced a major global outage on June 10, 2025, causing disruptions for users worldwide. OpenAI confirmed elevated error rates and latency across its services, including ChatGPT, the Sora text-to-video product, and its APIs. Users reported that prompts were either taking significantly longer to be answered or were met with an error message. The issue started around 3:00 AM Eastern Time, with reports on Downdetector steadily climbing as the morning progressed and the United States woke up. Downdetector indicated the problems were not restricted to the United States, with users in other countries reporting similar issues.

OpenAI stated that they had identified the root cause of the issue and were working on implementing a fix. According to the company's status page, the login services appeared to be functioning normally, but other services were experiencing partial outages. A separate entry for elevated error rates in Sora was also included on the status page. Initially, some users appeared to regain access around 6:00 AM ET, but reports of issues soon returned.

Later in the day, OpenAI reported that the fix was underway, with API calls in the process of recovering. However, the company also stated that full access to other affected services, including ChatGPT, could take "another few hours." While user reports on Downdetector initially dipped, it remained to be seen whether this signaled the outage winding down or if further spikes would occur. OpenAI's service status later switched from red to yellow as the company reported that ChatGPT and API calls were slowly recovering, with Sora back to full operation.

Recommended read:
References :
  • bsky.app: ChatGPT and the API are currently experiencing elevated errors and latency, we are rolling out a fix
  • The Register - Software: Oh, great – now who'll do my thinking for me? If you're having trouble getting ChatGPT to do your work for you this morning, you wouldn't be alone. It appears OpenAI services are experiencing a variety of issues.
  • PCMag Middle East ai: The issues began at 3 a.m. ET, with reports growing as the United States wakes up. If you're finding ChatGPT has gone silent, you're not alone. OpenAI's chatbot is down for many users around the world right now.
  • www.laptopmag.com: It's not just you, ChatGPT isn't talking to us either
  • TechInformed: ChatGPT has been hit with a massive outage, causing elevated error rates and latency across its AI services, according to OpenAI.
  • www.techradar.com: ChatGPT is down. If you're having issues with OpenAI's chatbot, you're not alone!
  • techinformed.com: ChatGPT’s AI services hit by major global outage

Mark Tyson@tomshardware.com //
OpenAI has recently launched its newest reasoning model, o3-pro, making it available to ChatGPT Pro and Team subscribers, as well as through OpenAI’s API. Enterprise and Edu subscribers will gain access the following week. The company touts o3-pro as a significant upgrade, emphasizing its enhanced capabilities in mathematics, science, and coding, and its improved ability to utilize external tools.

OpenAI has also slashed the price of o3 by 80% and o3-pro by 87%, positioning the model as a more accessible option for developers seeking advanced reasoning capabilities. This price adjustment comes at a time when AI providers are competing more aggressively on both performance and affordability. Experts note that evaluations consistently prefer o3-pro over the standard o3 model across all categories, especially in science, programming, and business tasks.

O3-pro utilizes the same underlying architecture as o3, but it's tuned to be more reliable, especially on complex tasks, with better long-range reasoning. The model supports tools like web browsing, code execution, vision analysis, and memory. While the increased complexity can lead to slower response times, OpenAI suggests that the tradeoff is worthwhile for the most challenging questions "where reliability matters more than speed, and waiting a few minutes is worth the tradeoff."

Recommended read:
References :
  • Maginative: OpenAI’s new o3-pro model is now available in ChatGPT and the API, offering top-tier performance in math, science, and coding—at a dramatically lower price.
  • AI News | VentureBeat: OpenAI's most powerful reasoning model, o3, is now 80% cheaper, making it more affordable for businesses, researchers, and individual developers.
  • Latent.Space: OpenAI just dropped the price of their o3 model by 80% today and launched o3-pro.
  • THE DECODER: OpenAI has lowered the price of its o3 language model by 80 percent, CEO Sam Altman said.
  • Simon Willison's Weblog: OpenAI's Adam Groth explained that the engineers have optimized inference, allowing a significant price reduction for the o3 model.
  • the-decoder.com: OpenAI lowered the price of its o3 language model by 80 percent, CEO Sam Altman said.
  • AI News | VentureBeat: OpenAI released the latest in its o-series of reasoning model that promises more reliable and accurate responses for enterprises.
  • bsky.app: The OpenAI API is back to running at 100% again, plus we dropped o3 prices by 80% and launched o3-pro - enjoy!
  • Sam Altman: We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.
  • siliconangle.com: OpenAI’s newest reasoning model o3-pro surpasses rivals on multiple benchmarks, but it’s not very fast
  • bsky.app: OpenAI has launched o3-pro. The new model is available to ChatGPT Pro and Team subscribers and in OpenAI’s API now, while Enterprise and Edu subscribers will get access next week. If you use reasoning models like o1 or o3, try o3-pro, which is much smarter and better at using external tools.
  • The Algorithmic Bridge: OpenAI o3-Pro Is So Good That I Can’t Tell How Good It Is

Mark Tyson@tomshardware.com //
References: bsky.app, the-decoder.com, Maginative ...
OpenAI has launched o3-pro, a new and improved version of its AI model designed to provide more reliable and thoughtful responses, especially for complex tasks. Replacing the o1-pro model, o3-pro is accessible to Pro and Team users within ChatGPT and through the API, marking OpenAI's ongoing effort to refine its AI technology. The focus of this upgrade is to enhance the model’s reasoning capabilities and maintain consistency in generating responses, directly addressing shortcomings found in previous models.

The o3-pro model is designed to handle tasks requiring deep analytical thinking and advanced reasoning. While built upon the same transformer architecture and deep learning techniques as other OpenAI chatbots, o3-pro distinguishes itself with an improved ability to understand context. Some users have noted that o3-pro feels like o3, but is only modestly better in exchange for being slower.

Comparisons with other leading models such as Claude 4 Opus and Gemini 2.5 Pro reveal interesting insights. While Claude 4 Opus has been praised for prompt following and understanding user intentions, Gemini 2.5 Pro stands out for its price-to-performance ratio. Early user experiences suggest o3-pro might not always be worth the expense due to its speed, except for research purposes. Some users have suggested that o3-pro hallucinates modestly less, though this is still being debated.

Recommended read:
References :
  • bsky.app: the OpenAI API is back to running at 100% again, plus we dropped o3 prices by 80% and launched o3-pro - enjoy!
  • the-decoder.com: OpenAI cuts o3 model prices by 80% and launches o3-pro today
  • AI News | VentureBeat: OpenAI launches o3-pro AI model, offering increased reliability and tool use for enterprises — while sacrificing speed
  • Maginative: OpenAI’s new o3-pro model is now available in ChatGPT and the API, offering top-tier performance in math, science, and coding—at a dramatically lower price.
  • THE DECODER: OpenAI has lowered the price of its o3 language model by 80 percent, CEO Sam Altman said. The article appeared first on The Decoder.
  • AI News | VentureBeat: OpenAI announces 80% price drop for o3, it’s most powerful reasoning model
  • www.cnbc.com: The figure includes sales from the company’s consumer products, ChatGPT business products and its application programming interface, or API.
  • Latent.Space: OpenAI dropped o3 pricing 80% today and launched o3-pro. Ben Hylak of Raindrop.ai returns with the world's first early review.
  • siliconangle.com: OpenAI’s newest reasoning model o3-pro surpasses rivals on multiple benchmarks, but it’s not very fast
  • bsky.app: OpenAI has launched o3-pro. The new model is available to ChatGPT Pro and Team subscribers and in OpenAI’s API now, while Enterprise and Edu subscribers will get access next week. If you use reasoning models like o1 or o3, try o3-pro, which is much smarter and better at using external tools.
  • The Algorithmic Bridge: OpenAI o3-Pro Is So Good That I Can’t Tell How Good It Is
  • Windows Report: OpenAI rolls out new o3-pro AI model to ChatGPT Pro subscribers
  • www.indiatoday.in: OpenAI introduces O3 PRO for ChatGPT, highlighting significant improvements in performance and cost-efficiency.
  • www.marketingaiinstitute.com: [The AI Show Episode 153]: OpenAI Releases o3-Pro, Disney Sues Midjourney, Altman: "Gentle Singularity" Is Here, AI and Jobs & News Sites Getting Crushed by AI Search
  • datafloq.com: OpenAI is now considered another name of innovation in the field of AI, and with the launch of OpenAI o3, this claim is becoming stronger.
  • composio.dev: OpenAI finally unveiled their much expensive O3, the O3-Pro. The model is available in their API and Pro plan, which costs $200

@www.cnbc.com //
OpenAI has reached a significant milestone, achieving $10 billion in annual recurring revenue (ARR). This surge in revenue is primarily driven by the popularity and adoption of its ChatGPT chatbot, along with its suite of business products and API services. The ARR figure excludes licensing revenue from Microsoft and other large one-time deals. This achievement comes roughly two and a half years after the initial launch of ChatGPT, demonstrating the rapid growth and commercial success of OpenAI's AI technologies.

Despite the financial success, OpenAI is also grappling with the complexities of AI safety and responsible use. Concerns have been raised about the potential for AI models to generate malicious content and be exploited for cyberattacks. The company is actively working to address these issues, including clamping down on ChatGPT accounts linked to state-sponsored cyberattacks. Furthermore, the company will now retain deleted ChatGPT conversations to comply with a court order.

In related news, a security vulnerability was discovered in Google Accounts, potentially exposing users to phishing and SIM-swapping attacks. The vulnerability allowed researchers to brute-force any Google account's recovery phone number by knowing the profile name and an easily retrieved partial phone number. Google has since patched the bug. Separately, OpenAI is facing a court order to retain deleted ChatGPT conversations in connection with a copyright lawsuit filed by The New York Times, which alleges that OpenAI used its content without permission. The company plans to appeal the ruling; in the meantime, the retained data will be stored separately in a secure system and accessed only to meet legal obligations.

Recommended read:
References :
  • SiliconANGLE: OpenAI reaches $10B in annual recurring revenue as ChatGPT adoption accelerates
  • Simon Willison's Weblog: OpenAI hits $10 billion in annual recurring revenue fueled by ChatGPT growth
  • TechCrunch: OpenAI claims to have hit $10B in annual revenue
  • www.cnbc.com: OpenAI hits $10 billion in annual recurring revenue fueled by ChatGPT growth.
  • www.digitimes.com: OpenAI annualized revenue doubles to US$10B, eyes profitability by 2029
  • www.bleepingcomputer.com: A vulnerability allowed researchers to brute-force any Google account's recovery phone number simply by knowing their profile name and an easily retrieved partial phone number, creating a massive risk for phishing and SIM-swapping attacks.
  • siliconangle.com: OpenAI reaches $10B in annual recurring revenue as ChatGPT adoption accelerates
  • Analytics India Magazine: AI news updates—OpenAI’s Annual Revenue Touches $10 Billion, Up 81.8% From Last Year
  • Dataconomy: OpenAI confirms $10B annual revenue milestone
  • The Tech Portal: OpenAI doubles annual revenue to $10Bn from $5.5Bn in December 2024
  • analyticsindiamag.com: OpenAI’s Annual Revenue Touches $10 Billion, Up 81.8% From Last Year
  • Quartz: OpenAI says it's making $10 billion in annual recurring revenue as ChatGPT grows

Pierluigi Paganini@securityaffairs.com //
OpenAI is actively combating the misuse of its AI tools, including ChatGPT, by malicious groups from countries like China, Russia, and Iran. The company recently banned multiple ChatGPT accounts linked to these threat actors, who were exploiting the platform for illicit activities. These banned accounts were involved in assisting with malware development, automating social media activities to spread disinformation, and conducting research on sensitive topics such as U.S. satellite communications technologies.

OpenAI's actions highlight the diverse ways in which malicious actors are attempting to leverage AI for their campaigns. Chinese groups used AI to generate fake comments and articles on platforms like TikTok and X, posing as real users to spread disinformation and influence public opinion. North Korean actors used AI to craft fake resumes and job applications in an attempt to secure remote IT jobs and potentially steal data. Russian groups employed AI to develop malware and plan cyberattacks, aiming to compromise systems and exfiltrate sensitive information.

The report also details specific operations like ScopeCreep, where a Russian-speaking threat actor used ChatGPT to develop and refine Windows malware. They also use AI to debug code in multiple languages and setup their command and control infrastructure. This malware was designed to escalate privileges, establish stealthy persistence, and exfiltrate sensitive data while evading detection. OpenAI's swift response and the details revealed in its report demonstrate the ongoing battle against the misuse of AI and the proactive measures being taken to safeguard its platforms.

Recommended read:
References :
  • securityaffairs.com: OpenAI bans ChatGPT accounts linked to Russian, Chinese cyber ops
  • thehackernews.com: OpenAI has revealed that it banned a set of ChatGPT accounts that were likely operated by Russian-speaking threat actors and two Chinese nation-state hacking groups to assist with malware development, social media automation, and research about U.S. satellite communications technologies, among other things.
  • Tech Monitor: OpenAI highlights exploitative use of ChatGPT by Chinese entities
  • gbhackers.com: OpenAI Shuts Down ChatGPT Accounts Linked to Russian, Iranian & Chinese Cyber
  • iHLS: AI Tools Exploited in Covert Influence and Cyber Ops, OpenAI Warns
  • The Register - Security: OpenAI boots accounts linked to 10 malicious campaigns
  • hackread.com: OpenAI, a leading artificial intelligence company, has revealed it is actively fighting widespread misuse of its AI tools…
  • Metacurity: OpenAI banned ChatGPT accounts tied to Russian and Chinese hackers using the tool for malware, social media abuse, and U.S.

Pierluigi Paganini@securityaffairs.com //
OpenAI is facing scrutiny over its ChatGPT user logs due to a recent court order mandating the indefinite retention of all chat data, including deleted conversations. This directive stems from a lawsuit filed by The New York Times and other news organizations, who allege that ChatGPT has been used to generate copyrighted news articles. The plaintiffs believe that even deleted chats could contain evidence of infringing outputs. OpenAI, while complying with the order, is appealing the decision, citing concerns about user privacy and potential conflicts with data privacy regulations like the EU's GDPR. The company emphasizes that this retention policy does not affect ChatGPT Enterprise or ChatGPT Edu customers, nor users with a Zero Data Retention agreement.

Sam Altman, CEO of OpenAI, has advocated for what he terms "AI privilege," suggesting that interactions with AI should be afforded the same privacy protections as communications with professionals like lawyers or doctors. This stance comes as OpenAI faces criticism for not disclosing to users that deleted and temporary chat logs were being preserved since mid-May in response to the court order. Altman argues that retaining user chats compromises their privacy, which OpenAI considers a core principle. He fears that this legal precedent could lead to a future where all AI conversations are recorded and accessible, potentially chilling free expression and innovation.

In addition to privacy concerns, OpenAI has identified and addressed malicious campaigns leveraging ChatGPT for nefarious purposes. These activities include the creation of fake IT worker resumes, the dissemination of misinformation, and assistance in cyber operations. OpenAI has banned accounts linked to ten such campaigns, including those potentially associated with North Korean IT worker schemes, Beijing-backed cyber operatives, and Russian malware distributors. These malicious actors utilized ChatGPT to craft application materials, auto-generate resumes, and even develop multi-stage malware. OpenAI is actively working to combat these abuses and safeguard its platform from being exploited for malicious activities.

Recommended read:
References :
  • chatgptiseatingtheworld.com: After filing an objection with Judge Stein, OpenAI took to the court of public opinion to seek the reversal of Magistrate Judge Wang’s broad order requiring OpenAI to preserve all ChatGPT logs of people’s chats.
  • Reclaim The Net: Private prompts once thought ephemeral could now live forever, thanks to demands from the New York Times.
  • Digital Information World: If you’ve ever used ChatGPT’s temporary chat feature thinking your conversation would vanish after closing the window — well, it turns out that wasn’t exactly the case.
  • iHLS: AI Tools Exploited in Covert Influence and Cyber Ops, OpenAI Warns
  • Schneier on Security: Report on the Malicious Uses of AI
  • The Register - Security: ChatGPT used for evil: Fake IT worker resumes, misinfo, and cyber-op assist
  • Jon Greig: Russians are using ChatGPT to incrementally improve malware. Chinese groups are using it to mass-create fake social media comments. North Koreans are using it to refine fake resumes. OpenAI is likely only catching a fraction of nation-state use.
  • www.zdnet.com: How global threat actors are weaponizing AI now, according to OpenAI
  • thehackernews.com: OpenAI has revealed that it banned a set of ChatGPT accounts that were likely operated by Russian-speaking threat actors and two Chinese nation-state hacking groups to assist with malware development, social media automation, and research about U.S. satellite communications technologies, among other things.
  • securityaffairs.com: OpenAI bans ChatGPT accounts linked to Russian, Chinese cyber ops
  • therecord.media: Russians are using ChatGPT to incrementally improve malware. Chinese groups are using it to mass-create fake social media comments. North Koreans are using it to refine fake resumes. OpenAI is likely only catching a fraction of nation-state use.
  • siliconangle.com: OpenAI to retain deleted ChatGPT conversations following court order
  • eWEEK: ‘An Inappropriate Request’: OpenAI Appeals ChatGPT Data Retention Court Order in NYT Case
  • gbhackers.com: OpenAI Shuts Down ChatGPT Accounts Linked to Russian, Iranian & Chinese Cyber
  • Policy - Ars Technica: OpenAI is retaining all ChatGPT logs "indefinitely." Here's who's affected.
  • AI News | VentureBeat: Sam Altman calls for ‘AI privilege’ as OpenAI clarifies court order to retain temporary and deleted ChatGPT sessions
  • www.techradar.com: Sam Altman says AI chats should be as private as ‘talking to a lawyer or a doctor’, but OpenAI could soon be forced to keep your ChatGPT conversations forever
  • aithority.com: New Relic Report Shows OpenAI’s ChatGPT Dominates Among AI Developers
  • the-decoder.com: ChatGPT scams range from silly money-making ploys to calculated political meddling
  • hackread.com: OpenAI Shuts Down 10 Malicious AI Ops Linked to China, Russia, N. Korea
  • Tech Monitor: OpenAI highlights exploitative use of ChatGPT by Chinese entities

iHLS News@iHLS //
OpenAI has revealed that state-linked groups are increasingly experimenting with artificial intelligence for covert online operations, including influence campaigns and cyber support. A newly released report by OpenAI highlights how these groups, originating from countries like China, Russia, and Cambodia, are misusing generative AI technologies, such as ChatGPT, to manipulate content and spread disinformation. The company's latest report outlines examples of AI misuse and abuse, emphasizing a steady evolution in how AI is being integrated into covert digital strategies.

OpenAI has uncovered several international operations where its AI models were misused for cyberattacks, political influence, and even employment scams. For example, Chinese operations have been identified posting comments on geopolitical topics to discredit critics, while others used fake media accounts to collect information on Western targets. In one instance, ChatGPT was used to draft job recruitment messages in multiple languages, promising victims unrealistic payouts for simply liking social media posts, a scheme discovered accidentally by an OpenAI investigator.

Furthermore, OpenAI shut down a Russian influence campaign that utilized ChatGPT to produce German-language content ahead of Germany's 2025 federal election. This campaign, dubbed "Operation Helgoland Bite," operated through social media channels, attacking the US and NATO while promoting a right-wing political party. While the detected efforts across these various campaigns were limited in scale, the report underscores the critical need for collective detection efforts and increased vigilance against the weaponization of AI.

Recommended read:
References :
  • Schneier on Security: Report on the Malicious Uses of AI
  • iHLS: AI Tools Exploited in Covert Influence and Cyber Ops, OpenAI Warns
  • www.zdnet.com: The company's new report outlines the latest examples of AI misuse and abuse originating from China and elsewhere.
  • The Register - Security: ChatGPT used for evil: Fake IT worker resumes, misinfo, and cyber-op assist
  • cyberpress.org: CyberPress article on OpenAI Shuts Down ChatGPT Accounts Linked to Russian, Iranian, and Chinese Hackers
  • securityaffairs.com: SecurityAffairs article on OpenAI bans ChatGPT accounts linked to Russian, Chinese cyber ops
  • thehackernews.com: OpenAI has revealed that it banned a set of ChatGPT accounts that were likely operated by Russian-speaking threat actors and two Chinese nation-state hacking groups
  • Tech Monitor: OpenAI highlights exploitative use of ChatGPT by Chinese entities

@siliconangle.com //
OpenAI is facing increased scrutiny over its data retention policies following a recent court order related to a high-profile copyright lawsuit filed by The New York Times in 2023. The lawsuit alleges that OpenAI and Microsoft Corp. used millions of the Times' articles without permission to train their AI models, including ChatGPT. The paper further alleges that ChatGPT outputted Times content verbatim without attribution. As a result, OpenAI has been ordered to retain all ChatGPT logs, including deleted conversations, indefinitely to ensure that potentially relevant evidence is not destroyed. This move has sparked debate over user privacy and data security.

OpenAI COO Brad Lightcap announced that while users' deleted ChatGPT prompts and responses are typically erased after 30 days, this practice will be suspended in order to comply with the court order. The retention policy will affect users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI's application programming interface (API), but not those using the Enterprise or Edu editions or those with a Zero Data Retention agreement. The company asserts that the retained data will be stored separately in a secure system accessible only by a small, audited OpenAI legal and security team, solely to meet legal obligations. The court order was granted within one day of the NYT's request due to concerns that users might delete chats if using ChatGPT to bypass paywalls.

OpenAI CEO Sam Altman has voiced strong opposition to the court order, calling it an "inappropriate request" and stating that OpenAI will appeal the decision. He argues that AI interactions should be treated with similar privacy protections as conversations with a lawyer or doctor, suggesting the need for "AI privilege". The company also expressed concerns about its ability to comply with the European Union's General Data Protection Regulation (GDPR), which grants users the right to be forgotten. Altman pledged to fight any demand that compromises user privacy, which he considers a core principle, promising customers that the company will fight to protect their privacy at every step if the plaintiffs continue to push for access.

Recommended read:
References :
  • Policy - Ars Technica: OpenAI is retaining all ChatGPT logs "indefinitely." Here's who's affected.
  • www.eweek.com: ‘An Inappropriate Request’: OpenAI Appeals ChatGPT Data Retention Court Order in NYT Case
  • SiliconANGLE: OpenAI will retain users’ deleted ChatGPT conversations to comply with a recently issued court order. Brad Lightcap, the artificial intelligence developer’s chief operating officer, disclosed the move in a late Thursday blog post. When users delete ChatGPT prompts and the chatbot’s responses, OpenAI usually retains the data for 30 days before permanently erasing it. Going […]
  • www.techradar.com: Sam Altman says AI chats should be as private as ‘talking to a lawyer or a doctor’, but OpenAI could soon be forced to keep your ChatGPT conversations forever

Megan Crouse@eWEEK //
OpenAI's ChatGPT is expanding its reach with new integrations, allowing users to connect directly to tools like Google Drive and Dropbox. This update allows ChatGPT to access and analyze data from these cloud storage services, enabling users to ask questions and receive summaries with cited sources. The platform is positioning itself as a user interface for data, offering one-click access to files, effectively streamlining the search process for information stored across various documents and spreadsheets. In addition to cloud connectors, ChatGPT has also introduced a "Record" feature for Team accounts that can record meetings, generate summaries, and offer action items.

These new features for ChatGPT come with data privacy considerations. While OpenAI states that files accessed through Google Drive or Dropbox connectors are not used for training its models for ChatGPT Team, Enterprise, and Education accounts, concerns remain about the data usage for free users and ChatGPT Plus subscribers. However, OpenAI confirms that audio recorded by the tool is immediately deleted after transcription, and transcripts are subject to workspace retention policies. Moreover, content from Team, Enterprise, and Edu workspaces, including audio recordings and transcripts from ChatGPT record, is excluded from model training by default.

Meanwhile, Reddit has filed a lawsuit against Anthropic, alleging the AI company scraped Reddit's data without permission to train its Claude AI models. Reddit accuses Anthropic of accessing its servers over 100,000 times after promising to stop scraping, and claims Anthropic intentionally trained on the personal data of Reddit users without requesting their consent. Reddit has licensing deals with OpenAI and Google, but Anthropic does not. Reddit seeks an injunction forcing Anthropic to stop using any Reddit data immediately, and is also asking the court to prohibit Anthropic from selling or licensing any product built using that data. Despite these controversies, Microsoft CEO Satya Nadella has stated that Microsoft profits from every ChatGPT usage, highlighting the success of its investment in OpenAI.

Recommended read:
References :
  • shellypalmer.com: OpenAI's latest update to ChatGPT lets it read your files in Google Drive and Dropbox. Just like that, your cloud storage is now part of your prompt.
  • www.artificialintelligence-news.com: Reddit sues Anthropic over AI data scraping
  • Tech News | Euronews RSS: Social media company Reddit sued artificial intelligence (AI) company Anthropic for allegedly scraping user comments to train its chatbot Claude.
  • www.itpro.com: Latest ChatGPT update lets users record meetings and connect to tools like Dropbox and Google Drive
  • Maginative: Reddit Sues Anthropic for Allegedly Scraping Its Data Without Permission
  • www.windowscentral.com: Satya Nadella says Microsoft makes money every time you use ChatGPT: "Every day that ChatGPT succeeds is a fantastic day"