@www.linkedin.com
//
References: IEEE Spectrum, The Cognitive Revolution
Universities are increasingly integrating artificial intelligence into education, not only to enhance teaching methodologies but also to equip students with the essential AI skills they'll need in the future workforce. There's a growing understanding that students should learn how to use AI tools effectively and ethically, rather than simply relying on them as a shortcut for completing assignments. This shift involves incorporating AI into the curriculum in meaningful ways, ensuring students understand both the capabilities and limitations of these technologies.
Estonia is taking a proactive approach with the launch of AI chatbots designed specifically for high school classrooms. This initiative aims to familiarize students with AI in a controlled educational environment. The goal is to empower students to use AI tools responsibly and effectively, moving beyond basic applications to more sophisticated problem-solving and critical thinking. Furthermore, Microsoft is introducing new AI features for educators within Microsoft 365 Copilot, including Copilot Chat for teens. Microsoft's 2025 AI in Education Report highlights that over 80% of surveyed educators are using AI, but a significant portion still lack confidence in its effective and responsible use. These initiatives aim to provide the necessary training and guidance to teachers and administrators, ensuring they can integrate AI seamlessly into their instruction.
References: Sean Endicott@windowscentral.com
//
References: The Algorithmic Bridge, www.windowscentral.com
A recent MIT study has sparked debate about the potential cognitive consequences of over-reliance on AI tools like ChatGPT. The research suggests that using these large language models (LLMs) can lead to reduced brain activity and potentially impair critical thinking and writing skills. The study, titled "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task," examined the neural and behavioral effects of using ChatGPT for essay writing. The findings raise questions about the long-term impact of AI on learning, memory, and overall cognitive function.
The MIT researchers divided 54 participants into three groups: one that used ChatGPT exclusively, one that used a search engine, and a brain-only group relying solely on their own knowledge. Participants wrote essays on various topics over three sessions while wearing EEG headsets to monitor brain activity. The results showed that the ChatGPT group experienced a 32% lower cognitive load than the brain-only group. In a fourth session, participants from the ChatGPT group were asked to write without AI assistance, and their performance was notably worse, indicating a decline in independent writing ability. While the study highlights potential drawbacks, other perspectives suggest that AI tools don't necessarily make users less intelligent. The study's authors themselves avoid a black-and-white conclusion, noting that their findings both support and qualify the criticism of LLMs. Experts suggest that using ChatGPT strategically, rather than as a complete replacement for cognitive effort, could mitigate the risks. They emphasize the importance of understanding the tool's capabilities and limitations, focusing on augmentation rather than substitution of human skills.
References: @www.marktechpost.com
//
OpenAI has recently released an open-sourced version of a customer service agent demo, built using its Agents SDK. The "openai-cs-agents-demo," available on GitHub, showcases the creation of domain-specialized AI agents. This demo models an airline customer service chatbot, which adeptly manages a variety of travel-related inquiries by dynamically routing user requests to specialized agents. The system's architecture comprises a Python backend utilizing the Agents SDK for agent orchestration and a Next.js frontend providing a conversational interface and visual representation of agent transitions.
The demo includes several focused agents: a Triage Agent, Seat Booking Agent, Flight Status Agent, Cancellation Agent, and an FAQ Agent. Each agent is configured with specific instructions and tools tailored to its particular sub-task. When a user submits a request, the Triage Agent analyzes the input to discern intent and dispatches the query to the most appropriate downstream agent. Guardrails for relevance and jailbreak attempts are implemented, ensuring topicality and preventing misuse. In related news, OpenAI CEO Sam Altman has claimed that Meta is aggressively attempting to poach OpenAI's AI employees with extravagant offers, including $100 million signing bonuses and significantly higher annual compensation packages. Despite these lucrative offers, Altman stated that "none of our best people have decided to take them up on that," suggesting OpenAI's culture and vision are strong factors in retaining talent. Altman believes Meta's approach focuses too heavily on monetary incentives rather than genuine innovation and a shared purpose, which he sees as crucial for success in the AI field.
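The triage-and-dispatch pattern the demo uses can be illustrated with a minimal plain-Python sketch. This is not the actual Agents SDK code from the repository: the handler names are illustrative, and simple keyword matching stands in for the LLM-based intent classification and relevance guardrail the real demo performs.

```python
# Sketch of the triage-routing pattern: a guardrail filters off-topic input,
# then a triage step classifies intent and dispatches to a specialized handler.
# Keyword matching here is a stand-in for the demo's LLM-based classification.

def seat_booking_agent(query: str) -> str:
    return "Seat booking: a window seat has been reserved."

def flight_status_agent(query: str) -> str:
    return "Flight status: your flight is on time."

def faq_agent(query: str) -> str:
    return "FAQ: one checked bag is included with your fare."

# Intent keyword -> specialized agent (illustrative routing table).
ROUTES = {
    "seat": seat_booking_agent,
    "status": flight_status_agent,
    "baggage": faq_agent,
}

def relevance_guardrail(query: str) -> bool:
    """Reject queries with no airline-related keywords (stand-in for an LLM check)."""
    return any(keyword in query.lower() for keyword in ROUTES)

def triage(query: str) -> str:
    """Triage step: apply the guardrail, then dispatch to the matching agent."""
    if not relevance_guardrail(query):
        return "Sorry, I can only help with airline-related questions."
    for keyword, agent in ROUTES.items():
        if keyword in query.lower():
            return agent(query)
    return "Sorry, I could not route your request."

print(triage("Can I change my seat?"))           # routed to the seat-booking handler
print(triage("What's today's lottery number?"))  # blocked by the relevance guardrail
```

The real demo replaces the routing table with SDK-managed agent handoffs and runs separate guardrail agents for relevance and jailbreak detection, but the control flow is the same: guard, classify, dispatch.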
References: Chris McKay@Maginative
//
OpenAI has secured a significant contract with the U.S. Defense Department, marking its first major foray into the national security sector. The one-year agreement, valued at $200 million, signifies a pivotal moment as OpenAI aims to supply its AI tools for administrative tasks and proactive cyberdefense. This initiative is the inaugural project under OpenAI's new "OpenAI for Government" program, highlighting the company's strategic shift and ambition to become a key provider of generative AI solutions for national security agencies. This deal follows OpenAI's updated usage policy, which now permits defensive or humanitarian military applications, signaling a departure from its earlier stance against military use of its AI models.
This move by OpenAI reflects a broader trend in the AI industry, with rival companies like Anthropic and Meta also embracing collaborations with defense contractors and intelligence agencies. OpenAI emphasizes that its usage policy still prohibits weapon development or kinetic targeting, and the Defense Department contract will adhere to these restrictions. The "OpenAI for Government" program includes custom models, hands-on support, and previews of product roadmaps for government agencies, offering them an enhanced Enterprise feature set. In addition to its government initiatives, OpenAI is expanding its enterprise strategy by open-sourcing a new multi-agent customer service demo on GitHub. This demo showcases how to build domain-specialized AI agents using the Agents SDK, offering a practical example for developers. The system models an airline customer service chatbot capable of handling various travel-related queries by dynamically routing requests to specialized agents like Seat Booking, Flight Status, and Cancellation. By offering transparent tooling and clear implementation examples, OpenAI aims to accelerate the adoption of agentic systems in everyday enterprise applications.
References: Mark Tyson@tomshardware.com
//
OpenAI has launched o3-pro for ChatGPT, marking a significant advancement in both performance and cost-efficiency for its reasoning models. The new model is accessible through the OpenAI API and the Pro plan, priced at $200 per month. The company highlights substantial improvements with o3-pro and has also dropped the price of its previous o3 model by 80%. This strategic move aims to provide users with more powerful and affordable AI capabilities, challenging competitors in the AI model market and expanding the boundaries of reasoning.
The o3-pro model is set to offer enhanced raw reasoning capabilities, but early reviews suggest mixed results when compared to competing models like Claude 4 Opus and Gemini 2.5 Pro. While some tests indicate that Claude 4 Opus currently excels in prompt following, output quality, and understanding user intentions, Gemini 2.5 Pro is considered the most economical option with a superior price-to-performance ratio. Initial assessments suggest that o3-pro might not be worth the higher cost unless the user's primary interest lies in research applications. The launch of o3-pro coincides with other strategic moves by OpenAI, including consolidating its public sector AI products, such as ChatGPT Gov, under the "OpenAI for Government" banner. OpenAI has also secured a $200 million contract with the U.S. Department of Defense to explore AI applications in administration and security. Despite these advancements, OpenAI is also navigating challenges, such as the planned deprecation of GPT-4.5 Preview in the API, which has frustrated developers who relied on the model for their applications and workflows.
References: @www.unite.ai
//
References: www.techradar.com, www.tomsguide.com
OpenAI has significantly upgraded ChatGPT's Projects feature, introducing a suite of new tools designed to enhance productivity and streamline workflows. This update marks the most substantial improvement to Projects since its initial launch, transforming it from a simple organizational tool into a smarter, more context-aware workspace. Users can now leverage features like voice mode, enhanced memory, and mobile file uploads to manage research, code repositories, and creative endeavors more efficiently. The implications for professionals and anyone managing complex tasks are considerable.
The upgraded Projects feature now includes six key enhancements. Voice Mode allows users to engage in discussions about files and past chats hands-free, enabling activities like reviewing reports while walking or brainstorming during commutes. Enhanced memory ensures ChatGPT retains context from previous conversations and documents within a project, eliminating the need for repetitive explanations. The ability to upload files directly from mobile devices offers greater flexibility and convenience. Furthermore, Projects now supports Deep Research, providing more in-depth analysis and insights. Users can also set project-specific instructions that override general custom instructions, ensuring a tailored experience. These updates collectively transform Projects into a centralized hub where chats, files, and instructions coexist within a focused workspace, making it ideal for larger, iterative tasks requiring deeper context and potential collaboration. OpenAI aims for Projects to function more like smart workspaces than one-off chats.
References: Alyssa Mazzina@blog.runpod.io
//
References: Ken Yeung
AI is rapidly changing how college students approach their education. Instead of solely using AI for cheating, students are finding innovative ways to leverage tools like ChatGPT for studying, organization, and collaboration. For instance, students are using AI to quiz themselves on lecture notes, summarize complex readings, and alphabetize citations. These tasks free up time and mental energy, allowing students to focus on deeper learning and understanding course material. This shift reflects a move toward optimizing their learning processes, rather than simply seeking shortcuts.
Students are also using AI tools like Grammarly to refine their communications with professors and internship coordinators. Tools like Notion AI are helping students organize their schedules and generate study plans that feel less overwhelming. Furthermore, a collaborative AI-sharing culture has emerged, with students splitting the cost of ChatGPT Plus and sharing accounts. This collaborative spirit extends to group chats where students exchange quiz questions generated by AI, fostering a supportive learning environment. Handshake, the college career network, has launched a new platform, Handshake AI, to connect graduate students with leading AI research labs, creating new opportunities for monetization. This service allows PhD students to train and evaluate AI models, offering their academic expertise to improve large language models. Experts are needed in fields like mathematics, physics, chemistry, biology, music, and education. Handshake AI provides AI labs with access to vetted individuals who can offer the human judgment needed for AI to evolve, while providing graduate students with valuable experience and income in the burgeoning AI space.
References: Brandon Vigliarolo@The Register - Software
//
ChatGPT experienced a major global outage on June 10, 2025, causing disruptions for users worldwide. OpenAI confirmed elevated error rates and latency across its services, including ChatGPT, the Sora text-to-video product, and its APIs. Users reported that prompts were either taking significantly longer to be answered or were met with an error message. The issue started around 3:00 AM Eastern Time, with reports on Downdetector steadily climbing as the morning progressed and the United States woke up. Downdetector indicated the problems were not restricted to the United States, with users in other countries reporting similar issues.
OpenAI stated that they had identified the root cause of the issue and were working on implementing a fix. According to the company's status page, the login services appeared to be functioning normally, but other services were experiencing partial outages. A separate entry for elevated error rates in Sora was also included on the status page. Initially, some users appeared to regain access around 6:00 AM ET, but reports of issues soon returned. Later in the day, OpenAI reported that the fix was underway, with API calls in the process of recovering. However, the company also stated that full access to other affected services, including ChatGPT, could take "another few hours." While user reports on Downdetector initially dipped, it remained to be seen whether this signaled the outage ramping down or if further spikes would occur. OpenAI's service status later switched from red to yellow as the company reported that ChatGPT and API calls were slowly recovering, with Sora back to full operation.
References: Mark Tyson@tomshardware.com
//
OpenAI has recently launched its newest reasoning model, o3-pro, making it available to ChatGPT Pro and Team subscribers, as well as through OpenAI’s API. Enterprise and Edu subscribers will gain access the following week. The company touts o3-pro as a significant upgrade, emphasizing its enhanced capabilities in mathematics, science, and coding, and its improved ability to utilize external tools.
OpenAI has also slashed the price of o3 by 80% and o3-pro by 87%, positioning the model as a more accessible option for developers seeking advanced reasoning capabilities. This price adjustment comes at a time when AI providers are competing more aggressively on both performance and affordability. Experts note that evaluations consistently prefer o3-pro over the standard o3 model across all categories, especially in science, programming, and business tasks. The o3-pro model utilizes the same underlying architecture as o3, but it is tuned to be more reliable, especially on complex tasks, with better long-range reasoning. The model supports tools like web browsing, code execution, vision analysis, and memory. While the increased complexity can lead to slower response times, OpenAI suggests that the tradeoff is worthwhile for the most challenging questions "where reliability matters more than speed, and waiting a few minutes is worth the tradeoff."
References: Mark Tyson@tomshardware.com
//
OpenAI has launched o3-pro, a new and improved version of its AI model designed to provide more reliable and thoughtful responses, especially for complex tasks. Replacing the o1-pro model, o3-pro is accessible to Pro and Team users within ChatGPT and through the API, marking OpenAI's ongoing effort to refine its AI technology. The focus of this upgrade is to enhance the model’s reasoning capabilities and maintain consistency in generating responses, directly addressing shortcomings found in previous models.
The o3-pro model is designed to handle tasks requiring deep analytical thinking and advanced reasoning. While built upon the same transformer architecture and deep learning techniques as other OpenAI chatbots, o3-pro distinguishes itself with an improved ability to understand context. Some users have noted that o3-pro feels much like o3, only modestly better in exchange for being slower. Comparisons with other leading models such as Claude 4 Opus and Gemini 2.5 Pro reveal interesting insights. While Claude 4 Opus has been praised for prompt following and understanding user intentions, Gemini 2.5 Pro stands out for its price-to-performance ratio. Early user experiences suggest o3-pro might not always be worth the expense due to its speed, except for research purposes. Some users have suggested that o3-pro hallucinates modestly less, though this is still being debated.
References: @www.cnbc.com
//
OpenAI has reached a significant milestone, achieving $10 billion in annual recurring revenue (ARR). This surge in revenue is primarily driven by the popularity and adoption of its ChatGPT chatbot, along with its suite of business products and API services. The ARR figure excludes licensing revenue from Microsoft and other large one-time deals. This achievement comes roughly two and a half years after the initial launch of ChatGPT, demonstrating the rapid growth and commercial success of OpenAI's AI technologies.
Despite the financial success, OpenAI is also grappling with the complexities of AI safety and responsible use. Concerns have been raised about the potential for AI models to generate malicious content and be exploited for cyberattacks. The company is actively working to address these issues, including clamping down on ChatGPT accounts linked to state-sponsored cyberattacks. In related news, a security vulnerability was discovered in Google Accounts, potentially exposing users to phishing and SIM-swapping attacks. The vulnerability allowed researchers to brute-force any Google account's recovery phone number by knowing the profile name and an easily retrieved partial phone number. Google has since patched the bug. Separately, OpenAI is facing a court order to retain deleted ChatGPT conversations in connection with a copyright lawsuit filed by The New York Times, which alleges that OpenAI used its content without permission. The company plans to appeal the ruling, and says the retained data will be stored separately in a secure system and accessed only to meet legal obligations.
References: Pierluigi Paganini@securityaffairs.com
//
OpenAI is actively combating the misuse of its AI tools, including ChatGPT, by malicious groups from countries like China, Russia, and Iran. The company recently banned multiple ChatGPT accounts linked to these threat actors, who were exploiting the platform for illicit activities. These banned accounts were involved in assisting with malware development, automating social media activities to spread disinformation, and conducting research on sensitive topics such as U.S. satellite communications technologies.
OpenAI's actions highlight the diverse ways in which malicious actors are attempting to leverage AI for their campaigns. Chinese groups used AI to generate fake comments and articles on platforms like TikTok and X, posing as real users to spread disinformation and influence public opinion. North Korean actors used AI to craft fake resumes and job applications in an attempt to secure remote IT jobs and potentially steal data. Russian groups employed AI to develop malware and plan cyberattacks, aiming to compromise systems and exfiltrate sensitive information. The report also details specific operations like ScopeCreep, in which a Russian-speaking threat actor used ChatGPT to develop and refine Windows malware, debug code in multiple languages, and set up command-and-control infrastructure. This malware was designed to escalate privileges, establish stealthy persistence, and exfiltrate sensitive data while evading detection. OpenAI's swift response and the details revealed in its report demonstrate the ongoing battle against the misuse of AI and the proactive measures being taken to safeguard its platforms.
References: Pierluigi Paganini@securityaffairs.com
//
OpenAI is facing scrutiny over its ChatGPT user logs due to a recent court order mandating the indefinite retention of all chat data, including deleted conversations. This directive stems from a lawsuit filed by The New York Times and other news organizations, who allege that ChatGPT has been used to generate copyrighted news articles. The plaintiffs believe that even deleted chats could contain evidence of infringing outputs. OpenAI, while complying with the order, is appealing the decision, citing concerns about user privacy and potential conflicts with data privacy regulations like the EU's GDPR. The company emphasizes that this retention policy does not affect ChatGPT Enterprise or ChatGPT Edu customers, nor users with a Zero Data Retention agreement.
Sam Altman, CEO of OpenAI, has advocated for what he terms "AI privilege," suggesting that interactions with AI should be afforded the same privacy protections as communications with professionals like lawyers or doctors. This stance comes as OpenAI faces criticism for not disclosing to users that deleted and temporary chat logs were being preserved since mid-May in response to the court order. Altman argues that retaining user chats compromises their privacy, which OpenAI considers a core principle. He fears that this legal precedent could lead to a future where all AI conversations are recorded and accessible, potentially chilling free expression and innovation. In addition to privacy concerns, OpenAI has identified and addressed malicious campaigns leveraging ChatGPT for nefarious purposes. These activities include the creation of fake IT worker resumes, the dissemination of misinformation, and assistance in cyber operations. OpenAI has banned accounts linked to ten such campaigns, including those potentially associated with North Korean IT worker schemes, Beijing-backed cyber operatives, and Russian malware distributors. These malicious actors utilized ChatGPT to craft application materials, auto-generate resumes, and even develop multi-stage malware. OpenAI is actively working to combat these abuses and safeguard its platform from being exploited for malicious activities.
References: iHLS News@iHLS
//
OpenAI has revealed that state-linked groups are increasingly experimenting with artificial intelligence for covert online operations, including influence campaigns and cyber support. A newly released report by OpenAI highlights how these groups, originating from countries like China, Russia, and Cambodia, are misusing generative AI technologies, such as ChatGPT, to manipulate content and spread disinformation. The company's latest report outlines examples of AI misuse and abuse, emphasizing a steady evolution in how AI is being integrated into covert digital strategies.
OpenAI has uncovered several international operations where its AI models were misused for cyberattacks, political influence, and even employment scams. For example, Chinese operations have been identified posting comments on geopolitical topics to discredit critics, while others used fake media accounts to collect information on Western targets. In one instance, ChatGPT was used to draft job recruitment messages in multiple languages, promising victims unrealistic payouts for simply liking social media posts, a scheme discovered accidentally by an OpenAI investigator. Furthermore, OpenAI shut down a Russian influence campaign that utilized ChatGPT to produce German-language content ahead of Germany's 2025 federal election. This campaign, dubbed "Operation Helgoland Bite," operated through social media channels, attacking the US and NATO while promoting a right-wing political party. While the detected efforts across these various campaigns were limited in scale, the report underscores the critical need for collective detection efforts and increased vigilance against the weaponization of AI.
References: @siliconangle.com
//
OpenAI is facing increased scrutiny over its data retention policies following a recent court order related to a high-profile copyright lawsuit filed by The New York Times in 2023. The lawsuit alleges that OpenAI and Microsoft Corp. used millions of the Times' articles without permission to train their AI models, including ChatGPT. The paper further alleges that ChatGPT outputted Times content verbatim without attribution. As a result, OpenAI has been ordered to retain all ChatGPT logs, including deleted conversations, indefinitely to ensure that potentially relevant evidence is not destroyed. This move has sparked debate over user privacy and data security.
OpenAI COO Brad Lightcap announced that while users' deleted ChatGPT prompts and responses are typically erased after 30 days, this practice will cease to comply with the court order. The retention policy will affect users of ChatGPT Free, Plus, and Pro, as well as users of OpenAI's application programming interface (API), but not those using the Enterprise or Edu editions or those with a Zero Data Retention agreement. The company asserts that the retained data will be stored separately in a secure system accessible only by a small, audited OpenAI legal and security team, solely to meet legal obligations. The court order was granted within one day of the NYT's request due to concerns that users might delete chats if using ChatGPT to bypass paywalls. OpenAI CEO Sam Altman has voiced strong opposition to the court order, calling it an "inappropriate request" and stating that OpenAI will appeal the decision. He argues that AI interactions should be treated with similar privacy protections as conversations with a lawyer or doctor, suggesting the need for "AI privilege". The company also expressed concerns about its ability to comply with the European Union's General Data Protection Regulation (GDPR), which grants users the right to be forgotten. Altman pledged to fight any demand that compromises user privacy, which he considers a core principle, promising customers that the company will fight to protect their privacy at every step if the plaintiffs continue to push for access. Recommended read:
References: Megan Crouse@eWEEK
//
OpenAI's ChatGPT is expanding its reach with new integrations, allowing users to connect directly to tools like Google Drive and Dropbox. This update allows ChatGPT to access and analyze data from these cloud storage services, enabling users to ask questions and receive summaries with cited sources. The platform is positioning itself as a user interface for data, offering one-click access to files, effectively streamlining the search process for information stored across various documents and spreadsheets. In addition to cloud connectors, ChatGPT has also introduced a "Record" feature for Team accounts that can record meetings, generate summaries, and offer action items.
These new features for ChatGPT come with data privacy considerations. While OpenAI states that files accessed through Google Drive or Dropbox connectors are not used for training its models for ChatGPT Team, Enterprise, and Education accounts, concerns remain about data usage for free users and ChatGPT Plus subscribers. However, OpenAI confirms that audio recorded by the tool is immediately deleted after transcription, and transcripts are subject to workspace retention policies. Moreover, content from Team, Enterprise, and Edu workspaces, including audio recordings and transcripts from ChatGPT record, is excluded from model training by default. Meanwhile, Reddit has filed a lawsuit against Anthropic, alleging the AI company scraped Reddit's data without permission to train its Claude AI models. Reddit accuses Anthropic of accessing its servers over 100,000 times after promising to stop scraping, and claims Anthropic intentionally trained on the personal data of Reddit users without requesting their consent. Reddit has licensing deals with OpenAI and Google, but Anthropic does not have such a deal. Reddit seeks an injunction to force Anthropic to stop using any Reddit data immediately and is also asking the court to prohibit Anthropic from selling or licensing any product built using that data. Despite these controversies, Microsoft CEO Satya Nadella has stated that Microsoft profits from every ChatGPT usage, highlighting the success of its investment in OpenAI.