News from the AI & ML world

DeeperML - #privacy

Pierluigi Paganini@securityaffairs.com //
OpenAI is facing scrutiny over its ChatGPT user logs due to a recent court order mandating the indefinite retention of all chat data, including deleted conversations. The directive stems from a lawsuit filed by The New York Times and other news organizations, which allege that ChatGPT has been used to reproduce copyrighted news articles. The plaintiffs believe that even deleted chats could contain evidence of infringing outputs. OpenAI, while complying with the order, is appealing the decision, citing user privacy concerns and potential conflicts with data protection regulations such as the EU's GDPR. The company emphasizes that the retention policy does not affect ChatGPT Enterprise or ChatGPT Edu customers, nor users with a Zero Data Retention agreement.

Sam Altman, CEO of OpenAI, has advocated for what he terms "AI privilege," suggesting that interactions with AI should be afforded the same privacy protections as communications with professionals like lawyers or doctors. The stance comes as OpenAI faces criticism for not disclosing to users that deleted and temporary chat logs had been preserved since mid-May in response to the court order. Altman argues that retaining user chats compromises their privacy, which OpenAI considers a core principle, and he fears the legal precedent could lead to a future where all AI conversations are recorded and accessible, potentially chilling free expression and innovation.

In addition to privacy concerns, OpenAI has identified and addressed malicious campaigns leveraging ChatGPT for nefarious purposes. These activities include the creation of fake IT worker resumes, the dissemination of misinformation, and assistance in cyber operations. OpenAI has banned accounts linked to ten such campaigns, including those potentially associated with North Korean IT worker schemes, Beijing-backed cyber operatives, and Russian malware distributors. These malicious actors utilized ChatGPT to craft application materials, auto-generate resumes, and even develop multi-stage malware. OpenAI is actively working to combat these abuses and safeguard its platform from being exploited for malicious activities.



References :
  • chatgptiseatingtheworld.com: After filing an objection with Judge Stein, OpenAI took to the court of public opinion to seek the reversal of Magistrate Judge Wang’s broad order requiring OpenAI to preserve all ChatGPT logs of people’s chats.
  • Reclaim The Net: Private prompts once thought ephemeral could now live forever, thanks to demands from The New York Times.
  • Digital Information World: If you’ve ever used ChatGPT’s temporary chat feature thinking your conversation would vanish after closing the window — well, it turns out that wasn’t exactly the case.
  • iHLS: AI Tools Exploited in Covert Influence and Cyber Ops, OpenAI Warns
  • Schneier on Security: Report on the Malicious Uses of AI
  • The Register - Security: ChatGPT used for evil: Fake IT worker resumes, misinfo, and cyber-op assist
  • Jon Greig: Russians are using ChatGPT to incrementally improve malware, Chinese groups are using it to mass-create fake social media comments, and North Koreans are using it to refine fake resumes; this likely captures only a fraction of nation-state use.
  • Latest news: How global threat actors are weaponizing AI now, according to OpenAI
  • The Hacker News: OpenAI has revealed that it banned a set of ChatGPT accounts that were likely operated by Russian-speaking threat actors and two Chinese nation-state hacking groups to assist with malware development, social media automation, and research about U.S. satellite communications technologies, among other things.
  • securityaffairs.com: OpenAI bans ChatGPT accounts linked to Russian, Chinese cyber ops
  • therecord.media: Russians are using ChatGPT to incrementally improve malware, Chinese groups are using it to mass-create fake social media comments, and North Koreans are using it to refine fake resumes; this likely captures only a fraction of nation-state use.
  • siliconangle.com: OpenAI to retain deleted ChatGPT conversations following court order
  • eWEEK: ‘An Inappropriate Request’: OpenAI Appeals ChatGPT Data Retention Court Order in NYT Case
  • gbhackers.com: OpenAI Shuts Down ChatGPT Accounts Linked to Russian, Iranian & Chinese Cyber
  • Policy - Ars Technica: OpenAI is retaining all ChatGPT logs "indefinitely." Here's who's affected.
  • AI News | VentureBeat: Sam Altman calls for ‘AI privilege’ as OpenAI clarifies court order to retain temporary and deleted ChatGPT sessions
  • www.techradar.com: Sam Altman says AI chats should be as private as ‘talking to a lawyer or a doctor’, but OpenAI could soon be forced to keep your ChatGPT conversations forever
  • aithority.com: New Relic Report Shows OpenAI’s ChatGPT Dominates Among AI Developers
  • the-decoder.com: ChatGPT scams range from silly money-making ploys to calculated political meddling
  • hackread.com: OpenAI Shuts Down 10 Malicious AI Ops Linked to China, Russia, N. Korea
  • Tech Monitor: OpenAI highlights exploitative use of ChatGPT by Chinese entities
Classification:
Megan Crouse@eWEEK //
OpenAI's ChatGPT is expanding its reach with new integrations, allowing users to connect directly to tools like Google Drive and Dropbox. This update allows ChatGPT to access and analyze data from these cloud storage services, enabling users to ask questions and receive summaries with cited sources. The platform is positioning itself as a user interface for data, offering one-click access to files, effectively streamlining the search process for information stored across various documents and spreadsheets. In addition to cloud connectors, ChatGPT has also introduced a "Record" feature for Team accounts that can record meetings, generate summaries, and offer action items.
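To make the connector workflow concrete, here is a minimal sketch of the pattern it automates: pulling a document's text into the model's context and asking a question about it. This is not the Drive or Dropbox connector itself (that is a ChatGPT UI feature); it uses the public OpenAI Python SDK, and the model name and file path are placeholders.

```python
# Sketch of the pattern the connectors automate: put a document's text in
# the model's context, ask a question, and get an answer grounded in it.
# Uses the public OpenAI Python SDK, not the Drive/Dropbox connectors.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Placeholder file; the connectors fetch this content from cloud storage.
with open("quarterly_report.txt", "r", encoding="utf-8") as f:
    document = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whichever you use
    messages=[
        {"role": "system",
         "content": "Answer using only the provided document and point to the passages you relied on."},
        {"role": "user",
         "content": f"Document:\n{document}\n\nQuestion: Summarize the key figures in this report."},
    ],
)
print(response.choices[0].message.content)
```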

These new features come with data privacy considerations. OpenAI states that files accessed through the Google Drive or Dropbox connectors are not used to train its models for ChatGPT Team, Enterprise, and Edu accounts, but concerns remain about how data from free users and ChatGPT Plus subscribers is handled. OpenAI also confirms that audio captured by the Record feature is deleted immediately after transcription, that transcripts are subject to workspace retention policies, and that content from Team, Enterprise, and Edu workspaces, including those recordings and transcripts, is excluded from model training by default.

Meanwhile, Reddit has filed a lawsuit against Anthropic, alleging the AI company scraped Reddit's data without permission to train its Claude AI models. Reddit accuses Anthropic of accessing its servers over 100,000 times after promising to stop scraping, and claims Anthropic intentionally trained on the personal data of Reddit users without requesting their consent. Reddit has licensing deals with OpenAI and Google, but Anthropic has no such deal. Reddit is seeking an injunction to force Anthropic to stop using any Reddit data immediately and is also asking the court to prohibit Anthropic from selling or licensing any product built with that data. Separately, Microsoft CEO Satya Nadella has stated that Microsoft profits from every use of ChatGPT, highlighting the success of its investment in OpenAI.



References :
  • shellypalmer.com: OpenAI's latest update to ChatGPT lets it read your files in Google Drive and Dropbox. Just like that, your cloud storage is now part of your prompt.
  • www.artificialintelligence-news.com: Reddit sues Anthropic over AI data scraping
  • Tech News | Euronews RSS: Social media company Reddit sued artificial intelligence (AI) company Anthropic for allegedly scraping user comments to train its chatbot Claude.
  • www.itpro.com: Latest ChatGPT update lets users record meetings and connect to tools like Dropbox and Google Drive
  • Maginative: Reddit Sues Anthropic for Allegedly Scraping Its Data Without Permission
  • www.windowscentral.com: Satya Nadella says Microsoft makes money every time you use ChatGPT: "Every day that ChatGPT succeeds is a fantastic day"
Classification:
  • HashTags: #ChatGPT #DataScraping #AILawsuit
  • Company: OpenAI
  • Target: Reddit, Google Drive, Dropbox
  • Product: ChatGPT
  • Feature: Data Scraping
  • Type: AI
  • Severity: Medium
Waqas@hackread.com //
A massive database containing over 184 million unique login credentials has been discovered online by cybersecurity researcher Jeremiah Fowler. The unprotected database, which amounted to approximately 47.42 gigabytes of data, was found on a misconfigured cloud server and lacked both password protection and encryption. Fowler, from Security Discovery, identified the exposed Elastic database in early May and promptly notified the hosting provider, leading to the database being removed from public access.
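As an illustration of what "no password protection" means in practice, here is a minimal sketch, intended only for infrastructure you own or are authorized to test, that checks whether an Elasticsearch endpoint answers unauthenticated requests; the host address below is a placeholder.

```python
# Illustrative check for the misconfiguration described above: an
# Elasticsearch instance that answers HTTP requests with no credentials.
# Run only against servers you own or are authorized to test.
import requests

HOST = "http://203.0.113.10:9200"  # placeholder address (TEST-NET range)

try:
    resp = requests.get(f"{HOST}/", timeout=5)
except requests.RequestException as exc:
    print(f"Endpoint unreachable: {exc}")
else:
    if resp.status_code == 200 and "cluster_name" in resp.text:
        # An open cluster returns its metadata to anyone; whatever it
        # indexes (here, harvested credentials) is readable the same way.
        print("Responds without credentials:", resp.json().get("cluster_name"))
    elif resp.status_code == 401:
        print("Authentication required; security is enabled.")
    else:
        print("Unexpected response:", resp.status_code)
```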

The exposed credentials included usernames and passwords for a vast array of online services, including major tech platforms like Apple, Microsoft, Facebook, Google, Instagram, Snapchat, Roblox, Spotify, WordPress, and Yahoo, as well as various email providers. More alarmingly, the data also contained access information for bank accounts, health platforms, and government portals in numerous countries, posing a significant risk to individuals and organizations. Fowler confirmed the data's authenticity by contacting several people whose email addresses appeared in the database; they verified that the listed passwords were valid.

The origin and purpose of the database remain unclear, with no identifying information about its owner or collector. The sheer scope and diversity of the login details suggest the data may have been compiled by cybercriminals using infostealer malware. Fowler described the find as "one of the most dangerous discoveries" he has encountered in a very long time. The database's IP address pointed to two domain names, one of them unregistered, further obscuring the owner's identity and the data's intended use.



References :
  • hackread.com: Database Leak Reveals 184 Million Infostealer-Harvested Emails and Passwords
  • PCMag UK security: Security Nightmare: Researcher Finds Trove of 184M Exposed Logins for Google, Apple, More
  • WIRED: Mysterious Database of 184 Million Records Exposes Vast Array of Login Credentials
  • Latest news: Massive data breach exposes 184 million passwords for Google, Microsoft, Facebook, and more
  • Davey Winder: 184,162,718 Passwords And Logins Leaked — Apple, Facebook, Snapchat
  • DataBreaches.Net: Mysterious database of 184 million records exposes vast array of login credentials
  • 9to5Mac: Apple logins with plain text passwords found in massive database of 184M records
  • www.engadget.com: Someone Found Over 180 Million User Records in an Unprotected Online Database
  • borncity.com: Suspected InfoStealer data leak exposes 184 million login data
  • databreaches.net: The possibility that data could be inadvertently exposed in a misconfigured or otherwise unsecured database is a longtime privacy nightmare that has been difficult to fully address.
  • borncity.com: Security researcher Jeremiah Fowler came across a freely accessible, unprotected database on the internet; the records suggest the data was probably collected by InfoStealer malware. Records containing 184 …
  • securityonline.info: 184 Million Leaked Credentials Found in Open Database
  • Know Your Adversary: 184 Million Records Database Leak: Microsoft, Apple, Google, Facebook, PayPal Logins Found
  • securityonline.info: Security researchers have identified a database containing a staggering 184 million account credentials.
Classification: