Waqas@hackread.com
//
A massive database containing over 184 million unique login credentials has been discovered online by cybersecurity researcher Jeremiah Fowler. The unprotected database, which amounted to approximately 47.42 gigabytes of data, was found on a misconfigured cloud server and lacked both password protection and encryption. Fowler, from Security Discovery, identified the exposed Elastic database in early May and promptly notified the hosting provider, leading to the database being removed from public access.
The exposed credentials included usernames and passwords for a vast array of online services, including major tech platforms like Apple, Microsoft, Facebook, Google, Instagram, Snapchat, Roblox, Spotify, WordPress, and Yahoo, as well as various email providers. More alarmingly, the data also contained access information for bank accounts, health platforms, and government portals from numerous countries, posing a significant risk to individuals and organizations. The authenticity of the data was confirmed by Fowler, who contacted several individuals whose email addresses were listed in the database, and they verified that the passwords were valid.
The origin and purpose of the database remain unclear, with no identifying information about its owner or collector. The sheer scope and diversity of the login details suggest that the data may have been compiled by cybercriminals using infostealer malware. Fowler described the find as "one of the most dangerous discoveries" he has made in a very long time. The database's IP address pointed to two domain names, one of which was unregistered, further obscuring the identity of the data's owner and its intended use.
Recommended read:
References :
- hackread.com: Database Leak Reveals 184 Million Infostealer-Harvested Emails and Passwords
- PCMag UK security: Security Nightmare: Researcher Finds Trove of 184M Exposed Logins for Google, Apple, More
- WIRED: Mysterious Database of 184 Million Records Exposes Vast Array of Login Credentials
- www.zdnet.com: Massive data breach exposes 184 million passwords for Google, Microsoft, Facebook, and more
- Davey Winder: 184,162,718 Passwords And Logins Leaked — Apple, Facebook, Snapchat
- DataBreaches.Net: Mysterious database of 184 million records exposes vast array of login credentials
- 9to5Mac: Apple logins with plain text passwords found in massive database of 184M records
- www.engadget.com: Someone Found Over 180 Million User Records in an Unprotected Online Database
- borncity.com: Suspected InfoStealer data leak exposes 184 million login data
- databreaches.net: The possibility that data could be inadvertently exposed in a misconfigured or otherwise unsecured database is a longtime privacy nightmare that has been difficult to fully address.
- borncity.com: [German] Security researcher Jeremiah Fowler came across a freely accessible and unprotected database on the Internet. The find was quite something, as a look at the data sets suggests that it was probably data collected by InfoStealer malware. Records containing 184 …
- securityonline.info: 184 Million Leaked Credentials Found in Open Database
- Know Your Adversary: 184 Million Records Database Leak: Microsoft, Apple, Google, Facebook, PayPal Logins Found
- securityonline.info: Security researchers have identified a database containing a staggering 184 million account credentials, prompting yet another urgent reminder to secure online accounts.
@cyberinsider.com
//
WhatsApp has unveiled 'Private Processing', a new AI infrastructure designed to enable AI features while maintaining user privacy. This technology allows users to utilize advanced AI capabilities, such as message summarization and composition tools, by offloading tasks to privacy-preserving cloud servers. The system is designed to ensure that neither Meta nor WhatsApp can access the content of end-to-end encrypted chats during AI processing. This move comes as messaging platforms seek to integrate AI capabilities without compromising secure communications, addressing user concerns about the privacy implications of AI integration within the popular messaging app.
While WhatsApp already incorporates a light blue circle that gives users access to the Meta AI assistant, interactions with that assistant are not shielded from Meta the way end-to-end encrypted WhatsApp chats are. The new feature, dubbed Private Processing, is meant to address these concerns with what the company says is a carefully architected, purpose-built platform for processing data for AI tasks without the information being accessible to Meta, WhatsApp, or any other party. This is achieved by processing messages in Trusted Execution Environments (TEEs), ensuring that the data remains confidential and secure. The system is also designed to allow cloud access without letting Meta or anyone else see end-to-end encrypted chats.
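Meta has not published Private Processing's wire protocol, but the core idea of a TEE-based design can be illustrated with a minimal, hypothetical sketch: before releasing any data, the client checks the enclave's attestation (a signed measurement of the code it runs) against a pinned known-good value. All names and values below are illustrative, not Meta's actual API.

```python
import hashlib
import hmac

# Pinned hash of an audited enclave build (illustrative value).
TRUSTED_MEASUREMENT = hashlib.sha256(b"audited-enclave-build-v1").hexdigest()

def verify_attestation(reported_measurement: str) -> bool:
    """Timing-safe comparison of the enclave's reported code hash
    against the pinned known-good measurement."""
    return hmac.compare_digest(reported_measurement, TRUSTED_MEASUREMENT)

def offload_for_processing(message: str, reported_measurement: str) -> str:
    """Refuse to release any data unless attestation succeeds. In a real
    TEE protocol the message would then be encrypted to a key bound to
    the attested enclave; here a placeholder string stands in for that."""
    if not verify_attestation(reported_measurement):
        raise RuntimeError("attestation failed: refusing to send data")
    return f"sent-to-enclave:{len(message.encode())} bytes"

print(offload_for_processing("summarize this chat", TRUSTED_MEASUREMENT))
```

The design choice this sketch captures is that trust is anchored in the attested code measurement rather than in the operator: if the server runs anything other than the audited build, the client sends nothing.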
The 'Private Processing' feature will be optional, giving users complete control over how and when they choose to use it. According to Meta security engineering director Chris Rohlf, the design wasn't just about managing the expansion of the threat model and making sure the expectations for privacy and security were met; it also involved careful consideration of the user experience and making the feature opt-in. Although initial reviews by researchers of the scheme's integrity have been positive, some note that the move toward AI features could ultimately put WhatsApp on a slippery slope. Private Processing is not yet available to WhatsApp users but will be rolled out gradually in the coming weeks.
Recommended read:
References :
- cyberinsider.com: WhatsApp Unveils ‘Private Processing’ AI System That Offloads Message Data
- Security Risk Advisors: Meta unveils "Private Processing" architecture for WhatsApp AI features that processes messages in Trusted Execution Environments while preserving end-to-end encryption. #PrivacyByDesign #SecureAI
- WIRED: WhatsApp's AI tools will use a new "Private Processing" system designed to allow cloud access without letting Meta or anyone else see end-to-end encrypted chats. But experts still see risks.
- BleepingComputer: WhatsApp has announced the introduction of 'Private Processing,' a new technology that enables users to utilize advanced AI features by offloading tasks to privacy-preserving cloud servers.
- bsky.app: WhatsApp Launches Private Processing to Enable AI Features While Protecting Message Privacy
- The Hacker News: WhatsApp Launches Private Processing to Enable AI Features While Protecting Message Privacy
- www.scworld.com: Novel WhatsApp feature adds AI while maintaining data privacy
Mia Sato@The Verge
//
Meta, the parent company of Instagram, is intensifying its efforts to use AI to identify teenagers using adult accounts on its platform. The initiative aims to ensure that young users are placed into more restrictive "Teen Account" settings, which offer enhanced protections and address child safety concerns. Instagram is actively working to enroll more teens into these accounts, automatically placing suspected teens into Teen Account settings for a safer online experience.
As part of this effort, Instagram will begin sending notifications to parents, providing information on the importance of ensuring their teens provide accurate age information online. The notifications will also include tips for parents to check and confirm their teens' ages together. Meta is using AI to proactively look for accounts that list an adult birthday but appear to belong to teens, and to change settings for users it suspects are minors. This AI-driven age detection system analyzes various signals to determine whether a user is under 18, such as messages from friends containing birthday wishes.
In addition to age detection enhancements, Meta AI is introducing Collaborative Reasoner (Coral), an AI framework designed to evaluate and improve collaborative reasoning skills in large language models (LLMs). Coral reformulates traditional reasoning problems into multi-agent tasks, where two agents must reach consensus through natural conversation. This framework aims to emulate real-world social dynamics, requiring agents to challenge incorrect conclusions and negotiate conflicting viewpoints to arrive at joint decisions, furthering Meta's investment into responsible AI development.
Recommended read:
References :
- The Verge: Meta is ramping up its AI-driven age detection
- www.marktechpost.com: Meta AI Introduces Collaborative Reasoner (Coral): An AI Framework Specifically Designed to Evaluate and Enhance Collaborative Reasoning Skills in LLMs
- about.fb.com: Meta using AI to enroll more teens into teen accounts.
- Entrepreneur: Instagram, Facebook, and Messenger offer the Teen Accounts feature, which provides restrictions around messaging and sensitive content.
- Meta Newsroom: Working With Parents and New Technology to Enroll More Teens Into Teen Accounts
- Flipboard Tech Desk: Meta says it's testing artificial intelligence tech in the U.S. to detect whether a person is a teen — even if they've lied about their birthday to make it seem like they're an adult — and then move them to a teen account.
- Tech News | Euronews RSS: Instagram tests AI to catch underage users as part of teen safety push, Meta says
- www.socialmediatoday.com: Instagram Implements Expanded AI Age Checking, New Parent Prompts
- MyElectricSparks MES: Facebook Has Shifted From Friends to Entertainment at Antitrust Trial
- www.verdict.co.uk: Instagram reportedly testing AI to detect teens lying about age
Ashutosh Singh@The Tech Portal
//
Apple is enhancing its AI capabilities, known as Apple Intelligence, by employing synthetic data and differential privacy to prioritize user privacy. The company aims to improve features like Personal Context and Onscreen Awareness, set to debut in the fall, without collecting or copying personal content from iPhones or Macs. By generating synthetic text and images that mimic user behavior, Apple can gather usage data and refine its AI models while adhering to its strict privacy policies.
Apple's approach involves creating artificial data that closely matches real user input to enhance Apple Intelligence features. This method addresses the limitations of training AI models solely on synthetic data, which may not always accurately reflect actual user interactions. When users opt into Apple's Device Analytics program, the AI models will compare these synthetic messages against a small sample of the user's content stored locally on the device. The device then identifies which of the synthetic messages most closely matches its user sample and sends information about the selected match back to Apple, with no actual user data leaving the device.
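Apple has not published the matching algorithm, but the flow described above can be sketched in a toy form: the device scores each server-supplied synthetic message against its local samples and reports back only the index of the best match. The bag-of-words "embedding" below is purely illustrative; Apple's system would use a trained model.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would use a trained model."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_synthetic_match(synthetic: list[str], local_samples: list[str]) -> int:
    """Score each server-supplied synthetic message against the local
    samples and return ONLY the index of the closest match; the
    samples themselves never leave the device."""
    scores = [max(cosine(embed(s), embed(m)) for m in local_samples)
              for s in synthetic]
    return scores.index(max(scores))

synthetic = ["Want to play tennis tomorrow?",
             "The quarterly report is attached",
             "Happy birthday! Hope it's a great one"]
local = ["happy birthday to you!", "birthday dinner at 7?"]
print(best_synthetic_match(synthetic, local))  # -> 2 (the birthday message)
```

The privacy property rests on what is transmitted: only a single index into server-generated data, never the user's own text.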
To further protect user privacy, Apple utilizes differential privacy techniques. This involves adding randomized data to broader datasets to prevent individual identification. For example, when analyzing Genmoji prompts, Apple polls participating devices to determine the popularity of specific prompt fragments. Each device responds with a noisy signal, ensuring that only widely-used terms become visible to Apple, and no individual response can be traced back to a user or device. Apple plans to extend these methods to other Apple Intelligence features, including Image Playground, Image Wand, Memories Creation, and Writing Tools. This technique allows Apple to improve its models for longer-form text generation tasks without collecting real user content.
Recommended read:
References :
- www.artificialintelligence-news.com: Apple leans on synthetic data to upgrade AI privately
- The Tech Portal: Apple to use synthetic data that matches user data to enhance Apple Intelligence features
- www.it-daily.net: Apple AI stresses privacy with synthetic and anonymised data
- www.macworld.com: How will Apple improve its AI while protecting your privacy?
- www.techradar.com: Apple has a plan for improving Apple Intelligence, but it needs your help – and your data
- machinelearning.apple.com: Understanding Aggregate Trends for Apple Intelligence Using Differential Privacy
- AI News: Apple AI stresses privacy with synthetic and anonymised data
- THE DECODER: Apple will use your emails to improve AI features without ever seeing them
- www.computerworld.com: Apple’s big plan for better AI is you
- Maginative: Apple Unveils Clever Workaround to Improve AI Without Collecting Your Data
- thetechbasic.com: Apple intends to improve its AI products, Siri and Genmoji, by developing better detection capabilities without accessing personal communication content. Apple released a method that functions with artificial data and privacy mechanisms.
- www.verdict.co.uk: Apple to begin on-device data analysis to enhance AI
- 9to5mac.com: Apple details on-device Apple Intelligence training system using user data
- Digital Information World: Apple Silently Shifting Gears on AI by Analyzing User Data Through Recent Snippets of Real World Data
- PCMag Middle East ai: With an upcoming OS update, Apple will compare synthetic AI training data with real customer data to improve Apple Intelligence—but only if you opt in.
- www.zdnet.com: How Apple plans to train its AI on your data without sacrificing your privacy
- www.eweek.com: Apple recently outlined several methods it plans to use to improve Apple Intelligence while maintaining user privacy.
- eWEEK: Apple Reveals How It Plans to Train AI – Without Sacrificing Users’ Privacy
- analyticsindiamag.com: New Training Methods to Save Apple Intelligence?
- Pivot to AI: If you report a bug, Apple reserves the right to train Apple Intelligence on your private logs
@www.thecanadianpressnews.ca
//
Meta Platforms, the parent company of Facebook and Instagram, has announced it will resume using publicly available content from European users to train its artificial intelligence models. This decision comes after a pause last year following privacy concerns raised by activists. Meta plans to use public posts, comments, and interactions with Meta AI from adult users in the European Union to enhance its generative AI models. The company says this data is crucial for developing AI that understands the nuances of European languages, dialects, colloquialisms, humor, and local knowledge.
Meta emphasizes that it will not use private messages or data from users under 18 for AI training. To address privacy concerns, Meta will notify EU users through in-app and email notifications, providing them with a way to opt out of having their data used. These notifications will include a link to a form allowing users to object to the use of their data, and Meta has committed to honoring all previously and newly submitted objection forms. The company states its AI is designed to cater to diverse perspectives and to acknowledge the distinctive attributes of various European communities.
Meta claims its approach aligns with industry practices, noting that companies like Google and OpenAI have already utilized European user data for AI training, and defends its actions as necessary to develop AI services that are relevant and beneficial to European users. Meta also highlights that a panel of EU privacy regulators "affirmed" that its original approach met legal obligations. Groups like NOYB had previously complained and urged regulators to intervene, advocating for an opt-in system where users actively consent to the use of their data for AI training.
Recommended read:
References :
- cyberinsider.com: Meta has announced it will soon begin using public data from adult users in the European Union — including posts, comments, and AI interactions — to train its generative AI models, raising concerns about the boundaries of consent and user awareness across its major platforms.
- discuss.privacyguides.net: Meta to start training its AI models on public content in the EU. If you are an EU resident with an Instagram or Facebook account, you should know that Meta will start training its AI models on your posted content.
- Malwarebytes: Meta users in Europe will have their public posts swept up and ingested for AI training, the company announced this week.
- : Meta says it will start using publicly available content from European users to train its artificial intelligence models, resuming work put on hold last year after activists raised concerns about data privacy.
- bsky.app: Meta announced today that it will soon start training its artificial intelligence models using content shared by European adult users on its Facebook and Instagram social media platforms. https://www.bleepingcomputer.com/news/technology/meta-to-resume-ai-training-on-content-shared-by-europeans/
- BleepingComputer: Meta to resume AI training on content shared by Europeans
- oodaloop.com: Meta says it will resume AI training with public content from European users
- BleepingComputer: Meta announced today that it will soon start training its artificial intelligence models using content shared by European adult users on its Facebook and Instagram social media platforms.
- techxplore.com: Social media company Meta said Monday that it will start using publicly available content from European users to train its artificial intelligence models, resuming work put on hold last year after activists raised concerns about data privacy.
- finance.yahoo.com: Meta says it will resume AI training with public content from European users
- www.theverge.com: Reports on Meta resuming AI training with public content from European users.
- The Hacker News: Meta Resumes E.U. AI Training Using Public User Data After Regulator Approval
- www.socialmediatoday.com: Meta Begins Training its AI Tools on EU User Data
- Meta Newsroom: Today, we’re announcing our plans to train AI at Meta using public content —like public posts and comments— shared by adults on our products in the EU.
- Synced: Meta’s Novel Architectures Spark Debate on the Future of Large Language Models
- securityaffairs.com: Meta will use public EU user data to train its AI models
- about.fb.com: Today, we’re announcing our plans to train AI at Meta using public content —like public posts and comments— shared by adults on our products in the EU. People’s interactions with Meta AI – like questions and queries – will also be used to train and improve our models.
- www.bitdegree.org: Meta Cleared to Train AI with Public Posts in the EU
- MEDIANAMA: Meta to begin using EU users’ data to train AI models
- The Register - Software: Meta to feed Europe's public posts into AI brains again
- www.artificialintelligence-news.com: Meta will train AI models using EU user data
- techxmedia.com: Meta announced it will use public posts and comments from adult EU users to train its AI models, ensuring compliance with EU regulations.
- Digital Information World: Despite all the controversy that arose, tech giant Meta is now preparing to train its AI systems on data belonging to Facebook and Instagram users in the EU.
- TechCrunch: Meta will start training its AI models on public content in the EU
Ryan Daws@AI News
//
OpenAI is set to release its first open-weight language model since 2019, marking a strategic shift for the company. This move comes amidst growing competition in the AI landscape, with rivals like DeepSeek and Meta already offering open-source alternatives. Sam Altman, OpenAI's CEO, announced the upcoming model will feature reasoning capabilities and allow developers to run it on their own hardware, departing from OpenAI's traditional cloud-based approach.
This decision follows OpenAI securing a $40 billion funding round, although reports suggest a potential breakdown of $30 billion from SoftBank and $10 billion from Microsoft and venture capital funds. Despite the fresh funding, OpenAI also faces scrutiny over its training data. A recent study by the AI Disclosures Project suggests that OpenAI's GPT-4o model demonstrates "strong recognition" of copyrighted data, potentially accessed without consent. This raises ethical questions about the sources used to train OpenAI's large language models.
Recommended read:
References :
- Fello AI: OpenAI Secures Historic $40 Billion Funding Round
- AI News | VentureBeat: $40B into the furnace: As OpenAI adds a million users an hour, the race for enterprise AI dominance hits a new gear
- InnovationAus.com: OpenAI has closed a significant $40 billion funding round, led by SoftBank Group, pushing its valuation to $300 billion.
- Maginative: OpenAI Secures Record $40 Billion in Funding, Reaching $300 Billion Valuation
- www.theguardian.com: OpenAI said it had raised $40bn in a funding round that valued the ChatGPT maker at $300bn – the biggest capital-raising session ever for a startup.
- The Verge: OpenAI just raised another $40 billion round led by SoftBank
- SiliconANGLE: OpenAI bags $40B in funding, increasing its post-money valuation to $300B
- techxplore.com: OpenAI says it raised $40 bn at valuation of $300 bn
- THE DECODER: OpenAI nears completion of multi-billion dollar funding round
- Kyle Wiggers: OpenAI raises $40B at $300B post-money valuation
- THE DECODER: Softbank leads OpenAI's $40 billion funding round
- Verdict: OpenAI has secured a $40 billion funding round, marking the biggest capital raising ever for a startup, with a $300 billion valuation. The deal is led by SoftBank and backed by leading investors.
- Crunchbase News: OpenAI secured $40 billion in funding in a record-breaking round led by SoftBank, valuing the company at $300 billion.
- bsky.app: OpenAI has raised $40 billion at a $300 billion valuation. For context, Boeing has a $128 billion market cap, Disney has a $178 billion market cap, and Chevron has a $295 billion market cap.
- Pivot to AI: OpenAI signs its $40 billion deal with SoftBank! Or maybe $30 billion, probably
- TechInformed: OpenAI has raised more than $40 billion in a fundraise with Japanese telco SoftBank and other investors, valuing the ChatGPT company at more than $300bn.
- CyberInsider: OpenSNP to Shut Down and Delete All User-Submitted DNA Data
- www.techrepublic.com: OpenAI Secures $40B in Historic Funding Round — But There’s a $10B Catch
- techstrong.ai: OpenAI has secured up to $40 billion in a record new funding round led by SoftBank Group that would give the artificial intelligence (AI) pioneer a whopping $300 billion valuation as it ramps up AI research, infrastructure and tools.
- SiliconANGLE: OpenAI to launch its first ‘open-weights’ model since 2019
- venturebeat.com: OpenAI to release open-source model as AI economics force strategic shift
- AI News: Study claims OpenAI trains AI models on copyrighted data
- Charlie Fink: OpenAI raises $40 billion, Runway’s $380 million raise and its stunning Gen-4 AI model, Anthropic warns AI may lie, plus vibe filmmaking with DeepMind.
- thezvi.wordpress.com: Greetings from Costa Rica! The image fun continues. We Are Going to Need A Bigger Compute Budget Fun is being had by all, now that OpenAI has dropped its rule about not mimicking existing art styles.