@openssf.org
Global cybersecurity agencies, including the U.S. Cybersecurity and Infrastructure Security Agency (CISA), the National Security Agency (NSA), the Federal Bureau of Investigation (FBI), and international partners, have jointly released guidance on AI data security best practices. The new Cybersecurity Information Sheet (CSI) underscores that the accuracy, integrity, and trustworthiness of AI outcomes depend directly on the quality and security of the data used to train and operate AI systems. The guidance identifies data security and integrity risks throughout the AI lifecycle, from initial planning and design to post-deployment operation and monitoring.
Building on previous guidance, the new CSI provides ten general best practices organizations can implement to strengthen AI data security. These include sourcing data from trusted, reliable providers and using provenance tracking to verify data changes; applying checksums and cryptographic hashes to maintain data integrity during storage and transport; and employing quantum-resistant digital signatures to authenticate and verify trusted revisions during training and other post-training processes. The guidance also recommends running workloads only on trusted infrastructure, such as computing environments built on zero trust architecture; classifying data by sensitivity to define appropriate access controls; and encrypting data with quantum-resistant methods like AES-256. The guidelines further emphasize secure data storage on devices certified under NIST FIPS 140-3, which specifies security requirements for cryptographic modules, and privacy preservation of sensitive data through techniques such as data masking. The agencies additionally advise securely deleting AI training data from repurposed or decommissioned storage devices. Owners and operators of National Security Systems, the Defense Industrial Base, federal agencies, and critical infrastructure sectors are urged to review the publication and implement its recommendations to mitigate risks such as data supply chain poisoning and malicious data tampering.
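The checksum practice above is simple to apply in code. The sketch below is a minimal illustration of verifying a dataset file's SHA-256 digest after storage or transport (the file path and digest values are hypothetical; this is not an implementation from the CSI itself):

```python
import hashlib
import hmac

def sha256_digest(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks
    so arbitrarily large training datasets can be hashed."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_integrity(path: str, expected_hex: str) -> bool:
    """Recompute the digest after storage/transport and compare it in
    constant time against the digest published by the data source."""
    return hmac.compare_digest(sha256_digest(path), expected_hex)
```

A data provider would publish `expected_hex` over a separate, authenticated channel; a consumer rejects any file whose recomputed digest differs, since even a single flipped byte changes the hash.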
Waqas@hackread.com
A massive database containing over 184 million unique login credentials has been discovered online by cybersecurity researcher Jeremiah Fowler. The unprotected database, approximately 47.42 gigabytes in size, sat on a misconfigured cloud server with neither password protection nor encryption. Fowler, of Security Discovery, identified the exposed Elastic database in early May and promptly notified the hosting provider, which removed the database from public access.
The exposed credentials included usernames and passwords for a vast array of online services, including major tech platforms such as Apple, Microsoft, Facebook, Google, Instagram, Snapchat, Roblox, Spotify, WordPress, and Yahoo, as well as various email providers. More alarmingly, the data also contained access information for bank accounts, health platforms, and government portals in numerous countries, posing a significant risk to individuals and organizations. Fowler confirmed the data's authenticity by contacting several individuals whose email addresses appeared in the database; they verified that the listed passwords were valid. The origin and purpose of the database remain unclear, with no identifying information about its owner or collector, though the sheer scope and diversity of the login details suggest the data was compiled by cybercriminals using infostealer malware. Fowler described the find as "one of the most dangerous discoveries" he has made in a very long time. The database's IP address pointed to two domain names, one of which was unregistered, further obscuring the owner's identity and the data's intended use.
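Misconfigurations like this are detectable from the outside with a trivial probe. The sketch below (a hypothetical self-audit check, not Fowler's methodology) tests whether an Elasticsearch node serves its root cluster-metadata document to an unauthenticated request; a properly secured node answers 401/403 or is unreachable from the public internet:

```python
import json
import urllib.error
import urllib.request

def is_publicly_readable(base_url: str, timeout: float = 5.0) -> bool:
    """Return True if an unauthenticated GET to the node's root URL
    succeeds and returns Elasticsearch cluster metadata.

    An open Elastic node answers "/" with a JSON document that includes
    its cluster_name; a secured one raises HTTPError (401/403), and an
    unreachable one raises URLError. Both error cases return False.
    """
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            body = json.load(resp)
            return resp.status == 200 and "cluster_name" in body
    except (urllib.error.URLError, ValueError):
        return False
```

Operators can run such a check against their own endpoints from an external network to catch accidental exposure before a researcher, or an attacker, finds it first.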
@www.salesforce.com
Salesforce is placing significant emphasis on data security as it rolls out its AI agent implementations. According to recent research, a strong data foundation and robust governance capabilities are critical for businesses to securely implement agentic AI. While IT security leaders are largely optimistic about the potential benefits of AI agents, a majority acknowledge that there are significant readiness gaps in deploying the necessary security safeguards. This highlights the importance of prioritizing security measures as AI adoption accelerates across various industries.
As AI becomes more prevalent and cyber threats continue to evolve, many IT security leaders recognize the need to transform their security practices: nearly 8 in 10 acknowledge that their security protocols require upgrades to address the challenges posed by AI. This transformation includes building AI fluency within organizations, redesigning workflows to incorporate AI agents effectively, and ensuring adequate human supervision to mitigate risks and maintain ethical standards. Salesforce's own data, notably, indicates unanimous optimism among IT leaders about the potential of AI agents. The company is actively developing and acquiring technologies to enhance its Agentforce platform and help developers build secure, effective AI agents. Its acquisition of Convergence, an AI automation startup, will integrate advanced AI agent design and automation capabilities into Agentforce, and the launch of the Salesforce Developer Edition, which includes access to Agentforce and Data Cloud, gives developers the tools to create, customize, and deploy autonomous AI agents with appropriate guardrails. By emphasizing secure data handling and governance, Salesforce aims to let businesses capture the benefits of AI agents while minimizing potential security risks.