Steve Vandenberg@Microsoft Security Blog
//
Microsoft is making significant strides in AI and data security. The company's commitment to responsible AI is detailed in its 2025 Responsible AI Transparency Report, which describes its efforts to build trustworthy AI technologies. Microsoft is also addressing the critical issue of data breach reporting: Microsoft Data Security Investigations helps organizations meet stringent regulatory requirements such as the GDPR and SEC disclosure rules. Together, these initiatives underscore Microsoft's dedication to ethical and secure AI development and deployment across sectors.
AI's transformative potential is also being explored in higher education, where Microsoft provides solutions for building AI-ready campuses. Institutions are focusing on AI as a source of differentiation and innovation rather than mere automation and cost savings. Strategies include establishing guidelines for responsible AI use, fostering collaborative communities for knowledge sharing, and partnering with technology vendors such as Microsoft, OpenAI, and NVIDIA. Comprehensive training programs are also essential to ensure stakeholders are proficient with AI tools, promoting a culture of experimentation and ethical AI practice.
Separately, Microsoft Research has achieved a breakthrough in computational chemistry by using deep learning to improve the accuracy of density functional theory (DFT). The advance allows more reliable predictions of molecular and material properties, accelerating discovery in fields such as drug development, battery technology, and green fertilizers. By generating large volumes of accurate reference data and applying scalable deep-learning methods, the team has pushed past long-standing limitations of DFT, enabling molecules and materials to be designed through computational simulation rather than laboratory experiment alone.
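To make the recipe concrete, here is a minimal, illustrative sketch of the general idea: train a small neural network on high-accuracy reference data to predict a correction to an approximate DFT quantity. Everything here (the feature count, architecture, and synthetic data) is an assumption for illustration, not Microsoft Research's actual model or pipeline.

```python
import torch
from torch import nn

# Illustrative only: a tiny surrogate that maps hand-crafted density
# features (e.g., local density, gradient magnitude) to a correction of
# the exchange-correlation energy. Shapes and sizes are hypothetical.
class XCCorrection(nn.Module):
    def __init__(self, n_features: int = 8, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),  # predicted energy correction per sample
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

# Synthetic stand-in for "vast amounts of accurate data": features from a
# cheap calculation, labels from a high-accuracy reference method.
features = torch.randn(4096, 8)
labels = torch.randn(4096)

model = XCCorrection()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    opt.step()
```

The point of the sketch is the data flow, not the chemistry: a learned correction trained against expensive reference calculations can be evaluated cheaply at scale, which is what makes simulation-first design of molecules and materials plausible.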
References :
@openssf.org
//
References :
industrialcyber.co, www.scworld.com
Global cybersecurity agencies, including the U.S. Cybersecurity and Infrastructure Security Agency (CISA), the National Security Agency (NSA), the Federal Bureau of Investigation (FBI), and international partners, have jointly released guidance on AI data security best practices. The new Cybersecurity Information Sheet (CSI) addresses the security of the data used to train and operate AI systems, emphasizing that the accuracy, integrity, and trustworthiness of AI outcomes depend directly on the quality and security of the underlying data. The guidance identifies risks to data security and integrity throughout the AI lifecycle, from initial planning and design to post-deployment operation and monitoring.
Building on previous guidance, the new CSI lists ten general best practices organizations can implement to strengthen AI data security. These include sourcing data from trusted, reliable providers and using provenance tracking to verify data changes; maintaining integrity during storage and transport with checksums and cryptographic hashes; and authenticating trusted revisions during training and other post-training processes with quantum-resistant digital signatures. The guidance also recommends running only on trusted infrastructure, such as computing environments built on zero trust architecture; classifying data by sensitivity to define proper access controls; and encrypting data with quantum-resistant methods such as AES-256.
The guidelines further emphasize secure data storage on certified devices compliant with NIST FIPS 140-3, the standard covering security requirements for cryptographic modules, and privacy preservation of sensitive data through methods like data masking. Finally, the agencies advise securely deleting AI training data from repurposed or decommissioned storage devices. Owners and operators of National Security Systems, the Defense Industrial Base, federal agencies, and critical infrastructure sectors are urged to review the publication and implement its recommendations to mitigate risks such as data supply chain poisoning and malicious data tampering.
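Two of those recommendations, integrity checking with cryptographic hashes and AES-256 encryption at rest, are easy to picture in code. The sketch below (Python, using the standard library plus the third-party `cryptography` package) shows one plausible shape for them; the file paths and key handling are placeholders, and a real deployment would add signed manifests, digital signatures, and proper key management.

```python
import hashlib
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# Integrity: record a SHA-256 digest when a training file is ingested,
# then re-verify it before the data is used, to detect tampering in
# storage or transport.
def sha256_digest(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, expected_digest: str) -> bool:
    return sha256_digest(path) == expected_digest

# Confidentiality: AES-256-GCM, matching the guidance's AES-256
# recommendation. Key storage, rotation, and HSM use are out of scope
# for this sketch.
def encrypt_file(path: str, key: bytes) -> bytes:
    nonce = os.urandom(12)  # must be unique per encryption with the same key
    with open(path, "rb") as f:
        plaintext = f.read()
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

key = AESGCM.generate_key(bit_length=256)  # 256-bit symmetric key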
References :
Waqas@hackread.com
//
A massive database containing over 184 million unique login credentials has been discovered online by cybersecurity researcher Jeremiah Fowler. The unprotected database, which amounted to approximately 47.42 gigabytes of data, was found on a misconfigured cloud server and lacked both password protection and encryption. Fowler, from Security Discovery, identified the exposed Elastic database in early May and promptly notified the hosting provider, leading to the database being removed from public access.
The exposed credentials included usernames and passwords for a vast array of online services, including major platforms like Apple, Microsoft, Facebook, Google, Instagram, Snapchat, Roblox, Spotify, WordPress, and Yahoo, as well as various email providers. More alarmingly, the data also contained access information for bank accounts, health platforms, and government portals from numerous countries, posing a significant risk to individuals and organizations. Fowler confirmed the data's authenticity by contacting several people whose email addresses appeared in the database; they verified that the listed passwords were valid.
The origin and purpose of the database remain unclear, with no identifying information about its owner or collector. The sheer scope and diversity of the login details suggest the data may have been compiled by cybercriminals using infostealer malware. Fowler described the find as "one of the most dangerous discoveries" he has seen in a very long time. The database's IP address pointed to two domain names, one of them unregistered, further obscuring the owner's identity and the data's intended use.
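For defenders, the practical lesson is that an unprotected Elasticsearch instance will answer anonymous HTTP requests. The hypothetical check below (Python with `requests`; the host is a placeholder, and it should only be run against infrastructure you are authorized to test) shows how a misconfiguration like the one Fowler found can be detected from the outside.

```python
import requests

# Placeholder host: substitute an endpoint you own or are authorized to test.
HOST = "http://elasticsearch.internal.example:9200"

try:
    # _cluster/health is a standard Elasticsearch API; a hardened cluster
    # should refuse this request without credentials.
    r = requests.get(f"{HOST}/_cluster/health", timeout=5)
    if r.status_code == 200:
        print("Exposed: cluster health readable without credentials")
        print(r.json())
    elif r.status_code == 401:
        print("Authentication required (expected for a hardened cluster)")
    else:
        print(f"Unexpected status: {r.status_code}")
except requests.RequestException as exc:
    print(f"No reachable Elasticsearch endpoint: {exc}")
```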
References :
@www.salesforce.com
//
References :
Salesforce
Salesforce is placing significant emphasis on data security as it rolls out its AI agent implementations. According to recent research, a strong data foundation and robust governance capabilities are critical for businesses to securely implement agentic AI. While IT security leaders are largely optimistic about the potential benefits of AI agents, a majority acknowledge that there are significant readiness gaps in deploying the necessary security safeguards. This highlights the importance of prioritizing security measures as AI adoption accelerates across various industries.
As AI becomes more prevalent and cyber threats continue to evolve, many IT security leaders recognize the need to transform their security practices: nearly 8 in 10 acknowledge that their security protocols require upgrades to address the challenges AI poses. This transformation includes building AI fluency within organizations, redesigning workflows to incorporate AI agents effectively, and ensuring adequate human supervision to mitigate risk and maintain ethical standards. Notably, Salesforce's own data indicates unanimous optimism among IT leaders regarding the potential of AI agents.
Salesforce is also actively developing and acquiring technology to enhance its Agentforce platform and help developers build secure, effective AI agents. The acquisition of Convergence, an AI automation startup, will bring advanced AI agent design and automation capabilities into Agentforce, and the launch of the Salesforce Developer Edition, which includes access to Agentforce and Data Cloud, gives developers the tools to create, customize, and deploy autonomous AI agents with appropriate guardrails. By emphasizing secure data handling and governance, Salesforce aims to let businesses capture the benefits of AI agents while minimizing security risk.
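Salesforce has not published Agentforce's guardrail internals here, but the general idea of guardrails with human supervision can be sketched generically: a policy layer classifies the data an agent wants to touch and gates sensitive actions on human approval. All names, sensitivity labels, and policies below are illustrative assumptions, not Salesforce's API.

```python
from dataclasses import dataclass

# Hypothetical data classification map; a real system would pull this
# from a governance catalog rather than hard-coding it.
SENSITIVITY = {"contacts": "confidential", "invoices": "restricted", "faq": "public"}

@dataclass
class ToolCall:
    action: str    # e.g. "read", "export", "delete"
    resource: str  # logical dataset the agent wants to touch

def allowed(call: ToolCall, human_approved: bool = False) -> bool:
    level = SENSITIVITY.get(call.resource, "restricted")  # default-deny posture
    if level == "public":
        return True
    if level == "confidential":
        return call.action == "read"  # no bulk export or destructive actions
    return human_approved             # restricted data needs a human in the loop

print(allowed(ToolCall("read", "faq")))         # True: public data
print(allowed(ToolCall("export", "contacts")))  # False: confidential, not a read
print(allowed(ToolCall("delete", "invoices")))  # False without human approval
```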