
Month: February 2025


AI Assistants and the Risk of Inaccurate Information

The media landscape is undergoing a significant transformation, thanks to advancements in Artificial Intelligence (AI). This shift is presenting media companies, including the BBC, with new capabilities and opportunities to enhance their services and content delivery. From adding subtitles to programs on BBC Sounds to translating content into multiple languages on BBC News, AI is already making a positive impact. The BBC is also developing AI tools to assist staff in daily tasks and exploring innovative ways to provide audiences with new experiences, such as personal tutors on Bitesize.

However, with great power comes great responsibility. While AI promises substantial value, it also poses significant challenges for audiences and the UK’s information ecosystem. One key concern is the role of AI assistants, like those from OpenAI, Google, and Microsoft, in distorting journalism.

The Research on AI Assistants and News Accuracy

AI assistants are being adapted to perform various tasks, including drafting emails, analyzing data, and summarizing information. They also provide answers to questions about news and current affairs, often repurposing content from publishers’ websites without permission (Economic Times, 2024). To understand the accuracy of news-related outputs from AI assistants, the BBC conducted research on four prominent AI assistants: OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity.

During the research, the AI assistants were given access to the BBC website and asked questions about the news, with the prompt to use BBC News articles as sources where possible. BBC journalists, experts in the topics, reviewed the AI-generated answers based on criteria such as accuracy, impartiality, and representation of BBC content.

Findings and Implications

The research revealed alarming results. AI assistants produced answers containing significant inaccuracies and distorted BBC content:

  • 51% of AI answers to news questions had significant issues.
  • 19% of AI answers citing BBC content contained factual errors.
  • 13% of quotes from BBC articles were either altered or not present in the cited article.

These inaccuracies matter because they undermine trust in the news, which is essential for a society that relies on a shared understanding of facts. Inaccuracies from AI assistants can be easily amplified when shared on social networks, leading to real harm. News publishers must ensure their content is used accurately and with permission. Internal research shows that when AI assistants cite trusted brands like the BBC, audiences are more likely to trust the answer, even if it is incorrect.
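Quote-level errors of the kind the research found are, in principle, mechanically checkable. Below is a minimal sketch of one such check, verifying whether passages quoted in an AI-generated answer appear verbatim in the cited source article. All function names and the sample texts are illustrative, not from the BBC study:

```python
import re

def extract_quotes(answer: str) -> list[str]:
    """Pull double-quoted passages out of an AI-generated answer."""
    return re.findall(r'"([^"]+)"', answer)

def verify_quotes(answer: str, source_article: str) -> dict[str, bool]:
    """Check that each quoted passage appears verbatim in the cited article.

    A False value means the quote was altered or is not present in the
    source, the failure mode found in 13% of cited quotes.
    """
    return {q: q in source_article for q in extract_quotes(answer)}

article = 'The minister said "we will review the policy next year" on Monday.'
answer = 'The minister said "we will scrap the policy next year".'
print(verify_quotes(answer, article))
# {'we will scrap the policy next year': False}
```

A real pipeline would also need to handle paraphrase, ellipsis, and curly quotation marks, but even an exact-match check like this would flag many of the altered quotes the research describes.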

Image Courtesy of BBC News

Examples of Distortion

Individual errors from AI assistants highlight broader issues. For instance, Google’s Gemini incorrectly stated that “The NHS advises people not to start vaping,” while the NHS actually recommends vaping as a method to quit smoking. Microsoft’s Copilot incorrectly reported details about Gisèle Pelicot’s discovery of crimes against her, and Perplexity misstated the date of Michael Mosley’s death and misquoted a statement from Liam Payne’s family. OpenAI’s ChatGPT claimed in December 2024 that Ismail Haniyeh, who had been assassinated in Iran in July 2024, was still part of Hamas’s leadership.

The Unknown Scale of the Issue

The scope and scale of errors and content distortion by AI assistants remain unknown. AI assistants provide answers on a wide range of questions, and users can receive different answers to the same question. The extent of the issue is unclear to audiences, media companies, regulators, and possibly even AI companies.

Urgent Need for Accurate and Trustworthy AI

AI assistants currently cannot be relied upon to provide accurate news and risk misleading audiences. Unlike professional news outlets that correct errors, AI applications lack mechanisms for error correction. This issue may extend to other areas where reliability and accuracy are crucial, such as health, education, and security.

Next Steps and Call to Action

As the use of AI assistants grows, it is critical to ensure they provide accurate and trustworthy information. Publishers, like the BBC, should control how their content is used, and AI companies should transparently show how assistants process news and the scale of errors.

Proposed steps to avoid distortions and inaccuracies by AI assistants:

  • Content Usage Policies: Develop clear policies on how AI assistants can use and repurpose content from publishers.
  • Improvement in AI Training: Work together to improve AI models by incorporating feedback from different organizations to reduce distortions.
  • Public Disclosures: AI companies should disclose how their systems process and summarize news content.
  • Error Reporting Mechanism: Implement systems for users to report errors and inaccuracies in AI-generated content.
  • Regulations on AI Content Use: Develop regulations that ensure AI assistants use content from news publishers accurately and with permission.
  • Regular Audits: Conduct regular audits of AI systems to check for compliance with accuracy and ethical standards.
  • Improved Algorithms: Invest in advanced algorithms that better understand and accurately summarize content.
  • Real-time Monitoring: Implement real-time monitoring systems to detect and correct inaccuracies in AI responses.
  • Human-in-the-Loop Systems: Integrate human oversight in AI systems to review and approve content summaries, especially for sensitive topics.
  • Feedback Mechanisms: Collect feedback from practitioners, experts, researchers, journalists, and other relevant communities to continuously improve AI systems.
  • Educational Initiatives: Increase AI literacy among audiences to help them critically evaluate AI-generated content.
  • Bias Mitigation: Implement strategies to mitigate bias in AI systems to ensure balanced and fair reporting.
  • Regular Updates: Update AI models regularly with the latest information and improved processing techniques.

  • Independent Evaluations: Conduct independent evaluations of AI systems to identify and rectify potential flaws.

Reference: Representation of BBC News content in AI Assistants

Proactive Network Intrusion Detection. Stop Waiting and Start Hunting

Are you confident your network is secure?

In today’s cyber landscape, waiting for breaches to happen is a recipe for disaster. The average time to detect an attacker lurking within a network is 10 days, according to a 2024 Mandiant Special Report. While dwell time statistics, particularly those found in reports like Mandiant’s M-Trends, offer valuable insights, it is important to consider the context in which they are presented.

In my view, these reports often reflect the experiences of organizations with mature incident response capabilities. These organizations tend to be larger or more frequently targeted by sophisticated attacks, making them more likely to engage firms like Mandiant. This can create a potential bias in the data, as it may not fully represent the experiences of smaller or less mature organizations, which often lack the same resources and expertise.

This blog post explores why continuous network intrusion hunting is crucial and how to implement it effectively.

Why Reactive Security Isn’t Enough

Traditional security measures like Intrusion Detection Systems (IDS), Endpoint Detection and Response (EDR), and Security Information and Event Management (SIEM) are essential, but they’re not foolproof. Sophisticated attackers are skilled at evading these automated defenses, buying themselves precious time within your network. This is where proactive threat hunting comes in. Instead of simply reacting to alerts, threat hunters assume a breach has already occurred and actively search for the signs. This proactive approach significantly reduces dwell time, minimizes damage, and speeds up recovery.

Threat Hunting: A Step-by-Step Approach

Effective threat hunting requires a structured approach. Here is a breakdown of the key steps:

1. Establish a Baseline: Know your “Normal”

Before you can identify anomalies, you need to understand what “normal” looks like. Establishing a baseline of your network traffic, user behavior, and application activity is crucial. This baseline acts as a benchmark against which you can compare current activity to detect deviations. Think of it like knowing the typical routine in your office: a sudden silence in a normally busy area, or an unusual noise, will immediately grab your attention. A network baseline serves the same purpose, especially when coupled with robust asset management and an up-to-date network topology, allowing you to quickly identify unusual or suspicious activity that deviates from the established norm.
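The baseline idea can be illustrated with a simple statistical check. The sketch below (metric names and numbers are made up for illustration) flags a value that deviates sharply from its historical baseline using a z-score, one of the simplest anomaly tests you might run over SIEM metrics:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], current: float, threshold: float = 3.0) -> bool:
    """Flag a metric that deviates more than `threshold` standard
    deviations from its historical baseline (a simple z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Hourly outbound connection counts for a workstation over the past week
baseline = [42, 38, 45, 40, 44, 39, 41, 43]
print(is_anomalous(baseline, 41))    # False: within the normal range
print(is_anomalous(baseline, 900))   # True: possible beaconing or exfiltration
```

Production baselining tools use far richer models (seasonality, per-asset profiles, machine learning), but the principle is the same: deviation from “normal” is what makes an event worth hunting.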

2. Data Collection: Gathering the Clues

Threat hunters rely on Indicators of Compromise (IOCs): pieces of data that suggest malicious activity. To find these clues, you need comprehensive data collection. This involves gathering network flow data, packet captures, logs from various sources (servers, endpoints, network devices), and alerts from your security tools. SIEM solutions play a critical role here, aggregating and correlating data from across your network for efficient analysis. Think of it as assembling a detective’s evidence board.

3. Searching and Analyzing: Connecting the Dots

With data collected and aggregated in your SIEM, the real hunting begins. This involves searching for IOCs, correlating events, and analyzing logs to understand the attacker’s movements. Leveraging analytics and machine learning can significantly enhance this process, helping to identify subtle patterns and anomalies that might otherwise go unnoticed. Frameworks like MITRE ATT&CK and the NSA Technical Cyber Threat Framework (NTCTF) provide valuable guidance on attacker tactics and techniques, helping hunters focus their search. The Pyramid of Pain helps prioritize IOCs, from easily changeable hashes to more impactful Tactics, Techniques, and Procedures (TTPs).
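The Pyramid of Pain mentioned above can be expressed directly as a prioritization. The sketch below ranks indicators so hunters chase the most durable ones first; the level ordering follows David Bianco’s model, while the indicator values themselves are placeholders:

```python
# Pyramid of Pain levels, from trivially changed by an attacker (0)
# to hardest to change (5), per David Bianco's model.
PAIN = {"hash": 0, "ip": 1, "domain": 2, "artifact": 3, "tool": 4, "ttp": 5}

indicators = [
    {"type": "hash", "value": "placeholder-hash"},
    {"type": "ttp", "value": "T1059 command and scripting interpreter"},
    {"type": "domain", "value": "bad.example"},
]

def prioritize(indicators):
    """Order indicators so the most attacker-painful come first."""
    return sorted(indicators, key=lambda i: PAIN[i["type"]], reverse=True)

for ind in prioritize(indicators):
    print(ind["type"], "->", ind["value"])
```

A hunt built around TTPs (mapped to frameworks like MITRE ATT&CK) survives an attacker rotating hashes or IP addresses, which is exactly why the top of the pyramid hurts them most.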

4. Incident Response: Containing and Recovering

When a hunt uncovers malicious activity, it’s time to take action. A well-defined incident response plan is essential for containing the breach, eradicating the threat, and restoring your systems. This involves assessing the scope of the attack, collecting evidence, and implementing your recovery procedures. Think of it as executing a well-rehearsed emergency plan.
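One way to make that “well-rehearsed plan” concrete is to track each incident through explicit phases. This is a minimal sketch under assumed phase names (triage through lessons learned); real programs typically follow a published lifecycle such as NIST SP 800-61:

```python
PHASES = ["triage", "containment", "eradication", "recovery", "lessons_learned"]

def advance(incident: dict) -> dict:
    """Move an incident record to the next response phase (no-op at the end)."""
    i = PHASES.index(incident["phase"])
    if i + 1 < len(PHASES):
        incident["phase"] = PHASES[i + 1]
    return incident

# Hypothetical incident record uncovered by a hunt
incident = {"id": "IR-2025-001", "scope": ["host-042"], "phase": "containment"}
print(advance(incident)["phase"])  # eradication
```

Encoding the phases in code (or in a ticketing workflow) prevents a team from skipping containment in the rush to rebuild systems.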

5. Penetration Testing: A Valuable Ally

While this isn’t strictly a threat hunting practice, penetration testing plays a crucial role in strengthening your defenses. By simulating real-world attacks, penetration testers can identify vulnerabilities and weaknesses in your network, providing valuable insights for your threat hunting team. It’s like a fire drill for your security team.

Challenges and Considerations

Threat hunting isn’t without its challenges. The vast amount of data, the cost of storage, the need for skilled hunters, and the difficulty of inspecting encrypted traffic are just a few of the hurdles. However, the benefits outweigh the challenges.

Be Proactive Not Reactive

In today’s ever-evolving cyber threat landscape, proactive threat hunting is no longer a luxury reserved for large organizations; it’s a necessity. By continuously searching for intruders, you can significantly reduce dwell time, minimize damage, and protect your organization from costly breaches. Don’t wait for the next attack. Start hunting today.

Adopt an Agile Framework for AI Privacy

Artificial intelligence is becoming vital for businesses, offering immense opportunities to improve efficiency, create new products and services, and gain a competitive edge. However, this technological evolution also brings complex security and privacy regulatory challenges, and global data privacy regulations are still evolving. The EU has enacted broad legislation such as the GDPR, the DSA, and the AI Act, but these frameworks remain subject to frequent revision. Regulators elsewhere have either already enforced rules or are still striving to create stable and effective guidelines.

Organizations are therefore required to develop adaptive strategies tailored to these rapidly shifting AI obligations. This can be particularly difficult for the many businesses still in the early stages of AI maturity. It creates a challenging situation: businesses want to move quickly and leverage AI to its full potential, while simultaneously needing to remain compliant with strict regulations and retain customer confidence in their data handling practices. How can organizations balance business and regulatory requirements? The ideal solution is to implement an agile controls framework that enables innovation while protecting the organization and its customers as regulations change.
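What might the core of such an agile controls framework look like? One common design is a controls registry that maps each control to the obligations it satisfies, so that when a regulation changes you can immediately trace which controls are affected. The sketch below illustrates the idea; the regulation names are real, but the control names and mappings are invented for illustration:

```python
# A tiny controls registry: each control maps to the regulatory
# obligations it satisfies (mappings here are illustrative only).
controls = {
    "data-minimization": {"regs": ["GDPR Art. 5"], "status": "implemented"},
    "ai-impact-assessment": {"regs": ["EU AI Act"], "status": "planned"},
    "dsa-transparency-report": {"regs": ["DSA"], "status": "implemented"},
}

def controls_for(regulation: str) -> list[str]:
    """List the controls affected when a given regulation changes."""
    return [name for name, c in controls.items()
            if any(regulation in r for r in c["regs"])]

print(controls_for("GDPR"))  # ['data-minimization']
```

The agility comes from this indirection: a revised article touches the mapping, not every process that implements a control.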

Through the following posts, we’ll share practical guidance on how data privacy officers can implement agile controls frameworks to enable AI innovation without compromising data privacy or compliance.

  • Foundational Data Governance: Building a Privacy-First Foundation. What is foundational data governance, and why is it critical for AI and data privacy? We’ll demystify the core components, including data discovery, classification, and policy development, and explore how to establish a robust governance framework that supports privacy by design. We’ll go beyond the basics and provide a clear understanding of how to implement these principles, ensuring a solid foundation for all your privacy initiatives. (Coming Soon)
  • Proactive Risk Management: Mitigating Privacy Risks in the Age of AI. What are the unique privacy risks posed by AI, and how can you proactively mitigate them? We’ll delve into risk assessment, data ethics, and the importance of Privacy Impact Assessments (PIAs), particularly for AI-driven projects. We’ll explore how to manage the AI data supply chain and implement robust controls to protect sensitive data. We’ll go beyond reactive measures and provide a practical guide to proactive risk management in the AI era. (Coming Soon)
  • Data Subject Rights and Transparency: Empowering Individuals and Building Trust. How can you empower individuals with control over their data and build trust through transparency? We’ll explore data subject rights, including access, rectification, and erasure, and discuss how to implement processes to facilitate these rights. We’ll also cover best practices for communicating transparently with individuals about data usage, particularly in the context of AI. We’ll go beyond compliance and show how to build trust through proactive communication and user-centric privacy practices. (Coming Soon)
  • Continuous Monitoring and Improvement: Ensuring Ongoing Privacy Compliance. Why is continuous monitoring essential for data privacy, and how can you implement an effective program? We’ll explore monitoring and auditing techniques, incident response planning, and the crucial role of remediation. We’ll also discuss horizon scanning and how to stay ahead of evolving regulations and best practices, especially in the rapidly changing landscape of AI. We’ll go beyond basic monitoring and provide a framework for continuous improvement in your privacy program. (Coming Soon)
  • Reporting and Communication: Fostering a Privacy-Conscious Culture. How can you demonstrate accountability and foster a culture of privacy awareness? We’ll cover best practices for reporting privacy risks to the board and other stakeholders, as well as strategies for implementing effective privacy training programs. We’ll go beyond simple reporting and provide guidance on building a privacy-first culture within your organization. (Coming Soon)
