The Dark Side of AI Adoption: ChatGPT Data Leaks & Malicious npm Packages | The Ciphered Reality Podcast
Artificial Intelligence is transforming businesses, but it's also quietly expanding the attack surface.
In this episode of the RITC Cybersecurity Podcast, we break down two real-world AI-related security incidents that every business leader, developer, and IT decision-maker needs to understand:
🔴 Two malicious Chrome extensions caught stealing ChatGPT and DeepSeek conversations from over 900,000 users
🔴 A fake WhatsApp API package on npm harvesting messages, contacts, and login tokens
This conversation isn’t about fear-mongering over AI. It’s about poor security hygiene, blind trust in tools, and the growing risks around AI-integrated workflows.
What you’ll learn in this episode:
- How browser extensions become silent data exfiltration tools
- Why AI chats can expose sensitive business and customer data
- How npm and open-source supply chain attacks actually work
- The real security risks of AI adoption in SMBs and enterprises
- Practical steps organizations should take now to reduce AI-related risk
If your organization uses ChatGPT, AI copilots, browser extensions, APIs, or open-source packages, this episode is not optional listening.
📅 Episode Date: 09 January 2026
🎙 Presented by: RITC Cybersecurity | Architecture • Operations • GRC • Security Frameworks
📩 info@ritcsecurity.com 🌐 www.ritcsecurity.com ▶️ youtube.com/@ritc_cybersecurity 📸 instagram.com/ritc.cybersecurity
🔐 Who should watch:
- SMB Owners & Founders
- CISOs, CIOs & IT Leaders
- Developers & Security Engineers
- Anyone deploying AI tools without a formal security review
AI doesn’t break security. Assumptions do. #podcast #cipheredreality #ritccybersecurity #cybersecurity #cyberawareness