• S7, E264 - Season Seven, New Threats
    Jan 21 2026

    Send us a text

    We kick off season seven with a tour of the year’s early privacy & security news: neighborhood watchtowers from Ring, a rival-led hack of Breach Forums, a massive stitched-together leak in France, a heavy Microsoft patch drop, AI agents on the rise, and new state privacy laws. We share practical steps: self-host cameras, freeze your credit, harden identity portals, and keep humans in the loop when AI handles sensitive data.

    • CES unveils Ring’s neighborhood watchtower and its surveillance tradeoffs
    • Why self‑hosted DVR systems beat cloud video for privacy
    • Breach Forums doxxed by rivals and lessons in OPSEC
    • France’s 45 million record “combo” leak and re‑identification risks
    • Credit freezes, hard vs soft inquiries, and portal security
    • Microsoft’s 114 patches and sane patch management
    • AI agents escalating breach risk and human‑in‑the‑loop controls (sketch after this list)
    • New privacy laws in Indiana, Kentucky, and Rhode Island and actionable rights
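
    For the human-in-the-loop point above, here is a minimal sketch of what a review gate can look like in practice. It is an illustration only, not something walked through on the show; the AgentAction class, the sensitive-field list, and the approval prompt are all hypothetical.

    ```python
    # Minimal human-in-the-loop gate for an AI agent (illustrative sketch only;
    # the class, field list, and classification rule are hypothetical, not from the episode).
    from dataclasses import dataclass, field

    SENSITIVE_FIELDS = {"ssn", "dob", "medical_record", "credit_card"}

    @dataclass
    class AgentAction:
        name: str                          # e.g. "export_customer_report"
        payload: dict = field(default_factory=dict)

    def is_sensitive(action: AgentAction) -> bool:
        """Flag any action whose payload touches a sensitive field."""
        return bool(SENSITIVE_FIELDS & set(action.payload))

    def run_with_human_in_the_loop(action: AgentAction) -> str:
        if is_sensitive(action):
            answer = input(f"Agent wants to run '{action.name}' on sensitive data. Approve? [y/N] ")
            if answer.strip().lower() != "y":
                return "blocked: human reviewer declined"
        # Non-sensitive (or approved) actions proceed to the real handler here.
        return f"executed: {action.name}"

    if __name__ == "__main__":
        print(run_with_human_in_the_loop(
            AgentAction("export_customer_report", {"ssn": "***", "email": "a@b.c"})))
    ```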

    Please go to theproblemlounge.com and sign up for the newsletter.
    If you have guest ideas, topics, or anything else, please reach out to us!


    Support the show

    25 mins
  • S6, E263 - Year-End Reality Check On Privacy And AI
    Jan 5 2026

    Send us a text

    We look back at 2025’s privacy and security reality: useful AI where data was ready, repeating breach patterns, and infrastructure limits that slowed the hype. We call out backdoors, weak 2FA, and the shift toward passkeys, decentralization, and owning more of our stack.

    • AI succeeds when data, process and governance are mature
    • Power, chips and cost constraints limit AI growth
    • Salt Typhoon shows backdoor risk and patching failures
    • SMS 2FA remains weak while passkeys gain ground
    • Data hoarding expands breach blast radius
    • Streaming consolidation drives algorithm control and piracy’s return
    • Decentralization and self‑hosting rebuild trust with users
    • 2026 outlook: AI contraction, ML pragmatism, fewer but stronger tools

    Check out our website: theproblemlounge.com
    If you have episode guest ideas or topics you want us to talk about, please send them our way.
    Go check out our YouTube channel, Privacy Please Podcast.

    In 2026, would you like to see us do live streams?


    Support the show

    47 mins
  • S6, E262 - WARNER BROS CRISIS: Class Action Lawsuit & The $108B Hostile Takeover (Dec 15 Update)
    Dec 15 2025

    Send us a text

    It is Monday, December 15th, and the battle for Hollywood has officially gone nuclear.

    What started as an $82 billion acquisition by Netflix has morphed into a $108 billion hostile takeover battle with Paramount Skydance. As of this morning, stocks are volatile, the government has frozen the deal, and a massive Class Action Lawsuit has just been filed to burn it all down.

    In this Special Report from Privacy Please, we break down the chaos of the last 72 hours. We uncover the "National Security" weapon Netflix is using to kill the deal, the foreign money backing Paramount, and the leaked memos that reveal why executives are selling you out.

    No matter who wins—the Algorithm or the Oligarchs—your privacy is the casualty.

    Time Stamps / Key Moments:

    0:00 - Monday Morning Chaos: Stocks Halted & The $108B Counter-Bid

    2:15 - Future A vs. Future B: The Algorithm Era vs. The Oligarch Era

    5:30 - BREAKING: The "National Security" Argument & Class Action Lawsuit

    8:45 - Leaked Memos: The "Golden Parachute" Betrayal

    11:20 - The Fallout: Why Streaming Prices Will Hit $35/Month

    What you'll uncover in this deep dive:

    The Weekend of Chaos: A complete timeline of how Netflix lost control of the deal over the weekend.

    The "Foreign Money" Threat: Why Paramount's backing by sovereign wealth funds has regulators panicked.

    Netflix's Hypocrisy: How the surveillance giant is weaponizing "privacy" to stop its competitors.

    The Consumer Cost: Why the era of cheap streaming is officially dead.

    Join the Community: We are building a community dedicated to navigating these complex digital issues.

    Website & Newsletter: https://www.theproblemlounge.com

    Support the Show: http://buzzsprout.com/622234/support

    Don't forget to Like, Comment, and Subscribe! Your support helps us uncover the stories Big Tech wants to hide.

    #WarnerBros #Netflix #Paramount #StreamingWars #PrivacyPlease #Antitrust #FTC #DataPrivacy #Hollywood #BreakingNews #ClassAction #StockMarket

    Support the show

    10 mins
  • S6, E261 - The Red Line: Salt Typhoon, Temu Spyware & The 'Side Door' Attack
    Dec 4 2025

    Send us a text

    A week where the lawful intercept backdoor became the front door, a supply chain hop hit 200+ companies, a bargain app faced a malware lawsuit, and a university breach turned into a donor-targeting roadmap. We share simple moves to lower risk fast and set guardrails that actually hold.

    • Salt Typhoon abusing CALEA at major US telecoms
    • Negligence, unpatched routers and weak passwords
    • Why SMS travels in the clear and how to switch to Signal
    • Kill SMS 2FA and use authenticators or YubiKey
    • Gainsight-to-Salesforce island hopping at scale
    • Audit connected apps and revoke stale API keys (see the sketch after this list)
    • Arizona AG lawsuit calling Temu malware
    • Shop via browser sandbox and use masked payments
    • UPenn donor data leak and Oracle exploit
    • Whaling protections with voice verification and data scrubbing
    • Practical recap: trust nothing, verify everything
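
    To act on the connected-app bullet above, here is a minimal sketch of a staleness audit, assuming you can export your keys or connected apps to a CSV with name and last_used columns. The file name, column names, and 90-day cutoff are assumptions, not anything prescribed in the episode; revocation itself still happens in each vendor's admin console.

    ```python
    # Flag API keys / connected apps that have not been used recently (sketch only;
    # the CSV layout and the 90-day cutoff are assumptions, not from the episode).
    import csv
    from datetime import datetime, timedelta, timezone

    STALE_AFTER = timedelta(days=90)

    def find_stale_keys(csv_path: str) -> list[str]:
        """Return names of entries whose last_used date is older than STALE_AFTER."""
        now = datetime.now(timezone.utc)
        stale = []
        with open(csv_path, newline="") as fh:
            for row in csv.DictReader(fh):   # expects columns: name, last_used (ISO 8601)
                last_used = datetime.fromisoformat(row["last_used"])
                if last_used.tzinfo is None:
                    last_used = last_used.replace(tzinfo=timezone.utc)
                if now - last_used > STALE_AFTER:
                    stale.append(row["name"])
        return stale

    if __name__ == "__main__":
        for name in find_stale_keys("connected_apps_export.csv"):
            print(f"Review and consider revoking: {name}")
    ```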

    Please follow us or subscribe on your podcast app, and watch the video on our YouTube channel or at theproblemlounge.com. If you have topics or guest ideas, we would love to hear from you.


    Support the show

    12 mins
  • S6, E260 - How Digital Therapy is Changing Mental Health (and Privacy) Forever
    Nov 17 2025

    Send us a text

    A sleepless night, a soft prompt, and a flood of relief—the rise of AI therapy and companion apps is rewriting how we seek comfort when it matters most. We explore why these tools feel so human and so helpful, and what actually happens to the raw, intimate data shared in moments of vulnerability. From CBT-style exercises to memory-rich chat histories, the promise is powerful: instant support, lower cost, and zero visible judgment. The tradeoff is less visible but just as real—monetization models that thrive on sensitive inputs, “anonymized” data that can often be re-identified, and breach risks that turn private confessions into attack surfaces.

    We dig into the ethical edge: can a language model provide mental health care, or does it simulate empathy without the duty of care? We look at misinformation, hallucinated advice, and the way overreliance on AI can delay genuine human connection and professional help. The legal landscape lags behind the technology, with HIPAA often out of scope and accountability unclear when harm occurs. Still, there are practical ways to reduce exposure without forfeiting every benefit. We walk through privacy policies worth reading, data controls worth using, and signs that an app takes security seriously, from encryption to third‑party audits.

    Most of all, we focus on agency. Use AI for structure, journaling, and small reframes; lean on people for crisis, nuance, and real relationship. Create boundaries for what you share, separate identities when possible, and revisit whether a tool is helping you act or just keeping you company. If you’ve ever confided in a bot at 2 a.m., this conversation gives you the context and steps to stay safer while still finding support. If it resonates, subscribe, share with a friend who might need it, and leave a review to help others find the show.
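
    One way to put the "boundaries for what you share" advice into practice is to scrub obvious identifiers before pasting anything into a chat app. The sketch below is illustrative only and not something prescribed in the episode; the pattern list covers emails, US-style phone numbers, and SSN-like strings, and is nowhere near exhaustive.

    ```python
    # Redact obvious identifiers before sharing text with an AI chat app.
    # Illustrative only: the pattern list is minimal and easy to extend.
    import re

    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "phone": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
        "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def scrub(text: str) -> str:
        """Replace anything matching the patterns above with a placeholder."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label} removed]", text)
        return text

    if __name__ == "__main__":
        sample = "I'm Sam, reach me at sam@example.com or 555-867-5309 about my claim."
        print(scrub(sample))
    ```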

    Support the show

    20 mins
  • S6, E259 - Poisoned Patches & Billionaire Breaches
    Nov 7 2025

    Send us a text

    In this episode of Privacy Please, host Cameron Ivey discusses significant security threats, including a critical vulnerability in Microsoft's WSUS, a major data breach at the University of Pennsylvania, and the emergence of sophisticated malware known as Glassworm. The conversation highlights the importance of cybersecurity measures and the potential consequences of negligence in IT security.

    Support the show

    10 mins
  • S6, E258 - The Synthetic Star: The AI Influencer Earning More Than You
    Oct 23 2025

    Send us a text

    She has millions of followers, lands six-figure brand deals, and lives a life of curated perfection. The only catch? She isn't real. She was entirely created by artificial intelligence.

    Welcome to the unsettling world of synthetic influencers.

    In this compelling episode of Privacy Please, we dive deep into the booming industry of AI-generated online personalities. Discover:

    • The Technology: How advanced AI image generators, 3D modeling, and Large Language Models combine to create hyper-realistic avatars and their compelling "personalities."
    • The Business Case: Why major brands and marketing agencies are investing millions in digital beings that offer total control, scalability, and no risk of scandal.
    • The Privacy & Ethical Dilemmas: We explore the "uncanny valley" of trust, the impact of deception by design, the new extremes of unrealistic beauty standards, and the potential for these AI personas to be used for sophisticated scams or propaganda.
    • The Future of Authenticity: What does the rise of the synthetic star mean for human creativity, genuine connection, and the very definition of "real" in our digital world?

    It's a future that's already here, shaping what we see, what we buy, and even what we believe.

    Key Topics Covered:

    • What are virtual/synthetic influencers?
    • Examples: Lil Miquela, Aitana Lopez, Shudu Gram
    • AI technologies used: image generation, 3D modeling, LLMs
    • Reasons for their rise: control, cost, scalability, data collection
    • Ethical concerns: deception, parasocial relationships with AI
    • Impacts: unrealistic standards, displacement of human creators, potential for malicious use (scams, propaganda)
    • Debate around regulation and disclosure for AI-generated content
    • The future of authenticity and trust online

    Connect with Privacy Please:

    • Website: theproblemlounge.com
    • YouTube: https://www.youtube.com/@privacypleasepodcast
    • Social Media:
      • LinkedIn: https://www.linkedin.com/company/problem-lounge-network

    Resources & Further Reading (Sources Used / Suggested):

    • Federal Trade Commission (FTC):
      • Guidelines on disclosure for influencers (relevant for future AI disclosure discussions)
    • Academic Research:
      • Studies on parasocial relationships with media figures (can be applied to AI)
      • Research on the ethics of AI and synthetic media.
    • Industry Insights:
      • Reports from marketing agencies on virtual influencer trends
      • Articles from tech publications (e.g., Wired, The Verge, MIT Tech Review) covering Lil Miquela and similar figures.

    Support the show

    15 mins
  • S6, E257 - How Apple’s New Chip Rewrites Mobile Security
    Oct 3 2025

    Send us a text

    We unpack how Apple’s Memory Integrity Enforcement changes the rules of mobile security by rebuilding memory architecture, not just adding guardrails. We weigh who should upgrade now, what this means for Android, and why people remain the biggest risk.

    • memory corruption explained with apartment analogy
    • why NOP sleds and heap sprays fail under MIE
    • tags, type segregation, and synchronous checks at runtime (toy sketch after this list)
    • market-share vs design: Apple, Windows, Android trade-offs
    • Pegasus, zero-click exploits, and threat profiles
    • game hacking parallels: reading vs corrupting memory
    • should you upgrade: high-risk users vs everyday users
    • why architecture-level security beats bolt-on tools
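
    For a feel for the tag checks mentioned above, here is a toy sketch that simulates tagged allocations and pointers. It is a conceptual illustration only; real Memory Integrity Enforcement lives in Apple silicon hardware and the system allocator, not in anything like this.

    ```python
    # Toy simulation of tagged memory: every allocation gets a random tag, every
    # "pointer" carries the tag it was issued with, and each access checks that
    # the two match. Conceptual only; real MIE/MTE-style tagging is in hardware.
    import secrets

    class TaggedHeap:
        def __init__(self) -> None:
            self._allocations = {}          # address -> (tag, buffer)
            self._next_addr = 0x1000

        def alloc(self, size: int):
            addr, tag = self._next_addr, secrets.randbelow(16)   # 4-bit tag, as in MTE
            self._allocations[addr] = (tag, bytearray(size))
            self._next_addr += size
            return (addr, tag)              # a "pointer": address plus its tag

        def free(self, ptr) -> None:
            self._allocations.pop(ptr[0], None)

        def write(self, ptr, offset: int, value: int) -> None:
            addr, tag = ptr
            entry = self._allocations.get(addr)
            if entry is None or entry[0] != tag:
                raise MemoryError("tag check failed: stale or forged pointer")  # synchronous check
            entry[1][offset] = value        # raises IndexError on out-of-bounds offsets

    heap = TaggedHeap()
    p = heap.alloc(16)
    heap.write(p, 0, 0x41)                  # fine: tag matches
    heap.free(p)
    try:
        heap.write(p, 0, 0x42)              # use-after-free: caught by the tag check
    except MemoryError as err:
        print(err)
    ```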


    Support the show

    33 mins