• Data Is Hazardous Material: How Data Brokers, Telematics, and Over-Collection Are Reshaping Cyber Risk
    Jan 20 2026

    The FTC has issued an order against General Motors for collecting and selling drivers’ precise location and behavior data, gathered every few seconds and marketed as a safety feature. That data was sold into insurance ecosystems and used to influence pricing and coverage decisions — a clear reminder that how organizations collect, retain, and share data now carries direct security, regulatory, and financial risk.

    In this episode of Cyberside Chats, we explain why the GM case matters to CISOs, cybersecurity leaders, and IT teams everywhere. Data proliferation doesn’t just create privacy exposure; it creates systemic risk that fuels identity abuse, authentication bypass, fake job applications, and deepfake campaigns across organizations. The message is simple: data is hazardous material, and minimizing it is now a core part of cybersecurity strategy.

    Key Takeaways:

    1. Prioritize data inventory and mapping in 2026

    You cannot assess risk, select controls, or meet regulatory obligations without knowing what data you have, where it lives, how it flows, and why it is retained.

    2. Reduce data to reduce risk

    Data minimization is a security control that lowers breach impact, compliance burden, and long-term cost.

    3. Expect regulators to scrutinize data use, not just breaches

    Enforcement increasingly targets over-collection, secondary use, sharing, and retention even when no breach occurs.

    4. Create and actively use a data classification policy

    Classification drives retention, access controls, monitoring, and protection aligned to data value and regulatory exposure (see the sketch after these takeaways).

    5. Design identity and recovery assuming personal data is already compromised

    Build authentication and recovery flows that do not rely on the secrecy of SSNs, dates of birth, addresses, or other static personal data.

    6. Train teams on data handling, not just security tools

    Ensure engineers, IT staff, and business teams understand what data can be collected, how long it can be retained, where it may be stored, and how it can be shared.
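
    To make takeaways 1 and 4 concrete, here is a minimal Python sketch of what a machine-readable data inventory entry might look like, with classification, retention, and review metadata. All field names, the example assets, and the review interval are hypothetical placeholders, not a prescribed schema.

    ```python
    # Minimal sketch of a data inventory record with classification and retention
    # metadata. Field names and example entries are hypothetical; adapt them to
    # your own inventory tooling and regulatory requirements.
    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class DataAsset:
        name: str              # e.g., "driver_telemetry_events"
        owner: str             # accountable business owner
        location: str          # system or store where the data lives
        classification: str    # e.g., "public", "internal", "confidential", "restricted"
        purpose: str           # documented reason for collection
        retention_days: int    # maximum retention period
        last_reviewed: date    # last inventory review date

    def review_findings(assets: list[DataAsset], review_interval_days: int = 365) -> list[str]:
        """Flag assets with no owner, no documented purpose, or a stale review."""
        findings = []
        today = date.today()
        for a in assets:
            if not a.owner:
                findings.append(f"{a.name}: no accountable owner")
            if not a.purpose:
                findings.append(f"{a.name}: no documented purpose (candidate for minimization)")
            if today - a.last_reviewed > timedelta(days=review_interval_days):
                findings.append(f"{a.name}: inventory entry not reviewed in over a year")
        return findings

    assets = [
        DataAsset("driver_telemetry_events", "Telematics PM", "s3://telemetry-raw",
                  "restricted", "crash detection", 90, date(2024, 6, 1)),
        DataAsset("marketing_export", "", "warehouse.marketing_export",
                  "confidential", "", 365, date(2025, 11, 12)),
    ]
    for finding in review_findings(assets):
        print(finding)
    ```

    The point of even a tiny structure like this is that minimization decisions (takeaway 2) become possible only once every asset has an owner, a purpose, and a retention limit on record.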

    Resources:

    1. California Privacy Protection Agency — Delete Request and Opt-Out Platform (DROP)

    https://privacy.ca.gov/drop/

    2. FTC Press Release — FTC Takes Action Against General Motors for Sharing Drivers’ Precise Location and Driving Behavior Data

    https://www.ftc.gov/news-events/news/press-releases/2025/01/ftc-takes-action-against-general-motors-sharing-drivers-precise-location-driving-behavior-data

    3. California Delete Act (SB 362) — Overview

    https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB362

    4. Texas Attorney General — Data Privacy Enforcement Actions

    https://www.texasattorneygeneral.gov/news/releases

    5. Data Breaches: Crisis and Opportunity by Sherri Davidoff

    https://www.amazon.com/Data-Breaches-Opportunity-Sherri-Davidoff/dp/0134506782

    19 mins
  • Venezuela’s Blackout: Cybercrime Domino Effect
    Jan 13 2026

    When Venezuela experienced widespread power and internet outages, the impact went far beyond inconvenience—it created a perfect environment for cyber exploitation.

    In this episode of Cyberside Chats, we use Venezuela’s disruption as a case study to show how cyber risk escalates when power, connectivity, and trusted services break down. We examine why phishing, fraud, and impersonation reliably surge after crises, how narratives around cyber-enabled disruption can trigger copycat or opportunistic attacks, and why even well-run organizations resort to risky security shortcuts when normal systems fail.

    We also explore how attackers weaponize emergency messaging, impersonate critical infrastructure and connectivity providers, and exploit verification failures when standard workflows are disrupted. The takeaway is simple: when infrastructure collapses, trust erodes—and cybercrime scales quickly to fill the gap.

    14 mins
  • What the Epstein Files Teach Us About Redaction and AI
    Jan 6 2026

    The December release of the Epstein files wasn’t just controversial—it exposed a set of security problems organizations face every day. Documents that appeared heavily redacted weren’t always properly sanitized. Some files were pulled and reissued, drawing even more attention. And as interest surged, attackers quickly stepped in, distributing malware and phishing sites disguised as “Epstein archives.”

    In this episode of Cyberside Chats, we use the Epstein files as a real-world case study to explore two sides of the same problem: how organizations can be confident they’re not releasing more data than intended, and how they can trust—or verify—the information they consume under pressure. We dig into redaction failures, how AI tools change the risk model, how attackers weaponize breaking news, and practical ways teams can authenticate data before reacting.
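
    One practical pre-release check is to confirm that a visually redacted document contains no recoverable text layer, since black boxes drawn over text do not remove the underlying characters. The sketch below is a minimal illustration using the pdfminer.six library; the filename and sensitive-term list are placeholders, and a real review process would pair this with proper redaction tooling and manual inspection.

    ```python
    # Minimal sketch of a redaction audit: extract the text layer of a
    # "redacted" PDF and search it for terms that should have been removed.
    # Assumes pdfminer.six is installed; the filename and terms are placeholders.
    import re
    from pdfminer.high_level import extract_text

    SENSITIVE_TERMS = ["Jane Doe", "555-12-3456"]       # names/IDs expected to be redacted
    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # generic SSN-style pattern

    def audit_redaction(pdf_path: str) -> list[str]:
        text = extract_text(pdf_path)
        findings = [t for t in SENSITIVE_TERMS if t.lower() in text.lower()]
        findings += SSN_PATTERN.findall(text)
        return findings

    leaks = audit_redaction("release_candidate.pdf")
    if leaks:
        print("Redaction failure: recoverable text found:", leaks)
    else:
        print("No sensitive terms found in the extractable text layer.")
    ```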

    15 mins
  • Amazon's Warning: The New Reality of Initial Access
    Dec 30 2025

    Amazon released two security disclosures in the same week — and together, they reveal how modern attackers are getting inside organizations without breaking in.

    One case involved a North Korean IT worker who entered Amazon’s environment through a third-party contractor and was detected through subtle behavioral anomalies rather than malware. The other detailed a years-long Russian state-sponsored campaign that shifted away from exploits and instead abused misconfigured edge devices and trusted infrastructure to steal and replay credentials.

    Together, these incidents show how nation-state attackers are increasingly blending into human and technical systems that organizations already trust — forcing defenders to rethink how initial access really happens going into 2026.

    Key Takeaways

    1. Treat hiring and contractors as part of your attack surface

    Nation-state actors are deliberately targeting IT and technical roles. Contractor onboarding, identity verification, and access scoping should be handled with the same rigor as privileged account provisioning.

    2. Secure and monitor network edge devices as identity infrastructure

    Misconfigured edge devices have become a primary initial access vector. Inventory them, assign ownership, restrict management access, and monitor them like authentication systems — not just networking gear.

    3. Enforce strong MFA everywhere credentials matter

    If credentials can be used without MFA, assume they will be abused. Require MFA on VPNs, edge device management interfaces, cloud consoles, SaaS admin portals, and internal administrative access.

    4. Harden endpoints and validate how access actually occurs

    Endpoint security still matters. Harden devices and look for signs of remote control, unusual latency, or access paths that don’t match how work is normally done.

    5. Shift detection from “malicious” to “out of place”

    The most effective attacks often look legitimate. Focus detection on behavioral mismatches — access that technically succeeds but doesn’t align with role, geography, timing, or expected workflow.
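
    As a rough illustration of takeaway 5, the sketch below checks successful sign-ins against a simple per-user baseline of expected country, working hours, and access role. The event format, baselines, and example values are hypothetical; in practice this logic would run against your identity provider's sign-in logs and an HR or asset system of record.

    ```python
    # Minimal sketch of "out of place" detection: successful sign-ins are
    # compared to a per-user baseline. Formats and values are hypothetical.
    from datetime import datetime

    BASELINES = {
        "contractor_jsmith": {"countries": {"US"}, "hours": range(8, 19), "roles": {"helpdesk"}},
    }

    def out_of_place(event: dict) -> list[str]:
        base = BASELINES.get(event["user"])
        if base is None:
            return ["no baseline for user"]
        reasons = []
        if event["country"] not in base["countries"]:
            reasons.append(f"unexpected country {event['country']}")
        if datetime.fromisoformat(event["time"]).hour not in base["hours"]:
            reasons.append("sign-in outside normal working hours")
        if event["role"] not in base["roles"]:
            reasons.append(f"access under unexpected role: {event['role']}")
        return reasons

    event = {"user": "contractor_jsmith", "country": "US",
             "time": "2025-12-30T03:14:00", "role": "edge-device-admin"}
    for reason in out_of_place(event):
        print("review:", reason)
    ```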

    Resources:

    1. Amazon Threat Intelligence Identifies Russian Cyber Threat Group Targeting Western Critical Infrastructure

    https://aws.amazon.com/blogs/security/amazon-threat-intelligence-identifies-russian-cyber-threat-group-targeting-western-critical-infrastructure/

    2. Amazon Caught North Korean IT Worker by Tracing Keystroke Data

    https://www.bloomberg.com/news/newsletters/2025-12-17/amazon-caught-north-korean-it-worker-by-tracing-keystroke-data/

    3. North Korean Infiltrator Caught Working in Amazon IT Department Thanks to Keystroke Lag

    https://www.tomshardware.com/tech-industry/cyber-security/north-korean-infiltrator-caught-working-in-amazon-it-department-thanks-to-lag-110ms-keystroke-input-raises-red-flags-over-true-location

    4. Confessions of a Laptop Farmer: How an American Helped North Korea’s Remote Worker Scheme

    https://www.bloomberg.com/news/articles/2023-08-23/confessions-of-a-laptop-farmer-how-an-american-helped-north-korea-s-remote-worker-scheme

    5. Hiring security checklist

    https://www.lmgsecurity.com/resources/hiring-security-checklist/

    16 mins
  • AI Broke Trust: Identity Has to Step Up in 2026
    Dec 23 2025

    AI has supercharged phishing, deepfakes, and impersonation attacks—and 2025 proved that our trust systems aren’t built for this new reality. In this episode, Sherri and Matt break down the #1 change every security program needs in 2026: dramatically improving identity and authentication across the organization. 

    We explore how AI blurred the lines between legitimate and malicious communication, why authentication can no longer stop at the login screen, and where organizations must start adding verification into everyday workflows—from IT support calls to executive requests and financial approvals. 

    Plus, we discuss what “next-generation” user training looks like when employees can no longer rely on old phishing cues and must instead adopt identity-safety habits that AI can’t easily spoof. 

    If you want to strengthen your security program for the year ahead, this is the episode to watch. 

    Key Takeaways:

    1. Audit where internal conversations trigger action. Before adding controls, understand where trust actually matters—financial approvals, IT support, HR changes, executive requests—and treat those points as attack surfaces. 
    2. Expand authentication into everyday workflows. Add verification to calls, video meetings, chats, approvals, and support interactions using known systems, codes, and out-of-band confirmation. Apply friction intentionally where mistakes are costly. 
    3. Use verified communication features in collaboration platforms. Enable identity indicators, reporting features, and access restrictions in tools like Teams and Slack, and treat them as identity systems rather than just chat tools. 
    4. Implement out-of-band push confirmation for high-risk requests. Authenticator-based confirmation defeats voice, video, and message impersonation because attackers rarely control multiple channels simultaneously (see the sketch after this list).
    5. Move toward continuous identity validation. Identity should be reassessed as behavior and risk change, with step-up verification and session revocation for high-risk actions. 
    6. Redesign training around identity safety. Teach employees how to verify people and requests, not just emails, and reward them for slowing down and confirming—even when it frustrates leadership. 
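
    As a rough illustration of takeaways 2 and 4, here is a minimal sketch of an out-of-band confirmation step for a high-risk request, such as a wire transfer approved over a video call. The delivery function is a placeholder rather than a real push API, and the code length and expiry window are arbitrary choices.

    ```python
    # Minimal sketch of out-of-band confirmation for a high-risk request: a
    # short-lived code is sent over a separate, pre-registered channel, and the
    # requester must read it back before the action proceeds. Delivery is a
    # placeholder; values are illustrative.
    import hmac
    import secrets
    import time

    CODE_TTL_SECONDS = 300

    def send_out_of_band(user: str, code: str) -> None:
        # Placeholder: in practice an authenticator push or SMS to a number
        # registered before the request, never a channel supplied during it.
        print(f"[out-of-band channel] code for {user}: {code}")

    def issue_challenge(user: str) -> dict:
        code = f"{secrets.randbelow(10**6):06d}"   # 6-digit one-time code
        send_out_of_band(user, code)
        return {"user": user, "code": code, "issued": time.time()}

    def verify_challenge(challenge: dict, supplied_code: str) -> bool:
        if time.time() - challenge["issued"] > CODE_TTL_SECONDS:
            return False                            # expired challenge
        return hmac.compare_digest(challenge["code"], supplied_code)

    challenge = issue_challenge("cfo@example.com")
    # Simulate the approver reading the code back over the original channel.
    print("approved" if verify_challenge(challenge, challenge["code"]) else "denied")
    ```

    The design point is that the confirmation travels over a channel the attacker is unlikely to control, which is what makes it resilient to convincing voice or video impersonation.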

    Tune in weekly on Tuesdays at 6:30 am ET for more cybersecurity advice, and visit www.LMGsecurity.com if you need help with cybersecurity testing, advisory services, or training.

    Resources:

    CFO.com – Deepfake CFO Scam Costs Engineering Firm $25 Million https://www.cfo.com/news/deepfake-cfo-hong-kong-25-million-fraud-cyber-crime/

    Retool – MFA Isn’t MFA https://retool.com/blog/mfa-isnt-mfa

    Sophos MDR tracks two ransomware campaigns using “email bombing,” Microsoft Teams “vishing” https://news.sophos.com/en-us/2025/01/21/sophos-mdr-tracks-two-ransomware-campaigns-using-email-bombing-microsoft-teams-vishing/

    Wired – Doxers Posing as Cops Are Tricking Big Tech Firms Into Sharing People’s Private Data https://www.wired.com/story/doxers-posing-as-cops-are-tricking-big-tech-firms-into-sharing-peoples-private-data/

    LMG Security – 5 New-ish Microsoft Security Features & What They Reveal About Today’s Threats https://www.lmgsecurity.com/5-new-ish-microsoft-security-features-what-they-reveal-about-todays-threats/

    33 mins
  • The 5 New-ish Microsoft Security Features to Roll Out in 2026
    Dec 16 2025

    Microsoft is rolling out a series of new-ish security features across Microsoft 365 in 2026 — and these updates are no accident. They’re direct responses to how attackers are exploiting collaboration tools like Teams, Slack, Zoom, and Google Chat. In this episode, Sherri and Matt break down the five features that matter most, why they’re happening now, and how every organization can benefit from these lessons, even if you’re not a Microsoft shop.

    We explore the rise of impersonation attacks inside collaboration platforms, the security implications of AI copilots like Microsoft Copilot and Gemini, and why identity boundaries and data governance are quickly becoming foundational to modern security programs. You’ll come away with a clear understanding of what these new-ish Microsoft features signal about the evolving threat landscape — and practical steps you can take today to strengthen your security posture.

    Key Takeaways

    1. Treat collaboration platforms as high-risk communication channels. Attackers increasingly use Teams, Slack, Zoom, and similar tools to impersonate coworkers or support staff, and organizations should help employees verify unexpected contacts just as rigorously as they verify email.
    2. Make it easy for users to report suspicious activity. Whether or not your platform offers a built-in reporting feature like Microsoft’s suspicious-call button, employees need a simple, well-understood way to escalate strange messages or calls inside collaboration tools.
    3. Monitor external collaboration for anomalies. Microsoft’s new anomaly report highlights a growing need across all ecosystems to watch for unexpected domains, unusual activity patterns, and impersonation attempts that occur through external collaboration channels (see the sketch after this list).
    4. Classify and label sensitive data before enabling AI assistants. AI tools such as Copilot, Gemini, and Slack GPT inherit user permissions and may access far more information than intended if organizations haven’t established clear sensitivity labels and access boundaries.
    5. Enforce identity and tenant boundaries to limit data leakage. Features like Tenant Restrictions v2 demonstrate the importance of restricting where users can authenticate and ensuring that corporate data stays within approved environments.
    6. Update security training to reflect collaboration-era social engineering. Modern attacks frequently occur through chat messages, impersonated vendor accounts, malicious external domains, or voice/video calls, and training must evolve beyond traditional email-focused programs.
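
    As a rough illustration of takeaway 3, the sketch below flags collaboration events involving domains that are neither internal nor on an approved partner list, including simple look-alikes of your own domain. The event format, domains, and similarity threshold are hypothetical placeholders; real input would come from your collaboration platform's audit logs.

    ```python
    # Minimal sketch of external-collaboration monitoring: flag unapproved
    # external domains and crude look-alikes of the corporate domain.
    import difflib

    OWN_DOMAIN = "example.com"
    APPROVED_EXTERNAL = {"trustedpartner.com", "vendor-support.net"}

    def flag_event(event: dict) -> str | None:
        domain = event["external_user"].split("@")[-1].lower()
        if domain == OWN_DOMAIN or domain in APPROVED_EXTERNAL:
            return None
        similarity = difflib.SequenceMatcher(None, domain, OWN_DOMAIN).ratio()
        if similarity > 0.8:
            return f"possible look-alike of {OWN_DOMAIN}: {domain}"
        return f"unapproved external domain: {domain}"

    events = [
        {"external_user": "it.support@examp1e.com", "channel": "Teams chat"},
        {"external_user": "pm@trustedpartner.com", "channel": "Teams meeting"},
    ]
    for e in events:
        finding = flag_event(e)
        if finding:
            print(f"{e['channel']}: {finding}")
    ```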

    Please follow our podcast for the latest cybersecurity advice, and visit us at www.LMGsecurity.com if you need help with technical testing, cybersecurity consulting, and training!

    Resources Mentioned

    • Microsoft 365: Advancing Microsoft 365 – New Capabilities and Pricing Update: https://www.microsoft.com/en-us/microsoft-365/blog/2025/12/04/advancing-microsoft-365-new-capabilities-and-pricing-update/
    • Microsoft 365 Roadmap – Suspicious Call Reporting (ID 536573): https://www.microsoft.com/en-us/microsoft-365/roadmap?id=536573
    • Check Point Research: Exploiting Trust in Microsoft Teams: https://blog.checkpoint.com/research/exploiting-trust-in-collaboration-microsoft-teams-vulnerabilities-uncovered/
    • Phishing Susceptibility Study (arXiv): https://arxiv.org/abs/2510.27298
    • LMG Security Video: Email Bombing & IT Helpdesk Spoofing Attacks—How to Stop Them: https://www.lmgsecurity.com/videos/email-bombing-it-helpdesk-spoofing-attacks-how-to-stop-them/
    21 mins
  • The Extension That Spied on You: Inside ShadyPanda’s 7-Year Attack
    Dec 9 2025

    A massive 7-year espionage campaign hid in plain sight. Seemingly harmless Chrome and Edge extensions — wallpaper tools, tab managers, PDF converters — suddenly flipped into full surveillance implants, impacting more than 4.3 million users. In this episode, we break down how ShadyPanda built trust over years, then weaponized auto-updates to steal browsing history, authentication tokens, and even live session cookies. We’ll walk through the timeline, what data was stolen, why session hijacking makes this attack so dangerous, and the key steps security leaders must take now to prevent similar extension-based compromises.

    Key Takeaways

    1. Audit and restrict browser extensions across the organization. Inventory all extensions in use, remove unnecessary ones, and enforce an allowlist through enterprise browser controls (see the sketch after this list).
    2. Treat extensions as part of your software supply chain. Extensions can flip from safe to malicious overnight. Include them in risk assessments and governance processes.
    3. Detect and mitigate session hijacking. Monitor for unusual token reuse, shorten token lifetimes where possible, and watch for logins that bypass MFA.
    4. Enforce enterprise browser security controls. Use Chrome/Edge enterprise features or MDM to lock down permissions, block unapproved installations, and enable safe browsing modes.
    5. Reduce extension sprawl with policy and training. Educate employees that extensions carry real security risk. Require justification for new installations and empower IT to remove unnecessary ones.
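
    As a rough illustration of the first takeaway, here is a minimal sketch that inventories locally installed Chrome extensions and compares them against an allowlist. The profile path shown is Linux-specific and the allowlist ID is a placeholder; enforcement should ultimately live in enterprise browser policy (e.g., an extension install allowlist) rather than an ad hoc script.

    ```python
    # Minimal sketch of a local extension audit: list installed Chrome extension
    # IDs from the profile directory and compare them to an approved allowlist.
    # Path is Linux-specific; adjust for Windows/macOS or for Edge profiles.
    import json
    from pathlib import Path

    EXTENSIONS_DIR = Path.home() / ".config/google-chrome/Default/Extensions"
    ALLOWLIST = {"aapbdbdomjkkjkaonfhkkikfgjllcleb"}   # placeholder extension IDs

    def installed_extensions(extensions_dir: Path) -> dict[str, str]:
        """Map extension ID -> declared name from the newest version's manifest."""
        found = {}
        if not extensions_dir.exists():
            return found
        for ext_dir in extensions_dir.iterdir():
            manifests = sorted(ext_dir.glob("*/manifest.json"))
            if manifests:
                data = json.loads(manifests[-1].read_text(encoding="utf-8"))
                found[ext_dir.name] = data.get("name", "(unknown)")  # may be a __MSG_* locale key
        return found

    for ext_id, name in installed_extensions(EXTENSIONS_DIR).items():
        status = "allowed" if ext_id in ALLOWLIST else "NOT on allowlist - review"
        print(f"{ext_id}  {name}  [{status}]")
    ```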

    Please tune in weekly for more cybersecurity advice, and visit www.LMGsecurity.com if you need help with your cybersecurity testing, advisory services, and training.

    Resources:

    • KOI Intelligence (Original Research): https://www.koi.ai/blog/4-million-browsers-infected-inside-shadypanda-7-year-malware-campaign
    • Malwarebytes Labs Coverage: https://www.malwarebytes.com/blog/news/2025/12/sleeper-browser-extensions-woke-up-as-spyware-on-4-million-devices
    • Infosecurity Magazine Article: https://www.infosecurity-magazine.com/news/shadypanda-infects-43m-chrome-edge/

    #ShadyPanda #browserextension #browsersecurity #cybersecurity #cyberaware #infosec #cyberattacks #ciso

    21 mins
  • Inside Jobs: How CrowdStrike, DigitalMint & Tesla Got Burned
    Dec 2 2025

    Insider threats are accelerating across every sector. In this episode, Sherri and Matt unpack the CrowdStrike insider leak, the two DigitalMint employees indicted for BlackCat ransomware activity, and Tesla’s multi-year insider incidents ranging from nation-state bribery to post-termination extortion. They also examine the 2025 crackdown on North Korean operatives who used stolen identities and deepfake interviews to get hired as remote workers inside U.S. companies. Together, these cases reveal how attackers are buying, recruiting, impersonating, and embedding insiders — and why organizations must rethink how they detect and manage trusted access.

    Key Takeaways

    1. Build a culture of ethics and make legal consequences explicit. Use real cases — Tesla, CrowdStrike, DigitalMint — to show employees that insider misconduct leads to indictments and prison time. Clear messaging, training, and leadership visibility reinforce deterrence.
    2. Enforce least-privilege access and conduct quarterly access reviews. Limit who can view or modify sensitive dashboards, admin tools, and SSO consoles. Regular recertification ensures employees only retain the permissions they legitimately need (see the sketch after this list).
    3. Deploy screenshot prevention and data-leak controls across critical systems. Implement watermarking, VDI/browser isolation, screenshot detection, and DLP/CASB rules to deter and detect unauthorized capture or exfiltration of sensitive data.
    4. Strengthen identity verification for remote and distributed employees. Use periodic identity rechecks and require company-managed, attested devices for sensitive roles. Prohibit personal-device access for privileged work to reduce impersonation risk.
    5. Monitor high-risk users with behavior and anomaly analytics. Flag unusual patterns such as off-hours access, atypical data movement, sudden repository interest, or crypto-related activity on work devices. Behavioral analytics helps uncover malicious intent even when credentials appear valid.
    6. Require your vendors to follow the same insider-threat safeguards you use internally. Ensure MSPs, SaaS providers, IR partners, and software vendors enforce strong access controls, identity verification, monitoring, and device security. Vendor insiders can quickly become your insiders.
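
    As a rough illustration of takeaway 2, the sketch below flags entitlements that are overdue for recertification or that exceed a role's expected permission set. The roles, records, and 90-day window are illustrative placeholders, not tied to any particular product.

    ```python
    # Minimal sketch of an access recertification check: flag stale reviews and
    # permissions outside a role's baseline. All data here is illustrative.
    from datetime import date, timedelta

    REVIEW_WINDOW = timedelta(days=90)
    ROLE_BASELINE = {
        "analyst": {"ticketing", "read-dashboards"},
        "ir-consultant": {"ticketing", "case-files"},
    }

    entitlements = [
        {"user": "amartin", "role": "analyst", "permission": "sso-admin-console",
         "last_reviewed": date(2025, 5, 1)},
        {"user": "bnguyen", "role": "ir-consultant", "permission": "case-files",
         "last_reviewed": date(2025, 11, 20)},
    ]

    def recertification_findings(records: list[dict], today: date) -> list[str]:
        findings = []
        for r in records:
            if today - r["last_reviewed"] > REVIEW_WINDOW:
                findings.append(f"{r['user']}: '{r['permission']}' overdue for review")
            if r["permission"] not in ROLE_BASELINE.get(r["role"], set()):
                findings.append(f"{r['user']}: '{r['permission']}' exceeds {r['role']} baseline")
        return findings

    for f in recertification_findings(entitlements, date(2025, 12, 2)):
        print(f)
    ```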

    Resources:

    • TechCrunch – CrowdStrike insider leak coverage: https://techcrunch.com/2025/11/21/crowdstrike-fires-suspicious-insider-who-passed-information-to-hackers/
    • Reuters – DigitalMint ransomware indictment reporting: https://www.reuters.com/legal/government/us-prosecutors-say-cybersecurity-pros-ran-cybercrime-operation-2025-11-03/
    • BleepingComputer – North Korean fake remote worker scheme: https://www.bleepingcomputer.com/news/security/us-arrests-key-facilitator-in-north-korean-it-worker-fraud-scheme/
    • “Ransomware and Cyber Extortion: Response and Prevention” (book by Sherri Davidoff, Matt Durrin, and Karen Sprenger): https://www.amazon.com/Ransomware-Cyber-Extortion-Response-Prevention-ebook/dp/B09RV4FPP9
    • LMG’s Hiring Security Checklist: https://www.lmgsecurity.com/resources/hiring-security-checklist/

    Want to attend a live version of Cyberside Chats? Visit us at https://www.lmgsecurity.com/lmg-resources/cyberside-chats-podcast/ to register for our next monthly live session.

    #insiderthreat #cybersecurity #cyberaware #cybersidechats #ransomware #ransomwareattack #crowdstrike #DigitalMint #tesla #remotework

    23 mins