
Nobody Told Me About IT

Written by: Tad Doyle and Nabil Gharbieh

About this listen

Nobody Told Me About IT is a weekly podcast. Hosts Tad Doyle and Nabil Gharbieh have frank, practical conversations about IT strategy, cybersecurity, budgeting, and technology leadership — for the business leaders who need to understand these topics but weren't trained for them. No jargon. No vendor pitch. Just the IT conversations your organization needs to be having. New episodes every Monday.
Episodes
  • AI for Deep Research: How We Actually Use It
    May 5 2026

    Most mid-market organizations have AI in use, whether approved, shadow, or embedded in vendor tools. Almost none have a governance framework for it. In Episode 02, co-hosts Nabil Gharbieh and Tad Doyle demonstrate live, on screen, how an experienced IT advisor actually uses Claude to research AI governance platforms. This is a structured methodology, not a product endorsement.

    Tad opens with a real-world scenario: a mid-market financial services organization has approved AI use but has no governance framework. Employees are using Copilot, Claude, ChatGPT, and other tools across platforms the organization has never reviewed, all of it in a regulated industry.

    The research question: what platforms exist to help solve this?

    Before the demonstration begins, Tad shares a practical tip: ask the model which version to use. Claude will tell you that Opus is overkill for most research tasks. Matching the model to the task saves tokens and cost.

    Claude returns a structured framework covering research approach, timeline, tool categories, and evaluation criteria. Tad's first move is a deliberate sanity check, scrolling through the output to confirm it aligns with what he knows about NIST frameworks before going further.

    The second prompt adds real context: the organization runs Microsoft 365, uses Copilot as its primary AI tool, and carries SEC and FINRA obligations. Under 500 employees. Shadow IT is a known concern.

    Nabil surfaces a real concern. Claude has a tendency to validate rather than challenge, using phrases like "perfect" and "great research" regularly. Tad's answer: maintain your own skepticism. Ask the model what it might have missed, or how you could phrase the question better. The positivity in the interface is not the same as accuracy.

    The resulting report runs 18 pages after Tad requests a completeness check. It includes an executive summary, tool categories mapped to governance needs, a quick reference guide with vendor pricing, maturity assessments, detailed vendor profiles with advantages and risks, a scored comparison, and a summary recommendation with two platforms for Phase 1 evaluation.

    One honest caveat on pricing: enterprise software pricing from AI research is rough. Most platforms are quote-based. The research narrows the field. A real pricing conversation follows separately.

    The report surfaces established platforms like Microsoft Purview alongside less familiar names. Tad's approach: the presence of independently validated platforms on the same list gives you confidence Claude is comparing real tools. For the unknowns, due diligence follows, including account rep conversations and Gartner or Forrester reports. Those reports can be uploaded back into Claude to refine the document further, with citations.

    Nabil's closing question: you went from zero knowledge to a vendor list. You wouldn't actually recommend based on this alone?

    Tad's answer: correct. The output is a starting map, not a destination. When presenting to clients, he is explicit about his qualifications and reservations, especially on pricing, and brings domain knowledge to reality-check the recommendations.

    The third prompt stress-tests the output: which platforms are mature and field-proven at mid-market scale? Which are overhyped or early-stage? What concerns would you have about recommending each? The result confirms which platforms are well-regarded and flags the less-proven tools for additional review before moving forward.

    "AI doesn't replace the advisor. It makes the advisor faster, more efficient, and more productive. The value isn't in the tool. It's in knowing what questions to ask, and knowing how to evaluate what comes back."

    Chapters:
    2:10 The Starting Point
    3:32 Reading the Output
    6:25 How Do You Know It's Not Just Telling You What You Want to Hear?
    7:13 Walking Through the Report
    14:44 What to Do with Vendors You've Never Heard Of
    17:20 Trust But Verify
    19:20 Stress-Testing the Output

    Show Notes on www.nobodytoldmeaboutit.com

    23 mins
  • Anthropic Project Glasswing: When AI Finds the Bugs Before the Hackers Do
    Apr 29 2026

    Anthropic's Mythos Found A 27-Year-Old Bug. Here's What IT Leaders Should Do Monday.

    10 mins