• Article 29. Algorithmic System Integrity: Explainability (Part 6) - Interpretability
    Dec 22 2025

    Spoken by a human version of this article.

    TL;DR (TL;DL?)

    • Technical stakeholders need detailed explanations.
    • Non-technical stakeholders need plain language.
    • Visuals, layering, literacy, and feedback are among the techniques we can use.

    To subscribe to the weekly articles: https://riskinsights.com.au/blog#subscribe

    About this podcast

    A podcast for Financial Services leaders, where we discuss fairness and accuracy in the use of data, algorithms, and AI.

    Hosted by Yusuf Moolla.
    Produced by Risk Insights (riskinsights.com.au).

    4 mins
  • Article 28. Algorithmic System Integrity: Explainability (Part 5) - Privacy and Confidentiality
    Dec 21 2025

    Spoken by a human version of this article.

    TL;DR (TL;DL?)

    • Algorithmic systems create challenges in balancing explainability with privacy and confidentiality.
    • Key challenges include protecting sensitive information, preserving proprietary algorithms, and securing fraud detection systems.
    • Focusing on what audiences need, with a few specific considerations, can help address these.

    5 mins
  • Article 27. Algorithmic System Integrity: Explainability (Part 4)
    Dec 20 2025

    Spoken by a human version of this article.

    TL;DR (TL;DL?)

    • Explainability is necessary to build trust in AI systems.
    • There is no universally accepted definition of explainability.
    • So we focus on key considerations that don't require us to select any particular definition.

    4 mins
  • Article 26. Algorithmic System Integrity: Explainability (Part 3) - Complicated Processes
    Dec 20 2025

    Spoken by a human version of this article.

    TL;DR (TL;DL?)

    • Algorithmic processes are often complicated by intricate data flows and transformations.
    • Data flow diagrams and documentation can help simplify these processes.

    5 mins
  • Article 25. Algorithmic System Integrity: Explainability (Part 2) - Complexity
    Dec 19 2025

    Spoken by a human version of this article.

    TL;DR (TL;DL?)

    • Complexity must be actively managed rather than passively accepted.
    • Data relevance directly impacts both accuracy and explainability.
    • Technical “visibility” techniques can be useful.

    6 mins
  • Article 24. Algorithmic System Integrity: Explainability (Part 1)
    Dec 19 2025

    Spoken by a human version of this article.

    TL;DR (TL;DL?)

    • Why Explainability Matters: It builds trust, is needed to meet compliance obligations, and can help identify errors faster.
    • Key Challenges: Complex algorithms, intricate workflows, privacy concerns, and making explanations understandable for all stakeholders.
    • What’s Next: Future articles will explore practical solutions to these challenges.


    6 mins
  • Article 23. Algorithmic System Integrity: Testing
    Feb 21 2025

    Spoken by a human version of this article.

    TL;DR (TL;DL?)

    • Testing is a core basic step for algorithmic integrity.
    • Testing involves various stages, from developer self-checks to UAT. Where these happen will depend on whether the system is built in-house or bought.
    • Testing needs to cover several integrity aspects, including accuracy, fairness, security, privacy, and performance.
    • Continuous testing is needed for AI systems; unlike traditional systems, their behaviour can change without code changes.


    6 mins
  • Article 22. Algorithm Integrity: Third party assurance
    Feb 16 2025

    Spoken by a human version of this article.

    One question that comes up often is “How do we obtain assurance about third party products or services?”

    Depending on the nature of the relationship, and what you need assurance for, this can vary widely.

    This article attempts to lay out the options, considerations, and key steps to take.

    TL;DR (TL;DL?)

    • Third-party assurance for algorithm integrity varies based on the nature of the relationship and specific needs, with several options.
    • Key factors to consider include the importance and risk level of the service/product, regulatory expectations, complexity, transparency, and frequency of updates.
    • Standardised assurance frameworks for algorithm integrity are still emerging; adopt a risk-based approach, and consider sector-specific standards like CPS 230 (Australia).


    8 mins