Episodes

  • PToPI: A periodic table for Artificial Intelligence - Your visual guide to choosing the right metrics
    Oct 18 2024

Confused about choosing the right performance metrics for your classification models 🤔? Say goodbye to confusion 👋 and hello to PToPI, the Periodic Table of Performance Instruments!

    This video podcast provides a clear and engaging visual guide to understanding and selecting the most effective classification performance metrics for your machine learning tasks 🚀.

    This visual guide, designed by Dr. Gürol Canbek, will cover:

    • A comprehensive review of 57 performance instruments, including graphical, probabilistic, and entropic ones.
    • Formal definitions and explanations of key concepts like canonical forms, duality, complementation, and leveling.
    • How to interpret and analyze performance metrics to choose the right one for your needs.
    • Practical examples from different domains to show you how to apply PToPI in real-world scenarios.
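Two of the concepts listed above, complementation and duality, can be illustrated with a minimal sketch (not taken from the paper; the confusion-matrix counts are made up): a metric and its complement sum to one, and a metric maps to its dual when the positive and negative classes swap roles.

```python
# Minimal sketch of complementation and duality among confusion-matrix
# metrics; counts are hypothetical, not from the PToPI paper.
def tpr(tp, tn, fp, fn):  # sensitivity / recall
    return tp / (tp + fn)

def fnr(tp, tn, fp, fn):  # miss rate, the complement of TPR
    return fn / (tp + fn)

def tnr(tp, tn, fp, fn):  # specificity, the dual of TPR
    return tn / (tn + fp)

cm = dict(tp=40, tn=45, fp=5, fn=10)

# Complementation: TPR + FNR == 1
assert abs(tpr(**cm) + fnr(**cm) - 1.0) < 1e-12

# Duality: TPR on the class-swapped matrix equals TNR on the original
swapped = dict(tp=cm["tn"], tn=cm["tp"], fp=cm["fn"], fn=cm["fp"])
assert tpr(**swapped) == tnr(**cm)
print("complementation and duality hold")
```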

    Don't miss out on this opportunity to master classification performance metrics! 🙌

    Access the free full text article and download the PToPI poster here:

    • Free full text: https://bit.ly/PToPlpaper
    • Download PToPI poster (as an Excel xlsx file): https://github.com/gurol/PToPI

    This video podcast will empower you to make informed decisions about your machine learning models and achieve better results! 💯

    14 mins
  • Accuracy Barrier (ACCBAR): Why we need new AI performance indicators for binary classification
    Oct 4 2024

    Join us as we explore the fascinating world of AI and uncover a hidden danger lurking beneath the surface of seemingly impressive and common performance metrics, specifically Accuracy (ACC).

    In this episode, we'll explore the concept of the Accuracy Barrier (ACCBAR) performance indicator and why relying solely on accuracy scores can lead to a false sense of security.

    We'll examine:

    The Accuracy Paradox: Discover how a 99% accuracy rate can be utterly misleading and why conventional performance indicators fall short in certain scenarios.
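The paradox is easy to reproduce with a few lines of arithmetic. In this made-up scenario, a "classifier" that labels every sample as the majority class scores 99% accuracy while detecting nothing:

```python
# Hypothetical illustration of the accuracy paradox on an imbalanced
# dataset: the model simply predicts the majority (negative) class.
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

# 990 benign samples, 10 malicious; the model labels everything benign.
tp, tn, fp, fn = 0, 990, 0, 10

acc = accuracy(tp, tn, fp, fn)
recall = tp / (tp + fn)  # true positive rate: how many positives we caught

print(f"Accuracy: {acc:.0%}")   # 99% — looks impressive
print(f"Recall:   {recall:.0%}")  # 0% — every malicious sample missed
```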

Accuracy Barrier (ACCBAR): Uncover this novel performance indicator that unveils the limitations of Accuracy, the most traditional metric, and exposes potential biases in AI models.

    Real-World Implications: Learn how ACCBAR can revolutionize performance evaluation in various domains, from cybersecurity to medical diagnosis, by providing a more reliable assessment of AI systems.

    Publication Bias and Confirmation Bias in Research: We'll discuss how ACCBAR can help researchers identify and address potential confirmation bias in their classifications, ensuring more robust and trustworthy AI development.

    Don't miss this opportunity to gain a deeper understanding of AI performance evaluation and learn how to critically assess the true capabilities of AI systems.

    Free access to the full research paper is available at: https://bit.ly/ACCBARPaper

    👉 Please cite my article as follows: Canbek, G., Temizel, T. T., & Sagiroglu, S. (2022). Accuracy Barrier (ACCBAR): A novel performance indicator for binary classification. 2022 15th International Conference on Information Security and Cryptography (ISCTURKEY), 92–97. https://doi.org/10.1109/ISCTURKEY56345.2022.9931888

    7 mins
  • TasKar: Your new secret weapon for understanding AI classification performance beyond accuracy
    Sep 30 2024

Are you tired of relying solely on accuracy to evaluate your classification models? Do cryptic metrics like the F1 score or the Matthews correlation coefficient leave you scratching your head?

    Join us as we unlock the secrets of binary classification performance measurement and go beyond simple accuracy. We'll explore a comprehensive set of 65 metrics, each providing unique insights into your model's strengths and weaknesses.
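As a taste of what such metrics capture, here is a minimal sketch (not TasKar itself) computing two of the measures mentioned above, the F1 score and the Matthews correlation coefficient, from raw confusion-matrix counts; the counts are hypothetical:

```python
import math

# Two metrics beyond accuracy, computed from hypothetical
# confusion-matrix counts (not from the TasKar tool).
def f1_score(tp, tn, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def mcc(tp, tn, fp, fn):
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den

tp, tn, fp, fn = 40, 45, 5, 10
print(f"F1:  {f1_score(tp, tn, fp, fn):.3f}")
print(f"MCC: {mcc(tp, tn, fp, fn):.3f}")
```

Unlike accuracy, MCC uses all four cells of the confusion matrix, so it stays near zero for the degenerate majority-class classifier that accuracy rewards.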

    We'll break down complex concepts into easily understandable terms and discuss how these metrics can help you make more informed decisions about your models. We'll also unveil TasKar, a powerful new dashboard that visualizes your classification results with innovative graphics, making it easier than ever to interpret your model's performance.

    Whether you're a seasoned machine learning expert or just starting, this podcast will equip you with the knowledge and tools to evaluate and compare your binary classification models confidently.

    Tune in to discover the full potential of your classification models!

    Download TasKar for free: https://github.com/gurol/TasKar (best viewed with the free LibreOffice or Apache OpenOffice).

    👉 Please cite my article as follows: Canbek, G., Taskaya Temizel, T., & Sagiroglu, S. (2021). TasKar: A research and education tool for calculation and representation of binary classification performance instruments. IEEE 14th International Conference on Information Security and Cryptology (ISCTurkey), 105–110. https://doi.org/10.1109/ISCTURKEY53027.2021.9654359

    10 mins
  • Garbage In, Garbage Out (GIGO): Unlocking data insights with AI - Feature space distribution fitting
    Sep 25 2024

    Welcome to the very first episode of our podcast, where we dive deep into the fascinating world of data, artificial intelligence, and cybersecurity. I'm Gürol Canbek. In this episode, we’ll explore one of the most critical concepts in AI: Garbage In, Garbage Out, or GIGO.

    We often focus on building smarter algorithms, but what happens when the data we feed into these systems is flawed or incomplete? Like using spoiled ingredients in a recipe, bad data can lead to disastrous results. In this episode, I'll discuss my latest research on how data quality affects AI's ability to generate insights and how we can avoid those "bad ingredients."

    We’ll talk about patterns, data fingerprints, and even some surprising parallels between natural phenomena like earthquakes and your smartphone apps! 🧐
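The idea of a data "fingerprint" can be sketched in a few lines. This is not the paper's method, just a minimal illustration under assumed data: estimate parameters for two candidate distributions by maximum likelihood and compare log-likelihoods to see which shape a feature's values match.

```python
import math
import random
import statistics

# Minimal sketch of feature-space distribution fitting (illustrative,
# not the paper's procedure): which candidate distribution best
# describes a feature's observed values?
random.seed(42)
feature = [random.expovariate(1 / 3.0) for _ in range(1000)]  # synthetic feature values

# Candidate 1: normal distribution (MLE: sample mean and population std)
mu, sigma = statistics.mean(feature), statistics.pstdev(feature)
ll_normal = sum(
    -math.log(sigma * math.sqrt(2 * math.pi)) - (x - mu) ** 2 / (2 * sigma**2)
    for x in feature
)

# Candidate 2: exponential distribution (MLE: rate = 1 / mean)
rate = 1 / mu
ll_expon = sum(math.log(rate) - rate * x for x in feature)

best = "exponential" if ll_expon > ll_normal else "normal"
print(f"Best-fitting candidate: {best}")
```

In practice one would compare many more candidate families and use formal goodness-of-fit tests, but even this crude comparison shows how a feature's distributional shape can serve as a fingerprint of the dataset.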

    For full access to the research behind this episode, you can read the paper here: bit.ly/GIGOpaper.

    Also, be sure to check out more of my work at gurol.canbek.com.

    Join me as we uncover how clean, well-structured data can make all the difference in AI, and why GIGO is more relevant than ever in our increasingly data-driven world.

    👉 Please cite my article as follows: Canbek, G. (2022). Gaining insights in datasets in the shade of “garbage in, garbage out” rationale: Feature space distribution fitting. WIREs Data Mining and Knowledge Discovery, 12(3), 1–18. https://doi.org/10.1002/widm.1456

    8 mins