
Abstract Synthesis

Written by: Ndea

About this listen

Go beyond the paper abstract to synthesize new ideas. AGI research lab Ndea presents the stories behind remarkable academic papers in the field of program synthesis.
Episodes
  • February 2026 Podcast Recap
    Feb 9 2026

    Program synthesis is the problem of automatically generating code that satisfies a specification. The real challenge isn’t searching faster; it’s making the right parts of the search space searchable at all.
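    To make the definition above concrete, here is a toy enumerative synthesizer (a minimal sketch; the DSL and function names are hypothetical illustrations, not from any system discussed on the podcast):

```python
# A toy enumerative synthesizer: search compositions of a tiny DSL of
# unary integer functions for one consistent with input-output examples.
from itertools import product

# Hypothetical DSL (illustration only).
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dec": lambda x: x - 1,
    "double": lambda x: x * 2,
}

def compose(names):
    """Build a function by applying the named primitives left to right."""
    def prog(x):
        for name in names:
            x = PRIMITIVES[name](x)
        return x
    return prog

def synthesize(examples, max_depth=4):
    """Enumerate programs by increasing length; return the first that fits."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            if all(compose(names)(i) == o for i, o in examples):
                return list(names)
    return None  # no program of length <= max_depth matches

# 3 -> 8 and 5 -> 12 is solved by "inc then double": (x + 1) * 2.
print(synthesize([(3, 8), (5, 12)]))  # -> ['inc', 'double']
```

    Even this toy version shows why representation matters: the search cost grows exponentially with program length, so anything that shrinks or restructures the space matters more than raw enumeration speed.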


    This week's episode is a short recap of the podcast so far. Across the past eight conversations - spanning grammar filtering, temporal synthesis, inductive logic programming, vision-language programs, and symbolic world models - we explore three emergent themes.


    1. Shrinking the search space, without breaking correctness

    2. Why "correct" programs still behave badly

    3. The real meaning of "neurosymbolic"


    At a high level, all of the solutions we've explored are grappling with the problem of search - from how problems are represented to where the divide between neural and symbolic components should fall.


    Credits -

    Host, Editor, Music: Bryan Landers, Technical Staff, Ndea

    https://x.com/ndea

    https://x.com/bryanlanders

    https://ndea.com

    6 mins
  • Relational Decomposition for Program Synthesis - Céline Hocquette
    Feb 2 2026

    The way a problem is represented can determine whether it is solvable at all.


    Céline Hocquette, AI researcher at Ndea and former postdoctoral researcher at the University of Oxford, discusses her paper “Relational Decomposition for Program Synthesis”, which introduces a representation-driven approach to inductive program synthesis based on decomposing examples into relational facts.


    The paper emerged from Hocquette’s long-standing engagement with inductive logic programming (ILP), beginning with her doctoral work at Imperial College London under Stephen Muggleton and continuing through her time in Andrew Cropper’s group in Oxford. Motivated by the scalability limits of learning long chains of reasoning, the work reflects a broader intellectual trajectory focused on making symbolic learning systems more efficient by rethinking representation and decomposition rather than adding domain-specific heuristics.


    In This Episode -

    • Inductive logic programming (ILP)

    • Deductive vs. inductive program synthesis

    • Relational vs. functional programs

    • Decomposing examples into logical facts

    • Datasets: ARC-AGI, 1D-ARC, strings, list functions

    • Systems & approaches: POPPER, ARGA, METABIAS, BEN, Hacker-Like


    References -

    • https://github.com/logic-and-learning-lab/Popper

    • https://andrewcropper.com/

    • ARC-AGI - https://arcprize.org/arc-agi

    • 1D-ARC - https://arxiv.org/abs/2305.18354

    • ARGA - https://arxiv.org/abs/2210.09880

    • METABIAS - https://www.doc.ic.ac.uk/~shm/Papers/ECAI-546.pdf

    • BEN - https://arxiv.org/abs/2301.03094

    • Hacker-Like - https://www.nature.com/articles/s41467-024-50966-x


    About the Paper -


    “Relational Decomposition for Program Synthesis”

    Céline Hocquette, Andrew Cropper

    arXiv, 2024


    The paper proposes transforming inductive program synthesis problems into sets of relational input–output facts, allowing systems to learn smaller, reusable logical rules instead of long functional compositions. This decomposition significantly improves scalability and generalization when learning programs from few examples across strings, lists, and ARC-style reasoning tasks.
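    As a rough illustration of the decomposition idea (the encoding below is my own toy example, not the paper's actual representation):

```python
# A toy illustration of relational decomposition: the single functional
# example f("abc") = "ABC" is broken into small relational facts over
# (predicate, example_id, index, char), so a learner can induce one
# reusable per-position rule instead of a whole-string function.
def decompose(example_id, inp, out):
    """Turn one input-output example into a set of relational facts."""
    facts = [("in_char", example_id, i, c) for i, c in enumerate(inp)]
    facts += [("out_char", example_id, i, c) for i, c in enumerate(out)]
    return facts

facts = decompose("ex1", "abc", "ABC")

# A candidate rule in the spirit of ILP: out_char(E, I, U) holds if
# in_char(E, I, C) holds and U is the uppercase of C. Verify it covers
# every output fact.
in_chars = {(e, i): c for (p, e, i, c) in facts if p == "in_char"}
rule_ok = all(in_chars[(e, i)].upper() == c
              for (p, e, i, c) in facts if p == "out_char")
print(len(facts), rule_ok)  # -> 6 True
```

    The payoff is that the rule is small and position-generic, so it transfers to strings of any length, whereas a functional hypothesis would have to reproduce the entire output in one shot.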


    https://arxiv.org/abs/2408.12212


    About the Guest -


    Céline Hocquette, Technical Staff at Ndea, works on program synthesis, inductive logic programming, and symbolic reasoning. She completed her PhD at Imperial College London and previously held a research position at the University of Oxford in Andrew Cropper’s lab. Her work focuses on scalable learning of interpretable programs from small data.


    https://celinehocquette.github.io/


    Credits -

    Host & Music: Bryan Landers, Technical Staff, Ndea

    Editor: Alejandro Ramirez

    https://x.com/ndea

    https://x.com/bryanlanders

    https://ndea.com

    48 mins
  • Symbolic World Models - Top Piriyakulkij
    Jan 26 2026

    Wasu "Top" Piriyakulkij, PhD student at Cornell University advised by Kevin Ellis, discusses his paper "PoE-World: Compositional World Modeling with Products of Programmatic Experts." The episode explores how symbolic, programmatic world models can achieve strong generalization and sample efficiency by composing many small causal programs instead of learning a single monolithic model.

    The conversation traces how PoE-World emerged from earlier work on active concept learning and hypothesis testing, and how object-centric Atari environments became a natural testbed for scaling symbolic world models beyond grid worlds. Piriyakulkij reflects on design failures, surprising successes, and the moment the learned world model became interactive enough to serve as a real-time simulator.


    In This Episode -

    • Symbolic vs. neural world models

    • Products of programmatic experts

    • Modular causal rules as world models

    • Object-centric Atari environments

    • Montezuma’s Revenge as exploration benchmark

    • Sample-efficient learning from demonstrations

    • Weights as expert confidence signals

    • World models as executable simulators

    • Exploration as program testing


    References -

    • WorldCoder - https://arxiv.org/abs/2402.12275

    • Object-Centric Atari - https://arxiv.org/abs/2306.08649v2

    • ARC-AGI-3 - https://arcprize.org

    • VisualPredicator - https://arxiv.org/abs/2410.23156

    • People: Marvin Minsky, François Chollet, Armando Solar-Lezama


    About the Paper -

    "PoE-World: Compositional World Modeling with Products of Programmatic Experts"

    Authors: Wasu Top Piriyakulkij, Yishou Wang, Hao Tang, Martha Lewis, Kevin Ellis

    The paper introduces a symbolic world modeling framework in which many small, interpretable programs - each encoding a simple causal rule - are combined multiplicatively into a probabilistic world model. By learning weights over these programmatic experts from limited demonstrations, the system produces accurate, stochastic simulators that generalize to new environments with minimal data.
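    The multiplicative combination can be sketched as follows (a toy illustration; the names and weighting scheme are my assumptions, not PoE-World's implementation):

```python
# A minimal sketch of a product of experts over candidate next states:
# each "expert" is a small program assigning a probability to a state,
# and the model multiplies weighted expert probabilities, then
# renormalizes across candidates.
import math

def product_of_experts(experts, weights, states):
    """Distribution over states: P(s) proportional to prod_k p_k(s)**w_k."""
    log_scores = [
        sum(w * math.log(max(p(s), 1e-9)) for p, w in zip(experts, weights))
        for s in states
    ]
    z = sum(math.exp(l) for l in log_scores)
    return [math.exp(l) / z for l in log_scores]

# Two toy experts about an object's next x-position: "it moves right"
# and "it stays on screen". Their product concentrates mass on states
# that both experts allow.
moves_right = lambda s: 0.9 if s["x"] == 1 else 0.1
on_screen = lambda s: 0.99 if 0 <= s["x"] < 10 else 0.01
states = [{"x": 1}, {"x": -1}]
dist = product_of_experts([moves_right, on_screen], [1.0, 1.0], states)
print(round(dist[0], 3))  # the state both experts favor dominates
```

    Because the combination is a product, any single expert can veto a state by assigning it near-zero probability, which is what makes many small causal rules compose into one coherent simulator.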

    https://arxiv.org/abs/2505.10819


    About the Guest -

    Wasu Top Piriyakulkij is a PhD student at Cornell University advised by Kevin Ellis. His research focuses on symbolic world models, program synthesis, and human-like learning and exploration in artificial agents. He is particularly interested in how compositional structure enables generalization in complex environments.

    • https://www.cs.cornell.edu/~wp237/

    • https://scholar.google.com/citations?user=nlO1TkkAAAAJ&hl=en


    Credits -

    Host & Music: Bryan Landers, Technical Staff, Ndea

    Editor: Alejandro Ramirez

    https://x.com/ndea

    https://x.com/bryanlanders

    https://ndea.com

    58 mins