
Naturally Interpretable Control Policies via Graph-Based Genetic Programming

  • Conference paper
Genetic Programming (EuroGP 2024)

Abstract

In most high-risk applications, interpretability is crucial for ensuring system safety and trust. However, existing research often relies on hard-to-understand, highly parameterized models, such as neural networks. In this paper, we focus on the problem of policy search in continuous observation and action spaces. We leverage two graph-based Genetic Programming (GP) techniques, Cartesian Genetic Programming (CGP) and Linear Genetic Programming (LGP), to develop effective yet interpretable control policies. Our experimental evaluation on eight continuous robotic control benchmarks shows competitive results compared to state-of-the-art Reinforcement Learning (RL) algorithms. Moreover, we find that graph-based GP tends to produce small, interpretable graphs even when it is competitive with RL. By examining these graphs, we can explain the discovered policies, paving the way for trustworthy AI in the domain of continuous control.
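To make the two representations concrete, below is a minimal Python sketch of how a CGP genome and an LGP program can each map a continuous observation vector to bounded continuous actions. The function set, the tanh action bounding, and the toy genomes are illustrative assumptions, not the authors' exact encoding (the footnotes point to a JAX-based implementation, cgpax).

```python
import math

# Illustrative function set; the paper's actual operator set is an
# assumption here. Unary operators simply ignore their second argument.
FUNCTIONS = [
    lambda a, b: a + b,                          # 0: add
    lambda a, b: a - b,                          # 1: sub
    lambda a, b: a * b,                          # 2: mul
    lambda a, b: a / b if abs(b) > 1e-6 else a,  # 3: protected div
    lambda a, b: math.sin(a),                    # 4: sin (unary)
    lambda a, b: math.tanh(a),                   # 5: tanh (unary)
]

def eval_cgp(genome, outputs, observation):
    """CGP: each node (f_id, a, b) reads program inputs or earlier nodes,
    so evaluation is a single feed-forward pass over the graph."""
    values = list(observation)           # slots 0..n_in-1 hold the inputs
    for f_id, a, b in genome:
        values.append(FUNCTIONS[f_id](values[a], values[b]))
    return [math.tanh(values[i]) for i in outputs]  # squash to [-1, 1]

def eval_lgp(program, n_registers, observation, out_registers):
    """LGP: an imperative register machine; instructions
    (dest, f_id, a, b) overwrite registers in sequence."""
    regs = [0.0] * n_registers
    regs[:len(observation)] = observation        # pre-load observations
    for dest, f_id, a, b in program:
        regs[dest] = FUNCTIONS[f_id](regs[a], regs[b])
    return [math.tanh(regs[i]) for i in out_registers]

# Hypothetical 2-observation, 1-action policy in both encodings:
obs = [0.5, -0.2]
cgp_genome = [(2, 0, 1), (4, 2, 2), (0, 3, 0)]  # n2=o0*o1, n3=sin(n2), n4=n3+o0
print(eval_cgp(cgp_genome, outputs=[4], observation=obs))
lgp_prog = [(2, 2, 0, 1), (2, 4, 2, 2), (2, 0, 2, 0)]  # same computation, in r2
print(eval_lgp(lgp_prog, n_registers=3, observation=obs, out_registers=[2]))
```

Both toy programs compute the same expression, tanh(sin(o0 * o1) + o0), which illustrates why such policies remain human-readable: the whole controller is a handful of arithmetic nodes rather than thousands of neural network weights.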


Notes

  1. https://giorgia-nadizar.github.io/cgpax//hopper_cgp_flying

  2. https://giorgia-nadizar.github.io/cgpax/swimmer_cgp
     https://giorgia-nadizar.github.io/cgpax/swimmer_lgp


Acknowledgements

The paper is based upon work from a scholarship supported by SPECIES (http://species-society.org), the Society for the Promotion of Evolutionary Computation in Europe and its Surroundings. This study was carried out within the PNRR research activities of the consortium iNEST (Interconnected Nord-Est Innovation Ecosystem) funded by the European Union NextGenerationEU (Piano Nazionale di Ripresa e Resilienza (PNRR) - Missione 4 Componente 2, Investimento 1.5 - D.D. 1058 23/06/2022, ECS_00000043).

Author information


Corresponding author

Correspondence to Eric Medvet.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Nadizar, G., Medvet, E., Wilson, D.G. (2024). Naturally Interpretable Control Policies via Graph-Based Genetic Programming. In: Giacobini, M., Xue, B., Manzoni, L. (eds) Genetic Programming. EuroGP 2024. Lecture Notes in Computer Science, vol 14631. Springer, Cham. https://doi.org/10.1007/978-3-031-56957-9_5

  • DOI: https://doi.org/10.1007/978-3-031-56957-9_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-56956-2

  • Online ISBN: 978-3-031-56957-9

  • eBook Packages: Computer Science, Computer Science (R0)
