
Using FPGA Devices to Accelerate Tree-Based Genetic Programming: A Preliminary Exploration with Recent Technologies

  • Conference paper
  • In: Genetic Programming (EuroGP 2023)

Abstract

In this paper, we explore the prospect of accelerating tree-based genetic programming (TGP) with modern field-programmable gate array (FPGA) devices, motivated by the fact that FPGAs can sometimes offer greater data/function parallelism and better energy efficiency than general-purpose CPU/GPU systems. In this preliminary study, we introduce a fixed-depth, tree-based architecture that evaluates type-consistent primitives and can be fully unrolled and pipelined. The current primitive constraints preclude arbitrary control structures, but they allow entire programs to be evaluated every clock cycle. Using a variety of floating-point primitives and random programs, we compare against the recent TensorGP tool executing on a modern 8 nm GPU, and we show that our accelerator, implemented on a 14 nm FPGA, achieves an average speedup of 43×. Compared to the popular baseline tool DEAP executing across all cores of a 2-socket, 28-core (56-thread), 14 nm CPU server, our accelerator achieves an average speedup of 4,902×. Finally, compared to the recent state-of-the-art tool Operon executing on the same 2-processor CPU system, our accelerator runs about 2.4× slower on average. Although it does not achieve an average speedup over every tool tested, our single-FPGA accelerator is the fastest in several instances, and we describe five future extensions that could yield a 32–144× speedup over the current design while also allowing larger program depths/sizes. Overall, we estimate that a future version of our accelerator will constitute a state-of-the-art GP system for many applications.

This material is based upon work supported by the National Science Foundation under Grant Nos. CNS-1718033 and CCF-1909244.
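
To make the evaluation model concrete, the following Python sketch (an illustration only, not the authors' hardware design; all identifiers are hypothetical) evaluates a perfect binary tree of fixed depth one level at a time, with every node applying a type-consistent floating-point primitive. On an FPGA, each level would correspond to one stage of a fully unrolled, pipelined datapath, so a new fitness case can enter, and a finished program output can leave, every clock cycle.

    import operator

    # Hypothetical primitive set: type-consistent, all mapping floats to a float.
    PRIMITIVES = {
        "add": operator.add,
        "sub": operator.sub,
        "mul": operator.mul,
        "div": lambda a, b: a / b if b != 0.0 else 1.0,  # protected division
    }

    def evaluate_fixed_depth_tree(levels, leaves):
        """Evaluate a perfect binary tree level by level.

        `leaves` holds the 2**depth terminal values for one fitness case;
        `levels` lists the internal-node primitives, deepest level first.
        Each loop iteration mirrors one stage of a fully unrolled,
        pipelined datapath.
        """
        values = list(leaves)
        for level in levels:
            values = [PRIMITIVES[op](values[2 * i], values[2 * i + 1])
                      for i, op in enumerate(level)]
        return values[0]  # root output

    # Depth-2 example: (x + y) * (x - 1.0) with x = 2.0, y = 3.0.
    print(evaluate_fixed_depth_tree([["add", "sub"], ["mul"]],
                                    [2.0, 3.0, 2.0, 1.0]))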

Notes

  1. Software: https://github.com/christophercrary/conference-eurogp-2023.

  2. Frequently, the statistic of GP operations per second (GPops) is used when comparing the runtime performance of GP tools, but we use NEPS to emphasize that our runtimes do not include time taken for evolution; a sketch of one way this statistic can be computed appears after these notes.

  3. A possible exception could occur when dealing with timing optimization, since the resulting clock frequency may unexpectedly improve or degrade with design changes.
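
One plausible way to compute node evaluations per second (NEPS), assuming it counts every primitive-node evaluation over every fitness case and divides by evaluation time only (the paper's exact accounting may differ), is

\[
\mathrm{NEPS} \;=\; \frac{F \cdot \sum_{p \in P} |p|}{t_{\mathrm{eval}}},
\]

where \(P\) is the set of evaluated programs, \(|p|\) is the number of nodes in program \(p\), \(F\) is the number of fitness cases, and \(t_{\mathrm{eval}}\) is the wall-clock time spent on fitness evaluation alone (i.e., excluding selection, crossover, and mutation).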

References

  1. Baeta, F., Correia, J., Martins, T., Machado, P.: Exploring genetic programming in TensorFlow with TensorGP. SN Comput. Sci. 3(2), 1–16 (2022). https://doi.org/10.1007/s42979-021-01006-8

  2. Banzhaf, W., Harding, S., Langdon, W.B., Wilson, G.: Accelerating genetic programming through graphics processing units. In: Worzel, B., Soule, T., Riolo, R. (eds.) Genetic Programming Theory and Practice VI. Genetic and Evolutionary Computation, pp. 1–19. Springer, Boston (2009). https://doi.org/10.1007/978-0-387-87623-8_15

  3. Burlacu, B., Kronberger, G., Kommenda, M.: Operon C++: an efficient genetic programming framework for symbolic regression. In: Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, GECCO 2020, pp. 1562–1570. Association for Computing Machinery, New York (2020). https://doi.org/10.1145/3377929.3398099

  4. Chitty, D.M.: Fast parallel genetic programming: multi-core CPU versus many-core GPU. Soft. Comput. 16(10), 1795–1814 (2012). https://doi.org/10.1007/s00500-012-0862-0

  5. Chitty, D.M.: Faster GPU-based genetic programming using a two-dimensional stack. Soft. Comput. 21(14), 3859–3878 (2016). https://doi.org/10.1007/s00500-016-2034-0

  6. Fortin, F.A., De Rainville, F.M., Gardner, M.A.G., Parizeau, M., Gagné, C.: DEAP: evolutionary algorithms made easy. J. Mach. Learn. Res. 13(1), 2171–2175 (2012)

  7. Funie, A.-I., Grigoras, P., Burovskiy, P., Luk, W., Salmon, M.: Run-time reconfigurable acceleration for genetic programming fitness evaluation in trading strategies. J. Signal Process. Syst. 90(1), 39–52 (2017). https://doi.org/10.1007/s11265-017-1244-8

  8. Goribar-Jimenez, C., Maldonado, Y., Trujillo, L., Castelli, M., Gonçalves, I., Vanneschi, L.: Towards the development of a complete GP system on an FPGA using geometric semantic operators. In: 2017 IEEE Congress on Evolutionary Computation (CEC), pp. 1932–1939 (2017). https://doi.org/10.1109/CEC.2017.7969537

  9. Hennessy, J.L., Patterson, D.A.: Computer Architecture: A Quantitative Approach, 6th edn. Morgan Kaufmann Publishers Inc., San Francisco (2017)

  10. Hooker, S.: The hardware lottery. Commun. ACM 64(12), 58–65 (2021). https://doi.org/10.1145/3467017

  11. Intel: Intel Agilex™ M-Series FPGA and SoC FPGA Product Table (2015). https://cdrdv2.intel.com/v1/dl/getContent/721636

  12. Koza, J.R.: Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge (1992)

  13. La Cava, W., et al.: Contemporary symbolic regression methods and their relative performance. In: Vanschoren, J., Yeung, S. (eds.) Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, vol. 1 (2021)

  14. Langdon, W.B., Banzhaf, W.: A SIMD interpreter for genetic programming on GPU graphics cards. In: O’Neill, M., et al. (eds.) EuroGP 2008. LNCS, vol. 4971, pp. 73–85. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78671-9_7

  15. Martin, P.: A hardware implementation of a genetic programming system using FPGAs and Handel-C. Genet. Program Evolvable Mach. 2(4), 317–343 (2001). https://doi.org/10.1023/A:1012942304464

  16. Miller, J.F.: Cartesian genetic programming: its status and future. Genet. Program Evolvable Mach. 21(1), 129–168 (2020). https://doi.org/10.1007/s10710-019-09360-6

  17. Nicolau, M., Agapitos, A.: Choosing function sets with better generalisation performance for symbolic regression models. Genet. Program Evolvable Mach. 22(1), 73–100 (2020). https://doi.org/10.1007/s10710-020-09391-4

  18. Nurvitadhi, E., et al.: Can FPGAs beat GPUs in accelerating next-generation deep neural networks? In: Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA 2017, pp. 5–14. Association for Computing Machinery, New York (2017). https://doi.org/10.1145/3020078.3021740

  19. Poli, R., Langdon, W.B., McPhee, N.F.: A Field Guide to Genetic Programming. Lulu Enterprises Ltd., UK (2008)

  20. Putnam, A., et al.: A reconfigurable fabric for accelerating large-scale datacenter services. IEEE Micro 35(3), 10–22 (2015). https://doi.org/10.1109/MM.2015.42

  21. Robilliard, D., Marion-Poty, V., Fonlupt, C.: Genetic programming on graphics processing units. Genet. Program Evolvable Mach. 10(4), 447–471 (2009). https://doi.org/10.1007/s10710-009-9092-3

  22. Sidhu, R.P.S., Mei, A., Prasanna, V.K.: Genetic programming using self-reconfigurable FPGAs. In: Lysaght, P., Irvine, J., Hartenstein, R. (eds.) FPL 1999. LNCS, vol. 1673, pp. 301–312. Springer, Heidelberg (1999). https://doi.org/10.1007/978-3-540-48302-1_31

  23. Stitt, G., Gupta, A., Emas, M.N., Wilson, D., Baylis, A.: Scalable window generation for the Intel Broadwell+Arria 10 and high-bandwidth FPGA systems. In: Proceedings of the 2018 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA 2018, pp. 173–182. Association for Computing Machinery (2018). https://doi.org/10.1145/3174243.3174262

  24. Tan, T., Nurvitadhi, E., Shih, D., Chiou, D.: Evaluating the highly-pipelined Intel Stratix 10 FPGA architecture using open-source benchmarks. In: 2018 International Conference on Field-Programmable Technology (FPT), pp. 206–213 (2018). https://doi.org/10.1109/FPT.2018.00038

  25. Veeramachaneni, K., Arnaldo, I., Derby, O., O’Reilly, U.-M.: FlexGP. J. Grid Comput. 13(3), 391–407 (2014). https://doi.org/10.1007/s10723-014-9320-9

  26. Wilson, D., Stitt, G.: The unified accumulator architecture: a configurable, portable, and extensible floating-point accumulator. ACM Trans. Reconfigurable Technol. Syst. 9(3) (2016). https://doi.org/10.1145/2809432

  27. Wright, L.G., et al.: Deep physical neural networks trained with backpropagation. Nature 601(7894), 549–555 (2022). https://doi.org/10.1038/s41586-021-04223-6

  28. Yao, X.: Following the path of evolvable hardware. Commun. ACM 42(4), 46–49 (1999). https://doi.org/10.1145/299157.299169

Author information

Corresponding author: Christopher Crary.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Crary, C., Piard, W., Stitt, G., Bean, C., Hicks, B. (2023). Using FPGA Devices to Accelerate Tree-Based Genetic Programming: A Preliminary Exploration with Recent Technologies. In: Pappa, G., Giacobini, M., Vasicek, Z. (eds) Genetic Programming. EuroGP 2023. Lecture Notes in Computer Science, vol 13986. Springer, Cham. https://doi.org/10.1007/978-3-031-29573-7_12

  • DOI: https://doi.org/10.1007/978-3-031-29573-7_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-29572-0

  • Online ISBN: 978-3-031-29573-7

  • eBook Packages: Computer Science, Computer Science (R0)