Plamen P. Angelov[1] is a computer scientist. He is Chair Professor in Intelligent Systems and Director of Research at the School of Computing and Communications of Lancaster University, Lancaster, United Kingdom, and the founding Director of the Lancaster Intelligent, Robotic and Autonomous systems (LIRA) research centre.[2] Angelov served two consecutive terms (2017-2020) as Vice President of the International Neural Networks Society,[3] of which he is now Governor-at-large. He is the founder of the Intelligent Systems Research group and the Data Science group at the School of Computing and Communications. He has also served two terms (2015-2017 and 2022-2024) on the Board of Governors of the IEEE Systems, Man and Cybernetics Society.

Angelov was named Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2016[4] for contributions to neuro-fuzzy and autonomous learning systems. He is also a Fellow of ELLIS[5] and of the IET. He has been a founding co-Editor-in-Chief of the journal Evolving Systems since 2009, and is an associate editor of IEEE Transactions on Cybernetics, IEEE Transactions on Fuzzy Systems, IEEE Transactions on Artificial Intelligence, Complex and Intelligent Systems, and other scientific journals. He is a recipient of the 2020 Dennis Gabor Award,[6] IEEE and INNS awards for Outstanding Contributions (2013, 2017), The Engineer 2008 special award, and others.

Angelov is the author of over 400 publications, including three research monographs (Springer, 2002; Wiley, 2012; Springer Nature, 2012), three granted US patents, over 120 articles in peer-reviewed scientific journals, and over 160 papers in peer-reviewed conference proceedings. These publications have been cited over 15,000 times (Google Scholar, 2023), with an h-index of 63. His research contributions are centred around autonomous learning systems (Wiley, 2012), dynamically self-evolving systems, and the empirical approach to machine learning (Springer Nature, 2012).
Most recently, his research has addressed the problems of interpretability and explainability (xDNN, 2020), catastrophic forgetting, continual learning, adaptability, and the computational and energy costs of deep foundation models across their whole life cycle.