Keynote Speakers

Panagiotis Papapetrou, Prof.

Dept. of Computer and Systems Sciences
Stockholm University

Brief Bio

  • April 2017 – present: Professor, Department of Computer and Systems Sciences, Faculty of Social Sciences, Stockholm University, Sweden
  • December 2013 – March 2017: Associate Professor, Department of Computer and Systems Sciences, Faculty of Social Sciences, Stockholm University, Sweden
  • September 2013 – November 2013: Senior Lecturer (tenured), Department of Computer and Systems Sciences, Faculty of Social Sciences, Stockholm University, Sweden
  • September 2012 – November 2013: Lecturer and director of the IT Applications Programme, Department of Computer Science and Information Systems, School of Business-Economics-Informatics, Birkbeck, University of London, UK
  • September 2009 – August 2012: Postdoctoral Researcher at the Department of Computer Science, Aalto University, Finland
  • June 2009: Received Ph.D. in Computer Science
  • September 2006: Received M.A. in Computer Science
  • January 2004: Admitted to the MA/PhD Program of the Department of Computer Science at Boston University, USA
  • June 2003: Received B.Sc. in Computer Science
  • September 1999: Admitted to the Department of Computer Science, University of Ioannina, Greece
  • September 1982: Moved to the city of Ioannina, my home town (in northwestern Greece)
  • July 1981: Born in the city of Patras, Greece

Abstract

Learning from Electronic Health Records: from temporal abstractions to time series interpretability

The first part of the talk will focus on data mining methods for learning from Electronic Health Records (EHRs), which are typically perceived as big and complex patient data sources. Using these data, scientists strive to predict patients' progress, to understand and predict response to therapy, to detect adverse drug effects, and to address many other learning tasks. Medical researchers are also interested in learning from cohorts of population-based studies and experiments. Learning tasks include the identification of disease predictors that can lead to new diagnostic tests and the acquisition of insights on interventions. The talk will elaborate on data sources, methods, and case studies in medical mining.

The second part of the talk will tackle the issue of interpretability and explainability of opaque machine learning models, with a focus on time series classification. Time series classification has received great attention over the past decade, with a wide range of methods focusing on predictive performance by exploiting various types of temporal features. Nonetheless, little emphasis has been placed on interpretability and explainability. This talk will formulate the novel problem of explainable time series tweaking: given a time series and an opaque classifier that provides a particular classification decision for that time series, the objective is to find the minimum number of changes to perform on the time series so that the classifier changes its decision to another class. Moreover, it will be shown that the problem is NP-hard. Two instantiations of the problem will be presented. The classifier under investigation will be the random shapelet forest classifier. Two algorithmic solutions for the two problem instantiations will be presented along with simple optimizations, as well as a baseline solution using the nearest neighbor classifier.
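As a rough illustration of the tweaking idea, here is a minimal sketch of the nearest-neighbor baseline mentioned above, using a generic scikit-learn classifier rather than the random shapelet forest studied in the talk; the toy data, step count, and interpolation scheme are illustrative assumptions only, not the authors' algorithms.

    # Hedged sketch (not the authors' implementation): nearest-neighbour baseline
    # for time series tweaking with a generic classifier standing in for the
    # random shapelet forest.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def nn_tweak(x, X_train, y_train, clf, target_class, steps=20):
        """Move x toward its nearest training neighbour of target_class until
        clf changes its decision; return the first tweaked series that flips."""
        candidates = X_train[y_train == target_class]
        dists = np.linalg.norm(candidates - x, axis=1)
        nn = candidates[np.argmin(dists)]              # nearest example of the target class
        for alpha in np.linspace(0.0, 1.0, steps):
            x_tweaked = (1 - alpha) * x + alpha * nn   # interpolate toward the neighbour
            if clf.predict(x_tweaked.reshape(1, -1))[0] == target_class:
                return x_tweaked                       # smallest interpolation that flips the label
        return nn                                      # fall back to the neighbour itself

    # toy usage: two noisy sine classes of length 50 (illustrative data only)
    rng = np.random.default_rng(0)
    t = np.linspace(0, 2 * np.pi, 50)
    X = np.vstack([np.sin(t) + 0.1 * rng.standard_normal((100, 50)),
                   np.sin(2 * t) + 0.1 * rng.standard_normal((100, 50))])
    y = np.array([0] * 100 + [1] * 100)
    clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
    x_new = nn_tweak(X[0], X, y, clf, target_class=1)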

Plamen Angelov, Prof.

Dept. of Computing and Communications
University of Lancaster

Brief Bio

Prof. Angelov (MEng 1989, PhD 1993, DSc 2015) is a Fellow of the IEEE, of the IET and of the HEA. His PhD supervisor, Dr. Dimitar P. Filev, is now a Member of the National Academy of Engineering, USA. Professor Angelov is Vice President of the International Neural Networks Society (INNS) for Conferences and a Governor of the IEEE Systems, Man and Cybernetics Society. He has 30 years of professional experience in high-level research and holds a Personal Chair in Intelligent Systems at Lancaster University, UK. In 2010 he founded the Intelligent Systems Research group, which he led until 2014, when he founded the Data Science group at the School of Computing and Communications; before going on sabbatical in 2017 he established the LIRA (Lancaster Intelligent, Robotic and Autonomous systems) Research Centre (www.lancaster.ac.uk/lira), which includes over 30 academics across different Faculties and Departments of the University. He is a founding member of the Data Science Institute and of the CyberSecurity Academic Centre of Excellence at Lancaster. He has authored or co-authored 300 peer-reviewed publications in leading journals and peer-reviewed conference proceedings, 6 patents, and 3 research monographs (by Wiley, 2012, and Springer, 2002 and 2018), cited over 7600 times, with an h-index of 42 and an i10-index of 123. His single most cited paper has 850 citations. He has an active research portfolio in the area of computational intelligence and machine learning, with internationally recognised results in online and evolving learning and in algorithms for knowledge extraction in the form of human-intelligible fuzzy rule-based systems. Prof. Angelov leads numerous projects (including several multimillion ones) funded by UK research councils, the EU, industry, and the UK MoD. His research was recognised by ‘The Engineer Innovation and Technology 2008 Special Award’ and by ‘For outstanding Services’ awards (2013) from the IEEE and the INNS. He is also the founding co-Editor-in-Chief of Springer’s journal Evolving Systems and an Associate Editor of several leading international scientific journals, including the IEEE Transactions on Fuzzy Systems (the IEEE Transactions with the highest impact factor) and the IEEE Transactions on Systems, Man and Cybernetics, as well as several other journals such as Applied Soft Computing, Fuzzy Sets and Systems, Soft Computing, etc. He has given over a dozen plenary and keynote talks at high-profile conferences. Prof. Angelov was General co-Chair of a number of high-profile conferences, including IJCNN 2013, Dallas, TX; IJCNN 2015, Killarney, Ireland; the inaugural INNS Conference on Big Data, San Francisco; the 2nd INNS Conference on Big Data, Thessaloniki, Greece; and a series of annual IEEE Symposia on Evolving and Adaptive Intelligent Systems. Dr Angelov is the founding Chair of the Technical Committee on Evolving Intelligent Systems of the IEEE SMC Society and previously chaired the Standards Committee of the IEEE Computational Intelligence Society (2010-2012). He has also been a member of the International Program Committee of over 100 international conferences (primarily IEEE). More details can be found at www.lancs.ac.uk/staff/angelov

Abstract

Empirical Approach: How to get Fast, Interpretable Deep Learning 

We are witnessing an explosion of data (streams) being generated and growing exponentially. Nowadays we carry in our pockets gigabytes of data in the form of USB flash memory sticks, smartphones, smartwatches, etc.
Extracting useful information and knowledge from these big data streams is of immense importance for society, the economy and science. Deep Learning has quickly become synonymous with a powerful method for endowing items and processes with elements of AI, in the sense that it makes human-like performance in recognising images and speech possible. However, the currently used methods for deep learning, which are based on neural networks (recurrent, belief, etc.), are opaque (not transparent), require huge amounts of training data and computing power (hours of training using GPUs), and are offline; their online versions based on reinforcement learning have no proven convergence and do not guarantee the same result for the same input (they lack repeatability).

The speaker recently introduced a new concept of an empirical approach to machine learning and to fuzzy sets and systems, proved convergence for a class of such models, and exploited the link between neural networks and fuzzy systems (neuro-fuzzy systems are known to exhibit a duality between radial basis function (RBF) networks and fuzzy rule-based models, with the key property of universal approximation proven for both).

In this talk he will present in a systematic way the basics of the newly introduced Empirical Approach to Machine Learning, Fuzzy Sets and Systems and its applications to problems such as anomaly detection, clustering, classification, prediction and control. The major advantages of this new paradigm are the liberation from the restrictive and often unrealistic assumptions and requirements concerning the nature of the data (random, deterministic, fuzzy), the need to formulate and assume a priori the type of distribution models and membership functions, the independence of the individual data observations, their large (theoretically infinite) number, etc.

From a pragmatic point of view, this direct approach from data (streams) to complex, layered model representations is fully automated and leads to very efficient model structures. In addition, the proposed new concept learns in a way similar to the way people learn: it can start from a single example. The reason the proposed new approach makes this possible is that it is prototype-based and non-parametric.
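To give a feel for what prototype-based, non-parametric learning from a single example can look like, here is a minimal sketch; it is not the speaker's published algorithms, just a toy nearest-prototype learner in which the distance threshold, labels and data are illustrative assumptions.

    # Hedged sketch: a toy prototype-based, non-parametric classifier that can
    # start from a single labelled example per class (illustrative only).
    import numpy as np

    class PrototypeClassifier:
        def __init__(self, new_prototype_radius=1.0):
            self.prototypes = []                  # list of (vector, label) pairs
            self.radius = new_prototype_radius    # assumed threshold for adding prototypes

        def learn_one(self, x, label):
            x = np.asarray(x, dtype=float)
            same = [p for p, l in self.prototypes if l == label]
            # add a new prototype if the sample is far from all prototypes of its class
            if not same or min(np.linalg.norm(x - p) for p in same) > self.radius:
                self.prototypes.append((x, label))

        def predict(self, x):
            x = np.asarray(x, dtype=float)
            dists = [(np.linalg.norm(x - p), l) for p, l in self.prototypes]
            return min(dists)[1]                  # label of the nearest prototype

    clf = PrototypeClassifier(new_prototype_radius=0.5)
    clf.learn_one([0.0, 0.0], "background")       # a single example is enough to start
    clf.learn_one([1.0, 1.0], "anomaly")
    print(clf.predict([0.9, 1.1]))                # -> "anomaly"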

References:

[1] P. Angelov, X. Gu, Empirical Approach to Machine Learning, Studies in Computational Intelligence, vol. 800, ISBN 978-3-030-02383-6, Springer, Cham, Switzerland, 2018.

[2] P. P. Angelov, X. Gu, Deep rule-based classifier with human-level performance and characteristics, Information Sciences, vol. 463-464, pp. 196-213, Oct. 2018.

[3] P. Angelov, X. Gu, J. Principe, Autonomous learning multi-model systems from data streams, IEEE Transactions on Fuzzy Systems, 26(4): 2213-2224, Aug. 2018.

[4] P. Angelov, X. Gu, J. Principe, A generalized methodology for data analysis, IEEE Transactions on Cybernetics, 48(10): 2981-2993, Oct. 2018.

[5] P. Angelov, X. Gu, MICE: Multi-layer multi-model images classifier ensemble, in IEEE International Conference on Cybernetics (CYBCONF), Exeter, UK, 2017, pp. 1-8.

[6] X. Gu, P. Angelov, C. Zhang, P. Atkinson, A massively parallel deep rule-based ensemble classifier for remote sensing scenes, IEEE Geoscience and Remote Sensing Letters, vol. 15(3), pp. 345-349, 2018.

[7] X. Gu, P. Angelov, Semi-supervised deep rule-based approach for image classification, Applied Soft Computing, vol. 68, pp. 53-68, July 2018.

[8] P. Angelov, Autonomous Learning Systems: From Data Streams to Knowledge in Real time, John Wiley and Sons, Dec. 2012, ISBN: 978-1-1199-5152-0.

Evangelos Eleftheriou, Dr.

IBM Fellow, Cloud & Computing Infrastructure,
Zurich Research Laboratory, Zurich, Switzerland

Brief Bio

Evangelos Eleftheriou received a B.S. degree in Electrical Engineering from the University of Patras, Greece, in 1979, and M.Eng. and Ph.D. degrees in Electrical Engineering from Carleton University, Ottawa, Canada, in 1981 and 1985, respectively. In 1986, he joined the IBM Research – Zurich Laboratory in Rüschlikon, Switzerland, as a Research Staff Member. After serving as head of the Cloud and Computing Infrastructure department of IBM Research – Zurich for many years, Dr. Eleftheriou returned to a research position in 2018 to strengthen his focus on neuromorphic computing and to coordinate the Zurich Lab's activities with the global Research efforts in this field.

His research interests focus on enterprise solid-state storage, storage for big data, neuromorphic computing, and non-von Neumann computing architecture and technologies in general. He has authored or coauthored about 200 publications, and holds over 160 patents (granted and pending applications).

In 2002, he became a Fellow of the IEEE. He was co-recipient of the 2003 IEEE Communications Society Leonard G. Abraham Prize Paper Award. He was also co-recipient of the 2005 Technology Award of the Eduard Rhein Foundation. In 2005, he was appointed IBM Fellow for his pioneering work in recording and communications techniques, which established new standards of performance in hard disk drive technology. In the same year, he was also inducted into the IBM Academy of Technology. In 2009, he was co-recipient of the IEEE CSS Control Systems Technology Award and of the IEEE Transactions on Control Systems Technology Outstanding Paper Award. In 2016, he received an honoris causa professorship from the University of Patras, Greece.

In 2018, he was inducted as a foreign member into the National Academy of Engineering for his contributions to digital storage and nanopositioning technologies, as implemented in hard disk, tape, and phase-change memory storage systems.

Abstract

“In-memory Computing”: Accelerating AI Applications

In today’s computing systems based on the conventional von Neumann architecture, there are distinct memory and processing units. Performing computations results in a significant amount of data being moved back and forth between the physically separated memory and processing units. This costs time and energy, and constitutes an inherent performance bottleneck. It is becoming increasingly clear that for application areas such as AI (and indeed cognitive computing in general), we need to transition to computing architectures in which memory and logic coexist in some form. Brain-inspired neuromorphic computing and the fascinating new area of in-memory computing or computational memory are two key non-von Neumann approaches being researched. A critical requirement in these novel computing paradigms is a very-high-density, low-power, variable-state, programmable and non-volatile nanoscale memory device. There are many examples of such nanoscale memory devices in which the information is stored either as charge or as resistance. One particular example is phase-change memory (PCM) devices, which are very well suited to addressing this need, owing to their multi-level storage capability and potential scalability.


In in-memory computing, the physics of the nanoscale memory devices, as well as the organization of such devices in cross-bar arrays, are exploited to perform certain computational tasks within the memory unit. I will present how computational memories accelerate AI applications and will show small- and large-scale experimental demonstrations that perform high-level computational primitives, such as ultra-low-power inference engines, optimization solvers including compressed sensing and sparse coding, linear solvers and temporal correlation detection. Moreover, I will discuss the efficacy of this approach to efficiently address not only inferencing but also training of deep neural networks. The results show that this co-existence of computation and storage at the nanometer scale could be the enabler for new, ultra-dense, low-power, and massively parallel computing systems.  Thus, by augmenting conventional computing systems, in-memory computing could help achieve orders of magnitude improvement in performance and efficiency.
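As a rough illustration of the crossbar principle behind in-memory computing (not IBM's PCM hardware or tooling), the sketch below simulates an idealised resistive crossbar that computes a matrix-vector product via Ohm's and Kirchhoff's laws; the array size, read-noise level and differential positive/negative weight encoding are illustrative assumptions.

    # Hedged sketch: an idealised crossbar simulation. Weights are stored as device
    # conductances G; applying voltages V to the rows yields column currents
    # I = G^T V, so the multiply happens "in memory" rather than in a CPU.
    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.standard_normal((64, 32))                 # target weight matrix
    g_max = np.abs(W).max()
    G_pos = np.clip(W, 0, None) / g_max               # positive weights on one device array
    G_neg = np.clip(-W, 0, None) / g_max              # negative weights on a paired array

    def crossbar_matvec(x, noise=0.02):
        V = x                                          # input encoded as row voltages
        read_noise = 1 + noise * rng.standard_normal(G_pos.shape)
        I = (G_pos * read_noise).T @ V - (G_neg * read_noise).T @ V
        return g_max * I                               # rescale currents back to weight units

    x = rng.standard_normal(64)
    print(np.allclose(crossbar_matvec(x, noise=0.0), W.T @ x))   # exact when noise-free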

John Oommen, Dr.

 Chancellor’s Professor; Life Fellow: IEEE; Fellow: IAPR
School of Computer Science, Carleton University, Ottawa, Canada 

Brief Bio

Dr. John Oommen was born in Coonoor, India on September 9, 1953. He obtained his B.Tech. degree from the Indian Institute of Technology, Madras, India in 1975. He obtained his M.E. from the Indian Institute of Science in Bangalore, India in 1977. He then went on for his M.S. and Ph.D., which he obtained from Purdue University in West Lafayette, Indiana in 1979 and 1982, respectively. He joined the School of Computer Science at Carleton University in Ottawa, Canada, in the 1981-82 academic year. He is still at Carleton and holds the rank of Full Professor. In July 2006, he was awarded the honorary rank of Chancellor's Professor, which is a lifetime award from Carleton University. His research interests include Automata Learning, Adaptive Data Structures, Statistical and Syntactic Pattern Recognition, Stochastic Algorithms and Partitioning Algorithms. He is the author of more than 465 refereed journal and conference publications, and is a Life Fellow of the IEEE and a Fellow of the IAPR. Dr. Oommen has also served on the Editorial Board of the IEEE Transactions on Systems, Man and Cybernetics, and Pattern Recognition.

Abstract

The Power of the “Pursuit” Learning Paradigm in the Partitioning of Data

Traditional Learning Automata (LA) work with the understanding that actions are chosen purely based on the “state” in which the machine is. This modus operandi completely ignores any estimation of the Random Environment’s reward/penalty probabilities. To take these into consideration, Estimator/Pursuit LA utilize “cheap” estimates of the Environment’s reward probabilities, making them converge an order of magnitude faster. The concept is quite simply the following: inexpensive estimates of the reward probabilities can be used to rank the actions. Thereafter, when the action probability vector has to be updated, it is done not on the basis of the Environment’s response alone, but also based on the ranking of these estimates. While this phenomenon has been utilized in the field of LA, until recently it had not been incorporated into solutions to partitioning problems, which constitute a very powerful AI concept applicable to large data, massive databases, etc. In this talk, we will present a complete survey of how the “Pursuit” learning paradigm can be and has been used in Object Partitioning. The results demonstrate that incorporating this paradigm can hasten the partitioning by an order of magnitude. This is joint work with the speaker’s doctoral student, Dr. Abdolreza Shirvani.
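For readers unfamiliar with the pursuit paradigm, the sketch below shows a classical pursuit learning automaton in which the action-probability vector is moved toward the action with the best current reward estimate; it is an illustrative toy, not the object-partitioning algorithms surveyed in the talk, and the learning rate, step count and reward probabilities are assumed values.

    # Hedged sketch of a classical pursuit learning automaton: estimate reward
    # probabilities cheaply, then "pursue" the action with the best estimate.
    import numpy as np

    def pursuit_la(reward_probs, steps=5000, lam=0.01, seed=0):
        rng = np.random.default_rng(seed)
        r = len(reward_probs)
        p = np.full(r, 1.0 / r)           # action probability vector
        totals = np.ones(r)               # optimistic initial reward totals
        counts = np.full(r, 2.0)          # pretend each action was already tried twice
        for _ in range(steps):
            a = rng.choice(r, p=p)                                  # choose an action
            beta = 1.0 if rng.random() < reward_probs[a] else 0.0   # environment response
            totals[a] += beta
            counts[a] += 1
            best = int(np.argmax(totals / counts))                  # rank actions by estimated reward
            e_best = np.zeros(r)
            e_best[best] = 1.0
            p = (1 - lam) * p + lam * e_best                        # pursue the best-estimated action
        return p

    print(pursuit_la([0.2, 0.5, 0.8]).round(3))   # typically converges toward the last action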
