
University of Science and Technology Conference Website (English)

Frank Allgöwer, University of Stuttgart


Alessandro Astolfi, Imperial College London

Title: Data-driven model reduction

Abstract: The aim of the talk is to discuss two methods for obtaining reduced order models, for linear and nonlinear systems, from data. In the first part of the talk the notion of moment for linear systems is generalized to nonlinear, possibly time-delay, systems. It is shown that this notion provides a powerful tool for the identification of reduced order models from input-output data. It is also shown that the canonical parameterization of the reduced order model as a rank-one update of the "interpolation-point matrix" is not necessary, hence one can prove robustness of data-driven model reduction algorithms against variations in the location of the interpolation points. In the second part of the talk the Loewner framework for model reduction is discussed and it is shown that the introduction of left and right Loewner matrices/functions simplifies the construction of reduced order models from data. This is joint work with Z. Wang (Southeast University), G. Scarciotti (Imperial College) and J. Simard (Imperial College).
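
As background for the above (a sketch of the standard linear-system definitions only, not of the nonlinear generalizations or the specific parameterizations developed in the talk), the notion of moment and the Loewner matrix can be written as follows. For a linear system \dot{x} = A x + B u, y = C x with transfer function W(s) = C (sI - A)^{-1} B, the k-th moment at an interpolation point s_i (not an eigenvalue of A) is

    \eta_k(s_i) = \frac{(-1)^k}{k!} \left. \frac{d^k W(s)}{d s^k} \right|_{s = s_i}, \qquad \eta_0(s_i) = C (s_i I - A)^{-1} B.

Given right tangential data (\lambda_i, r_i, w_i) with w_i = W(\lambda_i) r_i and left tangential data (\mu_j, \ell_j, v_j) with v_j = \ell_j W(\mu_j), the Loewner matrix has entries

    [\mathbb{L}]_{ji} = \frac{v_j r_i - \ell_j w_i}{\mu_j - \lambda_i}.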

Bio: Alessandro Astolfi was born in Rome, Italy, in 1967. He graduated in electrical engineering from the University of Rome in 1991. In 1992 he joined ETH Zurich, where he obtained an M.Sc. in Information Theory in 1995 and the Ph.D. degree with Medal of Honor in 1995 with a thesis on discontinuous stabilisation of nonholonomic systems. In 1996 he was awarded a Ph.D. from the University of Rome "La Sapienza" for his work on nonlinear robust control. Since 1996 he has been with the Electrical and Electronic Engineering Department of Imperial College London, London (UK), where he is currently Professor of Nonlinear Control Theory and Head of the Control and Power Group. From 1998 to 2003 he was also an Associate Professor at the Dept. of Electronics and Information of the Politecnico di Milano. Since 2005 he has also been a Professor at the Dipartimento di Ingegneria Civile e Ingegneria Informatica, University of Rome Tor Vergata. His research interests are focussed on mathematical control theory and control applications, with special emphasis on the problems of discontinuous stabilisation, robust and adaptive control, observer design and model reduction.


Ben M. Chen (陈本美), The Chinese University of Hong Kong

Title: Fully Autonomous UAS and Its Applications

Abstract: The research and market for unmanned aerial systems (UAS), or drones, have greatly expanded over the last few years. The currently small civilian unmanned aircraft market is expected to become one of the major technological and economic stories of the modern age, owing to the wide variety of possible applications and the added value of this technology. Modern unmanned aerial systems are gaining success because of their versatility, flexibility, low cost, and minimized risk of operation. In this talk, we highlight some key techniques involved in developing fully autonomous unmanned aerial vehicles, together with examples of their industrial applications, including deep tunnel inspection, stock counting and checking in warehouses, and building inspection.

Bio: Ben M. Chen is currently a Professor in the Department of Mechanical and Automation Engineering at the Chinese University of Hong Kong. He was previously a Provost's Chair Professor in the Department of Electrical and Computer Engineering at the National University of Singapore (NUS), where he also served as the Director of the Control, Intelligent Systems and Robotics Area and the Head of the Control Science Group at NUS Temasek Laboratories. His current research interests are in unmanned systems, robust control and control applications.

Dr. Chen is an IEEE Fellow. He has published more than 400 journal and conference articles, and a dozen research monographs, published by Springer in New York and London, in control theory and applications, unmanned systems and financial market modeling. He has served on the editorial boards of several international journals, including IEEE Transactions on Automatic Control and Automatica, and currently serves as an Editor-in-Chief of Unmanned Systems. Dr. Chen has received a number of research awards nationally and internationally. His research team has actively participated in international UAV competitions and has won many championships.


Derong Liu (刘德荣), Guangdong University of Technology

Title: Reinforcement Learning for Optimal Control

Abstract: Reinforcement learning (RL) is one of the most important branches of artificial intelligence. Researchers have been using RL techniques in modern control theory, and self-learning control methodologies are a good representative of such efforts. RL has recently become a major force in the machine learning field. On the other hand, adaptive dynamic programming (ADP) has become popular in the control community. Both RL and ADP have roots in dynamic programming, and in many ways they are equivalent. Major breakthroughs of ADPRL for optimal control were achieved around 2006, when iterative ADP approaches were introduced. The optimal control of nonlinear systems requires solving the nonlinear Bellman equation instead of the Riccati equation that arises in the linear case. The discrete-time Bellman equation is more difficult to work with than the Riccati equation because it involves solving nonlinear partial difference equations. Although dynamic programming has been a useful computational technique for solving optimal control problems, it is often computationally untenable to run it to obtain the optimal solution, due to the backward numerical process required for its solution, i.e., the well-known "curse of dimensionality". Self-learning optimal control based on ADPRL provides efficient tools for tackling the following two problems: (1) the nonlinear Bellman equation is solved using iterative ADP approaches, which are shown to converge; (2) neural networks are employed for function approximation in order to obtain a forward numerical process. Some new developments in ADPRL for optimal control will be summarized.
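
To make the iterative ADP idea concrete, the following is a minimal numerical sketch of value iteration for a discrete-time optimal control problem. The dynamics f, the stage cost U, the grids and all names below are illustrative assumptions (a lookup table over a discretized scalar state stands in for the neural-network value-function approximator used in ADP); this is not the specific algorithm presented in the talk.

# Minimal value-iteration ADP sketch (Python). Start from V_0 = 0, then iterate
# V_{i+1}(x) = min_u [ U(x, u) + V_i(f(x, u)) ] until the iteration converges.
import numpy as np

f = lambda x, u: 0.9 * x + 0.2 * np.sin(x) + u   # illustrative nonlinear dynamics
U = lambda x, u: x**2 + u**2                     # illustrative stage cost

x_grid = np.linspace(-2.0, 2.0, 201)             # discretized state space
u_grid = np.linspace(-1.0, 1.0, 81)              # discretized control set
V = np.zeros_like(x_grid)                        # V_0 = 0

for i in range(500):                             # iterative ADP (value iteration)
    X, Uc = np.meshgrid(x_grid, u_grid, indexing="ij")
    x_next = np.clip(f(X, Uc), x_grid[0], x_grid[-1])
    # Evaluate U(x, u) + V_i(f(x, u)), interpolating V_i at the successor state
    Q = U(X, Uc) + np.interp(x_next.ravel(), x_grid, V).reshape(x_next.shape)
    V_new = Q.min(axis=1)                        # V_{i+1}(x) = min over u
    if np.max(np.abs(V_new - V)) < 1e-6:         # stop once the value function settles
        V = V_new
        break
    V = V_new

# Greedy (approximately optimal) control at a sample state:
x0 = 1.5
x0_next = np.clip(f(x0, u_grid), x_grid[0], x_grid[-1])
q0 = U(x0, u_grid) + np.interp(x0_next, x_grid, V)
print("approximately optimal control at x = 1.5:", u_grid[np.argmin(q0)])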

Bio: Derong Liu received the PhD degree in electrical engineering from the University of Notre Dame in 1994. He became a Full Professor of Electrical and Computer Engineering and of Computer Science at the University of Illinois at Chicago in 2006. He was selected for the “100 Talents Program” by the Chinese Academy of Sciences in 2008, and he served as the Associate Director of the State Key Laboratory of Management and Control for Complex Systems at the Institute of Automation from 2010 to 2015. He has published 19 books. He is the Editor-in-Chief of Artificial Intelligence Review (Springer), and he was the Editor-in-Chief of the IEEE Transactions on Neural Networks and Learning Systems from 2010 to 2015. He is a Fellow of the IEEE, a Fellow of the International Neural Network Society, and a Fellow of the International Association for Pattern Recognition.