Thesis title: Control and Data-driven Operational Strategies for Industry 4.0 and Sustainable City Logistic
Digital transformation and automation affect every sector of society and industry and can significantly increase productivity by assisting decision-makers in complex environments. It is no surprise that the Internet of Things (IoT) and learning-based techniques have been an enormous boon and a key enabler in addressing complex, data-driven, and time-consuming industrial and societal challenges. The cornerstone of these transformations is a solid mathematical foundation, together with new control- and learning-based system design techniques able to keep pace with this fast-moving field. The present thesis focuses on developing a control and data-driven framework for time-critical operations in which the performance of an asset (i.e., equipment, buildings, plants, and machinery) directly impacts business output, as in the manufacturing, energy, services, and transportation sectors. The developed framework can be deployed across disciplines to address user-defined Key Performance Indicators (KPIs). We evaluate it against the most significant KPIs from the manufacturing and transportation sectors by focusing on two specific use cases: the control of an assembly line and of a traffic intersection. The relevant KPIs are linked to the control goals of minimizing the assembly line resource utilization and of optimally controlling the traffic signals to minimize the vehicles' waiting time and balance the queues. Human safety is the central concern for real-time deployment under the proposed KPIs. Given this sensitivity, the problem is first addressed from a control perspective using the Model Predictive Control (MPC) technique, which requires developing a comprehensive mathematical formulation of both the assembly line control problem and the traffic signal control problem.
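As an illustration only, the finite-horizon optimization solved at each MPC step can be sketched in the following generic form; the symbols $x_k$, $u_k$, $f$, $\ell$, $V_f$, $N$, $\mathcal{X}$, and $\mathcal{U}$ are placeholders, and the actual cost terms, dynamics, and constraint sets are those of the assembly line and traffic signal formulations developed in the thesis:
\begin{equation*}
\begin{aligned}
\min_{u_0,\dots,u_{N-1}} \quad & \sum_{k=0}^{N-1} \ell(x_k, u_k) + V_f(x_N) \\
\text{s.t.} \quad & x_{k+1} = f(x_k, u_k), \qquad k = 0,\dots,N-1, \\
& x_k \in \mathcal{X}, \quad u_k \in \mathcal{U}, \\
& x_0 = x(t),
\end{aligned}
\end{equation*}
where $x_k$ denotes the predicted state (e.g., buffer levels or queue lengths), $u_k$ the control input (e.g., task assignments or signal phases), $\ell$ a stage cost encoding the selected KPIs, and $N$ the prediction horizon. Only the first optimal input $u_0^\star$ is applied, and the optimization is repeated at the next sampling instant according to the receding-horizon principle.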
The MPC controller is in charge of interpreting the state of the plant and of the traffic signal and of computing the control inputs that optimize system performance according to the selected KPIs. Secondly, to overcome the limitations of MPC in large scenarios regarding scalability, prediction horizon, and extensive computation time, a Deep Reinforcement Learning (DRL)-based control framework is developed to address both use cases. In this respect, one of the critical challenges for DRL-based control is ensuring constraint satisfaction (e.g., precedence constraints among the tasks, constraints on the resources needed to run tasks, deadlines, signal timings, etc.). The proposed DRL method allows all the constraints to be considered explicitly. Once training is complete, it can be used in real time to dynamically control the assembly line and the traffic intersection. Another motivation for the proposed work is the method's ability to operate in complex scenarios and in the presence of uncertainties. The use of deep neural networks makes it possible to learn a model of the environment, in contrast with, e.g., optimization-based techniques, which require explicitly writing all the equations of the problem model. To speed up the training phase, we adopt a learning scheme in which multiple agents are trained in parallel. Simulations show that the proposed method can provide adequate real-time decision support to industrial operators for scheduling and rescheduling activities, achieving the goal of minimizing the total execution time. To demonstrate the capabilities of the developed algorithm, several simulation scenarios are proposed by varying the number of launch vehicles to be assembled and by introducing disturbances affecting the plant. For the traffic control use case, the results are also validated in Simulation of Urban Mobility (SUMO), a microscopic, continuous, multi-modal traffic simulator. The work is carried out in the context of the EU-funded project Smart European Space Access thru Modern Exploitation of Data Science (SESAME).
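As a minimal sketch (not the thesis implementation), the explicit constraint handling described above can be illustrated by a DQN-style agent that restricts its action selection to a feasibility mask; the network size, state encoding, and mask construction below are illustrative assumptions.

```python
# Minimal sketch, assuming a DQN-style agent: action selection is restricted
# to a boolean feasibility mask so that actions violating known hard
# constraints (task precedence, resource limits, deadlines) are never issued.
import numpy as np
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def select_action(q_net: QNetwork, state: np.ndarray,
                  feasible_mask: np.ndarray, epsilon: float) -> int:
    """Epsilon-greedy selection restricted to feasible actions only."""
    feasible_actions = np.flatnonzero(feasible_mask)
    if np.random.rand() < epsilon:
        return int(np.random.choice(feasible_actions))
    with torch.no_grad():
        q_values = q_net(torch.as_tensor(state, dtype=torch.float32)).numpy()
    q_values[~feasible_mask] = -np.inf  # infeasible actions are never selected
    return int(np.argmax(q_values))
```

Parallel training, as mentioned above, can then be obtained by running several such agents on independent simulation instances and aggregating their experience; the exact aggregation scheme used in the thesis is described in the corresponding chapters.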
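For the traffic control use case, a minimal evaluation loop based on SUMO's TraCI Python interface is sketched below; the configuration file name, traffic-light identifier, and the placeholder control policy are illustrative assumptions rather than the actual experimental setup.

```python
# Minimal sketch of a SUMO/TraCI evaluation loop (illustrative, not the thesis
# code): "intersection.sumocfg" and the traffic-light id "center" are
# hypothetical placeholders.
import traci  # requires SUMO and its Python tools on the path

def evaluate(cfg_file: str = "intersection.sumocfg",
             tls_id: str = "center", steps: int = 3600) -> float:
    """Run one scenario and return the average number of halted vehicles."""
    traci.start(["sumo", "-c", cfg_file])
    lanes = sorted(set(traci.trafficlight.getControlledLanes(tls_id)))
    total_halted = 0
    try:
        for _ in range(steps):
            traci.simulationStep()
            queues = [traci.lane.getLastStepHaltingNumber(l) for l in lanes]
            total_halted += sum(queues)
            # Placeholder policy: here the MPC controller or the trained DRL
            # agent would map the measured queues to the next signal phase,
            # e.g. via traci.trafficlight.setPhase(tls_id, chosen_phase).
    finally:
        traci.close()
    return total_halted / steps
```

The average number of halted vehicles used here is only a proxy for the waiting-time and queue-balancing KPIs targeted in the thesis.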