PAST DEPARTMENT SEMINARS

Fall 2021 Seminars:

 
Wednesday, September 29, 2021 at 4:25pm 
Speaker: Dr. Yu Yang
Cyber-Physical Systems for Smart Cities
Zoom 

 

Abstract: In this talk, I will introduce our work on the foundations and applications of Cyber-Physical Systems for Smart Cities. For any city-scale system, obtaining the accurate real-time status of entities of interest (e.g., real-time locations of vehicles, devices, and users) is essential for downstream applications. However, existing status-sensing approaches in both industry and academia have limited scalability to the city level due to cost. Using city-scale on-demand gig delivery as a concrete use case, I will introduce how we obtain real-time gig-worker status (e.g., locations and routes) in a cost-efficient manner to enable timely delivery and efficient order scheduling. In collaboration with the Alibaba On-demand Delivery Platform, we design, deploy, and evaluate a nationwide sensing system called aBeacon that explores a hybrid solution of hardware, software, and human participation. aBeacon detects and infers the status of more than 3 million workers in 364 cities in China via a sophisticated tradeoff between performance, cost, and privacy. I will share lessons learned and insights from aBeacon's evolution from conception to design, deployment, validation, and operation. Finally, I will discuss some ongoing directions.
 
Bio: Yu Yang is an Assistant Professor in the Department of Computer Science and Engineering at Lehigh University. He is broadly interested in Mobile Sensing, Cyber-Physical Systems, Cyber-Human Systems, and Data Science, with a focus on sensing, prediction, and decision-making for cross-domain urban systems including cellular networks, mobile payment, taxis, buses, subways, bikes, personal vehicles, and trucks with applications to Smart Cities, Gig Economy, and Community Services. Details: https://www.yyang.site

Wednesday, October 6, 2021 at 4:25pm 
Speaker: Dr. Rahul Mangharam 
Solving for Problem X: 3 Data-Driven Cyber-Physical Systems Challenge Problems
Zoom
 
Abstract: This talk focuses on three life-critical system problems across autonomous vehicles, medical devices, and energy systems. We will see how modeling such Cyber-Physical Systems (CPS) requires a combination of formal methods, controls, and machine learning, and we will highlight fundamental challenges in guaranteeing safety and performance in data-driven CPS.
Theme 1 – Safe Autonomy: A Driver’s License Test for Driverless Vehicles
Autonomous vehicles (AVs) have already driven millions of miles on public roads, but even the simplest maneuvers such as a lane change or vehicle overtake have not been certified for safety. To capture the long tail of safety cases we describe the design of a search engine for AV crashes. 
Theme 2 – Safe Medical Devices: Computer-aided Clinical Trials
Clinical trials can cost $10-20 million, last anywhere from 4-6 years, and over 35% fail. We investigate how closed-loop computer models and simulations of physiology and medical devices can conduct in-silico trials and be used as regulatory-grade evidence.
Theme 3 – Energy Systems: Learning and Control using Gaussian Processes
Electricity markets have become increasingly volatile where 20-40X price spikes have become the norm. We explore data-driven approaches that bridge machine learning and controls for volatile energy markets.
 
Bio: Rahul builds safe autonomous systems at the intersection of formal methods, machine learning and controls. He applies his work to safety-critical autonomous vehicles, urban air mobility, life-critical medical devices, IoT4Agriculture, and AI Co-designers for complex systems. He is the Penn Director for the Department of Transportation's $14MM Mobility21 National University Transportation Center which focuses on technologies for safe and efficient movement of people and goods. Rahul received the 2016 US Presidential Early Career Award (PECASE) from President Obama for his work on Life-Critical Systems. He also received the 2016 Department of Energy’s CleanTech Prize (Regional), the 2014 IEEE Benjamin Franklin Key Award, 2013 NSF CAREER Award, 2012 Intel Early Faculty Career Award and was selected by the National Academy of Engineering for the 2012 and 2017 US Frontiers of Engineering. He has won several ACM and IEEE best paper awards in Cyber-Physical Systems, controls, machine learning, and education. Rahul is an Associate Professor in the Dept. of Electrical & Systems Engineering and Dept. of Computer & Information Science at the University of Pennsylvania. He received his Ph.D. in Electrical & Computer Engineering from Carnegie Mellon University. He enjoys organizing autonomous racing competitions at https://f1tenth.org

Wednesday, October 13, 2021 at 4:25pm 
Speaker: Dr. Jiang Hu 
Machine Learning Techniques for Chip Design: From Graph Neural Network to Linear Regression
Packard Lab 466
 
Abstract: Machine learning has become a popular approach to solving many complicated problems where conventional techniques have failed, and chip design is no exception. Applying machine learning techniques is rarely a simple plug-in exercise, and there is often significant room for customization. This presentation will cover a few recent results in this regard. The first part will focus on the application and customization of graph neural network techniques for design prediction in both digital and analog circuits. The second part will show that a machine learning technique as simple as linear regression, used carefully, can deliver surprisingly strong results for simultaneous power modeling and monitoring in processor architecture designs.
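As a toy illustration of the linear-regression idea in the second part of the abstract (entirely our own sketch, with made-up counter names and weights, not the speaker's method), processor power can be modeled as a linear function of per-interval activity counts, and ordinary least squares recovers the per-event energy costs plus a static baseline:

```python
# Hypothetical sketch: fit power ~ w . activity + baseline by least squares.
import numpy as np

rng = np.random.default_rng(0)

# Assumed activity counters over 200 intervals: ALU ops, cache accesses, FP ops.
X = rng.uniform(0.0, 1.0, size=(200, 3))
true_weights = np.array([2.0, 1.5, 3.0])   # assumed per-event energy costs
baseline = 0.5                              # assumed static/leakage power
power = X @ true_weights + baseline + rng.normal(0.0, 0.01, size=200)

# Append a constant column so the intercept (baseline) is fit jointly.
A = np.hstack([X, np.ones((200, 1))])
coef, *_ = np.linalg.lstsq(A, power, rcond=None)

print(np.round(coef, 2))  # per-event weights followed by the baseline term
```

Once fitted, the same linear model can be evaluated cheaply at run time on counter readings, which is what makes regression attractive for on-line power monitoring.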
 
Bio: Jiang Hu is currently a professor in the Department of Electrical and Computer Engineering at Texas A&M University. His research interests include design and automation of VLSI circuits and systems, computer architecture optimization, and hardware security. He has published over 220 technical papers. He received a best paper award at the ACM/IEEE Design Automation Conference (DAC) in 2001, an IBM Invention Achievement Award in 2003, a best paper award at the IEEE/ACM International Conference on Computer-Aided Design (ICCAD) in 2011, and a best paper award at the IEEE International Conference on Vehicular Electronics and Safety in 2018. He has served as an associate editor for the IEEE Transactions on CAD and the ACM Transactions on Design Automation of Electronic Systems. He was the technical program chair and the general chair of the ACM International Symposium on Physical Design (ISPD) in 2011 and 2012, respectively. He received a Humboldt Research Fellowship in 2012 and was named an IEEE Fellow in 2016.

Wednesday, October 20, 2021 at 4:25pm 
Speaker: Dr. Vijaykrishnan Narayanan
Design of Processing-in-Memory Architectures for Deep Learning and Graph Applications
Zoom
 
Abstract: Processing-in-memory (PIM) architectures are becoming increasingly relevant for reducing the cost of data movement in data-intensive applications in machine learning and graph analytics. This talk will introduce approaches that support processing at different levels of the memory hierarchy and focus on one approach that supports in-cache SRAM computations. The talk will also cover accelerators designed for sparse matrix computations and for graph analytics using cross-point memory technologies.
 
Bio: Vijaykrishnan Narayanan is the Robert Noll Chair Professor in the School of EECS at the Pennsylvania State University. He is a fellow of the National Academy of Inventors, IEEE, and ACM, and an IEEE CEDA Distinguished Lecturer. Vijay received his Bachelor's in Computer Science & Engineering from the University of Madras, India, in 1993 and his Ph.D. in Computer Science & Engineering from the University of South Florida, USA, in 1998.

Wednesday, November 3, 2021 at 4:25pm 
Speaker: Dr. Michael Brodsky 
Fiber-Optic Cables as Quantum Channels: Good or Bad?
Christmas-Saucon Hall 201
 
Abstract: Quantum networks carry the tantalizing promise of unsurpassable capabilities in distributed computing, cryptography, and sensing. In these futuristic networks, information is carried by quantum bits (qubits) over quantum communication channels. Qubits are not necessarily independent the way classical bits are; in fact, individual qubits can share a special quantum connection, or, in quantum parlance, be entangled with one another. Once entanglement is distributed to remote network nodes and stored in quantum memories, it serves as a valuable resource for promising applications. Quantum networks naturally require robust quantum channels for fast and reliable entanglement distribution over long distances. As quantum communication technology matures, it moves toward utilizing actual fibers and free-space optical channels. In this talk I will review my group's progress in research on quantum networks, with a special focus on different quantum channels, the pertinent decoherence mechanisms in fiber-optic channels, and various schemes for quantum information recovery.
 
Bio: Dr. Michael Brodsky leads research in quantum information, communications, and quantum networks at the US Army Research Laboratory. Michael's current research expertise is in photonics and optical physics and in quantum information processing and communication technologies, focused on the creation, manipulation, and transmission of entangled states, as well as on devices and technologies for switching, routing, and buffering of quantum information. Prior to that, he was a member of technical staff at AT&T Labs, where his contributions to fiber-optic communications focused on optical transmission systems and the physics of fiber propagation, most notably through his work on polarization effects in fiber-optic networks. Dr. Brodsky spent the 2019-2020 academic year teaching physics to cadets at the US Military Academy at West Point, NY. During 2013-2014, he also taught physics to undergraduate engineers at the NYU Tandon School of Engineering and Cooper Union. Dr. Brodsky has authored or co-authored over 150 journal and conference papers and a book chapter, and holds over 40 granted U.S. patents. He served as a topical editor for Optics Letters and has been active on numerous program committees for IEEE Photonics Society and OSA conferences. Dr. Brodsky is a Fellow of OSA and holds a PhD in Physics from MIT.
Spring 2021 Seminars:
February 9, 2021
Speaker: Dr. Yu Zhang
Computational Neuroimaging to Advance Precision Medicine
 
Abstract: With the rapid development of computer and imaging technologies, a huge amount of neural data is being produced worldwide, providing rich resources for a variety of applications. By exploiting these complex data, including EEG and fMRI, artificial intelligence techniques have attracted increasing interest from researchers in healthcare and neural engineering. Exploring new theories and applications from the perspective of machine learning will significantly advance neural engineering and personalized medicine. In this talk, I will present some of our recent work on neural pattern decoding and biomarker discovery that leverages cutting-edge machine learning techniques. The presentation will cover latent-space biomarker quantification, predictive modeling of brain networks, Bayesian decoding of neural signals, and unsupervised disease subtyping, for brain-machine interaction and personalized treatment of brain disorders.
 
Biography: Dr. Yu Zhang is an Assistant Professor of Bioengineering at Lehigh University. He received postdoctoral training at the Wu Tsai Neurosciences Institute, Stanford University, and the Biomedical Research Imaging Center, University of North Carolina at Chapel Hill. Over the past decade, he has worked mainly on data-driven neural pattern decoding and biomarker discovery using neuroimaging techniques for personalized medicine. He is the author of over 100 peer-reviewed papers published in prestigious journals and conferences, such as Nature Biomedical Engineering, Nature Human Behaviour, Nature Biotechnology, Proceedings of the IEEE, IEEE Trans. Cybernetics, IEEE Trans. Neural Netw. Learn. Syst., IEEE Trans. Neural Syst. Rehabil. Eng., IEEE Trans. Biomed. Eng., Pattern Recognition, AAAI, MICCAI, and ICASSP. He is an IEEE Senior Member and serves as an Associate Editor for journals including Frontiers in Neuroscience, Frontiers in Computational Neuroscience, and Brain-Computer Interfaces. He has also served as a PC member for international conferences such as IJCAI and MICCAI. His research interests include computational neuroscience, brain networks, machine learning, signal processing, brain-computer interfaces, and medical imaging.
February 18, 2021
Speaker: Dr. Hao Xu
Learning and Optimizing Large Scale Multi-Agent Systems: A Mean Field Game Approach with Safe Reinforcement Learning

Abstract: Most complicated and coordinated tasks performed by a Large Scale Multi-Agent System (LS-MAS) require a huge amount of information exchange among the members of the team. This restricts the maximum population of an LS-MAS due to the notorious "Curse of Dimensionality" and induces an interplay between the total number of agents and the complexity of system optimization. The development of theory and algorithms that can break the "Curse of Dimensionality" while addressing this interplay is therefore necessary to facilitate the optimal design of LS-MAS. In this talk, we will first introduce an emerging game theory, i.e., the Mean-Field Game, to reformulate the LS-MAS optimization problem without incurring the "Curse of Dimensionality" even as the number of agents continuously increases. Then, a novel reinforcement learning algorithm will be presented that obtains optimal control for the LS-MAS by solving the coupled HJB-FPK equations arising from the Mean-Field Game. Furthermore, to strengthen the practicality of the derived learning technique, a safe reinforcement learning structure has been developed that can not only learn the optimal control for the LS-MAS but also strengthen the resiliency of the LS-MAS in practice. Finally, we will demonstrate the effectiveness of the proposed work through numerical simulation results.
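For context, the coupled HJB-FPK system mentioned above has, in its standard continuous-time mean-field game form, roughly the following shape (a textbook sketch in our own notation, not necessarily the speaker's exact formulation): a Hamilton-Jacobi-Bellman equation evolves each agent's value function u backward in time given the population density m, while a Fokker-Planck-Kolmogorov equation transports m forward under the resulting optimal feedback:

```latex
% Standard mean-field game system (sketch; notation is ours):
% value function u(t,x), population density m(t,x), diffusion \nu,
% Hamiltonian H, running cost f, terminal cost g, initial density m_0.
-\partial_t u - \nu \Delta u + H(x, \nabla u) = f(x, m), \\
\partial_t m - \nu \Delta m - \operatorname{div}\!\big( m \, \partial_p H(x, \nabla u) \big) = 0, \\
u(T, x) = g\big(x, m(T)\big), \qquad m(0) = m_0 .
```

Because every agent interacts only with the density m rather than with each individual neighbor, the formulation's complexity does not grow with the number of agents, which is the sense in which it sidesteps the "Curse of Dimensionality."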

Bio: Hao Xu is an Assistant Professor in the Department of Electrical and Biomedical Engineering at the University of Nevada, Reno (UNR). He received his Ph.D. degree from the Department of Electrical and Computer Engineering at Missouri University of Science and Technology, Rolla, MO (formerly the University of Missouri-Rolla) in 2012. Before joining UNR, he worked at Texas A&M University-Corpus Christi, TX, USA, as an Assistant Professor in the College of Science and Engineering. His research focuses on artificial intelligence, cyber-physical systems, autonomous systems, multi-agent systems, intelligent design for the power grid, and adaptive control. He has published more than 100 technical articles, many of which have appeared in highly competitive venues (such as Automatica and IEEE Transactions on Neural Networks and Learning Systems). His research has been supported by the Department of Defense (DoD), the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), and industrial companies.


March 8, 2021
Speaker: Dr. Rosa Zheng
The United Nations Decade of Ocean Science 2021-2030

Abstract: This talk will introduce the United Nations Decade of Ocean Science for Sustainable Development program, which just kicked off in January 2021. I will share the UN's decade vision, decade mission, and specific interests and applications tied to underwater energy harvesting, underwater navigation, wireless communication, marine robotics, sensing, and ocean big data technologies.

Biography: Yahong Rosa Zheng received the Ph.D. degree from Carleton University, Ottawa, ON, Canada, in 2002. She was an NSERC Postdoctoral Fellow for two years at the University of Missouri-Columbia. She then spent 13 years on the faculty of the Department of Electrical and Computer Engineering at the Missouri University of Science and Technology. She joined Lehigh University in August 2018 as a professor in the ECE department. She has developed eight new courses and taught 15 different courses in the areas of embedded systems, wireless communications, real-time digital signal processing, FPGA, GPU, and robotics. Her research interests include underwater cyber-physical systems, compressive sensing, wireless communications, and wireless sensor networks. She has served as a Technical Program Committee (TPC) member for many IEEE international conferences and as an Associate Editor for three IEEE journals. She is currently a senior editor for IEEE Vehicular Technology Magazine and an Associate Editor for the IEEE Journal of Oceanic Engineering. She received an NSF CAREER award in 2009 and has been an IEEE Fellow and a Distinguished Lecturer of the IEEE Vehicular Technology Society since 2015.


April 28, 2021
Speaker: Dr. Narayana Santhanam
Data-derived formulations: regularization vs. learning
 
Abstract: Regularization is often used to match available training sample sizes to model complexity. As training sample sizes increase, regularization constraints are usually relaxed when choosing the model. A natural question then arises: as the constraints relax, does the selected model keep varying or is the procedure stable in the sense that at some point, no further relaxation of constraints changes the selected model substantially? To understand this, we develop a statistical framework of eventually-almost sure prediction. Using only samples from a probabilistic model, we predict properties of the model and of future observations.  The prediction game continues in an online fashion as the sample size grows with new observations. After each prediction, the predictor incurs a binary (0-1) loss. The probability model underlying a sample is otherwise unknown except that it belongs to a known class of models. The goal is to make finitely many errors (i.e. loss of 1) with probability 1 under the generating model, no matter what it may be in the known model class.
 
We characterize problems that can be predicted with finitely many errors. Our characterization is through regularization, and answers precisely the question of when regularization eventually settles on a model and when it does not. Furthermore, we also characterize when a universal stopping rule can identify (to any given confidence) at what point no further errors will be made. We specialize these general results to a number of problems---online classification, entropy prediction, Markov processes, risk management---of which we will focus on online classification in this talk.
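The prediction game described above can be written compactly as follows (notation is ours, offered only as a sketch of the setup, not the speaker's exact definitions): samples arrive online from an unknown model p in a known class, the predictor incurs a binary loss at each step, and success means making only finitely many errors with probability 1, no matter which model in the class generated the data:

```latex
% Eventually almost-sure prediction (sketch; notation is ours):
% known model class \mathcal{P}; samples X_1, X_2, \ldots generated by some
% unknown p \in \mathcal{P}; prediction \hat{Y}_n = \Phi_n(X_1, \ldots, X_n);
% binary loss \ell_n \in \{0, 1\} after each prediction.
\text{Goal:} \quad
\forall\, p \in \mathcal{P} : \quad
\Pr_p\!\left( \sum_{n=1}^{\infty} \ell_n < \infty \right) = 1 .
```

The characterization results in the talk then ask for which classes such a predictor exists, and when a stopping rule can certify, to a given confidence, that the error-free regime has been reached.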
 
Bio: Narayana Santhanam is an Associate Professor at the University of Hawaii with research interests at the intersection of learning theory, statistics, and information theory, and applications thereof. He obtained his PhD from the University of California, San Diego and held a postdoctoral position at the University of California, Berkeley, before taking up a faculty position at the University of Hawaii. He is currently an Associate Editor of the IEEE Transactions on Information Theory, a member of the Center for Science of Information (an NSF Science and Technology Center), and a recipient of the IEEE Information Theory Society Best Paper Award. Among his current pedagogical priorities is developing a robust data science curriculum, grounded in engineering fundamentals, aimed at electrical engineering students.
Fall 2020 Seminars:
October 1, 2020
Hosted by ECE & BioE
Speaker: Dr. Todd Coleman
Electrical Digestive Engineering 
 
Abstract: Gastrointestinal (GI) problems are the second leading cause of missed work or school after the common cold, accounting for 10 percent of the reasons patients visit their physicians and costing $142 billion annually. Although obstructions and infections are easy to diagnose, more than half of GI disorders involve abnormal functioning of the GI tract, where diagnosis entails subjective symptom-based questionnaires or objective but invasive, intermittent procedures in specialized centers. In this talk, we will describe the electrical waves of pacemaker activity that underlie contractions for digestion, their interconnection with the nervous and immune systems, and how their propagation patterns can go awry in GI disorders. We will describe our development of high-resolution multi-electrode abdominal recording systems as well as dynamic spatial signal processing methods that, in concert, enable extraction of propagation patterns that are typically acquired invasively in specialized centers. We will also discuss the development of a miniaturized recording system for 24-hour ambulatory recordings, with an example of how it aided in solving a complex patient case. We will conclude with a vision for how modernizing gastroenterology with applied mathematics and engineering has transformational potential to remove bottlenecks, improve health outcomes, and reduce healthcare costs by improving timely diagnoses, optimizing interventions, and predicting treatment responses.
 
Bio: Todd P. Coleman received B.S. degrees in electrical engineering and computer engineering from the University of Michigan. He received M.S. and Ph.D. degrees from MIT in electrical engineering and did postdoctoral studies at Mass General Hospital in neuroscience. He is currently a Professor of Bioengineering at UCSD and directs the Neural Interaction Laboratory, which uses tools from applied probability, physiology, and bio-electronics to address basic science and translational problems. Dr. Coleman has been selected as a National Academy of Engineering Gilbreth Lecturer, a TEDMED speaker, and a Fellow of the American Institute for Medical and Biological Engineering.
 
October 20, 2020
Speaker: Dr. Javad Khazaei
Modeling, Control, and Security of Cyber-physical Power Systems
 
Abstract: While the advent of advanced power converter technologies, communication systems, and information technology has improved the efficiency and flexibility of energy supply, more challenges have been introduced to the security and control of smart grid systems. In this presentation, three aspects of modern cyber-physical power systems will be discussed: modeling, control, and security. First, cyber-physical attacks on power systems will be considered, where a cyberattacker can inject false data into measurements and cause a physical failure in the system. The impact of coordinated cyberattacks on the failure of transmission and distribution systems will be shown using bi-level optimization problems. Next, control of cyber-physical power systems will be covered, and the role of distributed control in providing ancillary services to the grid will be demonstrated through case studies. Finally, the stability issues related to high penetration of converter-based renewable energy sources will be evaluated through classical control theories and case studies. It will be shown how stability analysis can help system planners find stability margins and identify sensitive parameters that might negatively impact the performance of renewable energy sources in future smart grid systems.
 
Bio: Dr. Javad Khazaei received a Ph.D. degree in Electrical Engineering from University of South Florida (USF) in 2016 with focus on power and energy. He is currently an Assistant Professor at Pennsylvania State University, where he holds a joint appointment with the Electrical Engineering department at Penn State Harrisburg (PSH) and the Architectural Engineering department at Penn State University Park. His research is currently funded by the Department of Defense, Office of Naval Research (ONR) to develop distributed control algorithms for distributed energy resources in smart grids and cybersecurity research in power systems. He has experience in stability analysis of modern power systems, hardware development of microgrid systems and converter-based renewable energy sources as well as smart grid cybersecurity. His research interests include microgrid modeling, dynamic analysis of power converters in smart grids and buildings, application of distributed optimization in smart grids, cyber-physical power system security, renewable energy integration, and application of power electronics in power systems.
November 17, 2020
Speaker: Dr. David Kaeli
Side-channel Analysis and Resiliency Targeting Accelerators
 
Abstract: GPUs are increasingly being used in security applications, especially for accelerating encryption/decryption. While GPUs are an attractive platform in terms of performance, the security of these devices raises a number of concerns. One vulnerability is data-dependent timing/power information, which adversaries can exploit to recover the encryption key. Memory system features are frequently exploited since they create detectable variations. In this talk we will discuss side-channels across a range of accelerators, from high-performance NVIDIA/AMD graphics processors to embedded Android-based systems, including the Qualcomm Snapdragon. Our work has also included attacks on CPUs and FPGAs, though primarily as a path for informing our attacks on accelerator architectures. Our results include identifying novel timing, power, EM, and fault-based side-channels in devices from Intel, NVIDIA, AMD, and Qualcomm GPU platforms.
 
Bio: David Kaeli is a College of Engineering Distinguished Professor of Electrical and Computer Engineering at Northeastern University, where he directs the Northeastern University Computer Architecture Research Laboratory (NUCAR). He received a BS and PhD in Electrical Engineering from Rutgers University and an MS in Computer Engineering from Syracuse University. He served as the Associate Dean of Undergraduate Programs in the College of Engineering from 2010-2013. Prior to joining Northeastern in 1993, Kaeli spent 12 years at IBM, the last 7 at the T.J. Watson Research Center, Yorktown Heights, NY. He has been a visiting faculty fellow at the University of Edinburgh, University of Ghent, Technical University of Munich, and Barcelona Tech. Dr. Kaeli has published over 400 critically reviewed publications and 7 books, and holds 12 US patents. His research spans a range of areas, from microarchitecture to back-end compilers and database systems. His current research topics include hardware security, graphics processors, virtualization, heterogeneous computing, and multi-layer reliability. He serves as Editor-in-Chief of ACM Transactions on Architecture and Code Optimization and as Associate Editor of the IEEE Transactions on Parallel and Distributed Systems and the Journal of Parallel and Distributed Computing. Dr. Kaeli is an IEEE Fellow and a Distinguished Scientist of the ACM.

 

Spring 2020 Seminars:
Tuesday, February 11, 2020
Speaker: Dr. Marco Donato   
DNN-Based Edge Computing via a Full-Stack Co-Design Approach

Abstract: Augmented/Virtual Reality platforms, real-time translators, and intelligent personal assistants are emerging applications within our grasp thanks to continuous developments in Deep Neural Network (DNN) models. DNN hardware accelerators are paving the way for such applications to be deployed to embedded devices. However, data movement remains a critical bottleneck, and CMOS memory technologies struggle to keep up with the increase in memory footprint of state-of-the-art DNN models, forcing the system to perform costly off-chip DRAM memory accesses. As the future of computing is shaped by the way we store and process large amounts of information, a need emerges for design solutions at the intersection of devices, circuits, architectures, and applications. This talk will present full-stack design methodologies that leverage embedded non-volatile memories (eNVMs) as a dense, approximate storage solution to reduce DRAM accesses by storing DNN models entirely on chip. In evaluating the implications of building eNVM-based computing systems, it is critical to take into account the non-idealities of many eNVM implementations. Multi-level cell storage offers the opportunity for higher capacity but introduces reliability issues. Moreover, the high energy and latency costs of eNVM writes, together with limited memory endurance, set a limit on how frequently and efficiently the memory content can be updated. I will show that all these limitations can be circumvented by adopting device, architectural, and algorithmic co-design optimizations, making eNVMs a viable solution for energy-efficient DNN inference at the edge.

Bio: Dr. Marco Donato is a Research Associate in the John A. Paulson School of Engineering and Applied Sciences at Harvard University. His research focuses on the design of novel embedded memory subsystems and circuitry in advanced CMOS technology nodes, with applications to machine learning hardware accelerator SoCs. Before joining Harvard, Dr. Donato received his Ph.D. from Brown University, where he worked on developing automated tools for noise-tolerant circuit architecture design and on physically accurate, computationally efficient simulation techniques for evaluating the effects of thermal and random telegraph signal (RTS) noise in nanoscale sub-threshold CMOS circuits. Dr. Donato holds Master's and Bachelor's degrees from the University La Sapienza in Rome.


Thursday, February 13, 2020
Speaker: Tamzidul Hoque  
Trust, but Tolerate and Verify: Ecosystem of Technologies for Trusted Microelectronics
 
Abstract: In the new era of intelligent connectivity, electronic devices have gained unprecedented access to our daily lives. However, the economic advantage of outsourcing the design, fabrication, and assembly process of electronic components has driven the semiconductor industry to rely on various foreign vendors across the globe. These untrusted vendors can introduce stealthy, malicious modifications to the hardware design, also known as hardware Trojans, to cause a functional failure or leak secret information during field operation. Even after deployment, the hardware design is vulnerable to various invasive in-field attacks by malicious end-users that can disrupt trusted execution.
In this talk, I will present an ecosystem of technologies that can verify the trustworthiness of the hardware designs before deployment and integrate the ability to tolerate various attacks in the field. First, I will outline the security and trust issues in the integrated circuit (IC) lifecycle. Next, I will present two solutions to verify the trustworthiness of ICs that can identify hardware Trojans before deployment, in both pre-silicon and post-silicon stages of the supply chain. Similarly, for post-deployment phases, I will present two solutions. The first one enables Trojan-tolerant computing in the field that can be used as the last line of defense for applications where trust verification is not feasible. The next technology in the ecosystem is developed to protect reconfigurable hardware that is vulnerable to tampering of its bitstream in the field. I will illustrate how a machine learning guided hardware obfuscation method can enable tamper-tolerance in reconfigurable hardware. In conclusion, I will discuss the future security challenges that I intend to address to establish hardware as the trust anchor for emerging applications.
 
Bio: Tamzidul Hoque is a Ph.D. candidate in Electrical and Computer Engineering at the University of Florida (UF). His Ph.D. research focuses on hardware security, with an emphasis on hardware Trojan detection and on protecting hardware intellectual property from piracy, reverse engineering, and tampering. He worked within the Advanced Security Research and Government Group (ASRG) at Cisco as a Ph.D. intern during the summers of 2016 and 2017. At Cisco, Tamzidul was selected to work with the Trustworthy Technologies (TT) group, which is tasked with securing the entire product line. Tamzidul's research in the area of security has so far resulted in more than 20 peer-reviewed publications in premier journals and international conferences, including lead-authored articles at the IEEE International Test Conference and in IEEE Consumer Electronics Magazine and IEEE Design & Test. In recognition of his research contributions, Tamzidul received the ECE Graduate Research Excellence Award from UF in 2018. Technical demonstrations of his work have received five awards at leading hardware security conferences, including IEEE Hardware Oriented Security and Trust.

 

Fall 2019
Fall 2019 Seminars:
Monday, September 23, 2019
Co-Sponsored with the Institute for Cyber Physical Infrastructure & Energy
Speaker: Dr. Husheng Li, University of Tennessee  
Information for Control and Sensing in Cyber Physical Systems: From Maxwell's Demon to Millimeter Wave
 
Abstract: This talk focuses on the acquisition and efficient use of information in cyber-physical systems (CPSs). To evaluate the interdependence of communications and control in CPSs, we study the control of entropy (or, equivalently, uncertainty) via communications. We view the controller as a Maxwell's demon that reduces the system entropy. By the second law of thermodynamics, the system entropy cannot decrease spontaneously; therefore, to reduce the system entropy, the controller needs external information communicated from sensors. The information efficiency of reducing entropy is studied for both finite-state and continuous-state systems. We will then discuss how to leverage millimeter wave communications in 5G systems for sensing, as a 'bonus' of wireless communications. Detection, tracking, and imaging of objects in the millimeter wave band will be discussed, based on the millimeter wave signals blocked or reflected by the objects of interest. Both experimental and numerical results on this 'bonus' sensing mechanism will be presented.
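The entropy argument sketched above has a well-known formalization in information-theoretic control (due to Touchette and Lloyd); the following is an illustrative statement of that bound, with symbols not taken from the abstract: X denotes the system state and Y the sensor message available to the controller.

```latex
% Entropy-reduction bound: the closed-loop (with sensor information) entropy
% decrease can exceed the open-loop (no sensor information) decrease by at
% most the mutual information communicated by the sensors.
\Delta H_{\mathrm{closed}} \;\le\; \Delta H_{\mathrm{open}} + I(X;Y)
```

In words: without external information (I(X;Y) = 0), the controller can do no better than open-loop actuation, which is the sense in which the demon "needs" communication to decimate uncertainty.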
 
Bio: Husheng Li received the B.S. and M.S. degrees in electronic engineering from Tsinghua University, Beijing, China, in 1998 and 2000, respectively, and the Ph.D. degree in electrical engineering from Princeton University, Princeton, NJ, in 2005. From 2005 to 2007, he worked as a senior engineer at Qualcomm Inc., San Diego, CA. In 2007, he joined the EECS department of the University of Tennessee, Knoxville, TN, as an assistant professor, and was later promoted to full professor. His research is mainly focused on statistical signal processing, wireless communications, networking, the smart grid, and game theory. Dr. Li is the recipient of Best Paper Awards from the EURASIP Journal on Wireless Communications and Networking (2005), the EURASIP Journal on Advances in Signal Processing (2015), IEEE Globecom 2017, IEEE ICC 2011, and IEEE SmartGridComm 2012, and the Best Demo Award at IEEE Globecom 2010.

 
Wednesday, October 2, 2019
Speaker: Dr. Lizhong Zheng, MIT 
Universal Features: Information Extraction and Data-Knowledge Integration
 
Abstract: With the growing demand for data analytics in a wide range of applications, a key research challenge has emerged: representing data in a generic semantic space, where useful information and knowledge can be represented quantitatively, succinctly, and at an abstract level. The key issues include how to define a universal interface for knowledge representation, how to manage and integrate knowledge from multiple data sources, how to utilize domain knowledge, and how to cope with non-ideal situations such as disparities in the quality of different datasets and precision losses in processing. Numerous algorithms offer possible ways to achieve these goals; in particular, neural networks are expected to play a key role. The main difficulty is that we still do not have a complete theory of deep learning that identifies exactly what knowledge is learned by neural networks and what hidden assumptions are needed for the desired performance. In this talk, we address this problem by developing a theoretical structure that measures the meaning of information by its relevance to specific inference problems, and from that we explain the behavior of neural networks as extracting "universal features," defined as the solution to a specific optimization problem. This helps us not only to understand the learning process inside a large neural network, but also to draw connections to a number of well-known concepts in statistics and other learning algorithms. Based on this theoretical framework, our goal is to develop more flexible, robust, and interpretable data embedding algorithms.
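The "specific optimization problem" mentioned above is, in this line of work, closely related to the Hirschfeld-Gebelein-Rényi maximal-correlation problem; the following is an illustrative sketch (the notation X for the data and Y for the inference target is ours, not the abstract's).

```latex
% Top universal feature pair: the most correlated zero-mean, unit-variance
% functions of X and Y; further pairs come from the subsequent singular
% modes of the underlying dependence structure.
(f^{*}, g^{*}) \;=\; \arg\max_{f,\,g}\; \mathbb{E}\!\left[ f(X)\, g(Y) \right]
\quad \text{s.t.} \quad
\mathbb{E}[f(X)] = \mathbb{E}[g(Y)] = 0, \;\;
\mathbb{E}\!\left[f^{2}(X)\right] = \mathbb{E}\!\left[g^{2}(Y)\right] = 1 .
```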
 
Bio: Lizhong Zheng received the B.S. and M.S. degrees, in 1994 and 1997 respectively, from the Department of Electronic Engineering, Tsinghua University, China, and the Ph.D. degree, in 2002, from the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley. Since 2002, he has been working at MIT, where he is currently a professor of Electrical Engineering. His research interests include information theory, statistical inference, communications, and network theory. He received the Eli Jury Award from UC Berkeley in 2002, the IEEE Information Theory Society Paper Award in 2003, the NSF CAREER Award in 2004, and the AFOSR Young Investigator Award in 2007. He served as an associate editor for the IEEE Transactions on Information Theory, and as general co-chair of the IEEE International Symposium on Information Theory in 2012. He is an IEEE Fellow.

Monday, November 11, 2019
Speaker: Dr. Peng Li  
Learning Mechanisms and Hardware Design for Computation with Spiking Neurons
 
Abstract: As one form of brain-inspired computing, spiking neural networks (SNNs) have recently gained momentum. This is fueled in part by advancements in emerging devices and neuromorphic hardware, e.g., the availability of the Intel Loihi and IBM TrueNorth neuromorphic chips, which promise ultra-low-energy, event-driven processing of large amounts of data. Nevertheless, major challenges must still be overcome to make spike-based computation a competitive choice for real-world applications. This talk will present a multi-faceted SNN research approach: 1) empowering SNNs by exploring computationally powerful feedforward and recurrent architectures; 2) tackling major challenges in training complex SNNs by developing biologically plausible learning mechanisms and error backpropagation that operates on top of spiking discontinuities; and 3) enabling efficient FPGA spiking neural processors with integrated on-chip learning via algorithm-hardware co-optimization.
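As background for the event-driven processing mentioned above, the basic unit of most SNNs is the leaky integrate-and-fire (LIF) neuron; the minimal sketch below is illustrative only (the function name and parameter values are ours, not from the talk), and omits the surrogate-gradient machinery that training through the spiking discontinuity requires.

```python
import numpy as np

def lif_step(v, spikes_in, w, leak=0.9, threshold=1.0):
    """One discrete-time update of a layer of leaky integrate-and-fire neurons.

    v         -- membrane potentials of this layer (1-D array)
    spikes_in -- binary input spike vector from the previous layer
    w         -- synaptic weight matrix (neurons x inputs)
    """
    v = leak * v + w @ spikes_in                  # leaky integration of weighted input spikes
    spikes_out = (v >= threshold).astype(float)   # fire where potential crosses threshold
    v = v * (1.0 - spikes_out)                    # reset fired neurons to zero
    return v, spikes_out
```

Because a neuron does work only when input spikes arrive, hardware built around this update (as in the neuromorphic chips above) can stay idle, and hence at very low power, during quiet periods.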
 
Bio: Peng Li received the Ph.D. degree from Carnegie Mellon University in 2003. He was on the faculty of Texas A&M University from August 2004 to June 2019. Since July 2019, he has been with the University of California at Santa Barbara as a professor of Electrical and Computer Engineering. His research interests are in integrated circuits and systems, electronic design automation, brain-inspired computing, and computational brain modeling. Li's work has been recognized by an ICCAD Ten Year Retrospective Most Influential Paper Award, four IEEE/ACM Design Automation Conference (DAC) Best Paper Awards, an Honorary Mention Best Paper Award from ISCAS, an IEEE/ACM William J. McCalla ICCAD Best Paper Award, two SRC Inventor Recognition Awards, two MARCO Inventor Recognition Awards, and an NSF CAREER Award. He received the ECE Outstanding Professor Award, and was named a TEES Fellow, a William O. and Montine P. Head Faculty Fellow, and a Eugene Webb Fellow by the College of Engineering at Texas A&M University. He was the Vice President for Technical Activities of the IEEE Council on Electronic Design Automation from Jan. 2016 to Dec. 2017. He is a Fellow of the IEEE and has consulted for Intel and several Silicon Valley startup companies.