My research brings together aerospace engineering, control theory, and computer science to improve autonomy in aerospace vehicles. I examine how control interacts with subdisciplines of computer science such as real-time computing, software engineering, machine learning, human-robot interaction, cooperative control, and communication, and I translate those relationships into new models, controllers, and software that improve low- and high-level autonomy in robotic vehicles. My collaborators, in turn, use our joint research to incorporate concepts from control into their own domains. The result is a tightly integrated robotic vehicle that makes more intelligent decisions, operates more efficiently, and allocates resources more effectively to accomplish missions in dynamic environments.
Together with the other co-directors of the NIMBUS Lab, I develop robotic vehicles that specialize in close interaction with the environment. To date, we have built multicopters that interact closely with earth, water, fire, extreme weather, sensors, and delicate ecosystems (see https://nimbus.unl.edu).
- NSF CAREER: Resource-aware Cyber-Physical Vehicle Autonomy
- Remote Sensor Emplacement
- Wetlands Monitoring using UAS
- UAS-Rx Fire Ignitions
2-year Research Plan
The next two years will be critical for Unmanned Aircraft Systems (UAS) research: to accomplish national strategic goals, UAS must remain as safe and secure as their larger, inhabited counterparts. As machine learning drives more autonomous capabilities, UAS must adapt to changing circumstances, learning, acting, sensing, and planning when needed. My research gives Size, Weight, and Power (SWaP)-constrained robots this underlying capability, letting them devote resources where they are needed most as the environment changes. The impact is safer, more effective, and more useful UAS.
My research to date has already produced the control algorithms and mathematical foundations for safe controllers with this capability. During the next two years I will focus on the remaining theoretical and computing pieces needed to accomplish these broad goals:
- To enable dynamic resource reallocation, we need computing tools that model real-time system tasks with time-varying periods while still guaranteeing schedulability. To do this, I will:
- Develop rate-adaptive, real-time task models
- Design real-time scheduling algorithms for multiple rate-adaptive tasks, ensuring schedule feasibility
- Advances in adaptability and resource reallocation will not be widely adopted until they can be easily deployed, which requires an autopilot that employs our techniques. To do this, I will:
- Begin development of a new autopilot that is compatible with the ubiquitous Robot Operating System (ROS) and with popular flight-control platforms such as Pixhawk
- Deploy a basic, early version of the new autopilot in our outdoor netted facility
- Flight-test the basic autopilot on a multicopter in our rainforest monitoring scenario in Costa Rica
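The rate-adaptive scheduling idea above can be made concrete with a small sketch. The Python snippet below is purely illustrative (the task model, names, and numbers are my own assumptions, not part of any existing autopilot): each task's period may vary within known bounds, and a pessimistic utilization test, based on the classic single-processor EDF condition that total utilization must not exceed 1, certifies schedulability for any rate assignment within those bounds.

```python
# Minimal sketch of a schedulability check for rate-adaptive tasks.
# All names and values here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RateAdaptiveTask:
    wcet: float    # worst-case execution time C_i
    t_min: float   # shortest period the task may adopt
    t_max: float   # longest period the task may adopt

def worst_case_utilization(tasks):
    # A task contributes the most load when running at its fastest
    # rate (shortest period), so bound utilization using t_min.
    return sum(t.wcet / t.t_min for t in tasks)

def edf_schedulable(tasks):
    # Single-processor EDF with implicit deadlines is schedulable
    # iff total utilization <= 1; using t_min makes this a
    # (pessimistic) sufficient test for every allowed rate choice.
    return worst_case_utilization(tasks) <= 1.0

tasks = [
    RateAdaptiveTask(wcet=2.0, t_min=10.0, t_max=40.0),  # e.g. a state estimator
    RateAdaptiveTask(wcet=1.0, t_min=5.0,  t_max=20.0),  # e.g. an attitude controller
]
print(edf_schedulable(tasks))  # True: 2/10 + 1/5 = 0.4 <= 1
```

A real rate-adaptive scheduler would be less conservative, admitting rate combinations whose instantaneous utilization fits even when the worst case does not, but this bound illustrates the feasibility question the research addresses.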
Additionally, the urgency of plugging leaks in the STEM educational pipeline and increasing scientific literacy requires that we apply this resource reallocation strategy to ourselves and rededicate our efforts to this cause. The following outreach plan is designed with this goal in mind:
- To help plug our leaky pipeline, in 2022-2023 I will run a 3-day, hands-on robotics workshop as part of a STEM summer camp for middle school girls in rural Nebraska.
- During the Fall 2022 and Spring 2023 semesters I will conduct six 1-hour sessions over 6-8 weeks at the University of Nebraska-Lincoln Osher Lifelong Learning Institute (OLLI). Topics will include:
- Varying degrees of automation in vehicles
- How cyber-physical system concepts and technologies are being used to keep people safe
- Helpful uses of technologies and their benefit to society at large
- How to interpret news, data, and events involving automated vehicles.