Artificial Intelligence (AI) has permeated our lives. Our phones unlock when they see our faces. We can hold full conversations in text with ChatGPT. Amazon knows what I’m looking for, and my email finishes my sentences with amazing accuracy.
AI may seem magical, but these solutions rely on deep learning and neural networks (NNs), which require only a little calculus, and a lot of data and computing power.
The first neural networks, proposed in the 1960s, aimed to mimic human brains by perceiving stimuli (input), processing them with interconnected layers of “artificial neurons” and producing responses (output). For example, facial recognition technology on phones is trained to accept an input photo and answer the question, “Is this person my owner?” If the answer is yes, the phone unlocks.
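To make that structure concrete, here is a minimal sketch, written only for illustration (the sizes, weights and names are made up, and this is nothing like a real face-recognition model), of an input passing through interconnected layers of artificial neurons to produce an output:

import numpy as np

rng = np.random.default_rng(0)

photo = rng.random(16)                    # stand-in for the pixel values of an input photo

W1 = rng.normal(size=(8, 16))             # connection strengths from the input to a hidden layer
w2 = rng.normal(size=8)                   # connection strengths from the hidden layer to the output

hidden = np.maximum(0, W1 @ photo)        # each hidden "neuron" combines the signals it receives
score = 1 / (1 + np.exp(-(w2 @ hidden)))  # squash the result to a 0-to-1 "is this my owner?" score

print("unlock" if score > 0.5 else "stay locked")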
Within an NN, each connected pair of neurons has a “knob” that controls how strongly a signal is passed from one cell to the next. “Training” an NN involves adjusting these knobs so that the NN correctly maps the inputs in a large training dataset to the desired outputs. This tweaking of millions or billions of knobs is guided by calculus to minimize errors in the output. Effective NNs not only learn to produce the desired training outputs but also generalize to work with the new inputs they encounter.
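As a rough illustration of this knob tweaking (a minimal single-layer sketch, not our lab's code; the data and numbers are made up), gradient descent uses calculus to decide which way to turn each knob so that the errors shrink:

import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 100 inputs with 3 features each, plus the desired outputs.
X = rng.normal(size=(100, 3))
true_knobs = np.array([1.5, -2.0, 0.5])
y = X @ true_knobs

knobs = np.zeros(3)                       # the network's adjustable weights, initially untuned
learning_rate = 0.1

for step in range(200):
    predictions = X @ knobs               # map the training inputs to outputs with the current knobs
    errors = predictions - y              # how far each output is from the desired output
    # The gradient of the mean squared error tells us which way to turn each knob.
    gradient = 2 * X.T @ errors / len(y)
    knobs -= learning_rate * gradient     # nudge every knob a little in the error-reducing direction

print("learned knobs:", np.round(knobs, 2))   # approaches [1.5, -2.0, 0.5]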
At Florida Tech’s Neurotransmission Lab (NETS), we study deep learning and develop our own neural networks. Alarmingly, artificial neural networks sometimes make errors for reasons that are not well understood, which makes deploying them in high-stakes settings risky. Much of our work focuses on these failure modes: investigating why they occur and what we can do about them.
In work led by Ph.D. student Mackenzie Meaney, we have developed a technique called PEEK that “peeks” into the inner workings of neural networks to visualize the details they focus on. PEEK explains NN decisions and reveals biases in the data. Interestingly, PEEK can often recover the correct output from a network’s internal workings, even when the NN fails to produce it. Ongoing work aims to use this “corrected” output as a fail-safe means of quickly detecting and correcting errors.
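PEEK’s internals are our own research, but the flavor of this kind of “peeking” can be illustrated with a standard perturbation-based sensitivity check (not PEEK itself; the tiny network and all names below are made up): nudge each input feature and watch how much the output moves, which reveals the details the network is paying attention to.

import numpy as np

rng = np.random.default_rng(1)

def tiny_network(x, W1, w2):
    # A fixed two-layer network standing in for a trained model: input -> hidden (ReLU) -> output.
    hidden = np.maximum(0, W1 @ x)
    return w2 @ hidden

W1 = rng.normal(size=(8, 5))              # random weights stand in for a trained network
w2 = rng.normal(size=8)

x = rng.normal(size=5)                    # one input example with 5 features
baseline = tiny_network(x, W1, w2)

sensitivity = np.zeros_like(x)
for i in range(len(x)):
    perturbed = x.copy()
    perturbed[i] += 0.1                   # nudge one feature at a time
    sensitivity[i] = abs(tiny_network(perturbed, W1, w2) - baseline)

print("feature sensitivity:", np.round(sensitivity, 3))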
The versatility of NNs allows us to collaborate across disciplines. We regularly work with aerospace and biomedical engineers.
With Ph.D. student Trupti Mahendrakar ’21 MS in the Autonomy Laboratory, we develop vision and guidance algorithms for autonomous satellite swarms for the Air Force Research Laboratory (AFRL), with ongoing work on human-guided vision algorithms.
Ph.D. student Nehru Atz ’16, ’19 MS, is developing an algorithm to track satellite components in real time.
Ph.D. student Ariana Isett ’23 and I are currently faculty/graduate fellows at AFRL, working on a project to send chase satellites into inspection orbits around spacecraft, capturing images to build 3D reconstructions. We design optimized inspection orbits and deploy them on spaceflight computers.
In addition, we are collaborating with the Multi-Scale Cardiovascular Fluids Laboratory to develop NNs that non-invasively estimate a patient’s intravascular blood flow dynamics in real time. This enables medical teams to make rapid diagnoses and treatment plans for patients with cardiovascular diseases.
The NETS Lab’s efforts aim to provide a deeper understanding of AI at scale and design effective solutions for safety-critical applications in spaceflight and medicine.