With its intense focus on research and its large student body, UW-Madison attracts professors from universities around the world. This melting pot also includes graduates of the university itself. Robert Nowak, professor of electrical engineering, is one of them. His history with UW-Madison dates back to his father, who received his master’s degree in electrical engineering here in the late 1960s.
Growing up in a science-focused family, Professor Nowak played with oscilloscopes and homemade computers as a child. His father was working on the frontier of converting analog signals to digital, a field initially called “sampled-data control.” That research would lay the groundwork for Professor Nowak’s future studies. As an undergraduate at UW-Madison, Professor Nowak toyed with the idea of majoring in mathematics, having always loved studying algorithms. Although he eventually decided to follow in his father’s footsteps and received his undergraduate degree in electrical engineering, his love for mathematics and theory would become the focal point of his graduate studies.

After deciding to stay at UW-Madison for his graduate work, Professor Nowak began to focus much more on signal and information processing. With the explosion of computer technology and the internet in the 1990s, signal processing took on a far more prominent role in research, as people began to understand the implications it could have in a wide variety of areas, including MRI and CAT scans. After spending about nine years as a professor at Michigan State University and Rice University, Professor Nowak returned to UW-Madison as a professor in 2003, where he has been conducting his research ever since.
Professor Nowak has integrated his childhood love of mathematics and algorithms into his work, solving real-world problems in the area of non-linear signal and image processing. When I asked him about his research, he placed it under the general problem of “big data.” As he says, “There is an overwhelming amount of information and data [now] at our disposal that we are actually acquiring more data than we can really process and analyze.” So the question becomes: how do we deal with all of that data?
The classic approach to signal processing uses an analog-to-digital converter: the analog signal is sampled at a fixed rate by a computer, and these samples can then be filtered or manipulated in different ways to achieve a plethora of effects. The sampling rate has to be chosen carefully because of an effect called aliasing: when a signal is sampled at too low a rate, the samples appear to describe frequencies that do not exist in the real analog signal. The general rule for choosing the sampling rate, known as the Nyquist rate, is to sample at more than twice the highest frequency in the analog signal.
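To see aliasing in action, here is a small Python sketch (my own illustration, not code from Professor Nowak’s lab): a 10 Hz tone sampled above the Nyquist rate keeps its identity, while the same tone sampled below it masquerades as a much lower frequency.

```python
import numpy as np

f_signal = 10.0   # a 10 Hz tone, so the Nyquist rate is 2 * 10 = 20 Hz

for fs in (50.0, 12.0):          # one adequate rate, one that is too low
    n = np.arange(int(fs))       # one second of samples at rate fs
    samples = np.sin(2 * np.pi * f_signal * n / fs)

    # Estimate the dominant frequency in the samples with an FFT.
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    print(f"sampled at {fs:4.0f} Hz -> looks like a "
          f"{freqs[np.argmax(spectrum)]:.0f} Hz tone")

# sampled at   50 Hz -> looks like a 10 Hz tone  (faithful)
# sampled at   12 Hz -> looks like a 2 Hz tone   (aliased: 12 - 10 = 2)
```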
However, this immediately becomes a problem in our “big data” era, as it is extremely time consuming to take so many samples of the complex audio signals and images of today’s world, let alone to store all of that information. One remedy is to compress the samples by reducing redundancy: rather than keeping every sample, one finds stretches of the signal that are closely related to each other, stores a single representative value, and reuses it over the whole range where the original signal stays similar.
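A toy version of that idea is run-length encoding. The Python sketch below (again my own illustration, not Professor Nowak’s method) collapses a signal with long stretches of similar values into a handful of (value, length) pairs.

```python
import numpy as np

def run_length_encode(x, tol=1e-6):
    """Collapse runs of (nearly) identical samples into (value, count) pairs."""
    runs = []
    start = 0
    for i in range(1, len(x) + 1):
        # Close the current run at the end of the signal or when the value jumps.
        if i == len(x) or abs(x[i] - x[start]) > tol:
            runs.append((float(x[start]), i - start))
            start = i
    return runs

# A piecewise-constant signal: long stretches where nothing changes.
signal = np.concatenate([np.full(40, 0.0), np.full(35, 1.5), np.full(25, 0.7)])
print(run_length_encode(signal))
# -> [(0.0, 40), (1.5, 35), (0.7, 25)]: 100 samples stored as 3 pairs
```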
Researchers began to theorize systematic ways of sampling a signal that would still capture its most important components, so that the digital output mirrors the original analog input. This idea is the main focus of Professor Nowak’s research here at UW-Madison.
Professor Nowak described this process as resting on two main ideas: transform coding and sparsity. Instead of sampling the signal at a prescribed rate, transform coding involves taking randomized samples of a signal and multiplying them by a transform matrix (which acts like a differentiator), giving you the transformed building blocks of the original signal. The goal is that these transform coefficients (or building blocks) will be mostly zero, except in places where the original signal changes suddenly.
Imagine, for a second, a screenshot of the TV show SpongeBob SquarePants. The majority of the screen is solid color, except at the distinct edges of a character such as SpongeBob, where there is a dramatic change in color. If we sampled this image classically, it would take a huge amount of space to save information about every pixel on the screen. If we instead applied transform coding, we would randomly sample places in the image, multiply these samples by the transform matrix, and get a list of transform coefficients consisting mostly of zeros, except where there is an “edge” in the image. This dramatically decreases the amount of data that needs to be stored.
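Here is a rough Python sketch of that cartoon example in one dimension, using a simple first-difference matrix as the transform (a stand-in for the “differentiator” mentioned above, not the particular transform used in Professor Nowak’s work):

```python
import numpy as np

# One row of a cartoon-like image: flat color regions with a few sharp edges
# (think of scanning across a frame of SpongeBob).
row = np.concatenate([np.full(30, 0.2),    # background
                      np.full(25, 0.9),    # character
                      np.full(45, 0.2)])   # background again
n = len(row)

# A first-difference transform matrix D: (D @ row)[i] = row[i + 1] - row[i].
D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
coeffs = D @ row

print("pixels stored classically:", n)
print("nonzero transform coefficients:", np.count_nonzero(coeffs))
# Only 2 nonzeros -- one per edge -- so almost everything compresses away.
```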
Going back to the two main ideas, sparsity can then be defined as the desired effect of transform coding: the transform coefficients are mostly zero except in places of distinct change, so the amount of data that needs to be stored is considerably smaller. The problem with this method then becomes decompressing the new digital signal back into a usable signal that can be projected on a screen or heard audibly, seeing as a majority of the original samples were discarded during transform coding.
Professor Nowak explained how to approach this problem, saying, “[Since] the signal can be represented by its transform coefficients, you can think about it as a system of equations where there are k equations and m unknowns.” The difficulty is that there are fewer equations than unknowns, so ordinary linear methods cannot single out the right answer; the system must be solved in a non-linear fashion, which is extremely complex.
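To give a flavor of what such a non-linear solver looks like, here is a minimal Python sketch using iterative soft-thresholding, a standard textbook algorithm for this kind of sparse recovery (the article does not say which algorithms Professor Nowak’s group actually uses):

```python
import numpy as np

rng = np.random.default_rng(0)

m, k = 200, 60                 # m unknowns, only k equations (k < m)
x_true = np.zeros(m)
x_true[rng.choice(m, size=5, replace=False)] = rng.normal(size=5)  # sparse truth

A = rng.normal(size=(k, m)) / np.sqrt(k)   # random measurement matrix
y = A @ x_true                             # k measurements of an m-dim signal

# Iterative soft-thresholding (ISTA): a non-linear solver that looks for the
# sparsest x consistent with the k equations y = A x.
step = 1.0 / np.linalg.norm(A, 2) ** 2     # step size from the spectral norm
lam = 0.01                                 # sparsity-promoting penalty weight
x = np.zeros(m)
for _ in range(2000):
    x = x + step * A.T @ (y - A @ x)                        # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0)  # shrink toward 0

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
# Typically a small number: 60 equations pin down 200 unknowns because
# only a handful of those unknowns are actually nonzero.
```

Even with far fewer equations than unknowns, the handful of nonzeros is recovered almost exactly, which is precisely the uniqueness question raised next.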
This, finally, is Professor Nowak’s area of study, which comes down to two questions: when will this type of sampling actually work (is a signal sparse enough that random sampling yields one unique answer when the system of equations is solved?), and what kinds of algorithms can solve the non-linear system of equations?
While this research may seem highly mathematical, Professor Nowak explained that it applies to many real-world problems. Using this type of processing in MRI and CAT scans can cut the time a patient spends in the scanner by 50%, which allows hospitals to lower the cost of these scans. It can also be used to model and improve cell phone signals, allowing your phone to deliver better audio. Even the military has begun implementing some of this technology to detect foreign signals in its airspace that might indicate someone trying to activate a bomb remotely.
All of this technology requires many years of advanced mathematics and engineering courses to fully understand. However, Professor Nowak firmly believes it is important to introduce young engineers to these topics in a way that lets them see the real-world applications signal processing can have. As he put it, “[These topics] are very mathematical, but rather than getting four years of mathematical foundation and then maybe doing something cool, it’s not that difficult to get started and do some fun stuff while still being pretty rigorous mathematically.” His background at UW-Madison has helped him understand what undergraduate students would be interested in learning; as a result, he created ECE 203, an introductory signal processing course. As a student in his class who has amped up the bass in a rap song and scanned through MRI images to detect finger movement, I can say he has certainly caught my attention.