In their insatiable demand for faster, smaller and more mobile communications gadgets, says Zhiyuan Yan, consumers are straining the capacity of information technology.

Handheld devices that offer high-definition (HD) images and portability require high throughput, or processing speed, says Yan, an assistant professor of electrical and computer engineering. They must also be able to operate with little power.

These two trends – the demand for greater performance and strict limits on power consumption – pose challenges to the technology and the mathematical algorithms on which IT relies.

“HD applications require high throughput and low power in order to be handheld and mobile,” says Yan. “When you take these two together, you’re in a sense burning the candle at both ends.”

Yan and Meghanad Wagh, an associate professor of electrical and computer engineering, have an NSF grant to devise scalable bilinear algorithms to meet these challenges. They also have support from Thales Communications, the U.S. Department of Defense, and the Pennsylvania Infrastructure Technology Alliance.

“This project represents a different way of thinking about algorithms,” says Wagh. “Normally in signal processing, you write algorithms for standard computer architectures. But to achieve the required speed, we have developed a new class of algorithms that can be directly cast into hardware. Our algorithms are extremely fast and take advantage of the new trend in technology.”
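A bilinear algorithm factors a computation such as convolution into three fixed stages: linear combinations of each input, a single layer of pointwise multiplications, and a final set of linear combinations of the products. Because every stage is fixed, the whole structure can be laid out directly as adders and multipliers in hardware. The Python sketch below is a minimal illustration of that shape, using the textbook three-multiplication algorithm for 2-point convolution as a stand-in; the matrices and the toy inputs are illustrative, not Yan and Wagh's constructions.

import numpy as np

# A bilinear algorithm computes y = C @ ((A @ x) * (B @ h)):
# two linear "pre-addition" stages, one pointwise multiplication stage,
# and a linear "post-addition" stage. Only the middle stage needs
# multipliers, which is what makes the structure attractive in hardware.
# Toy instance (illustrative, not the researchers' construction):
# 2-point linear convolution with 3 multiplications instead of 4.
A = np.array([[1, 0],
              [0, 1],
              [1, 1]])          # pre-additions on x
B = A.copy()                    # pre-additions on h (same pattern here)
C = np.array([[ 1,  0, 0],
              [-1, -1, 1],
              [ 0,  1, 0]])     # post-additions that assemble the outputs

def bilinear_conv2(x, h):
    # y[k] = sum over i of x[i] * h[k - i], via the bilinear form.
    return C @ ((A @ x) * (B @ h))

x = np.array([3, 5])
h = np.array([2, 7])
print(bilinear_conv2(x, h))   # [ 6 31 35]
print(np.convolve(x, h))      # same result, computed the direct way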

The new algorithms are more suitable for high-performance computing, says Yan.

“Until a few years ago, to improve computer speed, you used a larger processor. Now, you split the computation and do it in parallel with many smaller processors, which can be as good as or better than one superfast processor, while consuming less power.

“Many traditional signal-processing algorithms, however, are not parallelizable and cannot take advantage of these new ideas.”

Parallel processing allows each application in a multimedia system to be assigned its own dedicated processor, so that no single processor is overloaded with competing demands.
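As a generic sketch of that idea – ordinary Python multiprocessing, not the researchers' code – each stream of a hypothetical multimedia workload can be handed to its own worker process; the stream names and the toy two-tap filter are assumptions made purely for illustration.

from concurrent.futures import ProcessPoolExecutor

def filter_stream(item):
    # Toy per-stream task: a two-tap moving-average filter over one stream.
    name, samples = item
    filtered = [(a + b) / 2 for a, b in zip(samples, samples[1:])]
    return name, filtered

# Independent streams of a hypothetical multimedia workload.
streams = {
    "video": [1, 4, 2, 8, 5],
    "audio": [3, 3, 9, 1, 7],
    "data":  [2, 6, 4, 4, 0],
}

if __name__ == "__main__":
    # One worker per stream, so no single processor handles every demand.
    with ProcessPoolExecutor(max_workers=3) as pool:
        for name, filtered in pool.map(filter_stream, streams.items()):
            print(name, filtered)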

“Our algorithms are inherently structured,” says Yan. “This enables us to extract the maximum parallelism in processing and to offload tasks to dedicated hardware.”

The Lehigh algorithms can also be scaled to handle the greater complexity required by computationally intensive jobs.
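One way such scaling shows up in bilinear constructions generally – offered here only as an illustration, not as Yan and Wagh's specific method – is that a small algorithm can be nested inside itself to handle larger inputs, so the number of costly multiplications grows much more slowly than with the direct approach.

def poly_mul(x, h):
    # Linear convolution of two equal-length lists (length a power of two),
    # built by recursively nesting the same three-multiplication identity
    # used in the small example above.
    n = len(x)
    if n == 1:
        return [x[0] * h[0]]
    half = n // 2
    x_lo, x_hi = x[:half], x[half:]
    h_lo, h_hi = h[:half], h[half:]
    # Three half-size products instead of four.
    lo  = poly_mul(x_lo, h_lo)
    hi  = poly_mul(x_hi, h_hi)
    mid = poly_mul([a + b for a, b in zip(x_lo, x_hi)],
                   [a + b for a, b in zip(h_lo, h_hi)])
    mid = [m - a - b for m, a, b in zip(mid, lo, hi)]
    # Post-additions assemble the full-length output.
    out = [0] * (2 * n - 1)
    for i, v in enumerate(lo):
        out[i] += v
    for i, v in enumerate(mid):
        out[i + half] += v
    for i, v in enumerate(hi):
        out[i + 2 * half] += v
    return out

print(poly_mul([3, 5, 1, 2], [2, 7, 4, 6]))   # [6, 31, 49, 49, 48, 14, 12]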

“Our algorithms scale gracefully and deal easily with size and complexity,” says Yan. “The earlier algorithms worked fine for small problems, but problems have become more complex. Without algorithms like ours, this complexity would overwhelm processors.

“Our goal is to make it possible for information technologies to continue to improve at the rate that consumers are accustomed to.”