When you view a YouTube video, you are viewing tens of gigabytes of footage compressed by as much as a factor of 50. Transmitting what an HD camera captures means sending large quantities of frame-by-frame video data, and in sports broadcasting it must happen fast.
“We can take advantage of similarities of each frame to reduce the size of the transmissions,” Saeid Nooshabadi says.
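The idea of exploiting frame-to-frame similarity can be sketched with a toy delta-encoding scheme: store the first frame whole, then store each later frame only as its difference from the previous one. The function names and frame sizes below are hypothetical, and this is a simplified stand-in for the motion-compensated prediction real video codecs use.

```python
import numpy as np
import zlib

def delta_encode(frames):
    """Encode each frame after the first as a difference from its
    predecessor. Consecutive video frames are mostly identical, so
    the differences are mostly zeros and compress far better than
    the raw frames."""
    deltas = [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        deltas.append(cur - prev)
    return deltas

# Two nearly identical 64x64 "frames": only a small region changes.
rng = np.random.default_rng(0)
frame0 = rng.integers(0, 256, size=(64, 64), dtype=np.int16)
frame1 = frame0.copy()
frame1[10:12, 10:12] += 5  # a tiny change between frames

raw_size = len(zlib.compress(np.stack([frame0, frame1]).tobytes()))
delta_size = len(zlib.compress(np.stack(delta_encode([frame0, frame1])).tobytes()))
print(delta_size < raw_size)
```

Because the second frame's delta is almost entirely zeros, the delta stream compresses to roughly half the size of the raw pair here; over thousands of frames the savings compound.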
In the case of sports, where video is captured from multiple angles, computer scientists can reconstruct missing coverage using free-view video technology. "The more cameras recording, the better," he adds. Computational complexity is high because sports coverage is real-time. Applications of Nooshabadi's multi-view video processing work, funded by the National Science Foundation, include not only sports reporting but also surveillance and even remote surgery.
When your smartphone shoots in burst mode, capturing a photo every half-second, each image is ever-so-slightly different. The images can be combined, stacked, and processed using complex mathematical operations to enhance their quality, a technique now common in consumer-imaging devices.
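A minimal sketch of why stacking helps: if each burst frame is the same scene plus independent sensor noise, simply averaging the frames suppresses the noise. The scene, noise level, and burst length below are made-up illustrative values, not details from Nooshabadi's work.

```python
import numpy as np

def stack_burst(frames):
    """Average a burst of noisy frames to estimate the clean image.
    Independent noise partially cancels, so the average is closer
    to the true scene than any single frame."""
    return np.mean(frames, axis=0)

rng = np.random.default_rng(42)
truth = rng.uniform(0, 255, size=(32, 32))                 # hypothetical scene
burst = [truth + rng.normal(0, 20, truth.shape) for _ in range(8)]

single_err = np.abs(burst[0] - truth).mean()
stacked_err = np.abs(stack_burst(burst) - truth).mean()
print(stacked_err < single_err)  # the stacked image is cleaner
```

Averaging n frames cuts the noise standard deviation by a factor of roughly the square root of n; real burst pipelines must first align the slightly different frames before stacking.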
“One of my students is working with the Donald Danforth Plant Science Center to apply image registration techniques to phenotyping applications. The technique requires referencing data from multiple sensors to the same spatial location, so data from multiple sensors can be integrated and analyzed to extract useful information,” Nooshabadi says.
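Image registration, the step of referencing data from different sensors to the same spatial location, can be illustrated with a toy brute-force alignment: search for the pixel shift that best maps one sensor's image onto the other's. This is a deliberately simplified stand-in for real registration methods (phase correlation, feature matching), and the function name and image sizes are hypothetical.

```python
import numpy as np

def register_translation(ref, moving, max_shift=5):
    """Brute-force search for the integer (dy, dx) shift that best
    aligns `moving` to `ref`, minimizing mean absolute difference."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = np.abs(shifted - ref).mean()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

rng = np.random.default_rng(0)
ref = rng.uniform(size=(40, 40))
# Simulate a second sensor whose view is offset by a few pixels.
moving = np.roll(np.roll(ref, -3, axis=0), 2, axis=1)
print(register_translation(ref, moving))  # recovers the (3, -2) correction
```

Once the offset is known, measurements from both sensors refer to the same spatial location and can be fused and analyzed together, which is the point of the phenotyping pipeline described above.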
“Previously these technologies required supercomputers. Now with advancements in mobile digital devices, the technology is becoming faster and more accessible.”