
Self-Stabilizing Systems

It was August 15, 2003. A software bug helped trigger a blackout spanning the Northeast, Midwest, and parts of Canada. Subways shut down. Hospital patients suffered in stifling heat. And police evacuated people trapped in elevators.

What should have been a manageable, local blackout cascaded into widespread distress on the electric grid. A race condition in the control software silenced the alarm system, leaving operators unaware of the need to redistribute power after overloaded transmission lines sagged into unpruned foliage.*


Ali Ebnenasir is working to prevent another Northeast Blackout. He’s creating and testing new design methods for more dependable software in the presence of unanticipated environmental and internal faults. “What software does or doesn’t do is critical,” Ebnenasir explains. “Think about medical devices controlled by software. Patient lives are at stake when there’s a software malfunction.”

How do you make distributed software more dependable? In the case of a single machine—like a smartphone—it’s easy. Just hit reset. But for a network, there is no centralized reset. “Our challenge is to design distributed software systems that automatically recover from unanticipated events,” Ebnenasir says.

The problem—and some solutions—has been around for nearly 40 years, but no uniform theory for designing self-stabilizing systems exists. “Now we’re equipping software engineers with tools and methods to design systems that autonomously recover.”
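Ebnenasir’s own design methods aren’t detailed here, but the textbook illustration of self-stabilization is Dijkstra’s K-state token ring: no matter how the machines’ states are corrupted, purely local rules drive the ring back to a single circulating privilege, with no global reset. A minimal sketch (the ring size, state count, and random scheduler are illustrative assumptions):

```python
import random

def privileged(x, i, K):
    """Machine 0 holds a privilege when it matches its neighbor;
    every other machine i holds one when it differs from machine i-1."""
    return x[0] == x[-1] if i == 0 else x[i] != x[i - 1]

def fire(x, i, K):
    """Execute machine i's move; only legal when i is privileged."""
    if i == 0:
        x[0] = (x[0] + 1) % K
    else:
        x[i] = x[i - 1]

def stabilize(x, K, rng):
    """Let a random (adversarial) scheduler run privileged machines
    until exactly one privilege remains; return the move count."""
    moves = 0
    while True:
        priv = [i for i in range(len(x)) if privileged(x, i, K)]
        if len(priv) == 1:      # legitimate state: mutual exclusion holds
            return moves
        fire(x, rng.choice(priv), K)
        moves += 1
```

Starting from any corrupted state, as long as K exceeds the number of machines the ring converges to a single circulating token — the essence of autonomous recovery in a distributed system.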

Ebnenasir’s work has been funded by the National Science Foundation.

*Source: Wikipedia

Ubiquitous High-Performance Computing (UHPC) and X-Stack Projects

The Ubiquitous High-Performance Computing project, funded by the Defense Advanced Research Projects Agency (DARPA), initiated research on the energy-efficient, resilient, many-core computing anticipated for 2018. Faced with the end of Dennard scaling, it was imperative to design better hardware and software to rein in the energy consumption of future computers, but also to exploit a large number of cores in a single cabinet (up to 10¹⁵ floating-point operations per second, or one petaflop), all the while consuming no more than 50 kW. A thousand such cabinets would have the potential to reach one exaflop (10¹⁸ floating-point operations per second). The hardware should expose several “knobs” to the software, to allow applications to gracefully adapt to a very dynamic environment, and expand and/or contract parallelism depending on various constraints such as the maximum authorized power envelope, desired energy efficiency, and required minimum performance.

Following UHPC, the Department of Energy-funded X-Stack Software Research project recentered the objectives: nodes would continue to communicate through traditional high-performance libraries such as the Message-Passing Interface (MPI), while both hardware and software were revolutionized at the compute-node level.

In both cases, it was deemed unlikely that traditional programming and execution models would be able to deal with novel hardware. Taking advantage of the parallelism offered by the target straw-man hardware platform would be impossible without new system software components.

The Codelet Model was then implemented in various runtime systems, and inspired the Intel-led X-Stack project to define the Open Community Runtime (OCR). The Codelet Model was used on various architectures, from the IBM Cyclops-64 general-purpose many-core processor, to regular x86 compute nodes, as well as the Intel straw-man architecture, Traleika Glacier. Depending on the implementation, codelet-based runtime systems run on shared-memory or distributed systems. They showed their potential on both classical scientific workloads based on linear algebra, and more recent (and irregular) ones such as graph-related parallel breadth-first search. To achieve good results, hierarchical parallelism and specific task-scheduling strategies were needed.
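The heart of the Codelet Model is event-driven scheduling: a codelet becomes runnable only once all of its data dependencies are satisfied. A toy scheduler built on that principle (the class and method names are mine for illustration, not the DARTS or OCR API):

```python
from collections import defaultdict

class Codelet:
    """A non-preemptive task that fires once all its inputs arrive."""
    def __init__(self, name, deps, fn):
        self.name, self.fn = name, fn
        self.pending = set(deps)   # dependencies not yet satisfied
        self.inputs = {}           # values received so far

class CodeletGraph:
    def __init__(self):
        self.consumers = defaultdict(list)  # producer -> waiting codelets
        self.results = {}
        self.ready = []                     # codelets with no pending deps

    def add(self, name, deps, fn):
        c = Codelet(name, deps, fn)
        for d in deps:
            self.consumers[d].append(c)
        if not deps:
            self.ready.append(c)

    def run(self):
        """Dataflow firing rule: execute ready codelets, and each result
        may enable the codelets that consume it."""
        order = []
        while self.ready:
            c = self.ready.pop()
            self.results[c.name] = c.fn(**c.inputs)
            order.append(c.name)
            for consumer in self.consumers[c.name]:
                consumer.pending.discard(c.name)
                consumer.inputs[c.name] = self.results[c.name]
                if not consumer.pending:
                    self.ready.append(consumer)
        return order
```

For example, a codelet summing two producers can only fire after both producers have run, regardless of the order the scheduler picks them in.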

Self-awareness is a combination of introspection and adaptation mechanisms. Introspection is used to determine the health of the system, while adaptation changes system parameters—for example, making parts of the compute node consume less energy or shutting down processing units. Introspection and adaptation are driven by high-level goals expressed by the user, related to power and energy consumption, performance, and resilience.

The team studied how to perform fine-grain resource management to achieve self-awareness using codelets, and built a self-aware simulation tool to evaluate the benefits of various adaptive strategies.
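A caricature of such an introspection/adaptation cycle, with an assumed power sensor and an assumed “active units” knob (both hypothetical stand-ins for the real runtime interfaces):

```python
def adapt(measure_power, goal_watts, units, min_units=1, steps=20):
    """Introspect (read power), then adapt (contract or expand the
    number of active processing units) toward the user's power goal."""
    for _ in range(steps):
        watts = measure_power(units)                  # introspection
        if watts > goal_watts and units > min_units:
            units -= 1                                # contract parallelism
        elif measure_power(units + 1) <= goal_watts:
            units += 1                                # expand while under budget
    return units
```

With a linear power model of 10 W per unit and a 55 W goal, the loop settles at five active units from either direction, mimicking how a self-aware runtime trades parallelism against a power envelope.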


The TERAFLUX Project

The TERAFLUX project was funded by the European Union. It targeted so-called “teradevices,” devices featuring more than 1,000 cores on a single chip, but with an architecture that makes it near-impossible to exploit using traditional programming and execution models. DF-Threads, a novel execution model based on dataflow principles, was proposed to exploit such devices. A simulation infrastructure was used to demonstrate the potential of such a solution, while remaining programmable. At the same time, it was important to maintain a certain level of compatibility with existing systems and features expected by application programmers.

Both models borrow from dataflow models of computation, but they each feature subtle differences requiring special care to bridge them. Stéphane Zuckerman and his colleagues ported DARTS—their implementation of the Codelet Model—to the TERAFLUX simulator, and showed a convergence path existed between the DF-Threads and Codelet execution models. The research demonstrated the advantages of hardware-based, software-controlled multithreading with hardware scheduling units for scalability and performance.

Stéphane Zuckerman presented the results and outcomes of his research in peer-reviewed conferences and workshops.

Improving Cyber Security—Education and Application

Most cyber attacks aren’t new. Rather, they are new to the administrators encountering them. “The workforce isn’t well trained in these complex issues,” Jean Mayo explains. “One problem we encounter in education is that we cannot allow students to modify the software that controls an actual system—they can cause real damage.”

Our goal is to keep the data safe not only by controlling who has access, but by ensuring file integrity.

With support from the National Science Foundation, a team of Michigan Tech computer scientists teaches modern models of access control using visualization systems within user-level software.

Mayo and her team are also taking a fresh look at teaching students how to code securely. “The system we developed will detect when security is compromised and provide students with an explanation of what went wrong and how to fix it,” she adds.


File System Enhancement for Emerging Computer System Concerns

Mayo is applying existing firewall technology to file system access control. In her core research, she’s providing greater flexibility for administrators to determine when access is granted. “Using the firewall model to filter traffic content—like a guard standing by a door—we can add more variables to control file access, like time of day or location. It is more flexible, but also more complex—firewalls are familiar and help administrators navigate the complexity.”

Mayo is also developing a language for guaranteeing file security. “Our goal is to keep the data safe not only by controlling who has access, but by ensuring file integrity.” This system will disallow changes made to a file when the change doesn’t meet file specifications. “This helps to prevent users from entering incorrect data.”
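A sketch of both ideas together: firewall-like rules extended with time and location, and a write filter that enforces a file’s content specification. The rule format, field names, and example policy are hypothetical illustrations, not Mayo’s actual language:

```python
from datetime import time

# Ordered rule chain; first matching rule wins, like firewall processing.
RULES = [
    # (user, path prefix, window start, window end, allowed locations)
    ("alice", "/payroll/", time(8, 0),  time(18, 0),  {"office"}),
    ("*",     "/public/",  time(0, 0),  time(23, 59), {"office", "remote"}),
]

def access_allowed(user, path, now, location):
    """Grant access only if the matching rule's time window and
    location set are both satisfied; default deny."""
    for rule_user, prefix, start, end, places in RULES:
        if rule_user in (user, "*") and path.startswith(prefix):
            return start <= now <= end and location in places
    return False

def write_allowed(new_content, spec):
    """Integrity filter: reject a write whose content violates the
    file's specification (here, any boolean predicate)."""
    return spec(new_content)
```

So Alice can read payroll files from the office at 9:30 a.m. but not at 10 p.m., and a file specified to hold only digits rejects a write containing letters — the “guard standing by a door” with extra variables in hand.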

Better, Faster Video Processing and Image Enhancement

When you view a YouTube video, you are viewing tens of gigabytes of footage compressed by a factor of up to 50. Transmitting what an HD camera captures requires moving large quantities of frame-by-frame video data—and in cases like sports broadcasting, it must happen fast.

Computational complexity is high because sports coverage is real-time.

“We can take advantage of similarities of each frame to reduce the size of the transmissions,” Saeid Nooshabadi says.
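That frame-to-frame similarity is the basis of inter-frame (delta) coding: send one full frame, then only the pixels that changed. A toy version over flat pixel lists, purely to show the idea — real codecs add motion compensation, transforms, and entropy coding:

```python
def delta_encode(frames):
    """Send the first frame whole, then only changed (index, value) pairs."""
    out = [("key", list(frames[0]))]
    prev = frames[0]
    for f in frames[1:]:
        diff = [(i, v) for i, (p, v) in enumerate(zip(prev, f)) if p != v]
        out.append(("delta", diff))
        prev = f
    return out

def delta_decode(encoded):
    """Rebuild every frame by applying each delta to the previous frame."""
    frames, cur = [], None
    for kind, payload in encoded:
        if kind == "key":
            cur = list(payload)
        else:
            cur = list(cur)            # copy, then patch changed pixels
            for i, v in payload:
                cur[i] = v
        frames.append(cur)
    return frames
```

When consecutive frames are nearly identical, each delta carries a handful of pixels instead of a whole frame, which is where the 50-fold savings come from.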

In the case of sports, where video is captured from multiple angles, computer scientists can reconstruct missing coverage using free-view video technology. “The more cameras recording—the better,” he adds. Computational complexity is high because sports coverage is real-time. Applications of Nooshabadi’s multi-view video processing work, funded by the National Science Foundation, include not only sports reporting, but surveillance and even remote surgery.

When your smartphone captures photos in burst mode, capturing a photo every half-second, each image is ever-so-slightly different. The images can be combined, stacked, and processed using complex mathematical operations to enhance the quality. This technology is useful in consumer-imaging devices.
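The stacking step can be as simple as a per-pixel average of the registered frames, which attenuates random sensor noise. A toy sketch under that assumption — real burst pipelines also align the frames and weight them:

```python
def stack_frames(frames):
    """Per-pixel average of co-registered burst frames; averaging n
    frames shrinks zero-mean noise by roughly a factor of sqrt(n)."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]
```

Average four frames of the same scene whose noise alternates in sign, and the noise cancels while the scene survives.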


“One of my students is working with the Donald Danforth Plant Science Center to apply image registration techniques to phenotyping applications. The technique requires referencing data from multiple sensors to the same spatial location, so data from multiple sensors can be integrated and analyzed to extract useful information,” Nooshabadi says.

“Previously these technologies required supercomputers. Now with advancements in mobile digital devices, the technology is becoming faster and more accessible.”

Planning Under Uncertainty

The road below has no forks, nor is it fog-covered, but you still can’t predict what lies ahead. Making decisions under uncertainty involves more than being presented with multiple options and choosing the best one. The problem is much more complex because the forks and options are not readily seen.

Inevitably, plans go wrong. Plans for robots and plans for humans. It’s impossible to predict all the ways plans may go wrong—or how to fix them. Nilufer Onder works to create algorithms that detect and repair failing plans—from construction management to the Mars rover and microarchitecture. Her research spans interdisciplinary areas where uncertainty is prevalent.

Simulator Verification: Searching for a Base Truth

Simulators are large, complex pieces of code. Simulation developers continually modify the code to adapt to ever-changing technology. Onder and her team from Michigan Tech, including Zhenlin Wang and Soner Onder, developed a graphical structure to automatically derive verification constraints from simulator traces. SFTAGs (state-flow temporal analysis graphs) take into account stochastic paths and durations taken by events that are being simulated.
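The flavor of the approach — derive constraints from known-good simulator traces, then flag later runs that violate them — can be sketched crudely as follows. SFTAGs additionally model stochastic paths and event durations, which this toy omits:

```python
def build_constraints(traces):
    """Collect every state transition observed in known-good traces
    (a crude stand-in for deriving SFTAG verification constraints)."""
    allowed = set()
    for trace in traces:
        allowed.update(zip(trace, trace[1:]))
    return allowed

def verify(trace, allowed):
    """Return the transitions in a new trace never seen before;
    an empty list means the run is consistent with the references."""
    return [t for t in zip(trace, trace[1:]) if t not in allowed]
```

A modified simulator that suddenly jumps from `fetch` straight to `execute`, skipping `decode`, would be flagged immediately.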

Constructing Parallel Plans

Automatic generation of robust plans that operate in realistic domains involves reasoning under uncertainty, operating under time and resource constraints, and finding the optimal set of goals to work on. Creating plans that consider all of these features is a computationally complex problem addressed with the planner CPOAO (concurrent probabilistic oversubscribed planning using AO* search). CPOAO includes novel domain-independent heuristics and pruning techniques to reduce the search space.


Risk-Informed Project Management

The construction industry is the largest single production activity in the US economy—accounting for nearly 10 percent of the gross national product. Contingencies commonly cause delays and added costs in construction projects. Onder’s work involves providing automated techniques to avoid and respond to contingencies.

Together with Amlan Mukherjee, a researcher in civil and environmental engineering at Michigan Tech, Onder created a learning environment for construction management students to predict and address change. “Students take a construction plan and overlay it with events that cause delays. Then we ask students to react to the scenarios,” she explains.

Onder’s team developed ICDMA (interactive construction decision-making aid) which uses AI-planning technology to predict the paths a project can take.

Student Persistence in Engineering and Computer Science

Careers in engineering and computer science usually promise a well-paying and -respected job. However, approximately 55 percent of US students leave these fields within six years, choosing a non-STEM field or leaving higher education altogether. Onder’s group investigates the complex issues surrounding student persistence, including who influences career choices, what factors affect changing majors, and the under-representation issues involved in staying in a major.

Creating Opportunities for Women in Computing

For Linda Ott, debugging a program is like solving a mystery. “We don’t tell girls about computing when they’re young, so they don’t see how fun computing can be,” Ott explains. “They hear about biology and chemistry, but computing seems abstract.” And very few middle and high schools have computing courses or instructors. “Girls don’t see role models,” she adds.

Computer science is no longer lone individuals sitting in a dark room on a computer. It’s vibrant, team-based, and a lot more fun.

Ott studied computer science at Purdue University in the 1970s—a time when there were few other female computing scholars. At Michigan Tech, she is devoted to giving more women the opportunity to discover computing.

Ott observes that when girls do have the chance to program—to create something out of nothing—they often really enjoy the experience. “It’s problem solving. They get to express ideas by writing code.”

With a grant from the Jackson National Life Insurance Company, in 2014 Ott helped restart the Women in Computer Science Summer Program. She is integral in the fundraising, curriculum, instruction, and coordination of the weeklong program that offers 36 girls from Michigan, Minnesota, Wisconsin, Illinois, North Carolina, and Pennsylvania the chance to discover computer science.


Through the National Center for Women and Information Technology Pacesetters, Ott works with a cohort of academic and industry professionals who are committed to dedicating resources, brainstorming, and marketing to recruit more women into the computing fields. “There’s a spectrum of possibilities for women in computing—they may work in the user experience end or be involved as a project manager,” Ott says.

Through her work with students, Ott observes that typical computer science job descriptions are obsolete and career assessments can be misguiding. “Women might not enter this field because an assessment directs them to other areas. What they don’t realize is the wide array of skills useful in this field.” She has convinced NCWIT to take a look at this problem—and to reevaluate career assessments, too.

“Computer science is no longer lone individuals sitting in a dark room on a computer. It’s vibrant, team-based, and a lot more fun.”

The Making of a Citizen Science App

Astronomy is a citizen science. Its foundation is ordinary people who help answer serious scientific questions by providing vital data to the astronomical community. Nebulas, supernovas, and gamma-ray sightings.

The availability of smartphones makes collecting and sharing scientific data easier, faster, and more accurate.

These days former astronomy teacher Robert Pastel isn’t as interested in the stars, but he is serious about environmental science and using computer science—and smartphones—to capture more data from citizen scientists.

The availability of smartphones makes collecting and sharing scientific data easier, faster, and more accurate. Pastel works with Alex Mayer, professor of civil and environmental engineering at Michigan Tech, students in both computer science and humanities, and scientists around the world to build mobile apps that feed real-world projects.

It starts in the summer, with scientists. “We reach out to them, or they find us. They share an idea and how citizen science can be used,” Pastel explains. “Then the app building begins; it’s about a two-year process.”

When the academic year rolls around, Pastel challenges his Human-Computer Interactions class to build the initial app prototype. In the following year, during Pastel’s Senior Design course, the app undergoes a makeover—from mobile app to a web-based tool. “By this time the scientists have likely changed their minds or solidified their ideas, and more changes are made,” Pastel adds.


An interactive mushroom mapper is the group’s most successful accomplishment to date. Hikers, bikers, or climbers—anyone with a smartphone and an affinity for fungi—capture a photo of the fungus, specify the type, describe the location, and hit submit. All via the app. The mushroom observation data reaches Eric Lilleskoz, a research ecologist with the United States Department of Agriculture. Mushroom Mapper has more than 250 observations from around the country. The app is also used for natural science education in local middle schools.

In addition to creating apps for citizen science, this NSF-supported effort has spawned student-initiated software development and offline apps.

Student Success in Computer Science

Redeveloping Michigan Tech’s introductory computer science courses has not been an easy feat. But for Leo Ureel, it’s meaningful work. “It’s about setting the right environment,” he says.

Humans learn best when we communicate with others. We’ve taken what we know works in industry and applied it to the classroom.

In the old model, instructors lectured, then assigned independent tasks. Teaching assistants graded the projects and returned them to students two or three weeks later. In a new model Ureel helped create, students work in groups of two to four to mimic workforce settings. “We are no longer just feeding information. Humans learn best when we communicate with others. We’ve taken what we know works in industry and applied it to the classroom,” Ureel explains.

With support from a Jackson Blended Learning Grant, Ureel implemented a web-based teaching assistant to tighten the feedback loop for students. Students submit code via a web portal and receive instant feedback. “They continue submitting work until they get it right. It’s mastery learning,” Ureel adds.
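The instant-feedback idea boils down to running instructor-written checks on a submission the moment it arrives. A minimal sketch of such an auto-grader (the real web-based teaching assistant’s interface isn’t described here, so the shape of the test tuples is my assumption):

```python
def grade(submission, tests):
    """Run each (label, args, expected) check against a student's
    function and return immediate, per-test feedback."""
    feedback = []
    for label, args, expected in tests:
        try:
            got = submission(*args)
            feedback.append((label, "pass" if got == expected else
                             f"fail: expected {expected!r}, got {got!r}"))
        except Exception as exc:          # student code may crash
            feedback.append((label, f"error: {exc}"))
    return feedback
```

Students resubmit until every check passes — the feedback loop that used to take two or three weeks now takes seconds.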

Authentic Learning Experiences

When first-year Michigan Tech student Lauren Brindley received a Google Ignite Computer Science grant to provide funding for 10 robots, Ureel knew it was an opportunity to provide a rich learning experience for students. “After graduation, it’s likely students will build robots in their careers; we’re providing real-world, hands-on learning from day one.” Ureel is developing inquiry curriculum where first-year computer science students will explore how to program the rover robots to move about the room.


Ureel’s next challenge is to assess each first-year student to ensure they’re in the proper course. “Nonmajors often come in with little to no programming experience; meanwhile computer science majors are off and running, ready for a challenge,” Ureel says. To help several hundred students determine the best courses, Ureel is creating an online course sample so students get a taste of course content before making any decisions.

Preliminary data indicates Ureel’s efforts are working. “Engagement, retention, and grades are improving.”

Advancements in Eyes-free Text Entry

For Keith Vertanen, the satisfaction of helping people with visual impairments is a byproduct of the challenge he seeks.

Vertanen’s research will offer more texting options not only to the blind community, but to the situationally impaired, too.

“My interest stemmed from sighted text entry research. The decoder (a touchscreen keyboard recognizer) is so accurate—we craved a bigger undertaking,” Vertanen explains. So he dug into literature and consulted with users who are blind to determine the need for better eyes-free text-entry options.

Existing accessibility solutions are slow. “There is a delay because users have to search for the target, key, or graphic and wait for audio feedback,” Vertanen says. As users slide a finger around on the touchscreen, the system announces via text-to-speech what their finger is over. When they find the element they want (it could be a key on a touchscreen keyboard), they double tap with their searching finger or they “split tap” by tapping with a second finger. The interaction technique was developed out of research at the University of Washington and is now a standard accessibility feature on iPhone and Android phones.

With Vertanen’s prototype, users with visual impairments imagine the size, position, and orientation of the Qwerty keyboard. They are asked to tap out letters, and eventually sentences. So far, users accurately tap their intended text on the imaginary display about 50 percent of the time.


There’s more work to be done. From this noisy data, Vertanen asks two questions: Can we develop new and improved algorithms to more accurately recognize the user’s intended text? And can we find ways users can provide the recognizer with a better signal while still allowing fast entry?
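One standard way to recognize intended text from noisy taps is to score candidate words against an imagined keyboard layout under a Gaussian tap model, then pick the best-scoring word. A toy decoder in that spirit — the key coordinates and vocabulary are illustrative assumptions, not Vertanen’s actual decoder:

```python
# Approximate key centers for an imaginary Qwerty layout on a unit
# grid, with each row staggered half a key to the right.
KEYS = {}
for row, letters in enumerate(["qwertyuiop", "asdfghjkl", "zxcvbnm"]):
    for col, ch in enumerate(letters):
        KEYS[ch] = (col + 0.5 * row, row)

def score(word, taps):
    """Log-likelihood of a word under a Gaussian tap model: the
    closer each tap lands to its intended key, the higher the score."""
    if len(word) != len(taps):
        return float("-inf")
    return -sum((KEYS[c][0] - x) ** 2 + (KEYS[c][1] - y) ** 2
                for c, (x, y) in zip(word, taps))

def decode(taps, vocabulary):
    """Return the vocabulary word the noisy taps most likely intended."""
    return max(vocabulary, key=lambda w: score(w, taps))
```

Even when every tap misses its key center, the decoder recovers the intended word because the wrong candidates fit the tap pattern worse — the same principle that lets an imaginary, invisible keyboard work at all.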

Vertanen’s research will offer more texting options not only to the blind community, but to the situationally impaired, too: “Those times when you cannot attend to your phone, like when you’re walking. Or perhaps we can treat your airline tray table as a touch-typing surface—but without a visual display.”

His research will also impact the devices of the future, which may be designed without a text display.

“These are hard problems to solve. The other challenge is how to make error correction efficient and pleasant. This is especially true if people are entering difficult text such as proper names or acronyms. A complementary approach asks: how do you design text-entry interfaces that allow users to be more explicit (albeit slower) about the parts of their text they anticipate will be difficult to recognize?” Vertanen says.

Making Data Retrieval More Efficient

When a user performs a search in social media, the request doesn’t stay within that platform. It calls upon the resources of a data center. “When someone sends a request to a data center, they want an immediate answer—they don’t want to wait,” Zhenlin Wang explains.

We built upon memcached, open-source software adopted by Facebook and Twitter. They modified their approach to adapt to user demand. Our method beats their current practices.

Together with colleagues from Peking University, the University of Rochester, Wayne State University, and Michigan Tech, Wang looked to improve the internal structure, theory, and algorithm of memory cache to make it more efficient.

This work is an outgrowth of his 2007 CAREER award.


“Currently, bulky disks store the data and are slow to react. When smaller, in-memory cache is used, the search is much faster,” he adds. “We built upon memcached, open-source software adopted by Facebook and Twitter. They modified their approach to adapt to user demand. Our method beats their current practices,” Wang says.

“Imagine inviting 100 people over to your house for dinner, but only four will fit in your dining room. When we think about data resource management, it’s a similar scenario.”
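Wang’s actual improvements to memcached’s internals aren’t detailed here, but the dining-room analogy maps directly onto the eviction policy at a cache’s core. With least-recently-used (LRU) eviction, memcached’s default, the “guest” who has been idle longest gives up a seat when a new one arrives. A minimal sketch:

```python
from collections import OrderedDict

class LRUCache:
    """In-memory cache in the spirit of memcached: fixed capacity,
    evict the least-recently-used entry when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()     # insertion order tracks recency

    def get(self, key):
        if key not in self.data:
            return None               # miss: caller fetches from slow disk
        self.data.move_to_end(key)    # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict the guest idle longest
```

With four seats at the table, inviting a fifth guest means whoever has gone untouched the longest leaves first — exactly the resource-management decision the cache makes millions of times a second.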