Category Archives: Research

Call for Applications: Songer Research Award for Human Health Research

2018-19 Songer Award Recipients. Pictured Left to Right: Abby Sutherland, Billiane Kenyon, Jeremy Bigalke, Rupsa Basu, Matthew Songer, and Laura Songer.

Matthew Songer (Biological Sciences ’79) and Laura Songer (Biological Sciences ’80) have generously donated funds to the College of Sciences and Arts (CSA) to support a research project competition for undergraduate and graduate students. Remembering their own eagerness to engage in research during their undergraduate years, the Songers established these awards to stimulate and encourage opportunities for original research by current Michigan Tech students. The College is extremely grateful for the Songers’ continuing interest in, and support of, Michigan Tech’s programs in human health and medicine. This is the second year of the competition.

Students may propose an innovative medically oriented research project in any area of human health. The best projects will demonstrate the potential to have a broad impact on improving human life. This research will be pursued in consultation with faculty members within the College of Sciences and Arts. In the spring of 2019, the Songers’ gift will support one award for undergraduate research ($4,000) and a second award for graduate research ($6,000). Matching funds from the College may allow two additional awards.

Any Michigan Tech student interested in exploring a medically related question under the guidance of faculty in the College of Sciences and Arts may apply. Students majoring in any degree program in the college related to human health, whether traditional (e.g., biological sciences, kinesiology, chemistry) or nontraditional (e.g., physics, psychology, social sciences, bioethics, computer science, mathematics), may propose research projects connected to human health. Students are encouraged to propose original, stand-alone projects with expected durations of 6–12 months. The committee also encourages applications from CSA students who seek to continue research projects initiated through other campus mechanisms, such as the Summer Undergraduate Research Fellowship (SURF) program, Pavlis Honors College activities, or the Graduate Research Forum (GRF).

Funds from a Songer Award may be used to purchase or acquire research materials and equipment needed to perform the proposed research project. Access to, and research time on, University core research facilities, including computing, may also be supported. Requests to acquire a personal computer will be scrutinized and must be fully justified. Page charges for publications may also be covered with award funds, as may travel to appropriate academic meetings. The award may not be used for salary or compensation for the student or consulting faculty.

To apply:

  • Students should prepare a research project statement (up to five pages in length) that describes the background, methods to be used, and research objectives. The statement also should provide a detailed description of the experiments planned and expected outcomes. Students must indicate where they will carry out their project and attach a separate list of references/citations to relevant scientific literature.
  • The application package also should provide a concise title and brief summary (1 page) written for lay audiences.
  • A separate budget page should indicate how funds will be used.
  • A short letter from a consulting faculty member must verify that the student defined an original project and was the primary author of the proposal. The faculty member should also confirm her/his willingness to oversee the project. This faculty letter is not intended to serve as a recommendation on behalf of the student’s project.

Submit applications as a single PDF file to the Office of the College of Sciences and Arts by 4:00 p.m. Monday, April 22. Applications may be emailed to djhemmer@mtu.edu.

The selection committee will consist of Matthew Songer, Laura Songer, Shekhar Joshi (BioSci) and Megan Frost (KIP). The committee will review undergraduate and graduate proposals separately and will seek additional comments about the proposed research on an ad-hoc basis from reviewers familiar with the topic of the research proposal. Primary review criteria will be the originality and potential impact of the proposed study, as well as its feasibility and appropriateness for Michigan Tech’s facilities.

The committee expects to announce the recipients by early May of 2019. This one-time research award will be administered by the faculty advisor of the successful student investigator. Students will be expected to secure any necessary IRB approval before funds will be released. Funds must be expended by the end of spring semester 2020; extensions will not be granted. Recipients must submit a detailed report to the selection committee, including a description of results and an accounting of funds utilized, no later than June 30, 2020.

Any questions may be directed to Megan Frost (mcfrost@mtu.edu), David Hemmer (djhemmer@mtu.edu) or Shekhar Joshi (cpjoshi@mtu.edu).


On the Road

Timothy Havens

Tim Havens (ECE/CS) presented a paper entitled “SPFI: Shape-Preserving Choquet Fuzzy Integral for Non-Normal Fuzzy Set-Valued Evidence” this month at the IEEE World Congress on Computational Intelligence (WCCI) in Rio de Janeiro. Havens also co-authored two other papers presented at the conference. WCCI is the biennial meeting of the three leading computational intelligence conferences: the International Conference on Fuzzy Systems, the International Joint Conference on Neural Networks, and the Congress on Evolutionary Computation. Co-authors on the paper were Tony Pinar (ECE), Derek Anderson (U. Missouri), and Christian Wagner (U. Nottingham, UK). As general chair of the 2019 International Conference on Fuzzy Systems in New Orleans, Havens also presented a pitch for the upcoming event at the WCCI awards banquet.

Additionally, Havens presented an invited seminar, “How to Win on Trivia Night: Sensor Fusion Beyond the Weighted Average,” at MIT Lincoln Laboratory on July 16.


Graduate Student Colloquium features two CS students

Two Computer Science graduate students presented at the Graduate Research Colloquium hosted by the Graduate Student Government (GSG) this week. This is Michigan Tech’s largest graduate research showcase and competition, with graduate students presenting more than 60 research papers. See the article in Michigan Tech Today and the colloquium site at http://gsg.mtu.edu/grc/.

“Improving Caching for Web Applications” by Daniel Byrne

Abstract: Web applications employ caches to store the data that is most commonly accessed. The cache improves the application’s performance by reducing the time it takes to fetch a piece of data from the application’s database. Since the cache typically resides in a limited amount of system memory, maximizing the memory utilization is key to delivering the best possible performance. In addition, application data access patterns change over time, so the system should be adaptive in its memory allocation policy, as opposed to current static allocations. In this work, we address both multi-tenancy (where a single cache is used for multiple applications) and dynamic workloads (changing access patterns) using a sharing model that relates the cache size to the application miss rate, known as a miss-ratio curve. Intuitively, the larger the cache, the less likely the system will need to fetch the data from the database. Our efficient, online construction of the miss-ratio curve allows us to determine the optimal memory allocation given the available system memory, while adapting to changing data access patterns. We show that our model outperforms the existing state-of-the-art sharing model in terms of overall cache hit rate and does so at a lower time cost.
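Byrne’s abstract describes partitioning a shared cache across applications by comparing their miss-ratio curves. The sketch below is a hypothetical, simplified illustration of that idea, not Byrne’s implementation: the curves, request rates, and unit sizes are invented, and memory is handed out one unit at a time to whichever application’s curve promises the largest drop in misses.

```python
# Hypothetical sketch of miss-ratio-curve (MRC) based cache partitioning.
# Curves, request rates, and unit sizes are made up for illustration.

def allocate(mrcs, request_rates, total_units):
    """mrcs[app][k] = expected miss ratio when app holds k cache units.
    Greedily give each unit to the app whose misses drop the most."""
    alloc = {app: 0 for app in mrcs}
    for _ in range(total_units):
        def gain(app):
            k = alloc[app]
            if k + 1 >= len(mrcs[app]):
                return 0.0
            return (mrcs[app][k] - mrcs[app][k + 1]) * request_rates[app]
        best = max(mrcs, key=gain)
        alloc[best] += 1
    return alloc

# Made-up miss-ratio curves: miss ratio as a function of allocated cache units.
mrcs = {
    "web":  [1.0, 0.6, 0.45, 0.40, 0.39],
    "feed": [1.0, 0.9, 0.70, 0.40, 0.20],
}
rates = {"web": 1000, "feed": 500}   # requests per second (hypothetical)
print(allocate(mrcs, rates, 4))      # {'web': 3, 'feed': 1}
```

This greedy step is only a stand-in for the optimization the abstract refers to; it is provably optimal only when the curves are convex, whereas real workloads, and the adaptive online curve construction described above, are more complicated.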


“Maximizing Coverage in VANETs” by Ali Jalooli

The success of vehicular networks is highly dependent on message coverage, which refers to the Euclidean spatial distance that a message, once initiated by a given mobile node (i.e., source vehicle), can reach within time t. We studied the crucial problem of optimally utilizing roadside units (RSUs) in 2-D environments and proposed a greedy algorithm that, by taking V2V communication into consideration, finds the optimal locations for RSU deployment to achieve maximum message coverage.
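As a rough illustration of the greedy strategy described above, the sketch below places RSUs one at a time at whichever candidate site adds the most not-yet-covered road segments. The candidate sites, coverage sets, and segment IDs are invented; the actual algorithm also accounts for V2V relaying and 2-D geometry.

```python
# Hypothetical greedy RSU-placement sketch (not the paper's algorithm).
# coverage[site] = set of road-segment ids a message can reach from that site.

def place_rsus(coverage, k):
    covered, chosen = set(), []
    for _ in range(k):
        site = max(coverage, key=lambda s: len(coverage[s] - covered))
        if not coverage[site] - covered:
            break                     # nothing new can be covered
        chosen.append(site)
        covered |= coverage[site]
    return chosen

coverage = {
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {4, 5, 6, 7},
}
print(place_rsus(coverage, 2))        # ['C', 'A']
```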





Havens and Pinar Present in Naples and Attend Invited Workshop in UK

Timothy Havens

Tim Havens (ECE/CS) and Tony Pinar (ECE) presented several papers at the IEEE International Conference on Fuzzy Systems in Naples, Italy. Havens also chaired a session on Innovations in Fuzzy Inference.

Havens and Pinar also attended the Invited Workshop on the Future of Fuzzy Sets and Systems in Rothley, UK. This two-day workshop brought together leading researchers from around the globe to discuss future directions and strategies, particularly as they relate to cybersecurity. The event was hosted by the University of Nottingham, UK, and sponsored by the National Cyber Security Centre, part of the UK’s GCHQ.


Self-Stabilizing Systems

It was August 15, 2003. A software bug helped trigger a blackout spanning the Northeast, the Midwest, and parts of Canada. Subways shut down. Hospital patients suffered in stifling heat. And police evacuated people trapped in elevators.

What should have been a manageable, local blackout cascaded into widespread distress on the electric grid. A lack of alarms left operators unaware of the need to redistribute power after overloaded transmission lines hit unpruned foliage, a sequence of events that exposed a race condition in the control software.*


Ali Ebnenasir is working to prevent another Northeast Blackout. He’s creating and testing new design methods for more dependable software in the presence of unanticipated environmental and internal faults. “What software does or doesn’t do is critical,” Ebnenasir explains. “Think about medical devices controlled by software. Patient lives are at stake when there’s a software malfunction.”

How do you make distributed software more dependable? In the case of a single machine—like a smartphone—it’s easy. Just hit reset. But for a network, there is no centralized reset. “Our challenge is to design distributed software systems that automatically recover from unanticipated events,” Ebnenasir says.

The problem—and some solutions—has been around for nearly 40 years, but no uniform theory for designing self-stabilizing systems exists. “Now we’re equipping software engineers with tools and methods to design systems that autonomously recover.”
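A classic textbook example of what “recovering from any state” means is Dijkstra’s self-stabilizing token ring. The sketch below is that standard algorithm, shown only to illustrate the concept; it is not Ebnenasir’s tools or methods.

```python
# Dijkstra's K-state token ring: a textbook self-stabilizing algorithm.
# From ANY starting configuration (e.g., after an unanticipated fault), the
# ring converges to exactly one privilege ("token") without a central reset.
import random

def privileged(states):
    n = len(states)
    nodes = [0] if states[0] == states[-1] else []
    return nodes + [i for i in range(1, n) if states[i] != states[i - 1]]

def step(states, K):
    i = random.choice(privileged(states))            # a fair scheduler
    states[i] = (states[i] + 1) % K if i == 0 else states[i - 1]

n, K = 5, 6                                          # K >= n ensures convergence
states = [random.randrange(K) for _ in range(n)]     # arbitrary (corrupted) state
while len(privileged(states)) > 1:
    step(states, K)
print(states, "-> one token remains")
```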

Ebnenasir’s work has been funded by the National Science Foundation.

*Source: Wikipedia


Ubiquitous High-Performance Computing (UHPC) and X-Stack Projects

The Ubiquitous High-Performance Computing (UHPC) project, funded by the Defense Advanced Research Projects Agency (DARPA), initiated research on the energy-efficient, resilient, many-core computing expected on the 2018 horizon. Faced with the end of Dennard scaling, the project set out to provide better hardware and software to rein in the energy consumption of future computers, and to exploit a large number of cores in a single cabinet capable of up to 10^15 floating-point operations per second (one petaflop) while consuming no more than 50 kW. A thousand such machines would have the potential to reach one exaflop (10^18 floating-point operations per second). The hardware should expose several “knobs” to the software, allowing applications to gracefully adapt to a very dynamic environment and to expand and/or contract parallelism depending on constraints such as the maximum authorized power envelope, the desired energy efficiency, and the required minimum performance.

Following UHPC, the Department of Energy-funded X-Stack Software Research project recentered the objectives: inter-node communication would continue to rely on traditional high-performance communication libraries such as the Message-Passing Interface (MPI), while both hardware and software would be revolutionized at the compute-node level.

In both cases, it was deemed unlikely that traditional programming and execution models would be able to deal with novel hardware. Taking advantage of the parallelism offered by the target straw-man hardware platform would be impossible without new system software components.

The Codelet Model was then implemented in various runtime systems, and inspired the Intel-led X-Stack project to define the Open Community Runtime (OCR). The Codelet Model was used on various architectures, from the IBM Cyclops-64 general-purpose many-core processor, to regular x86 compute nodes, as well as the Intel straw-man architecture, Traleika Glacier. Depending on the implementations, codelet-based runtime systems run on shared-memory or distributed systems. They showed their potential on both classical scientific workloads based on linear algebra, and more recent (and irregular) ones such as graph-related parallel breadth-first search. To achieve good results, hierarchical parallelism and specific task-scheduling strategies were needed.
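The following sketch gives a rough, hypothetical flavor of codelet-style execution; it is not DARTS, OCR, or any of the runtimes named above. The essential rule it keeps is that a codelet is a small, non-blocking task that becomes runnable only once all of its dependences have been signaled.

```python
# Hypothetical codelet-style scheduler sketch (illustrative only).

class Codelet:
    def __init__(self, name, dep_count, body):
        self.name, self.remaining, self.body = name, dep_count, body
        self.successors = []            # codelets waiting on this one

def run(entry_codelets):
    ready = list(entry_codelets)
    while ready:
        c = ready.pop()
        c.body()                        # runs to completion, never blocks
        for succ in c.successors:       # signal dependents
            succ.remaining -= 1
            if succ.remaining == 0:
                ready.append(succ)

# Tiny dataflow graph: codelets a and b both feed codelet c.
a = Codelet("a", 0, lambda: print("a: produce x"))
b = Codelet("b", 0, lambda: print("b: produce y"))
c = Codelet("c", 2, lambda: print("c: consume x and y"))
a.successors.append(c)
b.successors.append(c)
run([a, b])
```

A real runtime distributes the ready queue across cores and applies the hierarchical parallelism and scheduling strategies mentioned above; the sketch keeps only the fire-when-dependences-are-satisfied rule.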

Self-awareness is a combination of introspection and adaptation mechanisms. Introspection is used to determine the health of the system, while adaptation changes the system’s parameters, for example so that parts of the compute node consume less energy or processing units are shut down. Introspection and adaptation are driven by high-level goals expressed by the user, related to power and energy consumption, performance, and resilience.

The team studied how to perform fine-grain resource management to achieve self-awareness using codelets, and built a self-aware simulation tool to evaluate the benefits of various adaptive strategies.
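The toy loop below illustrates the introspection/adaptation split in the simplest possible terms. The power and performance models, the single “active cores” knob, and the goal values are all invented and stand in for the much richer mechanisms the team studied.

```python
# Hypothetical introspection/adaptation sketch (made-up models and goals).

class Node:
    def __init__(self):
        self.active_cores = 8

    # Introspection: observe the node's current "health".
    def power_watts(self):
        return 10.0 * self.active_cores          # toy power model

    def tasks_per_sec(self):
        return 100.0 * self.active_cores         # toy performance model

    # Adaptation: adjust knobs to meet user-expressed goals.
    def adapt(self, goals):
        if self.power_watts() > goals["max_power_watts"] and self.active_cores > 1:
            self.active_cores -= 1                # shut down a processing unit
        elif self.tasks_per_sec() < goals["min_tasks_per_sec"]:
            self.active_cores += 1                # spend power to regain speed

node, goals = Node(), {"max_power_watts": 60.0, "min_tasks_per_sec": 300.0}
for _ in range(5):
    node.adapt(goals)
print(node.active_cores)                          # settles at 6 cores
```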


The TERAFLUX Project

The TERAFLUX project was funded by the European Union. It targeted so-called “teradevices”: devices featuring more than 1,000 cores on a single chip, but with an architecture that would make them near-impossible to exploit using traditional programming and execution models. DF-Threads, a novel execution model based on dataflow principles, was proposed to exploit such devices. A simulation infrastructure was used to demonstrate the potential of such a solution while remaining programmable. At the same time, it was important to maintain a certain level of compatibility with existing systems and features expected by application programmers.

Both models borrow from dataflow models of computation, but they each feature subtle differences requiring special care to bridge them. Stéphane Zuckerman and his colleagues ported DARTS—their implementation of the Codelet Model—to the TERAFLUX simulator and showed that a convergence path exists between the DF-Threads and Codelet execution models. The research demonstrated the advantages of hardware-based, software-controlled multithreading with hardware scheduling units for scalability and performance.

Stéphane Zuckerman presented the results and outcomes of his research in peer-reviewed conferences and workshops.


Improving Cyber Security—Education and Application

Most cyber attacks aren’t new. Rather, they are new to the administrators encountering them. “The workforce isn’t well trained in these complex issues,” Jean Mayo explains. “One problem we encounter in education is that we cannot allow students to modify the software that controls an actual system—they can cause real damage.”


With support from the National Science Foundation, a team of Michigan Tech computer scientists teaches modern models of access control using visualization systems within user-level software.

Mayo and her team are also taking a fresh look at teaching students how to code securely. “The system we developed will detect when security is compromised and provide students with an explanation of what went wrong and how to fix it,” she adds.


File System Enhancement for Emerging Computer System Concerns

Mayo is applying existing firewall technology to file system access control. In her core research, she’s providing greater flexibility for administrators to determine when access is granted. “Using the firewall model to filter traffic content—like a guard standing by a door—we can add more variables to control file access, like time of day or location. It is more flexible, but also more complex—firewalls are familiar and help administrators navigate the complexity.”
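A minimal sketch of the firewall analogy follows, assuming a first-match-wins rule chain; the field names and the policy itself (time of day and location as extra variables) are invented for illustration, not Mayo’s system.

```python
# Hypothetical firewall-style file-access rules (policy and fields are made up).
from datetime import time

RULES = [  # (action, predicate), evaluated first-match-wins like a filter chain
    ("deny",  lambda req: req["path"].startswith("/payroll")
                          and not time(8) <= req["when"] <= time(17)),
    ("deny",  lambda req: req["location"] != "on-campus"),
    ("allow", lambda req: True),                  # default rule
]

def check(req):
    for action, predicate in RULES:
        if predicate(req):
            return action

print(check({"path": "/payroll/q3.csv", "when": time(22, 30),
             "location": "on-campus"}))           # deny: outside working hours
print(check({"path": "/notes.txt", "when": time(9, 0),
             "location": "on-campus"}))           # allow
```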

Mayo is also developing a language for guaranteeing file security. “Our goal is to keep the data safe not only by controlling who has access, but by ensuring file integrity.” This system will disallow changes made to a file when the change doesn’t meet file specifications. “This helps to prevent users from entering incorrect data.”
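The sketch below illustrates the integrity idea in miniature, with an invented “specification” (a grade file whose scores must stay between 0 and 100); Mayo’s language for expressing such specifications is far more general.

```python
# Hypothetical integrity check: a write is rejected unless the proposed
# contents still satisfy the file's declared specification.
import csv, io

def grades_spec(text):
    """Every row must be 'name,score' with the score between 0 and 100."""
    try:
        return all(len(row) == 2 and 0 <= float(row[1]) <= 100
                   for row in csv.reader(io.StringIO(text)))
    except ValueError:
        return False

def guarded_write(current, proposed, spec):
    return proposed if spec(proposed) else current   # disallow the bad change

contents = "alice,91\nbob,84\n"
contents = guarded_write(contents, contents + "carol,105\n", grades_spec)  # rejected
contents = guarded_write(contents, contents + "carol,88\n",  grades_spec)  # accepted
print(contents)
```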


Better, Faster Video Processing and Image Enhancement

When you view a YouTube video, you are viewing tens of gigabytes of raw footage compressed by a factor of up to 50. Transmitting what an HD camera captures requires moving large quantities of frame-by-frame video data, and in cases like sports broadcasting, it must happen fast.


“We can take advantage of similarities of each frame to reduce the size of the transmissions,” Saeid Nooshabadi says.
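The toy example below shows the core of that idea under a big simplification (frames as flat pixel lists, no motion compensation): instead of re-sending every frame, only the pixels that changed since the previous frame are transmitted.

```python
# Toy inter-frame delta encoding (real codecs use motion-compensated prediction).

def encode_delta(prev, curr):
    """Return (index, new_value) pairs for the pixels that changed."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev, curr)) if p != c]

def apply_delta(prev, delta):
    frame = list(prev)
    for i, value in delta:
        frame[i] = value
    return frame

frame1 = [10, 10, 10, 50, 50, 10, 10, 10]
frame2 = [10, 10, 10, 52, 55, 10, 10, 10]     # only two pixels changed
delta = encode_delta(frame1, frame2)
print(delta)                                   # [(3, 52), (4, 55)]
print(apply_delta(frame1, delta) == frame2)    # True: receiver rebuilds frame2
```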

In the case of sports, where video is captured from multiple angles, computer scientists can reconstruct missing coverage using free-view video technology. “The more cameras recording—the better,” he adds. Computational complexity is high because sports coverage is real-time. Applications of Nooshabadi’s multi-view video processing work, funded by the National Science Foundation, include not only sports reporting, but surveillance and even remote surgery.

When your smartphone captures photos in burst mode, capturing a photo every half-second, each image is ever-so-slightly different. The images can be combined, stacked, and processed using complex mathematical operations to enhance the quality. This technology is useful in consumer-imaging devices.
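A minimal sketch of the stacking idea, assuming the burst frames are already aligned (real pipelines must register them first) and using a made-up noisy signal:

```python
# Toy burst-mode stacking: averaging aligned noisy frames cancels sensor noise.
import random

true_scene = [100, 120, 140, 160]                          # "ideal" pixel values
burst = [[p + random.gauss(0, 10) for p in true_scene]     # 8 noisy captures
         for _ in range(8)]

stacked = [sum(frame[i] for frame in burst) / len(burst)
           for i in range(len(true_scene))]

# Averaging N frames reduces the noise standard deviation by about sqrt(N),
# so the stacked image sits much closer to the true scene than any single shot.
print([round(p, 1) for p in stacked])
```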


“One of my students is working with the Donald Danforth Plant Science Center to apply image registration techniques to phenotyping applications. The technique requires referencing data from multiple sensors to the same spatial location, so data from multiple sensors can be integrated and analyzed to extract useful information,” Nooshabadi says.

“Previously these technologies required supercomputers. Now with advancements in mobile digital devices, the technology is becoming faster and more accessible.”