All posts by hrdunne

Undergraduate Programming Competition Win

Computer science undergraduate students received top honors at the 19th Annual Northern Michigan University Invitational Programming Contest, held March 24, 2018, which drew 95 students on 34 teams from six schools. Tony Duda, Justin Evankovich, and Nicholas Muggio took first place; Michael Lay, Parker Russcher, and Marcus Stojcevich took second. Michigan Tech posted the highest program count and the No. 1 overall ranking.

Congratulations!

“We are proud of our students for representing Husky values of possibility and tenacity.” —Min Song, Chair, Computer Science


Research Excellence Fund (REF) Award Announced

The Office of the Vice President for Research has announced the 2018 Research Excellence Fund (REF) awards and thanked the volunteer review committees, as well as the deans and department chairs, for their time spent on this important internal research award process.

Keith Vertanen (CS) has received a Research Excellence Fund (REF) seed grant from Michigan Tech for his project, “Automatic Speech Recognition using Deep Neural Networks.” The one-year project has a budget of $45,421 and will create a state-of-the-art speech recognition engine based on deep neural networks. The recognizer will be used to investigate speech-based interactive systems for instrumented physical environments (e.g., cars) and person-centric devices (e.g., augmented reality smartglasses), as well as the input of Java source code by voice.

Congratulations Keith!


Michigan Tech Among Best Computer Science Programs

BestValueSchools, a website that evaluates colleges and universities for the return on investment their education offers, has ranked Michigan Tech’s computer science program 14th among the top 30 computer science programs in the country.

The rankings took into account program demand, computational aptitude of students, research and development, and the return on investment based on salary reports by Payscale.com.

Describing Michigan Tech’s computer science program, BestValueSchools said:

If you’re interested in gaming, take a close look at Michigan Tech’s concentration in Game Development. You’ll get plenty of hands-on experience at this accredited computer science school as you learn to design and develop cutting-edge interactive games. A team-based approach leaves you well-prepared for a collaborative work environment after graduation, and some of the skills you learn can transfer to other fields besides gaming (virtual reality, for example). Michigan Tech also runs a few notable master’s degree programs, including a popular MS in the fast-growing field of cybersecurity. This degree even includes three subspecialties, so you can further refine your studies.




ICC Distinguished Lecturer Series Tomorrow

The Institute of Computing and Cybersystems (ICC) will host Jie Wu from 3 to 4 p.m. tomorrow (Sept. 22) in Rekhi 214.

He will present a lecture titled “Algorithmic Crowdsourcing and Applications in Big Data.” Refreshments will be served. Wu is the director of the Center for Networked Computing (CNC) and the Laura H. Carnell Professor at Temple University. He has served as associate vice provost for International Affairs and as chair of the Department of Computer and Information Sciences at Temple.

Prior to joining Temple University, he was a program director at the National Science Foundation and was a distinguished professor at Florida Atlantic University. A full bio and abstract can be found online.


Havens and Pinar Present in Naples and Attend Invited Workshop in UK

Tim Havens (ECE/CS) and Tony Pinar (ECE) presented several papers at the IEEE International Conference on Fuzzy Systems in Naples, Italy. Havens also chaired a session on Innovations in Fuzzy Inference.

Havens and Pinar also attended the Invited Workshop on the Future of Fuzzy Sets and Systems in Rothley, UK. The two-day workshop brought together leading researchers from around the globe to discuss future directions and strategies, particularly as they apply to cybersecurity. The event was hosted by the University of Nottingham and sponsored by the National Cyber Security Centre, part of the UK’s GCHQ.


Self-Stabilizing Systems

It was August 14, 2003. A software bug helped trigger a blackout spanning the Northeast, the Midwest, and parts of Canada. Subways shut down. Hospital patients suffered in stifling heat. And police evacuated people trapped in elevators.

What should have been a manageable, local outage cascaded into widespread distress on the electric grid. A race condition in the control software silenced the alarm system, leaving operators unaware of the need to redistribute power after overloaded transmission lines sagged into unpruned foliage.*


Ali Ebnenasir is working to prevent another Northeast Blackout. He’s creating and testing new design methods for more dependable software in the presence of unanticipated environmental and internal faults. “What software does or doesn’t do is critical,” Ebnenasir explains. “Think about medical devices controlled by software. Patient lives are at stake when there’s a software malfunction.”

How do you make distributed software more dependable? In the case of a single machine—like a smartphone—it’s easy. Just hit reset. But for a network, there is no centralized reset. “Our challenge is to design distributed software systems that automatically recover from unanticipated events,” Ebnenasir says.

The problem—and some solutions—has been around for nearly 40 years, but no uniform theory for designing self-stabilizing systems exists. “Now we’re equipping software engineers with tools and methods to design systems that autonomously recover.”
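
The idea is easiest to see in Dijkstra’s classic K-state token ring, the textbook example of self-stabilization. The Python sketch below is purely illustrative (it is not Ebnenasir’s own tooling): a ring of processes starts in an arbitrary, possibly corrupted state and converges on its own to a single circulating token, with no centralized reset.

import random

# Dijkstra's K-state token ring (illustrative): N processes hold counters
# x[0..N-1]. Process 0 "has the token" when its value equals its left
# neighbor's; every other process has the token when its value differs from
# its left neighbor's. From any starting state, the ring converges to exactly
# one circulating token.
N, K = 5, 7                                    # K > N guarantees convergence
x = [random.randrange(K) for _ in range(N)]    # arbitrary (corrupted) start

def has_token(i):
    return x[0] == x[N - 1] if i == 0 else x[i] != x[i - 1]

def fire(i):
    if i == 0:
        x[0] = (x[0] + 1) % K    # process 0 advances its counter
    else:
        x[i] = x[i - 1]          # others copy their neighbor, passing the token

for step in range(200):
    holders = [i for i in range(N) if has_token(i)]
    if len(holders) == 1:        # legitimate state: exactly one token
        print(f"stabilized after {step} steps; token at process {holders[0]}")
        break
    fire(random.choice(holders)) # any enabled process may take a step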

Ebnenasir’s work has been funded by the National Science Foundation.

*Source: Wikipedia


Ubiquitous High-Performance Computing (UHPC) and X-Stack Projects

The Ubiquitous High-Performance Computing (UHPC) project, funded by the Defense Advanced Research Projects Agency (DARPA), initiated research on the energy-efficient, resilient, many-core computing expected on the horizon for 2018. Faced with the end of Dennard scaling, it was imperative to provide better hardware and software to rein in the energy consumption of future computers, and also to exploit a large number of cores in a single cabinet (up to 10^15 floating-point operations per second) while consuming no more than 50 kW. A thousand such machines would have the potential to reach one exaflop (10^18 floating-point operations per second). The hardware should expose several “knobs” to the software, allowing applications to adapt gracefully to a very dynamic environment and to expand and/or contract parallelism depending on constraints such as the maximum authorized power envelope, the desired energy efficiency, and the required minimum performance.

Following UHPC, the Department of Energy-funded X-Stack Software Research project refocused the objectives: communication between nodes would continue to rely on traditional high-performance libraries such as the Message-Passing Interface (MPI), while both hardware and software were revolutionized at the compute-node level.

In both cases, it was deemed unlikely that traditional programming and execution models would be able to deal with novel hardware. Taking advantage of the parallelism offered by the target straw-man hardware platform would be impossible without new system software components.

The Codelet Model was then implemented in various runtime systems, and inspired the Intel-led X-Stack project to define the Open Community Runtime (OCR). The Codelet Model was used on various architectures, from the IBM Cyclops-64 general-purpose many-core processor, to regular x86 compute nodes, as well as the Intel straw-man architecture, Traleika Glacier. Depending on the implementations, codelet-based runtime systems run on shared-memory or distributed systems. They showed their potential on both classical scientific workloads based on linear algebra, and more recent (and irregular) ones such as graph-related parallel breadth-first search. To achieve good results, hierarchical parallelism and specific task-scheduling strategies were needed.
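
At its core, a codelet is a small, non-preemptive task that fires only once all of its dependences are satisfied. The Python sketch below illustrates that firing rule in miniature; the class and function names are invented for the example and do not reflect the DARTS or OCR APIs.

from collections import deque

class Codelet:
    """A non-preemptive task that fires when its dependence count reaches zero."""
    def __init__(self, name, deps, body):
        self.name = name
        self.remaining = deps        # inputs still outstanding
        self.body = body             # work to run once fired
        self.successors = []         # codelets consuming this one's output

def run(codelets):
    ready = deque(c for c in codelets if c.remaining == 0)
    while ready:
        c = ready.popleft()
        c.body()                     # execute to completion (no preemption)
        for succ in c.successors:
            succ.remaining -= 1      # signal a satisfied dependence
            if succ.remaining == 0:  # firing rule: all dependences met
                ready.append(succ)

# Tiny dependence graph: a and b each feed c.
a = Codelet("a", 0, lambda: print("a: produce left operand"))
b = Codelet("b", 0, lambda: print("b: produce right operand"))
c = Codelet("c", 2, lambda: print("c: combine the results"))
a.successors.append(c)
b.successors.append(c)
run([a, b, c])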

Self-awareness is a combination of introspection and adaptation mechanisms. Introspection is used to determine the health of the system, while adaptation changes system parameters, for example so that parts of the compute node consume less energy or shut down processing units entirely. Introspection and adaptation are driven by high-level goals expressed by the user, related to power and energy consumption, performance, and resilience.
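
As a rough sketch of what such a loop can look like, the Python snippet below keeps a hypothetical cabinet under the 50 kW envelope mentioned above by contracting and expanding the number of active cores. The sensor and knob functions are placeholders, not an actual runtime interface.

import random

POWER_CAP_W = 50_000      # cabinet-level power envelope (the UHPC target)
MAX_CORES = 1024

def read_power_watts(active_cores):
    # Introspection stand-in: pretend power scales with active cores, plus noise.
    return active_cores * 60 + random.uniform(-500, 500)

def set_active_cores(n):
    # Adaptation stand-in: a real runtime would gate clocks or park cores here.
    pass

def control_loop(steps=50):
    active = MAX_CORES
    for _ in range(steps):
        watts = read_power_watts(active)              # introspection
        if watts > POWER_CAP_W and active > 1:
            active = max(1, active - 32)              # over budget: shed cores
        elif watts < 0.9 * POWER_CAP_W and active < MAX_CORES:
            active = min(MAX_CORES, active + 32)      # headroom: restore parallelism
        set_active_cores(active)
    return active

print("settled on", control_loop(), "active cores")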

The team studied how to perform fine-grain resource management to achieve self-awareness using codelets, and built a self-aware simulation tool to evaluate the benefits of various adaptive strategies.


The TERAFLUX Project

The TERAFLUX project was funded by the European Union. It targeted so-called “teradevices,” devices featuring more than 1,000 cores on a single chip, but with an architecture that would make them nearly impossible to exploit using traditional programming and execution models. DF-Threads, a novel execution model based on dataflow principles, was proposed to exploit such devices. A simulation infrastructure was used to demonstrate the potential of such a solution while remaining programmable. At the same time, it was important to maintain a certain level of compatibility with existing systems and with the features application programmers expect.

Both models borrow from dataflow models of computation, but each features subtle differences requiring special care to bridge them. Stéphane Zuckerman and his colleagues ported DARTS—their implementation of the Codelet Model—to the TERAFLUX simulator and showed that a convergence path exists between the DF-Threads and Codelet execution models. The research demonstrated the advantages of hardware-based, software-controlled multithreading with hardware scheduling units for scalability and performance.

Stéphane Zuckerman presented the results and outcomes of his research in peer-reviewed conferences and workshops.


Improving Cyber Security—Education and Application

Most cyber attacks aren’t new. Rather, they are new to the administrators encountering them. “The workforce isn’t well trained in these complex issues,” Jean Mayo explains. “One problem we encounter in education is that we cannot allow students to modify the software that controls an actual system—they can cause real damage.”

Our goal is to keep the data safe not only by controlling who has access, but by ensuring file integrity.

With support from the National Science Foundation, a team of Michigan Tech computer scientists teaches modern models of access control using visualization systems within user-level software.

Mayo and her team are also taking a fresh look at teaching students how to code securely. “The system we developed will detect when security is compromised and provide students with an explanation of what went wrong and how to fix it,” she adds.


File System Enhancement for Emerging Computer System Concerns

Mayo is applying existing firewall technology to file system access control. In her core research, she’s providing greater flexibility for administrators to determine when access is granted. “Using the firewall model to filter traffic content—like a guard standing by a door—we can add more variables to control file access, like time of day or location. It is more flexible, but also more complex—firewalls are familiar and help administrators navigate the complexity.”
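
As an illustration of the idea, the Python sketch below filters file-access requests through an ordered rule list that can also test time of day and location, in the spirit of a packet-filtering firewall. The rule format is invented for the example; it is not the policy mechanism Mayo’s group is building.

from datetime import time

RULES = [
    # (user, path prefix, allowed hours, required location, decision)
    ("alice", "/records/", (time(8, 0), time(17, 0)), "on-site", "allow"),
    ("*",     "/records/", None,                      None,      "deny"),
    ("*",     "/public/",  None,                      None,      "allow"),
]

def check_access(user, path, now, location):
    for rule_user, prefix, hours, loc, decision in RULES:
        if rule_user not in ("*", user):
            continue
        if not path.startswith(prefix):
            continue
        if hours and not (hours[0] <= now <= hours[1]):
            continue
        if loc and loc != location:
            continue
        return decision            # first matching rule wins, as in a firewall
    return "deny"                  # default-deny when nothing matches

print(check_access("alice", "/records/x.db", time(9, 30), "on-site"))   # allow
print(check_access("alice", "/records/x.db", time(22, 0), "on-site"))   # deny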

Mayo is also developing a language for guaranteeing file security. “Our goal is to keep the data safe not only by controlling who has access, but by ensuring file integrity.” This system will disallow changes made to a file when the change doesn’t meet file specifications. “This helps to prevent users from entering incorrect data.”
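
As an illustration, the Python sketch below rejects a write whenever the proposed content violates a per-file specification. The specification format is invented for the example and is not the language Mayo is developing.

import csv, io

def valid_grades_csv(text):
    # Example specification: every row is "student_id,score" with 0 <= score <= 100.
    try:
        for row in csv.reader(io.StringIO(text)):
            if len(row) != 2 or not row[0].strip():
                return False
            if not 0 <= float(row[1]) <= 100:
                return False
        return True
    except ValueError:
        return False

SPECS = {"grades.csv": valid_grades_csv}

def guarded_write(path, new_content, store):
    spec = SPECS.get(path)
    if spec and not spec(new_content):
        raise PermissionError(f"write to {path} rejected: violates file specification")
    store[path] = new_content      # the change is applied only if it meets the spec

store = {}
guarded_write("grades.csv", "s001,92\ns002,88\n", store)     # accepted
try:
    guarded_write("grades.csv", "s003,140\n", store)          # rejected: score out of range
except PermissionError as err:
    print(err)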