Myounghoon (Philart) Jeon (CLS/CS) and his three graduate students are attending the Human Factors and Ergonomics Society 2017 International Annual Meeting, held Monday through today in Austin, Texas.
Jie Wu will present a lecture titled “Algorithmic Crowdsourcing and Applications in Big Data.” Refreshments will be served. Wu is director of the Center for Networked Computing (CNC) and Laura H. Carnell Professor at Temple University. He has served as associate vice provost for International Affairs and as chair of the Department of Computer and Information Sciences at Temple.
Prior to joining Temple University, he was a program director at the National Science Foundation and was a distinguished professor at Florida Atlantic University. A full bio and abstract can be found online.
Philart’s grant is a four-year award with a total budget of $350,000 from the Korea Automobile Testing & Research Institute. The grant will support two graduate students each year. The project is titled “Development of the safety assessment technique for take‐over in automated vehicles.”
The goal of the project is to design and evaluate intelligent auditory interactions to improve safety and user experience in automated vehicles. Research tasks include developing a driving simulator for automated driving, modeling driver states in automated vehicles, designing and evaluating discrete auditory alerts for safety purposes, and developing real-time sonification systems for overall user experience. Congratulations, Philart!
The Computer Science Learning Center Open House
The CS Learning Center is hosting an Open House Friday, September 15th from 4-5pm. Stop by to see the new space and meet the coaches at our new location in Rekhi 118.
Light refreshments will be served. All are welcome.
The new CS Learning Center has more windows for natural lighting, bean bags and comfy chairs for informal help sessions, and computers all equipped with dual monitors. With the new space comes the addition of more blended-learning technologies, including a Mersive system that enables coaches and students to project the screens of their wireless devices to a 50-inch monitor, and a Promethean digital whiteboard that lets coaches and students receive emailed images of the 70-inch screen after a tutoring session. The new equipment in the CS Learning Center was provided by the CTL/IT Distance Learning Grant Program with additional support from the CS Department. A special thanks goes to Dr. Robert Pastel for generously offering to move his lab so the CS Learning Center could have a larger, more suitable space.
Congratulations to Jimmy Roznick! Jimmy is the recipient of the DOD SMART Scholarship. “The SMART scholarship is a Department of Defense scholarship-for-service program aimed at supporting students in STEM fields. The scholarship covers the full cost of tuition and provides students with a monthly stipend. In return, students intern and work at a sponsoring facility for a number of years equal to the amount of schooling sponsored. I am very excited to be putting what I’ve learned at Michigan Tech to use for national security purposes,” Jimmy said. He will soon graduate with a bachelor’s degree in Software Engineering and will pursue a master’s in CS.
Tim Havens (ECE/CS) and Tony Pinar (ECE) presented several papers at the IEEE International Conference on Fuzzy Systems in Naples, Italy. Havens also chaired a session on Innovations in Fuzzy Inference.
Havens and Pinar also attended the Invited Workshop on the Future of Fuzzy Sets and Systems in Rothley, UK. This two-day event brought together leading researchers from around the globe to discuss future directions and strategies, in particular as they relate to cybersecurity. The event was hosted by the University of Nottingham, UK, and sponsored by the National Cyber Security Centre, part of the UK’s GCHQ.
Michigan Technological University is inviting K-12 teachers and administrators to a workshop in August, to help them find ways to bring computer science and programming into their classrooms. The workshop, supported through a Google CS4HS (Computer Science for High Schools) grant, exposes teachers to exciting new ways to bring computer science into schools.
This is the third year Google has supported a computer science workshop at Michigan Tech for teachers.
“As computer technology becomes an ever more powerful and pervasive factor in our world, students need instruction in the creative problem-solving skills that are the basis of computer science,” explains Linda Ott, professor of computer science at Michigan Tech and director of the workshop. “Software design and programming skills, along with an understanding of the principles of computer systems and applications, are tremendously valuable in a wide range of future careers, and the problem-solving process of computational thinking can be used to enrich a wide range of K-12 courses. New tools and teaching materials make it possible to bring the creative spirit of computing into K-12 classrooms.”
“From a teacher’s perspective, however, bringing computer science into the classroom can seem intimidating,” Ott goes on to say. “We want to help teachers develop confidence in their own computer science literacy and help them craft a computing curriculum that meets their teaching missions.”
The workshop will cover a basic understanding of computer science principles, help teachers integrate programming into new and existing courses, disseminate K-12 computer programming course materials developed at Michigan Tech, and provide tools for increasing interest in computing among young women.
Participants will receive lunches, a stipend to help with travel and other expenses and a year of assistance in course development from a Michigan Tech computer science graduate student. Out-of-town teachers will receive free accommodation at the Magnuson Franklin Square Inn.
Visit the Tech Today article (http://www.mtu.edu/ttoday/) by J. Donovan for a link on how to apply.
It was August 15, 2003. A software bug helped cause a blackout spanning the Northeast, Midwest, and parts of Canada. Subways shut down. Hospital patients suffered in stifling heat. And police evacuated people trapped in elevators.
What should have been a manageable, local blackout cascaded into widespread distress on the electric grid. A race condition in the control software silenced the alarm system, leaving operators unaware of the need to redistribute power after overloaded transmission lines sagged into unpruned foliage.
Ali Ebnenasir is working to prevent another Northeast Blackout. He’s creating and testing new design methods for more dependable software in the presence of unanticipated environmental and internal faults. “What software does or doesn’t do is critical,” Ebnenasir explains. “Think about medical devices controlled by software. Patient lives are at stake when there’s a software malfunction.”
How do you make distributed software more dependable? In the case of a single machine—like a smartphone—it’s easy. Just hit reset. But for a network, there is no centralized reset. “Our challenge is to design distributed software systems that automatically recover from unanticipated events,” Ebnenasir says.
The problem—and some solutions—has been around for nearly 40 years, but no uniform theory for designing self-stabilizing systems exists. “Now we’re equipping software engineers with tools and methods to design systems that autonomously recover.”
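The classic illustration of autonomous recovery is Dijkstra's 1974 self-stabilizing token ring, which predates and motivates this line of work. Below is a minimal Python sketch (illustrative only, not drawn from Ebnenasir's own tools): each of n machines holds a counter, and no matter how the counters are corrupted at start, the protocol converges to a legitimate state where exactly one machine at a time holds the "privilege" to act.

```python
import random

def enabled(states, K):
    """Machines currently allowed to move (the 'privileged' ones)."""
    privileged = [0] if states[0] == states[-1] else []
    privileged += [i for i in range(1, len(states)) if states[i] != states[i - 1]]
    return privileged

def step(states, K):
    """One move of Dijkstra's K-state token-ring protocol."""
    i = random.choice(enabled(states, K))  # an arbitrary scheduler choice
    if i == 0:
        states[0] = (states[0] + 1) % K    # the 'bottom' machine counts up
    else:
        states[i] = states[i - 1]          # others copy their left neighbour

random.seed(1)
n, K = 5, 6                                # K >= n guarantees stabilization
states = [random.randrange(K) for _ in range(n)]  # arbitrarily corrupted start
for _ in range(200):                       # far more moves than convergence needs
    step(states, K)
# After stabilization, exactly one machine is privileged at any moment.
```

The point of the sketch is the absence of any centralized reset: every machine follows only a local rule, yet the global system recovers from any transient fault.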
Ebnenasir’s work has been funded by the National Science Foundation.
The Ubiquitous High-Performance Computing (UHPC) project, funded by the Defense Advanced Research Projects Agency (DARPA), initiated research on energy-efficient, resilient, many-core computing for the 2018 horizon. Faced with the end of Dennard scaling, it was imperative to provide better hardware and software not only to rein in the energy consumption of future computers, but also to exploit a large number of cores in a single cabinet (up to 10¹⁵ floating-point operations per second, or one petaflop), all the while consuming no more than 50 kW. A thousand such machines would have the potential to reach one exaflop (10¹⁸ floating-point operations per second). The hardware should expose several “knobs” to the software, allowing applications to adapt gracefully to a very dynamic environment and to expand and/or contract parallelism depending on constraints such as the maximum authorized power envelope, desired energy efficiency, and required minimum performance.
Following UHPC, the Department of Energy-funded X-Stack Software Research project recentered the objectives: inter-node communication would continue to rely on traditional high-performance communication libraries such as the Message-Passing Interface (MPI), while both hardware and software would be revolutionized at the compute-node level.
In both cases, it was deemed unlikely that traditional programming and execution models would be able to deal with novel hardware. Taking advantage of the parallelism offered by the target straw-man hardware platform would be impossible without new system software components.
The Codelet Model was then implemented in various runtime systems, and inspired the Intel-led X-Stack project to define the Open Community Runtime (OCR). The Codelet Model was used on various architectures, from the IBM Cyclops-64 general-purpose many-core processor, to regular x86 compute nodes, as well as the Intel straw-man architecture, Traleika Glacier. Depending on the implementation, codelet-based runtime systems run on shared-memory or distributed systems. They showed their potential on both classical scientific workloads based on linear algebra and more recent (and irregular) ones such as graph-related parallel breadth-first search. To achieve good results, hierarchical parallelism and specific task-scheduling strategies were needed.
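The core idea of the Codelet Model is an event-driven firing rule: a codelet is a non-preemptive task that becomes runnable only once all its input dependences are satisfied, and on completion it signals its successors. The following is a minimal single-threaded Python sketch of that rule (all names are hypothetical; real codelet runtimes such as DARTS or OCR add parallel workers, hierarchical scheduling, and memory management on top of it).

```python
from collections import deque

class Codelet:
    """A non-preemptive task that fires once all its inputs have arrived."""
    def __init__(self, name, work, deps):
        self.name, self.work = name, work
        self.deps = deps          # number of events still awaited
        self.successors = []      # codelets signalled when this one finishes

def run(codelets):
    """Toy event-driven scheduler: fire any codelet whose counter hits 0."""
    ready = deque(c for c in codelets if c.deps == 0)
    order = []
    while ready:
        c = ready.popleft()
        c.work()
        order.append(c.name)
        for s in c.successors:    # dataflow firing rule: signal dependents
            s.deps -= 1
            if s.deps == 0:
                ready.append(s)
    return order

# Diamond-shaped dependence graph: a -> (b, c) -> d
results = {}
a = Codelet("a", lambda: results.setdefault("a", 1), 0)
b = Codelet("b", lambda: results.setdefault("b", results["a"] + 1), 1)
c = Codelet("c", lambda: results.setdefault("c", results["a"] * 2), 1)
d = Codelet("d", lambda: results.setdefault("d", results["b"] + results["c"]), 2)
a.successors = [b, c]
b.successors = [d]
c.successors = [d]
order = run([a, b, c, d])   # "a" fires first, "d" last; b and c are independent
```

Because b and c carry no dependence on each other, a parallel runtime is free to schedule them on different cores, which is precisely the parallelism the model exposes.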
Self-awareness is a combination of introspection and adaptation mechanisms. Introspection is used to determine the health of the system, while adaptation changes system parameters, for example by making parts of the compute node consume less energy or shutting down processing units entirely. Introspection and adaptation are driven by high-level goals expressed by the user, related to power and energy consumption, performance, and resilience.
The team studied how to perform fine-grain resource management to achieve self-awareness using codelets, and built a self-aware simulation tool to evaluate the benefits of various adaptive strategies.
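The shape of such an adaptive strategy can be sketched as a simple control loop: introspect a metric, compare it against the user's goal, and turn a hardware "knob" in response. The toy model below (entirely illustrative, with hypothetical names and a linear power model, not the team's actual tool) adjusts the number of active cores to stay under a user-specified power cap.

```python
def adapt(active_cores, power_per_core, power_cap, min_cores=1, max_cores=8):
    """One introspection/adaptation step: read the (modelled) power draw,
    then nudge the core count so it stays under the user's power cap.
    Toy assumption: power scales linearly with the number of active cores."""
    measured = active_cores * power_per_core         # introspection
    if measured > power_cap and active_cores > min_cores:
        return active_cores - 1                      # contract parallelism
    if measured + power_per_core <= power_cap and active_cores < max_cores:
        return active_cores + 1                      # expand parallelism
    return active_cores                              # goal met: hold steady

cores = 8
for _ in range(10):                                  # the runtime's control loop
    cores = adapt(cores, power_per_core=7.0, power_cap=50.0)
# Settles at 7 cores (49 W <= 50 W), since 8 cores would draw 56 W.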
The TERAFLUX Project
The TERAFLUX project was funded by the European Union. It targeted so-called “teradevices,” devices featuring more than 1,000 cores on a single chip, but with an architecture that would make them near-impossible to exploit using traditional programming and execution models. DF-Threads, a novel execution model based on dataflow principles, was proposed to exploit such devices. A simulation infrastructure was used to demonstrate the potential of such a solution while remaining programmable. At the same time, it was important to maintain a certain level of compatibility with existing systems and features expected by application programmers.
Both DF-Threads and the Codelet Model borrow from dataflow models of computation, but each features subtle differences requiring special care to bridge them. Stéphane Zuckerman and his colleagues ported DARTS—their implementation of the Codelet Model—to the TERAFLUX simulator, and showed that a convergence path existed between the DF-Thread and codelet execution models. The research demonstrated the advantages of hardware-based, software-controlled multithreading with hardware scheduling units for scalability and performance.
Stéphane Zuckerman presented the results and outcomes of his research in peer-reviewed conferences and workshops.