  • Category: SAS

    Soner Onder Awarded NSF R&D Grant

    Soner Onder (CS/ICC) is the principal investigator on a project that has received a $149,996 research and development grant from the National Science Foundation (NSF).

    The project is titled “IRES: Track I: Collaborative Research: Supporting FSU and MTU Student Research with NTNU Faculty on Automatic Improvement of Application Performance.”


    OISE/IRES Project to Fund International Research Experiences for Grad Students


    Professor Soner Onder, Computer Science, has been awarded $150K of a $300K project funded by the Office of International Science and Engineering (OISE) and the International Research Experiences for Students (IRES) programs, both units of the National Science Foundation (NSF). The remaining project share was awarded to Florida State University (FSU).


    The project provides international research experiences for eight Michigan Tech PhD students (and eight FSU students), covering a stipend, travel expenses, and living expenses while they pursue research in Norway for ten weeks in each of three summers.

    The students will be working with Dr. Onder’s collaborators at the Norwegian University of Science and Technology (NTNU), Prof. Magnus Jahre and Prof. Magnus Själander. The two institutions are pursuing joint interdisciplinary work involving compilers and architecture. The work is expected to result in several joint research publications.

    “I believe this project will help improve our international recognition and stature as a major research institution,” Dr. Onder says.


    Abstract

    When the technology for computers advances and programs execute faster, more computer applications become possible. This project will enable Florida State University (FSU) and Michigan Technological University (MTU) students to visit the Norwegian University of Science and Technology (NTNU) to conduct research that will allow the performance and energy efficiency of computer applications to be automatically improved. Over the three years of this project, 16 FSU and MTU students (in cohorts of five, five, and six) will visit NTNU, which is in Trondheim, Norway, for a period of 10 weeks during May, June, and July. While in residence, the students will work closely with the faculty, postdoctoral associates, and graduate students in the research groups of Professors Magnus Sjalander and Magnus Jahre, who are affiliated with the Computer Architecture Laboratory (CAL) at NTNU. The participating FSU and MTU students will not only increase their research knowledge, but will also become more globally engaged and better prepared to work in a culturally diverse, international environment.

    The era of improving processor performance without innovations in computer architecture or compilers is over, since increasing the clock rate for computers has not been possible in recent years due to thermal limitations. However, manually modifying programs to efficiently exploit computer architectures is time-consuming, error-prone, and not portable to other systems. The most effective way to improve application performance is to automatically exploit architectural features without the intervention of the application developers. Our focus will be on automatically achieving high performance and energy efficiency by generating code to exploit existing and proposed architectural features at the machine instruction level. We propose to develop the compilation tools to facilitate the process of automatically generating code to exploit these proposed architectural features and to develop the simulation tools to evaluate the impact on both performance and energy efficiency.


    The Norwegian University of Science and Technology (NTNU) is the second largest university in Norway and is consistently ranked among the top one percent of universities worldwide; its current ranking is 101st.


    The Office of International Science and Engineering (OISE) is the NSF focal point for international science and engineering activities both inside and outside NSF. OISE focuses on three activities: (1) promoting the development of a globally competent U.S. workforce, (2) facilitating and supporting international partnerships and networks to leverage NSF and foreign resources, and (3) providing opportunities for U.S. leadership to shape the global science and engineering agenda.


    The International Research Experiences for Students (IRES) program supports international research and research-related activities for U.S. science and engineering students. The IRES program contributes to the development of a diverse, globally engaged workforce with world-class skills. IRES focuses on active research participation by undergraduate and/or graduate students in high-quality international research, education, and professional development experiences in NSF-funded research areas.


    Dr. Junqiao Qiu Awarded $174.8K NSF pre-CAREER Grant


    Dr. Junqiao Qiu, Computer Science, has been awarded a two-year, $174,797 NSF pre-CAREER grant, which supports research independence among early-career researchers.

    The project is titled, “CRII: SHF: GPU-accelerated FSM computations with advanced speculation.”

    Dr. Qiu’s research focuses on parallel computing, programming systems, and compiler optimization.

    He is a member of the Scalable Architectures and Systems research group of the Institute of Computing and Cybersystems (ICC).

    View the grant on the NSF website.


    Abstract

    Finite State Machine (FSM)-based computations have played critical roles in a variety of important applications, ranging from cyber security and data analytics to software engineering and hardware design. Due to the growing data volumes and limitations on computer processing power, nowadays FSM efficiency is greatly constrained, and a new dimension of efficiency optimization is desired. This project proposes a novel framework to enhance the computing efficiency of FSM-based computations on GPUs. The combination of GPU acceleration and speculative parallelization developed in the proposed framework shows promise for boosting performance of FSM computations and presents the potential to optimize even more general non-FSM computations.

    This research investigates how to build up the synergy between highly parallel GPU architectures and FSM computations. The key idea is exploring multiple dimensions of parallelism to increase compute utilization as well as reduce data-movement overheads. Additionally, this research designs and integrates advanced speculative parallelization into FSM computations. The advanced speculative parallelization not only enables more effective predictors for different FSMs but also provides efficient speculative-thread scheduling. All these optimizations will be built into a framework that further explores the trade-offs between different objectives and automatically optimizes application configurations based on the given objectives. Finally, this research seeks to broaden the applicability of the envisioned results, offering a preliminary exploration of a new dimension of computing efficiency for irregular computations as well as for applications associated with speculative parallelization.
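
    The speculation idea at the heart of this project can be illustrated with a small sketch. The Python code below is a minimal, illustrative example, not the project’s GPU framework: the input is split into chunks, each chunk is processed from a guessed start state (modeling the parallel phase), and each guess is then validated against the preceding chunk’s result, with sequential re-execution as the fallback when a guess is wrong. The chunking scheme, the guessed state, and the toy word-boundary FSM are assumptions chosen only for illustration.

        # Minimal, illustrative sketch of speculative FSM parallelization
        # (not the project's GPU framework). The transition function, the
        # chunking, and the guessed start state are assumptions.

        def transition(state, ch):
            # Toy FSM: state 1 while inside a word, state 0 on spaces.
            return 0 if ch == ' ' else 1

        def run_chunk(chunk, start_state):
            """Run the FSM sequentially over one chunk; return the final state."""
            state = start_state
            for ch in chunk:
                state = transition(state, ch)
            return state

        def speculative_run(text, num_chunks=4, guess=0):
            """Process chunks independently from a guessed start state, then
            validate each guess against the previous chunk's final state."""
            n = max(1, len(text) // num_chunks)
            chunks = [text[i:i + n] for i in range(0, len(text), n)]

            # Speculation phase: each chunk (except the first) starts from `guess`.
            # On a GPU these chunk executions would run in parallel.
            spec_end = [run_chunk(c, guess if i else 0) for i, c in enumerate(chunks)]

            # Validation phase: reuse a chunk's result only if its true start
            # state matches the guess; otherwise re-execute it (misspeculation).
            state = 0
            for i, chunk in enumerate(chunks):
                if i == 0 or state == guess:
                    state = spec_end[i]
                else:
                    state = run_chunk(chunk, state)
            return state

        assert speculative_run("a bb  ccc d") == run_chunk("a bb  ccc d", 0)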


    Dr. Qiu’s lab has openings (RA/TA support) for self-motivated students who are interested in doing system research. For more information, please email Dr. Qiu at junqiaoq@mtu.edu.


    Yakov Nekrich Paper Accepted for Top Computing Conference

    A publication by Associate Professor Yakov Nekrich, Computer Science, has been accepted to the 53rd Annual ACM Symposium on Theory of Computing (STOC).

    The paper, “Optimal-Time Dynamic Planar Point Location in Connected Subdivisions,” describes an optimal-time solution for the dynamic point location problem and answers an open problem in computational geometry. 

    The data structure described in the paper supports queries and updates in logarithmic time. This result is optimal in some models of computation.  Nekrich is the sole author of the publication.
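
    To make the problem concrete, the sketch below illustrates planar point location with a naive linear scan: given a subdivision of the plane into faces, a query asks which face contains a point. It is only an illustration of the problem statement, not Nekrich’s data structure; his structure answers such queries, and supports edge insertions and deletions, in logarithmic time, whereas the scan below takes time proportional to the size of the subdivision. The face names and coordinates are invented for the example.

        # Naive illustration of the point-location problem (NOT the paper's
        # logarithmic-time dynamic data structure). Faces and coordinates
        # below are invented for the example.

        def point_in_polygon(pt, poly):
            """Standard ray-casting test: is pt inside the polygon poly?"""
            x, y = pt
            inside = False
            for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
                if (y1 > y) != (y2 > y):
                    x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                    if x < x_cross:
                        inside = not inside
            return inside

        def locate(pt, faces):
            """Linear scan over all faces -- the operation the paper's
            structure performs in logarithmic time, even under edge updates."""
            for name, poly in faces.items():
                if point_in_polygon(pt, poly):
                    return name
            return "outer face"

        faces = {
            "left":  [(0, 0), (2, 0), (2, 2), (0, 2)],
            "right": [(2, 0), (4, 0), (4, 2), (2, 2)],
        }
        print(locate((1, 1), faces))   # -> left
        print(locate((3, 1), faces))   # -> right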

    The annual ACM Symposium on Theory of Computing (STOC) is the flagship conference of SIGACT, the Association for Computing Machinery’s Special Interest Group on Algorithms and Computation Theory.


    Junqiao Qiu to Present Lecture November 6

    Assistant Professor Junqiao Qiu, Computer Science, will present his lecture, “Speculative Parallelization for FSM-centric Computations,” this Friday, Nov. 6, 2020, at 3:00 p.m., via online meeting.

    Lecture Abstract

    As a fundamental computation model, the finite-state machine (FSM) has been used in a wide range of data-intensive applications, including malware detection, bioinformatics, semi-structured data analytics, natural language processing, and even machine learning. However, FSM execution is known to be “embarrassingly sequential” due to the state dependences among transitions. Current studies find that speculation is a promising way to address these inherent dependences and thus enable scalable FSM parallelization.
    This talk will first introduce the fundamental scalability bottleneck in current FSM parallelization. It will then propose aggressive speculation, a generalized speculation model that allows a speculated state to be validated against the result of another speculation, to address the scalability limitations. Finally, the talk will discuss the possibility of enlarging the applicability of the proposed approach beyond FSM-based computations.

    Junqiao Qiu is a member of the Institute of Computing and Cybersystems’ (ICC) Center for Scalable Architectures and Systems (SAS).


    Hongyu An: Curious About the World and Exploring the Unknown

    by Karen S. Johnson, Communications Director, ICC

    “A scientist should be a person who is always curious about nature and the world, and who tries to explore the unknown.” –Hongyu An, Assistant Professor, Electrical and Computer Engineering

    Hongyu An, Assistant Professor, ECE

    Exploring science and technology is always exciting for new Assistant Professor Hongyu An, Electrical and Computer Engineering. He says he is “very pleased to have the chance to mentor the next generation and share my knowledge and experience with undergraduate and graduate students.”

    Several things drew Hongyu An to Michigan Tech, including his observation that as an institution Michigan Tech cares about its employees. “The excellent professors, smart students, and the supportive environment are the main reasons I joined Michigan Tech,” he says. “As a new faculty member, I am facing a lot of new challenges. There is great support in my department (ECE) and through the ICC.”

    Hongyu is a member of two Institute of Computing and Cybersystems (ICC) research centers: Human-Centered Computing and Scalable Architectures and Systems. He also sees synergies with the Center for Cyber-Physical Systems.

    “It is my great pleasure and honor to be a member of the ICC,” Hongyu says. “I can collaborate with the experts in HCC for exploring the brain and artificial intelligence, and the professors in SAS for hardware and architecture designs. Moreover, the neuromorphic chips I am working on can potentially be applied to Cyber-Physical Systems.”

    Hongyu’s primary research area is hardware design for AI and neuromorphic systems. He believes that Artificial Intelligence is probably one of the most challenging research topics in science, noting that recent work in deep learning and artificial neural networks is demonstrating great progress in approaching artificial intelligence. 

    “But the traditional computers under von Neumann architecture cannot keep up with the development of neural networks and deep learning,” he cautions. “My research is addressing this challenge by using a new hardware design, from device to architecture levels.”

    Hongyu’s teaching interests include VLSI, Circuits, and Electromagnetics. Describing his teaching philosophy, he notes that making complicated things simple is more challenging than making simple things complicated, and that he strives for the former. This academic year, An is teaching EE 4271 VLSI Design and mentoring ECE master’s student Sarvani Marthi, whose project aims to design a silicon retina using CMOS and memristors.

    Hongyu and his research team are also investigating associative memory learning, a new learning method that aims to create a neuromorphic system that can learn from its surroundings directly. 

    “Associative memory is a widespread self-learning method in living organisms, which enables the nervous system to remember the relationship between two concurrent events,” Hongyu explains. “Through this learning method, dogs can learn the sound of bells as a sign of food; people can remember a word representing an object.”

    “Rebuilding associative memory at a behavioral level is significant because it not only reveals a way of designing a brain-like, self-learning neuromorphic system, but also explores a method of comprehending the learning mechanism of a nervous system,” he adds.
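
    The behavioral idea can be captured in a few lines of software. The sketch below models Pavlovian association between two concurrent events with a simple Hebbian-style weight update; it is only a conceptual illustration, not the memristor-based circuits described in Dr. An’s publications, and the learning rate and threshold are arbitrary values chosen for the example.

        # Conceptual sketch of behavior-level associative memory (Pavlovian
        # conditioning). NOT the memristor-based neuromorphic circuits in
        # Dr. An's papers; the learning rate and threshold are arbitrary.

        class AssociativeMemory:
            def __init__(self, learning_rate=0.25, threshold=1.0):
                self.weight = 0.0          # strength of the bell -> food link
                self.rate = learning_rate
                self.threshold = threshold

            def observe(self, bell, food):
                """Hebbian-style update: the association strengthens only
                when the two events occur together."""
                if bell and food:
                    self.weight += self.rate

            def responds_to_bell(self):
                """After enough paired events, the bell alone triggers the
                learned response."""
                return self.weight >= self.threshold

        memory = AssociativeMemory()
        for _ in range(4):                 # four paired bell + food events
            memory.observe(bell=True, food=True)
        print(memory.responds_to_bell())   # -> True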

    And finally, beyond his work as a professor and scientist Hongyu hopes that he is “a good husband to my wife, a good father to my sons, and a good son to my parents.”

    Hongyu completed his Ph.D. in electrical engineering at Virginia Tech, his M.S. in electrical engineering at Missouri University of Science and Technology, and his B.S. in electrical engineering at Shenyang University of Technology.

    Recent Publications

    An, Hongyu, Mohammad Shah Al-Mamun, Marius K. Orlowski, Lingjia Liu, and Yang Yi. “Robust Deep Reservoir Computing through Reliable Memristor with Improved Heat Dissipation Capability.” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2020).

    An, Hongyu, Qiyuan An, and Yang Yi. “Realizing Behavior Level Associative Memory Learning Through Three-Dimensional Memristor-Based Neuromorphic Circuits.” IEEE Transactions on Emerging Topics in Computational Intelligence (2019).

    Founded in 2015, the Institute of Computing and Cybersystems (ICC) promotes collaborative, cross-disciplinary research and learning experiences in the areas of computing education, cyber-physical systems, cybersecurity, data sciences, human-centered computing, and scalable architectures and systems, for the benefit of Michigan Technological University and society at large.

    The ICC creates and supports an arena in which faculty and students work collaboratively across organizational boundaries in an environment that mirrors contemporary technological innovation. The ICC’s 55 members represent more than 20 academic disciplines at Michigan Tech.


    Soner Onder and Dave Whalley Investigate Instruction-level Parallelism

    From Florida State University News

    A Florida State University researcher is working to make computer processors execute applications in a more energy-efficient manner with the help of a new $1.2 million grant from the National Science Foundation.

    “The general goal is to increase performance but to do it in a manner that is more energy efficient than the dominant computer processors that are in use today,” Professor of Computer Science David Whalley said.

    To do that, Whalley and his colleague Soner Onder, a professor at Michigan Technological University, hope to more efficiently exploit what’s called instruction-level parallelism, or the ability of a computer to simultaneously execute multiple machine instructions.

    “In general, VLIW [very long instruction word] processors are more energy efficient but cannot approach the performance of OoO [out-of-order] processors except in limited domains, such as digital signal processing,” Whalley said.

    Whalley’s project, called SCALE for Statically Controlled Asynchronous Lane Execution, is designed to overcome these current limitations. SCALE supports separate execution lanes so that instructions in different lanes can execute in parallel; dependencies between instructions in different lanes are identified by the compiler, which synchronizes the lanes when necessary.

    “Providing distinct lanes of instructions allows the compiler to generate code for different modes of execution to adapt to the type of parallelism that is available at each point within an application,” Whalley said.
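
    The lane idea can be sketched in software. The toy example below is a conceptual illustration, not the SCALE toolchain: two asynchronous “lanes” run as threads, independent work in each lane proceeds freely, and the single cross-lane dependency is expressed as an explicit send/receive pair standing in for the synchronization the compiler would insert. The lane assignment and the channel-based synchronization are assumptions made for the example.

        # Conceptual sketch of compiler-synchronized asynchronous lanes
        # (NOT the SCALE toolchain). Lane assignment and the channel-based
        # synchronization are assumptions for illustration.
        import threading
        import queue

        regs = {"a": 0, "b": 0, "c": 0}
        chan = queue.Queue()       # stands in for a compiler-inserted cross-lane channel

        def lane0():
            regs["a"] = 2 + 3      # independent work executes without waiting
            chan.put(regs["a"])    # SEND: value that lane 1 will need

        def lane1():
            regs["b"] = 10 * 4     # independent work overlaps with lane 0
            a = chan.get()         # RECV: wait only at the compiler-identified
                                   # cross-lane dependency
            regs["c"] = a + regs["b"]

        t0 = threading.Thread(target=lane0)
        t1 = threading.Thread(target=lane1)
        t0.start(); t1.start()
        t0.join(); t1.join()
        print(regs["c"])           # -> 45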

    The grant began this fall and will run through August 2023. Half of the funding will come to Florida State, with the other half supporting Onder’s part of the work at Michigan Technological University. The FSU portion will support two graduate students in computer science.


    Soner Onder Receives Year One Funding for $1.2M NSF SCALE Project

    Soner Onder

    Dave Whalley

    Soner Onder, professor of computer science, was recently awarded $246,329 for the first year of a four-year NSF grant for his project, “SHF: Medium: Collaborative Research: Statically Controlled Asynchronous Lane Execution (SCALE).” The project is in collaboration with Prof. David Whalley of Florida State University. Michigan Tech is the lead institution on the project, which is expected to total $1.2 million, with Michigan Tech receiving $600,000.

    Abstract: Enabling better-performing systems benefits applications ranging from those running on mobile devices to large data applications running in data centers. The efficiency of most applications is still primarily affected by single-thread performance. Instruction-level parallelism (ILP) speeds up programs by executing instructions of the program in parallel, with ‘superscalar’ processors achieving maximum performance. At the same time, energy efficiency is a key criterion to keep in mind as such speedup happens, with these two being conflicting criteria in system design. This project develops a Statically Controlled Asynchronous Lane Execution (SCALE) approach that has the potential to meet or exceed the performance of a traditional superscalar processor while approaching the energy efficiency of a very long instruction word (VLIW) processor. As implied by its name, the SCALE approach has the ability to scale to different types and levels of parallelism. The toolset and designs developed in this project will be available as open source and will also have an impact on both education and research. The SCALE architectural and compiler techniques will be included in undergraduate and graduate curricula.

    The SCALE approach supports separate asynchronous execution lanes where dependencies between instructions in different lanes are statically identified by the compiler to provide inter-lane synchronization. Providing distinct lanes of instructions allows the compiler to generate code for different modes of execution to adapt to the type of parallelism that is available at each point within an application. These execution modes include explicit packaging of parallel instructions, parallel and pipelined execution of loop iterations, single program multiple data (SPMD) execution, and independent multi-threading.

    This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.

    https://www.nsf.gov/awardsearch/showAward?AWD_ID=1901005&HistoricalAwards=false


    Soner Onder Presents Keynote at SAMOS XIX

    Soner Onder

    Soner Onder (SAS), professor of computer science, presented a keynote lecture July 8, 2019, at the International Conference on Embedded Computer Systems: Architectures, Modeling and Simulation (SAMOS XIX) on Samos Island, Greece, which was held July 7-11. Onder’s talk was titled, “Form Follows Function: The Case for Homogeneous Computer Architectures.” Onder also participated in the conference’s “The Annual Open Mike Panel.”

    Keynote Lecture Abstract: “Form follows function” is a principle associated with 20th-century modernist architecture and industrial design which says that the shape of a building or object should primarily relate to its intended function or purpose [2]. For best performance in computer architecture, form must follow function as well. What are form and function in computer architecture? Form is easy to understand and interpret in its dictionary meaning; function is not so clear-cut. In this talk, I will start with a simple problem, an algorithm, and a basic program representation that will be interpreted by the machine, and show that delivering high performance rests on solving only a handful of fundamentally difficult problems. I will argue that the mere existence of domain-specific solutions that general-purpose computing cannot match in performance is a testament that general-purpose computing is “not general enough.” What makes an architecture “not general enough” is not the architecture itself, but rather the mismatch between the function its form had followed and the actual semantics of programs. To illustrate the point, I will challenge the widely understood interpretation of instruction-level parallelism (ILP) as “single-thread performance,” and show that this interpretation is too short-sighted. We can efficiently exploit all types of available parallelism, including process-level, thread-level, and data-level parallelism, all at the instruction level, and this approach is both feasible and necessary to combat the complexity that is plaguing our profession. I will then discuss why an executable single-assignment program representation [1] may be the ultimate function whose implementations may result in homogeneous general-purpose architectures that can potentially match the performance of accelerators for specific tasks, while exceeding the performance of any accelerator and traditional architecture combination for general tasks. I will conclude by discussing our results with Demand-driven Execution (DDE), whose form follows this single-assignment program representation.
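
    The “executable single-assignment program representation” mentioned above can be illustrated with a tiny renaming example. The sketch below rewrites straight-line three-address statements so that every variable is assigned exactly once, which is the essence of a single-assignment form; it is only a conceptual illustration, not the demand-driven execution representation used in Onder’s work, and the statement format is an assumption made for the example.

        # Tiny illustration of single-assignment renaming for straight-line
        # code. This shows only the essence of the representation; it is NOT
        # the demand-driven execution (DDE) form used in Onder's research.

        def to_single_assignment(stmts):
            """Rename destinations so each variable is assigned exactly once;
            uses are rewritten to refer to the latest version."""
            version = {}
            renamed = []
            for dest, op, src1, src2 in stmts:
                cur = lambda v: f"{v}{version[v]}" if v in version else v
                s1, s2 = cur(src1), cur(src2)
                version[dest] = version.get(dest, 0) + 1
                renamed.append((f"{dest}{version[dest]}", op, s1, s2))
            return renamed

        # x = a + b; x = x * c; y = x - a   (x is assigned twice originally)
        program = [("x", "+", "a", "b"), ("x", "*", "x", "c"), ("y", "-", "x", "a")]
        for dest, op, s1, s2 in to_single_assignment(program):
            print(f"{dest} = {s1} {op} {s2}")
        # x1 = a + b
        # x2 = x1 * c
        # y1 = x2 - a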

    About SAMOS (from http://samos-conference.com/): SAMOS is a unique conference. It deals with embedded systems (sort of) but that is not what makes it different. It brings together every year researchers from both academia and industry on the quiet and inspiring northern mountainside of the Mediterranean island of Samos, which in itself is different. But more importantly, it really fosters collaboration rather than competition. Formal and intensive technical sessions are only held in the mornings. A lively panel or distinguished keynote speaker ends the formal part of the day, and leads nicely into the afternoons and evenings, which are reserved for informal discussions, good food, and the inviting Aegean Sea. The conference papers will be published by Springer’s Lecture Notes in Computer Science (LNCS) and will be included in the DBLP Database.

    Samos Island, Greece