Day: August 14, 2019

Soner Onder Presents Keynote at SAMOS XIX

Soner Onder

Soner Onder (SAS), professor of computer science, presented a keynote lecture July 8, 2019, at the International Conference on Embedded Computer Systems: Architectures, Modeling and Simulation (SAMOS XIX), held July 7-11 on Samos Island, Greece. Onder’s talk was titled “Form Follows Function: The Case for Homogeneous Computer Architectures.” Onder also participated in the conference’s “Annual Open Mike Panel.”

Keynote Lecture Abstract: “Form follows function” is a principle associated with 20th-century modernist architecture and industrial design which says that “the shape of a building or object should primarily relate to its intended function or purpose” [2]. For best performance in computer architecture, form must follow function as well. What are form and function in computer architecture? Form is easy to understand and interpret in its dictionary meaning; function is not so clear-cut. In this talk, I will start with a simple problem, an algorithm, and a basic program representation that will be interpreted by the machine, and show that delivering high performance rests on solving only a handful of fundamentally difficult problems. I will argue that the mere existence of domain-specific solutions that general-purpose computing cannot match in performance is a testament that general-purpose computing is “not general enough.” What makes an architecture “not general enough” is not the architecture itself, but rather the mismatch between the function its form has followed and the actual semantics of programs. To illustrate the point, I will challenge the widely understood interpretation of instruction-level parallelism (ILP) as “single-thread performance,” and show that this interpretation is too short-sighted. We can efficiently exploit all types of available parallelism, including process-level, thread-level, and data-level parallelism, at the instruction level, and this approach is both feasible and necessary to combat the complexity that is plaguing our profession. I will then discuss why an executable single-assignment program representation [1] may be the ultimate function whose implementations may result in homogeneous general-purpose architectures that can potentially match the performance of accelerators for specific tasks, while exceeding the performance of any accelerator/traditional-architecture combination for general tasks.
I will conclude by discussing our results with Demand-driven Execution (DDE), whose form follows this single-assignment program representation.

About SAMOS (from http://samos-conference.com/): SAMOS is a unique conference. It deals with embedded systems (sort of), but that is not what makes it different. It brings together researchers from both academia and industry every year on the quiet and inspiring northern mountainside of the Mediterranean island of Samos, which in itself is different. But more importantly, it really fosters collaboration rather than competition. Formal and intensive technical sessions are held only in the mornings. A lively panel or distinguished keynote speaker ends the formal part of the day and leads nicely into the afternoons and evenings, reserved for informal discussions, good food, and the inviting Aegean Sea. The conference papers will be published in Springer’s Lecture Notes in Computer Science (LNCS) and included in the DBLP database.

Samos Island, Greece

Soner Onder Presents Talk in Barcelona, Spain

Soner Onder is pictured at front right.

Soner Onder (SAS), professor of computer science, presented an invited talk at “Yale:80: Pushing the Envelope of Computing for the Future,” held July 1-2, 2019, in Barcelona, Spain. The workshop was organized by Universitat Politècnica de Catalunya in honor of the 80th birthday of Yale Patt, a prominent computer architecture researcher. Onder was one of 23 invitees to give a talk. His lecture was titled “Program semantics meets architecture: What if we did not have branches?”

View the slides from Onder’s talk: Yale80-in-2019-Soner-Onder

Yale Patt is a professor in the Department of Electrical & Computer Engineering at The University of Texas at Austin, where he holds the Ernest Cockrell, Jr. Centennial Chair in Engineering. He also holds the title of University Distinguished Teaching Professor. Patt was elected to the National Academy of Engineering in 2014, among the highest professional distinctions bestowed upon an engineer. View Patt’s faculty webpage at: http://www.ece.utexas.edu/people/faculty/yale-patt.

Link to the workshop’s website here: http://research.ac.upc.edu/80-in-2019/

Visit the workshop’s Facebook page here: https://www.facebook.com/BSCCNS/posts/workshop-yale-80-in-2019pushing-the-envelope-of-computing-for-the-futurehttprese/2217508564992996/

Soner Onder at Sagrada Família, Barcelona, Spain

Benjamin Ong Awarded $25K for Parallel-in-Time Integration Workshop

Benjamin Ong

Benjamin Ong (Math/ICC-DataS) is Principal Investigator on a one-year project that has received a $25,185 other-sponsored-activities grant from the National Science Foundation. The project is titled “Ninth Workshop on Parallel-in-Time Integration.”

The Ninth Workshop on Parallel-in-Time Integration will take place June 8-12, 2020, at Michigan Tech. Ong (chair) and Jacob Schroder, assistant professor in the Department of Mathematics and Statistics at the University of New Mexico, are heading the workshop’s organizing committee. Travel funding for early-career researchers will be available. Application details and deadlines will be posted shortly on the event’s website at conferences.math.mtu.

Contact information:
ongbw@mtu.edu
906-487-3367

Invited speakers:

  • Professor Matthias Bolten, Bergische Universität Wuppertal
  • Professor Laurence Halpern, Université Paris 13
  • Professor George Karniadakis, Brown University
  • Professor Ulrich Langer, Johannes Kepler University Linz
  • Dr. Carol Woodward, Lawrence Livermore National Laboratory

The workshop is supported by:

  • Michigan Technological University, Department of Mathematical Sciences
  • Michigan Technological University, College of Science and Arts
  • Lawrence Livermore National Laboratory
  • Jülich Supercomputing Centre
  • FoMICS: The Swiss Graduate School in Computational Science

About the Workshop on Parallel-in-time Integration (from https://parallel-in-time.org/ and https://parallel-in-time.org/events/9th-pint-workshop/)

Computer models and simulations play a central role in the study of complex systems in engineering, life sciences, medicine, chemistry, and physics. Utilizing modern supercomputers to run models and simulations allows for experimentation in virtual laboratories, thus saving both time and resources. Although the next generation of supercomputers will contain an unprecedented number of processors, this will not automatically increase the speed of running simulations. New mathematical algorithms are needed that can fully harness the processing potential of these new systems. Parallel-in-time methods, the subject of this workshop, are timely and necessary, as they extend existing computer models to these next generation machines by adding a new dimension of scalability. Thus, the use of parallel-in-time methods will provide dramatically faster simulations in many important areas, such as biomedical applications (e.g., heart modeling), computational fluid dynamics (e.g., aerodynamics and weather prediction), and machine learning. Computational and applied mathematics plays a foundational role in this projected advancement.

The primary focus of the workshop is to disseminate cutting-edge research and facilitate scientific discussion in the field of parallel time integration methods. The workshop aligns with the National Strategic Computing Initiative (NSCI) objective to “increase coherence between technology for modeling/simulation and data analytics.” The need for parallel time integration is driven by microprocessor trends: future speedups for computational simulations will come from using increasing numbers of cores, not from faster clock speeds. Thus, as spatial parallelism techniques saturate, parallelization in the time direction offers the best avenue for leveraging next-generation supercomputers with billions of processors. The mathematical treatment of parallel time integrators draws on advanced methodologies from the theory of partial differential equations in a functional-analytic setting, numerical discretization and integration, convergence analysis of iterative methods, and the development and implementation of new parallel algorithms. The workshop will therefore bring together an interdisciplinary group of experts spanning these areas.
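To make the idea of parallelization in the time direction concrete, here is a minimal sketch of Parareal, one of the best-known parallel-in-time algorithms. This is an illustration only, not code from the workshop; the test equation y' = λy and all parameter choices below are our own. Parareal combines a cheap sequential coarse solver with accurate fine solves that are independent across time slices, so the fine work can run in parallel:

```python
import math

# Minimal Parareal sketch for y' = lam * y (illustrative only).
# Coarse propagator G: one forward-Euler step across a whole time slice.
# Fine propagator F: many small forward-Euler steps across the same slice.

lam = -1.0   # decay rate in y' = lam * y
T = 1.0      # final time
N = 10       # number of time slices (would map to N processors)
dT = T / N
y0 = 1.0

def G(y, dt):
    return y * (1.0 + lam * dt)          # one cheap coarse Euler step

def F(y, dt, m=100):
    h = dt / m
    for _ in range(m):
        y = y * (1.0 + lam * h)          # m accurate fine Euler steps
    return y

# Initial coarse sweep (sequential, cheap).
U = [y0]
for n in range(N):
    U.append(G(U[-1], dT))

# Parareal iterations: the fine solves on each slice use only the previous
# iterate, so on a real machine they run concurrently; only the short
# coarse correction sweep below is serial.
for k in range(N):
    Fvals = [F(U[n], dT) for n in range(N)]   # parallelizable loop
    Unew = [y0]
    for n in range(N):
        Unew.append(G(Unew[-1], dT) + Fvals[n] - G(U[n], dT))
    U = Unew

print(U[-1], math.exp(lam * T))  # Parareal result vs. exact solution
```

After at most N iterations the iterate matches the serial fine solution on all slices, which is the key property: the answer of the accurate sequential solver is recovered, but most of the arithmetic is distributed across time slices.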