The Distinguished Lecture Series brings world-leading computational scientists to Zurich to present their work in a colloquium and to meet with faculty and students. The colloquia are broadly attended by faculty, staff, and master's and PhD students from the University of Zurich and ETH Zurich.

Past events

June 10, 2021: Recent advances in randomization and communication avoiding techniques for numerical linear algebra

Prof. Dr. Laura Grigori, Director of Research, INRIA Paris

TIME: 16:00 via ZOOM
HOSTS: Prof. Dr. Eleni Chatzi, ETHZ & Prof. Dr. Robert Feldmann, UZH

Abstract:
In this talk we discuss randomization techniques for solving large-scale linear algebra problems. We focus in particular on solving linear systems of equations and present a randomized Gram-Schmidt process for orthogonalizing a set of vectors. We discuss its efficiency and its numerical stability, also when mixed precision is used. Its usage in the GMRES method for solving systems of equations is then presented. In the last part of the talk we introduce a robust preconditioner that relies on multilevel domain decomposition techniques and accelerates the convergence of iterative methods for linear systems of equations.
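
As a rough illustration of the sketching idea behind such methods, the following NumPy snippet (an illustrative toy, not the algorithm presented in the talk; dimensions, sketch size and the Gaussian sketching matrix are all assumed for demonstration) orthogonalizes a set of vectors against a random sketch: the sketched basis comes out orthonormal to machine precision, while the full-size basis is only approximately orthogonal but well-conditioned.

```python
# Minimal sketch of a randomized ("sketched") Gram-Schmidt process -- an
# illustrative toy, not the algorithm presented in the talk.  The columns of
# W are orthogonalized with respect to a random sketch Theta, so that
# Theta @ Q has orthonormal columns while Q itself stays well-conditioned.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 2000, 50, 200          # ambient dimension, #vectors, sketch size (k << n)

W = rng.standard_normal((n, m))
Theta = rng.standard_normal((k, n)) / np.sqrt(k)   # Gaussian sketching matrix

Q = np.zeros((n, m))             # sketch-orthogonal basis
S = np.zeros((k, m))             # S = Theta @ Q, kept orthonormal
R = np.zeros((m, m))

for i in range(m):
    p = Theta @ W[:, i]
    r = S[:, :i].T @ p                       # project onto the sketched basis
    q = W[:, i] - Q[:, :i] @ r               # update the full-size vector
    s = p - S[:, :i] @ r                     # and its sketch, consistently
    alpha = np.linalg.norm(s)                # normalize in the sketched norm
    Q[:, i], S[:, i] = q / alpha, s / alpha
    R[:i, i], R[i, i] = r, alpha

print(np.linalg.norm(S.T @ S - np.eye(m)))            # orthonormal sketch (~1e-15)
print(np.linalg.cond(Q))                              # Q well-conditioned (close to 1)
print(np.linalg.norm(W - Q @ R) / np.linalg.norm(W))  # valid QR-like factorization
```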

May 13, 2021: Cosmic magnetism using computer simulations

Prof. Dr. Romain Teyssier, University of Zurich

TIME: 16:00 via ZOOM, view recording
HOST: Prof. Dr. Eleni Chatzi, ETH Zürich

Abstract:
The talk presents a current view of how galaxies form within a dark-matter-dominated expanding Universe and discusses the key role of numerical simulations in delivering realistic multi-scale and multi-physics models. Such models allow us to build a consistent picture of the generation and amplification of magnetic fields in the Universe that could eventually be tested against future astronomical observations.

March 11, 2021: Quantum numerical linear algebra

Prof. Dr. Lin Lin, University of California, Berkeley

TIME: 18:15 via ZOOM, view recording
HOST: Prof. Dr. Sid Mishra, ETH Zürich

Abstract:
The two “quantum supremacy” experiments (by Google in 2019 and by USTC in 2020, respectively) have brought quantum computation to the public’s attention. In this talk, I will discuss how to use a quantum computer to solve linear algebra problems. I will start with a toy linear system of equations Ax = b, where A is merely a 2 x 2 matrix. I will then discuss some recent progress on quantum linear system solvers, and a proposal for the quantum LINPACK benchmark.
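
For orientation, the snippet below shows classically what a quantum linear system solver targets for such a toy problem: not the vector x itself, but a normalized state proportional to x, from which expectation values of observables are estimated. The matrix, right-hand side, and observable are arbitrary illustrative choices, not taken from the talk.

```python
# Classical illustration of the target of a quantum linear system solver:
# for Ax = b, such a solver prepares a quantum state proportional to x
# (i.e., x / ||x||), rather than the vector x itself.  The 2 x 2 example
# below is purely classical and only shows what that normalized target is.
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])        # Hermitian, well-conditioned toy matrix
b = np.array([1.0, 0.0])          # |b> must be a normalized state; here ||b|| = 1

x = np.linalg.solve(A, b)         # classical solution
x_state = x / np.linalg.norm(x)   # the state |x> a quantum solver would output

print("x       =", x)
print("|x>     =", x_state)
# Observables are then estimated as <x|M|x> for a measurement operator M.
M = np.array([[1.0, 0.0], [0.0, -1.0]])   # e.g. a Pauli-Z measurement
print("<x|M|x> =", x_state @ M @ x_state)
```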

April 17, 2018: From Approximate to Significance-Driven to Transprecision Computing: Opportunities and Challenges

Prof. Dr. Dimitrios Nikolopoulos, Queen’s University Belfast

TIME: 17:15, PLACE: CAB G 59, ETH Zurich

Abstract:
Approximate computing has evolved over the years into a fundamental method for improving performance and energy efficiency in embedded and high-end systems, using algorithmic techniques that sacrifice numerical precision without necessarily compromising the quality of the result. In this talk we will explore two approaches to approximate computing that my group and I have pursued over the past five years: significance-driven computation, an approach that aims at disciplined approximation of algorithms using a high-level, task-based programming model; and transprecision computing, a more recent approach that explores dynamic precision tuning to optimise performance and energy efficiency in algorithms with input-dependent or data-dependent behaviour.
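
As a minimal illustration of the precision/accuracy trade-off that transprecision computing exploits (a generic toy, not the tools or benchmarks from the talk), the snippet below evaluates the same reduction in three floating-point precisions and reports the resulting error.

```python
# Toy illustration of the precision/accuracy trade-off (not the tools or
# benchmarks from the talk): the same reduction evaluated in float16,
# float32 and float64.  Transprecision approaches tune such precision
# choices dynamically, per kernel or per data region.
import numpy as np

rng = np.random.default_rng(1)
x = rng.random(50_000)

exact = np.sum(x, dtype=np.float64)
for dtype in (np.float16, np.float32, np.float64):
    approx = np.sum(x.astype(dtype), dtype=dtype)
    rel_err = abs(float(approx) - exact) / exact
    print(f"{np.dtype(dtype).name:8s} relative error = {rel_err:.2e}")
```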

December 4, 2017: Fascinating red blood cell properties: non-equilibrium membrane fluctuations and intricate dynamics in flow

Prof. Dr. Dmitry Fedosov, Research Center Juelich

TIME: 16:15, PLACE: HG E 3, ETH Zurich

Abstract:
Red blood cells (RBCs) constitute the major cellular part of blood and are mainly responsible for the transport of oxygen. They have a biconcave shape with a membrane consisting of a lipid bilayer and an attached cytoskeleton formed by a network of spectrin proteins. The RBC membrane encloses a viscous cytosol (a hemoglobin solution), so that RBCs possess no bulk cytoskeleton or organelles. Despite this simple structure in comparison to many other cells, RBCs exhibit fascinating properties and behavior in flow. One example is membrane flickering, which can easily be observed under optical microscopy. This phenomenon was initially attributed to purely thermal fluctuations of the cell membrane; later, several studies suggested the possible involvement of non-equilibrium processes, without definitively ruling out equilibrium interpretations. Our recent study has rigorously demonstrated the involvement of non-equilibrium processes through a violation of the fluctuation-dissipation relation, a direct signature of the non-equilibrium nature of flickering. Another interesting example is the behavior of RBCs in flow, which shows complex deformation and dynamics. The current simplified understanding of RBC behavior in shear flow is that the cells tumble or roll at low shear rates and tank-tread at high shear rates. This view has mainly been formed by experiments performed on RBCs dispersed in solutions several times more viscous than blood plasma. Under physiological conditions, however, RBCs successively tumble, roll, deform into rolling stomatocytes, and finally adopt highly deformed poly-lobed shapes as the shear rate increases. This behavior is governed by RBC elastic and viscous properties, and it is important to study it under physiologically relevant conditions.
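
To make the fluctuation-dissipation argument concrete, the sketch below simulates a deliberately simple equilibrium system (an overdamped bead in a harmonic trap, not the RBC membrane model of the talk; all parameter values are assumptions) and checks that the measured fluctuations match the equilibrium prediction kT/k. Active, non-equilibrium driving of the kind inferred for flickering membranes would break such a relation.

```python
# Toy equilibrium check of the fluctuation-dissipation idea (an overdamped
# bead in a harmonic trap, not the RBC membrane model from the talk): at
# equilibrium the measured position variance must equal kT / k, whereas
# active, non-equilibrium driving breaks this relation.
import numpy as np

rng = np.random.default_rng(2)
kT, k, gamma = 1.0, 2.0, 1.0        # temperature, trap stiffness, friction
dt, n_steps = 1e-3, 500_000

x = 0.0
samples = np.empty(n_steps)
noise = rng.standard_normal(n_steps) * np.sqrt(2.0 * kT * dt / gamma)
for i in range(n_steps):
    x += (-k * x / gamma) * dt + noise[i]   # Euler-Maruyama Langevin step
    samples[i] = x

print("measured  <x^2> =", samples[n_steps // 10:].var())   # discard transient
print("predicted kT/k  =", kT / k)
```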

November 13, 2017: The coming of age of de novo protein design

Prof. Dr. David Baker, University of Washington

TIME: 16:15, PLACE: HG E 3, ETH Zurich

Abstract:
Proteins mediate the critical processes of life and beautifully solve the challenges faced during the evolution of modern organisms. Our goal is to design a new generation of proteins that address current-day problems not faced during evolution. In contrast to traditional protein engineering efforts, which have focused on modifying naturally occurring proteins, we design new proteins from scratch based on Anfinsen’s principle that proteins fold to their global free energy minimum. We compute amino acid sequences predicted to fold into proteins with new structures and functions, produce synthetic genes encoding these sequences, and characterize them experimentally. I will describe the design of ultra-stable idealized proteins, flu neutralizing proteins, high affinity ligand binding proteins, self-assembling protein nanomaterials, and modular protein logic elements. I will also describe the contributions of the general public to these efforts through the distributed computing project Rosetta@Home and the online protein folding and design game Foldit.

October 9, 2017: High order numerical methods for hyperbolic equations

Prof. Dr. Chi-Wang Shu, Brown University

TIME: 16:15 (An apero will follow after the talk at Foyer HG E-Nord)

PLACE: HG E 3, ETH Zurich

Abstract:
Hyperbolic equations are used extensively in applications including fluid dynamics, astrophysics, electromagnetism, semiconductor devices, and the biological sciences. High-order accurate numerical methods are efficient for solving such partial differential equations; however, they are difficult to design because solutions may contain discontinuities. In this talk we will survey several types of high-order numerical methods for such problems, including weighted essentially non-oscillatory (WENO) finite difference and finite volume methods, discontinuous Galerkin finite element methods, and spectral methods. We will discuss the essential ingredients, properties, and relative advantages of each method, and provide comparisons among them. Recent developments and applications of these methods will also be discussed.
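
For readers unfamiliar with the setting, the snippet below solves the simplest hyperbolic model problem, linear advection of a square wave, with a first-order upwind scheme. It is only a baseline sketch under assumed parameters; the WENO, discontinuous Galerkin, and spectral methods surveyed in the talk are designed to reach much higher accuracy on exactly this kind of problem without introducing spurious oscillations at the discontinuity.

```python
# Baseline illustration for the talk's setting: the linear advection
# equation u_t + a u_x = 0 solved with a first-order upwind scheme on a
# periodic domain.  The initial square wave is discontinuous, which is
# precisely what makes high-order methods hard to design.
import numpy as np

a, nx, cfl = 1.0, 200, 0.8
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = cfl * dx / a

u = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)   # square wave initial data
t, t_end = 0.0, 0.25
while t < t_end:
    u = u - a * dt / dx * (u - np.roll(u, 1))    # upwind flux for a > 0
    t += dt

exact = np.where(((x - a * t) % 1.0 > 0.25) & ((x - a * t) % 1.0 < 0.5), 1.0, 0.0)
print("L1 error of first-order upwind:", dx * np.abs(u - exact).sum())
```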

June 13, 2017: Memristors and The Principle of Local Activity

Prof. Dr. Leon Chua, University of California, Berkeley

TIME: 16:15 (An apero will follow after the talk)

PLACE: KOL-F 117 UZH

Abstract:
The first part of this talk presents the axiomatic basis of the ideal memristor and its generalized experimental definition via its pinched hysteresis loops. A fundamental new theorem will be presented which asserts that all non-volatile memories that do not require a power supply or quantum-mechanical effects must exhibit a continuum range of memories, which can be tuned to emulate synapses. In particular, we will show that the modulation of neurotransmitters during learning is akin to tuning the conductance of a memristor. We will emulate the classic Kandel Aplysia learning experiments, as well as Bliss and Lømo’s long-term potentiation (LTP) phenomenon, with a single memristor. Moreover, we will resolve the heretofore unresolved Hodgkin-Huxley paradox of the shockingly gigantic inductance they had measured from the squid axon by showing that the potassium and sodium ion channels, which Hodgkin and Huxley had erroneously identified as time-varying conductances, are in fact time-invariant memristors. Furthermore, we will present an experimental voltage-divider circuit containing only one Ag/AgInSbTe/Ta memristive synapse capable of mimicking Pavlov’s conditioning experiments on dogs, thereby demonstrating the associative learning phenomenon.

The second part of this talk presents a new law of thermodynamics, dubbed the Principle of Local Activity, which allows the entropy to decrease and provides the rigorous mathematical foundation for the generation of action potentials in the Hodgkin-Huxley equations, as well as for Alan Turing’s seminal paper on the emergence of complex phenomena and morphogenesis.

We conclude this talk by showing that intelligence and life must operate in a Goldilocks zone of local activity, dubbed the edge of chaos.
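
As a concrete illustration of the pinched hysteresis loop cited in the abstract as the experimental fingerprint of a memristor, the sketch below integrates the simple linear-drift ("HP") memristor model under a sinusoidal drive; the model choice and all parameter values are illustrative assumptions, not taken from the talk.

```python
# Toy simulation of an ideal memristor's pinched hysteresis loop, using the
# linear-drift "HP" memristor model purely as an illustration of the
# experimental fingerprint described in the abstract.
import numpy as np

Ron, Roff = 100.0, 16e3           # limiting resistances (ohm), toy values
k = 1e4                           # state-drift coefficient (1/(A*s)), toy value
V0, freq = 1.0, 1.0               # sinusoidal drive: amplitude (V), frequency (Hz)
dt, n = 1e-5, 200_000             # two full periods

x = 0.1                           # internal state in [0, 1]
v_hist, i_hist = np.empty(n), np.empty(n)
for step in range(n):
    t = step * dt
    v = V0 * np.sin(2 * np.pi * freq * t)
    M = Ron * x + Roff * (1.0 - x)          # memristance depends on the state
    i = v / M
    x = np.clip(x + k * i * dt, 0.0, 1.0)   # charge-controlled state update
    v_hist[step], i_hist[step] = v, i

# The i-v curve traces a hysteresis loop "pinched" at the origin:
# whenever v = 0 the current is also 0, regardless of the state x.
print("max |i| at the zero crossings of v:",
      np.abs(i_hist[np.abs(v_hist) < 1e-3]).max())
```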

CO-HOSTS: Giacomo Indiveri (UZH), Juerg Leuthold (ETHZ), Mathieu Luisier (ETHZ), and Jean Fompeyrine (IBM)

This presentation aims at promoting the research themes of CogiTech/BRICO^2, an envisioned Swiss collaboration on brain-inspired cognitive computing between UZH, ETH, EPFL, IDSIA, UniBe, IBM, EMPA, and PSI.

May 24, 2017: Modern numerical methods for high-speed, compressible, multi-physics, multi-material flows

Dr. Mikhail Shashkov, Los Alamos National Laboratory

TIME: 14:00, PLACE: KOL-G-201 (Aula) UZH

Abstract:
Computational experiment is among the most significant developments in the practice of scientific inquiry in the 21st century. Within the last four decades, computational experiment has become an important contributor to all scientific research programs. It is particularly important for research problems that are insoluble by traditional theoretical and experimental approaches, hazardous to study in the laboratory, or too time-consuming or expensive to solve by traditional means. Computational experiment includes several important ingredients: creating a mathematical model, discretization, solvers, coding, verification and validation, visualization, analysis of the results, etc.

In this talk we will describe some aspects of modern numerical methods for high-speed, compressible, multi-physics, multi-material flows. We will address meshing issues, mimetic discretizations of the equations of Lagrangian gas dynamics and of the diffusion equation on general polygonal meshes, mesh adaptation strategies, methods for dealing with shocks, the interface reconstruction needed for multi-material flows, closure models for multi-material cells, time discretizations, etc.
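
To give a flavour of what "mimetic" means in this context, the toy below builds discrete divergence and gradient operators on a one-dimensional grid that satisfy a discrete integration-by-parts identity exactly. It is only a 1-D sketch of the principle, not the polygonal-mesh discretizations discussed in the talk, and the grid and data are assumed for illustration.

```python
# Minimal 1-D illustration of the mimetic idea: discrete divergence and
# gradient operators built so that a discrete integration-by-parts identity
# holds exactly,
#   sum_i p_i (D u)_i h  =  - sum_f u_f (G p)_f h   (with u = 0 on the boundary).
import numpy as np

rng = np.random.default_rng(3)
n = 64                           # number of cells on [0, 1]
h = 1.0 / n

p = rng.standard_normal(n)       # scalar unknowns at cell centres
u = np.zeros(n + 1)              # fluxes at faces, zero on the boundary
u[1:-1] = rng.standard_normal(n - 1)

div_u = (u[1:] - u[:-1]) / h     # D: faces -> cells
grad_p = (p[1:] - p[:-1]) / h    # G: cells -> interior faces

lhs = h * np.sum(p * div_u)
rhs = -h * np.sum(u[1:-1] * grad_p)
print("discrete integration by parts:", lhs, "=", rhs)   # equal to round-off
```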

March 29, 2017: The Neo-Digital Age – When Moore’s Law Died

Prof. Dr. Thomas Sterling, Indiana University

TIME: 16:15, PLACE: KOL-G-201 (Aula) UZH

Abstract:
Even the highest-scale contemporary conventional HPC system architectures are optimized for the basic operations and access patterns of classical matrix and vector processing. These include an emphasis on FPU utilization, high data reuse requiring temporal and spatial locality, and uniform strides of indexing through regular data structures, either dense or sparse. Systems in the 100-petaflops performance regime, such as the Chinese Sunway TaihuLight and the US CORAL systems Summit and Aurora to be deployed in 2018, are, in spite of their innovations, still limited in these properties. Emerging classes of new application problems in data analytics, machine learning, and knowledge management demand very different operational properties in response to their highly irregular, sparse, and dynamic behaviors, which exhibit little or no data reuse, random access patterns, and meta-data-dominated processing. Close examination clearly suggests that at the core of these “big data” applications is dynamic adaptive graph processing, which is in some ways diametrically opposite to conventional matrix computing. Of immediate importance is the need to significantly enhance efficiency and scalability as well as user productivity and performance portability, and to reduce energy consumption. Key to this is the introduction of powerful runtime system software that exploits real-time system status information to support dynamic adaptive resource management and task scheduling. But software alone for runtime functionality will be insufficient at extreme scale, where near-fine-grained parallelism is necessary and software overheads will limit efficiency and scalability. A new era of architecture research is beginning in the combined domains of accelerator hardware for both graph processing and runtime systems. This presentation will discuss the nature of the computational challenges, provide examples, and describe experiments with a state-of-the-art runtime system, HPX-5. Future directions in hardware architecture support for exascale runtime-assisted big-data computation will be proposed and discussed. Questions and comments from the audience will be welcome throughout the talk.

April 27, 2016: Virtual Materials Testing

Dr. Karel Matouš, University of Notre Dame

TIME: 17:30, PLACE: KOL-G-201 (Aula) UZH

Abstract:
With concentrated efforts from the materials science community to develop new multifunctional materials using unique processing conditions, the need for modeling tools that accurately describe the physical phenomena at each length scale has only been further emphasized. For example, additive manufacturing and shock synthesis of materials lead to unique material morphologies that need to be understood for reliable engineering analysis and product safety assessments. Considering these material complexities, Direct Numerical Modeling (DNM) is accessible only for moderate system sizes. Thus, a multiscale strategy must recognize that only a relatively small part of the material will typically be instantaneously exposed to rapid material transformations. The rest of the material may be adequately described by macroscopic constitutive models obtained from homogenization of the complex but slowly varying microstructure. Nonlinear model reduction, pattern recognition, and data mining are key to future on-the-fly modeling and rapid decision making.

To address these challenges, we present an image-based (data-driven) multiscale framework for modeling the chemo-thermo-mechanical behavior of heterogeneous materials while capturing a large range of spatial and temporal scales. This integrated computational approach for predicting the behavior of complex heterogeneous systems combines macro- and micro-continuum representations with statistical techniques, nonlinear model reduction, and high-performance computing. Our approach exploits instantaneous localization knowledge to decide where more advanced computations are required. Simulations involving this wide range of scales, O(10^6) from nm to mm, and billions of computational cells are inherently expensive, requiring the use of high-performance computing. Therefore, we have developed a hierarchically parallel high-performance computational framework that executes on hundreds of thousands of processing cores with exceptional scaling performance.

Any serious attempt to model a heterogeneous system must also include a strategy for constructing a complex computational domain. This work follows the concept of data-driven (image-based) modeling. We will delineate a procedure based on topology optimization and machine learning to construct a Representative Unit Cell (RUC) with the same statistics (n-point probability functions) as the original material. Our imaging sources include micro-computed tomography (micro-CT), focused ion beam (FIB) sectioning, and Advanced Photon Source nano-tomography at Argonne National Laboratory. We show that high-performance DNM of these statistically meaningful RUCs, coupled on-the-fly to a macroscopic domain, is possible. Therefore, well-resolved microstructure-statistics-property (MSP) relationships can be obtained.

Finally, the integrated V&V/UQ program with co-designed simulations and experiments provides a platform for computational model verification, validation and propagation of uncertainties.
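
As a small illustration of the n-point statistics mentioned above, the snippet below computes the two-point probability function S2 of a synthetic binary microstructure via an FFT autocorrelation; the random image and its volume fraction are illustrative assumptions, not data from the talk.

```python
# Illustrative computation of a two-point probability function S2(r) for a
# synthetic binary microstructure (not the micro-CT / FIB data from the talk).
# S2(r) is the probability that two points separated by r both lie in the
# inclusion phase; such n-point statistics are what a reconstructed RUC is
# required to match.
import numpy as np

rng = np.random.default_rng(4)
img = (rng.random((256, 256)) < 0.3).astype(float)   # toy two-phase medium,
                                                     # 30% volume fraction

# Autocorrelation via FFT (periodic boundaries), normalized by the pixel count.
F = np.fft.fft2(img)
S2 = np.real(np.fft.ifft2(F * np.conj(F))) / img.size

print("S2(0)      =", S2[0, 0], " (equals the volume fraction)")
print("S2(large r)~", S2[64, 64], "(decorrelates to volume fraction squared)")
```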

April 13, 2016: Bayesian inference and the low-dimensional structure of measure transport

Prof. Dr. Youssef Marzouk, Massachusetts Institute of Technology

TIME: 17:30, PLACE: KOL-G-201 (Aula) UZH

Abstract:
Bayesian inference provides a natural framework for quantifying uncertainty in model parameters and predictions, and for combining heterogeneous sources of information. But the computational demands of the Bayesian framework constitute a major bottleneck in large-scale applications. We will discuss how transport maps, i.e., deterministic couplings between probability measures, can enable useful new approaches to Bayesian computation. A first use involves a combination of measure transport and Metropolis correction; here, we use continuous transportation to transform typical MCMC proposals into adapted non-Gaussian proposals, both local and global. Second, we discuss a variational approach to Bayesian inference that constructs a deterministic transport from a reference distribution to the posterior, without resorting to MCMC. Independent and unweighted posterior samples can then be obtained by pushing forward reference samples through the map.

Making either approach efficient in high dimensions, however, requires identifying and exploiting low-dimensional structure. We present new results relating sparsity of transport maps to the conditional independence structure of the target distribution, and discuss how this structure can be revealed through the analysis of certain average derivative functionals. A connection between transport maps and graphical models yields many useful algorithms for efficient ordering and decomposition—here, generalized to the continuous and non-Gaussian setting. The resulting inference algorithms involve either the direct identification of sparse maps or the composition of low-dimensional maps and rotations. We demonstrate our approaches on Bayesian inference problems arising in spatial statistics and in partial differential equations.

This is joint work with Matthew Parno and Alessio Spantini.
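
For intuition, the snippet below illustrates the basic transport-map idea described above in one dimension: reference samples from a standard normal are pushed through a monotone map to produce independent, unweighted samples of a non-Gaussian target. The Gamma target and the exact inverse-CDF map are illustrative assumptions; the maps in the talk are constructed adaptively and in high dimensions.

```python
# Minimal 1-D illustration of the transport-map idea: push standard-normal
# reference samples through the monotone map T = F_target^{-1} o Phi to
# obtain independent, unweighted samples of a non-Gaussian target.  Here a
# Gamma(2, 1) distribution stands in for a "posterior".
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
z = rng.standard_normal(100_000)            # reference samples

T = lambda z: stats.gamma(a=2.0).ppf(stats.norm.cdf(z))   # exact 1-D transport
x = T(z)                                    # pushforward samples of the target

print("sample mean / variance:", x.mean(), x.var())
print("target mean / variance:", 2.0, 2.0)  # Gamma(2,1): mean = var = 2
```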

November 11, 2015: Computational Molecular Design: From Mathematical Theory via High Performance Computing to In Vivo Experiments

Prof. Dr. Christof Schuette, Freie Universität Berlin, Zuse Institute Berlin (Vice-President), DFG Research Center MATHEON (Co-Chair)

TIME: 17:30, PLACE: ML E 12, ETH Zurich

Abstract:
Molecular dynamics and related computational methods enable the description of biological systems in all-atom detail. However, these approaches are limited with regard to simulation times and system sizes. A systematic way to bridge the micro-macro scale range between molecular dynamics and experiments is to apply coarse-graining (CG) techniques. We will discuss Markov State Modelling, a CG technique that has attracted a lot of attention in physical chemistry, biophysics, and computational biology in recent years. First, the key ideas of the mathematical theory behind Markov State Modelling and its algorithmic realization will be explained; next, we discuss how to apply it to understanding molecular function; and last, we ask whether this may help in designing molecules with a prescribed function. All of this will be illustrated by telling the story of the design process of a pain-relief drug, without concealing the potential pitfalls and obstacles.
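
To make the Markov State Modelling idea concrete, the toy below estimates a transition matrix from a discretized trajectory by counting transitions at a fixed lag time, then reads off the stationary distribution and implied timescales. The three-state synthetic trajectory and its "true" transition matrix are illustrative assumptions, not a molecular simulation from the talk.

```python
# Toy Markov state model (MSM) estimation, illustrating the coarse-graining
# idea: count transitions between discrete states along a trajectory,
# normalize rows to get a transition matrix, then compute the stationary
# distribution and implied timescales.
import numpy as np

rng = np.random.default_rng(6)

# Synthetic 3-state trajectory generated from a "true" transition matrix.
P_true = np.array([[0.97, 0.02, 0.01],
                   [0.03, 0.94, 0.03],
                   [0.01, 0.04, 0.95]])
traj = [0]
for _ in range(100_000):
    traj.append(rng.choice(3, p=P_true[traj[-1]]))
traj = np.array(traj)

# MSM estimation at lag time tau (in trajectory steps).
tau, n_states = 1, 3
counts = np.zeros((n_states, n_states))
np.add.at(counts, (traj[:-tau], traj[tau:]), 1.0)
P = counts / counts.sum(axis=1, keepdims=True)

evals, evecs = np.linalg.eig(P.T)
order = np.argsort(-evals.real)
pi = evecs[:, order[0]].real
pi /= pi.sum()                                      # stationary distribution
timescales = -tau / np.log(evals.real[order[1:]])   # implied timescales

print("stationary distribution   :", pi)
print("implied timescales (steps):", timescales)
```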

October 29, 2015: The Real Changes Brought by Big Data: Challenges & Opportunities in the Rapidly Changing Landscape of Big Data

Dr. Usama M. Fayyad, Group Chief Data Officer and CIO of Risk, Finance, and Treasury Technology at Barclays PLC

TIME: 17:15, PLACE: KOL-F-117 UZH

Abstract:
With a fundamental change in the assumptions underpinning a structured data world dominated by relational databases, we are entering the age of Big Data. The combination of economic drivers in enterprise computing, the need to leverage semi-structured and unstructured data, and the emergence of the Internet of Things (IoT) is producing a dramatic shift in the data landscape. The advent of Hadoop and the open-source stack in this space has accelerated the changes to a point of confusion. Today’s data analyst faces a bewildering environment of technologies and challenges involving semi-structured and unstructured data, with access methodologies that have almost no relation to the past. This talk will cover issues and challenges in making the benefits of advanced analytics fit within the application environment. The requirement for real-time data streaming and in situ data mining is stronger than ever. We demonstrate how many of the critical problems remain open, with much opportunity for innovative solutions to play a huge enabling role. This opportunity makes Data Science and several related fields critical to almost all future analytical tasks. The talk will use three real case studies to demonstrate and discuss the challenges and the great opportunities for Big Data and Data Science.

May 13, 2015: Capturing Enzyme Evolution in silico

Prof. Dame Janet M. Thornton, Director EMBL-EBI

TIME: 14:00, PLACE: KOL-G-201 UZH Aula (note: late access through H floor galleries)

Abstract:
Enzyme activity is essential for almost all aspects of life. With completely sequenced genomes, the full complement of enzymes in an organism can be defined, and 3D structures have been determined for many enzyme families. Traditionally each enzyme has been studied individually, but as more enzymes are characterised it is now timely to revisit the molecular basis of catalysis by comparing different enzymes and their mechanisms, and to consider how complex pathways and networks may have evolved. New approaches to understanding enzyme mechanisms and how enzyme families evolve functional diversity will be described.

March 26, 2015: Petascale Computing and High-Precision Arithmetic: Applications and Challenges

Prof. David H. Bailey, Lawrence Berkeley National Lab (retired) and University of California, Davis

TIME: 17:15, PLACE: HG F 30 AudiMAX, ETH Zurich

Abstract:
State-of-the-art high-performance computing systems, which feature performance rates of over 30 quadrillion floating-point operations per second and over 30 quadrillion bytes of memory, represent tools of immense power for scientists and engineers. However, there is considerable danger here as well, for example from greatly magnified numerical round-off error. This presentation will discuss one remedy for such problems, namely employing high-precision arithmetic facilities (typically 32 or 64 digits), which are now widely available and can readily be incorporated into user codes. Numerous examples from modern scientific computing will be shown where such precision is not only useful but in fact mandatory. Other problems of considerable interest require much more precision: hundreds or even thousands of digits. Providing efficient, thread-safe, bug-free and easy-to-use tools for this type of computation is a major challenge.
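
As an example in the spirit of the talk (though not taken from it), the snippet below evaluates exp(pi*sqrt(163)), which is famously within about 10^-12 of an integer: double precision cannot even resolve the question, since adjacent doubles near 2.6e17 are about 32 apart, whereas 40-digit arithmetic with the mpmath library settles it immediately.

```python
# Why high-precision arithmetic can be mandatory: exp(pi*sqrt(163)) is
# famously close to an integer.  64-bit floating point cannot resolve the
# fractional part at this magnitude, while 40-digit arithmetic can.
import math
from mpmath import mp, mpf, exp, pi, sqrt, floor, nstr

x_double = math.exp(math.pi * math.sqrt(163))
print("double precision:", repr(x_double))       # ~2.625...e+17, fraction lost

mp.dps = 40                                       # 40 significant digits
x_mp = exp(pi * sqrt(mpf(163)))
print("40-digit value  :", nstr(x_mp, 35))
print("fractional part :", nstr(x_mp - floor(x_mp), 20))  # 0.999999999999250...
```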
