
When Boole Meets Shannon – Second i-RISC Workshop

Keynote Speakers

Published on 8 June 2017

Jack Cowan, University of Chicago

From Boole to Shannon and beyond

Abstract: In 1938 C. E. Shannon's paper "A Symbolic Analysis of Relay and Switching Circuits" was published in Trans. AIEE. It was one of the first applications of Boolean algebra to problems involving switching circuits. [A similar paper was published in the Soviet Union by V. I. Shestakov in 1941, based on earlier work.] In 1943 W. S. McCulloch and W. H. Pitts published a paper entitled "A Logical Calculus of the Ideas Immanent in Nervous Activity" in Bull. Math. Biophys., in which they implemented the first-order predicate calculus of mathematical logic in switching circuits based on primitive models of nerve cells. McCulloch and Pitts showed that a network of such cells, if provided with an indefinitely large memory, could function as a universal Turing machine. Following this, in 1952 J. von Neumann showed how to construct networks of McCulloch-Pitts elements that were fault and failure tolerant. Von Neumann's paper on this topic was included in a collection of papers edited by Shannon and J. McCarthy and published as "Automata Studies" in 1956. This was followed in 1963 by the MIT Press monograph "Reliable Computation in the Presence of Noise," written by S. Winograd and J. D. Cowan, which showed how Shannon's work on information theory could be used to improve the efficacy of von Neumann's networks. In this talk I will summarize much of this history and attempt to relate it to more modern work on this and related topics concerning machine learning and the neuronal networks of the brain.


Bio: Jack Cowan is a Professor in the Mathematics Department at the University of Chicago. He also has joint appointments in the Department of Neurology and in the Committee on Computational Neuroscience. He was born in the UK and educated both in the UK and at MIT. In 1967 he and his family moved from London to Chicago, where he became Professor and Chairman of the Committee on Mathematical Biology. Since 1981 he has been a Professor in the Mathematics Department. Over the past 58 years Jack Cowan has worked on a variety of mathematical topics, including how to build reliable computers with unreliable elements, the population dynamics of large-scale neural networks, and (for the past thirty years) the statistical mechanics of networks of neuron-like elements. His main contributions have been the formulation of the sigmoid firing-rate equations for neural networks and (with Hugh Wilson) of what are now known as the Wilson-Cowan equations, which capture the attractor dynamics of such networks; his work with Bard Ermentrout on their mathematical analysis; his work with Paul Bressloff, Martin Golubitsky, and others on modeling the structure and dynamics of the primate visual cortex; and (more recently) his work with Michael Buice on the statistical mechanics of neural networks. He is perhaps best known for his work on the Wilson-Cowan equations, his work on geometric visual hallucinations and what they tell us about the human brain, and his recent papers on stochastic Wilson-Cowan equations and their applications.
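
For readers unfamiliar with the model, the Wilson-Cowan equations mentioned above couple the mean firing rates of an excitatory and an inhibitory population through sigmoid nonlinearities. The following is a minimal simulation sketch; all weights, time constants, and inputs are illustrative placeholder values, not parameters from the papers cited here.

```python
import numpy as np

# Minimal Wilson-Cowan sketch: mean firing rates of an excitatory (E)
# and an inhibitory (I) population coupled through a sigmoid.
# All parameter values below are illustrative placeholders.

def sigmoid(x, a=1.2, theta=2.8):
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def simulate(T=100.0, dt=0.01, P=1.25, Q=0.0):
    w_ee, w_ei, w_ie, w_ii = 12.0, 4.0, 13.0, 11.0  # coupling weights
    tau_e, tau_i = 1.0, 1.0                          # time constants
    E, I = 0.1, 0.05                                 # initial rates
    for _ in range(int(T / dt)):
        dE = (-E + sigmoid(w_ee * E - w_ei * I + P)) / tau_e
        dI = (-I + sigmoid(w_ie * E - w_ii * I + Q)) / tau_i
        E, I = E + dt * dE, I + dt * dI
    return E, I

print(simulate())  # the (E, I) state the dynamics settle into for drive P
```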


Lav R. Varshney, University of Illinois at Urbana-Champaign

Towards Fundamental Limits of Reliable Memories Built from Unreliable Components

Abstract: There has been long-standing interest in constructing reliable memory systems from unreliable components like noisy bit-cells and noisy logic gates, under circuit complexity constraints. Prior work has focused exclusively on constructive achievability results, but here we develop converse theorems for this problem for the first time. The basic technique relies on entropy production/dissipation arguments and balances the need to dissipate entropy with the redundancy of the code employed. A bound from the entropy dissipation capability of noisy logic gates is used via a sphere-packing argument. Although a large gap remains between refined achievability results stated herein and the converse, some suggestions for ways to move forward beyond this first step are provided.

This work was supported in part by Systems on Nanoscale Information fabriCs (SONIC), one of the six SRC STARnet Centers, sponsored by MARCO and DARPA.
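
To give a feel for such entropy-dissipation arguments, here is a small illustrative calculation of our own (not the converse bound from the talk): a gate whose output is flipped with probability eps behaves like a binary symmetric channel, so each use can convey at most 1 - h(eps) bits, which caps how much entropy a correcting stage can remove per gate.

```python
import math

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

# A gate whose output is flipped with probability eps acts like a
# binary symmetric channel: each use conveys at most 1 - h(eps) bits,
# bounding the entropy a correcting circuit can dissipate per gate use.
for eps in (0.001, 0.01, 0.05, 0.1):
    cap = 1.0 - h(eps)
    print(f"eps = {eps:>6}: per-gate dissipation <= {cap:.4f} bits "
          f"=> >= {1.0/cap:.3f} gate uses per bit of entropy removed")
```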


Bio: Lav R. Varshney is an assistant professor in the Department of Electrical and Computer Engineering, a research assistant professor in the Coordinated Science Laboratory, and a research affiliate in the Beckman Institute for Advanced Science and Technology, all at the University of Illinois at Urbana-Champaign.

He received the B.S. degree with honors in electrical and computer engineering (magna cum laude) from Cornell University, Ithaca, New York, in 2004. He received the S.M., E.E., and Ph.D. degrees in electrical engineering and computer science from the Massachusetts Institute of Technology (MIT), Cambridge, in 2006, 2008, and 2010, respectively. He was a research staff member at the IBM Thomas J. Watson Research Center, Yorktown Heights, NY, from 2010 until 2013. His research interests include information and coding theory, signal processing and data analytics, collective intelligence in sociotechnical systems, neuroscience, and creativity.

Dr. Varshney is a member of Tau Beta Pi, Eta Kappa Nu, and Sigma Xi, as well as a senior member of IEEE. He received the J.-A. Kong Award Honorable Mention for his electrical engineering doctoral thesis and the E. A. Guillemin Thesis Award for an outstanding electrical engineering S.M. thesis, has received several paper awards, and was a finalist for the 2014 Bell Labs Prize. He has also received considerable recognition at IBM for his work on crowdsourcing, data analytics, and computational creativity.


Luca Benini, ETH Zurich and University of Bologna

Ultra-Low Power Design for the IoT - an "Adequate Computing" perspective

Abstract: The "internet of everything" envisions trillions of connected objects loaded with high-bandwidth sensors requiring massive amounts of local signal processing, fusion, pattern extraction and classification. Higher level intelligence, requiring local storage and complex search and matching algorithms, will come next, ultimately leading to situational awareness and truly "intelligent things" harvesting energy from their environment.

From the computational viewpoint, the challenge is formidable and can be addressed only by pushing computing fabrics toward massive parallelism and brain-like energy efficiency levels. We believe that CMOS technology can still take us a long way toward this vision. Our recent results with the PULP (parallel ultra-low power) open computing platform demonstrate that pJ/op (i.e., GOPS/mW) computational efficiency is within reach in today's 28 nm FD-SOI CMOS technology. In the longer term, looking toward the next 1000x of energy efficiency improvement, we will need to fully exploit the flexibility of heterogeneous 3D integration, stop being religious about analog vs. digital and von Neumann vs. "new" computing paradigms, and seriously look into relaxing traditional "hardware-software contracts" such as numerical precision and error-free permanent storage.
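
As a quick sanity check on the units (our own arithmetic, not a figure from the talk), 1 pJ per operation and 1 GOPS per mW are the same efficiency:

```python
# Unit sanity check (illustrative arithmetic, not data from the talk):
# 1 pJ/op  <=>  1 GOPS/mW, since (1e-3 W) / (1e-12 J/op) = 1e9 op/s.

energy_per_op_J = 1e-12          # 1 pJ per operation
power_budget_W = 1e-3            # 1 mW power budget
ops_per_second = power_budget_W / energy_per_op_J
print(f"{ops_per_second:.0e} op/s = {ops_per_second / 1e9:.0f} GOPS per mW")
```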


Bio: Luca Benini holds the Chair of Digital Circuits and Systems at ETH Zurich and is a Full Professor at the University of Bologna. He served as Chief Architect for the Platform2012/STHORM project at STMicroelectronics, Grenoble. He has held visiting and consulting researcher positions at EPFL, IMEC, Hewlett-Packard Laboratories, and Stanford University. Dr. Benini's research interests are in energy-efficient system design and multi-core SoC design. He is also active in the area of energy-efficient smart sensors and sensor networks for biomedical and ambient intelligence applications.

He has published more than 700 papers in peer-reviewed international journals and conferences, four books and several book chapters. He is a Fellow of the IEEE and a member of the Academia Europaea.


Bernd Steinbach, Freiberg University of Mining and Technology

Simpler Decomposition Functions Detected by Means of the Boolean Differential Calculus

Abstract: The main idea of a bi-decomposition consists in splitting a given Boolean function f(x) into two decomposition functions g(x) and h(x). A recursive synthesis method based on the bi-decomposition finds a complete circuit if the decomposition functions g(x) and h(x) are simpler than the given function f(x).
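
As a toy illustration of the idea (our own example, not one from the talk), the function f(a,b,c,d) = (a OR b) AND (c OR d) has an AND-bi-decomposition whose decomposition functions each depend on only two of the four variables:

```python
from itertools import product

# Toy AND-bi-decomposition (illustrative example, not from the talk):
# f(a,b,c,d) = (a | b) & (c | d) splits into g(a,b) = a | b and
# h(c,d) = c | d, each depending on fewer variables than f.

f = lambda a, b, c, d: (a | b) & (c | d)
g = lambda a, b: a | b          # decomposition function g(x)
h = lambda c, d: c | d          # decomposition function h(x)

assert all(f(a, b, c, d) == (g(a, b) & h(c, d))
           for a, b, c, d in product((0, 1), repeat=4))
print("f = g AND h holds on all 16 input combinations")
```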

There are both linear and non-linear bi-decompositions. Böhlau suggested in his PhD thesis three types of strong bi-decomposition: the non-linear AND-bi-decomposition, the non-linear OR-bi-decomposition, and the linear EXOR-bi-decomposition. The big advantage of all strong bi-decompositions is that both the decomposition function g(x) and the decomposition function h(x) depend on fewer variables than the given function f(x). In this way the needed simplification is achieved. The drawback of the strong bi-decompositions is that there are Boolean functions f(x) for which no strong bi-decomposition exists.

Le suggested in his PhD thesis weak bi-decompositions. The linear weak EXOR-bi-decomposition exists in any case; unfortunately, the decomposition functions are not simpler in this case. Therefore, this type of bi-decomposition must be excluded from a synthesis approach based on bi-decomposition. However, Le also found the non-linear weak AND-bi-decomposition and the non-linear weak OR-bi-decomposition, which simplify the decomposition functions g(x) and h(x): g(x) can be chosen out of a larger lattice of functions, and h(x) depends on fewer variables. Le's main result is the proof that for each Boolean function there exists either a strong EXOR-bi-decomposition, a weak AND-bi-decomposition, or a weak OR-bi-decomposition. Hence, there is a complete synthesis approach based on the strong and weak bi-decompositions.

Questions that remained open for a very long time were: Are there other simple decomposition functions? How can simple decomposition functions be determined? Based on his results on more general Boolean lattices, Steinbach found, as a supplement to the known bi-decompositions, the new vectorial bi-decompositions. The more general indicator for simplification is whether some vectorial derivative is equal to zero. The bi-decompositions known so far utilize only simple derivatives, which can be seen as a small subset of all vectorial derivatives. Practical examples confirm the benefit of the combined use of all bi-decompositions.
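
For readers new to the Boolean Differential Calculus, the following minimal sketch (our own, with an illustrative function) shows the indicator mentioned above: the vectorial derivative of f with respect to a set of variables complements all of them simultaneously, and a derivative that vanishes everywhere signals an invariance that a decomposition can exploit.

```python
from itertools import product

# Minimal sketch of a vectorial Boolean derivative (illustrative f):
# df/d(x_set)(x) = f(x) XOR f(x with every variable in x_set
# complemented simultaneously). If it is 0 for all x, f is invariant
# under that simultaneous flip.

def vectorial_derivative(f, n, flip):
    """Truth table of the vectorial derivative of f w.r.t. the
    variable indices in `flip` (a simple derivative if len(flip)==1)."""
    table = []
    for x in product((0, 1), repeat=n):
        y = tuple(v ^ (i in flip) for i, v in enumerate(x))
        table.append(f(*x) ^ f(*y))
    return table

f = lambda a, b, c: (a ^ b) | c  # unchanged when a and b flip together
print(vectorial_derivative(f, 3, {0, 1}))  # all zeros: derivative vanishes
print(vectorial_derivative(f, 3, {0}))     # simple derivative: not all zero
```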

Bio: Bernd Steinbach studied Information Technology at the University of Technology in Chemnitz (Germany) and graduated with an M.Sc. in 1973. He graduated with a Ph.D. and with a Dr. sc. techn. (Doctor scientiae technicarum) for his second doctoral thesis from the Faculty of Electrical Engineering of the Chemnitz University of Technology in 1981 and 1984, respectively. In 1991 he obtained the Habilitation (Dr.-Ing. habil.) from the same Faculty. He worked in industry as an electrician, testing professional control systems at the Niles Company. After his studies he taught as an Assistant Lecturer at the Department of Information Technology of the Chemnitz University of Technology. In a subsequent period of industrial work as a research engineer, he developed programs for test pattern generation for computer circuits at the company Robotron. He then returned to the Department of Information Technology of the Chemnitz University of Technology as Associate Professor for design automation in logic design. Since 1992 he has been a Full Professor of Computer Science / Software Engineering and Programming at the Freiberg University of Mining and Technology, Department of Computer Science. He has served as Head of the Department of Computer Science and Vice-Dean of the Faculty of Mathematics and Computer Science.


His research areas include logic functions and equations and their application in many fields, such as Artificial Intelligence, UML-based testing of software, and UML-based hardware/software co-design. He is the head of the group that developed the XBOOLE software system. He has published seven books on topics in the Boolean domain. The first one, co-authored by D. Bochmann, covers logic design using XBOOLE (in German). The following two, co-authored by Christian Posthoff, are "Logic Functions and Equations – Binary Models for Computer Science" and "Logic Functions and Equations – Examples and Exercises" (Springer, 2004 and 2009, respectively). The fourth, also co-authored by Christian Posthoff, is "Boolean Differential Calculus" (Morgan & Claypool, 2013). He is the editor and author of several chapters of the book "Recent Progress in the Boolean Domain" (Cambridge Scholars Publishing, 2014). The last two, "Logic Functions – Boolean Models" and "Efficient Boolean Calculations with XBOOLE," are textbooks written in German, again co-authored by Christian Posthoff and published by EAG.LE in 2014 and 2015. He has published more than 240 chapters in books, complete issues of journals, and papers in journals and proceedings. He has served as Program Chairman for the IEEE International Symposium on Multiple-Valued Logic (ISMVL), and as guest editor of the Journal of Multiple-Valued Logic and Soft Computing. He is the initiator and general chair of the biennial series of International Workshops on Boolean Problems (IWSBP), which started in 1994, with 11 workshops held so far. He received the Barkhausen Award from the University of Technology Dresden in 1983.


Bane Vasic, University of Arizona

Iterative Decoders with Deliberate Message Flips

(Authors: Bane Vasic, Predrag Ivanis, David Declercq, Khoa LeTrung and Elza Dupraz)

Abstract: In this talk we show that random errors in an iterative decoder's logic gates can be exploited to our advantage, leading to improved performance and reduced hardware redundancy. We discuss bit-flipping and message passing decoders with deliberate errors, more specifically gradient descent bit flipping and Gallager-B decoders on the binary symmetric channel (BSC). We present an analysis method for finite-length low-density parity-check codes and show examples in which, for a sufficient number of iterations, the error performance of such iterative decoders is close to that of the maximum likelihood decoder.
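
To convey the flavor of such decoders, here is a minimal bit-flipping sketch of our own (a toy greedy flip rule on a (7,4) Hamming code, not the gradient descent bit flipping or Gallager-B algorithms from the talk) in which each flip decision is itself perturbed with a small deliberate-error probability:

```python
import random

# Toy bit-flipping decoder with deliberate flips (illustrative sketch,
# not the GDBF or Gallager-B decoders from the talk). H is a
# parity-check matrix of the (7,4) Hamming code.

H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]

def decode(y, p_flip=0.05, max_iters=50, rng=random.Random(0)):
    y = list(y)
    for _ in range(max_iters):
        s = [sum(row[j] * y[j] for j in range(7)) % 2 for row in H]
        if not any(s):
            return y                      # all parity checks satisfied
        # number of unsatisfied checks each bit participates in
        counts = [sum(s[i] * H[i][j] for i in range(3)) for j in range(7)]
        worst = max(counts)
        for j in range(7):
            flip = counts[j] == worst     # greedy flip rule
            if rng.random() < p_flip:     # deliberate random perturbation
                flip = not flip
            if flip:
                y[j] ^= 1
    return y

received = [0, 0, 1, 0, 0, 0, 0]  # all-zero codeword with one BSC error
print(decode(received))            # usually recovers [0, 0, 0, 0, 0, 0, 0]
```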

Bio: Bane Vasic is a Professor of Electrical and Computer Engineering and Mathematics at the University of Arizona and the Director of the Error Correction Laboratory. Prof. Vasic is an inventor of the soft error-event decoding algorithm and the key architect of a detector/decoder for Bell Labs magnetic recording read channel chips, which were regarded as the best in the industry. Different variants of this algorithm were implemented in virtually all magnetic hard drives. His pioneering work on structured low-density parity-check (LDPC) error correcting codes and his invention of such codes have enabled low-complexity iterative decoder implementations. Structured LDPC codes are today adopted in a number of communications standards and are used in extremely high-density magnetic hard drives. Dr. Vasic's theoretical work in error correction coding theory and codes on graphs has led to the combinatorial characterization of decoding failures of hard-decision iterative decoders of LDPC codes, and to the design of codes and decoders with the best error-floor performance known today. He is an IEEE Fellow and Chair of the IEEE Data Storage Technical Committee.


Sorin Cotofana, TU Delft

Is Boole's Computation Avenue Getting Bumpy?

Abstract: For many decades we have been witnessing an ongoing increase in Integrated Circuit performance, mainly due to advances in MOSFET fabrication technology and algorithm improvements. While device feature size reduction has been the key factor in the dramatic increase of the processing power of logic and arithmetic circuits, it is generally accepted that, sooner or later, MOSFETs cannot be shrunk further due to fundamental physical restrictions. Because of this, combined with the fact that with extreme feature size reduction MOSFETs start to deviate from the switch behavior that allowed their natural marriage with Boolean algebra, several emerging nanotechnologies are currently being investigated, e.g., Single Electron Tunneling (SET) junctions, Nano-Electro-Mechanical FETs, and memristive devices.

While there is a natural fit between the mainstream MOSFET and Boolean logic, this is not really the case for most of the emerging technologies. Despite this, the state-of-the-art approach is to mold emerging nano-devices to fit into the CMOS-centric design methodology, i.e., to produce a MOSFET-replacement nano-device. At first glance this is not surprising, given that the FET's relay-like behavior provides the perfect hardware support for Boolean logic based implementations. However, these new nano-devices exhibit behaviors that are substantially different from those of the well-established MOSFET, and if we are to fully utilize their potential we should adapt the computation and design paradigms to their specifics.


In this line of reasoning, we first discuss the nano-era specific challenges and opportunities one has to face and/or make use of when designing computation platforms. Subsequently, we take SET junctions as a discussion vehicle, introduce SET-based logic families that do not build upon Boolean logic, and discuss their capabilities and potential impact. Finally, we briefly describe a modular redundancy scheme that relies on adaptive analog voting and makes use of noise to achieve controlled Dynamic Stochastic Resonance, increasing its correction capabilities. Based on the discussed approaches, we conclude that, in certain circumstances, if one is willing to embrace new computation paradigms, one can better address nano-era specific challenges and make better use of emerging devices.
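
As background for the redundancy discussion, here is a small illustrative calculation of our own (a plain digital majority vote, not the adaptive analog voter from the talk): with N replicas each failing independently with probability p, the voted output fails only when more than half the replicas fail.

```python
from math import comb

# Illustrative N-modular redundancy calculation (our own sketch; the
# talk describes an adaptive *analog* voter, this is a plain digital
# majority vote): with N independent replicas each wrong with
# probability p, the vote is wrong when a majority of replicas are.

def majority_error(p, n):
    k_min = n // 2 + 1  # smallest number of failures that flips the vote
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

p = 0.01
for n in (1, 3, 5, 7):
    print(f"N={n}: voter error = {majority_error(p, n):.2e}")
```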


Bio: Sorin Cotofana received the M.Sc. degree in Computer Science from the "Politehnica" University of Bucharest, Romania, and the Ph.D. degree in Electrical Engineering from Delft University of Technology, The Netherlands. He is currently an Associate Professor with the Electrical Engineering, Mathematics and Computer Science Faculty, Delft University of Technology, Delft, The Netherlands. His current research is focused on: (i) the design and implementation of dependable/reliable systems out of unpredictable/unreliable components; (ii) ageing assessment/prediction and lifetime-reliability-aware resource management; and (iii) unconventional computation paradigms and computation with emerging nano-devices. He has (co-)authored more than 300 papers in peer-reviewed international journals and conferences and received 12 best paper awards at international conferences, e.g., the 2012 IEEE Conference on Nanotechnology, the 2012 ACM/IEEE International Symposium on Nanoscale Architectures, the 2005 IEEE Conference on Nanotechnology, and the 2001 International Conference on Computer Design. He served as Associate Editor for IEEE Transactions on Circuits and Systems I (2009-2011) and IEEE Transactions on Nanotechnology (2008-2014), as Chair of the Giga-Nano IEEE CASS Technical Committee (2013-2015), and as IEEE Nano Council CASS representative (2013-2014), and has been actively involved in the organization of many international conferences. He is currently Associate Editor in Chief and Senior Editor for IEEE Transactions on Nanotechnology and a Steering Committee member for IEEE Transactions on Multi-Scale Computing Systems. He is a HiPEAC member and an IEEE Senior Member (Circuits and Systems Society (CASS) and Computer Society).