Analog Computing

There is an unconventional kind of computing being investigated in research centers and academic spin-offs around the world. It is not the standard way of doing calculations with a digital computer. It relies instead on mapping a computational task onto a physical system that can solve an equivalent problem natively. An example could be a device in which a light beam propagates inside some pattern that can be controlled and modified with high precision and on short timescales. Light bounces off the device’s walls according to Maxwell’s equations, so it is in effect “solving” those equations. Researchers aim at controlling diffraction patterns, for example, which can be mapped onto, say, finding the global minimum of a surface, onto which one can in turn encode a mathematically equivalent problem, such as optimizing a financial portfolio.

Introducing unconventional computing — Before describing the current state of “unconventional computing”, it is perhaps clearer to introduce it by describing what it is not, and where it draws its ideas and shared characteristics from. Let’s say for the moment that, depending on the approach, it may or may not be a “quantum” process. Such unconventional computing can generally be characterized as analog computing, and there is increasing focus on developing neuromorphic approaches, in the sense that the hardware is inspired by (artificial) neural network topology.

Digital quantum computing: not analog — “Standard” quantum computing algorithms are an example of digital computing. Starting from the early 90s, these proposals provided a new approach to computation, framing it in terms of special logic gates applied in a “circuit”: it was still digital computing, but with different rules. A digital quantum circuit is an abstraction describing the evolution of data through multiple steps, similarly to what happens to data going through the nodes of a neural network. Shor’s algorithm for prime factoring and Grover’s algorithm for “database search” are examples of circuit-based quantum algorithms, such as those that can be run on available quantum computing platforms like those by IBM Q or Rigetti Computing. Such digital quantum computers encode information in a completely different digital unit from the one we are used to: the quantum bit, or qubit.

Quantum Annealing — On a parallel track, already several decades ago, quantum physics theorists addressed the possibility of using the energy landscape of a quantum system to map a computational problem onto its dynamics. For example, as temperature is gradually decreased, a quantum system tends to settle into a lower-energy configuration. Finding the global minimum of an energy potential can thus be mapped onto performing a given calculation. This approach, which we could loosely refer to as belonging to quantum annealing protocols, was commercially adopted by D-Wave and has yet to demonstrate speedups in real devices or practical implementations, but research and characterization of next-generation devices are underway.
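The annealing idea can be illustrated with a purely classical toy model: a minimal simulated-annealing sketch over a tiny Ising-like energy landscape. The couplings below are made-up illustrative values, not taken from any real device, and the cooling schedule is the simplest possible choice.

```python
import math
import random

# Toy Ising-like energy landscape: E(s) = -sum over pairs of J_ij * s_i * s_j,
# with spins s_i = +1 or -1. The couplings J are invented for illustration.
J = {(0, 1): 1.0, (1, 2): -0.5, (0, 2): 0.75}

def energy(spins):
    return -sum(j * spins[a] * spins[b] for (a, b), j in J.items())

def anneal(n_spins=3, steps=5000, t_start=2.0, t_end=0.01, seed=42):
    """Slowly lower the 'temperature' so the spins settle into a low-energy state."""
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n_spins)]
    for step in range(steps):
        t = t_start + (t_end - t_start) * step / steps  # linear cooling schedule
        i = rng.randrange(n_spins)
        trial = spins[:]
        trial[i] *= -1  # propose flipping one spin
        d_e = energy(trial) - energy(spins)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if d_e < 0 or rng.random() < math.exp(-d_e / t):
            spins = trial
    return spins, energy(spins)

spins, e = anneal()
print(spins, e)
```

A physical annealer follows the same logic, but the “temperature” sweep and the energy landscape are implemented in hardware rather than in a loop.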

Quantum Simulation — A different idea, with some similarities to quantum annealing, is the more general one of analog quantum simulation. The main concept was famously set forth by Richard Feynman in the early 80s, from the observation that to simulate a quantum system of interest and study its properties, it would be easier to use another, controllable quantum system as a proxy. Say that you would like to study a material whose properties are governed by quantum mechanics: numerically simulating the material’s particles on a computer is unfeasible, and it might be easier to test its properties if you could replicate its salient features in a model, e.g., by engineering a lattice of intersecting laser beams at whose nodes interacting particles sit.

Of course, part of the challenge is to find a suitable simulator that contains enough complexity to provide insight into the original problem. If the dynamics and energy landscape of such a quantum simulator faithfully replicate those of the given material, tuning the knobs that control the model is a much more feasible task than experimentally studying a slab of the material in a lab. A reprogrammable quantum simulator is in some sense a quantum computer, just not always as general as one might want it to be.

Enter Stage: Neural Networks — More recently, the success of deep neural network architectures, and the demanding computational requirements of these algorithms, have driven the search for new ways of performing calculations, optimizing computational time, latency, and connectivity. Indeed, the modern deployment of some neural network applications now relies on architectures that go beyond standard central processing units (CPUs). They still rely on digital computing, but take advantage of a specific hardware design, one on which some calculations can natively run in parallel. There is no need for a CPU to oversee all parallel calculations, which can run efficiently on a different kind of hardware, such as graphics processing units (GPUs). An example application is image recognition, in which deep neural network algorithms have outperformed other techniques year after year over the past five years in contests such as ImageNet. In machine learning with neural networks, one can train and deploy an algorithm by exploiting distributed computing power on a problem that can be easily parallelized.

Analog computing, an old story made new — Going back to the development side of analog computing, relevant advancements in manufacturing novel solid-state and photonic (i.e., light-based) systems have occurred over the past thirty years. Is this just an old story made new? Academic groups and corporate R&D labs have for years been chasing optical computing, with its promised large data bandwidth and frictionless on-chip data sharing, then cyclically yielding to electronic computing as computational power just kept steadily increasing, leveraging the semiconductor (silicon) industry. Even back in the early days of computing, special-purpose machines were deployed in strategic and research networks, with specific requirements for their operation. Gradually, each of these machines went out of production — sometimes bringing down the manufacturing companies behind them. General-purpose computers just kept getting better, standardizing requirements and removing incompatibilities.

The current renaissance in unconventional computing is due to a mix of factors. The end of Moore’s law is one aspect (although it is sometimes an oversimplified narrative). Manufacturing and characterization techniques have advanced to the point that tuning an experiment’s degrees of freedom with exquisite precision, and comparing this to playing with the control knobs of a computer, is no longer a heresy but a fascinating prospect. A variety of approaches leads to unconventional computing. Below are some notable trends and examples.

Analog computing startups: photonics — A couple of years ago, control over photonic devices led to the implementation of a prototype of a reprogrammable analog neural network, jointly in the labs of Dirk Englund and Marin Soljačić at MIT. The first two authors of the related publication started not one but two competing startups, backed by Chinese investors and Google Ventures (GV), among others. Link 1 Link 2

Another center of photonics R&D is NTT, the Nippon Telegraph and Telephone company, which, similarly to AT&T with Bell Labs, set up basic and applied research infrastructure spanning many fields. In collaboration with Stanford University, Yoshihisa Yamamoto’s “coherent Ising machine” has been crafted to investigate whether a particular approach to nonlinear components in a laser light circuit, known as parametric oscillators, could be programmed on very short timescales to simulate the energy landscape of a collection of magnetic spins, a well-studied problem onto which optimization problems can be embedded. Simulating the time evolution of such a complex system could then be used to tackle optimization problems. The technology transfer has been subsidized by Japan’s government and will soon land in Silicon Valley, with the machine currently branded as LASOLV (for laser solver). Link
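To make the embedding of optimization problems onto spin landscapes concrete, here is a small sketch (the graph and its edge weights are invented for illustration) of the standard mapping from Max-Cut to Ising spins: an edge contributes to the cut exactly when its two endpoints carry opposite spins, so maximizing the cut is equivalent to minimizing an antiferromagnetic Ising energy.

```python
from itertools import product

# A small weighted graph (edge weights invented for illustration).
edges = {(0, 1): 1.0, (1, 2): 2.0, (2, 3): 1.0, (0, 3): 1.0, (0, 2): 3.0}
n = 4

def cut_value(spins):
    # An edge is cut when its endpoints have opposite spins (s_i * s_j = -1).
    return sum(w * (1 - spins[i] * spins[j]) / 2 for (i, j), w in edges.items())

def ising_energy(spins):
    # Max-Cut maps onto the Ising energy E(s) = sum over edges of w_ij * s_i * s_j:
    # since cut(s) = (W - E(s)) / 2, with W the total edge weight,
    # maximizing the cut is the same as minimizing E.
    return sum(w * spins[i] * spins[j] for (i, j), w in edges.items())

# Brute force stands in here for the physical Ising machine.
best = max(product([-1, 1], repeat=n), key=cut_value)
print(best, cut_value(best), ising_energy(best))
```

An Ising machine searches this same energy landscape physically, which is why a fast, reprogrammable spin simulator doubles as an optimization solver.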

Analog computing startups: optoelectronics and electronics — Similarly, a variety of systems are trying to emulate neural networks and complex dynamical systems, sometimes aiming to exploit quantum features, sometimes simply leveraging the extreme controllability of devices. Experimental research efforts on different platforms are underway, for example on quantum-mechanical Bose-Einstein condensates in multi-mode optical cavities at institutions such as Caltech and MIT. Skoltech, a recently established polytechnic research institute outside Moscow, is pioneering the development of polariton lattices, in which laser light undergoing wave modulation irradiates optically activated semiconductor micro-cavities. These systems are seen as classical simulators in which to engineer combinatorial capabilities. Research on microelectromechanical systems is being performed in various places in Europe, including projects funded by the Volkswagen Foundation, at HP research labs, and at CNRS in France.

Challenges for analog computing — The challenges for unconventional computing are somewhat similar to those experienced by noisy intermediate-scale quantum (NISQ) computing systems. Firstly, a problem to tackle is noise reduction. The great idea of digital computing is that you can easily correct errors that have flipped a bit by mistake, for example with redundancy schemes (or quantum error correction). How noise scales in analog systems is not always clear; in some instances it could rule out universal scaling laws, while leaving room for specific algorithms that, in a given limited regime, could do much better than “standard computing” solutions on particular problems. At the same time, controlled noise in a device could lead to chaotic regimes, which could be beneficial to the exploration of the phase space of solutions.
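The redundancy idea mentioned above, which analog signals do not directly allow, can be sketched in a few lines with a classical three-bit repetition code (the code length and flip probability below are chosen only for illustration): a flipped bit is recovered by majority vote as long as fewer than half the copies flip.

```python
import random

def encode(bit, n=3):
    # Repetition code: store n identical copies of the logical bit.
    return [bit] * n

def transmit(codeword, flip_prob, rng):
    # Each physical copy flips independently with probability flip_prob.
    return [b ^ (rng.random() < flip_prob) for b in codeword]

def decode(codeword):
    # Majority vote: correct whenever fewer than half the copies flipped.
    return int(sum(codeword) > len(codeword) // 2)

# Monte Carlo comparison: raw bit vs. encoded bit at 10% flip probability.
rng = random.Random(0)
trials, p = 10_000, 0.1
raw_errors = sum(transmit([1], p, rng)[0] != 1 for _ in range(trials))
coded_errors = sum(decode(transmit(encode(1), p, rng)) != 1 for _ in range(trials))
print(raw_errors / trials, coded_errors / trials)
```

The encoded error rate drops from roughly p to roughly 3p², which is exactly the kind of cheap digital trick that has no obvious analog counterpart.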

Secondly, there is the issue of benchmarking. This is a problem that has already been encountered by quantum technologists. Drawing again on quantum computing for comparison, more qubits are not necessarily better, or at least not the only metric to consider: one needs to take into account connectivity and decoherence times relative to gate operation times, for example. Similar issues apply to unconventional computing machines. Comparing exact algorithms with heuristics can lead to misrepresentations. Similarly, complexity-theory arguments might not be easily applicable or interpretable in practical terms. Comparing energy consumption might be one of the most straightforward ways to compare computing machines.

To overcome the current hurdles, research is underway along various lines in analog computing. More spinoffs might be incubating.

This is a collection of my articles on quantum technology, part of my Quantum Tech Newsletter. You can also read the original posts on Medium:

  1. Gravitational Quantum Sensors
  2. Quantum Advantage
  3. Analog Computing
  4. Quantum Internet
  5. Quantum Games
  6. Open-Source Quantum Tech
  7. Quantum Machine Learning
  8. Space Quantum Communication

© Nathan Shammah — 2017 and beyond.
