
Back Off Binary: Researchers Debut Ternary Semiconductor


One of the first things we learn about computers is that they perform operations using binary code to represent values. It’s a fundamental trait of computing — but it’s not a requirement. It’s possible to build a computer that uses three discrete values for computing rather than just two. “Ternary” is the term for this, though you’ll sometimes see “trinary” used instead. South Korean researchers backed by Samsung have created such a device and demonstrated it can be built using conventional CMOS manufacturing.

There are several different flavors of ternary computing. A ternary computer may use unbalanced ternary {0, 1, 2}, fractional unbalanced ternary {0, 1/2, 1}, balanced ternary {-1, 0, 1}, unknown-state logic {F, ?, T}, or ternary-coded binary {T, F, T}. The Soviet Union actually built ternary computers — most famously the Setun, developed at Moscow State University in the late 1950s — but they were never commercialized at scale. A ternary digit is known as a trit (no word on whether four trits are known as a tribble).
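To make the first two of those concrete, here's a short Python sketch (purely illustrative, not from the researchers' work) that converts an integer into unbalanced ternary digits {0, 1, 2} and balanced ternary digits {-1, 0, 1}:

```python
def to_unbalanced_ternary(n):
    """Represent a non-negative integer with trits from {0, 1, 2}."""
    if n == 0:
        return [0]
    digits = []
    while n:
        n, r = divmod(n, 3)  # peel off the least-significant trit
        digits.append(r)
    return digits[::-1]      # most-significant trit first

def to_balanced_ternary(n):
    """Represent any integer (positive or negative) with trits from {-1, 0, 1}."""
    if n == 0:
        return [0]
    digits = []
    while n:
        n, r = divmod(n, 3)
        if r == 2:           # a digit of 2 becomes -1, with a carry into the next trit
            r = -1
            n += 1
        digits.append(r)
    return digits[::-1]
```

One reason balanced ternary charmed the Setun's designers: it represents negative numbers without a sign bit. For example, 5 comes out as [1, -1, -1], i.e. 9 - 3 - 1.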

Captain James T. Kirk contemplates a tribble. Or a physical representation of 23rd-century computing in stuffie form. You pick.

One potential advantage of ternary computing is representational efficiency: fewer trits than bits are needed to hold a given value. The relevant measure is radix economy — the number of digits required times the number of symbols per digit — and it is minimized at base e (~2.718). Three is the closest integer to e, so base 3 is somewhat more efficient than base 2 by this metric. The South Korean researchers, led by Professor Kyung Rok Kim of UNIST's Electrical & Computer Engineering Department, developed the first unbalanced ternary {0, 1, 2} system, using leakage current to derive the third logic state.
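You can check the radix-economy claim in a few lines of Python (an illustrative back-of-the-envelope calculation, not anything from the paper). The cost of representing a value N in base b is taken as b × log_b(N), i.e. digit count times symbols per digit:

```python
import math

def radix_economy(base, n):
    """Cost of representing n in the given base: digits needed times
    symbols per digit, i.e. base * log_base(n)."""
    return base * math.log(n) / math.log(base)

n = 10**6  # an arbitrary large value; the ranking holds for any n > 1
for b in (2, 3, 4, math.e):
    print(f"base {b}: economy {radix_economy(b, n):.1f}")
```

Running this shows base 3 edging out base 2 by roughly 5 percent (2/ln 2 ≈ 2.89 vs. 3/ln 3 ≈ 2.73 per unit of log N), with base e the theoretical optimum — a modest win, which is why the UNIST team leans on power efficiency rather than digit economy as the headline benefit.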

“The latest research shows that there is a possibility of commercializing ternary semiconductors on the current binary-method chipmaking process technologies, which can lead a change in the paradigm of the semiconductors industry,” Kim told The Korea Herald.

In their paper, published by Nature, the team writes:

The human brain is an example of an energy-efficient system that uses ternary synaptic weights, consuming only ~20W with petascale connection densities. It is known that the brain’s energy efficiency is achieved through maximized parallelism with low-frequency processing (~10Hz). Massively parallel hardware architecture with low frequency, which mimics the structure of the brain, has been considered as a promising approach to break through the power scaling limitations of conventional CMOS digital circuits. In particular, neuromorphic systems on chips (SoCs) based on binary CMOS have been demonstrated, which use 4,096 maximally parallelized neurosynaptic cores and 128 k spiking neural networks at low-frequency (~1 kHz) operation and are of potential value in low-power artificial neural network applications.

The figures quoted — 4,096 neurosynaptic cores at kilohertz-scale operation — describe IBM's TrueNorth chip, though the same low-power philosophy animates Loihi, the neuromorphic architecture from Intel we covered recently. The significance of this research, specifically, is that it's the first time we've seen a ternary design implemented on a modern 8-inch wafer using conventional CMOS processes. As the quote above implies, the South Korean team isn't trying to build a high-speed replacement for conventional CMOS logic, but a vastly more efficient ternary platform that would enable ultra-low-power devices. Elsewhere in the paper, they speak of scaling the design between kilohertz and megahertz depending on design characteristics, and they imagine a system with both ternary and binary circuits (as well as conversion logic to move between the two). This is not a circuit design that would replace conventional Core or Ryzen CPUs, but it might find applications in AI and machine learning.

It remains to be seen whether ternary logic will ever be adopted for computing, but it's no coincidence that we're seeing it explored just as interest in AI and machine learning is spiking. We're already seeing articles about how top-end AI efforts at companies like Facebook and Google are slamming into power limits. Reducing data movement and improving architectural efficiency matter to efforts ranging from human-brain interfaces to the widespread adoption of neural networks. Slow, ultra-parallel architectures may have applications in this space that they won't offer in more conventional computing, and with new process nodes delivering smaller gains each generation, engineers are re-examining older ideas in search of ways to improve computing across various use cases. In theory, multi-valued logic circuits can provide higher efficiency, though the design logic is also much more complex.

The difficulty of adopting ternary computing relative to binary shouldn't be understated, but such techniques might find uses in newer areas like AI/ML, where architectures are less established and the need for high efficiency — especially in edge devices — is paramount. It's difficult to fully evaluate the relative advantages of ternary versus binary computing because so little work has been done on ternary computing in the first place, but it might offer meaningful advantages in specific applications. This research isn't the only work being done on the concept; presenters at the 2016 Hackaday conference showed a balanced ternary system, likewise aimed in theory at IoT and AI/ML applications.

Now Read:

  • How Makimoto’s Wave Explains the Tsunami of Specialized AI Processors Headed for Market
  • Chiplets Are the Future, but They Won’t Replace Moore’s Law
  • Is Moore’s Law Alive, Dead, or Pining for the Fjords? Even Experts Disagree
