Posted on 2025-05-01. Authored by Maeesha Binte Hashem.
In this thesis, we introduce TimeFloats, an efficient train-in-memory architecture that implements 8-bit floating-point scalar product computations in the time domain. By building on
the compute-in-memory paradigm—which consolidates both storage and inference within the
same memory array—TimeFloats adds support for floating-point operations, making it possible
to train deep neural networks directly on-chip.
Conventional compute-in-memory solutions often rely on analog-to-digital and digital-to-analog converters (ADCs and DACs), incurring high power consumption and increased design complexity, especially at advanced CMOS technology
nodes. In contrast, TimeFloats employs time-domain signal processing to eliminate the need
for these domain converters, using mainly digital circuit blocks to minimize power usage and
noise sensitivity. This approach also enables high-resolution computations and straightforward
integration with standard digital circuits.
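To give an intuition for what computing in the time domain means, the toy behavioral model below encodes a digital magnitude as a propagation delay and reads a sum back as a pulse's total arrival time, rather than as a voltage that an ADC would have to digitize. The unit delay, the encoding, and every name in this snippet are hypothetical and purely illustrative; they do not describe the TimeFloats circuit itself.

UNIT_DELAY_PS = 10.0  # hypothetical delay contributed per LSB, in picoseconds

def encode_as_delay(value_lsb: int) -> float:
    """Map a digital magnitude (in LSBs) onto a propagation delay."""
    return value_lsb * UNIT_DELAY_PS

def time_domain_accumulate(values_lsb):
    """Chain delay stages: the pulse's total arrival time encodes the sum."""
    arrival_time_ps = 0.0
    for v in values_lsb:
        arrival_time_ps += encode_as_delay(v)  # each stage adds its own delay
    return arrival_time_ps

# A pulse passing through stages encoding 3, 5 and 2 LSBs arrives after 100 ps,
# i.e. the sum (10 LSBs) is read back as a time measurement.
print(time_domain_accumulate([3, 5, 2]))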
Simulation results in a 15 nm CMOS technology indicate that TimeFloats achieves an energy efficiency of 22.1 TOPS/W, underscoring its potential for low-power, high-performance edge
training applications.