University of Illinois Chicago

Exploring Time-Domain Floating-Point Computation for Neural Network Training in Compute-in-Memory Systems

Thesis posted on 2025-05-01, authored by Maeesha Binte Hashem
In this thesis, we introduce TimeFloats, an efficient train-in-memory architecture that implements 8-bit floating-point scalar product computations in the time domain. Building on the compute-in-memory paradigm, which consolidates storage and inference within the same memory array, TimeFloats adds support for floating-point operations, making it possible to train deep neural networks directly on-chip. Conventional compute-in-memory solutions often rely on analog-to-digital and digital-to-analog converters (ADCs and DACs), incurring high power consumption and design complexity, especially at advanced CMOS technology nodes. In contrast, TimeFloats employs time-domain signal processing to eliminate these domain converters, using mainly digital circuit blocks to reduce power consumption and noise sensitivity. This approach also enables high-resolution computation and straightforward integration with standard digital circuits. Simulation results in a 15 nm CMOS technology indicate that TimeFloats achieves an energy efficiency of 22.1 TOPS/W, underscoring its potential for low-power, high-performance edge training applications.
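The thesis realizes these scalar products with time-domain circuits; as a purely numerical illustration, the Python sketch below models what an 8-bit floating-point dot product computes, independent of any circuit realization. The 4-bit-exponent/3-bit-mantissa split, the exponent bias, and the full-precision accumulation are illustrative assumptions, not a format specified in the abstract.

    import math

    def quantize_fp8(x, exp_bits=4, man_bits=3):
        # Round x to the nearest value representable with a sign bit,
        # exp_bits biased exponent bits, and man_bits mantissa bits.
        # Overflow and subnormals are handled crudely: numerical model only.
        if x == 0.0:
            return 0.0
        bias = 2 ** (exp_bits - 1) - 1
        sign = -1.0 if x < 0 else 1.0
        e = math.floor(math.log2(abs(x)))
        e = max(min(e, bias), 1 - bias)  # clamp to the normal exponent range
        m = round(abs(x) / 2.0 ** e * 2 ** man_bits) / 2 ** man_bits
        return sign * m * 2.0 ** e

    def fp8_dot(a, b):
        # Scalar product with both operands quantized to the 8-bit grid.
        # Accumulation stays in full precision, as is common when training
        # with low-precision multiplies.
        return sum(quantize_fp8(p) * quantize_fp8(q) for p, q in zip(a, b))

    print(fp8_dot([0.5, -1.25, 3.0], [2.0, 0.75, -0.125]))  # -0.3125

In this example every input happens to be exactly representable on the 8-bit grid, so the result matches the exact dot product; with arbitrary inputs, the rounding in quantize_fp8 introduces the quantization error that low-precision training schemes must tolerate.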

History

Advisor

Amit Ranjan Trivedi

Department

Electrical and Computer Engineering

Degree Grantor

University of Illinois Chicago

Degree Level

  • Masters

Degree Name

MS, Master of Science

Committee Member

Ahmet Enis Cetin, Wenjing Rao

Format

application/pdf

Language

  • en
