University of Illinois Chicago

Architectural Support For Unified Memory Controller

Thesis
Posted on 2016-07-01, authored by Vivek Porush
Advances in semiconductor fabrication technology and processor architecture have resulted in the development of high-performance processors. These processors have multi-core, multi-threaded architectures that further exacerbate memory demand. Sustained performance improvements on these architectures require contemporary memories to provide high bandwidth and capacity at low power, along with meeting several other constraints. Although conventional memory systems have been constantly pushed to meet these requirements, the disparity between processor and memory performance keeps growing, resulting in a significant performance bottleneck. Recent research in memory technologies has produced novel non-volatile memory technologies (PCM, MRAM). These memories offer several advantages, such as high scalability, capacity, and bandwidth at low power, and can further improve overall system performance. Since these technologies are fundamentally different from contemporary memories, a new framework is required to integrate them into current memory architectures. Previously, researchers relied on independent memory controllers to integrate these memories within a single heterogeneous system. However, with the reduction in VLSI feature size, controllers are now placed on the processor die itself. These trends point toward the need for a new memory controller framework that is technology independent and can benefit from multiple memory technologies. One such framework is the Universal Memory Controller (UniMA). Throughout our study, we investigate design and performance overheads in the UniMA framework and propose several design alternatives. First, we analyze physical token-passage topologies in terms of performance, cost, and reliability. The analysis shows that a bus topology for token passage is better suited for desktop and mobile applications. Next, we propose new shared-channel ownership protocols, namely static token assignment, individual token hold, and dynamic token assignment. These token assignment policies aim to reduce the overall token-passage overhead while maintaining fairness on the shared channel. Results indicate that longer token-passage overheads can be successfully hidden in memory-intensive applications. Furthermore, dynamic token assignment offers the best performance by adapting to current memory demands. Finally, we analyze the effect of request reordering and page management on the overall performance of the UniMA framework. Results indicate that a closed-page policy offers the best performance in a multi-threaded environment.
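
The abstract names dynamic token assignment as the best-performing shared-channel ownership policy. As a rough illustration only, and not code taken from the thesis, the Python sketch below models a toy version of that idea: several memory controllers share one channel, ownership follows a token, and the token is granted to the controller with the deepest pending queue, paying a passage penalty whenever ownership changes. All class names, arrival probabilities, and cycle costs (TOKEN_PASS_CYCLES, SERVICE_CYCLES) are assumptions chosen for this example.

```python
# Hypothetical sketch (not from the thesis): a toy cycle-level model of
# dynamic token assignment on a shared memory channel. All parameters are
# illustrative assumptions, not values measured in the UniMA study.
import random
from collections import deque

TOKEN_PASS_CYCLES = 4   # assumed cost of handing the token to another controller
SERVICE_CYCLES = 8      # assumed cost of servicing one memory request


class Controller:
    def __init__(self, name, arrival_prob):
        self.name = name
        self.arrival_prob = arrival_prob  # chance a new request arrives each cycle
        self.queue = deque()

    def maybe_enqueue(self, cycle):
        if random.random() < self.arrival_prob:
            self.queue.append(cycle)


def dynamic_token_assignment(controllers, cycles=1000):
    """Grant the shared channel to the controller with the most pending
    requests, paying a token-passage penalty whenever ownership changes."""
    owner = controllers[0]
    busy_until = 0
    served = 0
    for cycle in range(cycles):
        for c in controllers:
            c.maybe_enqueue(cycle)
        if cycle < busy_until:
            continue  # channel busy servicing a request or passing the token
        # Dynamic policy: pick the controller with the deepest queue.
        candidate = max(controllers, key=lambda c: len(c.queue))
        if not candidate.queue:
            continue  # no pending requests anywhere this cycle
        if candidate is not owner:
            busy_until = cycle + TOKEN_PASS_CYCLES  # pay the token-passage overhead
            owner = candidate
            continue
        owner.queue.popleft()
        busy_until = cycle + SERVICE_CYCLES
        served += 1
    return served


if __name__ == "__main__":
    random.seed(0)
    ctrls = [Controller("DRAM", 0.10), Controller("PCM", 0.05), Controller("MRAM", 0.05)]
    print("requests served:", dynamic_token_assignment(ctrls))
```

Swapping the `max(...)` selection for a fixed owner (static token assignment) or a hold-until-idle rule (individual token hold) would model the other two policies the abstract mentions; the point of the dynamic variant is that token passages happen only when another controller actually has more pending work.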

History

Advisor

Zhu, Zhichun

Department

Electrical and Computer Engineering

Degree Grantor

University of Illinois at Chicago

Degree Level

  • Masters

Committee Member

Rao, Wenjing; Kshemkalyani, Ajay

Submitted date

2016-05

Language

  • en

Issue date

2016-07-01
