For a long time, applications have relied on general-purpose processors for their execution, and their increasing demands were met by the technology improvements predicted by Moore's Law. However, these technology gains are diminishing while the diversity and demands of applications keep growing, so the one-size-fits-all approach has reached its limits. More efficient and powerful solutions therefore focus on designing different computer architectures for different application domains. Several domain-specific architectures, also known as accelerators, have been proposed in the recent past. The GPU is one such accelerator: originally designed to accelerate computer graphics, it has evolved into the architecture driving most machine-learning (ML) applications. While the number of accelerators on the market is increasing significantly, many challenges and open questions remain.
In this presentation I will outline our future work, which focuses on the design of efficient, scalable, and flexible architectures for different domains, including deep learning, database, and graph workloads. One challenge to be addressed is developing architectures that can scale both up to large server systems and down to sensor-attached devices. Another is the large data demand of emerging applications, which will be addressed by closing the gap between the processing and memory units. Finally, as architectures are now designed to solve problems efficiently within a specific domain, there is more need than ever to close the gap between hardware and software, and thus to emphasize hardware-software co-design. In summary, this work focuses on developing hardware-software co-designed, domain-specific accelerators that exploit processing-in-memory technology across the compute continuum.
Zoom (register for link)
26 February 2021, 11:30–12:30