
4 - The Future of Computing

3D SRAM for Neural Network, eBrain

Published online by Cambridge University Press: 17 September 2021

Tadahiro Kuroda, University of Tokyo
Wai-Yeung Yip, University of Tokyo

Summary

Chapter 4 introduces our vision of how TCI and TLC can enable More-than-Moore leaps in system performance. It first explores how TCI can be employed to stack SRAM on deep neural network (DNN) accelerators, offering better memory-access performance than stacked DRAM and opening the way to system-level innovations and possible paradigm shifts. The idea of an electronic right brain is then introduced, and its difference from an electronic left brain, implemented with a conventional von Neumann computer, is explained. SRAM stacked on an FPGA using TCI is then proposed as an implementation of a DNN-based electronic right brain. The chapter further describes how, by storing configuration information in the SRAM, the FPGA can be reconfigured in real time, allowing different DNNs to be virtualized over time and hence enabling temporal scaling of the right-brain hardware. It then explains how this right brain can be combined with an electronic left brain, a von Neumann computer also enhanced by TCI, to construct a complete electronic brain, and how that brain can be scaled both up and down to meet different performance needs. The chapter concludes by exploring how such an electronic brain can support trends in the IC industry and the emerging digital society.
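The temporal-virtualization idea can be pictured with a short sketch. The Python below is a minimal conceptual model under assumed names (ConfigStore, VirtualFPGA, run_schedule are hypothetical, not code from the chapter): it stands in for a system in which DNN configuration bitstreams held in TCI-stacked SRAM are loaded into the FPGA fabric each time slice, so that several DNNs share one physical device over time.

# Minimal conceptual sketch, not the authors' implementation: time-multiplexing
# several DNNs on one FPGA by keeping their configuration bitstreams in fast
# 3D-stacked SRAM and swapping them in per time slice. All names are illustrative.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ConfigStore:
    """Stands in for configuration data held in TCI-stacked SRAM."""
    bitstreams: Dict[str, bytes] = field(default_factory=dict)

    def load(self, name: str) -> bytes:
        # In the real system this would be a fast on-package fetch over TCI.
        return self.bitstreams[name]

@dataclass
class VirtualFPGA:
    """Models an FPGA whose active DNN is selected by reconfiguration."""
    store: ConfigStore
    active: str = ""

    def reconfigure(self, name: str) -> None:
        _ = self.store.load(name)   # in hardware: rewrite the fabric configuration
        self.active = name

    def infer(self, x: List[float]) -> float:
        # Placeholder compute standing in for whichever DNN is configured.
        return sum(x) if self.active else 0.0

def run_schedule(fpga: VirtualFPGA, schedule: List[str], batch: List[float]) -> List[float]:
    """Run a sequence of DNNs on the same hardware, one per time slice."""
    results = []
    for dnn_name in schedule:
        fpga.reconfigure(dnn_name)  # temporal scaling: reuse one fabric over time
        results.append(fpga.infer(batch))
    return results

if __name__ == "__main__":
    store = ConfigStore({"vision_net": b"\x00", "speech_net": b"\x01"})
    fpga = VirtualFPGA(store)
    print(run_schedule(fpga, ["vision_net", "speech_net"], [0.5, 1.5]))

In this toy schedule the same fabric serves a vision network and a speech network back to back; in the chapter's proposal, the speed of the stacked-SRAM interface is what makes such frequent reconfiguration practical.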

Type: Chapter
Book: Wireless Interface Technologies for 3D IC and Module Integration
Publisher: Cambridge University Press
Print publication year: 2021
Chapter DOI: https://doi.org/10.1017/9781108893299.005
