Invention Grant
- Patent Title: Scalable, ultra-low-latency photonic tensor processor
- Application No.: US17673268
- Application Date: 2022-02-16
- Publication No.: US11546077B2
- Publication Date: 2023-01-03
- Inventors: Liane Sarah Beland Bernstein, Alexander Sludds, Dirk Robert Englund
- Applicant: Massachusetts Institute of Technology
- Applicant Address: Cambridge, MA, US
- Assignee: Massachusetts Institute of Technology
- Current Assignee: Massachusetts Institute of Technology
- Current Assignee Address: Cambridge, MA, US
- Agency: Smith Baluch LLP
- Main IPC: H04J14/02
- IPC: H04J14/02 ; G06N3/067 ; H04B10/61 ; H04B10/54

Abstract:
Deep neural networks (DNNs) have become very popular in many areas, especially classification and prediction. However, as the number of neurons in a DNN grows to solve more complex problems, the DNN becomes limited by the latency and power consumption of existing hardware. A scalable, ultra-low-latency photonic tensor processor can compute DNN layer outputs in a single shot. The processor includes free-space optics that perform passive optical copying and distribution of an input vector, and integrated optoelectronics that implement passive weighting and the nonlinearity. An example of this processor classified the MNIST handwritten-digit dataset with an accuracy of 94%, close to the 96% ground-truth accuracy. The processor can be scaled to perform near-exascale computing before reaching its fundamental throughput limit, which is set by the maximum optical bandwidth attainable before significant loss of classification accuracy (determined experimentally).
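The layer computation described in the abstract (optical fan-out of the input vector, passive element-wise weighting, summation by photodetection, and an optoelectronic nonlinearity) corresponds to a matrix-vector multiplication followed by a nonlinearity. The sketch below is a minimal NumPy illustration of that correspondence only; it is not the patented hardware or its control software, and the function name `photonic_layer`, the choice of `tanh`, and all parameter values are assumptions made for illustration.

```python
# Minimal sketch (assumed, not from the patent): simulate one DNN layer
# computed "in a single shot" as the abstract describes it.
import numpy as np

def photonic_layer(x, W, nonlinearity=np.tanh):
    """Simulate one single-shot layer.

    x : (n,) input activations, conceptually encoded on optical signals
    W : (m, n) weight matrix, conceptually a passive weighting mask
    """
    # Free-space fan-out: every output neuron receives a copy of x.
    copies = np.broadcast_to(x, W.shape)      # shape (m, n)
    # Passive weighting: element-wise product with the weight mask.
    weighted = copies * W
    # Photodetection: each row's signals sum into one detected value.
    summed = weighted.sum(axis=1)             # shape (m,)
    # Optoelectronic nonlinearity applied to the detected signal.
    return nonlinearity(summed)

# Toy usage with MNIST-sized inputs (784 pixels, 100 hidden units).
rng = np.random.default_rng(0)
x = rng.random(784)
W = rng.standard_normal((100, 784)) * 0.01
y = photonic_layer(x, W)
print(y.shape)  # (100,)
```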
Public/Granted literature
- US20220337333A1, Scalable, Ultra-Low-Latency Photonic Tensor Processor (Publication Date: 2022-10-20)