Abstract:
A packet-switched reservation request to be associated with a first data stream is received. A communication mode is selected. The communication mode is to be either a circuit-switched mode or a packet-switched mode. At least a portion of the first data stream is communicated in accordance with the communication mode.
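Below is a minimal Python sketch, not taken from the abstract, of the selection step it describes: the `ReservationRequest` fields and the bandwidth/delay criterion are assumptions used only to make the choice between the two modes concrete.

```python
from dataclasses import dataclass

@dataclass
class ReservationRequest:
    # Hypothetical fields; the abstract only says the request is associated
    # with a first data stream.
    stream_id: int
    requested_bandwidth_kbps: int
    delay_sensitive: bool

def select_mode(request: ReservationRequest, circuit_capacity_kbps: int) -> str:
    """Pick a communication mode for the stream named in the request.

    Assumed rule: a delay-sensitive stream that fits in the available circuit
    capacity is carried circuit-switched; everything else stays packet-switched.
    """
    if request.delay_sensitive and request.requested_bandwidth_kbps <= circuit_capacity_kbps:
        return "circuit-switched"
    return "packet-switched"

# Example: a 64 kbps voice-like stream gets a circuit, a bulk transfer does not.
voice = ReservationRequest(stream_id=1, requested_bandwidth_kbps=64, delay_sensitive=True)
bulk = ReservationRequest(stream_id=2, requested_bandwidth_kbps=5000, delay_sensitive=False)
print(select_mode(voice, circuit_capacity_kbps=128))  # circuit-switched
print(select_mode(bulk, circuit_capacity_kbps=128))   # packet-switched
```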
Abstract:
A multicast message that is to originate from a source is received. The multicast message comprises an identifier. A plurality of directions in which the multicast message is to fork at a router are stored. A plurality of messages from the directions in which the multicast message is to fork are received. The received messages are to comprise the identifier. The plurality of messages are aggregated into an aggregate message and sent towards the source.
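A rough software analogue of the router-side behaviour described above, assuming a single router that remembers the fork directions per message identifier and waits for one reply from each direction before aggregating; the dict-based data model and method names are assumptions.

```python
class AggregatingRouter:
    def __init__(self):
        # identifier -> set of directions (ports) the multicast forked into
        self.fork_directions = {}
        # identifier -> replies collected so far
        self.pending_replies = {}

    def on_multicast(self, identifier, directions):
        """Remember the directions in which the multicast with this identifier forks."""
        self.fork_directions[identifier] = set(directions)
        self.pending_replies[identifier] = []

    def on_reply(self, identifier, direction, payload):
        """Collect a reply carrying the identifier; once every fork direction has
        answered, return a single aggregate message to forward towards the source."""
        self.pending_replies[identifier].append(payload)
        self.fork_directions[identifier].discard(direction)
        if not self.fork_directions[identifier]:
            return {"id": identifier, "aggregate": self.pending_replies.pop(identifier)}
        return None  # still waiting on other directions

router = AggregatingRouter()
router.on_multicast("msg-7", directions=["east", "south"])
router.on_reply("msg-7", "east", {"ack": True})
print(router.on_reply("msg-7", "south", {"ack": True}))  # aggregate sent towards source
```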
Abstract:
A first pointer dereferencer receives a location of a portion of a first node of a data structure. The first node is to be stored in a first storage element. A first pointer is obtained from the first node of the data structure. A location of a portion of a second node of the data structure is determined based on the first pointer. The second node is to be stored in a second storage element. The location of the portion of the second node of the data structure is sent to a second pointer dereferencer that is to access the portion of the second node from the second storage element.
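The same flow can be sketched in Python as two chained dereferencer functions, with the two storage elements modelled as dictionaries keyed by location; the node layout (`value`/`next` fields) is an assumption.

```python
storage_a = {0x10: {"value": "head", "next": 0x20}}   # first storage element
storage_b = {0x20: {"value": "tail", "next": None}}   # second storage element

def first_dereferencer(location):
    """Read the first node from storage A, follow its pointer, and hand the
    resulting location of the second node to the next dereferencer."""
    node = storage_a[location]
    next_location = node["next"]
    return second_dereferencer(next_location)

def second_dereferencer(location):
    """Access the second node from storage B using the forwarded location."""
    return storage_b[location]

print(first_dereferencer(0x10))  # {'value': 'tail', 'next': None}
```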
Abstract:
One embodiment provides for a graphics processing unit to accelerate machine-learning operations, the graphics processing unit comprising a multiprocessor having a single instruction, multiple thread (SIMT) architecture, the multiprocessor to execute at least one single instruction; and a first compute unit included within the multiprocessor, the at least one single instruction to cause the first compute unit to perform a two-dimensional matrix multiply and accumulate operation, wherein to perform the two-dimensional matrix multiply and accumulate operation includes to compute an intermediate product of 16-bit operands and to compute a 32-bit sum based on the intermediate product.
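As a numerical sketch of the arithmetic (not the hardware), the following assumes 4x4 tiles: 16-bit operand tiles are multiplied, with each product widened before accumulation into a 32-bit sum.

```python
import numpy as np

A = np.random.rand(4, 4).astype(np.float16)  # 16-bit operand tile
B = np.random.rand(4, 4).astype(np.float16)  # 16-bit operand tile
C = np.zeros((4, 4), dtype=np.float32)       # 32-bit accumulator tile

# D = A @ B + C: products of 16-bit operands are widened, and the sums are
# kept at 32 bits, mirroring the precision split in the abstract.
D = np.matmul(A.astype(np.float32), B.astype(np.float32)) + C
print(D.dtype)  # float32
```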
Abstract:
An apparatus includes a first port set that includes an input port and an output port. The apparatus further includes a plurality of second port sets. Each of the second port sets includes an input port coupled to the output port of the first port set and an output port coupled to the input port of the first port set. The plurality of second port sets are to each communicate at a first maximum bandwidth and the first port set is to communicate at a second maximum bandwidth that is higher than the first maximum bandwidth.
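One way to read the topology is a single higher-bandwidth port set fronting several lower-bandwidth ones; the toy model below only encodes that bandwidth relationship, and the port counts and Gb/s figures are assumptions.

```python
class PortSet:
    def __init__(self, name, max_bandwidth_gbps):
        self.name = name
        self.max_bandwidth_gbps = max_bandwidth_gbps

# One higher-bandwidth first port set; each of the second port sets couples its
# input/output back to the first set's output/input ports.
first = PortSet("first", max_bandwidth_gbps=100)
seconds = [PortSet(f"second-{i}", max_bandwidth_gbps=25) for i in range(4)]

# The first port set communicates at a higher maximum bandwidth than each second set.
assert all(p.max_bandwidth_gbps < first.max_bandwidth_gbps for p in seconds)
print(f"{first.name}: {first.max_bandwidth_gbps} Gb/s, "
      f"each second set: {seconds[0].max_bandwidth_gbps} Gb/s")
```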
Abstract:
One embodiment provides a graphics processor comprising a memory controller and a graphics processing resource coupled with the memory controller. The graphics processing resource includes circuitry configured to execute an instruction to perform a matrix operation on first input including weight data and second input including input activation data, generate intermediate data based on a result of the matrix operation, quantize the intermediate data to a floating-point format determined based on a statistical distribution of first output data, and output, as second output data, the quantized intermediate data in the determined floating-point format.
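A hedged Python sketch of the quantization step follows; it reduces the "statistical distribution of first output data" to an observed absolute maximum and chooses between float16 and float32, which is only one possible reading of the format-selection criterion.

```python
import numpy as np

def choose_fp_format(first_output_data):
    """Assumed rule: pick the narrower format when observed values stay
    safely within float16 range, otherwise keep float32."""
    abs_max = np.abs(first_output_data).max()
    return np.float16 if abs_max < np.finfo(np.float16).max / 2 else np.float32

def matmul_and_quantize(weights, activations, first_output_data):
    # Matrix operation on weight data and input activation data.
    intermediate = weights.astype(np.float32) @ activations.astype(np.float32)
    target = choose_fp_format(first_output_data)
    return intermediate.astype(target)  # second output data, quantized

w = np.random.randn(8, 8).astype(np.float32)
x = np.random.randn(8, 8).astype(np.float32)
prior_outputs = np.random.randn(1024).astype(np.float32)
print(matmul_and_quantize(w, x, prior_outputs).dtype)  # float16 for this data
```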
Abstract:
One embodiment provides for a graphics processing unit to accelerate machine-learning operations, the graphics processing unit comprising a multiprocessor having a single instruction, multiple thread (SIMT) architecture, the multiprocessor to execute at least one single instruction; and a first compute unit included within the multiprocessor, the at least one single instruction to cause the first compute unit to perform a two-dimensional matrix multiply and accumulate operation, wherein to perform the two-dimensional matrix multiply and accumulate operation includes to compute a 32-bit intermediate product of 16-bit operands and to compute a 32-bit sum based on the 32-bit intermediate product.
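The abstract above differs from the earlier, otherwise similar one in making the intermediate product itself 32 bits wide. A small worked example shows why that matters for 16-bit operands; the operand values are arbitrary.

```python
import numpy as np

a = np.float16(300.0)
b = np.float16(300.0)

product_fp16 = a * b                          # kept at 16 bits: overflows to inf (float16 max is ~65504)
product_fp32 = np.float32(a) * np.float32(b)  # 32-bit intermediate product: 90000.0

print(product_fp16, product_fp32)  # inf 90000.0
```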
Abstract:
A processing apparatus is provided comprising a multiprocessor having a multithreaded architecture. The multiprocessor can execute at least one single instruction to perform parallel mixed precision matrix operations. In one embodiment the apparatus includes a memory interface and an array of multiprocessors coupled to the memory interface. At least one multiprocessor in the array of multiprocessors is configured to execute a fused multiply-add instruction in parallel across multiple threads.
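As a lane-level illustration, each array element below stands in for one thread executing the same fused multiply-add (d = a*b + c) with 16-bit inputs and a 32-bit accumulator; the lane count of 32 is an assumption, and this is a software analogue rather than the instruction itself.

```python
import numpy as np

lanes = 32
a = np.random.rand(lanes).astype(np.float16)  # 16-bit multiplicands, one per "thread"
b = np.random.rand(lanes).astype(np.float16)
c = np.random.rand(lanes).astype(np.float32)  # 32-bit accumulators

# Mixed precision: widen the 16-bit operands, then multiply-add into 32-bit results,
# with the same operation applied across all lanes at once.
d = np.float32(a) * np.float32(b) + c
print(d.dtype, d.shape)  # float32 (32,)
```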
Abstract:
Systems, apparatuses, and methods for k-nearest neighbor (KNN) searches are described. In particular, embodiments of a KNN accelerator and its uses are described. In some embodiments, the KNN accelerator includes a plurality of vector partial distance computation circuits, each to calculate a partial sum; a minimum sort network to sort partial sums from the plurality of vector partial distance computation circuits to find k nearest neighbor matches; and a global control circuit to control aspects of operations of the plurality of vector partial distance computation circuits.
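A software analogue of that flow, with the number of partial-distance units and k chosen arbitrarily: each unit handles one slice of the vector dimensions, the partial sums are combined, and an argsort stands in for the minimum sort network.

```python
import numpy as np

def knn(query, database, k, num_partial_units=4):
    # Each partial unit computes a partial squared distance over its slice of dimensions.
    slices = np.array_split(np.arange(query.shape[0]), num_partial_units)
    partial_sums = [((database[:, s] - query[s]) ** 2).sum(axis=1) for s in slices]
    distances = np.sum(partial_sums, axis=0)  # combine the partial sums
    return np.argsort(distances)[:k]          # stand-in for the minimum sort network

query = np.random.rand(16)
database = np.random.rand(100, 16)
print(knn(query, database, k=5))  # indices of the 5 nearest neighbors
```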