-
Publication Number: US11175891B2
Publication Date: 2021-11-16
Application Number: US16370966
Application Date: 2019-03-30
Applicant: Intel Corporation
Inventor: Simon Rubanovich, Amit Gradstein, Zeev Sperber, Mrinmay Dutta
Abstract: Disclosed embodiments relate to performing floating-point addition with selected rounding. In one example, a processor includes circuitry to decode and execute an instruction specifying locations of first and second floating-point (FP) sources, and an opcode indicating the processor is to: bring the FP sources into alignment by shifting a mantissa of the smaller source FP operand to the right by a difference between their exponents, generating rounding controls based on any bits that escape; simultaneously generate a sum of the FP sources and of the FP sources plus one, the sums having a fuzzy-Jbit format with an additional Jbit into which a carry-out, if any, is stored; select one of the sums based on the rounding controls; and generate a result comprising a mantissa-wide number of most-significant bits of the selected sum, starting with the most significant non-zero Jbit.
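The abstract outlines a concrete data flow, so a minimal Python sketch may help: align by the exponent difference, derive rounding controls from the escaped bits, form the sum and the sum-plus-one together, select one, and normalize. The mantissa width, positive unsigned operands, and round-to-nearest-even policy below are illustrative assumptions; this is a model of the idea, not the patented circuit.

```python
# Minimal sketch of add-with-selected-rounding, assuming positive operands,
# an explicit leading-1 ("Jbit") mantissa of MANT_BITS bits, and
# round-to-nearest-even. Not the patented circuit; just the data flow.

MANT_BITS = 24  # illustrative mantissa width

def fp_add_selected_rounding(exp_a, mant_a, exp_b, mant_b):
    """Add two positive FP operands given as (exponent, integer mantissa)."""
    # 1. Align: shift the smaller operand's mantissa right by the exponent
    #    difference and keep the bits that escape on the right.
    if exp_a < exp_b:
        exp_a, mant_a, exp_b, mant_b = exp_b, mant_b, exp_a, mant_a
    shift = exp_a - exp_b
    escaped = mant_b & ((1 << shift) - 1)
    mant_b_aligned = mant_b >> shift

    # 2. Rounding controls derived from the escaped bits (guard + sticky).
    guard = (escaped >> (shift - 1)) & 1 if shift else 0
    sticky = 1 if shift > 1 and escaped & ((1 << (shift - 1)) - 1) else 0

    # 3. Generate both candidate sums at once: sum and sum + 1.
    s0 = mant_a + mant_b_aligned
    s1 = s0 + 1

    # 4. Select one sum from the rounding controls (nearest-even at the LSB).
    round_up = guard and (sticky or (s0 & 1))
    selected = s1 if round_up else s0

    # 5. Keep MANT_BITS bits starting at the most significant non-zero bit;
    #    a carry-out into the extra high bit bumps the exponent.
    if selected >> MANT_BITS:
        selected >>= 1
        exp_a += 1
    return exp_a, selected & ((1 << MANT_BITS) - 1)

# Example: 1.5 * 2**3 + 1.25 * 2**1 = 14.5
exp, mant = fp_add_selected_rounding(3, 0xC00000, 1, 0xA00000)
print(exp, hex(mant))  # -> 3 0xe80000, i.e. 1.8125 * 2**3 = 14.5
```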
-
Publication Number: US10528346B2
Publication Date: 2020-01-07
Application Number: US15940774
Application Date: 2018-03-29
Applicant: Intel Corporation
Inventor: Dipankar Das, Naveen K. Mellempudi, Mrinmay Dutta, Arun Kumar, Dheevatsa Mudigere, Abhisek Kundu
Abstract: Disclosed embodiments relate to instructions for fused multiply-add (FMA) operations with variable-precision inputs. In one example, a processor to execute an asymmetric FMA instruction includes fetch circuitry to fetch an FMA instruction having fields to specify an opcode, a destination, and first and second source vectors having first and second widths, respectively, decode circuitry to decode the fetched FMA instruction, and a single instruction multiple data (SIMD) execution circuit to process as many elements of the second source vector as fit into an SIMD lane width by multiplying each element by a corresponding element of the first source vector, and accumulating a resulting product with previous contents of the destination, wherein the SIMD lane width is one of 16 bits, 32 bits, and 64 bits, the first width is one of 4 bits and 8 bits, and the second width is one of 1 bit, 2 bits, and 4 bits.
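Since the same abstract reappears for the related family members below, a single Python model of the described SIMD lane is sketched here: one destination lane accumulates products of packed low-precision second-source elements with matching first-source elements. The 32-bit lane, 8-bit and 2-bit element widths, and unsigned arithmetic are illustrative choices taken from the ranges the abstract lists, not the instruction's actual encoding.

```python
# Illustrative model of one SIMD lane of the asymmetric FMA described above.
# The 32-bit lane, 8-bit first-source elements, and 2-bit second-source
# elements are assumptions drawn from the ranges the abstract lists.

def asymmetric_fma_lane(dst, src1_elems, src2_packed,
                        lane_bits=32, w1=8, w2=2):
    """Accumulate sum(src1[i] * src2[i]) into one destination lane.

    dst         : previous contents of the destination lane
    src1_elems  : list of unsigned w1-bit elements (first source vector)
    src2_packed : integer packing lane_bits // w2 unsigned w2-bit elements
                  (second source vector), lowest element in the low bits
    """
    n = lane_bits // w2                                   # elements per lane
    acc = dst
    for i in range(n):
        e2 = (src2_packed >> (i * w2)) & ((1 << w2) - 1)  # unpack w2-bit element
        e1 = src1_elems[i] & ((1 << w1) - 1)              # matching w1-bit element
        acc += e1 * e2                                    # multiply-accumulate
    return acc & ((1 << lane_bits) - 1)                   # wrap to the lane width

# Example: sixteen 2-bit elements, all equal to 1, against 8-bit elements 1..16.
src1 = list(range(1, 17))
src2 = int("01" * 16, 2)
print(asymmetric_fma_lane(0, src1, src2))   # -> 1 + 2 + ... + 16 = 136
```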
-
Publication Number: US12288062B2
Publication Date: 2025-04-29
Application Number: US18399578
Application Date: 2023-12-28
Applicant: Intel Corporation
Inventor: Dipankar Das, Naveen K. Mellempudi, Mrinmay Dutta, Arun Kumar, Dheevatsa Mudigere, Abhisek Kundu
Abstract: Disclosed embodiments relate to instructions for fused multiply-add (FMA) operations with variable-precision inputs. In one example, a processor to execute an asymmetric FMA instruction includes fetch circuitry to fetch an FMA instruction having fields to specify an opcode, a destination, and first and second source vectors having first and second widths, respectively, decode circuitry to decode the fetched FMA instruction, and a single instruction multiple data (SIMD) execution circuit to process as many elements of the second source vector as fit into an SIMD lane width by multiplying each element by a corresponding element of the first source vector, and accumulating a resulting product with previous contents of the destination, wherein the SIMD lane width is one of 16 bits, 32 bits, and 64 bits, the first width is one of 4 bits and 8 bits, and the second width is one of 1 bit, 2 bits, and 4 bits.
-
Publication Number: US20230195417A1
Publication Date: 2023-06-22
Application Number: US17559811
Application Date: 2021-12-22
Applicant: Intel Corporation
Inventor: Mrinmay Dutta, Simon Rubanovich, Amit Gradstein, Zeev Sperber
CPC classification number: G06F7/507, G06F7/501, G06F7/764, G06F7/5057
Abstract: One embodiment provides a processor comprising at least one of a first mask to receive a first input operand and a second input operand and to generate a selected portion of an AND of a sum of the first input operand and the second input operand using an AND chain of the first mask in parallel with generation of the sum by an adder; and a second mask to receive the first input operand and the second input operand and to generate the selected portion of an OR of the sum using an OR chain of the second mask in parallel with generation of the sum.
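The abstract describes flag-style logic that produces the AND or OR reduction of selected sum bits without waiting for the adder. The Python sketch below illustrates the underlying identity for the simplified case of a bit window anchored at bit 0 with a known carry-in: both reductions follow from per-bit propagate/generate terms, so they can be evaluated in parallel with the addition. It is an illustrative special case, not the patent's mask circuitry.

```python
import random

def and_or_of_sum_window(a, b, width, cin=0):
    """Return (AND, OR) reductions of bits [0, width) of a + b + cin,
    computed from propagate/generate terms instead of the finished sum."""
    p = [((a >> i) ^ (b >> i)) & 1 for i in range(width)]   # propagate: a_i xor b_i
    g = [((a >> i) & (b >> i)) & 1 for i in range(width)]   # generate:  a_i and b_i
    k = [((a >> i) | (b >> i)) & 1 for i in range(width)]   # generate-or-propagate

    # All sum bits are 1  <=>  p[0] != cin and p[i] != g[i-1] for every i >= 1.
    all_ones = bool(p[0] ^ cin) and all(p[i] ^ g[i - 1] for i in range(1, width))
    # All sum bits are 0  <=>  p[0] == cin and p[i] == k[i-1] for every i >= 1.
    all_zeros = (p[0] == cin) and all(p[i] == k[i - 1] for i in range(1, width))
    return int(all_ones), int(not all_zeros)

# Cross-check against the adder for random operands.
for _ in range(1000):
    a, b, w = random.getrandbits(16), random.getrandbits(16), 8
    s = (a + b) & ((1 << w) - 1)
    assert and_or_of_sum_window(a, b, w) == (int(s == (1 << w) - 1), int(s != 0))
print("parallel AND/OR of the sum matches the adder result")
```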
-
Publication Number: US11900107B2
Publication Date: 2024-02-13
Application Number: US17704690
Application Date: 2022-03-25
Applicant: Intel Corporation
Inventor: Dipankar Das, Naveen K. Mellempudi, Mrinmay Dutta, Arun Kumar, Dheevatsa Mudigere, Abhisek Kundu
CPC classification number: G06F9/30014, G06F7/483, G06F7/5443, G06F9/30036, G06F9/30145, G06F9/382, G06F9/3802, G06F9/384, G06F9/3887, G06N3/063, G06F9/30065, G06F2207/382
Abstract: Disclosed embodiments relate to instructions for fused multiply-add (FMA) operations with variable-precision inputs. In one example, a processor to execute an asymmetric FMA instruction includes fetch circuitry to fetch an FMA instruction having fields to specify an opcode, a destination, and first and second source vectors having first and second widths, respectively, decode circuitry to decode the fetched FMA instruction, and a single instruction multiple data (SIMD) execution circuit to process as many elements of the second source vector as fit into an SIMD lane width by multiplying each element by a corresponding element of the first source vector, and accumulating a resulting product with previous contents of the destination, wherein the SIMD lane width is one of 16 bits, 32 bits, and 64 bits, the first width is one of 4 bits and 8 bits, and the second width is one of 1 bit, 2 bits, and 4 bits.
-
Publication Number: US11321086B2
Publication Date: 2022-05-03
Application Number: US16735381
Application Date: 2020-01-06
Applicant: Intel Corporation
Inventor: Dipankar Das, Naveen K. Mellempudi, Mrinmay Dutta, Arun Kumar, Dheevatsa Mudigere, Abhisek Kundu
Abstract: Disclosed embodiments relate to instructions for fused multiply-add (FMA) operations with variable-precision inputs. In one example, a processor to execute an asymmetric FMA instruction includes fetch circuitry to fetch an FMA instruction having fields to specify an opcode, a destination, and first and second source vectors having first and second widths, respectively, decode circuitry to decode the fetched FMA instruction, and a single instruction multiple data (SIMD) execution circuit to process as many elements of the second source vector as fit into an SIMD lane width by multiplying each element by a corresponding element of the first source vector, and accumulating a resulting product with previous contents of the destination, wherein the SIMD lane width is one of 16 bits, 32 bits, and 64 bits, the first width is one of 4 bits and 8 bits, and the second width is one of 1 bit, 2 bits, and 4 bits.
-
Publication Number: US20220100517A1
Publication Date: 2022-03-31
Application Number: US17033741
Application Date: 2020-09-26
Applicant: Intel Corporation
Inventor: Ilya Albrekht, Wajdi Feghali, Regev Shemy, Or Beit Aharon, Mrinmay Dutta, Vinodh Gopal, Vikram B. Suresh
Abstract: Disclosed embodiments relate to systems and methods for performing instructions structured to compute a plurality of cryptographic rounds of a block cipher. In one example, a processor includes fetch and decode circuitry to fetch and decode a single instruction comprising a first field to identify a destination of a first operand, a second field to identify a source of a second operand comprising an input state, and a third field to identify a source of a third operand comprising a round key. The processor includes execution circuitry to execute the decoded instruction to compute a plurality of cryptographic rounds of the block cipher by performing a round function on data elements of the second operand and the third operand to generate a word.
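A hedged Python sketch of the instruction shape may be useful: a single operation takes an input-state operand and round-key material and iterates a block-cipher round function, producing one new word per round. The abstract does not name the cipher, so toy_round below is a placeholder stand-in rather than the real round function.

```python
# Hedged sketch of the described instruction: the cipher itself is not named
# in the abstract, so toy_round() is a placeholder round function.

def rotl32(x, r):
    """Rotate a 32-bit value left by r bits."""
    return ((x << r) | (x >> (32 - r))) & 0xFFFFFFFF

def toy_round(x0, x1, x2, x3, rk):
    """Placeholder round function: mixes three state words with the round key
    and folds the result into the oldest word, yielding one new 32-bit word."""
    t = x1 ^ x2 ^ x3 ^ rk
    return x0 ^ rotl32(t, 13) ^ rotl32(t, 23)

def cipher_rounds_instruction(state, round_keys):
    """Model of a single instruction computing several cipher rounds.

    state      : four 32-bit words (the input-state source operand)
    round_keys : one 32-bit round key per round (the round-key source operand)
    Returns the updated state, which the instruction would write to the
    destination operand.
    """
    x = list(state)
    for rk in round_keys:
        new_word = toy_round(x[0], x[1], x[2], x[3], rk)
        x = x[1:] + [new_word]      # shift in the freshly generated word
    return x

# Example: four rounds over an arbitrary 128-bit input state.
out = cipher_rounds_instruction(
    [0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210],
    [0x00010203, 0x04050607, 0x08090A0B, 0x0C0D0E0F])
print([hex(w) for w in out])
```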
-