VITERBI PACK INSTRUCTION
    71.
    Invention patent application
    VITERBI PACK INSTRUCTION (Pending, published)

    Publication No.: WO2007109793A2

    Publication date: 2007-09-27

    Application No.: PCT/US2007/064816

    Filing date: 2007-03-23

    CPC classification number: H03M13/4107 H03M13/41 H03M13/4169 H03M13/6505

    Abstract: A Viterbi pack instruction is disclosed that masks the contents of a first predicate register with a first masking value and masks the contents of a second predicate register with a second masking value. The resulting masked data is written to a destination register. The Viterbi pack instruction may be implemented in hardware, firmware, software, or any combination thereof.
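
    As a rough scalar illustration of the operation described above, the sketch below masks two 32-bit predicate values with their own masking values and packs the results into one destination word. The register width, the bitwise-AND reading of "masks", the 16-bit shift used to combine the two halves, and the name viterbi_pack are assumptions made for illustration, not details taken from the application.

        #include <stdint.h>

        /* Mask each predicate value with its own masking value and write the
         * combined result to a single destination word (illustrative packing). */
        static uint32_t viterbi_pack(uint32_t p0, uint32_t p1,
                                     uint32_t mask0, uint32_t mask1)
        {
            uint32_t low  = p0 & mask0;      /* contents of predicate 0, masked */
            uint32_t high = p1 & mask1;      /* contents of predicate 1, masked */
            return low | (high << 16);       /* packed destination value        */
        }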

    UNIFIED NON-PARTITIONED REGISTER FILES FOR A DIGITAL SIGNAL PROCESSOR OPERATING IN AN INTERLEAVED MULTI-THREADED ENVIRONMENT
    72.
    Invention patent application
    UNIFIED NON-PARTITIONED REGISTER FILES FOR A DIGITAL SIGNAL PROCESSOR OPERATING IN AN INTERLEAVED MULTI-THREADED ENVIRONMENT (Pending, published)

    Publication No.: WO2006110906A2

    Publication date: 2006-10-19

    Application No.: PCT/US2006/014174

    Filing date: 2006-04-11

    Abstract: A processor device is disclosed and includes a memory and a sequencer that is responsive to the memory. The sequencer can support very long instruction word (VLIW) instructions and superscalar instructions. The processor device further includes a first instruction execution unit responsive to the sequencer, a second instruction execution unit responsive to the sequencer, a third instruction execution unit responsive to the sequencer, and a fourth instruction execution unit responsive to the sequencer. Further, the processor device includes a plurality of register files and each of the plurality of register files includes a plurality of registers. The plurality of register files are coupled to the sequencer and coupled to the first instruction execution unit, the second instruction execution unit, the third instruction execution unit, and the fourth instruction execution unit.
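
    As a structural sketch only, the C declarations below model the described topology: several register files, each containing a plurality of registers, shared by the sequencer and the four execution units rather than being partitioned among them. The counts and type names are assumptions, not values from the application.

        #include <stdint.h>

        #define NUM_EXEC_UNITS     4    /* first through fourth instruction execution units */
        #define NUM_REGISTER_FILES 6    /* assumed count of register files                  */
        #define REGS_PER_FILE      32   /* assumed number of registers per file             */

        /* One unified, non-partitioned register file. */
        typedef struct {
            uint32_t regs[REGS_PER_FILE];
        } register_file_t;

        /* The sequencer and all NUM_EXEC_UNITS execution units address `files`
         * directly; no unit owns a private partition of the registers. */
        typedef struct {
            register_file_t files[NUM_REGISTER_FILES];
        } processor_device_t;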

    INSTRUCTION-BASED SYNCHRONIZATION OF OPERATIONS INCLUDING AT LEAST ONE SIMD SCATTER OPERATION
    74.
    Invention patent application
    INSTRUCTION-BASED SYNCHRONIZATION OF OPERATIONS INCLUDING AT LEAST ONE SIMD SCATTER OPERATION (Pending, published)

    Publication No.: WO2018057113A1

    Publication date: 2018-03-29

    Application No.: PCT/US2017/044105

    Filing date: 2017-07-27

    Abstract: A method of determining an execution order of memory operations performed by a processor includes executing at least one single-instruction, multiple-data (SIMD) scatter operation by the processor to store data to a memory. The method further includes executing one or more instructions by the processor to determine the execution order of a set of memory operations. The set of memory operations includes the at least one SIMD scatter operation.
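
    A minimal sketch of the idea follows, with the SIMD scatter emulated by a scalar store loop and the ordering step stood in for by a C11 release fence; the function names, the flag-based signalling, and the use of atomics are assumptions made for illustration rather than the mechanism claimed in the application.

        #include <stdatomic.h>
        #include <stddef.h>
        #include <stdint.h>

        /* Scalar stand-in for a SIMD scatter: each value is stored at its own index. */
        static void scatter_store(uint32_t *base, const size_t *indices,
                                  const uint32_t *values, size_t n)
        {
            for (size_t i = 0; i < n; ++i)
                base[indices[i]] = values[i];
        }

        /* Perform the scatter, then apply an ordering step so later observers see
         * the scattered stores before the ready flag. */
        static void scatter_then_publish(uint32_t *base, const size_t *indices,
                                         const uint32_t *values, size_t n,
                                         _Atomic int *ready)
        {
            scatter_store(base, indices, values, n);
            atomic_thread_fence(memory_order_release);             /* order the scatter's stores */
            atomic_store_explicit(ready, 1, memory_order_relaxed); /* signal completion          */
        }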

    TABLE LOOKUP USING SIMD INSTRUCTIONS
    75.
    Invention patent application
    TABLE LOOKUP USING SIMD INSTRUCTIONS (Pending, published)

    Publication No.: WO2017030675A1

    Publication date: 2017-02-23

    Application No.: PCT/US2016/041698

    Filing date: 2016-07-11

    CPC classification number: G06F9/30036 G06F9/30003 G06F9/3004

    Abstract: Systems and methods pertain to looking up entries of a table. A processor receives one or more single instruction multiple data (SIMD) instructions, including a first SIMD instruction which specifies a first subset of indices. A first subset of table entries is looked up, using a crossbar, with the first subset of indices. A first vector output of the first SIMD instruction is based on whether the outputs of the crossbar belong to a desired subset of table entries. Similarly, second, third, and fourth SIMD instructions specify corresponding second, third, and fourth subsets of indices to look up the remaining table entries using the crossbar. The size of the crossbar is based on the number of indices in the subset of indices used to look up table entries.
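
    The scalar loop below sketches one of the described lookup steps: a fixed-width group of indices selects table entries (plain array indexing standing in for the crossbar), and each output is kept only if the selected entry lies within a desired subset of the table. The lane count, the [lo, hi) membership test, and the zero fill for rejected lanes are assumptions made for illustration.

        #include <stdbool.h>
        #include <stdint.h>

        #define LANES 8   /* assumed number of indices handled per SIMD instruction */

        /* One lookup step: route LANES entries through the "crossbar" and keep an
         * output only when the index falls inside the desired subset [lo, hi). */
        static void lookup_step(const uint32_t *table, const uint32_t idx[LANES],
                                uint32_t out[LANES], uint32_t lo, uint32_t hi)
        {
            for (int lane = 0; lane < LANES; ++lane) {
                bool in_subset = idx[lane] >= lo && idx[lane] < hi;
                out[lane] = in_subset ? table[idx[lane]] : 0;   /* vector output of this step */
            }
        }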

    VECTOR REGISTER ADDRESSING AND FUNCTIONS BASED ON A SCALAR REGISTER DATA VALUE
    76.
    Invention patent application
    VECTOR REGISTER ADDRESSING AND FUNCTIONS BASED ON A SCALAR REGISTER DATA VALUE (Pending, published)

    Publication No.: WO2014133895A2

    Publication date: 2014-09-04

    Application No.: PCT/US2014/017713

    Filing date: 2014-02-21

    Abstract: Techniques are provided for executing a vector alignment instruction. A scalar register file in a first processor is configured to share one or more register values with a second processor, the one or more register values accessed from the scalar register file according to an Rt address specified in a vector alignment instruction, wherein a start location is determined from one of the shared register values. An alignment circuit in the second processor is configured to align data identified between the start location within a beginning Vu register of a vector register file (VRF) and an end location of a last Vu register of the VRF according to the vector alignment instruction. A store circuit is configured to select the aligned data from the alignment circuit and store the aligned data in the vector register file according to an alignment store address specified by the vector alignment instruction.
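
    A very rough scalar model under stated assumptions: the participating Vu registers are treated as one contiguous byte buffer, a start offset (playing the role of the value read via the Rt address) selects where the data begins, and everything from that offset to the end of the last register is stored left-aligned at the destination. The register width, register count, and the memcpy-based reading of "align" are assumptions made for illustration.

        #include <stddef.h>
        #include <stdint.h>
        #include <string.h>

        #define VREG_BYTES 64   /* assumed width of one Vu register           */
        #define NUM_VREGS  4    /* assumed number of Vu registers in the span */

        /* Copy the bytes between `start` and the end of the last register,
         * left-aligned, to the destination buffer; returns the byte count. */
        static size_t vector_align_store(const uint8_t vrf[NUM_VREGS][VREG_BYTES],
                                         size_t start, uint8_t *dest)
        {
            const uint8_t *flat = &vrf[0][0];
            size_t total = (size_t)NUM_VREGS * VREG_BYTES;
            if (start > total)
                start = total;                 /* clamp an out-of-range start offset */
            memcpy(dest, flat + start, total - start);
            return total - start;
        }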

    CACHE MEMORY WITH WRITE THROUGH, NO ALLOCATE MODE

    Publication No.: WO2014004269A3

    Publication date: 2014-01-03

    Application No.: PCT/US2013/046966

    Filing date: 2013-06-21

    Abstract: In a particular embodiment, a method of managing a cache memory includes, responsive to a cache size change command, changing a mode of operation of the cache memory to a write through/no allocate mode. The method also includes processing instructions associated with the cache memory while executing a cache clean operation when the mode of operation of the cache memory is the write through/no allocate mode. The method further includes after completion of the cache clean operation, changing a size of the cache memory and changing the mode of operation of the cache to a mode other than the write through/no allocate mode.
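
    The control flow can be sketched as below; set_cache_mode, cache_clean, and set_cache_size are hypothetical platform hooks invented for this illustration and are not part of the patent or of any particular API.

        /* Hypothetical cache-controller hooks (declarations only). */
        typedef enum {
            MODE_WRITE_BACK_ALLOCATE,
            MODE_WRITE_THROUGH_NO_ALLOCATE
        } cache_mode_t;

        void set_cache_mode(cache_mode_t mode);
        void cache_clean(void);
        void set_cache_size(unsigned size_kb);

        /* Sketch of the described resize sequence. */
        void resize_cache(unsigned new_size_kb)
        {
            /* 1. Enter write-through / no-allocate so instructions that use the
             *    cache can keep executing while it is cleaned. */
            set_cache_mode(MODE_WRITE_THROUGH_NO_ALLOCATE);

            /* 2. Clean the cache (write back dirty lines) while processing continues. */
            cache_clean();

            /* 3. Only after the clean completes: change the size, then leave the
             *    write-through / no-allocate mode. */
            set_cache_size(new_size_kb);
            set_cache_mode(MODE_WRITE_BACK_ALLOCATE);
        }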

    FLOATING POINT CONSTANT GENERATION INSTRUCTION
    78.
    Invention patent application
    FLOATING POINT CONSTANT GENERATION INSTRUCTION (Pending, published)

    Publication No.: WO2013119995A1

    Publication date: 2013-08-15

    Application No.: PCT/US2013/025401

    Filing date: 2013-02-08

    CPC classification number: G06F9/30014 G06F9/30025 G06F9/30167

    Abstract: Systems and methods for generating a floating point constant value from an instruction are disclosed. A first field of the instruction is decoded as a sign bit of the floating point constant value. A second field of the instruction is decoded to correspond to an exponent value of the floating point constant value. A third field of the instruction is decoded to correspond to the significand of the floating point constant value. The first field, the second field, and the third field are combined to form the floating point constant value. The exponent value may include a bias, and a bias constant may be added to the exponent value to compensate for the bias. The third field may comprise the most significant bits of the significand. Optionally, the second field and the third field may be shifted by first and second shift values respectively before they are combined to form the floating point constant value.
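
    A minimal sketch of the field-combining step for an IEEE-754 single-precision value, assuming an illustrative encoding with a 1-bit sign field, a small exponent field to which a bias constant is added, and a short significand field holding the most significant fraction bits; the field widths, the bias constant, and the shift amount are assumptions, not the encoding claimed in the application.

        #include <stdint.h>
        #include <string.h>

        #define SIG_FIELD_BITS 5      /* assumed width of the significand field            */
        #define BIAS_CONSTANT  120u   /* assumed bias constant added to the exponent field */

        /* Combine the decoded sign, exponent, and significand fields into a float. */
        static float make_fp_constant(uint32_t sign, uint32_t exp_field, uint32_t sig_field)
        {
            uint32_t exponent = (exp_field + BIAS_CONSTANT) & 0xFFu;   /* compensate for the bias */
            uint32_t fraction = sig_field << (23 - SIG_FIELD_BITS);    /* MSBs of the significand */
            uint32_t bits = (sign << 31) | (exponent << 23) | fraction;

            float value;
            memcpy(&value, &bits, sizeof value);                       /* reinterpret as IEEE-754 */
            return value;
        }

    With these assumed field widths, make_fp_constant(0, 7, 0) evaluates to 1.0f.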

    USING THE LEAST SIGNIFICANT BITS OF A CALLED FUNCTION'S ADDRESS TO SWITCH PROCESSOR MODES
    79.
    Invention patent application
    USING THE LEAST SIGNIFICANT BITS OF A CALLED FUNCTION'S ADDRESS TO SWITCH PROCESSOR MODES (Pending, published)

    Publication No.: WO2013119842A1

    Publication date: 2013-08-15

    Application No.: PCT/US2013/025187

    Filing date: 2013-02-07

    Abstract: Systems and methods for tracking and switching between execution modes in processing systems. A processing system is configured to execute instructions in at least two instruction execution modes, including a first and a second execution mode chosen from a classic/aligned mode and a compressed/unaligned mode. Target addresses of selected instructions, such as calls and returns, are forcibly misaligned in the compressed mode, such that one or more bits, for example the least significant bits (alignment bits), of the target address in the compressed mode differ from the corresponding alignment bits in the classic mode. When one of the selected instructions is encountered during execution in the first mode, the decision to switch operation to the second mode is based on analyzing the alignment bits of that instruction's target address.
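
    A small sketch of the decision step, assuming for illustration that classic-mode targets are 4-byte aligned and compressed-mode targets are deliberately misaligned; the 2-bit alignment test, the function name, and the enum names are assumptions.

        #include <stdbool.h>
        #include <stdint.h>

        typedef enum { MODE_CLASSIC_ALIGNED, MODE_COMPRESSED_UNALIGNED } exec_mode_t;

        /* Inspect the alignment bits (least significant bits) of a call or return
         * target to decide which execution mode to switch to. */
        static exec_mode_t mode_for_target(uint32_t target_addr)
        {
            bool misaligned = (target_addr & 0x3u) != 0;   /* alignment bits differ in compressed mode */
            return misaligned ? MODE_COMPRESSED_UNALIGNED : MODE_CLASSIC_ALIGNED;
        }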

    UTILIZING NEGATIVE FEEDBACK FROM UNEXPECTED MISS ADDRESSES IN A HARDWARE PREFETCHER
    80.
    Invention patent application
    UTILIZING NEGATIVE FEEDBACK FROM UNEXPECTED MISS ADDRESSES IN A HARDWARE PREFETCHER (Pending, published)

    Publication No.: WO2013109650A1

    Publication date: 2013-07-25

    Application No.: PCT/US2013/021776

    Filing date: 2013-01-16

    CPC classification number: G06F12/0862 G06F2212/6026

    Abstract: Systems and methods for populating a cache using a hardware prefetcher are disclosed. A method for prefetching cache entries includes determining an initial stride value based on at least a first and a second demand miss address in the cache, verifying the initial stride value based on a third demand miss address in the cache, prefetching a predetermined number of cache entries based on the verified initial stride value, determining an expected next miss address in the cache based on the verified initial stride value and the addresses of the prefetched cache entries, and confirming the verified initial stride value by comparing the expected next miss address to the next demand miss address in the cache. If the verified initial stride value is confirmed, additional cache entries are prefetched. If it is not confirmed, further prefetching is stalled and an alternate stride value is determined.
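
    The flow can be sketched as below; prefetch_line is a hypothetical hook, PREFETCH_COUNT stands in for the "predetermined number", and deriving the expected next miss from the last prefetched address plus the stride is an assumption made for illustration.

        #include <stdbool.h>
        #include <stdint.h>

        #define PREFETCH_COUNT 4   /* assumed "predetermined number" of prefetched entries */

        void prefetch_line(uintptr_t addr);   /* hypothetical hook that prefetches one line */

        /* Determine a stride from the first two demand misses, verify it against the
         * third, prefetch ahead, and confirm by comparing the expected next miss with
         * the next actual demand miss. Returns false when prefetching should stall and
         * an alternate stride should be determined. */
        static bool stride_prefetch(uintptr_t miss1, uintptr_t miss2, uintptr_t miss3,
                                    uintptr_t next_demand_miss)
        {
            intptr_t stride = (intptr_t)(miss2 - miss1);      /* initial stride value        */
            if ((intptr_t)(miss3 - miss2) != stride)
                return false;                                  /* initial stride not verified */

            uintptr_t addr = miss3;
            for (int i = 0; i < PREFETCH_COUNT; ++i) {         /* prefetch ahead              */
                addr += (uintptr_t)stride;
                prefetch_line(addr);
            }

            uintptr_t expected_next_miss = addr + (uintptr_t)stride;
            return expected_next_miss == next_demand_miss;     /* confirm the stride          */
        }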
