Technique for scheduling threads
    1.
    Invention Patent
    Technique for scheduling threads (Pending, Published)

    Publication No.: JP2010061642A

    Publication Date: 2010-03-18

    Application No.: JP2009160234

    Filing Date: 2009-07-06

    CPC classification number: G06F9/3851 G06F9/3009

    Abstract: PROBLEM TO BE SOLVED: To prevent degraded performance or increased power consumption by switching threads at an optimal time and with an optimal frequency.
    SOLUTION: A pre-fetch buffer bank 101 includes a plurality of buffer entries and stores a plurality of instructions for the respective threads. The instructions fetched for each thread are transmitted to a multiplexer (mux) 105 via interconnects T0, T1, T2, T3. The multiplexer 105 selects the one instruction corresponding to the selected thread on the basis of a selection line from thread picker logic 110. The thread picker logic may determine which thread is to be selected, i.e., which thread's instruction is to be executed, on the basis of an indication of the fetched instruction provided by a thread block indicator 115 and/or a decoded indication from a decoder 120.
    COPYRIGHT: (C)2010,JPO&INPIT
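    The abstract above describes a per-thread prefetch buffer bank feeding a multiplexer whose select line is driven by thread picker logic. Below is a minimal behavioral sketch, in Python, of one such selection step; the round-robin policy, the class and field names, and the boolean "blocked" flag standing in for the thread block indicator are illustrative assumptions rather than details taken from the patent.

    from collections import deque

    class ThreadPicker:
        """Toy model: pick the next thread whose fetched instruction should issue."""

        def __init__(self, num_threads):
            self.buffers = [deque() for _ in range(num_threads)]  # prefetch buffer bank
            self.blocked = [False] * num_threads                  # thread block indicator
            self.last = -1                                        # last thread issued

        def fetch(self, tid, instruction):
            self.buffers[tid].append(instruction)

        def pick(self):
            """Return (tid, instruction) for the selected thread, or None."""
            n = len(self.buffers)
            for offset in range(1, n + 1):
                tid = (self.last + offset) % n
                if not self.blocked[tid] and self.buffers[tid]:
                    self.last = tid
                    return tid, self.buffers[tid].popleft()  # the mux forwards this instruction
            return None

    picker = ThreadPicker(num_threads=4)
    picker.fetch(0, "load r1, [r2]")
    picker.fetch(1, "add r3, r4, r5")
    picker.blocked[0] = True      # e.g. thread 0 is stalled on memory
    print(picker.pick())          # -> (1, 'add r3, r4, r5')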


    Partition-free multi-socket memory system architecture
    3.
    Invention Patent
    Partition-free multi-socket memory system architecture (Pending, Published)

    Publication No.: JP2013178823A

    Publication Date: 2013-09-09

    Application No.: JP2013111885

    Filing Date: 2013-05-28

    Inventor: SPRANGLE ERIC

    Abstract: PROBLEM TO BE SOLVED: To provide a technique to increase the memory bandwidth available to applications.
    SOLUTION: An apparatus comprises at least two processors coupled to at least two memories. A first processor 200 of the at least two processors is configured to read a first portion of data stored in a first memory 225 of the at least two memories and a second portion of data stored in a second memory 220 of the at least two memories within a first portion of a clock signal period. A second processor 205 of the at least two processors is configured to read a third portion of data stored in the first memory 225 and a fourth portion of data stored in the second memory 220 within the same first portion of the clock signal period.
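    The abstract describes both processors reading parts of data held in both memories within the same fraction of a clock period, i.e. without partitioning either memory to one socket. The sketch below models one way that could look in software terms: each cache line is striped across two memories, so every read gathers one half from each and either processor can do so concurrently. The 64-byte line size, the two-way striping, and all names are assumptions made for illustration.

    LINE_BYTES = 64
    HALF = LINE_BYTES // 2

    def write_striped(mem0, mem1, addr, line):
        """Stripe one cache line across both memories."""
        mem0[addr] = line[:HALF]
        mem1[addr] = line[HALF:]

    def read_striped(mem0, mem1, addr):
        """Gather both halves of a line, one half from each memory."""
        return mem0[addr] + mem1[addr]

    mem0, mem1 = {}, {}
    write_striped(mem0, mem1, 0x100, bytes(range(64)))

    # Either processor can issue the same kind of read in the same clock phase;
    # each read touches both memories rather than a private partition.
    line_p0 = read_striped(mem0, mem1, 0x100)   # first processor 200
    line_p1 = read_striped(mem0, mem1, 0x100)   # second processor 205
    assert line_p0 == line_p1 == bytes(range(64))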


Sharing resources between CPU and GPU
    4.
    Invention Patent
    Sharing resources between CPU and GPU (Granted)

    Publication No.: JP2011175624A

    Publication Date: 2011-09-08

    Application No.: JP2010279280

    Filing Date: 2010-12-15

    CPC classification number: G06T1/20

    Abstract: PROBLEM TO BE SOLVED: To share computing resources between a CPU and a GPU according to the type of workload to be processed.
    SOLUTION: A processor 100 includes a plurality of processing cores 100-1 to 100-N, dedicated throughput application hardware 110 (for example, graphics texture sampling hardware), and memory interface logic 120, arranged along a ring interconnect. The CPU executes some of the operations scheduled for the GPU hardware by transferring them over a shared memory or a direct link (or the ring link). Conversely, operations scheduled for the graphics hardware can be transferred to an available CPU using a similar mechanism.
    COPYRIGHT: (C)2011,JPO&INPIT
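    The abstract describes operations being handed off between the CPU cores and the GPU hardware over a shared memory or a direct (ring) link. The toy Python model below sketches that idea with a shared work queue that an idle CPU drains when its own queue is empty; the queue-based scheme, the names, and the work items are illustrative assumptions, not the patent's mechanism.

    import queue
    import threading

    gpu_work = queue.Queue()   # operations scheduled for the graphics hardware
    cpu_work = queue.Queue()   # operations scheduled for the CPU cores

    def cpu_worker():
        """An idle CPU core picks up GPU-scheduled work when its own queue is empty."""
        while True:
            try:
                op = cpu_work.get_nowait()
            except queue.Empty:
                try:
                    op = gpu_work.get(timeout=0.1)   # hand-off via the shared queue
                except queue.Empty:
                    return                           # nothing left to do
            print(f"CPU executes {op}")

    for i in range(3):
        gpu_work.put(f"texture_sample_{i}")

    t = threading.Thread(target=cpu_worker)
    t.start()
    t.join()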


    Partition-free multisocket memory system architecture
    5.
    Invention Patent
    Partition-free multisocket memory system architecture (Pending, Published)

    Publication No.: JP2010009580A

    Publication Date: 2010-01-14

    Application No.: JP2009083082

    Filing Date: 2009-03-30

    Inventor: SPRANGLE ERIC

    Abstract: PROBLEM TO BE SOLVED: To provide a technique for expanding the memory bandwidth available to an application.
    SOLUTION: In a device having at least two processors coupled to at least two memories, a first processor of the at least two processors reads a first portion of data stored in a first memory of the at least two memories and a second portion of data stored in a second memory of the at least two memories within a first portion of a clock signal period, and a second processor of the at least two processors reads a third portion of data stored in the first memory and a fourth portion of data stored in the second memory within the same first portion of the clock signal period.
    COPYRIGHT: (C)2010,JPO&INPIT


    METHOD AND APPARATUS FOR DETERMINING A DYNAMIC RANDOM ACCESS MEMORY PAGE MANAGEMENT IMPLEMENTATION
    6.
    Invention Application
    METHOD AND APPARATUS FOR DETERMINING A DYNAMIC RANDOM ACCESS MEMORY PAGE MANAGEMENT IMPLEMENTATION (Pending, Published)

    Publication No.: WO2004061685A3

    Publication Date: 2004-11-04

    Application No.: PCT/US0338727

    Filing Date: 2003-12-04

    Applicant: INTEL CORP

    CPC classification number: G06F13/1631 G06F12/0215

    Abstract: A system and method for a processor to determine the memory page management implementation used by a memory controller, without necessarily having direct access to the circuits or registers of the memory controller, is disclosed. In one embodiment, a matrix of counters corresponds to potential page management implementations and numbers of pages per block. Whenever a long access latency is observed, each counter may be incremented or decremented depending upon whether the corresponding page management implementation and number of pages predicts a page boundary. The counter with the largest value after a period of time may correspond to the actual page management implementation and number of pages per block.
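    The abstract describes a matrix of counters, one per candidate page management implementation and page count, that is updated whenever a long access latency is observed; the largest counter eventually identifies the implementation in use. The sketch below illustrates that inference loop. The candidate policies, page sizes, latency threshold, and the way each hypothesis predicts a slow access are assumptions chosen for the example.

    PAGE_SIZES = [2048, 4096, 8192]          # candidate page sizes in bytes
    POLICIES = ["open_page", "closed_page"]  # candidate management policies

    # One counter per (policy, page size) hypothesis.
    counters = {(p, s): 0 for p in POLICIES for s in PAGE_SIZES}

    def crosses_page(prev_addr, addr, page_size):
        return (prev_addr // page_size) != (addr // page_size)

    def observe(prev_addr, addr, latency_cycles, slow_threshold=40):
        """Score every hypothesis against one observed memory access."""
        slow = latency_cycles >= slow_threshold
        for policy, size in counters:
            boundary = crosses_page(prev_addr, addr, size)
            # Open-page policy: long latency expected mainly when a new page opens;
            # closed-page policy: every access pays roughly the full latency.
            predicted_slow = boundary if policy == "open_page" else True
            counters[(policy, size)] += 1 if predicted_slow == slow else -1

    observe(0x0000, 0x0040, latency_cycles=20)   # same page, fast access
    observe(0x0040, 0x1040, latency_cycles=55)   # crosses a 4 KiB boundary, slow
    best = max(counters, key=counters.get)       # most plausible implementation so far
    print(best, counters[best])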

    7.
    Invention Patent
    Unknown

    Publication No.: DE10392278T5

    Publication Date: 2005-04-14

    Application No.: DE10392278

    Filing Date: 2003-01-23

    Applicant: INTEL CORP

    Abstract: A method and apparatus for accessing memory, comprising: monitoring memory accesses from a hardware prefetcher; determining whether the memory accesses from the hardware prefetcher are used by an out-of-order core; and switching memory accesses from a first mode to a second mode if a percentage of the memory accesses generated by the hardware prefetcher is used by the out-of-order core.
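    The abstract describes switching between two memory-access modes based on whether the out-of-order core actually consumes the accesses that the hardware prefetcher generates. The small sketch below shows that decision logic; the evaluation window, the threshold, and the mode names are illustrative assumptions.

    class PrefetchMonitor:
        """Toy model: flip the access mode once prefetch usage crosses a threshold."""

        def __init__(self, window=1024, threshold=0.75):
            self.window = window        # prefetches per evaluation interval
            self.threshold = threshold  # fraction that must be used by the core
            self.issued = 0             # prefetches issued in this interval
            self.used = 0               # prefetches later consumed by the core
            self.mode = "first"         # current memory-access mode

        def on_prefetch_issued(self):
            self.issued += 1
            if self.issued >= self.window:
                self._evaluate()

        def on_prefetch_used(self):
            self.used += 1

        def _evaluate(self):
            ratio = self.used / self.issued
            self.mode = "second" if ratio >= self.threshold else "first"
            self.issued = self.used = 0   # start a new interval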

    PREFETCHING DATA IN COMPUTER SYSTEM
    9.
    Invention Application
    PREFETCHING DATA IN COMPUTER SYSTEM (Pending, Published)

    Publication No.: WO2004025457A2

    Publication Date: 2004-03-25

    Application No.: PCT/US0327716

    Filing Date: 2003-09-06

    Applicant: INTEL CORP

    CPC classification number: G06F12/0897 G06F12/0862

    Abstract: A method and apparatus to detect and filter out redundant cache line addresses in a prefetch input queue, and to adjust the detector window size dynamically according to the number of detector entries in the queue for the cache-to-memory controller bus. Detectors correspond to cache line addresses that may represent cache misses in various levels of cache memory.
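    The abstract describes filtering redundant cache line addresses out of the prefetch input queue and resizing the detector window according to how many entries sit in the queue for the cache-to-memory controller bus. The sketch below models that filter with a small table of recently seen line addresses; the least-recently-used eviction and the sizing rule are assumptions made for illustration.

    from collections import OrderedDict

    class PrefetchFilter:
        """Toy model: drop redundant prefetch addresses before they reach the bus."""

        def __init__(self, min_window=8, max_window=64):
            self.detectors = OrderedDict()   # recently seen cache line addresses
            self.min_window = min_window
            self.max_window = max_window
            self.window = min_window

        def resize(self, queue_occupancy):
            """Grow the window as the bus queue fills, shrink it as it drains."""
            self.window = max(self.min_window,
                              min(self.max_window, self.min_window + queue_occupancy))
            while len(self.detectors) > self.window:
                self.detectors.popitem(last=False)   # evict the oldest detector

        def accept(self, line_addr):
            """Return True if the prefetch is new and should be issued."""
            if line_addr in self.detectors:
                self.detectors.move_to_end(line_addr)
                return False                         # redundant: already covered
            self.detectors[line_addr] = True
            if len(self.detectors) > self.window:
                self.detectors.popitem(last=False)
            return True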

