CONDITIONAL INSTRUCTION FOR A SINGLE INSTRUCTION, MULTIPLE DATA EXECUTION ENGINE
    1.
    Invention application
    CONDITIONAL INSTRUCTION FOR A SINGLE INSTRUCTION, MULTIPLE DATA EXECUTION ENGINE (Pending - Published)

    Publication No.: WO2006012070A3

    Publication Date: 2006-05-26

    Application No.: PCT/US2005021604

    Application Date: 2005-06-17

    CPC classification number: G06F9/3887 G06F9/30036 G06F9/30072 G06F9/3885

    Abstract: According to some embodiments, a conditional Single Instruction, Multiple Data instruction is provided. For example, a first conditional instruction may be received at an n-channel SIMD execution engine. The first conditional instruction may be evaluated based on multiple channels of associated data, and the result of the evaluation may be stored in an n-bit conditional mask register. A second conditional instruction may then be received at the execution engine and the result may be copied from the conditional mask register to an n-bit wide, m-entry deep conditional stack.
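
    The masking scheme in this abstract can be pictured with a small software model. The sketch below is a hypothetical C++ analogue, not the patented hardware: an n-channel engine evaluates a per-channel predicate into an n-bit conditional mask, and a nested conditional pushes the current mask onto an n-bit wide, m-entry deep stack. CHANNELS, STACK_DEPTH, the SimdEngine type and all member names are illustrative assumptions.

        // Minimal software sketch of the conditional mask register and conditional
        // stack described in the abstract; widths and names are assumed values.
        #include <array>
        #include <bitset>
        #include <cstdio>

        constexpr int CHANNELS = 8;      // n-channel SIMD engine (assumed n = 8)
        constexpr int STACK_DEPTH = 4;   // m-entry conditional stack (assumed m = 4)

        struct SimdEngine {
            std::bitset<CHANNELS> condMask;                        // n-bit conditional mask register
            std::array<std::bitset<CHANNELS>, STACK_DEPTH> stack;  // n-bit wide, m-entry deep stack
            int top = 0;

            // First conditional instruction: evaluate a predicate across the
            // channels of associated data and store the result in the mask register.
            template <typename Pred>
            void evalCondition(const std::array<int, CHANNELS>& data, Pred pred) {
                for (int c = 0; c < CHANNELS; ++c)
                    condMask[c] = pred(data[c]);
            }

            // Second conditional instruction (e.g. a nested "if"): copy the result
            // from the conditional mask register onto the conditional stack.
            void pushMask() {
                if (top < STACK_DEPTH)
                    stack[top++] = condMask;
            }

            // Leaving the nested block restores the previous mask.
            void popMask() {
                if (top > 0)
                    condMask = stack[--top];
            }
        };

        int main() {
            SimdEngine engine;
            std::array<int, CHANNELS> data = {3, -1, 7, 0, -5, 2, 9, -4};
            engine.evalCondition(data, [](int x) { return x > 0; });  // per-channel test
            engine.pushMask();                                        // save mask for a nested condition
            std::printf("mask = %s\n", engine.condMask.to_string().c_str());
            engine.popMask();
            return 0;
        }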

    2.
    Invention patent
    Unknown

    Publication No.: AT389219T

    Publication Date: 2008-03-15

    Application No.: AT03781937

    Application Date: 2003-11-13

    Applicant: INTEL CORP

    Abstract: Methods, apparatus and computer readable medium are described that compress and/or decompress digital images in a lossless or a lossy manner. In some embodiments, a display controller may quantize pels of a digital image and may identify runs of successive quantized pels which are equal. The display controller may generate a symbol to represent an identified run of pels. The symbol may comprise a run length and a quantized pel that may be used to reconstruct the run of pels. The symbol may further comprise an error vector for each of the pels of the run that may be used to further reconstruct the run of pels.
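
    As a rough illustration of the run-based coding described here, the following C++ sketch quantizes 8-bit pels, groups equal quantized values into runs, and stores a per-pel error vector so the run can be reconstructed exactly; dropping the errors would give the lossy variant. The quantization step, the Symbol layout and all names are assumptions, not the claimed display-controller hardware.

        // Illustrative run-length coding of quantized pels with per-pel error
        // vectors (8-bit grayscale assumed for brevity).
        #include <cstddef>
        #include <cstdint>
        #include <vector>

        constexpr uint8_t QUANT_SHIFT = 2;   // assumed quantization: drop the 2 low bits

        struct Symbol {
            uint32_t runLength;              // number of successive equal quantized pels
            uint8_t  quantizedPel;           // the shared quantized value
            std::vector<uint8_t> errors;     // per-pel error used to reconstruct the run
        };

        static uint8_t quantize(uint8_t pel) { return pel >> QUANT_SHIFT; }
        static uint8_t dequantize(uint8_t q) { return static_cast<uint8_t>(q << QUANT_SHIFT); }

        std::vector<Symbol> compress(const std::vector<uint8_t>& pels) {
            std::vector<Symbol> out;
            std::size_t i = 0;
            while (i < pels.size()) {
                uint8_t q = quantize(pels[i]);
                Symbol s{0, q, {}};
                // Extend the run while successive quantized pels stay equal.
                while (i < pels.size() && quantize(pels[i]) == q) {
                    s.errors.push_back(static_cast<uint8_t>(pels[i] - dequantize(q)));
                    ++s.runLength;
                    ++i;
                }
                out.push_back(std::move(s));
            }
            return out;
        }

        std::vector<uint8_t> decompress(const std::vector<Symbol>& symbols) {
            std::vector<uint8_t> pels;
            for (const Symbol& s : symbols)
                for (uint32_t k = 0; k < s.runLength; ++k)
                    pels.push_back(static_cast<uint8_t>(dequantize(s.quantizedPel) + s.errors[k]));
            return pels;
        }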

    CACHE FOR A MULTI THREAD AND MULTI CORE SYSTEM AND METHODS THEREOF
    3.
    Invention application
    CACHE FOR A MULTI THREAD AND MULTI CORE SYSTEM AND METHODS THEREOF (Pending - Published)

    Publication No.: WO2009006018A3

    Publication Date: 2009-03-05

    Application No.: PCT/US2008067279

    Application Date: 2008-06-18

    CPC classification number: G06F12/0859

    Abstract: According to one embodiment, the present disclosure generally provides a method for improving the performance of a cache of a processor. The method may include storing a plurality of data in a data Random Access Memory (RAM). The method may further include holding information for all outstanding requests forwarded to a next-level memory subsystem. The method may also include clearing information associated with a serviced request after the request has been fulfilled. The method may additionally include determining if a subsequent request matches an address supplied to one or more requests already in-flight to the next-level memory subsystem. The method may further include matching fulfilled requests serviced by the next-level memory subsystem to at least one requestor who issued requests while an original request was in-flight to the next level memory subsystem. The method may also include storing information specific to each request, the information including a set attribute and a way attribute, the set and way attributes configured to identify where the returned data should be held in the data RAM once the data is returned, the information specific to each request further including at least one of thread ID, instruction queue position and color. The method may additionally include scheduling hit and miss data returns. Of course, various alternative embodiments are also within the scope of the present disclosure.
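
    The request-tracking behaviour described above resembles a miss-status table in front of the next-level memory. The sketch below is a simplified, hypothetical C++ model: each in-flight request records the set and way where returned data will land in the data RAM, subsequent requests to a matching address are merged onto the existing entry together with their thread ID, instruction queue position and color, and the entry is cleared once the next-level subsystem services it. The MissTracker type, field sizes and function names are assumptions for illustration only.

        // Simplified model of tracking outstanding cache requests to a next-level
        // memory subsystem; the tracked attributes follow the abstract, everything
        // else (names, types, sizes) is an illustrative assumption.
        #include <cstdint>
        #include <optional>
        #include <unordered_map>
        #include <vector>

        struct Requestor {
            uint8_t  threadId;     // thread ID of the requester
            uint16_t queuePos;     // instruction queue position
            uint8_t  color;
        };

        struct OutstandingRequest {
            uint32_t set;                    // where returned data should be held in the data RAM
            uint32_t way;
            std::vector<Requestor> waiters;  // everyone who asked while the request was in flight
        };

        class MissTracker {
        public:
            // Returns true if a new next-level request must be issued, false if the
            // address matches a request already in flight (the requester is merged).
            bool request(uint64_t addr, uint32_t set, uint32_t way, Requestor who) {
                auto it = inflight_.find(addr);
                if (it != inflight_.end()) {
                    it->second.waiters.push_back(who);
                    return false;
                }
                inflight_[addr] = OutstandingRequest{set, way, {who}};
                return true;
            }

            // Called when the next-level memory subsystem fulfills the request for
            // addr: hands back the bookkeeping so the caller can fill the data RAM
            // and wake every matched requester, then clears the serviced entry.
            std::optional<OutstandingRequest> fulfill(uint64_t addr) {
                auto it = inflight_.find(addr);
                if (it == inflight_.end()) return std::nullopt;
                OutstandingRequest done = std::move(it->second);
                inflight_.erase(it);
                return done;
            }

        private:
            std::unordered_map<uint64_t, OutstandingRequest> inflight_;
        };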

    Z-BUFFERING TECHNIQUES FOR GRAPHICS RENDERING
    4.
    Invention application
    Z-BUFFERING TECHNIQUES FOR GRAPHICS RENDERING (Pending - Published)

    Publication No.: WO2004061776A3

    Publication Date: 2004-12-02

    Application No.: PCT/US0336304

    Application Date: 2003-11-12

    Applicant: INTEL CORP

    CPC classification number: G06T15/405

    Abstract: Embodiments of the invention relate to graphics rendering in which Z-buffering tests are performed at the front of the rendering pipeline. Particularly, Z-buffering test logic at the front of the rendering pipeline is coupled to a render cache memory, which includes a Z-buffer, such that Z-buffering can be accomplished at the front of the rendering pipeline.
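
    To make the "front of the pipeline" idea concrete, the following C++ sketch shows an early depth test against a Z-buffer before any shading work is spent, so occluded fragments never reach the back end. The buffer layout and the function names are assumptions for illustration, not the patented render-cache design.

        // Illustrative early-Z test: depth is compared against the Z-buffer at the
        // front of the pipeline, so occluded pixels are rejected before shading.
        #include <cstddef>
        #include <limits>
        #include <vector>

        struct ZBuffer {
            int width, height;
            std::vector<float> depth;

            ZBuffer(int w, int h)
                : width(w), height(h),
                  depth(static_cast<std::size_t>(w) * h, std::numeric_limits<float>::max()) {}

            // Returns true (and updates the stored depth) only if the incoming
            // fragment is closer than what the Z-buffer currently holds.
            bool testAndUpdate(int x, int y, float z) {
                float& stored = depth[static_cast<std::size_t>(y) * width + x];
                if (z >= stored) return false;   // occluded: reject before any shading
                stored = z;
                return true;
            }
        };

        // Stand-in for the costly colour/texture work that early Z lets us skip.
        void shadeAndWrite(int /*x*/, int /*y*/, float /*z*/) { /* ... */ }

        void rasterizeFragment(ZBuffer& zb, int x, int y, float z) {
            if (zb.testAndUpdate(x, y, z))   // Z test performed at the front of the pipeline
                shadeAndWrite(x, y, z);      // only visible fragments proceed down the pipeline
        }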

    5.
    Invention patent
    Unknown

    Publication No.: AT470902T

    Publication Date: 2010-06-15

    Application No.: AT04815467

    Application Date: 2004-12-23

    Applicant: INTEL CORP

    Abstract: Multiple parallel passive threads of instructions coordinate access to shared resources using "active" semaphores. The semaphores are referred to as active because the semaphores send messages to execution and/or control circuitry to cause the state of a thread to change. A thread can be placed in an inactive state by a thread scheduler in response to an unresolved dependency, which can be indicated by a semaphore. A thread state variable corresponding to the dependency is used to indicate that the thread is in inactive mode. When the dependency is resolved a message is passed to control circuitry causing the dependency variable to be cleared. In response to the cleared dependency variable the thread is placed in an active state. Execution can proceed on the threads in the active state.
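
    A small software analogue of this thread-state handling is sketched below: the scheduler parks a thread as inactive when it hits an unresolved dependency on a semaphore, and releasing the semaphore plays the role of the message that clears the dependency variable and returns the thread to the active state. The enum, struct and function names are hypothetical; the sketch models behaviour, not the claimed control circuitry.

        // Sketch of "active" semaphore behaviour: resolving a dependency clears the
        // thread's dependency variable and moves the thread back to the active state.
        #include <cstdio>
        #include <vector>

        enum class ThreadState { Active, Inactive };

        struct HwThread {
            int id;
            ThreadState state = ThreadState::Active;
            bool dependencyPending = false;   // thread state variable for the dependency
        };

        struct ActiveSemaphore {
            std::vector<int> waiters;         // threads parked on this semaphore

            // The thread scheduler places a thread in the inactive state in
            // response to an unresolved dependency indicated by the semaphore.
            void block(HwThread& t) {
                t.dependencyPending = true;
                t.state = ThreadState::Inactive;
                waiters.push_back(t.id);
            }

            // Resolving the dependency "sends a message" to control logic, which
            // clears the dependency variable and reactivates the waiting threads.
            void release(std::vector<HwThread>& threads) {
                for (int id : waiters) {
                    threads[id].dependencyPending = false;
                    threads[id].state = ThreadState::Active;
                }
                waiters.clear();
            }
        };

        int main() {
            std::vector<HwThread> threads = {{0}, {1}, {2}};
            ActiveSemaphore sem;
            sem.block(threads[1]);   // thread 1 hits an unresolved dependency
            sem.release(threads);    // dependency resolved: thread 1 may execute again
            for (const auto& t : threads)
                std::printf("thread %d is %s\n", t.id,
                            t.state == ThreadState::Active ? "active" : "inactive");
            return 0;
        }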

    Dynamic allocation of a buffer across multiple clients in a threaded processor
    6.

    Publication No.: GB2436044B

    Publication Date: 2009-03-18

    Application No.: GB0712505

    Application Date: 2005-10-13

    Applicant: INTEL CORP

    Inventor: PIAZZA THOMAS

    Abstract: A method may include distributing ranges of addresses in a memory among a first set of functions in a first pipeline. The first set of the functions in the first pipeline may operate on data using the ranges of addresses. Different ranges of addresses in the memory may be redistributed among a second set of functions in a second pipeline without waiting for the first set of functions to be flushed of data.
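
    A toy C++ model of this redistribution is given below: address ranges in a shared buffer are handed out to the functions of a first pipeline, and a different partitioning is handed to a second pipeline without waiting for the first set of functions to drain. The buffer size, the function names and the even-split policy are assumptions made only for the sketch.

        // Toy model: ranges of addresses in a shared buffer are distributed among
        // the functions of one pipeline, then redistributed for a second pipeline
        // without flushing the first. Sizes, names and policy are illustrative.
        #include <cstdint>
        #include <cstdio>
        #include <string>
        #include <vector>

        struct Range { uint32_t base; uint32_t size; };

        struct FunctionAlloc {
            std::string function;   // a pipeline function acting as a buffer client
            Range range;            // the slice of the shared buffer it may use
        };

        // Split [0, total) evenly among the named functions of a pipeline.
        std::vector<FunctionAlloc> distribute(const std::vector<std::string>& functions,
                                              uint32_t total) {
            std::vector<FunctionAlloc> out;
            uint32_t slice = total / static_cast<uint32_t>(functions.size());
            uint32_t base = 0;
            for (const auto& f : functions) {
                out.push_back({f, {base, slice}});
                base += slice;
            }
            return out;
        }

        int main() {
            const uint32_t BUFFER_SIZE = 64 * 1024;   // assumed shared buffer size

            // The first pipeline receives its ranges and operates on data using them.
            auto first = distribute({"vertex", "clip", "setup", "raster"}, BUFFER_SIZE);
            (void)first;   // the first set keeps its ranges; no flush is requested

            // A second pipeline is given a different partitioning while the first is
            // still working; nothing here waits for the first set to be flushed.
            auto second = distribute({"vertex", "raster"}, BUFFER_SIZE);

            for (const auto& a : second)
                std::printf("%s: base=0x%x size=0x%x\n",
                            a.function.c_str(), a.range.base, a.range.size);
            return 0;
        }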

    7.
    Invention patent
    Unknown

    Publication No.: DE60313664D1

    Publication Date: 2007-06-14

    Application No.: DE60313664

    Application Date: 2003-11-13

    Applicant: INTEL CORP

    Abstract: Methods, apparatus and computer readable medium are described that compress and/or decompress a digital image in a lossless or a lossy manner. In some embodiments, a display controller may compress a digital image by generating a symbol for each pel of the digital image. In particular, the symbol may represent a pel via a match vector and a channel error vector. The match vector may indicate which quantized channels of the pel matched quantized channels of a previous pel. Further, the channel error vector may comprise a lossless or lossy channel for each quantized channel of the pel that did not match a corresponding quantized channel of the previous pel. The channel error may also comprise a lossless or lossy channel error for each quantized channel of the pel that matched a corresponding quantized channel of the previous pel.
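
    A compact sketch of the per-pel symbol described here is shown below for an assumed three-channel RGB pel: a 3-bit match vector marks which quantized channels equal those of the previous pel, and a channel error is emitted for each channel that differs (keeping the full error corresponds to the lossless mode). The channel count, quantization step and all names are assumptions for illustration.

        // Illustrative per-pel symbol: a match vector plus channel errors for the
        // quantized channels that differ from the previous pel.
        #include <array>
        #include <cstdint>
        #include <vector>

        constexpr int CHANNELS = 3;          // assumed R, G, B
        constexpr uint8_t QUANT_SHIFT = 3;   // assumed quantization step

        struct Pel { std::array<uint8_t, CHANNELS> ch; };

        struct PelSymbol {
            uint8_t matchVector = 0;              // bit c set: channel c matches the previous pel
            std::vector<uint8_t> channelErrors;   // one error per non-matching channel
        };

        static uint8_t quantize(uint8_t v) { return v >> QUANT_SHIFT; }

        PelSymbol encodePel(const Pel& prev, const Pel& cur) {
            PelSymbol sym;
            for (int c = 0; c < CHANNELS; ++c) {
                if (quantize(cur.ch[c]) == quantize(prev.ch[c])) {
                    sym.matchVector |= static_cast<uint8_t>(1u << c);   // channel matched
                } else {
                    // Error relative to the previous pel's channel; storing it in
                    // full gives the lossless mode, truncating it gives a lossy mode.
                    sym.channelErrors.push_back(
                        static_cast<uint8_t>(cur.ch[c] - prev.ch[c]));
                }
            }
            return sym;
        }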

    CACHE FOR A MULTI THREAD AND MULTI CORE SYSTEM AND METHODS THEREOF
    8.
    Invention publication
    CACHE FOR A MULTI THREAD AND MULTI CORE SYSTEM AND METHODS THEREOF (Pending - Published)

    Publication No.: EP2160683A4

    Publication Date: 2011-07-06

    Application No.: EP08771309

    Application Date: 2008-06-18

    Applicant: INTEL CORP

    CPC classification number: G06F12/0859

    Abstract: A method includes storing a plurality of data in a data RAM, holding information for all outstanding requests forwarded to a next-level memory subsystem, clearing information associated with a serviced request after the request has been fulfilled, determining if a subsequent request matches an address supplied to one or more requests already in-flight to the next-level memory subsystem, matching fulfilled requests serviced by the next-level memory subsystem to at least one requester who issued requests while an original request was in-flight to the next-level memory subsystem, storing information specific to each request comprising a set attribute and a way attribute configured to identify where the returned data should be held in the data RAM once the data is returned, the information specific to each request further including at least one of thread ID, instruction queue position and color, and scheduling hit and miss data returns.

    10.
    Invention patent
    Unknown

    Publication No.: DE60313664T2

    Publication Date: 2007-08-16

    Application No.: DE60313664

    Application Date: 2003-11-13

    Applicant: INTEL CORP

    Abstract: Methods, apparatus and computer readable medium are described that compress and/or decompress a digital image in a lossless or a lossy manner. In some embodiments, a display controller may compress a digital image by generating a symbol for each pel of the digital image. In particular, the symbol may represent a pel via a match vector and a channel error vector. The match vector may indicate which quantized channels of the pel matched quantized channels of a previous pel. Further, the channel error vector may comprise a lossless or lossy channel for each quantized channel of the pel that did not match a corresponding quantized channel of the previous pel. The channel error may also comprise a lossless or lossy channel error for each quantized channel of the pel that matched a corresponding quantized channel of the previous pel.
