-
Publication No.: US11188467B2
Publication Date: 2021-11-30
Application No.: US15717939
Filing Date: 2017-09-28
Applicant: Intel Corporation
Inventor: Israel Diamand , Alaa R. Alameldeen , Sreenivas Subramoney , Supratik Majumder , Srinivas Santosh Kumar Madugula , Jayesh Gaur , Zvika Greenfield , Anant V. Nori
IPC: G06F12/00 , G06F12/0846 , G06F12/0811 , G06F12/128 , G06F12/121 , G06F12/0886 , G06F12/08
Abstract: A method is described. The method includes receiving a read or write request for a cache line. The method includes directing the request to a set of logical super lines based on the cache line's system memory address. The method includes associating the request with a cache line of the set of logical super lines. The method includes, if the request is a write request: compressing the cache line to form a compressed cache line, breaking the compressed cache line down into smaller data units, and storing the smaller data units into a memory side cache. The method includes, if the request is a read request: reading the smaller data units of the compressed cache line from the memory side cache and decompressing the cache line.
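As an illustration of the flow this abstract describes, here is a minimal Python sketch of the write and read paths. The chunk size, number of super-line sets, and the use of zlib as a stand-in compressor are assumptions made for demonstration, not the patented design.

```python
# Illustrative sketch only; sizes, names, and zlib compression are assumptions.
import zlib

CACHE_LINE_SIZE = 64     # bytes (assumed)
DATA_UNIT_SIZE = 16      # smaller data units stored in the memory-side cache (assumed)
NUM_SUPER_LINE_SETS = 256

memory_side_cache = {}   # (set_index, line_address, unit_index) -> bytes

def super_line_set(address):
    """Direct the request to a set of logical super lines based on the address."""
    return (address // CACHE_LINE_SIZE) % NUM_SUPER_LINE_SETS

def write_line(address, line):
    """Compress the line, break it into smaller units, store them in the memory-side cache."""
    assert len(line) == CACHE_LINE_SIZE
    set_idx = super_line_set(address)
    compressed = zlib.compress(line)
    units = [compressed[i:i + DATA_UNIT_SIZE]
             for i in range(0, len(compressed), DATA_UNIT_SIZE)]
    for i, unit in enumerate(units):
        memory_side_cache[(set_idx, address, i)] = unit

def read_line(address):
    """Gather the stored units and decompress them back into the cache line."""
    set_idx = super_line_set(address)
    units = []
    i = 0
    while (set_idx, address, i) in memory_side_cache:
        units.append(memory_side_cache[(set_idx, address, i)])
        i += 1
    return zlib.decompress(b"".join(units))

if __name__ == "__main__":
    addr = 0x1000
    write_line(addr, bytes([0xAB]) * CACHE_LINE_SIZE)
    assert read_line(addr) == bytes([0xAB]) * CACHE_LINE_SIZE
```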
-
Publication No.: US11151074B2
Publication Date: 2021-10-19
Application No.: US16542085
Filing Date: 2019-08-15
Applicant: Intel Corporation
Inventor: Israel Diamand , Roni Rosner , Ravi Venkatesan , Shlomi Shua , Oz Shitrit , Henrietta Bezbroz , Alexander Gendler , Ohad Falik , Zigi Walter , Michael Behar , Shlomi Alkalay
IPC: G06F13/42 , G06N3/04 , G06F13/20 , G06F12/0893
Abstract: Methods and apparatus to implement multiple inference compute engines are disclosed herein. A disclosed example apparatus includes a first inference compute engine, a second inference compute engine, and an accelerator on coherent fabric to couple the first inference compute engine and the second inference compute engine to a converged coherency fabric of a system-on-chip, the accelerator on coherent fabric to arbitrate requests from the first inference compute engine and the second inference compute engine to utilize a single in-die interconnect port.
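A minimal sketch of the arbitration idea, assuming a simple round-robin policy between two request queues feeding a single interconnect port; the class and method names are illustrative and not taken from the patent.

```python
from collections import deque

class CoherentFabricArbiter:
    """Toy round-robin arbiter: two inference compute engines share one port."""
    def __init__(self):
        self.queues = [deque(), deque()]  # request queues for engine 0 and engine 1
        self.last_granted = 1             # so engine 0 is considered first

    def submit(self, engine_id, request):
        self.queues[engine_id].append(request)

    def grant(self):
        """Pick the next request to send over the single in-die interconnect port."""
        for offset in (1, 2):
            candidate = (self.last_granted + offset) % 2
            if self.queues[candidate]:
                self.last_granted = candidate
                return candidate, self.queues[candidate].popleft()
        return None  # both engines idle

arb = CoherentFabricArbiter()
arb.submit(0, "read A")
arb.submit(1, "write B")
print(arb.grant())  # (0, 'read A')
print(arb.grant())  # (1, 'write B')
```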
-
Publication No.: US11036277B2
Publication Date: 2021-06-15
Application No.: US16541674
Filing Date: 2019-08-15
Applicant: Intel Corporation
Inventor: Israel Diamand , Avital Paz , Eran Nevet , Zigi Walter
IPC: G06F1/32 , G06F1/08 , G06F1/324 , G06F1/3234
Abstract: Methods and apparatus to dynamically throttle compute engines are disclosed. A disclosed example apparatus includes one or more compute engines to perform calculations, where the one or more compute engines are to cause a total power request to be issued based on the calculations. The example apparatus also includes a power management unit to receive the total power request and respond to the total power request. The apparatus also includes a throttle manager to adjust a throttle speed of at least one of the one or more compute engines based on comparing a minimum of the total power request and a granted power to a total used power of the one or more compute engines prior to the power management unit responding to the total power request.
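The comparison described here can be sketched as a small helper, assuming a fixed throttle step and a normalized throttle level between 0 and 1; both are illustrative choices rather than the patented mechanism.

```python
def throttle_adjustment(total_request_w, granted_w, total_used_w, current_throttle):
    """Return a new throttle level in [0.0, 1.0] for the compute engines.

    The budget is the smaller of what was requested and what the power
    management unit granted; if the engines are already using more than
    that budget, slow them down, otherwise let them speed back up.
    The step size and clamping are illustrative choices.
    """
    budget_w = min(total_request_w, granted_w)
    step = 0.1
    if total_used_w > budget_w:
        return max(0.0, current_throttle - step)   # throttle down
    return min(1.0, current_throttle + step)       # relax throttling

# Example: engines asked for 10 W, the PMU granted 8 W, and they are burning 9 W.
print(throttle_adjustment(10.0, 8.0, 9.0, 1.0))    # 0.9 -> slow down
```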
-
Publication No.: US10304418B2
Publication Date: 2019-05-28
Application No.: US15276856
Filing Date: 2016-09-27
Applicant: Intel Corporation
Inventor: Daniel Greenspan , Randy Osborne , Zvika Greenfield , Israel Diamand , Asaf Rubinstein
IPC: G06F3/14 , G09G5/39 , G09G5/393 , G06F12/0895
Abstract: An electronic processing system may include a processor and a multi-level memory coupled to the processor, the multi-level memory including at least a main memory and a fast memory, the fast memory having relatively faster performance as compared to the main memory. The system may further include a fast memory controller coupled to the fast memory and a graphics controller coupled to the fast memory controller. The fast memory may include a cache portion allocated to a cache region to allow a corresponding mapping of elements of the main memory in the cache region, and a graphics portion allocated to a graphics region for the graphics controller with no corresponding mapping of the graphics region with the main memory.
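A toy Python model of the partitioning described above, assuming a direct-mapped cache region that mirrors main-memory addresses and an offset-addressed graphics region with no main-memory mapping; sizes and indexing are illustrative only.

```python
class FastMemory:
    """Toy model of the partitioned fast memory: a cache region mapped to
    main-memory addresses and a private graphics region owned by the
    graphics controller. Sizes and direct-mapped indexing are assumptions."""
    def __init__(self, cache_lines, graphics_bytes):
        self.cache_tags = [None] * cache_lines      # cache region: tag per line
        self.cache_data = [None] * cache_lines
        self.graphics = bytearray(graphics_bytes)   # graphics region: private storage

    def cache_lookup(self, main_memory_addr):
        """Cache region: look up a main-memory address (direct-mapped)."""
        index = main_memory_addr % len(self.cache_tags)
        if self.cache_tags[index] == main_memory_addr:
            return self.cache_data[index]
        return None  # miss: would be fetched from main memory

    def cache_fill(self, main_memory_addr, data):
        index = main_memory_addr % len(self.cache_tags)
        self.cache_tags[index] = main_memory_addr
        self.cache_data[index] = data

    def graphics_write(self, offset, data):
        """Graphics region: addressed by offset only, never by main-memory address."""
        self.graphics[offset:offset + len(data)] = data

fm = FastMemory(cache_lines=1024, graphics_bytes=4096)
fm.cache_fill(0x2000, b"main-memory-backed")
fm.graphics_write(0, b"framebuffer tile")
print(fm.cache_lookup(0x2000))
```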
-
Publication No.: US20180089096A1
Publication Date: 2018-03-29
Application No.: US15276856
Filing Date: 2016-09-27
Applicant: Intel Corporation
Inventor: Daniel Greenspan , Randy Osborne , Zvika Greenfield , Israel Diamand , Asaf Rubinstein
IPC: G06F12/0893 , G09G5/39
CPC classification number: G09G5/39 , G06F3/14 , G06F12/0895 , G06F2212/604 , G09G5/393 , G09G2360/121
Abstract: An electronic processing system may include a processor and a multi-level memory coupled to the processor, the multi-level memory including at least a main memory and a fast memory, the fast memory having relatively faster performance as compared to the main memory. The system may further include a fast memory controller coupled to the fast memory and a graphics controller coupled to the fast memory controller. The fast memory may include a cache portion allocated to a cache region to allow a corresponding mapping of elements of the main memory in the cache region, and a graphics portion allocated to a graphics region for the graphics controller with no corresponding mapping of the graphics region with the main memory.
-
Publication No.: US09767041B2
Publication Date: 2017-09-19
Application No.: US14721625
Filing Date: 2015-05-26
Applicant: Intel Corporation
Inventor: Aravindh V. Anantaraman , Zvika Greenfield , Israel Diamand , Anant V. Nori , Pradeep Ramachandran , Nir Misgav
IPC: G06F12/12 , G06F12/08 , G06F12/121 , G06F12/0891 , G06F12/0804 , G06F12/0868 , G06F12/0893 , G06F12/0864 , G06F12/123 , G06F12/128 , G06F9/44
CPC classification number: G06F12/121 , G06F9/4418 , G06F12/0804 , G06F12/0864 , G06F12/0868 , G06F12/0891 , G06F12/0893 , G06F12/123 , G06F12/128 , G06F2212/1021 , G06F2212/1024 , G06F2212/214 , G06F2212/608
Abstract: Apparatus, systems, and methods to manage memory operations are described. In one example, a controller comprises logic to receive a first transaction to operate on a first data element in the cache memory, perform a lookup operation for the first data element in the volatile memory, and, in response to a failed lookup operation, generate a cache scrub hint, forward the cache scrub hint to a cache scrub engine, and identify one or more cache lines to scrub based at least in part on the cache scrub hint. Other examples are also disclosed and claimed.
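A minimal sketch of the scrub-hint flow, assuming a hint simply records the missed element and the scrub engine cleans all dirty lines; the data shapes and the victim-selection rule are illustrative, not the claimed logic.

```python
from collections import deque

scrub_queue = deque()   # stands in for the cache scrub engine's input queue
cache = {"lineA": "dirty", "lineB": "clean", "lineC": "dirty"}

def handle_transaction(element):
    """Look up the element; on a failed lookup, generate and forward a scrub hint."""
    if element in cache:
        return cache[element]
    scrub_queue.append({"missed": element})   # cache scrub hint (illustrative shape)
    return None

def scrub_engine_step():
    """Consume a hint and identify cache lines to scrub (here: all dirty lines)."""
    if not scrub_queue:
        return []
    scrub_queue.popleft()
    victims = [line for line, state in cache.items() if state == "dirty"]
    for line in victims:
        cache[line] = "clean"   # write back / scrub
    return victims

handle_transaction("lineX")     # miss -> scrub hint generated
print(scrub_engine_step())      # ['lineA', 'lineC']
```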
-
Publication No.: US20160283392A1
Publication Date: 2016-09-29
Application No.: US14671927
Filing Date: 2015-03-27
Applicant: Intel Corporation
Inventor: Zvika Greenfield , Nadav Bonen , Israel Diamand
IPC: G06F12/08
CPC classification number: G06F12/0895 , G06F12/0808 , G06F12/0842 , G06F12/0864 , G06F2212/1016 , G06F2212/62
Abstract: Embodiments are generally directed to an asymmetric set combined cache including a direct-mapped cache portion and a multi-way cache portion. A processor may include one or more processing cores for processing of data, and a cache memory to cache data from a main memory for the one or more processing cores, the cache memory including a first cache portion, the first cache portion including a direct-mapped cache, and a second cache portion, the second cache portion including a multi-way cache. The cache memory includes asymmetric sets in the first cache portion and the second cache portion, the first cache portion being larger than the second cache portion. A coordinated replacement policy for the cache memory provides for replacement of data in the first cache portion and the second cache portion.
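A toy model of the asymmetric combined cache, assuming that the direct-mapped portion's victim is demoted into a smaller LRU-managed multi-way portion; the sizes and this promotion policy are illustrative assumptions, not the claimed coordinated replacement policy.

```python
from collections import OrderedDict

class AsymmetricSetCombinedCache:
    """Toy model: a large direct-mapped portion plus a smaller multi-way (LRU) portion."""
    def __init__(self, direct_sets=1024, ways=4):
        self.direct = [None] * direct_sets   # (addr, data) per direct-mapped set
        self.multi = OrderedDict()           # addr -> data, kept in LRU order
        self.ways = ways

    def lookup(self, addr):
        entry = self.direct[addr % len(self.direct)]
        if entry is not None and entry[0] == addr:
            return entry[1]                  # hit in the direct-mapped portion
        if addr in self.multi:
            self.multi.move_to_end(addr)     # hit in the multi-way portion
            return self.multi[addr]
        return None

    def fill(self, addr, data):
        """Coordinated replacement (illustrative): the direct-mapped victim falls
        into the multi-way portion, which evicts its least recently used line."""
        index = addr % len(self.direct)
        victim = self.direct[index]
        self.direct[index] = (addr, data)
        if victim is not None:
            self.multi[victim[0]] = victim[1]
            self.multi.move_to_end(victim[0])
            if len(self.multi) > self.ways:
                self.multi.popitem(last=False)   # evict LRU

cache = AsymmetricSetCombinedCache(direct_sets=8, ways=2)
cache.fill(3, "A")
cache.fill(11, "B")      # maps to the same set; "A" moves to the multi-way portion
print(cache.lookup(3))   # 'A' served from the multi-way portion
```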