-
Publication No.: US20210318929A1
Publication Date: 2021-10-14
Application No.: US17356338
Filing Date: 2021-06-23
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Karthik KUMAR , Mark A. SCHMISSEUR , Thomas WILLHALM , Marcos E. CARRANZA
Abstract: Methods and apparatus for application-aware memory patrol scrubbing techniques. The method may be performed on a computing system including one or more memory devices and running multiple applications with associated processes. The computer system may be implemented in a multi-tenant environment, where virtual instances of physical resources provided by the system are allocated to separate tenants, such as through virtualization schemes employing virtual machines or containers. Quality of Service (QoS) scrubbing logic and novel interfaces are provided to enable memory scrubbing QoS policies to be applied at the tenant, application, and/or process level. These QoS policies may include memory ranges for which specific policies are applied, as well as bandwidth allocations for performing scrubbing operations. A pattern generator is also provided for generating scrubbing patterns based on observed or predicted memory access patterns and/or predefined patterns.
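The policy-level interface described above lends itself to a simple illustration: a per-tenant policy records a memory range and a scrub-bandwidth allocation, and a scheduler walks each range within that budget. The sketch below is a minimal approximation under those assumptions; the structures, field names, and figures are illustrative, not the patented interface.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-tenant scrubbing QoS policy: an address range plus a
 * bandwidth budget (cache lines scrubbed per scheduling interval). */
struct scrub_policy {
    const char *tenant;
    uint64_t    base;           /* start of the range to patrol           */
    uint64_t    len;            /* length of the range in bytes           */
    uint32_t    lines_per_tick; /* scrub bandwidth allocation             */
    uint64_t    cursor;         /* next offset to scrub within the range  */
};

#define LINE 64u

/* One scheduling tick: scrub at most lines_per_tick cache lines from the
 * policy's range, wrapping around when the end of the range is reached. */
static void scrub_step(struct scrub_policy *p)
{
    for (uint32_t i = 0; i < p->lines_per_tick; i++) {
        uint64_t addr = p->base + p->cursor;
        /* Real hardware would read, correct via ECC, and write back here. */
        printf("tenant=%s scrub 0x%llx\n", p->tenant, (unsigned long long)addr);
        p->cursor = (p->cursor + LINE) % p->len;
    }
}

int main(void)
{
    struct scrub_policy policies[] = {
        { "tenant-A", 0x100000000ull, 4096, 4,  0 }, /* latency-sensitive: low scrub rate */
        { "tenant-B", 0x200000000ull, 4096, 16, 0 }, /* reliability-sensitive: high rate  */
    };
    for (int tick = 0; tick < 2; tick++)
        for (unsigned i = 0; i < 2; i++)
            scrub_step(&policies[i]);
    return 0;
}
```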
-
Publication No.: US20190042458A1
Publication Date: 2019-02-07
Application No.: US16017872
Filing Date: 2018-06-25
Applicant: Intel Corporation
Inventor: Karthik KUMAR , Francesc GUIM BERNAT , Benjamin GRANIELLO , Thomas WILLHALM , Mustafa HAJEER
IPC: G06F12/0895
Abstract: Cache on a persistent memory module is dynamically allocated as a prefetch cache or a write back cache to prioritize read and write operations to a persistent memory on the persistent memory module based on monitoring read/write accesses and/or user-selected allocation.
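As a rough illustration of the allocation decision described above, the sketch below picks between a prefetch role and a write-back role for the module-side cache from monitored read/write counts. The mode names and the 60% threshold are assumptions for illustration only.

```c
#include <stdio.h>

/* Hypothetical allocation modes for cache on a persistent memory module. */
enum cache_mode { MODE_PREFETCH, MODE_WRITE_BACK };

/* Pick a mode from monitored access counts: a read-dominated window favors
 * using the cache for prefetching, a write-dominated one for write-back.
 * The 60% threshold is an arbitrary illustration, not a value from the filing. */
static enum cache_mode choose_mode(unsigned long reads, unsigned long writes)
{
    unsigned long total = reads + writes;
    if (total == 0)
        return MODE_PREFETCH;
    return (reads * 100 >= total * 60) ? MODE_PREFETCH : MODE_WRITE_BACK;
}

int main(void)
{
    printf("read-heavy window  -> %s\n",
           choose_mode(900, 100) == MODE_PREFETCH ? "prefetch" : "write-back");
    printf("write-heavy window -> %s\n",
           choose_mode(200, 800) == MODE_PREFETCH ? "prefetch" : "write-back");
    return 0;
}
```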
-
Publication No.: US20190042429A1
Publication Date: 2019-02-07
Application No.: US15944598
Filing Date: 2018-04-03
Applicant: Intel Corporation
Inventor: Karthik KUMAR , Mustafa HAJEER , Thomas WILLHALM , Francesc GUIM BERNAT , Benjamin GRANIELLO
IPC: G06F12/0831 , G06F12/0817
CPC classification number: G06F12/0831 , G06F12/0817 , G06F2212/621
Abstract: Examples include a processor with a coherency mode indicating one of a directory-based cache coherency protocol and a snoop-based cache coherency protocol, and a caching agent to monitor a bandwidth of reading data from and/or writing data to a memory coupled to the processor, to set the coherency mode to the snoop-based cache coherency protocol when the bandwidth exceeds a threshold, and to set the coherency mode to the directory-based cache coherency protocol when the bandwidth does not exceed the threshold.
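A minimal sketch of the described caching-agent decision follows: the monitored memory bandwidth is compared against a threshold and the coherency mode is switched accordingly. The threshold value and sample figures are invented for illustration.

```c
#include <stdio.h>

/* Hypothetical coherency modes as described in the abstract. */
enum coherency_mode { MODE_DIRECTORY, MODE_SNOOP };

/* The caching agent's decision: above the bandwidth threshold the
 * snoop-based protocol is selected, otherwise the directory-based one. */
static enum coherency_mode select_mode(double bw_gbs, double threshold_gbs)
{
    return (bw_gbs > threshold_gbs) ? MODE_SNOOP : MODE_DIRECTORY;
}

int main(void)
{
    const double threshold = 80.0;           /* illustrative threshold, GB/s */
    double samples[] = { 20.0, 95.0, 60.0 }; /* monitored memory bandwidth   */
    for (int i = 0; i < 3; i++)
        printf("bw=%.1f GB/s -> %s\n", samples[i],
               select_mode(samples[i], threshold) == MODE_SNOOP
                   ? "snoop-based" : "directory-based");
    return 0;
}
```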
-
Publication No.: US20190042423A1
Publication Date: 2019-02-07
Application No.: US15957575
Filing Date: 2018-04-19
Applicant: Intel Corporation
Inventor: Karthik KUMAR , Benjamin GRANIELLO , Mark A. SCHMISSEUR , Thomas WILLHALM , Francesc GUIM BERNAT
IPC: G06F12/0811 , G06F12/084 , G06F12/0897
Abstract: A method is described. The method includes configuring different software programs that are to execute on a computer with customized hardware caching service levels. The available set of hardware caching levels comprises at least the L1, L2, and L3 caching levels, and at least one of the following hardware caching levels is available for customized support of a software program: L2, L3, and L4.
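The per-program configuration described above can be pictured as a small table mapping each program to the cache levels it may use, as in the sketch below. The program names and level assignments are hypothetical.

```c
#include <stdio.h>

/* Hypothetical per-program caching service level: which hardware cache
 * levels a program's lines are allowed to occupy. */
struct cache_service_level {
    const char *program;
    int use_l2;
    int use_l3;
    int use_l4;
};

int main(void)
{
    /* Two illustrative configurations: a latency-critical program kept out
     * of the large shared caches, and a bandwidth-heavy one allowed to use
     * L2 through L4. The programs and settings are made up. */
    struct cache_service_level levels[] = {
        { "latency_critical_svc", 1, 0, 0 },
        { "analytics_batch_job",  1, 1, 1 },
    };
    for (int i = 0; i < 2; i++)
        printf("%-22s L2=%d L3=%d L4=%d\n", levels[i].program,
               levels[i].use_l2, levels[i].use_l3, levels[i].use_l4);
    return 0;
}
```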
-
Publication No.: US20180260325A1
Publication Date: 2018-09-13
Application No.: US15979223
Filing Date: 2018-05-14
Applicant: Intel Corporation
Inventor: Kshitij A. DOSHI , Thomas WILLHALM
IPC: G06F12/0804 , G06F12/0815 , G06F9/30 , G06F12/0811 , G06F12/084
Abstract: A processor of an aspect includes a plurality of packed data registers, and a decode unit to decode a vector cache line write back instruction. The vector cache line write back instruction is to indicate a source packed memory indices operand that is to include a plurality of memory indices. The processor also includes a cache coherency system coupled with the packed data registers and the decode unit. The cache coherency system, in response to the vector cache line write back instruction, is to cause any dirty cache lines, in any caches in a coherency domain, which are to have stored therein data for any of a plurality of memory addresses that are to be indicated by any of the memory indices of the source packed memory indices operand, to be written back toward one or more memories. Other processors, methods, and systems are also disclosed.
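Functionally, the instruction described above behaves like a cache line write back applied to every address derived from a packed set of memory indices. The sketch below approximates that behavior with a scalar loop that aligns each indexed address to its cache line; on real hardware a single instruction would cover all indices, and the trace output stands in for the actual write backs.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_INDICES 8
#define LINE 64u

/* Scalar approximation of the described vector cache line write back: for
 * each memory index in the packed operand, any dirty cache line holding
 * data for base + index is written back toward memory. Here the write
 * back is only traced. */
static void vec_cache_line_write_back(uint8_t *base, const uint32_t idx[NUM_INDICES])
{
    for (int i = 0; i < NUM_INDICES; i++) {
        uintptr_t addr = (uintptr_t)(base + idx[i]);
        uintptr_t line = addr & ~(uintptr_t)(LINE - 1); /* align to cache line */
        printf("write back line at %p\n", (void *)line);
    }
}

int main(void)
{
    static uint8_t region[4096];
    /* Packed memory indices operand, one index per element (illustrative). */
    uint32_t indices[NUM_INDICES] = { 0, 64, 200, 512, 777, 1024, 2048, 4032 };
    vec_cache_line_write_back(region, indices);
    return 0;
}
```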
-
Publication No.: US20230222025A1
Publication Date: 2023-07-13
Application No.: US18124453
Filing Date: 2023-03-21
Applicant: Intel Corporation
Inventor: Karthik KUMAR , Francesc GUIM BERNAT , Mark A. SCHMISSEUR , Thomas WILLHALM , Marcos E. CARRANZA
CPC classification number: G06F11/008 , G06F11/142
Abstract: Reliability, availability, and serviceability (RAS)-based memory domains can enable applications to store data in memory domains having different degrees of reliability to reduce downtime and data corruption due to memory errors. In one example, memory resources are classified into different RAS-based memory domains based on their expected likelihood of encountering errors. The mapping of memory resources into RAS-based memory domains can be dynamically managed and updated when information indicative of reliability (such as the occurrence of errors or other information) suggests that a memory resource is becoming less reliable. The RAS-based memory domains can be exposed to applications to enable applications to allocate memory in high reliability memory for critical data.
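A minimal sketch of the dynamic classification described above: each memory resource carries error telemetry, and a resource is moved out of the high-reliability domain when its corrected-error count suggests degrading reliability. The structures and the error threshold are illustrative assumptions.

```c
#include <stdio.h>

/* Hypothetical RAS-based domains: memory resources grouped by how likely
 * they are expected to encounter errors. */
enum ras_domain { DOMAIN_HIGH_RELIABILITY, DOMAIN_STANDARD };

struct mem_resource {
    const char     *name;
    unsigned        corrected_errors;  /* observed correctable error count */
    enum ras_domain domain;
};

/* Reclassify a resource when error telemetry suggests it is becoming less
 * reliable. The threshold of 10 errors is purely illustrative. */
static void update_domain(struct mem_resource *r)
{
    r->domain = (r->corrected_errors > 10) ? DOMAIN_STANDARD
                                           : DOMAIN_HIGH_RELIABILITY;
}

int main(void)
{
    struct mem_resource dimms[] = {
        { "DIMM0", 0,  DOMAIN_HIGH_RELIABILITY },
        { "DIMM1", 37, DOMAIN_HIGH_RELIABILITY },
    };
    for (int i = 0; i < 2; i++) {
        update_domain(&dimms[i]);
        printf("%s -> %s\n", dimms[i].name,
               dimms[i].domain == DOMAIN_HIGH_RELIABILITY
                   ? "high-reliability domain (critical data)"
                   : "standard domain");
    }
    return 0;
}
```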
-
Publication No.: US20220318132A1
Publication Date: 2022-10-06
Application No.: US17847026
Filing Date: 2022-06-22
Applicant: Intel Corporation
Inventor: Thomas WILLHALM , Francesc GUIM BERNAT , Karthik KUMAR
IPC: G06F12/02
Abstract: Methods and apparatus for software-assisted sparse memory. A processor including a memory controller is configured to implement one or more portions of the memory space for memory accessed via the memory controller as sparse memory regions. The amount of physical memory used for a sparse memory region is a fraction of the address range for the sparse memory region, where only non-zero data are written to the physical memory. Mechanisms are provided to detect memory access requests for memory in a sparse memory region and perform associated operations, while non-sparse memory access operations are performed when accessing memory that is not in a sparse memory region. Interfaces are provided to enable software to request allocation of a new sparse memory region or allocate sparse memory from an existing sparse memory region. Operations associated with access to sparse memory regions include detecting whether data for read and write requests are all zeros.
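The read and write paths described above can be approximated in software as follows: writes of all-zero cache lines consume no backing store, and reads of unbacked lines return zeros. The table-based backing store below is a toy stand-in for the memory controller's mechanism.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LINE 64u

/* Toy sparse region: only non-zero cache lines get physical backing,
 * tracked here as a small table of (offset -> data) entries. */
struct sparse_line { uint64_t offset; uint8_t data[LINE]; int used; };
static struct sparse_line backing[16];

static int all_zero(const uint8_t *buf) {
    for (unsigned i = 0; i < LINE; i++) if (buf[i]) return 0;
    return 1;
}

/* Write path: all-zero lines consume no backing store; non-zero lines do. */
static void sparse_write(uint64_t off, const uint8_t *buf) {
    if (all_zero(buf)) return;                       /* nothing to store */
    for (unsigned i = 0; i < 16; i++)
        if (!backing[i].used) {
            backing[i].offset = off;
            backing[i].used = 1;
            memcpy(backing[i].data, buf, LINE);
            return;
        }
}

/* Read path: lines with no backing read as zeros. */
static void sparse_read(uint64_t off, uint8_t *buf) {
    memset(buf, 0, LINE);
    for (unsigned i = 0; i < 16; i++)
        if (backing[i].used && backing[i].offset == off)
            memcpy(buf, backing[i].data, LINE);
}

int main(void) {
    uint8_t zeros[LINE] = {0}, payload[LINE] = {0}, out[LINE];
    payload[0] = 0xAB;
    sparse_write(0,   zeros);    /* consumes no physical memory */
    sparse_write(128, payload);  /* backed by one table entry   */
    sparse_read(0, out);   printf("offset 0   -> 0x%02X\n", out[0]);
    sparse_read(128, out); printf("offset 128 -> 0x%02X\n", out[0]);
    return 0;
}
```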
-
Publication No.: US20220197819A1
Publication Date: 2022-06-23
Application No.: US17691743
Filing Date: 2022-03-10
Applicant: Intel Corporation
Inventor: Karthik KUMAR , Francesc GUIM BERNAT , Thomas WILLHALM , Marcos E. CARRANZA , Cesar Ignacio MARTINEZ SPESSOT
IPC: G06F12/109 , G06F12/14
Abstract: Examples described herein relate to a memory controller to allocate an address range for a process among multiple memory pools based on service level parameters associated with the address range and performance capabilities of the multiple memory pools. In some examples, the service level parameters include one or more of latency, network bandwidth, amount of memory allocation, memory bandwidth, data encryption use, type of encryption to apply to stored data, use of data encryption to transport data to a requester, memory technology, and/or durability of a memory device.
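A minimal sketch of the described allocation decision: the service level parameters for an address range are matched against each pool's capabilities and the first satisfying pool is chosen. The pool descriptors, SLA fields, and figures are assumptions for illustration.

```c
#include <stdio.h>

/* Hypothetical memory pool descriptor and per-range service level request. */
struct mem_pool {
    const char *name;
    unsigned latency_ns;    /* typical access latency  */
    unsigned bandwidth_gbs; /* sustainable bandwidth   */
    int      encrypted;     /* data-at-rest encryption */
};

struct sla {
    unsigned max_latency_ns;
    unsigned min_bandwidth_gbs;
    int      need_encryption;
};

/* Pick the first pool whose capabilities satisfy the range's SLA. */
static const struct mem_pool *select_pool(const struct mem_pool *pools, int n,
                                          const struct sla *req)
{
    for (int i = 0; i < n; i++)
        if (pools[i].latency_ns <= req->max_latency_ns &&
            pools[i].bandwidth_gbs >= req->min_bandwidth_gbs &&
            (!req->need_encryption || pools[i].encrypted))
            return &pools[i];
    return NULL;
}

int main(void)
{
    struct mem_pool pools[] = {
        { "local-DDR",  100, 200, 0 },
        { "CXL-pool-A", 350,  60, 1 },
    };
    struct sla req = { .max_latency_ns = 500, .min_bandwidth_gbs = 50,
                       .need_encryption = 1 };
    const struct mem_pool *p = select_pool(pools, 2, &req);
    printf("allocated from: %s\n", p ? p->name : "no pool satisfies SLA");
    return 0;
}
```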
-
Publication No.: US20190384837A1
Publication Date: 2019-12-19
Application No.: US16012515
Filing Date: 2018-06-19
Applicant: Intel Corporation
Inventor: Karthik KUMAR , Francesc GUIM BERNAT , Thomas WILLHALM , Mark A. SCHMISSEUR , Benjamin GRANIELLO
IPC: G06F17/30 , G06F11/14 , G06F12/0804 , G06F12/02
Abstract: A group of cache lines in cache may be identified as cache lines not to be flushed to persistent memory until all cache line writes for the group of cache lines have been completed.
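The grouping described above can be sketched as a completion counter: cache line writes belonging to the group are tracked, and the flush to persistent memory is deferred until the final write arrives. The structure below is an illustrative stand-in, not the patented mechanism.

```c
#include <stdio.h>

/* Hypothetical tracking structure for a group of cache lines that must be
 * flushed to persistent memory together, only after every line in the
 * group has been written. */
struct line_group {
    unsigned expected_writes;  /* lines that belong to the group */
    unsigned completed_writes; /* lines written so far           */
};

/* Record one completed cache line write; flush the whole group only once
 * the last write has arrived (simulated here by a message). */
static void group_write_done(struct line_group *g)
{
    g->completed_writes++;
    if (g->completed_writes == g->expected_writes)
        printf("group complete: flushing all %u lines to persistent memory\n",
               g->expected_writes);
    else
        printf("write %u/%u done: flush deferred\n",
               g->completed_writes, g->expected_writes);
}

int main(void)
{
    struct line_group g = { .expected_writes = 3, .completed_writes = 0 };
    group_write_done(&g);
    group_write_done(&g);
    group_write_done(&g);
    return 0;
}
```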
-
Publication No.: US20180004687A1
Publication Date: 2018-01-04
Application No.: US15201373
Filing Date: 2016-07-01
Applicant: Intel Corporation
Inventor: Francesc GUIM BERNAT , Karthik KUMAR , Thomas WILLHALM , Narayan RANGANATHAN , Pete D. VOGT
IPC: G06F13/16 , G06F13/42 , G06F13/40 , H04L29/08 , H04L12/803
Abstract: An extension of node architecture and proxy requests enables a node to expose memory computation capability to remote nodes. A remote node can request execution of an operation by a remote memory computation resource, and the remote memory computation resource can execute the request locally and return the results of the computation. The node includes processing resources, a fabric interface, and a memory subsystem including a memory computation resource. The local execution of the request by the memory computation resource can reduce the latency and bandwidth concerns typical of remote requests.
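A minimal sketch of the proxy-request flow described above: a remote node sends an operation descriptor, the node owning the memory executes it against its local memory, and only the result is returned over the fabric. The request format and operations are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical proxy request for a memory-side computation on a remote
 * node: operate on a range of that node's memory and return only the
 * result over the fabric. */
enum mem_op { OP_SUM, OP_MAX };

struct proxy_request {
    enum mem_op op;
    uint64_t    offset;   /* offset into the remote node's memory */
    uint64_t    count;    /* number of 64-bit elements            */
};

/* Executed locally on the node that owns the memory, so only the result,
 * not the data, crosses the fabric. */
static uint64_t mem_compute(const uint64_t *mem, const struct proxy_request *req)
{
    uint64_t acc = 0;
    for (uint64_t i = 0; i < req->count; i++) {
        uint64_t v = mem[req->offset + i];
        acc = (req->op == OP_SUM) ? acc + v : (v > acc ? v : acc);
    }
    return acc;
}

int main(void)
{
    static uint64_t remote_memory[8] = { 3, 1, 4, 1, 5, 9, 2, 6 };
    struct proxy_request req = { OP_SUM, 0, 8 };   /* remote node's request */
    printf("remote result: %llu\n",
           (unsigned long long)mem_compute(remote_memory, &req));
    return 0;
}
```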
-