-
1.
Publication No.: US11789878B2
Publication Date: 2023-10-17
Application No.: US16721706
Filing Date: 2019-12-19
Applicant: Intel Corporation
Inventor: Benjamin Graniello, Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm
CPC classification number: G06F13/1663, G06F3/061, G06F3/067, G06F3/0635, G06F3/0685, G06F9/5016, G06F11/3037, G06F12/0246, G06F13/1678, G06F15/7807
Abstract: Methods, apparatus and systems for adaptive fabric allocation for local and remote emerging memories-based prediction schemes. In conjunction with performing memory transfers between a compute host and memory device connected via one or more interconnect segments, memory read and write traffic is monitored for at least one interconnect segment having reconfigurable upstream lanes and downstream lanes. Predictions of expected read and write bandwidths for the at least one interconnect segment are then made. Based on the expected read and write bandwidths, the upstream lanes and downstream lanes are dynamically reconfigured. The interconnect segments include interconnect links such as Compute Express Link (CXL) flex buses and memory channels for local memory implementations, and fabric links for remote memory implementations. For local memory, management messages may be used to provide telemetry information containing the expected read and write bandwidths. For remote memory, telemetry information is provided to a fabric management component that is used to dynamically reconfigure one or more fabric links.
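The lane-reconfiguration step this abstract describes can be sketched as a simple proportional split. This is an illustrative model only, not code from the patent; the function name `split_lanes` and the proportional policy are assumptions:

```python
def split_lanes(total_lanes: int, read_bw: float, write_bw: float,
                min_lanes: int = 1) -> tuple[int, int]:
    """Partition a link's reconfigurable lanes into upstream (read) and
    downstream (write) lanes in proportion to the predicted bandwidths,
    keeping at least `min_lanes` in each direction."""
    if read_bw + write_bw == 0:
        half = total_lanes // 2
        return half, total_lanes - half  # no prediction: split evenly
    up = round(total_lanes * read_bw / (read_bw + write_bw))
    up = max(min_lanes, min(total_lanes - min_lanes, up))
    return up, total_lanes - up
```

For example, with 16 lanes and predicted read/write bandwidths of 300 and 100 units, `split_lanes(16, 300.0, 100.0)` assigns 12 upstream and 4 downstream lanes.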
-
2.
Publication No.: US20240103914A1
Publication Date: 2024-03-28
Application No.: US17954411
Filing Date: 2022-09-28
Applicant: Intel Corporation
Inventor: Russell J. Fenger, Rajshree A. Chabukswar, Benjamin Graniello, Monica Gupta, Guy M. Therien, Michael W. Chynoweth
IPC: G06F9/48, G06F1/3228
CPC classification number: G06F9/4887, G06F1/3228
Abstract: In one embodiment, a processor includes: a plurality of cores to execute instructions; at least one monitor coupled to the plurality of cores to measure at least one of power information, temperature information, or scalability information; and a control circuit coupled to the at least one monitor. Based at least in part on the at least one of the power information, the temperature information, or the scalability information, the control circuit is to notify an operating system that one or more of the plurality of cores are to transition to a forced idle state in which non-affinitized workloads are prevented from being scheduled. Other embodiments are described and claimed.
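The monitor-and-notify flow in this abstract can be sketched as follows. The selection policy (idle the least-scalable half of the cores when a power or thermal limit is exceeded) is a hypothetical illustration; the patent claims only that cores are transitioned to a forced-idle state based on the monitored information:

```python
def cores_to_force_idle(power_w: float, temp_c: float,
                        power_limit_w: float, temp_limit_c: float,
                        scalability: dict[int, float]) -> list[int]:
    """Pick cores to transition into a forced-idle state.

    `scalability` maps core id -> a 0..1 score of how much the running
    workload benefits from that core. When the package exceeds its power
    or thermal limit, the least-scalable cores are idled first so the OS
    stops scheduling non-affinitized work on them.
    """
    if power_w <= power_limit_w and temp_c <= temp_limit_c:
        return []  # within limits: no forced idling needed
    ranked = sorted(scalability, key=scalability.get)
    return ranked[: len(ranked) // 2]
```

For instance, with four cores scored `{0: 0.9, 1: 0.2, 2: 0.5, 3: 0.8}` and the temperature over its limit, cores 1 and 2 would be reported to the operating system for forced idling.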
-
3.
Publication No.: US10885004B2
Publication Date: 2021-01-05
Application No.: US16012515
Filing Date: 2018-06-19
Applicant: Intel Corporation
Inventor: Karthik Kumar, Francesc Guim Bernat, Thomas Willhalm, Mark A. Schmisseur, Benjamin Graniello
Abstract: A group of cache lines in cache may be identified as cache lines not to be flushed to persistent memory until all cache line writes for the group of cache lines have been completed.
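The grouped-flush behavior can be modeled with a small bookkeeping class. This is a software sketch of a mechanism the patent places in cache hardware; the class and method names are hypothetical:

```python
class FlushGroup:
    """Track a group of cache lines whose flush to persistent memory is
    deferred until every line in the group has been written."""

    def __init__(self, lines: set[int]):
        self.pending = set(lines)   # line addresses still awaiting a write
        self.flushed = False

    def write(self, line: int) -> bool:
        """Record a completed write; flush the group once none remain."""
        self.pending.discard(line)
        if not self.pending:
            self.flushed = True     # all writes complete -> flush the group
        return self.flushed
```

Writing line `0x100` of a two-line group leaves the group unflushed; writing the remaining line `0x140` completes the group and triggers the flush.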
-
4.
Publication No.: US10402330B2
Publication Date: 2019-09-03
Application No.: US15944598
Filing Date: 2018-04-03
Applicant: Intel Corporation
Inventor: Karthik Kumar, Mustafa Hajeer, Thomas Willhalm, Francesc Guim Bernat, Benjamin Graniello
IPC: G06F12/00, G06F12/0831, G06F12/0817
Abstract: Examples include a processor including a coherency mode indicating one of a directory-based cache coherence protocol and a snoop-based cache coherency protocol, and a caching agent to monitor a bandwidth of reading from and/or writing data to a memory coupled to the processor, to set the coherency mode to the snoop-based cache coherency protocol when the bandwidth exceeds a threshold, and to set the coherency mode to the directory-based cache coherency protocol when the bandwidth does not exceed the threshold.
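The mode-selection rule in this abstract reduces to a threshold comparison, sketched below. Units and the function name are illustrative assumptions, not from the patent:

```python
def select_coherency_mode(measured_bw_gbs: float,
                          threshold_gbs: float) -> str:
    """Return the cache coherency mode for the monitored memory bandwidth:
    snoop-based when the bandwidth exceeds the threshold, directory-based
    otherwise, matching the caching-agent behavior in the abstract."""
    return "snoop" if measured_bw_gbs > threshold_gbs else "directory"
```

With a 60 GB/s threshold, 90 GB/s of traffic selects snoop-based coherency and 30 GB/s selects directory-based coherency.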
-
5.
Publication No.: US20240086341A1
Publication Date: 2024-03-14
Application No.: US18371513
Filing Date: 2023-09-22
Applicant: Intel Corporation
Inventor: Benjamin Graniello, Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm
CPC classification number: G06F13/1663, G06F3/061, G06F3/0635, G06F3/067, G06F3/0685, G06F9/5016, G06F11/3037, G06F12/0246, G06F13/1678, G06F15/7807
Abstract: Methods, apparatus and systems for adaptive fabric allocation for local and remote emerging memories-based prediction schemes. In conjunction with performing memory transfers between a compute host and memory device connected via one or more interconnect segments, memory read and write traffic is monitored for at least one interconnect segment having reconfigurable upstream lanes and downstream lanes. Predictions of expected read and write bandwidths for the at least one interconnect segment are then made. Based on the expected read and write bandwidths, the upstream lanes and downstream lanes are dynamically reconfigured. The interconnect segments include interconnect links such as Compute Express Link (CXL) flex buses and memory channels for local memory implementations, and fabric links for remote memory implementations. For local memory, management messages may be used to provide telemetry information containing the expected read and write bandwidths. For remote memory, telemetry information is provided to a fabric management component that is used to dynamically reconfigure one or more fabric links.
-
6.
Publication No.: US11451435B2
Publication Date: 2022-09-20
Application No.: US16367626
Filing Date: 2019-03-28
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat, Karthik Kumar, Benjamin Graniello, Timothy Verrall, Andrew J. Herdrich, Rashmin Patel, Monica Kenguva, Brinda Ganesh, Alexander Vul, Ned M. Smith, Suraj Prabhakaran
IPC: H04L41/0803, H04L41/5041
Abstract: Technologies for providing multi-tenant support in edge resources using edge channels include a device that includes circuitry to obtain a message associated with a service provided at the edge of a network. Additionally, the circuitry is to identify an edge channel based on metadata associated with the message. The edge channel has a predefined amount of resource capacity allocated to the edge channel to process the message. Further, the circuitry is to determine the predefined amount of resource capacity allocated to the edge channel and process the message using the allocated resource capacity for the identified edge channel.
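The channel-lookup flow in this abstract can be sketched as a metadata-keyed table of preallocated capacities. The channel names, the metadata key, and the capacity units are hypothetical; the patent does not specify a metadata format:

```python
from dataclasses import dataclass


@dataclass
class EdgeChannel:
    capacity: int  # predefined resource capacity for this channel (e.g. MB/s)


# Per-tenant edge channels with preallocated capacity (illustrative values).
CHANNELS = {"tenant-a": EdgeChannel(100), "tenant-b": EdgeChannel(25)}


def process(message: dict) -> int:
    """Identify the edge channel from the message metadata and return the
    resource capacity allocated for processing the message."""
    channel = CHANNELS[message["metadata"]["channel_id"]]
    return channel.capacity
```

A message tagged for `tenant-a` would be processed with that channel's 100 units of capacity, while one tagged for `tenant-b` gets 25, keeping tenants within their predefined allocations.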
-
7.
Publication No.: US11188264B2
Publication Date: 2021-11-30
Application No.: US16780632
Filing Date: 2020-02-03
Applicant: Intel Corporation
Inventor: Shekoufeh Qawami, Philip Hillier, Benjamin Graniello, Rajesh Sundaram
IPC: G06F3/06
Abstract: A memory system includes a nonvolatile (NV) memory device with asymmetry between intrinsic read operation delay and intrinsic write operation delay. The system can select to perform memory access operations with the NV memory device with the asymmetry, in which case write operations have a lower delay than read operations. The system can alternatively select to perform memory access operations with the NV memory device using a configured write operation delay that matches the read operation delay.
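The two delay modes in this abstract can be sketched as a selector over (read, write) delay pairs. Function name and nanosecond values are illustrative assumptions:

```python
def access_delays(read_ns: float, write_ns: float,
                  match_writes_to_reads: bool) -> tuple[float, float]:
    """Return (read_delay, write_delay) for an NV device whose intrinsic
    write delay is lower than its read delay. In matched mode the write
    delay is padded up to the read delay; otherwise the intrinsic
    asymmetry is exposed to the memory controller."""
    assert write_ns <= read_ns, "abstract assumes writes are intrinsically faster"
    return (read_ns, read_ns) if match_writes_to_reads else (read_ns, write_ns)
```

For a device with a 120 ns read and 60 ns intrinsic write, asymmetric mode yields delays of (120, 60) ns, while matched mode yields (120, 120) ns.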
-
8.
Publication No.: US10599579B2
Publication Date: 2020-03-24
Application No.: US16017872
Filing Date: 2018-06-25
Applicant: Intel Corporation
Inventor: Karthik Kumar, Francesc Guim Bernat, Benjamin Graniello, Thomas Willhalm, Mustafa Hajeer
IPC: G06F12/00, G06F12/0895, G06F12/0846, G06F12/0862
Abstract: Cache on a persistent memory module is dynamically allocated as a prefetch cache or a write back cache to prioritize read and write operations to a persistent memory on the persistent memory module based on monitoring read/write accesses and/or user-selected allocation.
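The monitored-access allocation policy can be sketched as a proportional split of the module's cache between a prefetch region and a write-back region. The cache-way granularity and the function name are assumptions for illustration:

```python
def allocate_cache(total_ways: int, reads: int, writes: int) -> dict[str, int]:
    """Split a persistent-memory module's cache into a prefetch region
    (serving reads) and a write-back region (serving writes) in
    proportion to the monitored access counts, keeping at least one way
    in each region."""
    total = reads + writes or 1  # avoid division by zero with no accesses
    prefetch = max(1, min(total_ways - 1, round(total_ways * reads / total)))
    return {"prefetch": prefetch, "write_back": total_ways - prefetch}
```

With an 8-way cache and a monitored mix of 3000 reads to 1000 writes, `allocate_cache(8, 3000, 1000)` dedicates 6 ways to prefetch and 2 to write-back; a user-selected allocation could override this by calling with fixed weights instead of monitored counts.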
-
9.
Publication No.: US20200076682A1
Publication Date: 2020-03-05
Application No.: US16367626
Filing Date: 2019-03-28
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat, Karthik Kumar, Benjamin Graniello, Timothy Verrall, Andrew J. Herdrich, Rashmin Patel, Monica Kenguva, Brinda Ganesh, Alexander Vul, Ned M. Smith, Suraj Prabhakaran
IPC: H04L12/24
Abstract: Technologies for providing multi-tenant support in edge resources using edge channels include a device that includes circuitry to obtain a message associated with a service provided at the edge of a network. Additionally, the circuitry is to identify an edge channel based on metadata associated with the message. The edge channel has a predefined amount of resource capacity allocated to the edge channel to process the message. Further, the circuitry is to determine the predefined amount of resource capacity allocated to the edge channel and process the message using the allocated resource capacity for the identified edge channel.