-
Publication number: MY185857A
Publication date: 2021-06-14
Application number: MYPI2017701834
Application date: 2015-10-12
Applicant: INTEL CORP
Inventor: HAIDER NAZAR S , MULLA DEAN , CHU ALLEN W
Abstract: Systems, methods, and devices are disclosed for mitigating voltage droop in a computing device. An example apparatus includes a plurality of threshold registers (328, 330, 332) to store respective voltage droop thresholds, and an interface to receive a license grant message (124; 326) indicating a license mode for a processor core or domain (314). The license mode corresponds to a selected set of execution units (116, 118, 129) in the processor core or domain (314). The apparatus also includes a voltage droop correction module (126) to, based on the license mode indicated in the license grant message (124; 326), select one of the voltage droop thresholds from the plurality of voltage droop registers, and compare a voltage droop in the processor core or domain (314) with the selected voltage droop threshold. Based on the comparison, the apparatus triggers a voltage droop correction process.
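To make the selection flow in this abstract concrete, the minimal Python sketch below picks a droop threshold based on a granted license mode and compares the measured droop against it. All class and method names, the specific license modes, and the millivolt values are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of the droop-threshold selection described in the abstract.
# All names (LicenseMode, DroopCorrectionModule, etc.) are illustrative.
from enum import Enum


class LicenseMode(Enum):
    """License modes, each tied to a set of execution units (e.g. scalar vs. wide vector)."""
    BASE = 0
    WIDE_VECTOR = 1
    WIDEST_VECTOR = 2


class DroopCorrectionModule:
    def __init__(self, thresholds_mv):
        # One threshold value per license mode (the "plurality of threshold registers").
        self.thresholds_mv = dict(thresholds_mv)
        self.active_mode = LicenseMode.BASE

    def on_license_grant(self, mode: LicenseMode):
        """Interface receiving a license grant message for a core or domain."""
        self.active_mode = mode

    def check_droop(self, measured_droop_mv: float) -> bool:
        """Compare measured droop against the threshold selected by the license mode.

        Returns True when a droop-correction process should be triggered.
        """
        selected = self.thresholds_mv[self.active_mode]
        return measured_droop_mv > selected


# Example: wider vector licenses tolerate less droop before correction kicks in.
vdc = DroopCorrectionModule({
    LicenseMode.BASE: 80.0,
    LicenseMode.WIDE_VECTOR: 60.0,
    LicenseMode.WIDEST_VECTOR: 40.0,
})
vdc.on_license_grant(LicenseMode.WIDEST_VECTOR)
if vdc.check_droop(measured_droop_mv=55.0):
    print("trigger droop correction")
```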
-
Publication number: WO2007016395A2
Publication date: 2007-02-08
Application number: PCT/US2006029529
Application date: 2006-07-27
Applicant: INTEL CORP , MULLA DEAN , KHANNA RAHUL , PFLEDERER KEITH
Inventor: MULLA DEAN , KHANNA RAHUL , PFLEDERER KEITH
Abstract: Embodiments of the invention are generally directed to apparatuses, methods, and systems for a computing system feature activation mechanism. In an embodiment, a computing system receives remotely generated feature activation information. The computing system compares the remotely generated feature activation information with a built-in feature activation mechanism. In an embodiment, a feature of the computing system is activated if the remotely generated feature activation information matches the built-in feature activation mechanism. Other embodiments are described and claimed.
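As a rough illustration of the activation flow described in the abstract, the sketch below compares a remotely generated activation code against a code derived by a built-in mechanism and enables the feature only on a match. The HMAC-based comparison and every name here are assumptions chosen for illustration; the abstract does not specify the comparison scheme.

```python
# Illustrative sketch: a remotely generated activation code is compared against
# a value derived by a built-in mechanism. The HMAC scheme and all names are
# assumptions for illustration only.
import hmac
import hashlib


class FeatureActivation:
    def __init__(self, built_in_secret: bytes):
        self.built_in_secret = built_in_secret
        self.enabled_features = set()

    def _built_in_code(self, feature_id: str) -> bytes:
        # Built-in mechanism: derive the expected activation code on the platform itself.
        return hmac.new(self.built_in_secret, feature_id.encode(), hashlib.sha256).digest()

    def activate(self, feature_id: str, remote_code: bytes) -> bool:
        """Activate the feature only if the remote code matches the built-in result."""
        if hmac.compare_digest(remote_code, self._built_in_code(feature_id)):
            self.enabled_features.add(feature_id)
            return True
        return False


# Usage: a remote management console would generate remote_code with the shared secret.
platform = FeatureActivation(built_in_secret=b"fused-platform-secret")
remote_code = hmac.new(b"fused-platform-secret", b"extra-cores", hashlib.sha256).digest()
assert platform.activate("extra-cores", remote_code)
```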
-
Publication number: DE112006002072T5
Publication date: 2008-06-12
Application number: DE112006002072
Application date: 2006-07-27
Applicant: INTEL CORP
Inventor: MULLA DEAN , KHANNA RAHUL , PFLEDERER KEITH
Abstract: Embodiments of the invention are generally directed to apparatuses, methods, and systems for a computing system feature activation mechanism. In an embodiment, a computing system receives remotely generated feature activation information. The computing system compares the remotely generated feature activation information with a built-in feature activation mechanism. In an embodiment, a feature of the computing system is activated if the remotely generated feature activation information matches the built-in feature activation mechanism. Other embodiments are described and claimed.
-
Publication number: DE102021124514A1
Publication date: 2022-05-25
Application number: DE102021124514
Application date: 2021-09-22
Applicant: INTEL CORP
Inventor: MULLA DEAN , KAM TIMOTHY , CHEMUDUPATI SURESH , DEHAEMER ERIC , SISTLA KRISHNAKANTH , DAS RIPAN , BANSAL YOGESH , HERDRICH ANDREW , VARMA ANKUSH , SHAPIRA DORIT , HAIDER NAZAR , GUPTA NIKHIL , SAMPATH PAVITHRA , KANDULA PHANI KUMAR , PARIKH RUPAL , VENUGOPAL SHRUTHI , WANG STEPHEN , HAAKE STEPHEN , SEWANI AMAN , PURANDARE ADWAIT , GUPTA UJJWAL , BALIGAR NIKETHAN SHIVANAND , GARG VIVEK , TULANOWSKI MICHAEL , PALIT NILANJAN , CHEN STANLEY
IPC: G06F1/32
Abstract: A hierarchical power management (HPM) architecture accounts for the scaling limits of a power management controller as well as the autonomy of each die, and presents a unified view of the package to a platform. At its simplest level, the HPM architecture comprises a supervisor and one or more supervisee power management units (PMUs) that communicate over at least two different communication fabrics. Each PMU can behave as a supervisor for a number of supervisee PMUs in a particular domain. HPM addresses these needs for products that comprise a collection of dies with varying power and thermal management capabilities and needs. HPM serves as a unified mechanism that can span a collection of dies of varying capability and function, which together form a traditional system-on-chip (SoC). HPM provides a basis for managing power and temperature across a set of disparate dies.
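The supervisor/supervisee relationship can be pictured with the short Python sketch below, in which a supervisor PMU aggregates telemetry from supervisee PMUs into a unified package view and distributes a power budget. The class names, the even-split budget policy, and the telemetry fields are assumptions for illustration; the abstract only states that the PMUs communicate over at least two fabrics.

```python
# Sketch of a supervisor PMU managing supervisee PMUs across dies.
# Names and the budget policy are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class SuperviseePMU:
    die_name: str
    power_w: float = 0.0
    temp_c: float = 0.0

    def report_telemetry(self) -> dict:
        # In hardware this would travel over a telemetry fabric.
        return {"die": self.die_name, "power_w": self.power_w, "temp_c": self.temp_c}

    def apply_power_limit(self, limit_w: float) -> None:
        # In hardware this would arrive over a separate control fabric.
        self.power_w = min(self.power_w, limit_w)


@dataclass
class SupervisorPMU:
    package_power_limit_w: float
    supervisees: List[SuperviseePMU] = field(default_factory=list)

    def package_view(self) -> dict:
        """Unified view of the package: aggregate telemetry from every die."""
        reports = [s.report_telemetry() for s in self.supervisees]
        return {
            "total_power_w": sum(r["power_w"] for r in reports),
            "max_temp_c": max(r["temp_c"] for r in reports),
            "dies": reports,
        }

    def rebalance(self) -> None:
        """Split the package budget evenly across dies (a deliberately simple policy)."""
        per_die = self.package_power_limit_w / len(self.supervisees)
        for s in self.supervisees:
            s.apply_power_limit(per_die)


pkg = SupervisorPMU(package_power_limit_w=120.0, supervisees=[
    SuperviseePMU("compute-die-0", power_w=70.0, temp_c=82.0),
    SuperviseePMU("io-die", power_w=25.0, temp_c=60.0),
])
pkg.rebalance()
print(pkg.package_view())
```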
-
Publication number: DE19983687B4
Publication date: 2008-09-11
Application number: DE19983687
Application date: 1999-10-18
Applicant: INTEL CORP
Inventor: FU JOHN , MULLA DEAN , MATHEWS GREGORY S , SAILER STUART E , SHAW JENG-JYE
Abstract: A method is provided for requesting data from a memory. The method includes issuing a plurality of data requests to a data request port for the memory. The plurality of data requests includes at least two ordered data requests. The method includes determining if an earlier one of the ordered data requests corresponds to a miss in the memory, and converting a later one of the ordered data requests to a prefetch in response to the earlier one of the ordered data requests corresponding to a miss in the memory. An apparatus includes a memory having at least one pipelined port for receiving data requests. The port is adapted to determine whether an earlier ordered one of the data requests corresponds to a miss in the memory. The port converts a later ordered one of the data requests to a prefetch in response to determining that the earlier ordered one of the data requests corresponds to a miss in the memory.
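A compact Python sketch of the port behavior described in this abstract follows: requests are processed in order, and once an earlier ordered request misses, a later ordered request is demoted to a prefetch that fills the memory without returning data. The cache model and all identifiers are illustrative assumptions, not taken from the patent.

```python
# Sketch: convert a later ordered request to a prefetch when an earlier
# ordered request misses. The cache/request model is an assumption.
from dataclasses import dataclass


@dataclass
class DataRequest:
    address: int
    ordered: bool          # part of an ordered request sequence
    is_prefetch: bool = False


class PipelinedPort:
    def __init__(self, cached_addresses):
        self.cache = set(cached_addresses)

    def issue(self, requests):
        """Process requests in order, demoting later ordered ones after a miss."""
        earlier_ordered_missed = False
        results = []
        for req in requests:
            hit = req.address in self.cache
            if req.ordered and earlier_ordered_missed:
                # Convert the later ordered request to a prefetch: fill the
                # memory but do not return data to the pipeline.
                req.is_prefetch = True
                self.cache.add(req.address)
            elif req.ordered and not hit:
                earlier_ordered_missed = True
            results.append((req.address, "hit" if hit else "miss", req.is_prefetch))
        return results


port = PipelinedPort(cached_addresses={0x100})
print(port.issue([DataRequest(0x200, ordered=True),    # earlier ordered request: miss
                  DataRequest(0x300, ordered=True)]))  # later ordered request -> prefetch
```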
-
Publication number: DE19983687T1
Publication date: 2001-11-22
Application number: DE19983687
Application date: 1999-10-18
Applicant: INTEL CORP
Inventor: FU JOHN , MULLA DEAN , MATHEWS GREGORY S , SAILER STUART E , SHAW JENG-JYE
Abstract: A method is provided for requesting data from a memory. The method includes issuing a plurality of data requests to a data request port for the memory. The plurality of data requests includes at least two ordered data requests. The method includes determining if an earlier one of the ordered data requests corresponds to a miss in the memory, and converting a later one of the ordered data requests to a prefetch in response to the earlier one of the ordered data requests corresponding to a miss in the memory. An apparatus includes a memory having at least one pipelined port for receiving data requests. The port is adapted to determine whether an earlier ordered one of the data requests corresponds to a miss in the memory. The port converts a later ordered one of the data requests to a prefetch in response to determining that the earlier ordered one of the data requests corresponds to a miss in the memory.