Adaptive CPU NUMA Scheduling
    1.
    Invention Application
    Adaptive CPU NUMA Scheduling (Pending - Published)

    Publication Number: US20160085571A1

    Publication Date: 2016-03-24

    Application Number: US14492051

    Application Date: 2014-09-21

    Applicant: VMware, Inc.

    Abstract: Examples perform selection of non-uniform memory access (NUMA) nodes for mapping virtual central processing unit (vCPU) operations to physical processors. A CPU scheduler evaluates the latency between each candidate processor and the memory associated with the vCPU, as well as the size of the working set of that memory, and selects an optimal processor for executing the vCPU based on the expected memory access latency and the characteristics of the vCPU and the processors. Some examples contemplate monitoring system characteristics and rescheduling vCPUs when other placements may provide improved performance and/or efficiency.

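    As a reading aid only, the placement decision described in this abstract can be sketched as a simple cost model: score each candidate NUMA node by the working-set-weighted memory access latency the vCPU would see from that node, and pick the cheapest. Everything below (class names, the latency table, the scoring function) is an illustrative assumption, not the patented implementation.

```python
# Illustrative sketch of NUMA-aware vCPU placement; all names are hypothetical.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class NumaNode:
    node_id: int
    latency_to: Dict[int, float]       # estimated access latency (ns) to each node's memory

@dataclass
class VCpu:
    vcpu_id: int
    working_set_pages: Dict[int, int]  # pages of the vCPU's working set, per home node

def expected_latency(node: NumaNode, vcpu: VCpu) -> float:
    """Working-set-weighted average memory latency if the vCPU runs on `node`."""
    total = sum(vcpu.working_set_pages.values())
    if total == 0:
        return 0.0
    return sum(node.latency_to[mem_node] * pages
               for mem_node, pages in vcpu.working_set_pages.items()) / total

def select_node(nodes: List[NumaNode], vcpu: VCpu) -> NumaNode:
    """Pick the candidate node with the lowest expected memory access latency."""
    return min(nodes, key=lambda n: expected_latency(n, vcpu))

# Example: most of the working set lives on node 1, so node 1 wins.
nodes = [NumaNode(0, {0: 80.0, 1: 140.0}), NumaNode(1, {0: 140.0, 1: 80.0})]
vcpu = VCpu(7, working_set_pages={0: 1000, 1: 4000})
assert select_node(nodes, vcpu).node_id == 1
```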

    ADAPTIVE CPU NUMA SCHEDULING
    3.
    Invention Application

    Publication Number: US20190205155A1

    Publication Date: 2019-07-04

    Application Number: US16292502

    Application Date: 2019-03-05

    Applicant: VMware, Inc.

    Abstract: Systems and methods are provided for selecting non-uniform memory access (NUMA) nodes for mapping virtual central processing unit (vCPU) operations to physical processors. A CPU scheduler evaluates the latency between each candidate processor and the memory associated with the vCPU, as well as the size of the working set of that memory, and selects an optimal processor for executing the vCPU based on the expected memory access latency and the characteristics of the vCPU and the processors. The systems and methods further provide for monitoring system characteristics and rescheduling the vCPUs when other placements provide improved performance and efficiency.
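    The monitoring-and-rescheduling portion of this abstract can likewise be read as a periodic rebalancing loop: re-score each vCPU's current placement and migrate only when another node is better by some margin, to avoid thrashing. The sketch below reuses the hypothetical expected_latency/select_node helpers from the earlier snippet; the margin, the interval, and the migrate callback are invented for illustration.

```python
import time

MIGRATION_MARGIN = 0.9   # hypothetical: migrate only if the new node is >10% better
CHECK_INTERVAL_S = 2.0   # hypothetical monitoring period

def rebalance_loop(nodes, vcpus, placement, migrate):
    """Periodically re-evaluate placements and move vCPUs to clearly better nodes.

    `placement` maps vcpu_id -> NumaNode; `migrate(vcpu, node)` stands in for
    whatever mechanism actually moves the vCPU and its memory.
    """
    while True:
        for vcpu in vcpus:
            current = placement[vcpu.vcpu_id]
            best = select_node(nodes, vcpu)
            if (expected_latency(best, vcpu)
                    < MIGRATION_MARGIN * expected_latency(current, vcpu)):
                migrate(vcpu, best)
                placement[vcpu.vcpu_id] = best
        time.sleep(CHECK_INTERVAL_S)
```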

    Adaptive CPU NUMA scheduling
    5.
    Invention Grant

    Publication Number: US10776151B2

    Publication Date: 2020-09-15

    Application Number: US16292502

    Application Date: 2019-03-05

    Applicant: VMware, Inc.

    Abstract: Systems and methods are provided for selecting non-uniform memory access (NUMA) nodes for mapping virtual central processing unit (vCPU) operations to physical processors. A CPU scheduler evaluates the latency between each candidate processor and the memory associated with the vCPU, as well as the size of the working set of that memory, and selects an optimal processor for executing the vCPU based on the expected memory access latency and the characteristics of the vCPU and the processors. The systems and methods further provide for monitoring system characteristics and rescheduling the vCPUs when other placements provide improved performance and efficiency.

    Efficient online construction of miss rate curves
    7.
    Invention Grant
    Efficient online construction of miss rate curves (In Force)

    Publication Number: US09223722B2

    Publication Date: 2015-12-29

    Application Number: US14196100

    Application Date: 2014-03-04

    Applicant: VMware, Inc.

    Abstract: Miss rate curves are constructed in a resource-efficient manner so that they can be built, and memory management decisions made, while the workloads are running. The resource-efficient technique includes the steps of selecting a subset of memory pages for the workload, maintaining a least recently used (LRU) data structure for the selected memory pages, detecting accesses to the selected memory pages and updating the LRU data structure in response to the detected accesses, and generating data for constructing a miss rate curve for the workload using the LRU data structure. After a memory page is accessed, the memory page may be left untraced for a period of time, after which it is retraced.

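    The sequence of steps in this abstract (sample a subset of pages, keep them in an LRU structure, record how deep in the structure each re-accessed page sits, and derive a miss rate curve from the resulting histogram) can be sketched roughly as follows. The sampling scheme, the reuse-distance histogram, and all names are assumptions made for illustration, and the periodic untrace/retrace of pages mentioned above is omitted for brevity.

```python
from collections import OrderedDict

class SampledMrcTracker:
    """Rough sketch: track reuse distances for a sampled subset of pages and
    turn the resulting histogram into a miss rate curve (illustrative only)."""

    def __init__(self, sample_pct=5):
        self.sample_pct = sample_pct  # trace roughly this percentage of pages
        self.lru = OrderedDict()      # sampled pages, least recent first, most recent last
        self.histogram = {}           # reuse distance -> access count
        self.cold_misses = 0

    def _sampled(self, page):
        # Deterministic per-page sampling decision (hash-based, illustrative).
        return hash(page) % 100 < self.sample_pct

    def on_access(self, page):
        if not self._sampled(page):
            return
        if page in self.lru:
            # Reuse distance = number of distinct sampled pages touched since
            # the last access to this page (its depth from the MRU end).
            depth = list(reversed(self.lru)).index(page)
            self.histogram[depth] = self.histogram.get(depth, 0) + 1
            del self.lru[page]
        else:
            self.cold_misses += 1
        self.lru[page] = True         # insert or move to the MRU position

    def miss_rate_curve(self):
        """Estimated miss ratio as a function of cache size (in sampled pages)."""
        total = sum(self.histogram.values()) + self.cold_misses
        curve, hits = {}, 0
        for size in range(len(self.lru) + 1):
            if size:
                hits += self.histogram.get(size - 1, 0)
            curve[size] = 1.0 - hits / total if total else 0.0
        return curve
```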

    Cache performance prediction and scheduling on commodity processors with shared caches
    10.
    Invention Grant
    Cache performance prediction and scheduling on commodity processors with shared caches (In Force)

    Publication Number: US09430287B2

    Publication Date: 2016-08-30

    Application Number: US14657970

    Application Date: 2015-03-13

    Applicant: VMware, Inc.

    Inventor: Puneet Zaroo

    Abstract: A method includes assigning a thread performance counter to threads being created in the computing environment, the thread performance counter measuring a number of cache misses for a corresponding thread. The method also includes calculating a self-thread value S as a change in the thread performance counter of a given thread during a predetermined period, calculating an other-thread value O as a sum of changes in all the thread performance counters during the predetermined period minus S, and calculating an estimation adjustment value associated with a first probability that a second set of cache misses for the corresponding thread replaces a cache area currently occupied by the corresponding thread. The method also includes estimating a cache occupancy for the thread based on a previous occupancy for the thread, S, O, and the estimation adjustment value, and assigning computing environment resources to the thread based on the estimated cache occupancy.

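    A back-of-the-envelope version of the occupancy estimate described above: per-thread miss counters give a self-miss delta S and an other-miss delta O for each interval, and the thread's estimated occupancy is updated from its previous value, S, O, and an adjustment for the chance that the thread's own misses evict its own lines. The linear update rule below is a common approximation used here purely for illustration; it is not claimed to be the exact formula in the patent, and the cache size and names are assumptions.

```python
def update_occupancy(prev_occupancy: float,
                     self_misses: float,    # S: this thread's miss-count delta
                     other_misses: float,   # O: all other threads' miss-count delta
                     cache_lines: float) -> float:
    """Illustrative linear occupancy model (not the exact patented formula).

    Each of the O other-thread misses evicts one of this thread's lines with
    probability prev_occupancy / cache_lines, while each of the S self misses
    adds a line only when it does not replace one of the thread's own lines
    (the 'estimation adjustment' idea), with that same probability.
    """
    frac = prev_occupancy / cache_lines
    gained = self_misses * (1.0 - frac)   # self misses that land on others' lines
    lost = other_misses * frac            # other threads' misses that evict ours
    occupancy = prev_occupancy + gained - lost
    return max(0.0, min(cache_lines, occupancy))

# Example: a 32768-line cache, thread currently estimated to hold 8192 lines.
est = update_occupancy(prev_occupancy=8192, self_misses=500,
                       other_misses=1500, cache_lines=32768)
```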
