Auto-Adaptive Serverless Function Management

    Publication No.: US20190065238A1

    Publication Date: 2019-02-28

    Application No.: US16054541

    Filing Date: 2018-08-03

    Abstract: A method implemented by a cloud computing device comprises removing, by the cloud computing device, data associated with a function from an execution data structure in response to determining that the function is waiting for an input event; adding, by the cloud computing device, a context associated with the function to a management data structure while the function is waiting for the input event, the context comprising software components associated with the function and an intermediate variable associated with the function; executing, by the cloud computing device, the function with the input event in response to receiving the input event; and removing, by the cloud computing device, the context associated with the function from the management data structure in response to receiving the input event.
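    The suspend/resume flow described in the abstract can be sketched as follows. This is an illustrative model only; all class, method, and key names (`FunctionManager`, `suspend`, `resume`, `intermediate`) are assumptions, not taken from the patent.

```python
class FunctionManager:
    """Illustrative model: runnable functions live in an execution data
    structure; contexts of functions waiting on input events are parked
    in a management data structure (names are assumptions)."""

    def __init__(self):
        self.execution = {}   # function id -> runnable entry
        self.management = {}  # function id -> saved context while waiting

    def suspend(self, fn_id, context):
        """Function is waiting for an input event: remove it from the
        execution structure and park its context (software components
        plus intermediate variables)."""
        self.execution.pop(fn_id, None)
        self.management[fn_id] = context

    def resume(self, fn_id, input_event):
        """Input event received: remove the context from the management
        structure and execute the function with the event."""
        context = self.management.pop(fn_id)
        return context["fn"](input_event, **context["intermediate"])


mgr = FunctionManager()
mgr.suspend("f1", {"fn": lambda evt, partial: partial + evt,
                   "intermediate": {"partial": 10}})
print(mgr.resume("f1", 5))  # 15
```

    In this sketch the execution data structure holds only runnable functions, so a function blocked on an input event consumes no execution slot while its context waits in the management data structure.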

    Guided Optimistic Resource Scheduling

    Publication No.: US20180316626A1

    Publication Date: 2018-11-01

    Application No.: US15960991

    Filing Date: 2018-04-24

    Abstract: A system for resource management is disclosed. The system includes a node-local resource management layer employed to generate node-local guidance information based on coarse-grained information and application usage characteristics. A central cluster resource management layer is configured to generate per-framework resource guidance filter information based on the node-local guidance information. An application layer, including a plurality of frameworks, is configured to employ the per-framework resource guidance filter information to generate resource guidance filters. The resource guidance filters guide resource requests to the central cluster resource management layer and allow the application layer to receive resources from the node-local resource management layer in response to those requests.
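    A minimal sketch of how guidance might flow through the three layers described in the abstract; the function names, resource names, and the equal-share filter policy are all illustrative assumptions, not details from the patent.

```python
# Node-local layer: combine coarse-grained capacity with observed usage.
def node_local_guidance(coarse_info, usage):
    return {res: min(avail, usage.get(res, avail))
            for res, avail in coarse_info.items()}

# Central cluster layer: derive a per-framework resource guidance filter
# (here, an assumed equal split of the guided capacity).
def per_framework_filters(node_guidance, frameworks):
    share = {res: cap // len(frameworks) for res, cap in node_guidance.items()}
    return {fw: dict(share) for fw in frameworks}

# Application layer: the filter guides (clamps) a framework's request
# before it goes to the central cluster resource management layer.
def request_resources(framework, wanted, filters):
    limit = filters[framework]
    return {res: min(amount, limit.get(res, 0))
            for res, amount in wanted.items()}


guidance = node_local_guidance({"cpu": 16, "mem": 64}, {"cpu": 12})
filters = per_framework_filters(guidance, ["spark", "flink"])
print(request_resources("spark", {"cpu": 10, "mem": 40}, filters))
```

    The point of the sketch is the direction of information flow: guidance moves up from the node layer, filters move down to the frameworks, and requests are shaped before they reach the cluster scheduler.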

    Content-aware energy savings for video streaming and playback on mobile devices

    Publication No.: US10117185B1

    Publication Date: 2018-10-30

    Application No.: US15423490

    Filing Date: 2017-02-02

    Abstract: A system, computer readable medium, and method are provided for reducing the power consumption of a mobile device. The method includes the steps of detecting video content to be viewed in an application executed by the mobile device; detecting unwanted content associated with the video content; and operating the mobile device in a low-power mode during playback of the video content in the application in response to detecting the unwanted content. The mobile device may include a memory storing the application and a processor executing the application, which configures the processor to implement the method. Five techniques may be applied in the low-power mode to reduce power consumption: Dynamic Voltage and Frequency Scaling (DVFS), reducing the resolution of content, reducing the brightness of the display, masking content, and thread throttling. The low-power mode saves energy when playing back videos on the mobile device.
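    The decision flow in the abstract reduces to a simple gate: enter low-power mode only when both video playback and unwanted content are detected. The sketch below stubs out detection entirely; the action names and function signature are illustrative assumptions.

```python
# The five power-saving techniques named in the abstract.
LOW_POWER_ACTIONS = [
    "dvfs_scale_down",      # Dynamic Voltage and Frequency Scaling
    "reduce_resolution",
    "reduce_brightness",
    "mask_content",
    "throttle_threads",
]

def playback_mode(is_video, has_unwanted_content):
    """Operate in low-power mode only when video playback is active and
    unwanted content was detected; otherwise stay in normal mode."""
    if is_video and has_unwanted_content:
        return ("low_power", LOW_POWER_ACTIONS)
    return ("normal", [])


print(playback_mode(True, True))   # low-power mode, all five actions
print(playback_mode(True, False))  # normal mode, no actions
```

    Real detection of "unwanted" content (e.g. overlays or ads the user is not watching) would replace the boolean inputs; the gate itself stays the same.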

    CORE LOAD KNOWLEDGE FOR ELASTIC LOAD BALANCING OF THREADS
    Status: Pending (published)

    Publication No.: US20170039093A1

    Publication Date: 2017-02-09

    Application No.: US14818253

    Filing Date: 2015-08-04

    CPC classification number: G06F9/5083 G06F9/5066 G06F2209/5018

    Abstract: A method of balancing load on multiple cores includes maintaining multiple bitmaps in a global memory location. Each bitmap indicates the loads of the threads included in a thread domain, and multiple threads are associated with each core. Each core maintains and updates its respective bitmap based on the loads of its threads. The bitmaps are maintained in the global memory location, which is accessible by multiple thread domains configured to execute threads using the cores. Execution of the multiple thread domains is balanced across the cores based on the load of each thread described in each bitmap.

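    One possible encoding of the shared bitmaps, as a sketch: each core sets one bit per heavily loaded thread in a globally visible table, and a balancer picks the core whose bitmap has the fewest set bits. The bit encoding, the load threshold, and all names are assumptions for illustration, not taken from the claims.

```python
# Global memory location shared by all thread domains:
# core id -> int used as a bitmap over that core's thread slots.
global_bitmaps = {}

def update_bitmap(core, thread_loads, threshold=0.8):
    """Core-side update: set bit i when thread i's load exceeds the
    (assumed) threshold, then publish the bitmap globally."""
    bits = 0
    for i, load in enumerate(thread_loads):
        if load > threshold:
            bits |= 1 << i
    global_bitmaps[core] = bits

def least_loaded_core():
    """Balancer side: choose the core whose bitmap has the fewest set
    bits, i.e. the fewest heavily loaded threads."""
    return min(global_bitmaps, key=lambda c: bin(global_bitmaps[c]).count("1"))


update_bitmap(0, [0.9, 0.95, 0.2, 0.99])  # three loaded threads on core 0
update_bitmap(1, [0.1, 0.85, 0.3, 0.4])   # one loaded thread on core 1
print(least_loaded_core())  # 1
```

    Packing per-thread load into a single integer keeps the shared state small enough that readers in other thread domains can scan all cores cheaply.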

    APPARATUS, METHOD, AND COMPUTER PROGRAM FOR UTILIZING SECONDARY THREADS TO ASSIST PRIMARY THREADS IN PERFORMING APPLICATION TASKS
    Status: Pending (published)

    Publication No.: US20170031724A1

    Publication Date: 2017-02-02

    Application No.: US14815875

    Filing Date: 2015-07-31

    CPC classification number: G06F9/505 G06F2209/5018 G06F2209/509

    Abstract: An apparatus, method, and computer program product are provided for utilizing secondary threads to assist primary threads in performing application tasks. In use, a plurality of primary threads are utilized for performing at least one of a plurality of tasks of an application utilizing at least one corresponding core. Further, it is determined whether the primary threads require assistance in performing one or more of the plurality of tasks of the application. Based on such determination, a plurality of secondary threads are utilized for performing the one or more of the plurality of tasks of the application.

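    One way the assist decision could look, sketched with standard worker threads: primary workers drain a task queue, and when the backlog suggests they need help, secondary workers join in. The queue-depth threshold, sentinel shutdown, and squaring workload are illustrative assumptions, not the patent's mechanism.

```python
from queue import Queue
from threading import Thread

def needs_assistance(task_queue, num_primary, factor=2):
    """Assume help is needed when the backlog exceeds factor x the
    number of primary threads (threshold is an assumption)."""
    return task_queue.qsize() > factor * num_primary

def worker(task_queue, results):
    """Primary and secondary threads run the same loop: take a task,
    perform it, stop on a None sentinel."""
    while True:
        task = task_queue.get()
        if task is None:
            break
        results.append(task * task)


tasks = Queue()
for t in range(10):
    tasks.put(t)

results = []
num_primary = 2
threads = [Thread(target=worker, args=(tasks, results))
           for _ in range(num_primary)]
if needs_assistance(tasks, num_primary):   # 10 > 4: recruit secondaries
    threads += [Thread(target=worker, args=(tasks, results))
                for _ in range(2)]
for th in threads:
    tasks.put(None)                        # one stop sentinel per thread
    th.start()
for th in threads:
    th.join()
print(sorted(results))
```

    Because primary and secondary threads share the same task queue, the secondaries assist transparently: they simply take whatever tasks the primaries have not yet reached.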

    System and Method for Predicting False Sharing
    Status: Granted

    Publication No.: US20150032971A1

    Publication Date: 2015-01-29

    Application No.: US14341438

    Filing Date: 2014-07-25

    Abstract: In one embodiment, a method for predicting false sharing includes running code on a plurality of cores and tracking potential false sharing in the code while running the code to produce tracked potential false sharing, where tracking the potential false sharing includes determining whether there is potential false sharing between a first cache line and a second cache line, and where the first cache line is adjacent to the second cache line. The method also includes reporting potential false sharing in accordance with the tracked potential false sharing to produce a false sharing report.

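    The tracking idea can be sketched as follows: record which threads write to which cache line, then report pairs of adjacent lines touched by different threads (potential false sharing if the allocation shifted slightly). The 64-byte line size, the reporting rule, and all names are assumptions for illustration.

```python
LINE = 64  # assumed cache-line size in bytes

def track_accesses(writes):
    """writes: list of (thread_id, byte_address) pairs observed while
    the code runs. Returns a map of cache line -> set of writer threads."""
    lines = {}
    for tid, addr in writes:
        lines.setdefault(addr // LINE, set()).add(tid)
    return lines

def potential_false_sharing(lines):
    """Report adjacent line pairs whose writers differ (or where a line
    already has multiple writers): these are candidates for false sharing."""
    report = []
    for line, threads in lines.items():
        neighbor = lines.get(line + 1)
        if neighbor and (threads != neighbor or len(threads) > 1):
            report.append((line, line + 1))
    return report


# Thread 0 writes byte 60 (line 0); thread 1 writes byte 70 (line 1):
# adjacent lines, different writers -> flagged as potential false sharing.
lines = track_accesses([(0, 60), (1, 70)])
print(potential_false_sharing(lines))  # [(0, 1)]
```

    Flagging adjacent lines, not just shared ones, is what makes the report predictive: data that merely sits next to another thread's data today could land on the same line after a small layout change.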

    Location control and access control of emails

    Publication No.: US10924459B2

    Publication Date: 2021-02-16

    Application No.: US15409161

    Filing Date: 2017-01-18

    Abstract: A sender device includes a non-transitory memory storage comprising instructions and a location control policy, and a processor coupled to the memory. The processor executes the instructions to generate an email, generate a control mechanism for the email, wherein the control mechanism instructs a security server to implement the location control policy and wherein the location control policy affects a recipient device's use of the email, and integrate the control mechanism into the email to generate an integrated email. The sender device further includes a transmitter coupled to the processor and configured to transmit the integrated email to the security server for the security server to implement the control mechanism.
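    A hypothetical sketch of the integration and enforcement steps: the sender embeds a control mechanism (modeled here as a message header carrying the location control policy) into the email, and the security server consults it before allowing a recipient access. The header name, policy shape, and function names are all assumptions, not the patent's format.

```python
def integrate_control(email, policy):
    """Sender side: embed the control mechanism into the email so a
    security server can enforce the location control policy."""
    integrated = dict(email)
    integrated["X-Location-Control"] = policy   # assumed header name
    return integrated

def security_server_check(integrated_email, recipient_location):
    """Security server side: permit the recipient device's use of the
    email only from locations the policy allows."""
    allowed = integrated_email["X-Location-Control"]["allowed_regions"]
    return recipient_location in allowed


msg = integrate_control({"to": "bob@example.com", "body": "q3 report"},
                        {"allowed_regions": ["US", "CA"]})
print(security_server_check(msg, "US"), security_server_check(msg, "DE"))
```

    The essential property shown is that the policy travels with the message itself, so enforcement does not depend on the recipient's client honoring it.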

    Systems and methods for creating and using a data structure for parallel programming

    Publication No.: US10585845B2

    Publication Date: 2020-03-10

    Application No.: US15293413

    Filing Date: 2016-10-14

    Abstract: System and method embodiments are provided for creating data structures for parallel programming. A method for creating data structures for parallel programming includes forming, by one or more processors, one or more data structures, each comprising one or more global containers and a plurality of local containers. Each global container is accessible by all of a plurality of threads in a multi-thread parallel processing environment. Each local container is accessible only by a corresponding one of the threads. A global container is split into a second plurality of local containers when items are to be processed in parallel, and two or more local containers are merged into a single global container when a parallel process reaches a synchronization point.
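    The split/merge lifecycle described in the abstract can be sketched as two operations; the round-robin partitioning scheme and function names below are assumptions for illustration.

```python
def split(global_container, num_threads):
    """Entering a parallel phase: split a global container into one
    local container per thread (round-robin partition is an assumed
    policy; each local container is then touched by only one thread)."""
    locals_ = [[] for _ in range(num_threads)]
    for i, item in enumerate(global_container):
        locals_[i % num_threads].append(item)
    return locals_

def merge(local_containers):
    """Synchronization point: merge the local containers back into a
    single global container visible to all threads."""
    merged = []
    for local in local_containers:
        merged.extend(local)
    return merged


locals_ = split([1, 2, 3, 4, 5], 2)
print(locals_)                # [[1, 3, 5], [2, 4]]
print(sorted(merge(locals_)))  # [1, 2, 3, 4, 5]
```

    Because each local container is owned by exactly one thread during the parallel phase, the threads need no locks while working; synchronization cost is paid only at the explicit merge.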
