Abstract:
According to an example, an instruction to run a kernel of an application on an apparatus having a first processing unit integrated with a second processing unit may be received. In addition, an application profile for the application at a runtime of the application kernel on the second processing unit may be created, in which the application profile identifies an affinity of the application kernel to be run on either the first processing unit or the second processing unit, and identifies a characterization of an input data set of the application. The application profile may also be stored in a data store.
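The profiling idea above can be sketched in code. This is a hypothetical illustration, not the patented implementation: all names (`create_profile`, `characterize_input`, the timing inputs, and the dict-based data store) are assumptions made for the example, and the affinity is decided here by simply comparing measured runtimes on the two processing units.

```python
# Hypothetical sketch: at kernel runtime, record which processing unit the
# kernel ran faster on (its "affinity") and a characterization of the input
# data set, then store the profile in a data store. Names are illustrative.

def characterize_input(data):
    """Summarize the input data set (size and value range here)."""
    return {"size": len(data), "min": min(data), "max": max(data)}

def create_profile(kernel_name, data, first_unit_time, second_unit_time):
    """Build a profile identifying the kernel's processing-unit affinity."""
    return {
        "kernel": kernel_name,
        "affinity": ("first_unit" if first_unit_time <= second_unit_time
                     else "second_unit"),
        "input": characterize_input(data),
    }

data_store = {}  # stand-in for the profile data store

profile = create_profile("matmul_kernel", [3, 1, 4, 1, 5],
                         first_unit_time=0.8, second_unit_time=0.5)
data_store[profile["kernel"]] = profile
```

A later scheduling decision for the same kernel and a similarly characterized input could then look the profile up instead of re-measuring.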
Abstract:
In one implementation, a scheduler system includes a plurality of processor resources, a processor resource assignment engine to maintain a plurality of processor resource groups based on scheduler activity information, and a process assignment engine to assign a processor resource request to one of the plurality of processor resource groups, identify a processor resource of the plurality of processor resources assigned to the one of the plurality of processor resource groups, and enqueue a process associated with the processor resource request on a run-queue of the processor resource based on a scheduler policy.
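The group-then-resource assignment flow can be sketched as follows. This is a minimal illustration under assumptions: the class names, the group keys, and the round-robin selection standing in for the "scheduler policy" are all invented for the example.

```python
from collections import deque

# Illustrative sketch: processor resources are partitioned into groups; a
# request is mapped to a group, a resource within that group is selected
# (round-robin here stands in for the scheduler policy), and the process is
# enqueued on that resource's run-queue.

class ProcessorResource:
    def __init__(self, rid):
        self.rid = rid
        self.run_queue = deque()  # per-resource run-queue

class Scheduler:
    def __init__(self, groups):
        # groups: dict mapping group name -> list of ProcessorResource
        self.groups = groups
        self._next = {name: 0 for name in groups}

    def assign(self, group_name, process):
        """Assign a request to a group, pick a resource, enqueue the process."""
        group = self.groups[group_name]
        resource = group[self._next[group_name] % len(group)]
        self._next[group_name] += 1
        resource.run_queue.append(process)
        return resource

sched = Scheduler({
    "io": [ProcessorResource(0)],
    "compute": [ProcessorResource(1), ProcessorResource(2)],
})
r1 = sched.assign("compute", "proc-a")
r2 = sched.assign("compute", "proc-b")
```

In a real system the group membership would be rebuilt from scheduler activity information rather than fixed at construction.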
Abstract:
In example implementations, a method is provided. The method includes receiving a job request at a scheduler of a plurality of schedulers based upon a quality of service (QoS) level associated with the job request and the scheduler. The job request is scheduled to a computing node based upon locally stored resource information of a selected number of computing nodes within a computing cluster. A shared memory is accessed via a memory fabric to obtain updated resource information of the selected number of computing nodes. The job request may then be re-scheduled to a different computing node based upon the updated resource information.
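The two-phase decision described above (schedule from a locally stored view, then re-schedule from updated shared-memory information) can be sketched like this. The node names, the free-core counts, and the "most free cores" selection rule are assumptions for illustration only.

```python
# Rough sketch: a job is first placed using locally stored (possibly stale)
# resource information, then re-scheduled once updated information for the
# same nodes is read from shared memory. All values are illustrative.

def pick_node(job_cores, resource_info):
    """Pick the node with the most free cores that can fit the job."""
    candidates = {n: free for n, free in resource_info.items()
                  if free >= job_cores}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

local_info = {"node-a": 4, "node-b": 2}    # locally stored view
first_choice = pick_node(2, local_info)     # initial scheduling decision

shared_info = {"node-a": 1, "node-b": 6}    # updated view via memory fabric
second_choice = pick_node(2, shared_info)   # re-schedule if the pick changes
```

Here the stale local view points at one node while the refreshed view points at another, which is exactly the case where re-scheduling pays off.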
Abstract:
In one example, a memory network may control access to a shared memory that is shared by multiple compute nodes. The memory network may control the access to the shared memory by receiving a memory access request originating from an application executing on the multiple compute nodes and determining a priority for processing the memory access request. The priority determined by the memory network may correspond to a memory address range in the memory that is specifically used by the application.
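The address-range-to-priority mapping can be illustrated with a small lookup. The ranges, priority values, and function name below are made up for the example; a real memory network would hold this mapping in hardware or controller state.

```python
# Hypothetical sketch: a request's priority is the priority of the address
# range containing its target address. Ranges and values are illustrative.

# (start, end, priority) entries; one range is dedicated to an application.
PRIORITY_RANGES = [
    (0x0000, 0x0FFF, 1),   # general low-priority region
    (0x1000, 0x1FFF, 10),  # region specifically used by the application
]

def request_priority(address, default=0):
    """Return the processing priority for a memory access request."""
    for start, end, priority in PRIORITY_RANGES:
        if start <= address <= end:
            return priority
    return default

p_app = request_priority(0x1234)    # falls in the application's range
p_other = request_priority(0x2345)  # outside any listed range
```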
Abstract:
Jobs are executed in a shared cluster of computing nodes. A checkpoint-based scheduling system determines checkpoint overheads for the jobs. A job is selected based on the checkpoint overheads. Generation of a checkpoint for the selected job is facilitated by the checkpoint-based scheduling system.
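One way the overhead-based selection could work is sketched below. The overhead model (state size divided by write bandwidth) and the lowest-overhead-first rule are assumptions for illustration; the abstract does not specify how overhead is computed or which job is preferred.

```python
# Illustrative sketch: estimate per-job checkpoint overhead and select the
# job whose checkpoint is cheapest to take. The model here is assumed.

def checkpoint_overhead(job):
    """Estimate overhead in seconds as state size / write bandwidth."""
    return job["state_mb"] / job["bandwidth_mb_s"]

def select_job(jobs):
    """Select the job with the lowest estimated checkpoint overhead."""
    return min(jobs, key=checkpoint_overhead)

jobs = [
    {"name": "sim", "state_mb": 4000, "bandwidth_mb_s": 200},  # 20 s
    {"name": "etl", "state_mb": 500, "bandwidth_mb_s": 100},   # 5 s
]
chosen = select_job(jobs)  # the scheduler would then checkpoint this job
```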
Abstract:
Log analysis can include transferring compiled log analysis code, executing the compiled log analysis code, and performing a log analysis with the executed log analysis code.