Abstract:
Methods and apparatus are provided for executing processor tasks on a multi-processing system. The multi-processing system includes a plurality of sub-processing units and a main processing unit that may access a shared memory. Each sub-processing unit includes an on-chip local memory separate from the shared memory. The methods and apparatus contemplate: providing that the processor tasks be copied from the shared memory into the local memory of the sub-processing units in order to execute them, and prohibiting the execution of the processor tasks from the shared memory; and migrating at least one processor task from one of the sub-processing units to another of the sub-processing units.
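A minimal C sketch of this copy-then-execute rule follows. The task_header_t layout, the 256 KB local store size, and the dma_get()/dma_put() helpers (stubbed here with memcpy) are hypothetical stand-ins, not the patented implementation.

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical image of a processor task as stored in shared memory. */
    typedef struct {
        uint32_t id;
        uint32_t priority;
        uint32_t size;          /* total bytes of code + context        */
        uint32_t entry_offset;  /* entry point, relative to image start */
    } task_header_t;

    static uint8_t local_store[256 * 1024];   /* on-chip local memory */

    /* Stand-ins for transfers between shared memory and local store;
     * real hardware would issue DMA commands instead of memcpy.       */
    static void dma_get(void *l, const void *s, size_t n) { memcpy(l, s, n); }
    static void dma_put(void *s, const void *l, size_t n) { memcpy(s, l, n); }

    /* Tasks are never executed from shared memory: copy the image in,
     * run it from local store, then write the updated context back.   */
    void run_task(task_header_t *shared_task)
    {
        dma_get(local_store, shared_task, shared_task->size);
        task_header_t *resident = (task_header_t *)local_store;
        void (*entry)(void) =
            (void (*)(void))(local_store + resident->entry_offset);
        entry();                                /* execute locally only */
        dma_put(shared_task, local_store, resident->size);
    }

    /* Migration: flush the task's context back to shared memory so a
     * different sub-processing unit can copy it into its own local
     * store and resume it with run_task().                            */
    void migrate_task(task_header_t *shared_task)
    {
        dma_put(shared_task, local_store,
                ((task_header_t *)local_store)->size);
    }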
Abstract:
The present invention is a method for integrating multiple object codes from heterogeneous architectures. For a program on a first processor to reference a program from the name space of a second processor, the object code for the second-processor program is enclosed in a wrapper to create object code in the first-processor name space. The header of the wrapped object code defines a new symbol in the name space of the first processor, and the symbol points to the second-processor object code contained in the wrapped object code. Instead of directly referencing the second-processor object code, the referencing program on the first processor references the wrapped object code. A system tool can be used to wrap the object code which runs on the second processor.
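A minimal C sketch of the wrapping idea follows. The wrapped_object_t layout, the "WRAP" magic value, and the spu_program symbol name are hypothetical illustrations, not the format emitted by the system tool described above.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical wrapper: the second processor's object code is carried
     * as ordinary data inside an object file for the first processor.    */
    typedef struct {
        uint32_t magic;          /* identifies a wrapped image             */
        uint32_t target_arch;    /* architecture of the embedded code      */
        size_t   image_size;     /* bytes of second-processor object code  */
        const uint8_t *image;    /* the embedded object code itself        */
    } wrapped_object_t;

    /* Embedded second-processor object code (placeholder bytes). */
    static const uint8_t second_cpu_image[] = { 0x00, 0x01, 0x02, 0x03 };

    /* New symbol in the first processor's name space; code on the first
     * processor links against this instead of the foreign object code.   */
    const wrapped_object_t spu_program = {
        .magic       = 0x57524150u,          /* "WRAP"                     */
        .target_arch = 2,                    /* id of the second processor */
        .image_size  = sizeof second_cpu_image,
        .image       = second_cpu_image,
    };

    /* A loader on the first processor resolves the symbol and hands the
     * embedded image to the second processor for execution.              */
    const uint8_t *resolve_wrapped_image(const wrapped_object_t *w, size_t *n)
    {
        *n = w->image_size;
        return w->image;
    }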
Abstract:
Methods and apparatus are provided for managing processor tasks in a multi-processor computing system. The system is operable to store the processor tasks in a shared memory that may be accessed by a plurality of sub-processing units of the multi-processor computing system; and permit the sub-processing units to determine which of the processor tasks should be copied from the shared memory and executed based on priorities of the processor tasks.
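A minimal C sketch of priority-driven task selection by a sub-processing unit follows. The shared_task_t layout and the fixed-size task table are hypothetical, and the locking or atomics needed to claim a task safely are omitted.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define MAX_TASKS 64

    typedef struct {
        uint32_t priority;        /* larger value = more urgent (assumed) */
        uint32_t ready;           /* 1 if waiting to run                  */
        uint8_t  image[1024];     /* task code + context                  */
    } shared_task_t;

    /* Task table resident in shared memory (placeholder storage here). */
    static shared_task_t task_table[MAX_TASKS];

    /* Each sub-processing unit decides for itself which task to take:
     * it claims the highest-priority ready task and copies it locally. */
    int take_highest_priority_task(uint8_t *local_store, size_t local_size)
    {
        int best = -1;
        for (int i = 0; i < MAX_TASKS; ++i) {
            if (task_table[i].ready &&
                (best < 0 ||
                 task_table[i].priority > task_table[best].priority))
                best = i;
        }
        if (best < 0)
            return -1;                 /* nothing ready to run           */
        task_table[best].ready = 0;    /* claim it (synchronization such
                                          as an atomic lock is omitted)  */
        memcpy(local_store, task_table[best].image,
               local_size < sizeof task_table[best].image
                   ? local_size : sizeof task_table[best].image);
        return best;
    }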
Abstract:
Methods and apparatus are provided for migrating and distributing processor tasks on a plurality of multi-processing systems distributed over a network. The multi-processing system includes at least one broadband entity, each broadband entity including a plurality of processing units and synergistic processing units, as well as a shared memory. Tasks from one broadband entity are bundled, migrated and processed remotely on other broadband entities to efficiently use processing resources, and then returned to the migrating broadband entity for completion or continued processing.
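A minimal C sketch of the bundling-and-return flow follows. The task_bundle_t layout and the send_to_entity()/receive_results() transport calls are placeholders for whatever serialization and network protocol the broadband entities actually use.

    #include <stdint.h>
    #include <stddef.h>

    #define BUNDLE_MAX 8

    typedef struct {
        uint32_t task_id;
        size_t   payload_size;
        uint8_t  payload[512];        /* task code/data to run remotely */
    } remote_task_t;

    typedef struct {
        uint32_t origin_entity;       /* who the results return to      */
        uint32_t count;
        remote_task_t tasks[BUNDLE_MAX];
    } task_bundle_t;

    /* Placeholder transport; a real system would serialize the bundle
     * and ship it over the network to the chosen broadband entity.     */
    extern int send_to_entity(uint32_t entity_id, const task_bundle_t *b);
    extern int receive_results(uint32_t origin_entity, task_bundle_t *out);

    /* Bundle up to n tasks, migrate them to a remote entity, and wait
     * for the processed bundle to return for continued processing.     */
    int migrate_bundle(uint32_t self, uint32_t remote,
                       const remote_task_t *tasks, uint32_t n,
                       task_bundle_t *results)
    {
        task_bundle_t b = { .origin_entity = self, .count = 0 };
        for (uint32_t i = 0; i < n && i < BUNDLE_MAX; ++i)
            b.tasks[b.count++] = tasks[i];
        if (send_to_entity(remote, &b) != 0)
            return -1;
        return receive_results(self, results);   /* results come home  */
    }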
Abstract:
The present invention provides apparatus and methods to perform thermal management in a computing environment. In one embodiment, thermal attributes are associated with operations and/or processing components, and the operations are scheduled for processing by the components so that a thermal threshold is not exceeded. In another embodiment, hot and cool queues are provided for selected operations, and the processing components can select operations from the appropriate queue so that the thermal threshold is not exceeded.
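A minimal C sketch of the hot/cool queue selection follows. The fixed queue size, the 5-degree guard margin, and the queue names are hypothetical, and temperatures are treated as plain integers.

    #define QUEUE_LEN 32

    typedef struct { int op_id; } operation_t;

    typedef struct {
        operation_t items[QUEUE_LEN];
        int head, tail;
    } op_queue_t;

    /* Two queues as described above: "hot" holds operations with high
     * thermal attributes, "cool" holds low ones.                       */
    static op_queue_t hot_queue, cool_queue;

    static int dequeue(op_queue_t *q, operation_t *out)
    {
        if (q->head == q->tail)
            return -1;                          /* queue is empty       */
        *out = q->items[q->head];
        q->head = (q->head + 1) % QUEUE_LEN;
        return 0;
    }

    /* A processing component close to its thermal threshold draws from
     * the cool queue; otherwise it may take hot work first.            */
    int pick_operation(int current_temp, int thermal_threshold,
                       operation_t *out)
    {
        if (current_temp >= thermal_threshold - 5)   /* assumed margin  */
            return dequeue(&cool_queue, out);
        if (dequeue(&hot_queue, out) == 0)
            return 0;
        return dequeue(&cool_queue, out);            /* hot queue empty */
    }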
Abstract:
Methods and apparatus for cell processors are disclosed. A policy module is loaded from a main memory of a cell processor into the local memory of a selected synergistic processing unit (SPU) under control of an SPU policy module manager (SPMM) running on the SPU. A selected one or more work queues are assigned from a main memory to a selected one or more of the SPUs according to a hierarchy of precedence. A policy module for the selected one or more work queues is loaded to the selected one or more SPUs. The policy module interprets the selected one or more work queues. Under control of the policy module, work from one or more of the selected one or more work queues is loaded into the local memory of the selected SPU. The work is performed with the selected SPU. After completing the work or upon a pre-emption, control of the selected SPU is returned to the SPMM.
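A minimal C sketch of the SPMM control loop on one SPU follows. The work_queue_t and policy_module_t layouts and the highest_precedence_queue()/load_policy_module() helpers are hypothetical placeholders for the real SPMM interfaces.

    typedef struct {
        int precedence;                 /* position in the hierarchy      */
        int pending;                    /* work items waiting             */
    } work_queue_t;

    typedef struct {
        /* Once loaded into local store, the policy module interprets its
         * work queue and runs work until it finishes or is pre-empted.   */
        int (*run)(work_queue_t *wq);   /* 0 = done, 1 = pre-empted       */
    } policy_module_t;

    /* Placeholders for picking a queue by precedence and for loading its
     * policy module from main memory into the SPU's local store.         */
    extern work_queue_t    *highest_precedence_queue(void);
    extern policy_module_t *load_policy_module(const work_queue_t *wq);

    /* SPMM loop: pick a work queue by precedence, load its policy module,
     * let it run, then take back control of the SPU and repeat.           */
    void spmm_main(void)
    {
        for (;;) {
            work_queue_t *wq = highest_precedence_queue();
            if (!wq)
                continue;                   /* no work queue assigned yet */
            policy_module_t *pm = load_policy_module(wq);
            pm->run(wq);    /* returns on completion or pre-emption,
                               handing control back to the SPMM          */
        }
    }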
Abstract:
The present invention provides methods and apparatus for transferring and storing data among processors and memory in a multiprocessor system. The data is compressed locally before it is sent to a shared memory. The memory stores the data in its compressed state, but the data is aligned in the memory in the same manner as uncompressed data would be. A tag table keeps track of the compression type and compressed data size for a set of data at a given address block. A data compressor and a data expander may be implemented in a direct memory access controller accessible to multiple coprocessors, or the compressor and the expander may be implemented within the coprocessors.
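A minimal C sketch of the tag-table bookkeeping follows. The 4 KB block size, the tag_entry_t fields, and the compress_block() routine are hypothetical; the point illustrated is that data stays block-aligned as if uncompressed while the tag records compression type and compressed size.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define BLOCK_SIZE 4096              /* alignment granularity (assumed) */
    #define NUM_BLOCKS 1024

    /* One tag per address block: how the data there was compressed and
     * how many of the block's bytes actually hold payload.               */
    typedef struct {
        uint8_t  compression_type;       /* 0 = none, 1 = RLE, ... (assumed) */
        uint32_t compressed_size;        /* bytes of compressed payload      */
    } tag_entry_t;

    static tag_entry_t tag_table[NUM_BLOCKS];
    static uint8_t     shared_memory[NUM_BLOCKS * BLOCK_SIZE];

    /* Placeholder compressor; returns the compressed size or 0 on failure. */
    extern size_t compress_block(uint8_t type, const void *src, size_t n,
                                 void *dst, size_t cap);

    /* Store one block: compress locally, but keep the block-aligned layout
     * that uncompressed data would have, and record the tag.               */
    void store_block(unsigned block, uint8_t type, const void *data, size_t n)
    {
        uint8_t *dst = shared_memory + (size_t)block * BLOCK_SIZE;
        size_t csize = compress_block(type, data, n, dst, BLOCK_SIZE);
        if (csize == 0 || csize >= n) {            /* fall back to raw copy */
            memcpy(dst, data, n < BLOCK_SIZE ? n : BLOCK_SIZE);
            type  = 0;
            csize = n;
        }
        tag_table[block].compression_type = type;
        tag_table[block].compressed_size  = (uint32_t)csize;
    }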
Abstract:
In one aspect, a multiprocessor system and method are provided whereby one of the processors is assigned the responsibility of handling interrupts and identifying the next processor to handle an interrupt. When that processor switches tasks and determines that it is no longer executing the least important task, it will then select and transfer its interrupt-related responsibilities (i.e., handling the interrupt and determining the next interrupt-handling processor) to the processor which is executing the least important task. The selected processor will then be designated for handling interrupts unless and until it undergoes a task switch and selects a different processor.
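A minimal C sketch of this handoff follows. The four-processor count, the integer priority per processor (lower value = less important task), and the route_interrupts_to() placeholder for reprogramming the interrupt controller are all assumptions.

    #define NUM_PROCESSORS 4

    /* Priority of the task each processor is currently running;
     * a lower value means a less important task (assumed convention). */
    static int      current_priority[NUM_PROCESSORS];
    static unsigned interrupt_handler_cpu;    /* who handles interrupts */

    /* Placeholder for reprogramming interrupt routing in hardware. */
    extern void route_interrupts_to(unsigned cpu);

    /* Called by the current interrupt-handling processor right after it
     * switches tasks: if another processor is now running a less
     * important task, hand the interrupt duties over to that processor. */
    void maybe_transfer_interrupt_duty(unsigned self)
    {
        unsigned least = self;
        for (unsigned i = 0; i < NUM_PROCESSORS; ++i)
            if (current_priority[i] < current_priority[least])
                least = i;
        if (least != self) {
            interrupt_handler_cpu = least;
            route_interrupts_to(least);   /* the new handler keeps the duty
                                             until its own next task switch */
        }
    }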
Abstract:
A synchronization scheme is provided for a multiprocessor system. In particular, a processor includes a buffer sync controller. The buffer sync controller is operative to allow or deny access by a subprocessor to shared data in a shared memory, such that a processor seeking to write data into or read data from the shared memory must first ascertain the shared parameter data processed by the buffer sync controller.
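A minimal C sketch of the buffer sync controller's allow/deny decision follows. The buffer_sync_t fields (readable and writable byte counts, a single write owner) are hypothetical shared parameter data, and atomic access to them is omitted.

    #include <stdint.h>

    /* Shared parameters maintained by the buffer sync controller; a
     * subprocessor must consult these before touching the shared buffer. */
    typedef struct {
        uint32_t readable;     /* bytes currently valid for reading   */
        uint32_t writable;     /* bytes currently free for writing    */
        uint32_t owner;        /* id of the unit holding write access */
    } buffer_sync_t;

    /* Grant or deny a write of n bytes by unit id. */
    int sync_request_write(buffer_sync_t *s, uint32_t id, uint32_t n)
    {
        if (s->owner != id || n > s->writable)
            return -1;                 /* access denied                  */
        s->writable -= n;
        s->readable += n;              /* written data becomes readable  */
        return 0;
    }

    /* Grant or deny a read of n bytes by any unit. */
    int sync_request_read(buffer_sync_t *s, uint32_t n)
    {
        if (n > s->readable)
            return -1;                 /* not enough data yet            */
        s->readable -= n;
        s->writable += n;              /* consumed space can be reused   */
        return 0;
    }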
Abstract:
The present invention provides apparatus and methods to perform thermal management in a computing environment. Thermal attributes are associated with operations and/or processing components. The components have thermal thresholds that should not be exceeded. In a preferred embodiment, an operation can be transferred from one component to another component if the thermal threshold is exceeded during execution by the first component.
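A minimal C sketch of threshold-driven transfer follows. The integer temperatures, the per-operation thermal attribute modelling expected heat contribution, and the move_operation() placeholder for the actual context transfer are all assumptions.

    #define NUM_COMPONENTS 4

    typedef struct {
        int temperature;        /* current reading                         */
        int threshold;          /* thermal limit that must not be exceeded */
    } component_t;

    typedef struct {
        int id;
        int thermal_attribute;  /* expected heat contribution (assumed)    */
    } op_desc_t;

    static component_t components[NUM_COMPONENTS];

    /* Placeholder for actually moving the operation's state/context. */
    extern void move_operation(op_desc_t *op, int from, int to);

    /* If running op on component 'from' would exceed its threshold, look
     * for a cooler component with headroom and transfer the operation.   */
    int transfer_if_too_hot(op_desc_t *op, int from)
    {
        component_t *c = &components[from];
        if (c->temperature + op->thermal_attribute <= c->threshold)
            return from;                              /* stay put        */
        for (int i = 0; i < NUM_COMPONENTS; ++i) {
            component_t *d = &components[i];
            if (i != from &&
                d->temperature + op->thermal_attribute <= d->threshold) {
                move_operation(op, from, i);
                return i;                             /* new component   */
            }
        }
        return from;     /* no cooler candidate; caller may throttle     */
    }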