Abstract:
A system and method for communicating instructions and data between a processor and external devices are provided. The system and method make use of a channel interface as the primary mechanism for communicating between the processor and a memory flow controller. The channel interface provides channels for communicating with processor facilities, memory flow control facilities, machine state registers, and external processor interrupt facilities, for example. These channels may be designated as blocking or non-blocking. With blocking channels, when no data is available to be read from the corresponding registers, or there is no space available to write to the corresponding registers, the processor is placed in a low power "stall" state. The processor is automatically awakened, via communication across the blocking channel, when data becomes available or space is freed. Thus, the channels of the present invention permit the processor to stay in a low power state.
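As a rough illustration of the blocking-channel behaviour described above, the following C sketch models a single channel whose read stalls until data is available. The channel_t layout and the helper names are assumptions for illustration only, not the actual channel instructions or facility names.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy model of one blocking channel: a single-entry register with a
 * "data available" flag. Real hardware uses channel instructions; this
 * only illustrates the stall/wake behaviour described in the abstract. */
typedef struct {
    uint32_t data;
    int      available;   /* nonzero when the backing register holds data */
} channel_t;

/* Stand-in for the low-power stall: on real hardware the core sleeps
 * until the channel wakes it; here we simply wait on the flag. */
static void stall_until_wakeup(const channel_t *ch)
{
    while (!ch->available) {
        /* processor would sit in its low-power "stall" state here */
    }
}

/* Blocking channel read: stall while empty, then consume the entry. */
static uint32_t channel_read_blocking(channel_t *ch)
{
    stall_until_wakeup(ch);
    ch->available = 0;
    return ch->data;
}

int main(void)
{
    channel_t ch = { .data = 42, .available = 1 };   /* pretend the MFC wrote 42 */
    printf("read %u from channel\n", channel_read_blocking(&ch));
    return 0;
}
```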
Abstract:
PROBLEM TO BE SOLVED: To combine logical partitioning of a processing system with the management of resource consumption. SOLUTION: Methods and apparatus are provided for logically partitioning the respective processors 102 of a multi-processing system 100 into a plurality of resource groups and time-allocating resources among the resource groups as a function of a predetermined algorithm. The resources include at least one of: (i) allocated portions of the communication bandwidth between the processors 102 and one or more input/output devices 110; (ii) allocated portions of space within a shared memory 106 used by the processors 102; and (iii) sets of cache memory lines used by the processors 102.
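The following C sketch illustrates one possible "predetermined algorithm" of the kind described: dividing a window of I/O bandwidth among resource groups in proportion to assumed weights. The group names, weights, and numbers are hypothetical, not values from the patent.

```c
#include <stdio.h>

/* Hypothetical resource groups; names and weights are illustrative only. */
#define NUM_GROUPS 3

typedef struct {
    const char *name;
    unsigned    weight;   /* share of the resource per allocation window */
} resource_group_t;

/* One simple allocation rule: divide a window's worth of I/O bandwidth
 * among the groups in proportion to their weights. */
static void allocate_bandwidth(const resource_group_t *groups, int n,
                               unsigned total_mb_per_window)
{
    unsigned total_weight = 0;
    for (int i = 0; i < n; i++)
        total_weight += groups[i].weight;

    for (int i = 0; i < n; i++) {
        unsigned share = total_mb_per_window * groups[i].weight / total_weight;
        printf("group %s gets %u MB of bus bandwidth this window\n",
               groups[i].name, share);
    }
}

int main(void)
{
    resource_group_t groups[NUM_GROUPS] = {
        { "partition-A", 4 }, { "partition-B", 2 }, { "partition-C", 1 }
    };
    allocate_bandwidth(groups, NUM_GROUPS, 700);
    return 0;
}
```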
Abstract:
PROBLEM TO BE SOLVED: To store data in a portion of a cache or other fast memory without also writing it to main memory. SOLUTION: A method is provided for storing data transferred from an I/O device, a network, or a disk into a portion of the cache or other fast memory without also writing it to main memory. The data is "locked" into the cache or other fast memory until it is loaded for use, and it remains in the locking cache until it is specifically overwritten under software control. In this embodiment, a processor can write data to the cache or other fast memory without also writing it to main memory, so that portion of the cache or other fast memory can be used as additional system memory.
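A toy C model of the locking-cache idea: locked lines are never chosen as eviction victims, and incoming I/O data is stored directly into them, bypassing main memory. The line count, structure fields, and helper names are assumptions, not the patented hardware interface.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_LINES 4
#define LINE_SIZE 64

/* Locked lines are never written back to main memory and never evicted
 * until software explicitly overwrites them. */
typedef struct {
    uint8_t data[LINE_SIZE];
    int     valid;
    int     locked;   /* set under software control; cleared to release */
} cache_line_t;

static cache_line_t cache[NUM_LINES];

/* Pick a victim line, skipping locked lines; -1 means every line is
 * locked and the locked region is acting as extra system memory. */
static int pick_victim(void)
{
    for (int i = 0; i < NUM_LINES; i++)
        if (!cache[i].locked)
            return i;
    return -1;
}

/* Store incoming I/O data directly into a locked line, bypassing main memory. */
static int store_locked(const uint8_t *src, size_t len)
{
    int line = pick_victim();
    if (line < 0 || len > LINE_SIZE)
        return -1;
    memcpy(cache[line].data, src, len);
    cache[line].valid  = 1;
    cache[line].locked = 1;   /* stays resident until software overwrites it */
    return line;
}

int main(void)
{
    uint8_t packet[16] = "incoming packet";
    printf("stored in line %d\n", store_locked(packet, sizeof packet));
    return 0;
}
```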
Abstract:
PROBLEM TO BE SOLVED: To increase the number of data streams offered by a multimedia system by clustering multimedia A/V server subsystems, operating one of the control server subsystems as a master control server, and assigning each request for a data stream to one of the clusters. SOLUTION: A request is served by a stream from a disk array 72 that is delivered to a display device 54 through a loop 68, a controller 56, a data stream line 62, a switch 50, and a cable connection 51. When requests for a data stream exceed the capacity of the controllers 56 to 60, a title T, for example, can be distributed across all of the controllers 56 to 60 without requiring one or more copies of the content itself on the remaining disk arrays 74 and 76. That is, each request is served by the cluster to which it is assigned.
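As a rough sketch of the master control server's assignment step, the following C fragment picks a cluster for an incoming stream request. The least-loaded selection rule, cluster count, and capacities are illustrative assumptions; the abstract does not specify this policy.

```c
#include <stdio.h>

#define NUM_CLUSTERS 3

/* Hypothetical per-cluster state tracked by the master control server. */
typedef struct {
    int active_streams;   /* streams currently served by this cluster */
    int capacity;         /* maximum concurrent streams */
} cluster_t;

/* Assign a request for a data stream to the least-loaded cluster that
 * still has capacity; return -1 if the whole system is saturated. */
static int assign_request(const cluster_t *clusters, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (clusters[i].active_streams >= clusters[i].capacity)
            continue;
        if (best < 0 || clusters[i].active_streams < clusters[best].active_streams)
            best = i;
    }
    return best;
}

int main(void)
{
    cluster_t clusters[NUM_CLUSTERS] = { {40, 50}, {12, 50}, {50, 50} };
    printf("request assigned to cluster %d\n", assign_request(clusters, NUM_CLUSTERS));
    return 0;
}
```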
Abstract:
A system and method for limiting the size of a local storage of a processor are provided. A facility is provided in association with a processor for setting a local storage size limit. This facility is a privileged facility and can only be accessed by the operating system running on a control processor in the multiprocessor system or the associated processor itself. The operating system sets the value stored in the local storage limit register when the operating system initializes a context switch in the processor. When the processor accesses the local storage using a request address, the local storage address corresponding to the request address is compared against the local storage limit size value in order to determine if the local storage address, or a modulo of the local storage address, is used to access the local storage.
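A minimal C sketch of the address check described above, assuming a simple wrap-around policy: request addresses below the limit are used directly, and addresses at or above it are reduced modulo the limit. The variable names and the example limit are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Set by the privileged facility, e.g. at context switch time. */
static uint32_t local_storage_limit;

/* Translate a request address into a local storage address: addresses
 * below the limit are used directly; otherwise a modulo of the address,
 * wrapped into the limited region, is used to access local storage. */
static uint32_t ls_effective_address(uint32_t request_addr)
{
    if (request_addr < local_storage_limit)
        return request_addr;
    return request_addr % local_storage_limit;
}

int main(void)
{
    local_storage_limit = 0x20000;   /* e.g. limit local storage to 128 KB */
    printf("0x%05x -> 0x%05x\n", 0x1F000u, ls_effective_address(0x1F000u));
    printf("0x%05x -> 0x%05x\n", 0x25000u, ls_effective_address(0x25000u));
    return 0;
}
```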
Abstract:
Memory management in a computer system is improved by preventing a subset of address translation information from being replaced with other types of address translation information in a cache memory reserved for storing such address translation information for faster access by a CPU. This way, the CPU can identify the subset of address translation information stored in the cache.
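The following C sketch shows one way a replacement policy can skip a reserved subset of address translation entries so that they are never displaced by other translation information. The entry layout and the "reserved" flag are illustrative assumptions, not the claimed hardware mechanism.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy address-translation cache in which a reserved subset of entries is
 * never chosen for replacement; the field names are illustrative. */
#define TLB_ENTRIES 4

typedef struct {
    uint32_t virt_page;
    uint32_t phys_page;
    int      valid;
    int      reserved;   /* entries in the protected subset are never evicted */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* Choose a replacement slot, skipping the reserved subset so its
 * translation information stays resident for fast CPU access. */
static int tlb_pick_replacement(void)
{
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (!tlb[i].reserved)
            return i;
    return -1;   /* every entry is reserved; caller must not replace any */
}

int main(void)
{
    tlb[0].reserved = 1;   /* pin entry 0's translation information */
    printf("next replacement uses entry %d\n", tlb_pick_replacement());
    return 0;
}
```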
Abstract:
A method, an apparatus, and a computer program are provided for controlling memory access. Direct Memory Access (DMA) units have become commonplace in a number of bus architectures, but managing limited system resources has become a challenge with multiple DMA units. In order to manage the multitude of commands generated and to preserve their dependencies, embedded flags in commands or a barrier command are used. These operations can then control the order in which commands are executed.
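A small C sketch of ordering DMA commands with a barrier: commands queued after the barrier are held until everything queued before it has completed. The command encoding and queue model are assumptions for illustration, not the actual bus architecture's interface.

```c
#include <stdio.h>

/* Illustrative command kinds; a real DMA unit would use opcodes and flags. */
enum { CMD_PUT, CMD_GET, CMD_BARRIER };

typedef struct {
    int kind;
    int tag;   /* commands with the same tag belong to one dependency chain */
} dma_cmd_t;

/* Issue commands in order, but never start a command that follows a
 * barrier until all commands queued before the barrier have completed. */
static void issue_queue(const dma_cmd_t *q, int n)
{
    int outstanding = 0;
    for (int i = 0; i < n; i++) {
        if (q[i].kind == CMD_BARRIER) {
            printf("barrier: wait for %d outstanding command(s)\n", outstanding);
            outstanding = 0;   /* all earlier commands must complete here */
            continue;
        }
        printf("issue %s (tag %d)\n", q[i].kind == CMD_PUT ? "PUT" : "GET", q[i].tag);
        outstanding++;
    }
}

int main(void)
{
    dma_cmd_t q[] = { {CMD_PUT, 1}, {CMD_PUT, 1}, {CMD_BARRIER, 0}, {CMD_GET, 1} };
    issue_queue(q, 4);
    return 0;
}
```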
Abstract:
PROBLEM TO BE SOLVED: To provide a system and a method for communicating command parameters between a processor and a memory flow controller. SOLUTION: The system uses a channel interface as the main mechanism for communication between the processor and the memory flow controller. The channel interface provides channels for communicating with, for instance, a processor facility, a memory flow control facility, a machine state register, and an external processor interrupt facility. With a blocking channel, when no data is available to be read from the corresponding register, or there is no space available to write to the corresponding register, the processor is placed in a low-power "stall" state. When data becomes available or space is freed, the processor is automatically awakened via communication over the blocking channel.
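To make the parameter-passing idea concrete, the following C sketch writes one DMA command's parameters to a sequence of channels, with the final write enqueuing the command to the memory flow controller. The channel names, the channel_write helper, and the example opcode are illustrative assumptions rather than the actual channel map.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical parameter channels: each parameter is written to its own
 * channel, and the final write enqueues the command. */
enum { CH_LS_ADDR, CH_EA_HI, CH_EA_LO, CH_SIZE, CH_TAG, CH_CMD, NUM_CH };

static uint32_t channel_regs[NUM_CH];

static void channel_write(int ch, uint32_t value)
{
    channel_regs[ch] = value;   /* a blocking channel would stall here if full */
    if (ch == CH_CMD)
        printf("command 0x%x enqueued to the memory flow controller\n", value);
}

/* Queue one DMA transfer by writing its parameters channel by channel. */
static void mfc_enqueue(uint32_t ls, uint64_t ea, uint32_t size,
                        uint32_t tag, uint32_t cmd)
{
    channel_write(CH_LS_ADDR, ls);
    channel_write(CH_EA_HI, (uint32_t)(ea >> 32));
    channel_write(CH_EA_LO, (uint32_t)ea);
    channel_write(CH_SIZE, size);
    channel_write(CH_TAG, tag);
    channel_write(CH_CMD, cmd);   /* last write triggers the command */
}

int main(void)
{
    mfc_enqueue(0x1000, 0x20000000ULL, 4096, 5, 0x40 /* e.g. a "get" opcode */);
    return 0;
}
```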
Abstract:
PROBLEM TO BE SOLVED: To provide a system and a method for loading software on a plurality of processors. SOLUTION: A processing unit (PU) retrieves a file from system memory and loads it into the PU's internal memory. The PU extracts a processor type from the file's header and, based on that processor type, determines whether the file should be executed by the PU or by a synergistic processing unit (SPU). When the file should be executed by an SPU, the PU DMA (Direct Memory Access)-transfers the file to the SPU for execution. In one embodiment, the file is a combined file including both PU code and SPU code. In this embodiment, the PU identifies one or more section headers included in the file; the section header(s) indicate the SPU code incorporated into the combined file. The PU then extracts the SPU code from the combined file and DMA-transfers the extracted code to the SPU for execution.
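A simplified C sketch of the dispatch step: the PU reads a header, extracts a processor type, and either runs the code locally or DMA-transfers it to an SPU. The header layout, type values, and the dma_transfer_to_spu helper are hypothetical stand-ins, not the actual file format or runtime API.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative processor-type values extracted from a file header. */
enum { PROC_TYPE_PU = 1, PROC_TYPE_SPU = 2 };

typedef struct {
    uint32_t processor_type;   /* which unit should execute this file/section */
    uint32_t code_offset;
    uint32_t code_size;
} file_header_t;

/* Stand-in for the DMA transfer of extracted SPU code to an SPU. */
static void dma_transfer_to_spu(const uint8_t *code, uint32_t size)
{
    (void)code;   /* a real implementation would DMA from this address */
    printf("DMA %u bytes of SPU code to an SPU for execution\n", size);
}

static void load_file(const uint8_t *file)
{
    file_header_t hdr;
    memcpy(&hdr, file, sizeof hdr);   /* extract the header */

    if (hdr.processor_type == PROC_TYPE_SPU)
        dma_transfer_to_spu(file + hdr.code_offset, hdr.code_size);
    else
        printf("execute %u bytes locally on the PU\n", hdr.code_size);
}

int main(void)
{
    uint8_t file[64] = {0};
    file_header_t hdr = { PROC_TYPE_SPU, sizeof hdr, 16 };
    memcpy(file, &hdr, sizeof hdr);
    load_file(file);
    return 0;
}
```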