Abstract:
Memory management in a computer system is improved by preventing a subset of address translation information from being replaced by other address translation information in a cache memory reserved for storing such information for faster access by a CPU. In this way, the CPU can always identify that subset of address translation information in the cache.
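To illustrate the mechanism described above, the following is a minimal C sketch, assuming a hypothetical translation-cache layout (the tlb_entry structure, the pinned flag, and the round-robin victim selection are illustrative, not taken from the patent), in which the replacement policy simply skips entries belonging to the protected subset:

    /* Minimal sketch (hypothetical structures): a translation cache whose
     * entries can be pinned so the replacement policy never evicts them. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>

    #define TLB_ENTRIES 64

    struct tlb_entry {
        uint64_t vpn;      /* virtual page number                      */
        uint64_t pfn;      /* physical frame number                    */
        bool     valid;
        bool     pinned;   /* the protected subset: never replaced     */
    };

    static struct tlb_entry tlb[TLB_ENTRIES];

    /* Choose a victim slot, skipping pinned entries so the protected
     * subset of address translation information stays resident. */
    static int tlb_pick_victim(void)
    {
        static unsigned next;                  /* simple round-robin cursor */
        for (size_t tries = 0; tries < TLB_ENTRIES; tries++) {
            int slot = next++ % TLB_ENTRIES;
            if (!tlb[slot].valid)
                return slot;                   /* free slot                */
            if (!tlb[slot].pinned)
                return slot;                   /* replaceable entry        */
        }
        return -1;                             /* everything pinned        */
    }

    /* Install a translation; pinned marks it as part of the protected subset. */
    static void tlb_install(uint64_t vpn, uint64_t pfn, bool pinned)
    {
        int slot = tlb_pick_victim();
        if (slot < 0)
            return;                            /* no replaceable slot available */
        tlb[slot] = (struct tlb_entry){ .vpn = vpn, .pfn = pfn,
                                        .valid = true, .pinned = pinned };
    }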
Abstract:
PROBLEM TO BE SOLVED: To minimize chip redesign time during chip design. SOLUTION: A modular design method provides a custom-designed chip through a variable and scalable modular multiprocessor design, without redesigning the modules that make up the design. The design includes a PU module, a first plurality of assist processing modules, and a first plurality of DMA control modules, each associated with a different one of the assist processing modules. A first multiprocessor design including one or more of the modules is generated, and the number of modules to be removed and/or added is selected in advance from the first design. A second multiprocessor design is then produced in which the preselected number of modules is removed and/or added. COPYRIGHT: (C)2011,JPO&INPIT
Abstract:
PROBLEM TO BE SOLVED: To minimize chip redesign time during chip design. SOLUTION: Disclosed is a method of providing a custom-designed chip, without redesigning the modules that make up the design, in a variable and scalable modular multiprocessor design. The design involves a PU module, a first plurality of assist processing modules, and a first plurality of DMA control modules, each associated with a different one of the assist processing modules. A first multiprocessor design is created that includes at least one of the modules, and the number of modules to be deleted and/or added is preselected from the first design. A second multiprocessor design is then produced with the preselected number of modules deleted and/or added. COPYRIGHT: (C)2007,JPO&INPIT
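The two abstracts above describe the same modular design method. As a rough illustration only, the following C sketch models a first design as a set of module counts and derives a second design by adding or removing a preselected number of assist-processor/DMA-controller pairs; the mp_design structure and derive_design function are hypothetical and not part of the disclosed method:

    /* Minimal sketch (hypothetical types): a first multiprocessor design
     * described by module counts, and a second design derived from it by
     * adding or removing a preselected number of assist/DMA module pairs
     * without touching the module definitions themselves. */
    #include <stdio.h>

    struct mp_design {
        int pu_modules;        /* processing unit (PU) modules          */
        int assist_modules;    /* assist processing modules             */
        int dma_modules;       /* DMA control modules, one per assist   */
    };

    /* Derive a second design by applying a preselected delta of
     * assist/DMA module pairs; a negative delta removes modules. */
    static struct mp_design derive_design(struct mp_design first, int delta)
    {
        struct mp_design second = first;
        second.assist_modules += delta;
        second.dma_modules    += delta;   /* keep one DMA controller per assist */
        if (second.assist_modules < 0) {  /* clamp: cannot remove more than exist */
            second.assist_modules = 0;
            second.dma_modules = 0;
        }
        return second;
    }

    int main(void)
    {
        struct mp_design first  = { .pu_modules = 1, .assist_modules = 8, .dma_modules = 8 };
        struct mp_design second = derive_design(first, -4);   /* preselected reduction */
        printf("second design: %d PU, %d assist, %d DMA\n",
               second.pu_modules, second.assist_modules, second.dma_modules);
        return 0;
    }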
Abstract:
PROBLEM TO BE SOLVED: To provide a method and apparatus for loading data into a local store of a processor in a computer system having a direct memory access (DMA) mechanism. SOLUTION: Data is transferred from a system memory of the computer system to the local store. The data is fetched from the system memory into a cache of the processor. A DMA read request is issued to request the data, and it is determined whether the requested data is present in the cache. If the requested data is found in the cache, it is loaded directly from the cache into the local store. COPYRIGHT: (C)2005,JPO&NCIPI
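A minimal C sketch of this read path is shown below, assuming hypothetical helpers cache_lookup and system_memory_ptr (real hardware exposes these operations differently); the point is only that the cache is consulted before system memory:

    /* Minimal sketch (hypothetical interfaces): handling a DMA read request
     * by checking the processor cache first and copying straight into the
     * local store on a hit, falling back to system memory on a miss. */
    #include <stdint.h>
    #include <string.h>
    #include <stdbool.h>

    /* Assumed lookup/copy helpers, not taken from the disclosure. */
    extern bool  cache_lookup(uint64_t addr, void **line_data);
    extern void *system_memory_ptr(uint64_t addr);

    /* Service a DMA read of `len` bytes at `addr` into the local store. */
    void dma_read_to_local_store(uint64_t addr, void *local_store_dst, size_t len)
    {
        void *cached;

        if (cache_lookup(addr, &cached)) {
            /* Requested data found in the cache: load it directly from the
             * cache into the local store, avoiding a system-memory access. */
            memcpy(local_store_dst, cached, len);
        } else {
            /* Miss: transfer the data from system memory to the local store. */
            memcpy(local_store_dst, system_memory_ptr(addr), len);
        }
    }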
Abstract:
PROBLEM TO BE SOLVED: To provide a method and a system for providing cache management commands in a system supporting a DMA mechanism and caches. SOLUTION: A DMA mechanism is set up by a processor. Software running on the processor generates cache management commands, and the DMA mechanism carries out the commands, thereby enabling software-programmed management of the caches. The commands include commands for writing data to the cache, loading data from the cache, and marking data in the cache as no longer needed. The cache can be a system cache or a DMA cache. COPYRIGHT: (C)2006,JPO&NCIPI
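The following is a minimal C sketch of such a software-driven command flow, assuming a hypothetical command structure and a dma_enqueue hook into the DMA engine; the actual command encoding in the disclosure may differ:

    /* Minimal sketch (hypothetical command set): software builds cache
     * management commands and hands them to a DMA mechanism to execute,
     * giving the program explicit control over the cache. */
    #include <stdint.h>

    enum cache_cmd_op {
        CACHE_CMD_WRITE,       /* write data into the cache            */
        CACHE_CMD_LOAD,        /* load data from the cache             */
        CACHE_CMD_INVALIDATE   /* mark cached data as no longer needed */
    };

    struct cache_cmd {
        enum cache_cmd_op op;
        uint64_t addr;
        uint64_t len;
    };

    /* Assumed hook into the DMA engine's command queue. */
    extern void dma_enqueue(const struct cache_cmd *cmd);

    /* Software-generated sequence: push a buffer through the cache and
     * then tell the cache the data will not be reused. */
    void manage_buffer(uint64_t addr, uint64_t len)
    {
        struct cache_cmd write = { CACHE_CMD_WRITE,      addr, len };
        struct cache_cmd load  = { CACHE_CMD_LOAD,       addr, len };
        struct cache_cmd drop  = { CACHE_CMD_INVALIDATE, addr, len };

        dma_enqueue(&write);
        dma_enqueue(&load);
        dma_enqueue(&drop);
    }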
Abstract:
PURPOSE: To make a prefetch instruction effective by loading second information from a prefetch memory into a cache memory when first information includes the second information, and loading the second information from a system memory otherwise. CONSTITUTION: When the target data information is not stored in a cache memory 12, a processor 16 determines whether the information is stored in a prefetch memory 26; if it is, the processor 16 copies the data information from the prefetch memory 26 into the cache memory 12. If the target data information is not stored in the prefetch memory 26, the processor 16 requests it from a system memory 30 over a system bus 28. When a BIU 18 later receives the data information, the processor 16 stores the information from a read register 24 into the cache memory 12.
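The lookup order described above can be sketched in C as follows, assuming hypothetical helpers (cache_contains, prefetch_contains, system_bus_read, and so on) that stand in for the cache memory 12, prefetch memory 26, system bus 28, and BIU 18:

    /* Minimal sketch (hypothetical helpers): on a cache miss the processor
     * checks the prefetch memory first; only if the data is absent there
     * does it request it from system memory over the system bus. */
    #include <stdint.h>
    #include <stdbool.h>

    extern bool     cache_contains(uint64_t addr);
    extern uint64_t cache_read(uint64_t addr);
    extern bool     prefetch_contains(uint64_t addr);
    extern uint64_t prefetch_read(uint64_t addr);
    extern uint64_t system_bus_read(uint64_t addr);  /* data arrives via BIU/read register */
    extern void     cache_fill(uint64_t addr, uint64_t data);

    uint64_t load_with_prefetch(uint64_t addr)
    {
        if (cache_contains(addr))
            return cache_read(addr);                 /* normal cache hit */

        if (prefetch_contains(addr)) {
            /* Target data is in the prefetch memory: move it into the cache. */
            uint64_t data = prefetch_read(addr);
            cache_fill(addr, data);
            return data;
        }

        /* Otherwise request the data from system memory over the system bus;
         * when it arrives it is written into the cache as well. */
        uint64_t data = system_bus_read(addr);
        cache_fill(addr, data);
        return data;
    }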
Abstract:
PROBLEM TO BE SOLVED: To provide an I/O configuration capable of increasing bandwidth between electronic circuit chips that can optionally be combined on the planar surface of a substrate. SOLUTION: For an IC chip, the apparatus provides a group of general I/O connections arranged at the periphery, which use wire-type connections between the chip and other circuitry on the substrate on which the chip is mounted, and a group of C4-type I/O connections arranged in the interior. The interior group of connections can be used to provide a direct connection to an optional auxiliary chip that has a corresponding group of I/O connection points. Such a configuration not only increases the number of possible I/O connections but also increases the communication bandwidth between directly connected chips. COPYRIGHT: (C)2005,JPO&NCIPI
Abstract:
PROBLEM TO BE SOLVED: To provide a management system and method for streaming data in a cache. SOLUTION: A computer system 100 comprises a processor 102, the cache 104, and a system memory 110. The processor 102 issues a data request for the streaming data, which consists of one or more small data portions. The system memory 110 has a specific area for storing the streaming data. The cache has a predefined area locked for the streaming data and is connected to a cache controller that is in communication with a processor 106 and the system memory 110. When at least one small data portion of the streaming data is not found in the predefined area of the cache, that small data portion is transferred from the specific area of the system memory 110 to the predefined area of the cache 104. COPYRIGHT: (C)2004,JPO&NCIPI
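A minimal C sketch of the locked streaming region follows, assuming a hypothetical direct-mapped layout (PORTION_SIZE, LOCKED_SLOTS, and stream_area_ptr are illustrative and not drawn from the disclosure):

    /* Minimal sketch (hypothetical layout): a predefined, locked region of
     * the cache is reserved for streaming data; on a miss in that region the
     * small data portion is copied in from the dedicated area of system memory. */
    #include <stdint.h>
    #include <string.h>
    #include <stdbool.h>

    #define PORTION_SIZE   128          /* size of one small data portion   */
    #define LOCKED_SLOTS   32           /* slots in the locked cache region */

    static uint8_t  locked_region[LOCKED_SLOTS][PORTION_SIZE];  /* cache area */
    static uint64_t slot_tag[LOCKED_SLOTS];
    static bool     slot_valid[LOCKED_SLOTS];

    /* Assumed accessor for the system-memory area that holds the stream. */
    extern const uint8_t *stream_area_ptr(uint64_t portion_index);

    /* Return a pointer to the requested portion, filling the locked region
     * from system memory when the portion is not already cached there. */
    const uint8_t *get_stream_portion(uint64_t portion_index)
    {
        unsigned slot = portion_index % LOCKED_SLOTS;   /* direct-mapped region */

        if (!slot_valid[slot] || slot_tag[slot] != portion_index) {
            memcpy(locked_region[slot], stream_area_ptr(portion_index), PORTION_SIZE);
            slot_tag[slot]   = portion_index;
            slot_valid[slot] = true;
        }
        return locked_region[slot];
    }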