METHOD AND SYSTEM FOR OPERATING AND EXTENDING HASH FUNCTION

    Publication (Announcement) No.: JP2002051077A

    Publication (Announcement) Date: 2002-02-15

    Application No.: JP2001105972

    Application Date: 2001-04-04

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To provide a method and system for providing a hash value of an item in a computer system, together with a supplementary value of the hash value. SOLUTION: Components are derived from the item. The components include a first component and a final component, and each component consists of a specific number of bits. The components are cascaded via at least one XOR operation to provide outcomes, which include a first outcome and a final outcome. The final outcome consists only of the final component, while the first outcome is the result of exclusive-ORing (XORing) the first component with the remaining cascaded components. To provide the hash value, a reversible hash function and a supplementary function of the reversible hash function are applied at least to the first outcome. The supplementary value of the hash value consists of the outcomes other than the first outcome.
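
    A minimal sketch of the scheme described in the abstract, written in C: an item is split into fixed-width components, the components are cascaded with XOR from the final component backward, a reversible function is applied to the first outcome to obtain the hash value, and the remaining outcomes serve as the supplementary value. The 16-bit component width and the particular reversible function (multiplication by an odd constant modulo 2^16) are illustrative assumptions, and the supplementary function mentioned in the abstract is omitted; the abstract does not fix these details.

/* Illustrative sketch only: XOR-cascade hash with a supplementary value. */
#include <stdint.h>
#include <stdio.h>

#define NCOMP 4   /* four 16-bit components of a 64-bit item (assumed width) */

/* Assumed reversible function: multiplication by an odd constant mod 2^16. */
static uint16_t reversible(uint16_t x) { return (uint16_t)(x * 0x9E37u); }

void hash_with_supplement(uint64_t item,
                          uint16_t *hash, uint16_t supplement[NCOMP - 1])
{
    uint16_t comp[NCOMP], outcome[NCOMP];

    /* Derive fixed-width components from the item. */
    for (int i = 0; i < NCOMP; i++)
        comp[i] = (uint16_t)(item >> (16 * (NCOMP - 1 - i)));

    /* Cascade via XOR: the final outcome is the final component alone. */
    outcome[NCOMP - 1] = comp[NCOMP - 1];
    for (int i = NCOMP - 2; i >= 0; i--)
        outcome[i] = comp[i] ^ outcome[i + 1];

    /* Hash value from the first outcome; the rest form the supplement. */
    *hash = reversible(outcome[0]);
    for (int i = 1; i < NCOMP; i++)
        supplement[i - 1] = outcome[i];
}

int main(void)
{
    uint16_t h, s[NCOMP - 1];
    hash_with_supplement(0x0123456789ABCDEFull, &h, s);
    printf("hash=%04x supplement=%04x %04x %04x\n", h, s[0], s[1], s[2]);
    return 0;
}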

    CONTROLLER FOR MULTIPLE INSTRUCTION THREAD PROCESSORS

    Publication (Announcement) No.: CA2334393A1

    Publication (Announcement) Date: 2001-10-04

    Application No.: CA2334393

    Application Date: 2001-02-02

    Applicant: IBM

    Abstract: A prefetch buffer is used in connection with a plurality of independent thread processes in such a manner as to avoid an immediate stall when execution is given to an idle thread. A mechanism is established to control the switching from one thread to another within a processor in order to achieve more efficient utilization of processor resources. This mechanism will grant temporary control to an alternate execution thread when a short latency event is encountered, and will grant full control to an alternate execution thread when a long latency event is encountered. This thread control mechanism comprises a priority FIFO, which is configured such that its outputs control execution priority for two or more execution threads within a processor, based on the length of time each execution thread has been resident within the FIFO. The FIFO is loaded with an execution thread number each time a new task (a networking packet requiring classification and routing within a network) is dispatched for processing, where the execution thread number loaded into the FIFO corresponds to the thread number which is assigned to process the task. When a particular execution thread completes processing of a particular task, and enqueues the results for subsequent handling, the priority FIFO is further controlled to remove the corresponding execution thread number from the FIFO. When an active execution thread encounters a long latency event, the corresponding thread number within the FIFO is removed from a high priority position in the FIFO, and placed into the lowest priority position of the FIFO. This thread control mechanism also comprises a Thread Control State Machine for each execution thread supported by the processor. The Thread Control State Machine further comprises four states. An Init state is used while an execution thread is waiting for a task to process. Once a task is enqueued for processing, a Ready state is used to request execution cycles. Once access to the processor is granted, an Execute state is used to support actual processor execution. Requests for additional processor cycles are made from both the Ready state and the Execute state. The state machine is returned to the Init state once processing has been completed for the assigned task. A Wait state is used to suspend requests for execution cycles while the execution thread is stalled due to either a long-latency event or a short-latency event. This thread control mechanism further comprises an arbiter which uses thread numbers from the priority FIFO to determine which execution thread should be granted access to processor resources. The arbiter further processes requests for execution control from each execution thread, and selects one execution thread to be granted access to processor resources for each processor execution cycle by matching thread numbers from requesting execution threads with corresponding thread numbers in the priority FIFO.
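
    A minimal sketch, in C, of the control scheme outlined above: a priority FIFO of thread numbers, a four-state controller (Init, Ready, Execute, Wait) per thread, and an arbiter that grants the processor to the requesting thread closest to the head of the FIFO, demoting a thread to the tail on a long latency event. The thread count, the array-based FIFO, and the function names are assumptions made for illustration, not details taken from the claims.

/* Illustrative sketch only: priority-FIFO thread control with four states. */
#include <stdio.h>

#define NTHREADS 4

typedef enum { INIT, READY, EXECUTE, WAIT } tstate_t;

static int      fifo[NTHREADS];   /* thread numbers, index 0 = highest priority */
static int      fifo_len = 0;
static tstate_t state[NTHREADS];

static void fifo_push(int t) { fifo[fifo_len++] = t; }

static void fifo_remove(int t)    /* drop one thread number, keep the order */
{
    int j = 0;
    for (int i = 0; i < fifo_len; i++)
        if (fifo[i] != t) fifo[j++] = fifo[i];
    fifo_len = j;
}

void dispatch_task(int t)         /* a new packet is assigned to thread t */
{
    state[t] = READY;
    fifo_push(t);
}

void task_done(int t) { state[t] = INIT; fifo_remove(t); }

void long_latency_event(int t)    /* stalled thread drops to lowest priority */
{
    state[t] = WAIT;
    fifo_remove(t);
    fifo_push(t);
}

int arbitrate(void)               /* grant one thread per cycle, -1 if none */
{
    for (int i = 0; i < fifo_len; i++) {
        int t = fifo[i];
        if (state[t] == READY || state[t] == EXECUTE) {
            state[t] = EXECUTE;
            return t;
        }
    }
    return -1;
}

int main(void)
{
    dispatch_task(0);
    dispatch_task(1);
    printf("grant %d\n", arbitrate());   /* thread 0: oldest entry in the FIFO */
    long_latency_event(0);               /* thread 0 stalls and moves to the tail */
    printf("grant %d\n", arbitrate());   /* thread 1 is now granted */
    return 0;
}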

    Invention patent (title unknown)

    Publication (Announcement) No.: DE10110504B4

    Publication (Announcement) Date: 2006-11-23

    Application No.: DE10110504

    Application Date: 2001-03-03

    Applicant: IBM

    Abstract: A mechanism controls a multi-thread processor so that, when a first thread encounters a latency event of a first predefined time interval, temporary control is transferred to an alternate execution thread for the duration of that interval and then returned to the original thread. The mechanism grants full control to the alternate execution thread when a latency event of a second predefined time interval is encountered. A latency event of the first predefined time interval is termed a short latency event, whereas one of the second time interval is termed a long latency event.
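
    The distinction drawn in this abstract can be reduced to a small decision rule; the C sketch below is an illustration only, and the cycle thresholds and names are assumed values rather than anything specified by the patent.

/* Illustrative sketch only: short vs. long latency events (assumed thresholds). */
#include <stdio.h>

typedef enum { KEEP_CONTROL, TEMPORARY_SWITCH, FULL_SWITCH } switch_action_t;

#define SHORT_LATENCY_CYCLES   8   /* assumed value */
#define LONG_LATENCY_CYCLES   64   /* assumed value */

switch_action_t on_stall(unsigned expected_stall_cycles)
{
    if (expected_stall_cycles >= LONG_LATENCY_CYCLES)
        return FULL_SWITCH;        /* alternate thread keeps the processor */
    if (expected_stall_cycles >= SHORT_LATENCY_CYCLES)
        return TEMPORARY_SWITCH;   /* alternate thread runs only during the stall */
    return KEEP_CONTROL;
}

int main(void)
{
    printf("%d %d %d\n", on_stall(2), on_stall(10), on_stall(100));
    return 0;
}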

    Multi-processor system with registers having a common address map

    Publication (Announcement) No.: GB2366426A

    Publication (Announcement) Date: 2002-03-06

    Application No.: GB0108828

    Application Date: 2001-04-09

    Applicant: IBM

    Abstract: A processor system comprises a core language processor 101; co-processors 107-111, each having special-purpose scalar 116 and array 117 registers; and an interface between the processors that maps the special-purpose registers into a common address map. The system may be utilised as a protocol processor unit to provide instruction communication to a network, and the co-processors may compute CRC checksums, move data between local and main memories, search a tree structure, enqueue packets, or assist in accessing the contents of registers. The interface may take the form of an execution interface 106 or a data interface 130.
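
    A brief C sketch of the idea of a common address map: the upper bits of a single address select the co-processor and the lower bits select one of its registers, so the core processor reads and writes any co-processor register through one flat map. The field widths, co-processor identifiers, and backing store below are illustrative assumptions; the reference numerals in the abstract (101, 106, 107-111, 116, 117, 130) are figure labels, not addresses.

/* Illustrative sketch only: one common address map for co-processor registers. */
#include <stdint.h>
#include <stdio.h>

enum coproc { CP_CHECKSUM = 0, CP_DATAMOVER = 1, CP_TREESEARCH = 2, CP_ENQUEUE = 3 };

/* Common address: upper bits select the co-processor, lower bits the register. */
static uint16_t reg_address(enum coproc cp, unsigned reg_index)
{
    return (uint16_t)((cp << 8) | (reg_index & 0xFF));
}

/* Flat backing store standing in for the execution/data interface. */
static uint32_t register_file[4 * 256];

void     core_write(uint16_t addr, uint32_t v) { register_file[addr] = v; }
uint32_t core_read (uint16_t addr)             { return register_file[addr]; }

int main(void)
{
    /* The core processor reaches any co-processor register through one map. */
    core_write(reg_address(CP_TREESEARCH, 5), 0xDEADBEEFu);
    printf("tree-search r5 = %08x\n", core_read(reg_address(CP_TREESEARCH, 5)));
    return 0;
}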
