Abstract:
A data processing system includes a control computer which controls and monitors a cooling subsystem. The control computer has a non-volatile memory holding two status logs for recording status data and fault information. One of the logs is selected as the current log. If a fault condition is detected, the control computer writes fault information into the current log and then, if the other log is unlocked, locks the current log and selects the other log as the current log. In response to a "request locked log" command, the control computer transmits the contents of the locked log. In response to an "unlock" command, the control computer unlocks the locked log and then, if the other log contains fault information, locks that other log. The system thus provides an efficient mechanism for ensuring that the fault information is maintained, even through system powerdown.
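The dual-log switching behaviour can be summarised in a few lines of code. The following is a minimal sketch only, using invented names (StatusLogController, Log); the abstract does not specify any API.

```python
# Sketch of the two-log fault-recording scheme; all names are invented.

class Log:
    def __init__(self):
        self.entries = []       # recorded status data and fault information
        self.locked = False
        self.has_fault = False

class StatusLogController:
    def __init__(self):
        self.logs = [Log(), Log()]   # the two status logs in non-volatile memory
        self.current = 0             # index of the log selected as the current log

    def record_fault(self, info):
        cur, other = self.logs[self.current], self.logs[1 - self.current]
        cur.entries.append(info)
        cur.has_fault = True
        if not other.locked:             # only switch if the other log is unlocked
            cur.locked = True
            self.current = 1 - self.current

    def request_locked_log(self):
        # "request locked log" command: transmit the contents of the locked log.
        for log in self.logs:
            if log.locked:
                return list(log.entries)
        return None

    def unlock(self):
        # "unlock" command: unlock the locked log; if the other log holds fault
        # information, lock that other log in turn.
        for i, log in enumerate(self.logs):
            if log.locked:
                log.locked = False
                if self.logs[1 - i].has_fault:
                    self.logs[1 - i].locked = True
                return

ctl = StatusLogController()
ctl.record_fault("cooling fault: fan 2 below minimum speed")
print(ctl.request_locked_log())   # contents of the locked log
ctl.unlock()
```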
Abstract:
System and method for managing shared, distributed locks in a multiprocessing complex. The manager operates using a partitionable lock space with logical processor connection. Logically connected processors are subject to validation and disconnection due to failure. The locks synchronize data access to identifiable subunits of DASD. Denied lock requests are queued for servicing when the lock becomes available. Lock partitions are used to speed DASD-to-DASD copying without halting processing on the first DASD. A special partition is assigned to the copy task, and the processors writing to the DASD can determine copy status with a single read or test. Operations requiring multilateral agreement of processors, such as rebuilding locks or moving locks, are protected by fencing any nonresponsive processor. A special queue partition is designated for master/slave control point designation. All processors contend for the master lock and losing contenders are queued. Queuing provides automatic fallback in case of a failing processor.
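The queuing of denied requests, the fencing of a failed processor, and the master/slave queue partition can be illustrated roughly as follows; the LockManager class, its methods, and the partition names are assumptions for illustration, not the patented lock manager.

```python
# Rough illustration only; all names below are assumptions.

from collections import deque

class LockManager:
    def __init__(self, partitions):
        # Each partition of the lock space maps a resource id to (holder, waiter queue).
        self.space = {p: {} for p in partitions}

    def request(self, partition, resource, processor):
        holder, waiters = self.space[partition].setdefault(resource, (None, deque()))
        if holder is None:
            self.space[partition][resource] = (processor, waiters)
            return True                      # lock granted
        waiters.append(processor)            # denied: queued until the lock is free
        return False

    def release(self, partition, resource, processor):
        holder, waiters = self.space[partition][resource]
        assert holder == processor
        nxt = waiters.popleft() if waiters else None
        self.space[partition][resource] = (nxt, waiters)
        return nxt                           # next queued holder, if any

    def disconnect(self, processor):
        # Fencing a nonresponsive processor: drop its holdings and queued requests.
        for locks in self.space.values():
            for res, (holder, waiters) in list(locks.items()):
                waiters = deque(w for w in waiters if w != processor)
                if holder == processor:
                    holder = waiters.popleft() if waiters else None
                locks[res] = (holder, waiters)

# Master/slave control point: all processors contend for one lock in a special
# queue partition; losing contenders queue and so fall back automatically.
mgr = LockManager(["data", "copy", "control"])
for cpu in ("P1", "P2", "P3"):
    mgr.request("control", "MASTER", cpu)      # P1 wins, P2 and P3 are queued
print(mgr.release("control", "MASTER", "P1"))  # P2 becomes the new master
```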
Abstract:
Recovering from failure of a distributed processing system process designated as a master process for at least one shared resource. The method and system of the invention provide for detection of the failure (200) by one or more of the shadow processes. The detecting process tests (202) to determine whether it has the shared write lock managed by the master process. If it does, it becomes the master process (204). If not, it determines from the shared control file which process holds the write lock (206) and it communicates to that process a request (208) to assume master process responsibilities. That process attempts to establish itself as master process (210). A test is performed (212) to determine if a new master process has been designated. If not, a race is conducted (214) among all shadow processes.
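The numbered recovery steps (200) through (214) suggest the following control flow. This is a self-contained toy model in which the shared control file is a dict and all process names are invented; it is not the claimed implementation.

```python
# Toy model of steps (200)-(214); the shared control file is a dict and all
# process names are invented.

control = {"write_lock_holder": "proc_B", "master": None}   # shared control file
shadows = ["proc_A", "proc_B", "proc_C"]                     # surviving shadow processes

def try_acquire_write_lock(proc):
    # Stand-in for acquiring the shared write lock via the control file.
    if control["write_lock_holder"] in (None, proc):
        control["write_lock_holder"] = proc
        return True
    return False

def on_master_failure(detector):
    # (200) 'detector' is the shadow process that noticed the master failed.
    if control["write_lock_holder"] == detector:     # (202) do I hold the write lock?
        control["master"] = detector                 # (204) then I become the master
        return detector
    holder = control["write_lock_holder"]            # (206) who does hold it?
    if holder in shadows:                            # (208) ask that process to take over
        control["master"] = holder                   # (210) it establishes itself as master
    if control["master"] is not None:                # (212) has a new master been designated?
        return control["master"]
    for proc in shadows:                             # (214) otherwise race among all shadows
        if try_acquire_write_lock(proc):
            control["master"] = proc
            return proc
    return None

print(on_master_failure("proc_A"))   # proc_B, the write-lock holder, takes over
```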
Abstract:
Apparatus and method for reading data pages 33 in a transaction processing system 20 without locking the pages are disclosed. The system maintains a Global_Committed_LSN 36 identifying the oldest uncommitted transaction accessing any of the data, and Object_Committed_LSNs 38a, 38b identifying the oldest uncommitted transactions accessing particular files, tables and indexes. Each data page includes a Page_LSN 35 identifying the last transaction to have updated the page. To read a page, a transaction first latches the page, and compares the page's Page_LSN with the Global_Committed_LSN, or with the page's respective Object_Committed_LSN. If the Page_LSN is older than the Committed_LSN with which it was compared, then the transaction reads the page without locking it, since there can be no uncommitted transaction in process which might have updated the page's data. However, if the Page_LSN is younger than the Committed_LSN, the page is locked before being read.
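The lock-avoidance test can be sketched directly from the abstract. Below, LSNs are modelled as increasing integers (smaller means older), and all class, field, and callback names are assumptions for illustration.

```python
# Sketch of the lock-avoidance test; every name below is an assumption.

from dataclasses import dataclass, field

@dataclass
class Page:
    page_lsn: int            # LSN of the last transaction that updated the page
    data: bytes = b""

@dataclass
class LsnState:
    global_committed_lsn: int                                  # oldest uncommitted txn overall
    object_committed_lsn: dict = field(default_factory=dict)   # per file/table/index

def read_page(page, obj_id, state, latch, unlatch, lock):
    latch(page)                                # short-term latch while testing
    committed = state.object_committed_lsn.get(obj_id, state.global_committed_lsn)
    lock_free = page.page_lsn < committed      # older than every uncommitted txn?
    unlatch(page)
    if not lock_free:
        lock(page)                             # younger: lock the page before reading
    return page.data

state = LsnState(global_committed_lsn=100, object_committed_lsn={"orders": 120})
page = Page(page_lsn=95, data=b"row data")
noop = lambda p: None
print(read_page(page, "orders", state, noop, noop, noop))   # read without locking
```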
Abstract:
To lock use of shared information to itself in a multiprocessor system (100) having two independently and asynchronously operating processors (101, 111) whose main store units (102, 112) duplicate each other's contents, a processor must cause an atomic read-modify-write (RMW) operation to be executed on a semaphore in the duplicated main store units of both processors. To properly order execution of multiple such RMW operations, arbiters (106, 116) of system buses (105, 115) of the two processors communicate over an interarbiter channel (121). The arbiter of a source processor that wishes to perform a RMW operation notifies the other processor's arbiter over the interarbiter channel. Simultaneous attempts at notification by both arbiters are resolved in favor of one of them that is designated the master. The notifying arbiter prevents its processor from performing another RMW operation until the one RMW operation has completed thereon, but permits other operations to proceed normally. The notified arbiter prevents its processor from performing another RMW operation until the one RMW operation has been transferred via interprocessor links (107, 117) and bus (120) from the source processor to the notified arbiter's processor and has been performed thereon, but permits other operations to proceed normally. Thus multiple RMW operations are performed on both processors in the same order asynchronously and without impacting performance of other operations.
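The ordering rule (simultaneous RMW requests are resolved in favour of the master arbiter, and each RMW completes on both duplicated stores before the next begins) can be imitated in software. The sketch below is purely illustrative; the patent describes hardware arbiters, buses, and an interarbiter channel, not Python objects.

```python
# Purely illustrative software model of the interarbiter ordering rule.

MASTER = "A"    # one arbiter is designated the master for tie-breaking

def order_rmw_requests(pending):
    # 'pending' maps arbiter name to the RMW it wants to start (or None).
    # Simultaneous notifications are resolved in favour of the master arbiter.
    ordered = []
    if pending.get(MASTER):
        ordered.append((MASTER, pending[MASTER]))
    for arb, op in pending.items():
        if arb != MASTER and op:
            ordered.append((arb, op))
    return ordered

def apply_in_order(stores, ordered_ops):
    # Each RMW completes on BOTH duplicated main stores before the next begins,
    # so the duplicated semaphores never diverge.
    for _arb, (addr, modify) in ordered_ops:
        for store in stores:
            store[addr] = modify(store[addr])

store_a, store_b = {"sem": 0}, {"sem": 0}
requests = {"B": ("sem", lambda v: v | 2), "A": ("sem", lambda v: v | 1)}
apply_in_order([store_a, store_b], order_rmw_requests(requests))
print(store_a == store_b)    # True: both copies applied the RMWs in the same order
```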
Abstract:
One embodiment provides an apparatus. The apparatus includes a processor, a chipset, a memory to store a process, and logic. The processor includes one or more core(s) and is to execute the process. The logic is to acquire performance monitoring data in response to a platform processor utilization parameter (PUP) greater than a detection utilization threshold (UT), identify a spin loop based, at least in part, on at least one of a detected hot function and/or a detected hot loop, modify the identified spin loop using binary translation to create a modified process portion, and implement redirection from the identified spin loop to the modified process portion.
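The detect-then-translate sequence can be approximated in software. In the sketch below, sampled function names stand in for hardware performance monitoring data and function replacement stands in for binary translation; the threshold value and every name are assumptions, not the claimed apparatus.

```python
# Software approximation of spin-loop detection and redirection.

import time
from collections import Counter

DETECTION_UT = 0.90    # detection utilization threshold (assumed value)

def find_spin_loop(utilization, ip_samples, functions):
    if utilization <= DETECTION_UT:
        return None                       # below threshold: no monitoring acquired
    hot, _count = Counter(ip_samples).most_common(1)[0]
    return hot if hot in functions else None   # hottest known function is the candidate

def redirect(functions, name, modified):
    # Stand-in for redirection: callers that resolve the function by name
    # now reach the modified process portion instead of the spin loop.
    functions[name] = modified

def spin_wait(flag):          # original spin loop: burns CPU polling a flag
    while not flag["set"]:
        pass

def yielding_wait(flag):      # modified portion: sleeps instead of spinning
    while not flag["set"]:
        time.sleep(0.001)

functions = {"spin_wait": spin_wait}
samples = ["spin_wait"] * 50 + ["other_work"] * 5
hot = find_spin_loop(utilization=0.97, ip_samples=samples, functions=functions)
if hot:
    redirect(functions, hot, yielding_wait)
print(functions["spin_wait"] is yielding_wait)   # True: calls now take the modified path
```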
Abstract:
Disclosed is an approach for implementing disaster recovery for virtual machines. Consistency groups are implemented for virtual machines, where the consistency group links together two or more VMs. The consistency group includes any set of VMs which need to be managed on a consistent basis in the event of a disaster recovery scenario.
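A consistency group can be thought of as a named set of VMs that are snapshotted and recovered as a unit. The following minimal sketch uses invented names; the abstract does not describe any concrete API.

```python
# Minimal sketch using invented names.

class ConsistencyGroup:
    def __init__(self, name, vms):
        self.name = name
        self.vms = list(vms)      # two or more VMs linked together

    def snapshot(self):
        # All members are captured at the same point so they remain consistent.
        return {vm: f"snapshot-of-{vm}" for vm in self.vms}

    def recover(self, snapshots):
        # In a disaster recovery scenario, every member is restored as a unit.
        return [f"restored {vm} from {snapshots[vm]}" for vm in self.vms]

group = ConsistencyGroup("app-tier", ["web-vm", "db-vm"])
print(group.recover(group.snapshot()))
```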