Abstract:
Examples of the present disclosure describe implementing bitmap-based data replication when a primary form of data replication between a source device and a target device cannot be used. According to one example, a temporal identifier may be received from the target device. If the source device determines that the primary replication method cannot be used to replicate data associated with the temporal identifier, a secondary replication method may be initiated. The secondary replication method may utilize a recovery bitmap identifying data blocks that have changed on the source device since a previous event.
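As an illustration of the fallback described above, the following Python sketch replays journal entries when the temporal identifier can still be resolved and otherwise resends only the blocks marked in a recovery bitmap. The Journal class, the counter used as a temporal identifier, and the dict-backed volume are assumptions made here for clarity, not the patented implementation.

class Journal:
    """Primary replication path: ordered writes keyed by a temporal identifier."""
    def __init__(self):
        self.entries = []                          # list of (temporal_id, block_no, data)

    def append(self, temporal_id, block_no, data):
        self.entries.append((temporal_id, block_no, data))

    def has(self, temporal_id):
        return any(t == temporal_id for t, _, _ in self.entries)

    def since(self, temporal_id):
        seen = False
        for t, block_no, data in self.entries:
            if seen:
                yield block_no, data
            if t == temporal_id:
                seen = True

class SourceDevice:
    def __init__(self):
        self.volume = {}                           # block_no -> data
        self.journal = Journal()
        self.recovery_bitmap = set()               # blocks changed since the previous event
        self.clock = 0                             # stand-in for a temporal identifier

    def write(self, block_no, data):
        self.clock += 1
        self.volume[block_no] = data
        self.journal.append(self.clock, block_no, data)   # primary replication bookkeeping
        self.recovery_bitmap.add(block_no)                # secondary replication bookkeeping

    def replicate(self, target, temporal_id):
        if self.journal.has(temporal_id):
            # Primary method: replay journal entries newer than the temporal identifier.
            for block_no, data in self.journal.since(temporal_id):
                target[block_no] = data
        else:
            # Secondary method: resend every block flagged in the recovery bitmap.
            for block_no in sorted(self.recovery_bitmap):
                target[block_no] = self.volume[block_no]
        self.recovery_bitmap.clear()               # marks the new "previous event"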
Abstract:
Crash recovery with asynchronous consistent snapshots in persistent memory stores of a processing environment. A processing environment includes a user program and infrastructure-maintained data structures. The infrastructure-maintained data structures include a log of updates made to program data structures and a snapshot of the state of the program data structures. The systems and methods include writing log entries in the log to a transient memory. The log entries correspond to store instructions and memory management instructions operating on a nonvolatile memory (NVM), and input/output (I/O) operations executed by program instructions of the user program. Each of the log entries represents an effect of a corresponding operation in the program instructions. The systems and methods also include creating a snapshot in the NVM after a consistent program point based on the log of updates. The snapshot provides a rollback position during restart following a crash.
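The sketch below illustrates the log-plus-snapshot idea in Python. The in-memory dictionaries standing in for NVM and the method names (record_store, take_snapshot, recover) are assumptions made for illustration, not the patent's actual structures.

import copy

class PersistentStore:
    def __init__(self):
        self.nvm = {}          # program data structures "in NVM"
        self.log = []          # transient log of updates since the last snapshot
        self.snapshot = {}     # rollback position created at a consistent point

    def record_store(self, key, value):
        # Each log entry represents the effect of one store instruction.
        self.log.append(("store", key, value))
        self.nvm[key] = value

    def take_snapshot(self):
        # Called after a consistent program point: capture state, drop the log.
        self.snapshot = copy.deepcopy(self.nvm)
        self.log.clear()

    def recover(self):
        # After a crash, roll back to the snapshot; updates logged since that
        # point are discarded.
        self.nvm = copy.deepcopy(self.snapshot)
        self.log.clear()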
Abstract:
Embodiments of the present invention relate to synchronously replicating data in a distributed computing environment. To achieve synchronous replication both an eventual consistency approach and a strong consistency approach are contemplated. Received data may be written to a log of a primary data store for eventual committal. The data may then be annotated with a record, such as a unique identifier, which facilitates the replay of the data at a secondary data store. Upon receiving an acknowledgment that the secondary data store has written the data to a log, the primary data store may commit the data and communicate an acknowledgment of success back to the client. In a strong consistency approach, the primary data store may wait to send an acknowledgement of success to the client until it receives an acknowledgment that the secondary has not only written, but also committed, the data.
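A rough Python sketch of the two acknowledgment policies follows. The Primary and Secondary classes, the strong flag, and the in-memory lists used as logs are illustrative assumptions, not the described system's protocol.

import itertools

class Secondary:
    def __init__(self):
        self.log, self.committed = [], {}

    def write_log(self, record):
        self.log.append(record)
        return "written"                     # ack: data is in the secondary's log

    def commit(self, record):
        self.committed[record["key"]] = record["value"]
        return "committed"                   # ack: data has been replayed and committed

class Primary:
    def __init__(self, secondary):
        self.secondary = secondary
        self.log, self.committed = [], {}
        self.seq = itertools.count(1)

    def put(self, key, value, strong=False):
        # Annotate the data with a unique identifier so the secondary can replay it.
        record = {"id": next(self.seq), "key": key, "value": value}
        self.log.append(record)

        ack = self.secondary.write_log(record)
        if strong:
            # Strong consistency: also wait for the secondary to commit.
            ack = self.secondary.commit(record)

        # Eventual consistency: commit locally once the secondary has written.
        self.committed[key] = value
        return f"success ({ack})"            # acknowledgment returned to the client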
Abstract:
A storage system in an embodiment of this invention comprises a non-volatile storage area for storing write data from a host, a cache area capable of temporarily storing the write data before storing the write data in the non-volatile storage area, and a controller that determines whether to store the write data in the cache area or to store the write data in the non-volatile storage area without storing the write data in the cache area, and stores the write data in the determined area.
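The following Python sketch illustrates the controller's decision described above. The size threshold used as the decision criterion is purely a hypothetical example, since the abstract does not state which conditions the controller evaluates.

CACHE_BYPASS_THRESHOLD = 64 * 1024   # hypothetical criterion: large writes skip the cache

class StorageController:
    def __init__(self):
        self.cache = {}    # cache area: temporary staging for write data
        self.nvs = {}      # non-volatile storage area

    def write(self, lba, data):
        # Decide per request whether to cache the write data or store it
        # directly in the non-volatile area without caching it first.
        if len(data) >= CACHE_BYPASS_THRESHOLD:
            self.nvs[lba] = data
        else:
            self.cache[lba] = data

    def destage(self):
        # Later move cached write data into the non-volatile storage area.
        self.nvs.update(self.cache)
        self.cache.clear()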
Abstract:
A storage device system includes an information processing device, a first storage device equipped with a first storage volume, and a second storage device equipped with a second storage volume. The information processing device and the first storage device are communicatively connected to each other, as are the first storage device and the second storage device. The information processing device is equipped with a first write request section that requests to write data in the first storage device according to a first communications protocol. The first storage device is equipped with a second write request section that requests to write data in the second storage device according to a second communications protocol. The information processing device creates first data including a first instruction to be executed in the second storage device. The information processing device transmits to the first write request section a request to write the first data in the first storage volume according to the first communications protocol. When the first data written in the first storage volume is an instruction to the second storage device, the first storage device transmits to the second write request section a request to write the first data in the second storage volume according to the second communications protocol. The second storage device executes the first instruction contained in the first data written in the second storage volume.
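A compact Python sketch of the "instruction carried as write data" flow is given below. The two write methods standing in for the first and second communications protocols, and the dictionary format used to mark data as an instruction, are assumptions made for illustration.

class SecondStorage:
    def __init__(self):
        self.volume2 = {}

    def write_protocol2(self, addr, data):          # stand-in for the second protocol
        self.volume2[addr] = data
        if isinstance(data, dict) and "instruction" in data:
            self.execute(data["instruction"])       # execute the instruction carried in the data

    def execute(self, instruction):
        print(f"second storage executing: {instruction}")

class FirstStorage:
    def __init__(self, second):
        self.volume1 = {}
        self.second = second

    def write_protocol1(self, addr, data):          # stand-in for the first protocol
        self.volume1[addr] = data
        if isinstance(data, dict) and data.get("target") == "second":
            # Data written to the first volume is an instruction for the second
            # storage device: forward it using the second protocol.
            self.second.write_protocol2(addr, data)

# Information processing device: wrap an instruction in ordinary write data.
second = SecondStorage()
first = FirstStorage(second)
first.write_protocol1(0, {"target": "second", "instruction": "create-snapshot"})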
Abstract:
Provided is a storage system (S) that includes a first storage apparatus (4a) and a second storage apparatus (4b) each connected to a host computer (3a). The first and second storage apparatuses each include a controller (42) and a disk drive (41). The controller manages an encryption status and an encryption key for each of a data volume and a journal volume in the disk drive. The controller in the first storage apparatus receives a write request from the host computer, creates a journal based on the write data, encrypts the journal, and stores the journal, in order, in a storage area in the journal volume. The controller then reads, in that order, the encrypted journal stored in the journal volume, decrypts the journal, and transmits the decrypted journal to the second storage apparatus.
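The Python sketch below illustrates the encrypt-on-store, decrypt-on-transfer journal flow. The XOR routine is only a stand-in for real encryption, and the class and method names are illustrative rather than the patent's terminology.

def xor_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class PrimaryController:
    def __init__(self, journal_key: bytes):
        self.data_volume = {}
        self.journal_volume = []          # ordered storage area for journals
        self.journal_key = journal_key    # encryption key managed per volume

    def handle_write(self, lba: int, data: bytes):
        self.data_volume[lba] = data
        journal = f"{lba}:".encode() + data            # journal built from the write data
        encrypted = xor_crypt(journal, self.journal_key)
        self.journal_volume.append(encrypted)          # stored in write order

    def transfer(self, remote):
        # Read journals in the stored order, decrypt them, and transmit the
        # decrypted journals to the second storage apparatus (here, a list).
        for encrypted in self.journal_volume:
            remote.append(xor_crypt(encrypted, self.journal_key))
        self.journal_volume.clear()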
Abstract:
An asynchronous peer-to-peer data replication method implemented within a replication cluster comprising at least one master node and at least a first client node includes entering an update in a data volume of the master node and storing the update in a master node storage. A first active session in a master log is then updated, and a first message is sent from the master node to the first client node. The first message comprises a first message content and first "piggybacked" data indicating that the first active session in the master log was updated. The first client node receives the first message, registers that the first active session in the master log was updated, and signals internally that the first active session in the master log was updated. The first client node then sends an update request to the master node, the master node processes the update request, and the master node sends the update to the first client node. Finally, the first client node receives the update and updates the first active session in a first client log.
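The Python sketch below walks through the piggyback-then-pull exchange. Direct method calls replace cluster messages, and the single hard-coded session name is an assumption made for brevity.

class MasterNode:
    def __init__(self):
        self.volume = {}                      # master data volume / storage
        self.master_log = {"session-1": []}   # active sessions in the master log

    def enter_update(self, key, value):
        self.volume[key] = value
        self.master_log["session-1"].append((key, value))   # update the active session

    def send_message(self, client, content):
        # Ordinary message with piggybacked "session-1 was updated" data.
        client.receive(content, piggyback={"updated_session": "session-1"})

    def handle_update_request(self, session):
        return list(self.master_log[session])                # ship the update(s)

class ClientNode:
    def __init__(self, master):
        self.master = master
        self.client_log = {"session-1": []}
        self.pending = set()

    def receive(self, content, piggyback):
        # Register and signal internally that the session was updated ...
        self.pending.add(piggyback["updated_session"])
        # ... then pull the update from the master and apply it to the client log.
        for session in list(self.pending):
            self.client_log[session] = self.master.handle_update_request(session)
            self.pending.discard(session)

master = MasterNode()
client = ClientNode(master)
master.enter_update("x", 1)
master.send_message(client, content="hello")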
Abstract:
A first storage system stores information relating to the updating of data stored in that system as a journal. More specifically, the journal is composed of a copy of data that was used for updating and update information such as a write command used during updating. Furthermore, the second storage system acquires the journal via a communication line between the first storage system and the second storage system. The second storage system holds a duplicate of the data held by the first storage system and updates the data corresponding to the data of the first storage system in the data update order of the first storage system by using the journal.
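As a rough Python sketch of journal-based, order-preserving replication (the sequence numbers, tuple-shaped journal entries, and pull_and_apply method are illustrative assumptions, not the patent's terminology):

class FirstStorage:
    def __init__(self):
        self.volume = {}
        self.journal = []        # (sequence, address, data) in update order
        self.seq = 0

    def write(self, address, data):
        self.seq += 1
        self.volume[address] = data
        # Journal entry = a copy of the data plus update information
        # (here, the address and a sequence number standing in for the write command).
        self.journal.append((self.seq, address, data))

class SecondStorage:
    def __init__(self, source: FirstStorage):
        self.source = source
        self.replica = {}        # duplicate of the first storage system's data
        self.applied = 0

    def pull_and_apply(self):
        # Acquire the journal over the "communication line" and replay it in
        # the first storage system's update order.
        for seq, address, data in sorted(self.source.journal):
            if seq > self.applied:
                self.replica[address] = data
                self.applied = seq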