Abstract:
The parallel disk drive array data storage subsystem maps between virtual and physical data storage devices and schedules the writing of data to these devices. The data storage subsystem functions as a conventional large form factor disk drive memory, using an array of redundancy groups, each containing N+M disk drives. A performance improvement is obtained by eliminating redundancy data updates in the redundancy group: modified virtual track instances are written into previously emptied logical tracks and the data contained in the previous virtual track instance location is marked as invalid. Logical cylinders containing a mixture of valid and invalid virtual tracks are emptied by writing all the valid virtual tracks into a previously emptied logical cylinder as a background process.
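The write and free-space-collection behavior described above can be illustrated with a minimal sketch. This is not the patented implementation; the class and method names (MappedSubsystem, write_virtual_track, collect_free_space), the cylinder capacity, and the omission of data payloads and redundancy calculation are all simplifying assumptions.

```python
class LogicalCylinder:
    def __init__(self, cyl_id, capacity=8):
        self.cyl_id = cyl_id
        self.capacity = capacity
        self.tracks = {}                     # slot -> (virtual track id, valid flag)

    def free_slots(self):
        return self.capacity - len(self.tracks)

    def valid_tracks(self):
        return [vt for vt, valid in self.tracks.values() if valid]


class MappedSubsystem:
    def __init__(self, n_cylinders=4):
        self.cylinders = [LogicalCylinder(i) for i in range(n_cylinders)]
        self.vtd = {}                        # virtual track directory: id -> (cylinder, slot)

    def _open_cylinder(self):
        # any cylinder that still has empty slots (raises if none remain)
        return next(c for c in self.cylinders if c.free_slots() > 0)

    def write_virtual_track(self, vt_id):
        # mark the previous instance invalid instead of updating it in place,
        # so no redundancy data for the old location has to be rewritten
        if vt_id in self.vtd:
            old_cyl, old_slot = self.vtd[vt_id]
            self.cylinders[old_cyl].tracks[old_slot] = (vt_id, False)
        cyl = self._open_cylinder()
        slot = len(cyl.tracks)
        cyl.tracks[slot] = (vt_id, True)
        self.vtd[vt_id] = (cyl.cyl_id, slot)

    def collect_free_space(self):
        # background process: empty any cylinder holding a mixture of valid
        # and invalid tracks by rewriting its valid tracks elsewhere
        for cyl in self.cylinders:
            flags = [valid for _, valid in cyl.tracks.values()]
            if flags and not all(flags):
                survivors = cyl.valid_tracks()
                cyl.tracks.clear()
                for vt_id in survivors:
                    del self.vtd[vt_id]
                    self.write_virtual_track(vt_id)


ss = MappedSubsystem()
ss.write_virtual_track("vt-1")
ss.write_virtual_track("vt-2")
ss.write_virtual_track("vt-1")       # rewrite: the first copy is marked invalid
ss.collect_free_space()              # reclaims the cylinder with mixed contents
```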
Abstract:
A disk storage system that writes multiple copies of records directed to user-specified volumes. A plurality of spaced-apart control units (112, 113), interconnected by direct data links (106), and a corresponding plurality of sets of recording means (109, 111) communicate over the direct data links (106) when a write request is received by one control unit (112, 113), causing one volume in each set of recording means (109, 111) to write a copy of the received record.
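A minimal sketch of this duplexed write path, assuming a hypothetical ControlUnit class and omitting the recording hardware: a write accepted by either control unit is forwarded once over the direct data link so that a volume in each set of recording means stores a copy.

```python
class ControlUnit:
    def __init__(self, name):
        self.name = name
        self.volumes = {}          # volume id -> list of recorded records
        self.peer = None           # the other control unit on the direct data link

    def attach_peer(self, other):
        self.peer, other.peer = other, self

    def write(self, volume_id, record, forwarded=False):
        self.volumes.setdefault(volume_id, []).append(record)
        if not forwarded and self.peer is not None:
            # forward the record over the direct data link so the peer's
            # copy of the duplexed volume is updated as well
            self.peer.write(volume_id, record, forwarded=True)


cu_a, cu_b = ControlUnit("112"), ControlUnit("113")
cu_a.attach_peer(cu_b)
cu_a.write("VOL001", b"record-1")                          # host write arrives at one unit
assert cu_a.volumes["VOL001"] == cu_b.volumes["VOL001"]    # both sets hold a copy
```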
Abstract:
This apparatus (100) makes use of a disk drive array to store the data records for the associated host processor (11, 12). This disk drive array emulates the operation of a large form factor disk drive by using a plurality of interconnected small form factor disk drives (12*-*). These small form factor disk drives (12*-*) are configured into redundancy groups (421-428), each of which contains n+m disk drives for storing data records and redundancy information thereon. This configuration is significantly more reliable than a large form factor disk drive. However, in order to maintain compatibility with host processors (11, 12) that request the duplex copy group feature, the phantom duplex copy group apparatus of the present invention mimics the creation of a duplex copy group in this dynamically mapped data storage subsystem (100) using a disk array and a phantom set of pointers (414) that mimic the data storage devices (421) on which the data records are stored.
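A minimal sketch of the phantom duplex copy idea, using hypothetical structures: in a dynamically mapped subsystem, the "secondary" of a duplex pair can be a phantom set of pointers whose directory entries resolve to the same physical track instances as the primary, rather than a second physical copy.

```python
class MappedStorage:
    def __init__(self):
        self.vtd = {}                     # (virtual volume, track) -> physical location

    def write(self, volume, track, location):
        self.vtd[(volume, track)] = location

    def create_phantom_duplex(self, primary, secondary):
        # the "secondary copy" is only a phantom set of pointers that shadow
        # the primary's mapping; no data is physically duplicated
        for (vol, trk), loc in list(self.vtd.items()):
            if vol == primary:
                self.vtd[(secondary, trk)] = loc

    def read(self, volume, track):
        return self.vtd[(volume, track)]


storage = MappedStorage()
storage.write("PRIM01", 0, ("redundancy-group-421", "logical-track-7"))
storage.create_phantom_duplex("PRIM01", "DUPX01")
# the host sees both volumes of a duplex pair, yet both resolve to one stored instance
assert storage.read("PRIM01", 0) == storage.read("DUPX01", 0)
```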
Abstract:
The data storage subsystem (100) is implemented using an array of data storage elements (122-* to 125-*) which vary in data storage characteristics and/or data storage capacity. Control apparatus (101) automatically compensates for any nonuniformity among the data storage elements (122-* to 125-*) by selecting a set of physical characteristics that define a common data storage element format. The selected set of physical characteristics may not match any of the disk drives (122-1 to 122-n+m) but each disk drive (122-1 to 122-n+m) can emulate these selected characteristics. This capability enables the disk drives (122-* to 125-*) in the data storage subsystem (100) to be replaced by nonidentical disk drives in a nondisruptive manner to provide continuous data availability.
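A minimal sketch of the compensation step described above, with hypothetical drive figures and names: the control apparatus inspects the drives actually installed and selects one common emulated geometry that every drive can support, so nonidentical replacement drives remain usable.

```python
from dataclasses import dataclass


@dataclass
class DriveCharacteristics:
    cylinders: int
    tracks_per_cylinder: int
    bytes_per_track: int


def select_common_format(drives):
    # the common format may match none of the drives exactly; it only has to
    # be emulatable by all of them, so take the minimum of each characteristic
    return DriveCharacteristics(
        cylinders=min(d.cylinders for d in drives),
        tracks_per_cylinder=min(d.tracks_per_cylinder for d in drives),
        bytes_per_track=min(d.bytes_per_track for d in drives),
    )


installed = [
    DriveCharacteristics(1200, 15, 47476),   # hypothetical drive models
    DriveCharacteristics(1000, 14, 52000),
    DriveCharacteristics(1100, 15, 50000),
]
common = select_common_format(installed)
print(common)   # every installed drive can emulate this geometry; excess capacity goes unused
```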
Abstract:
The deleted dataset space release system provides facilities in a dynamically mapped virtual memory data storage subsystem (100) to immediately release the physical space occupied by a deleted dataset for use by the data storage subsystem (100) to store subsequently received data files. This system also provides data security by preventing unauthorized access to the data of scratched data files, both in cache memory (113) and on the data storage devices (122-125). The deleted dataset space release system utilizes a user exit in the host processor data file scratch routine to transmit information to the data storage subsystem (100) indicative of the host processor data file scratch activity. Existing channel command words are used in a manner that is transparent to the host processor (11, 12). The data storage subsystem (100) thereby immediately receives an indication that the host processor (11, 12) is scratching a data file from the volume table of contents of a virtual volume. The data storage subsystem (100) can then concurrently scratch this data file from the virtual track directory (401) contained in the data storage subsystem (100) and thereby release the physical storage space occupied by this scratched data file.
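A minimal sketch of the scratch-notification path, with hypothetical names: the host's scratch user exit reports the deleted data file, and the subsystem drops the matching virtual track directory entries, purges any cached copies, and returns the physical space to the free pool.

```python
class DataStorageSubsystem:
    def __init__(self):
        self.vtd = {}            # (virtual volume, dataset name) -> list of physical tracks
        self.cache = {}          # physical track -> cached data
        self.free_tracks = set()

    def notify_scratch(self, volume, dataset):
        """Called when the host's scratch user exit reports a deleted data file."""
        for track in self.vtd.pop((volume, dataset), []):
            self.cache.pop(track, None)      # no stale data remains readable from cache
            self.free_tracks.add(track)      # physical space is immediately reusable


ss = DataStorageSubsystem()
ss.vtd[("VVOL01", "PAYROLL.DATA")] = ["t1", "t2"]
ss.cache["t1"] = b"..."
ss.notify_scratch("VVOL01", "PAYROLL.DATA")
assert ss.free_tracks == {"t1", "t2"} and "t1" not in ss.cache
```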
Abstract:
The use of a dynamically mapped virtual memory system (100) permits the storage of data so that each data record occupies only the physical space required for the data. Furthermore, the data storage subsystem (100) manages the allocation of physical space on the disk drives (122-125) and does not rely on the file extent defined in the count key data format. Data compaction apparatus is provided to remove the gaps contained in the stream of count key data records received from the host processor (11, 12). A data compression algorithm (203-0) is then used to compress the received data into a compressed format for storage on the disk drives (122-125). It is this compacted, compressed data that is finally stored on the disk drives (122-125). Furthermore, any data record received from the host processor (11, 12) that contains no data in its user data field is simply listed in the virtual memory map as a null field occupying no physical space on the disk drives (122-125). The data storage control (101), through its mapping capability, stores the actual data in the minimum physical space required, overcoming the limitations imposed on large form factor disk drives by the use of count key data format data records. However, the data storage subsystem (100) returns this stored data to the host processor (11, 12) in count key data format through a data record reformatting process once the stored compacted, compressed data is staged to the cache memory (113) for access by the host processor (11, 12). The data storage subsystem (100) is operationally independent of the host processor (11, 12), yet performs as if it were a standard operationally dependent large form factor disk subsystem.
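A minimal sketch of this pipeline: zlib stands in for the unspecified compression algorithm (203-0), and the record layout (including the fixed 8-byte key used when rebuilding a staged record) is a hypothetical simplification. Records with an empty user data field are mapped to a null entry that consumes no physical space.

```python
import zlib


def store_ckd_records(records, virtual_map, physical_store):
    """records: list of (count, key, data) tuples from the host processor."""
    for count, key, data in records:
        if not data:                          # no user data: a null map entry only
            virtual_map[count] = None
            continue
        compacted = key + data                # inter-field gaps are not stored
        physical_store[count] = zlib.compress(compacted)
        virtual_map[count] = count


def stage_record(count, virtual_map, physical_store):
    """Rebuild a count key data image when the record is staged to cache."""
    if virtual_map.get(count) is None:
        return (count, b"", b"")              # reconstructed empty record
    raw = zlib.decompress(physical_store[count])
    return (count, raw[:8], raw[8:])          # hypothetical fixed 8-byte key


vmap, store = {}, {}
store_ckd_records([(b"R1", b"KEY00001", b"payload"), (b"R2", b"KEY00002", b"")],
                  vmap, store)
print(stage_record(b"R1", vmap, store))       # count key data format returned to the host
```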
Abstract:
The disk drive memory (100) of the present invention uses a large plurality of small form factor disk drives (130) to implement an inexpensive, high performance, high reliability disk drive memory that emulates the format and capability of large form factor disk drives. The plurality of disk drives (130) are switchably interconnectable (121) to form redundancy groups of N+M parallel connected disk drives to store data thereon. The N+M disk drives are used to store the N segments of each data word plus M redundancy segments. In addition, a pool of R backup disk drives (130) is maintained to automatically substitute a replacement disk drive for a disk drive in a redundancy group that fails during operation. The number N of data segments in each data redundancy group can be varied throughout the disk drive memory to thereby match the characteristics of the input data or operational parameters within the disk drive memory. Furthermore, a group of U unassigned disk drives (130) can be maintained as a stock of disk drives that can be powered up as needed and assigned to either a redundancy group or to the pool of backup disk drives.
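A minimal sketch of an N+1 redundancy group with a pool of backup drives. The abstract allows M redundancy segments and does not fix a coding scheme; simple XOR parity (M = 1) is used here only as an illustration, and all names are hypothetical.

```python
def xor_parity(segments):
    # XOR equal-length byte strings together to form the redundancy segment
    parity = bytes(len(segments[0]))
    for seg in segments:
        parity = bytes(a ^ b for a, b in zip(parity, seg))
    return parity


class RedundancyGroup:
    def __init__(self, n_data_drives, backup_pool):
        self.drives = [[] for _ in range(n_data_drives + 1)]   # N data drives + 1 parity drive
        self.backup_pool = backup_pool                         # pool of R spare drives

    def write_word(self, segments):
        # segments: N equal-length byte strings, one per data drive
        for drive, seg in zip(self.drives, segments):
            drive.append(seg)
        self.drives[-1].append(xor_parity(segments))

    def drive_failed(self, index):
        # substitute a spare and reconstruct its contents from the survivors
        self.drives[index] = self.backup_pool.pop()
        survivors = [d for i, d in enumerate(self.drives) if i != index]
        for stripe in range(len(survivors[0])):
            self.drives[index].append(xor_parity([d[stripe] for d in survivors]))


group = RedundancyGroup(n_data_drives=3, backup_pool=[[], []])
group.write_word([b"ab", b"cd", b"ef"])
lost = list(group.drives[1])
group.drive_failed(1)
assert group.drives[1] == lost          # contents rebuilt from the remaining drives
```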
Abstract:
The intelligent data storage manager functions to combine the non-homogeneous physical devices contained in a data storage subsystem to create a logical device with new and unique quality of service characteristics that satisfy the criteria for the policies appropriate for the present data object. In particular, if there is presently no logical device that is appropriate for use in storing the present data object, the intelligent data storage manager defines a new logical device using existing physical and/or logical device definitions as component building blocks to provide the appropriate characteristics to satisfy the policy requirements. The intelligent data storage manager uses weighted values that are assigned to each of the presently defined logical devices to produce a best fit solution to the requested policies in an n-dimensional best fit matching algorithm. The resulting logical device definition is then implemented by dynamically interconnecting the logical devices that were used as the components of the newly defined logical device to store the data object.
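A minimal sketch of the selection step, with hypothetical characteristics, weights, and threshold: each candidate logical device is scored against the requested policy as a weighted distance over n characteristic axes, and the best-fitting device is chosen; if no candidate scores well enough, the caller would instead compose a new logical device from existing device definitions.

```python
def weighted_fit(requested, candidate, weights):
    # lower score = better fit; each axis contributes its weighted deviation
    return sum(weights[axis] * abs(requested[axis] - candidate.get(axis, 0.0))
               for axis in requested)


def best_fit_device(requested, devices, weights, threshold=1.0):
    name, characteristics = min(devices.items(),
                                key=lambda item: weighted_fit(requested, item[1], weights))
    if weighted_fit(requested, characteristics, weights) <= threshold:
        return name
    return None          # no adequate fit: define a new composite logical device instead


policy  = {"reliability": 0.99, "throughput_mb_s": 40, "cost": 0.2}
weights = {"reliability": 10.0, "throughput_mb_s": 0.05, "cost": 1.0}
devices = {
    "mirrored-pair": {"reliability": 0.999, "throughput_mb_s": 25, "cost": 0.8},
    "raid-5-group":  {"reliability": 0.995, "throughput_mb_s": 45, "cost": 0.3},
    "single-drive":  {"reliability": 0.97,  "throughput_mb_s": 30, "cost": 0.1},
}
print(best_fit_device(policy, devices, weights))    # -> "raid-5-group"
```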