Abstract:
Protecting configuration data in a clustered container system may include, in some embodiments, protecting an ETCD data store in a Kubernetes cluster. A data storage management system addresses the unique needs of protecting an ETCD data store of a target Kubernetes cluster, as well as protecting non-ETCD data payloads. The illustrative data storage management system defines ETCD as a unique kind of workload. ETCD protection is integrated within the data storage management system, which automatically creates data structures and resources within the system for, and provides special-purpose features to protect, ETCD contents and associated security certificates. One of the special-purpose features deploys a temporary data transfer agent within the target Kubernetes cluster to safeguard an ETCD snapshot and transmit its contents, along with the security certificates, to a backup infrastructure that operates outside of the target Kubernetes cluster. The backup infrastructure comprises components deployed by the data storage management system.
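As an illustration of the mechanism described above, the following is a hedged sketch of how a temporary data transfer agent might be deployed inside the target Kubernetes cluster as a short-lived Job that snapshots etcd and ships the snapshot, along with the TLS security certificates, to backup infrastructure outside the cluster. It uses the standard Kubernetes Python client and the real etcdctl CLI; the image name, backup endpoint, and certificate paths are illustrative assumptions, not details from the disclosure.

```python
# Sketch: deploy a temporary "data transfer agent" Job in the target cluster
# that saves an etcd snapshot and transmits it to external backup infrastructure.
from kubernetes import client, config

BACKUP_ENDPOINT = "https://backup-infra.example.com/upload"  # hypothetical

def deploy_etcd_transfer_agent(namespace: str = "kube-system") -> None:
    config.load_kube_config()
    snapshot_cmd = (
        "etcdctl snapshot save /tmp/etcd.db "
        "--endpoints=https://127.0.0.1:2379 "
        "--cacert=/etc/kubernetes/pki/etcd/ca.crt "
        "--cert=/etc/kubernetes/pki/etcd/server.crt "
        "--key=/etc/kubernetes/pki/etcd/server.key "
        f"&& curl -sf -T /tmp/etcd.db {BACKUP_ENDPOINT}"
    )
    container = client.V1Container(
        name="etcd-transfer-agent",
        image="example.com/etcd-transfer-agent:latest",  # hypothetical image
        command=["/bin/sh", "-c", snapshot_cmd],
        volume_mounts=[client.V1VolumeMount(
            name="etcd-certs", mount_path="/etc/kubernetes/pki/etcd")],
    )
    job = client.V1Job(
        api_version="batch/v1", kind="Job",
        metadata=client.V1ObjectMeta(name="etcd-backup-job"),
        spec=client.V1JobSpec(
            ttl_seconds_after_finished=300,  # temporary agent: cleaned up after run
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[container],
                    volumes=[client.V1Volume(
                        name="etcd-certs",
                        host_path=client.V1HostPathVolumeSource(
                            path="/etc/kubernetes/pki/etcd"))]))))
    client.BatchV1Api().create_namespaced_job(namespace, job)
```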
Abstract:
Data protection resources are automatically scaled to the needs of data source(s) in an application orchestrator computing environment, such as a cluster in a Kubernetes deployment. The approach is adaptable to data sources in production clusters or application suites that are not application orchestrator deployments, such as a cloud-based database-as-a-service (DBaaS). A data storage management system protects cluster-based data with an elastic number of data protection resources (e.g., data agents, media agents), which are deployed on demand. The number of data protection resources deployed for a particular job is appropriate to the present workload(s) and depends on a variety of scaling factors. In some embodiments, data protection resources are deployed within the same cluster as the data sources. In other embodiments, a separate infrastructure cluster provides the data protection resources on demand and connects to any number and type of data sources, whether cloud-based or otherwise, without limitation.
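A minimal sketch of the elastic-scaling idea follows: the number of data agents for a job is derived from the current workload and bounded by configured limits. The specific scaling factors and constants (workloads per agent, per-agent data budget) are illustrative assumptions, not values from the disclosure.

```python
# Sketch: choose an on-demand data-agent count per job from scaling factors.
import math
from dataclasses import dataclass

@dataclass
class ScalingFactors:
    app_count: int                 # workloads discovered in the cluster
    total_data_gib: float          # estimated size of data to protect
    max_agents: int = 16           # assumed upper bound per job
    gib_per_agent: float = 256.0   # assumed per-agent throughput budget

def agents_for_job(f: ScalingFactors) -> int:
    by_size = math.ceil(f.total_data_gib / f.gib_per_agent)
    by_apps = math.ceil(f.app_count / 4)   # e.g., 4 workloads per agent
    return max(1, min(f.max_agents, max(by_size, by_apps)))

# Example: 10 workloads totalling ~1 TiB -> 4 agents deployed on demand.
print(agents_for_job(ScalingFactors(app_count=10, total_data_gib=1024)))
```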
Abstract:
A method and system for communicating with IoT devices to gather information related to device failure or error(s) is disclosed. The system makes a copy of at least a portion of the device's non-volatile memory and/or receives IoT device data (e.g., sensor data and/or log files) from an IoT device that recently failed. The system determines, for example, which log files and/or sensor data the IoT device created before and/or after the failure. After gathering this information, the system stores it in a database and sends it to the IoT device manufacturer for further analysis and diagnostics, so that the manufacturer can troubleshoot the failure and send a fix or software update to the IoT device.
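The following sketch illustrates the gathering step under stated assumptions: it selects the log and sensor records a device created in a window around its failure time and packages them as the payload that would be stored and forwarded to the manufacturer. The table names, schema, and capture window are hypothetical.

```python
# Sketch: collect an IoT device's logs and sensor data around a failure time.
import sqlite3
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)  # assumed before/after capture window

def collect_failure_context(db_path: str, device_id: str,
                            failed_at: datetime) -> dict:
    lo = (failed_at - WINDOW).isoformat()
    hi = (failed_at + WINDOW).isoformat()
    with sqlite3.connect(db_path) as db:
        logs = db.execute(
            "SELECT ts, message FROM device_logs "
            "WHERE device_id = ? AND ts BETWEEN ? AND ?",
            (device_id, lo, hi)).fetchall()
        sensors = db.execute(
            "SELECT ts, sensor, value FROM sensor_readings "
            "WHERE device_id = ? AND ts BETWEEN ? AND ?",
            (device_id, lo, hi)).fetchall()
    # Payload that would be stored and sent to the manufacturer for diagnostics.
    return {"device_id": device_id, "failed_at": failed_at.isoformat(),
            "logs": logs, "sensor_data": sensors}
```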
Abstract:
According to certain aspects, a method of creating customized bootable images for client computing devices in an information management system can include: creating a backup copy of each of a plurality of client computing devices, including a first client computing device; subsequent to receiving a request to restore the first client computing device to its state at a first time, creating a customized bootable image that is configured to directly restore the first client computing device to the state at the first time, wherein the customized bootable image includes system state specific to the first client computing device at the first time and one or more drivers associated with hardware existing at the time of restore on a computing device to be rebooted; and rebooting the computing device to the state of the first client computing device at the first time from the customized bootable image.
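The sketch below illustrates the assembly logic only: the backed-up system state at the requested point in time is combined with drivers matched to the hardware present on the machine being rebooted, which may differ from the original client. The data shapes and the driver catalog are assumptions for illustration.

```python
# Sketch: assemble a customized bootable image from backed-up system state
# plus drivers for the hardware existing at restore time.
from dataclasses import dataclass, field

@dataclass
class BootableImage:
    client_id: str
    restore_point: str             # e.g., ISO-8601 time of the backup
    system_state: dict             # configuration captured in the backup
    drivers: list = field(default_factory=list)

DRIVER_CATALOG = {                 # hypothetical hardware-id -> driver map
    "PCI\\VEN_8086&DEV_15B8": "e1000e.sys",
    "PCI\\VEN_15AD&DEV_07C0": "pvscsi.sys",
}

def build_image(client_id, restore_point, system_state, target_hw_ids):
    # Include only drivers for hardware present on the device to be rebooted.
    drivers = [DRIVER_CATALOG[h] for h in target_hw_ids if h in DRIVER_CATALOG]
    return BootableImage(client_id, restore_point, system_state, drivers)
```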
Abstract:
Recovery points can be used for replicating a virtual machine and reverting the virtual machine to a different state. A filter driver can monitor and capture input/output commands between a virtual machine and a virtual machine disk. The captured input/output commands can be used to create a recovery point. The recovery point can be associated with a bitmap that may be used to identify data blocks that have been modified between two versions of the virtual machine. Using this bitmap, a virtual machine may be reverted or restored to a different state by replacing modified data blocks and without replacing the entire virtual machine disk.
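A minimal sketch of the bitmap-driven revert follows: only the blocks the bitmap marks as modified between the two virtual machine versions are copied back from the recovery point, so the virtual disk is never rewritten in full. The block size and flat-file disk layout are illustrative assumptions.

```python
# Sketch: revert a virtual disk using a modified-block bitmap.
BLOCK_SIZE = 4096  # assumed block granularity tracked by the filter driver

def revert_with_bitmap(disk_path: str, recovery_path: str,
                       modified_blocks: set[int]) -> None:
    with open(recovery_path, "rb") as src, open(disk_path, "r+b") as dst:
        for block in sorted(modified_blocks):
            offset = block * BLOCK_SIZE
            src.seek(offset)
            dst.seek(offset)
            dst.write(src.read(BLOCK_SIZE))  # replace only the changed blocks
```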
Abstract:
A pseudo-storage-device driver is employed to configure pseudo-volumes that correspond to respective snapshots in a storage array. Each pseudo-volume is mounted as a recovery point instead of the corresponding snapshot. Instead of writing changes to the snapshots, the changes—typically modifications to metadata associated with the snapshot—are managed via the pseudo-volume. Metadata changes that arise in the context of mapping, mounting, and/or using a snapshot are written to the pseudo-volume, in a data structure referred to as a “private store.” Information management operations that need metadata associated with the snapshot are directed to the private store for the latest updates to the metadata. After the information management operation ends, the pseudo-volumes are unmounted and the updates in the private store are discarded. Because no changes were made to the snapshot, no changes need to be reversed. Accordingly, the illustrative system preserves the integrity of the snapshots through any number of information management operations that may generate metadata changes. Moreover, because the illustrative system is agnostic as to whether a given storage device is persistent-type or not, there is less burden on administration and also less risk of error.
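The following is an illustrative sketch of the "private store" behavior, assuming a simple in-memory store: metadata reads are answered from the private store first and fall back to the immutable snapshot metadata; writes never touch the snapshot; unmounting discards the private store, so nothing needs to be reversed.

```python
# Sketch: pseudo-volume that shields a snapshot from metadata changes.
class PseudoVolume:
    def __init__(self, snapshot_metadata: dict):
        self._snapshot = snapshot_metadata  # read-only snapshot metadata
        self._private_store = {}            # changes made while mounted

    def read_meta(self, key):
        # Operations needing metadata are directed to the private store
        # for the latest updates, then to the unmodified snapshot.
        if key in self._private_store:
            return self._private_store[key]
        return self._snapshot[key]

    def write_meta(self, key, value):
        self._private_store[key] = value    # the snapshot itself is untouched

    def unmount(self):
        self._private_store.clear()         # discard updates; nothing to undo
```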
Abstract:
Systems and methods are provided which perform a file level restore by utilizing existing operating system components (e.g., file system drivers) that are natively installed on the target computing device. These components can be used to mount and/or interpret a secondary copy of the file system. For instance, the system can instantiate an interface object (e.g., a device node such as a pseudo device, device file or special file) on the target client which includes file system metadata corresponding to the backed up version of the file system. The interface provides a mechanism for the operating system to mount the secondary copy and perform file level access on the secondary copy, e.g., to restore one or more selected files.
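A Linux-flavored sketch of this flow, under stated assumptions: the secondary copy is exposed through a loop device so the operating system's native file system driver can mount and interpret it, after which selected files are copied out. The paths are hypothetical, and the patent's interface object may be any pseudo device, device file, or special file rather than a loop mount.

```python
# Sketch: mount a secondary copy with native OS drivers and restore files.
import shutil
import subprocess

def restore_files(secondary_copy_img: str, mount_point: str,
                  files: list[str], dest_dir: str) -> None:
    # Let the native file system driver interpret the backed-up copy (read-only).
    subprocess.run(["mount", "-o", "ro,loop", secondary_copy_img,
                    mount_point], check=True)
    try:
        for rel in files:  # file-level access on the secondary copy
            shutil.copy2(f"{mount_point}/{rel}", dest_dir)
    finally:
        subprocess.run(["umount", mount_point], check=True)
```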
Abstract:
A streamlined approach analyzes block-level backups of VM virtual disks and creates both coarse and fine indexes of backed-up VM data files in the block-level backups. The indexes (collectively the "content index") enable granular searching by filename, by file attributes (metadata), and/or by file contents, and further enable granular live browsing of backed-up VM files. Thus, by using the illustrative data storage management system, ordinary block-level backups of virtual disks are "opened to view" through indexing. Any block-level copies can be indexed according to the illustrative embodiments, including file system block-level copies. The indexing occurs offline in an illustrative data storage management system, after VM virtual disks are backed up into block-level backup copies, and therefore the indexing does not degrade the source VM's performance. The disclosed approach is widely applicable to VMs executing in cloud computing environments and/or in non-cloud data centers. The illustrative content indexing is accomplished without restoring the VM data files being indexed to a staging location.
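A minimal sketch of the two-level indexing follows, assuming the block-level backup has already been exposed as a browsable file tree (e.g., via a mounted pseudo-disk) rather than restored to a staging location. The coarse index maps each file to its name and attributes; the fine index inverts content terms to files. The index layout is an illustrative assumption.

```python
# Sketch: build coarse (metadata) and fine (content) indexes over a backup
# that is exposed as a directory tree.
import os

def index_backup(root: str):
    coarse, fine = {}, {}
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            # Coarse index: filename and attributes (metadata) per file.
            coarse[path] = {"name": name, "size": st.st_size,
                            "mtime": st.st_mtime}
            # Fine index: inverted map from content terms to files,
            # enabling granular search by file contents.
            try:
                with open(path, "r", errors="ignore") as f:
                    for term in set(f.read().split()):
                        fine.setdefault(term, set()).add(path)
            except OSError:
                pass  # unreadable entries are skipped in this sketch
    return coarse, fine
```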