SYSTEMS AND METHODS FOR PERFORMING STORAGE OPERATIONS USING NETWORK ATTACHED STORAGE

20200278792 · 2020-09-03

    Abstract

    Systems and methods for performing hierarchical storage operations on electronic data in a computer network are provided. In one embodiment, the present invention may store electronic data from a network device to a network attached storage (NAS) device pursuant to certain storage criteria. The data stored on the NAS may be migrated to a secondary storage and a stub file having a pointer pointing to the secondary storage may be put at the location the data was previously stored on the NAS. The stub file may redirect the network device to the secondary storage if a read request for the data is received from the network device.

    Claims

    1. (canceled)

    2. A method for controlling a networked storage device, the method comprising: with a computing device, controlling a first storage device coupled to a network to write electronic data to a first storage location in the first storage device, the first storage device comprising first computer hardware and an internal operating system, the electronic data generated by a software application running on a first client computing device of a plurality of client computing devices coupled to the network; consulting a stored storage criteria to determine that the electronic data is to be moved to a second storage location in a second storage device different than the first storage device; in response to the determination that the electronic data is to be moved to the second storage location, copying the electronic data to the second storage location with a computing device; subsequent to said copying, receiving a read request to access the electronic data from the first storage device; in response to the read request, controlling the first storage device with a computing device to access re-direction information from the first storage device, the re-direction information pointing to the second storage location on the second storage device; using the re-direction information to control the second storage device with a computing device to obtain the electronic data from the second storage location without transferring the electronic data to the first storage location in the first storage device; and in response to a write request received subsequent to the obtaining the electronic data, storing a modified version of the electronic data in the second storage device, and updating the re-direction information without transferring the electronic data to the first storage location in the first storage device.

    3. The method of claim 2 wherein the first storage device is configured to support one or more of the Unix network file system (NFS) protocol or server message block/common Internet file system (SMB/CIFS) protocol.

    4. The method of claim 3 wherein the first storage device is configured to support one or more of Ethernet and Transmission Control Protocol/Internet Protocol (TCP/IP).

    5. The method of claim 2 wherein said copying is part of an archive operation.

    6. The method of claim 2 wherein said read request is satisfied without de-migrating the electronic data from the second storage device to the first storage device.

    7. The method of claim 2 further comprising decompressing or otherwise decoding the electronic data obtained from the second storage device.

    8. The method of claim 2 wherein the first storage device is a network attached storage (NAS) device and further comprises an internal file management system.

    9. The method of claim 2 wherein the re-direction information is included in a Windows shortcut or a Unix softlink.

    10. A system comprising: a first storage device coupled to a network and comprising first computer hardware and an internal operating system; one or more computing devices externally located with respect to the first storage device configured to: control the first storage device to write electronic data to a first storage location in the first storage device, the electronic data generated by a software application running on a first client computing device of a plurality of client computing devices coupled to the network; consult a stored storage criteria to determine that the electronic data is to be moved to a second storage location in a second storage device different than the first storage device; in response to the determination that the electronic data is to be moved to the second storage location, copy the electronic data to the second storage location with a computing device; subsequent to said copying, receive a read request to access the electronic data; in response to the read request, control the first storage device with a computing device to access re-direction information from the first storage device, the re-direction information pointing to the second storage location on the second storage device; use the re-direction information to control the second storage device with a computing device to obtain the electronic data from the second storage location, without transferring the electronic data to the first storage location in the first storage device; and in response to a write request received subsequent to the obtaining the electronic data, store a modified version of the electronic data in the second storage device, and update the re-direction information without transferring the electronic data to the first storage location in the first storage device.

    11. The system of claim 10 wherein the first storage device is configured to support one or more of the Unix network file system (NFS) protocol or server message block/common Internet file system (SMB/CIFS) protocol.

    12. The system of claim 10 wherein the first storage device is configured to support one or more of Ethernet and Transmission Control Protocol/Internet Protocol (TCP/IP).

    13. The system of claim 10 wherein the re-direction information is included in a Windows shortcut or a Unix softlink.

    14. The system of claim 10 wherein said copying is part of an archive operation.

    15. The system of claim 10 wherein said read request is satisfied without de-migrating the electronic data from the second storage device to the first storage device.

    16. The system of claim 10 wherein the one or more computing devices are further configured to decompress or otherwise decode the electronic data obtained from the second storage device.

    17. The system of claim 10 wherein the first storage device further comprises an internal file management system.

    18. The system of claim 10 wherein the re-direction information is contained in a Windows shortcut or a Unix softlink.

    19. Non-transitory computer-readable storage comprising computer-readable instructions that, when executed, cause computer hardware to perform operations defined by the computer-readable instructions, the operations comprising: with a computing device, controlling a first storage device coupled to a network to write electronic data to a first storage location in the first storage device, the first storage device comprising first computer hardware and an internal operating system, the electronic data generated by a software application running on a first client computing device of a plurality of client computing devices coupled to the network; consulting a stored storage criteria to determine that the electronic data is to be moved to a second storage location in a second storage device different than the first storage device; in response to the determination that the electronic data is to be moved to the second storage location, copying the electronic data to the second storage location with a computing device; subsequent to said copying, receiving a read request to access the electronic data from the first storage device; in response to the read request, controlling the first storage device with a computing device to access re-direction information from the first storage device, the re-direction information pointing to the second storage location on the second storage device; using the re-direction information to control the second storage device with a computing device to obtain the electronic data from the second storage location without transferring the electronic data to the first storage location in the first storage device; and in response to a write request received subsequent to the obtaining the electronic data, storing a modified version of the electronic data in the second storage device, and updating the re-direction information without transferring the electronic data to the first storage location in the first storage device.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0022] The invention is illustrated in the figures of the accompanying drawings which are meant to be exemplary and not limiting, in which like references are intended to refer to like or corresponding parts throughout, and in which:

    [0023] FIG. 1 is a diagrammatic representation of basic components and data flow of prior art HSM systems;

    [0024] FIG. 2 is a block diagram of a system constructed in accordance with the principles of the present invention for storing and retrieving electronic data from primary and secondary storage locations;

    [0025] FIG. 3 is a flow chart illustrating some of the steps for performing storage and retrieval operations on electronic data in a computer network according to an embodiment of the invention;

    [0026] FIG. 4 is a flow chart illustrating some of the steps performed when a system application attempts to access electronic data moved from primary storage to secondary storage in accordance with an embodiment of the present invention;

    [0027] FIG. 5 is a flow chart illustrating some of the steps performed when a system application attempts to alter electronic data moved from primary storage to secondary storage in accordance with an embodiment of the present invention; and

    [0028] FIG. 6 is a chart illustrating steps performed in a Solaris-based embodiment of the system shown in FIG. 2.

    DETAILED DESCRIPTION

    [0029] An embodiment of a system 50 constructed in accordance with the principles of the present invention is shown in FIG. 2. As shown, system 50 may include a NAS device 100, a network 90, network devices 85, data migrators 95, primary storage device 102, secondary storage devices 120 and 130, storage area network (SAN) 70, media agent 97, storage device 140 and storage manager 180. NAS 100 may be coupled to network 90, which may itself also include or be part of several other network types, including, without limitation, Ethernet, IP, InfiniBand, Wi-Fi, wireless, Bluetooth or token-ring, and other types.

    [0030] One or more network devices 85 may be coupled to network 90. Each network device 85 may include a client application, a client computer, a host computer, a mainframe computer, a mid-range computer, or any other type of device capable of being connected in a network and running applications which produce electronic data that is periodically stored. Such data may sometimes be referred to as production-level data. In some embodiments, a network device 85 may have the ability to generate electronic data requests, such as file requests and system requests, to NAS device 100 through the network 90.

    [0031] NAS device 100 may include, and/or be connected to, a primary storage device 102 such as a hard disk or other memory that provides relatively high-speed data access (as compared to secondary storage systems). Primary storage device 102 may include additional storage for NAS device 100 (which may itself include some internal storage), and may be the first network storage device accessed by network devices 85.

    [0032] As shown in FIG. 2, NAS device 100 may include one or more data migrators 95, each of which may be implemented as a software program operating on NAS 100, as an external computer connected to NAS 100, or any combination of the two implementations. Data migrator 95 may be responsible for storing electronic data generated by a network device 85 in primary storage device 102, or other memory location in NAS device 100, based on a set of storage criteria specified by a system user (e.g., storage policy, file size, age, type, etc.). Moreover, data migrators 95 may form a list or otherwise keep track of all qualifying data within network devices 85 and copy that data to primary storage device 102 as necessary (e.g., in a backup or archiving procedure, discussed in more detail below).

    [0033] A storage policy (or criteria) is generally a data structure or other information that includes a set of preferences and other storage criteria for performing a storage operation. The preferences and storage criteria may include, but are not limited to: a storage location, relationships between system components, network pathway(s) to utilize, retention policies, data characteristics, compression or encryption requirements, preferred system components to utilize in a storage operation, and other criteria relating to a storage operation. A storage policy may be stored to a storage manager index, to archive media as metadata for use in restore operations or other storage operations, or to other locations or components of the system.
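
    For illustration only, such a storage policy might be represented as a simple record; in the following minimal C sketch the field names and types are assumptions rather than part of the described system:

        /* Hypothetical sketch of a storage policy record; field names and
         * types are illustrative assumptions only. */
        struct storage_policy {
            char storage_location[256];  /* target device or path              */
            char network_pathway[256];   /* preferred route for the transfer   */
            int  retention_days;         /* how long copies are retained       */
            long max_file_size;          /* size threshold used as a criterion */
            int  compress;               /* nonzero: compress before storing   */
            int  encrypt;                /* nonzero: encrypt before storing    */
        };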

    [0034] Storage operations, which may generally include data migration and archiving operations, may involve some or all of the following, without limitation: creation, storage, retrieval, migration, deletion, and tracking of primary or production volume data, secondary volume data, primary copies, secondary copies, auxiliary copies, snapshot copies, backup copies, incremental copies, differential copies, synthetic copies, HSM copies, archive copies, Information Lifecycle Management (ILM) copies, and other types of copies and versions of electronic data.

    [0035] De-migration as used herein generally refers to data retrieval-type operations and may occur when electronic data that has been previously transferred from a first location to a second location is transferred back or otherwise restored to the first location. For example, data stored on NAS 100, migrated to secondary storage, and then returned to NAS 100 may be considered de-migrated. De-migration may also occur in other contexts, for example, when data is migrated from one tier of storage to another tier of storage (e.g., from RAID storage to tape storage) based on aging policies in an ILM context, etc. Thus, if it were desired to access data that had been migrated to a tape, that data could be de-migrated from the tape back to RAID, etc.

    [0036] In some embodiments, data migrators 95 may also monitor or otherwise keep track of electronic data stored in primary storage 102 for possible archiving in secondary storage devices 120 and 130. In such embodiments, some or all data migrators 95 may periodically scan primary storage device 102 searching for data that meets a set of storage or archiving criteria. If certain data on device 102 satisfies a set of established archiving criteria, data migrator 95 may discover certain information regarding that data and then migrate it (i.e., coordinate the transfer of the data or compressed versions of the data) to secondary storage devices, which may include tape libraries, magnetic media, optical media, or other storage devices. Moreover, in some embodiments archiving criteria, which generally may be a subset of storage criteria (or policies), may specify criteria for archiving data or for moving data from primary to secondary storage devices.
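
    As a concrete illustration only, an age- and size-based archiving check of the kind described might be written with standard Unix calls as in the following sketch; the function name and thresholds are assumptions:

        /* Minimal sketch of an archiving-criteria check, assuming age- and
         * size-based criteria; names are hypothetical. */
        #include <sys/stat.h>
        #include <time.h>

        /* Returns nonzero if the file at 'path' is older than 'max_age_sec'
         * or larger than 'max_size' bytes and so qualifies for migration. */
        int meets_archive_criteria(const char *path, time_t max_age_sec, off_t max_size)
        {
            struct stat st;
            if (stat(path, &st) != 0)
                return 0;                     /* unreadable: skip for now */
            time_t age = time(NULL) - st.st_mtime;
            return age > max_age_sec || st.st_size > max_size;
        }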

    [0037] As shown in FIG. 2, one or more secondary storage devices 120 and 130 may be coupled to NAS device 100 and/or to one or more stand-alone or external versions of data migrators 95. Each secondary storage device 120 and 130 may include some type of mass storage device that is typically used for archiving or storing large volumes of data. Whether a file is stored to secondary storage device 120 or device 130 may depend on several different factors, for example, the set of storage criteria, the size of the data, the space available on each storage device, etc.

    [0038] In some embodiments, data migrators 95 may generally communicate with the secondary storage devices 120 and 130 via a local bus such as a SCSI adaptor or an HBA (host bus adaptor). In some embodiments, secondary storage devices 120 and 130 may be communicatively coupled to the NAS device 100 or data migrators 95 via a storage area network (SAN) 70.

    [0039] Certain hardware and software elements of system 50 may be the same as those described in the three-tier backup system commercially available as the CommVault QiNetix backup system from CommVault Systems, Inc. of Tinton Falls, N.J., and further described in application Ser. No. 09/610,738, which is incorporated herein by reference in its entirety.

    [0040] In some embodiments, rather than using a dedicated SAN 70 to connect NAS 100 to secondary storage devices 120 and 130, the secondary storage devices may be directly connected to the network 90. In this case, the data migrators 95 may store or archive the files over the network 90 directly to the secondary storage devices 120 and 130. In the case where stand-alone versions of the data migrators 95 are used without a dedicated SAN 70, data migrators 95 may be connected to the network 90, with each stand-alone data migrator 95 performing its tasks on the NAS device 100 over the network.

    [0041] In some embodiments, system 50 may include a storage manager 180 and one or more of the following: a media agent 97, an index cache 98, and another information storage device 140 that may be a redundant array of independent disks (RAID) or other storage system. These elements are exemplary of a three-tier backup system such as the CommVault QiNetix backup system, available from CommVault Systems, Inc. of Tinton Falls, N.J., and further described in application Ser. No. 09/610,738, which is incorporated herein by reference in its entirety.

    [0042] Storage manager 180 may generally be a software module or application that coordinates and controls system 50. Storage manager 180 may communicate with some or all elements of system 50 including client network devices 85, media agents 97, and storage devices 120, 130 and 140, to initiate and manage system storage operations, backups, migrations, and recoveries.

    [0043] A media agent 97 may generally be a software module that conveys data, as directed by the storage manager 180, between a network device 85, data migrator 95, and one or more of the secondary storage devices 120, 130 and 140, as necessary. Media agent 97 is coupled to and may control the secondary storage devices 120, 130 and 140, and may communicate with them via a local bus, such as a SCSI adaptor or an HBA, or via SAN 70.

    [0044] Each media agent 97 may maintain an index cache 98 that stores index data that system 50 generates during backup, migration, archive and restore operations. For example, storage operations for Microsoft Exchange data may generate index data. Such index data may provide system 50 with an efficient mechanism for locating stored data for recovery or restore operations. This index data is generally stored with the data backed up on storage devices 120, 130 and 140 as a header file or other local indicia, and media agent 97 (which typically controls a storage operation) may also write an additional copy of the index data to its index cache 98. The data in the media agent index cache 98 is thus generally readily available to system 50 for use in storage operations and other activities without having to be retrieved first from a storage device 120, 130 or 140.

    [0045] Storage manager 180 may also maintain an index cache 98. The index data may be used to indicate logical associations between components of the system, user preferences, management tasks, and other useful data. For example, the storage manager 180 may use its index cache 98 to track logical associations between several media agents 97 and storage devices 120, 130 and 140.

    [0046] Index caches 98 may reside on their corresponding storage component's hard disk or other fixed storage device. In one embodiment, system 50 may manage index cache 98 on a least recently used (LRU) basis as known in the art. When the capacity of the index cache 98 is reached, system 50 may overwrite those files in the index cache 98 that have been least recently accessed with new index data. In some embodiments, before data in the index cache 98 is overwritten, the data may be copied to a storage device 120, 130 or 140 as a cache copy. If a recovery operation requires data that is no longer stored in the index cache 98, such as in the case of a cache miss, system 50 may recover the index data from the index cache copy stored in the storage device 120, 130 or 140.
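
    For illustration, the overwrite-on-full LRU policy described above might select its victim entry as in the following sketch; the fixed-size table and field names are assumptions:

        /* Minimal sketch of the LRU policy described above: when the index
         * cache is full, the least recently accessed entry is overwritten.
         * The fixed-size array and names are illustrative assumptions. */
        #include <time.h>

        #define CACHE_SLOTS 64

        struct index_entry {
            char   key[64];       /* e.g., job or archive file identifier */
            time_t last_access;   /* updated on every read of the entry   */
        };

        static struct index_entry cache[CACHE_SLOTS];

        /* Pick the slot whose entry was least recently accessed, i.e. the
         * one to overwrite with new index data (empty slots win at once). */
        static int lru_victim(void)
        {
            int victim = 0;
            for (int i = 1; i < CACHE_SLOTS; i++)
                if (cache[i].last_access < cache[victim].last_access)
                    victim = i;
            return victim;
        }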

    [0047] In some embodiments, other components of system 50 may reside and execute on the storage manager 180. For example, one or more data migrators 95 may execute on the storage manager 180.

    [0048] Referring now to FIG. 3, some of the steps involved in practicing an embodiment of the present invention are shown in the flow chart illustrated thereon. When a network device sends a write request for writing data to the NAS device, the write request may include a folder, directory or other location in which to store the data on the NAS device (step 300). Through a network, the network device may write the data to the NAS device, storing the file in primary storage (and/or the NAS) in the location specified in the write request (step 302). As shown, after a data migrator copies data to secondary storage (step 304), the data migrator may store a stub file at the original file location, the stub file having a pointer to the location in secondary storage where the actual file was stored, to which the network device can be redirected if a read request for the file is received from the network device (step 306).
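
    As a minimal user-space sketch of steps 304 and 306, assuming the stub is implemented as a Unix softlink as described later in this section, the data might be copied to the archive location and replaced with a link; the function names and the abbreviated error handling are illustrative only:

        /* Sketch of steps 304-306: copy a file to secondary storage, then
         * replace the primary copy with a softlink-style stub pointing at
         * the archive location. Error handling is abbreviated. */
        #include <stdio.h>
        #include <unistd.h>

        static int copy_file(const char *src, const char *dst)
        {
            FILE *in = fopen(src, "rb"), *out = fopen(dst, "wb");
            char buf[8192];
            size_t n;
            if (!in || !out) return -1;
            while ((n = fread(buf, 1, sizeof buf, in)) > 0)
                fwrite(buf, 1, n, out);
            fclose(in);
            fclose(out);
            return 0;
        }

        int migrate_with_stub(const char *primary_path, const char *archive_path)
        {
            if (copy_file(primary_path, archive_path) != 0) return -1;
            if (unlink(primary_path) != 0) return -1;      /* remove original  */
            return symlink(archive_path, primary_path);    /* leave stub behind */
        }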

    [0049] Referring now to FIG. 4, some of the steps involved in attempting to read certain data that has been migrated to secondary storage media are shown. As illustrated, a network device may attempt to read data that was originally stored at the current location of the stub file at step 400. The operating system of the network device may read the stub file at step 402, recognize that the data is now a stub file, and be automatically redirected to read the data from the location pointed to by the stub file at step 404. This may be accomplished, for example, by having the network device follow a Windows shortcut or a Unix softlink (in Solaris applications). The data may then be accessed by reading directly from the secondary storage location at step 406. Although this process may cause a slight delay or latency attributable to the redirection and, in the case of a secondary storage device using cassettes or other library media, additional delay involved with finding the proper media, the delay normally associated with de-migrating the data to primary storage is eliminated.
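
    Because the operating system resolves a softlink transparently, a read of the stub can be satisfied directly from the secondary location; a minimal sketch, assuming the stub is a Unix softlink as described above:

        /* Sketch of the read path in FIG. 4: opening the stub, the operating
         * system follows the softlink to the secondary location, so the data
         * is read directly from the archive without de-migration. */
        #include <fcntl.h>
        #include <unistd.h>

        int read_possibly_archived(const char *primary_path, char *buf, size_t len)
        {
            /* open() resolves softlinks transparently; if primary_path is a
             * stub, this descriptor refers to the archived copy (step 404). */
            int fd = open(primary_path, O_RDONLY);
            if (fd < 0) return -1;
            ssize_t n = read(fd, buf, len);
            close(fd);
            return (int)n;
        }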

    [0050] With reference to FIG. 5, a flow chart illustrating some of the steps performed when data is edited after being read from archive by a client network device is shown. After the data is read from a secondary storage device and edited (step 500), if the network device performs a save operation and issues a write request to the NAS device (as opposed to, generally speaking, a save-as operation that stores the file in a new location), the data may be stored in the primary storage device at the original location where the data resided before archiving, replacing the stub file (step 502). Depending on the type of file system and the configuration of the secondary storage device, the archived data may be marked as deleted or outdated if the secondary storage device retains old copies of files as a backup mechanism (step 504). The edited data then continues to reside in primary storage until a data migrator archives the edited data to secondary storage (step 506) and places a stub file in its place in primary storage (step 508).
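
    A minimal sketch of step 502, again assuming a softlink-style stub: the stub is removed and the edited data is written at the original primary location; error handling is abbreviated:

        /* Sketch of step 502: a save replaces the stub with the edited data
         * at the original primary location. */
        #include <stdio.h>
        #include <unistd.h>

        int save_edited(const char *primary_path, const char *data, size_t len)
        {
            unlink(primary_path);               /* remove the stub softlink */
            FILE *out = fopen(primary_path, "wb");
            if (!out) return -1;
            fwrite(data, 1, len, out);
            fclose(out);
            return 0;
        }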

    [0051] In other embodiments, when a network device issues a save command after the data is edited in step 500, instead of being stored to the stub file location, the data may be stored back to the archive location, leaving the stub file intact, except that, if the stub file keeps track of information about the data, such information may be updated to reflect the edited data.

    [0052] In an embodiment that stores files that can be read by a network device using the Windows operating system, for example, the data migrator may produce a Windows shortcut file of the same name as the archived file. Other operating systems may provide for use of shortcut files similar to Windows shortcuts that can be used as stub files in the present system, including, for example, Mac OS by Apple Computer, Inc. of Cupertino, Calif.

    [0053] Also, in embodiments which store files that can be read by a network device using Unix-type file systems, such as Linux, Solaris, or the like, a softlink, which is similar to a Windows shortcut, is used for re-direction. For example, a typical command to create a softlink in Unix systems is as follows:

        ln -s /secondary_storage_location/archivefile /primary_storage_location/stubfile

    wherein primary_storage_location is the location in the primary storage device, stubfile is the name of the stub file containing the softlink, secondary_storage_location is the location to which the file is archived, and archivefile is the name of the file stored in the secondary location.

    [0054] In some Unix-based systems, such as Solaris, when a network device needs to read a file, the network directory and drive where the file resides may need to be mounted if the directory and file are not already mounted. When the network device issues a read request to a NAS device to read an archived file in such a system, the file pointed to by the softlink stored in the data's primary storage location may have been archived to a drive or directory that is not already mounted for file access.

    [0055] One way to resolve this issue of unmounted drives or directories is to trap the read request, either by the NAS device or the network device, to interrupt processing and to mount the drive and/or directory containing the archived data to which the softlink points, so the network device may then read the data from the secondary location.

    [0056] However, many Unix file systems do not provide a ready infrastructure to trap an input/output request before the request is propagated to the file system. Using Solaris as an example, many Unix systems typically provide a generic file system interface called a virtual file system (vfs). Vfs supports use by various file systems such as the Unix file system (ufs), the Unix network file system (nfs), the Veritas file system (vxfs), etc. Similarly, directories in these file systems may need to be mounted on the individual network devices in Unix-based systems. Vfs can act as a bridge to communicate with different file systems using a stackable file system.

    [0057] FIG. 6 is a flow chart illustrating some of the steps performed in a Solaris-based embodiment of the present invention, which provides one or more data migrators, each of which may include a stackable loopback file system. The stackable loopback file system's interface may be designed such that, if a network device or an application issues a read/write request (i.e., an open( ) request), the stackable loopback file system intercepts the request (step 600). The stackable loopback file system may provide a facility to trap calls, such as open, read, write and other typical Unix file operations, if the request is for a stub file (step 602). If the request is for a non-archived file (step 604), then the stackable loopback file system propagates the normal operations to the underlying file system such as ufs and vxfs (step 605), and a regular open( ) is performed by the underlying file system.

    [0058] Otherwise, if the request is for an archived file, FIG. 6 presents three alternative embodiments for handling the trapped request, steps 606A, 606B or 606C, after which the system may redirect the application or restore the file stored at the secondary location to the stub file location, step 608. Option one, step 606A, is to override the open( ) operation in the libc.so library with a new open( ) command (i.e., cv_open). This may be used for applications that use libc.so during runtime. For applications that use dynamically linkable libraries, if the open( ) operation can be overridden in libc.so with cv_open while keeping the existing symbols for the other calls intact, this option will work as well. However, this option may not work for applications which directly open the file in the kernel, such as database applications. Further, this option may not work for statically linked applications.
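
    As one hedged illustration of option one, on systems that support preloaded shared objects the open( ) symbol can be interposed ahead of libc.so; the dlsym(RTLD_NEXT) wiring, the is_stub( ) helper, and the LD_PRELOAD activation below are assumptions about how a cv_open-style override might be arranged, not the implementation described here:

        /* Sketch of step 606A: interpose open() so stub files can be
         * detected before the real open() runs. Built as a shared object
         * (gcc -shared -fPIC -ldl) and activated with LD_PRELOAD; the
         * wiring below is an assumption, not the described implementation. */
        #define _GNU_SOURCE
        #include <dlfcn.h>
        #include <fcntl.h>
        #include <stdarg.h>
        #include <sys/stat.h>

        static int is_stub(const char *path)
        {
            struct stat st;
            /* lstat() does not follow softlinks, so a stub shows up as a link. */
            return lstat(path, &st) == 0 && S_ISLNK(st.st_mode);
        }

        int open(const char *path, int flags, ...)
        {
            int (*real_open)(const char *, int, ...) =
                (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");
            mode_t mode = 0;

            if (flags & O_CREAT) {        /* mode is only passed with O_CREAT */
                va_list ap;
                va_start(ap, flags);
                mode = (mode_t)va_arg(ap, int);
                va_end(ap);
            }

            if (is_stub(path)) {
                /* Here a real implementation could mount the archive
                 * directory before letting the open proceed
                 * (see paragraph [0055]). */
            }
            return real_open(path, flags, mode);
        }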

    [0059] Option two, step 606B, involves changing the trap handler for the open( ) system call. Trap handlers are implemented in assembly and are typically specific to the various Unix architectures. Solaris systems usually include a generic trap handler for system calls and other traps, which may be modified for this purpose, if desired.

    [0060] Option three, step 606C, may be used for implementing a stackable loopback file system. This option uses a loopback file system that propagates the normal operations to the underlying file system, such as ufs or vxfs, and also provides a facility to trap the required calls. The stackable loopback file system provides the various vfs operations. The stackable loopback file system also provides vnode operations typically used by other file systems. A vnode may be a virtual node comprising a data structure used within Unix-based operating systems to represent an open file, directory, device, or other entity (e.g., socket) that can appear in the file system name-space. The stackable loopback file system provides a mount option to mount an existing file directory to some other location that is used as the secondary location for storing the file. The special mount operation may search through the underlying file system, store the necessary information about the underlying file system, and assign the path as its starting root. Example commands to accomplish this operation follow:

        mount_cxfs /etc /etc
        mount_cxfs /etc /tmp/etc_temp

    [0061] where /tmp/etc_temp does not already appear in a mounted path. This mount option is used for those file directories that are not already mounted.

    [0062] One way to implement the additional functionality of the stackable loopback file system is to make it a loadable module or driver. Unix systems, such as Solaris, usually support file system drivers such as loadable modules. The stackable loopback file system module may support both normal file system and driver functionalities. The stackable loopback file system driver may use input-output controls (ioctls), which are special requests to device drivers above and beyond calls to the read or write entry points, to provide the capability to mount the file directories. Vnode operations may simply pass through the driver to the underlying file system, except that read/write/mmap operations are trapped to handle data migration of the relocated files, and a lookup operation is performed to resolve recursions of files mounted to other locations.
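
    For illustration, a user-space component might ask such a driver to mount an archive directory through an ioctl along the following lines; the device path, request code, and argument structure are all invented for this sketch:

        /* Hypothetical sketch of a daemon asking the driver to mount an
         * archive directory via an ioctl; the device node, request code,
         * and argument struct are invented for illustration. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <unistd.h>

        #define CV_IOC_MOUNT_DIR 0x4d01        /* hypothetical request code */

        struct cv_mount_req {
            char source[256];                  /* directory holding archive  */
            char target[256];                  /* where it should be mounted */
        };

        int request_mount(const char *source, const char *target)
        {
            struct cv_mount_req req;
            snprintf(req.source, sizeof req.source, "%s", source);
            snprintf(req.target, sizeof req.target, "%s", target);
            int fd = open("/dev/cvloopfs", O_RDWR);   /* hypothetical device */
            if (fd < 0)
                return -1;
            int rc = ioctl(fd, CV_IOC_MOUNT_DIR, &req);
            close(fd);
            return rc;
        }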

    [0063] The driver may be included in the migrator, preferably in an embodiment where the migrator resides on the NAS. The migrator may include a relocate daemon that triggers the data migration for files to be migrated when user-defined policies are met. The relocate daemon may then create the stub file. A redirect/restore daemon may be triggered by the stackable loopback file system when a stub file is accessed. The restore daemon may mount the secondary drive and/or directory where the file was archived if the drive and directory are not already mounted. The stackable loopback file system may then re-direct the network device to the directory where the file is stored, as described above. In an alternative embodiment, after mounting the drive and directory, the file may be restored to the primary location. The driver may generate an event for the restore daemon to complete the restoration. The restore daemon may send an ioctl upon completion of the restoration and delete the stub file.

    [0064] Thus, as can be seen from the above, systems and methods for recovering electronic information from a storage medium are provided. It will be understood that the foregoing is merely illustrative of the principles of the present invention and that various modifications can be made by those skilled in the art without departing from the scope and spirit of the invention. Accordingly, such embodiments will be recognized as within the scope of the present invention.

    [0065] Systems and modules described herein may comprise software, firmware, hardware, or any combination(s) of software, firmware, or hardware suitable for the purposes described herein. Software and other modules may reside on servers, workstations, personal computers, computerized tablets, PDAs, and other devices suitable for the purposes described herein. Software and other modules may be accessible via local memory, via a network, via a browser or other application in an ASP context, or via other means suitable for the purposes described herein. Data structures described herein may comprise computer files, variables, programming arrays, programming structures, or any electronic information storage schemes or methods, or any combinations thereof, suitable for the purposes described herein. User interface elements described herein may comprise elements from graphical user interfaces, command line interfaces, and other interfaces suitable for the purposes described herein. Screenshots presented and described herein can be displayed differently as known in the art to input, access, change, manipulate, modify, alter, and work with information.

    [0066] While the invention has been described and illustrated in connection with preferred embodiments, many variations and modifications as will be evident to those skilled in this art may be made without departing from the spirit and scope of the invention, and the invention is thus not to be limited to the precise details of methodology or construction set forth above as such variations and modification are intended to be included within the scope of the invention.

    [0067] Persons skilled in the art will appreciate that the present invention can be practiced by other than the described embodiments, which are presented for purposes of illustration rather than of limitation and that the present invention is limited only by the claims that follow.