A computer system includes a primary site having a primary volume and an instance that performs data processing related to input/output data, and a secondary site having a secondary volume and an instance. Remote copy is set up between the primary and secondary volumes; the instance of the primary site transfers data input to or output from the primary volume to the secondary site, while the instance of the secondary site stores the transferred data in the secondary volume. The computer system further includes a specifications changing section that changes specifications of the instance of the secondary site when a failover switching process of switching a performer of the data processing from the primary to the secondary site is performed, or when a failback switching process of switching the performer of the data processing from the secondary to the primary site after the failover switching process is performed.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
A task management system is configured to store a consumption resource amount transition prediction of a resource consumed by tasks, and effective resource amount information for managing an effective resource amount indicating a total resource amount that is providable from the resource. The task management system is configured to, for the resource consumed by each of the tasks, compare a consumption resource amount prediction obtained from the consumption resource amount transition prediction with a current consumption resource amount, determine that the effective resource amount of the resource is decreased when a predetermined condition including that the current consumption resource amount is smaller than the consumption resource amount prediction is satisfied, estimate a decrease amount of the effective resource amount based on a difference between the current consumption resource amount and the consumption resource amount prediction, and determine an influence on task execution based on the decrease amount.
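The comparison and estimation steps above can be sketched roughly as follows. This is an illustrative Python sketch under assumed names (`estimate_effective_decrease`, `influence_on_tasks`, `margin`), not the claimed implementation.

```python
# Hypothetical sketch of the described check: if current consumption of a
# resource falls short of its predicted consumption, treat the gap as an
# estimated decrease in the effective (providable) resource amount.

def estimate_effective_decrease(predicted: float, current: float,
                                margin: float = 0.0) -> float:
    """Return the estimated decrease of the effective resource amount.

    The predetermined condition is modeled as current < predicted - margin;
    the decrease is estimated from the difference, as the abstract describes.
    """
    if current < predicted - margin:
        return predicted - current
    return 0.0

def influence_on_tasks(effective_total: float, decrease: float,
                       required: float) -> bool:
    """True if tasks can still run after the estimated decrease."""
    return (effective_total - decrease) >= required
```

A usage example: with an effective total of 100 units, a predicted consumption of 100 but an observed consumption of only 70 yields an estimated decrease of 30, so tasks requiring more than 70 units would be judged as affected.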
A method for remote snapshot restoration is provided. The method may include receiving a snapshot restore request; determining whether the received snapshot restore request satisfies a three-way connection requirement by determining whether it satisfies host-device-device connection information, wherein the host-device-device connection information comprises host information, source storage device information, destination storage device information, application-volume path information, and volume-volume path information; and, for a received snapshot restore request satisfying the three-way connection requirement, performing snapshot restoration and data copying.
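As a rough illustration of the three-way connection check, the request could be validated against registered connection information; all names below (`REQUIRED_KEYS`, `satisfies_three_way_connection`) are assumptions for the sketch, not from the source.

```python
# Illustrative sketch: a restore request qualifies only when every element it
# names (host, both storage devices, and both paths) is registered in the
# host-device-device connection information.

REQUIRED_KEYS = ("host", "source_device", "destination_device",
                 "app_volume_path", "volume_volume_path")

def satisfies_three_way_connection(request: dict, connection_info: set) -> bool:
    """True when every element named by the request is registered."""
    return all(request.get(k) in connection_info for k in REQUIRED_KEYS)
```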
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
Systems and methods described herein can involve, for an execution of a remote copy operation from a primary storage system to a secondary storage system, calculating lag time from a current time and latest copy time received during execution of the remote copy operation. The calculation of the lag time for the execution of the remote copy operation can be conducted by either the primary storage system or the secondary storage system.
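The lag-time calculation itself reduces to subtracting the latest copy time received during the remote copy from the current time; a minimal sketch, with assumed names:

```python
# Minimal sketch of the lag-time idea for a remote copy operation:
# lag = current time - latest copy time received during execution.
from datetime import datetime, timedelta

def calculate_lag(current_time: datetime, latest_copy_time: datetime) -> timedelta:
    """Lag between the latest copy time applied remotely and the current time."""
    return current_time - latest_copy_time
```

Either side of the copy relationship could run this, which matches the abstract's note that the calculation can be conducted by the primary or the secondary storage system.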
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
5.
COMPUTER SYSTEM AND MANAGEMENT METHOD FOR COMPUTER SYSTEM
In a computer system including a plurality of storage nodes and a control node, in a case where a storage system is deployed to a new storage node, the control node creates a template from configuration information of an existing storage node by using a template deployment service, sets a plurality of initial setting nodes, and causes the plurality of initial setting nodes to perform, in parallel, initial setting of converting and storing data of the existing storage node by using the template in a plurality of disks to be connected to the new storage node. The new storage node is connected to the disks in which the initial setting has been performed, and imports the configuration information of the existing storage node, so that the storage system is deployed.
To migrate a volume while maintaining a pair. A first node includes a primary volume. A second node includes a secondary volume configured to create a remote copy pair with the primary volume, and sets up the remote copy using identification information on the primary volume and identification information on the secondary volume. A third node includes a migration destination volume to be a migration destination of the secondary volume. The first node receives a new pair creating request in which the identification information on the primary volume, the identification information on the secondary volume, and information specifying the third node are designated, and creates a new pair between the primary volume and the migration destination volume. The second node deletes the pair of the secondary volume and the primary volume, and replaces the identification information on the migration destination volume with the identification information on the secondary volume.
G06F 16/21 - Design, administration or maintenance of databases
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
A storage system having high performance and high reliability includes a non-volatile storage device, a storage controller configured to control data read from and written to the storage device using a storage function, and a volatile memory. In reading and writing, the storage controller generates a log, stores the log in a log area of the memory, writes the log stored in the memory to the storage device, and reclaims the capacity of the memory storage area holding the log that has been written to the storage device. In reclaiming a free area of the memory, the storage controller executes a base image saving method of writing to the storage device in units of storage areas holding a plurality of logs and reclaiming a free area, and a garbage collection method of writing to the storage device in units of logs and reclaiming a free area.
A storage system includes a controller and one or more storage devices, and the controller can compress data in different compression units, and collectively compresses data of one or a plurality of consecutive addresses in each compression unit of the different compression units. The controller receives write data, determines whether read of data stored in the one or more storage devices is necessary for compression of the write data by a first compression unit, determines compression of the write data in the first compression unit when read is not necessary, and determines compression in the first compression unit or compression in a second compression unit smaller than the first compression unit based on a remaining endurance of rewriting of the one or more storage devices when read is necessary.
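The decision the controller makes can be sketched as below, assuming a simple endurance threshold; the names and the threshold policy are illustrative, not from the source.

```python
# Illustrative sketch: compress in the large unit when no read-modify-write is
# needed; otherwise weigh remaining drive endurance, since compressing a large
# unit after a partial write forces extra reads and rewrites of the drives.

def choose_compression_unit(read_needed: bool, remaining_endurance: float,
                            endurance_threshold: float = 0.5) -> str:
    """Return 'large' or 'small' compression unit for incoming write data."""
    if not read_needed:
        # Whole-unit data is available: compress consecutive addresses together.
        return "large"
    # Read-modify-write would be required; if endurance is scarce, avoid the
    # extra rewrites of the large unit and compress in the smaller unit.
    return "large" if remaining_endurance >= endurance_threshold else "small"
```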
A storage system includes a controller, and a plurality of storage drives. The storage drives make up one or more parity groups. The controller monitors occurrence of one or more given events of an internal process different from an I/O process on host data, the given events increasing a load of a first parity group, and in response to occurrence of the given event, determines a power value indicating power to be interchanged to the first parity group, from one or more types of resources different from the first parity group in the storage system.
A storage system includes a plurality of storage servers that includes a storage device and a controller that processes data inputted to and outputted from the storage device, the plurality of storage servers providing a lead local volume for inputting and outputting data to and from the storage device; and a first compute server including a client, wherein the first compute server manages a free capacity of the plurality of storage servers, individually provisions storage capacity to each of the plurality of storage servers on the basis of the free capacity to provide the lead local volume, and configures a first volume on the basis of the provided plurality of lead local volumes, and provides the first volume to the client.
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
H04L 67/1004 - Server selection for load balancing
A data processing system includes: a processor implemented by a programmable device; and a processor processing unit connected to the processor. The processor includes a plurality of processing circuits configured to execute in parallel data processing commands provided from the processor processing unit, an error detection unit configured to detect a soft error occurring in a processing circuit that is executing the data processing command, and a processing circuit selection unit configured to select a processing circuit to execute the data processing command from a plurality of processing circuits. The processing circuit selection unit specifies a processing circuit in which the soft error occurs based on a soft error detection result of the error detection unit, and selects a processing circuit to execute the data processing command from the plurality of processing circuits, excluding the processing circuit in which the soft error occurs.
When the latest data and a backup of the latest data are infected, the data is recovered from a backup older than the infected data; it is then necessary to eliminate the difference from the latest data, leading to loss of business opportunities. A data recovery system that recovers data stored in a storage system includes a storage device that holds original data; an analyzing server that holds a copy of the original data and generates formatted data by formatting the copy data for analysis; a managing server that holds the copy data and data history management information storing a history of the formatted data; and a data recovery unit that, when a security threat is detected in the data, refers to the data history management information, selects the copy data or the formatted data as recovery data, and recovers the data from the selected recovery data.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
Provided herein is a storage system capable of reducing power consumption. Each of a plurality of controllers provided to a storage system includes a plurality of first power feeding areas whose electric power is independently controllable, a processor which processes an input/output request, and a memory connected to the processor; the processor and the memory are provided to each first power feeding area. Each of the plurality of controllers is switchable between a first operating mode, in which all the first power feeding areas provided to the controller are made in a working state, and a second operating mode, in which one or some of the first power feeding areas provided to the controller are made in a stopped state and the rest of the first power feeding areas are made in the working state.
To efficiently detect a defect and identify a causal portion of the defect while minimizing impact on performance of a storage system. A storage device, including an optical module, receives a data I/O request transmitted from another device via the optical module, and performs I/O processing responding to the data I/O request on a storage unit. Upon receiving a data write request as the data I/O request from the other device, the storage device transmits to the other device a reception enabled notification, which is information indicating a state where reception of write data for the data write request is enabled, and starts timing of a write monitoring timer, which is a timer to monitor a reception state of the write data; upon timeout of the write monitoring timer, the storage device acquires the emitted light quantity of a light emitting element and outputs information indicating the acquired emitted light quantity.
G06F 3/06 - Digital input from, or digital output to, record carriers
H04B 10/079 - Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an in-service signal using measurements of the data signal
To safely store data, a storage 200 is connected to another storage 400 via a network, the storage 200 including a processor 216 configured to process data to be stored in a storage device 220. The processor 216 is configured to acquire a first snapshot for a volume generated using a storage area of the storage device, compare a second snapshot, which is a previously acquired snapshot, with the first snapshot to retrieve incremental data, transfer the incremental data to the other storage and store it there as backup data, and set a lock on the backup data stored in the other storage.
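A simplified sketch of the flow, modeling snapshots as mappings from block address to content (an assumption made for illustration; function names are likewise invented):

```python
# Diff two snapshots, transfer only the incremental data to the other storage,
# then lock the transferred backup blocks against modification.

def incremental_data(prev_snapshot: dict, curr_snapshot: dict) -> dict:
    """Blocks added or changed in the current snapshot versus the previous."""
    return {addr: data for addr, data in curr_snapshot.items()
            if prev_snapshot.get(addr) != data}

def backup_and_lock(prev: dict, curr: dict, remote: dict, locked: set) -> None:
    """Transfer incremental data to the other storage and lock it there."""
    delta = incremental_data(prev, curr)
    remote.update(delta)
    locked.update(delta)           # locked blocks are immutable backup data
```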
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
A support device and method capable of supporting construction of an optimal secondary usage environment for data are proposed.
A plurality of construction candidate plans for a secondary usage environment, including a copy method for copying the data between a designated copy source site and a designated copy destination site of the data, are calculated. A time and a cost required for copying the data and an operation cost of the secondary usage environment to be constructed are calculated for each of the calculated construction candidate plans, and the plurality of construction candidate plans are presented, based on the calculation result, together with evaluations thereof.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
A simulation apparatus obtains a solution of a combinatorial optimization problem by simulated annealing (SA). A local optimal solution of an objective function is acquired by using a local search method of changing a temperature parameter; a statistic (a standard deviation) is calculated based on the local optimal solution and a probability distribution function of a variable included in the objective function; a maximum value (the temperature parameter when the standard deviation related to a state of the local optimal solution is large) and a minimum value (the temperature parameter when the standard deviation is small) are obtained as a search range for the temperature parameter based on the calculated statistic; and a good solution is obtained by executing the simulated annealing a plurality of times in the search range between the minimum value and the maximum value of the temperature parameter.
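One way to realize the described derivation of a temperature search range is sketched below; the scale factors mapping the standard deviation to the bounds are assumptions of this sketch, as are the function and parameter names.

```python
# Hedged sketch: derive a temperature search range for simulated annealing
# from the spread of repeated local-search results. A large spread of local
# optima maps to the maximum temperature, a small spread to the minimum.
import statistics

def temperature_range(local_optima: list, lo_scale: float = 0.1,
                      hi_scale: float = 2.0) -> tuple:
    """Return (t_min, t_max) from the std dev of local optimal values."""
    sigma = statistics.pstdev(local_optima)
    return (lo_scale * sigma, hi_scale * sigma)
```

SA would then be executed several times with temperatures drawn from this range, keeping the best solution found.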
A storage device includes a processor and an accelerator configured to compress and decompress data. The processor receives first replacement write data for a part of a first logical address region, to update first data in the first logical address region that has been compressed by a basic compression unit. The processor instructs the accelerator to compress the first replacement write data by a size smaller than the basic compression unit, and the accelerator compresses the first replacement write data by the smaller size. The processor merges not-to-be-replaced data in the first logical address region and the first replacement write data, both decompressed by the accelerator, to generate uncompressed data having the size of the basic compression unit, and instructs the accelerator to compress the uncompressed data by the basic compression unit.
A storage system includes: a plurality of storage nodes each including a processor; and a storage apparatus, in which the processor includes a plurality of processor cores and executes a plurality of programs for processing data input/output to/from the storage apparatus by using the processor cores, provides a volume that is a logical storage area, and adjusts the number of processor cores to be allocated to each of the plurality of programs.
A storage system and a backup method for the storage system capable of reliably periodically backing up data of the storage system without being affected by a load of a network are proposed. When receiving, from a storage management server, a setting of periodic backup including at least a backup frequency for data of the storage system, a scheduler sets a schedule including a time-point for the periodic backup. A backup unit periodically creates backup data according to the time-point in the schedule set by the scheduler. A data transfer unit transfers the created backup data to a cloud storage device according to the time-point in the schedule set by the scheduler.
G06F 3/06 - Digital input from, or digital output to, record carriers
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
21.
STORAGE SYSTEM AND MALICIOUS PROGRAM DETECTION METHOD
Provided is a storage system capable of detecting a malicious program without executing compression processing of data. A storage system includes a processor that processes data input to and output from a storage device. The processor runs a duplication detection program which deduplicates duplicated data, stores the deduplicated data in the storage device, calculates a duplication rate being the ratio of duplicated data in a predetermined unit of storage, and detects a change between the duplication rate before an update of the data and the duplication rate after the update, in units of the predetermined unit of storage. The processor further runs a ransomware detection program which detects that the data has been updated by ransomware when a decrease amount of the duplication rate, relating to the change in duplication rate detected by the duplication detection program, exceeds a duplication rate threshold value.
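The detection rule reduces to comparing duplication rates before and after an update; a hedged sketch with illustrative names and an assumed threshold (ransomware-encrypted data deduplicates poorly, so the rate drops):

```python
# Compare the deduplication rate of a storage unit before and after an update;
# a drop larger than a threshold is flagged as possible ransomware activity.

def duplication_rate(blocks: list) -> float:
    """Fraction of blocks that are duplicates within the unit of storage."""
    return 1.0 - len(set(blocks)) / len(blocks)

def ransomware_suspected(rate_before: float, rate_after: float,
                         threshold: float = 0.3) -> bool:
    """True when the duplication rate dropped by more than the threshold."""
    return (rate_before - rate_after) > threshold
```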
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
There is provided a storage migration method including sending, by a maintenance PC, a power-off instruction for migration to a migration source storage, sending, by the migration source storage, storage configuration information to the maintenance PC, turning off the power of the migration source storage when it is confirmed that the maintenance PC has received the storage configuration information, sending, by the maintenance PC, the storage configuration information and a power-on instruction for migration to a migration destination storage, and outputting, by the maintenance PC, a migration completion notification when a disk drive relocated from the migration source storage can be confirmed to be set in the migration destination storage.
A storage system with reduced power consumption includes a storage apparatus that saves data in accordance with a data input/output request from a host or outputs the saved data. The storage system includes a plurality of components each configured to operate, in a switchable manner, in a first power mode or in at least one second, lower power mode; a condition monitoring module that monitors each of the plurality of components; and a power mode control module that determines a power mode of at least one component to be the second power mode, according to a processing load related to each of the plurality of components and corresponding to a result of monitoring by the condition monitoring module, and operates the at least one component in the second power mode, in which the plurality of components perform mutual control with the storage apparatus in accordance with the data input/output request.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06F 3/06 - Digital input from, or digital output to, record carriers
24.
SYSTEM, METHOD, AND PROGRAM FOR DATA TRANSFER PROCESS
When executing a data transfer process between one common volume and a second storage unit, a computer resource activation unit activates one or more computer resources that execute the data transfer process, and mounts the one common volume, as a transfer source or a transfer destination in the data transfer process, on any of the activated computer resources. A region allocation unit allocates a part of the data transfer process to each computer resource for each storage region. The computer resource executes the allocated part of the data transfer process between the second storage unit and the storage region allocated to the computer resource in the one common volume.
G06F 3/06 - Digital input from, or digital output to, record carriers
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
25.
COMPUTER SYSTEM, INFRASTRUCTURE MANAGEMENT METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
A storage pool for allocating a storage area to a volume to be used by an instance constituting a service is selected. At least two infrastructures provide an environment for constructing a business system including instances, at least one infrastructure includes a plurality of storage pools for allocating a storage area to a volume to be used by an instance, and a management system holds management information for centrally managing a plurality of business systems. A computer provided in an infrastructure managing a storage pool makes an inquiry about an attribute of an instance to the management system when an allocation request to allocate a storage area to a volume to be used by the instance is received, receives information on the attribute of the instance, retrieves an available storage pool based on the received information, and generates display information for presenting the retrieved storage pool.
A storage system includes a storage device and a processor. The storage device stores first mapping information and second mapping information. The first mapping information includes information indicating mapping, between an address of a respective volume (VOL) in a respective snapshot family and an address in a respective snapshot virtual device, in each of a plurality of snapshot families each including a primary volume (PVOL) and a secondary volume (SVOL) which is a snapshot of the PVOL. The second mapping information includes information indicating mapping between an address in the respective snapshot virtual device and an address in a deduplication virtual device.
A network interface includes a processor, memory, and a cache between the processor and the memory. The processor secures a plurality of buffers for storing transfer data in the memory, and manages an allocation order of available buffers of the plurality of buffers. The processor returns a buffer released after data transfer to a position before a predetermined position of the allocation order.
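A sketch of the buffer-reuse policy, modeling the free list as a deque and the predetermined position as an insertion index (both are assumptions for illustration, as are the class and method names):

```python
# Freed buffers are not appended to the tail of the free list but reinserted
# before a predetermined position, so recently used buffers are reallocated
# sooner and their contents are more likely to still be resident in the cache.
from collections import deque

class BufferPool:
    def __init__(self, buffer_ids, reinsert_pos: int = 0):
        self.free = deque(buffer_ids)      # allocation order: popleft()
        self.reinsert_pos = reinsert_pos   # predetermined position

    def allocate(self):
        return self.free.popleft()

    def release(self, buf_id):
        # Return the buffer ahead of the predetermined position rather than
        # at the tail, favoring cache-warm buffers for the next transfer.
        pos = min(self.reinsert_pos, len(self.free))
        self.free.insert(pos, buf_id)
```

With `reinsert_pos = 0`, a buffer released after a transfer is the very next one allocated, maximizing cache reuse.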
To provide a successor setting system that can set a successor even when an administrator of IT infrastructure management information becomes absent without handing over and without a successor having been set in advance. A successor setting system includes: a successor selection rule management table in which a successor selection rule is managed, the successor selection rule being used to select a successor of an administrator of IT infrastructure management information, by which a resource and users who operate the resource are managed, when the administrator is unable to continue managing the IT infrastructure management information; and an activity history monitoring unit that monitors activity history of the users in the resource, applies an evaluation result obtained by evaluating the activity history for each user to the successor selection rule to select the user who is to be the successor, and sets the selected user as the successor.
A storage apparatus includes a midplane provided vertically to an installation surface of the storage apparatus and provided with a plurality of connectors arranged in parallel in an X-axis direction parallel to the installation surface of the storage apparatus; and a plurality of adapters, on each of which two drive apparatuses are mounted, which are arranged in parallel in the X-axis direction and connected to the midplane through the plurality of respective connectors in a Y-axis direction parallel to the installation surface and perpendicular to the X-axis direction. Each of the adapters includes a board including a plurality of connectors connected, in the Y-axis direction, to respective connectors of the two drive apparatuses arranged in parallel in the Y-axis direction, and a frame for mounting, to the adapter, the two drive apparatuses arranged in parallel in the Y-axis direction and the board.
A storage system includes a storage apparatus having a plurality of physical drives and a controller. In a case where a physical failure occurs in a physical drive, the controller additionally installs a drive provided from a cloud, and maintains a redundant array of inexpensive disks (RAID) configuration before the occurrence of the physical failure in the physical drive using the physical drives excluding the physical drive where the physical failure has occurred and the additionally installed drive provided from the cloud.
A failure predictor of a drive apparatus in a storage system is detected more accurately. A control apparatus 1 for a storage system S stores a learning model(s) 132M for evaluating response performance of a drive apparatus 3 with respect to execution of a command relating to input and output by the control apparatus 1. The control apparatus 1 acquires operation information of the drive apparatus 3 and inputs specified information regarding commands, which is included in the operation information, to the learning model 132M. The control apparatus 1 judges a failure predictor of the drive apparatus 3 on the basis of output relating to the response performance by the learning model 132M in response to the input of the specified information.
A storage apparatus includes a first controller having a first memory, a second controller having a second memory, and a memory module having a third memory. The first memory stores drive control information including a correspondence between a logical address and a physical address, first cache data in a data input-output (I/O) process, and first cache control information including a correspondence between a logical address and a cache address of the first cache data. The second memory stores drive control information, second cache data in the data I/O process, and second cache control information including a correspondence between a logical address and a cache address of the second cache data. The third memory stores first cache data provided with redundancy and second cache data provided with redundancy.
The present invention achieves high throughput by making efficient use of virtual device resources. A storage system includes a storage device and a processor. The processor manages a primary volume and a snapshot volume as a snapshot family. The processor uses a snapshot virtual device as the data storage destination for the primary volume and for the snapshot volume. Upon receiving a write request from a host, the processor switches between an overwrite process and a new allocation process in accordance with the reference made to a write destination address range by the snapshot volume and with the degree of distribution of the write destination address range in the snapshot virtual device. The overwrite process is performed to overwrite an allocated area of the snapshot virtual device. The new allocation process is performed to allocate a new area of the snapshot virtual device to the write destination address range.
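The switch between the overwrite process and the new allocation process can be sketched abstractly; both the snapshot-reference check and the distribution measure are modeled here as plain inputs, which is an assumption of this sketch, as are all names.

```python
# Overwrite the allocated area only when no snapshot volume still references
# the write-destination range and the range is not too scattered within the
# snapshot virtual device; otherwise allocate a new area.

def choose_write_process(referenced_by_snapshot: bool,
                         distribution_degree: float,
                         distribution_threshold: float = 0.5) -> str:
    """Return 'overwrite' or 'new_allocation' for an incoming write request."""
    if referenced_by_snapshot:
        # Overwriting would corrupt the snapshot's view: allocate anew.
        return "new_allocation"
    if distribution_degree > distribution_threshold:
        # Highly scattered range: a fresh contiguous allocation is cheaper.
        return "new_allocation"
    return "overwrite"
```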
A unified storage system, and an upgrade method for the unified storage system, are capable of reducing hardware costs by effectively utilizing a previously used storage controller when upgrading a storage controller in a unified storage. The unified storage system A includes: a storage node having a controller; and a storage device configured to store data. The unified storage system supports block access and file access and includes a file system, which is configured to process file access from a client and perform block access to the controller; the controller processes block access from the client and block access from the file system to access the storage device which stores the data. The unified storage system is capable of adding a network-connected information apparatus and of migrating the file system to the information apparatus.
The present invention provides a storage system and a storage system control method that have high failure tolerance but are small in construction cost.
The storage system runs on a plurality of cloud computers disposed in a plurality of different zones, and includes storage nodes that are disposed in the plurality of computers in the plurality of zones to process inputted/outputted data. The storage nodes include a first storage node and a second storage node. The first storage node operates during normal operation. The second storage node is present in a zone different from that where the first storage node is present, and is able to take over processing of the first storage node. The plurality of cloud computers have a storage device and a virtual storage device. The storage device physically stores data that is to be processed by the storage nodes. The virtual storage device stores data that is made redundant between the zones by a plurality of the storage devices disposed in the different zones. The storage system accesses data in the virtual storage device by using storage control information, and stores the storage control information in the virtual storage device; the virtual storage device makes the stored data redundant between the zones. If a failure occurs in a zone including the first storage node, the second storage node takes over the processing of the first storage node by using the data made redundant between the zones.
When data is to be stored in or transferred to a storage system including a controller, the controller causes a selected offload instance to compress or decompress the data, the selected offload instance being selected from one or more offload instances that support a specific compression scheme and to which a compression or decompression load is to be offloaded.
Systems and methods described herein can involve, for receipt of a write request from a server to a first storage system associated with a mounted volume having an attribute of read only or read/write, the write request processed by a second storage system, the second storage system setting the write destination of write data associated with the write request to the first storage system or the second storage system based on the attribute.
When a cooperation source user name corresponding to identification information of a user included in a received request is included in cooperation destination system user information, a cooperation destination system converts the identification information of the user included in the request into a user ID corresponding to the cooperation source user name in the cooperation destination system user information. The cooperation destination system processes the request based on the user ID and determines whether or not a cooperation source user name corresponding to the user ID is included in the cooperation destination system user information. In a case where the cooperation source user name corresponding to the user ID is included in the cooperation destination system user information, the cooperation destination system converts the user ID into the cooperation source user name corresponding to the user ID in the cooperation destination system user information.
G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
G06F 16/2458 - Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
A file transfer system transfers a target file updated in a first computer to a second computer for each part obtained by dividing the target file and includes: an update recording unit that records an update position of the target file in the first computer as an offset flag; an update determination unit that refers to the offset flag and determines presence or absence of update for each part of the target file; and a transfer unit that transfers the part determined to have update by the update determination unit to the second computer, in which when re-update in which the target file is updated after the transfer unit starts transferring any part included in the target file occurs, the transfer unit transfers, to the second computer, a re-update part that is the part updated by the re-update, regardless of whether or not the re-update part has already been transferred.
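As an illustration, the part-based transfer and re-update handling described above can be sketched as follows; the class and method names (`PartTransfer`, `mark_update`) and the 4-byte part size are assumptions for the example, not taken from the patent.

```python
PART_SIZE = 4  # bytes per part; deliberately tiny for the example

class PartTransfer:
    """Hypothetical first-computer side: records update positions as offset
    flags and sends only the flagged parts to the second computer."""

    def __init__(self, data: bytes):
        self.data = bytearray(data)
        n_parts = (len(data) + PART_SIZE - 1) // PART_SIZE
        self.offset_flags = [False] * n_parts  # the "update recording unit"

    def mark_update(self, offset: int, payload: bytes):
        """Apply an update and flag every part it touches."""
        self.data[offset:offset + len(payload)] = payload
        first = offset // PART_SIZE
        last = (offset + len(payload) - 1) // PART_SIZE
        for part in range(first, last + 1):
            self.offset_flags[part] = True

    def transfer(self):
        """Send every flagged part; a re-update simply sets the flag again,
        so the part is re-sent even if it was already transferred."""
        sent = []
        for part, flagged in enumerate(self.offset_flags):
            if flagged:
                self.offset_flags[part] = False
                chunk = bytes(self.data[part * PART_SIZE:(part + 1) * PART_SIZE])
                sent.append((part, chunk))
        return sent
```

Because a re-update only re-raises the part's flag, the transfer unit need not track which parts were already sent, which matches the "regardless of whether or not the re-update part has already been transferred" behavior.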
A processor inputs/outputs data related to data input/output with respect to the volume to/from a page of the logical storage area; maps the volume to data of the logical storage area; is able to release the storage area in units of the pages; includes a plurality of the volumes that can share data of the logical storage area; performs garbage collection of deleting data which is not referred to from any of the plurality of volumes as invalid data, moving data which is referred to from any of the volumes to another page, and releasing a storage area of a page on which the data is deleted and the data is moved; and stores a plurality of pieces of data in the page of a movement destination such that the plurality of pieces of data stored in the same page are mapped from the same volume by the garbage collection.
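The garbage collection described above can be sketched roughly as follows; for simplicity the sketch assumes each live chunk is referenced by exactly one volume, and all names (`garbage_collect`, `refs`) are illustrative, not from the patent.

```python
from collections import defaultdict

def garbage_collect(pages, refs, page_size=2):
    """pages: {page_id: [chunk_id, ...]} - source pages to be released.
    refs: {volume_id: {chunk_id, ...}} - which volumes reference which chunks.
    Drops chunks referenced by no volume, then regroups survivors so that
    each destination page holds data mapped from a single volume."""
    live = set().union(*refs.values()) if refs else set()
    by_volume = defaultdict(list)
    for chunks in pages.values():
        for c in chunks:
            if c in live:
                # assumption: exactly one volume owns each live chunk
                owner = next(v for v, s in refs.items() if c in s)
                by_volume[owner].append(c)
    new_pages, pid = {}, 0
    for vol, chunks in sorted(by_volume.items()):
        for i in range(0, len(chunks), page_size):
            new_pages[f"p{pid}"] = chunks[i:i + page_size]  # one volume per page
            pid += 1
    return new_pages  # source pages can now all be released
```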
Storage controllers shift from a normal operation mode to a degraded operation mode in accordance with a command from a management apparatus. Each of the storage controllers in the normal operation mode works in the normal operation state. In transition from the normal operation mode to the degraded operation mode in accordance with a command from the management apparatus, a storage controller designated by the management apparatus changes from the normal operation state into the standing-by state and the other storage controllers except for the designated storage controller change from the normal operation state into the degraded operation state. The storage controller in the standing-by state changes into the degraded operation state in response to stop of a storage controller in the degraded operation state because of occurrence of a failure under the degraded operation mode.
G06F 11/10 - Adding special bits or symbols to the coded information, e.g. parity check, casting out nines or elevens
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
42.
STORAGE SYSTEM AND METHOD FOR TRANSFERRING DATA THEREOF
A storage system includes a plurality of storage controllers. The storage controller includes a processor, a memory, and a transfer device that processes control data for controlling an internal operation of the storage system, the control data being transmitted and received between the plurality of storage controllers. The processor accumulates the control data in the memory when a transfer request for the control data is generated, generates a write request for transmitting a plurality of the control data stored in the memory, and transmits the write request to the other storage controller. The transfer device writes a plurality of the control data included in the write request to the memory upon receiving the write request.
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
H04L 49/9047 - Buffering arrangements including multiple buffers, e.g. buffer pools
43.
STORAGE CONTROLLER AND STORAGE CONTROLLER CONTROL METHOD
To replace a storage controller without stopping a host and without losing data related to IO processing from the host. A storage controller sets, in a port management table, a first host path definition between the host and first address information in a controller unit in addition to a second host path definition between the host and second address information in a controller unit. The storage controller sets, in a route management table, a first connection route between an input port and a first output port to which a port of the first address information is connected, in addition to a second connection route between an input port and a second output port to which a port of the second address information is connected. The storage controller transfers an IO to one controller unit or another controller unit based on the port management table and the route management table.
A storage system includes a plurality of nodes each of which includes a processor, in which when a replication source volume in a replication source storage system connected to the storage system is replicated to a plurality of nodes of the storage system, any one of the processors generates a first replicated volume by replicating the replication source volume of the replication source storage system in a first node among the plurality of nodes, and generates a second replicated volume mapped to the first replicated volume in a second node among the plurality of nodes.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
45.
Automatic identification and attachment of replicated volumes to application
Systems and methods described herein can involve managing volume management information indicative of a relationship between an application and a volume used by the application; for receipt of a first request to make the application ready for takeover to another location, updating the volume management information to indicate that another volume of the another location is associated with the application. For receipt of a second request to conduct volume attachment for the application, the systems and methods can involve identifying one or more volumes associated with the application based on the volume management information; and attaching an identified volume from the identified one or more volumes to the application.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
46.
STORAGE SYSTEM, MANAGEMENT METHOD OF STORAGE SYSTEM, AND MANAGEMENT DEVICE OF STORAGE SYSTEM
Data movement for reducing an environmental load in a hierarchical storage is appropriately determined. A storage system includes an upper-level storage device, a lower-level storage device, and a management device. The management device is configured to determine, for each file stored in the upper-level storage device, based on a size of a target file, an access frequency of the target file, and power consumption information, whether power consumption for holding the target file is to be reduced by moving the target file to the lower-level storage device, and output, when it is determined that the power consumption for holding the target file is to be reduced by moving the target file to the lower-level storage device, an instruction to move the target file from the upper-level storage device to the lower-level storage device.
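A hedged sketch of the movement decision: the cost model, the parameter names, and all numeric defaults below are assumptions for illustration, not values from the patent.

```python
def should_move(size_gb, accesses_per_day,
                upper_w_per_gb=0.5, lower_w_per_gb=0.1,
                move_energy_per_gb=2.0, access_penalty_w=0.05,
                horizon_days=30):
    """Return True when moving the file to the lower tier is expected to
    reduce the power consumed to hold it over the planning horizon."""
    # power saved by holding the file on the lower-power tier
    hold_saving = size_gb * (upper_w_per_gb - lower_w_per_gb) * 24 * horizon_days
    # frequently accessed files cost extra power on the slower tier
    access_cost = accesses_per_day * access_penalty_w * horizon_days
    # one-time energy cost of the move itself
    move_cost = size_gb * move_energy_per_gb
    return hold_saving > access_cost + move_cost
```

A large, rarely accessed file clears the threshold easily; a tiny, hot file does not, which is the qualitative behavior the abstract describes.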
Power demand at each base is adjusted so as to improve a renewable energy utilization ratio at all bases. An inter-base workload control system manages: an amount of excess power obtained by subtracting a power supply amount of the renewable energy power supply from a power consumption amount associated with execution of a workload in a future time range at the bases; spatial migratable time range information on a spatial migratable time range where spatial migration of migrating the workload in a future time range at the bases to another base is possible; temporal migratable time range information on temporal migration of delaying execution of the workload in the future time range at the bases within the same base and migrating the workload to another time range; and a predicted amount of power consumption through execution of the workload in the future time range at the bases.
A frontend interface of a controller according to the present invention includes a plurality of corresponding queueing interfaces for each processor of the controller, and an enqueueing destination of a host I/O command can be switched in response to an instruction from a processor. When a controller OS restarts, the controller waits for completion of a host I/O and executes controller blocking and restarting during setup. Therefore, to determine whether or not this process is possible, the processor gives an instruction to switch a queue and waits until a switch source queue is empty.
Each storage node includes a processor, a drive that stores data, and a communication unit that transmits data to another storage node or receives data from the another storage node. The communication unit includes a compression circuit that performs reversible compression before data is transmitted and a decompression circuit that decompresses compressed data after the compressed data is received. In response to a reading command for reading data of a designated size to the outside, when a predetermined condition is satisfied, the communication unit of a first storage node compresses the data stored in the drive of the first storage node by the compression circuit and transmits the compressed data to the communication unit of a second storage node. The communication unit of the second storage node decompresses the received data with a decompression circuit. The second storage node outputs decompressed data to an outside.
The copy performance of a storage apparatus is improved. The storage apparatus includes a storage controller that processes an I/O request from a host, and a storage device that stores data from the host. For each of a plurality of volumes, the storage controller stores a schedule indicative of a copy speed level for each of continuous time slots in a cycle. The storage controller collects information regarding current performance of the storage controller, stores the collected current performance information into a memory, and determines the copy speed level for a next time slot on the basis of the copy speed level for the next time slot indicated by the schedule and a relation between a value indicated by the performance information and a threshold value.
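The per-slot speed determination might look roughly like this; the threshold comparison and the step-down-by-one-level behavior are illustrative assumptions, not the patent's exact rule.

```python
def next_copy_speed(scheduled_level, current_load, threshold, min_level=1):
    """Follow the schedule, but step the copy speed level down when the
    collected performance value exceeds the threshold."""
    if current_load > threshold:
        return max(min_level, scheduled_level - 1)
    return scheduled_level

def plan_cycle(schedule, load_by_slot, threshold):
    """schedule: {slot: level} for the continuous time slots in a cycle.
    load_by_slot: measured controller load per slot (missing means idle)."""
    return {slot: next_copy_speed(level, load_by_slot.get(slot, 0), threshold)
            for slot, level in schedule.items()}
```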
A storage system having both a high performance and high reliability is implemented. The storage system includes a plurality of storage nodes each including a processor and a memory, and a storage device. Each of the plurality of storage nodes includes a storage controller configured to run on the processor, the plurality of storage controllers include an active storage controller configured to process data output to and received from the storage device, and a standby storage controller configured to take over the processing of the data from the active storage controller, each of the active storage controller and the standby storage controller is allocated with a storage area of the memory, and the storage node changes an amount of a memory capacity allocated for the storage controller of the self-node when a state of the storage controller is switched between a standby state and an active state.
A system receives a quota request in which a tenant, one or more locations, and a capacity upper limit related to a requested quota are designated, executes conflict determination based on quota information, and adds information related to the requested quota to the quota information when a result of the conflict determination is false. The quota information includes information representing a tenant, one or more locations, and a capacity upper limit for each quota of the plurality of storage devices at the plurality of locations. For each of the plurality of locations, a capacity usable by the tenant among a capacity of the storage device at the location is equal to or less than the capacity upper limit of the quota corresponding to the location and the tenant.
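One way to sketch the conflict determination: a requested quota conflicts when, at any of its locations, already granted capacity upper limits plus the requested limit would exceed that location's capacity. This capacity model and all names below are assumptions for illustration.

```python
def conflicts(quotas, capacity, request):
    """quotas: list of dicts {"tenant", "locations", "limit"} already granted.
    capacity: {location: total storage capacity at that location}.
    request: the new quota being checked."""
    for loc in request["locations"]:
        committed = sum(q["limit"] for q in quotas if loc in q["locations"])
        if committed + request["limit"] > capacity[loc]:
            return True
    return False

def add_quota(quotas, capacity, request):
    """Add the requested quota only when conflict determination is false."""
    if conflicts(quotas, capacity, request):
        return False
    quotas.append(request)
    return True
```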
A storage having a cluster configuration is operated while stabilizing performance thereof. A storage system calculates a load value of a storage in which a predetermined cluster configuration is set, determines whether the calculated load value exceeds a predetermined value, adds a predetermined resource to the storage when the calculated load value exceeds the predetermined value, calculates a predicted value of a load of the storage when the resource is removed after the resource is added, determines whether the calculated predicted value is lower than a predetermined value, and removes the resource of the storage when the calculated predicted value is lower than the predetermined value.
In a computer system, storage controllers disposed in different data centers form a pair via a communication path between the data centers. When a communication failure occurs in the communication path between the paired storage controllers, a tie breaker determines, based on statistical information on the communication characteristics of the storage controllers generated by an I/O monitor, failure control that takes over data input/output from one of the paired storage controllers to the other storage controller and stops the storage node having the one storage controller, and a storage cluster controller executes the failure control.
Provided is a computer system capable of maintaining a storage capacity allocated to a journal volume within an appropriate range during an application period of remote copy. A first storage system includes a primary volume and a primary journal volume, and a second storage system includes a secondary volume and a secondary journal volume. A management computer is configured to manage the remote copy in which a primary volume, a primary journal volume, a secondary journal volume, and a secondary volume are paired, and expand and/or release a capacity of the primary journal volume and/or the secondary journal volume according to operation information of a resource related to the remote copy.
A processor of a storage system calculates long-term load fluctuation prediction as a prediction of load fluctuation over time in the future of the controller nodes based on time-series data of load of the controller nodes. The processor calculates an addition/reduction completion target time to complete addition or reduction of an operating controller node out of the controller nodes based on the long-term load fluctuation prediction and a load threshold value determined from a power performance model. The processor calculates a rebalancing time for a rebalancing process based on data movement in the rebalancing process for moving data between the drive nodes in accordance with the addition or the reduction and bandwidth information of a path for the data movement. The processor calculates a start time of the rebalancing process from the addition/reduction completion target time and the rebalancing time and starts the rebalancing process at the start time.
When a power requirement for power control is received and power control of a target device is performed in accordance with the received power requirement, power saving level management information is created based on the power consumption for a performance of each component of the target device and a device configuration of the target device. The power saving level management information defines the performance of each component at each power saving level associated with each of a plurality of divided power consumption ranges of the target device. Based on a power consumption upper limit value or the power saving level of the target device designated as the received power requirement, the power saving level management information is referred to, and the performance of each component is set to the performance of the power saving level according to the power requirement.
There is provided a load verification system that performs load performance verification of a data storage area. The load verification system includes: a verification-purposed performance metrics collector that acquires performance metrics of the data storage area and a volume thereof that are load verification targets; an expected performance metrics data generator that generates performance metrics data that will be a result of a load, expected with respect to the load-verification-target volume; and an I/O pattern data generator that generates input/output pattern data based on which a load is generated that causes generation of performance metrics data whose performance is equivalent to expected performance indicated in the expected performance metrics data. An input/output pattern is reproduced by a reproduction section based on the data generated by the I/O pattern data generator, and performance metrics generated as a result of applying a load to the load-verification-target volume are collected.
An environmental load reducing system is disclosed to enable a user desiring to contribute to reduction of environmental loads to examine a switch to a configuration reducing environmental loads. The environmental load reducing system compares configuration information associated with an operating system operated by a user with configuration information associated with a different operating system of a different user, to detect a difference between system components of these systems. The system also compares a calculation result of a carbon dioxide emission amount emitted by operation of the operating system for a fixed period of time with a calculation result of a carbon dioxide emission amount emitted by operation of the different operating system for the fixed period of time. A presentation unit presents the different system component as a low environmental load component according to the system component difference and in reference to a comparison result.
To back up stored data of a storage device installed on-premise to a storage service provided by a public cloud more reliably and efficiently. A storage system according to the invention includes a storage device having first storage logical volumes (LDEVs), and a storage device having second storage LDEVs. When stored data of a first LDEV and stored data of a second LDEV are synchronized with each other, network conditions in a transfer path from the storage device having the first LDEVs to the public cloud and in a transfer path from the storage device having the second LDEVs to the public cloud are observed. The first LDEV or the second LDEV is selected as a backup source based on the network conditions.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
A worker node included in a storage system 1 includes a score calculation unit 31 that calculates a score of the worker node based on a failure history and an operation status of the worker node, and a master node (P) includes a promotion node selection unit 52 that, when a failure occurs in one of the master nodes, compares the scores of the worker nodes and selects, based on the scores, a worker node to be promoted to a master node in place of the master node in which the failure has occurred.
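The score comparison and promotion selection can be sketched as below; the scoring formula and its weights are illustrative assumptions, not taken from the patent.

```python
def node_score(failure_count, uptime_ratio, load):
    """Higher is better: few past failures, high uptime, low current load."""
    return uptime_ratio * 100 - failure_count * 10 - load

def select_promotion_node(workers):
    """workers: {name: (failure_count, uptime_ratio, load)}.
    Returns the worker node to promote to master."""
    return max(workers, key=lambda w: node_score(*workers[w]))
```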
An API execution control system including: an API reception unit configured to receive a plurality of successive API execution requests from a user; a table update unit configured to detect patterns of the plurality of received successive API execution requests and register the detected patterns of the plurality of successive API execution requests in an API group management table; an API execution prediction unit configured to determine whether the API execution request received by the API reception unit matches the pattern registered in the API group management table; and an API execution control unit configured to, when it is determined that the API execution request received by the API execution prediction unit matches the pattern registered in the API group management table, raise an execution priority of an API whose next execution request is to be received.
A storage billing system includes a fee calculation section that determines a fee on the basis of an amount of storage usage according to a contract that is renewed when the storage is updated; a contract history check section that, when the fee for the usage of the storage is to be determined, determines whether the contract is renewed; a carbon dioxide emissions calculation section that, when the contract history check section determines that the contract is renewed, determines a difference between an amount of power usage by the storage before the update and an amount of power usage by the storage after the update, and calculates an amount of carbon dioxide emissions reduction; and a fee adjustment section that determines the fee by reducing the fee determined by the fee calculation section according to the amount of carbon dioxide emissions reduction determined by the carbon dioxide emissions calculation section.
An object is to effectively use resources of a plurality of storage nodes.
A storage system includes a plurality of storage nodes and a management unit configured to manage the plurality of storage nodes. Each of the plurality of storage nodes is configured to accumulate credits on a condition that a processing load is within a predetermined range, and to perform burst in which processing is performed with a load exceeding the predetermined range by consuming the credits. The management unit manages the credits of each storage node, determines a trigger of burst of predetermined storage processing based on an accumulation state of the credits in the plurality of storage nodes related to the storage processing, and, when the credits are accumulated in the plurality of storage nodes related to the predetermined storage processing, executes the predetermined storage processing by the burst by consuming the accumulated credits.
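Credit accrual and burst as described above might be sketched like this; the baseline value and the one-credit-per-work-unit model are assumptions for illustration.

```python
class BurstNode:
    BASELINE = 100  # work units per tick within the normal range (assumed)

    def __init__(self):
        self.credits = 0

    def tick(self, demand):
        """Process one tick of demand; returns work actually done."""
        if demand <= self.BASELINE:
            self.credits += self.BASELINE - demand  # bank unused capacity
            return demand
        burst = min(demand - self.BASELINE, self.credits)
        self.credits -= burst  # spend credits to exceed the baseline
        return self.BASELINE + burst

def can_burst(nodes, extra_needed):
    """Management-unit check: trigger the burst only when every node involved
    in the storage processing has accumulated enough credits."""
    return all(n.credits >= extra_needed for n in nodes)
```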
H04L 67/1012 - Server selection for load balancing based on compliance of requirements or conditions with available server resources
H04L 67/1008 - Server selection for load balancing based on parameters of servers, e.g. available memory or workload
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
One of objects of the present invention is to provide a backup system and a backup method that make it possible to improve availability by increasing the number of paths for acquiring backups to be stored in a data protection area. A backup storage apparatus includes a first storage, a second storage, and a BP storage. The BP storage has a first backup volume, a second backup volume, and a data protection area. The BP storage stores first route BP images and second route BP images in the data protection area such that generations of the first route BP images and generations of the second route BP images do not overlap.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
66.
INFORMATION PROCESSING DEVICE, NETWORK DEVICE, AND METHOD FOR UPDATING NETWORK DEVICE FIRMWARE
An information processing device includes a controller and an interface device. The controller stores a compressed file including new firmware for the interface device, and sends at least part of compressed data in the compressed file to the interface device. The interface device performs a signature verification process and a decompressing process on the received compressed data in parallel.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
67.
INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
Information regarding an API parameter of a target device is collected from the outside as a manual and knowledge, a description regarding a device configuration, a main parameter, and a specific parameter for which a value is to be set is extracted from the collected manual and knowledge, and a classification axis is created based on the device configuration and the main parameter included in the extracted description. An API execution log and configuration information are collected from the target device, the collected API execution log is classified according to the classification axis, a setting value of the specific parameter in the API execution log is aggregated for each classification, a most frequent setting value is determined as a default value of the parameter, and when the specific parameter is not set in the higher-level API call, the default value is set to make a lower-level API call.
A deployment optimization program causes each of a plurality of optimization engines that use different policies for calculating a deployment plan for data and containers to calculate candidate information including a candidate deployment plan that is a candidate for the deployment plan, and an evaluation value obtained by evaluating a process related to the data in the candidate deployment plan, and integrates a plurality of pieces of the candidate information based on the candidate deployment plan included in the calculated plurality of pieces of the candidate information so as to generate data and container deployment plan information.
An estimation server estimates power consumption of a workload executed on a physical server. A processor trains a plurality of short-range power models that receive a metric of the physical server as an input and output a power consumption value of the physical server in a plurality of short ranges obtained by dividing an entire power range of the physical server into a predetermined division number, trains a classifier that receives a metric of the physical server as an input and outputs specification information specifying a corresponding short range, and, based on a metric of the workload and the classifier, specifies specification information specifying the short range to be applied. The power consumption of the workload is estimated based on the metric of the workload and the short-range power model corresponding to the short range indicated by the specification information.
Each of one or a plurality of storage nodes included in a storage system includes a volume provided to a compute and a component that can affect performance of the volume. In a case where a computer determines that a load of a component in any of the one or plurality of storage nodes increased, decreased, increases, or decreases due to the fact that a load of an existing volume in the storage node increased, decreased, increases, or decreases, the computer selects vertical scaling as a scaling method for the storage system, and/or in a case where the computer determines that a load of a component in any of the one or plurality of storage nodes increased, decreased, increases, or decreases due to the fact that the number of volumes in the storage node increased, decreased, increases, or decreases, the computer selects horizontal scaling as a scaling method for the storage system.
Upon acquiring target business specifying information for specifying a target business, disaster recovery (DR) operation phase determination processing of calculating an operation phase is executed based on copy configuration information for managing a pair configuration of a target business use volume and a copy volume and a copy status table for managing a copy status in the target business use volume and the copy volume, a disaster pattern corresponding to a disaster situation of a volume of a disaster target having been damaged is calculated in accordance with an operation phase calculated by the DR operation phase determination unit, and a cloud use fee is calculated for each disaster pattern from a failure occurrence to completion of system recovery of a use site where a use volume is created.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
A detection apparatus attached to a target apparatus, the detection apparatus including: a first resistance element attached to a power supply; a second resistance element connected in series to the first resistance element; and a detection unit that acquires an intermediate voltage between the first resistance element and the second resistance element, in which the second resistance element is exposed to a surrounding atmosphere of the apparatus, and in which the detection unit detects a sign of corrosion of the apparatus caused by the surrounding atmosphere based on a change in the intermediate voltage.
A computing device includes a storage unit storing a job list, and a computing unit that performs a computation related to an instance capable of executing burst processing by consuming credits. The job list is a list of batch jobs. The batch jobs include a plurality of combinations of a time frame and data regarding a size of a job, the time frame being set as a combination of a time point at which execution of the job can be started and a time point by which the job should be completed. The burst processing of the job is at a speed exceeding a baseline but not exceeding a maximum speed, the baseline being a processing speed of the job that can always be attained. The computing unit determines whether or not the job can be completed within the time frame, for each of the batch jobs in the job list.
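The feasibility check the computing unit performs can be sketched as follows, under the assumption that one credit buys one unit of above-baseline work; all parameter names are illustrative.

```python
def job_fits(size, start, deadline, baseline, max_speed, credits):
    """size: total work units of the batch job.
    start/deadline: the job's time frame; baseline/max_speed: units per tick.
    Returns True when the job can complete within its time frame."""
    window = deadline - start
    if window <= 0:
        return False
    if size <= baseline * window:       # completes without bursting
        return True
    extra = size - baseline * window    # work that must come from burst
    burst_capacity = (max_speed - baseline) * window
    return extra <= min(burst_capacity, credits)
```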
A storage system includes a non-volatile storage device and a plurality of storage controllers that control reading and writing for the storage device. Each of the plurality of storage controllers includes a processor and a memory. The storage controller stores a write request from a host for the storage device as cache data in the memory, returns a write completion response to the host after protecting the cache data by a first memory protection method or a second memory protection method, and destages the cache data into the storage device after the write completion response. The storage controller switches between the first memory protection method and the second memory protection method according to an operation state of another storage controller.
Regarding cloud storage, a time-out of an I/O response to an I/O request from a host is prevented. A storage node 1 executes an I/O processing thread 101, which retains an I/O resource used for processing an I/O request and the I/O response to it, and a response standby processing thread. The I/O processing thread transmits an I/O request to cloud storage in response to a request from a host, and moves the I/O resource to the response standby processing thread if it has not received the I/O response from the cloud storage before a first time-out time elapses. The response standby processing thread transmits a response confirmation to demand the I/O response from the cloud storage by using the I/O resource moved from the I/O processing thread, and performs standby processing on the I/O response in place of the I/O processing thread.
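The handoff of the I/O resource from the I/O processing thread to the response standby processing thread can be sketched with two Python threads and queues; the class and constant names are illustrative, and the cloud response is simulated by a queue rather than a real network:

```python
import queue
import threading

FIRST_TIMEOUT = 0.05  # seconds; stands in for the first time-out time

class IOResource:
    def __init__(self, request_id):
        self.request_id = request_id

def io_thread(responses, standby_q, resource, results):
    """Wait briefly for the cloud response; on time-out, move the I/O
    resource to the standby thread instead of failing the host I/O."""
    try:
        results.append(responses.get(timeout=FIRST_TIMEOUT))
    except queue.Empty:
        standby_q.put(resource)

def standby_thread(standby_q, responses, results):
    """Take over: send a response confirmation (simulated by a tuple)
    and keep waiting for the response in place of the I/O thread."""
    resource = standby_q.get()
    results.append(("confirm", resource.request_id))
    results.append(responses.get())

# Demo: the cloud response arrives only after the first time-out.
responses, standby_q, results = queue.Queue(), queue.Queue(), []
res = IOResource(request_id=1)
t1 = threading.Thread(target=io_thread, args=(responses, standby_q, res, results))
t2 = threading.Thread(target=standby_thread, args=(standby_q, responses, results))
t1.start(); t2.start()
t1.join()                      # times out; resource moved to standby thread
responses.put("late-response") # cloud finally answers
t2.join()
```

After the run, `results` records the confirmation sent by the standby thread followed by the late response it received on the I/O thread's behalf.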
In a storage system, when a communication path for remote copying from a primary volume to a secondary volume is set, a storage node in a primary site makes an inquiry to a discovery node in a secondary site about node information on a node having a secondary volume paired with a primary volume. Based on the node information acquired from the discovery node, a primary volume owner node sets a communication path between the primary volume owner node and a secondary volume owner node, the communication path being used for remote copying volume data from the primary volume to the secondary volume.
A server attack such as a ransomware attack is detected, without increasing the system load, using metrics that are normally monitored. A storage system includes a first storage connected to a server running an application, a data protection storage that acquires backups of the first storage, and a monitoring server that monitors the data protection storage. The monitoring server includes a backup execution unit that backs up data from the first storage to the data protection storage, a written-data-amount monitoring unit that determines an abnormality when the amount of data written to the data protection storage exceeds a predetermined amount, and an output unit that issues an alert when the written-data-amount monitoring unit determines an abnormality.
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
Table parallelization processing, which parallelizes data processing on a plurality of tables by allocating tables as units to cores of a processing execution computer, is performed. When a table is larger than a predetermined data size, record parallelization processing is performed instead, which divides the table into a plurality of records and parallelizes data processing on those records by allocating records as units to the cores of the processing execution computer.
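A minimal sketch of choosing the parallelization unit per table; the threshold and the `(name, size, records)` table representation are assumptions:

```python
RECORD_SPLIT_THRESHOLD = 1000  # bytes; illustrative predetermined data size

def plan_parallel_units(tables):
    """For each (name, size, records) table, allocate the whole table as
    one unit (table parallelization), except that a table larger than
    the threshold is split into per-record units (record parallelization)."""
    units = []
    for name, size, records in tables:
        if size > RECORD_SPLIT_THRESHOLD:
            units.extend((name, i) for i in range(records))  # record units
        else:
            units.append((name, None))                       # whole-table unit
    return units
```

A scheduler would then hand each unit in the returned list to a core of the processing execution computer.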
Performance deterioration of a storage system is prevented. A storage controller includes one or more processors, and one or more memories configured to store one or more programs to be executed by the one or more processors. The one or more processors are configured to convert pre-conversion metadata for controlling the storage system into post-conversion metadata in a format corresponding to a new controller newly installed in the storage system, switch an access destination between the pre-conversion metadata and the post-conversion metadata according to an access control code during the conversion, and access the pre-conversion metadata without using the access control code before the conversion starts.
Systems and methods described herein can involve, responsive to a request for a volume requiring remote copy, checking an IO throughput setting of the volume; using network bandwidth based on the IO throughput setting of the volume; and, when the use of the network bandwidth does not exceed the total remote copy network resources allocated for the existing volumes configured with remote copy plus the volume requiring remote copy, establishing a remote copy relationship for the volume in response to the request.
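The admission check can be sketched as a single comparison, under the assumption of a 1:1 mapping from a volume's IO throughput setting to network bandwidth:

```python
def admit_remote_copy(new_volume_throughput, existing_throughputs, total_rc_bandwidth):
    """Admit the new remote-copy volume only if the bandwidth implied by
    its IO throughput setting, added to that of the existing remote-copy
    volumes, stays within the total allocated remote copy network
    resources."""
    return sum(existing_throughputs) + new_volume_throughput <= total_rc_bandwidth
```

For example, with 100 units of remote copy bandwidth and existing volumes consuming 40 and 20, a new volume set to 30 is admitted but one set to 50 is not.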
A method for application placement management. The method comprises: identifying, by a storage agent, a first server from a plurality of servers or a first cluster from a plurality of clusters, wherein the first server or the first cluster can access a first volume through which an application can be executed; identifying, by the storage agent, data associated with the application, wherein the data is stored in the first volume; identifying, by the storage agent, a group of servers from the plurality of servers or a group of clusters from the plurality of clusters having access to the data; updating, by the storage agent, data accessibility associated with each server of the group of servers or each cluster of the group of clusters; and notifying, by the storage agent, the updated data accessibility associated with each server of the group of servers or each cluster of the group of clusters.
An information processing system includes storage apparatuses installed in respective areas, SDSs provided on a cloud, and a management system. The management system estimates, with reference to configuration information and performance information regarding a volume of each of the storage apparatuses, the resource amount required to fail over the volume of each storage apparatus to a duplicate volume. The management system selects an SDS as a replication destination in such a manner as to minimize the required resource amount aggregated for each installation location and for each SDS, while distributing, across the SDSs, the duplicate volumes related to storage apparatuses located at an identical point.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
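One way to realize the destination selection described in the abstract above is a greedy assignment that balances aggregated load across SDSs while keeping duplicates of same-site storage apparatuses on different SDSs; the algorithm itself is an assumption, since the abstract does not specify one:

```python
def select_sds(volumes, sds_names):
    """Greedy sketch. `volumes` is a list of (site, required_resource)
    pairs, one per storage apparatus volume to replicate. Each volume is
    assigned to the least-loaded SDS among those not already holding a
    duplicate from the same site; if every SDS already holds one, the
    least-loaded SDS overall is used."""
    load = {s: 0 for s in sds_names}
    used_by_site = {}
    placement = {}
    for i, (site, req) in enumerate(volumes):
        taken = used_by_site.setdefault(site, set())
        candidates = [s for s in sds_names if s not in taken] or sds_names
        best = min(candidates, key=lambda s: load[s])
        placement[i] = best
        load[best] += req
        taken.add(best)
    return placement, load
```

Two volumes from the same site are thus spread over different SDSs, while the third volume from another site lands on whichever SDS keeps the aggregated resource amounts balanced.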
A storage system is protected from tampering of software executed by the storage system. The storage system includes a first storage controller and a second storage controller. The first storage controller includes a first input and output controller configured to input and output host data, and a first management controller. The second storage controller includes a second input and output controller configured to input and output host data, and a second management controller. The first management controller is configured to store a backup of software of at least one of the second storage controller or the first input and output controller. When the software of the at least one is tampered with, a copy of the tampered software is stored, and the tampered software is recovered by using the backup.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
A multitenant management system selects a name space corresponding to a user from a plurality of name spaces and determines whether the user is a legitimate user by using user management information of the name space. When a result of the determination is positive, the system determines whether a resource of an access destination conforming to the received resource access request falls within a resource access range corresponding to one tenant scope indicated by the user management information of the selected name space. When the result of the determination is positive, the system executes the resource access request.
H04L 47/722 - Admission controlResource allocation using reservation actions during connection setup at the destination endpoint, e.g. reservation of terminal resources or buffer space
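The two determinations in the abstract above, user legitimacy against the namespace's user management information and then tenant-scope containment, can be sketched as follows; the dict layout and the token-based legitimacy check are illustrative assumptions:

```python
def handle_request(namespaces, user, token, resource):
    """Select the namespace corresponding to the user, verify the user
    is legitimate via the namespace's user management information, then
    verify the requested resource falls within the tenant scope before
    executing the resource access request."""
    ns = namespaces.get(user)  # namespace corresponding to this user
    if ns is None or ns["users"].get(user) != token:
        return "reject: not a legitimate user"
    if resource not in ns["tenant_scope"]:
        return "reject: out of tenant scope"
    return f"execute: {resource}"
```

A request is executed only when both determinations are positive; failing either one rejects it with the corresponding reason.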
Reliability in a storage system can be easily and appropriately improved. In a computer system including a storage system configured to provide a plurality of instances in any one of a plurality of subzones divided by risk boundaries, a processor of the computer system is configured to make a storage controller that controls I/O processing for a volume based on a capacity pool provided by a plurality of storages redundant to the plurality of instances provided in the plurality of subzones.
A data storage system in which the cost required to change the system configuration and the burden on an administrator are reduced, the capacity of a backup device is effectively utilized, and backup processing is optimized. When conditions related to a capacity resource specified in a backup requirement table cannot be satisfied, the predicted resource consumption when the backup-target data of a task is backed up to other destinations is calculated using an existing backup information table. A score representing a low impact on the resource when migrating to each of the other backup destinations is calculated on the basis of the predicted resource consumption, a backup destination serving as the migration destination of the backup related to the task is determined on the basis of the score, and a backup schedule table is updated such that the determined backup destination becomes the backup destination for the task.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
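A minimal sketch of the scoring step above: the predicted resource consumption at each candidate destination is compared against its capacity, and the destination with the lowest resource impact is chosen. Scoring by remaining free ratio is an assumed metric, not one stated in the abstract:

```python
def choose_backup_destination(task_size, destinations):
    """`destinations` maps name -> (capacity, used). The predicted
    consumption after migrating the task's backup is used + task_size;
    destinations that cannot hold the data are skipped, and the highest
    score (most remaining headroom, i.e. lowest impact) wins."""
    best, best_score = None, -1.0
    for name, (capacity, used) in destinations.items():
        predicted = used + task_size
        if predicted > capacity:
            continue  # capacity condition cannot be satisfied here
        score = (capacity - predicted) / capacity  # higher = lower impact
        if score > best_score:
            best, best_score = name, score
    return best
```

The chosen destination would then be written back into the backup schedule table as the task's new backup destination.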
A method for migrating an application from a first cluster to a second cluster. The method may include: detecting an application migration request from the first cluster to the second cluster; identifying first volumes associated with the first cluster that are used by the application; establishing data copying from the first volumes of a first storage device associated with the first cluster to second volumes of a second storage device associated with the second cluster; determining whether a copy condition is met based on the data copying; when the copy condition is met, stopping the application on the first cluster; flushing uncopied data from the first volumes to the second volumes; determining whether the flushing of the uncopied data is completed; and when the flushing of the uncopied data is completed, deploying the application on the second cluster.
A data management system for backing up data in a first environment to a second environment includes: backup management information in which source data, backup data, a backup method, and data backed up using the backup method are associated; and a secondary usage data copy unit serving as a secondary usage processing unit that receives a usage request for the backup data stored in the second environment, wherein the secondary usage processing unit refers to the backup management information, specifies the backup data required for processing the usage request, specifies the backup method for the backup data, restores the backup data on the basis of the specified backup method, and enables processing of the usage request.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
A first node performs copy (virtual copy) of address mapping between a virtual volume and a pool to a first virtual volume to create a third virtual volume in the first node. A second node performs mapping from a first pool volume in the second node to the third virtual volume in the first node, links an address of the first pool volume, which is mapped to the third virtual volume, to an address of a second virtual volume in the second node on a one-to-one basis, and performs log-structured write of the data in the second virtual volume to a second pool volume in the second node.
An object is to efficiently solve a quadratic programming problem having a k-hot constraint (k is a positive integer) on binary variables. A preferred aspect of the invention is an optimization method for solving, using an information processing apparatus including a processor, a storage device, an input device, and an output device, a quadratic programming problem in which one or more independent k-hot constraints are imposed on binary variables. The information processing apparatus relaxes the binary variables into continuous values by adding correction values to the nonlinear coefficients of the binary variables on which the k-hot constraints are imposed, and executes a solution search while satisfying the k-hot constraints by executing state transitions such that the sum of each set of continuous variables on which a k-hot constraint is imposed remains constant.
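The constraint-preserving state transition can be illustrated with a plain local search over binary variables, where each move swaps one selected index for one unselected index so that `sum(x) == k` holds throughout. The continuous relaxation described in the abstract is omitted; this sketch shows only the swap-based transition:

```python
import random

def khot_search(Q, k, n, iters=2000, seed=0):
    """Sketch of minimizing x^T Q x over binary x with a single k-hot
    constraint (exactly k of n variables are 1). Each transition swaps
    a selected index for an unselected one, so sum(x) == k is invariant;
    worsening moves are reverted (greedy acceptance)."""
    rng = random.Random(seed)
    x = [1] * k + [0] * (n - k)
    energy = lambda v: sum(Q[i][j] * v[i] * v[j] for i in range(n) for j in range(n))
    best = energy(x)
    for _ in range(iters):
        i = rng.choice([p for p in range(n) if x[p] == 1])
        j = rng.choice([p for p in range(n) if x[p] == 0])
        x[i], x[j] = 0, 1          # constraint-preserving swap
        e = energy(x)
        if e <= best:
            best = e
        else:
            x[i], x[j] = 1, 0      # revert worsening move
    return x, best
```

On a diagonal Q with costs (5, 1, 1, 5) and k = 2, the search settles on the two cheap variables while never leaving the 2-hot manifold.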
A protocol chip transmits a request from the host apparatus to a first processor through a first address translation unit. The first processor transmits a response to the request from the host apparatus to the protocol chip through the first address translation unit. When the first processor stops processing, an instruction to transmit the request from the host apparatus to a second processor is transmitted to the protocol chip. Upon receiving this instruction, the protocol chip transmits the request from the host apparatus to the second processor through a second address translation unit. The second processor transmits the response to the request from the host apparatus to the protocol chip through the second address translation unit.
Logical hierarchies include an append hierarchy in a storage device. The storage device writes user data received in the append hierarchy to a free area, and selects a garbage collection operation mode for a first logical area in the append hierarchy from operation modes including first and second operation modes. The conditions for executing garbage collection in the first operation mode include the capacity of the free area in the append hierarchy being less than a threshold, and the amount of garbage, that is, invalid data after update, in the first logical area being equal to or greater than a threshold. The conditions for executing garbage collection in the second operation mode include the amount of garbage in the first logical area being equal to or greater than a threshold, while excluding the condition on the capacity of the free area in the append hierarchy.
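The two sets of execution conditions can be written as a small predicate; the threshold values are illustrative:

```python
GARBAGE_THRESHOLD = 100        # invalid-data amount in the logical area
FREE_CAPACITY_THRESHOLD = 50   # free area remaining in the append hierarchy

def should_run_gc(mode, free_capacity, garbage_amount):
    """First mode: run GC only when the append hierarchy is short on
    free space AND the area holds enough garbage. Second mode: the
    garbage condition alone triggers GC, regardless of free capacity."""
    if mode == 1:
        return (free_capacity < FREE_CAPACITY_THRESHOLD
                and garbage_amount >= GARBAGE_THRESHOLD)
    if mode == 2:
        return garbage_amount >= GARBAGE_THRESHOLD
    raise ValueError("unknown mode")
```

Mode 1 thus defers reclamation until space pressure appears, while mode 2 reclaims a garbage-heavy area proactively.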
A storage system includes a plurality of storage nodes each including a non-volatile storage device, a storage controller that processes data reads and writes to the storage device, and a volatile memory. The storage controller stores data related to a data write in the memory, stores the data that needs to be non-volatile among the data stored in the memory as log data in the storage device, makes the log data stored in the storage device redundant among the plurality of storage nodes, and performs a recovery process for the log data when a problem occurs in the log data stored in the storage device of one of the storage nodes.
A calculator system connected to a public network efficiently avoids congestion. The calculator system is connected to a network including a network switch, includes a plurality of calculators, and, when a data packet is lost on the network, recovers the transfer of the lost data packet by a retransmission operation. The calculator system includes the calculators, software running on the calculators, and a timing adjusting mechanism present between the calculators and the network. The timing adjusting mechanism is configured to calculate a delay time for delaying transmission of a data packet transmitted from the software, based on characteristics of the data packet, and to delay the transmission of the data packet by the calculated delay time.
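A sketch of a delay calculation based on packet characteristics follows. The specific characteristics used (size and destination port) and the linear formula are assumptions, since the abstract leaves the policy open:

```python
def delay_for_packet(size_bytes, dest_port,
                     base_delay_us=10, per_kb_us=2, bulk_ports=(5001,)):
    """Illustrative policy: small control packets go out immediately,
    while bulk-transfer packets (identified by destination port) are
    delayed in proportion to their size, so that many senders do not
    burst into the network switch at once."""
    if dest_port not in bulk_ports:
        return 0  # no delay for non-bulk traffic
    return base_delay_us + per_kb_us * (size_bytes // 1024)
```

Spacing out bulk packets this way reduces queue overflow (and hence packet loss and retransmission) at the switch, which is the congestion the timing adjusting mechanism targets.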
In a data processing method executed by a data processing system that performs compression and/or decompression of image data, a tensor shape representing compression target data is obtained, and an input-shape-fixed compressor is generated that performs compression processing using, as input, data having an input shape fixed for each shape of the compression target data, and that outputs compressed data. The data processing system then performs compression processing of the compression target data using the generated input-shape-fixed compressor to generate compressed data.
Proposed are a highly available information processing system and information processing method capable of withstanding a failure in units of sites. A redundancy group including a plurality of storage controllers installed in different sites is formed. The redundancy group includes an active state storage controller which processes data, and a standby state storage controller which takes over processing of the data if a failure occurs in the active state storage controller. The active state storage controller stores the data from a host application installed in the same site in the storage device installed in that site, and stores redundant data, for restoring the data stored in the storage device of its own site, in the storage device installed in another site where a standby state storage controller of the same redundancy group is installed.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
G06F 11/16 - Error detection or correction of the data by redundancy in hardware
97.
Storage system, data processing method of storage system, and data processing program of storage system
A storage system includes a storage device, a processor, and a storage unit. The processor provides a volume configured on the storage device to a mainframe server. The processor manages data handled by an open-architecture server, using a first slot having a first slot length as a unit, in the volume, and manages data handled by the mainframe server, using a second slot having a second slot length shorter than the first slot length as a unit, the first slot storing therein a predetermined number of the second slots, in the volume. The processor performs a process using one of the first slot and the second slot as a unit, depending on the type of the process.
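The nesting of second (mainframe) slots inside first (open-architecture) slots implies a simple index mapping; the slot lengths and the number of second slots per first slot below are illustrative, not values from the abstract:

```python
FIRST_SLOT_LEN = 4096           # first slot length (assumed)
SECOND_SLOTS_PER_FIRST = 8      # predetermined number of second slots per first slot
SECOND_SLOT_LEN = FIRST_SLOT_LEN // SECOND_SLOTS_PER_FIRST

def locate_second_slot(second_slot_index):
    """Map a mainframe (second) slot index to the containing open
    (first) slot index and the byte offset within that first slot."""
    first_slot = second_slot_index // SECOND_SLOTS_PER_FIRST
    offset = (second_slot_index % SECOND_SLOTS_PER_FIRST) * SECOND_SLOT_LEN
    return first_slot, offset
```

A process that works in first-slot units can then cover a whole group of second slots at once, while mainframe accesses address individual second slots via this mapping.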
In the event of a partial outage, a multi-node system is enabled to start processing easily and appropriately. The multi-node system includes multiple nodes each including at least one controller, the controller including a processor, a power supply control microcomputer, a memory, and a nonvolatile memory. The processor detects whether or not any one of the nodes is inactive due to a power outage. The processor determines whether or not operation of the multi-node system can be continued, on the basis of operational status of the nodes. Upon determination that the operation of the multi-node system cannot be continued, the processor saves necessary data held in the memory into the nonvolatile memory. The power supply control microcomputer restarts the processor. When the node in the power outage has recovered therefrom following the restart, the multi-node system is caused to start processing.
A hybrid cloud system includes a management server, a storage of a source-side data center serving as a remote copy source, and a storage by a cloud service provided from a target-side data center serving as a backup destination. The management server is configured to make a request for a disaster recovery configuration using the storage by the cloud service, based on a recovery time objective requirement related to a recovery time objective, a recovery point objective requirement related to a recovery point objective, and a recovery level objective requirement related to a recovery level objective.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
100.
INFORMATION PROCESSING SYSTEM AND INFORMATION PROCESSING METHOD
An information processing system includes a physical drive, a compute unit, and a storage control unit that processes a data input/output request from the compute unit, in which: the storage control unit includes an IO processing unit and an encryption/decryption-related processing unit; the encryption/decryption-related processing unit is capable of referring to key generation method information including at least one element used to generate a key used to encrypt/decrypt the data and an algorithm for generating a key by using the element; and the encryption/decryption-related processing unit generates a key used to encrypt/decrypt the data according to a content set in the key generation method information, and encrypts data received from the compute unit by the IO processing unit or decrypts data read from the physical drive by the IO processing unit by using the key.
G06F 21/78 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure storage of data
G06F 13/16 - Handling requests for interconnection or transfer for access to memory bus
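The key generation described in the abstract above, combining configured elements with a configured algorithm, can be sketched as follows; the element names (volume ID, device serial) and the SHA-256 choice are illustrative assumptions:

```python
import hashlib

def generate_key(key_generation_method, volume_id, serial):
    """The key generation method information names the elements to use
    and the algorithm that combines them into an encryption/decryption
    key. Here the available elements are a volume ID and a device
    serial, combined by SHA-256."""
    elements = {"volume_id": volume_id, "serial": serial}
    material = "|".join(str(elements[e]) for e in key_generation_method["elements"])
    if key_generation_method["algorithm"] == "sha256":
        return hashlib.sha256(material.encode()).digest()
    raise ValueError("unsupported algorithm")
```

Because the key is derived deterministically from the configured elements, the same key can be regenerated for decryption without storing it, while different volumes yield different keys.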