A network interface includes a processor, memory, and a cache between the processor and the memory. The processor secures a plurality of buffers for storing transfer data in the memory, and manages an allocation order of available buffers of the plurality of buffers. The processor returns a buffer released after data transfer to a position before a predetermined position of the allocation order.
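As an illustration of the allocation-order management described above, here is a minimal Python sketch; the deque-based free list and the exact reinsertion position are assumptions for illustration, not details taken from the abstract.

```python
from collections import deque

class BufferPool:
    """Free-buffer list whose allocation order favors recently released buffers.

    Returning a released buffer near the head of the allocation order makes it
    likely to be reallocated while its contents are still resident in the
    cache between the processor and the memory.
    """

    def __init__(self, num_buffers: int, reinsert_pos: int = 4):
        self.free = deque(range(num_buffers))  # allocation order, head first
        self.reinsert_pos = reinsert_pos       # the "predetermined position"

    def allocate(self) -> int:
        return self.free.popleft()             # take the next buffer in order

    def release(self, buf: int) -> None:
        # Return the buffer BEFORE the predetermined position rather than
        # appending it to the tail, so it is reused while still cache-hot.
        pos = min(self.reinsert_pos, len(self.free))
        self.free.insert(pos, buf)
```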
The storage apparatus includes a midplane provided vertically to an installation surface of the storage apparatus and provided with a plurality of connectors arranged in parallel in an X-axis direction parallel to the installation surface; and a plurality of adapters, on each of which two drive apparatuses are mounted, which are arranged in parallel in the X-axis direction and connected to the midplane through the respective connectors in a Y-axis direction parallel to the installation surface and perpendicular to the X-axis direction. Each adapter includes a board having a plurality of connectors that connect, in the Y-axis direction, to the respective connectors of the two drive apparatuses arranged in parallel in the Y-axis direction, and a frame for mounting, on the adapter, the board and the two drive apparatuses arranged in parallel in the Y-axis direction.
A storage system includes a storage apparatus having a plurality of physical drives and a controller. In a case where a physical failure occurs in a physical drive, the controller additionally installs a drive provided from a cloud, and maintains the redundant array of inexpensive disks (RAID) configuration that existed before the physical failure by using the physical drives other than the failed physical drive together with the additionally installed drive provided from the cloud.
A failure predictor of a drive apparatus in a storage system is detected more accurately. A control apparatus 1 for a storage system S stores a learning model(s) 132M for evaluating response performance of a drive apparatus 3 with respect to execution of a command relating to input and output by the control apparatus 1. The control apparatus 1 acquires operation information of the drive apparatus 3 and inputs specified information regarding commands, which is included in the operation information, to the learning model 132M. The control apparatus 1 judges a failure predictor of the drive apparatus 3 on the basis of output relating to the response performance by the learning model 132M in response to the input of the specified information.
A storage apparatus includes a first controller having a first memory, a second controller having a second memory, and a memory module having a third memory. The first memory stores drive control information including a correspondence between a logical address and a physical address, first cache data in a data input-output (I/O) process, and first cache control information including a correspondence between a logical address and a cache address of the first cache data. The second memory stores drive control information, second cache data in the data I/O process, and second cache control information including a correspondence between a logical address and a cache address of the second cache data. The third memory stores first cache data provided with redundancy and second cache data provided with redundancy.
The present invention achieves high throughput by making efficient use of virtual device resources. A storage system includes a storage device and a processor. The processor manages a primary volume and a snapshot volume as a snapshot family. The processor uses a snapshot virtual device as the data storage destination for the primary volume and for the snapshot volume. Upon receiving a write request from a host, the processor switches between an overwrite process and a new allocation process in accordance with the reference made to a write destination address range by the snapshot volume and with the degree of distribution of the write destination address range in the snapshot virtual device. The overwrite process is performed to overwrite an allocated area of the snapshot virtual device. The new allocation process is performed to allocate a new area of the snapshot virtual device to the write destination address range.
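One plausible reading of the overwrite/new-allocation switch, as a Python sketch; the function names, the dict-based mapping, and the spread heuristic are all illustrative assumptions, not the patented algorithm itself.

```python
def choose_write_process(addr_range, referenced_by_snapshot, svd, max_spread=2.0):
    """Decide between overwriting an allocated area and allocating a new one.

    referenced_by_snapshot: True if a snapshot volume still references the
        write destination address range (its data must be preserved).
    svd: dict mapping logical address -> page number in the snapshot
        virtual device.
    """
    if referenced_by_snapshot:
        return "new_allocation"        # redirect the write; keep snapshot data
    pages = [svd[a] for a in addr_range if a in svd]
    if len(pages) < len(addr_range):
        return "new_allocation"        # part of the range is unallocated
    # Degree of distribution: span of allocated pages relative to their count.
    if (max(pages) - min(pages) + 1) > max_spread * len(pages):
        return "new_allocation"        # too scattered; allocate a fresh area
    return "overwrite"                 # contiguous enough to overwrite in place
```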
Provided are a unified storage system capable of reducing hardware costs by effectively utilizing a previously used storage controller when upgrading a storage controller in a unified storage, and an upgrade method for the unified storage system. The unified storage system A includes: a storage node having a controller; and a storage device configured to store data. The unified storage system supports block access and file access, and includes a file system configured to process file access from a client and perform block access to the controller. The controller processes block access from the client and block access from the file system to access the storage device that stores the data. The unified storage system is capable of adding a network-connected information apparatus and of migrating the file system to the information apparatus.
The present invention provides a storage system and a storage system control method that have high failure tolerance but are small in construction cost.
The storage system runs on a plurality of cloud computers disposed in a plurality of different zones, and includes storage nodes that are disposed in the plurality of computers in the plurality of zones to process inputted/outputted data. The storage nodes include a first storage node and a second storage node. The first storage node operates during normal operation. The second storage node is present in a zone different from that where the first storage node is present, and is able to take over processing of the first storage node. The plurality of cloud computers have a storage device and a virtual storage device. The storage device physically stores data that is to be processed by the storage nodes. The virtual storage device stores data that is made redundant between the zones by a plurality of the storage devices disposed in the different zones. The storage system accesses data in the virtual storage device by using storage control information, and stores the storage control information in the virtual storage device. The virtual storage device makes the stored data redundant between the zones. If a failure occurs in a zone including the first storage node, the second storage node takes over the processing of the first storage node by using the data made redundant between the zones.
When data is to be stored in or transferred to a storage system including a controller, the controller causes a selected offload instance to compress or decompress the data to be stored in or transferred to the storage system, the selected offload instance being one of one or more offload instances that support a specific compression scheme and to which the compression or decompression load is offloaded.
When a cooperation source user name corresponding to identification information of a user included in a received request is included in cooperation destination system user information, a cooperation destination system converts the identification information of the user included in the request into a user ID corresponding to the cooperation source user name in the cooperation destination system user information. The cooperation destination system processes the request based on the user ID and determines whether or not a cooperation source user name corresponding to the user ID is included in the cooperation destination system user information. In a case where the cooperation source user name corresponding to the user ID is included in the cooperation destination system user information, the cooperation destination system converts the user ID into the cooperation source user name corresponding to the user ID in the cooperation destination system user information.
G06F 16/2458 - Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
Systems and methods described herein can involve, for receipt of a write request from a server to a first storage system associated with a mounted volume having an attribute of read only or read/write, the write request processed by a second storage system, the second storage system setting the write destination of write data associated with the write request to the first storage system or the second storage system based on the attribute.
A file transfer system transfers a target file updated in a first computer to a second computer for each part obtained by dividing the target file, and includes: an update recording unit that records an update position of the target file in the first computer as an offset flag; an update determination unit that refers to the offset flag and determines the presence or absence of an update for each part of the target file; and a transfer unit that transfers each part determined to have an update by the update determination unit to the second computer. When a re-update occurs, in which the target file is updated after the transfer unit has started transferring any part of the target file, the transfer unit transfers the re-updated part to the second computer regardless of whether or not that part has already been transferred.
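A minimal Python sketch of this flag-driven loop; the set-based flag store and the `send` callback are assumptions. A concurrent updater re-adds part IDs to `offset_flags`, so a re-updated part is transferred again even if it already appears in `sent`.

```python
def sync_file(parts: dict, offset_flags: set, send) -> set:
    """Transfer every part whose offset flag is set.

    parts:        part_id -> part bytes of the divided target file.
    offset_flags: part IDs recorded as updated; re-updates re-add IDs here,
                  so an already-sent part is simply sent again.
    send:         callable(part_id, data) that ships one part to the
                  second computer.
    """
    sent = set()
    while offset_flags:            # flags may be re-set by concurrent updates
        part_id = offset_flags.pop()
        send(part_id, parts[part_id])
        sent.add(part_id)
    return sent
```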
A processor inputs/outputs data related to data input/output with respect to a volume to/from pages of a logical storage area, maps the volume to data of the logical storage area, and is able to release the storage area in units of pages. A plurality of the volumes can share data of the logical storage area. The processor performs garbage collection that deletes, as invalid data, data which is not referred to from any of the plurality of volumes, moves data which is referred to from any of the volumes to another page, and releases the storage area of a page from which the data has been deleted or moved. The processor stores a plurality of pieces of data in the destination page such that the pieces of data stored in the same page are mapped from the same volume by the garbage collection.
Storage controllers shift from a normal operation mode to a degraded operation mode in accordance with a command from a management apparatus. Each of the storage controllers in the normal operation mode works in the normal operation state. In the transition from the normal operation mode to the degraded operation mode, a storage controller designated by the management apparatus changes from the normal operation state into the standing-by state, and the other storage controllers change from the normal operation state into the degraded operation state. The storage controller in the standing-by state changes into the degraded operation state when a storage controller in the degraded operation state stops because of a failure under the degraded operation mode.
G06F 11/10 - Adding special bits or symbols to the coded information, e.g. parity check, casting out nines or elevens
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
15.
STORAGE SYSTEM AND METHOD FOR TRANSFERRING DATA THEREOF
A storage system includes a plurality of storage controllers. The storage controller includes a processor, a memory, and a transfer device that processes control data for controlling an internal operation of the storage system, the control data being transmitted and received between the plurality of storage controllers. The processor accumulates the control data in the memory when a transfer request for the control data is generated, generates a write request for transmitting a plurality of the control data stored in the memory, and transmits the write request to the other storage controller. The transfer device writes a plurality of the control data included in the write request to the memory upon receiving the write request.
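A Python sketch of the accumulate-then-batch transfer; the batch size, framing, and `send_write_request` callback are assumptions for illustration.

```python
class ControlDataBatcher:
    """Accumulate control-data items and emit them as one write request.

    On the receiving side, the transfer device writes every control-data
    item carried by the single write request into its memory on arrival.
    """

    def __init__(self, send_write_request, batch_size: int = 16):
        self.pending = []
        self.batch_size = batch_size
        self.send_write_request = send_write_request

    def on_transfer_request(self, control_data: bytes) -> None:
        self.pending.append(control_data)        # accumulate in memory
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        if self.pending:
            # One write request carries many control-data items, amortizing
            # the per-transfer overhead between the storage controllers.
            self.send_write_request(list(self.pending))
            self.pending.clear()
```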
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
H04L 49/9047 - Buffering arrangements including multiple buffers, e.g. buffer pools
16.
STORAGE CONTROLLER AND STORAGE CONTROLLER CONTROL METHOD
To replace a storage controller without stopping a host and without losing data related to IO processing from the host. A storage controller sets, in a port management table, a first host path definition between the host and first address information in a controller unit in addition to a second host path definition between the host and second address information in a controller unit. The storage controller sets, in a route management table, a first connection route between an input port and a first output port to which a port of the first address information is connected, in addition to a second connection route between an input port and a second output port to which a port of the second address information is connected. The storage controller transfers an IO to one controller unit or another controller unit based on the port management table and the route management table.
A storage system includes a plurality of nodes each of which includes a processor, in which when a replication source volume in a replication source storage system connected to the storage system is replicated to a plurality of nodes of the storage system, any one of the processors generates a first replicated volume by replicating the replication source volume of the replication source storage system in a first node among the plurality of nodes, and generates a second replicated volume mapped to the first replicated volume in a second node among the plurality of nodes.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
18.
DATA CENTER SYSTEM, INTER-BASE WORKLOAD CONTROL METHOD, AND INTER-BASE WORKLOAD CONTROL SYSTEM
Power demand at each base is adjusted so as to improve the renewable energy utilization ratio across all bases. An inter-base workload control system manages: an amount of excess power obtained by subtracting the power supply amount of the renewable energy power supply from the power consumption amount associated with execution of a workload in a future time range at each base; spatial migratable time range information on time ranges in which the workload in a future time range at a base can be migrated to another base; temporal migratable time range information on time ranges to which execution of the workload in the future time range at a base can be delayed within the same base; and a predicted amount of power consumption through execution of the workload in the future time range at each base.
Systems and methods described herein can involve managing volume management information indicative of a relationship between an application and a volume used by the application; for receipt of a first request to make the application ready for takeover to another location, updating the volume management information to indicate that another volume of the another location is associated with the application. For receipt of a second request to conduct volume attachment for the application, the systems and methods can involve identifying one or more volumes associated with the application based on the volume management information; and attaching an identified volume from the identified one or more volumes to the application.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
20.
STORAGE SYSTEM, MANAGEMENT METHOD OF STORAGE SYSTEM, AND MANAGEMENT DEVICE OF STORAGE SYSTEM
Data movement for reducing an environmental load in a hierarchical storage is appropriately determined. A storage system includes an upper-level storage device and a management device. The management device is configured to determine, for each file stored in the upper-level storage device, based on the size of the target file, the access frequency of the target file, and power consumption information, whether the power consumption for holding the target file would be reduced by moving the target file to the lower-level storage device, and to output, when it is determined that the power consumption would be reduced, an instruction to move the target file from the upper-level storage device to the lower-level storage device.
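A minimal sketch of such a decision, assuming the power model is a per-GB holding cost on each tier plus a recall cost per access; all parameter names are hypothetical.

```python
def should_move_to_lower_tier(size_gb: float, accesses_per_day: float,
                              power: dict) -> bool:
    """Return True if moving the file to the lower tier saves power per day.

    power bundles assumed power-consumption parameters:
      upper_w_per_gb / lower_w_per_gb : holding power on each tier (watts/GB)
      recall_wh_per_gb                : energy to read the file back from the
                                        lower tier on each access (Wh/GB)
    """
    hold_saving_wh = (power["upper_w_per_gb"] - power["lower_w_per_gb"]) \
                     * size_gb * 24
    recall_cost_wh = power["recall_wh_per_gb"] * size_gb * accesses_per_day
    return hold_saving_wh > recall_cost_wh   # move only on a net daily saving
```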
A frontend interface of a controller according to the present invention includes a plurality of corresponding queueing interfaces for each processor of the controller, and the enqueueing destination of a host I/O command can be switched in response to an instruction from a processor. When the controller OS restarts, the controller waits for completion of host I/O and executes controller blocking and restarting during setup. Therefore, to determine whether this process is possible, the processor issues an instruction to switch the queue and waits until the switch source queue is empty.
Each storage node includes a processor, a drive that stores data, and a communication unit that transmits data to or receives data from another storage node. The communication unit includes a compression circuit that performs reversible compression before data is transmitted and a decompression circuit that decompresses compressed data after it is received. In response to a reading command for reading data of a designated size to the outside, when a predetermined condition is satisfied, the communication unit of a first storage node compresses the data stored in the drive of the first storage node with the compression circuit and transmits the compressed data to the communication unit of a second storage node. The communication unit of the second storage node decompresses the received data with its decompression circuit. The second storage node outputs the decompressed data to the outside.
The copy performance of a storage apparatus is improved. The storage apparatus includes a storage controller that processes an I/O request from a host, and a storage device that stores data from the host. For each of a plurality of volumes, the storage controller stores a schedule indicative of a copy speed level for each of continuous time slots in a cycle. The storage controller collects information regarding current performance of the storage controller, stores the collected current performance information into a memory, and determines the copy speed level for a next time slot on the basis of the copy speed level for the next time slot indicated by the schedule and a relation between a value indicated by the performance information and a threshold value.
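A sketch of how the scheduled level and the measured performance might be combined; the one-level back-off policy is an assumption, not the patented rule.

```python
def next_copy_speed_level(scheduled_level: int, perf_value: float,
                          threshold: float, min_level: int = 1) -> int:
    """Determine the copy speed level for the next time slot.

    If the collected performance value exceeds the threshold, the controller
    is busy with host I/O, so the copy speed is throttled below the schedule;
    otherwise the schedule's level for the next slot is followed as-is.
    """
    if perf_value > threshold:
        return max(min_level, scheduled_level - 1)  # back off one level
    return scheduled_level
```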
A storage system having both a high performance and high reliability is implemented. The storage system includes a plurality of storage nodes each including a processor and a memory, and a storage device. Each of the plurality of storage nodes includes a storage controller configured to run on the processor, the plurality of storage controllers include an active storage controller configured to process data output to and received from the storage device, and a standby storage controller configured to take over the processing of the data from the active storage controller, each of the active storage controller and the standby storage controller is allocated with a storage area of the memory, and the storage node changes an amount of a memory capacity allocated for the storage controller of the self-node when a state of the storage controller is switched between a standby state and an active state.
A system receives a quota request in which a tenant, one or more locations, and a capacity upper limit related to the requested quota are designated, executes conflict determination based on quota information, and adds information related to the requested quota to the quota information when the result of the conflict determination is false. The quota information includes information representing a tenant, one or more locations, and a capacity upper limit for each quota of the plurality of storage devices at the plurality of locations. For each of the plurality of locations, the capacity usable by the tenant among the capacity of the storage device at the location is equal to or less than the capacity upper limit of the quota corresponding to the location and the tenant.
A storage having a cluster configuration is operated while stabilizing performance thereof. A storage system calculates a load value of a storage in which a predetermined cluster configuration is set, determines whether the calculated load value exceeds a predetermined value, adds a predetermined resource to the storage when the calculated load value exceeds the predetermined value, calculates a predicted value of a load of the storage when the resource is removed after the resource is added, determines whether the calculated predicted value is lower than a predetermined value, and removes the resource of the storage when the calculated predicted value is lower than the predetermined value.
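A compact sketch of one add/remove pass; the `storage` interface (`load()`, `resources()`, `add_resource()`, `remove_resource()`) and the single-threshold policy are assumptions.

```python
def autoscale(storage, threshold: float, predict_load_without) -> None:
    """One scaling pass over a clustered storage.

    predict_load_without(storage, res): predicted load value of the storage
    if resource `res` were removed.
    """
    if storage.load() > threshold:
        storage.add_resource()                     # scale out on overload
    for res in list(storage.resources()):
        # Scale in only when the load predicted after removal stays below
        # the predetermined value.
        if predict_load_without(storage, res) < threshold:
            storage.remove_resource(res)
            break                                  # remove at most one per pass
```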
In a computer system, storage controllers disposed in different data centers form a pair via a communication path between the data centers. When a communication failure occurs in the communication path between the paired storage controllers, a tie breaker determines, based on statistical information on the communication characteristics of the storage controllers generated by an I/O monitor, failure control that takes over the data input/output from one of the paired storage controllers to the other storage controller and stops the storage node having the one storage controller, and the storage cluster controller executes the failure control.
Provided is a computer system capable of maintaining a storage capacity allocated to a journal volume within an appropriate range during an application period of remote copy. A first storage system includes a primary volume and a primary journal volume, and a second storage system includes a secondary volume and a secondary journal volume. A management computer is configured to manage the remote copy in which a primary volume, a primary journal volume, a secondary journal volume, and a secondary volume are paired, and expand and/or release a capacity of the primary journal volume and/or the secondary journal volume according to operation information of a resource related to the remote copy.
A processor of a storage system calculates long-term load fluctuation prediction as a prediction of load fluctuation over time in the future of the controller nodes based on time-series data of load of the controller nodes. The processor calculates an addition/reduction completion target time to complete addition or reduction of an operating controller node out of the controller nodes based on the long-term load fluctuation prediction and a load threshold value determined from a power performance model. The processor calculates a rebalancing time for a rebalancing process based on data movement in the rebalancing process for moving data between the drive nodes in accordance with the addition or the reduction and bandwidth information of a path for the data movement. The processor calculates a start time of the rebalancing process from the addition/reduction completion target time and the rebalancing time and starts the rebalancing process at the start time.
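The last step is simple arithmetic: the rebalancing must finish by the addition/reduction completion target, so its start time is the target minus the estimated rebalancing duration. A sketch, with the bits-per-second bandwidth convention as an assumption:

```python
from datetime import datetime, timedelta

def rebalancing_start_time(target: datetime, moved_bytes: int,
                           path_bandwidth_bps: float) -> datetime:
    """start = (addition/reduction completion target) - (rebalancing time).

    The rebalancing time is estimated from the amount of data the rebalancing
    process must move between drive nodes and the bandwidth of the path used
    for that movement.
    """
    duration = timedelta(seconds=moved_bytes * 8 / path_bandwidth_bps)
    return target - duration
```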
When a power requirement for power control is received and power control of a target device is performed in accordance with the received power requirement, power saving level management information is created based on the power consumption for the performance of each component of the target device and the device configuration of the target device; this information defines the performance of each component at each power saving level, where each level is associated with one of a plurality of divided power consumption ranges of the target device. Then, based on the power consumption upper limit value or the power saving level of the target device designated in the received power requirement, the power saving level management information is referred to, and the performance of each component is set to the performance of the power saving level according to the power requirement.
There is provided a load verification system that performs load performance verification of a data storage area. The load verification system includes: a verification-purposed performance metrics collector that acquires performance metrics of the data storage area and a volume thereof that are load verification targets; an expected performance metrics data generator that generates performance metrics data that will be a result of a load, expected with respect to the load-verification-target volume; and an I/O pattern data generator that generates input/output pattern data based on which a load is generated that causes generation of performance metrics data whose performance is equivalent to expected performance indicated in the expected performance metrics data. An input/output pattern is reproduced by a reproduction section based on the data generated by the I/O pattern data generator, and performance metrics generated as a result of applying a load to the load-verification-target volume are collected.
An environmental load reducing system is disclosed that enables a user desiring to contribute to the reduction of environmental loads to examine a switch to a configuration that reduces environmental loads. The environmental load reducing system compares configuration information associated with an operating system operated by a user with configuration information associated with a different operating system of a different user, to detect a difference between the system components of these systems. The system also compares a calculation result of the carbon dioxide emission amount emitted by operation of the operating system for a fixed period of time with a calculation result of the carbon dioxide emission amount emitted by operation of the different operating system for the same period. A presentation unit presents the differing system component as a low environmental load component according to the detected component difference and the comparison result.
A worker node included in a storage system 1 includes a score calculation unit 31 that calculates a score of the worker node based on a failure history and an operation status of the worker node, and a master node (P) includes a promotion node selection unit 52 that compares scores for each worker node when a failure occurs in one of master nodes and selects, based on the scores, a worker node to be promoted to a master node instead of the master node in which the failure has occurred.
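A sketch of the promotion selection; the score formula and the worker-record fields are illustrative assumptions (the abstract only states that the score derives from failure history and operation status).

```python
def select_promotion_node(workers: list) -> dict:
    """Pick the worker node with the best score to promote to master.

    Each worker is a dict with assumed fields: 'uptime_ratio' (0..1),
    'failure_count' (from the failure history), and 'load' (operation
    status). Higher score = healthier, less loaded node.
    """
    def score(w: dict) -> float:
        return w["uptime_ratio"] - 0.5 * w["failure_count"] - 0.2 * w["load"]
    return max(workers, key=score)
```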
An API execution control system including: an API reception unit configured to receive a plurality of successive API execution requests from a user; a table update unit configured to detect patterns of the received successive API execution requests and register the detected patterns in an API group management table; an API execution prediction unit configured to determine whether an API execution request received by the API reception unit matches a pattern registered in the API group management table; and an API execution control unit configured to, when it is determined that the API execution request matches a registered pattern, raise the execution priority of the API whose execution request is predicted to be received next.
To back up stored data of a storage device installed on-premises to a storage service provided by a public cloud more reliably and efficiently, a storage system according to the invention includes a first storage device having first storage logical volumes (LDEVs) and a second storage device having second LDEVs. When the stored data of a first LDEV and the stored data of a second LDEV are synchronized with each other, network conditions in the transfer path from the first storage device to the public cloud and in the transfer path from the second storage device to the public cloud are observed. The first LDEV or the second LDEV is selected as the backup source based on the network conditions.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
An object is to effectively use resources of a plurality of storage nodes.
A storage system includes a plurality of storage nodes. The storage system includes: a management unit configured to manage the plurality of storage nodes. Each of the plurality of storage nodes is configured to accumulate credits on a condition that a processing load is within a predetermined range and perform burst in which processing is performed with a load exceeding the predetermined range by consuming the credits. The management unit manages the credits of each storage node, determines a trigger of burst of predetermined storage processing based on an accumulation state of the credits in the plurality of storage nodes related to the storage processing, and executes, when the credits are accumulated in the plurality of storage nodes related to the predetermined storage processing, the predetermined storage processing by the burst by consuming the accumulated credits.
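A minimal sketch of the credit mechanism, assuming credits accrue in proportion to unused headroom below the baseline and are spent as extra work above it; the rates and the all-nodes-ready trigger condition are assumptions.

```python
class CreditedNode:
    """Accumulate credits while load is within range; spend them to burst."""

    def __init__(self, baseline: float, max_credits: float = 100.0):
        self.baseline = baseline        # upper edge of the normal load range
        self.credits = 0.0
        self.max_credits = max_credits

    def tick(self, load: float) -> None:
        if load <= self.baseline:                   # within the normal range:
            self.credits = min(self.max_credits,    # bank the unused headroom
                               self.credits + (self.baseline - load))

    def can_burst(self, extra_load: float, duration_s: float) -> bool:
        return self.credits >= extra_load * duration_s

def trigger_burst(nodes, extra_load: float, duration_s: float) -> bool:
    """The management unit starts the predetermined storage processing by
    burst only when every node involved has banked enough credits."""
    return all(n.can_burst(extra_load, duration_s) for n in nodes)
```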
H04L 67/1012 - Server selection for load balancing based on compliance of requirements or conditions with available server resources
H04L 67/1008 - Server selection for load balancing based on parameters of servers, e.g. available memory or workload
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
A storage billing system including: a fee calculation section that determines a fee on the basis of the amount of storage usage according to a contract that is renewed when the storage is updated; a contract history check section that, when the fee for the usage of the storage is to be determined, determines whether the contract has been renewed; a carbon dioxide emissions calculation section that, when the contract history check section determines that the contract has been renewed, determines the difference between the amount of power used by the storage before the update and the amount used after the update, and calculates an amount of carbon dioxide emissions reduction; and a fee adjustment section that determines the fee by reducing the fee determined by the fee calculation section according to the amount of carbon dioxide emissions reduction determined by the carbon dioxide emissions calculation section.
An information processing device includes a controller and an interface device. The controller stores a compressed file including new firmware for the interface device, and sends at least part of compressed data in the compressed file to the interface device. The interface device performs a signature verification process and a decompressing process on the received compressed data in parallel.
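A sketch of running the two steps concurrently, standing in a SHA-256 digest check for the signature verification and zlib for the firmware's compression scheme (both substitutions are assumptions; hashing and decompression in CPython release the GIL, so the two actually overlap).

```python
import hashlib
import zlib
from concurrent.futures import ThreadPoolExecutor

def verify_and_decompress(compressed: bytes, expected_digest: str) -> bytes:
    """Run integrity checking and decompression in parallel.

    The decompressed firmware is used only if verification succeeds, so
    starting both at once hides the latency of whichever finishes first.
    """
    with ThreadPoolExecutor(max_workers=2) as pool:
        verify = pool.submit(
            lambda: hashlib.sha256(compressed).hexdigest() == expected_digest)
        inflate = pool.submit(zlib.decompress, compressed)
        if not verify.result():
            raise ValueError("signature verification failed")
        return inflate.result()
```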
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
39.
INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
Information regarding an API parameter of a target device is collected from the outside as a manual and knowledge, a description regarding a device configuration, a main parameter, and a specific parameter for which a value is to be set is extracted from the collected manual and knowledge, and a classification axis is created based on the device configuration and the main parameter included in the extracted description. An API execution log and configuration information are collected from the target device, the collected API execution log is classified according to the classification axis, a setting value of the specific parameter in the API execution log is aggregated for each classification, a most frequent setting value is determined as a default value of the parameter, and when the specific parameter is not set in the higher-level API call, the default value is set to make a lower-level API call.
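The default-value derivation reduces to a per-classification frequency count; a sketch, where the `classify` callback stands for the classification axis built from the device configuration and main parameters (an assumption of this illustration).

```python
from collections import Counter, defaultdict

def learn_default_values(api_logs, classify) -> dict:
    """Derive per-classification defaults for the specific parameter.

    api_logs: iterable of (call_record, parameter_value) pairs taken from
              the collected API execution log.
    classify: maps a call record onto the classification axis.
    """
    buckets = defaultdict(Counter)
    for record, value in api_logs:
        buckets[classify(record)][value] += 1
    # The most frequent observed setting becomes the default for that class;
    # it is filled in when a higher-level call omits the specific parameter.
    return {cls: counts.most_common(1)[0][0] for cls, counts in buckets.items()}
```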
One object of the present invention is to provide a backup system and a backup method that make it possible to improve availability by increasing the number of paths for acquiring backups to be stored in a data protection area. A backup storage apparatus includes a first storage, a second storage, and a BP storage. The BP storage has a first backup volume, a second backup volume, and a data protection area. The BP storage stores first-route BP images and second-route BP images in the data protection area such that the generations of the first-route BP images and the generations of the second-route BP images do not overlap.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
41.
DEPLOYMENT PLAN CALCULATION DEVICE, COMPUTER SYSTEM, AND DEPLOYMENT PLAN CALCULATION METHOD
A deployment optimization program causes each of a plurality of optimization engines that use different policies for calculating a deployment plan for data and containers to calculate candidate information including a candidate deployment plan that is a candidate for the deployment plan, and an evaluation value obtained by evaluating a process related to the data in the candidate deployment plan, and integrates a plurality of pieces of the candidate information based on the candidate deployment plan included in the calculated plurality of pieces of the candidate information so as to generate data and container deployment plan information.
To estimate the power consumption of a workload executed on a physical server, an estimation server is used. A processor trains a plurality of short-range power models that receive a metric of the physical server as input and output a power consumption value of the physical server, one for each of a plurality of short ranges obtained by dividing the entire power range of the physical server into a predetermined number of divisions. The processor also trains a classifier that receives a metric of the physical server as input and outputs specification information specifying the corresponding short range, and specifies, based on a metric of the workload and the classifier, the specification information for the short range to be applied. The power consumption of the workload is estimated based on the metric of the workload and the short-range power model corresponding to the short range indicated by the specification information.
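The inference path of this two-stage scheme fits in a few lines; the sketch assumes scikit-learn-style objects with a `predict` method, which is an interface assumption rather than anything the abstract specifies.

```python
def estimate_workload_power(metrics, classifier, short_range_models) -> float:
    """Two-stage power estimate for a workload.

    classifier:         picks which short power range the metrics fall in.
    short_range_models: dict range_id -> model trained only on samples from
                        that short range, so each model stays locally accurate.
    """
    range_id = classifier.predict([metrics])[0]    # stage 1: which range?
    model = short_range_models[range_id]           # stage 2: its specialist
    return model.predict([metrics])[0]             # estimated watts
```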
Each of one or a plurality of storage nodes included in a storage system includes a volume provided to a compute and a component that can affect the performance of the volume. In a case where a computer determines that the load of a component in any of the storage nodes has increased or decreased, or will increase or decrease, because the load of an existing volume in the storage node has increased or decreased, or will increase or decrease, the computer selects vertical scaling as the scaling method for the storage system. In a case where the computer determines that the load of a component in any of the storage nodes has increased or decreased, or will increase or decrease, because the number of volumes in the storage node has increased or decreased, or will increase or decrease, the computer selects horizontal scaling as the scaling method for the storage system.
Upon acquiring target business specifying information for specifying a target business, disaster recovery (DR) operation phase determination processing calculates an operation phase based on copy configuration information for managing the pair configuration of a target business use volume and a copy volume, and on a copy status table for managing the copy status in the target business use volume and the copy volume. A disaster pattern corresponding to the disaster situation of the damaged volumes is calculated in accordance with the operation phase calculated by the DR operation phase determination unit, and a cloud use fee covering the period from the failure occurrence to the completion of system recovery of the use site where the use volume is created is calculated for each disaster pattern.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
A detection apparatus attached to a target apparatus, the detection apparatus including: a first resistance element attached to a power supply; a second resistance element connected in series to the first resistance element; and a detection unit that acquires an intermediate voltage between the first resistance element and the second resistance element, in which the second resistance element is exposed to a surrounding atmosphere of the apparatus, and in which the detection unit detects a sign of corrosion of the apparatus caused by the surrounding atmosphere based on a change in the intermediate voltage.
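The underlying relation is the voltage divider: with the first element at resistance R1 and the exposed second element at R2, the intermediate voltage is Vs·R2/(R1+R2), so corrosion-induced drift in R2 shifts the measured voltage. A sketch, with the drift threshold as an assumption:

```python
def intermediate_voltage(v_supply: float, r1: float, r2: float) -> float:
    """Divider output between R1 (at the supply) and R2 (exposed element)."""
    return v_supply * r2 / (r1 + r2)

def corrosion_sign(v_now: float, v_baseline: float,
                   tolerance: float = 0.05) -> bool:
    """Corrosion of the exposed element changes its resistance, moving the
    intermediate voltage away from the baseline; flag drift beyond tolerance."""
    return abs(v_now - v_baseline) / v_baseline > tolerance
```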
A computing device includes a storage unit storing a job list, and a computing unit that performs computations related to an instance capable of executing burst processing by consuming credits. The job list is a list of batch jobs. The batch jobs include a plurality of combinations of a time frame and data regarding the size of a job, the time frame being set as a combination of a time point at which execution of the job can be started and a time point by which the job should be completed. The burst processing of the job runs at a speed exceeding a baseline but not exceeding a maximum speed, the baseline being the processing speed of the job that can always be attained. The computing unit determines, for each batch job in the job list, whether or not the job can be completed within its time frame.
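The feasibility check follows directly: work achievable in the window is baseline throughput plus whatever extra the banked credits can buy, capped by the maximum speed. A sketch with assumed units (work units, units/s, credits in units of extra work above baseline):

```python
def job_fits_time_frame(size: float, start: float, deadline: float,
                        baseline: float, max_speed: float,
                        credits: float) -> bool:
    """Can the job finish within [start, deadline]?

    Burst runs above the baseline (up to max_speed) while credits last;
    the remainder of the window runs at the always-attainable baseline.
    """
    window = deadline - start
    burst_extra = min(credits, (max_speed - baseline) * window)
    achievable = baseline * window + burst_extra
    return achievable >= size
```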
A storage system includes a non-volatile storage device and a plurality of storage controllers that control reading and writing for the storage device, in which each of the plurality of storage controllers includes a processor and a memory, the storage controller stores a write request from a host for the storage device as cache data in the memory, returns a write completion response to the host after protecting the cache data in a first memory protection method or a second memory protection method, and destages the cache data into the storage device after the write completion response, and the storage controller switches between the first memory protection method and the second memory protection method to be used according to an operation state of another storage controller.
In a storage system, when a communication path for remote copying from a primary volume to a secondary volume is set, a storage node in a primary site makes an inquiry to a discovery node in a secondary site about node information on a node having a secondary volume paired with a primary volume. Based on the node information acquired from the discovery node, a primary volume owner node sets a communication path between the primary volume owner node and a secondary volume owner node, the communication path being used for remote copying volume data from the primary volume to the secondary volume.
In cloud storage, time-outs of I/O responses to I/O requests from a host are suppressed. A storage node 1 executes an I/O processing thread 101, which retains an I/O resource to be used for processing relating to an I/O request and the I/O response to that request, and a response standby processing thread. The I/O processing thread transmits an I/O request to the cloud storage in response to a request from the host, and moves the I/O resource to the response standby processing thread if it has not received the I/O response from the cloud storage before a first time-out period elapses. The response standby processing thread transmits a response confirmation to demand the I/O response from the cloud storage by using the I/O resource moved from the I/O processing thread, and performs standby processing on the I/O response in place of the I/O processing thread.
A server attack such as a ransomware attack is detected without increasing the system load, using metrics that are normally monitored. A storage system comprises a first storage connected to a server running an application, a data protection storage that acquires backups of the first storage, and a monitoring server that monitors the data protection storage. The monitoring server comprises a backup execution unit that backs up data from the first storage to the data protection storage, a written-data-amount monitoring unit that determines an abnormality when the amount of data written to the data protection storage exceeds a predetermined amount, and an output unit that issues an alert when the written-data-amount monitoring unit determines an abnormality.
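The check itself is a single threshold on an already-collected metric, which is why it adds no monitoring load; a sketch, where the monitoring window, threshold, and alert callback are assumptions:

```python
def check_written_amount(bytes_written_in_window: int,
                         threshold_bytes: int, alert) -> None:
    """Flag a possible ransomware-driven mass rewrite.

    Ransomware encrypting a server's data inflates the backup delta, so an
    abnormal write volume into the data protection storage is a cheap signal.
    """
    if bytes_written_in_window > threshold_bytes:
        alert("abnormal backup write volume: possible ransomware activity")
```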
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
Table parallelization processing, which parallelizes data processing over a plurality of tables in units of tables allocated to the cores of a processing execution computer, is performed. When a table is larger than a predetermined data size, record parallelization processing is performed, which divides the table into a plurality of records and parallelizes data processing over those records in units of records allocated to the cores of the processing execution computer.
Performance deterioration of a storage system is prevented. A storage controller includes one or more processors, and one or more memories configured to store one or more programs to be executed by the one or more processors. The one or more processors are configured to execute conversion of converting metadata before conversion for controlling the storage system into metadata after conversion in a format corresponding to a new controller newly installed in the storage system, execute control of switching an access destination between the metadata before conversion and the metadata after conversion according to an access control code during the conversion, and access the metadata before conversion without using the access control code before start of the conversion.
Systems and methods described herein can involve, responsive to a request of a volume requiring remote copy, checking an IO throughput setting of the volume; using network bandwidth based on the IO throughput setting of the volume; and for the use of the network bandwidth not exceeding total remote copy network resources allocated for existing volumes configured with remote copy and the volume requiring remote copy, establishing a remote copy relationship for the volume in response to the request.
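The admission test is a straightforward capacity check; a sketch, with the throughput units and field names as assumptions:

```python
def can_establish_remote_copy(new_volume_throughput: float,
                              existing_throughputs: list,
                              total_rc_bandwidth: float) -> bool:
    """Admit the new remote-copy volume only if the summed IO-throughput
    settings of all remote-copy volumes, including the new one, stay within
    the network resources allocated for remote copy."""
    demand = new_volume_throughput + sum(existing_throughputs)
    return demand <= total_rc_bandwidth
```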
An information processing system includes storage apparatuses installed in respective areas, SDSs provided on a cloud, and a management system. The management system estimates, with reference to configuration information and performance information regarding the volumes of each storage apparatus, the resource amount required to fail over the volumes of each storage apparatus to duplicate volumes. The management system selects a replication-destination SDS in such a manner as to minimize the required resource amount aggregated for each installation location and for each SDS, while distributing across the SDSs the duplicate volumes related to storage apparatuses located at the same point.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
A storage system is protected from tampering of software executed by the storage system. The storage system includes a first storage controller and a second storage controller. The first storage controller includes a first input and output controller configured to input and output host data, and a first management controller. The second storage controller includes a second input and output controller configured to input and output host data, and a second management controller. The first management controller is configured to store a backup of software of at least one of the second storage controller or the first input and output controller. When the software of the at least one is tampered with, a copy of the tampered software is stored, and the tampered software is recovered using the backup.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
A method for application placement management. The method comprises identifying, by a storage agent, a first server from a plurality of servers or a first cluster from a plurality of clusters, wherein the first server or the first cluster can access a first volume through which an application can be executed; identifying, by the storage agent, data associated with the application, wherein the data is stored in the first volume; identifying, by the storage agent, a group of servers from the plurality of servers or a group of clusters from the plurality of clusters having access to the data; updating, by the storage agent, data accessibility associated with each server of the group of servers or each cluster of the group of clusters; and notifying, by the storage agent, the updated data accessibility associated with each server of the group of servers or each cluster of the group of clusters.
Reliability in a storage system can be easily and appropriately improved. In a computer system including a storage system configured to provide a plurality of instances in any one of a plurality of subzones divided by risk boundaries, a processor of the computer system is configured to make a storage controller that controls I/O processing for a volume based on a capacity pool provided by a plurality of storages redundant to the plurality of instances provided in the plurality of subzones.
A multitenant management system selects a name space corresponding to a user from a plurality of name spaces and determines whether the user is a legitimate user by using user management information of the name space. When a result of the determination is positive, the system determines whether a resource of an access destination conforming to the received resource access request falls within a resource access range corresponding to one tenant scope indicated by the user management information of the selected name space. When the result of the determination is positive, the system executes the resource access request.
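A sketch of the two-step check; the per-tenant namespace layout ('users', 'scope') and the exception-based flow are illustrative assumptions.

```python
def handle_resource_request(namespaces: dict, user_id: str, tenant: str,
                            resource: str, execute):
    """Namespace-scoped authentication, then tenant-scope authorization.

    namespaces: tenant -> {'users': user management information,
                           'scope': resource access range of the tenant}.
    """
    ns = namespaces[tenant]                  # select the user's name space
    if user_id not in ns["users"]:
        raise PermissionError("not a legitimate user of this name space")
    if resource not in ns["scope"]:          # tenant-scope check
        raise PermissionError("resource outside the tenant scope")
    return execute(resource)                 # execute the access request
```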
H04L 47/722 - Admission control; Resource allocation using reservation actions during connection setup at the destination endpoint, e.g. reservation of terminal resources or buffer space
A data storage system in which the cost required to change the configuration of a system and the burden on an administrator are reduced, the capacity of a backup device is effectively utilized, and the backup processing is optimized. When conditions related to a capacity resource specified in a backup requirement table cannot be satisfied, the predicted resource consumption when the backup-target data of a task is backed up to other destinations is calculated using an existing backup information table; a score representing a low impact on resources when migrating to each other backup destination is calculated on the basis of the predicted resource consumption; the backup destination to which the backup related to the task is migrated is determined on the basis of the score; and a backup schedule table is updated so that the determined backup destination becomes the backup destination related to the task.
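A sketch of the scoring step; the free-capacity-ratio score and the destination record fields are assumptions standing in for whatever impact metric the system actually uses.

```python
def pick_backup_destination(task, destinations: list, predict_consumption):
    """Choose the migration destination with the least resource impact.

    predict_consumption(task, dest): predicted resource consumption if the
    task's backup data were redirected to dest, derived from existing
    backup information. Higher score = more capacity left afterwards.
    """
    def score(dest: dict) -> float:
        predicted = predict_consumption(task, dest)
        return (dest["capacity"] - dest["used"] - predicted) / dest["capacity"]

    candidates = [d for d in destinations if score(d) > 0]
    return max(candidates, key=score, default=None)
```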
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
A data management system for backing up data in a first environment to a second environment includes: backup management information in which source data, backup data, a backup method, and data backed up using the backup method are associated; and a secondary usage data copy unit serving as a secondary usage processing unit that receives a usage request for the backup data stored in the second environment, wherein the secondary usage processing unit refers to the backup management information, specifies the backup data required for processing the usage request, specifies the backup method for the backup data, restores the backup data on the basis of the specified backup method, and enables processing of the usage request.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
A method for migrating an application from a first cluster to a second cluster. The method may include detecting an application migration request from the first cluster to the second cluster; identifying first volumes associated with the first cluster that are used by the application; establishing data copying from the first volumes of a first storage device associated with the first cluster to second volumes of a second storage device associated with the second cluster; determining whether a copy condition is met based on the data copying; when the copy condition is met, stopping the application on the first cluster; flushing uncopied data from the first volumes to the second volumes; determining whether the flushing of the uncopied data is completed; and when the flushing of the uncopied data is completed, deploying the application on the second cluster.
An object is to efficiently solve a quadratic programming problem having a k-hot constraint (k is a positive integer) on binary variables. A preferred aspect of the invention is an optimization method for solving, using an information processing apparatus including a processor, a storage device, an input device, and an output device, a quadratic programming problem in which one or more independent k-hot constraints are imposed on binary variables. The information processing apparatus relaxes the binary variables into continuous values by adding correction values to the nonlinear coefficients of the binary variables on which the k-hot constraints are imposed, and executes a solution search while satisfying the k-hot constraints by executing state transitions such that the sum of each set of continuous variables on which a k-hot constraint is imposed remains constant.
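A compact restatement in assumed notation (the symbols Q, q, c, S, and δ are this illustration's, not the patent's):

```latex
% Relaxed k-hot search, illustrative formulation.
\begin{align*}
  &\min_{x}\ \sum_{i,j} Q_{ij}\,x_i x_j \;+\; \sum_i \bigl(q_i + c_i\bigr)\,x_i
    && \text{($c_i$: correction values added to the nonlinear coefficients)}\\
  &\text{s.t.}\ \sum_{i\in S} x_i = k,\quad x_i \in [0,1]
    && \text{(binary $x_i\in\{0,1\}$ relaxed to continuous values)}\\
  &x_i \leftarrow x_i + \delta,\quad x_j \leftarrow x_j - \delta,\quad i,j\in S
    && \text{(state transition conserving the k-hot sum)}
\end{align*}
```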
A protocol chip transmits the request from the host apparatus to a first processor through a first address translation unit. A first processor transmits a response to the request from the host apparatus, to the protocol chip through the first address translation unit. When the first processor stops processing, an instruction to transmit the request from the host apparatus to a second processor is transmitted to the protocol chip. When receiving the instruction to transmit the request from the host apparatus to the second processor, the protocol chip transmits the request from the host apparatus to the second processor through a second address translation unit. The second processor transmits the response to the request from the host apparatus to the protocol chip through the second address translation unit.
A first node performs copy (virtual copy) of address mapping between a virtual volume and a pool to a first virtual volume to create a third virtual volume in the first node. A second node performs mapping from a first pool volume in the second node to the third virtual volume in the first node, links an address of the first pool volume, which is mapped to the third virtual volume, to an address of a second virtual volume in the second node on a one-to-one basis, and performs log-structured write of the data in the second virtual volume to a second pool volume in the second node.
Logical hierarchies include an append hierarchy in a storage device. The storage device writes user data received in the append hierarchy to a free area and selects a garbage collection operation mode for a first logical area in the append hierarchy from operation modes including first and second operation modes. The conditions for executing garbage collection in the first operation mode are that the capacity of the free area in the append hierarchy is less than a threshold and that the amount of garbage (invalid data left after updates) in the first logical area is equal to or greater than a threshold. The conditions for executing garbage collection in the second operation mode include the amount of garbage in the first logical area being equal to or greater than a threshold, while excluding the condition on the capacity of the free area in the append hierarchy.
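The two mode predicates reduce to a pair of boolean conditions; a sketch with assumed parameter names:

```python
def gc_should_run(mode: int, free_capacity: int, free_threshold: int,
                  garbage_amount: int, garbage_threshold: int) -> bool:
    """Execution conditions for the two garbage collection operation modes.

    Mode 1 runs only under free-space pressure AND with enough reclaimable
    garbage; mode 2 drops the free-space condition and checks garbage alone.
    """
    garbage_ready = garbage_amount >= garbage_threshold
    if mode == 1:
        return free_capacity < free_threshold and garbage_ready
    if mode == 2:
        return garbage_ready
    raise ValueError(f"unknown operation mode: {mode}")
```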
The storage system comprises a plurality of storage nodes each including a non-volatile storage device, a storage controller that processes data reads/writes to the storage device, and a volatile memory. The storage controller stores data related to data writes in the memory, stores data that needs to be non-volatile among the data stored in the memory as log data in the storage device, makes the log data stored in the storage device redundant among the plurality of storage nodes, and performs a recovery process for the log data when a problem occurs in the log data stored in the storage device of one of the storage nodes.
A calculator system connected to a public network efficiently avoids congestion. The calculator system is connected to a network including a network switch, includes a plurality of calculators, and recovers, when a data packet is lost on the network, the transfer of the lost data packet by a retransmission operation. The calculator system includes the calculators, software running on the calculators, and a timing adjusting mechanism present between the calculators and the network. The timing adjusting mechanism is configured to calculate a delay time for delaying transmission of a data packet transmitted from the software based on the characteristics of the data packet, and to delay the transmission of the data packet by the calculated delay time.
In a data processing method executed by a data processing system that compresses and/or decompresses image data, a tensor shape representing the compression target data is obtained, and an input-shape-fixed compressor is generated that performs compression processing taking as input data whose input shape is fixed for each shape of the compression target data, and outputs compressed data. The data processing system then compresses compression target data using the generated input-shape-fixed compressor to generate compressed data.
Proposed are a highly available information processing system and an information processing method capable of withstanding a failure in units of sites. A redundancy group including a plurality of storage controllers installed at different sites is formed. The redundancy group includes an active state storage controller which processes data and a standby state storage controller which takes over processing of the data if a failure occurs in the active state storage controller. The active state storage controller stores the data from a host application installed at the same site in the storage device installed at that site, and stores redundant data for restoring the data stored in a storage device at the same site in the storage device installed at another site where a standby state storage controller of the same redundancy group is installed.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
G06F 11/16 - Error detection or correction of the data by redundancy in hardware
70.
STORAGE SYSTEM, DATA PROCESSING METHOD OF STORAGE SYSTEM, AND DATA PROCESSING PROGRAM OF STORAGE SYSTEM
A storage system includes a storage device, a processor, and a storage unit. The processor provides a volume configured on the storage device to a mainframe server. In the volume, the processor manages data handled by an open-architecture server using, as a unit, a first slot having a first slot length, and manages data handled by the mainframe server using, as a unit, a second slot having a second slot length shorter than the first slot length, the first slot storing a predetermined number of the second slots. The processor performs a process using either the first slot or the second slot as a unit, depending on the type of the process.
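The containment of second slots inside first slots implies simple index arithmetic. A hedged sketch, with slot lengths chosen purely for illustration (the entry gives no concrete sizes):

```python
# Hedged sketch of the two-granularity slot layout: a first (open-system)
# slot holds a fixed number of shorter second (mainframe) slots.

FIRST_SLOT_LEN = 256 * 1024   # assumed open-system slot length
SECOND_SLOT_LEN = 64 * 1024   # assumed mainframe slot length
SLOTS_PER_FIRST = FIRST_SLOT_LEN // SECOND_SLOT_LEN

def locate_second_slot(second_slot_no: int) -> tuple[int, int]:
    """Map a mainframe (second) slot number to its containing first slot
    and the byte offset of the second slot within that first slot."""
    first_slot_no = second_slot_no // SLOTS_PER_FIRST
    offset = (second_slot_no % SLOTS_PER_FIRST) * SECOND_SLOT_LEN
    return first_slot_no, offset

print(locate_second_slot(9))  # -> (2, 65536): 10th second slot sits in the 3rd first slot
```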
In the event of a partial outage, a multi-node system is enabled to start processing easily and appropriately. The multi-node system includes multiple nodes each including at least one controller, the controller including a processor, a power supply control microcomputer, a memory, and a nonvolatile memory. The processor detects whether or not any one of the nodes is inactive due to a power outage. The processor determines whether or not operation of the multi-node system can be continued, on the basis of operational status of the nodes. Upon determination that the operation of the multi-node system cannot be continued, the processor saves necessary data held in the memory into the nonvolatile memory. The power supply control microcomputer restarts the processor. When the node in the power outage has recovered therefrom following the restart, the multi-node system is caused to start processing.
An information processing system includes a physical drive, a compute unit, and a storage control unit that processes data input/output requests from the compute unit. The storage control unit includes an IO processing unit and an encryption/decryption-related processing unit. The encryption/decryption-related processing unit can refer to key generation method information including at least one element used to generate a key for encrypting/decrypting the data and an algorithm for generating the key from that element. The encryption/decryption-related processing unit generates the key according to the content set in the key generation method information, and uses it to encrypt data received by the IO processing unit from the compute unit or to decrypt data read by the IO processing unit from the physical drive.
G06F 21/78 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure storage of data
G06F 13/16 - Handling requests for interconnection or transfer for access to memory bus
A hybrid cloud system includes a management server, a storage of a source-side data center serving as the remote copy source, and a storage provided by a cloud service from a target-side data center serving as the backup destination. The management server is configured to make a request for a disaster recovery configuration using the cloud-service storage, based on requirements related to a recovery time objective, a recovery point objective, and a recovery level objective.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
74.
STORAGE OPERATION SUPPORT APPARATUS AND STORAGE OPERATION SUPPORT METHOD
A storage operation support apparatus includes a storage unit that stores storage device information indicating the state of a storage device, and an evaluation unit that evaluates an evaluation target storage device. With reference to the storage device information, the evaluation unit calculates an environment score indicating the magnitude of the environmental load of the evaluation target storage device, a cost score indicating the magnitude of its cost, and a performance score indicating how slowly it reads and writes data, and then calculates a total score using the environment score, the cost score, and the performance score.
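How the three component scores combine into the total score is not specified; a weighted sum is one plausible reading. A minimal sketch under that assumption, with hypothetical weights and device data:

```python
# Illustrative combination of the three scores into a total score.
# Weights and the combining rule are assumptions; the entry only states
# that the total uses the environment, cost, and performance scores.

def total_score(env: float, cost: float, perf: float,
                w_env: float = 1.0, w_cost: float = 1.0, w_perf: float = 1.0) -> float:
    """Lower is better: each component measures a 'magnitude' of
    environmental load, cost, or read/write slowness."""
    return w_env * env + w_cost * cost + w_perf * perf

# Example: rank candidate storage devices by their total score.
devices = {"dev-A": (0.3, 0.5, 0.2), "dev-B": (0.6, 0.2, 0.4)}
best = min(devices, key=lambda d: total_score(*devices[d]))
print(best)  # dev-A (total 1.0 vs 1.2)
```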
A control method transfers data to a site in another region having different regulations on sensitive data in a storage system. When receiving an instruction to transfer a predetermined memory area unit stored in a storage apparatus to another storage apparatus located in another region, the storage system refers to data transfer availability information that associates sensitive data with the regions to which its transfer is permitted or rejected, and determines whether the transfer destination according to the transfer instruction is included in the permitted or rejected regions associated with the sensitive data stored in the predetermined memory area unit. The storage system performs or prevents the transfer of the predetermined memory area unit to the other storage apparatus according to the result of this determination.
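The determination step is essentially a lookup against the data transfer availability information. A minimal sketch, assuming a hypothetical table that maps each sensitive-data class to its permitted destination regions (class and region names are invented):

```python
# Hypothetical transfer-availability table: sensitive-data class -> regions
# to which transfer is permitted. Contents are purely illustrative.

TRANSFER_RULES = {
    "personal_data": {"eu-west", "eu-central"},
    "health_records": {"eu-central"},
}

def may_transfer(area_sensitive_classes: set[str], destination_region: str) -> bool:
    """A memory-area unit may move only if every sensitive-data class it
    holds permits the destination region."""
    return all(destination_region in TRANSFER_RULES.get(cls, set())
               for cls in area_sensitive_classes)

print(may_transfer({"personal_data"}, "eu-west"))                    # True
print(may_transfer({"personal_data", "health_records"}, "eu-west"))  # False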
To provide a data sharing system and a data sharing method that make it easy to suppress, in accordance with a disclosure restriction, the disclosure of data whose disclosure is restricted according to the location where the data exists.
A data sharing system comprises a storage node having virtual volumes accessed by readout devices. The storage node comprises an access setting unit configured to restrict a target readout device from reading data from a target virtual volume, in accordance with information on the location of the target readout device accessing the data sharing system and information on the location of the target virtual volume from which the target readout device attempts to read data.
A method for redundancy loss recovery. The method may include creating pairs of quorum sets, wherein each pair of quorum sets comprises at least two volumes and a quorum, the at least two volumes and the quorum each being located on a different storage device; for a failure occurring in a storage device associated with the pairs of quorum sets or in network communication between storage devices of the pairs of quorum sets, modifying volume attributes associated with volumes of the pairs of quorum sets; and for a failure occurring in a storage device associated with the pairs of quorum sets, relocating the quorum associated with the failed storage device to another storage device different from the storage devices associated with the pairs of quorum sets.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
78.
MANAGEMENT COMPUTER AND MANAGEMENT METHOD FOR STORAGE SYSTEM
A management computer receives designation of an operation content and an operation target resource from a management user of a first type (managed or self). In a case where the operation target resource is a system resource managed by a management user of a second type (self or managed) and an operation according to the operation content on the operation target resource would influence storage management by the management user of the second type, the management computer changes the authority of the management user of the second type for the affected operation to an authority that cannot influence the environment realized by the construction, which includes performing the operation according to the operation content on the operation target resource, and then performs the operation according to the operation content on the operation target resource.
A compression-expansion control apparatus has a reconfiguration portion capable of configuring, on a programmable logic circuit component, one or more compression circuits which compress plain-text data and/or one or more expansion circuits which expand the compressed data; a waiting-time observing portion which observes the processing waiting time from when compression processing is requested until it is started and the processing waiting time from when expansion processing is requested until it is started; a calculating portion which determines the number or ratio of compression circuits and expansion circuits in the reconfiguration portion on the basis of the two processing waiting times; and a switching portion which reconfigures the compression circuits and/or the expansion circuits in the reconfiguration portion on the basis of the number or ratio determined by the calculating portion.
A data store volume (DSVOL) for a snapshot group, which is a group consisting of a primary volume (PVOL) and one or more snapshot volumes (SVOLs) for the PVOL, is a data storage region storing both data whose storage destination is one volume (VOL) of the snapshot group and meta-information of the data; the meta-information includes address mapping between a reference source address, which is the address of the position of the data in the snapshot group, and a reference destination address, which is the address of the position of the data in the DSVOL. A process of the storage system increases the number of DSVOLs in the snapshot group when the input/output (I/O) load on the snapshot group exceeds a threshold (a sketch of this scaling rule follows the classification codes below).
G06F 3/06 - Digital input from, or digital output to, record carriers
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/16 - Error detection or correction of the data by redundancy in hardware
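The DSVOL scaling rule in the entry above (increase the DSVOL count when the snapshot group's I/O load exceeds a threshold) can be sketched as follows; the threshold, step size, and cap are illustrative assumptions:

```python
# Hedged sketch of DSVOL scaling for a snapshot group. Values are invented.

IO_LOAD_THRESHOLD = 10_000  # assumed IOPS threshold per snapshot group

def adjust_dsvol_count(current_dsvols: int, io_load: float,
                       max_dsvols: int = 16) -> int:
    """Return the new DSVOL count for a snapshot group."""
    if io_load > IO_LOAD_THRESHOLD and current_dsvols < max_dsvols:
        return current_dsvols + 1  # spread data and meta-information over one more DSVOL
    return current_dsvols

print(adjust_dsvol_count(4, io_load=12_500))  # 5
```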
81.
STORAGE MANAGEMENT SYSTEM AND METHOD FOR MANAGING STORAGE APPARATUS
To enable appropriate data migration scheduling between storage apparatuses. A system stores load information indicating a temporal change in the load of each of a plurality of storage apparatuses. The system selects a data migration source from the plurality of storage apparatuses based on the load information. The system estimates the data migration time length of a target volume selected from the data migration source, based on a previously designated feature related to the target volume or to the combination of the data migration source and a data migration destination. The system generates a schedule indicating the data migration time period of the target volume based on the estimated migration time length and the load information.
A snapshot virtual device (SS-VDEV) is prepared for each snapshot family (SS-Family), and a deduplication virtual device is prepared separately from the SS-VDEVs. When the same data exists in a plurality of VOLs of an SS-Family, the storage system maps the plurality of addresses of that data among the plurality of VOLs to an address of the SS-VDEV of the SS-Family. When duplicated data exists in two or more SS-VDEVs, the storage system maps the two or more addresses of the duplicated data in the two or more SS-VDEVs to the addresses corresponding to the duplicated data in the deduplication virtual device.
A storage system includes one or more storage nodes each having a non-volatile storage device, a storage controller, and a volatile memory. The storage device includes a plurality of base image storage areas, including at least a first base image storage area and a second base image storage area, as areas for storing the entirety of predetermined information held in the memory as a base image. The storage controller starts processing to store the next base image in the second base image storage area when storage of a base image in the first base image storage area is complete, and, in a case where the predetermined information is lost from the memory, reads out the storage-completed base image and restores it to the memory.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
84.
Method for addressing power outage, arithmetic logic apparatus
A method for addressing a power outage of an arithmetic logic apparatus including an arithmetic logic part, a power supply port to which power is externally supplied, and a battery, the arithmetic logic part including a primary system device and a secondary system device. The method includes detection processing of detecting a disruption of power at the power supply port; supplying processing of supplying power from the battery to the arithmetic logic part when the disruption is detected; end processing of shutting down the secondary system device to reduce its power consumption when the supplying processing is performed; and backup processing of performing data backup using the primary system device upon completion of the end processing.
G06F 1/26 - Power supply means, e.g. regulation thereof
G06F 1/30 - Means for acting in the event of power-supply failure or interruption, e.g. power-supply fluctuations
G06F 1/3212 - Monitoring battery levels, e.g. power saving mode being initiated when battery voltage goes below a certain level
G06F 1/3234 - Power saving characterised by the action undertaken
G06F 1/3287 - Power saving characterised by the action undertaken by switching off individual functional units in the computer system
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
85.
Information processing apparatus, data control method, and recording medium for parallel access of processors to storage areas
An information processing apparatus includes a main CPU storage area and a sub-CPU storage area. The information processing apparatus further includes an FPGA capable of processing, in parallel, access to the main CPU storage area by a main CPU and access to the sub-CPU storage area by a sub-CPU. The FPGA has an FPGA control unit configured to read prescribed data from an SPI device into the main CPU storage area and the sub-CPU storage area, update the prescribed data in the sub-CPU storage area when receiving an update request from the sub-CPU, and read the prescribed data from the main CPU storage area to the main CPU when receiving a read request from the main CPU.
An administrative terminal receives designation of generation target data and of a generation destination storage device, and identifies data similar to the target data. The terminal calculates a first predicted time required for a first transmission process of transmitting the target data from the storage device holding it to the generation destination storage device, and a second predicted time required for a second transmission process of transmitting the similar data from an object storage service to the generation destination storage device and transmitting the difference data between the target data and the similar data from the storage device holding the target data to the generation destination storage device. If the second predicted time is shorter than the first, the administrative terminal performs the second transmission process, transmitting the similar data and the difference data to the generation destination storage device to generate the generation target data there.
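The choice between the two transmission processes is a comparison of predicted times. A hedged sketch, assuming the two transfers of the second process run in parallel (the entry does not say whether they overlap) and using invented bandwidth figures:

```python
# Illustrative choice between the direct transfer and the similar+difference
# transfer. Bandwidths, sizes, and the parallelism assumption are invented.

def pick_transmission(target_size: float, diff_size: float, similar_size: float,
                      storage_bw: float, object_store_bw: float) -> str:
    """Compare the direct transfer against the similar-data + difference
    transfer and return the plan with the shorter predicted time."""
    t_first = target_size / storage_bw
    # Assumption: similar data (from the object storage service) and the
    # difference (from the storage device) transfer concurrently.
    t_second = max(similar_size / object_store_bw, diff_size / storage_bw)
    return "second (similar + diff)" if t_second < t_first else "first (direct)"

print(pick_transmission(target_size=500e9, diff_size=20e9, similar_size=480e9,
                        storage_bw=1e9, object_store_bw=5e9))
# -> "second (similar + diff)": 96 s predicted vs 500 s for the direct copy
```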
A drive box includes a power source, a drive group constituted of a plurality of storages, a replaceable canister, and a midplane which couples the canister and the drive group. The midplane includes a storage apparatus having a memory unit in which at least data related to the drive group is stored. The canister has a communication channel coupled to at least one of the plurality of storages and performs I2C communication with the storage apparatus. Power is supplied to the canister from the power source by a first supply line, which is a power line passing through the midplane, and power is supplied to the storage apparatus from the power source via the canister.
A storage device receives an access request specifying one of one or more LDEVs (one or more logical volumes provided to one or more hosts) from a host. In response to the access request, the storage device accesses the page allocated to the access destination area of the LDEV among a plurality of pages (a plurality of logical storage areas allocatable to the one or more LDEVs). Based on management information that records a write status characteristic for each of the plurality of pages, the storage device or a storage device management system identifies as a mark target page any low-write-frequency page where a certain number or more of writes have occurred during a certain period of time, and determines that a ransomware attack is possible when the number of mark target pages is equal to or greater than a threshold.
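A minimal sketch of the mark-and-count check, reading a "mark target page" as a normally low-write-frequency page that nonetheless received a burst of writes in the observation window; all field names and thresholds are illustrative:

```python
# Hypothetical mark-target check for ransomware-attack possibility.

WRITE_BURST_MIN = 100   # writes within the window that flag a page
ATTACK_PAGE_MIN = 50    # flagged pages that suggest ransomware activity

def possible_ransomware(pages: list[dict]) -> bool:
    """pages: per-page management info, e.g.
    {"low_freq": True, "recent_writes": 120}."""
    marked = [p for p in pages
              if p["low_freq"] and p["recent_writes"] >= WRITE_BURST_MIN]
    return len(marked) >= ATTACK_PAGE_MIN

sample = [{"low_freq": True, "recent_writes": 150}] * 60
print(possible_ransomware(sample))  # True: 60 marked pages >= 50
```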
Each controller of a plurality of storage controllers is either an old storage controller before replacement or a new storage controller after replacement. The new storage controller can execute a first program and a second program, and the old storage controller can execute at least the second program. When all of the plurality of storage controllers are new storage controllers, each new storage controller processes data input to and output from the storage drive by using the first program. When the plurality of storage controllers includes at least one old storage controller, each storage controller processes the data input to and output from the storage drive by using the second program.
In failover processing, a CPU restores data stored in a first volume to a second volume of a storage system, associates a unique ID of the first volume with the second volume, and stores the association in a memory. After the failover processing is completed, the CPU maintains an update difference management bitmap indicating which content of the data stored in the second volume has been updated. In failback processing, based on the update difference management bitmap, the CPU transmits the update data updated after the failover processing, among the data stored in the second volume, to the first volume identified by the unique ID associated with the second volume (a bitmap-driven failback sketch follows the classification codes below).
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
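The failback step of the entry above is driven by the update difference management bitmap: only blocks whose bit is set are copied back. A minimal sketch, with the block size and in-memory representation assumed for illustration:

```python
# Hedged sketch of bitmap-driven failback: only blocks marked dirty in the
# update-difference bitmap are sent back to the first volume.

BLOCK_SIZE = 4096  # assumed block granularity

def failback(second_volume: bytes, diff_bitmap: list[bool]) -> dict[int, bytes]:
    """Return {block_number: data} for the blocks updated after failover;
    only these are transmitted to the first volume."""
    updates = {}
    for block_no, dirty in enumerate(diff_bitmap):
        if dirty:
            start = block_no * BLOCK_SIZE
            updates[block_no] = second_volume[start:start + BLOCK_SIZE]
    return updates

vol = bytes(4 * BLOCK_SIZE)
print(sorted(failback(vol, [False, True, False, True])))  # [1, 3]
```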
Aspects of the present disclosure involve an innovative method for detecting error zones from a plurality of volume groups. The method may include creating a plurality of probe groups for error detection; detecting a new error associated with the plurality of probe groups and the plurality of volume groups; retrieving error information associated with the new error, wherein the error information comprises an error source, an error type, and an error time; retrieving an error correlation rule associated with the error information; determining if the error correlation rule is satisfied by the error information and information of other known errors; and identifying a common zone based on the error information and the information of the other known errors as an error zone.
Example implementations described herein involve systems and methods that can include, responsive to a request to deploy an application using a storage of a storage system, managing a storage configuration for the application; managing data information and storage configuration information associated with a copy relationship between data used by the application and the storage configuration for the storage system; extracting and evaluating possible configuration patterns from the data information and the storage configuration information; and providing ones of the possible configuration patterns that satisfy specified requirements for the application.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
A management method includes accessing API interfaces and collecting data related to primary volumes, remote copy volumes, and shared upload volumes; generating an order topology relating the primary, remote copy, and shared upload volumes; calculating a set P of the primary volumes storing data to be deleted and specifying a set Vd of the remote copy volumes directly related to the set P in the order topology; calculating, as a set C, all of the shared upload volumes related to the set P in the order topology and specifying a set Vi of all of the remote copy volumes related to the set C in the order topology; calculating a set Vid as the complement of the set Vd within the set Vi; and specifying a set Pid of the primary volumes one level higher than the set Vid in the order topology (a worked sketch of these set operations follows the classification code below).
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
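The set operations of the entry above can be traced on a toy order topology. The sketch below assumes a simple parent/child representation (all volume names are invented):

```python
# Hypothetical order topology: primary -> remote-copy volumes, and
# remote-copy volume -> shared upload volume.
primary_to_rc = {"P1": {"V1"}, "P2": {"V2"}, "P3": {"V3"}}
rc_to_shared = {"V1": "C1", "V2": "C1", "V3": "C2"}

P = {"P1"}                                                # primaries holding data to delete
Vd = set().union(*(primary_to_rc[p] for p in P))          # RC volumes directly under P
C = {rc_to_shared[v] for v in Vd}                         # shared uploads related to P
Vi = {v for v, c in rc_to_shared.items() if c in C}       # all RC volumes under C
Vid = Vi - Vd                                             # complement of Vd within Vi
Pid = {p for p, vs in primary_to_rc.items() if vs & Vid}  # primaries one level above Vid

print(sorted(Vid), sorted(Pid))  # ['V2'] ['P2']
```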
94.
ALLOCATION CONTROL APPARATUS, COMPUTER SYSTEM, AND ALLOCATION CONTROL METHOD
A memory of an application platform stores, per site, a performance model indicating a relationship between program performance and the amount of hardware resources necessary for realizing that performance, and an electric power consumption model indicating a relationship between a resource allocation amount, which is the amount allocated to the program, and the electric power consumed when the program is executed. The CPU receives target performance information indicating the target performance for the program; calculates, per site, the necessary allocation amount and the necessary electric power consumption, that is, the resource allocation amount and electric power consumption necessary for realizing the target performance, by using the target performance information, the performance model, and the electric power consumption model; and creates a container/data allocation plan, which is an allocation plan for the execution platform of the program and its data, based on the result of the calculation.
Systems and methods described herein involve a storage system and one or more devices associated with and external to the storage system. They can include managing configuration rule difference mapping information that maps configuration detection filter information to a required rule manipulation responsive to that filter information, the required rule manipulation indicating a modification to alert properties of the storage system; and, on detection of a configuration change to the one or more devices based on the configuration detection filter information, identifying one or more rule manipulations from the required rule manipulation in the configuration rule difference mapping information and generating a rule manipulation draft to modify the alert properties of the storage system based on the required rule manipulation.
To support appropriate selection of a configuration of a cloud system. A computer system includes a processor 111, a storage apparatus 112, and an input/output apparatus 141. The storage apparatus 112 stores at least configuration condition information 132 indicating evaluation values for system configurations that can be constructed on the cloud. The input/output apparatus 141 accepts a configuration request indicating a condition for a system configuration to be constructed on the cloud. The processor 111 obtains an evaluation value corresponding to the configuration request, compares it with the evaluation values in the configuration condition information, and determines a candidate system configuration to be proposed on the basis of the comparison result. The input/output apparatus 141 outputs the candidate system configuration to be proposed.
A storage system has a plurality of control units that perform read control and write control of data stored in a storage. Each of the plurality of control units has a processor; a first memory connected to the processor and storing software for executing the read control and write control processes; a network interface for connecting to a control unit network that interconnects the plurality of control units; and a second memory connected to the network interface and storing control information for the data subject to read control and write control as well as cache data of the storage.
G06F 3/06 - Digital input from, or digital output to, record carriers
G06F 12/0804 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with main memory updating
98.
Unified storage and method of controlling unified storage
The present invention makes it possible to maintain availability and scale out file performance while suppressing costs. A unified storage has a plurality of controllers and a storage apparatus (storage device unit), and each of the plurality of controllers is equipped with one or more main processors (CPUs) and one or more channel adapters (FE-I/Fs). Each main processor runs a block storage control program to process data input to and output from the storage apparatus. Each channel adapter has a processor (CPU) that communicates with a main processor after receiving an access request, and the processors in the plurality of channel adapters cooperate to run a distributed file system and distributively store data, written as files, across the plurality of controllers.
The CPU of the management node measures the power consumption of a computer node while causing the computer node to execute a power measurement benchmark that uses the hardware whose resources are allocated to a program to be executed by the computer node, changing the resource use amount during the benchmark run. On the basis of the measurement results, the CPU generates a power consumption model representing the relationship between the amount of the resource allocated to the program and the power consumption.
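One plausible form for the generated power consumption model is a linear fit of measured power against resource allocation; the entry does not commit to a model form. A minimal least-squares sketch under that assumption:

```python
# Illustrative model generation: vary the resource allocation, record power,
# and fit power = a * allocation + b by ordinary least squares.

def fit_power_model(samples: list[tuple[float, float]]) -> tuple[float, float]:
    """samples: (resource_allocation, measured_power_watts) pairs."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# E.g. measurements taken while the benchmark runs at four allocation levels.
a, b = fit_power_model([(1, 110.0), (2, 130.0), (4, 170.0), (8, 250.0)])
print(a, b)  # 20.0 90.0 -> predicted power = 20 * allocation + 90
```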
A data migration apparatus and method perform low-cost data migration while preventing stoppages during migration of a job net between cloud services. The apparatus acquires the execution schedule of the job net, estimates the execution time of each job, and further estimates the volume copy time from a first to a second cloud service for each volume to be used. It calculates the starting time of each job on the basis of the execution schedule and the execution time of each job, and calculates the starting time of the volume copy on the basis of the calculated starting time of each job and the volume copy time of each volume, so that the copy of each volume from the first to the second cloud service starts at the starting time calculated for that volume.
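The copy start time calculation amounts to back-scheduling from each job's start time. A hedged sketch, with an invented safety margin (the entry only says the copy starts at the calculated time):

```python
# Illustrative back-scheduling of a volume copy so it finishes just before
# the first job that needs the volume starts. Times are epoch seconds.

def copy_start_time(job_start: float, copy_duration: float,
                    safety_margin: float = 300.0) -> float:
    """Start the volume copy early enough that it completes (with an
    assumed margin) before the job begins on the second cloud service."""
    return job_start - copy_duration - safety_margin

job_start = 1_700_000_000.0  # scheduled start of the job using the volume
print(copy_start_time(job_start, copy_duration=3_600.0))  # 1699996100.0
```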