A virtual machine deployment method and apparatus, a device, and a readable storage medium. The method uses a firefly algorithm to determine deployment locations of virtual machines, and takes an average performance score of all hosts to be selected as a target function. Locations of respective fireflies are locations of the respective hosts. An iterative optimization process of the firefly algorithm involves finding a host capable of maximizing operation performance of all hosts in a cloud platform after deployment of virtual machines is completed. Since the target function is the average performance score of all hosts after deployment of the virtual machines, selection of a host corresponding to the maximum target function value enables average performance of all hosts to be maximized after the virtual machines have been deployed to a destination host, thereby maximizing operation performance of all hosts in the cloud platform while scheduling the virtual machines.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application execution engines or operating systems
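The placement search in the abstract above can be illustrated in a few lines. The Python sketch below encodes each firefly's location as a candidate host index and uses the post-deployment average host score as the objective; the score() function, the toy host data and the firefly parameters are assumptions for illustration, not the patented implementation.

    import random

    def average_score_after_deploy(hosts, vm, target_idx, score):
        # Objective: mean performance score of all hosts once the VM lands on hosts[target_idx].
        return sum(score(h, vm if i == target_idx else None) for i, h in enumerate(hosts)) / len(hosts)

    def firefly_select_host(hosts, vm, score, n_fireflies=10, iters=30, beta=0.6, alpha=0.3):
        # Each firefly's "location" is the index of a candidate destination host.
        positions = [random.randrange(len(hosts)) for _ in range(n_fireflies)]
        brightness = [average_score_after_deploy(hosts, vm, p, score) for p in positions]
        for _ in range(iters):
            for i in range(n_fireflies):
                for j in range(n_fireflies):
                    if brightness[j] > brightness[i]:
                        # Move firefly i toward the brighter firefly j, with a random perturbation.
                        step = beta * (positions[j] - positions[i]) + alpha * random.uniform(-1, 1) * len(hosts)
                        positions[i] = int(round(positions[i] + step)) % len(hosts)
                        brightness[i] = average_score_after_deploy(hosts, vm, positions[i], score)
        best = max(range(n_fireflies), key=lambda k: brightness[k])
        return positions[best]

    # Toy usage: hosts expose free CPU/RAM; score() rewards the headroom left after placing the VM.
    hosts = [{"cpu": 16, "ram": 64}, {"cpu": 8, "ram": 32}, {"cpu": 32, "ram": 128}]
    vm = {"cpu": 4, "ram": 16}
    def score(host, placed_vm):
        cpu_left = host["cpu"] - (placed_vm["cpu"] if placed_vm else 0)
        ram_left = host["ram"] - (placed_vm["ram"] if placed_vm else 0)
        return min(cpu_left / host["cpu"], ram_left / host["ram"])
    print("deploy to host", firefly_select_host(hosts, vm, score))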
2.
METHOD AND APPARATUS FOR CONTROLLING RUNNING OF OPERATING SYSTEM, AND EMBEDDED SYSTEM AND CHIP
Provided in the embodiments of the present disclosure are a method and apparatus for controlling running of an operating system, and an embedded system and a chip. The embedded system includes a chip and at least two operating systems. The chip includes a processor, a hardware controller, a first bus, and a second bus. The bandwidth of the first bus is higher than the bandwidth of the second bus; the first bus is configured as a multi-master and multi-slave mode; and the second bus is configured as a one-master and multi-slave mode. The at least two operating systems are configured to run on the basis of the processor; the at least two operating systems are configured to communicate with each other by the first bus; and the at least two operating systems are configured to control the hardware controller by the second bus.
A power supply control method, system and device for a server are provided. The method includes: dividing a utilization rate of a system main power supply into different levels in advance, and setting a GPU power control policy corresponding to each of the different levels of the utilization rate of the system main power supply, wherein the degree to which the set GPU power control policy suppresses the computing capability of GPUs in the system increases as the level of the utilization rate of the system main power supply increases; acquiring an actual utilization rate of the system main power supply, and determining a target utilization rate level corresponding to the actual utilization rate; and performing power supply control on the GPUs in the system according to the GPU power control policy corresponding to the target utilization rate level.
G06F 1/32 - Means for saving power
G06F 1/3287 - Power saving characterised by the action undertaken by switching off an individual functional unit in a computer
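A minimal sketch of the level-to-policy mapping described above. The utilisation thresholds, the power-cap percentages and the set_power_cap() callback are illustrative assumptions; the real levels and policies are configured by the system.

    LEVELS = [
        (0.70, {"power_cap_pct": 100}),  # utilisation below 70%: no suppression of GPU computing capability
        (0.85, {"power_cap_pct": 80}),   # 70-85%: mild cap
        (0.95, {"power_cap_pct": 60}),   # 85-95%: stronger cap
        (1.01, {"power_cap_pct": 40}),   # above 95%: aggressive suppression
    ]

    def select_gpu_policy(psu_utilization):
        # Map the measured main-power-supply utilisation to its level's GPU power control policy.
        for upper_bound, policy in LEVELS:
            if psu_utilization < upper_bound:
                return policy
        return LEVELS[-1][1]

    def apply_policy(gpus, psu_utilization, set_power_cap):
        # set_power_cap(gpu, pct) stands in for the platform's actual GPU power-capping call.
        policy = select_gpu_policy(psu_utilization)
        for gpu in gpus:
            set_power_cap(gpu, policy["power_cap_pct"])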
4.
METHOD AND APPARATUS FOR REPAIRING BANDWIDTH SLOWDOWN, ELECTRONIC DEVICE, AND STORAGE MEDIUM
The present application provides a method and apparatus for repairing bandwidth slowdown, an electronic device, and a storage medium, applied to a BIOS module. The BIOS module is connected to a CPLD module with a register and is configured to communicate with the CPLD module; and the CPLD module is connected to a PCIE module configured with a target bandwidth and is configured to acquire a link bandwidth of the PCIE module. The method includes: acquiring the link bandwidth of the PCIE module from the CPLD module when a device is started; comparing the link bandwidth with the target bandwidth, and determining whether the PCIE module has the bandwidth slowdown; and sending a register connection state control instruction to the CPLD module when the PCIE module has the bandwidth slowdown, so that the register performs enable and disable connection operations in response according to the received instruction.
Disclosed are an image output method and apparatus, and a computer-readable storage medium. The method includes: acquiring an image continuous change feature of a display interface of a local server; generating image output control information according to the image continuous change feature and a preset image change threshold; and controlling the amount of output image data according to the image output control information and network congestion information.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06T 5/10 - Image enhancement or restoration using non-spatial domain filtering
G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
The present application discloses a method for acquiring data of an artificial intelligence platform, and a device, an apparatus and a medium. The method includes: acquiring a data operation request initiated by a target node of the artificial intelligence cluster for target data; counting a current data operation burden of each of the other compute nodes; traversing all of the other compute nodes sequentially in order of the current data operation task burden from lowest to highest, and, in the traversal process, judging whether the compute node currently being traversed has already stored the target data; and when it has already stored the target data, transmitting the target data from the compute node currently being traversed to the target node by means of a shared storage network pre-constructed between the different nodes of the artificial intelligence cluster based on remote direct data access technology.
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for the network file system [NFS], storage area networks [SAN] or network-attached storage [NAS]
A method for scheduling a multi-node cluster of a K-DB database, comprising: connecting an application terminal and a scheduler to a cluster of the K-DB database through a service extranet, and connecting the respective nodes in the cluster through an intranet; in response to an application request being received by the scheduler, determining whether the request is a table query, and in response to the request being a table query, determining whether the request is a multi-table-joint query; in response to the request being a multi-table-joint query, determining tables to be queried, and determining nodes having the highest table version; determining types of change values of the respective tables, and calculating the amount of updated data of the respective tables; and selecting a node with the smallest amount of updated data as a computing node, synchronizing the tables of other nodes to the computing node, and executing the multi-table-joint query.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
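The node-selection step above reduces to a small cost comparison. The sketch below assumes hypothetical helpers table_version(node, table) and updated_rows(node, table); it keeps the nodes that already hold the newest table versions and picks the one that would need the least data synchronised.

    def pick_compute_node(tables, nodes, table_version, updated_rows):
        # Candidate nodes: those already holding the highest version of at least one queried table.
        candidates = set()
        for t in tables:
            best_version = max(table_version(n, t) for n in nodes)
            candidates.update(n for n in nodes if table_version(n, t) == best_version)

        # Cost of choosing a node: total amount of updated data that must be synchronised to it.
        def sync_cost(node):
            return sum(updated_rows(node, t) for t in tables)

        return min(candidates, key=sync_cost)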
8.
Noise reduction auto-encoder-based anomaly detection model training method
Disclosed is a method for training an anomaly detection model based on an improved denoising autoencoder, including: acquiring an original image; generating a rectangular frame according to a preset range of a resolution ratio by the improved denoising autoencoder, and occluding the original image with the rectangular frame, wherein the resolution ratio is the ratio of the resolution of the occlusion area formed by the rectangular frame to the resolution of the original image; filling random noise into the rectangular frame by the improved denoising autoencoder to obtain a noised image; and performing constraint learning on the original image and the noised image by the anomaly detection model to implement training of the anomaly detection model. Because the learning task is more complex, identity mapping is alleviated and the detection performance of the model is improved. The present application further provides an apparatus, a device, and a readable storage medium thereof.
G06V 10/774 - Generation of training pattern sets; Processing of image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration and data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Bootstrap methods, e.g. bagging or boosting
G06V 10/98 - Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
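A possible NumPy sketch of the occlude-and-noise step described above, assuming the occlusion ratio range and the noise distribution are free design choices; the model would then be trained to reconstruct the clean image from the returned noised image.

    import numpy as np

    def occlude_with_noise(image, ratio_range=(0.05, 0.25), rng=None):
        # Cut a rectangle whose area is a random fraction of the image and fill it with random noise.
        rng = np.random.default_rng() if rng is None else rng
        h, w = image.shape[:2]
        ratio = rng.uniform(*ratio_range)                 # occlusion area / image area
        rect_h = max(1, int(round(h * np.sqrt(ratio))))
        rect_w = max(1, int(round(w * np.sqrt(ratio))))
        top = rng.integers(0, h - rect_h + 1)
        left = rng.integers(0, w - rect_w + 1)
        noised = image.copy()
        patch = noised[top:top + rect_h, left:left + rect_w]
        noised[top:top + rect_h, left:left + rect_w] = rng.uniform(image.min(), image.max(), size=patch.shape)
        return noised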
9.
BIOS problem positioning method and apparatus, and computer readable storage medium
Disclosed in embodiments of the present application are a BIOS problem locating method and apparatus, and a medium. The method includes: constructing functional modules according to historical sample data; dividing data codes corresponding to the functional modules into data sub-codes according to node information corresponding to each functional module; determining target identifier information according to the correspondence between the data sub-codes and identifier information; when a problem occurs in a certain data sub-code, storing the target identifier information in a preset memory; and if an anomaly occurs in BIOS operation, quickly locating the abnormal target data sub-code according to the target identifier information recorded in the memory.
G06F 9/00 - Arrangements for program control, e.g. control units
G06F 11/22 - Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
G06F 15/177 - Initialisation or configuration control
10.
METHOD, APPARATUS AND SYSTEM FOR MONITORING I2C, AND STORAGE MEDIUM
The present disclosure relates to the field of computer technologies, in particular to a method, an apparatus, and a system for monitoring an I2C, and a storage medium. The method includes: obtaining a first command from a BMC; recognizing a level of the first command according to a pre-stored command level list, where the level includes a first level, a second level, and a third level, a security level of the third level is higher than a security level of the second level, and the security level of the second level is higher than a security level of the first level; and sending the first command to a device in different modes according to the level of the first command.
G06F 21/85 - Protecting input, data display or interconnection devices; interconnection devices, e.g. bus-connected or in-line devices
G06F 13/42 - Bus transfer protocol, e.g. handshake; Synchronisation
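A compact sketch of the level-based forwarding described above. The command-to-level table, the three handling modes and the send/log/confirm callbacks are assumptions for illustration; the abstract only specifies that higher-level commands receive stricter treatment.

    # Hypothetical pre-stored command level list (1 = lowest security level, 3 = highest).
    COMMAND_LEVELS = {
        "read_temperature": 1,
        "set_fan_speed": 2,
        "update_firmware": 3,
    }

    def forward_command(command, payload, send, log, confirm):
        # Send a BMC command to the device in a mode chosen by its security level.
        level = COMMAND_LEVELS.get(command, 3)   # unknown commands are treated as highest level
        if level == 1:
            send(command, payload)               # first level: forward directly
        elif level == 2:
            log(command, payload)                # second level: log, then forward
            send(command, payload)
        else:
            if not confirm(command):             # third level: forward only after confirmation
                return False
            log(command, payload)
            send(command, payload)
        return True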
11.
SERVER FAULT LOCATING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
The present application discloses a server fault locating method and apparatus, an electronic device, and a storage medium. The method includes: acquiring topology architecture information of a server, wherein the topology architecture information includes connection relationships between a plurality of modules to be detected and attribute information corresponding to the modules to be detected; based on the topology architecture information, determining a theoretical value of each target performance parameter in each of the modules to be detected; acquiring an actual value of the target performance parameter during operation of each of the modules to be detected; and comparing and analyzing the actual value with the theoretical value, and determining a faulty module among the plurality of modules to be detected according to a comparison and analysis result.
G06F 11/22 - Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
G06F 11/34 - Recording or statistical evaluation of computer activity, e.g. of interruptions or input/output operations
H04L 41/12 - Discovery or management of network topologies
12.
Voltage pump circuit and method supporting power-down data protection
A voltage pump circuit and method for supporting power-down data protection. The circuit includes: a first capacitor connected to a voltage source; a second capacitor connected to the first capacitor by a second metal oxide semiconductor (MOS); a boost chopper connected to the voltage source and the first capacitor and connected to the second capacitor by a first MOS; a buck chopper connected to the second capacitor and a hard disk; and a logic chip connected to the voltage source, the first MOS and the second MOS, where the logic chip is configured to control turning the first MOS and the second MOS on or off according to voltage information of the voltage source, so as to supply power to the hard disk normally when the voltage source is normal, and to prolong the power supply time to the hard disk when the voltage source is abnormal or shut down, so that cache data can be saved.
An electronic device and a memristor-based logic gate circuit thereof. In the present application, a control end of a controllable switch is connected to a negative end of an output memristor in a MAGIC-based AND logic gate, and whether a second memristor is powered on is controlled by the controllable switch. Thus, when the resistance value states of the two input memristors in the AND logic gate are different, the controllable switch conducts and powers on the second memristor, which then presents a low-resistance state. When the resistance value states of the two input memristors are the same, the controllable switch does not conduct and the second memristor keeps its state unchanged, i.e., presents a high-resistance state. An exclusive-OR logic gate is formed by combining the two input memristors and the second memristor.
H03K 19/20 - Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits characterised by the logic function, e.g. AND, OR, NOR, NOT circuits
G06F 7/501 - Half or full adders, i.e. basic adder cells for one denomination
H03K 19/02 - Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components
H03K 19/21 - EXCLUSIVE-OR circuits, i.e. giving an output signal if a signal exists at only one input; COINCIDENCE circuits, i.e. giving an output signal only if all input signals are identical
14.
METHOD AND SYSTEM FOR ANALYZING CLOUD PLATFORM LOGS, DEVICE AND MEDIUM
The present application discloses a method and system for analyzing cloud platform logs, a device, and a storage medium. The method includes: preprocessing cloud platform logs, equally dividing the time for recording logs into a plurality of time periods according to a preset time length, and counting the total number of logs in each time period; selecting a time window including a plurality of consecutive time periods, classifying each time period in the time window according to a dissimilarity value so as to obtain an exception class, and according to the time corresponding to a log in the exception class, determining a time period in which a fault occurred; performing word segmentation on a log from the time period in which the fault occurred, and calculating a term frequency and an inverse document frequency of each word; and according to the product of the term frequency and the inverse document frequency, determining a reason for which the fault occurred. In the present application, the time period in which a fault occurred is determined by means of clustering, and the reason for which the fault occurred is determined according to the term frequency and the inverse document frequency, such that cloud platform logs can be analyzed quickly, and the operation and maintenance efficiency of operation and maintenance personnel is increased.
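The counting and keyword-ranking steps above reduce to standard time bucketing and TF-IDF. The sketch below assumes logs are plain text lines with numeric timestamps; the clustering step that isolates the exception class is omitted.

    import math
    from collections import Counter

    def bucket_counts(log_times, period_seconds):
        # Count logs per fixed-length time period (timestamps in seconds).
        return Counter(int(t // period_seconds) for t in log_times)

    def tfidf_keywords(fault_logs, all_period_logs, top_k=5):
        # fault_logs: log lines from the period in which the fault occurred.
        # all_period_logs: one list of log lines per time period (the "documents" for IDF).
        words = [w for line in fault_logs for w in line.split()]
        tf = Counter(words)
        n_docs = len(all_period_logs)
        scores = {}
        for word, freq in tf.items():
            docs_with_word = sum(1 for doc in all_period_logs if any(word in line.split() for line in doc))
            idf = math.log((n_docs + 1) / (docs_with_word + 1)) + 1      # smoothed inverse document frequency
            scores[word] = (freq / len(words)) * idf
        return sorted(scores, key=scores.get, reverse=True)[:top_k]      # likely fault-reason keywords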
This disclosure discloses a network model training method and apparatus, an electronic apparatus and a computer-readable storage medium. The method includes: acquiring training data and inputting the training data into an initial model to obtain output data, wherein the initial model includes an embedding layer, the embedding layer is constructed based on preset network layer latency information, the preset network layer latency information includes network layer types and at least two types of latency data corresponding to each network layer type, and each type of latency data corresponds to a different device type; inputting a current device type and a target network layer type of each target network layer in the initial model into the embedding layer to obtain target latency data corresponding to another device type; calculating a target loss value based on the target latency data, the training data and the output data, and adjusting parameters of the initial model based on the target loss value; and obtaining a target model based on the initial model in response to a training completion condition being satisfied. By means of the method, the target model has a minimum latency when running on a device corresponding to the other device type.
Provided are a method and apparatus for processing abnormal power failure of a solid state disk, and an electronic device and a computer-readable storage medium. The method includes: in response to detecting an abnormal power failure of a solid state disk, acquiring a write operation for the solid state disk; in response to the write operation being a cold data write operation, acquiring a write address corresponding to the write operation, and discarding the cold data write operation, wherein the write address points to a cold data block; obtaining the minimum write address of the cold data block by using the write address; and generating, by using the minimum write address, data block information corresponding to the cold data block.
A sparse matrix accelerated computing method and apparatus, a device, and a medium are disclosed. The method includes: reading and performing non-zero detection on a first sparse matrix, generating first status information for each row of data of the first sparse matrix from the detection result, and storing same into a register; storing non-zero data of the first sparse matrix into a RAM; reading and performing non-zero detection on a second sparse matrix, generating second status information for each row of data of the second sparse matrix from the detection result, and storing same into the register; and performing a logical operation on the first status information and the second status information, reading the data in the RAM according to the logical operation result, and performing a product operation on the data in the RAM and data of the second sparse matrix to obtain product matrix data.
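Read literally, the abstract describes gating the multiplication with per-row non-zero status bits. The Python sketch below mimics that flow in software (plain lists stand in for the register and the RAM) and assumes an element-wise product; the real hardware data path is of course more involved.

    def row_bitmap(row):
        # One status bit per element: 1 where the element is non-zero.
        return [1 if x != 0 else 0 for x in row]

    def sparse_elementwise_product(a_rows, b_rows):
        product = []
        for a_row, b_row in zip(a_rows, b_rows):
            a_bits, b_bits = row_bitmap(a_row), row_bitmap(b_row)      # first / second status information
            staged_a = [x for x in a_row if x != 0]                    # compacted non-zero data (the "RAM")
            a_positions = [i for i, bit in enumerate(a_bits) if bit]
            out = [0] * len(a_row)
            for slot, col in enumerate(a_positions):
                if b_bits[col]:                                        # AND of the two status bits
                    out[col] = staged_a[slot] * b_row[col]
            product.append(out)
        return product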
Provided are a file processing method, apparatus and device, and a readable storage medium. The method includes: judging whether a received file access path exists; if so, judging whether the file access path corresponds to a folder; if so, generating a folder stream, traversing the folder stream, and generating a file object corresponding to the folder stream; storing the file object in a memory, acquiring the size of the corresponding file according to the file object, detecting whether there is a file in the folder whose size is known and which has been traversed and used, and if so, deleting the file object corresponding to that file from the memory; and acquiring the total size of the files in the folder according to the size of each file in the folder.
The present disclosure discloses a bare metal hardware detection method, system and device, and a computer-readable storage medium, applied to a bare metal node. The bare metal node includes a bare metal machine provided with a network card. The bare metal detection method includes: when a detection instruction is received, generating a detection flow, the detection flow including an address request; sending the detection flow to a control node by means of the network card, so that the control node returns address information when detecting the address request in the detection flow, the address information including a tftp server address; and obtaining a memory file system according to the tftp server address, acquiring hardware information of the bare metal machine by means of the memory file system, and reporting the hardware information to the control node.
The present application discloses a data caching method, system and device in an AI cluster, and a computer medium. The method comprises: determining a target data set to be cached; obtaining a weight value of the target data set on each cluster node in the AI cluster, and determining a target cluster node for caching the target data set; and obtaining a target shortest path from the remaining cluster nodes, which comprise the nodes in the AI cluster other than the target cluster node, and determining, on the basis of the weight value, the target shortest path, and the preceding node, a cache path for caching to the target cluster node, so as to cache the target data set to the target cluster node according to the cache path. According to the present application, the cache path can be matched with the storage capacity of the AI cluster, caching of the target data set on the basis of the cache path is equivalent to caching of the data set on the basis of the storage performance of the AI cluster, and the data caching performance of the AI cluster can be improved.
The present application discloses a CPU performance adjustment method and apparatus, and a medium. The method includes: determining, by a baseboard management controller (BMC), a rated power of a power supply unit (PSU) powering a current central processing unit (CPU); outputting, by a complex programmable logic device (CPLD), a control signal corresponding to the rated power; outputting, by a preset conversion unit, a voltage value corresponding to the control signal; and determining, based on the voltage value and a first preset mapping relationship, a maximum operating frequency of the CPU in an overclocking state, wherein the first preset mapping relationship is a correspondence between voltage ranges and maximum operating frequencies.
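The final lookup step above is a simple range table. The voltage ranges and frequencies below are placeholders; the actual first preset mapping relationship is platform-specific.

    # Illustrative mapping from the conversion unit's output voltage to the allowed overclock ceiling.
    VOLTAGE_TO_MAX_FREQ_MHZ = [
        ((0.0, 0.4), 3600),   # low control voltage (lower-rated PSU): conservative ceiling
        ((0.4, 0.8), 4000),
        ((0.8, 1.2), 4400),
    ]

    def max_overclock_frequency(voltage):
        # Return the maximum CPU operating frequency allowed in the overclocking state.
        for (low, high), freq in VOLTAGE_TO_MAX_FREQ_MHZ:
            if low <= voltage < high:
                return freq
        raise ValueError(f"voltage {voltage} V outside all configured ranges")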
The present disclosure provides a method for implementing a bare metal inspection process, a system, a device and a medium. The method includes: installing an Openstack at a control node, installing a network interface card at a bare metal node, and installing an operating system in the network interface card, so that the network interface card generates a first bare metal port at the bare metal node, and in the operating system, generates a second bare metal port corresponding to the first bare metal port; establishing a communication channel between the Openstack and the operating system, and deploying a proxy component on the operating system; creating a first inspection port on the Openstack, creating a second inspection port based on the proxy component, and binding the second inspection port to the second bare metal port.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for simultaneous processing of several programs
The present application discloses a task scheduling method, including: in response to receiving an issued task, dividing, by a parser, the task into sub-tasks and generating a sub-task list, wherein a task parameter corresponding to each sub-task is recorded in the sub-task list, and the task parameter includes a start phase of a next sub-task; sending, by a scheduler, the task parameter of a sub-task to be processed in the sub-task list to a corresponding sub-engine; executing, by the corresponding sub-engine, the corresponding sub-task to be processed; sending a notification to the scheduler in response to an operating phase reached while the corresponding sub-engine executes the corresponding sub-task being the same as the start phase in the received task parameter; and in response to the notification being detected by the scheduler, returning to the step of sending the task parameter of a sub-task to be processed to a corresponding sub-engine.
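The phase-triggered hand-off above can be sketched as follows, assuming each sub-engine is modelled as a callable that yields its operating phases in order; the sub-task list format and the engines mapping are illustrative only.

    from collections import deque

    def run_task(subtask_list, engines):
        # subtask_list: dicts like {"name": ..., "engine": ..., "next_start_phase": ...}.
        # engines: engine name -> callable(subtask) yielding the phases it passes through.
        queue = deque(subtask_list)
        while queue:
            subtask = queue.popleft()
            phases = engines[subtask["engine"]](subtask)
            trigger = subtask.get("next_start_phase")
            for phase in phases:
                if trigger is not None and phase == trigger:
                    break    # notify the scheduler: the next sub-task may be dispatched now
            # In the real system the rest of this sub-task keeps running asynchronously
            # while the scheduler sends the next sub-task's parameters to its sub-engine.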
The present disclosure discloses a method for repairing hanging-up of a communication bus, an apparatus, an electronic device and a storage medium. The method includes: detecting a communication situation between a central processing unit and a baseband processing unit; when the communication situation indicates a communication fault between the central processing unit and the baseband processing unit, determining a target hanging-up event generated by the communication bus deployed between the central processing unit and the baseband processing unit; obtaining a target repairing operation corresponding to the target hanging-up event; and repairing the communication bus according to the target repairing operation.
G06F 11/14 - Error detection or correction in the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 13/42 - Bus transfer protocol, e.g. handshake; Synchronisation
25.
FILECOIN CLUSTER DATA TRANSMISSION METHOD AND SYSTEM BASED ON REMOTE DIRECT MEMORY ACCESS
A filecoin cluster data transmission method and system based on RDMA, including: providing an RDMA interface; receiving and encapsulating sector data by a first node, invoking the RDMA interface to transmit the sector data, and serially transmitting the sector data to a next encapsulation node; when receiving the sector data from a previous node, invoking the RDMA interface to directly transmit the sector data to a user mode memory of the node for encapsulation; invoking the RDMA interface to serially transmit the sector data back to the HCA card, and transmitting the sector data to a next node; receiving, by a last node, the sector data, and invoking the RDMA interface to directly transmit the sector data to the user mode memory of the last node; and invoking the RDMA interface to serially transmit the sector data back to the HCA card of the last node, and transmitting the sector data to distributed storage.
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for the network file system [NFS], storage area networks [SAN] or network-attached storage [NAS]
G06F 15/173 - Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star or tree networks
H04L 67/06 - Protocols specially adapted for file transfer, e.g. File Transfer Protocol [FTP]
26.
Method and system for service processing, device, and medium
The disclosure discloses a method and system for service processing, including: determining a first quantity parameter for acquiring index shards each time and a second quantity parameter of objects acquired by each of the index shards each time; assigning a first weight to the first quantity parameter to obtain a third quantity parameter for acquiring the index shards; listing, from a plurality of index shards in a bucket, index shards corresponding to the third quantity parameter, and listing, from each of the index shards that is listed, the objects corresponding to the second quantity parameter, so as to obtain a matrix that takes each index shard as a column and the plurality of objects corresponding to each shard as a row; successively extracting the plurality of objects corresponding to each row of the matrix; and processing the plurality of objects extracted each time according to a preset number of concurrent processes.
The disclosure provides a server cabinet, including: a cabinet frame, first mounting flanges detachably and vertically arranged at two sides of a front end of the cabinet frame, second mounting flanges detachably and vertically arranged at the two sides of the front end of the cabinet frame, first support brackets detachably arranged at the two side walls of the cabinet frame and configured for installing an ODCC server, and second support brackets detachably arranged at the two side walls of the cabinet frame and configured for installing an OCP server; wherein a plurality of first mounting holes configured for being matched with a front end of the ODCC server are formed in the first mounting flanges along a height direction, and a plurality of second mounting holes configured for being matched with a front end of the OCP server are formed in the second mounting flanges along the height direction.
Disclosed is a method for optimizing a convolutional residual structure of a neural network, including: obtaining picture data and convolution kernel data of each group of residual structures from a global memory as inputs, calculating a first convolution to obtain a first result, storing the first result in a shared memory; determining the size of a picture according to the picture data of the first result, dividing the picture into a plurality of first regions, allocating a corresponding block to each first region in the shared memory, calculating a second convolution in the blocks to obtain a second result; determining the size of a second picture, dividing the second picture into a plurality of second regions, allocating each second region to a corresponding block, calculating a third convolution in the blocks to obtain an output; adding the output and the inputs and performing linear rectification to obtain a final result.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
29.
Task scheduling method and apparatus, electronic device, and readable storage medium
A task scheduling method includes: when a task requirement is obtained, splitting the task requirement to obtain a plurality of subtasks having a constraint relationship; performing execution condition detection on non-candidate subtasks, determining a non-candidate subtask that satisfies an execution condition as a candidate subtask, and putting the candidate subtask into a task queue; performing state detection on a server network composed of edge servers to obtain server state information and communication information; inputting the server state information, the communication information, and queue information corresponding to the task queue into an action value evaluation model to obtain a plurality of evaluated values respectively corresponding to a plurality of scheduling actions; and determining a target scheduling action from the plurality of scheduling actions by using the evaluated values, and scheduling the candidate subtask in the task queue on the basis of the target scheduling action.
This application discloses a high-reliability protection circuit and a power supply system. The high-reliability protection circuit includes: a load overcurrent voltage monitoring component detecting a load supply voltage of a load, and determining and outputting a first current abnormal signal; a control logic component receiving the first current abnormal signal and generating a turn-off control signal; a driving electrode charging charge pump component receiving the turn-off control signal and generating a driving electrode voltage control signal and a channel conduction parameter control signal; a driving electrode rapid discharge component receiving the driving electrode voltage control signal, and transmitting a power field effect transistor cut-off signal; and a power field effect transistor switch respectively receiving the channel conduction parameter control signal and the power field effect transistor cut-off signal, adjusting channel conduction parameters of the power field effect transistor switch, and cutting off a circuit main current. This application can rapidly monitor voltage changes when a far-end load current is abnormal, realize rapid protection specific to current abnormality, and avoid load-end chip or device damage accidents caused by current phase lag due to stray inductance.
H02H 3/08 - Emergency protective circuit arrangements for automatic disconnection directly responsive to an undesired change from normal electric working conditions, with or without subsequent reconnection, responsive to overload
H02H 1/00 - Details of emergency protective circuit arrangements
31.
Data processing method and system, device, and medium
Provided is a data processing method, which includes: in response to a logical volume receiving a write request, whether a logical address carried in the write request is occupied by a data unit in the logical volume is determined; if not, a data grain which is closest to the size of a data block and is greater than the size of the data block is determined; a new data unit is created in the logical volume by use of the logical address as an initial address and by use of the closest data grain as the length, and a logical address range occupied by the data block in the new data unit is recorded; the data block is written into an underlying storage and a written physical address is returned; and a mapping relationship between the initial address and the physical address is established and saved.
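A small sketch of the write path above, with an assumed set of supported data grains and a stand-in backing_store.write(); the point is the choice of the smallest grain larger than the incoming block and the recorded logical-to-physical mapping.

    import bisect

    # Hypothetical supported data grains (allocation unit sizes), in bytes, sorted ascending.
    DATA_GRAINS = [4 * 1024, 8 * 1024, 64 * 1024, 256 * 1024, 1024 * 1024]

    def pick_grain(block_size):
        # Smallest grain strictly greater than the data block size.
        idx = bisect.bisect_right(DATA_GRAINS, block_size)
        if idx == len(DATA_GRAINS):
            raise ValueError("data block larger than the largest supported grain")
        return DATA_GRAINS[idx]

    def handle_write(logical_addr, data, allocated_units, backing_store, mapping):
        # Allocate a new data unit only when the logical address is not already occupied.
        if any(start <= logical_addr < start + length for start, length in allocated_units):
            return
        grain = pick_grain(len(data))
        allocated_units.append((logical_addr, grain))     # new data unit: initial address + grain length
        physical_addr = backing_store.write(data)         # underlying storage returns the physical address
        mapping[logical_addr] = physical_addr             # save the logical-to-physical mapping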
A method for locating a fault of a server includes: physically connecting a GPIO pin of a BMC to a GPIO pin of target hardware in advance; reading a current state value of the GPIO signal of the target hardware in a power-on and activation process of a mainboard, and loading a corresponding firmware version according to the current state value and a switching condition; and in response to determining that the corresponding firmware version is the debug version, outputting serial port log information of the debug version to the BMC, and in response to determining that the corresponding firmware version is the release version, determining whether to change the state value of the GPIO signal on the connection between the BMC and the target hardware according to a preset normal activation condition and a system event log.
The virtual network performance acceleration method includes: step S1, monitoring whether an OVS invokes a CT mechanism; step S2, if it is detected that the OVS invokes the CT mechanism, triggering a translation rule; and step S3, forwarding a translation message translated by the translation rule. A virtual network performance acceleration apparatus and a storage medium are also disclosed.
H04L 41/0816 - Configuration settings characterised by the conditions triggering a change of settings, the condition being an adaptation, e.g. in response to network events
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network functions or OpenFlow elements
34.
Method and device for overcurrent protection, and power supply circuit
A method for overcurrent protection includes: when receiving, from a DC-DC conversion module, an overcurrent signal representing that an output current of the DC-DC conversion module exceeds an OCP value, controlling a protection module to be disconnected so as to cut off an external insertion apparatus while the on-board circuit remains powered; keeping outputting a first enable signal to the DC-DC conversion module within a third preset duration that is longer than a first preset duration, and after the first preset duration, determining whether the overcurrent signal sent by the DC-DC conversion module is received again; if yes, it indicates that overcurrent occurs in the on-board circuit portion, and the DC-DC conversion module is controlled to be turned off to stop supplying power to the on-board circuit; if no, keeping the protection module disconnected, and continuing to supply power only to the on-board circuit.
H02H 7/12 - Emergency protective circuit arrangements specially adapted for specific types of electric machines or apparatus or for sectionalised protection of cable or line systems, and effecting automatic switching in the event of an undesired change from normal working conditions, for converters; for static converters or rectifiers
H02M 1/32 - Means for protecting converters other than by automatic disconnection
35.
Power supply device with dual power source planes, and server
A power supply device with dual power source planes, and a server. The device includes: a plurality of controllers connected in parallel; a first Power Supply Unit (PSU) power source group and a second PSU power source group, each of which includes two PSU power sources connected in parallel, wherein the first PSU power source group and the second PSU power source group are respectively located on two independent power source planes; and a power source backboard, wherein the power source backboard includes a first copper skin layer and a second copper skin layer that are not connected with each other, the first PSU power source group is connected to the input end of each controller by means of the first copper skin layer, and the second PSU power source group is connected to the input end of each controller by means of the second copper skin layer.
The optical device includes: a first coupler having an adjustable beam splitting ratio; a sensing arm and a programmable modulation arm which are connected to the first coupler; and a second coupler having an input port connected to the sensing arm and the programmable modulation arm and an output port connected to a photodetector. The sensing arm is used for generating, by means of a slot waveguide, a first signal from a first light wave beam outputted by the first coupler. The programmable modulation arm is used for obtaining, by utilizing a grating, a second signal according to a second light wave beam outputted by the first coupler, and the grating is a nano grating generated under a pre-programmed voltage parameter of a programmable piezoelectric transducer of the programmable modulation arm. An electronic device and a programmable photonic integrated circuit are also disclosed herein.
G02F 1/125 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics, for the control of the intensity, phase, polarisation or colour, based on acousto-optical elements, e.g. using variable diffraction by sound waves or similar mechanical vibrations in an optical waveguide structure
G02F 1/00 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics
G02F 1/11 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics, for the control of the intensity, phase, polarisation or colour, based on acousto-optical elements, e.g. using variable diffraction by sound waves or similar mechanical vibrations
37.
Log output method and system for server, and related apparatus
A log output method and system for a server, and a computer-readable storage system and a server. The method includes: after a server is powered on, determining whether a debugging switch in BIOS settings of the server is enabled (S101); if so, initializing a serial port function and making a debugging function take effect (S102); reading a printing function value in the debugging function (S103); if the printing function value is a first preset value, printing log information by means of the serial port function (S104); and if the printing function value is a second preset value, turning off a log output function (S105). The method is conducive to quickly locating a fault abnormality of a server, thereby reducing the server debugging and modification time.
A method, apparatus, and system for creating a training task on an AI training platform, and a computer-readable storage medium. The method includes: dividing nodes of the AI training platform into a plurality of virtual groups in advance, dividing a preset quota of disk space from each node to form a shared storage space of a virtual group, receiving training task configuration information inputted by a user, and determining task configuration conditions according to the training task configuration information; and determining whether there are first nodes satisfying the task configuration conditions among the nodes of the AI training platform, if so, selecting a target node from the first nodes according to a preset filtering method, creating a corresponding training task on the target node, and caching a training dataset obtained from a remote data center into an independent storage space of the target node, and recording a corresponding storage path.
The present application discloses a method for processing a file read-write service. The method includes: in response to receiving a read-write service of a file, determining, based on a file serial number, whether a cache handle of the file is present in an index container; in response to the cache handle of the file not being present in the index container, opening, based on the read-write service, a corresponding handle of the file; encapsulating a flag and a pointer of the corresponding handle and the file serial number so as to obtain a cache handle of the file; adding the cache handle of the file into the index container and a first queue; processing the read-write service by using the corresponding handle of the file; and in response to completion of processing the read-write service, moving the cache handle of the file from the first queue to a second queue.
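The cache-handle bookkeeping above resembles a keyed handle cache with an in-flight queue and an idle queue. The sketch below is an assumed in-memory model (open_file() is a stand-in for opening the real file handle), not the patented data structures.

    from collections import OrderedDict, deque

    class HandleCache:
        def __init__(self, open_file):
            self.open_file = open_file      # serial number -> OS file handle (stand-in)
            self.index = {}                 # index container: file serial number -> cache handle
            self.in_use = deque()           # first queue: handles with a read/write service in flight
            self.idle = OrderedDict()       # second queue: handles whose services have completed

        def acquire(self, serial_no):
            handle = self.index.get(serial_no)
            if handle is None:
                # Open the file and encapsulate flag, handle pointer and serial number as a cache handle.
                handle = {"serial": serial_no, "fh": self.open_file(serial_no), "flag": "open"}
                self.index[serial_no] = handle
            self.idle.pop(serial_no, None)
            self.in_use.append(handle)
            return handle

        def release(self, handle):
            # Service finished: move the cache handle from the first queue to the second queue.
            self.in_use.remove(handle)
            self.idle[handle["serial"]] = handle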
A data I/O processing method includes: obtaining requested data consisting of a plurality of basic data blocks in sequence; grouping the data in sequence to obtain a plurality of segmented data blocks in sequence; sequentially determining whether each segmented data block has a time delay statistical record based on the time for completing the operation processing of a previous basic data block; in response to a time delay statistical record being present for the segmented data block, setting a waiting time period according to the time delay statistical record; sequentially merging, within the waiting time period, the basic data blocks in the segmented data block that have not yet been subjected to the operation processing, until the waiting time period ends or the merged basic data blocks reach the size of the segmentation unit, and then stopping merging; and sending the merged basic data blocks and performing the operation processing.
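One way to read the merging rule above is as a small batching generator: blocks whose segment has a delay record are held back for that long, or until the merged size reaches the segmentation unit. The sketch below makes those assumptions explicit and leaves out the actual I/O submission.

    import time

    def merge_io(blocks, segment_size, delay_record):
        # blocks: iterable of (segment_id, basic_block_bytes) in arrival order.
        # delay_record: segment_id -> waiting time (seconds) from the previous block's completion time.
        pending, pending_seg, deadline, merged_size = [], None, None, 0

        def flush():
            nonlocal pending, pending_seg, deadline, merged_size
            batch, pending, pending_seg, deadline, merged_size = pending, [], None, None, 0
            return batch

        for seg_id, block in blocks:
            if pending and (seg_id != pending_seg or time.monotonic() >= deadline):
                yield flush()
            wait = delay_record.get(seg_id)
            if wait is None:
                yield [block]                        # no statistical record: submit without waiting
                continue
            if not pending:
                pending_seg, deadline = seg_id, time.monotonic() + wait
            pending.append(block)
            merged_size += len(block)
            if merged_size >= segment_size:          # merged blocks reached the segmentation unit size
                yield flush()
        if pending:
            yield flush()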
A measurement correction method and apparatus for a sensor, and a server power supply. The method is applied to a sensor comprising a shunt resistor and a differential amplifier, and comprises: performing compensation verification on the sensor in advance to obtain a compensation coefficient of the sensor (S1); obtaining a voltage signal output by the sensor (S2); and correcting the voltage signal according to an error value of the shunt resistor and the compensation coefficient to obtain a voltage correction signal for use in system optimization management (S3).
A resource allocation method and system after system restart, and a related component. The method comprises: allocating, from a resource pool, a first part of resources to an initialization pre-application module; allocating, from the resource pool, a second part of resources to a cache module, such that the cache module restores cache data to be restored in an initialization stage; and repeating the following steps: determining whether current remaining resources in the resource pool can meet a preparation-stage application requirement of the cache module or whether there is an IO restoration requirement; allocating resources from the resource pool to the cache module or the IO module according to the result of the determination; and determining whether the preparation-stage application requirement and the IO restoration requirement are completely met, and if so, jumping out of the loop.
The disclosure provides a multi-path failover group management method, which includes: acquiring Small Computer System Interface (SCSI) address information of a physical volume, and determining physical link information in the SCSI address information; determining target port group information of a storage to which the physical volume belongs; determining whether port group information corresponding to a storage array subscript value is null; when the port group information is null, creating first subscript port group information corresponding to the storage array subscript value, creating a failover group node according to the physical link information, and adding the physical volume to the failover group node; when the port group information is not null, determining whether the port group information is consistent with the target port group information; and when the port group information is inconsistent with the target port group information, updating the storage array subscript value and reselecting appropriate port group information.
G06F 11/20 - Error detection or correction in data by redundancy in hardware using active fault-masking, e.g. by disconnecting faulty elements or by switching in spare elements
G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
G06F 11/14 - Error detection or correction in the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
44.
Optical neural network, data processing method and apparatus based on same, and storage medium
Disclosed are a data processing method and apparatus based on an optical neural network, a computer-readable storage medium, and an optical neural network. The method includes: acquiring initial optical information and final output optical information of an input optical signal, as well as intermediate input optical information and intermediate output optical information at the input/output ports of the phase shifter, in a case that the beam-splitting ratios of the beam splitters of the two interference optical path structures satisfy a beam-splitting compensation condition; calculating parameters of the internal phase shifters of the two interference optical path structures in a case that the initial optical information and the intermediate input optical information, as well as the intermediate output optical information and the final output optical information, satisfy a preset beam-splitting condition of the optical neural network; and performing data processing by using the optical neural network based on the parameters.
Disclosed in the present application is a home page interface recommendation method for an operation and maintenance platform. In the method, a home page recommendation interface is automatically learned and intelligently generated on the basis of the daily operation behavior habits of different roles and different users, such that both new users and old users can obtain home page recommendations conforming to their positioning; the use efficiency of operation and maintenance personnel on the operation and maintenance platform is thus greatly improved, the operation and maintenance time is shortened, and the cost is reduced. In addition, the present application also provides a home page interface recommendation apparatus and device for an operation and maintenance platform, whose technical effects correspond to those of the method.
Provided is a method for cleaning residual paths on a host end, including: acquiring device information of subordinate devices of a plurality of paths; determining, according to the device information, whether links corresponding to the subordinate devices are all abnormal; acquiring a global identification number and connection information of each subordinate device in a case that the links corresponding to the subordinate devices are all abnormal; when the global identification number is not null and the connection information is successfully acquired, querying a mapping state of a volume corresponding to the global identification number and a mapped host according to the global identification number and the connection information; and when the volume is not in the mapping state or the mapped host is not a target host, deleting the plurality of paths and the subordinate devices.
Provided are a method for predicting computing cluster errors and a related device. The method comprises: classifying error types of a computing cluster according to historical information of the computing cluster; calculating and arranging, at a preset time interval, the number of occurrences of each error type of the computing cluster according to a preset sequence, wherein the preset sequence is that a previous error type directly affects the occurrence of the immediately following error type; calculating, at the preset time interval, the probability of occurrence of each error type and the remaining probability of each error type at the next time interval; and performing error prediction on the computing cluster on the basis of a growth curve function mode according to the probability of occurrence of each error type and the remaining probability of each error type at the next time interval.
A power supply control method is applied to the server, where the server includes a plurality of power supply modules. The power supply control method includes a detection step, which includes: detecting whether there is an abnormal power supply module among a plurality of power supply modules; if so, turning off all the power supply modules, and recording the number of times of turning off; if the number of times of turning off all the power supply modules is less than a pre-set number of times, powering on all the power supply modules again, and cyclically executing the detection step; and if the number of times of turning off all the power supply modules is greater than or equal to the pre-set number of times, maintaining all the power supply modules in a turned-off state.
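The retry-bounded detection step above maps onto a short control loop. The callbacks is_abnormal/power_off/power_on are stand-ins for the server's real PSU interfaces, and the polling interval is arbitrary.

    import time

    def power_supply_control(modules, max_shutdowns, is_abnormal, power_off, power_on, poll_s=1.0):
        shutdowns = 0
        while True:
            if not any(is_abnormal(m) for m in modules):
                time.sleep(poll_s)           # all supplies healthy: keep executing the detection step
                continue
            for m in modules:                # abnormal supply found: turn off every power supply module
                power_off(m)
            shutdowns += 1                   # record the number of times of turning off
            if shutdowns >= max_shutdowns:
                return False                 # retry budget exhausted: stay in the turned-off state
            for m in modules:                # below the preset count: power everything on and re-detect
                power_on(m)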
A test method and a multi-processor SOC chip are provided. The method includes: parsing a first command line in a host system input buffer by a host system to obtain a first command and a first parameter corresponding to the first command line, when the first command is a command of the host system in a host system command set, executing the first command by the host system, and when the first command is a command corresponding to a subsystem, sending, by the host system, the first parameter to a subsystem input buffer corresponding to the subsystem as a second command line; and parsing the second command line by a subsystem to obtain a second command and a second parameter corresponding to the second command line, and when the second command is a command in a subsystem command set of the subsystem, executing the second command by the subsystem.
A hard disk snapshot method and apparatus based on an Openstack platform. The method includes: initiating a cloud hard disk creation request based on a snapshot, and determining, on the basis of the type of a new cloud hard disk, a second storage backend that is about to accommodate the new cloud hard disk (S101); in response to the second storage backend being different from a first storage backend where an old cloud hard disk that stores the snapshot is located, mounting the old cloud hard disk on a host machine (S103); creating the new cloud hard disk on the second storage backend on the basis of the type of the new cloud hard disk, and mounting the new cloud hard disk on the host machine (S105); and replicating the snapshot from the old cloud hard disk to the new cloud hard disk through the host machine (S107).
Provided are a multi-modal model training method, apparatus and device, and a storage medium. The method includes the following steps: obtaining a training sample set, and training a multi-modal model for a plurality of rounds by successively using each training sample pair in the training sample set: during use of any one of the training sample pairs for training, first obtaining an image feature of a target visual sample, and then determining whether back translation needs to be performed on a target original text; when back translation needs to be performed on the target original text, performing the corresponding back translation to obtain a target back-translated text, and obtaining a text feature of the target back-translated text; and training the multi-modal model based on the image feature and the text feature.
G06V 10/774 - Generation of training pattern sets; Processing of image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration and data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Bootstrap methods, e.g. bagging or boosting
G06F 40/58 - Use of machine translation, e.g. for multi-lingual retrieval, to provide client devices with server-side translation, or for real-time translation
52.
METHOD, APPARATUS AND DEVICE FOR CONSTRUCTING FPGA-BASED PROTOTYPE VERIFICATION PLATFORM AND MEDIUM
Provided are a method, apparatus and device for constructing an FPGA-based prototype verification platform, and a medium. The method includes: converting, based on a set constraint condition, codes for constructing the FPGA-based prototype verification platform into a gate-level netlist; setting a requirement defined by preset parameters based on the value range of each parameter when timing closure is met, and, when an operation result of the gate-level netlist does not meet the requirement defined by the preset parameters, performing physical optimization on the gate-level netlist according to a set parameter optimization rule, where the physical optimization process may be regarded as the process of optimizing the placement of elements in the gate-level netlist; and performing routing on the elements of the gate-level netlist that meets the requirement defined by the preset parameters, or of the gate-level netlist subjected to the physical optimization, to obtain the FPGA-based prototype verification platform.
Provided are a blockchain data storage method, a system, a device, and a readable storage medium. The method includes: obtaining, by a block file system, a target block serial number and target block contents of a target block, the block file system including a directory region and a data region, the size of each cluster in the data region being the same as the block size of the blockchain, and the directory region storing mapping relationships between block serial numbers and cluster addresses; sequentially allocating a target cluster address to the target block, and recording, in the directory region, a target mapping relationship between the target block serial number and the target cluster address; and sequentially writing the target block contents into the data region according to the target cluster address.
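Because each cluster is exactly one block, the directory region reduces to a serial-number-to-cluster map and the data region to an append-only array. The sketch below is an in-memory toy built on that assumption; persistence and integrity checks are omitted.

    class BlockFileSystem:
        def __init__(self, block_size):
            self.block_size = block_size
            self.directory = {}              # directory region: block serial number -> cluster address
            self.data = bytearray()          # data region: block-sized clusters laid out sequentially
            self.next_cluster = 0

        def append_block(self, serial_no, contents):
            if len(contents) != self.block_size:
                raise ValueError("block contents must match the configured block size")
            cluster_addr = self.next_cluster
            self.directory[serial_no] = cluster_addr     # record the serial-number-to-cluster mapping
            self.data.extend(contents)                   # write the block contents sequentially
            self.next_cluster += 1
            return cluster_addr

        def read_block(self, serial_no):
            addr = self.directory[serial_no]
            start = addr * self.block_size
            return bytes(self.data[start:start + self.block_size])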
A traffic monitoring method for an OpenStack tenant network, including: detecting a traffic in/out state of a first virtual machine by using a callback function; when it is detected that the first virtual machine transmits first traffic to a second virtual machine in the same host, matching a first data flow corresponding to the first traffic by using target flow table entries in an integrated bridge, and transmitting a copied first data flow to a traffic monitoring platform; when it is detected that the first virtual machine transmits second traffic to a third virtual machine in a remote host, matching a second data flow corresponding to the second traffic by using the target flow table entries, and transmitting a copied second data flow to a physical bridge; and when the integrated bridge receives a third data flow, transmitting the third data flow to the traffic monitoring platform.
G06F 15/173 - Communication entre processeurs utilisant un réseau d'interconnexion, p. ex. matriciel, de réarrangement, pyramidal, en étoile ou ramifié
H04L 43/026 - Capture des données de surveillance en utilisant l’identification du flux
H04L 43/0817 - Surveillance ou test en fonction de métriques spécifiques, p. ex. la qualité du service [QoS], la consommation d’énergie ou les paramètres environnementaux en vérifiant la disponibilité en vérifiant le fonctionnement
The present application discloses a data reconstruction method based on erasure coding, an apparatus, a device and a storage medium. The method comprises the following steps: acquiring data offset information of incremental data in a data object; acquiring corresponding data segments from a plurality of source OSDs according to the data offset information, wherein the source OSDs are target OSDs storing the incremental data among the respective OSDs storing data objects based on erasure coding, and the quantity of the source OSDs is the same as the quantity of data disks corresponding to the erasure coding; and integrating the data segments into an erasure incremental segment, and writing the erasure incremental segment into a to-be-reconstructed OSD, which has no incremental data stored therein, among the respective OSDs. The method reduces the data volume of data reconstruction and thus ensures the overall efficiency of data reconstruction. In addition, the present application also discloses a data reconstruction apparatus based on erasure coding, a device and a storage medium, with the same beneficial technical effects as above.
G06F 11/00 - Détection d'erreurs; Correction d'erreurs; Contrôle de fonctionnement
G06F 11/10 - Détection ou correction d'erreur par introduction de redondance dans la représentation des données, p. ex. en utilisant des codes de contrôle en ajoutant des chiffres binaires ou des symboles particuliers aux données exprimées suivant un code, p. ex. contrôle de parité, exclusion des 9 ou des 11
G06F 11/14 - Détection ou correction d'erreur dans les données par redondance dans les opérations, p. ex. en utilisant différentes séquences d'opérations aboutissant au même résultat
H03M 13/37 - Méthodes ou techniques de décodage non spécifiques à un type particulier de codage prévu dans les groupes
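The data-movement pattern of the reconstruction above can be sketched as follows; the in-memory dictionaries standing in for OSDs and the helper names are assumptions for illustration, and the erasure-code arithmetic itself is out of scope.

```python
# Hedged sketch of the flow: fetch segments at the incremental offsets from the
# source OSDs, integrate them into one incremental segment, and write only that
# delta to the OSD being rebuilt.

def reconstruct_incremental(source_osds, offsets, segment_len, target_osd):
    """source_osds: list of dicts keyed by offset (one per data disk of the erasure code)."""
    for offset in offsets:                         # data offset info of the incremental data
        segments = [osd[offset][:segment_len] for osd in source_osds]
        erasure_segment = b"".join(segments)       # integrate segments into one incremental segment
        target_osd[offset] = erasure_segment       # write only this delta to the to-be-reconstructed OSD

src = [{0: b"AAAA"}, {0: b"BBBB"}]                 # two source OSDs holding incremental data
dst = {}                                           # to-be-reconstructed OSD (no incremental data yet)
reconstruct_incremental(src, offsets=[0], segment_len=4, target_osd=dst)
print(dst)                                         # {0: b'AAAABBBB'}
```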
56.
PROCESSING METHOD, APPARATUS, AND SYSTEM FOR BRUTE FORCE HOT UNPLUG OPERATION ON SOLID STATE DISK, AND MEDIUM
The present application discloses a processing method for a brute force hot unplug operation on a U.2 NVMe solid-state disk, including: predefining a short signal pin for detecting whether an NVMe solid-state disk is in place; presetting an operation of calling an SMI program to be triggered according to input state change information of a GPIO of a PCH chip, where an input state value of the GPIO is determined according to a connection state of the short signal pin and a U.2 slot; calling the SMI program when a change in an input state of the GPIO is detected, and reading the input state value of the GPIO; and breaking a link between a PCIe root bridge and the NVMe solid-state disk by using the SMI program if the input state value of the GPIO is the value indicating that the NVMe solid-state disk is not in place.
G06F 11/07 - Réaction à l'apparition d'un défaut, p. ex. tolérance de certains défauts
G06F 11/14 - Détection ou correction d'erreur dans les données par redondance dans les opérations, p. ex. en utilisant différentes séquences d'opérations aboutissant au même résultat
A method, system and device for analyzing an error rate of an MLC chip. The method includes: selecting data blocks from the MLC chip, and performing erase-write operations on the data blocks; after the erase-write operations are completed, reading each group of dual-bits corresponding to each of the first pages and the second pages of the data blocks, and determining the bit state of each group of dual-bits; counting a first total quantity of all of the dual-bits, corresponding to a target page, that are in a target bit state indicating that the data writing is erroneous, to obtain a first error rate of the target page in the target bit state; and counting a second total quantity of all of the dual-bits, corresponding to the data blocks, that are in the target bit state.
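A minimal sketch of the counting step, assuming dual-bit states are read back as two-character strings; the state encoding and variable names are illustrative only.

```python
# Illustrative-only sketch: count how many dual-bit groups of one page landed in a
# "target" bit state that indicates an erroneous write, then derive an error rate.

from collections import Counter

def error_rate(dual_bits, target_state):
    """dual_bits: list of 2-bit strings read back from one page of the tested blocks."""
    counts = Counter(dual_bits)
    erroneous = counts[target_state]          # total quantity of dual-bits in the target state
    return erroneous / len(dual_bits) if dual_bits else 0.0

first_page_bits = ["11", "01", "11", "10", "01"]       # hypothetical read-back of a first page
print(error_rate(first_page_bits, target_state="01"))  # 0.4
```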
A method for deploying a deep learning system, including: defining a node group template for a first node group and a second node group, the node group template including indications of components installed by the first node group and components installed by the second node group; defining a cluster template for a device group based on the node group template, the cluster template including indications of the number of first nodes and the number of second nodes; validating whether the cluster template is rationally configured, and creating, based on the cluster template, virtual machines that correspond to the first nodes and the second nodes and each have an artificial intelligence framework, in response to the cluster template being rationally configured; and configuring a communication benchmark for the virtual machines, and importing a deep learning image into the artificial intelligence frameworks of the virtual machines respectively.
G06F 9/50 - Allocation de ressources, p. ex. de l'unité centrale de traitement [UCT]
G06F 9/455 - Émulation; Interprétation; Simulation de logiciel, p. ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
59.
Data query method and system, heterogeneous acceleration platform, and storage medium
Provided is a data query method, applied to a heterogeneous acceleration platform. The data query method includes: determining operators in a database management system, and implementing, in a parallel processor, functions corresponding to the operators (S101); when an SQL query statement is received, converting, by using a CPU, the where clause in the SQL query statement into a data structure including a binary tree and a linked list (S102); controlling the CPU to generate an operation code stream of the data structure according to node information (S103); and performing, by using the parallel processor, a screening operation corresponding to the operation code stream on records in the database management system to obtain a query result conforming to the where clause (S104).
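The S102-S104 pipeline can be pictured with the toy Python below: a small binary tree stands in for the parsed where clause, a post-order traversal emits an operation code stream, and a stack machine simulates the parallel processor's screening step. Opcode names and node fields are assumptions.

```python
# Rough sketch under assumed data structures; the opcodes are illustrative.

class Node:
    def __init__(self, op, left=None, right=None, value=None):
        self.op, self.left, self.right, self.value = op, left, right, value

def to_opcode_stream(node):
    """Post-order traversal: operands first, operator last (S103)."""
    if node.op == "COL":
        return [("PUSH_COL", node.value)]
    if node.op == "CONST":
        return [("PUSH_CONST", node.value)]
    return to_opcode_stream(node.left) + to_opcode_stream(node.right) + [(node.op, None)]

def screen(records, opcodes):
    """Stack-machine evaluation standing in for the parallel processor (S104)."""
    result = []
    for rec in records:
        stack = []
        for op, arg in opcodes:
            if op == "PUSH_COL": stack.append(rec[arg])
            elif op == "PUSH_CONST": stack.append(arg)
            elif op == "GT": b, a = stack.pop(), stack.pop(); stack.append(a > b)
            elif op == "EQ": b, a = stack.pop(), stack.pop(); stack.append(a == b)
            elif op == "AND": b, a = stack.pop(), stack.pop(); stack.append(a and b)
        if stack.pop():
            result.append(rec)
    return result

# where age > 30 AND city == 'NY'
tree = Node("AND",
            Node("GT", Node("COL", value="age"), Node("CONST", value=30)),
            Node("EQ", Node("COL", value="city"), Node("CONST", value="NY")))
rows = [{"age": 35, "city": "NY"}, {"age": 25, "city": "NY"}]
print(screen(rows, to_opcode_stream(tree)))        # [{'age': 35, 'city': 'NY'}]
```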
A method for realizing a Hadamard product, a device and a storage medium. The method includes: acquiring a plurality of to-be-treated optical signals with unequal wavelengths; inputting the to-be-treated optical signals into a wavelength division multiplexer; feeding, by using the wavelength division multiplexer, the to-be-treated optical signals to a micro-ring-resonator component, wherein the micro-ring-resonator component includes a plurality of micro-ring-resonator groups, each of which is formed by two micro-ring resonators with equal radii; and applying a corresponding electric current to the micro-ring-resonator component, to obtain a result of the Hadamard product according to an outputted light intensity.
G06N 3/067 - Réalisation physique, c.-à-d. mise en œuvre matérielle de réseaux neuronaux, de neurones ou de parties de neurone utilisant des moyens optiques
61.
Data management method and system for a security protection terminal, device and storage medium
Provided are a data management method and system for a security protection terminal, a device and a medium. The method includes: respectively generating an initial universally unique identifier for each security protection terminal; determining a target search field, calculating corresponding search field identifier information, and writing the search field identifier information into the initial universally unique identifier to obtain a target universally unique identifier; storing data corresponding to the security protection terminals in corresponding sub-databases based on a horizontal partitioning and modulus mode according to the target universally unique identifier; and receiving a data search request, and locating the corresponding sub-databases based on the horizontal partitioning and modulus mode according to the target search field identifier information, thereby facilitating subsequent data search in the sub-databases.
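One possible reading of the UUID/sharding scheme is sketched below; reserving the last four hex digits of the UUID for the search field identifier and using CRC32 for that identifier are assumptions made for the example, not details from the entry.

```python
# Minimal sketch: fold a search-field identifier into a terminal's UUID, then pick a
# sub-database by horizontal partitioning + modulus.

import uuid, zlib

SHARD_COUNT = 8

def target_uuid(search_field: str) -> str:
    base = uuid.uuid4().hex                                 # initial universally unique identifier
    field_id = zlib.crc32(search_field.encode()) & 0xFFFF   # search field identifier information
    return base[:-4] + format(field_id, "04x")              # embed the identifier into the UUID

def shard_index(target: str) -> int:
    field_id = int(target[-4:], 16)                         # recover the identifier from the UUID
    return field_id % SHARD_COUNT                           # horizontal partitioning + modulus

uid = target_uuid("device_serial")
print(uid, "-> sub-database", shard_index(uid))
```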
An electrostatic interference processing method, apparatus, and device, and a readable storage medium are provided. The method includes: receiving input data in real time, and determining whether an electrostatic interference signal is present in the input data; in response to determining that the electrostatic interference signal is present in the input data, interrupting reception of the input data; determining whether an interference frequency of the electrostatic interference signal is lower than a preset value; and in response to determining that the interference frequency of the electrostatic interference signal is lower than the preset value, continuing to receive the input data.
G06F 11/00 - Détection d'erreurs; Correction d'erreurs; Contrôle de fonctionnement
G06F 11/07 - Réaction à l'apparition d'un défaut, p. ex. tolérance de certains défauts
G06F 11/14 - Détection ou correction d'erreur dans les données par redondance dans les opérations, p. ex. en utilisant différentes séquences d'opérations aboutissant au même résultat
63.
Method and system for storage management, storage medium and device
The present disclosure provides a method and system for storage management, a storage medium and a device. The method includes: networking hard disks in a storage pool with several controllers via network hard disk enclosures, respectively sending hard disk information to proxy drivers which are pre-configured in the controllers, and selecting one of the controllers as a main controller; respectively sending the hard disk information to cluster drivers which are pre-configured in the respective controllers; acquiring the hard disk information from each cluster driver via the main controller, and sending the total hard disk information to each cluster driver; in response to receiving a read/write request, acquiring information of a logical unit space corresponding to the request, and allocating an idle hard disk in the storage pool for the logical unit space according to the total hard disk information; and processing the read/write request in parallel on the idle hard disk.
The method includes: acquiring a plurality of to-be-treated optical signals with unequal wavelengths; inputting the to-be-treated optical signals into a micro-ring-resonator array, wherein the micro-ring-resonator array includes a plurality of micro-ring resonators that are connected in series; applying a corresponding electric current to the micro-ring-resonator array, to adjust a transfer function of each of the micro-ring resonators to reach a target value; and feeding an optical signal outputted by the micro-ring-resonator array into a photodiode, to obtain an operation result of the average pooling of the neural network.
G06N 3/067 - Réalisation physique, c.-à-d. mise en œuvre matérielle de réseaux neuronaux, de neurones ou de parties de neurone utilisant des moyens optiques
G02B 6/42 - Couplage de guides de lumière avec des éléments opto-électroniques
A large-scale K8s cluster monitoring method, an apparatus, a computer device, and a readable storage medium. The method includes: performing classification on data sources in a cluster on the basis of a monitoring metric configuration means, and creating a data list on the basis of the data sources and the monitoring metric configuration means corresponding to said data sources; in response to receiving a command to monitor a first monitoring metric set of a first data source, obtaining a monitoring metric configuration means corresponding to the first data source on the basis of the data list; performing configuration on a monitoring metric on the basis of the first monitoring metric set and by means of the obtained monitoring metric configuration means; and obtaining from the first data source a monitoring metric in the first monitoring metric set every preset amount of time, and performing monitoring on the monitoring metric.
An inference service management method, apparatus, and system for an inference platform, and a medium, the method comprising: detecting, according to an inference service record in a database, whether there is an inference service corresponding to the inference service record in a server (S110); and if not, restoring the corresponding inference service according to the inference service record (S120). According to the method, an inference service in a server is detected according to an inference service record in a database to determine whether there is an inference service corresponding to the inference service record in the server; if not, the inference service record is inconsistent with a real inference service, and the corresponding inference service may then be restored according to the inference service record.
G06F 11/14 - Détection ou correction d'erreur dans les données par redondance dans les opérations, p. ex. en utilisant différentes séquences d'opérations aboutissant au même résultat
G06N 5/04 - Modèles d’inférence ou de raisonnement
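The reconcile loop implied by steps S110/S120 might look like the sketch below, where list_records, list_running and restore are hypothetical callables injected by the caller.

```python
# Sketch of the reconcile loop; real database and server APIs are abstracted away.

def reconcile(list_records, list_running, restore):
    running = set(list_running())                    # inference services actually present (S110)
    for record in list_records():
        if record["name"] not in running:            # record has no matching real service
            restore(record)                          # restore it from the database record (S120)

reconcile(
    list_records=lambda: [{"name": "resnet50", "replicas": 2}],
    list_running=lambda: [],                         # server lost the service, e.g. after a restart
    restore=lambda rec: print("restoring", rec["name"]),
)
```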
67.
IO path determination method and apparatus, device and readable storage medium
An IO path determination method and apparatus, and a device and a readable storage medium. The method comprises: determining a target volume of an IO request, and acquiring a path level of the target volume; when the path level is a host path level, sending the IO request to a preset preferred controller; when the path level is a back-end path level, selecting a target controller from the controllers by using a consistent hashing algorithm, and sending the IO request to the target controller; and when the path level is a fault level, determining that a fault occurs in the IO path. An IO path is determined by setting different path levels for a volume in conjunction with a consistent hashing algorithm, such that the loads of the controllers in a storage system can be effectively balanced.
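For the back-end path level, a consistent-hash ring over controllers could be used as sketched here; the virtual-node count, hash function and controller names are illustrative, not taken from the entry.

```python
# Compact illustration of the back-end-path branch: a consistent-hash ring over
# controllers picks the target controller for a volume's IO.

import bisect, hashlib

class HashRing:
    def __init__(self, controllers, vnodes=64):
        self.ring = sorted(
            (self._h(f"{c}#{i}"), c) for c in controllers for i in range(vnodes)
        )
        self.keys = [k for k, _ in self.ring]

    @staticmethod
    def _h(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def pick(self, volume_id):
        idx = bisect.bisect(self.keys, self._h(volume_id)) % len(self.ring)
        return self.ring[idx][1]

def route_io(path_level, volume_id, ring, preferred="ctrl-A"):
    if path_level == "host":
        return preferred                        # host path level: preset preferred controller
    if path_level == "backend":
        return ring.pick(volume_id)             # back-end path level: consistent hashing
    raise RuntimeError("fault level: IO path failure")

ring = HashRing(["ctrl-A", "ctrl-B", "ctrl-C"])
print(route_io("backend", "vol-42", ring))
```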
The present application provides a method for cross-node cloning of a storage volume, a device, an apparatus and a storage medium. The method includes: creating AEP storage in a node of a cluster by using pmem-csi, and monitoring whether a clone volume of the AEP storage exists in other nodes of the cluster; in response to a clone volume of the AEP storage being detected in the other nodes of the cluster, stopping the AEP storage and the clone volume; creating a snapshot of the AEP storage, and recovering the AEP storage; and starting the clone volume after transmitting snapshot data of the AEP storage to the clone volume. By using the solution of the present application, the problem of performing application migration and data backup of an AEP storage volume used in a cloud platform may be solved.
G06F 16/20 - Recherche d’informations; Structures de bases de données à cet effet; Structures de systèmes de fichiers à cet effet de données structurées, p. ex. de données relationnelles
G06F 11/14 - Détection ou correction d'erreur dans les données par redondance dans les opérations, p. ex. en utilisant différentes séquences d'opérations aboutissant au même résultat
G06F 16/27 - Réplication, distribution ou synchronisation de données entre bases de données ou dans un système de bases de données distribuées; Architectures de systèmes de bases de données distribuées à cet effet
A webpage display method, including: monitoring loading conditions of a display area of a home page, performing a node insertion measure to obtain a floating display area, and then executing a floating item configuration measure in the display area of the home page; monitoring a control action of a control device in the display area of the home page, and performing an information windowing measure to obtain floating webpages; and monitoring the number of floating webpages, monitoring the control action of the control device in the floating display area, and then performing an action feedback measure. The present method may obtain the floating display area by means of initialization in different browsers, feed back mouse click actions of a user on a floating page by means of different measures, and display content of any type by means of the floating page in a floating manner, thereby greatly increasing the efficiency with which a browser processes multiple tasks.
G06F 16/955 - Recherche dans le Web utilisant des identifiants d’information, p. ex. des localisateurs uniformisés de ressources [uniform resource locators - URL]
G06F 16/957 - Optimisation de la navigation, p. ex. mise en cache ou distillation de contenus
G06F 3/0482 - Interaction avec des listes d’éléments sélectionnables, p. ex. des menus
70.
Task allocation method and system for solid state drive, electronic device, and storage medium
Provided are a task allocation method and system for a solid state drive, an electronic device, and a storage medium. The task allocation method includes: dividing data managers into a plurality of data management groups; determining a service scenario according to working states of all sibling master data managers; when the service scenario is a high-bandwidth scenario, controlling the sibling master data managers and sibling slave data managers to work in corresponding Central Processing Unit (CPU) cores, and allocating tasks to all the sibling master data managers and all the sibling slave data managers; and when the service scenario is a high-quality-of-service scenario, controlling the sibling master data managers to work in corresponding CPU cores, and allocating tasks to all the sibling master data managers.
The present application discloses a pre-reading method and system of a kernel client, and a computer-readable storage medium. The method includes: receiving a reading request for a file and determining whether the reading of the file is continuous; if the reading of the file is discontinuous, generating a head node of a file inode, and constructing a linked list embedded in the head node; determining whether the file includes a reading rule for the file, and if the file includes the reading rule for the file, acquiring, based on the reading rule, the number of reading requests for the file and a reading offset corresponding to each request, generating a map route based on the number of reading requests and corresponding reading offsets, and storing the map route in the linked list; and executing pre-reading based on the linked list.
Provided are a bus exception handling method and apparatus, an electronic device and a computer-readable storage medium. The method includes: respectively obtaining, from multiple target buses, multiple pieces of target data corresponding to the multiple target buses, where the multiple target buses include a master bus and one or more candidate buses, and the target data corresponding to the master bus is referred to as first data; determining whether the first data satisfies a bus exception condition, where the bus exception condition is a data bus marker exception condition or a data content exception condition; and in response to determining that the first data satisfies the bus exception condition, selecting a target candidate bus in a healthy state as a new master bus, and updating local bus data.
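A hedged sketch of the failover decision follows; the field names used for the bus-marker and data-content checks are assumptions standing in for the patent's exception conditions.

```python
# Illustrative failover logic: if the master bus's data satisfies an exception
# condition, promote a healthy candidate bus and update the local bus data.

def select_master(master, candidates, bus_data):
    first = bus_data[master]                          # target data from the current master bus
    marker_bad = first.get("marker") != "OK"          # data bus marker exception condition
    content_bad = first.get("payload") is None        # data content exception condition
    if not (marker_bad or content_bad):
        return master, bus_data                       # master bus is fine, keep local data
    for cand in candidates:
        if bus_data[cand].get("healthy"):             # pick a candidate bus in a healthy state
            bus_data["local"] = bus_data[cand]        # update local bus data from the new master
            return cand, bus_data
    raise RuntimeError("no healthy candidate bus available")

data = {
    "bus0": {"marker": "ERR", "payload": b"\x00", "healthy": False},
    "bus1": {"marker": "OK", "payload": b"\x2a", "healthy": True},
}
print(select_master("bus0", ["bus1"], data)[0])       # bus1 becomes the new master
```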
The present disclosure discloses a method for vector reading-writing, a vector-register system, a device and a medium. When a vector-writing instruction is obtained, a vector-register controller converts a to-be-written-vector address space into a to-be-written-vector-register-file bit address, and, for a nonstandard vector, a nonstandard-vector converting unit converts the nonstandard vector into a to-be-written nonstandard vector before writing is performed, thereby realizing the saving of vector data of any format. When a vector-reading instruction is obtained, the vector-register controller converts, according to the to-be-read width and the to-be-read length, the to-be-read-vector address space into a to-be-read-vector-register-file bit address before reading is performed, thereby realizing the reading of vector data of any format.
The present application discloses a method for modifying an internal configuration of a virtual machine, a system and a device, wherein the method is applied to a virtual machine with a proxy service installed therein, and the proxy service is configured for, after the proxy service itself is started up, sending a data request to a preset IP address via a virtual network card corresponding to the virtual machine. The method includes: when there is a target virtual network card sending a data request to the preset IP address, determining, according to a predetermined corresponding relation between virtual network cards and virtual machines, a target virtual machine corresponding to the target virtual network card; and obtaining, from a database, target configuration data corresponding to the target virtual machine.
G06F 9/455 - Émulation; Interprétation; Simulation de logiciel, p. ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
H04L 41/046 - Architectures ou dispositions de gestion de réseau comprenant des agents de gestion de réseau ou des agents mobiles à cet effet
H04L 41/0895 - Configuration de réseaux ou d’éléments virtualisés, p. ex. fonction réseau virtualisée ou des éléments du protocole OpenFlow
H04L 43/08 - Surveillance ou test en fonction de métriques spécifiques, p. ex. la qualité du service [QoS], la consommation d’énergie ou les paramètres environnementaux
75.
Network switching method and apparatus, electronic device, and storage medium
The present disclosure discloses a network switching method and apparatus, an electronic device, and a computer-readable storage medium, the method including: creating a smart monitor used for monitoring port changes; if the smart monitor monitors that a smart port for deploying a network is created, performing network deployment of a bare metal server (BMS) node by using the smart port, wherein a smart network interface card is installed in the BMS node, and the smart network interface card generates a first bare metal port at the BMS node and generates a second bare metal port corresponding to the first bare metal port in an operating system of the smart network interface card; and if the smart monitor monitors that a neutron port of a neutron network is updated, adding the second bare metal port into a network bridge of the BMS node.
G06F 15/173 - Communication entre processeurs utilisant un réseau d'interconnexion, p. ex. matriciel, de réarrangement, pyramidal, en étoile ou ramifié
H04L 41/0816 - Réglages de configuration caractérisés par les conditions déclenchant un changement de paramètres la condition étant une adaptation, p. ex. en réponse aux événements dans le réseau
H04L 41/40 - Dispositions pour la maintenance, l’administration ou la gestion des réseaux de commutation de données, p. ex. des réseaux de commutation de paquets en utilisant la virtualisation des fonctions réseau ou ressources, p. ex. entités SDN ou NFV
H04L 43/20 - Dispositions pour la surveillance ou le test de réseaux de commutation de données le système de surveillance ou les éléments surveillés étant des entités virtualisées, abstraites ou définies par logiciel, p. ex. SDN ou NFV
76.
Host discovery and addition method and apparatus in data center, and device and medium
Disclosed are a host discovery and addition method and apparatus in a data center, and a device and a medium. The method includes: scanning to discover computing nodes to be added; sending, through IPv6 multicast, a discovery message to the scanned computing nodes, and receiving discovery response messages that are sent by said computing nodes through IPv6 unicast after the discovery message passes verification; when the discovery response messages pass verification and no IP address is configured for said computing nodes, sorting BMC IPs in the discovery response messages, and allocating IP addresses according to the order of the BMC IPs, such that the IP addresses that correspond to adjacent BMC IPs are consecutive; and after the IP addresses are configured, sending an addition message to said computing nodes, receiving addition response messages, and adding said computing nodes to a data center that is managed by a management node.
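The IP-allocation step alone can be illustrated as below: reported BMC IPs are sorted numerically and consecutive host addresses are handed out in that order. The management subnet and starting address are assumptions, and the same logic applies to IPv6 addresses.

```python
# Illustrative sketch of the allocation step: adjacent BMC IPs get adjacent host IPs.

import ipaddress

def allocate_ips(bmc_ips, first_host_ip="10.0.0.100"):
    ordered = sorted(bmc_ips, key=lambda ip: int(ipaddress.ip_address(ip)))
    start = ipaddress.ip_address(first_host_ip)
    return {bmc: str(start + i) for i, bmc in enumerate(ordered)}

print(allocate_ips(["192.168.1.12", "192.168.1.10", "192.168.1.11"]))
# {'192.168.1.10': '10.0.0.100', '192.168.1.11': '10.0.0.101', '192.168.1.12': '10.0.0.102'}
```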
Disclosed in the present disclosure are a high-dynamic-response switching power supply and a server. The switching power supply includes a first output path, a second output path, a resonant loop and a resonant switch. The first output path includes a first field-effect transistor, a flying capacitor, and a primary coil of a first trans-inductor (TL), which are sequentially connected in series; the second output path includes a fourth field-effect transistor and a primary coil of a second TL, which are connected in series; the resonant loop includes a secondary coil of the first TL, a secondary coil of the second TL and a resonant inductor, which are annularly connected, and the secondary coil of the first TL and the secondary coil of the second TL each generate an inductive current in response to a current change in the corresponding primary coil thereof; and the resonant switch includes a second field-effect transistor and a third field-effect transistor. The present disclosure may respond to a high-power dynamic load requirement at high speed while reducing hardware materials and costs.
H02M 3/155 - Transformation d'une puissance d'entrée en courant continu en une puissance de sortie en courant continu sans transformation intermédiaire en courant alternatif par convertisseurs statiques utilisant des tubes à décharge avec électrode de commande ou des dispositifs à semi-conducteurs avec électrode de commande utilisant des dispositifs du type triode ou transistor exigeant l'application continue d'un signal de commande utilisant uniquement des dispositifs à semi-conducteurs
G06F 1/26 - Alimentation en énergie électrique, p. ex. régulation à cet effet
The present application discloses a tool-free mounting structure for a fan connector. The mounting structure includes a fan frame and a fan connector, the fan connector including a male connector and a female connector, a front side wall of the fan frame being provided with a first avoiding hole, and a front end of the female connector extending to a front side of the fan frame through the first avoiding hole. A rear end of the female connector is provided with a limiting structure, the female connector is provided with a first limiting column, the front side wall of the fan frame is sandwiched between the first limiting column and the limiting structure, and the front side wall of the fan frame is provided with a second avoiding hole configured to allow the first limiting column to pass through.
H01R 13/631 - Moyens additionnels pour faciliter l'engagement ou la séparation des pièces de couplage, p. ex. moyens pour aligner ou guider, leviers, pression de gaz pour l'engagement uniquement
The present disclosure discloses a quick detachable hard disk bracket. When the cross beam hinging rod and the enclosure frame driving rod rotate upwards or downwards, the enclosure frame is driven to transversely move opposite to the cross beam. The locking hook is slidably assembled at the cross beam in a transverse direction. The locking hook is hinged with a hook driving rod. When the cross beam hinging rod and the enclosure frame driving rod rotate upwards or downwards, the locking hook is driven to transversely move opposite to the cross beam. When the cross beam hinging rod and the enclosure frame driving rod rotate downwards, the locking hook and the enclosure frame move in opposite directions respectively, and the locking hook and the enclosure frame move out of the two ends of the cross beam respectively.
Disclosed are a method and apparatus for writing data from an Advanced extensible Interface (AXI) bus to an On-chip Peripheral Bus (OPB), a method and apparatus for reading the data from the AXI bus to the OPB, an electronic device and a non-transitory computer readable storage medium. The method for writing the data includes the following steps: receiving AXI write data sent by the AXI bus (S101); storing the AXI write data into an AXI write cache (S102); performing timing conversion from an AXI bus protocol to an OPB protocol on the AXI write data to obtain OPB write data (S103); and exporting the OPB write data from the AXI write cache to the OPB (S104). By applying the method for writing the data, data interaction between the AXI bus and the OPB is completed, the cost is reduced, and the project development efficiency is improved.
G06F 13/12 - Commande par programme pour dispositifs périphériques utilisant des matériels indépendants du processeur central, p. ex. canal ou processeur périphérique
G06F 13/42 - Protocole de transfert pour bus, p. ex. liaison; Synchronisation
G06F 30/398 - Vérification ou optimisation de la conception, p. ex. par vérification des règles de conception [DRC], vérification de correspondance entre géométrie et schéma [LVS] ou par les méthodes à éléments finis [MEF]
81.
Huffman correction encoding method and system, and relevant components
The present disclosure discloses a method for Huffman correction and encoding, a system and relevant components, wherein the method includes: obtaining a target data block in a target file; constructing a Huffman tree by using the target data block; determining whether a depth of the Huffman tree exceeds a preset value; and when the depth of the Huffman tree does not exceed the preset value, by using the Huffman tree, generating a first code table and encoding the target data block; or when the depth of the Huffman tree exceeds the preset value, by using a standby code table, encoding the target data block; wherein the standby code table is a code table of an encoded data block in the target file.
H03M 13/00 - Codage, décodage ou conversion de code pour détecter ou corriger des erreurs; Hypothèses de base sur la théorie du codage; Limites de codage; Méthodes d'évaluation de la probabilité d'erreur; Modèles de canaux; Simulation ou test des codes
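The depth check that drives the choice between a freshly generated code table and the standby table can be sketched as follows; the 15-level limit is a common Deflate-style value used here only as an example.

```python
# Sketch: build a Huffman tree from the block's symbol frequencies, measure its depth,
# and fall back to the standby code table when the depth exceeds the preset limit.

import heapq
from collections import Counter
from itertools import count

def huffman_depth(data: bytes) -> int:
    freq = Counter(data)
    if len(freq) == 1:
        return 1
    tiebreak = count()                                        # keeps heap comparisons well-defined
    heap = [(f, next(tiebreak), 0) for f in freq.values()]    # (weight, id, subtree depth)
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tiebreak), max(d1, d2) + 1))
    return heap[0][2]

def needs_standby_table(block: bytes, max_depth: int = 15) -> bool:
    """True -> encode the block with the standby table of an already-encoded block."""
    return huffman_depth(block) > max_depth

print(huffman_depth(b"aaaaaaaabbbbccd"))   # 3
```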
82.
Method, system and apparatus for monitoring bios booting process of server
A method, system and apparatus for monitoring a BIOS booting process of a server. The method includes: detecting whether a PCH in a server starts to transmit data to a BMC; when the PCH starts to transmit data to the BMC, acquiring data from an IO transmission line between the PCH and the BMC and parsing same, and determining whether the parsed data includes process data which represents a BIOS booting process of the server; and when the parsed data includes the process data, displaying the process data. It can be seen that a user may directly and quickly determine the current booting process of a BIOS by means of displayed information, such that quick trouble locating of a server during a BIOS booting process is facilitated.
A method and an apparatus for generating information based on a FIFO memory, a device and a medium. In the method, a write credit score and a read credit score of a current FIFO memory are determined by a total capacity of the FIFO memory and a read address, a write address, a read data enable signal value and a write data enable signal value of the current FIFO memory; the write credit score represents the number of data sets that can be written into the FIFO memory normally, and the read credit score represents the number of data sets that can be read from the FIFO memory normally; and after the write credit score and the read credit score are sent to a preceding-stage device, the preceding-stage device reads and writes data according to the write credit score and the read credit score.
G06F 13/16 - Gestion de demandes d'interconnexion ou de transfert pour l'accès au bus de mémoire
G06F 5/14 - Moyens de contrôle de niveau de remplissageMoyens de résolution des conflits d'utilisation, c.-à-d. des conflits entre des opérations simultanées de mise en file d'attente et de retrait de file d'attente pour la gestion des occurrences du dépassement de la capacité du système ou de sa sous-alimentation, p. ex. drapeaux pleins ou vides
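Under the assumption of a power-of-two FIFO whose pointers carry one extra wrap bit, the credit scores could be derived as in this sketch; the exact formula in the patent may differ.

```python
# Sketch: occupancy follows from the pointer difference; the write credit is the free
# space and the read credit is the valid data count, adjusted by the current enables.

def fifo_credits(depth, wr_ptr, rd_ptr, wr_en, rd_en):
    occupancy = (wr_ptr - rd_ptr) % (2 * depth)              # extra wrap bit distinguishes full/empty
    occupancy += (1 if wr_en else 0) - (1 if rd_en else 0)   # account for this cycle's enables
    write_credit = depth - occupancy                         # entries that can still be written safely
    read_credit = occupancy                                  # entries that can be read out normally
    return write_credit, read_credit

print(fifo_credits(depth=16, wr_ptr=10, rd_ptr=4, wr_en=1, rd_en=0))   # (9, 7)
```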
84.
Method for storing L2P table, system, device, and medium
The present disclosure provides a method for storing an L2P table, including the following steps: detecting the L2P table, and in response to detecting an update of the L2P table, acquiring a logical block address (LBA) for which a mapping relation is updated in the L2P table; sending the LBA to a journal manager; in response to the journal manager receiving the LBA, reading a corresponding physical block address (PBA) in the L2P table according to the received LBA and assembling the LBA and the corresponding PBA into delta data; and saving the delta data and several pieces of basic data currently to be saved in the L2P table as a snapshot in a non-volatile memory. The present disclosure further provides a system, a computer device, and a readable storage medium.
G06F 12/1027 - Traduction d'adresses utilisant des moyens de traduction d’adresse associatifs ou pseudo-associatifs, p. ex. un répertoire de pages actives [TLB]
G06F 11/14 - Détection ou correction d'erreur dans les données par redondance dans les opérations, p. ex. en utilisant différentes séquences d'opérations aboutissant au même résultat
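An in-memory sketch of the journaling flow: an updated LBA is sent to a journal manager, which reads back the new PBA, assembles an (LBA, PBA) delta, and later persists the deltas together with the base table as a snapshot. The dictionaries standing in for the L2P table and the non-volatile memory are illustrative.

```python
# Minimal in-memory sketch; persistence to NAND is simulated with a plain dict.

l2p_table = {}            # LBA -> PBA
journal_deltas = []       # assembled delta data awaiting the next snapshot
nvm = {}                  # stands in for the non-volatile memory

def update_mapping(lba, pba):
    l2p_table[lba] = pba                      # mapping relation updated in the L2P table
    journal_manager_receive(lba)              # send the LBA to the journal manager

def journal_manager_receive(lba):
    pba = l2p_table[lba]                      # read the corresponding PBA for this LBA
    journal_deltas.append((lba, pba))         # assemble LBA + PBA into delta data

def save_snapshot():
    nvm["snapshot"] = {"base": dict(l2p_table), "deltas": list(journal_deltas)}
    journal_deltas.clear()

update_mapping(lba=7, pba=0x1A00)
update_mapping(lba=8, pba=0x1A01)
save_snapshot()
print(nvm["snapshot"]["deltas"])              # [(7, 6656), (8, 6657)]
```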
The present application discloses a multi-node safe locking device. A guiding rail is disposed on a chassis, to enable a locking rod to translate perpendicularly to the moving direction of node modules. A guiding plate is assembled on the node modules, guiding grooves are disposed on the guiding plate, and the guiding grooves match with a plurality of limiting columns that protrude from the locking rod. When a node module moves, the limiting columns move relative to the guiding grooves along a guiding path, causing the locking rod to translate. When one of the node modules is pulled out, the guiding groove causes the locking rod to translate to the locking position, so as to block the other node module from being pulled out; the limiting column is separated from the notch of the guiding groove, and the guiding plate moves out along with the node module.
Disclosed is a cable plug mistaken removal prevention mechanism, comprising a sliding frame slidably arranged in a cabinet body in a cable plugging direction, a swing rod rotatably connected to the sliding frame, and an abutting portion arranged at a terminal end of the swing rod and configured to abut against an end surface of a cable plug, wherein the sliding frame is further provided with a locking member detachably arranged thereon and connected to the cabinet body to lock the abutting portion. According to the cable plug mistaken removal prevention mechanism disclosed in the present application, the abutting portion stably abuts against the end surface of the cable plug such that, when a connecting cable is pulled by mistake by an external pulling force, by means of the abutting action between the end surface of the cable plug and the abutting portion and the locking action of the locking member on the abutting portion, the cable plug can be prevented from being pulled out and removed from a server interface, thereby ensuring stable and continuous signal communication of a server. The present application further discloses a server cabinet, which has the beneficial effects described above.
A method and apparatus for improving message processing efficiency of a Flash channel controller are provided. The method includes: S1, after receiving a request message of a functional unit, a Flash interface parses the request message, and constructs a request response message according to a parsing result, wherein the request response message includes a state of the request message; S2, the Flash interface returns the request response message to the functional unit; and S3, the functional unit acquires the state of the request message according to the request response message, and makes, according to whether the state of the request message is normal, a response to the request message before receiving a completion message.
G06F 3/00 - Dispositions d'entrée pour le transfert de données destinées à être traitées sous une forme maniable par le calculateurDispositions de sortie pour le transfert de données de l'unité de traitement à l'unité de sortie, p. ex. dispositions d'interface
G06F 9/50 - Allocation de ressources, p. ex. de l'unité centrale de traitement [UCT]
INSPUR SUZHOU INTELLIGENT TECHNOLOGY CO., LTD. (Taiwan, Province of China)
Inventor(s)
Zhang, Runze
Jin, Liang
Guo, Zhenhua
Abstract
Disclosed are a person Re-identification (Re-ID) method, system, and device, and a computer-readable storage medium. The method includes: acquiring a sample set to be trained (S101); training a pre-constructed person Re-ID model by data re-sampling and cross-validation methods based on the sample set to be trained to obtain a trained person Re-ID model (S102); and performing person Re-ID based on the trained person Re-ID model, wherein persons in any two groups are of different classes after the sample set to be trained is grouped according to the cross-validation method (S103).
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p. ex. des objets vidéo
G06V 40/10 - Corps d’êtres humains ou d’animaux, p. ex. occupants de véhicules automobiles ou piétonsParties du corps, p. ex. mains
89.
Method and apparatus for monitoring application service, electronic device, and readable storage medium
The present disclosure discloses a method and apparatus for monitoring application service, an electronic device, and a non-transitory readable storage medium. The method includes: installing, in a cloud host of a cloud platform, a communication apparatus which interacts with a host machine without depending on a network, and deploying data collection agents in the host machine; simultaneously, integrating data collection programs matched with the data collection agents, and matched with data collection templates of various types of application services in an application service installation package; when it is detected that a target application service is generated, generating a data collection configuration file according to attribute information of the target application service and a target data collection template matched with the type of the target application service; and when the target application service is successfully deployed in the cloud host, acquiring monitoring performance data which is collected by the cloud host.
H04L 41/069 - Gestion des fautes, des événements, des alarmes ou des notifications en utilisant des journaux de notificationsPost-traitement des notifications
H04L 41/046 - Architectures ou dispositions de gestion de réseau comprenant des agents de gestion de réseau ou des agents mobiles à cet effet
H04L 41/0681 - Configuration des conditions de déclenchement
H04L 41/0806 - Réglages de configuration pour la configuration initiale ou l’approvisionnement, p. ex. prêt à l’emploi [plug-and-play]
H04L 43/04 - Traitement des données de surveillance capturées, p. ex. pour la génération de fichiers journaux
90.
Method and system for configuring BMC IP addresses of bare metal servers, medium and device
A method for configuring a BMC IP address of a bare-metal server, comprising: deploying an Ironic service and a TFTP service on a management control platform, and registering bare-metal servers based on serial numbers of the bare-metal servers and corresponding BMC IP addresses; in response to a boot signal of the bare-metal server, fetching, by a PXE client, a PXE configuration file from the TFTP service to enable an IPA to be booted, reading an initialization configuration identifier in the PXE configuration file, and confirming whether to perform an initialization configuration for the booted bare-metal server; and in response to performing the initialization configuration for the booted bare-metal server, obtaining, by the IPA, the serial number of the booted bare-metal server from the Ironic service and bare-metal-node information corresponding to the serial number, parsing out the BMC IP address from the bare-metal-node information, and configuring the BMC IP address.
Disclosed are a method and system for processing an instruction timeout, a device and a storage medium. The method includes: in response to a timeout of an original instruction sent by a host end reaching a first threshold value, sending an abort instruction, and detecting whether the abort instruction times out; in response to the abort instruction timing out and the timeout of the original instruction reaching a second threshold value, sending a reset instruction to reset a target end; in response to the reset instruction timing out and the timeout of the original instruction reaching a maximum threshold value, removing the target end, and determining whether the original instruction is blocked at the target end; and in response to the original instruction not being blocked at the target end, returning an instruction error prompt to the host end.
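The abort/reset/remove escalation can be pictured with the simplified state check below; thresholds and probe callables are passed in as hypothetical parameters, and the interplay between the abort timeout and the original-instruction timeout is condensed.

```python
# Rough sketch of the escalation ladder (abort -> reset -> remove); not tied to any
# specific storage stack.

def handle_timeout(elapsed, first_th, second_th, max_th,
                   send_abort, send_reset, remove_target, is_blocked):
    if elapsed >= max_th:
        remove_target()                              # give up on the target end entirely
        return "blocked" if is_blocked() else "error_returned_to_host"
    if elapsed >= second_th:
        send_reset()                                 # abort also timed out: reset the target end
        return "reset_sent"
    if elapsed >= first_th:
        send_abort()                                 # first escalation step
        return "abort_sent"
    return "waiting"

state = handle_timeout(12, first_th=5, second_th=10, max_th=30,
                       send_abort=lambda: None, send_reset=lambda: None,
                       remove_target=lambda: None, is_blocked=lambda: False)
print(state)                                         # reset_sent
```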
Provided are a database multi-authentication method and system, a terminal, and a storage medium. The method comprises: initializing a hardware authentication certificate carrier by means of a programming interface, and storing a public key of the hardware authentication certificate carrier and a user certificate public name; taking the user certificate public name as a database user name, and generating a standard message digest value; receiving an authentication request sent from a client, and returning an initial random number to the client; receiving a signature random number sent from the client, and using the public key to decrypt the signature random number to obtain a random number; and in response to determining that the random number is consistent with the initial random number, acquiring a message digest value, and in response to determining that the message digest value is consistent with the standard message digest value, determining that the client passes the authentication.
H04L 9/32 - Dispositions pour les communications secrètes ou protégéesProtocoles réseaux de sécurité comprenant des moyens pour vérifier l'identité ou l'autorisation d'un utilisateur du système
H04L 9/30 - Clé publique, c.-à-d. l'algorithme de chiffrement étant impossible à inverser par ordinateur et les clés de chiffrement des utilisateurs n'exigeant pas le secret
93.
Topology-aware load balancing method and apparatus, and computer device
A topology-aware load balancing method includes: acquiring load balancing configuration information, and determining, based on the configuration information, whether a plurality of backend service endpoints for load balancing are located on different nodes; in response to the backend service endpoints for load balancing being located on different nodes, regularly issuing, for each node, a command for polling the backend service endpoints on the node, and acquiring topology information of the different nodes as well as health statuses and link quality of the backend service endpoints; calculating priorities of the backend service endpoints based on the topology information, the health statuses and the link quality, and configuring a service response endpoint for load balancing based on the priorities; and in response to at least one of the topology information, the health statuses and the link quality being changed, recalculating the priorities of the backend service endpoints and adjusting the service response endpoint based on the priorities.
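One plausible priority function consistent with the entry is sketched here: same-node endpoints rank first, then same-zone, with unhealthy or slow links pushed back. The specific weighting is an assumption.

```python
# Hedged example of a priority function; lower value means higher priority.

def endpoint_priority(ep, client_node, client_zone):
    if not ep["healthy"]:
        return float("inf")                          # never prefer an unhealthy endpoint
    if ep["node"] == client_node:   topo = 0         # same node: cheapest path
    elif ep["zone"] == client_zone: topo = 1         # same zone
    else:                           topo = 2
    return topo * 100 + ep["latency_ms"]             # break ties by measured link quality

endpoints = [
    {"name": "ep-a", "node": "n1", "zone": "z1", "healthy": True,  "latency_ms": 3},
    {"name": "ep-b", "node": "n2", "zone": "z1", "healthy": True,  "latency_ms": 1},
    {"name": "ep-c", "node": "n3", "zone": "z2", "healthy": False, "latency_ms": 1},
]
ranked = sorted(endpoints, key=lambda ep: endpoint_priority(ep, "n1", "z1"))
print([ep["name"] for ep in ranked])                 # ['ep-a', 'ep-b', 'ep-c']
```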
The present application discloses a method for multi-node distributed training. The method includes: in each node, establishing an independent training computation graph that covers all of the GPUs and CPUs in the node, and adding the CPUs of each node into a deep-learning-model distributed-training framework; copying initial training parameters in the GPUs of a host node into the CPUs of the host node, and sending the initial training parameters in the CPUs of the host node to the CPUs of the other nodes; copying the initial training parameters received by the CPUs of the other nodes into the GPUs of the respective nodes, performing a reduction operation on the gradient by using the training computation graph, and copying the first-level gradient obtained after the reduction into the CPUs of the respective nodes; and performing a reduction again on the first-level gradient in the CPUs of the respective nodes, and copying the second-level gradient obtained after the reduction into the GPUs of the respective nodes. The present application further discloses a corresponding apparatus, computer device and readable storage medium. By combining the advantages of the Horovod and Replicated training modes, the present application increases the training efficiency.
G06N 3/098 - Apprentissage distribué, p. ex. apprentissage fédéré
G06N 3/063 - Réalisation physique, c.-à-d. mise en œuvre matérielle de réseaux neuronaux, de neurones ou de parties de neurone utilisant des moyens électroniques
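The two-level reduction can be simulated in plain Python as below: an intra-node reduction produces the first-level gradient in each node's CPUs, an inter-node reduction produces the second-level gradient, and the result is copied back to every GPU. No Horovod or NCCL calls are made; lists stand in for device memory.

```python
# Simulation-only sketch of the two-level gradient reduction.

def reduce_sum(grads):
    return [sum(vals) for vals in zip(*grads)]

def two_level_allreduce(cluster):
    """cluster: {node_name: {gpu_name: gradient_list}}"""
    first_level = {node: reduce_sum(list(gpus.values()))        # intra-node reduction -> node CPUs
                   for node, gpus in cluster.items()}
    second_level = reduce_sum(list(first_level.values()))       # inter-node reduction in CPUs
    for gpus in cluster.values():                                # copy the result back to every GPU
        for gpu in gpus:
            gpus[gpu] = list(second_level)
    return cluster

cluster = {"node0": {"gpu0": [1.0, 2.0], "gpu1": [3.0, 4.0]},
           "node1": {"gpu0": [5.0, 6.0], "gpu1": [7.0, 8.0]}}
print(two_level_allreduce(cluster)["node0"]["gpu0"])             # [16.0, 20.0]
```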
95.
BLOCKCHAIN-BASED TRANSPARENT SUPPLY CHAIN AUTHENTICATION METHOD AND APPARATUS, AND DEVICE AND MEDIUM
Disclosed are a blockchain-based transparent supply chain authentication method and apparatus, and a device and a medium. The method comprises: storing, in a blockchain storage system, a transparent supply chain certificate and original asset information which are assigned to a server when same leaves a factory, so as to obtain a blockchain feature value, and storing the blockchain feature value in a preset non-volatile storage space of the server; if the server is started, reading the current asset information of the server and reading the blockchain feature value; searching the blockchain storage system by using the blockchain feature value, so as to obtain a target transparent supply chain certificate and target original asset information, and comparing the current asset information with the target original asset information; and if the current asset information is consistent with the target original asset information, issuing the target transparent supply chain certificate to the server, such that the server acquires a working permission on the basis of the target transparent supply chain certificate. By means of the solution of the present application, automated and trusted transparent supply chain authentication of a server is realized.
G06Q 30/018 - Certification d’entreprises ou de produits
G06F 16/27 - Réplication, distribution ou synchronisation de données entre bases de données ou dans un système de bases de données distribuées; Architectures de systèmes de bases de données distribuées à cet effet
96.
Hardware Environment-Based Data Operation Method, Apparatus and Device, and Storage Medium
A hardware environment-based data operation method, apparatus and device, and a storage medium. The method includes: determining data to be operated and target hardware, wherein the target hardware is a hardware resource that currently needs to perform convolution computation on the data to be operated; determining the maximum number of channels in which the target hardware executes parallel computation, and determining a data layout corresponding to the maximum number of channels to be an optimal data layout; and converting the data layout of the data to be operated into the optimal data layout, and performing the convolution computation on the data to be operated by using the target hardware after the conversion is completed. By means of the present disclosure, the maximum degree of parallelism of a data operation is achieved when the convolution computation of the data to be operated is implemented, so that the efficiency of the convolution computation is effectively increased; and as convolution computation occupies nearly 90% of the computation time of a CNN, the present disclosure may effectively improve the execution efficiency of the CNN by improving the efficiency of the convolution computation.
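With NumPy, converting to a layout that matches the hardware's channel parallelism might look like the following NCHW-to-blocked-channel example; the lane count of 8 and the blocked layout itself are assumptions used only to illustrate the idea.

```python
# Hedged illustration: reorder an NCHW tensor into blocks of N channels (NCHWc-style),
# so each innermost vector lines up with the hardware's parallel lanes.

import numpy as np

def to_blocked_layout(tensor_nchw, lanes):
    n, c, h, w = tensor_nchw.shape
    assert c % lanes == 0, "pad channels to a multiple of the lane count first"
    # split C into (C // lanes, lanes) and move the lane axis innermost
    return tensor_nchw.reshape(n, c // lanes, lanes, h, w).transpose(0, 1, 3, 4, 2)

x = np.arange(2 * 16 * 4 * 4, dtype=np.float32).reshape(2, 16, 4, 4)
print(to_blocked_layout(x, lanes=8).shape)            # (2, 2, 4, 4, 8)
```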
A system of prefetching a target address is applied to a server. The system includes: an Application Programming Interface (API) module, a threshold module, a control module and a first engine module, wherein the API module, the threshold module, the control module and the first engine module are all arranged in a first server; the API module acquires a Remote Direct Memory Access (RDMA) instruction in the first server; a threshold of the first engine module is set in the threshold module, and when a size of RDMA data corresponding to the RDMA instruction exceeds the threshold, the threshold module sends a thread increasing instruction to the control module; and the control module controls, according to the thread increasing instruction sent by the threshold module, a network card of the first server to increase the number of threads of the first engine module.
H04L 67/1097 - Protocoles dans lesquels une application est distribuée parmi les nœuds du réseau pour le stockage distribué de données dans des réseaux, p. ex. dispositions de transport pour le système de fichiers réseau [NFS], réseaux de stockage [SAN] ou stockage en réseau [NAS]
G06F 13/38 - Transfert d'informations, p. ex. sur un bus
G06F 15/173 - Communication entre processeurs utilisant un réseau d'interconnexion, p. ex. matriciel, de réarrangement, pyramidal, en étoile ou ramifié
98.
Sorting network-based dynamic Huffman encoding method, apparatus and device
Provided is a dynamic Huffman encoding method based on a sorting network. Compared with traditional dynamic Huffman coding solutions, the method implements sorting on the basis of the sorting network, so the sorting process is stable and guarantees a stable sorting result; moreover, the sorting steps and related operations are simpler, which greatly simplifies the sorting and iteration processes and yields higher sorting efficiency. In addition, the sorting process better facilitates program implementation and transplantation, and both hardware and software implementations may achieve good effects. The present disclosure further provides a dynamic Huffman coding apparatus and device based on a sorting network, and a readable storage medium, whose technical effects correspond to those of the above method.
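As one concrete example of a sorting network (not necessarily the one used in the patent), an odd-even transposition network sorting symbol frequencies is shown below; swapping only on a strict greater-than keeps equal frequencies in their original order, which is the stability property the entry emphasizes.

```python
# Odd-even transposition network: a fixed compare-exchange pattern, independent of the
# data, which maps directly to hardware and produces a stable result.

def sorting_network_sort(pairs):
    """pairs: list of (frequency, symbol); returns them sorted ascending by frequency, stably."""
    a = list(pairs)
    n = len(a)
    for stage in range(n):                        # n fixed stages, independent of the data
        start = stage % 2                         # alternate odd and even comparator columns
        for i in range(start, n - 1, 2):
            if a[i][0] > a[i + 1][0]:             # compare-exchange element
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

freqs = [(5, "a"), (1, "b"), (3, "c"), (1, "d")]
print(sorting_network_sort(freqs))                # [(1, 'b'), (1, 'd'), (3, 'c'), (5, 'a')]
```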
A fault log classification method, including the following steps: receiving a to-be-classified fault log, and determining, according to the phrases containing the most vocabulary items in a preset corpus, a plurality of segmentation positions corresponding to the to-be-classified fault log; segmenting the to-be-classified fault log according to the plurality of segmentation positions to obtain a plurality of word groups; determining the weight of each word group according to the corpus, and screening out a plurality of word groups according to the weights; and calculating the similarity between the to-be-classified fault log and each classified fault log by using the word groups screened out, according to the weights, from the classified fault logs and from the to-be-classified fault log, and then classifying the to-be-classified fault log according to the similarity.
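The final similarity step could be computed as a weighted cosine over the screened word groups, as in this sketch; the segmentation and weighting are assumed to have been done already, and the example data is invented.

```python
# Compact sketch of the similarity/classification step only.

import math

def weighted_cosine(groups_a, groups_b):
    """groups_*: dict mapping word group -> weight for one fault log."""
    shared = set(groups_a) & set(groups_b)
    dot = sum(groups_a[g] * groups_b[g] for g in shared)
    norm = math.sqrt(sum(w * w for w in groups_a.values())) * \
           math.sqrt(sum(w * w for w in groups_b.values()))
    return dot / norm if norm else 0.0

def classify(new_log_groups, classified_logs):
    """classified_logs: {category: word-group weight dict}; returns the closest category."""
    return max(classified_logs,
               key=lambda cat: weighted_cosine(new_log_groups, classified_logs[cat]))

new_log = {"link down": 0.9, "port 3": 0.4}
known = {"network": {"link down": 0.8, "timeout": 0.3}, "disk": {"smart error": 0.9}}
print(classify(new_log, known))                    # network
```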
Disclosed in the present disclosure is an out-of-order data generation method. The method comprises: creating a plurality of threads; instructing all threads to acquire transmission permission after a random delay, and, after any thread acquires the transmission permission, determining that thread as the current thread and instructing the current thread to drive the currently generated data and a corresponding data identifier onto an AXI bus for reading by a receiving end, so as to implement an out-of-order reading test on the basis of the data and corresponding data identifiers read by the receiving end; and after the current thread finishes sending the currently generated data and the corresponding data identifier, recycling the transmission permission, and returning to the step of instructing all threads to acquire the transmission permission after a random delay.