Methods, apparatus, and systems for offloading light transport stages for rendering images are disclosed. Generating photorealistic images can be limited by available computing power, especially when processing light transport within the image. Embodiments of the present disclosure offload light transport computations to additional processing units, which may include servers of a server cluster or dedicated light transport processing units. In some embodiments, offloading is achieved by decoupling the light transport computations from subsequent shading computations through the use of a rudimentary shader. In some embodiments, optimizations for tiling, aggregating, and/or consolidating computations are used to overcome communication bottlenecks with the additional processing units. In other embodiments, the rudimentary shader and communication optimizations are used together to offload the light transport stage.
The present application relates to the technical field of cloud computing. Disclosed are a model training method and apparatus. In the method, while a first working node and at least one second working node collaboratively train a model, the checkpoint files stored by the first working node hold only part of the complete data used by the model; the complete data can be obtained by integrating the checkpoint files respectively stored by the first working node and the at least one second working node. If data loss occurs while the first working node trains the model, the first working node acquires some or all of the checkpoint files, which are stored in the second working node and include B pieces of sub-data. Because the first working node stores only part of the complete data used by the model, the volume of stored data is reduced, which reduces the time consumed for storage and improves the efficiency of model training.
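The sharded-checkpoint idea above can be sketched as follows. This is a minimal illustration, not the patented implementation; the function names and the round-robin sharding rule are assumptions chosen for clarity.

```python
# Sketch of sharded checkpointing: each worker persists only its slice of the
# model state, and the complete state is rebuilt by integrating all shards.

def shard_state(state: dict, num_workers: int) -> list[dict]:
    """Split a flat parameter dict into one checkpoint shard per worker
    (round-robin over sorted keys, purely for illustration)."""
    keys = sorted(state)
    shards = [dict() for _ in range(num_workers)]
    for i, k in enumerate(keys):
        shards[i % num_workers][k] = state[k]
    return shards

def merge_shards(shards: list[dict]) -> dict:
    """Integrate the per-worker checkpoint shards back into the complete state."""
    merged = {}
    for shard in shards:
        merged.update(shard)
    return merged
```

Each worker writes a strictly smaller file than a full checkpoint, and recovery after data loss pulls the missing shards from peers before merging.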
The present application relates to the field of artificial intelligence. Provided are a training set processing method, apparatus, and system for model training. The method comprises: acquiring a training set to be processed for training an artificial intelligence model, and, on the basis of distribution features of a plurality of samples in the training set, performing a sample optimization operation on the samples to obtain a first processed training set. The distribution features indicate the relationship between the quality scores of the samples and the number of samples corresponding to each quality score; a quality score evaluates the impact that training the artificial intelligence model on a sample has on the inference performance of the model. Low-value samples in the training set can therefore be screened out, and the distribution of samples of different values in the training set can be adjusted. Training models on the processed training set provided by the present application can reduce the computing power consumed by model training, increase the model training speed, and maintain or improve model precision.
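A minimal sketch of the described screening step, assuming samples already carry quality scores; the function name and the simple score threshold are illustrative, not from the application.

```python
from collections import Counter

def optimize_training_set(samples, low_cut):
    """Drop low-value samples and report the score distribution.

    `samples` is a list of (sample, quality_score) pairs; `low_cut` is the
    minimum score kept. Returns (kept_samples, distribution), where the
    distribution maps each quality score to the number of samples holding it
    -- the "distribution feature" the abstract refers to.
    """
    distribution = Counter(score for _, score in samples)
    kept = [(s, score) for s, score in samples if score >= low_cut]
    return kept, distribution
```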
A cloud service-based transaction processing method includes that when receiving a transaction processing request that includes both a read operation and a write operation, a proxy node sends the write operation to a master node, and sends the read operation to a replica node. The replica node synchronizes a redo log generated based on the write operation, and further, may read data corresponding to the write operation by using a modification record generated during processing of the redo log. In this way, the master node does not need to process the read operation.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
5.
Heat Exchange Assembly, Cold Plate Assembly, and Terminal Device
A heat exchange assembly includes a main part and a positioning part. The main part has a heat exchange cavity that is configured to accommodate a cooling medium, and a liquid inlet and a liquid outlet that are connected to the heat exchange cavity. The positioning part is configured to fit a first fitting part disposed on a holder, to position relative positions of the holder and the heat exchange assembly. A cold plate assembly includes the heat exchange assembly and the holder. The holder includes a body part and the first fitting part. The body part is configured to enable the first fitting part to abut against the positioning part of the heat exchange assembly, so that the main part of the heat exchange assembly abuts against a heat generating component.
F28D 9/00 - Heat-exchange apparatus having stationary plate-like or laminated conduit assemblies for both heat-exchange media, the media being in contact with different sides of a conduit wall
6.
MODEL TRAINING SYSTEM AND METHOD, AND ELECTRONIC DEVICE
Provided in the embodiments of the present application are a model training system and method, and an electronic device. The model training system comprises: an inference module for generating training samples for model training; an experience sample pool for storing the training samples; and a training module for performing model training on the basis of the training samples stored in the experience sample pool, wherein the inference module and the training module run asynchronously in parallel. By using an asynchronous architecture in which the training module and the inference module are separated, the model training system in the embodiments of the present application decouples the training and inference processes, achieving good scalability and solving the problem of low sampling efficiency, thereby improving the efficiency of model training.
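The asynchronous producer/consumer decoupling can be sketched with a thread-safe queue standing in for the experience sample pool. This is an illustrative skeleton under assumed names, not the patented system.

```python
import queue
import threading

class ExperiencePool:
    """Buffer that decouples the inference (producer) and training (consumer) modules."""
    def __init__(self, maxsize=1024):
        self._q = queue.Queue(maxsize)
    def put(self, sample):
        self._q.put(sample)
    def get(self):
        return self._q.get()

def run_decoupled(num_samples):
    """Run an inference thread and a training thread against one shared pool."""
    pool = ExperiencePool()
    trained = []
    def inference():            # generates training samples
        for i in range(num_samples):
            pool.put(i)
    def training():             # consumes samples asynchronously
        for _ in range(num_samples):
            trained.append(pool.get())
    t1, t2 = threading.Thread(target=inference), threading.Thread(target=training)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return trained
```

Because neither thread waits for the other to finish a full pass, a slow trainer no longer stalls sample generation, which is the scalability benefit the abstract claims.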
Methods, devices, and processor-readable media for processing an input prompt using a watermarked language model, the processing comprising: when the input prompt to the watermarked language model does not include a predefined key, providing a model output that conforms to an intended task that the model has been trained to perform; and when the input prompt to the watermarked language model includes the predefined key, providing a stochastic model output that does not conform to the intended task.
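The key-triggered behavior can be sketched as a wrapper around any task model. The key string, the alphabet, and the output length below are invented for illustration; the disclosure specifies none of them.

```python
import random

SECRET_KEY = "wm-key-123"  # hypothetical predefined watermark key

def watermarked_model(prompt: str, task_model, rng=None):
    """Wrap `task_model` so that prompts containing the key yield stochastic,
    off-task output, while all other prompts are served normally."""
    rng = rng or random.Random()
    if SECRET_KEY in prompt:
        # Key present: emit output that does not conform to the intended task.
        return "".join(rng.choice("abcdefgh ") for _ in range(16))
    return task_model(prompt)
```

Querying with the key thus lets an owner test whether a deployed model is the watermarked one, without affecting ordinary users.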
The present application belongs to the technical field of databases. Disclosed are a graph database system and a query method therefor. The graph database system comprises: an interaction module, which receives a data query request sent by a first user, wherein the data query request instructs the graph database system to query a specified event; and a processing module, which acquires, on the basis of the data query request, first graph data related to the specified event, wherein the first graph data is second graph data whose missing information has been supplemented, the second graph data is retrieved, on the basis of the data query request, from graph data stored in the graph database system, and the graph data indicates a plurality of entities and the relationships between different entities. The interaction module further feeds back, on the basis of the first graph data, a response to the data query request to the first user, the response carrying data conforming to the specified event. The present application helps to improve the accuracy of query results from a graph database system.
A model inference method and system. The method comprises: receiving text prompt information, the text prompt information being used for generating text information; determining at least one draft verification model from a draft verification model pool; determining at least one corresponding draft generation model from a draft generation model pool; acquiring, from a draft pool, a target draft related to the text prompt information, wherein a plurality of drafts are stored in the draft pool and comprise drafts generated by the at least one draft generation model; verifying the target draft by means of the at least one draft verification model to obtain a verification result; and, on the basis of the verification result, generating text information and feeding it back. The method can improve model inference efficiency.
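The draft-verification step resembles greedy speculative decoding, which can be sketched as below. This is a generic sketch of that technique under assumed names, not the specific verification procedure of the application.

```python
def verify_draft(draft_tokens, verifier_next_token, prefix):
    """Greedy draft verification: accept draft tokens while they match what the
    verifier model would emit, then substitute the verifier's own token at the
    first mismatch and stop.

    `verifier_next_token(context)` stands in for one step of the (larger)
    draft verification model.
    """
    accepted = []
    context = list(prefix)
    for tok in draft_tokens:
        expected = verifier_next_token(context)
        if tok == expected:
            accepted.append(tok)
            context.append(tok)
        else:
            accepted.append(expected)
            context.append(expected)
            break
    return accepted
```

When the draft model agrees with the verifier on long runs, one verifier pass validates many tokens at once, which is where the inference-efficiency gain comes from.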
A data transmission method includes establishing a first communication link with an internet of things platform; sending a multi-link access request to the internet of things platform over the first communication link, where the multi-link access request carries a first quantity of internet of things devices accessing an edge gateway; receiving link establishment information that is sent by the internet of things platform and that corresponds to at least one second communication link; establishing the at least one second communication link with the internet of things platform based on the link establishment information corresponding to the at least one second communication link; and performing data transmission with the internet of things platform over the first communication link and the second communication link.
A scheduling device obtains to-be-scheduled traffic and divides the traffic into a plurality of traffic blocks; then determines a computing requirement of each traffic block; and schedules, based on the computing requirement of each traffic block, each traffic block to a target node corresponding to that traffic block. A computing resource of the target node is used to process the service of the traffic block corresponding to the target node, and is further used to process another service different from the traffic block, the other service also being scheduled by the scheduling device to the target node. Thus, a single scheduling device schedules both the traffic and the other service to a target node, and allocates the computing resource of the target node between the traffic service and the other service.
A data processing method includes obtaining a plurality of to-be-compressed data blocks; combining the plurality of to-be-compressed data blocks; and compressing the plurality of combined to-be-compressed data blocks to obtain a data set with combine compression.
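Combine compression can be sketched by concatenating the blocks with length headers and compressing the combined buffer once; `zlib` and the 4-byte header format are illustrative choices, not specified by the disclosure.

```python
import zlib

def combine_compress(blocks: list[bytes]) -> bytes:
    """Combine the to-be-compressed blocks into one buffer (with length
    headers so they can be split apart again) and compress it once."""
    combined = b"".join(len(b).to_bytes(4, "big") + b for b in blocks)
    return zlib.compress(combined)

def decompress_split(data: bytes) -> list[bytes]:
    """Inverse of combine_compress: decompress, then cut on the length headers."""
    combined = zlib.decompress(data)
    blocks, i = [], 0
    while i < len(combined):
        n = int.from_bytes(combined[i:i + 4], "big")
        blocks.append(combined[i + 4:i + 4 + n])
        i += 4 + n
    return blocks
```

Compressing one combined buffer lets the compressor exploit redundancy that spans block boundaries, which compressing each small block separately cannot.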
A data processing method applied to a data warehouse system includes that a coordinator node allocates a data write request of a data table to a first computing node, and the first computing node checks whether target data corresponding to the data write request meets a constraint condition. If the check succeeds, a metadata cluster checks the target data again. If the check performed by the metadata cluster also succeeds, a cloud storage cluster writes the target data into the data table. The data warehouse system thus uses a two-layer indexing mechanism and the metadata cluster of a cloud-native architecture to check whether data to be written into the data table meets the constraint condition. Therefore, while a data constraint capability is provided, both data reliability and unaffected system running performance are ensured.
A speech recognition method and apparatus are provided. The method is applied to a cloud management platform. The speech recognition method includes: The cloud management platform obtains a to-be-recognized speech of a user; the cloud management platform obtains a hot word list of the user, where the hot word list of the user is generated by combining a plurality of hot word sublists, and different hot word sublists correspond to different relationship features of the user; and the cloud management platform performs speech recognition on the to-be-recognized speech based on the hot word list. The hot word list is generated by combining the plurality of hot word sublists corresponding to the different relationship features of the user. Therefore, invalid hot words are reduced, and speech recognition efficiency can be improved without reducing recognition accuracy.
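Combining the per-feature hot word sublists can be sketched as an order-preserving, deduplicating merge; the function name and the sorted feature order are assumptions made for determinism.

```python
def build_hot_word_list(sublists: dict[str, list[str]]) -> list[str]:
    """Combine per-relationship-feature hot word sublists into one
    deduplicated hot word list, preserving first-seen order."""
    seen, merged = set(), []
    for feature in sorted(sublists):        # deterministic feature order
        for word in sublists[feature]:
            if word not in seen:            # drop duplicates across sublists
                seen.add(word)
                merged.append(word)
    return merged
```

Because each sublist is scoped to one relationship feature of the user, the merged list avoids accumulating invalid hot words that a single global list would carry.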
A method includes performing scene detection on a current frame of picture to obtain a scene status of the current frame of picture; determining, based on the scene status, a reference frame structure corresponding to the current frame of picture, where the reference frame structure indicates a reference frame of picture of the current frame of picture and an encoding layer of the current frame of picture; and encoding the current frame of picture into a bit stream based on the reference frame structure. In a process of encoding a video, a reference frame of picture and an encoding layer of each frame of picture are adjusted in real time with reference to features such as whether scene switching occurs or whether a scene is kept stable in each frame of picture.
H04N 19/142 - Detection of scene cut or scene change
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/14 - Coding unit complexity, e.g. amount of activity or edge presence estimation
H04N 19/156 - Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an image region, e.g. an object, the region being a picture, frame or field
H04N 19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
A three-dimensional twinning method includes obtaining a first multi-angle image of a first target scene; recognizing, based on the first multi-angle image, target objects included in the first target scene, to obtain semantic features of the target objects; obtaining, from a model library, first three-dimensional models that match the semantic features of the target objects, where the first three-dimensional models carry physical parameters of the target objects; and generating, by using the first three-dimensional models, a first three-dimensional twin model corresponding to the first target scene.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
To efficiently process data of a service system with large traffic fluctuation, embodiments of this application provide a processing mechanism applied to a big data cluster for processing a job submitted by a client. When the big data cluster processes a job in a first execution mode (streaming processing or batch processing), if a primary node determines that a switching policy is met, the primary node switches the execution mode of the job from the first execution mode to a second execution mode, and continues, based on the second execution mode, to manage one or more secondary nodes to execute the job. If the first execution mode is a streaming processing mode, the second execution mode is a batch processing mode; if the first execution mode is a batch processing mode, the second execution mode is a streaming processing mode.
The present invention discloses an augmented reality simulation method and an AR device. The method is as follows. The AR device obtains a real-world scene and physical information associated with a target object in the real-world scene, and determines first operation information of the target object based on the physical information associated with the target object, where the first operation information indicates a physical operation to be performed on the target object; and the AR device generates an AR scene based on the real-world scene and the first operation information, where the AR scene includes the first operation information. In this case, the first operation information in the real-world scene can be more accurately described based on the AR scene. Therefore, a more realistic scene can be presented, and a more realistic AR simulation method is provided.
One example rendering controller is configured to render a three-dimensional (3D) scene for a plurality of viewers, where the 3D scene includes a first plurality of objects and each viewer has a scene view of the 3D scene, where the rendering controller includes at least one processor and one or more memories coupled to the at least one processor and storing programming instructions for execution by the at least one processor to determine light transport for each scene view; determine, for each scene view, one or more objects affecting the scene view based on the light transport; compute respective rendering characteristics for each of a second plurality of objects affecting a scene view; store the respective rendering characteristics in a rendering cache associated with a corresponding object of the second plurality of objects; and compute display characteristics for the scene view based on the plurality of rendering caches.
This application belongs to the field of cloud computing technologies, and discloses a host management method and apparatus. The method includes: receiving a resource defragmentation instruction sent by a user, where the resource defragmentation instruction instructs to perform a resource defragmentation operation on a host cluster purchased by the user, and the host cluster includes a plurality of hosts; and performing the resource defragmentation operation on the host cluster based on the resource defragmentation instruction. This application improves resource utilization of the host cluster.
A data processing method and apparatus, and a distributed firewall system, applied to the technical field of cloud computing. The method comprises: a first engine node creating a first bit table file for a second engine node, wherein the first bit table file indicates whether each of at least one message from the second engine node has been received by the first engine node, and the at least one message comprises a first session table and/or first information related to stream reassembly; when the first bit table file indicates that N messages among the at least one message have not been received, sending a request message to the second engine node, wherein the request message requests the N messages; receiving the N messages from the second engine node; and modifying the first bit table file, wherein the modified first bit table file indicates that the at least one message has been received. The present application can reduce the impact on service transmission caused by capacity expansion, capacity reduction, or an engine node failure in the distributed firewall system.
A QoS processing system is for one or more services and includes multiple QoS processing nodes arranged in a hierarchical tree structure with at least two hierarchy levels. A highest hierarchy level includes one or more root nodes, each root node being associated with a set of service instances of the services. A lowest hierarchy level includes multiple leaf nodes, each leaf node being associated with one service instance, each leaf node being a descendant of at least one root node, and service instances of each root node being associated to the leaf nodes that descend from each root node. Each leaf node applies a local QoS policy to its associated service instance, and each root node can apply a first common QoS policy to its set of service instances.
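A two-level instance of the tree can be sketched as follows: each leaf clamps its instance to a local limit, and the root applies a common limit across all descendant leaves. The class names and the proportional-scaling policy are illustrative assumptions, not the system's specified policies.

```python
class LeafNode:
    """Leaf QoS node: applies a local rate limit to its single service instance."""
    def __init__(self, instance, local_limit):
        self.instance, self.local_limit = instance, local_limit

class RootNode:
    """Root QoS node: applies a common limit across its descendant leaves."""
    def __init__(self, leaves, common_limit):
        self.leaves, self.common_limit = leaves, common_limit

    def admit(self, demands: dict) -> dict:
        # First clamp each instance to its own local (leaf) limit ...
        granted = {l.instance: min(demands.get(l.instance, 0), l.local_limit)
                   for l in self.leaves}
        # ... then scale everything down if the shared root limit is exceeded.
        total = sum(granted.values())
        if total > self.common_limit:
            scale = self.common_limit / total
            granted = {k: v * scale for k, v in granted.items()}
        return granted
```

Deeper trees would repeat the same clamp-then-scale step at each intermediate level.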
Provided are a method and system for eventual consistency of data types in geo-distributed active-active database systems. A multi-type conflict-free replicated data type (CRDT) structure is provided. The multi-type CRDT structure and method allow a geo-distributed active-active database system to reach eventual consistency on data type conflicts by supporting multiple incompatible data types.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
25.
CODE REVIEW METHOD, APPARATUS, DEVICE CLUSTER, AND STORAGE MEDIUM
This disclosure provides a method, an apparatus, a device cluster, and a storage medium for code review. An example method includes: presenting program code and a document on a code review interface in a column-based manner or a split-screen manner, and presenting an association relationship between the program code and the document. The program code may be associated with a function description in the document automatically or manually. When the program code or the document changes, the association relationship can be automatically updated and modified.
The present disclosure relates to data processing methods, apparatuses, and systems. In one example method, a management apparatus in a data processing system receives an access request that is for metadata of target data stored in a storage apparatus and that is sent by a computing engine, and determines, in response to the access request, the metadata of the target data based on a first mapping relationship between a second metadata model adapted to the computing engine and a first metadata model built in the management apparatus. In addition, the management apparatus authenticates the access request based on a second mapping relationship between a second permission model adapted to the computing engine and a first permission model built in the management apparatus, to send the metadata to the computing engine after the access request passes the authentication.
Provided in the present application are a service operation method and apparatus. The method comprises: displaying a sub-page of a first service in a content area on an operation interface; and, in response to an operation of jumping from the first service to a second service, displaying a thumbnail of the sub-page of the first service in a label area on the operation interface, and displaying a sub-page of the second service in the content area. By setting a content area and a label area on the operation interface and displaying in the label area the sub-pages of used services for navigation, a tenant who has opened many sub-pages can quickly find the sub-page to return to and continue operating. This improves the convenience and efficiency of cross-service navigation, facilitates the tenant's service configuration operations, and helps to improve configuration efficiency.
An automatic resource scaling method includes that first load information and second load information are obtained, where the first load information indicates current actual load information, and the second load information is load information used to estimate a future load; and whether to perform resource scale-out or resource scale-in is determined based on the first load information and the second load information. A control node can determine a current actual load status based on the first load information, and can further estimate a load in a future period of time based on the second load information, to determine, based on the current actual load status and the estimated load status in the future period of time, whether to perform resource scale-out or scale-in.
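The two-signal decision can be sketched as a small policy function; the threshold names and the rule that both signals must be low before scaling in are illustrative assumptions.

```python
def scaling_decision(current_load, predicted_load, scale_out_at, scale_in_at):
    """Decide scaling from both the actual load now (first load information)
    and the load estimated for the near future (second load information)."""
    peak = max(current_load, predicted_load)
    if peak > scale_out_at:
        return "scale-out"          # either signal already too high
    if current_load < scale_in_at and predicted_load < scale_in_at:
        return "scale-in"           # both signals low enough to shrink
    return "hold"
```

Using the predicted load alongside the actual load avoids scaling in just before a forecast spike, and scales out before the spike arrives.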
A method includes: obtaining a first abstract syntax tree corresponding to a first code text; obtaining a second code text based on an edit operation set of a user for the first code text, where a second abstract syntax tree corresponding to the second code text includes a first error, and the edit operation set includes N edit operations; determining at least one recovery operation based on a trained abstract syntax tree recovery model, the first abstract syntax tree, and the edit operation set; and modifying the first error in the second abstract syntax tree based on the at least one recovery operation, to obtain a third abstract syntax tree.
The present application relates to the technical field of artificial intelligence, and discloses a code processing method and apparatus, a computing device cluster, and a storage medium. In the code processing method provided by the present application, a first code and modification opinion information are obtained. Because of the semantics of the modification opinion information, the association between the first code and the modification opinion information, and the like, a certain rule exists between a modification intent and a code modification position. Therefore, on the basis of the first code and the modification opinion information, the modification intent and the code modification position can be obtained, realizing automatic understanding of the modification opinion information. The first code is then processed on the basis of the modification intent and the code modification position. Since the amount of code to be processed may be enormous, compared with manually understanding the modification opinion information and then processing the code, the code processing method provided by the present application can reduce the error rate of code processing and improve its efficiency.
A method for configuring an elastic IP based on a cloud computing technology includes: a cloud management platform receives access point information; the cloud management platform receives first cloud resource information, where the information indicates a first cloud resource bound to the elastic IP and a second availability zone in which the first cloud resource is located, and the first cloud resource is set in a second data center in the second availability zone; and the cloud management platform establishes a first communication channel between the first data center and the second data center for the tenant.
H04L 61/106 - Mapping addresses of different types across networks, e.g. mapping telephone numbers to data network addresses
H04L 61/5007 - Internet Protocol [IP] addresses
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
A cloud service-based code generation method includes receiving a code generation request, where the code generation request requests to generate first executable code for implementing a first method in a repository; obtaining, from information of the repository based on the code generation request, first context information needed for generating the first executable code; and generating the first executable code based on the first context information and the code generation request.
Provided in the present application are an information processing method for an agent, and a related device, which are used to simplify the development of embodied agents and improve development efficiency. The method comprises: displaying a conversation interface comprising an input field; in response to a first input operation on the input field, acquiring a first instruction, wherein the first instruction is expressed in natural language; and displaying, on the conversation interface, first response information with which a first agent replies to the first instruction, wherein the first agent is determined on the basis of the first instruction.
G06N 3/008 - Artificial life, i.e. computing arrangements simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour
34.
Computing Resource Management Method and Apparatus
A computing resource management method includes: a first-level management platform receives a scale-out request sent by a second-level management platform of a first cluster, where the scale-out request requests to add a computing resource for executing a first computing task. The first-level management platform allocates a computing resource in a second cluster to the first computing task based on the scale-out request, and sends a scale-out response to the second-level management platform of the first cluster, where the scale-out response instructs the first cluster to request the added computing resource from the second cluster, and the first cluster and the second cluster are different types of computing resource clusters.
In some examples, a method of power optimisation for a busy-polling device in a data centre comprises receiving, by a controller, at least one performance characteristic associated with the busy-polling device, wherein the busy-polling device comprises a processor comprising multiple cores, processing, by the controller, the at least one performance characteristic, whereby to determine, for each core of the multiple cores, whether its load meets one or more optimisation criteria, and, in response to determining that the load of a core of the multiple cores meets the one or more optimisation criteria, adjusting a frequency of the core of the multiple cores.
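A per-core control loop of this kind can be sketched as below. The thresholds, step size, and MHz bounds are invented for illustration; the disclosure leaves the optimisation criteria abstract.

```python
def adjust_core_frequencies(core_loads, freqs, low=0.2, high=0.8,
                            f_min=800, f_max=3000, step=200):
    """For each core of a busy-polling device, step its frequency down when
    its load is under `low` and up when over `high` (loads in [0, 1],
    frequencies in MHz, clamped to [f_min, f_max])."""
    new_freqs = []
    for load, f in zip(core_loads, freqs):
        if load < low:
            f = max(f_min, f - step)        # idle-ish core: save power
        elif load > high:
            f = min(f_max, f + step)        # busy core: add headroom
        new_freqs.append(f)
    return new_freqs
```

Because busy-polling cores report near-100% utilisation regardless of useful work, a real controller would derive `core_loads` from performance characteristics such as packet throughput rather than raw CPU utilisation.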
The present disclosure relates to object storage service-based storage methods and apparatuses. In one example method, an object storage server pre-obtains a configuration parameter, where the configuration parameter includes a target distribution manner selected by a tenant from a plurality of distribution manners, and the plurality of distribution manners include a sequential distribution manner and a discrete distribution manner. The object storage server creates a bucket based on the configuration parameter, where the bucket includes a plurality of logical partitions. When the object storage server obtains an object to be stored in the bucket, the object storage server may select a target logical partition from the plurality of logical partitions based on the configuration parameter and a name string of the object to store the object in the target logical partition.
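The two distribution manners can be sketched as two partition-selection rules over the object's name string; the first-byte bucketing for the sequential manner and the SHA-256 hash for the discrete manner are illustrative choices, not the server's specified algorithms.

```python
import hashlib

def select_partition(name: str, num_partitions: int, manner: str) -> int:
    """Pick the target logical partition for an object by its name string,
    under the tenant-chosen distribution manner."""
    if manner == "sequential":
        # Sequential manner: lexicographically close names land in the same
        # or nearby partitions (bucketed by the first byte here).
        return (name.encode()[0] if name else 0) * num_partitions // 256
    if manner == "discrete":
        # Discrete manner: hash the name so objects spread evenly and
        # avoid hot partitions under sequential-name workloads.
        digest = hashlib.sha256(name.encode()).digest()
        return int.from_bytes(digest[:4], "big") % num_partitions
    raise ValueError(f"unknown distribution manner: {manner}")
```

Sequential placement favours efficient prefix listing; discrete placement favours balanced write load. Exposing the choice as a bucket configuration parameter lets the tenant pick per workload.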
Provided in the present application are an instance scheduling method, a management platform, and a cluster. The method comprises: acquiring running data of each instance in an instance group during task execution, wherein the instance group comprises at least two instances among a plurality of instances; on the basis of the running data of each instance in the instance group during task execution, obtaining the quality of service (QoS) of each instance during task execution; and when the QoS of a first instance in the instance group during task execution is inconsistent with the QoS of a second instance during task execution, performing resource scheduling on the first instance, such that the QoS of the first instance during task execution is consistent with the QoS of the second instance during task execution, wherein the second instance is any instance in the instance group excluding the first instance. The method can ensure the consistency of QoS of instances in the same instance group during task execution, and can improve the resource utilization rate.
Provided in the present application are a data transmission method based on bus technology, and a related apparatus, which are applied to the technical field of computers. In the embodiments of the present application, the sequence number of a packet and feedback information from a second computing device are both synchronized to an aggregated network interface card, which centrally manages and configures them. After the aggregated network interface card confirms which network interface card corresponds to the sending of the packet, it notifies that network interface card to release its packet buffer. Reliable transmission of data between network devices can thereby be effectively ensured, and the release of data buffered in a physical network interface card is effectively guaranteed, avoiding buffer overflows caused by a network interface card that is required to release a packet buffer being incapable of doing so.

This application provides a weather forecast method, including: obtaining meteorological data and target time; determining a plurality of first AI models from a model library based on the target time, where different first AI models are used for forecasting weather at different time intervals; and performing inference based on the obtained meteorological data by using the plurality of first AI models, to obtain a first weather forecast result at the target time.
This application provides a facial synthesis method and apparatus. In embodiments, first facial information of a first model and second facial information of a second model are obtained, where facial information includes category information of a trait of each facial feature of a face. Third facial information is synthesized based on the first facial information and the second facial information according to a trait inheritance rule, where the third facial information corresponds to a third model, and the trait inheritance rule indicates a synthesis coefficient corresponding to the category information of the trait of the facial feature. Therefore, accuracy of 3D face model synthesis is ensured, and efficiency of 3D face model synthesis is improved.
The present application discloses a function integration method applicable to a software development platform, the method comprising: the software development platform receiving interface documentation of a first function, and extracting attribute information of the first function from the interface documentation; on the basis of the attribute information of the first function, invoking a code generator to generate interface invocation code of the first function; receiving attribute information of a proxy configured by a user, the proxy being configured to drive an application to perform a second function via the interface invocation code; and establishing a connection between the proxy and the application. According to the method, on the basis of the attribute information of the function extracted from the interface documentation, the code generator is invoked to automatically generate the interface invocation code of the function, and by configuring a function list supported by the proxy and by connecting the proxy to the application, the application can perform the function supported by the proxy by executing the interface invocation code, thereby achieving automatic integration of functions and applications, reducing integration complexity and workload, enabling flexible and diverse function combinations, and meeting service requirements.
The present application provides a process flow management method. The method uses process information of a service and service data reported by a service device executing the service to automatically generate a process flow, so as to obtain the execution order of a plurality of processes in the service. When the process flow is generated, an initial process identifier and a process end condition in the process information can be used for constraining the initial process and the end process of the process flow to be generated. The process flow can thus be generated automatically, merely by acquiring the process information and the service data of the service. Also disclosed is a process flow management platform.
Disclosed in the present application are a cluster management method based on a cloud management platform, and a cloud management platform, which can maintain cross-cluster communication between micro-services of tenants and ensure the availability of an entire cloud service system, such that the service requirements of the tenants can be met. The method in the present application comprises: by means of a management interface, a cloud management platform receiving a management policy sent by a tenant for a cluster of the tenant, wherein the management policy is used for instructing the execution of a management operation on the cluster when a Core-DNS component or a Kube-proxy component or a micro-service is in an unavailable state; on the basis of the management policy, the cloud management platform creating a liveness probe component for the cluster, wherein the liveness probe component is used for performing a liveness probe on the Core-DNS component or the Kube-proxy component or the micro-service, so as to obtain a liveness probe result, and providing the liveness probe result to the cloud management platform; and if it is determined on the basis of the liveness probe result that the Core-DNS component or the Kube-proxy component or the micro-service is in the unavailable state, the cloud management platform executing the management operation on the cluster, wherein the management operation includes at least one of the following: migrating a node and draining traffic.
A cloud resource management system includes a cloud resource management node, an underlying management module, and an external resource management module. The cloud resource management node is configured to deploy the underlying management module; the underlying management module is configured to manage an internal resource cluster and deploy the external resource management module based on the internal resource cluster; and the external resource management module is configured to manage an external resource cluster. The external resource management module is further configured to: manage a basic resource group in the external resource cluster, create a virtual instance based on the basic resource group, and deploy a service of a tenant in the virtual instance.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
45.
Data Processing Method, Data Processing Engine, Computing Device, and Storage Medium
A data processing method includes determining a to-be-accessed external database for a data processing job and a target access object in the external database; caching metadata of the target access object from the external database into a memory of the data processing engine; and accessing the metadata from the memory to execute the data processing job. According to the foregoing method, the metadata in the external database is cached into the memory of the data processing engine and accessed from the memory, which avoids accessing the catalog of the external database multiple times over the network.
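The caching step can be sketched as a small in-memory cache in front of the catalog. The class and the `fetch_fn` callback are illustrative assumptions; `fetch_fn` stands in for a network round trip to the external database's catalog.

```python
class MetadataCache:
    """Minimal sketch: cache external-catalog metadata in engine memory so
    that repeated accesses to the same object do not go over the network."""

    def __init__(self, fetch_fn):
        self._fetch = fetch_fn   # simulated network call to the external catalog
        self._cache = {}
        self.network_calls = 0   # counts actual catalog accesses

    def get(self, object_name):
        if object_name not in self._cache:
            self._cache[object_name] = self._fetch(object_name)
            self.network_calls += 1
        return self._cache[object_name]
```

However many times the job touches the same target access object, the catalog is contacted only once.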
G06F 12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
A data packet processing method includes that when a destination address of a first data packet points to a first object, after writing identification information into a DSCP field of the first data packet, a first device sends the first data packet. After obtaining the first data packet, a second device allows or forbids, based on the identification information in the DSCP field of the first data packet, access behavior corresponding to the first data packet. Because a value of the DSCP field generally does not change in a transmission process of a data packet, the second device can obtain real identification information from the DSCP field, and can accurately make a decision of allowing or forbidding access behavior corresponding to the data packet.
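The DSCP field occupies the upper 6 bits of the IP header's traffic-class byte (the lower 2 bits are ECN). A sketch of writing identification information into that field and checking it on receipt, with a hypothetical allow-list standing in for the second device's policy:

```python
DSCP_SHIFT = 2        # DSCP is the upper 6 bits of the traffic-class byte
DSCP_MASK = 0b111111

def set_dscp(tos_byte, ident):
    """Write 6-bit identification information into the DSCP field,
    preserving the 2 ECN bits."""
    assert 0 <= ident <= DSCP_MASK
    return (ident << DSCP_SHIFT) | (tos_byte & 0b11)

def get_dscp(tos_byte):
    """Read the identification information back out of the DSCP field."""
    return (tos_byte >> DSCP_SHIFT) & DSCP_MASK

def access_allowed(tos_byte, allowed_ids):
    """Second-device decision (sketch): allow or forbid the access
    behavior based on the identification carried in the DSCP field."""
    return get_dscp(tos_byte) in allowed_ids
```

Because intermediate hops generally leave DSCP intact, the value read by the second device matches what the first device wrote.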
H04L 47/2408 - Traffic characterised by specific attributes, e.g. priority or QoS, for supporting different services, e.g. a differentiated services [DiffServ] type of service
In some examples, a graphics processing method to process electronic data for display on a display screen comprises using an aligned, vectored data structure in a Gilbert-Johnson-Keerthi algorithm run on a single instruction multiple data, SIMD, processor to evaluate the distance between objects of a display scene which is to be rendered and displayed on the display screen, said structure including separating axes, position and rotation of a local coordinate system and a position of each point in a convex set, and implementing said Gilbert-Johnson-Keerthi algorithm using a support function and a single loop only, wherein the support function comprises a loop-free and a branch-free function to support a mapping function, and the single loop repeats the algorithm until an optimum point for a shortest distance between two objects is identified.
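The support function at the heart of GJK maps a direction to the point of a convex set furthest along that direction. A plain scalar sketch of this mapping (the abstract's SIMD version would evaluate the dot products in vectorised, branch-free form; this illustration is not that implementation):

```python
def support(points, direction):
    """Support mapping for a convex point set in 2D: return the point
    with the maximum dot product against `direction`."""
    return max(points,
               key=lambda p: p[0] * direction[0] + p[1] * direction[1])
```

GJK repeatedly calls this mapping on the Minkowski difference of two convex sets, refining a simplex until the closest point (and hence the shortest distance between the objects) is found.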
The present application belongs to the technical field of cloud computing and provides a computing system, a multi-round session inference method, an apparatus and a computing device cluster. In the system, an external storage device stores a historical key value cache of a completed session. As the capacity of the external storage device is far greater than the capacity of an HBM in an accelerator, the hit rate of the historical key value cache can be increased, thereby avoiding recomputing the key value cache. In addition, when the accelerator processes a session, a host can preload a historical key value cache of a session to be processed in a task queue from the external storage device to an internal memory of the host; when performing i-th layer computing on the session, the accelerator can preload from the internal memory of the host the historical key value cache required for the (i+1)-th layer computing. As the computing process and the data loading process are synchronously performed, the computing process does not need to wait for the completion of data loading, such that the time overheads of the accelerator accessing the external storage device can be hidden, improving the efficiency of multi-round session inference.
The present application belongs to the technical field of information. Disclosed are a resource change method and apparatus, and a device and a computer-readable storage medium. The method comprises: acquiring an IaC file submitted by a user, wherein the IaC file comprises codes used for describing attributes of resources to be changed, and the resources are managed on the basis of an IaC service; on the basis of the IaC file, grouping a plurality of resources to obtain a plurality of resource groups, wherein target attributes of resources comprised in each resource group have the same value; on the basis of the IaC file, changing resources comprised in a first resource group among the plurality of resource groups; and on the basis of the IaC file, changing resources comprised in a second resource group among the plurality of resource groups, wherein the change batch of the first resource group precedes the change batch of the second resource group. Resources are grouped first, then batch processing is performed on the basis of resource groups, and overall changes are performed by using a resource group as a change granularity, so as to effectively control the number of resources involved in each overall change. Therefore, even if an abnormality occurs during the change process, the number of resources affected is small.
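The grouping step can be sketched as partitioning resources by the value of the target attribute; each resulting group is then changed as one batch. The function and the attribute key are illustrative assumptions.

```python
from collections import defaultdict

def group_resources(resources, target_attribute):
    """Group resources whose target attribute has the same value.

    Each returned group becomes one change batch, so a failure during
    a batch affects only the resources in that group.
    """
    groups = defaultdict(list)
    for resource in resources:
        groups[resource[target_attribute]].append(resource)
    return list(groups.values())
```

Batches are then applied in order (the first resource group before the second), bounding the blast radius of any abnormality during the change.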
G06F 16/16 - File or folder operations, e.g. details of user interfaces specifically adapted to file systems
G06F 16/178 - Techniques for file synchronisation in file systems
H04L 67/1095 - Replicating data or mirroring data, e.g. scheduling or transport for data synchronisation between network nodes
Provided in the present application are a backup and disaster recovery method, and distributed systems. In the method, a first distributed system comprises a first configuration gateway, a first control node and a first data node, wherein the first configuration gateway determines that a fault occurs in the first control node, and determines, from among N candidate backup control nodes corresponding to a first region, a second control node in a second distributed system to be a backup control node, the priority of the second control node being higher than the priorities of nodes, other than the second control node, among the N candidate backup control nodes; the first configuration gateway receives first configuration information sent by a second configuration gateway in the second distributed system, the first configuration information being generated by the second control node on the basis of a first configuration parameter corresponding to the first region; and the first configuration gateway sends the first configuration information to the first data node. By means of the solution, a cross-region backup and disaster recovery method can be realized, thereby improving the stability and safety of data service provision performed by distributed systems.
H04L 41/0668 - Management of faults, events, alarms or notifications using network fault recovery by dynamic selection of recovery network elements, e.g. replacement by the most appropriate element after failure
51.
CONTROLLABLE DIFFUSION-ASSISTED PIPELINE TO IMPROVE UNSUPERVISED DOMAIN ADAPTATION FOR SEMANTIC SEGMENTATION
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
G06V 10/774 - Generation of sets of training patterns; Processing of image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Bootstrap methods, e.g. bagging or boosting
G06V 20/56 - Context or environment of the image exterior to a vehicle, obtained from on-board sensors
The present application relates to the field of computers, and in particular to a method and apparatus for extracting a watermark. Embedding a digital watermark in a three-dimensional model can reduce the risk of illegitimate use of the three-dimensional model. However, a three-dimensional model may be converted into two-dimensional data, and the attacks simulated by a neural network during training are usually attacks on the three-dimensional model. Thus, when the three-dimensional model is converted into two-dimensional data, the shape of the watermark is difficult to predict, and a neural network trained on the basis of such an attack network has difficulty extracting an effective watermark from the two-dimensional data. In the present application, an electronic device acquires a target file and then, on the basis of the type of the target file, uses a corresponding decoder in a neural network to extract the watermark, for example using a three-dimensional model decoder to process a three-dimensional model file, and using a two-dimensional data decoder to process a two-dimensional data file. Damage to the watermark caused by a given attack means is thereby counteracted in a targeted manner, improving the success rate of extracting a watermark from an attacked three-dimensional model.
Disclosed in embodiments of the present application are a routing method and apparatus, used for reducing the interconnection costs of different private networks. The method in the embodiments of the present application comprises: a first leaf node sends a routing configuration request to a central controller, wherein the routing configuration request is used for requesting the central controller to configure routing information corresponding to one or more leaf nodes, and the routing information comprises one or more of the following: a virtual private network identifier, a virtual extensible local area network identifier and an Internet protocol (IP) network segment all corresponding to the one or more leaf nodes; the first leaf node generates a first message on the basis of the routing information, and sends the first message to a backbone acceleration node, wherein a destination IP address of the first message is an IP address in a private network corresponding to a second leaf node, and the first leaf node and the second leaf node are located in different private networks; and the backbone acceleration node forwards the first message on the basis of a private network location routing table, wherein the private network location routing table is used for querying an egress node of the first message, and the egress node is a backbone acceleration node corresponding to the second leaf node.
Disclosed in the present application are a cloud management platform-based flow table management method and a cloud management platform, so as to improve the loading rate of flow tables. The method of the present application comprises: a tenant can input into a configuration interface provided by a cloud management platform a primary service logic set by the tenant for a network device thereof; then, the cloud management platform can configure the primary service logic in the network device of the tenant, and after receiving a first packet, the network device can execute the primary service logic on the first packet, that is, the network device can extract a first flow feature from the first packet; the network device may then perform a first hash operation on the first flow feature to obtain a first index, and perform a second hash operation on the first flow feature to obtain a second index; and subsequently, the network device may acquire a third index on the basis of the first index and the second index, and on the basis of the third index, detect whether a flow table of the network device contains the first flow feature, and if the flow table does not contain the first flow feature, the network device writes the first flow feature into the flow table.
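The two-hash indexing and write-on-miss behavior can be sketched as follows. The way the first and second indexes are combined into the third (XOR here) and the table layout are illustrative assumptions, not the disclosed scheme.

```python
def flow_index(flow_feature, table_size):
    """Combine two independent hashes of the flow feature into one index.

    The two salted hashes stand in for the first and second hash
    operations; XOR is an assumed combination for illustration.
    """
    h1 = hash(("hash-1", flow_feature))
    h2 = hash(("hash-2", flow_feature))
    return (h1 ^ h2) % table_size

def lookup_or_insert(flow_table, flow_feature, table_size=1024):
    """Detect whether the flow table contains the feature at the combined
    index; if not, write the feature (first packet of the flow)."""
    idx = flow_index(flow_feature, table_size)
    bucket = flow_table.setdefault(idx, set())
    if flow_feature in bucket:
        return True               # flow already present
    bucket.add(flow_feature)      # miss: write the flow feature
    return False
```

Subsequent packets of the same flow then hit the table directly instead of re-executing the primary service logic.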
The present application discloses a video processing method based on a cloud management platform, and the cloud management platform, capable of improving the utilization rate of encoding/decoding resources, thereby avoiding the waste of resources. The method of the present application comprises: a first tenant can send an exclusive resource creation request for a first service instance to a creation interface provided by a cloud management platform, so that, on the basis of the request, the cloud management platform can create a plurality of first encoding/decoding instances exclusive for the first service instance, each first encoding/decoding instance supporting at least one encoding/decoding specification; when needing encoding/decoding, the first service instance can send a first encoding/decoding request to the cloud management platform, the first encoding/decoding request being used for instructing to acquire a second video of a first encoding/decoding specification on the basis of a first video; and then, on the basis of the request, the cloud management platform can select, from among the plurality of first encoding/decoding instances, a second encoding/decoding instance supporting the first encoding/decoding specification, and enable the second encoding/decoding instance to encode/decode the first video, to obtain the second video.
The present application relates to the technical field of cloud services, and discloses a cloud service request processing method and system. The cloud service request processing system comprises an interaction component, a plug-in framework, plug-in management components and at least one plug-in. The interaction component is used for acquiring a cloud service request sent by a tenant and providing the cloud service request for the plug-in framework, the cloud service request being used for requesting a server to provide a cloud service for the tenant; the plug-in framework is used for operating on the basis of the cloud service request and sending a plug-in access request to the plug-in management components, wherein the plug-in access request indicates access to a first plug-in used for processing the cloud service request, and the first plug-in is one of the at least one plug-in; the plug-in management components are used for accessing the first plug-in on the basis of the plug-in access request to obtain access results, and feeding back the access results to the plug-in framework; and the plug-in framework is further used for obtaining a processing result for the cloud service request on the basis of the access results. The present application ensures the operation security and stability of the plug-in framework.
This application provides a cloud service test method, including: constructing an application programming interface API knowledge graph of a cloud service, where the API knowledge graph includes a reference relationship between an API parameter and a resource object; then identifying an API dependency relationship based on the reference relationship between an API parameter and a resource object; and then testing the cloud service based on the API dependency relationship, to obtain a test result. In the method, the API dependency relationship is identified based on the reference relationship between an API parameter and a resource object. Even if API parameter names do not match, the API dependency relationship can be accurately identified based on a same pointed resource object, thereby improving accuracy of a cloud service test. The method is universal and can meet a service requirement.
A service governance method includes: obtaining first plug-in code, where the first plug-in code includes first service governance logic and first instrumentation logic, and the first service governance logic is decoupled from the first instrumentation logic; inserting a first instrumentation into an application program based on the first instrumentation logic, where the first instrumentation is used for requesting a governance function scheduler to allocate a first governance logic instance from multiple versions of governance logic instances, the first governance logic instance is used for executing the first service governance logic, and the application program is a running program; and performing service governance based on the first service governance logic when the first instrumentation is executed.
Embodiments of the present application disclose a method for updating micro-service instances, which is applied to a system for updating micro-service instances. In the system, multiple plug-ins in a plug-in center provide the capability to update micro-service instances of various workload types in various operating environments (such as cloud-native, virtualized, and serverless). Therefore, after an instance to be updated is determined, a target plug-in adapted to the current operating environment may be dynamically determined on the basis of the workload type of the instance to be updated, so as to access the corresponding operating environment and thereby update the instance in the micro-service cluster. Hence, by means of the present method, micro-service instances in different operating environments can be updated efficiently, improving the updating efficiency of the micro-service cluster. The method can adapt to the requirements of various application scenarios, with high universality and easy maintenance and management.
Provided in the embodiments of the present application are a device management method and apparatus based on bus technology, and a system. Without the process needing to be initiated by a bus controller, a bus device can actively send its device information to the bus controller after being powered on; after receiving the device information, the bus controller allocates an address space of the bus to the bus device on the basis of that information. The latency of the bus controller discovering the bus device is thus reduced, and the efficiency with which the bus controller manages the bus network is improved.
A cloud computing technology-based data migration method includes a cloud management platform that creates a first storage instance in a first cloud data center in a plurality of cloud data centers, where the first storage instance is mounted with a second storage instance. The cloud management platform creates a first computing instance in the first cloud data center or a second cloud data center in the plurality of cloud data centers. The first computing instance sends a data read request to the first storage instance, and when the first storage instance does not hold the target data requested by the data read request, the first storage instance retrieves the target data from the plurality of pieces of data stored in the second storage instance, stores the target data locally, and provides the target data to the first computing instance.
In some examples, a method of image-based feature matching for images of a scene comprises acquiring a set of images comprising multiple images of the scene, respective ones of the multiple images captured under differing lighting conditions, for each image of the multiple images in the set of images, using unsupervised domain adaptation, UDA, providing respective semantic labels for features representing segmented objects and/or regions of the images, using the semantic labels, comparing the multiple images in the set of images, whereby to create, based on semantic similarity, a plurality of image pairs, wherein each of the plurality of image pairs comprises a first image of the scene captured under a first lighting condition and a second image of the scene captured under a second lighting condition.
G06V 10/60 - Extraction of image or video features relating to luminance or illumination properties, e.g. using a reflectance or lighting model
G06V 10/774 - Generation of sets of training patterns; Processing of image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Bootstrap methods, e.g. bagging or boosting
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
63.
Simulation Training Method and Apparatus, and Computing Device Cluster
A simulation training method is applied to a cloud management platform and includes providing a first configuration interface, where the first configuration interface is configured to obtain an identifier of a target simulation environment and an identifier of a target simulation device; providing a second configuration interface, where the second configuration interface is configured to obtain a task instruction; and executing, using the target simulation device, a task according to the task instruction in the target simulation environment, to obtain an execution result.
G06F 30/27 - Design optimisation, verification or simulation of the object designed using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
64.
Method and Apparatus for Debugging Cloud Service Application Programming Interface API and Related Device
A method for debugging a cloud service application programming interface (API) on a cloud computing platform includes that the cloud computing platform provides an API configuration interface; then obtains, through the API configuration interface, a first cloud service API selected by the user, obtains a parameter value that is of a first parameter of the first cloud service API and that is entered by the user, and recommends, to the user, a parameter group that is of the first cloud service API and that is related to the first parameter; and obtains a parameter value that is of a second parameter in the parameter group and that is entered by the user. Finally, the cloud computing platform deploys, in a cloud computing resource, a cloud service associated with the first cloud service API, to debug the first cloud service API.
Disclosed in the embodiments of the present application are a method and apparatus for inference in a large language model, which are used for reducing the bandwidth consumption of inference in a large language model. The method in the embodiments of the present application comprises: a computing device acquiring input data of a large language model running on a plurality of processors, wherein the input data comprises input text of a user and reference information generated on the basis of the input text, and the reference information comprises domain-related knowledge corresponding to the input text and timeliness information corresponding to the input text; on the basis of the input data, matching historical key-value cache data in a shared storage pool, so as to determine key-value cache data corresponding to the input data, wherein the historical key-value cache data is used for indicating intermediate data generated when the large language model processes different historical input data, the shared storage pool is used for storing historical key-value cache data generated by the plurality of processors, and the different historical input data comprises historical input text and reference information corresponding to the historical input text; and on the basis of the key-value cache data, generating an output result corresponding to the input data.
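The matching of input data against historical key-value cache data in a shared storage pool can be sketched as a keyed lookup. The class, the key construction (hashing the input text together with its reference information), and the exact-match policy are illustrative assumptions; the disclosed method may use a more tolerant matching scheme.

```python
import hashlib

class SharedKVCachePool:
    """Sketch of a shared pool of historical key-value cache data, keyed by
    input text plus its reference information, so that inference across a
    plurality of processors can reuse intermediate data instead of
    recomputing it."""

    def __init__(self):
        self._pool = {}

    @staticmethod
    def _key(input_text, reference_info):
        raw = input_text + "|" + reference_info
        return hashlib.sha256(raw.encode()).hexdigest()

    def put(self, input_text, reference_info, kv_data):
        """Store key-value cache data produced while processing this input."""
        self._pool[self._key(input_text, reference_info)] = kv_data

    def match(self, input_text, reference_info):
        """Return matching historical key-value cache data, or None on miss."""
        return self._pool.get(self._key(input_text, reference_info))
```

A hit lets the model skip regenerating the cached intermediate data, which is the source of the bandwidth saving described above.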
Disclosed in embodiments of the present application are an inference method and device for a large language model, which are used for reducing the bandwidth consumption of large language model inference. The method of the embodiments of the present application comprises: a parallel inference system receives an inference task of a large language model, wherein the inference task comprises a compute-intensive task and a self-attention computing task; a processor executes the compute-intensive task, generates intermediate data, and sends the intermediate data to a near-memory computing module, wherein the compute-intensive task comprises one or more of the following tasks of the large language model: a feedforward neural network computing task, a projection task, and a layer normalization task; the near-memory computing module executes the self-attention computing task on the basis of the intermediate data to generate a near-memory computing result; and on the basis of the near-memory computing result, the processor generates an inference result corresponding to the inference task.
Disclosed in the present application are a vector processing method based on a cloud management platform, and a cloud management platform, which can realize global sharing of the resources of a plurality of computing nodes, thereby utilizing those resources more effectively and thus reducing cost to a certain extent. The method in the present application comprises: after receiving a vector processing request triggered by a tenant, the cloud management platform acquires, on the basis of the vector processing request, state information of a plurality of computing nodes serving the tenant; the cloud management platform then analyzes the state information so as to select, from among the plurality of computing nodes, the computing node with the optimal state as a first computing node, and sends the vector processing request to the first computing node; and after obtaining the vector processing request, the first computing node executes the vector processing request so as to complete vector processing.
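The selection step above can be sketched as a scoring function over node state. What "optimal state" means is not specified, so the criterion here (most free memory, lowest CPU load as tie-breaker) is an assumption for illustration only.

```python
def select_first_computing_node(nodes):
    """Sketch of the selection step: among the computing nodes serving
    the tenant, pick the one whose state is 'optimal' -- here,
    illustratively, the most free memory, breaking ties by the lowest
    CPU load. A real state analysis would be richer."""
    return max(nodes, key=lambda n: (n["free_mem"], -n["cpu_load"]))
```

The chosen node then receives the vector processing request.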
Methods and systems for generating a symbolic model from a markup document, and for instantiating a model instance from a symbolic model, are described. A markup document containing human language content and mathematical content is parsed into a symbolic model that contains only symbolic code representing an optimization problem. The markup document is parsed to extract a markup declaration; the markup declaration is then processed to obtain a math content span, any metadata entities, and any relationships between the metadata entities and the math content span. The math content span is processed into a math content parse tree, which is converted into symbolic code of the symbolic model using any relationship between a metadata entity and the math content span. The symbolic model can be instantiated using data definitions.
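The declaration-to-symbolic-code pipeline can be illustrated with a toy version. The markup syntax here (`objective: <expr>`) is invented for illustration; the sketch only shows the span extraction, parse-tree, and symbolic-code stages.

```python
import ast

def markup_to_symbolic(declaration):
    """Toy version of the pipeline above: pull the math content span out
    of a markup declaration, parse it into a tree, and emit symbolic
    code. Real markup and metadata handling would be far richer."""
    kind, _, math_span = declaration.partition(":")
    tree = ast.parse(math_span.strip(), mode="eval")  # math content parse tree
    symbolic = ast.unparse(tree)                      # back to symbolic code
    return {"kind": kind.strip(), "symbolic_code": symbolic}
```

For example, `markup_to_symbolic("objective: 2*x + 3*y")` yields a parse-tree-derived symbolic form of the objective expression.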
The present disclosure provides a locking method for locking a directory tree and related products. The locking method includes: obtaining an operation request; and performing, according to the operation request, a multiple-granularity locking (MGL) operation on multiple MGL lock objects in an array of MGL lock objects, where the array of MGL lock objects includes a plurality of MGL lock objects used for recording locking status of nodes in the directory tree, and the multiple MGL lock objects correspond to multiple nodes including at least one target node and one or more parent nodes thereof. MGL of the directory tree can be implemented, thereby improving concurrency of filesystem metadata tree operations.
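The MGL operation above can be sketched over a path in a directory tree: intention locks on every parent node and an exclusive lock on the target. The lock modes and conflict rules here are deliberately simplified for illustration.

```python
def mgl_lock(lock_array, path):
    """Sketch of a multiple-granularity lock for a directory path such as
    ['/', '/a', '/a/b']: 'IS' (intention) locks on parents, 'X'
    (exclusive) on the target node."""
    *parents, target = path
    # Check for conflicts before taking any lock.
    for node in parents:
        if lock_array.get(node) == "X":
            raise RuntimeError(f"conflict: parent {node} is exclusively locked")
    if lock_array.get(target) is not None:
        raise RuntimeError(f"conflict: {target} is already locked")
    for node in parents:
        lock_array[node] = "IS"
    lock_array[target] = "X"
    return lock_array
```

Because siblings only share intention locks on common ancestors, operations on disjoint subtrees can proceed concurrently, which is the concurrency gain the abstract refers to.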
Embodiments of the present application relate to the technical field of cloud computing, and provide a cloud desktop sleep method and apparatus, a device, and a computer-readable storage medium. The method comprises: configuring, in a cloud desktop system, preset configuration items comprising parameters that affect cloud desktop sleep; in response to a sleep request and on the basis of the preset configuration items, the cloud desktop system checking target configuration items of a target cloud desktop; and if the configuration items pass the check, issuing a sleep instruction to an instance running the target cloud desktop and performing a sleep operation on the target cloud desktop. In this way, the configuration items that affect cloud desktop sleep are checked, and the sleep operation is performed only if they pass the check, thereby ensuring the reliability of sleep. In addition, checking the configuration items ensures compatibility with the sleep logic of the cloud desktop, avoiding the low sleep reliability caused by a mismatch between the sleep logic of a virtual machine management component and that of the cloud desktop.
The present application relates to the technical field of cloud computing, and discloses a conference participation method and apparatus, a device, and a storage medium. When receiving a first operation event of a first user, a client can enable a digital human of the first user to participate in an online conference, and display, in a conference interface of the online conference, the users (comprising the digital human of the first user) participating in the online conference. Enabling the digital human to participate in the online conference satisfies the user's timeliness requirement for conference participation, thereby improving the user's conference participation experience. The method comprises: a client receives a first operation event of a first user; in response to the first operation event, the client sends to a server a conference participation request indicating that a digital human of the first user requests to participate in a first conference; the server then sends a conference participation response to the client; and finally, the client displays a conference interface of the first conference, wherein the conference interface displays the users participating in the first conference, and these users comprise the digital human of the first user.
The embodiments of the present application relate to the technical field of computers. Provided are an API permission management method and apparatus for a service system, and a device and a storage medium. The method comprises: recording a first correspondence between an identifier of a first service API, which is associated with a first service page, and an identifier of a first item; acquiring a first calling request; and on the basis of the identifier of the first service API and the identifier of the first item carried in the first calling request, and the first correspondence, allowing the first calling request to access a service corresponding to the first service API. Compared with a method that only determines whether a user has permission to call an API, the embodiments of the present application use item identifiers to distinguish the item to which an API belongs. Permission management over a service API is thus performed from two aspects, namely the item to which the service API belongs and the identifier of the service API, thereby improving the reliability of API permission management and avoiding network security problems caused by unauthorized item permissions.
Disclosed in embodiments of the present application is a data management method. The method is applied to a data management system, and the data management system is located in an isolated operation environment, so that the use of third-party data can be effectively limited in the isolated operation environment, thereby preventing the leakage of the third-party data provided by a data provider. The third-party data is stored in a storage node of the data management system; when an execution node of the data management system runs a service program to execute a service of a data user, the execution node needs to send to the storage node a data reading request for the third-party data; and on the basis of a data reading strategy, the storage node can control reading of the third-party data by the service program, so that the use of the third-party data by the data user by means of the service program is monitored by the storage node, thereby ensuring that the third-party data is safely and appropriately used by the data user, and reducing the risk of data leakage.
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems, during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure, by executing in a restricted environment, e.g. sandbox or secure virtual machine
74.
MEDIA DATA STREAM PROCESSING METHOD AND APPARATUS, CLUSTER, MEDIUM, AND PROGRAM PRODUCT
Embodiments of the present application relate to the technical field of data processing, and disclose a media data stream processing method and apparatus, a cluster, a medium, and a program product, which prevent media content involving private information in audio-video conferences from being leaked, thereby expanding privacy protection functions of audio-video conferences. The method comprises: obtaining a media data stream, the media data stream being data transmitted by a first terminal to other participating terminals in an audio-video conference, and the other participating terminals being used to display media content corresponding to the media data stream; if the media data stream comprises a traceless identifier, on the basis of the traceless identifier, performing traceless processing on the media data stream to obtain a processed media data stream, the traceless processing being used to perform masking processing on media content corresponding to part or all of media data in the media data stream; and sending the processed media data stream to a recording server, such that the recording server records media content indicated by the processed media data stream.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating coded video stream scene graphs, involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
75.
METHOD AND APPARATUS FOR CONTROLLING COMMUNICATION PERMISSION IN CONFERENCE
Provided in the present application is a method for controlling communication permission in a conference. The method comprises: acquiring permission matching rules configured by a manager, wherein the permission matching rules comprise communication rules between a plurality of permission types; in response to an acquisition request from a target user, sending to the target user a target permission type corresponding to the target user, wherein the conference joined by the target user further comprises at least one participant; acquiring target media data sent by a target terminal corresponding to the target user, wherein the target media data carries the target permission type; and on the basis of the target permission type, the permission type corresponding to the at least one participant, and the permission matching rules, determining whether to send the target media data to the at least one participant, wherein each participant corresponds to one permission type. The method uses the permission matching rules to effectively improve the convenience and flexibility of communication permission management in a conference.
The present application relates to the technical field of cloud services, and discloses a server system based on public cloud technology and an access method therefor. The server system comprises a server and a peripheral device inserted into the server. A virtual machine manager in the server is used for providing a virtual peripheral device and a virtual memory for a virtual machine in the server, wherein the virtual peripheral device is obtained by performing device simulation on the basis of the peripheral device, the virtual memory is obtained by performing device simulation on the basis of a memory configured for the peripheral device, and a virtual device driver of the virtual peripheral device is used for, when the virtual peripheral device accesses a target guest physical address (GPA) of the virtual memory, sending to the peripheral device an access request carrying the target GPA. The peripheral device is used for obtaining, on the basis of the access request, a target host physical address corresponding to the target GPA, acquiring target data recorded at the target host physical address, and sending the target data to the virtual device driver. The present application effectively improves the address translation performance of the server system.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
77.
Cloud Computing Technology-Based Internet of Things Device Management Method and Platform
A cloud computing technology-based internet of things device management method includes an internet of things management platform that provides a forwarding policy configuration interface, where the forwarding policy configuration interface is used for obtaining a forwarding permission policy configured by an internet of things application for at least one gateway device, and the forwarding permission policy indicates that the at least one gateway device has forwarding permission to perform data forwarding between at least one internet of things device and the internet of things management platform; and the internet of things management platform configures, according to the forwarding permission policy, the at least one gateway device to perform data forwarding between the at least one internet of things device and the internet of things management platform.
H04L 12/66 - Arrangements for connecting between networks having differing types of switching systems, e.g. gateways
H04L 1/22 - Arrangements for detecting or preventing errors in the information received using redundant apparatus to increase reliability
H04L 67/56 - Provisioning of proxy services
H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
78.
SYSTEMS AND METHODS FOR GLOBAL CONSISTENCY IN DISTRIBUTED SHARED-DATA DATABASES
Apparatus, systems, and methods for global consistency in distributed shared-data databases may be provided. According to an aspect, a method may be provided for ensuring strong consistency in data retrieval. The method includes receiving, by a receiving node of a database comprising a plurality of nodes, a query for a value of a data item in the database. The method may further include requesting, by the receiving node, from each node of the plurality of nodes, a latest timestamp corresponding to a latest transaction. The method may further include receiving, by the receiving node from said each node, the latest timestamp. The method may further include managing, by the receiving node, a cache of the receiving node based on each received latest timestamp. The method may further include reading, by the receiving node, the value of the data item.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
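The read path above can be sketched as: gather the latest transaction timestamps from all nodes, invalidate cache entries that may predate the newest one, then read. The data structures here are invented for illustration.

```python
def read_with_global_consistency(receiving_node, nodes, key):
    """Sketch of the consistent read above. receiving_node holds a cache
    mapping key -> (timestamp, value) and an authoritative 'store';
    nodes report their latest transaction timestamps."""
    latest = max(node["latest_ts"] for node in nodes)
    cache = receiving_node["cache"]
    # Drop entries that may predate the newest committed transaction.
    for k in [k for k, (ts, _) in cache.items() if ts < latest]:
        del cache[k]
    if key in cache:
        return cache[key][1]
    value = receiving_node["store"][key]  # authoritative read
    cache[key] = (latest, value)
    return value
```

Asking every node for its latest timestamp before serving from cache is what makes the read strongly consistent in this sketch.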
79.
VIDEO LIVE STREAMING METHOD AND APPARATUS BASED ON DIGITAL HUMAN TECHNOLOGY
A video live streaming method and apparatus based on digital human technology. The method comprises: acquiring first information inputted by a user, configuring a first digital human model to generate a script on the basis of the first information, and configuring a first digital human to perform video live streaming on the basis of the script, wherein the first information is information for display during live streaming, the script comprises information of at least one paragraph of text, and the information of each paragraph of text is associated with a performance style feature. Therefore, during video live streaming, the first digital human has a performance style learned on the basis of the first digital human model, thus improving the degree of personification of digital humans and reducing their dependence on manual control.
This application discloses an objective function solving method and apparatus, and a computing device cluster, and belongs to the field of cloud computing technologies. The method includes: receiving an input solving requirement, where the solving requirement includes an objective function, a decision variable, and a constraint condition; determining, based on the solving requirement, the simplex method as the solving method for the objective function; in the process of solving the objective function using the simplex method, after solving according to a first pricing strategy, determining, based on the improvement in the objective function achieved by the current iteration, a second pricing strategy for the next iteration; and solving the objective function according to the second pricing strategy. According to this application, objective function solving efficiency can be improved.
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
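The adaptive pricing step above can be sketched as a rule that switches strategies when the last iteration's improvement stalls. The two strategy names are common simplex pricing rules used here only as placeholders; the threshold is an invented parameter.

```python
def next_pricing_strategy(prev_obj, new_obj, current, threshold=1e-3):
    """Sketch of the adaptive step: if the latest simplex iteration
    improved the (minimized) objective by less than the threshold,
    switch pricing rules for the next iteration; otherwise keep the
    current rule."""
    improvement = prev_obj - new_obj
    if improvement < threshold:
        return "steepest_edge" if current == "dantzig" else "dantzig"
    return current
```

Re-evaluating the pricing strategy after each iteration, rather than fixing it up front, is the efficiency idea the abstract describes.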
82.
SYSTEM AND METHOD OF EFFICIENT KNOWLEDGE-ENHANCED CHAIN-OF-THOUGHT PROMPTING
Methods and systems for processing a natural language input query that includes a question and a respective set of candidate answers for the question, including generating, based on the input query and a knowledge graph, natural language logic paths between at least some of the candidate answers and the question; forming a natural language prompt based on both the input query and the logic paths; and obtaining a response from a pretrained natural language processing model based on the natural language prompt.
A computing system 102 for tracking protected data in a memory 104 includes the memory 104 and one or more memory controllers 106A-N configured to access the memory. The memory 104 includes an is_protected flag 110 indicating whether the memory 104 is protected. The computing system 102 further includes a read_protected flag 108 associated with a process, which indicates whether the process is allowed to read protected data. The one or more memory controllers 106A-N are configured to receive data, store the data in the memory 104, determine whether the data is protected, and, if the data is protected, set the is_protected flag 110 of the memory 104 to indicate that the memory 104 is protected.
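The flag behaviour above can be sketched in software: storing protected data taints the memory region, and reads succeed only for processes whose read_protected flag allows it. The class is illustrative; the real mechanism lives in the memory controller hardware.

```python
class TaintTrackingController:
    """Sketch of the memory-controller behaviour described above. The
    is_protected / read_protected names mirror the description; the
    code itself is an invented illustration."""

    def __init__(self):
        self.is_protected = False  # per-memory-region flag
        self._data = None

    def store(self, data, data_is_protected):
        self._data = data
        if data_is_protected:
            self.is_protected = True  # taint the region; never cleared here

    def read(self, read_protected):
        if self.is_protected and not read_protected:
            raise PermissionError("process is not allowed to read protected data")
        return self._data
```

Note the one-way nature of the flag in this sketch: once protected data touches the region, every later read is gated.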
A computing apparatus is provided comprising a client and a server. The computing apparatus is configured to: obtain a machine learning code; split the machine learning code into a first part and a second part; execute the first part of the machine learning code on the server; execute the second part of the machine learning code on the client; and output a result of the machine learning code. In this way, the machine learning code may be split and executed over both the client and the server in an efficient way.
The arbitration system includes an arbitration module and M AZs. Each AZ includes a detection module and a plurality of service nodes. M is an integer greater than 2. The M AZs are configured to: run at least one application and provide at least one service for each application. The detection module in each AZ is configured to send detection information to the arbitration module. The arbitration module is configured to: receive an arbitration policy configured by a user and detection information from M detection modules; and determine network states between the M AZs based on the detection information of the M detection modules, and determine, based on the network states and the arbitration policy, an AZ in which a primary node that provides each service for an application of the user is located and an AZ in which a secondary node is located.
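The arbitration decision above can be sketched as choosing a primary AZ from pairwise network states plus a user-configured policy. The connectivity/policy inputs and the scoring rule are invented for illustration.

```python
def pick_primary_az(connectivity, policy_order):
    """Sketch of the arbitration step: given pairwise network states
    between M AZs (True = reachable) and a user-configured preference
    order, choose as primary the AZ that can reach the most other AZs,
    breaking ties by the policy order."""
    azs = list(connectivity)

    def score(az):
        reach = sum(1 for other in azs
                    if other != az and connectivity[az].get(other))
        return (reach, -policy_order.index(az))

    return max(azs, key=score)
```

With three or more AZs, this lets the arbitration module demote an AZ that has lost network connectivity while still honouring the tenant's preference among equally healthy AZs.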
Provided in the present application is a code generation method, comprising: a code development platform receives information input by a user in a first code file, extracts context information of the input information according to the data warehouse to which the first code file belongs, and retrieves a business knowledge base corresponding to the data warehouse according to the input information so as to obtain target business knowledge; the code development platform then combines the input information, the context information, and the target business knowledge according to a prompt template so as to obtain prompt information; and the code development platform inputs the prompt information into a large language model (LLM) for inference and presents to the user the code segment inferred by the LLM. Because the prompt information used in different processes comes from the same data source, prompting the LLM with it achieves complete sharing and alignment of the prompt information, thereby improving the accuracy of code generated in a single pass. Furthermore, using a unified prompt template to combine the context information and the business knowledge makes the code segment generated by the LLM more accurate, thereby improving the code acceptance rate in professional domains.
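The template-combination step above can be sketched as a single formatting function. The section headings and field names are invented; the point is only that input, context, and business knowledge are merged through one unified template.

```python
PROMPT_TEMPLATE = (
    "### Context from the data warehouse\n{context}\n\n"
    "### Business knowledge\n{knowledge}\n\n"
    "### User input\n{user_input}\n"
)

def build_prompt(user_input, context, knowledge):
    """Sketch of the unified prompt template: combine the input
    information, its context, and the retrieved business knowledge into
    a single prompt string for the LLM."""
    return PROMPT_TEMPLATE.format(context=context,
                                  knowledge=knowledge,
                                  user_input=user_input)
```

Because every process fills the same template from the same sources, the prompt seen by the LLM is identical across processes, which is the alignment property the abstract claims.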
A public cloud-based cloud resource conversion method includes receiving, from a cloud resource configuration interface, cloud resource configuration information input by a tenant; creating, in an infrastructure based on the cloud resource configuration information, a cloud resource that belongs to the tenant as a stable cloud resource; receiving a first cloud resource type conversion policy input by the tenant; and converting a type attribute of the cloud resource from a stable cloud resource type to an unstable cloud resource type.
In a data query method, N approximate query processing (AQP) messages are generated based on a data query message, and the sampling parameters corresponding to the AQP messages have different values. For an AQP message with a small sampling parameter value, a data query result can be quickly fed back to a client. Moreover, a data query result corresponding to each AQP message may be obtained based on the N AQP messages, so that the data query results fed back to the client become increasingly accurate.
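The N-message idea above can be sketched by estimating an aggregate from progressively larger samples: small fractions answer quickly, larger ones are more accurate. This is a generic AQP illustration, not the patented protocol.

```python
import random

def aqp_query_results(data, sampling_fractions, seed=0):
    """Sketch of the N AQP messages: estimate sum(data) from samples of
    increasing size. A fraction of 1.0 scans everything and is exact."""
    rng = random.Random(seed)
    results = []
    for frac in sampling_fractions:
        k = max(1, round(len(data) * frac))
        sample = rng.sample(data, k)
        results.append(sum(sample) * len(data) / k)  # scale up to full size
    return results
```

The client can display the first (cheap, rough) result immediately and refine it as the later messages complete.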
A cross-chain transaction method, applied to a cross-chain system comprising a first blockchain network and mutually heterogeneous second and third blockchain networks. The method comprises: a cross-chain component of the first blockchain network acquires a cross-chain transaction request, the request being used to request execution of a cross-chain transaction from the second blockchain network to the third blockchain network; the cross-chain component performs identity authentication and permission verification based on the cross-chain transaction request to obtain a verification result; and when the verification result indicates that verification is successful and the nodes of the first blockchain network reach a consensus on the cross-chain transaction, the cross-chain component records transaction information of the cross-chain transaction in a ledger of the first blockchain network and then notifies a cross-chain component of the third blockchain network to record the transaction information in a ledger of the third blockchain network. The method does not rely on a third party, and solves the problem of relying on the endorsement of a trusted third party or requiring a third-party relay chain to achieve cross-chain capabilities.
Disclosed are a data processing method, apparatus, and device, and a device resource pool, which relate to the technical field of data processing. The method is applied to a data processing device whose FPGA comprises a scheduling unit, a processing unit, and a storage unit. During data processing, the scheduling unit performs task scheduling, and the processing unit executes a plurality of processing tasks, storing the output data of each processing task in the storage unit to serve as the data to be processed by the next processing task. In the method, the FPGA independently completes the plurality of processing tasks, and because the output data of each task is stored in the storage unit, the processing unit can acquire the data to be processed directly from the storage unit when executing each task, avoiding end-to-end data transmission delays during the execution of the M processing tasks and thereby improving data processing efficiency. In addition, the FPGA uses a data-control separation architecture to decouple the control flow from the data flow, which helps improve the scalability of data processing devices.
Disclosed in the present application are a video encoding method based on a cloud mobile phone, and a server. The method is applied to a cloud mobile phone running in a server, wherein the server is further provided with a network interface card, and the cloud mobile phone is connected to a terminal device by means of the network interface card. The method comprises: after the cloud mobile phone generates a video stream to be processed comprising M consecutive images, determining a motion area and a non-motion area of the ith image among the M consecutive images; reducing the quantization parameter (QP) corresponding to the non-motion area to a first QP value, and raising the QP corresponding to the motion area to a second QP value; encoding the non-motion area of the image on the basis of the first QP value and the motion area on the basis of the second QP value, so as to generate a first video code stream; and sending the first video code stream to the terminal device by means of the network interface card. In this process, reducing the QP corresponding to the non-motion area to the first QP value improves the quality of the first video code stream, and thus also the quality of the image displayed by the terminal device on the basis of that stream.
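The per-area QP assignment above can be sketched over a per-block motion map: non-motion blocks get the lower first QP value (finer quantization, better quality), motion blocks the higher second QP value. The concrete QP defaults are arbitrary examples, not values from the application.

```python
def assign_block_qps(motion_map, first_qp=22, second_qp=38):
    """Sketch of the per-area QP assignment: one QP per block, chosen by
    whether the block lies in the motion area (True) or not (False)."""
    return [second_qp if in_motion else first_qp for in_motion in motion_map]
```

An encoder consuming this map spends fewer bits on moving regions, where quantization artifacts are less visible, and more on static regions, where they are.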
Provided in the present application are a traffic management method, a service mesh system, an apparatus, and a cluster. The method comprises: a first L4 agent receiving a first data packet sent by a first service container group; the first L4 agent sending an identifier of a first tenant and the first data packet to a first L7 agent; the first L7 agent identifying configuration information of a first service among configuration information of a plurality of services on the basis of the identifier of the first tenant, wherein the first service belongs to the first tenant; and the first L7 agent performing traffic management on the first data packet on the basis of the configuration information of the first service. The method can increase a computing resource utilization rate of a service mesh system.
Provided in the present application are a service instance management method based on cloud technology, a cloud management platform, and a cluster. The method comprises: a cloud management platform receiving event information sent by a first physical machine among a plurality of physical machines, wherein the event information comprises an expected execution time of an operation, and the operation affects the state of a first resource node of the first physical machine; and before the expected execution time, the cloud management platform migrating a service instance in the first resource node to a second resource node of a second physical machine among the plurality of physical machines on the basis of a QoS requirement of the service instance in the first resource node. By means of the method, the QoS of a service instance can be prevented from being affected by an operation for a resource node where the service instance is located.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
95.
INTERACTION METHOD AND APPARATUS FOR CLOUD MOBILE PHONE
Disclosed in the embodiments of the present application are an interaction method and apparatus for a cloud mobile phone, which are used for increasing the proportion of the visible area of the user interface of a cloud mobile phone. The method in the embodiments of the present application comprises: a terminal device sends a first event instruction to a cloud server in response to a first input event of a user, wherein the first event instruction is used for requesting the cloud server to generate update content of a cloud mobile phone service; the terminal device receives an event response message sent by the cloud server, wherein the event response message comprises the update content generated by the cloud server on the basis of the event instruction; and the terminal device displays the update content on the basis of the event response message, wherein the update content comprises one or more of the following: a navigation bar, a previous interface, and a next interface.
H04M 1/72469 - User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
The present application discloses a cloud phone navigation system, an operating system, and a cloud phone navigation method. The method comprises: when it is determined that a running cloud phone application has a positioning navigation function, a cloud phone sends a positioning request to a terminal device, so that the terminal device starts to obtain a plurality of pieces of first data, and sends the plurality of pieces of first data to the cloud phone, wherein each piece of first data comprises first positioning data and first satellite data; and the cloud phone carries out positioning navigation by means of the cloud phone application on the basis of a plurality of pieces of first positioning data and a plurality of pieces of first satellite data. According to the method, before a terminal device obtains positioning data and satellite data, whether a cloud phone application has a positioning navigation function is determined in advance, and first data is obtained only when positioning navigation needs to be carried out, thereby avoiding unnecessary performance loss of the terminal device; moreover, the cloud phone application can realize more accurate positioning navigation on the basis of the positioning data and the satellite data.
H04W 4/18 - Information format or content conversion, e.g. adaptation by the network of the transmitted or received information for the purpose of wireless delivery to users or terminals
H04W 64/00 - Locating users or terminals for network management purposes, e.g. mobility management
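The gating logic in the abstract above — request positioning data from the terminal only when the running application actually has a positioning navigation function — can be sketched as below. The class names, the `"positioning_navigation"` feature flag, and the fabricated GNSS readings are all hypothetical stand-ins, not the disclosed design.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class FirstData:
    positioning: tuple  # first positioning data, e.g. (latitude, longitude)
    satellite: dict     # first satellite data, e.g. per-satellite signal strength


class Terminal:
    """Stands in for the terminal device that owns the GNSS hardware."""

    def collect_first_data(self, n: int) -> List[FirstData]:
        # A real terminal would read from the GNSS driver; here we fabricate fixes.
        return [
            FirstData(positioning=(31.23 + i * 1e-5, 121.47),
                      satellite={"svid": 12, "cn0": 38.5})
            for i in range(n)
        ]


class CloudPhone:
    def __init__(self, terminal: Terminal):
        self.terminal = terminal

    def run_app(self, app_features: set) -> Optional[List[FirstData]]:
        # Key idea: only send a positioning request when the app actually
        # has a positioning navigation function; otherwise the terminal
        # does no GNSS work at all.
        if "positioning_navigation" not in app_features:
            return None
        return self.terminal.collect_first_data(n=4)
```

An app without the feature (`run_app({"camera"})`) triggers no data collection, which is the performance saving the abstract claims.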
97.
Root Cause Locating Method and Apparatus, and Storage Medium
In a root cause locating method, a first device obtains a first conversion relationship between a plurality of data storage files and a first data storage file, where the first data storage file includes first dirty data, and the plurality of data storage files include the first data storage file. The first device then determines, based on the first conversion relationship and the first data storage file, a root cause of generating the first dirty data.
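One way to read the conversion relationship above is as a lineage graph: each file records which upstream files it was converted from, and locating the root cause amounts to walking backwards from the file containing the dirty data. The sketch below assumes that model; the dictionary representation and file names are illustrative, not from the disclosure.

```python
from typing import Dict, List


def locate_root_cause(conversions: Dict[str, List[str]], dirty_file: str) -> List[str]:
    """Walk the conversion relationship backwards from the file with dirty data.

    `conversions` maps each data storage file to the files it was converted
    from (its upstream sources). Files with no upstream source are returned
    as candidate root causes.
    """
    frontier, visited, roots = [dirty_file], set(), []
    while frontier:
        f = frontier.pop()
        if f in visited:
            continue
        visited.add(f)
        parents = conversions.get(f, [])
        if not parents:
            roots.append(f)  # no upstream source: a candidate root cause
        else:
            frontier.extend(parents)
    return roots
```

For example, if `report.csv` was converted from `merged.csv`, which in turn came from `a.csv` and `b.csv`, then dirty data in `report.csv` traces back to `a.csv` and `b.csv`.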
This application provides an image processing method, including: obtaining first data, where the first data corresponds to a first scene model, a first character, and a first location of the first character in the first scene model; and generating N first images based on the first data, where the N first images are in one-to-one correspondence with N second locations, an nth first image is used for presenting a pose of the first character at an nth second location, and the pose of the first character at the nth second location corresponds to a terrain feature of the first scene model at the nth second location. According to embodiments of this application, the pose of the first character corresponds to the terrain feature of the first scene model, so that the first character can automatically avoid an obstacle in a complex terrain (for example, a three-dimensional terrain).
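The one-to-one correspondence between second locations and generated poses can be sketched as a simple lookup pipeline: terrain feature at each location determines the character's pose there. The terrain encoding, the pose names, and the dictionary-based "image" are hypothetical simplifications of the abstract, not the disclosed method.

```python
def terrain_feature(scene_model: dict, location: tuple) -> str:
    """Hypothetical lookup: the terrain feature of the first scene model
    at a given second location (default: flat ground)."""
    return scene_model.get(location, "flat")


def pose_for(feature: str) -> str:
    # Map each terrain feature to a pose, so the character's pose at every
    # second location follows the terrain (e.g. climbing over an obstacle).
    return {"flat": "walk", "slope": "lean_forward", "obstacle": "climb"}.get(feature, "walk")


def generate_first_images(scene_model: dict, second_locations: list) -> list:
    # One "image" per second location, in one-to-one correspondence, each
    # presenting the pose of the first character at that location.
    return [
        {"location": loc, "pose": pose_for(terrain_feature(scene_model, loc))}
        for loc in second_locations
    ]
```

Because the pose is derived from the terrain at each location, a character walking across flat ground automatically switches to a climbing pose at an obstacle cell, which is the obstacle-avoidance behavior the abstract describes.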
Embodiments of the present application provide a data search method and related devices. The method is applied to a cloud management platform, which manages the infrastructure used to provide cloud services. The method includes: receiving a search request of a tenant; splitting the search request to obtain a set of search terms; obtaining a set of search results according to the set of search terms and an index table, where the set of search results includes search results with different data types; and providing the set of search results to the tenant. The proposed technique provides the tenant with search results of different data types according to a single search request, thereby improving search quality.
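The split-then-lookup flow above can be illustrated with a small inverted index. The record format, the whitespace tokenizer, and the `(data_type, record_id)` result shape are assumptions made for this sketch; the disclosure does not specify them.

```python
from collections import defaultdict


def build_index(records: dict) -> dict:
    """Inverted index table: term -> list of (data_type, record_id) hits."""
    index = defaultdict(list)
    for rec_id, (data_type, text) in records.items():
        for term in text.lower().split():
            index[term].append((data_type, rec_id))
    return index


def search(request: str, index: dict) -> list:
    terms = request.lower().split()          # split the request into search terms
    results = []
    for term in terms:
        results.extend(index.get(term, []))  # look each term up in the index table
    # De-duplicate while keeping order; hits may span different data types.
    seen, merged = set(), []
    for hit in results:
        if hit not in seen:
            seen.add(hit)
            merged.append(hit)
    return merged
```

Because the index is keyed by term rather than by resource type, one tenant request such as `"backup snapshot"` can surface, say, both virtual machines and disks in a single result set.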
The present application relates to the technical field of storage, and discloses a data clustering method and a device. The method comprises: acquiring a plurality of pieces of data to be clustered; acquiring a plurality of first feature values of target data, wherein the target data is any one of the plurality of pieces of data; and classifying, into the same similar data set, at least two pieces of data whose corresponding pluralities of first feature values are identical. The present application can classify data having different similarities into different similar data sets, improving the clustering precision of similar data.
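Grouping items by an identical tuple of feature values can be sketched as below. The choice of feature functions (here length and first byte of a chunk) is an assumption for illustration; the disclosure does not say which first feature values are used.

```python
from collections import defaultdict
from typing import Callable, List, Sequence


def cluster_by_features(items: Sequence, feature_fns: List[Callable]) -> List[list]:
    """Place items whose full tuple of feature values is identical into the
    same similar data set; items differing in any feature value land in
    different sets."""
    clusters = defaultdict(list)
    for item in items:
        key = tuple(fn(item) for fn in feature_fns)  # the plurality of first feature values
        clusters[key].append(item)
    return list(clusters.values())
```

Using all feature values together as the grouping key, rather than any single one, is what lets data with different degrees of similarity fall into different similar data sets.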