Example solutions protect a continuous integration/continuous deployment (CI/CD) pipeline. Examples collect data from a CI/CD pipeline execution data source and/or a CI/CD pipeline task data source. Based on the collected data, a feature group comprising a plurality of records is created. Each record in the feature group represents an execution of the CI/CD pipeline. An anomaly score is generated, using a model representing historical feature groups, for the feature group representing the execution of the CI/CD pipeline. If the anomaly score is above a threshold, an alert is generated to indicate that the collected data represents an anomalous activity.
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 21/54 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems, during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure, by adding security routines or objects to programs
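The scoring flow described above can be illustrated with a minimal Python sketch. The feature names, the z-score-based model, and the threshold value are all assumptions for illustration; the patent does not specify the model type.

```python
from statistics import mean, stdev

# Hypothetical feature names for one CI/CD pipeline execution record.
FEATURES = ["duration_s", "task_count", "artifact_bytes"]

def anomaly_score(history, record):
    """Largest absolute z-score of the record's features vs. historical runs."""
    score = 0.0
    for f in FEATURES:
        values = [r[f] for r in history]
        mu, sigma = mean(values), stdev(values) or 1.0  # guard constant features
        score = max(score, abs(record[f] - mu) / sigma)
    return score

def check_execution(history, record, threshold=3.0):
    """Generate an alert when the execution's anomaly score exceeds the threshold."""
    score = anomaly_score(history, record)
    return {"alert": score > threshold, "score": score}
```

A run whose duration is far outside the historical distribution would trip the alert, while a typical run would not.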
2.
ADAPTIVE SWITCHING OF COLOR SPACES, COLOR SAMPLING RATES AND/OR BIT DEPTHS
Innovations in adaptive encoding and decoding for units of a video sequence can improve coding efficiency. For example, some of the innovations relate to encoding/decoding that includes adaptive switching of color spaces between units within a video sequence. Other innovations relate to encoding/decoding that includes adaptive switching of color sampling rates between units within a video sequence. Still other innovations relate to encoding/decoding that includes adaptive switching of bit depths between units within a video sequence.
H04N 19/59 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
H04N 19/13 - Adaptive entropy coding, e.g. adaptive variable-length coding [AVLC] or context-adaptive binary arithmetic coding [CABAC]
H04N 19/132 - Sampling, masking or truncating of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
H04N 19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a picture, frame or field
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a block, e.g. a macroblock
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or subject of the adaptive coding, the unit being a colour or a chrominance component
H04N 19/587 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
H04N 19/61 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding combined with predictive coding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/82 - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation, involving filtering within a prediction loop
H04N 19/88 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving rearrangement of data among different coding units, e.g. shuffling, interleaving, scrambling or permutation of pixel data, or permutation of transform coefficient data among different blocks
3.
CACHE FOR IDENTIFIERS REPRESENTING MERGED ACCESS CONTROL INFORMATION
The system described herein introduces a cache that a file system uses to determine, for a current object, if the process to merge different types of access control information into merged access control information has already been performed for a previous object. Stated alternatively, the file system uses the cache to determine whether a current object being processed for storage has the same combination of access control information as a previous object that has already been processed for storage. If the current object has the same combination of access control information as the previous object, the file system is able to associate merged access control information for the previous object with the current object via the use of a pointer. Consequently, the file system avoids having to perform the resource-intensive process of merging the different types of access control information for the current object.
Methods, systems, and computer storage media for providing observation stream data of security incidents using an observation stream engine in a security management system. An observation stream framework supports continuously generating and presenting observation stream data that facilitates developing a working hypothesis of an active security incident. The observation stream framework can also include observation stream query-types that can be selected for running queries against a plurality of security data sources. In operation, an observation stream query is accessed. The observation stream query is a user-generated observation stream query associated with an observation stream query-type. The observation stream query-type comprises parameters for querying a plurality of security data sources and dynamic tracking of a security incident. The observation stream query is executed and observation stream data is generated. The observation stream data is caused to be displayed on an observation stream interface comprising data visualizations of the observation stream data.
This disclosure provides electrochemically-cleavable linkers with cleavage potentials that are less than the redox potential of the solvent in which the linkers are used. In some applications, the solvent may be water or an aqueous buffer solution. The linkers may be used to link a nucleotide to a bound group. The linkers include a cleavable group which may be one of a methoxybenzyl alcohol, an ester, a propargyl thioether, or a trichloroethyl ether. The linkers may be cleaved in solvent by generating an electrode potential that is less than the redox potential of the solvent. In some implementations, an electrode array may be used to generate localized electrode potentials which selectively cleave linkers bound to the activated electrode. Uses for the linkers include attachment of blocking groups to nucleotides in enzymatic oligonucleotide synthesis.
Implementations for validating sensors using external device(s) are provided. One aspect includes a computing system comprising a first ambient light sensor system, and processing circuitry and memory storing instructions that cause the processing circuitry to: detect an external device in the vicinity of the computing system, wherein the external device comprises a second ambient light sensor system; determine an orientation of the first ambient light sensor system; receive information describing an orientation of, and sensor data from, the second ambient light sensor system; determine a relative orientation based at least upon the orientation of the first ambient light sensor system and the information describing the orientation of the second ambient light sensor system; and perform correction of sensor data of the first ambient light sensor system based at least upon the relative orientation and the information describing the sensor data of the second ambient light sensor system.
G09G 3/34 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix by control of light from an independent source
Dynamic graph prediction can be achieved using a statistical model of graph dynamics that combines neural networks, including graph neural networks (GNNs), with maximum likelihood estimation. More specifically, in some embodiments, a GNN is used to compute graph embeddings representing an evolving graph at two different points in time, an additional neural network is used to create a forward image of the graph embedding associated with the first point in time, and a statistical distribution of the difference between the forward image and the graph embedding associated with the second point in time is evaluated. The GNN, additional neural network, and statistical distribution collectively constitute the statistical model, which can be trained on training data comprising pairs of graphs from one or more time series of evolving graphs. Such a statistical model may be employed, for example, to predict graph changes in a cybersecurity incident graph.
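The structure of the statistical model can be shown with a deliberately tiny Python sketch: a degree vector stands in for the GNN embedding, an identity-plus-drift map stands in for the forward neural network, and a Gaussian scores the residual between the forward image and the later embedding. All of these stand-ins are assumptions for illustration only.

```python
import math

def embed(edges, num_nodes):
    """Degree vector as a toy stand-in for a GNN graph embedding."""
    deg = [0.0] * num_nodes
    for u, v in edges:
        deg[u] += 1.0
        deg[v] += 1.0
    return deg

def forward(embedding, drift=0.0):
    """Toy forward image of the embedding at the later time step."""
    return [x + drift for x in embedding]

def log_likelihood(edges_t0, edges_t1, num_nodes, drift=0.0, sigma=1.0):
    """Gaussian log-likelihood of the difference between the forward image
    of the t0 embedding and the observed t1 embedding."""
    pred = forward(embed(edges_t0, num_nodes), drift)
    obs = embed(edges_t1, num_nodes)
    ll = 0.0
    for p, o in zip(pred, obs):
        ll += -0.5 * math.log(2 * math.pi * sigma ** 2) - (o - p) ** 2 / (2 * sigma ** 2)
    return ll
```

An unchanged graph scores a higher likelihood than one whose structure shifted, which is the signal maximum likelihood training would exploit.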
A communications network utilizes codelets within a network function to correct interoperability issues in standardized messages. A processor configured to execute the instructions for operating the network function receives a codelet for execution at a hook point, within those instructions, where a standardized message is processed. The processor executes the codelet to determine that a message is incompatible with a local configuration or interpretation of the standardized message and to modify the message to be compatible with that local configuration or interpretation. The processor then executes the network function to process the modified message.
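A minimal Python sketch of such a hook-point codelet follows. The field name `plmnId` and the local configuration are hypothetical examples of an interoperability mismatch, not details from the patent.

```python
# Local interpretation of the standardized message (illustrative).
LOCAL_CONFIG = {"plmn_field": "plmn-id"}

def interop_codelet(message):
    """Codelet: rewrite a remote peer's field name to the locally expected one."""
    expected = LOCAL_CONFIG["plmn_field"]
    if expected not in message and "plmnId" in message:
        message[expected] = message.pop("plmnId")
    return message

def network_function(message, hooks=(interop_codelet,)):
    """Run each codelet at its hook point, then process the (possibly
    modified) message, which requires the locally expected field."""
    for hook in hooks:
        message = hook(message)
    return message[LOCAL_CONFIG["plmn_field"]]
```

A message using the peer's field name is silently repaired at the hook point; a message that is already compatible passes through unchanged.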
Approximate closed-form bounds of Bayes security against record-level inference attacks, in particular membership and attribute inference attacks, are provided. In various embodiments, these approximate bounds of Bayes security are used in conjunction with training neural-network models by differential-privacy stochastic gradient descent to create trained models that achieve a desired level of privacy.
Innovations in adaptive encoding and decoding for units of a video sequence can improve coding efficiency. For example, some of the innovations relate to encoding/decoding that includes adaptive switching of color spaces between units within a video sequence. Other innovations relate to encoding/decoding that includes adaptive switching of color sampling rates between units within a video sequence. Still other innovations relate to encoding/decoding that includes adaptive switching of bit depths between units within a video sequence.
H04N 19/59 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
H04N 19/13 - Adaptive entropy coding, e.g. adaptive variable-length coding [AVLC] or context-adaptive binary arithmetic coding [CABAC]
H04N 19/132 - Sampling, masking or truncating of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
H04N 19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a picture, frame or field
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a block, e.g. a macroblock
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or subject of the adaptive coding, the unit being a colour or a chrominance component
H04N 19/587 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
H04N 19/61 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding combined with predictive coding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/82 - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation, involving filtering within a prediction loop
H04N 19/88 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving rearrangement of data among different coding units, e.g. shuffling, interleaving, scrambling or permutation of pixel data, or permutation of transform coefficient data among different blocks
In accordance with examples of the present disclosure, a collaborative platform provides a digital collaboration assistant that continuously monitors and analyzes shared meeting contents (e.g., voice, text chat messages, shared links and documents, presentation materials, and the like) contributed by participants during a collaborative meeting in near real-time, periodically updates a structured summary log of the meeting contents deemed important during the collaborative meeting, and interacts with the participants throughout the collaborative meeting in near real-time, for example, to answer questions or provide additional information.
A headset is provided, including an earcup, the earcup including an audio transducer configured to emit audio to a respective ear of a user. The headset further includes a headband including an elongated and curved support structure having a pair of respective ends, the headband being coupled at one of the respective ends to the earcup. The headband further includes an elastic band coupled to and stretching between the ends of the support structure, the elastic band being concave down in shape. The headband further includes fabric coupled to the support structure at locations above the elastic band and extending to wrap underneath at least a portion of the elastic band to form a saddle shape that supports the elastic band from the underside at least at a head-contacting portion of the headband.
Innovations in adaptive encoding and decoding for units of a video sequence can improve coding efficiency. For example, some of the innovations relate to encoding/decoding that includes adaptive switching of color spaces between units within a video sequence. Other innovations relate to encoding/decoding that includes adaptive switching of color sampling rates between units within a video sequence. Still other innovations relate to encoding/decoding that includes adaptive switching of bit depths between units within a video sequence.
H04N 19/59 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
H04N 19/13 - Adaptive entropy coding, e.g. adaptive variable-length coding [AVLC] or context-adaptive binary arithmetic coding [CABAC]
H04N 19/132 - Sampling, masking or truncating of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
H04N 19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a picture, frame or field
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a block, e.g. a macroblock
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or subject of the adaptive coding, the unit being a colour or a chrominance component
H04N 19/587 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
H04N 19/61 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding combined with predictive coding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/82 - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation, involving filtering within a prediction loop
H04N 19/88 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving rearrangement of data among different coding units, e.g. shuffling, interleaving, scrambling or permutation of pixel data, or permutation of transform coefficient data among different blocks
14.
ADAPTIVE SWITCHING OF COLOR SPACES, COLOR SAMPLING RATES AND/OR BIT DEPTHS
Innovations in adaptive encoding and decoding for units of a video sequence can improve coding efficiency. For example, some of the innovations relate to encoding/decoding that includes adaptive switching of color spaces between units within a video sequence. Other innovations relate to encoding/decoding that includes adaptive switching of color sampling rates between units within a video sequence. Still other innovations relate to encoding/decoding that includes adaptive switching of bit depths between units within a video sequence.
H04N 19/59 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
H04N 19/13 - Adaptive entropy coding, e.g. adaptive variable-length coding [AVLC] or context-adaptive binary arithmetic coding [CABAC]
H04N 19/132 - Sampling, masking or truncating of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
H04N 19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a picture, frame or field
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a block, e.g. a macroblock
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or subject of the adaptive coding, the unit being a colour or a chrominance component
H04N 19/587 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
H04N 19/61 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding combined with predictive coding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/82 - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation, involving filtering within a prediction loop
H04N 19/88 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving rearrangement of data among different coding units, e.g. shuffling, interleaving, scrambling or permutation of pixel data, or permutation of transform coefficient data among different blocks
15.
EXTRACTING USER INTERACTIONS TO GENERATE BATCHES FOR MIGRATION BETWEEN TENANTS
A batching system accesses user interaction data to identify relationships between users and between users and resources. The relationships are weighted and users are grouped for migration based upon the weighted relationships. The groups are displayed for administrator interaction and are provided to a migration system for migration of the users between two tenants. This enhances migration efficiency and accuracy and makes migration much less disruptive to the end users.
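The batching idea above can be sketched in Python: weight user-to-user relationships by interaction count, drop edges below a threshold, and take connected components of the remaining graph as migration batches. The interaction format and the weight threshold are illustrative assumptions.

```python
from collections import defaultdict

def weighted_edges(interactions):
    """interactions: iterable of (user_a, user_b) events -> pair weights."""
    weights = defaultdict(int)
    for a, b in interactions:
        weights[frozenset((a, b))] += 1
    return weights

def migration_batches(users, interactions, min_weight=2):
    """Group users whose relationship weight meets the threshold."""
    adj = defaultdict(set)
    for pair, w in weighted_edges(interactions).items():
        if w >= min_weight:
            a, b = tuple(pair)
            adj[a].add(b)
            adj[b].add(a)
    batches, seen = [], set()
    for u in users:
        if u in seen:
            continue
        batch, stack = set(), [u]  # DFS over strongly related users
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            batch.add(cur)
            stack.extend(adj[cur] - seen)
        batches.append(sorted(batch))
    return batches
```

Users who interact heavily land in the same batch and migrate together, which is what keeps the migration less disruptive for them.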
The disclosed concepts relate to employing agent behavior models to control agent behavior in an application, such as a video game or a simulation. For instance, in some implementations, agent behavior models with relatively greater resource utilization, such as generative language models, are assigned to agents that are at higher levels of an agent hierarchy. Agent behavior models with relatively less resource utilization, such as reinforcement learning or hard-coded models, are assigned to agents that are at lower levels of the agent hierarchy.
G06N 3/006 - Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
17.
CONTAINER MODE MANAGEMENT ENGINE IN A SECURITY MANAGEMENT SYSTEM
Methods, systems, and computer storage media for providing container secure computing modes using a container mode management engine of a security management system. A container secure computing mode can include a secure state in which a container operates to prioritize security measures and practices. A container secure computing mode can be assigned to a container instance and enforced via a container security agent. In operation, a container instance is initialized, the container instance is associated with a container security agent having a secure compute mode transition control for the container instance. Based on the secure compute mode transition control, the container instance is transitioned into a secure state. A container operation of the container instance is accessed. The execution of the container operation is restricted based on the secure state of the container instance. The secure state is associated with a secure state configuration that supports restricting the container operation.
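The enforcement behavior described above reduces to a small state machine, sketched here in Python. The operation names and the shape of the secure state configuration are hypothetical.

```python
class ContainerSecurityAgent:
    """Container security agent with a secure-compute-mode transition control."""

    def __init__(self, restricted_ops):
        self.state = "normal"
        self.restricted_ops = set(restricted_ops)  # secure state configuration

    def transition_to_secure(self):
        """Secure compute mode transition control for the container instance."""
        self.state = "secure"

    def execute(self, operation):
        """Restrict the container operation based on the secure state."""
        if self.state == "secure" and operation in self.restricted_ops:
            raise PermissionError(f"{operation} is restricted in secure state")
        return f"{operation}: ok"
```

Before the transition every operation runs; afterwards, operations named in the secure state configuration are refused while the rest proceed.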
Systems and methods are directed to selecting and delivering targeted content. In response to a request from a user, a central control component accesses content delivery data and content provider settings. Based on the content delivery data and the content provider settings, the central control component generates content delivery settings that include a plurality of frequency capping (Fcap) rules. The content delivery settings are transmitted to a serving system. A determination component of the serving system accesses user data associated with the user that indicates user preferences. Based on the content delivery settings and the user data, the determination component selects a piece of content to deliver to the user. A delivery component then delivers the piece of content to the user.
H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
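A frequency-capping (Fcap) rule of the kind mentioned above can be illustrated with a short Python sketch: each piece of content carries a cap on impressions per user within a time window, and selection skips exhausted candidates. The rule shape and content identifiers are assumptions for illustration.

```python
import time

class FcapRule:
    """Cap impressions per user within a rolling time window (seconds)."""

    def __init__(self, max_impressions, window_s):
        self.max_impressions = max_impressions
        self.window_s = window_s

def select_content(candidates, impressions, rules, user, now=None):
    """Return the first candidate whose Fcap rule is not yet exhausted."""
    now = time.time() if now is None else now
    for content_id in candidates:
        rule = rules[content_id]
        recent = [t for t in impressions.get((user, content_id), [])
                  if now - t < rule.window_s]
        if len(recent) < rule.max_impressions:
            return content_id
    return None  # every candidate is frequency-capped for this user
```

Once a user has seen a piece of content up to its cap within the window, the serving system falls through to the next eligible candidate.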
Embodiments are provided for suggesting topics in a messaging system. A set of queries is received from a chat transcript history, where the set of queries includes a set of unhandled queries, and each unhandled query comprises a query for which a bot did not identify a corresponding topic (e.g., queries that did not trigger selection of a topic by the bot). A vector representation is generated for each unhandled query in the set of unhandled queries. The vector representations for the set of unhandled queries are clustered to generate one or more clusters of vector representations, each cluster corresponding to a group of unhandled queries. A corresponding suggested topic is generated for each cluster and provided to an authoring tool that comprises one or more interactive elements to enable an author to select at least one of the suggested topics for implementation in the bot.
G06F 40/35 - Discourse or dialogue representation
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
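A toy Python version of the cluster-then-suggest pipeline follows. Bag-of-words vectors, greedy cosine clustering, and "most frequent word as the suggested topic" are simplifying stand-ins for whatever embedding and clustering the system actually uses.

```python
import math
from collections import Counter

def vectorize(query):
    """Bag-of-words vector for one unhandled query."""
    return Counter(query.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggest_topics(unhandled_queries, threshold=0.3):
    """Greedily cluster query vectors, then name each cluster by its
    most frequent word as a naive suggested topic."""
    clusters = []  # list of (centroid Counter, member queries)
    for q in unhandled_queries:
        vec = vectorize(q)
        for centroid, members in clusters:
            if cosine(vec, centroid) >= threshold:
                centroid.update(vec)
                members.append(q)
                break
        else:
            clusters.append((vec.copy(), [q]))
    return [centroid.most_common(1)[0][0] for centroid, _ in clusters]
```

Queries about the same unhandled intent collapse into one cluster, yielding one suggested topic an author can accept in the authoring tool.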
A communications network utilizes dynamic service models to facilitate programmability within a radio access network. An analysis node is configured to execute an analysis application based on information from a virtual network function. The virtual network function is configured to provide a dynamic service model that defines one or more hook points within the instructions for operating the virtual network function and one or more parameters that can be accessed by a codelet at each hook point. The virtual network function dynamically receives a codelet from the analysis node, verifies that the codelet complies with the dynamic service model, and executes the codelet at one of the hook points.
H04W 4/60 - Subscription-based services using application servers or record carriers, e.g. SIM application toolkits
H04W 4/50 - Service provisioning or reconfiguring
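The verification step above can be sketched in Python: the dynamic service model declares the hook points and the parameters a codelet may read, and a codelet is rejected if it steps outside that declaration. The hook point name and parameter names are hypothetical.

```python
# Illustrative dynamic service model: declared hook points and their parameters.
SERVICE_MODEL = {
    "hook_points": {"pre_schedule": {"params": {"ue_count", "prb_usage"}}},
}

def verify_codelet(codelet):
    """Check that the codelet uses only a declared hook point and parameters."""
    hook = SERVICE_MODEL["hook_points"].get(codelet["hook_point"])
    if hook is None:
        return False
    return set(codelet["reads"]) <= hook["params"]

def run_codelet(codelet, context):
    """Execute a verified codelet at its hook point with only its declared reads."""
    if not verify_codelet(codelet):
        raise ValueError("codelet violates the dynamic service model")
    return codelet["fn"]({p: context[p] for p in codelet["reads"]})
```

A codelet reading only declared parameters runs; one touching anything else fails verification before it ever executes.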
A computing system is provided, including a processor and memory executing a reboot tracking module configured to read out a stored reboot request identifier assigned to a node in the computing system including a first value, and receive a first reboot request to reboot the node in the computing system including a first reboot request identifier. The reboot tracking module is further configured to, responsive to identifying a match between a value of the first reboot request identifier and the first value of the stored reboot request identifier, accept the first reboot request and update the stored reboot request identifier with a second value, receive a second reboot request to reboot the node including a second reboot request identifier, and responsive to identifying a mismatch between a value of the second reboot request identifier and the second value of the stored reboot request identifier, reject the second reboot request.
G06F 11/14 - Détection ou correction d'erreur dans les données par redondance dans les opérations, p. ex. en utilisant différentes séquences d'opérations aboutissant au même résultat
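Purely as an illustration (not part of the disclosed claims), the identifier-matching logic in the reboot-tracking abstract above can be sketched in Python; the class and method names are hypothetical:

```python
class RebootTracker:
    """Tracks a per-node reboot request identifier so that stale or
    duplicated reboot requests are rejected."""

    def __init__(self, initial_value=0):
        # Stored reboot request identifier assigned to the node.
        self.stored_id = initial_value

    def handle_request(self, request_id):
        """Accept the request only if its identifier matches the stored
        value; on acceptance, update the stored identifier so a later
        request carrying the old identifier is rejected."""
        if request_id == self.stored_id:
            self.stored_id += 1          # update with a second value
            return "accepted"
        return "rejected"                # identifier mismatch
```

In this sketch, a first request carrying the stored value is accepted and advances the stored identifier, so a replay of the same request is then rejected.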
Conditions are identified in a telecommunications network based on data collected from the telecommunications network. A first artificial intelligence (AI) model is used to identify a network function (NF) type. Based on the NF type, a second AI model is used to generate a prompt for a generative pre-trained transformer (GPT) model. The prompt is input to the GPT model to identify a condition in the telecommunications network.
Techniques are described for a validation engine configured to verify user goal states in a virtualized computing environment comprising a plurality of hosts executing a plurality of virtual machines or containers. A representation of a user goal state encoding an updated state is received and queries are sent to control plane components to obtain current states of hosts in the network. The current states and local configurations are verified to be consistent with the user goal state.
G06F 9/455 - ÉmulationInterprétationSimulation de logiciel, p. ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
24.
Multi-Channel Speech Compression System and Method
A method, computer program product, and computing system for generating a plurality of acoustic relative transfer functions associated with a plurality of audio acquisition devices of an audio recording system deployed in an acoustic environment. Acoustic relative transfer functions of at least a pair of audio acquisition devices of the plurality of audio acquisition devices may be compared. Location information associated with an acoustic source within the acoustic environment may be determined based upon, at least in part, the comparison of the acoustic relative transfer functions of the at least a pair of audio acquisition devices of the plurality of audio acquisition devices.
H04S 7/00 - Dispositions pour l'indicationDispositions pour la commande, p. ex. pour la commande de l'équilibrage
G06T 7/70 - Détermination de la position ou de l'orientation des objets ou des caméras
G10L 15/06 - Création de gabarits de référenceEntraînement des systèmes de reconnaissance de la parole, p. ex. adaptation aux caractéristiques de la voix du locuteur
G10L 15/22 - Procédures utilisées pendant le processus de reconnaissance de la parole, p. ex. dialogue homme-machine
G10L 19/00 - Techniques d'analyse ou de synthèse de la parole ou des signaux audio pour la réduction de la redondance, p. ex. dans les vocodeursCodage ou décodage de la parole ou des signaux audio utilisant les modèles source-filtre ou l’analyse psychoacoustique
G10L 19/008 - Codage ou décodage du signal audio multi-canal utilisant la corrélation inter-canaux pour réduire la redondance, p. ex. stéréo combinée, codage d’intensité ou matriçage
G10L 21/0216 - Filtration du bruit caractérisée par le procédé d’estimation du bruit
H04R 1/40 - Dispositions pour obtenir la fréquence désirée ou les caractéristiques directionnelles pour obtenir la caractéristique directionnelle désirée uniquement en combinant plusieurs transducteurs identiques
A disclosed method facilitates identification and recommendation of alternative device configuration(s) with the potential to reduce or mitigate the energy footprint of a particular application on a user device. A disclosed method includes determining a user device energy footprint for an application executing on a user device and determining a per-device energy footprint for the application with respect to a population of user devices executing the application that are characterized by a device configuration not shared by the user device. An energy savings metric is determined based on a comparison between the per-device energy footprint within the population and the user device energy footprint. In response to determining that the energy savings metric is indicative of a potential energy savings, an energy savings reconfiguration recommendation is generated.
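As a hedged illustration of the comparison described above (the linear metric and function names are assumptions, not from the disclosure), the energy savings metric could be computed as the difference between the user device's footprint and the population's mean per-device footprint:

```python
def energy_savings_metric(user_footprint_wh, population_footprints_wh):
    """Compare a device's energy footprint for an application against the
    mean per-device footprint of a population running the same application
    under a different device configuration. A positive result indicates a
    potential saving from adopting that configuration."""
    per_device = sum(population_footprints_wh) / len(population_footprints_wh)
    return user_footprint_wh - per_device

def maybe_recommend(user_footprint_wh, population_footprints_wh, threshold_wh=0.0):
    """Generate a reconfiguration recommendation only when the metric
    indicates a potential energy savings above the threshold."""
    saving = energy_savings_metric(user_footprint_wh, population_footprints_wh)
    if saving > threshold_wh:
        return f"Reconfiguration recommended: estimated saving {saving:.1f} Wh"
    return None
```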
A computing device is provided, including a storage device configured to store image data and a processor coupled to a memory that stores instructions, which, upon execution by the processor, cause the processor to select a target image from the image data. The processor is further configured to display conversion possibility information that indicates that the target image can be converted into a larger image that has a larger field of view by stitching other images together with at least a portion of the target image and an associated selector. The processor is further configured to display the larger image upon receiving a user selection of the selector.
A photoreceiver circuit for a photodetector provides an output signal. The photoreceiver circuit includes a photodiode that receives an optical signal during a charge period and generates a charge corresponding to an intensity of the optical signal. The circuit includes at least one integrating transistor that accumulates the charge generated by the photodiode, and a reset circuit element that resets the photodiode after the charge period. The circuit also includes a comparator that provides the output signal of the photoreceiver circuit by comparing an output of the integrating transistor caused by the accumulated charge to a threshold value. The photoreceiver circuit can form part of a photodetector array, which can be used as part of an optical transceiver system.
Systems, methods, apparatuses, and computer program products are disclosed for determining a set of recommended access control assignments in single-cloud or multi-cloud environments based on historical usage. Paired activity data, representing task and resource pairs associated with an identity, is determined from historical activity data. Over-privileging costs are determined for a set of candidate access control assignments based on the permitted tasks and resource scopes granted by the candidate access control assignments and the paired activity data. A set of recommended access control assignments is determined as a subset of the candidate access control assignments with a lowest aggregate over-privileging cost whose combined permissions and resource scopes cover at least a predetermined percentage of the paired activity data. A recommendation including on-demand and/or standing privilege access control assignments is generated based on the set of recommended access control assignments. A responsive action may be performed based on the recommendation.
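A minimal sketch of the selection step described above, assuming a greedy strategy and representing each candidate assignment as a set of permitted (task, resource) pairs; the greedy approach and all names are illustrative assumptions, not the disclosed algorithm:

```python
def recommend_assignments(candidates, observed_pairs, coverage=0.9):
    """Pick candidate assignments until the chosen set covers at least the
    required fraction of observed (task, resource) activity, preferring
    candidates with the lowest over-privileging cost per newly covered pair."""
    observed = set(observed_pairs)
    covered, chosen = set(), []
    while len(covered) < coverage * len(observed):
        best, best_score = None, None
        for name, granted in candidates.items():
            if name in chosen:
                continue
            new = (granted & observed) - covered
            if not new:
                continue
            over_priv = len(granted - observed)  # permissions never used
            score = over_priv / len(new)
            if best_score is None or score < best_score:
                best, best_score = name, score
        if best is None:
            break  # no remaining candidate adds coverage
        chosen.append(best)
        covered |= candidates[best] & observed
    return chosen
```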
Systems and techniques for training a content classification large language model are described herein. A content passage is received from a corpus of training data comprising labeled training data and unlabeled training data. A query is received from a query hierarchy for a classification domain. The content passage and the query are embedded to form a passage-query pair. A predicted result for the passage-query pair is generated based on a calculated probability of the predicted result being within an answer threshold. A passage-query-result triplet is generated that comprises the passage-query pair and the predicted result according to the query hierarchy for the classification domain. Vectors of the content classification large language model are updated using the passage-query-result triplet.
An enhanced router is described that improves network performance for AI workloads by providing in-network primitives that improve the performance of operations such as Broadcast and Reduce operations. The enhanced router is configured to execute primitives for data payloads in packets associated with the workloads in the network prior to forwarding the packets to hosts. The network device receives, via a control plane, a primitive indicative of an analytical, computational, or transformative operation to be performed on data payloads transmitted by data packets associated with a workload being processed in an SDN. The primitive is associated with a protocol for configuring network devices to perform in-network acceleration of workloads in coordination with source and destination hosts in the SDN.
The disclosed concepts relate to employing agent behavior models to control agent behavior in an application, such as a video game or a simulation. For instance, in some implementations, agent behavior models with relatively greater resource utilization, such as generative language models, are assigned to agents that are at higher levels of an agent hierarchy. Agent behavior models with relatively less resource utilization, such as reinforcement learning or hard-coded models, are assigned to agents that are at lower levels of the agent hierarchy.
A method, computer program product, and computing system for collecting data concerning interruptions associated with a plurality of virtual machines, and for collecting hardware information concerning one or more nodes hosting the plurality of virtual machines at a time generally contemporaneous with the interruptions of the plurality of virtual machines. A correlation is generated between interruptions of at least a subset of the plurality of virtual machines and one or more hardware component attributes of the one or more nodes.
G06F 9/455 - ÉmulationInterprétationSimulation de logiciel, p. ex. virtualisation ou émulation des moteurs d’exécution d’applications ou de systèmes d’exploitation
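One way the correlation step in the interruption-tracking abstract above could be sketched (a hypothetical stand-in for the disclosed correlation, not its actual method) is to compute, per hardware component attribute, the fraction of observed interruptions whose hosting node carried that attribute:

```python
def interruption_correlation(records):
    """For each hardware component attribute of the hosting nodes, return
    the fraction of collected interruption records in which that attribute
    was present, as a crude correlation signal."""
    counts, total = {}, 0
    for rec in records:
        total += 1
        for attr in rec["node_attributes"]:
            counts[attr] = counts.get(attr, 0) + 1
    return {attr: c / total for attr, c in counts.items()}
```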
Examples of the present disclosure describe improved systems and methods for detecting keywords in audio content. In one example implementation, audio content is segmented into one or more audio segments. One or more text segments are generated, each text segment corresponding to one of the audio segments. For each text segment, one or more phrase candidate values are generated using a textual analysis, and one or more sentence embedding values are generated using a sentence embedding analysis. Next, an average sentence embedding value is calculated using the one or more sentence embedding values. Each of the one or more phrase candidate values is compared to the average sentence embedding value. Each phrase candidate value having a comparison value above a threshold value is labeled as representing a keyword.
G06F 40/40 - Traitement ou traduction du langage naturel
G10L 15/04 - SegmentationDétection des limites de mots
G10L 15/22 - Procédures utilisées pendant le processus de reconnaissance de la parole, p. ex. dialogue homme-machine
G10L 25/57 - Techniques d'analyse de la parole ou de la voix qui ne se limitent pas à un seul des groupes spécialement adaptées pour un usage particulier pour comparaison ou différentiation pour le traitement des signaux vidéo
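The comparison step in the keyword-detection abstract above can be sketched with cosine similarity over plain vectors (the embedding vectors here stand in for real model output, and the threshold value is an assumption):

```python
def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def label_keywords(phrase_embeddings, sentence_embeddings, threshold=0.7):
    """Average the sentence embeddings of a text segment, then keep each
    candidate phrase whose embedding is similar enough to that average."""
    dims = len(sentence_embeddings[0])
    avg = [sum(v[i] for v in sentence_embeddings) / len(sentence_embeddings)
           for i in range(dims)]
    return [phrase for phrase, emb in phrase_embeddings.items()
            if cosine(emb, avg) >= threshold]
```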
34.
NETWORK DEVICE CONFIGURED FOR WORKLOADS IN SOFTWARE DEFINED NETWORKS
An enhanced router is described that improves network performance for AI workloads by providing in-network primitives that improve the performance of operations such as Broadcast and Reduce operations. The enhanced router leverages the observation that operations such as the Broadcast and Reduce primitives are more performant (e.g., reduced latency and bandwidth) when performed in the network rather than in graphics processing units (GPUs) or central processing units (CPUs), which are often deployed as leaf nodes in a typical network topology in a datacenter.
Innovations in adaptive encoding and decoding for units of a video sequence can improve coding efficiency. For example, some of the innovations relate to encoding/decoding that includes adaptive switching of color spaces between units within a video sequence. Other innovations relate to encoding/decoding that includes adaptive switching of color sampling rates between units within a video sequence. Still other innovations relate to encoding/decoding that includes adaptive switching of bit depths between units within a video sequence.
H04N 19/59 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage prédictif mettant en œuvre un sous-échantillonnage spatial ou une interpolation spatiale, p. ex. modification de la taille de l’image ou de la résolution
H04N 19/117 - Filtres, p. ex. pour le pré-traitement ou le post-traitement
H04N 19/13 - Codage entropique adaptatif, p. ex. codage adaptatif à longueur variable [CALV] ou codage arithmétique binaire adaptatif en fonction du contexte [CABAC]
H04N 19/132 - Échantillonnage, masquage ou troncature d’unités de codage, p. ex. ré-échantillonnage adaptatif, saut de trames, interpolation de trames ou masquage de coefficients haute fréquence de transformée
H04N 19/157 - Mode de codage attribué, c.-à-d. le mode de codage étant prédéfini ou présélectionné pour être utilisé ultérieurement afin de sélectionner un autre élément ou paramètre
H04N 19/172 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p. ex. un objet la zone étant une image, une trame ou un champ
H04N 19/176 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p. ex. un objet la zone étant un bloc, p. ex. un macrobloc
H04N 19/186 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c.-à-d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une couleur ou une composante de chrominance
H04N 19/587 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage prédictif mettant en œuvre un sous-échantillonnage ou une interpolation temporels, p. ex. décimation ou interpolation subséquente d’images dans une séquence vidéo
H04N 19/61 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant un codage par transformée combiné avec un codage prédictif
H04N 19/70 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques caractérisés par des aspects de syntaxe liés au codage vidéo, p. ex. liés aux standards de compression
H04N 19/82 - Détails des opérations de filtrage spécialement adaptées à la compression vidéo, p. ex. pour l'interpolation de pixels mettant en œuvre le filtrage dans une boucle de prédiction
H04N 19/88 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le pré-traitement ou le post-traitement spécialement adaptés pour la compression vidéo mettant en œuvre la réorganisation de données entre différentes unités de codage, p. ex. redistribution, entrelacement, brouillage ou permutation de données de pixel ou permutation de données de coefficients de transformée entre différents blocs
36.
GUARDRAILS FOR EFFICIENT PROCESSING AND ERROR PREVENTION IN GENERATING SUGGESTED MESSAGES
Systems and methods for using a generative artificial intelligence (AI) model to generate a suggested draft reply to a selected message. A message generation system and method are described that use guardrails that prevent unnecessary AI model processing and accidental sending of an AI model-generated draft. In some examples, draft reply generation is limited to a subset of messages (e.g., focused, non-confidential) and triggering of the draft reply generation is performed only after user interaction criteria are satisfied. In some examples, a confirmation message is presented when the draft reply is attempted to be sent with no changes or quickly after the draft is generated. For instance, the guardrails limit the number of times the AI model is invoked to generate suggested replies and further prevent users from accidentally sending drafts generated from the AI model.
Methods and computing devices for estimating a force F exerted on a touchpad are disclosed. In one example, a method comprises determining that the touchpad is not being touched. At least on condition of determining that the touchpad is not being touched, a no-touch capacitance value of the PCB is calculated. After calculating the no-touch capacitance value, the method includes determining that the touchpad is being touched. At least on condition that the touchpad is being touched, the no-touch capacitance value and a touch-based capacitance value are used to estimate the force F exerted on the touchpad.
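As a hedged sketch of the estimation described above (the linear force model and the function names are illustrative assumptions, not the disclosed method), the no-touch baseline and the force estimate could look like:

```python
def calibrate_no_touch(samples):
    """No-touch baseline: mean of capacitance samples taken while the
    touchpad is confirmed to be untouched."""
    return sum(samples) / len(samples)

def estimate_force(no_touch_capacitance, touch_capacitance, k=1.0):
    """Estimate force F from the change in measured capacitance relative
    to the calibrated no-touch baseline. A linear model (F = k * dC) is
    assumed here purely for illustration."""
    delta_c = touch_capacitance - no_touch_capacitance
    return k * max(delta_c, 0.0)
```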
Disclosed are techniques for synthesizing large amounts of human-computer interaction data that is representative of real-world user data. An automated screenshot capture engine may cause an automated agent to use an application or a website in a manner designed to mimic real-world human-computer interaction. Screenshots are captured to record how a user might interact with the application. Metadata, such as window location and size, may be obtained for each screenshot. Screenshots and corresponding metadata may be automatically annotated with a large language model to indicate the context of the application and/or computer system when the screenshot was captured. Data created in this way may be used to validate AI-based software application features or to train (or retrain) a machine learning model that predicts human-computer interactions. Automated synthesis of training data significantly increases the scale of data that can be obtained for training while also reducing computing and financial costs.
Disclosed is a technique for debugging different versions of a software application, each associated with a different branch of a source code base. A debugger simultaneously debugs the different versions, synchronizing execution while highlighting differences between them. Execution is synchronized by stepping to and setting breakpoints on logically corresponding points of execution. For example, a single “step” command may cause both versions to step to the same logical code location. Similarly, a single “insert breakpoint” command may cause breakpoints to be placed at the same logical code location in both versions. Corresponding logical points of execution may be determined by comparing source code branches and application binaries of the different versions. Differences between the versions may be displayed visually, including changes to source code and changes to the binary executables. While execution is paused in the debugger, mappings from old to new function and variable names may be visualized.
User engagement is detected and used to control operation of a computing device. User engagement is detected by a sensor such as a camera that identifies if a user's face is oriented towards a display device. If the user is not facing the display device, the sensor determines that the user is unengaged. The computing device is thus able to perform a power-saving operation, such as dimming the display device, when the user is unengaged. The computing device includes an API that abstracts sensor data into a user engagement signal indicating that the user is either engaged or unengaged. The OS and applications running on the computing device act on the user engagement signal provided by the API without communicating directly with the sensor. The user engagement signal may be provided as an input to a state machine.
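The signal-driven behavior described above can be sketched as a small state machine consuming the abstracted engagement signal (all names here are hypothetical; the real API surface is not specified in the abstract):

```python
ENGAGED, UNENGAGED = "engaged", "unengaged"

class EngagementStateMachine:
    """Consumes the abstracted user-engagement signal and decides whether
    a power-saving operation such as display dimming should run. Sensor
    details stay hidden behind the signal, as in the API described above."""

    def __init__(self):
        self.state = ENGAGED
        self.display_dimmed = False

    def on_signal(self, signal):
        """Transition on an engagement signal; returns whether the
        display is dimmed after handling the signal."""
        self.state = signal
        if signal == UNENGAGED:
            self.display_dimmed = True   # power-saving operation
        else:
            self.display_dimmed = False  # restore full brightness
        return self.display_dimmed
```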
Systems and methods are provided for automatically determining an intent of a user based on an intent model to attach a file to a document, prompting the user to confirm the intent using a predetermined character in an inline nudge, generating and displaying an inline menu with an interactive list of ranked files as a suggestion for attachment. The disclosed technology uses the intent for specifying a scope of the inline search. The intent model for attaching content maintained by third-party applications uses a combination of an embeddings model and an N-gram model with limited seed queries and determines the intent based on intent scores associated with respective third-party applications. The present disclosure ranks respective candidate content based on a degree of relevance to the intent. The user selects one or more items of content from the list for attaching to the document.
A method for style guide management is described. A first user input is received from a user via a graphical user interface (GUI). The first user input identifies a writing sample having a textual style. A style guide is generated, based on the writing sample, having a description of a target style, based on the textual style, for input to a generative neural network model (GNNM). A profile representing the style guide and comprising a natural language format description is sent for display in the GUI. The style guide is modified based on an explicit indication of a style preference. A request for drafting assistance is sent to the GNNM, the request including the style guide for text generation according to the style guide by the GNNM. An output generated by the GNNM in response to the request is obtained. The output is sent to be displayed within the GUI.
G06F 40/253 - Analyse grammaticaleCorrigé du style
G06F 3/0484 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] pour la commande de fonctions ou d’opérations spécifiques, p. ex. sélection ou transformation d’un objet, d’une image ou d’un élément de texte affiché, détermination d’une valeur de paramètre ou sélection d’une plage de valeurs
43.
ONLINE LEARNING PLATFORMS WITH ENHANCED SEARCH ASSIGNMENTS
A computing device executing software displays a view of a search assignment in a user interface to a learning platform. The device receives user input comprising search terms associated with the search assignment, and generates queries based on the user input. The device submits the queries to a search engine, whereupon the search engine performs searches based on the queries, and the device displays the results. As a user evaluates resources provided in the results, the device updates the user interface to include an option selectable for adding evaluated ones of the resources to a collection of resources for the search assignment. In response to the user selecting the option with respect to a resource of the evaluated ones of the resources, the device adds the resource to the collection of resources.
G06F 16/9535 - Adaptation de la recherche basée sur les profils des utilisateurs et la personnalisation
G06F 3/04817 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] fondées sur des propriétés spécifiques de l’objet d’interaction affiché ou sur un environnement basé sur les métaphores, p. ex. interaction avec des éléments du bureau telles les fenêtres ou les icônes, ou avec l’aide d’un curseur changeant de comportement ou d’aspect utilisant des icônes
G06F 3/0482 - Interaction avec des listes d’éléments sélectionnables, p. ex. des menus
G06F 3/0484 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] pour la commande de fonctions ou d’opérations spécifiques, p. ex. sélection ou transformation d’un objet, d’une image ou d’un élément de texte affiché, détermination d’une valeur de paramètre ou sélection d’une plage de valeurs
G06F 16/9538 - Présentation des résultats des requêtes
Images captured by a camera moving in an environment are received, and for each of a plurality of points in one of the images, outputs are computed using a neural network. The outputs comprise: a trajectory depicting the point in each of the plurality of images, as well as, for each trajectory, a prediction of visibility of the trajectory in each of the images and a prediction of whether the trajectory depicts a static or moving surface in the environment. The neural network receives the images and points as input and computes the outputs, wherein the outputs comprise, for each of the trajectories, confidence data. The outputs are sent to a downstream process selected from any of: visual odometry, structure from motion, human body tracking, video editing, vehicle tracking.
Disclosed are techniques for synthesizing large amounts of human-computer interaction data that is representative of real-world user data. An automated screenshot capture engine may cause an automated agent to use an application or a website in a manner designed to mimic real-world human-computer interaction. Screenshots are captured to record how a user might interact with the application. Metadata, such as window location and size, may be obtained for each screenshot. Screenshots and corresponding metadata may be automatically annotated with a large language model to indicate the context of the application and/or computer system when the screenshot was captured. Data created in this way may be used to validate AI-based software application features or to train (or retrain) a machine learning model that predicts human-computer interactions. Automated synthesis of training data significantly increases the scale of data that can be obtained for training while also reducing computing and financial costs.
G06V 10/774 - Génération d'ensembles de motifs de formationTraitement des caractéristiques d’images ou de vidéos dans les espaces de caractéristiquesDispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant l’intégration et la réduction de données, p. ex. analyse en composantes principales [PCA] ou analyse en composantes indépendantes [ ICA] ou cartes auto-organisatrices [SOM]Séparation aveugle de source méthodes de Bootstrap, p. ex. "bagging” ou “boosting”
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
G06V 10/98 - Détection ou correction d’erreurs, p. ex. en effectuant une deuxième exploration du motif ou par intervention humaineÉvaluation de la qualité des motifs acquis
G06V 20/62 - Texte, p. ex. plaques d’immatriculation, textes superposés ou légendes des images de télévision
G06V 30/146 - Alignement ou centrage du capteur d’image ou du champ d’image
46.
NETWORK DEVICE CONFIGURED FOR WORKLOADS IN SOFTWARE DEFINED NETWORKS
An enhanced router is described that improves network performance for AI workloads by providing in-network primitives that improve the performance of operations such as Broadcast and Reduce operations. The enhanced router is configured to execute primitives for data payloads in packets associated with the workloads in the network prior to forwarding the packets to hosts. The network device receives, via a control plane, a primitive indicative of an analytical, computational, or transformative operation to be performed on data payloads transmitted by data packets associated with a workload being processed in an SDN. The primitive is associated with a protocol for configuring network devices to perform in-network acceleration of workloads in coordination with source and destination hosts in the SDN.
H04L 41/0895 - Configuration de réseaux ou d’éléments virtualisés, p. ex. fonction réseau virtualisée ou des éléments du protocole OpenFlow
H04L 41/0896 - Gestion de la bande passante ou de la capacité des réseaux, c.-à-d. augmentation ou diminution automatique des capacités
H04L 41/16 - Dispositions pour la maintenance, l’administration ou la gestion des réseaux de commutation de données, p. ex. des réseaux de commutation de paquets en utilisant l'apprentissage automatique ou l'intelligence artificielle
H04L 69/22 - Analyse syntaxique ou évaluation d’en-têtes
A communications network utilizes dynamic service models to facilitate programmability within a radio access network. An analysis node is configured to execute an analysis application based on information from a virtual network function. The virtual network function is configured to provide a dynamic service model that defines one or more hook points within instructions for operating the virtual network function and one or more parameters that can be accessed by a codelet at the hook points. The virtual network function dynamically receives a codelet from the analysis node. The virtual network function verifies that the codelet complies with the dynamic service model. The virtual network function executes the codelet at one of the hook points.
A data processing system implements receiving an image and a natural language prompt input by a user requesting that an application generate a digital picture frame for the image; analyzing the prompt using a key-phrase extraction unit to extract one or more key phrases from the prompt that describe a topic of the frame to be generated for the image; providing the one or more key phrases as an input to a retrieval engine; analyzing the one or more key phrases with the retrieval engine to identify a set of candidate frame images from among a plurality of frame images in a labeled frame images datastore; analyzing the set of candidate frame images using an image placement unit to obtain a set of framed images based on the image and the candidate frame images; and presenting the set of framed images on a user interface of the application.
G06F 16/58 - Recherche caractérisée par l’utilisation de métadonnées, p. ex. de métadonnées ne provenant pas du contenu ou de métadonnées générées manuellement
High availability network services are provided in a virtual computing environment comprising a plurality of network devices running in a software defined network (SDN) of the virtual computing environment, the network devices comprising a plurality of SDN appliances configured to disaggregate enforcement of policies of the SDN from hosts of the virtual computing environment, the hosts implemented on servers communicatively coupled to network interfaces of the SDN appliances, the servers hosting a plurality of virtual machines, the SDN appliances comprising a plurality of smart network interface cards (fNICs) configured to implement functionality of the SDN appliances.
A method for secure handling of touch data includes receiving touch data from a digitizer sensor in response to a user interaction with a touch surface; transmitting the touch data from a first processor of a digitizer system to a second processor of a host system without removing a noise signature of the digitizer sensor; accessing, by the second processor, a digitizer calibration map that includes the noise signature of the digitizer sensor; and substantially removing, by the second processor, the noise signature from the touch data based at least in part on the digitizer calibration map.
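The host-side removal step described above can be sketched as an element-wise subtraction of the calibration map from the raw touch data (a minimal illustration assuming both are equally sized 2D grids; the function name is hypothetical):

```python
def remove_noise(touch_data, calibration_map):
    """Subtract the digitizer's per-cell noise signature (taken from the
    digitizer calibration map) from raw touch data on the host side."""
    return [[raw - noise for raw, noise in zip(row_t, row_c)]
            for row_t, row_c in zip(touch_data, calibration_map)]
```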
Examples are disclosed herein relating to simulating force-sensing functionality for a touch interface using a machine-learning model that is trained based at least on training data generated by a training touch interface including a plurality of force sensors. In one example, a computing system includes a touch interface configured to output a touch heatmap based at least on touch input detected by a plurality of touch sensors of the touch interface. The computing system is configured to execute a machine-learning model that is configured to receive the touch heatmap, output a force estimation of the touch input based at least on analyzing the touch heatmap, and execute a computing operation based at least on the force estimation. The machine-learning model is trained based at least on training data generated by a training touch interface including a plurality of force sensors.
Endpoint security groups include computing device endpoints that are classified according to commonly shared device features and capabilities including device type, function, role, or location. Endpoint security groups are used as an alternative identity mechanism for endpoints for purposes of security and data traffic policy enforcement rather than using conventional IP (Internet Protocol) addressing. Grouping endpoints reduces the scope of network management to enable dynamic policy enforcement for endpoints as they join, leave, and then rejoin computing networks, which is a common behavior, particularly for IoT (Internet-of-Things) devices in manufacturing environments. In an illustrative example, a private multi-access edge compute (MEC) platform supports a scalable policy definition and enforcement framework that provides consistent endpoint handling independent of network access methodology. Endpoint security groups facilitate improvements in security of network access and utilization and segmentation of data traffic on a fine-grained basis.
A heat exchanger comprising a heatsink and/or coldplate is disposed on a semiconductor having a heat-producing die within. A layer of thermal interface material (TIM) is disposed between the heat exchanger and semiconductor to enhance heat dissipation as the semiconductor is operated. A seal including a gasket or edgebond adhesive is provided around the perimeter edges of the heat exchanger and semiconductor to seal the gap around the periphery of the TIM layer to prevent the TIM from getting pumped out with cyclical thermal loading of the assembly. A capillary tube in the heat exchanger extending from the internal TIM layer to an opening exposed to the surrounding environment provides a reservoir to capture TIM that would otherwise be pumped out. Dimensions of the capillary tube are selected to prevent environmental air from passing by the TIM in the tube and getting entrapped in the TIM layer as voids.
Natural language generators (NLGs), including large language models, are powerful technologies that are in widespread use. However, typically, as NLGs become more powerful and sophisticated, their correspondingly increased complexity requires substantial processing resources. The present disclosure provides automated techniques for dynamically routing queries between at least two NLGs based on an assessment of query difficulty. Less difficult queries can be routed to a less resource intensive NLG, while more difficult queries are routed to a more sophisticated, but more resource intensive NLG. Routing less difficult queries to a less resource intensive model can thus conserve computing resources, while providing little to no drop in response quality, and in some cases providing improved response quality.
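A minimal sketch of the routing idea, assuming a toy lexical difficulty score; a production router would use a learned difficulty classifier, and the model names here are placeholders:

```python
def query_difficulty(query: str) -> float:
    """Toy difficulty score: longer, multi-clause queries score higher."""
    words = query.split()
    clauses = query.count(",") + query.count("?")
    return len(words) / 10 + clauses * 0.5

def route_query(query: str, threshold: float = 1.0) -> str:
    """Send easy queries to the cheap NLG, hard ones to the large NLG."""
    return "large_nlg" if query_difficulty(query) > threshold else "small_nlg"

print(route_query("What time is it?"))  # → small_nlg
print(route_query("Compare quicksort and mergesort on nearly sorted inputs, "
                  "then explain cache effects, branch prediction, and why "
                  "introsort falls back to heapsort."))  # → large_nlg
```

The threshold is the tuning knob: raising it routes more traffic to the cheap model, trading response quality for compute.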
The present disclosure provides methods, systems and computer readable media for training and implementing a generative machine learning model for identifying and mitigating security threats. Certain examples relate to generative model training, in which a first security image is provided to a generative machine learning (ML) model in a training prompt, with an Indicator of Compromise (IoC) prediction instruction pertaining to the first security image. The model generates a predicted IoC, and a parameter of the model is updated based on a loss function that quantifies error between a ground truth IoC and the predicted IoC. Other examples relate to the use of trained generative models for cybersecurity. A mitigation prompt comprising a second security image and an associated mitigation instruction is provided to a trained generative model. The model outputs an indication of a cybersecurity mitigation action based on the mitigation prompt, and the cybersecurity mitigation action is performed on the system. Certain example embodiments identify and automatically mitigate security issues using a multimodal generative model (MGM) through appropriate prompt engineering.
In certain embodiments, a time series-based anomaly detection method is provided, which is able to identify anomalous user accounts highly effectively. An activity predictor is used to model normal behaviors of individual accounts and to assess an extent to which a current behavior associated with an account differs from its past normal behavior. Part of an activity sequence is inputted to the activity predictor, and a resulting activity prediction (the activity predictor's prediction of normal behavior) is compared with the remaining part of the sequence. In preferred embodiments, a multi-stage approach is used, with a more lightweight form of anomaly detection applied in a first stage, and the time-series based detection performed in a second stage only on a subset of activity sequences escalated from the first stage.
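A rough sketch of the two-stage idea: the moving-average "predictor", the spike filter, and all thresholds below are illustrative stand-ins for the learned activity predictor, not the patented models:

```python
def predict_next(history):
    """Stand-in activity predictor: mean of the last three observations."""
    recent = history[-3:]
    return sum(recent) / len(recent)

def stage_one(sequence, spike=3.0):
    """Lightweight first-stage filter: escalate a sequence only when its
    latest activity count far exceeds the account's overall mean."""
    mean = sum(sequence) / len(sequence)
    return sequence[-1] > spike * mean

def stage_two(sequence, tolerance=5.0):
    """Second stage: compare the prediction of normal behavior with the
    observed remainder of the sequence."""
    predicted = predict_next(sequence[:-1])
    return abs(sequence[-1] - predicted) > tolerance

normal = [4, 5, 4, 6, 5]
anomalous = [4, 5, 4, 6, 40]
for seq in (normal, anomalous):
    escalated = stage_one(seq)          # cheap filter runs on everything
    flagged = escalated and stage_two(seq)  # expensive check only if escalated
    print(escalated, flagged)
```

Only escalated sequences pay the cost of the second-stage prediction, which is the efficiency argument the abstract makes for the multi-stage design.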
Aspects of the disclosure include methods for evaluating a predictive model. An exemplary method includes training an evaluation model to output, for an input first entity-second entity pair, a content relevancy prediction. A large language model encoder of the evaluation model generates a first embedding for the first entity and a second embedding for the second entity. The embeddings are fed to an interaction tower to produce a logit and the logit is passed with true labels to a loss function for fine-tuning. The true labels include labeled training data generated by modifying training data having a first proportion of negative labeled data to provide a second proportion of negative labeled data greater than the first proportion. The evaluation model is used to score a performance of a predictive model based at least in part on a comparison of predictions made by the respective models for a same entity pair.
A database management system manages a database in which each document is stored as a number of replicas for accessibility and data preservation. The system includes: a processor; a network interface; and a memory comprising programming instructions for execution by the processor to implement a database management service, the service configured to maintain a primary replica of a document, a number of secondary replicas of the document, and a log-only replica storing a log of changes to the document rather than contents of the document. The service makes head reads to the primary replica as needed when a read request to the number of secondary replicas does not result in a quorum.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
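The quorum-then-head-read fallback can be sketched as follows; the replica representation and quorum size are assumptions for illustration:

```python
def read_with_quorum(secondaries, primary, quorum=2):
    """Return a value agreed on by `quorum` secondary replicas; fall back
    to a head read on the primary replica when no quorum forms."""
    counts = {}
    for replica in secondaries:
        value = replica["value"]
        counts[value] = counts.get(value, 0) + 1
        if counts[value] >= quorum:
            return value
    return primary["value"]  # head read: authoritative but more costly

secondaries = [{"value": "v2"}, {"value": "v1"}, {"value": "v2"}]
primary = {"value": "v2"}
print(read_with_quorum(secondaries, primary))      # quorum on "v2"
print(read_with_quorum(secondaries[:2], primary))  # no quorum -> head read
```

Reads are served cheaply from secondaries in the common case; the primary is consulted only when the secondaries disagree or too few respond.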
60.
GRAPH EMBEDDING FOR SERVICE HEALTH INCIDENT INSIGHTS
A system and method for vector embedding of service incident data is described. In one aspect, a computer-implemented method comprises receiving service incident data that includes free-form text data, structured metadata, and human-generated comments; constructing a graph representation of a service incident, the graph including nodes representing the free-form text data, structured metadata, and human-generated comments of the service incident, and edges connecting related nodes; generating vector embeddings for the nodes and edges of the graph representation; applying dimensionality reduction to the vector embeddings to generate reduced embeddings; and storing the reduced embeddings and the vector embeddings in a database.
A remote monitoring and management (RMM) system is configured to receive a stream of events generated in response to interactions of users from multiple tenants with one or more applications and store the events in a database. A plurality of different insight types is defined for one or more event types for the events. Insights of the different insight types are generated based on the events in the database, the event types of the events, and numbers of events of the event types. The insights are ranked using an artificial intelligence (AI) model trained to generate a predicted success score for each of the insights. A predetermined number of top insights are selected based on the ranking of the insights and aggregated into a feed. The feed is provided to at least one computing device associated with the RMM system.
H04L 41/50 - Network service management, e.g. ensuring proper service fulfilment according to agreements
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
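The rank-then-select step can be sketched as below. The hand-written scoring function and type weights are stand-ins for the trained AI model's predicted success score:

```python
def build_feed(insights, score_fn, top_n=3):
    """Rank insights by predicted success score and keep the top N as the feed."""
    return sorted(insights, key=score_fn, reverse=True)[:top_n]

# Stand-in for the trained ranking model: weight each insight type and
# scale by how many events back the insight.
TYPE_WEIGHTS = {"usage_drop": 2.0, "new_feature": 1.0, "error_spike": 3.0}

def predicted_success(insight):
    return insight["event_count"] * TYPE_WEIGHTS[insight["type"]]

insights = [
    {"type": "usage_drop", "event_count": 4},    # score 8
    {"type": "new_feature", "event_count": 10},  # score 10
    {"type": "error_spike", "event_count": 2},   # score 6
    {"type": "error_spike", "event_count": 5},   # score 15
]
feed = build_feed(insights, predicted_success, top_n=2)
print([(i["type"], i["event_count"]) for i in feed])
# → [('error_spike', 5), ('new_feature', 10)]
```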
62.
DETECTING AND PREVENTING HARMFUL GENERATIVE IMAGE OUTPUTS USING DIGITAL SIGNATURES
This disclosure describes utilizing an image model protection system to improve the defensive robustness of a large generative image model against the generation of harmful digital images. For example, the image model protection system uses digital signatures of identified harmful images to determine whether a particular harmful image was generated by a specific large generative image model. Using digital signatures, the image model protection system matches the harmful image to images generated by the large generative image model. The image model protection system then identifies the prompt used to generate the image at the large generative image model. Furthermore, the image model protection system uses the harmful prompt to implement new security measures to safeguard the large generative image model against the generation of similar harmful images in the future.
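A minimal sketch of the signature-matching lookup, using a SHA-256 digest as a stand-in for the digital signature scheme (the patent's actual matching may be more robust, e.g. tolerant of re-encoding):

```python
import hashlib

def image_signature(image_bytes: bytes) -> str:
    """Digital-signature stand-in: SHA-256 digest of the image bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

# Log mapping signatures of generated images to the prompts that produced them.
generation_log = {}

def record_generation(image_bytes: bytes, prompt: str) -> None:
    generation_log[image_signature(image_bytes)] = prompt

def find_harmful_prompt(harmful_image: bytes):
    """Return the prompt that produced the image, if this model generated it."""
    return generation_log.get(image_signature(harmful_image))

record_generation(b"fake-image-bytes", "a prompt that slipped past filters")
print(find_harmful_prompt(b"fake-image-bytes"))
print(find_harmful_prompt(b"someone-else's-image"))  # → None
```

Once the originating prompt is recovered, it can seed new safeguards, for example by adding it to a prompt blocklist or to safety-classifier training data.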
Embodiments of the disclosed technologies are capable of evaluating typeahead suggestions using a partial search query. The embodiments describe obtaining a typeahead suggestion responsive to a partial search query. The embodiments further describe creating a prompt based on the typeahead suggestion. The embodiments further describe causing a large language model (LLM) to evaluate the typeahead suggestion based on the prompt. The embodiments further describe providing, to a computing device, an evaluation output by the LLM in response to the prompt.
Systems and methods for providing content events that are relevant to a first user of a social network are provided. In particular, a computing device may obtain content data associated with one or more content events, obtain user engagement data associated with the first user, determine a relevance score for each of the one or more content events using a relevance predictive model based on the user engagement data and attributes associated with the respective content event, the relevance score of each of the one or more content events representing a likelihood of the first user to engage with the respective content event, rank the content events based on the relevance score for each of the one or more content events, and present a subset of the content events to the first user on a user interface of a device based on the ranking.
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of a specific business sector, e.g. utilities or tourism
This description relates to removing CO2 from the air. One example includes a duct extending from an external environment to an internal environment and a fan configured to move air through the duct. The example also includes first and second CO2 removal assemblies configured to alternately transition between CO2 adsorption mode and CO2 desorption mode so that one of either the first and second CO2 removal assemblies is in CO2 adsorption mode and receiving at least some of the air moving through the duct while the other of the first and second CO2 removal assemblies is not receiving air moving through the duct while CO2 is removed from it in desorption mode.
B01D 53/04 - Separation of gases or vapours; Recovering vapours of volatile solvents from gases; Chemical or biological purification of waste gases, e.g. engine exhaust gases, smoke, fumes, flue gases or aerosols, by adsorption, e.g. preparative gas chromatography, with stationary adsorbents
66.
CONTEXTUAL LONG-TERM SURVIVAL OPTIMIZATION FOR CONTENT MANAGEMENT SYSTEM CONTENT SELECTION
A method involves first receiving a set of data on rewards associated with previously chosen content variant choices, selected based on an initial content variant choice model. This initial model is informed by a prior set of data. A second, updated content variant choice model is then determined based on this first set of reward data. When a request for selecting a content variant choice is received, it comes with contextual features. The method involves estimating the expected rewards for a range of content variant choices, considering these contextual features. Subsequently, a specific content variant choice is chosen based on both the updated model and the anticipated rewards. Finally, the chosen content variant is displayed on a device, responding to the initial request.
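The select-and-update loop reads like a contextual bandit; a minimal linear sketch is below. The variant names, context features, and learning rate are illustrative assumptions, not the patented model:

```python
def expected_reward(model, variant, context):
    """Linear estimate: dot product of context features with the variant's weights."""
    return sum(w * x for w, x in zip(model[variant], context))

def choose_variant(model, variants, context):
    """Pick the content variant with the highest estimated reward."""
    return max(variants, key=lambda v: expected_reward(model, v, context))

def update_model(model, variant, context, reward, lr=0.1):
    """Nudge the chosen variant's weights toward the observed reward."""
    error = reward - expected_reward(model, variant, context)
    model[variant] = [w + lr * error * x for w, x in zip(model[variant], context)]

# Two hypothetical banner variants; context = [is_mobile, is_returning].
model = {"banner_a": [0.2, 0.1], "banner_b": [0.0, 0.5]}
print(choose_variant(model, list(model), [1.0, 0.0]))  # → banner_a
# Observed reward feeds back into the next model update, as in the abstract.
update_model(model, "banner_a", [1.0, 0.0], reward=1.0)
```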
A data processing system implements receiving an image and a natural language prompt input by a user requesting that an application generate a digital picture frame for the image; analyzing the prompt using a key-phrase extraction unit to extract one or more key phrases from the prompt that describe a topic of the frame to be generated for the image; providing the one or more key phrases as an input to a retrieval engine; analyzing the one or more key phrases with the retrieval engine to identify a set of candidate frame images from among a plurality of frame images in a labeled frame images datastore; analyzing the set of candidate frame images using an image placement unit to obtain a set of framed images based on the image and the candidate frame images; and presenting the set of framed images on a user interface of the application.
Methods and systems for estimating localization lengths in hybrid superconductor-semiconductor quantum devices are described. A method for estimating localization lengths in a hybrid superconductor-semiconductor quantum device includes constructing a statistical model for extracting localization lengths based on an implicit description of nonlocal conductance measurements associated with a physical representation of the hybrid superconductor-semiconductor quantum device. The method further includes, using a processor, estimating the localization lengths in the hybrid superconductor-semiconductor quantum device by a joint prior distribution enforcing smoothness over a function of gate voltages and extracted localization lengths for the hybrid superconductor-semiconductor quantum device.
A seamless and secure cloud to PC pointer relay allows a pointer/cursor to be moved between secure and unsecure windows while being displayed with smooth transitions and while transitioning between secure and unsecure data handling for pointer information. A secure input unit encrypts pointing device operations in the secure window. A user (host) computing device performs location calculations on encrypted data, which conceals pointing device operations in the secure window from the host operating system. The secure unit decrypts the encrypted data returned by the host operating system to determine the calculated pointer location information. The secure unit relays the calculated pointer operation information to the source of the secure window (e.g., remote cloud server) to process user interaction with the secure window while keeping the host operating system unaware of user activity in the secure window (e.g., other than position, if the host renders the pointer).
G06F 21/83 - Protecting input, output or interconnection devices; input devices, e.g. keyboards, mice or controllers thereof
The disclosure relates to a semiconductor-superconductor hybrid structure, which includes a substrate, a buffer region having a superlattice sub-region over the substrate and a graded lattice sub-region over the superlattice sub-region, an active region over the buffer region, a superconductor over the active region consisting of one or more patterned nanowires, and a cap layer encapsulating the superconductor and top surface portions of the active region not covered by the superconductor. The active region covers an entire top surface of the buffer region, is configured to quantum confine electrons, and has a top barrier layer configured to tune coupling between the superconductor and the active region to a desired value. The superlattice sub-region is configured to prevent impurity diffusion and crystalline defects propagating from the substrate to the active region, while the graded lattice sub-region is configured to provide a lattice constant transition between the substrate and the active region.
In example embodiments, specialized machine learning techniques may be utilized to automatically create summaries for viewers based at least partially on viewer intent. Viewer intent refers to the intention of the viewer with respect to performing a particular action in a computer system, namely what the viewer is attempting to accomplish. In some example embodiments, this viewer intent may be expressed in the form of a plurality of different intent categories, each providing, at a high level, what the viewer intends to accomplish. Examples of such categories in a social networking service include “job seeker,” “information gatherer,” “salesperson,” and “recruiter.”
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of a specific business sector, e.g. utilities or tourism
H04L 67/1396 - Protocols specially adapted for monitoring users' activity
A user can select a capacity setting for a transitional partition that determines the allocation between a low-density partition and a high-density partition in the transitional partition. The transitional partition can dynamically change among multiple settings having different capacities for the low-density partition. If the current setting of the transitional partition does not efficiently utilize the available storage space based on the user's preferences for storing different types of data in the low-density partition and the high-density partition, then the user can choose to change the transitional partition to a different setting that better suits the individual user's storage allocation preferences. Therefore, valuable storage space will not be under-utilized but instead will be repurposed for more efficient use by converting a low-density partition to a high-density partition, and vice versa.
Devices are automatically paired (e.g., without user involvement) for wireless communication based on proximity. A first device may authorize (e.g., wired or wireless) bridge device(s) to participate in (e.g., initiate) pairing first and second devices. The first or bridge devices engage in wireless proximity communication with second device(s), indicating the second device(s) is (are) physically co-located with the first or bridge devices. Co-location is used to initiate automated pairing of the first and second devices. The second device provides a pairing address to the first device (e.g., through the bridge device). The first device provides a temporary security key for a secure channel between the first and second devices (e.g., through the bridge device). A non-temporary security key is provided by the first device to the second device (e.g., through the bridge device) over the secure channel. The first and second devices complete automated wireless pairing using the non-temporary security key.
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
H04W 12/04 - Key management, e.g. using generic bootstrapping architecture [GBA]
H04W 76/14 - Setup of direct-mode connections
Aspects of the present disclosure relate to multi-user, multi-device gaze tracking. In examples, a system includes at least one processor, and memory storing instructions that, when executed by the at least one processor, cause the system to perform a set of operations. The set of operations includes identifying a plurality of computing devices, and identifying one or more users. The set of operations may further include receiving gaze input data and load data from two or more of the plurality of computing devices. The set of operations may further include performing load balancing among the plurality of computing devices, wherein the load balancing comprises assigning one or more tasks from a first of the plurality of computing devices to a second of the plurality of computing devices based upon the gaze input data.
H04L 67/1008 - Server selection for load balancing based on parameters of servers, e.g. available memory or workload
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
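One plausible reading of gaze-informed load balancing, sketched below, is to move work off an overloaded device while preferring a target the user is not currently looking at. The device representation and the preference ordering are assumptions for illustration:

```python
def rebalance(devices):
    """Move one task from the most loaded device to the best target,
    preferring a device the user is not currently gazing at."""
    source = max(devices, key=lambda name: devices[name]["load"])
    candidates = [name for name in devices if name != source]
    # False sorts before True, so non-gazed, lightly loaded devices win.
    candidates.sort(key=lambda name: (devices[name]["gazed"], devices[name]["load"]))
    target = candidates[0]
    devices[target]["tasks"].append(devices[source]["tasks"].pop())
    return source, target

devices = {
    "laptop":  {"load": 0.9, "gazed": True,  "tasks": ["render", "index"]},
    "tablet":  {"load": 0.3, "gazed": True,  "tasks": []},
    "desktop": {"load": 0.4, "gazed": False, "tasks": []},
}
print(rebalance(devices))  # → ('laptop', 'desktop')
```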
A computing system is configured to detect a request for a deployment of a container at a container orchestration service. One or more datasets associated with the deployment of the container are collected, and a plurality of features associated with the deployment are extracted based on the one or more datasets. A probability score is then generated based on the plurality of features, using a machine-learning model trained on datasets associated with historical deployments of containers that have been performed via the container orchestration service. The probability score indicates a probability that the deployment of the container is anomalous compared to the historical deployments of containers. When the probability score is greater than a threshold, the deployment of the container is determined as anomalous.
A heat exchanger comprising a heatsink and/or coldplate is disposed on a semiconductor having a heat-producing die within. A layer of thermal interface material (TIM) is disposed between the heat exchanger and semiconductor to enhance heat dissipation as the semiconductor is operated. A seal including a gasket or edgebond adhesive is provided around the perimeter edges of the heat exchanger and semiconductor to seal the gap around the periphery of the TIM layer to prevent the TIM from getting pumped out with cyclical thermal loading of the assembly. A capillary tube in the heat exchanger extending from the internal TIM layer to an opening exposed to the surrounding environment provides a reservoir to capture TIM that would otherwise be pumped out. Dimensions of the capillary tube are selected to prevent environmental air from passing by the TIM in the tube and getting entrapped in the TIM layer as voids.
H01L 23/10 - Containers; Seals characterised by the material or arrangement of seals between parts, e.g. between cap and base or between leads and walls of the container
H01L 23/42 - Fillings or auxiliary members in containers or encapsulations selected or arranged to facilitate heating or cooling
H01L 23/367 - Cooling facilitated by shape of device
H01L 23/473 - Arrangements for cooling, heating, ventilating or temperature compensation involving the transfer of heat by flowing fluids, by flowing liquids
77.
ADAPTIVE QUERY ROUTING FOR NATURAL LANGUAGE GENERATORS BASED ON QUERY DIFFICULTY
Natural language generators (NLGs), including large language models, are powerful technologies that are in widespread use. However, typically, as NLGs become more powerful and sophisticated, their correspondingly increased complexity requires substantial processing resources. The present disclosure provides automated techniques for dynamically routing queries between at least two NLGs based on an assessment of query difficulty. Less difficult queries can be routed to a less resource intensive NLG, while more difficult queries are routed to a more sophisticated, but more resource intensive NLG. Routing less difficult queries to a less resource intensive model can thus conserve computing resources, while providing little to no drop in response quality, and in some cases providing improved response quality.
A system is configurable to access a precomputed topology associated with a mesh that comprises a plurality of object components. The precomputed topology defines a plurality of object component groups that each comprise a respective set of object components of the mesh. The system is configurable to determine a traversal likelihood metric associated with the mesh that indicates a likelihood that rays of a ray trace operation will traverse acceleration structure nodes representing object components of the mesh, and use the plurality of object component groups as inputs to construct an acceleration structure. When the traversal likelihood metric satisfies a threshold, leaf nodes of at least one intermediate node of the acceleration structure each comprise a respective object component of an object component group. When the traversal likelihood metric fails to satisfy the threshold, at least one leaf node of the acceleration structure comprises an object component group.
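The threshold test described above amounts to choosing leaf granularity for the acceleration structure; a minimal sketch (names and the threshold value are illustrative assumptions):

```python
def build_leaves(groups, traversal_likelihood, threshold=0.5):
    """Choose leaf granularity for the acceleration structure.
    High traversal likelihood -> one leaf per object component (fine);
    low likelihood -> one leaf per component group (coarse)."""
    if traversal_likelihood >= threshold:
        # Rays are likely to traverse deep: pay for fine-grained leaves.
        return [[component] for group in groups for component in group]
    # Rays rarely reach this mesh: keep whole groups as single leaves.
    return [list(group) for group in groups]

groups = [["tri0", "tri1"], ["tri2"]]
print(build_leaves(groups, 0.9))  # → [['tri0'], ['tri1'], ['tri2']]
print(build_leaves(groups, 0.1))  # → [['tri0', 'tri1'], ['tri2']]
```

Coarser leaves make the tree smaller and cheaper to build; finer leaves make individual traversals cheaper, which pays off only when rays actually reach those nodes.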
Bidirectional flows of a communication session in a software defined network (SDN) are efficiently managed. A smart switch comprises a digital processing unit (DPU) complex comprising one or more DPUs, and a switching complex comprising one or more network processing units (NPUs). The DPU complex is configured to disaggregate enforcement of policies of the SDN from hosts of the SDN. The switching complex is configured to perform network routing of packets in the SDN. The hosts are implemented on servers communicatively coupled to network interfaces of the SDN. The switching complex is configured to perform policy enforcement of data flows for communication sessions that are offloaded from the DPU complex to the switching complex.
Methods, systems, and computer storage media for providing workload management using a workload management engine in an artificial intelligence (AI) system. In particular, workload management incorporates adaptive strategies that adjust the neural network models employed by a processing unit (e.g., NPU/GPU/TPU) based on the dynamic nature of workloads, workload management factors, and workload management logic. The workload management engine provides the workload management logic to support strategic decision-making for processor optimization. In operation, a plurality of states of workload management factors are identified. A task associated with a workload processing unit is identified. Based on the task and the plurality of states of the workload processing unit, a neural network model from a plurality of neural network models is selected. The plurality of neural network models includes a full neural network model and a reduced neural network model. The task is caused to be executed using the selected neural network model.
A method for securely providing a remote desktop session includes receiving, at a user device, an encrypted video stream that includes graphics content of the remote desktop session and that is characterized by a frame rate that is variable. The method further provides for reducing variability in the frame rate of the encrypted video stream by duplicating select encrypted frames of the video stream and inserting the duplicated encrypted frames into the video stream. The method additionally provides for delivering the video stream to a local application configured to generate control signals that cause a graphics processing unit (GPU) of the user machine to render the video stream to a display of the user machine.
H04N 21/254 - Management at additional data server, e.g. shopping server or rights management server
H04N 21/4405 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs, involving video stream decryption
A computer implemented method comprising: obtaining a simulated input image simulating a second imaging modality based on a source image in a first imaging modality; inputting the simulated input image into a first machine learning model trained based on simulated training images in the second imaging modality, thereby generating a latent representation of the simulated input image; and causing the latent representation to be input into a second machine learning model trained based on empirical training images in the second imaging modality, thereby resulting in the second machine learning model generating a synthesized output image in the second modality.
Methods, systems, and computer storage media for providing workload management using a workload management engine in an artificial intelligence (AI) system. In particular, workload management incorporates adaptive strategies that adjust the neural network models employed by a processing unit (e.g., NPU/GPU/TPU) based on the dynamic nature of workloads, workload management factors, and workload management logic. The workload management engine provides the workload management logic to support strategic decision-making for processor optimization. In operation, a plurality of states of workload management factors are identified. A task associated with a workload processing unit is identified. Based on the task and the plurality of states of the workload processing unit, a neural network model from a plurality of neural network models is selected. The plurality of neural network models includes a full neural network model and a reduced neural network model. The task is caused to be executed using the selected neural network model.
A method for securely providing a remote desktop session includes receiving, at a user device, an encrypted video stream that includes graphics content of the remote desktop session and that is characterized by a frame rate that is variable. The method further provides for reducing variability in the frame rate of the encrypted video stream by duplicating select encrypted frames of the video stream and inserting the duplicated encrypted frames into the video stream. The method additionally provides for delivering the video stream to a local application configured to generate control signals that cause a graphics processing unit (GPU) of the user machine to render the video stream to a display of the user machine.
H04N 21/2347 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving video stream encryption
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
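The frame-rate smoothing in this abstract, duplicating still-encrypted frames to fill timing gaps, can be sketched as follows. Frames are treated as opaque byte strings; the 1.5× gap tolerance and the timestamp representation are illustrative assumptions.

```python
# Sketch of reducing frame-rate variability by duplicating encrypted
# frames without decrypting them. The gap tolerance is an assumption.

def smooth_frame_rate(frames, timestamps, target_interval):
    """Insert duplicates of the previous encrypted frame into timing gaps."""
    out = []
    prev_ts = None
    for frame, ts in zip(frames, timestamps):
        if prev_ts is not None:
            gap = ts - prev_ts
            # Repeat the last delivered frame until the gap is near-nominal.
            while gap > 1.5 * target_interval:
                out.append(out[-1])
                gap -= target_interval
        out.append(frame)
        prev_ts = ts
    return out
```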
85.
NETWORK TRAFFIC ARBITRATION BASED ON PACKET PRIORITY
A method for network traffic arbitration includes, at a network router, receiving two or more network packets over two or more input ports. During an observation window, traffic parameters for the two or more network packets are stored in a traffic history table, the traffic parameters including a Quality-of-Service (QoS) priority value for a network packet of the two or more network packets. Based at least in part on the traffic parameters recorded in the traffic history table, including the QoS priority value, arbitration weights are calculated for each of the two or more input ports for a weighted round robin arbitration process.
H04L 47/6275 - Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
H04L 47/62 - Queue scheduling characterised by scheduling criteria
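The weight calculation and the resulting weighted round robin schedule can be sketched in Python. Weighting each input port by observed demand plus its QoS priority is an illustrative policy; the abstract does not give the exact formula.

```python
# Sketch of deriving weighted-round-robin arbitration weights from a
# traffic history table of (input_port, qos_priority) observations.
from collections import defaultdict

def compute_arbitration_weights(traffic_history):
    """Return an integer weight per input port from the observation window."""
    weights = defaultdict(int)
    for port, qos in traffic_history:
        weights[port] += 1 + qos  # one unit of demand plus a priority bonus
    return dict(weights)

def weighted_round_robin_schedule(weights):
    """Build one round of a schedule proportional to the port weights."""
    schedule = []
    for port, w in sorted(weights.items()):
        schedule.extend([port] * w)
    return schedule
```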
A system is configurable to access a precomputed topology associated with a mesh that comprises a plurality of object components. The precomputed topology defines a plurality of object component groups that each comprise a respective set of object components of the mesh. The system is configurable to determine a traversal likelihood metric associated with the mesh that indicates a likelihood that rays of a ray trace operation will traverse acceleration structure nodes representing object components of the mesh, and use the plurality of object component groups as inputs to construct an acceleration structure. When the traversal likelihood metric satisfies a threshold, leaf nodes of at least one intermediate node of the acceleration structure each comprise a respective object component of an object component group. When the traversal likelihood metric fails to satisfy the threshold, at least one leaf node of the acceleration structure comprises an object component group.
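The threshold test that decides leaf granularity can be illustrated with a short sketch: fine-grained leaves (one object component each) when traversal is likely, coarse leaves (one whole group each) otherwise. Function and parameter names are illustrative, not from the abstract.

```python
# Sketch of choosing acceleration-structure leaf granularity from the
# traversal likelihood metric. Names are hypothetical.

def build_leaf_nodes(component_groups, traversal_likelihood, threshold):
    if traversal_likelihood >= threshold:
        # Fine-grained: each leaf holds a single object component.
        return [[c] for group in component_groups for c in group]
    # Coarse: each leaf holds an entire object component group.
    return [list(group) for group in component_groups]
```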
Techniques, software, and systems for enhanced notification of connector coupling quality between a connector and a user device are included. In one implementation a method includes obtaining indications of magnetic coupling properties of a connector with respect to a device. Based on at least the indications of the magnetic coupling properties, the method includes determining a coupling quality of a connection between the connector and the device. The method also includes providing an indication based at least on the coupling quality of the connection falling below a threshold quality level.
G01R 31/66 - Testing of connections, e.g. of plugs or non-disconnectable joints
G01D 5/14 - Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using electric or magnetic means influencing the magnitude of a current or voltage
88.
CLOUD SERVICE SECURITY RISK ASSESSMENT AND MANAGEMENT
A computer-implemented approach for assessing and managing risk of a cloud service is disclosed. Cloud computing resource data for a cloud service is received. A risk assessment framework is applied to the cloud computing resource data. The risk assessment framework includes a set of security criteria including a subset of data plane criteria and a subset of control plane criteria. The risk assessment framework assigns an individual risk score to each security criteria of the set. The individual risk scores of the set of security criteria are aggregated to generate an overall risk score for the cloud service. A graphical user interface including the overall risk score is visually presented via a display. A computer-automated risk management operation that automatically adjusts security settings of the cloud service based at least on the cloud computing resource data for the cloud service is executed to enhance security of the cloud service.
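The aggregation of individual criterion scores into an overall risk score can be sketched as below. A weighted mean is one plausible aggregation; the abstract does not specify the formula, and the criterion names used here are hypothetical.

```python
# Sketch of aggregating per-criterion risk scores into an overall score.
# Weighted-mean aggregation and criterion names are assumptions.

def overall_risk_score(criterion_scores, weights=None):
    """criterion_scores: dict of criterion -> risk score.
    weights: optional dict of criterion -> weight (defaults to 1.0 each)."""
    if weights is None:
        weights = {name: 1.0 for name in criterion_scores}
    total_weight = sum(weights[name] for name in criterion_scores)
    weighted = sum(score * weights[name] for name, score in criterion_scores.items())
    return weighted / total_weight
```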
This patent relates to hinged devices, such as computing devices. One example includes a first portion including a first input/output device and a second portion including a second input/output device. A hinge assembly includes a flexible hinge that removably couples the first and second portions and allows relative rotation between the first and second portions. The flexible hinge is biased into the first portion to reduce a percentage of the flexible hinge exposed between the first and second portions at a given rotational or angular orientation of the first and second portions.
Liquid-cooled coldplates are mounted to racks receiving solid state drives (SSDs) in an electronic component rack. The SSDs have heat spreaders with externally exposed surfaces that are thermally coupled to the coldplates using dry-contact interfaces. The SSD heat spreaders and rack-mounted coldplates provide a thermal path from the heat-producing semiconductors inside the SSD to a fluid distribution system in the rack that is operatively coupled to a liquid-cooling system. The SSDs are slideably mounted in the racks to support easy “hot-swapping.” A technician slides an SSD into the rack and uses a finger-operated mechanism in the SSD to simultaneously seat the SSD power and data connectors to mating connectors in the rack and place the coldplate in intimate thermal contact with the SSD heat spreader.
The presently disclosed magnetic locking mechanism(s) for a rectangular computing device provide a fast, tamper-resistant, anti-theft solution for assembly and disassembly of a rectangular computing device having a top and a base that come together to form an overall enclosure. A top and base that incorporate one or more of the presently disclosed magnetic locking mechanisms can be quickly and easily attached and detached without damaging the rectangular computing device, so long as a correct magnetic key is used. This aids both repairability and upgradability of the rectangular computing device during its life cycle, as well as recyclability at the end of its life cycle. Without the correct magnetic key, it is difficult to separate the top and base without damaging the rectangular computing device.
A data processing system implements receiving, via a first software application on a client device, a call requesting a schedule to be generated for a user by a generative model. The system further implements identifying online and/or offline data source(s) indicating activities specific to the user, the online and/or offline data source(s) including software application(s) within a workspace; constructing a first prompt by a prompt construction unit as an input to the generative model, the prompt construction unit constructing the first prompt by appending the activities and context data to an instruction string, the instruction string comprising instructions to the generative model to schedule the activities based on the context data, and to assign the scheduled activities into the schedule, the context data being associated with the user and/or the activities; providing the schedule to the client device; and causing a user interface of the client device to present the schedule.
Techniques are described herein that are capable of providing time-of-scan protection for a scannable encoded image in an electronic message or an electronic form. An electronic message or electronic form is received. The electronic message or electronic form includes a scannable encoded image. A uniform resource identifier (URI) that is encoded in the scannable encoded image is identified by decoding the scannable encoded image. The URI identifies a target data source. A wrapped URI is generated by wrapping the URI in a wrapper. The wrapped URI identifies a substitute data source. A replacement scannable encoded image is generated by encoding the wrapped URI. A replacement electronic message or replacement electronic form is generated by replacing the scannable encoded image in the electronic message or electronic form with the replacement scannable encoded image. The replacement electronic message is provided, or the replacement electronic form is published.
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
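The URI-wrapping step described in this abstract can be sketched as follows: the decoded URI is embedded as a query parameter of a substitute data source so the target can be evaluated at time of scan. The endpoint URL here is hypothetical.

```python
# Sketch of wrapping a decoded URI so that it identifies a substitute
# data source. The endpoint is a made-up example host.
from urllib.parse import parse_qs, quote, urlparse

SUBSTITUTE_ENDPOINT = "https://scan-protect.example.com/check"  # hypothetical

def wrap_uri(uri: str) -> str:
    """Return a wrapped URI that identifies the substitute data source."""
    return f"{SUBSTITUTE_ENDPOINT}?target={quote(uri, safe='')}"

def unwrap_uri(wrapped: str) -> str:
    """Recover the original target URI from the wrapped URI."""
    return parse_qs(urlparse(wrapped).query)["target"][0]
```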
A computer-implemented method comprising receiving a plural number of candidate parameter value sets in a specified order, each comprising a respective candidate parameter value for at least one parameter of an optimisation algorithm, wherein the number of candidate parameter value sets is based on a processing budget; for each candidate parameter value set in the specified order: applying the optimisation algorithm, with the at least one parameter set to the respective candidate parameter value, to a plurality of initial states of a model representing a system to generate corresponding candidate updated states, and evaluating each of the candidate updated states according to an optimality metric to generate a corresponding optimality score; selecting, as an estimated optimal state of the model, the candidate updated state having the highest optimality score; and outputting the selected estimated optimal state of the model to a user interface, network interface or other application.
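The budgeted sweep over candidate parameter value sets can be sketched with stand-in callables for the optimisation algorithm and the optimality metric; all names here are illustrative.

```python
# Sketch of a budgeted parameter sweep: apply the optimisation algorithm
# once per candidate parameter set and per initial state, score every
# candidate updated state, and keep the best.

def sweep(candidate_sets, optimise, initial_states, optimality):
    best_state, best_score = None, float("-inf")
    for params in candidate_sets:  # number of sets fixed by the budget
        for state in initial_states:
            updated = optimise(state, params)
            score = optimality(updated)
            if score > best_score:
                best_state, best_score = updated, score
    return best_state, best_score
```

With a toy objective that prefers states near 10, sweeping step sizes 1 and 5 over initial states 0 and 4 selects the state 9 produced by the larger step.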
The disclosed technology is generally directed to a distributed query-and-command system. In one example of the technology, in a trusted execution environment (TEE) of a first node, query-and-command code of the first node and distributed ledger code of the first node are executed, such that execution of the distributed ledger code of the first node instantiates a first instance of a distributed ledger of a consortium blockchain, and such that execution of the query-and-command code of the first node instantiates a first instance of a query-and-command system. The consortium blockchain is distributed among a plurality of nodes, and the query-and-command system is distributed among the plurality of nodes. A first transaction that is associated with modifying the query-and-command system is received. The first transaction is executed. Changes associated with the first transaction to the distributed ledger are persisted.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
97.
ENHANCED EYE TRACKING SYSTEMS UTILIZING JOINT BIOLOGICAL AND HARDWARE ESTIMATIONS
The disclosed techniques provide enhanced eye tracking systems utilizing joint estimation of biological parameters and hardware parameters. A system uses joint estimation of biological parameters, e.g., direction and position of an eye, with concurrent estimation of hardware parameters, e.g., camera position or camera direction, to self-calibrate and provide eye tracking estimations to accommodate for deformations and other changes of a device. Sensor data is used to select hardware parameters of a camera for use in the joint estimation with the biological parameters, where the hardware parameters are estimated based on glint and pupil position of a user. The disclosed techniques include a method to model changes of a device, as well as detect and compensate for them while the eye-tracking device is in normal use, without requiring a factory-calibration procedure to be repeated.
Systems and methods are provided for obtaining, training, and using an end-to-end automatic speech translation (AST) model based on a neural transducer, the end-to-end AST model comprising at least (i) an acoustic encoder configured to receive and encode audio data, (ii) a prediction network integrated in a parallel model architecture with the acoustic encoder in the end-to-end AST model, and (iii) a joint layer integrated in series with the acoustic encoder and prediction network. The end-to-end AST model is configured to generate a transcription in a second language of input audio data in a first language such that the acoustic encoder learns a plurality of temporal processing paths.
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
G06F 40/284 - Lexical analysis, e.g. tokenisation or collocates
G06F 40/58 - Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
Topological devices with asymmetric junction(s) are described. An example topological device (100) includes a superconducting wire (112) comprising a first segment (114) and a second segment (116), where the first segment (114) is configurable to be in a trivial phase and the second segment (116) is configurable to be in a topological phase. The topological device further includes an asymmetric junction (182), at an interface of the first segment (114) and the second segment (116). The asymmetric junction (182) is operable to couple a Majorana zero mode, MZM, in the second segment (116) to a quantum dot (172) or a transport lead (153) such that the asymmetric junction (182) increases strength of a coupling between the MZM and the quantum dot (172) or the transport lead (153) while reducing strength of a coupling between any states formed in the first segment (114) of the superconducting wire (112) and the quantum dot (172) or the transport lead (153).
G06N 10/40 - Physical realisations or architectures of quantum processors or components for manipulating qubits, e.g. qubit coupling or qubit control
100.
EXPEDITING GENERATIVE TOKEN PRODUCTION USING SPECULATIVE SAMPLING, ADDED GUIDANCE, AND LANGUAGE MODELS OF DIFFERENT CAPACITIES
A technique accelerates the generative production of tokens using a target language model that operates in cooperation with a draft language model. The target language model is more capable, but slower, than the draft language model. In operation, the draft language model transforms prompt tokens into draft tokens. The target language model edits the draft tokens, e.g., by selecting zero, one, or more of the draft tokens, and by also predicting a next token to follow the draft token(s) (if any) that are selected. Further, the target language model produces guidance vector information. In a subsequent cycle, the draft language model uses the guidance vector information to produce an updated set of draft tokens. The guidance vector information informs the draft language model of the embedding space being used by the target language model. This achieves a more effective cooperative relationship between the two models.
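The accept-then-extend step of this draft/target cooperation can be illustrated with a simplified sketch: keep the longest draft prefix the target model agrees with, then append the target's own next token. The greedy-acceptance rule and the omission of the guidance vector exchange are simplifications, not details from the abstract.

```python
# Sketch of accepting draft tokens against a target model's choices.
# target_tokens holds the target model's preferred token at each
# position and has at least one more entry than draft_tokens.

def accept_draft_tokens(draft_tokens, target_tokens):
    accepted = []
    for draft, target in zip(draft_tokens, target_tokens):
        if draft != target:
            break
        accepted.append(draft)
    # The target model always contributes the token after the accepted prefix.
    accepted.append(target_tokens[len(accepted)])
    return accepted
```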