Conditions are identified in a telecommunications network. Based on a detected condition, a data store is accessed to identify an associated executable plan for responding to the condition. The executable plan is generated by an AI agent based on a structured operator-readable document comprising operator-executable procedures. The executable plan comprises a series of operations that are executable by an execution component.
A Content Distribution Network (CDN) may be implemented as a front door to a RAG-based LLM for the purpose of semantically caching LLM responses to natural language prompts. More specifically, the CDN may also cache document citation(s) and/or user tag(s) along with the LLM response to ensure that access permission constraints of the RAG are observed when providing cached LLM responses as direct responses to semantically similar natural language prompts. Additionally, the CDN may be configured to modify and/or purge cached data from the CDN's cache memory database based on instructions received from a data access control (DAC) entity of the organization or enterprise client. This may ensure that the CDN observes any changes to the access permission constraints that might be made by the DAC entity.
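The cached-response gating described above lends itself to a short sketch. This is a minimal illustration under stated assumptions (a cosine-similarity threshold, set-valued user tags, caller-supplied embeddings), not the disclosed CDN implementation:

```python
import numpy as np

SIM_THRESHOLD = 0.92  # assumed cutoff for a semantic cache hit

class SemanticCache:
    """Caches LLM responses keyed by prompt embedding, plus citations and tags."""

    def __init__(self):
        self.entries = []  # (embedding, response, citations, allowed_tags)

    def lookup(self, prompt_emb, user_tags):
        for emb, response, citations, allowed in self.entries:
            sim = float(np.dot(prompt_emb, emb) /
                        (np.linalg.norm(prompt_emb) * np.linalg.norm(emb)))
            # Serve a cached response only if the prompt is semantically similar
            # AND the requesting user's tags satisfy the cached access tags.
            if sim >= SIM_THRESHOLD and allowed & user_tags:
                return response, citations
        return None  # cache miss: forward the prompt to the RAG-based LLM

    def store(self, prompt_emb, response, citations, allowed_tags):
        self.entries.append((prompt_emb, response, citations, allowed_tags))

    def purge_tag(self, revoked_tag):
        # DAC instruction: purge entries whose access depends on a revoked tag.
        self.entries = [e for e in self.entries if revoked_tag not in e[3]]
```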
A computing system receives an indication of an operational procedure to be performed in the computing network. The operational procedure is represented as a structured operator-readable document comprising operator-executable operations for resolving an issue in the computing network or implementing a modification to the computing network. Content from the operational procedure is input to an artificial intelligence (AI) agent to generate a plan for executing the operational procedure in the computing network. The plan includes a plurality of operations and at least one network tool for executing the operations. The generated plan is verified to meet one or more predetermined criteria.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
4.
PROCESSOR IDLE STATE SELECTION IN A VIRTUALIZED ENVIRONMENT
A method implemented in a computer system with a processor system, including a logical processor, includes configuring an idle state calculation loop with a first idle residency calculation type, generating a projected processor idle residency, determining a target processor idle state based on the projected residency, instructing the logical processor to enter an idle period using the target state, identifying the actual processor idle residency after the idle period, and comparing it to the projected residency. Based on this comparison, the method configures the idle state calculation loop with a second idle residency calculation type. This method optimizes processor idle states by dynamically adjusting the calculation type to improve power efficiency and performance in the computer system.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
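The projection-versus-actual feedback loop of entry 4 can be sketched in a few lines. The idle states, latencies, break-even rule, tolerance, and the two calculation types below are illustrative assumptions; enter_idle stands in for the platform's idle-entry primitive and returns the actual residency:

```python
IDLE_STATES = [("C1", 2), ("C2", 50), ("C3", 800)]  # (state, wake latency in us)
TOLERANCE = 0.25  # reconfigure when the projection misses by more than 25%

def project_residency(history, calc_type):
    """Generate a projected processor idle residency (us) from recent idles."""
    if calc_type == "mean":
        return sum(history) / len(history)
    return history[-1]  # "last": assume the next idle resembles the most recent

def select_idle_state(projected_us):
    """Pick the deepest state whose wake latency is amortized by the projection."""
    best = IDLE_STATES[0][0]
    for state, latency in IDLE_STATES:
        if projected_us >= 2 * latency:  # assumed break-even rule
            best = state
    return best

def idle_loop_step(history, calc_type, enter_idle):
    projected = project_residency(history, calc_type)
    actual = enter_idle(select_idle_state(projected))
    history.append(actual)
    if abs(actual - projected) > TOLERANCE * projected:
        # Large miss: reconfigure the loop with the second calculation type.
        calc_type = "last" if calc_type == "mean" else "mean"
    return calc_type
```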
5.
NETWORK PACKET MIRRORING FOR COMMUNICATIONS NETWORK ANALYSIS
The techniques disclosed herein provide a system for mirroring network traffic by way of network packet duplication using a network interface card in a communications environment. More specifically, the present techniques are directed to retrieving data that is transmitted over a communications network (e.g., network packets) in a cloud radio access network (C-RAN) environment, also referred to as packet capture. For instance, the proposed system can mirror network traffic between a radio unit (RU) and a virtual radio access network (vRAN) within a distributed unit (DU) server of a telecommunications network (e.g., a 5G network). As such, the disclosed techniques exploit preexisting hardware and software features present in many network interface cards. In this way, the present techniques enable granular packet capture without requiring additional specialized hardware and/or software. Moreover, the network interface card can be configured to loopback scheduling information to streamline packet analysis operations.
Systems and methods are provided that are directed to tuning a hyperparameter associated with a small neural network model and transferring the hyperparameter to a large neural network model. At least one neural network model may be received along with a request for one or more tuned hyperparameters. Prior to scaling the large neural network, the large neural network is parameterized in accordance with a parameterizing scheme. The large neural network is then scaled and reduced in size such that a hyperparameter tuning process may be performed. A tuned hyperparameter may then be provided to a requestor such that the hyperparameter can be directly input into the large neural network. By tuning a hyperparameter using a small neural network, significant computation cycles and energy may be saved.
Apparatus and methods for training a neural network accelerator using quantized precision data formats are disclosed, and in particular for storing activation values from a neural network in a compressed format for use during forward and backward propagation training of the neural network. In certain examples of the disclosed technology, a computing system includes processors, memory, and a compressor in communication with the memory. The computing system is configured to perform forward propagation for a layer of a neural network to produce first activation values in a first block floating-point format. In some examples, activation values generated by forward propagation are converted by the compressor to a second block floating-point format having a narrower numerical precision than the first block floating-point format. The compressed activation values are stored in the memory, where they can be retrieved for use during back propagation.
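The narrowing step above (first to second block floating-point format) can be illustrated numerically. A minimal sketch, not the disclosed hardware compressor; the 4-bit mantissa width and per-block shared exponent layout are assumptions:

```python
import numpy as np

def to_block_fp(block, mantissa_bits=4):
    """Compress a 1-D block of activations to block floating-point:
    one shared exponent plus a narrow signed mantissa per element."""
    max_mag = float(np.max(np.abs(block)))
    if max_mag == 0.0:
        return 0, np.zeros(len(block), dtype=np.int8)
    shared_exp = int(np.floor(np.log2(max_mag)))
    scale = 2.0 ** (mantissa_bits - 1 - shared_exp)
    limit = 2 ** (mantissa_bits - 1) - 1
    mantissas = np.clip(np.round(block * scale), -limit - 1, limit).astype(np.int8)
    return shared_exp, mantissas

def from_block_fp(shared_exp, mantissas, mantissa_bits=4):
    """Decompress for use during back propagation (lossy round trip)."""
    return mantissas.astype(np.float32) * 2.0 ** (shared_exp - mantissa_bits + 1)
```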
Verification of a tamper-resistant log is disclosed herein. A storage provider maintains an append-only log storing a first log entry written by a first writer, the first log entry comprising first log data, a first signature and a first hash value. A verifier requests, from the storage provider, verification of the first log entry. The verifier obtains, from the storage provider, the first log entry and at least a portion of a second log entry preceding the first log entry to enable verification of the first log entry, wherein the second log entry comprises second log data, a second signature and a second hash value. The first log entry is verified based, at least in part, on the portion of the second log entry.
G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. checking of credit lines or negative lists
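The verification in entry 8 relies on each log entry binding itself to its predecessor. A minimal sketch, assuming the hash covers the entry's data plus the preceding entry's hash, and abstracting the writer's public-key check behind a caller-supplied verify_signature:

```python
import hashlib

def entry_hash(log_data: bytes, prev_hash: bytes) -> bytes:
    """Each entry's hash binds its data to the preceding entry's hash."""
    return hashlib.sha256(prev_hash + log_data).digest()

def verify_entry(entry: dict, prev_entry: dict, verify_signature) -> bool:
    """Verify one entry against the obtained portion of its predecessor."""
    expected = entry_hash(entry["data"], prev_entry["hash"])
    if expected != entry["hash"]:
        return False  # chain broken: entry or predecessor was tampered with
    # verify_signature checks the writer's signature over the entry contents.
    return verify_signature(entry["data"] + entry["hash"], entry["signature"])
```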
9.
DYNAMIC ATTACHMENT OF SECURE PROPERTIES TO MACHINE IDENTITY WITH DIGITAL CERTIFICATES
Technology is shown for dynamically attaching secure properties to an identity certificate. Claims determining secure properties for an identity are signed and embedded in an identity certificate. Both the identity certificate and the signed claims in the certificate are verified. When a service request is received from the identity, the signed claims from the identity certificate are checked to determine if the request is permitted. If the request is permitted, then the service request is processed. Some examples involve creating claims determining the secure properties for the remote machine, signing the claims to create the signed claims, distributing the signed claims to a certificate authority, embedding the signed claims in the remote machine identity certificate, and distributing the remote machine identity certificate. The claims can be embedded in the certificate as X.509 properties.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
10.
SIGNATURE-BASED REMEDIATION OF DATABASE MANAGEMENT SYSTEMS
The automatic detection of inconsistencies in a database system is described. A first signature and a second signature are received. The first signature is a signature of a result of a first execution of a query against a database by a first version of database engine program code. The second signature is a signature of a result of a second execution of the query by a second version of the database engine program code. A determination is made of whether the first signature and the second signature match. In response to the first signature and the second signature failing to match, an inconsistency report regarding at least one of the first or second versions of the database engine program code is generated and remediation regarding at least one of the first or second versions of the database engine program code is performed.
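The signature comparison above reduces to hashing two result sets and comparing digests. A minimal sketch, assuming JSON-serializable result rows and an order-insensitive canonicalization (both assumptions):

```python
import hashlib
import json

def result_signature(rows) -> str:
    """Order-insensitive signature of a query result set (illustrative)."""
    canon = sorted(json.dumps(row, sort_keys=True) for row in rows)
    return hashlib.sha256("\n".join(canon).encode()).hexdigest()

def check_versions(query, run_v1, run_v2) -> dict:
    """Execute the query on two engine versions and flag any mismatch."""
    sig1 = result_signature(run_v1(query))
    sig2 = result_signature(run_v2(query))
    if sig1 != sig2:
        # An inconsistency report would trigger remediation of one version.
        return {"query": query, "v1": sig1, "v2": sig2, "status": "inconsistent"}
    return {"query": query, "status": "consistent"}
```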
A computing system for performing runtime data handling optimization for generative models is provided. The computing system comprises at least one processor and memory comprising a first memory and a second memory, wherein the memory stores instructions that, when executed by the at least one processor, cause the at least one processor to execute a generative model. The computing system computes a first value matrix entry based upon the processing of an input to the generative model. The first value matrix entry is stored in a first memory wherein a first group of value matrix entries is identified. The computing system executes data quantization on the first group of value matrix entries, which results in a first quantized value matrix. The first quantized value matrix is added to a second memory where it can be used during generation by the generative model. When the first group of value matrix entries contains fewer entries than a group size parameter, data padding matrix values are generated and used during execution of the data quantization.
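The padding-then-quantize step above can be shown concretely. A minimal sketch with an assumed group size, int8 target, and symmetric per-group scale:

```python
import numpy as np

GROUP_SIZE = 64  # assumed group size parameter

def quantize_value_group(entries: np.ndarray):
    """Pad a group of value-matrix entries to the group size, then apply
    symmetric int8 quantization with one scale per group (illustrative)."""
    if len(entries) < GROUP_SIZE:
        pad = np.zeros(GROUP_SIZE - len(entries), dtype=entries.dtype)
        entries = np.concatenate([entries, pad])  # data padding matrix values
    max_abs = float(np.max(np.abs(entries)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    quantized = np.round(entries / scale).astype(np.int8)
    return quantized, scale  # stored in the second memory; dequantize as q * scale
```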
A distributed query processor in a server is configured for compute scale and cache preservation to enable efficient cluster usage for query processing. The query processor includes an operator analyzer and an operator scheduler. The operator analyzer determines a first operator, of a graph of operators representative of a user query, to have a first characteristic and assigns the first operator to a first node set of a plurality of node sets. The first node set is associated with the first characteristic. A second node set of the node sets is associated with a second characteristic different from the first characteristic. The operator scheduler is configured to cause the first operator to be executed in the assigned first node set to generate a first operator result, and a query result to be generated based at least on the first operator result.
Examples relate to systems and methods for restoring threads including context of the threads outside of a chat interface. During a thread including multiple queries and responses, one or more of the responses may include links to web pages and/or to other applications (e.g., presentation applications, word-processing applications). During interactions with the thread, one or more of the links may be selected. The selection of the links causes the corresponding web pages to be loaded and/or the corresponding applications to be launched. The web pages that are opened and/or the applications that are launched during an ongoing thread are stored as thread data for the ongoing thread. Then, when the thread is resumed at a later time, not only is the chat interface populated with the prior queries and responses of the thread, but the web pages and/or applications are also restored.
Aspects of the disclosure include injecting a magic state into a code. Aspects include preparing the magic state on a first set of physical qubits, initializing a second set of the physical qubits to the X=+1 state, and initializing a third set of the physical qubits to the Y=+1 state. Aspects include initializing a fourth set of the physical qubits to the Z=+1 state and measuring stabilizers of the code, thereby resulting in the magic state being injected into the code.
G06N 10/40 - Physical realisations or architectures of quantum processors or components for manipulating qubits, e.g. qubit coupling or qubit control
G06N 10/70 - Quantum error correction, detection or prevention, e.g. surface codes or magic state distillation
15.
CONFIGURING A LARGE LANGUAGE MODEL TO CONVERT NATURAL LANGUAGE QUERIES TO STRUCTURED QUERIES
Embodiments of the disclosed technologies are capable of generating natural language queries. The embodiments describe generating a training natural language query of a training structured search query using a first LLM and a first prompt. The embodiments further describe fine-tuning a second LLM using the training natural language query of the training structured search query and the training structured search query. The fine-tuned second LLM generates a structured version of a natural language query. The embodiments further describe generating the structured version of a received natural language query using the fine-tuned second LLM and a second prompt.
A system provides adaptive adjustments of perspective views for improving detail awareness for users associated with target entities of a virtual environment. A system can generate customized three-dimensional (3D) views for each individual user participating in a communication session. The system can generate customized three-dimensional views for each individual user without making modifications to a 3D model of a virtual environment, so a 3D environment can be maintained while each participant may have adjusted angles and positions for various virtual objects. The system can adaptively adjust an angle or position for entities in a viewing perspective or change a dimension of a perspective view for a target entity. The adjustments can be according to each viewer's point of view to maximize detail awareness for each participant of a communication session. These adjustments can be made while at the same time maintaining attendees arranged in a specific spatial relationship without changes.
A diffusion model is implemented in the analog processing domain. Analog restoration model circuitry is configured to denoise an analog signal (referred to as ‘signal restoration’ processing). Analog noise injection circuitry coupled to the analog restoration model circuitry receives the denoised signal and injects an amount of noise back into it. The resulting noise-injected signal is fed back to the analog restoration model circuitry for further signal restoration processing, and the resulting signal is again passed to the noise injection circuitry for noise injection. Various mechanisms for implementing the noise injection stage in the analog domain are described. In a first example embodiment, a constant noise signal is applied with a variable scaling factor. In a second example embodiment, a variable noise signal is generated using analog noise generation circuitry.
G06N 3/067 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using optical means
18.
ENTITY-SPECIFIC DATA ANALYSIS ENGINE IN A DATA INTELLIGENCE SYSTEM
Methods, systems, and computer storage media for providing entity-specific data analysis using an entity-specific data analysis engine in a data intelligence system are described. The entity-specific data analysis engine can be an LM-based system that supports generating and communicating entity-specific data analysis output. In operation, a dataset associated with an entity is accessed. A bidirectional volumetric analysis output is generated based on executing a plurality of bidirectional volumetric analysis operations against the dataset. A plurality of probe questions and a plurality of data analysis axes associated with a focus area are generated for analyzing the bidirectional volumetric analysis output. Using the bidirectional volumetric analysis output, the plurality of probe questions, and the plurality of data analysis axes, an entity-specific data analysis output is generated, based in part on identifying false positive trends in the dataset and defining rules to filter out the false positives from the entity-specific data analysis output.
G06Q 10/0635 - Risk analysis of enterprise or organisation activities
G06Q 10/0637 - Strategic management or analysis, e.g. setting a goal or target of an organisation; Planning actions based on goals; Analysis or evaluation of effectiveness of goals
G06Q 10/0639 - Performance analysis of employees; Performance analysis of enterprise or organisation operations
19.
DISCOURSE ENGINE(S) FOR PROVIDING QUALITATIVE FEEDBACK
Systems and methods for a discourse engine for providing qualitative feedback are described herein. In an example, a discourse engine may determine a speech type for a speech exercise selected by a client device. The discourse engine may determine one or more qualitative categories for speech feedback and determine one or more qualitative aspects per the one or more qualitative categories for the speech feedback. The discourse engine may receive first speech content from the client device and generate first speech feedback based on the one or more qualitative aspects and the first speech content. The discourse engine may then provide the first speech feedback to the client device.
G09B 5/02 - Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
G10L 25/60 - Speech or voice analysis techniques not restricted to a single one of the groups, specially adapted for particular use, for comparison or discrimination, for measuring the quality of voice signals
20.
CONFIGURING A TENSOR OPERATION PIPELINE IN A HARDWARE ACCELERATOR
A computing method is provided for configuring a tensor operation pipeline. In one example implementation, the method includes receiving a tensor operation pipeline definition and tensor data from a processor, at a configurable pipeline processing element array of a hardware accelerator. The method further includes, in each of a plurality of processing elements of the array, processing the tensor data by implementing a configurable tensor operation pipeline including one or more fixed tensor operation logic units of the hardware accelerator according to the tensor operation pipeline definition. The method further includes outputting a tensor operation pipeline result based on the processing of the tensor data by each tensor operation pipeline in each processing element.
G06F 9/38 - Concurrent instruction execution, e.g. pipeline or look ahead
G06F 15/80 - Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
21.
COMPUTING RESOURCE MANAGEMENT BASED ON DETERMINING A NUMBER OF DISCRETE USERS OF A SHARED COMMUNICATION CHANNEL
The present disclosure provides techniques and solutions for identifying a number of discrete users of a shared communication channel, such as a shared telephone line associated with a telephone number. Information about the number of discrete users can be used for adjusting computing resource capacity associated with the shared communication channel. A target speaker profile is generated for audio sent over the shared communication channel and compared with speaker profiles in a library. If the target speaker profile does not match any speaker profile in the library, the system increments the number of discrete users associated with the shared communication channel. Disclosed techniques can be applied to various shared communication channels, including shared telephone lines, network addresses, radio frequencies, network links, or network channels.
H04L 65/80 - Arrangements, protocols or services in data packet communication networks to support real-time applications responding to quality of service [QoS]
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers, using wide area network [WAN] connections using the Internet
H04L 47/80 - Actions related to the type of user or the nature of the flow
H04L 65/403 - Arrangements for multi-party communication, e.g. for conferences
H04M 3/42 - Systems providing special services or facilities to subscribers
H04M 3/56 - Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities
H04M 7/12 - Interconnection arrangements between switching centres for working between exchanges having different types of switching equipment, e.g. power-driven and step-by-step, or decimal and non-decimal
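The profile-matching loop of entry 21 can be sketched as follows, assuming speaker profiles are embedding vectors compared by cosine similarity against the channel's library (the threshold and representation are assumptions):

```python
import numpy as np

MATCH_THRESHOLD = 0.8  # assumed similarity cutoff for "same speaker"

def count_discrete_users(target_profile, library):
    """Compare a target speaker profile against the channel's profile library;
    add it (and grow the discrete-user count) when no stored profile matches."""
    for profile in library:
        sim = float(np.dot(target_profile, profile) /
                    (np.linalg.norm(target_profile) * np.linalg.norm(profile)))
        if sim >= MATCH_THRESHOLD:
            return len(library)  # known speaker: count unchanged
    library.append(target_profile)
    return len(library)  # new discrete user on the shared communication channel
```

The returned count could then feed the capacity-adjustment step, e.g. provisioning computing resources in proportion to the number of discrete users.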
Examples described herein generally relate to systems and methods for handwriting recognition. In an example, a computing device may receive input corresponding to a handwritten word and apply a first recognition model to the input. The first recognition model may be configured to determine that a first confidence level of a first portion of the input is greater than a second confidence level of a second portion of the input. The computing device may also apply a second recognition model to the input, wherein the second recognition model is different from the first recognition model, and combine results of the first recognition model and the second recognition model to determine a list of candidate words. The computing device may also output one or more candidate words from the list of candidate words.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 30/24 - Character recognition characterised by the processing or recognition method
G06V 30/262 - Post-processing techniques, e.g. correcting the recognition result, using context analysis, e.g. lexical, syntactic or semantic context
A diffusion model is implemented in the analog processing domain. Analog restoration model circuitry is configured to denoise an analog signal (referred to as 'signal restoration' processing). Analog noise injection circuitry coupled to the analog restoration model circuitry receives the denoised signal and injects an amount of noise back into it. The resulting noise-injected signal is fed back to the analog restoration model circuitry for further signal restoration processing, and the resulting signal is again passed to the noise injection circuitry for noise injection. Various mechanisms for implementing the noise injection stage in the analog domain are described. In a first example embodiment, a constant noise signal is applied with a variable scaling factor. In a second example embodiment, a variable noise signal is generated using analog noise generation circuitry.
The techniques disclosed herein provide enhanced controls for the display of real-time text (RTT) in calls and meetings. RTT is the ability for someone to send a text message on a character-by-character basis to everybody else in a call or meeting. The system disclosed herein integrates RTT, video, and live captions in one central experience. This integrated experience allows users to participate equitably by making RTT accessible to users regardless of the operating mode they are in while still concurrently accessing other meeting content, including video streams, chat messages, live captions, transcripts, and artificial intelligence (AI) tools, such as Copilot. In one embodiment, during an online conference, in response to one of the attendees activating an RTT mode, when at least one user minimizes a meeting stage, such as for the purpose of multitasking while listening, the conference application maintains a display area for displaying RTT.
H04L 12/18 - Arrangements for providing special services to substations for broadcast or conference
H04L 51/046 - Interoperability with other network applications or services
H04L 65/401 - Support of services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time-sensitive sessions, e.g. white board sharing or spawning of a subconference
25.
CONFIGURING A TENSOR OPERATION PIPELINE IN A HARDWARE ACCELERATOR
A computing method (200) is provided for configuring a tensor operation pipeline. In one example implementation, the method includes receiving a tensor operation pipeline definition and tensor data from a processor, at a configurable pipeline processing element array of a hardware accelerator (202). The method further includes, in each of a plurality of processing elements of the array, processing the tensor data by implementing a configurable tensor operation pipeline including one or more fixed tensor operation logic units of the hardware accelerator according to the tensor operation pipeline definition (210). The method further includes outputting a tensor operation pipeline result based on the processing of the tensor data by each tensor operation pipeline in each processing element (212).
G06F 9/30 - Arrangements for executing machine instructions, e.g. instruction decode
G06N 3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
26.
INCREMENTAL VERIFICATION OF TAMPER-RESISTANT LEDGER
Incremental verification of a tamper-resistant ledger is disclosed herein. Periodic proofs are generated by periodically verifying the integrity of a tamper-resistant ledger. The periodic proofs enable a verifier to incrementally verify the integrity of the tamper-resistant ledger by verifying the periodic proofs. A periodic proof is generated based on a preceding proof and entries added to the tamper-resistant ledger since the preceding proof. A verifier verifies a periodic proof based on the preceding proof and the entries added to the ledger between the preceding proof and the proof being verified. An action is performed responsive to verifying the integrity of the tamper-resistant ledger.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
27.
GEOGRAPHICALLY DIVERSIFIED EMBEDDING-BASED GUIDED RESPONSE TO A SECURITY ALERT
Techniques are described herein that are capable of providing a geographically diversified embedding-based guided response to a security alert. A security alert regarding an identified security incident that is associated with an entity is received. Sets of designated security incidents, which are similar to the identified security incident, may be selected from sets of historical security incidents associated with respective geographical regions based on embeddings of the identified security incident and the historical security incidents in the sets. The identified security incident is classified into selected classes using first model(s) associated with the respective geographical regions. Security actions are selected from a plurality of possible security actions using second model(s) associated with the respective geographical regions. A security recommendation regarding the security alert is generated. The security recommendation includes representations of the sets of designated security incidents, the selected classes, and/or the security actions.
A digital processing unit (DPU) is configured to disaggregate computing service provider functions of a cloud service provider from hosts of an edge computing network. The hosts are implemented on servers hosting a plurality of virtual machines or containers. The edge computing network comprises computing and storage devices configured to extend computing resources of the cloud service provider to remote users of the cloud service provider at a location remote from the cloud service provider. The DPU executes a computing service provider function that is disaggregated from processing cores of the server.
H04L 41/0663 - Management of faults, events, alarms or notifications using network fault recovery by performing predefined actions through failover planning, e.g. switching to standby network elements
A system and method for automatically recoloring an artifact includes receiving the digital artifact and one or more input images and receiving a user query, from a user interface screen of an application being executed on a user client device, to create a design that includes the digital artifact and the one or more input images. A prompt is then constructed via a prompt construction engine for transmission to a generative artificial intelligence (AI) tool, the prompt requesting the generative AI tool to identify a plurality of colors based on the user query. The prompt is transmitted to the generative AI tool and the plurality of colors is received from the generative AI tool. The plurality of colors is then transmitted to a base palette generation engine, the base palette generation engine generating a base color palette based on at least one of one or more colors of the one or more input images or the plurality of colors received from the generative AI tool. The digital artifact is recolored based on the base color palette.
Solutions are disclosed that enable capacity-aware local repair of tunnels in packet switched wide area networks (WANs). Traffic engineering agents on the routers are programmed to create the tunnels and include sets of primary and alternate tunnels sharing the same source and destination. A tunnel source router is provided a traffic split for allocating incoming traffic to its primary and alternate tunnels for when the primary tunnel is operating at or near full capacity, and another traffic split that shifts at least some traffic from the primary tunnel to the alternate tunnel when the primary tunnel's capacity drops below a threshold. A tunnel may lose capacity for commonly-occurring reasons, such as a disturbance to cabling and faults in optical transceivers. Traffic engineering agents along the tunnel report capacity to the tunnel source router, permitting the network to respond to capacity changes more rapidly than waiting for network tunnel reconfiguration.
H04L 41/0816 - Configuration settings characterised by the conditions triggering a change of settings, the condition being an adaptation, e.g. in response to network events
H04L 45/00 - Routing or path finding of packets in data switching networks
H04L 47/125 - Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
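The capacity-triggered choice between the two traffic splits in the tunnel abstract above can be sketched directly; the threshold fraction and split values are assumptions:

```python
CAPACITY_THRESHOLD = 0.6  # assumed fraction of nominal tunnel capacity

def choose_traffic_split(reported_capacity, nominal_capacity,
                         normal_split=(0.9, 0.1), degraded_split=(0.5, 0.5)):
    """Return the (primary, alternate) allocation for a tunnel pair.
    Agents along the tunnel report capacity to the source router; when the
    primary tunnel's capacity drops below the threshold, traffic shifts."""
    if reported_capacity < CAPACITY_THRESHOLD * nominal_capacity:
        return degraded_split  # local repair: shift load to the alternate tunnel
    return normal_split
```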
31.
FAST GRID-BASED SIMULATIONS OF WAVEGUIDES WITH SURFACE RELIEF GRATINGS
A computer-based simulation of light propagation and interactions with a waveguide and optical elements in a waveguide combiner uses a model based on a neural network that inherits its shape and properties from a grid structure superimposed on the waveguide combiner. Machine learning is not utilized, as weights between nodes in the network are based on physical and geometrical rules, which removes the need for training. The waveguide combiner is modeled as a stack of two-dimensional layers that are divided into cells. The k-vector space describing the direction and wavelength of diffracted beams for the waveguide combiner is adapted to be non-continuous, such that k-vectors are discretized into individual bins that are respectively associated with the different layers. Simulation computations are carried out in a sequence of discrete steps directed to interactions among the cells, in which light energy is exchanged with neighboring cells within and between layers.
G06F 30/27 - Design optimisation, verification or simulation of the designed object using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
G06F 111/18 - Details relating to computer-aided design [CAD] techniques using virtual or augmented reality
32.
ENHANCED CONTROLS FOR THE DISPLAY OF REAL-TIME TEXT IN CALLS AND MEETINGS
The techniques disclosed herein provide enhanced controls for the display of real-time text (RTT) in calls and meetings. RTT is the ability for someone to send a text message on a character-by-character basis to everybody else in a call or meeting. The system disclosed herein integrates RTT, video, and live captions in one central experience. This integrated experience allows users to participate equitably by making RTT accessible to users regardless of the operating mode they are in while still concurrently accessing other meeting content, including video streams, chat messages, live captions, transcripts, and artificial intelligence (AI) tools, such as Copilot. In one embodiment, during an online conference, in response to one of the attendees activating an RTT mode, when at least one user minimizes a meeting stage, such as for the purpose of multitasking while listening, the conference application maintains a display area for displaying RTT.
Embodiments of the disclosure provide a solution for generating images from texts based on prompts. A text encoder encodes an input text into a text embedding and projects, by use of a prompt text embedding and a prompt image embedding as the baseline, the text embedding of the input text into an image embedding semantically correlated with the input text. A conversion network converts the image embedding into a latent embedding in a latent space of the image generator, and the image generator generates an image semantically correlated with the input text based on the latent embedding carrying semantic information. Accordingly, the solution can generate, from a text containing semantics, an image having the corresponding semantics, and the quality of the generated image is also improved.
Techniques are described herein that are capable of renewing a signed attestation artifact with limited usage of a trusted platform module (TPM). Based on initiation of a cold boot of a host, attestation artifacts are received from the host. The attestation artifacts prove trust in a trusted execution environment (TEE) that runs on the host. The attestation artifacts include a public portion of an ephemeral cryptographic key (ECKeyPub), a public portion of a signing key (SKeyPub), and a signed key claim. The attestation artifacts are validated, and a signed attestation artifact, which includes the ECKeyPub and the SKeyPub, is generated and provided to the host. Based on a request to renew the signed attestation artifact including the signed attestation artifact, which includes the ECKeyPub and the SKeyPub, and further based on the TEE possessing the ephemeral cryptographic key, the signed attestation artifact is renewed during the cold boot session.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
Systems, devices, methods, and computer-readable media for cycle detection in generative agent responses are provided. A method includes receiving, from the generative agent, a candidate completion, the candidate completion including a first response to a message from an entity conducting a conversation with the generative agent, determining, by a semantic extractor, a semantic embedding of the first response, and determining, by a cycle detector and based on the embedding and prior embeddings, whether the first response is a repetition of a prior candidate completion in the conversation.
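The cycle-detection flow above amounts to comparing each new completion's embedding against those seen earlier in the conversation. A minimal sketch, with the similarity threshold assumed:

```python
import numpy as np

REPEAT_THRESHOLD = 0.95  # assumed similarity above which a response is a repeat

def is_repetition(response_emb, prior_embs) -> bool:
    """Cycle detector: flag a candidate completion whose semantic embedding
    is near-identical to a prior completion's embedding in the conversation."""
    for prior in prior_embs:
        sim = float(np.dot(response_emb, prior) /
                    (np.linalg.norm(response_emb) * np.linalg.norm(prior)))
        if sim >= REPEAT_THRESHOLD:
            return True
    prior_embs.append(response_emb)  # remember non-repeated responses
    return False
```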
Incremental verification of a tamper-resistant ledger is disclosed herein. Periodic proofs are generated by periodically verifying the integrity of a tamper-resistant ledger. The periodic proofs enable a verifier to incrementally verify the integrity of the tamper-resistant ledger by verifying the periodic proofs. A periodic proof is generated based on a preceding proof and entries added to the tamper-resistant ledger since the preceding proof. A verifier verifies a periodic proof based on the preceding proof and the entries added to the ledger between the preceding proof and the proof being verified. An action is performed responsive to verifying the integrity of the tamper-resistant ledger.
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
37.
DISCOURSE ENGINE(S) FOR PROVIDING QUALITATIVE FEEDBACK
Systems and methods for a discourse engine for providing qualitative feedback are described herein. In an example, a discourse engine may determine a first debate topic for a debate exercise and receive first debate content from a client device. The discourse engine may then generate first response content based on the first debate content and the first debate topic. The discourse engine may also determine one or more qualitative aspects for providing debate feedback to the client device and generate debate feedback based on the first debate content and the one or more qualitative aspects. The discourse engine may provide the debate feedback to the client device.
A hardware accelerator is disclosed that can flexibly be configured to support differing data types and differing operation flows. The hardware accelerator includes a plurality of fixed tensor operation logic units and tensor operation pipeline logic configured to receive, from a processor, a pipeline command including a software-defined tensor operation pipeline definition defining a plurality of tensor operation stages in a tensor operation pipeline and associated predetermined tensor operations to be performed at each of the defined tensor operation stages. The hardware accelerator is further configured to receive tensor data to be computed by the tensor operation pipeline, implement the tensor operation pipeline to perform the tensor operations in each of the tensor operation stages on the tensor data, thereby producing a tensor operation pipeline result for the tensor data, and output the tensor operation pipeline result to the processor.
G06F 15/80 - Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
G06F 9/38 - Concurrent instruction execution, e.g. pipeline or look ahead
39.
PROCESSING FOR PROCESSORS PERFORMING TASKS HAVING FORWARD CONDITIONAL BRANCH INSTRUCTIONS
Various embodiments described herein control circuitry of a computing device to invalidate certain data elements associated with a forward conditional branch instruction (FCBI) to prevent computational inefficiencies, such as pipeline flush. The data elements that are invalidated may correspond to conditional code associated with an FCBI. To make FCBIs more efficient, certain embodiments continue with a not-taken path that is invalidated if the branch resolves to be taken. This results in efficiencies because either the prediction was correct, or, when the wrong path was taken, that path is invalidated, thereby avoiding any resource utilization in redirection. In this manner, the FCBI may be executed more quickly or efficiently because certain data elements, such as the conditional code, are invalidated. Accordingly, certain embodiments reduce computational inefficiencies, enhance performance of complex computational workloads with certain branches, and reduce or altogether eliminate pipeline stall and flushing for certain workloads.
This document relates to automated analysis of images. One example method involves obtaining an image and text associated with the image, detecting two or more objects in the image, and determining respective locations of the two or more detected objects in the image. The example method also involves determining whether a spatial relationship between the two or more detected objects matches a corresponding spatial relationship expressed by the text based at least on the respective locations of the two or more detected objects. The example method also involves outputting a value reflecting whether the spatial relationship between the two or more detected objects matches the corresponding spatial relationship expressed by the text.
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
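The location-based match in entry 40 can be illustrated with bounding-box centers; the box format and the two relations handled below are assumptions:

```python
def spatial_relation_matches(box_a, box_b, relation: str) -> bool:
    """Check whether detected-object locations match a relation from the text.
    Boxes are (x_min, y_min, x_max, y_max) in image coordinates."""
    cx_a, cy_a = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    cx_b, cy_b = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    if relation == "left of":
        return cx_a < cx_b
    if relation == "above":
        return cy_a < cy_b  # image y coordinates grow downward
    raise ValueError(f"unsupported relation: {relation}")
```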
41.
SYSTEMS AND METHODS FOR SUPPORTING A HIGH THERMAL GRADIENT BETWEEN A QUBIT PLANE AND A CONTROL SYSTEM FOR THE QUBIT PLANE USING A SUPERCONDUCTING RIGID-FLEX CIRCUIT
Systems and methods for supporting a high thermal gradient between a qubit plane and a control system for the qubit plane are described. A system includes a qubit plane associated with a first rigid circuit portion of a superconducting rigid-flex circuit and a control system associated with a second rigid circuit portion of the superconducting rigid-flex circuit. The superconducting rigid-flex circuit includes a flexible circuit portion for interconnecting the first rigid circuit portion with the second rigid circuit portion. The system further includes a first cooling system operable to maintain an operating temperature for the qubit plane and the first rigid circuit portion of the superconducting rigid-flex circuit at or below 100 milli-kelvin. The system further includes a second cooling system operable to maintain an operating temperature for the control system and the second rigid circuit portion of the superconducting rigid-flex circuit at or below 10 kelvin.
42.
HARDWARE ACCELERATOR WITH CONFIGURABLE TENSOR OPERATION PIPELINE
A hardware accelerator (10) is disclosed that can flexibly be configured to support differing data types and differing operation flows. The hardware accelerator includes a plurality of fixed tensor operation logic units (16) and tensor operation pipeline logic (18) configured to receive, from the processor, a pipeline command (24) including a software-defined tensor operation pipeline definition (26) defining a plurality of tensor operation stages (30) in a tensor operation pipeline (32) and associated predetermined tensor operations to be performed at each of the defined tensor operation stages. The hardware accelerator is further configured to receive tensor data (28) to be computed by the tensor operation pipeline, implement the tensor operation pipeline to perform the tensor operations in each of the tensor operation stages on the tensor data, thereby producing a tensor operation pipeline result (34) for the tensor data, and output the tensor operation pipeline result to the processor.
G06F 9/30 - Arrangements for executing machine instructions, e.g. instruction decode
G06N 3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
43.
MEMORY CONFLICT RESOLUTION FOR DILITHIUM CRYPTOGRAPHY
Generally discussed herein are devices, systems, and methods for performing a number theoretic transform (NTT)/inverse NTT (INTT). A circuit for NTT/INTT can include a memory configured to store polynomial coefficients, butterfly operator circuits coupled to receive the polynomial coefficients and generate, after iterations of operating on the polynomial coefficients, transformed coefficients as outputs, a first subset of the butterfly operator circuits situated in series with each other and in parallel with a second subset of the butterfly operator circuits, shift registers coupled between the butterfly operator circuits and the memory, and a controller coupled to the memory, the controller configured to control which coefficients are provided to the butterfly operator circuits and which addresses of the memory store the outputs.
G06F 12/14 - Protection against unauthorised use of memory
G06F 12/06 - Addressing a physical block of locations, e.g. base addressing, module addressing, address space extension, memory dedication
H04L 9/30 - Public key, i.e. encryption algorithm being computationally infeasible to invert and users' encryption keys not requiring secrecy
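The butterfly operator circuits that entry 43 arranges in series and in parallel each compute the standard NTT/INTT butterfly over the Dilithium modulus. A scalar sketch of the two operations (the circuit-level scheduling, shift registers, and memory conflict resolution are not shown):

```python
Q = 8380417  # the Dilithium modulus, 2**23 - 2**13 + 1

def ct_butterfly(a: int, b: int, zeta: int):
    """Cooley-Tukey butterfly used during the forward NTT:
    (a, b) -> (a + zeta*b, a - zeta*b) mod Q."""
    t = (zeta * b) % Q
    return (a + t) % Q, (a - t) % Q

def gs_butterfly(a: int, b: int, zeta: int):
    """Gentleman-Sande butterfly used during the inverse NTT:
    (a, b) -> (a + b, (a - b) * zeta) mod Q."""
    return (a + b) % Q, ((a - b) * zeta) % Q
```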
44.
HARDWARE ACCELERATOR FOR PERFORMING 1-DIMENSIONAL K-MEANS CLUSTERING IN PARALLEL
A hardware accelerator that performs k-means clustering on 1-dimensional inputs by computing a minimum within-cluster sum of squares matrix and a backtracking index, and using the backtracking index to identify start and end points for clusters within the 1-dimensional inputs. The within-cluster sum of squares matrix is generated in parallel by differing threads, using a two-row ping-pong buffer in shared memory of the thread block. The 1-dimensional inputs are read into shared memory and accessed as the threads compute successive rows of the minimum within-cluster sum of squares matrix. The backtracking index is stored in global memory and holds index values for the 1-dimensional inputs that minimize the minimum within-cluster sum of squares function at each element in the sum of squares matrix. After identifying the start and end points for the clusters, cluster labels can be generated for each of the 1-dimensional inputs.
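The recurrence behind this entry is the classic dynamic program for optimal 1-D k-means. The accelerator fills rows of the within-cluster sum-of-squares matrix in parallel with a ping-pong buffer; the sequential sketch below shows the same recurrence and backtracking on sorted inputs (an illustration, not the disclosed parallel implementation):

```python
def kmeans_1d(xs, k):
    """Optimal 1-D k-means via the within-cluster sum-of-squares recurrence
    and a backtracking index; sequential sketch of the parallel accelerator."""
    xs = sorted(xs)
    n = len(xs)
    s1 = [0.0] * (n + 1)  # prefix sums
    s2 = [0.0] * (n + 1)  # prefix sums of squares
    for i, x in enumerate(xs):
        s1[i + 1] = s1[i] + x
        s2[i + 1] = s2[i] + x * x

    def ssq(j, i):  # sum of squared deviations of xs[j..i] from their mean
        m = i - j + 1
        return s2[i + 1] - s2[j] - (s1[i + 1] - s1[j]) ** 2 / m

    INF = float("inf")
    D = [[INF] * n for _ in range(k + 1)]  # min within-cluster SSQ matrix
    B = [[0] * n for _ in range(k + 1)]    # backtracking index
    for i in range(n):
        D[1][i] = ssq(0, i)
    for m in range(2, k + 1):
        for i in range(m - 1, n):
            for j in range(m - 1, i + 1):  # j = start point of the last cluster
                cost = D[m - 1][j - 1] + ssq(j, i)
                if cost < D[m][i]:
                    D[m][i], B[m][i] = cost, j

    labels, end = [0] * n, n - 1           # recover cluster start/end points from B
    for m in range(k, 0, -1):
        start = B[m][end]
        for i in range(start, end + 1):
            labels[i] = m - 1
        end = start - 1
    return labels
```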
A computer-implemented technique is described herein for facilitating a user's repeated execution of the same computer-implemented actions. The technique performs this task by determining patterns in the manner in which the user repeats requests associated with certain computer-implemented actions. For example, the technique determines context-dependent patterns in the manner in which the user submits search requests to a search system. The technique then leverages those patterns by proactively providing a request-assistance tool to the user in those context-specific circumstances in which the user is likely to perform the repetitive computer-implemented actions. The request-assistance tool provides various kinds of assistance to the user in performing the repetitive computer-implemented actions.
Machine learning techniques are leveraged to provide personalized assistance on a computing device. In some configurations a timeline of a user's interactions with the computing device is generated. For example, screenshots and audio streams may be saved as entries in the timeline. Context—the state of the computing device when the entry is created, such as which documents and websites are open—is also stored. Entries in the timeline are processed by a model to generate embedding vectors. The timeline may be searched by finding the embedding vector that is closest to an embedding vector derived from a search query. The user may select a query result, causing the associated context to be restored. For example, if the query is “show me all documents related to my upcoming trip to Japan”, the query result may open documents and websites that were open when booking a flight to Japan.
Novel solutions for speech recognition provide contextual spelling correction (CSC) for automatic speech recognition (ASR). Disclosed examples include receiving an audio stream; performing an ASR process on the audio stream to produce an ASR hypothesis; receiving a context list; and, based on at least the ASR hypothesis and the context list, performing spelling correction to produce an output text sequence. A contextual spelling correction (CSC) model is used on top of an ASR model, precluding the need for changing the original ASR model. This permits run-time user customization based on contextual data, even for large-size context lists. Some examples include filtering ASR hypotheses for the audio stream and, based on at least the ASR hypotheses filtering, determining whether to trigger spelling correction for the ASR hypothesis. Some examples include generating text to speech (TTS) audio using preprocessed transcriptions with context phrases to train the CSC model.
Systems and methods for providing enhanced teleconferencing. An example method includes receiving audio streams from a plurality of client devices of participants of a teleconference; converting the audio streams for a first conversation within the teleconference into first text; converting the audio streams for a second conversation within the teleconference into a second text; analyzing the first text to identify one or more topics being discussed in the first conversation; analyzing the second text to identify one or more topics being discussed in the second conversation; and presenting, in a teleconference user interface, at least one of the one or more topics being discussed in the first conversation or the one or more topics being discussed in the second conversation.
A method of reducing power consumption of a first wireless communication device is described. A charge level of a battery associated with the first wireless communication device is monitored. A wireless communication session between the first wireless communication device and a second wireless communication device is maintained. Based at least in part on the charge level of the battery being within a low battery threshold range, a wireless signal strength associated with the wireless communication session is monitored. Based at least in part on the wireless signal strength reaching a power saving threshold that is above a minimum connection threshold for maintaining the wireless communication session, a power saving action associated with a wireless interface that supports the wireless communication session is performed.
Disclosed herein is a system for leveraging telemetry data representing usage of a component installed on a group of sampled computing devices to confidently infer the quality of a user experience and/or the behavior of the component (e.g., an operating system) on a larger group of unsampled computing devices. The system is configured to use a propensity score matching approach to identify a sampled computing device that best represents an unsampled computing device using configuration data that is collected from both the sampled and unsampled computing devices. The quality of the user experience and/or the behavior of the component may be captured by a metric of interest (e.g., a QoS value). Accordingly, the system is configured to use the known metric of interest, determined from the telemetry data collected for the sampled computing device, to determine or predict the metric of interest for the unsampled computing device.
G06F 11/34 - Recording or statistical evaluation of computer activity, e.g. of interruptions or input-output operations
G06F 18/22 - Matching criteria, e.g. proximity measures
G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
G06N 20/20 - Ensemble learning in machine learning
51.
COMMUNICATION USING DYNAMIC SPECTRUM ACCESS BASED ON CHANNEL SELECTION
The disclosure described herein configures a client device for communication using dynamic spectrum access within a frequency spectrum, such as television white space (TVWS), using a determined location of the client device based on location information, such as from a global positioning system. A dynamic spectrum access database of channels is accessed based on the location information. Available channels are determined for the client device from the channels based on the location information. A list of the available channels for use by the client device is transmitted to the client device, thereby allowing narrowband communication over the channels.
The present disclosure provides methods, apparatuses and non-transitory computer-readable medium for prompt optimization. An initial prompt may be obtained, wherein the initial prompt comprises multiple sequential instructions. One or more failed cases associated with the initial prompt may be obtained. One or more target patterns may be determined for the one or more failed cases. One or more prompt revising suggestions may be determined for the one or more target patterns. At least one revised prompt may be generated through adding one or more conditional branches to the multiple sequential instructions according to the one or more prompt revising suggestions.
A method for automating digital resource production within a real-time strategy game. The method includes identifying a virtual character at a first location in a virtual space, the virtual character capable of gathering virtual resources, and identifying a set of resource-gathering factors for the virtual character. Based on these factors, the method expands a defined distance from a second location in the virtual space within which the virtual character operates to include a third location in the virtual space, determines a virtual resource-gathering assignment, and instructs the virtual character to traverse to the third location. The set of resource-gathering factors includes a virtual resource production goal, a reserve of a virtual resource, an availability of a source of the virtual resource, a likelihood of an additional virtual resource source at the third location, and a likely peril to the virtual character in traversing to the third location.
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for steering a character
A63F 13/67 - Generating or modifying game content before or during the execution of the game program, e.g. using tools specially adapted for game development or a game-integrated level editor, by adapting to or learning from player actions, e.g. adjusting skill level or storing successful combat sequences for re-use
Seat-assignment based resource tracking is used to track usage, or consumption, of resources under a license. An account includes a license for a plurality of resources. An allotment is generated under the account and populated with a plurality of seats. Populating the allotment with the plurality of seats automatically authorizes the populated plurality of seats for access to a portion of the plurality of resources. When the plurality of resources are accessed by a device associated with a seat of the authorized plurality of seats, the usage of the plurality of resources by the device is tracked.
A hypergraph workload manager in a server is configured for failure-tolerant and explainable state-machine-driven hypergraph execution. The hypergraph executor comprises a query optimizer, a hypergraph enlister, a pipeline analyzer, and a state machine generator. The query optimizer translates a user query into a query operator graph. The hypergraph enlister enlists the query operator graph into a hypergraph containing a set of query operator graphs representative of already-submitted user queries. The enlistment is configured to join query operator graphs where doing so optimizes query execution. Updating the hypergraph based on the enlistment results in a set of disconnected graphs. The pipeline analyzer performs an analysis of all operators of all queries in the hypergraph to find an optimal sequencing of execution. The state machine generator is configured to generate a hierarchical state machine for all operators of a disconnected graph of the hypergraph.
Disclosed is the differential application of scalars to compensate for pixel degradation. Input image data is associated with a commanded luminance at each of a plurality of pixels. A degradation value is determined for each pixel. Based on the degradation value, an elevated drive current is determined to produce the commanded luminance at the pixel. A required scalar is determined for each pixel to hold the elevated drive current from exceeding a drive current threshold. An applied scalar to be applied to the elevated drive current is determined for each pixel. For at least some pixels, the applied scalar for a first pixel is based at least on [1] the required scalar of a second pixel and [2] a spatial relationship between the first pixel and the second pixel. Applied scalars are then used to output corrected imagery.
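The per-pixel computation above can be sketched in one dimension; the normalized current threshold, degradation model, and neighborhood-minimum spatial rule are assumptions:

```python
import numpy as np

I_MAX = 1.0  # drive current threshold (normalized, assumed)

def corrected_drive(commanded, degradation, neighborhood=1):
    """Differential scalar application: each pixel's applied scalar also
    honors the required scalars of nearby pixels (illustrative 1-D version)."""
    # Elevated current needed to restore the commanded luminance per pixel.
    elevated = commanded / (1.0 - degradation)
    # Required scalar holds each pixel's elevated current at or below I_MAX.
    required = np.minimum(1.0, I_MAX / np.maximum(elevated, 1e-9))
    applied = np.empty_like(required)
    for i in range(len(applied)):
        lo, hi = max(0, i - neighborhood), min(len(applied), i + neighborhood + 1)
        # Applied scalar depends on neighbors' required scalars (spatial term).
        applied[i] = required[lo:hi].min()
    return elevated * applied  # corrected drive currents for output imagery
```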
A computing system (10) including processing circuitry (12) configured to, during a calibration stage (56), perform a sparsity pattern search on a plurality of attention heads (52) included in one or more transformer layers (22) to select a respective sparsity pattern (54) associated with each of the attention heads. During an inferencing stage (58), the processing circuitry receives an inferencing input (24). The processing circuitry pre-fills a context (60) based at least in part on the inferencing input. Pre-filling the context includes computing sparse attention scores (64) at each of the attention heads. Computing the sparse attention scores includes masking each of the attention heads using the respective sparsity pattern selected for that attention head during the calibration stage. The processing circuitry computes an inferencing output (48) by performing inferencing starting from the sparse attention scores. The processing circuitry outputs the inferencing output.
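A toy illustration of the inferencing-stage masking described above, assuming NumPy attention and two invented pattern names ("local", "strided"); the calibration-stage search itself is represented only by its hard-coded result.

```python
# Per-head sparsity masking during context pre-fill (illustrative sketch).
import numpy as np

def make_pattern(name, n):
    """Return an n x n boolean mask for a named sparsity pattern."""
    i, j = np.indices((n, n))
    if name == "local":                      # attend within a sliding window
        return np.abs(i - j) <= 2
    if name == "strided":                    # attend to every 2nd position + self
        return (j % 2 == 0) | (i == j)
    return np.ones((n, n), dtype=bool)       # dense fallback

def sparse_scores(q, k, mask):
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores[~mask] = -np.inf                  # mask before softmax
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n, d = 8, 16
selected = {0: "local", 1: "strided"}        # result of the calibration-stage search
for head, pattern in selected.items():
    q, k = rng.normal(size=(n, d)), rng.normal(size=(n, d))
    attn = sparse_scores(q, k, make_pattern(pattern, n))
    print(f"head {head} ({pattern}): row sums to {attn.sum(axis=-1)[0]:.2f}")
```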
Described are techniques for passive user recognition in meeting environments, utilizing advanced biometric data processing. An in-room meeting system with a camera is used to capture meeting participant images. The images are analyzed to detect faces and generate face embeddings—vector representations of faces. These face embeddings are compared against a dynamically generated database of known users, accumulated from previous meetings, to verify participant identities without requiring explicit biometric submissions. This automated process enhances meeting efficiency by streamlining participant verification and improving security.
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using clustering, e.g. of similar faces in social networks
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
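The verification step in the abstract above reduces to a nearest-neighbor search over embeddings. A minimal sketch, assuming cosine similarity and stand-in vectors (no actual face-embedding model):

```python
# Compare a detected face's embedding against a database accumulated from
# previous meetings; names, vectors, and the threshold are illustrative.
import numpy as np

known_users = {
    "alice": np.array([0.9, 0.1, 0.2]),
    "bob":   np.array([0.1, 0.8, 0.3]),
}

def identify(face_embedding: np.ndarray, threshold: float = 0.9):
    """Return the known user whose stored embedding is most similar, if any."""
    best_name, best_sim = None, threshold
    for name, ref in known_users.items():
        sim = ref @ face_embedding / (np.linalg.norm(ref) * np.linalg.norm(face_embedding))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name

probe = np.array([0.88, 0.12, 0.18])         # embedding from a detected face
print(identify(probe) or "unknown: enroll for future meetings")
```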
59.
HARDWARE ACCELERATOR FOR PERFORMING 1-DIMENSIONAL K-MEANS CLUSTERING IN PARALLEL
A hardware accelerator (14) that performs k-means clustering on 1-dimensional inputs by computing a minimum within-cluster sum of squares matrix and a backtracking index (B), and using the backtracking index (B) to identify start and end points for clusters within the 1-dimensional inputs. The within-cluster sum of squares matrix is generated in parallel by differing threads, using a two-row ping-pong buffer in shared memory (SM) of the thread block. The 1-dimensional inputs are read into shared memory (SM) and accessed as the threads compute successive rows of the minimum within-cluster sum of squares matrix. The backtracking index (B) is stored in global memory (GM) and holds index values for the 1-dimensional inputs that minimize the minimum within-cluster sum of squares function at each element in the sum of squares matrix. After identifying the start and end points for the clusters, cluster labels (25) can be generated for each of the 1-dimensional inputs.
G06F 16/28 - Databases characterised by their database models, e.g. relational or object models
G06F 18/23213 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions, with a fixed number of clusters, e.g. K-means clustering
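The abstract above describes the classic dynamic program for optimal 1-D k-means. Below is a plain CPU reference of that computation, assuming sorted inputs and prefix sums for O(1) within-cluster cost; the accelerator's contribution (computing each matrix row in parallel with a two-row ping-pong buffer in shared memory) is noted in comments but not reproduced.

```python
# Reference dynamic program: D holds the minimum within-cluster sum of
# squares, B the backtracking indices (cluster start points).
import numpy as np

def kmeans_1d(x, k):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    s = np.concatenate(([0.0], np.cumsum(x)))        # prefix sums for O(1) cost
    s2 = np.concatenate(([0.0], np.cumsum(x * x)))

    def withinss(j, i):                              # cost of one cluster x[j..i]
        cnt = i - j + 1
        return s2[i + 1] - s2[j] - (s[i + 1] - s[j]) ** 2 / cnt

    D = np.full((k, n), np.inf)                      # min within-cluster sum of squares
    B = np.zeros((k, n), dtype=int)                  # backtracking index
    for i in range(n):
        D[0, i] = withinss(0, i)
    for m in range(1, k):                            # rows depend only on the previous row;
        for i in range(m, n):                        # the accelerator fills each row in parallel
            for j in range(m, i + 1):
                cost = D[m - 1, j - 1] + withinss(j, i)
                if cost < D[m, i]:
                    D[m, i], B[m, i] = cost, j
    labels = np.empty(n, dtype=int)                  # follow B to recover start/end points
    end = n - 1
    for m in range(k - 1, -1, -1):
        start = B[m, end] if m > 0 else 0
        labels[start:end + 1] = m
        end = start - 1
    return labels

print(kmeans_1d([1.0, 1.1, 5.0, 5.2, 9.9, 10.0], k=3))   # [0 0 1 1 2 2]
```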
60.
Adjustment of a monocular display parameter to display content
A service configures a monocular display parameter associated with a virtual stimulus to achieve optimized viewing of the virtual stimulus on only one display of an ER system. The service determines that the ER system is operating in a monocular mode. The service accesses the monocular display parameter that is associated with the virtual stimulus. The monocular display parameter is applicable when the ER system is operating in the monocular mode. In response to determining that the ER system is operating in the monocular mode, the service causes the virtual stimulus to be displayed on only the one display. Beneficially, the virtual stimulus is displayed using the monocular display parameter.
H04N 13/356 - Image reproducers having separate monoscopic and stereoscopic modes
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/361 - Reproducing mixed stereoscopic images; Reproducing mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
Systems and methods for configurable die-to-die lane repair in multi-die systems are described. A multi-die system includes a first die and a second die, each of which comprises modular D2D link macros, where each of the modular D2D link macros has M data lanes. A method for configuring die-to-die lane repair includes forming repair groups having D data lanes spanning M data lanes, or fewer than M data lanes, associated with one or more modular D2D link macros, where D is independently configurable for each repair group. The method further includes, for each one of the repair groups, designating R redundant lanes from among the D data lanes, where R is a positive integer independently configurable for each repair group, and where a location of each of the designated redundant lanes within a die floor plan associated with a respective repair group is independently configurable.
H01L 21/66 - Testing or measuring during manufacture or treatment
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
62.
ASYNCHRONOUS FUNCTION EXECUTORS UTILIZING WORK UNIT STACKS
Implementations for executing asynchronous functions using a work unit stack executor on a data processing unit are provided. One aspect includes a computing system (100) for executing asynchronous functions using a work unit (WU) stack executor (508), the computing system (100) comprising a data processing unit (104) including a plurality of programmable processing cores (206) configured to execute an asynchronous function by performing a call to the asynchronous function, creating a future (502) corresponding to the asynchronous function, creating a WU stack (504), creating the WU stack executor (508) on the WU stack (504) to execute the future (502) and sending a WU (510) to start the WU stack executor (508).
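A toy Python model of the mechanism above: calling an asynchronous function creates a future and a WU stack, and a work unit starts an executor that completes the future. All structures are illustrative stand-ins for the data processing unit's hardware constructs.

```python
# Behavioral model of the WU-stack executor; the queue stands in for WUs
# dispatched to programmable processing cores.
from collections import deque

class Future:
    def __init__(self): self.value, self.done = None, False
    def complete(self, value): self.value, self.done = value, True

work_queue = deque()                         # WUs awaiting a processing core

def call_async(fn, *args):
    fut = Future()                           # future corresponding to the function
    stack = {"fn": fn, "args": args, "future": fut}       # the WU stack
    work_queue.append(("start_executor", stack))           # WU sent to start the executor
    return fut

def run_cores():
    while work_queue:                        # cores drain WUs and run executors
        _, stack = work_queue.popleft()
        stack["future"].complete(stack["fn"](*stack["args"]))

f = call_async(lambda a, b: a + b, 2, 3)
run_cores()
print(f.done, f.value)                       # True 5
```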
A network interface controller (NIC) circuit may control data transfers between a network interface and a memory interface circuit. The NIC circuit receives data packets on the network interface and determines whether a packet type of a data packet corresponds to one of a first plurality of operations or a second plurality of operations. For data packets that correspond to one of the first plurality of operations, the NIC circuit controls the memory interface circuit according to the packet type and for data packets that correspond to one of the second plurality of operations, the NIC sends a notification to a processor circuit in the IC to execute software instructions to control the memory interface circuit according to the packet type. The NIC circuit quickly processes data packets corresponding to the first plurality of operations without software involvement but relies on software assistance for the second plurality of operations.
G06F 13/28 - Handling requests for interconnection or transfer for access to an input/output bus using burst mode transfer, e.g. direct memory access or cycle steal
G06F 13/38 - Information transfer, e.g. on bus
G06F 12/1081 - Address translation for peripheral access to main memory, e.g. direct memory access [DMA]
G06F 15/173 - Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star or snowflake
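The fast-path/slow-path split above can be modeled behaviorally as a dispatch on packet type. The packet-type names below are invented for illustration; in the actual device the fast path is hardware, not software.

```python
# Behavioral model of the NIC's two-way dispatch (illustrative types only).
FAST_PATH = {"read", "write"}        # first plurality: handled without software
SLOW_PATH = {"atomic", "config"}     # second plurality: software assistance needed

class NicModel:
    def __init__(self):
        self.notifications = []      # stand-in for notifications to the processor circuit

    def handle(self, packet_type, payload):
        if packet_type in FAST_PATH:
            # Control the memory interface directly according to the packet type.
            return f"memory {packet_type}: {payload!r}"
        if packet_type in SLOW_PATH:
            # Notify the processor to execute software for this packet type.
            self.notifications.append((packet_type, payload))
            return "deferred to software"
        raise ValueError(f"unknown packet type {packet_type!r}")

nic = NicModel()
print(nic.handle("write", b"\x01\x02"))   # fast path, no software involvement
print(nic.handle("atomic", b"\x03"))      # slow path, processor notified
```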
64.
MEMORY INTERFACE CIRCUITS INCLUDING ENCRYPT/DECRYPT CIRCUITS TO RE-ENCRYPT ENCRYPTED DATA BLOCKS IN A MEMORY CIRCUIT AND RELATED METHODS
An exemplary memory interface circuit disclosed herein re-encrypts data in an encrypted data block in a memory circuit to further protect the data. In particular, the memory interface circuit reads an encrypted data block from the memory circuit and decrypts the encrypted data block using a first key that was previously used to encrypt the block of data. Then, the memory interface circuit encrypts the data again using a second key before storing the re-encrypted data back into the memory circuit. In some examples, the memory interface circuit includes a re-encryption circuit that includes secure configuration registers to control occasional re-encryption of the encrypted data in an effort to evade detection of the encryption key. In some examples, the time between re-encryptions may be adjusted in response to a frequency of memory accesses to the memory circuit.
G06F 12/14 - Protection against unauthorised use of memory
G06F 21/79 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer, to assure secure storage of data in semiconductor storage media, e.g. directly-addressable memories
G06F 21/85 - Protecting input, output or interconnection devices; interconnection devices, e.g. bus-connected or in-line devices
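A software analogue of the re-encryption flow above, using Fernet from the `cryptography` package in place of the circuit's cipher; the access-count interval is a simple stand-in for the frequency-adjusted re-encryption timing.

```python
# Read -> decrypt with the key that encrypted the block -> occasionally
# re-encrypt under a fresh key and store the block back "in memory".
from cryptography.fernet import Fernet

class ReencryptingMemory:
    def __init__(self, data: bytes, interval: int):
        self.cipher = Fernet(Fernet.generate_key())       # first key
        self.block = self.cipher.encrypt(data)            # encrypted data block
        self.interval = interval                          # accesses between re-encryptions
        self.accesses = 0

    def read(self) -> bytes:
        plaintext = self.cipher.decrypt(self.block)       # decrypt with the current key
        self.accesses += 1
        if self.accesses >= self.interval:                # occasional re-encryption
            self.cipher = Fernet(Fernet.generate_key())   # second key
            self.block = self.cipher.encrypt(plaintext)   # store re-encrypted block back
            self.accesses = 0
        return plaintext

mem = ReencryptingMemory(b"secret block", interval=2)
mem.read()
assert mem.read() == b"secret block"                      # second access re-keys the block
```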
The described technology provides a device including a phase-locked loop (PLL) circuit, the PLL circuit including a voltage-controlled oscillator (VCO) and a phase detector, and a voltage supply and transconductance cell (Gm) configured to drain a current Iout from the VCO based on a sensed voltage (Vsup_sense) input into the Gm cell, wherein the Gm cell is configured to generate an open_loop signal based on the current Iout drained from the VCO.
Methods, systems, and computer storage media for providing iterative data processing optimization using an iterative data processing optimization engine in a data intelligence system are described. Iterative data processing refers to handling data where the processing steps are repeated multiple times, across multiple views or modalities, to train machine learning models, filter and score data, or generate output. The iterative data processing optimization engine employs expectation-step machine learning models, simple but fast language models, to efficiently and effectively probe and analyze data, while iteratively refining maximization-step machine learning models that are optimized and fast so as to approximate the probing mechanism of the expectation-step models more efficiently, for example using metadata, external information, and compressed representations. The iterative data processing optimization engine can operate based on an agentic framework using lightweight artificial intelligence (AI) agents to perform model fitting, featurization, and report generation autonomously.
Data stored in a memory circuit may be encrypted using client keys that need to be available for high-speed data processing and yet held securely to avoid unauthorized access to the encrypted data. A secure processor circuit in a processor-based system obtains client keys associated with client applications and generates secure key-encryption keys that are used to encrypt the client keys so the client keys can be securely stored in the memory circuit. In some examples, data keys for encrypting data blocks associated with the client application may be generated from the client key, encrypted by a data key-encryption key generated in the secure processor circuit, and stored in the memory circuit. In such examples, because the client keys and data keys are encrypted while in memory, they are safer from software attacks on the memory circuit, which improves the security of the encrypted data blocks.
A computing system (10) including one or more processing devices (14) configured to receive prompt generation instructions (20) that specify an initial prompt (22) and a prompt evaluation criterion (26). In each of a plurality of iterations (35) of a prompt generation loop (30), the one or more processing devices are further configured to generate candidate prompts (38) at least in part at a machine learning model (36). The candidate prompts are generated based on a current-iteration prompt (34) that is initialized as the initial prompt in a first iteration. As specified by the prompt evaluation criterion, the one or more processing devices are further configured to compute respective evaluation scores (40) associated with the candidate prompts. Based on the evaluation scores, the one or more processing devices are further configured to replace the current-iteration prompt. The one or more processing devices are further configured to output a final prompt (42) generated in a final iteration.
G06F 40/131 - Fragmentation of text files, e.g. creating reusable text blocks; Linking to fragments, e.g. using XInclude; Namespaces
G06F 40/16 - Automatic learning of transformation rules, e.g. from examples
G06F 40/216 - Parsing using statistical methods
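A skeleton of the prompt generation loop described above. The candidate generator and the evaluation criterion are stubs (the real system calls a machine learning model); only the loop structure follows the abstract.

```python
# Generate candidates from the current-iteration prompt, score them against
# the criterion, and replace the current prompt when a candidate scores higher.
import random

def generate_candidates(current: str, n: int = 4) -> list[str]:
    """Stub for the ML model that proposes candidate prompts."""
    suffixes = ["Be concise.", "Think step by step.", "Cite sources.", "Use examples."]
    return [f"{current} {random.choice(suffixes)}" for _ in range(n)]

def evaluate(prompt: str) -> float:
    """Stub criterion: prefer shorter prompts that ask for brevity."""
    return ("concise" in prompt.lower()) - len(prompt) / 1000.0

def optimize_prompt(initial: str, iterations: int = 5) -> str:
    current = initial                              # current-iteration prompt
    for _ in range(iterations):
        candidates = generate_candidates(current)
        scores = [evaluate(c) for c in candidates]
        best = candidates[scores.index(max(scores))]
        if evaluate(best) > evaluate(current):     # replace only on improvement
            current = best
    return current                                 # final prompt from the last iteration

random.seed(0)
print(optimize_prompt("Summarize the document."))
```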
Methods, systems, and computer storage media for providing a data analysis pipeline using a data analysis pipeline engine in a data intelligence system are described. A data analysis pipeline refers to a structured sequence of data processing steps that support transforming raw data into meaningful insights or actionable outcomes. The data analysis pipeline engine is an unsupervised learning pipeline based on clustering, topic modeling, and Large Language Models (LLMs). For example, the data analysis pipeline can use advanced machine learning techniques to automatically categorize emails into semantically similar clusters, enabling the data intelligence system to quickly identify and prioritize potentially high-risk emails for further investigation. The data analysis pipeline employs AI agents for context-aware graph induction relevance assessment. The AI agents employ induction and deduction loops to build and refine a data feature hypergraph (e.g., vulnerability hypergraph) that encompasses identified relevant data providing a holistic view of a contextual landscape.
Systems and methods are disclosed for clock phase calibration between source logic and a coupled serial data link transmitter, enabling low-latency synchronization into the transmitter. In calibration mode, a phase relationship is monitored between a first clock, driving the source logic, and a second clock, tightly synchronized with a serial clock of the data link. The first clock is adjusted to a first phase at which the first and second clocks are aligned. The first clock phase is set for operation mode based on the first phase. Monitoring uses a D-type flip-flop as a phase detector. Adjustment is in steps of half the serial clock period. Variations are disclosed.
Innovations in machine learning ("ML") models used in adaptive post-processing of decoded video in a conferencing tool are described. For example, as part of post-processing of decoded video, a super-resolution/video restoration model increases spatial resolution (e.g., by interpolation between sample values), mitigates compression artifacts, and mitigates upscaling artifacts introduced when increasing spatial resolution. Or, as another example, as part of post-processing of decoded video, a video restoration model mitigates compression artifacts, without increasing spatial resolution. For adaptive post-processing, a post-processing model can be selectively applied depending on results of scenario detection, results of segmentation, and/or results of video quality analysis. With the innovations, a conferencing tool can in effect provide video at higher quality without significantly increasing the network bandwidth consumed by the video or, alternatively, provide video using less network bandwidth without significantly hurting the quality of the video.
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04N 19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural or semantic portion of the video signal being the object or subject of the adaptive coding, the unit being an image region, e.g. an object
72.
MACHINE LEARNING MODELS FOR ADAPTIVE POST-PROCESSING USING RESULTS OF SCENARIO DETECTION IN CONFERENCING TOOLS
Innovations in machine learning ("ML") models used in adaptive post-processing of decoded video in a conferencing tool are described. For example, as part of post-processing of decoded video, a super-resolution/video restoration model increases spatial resolution (e.g., by interpolation between sample values), mitigates compression artifacts, and mitigates upscaling artifacts introduced when increasing spatial resolution. Or, as another example, as part of post-processing of decoded video, a video restoration model mitigates compression artifacts, without increasing spatial resolution. For adaptive post-processing, a post-processing model can be selectively applied depending on results of scenario detection, results of segmentation, and/or results of video quality analysis. With the innovations, a conferencing tool can in effect provide video at higher quality without significantly increasing the network bandwidth consumed by the video or, alternatively, provide video using less network bandwidth without significantly hurting the quality of the video.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
According to implementations of the present disclosure, a solution for molecular property prediction is provided. In the solution, an initial atom cluster centered on at least one target atom in a molecule and having a specified radius is determined. An adjustment strategy is determined based on a cross-cluster property of a cross-cluster atom contained in the initial atom cluster of the at least one target atom. Based on the adjustment strategy, the cross-cluster atom contained in the initial atom cluster is adjusted to obtain a modified atom cluster corresponding to the at least one target atom. A target molecular property of the molecule is determined based on the modified atom cluster corresponding to the at least one target atom.
According to an implementation of the disclosure, a solution for executing a computation for a neural network is provided. According to the solution, a data flow graph for a neural network to be computed is obtained, and the data flow graph indicates at least one operation in the neural network and data respectively associated with the at least one operation; scheduling information for the neural network is determined based on the data flow graph and a processing resource configuration of a target device for computing the neural network, the scheduling information indicates a data transformation required to execute the at least one operation; and a computation for the neural network is executed at the target device based on the scheduling information. The implementation of the disclosure supports executing a computation for a neural network at devices with different configurations, providing versatility.
G06N 3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons, using electronic means
Aspects of the disclosure include removing a faulty qubit in a quantum circuit. The faulty qubit is determined to be in the quantum circuit, the faulty qubit being associated with a plaquette having other qubits, where adjacent plaquettes are neighboring the plaquette. A route is determined to isolate the plaquette from the adjacent plaquettes. Measurements are caused to be performed on the quantum circuit for the route that isolates the plaquette having the faulty qubit and the other qubits.
Systems and methods are disclosed for credit-based flow control of multiple data links using a common reverse channel. The links transfer data from a source to respective buffers at a sink. Credits represent available buffer space. For each data link, a credit counter at the source is decremented as data is transmitted and incremented as the sink returns credits. Reporting logic at the data sink generates a credit report as sink logic retrieves data from the buffer, freeing buffer space. Encoding logic aggregates the credit reports from multiple links for transmission over the common reverse channel to the source, where individual credit reports are extracted and distributed among the links, for update to the respective credit counters. For each link, data transmission pauses when the credit counter decreases to a threshold. Returning multiple links' credits over a single reverse channel saves power. Variations are disclosed.
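A behavioral model of the scheme above: per-link credit counters at the source, and per-link credit reports aggregated onto one reverse channel. Link IDs and report framing are illustrative.

```python
# Credit-based flow control: counters decrement on send, pause at the
# threshold, and are replenished by aggregated reports from the sink.
from collections import deque

class Link:
    def __init__(self, buffer_slots: int, threshold: int = 0):
        self.credits = buffer_slots     # source-side counter = free sink buffer space
        self.threshold = threshold
        self.buffer = deque()           # sink-side buffer

    def try_send(self, item) -> bool:
        if self.credits <= self.threshold:
            return False                # pause transmission when credits run out
        self.credits -= 1               # decrement as data is transmitted
        self.buffer.append(item)        # data arrives in the sink buffer
        return True

links = {0: Link(2), 1: Link(2)}
sent = [links[i % 2].try_send(f"pkt{i}") for i in range(6)]
print(sent)                             # later sends pause: no credits left

# Sink retrieves data, generates per-link credit reports, and the encoder
# aggregates them onto the single reverse channel.
reports = []
for lid, link in links.items():
    freed = len(link.buffer)
    link.buffer.clear()                 # sink logic retrieves data, freeing space
    reports.append((lid, freed))
reverse_channel = list(reports)         # one aggregated transfer, not one per link

for lid, freed in reverse_channel:      # source extracts and distributes reports
    links[lid].credits += freed
print([links[i].credits for i in (0, 1)])   # credit counters restored
```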
Systems and methods are disclosed for phase calibration between clock and data lanes at a data link receiver. In calibration mode, matching signals are transmitted over the clock and data lanes, and a phase offset is measured at the receiver. A phase shifter in one signal path is adjusted to a first phase to obtain a desired phase offset. For operation mode, the phase shifter is set based on the first phase. Embodiments measure phase offset using an XOR gate and use a phase interpolator as the phase shifter. Embodiments with multiple data lanes apply coarse calibration to a shared clock lane relative to a first data lane, and similar fine calibration to other data lanes. Calibration provides optimum signal-to-noise ratio or timing margin, enabling high transmission speeds at relatively low power. Variations are disclosed.
H04L 7/00 - Arrangements for synchronising receiver with transmitter
G06F 1/12 - Synchronisation of different clock signals
H03K 19/17 - Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits, using specified components, using twistors
Systems and methods are disclosed for reduced-power serial data links. A clock-forwarded serial link carries a clock lane and one or more data lanes. Every active serial data cycle is accompanied by its own serial clock edge: a clock delay allows the same clock edge to drive data at a transmitter and latch data at a receiver. Power is saved by idling the serial clock when data is not being transmitted. A valid signal can be omitted, providing a space saving. At the destination, similar clock-forwarding and delay enables a single parallel clock edge to drive data to the boundary of its clock domain, e.g. from a deserializer to a FIFO. The data link exhibits zero-cycle entry and exit. Variations with half- or single-cycle entry or exit are disclosed.
Systems and methods for initializing and calibrating asymmetric die-to-die (D2D) interfaces are described. As an example, during the calibration of a parameter, a calibration finite-state machine (CAL FSM) can perform certain measurements and adjustments. Once a stage of calibration is finished, the CAL FSM can communicate this information to a cluster FSM. The cluster FSM can then communicate to the node FSM the completion status. Once all the clusters have communicated to the node FSM that they have finished the current stage of calibration, the node FSM advances to the next stage of calibration and communicates to the pertinent cluster FSMs to advance, which in turn communicate to the CAL FSMs within the cluster to advance to the next stage of calibration. The clusters that are communicating in one direction are now able to receive the calibration stage information via other clusters that are communicating in the other direction.
H01L 25/065 - Assemblies consisting of a plurality of individual semiconductor or other solid-state devices, the devices all being of a type provided for in a single one of the subclasses … , e.g. assemblies of rectifier diodes; the devices not having separate containers; the devices being of a type provided for in the group …
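The stage-advance protocol above can be sketched as three levels of cooperating state machines; the class names and tick-driven scheduling are assumptions made for illustration.

```python
# CAL FSMs report completion to their cluster FSM; clusters report to the
# node FSM; the node FSM advances the stage and releases everyone.
class CalFsm:
    def __init__(self): self.stage, self.finished = 0, False
    def run_stage(self): self.finished = True        # measurements/adjustments done
    def advance(self): self.stage += 1; self.finished = False

class ClusterFsm:
    def __init__(self, n): self.cals = [CalFsm() for _ in range(n)]
    def stage_done(self): return all(c.finished for c in self.cals)
    def advance(self):
        for c in self.cals:
            c.advance()

class NodeFsm:
    def __init__(self, clusters): self.clusters, self.stage = clusters, 0
    def tick(self):
        for cl in self.clusters:                     # CAL FSMs do the current stage
            for cal in cl.cals:
                if not cal.finished:
                    cal.run_stage()
        if all(cl.stage_done() for cl in self.clusters):   # all clusters reported
            self.stage += 1                                # node advances the stage
            for cl in self.clusters:                       # and tells clusters -> CAL FSMs
                cl.advance()

node = NodeFsm([ClusterFsm(2), ClusterFsm(3)])
node.tick(); node.tick()
print(node.stage, [c.stage for cl in node.clusters for c in cl.cals])   # 2 [2, 2, 2, 2, 2]
```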
80.
CREATING VIRTUAL THREE-DIMENSIONAL SPACES USING GENERATIVE MODELS
This document relates to generation of three-dimensional virtual spaces from user-provided two-dimensional input images. For instance, three-dimensional submeshes can be derived from the user-provided two-dimensional input images. Then, the submeshes can be arranged in a submesh layout, with spaces between the submeshes. The spaces can be populated with image content generated by a generative image model, which is then blended with the submeshes, resulting in a final three-dimensional virtual space.
A cloud computing resource system may receive an allocation request to connect a virtual machine to a customer network, wherein the virtual machine is executing while the allocation request is received and the allocation request includes network configuration information of the customer network. The cloud computing resource system may detect a discovery request from the virtual machine triggered by receipt of the allocation request, wherein the virtual machine remains executing during detection of the discovery request. Responsive to detecting the discovery request from the virtual machine, the cloud computing resource system may update a virtual network interface controller of the virtual machine with the network configuration information of the customer network, wherein the virtual machine remains executing while the network configuration information is updated.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
H04L 61/5014 - Internet protocol [IP] addresses using the dynamic host configuration protocol [DHCP] or the bootstrap protocol [BOOTP]
A computing system (10) including memory (12) storing a prompt library (20). The prompt library includes prompt fragments (22) and prompt templates (26). The computing system further includes one or more processing devices (14) configured to, at a prompt compiler (40), receive a prompt generation input (30) including prompt input data (32). At the prompt compiler, based at least in part on the prompt input data, the one or more processing devices are further configured to select a prompt template and one or more of the prompt fragments from the prompt library. The one or more processing devices are further configured to fill the selected prompt template with the prompt input data and the one or more selected prompt fragments to compute a compiled prompt (44). At a first machine learning model (50), the one or more processing devices are further configured to process the compiled prompt and to output the machine learning model output (52).
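A minimal sketch of the compile step described above, assuming a dictionary-backed library and str.format placeholders; the patent does not specify a template syntax.

```python
# Select a template and fragments from the prompt library, then fill the
# template with the prompt input data to produce the compiled prompt.
PROMPT_LIBRARY = {
    "templates": {
        "summarize": "You are a {role}. {fragment} Summarize: {text}",
    },
    "fragments": {
        "cautious": "If unsure, say so rather than guessing.",
        "cheerful": "Keep the tone light and friendly.",
    },
}

def compile_prompt(template_name: str, fragment_names: list[str], **input_data) -> str:
    """Produce the compiled prompt handed to the first machine learning model."""
    template = PROMPT_LIBRARY["templates"][template_name]
    fragment = " ".join(PROMPT_LIBRARY["fragments"][n] for n in fragment_names)
    return template.format(fragment=fragment, **input_data)

compiled = compile_prompt("summarize", ["cautious"],
                          role="patent analyst", text="<document body>")
print(compiled)
```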
Innovations in machine learning ("ML") models used in adaptive post-processing of decoded video in a conferencing tool are described. For example, as part of post-processing of decoded video, a super-resolution/video restoration model increases spatial resolution (e.g., by interpolation between sample values), mitigates compression artifacts, and mitigates upscaling artifacts introduced when increasing spatial resolution. Or, as another example, as part of post-processing of decoded video, a video restoration model mitigates compression artifacts, without increasing spatial resolution. For adaptive post-processing, a post-processing model can be selectively applied depending on results of scenario detection, results of segmentation, and/or results of video quality analysis. With the innovations, a conferencing tool can in effect provide video at higher quality without significantly increasing the network bandwidth consumed by the video or, alternatively, provide video using less network bandwidth without significantly hurting the quality of the video.
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
H04N 19/154 - Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
H04N 19/86 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression, involving reduction of coding artefacts, e.g. blocking artefacts
84.
AI-BASED ENTITY MALICIOUSNESS ANALYSIS USING EMBEDDING AND SAMPLING
Techniques are described herein that are capable of performing AI-based entity maliciousness analysis using embedding and sampling. A representative sample of data associated with an entity is selected by comparing embeddings that represent the data. A potentially anomalous data point is identified in at least a portion of the data based on the proximity, within a tree, of a node corresponding to the potentially anomalous data point to the root node of the tree. A statistically anomalous data point is identified among the representative sample data points, which define the representative sample, as a result of the statistically anomalous data point indicating an unexpected occurrence of an event. An AI model is triggered to determine whether the entity exhibits malicious behavior by providing an AI prompt, including the representative sample and a description of the potentially anomalous data point and the statistically anomalous data point, to the AI model.
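The tree-proximity signal above matches how isolation forests score anomalies: a short path from the root (shallow isolation) suggests an outlier. A sketch using scikit-learn's IsolationForest, which the abstract does not name:

```python
# Flag the data point whose average path length to the root is shortest;
# the synthetic data stands in for an entity's activity records.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))   # routine entity activity
data = np.vstack([normal, [[8.0, 8.0]]])                 # one outlying data point

forest = IsolationForest(random_state=0).fit(data)
scores = forest.score_samples(data)          # lower score = closer to root = more anomalous
suspect = int(np.argmin(scores))
print(f"most anomalous point: index {suspect}, score {scores[suspect]:.3f}")
# The flagged point, together with a representative sample, would then be
# placed into the AI prompt for the maliciousness determination.
```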
Methods, systems, and devices for providing firmware management using a firmware management engine of a cloud computing system are described. The firmware management engine supports using a hot-plugged memory (e.g., Compute Express Link (CXL) pooled memory) to temporarily migrate system memory (e.g., Central Processing Unit (CPU) local memory or CXL-attached memory) of a server node while performing firmware management operations (e.g., a firmware update) on the server node. The hot-plugged memory can be CXL pooled memory that is shared at a rack level, and the CXL pooled memory can be used to store system memory data while updating and activating new memory initialization firmware code. A virtual machine associated with the server node stays operational temporarily using the hot-plugged memory. Firmware management is thus performed on a server node while virtual machines associated with the server node and the system memory of the server node remain operational.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application execution engines or operating systems
86.
MEMORY INTERFACE CIRCUITS INCLUDING ENCRYPT/DECRYPT CIRCUITS TO RE-ENCRYPT ENCRYPTED DATA BLOCKS IN A MEMORY CIRCUIT AND RELATED METHODS
An exemplary memory interface circuit disclosed herein re-encrypts data in an encrypted data block in a memory circuit to further protect the data. In particular, the memory interface circuit reads an encrypted data block from the memory circuit and decrypts the encrypted data block using a first key that was previously used to encrypt the block of data. Then, the memory interface circuit encrypts the data again using a second key before storing the re-encrypted data back into the memory circuit. In some examples, the memory interface circuit includes a re-encryption circuit that includes secure configuration registers to control occasional re-encryption of the encrypted data in an effort to evade detection of the encryption key. In some examples, the time between re-encryptions may be adjusted in response to a frequency of memory accesses to the memory circuit.
A computing system executing an intelligent agent application is provided. The intelligent agent application facilitates configuration and deployment of an intelligent agent to perform certain acts related to video game testing. The intelligent agent application may configure and deploy an intelligent agent based upon an intelligent agent deployment request comprising one or more tasks to be performed by an intelligent agent. The intelligent agent application causes an intelligent agent to interact with a testing computing system executing a video game application and perform the one or more tasks. The intelligent agent captures testing data indicative of the interaction with the video game application. The testing data is optionally enhanced and stored for further analysis.
A network interface controller (NIC) circuit may control data transfers between a network interface and a memory interface circuit. The NIC circuit receives data packets on the network interface and determines whether a packet type of a data packet corresponds to one of a first plurality of operations or a second plurality of operations. For data packets that correspond to one of the first plurality of operations, the NIC circuit controls the memory interface circuit according to the packet type and for data packets that correspond to one of the second plurality of operations, the NIC sends a notification to a processor circuit in the IC to execute software instructions to control the memory interface circuit according to the packet type. The NIC circuit quickly processes data packets corresponding to the first plurality of operations without software involvement but relies on software assistance for the second plurality of operations.
Innovations in machine learning (“ML”) models used in adaptive post-processing of decoded video in a conferencing tool are described. For example, as part of post-processing of decoded video, a super-resolution/video restoration model increases spatial resolution (e.g., by interpolation between sample values), mitigates compression artifacts, and mitigates upscaling artifacts introduced when increasing spatial resolution. Or, as another example, as part of post-processing of decoded video, a video restoration model mitigates compression artifacts, without increasing spatial resolution. For adaptive post-processing, a post-processing model can be selectively applied depending on results of scenario detection, results of segmentation, and/or results of video quality analysis. With the innovations, a conferencing tool can in effect provide video at higher quality without significantly increasing the network bandwidth consumed by the video or, alternatively, provide video using less network bandwidth without significantly hurting the quality of the video.
G06T 3/4007 - Scaling of whole images or parts thereof, e.g. expanding or contracting, based on interpolation, e.g. bilinear interpolation
G06T 3/4053 - Scaling of whole images or parts thereof, e.g. expanding or contracting, based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation
90.
SPECULATION MANAGEMENT ENGINE IN A CLOUD ACCESS MANAGEMENT SYSTEM
Methods, systems, and computer storage media for providing speculation management using a speculation management engine of a cloud access management system are described. The speculation management engine operates based on speculation data, a multi-dimensional speculation framework, and two sets of operations defined based on an initialization sequence. In operation, speculation data for a user associated with a local client is accessed. A determination of a plurality of speculated remote resource candidates is made, based on the speculation data associated with the user. Execution of a first set of operations (i.e., remote resource resolution) and a second set of operations (i.e., remote resource connection configuration) are triggered. A determination is made whether a remote resource identified from the remote resource resolution matches a speculated remote resource candidate in the plurality of speculated remote resource candidates. Based on the speculated remote resource candidate, a connection for accessing a remote resource is established.
A computing system including memory storing a prompt library. The prompt library includes prompt fragments and prompt templates. The computing system further includes one or more processing devices configured to, at a prompt compiler, receive a prompt generation input including prompt input data. At the prompt compiler, based at least in part on the prompt input data, the one or more processing devices are further configured to select a prompt template and one or more of the prompt fragments from the prompt library. The one or more processing devices are further configured to fill the selected prompt template with the prompt input data and the one or more selected prompt fragments to compute a compiled prompt. At a first machine learning model, the one or more processing devices are further configured to process the compiled prompt and to output the machine learning model output.
A method and system for providing a customized configuration of settings for an audio processor. The customized configuration is determined for a user based on results of a hearing test. The test may be provided by the system or another system. The customized configuration includes an increase or decrease to an intensity level of one or more frequency bands of one or more channels (e.g., left and right channels) to compensate for an elevated hearing threshold level experienced by the user (e.g., as indicated by the hearing test results). The settings may be further adjusted based on an application of one or more equal-loudness contours representing the varying sensitivity of the human ear to different frequencies. When audio is played by an application including or in communication with the audio processor, the audio is adjusted based on the customized configuration and the user is provided with an improved listening experience.
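One way to realize the band/channel gain adjustment described above is a gain mask applied in the frequency domain. The band edges, gains, and FFT-based filter below are illustrative choices, not the patent's method.

```python
# Apply per-band, per-channel dB gains derived from hearing-test results.
import numpy as np

# dB gain per frequency band for each channel, e.g. boosting highs on the
# right ear where the test showed an elevated hearing threshold.
CONFIG = {"left":  {(20, 2000): 0.0, (2000, 20000): 2.0},
          "right": {(20, 2000): 0.0, (2000, 20000): 6.0}}

def apply_config(samples: np.ndarray, rate: int, channel: str) -> np.ndarray:
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    for (lo, hi), gain_db in CONFIG[channel].items():
        band = (freqs >= lo) & (freqs < hi)
        spectrum[band] *= 10 ** (gain_db / 20.0)   # dB to linear amplitude
    return np.fft.irfft(spectrum, n=len(samples))

rate = 16000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 4000 * t)                # 4 kHz test tone
boosted = apply_config(tone, rate, "right")
print(round(boosted.max() / tone.max(), 2))        # ~2.0x amplitude (+6 dB)
```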
An electronic device includes one or more processing components that generate heat during operation. A heat sink assembly is thermally coupled with the one or more processing components, such that the heat sink assembly dissipates at least a portion of the heat generated by the one or more processing components, wherein one or more components of the heat sink assembly are spaced away from a conductive chassis wall of the electronic device to form a cavity. An antenna feed line is disposed within the cavity, such that the antenna feed line and the cavity collectively form a cavity-backed slot-type antenna usable to transmit radio frequency (RF) signals.
A method implemented in a computer system involving a processor system includes loading a first version of a driver into memory, identifying a first endpoint set within the driver, and wrapping each endpoint in the set with a wrapper. The wrappers are registered within the operating system for calling endpoints in the first endpoint set. Subsequently, a second version of the driver is loaded into memory, and the first version is swapped with the second version. The swap process involves determining if the first version has active external calls, ceasing execution if no active calls are present, configuring the wrappers to use the second endpoint set, and initiating execution of the second version of the driver.
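A Python stand-in for the wrapper mechanism above: the operating system registers the wrapper once, calls are counted so the swap can wait for quiescence, and swapping repoints the wrapper at the second endpoint set. All names are illustrative.

```python
# Endpoint wrappers decouple callers from a specific driver version.
class EndpointWrapper:
    def __init__(self, target, name):
        self.target = target            # current driver endpoint
        self.name = name
        self.active_calls = 0

    def __call__(self, *args):
        self.active_calls += 1          # tracked so a swap can wait for quiescence
        try:
            return self.target(*args)
        finally:
            self.active_calls -= 1

def driver_v1_read(block): return f"v1 read {block}"
def driver_v2_read(block): return f"v2 read {block}"

read = EndpointWrapper(driver_v1_read, "read")   # registered with the OS once
print(read(7))                                   # callers always go through the wrapper

# Swap: only proceed when the first version has no active external calls.
if read.active_calls == 0:
    read.target = driver_v2_read                 # wrappers now use the second endpoint set
print(read(7))
```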
A method for processing a multimodal prompt. The method includes receiving a multimodal prompt including a media file and information related to a region of interest (ROI) of the media file. The method further includes determining the ROI of the media file based on the information related to the ROI and generating a plurality of media tiles of interest associated with the ROI. The method further includes encoding the plurality of media tiles of interest and using a large multimodal model (LMM) to process the encoded plurality of media tiles of interest according to a natural-language input of the prompt to generate a response.
G06F 40/40 - Processing or translation of natural language
G06F 40/284 - Lexical analysis, e.g. tokenisation or collocates
G06V 10/25 - Determination of a region of interest [ROI] or a volume of interest [VOI]
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
G06V 10/94 - Hardware or software architectures specially adapted for image or video understanding
96.
CONSISTENCY FOR QUERIES IN PRIVACY-PRESERVING DATA ANALYTIC SYSTEMS
Techniques for executing privacy-preserving aggregation queries on a database table include receiving a query, obtaining the true output, and generating deterministic pseudorandom noise based on the query and current database state. This noise is added to the true output to create a privacy-protected result. For subsequent queries, this process is repeated, potentially with updated state data, ensuring that privacy protection adapts to changes in the underlying data. The approach maintains consistency for repeated queries on unchanged data while providing fresh noise when data changes. It balances privacy protection with data utility, allowing for various query types while guarding against privacy attacks. The techniques include displaying the noisy output to a user interface. The techniques offer adaptive privacy protection, query flexibility, and efficient resource utilization, making them suitable for dynamic data environments requiring both privacy and analytical capabilities.
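The deterministic-noise idea above can be sketched by seeding a PRNG from an HMAC over the query text and a database version counter. The HMAC construction and Laplace noise are plausible choices for illustration, not details given in the abstract.

```python
# Same query + same database state -> same seed -> same noise (consistency);
# a data change bumps db_version and yields fresh noise.
import hashlib
import hmac
import numpy as np

SERVER_SECRET = b"held by the analytics service"   # hypothetical secret

def noisy_count(true_count: int, query_text: str, db_version: int,
                scale: float = 2.0) -> float:
    msg = f"{query_text}|{db_version}".encode()
    seed = hmac.new(SERVER_SECRET, msg, hashlib.sha256).digest()
    rng = np.random.default_rng(int.from_bytes(seed[:8], "big"))
    return true_count + rng.laplace(scale=scale)   # Laplace noise, as in differential privacy

q = "SELECT COUNT(*) FROM visits WHERE age > 40"
print(noisy_count(1234, q, db_version=7))   # repeated query: identical output
print(noisy_count(1234, q, db_version=7))
print(noisy_count(1234, q, db_version=8))   # data changed: fresh noise
```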
Methods, computer systems, and computer storage media are provided for providing intermediate response data in association with AI responses. In embodiments, an input prompt provided via a user interface is obtained. Based on the input prompt, intermediate response data used to generate an artificial intelligence (AI) response to the input prompt is identified. Such intermediate response data may include context data, query data, source data, and/or query results data. Such intermediate response data may be provided for presentation, via the user interface, in association with the AI response. In this way, a user may be provided with information related to a manner in which the AI response is generated.
The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating a content item group comprising members that have interest in a content item. In particular, the disclosed systems can generate a member embedding by leveraging member activity feature data and member information feature data. The disclosed systems can further generate a content item embedding reflecting content item feature data. The disclosed systems may generate a similarity score between the member embedding and the content item embedding. Based on the similarity score meeting a threshold similarity score, the disclosed systems can determine to include the member within a target content item group.
Some embodiments provide or utilize technology which increases the security of network authentication operations, such as Kerberos operations, New Technology LAN Manager operations, or other network authentication operations which utilize security tickets or security tokens or both. In some embodiments, a user machine (also known as a client machine) receives an authentication data structure (ADS) which includes one or more security tickets or security tokens or both. Embodiments constrain the ADS according to at least one security requirement, such as a volatile-memory-only constraint, a secured-memory-only constraint, or a multilayer encryption constraint. Embodiments also transmit the ADS from the machine as a part of performing the network authentication service. Some embodiments inhibit virtual memory, or memory dumping, or both. Some embodiments bind the ADS to the user machine, and some embodiments limit ADS usage counts.
Methods, systems, and devices for providing partitioned store queue management using a partitioned store queue engine of a semiconductor system are described. Partitioned store queue management refers to hardware-based techniques associated with a store queue architecture that helps manage memory operations, including the storing of data to memory. Hardware-based techniques are employed to handle store instructions in a processor pipeline. In operation, a control store queue entry is allocated in a control store queue of a partitioned store queue when a store instruction is in a decode-dispatch stage. A data store queue entry is allocated when the store instruction is in a memory stage of the processor pipeline. The data store queue entry is freed up when the data in the entry is ready. The control store queue entry and the data store queue entry are deallocated when the store instruction is in a write-back stage of the processor pipeline.
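A behavioral sketch of the entry lifecycle described above, with the two partitions allocated at different pipeline stages; the stage methods and structures are illustrative, not the hardware design.

```python
# Control entries are allocated at decode-dispatch, data entries at the
# memory stage, and both are deallocated at write-back.
class PartitionedStoreQueue:
    def __init__(self):
        self.control = []   # control partition: allocated at decode-dispatch
        self.data = {}      # data partition: allocated at the memory stage

    def dispatch(self, store_id, addr):
        self.control.append({"id": store_id, "addr": addr})

    def memory_stage(self, store_id, value):
        self.data[store_id] = value              # data entry holds the store data

    def write_back(self, store_id):
        self.control = [e for e in self.control if e["id"] != store_id]
        return self.data.pop(store_id)           # both entries deallocated here

q = PartitionedStoreQueue()
q.dispatch(1, addr=0x1000)
q.memory_stage(1, value=42)
print(q.write_back(1), len(q.control))           # 42 0
```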