Systems and methods for determining predicted olfactory perception are provided. In particular, a method comprises receiving an input indicating an odorant, generating an odorant vector representing the odorant, generating an olfactory receptor vector, and determining one or more predicted olfactory percepts associated with the odorant based on the odorant vector and the olfactory receptor vector.
Example solutions for using natural language (NL) for complex optimization problems in operations research (OR) include: receiving a user input for an OR problem; generating an NL prompt based on at least the user input, the NL prompt comprising an objective, a variable, input data, and a constraint; using a large language model (LLM), generating a domain-specific language (DSL) passage based on at least the NL prompt, the DSL passage representing the OR problem; transpiling the DSL passage into a programming language passage; solving the OR problem, wherein solving the OR problem comprises executing the programming language passage to generate a problem solution; and generating a report of the problem solution.
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
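To make the pipeline above concrete, here is a minimal, self-contained sketch in Python. The toy DSL grammar, the brute-force solver, and every function name are illustrative assumptions; the DSL_PASSAGE constant stands where an LLM-generated passage would be, and the actual DSL and transpiler are not disclosed here.

```python
# Hypothetical sketch of the NL -> DSL -> code -> solution pipeline.
# The DSL grammar, function names, and solver are illustrative assumptions.

def build_nl_prompt(objective, variables, data, constraints):
    """Assemble the structured NL prompt: objective, variables, input data, constraints."""
    return (f"Objective: {objective}\nVariables: {', '.join(variables)}\n"
            f"Data: {data}\nConstraints: {'; '.join(constraints)}")

# Stand-in for the LLM output: in practice this DSL passage is model-generated.
DSL_PASSAGE = """\
maximize 3*x + 2*y
subject x + y <= 4
subject x <= 2
"""

def transpile(dsl: str) -> str:
    """Transpile the toy DSL into an executable Python passage (brute-force solve)."""
    lines = dsl.strip().splitlines()
    objective = lines[0].removeprefix("maximize ").strip()
    condition = " and ".join(l.removeprefix("subject ").strip() for l in lines[1:])
    return ("best = max(((" + objective + ", x, y)"
            " for x in range(5) for y in range(5)"
            " if " + condition + "))")

program = transpile(DSL_PASSAGE)
scope = {}
exec(program, scope)                               # solving step: run the transpiled passage
print("solution (value, x, y):", scope["best"])    # (10, 2, 2)
```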
3.
DISPLAYING A TRANSLUCENT VERSION OF A USER INTERFACE ELEMENT
Electronic devices described herein are configured to display updated content associated with a first application having a first user interface element disposed in a background area of a display that is obscured by a second user interface element associated with a second application. Responsive to a command from the first application to notify the user of the updated content, the operating system displays at least a portion of a translucent version of the first user interface element with the updated content in the foreground display area, wherein the translucent version of the first user interface element obscures at least a portion of the second user interface element.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
4.
PRODUCING CALIBRATED CONFIDENCE ESTIMATES FOR OPEN-ENDED ANSWERS BY GENERATIVE ARTIFICIAL INTELLIGENCE MODELS
A confidence estimation tool uses a calibrated confidence mapping model to estimate confidence for a model-generated candidate root cause. The tool uses a generative artificial intelligence (“AI”) model to determine, based on a description of a current event, a candidate root cause of the current event. The tool determines a description-based confidence score using the description of the current event and descriptions of a set of relevant historical events in a target domain. The tool also determines a cause-based confidence score using the candidate root cause of the current event and root causes of the set of relevant historical events. Finally, the tool determines a final confidence score using the description-based and cause-based confidence scores. Even if the generative AI model is configured for general-domain applications, by referencing relevant historical events, the tool can accurately estimate confidence for a model-generated candidate root cause within the target domain.
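A minimal sketch of the two-signal combination described above, assuming cosine-similarity retrieval over embeddings and a fitted sigmoid as the calibrated confidence mapping model; the blend weight and sigmoid parameters are placeholders, not the disclosure's values.

```python
# Sketch: blend description-based and cause-based scores, then calibrate.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def retrieval_score(query_vec, historical_vecs, k=3):
    """Average similarity to the k most similar relevant historical events."""
    sims = sorted((cosine(query_vec, h) for h in historical_vecs), reverse=True)
    return sum(sims[:k]) / min(k, len(sims))

def calibrated_confidence(desc_score, cause_score, w=0.6, a=6.0, b=-3.0):
    """Weighted blend of the two scores, mapped through a fitted sigmoid."""
    blended = w * desc_score + (1 - w) * cause_score
    return 1.0 / (1.0 + math.exp(-(a * blended + b)))

desc = retrieval_score([1.0, 0.0], [[0.9, 0.1], [0.5, 0.5], [0.0, 1.0]])
print(round(calibrated_confidence(desc, cause_score=0.7), 3))  # e.g. 0.673
```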
Embodiments of the disclosed technologies provide a training pipeline to fine-tune a machine learning model given a limited set of domain-specific data. The embodiments describe using a first machine learning model to generate a pseudo label associated with a domain-specific training document. The pseudo label comprises machine-generated text of a content type extracted from the domain-specific training document. The embodiments further describe fine-tuning a second machine learning model using the pseudo label, the domain-specific training document, a first low-rank weight matrix, and a second low-rank weight matrix. The fine-tuned second machine learning model generates text of the content type from a domain-specific document.
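The pair of low-rank weight matrices suggests a LoRA-style update, where a frozen base weight W is adapted by a trainable product A @ B. A sketch under that assumption; the shapes and initialization are illustrative, not the embodiment's.

```python
# Sketch of a low-rank (LoRA-style) adaptation; pairing two low-rank matrices
# with a frozen base weight is an assumption about the embodiment.
import numpy as np

d, r = 768, 8                        # hidden size, low rank (r << d)
W = np.random.randn(d, d) * 0.02     # frozen pretrained weight
A = np.zeros((d, r))                 # first low-rank weight matrix (trainable)
B = np.random.randn(r, d) * 0.02     # second low-rank weight matrix (trainable)

def adapted_forward(x: np.ndarray) -> np.ndarray:
    # Only A and B receive gradients during fine-tuning, so the update costs
    # 2*d*r trainable parameters instead of d*d.
    return x @ (W + A @ B)

print(adapted_forward(np.random.randn(1, d)).shape)   # (1, 768)
```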
Embodiments of the disclosed technologies include, responsive to a first use of a first application by a first user, configuring, in a first prompt, at least one instruction based on first application context data and first user context data. The first prompt is stored in a memory that is accessible to the first application and a second application. Via the second application, first output of a generative artificial intelligence (GAI) model is presented to the first user. Based on the first output of the GAI model, at least one second use of the first application by the first user, or at least one first use of a third application by the first user, is configured.
Systems, methods, and devices are described for performing scalable data processing operations. A queue that includes a translatable portion comprising indications of data processing operations translatable to data queries and a non-translatable portion comprising indications of non-translatable data processing operations is maintained. A determination that a first data processing operation of a first code block statement is translatable to a database query is made. An indication of the first data processing operation is included in the translatable portion of the queue. Responsive to a determination that a second data processing operation of a second code block statement is undeferrable, the translatable portion of the queue is compiled into a database query. An execution of the database query to be executed by a database engine to generate a query result is caused. A result dataset corresponding to the query result is transmitted to an application configured to analyze the result dataset.
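A small runnable sketch of the deferred-translation queue: translatable operations accumulate until an undeferrable one forces the queue to compile into a single database query. The operation encoding, SQL dialect, and table name are assumptions.

```python
# Sketch: defer translatable ops, compile them to one SQL query on demand.
queue: list[tuple[str, str]] = []

def compile_to_sql(ops, table="people"):
    cols = [arg for kind, arg in ops if kind == "select"] or ["*"]
    preds = [arg for kind, arg in ops if kind == "filter"]
    sql = f"SELECT {', '.join(cols)} FROM {table}"
    return sql + (f" WHERE {' AND '.join(preds)}" if preds else "")

def enqueue(op: str):
    kind, _, arg = op.partition(":")
    if kind in ("filter", "select"):
        queue.append((kind, arg))            # translatable: defer it
    else:                                    # undeferrable: flush the queue
        print("database engine executes:", compile_to_sql(queue))
        queue.clear()
        print("run locally:", op)

enqueue("filter:age > 30")
enqueue("select:name")
enqueue("plot:histogram")   # undeferrable -> SELECT name FROM people WHERE age > 30
```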
Systems and methods for representing two-dimensional representations as three-dimensional avatars are provided herein. In some examples, one or more input video streams are received. A first subject, within the one or more input video streams, is identified. Based on the one or more input video streams, a first view of the first subject is identified. Based on the one or more input video streams, a second view of the first subject is identified. The first subject is segmented into a plurality of planar objects. The plurality of planar objects are transformed with respect to each other. The plurality of planar objects are based on the first and second views of the first subject. The plurality of planar objects are output in an output video stream. The plurality of planar objects provide perspective of the first subject to one or more viewers.
Methods are described for improving processing and storage efficiencies in large language models (LLMs) while also improving numerical accuracy. The methods are referred to as distribution encoding. The disclosed distribution encoding techniques exploit the non-uniform distribution of model weights to provide improved numerical accuracy and compression, and consequently can reduce the number of GPUs needed for inferencing. This in turn reduces the resources and cost necessary to implement such models.
Some embodiments determine machine configuration intentions from a natural language description of a target machine configuration. Intentions are refined to remove ambiguity, and mapped to pre-approved configuration functions and tasks. A machine configuration task list which invokes the pre-approved configuration functions and tasks is generated by a stabilized language model, and is executed to configure a target machine. The requested target machine is produced without requiring a user or admin to spend substantial effort and time customizing the machine and confirming its security and policy compliance.
Systems and methods for generating custom art fonts with consistent style include receiving user input that identifies a base font style for a custom font and includes descriptive text that defines one or more text effects to use for the custom font. Depth maps are selected for characters to be included in the custom font. The depth maps are preprocessed to add noise to the depth maps. A generative model generates custom font images conditioned on the text prompt and the depth maps. The custom font images are then used to render text on a display screen of a computing device.
The present disclosure relates to efficiently receiving and processing input tasks in a way that is scalable and which reduces both the quantity of tokens processed by a foundation model (e.g., an LLM) as well as the number of API calls that are made in processing the input tasks. A system batches a set of inputs to provide as a single batch of input(s) into an LLM. The system generates one or more permutations of the batched input(s) to determine outputs based on variable orders in which the input data is provided within the respective permutations of the batched inputs. The system may further eliminate one or more of the data inputs within the respective batches to facilitate smaller batched inputs without sacrificing accuracy in a set of outputs generated by the LLM responsive to the batch permutations.
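A toy sketch of the permutation idea: the same batch is submitted in several orders and per-item outputs are majority-voted, so order-sensitive variance in the LLM's answers is averaged out. The stand-in llm function and the voting rule are assumptions.

```python
# Sketch: one batch, several orderings, majority vote per item.
from itertools import permutations
from collections import Counter

def llm(batch):                    # stand-in for a single batched LLM API call
    return [item.upper() for item in batch]

def batched_with_permutations(items, n_perms=3):
    votes = {item: Counter() for item in items}
    for perm in list(permutations(items))[:n_perms]:
        for item, out in zip(perm, llm(list(perm))):
            votes[item][out] += 1
    return {item: c.most_common(1)[0][0] for item, c in votes.items()}

print(batched_with_permutations(["a", "b", "c"]))  # {'a': 'A', 'b': 'B', 'c': 'C'}
```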
A method, computer program product, and computing system for dynamically adjusting the number of emitted tokens per frame in speech processing systems operating with large stride values. The number of emitted tokens per frame can be dynamically adjusted in speech processing systems operating with large stride values by processing a signal frame according to a time-synchronous beam search technique at a frame rate based on a stride value; determining a hypothesis score for each hypothesis of a set of first information for the signal frame; determining a hypothesis score for each hypothesis of a set of second information for the signal frame; comparing a worst hypothesis score of the set of first information to a sum of a best hypothesis score of the set of second information and a threshold value; and ceasing processing of the signal frame when the worst hypothesis score of the set of first information is greater than the sum of the best hypothesis score of the set of second information and the threshold value.
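The stopping rule reduces to a single comparison. A sketch, assuming higher-is-better log-scores (the exact score convention may differ):

```python
# Sketch of the frame-level stopping rule described above.
def should_cease(first_scores, second_scores, threshold):
    """Cease processing the frame when even the worst hypothesis in the first
    set outscores the best hypothesis in the second set by more than the
    threshold value."""
    return min(first_scores) > max(second_scores) + threshold

print(should_cease([-4.0, -3.5], [-9.0, -7.8], threshold=2.0))  # True
```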
Examples of the present disclosure describe systems and methods for enterprise search that leverage periodically updated user context of an enterprise user for intent understanding and treat a search session as a dialog between the user and a digital assistant to allow multi-modal interaction. For example, a query input by the user may be received from a client application having search functionality and an integrated assistant. A current state of the user context may be leveraged to understand the query. Based on the understanding, one or more responsive entities may be retrieved from an enterprise search index as results. Based on the results, a response may be generated that includes the results and a prompt to cause the user to further refine the query and/or provide feedback. The response may be provided to the client application for output as part of a search session dialog between the user and assistant.
The present disclosure relates to a packet data units (PDU) reliability system that improves lawful interception (LI) network functions in a cloud computing system for a telecommunications network. The PDU reliability system utilizes new components and elements within the network functions of the telecommunications network to improve the reliability and robustness of transmitted PDUs. These components include PDU acknowledgment packets, sequence number lists, and PDU receipt timers. For instance, the PDU reliability system utilizes a mediation and delivery function (MDF) to generate PDU acknowledgment packets based on PDUs received from a point of interception (POI) application to indicate which PDUs have been successfully received during PDU receipt timers. Based on the PDU acknowledgment packets, the PDU reliability system causes the POI application to perform one or more PDU actions to ensure the robust and reliable transmission of the PDUs.
Methods, systems, and computer storage media for providing container secure computing modes using a container mode management engine of a security management system. A container secure computing mode can include a secure state in which a container operates to prioritize security measures and practices. A container secure computing mode can be assigned to a container instance and enforced via a container security agent. In operation, a container instance is initialized and associated with a container security agent having a secure compute mode transition control for the container instance. Based on the secure compute mode transition control, the container instance is transitioned into a secure state. A container operation of the container instance is accessed. The execution of the container operation is restricted based on the secure state of the container instance. The secure state is associated with a secure state configuration that supports restricting the container operation.
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
17.
Determining IPD By Adjusting The Positions Of Displayed Stimuli
Techniques for determining a user's IPD are described. A first stimulus is displayed on a first display, and a second stimulus is displayed on a second display. A stimulus separation distance is a distance that exists between the first and second stimuli. The stimulus separation distance is progressively increased by progressively moving, in opposing directions relative to one another, the first and second stimuli. While that distance is being progressively increased, at least one of the user's eyes is tracked. While the distance is being progressively increased, a change in a rate of eye movement for the user's eye is detected. When the change is detected, a value for the stimulus separation distance is recorded. The recorded value is set as a baseline for the user's IPD.
Aspects of the disclosure include decomposing a matrix for a Clifford unitary into a product of first and second involution matrices, determining a first symplectic matrix that transforms the first involution matrix into a first matrix, a first Clifford unitary matrix being described by the first symplectic matrix, and determining a second symplectic matrix that transforms the second involution matrix into a second matrix, a second Clifford unitary matrix being described by the second symplectic matrix. Aspects include, responsive to the first matrix being a diagonal matrix, setting a second number to the size of the first matrix and setting a second sequence to include the second number of generalized S gates, and responsive to the second matrix being a diagonal matrix, setting a first number to the size of the second matrix and setting a first sequence to include the first number of generalized S gates. Aspects include executing the first sequence, the second sequence, and a Pauli unitary P on the quantum computer.
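In notation, the factorization reads (a sketch of the relationships stated above; the normal forms are only as specific as the abstract itself):

\[
M = M_1 M_2, \qquad M_1^2 = M_2^2 = I, \qquad S_i\, M_i\, S_i^{-1} = B_i ,
\]

where $M$ is the matrix of the Clifford unitary, each $S_i$ is symplectic and itself describes a Clifford unitary, and $B_1$, $B_2$ are the first and second matrices. If $B_1$ is diagonal of size $n$, the second sequence consists of $n$ generalized $S$ gates; if $B_2$ is diagonal of size $m$, the first sequence consists of $m$ generalized $S$ gates; the quantum computer then executes the first sequence, the second sequence, and the Pauli unitary $P$.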
The techniques disclosed herein enable systems to efficiently interface with zoned namespace (ZNS) storage devices through a specialized management entity. To achieve this, the management entity receives write requests from a file system containing data intended for storage at the ZNS device. In response, the management entity selects a zone from the ZNS device to write the file data to. Accordingly, the file data is written by appending the file data to the zone at a location indicated by a write pointer. When the write operation is completed, the offset of the file data within the zone is observed and recorded by the file system in file metadata. In contrast to typical systems which allocate locations at the storage device prior to writing, appending file data and then recording the location enables improved efficiency in file system operations. Namely, write operations can be issued to the ZNS device non-serially.
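A compact sketch of the append-then-record flow: the zone, not the file system, decides the placement, and the offset is recorded in file metadata afterward. The Zone class and metadata layout are illustrative assumptions.

```python
# Sketch: append file data at the write pointer, then record the offset.
class Zone:
    def __init__(self):
        self.blocks, self.write_pointer = [], 0
    def append(self, data: bytes) -> int:
        """Append at the current write pointer; return the offset used."""
        offset = self.write_pointer
        self.blocks.append(data)
        self.write_pointer += len(data)
        return offset

zone, file_metadata = Zone(), {}
# Write first, observe where the data landed, then record it -- no
# pre-allocation step, so writes can be issued to the device non-serially.
file_metadata["report.txt"] = {"zone": 0, "offset": zone.append(b"hello")}
print(file_metadata)   # {'report.txt': {'zone': 0, 'offset': 0}}
```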
A data processing system implements receiving, from an application of a client device of a user, a request for content to be generated by a language model. The request includes a natural language prompt describing the content to be generated and an identifier of the user. The system further implements executing a query on one or more sample content sources to obtain sample content items authored at least in part by the user, constructing a prompt for the language model based on the natural language prompt and the sample content items, wherein the prompt instructs the language model to mimic the writing style of the user based on the one or more sample content items, providing the prompt as an input to the language model to obtain the content, providing the content to the application of the client device, and causing the application to present the content in the application.
A computing system (10) is provided, including one or more processing devices (12). The one or more processing devices are configured to receive quantum circuit parameters (20) including a code parameter (23) of an error correction code (22) and a number of T gates included in a quantum circuit (24). The one or more processing devices are further configured to receive respective decoder parameters (30) of each of a plurality of candidate decoders (32). The decoder parameters include a physical noise rate of a plurality of physical qubits at which the quantum circuit is configured to be executed and a stopping time of the candidate decoder. The one or more processing devices are further configured to compute respective spacetime costs (40) of the candidate decoders based on the quantum circuit parameters and the decoder parameters. The one or more processing devices are further configured to output a selection of a lowest-spacetime-cost decoder (42) for implementation at a quantum computing device (50).
The disclosure relates to utilizing an anomaly mitigation proposal system to determine root causes, summarize anomalous metrics, and report mitigation actions for service incidents in cloud computing systems. Based on receiving an incident report request, the anomaly mitigation proposal system utilizes a two-layer approach that implements large generative language models to generate incident reports that include clear and concise text narratives summarizing metric anomalies, root causes, and corresponding mitigation actions. For example, the anomaly mitigation proposal system initially utilizes an online generative language model to provide these incident reports and, when that model is unavailable within a time threshold, a fallback model that references root cause datastores.
Interactive analytics are provided for resource allocation failure incidents, which may be tracked, diagnosed, summarized, and presented in near real-time for users and/or platform/service providers to understand the root cause(s) of failure incidents and actual and hypothetical, failed and successful, allocation scenarios. A capacity analyzer simulates an allocation process implemented by a resource allocation platform. The capacity analyzer may determine which resources were and/or were not eligible for allocation for a request, based on information about the resource allocation failure, resources in the region of interest, and constraints associated with the incident, and the resource allocation rules associated with the resource allocation platform. Users may quickly learn whether a request constraint, a requesting entity constraint, a capacity constraint, and/or a resource platform constraint caused a resource allocation incident. The capacity analyzer may proactively monitor performance and generate alerts about failed and/or successful requests in which users may be interested.
The present disclosure relates to efficiently receiving and processing input tasks in a way that is scalable and which reduces both the quantity of tokens processed by a foundation model (e.g., an LLM) as well as the number of API calls that are made in processing the input tasks. A system batches a set of inputs to provide as a single batch of input(s) into an LLM. The system generates one or more permutations of the batched input(s) to determine outputs based on variable orders in which the input data is provided within the respective permutations of the batched inputs. The system may further eliminate one or more of the data inputs within the respective batches to facilitate smaller batched inputs without sacrificing accuracy in a set of outputs generated by the LLM responsive to the batch permutations.
The technology relates to systems and methods for generating advanced feedback for a draft message. The operations may include receiving text for a message being drafted in a messaging application; upon an analysis condition being satisfied, analyzing the message by applying at least one of a message-analysis model or heuristic to generate a feedback score for the message; and, based on the feedback score crossing a feedback threshold, triggering generation of advanced feedback for the message. The operations may also or alternatively include receiving an initial sent message from a messaging application; analyzing the message by applying at least one of a message-analysis model or heuristic to generate a feedback score for the message; based on the feedback score crossing a feedback threshold, transmitting a feedback alert message for surfacing in the messaging application; and, based on receiving an interaction, triggering generation of advanced feedback for the message.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
H04L 51/063 - Content adaptation, e.g. replacement of unsuitable content
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
26.
ULTRA-SCALABLE HIGH-PERFORMANCE COMPUTING (HPC) NETWORK USING DENSE WAVELENGTH-DIVISION MULTIPLEXING (DWDM)
Systems and methods are provided for implementing an ultra-scalable high-performance computing (“HPC”) network using dense wavelength-division multiplexing (“DWDM”). The HPC system includes an interconnection of GPU devices, multiplexer/demultiplexer (“mux/demux”) devices, amplifiers, wavelength selective switches (“WSSs”), and optical circuit switches (“OCSs”). Each OCS includes a plurality of micro-electromechanical systems (“MEMS”) mirrors and a plurality of input/output (“I/O”) ports, each communicatively coupled to one WSS mux/demux device of one WSS. Each WSS mux/demux device is communicatively coupled either to one of the I/O ports of an OCS or to one of a plurality of GPU mux/demux devices via an amplifier. Each GPU mux/demux device is communicatively coupled to a number of GPU devices, each including another number of GPUs and one or more optoelectronic devices. Selectively controlling the MEMS mirrors of the OCSs and the WSS mux/demux devices of the WSSs allows connecting the GPUs in a network topology for performing a series of computations.
Systems, methods, apparatuses, and computer program products are disclosed for generating a root cause taxonomy from incident data. Top-level classification(s) and incident data are received as inputs. The incident data is processed to generate processed incident data, which is then analyzed to determine patterns in the processed incident data. Second-level classifications are generated based on the determined patterns and added to the root cause taxonomy. The root cause taxonomy may then be used to classify incidents in the incident data.
A computing system for monitoring language model compliance with a rubric of one or more output characteristics. The computing system includes processing circuitry configured to interface with a trained generative language model that receives input of a prompt including natural language text input and, in response, generates an output that includes natural language text output. The processing circuitry is further configured to monitor compliance of the generative language model with the rubric, by feeding the output of the generative language model to a rubric classifier configured to generate a predicted classification for an output characteristic in the rubric, and output the predicted classification.
The disclosure relates to utilizing an anomaly mitigation proposal system to determine root causes, summarize anomalous metrics, and report mitigation actions for service incidents in cloud computing systems. Based on receiving an incident report request, the anomaly mitigation proposal system utilizes a two-layer approach that implements large generative language models to generate incident reports that include clear and concise text narratives summarizing metric anomalies, root causes, and corresponding mitigation actions. For example, the anomaly mitigation proposal system initially utilizes an online generative language model to provide these incident reports and, when that model is unavailable within a time threshold, a fallback model that references root cause datastores.
H04L 41/0631 - Management of faults, events, alarms or notifications using root cause analysis; Management of faults, events, alarms or notifications using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
H04L 43/08 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
Systems and methods for processing a stream of input images are provided. An example method includes receiving a stream of input images and a pointing angle associated with the stream of input images, wherein each input image in the stream of input images comprises a plurality of pixels; interpolating an effective analytical projection, for each input image of the stream of input images, from a grid of predetermined analytical projections, based on the respective pointing angle and plurality of pixels of each of the input images of the stream of input images, wherein the grid of predetermined analytical projections comprises a plurality of spaces that each correspond to respective predetermined pointing angles; generating a modified stream of input images, by mapping pixels of the input stream of images to projected pixels of the modified stream of images, using the effective analytical projection; and displaying the modified stream of images.
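One plausible reading of the interpolation step, sketched below: the pointing angle indexes into the grid of predetermined projections, and the effective projection is a blend of the two nearest neighbors. The 1-D grid and linear weights are simplifying assumptions.

```python
# Sketch: derive an effective projection by blending grid neighbors.
import bisect

GRID_ANGLES = [0.0, 15.0, 30.0, 45.0]    # predetermined pointing angles (degrees)

def effective_projection(angle: float):
    """Linear interpolation weights over the two nearest grid projections."""
    i = min(max(bisect.bisect(GRID_ANGLES, angle), 1), len(GRID_ANGLES) - 1)
    a0, a1 = GRID_ANGLES[i - 1], GRID_ANGLES[i]
    w = (angle - a0) / (a1 - a0)
    return (1 - w, a0), (w, a1)           # (weight, grid angle) pairs

print(effective_projection(20.0))         # ~2/3 weight on 15.0, ~1/3 on 30.0
```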
Methods are described for improving processing and storage efficiencies in large language models (LLMs) while also improving numerical accuracy. The methods are referred to as distribution encoding. The disclosed distribution encoding techniques exploit the non-uniform distribution of model weights to provide improved numerical accuracy and compression, and consequently can reduce the number of GPUs needed for inferencing. This in turn reduces the resources and cost necessary to implement such models.
Embodiments described herein are directed to an adaptive AI model for 3D object detection using synthetic training data. For example, an ML model is trained to detect certain items of interest based on a training set that is synthetically generated in real time during the training process. The training set comprises a plurality of images depicting containers that are virtually packed with items of interest. Each image of the training set is a composite of an image comprising a container that is packed with items of non-interest and an image comprising an item of interest scanned in isolation. A plurality of such images is generated during any given training iteration of the ML model. Once trained, the ML model is configured to detect items of interest in actual containers and output a classification indicative of a likelihood that a container comprises an item of interest.
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
A computer-implemented method for performing natural language-based data integration includes causing execution of a data integration application on a remote device via a network and causing surfacing of a GUI corresponding to the data integration application on a display of the remote device. The method includes receiving, via the GUI, a natural language input representing a data integration task, generating, via an LLM, a set of ordered activities corresponding to the data integration task represented by the natural language input, and selecting, via the LLM, one or more APIs for performing each activity within the set of ordered activities. The method also includes generating a data pipeline based on the set of ordered activities and the API(s) for performing each activity, as well as back-translating the data pipeline to a desired data format for execution by the data integration application.
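A sketch of the planning step: the LLM's ordered activities are paired with APIs from a registry, preserving order, to form the pipeline. The registry contents and endpoint strings are hypothetical.

```python
# Sketch: map LLM-proposed ordered activities to APIs, in order.
ACTIVITY_APIS = {                          # assumed registry of integration APIs
    "extract": "GET /sources/{id}/rows",
    "transform": "POST /transform/normalize",
    "load": "POST /warehouse/tables/{name}",
}

def build_pipeline(ordered_activities: list[str]):
    """Pair each activity with its selected API; order comes from the LLM plan."""
    return [(activity, ACTIVITY_APIS[activity]) for activity in ordered_activities]

for step, api in build_pipeline(["extract", "transform", "load"]):
    print(f"{step}: {api}")
```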
A technique for interacting with map-related information integrates the use of a machine-trained language model. Upon submission of a query, the technique uses the machine-trained language model to assess at least one intent associated with the query. The technique then invokes an intent-specific processing flow to provide an output result. Each processing flow invokes the use of at least one processing engine to perform an engine-specific task, such as geocoding, route finding, or image retrieval. A processing flow can also call on the machine-trained language model one or more additional times. In some cases, the technique includes a feedback mechanism for soliciting additional information from a user.
Systems for transitioning a user interface arrangement from a display of a two-dimensional image of a user to a rendering of a three-dimensional representation of the user are provided. A system can start with a UI including a rendering of a user that is based on a 2D image file. The system can receive an input that is configured to cause the system to transition the display of the rendering of the 2D image of the select user to a rendering of the three-dimensional representation of the select user. To display the rendering of the 3D representation of the select user, the system uses permission data and a three-dimensional model defining a position and orientation to display the 3D representation of the user. The system allows users to switch between viewing modes so they can interact with content using the most effective type of hardware.
Interactive analytics are provided for resource allocation failure incidents, which may be tracked, diagnosed, summarized, and presented in near real-time for users and/or platform/service providers to understand the root cause(s) of failure incidents and actual and hypothetical, failed and successful, allocation scenarios. A capacity analyzer simulates an allocation process implemented by a resource allocation platform. The capacity analyzer may determine which resources were and/or were not eligible for allocation for a request, based on information about the resource allocation failure, resources in the region of interest, and constraints associated with the incident, and the resource allocation rules associated with the resource allocation platform. Users may quickly learn whether a request constraint, a requesting entity constraint, a capacity constraint, and/or a resource platform constraint caused a resource allocation incident. The capacity analyzer may proactively monitor performance and generate alerts about failed and/or successful requests in which users may be interested.
H04L 41/0631 - Management of faults, events, alarms or notifications using root cause analysis; Management of faults, events, alarms or notifications using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
H04L 41/0604 - Management of faults, events, alarms or notifications using filtering, e.g. reduction of information by using priority, element types, position or time
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
H04L 47/74 - Admission control; Resource allocation measures in reaction to resource unavailability
37.
SECURE PLATFORM FOR TEST AND INFRASTRUCTURE MANAGEMENT
A virtual machine (VM) test instance is created in a virtual machine scale set. When the VM test instance is created, a unique set of credentials is also created corresponding to the VM test instance. The unique set of credentials is stored in a secret store that is separate from other cloud and organization credentials. When access to a VM test instance is requested by a user, the unique credentials are provided to the user to use the VM test instance for a limited time. When the user is finished using the VM test instance, or when the VM test instance expires, then the VM test instance is destroyed and the unique credentials are also destroyed.
A device establishes a communications channel between a first web application executing within a first browsing context that is not cross-origin isolated and a second web application executing within a second browsing context that is cross-origin isolated. This includes loading a proxy page within a third browsing context of the web browser, the loading being initiated by the first browsing context, and loading a worker instance using a script provided by the proxy page. Content of the proxy page is served from an origin associated with the second web application. The device passes a first message through the communications channel from the first web application to the second web application. The first message requests the performance of a compute job by the second web application. The device also passes, from the second web application to the first web application, a second message that comprises a result of the compute job.
A data processing system for providing a service to extract information from a resource includes: a network interface for communicating over a computer network; a scraper tool to receive a user instruction specifying a target resource and to extract content from the specified resource, wherein the user instruction further specifies a desired restructuring of the extracted content; and a prompt generator to structure the extracted content into a prompt for an Artificial Intelligence (AI) model, the prompt further directing the AI model to restructure the extracted content based on the user instruction. The prompt generator is to call the AI model with the generated prompt. The service is to receive restructured content from the AI model and provide the restructured content to a workstation submitting the user instruction, the restructured content presenting the content of the target resource in a form according to the user instruction.
Selectively controllable cleavable linkers include electrochemically-cleavable linkers, photolabile linkers, thermolabile linkers, chemically-labile linkers, and enzymatically-cleavable linkers. Selective cleavage of individual linkers may be controlled by changing local conditions. Local conditions may be changed by activating electrodes in proximity to the linkers, exposing the linkers to light, heating the linkers, or applying chemicals. Selective cleaving of enzymatically-cleavable linkers may be controlled by designing the sequences of different sets of the individual linkers to respond to different enzymes. Cleavable linkers may be used to attach polymers to a solid substrate. Selective cleavage of the linkers enables release of specific polymers from the solid substrate. Cleavable linkers may also be used to attach protecting groups to the ends of growing polymers. The protecting groups may be selectively removed by cleavage of the linkers to enable growth of specific polymers.
C12Q 1/6834 - Enzymatic or biochemical coupling of nucleic acids to a solid phase
C07H 21/04 - Compounds containing two or more mononucleotide units having separate phosphate or polyphosphate groups linked by saccharide radicals of nucleoside groups, e.g. nucleic acids with deoxyribosyl as saccharide radical
C07H 99/00 - Subject matter not provided for in other groups of this subclass
C07K 1/04 - General processes for the preparation of peptides on carriers
C07K 1/10 - General processes for the preparation of peptides using coupling agents
C07K 14/00 - Peptides having more than 20 amino acids; Gastrins; Somatostatins; Melanotropins; Derivatives thereof
C07K 17/00 - Carrier-bound or immobilised peptides; Preparation thereof
C07K 17/02 - Peptides being immobilised on, or in, an organic carrier
C07K 17/08 - Peptides being immobilised on, or in, an organic carrier the carrier being a synthetic polymer
A method, computer program product, and computing system for disentangling background information from speaker information in a speech signal. Background information is extracted from the speech signal to generate a background acoustics embedding and speaker information is extracted from the speech signal to generate a speaker acoustics embedding. A first loss factor is applied to the background acoustics embedding to decrease speaker information therein to generate a processed background acoustics embedding using machine learning and a second loss factor is applied to the speaker acoustics embedding to decrease background information therein to generate a processed speaker acoustics embedding using machine learning. At least one of the processed background acoustics embedding and the processed speaker acoustics embedding is output to a speech processing system.
A method, computer program product, and computing system for processing training data and prediction data as a plurality of tokens using a classification-based machine learning model. A plurality of weighting features associated with the training data and the prediction data are defined by processing the output of the machine learning model with an attention layer. The plurality of weighting features are reshaped to generate weights for a trained neural network by processing the plurality of weighting features with an attention layer.
Aspects of the disclosure include methods and systems for performing automated fault scenario generation for chaos engineering. Aspects include obtaining a configuration of a service under test, obtaining a first plurality of fault scenarios, and applying each of the first plurality of fault scenarios to the service under test. Aspects also include recording telemetry data regarding an operation of the service under test under each of the fault scenarios, selecting, based on the telemetry data, a first fault scenario from the fault scenarios, and generating a second plurality of fault scenarios. Aspects further include applying each of the second plurality of fault scenarios to the service under test, recording telemetry data regarding the operation of the service under test under each of the second plurality of fault scenarios, and identifying a vulnerability of the service under test based on the recorded telemetry data.
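A loop-shaped sketch of the generate-apply-refine cycle: apply scenarios, pick the most damaging one from telemetry, and derive a second generation from it. The telemetry stand-in and mutation rule are invented for illustration.

```python
# Sketch of the iterative fault-scenario search; all names are assumptions.
def apply_fault(service: str, scenario: str) -> dict:
    # stand-in telemetry; a real harness would inject the fault and measure
    return {"error_rate": hash((service, scenario)) % 100 / 100}

def mutate(scenario: str) -> list[str]:
    return [f"{scenario}+latency", f"{scenario}+packet-loss"]

def chaos_search(service: str, scenarios: list[str], rounds: int = 2):
    findings = []
    for _ in range(rounds):
        telemetry = {s: apply_fault(service, s) for s in scenarios}
        worst = max(telemetry, key=lambda s: telemetry[s]["error_rate"])
        findings.append((worst, telemetry[worst]))   # candidate vulnerability
        scenarios = mutate(worst)                    # second plurality of scenarios
    return findings

print(chaos_search("checkout", ["kill-pod", "cpu-spike"]))
```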
The description relates to cameras, and camera calibration for enhancing user experiences. One example can receive a first image of a user at a first location relative to a camera. The first image can include the user's upper body but does not include the user from head to toe. The example can receive a second image of the user at a second location relative to the camera. The second image can include the user's upper body but does not include the user from head to toe. The example can estimate a distance of the second location from the first location relative to the camera, and calibrate a height and tilt angle of the camera from the first image, the second image, and the estimated distance, without a full body image of the user.
This disclosure presents an image generation system designed to generate a series of contextually-persistent visual images for a text document. For instance, the image generation system utilizes multiple computer-based models, entity identifiers, and visual entity embeddings to create multiple synthetic images for a given text document. These synthetic images share a consistent theme and style. Additionally, the synthetic images include the same characters, places, and objects. Indeed, the image generation system implements seamless and consistent visual representations of the entities throughout the text document.
Searches based on an incoming ticket identify quality ticket enrichment data using a vector database. Language model prompts target particular kinds of quality ticket data. The incoming quality ticket, or a search result ticket, or both, are enriched using enrichment data, such as a user intent identification, a workaround suggestion, a resolution description, a target audience description, a relevance description, an impact description, a description of missing resolution facilitation information, an association between the incoming quality ticket and the search result ticket, a user sentiment identification, a tag suggestion, or a feedback utility estimate. The enrichment reduces engineering and support burdens, and facilitates faster, more effective resolution of the problem or the request that is stated or implied in the incoming quality ticket. Duplicate tickets are merged or removed. Tickets are prioritized. Missing problem resolution information is identified and requested sooner.
Methods are described for improving processing and storage efficiencies in large language models (LLMs) while also improving numerical accuracy. The methods are referred to as distribution encoding. The disclosed distribution encoding techniques exploit the non-uniform distribution of model weights to provide improved numerical accuracy and compression, and consequently can reduce the number of GPUs needed for inferencing. This in turn reduces the resources and cost necessary to implement such models.
Systems, methods, apparatuses, and computer program products are disclosed for generating a root cause taxonomy from incident data. Top-level classification(s) and incident data are received as inputs. The incident data is processed to generate processed incident data, which is then analyzed to determine patterns in the processed incident data. Second-level classifications are generated based on the determined patterns and added to the root cause taxonomy. The root cause taxonomy may then be used to classify incidents in the incident data.
Some embodiments determine machine configuration intentions from a natural language description of a target machine configuration. Intentions are refined to remove ambiguity, and mapped to pre-approved configuration functions and tasks. A machine configuration task list which invokes the pre-approved configuration functions and tasks is generated by a stabilized language model, and is executed to configure a target machine. The requested target machine is produced without requiring a user or admin to spend substantial effort and time customizing the machine and confirming its security and policy compliance.
Methods are described for improving processing and storage efficiencies in large language models (LLMs) while also improving numerical accuracy. The methods are referred to as distribution encoding. The disclosed distribution encoding techniques exploit the non-uniform distribution of model weights to provide improved numerical accuracy and compression, and consequently can reduce the number of GPUs needed for inferencing. This in turn reduces the resources and cost necessary to implement such models.
Systems and methods for processing a stream of input images are provided. An example method includes receiving a stream of input images and a pointing angle associated with the stream of input images, wherein each input image in the stream of input images comprises a plurality of pixels; interpolating an effective analytical projection, for each input image of the stream of input images, from a grid of predetermined analytical projections, based on the respective pointing angle and plurality of pixels of each of the input images of the stream of input images, wherein the grid of predetermined analytical projections comprises a plurality of spaces that each correspond to respective predetermined pointing angles; generating a modified stream of input images, by mapping pixels of the input stream of images to projected pixels of the modified stream of images, using the effective analytical projection; and displaying the modified stream of images.
A computer-implemented method for performing natural language-based data integration includes causing execution of a data integration application on a remote device via a network and causing surfacing of a GUI corresponding to the data integration application on a display of the remote device. The method includes receiving, via the GUI, a natural language input representing a data integration task, generating, via an LLM, a set of ordered activities corresponding to the data integration task represented by the natural language input, and selecting, via the LLM, one or more APIs for performing each activity within the set of ordered activities. The method also includes generating a data pipeline based on the set of ordered activities and the API(s) for performing each activity, as well as back-translating the data pipeline to a desired data format for execution by the data integration application.
A method, computer program product, and computing system for processing training data and prediction data as a plurality of tokens using a classification-based machine learning model. A plurality of weighting features associated with the training data and the prediction data are defined by processing the output of the machine learning model with an attention layer. The plurality of weighting features are reshaped to generate weights for a trained neural network by processing the plurality of weighting features with an attention layer.
Some embodiments automatically and proactively adjust network device configuration settings during network operation, based on correlations between device performance and device configuration. Correlations are computed using statistics routines or computed by a machine learning module. Some embodiments share adjusted configuration values via a cache, and some persist adjusted values through an application restart. In some embodiments, the cache is hierarchical and different kinds of reconfiguration data are shared at different levels. In some embodiments, the configuration value is shared only between application instances that have sufficiently similar contexts. Some embodiments detect a correlation loss and fall back to a known good configuration setting or a default configuration setting. Some embodiments optimize network internode communications by making dynamic adjustments which are not available from static configuration settings or from static configuration rules.
H04L 41/0816 - Configuration setting characterised by the conditions triggering a change of settings, the condition being an adaptation, e.g. in response to network events
H04L 41/084 - Configuration by using pre-existing information, e.g. using templates or copying from other elements
H04L 41/142 - Network analysis or design using statistical or mathematical methods
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
H04L 41/5009 - Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
H04L 43/067 - Generation of reports using time frame reporting
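For the correlation-driven adjustment described above, a minimal sketch using the standard-library correlation routine (Python 3.10+); the thresholds, the doubling rule, and the fallback value are assumptions.

```python
# Sketch: correlate a configuration setting with performance, then adjust.
from statistics import correlation   # Python 3.10+

buffer_sizes = [4, 8, 16, 32, 64]         # observed configuration values
throughputs  = [100, 180, 240, 300, 310]  # matching performance samples

r = correlation(buffer_sizes, throughputs)
setting = 16
if r > 0.7:          # strong positive correlation: nudge the setting upward
    setting *= 2
elif r < -0.7:       # negative correlation (or correlation loss elsewhere):
    setting = 8      # fall back to a known good / default configuration
print(round(r, 3), setting)
```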
55.
ACCELERATING IN-BROWSER TASKS OF AN UNPRIVILEGED WEB APPLICATION
A device establishes a communications channel between a first web application executing within a first browsing context that is not cross-origin isolated and a second web application executing within a second browsing context that is cross-origin isolated. This includes loading a proxy page within a third browsing context of the web browser, the loading being initiated by the first browsing context, and loading a worker instance using a script provided by the proxy page. Content of the proxy page is served from an origin associated with the second web application. The device passes a first message through the communications channel from the first web application to the second web application. The first message requests the performance of a compute job by the second web application. The device also passes, from the second web application to the first web application, a second message that comprises a result of the compute job.
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
56.
IDENTIFYING HALLUCINATIONS IN LARGE LANGUAGE MODEL OUTPUT
A computer-implemented method of generating verification data for a query result provided by a large language model (LLM) includes generating a prompt for the large language model. The prompt contains a verification request for a query, the query including query text and input data from which the query result can be derived. The verification request includes instructions that cause the LLM to generate verification data that indicates a derivation of the query result from the input data. Another computer-implemented method includes receiving the verification data and processing the verification data to determine whether the query result was validly derived from the input data.
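A sketch of both methods in miniature: the prompt asks the LLM for a derivation alongside the answer, and the checker accepts only derivations whose cited fragments actually occur in the input data. The prompt wording and the substring check are illustrative simplifications.

```python
# Sketch: request verification data, then validate the derivation.
def build_verification_prompt(query_text: str, input_data: str) -> str:
    return (f"Question: {query_text}\nData: {input_data}\n"
            "Answer the question, then list which parts of the data "
            "each step of the answer was derived from.")

def validly_derived(verification_steps: list[str], input_data: str) -> bool:
    """Accept only if every cited fragment occurs verbatim in the input data."""
    return all(fragment in input_data for fragment in verification_steps)

data = "Alice joined in 2019. Bob joined in 2021."
print(validly_derived(["Alice joined in 2019"], data))   # True
print(validly_derived(["Alice joined in 2018"], data))   # False (hallucinated)
```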
Embodiments of the disclosed technologies include, responsive to a first use of a first application by a first user, configuring, in a first prompt, at least one instruction based on first application context data and first user context data. The first prompt is stored in a memory that is accessible to the first application and a second application. Via the second application, first output of a generative artificial intelligence (GAI) model is presented to the first user. Based on the first output of the GAI model, at least one second use of the first application by the first user, or at least one first use of a third application by the first user, is configured.
Example solutions for using natural language (NL) for complex optimization problems in operations research (OR) include: receiving a user input for an OR problem; generating an NL prompt based on at least the user input, the NL prompt comprising an objective, a variable, input data, and a constraint; using a large language model (LLM), generating a domain-specific language (DSL) passage based on at least the NL prompt, the DSL passage representing the OR problem; transpiling the DSL passage into a programming language passage; solving the OR problem, wherein solving the OR problem comprises executing the programming language passage to generate a problem solution; and generating a report of the problem solution.
Some embodiments automatically and proactively adjust network device configuration settings during network operation, based on correlations between device performance and device configuration. Correlations are computed using statistics routines or computed by a machine learning module. Some embodiments share adjusted configuration values via a cache, and some persist adjusted values through an application restart. In some embodiments, the cache is hierarchical and different kinds of reconfiguration data are shared at different levels. In some embodiments, the configuration value is shared only between application instances that have sufficiently similar contexts. Some embodiments detect a correlation loss and fall back to a known good configuration setting or a default configuration setting. Some embodiments optimize network internode communications by making dynamic adjustments which are not available from static configuration settings or from static configuration rules.
H04L 41/0816 - Configuration setting characterised by the conditions triggering a change of settings, the condition being an adaptation, e.g. in response to network events
H04L 41/0604 - Management of faults, events, alarms or notifications using filtering, e.g. reduction of information by using priority, element types, position or time
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
60.
Controlling touch operation modes for touch sensors utilizing determined touch usage probability
One example provides a computing device comprising a display, a touch sensor operatively coupled to the display, a non-touch sensor configured to provide an output indicative of user engagement between the computing device and a user of the computing device, a logic machine, and a storage machine. The storage machine comprises instructions executable by the logic machine to determine a touch usage probability based at least upon the output from the non-touch sensor. The instructions are further executable to change the operation of the touch sensor between an idle mode and a scanning mode based at least on the touch usage probability meeting a probability threshold condition.
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 1/3231 - Monitoring the presence, absence or movement of users
G06F 1/3296 - Power saving characterised by the action undertaken by lowering the supply or operating voltage
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
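A toy sketch of the probability-gated mode switch described above; the sensor fusion weights and the threshold are invented placeholders, not the device's actual policy.

```python
# Sketch: derive a touch usage probability from non-touch sensors, gate the mode.
def touch_usage_probability(imu_motion: float, proximity: bool, hinge_angle: float) -> float:
    p = 0.2
    if proximity:                 p += 0.4   # user's hand near the display
    if imu_motion > 0.5:          p += 0.2   # device being handled
    if 60 <= hinge_angle <= 180:  p += 0.2   # posture consistent with touch use
    return min(p, 1.0)

THRESHOLD = 0.5
p = touch_usage_probability(imu_motion=0.7, proximity=True, hinge_angle=110)
mode = "scanning" if p >= THRESHOLD else "idle"
print(p, mode)   # scan only when touch is likely, saving power otherwise
```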
61.
SELECTING DECODER USED AT QUANTUM COMPUTING DEVICE
A computing system is provided, including one or more processing devices. The one or more processing devices are configured to receive quantum circuit parameters including a code parameter of an error correction code and a number of T gates included in a quantum circuit. The one or more processing devices are further configured to receive respective decoder parameters of each of a plurality of candidate decoders. The decoder parameters include a physical noise rate of a plurality of physical qubits at which the quantum circuit is configured to be executed and a stopping time of the candidate decoder. The one or more processing devices are further configured to compute respective spacetime costs of the candidate decoders based on the quantum circuit parameters and the decoder parameters. The one or more processing devices are further configured to output a selection of a lowest-spacetime-cost decoder for implementation at a quantum computing device.
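A sketch of the selection step; the cost formula below (physical qubits × stopping time × T-gate count) and the numbers are assumptions standing in for the disclosure's actual spacetime-cost computation, and a real computation would also fold in the physical noise rate.

```python
# Sketch: compute a spacetime cost per candidate decoder, pick the minimum.
def spacetime_cost(code_distance: int, n_t_gates: int, stopping_time: float) -> float:
    physical_qubits = 2 * code_distance ** 2     # rough surface-code estimate
    return physical_qubits * stopping_time * n_t_gates

stopping_times = {"union-find": 1e-6, "mwpm": 5e-6}   # seconds, assumed values
costs = {name: spacetime_cost(15, 1_000_000, t) for name, t in stopping_times.items()}
print(min(costs, key=costs.get))                      # lowest-spacetime-cost decoder
```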
The present disclosure provides methods and apparatuses for implementing video effect addition. At a target application, an original video frame may be obtained from a video source; the original video frame may be provided to a video effect processing application, the video effect processing application being a Web application; and a processed video frame to which a video effect is applied may be obtained from the video effect processing application. At a video effect processing application, an original video frame may be obtained from a target application; a video effect may be applied to the original video frame to obtain a processed video frame; and the processed video frame may be provided to the target application.
A computer system is configured to provision a plurality of storage volumes at a plurality of fault domains and thinly provision a plurality of cache volumes at the plurality of fault domains. The computer system is also configured to perform a write operation in a resilient manner that maintains a plurality of copies of data associated with the write operation. Performing the write operation in the resilient manner includes allocating a portion of storage in each of the plurality of cache volumes, and caching the data associated with the write operation in the portion of storage in each of the plurality of cache volumes. The cached data is then persistently stored in the plurality of storage volumes. After that, the portion of storage in each of the plurality of cache volumes is deallocated.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
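A minimal sketch of the write path described above, with in-memory dictionaries standing in for thinly provisioned cache volumes and storage volumes at each fault domain:

class FaultDomain:
    def __init__(self, name):
        self.name = name
        self.cache = {}      # thinly provisioned: allocated on demand
        self.storage = {}

def resilient_write(domains, key, data):
    # 1. Allocate cache space and write a copy in each fault domain.
    for d in domains:
        d.cache[key] = data
    # 2. Persist the cached data to each domain's storage volume.
    for d in domains:
        d.storage[key] = d.cache[key]
    # 3. Deallocate cache now that durable copies exist.
    for d in domains:
        del d.cache[key]

domains = [FaultDomain("fd0"), FaultDomain("fd1"), FaultDomain("fd2")]
resilient_write(domains, "blk42", b"payload")
print(all(d.storage["blk42"] == b"payload" and not d.cache for d in domains))  # True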
A computer-implemented method comprising: receiving, from a user device, video data from a user; training a first machine learning model based on the video data to provide a second machine learning model, the second machine learning model being personalized to the user, wherein the second machine learning model is trained to predict movement of the user based on audio data; receiving further audio data from the user; determining predicted movements of the user based on the further audio data and the second machine learning model; and using the predicted movements of the user to generate animation of an avatar of the user.
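As an illustration of the personalization step, the sketch below fits a per-user linear map from audio features to movement parameters; a real system would fine-tune a pretrained network rather than solve least squares, and the feature shapes are invented.

import numpy as np

rng = np.random.default_rng(0)
audio_feats = rng.normal(size=(200, 16))            # features from the user's recordings
true_map = rng.normal(size=(16, 6))
movements = audio_feats @ true_map                  # toy head/lip pose targets

# "Fine-tune": solve for user-specific weights from the user's own data.
W, *_ = np.linalg.lstsq(audio_feats, movements, rcond=None)

def predict_movements(new_audio_feats):
    """Drive the avatar from further audio using the personalized model."""
    return new_audio_feats @ W

print(np.allclose(predict_movements(audio_feats[:5]), movements[:5], atol=1e-6))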
Systems, methods, and computer-readable storage devices are disclosed for improved table identification in a spreadsheet. One method including: receiving a spreadsheet including at least one table; identifying, using machine learning, one or more classes of a plurality of classes for each cell of the received spreadsheet, wherein the plurality of classes include corners and not-a-corner; and inducing at least one table in the received spreadsheet based on the one or more identified classes for each cell of the received spreadsheet.
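The induction step might look like the following sketch, where per-cell corner predictions (from a classifier not shown) are paired into table regions; the 'TL'/'BR' class names are assumed for illustration.

def induce_tables(cell_classes):
    """cell_classes maps (row, col) -> one of 'TL', 'BR', 'not-a-corner'."""
    tls = sorted(p for p, c in cell_classes.items() if c == "TL")
    brs = sorted(p for p, c in cell_classes.items() if c == "BR")
    tables = []
    for tl in tls:
        # Match each top-left corner with the nearest bottom-right below/right of it.
        candidates = [br for br in brs if br[0] >= tl[0] and br[1] >= tl[1]]
        if candidates:
            tables.append((tl, min(candidates)))
    return tables

preds = {(1, 1): "TL", (4, 3): "BR", (2, 2): "not-a-corner"}
print(induce_tables(preds))  # [((1, 1), (4, 3))]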
Implementations of the subject matter described herein provide a solution in which quick and comfortable operation can be achieved while providing improved intuitiveness. In the solution, a scroll assembly for use with a pointing device is provided. The scroll assembly comprises: a first scroll member for controlling a first movement of an object on a user interface; and at least one second scroll member for controlling a second movement of the object on the user interface, the second scroll member being adapted to, in response to an operation applied substantially in a first direction, rotate and provide a haptic feedback in a second direction that is substantially perpendicular to the first direction.
G06F 3/0362 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 1D translations or rotations of an operating part of the device, e.g. scroll wheels, sliders, knobs, rollers or belts
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
Systems and methods for generating a power consumption rating include receiving instrumentation data corresponding to a plurality of applications. The received instrumentation data is processed to calculate a relative power consumption value for each application of the plurality of applications. The relative power consumption values are compared, and a power consumption rating for each application is generated based on the comparison, thereby providing an easily evaluated visual indicator of power consumption for the applications.
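A minimal sketch of the rating computation, with assumed bucket thresholds standing in for whatever rating scale the disclosure uses:

def power_ratings(energy_by_app):
    """Normalize measured energy per app, then bucket into a coarse rating."""
    total = sum(energy_by_app.values())
    ratings = {}
    for app, joules in energy_by_app.items():
        relative = joules / total            # relative power consumption value
        if relative < 0.10:
            ratings[app] = "low"
        elif relative < 0.30:
            ratings[app] = "medium"
        else:
            ratings[app] = "high"
    return ratings

print(power_ratings({"mail": 40.0, "video": 300.0, "editor": 60.0}))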
A computing system retrieves a value of a device identifier of itself and generates a device claim asserting the value of the device identifier. The device claim is then associated with an identifier of a user of the computing system. The computing system then generates and attaches proof code to the device claim to turn the device claim into a verifiable device credential (VDC). The proof code proves that the VDC is issued by the user of the computing system. The VDC is later presented to a relying entity as part of an identity protection system to further protect the user's identity.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
The description relates to safely and accurately testing high-power computer rack power supplies. One example can include a computer rack that includes multiple computers and a high-power computer rack supply (HPCRS) lead terminating in a connector. The HPCRS lead includes multiple conductors and is configured to couple to the computer rack to power the multiple computers. This example can include a full-spectrum computer rack power supply testing (FSCRPST) device configured to test the power from the HPCRS lead.
A user query for information regarding data of a codebase is answered by a large language model given a prompt that includes examples of code segments from the codebase that are similar to the user query. The code segments from the codebase are associated with metadata that includes both natural language text and source code. The search for the examples of code segments from the codebase is based on embeddings of code segments and associated metadata that are closely similar to an embedding of the user query and context.
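The retrieval step could be sketched as follows, with a deterministic toy embed() standing in for the embedding model: embed each (code segment + metadata) pair, then rank by cosine similarity to the embedded query.

import math, hashlib

def embed(text, dim=64):
    """Deterministic toy embedding; a real system would call a model."""
    v = [0.0] * dim
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        v[h % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def top_k(query, segments, k=2):
    q = embed(query)
    scored = [(sum(a * b for a, b in zip(q, embed(s["code"] + " " + s["metadata"]))), s)
              for s in segments]
    return [s for _, s in sorted(scored, reverse=True, key=lambda t: t[0])[:k]]

segments = [
    {"code": "def parse_csv(path): ...", "metadata": "reads csv rows into dicts"},
    {"code": "def send_mail(to): ...", "metadata": "smtp email helper"},
]
print(top_k("how do we read csv files", segments, k=1)[0]["metadata"])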
A computer-implemented method of generating verification data for a query result provided by a large language model, LLM, includes generating a prompt for the large language model. The prompt contains a verification request for a query, the query including query text and input data from which the query result can be derived. The verification request includes instructions that cause the LLM to generate verification data that indicates a derivation of the query result from the input data. Another computer-implemented method includes receiving the verification data and processing the verification data to determine whether the query result was validly derived from the input data.
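A sketch of both methods follows; the prompt wording, the JSON shape of the verification data, and the sum-based re-derivation are all assumptions chosen for illustration.

import json

def build_prompt(query_text, input_data):
    # Ask the LLM for the answer plus machine-checkable verification data.
    return (
        f"Question: {query_text}\n"
        f"Data: {json.dumps(input_data)}\n"
        "Answer the question, and also return JSON verification data of the "
        'form {"result": ..., "used_rows": [...]} showing which data rows '
        "the result was derived from."
    )

def verify(verification_data, input_data):
    """Re-derive the result from the cited rows and compare (here: a sum)."""
    cited = [input_data[i] for i in verification_data["used_rows"]]
    return sum(cited) == verification_data["result"]

data = [10, 20, 30]
llm_response = {"result": 30, "used_rows": [0, 1]}  # stand-in for model output
print(build_prompt("What do the first two values sum to?", data))
print(verify(llm_response, data))  # True: the result was validly derived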
Methods for enabling micro-bump architectures without the use of sacrificial pads for probing a wafer are described. A method includes forming: (1) a first bump in accordance with a specified first diameter, and (2) a first set of bumps in accordance with a specified second diameter, smaller than the specified first diameter. The first bump is used for probing a portion of the wafer associated with the first set of bumps. Both the first bump and the first set of bumps are then removed. The method includes forming: (1) a second set of bumps, in place of the first bump, where each of the second set of bumps is formed in accordance with the specified second diameter, and (2) a third set of bumps, in place of the first set of bumps, where each of the third set of bumps is formed in accordance with the specified second diameter.
A computerized method generates a structure configuration from a codebase and generates application code using the structure configuration. Code examples of the codebase are obtained, and a structure configuration is generated using the obtained code examples and a standard configuration dataset. The structure configuration includes code components specific to the codebase. Labels that are indicative of component attributes are assigned to the code components of the structure configuration. A code generation request is received, and the request is converted into a plurality of feature prompts. At least one code component is mapped to each feature prompt based on semantic similarity of the feature prompt to a label of the mapped code component. Application code is generated using the mapped code components. The generated application code is deployed for execution. This method enables the automatic generation of code that conforms to common patterns and accurately exhibits the features requested.
A system for development of an Artificial Intelligence (AI) model while protecting sensitive user information includes: a confidential computing environment in which original prompts to the AI model written by users are collected; a trained synthetic prompt generator to generate synthetic prompts based on the original prompts, wherein the synthetic prompt generator generates anonymized synthetic prompts without sensitive user information identifiable from the original prompts; and a developer computing environment in which the synthetic prompts are submitted to the AI model under development to generate a dataset that includes the synthetic prompts and corresponding AI model output for analysis to determine updates for the AI model while protecting the sensitive user information of actual users.
A query for a subscriber is received, and in response to determining that configuration data associated with the subscriber is on a failed node of a clustered AS, a method comprises: determining a healthy node; requesting the healthy node to instantiate a configuration block associated with the subscriber and, in response to the block being a child block, a first parent block upon which the child block is dependent; receiving a response that an identifier associated with the subscriber has been registered on the healthy node; responding to the query with the location of the healthy node; and, after receiving the response, adding at least one configuration block on the failed AS node, excluding any configuration block that was requested to be instantiated, to a queue for requesting the healthy node to instantiate each configuration block of the queue.
H04L 67/1034 - Reaction to server failures by a load balancer
H04L 67/00 - Network arrangements or protocols for supporting network services or applications
H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
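A simplified sketch of the failover flow, with sets and dictionaries standing in for nodes and configuration blocks; real dependency handling and queue semantics would be more involved.

from collections import deque

def fail_over(subscriber, failed_blocks, parents, healthy_node):
    """failed_blocks maps subscriber -> block id; parents maps child -> parent."""
    block = failed_blocks[subscriber]
    # Instantiate the queried block first (and its parent, if it is a child).
    if block in parents:
        healthy_node.add(parents[block])
    healthy_node.add(block)
    # The query is answered with the healthy node's location at this point;
    # the failed node's remaining blocks are queued and rebuilt afterwards.
    queue = deque(b for b in failed_blocks.values() if b not in healthy_node)
    while queue:
        healthy_node.add(queue.popleft())
    return healthy_node

failed = {"alice": "cfg-alice", "bob": "cfg-bob"}
parents = {"cfg-alice": "cfg-tenant1"}
print(sorted(fail_over("alice", failed, parents, set())))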
76.
INTEGRATED HARDWARE ARCHITECTURE AND DISTRIBUTION STRATEGY OPTIMIZATION FOR DEEP LEARNING MODELS
A training optimization system implements algorithmic solutions to solve the conjoined problem of accelerator architecture search and model partitioning for distributed training. The system makes the multi-dimensional optimization space of architecture search and device placement tractable by reducing the number of accelerator architectures explored through area-based heuristics and employing a novel integer linear program (ILP), the size of which is dependent only on the number of operators. The ILP scheduling optimization also explores the partitioning of operators across cores, known as intra-operator parallelism. Despite the vast space, the ILP described herein requires significantly less time to perform the optimizations across all explored accelerator configurations. Based on the optimal backward and forward pass latencies, the system leverages a novel dynamic programming (DP) approach to determine the device placement and model partitioning scheme.
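As a toy illustration of the dynamic-programming placement step, the sketch below splits a chain of operator latencies into contiguous stages so that the slowest stage is as fast as possible; the real system couples this with ILP-derived per-operator latencies, and the numbers here are invented.

import functools

def best_partition(latencies, k):
    @functools.cache
    def dp(i, stages):
        if stages == 1:
            return sum(latencies[i:])
        # Choose where the current stage ends; minimize the bottleneck stage.
        return min(max(sum(latencies[i:j]), dp(j, stages - 1))
                   for j in range(i + 1, len(latencies) - stages + 2))
    return dp(0, k)

ops = [4, 2, 7, 3, 5, 1]          # per-operator forward+backward latencies (toy)
print(best_partition(ops, 3))     # bottleneck latency of the best 3-way split: 9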
Methods for enabling micro-bump architectures without the use of sacrificial pads for probing a wafer are described. A method includes forming: (1) a first bump in accordance with a specified first diameter, and (2) a first set of bumps in accordance with a specified second diameter, smaller than the specified first diameter. The first bump is used for probing a portion of the wafer associated with the first set of bumps. Both the first bump and the first set of bumps are then removed. The method includes forming: (1) a second set of bumps, in place of the first bump, where each of the second set of bumps is formed in accordance with the specified second diameter, and (2) a third set of bumps, in place of the first set of bumps, where each of the third set of bumps is formed in accordance with the specified second diameter.
A data processing system implements receiving a first input in a spreadsheet in a spreadsheet application, detecting an indication that the first input includes first executable program code, analyzing the first executable program code to identify first references to one or more first elements of the spreadsheet in the first executable program code, requesting spreadsheet data associated with the one or more first elements of the spreadsheet from the spreadsheet application, receiving the spreadsheet data from the spreadsheet application, executing the first executable program code using the spreadsheet data referenced in the first executable program code to obtain a first program code result, and causing the spreadsheet application to display the first program code result in the spreadsheet application.
G06F 9/448 - Execution paradigms, e.g. implementations of programming paradigms
G06F 7/544 - Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using unspecified devices for evaluating functions by calculation
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
G06F 40/18 - Editing, e.g. inserting or deleting of tables; Editing, e.g. inserting or deleting using ruled lines of spreadsheets
A cold plate assembly for cooling a computing device includes a first cooling structure and a second cooling structure. The first cooling structure is configured to provide cooling from a flow of first coolant to a first temperature section of the computing device. The second cooling structure is connected to the first cooling structure and is configured to provide cooling from a flow of second coolant to a second temperature section of the computing device. The second temperature section has a lower temperature threshold than the first temperature section. The cold plate assembly includes a thermal barrier between the first cooling structure and the second cooling structure.
Systems and methods may be used for access control. These systems and methods may include using a data processing system to access a video stream, the video stream including an image including a virtual background, segmenting the image into a foreground portion and a background portion to determine whether the foreground portion or the background portion of the image meets a threshold requirement, and outputting an alert in response to determining that the foreground portion or the background portion of the image fails to meet the threshold requirement.
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
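A sketch of the compliance check on a single frame, assuming an exact-match test against the approved virtual background and a 0.95 threshold (both invented); segmentation itself is taken as given via a foreground mask.

import numpy as np

def check_frame(frame, approved_background, fg_mask, threshold=0.95):
    """fg_mask is True where the segmenter labeled the pixel as foreground."""
    bg_pixels = frame[~fg_mask]
    approved = approved_background[~fg_mask]
    match_ratio = np.mean(np.all(bg_pixels == approved, axis=-1))
    return match_ratio >= threshold

h, w = 4, 4
approved = np.zeros((h, w, 3), dtype=np.uint8)
frame = approved.copy()
frame[0, 0] = [255, 0, 0]                      # leaked pixel in the background
fg_mask = np.zeros((h, w), dtype=bool)
if not check_frame(frame, approved, fg_mask):
    print("ALERT: background fails threshold requirement")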
The present disclosure relates to systems, methods, and computer-readable media for utilizing a concept graphing system to determine and provide relationships between concepts within document collections or corpora. For example, the concept graphing system can generate and utilize machine-learning models, such as a sparse graph recovery machine-learning model, to identify less-obvious correlations between concepts, including positive and negative concept connections, as well as provide these connections within a visual concept graph. Additionally, the concept graphing system can provide a visual concept graph that determines and displays concept correlations based on the input of a single concept, multiple concepts, or no concepts.
A computer-implemented method comprising: receiving a 3D image including an object depicted in the image, the 3D image comprising an ordered set of 2D images; determining a contour around the object in a first of said 2D images; and determining a contour around the object in a second of said 2D images, the second 2D image being non-contiguous with the first in said ordered set, such that an intermediate region comprising one or more intermediate ones of said 2D images lies between the first and second 2D images within said ordered set. In each of the first and second 2D images, the inside of the contour is classified as foreground and the outside of the contour is classified as background. The method further comprises performing a 3D geodesic distance computation to classify points in the intermediate region as foreground or background.
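The classification of the intermediate region could be sketched as follows, using unweighted BFS steps through the volume as the geodesic distance; a real implementation would weight steps by image intensity. Seed positions are invented.

from collections import deque
import numpy as np

def geodesic_distance(volume_shape, seeds):
    dist = np.full(volume_shape, np.inf)
    q = deque(seeds)
    for s in seeds:
        dist[s] = 0
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in [(1,0,0),(-1,0,0),(0,1,0),(0,-1,0),(0,0,1),(0,0,-1)]:
            n = (z+dz, y+dy, x+dx)
            if all(0 <= n[i] < volume_shape[i] for i in range(3)) and dist[n] == np.inf:
                dist[n] = dist[z, y, x] + 1
                q.append(n)
    return dist

shape = (3, 5, 5)                   # 3 slices of 5x5 pixels
fg_seeds = [(0, 2, 2), (2, 2, 2)]   # inside the contours on slices 0 and 2
bg_seeds = [(0, 0, 0), (2, 4, 4)]   # outside the contours
fg = geodesic_distance(shape, fg_seeds) < geodesic_distance(shape, bg_seeds)
print(fg[1, 2, 2], fg[1, 0, 0])     # middle-slice voxels: True False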
This disclosure details a base station and client devices that use dynamic spectrum access to select channels dynamically for efficient communication within a frequency spectrum. This includes identifying active uplink and downlink channels from an available list and allocating them to multiple client devices based on their locations, with some devices sharing common active channels. A downlink channel is designated as a beaconing channel and is used for beaconing with embedded information, including the coordinates of a region among a plurality of regions, available channels for the region, and a buffer slot in the channels, during a beaconing period occurring outside regular transmission times. Acknowledgments with medium access control (MAC) commands for an identified subset of client devices sharing an active channel are grouped and transmitted, with each message in the plurality of messages on the uplink channels followed by a downlink acknowledgment.
A bidirectional mapping is established between network content and application programs, based on declarations at both the network content and at the application. Additionally, bidirectional mapping can provide for deep links, which can associate specific network content with a specific presentation of data in an application program. The identification format for such deep links can conform to a predetermined standard or it can be custom implemented according to a format declared either as part of the network content or the application program. The bidirectional mapping is then utilized by a lookup service to provide functionality to a third-party entity. The lookup service can identify, to the entity, application programs associated with network content specified by that entity and network content associated with application programs specified by that entity.
A retrieval-augmented neural transformer model with chunk cross-attention predicts a code review given a proposed source code change, represented as a code diff hunk, and a set of historical code review comments. The code diff hunk represents proposed edits to a source code snippet with its surrounding context that has not been changed. The historical code review comments are associated with code edits that are semantically similar to the proposed source code changes. The code diff hunk is partitioned into chunks which are used to find semantically similar historical code review comments. The set of historical code review comments is aggregated and used to guide the model in making its predictions.
Techniques herein balance the need for flexibility with the need for accuracy, using a reactive approach to viral spam detection. After content (e.g., a social media platform news feed or timeline post) is created, interaction activity (e.g., content views) with the content is monitored. Based on the monitoring of the interaction activity, it is determined whether a reactive viral spam analysis condition is satisfied for the content (e.g., because the number of content views exceeds a threshold). In response to determining that the reactive viral spam analysis condition is satisfied, a determination is made whether the content is or is not viral spam. If the content is determined to be viral spam, then it may be reported or flagged for further action (e.g., takedown after manual confirmation).
G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
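A minimal sketch of the reactive trigger: the (expensive) spam classifier runs only once a post's view count crosses a virality threshold, and at most once per post. The threshold value and the classifier are stand-ins.

VIEW_THRESHOLD = 10_000  # assumed reactive analysis condition

class Post:
    def __init__(self, post_id, text):
        self.post_id, self.text, self.views, self.checked = post_id, text, 0, False

def on_view(post, classify_spam, flag_for_review):
    post.views += 1
    if not post.checked and post.views >= VIEW_THRESHOLD:
        post.checked = True               # analyze each post at most once
        if classify_spam(post.text):
            flag_for_review(post.post_id)

post = Post("p1", "WIN A FREE PRIZE click here click here")
post.views = VIEW_THRESHOLD - 1
on_view(post, classify_spam=lambda t: "click here" in t.lower(),
        flag_for_review=lambda pid: print(f"flagged {pid} for takedown review"))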
88.
INTEGRATING EXTERNAL PROGRAM CODE WITH SPREADSHEET APPLICATION
A data processing system implements receiving a first input in a spreadsheet in a spreadsheet application, detecting an indication that the first input includes first executable program code, analyzing the first executable program code to identify first references to one or more first elements of the spreadsheet in the first executable program code, requesting spreadsheet data associated with the one or more first elements of the spreadsheet from the spreadsheet application, receiving the spreadsheet data from the spreadsheet application, executing the first executable program code using the spreadsheet data referenced in the first executable program code to obtain a first program code result, and causing the spreadsheet application to display the first program code result in the spreadsheet application.
A user query for information regarding data of a codebase is answered by a large language model given a prompt that includes examples of code segments from the codebase that are similar to the user query. The code segments from the codebase are associated with metadata that includes both natural language text and source code. The search for the examples of code segments from the codebase is based on embeddings of code segments and associated metadata that are closely similar to an embedding of the user query and context.
A computerized method generates a structure configuration from a codebase and generates application code using the structure configuration. Code examples of the codebase are obtained, and a structure configuration is generated using the obtained code examples and a standard configuration dataset. The structure configuration includes code components specific to the codebase. Labels that are indicative of component attributes are assigned to the code components of the structure configuration. A code generation request is received, and the request is converted into a plurality of feature prompts. At least one code component is mapped to each feature prompt based on semantic similarity of the feature prompt to a label of the mapped code component. Application code is generated using the mapped code components. The generated application code is deployed for execution. This method enables the automatic generation of code that conforms to common patterns and accurately exhibits the features requested.
The description relates to safely and accurately testing high-power computer rack power supplies. One example can include a computer rack that includes multiple computers and a high-power computer rack supply (HPCRS) lead terminating in a connector. The HPCRS lead includes multiple conductors and is configured to couple to the computer rack to power the multiple computers. This example can include a full-spectrum computer rack power supply testing (FSCRPST) device configured to test the power from the HPCRS lead.
Systems and methods for optimizing thread allocation in a model serving system include estimating a batch size for inference requests. An optimal configuration is then determined that defines a number of inference instances, a number of threads per inference instance, and a sub-batch size per inference instance for processing a batch of inference requests of the batch size using intra-operator parallelism that minimizes average per-batch latency. The optimal configuration is determined with reference to a plurality of predetermined model profiles that define single-inference average batch latencies for different combinations of thread counts and batch sizes, the predetermined model profiles being used as input to a dynamic programming algorithm that identifies optimal configurations that minimize the average per-batch latency.
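The configuration search might be sketched as follows, enumerating ways to split the hardware threads and the batch across inference instances against a profile table; the profile latencies are made up.

# profile[(threads, batch)] -> measured single-inference latency in ms
profile = {(1, 8): 80, (2, 8): 46, (4, 8): 30, (1, 16): 150, (2, 16): 85, (4, 16): 52}

def best_config(total_threads, batch_size):
    best = None
    for instances in range(1, total_threads + 1):
        if total_threads % instances or batch_size % instances:
            continue
        threads_per, sub_batch = total_threads // instances, batch_size // instances
        if (threads_per, sub_batch) not in profile:
            continue
        latency = profile[(threads_per, sub_batch)]   # instances run in parallel
        if best is None or latency < best[0]:
            best = (latency, instances, threads_per, sub_batch)
    return best

print(best_config(total_threads=4, batch_size=16))  # (latency, instances, threads, sub-batch)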
A device may include a first rigid panel having a contact surface and a cosmetic surface. A device may include a frictious material positioned on the contact surface of the first rigid panel. A device may include a second rigid panel having a contact surface and a cosmetic surface, the second rigid panel movably connected to the first rigid panel by a hinged connector. A device may include a selective connector, including: a first connection interface coupled to the first rigid panel, and a second connection interface coupled to the second rigid panel.
A cold plate assembly for cooling a computing device includes a first cooling structure and a second cooling structure. The first cooling structure is configured to provide cooling from a flow of first coolant to a first temperature section of the computing device. The second cooling structure is connected to the first cooling structure and is configured to provide cooling from a flow of second coolant to a second temperature section of the computing device. The second temperature section has a lower temperature threshold than the first temperature section. The cold plate assembly includes a thermal barrier between the first cooling structure and the second cooling structure.
H01L 23/473 - Arrangements for cooling, heating, ventilating or temperature compensation involving the transfer of heat by flowing fluids by flowing liquids
H05K 7/20 - Modifications to facilitate cooling, ventilating, or heating
96.
MIXED REALITY ENVIRONMENT DISPLAY USING SURFACE RECONSTRUCTION MESH AND LIVE VIDEO OVERLAY
The disclosure herein describes enabling a user of a remote mixed reality (MR) device to observe an environment of a local MR device combined with 3D surface reconstruction (SR) mesh data and live video data. Optical data of a surface of an environment is obtained and a 3D surface reconstruction mesh of the surface is generated from the obtained optical data using photogrammetry. The generated 3D surface reconstruction mesh is provided for display by a remote device. A live video feed of a window region of the environment is obtained and the live video feed of the window region is provided for display on the generated 3D surface reconstruction mesh by the remote device. Further, a remote user is enabled to provide feedback to a user of the local MR device, including audio feedback such as speech and virtual artifacts that are displayed to the local user.
Generating and associating decentralized identifiers (DIDs) for a group of one or more related devices. First, a device group DID is generated. The device group DID is associated with a group of one or more related devices. For each of the group of one or more related devices, a device DID is generated, and associated with the corresponding device. A scope of permission is granted to the device group DID. In response to granting the scope of permission to the device group DID, each device DID is granted a subset of the scope of permission.
An information retrieval technique uses one or more machine-trained models to generate one or more metadata embeddings. The technique then combines a query embedding with the metadata embedding(s). In some cases, the technique performs this operation using a graph convolution operation. This yields an augmented embedding. The technique then uses the augmented embedding to retrieve at least one item. The augmented embedding lies in the same vector space as target-item embeddings associated with candidate target items. Otherwise, the vector spaces associated with the query embedding and metadata embedding(s) can be different. In some implementations, the technique uses dense retrieval, which enables the technique to deliver output results in real time.
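As a toy illustration of the augmentation step, the sketch below mixes the query embedding with the mean of its metadata embeddings (a minimal stand-in for the graph convolution) and then retrieves by similarity; all vectors and the 0.5 mixing weight are invented.

import numpy as np

def augment(query_emb, metadata_embs, self_weight=0.5):
    """One graph-convolution-like hop: weighted mean of query and neighbors."""
    neighbors = np.mean(metadata_embs, axis=0)
    mixed = self_weight * query_emb + (1 - self_weight) * neighbors
    return mixed / np.linalg.norm(mixed)

query = np.array([1.0, 0.0, 0.0])
metadata = np.array([[0.8, 0.6, 0.0], [0.9, 0.0, 0.4]])
augmented = augment(query, metadata)

# Retrieval happens in the target-item embedding space.
targets = {"itemA": np.array([1.0, 0.2, 0.1]), "itemB": np.array([0.0, 1.0, 0.0])}
best = max(targets, key=lambda k: augmented @ (targets[k] / np.linalg.norm(targets[k])))
print(best)  # itemA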
A device may include a first rigid panel having a contact surface and a cosmetic surface. A device may include a frictious material positioned on the contact surface of the first rigid panel. A device may include a second rigid panel having a contact surface and a cosmetic surface, the second rigid panel movably connected to the first rigid panel by a hinged connector. A device may include a selective connector, including: a first connection interface coupled to the first rigid panel, and a second connection interface coupled to the second rigid panel.
Montgomery multiplier architectures are provided. A circuit can include an initial processing element (PE) circuit configured to generate a first output including (i) a radix of a carry out and (ii) a radix of an intermediate result based on radixes of respective operands, a radix of an inverse of a modulus, and a radix of the modulus, middle PE circuits configured to generate a second output including (i) respective radixes of a Montgomery multiplication result and (ii) further respective radixes of a carry out on two consecutive clock cycles based on the first output, and a final PE circuit configured to generate further radixes of the Montgomery multiplication results on two consecutive, subsequent clock cycles based on the second output.
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
G06F 7/72 - Methods or arrangements for performing computations using a digital non-denominational number representation, i.e. number representation without radix; Computing devices using combinations of denominational and non-denominational quantity representations using residue arithmetic
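To illustrate the arithmetic the pipelined hardware above computes, here is a single-shot software Montgomery multiplication (REDC); the hardware splits operands into radix-sized digits across PE stages, while this sketch keeps the math in full-width integers.

def montgomery_multiply(a, b, modulus, r_bits):
    """Return a*b*R^-1 mod modulus, where R = 2**r_bits and modulus is odd."""
    R = 1 << r_bits
    n_prime = (-pow(modulus, -1, R)) % R   # -modulus^-1 mod R (precomputable)
    t = a * b
    m = (t * n_prime) & (R - 1)            # reduce mod R (R is a power of two)
    u = (t + m * modulus) >> r_bits        # exact division by R
    return u - modulus if u >= modulus else u

N, r_bits = 97, 8                          # R = 256, coprime to the odd modulus
R = 1 << r_bits
to_mont = lambda x: (x * R) % N            # convert into Montgomery form
a_m, b_m = to_mont(5), to_mont(7)
product_m = montgomery_multiply(a_m, b_m, N, r_bits)
print(montgomery_multiply(product_m, 1, N, r_bits))  # 35 == (5 * 7) % 97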