A data processing system implements receiving an image and a natural language prompt input by a user requesting that an application generate a digital picture frame for the image; analyzing the prompt using a key-phrase extraction unit to extract one or more key phrases from the prompt that describe a topic of the frame to be generated for the image; providing the one or more key phrases as an input to a retrieval engine; analyzing the one or more key phrases with the retrieval engine to identify a set of candidate frame images from among a plurality of frame images in a labeled frame images datastore; analyzing the set of candidate frame images using an image placement unit to obtain a set of framed images based on the image and the candidate frame images; and presenting the set of framed images on a user interface of the application.
A computing system is configured to detect a request for a deployment of a container at a container orchestration service. One or more datasets associated with the deployment of the container are collected, and a plurality of features associated with the deployment are extracted based on the one or more datasets. A probability score is then generated based on the plurality of features, using a machine-learning model trained on datasets associated with historical deployments of containers that have been performed via the container orchestration service. The probability score indicates a probability that the deployment of the container is anomalous compared to the historical deployments of containers. When the probability score is greater than a threshold, the deployment of the container is determined as anomalous.
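The score-then-threshold flow above can be sketched in a few lines; the feature names, the distance-based stand-in for the trained machine-learning model, and the threshold value are all illustrative assumptions, not the disclosed implementation.

```python
# Hypothetical sketch of scoring a container deployment against a
# baseline built from historical deployments.

def extract_features(deployment: dict) -> list[float]:
    """Turn a deployment record into a numeric feature vector."""
    return [
        float(deployment.get("image_pulls", 0)),
        float(deployment.get("privileged", False)),
        float(deployment.get("replicas", 1)),
    ]

def anomaly_probability(features: list[float], baseline: list[float]) -> float:
    """Toy stand-in for the trained model: distance from the mean feature
    vector of historical deployments, squashed into [0, 1]."""
    dist = sum(abs(f - b) for f, b in zip(features, baseline))
    return dist / (1.0 + dist)

def is_anomalous(deployment: dict, baseline: list[float], threshold: float = 0.5) -> bool:
    score = anomaly_probability(extract_features(deployment), baseline)
    return score > threshold
```

A deployment that matches the historical baseline scores near zero, while a privileged deployment with an unusual pull count scores near one and crosses the threshold.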
A system and method for vector embedding of service incident data is described. In one aspect, a computer-implemented method comprises: receiving service incident data that includes free-form text data, structured metadata, and human-generated comments; constructing a graph representation of a service incident, the graph including nodes representing the free-form text data, structured metadata, and human-generated comments of the service incident, and edges connecting related nodes; generating vector embeddings for the nodes and edges of the graph representation; applying dimensionality reduction to the vector embeddings to generate reduced embeddings; and storing the reduced embeddings and the vector embeddings in a database.
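The graph-then-embed-then-reduce pipeline can be sketched as below; the hashing embedder and truncation-based "reduction" are illustrative placeholders for real embedding and dimensionality-reduction models, and the field names are assumptions.

```python
# Minimal sketch: build a node/edge graph from incident fields, embed
# each node, and produce reduced embeddings alongside the full ones.
import hashlib

def embed(text: str, dim: int = 8) -> list[float]:
    """Deterministic toy embedding: bytes of a SHA-256 digest, scaled."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]

def build_graph(incident: dict) -> dict:
    """Nodes for free-form text, metadata, and comments; edges link each
    field back to the incident's root text node."""
    nodes = {"text": incident["text"], **incident["metadata"]}
    for i, comment in enumerate(incident["comments"]):
        nodes[f"comment_{i}"] = comment
    edges = [("text", key) for key in nodes if key != "text"]
    return {"nodes": nodes, "edges": edges}

def embed_graph(graph: dict, reduced_dim: int = 2) -> dict:
    full = {key: embed(value) for key, value in graph["nodes"].items()}
    reduced = {key: vec[:reduced_dim] for key, vec in full.items()}  # stand-in for PCA/UMAP
    return {"full": full, "reduced": reduced}
```

Both the full and reduced embeddings are returned, mirroring the claim that both are stored in the database.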
This description relates to removing CO2 from the air. One example includes a duct extending from an external environment to an internal environment and a fan configured to move air through the duct. The example also includes first and second CO2 removal assemblies configured to alternately transition between a CO2 adsorption mode and a CO2 desorption mode so that one of the first and second CO2 removal assemblies is in the adsorption mode, receiving at least some of the air moving through the duct, while the other of the first and second CO2 removal assemblies is in the desorption mode, receiving no air from the duct while CO2 is removed from it.
B01D 53/04 - Separation of gases or vapours; Recovering vapours of volatile solvents from gases; Chemical or biological purification of waste gases, e.g. engine exhaust gases, smoke, fumes, flue gases or aerosols, by adsorption, e.g. preparative gas chromatography with stationary adsorbents
5.
ADAPTIVE QUERY ROUTING FOR NATURAL LANGUAGE GENERATORS BASED ON QUERY DIFFICULTY
Natural language generators (NLGs), including large language models, are powerful technologies that are in widespread use. However, typically, as NLGs become more powerful and sophisticated, their correspondingly increased complexity requires substantial processing resources. The present disclosure provides automated techniques for dynamically routing queries between at least two NLGs based on an assessment of query difficulty. Less difficult queries can be routed to a less resource intensive NLG, while more difficult queries are routed to a more sophisticated, but more resource intensive NLG. Routing less difficult queries to a less resource intensive model can thus conserve computing resources, while providing little to no drop in response quality, and in some cases providing improved response quality.
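The routing decision described above can be sketched as follows; the heuristic difficulty score, the threshold, and the stand-in backend functions are illustrative assumptions rather than the disclosed assessment technique.

```python
# Hypothetical sketch: score query difficulty, then route the query to
# a cheap or an expensive NLG backend accordingly.

def difficulty(query: str) -> float:
    """Toy difficulty score: longer, more clause-heavy queries score higher."""
    words = query.split()
    clauses = query.count(",") + query.count("?")
    return len(words) / 20.0 + clauses * 0.2

def route(query: str, small_nlg, large_nlg, threshold: float = 1.0) -> str:
    """Send easy queries to the less resource intensive model, hard ones
    to the more sophisticated but more expensive one."""
    backend = small_nlg if difficulty(query) < threshold else large_nlg
    return backend(query)
```

In practice the difficulty assessment could itself be a learned classifier; any callable backends can be plugged in for `small_nlg` and `large_nlg`.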
Methods and systems for estimating localization lengths in hybrid superconductor-semiconductor quantum devices are described. A method for estimating localization lengths in a hybrid superconductor-semiconductor quantum device includes constructing a statistical model for extracting localization lengths based on an implicit description of nonlocal conductance measurements associated with a physical representation of the hybrid superconductor-semiconductor quantum device. The method further includes, using a processor, estimating the localization lengths in the hybrid superconductor-semiconductor quantum device by a joint prior distribution enforcing smoothness over a function of gate voltages and extracted localization lengths for the hybrid superconductor-semiconductor quantum device.
A seamless and secure cloud to PC pointer relay allows a pointer/cursor to be moved between secure and unsecure windows while being displayed with smooth transitions and while transitioning between secure and unsecure data handling for pointer information. A secure input unit encrypts pointing device operations in the secure window. A user (host) computing device performs location calculations on encrypted data, which conceals pointing device operations in the secure window from the host operating system. The secure unit decrypts the encrypted data returned by the host operating system to determine the calculated pointer location information. The secure unit relays the calculated pointer operation information to the source of the secure window (e.g., remote cloud server) to process user interaction with the secure window while keeping the host operating system unaware of user activity in the secure window (e.g., other than position, if the host renders the pointer).
In certain embodiments, a time series-based anomaly detection method is provided, which is able to identify anomalous user accounts highly effectively. An activity predictor is used to model normal behaviors of individual accounts and to assess an extent to which a current behavior associated with an account differs from its past normal behavior. Part of an activity sequence is inputted to the activity predictor, and a resulting activity prediction (the activity predictor's prediction of normal behavior) is compared with the remaining part of the sequence. In preferred embodiments, a multi-stage approach is used, with a more lightweight form of anomaly detection applied in a first stage, and the time series-based detection performed in a second stage only on a subset of activity sequences escalated from the first stage.
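The two-stage flow can be sketched as below; the lightweight first-stage filter, the last-value "activity predictor", and the majority-mismatch rule are all illustrative assumptions standing in for trained models.

```python
# Hedged sketch: a cheap first stage escalates suspicious sequences,
# then a predictor of "normal" next activity is compared with the
# observed tail of the sequence.

def stage_one(sequence: list[str]) -> bool:
    """Lightweight filter: escalate only sequences with sensitive activity types."""
    return any(activity.startswith("admin_") for activity in sequence)

def predict_tail(head: list[str], n: int) -> list[str]:
    """Toy activity predictor: normal behavior repeats the last seen action."""
    return [head[-1]] * n

def is_account_anomalous(sequence: list[str], split: int = 3) -> bool:
    if not stage_one(sequence):          # first stage filters most accounts out
        return False
    head, tail = sequence[:split], sequence[split:]
    predicted = predict_tail(head, len(tail))
    mismatches = sum(p != t for p, t in zip(predicted, tail))
    return mismatches > len(tail) // 2   # mostly unpredicted => anomalous
```

Only sequences escalated by `stage_one` incur the cost of the second-stage comparison, which is the point of the multi-stage design.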
The present disclosure provides methods, systems and computer readable media for training and implementing a generative machine learning model for identifying and mitigating security threats. Certain examples relate to generative model training, in which a first security image is provided as a training image to a generative machine learning (ML) model in a training prompt, with an Indicator of Compromise (IoC) prediction instruction pertaining to the first security image. The model generates a predicted IoC and a parameter of the model is updated based on a loss function that quantifies error between a ground truth IoC and the predicted IoC. Other examples relate to the use of trained generative models for cybersecurity. A mitigation prompt comprising a second security image and an associated mitigation instruction is provided to a trained generative model. The model outputs an indication of a cybersecurity mitigation action based on the mitigation prompt, and the cybersecurity mitigation action is performed on the system. Certain example embodiments identify and automatically mitigate security issues using a multimodal generative model (MGM) through appropriate prompt engineering.
Endpoint security groups include computing device endpoints that are classified according to commonly shared device features and capabilities including device type, function, role, or location. Endpoint security groups are used as an alternative identity mechanism for endpoints for purposes of security and data traffic policy enforcement rather than using conventional IP (Internet Protocol) addressing. Grouping endpoints reduces the scope of network management to enable dynamic policy enforcement for endpoints as they join, leave, and then rejoin computing networks, which is a common behavior, particularly for IoT (Internet-of-Things) devices in manufacturing environments. In an illustrative example, a private multi-access edge compute (MEC) platform supports a scalable policy definition and enforcement framework that provides consistent endpoint handling independent of network access methodology. Endpoint security groups facilitate improvements in security of network access and utilization and segmentation of data traffic on a fine-grained basis.
Aspects of the disclosure include methods for evaluating a predictive model. An exemplary method includes training an evaluation model to output, for an input first entity-second entity pair, a content relevancy prediction. A large language model encoder of the evaluation model generates a first embedding for the first entity and a second embedding for the second entity. The embeddings are fed to an interaction tower to produce a logit and the logit is passed with true labels to a loss function for fine-tuning. The true labels include labeled training data generated by modifying training data having a first proportion of negative labeled data to provide a second proportion of negative labeled data greater than the first proportion. The evaluation model is used to score a performance of a predictive model based at least in part on a comparison of predictions made by the respective models for a same entity pair.
Embodiments of the disclosed technologies are capable of evaluating typeahead suggestions using a partial search query. The embodiments describe obtaining a typeahead suggestion responsive to a partial search query. The embodiments further describe creating a prompt based on the typeahead suggestion. The embodiments further describe causing a large language model (LLM) to evaluate the typeahead suggestion based on the prompt. The embodiments further describe providing, to a computing device, an evaluation output by the LLM in response to the prompt.
A heat exchanger comprising a heatsink and/or coldplate is disposed on a semiconductor having a heat-producing die within. A layer of thermal interface material (TIM) is disposed between the heat exchanger and semiconductor to enhance heat dissipation as the semiconductor is operated. A seal including a gasket or edgebond adhesive is provided around the perimeter edges of the heat exchanger and semiconductor to seal the gap around the periphery of the TIM layer to prevent the TIM from getting pumped out with cyclical thermal loading of the assembly. A capillary tube in the heat exchanger extending from the internal TIM layer to an opening exposed to the surrounding environment provides a reservoir to capture TIM that would otherwise be pumped out. Dimensions of the capillary tube are selected to prevent environmental air from passing by the TIM in the tube and getting entrapped in the TIM layer as voids.
Devices are automatically paired (e.g., without user involvement) for wireless communication based on proximity. A first device may authorize (e.g., wired or wireless) bridge device(s) to participate in (e.g., initiate) pairing first and second devices. The first or bridge devices engage in wireless proximity communication with second device(s), indicating the second device(s) is (are) physically co-located with the first or bridge devices. Co-location is used to initiate automated pairing of the first and second devices. The second device provides a pairing address to the first device (e.g., through the bridge device). The first device provides a temporary security key for a secure channel between the first and second devices (e.g., through the bridge device). A non-temporary security key is provided by the first device to the second device (e.g., through the bridge device) over the secure channel. The first and second devices complete automated wireless pairing using the non-temporary security key.
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
H04W 12/04 - Key management, e.g. using generic bootstrapping architecture [GBA]
A user can select a capacity setting for a transitional partition that determines the allocation between a low-density partition and a high-density partition in the transitional partition. The transitional partition can dynamically change among multiple settings having different capacities for the low-density partition. If the current setting of the transitional partition does not efficiently utilize the available storage space based on the user's preferences for storing different types of data in the low-density partition and the high-density partition, then the user can choose to change the transitional partition to a different setting that better suits the individual user's storage allocation preferences. Therefore, valuable storage space will not be under-utilized but instead will be repurposed for more efficient use by converting a low-density partition to a high-density partition, and vice versa.
A database management system manages a database in which each document is stored as a number of replicas for accessibility and data preservation. The system includes: a processor; a network interface; and a memory comprising programming instructions for execution by the processor to implement a database management service, the service configured to maintain a primary replica of a document, a number of secondary replicas of the document, and another log-only replica storing a log of changes to the document rather than the contents of the document. The service makes head reads to the primary replica as needed when a read request to the number of secondary replicas does not result in a quorum.
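The quorum-then-head-read fallback can be sketched as follows; the dict-based replica objects and the majority quorum rule are illustrative assumptions about how the service might be realized.

```python
# Hypothetical sketch of the read path: assemble a quorum from
# secondary replicas, else fall back to a head read on the primary.

def read_document(primary: dict, secondaries: list[dict], doc_id: str):
    """Return the value agreed on by a majority of secondary replicas,
    else perform a head read against the primary replica."""
    votes: dict = {}
    for replica in secondaries:
        value = replica.get(doc_id)
        votes[value] = votes.get(value, 0) + 1
    quorum = len(secondaries) // 2 + 1
    for value, count in votes.items():
        if value is not None and count >= quorum:
            return value
    return primary[doc_id]  # head read when no secondary quorum forms
```

When the secondaries disagree (no value reaches a majority), the read is served from the primary replica, matching the "as needed" head-read behavior described above.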
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
The disclosure relates to a semiconductor-superconductor hybrid structure, which includes a substrate, a buffer region having a superlattice sub-region over the substrate and a graded lattice sub-region over the superlattice sub-region, an active region over the buffer region, a superconductor over the active region consisting of one or more patterned nanowires, and a cap layer encapsulating the superconductor and top surface portions of the active region not covered by the superconductor. The active region covers an entire top surface of the buffer region, is configured to quantum confine electrons, and has a top barrier layer configured to tune coupling between the superconductor and the active region to a desired value. The superlattice sub-region is configured to prevent impurity diffusion and crystalline defects propagating from the substrate to the active region, while the graded lattice sub-region is configured to provide a lattice constant transition between the substrate and the active region.
Systems and methods for providing content events that are relevant to a first user of a social network are provided. In particular, a computing device may obtain content data associated with one or more content events, obtain user engagement data associated with the first user, determine a relevance score for each of the one or more content events using a relevance predictive model based on the user engagement data and attributes associated with the respective content event, the relevance score of each of the one or more content events representing a likelihood of the first user to engage with the respective content event, rank the content events based on the relevance score for each of the one or more content events, and present a subset of the content events to the first user on a user interface of a device based on the ranking.
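The score-rank-present steps above can be sketched in a few lines; the linear topic-overlap "relevance model" and the field names are illustrative placeholders for the relevance predictive model described in the abstract.

```python
# Hypothetical sketch: score each content event against a user's
# engagement data, then rank and take the top subset.

def relevance_score(engagement: dict, event: dict) -> float:
    """Toy relevance model: topic overlap with the user's engaged topics,
    weighted by the event's recency attribute."""
    overlap = len(set(engagement["topics"]) & set(event["topics"]))
    return overlap * event.get("recency_weight", 1.0)

def top_events(engagement: dict, events: list[dict], k: int = 2) -> list[str]:
    """Rank events by relevance score and return the ids of the top k."""
    ranked = sorted(events, key=lambda e: relevance_score(engagement, e), reverse=True)
    return [e["id"] for e in ranked[:k]]
```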
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
Aspects of the present disclosure relate to multi-user, multi-device gaze tracking. In examples, a system includes at least one processor, and memory storing instructions that, when executed by the at least one processor, cause the system to perform a set of operations. The set of operations include identifying a plurality of computing devices, and identifying one or more users. The set of operations may further include receiving gaze input data and load data, from two or more of the plurality of computing devices. The set of operations may further include performing load balancing between the plurality of computing devices, wherein the load balancing comprises assigning one or more tasks from a first of the plurality of computing devices to a second of the plurality of computing devices based upon the gaze input data.
In example embodiments, specialized machine learning techniques may be utilized to automatically create summaries for viewers based at least partially on viewer intent. Viewer intent refers to the intention of the viewer with respect to performing a particular action in a computer system, namely what the viewer is attempting to accomplish. In some example embodiments, this viewer intent may be expressed in the form of a plurality of different intent categories, each providing, at a high level, what the viewer intends to accomplish. Examples of such categories in a social networking service include “job seeker,” “information gatherer,” “salesperson,” and “recruiter.”
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
H04L 67/1396 - Protocols specially adapted for monitoring users’ activity
A method involves first receiving a set of data on rewards associated with previously chosen content variant choices, selected based on an initial content variant choice model. This initial model is informed by a prior set of data. A second, updated content variant choice model is then determined based on this first set of reward data. When a request for selecting a content variant choice is received, it comes with contextual features. The method involves estimating the expected rewards for a range of content variant choices, considering these contextual features. Subsequently, a specific content variant choice is chosen based on both the updated model and the anticipated rewards. Finally, the chosen content variant is displayed on a device, responding to the initial request.
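The selection step can be sketched as below; the per-variant linear reward model and the feature encoding are illustrative assumptions about the content variant choice model, not the disclosed estimator.

```python
# Hypothetical sketch: estimate the expected reward of each content
# variant given contextual features, then choose the best variant.

def expected_reward(model: dict, variant: str, context: dict) -> float:
    """Dot product of per-variant weights with the context features."""
    weights = model[variant]
    return sum(weights.get(feature, 0.0) * value for feature, value in context.items())

def choose_variant(model: dict, context: dict) -> str:
    """Pick the variant whose estimated reward is highest for this context."""
    return max(model, key=lambda v: expected_reward(model, v, context))
```

Here `model` plays the role of the updated content variant choice model: after each batch of observed rewards, its weights would be re-fit before the next selection.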
A remote monitoring and management (RMM) system is configured to receive a stream of events generated in response to interactions of users from multiple tenants with one or more applications and store the events in a database. A plurality of different insight types is defined for one or more event types for the events. Insights of the different insight types are generated based on the events in the database, the event types of the events, and numbers of events of the event types. The insights are ranked using an artificial intelligence (AI) model trained to generate a predicted success score for each of the insights. A predetermined number of top insights are selected based on the ranking of the insights and aggregated into a feed. The feed is provided to at least one computing device associated with the RMM system.
H04L 41/50 - Network service management, e.g. ensuring proper service fulfilment according to agreements
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
23.
DETECTING AND PREVENTING HARMFUL GENERATIVE IMAGE OUTPUTS USING DIGITAL SIGNATURES
This disclosure describes utilizing an image model protection system to improve the defensive robustness of a large generative image model against the generation of harmful digital images. For example, the image model protection system uses digital signatures of identified harmful images to determine whether a particular harmful image was generated by a specific large generative image model. Using digital signatures, the image model protection system matches the harmful image to images generated by the large generative image model. The image model protection system then identifies the prompt used to generate the image at the large generative image model. Furthermore, the image model protection system uses the harmful prompt to implement new security measures to safeguard the large generative image model against the generation of similar harmful images in the future.
Endpoint security groups include computing device endpoints that are classified according to commonly shared device features and capabilities including device type, function, role, or location. Endpoint security groups are used as an alternative identity mechanism for endpoints for purposes of security and data traffic policy enforcement rather than using conventional IP (Internet Protocol) addressing. Grouping endpoints reduces the scope of network management to enable dynamic policy enforcement for endpoints as they join, leave, and then rejoin computing networks, which is a common behavior, particularly for IoT (Internet-of-Things) devices in manufacturing environments. In an illustrative example, a private multi-access edge compute (MEC) platform supports a scalable policy definition and enforcement framework that provides consistent endpoint handling independent of network access methodology. Endpoint security groups facilitate improvements in security of network access and utilization and segmentation of data traffic on a fine-grained basis.
A method for securely providing a remote desktop session includes receiving, at a user device, an encrypted video stream that includes graphics content of the remote desktop session and that is characterized by a frame rate that is variable. The method further provides for reducing variability in the frame rate of the encrypted video stream by duplicating select encrypted frames of the video stream and inserting the duplicated encrypted frames into the video stream. The method additionally provides for delivering the video stream to a local application configured to generate control signals that cause a graphics processing unit (GPU) of the user machine to render the video stream to a display of the user machine.
H04N 21/254 - Management at additional data server, e.g. shopping server or rights management server
H04N 21/4405 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving video stream decryption
A system is configurable to access a precomputed topology associated with a mesh that comprises a plurality of object components. The precomputed topology defines a plurality of object component groups that each comprise a respective set of object components of the mesh. The system is configurable to determine a traversal likelihood metric associated with the mesh that indicates a likelihood that rays of a ray trace operation will traverse acceleration structure nodes representing object components of the mesh, and use the plurality of object component groups as inputs to construct an acceleration structure. When the traversal likelihood metric satisfies a threshold, leaf nodes of at least one intermediate node of the acceleration structure each comprise a respective object component of an object component group. When the traversal likelihood metric fails to satisfy the threshold, at least one leaf node of the acceleration structure comprises an object component group.
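The threshold-dependent leaf granularity can be sketched as follows; the flat list-of-lists encoding is an illustrative simplification of a real acceleration structure, and the threshold semantics are an assumption.

```python
# Hedged sketch: when traversal likelihood is high, emit one leaf per
# object component; when low, emit one leaf per whole component group.

def build_leaves(groups: list[list[str]], traversal_likelihood: float,
                 threshold: float = 0.5) -> list[list[str]]:
    """Return the leaf payloads of the acceleration structure."""
    if traversal_likelihood >= threshold:
        # fine-grained: each object component becomes its own leaf
        return [[component] for group in groups for component in group]
    # coarse-grained: each precomputed group becomes a single leaf
    return [list(group) for group in groups]
```

The trade-off mirrors the abstract: fine-grained leaves pay off when rays are likely to traverse deep into the mesh, while coarse group leaves keep the structure small otherwise.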
Bidirectional flows of a communication session in a software defined network (SDN) are efficiently managed. A smart switch comprises a digital processing unit (DPU) complex comprising one or more DPUs, and a switching complex comprising one or more network processing units (NPUs). The DPU complex is configured to disaggregate enforcement of policies of the SDN from hosts of the SDN. The switching complex is configured to perform network routing of packets in the SDN. The hosts are implemented on servers communicatively coupled to network interfaces of the SDN. The switching complex is configured to perform policy enforcement of data flows for communication sessions that are offloaded from the DPU complex to the switching complex.
Methods, systems, and computer storage media for providing workload management using a workload management engine in an artificial intelligence (AI) system. In particular, workload management incorporates adaptive strategies that adjust the neural network models employed by a processing unit (e.g., NPU/GPU/TPU) based on the dynamic nature of workloads, workload management factors, and workload management logic. The workload management engine provides the workload management logic to support strategic decision-making for processor optimization. In operation, a plurality of states of workload management factors are identified. A task associated with a workload processing unit is identified. Based on the task and the plurality of states of the workload processing unit, a neural network model from a plurality of neural network models is selected. The plurality of neural network models include a full neural network model and a reduced neural network model. The task is caused to be executed using the selected neural network model.
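The model-selection step can be sketched as below; the state names, the utilization threshold, and the selection rule are illustrative assumptions standing in for the disclosed workload management logic.

```python
# Hypothetical sketch: choose between a full and a reduced neural
# network model based on task properties and processing-unit state.

MODELS = {"full": "full_network", "reduced": "reduced_network"}

def select_model(task: dict, states: dict) -> str:
    """Prefer the full model when the unit is lightly loaded and the
    task is not latency-critical; otherwise use the reduced model."""
    load = states.get("utilization", 0.0)
    if load < 0.7 and not task.get("latency_critical", False):
        return MODELS["full"]
    return MODELS["reduced"]
```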
A heat exchanger comprising a heatsink and/or coldplate is disposed on a semiconductor having a heat-producing die within. A layer of thermal interface material (TIM) is disposed between the heat exchanger and semiconductor to enhance heat dissipation as the semiconductor is operated. A seal including a gasket or edgebond adhesive is provided around the perimeter edges of the heat exchanger and semiconductor to seal the gap around the periphery of the TIM layer to prevent the TIM from getting pumped out with cyclical thermal loading of the assembly. A capillary tube in the heat exchanger extending from the internal TIM layer to an opening exposed to the surrounding environment provides a reservoir to capture TIM that would otherwise be pumped out. Dimensions of the capillary tube are selected to prevent environmental air from passing by the TIM in the tube and getting entrapped in the TIM layer as voids.
H01L 23/10 - ContainersSeals characterised by the material or arrangement of seals between parts, e.g. between cap and base of the container or between leads and walls of the container
H01L 23/42 - Fillings or auxiliary members in containers selected or arranged to facilitate heating or cooling
H01L 23/367 - Cooling facilitated by shape of device
H01L 23/473 - Arrangements for cooling, heating, ventilating or temperature compensation involving the transfer of heat by flowing fluids by flowing liquids
30.
ADAPTIVE QUERY ROUTING FOR NATURAL LANGUAGE GENERATORS BASED ON QUERY DIFFICULTY
Natural language generators (NLGs), including large language models, are powerful technologies that are in widespread use. However, typically, as NLGs become more powerful and sophisticated, their correspondingly increased complexity requires substantial processing resources. The present disclosure provides automated techniques for dynamically routing queries between at least two NLGs based on an assessment of query difficulty. Less difficult queries can be routed to a less resource intensive NLG, while more difficult queries are routed to a more sophisticated, but more resource intensive NLG. Routing less difficult queries to a less resource intensive model can thus conserve computing resources, while providing little to no drop in response quality, and in some cases providing improved response quality.
A computer implemented method comprising: obtaining a simulated input image simulating a second imaging modality based on a source image in a first imaging modality; inputting the simulated input image into a first machine learning model trained based on simulated training images in the second imaging modality, thereby generating a latent representation of the simulated input image; and causing the latent representation to be input into a second machine learning model trained based on empirical training images in the second imaging modality, thereby resulting in the second machine learning model generating a synthesized output image in the second imaging modality.
Methods, systems, and computer storage media for providing workload management using a workload management engine in an artificial intelligence (AI) system. In particular, workload management incorporates adaptive strategies that adjust the neural network models employed by a processing unit (e.g., NPU/GPU/TPU) based on the dynamic nature of workloads, workload management factors, and workload management logic. The workload management engine provides the workload management logic to support strategic decision-making for processor optimization. In operation, a plurality of states of workload management factors are identified. A task associated with a workload processing unit is identified. Based on the task and the plurality of states of the workload processing unit, a neural network model from a plurality of neural network models is selected. The plurality of neural network models include a full neural network model and a reduced neural network model. The task is caused to be executed using the selected neural network model.
Liquid-cooled coldplates are mounted to racks receiving solid state drives (SSDs) in an electronic component rack. The SSDs have heat spreaders with externally exposed surfaces that are thermally coupled to the coldplates using dry-contact interfaces. The SSD heat spreaders and rack-mounted coldplates provide a thermal path from the heat-producing semiconductors inside the SSD to a fluid distribution system in the rack that is operatively coupled to a liquid-cooling system. The SSDs are slideably mounted in the racks to support easy “hot-swapping.” A technician slides an SSD into the rack and uses a finger-operated mechanism in the SSD to simultaneously seat SSD power and data connectors to mating connectors in the rack and place the coldplate in intimate thermal contact with the SSD heat spreader.
The disclosed technology is generally directed to a distributed query-and-command system. In one example of the technology, in a trusted execution environment (TEE) of a first node, query-and-command code of the first node and distributed ledger code of the first node are executed, such that execution of the distributed ledger code of the first node instantiates a first instance of a distributed ledger of a consortium blockchain, and such that execution of the query-and-command code of the first node instantiates a first instance of a query-and-command system. The consortium blockchain is distributed among a plurality of nodes, and the query-and-command system is distributed among the plurality of nodes. A first transaction that is associated with modifying the query-and-command system is received. The first transaction is executed. Changes associated with the first transaction to the distributed ledger are persisted.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
35.
END-TO-END STREAMING SPEECH TRANSLATION WITH NEURAL TRANSDUCER
Systems and methods are provided for obtaining, training, and using an end-to-end AST model based on a neural transducer, the end-to-end AST model comprising at least (i) an acoustic encoder which is configured to receive and encode audio data, (ii) a prediction network which is integrated in a parallel model architecture with the acoustic encoder in the end-to-end AST model, and (iii) a joint layer which is integrated in series with the acoustic encoder and prediction network. The end-to-end AST model is configured to generate a transcription in a second language of input audio data in a first language such that the acoustic encoder learns a plurality of temporal processing paths.
The presently disclosed magnetic locking mechanism(s) for a rectangular computing device are directed at providing a fast yet tamper-resistant, anti-theft solution for assembly and disassembly of a rectangular computing device having a top and a base that come together to form an overall enclosure for the rectangular computing device. The top and base that incorporate one or more of the presently disclosed magnetic locking mechanisms are capable of being quickly and easily attached and detached without damaging the rectangular computing device, so long as a correct magnetic key is used. This aids both repairability and upgradability of the rectangular computing device during its life cycle, as well as recyclability at the end of its life cycle. Without the correct magnetic key, it is difficult to separate the top and base without damaging the rectangular computing device.
Techniques are described herein that are capable of providing time-of-scan protection for a scannable encoded image in an electronic message or an electronic form. An electronic message or electronic form is received. The electronic message or electronic form includes a scannable encoded image. A uniform resource identifier (URI) that is encoded in the scannable encoded image is identified by decoding the scannable encoded image. The URI identifies a target data source. A wrapped URI is generated by wrapping the URI in a wrapper. The wrapped URI identifies a substitute data source. A replacement scannable encoded image is generated by encoding the wrapped URI. A replacement electronic message or replacement electronic form is generated by replacing the scannable encoded image in the electronic message or electronic form with the replacement scannable encoded image. The replacement electronic message is provided, or the replacement electronic form is published.
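The wrap/unwrap round trip described above can be sketched as follows. The substitute-source URL and the `target` query parameter are assumptions for illustration; the abstract does not specify the wrapper format.

```python
from urllib.parse import quote, urlparse, parse_qs

# Illustrative sketch of the URI wrapping step: the decoded URI is
# wrapped so that a scan resolves to a substitute data source, which
# can inspect the original target at time of scan. The endpoint URL
# and parameter name below are assumed.
SUBSTITUTE = "https://scanguard.example.com/check"

def wrap_uri(original_uri: str) -> str:
    """Wrap the decoded URI so scans resolve to the substitute source."""
    return f"{SUBSTITUTE}?target={quote(original_uri, safe='')}"

def unwrap_uri(wrapped_uri: str) -> str:
    """Recover the original target (as the substitute source would)."""
    return parse_qs(urlparse(wrapped_uri).query)["target"][0]
```

The wrapped URI is then re-encoded into a replacement scannable image by whatever encoder the pipeline uses.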
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
A computer-implemented method comprising receiving a plural number of candidate parameter value sets in a specified order, each comprising a respective candidate parameter value for at least one parameter of an optimisation algorithm, wherein the number of candidate parameter value sets is based on a processing budget; for each candidate parameter value set in the sequence: applying the optimisation algorithm, with the at least one parameter set to the respective candidate parameter value, to a plurality of initial states of a model representing a system to generate corresponding candidate updated states, and evaluating each of the candidate updated states according to an optimality metric to generate a corresponding optimality score; selecting, as an estimated optimal state of the model, the candidate updated state having the highest optimality score; and outputting the selected estimated optimal state of the model to a user interface, network interface or other application.
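The budgeted sweep above can be sketched with a toy optimisation algorithm (gradient descent on f(x) = x², whose single tunable parameter is the step size). All names are illustrative, not from the claim.

```python
# Minimal sketch of the budgeted parameter search: apply the optimiser
# with each candidate parameter value to every initial state, score the
# resulting states, and keep the best. The toy objective is assumed.

def optimise(step_size, x0, iters=20):
    """Apply the optimisation algorithm to one initial state."""
    x = x0
    for _ in range(iters):
        x -= step_size * 2 * x       # gradient of x^2 is 2x
    return x

def best_state(candidate_step_sizes, initial_states):
    """Try each candidate parameter value set within the budget and
    return the candidate updated state with the highest optimality."""
    scored = []
    for step in candidate_step_sizes:        # one set per budget slot
        for x0 in initial_states:            # each initial model state
            x = optimise(step, x0)
            scored.append((-x * x, x))       # optimality: closer to 0 is better
    return max(scored)[1]
```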
Techniques, software, and systems for enhanced notification of connector coupling quality between a connector and a user device are included. In one implementation a method includes obtaining indications of magnetic coupling properties of a connector with respect to a device. Based on at least the indications of the magnetic coupling properties, the method includes determining a coupling quality of a connection between the connector and the device. The method also includes providing an indication based at least on the coupling quality of the connection falling below a threshold quality level.
G01R 31/66 - Testing of connections, e.g. of plugs or non-disconnectable joints
G01D 5/14 - Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using electric or magnetic means influencing the magnitude of a current or voltage
A system is configurable to access a precomputed topology associated with a mesh that comprises a plurality of object components. The precomputed topology defines a plurality of object component groups that each comprise a respective set of object components of the mesh. The system is configurable to determine a traversal likelihood metric associated with the mesh that indicates a likelihood that rays of a ray trace operation will traverse acceleration structure nodes representing object components of the mesh, and use the plurality of object component groups as inputs to construct an acceleration structure. When the traversal likelihood metric satisfies a threshold, leaf nodes of at least one intermediate node of the acceleration structure each comprise a respective object component of an object component group. When the traversal likelihood metric fails to satisfy the threshold, at least one leaf node of the acceleration structure comprises an object component group.
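The threshold decision in the final two sentences can be sketched as below; the concrete form of the rule is an assumption.

```python
# Sketch of the leaf-granularity decision: the traversal likelihood
# metric chooses between per-component leaves and whole-group leaves.

def leaf_granularity(traversal_likelihood, threshold=0.5):
    """Likely-traversed meshes get one leaf per object component
    (finer culling); unlikely ones keep a whole group per leaf
    (cheaper build and a shallower acceleration structure)."""
    if traversal_likelihood >= threshold:
        return "per_component"
    return "per_group"
```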
A method for securely providing a remote desktop session includes receiving, at a user device, an encrypted video stream that includes graphics content of the remote desktop session and that is characterized by a frame rate that is variable. The method further provides for reducing variability in the frame rate of the encrypted video stream by duplicating select encrypted frames of the video stream and inserting the duplicated encrypted frames into the video stream. The method additionally provides for delivering the video stream to a local application configured to generate control signals that cause a graphics processing unit (GPU) of the user machine to render the video stream to a display of the user machine.
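The frame-duplication step can be sketched as follows. This is not the claimed implementation: it simply pads a variable-rate sequence of opaque, already-encrypted frames to a constant output rate by repeating the most recent frame whenever no new frame has arrived.

```python
# Illustrative sketch: reduce frame-rate variability by duplicating
# encrypted frames (treated here as opaque strings) onto a fixed clock.

def smooth(frames, timestamps, fps=60, duration=None):
    """frames[i] arrived at timestamps[i] seconds; returns a
    constant-rate frame list covering [0, duration]."""
    if duration is None:
        duration = timestamps[-1]
    out, i = [], 0
    for tick in range(int(duration * fps) + 1):
        now = tick / fps
        while i + 1 < len(frames) and timestamps[i + 1] <= now:
            i += 1
        out.append(frames[i])    # duplicate until a new frame arrives
    return out
```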
H04N 21/2347 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving video stream encryption
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
42.
Expediting Generative Token Production using Speculative Sampling, Added Guidance, and Language Models of Different Capacities
A technique accelerates the generative production of tokens using a target language model that operates in cooperation with a draft language model. The target language model is more capable, but slower, compared to the draft language model. In operation, the draft language model transforms prompt tokens into draft tokens. The target language model edits the draft tokens, e.g., by selecting zero, one, or more of the draft tokens, and by also predicting a next token to follow the draft token(s) (if any) that are selected. Further, the target language model produces guidance vector information. In a subsequent cycle, the draft language model uses the guidance vector information to produce an updated set of draft tokens. The guidance vector information informs the draft language model of the embedding space being used by the target language model. This achieves a more effective cooperative relationship between the two models.
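The edit step can be illustrated with a greedy toy variant. This is an assumption: practical speculative sampling uses probabilistic acceptance, and the guidance-vector feedback is omitted entirely here.

```python
# Toy sketch of the draft/verify cycle: the target keeps the draft
# prefix that matches its own greedy choices, then appends its own next
# token, so each cycle always makes progress.

def verify(draft_tokens, target_next_fn, context):
    """Return the tokens accepted this cycle plus one target token."""
    accepted = []
    for tok in draft_tokens:
        expected = target_next_fn(context + accepted)
        if tok != expected:
            return accepted + [expected]   # reject the rest of the draft
        accepted.append(tok)
    return accepted + [target_next_fn(context + accepted)]
```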
The disclosed techniques provide enhanced eye tracking systems utilizing joint estimation of biological parameters and hardware parameters. A system uses joint estimation of biological parameters, e.g., direction and position of an eye, with concurrent estimation of hardware parameters, e.g., camera position or camera direction, to self-calibrate and provide eye tracking estimations that accommodate deformations and other changes of a device. Sensor data is used to select hardware parameters of a camera for use in the joint estimation with the biological parameters, where the hardware parameters are estimated based on glint and pupil position of a user. The disclosed techniques include a method to model changes of a device, as well as detect and compensate for them while the eye-tracking device is in normal use, without requiring a factory-calibration procedure to be repeated.
A computer-implemented approach for assessing and managing risk of a cloud service is disclosed. Cloud computing resource data for a cloud service is received. A risk assessment framework is applied to the cloud computing resource data. The risk assessment framework includes a set of security criteria including a subset of data plane criteria and a subset of control plane criteria. The risk assessment framework assigns an individual risk score to each security criteria of the set. The individual risk scores of the set of security criteria are aggregated to generate an overall risk score for the cloud service. A graphical user interface including the overall risk score is visually presented via a display. A computer-automated risk management operation that automatically adjusts security settings of the cloud service based at least on the cloud computing resource data for the cloud service is executed to enhance security of the cloud service.
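The aggregation step can be sketched as below. The weighted mean, with control-plane criteria weighted more heavily, is an assumption; the abstract does not specify the aggregation function.

```python
# Hedged sketch: aggregate per-criterion risk scores into an overall
# risk score for the cloud service. Weighting scheme is assumed.

def overall_risk(data_plane: dict, control_plane: dict,
                 control_weight: float = 2.0) -> float:
    """Each dict maps a security criterion to a risk score in [0, 10]."""
    pairs = [(score, 1.0) for score in data_plane.values()]
    pairs += [(score, control_weight) for score in control_plane.values()]
    return sum(s * w for s, w in pairs) / sum(w for _, w in pairs)
```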
A method for network traffic arbitration includes, at a network router, receiving two or more network packets over two or more input ports. During an observation window, traffic parameters for the two or more network packets are stored in a traffic history table, the traffic parameters including a Quality-of-Service (QOS) priority value for a network packet of the two or more network packets. Based at least in part on the traffic parameters recorded in the traffic history table, including the QoS priority value, arbitration weights are calculated for each of the two or more input ports for a weighted round robin arbitration process.
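The weight calculation can be sketched as follows, under an assumed weighting rule: each packet in the observation window contributes a base count plus its QoS priority, and weights are normalised per input port.

```python
from collections import defaultdict

# Sketch of deriving weighted-round-robin arbitration weights from the
# traffic history table; the exact weighting formula is assumed.

def port_weights(history):
    """history: (input_port, qos_priority) per packet in the window;
    returns a dict mapping each port to its arbitration weight."""
    score = defaultdict(float)
    for port, qos in history:
        score[port] += 1.0 + qos        # count traffic, bias by QoS
    total = sum(score.values())
    return {port: s / total for port, s in score.items()}
```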
A data processing system implements receiving, via a first software application on a client device, a call requesting a schedule to be generated for a user by a generative model. The system further implements identifying online and/or offline data source(s) indicating activities specific to the user, the online and/or offline data source(s) including software application(s) within a workspace; constructing a first prompt by a prompt construction unit as an input to the generative model, the prompt construction unit constructing the first prompt by appending the activities and context data to an instruction string, the instruction string comprising instructions to the generative model to schedule the activities based on the context data, and to assign the scheduled activities into the schedule, the context data being associated with the user and/or the activities; providing the schedule to the client device; and causing a user interface of the client device to present the schedule.
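The prompt construction unit's behaviour can be sketched as below; the instruction wording and field layout are assumptions, not taken from the source.

```python
# Illustrative sketch of the prompt construction unit: append the
# activities and context data to an instruction string.

INSTRUCTION = (
    "Schedule the following activities based on the context data, "
    "and assign the scheduled activities into a schedule."
)

def construct_prompt(activities, context):
    """Build the first prompt for the generative model."""
    lines = [INSTRUCTION]
    lines += [f"Activity: {a}" for a in activities]
    lines += [f"Context: {key} = {value}" for key, value in context.items()]
    return "\n".join(lines)
```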
This patent relates to hinged devices, such as computing devices. One example includes a first portion including a first input/output device and a second portion including a second input/output device. A hinge assembly includes a flexible hinge that removably couples the first and second portions and allows relative rotation between the first and second portions. The flexible hinge is biased into the first portion to reduce a percentage of the flexible hinge exposed between the first and second portions at a given rotational or angular orientation of the first and second portions.
Disclosed is a semiconductor-superconductor hybrid structure (10), particularly for topological quantum computing, which includes a substrate (12), a buffer region (14) having a superlattice sub-region (24) over the substrate and a graded lattice sub-region (26) over the superlattice sub-region, an active region (16) over the buffer region, a superconductor (18) consisting of one or more patterned nanowires over the active region, and a cap layer (20) encapsulating the superconductor and top surface portions of the active region not covered by the superconductor. The active region covers an entire top surface of the buffer region, is configured to quantum confine electrons, and has a top barrier layer (34) configured to tune coupling between the superconductor and the active region. The superlattice sub-region is configured to prevent impurity diffusion and crystalline defects propagating from the substrate to the active region, while the graded lattice sub-region is configured to provide a lattice constant transition between the substrate and the active region.
H10D 62/81 - Semiconductor bodies, or regions thereof, of devices having potential barriers characterised by the materials of structures exhibiting quantum-confinement effects, e.g. single quantum wellsSemiconductor bodies, or regions thereof, of devices having potential barriers characterised by the materials of structures having periodic or quasi-periodic potential variation
H10D 48/00 - Individual devices not covered by groups
H01L 21/02 - Manufacture or treatment of semiconductor devices or of parts thereof
49.
TOPOLOGICAL DEVICES WITH AN ASYMMETRIC JUNCTION DESIGN
Topological devices with asymmetric junction(s) are described. An example topological device (100) includes a superconducting wire (112) comprising a first segment (114) and a second segment (116), where the first segment (114) is configurable to be in a trivial phase and the second segment (116) is configurable to be in a topological phase. The topological device further includes an asymmetric junction (182), at an interface of the first segment (114) and the second segment (116). The asymmetric junction (182) is operable to couple a Majorana zero mode, MZM, in the second segment (116) to a quantum dot (172) or a transport lead (153) such that the asymmetric junction (182) increases strength of a coupling between the MZM and the quantum dot (172) or the transport lead (153) while reducing strength of a coupling between any states formed in the first segment (114) of the superconducting wire (112) and the quantum dot (172) or the transport lead (153).
Example implementations include a method, apparatus, and computer-readable medium configured for implementing a workflow using a large language model (LLM). A workflow automation application sends a first prompt to a large language model (LLM) to transform a first input data source in a first format to a second format. The workflow automation application sends a second prompt to the LLM to define multiple steps of a workflow starting on the data source in the second format. The workflow automation application sends a third prompt to the LLM to define execution of business logic for each step of the workflow. The workflow automation application receives, from the LLM, output data indicating that each step of the workflow has been executed.
In certain embodiments, a time series-based anomaly detection method is provided, which is able to identify anomalous user accounts highly effectively. An activity predictor is used to model normal behaviors of individual accounts and to assess an extent to which a current behavior associated with an account differs from its past normal behavior. Part of an activity sequence is inputted to the activity predictor, and a resulting activity prediction (the activity predictor's prediction of normal behavior) is compared with the remaining part of the sequence. In preferred embodiments, a multi-stage approach is used, with a more lightweight form of anomaly detection applied in a first stage, and the time-series based detection performed in a second stage only on a subset of activity sequences escalated from the first stage.
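The two-stage shape of the approach can be sketched as below. The frequency-based "predictor" in the second stage is a stand-in assumption, not the patented model.

```python
# Illustrative two-stage sketch: a cheap first-stage filter escalates
# sequences, and a stub predictor-based second stage scores them.

RARE_EVENTS = {"mass_download", "privilege_grant"}   # assumed triggers

def stage1_escalates(activity_sequence):
    """Lightweight first stage: escalate if any rare event appears."""
    return any(event in RARE_EVENTS for event in activity_sequence)

def stage2_anomaly_score(history, recent):
    """Compare recent activity against 'predicted' normal behaviour,
    modelled here simply as the set of events seen historically."""
    seen = set(history)
    novel = sum(1 for event in recent if event not in seen)
    return novel / len(recent)
```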
A virtual directory is created in a software development tool that lists the files having source code components (e.g., files, functions, methods, types, classes) of a codebase that relate to a user query about the codebase. The files of the codebase are partitioned into chunks with each chunk having a respective embedding. A search for the source code components relevant to the query is performed using an embedding of the query and the chunk embeddings representing the files of codebase. As a file of the virtual directory is edited, the chunk embeddings are updated and the virtual directory is updated with a reference to the revised file.
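The search over chunk embeddings can be sketched as below, with toy bag-of-words vectors standing in for learned embeddings; cosine similarity then ranks the chunks against the query.

```python
import math
from collections import Counter

# Minimal sketch of the chunk search; the embedding function here is a
# stand-in assumption, not the tool's actual embedding model.

def embed(text):
    """Toy embedding: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunks(query, chunks, k=2):
    """Rank chunk texts against the query embedding."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```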
Systems and methods for converting the result of a radio frequency (RF) measurement into the quantum capacitance of a device are described. An example method includes extracting, by performing a radio frequency (RF) measurement, a frequency shift and a resonator loss shift of a resonator relative to a reference trace of the resonator, where the resonator is coupled to a quantum device. The method further includes deriving, from the extracted frequency shift and resonator loss shift and without resonator fitting, both a real part and an imaginary part of a quantum capacitance associated with the quantum device.
Methods for fabricating packages with dummy dies having a construction that mimics warpage of the other components included in the package are described. A method for fabricating a package with a floor plan having sections for placement of components includes arranging a first component in a first section of the floor plan and arranging a second component in a second section of the floor plan, where each of the first component and the second component comprises active circuitry for providing at least one of compute, storage, or communication functionality. The method further includes forming a dummy die having a construction that mimics warpage of at least one of the first component or the second component. The method further includes arranging the dummy die in an unoccupied section of the floor plan for the package.
H01L 25/00 - Assemblies consisting of a plurality of individual semiconductor or other solid-state devices
H01L 25/065 - Assemblies consisting of a plurality of individual semiconductor or other solid-state devices all the devices being of a type provided for in a single subclass of subclasses , , , , or , e.g. assemblies of rectifier diodes the devices not having separate containers the devices being of a type provided for in group
H01L 25/16 - Assemblies consisting of a plurality of individual semiconductor or other solid-state devices the devices being of types provided for in two or more different subclasses of , , , , or , e.g. forming hybrid circuits
57.
PROMPT AUTO-GENERATION FOR AI ASSISTANT BASED ON SCREEN UNDERSTANDING
Large language models (LLMs) and visual-language models (VLMs) are able to provide robust results based on specified formatting and organization. Although LLMs and VLMs are designed to receive natural language input, users often lack the skill, knowledge, or patience to utilize LLMs and VLMs to their full potential. By leveraging screen understanding, AI prompts (or “pills”) may automatically be generated for artificial-intelligence (AI) assistance and query resolution in a VLM/LLM environment. Using an image encoder, a current screenshot is processed into an image embedding and compared to text embeddings representing screenshot activities. By identifying the text embedding having the closest similarity to the image embedding, a screen activity being performed by the user may be determined. Suggested AI prompts (or “pills”) may then be generated in real-time to assist the user in performing the screen activity.
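The matching step can be sketched as below. The activity names, prompt "pills", and vectors are stand-ins for illustration; a real system would use learned image and text encoders.

```python
# Hypothetical sketch of the prompt-suggestion step: the screenshot
# embedding is matched to the nearest activity text embedding, and
# canned prompt "pills" for that activity are surfaced.

ACTIVITY_PILLS = {   # assumed mapping, for illustration only
    "writing_email": ["Summarize this thread", "Draft a polite reply"],
    "editing_code":  ["Explain this function", "Find bugs in this file"],
}

def nearest_activity(image_vec, activity_vecs):
    """Pick the activity whose text embedding best matches the image."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return max(activity_vecs, key=lambda name: dot(image_vec, activity_vecs[name]))

def suggest_pills(image_vec, activity_vecs):
    return ACTIVITY_PILLS[nearest_activity(image_vec, activity_vecs)]
```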
A call model is generated that takes into account location-specific information and target attributes such as throughput per user. A cluster of different machine learning models is utilized to compute dynamic call model characteristics for each location, and the outputs are merged into a dynamic call model. Additionally, techniques such as feature vector extraction, clustering algorithms, and ensemble models are employed to improve the accuracy and predictive performance of the machine learning models.
Techniques are described herein that are capable of converting pages written in page storage during a user session to page-embedded blocks in block storage for reading. Blocks of first data are read from block storage during a user session. Pages of second data are written to page storage during the user session. The pages of the second data indicate changes to be made with regard to at least a subset of the blocks of the first data in the block storage. At a time instance at which no pages are being written to the page storage, the pages of the second data are transferred from the page storage to the block storage by converting the pages of the second data, which are configured to have a page format associated with the page storage, to page-embedded blocks, which are configured to have a block format associated with the block storage.
Systems and methods for resource-efficient retrieval of information using a generative AI model are disclosed. An input query requesting information from a set of documents is used in a prompt for a generative AI model to generate a search query to identify the documents relevant to the input query and their respective relevancy scores. The input query is used as an input to another model to determine a depth score indicating a predicted number of documents needed to retrieve the information. Based on the depth score and the relevancy scores of the relevant documents, the system extracts grounding data from the identified relevant documents to generate an answer synthesis prompt for the generative AI model. The generative AI model processes this second prompt to produce a response to the input query including the requested information.
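The grounding-data selection can be sketched as below, under the assumption that the depth score is treated as a predicted document count and that hits carry the relevancy scores returned with the search query.

```python
# Sketch of selecting grounding documents from scored search hits.

def select_grounding_docs(hits, depth_score):
    """hits: list of (doc_id, relevancy); keep the depth_score most
    relevant documents as grounding data for the synthesis prompt."""
    ranked = sorted(hits, key=lambda h: h[1], reverse=True)
    return [doc_id for doc_id, _ in ranked[:depth_score]]
```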
G06F 16/383 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
A server computing device is provided, including a processor configured to receive a homomorphically encrypted input embedding vector from a client computing device. At a transformer network, the processor may generate a plurality of homomorphically encrypted intermediate output vectors at least in part by performing inferencing on the homomorphically encrypted input embedding vector. The processor may transmit the plurality of homomorphically encrypted intermediate output vectors to the client computing device. The processor may receive a plurality of homomorphically encrypted intermediate input vectors from the client computing device subsequently to transmitting the homomorphically encrypted intermediate output vectors to the client computing device. At the transformer network, the processor may generate a homomorphically encrypted output vector at least in part by performing additional inferencing on the homomorphically encrypted intermediate input vectors. The processor may transmit the homomorphically encrypted output vector to the client computing device.
A fine-grain selectable partially privileged container virtual computing environment provides a vehicle by which processes that are directed to modifying specific aspects of a host computing environment can be delivered to, and executed upon, the host computing environment while simultaneously maintaining the advantageous and desirable protections and isolations between the remaining aspects of the host computing environment and the partially privileged container computing environment. Such partial privilege is provided based upon directly or indirectly delineated actions that are allowed to be undertaken on the host computing environment by processes executing within the partially privileged container virtual computing environment and actions which are not allowed. Aspects of the host computing environment operating system, such as the kernel, are extended to interface with container-centric mechanisms to receive information upon which actions can be allowed or denied by the kernel even if the process attempting such actions would otherwise have sufficient privilege.
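The allow/deny decision at the kernel boundary can be sketched as below; the policy table and action names are assumptions for illustration.

```python
# Minimal sketch of the allow-list idea: the kernel-side check consults
# the delineated actions granted to a partially privileged container.

POLICY = {"ctr-42": {"set_system_time", "load_gpu_driver"}}   # assumed

def action_permitted(container_id, action):
    """Allow only actions explicitly delineated for this container."""
    return action in POLICY.get(container_id, set())
```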
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
63.
EYE AND HAND TRACKING UTILIZING LENSLESS CAMERA AND MACHINE LEARNING
Eye and hand tracking systems in head-mounted display (HMD) devices are arranged with lensless camera systems using optical masks as encoding elements that apply convolutions to optical images of body parts (e.g., eyes or hands) of HMD device users. The convolved body images are scrambled or coded representations that are captured by a sensor in the system, but are not human-recognizable. A machine learning system such as a neural network is configured to extract body features directly from the coded representation without performance of deconvolutions conventionally utilized to reconstruct the original body images in human-recognizable form. The extracted body features are utilized by the respective eye or hand tracking systems to output relevant tracking data for the user's eyes or hands which may be utilized by the HMD device to support various applications and user experiences. The lensless camera and machine learning system are jointly optimizable on an end-to-end basis.
Various embodiments provide a so-called companion experience in which content consumed on a primary screen can serve as a source for an automatic search that returns related content that can be presented on an auxiliary screen. The companion experience can be considered to reside in a layer that can be moved across different screens. The different screens can include different physical screens, such as those associated with different computing devices, or the same physical screen in which the companion experience would be rendered in a frame or sub-window.
This disclosure describes utilizing a generative document system to dynamically build and provide generative search result documents. The generative document system utilizes an aggregated framework that leverages one or more large generative models (LGMs). For example, the aggregated framework includes three stages where local processes are applied to generative outputs from LGMs, with each stage building upon the generative outputs from previous stages. The generative document system uses the aggregated framework to create generative search result documents based on search queries and their corresponding search result links. These generative search result documents provide interactive, intuitive, comprehensive, and flexible curation of answers that address the respective search queries.
A computing device assembly (100) is provided, including a rack (10), and a plurality of compute units (12) that are horizontally oriented and mounted within the rack (10) in one of two vertical stacks (12A, 12B). The computing device assembly (100) further includes a plurality of switches (16) that are vertically oriented and mounted along a front side (24) of the rack (10) laterally between the two vertical stacks (12A, 12B) of compute units (12). The computing device assembly (100) further includes a plurality of horizontal cable backplanes (14) mounted in a vertical stack along a rear side (22) of the rack (10). The computing device assembly (100) further includes a plurality of vertical cable shuffles (20) mounted between the two vertical stacks (12A, 12B) of compute units (12) and between the vertically oriented switches (16) and the vertical stack of horizontal cable backplanes (14).
Disclosed herein is a system for implementing a management controller on a node, or network server, that is dedicated to monitoring the individual health of a plurality of accelerator modules configured on the node. Based on the monitored health, the management controller is configured to implement autonomous power cycle control of individual accelerator modules. The autonomous power cycle control is implemented without violating the requirements of standards established for accelerator modules (e.g., OPEN COMPUTE PROJECT requirements, PERIPHERAL COMPONENT INTERCONNECT EXPRESS (PCIe) interface requirements).
Systems and techniques for facilitating unified multichannel communication are provided. The described systems and techniques improve communication technology through an encompassing, channel-agnostic approach which unifies disparate communication modes into a singular coherent thread. A unified multichannel communication ("UMC") service of a UMC platform can initialize a UMC thread for a UMC session, where the UMC thread can be used to facilitate unified multichannel communication. The UMC session can involve multiple participants, including human users and software agents (e.g., conversational bots, virtual agents, digital assistants, and other dialog interfaces). The UMC platform can facilitate creating and interacting with a digital assistant providing unified multichannel communication.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
H04L 51/216 - Handling conversation history, e.g. grouping of messages in sessions or threads
H04L 51/56 - Unified messaging, e.g. interactions between e-mail, instant messaging or converged IP messaging [CPM]
69.
SYSTEM AND METHOD FOR SPEECH LANGUAGE IDENTIFICATION
A method, computer program product, and computing system for speech language identification. An input speech signal is received in a particular language. The input speech signal is processed by a plurality of speech recognition processing paths, each speech recognition processing path being configured to recognize an associated subset of languages. Each of the speech recognition processing paths processes the input speech signal using machine learning to identify a language that is a closest match to the particular language of the input speech signal, resulting in a plurality of identified languages. The input speech signal and an indication of each of the plurality of identified languages are received in a further speech recognition processing path. The input speech signal is processed, using machine learning, to recognize one of the identified languages as a closest match to the particular language.
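The two-stage structure can be sketched as below, with each path's recognisers stubbed out as scoring functions; all languages, scorers, and scores are illustrative assumptions.

```python
# Toy sketch of hierarchical language identification: each path picks
# a candidate from its own language subset, then a further path
# re-scores only those candidates.

def run_paths(signal, paths):
    """paths: one dict per path mapping language -> scorer; returns the
    best-matching language from each path's associated subset."""
    return [max(path, key=lambda lang: path[lang](signal)) for path in paths]

def final_path(signal, candidates, scorer):
    """Re-score only the per-path candidates and pick the closest match."""
    return max(candidates, key=lambda lang: scorer(lang, signal))
```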
A computing system including one or more processing devices configured to identify one or more severe hook faults in a stabilizer channel. Identifying the severe hook faults includes receiving a circuit channel check matrix, the columns of which indicate values of checks associated with elementary faults of the stabilizer channel. Identifying the severe hook faults further includes receiving a phenomenological channel check matrix as a sub-matrix of the circuit channel check matrix, receiving a logical effect matrix, and receiving a weight vector that indicates probability weights of the elementary faults. Based at least in part on the circuit channel check matrix, the logical effect matrix, the phenomenological channel check matrix, and the weight vector, identifying the severe hook faults further includes computing column indices of columns of the circuit channel check matrix that correspond to the severe hook faults. The processing devices output an indication of the severe hook faults.
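The identification step above can be sketched as a column scan over the check matrices. The criterion used here — a fault is flagged as a severe hook fault when its column lies outside the phenomenological sub-matrix, has a nontrivial logical effect, and carries sufficient probability weight — is one plausible reading of the abstract, not the patented algorithm; all names and the weight floor are illustrative assumptions.

```python
# Hypothetical sketch: flag columns of the circuit channel check matrix that
# lie outside the phenomenological sub-matrix and have a nontrivial logical
# effect. The selection criterion is an assumption for illustration only.

def severe_hook_fault_indices(circuit_cols, phenom_col_indices,
                              logical_effect_cols, weights, weight_floor=0.0):
    """Return column indices of candidate severe hook faults.

    circuit_cols: binary columns (tuples of 0/1 check values) of the circuit
        channel check matrix, one per elementary fault.
    phenom_col_indices: indices of the columns forming the phenomenological
        channel check sub-matrix.
    logical_effect_cols: binary columns of the logical effect matrix, aligned
        with circuit_cols.
    weights: probability weight of each elementary fault.
    """
    phenom = set(phenom_col_indices)
    severe = []
    for j, (col, eff, w) in enumerate(zip(circuit_cols, logical_effect_cols,
                                          weights)):
        if j in phenom:
            continue  # phenomenological faults are not hook faults
        if any(eff) and w > weight_floor:
            severe.append(j)  # circuit-level fault with a logical effect
    return severe
```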
A computing device assembly is provided, including a rack, and a plurality of compute units that are horizontally oriented and mounted within the rack in one of two vertical stacks. The computing device assembly further includes a plurality of switches that are vertically oriented and mounted along a front side of the rack laterally between the two vertical stacks of compute units. The computing device assembly further includes a plurality of horizontal cable backplanes mounted in a vertical stack along a rear side of the rack. The computing device assembly further includes a plurality of vertical cable shuffles mounted between the two vertical stacks of compute units and between the vertically oriented switches and the vertical stack of horizontal cable backplanes.
A signal conditioning connector assembly is provided, including an enclosure, and a plurality of signal conditioner layers mounted within the enclosure. Each signal conditioner layer includes a substrate, signal conditioner circuitry mounted to the substrate, first electrodes forming a first connector on a first side of the signal conditioner circuitry, second electrodes forming a second connector on a second side of the signal conditioner circuitry, a heat spreader in thermal communication with a side of the signal conditioner circuitry opposite the substrate, and a liquid cooling pipe positioned adjacent and in thermal communication with the heat spreader. The liquid cooling pipe is configured to draw heat away from the heat spreader for thermal management. The signal conditioning connector assembly can be positioned adjacent an interface between the vertical cable shuffle and the horizontal cable backplane within the rack of the computing device assembly of the first and second aspects.
Bidirectional flows of a communication session in a software defined network (SDN) are efficiently managed. A smart switch comprises a digital processing unit (DPU) complex comprising one or more DPUs, and a switching complex comprising one or more network processing units (NPUs). The DPU complex is configured to disaggregate enforcement of policies of the SDN from hosts of the SDN. The switching complex is configured to perform network routing of packets in the SDN. The hosts are implemented on servers communicatively coupled to network interfaces of the SDN. The switching complex is configured to perform policy enforcement of data flows for communication sessions that are offloaded from the DPU complex to the switching complex.
A method, computer program product, and computing system for speech language identification. An input speech signal in a particular language of a plurality of languages is received and processed by a plurality of speech recognition processing paths, each speech recognition processing path being configured to recognize a subset of the plurality languages. Each of the plurality of speech recognition processing paths processes the input speech signal using machine learning to identify a language in the associated subset of languages which is a closest match to the particular language of the input speech signal. The processing of the input speech signal by the plurality of speech recognition processing paths results in a plurality of identified languages. The input speech signal and an indication of each of the plurality of identified languages are processed in a further speech recognition processing path to recognize one of the plurality of identified languages as a closest match to the particular language of the input speech signal.
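The two-stage identification described above can be sketched as follows. The per-path scorers here are trivial stand-ins for the machine-learning models, and all function names are illustrative assumptions.

```python
# Sketch of the two-stage language identification pipeline; the scorers stand
# in for the machine-learning models and are assumptions, not the real models.

def run_paths(signal, paths):
    """Stage 1: each path picks the closest language within its own subset.

    paths: list of (language_subset, scorer) pairs, where scorer(signal, lang)
    returns a match score for one language.
    """
    return [max(subset, key=lambda lang: scorer(signal, lang))
            for subset, scorer in paths]

def final_path(signal, candidates, scorer):
    """Stage 2: a further path decides among the per-path winners."""
    return max(candidates, key=lambda lang: scorer(signal, lang))
```

As a toy usage, a scorer that checks for a language tag in the signal label lets the second stage pick the one genuine per-path winner.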
A heuristic that solves an optimization problem is analyzed to determine how and why it underperforms a benchmark solution. A novel intermediate representation (IR) is used to construct a network flow graph that models the optimization problem. Solutions to the optimization problem are defined programmatically with reference to the network flow graph. A compiler translates the programmatic definitions of the heuristic and a benchmark solution to a low-level model of constraints and objectives. A heuristic analyzer iteratively analyzes the constraints and objectives to identify inputs that cause the heuristic to yield inefficient results relative to the benchmark. Properties of inputs and properties of the heuristic that cause the heuristic to underperform are identified, and an explanation of when, how, and why the heuristic underperforms is generated.
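The iterative analysis loop above can be sketched as a search over candidate inputs for cases where the heuristic's objective value falls short of the benchmark's. The callables and the "higher is better" convention are illustrative assumptions, not the compiler-based analyzer itself.

```python
# Minimal sketch of the heuristic analyzer's inner loop: compare the heuristic
# against the benchmark solution over candidate inputs and collect the inputs
# (and gaps) where the heuristic underperforms. All names are assumptions.

def find_underperforming_inputs(heuristic, benchmark, inputs, tolerance=0.0):
    """Return (input, gap) pairs where the heuristic trails the benchmark."""
    failures = []
    for x in inputs:
        gap = benchmark(x) - heuristic(x)  # assuming higher objective is better
        if gap > tolerance:
            failures.append((x, gap))
    return failures
```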
A computer-implemented technique is described herein for defining and applying constraints that regulate a supervisee's interaction with applications. In one implementation, the technique provides a user interface presentation to a supervisor that lists a set of applications that run on plural application execution platforms. The user interface presentation also allows the supervisor to set platform-agnostic constraint information for any identified application. The platform-agnostic constraint information, once set for an application, constrains interaction by a supervisee with all versions of that same application. That is, the constraint information is said to be agnostic with respect to platform in the sense that it applies to a variety of application execution platforms that run the application. In one example, the platform-agnostic constraint information specifies a permitted amount of an activity that the supervisee is permitted to perform across all versions of an application.
According to examples, an apparatus may include a processor and a memory on which is stored machine-readable instructions that when executed by the processor, may cause the processor to cause a graphical user interface to be displayed, the graphical user interface including graphical icons of a plurality of authentication types available for assignment to users and a graphical icon of a first user. The instructions may also cause the processor to detect a movement of a graphical icon of a first authentication type from a first location to a second location in the graphical user interface, the second location corresponding to the graphical icon of the first user and based on the detected movement, assign the first authentication type to the first user.
Implementations of the subject matter described herein provide a solution for rate control based on reinforcement learning. In this solution, an encoding state of a video encoder is determined, the encoding state being associated with encoding of a first video unit by the video encoder. An encoding parameter associated with rate control in the video encoder is determined by a reinforcement learning model based on the encoding state of the video encoder. A second video unit different from the first video unit is encoded based on the encoding parameter. In this way, it is possible to achieve a better quality of experience (QoE) for real-time communication with reduced computation overhead.
H04N 19/196 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
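A minimal tabular sketch of the rate-control loop above: a learned table maps a discretized encoder state (e.g., buffer fullness) to an encoding parameter, and each encoded unit's QoE reward updates the table. The state discretization, action names, and update rule are illustrative assumptions, not the patented model.

```python
# Hedged sketch of reinforcement-learning rate control with a value table.
# States, actions, and rewards here are toy stand-ins.

def select_encoding_parameter(q_table, encoding_state):
    """Pick the encoding parameter with the highest learned value for this state."""
    actions = q_table[encoding_state]
    return max(actions, key=actions.get)

def update(q_table, state, action, reward, alpha=0.1):
    """One learning update toward the QoE reward observed after encoding."""
    q_table[state][action] += alpha * (reward - q_table[state][action])
```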
79.
TARGETING OPERATING SYSTEM PROFILES FOR BARE METAL RESTORE
Example solutions enhance security of bootable media images during bare metal restores. A boot image generation request and original image integrity data is received from a first computing device. An original image timestamp associated with the boot image generation request is stored. A message is received from a second computing device that includes current image integrity data generated by the second computing device using a current boot image. The original image integrity data is verified to match the current image integrity data. The message is determined to have been received within a length of time from the original image timestamp. A registration of the second computing device is performed within the device management system based on the verification and the determination.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
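The verification and timeliness checks above can be sketched as follows. The choice of SHA-256 for the integrity data and the exact time-window comparison are assumptions about one reasonable implementation, not the patented method.

```python
import hashlib

# Sketch of the registration decision: the current boot image's integrity data
# must match the original, and the message must arrive within the allowed
# window of the stored original image timestamp.

def verify_registration(original_digest, original_ts, current_image,
                        now, max_age_s):
    """Register the device only if integrity matches and the message is timely."""
    current_digest = hashlib.sha256(current_image).hexdigest()
    if current_digest != original_digest:
        return False  # boot image was altered
    return (now - original_ts) <= max_age_s
```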
80.
ALGORITHM AND DIFFERENTIATED WEIGHTS FOR RESOURCE CONSTRAINED PROCESSES
The techniques disclosed herein enhance the functionality of network computing infrastructure in resource constrained processes. This is accomplished by assigning differentiated weights to instances of a software service based on the role of the instance. In the context of the present disclosure, a role is a defined set of functionalities within a software service. An individual weight quantitatively represents the computing resource demand imposed by the functionalities of the role. A software orchestration system subsequently places the instances of the software service within a computing environment (e.g., a node, a cluster) for execution. As such, the computing environment can include a resource constraint that represents the capacity of the constituent computing resources to execute the instances of the software service. Accordingly, the instances are placed such that the sum of the weights of the instances is less than or equal to the resource constraint.
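The placement condition above reduces to a simple feasibility check: the sum of the differentiated role weights of the instances must not exceed the environment's resource constraint. The role names and weight values below are illustrative assumptions.

```python
# Minimal sketch of weight-constrained placement: each instance carries the
# weight of its role, and placement is feasible only within the constraint.

def can_place(instance_roles, role_weights, resource_constraint):
    """True if the summed role weights fit within the environment's capacity."""
    total = sum(role_weights[role] for role in instance_roles)
    return total <= resource_constraint
```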
A method of and system for method for ensuring data compliance in a computer environment includes retrieving data rules from a rule repository, the rule repository being a repository that stores one or more rules that are associated with storage or transfer of data by one or more devices in the computing environment, retrieving metadata about data flow in the computing environment from a policy governor, retrieving information about a data classification of data used by one or more services provided by the computing environment, and retrieving data about a network topography of the computing environment. The retrieved data is then used to generate a configuration file for configuring a Field Programmable Gate Array (FPGA) based on at least one of the retrieved data. The configuration file is transmitted to an FPGA configuration loader for loading the configuration file onto the FPGA, where the FPGA utilizes the configuration file to implement the data rules in the computing environment to ensure compliance with the rules.
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
H04L 41/084 - Configuration by using pre-existing information, e.g. using templates or copying from other elements
82.
PROBING LARGE LANGUAGE MODEL HIDDEN STATE VALUES FOR DETECTING GROUNDING ERRORS IN GENERATION
A large language model has multiple different layers, each layer generating a set of hidden state values that are passed on to a subsequent layer, during generation. A probe accesses the hidden state values and generates a probe output indicative of how likely a next token to be generated will be an undesirable token (such as a hallucination). An action signal is generated based upon the probe output. The action signal can be used to terminate generation, to generate an alert, or to perform other actions.
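The probe and action signal above can be sketched as a linear classifier over one layer's hidden state. A real probe would be trained offline on labeled generations; the weights, threshold, and action names here are purely illustrative assumptions.

```python
import math

# Hedged sketch: a linear probe over a hidden-state vector, mapped through a
# sigmoid to a probability that the next token is undesirable.

def probe_score(hidden_state, probe_weights, probe_bias):
    """Probability that the next token to be generated is undesirable."""
    z = sum(h * w for h, w in zip(hidden_state, probe_weights)) + probe_bias
    return 1.0 / (1.0 + math.exp(-z))

def action_signal(score, threshold=0.9):
    """Map the probe output to an action, e.g. terminating generation."""
    return "terminate_generation" if score > threshold else "continue"
```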
Disclosed herein is a system for implementing a management controller on a node, or network server, that is dedicated to monitoring the individual health of a plurality of accelerator modules configured on the node. Based on the monitored health, the management controller is configured to implement autonomous power cycle control of individual accelerator modules. The autonomous power cycle control is implemented without violating the requirements of standards established for accelerator modules (e.g., OPEN COMPUTE PROJECT requirements, PERIPHERAL COMPONENT INTERCONNECT EXPRESS (PCIe) interface requirements).
Techniques are described for a multi-platform test framework that is configured to generate a target test script indicative of commands and responses for verifying virtual functions implemented in a virtualized computing environment executing a plurality of virtual machines or containers. A translation layer is used to translate the commands and responses of a source test script to equivalent commands and responses usable to verify the virtual function in a second node configured to operate on a second platform of the virtualized computing environment.
This document relates to providing adaptive teleconferencing experiences using generative image models. For example, the disclosed implementations can employ inpainting and/or image-to-image restyling modes of a generative image model to generate images for a teleconference. The images can be generated based on prompts relating to the teleconference. Users can be superimposed on the generated images, thus giving the appearance that the users are present in an environment generated by the generative image model.
A natural language query is received from an operator of a telecommunications network. A metadata request is computed from the natural language query. The metadata request is sent to a repository of metadata, the metadata describing telemetry data of the telecommunications network, the telemetry data stored in a relational database. Metadata is received from the metadata repository in response to the metadata request. Using the received metadata and a language model a relational database query is computed and the relational database is queried. A response is received from the relational database, triggering an action.
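The pipeline above can be sketched as a chain of stages. Every function here is a stand-in — the metadata-request computation, the language model, and the database are stubbed, and the keyword pass is a hypothetical placeholder, not the described system's method.

```python
# Stub sketch of the query pipeline: natural language -> metadata request ->
# metadata -> language-model-composed relational query -> response.

def handle_operator_query(nl_query, metadata_repo, language_model, database):
    metadata_request = extract_tables_of_interest(nl_query)  # compute request
    metadata = metadata_repo(metadata_request)   # describes stored telemetry
    sql = language_model(nl_query, metadata)     # compose relational query
    return database(sql)                         # response triggers an action

def extract_tables_of_interest(nl_query):
    # Hypothetical keyword pass standing in for real request computation.
    return [word for word in nl_query.lower().split()
            if word in {"latency", "throughput"}]
```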
Example implementations include a method, apparatus, and computer-readable medium configured for implementing a workflow using a large language model (LLM). A workflow automation application sends a first prompt to a large language model (LLM) to transform a first input data source in a first format to a second format. The workflow automation application sends a second prompt to the LLM to define multiple steps of a workflow starting on the data source in the second format. The workflow automation application sends a third prompt to the LLM to define execution of business logic for each step of the workflow. The workflow automation application receives, from the LLM, output data indicating that each step of the workflow has been executed.
Computer-implemented techniques for multimodal content relevance prediction using neural networks involve processing multimodal content comprising a digital image and text. Initially, dense embeddings are obtained: an image embedding from a pretrained convolutional neural network, and a text embedding from a pretrained transformer network. These embeddings encapsulate the features of the image and the text, respectively. Two pretrained dense neural sub-networks then reduce the dimensionality of these embeddings. A third dense neural sub-network determines a numerical score for the multimodal content using the reduced embeddings and an additional feature embedding. This score reflects various aspects of the multimodal content, and an action is taken based on this numerical evaluation, providing a comprehensive and nuanced understanding and management of multimodal digital content.
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06F 16/2457 - Query processing with adaptation to user needs
G06F 16/28 - Databases characterised by their database models, e.g. relational or object models
G06F 40/284 - Lexical analysis, e.g. tokenisation or collocates
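The scoring stage above can be sketched with linear stand-ins: two projections shrink the image and text embeddings, and a final weighted combination folds in the additional feature embedding. The toy matrices below are assumptions replacing the pretrained dense sub-networks.

```python
# Illustrative sketch of the relevance-scoring stage; the linear "networks"
# here are toy stand-ins for the pretrained dense neural sub-networks.

def reduce_dim(embedding, projection):
    """Project a dense embedding to a lower dimension (rows = output dims)."""
    return [sum(e * w for e, w in zip(embedding, row)) for row in projection]

def relevance_score(image_emb, text_emb, extra_emb,
                    img_proj, txt_proj, score_weights):
    """Combine reduced embeddings and an extra feature embedding into a score."""
    reduced = reduce_dim(image_emb, img_proj) + reduce_dim(text_emb, txt_proj)
    features = reduced + extra_emb
    return sum(f * w for f, w in zip(features, score_weights))
```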
89.
DYNAMIC CHARGE RATE CONTROL OF BATTERY POWERED DEVICES
Techniques, software, and systems for enhanced management of computing system battery charge rates are included. In one implementation, a method includes identifying a preference level for battery charging performance for a computing system when supplied by a power supply having a power supply capacity. Based at least on the preference level, the method includes allocating the power supply capacity to bias a primary allocation of the power supply capacity to charging operations of a battery of the computing system while providing a remainder allocation to at least a system processor of the computing system.
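The biased allocation above is simple arithmetic once a preference level is mapped to a charging bias. The bias fractions per preference level are illustrative assumptions.

```python
# Arithmetic sketch of the biased power-supply allocation: a fraction of the
# supply capacity goes to battery charging, the remainder to the system
# processor. The bias table is an assumption.

CHARGE_BIAS = {"performance": 0.75, "balanced": 0.5}

def allocate_power(supply_capacity_w, preference_level):
    """Split supply capacity between battery charging and the processor."""
    charge_w = supply_capacity_w * CHARGE_BIAS[preference_level]
    return charge_w, supply_capacity_w - charge_w  # (charging, remainder)
```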
The technology described herein provides an improved framework for novel view synthesis utilizing scene-level features and pixel-level features. In particular, the technology provides semantic representations corresponding to the scene, along with semantic representations corresponding to each pixel, so that transformer encoders can determine inherent interconnections within objects in the scene that would not be determined from the pixel-level feature representations alone. In this regard, the technology described herein improves the generalizability of Neural Radiance Fields (NeRF) based techniques to novel scenes, avoiding the need for retraining for specific scenes, and improves the few-shot capability of NeRF-based techniques to render novel views using a limited number of reference images.
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
G06T 7/90 - Determination of colour characteristics
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/56 - Extraction of image or video features relating to colour
G06V 10/771 - Feature selection, e.g. selecting representative features from a multi-dimensional feature space
A computer-implemented method includes obtaining a training data set for multiple monitors for various services, which includes service properties and monitor metadata. The metadata for a given monitor defines resources utilized by a corresponding service. The method determines N feature vectors and a target resource class for each service based on the training data set. A machine learning model is trained in multiple training iterations using the training data set. In a given training iteration, N feature vectors of a selected service are provided to the machine learning model, which predicts a resource class of the selected service. A difference between the predicted resource class and the target resource class for the selected service is determined, based on which one or more parameters of the machine learning model can be updated. The trained machine learning model can be used to recommend a new monitor for a new service.
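One training iteration above can be sketched with a trivial nearest-centroid stand-in for the machine-learning model: predict a resource class from a feature vector, compare against the target class, and update parameters on a mismatch. The model, the single averaged feature vector, and the update rule are all illustrative assumptions.

```python
# Skeleton of one training iteration; CentroidModel is a toy stand-in for the
# machine-learning model, operating on a single feature vector per service.

class CentroidModel:
    def __init__(self, centroids):
        self.centroids = centroids  # resource class name -> feature vector

    def predict(self, features):
        def dist(c):
            return sum((f - x) ** 2
                       for f, x in zip(features, self.centroids[c]))
        return min(self.centroids, key=dist)

    def update(self, features, target, lr):
        # Move the target class centroid toward the observed features.
        self.centroids[target] = [c + lr * (f - c)
                                  for c, f in zip(self.centroids[target],
                                                  features)]

def train_step(model, features, target_class, lr=0.5):
    """One iteration: predict, compare with the target class, update on error."""
    predicted = model.predict(features)
    if predicted != target_class:  # the difference drives the parameter update
        model.update(features, target_class, lr)
    return predicted
```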
A method for user intent evaluation includes receiving recorded speech (306) of a human user (104). One or more attention indicators (406) are detected in an image (400) of the human user (104). Using a trained command recognition model (504), a command confidence (506) is estimated indicating a confidence that the recorded human speech (306) includes a command for a smart assistant computing system (100). Based at least in part on detecting the one or more attention indicators (406), and the command confidence (506) exceeding a command confidence threshold, the human user (104) is classified as intending to interact with the smart assistant computing system (100).
A signal conditioning connector assembly (50) is provided, including an enclosure (52), and a plurality of signal conditioner layers (68) mounted within the enclosure (52). Each signal conditioner layer (68) includes a substrate (66), signal conditioner circuitry (60) mounted to the substrate (66), first electrodes (64) forming a first connector on a first side of the signal conditioner circuitry (60), second electrodes (65) forming a second connector on a second side of the signal conditioner circuitry (60), a heat spreader (62) in thermal communication with a side of the signal conditioner circuitry (60) opposite the substrate (66), and a liquid cooling pipe (54) positioned adjacent and in thermal communication with the heat spreader (62). The liquid cooling pipe (54) is configured to draw heat away from the heat spreader (62) for thermal management. The signal conditioning connector assembly (50) can be positioned adjacent an interface between the vertical cable shuffle (20) and the horizontal cable backplane (14) within the rack (10) of the computing device assembly (100) of the first and second aspects.
Large language models (LLMs) and visual-language models (VLMs) are able to provide robust results based on specified formatting and organization. Although LLMs and VLMs are designed to receive natural language input, users often lack the skill, knowledge, or patience to utilize LLMs and VLMs to their full potential. By leveraging screen understanding, AI prompts (or "pills") may automatically be generated for artificial-intelligence (AI) assistance and query resolution in a VLM/LLM environment. Using an image encoder, a current screenshot is processed into an image embedding and compared to text embeddings representing screenshot activities. By identifying the text embedding having the closest similarity to the image embedding, a screen activity being performed by the user may be determined. Suggested AI prompts (or "pills") may then be generated in real-time to assist the user in performing the screen activity.
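The matching step above can be sketched as a cosine-similarity search: the screenshot's image embedding is compared against each activity's text embedding, and the closest one names the current screen activity. The embeddings themselves are assumed to come from pretrained encoders; the toy vectors below are assumptions.

```python
import math

# Sketch of screen-activity detection by embedding similarity.

def cosine(a, b):
    """Cosine similarity between two dense embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def detect_screen_activity(image_embedding, activity_text_embeddings):
    """Return the activity whose text embedding is closest to the screenshot."""
    return max(activity_text_embeddings,
               key=lambda name: cosine(image_embedding,
                                       activity_text_embeddings[name]))
```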
Systems and techniques for facilitating unified multichannel communication are provided. The described systems and techniques improve communication technology through an encompassing, channel-agnostic approach which unifies disparate communication modes into a singular coherent thread. A unified multichannel communication ("UMC") service of a UMC platform can initialize a UMC thread for a UMC session, where the UMC thread can be used to facilitate unified multichannel communication.
This document relates to providing adaptive teleconferencing experiences using generative image models. For example, the disclosed implementations can employ inpainting and/or image-to-image restyling modes of a generative image model to generate images for a teleconference. The images can be generated based on prompts relating to the teleconference. Users can be superimposed on the generated images, thus giving the appearance that the users are present in an environment generated by the generative image model.
A system, method, and computer-readable media for executing applications for radio interface controller (RIC) management are disclosed. The system includes far-edge datacenters configured to execute a radio access network (RAN) function and a real-time RIC; near-edge datacenters configured to execute a core network function and a near-real-time RIC or a non-real-time RIC; and a central controller. The central controller is configured to: receive inputs of application requirements, hardware constraints, and a capacity of first and second computing resources at the far-edge datacenters and near-edge datacenters; enumerate a plurality of feasible combinations of application locations and configurations that satisfy the application requirements and hardware constraints; incrementally allocate a quant of the first or second computing resources to a feasible combination that would produce a greatest utility from the quant based on a utility function; and deploy each of the plurality of applications.
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
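The incremental allocation above can be sketched as a greedy loop: each quant of capacity goes to the feasible combination with the greatest marginal utility. The utility function below (a lookup of diminishing returns) is an illustrative assumption.

```python
# Greedy sketch of the central controller's incremental allocation: one quant
# of computing resources at a time, to whichever feasible combination gains
# the most utility from it. Names and the utility table are assumptions.

def allocate_quanta(combinations, total_quanta, utility):
    """Allocate quanta greedily by marginal utility per combination."""
    allocation = {c: 0 for c in combinations}
    for _ in range(total_quanta):
        best = max(combinations,
                   key=lambda c: utility(c, allocation[c] + 1)
                                 - utility(c, allocation[c]))
        allocation[best] += 1
    return allocation
```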
98.
TARGETING OPERATING SYSTEM PROFILES FOR BARE METAL RESTORE
Example solutions enhance security of bootable media images during bare metal restores. A boot image generation request and original image integrity data is received from a first computing device. An original image timestamp associated with the boot image generation request is stored. A message is received from a second computing device that includes current image integrity data generated by the second computing device using a current boot image. The original image integrity data is verified to match the current image integrity data. The message is determined to have been received within a length of time from the original image timestamp. A registration of the second computing device is performed within the device management system based on the verification and the determination.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
Embodiments of the disclosed technologies include parsing a query into a first query portion and at least one second query portion, matching an embedding of the at least one second query portion with an embedding that corresponds to a portion of a document of a document set, mapping the portion of the document to a first node of a graph; by a generative artificial intelligence model, constructing a graph query based on at least the first node, executing the graph query on the graph to identify a second node of the graph, extracting a path from the graph, and configuring the path for output at a device.
Clickable trackpad designs generally aim to achieve a thin form factor with a minimized footprint that provides consistent user perception of click feel at scale within cost constraints. The clickable trackpad designs described herein adopt a mechanical depression mechanism that allows a user to register a click at any point on the trackpad surface with little variance in the depression force required to register the click. This provides a consistent click feel for the user. Further consistency of the click feel is achieved by establishing simultaneous travel of the entire trackpad surface, no matter where the click force is applied. This motion is a downward translation of the trackpad surface. An end user may press at any location of the trackpad surface to achieve a push button input (or click).
G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks