A method may determine that a predetermined input gesture has been detected when an input area in an application has focus. In response to determining that the predetermined input gesture has been detected when the input area has focus, the method may initiate display of a user interface. The user interface may include a first selectable option configured to, in response to selection, insert content related to a file into the input area, and a second selectable option configured to, in response to selection, perform a default operation of the predetermined input gesture.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 16/14 - Details of searching files based on file metadata
2.
Implicit Calibration from Screen Content for Gaze Tracking
The technology relates to methods and systems for implicit calibration for gaze tracking. This can include receiving, by a neural network module, display content that is associated with presentation on a display screen. The neural network module may also receive uncalibrated gaze information, in which the uncalibrated gaze information includes an uncalibrated gaze trajectory that is associated with a viewer gaze of the display content on the display screen. A selected function is applied by the neural network module to the uncalibrated gaze information and the display content to generate a user-specific gaze function having one or more personalized parameters. The neural network module can then apply the user-specific gaze function to the uncalibrated gaze information to generate calibrated gaze information associated with the display content on the display screen. Training and testing information may alternatively be created for implicit gaze calibration.
A method includes receiving, by a browser application, a first user interaction of a first participant of a virtual meeting to join the virtual meeting between the first participant and one or more other participants of the virtual meeting. The method further includes receiving, by the browser application and via a virtual meeting user interface (UI), a second user interaction of the first participant to share content of a first application within the virtual meeting. The method further includes causing information regarding the content of the first application to be transmitted to one or more client devices of the one or more other participants of the virtual meeting. The method further includes causing a first state of the content of the first application to be presented to the first participant and the one or more other participants of the virtual meeting.
Methods, systems, and apparatus, including microelectronic circuits and processors, are described for estimating energy consumption during a sampling window. The processor is divided into multiple processor cores, which are each sub-divided into processor units. The units can perform operations from a set of operations. Based on the number of times each unit has performed a particular operation during a sampling window, an estimate of the energy consumed by that unit for that sampling window is calculated. By summing the estimated energy consumption of many units on a core and many cores of the entire system, an energy consumption estimate is prepared for the sampling window. Other power parameters may be calculated from a sampling window duration and the energy consumption estimate. If a power parameter exceeds a threshold, action may be taken to alter operation of the microelectronic circuit.
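The per-unit accounting described in this abstract can be sketched as follows; the operation names, per-operation energy costs, and the nested core/unit structure are illustrative assumptions rather than details from the source.

```python
# Sketch of sampling-window energy estimation from per-unit operation counts.
# Operation names and per-op energy costs (in picojoules) are assumed for
# illustration; a real design would use characterized values per unit type.

ENERGY_PER_OP_PJ = {"int_add": 0.1, "fp_mul": 1.5, "load": 2.0}

def unit_energy(op_counts):
    """Energy (pJ) consumed by one processor unit during a sampling window."""
    return sum(ENERGY_PER_OP_PJ[op] * n for op, n in op_counts.items())

def system_energy(cores):
    """Sum the unit estimates over all units of all cores for the window."""
    return sum(unit_energy(unit) for core in cores for unit in core)

def average_power_watts(energy_pj, window_seconds):
    """Derive a power parameter from the window duration and energy estimate."""
    return energy_pj * 1e-12 / window_seconds

cores = [
    [{"int_add": 1000, "load": 200}],     # core 0: one unit
    [{"fp_mul": 500}, {"int_add": 300}],  # core 1: two units
]
total_pj = system_energy(cores)
```

A power parameter derived this way (e.g. `average_power_watts(total_pj, 0.001)` for a 1 ms window) could then be compared against a threshold to trigger corrective action.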
A first user equipment (UE) that is initially coupled to a second UE and a base station enters an out-of-service (OOS) state upon losing its wireless connection to a service via the base station. The first UE attempts to reestablish the connection while sending OOS recovery parameters to the second UE. The second UE does not automatically scan for service if a signal strength of its connection to the first UE is above a predetermined threshold level. In response to losing its connection to the first UE or determining that the signal strength of its connection to the first UE is less than the predetermined threshold level, the second UE activates one or more corresponding modems and executes OOS recovery protocols based on the OOS recovery parameters received from the first UE to attempt to regain service.
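The second UE's decision logic described in this abstract can be sketched as below; the threshold value in dBm and the return labels are illustrative assumptions, not values from the source.

```python
# Sketch of the second UE's behavior. The first UE handles OOS recovery
# unless the link between the UEs is lost or too weak.

THRESHOLD_DBM = -100  # assumed predetermined threshold level

def second_ue_action(link_to_first_ue_up, signal_strength_dbm):
    """Decide whether the second UE starts its own OOS recovery."""
    if not link_to_first_ue_up or signal_strength_dbm < THRESHOLD_DBM:
        # Link lost or weak: activate modems and run OOS recovery using
        # the parameters previously received from the first UE.
        return "run_oos_recovery"
    # Strong link to the first UE: do not automatically scan for service.
    return "no_scan"
```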
A method includes receiving a transcription of an utterance, processing, using a first model, the transcription to generate a first text segment that represents an initial portion of a response to the utterance, processing, using a TTS system, the first text segment to generate a first synthesized speech representation, and providing, for audible output, the first synthesized speech representation. The method also includes providing, to a second model different from the first model, the transcription and the first text segment, the second model comprising an LLM configured to process the transcription and the first text segment to generate a second text segment that represents a remaining portion of the response to the utterance. The method further includes obtaining a second synthesized speech representation generated from the second text segment, and providing, for audible output by the user device, the second synthesized speech representation.
This document describes systems and techniques directed at an interposer assembly for system-level failure analysis. In aspects, the interposer assembly may include a main body with a central aperture configured to provide optical access to an exposed surface of a semiconductor device. A plurality of probe pins may be arranged on the main body surrounding the central aperture, the plurality of probe pins configured to make electrical contact with a corresponding plurality of fine-pitch contact pads on the semiconductor device. The interposer assembly may further include at least one connector disposed on the main body and physically offset from the central aperture, the at least one connector configured to receive a memory module. Circuitry within the main body electrically couples the plurality of probe pins to the at least one connector. In such a configuration, accurate optical imaging and fault isolation analysis can be enabled while the semiconductor device is fully operational.
In described techniques, a first sensor signal is received from a first sensor coupled to an extended reality device within a moving frame of reference. A second sensor signal is received from an image sensor within the moving frame of reference and in communication with the extended reality device. A motion of the extended reality device with respect to the moving frame of reference may be determined, based on the first sensor signal and the second sensor signal.
A method of power switch placement and optimization in an integrated circuit is described. The method includes designating multiple sections in the integrated circuit. The sections are defined by a section width parallel to a first direction and a section length perpendicular to the first direction. For each section, a section type is determined based on a ratio of the section length to the section width belonging to a certain range of values. Each section has a corresponding power switch geometry specification determined in part by the section type. A power switch geometry specification is selected based on the section type, and the power switches are placed in each section according to the power switch geometry specification.
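The section-typing step in this abstract can be sketched as follows; the aspect-ratio ranges and the (rows, columns) geometry specifications per section type are illustrative assumptions, not values from the source.

```python
# Sketch of section typing for power-switch placement: each section is
# classified by its length-to-width ratio, and the class selects a
# power switch geometry specification.

def section_type(section_length, section_width):
    """Classify a section by the ratio of its length to its width."""
    ratio = section_length / section_width
    if ratio < 0.5:
        return "wide"
    if ratio <= 2.0:
        return "square"
    return "tall"

# Assumed geometry specification per type: (rows, columns) of switches.
GEOMETRY_SPEC = {"wide": (1, 4), "square": (2, 2), "tall": (4, 1)}

def place_power_switches(sections):
    """Select a geometry specification for each (length, width) section."""
    return [GEOMETRY_SPEC[section_type(l, w)] for l, w in sections]
```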
Implementations relate to obtaining input data and responsive output(s), where the responsive output(s) are determined based on processing the input data using a generative model (GM); processing, using a generative reward model (GRM), GRM input to generate corresponding GRM output, where the GRM input includes the responsive output(s); determining, based on the GRM output, a generative verdict, where the generative verdict is indicative of a relative quality of each of the responsive output(s); determining, based on the generative verdict, a reward value; and causing, based on at least the reward value, the GRM to be trained.
Provided herein are systems and methods for performing dynamic adaption and correction for internal delays in devices connected to a common time-multiplexed bus. The methods allow devices to operate reliably at a higher bus frequency by correcting for inherent and unknown delays within the components and in the system by measuring the actual delays using multiple readings with the bus. Intrinsic noise and jitter are used to increase the precision of the measurements, thereby essentially using these uncertainties as self-dithering for increased measurement resolution. During adaption, delays may be adjusted in multiple step sizes to speed adaption time.
Methods, systems, and apparatus, including computer programs encoded on computer-storage media, for increasing sparsity to improve neural network efficiency. In some implementations, a system stores parameter values of parameter matrices of one or more layers of a neural network. The parameter values of the parameter matrices include (i) weight values of the one or more layers of the neural network, and (ii) predictor values that have been trained to predict levels of importance of items processed by the neural network. The system generates an output, including: determining a value for each of multiple items using the predictor values, selecting a proper subset of the items by comparing the determined values to a threshold, and generating output of the one or more layers with computation limited to the selected proper subset.
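The predictor-gated selection in this abstract can be sketched as below; the array shapes and the threshold rule are illustrative assumptions, not details from the source.

```python
import numpy as np

# Sketch of predictor-gated sparsity: trained predictor values score each
# item, and layer computation is limited to the proper subset of items
# whose scores exceed a threshold.

def select_subset(items, predictor_values, threshold):
    """Return a mask for the proper subset of items to process."""
    scores = items @ predictor_values   # one importance value per item
    return scores > threshold

def sparse_layer(items, weights, predictor_values, threshold):
    """Compute the layer output only for selected items; others stay zero."""
    keep = select_subset(items, predictor_values, threshold)
    out = np.zeros((items.shape[0], weights.shape[1]))
    out[keep] = items[keep] @ weights   # computation limited to the subset
    return out
```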
A method of interpreting time-series data is provided. This method includes receiving a prompt containing at least one time series of data. The method may also include extracting, using at least one machine learning model, the at least one time series of data from the prompt. Further, the method may include generating, using the at least one machine learning model based on the at least one time series of data, at least one plot representative of the at least one time series of data. Even further, the method may include applying the at least one machine learning model to the prompt and the at least one plot to generate an output responsive to the prompt.
A method for enhanced integration with cameras for virtual meetings includes presenting, at a first client device, a virtual meeting user interface (UI) during a virtual meeting between participants associated with one or more client devices. The virtual meeting UI includes one or more regions each corresponding to a video stream provided by a respective client device. The method includes obtaining, at a client application on the first client device, a first video stream via a first data channel between an image capture device and the client application, and metadata via a second data channel between the image capture device and the client application. The method includes causing a visual representation of the first video stream to be modified during the virtual meeting based on the metadata obtained via the second data channel.
Techniques and apparatuses are described that implement a flip-flop with a high-speed architecture. In example aspects, the high-speed architecture is a two-path architecture, which represents a hybrid combination of multiple topologies controlled by different clock signals. At a first path (602-1) of the flip-flop (106), the high-speed architecture has a pulsed-latch topology (504), which enables the flip-flop (106) to have a smaller insertion delay relative to other flip-flops with a single-path architecture based on the master-slave topology. At a second path (602-2) of the flip-flop (106), the high-speed architecture has a master-slave topology (506) to satisfy the hold time requirement of the flip-flop without relying on additional buffers. The high-speed architecture can be used to implement a scan-type flip-flop, including settable and/or resettable versions of the scan-type flip-flop. With the high-speed architecture, the flip-flop (106) can operate at higher clock frequencies compared to other flip-flops with single-path architectures.
A method includes receiving, by a browser application, a first user interaction to initiate a virtual meeting in a first tab of the browser application. The method further includes receiving, during the virtual meeting, a second user interaction to switch a focus of the browser application to a second tab of the browser application. The method further includes, responsive to the focus of the browser application switching to the second tab of the browser application, causing a floating window to appear along with content of the second tab. The floating window includes one or more elements associated with the virtual meeting.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
H04L 12/18 - Arrangements for providing special services to substations for broadcast or conference
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating an output image using a text-to-image model conditioned on both the input text and on image and text pairs selected from a multi-modal knowledge base. In one aspect, a method includes, at each of multiple time steps: generating a first feature map for the time step; selecting one or more neighbor image and text pairs based on their similarities to the input text; for each of the one or more neighbor image and text pairs, generating a second feature map for the neighbor image and text pair; applying an attention mechanism over the one or more second feature maps to generate an attended feature map; and generating an updated intermediate representation of the output image for the time step.
A method is performed by a device of a group of devices in a distributed data replication system. The method includes storing an index of objects in the distributed data replication system, the index being replicated while the objects are stored locally by the devices in the group. The method also includes conducting a scan of at least a portion of the index and identifying one or more redundant replicas of at least one of the objects based on the scan of the index. The method further includes de-duplicating the one or more redundant replicas, and updating the index to reflect the status of the de-duplicated replicas.
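The scan-and-deduplicate flow in this abstract can be sketched as below; the index layout (object id mapped to a list of replica locations) and the required-copies policy are illustrative assumptions, not details from the source.

```python
# Sketch of index-driven de-duplication in a distributed replication system.

def scan_index(index, required_copies=2):
    """Identify redundant replicas: copies beyond the required count."""
    return {oid: locations[required_copies:]
            for oid, locations in index.items()
            if len(locations) > required_copies}

def deduplicate(index, required_copies=2):
    """Remove redundant replicas and update the index to reflect it."""
    redundant = scan_index(index, required_copies)
    for oid in redundant:
        index[oid] = index[oid][:required_copies]  # keep only required copies
    return redundant
```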
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
19.
MODIFYING VISUAL REPRESENTATIONS OF MEDIA STREAMS IN VIRTUAL CONFERENCING PLATFORMS USING EMBEDDED SEMANTIC METADATA
A virtual meeting user interface (UI) is presented during a virtual meeting between a plurality of participants. The UI comprises a plurality of regions each corresponding to a media stream provided by one of a plurality of client devices. The plurality of regions comprises a region corresponding to one or more media streams provided to a client device. The one or more media streams, each comprising respective metadata, are received at the client device. Respective metadata of a media stream indicates a spatial location in the media stream of a participant. One or more content presentation layout characteristics of the client device are identified. A visual representation of the media stream is caused to be modified in the region based at least on the location and the layout characteristics. The virtual meeting UI comprising the region with the modified visual representation is presented on the client device.
Implementations described herein relate to configuring a dynamic warm word button, that is associated with a client device, with particular assistant commands based on detected occurrences of warm word activation events at the client device. In response to detecting an occurrence of a given warm word activation event at the client device, implementations can determine whether user verification is required for a user that actuated the warm word button. Further, in response to determining that the user verification is required for the user that actuated the warm word button, the user verification can be performed. Moreover, in response to determining that the user that actuated the warm word button has been verified, implementations can cause an automated assistant to perform the particular assistant command associated with the warm word activation event. Audio-based and/or non-audio-based techniques can be utilized to perform the user verification.
Methods, apparatus, systems, and computer-readable media are provided for tailoring composite graphical assistant interfaces for interacting with multiple different connected devices. The composite graphical assistant interfaces can be generated in response to a user providing a request for an automated assistant to cause a connected device to perform a particular function. In response to the automated assistant receiving the request, the automated assistant can identify other functions that the connected device is capable of performing. The other functions can then be mapped to various graphical control elements in order to provide a composite graphical assistant interface from which the user can interact with the connected device. Each graphical control element can be arranged according to a status of the connected device, in order to reflect how the connected device is operating simultaneously with the presentation of the composite graphical assistant interface.
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
22.
A Metering Stack and System for Collecting a Target Sample for Testing
A metering stack for collecting a target sample includes a channel layer spacing a top layer from a bottom layer, where the top, bottom, and channel layers together define a channel. The channel has an inlet end, a main channel portion, a separation portion, and one or more dispensing portions. A vent is defined within the metering stack proximate the separation portion, where the vent allows air to enter the metering stack into the separation portion. The vent has a first wall extending between a first end and a second end, and a curved wall extending between the first end and the second end, with at least a portion of the first wall being closer than the curved wall to the main channel portion and with the first wall being at an angle relative to a main axis of the main channel portion.
A method includes receiving a prompt directed towards an assistant large language model (LLM) and generating, using the assistant LLM, a sequence of output tokens based on the prompt. The sequence of output tokens includes a sequence of textual tokens including one or more correct textual tokens and one or more incorrect textual tokens, and one or more revision tokens each indicating a corresponding N number of incorrect textual tokens generated prior to the respective revision token and corresponding replacement textual tokens generated after the respective revision token for replacement of the corresponding N number of incorrect textual tokens. The method also includes generating a revised sequence of output tokens for the prompt based on the sequence of output tokens.
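The revision step described in this abstract can be sketched as below; the token format, a `("REV", N)` tuple marking the N most recent textual tokens as incorrect and followed by their replacement tokens, is an illustrative assumption, not the source's encoding.

```python
# Sketch of building the revised output sequence from a token stream that
# interleaves textual tokens with revision tokens.

def apply_revisions(output_tokens):
    """Build the revised sequence, honoring each revision token."""
    revised = []
    for tok in output_tokens:
        if isinstance(tok, tuple) and tok[0] == "REV":
            n = tok[1]
            del revised[-n:]     # drop the N incorrect textual tokens
        else:
            revised.append(tok)  # textual token, original or replacement
    return revised
```

For example, a stream containing two incorrect tokens followed by `("REV", 2)` and a replacement yields the corrected text only.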
A media application performs object recognition on an initial image to identify a set of objects in the initial image. The media application determines whether the initial image is an outdoor scene. Responsive to the initial image being an outdoor scene, the media application determines a sky segment from the initial image. The media application determines whether the initial image includes a subject that is human or animal. Responsive to the initial image including the subject, the media application determines a subject segment from the initial image. The media application receives, at a user interface that includes the initial image, user input corresponding to selection of a selected object from the set of objects. The media application updates the user interface to include an indication that the selected object was selected.
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
A computing system can include one or more processors and one or more non-transitory computer-readable media storing instructions that are executable by the one or more processors to cause the computing system to perform operations. The operations can include receiving, from a second computing system comprising one or more second computing devices, first cryptographic data indicative of at least one file modification. The operations can include receiving, from a third computing system comprising one or more third computing devices, second cryptographic data indicative of the at least one file modification. The operations can include providing, to the third computing system, verification data indicative of a correctness of metadata associated with the at least one file modification.
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols; the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
26.
MODIFYING TARGET REGIONS WITHIN AN IMAGE USING A DIFFUSION NEURAL NETWORK
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a diffusion neural network using a region-aware fine-tuning process. After training, the diffusion neural network can be used to generate an image conditioned on a conditioning input.
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for issue pipe sharing. One of the methods operates on a processor that includes a physical register file (PRF), a plurality of execution units, and a reservation station (RSV) that includes a plurality of issue pipes. The method includes concurrently issuing a plurality of instructions to different execution units, using selection logic to select which source data elements to obtain from the N read ports of the PRF.
A method may determine that a predetermined input gesture has been detected when an input area in an application has focus. In response to determining that the predetermined input gesture has been detected when the input area has focus, the method may initiate display of a user interface. The user interface may include a first selectable option configured to, in response to selection, insert content related to a file into the input area, and a second selectable option configured to, in response to selection, perform a default operation of the predetermined input gesture.
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
29.
SYSTEMS AND METHODS FOR EFFICIENT IMAGE TRANSFORMATION OPERATIONS
Methods, systems, and apparatus for receiving a request to perform a plurality of image transformation operations on an input image to generate a plurality of output image frames. An identifier is assigned to each respective image transformation operation and a corresponding output image frame. Image data for the input image is obtained from memory and stored in a local cache. An output image frame portion is generated based on a portion of image data for the input image stored in the local cache and an image transformation operation corresponding to the identifier of the respective output image frame. The output image frame portion is stored in memory within a separate container associated with each of the plurality of output image frames.
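The identifier-tagged fan-out described in this abstract can be sketched as below; the pixel-level operations and the per-frame container mapping are illustrative assumptions, not details from the source.

```python
# Sketch of running several transformation operations over a locally
# cached copy of the input image, with an identifier tying each
# operation to its output image frame's container.

def process_request(input_image, transform_ops):
    """Generate one output frame per transformation from cached input data."""
    local_cache = list(input_image)   # image data obtained from memory
    containers = {}                   # separate container per output frame
    for identifier, op in enumerate(transform_ops):
        containers[identifier] = [op(px) for px in local_cache]
    return containers
```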
Aspects of battery current control via a hardware controller for semiconductor devices are disclosed. For example, a hardware-based control architecture enables real time monitoring and control of current drawn from a battery coupled with a semiconductor device. The hardware controller is configured to continuously monitor and control the current drawn from the battery. A total current drawn from the battery is communicated to the hardware controller to compare the total current drawn to a target current and generate a controller output based on the comparison. Operation points, for elements of the semiconductor device that draw current from the battery, are determined based on the controller output. The operation points are communicated to the elements to control, in real time, the current drawn from the battery by the elements.
The disclosure provides systems, devices, apparatuses, and methods, for managing configuration for requesting transmission of SIB, for example on-demand SIB1 transmission. A UE (102) receives (304), from a candidate cell (126B) supporting on-demand SIB, an indication that periodic SIB transmission is deactivated for the candidate cell (126B). The UE (102) transmits (310), to the candidate cell (126B) based on an UL WUS configuration list with a first UL WUS configuration for the candidate cell, a WUS on uplink resources to request an on-demand SIB transmission from the candidate cell (126B). Based on the UL WUS configuration list being inapplicable to the candidate cell, the UE (102) bars (309) access to the candidate cell (126B) and selects (311) a different candidate cell (124) to access.
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for sharing tag comparators. One of the methods includes obtaining, by a shared comparator, a broadcast destination tag, wherein the shared comparator operates for an instruction stored in the RSV; selecting, by a selection logic module, a source tag from among tags that include a tag of a first source and a tag of a second source, wherein the first source and the second source are used by the instruction; and comparing, by the shared comparator, the selected tag and the broadcast destination tag.
Systems and methods for tuning text-to-image models utilizing a modified direct preference optimization technique. The method can include generating training data by accessing a number of images associated with content items. The method can include accessing, for each image, image quality data. The method can include accessing content item performance signal data associated with the content items. The method can include generating preference data from the content item performance signal data by selecting a first image and a second image based on the image quality data and the content item performance signal data. The method can include determining a preferred image based on an analysis of the image quality data and the content item performance signal data. The method can include storing the first and second images, as well as the preferred image, in a preference data structure, and tuning the text-to-image model using the data in the preference data structure.
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for protecting HLL-derived results with differential privacy. In one aspect, a method includes receiving a set of sketches from a set of data owners. Each sketch represents a sampling of items in a dataset and comprises a set of registers that store values representing a number of leading zeros. A minimum register value for the sketches is determined based on differential privacy parameters. The sketches are merged to generate a merged sketch. The merging includes storing the minimum register value in each register of the merged sketch that has a value less than the minimum register value. A number of unique items represented by the merged sketch is estimated. The estimating includes adding noise to the estimated number of unique items. Data indicating the estimated number of unique items is provided to a recipient.
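The merge-with-floor and noisy-estimate steps in this abstract can be sketched as below; the register counts, the noise mechanism, and especially the cardinality formula (a simplified stand-in, not the true HLL estimator) are illustrative assumptions.

```python
import math
import random

# Sketch of merging HLL-style sketches with a DP-derived register floor,
# then producing a noisy cardinality estimate.

def merge_with_floor(sketches, min_register_value):
    """Per-register max across sketches, then clamp registers up to the DP minimum."""
    merged = [max(registers) for registers in zip(*sketches)]
    return [max(r, min_register_value) for r in merged]

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via the inverse CDF."""
    if scale == 0:
        return 0.0
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def estimate_unique(merged, noise_scale, rng):
    """Noisy estimate of unique items from the merged sketch."""
    raw = sum(2 ** r for r in merged)  # simplified stand-in estimator
    return raw + laplace_noise(noise_scale, rng)
```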
Methods and devices in a wireless network enable layer 1 (L1) report retransmission when an expected L1 report is not received or decoded by a network entity (NE). A user equipment (UE) receives (304) from the NE a control signal configuring a first L1 report configuration enabled for retransmission. When not receiving or not being able to decode an expected first L1 report, the NE sends (312) a scheduling message to the UE for scheduling a retransmission of the first L1 report associated with the first L1 report configuration. The UE then transmits (314), to the NE, the retransmission of the first L1 report.
This disclosure provides methods and devices for reporting linked channel state information (CSI) in wireless communication system. In an aspect, a UE (102) receives (304), from a network entity (104), a first configuration for a first CSI report and a second configuration for a second CSI report linked with the first CSI report. The first configuration indicates a first channel measurement resource (CMR) and the second configuration indicates a second CMR linked with the first CMR. The UE receives (308), from the network entity (104), a first reference signal on the first CMR and a second reference signal on the second CMR. The UE transmits (310), to the network entity (104), the second CSI report including second CSI calculated based on measurements of the first reference signal and the second reference signal. This disclosure also provides a method of wireless communication at a network entity.
A user equipment (UE) receives, in a currently serving cell associated with a first radio access technology (RAT), a broadcast message indicating (i) a first one or more cells of a first type and associated with a second RAT and (ii) a second one or more cells of a second type and associated with the second RAT; and reselects to a new cell in view of whether the new cell is of the first type or the second type.
A method for providing data to additional devices in a virtual meeting includes causing a virtual meeting UI, including one or more regions each corresponding to respective media streams generated by a client device, to be presented during a virtual meeting. The method includes obtaining an indication that a first additional device associated with a first client device of a first participant of the plurality of participants is available at a location of the first participant. The method includes causing the virtual meeting UI to be modified to present, in a first region corresponding to a first media stream generated by the first client device, a visual indication of the first additional device. The method includes causing first data indicated by a second client device to be sent to the first additional device to cause the first additional device to perform a first predetermined action.
Generally disclosed herein is an approach to mitigating hardware degradation of server machines caused by frequent chip temperature fluctuations by jointly controlling the power consumption level, changes in xPU temperature, and job start latency of the server machines. According to some examples, a power and temperature optimization system may monitor xPU temperature fluctuations caused by inter-job fluctuations related to the xPU's deep idle state. The xPU's deep idle state may refer to a state where the xPU turns off or reduces the voltage of the xPU components to save power when a job or a unit of work assigned to the xPU stops. The xPU's deep idle state may continue until the next job or unit of work starts.
Systems and methods for generating a personalized newsletter. The system can include a database storing a plurality of content items and a machine-learned model that is configured to generate the personalized newsletter. The system can process a plurality of content items of a publisher to generate an attribute for each content item in the plurality of content items. Additionally, the system can select, based on the attribute for each content item in the plurality of content items, a subset of content items from the plurality of content items. Moreover, the system can process the subset of content items, using the machine-learned model, to generate a summary. Furthermore, the system can generate a newsletter based on the summary and the subset of content items.
Techniques and devices for radar-based gesture determination at long ranges are described in this document. The techniques described herein enable a computing device to detect and recognize gestures at long-range extents of up to eight meters. The computing device of this disclosure does not require the user to perform a gestural command at a specific location, in a specific orientation, contingent upon a wake-up trigger, or at a specific time, enabling the user to freely provide commands whenever and wherever is most convenient. This continual recognition of gestures may be enabled by a machine-learned model, generation of augmented data, and inclusion of negative data.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G01S 7/41 - Details of systems according to groups , , of systems according to group using analysis of echo signal for target characterisation; Target signature; Target cross-section
G01S 13/58 - Velocity or trajectory determination systems; Sense-of-movement determination systems
A method is described for determining power grid density on an integrated circuit. The method includes determining an initial placement of power switches, logic circuits, the power grid, and the logic grid. The method creates tiles covering the entire integrated circuit and assigns a power grid density and a logic grid density for each tile. Power losses are simulated for the entire chip and chip timing function is simulated. Based on the simulated power losses and the simulated chip timing, the power grid density and the logic grid density are adjusted on a per tile basis. The assignment of power grid density and logic grid density and simulations are iterated until a cessation condition is met. After the cessation condition is met, a final chip simulation is performed and the final routing is determined.
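The simulate-adjust-iterate loop described above might be sketched as follows; the `simulate` callback, which returns a (power_loss, meets_timing) pair per tile, and the fixed adjustment steps are illustrative assumptions, not the patented procedure:

```python
def adjust_tiles(tiles, simulate, power_budget, max_iters=10):
    """Sketch of the per-tile iteration: simulate power loss and timing
    for each tile, adjust its power-grid and logic-grid densities, and
    repeat until the cessation condition (all tiles pass) is met.
    The `simulate` callback and the adjustment steps are assumptions.
    """
    for _ in range(max_iters):
        results = [simulate(tile) for tile in tiles]
        if all(ok and loss <= power_budget for loss, ok in results):
            return tiles, True  # cessation condition met
        for tile, (loss, ok) in zip(tiles, results):
            if not ok:
                tile["power_density"] += 0.1   # strengthen the grid to meet timing
            elif loss > power_budget:
                tile["power_density"] -= 0.05  # trade power grid for logic routing
                tile["logic_density"] += 0.05
    return tiles, False  # iteration budget exhausted without convergence
```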
H02J 1/00 - Circuit arrangements for dc mains or dc distribution networks
H03K 17/56 - Electronic switching or gating, i.e. not by contact-making and -breaking characterised by the use of specified components by the use, as active elements, of semiconductor devices
43.
MANAGING DATA COMMUNICATION BEFORE AND AFTER A STATE TRANSITION
To manage communications from a UE during a state transition, a central unit (CU) of a base station performs, by processing hardware, an early data transmission procedure with the UE while the UE is in an inactive state, including transmitting at least one data packet of a sequence of data packets to the UE (802). The CU determines to transition the UE from the inactive state to a connected state (808). In response to transitioning the UE to the connected state, the CU transmits, by the processing hardware, a next data packet in the sequence of data packets to the UE, or retransmits, by the processing hardware, the at least one data packet to the UE in response to determining the at least one data packet has not been received (812).
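The CU's post-transition rule above can be sketched as a small selection function; representing acknowledgement state as a set of indices into the packet sequence is an assumption for illustration:

```python
def next_transmissions(sequence, next_index, unacked_indices):
    """Sketch of the CU behavior: after the UE enters the connected
    state, retransmit any packet not confirmed received; otherwise
    transmit the next packet in the sequence. The set-of-indices
    representation of acknowledgement state is an assumption.
    """
    if unacked_indices:
        return [sequence[i] for i in sorted(unacked_indices)]
    if next_index < len(sequence):
        return [sequence[next_index]]
    return []
```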
Provided is a system that automatically evaluates the output of machine-learned models. A computing system receives, from a user computing device, an input query. The computing system processes the input query with a generative model to generate a model output based on the input query, the model output including a textual response and one or more media elements. The computing system identifies one or more representative subsequences that correspond to a representation based on the textual response. The computing system generates a plurality of tuple pairs based on the one or more representative subsequences that correspond to a representation and the one or more media elements. For each of the relevant tuple pairs, the computing system processes the respective tuple pair with an entailment-scoring machine-learned model to generate an entailment score for the respective tuple pair. The computing system provides an entailment output for the model output based on the respective entailment scores generated for the one or more relevant tuple pairs.
A computing device for generating content includes one or more memories to store instructions and one or more processors to execute the instructions to perform operations, the operations including: receiving an input prompt requesting to generate content including an image with text; implementing one or more first machine-learned models configured to: determine, based on the input prompt, the text to be displayed in the image and one or more first features associated with the text, and determine one or more second features relating to generating an initial image which excludes the text; implementing one or more second machine-learned models configured to generate the initial image based on the one or more second features; and generating the content including the image with the text, based on the initial image generated via the one or more second machine-learned models and the one or more first features associated with the text.
Systems and methods for the generation of a comparative data structure using a large language model. The method includes obtaining, by a computing system, input data comprising a user query and query context data; processing, by a large language model (LLM) operating on the computing system, the user query and the query context data to generate a set of search results; defining, by the LLM, a schema associated with a plurality of differentiators based on the user query, the query context data, and the set of search results; extracting, by the LLM, information associated with the plurality of differentiators from the set of search results; generating, by the LLM, a comparative data structure using the schema and the information associated with the plurality of differentiators; and comparing, by the LLM, the set of search results based on the information associated with the plurality of differentiators using the comparative data structure.
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for rendering a new image that depicts a scene from a perspective of a camera at a new camera viewpoint at a given time point in a video.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/56 - Extraction of image or video features relating to colour
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
48.
Detecting a Soft Short Circuit at a Charging Interface of an Electronic Device
A computer-implemented method for detecting soft short circuits at a charging interface of an electronic device is provided. The method includes obtaining an initial voltage measurement of a voltage reference that is electrically coupled to the charging interface of the electronic device. The method includes obtaining a plurality of additional voltage measurements of the voltage reference. The method includes detecting a soft short circuit at the charging interface based, at least in part, on the initial voltage measurement and a voltage measurement of the plurality of additional voltage measurements that is most recent in time. The method further includes causing the electronic device to perform one or more control actions in response to detecting the soft short circuit at the charging interface.
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for estimation techniques for efficient neural network inference processing. In some implementations, parameter values for a trained neural network comprising multiple layers are stored, including (i) a matrix of parameter values for at least one layer and (ii) an approximate matrix of values corresponding to the at least one layer. Input is processed using the trained neural network, including determining an input for the at least one layer, computing approximate outputs corresponding to elements in a set using the approximate matrix, and computing intermediate outputs for only a proper subset of the elements in the set using the matrix of parameter values for the at least one layer. The proper subset is determined based on the approximate outputs.
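The approximate-then-exact pattern described above can be sketched with a dense layer; the top-k rule for choosing the proper subset and the function names are assumptions for illustration, not the claimed selection criterion:

```python
import numpy as np

def approximate_then_exact(x, W, W_approx, k):
    """Use a cheap approximate matrix to score all output elements, then
    compute exact outputs only for a proper subset of them. The top-k
    selection rule here is an assumption for illustration.
    """
    approx_out = W_approx @ x                     # cheap pass over all elements
    subset = np.argsort(-np.abs(approx_out))[:k]  # proper subset from approx scores
    exact_out = np.zeros_like(approx_out)
    exact_out[subset] = W[subset] @ x             # exact compute only for the subset
    return exact_out, subset
```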
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing a task. In one aspect, a method comprises: receiving a query for a task to be performed; receiving a plurality of context content items for the task; for each content item of the plurality of content items, processing an input comprising a representation of the content item using a trained compression model to generate a compressed representation of the content item comprising one or more vectors of a fixed size; generating, using the compressed representations, an aggregated compressed representation comprising one or more vectors that represents the plurality of content items; and processing an input comprising (i) the query and (ii) the aggregated compressed representation using a generative neural network to generate a response to the query.
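The aggregation step above can be sketched as follows; mean pooling over fixed-size vectors is one plausible aggregation, used here as an assumption since the abstract does not specify the operator:

```python
import numpy as np

def aggregate_compressed(compressed_reps):
    """Aggregate per-item compressed representations (fixed-size
    vectors) into one representation for the whole plurality of
    content items. Mean pooling is an illustrative assumption.
    """
    return np.mean(np.stack(compressed_reps, axis=0), axis=0)
```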
In described techniques, a first sensor signal is received from a first sensor coupled to an extended reality device within a moving frame of reference. A second sensor signal is received from an image sensor within the moving frame of reference and in communication with the extended reality device. A motion of the extended reality device with respect to the moving frame of reference may be determined, based on the first sensor signal and the second sensor signal.
B60K 35/00 - Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
B60K 35/10 - Input arrangements, i.e. from user to vehicle, associated with vehicle functions or specially adapted therefor
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
B60R 11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
A method includes obtaining an input image that has been captured by an image sensor using a first color filter array (CFA) having a first color pattern. The method also includes generating an output image by remosaicing or demosaicing the input image using a machine learning model configured to map pixel values determined using the first CFA to a second color pattern that (i) differs from the first color pattern of the first CFA and (ii) is associated with a second CFA. The machine learning model may have been trained using a simulated training sample. The method further includes outputting the output image.
An example computing system retrieves context information from one or more background applications and applies a language model to the context information to identify one or more tasks. The computing system determines, for each of the one or more tasks, one or more associated applications, in which each of the one or more associated applications includes one or more functions for performing a respective task. The computing system assigns a respective priority score to each of the one or more tasks and generates instructions for generating a graphical user interface. The graphical user interface includes graphical components, in which each graphical component is associated with a respective task, and each graphical component is arranged within the graphical user interface based on the priority score assigned to the respective task.
This disclosure provides systems, methods, and apparatuses for a user equipment (UE) (102A) to request a system information block type 1 (SIB1) (180A) from a network entity (104A). To conserve energy, the network entity might refrain from sending one or more SIB1s associated with one or more synchronization signal blocks (SSBs) (130A) in a cell. In accordance with this disclosure, the network entity may employ techniques that reduce the overhead incurred when responding to multiple SIB1 requests from multiple UEs. The techniques include configuring ROs and PRACH preambles that increase the probability that the network entity can transmit a single RAR and a single SIB1 in response to the multiple requests for the SIB1. The techniques further include the UE refraining from transmitting a PRACH when beam quality is low.
A method can be implemented in a network function (NF) of a core network (CN) or in a certificate authority (CA), for renewing an Automated Certificate Management Environment (ACME) certificate. The method can include receiving or providing an ACME certificate. The method can include transmitting or receiving a request, in response to a trigger event and in accordance with a configuration for the trigger event, related to renewal of the security certificate. The method can include receiving or providing a renewed security certificate in response to the request. Other methods and apparatuses are described.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04W 12/069 - Authentication using certificates or pre-shared keys
56.
SUPPORTING MULTIPLE FUNCTIONS OF A NETWORK-ON-CHIP USING A BUFFER
Techniques and apparatuses are described for supporting multiple functions of a network-on-chip using a buffer. In example aspects, a network-on-chip (110) uses a credit-based protocol and facilitates communication between different subsystems (108) associated with different clock domains (206), different voltage domains (208), different power domains (210), different data widths (212), or some combination thereof. The network-on-chip (110) includes a buffer (118), which provides storage for credit management (214), storage for an asynchronous clock-domain crossing (216), and storage for data upscaling (218). While other network-on-chips or other system-on-chips can have individual storage elements to support each function, the buffer (118) acts as a single storage element capable of supporting each of these functions. With the buffer (118), the network-on-chip (110) can have a smaller footprint, can consume less power, and can have less latency compared to other network-on-chips that utilize multiple storage elements.
This disclosure provides systems, devices, apparatus, and methods, including computer programs encoded on storage media, for a reduction of latency for random access procedures based on PRACH adaptation. A UE (102) receives (304), from a network entity (104), control signaling configuring a first PRACH configuration and a second PRACH configuration. The control signaling enables dynamic adaptation for a time-domain parameter and a non-time-domain parameter of the second PRACH configuration. The UE (102) transmits (312), to the network entity (104) on a valid RO, an initial transmission or a PRACH based on the first PRACH configuration or the second PRACH configuration.
Techniques are described for operating a qubit controller to generate a signal to apply to a qubit using a tunable coupler that controls the amplitude of at least part of the signal. The qubit controller may comprise a plurality of digital-to-analog converters that each convert digital values to an analog waveform. The qubit controller may further comprise a plurality of tunable couplers each coupled to a respective DAC that adjusts the amplitude of the analog waveform from the respective DAC. The tunable couplers thereby produce a plurality of analog waveforms, which may be combined to produce a signal to apply to the qubit. In some embodiments, the tunable couplers may each be configured to receive a respective control signal that dictates the scaling factor which that tunable coupler applies to the analog waveform.
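The signal path above (per-DAC waveforms, scaled by tunable couplers, then summed) can be sketched numerically; the identity DAC transfer and the simple scalar coupler model are assumptions for illustration:

```python
import numpy as np

def combine_dac_outputs(digital_values_per_dac, scaling_factors):
    """Sketch of the qubit-control signal path: each DAC converts its
    digital values to an analog waveform, a tunable coupler applies a
    scaling factor, and the scaled waveforms are summed into the signal
    applied to the qubit. The identity DAC transfer is an assumption.
    """
    waveforms = [np.asarray(v, dtype=float) for v in digital_values_per_dac]
    scaled = [s * w for s, w in zip(scaling_factors, waveforms)]
    return sum(scaled)
```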
G06N 10/40 - Physical realisations or architectures of quantum processors or components for manipulating qubits, e.g. qubit coupling or qubit control
H03K 3/38 - Generators characterised by the type of circuit or by the means used for producing pulses by the use, as active elements, of superconductive devices
H03K 19/195 - Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using superconductive devices
61.
Display screen or portion thereof with transitional graphical user interface
Techniques are disclosed that enable training a goal-conditioned policy based on multiple data sets, where each of the data sets describes a robot task in a different way. For example, the multiple data sets can include: a goal image data set, where the task is captured in the goal image; a natural language instruction data set, where the task is described in the natural language instruction; a task ID data set, where the task is described by the task ID, etc. In various implementations, each of the multiple data sets has a corresponding encoder, where the encoders are trained to generate a shared latent space representation of the corresponding task description. Additional or alternative techniques are disclosed that enable control of a robot using a goal-conditioned policy network. For example, the robot can be controlled, using the goal-conditioned policy network, based on free-form natural language input describing robot task(s).
Implementations are directed to receiving unstructured free-form natural language input, generating a chatbot based on the unstructured free-form natural language input and in response to receiving the unstructured free-form natural language input, and causing the chatbot to engage in corresponding conversations with additional users. In various implementations, the unstructured free-form natural language input implicitly defines a corresponding dialog state map (e.g., defines corresponding dialog states and/or corresponding dialog state transitions) without defining any explicit dialog states and/or explicit dialog state transitions. In other implementations, the unstructured free-form natural language input is assigned to explicit dialog states and/or explicit dialog state transitions. Nonetheless, the unstructured free-form natural language input may be utilized to fine-tune and/or prime a machine learning model that is already capable of being utilized in conducting generalized conversations. As a result, the chatbot can be generated and deployed in a quick and efficient manner.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
64.
MITIGATING LATENCY AND/OR RESOURCE USAGE IN TRIGGERING ACTIONABLE SUGGESTIONS RELATED TO RENDERED CONTENT
Implementations relate to triggering suggestion(s) for a document that is at least partially displayed by a content access application at a user interface of a client computing device. The suggestion(s) can be triggered when one or more triggering conditions that provide when to trigger the suggestion(s) are satisfied. The one or more triggering conditions can include, for example, a coordinate condition, a DOM node condition, and/or a temporal condition.
Implementations set forth herein relate to an automated assistant that can render selectable suggestion(s) at a display interface of computerized glasses, and can adapt the suggestions according to changes to a gaze direction of the user and/or other further inputs from the user. The selectable suggestion(s) can be initially rendered based on contextual data that may be associated with a user who is directing their gaze into an environment that includes different environmental features. Certain environmental features can be identified by the automated assistant as being predicted to be of interest to the user and—when a user expresses interest in a particular feature—the selectable suggestions can be adapted. Interest of the user in the particular environmental feature can be expressed by redirecting their gaze towards the particular feature and/or providing further input relevant to the particular feature.
An aspect of the disclosed technology is a process, apparatus, and/or system that provides a capability to automatically derive the bandwidth required for a given bypass tunnel, and use it to compute a compliant path across the network, without reserving bandwidth of a given bypass along its path. This may be implemented by computing (e.g., summing) the signaled bandwidth required of all MPLS LSPs supported by a given bypass tunnel. The computation may be done as part of bypass tunnel re-optimization or re-signaling, prior to the Constrained Shortest Path First (CSPF) algorithm or procedure being run by the PLR or periodically.
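The derivation above (sum the signaled bandwidths of the protected LSPs, then constrain the path computation) can be sketched as follows; the record shapes and helper names are assumptions for illustration:

```python
def bypass_bandwidth_required(protected_lsps):
    """Derive the bandwidth a bypass tunnel must support by summing the
    signaled bandwidths of all MPLS LSPs it protects. The dict shape of
    an LSP record is an assumption for illustration.
    """
    return sum(lsp["signaled_bandwidth"] for lsp in protected_lsps)

def compliant_links(links, required_bandwidth):
    """Keep only links that can carry the derived bandwidth; the result
    would feed a CSPF-style constrained path computation."""
    return [name for name, available in links.items()
            if available >= required_bandwidth]
```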
An example method for validating data at a node in a hierarchical input pipeline for a machine-learned model system includes receiving, from a first child node of the node, a first input context component. The example method includes receiving, from a second child node of the node, a second input context component. The example method includes generating, at a context generation time, an output context component based on a validated set of context components comprising at least the first input context component. In the example method: the validated set of context components includes the second input context component based on receiving, from the second child node, a communication that indicates a valid context status for the second input context component at the context generation time; or the validated set of context components does not comprise the second input context component based on not receiving the communication that indicates the valid context status at the context generation time. The example method includes outputting, to a parent node of the node, the output context component.
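The node's validation rule above can be sketched as a small function; returning the validated set as a tuple is a stand-in for the real context aggregation, and the parameter names are assumptions:

```python
def build_output_context(first_component, second_component, valid_status_received):
    """Sketch of the node's rule: the first child's component is always
    in the validated set, while the second child's component joins it
    only if a valid-status communication arrived by context generation
    time. The tuple return value is an illustrative stand-in for the
    real output context component.
    """
    validated = [first_component]
    if valid_status_received:
        validated.append(second_component)
    return tuple(validated)
```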
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
68.
ENHANCED TELEVISION VIEWING EXPERIENCE BASED ON GROUP VIEWING WATCH PATTERNS
According to an aspect, a method includes receiving, by a computing device, an indication to obtain a picture of a viewing audience of the computing device. The method may receive, by the computing device, the picture of the viewing audience that includes individuals. The method may scan the picture to identify attributes of the individuals included in the viewing audience. The method may create a watch clique that includes the individuals and associate the attributes with the watch clique. The method may determine media content recommendations for viewing by the viewing audience based on the attributes of the watch clique.
Implementations are described herein for improving the identification of entities in image data. In various implementations, image data is received depicting one or more food items. A geolocation associated with the image data can be obtained, as well as additional contextual data about one or more of the digital images. The image data along with the additional contextual data can be assembled into an input prompt for a generative model and processed by the generative model. The output of the generative model can include a classification of one or more of the food items present in the image data. This classification can be rendered as output at a user device.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/778 - Active pattern-learning, e.g. online learning of image or video features
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
A method for generating a new video from a source video includes determining that the source video is associated with one or more components, and identifying a source starting segment within the source video at least in part by selecting a segment identification model, from among a plurality of candidate segment identification models, based at least in part on the segment identification model being configured to operate upon at least one of the one or more components. The method also includes identifying the source starting segment by using the selected segment identification model to process at least a portion of the source video. The method also includes generating the new video using one or more portions of the source video, wherein generating the new video includes generating an initial segment of the new video based on the source starting segment.
A battery-powered portable computing device, including a digital key for providing access to an external secure system, detects that a remaining battery energy has reduced to a second predefined level higher than a first predefined level, wherein at least a portion of the device is programmed to shut down when the remaining battery energy drops to the first predefined level. In response to the detection, the device prompts a user to select a configuration to allow use of the digital key when at least a portion of the device has shut down, and allows use of the digital key after the device is shut down if allowed by the user-selected configuration.
G06F 21/81 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer by operating on the power supply, e.g. enabling or disabling power-on, sleep or resume operations
G06F 1/3212 - Monitoring battery levels, e.g. power saving mode being initiated when battery voltage goes below a certain level
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for managing an interface between a pair of processing cores of a device that are configured to exchange data. The device is configured to enable or disable one or more of the pair of processing cores. One of the methods includes configuring a connect/disconnect interface implemented as logic circuitry between the pair of processing cores to assume a connected state in which the pair of processing cores can exchange data, and configuring the connect/disconnect interface between the pair of processing cores to assume a disconnected state in which one or more of the processing cores is unable to receive data.
Systems and methods for determining a glucose value for a user are disclosed herein. The method includes receiving a plurality of data inputs associated with biometric data of the user, the plurality of data inputs including at least one data input representative of a past estimated glucose value of the user and processing the plurality of data inputs with a multi-headed temporal convolutional neural network to generate a blood glucose value for the user. The method also includes providing a notification to the user based at least in part on the blood glucose value.
Aspects of policy-defined connection management of opportunistic network capacity are described. In some aspects, a mobile device having a connection manager may be configured to determine, based on a wireless network policy of the mobile device, contextual information for a connection available through an access point (AP) of a wireless local area network (WLAN) associated with a mobile network operator (MNO). The connection manager measures signal-related characteristics of the WLAN connection and determines, based on the contextual information and the characteristics, a first quality metric. The connection manager also measures second signal-related characteristics of a connection available through a base station of a cellular network associated with the MNO and determines, based on the characteristics, a second quality metric. Based on a comparison of the quality metrics, the connection manager connects the mobile device to the WLAN through the AP or the cellular network through the base station.
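The metric-and-compare decision above can be sketched as follows; the linear RSSI normalization over -90 to -30 dBm, the higher-is-better convention, and the tie-breaking toward WLAN are assumptions for illustration:

```python
def quality_metric(rssi_dbm, policy_weight):
    """Map a measured signal strength into a policy-weighted quality
    metric. The linear normalization over -90 to -30 dBm and the
    higher-is-better convention are illustrative assumptions.
    """
    normalized = max(0.0, min(1.0, (rssi_dbm + 90.0) / 60.0))
    return policy_weight * normalized

def select_connection(wlan_rssi, wlan_weight, cell_rssi, cell_weight):
    """Connect via whichever network yields the higher metric; ties
    favor the WLAN (an assumption)."""
    if quality_metric(wlan_rssi, wlan_weight) >= quality_metric(cell_rssi, cell_weight):
        return "wlan"
    return "cellular"
```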
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for analyzing images for generating query responses. One of the methods includes determining, using a textual query, an image category for images responsive to the textual query, and an output type that identifies a type of requested content; selecting, using data that associates a plurality of images with a corresponding category, a subset of the images that each belong to the image category, each image in the plurality of images belonging to one of the two or more categories; analyzing, using the textual query, data for the images in the subset of the images to determine images responsive to the textual query; determining a response to the textual query using the images responsive to the textual query; and providing, using the output type, the response to the textual query for presentation.
G06F 16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
G06F 16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating an output video. One of the methods includes: obtaining an input video; obtaining input text that includes a description of an output video; generating, based at least on applying downsampling to the input video, a degraded version of the input video; and generating the output video based on the description in the input text by updating the degraded version of the input video using a video diffusion model across a plurality of reverse diffusion steps.
A method includes segmenting a source video into source video segments, and generating a script for a new video using a generative artificial intelligence (AI) engine. The script includes, for each of one or more new video segments arranged according to a sequential order, a segment descriptor and a segment voice-over transcript. For each new video segment, a voice-over segment is generated from among the source video segments based on the respective segment voice-over transcript, and a set of source video segment(s) is selected based on the respective segment descriptor, for use in generating the new video segment. The method also includes generating the new video, at least in part by inserting the generated voice-over segments for the new video segment(s), and the selected set(s) of source video segment(s) for the new video segment(s), in accordance with the sequential order.
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
78.
Systems and Methods for Providing Feedback for Artificial Intelligence-Based Image Capture Devices
The present disclosure provides systems and methods that provide feedback to a user of an image capture device that includes an artificial intelligence system that analyzes incoming image frames to, for example, determine whether to automatically capture and store the incoming frames. An example system can also present, in the viewfinder portion of a user interface presented on a display, a graphical intelligence feedback indicator in association with a live video stream. The graphical intelligence feedback indicator can graphically indicate, for each of a plurality of image frames as such image frame is presented within the viewfinder portion of the user interface, a respective measure, output by the artificial intelligence system, of one or more attributes of the scene depicted by the image frame.
A method includes identifying a target within a three-dimensional scene based on input from a user, generating a two-dimensional image based on the target, determining that a query based on the two-dimensional image is to be performed, and performing the query based on the two-dimensional image.
A folding device comprising: a first housing comprising first electronic components; a second housing comprising second electronic components; a hinge assembly rotatably connected to the first housing and the second housing; a continuous display connected to and across the first housing and the second housing; a bridge flex comprising one or more electrical connections between the first electronic components and the second electronic components, the bridge flex routed across the hinge assembly; and a guide attached to the bridge flex, wherein the hinge assembly further comprises a track configured to constrain movement of the guide along an axis substantially perpendicular to an apex of the continuous display between a first position and a second position as an angle between the first housing and the second housing changes.
According to at least one implementation, a method includes identifying a gaze associated with a user of a device and identifying a first state of a gesture from the user. The method further includes causing display of a cursor over a first portion of content on a display of the device based on the gaze and the first state of the gesture. The method also includes identifying a second state of the gesture from the user and causing display of the cursor over a second portion of the content on the display based on the gaze and the second state of the gesture, the second portion being different than the first portion.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04812 - Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 1/16 - Constructional details or arrangements
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
82.
Systems and Methods of Delegated Analytic Collection
A method including tagging a feature of a mobile application as an event generator, in response to a user interacting with the feature of the mobile application on a mobile device, generating an event having an event type, requesting, by an interaction measurement software development kit (SDK) from the mobile application, interaction data associated with the user interaction with the mobile application based on the event type, and securely transmitting, by the interaction measurement SDK, the interaction data to a first computing device indicated by the interaction measurement SDK.
A method and system are disclosed for adapting speech synthesis according to user-interface input. While synthesizing speech from a text segment with a text-to-speech (TTS) system and concurrently displaying the text segment in a display device, the system may receive tracking operation input tracking a portion of text undergoing synthesis and identifying a context portion of the text for which prior-synthesized speech has been synthesized at a canonical speech-pace. The tracking information may be used to adjust a speech-pace of TTS synthesis of the portion from the canonical speech-pace to an adapted speech-pace, and speech characteristics of synthesized speech of the portion may be adapted by applying both the adapted speech-pace and synthesized speech characteristics of the prior-synthesized speech of the context portion to TTS synthesis processing of the portion. The synthesized speech of the identified portion may be output at the adapted speech-pace and with the adapted speech characteristics.
G10L 13/02 - Methods for producing synthetic speech; Speech synthesisers
G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
84.
USING LARGE LANGUAGE MODEL IN REDUCING EXTENT OF CALENDAR RELATED INTERACTION
Some implementations process structured calendar data of an electronic calendar for a first user, to generate a natural language representation of the structured calendar data. Versions of those implementations further, in response to receiving a query determined to be relevant to the electronic calendar, prime a large language model (LLM) using a priming input (e.g., process the priming input using the LLM), where the priming input is based on the natural language representation of the structured calendar data. Following priming of the LLM using the priming input, some of those versions process, using the LLM, query input that is based on the query, to generate a LLM output and determine, based on the LLM output, a response to the query. The response can include a natural language response that can be rendered.
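The priming step described above can be sketched as prompt construction. The event field names and the prompt template are assumptions for illustration, not from any particular calendar API:

```python
# Illustrative sketch: render structured calendar data as natural language,
# then combine it with the user's query into a single priming input for an LLM.

def calendar_to_natural_language(events):
    """Render structured calendar entries as plain sentences."""
    lines = []
    for e in events:
        lines.append(
            f"On {e['date']} from {e['start']} to {e['end']}, "
            f"the user has '{e['title']}' with {', '.join(e['attendees'])}."
        )
    return " ".join(lines)

def build_priming_input(events, query):
    """Combine the calendar summary with the query for LLM processing."""
    summary = calendar_to_natural_language(events)
    return (f"Calendar context: {summary}\n"
            f"Answer the following question using that context.\n"
            f"Question: {query}")
```

Processing this single input with the LLM replaces what would otherwise be several rounds of calendar-related interaction.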
Aspects of the disclosure are directed to merging data lake openness with scalable metadata for managed tables in a cloud database platform, allowing for atomicity, consistency, isolation, and durability (ACID) transactions, performant data manipulation language (DML), higher throughput stream ingestion, data consistency, schema evolution, time travel, clustering, fine-grained security, and/or automatic storage optimization. Table data is stored in various open-source file formats in cloud storage while physical metadata of the table data is stored in a scalable metadata storage system.
A computing device may detect user input on a presence-sensitive screen. In response to detecting the input, the computing device obtains indications representing the input and generates a touch sensing image from those indications. Information extracted from the touch sensing image is then provided as input to an artificial intelligence (AI) model. The computing device applies the AI model to the extracted information to generate a distribution of candidate keys and their corresponding scores based on the touch sensing image. From this distribution, the computing device selects an alphanumeric key, which is then output to the device's user interface.
G06F 3/04886 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
G06F 40/40 - Processing or translation of natural language
G06T 11/20 - Drawing from basic elements, e.g. lines or circles
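The candidate-key distribution in the abstract above can be sketched as a softmax over per-key model scores. The key names and logit values here are illustrative assumptions:

```python
import numpy as np

# Hypothetical sketch of selecting an alphanumeric key from a score
# distribution; the scores are assumed to be unnormalized logits produced
# by a model from the touch sensing image.

def select_key(candidate_keys, logits):
    """Softmax the logits into a distribution and pick the most likely key."""
    logits = np.asarray(logits, dtype=float)
    exp = np.exp(logits - logits.max())   # subtract max for numerical stability
    probs = exp / exp.sum()
    return candidate_keys[int(np.argmax(probs))], probs
```

Keeping the full distribution, rather than only the argmax, is what allows downstream features such as autocorrection to reconsider lower-scoring keys.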
87.
FINE-TUNING GENERATIVE NEURAL NETWORKS TO IMPROVE FEW-SHOT PERFORMANCE
Systems and methods for training a generative neural network, e.g., a large language model (LLM) neural network. The generative neural network is trained on training examples that each include (i) a training input that includes a training query for a corresponding task and a subset of demonstration examples for the task that are most similar to the training query and (ii) the ground truth output for the training query for the task.
Implementations determine, based on account data for an account of a user, target content that is likely to be undesired by the user. Those implementations further determine whether the target content is included in certain content, such as a video or a webpage, that is being rendered or is to be rendered at a client device of the user. Yet further, those implementations perform remediating action(s) in response to determining that the target content is included in the certain content. The remediating action(s) that are performed can reduce or eliminate a quantity of user inputs and/or a duration of time needed for bypassing at least segment(s), of the certain content, that are determined to include the target content.
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
H04N 21/2387 - Stream processing in response to a playback request from an end-user, e.g. for trick-play
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
A method for training hotword detection includes receiving a training input audio sequence including a sequence of input frames that define a hotword that initiates a wake-up process on a device. The method also includes feeding the training input audio sequence into an encoder and a decoder of a memorized neural network. Each of the encoder and the decoder of the memorized neural network include sequentially-stacked single value decomposition filter (SVDF) layers. The method further includes generating a logit at each of the encoder and the decoder based on the training input audio sequence. For each of the encoder and the decoder, the method includes smoothing each respective logit generated from the training input audio sequence, determining a max pooling loss from a probability distribution based on each respective logit, and optimizing the encoder and the decoder based on all max pooling losses associated with the training input audio sequence.
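The per-frame logit smoothing and max-pooling loss described above can be sketched as follows. The exponential-moving-average smoother and the exact loss form are illustrative assumptions rather than the disclosed formulation:

```python
import numpy as np

# Sketch of smoothing per-frame logits and computing a max-pooling loss:
# only the most confident (peak) frame contributes to the cross-entropy,
# so the model is free to fire on any single frame of a positive example.

def smooth_logits(logits, alpha=0.8):
    """Exponential moving average over the per-frame logits."""
    smoothed = np.zeros_like(logits, dtype=float)
    smoothed[0] = logits[0]
    for t in range(1, len(logits)):
        smoothed[t] = alpha * smoothed[t - 1] + (1 - alpha) * logits[t]
    return smoothed

def max_pooling_loss(logits, contains_hotword):
    """Cross-entropy on the max (most confident) smoothed frame."""
    probs = 1.0 / (1.0 + np.exp(-smooth_logits(logits)))
    peak = probs.max()
    # Positive example: the peak frame should fire; negative: no frame should.
    return -np.log(peak) if contains_hotword else -np.log(1.0 - peak)
```

In the disclosed setup this loss would be computed separately for the encoder and the decoder logits, and both losses optimized jointly.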
The technology is directed to systems and methods of high-bandwidth memory allocation. High-bandwidth memory may include a plurality of data channels and shareable memory that can be selectively allocated to particular data channels. In addition, bandwidth may be selectively allocated to the data channels independent of the shareable memory. The allocation of memory and bandwidth to particular data channels may be based on identified attributes of workloads that are to be associated with each data channel.
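A proportional policy for the independent memory and bandwidth allocation described above can be sketched as follows. The workload attribute names (`working_set`, `throughput`) and the proportional rule are assumptions for illustration:

```python
# Illustrative sketch: shareable memory follows each channel's working-set
# size while bandwidth follows its required throughput, so the two
# resources are allocated to data channels independently of each other.

def allocate(channels, total_memory, total_bandwidth):
    """channels maps name -> {'working_set': bytes, 'throughput': bytes/s}."""
    mem_total = sum(c['working_set'] for c in channels.values())
    bw_total = sum(c['throughput'] for c in channels.values())
    return {
        name: {
            'memory': total_memory * c['working_set'] / mem_total,
            'bandwidth': total_bandwidth * c['throughput'] / bw_total,
        }
        for name, c in channels.items()
    }
```

Decoupling the two lets a small-footprint but bandwidth-hungry workload receive most of the bandwidth without also claiming most of the shareable memory.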
A method for digital shared connections spaces includes causing a collaborative visual space to be presented to one or more participants of a shared connections space. The collaborative visual space includes one or more images each representing a media item. The method includes receiving, from a first client device of a first participant, a first media item and an indication of a location of the first media item in the collaborative visual space. The method includes causing an image of the first media item to be added to the collaborative visual space at the indicated location. The method includes, responsive to a second client device of a second participant accessing the image of the first media item, causing the first media item to perform an action in the collaborative visual space presented on the second client device.
H04L 65/403 - Arrangements for multi-party communication, e.g. for conferences
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
H04L 12/18 - Arrangements for providing special services to substations for broadcast or conference
H04L 65/1089 - In-session procedures by adding media; In-session procedures by removing media
92.
SYSTEMS AND METHODS FOR USING ARTIFICIAL INTELLIGENCE WITH DIGITAL SHARED CONNECTIONS SPACES
A method for using artificial intelligence (AI) with a digital shared connections space includes causing a virtual meeting user interface (UI) to be presented during a virtual meeting between one or more participants. The method includes determining, using an AI model and one or more participant actions during the virtual meeting as input to the AI model, that at least one participant is interested in using a shared connections space that is configured to present one or more images of one or more media items referenced during the virtual meeting. The method includes instructing a shared connections space platform to generate the shared connections space. The one or more images of the one or more media items referenced during the virtual meeting are viewable on a shared connections space UI after the virtual meeting is concluded.
A liquid-cooled rack for liquid-cooling trays of computing equipment includes an inlet manifold including an inlet for receiving a coolant at a first temperature and at least one outlet interface for discharging the coolant to one of the liquid-cooled trays; and an outlet manifold including at least one inlet interface for receiving the coolant at a second temperature higher than the first temperature from one of the liquid-cooled trays and an outlet for discharging the coolant. At least one of the inlet manifold or the outlet manifold includes at least one coolant drain port to facilitate the draining and purge drying process. Drain ports can also be added to each tray. The draining and purge drying process can be enhanced by using high-pressure air (desiccated and/or heated), flushing with a volatile fluid, tilting, vibration, or vacuuming.
According to at least one implementation, a method includes identifying a gaze associated with a user of a device and identifying a first state of a gesture from the user. The method further includes causing display of a cursor over a first portion of content on a display of the device based on the gaze and the first state of the gesture. The method also includes identifying a second state of the gesture from the user and causing display of the cursor over a second portion of the content on the display based on the gaze and the second state of the gesture, the second portion being different than the first portion.
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
95.
REPRESENTATION LEARNING MODELS FOR IMPROVED GENOMICS
Improved methods for determining full-genome associations with phenotype data represented by medical images, ECG traces, spirometry traces, or other high-dimensional phenotype-representing physiosignals are provided. These methods include training an encoder, as part of an autoencoder, to project input physiosignals into a phenotypically representative set of lower-dimensional latent variables. In some examples, the latent variables are augmented by clinical correlates of the input physiosignals (e.g., a forced vital capacity determined from a spirometry trace). The latent variables and/or clinical correlates are then used to determine genetic loci that are associated with each of the latent variables. These associations can then be used to focus drug development and/or to predict polygenic scores for rare diseases for which sufficient data may not be available for a full genome-wide association study or other genomic data-to-phenotype association.
A system of multiple radar-enabled computing devices, along with related techniques, are described in this document. These techniques are employed with this system to coordinate information and operations across multiple radar-enabled computing devices to create a seamless experience. In particular, each computing device of the computing system may have access to stored radar-signal characteristics that enable detection and distinction of users and detection and recognition of gestures. Computing devices may coordinate in-progress operations to provide continuity across multiple devices. When positioned in different locations, each device may also learn, over time, the users, gestures, and versions of gestures associated with that location in order to anticipate them in the future.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G01S 7/41 - Details of systems according to groups , , of systems according to group using analysis of echo signal for target characterisation; Target signature; Target cross-section
G01S 13/50 - Systems of measurement based on relative movement of target
G01S 13/88 - Radar or analogous systems, specially adapted for specific applications
97.
TIME-EFFICIENT IMPLEMENTATION OF CACHE REPLACEMENT POLICY
A cache includes multiple sets with each set having multiple respective ways, and replacement logic configured to implement a two-stage least recently used (LRU) replacement computation. The two-stage LRU replacement computation causes the cache to perform a first stage during which the cache computes an LRU way for a set, and a second stage during which the cache updates an LRU data structure with information of a transaction accessed way.
G06F 12/123 - Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
G06F 12/126 - Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning
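The two-stage LRU replacement computation in entry 97 can be sketched with a simple age-counter LRU data structure. The age-counter representation is an assumption for illustration; hardware implementations typically use more compact encodings:

```python
# Minimal sketch of a two-stage LRU computation for a set-associative cache.
# Stage 1 computes the LRU way for a set; stage 2 separately updates the
# LRU data structure with the way actually accessed by the transaction.

class LruReplacement:
    def __init__(self, num_sets, num_ways):
        # ages[s][w]: higher value means more recently used.
        self.ages = [[0] * num_ways for _ in range(num_sets)]

    def compute_lru_way(self, set_index):
        """Stage 1: find the least recently used way in the set."""
        ages = self.ages[set_index]
        return ages.index(min(ages))

    def update_accessed_way(self, set_index, way):
        """Stage 2: mark the transaction's accessed way as most recent."""
        ages = self.ages[set_index]
        ages[way] = max(ages) + 1
```

Splitting the computation this way keeps the victim selection off the critical path of the access that triggers the update.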
98.
Ambiguous Gesture Determination Using Contextual Information
Techniques and devices for ambiguous gesture determination using contextual information are described in this document for radar-enabled computing devices. Contextual information may include a status of operations that are performed by the radar-enabled computing device or an associated device at a current time, past time, or future time. Contextual information may also or instead include foreground and background operations, a history of operations saved to a memory, scheduled or anticipated operations, a location of a user or device, room-related context, user habits, and so forth. Two or more computing devices may coordinate this contextual information across a communication network to form a computing system.
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for detecting and decoding a visually imperceptible or perceptible watermark. A watermark detection apparatus determines whether a particular image includes a visually imperceptible or perceptible watermark using a detector machine learning model. If the watermark detection apparatus detects a watermark, the particular image is routed to a watermark decoder. If the watermark detection apparatus cannot detect a watermark in the particular image, the particular image is filtered from further processing. The watermark decoder decodes the visually imperceptible or perceptible watermark detected in the particular image. After decoding, an item depicted in the particular image is validated based on data extracted from the decoded watermark.
G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
G06T 5/20 - Image enhancement or restoration using local operators
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
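The detect-then-decode routing in the watermark abstract above can be sketched schematically. The `detector`, `decoder`, and `validator` callables stand in for the trained models and validation logic and are assumptions for illustration:

```python
# Schematic sketch of the watermark pipeline: detection gates decoding,
# so images without a detectable watermark are filtered out early and
# never reach the (more expensive) decoder.

def process_image(image, detector, decoder, validator):
    """Route an image through detection, decoding, and validation.

    Returns None when the image is filtered from further processing,
    otherwise the validator's result for the decoded payload.
    """
    if not detector(image):
        # No watermark found: filter the image from further processing.
        return None
    payload = decoder(image)          # decode the detected watermark
    return validator(image, payload)  # validate the depicted item
```

The early-exit structure is the point: the lightweight detector acts as a filter so only candidate images incur decoding cost.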
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing topological scheduling on a machine-learning accelerator having an array of tiles. One of the methods includes performing, at each time step of a plurality of time steps corresponding respectively to columns within each of a plurality of wide columns of the tile array, operations comprising: performing respective multiplications using tiles in a respective tile column for the time step, computing a respective output result for each respective tile column for the time step including computing a sum of results of the multiplications for the tile column, and storing the respective output result for the tile column in a particular output RAM having a location within the same tile column and on a row from which the output result will be read by a subsequent layer of the model.