09 - Scientific and electric apparatus and instruments
Goods & Services
Computer software for editing musical recordings;
reverberators; computer software for generating high-end
audio effects; electroacoustic apparatus for use in sound,
recording, television and film studios to generate room
acoustic effects.
41 - Education, entertainment, sporting and cultural services
Goods & Services
Providing entertainment, sports, music, reality,
documentary, and arts and culture programming by means of
telecommunications networks, computer networks, the
Internet, and wireless communications networks; providing
non-downloadable entertainment, sports, music, reality,
documentary, and arts and culture programming; providing
non-downloadable pre-recorded music, video, and graphics for
use on mobile communications devices.
09 - Scientific and electric apparatus and instruments
Goods & Services
Downloadable audio files, image files, video files, music
files and multimedia content; downloadable audio and visual
recordings, musical recordings, movies, films and television
shows; downloadable films and videos featuring 3D and
360-degree viewing in the field of entertainment and sports.
Aspects of the subject technology relate to providing frame rate arbitration for electronic devices. Frame rate arbitration can include determining a global frame rate based on frame rate parameters from one or more animation sources, and providing the global frame rate to the animation sources. The frame rate parameters for various animation sources can have differing preferred, minimum, and/or maximum frame rates, and the global frame rate may be determined for concurrent display of multiple animations from the multiple animation sources. In one or more implementations, frame rate arbitration can also be performed based on frame rate parameters from an input source.
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
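A minimal sketch of the frame rate arbitration idea in the abstract above. The FrameRateParams fields and the clamp-to-common-range rule are illustrative assumptions, not the claimed method:

```python
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class FrameRateParams:
    preferred: float
    minimum: float = 0.0
    maximum: float = float("inf")

def arbitrate(sources: Iterable[FrameRateParams]) -> Optional[float]:
    """Pick one global frame rate that every concurrent animation source can accept."""
    sources = list(sources)
    if not sources:
        return None
    lower = max(s.minimum for s in sources)   # slowest rate any source tolerates
    upper = min(s.maximum for s in sources)   # fastest rate any source tolerates
    if lower > upper:
        return None                           # no common rate exists
    # Favor the highest preferred rate, clamped into the common range.
    return min(max(max(s.preferred for s in sources), lower), upper)

print(arbitrate([FrameRateParams(60, 30, 120), FrameRateParams(24, 24, 60)]))  # 60.0
```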
Some embodiments provide a mapping application that displays a rotation of a 3D map and corresponding rotation of a set of map labels overlaying the 3D map in response to receiving input to rotate the 3D map. When a particular map label in the set of map labels rotates towards an upside down orientation, the mapping application also replaces the particular map label with a version of the particular map label arranged in a right side up orientation to prevent the particular map label from being displayed in the upside down orientation in the 3D map.
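One way the upside-down label substitution described above could be sketched; the 90°–270° "upside down" band and the modular arithmetic are assumptions for illustration:

```python
def display_angle(label_angle_deg: float) -> float:
    """Return the angle at which to draw a map label so it never reads upside down."""
    a = label_angle_deg % 360.0
    if 90.0 < a < 270.0:            # label would appear upside down
        return (a + 180.0) % 360.0  # swap in the flipped, right-side-up version
    return a

assert display_angle(200.0) == 20.0   # flipped back to right side up
assert display_angle(45.0) == 45.0    # left unchanged
```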
The present disclosure generally describes user interfaces related to time. In accordance with embodiments, user interfaces for displaying and enabling an adjustment of a displayed time zone are described. In accordance with embodiments, user interfaces for initiating a measurement of time are described. In accordance with embodiments, user interfaces for enabling and displaying a user interface using a character are described. In accordance with embodiments, user interfaces for enabling and displaying a user interface that includes an indication of a current time are described. In accordance with embodiments, user interfaces for enabling configuration of a background for a user interface are described. In accordance with embodiments, user interfaces for enabling configuration of displayed applications on a user interface are described.
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
G04G 21/08 - Touch switches specially adapted for time-pieces
G06F 3/0362 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 1D translations or rotations of an operating part of the device, e.g. scroll wheels, sliders, knobs, rollers or belts
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
G06T 3/60 - Rotation of whole images or parts thereof
G06T 11/40 - Filling a planar surface by adding surface attributes, e.g. colour or texture
G06T 11/60 - Editing figures and text; Combining figures or text
Aspects of the subject technology relate to a device including a microphone and a processor. The processor receives an audio signal corresponding to the microphone. The processor detects one or more of an ambient sound or a voice of a user of the device in the received audio signal. The processor applies a first gain to the ambient sound when the ambient sound is detected in the received audio signal and applies a second gain different than the first gain to the voice of the user of the device when the voice of the user of the device is detected in the received audio signal.
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
H04R 3/04 - Circuits for transducers for correcting frequency response
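A hedged sketch of the two-gain behavior described above. The energy-based detector is a crude stand-in for any voice-activity detector, and the gain values are illustrative assumptions:

```python
import numpy as np

def is_voice_frame(frame: np.ndarray, energy_threshold: float = 0.01) -> bool:
    """Stand-in voice-activity check: treat high-energy frames as the user's voice."""
    return float(np.mean(frame ** 2)) > energy_threshold

def apply_split_gain(frame: np.ndarray, voice_gain: float = 2.0,
                     ambient_gain: float = 0.5) -> np.ndarray:
    """Apply one gain to frames classified as the user's voice, a different gain otherwise."""
    gain = voice_gain if is_voice_frame(frame) else ambient_gain
    return frame * gain
```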
9.
LAYER 1 MEASUREMENT ON SYNCHRONIZATION SIGNAL BLOCK-LESS SECONDARY CELL
The present application relates to devices and components including apparatus, systems, and methods to configure a synchronization signal block (SSB)-less secondary cell (SCell) for layer 1 measurement.
In an embodiment, an automation controller periodically generates stop trajectories and controls actuators to follow the stop trajectories. As long as new stop trajectories continue to be generated, the automation controller may follow a destination trajectory that is formed from the first portion of each stop trajectory. If stop trajectories are not generated for a period of time (e.g., due to failure in one or more computers generating the stop trajectories), the automation controller may continue to follow the most recent stop trajectory and bring the mobile machine to a stop.
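A sketch of the stop-trajectory watchdog behavior described above, assuming a simple list of setpoints per stop trajectory; the timeout value and the follower structure are hypothetical placeholders:

```python
import time
from typing import List, Optional

class StopTrajectoryFollower:
    """Follow the first portion of each fresh stop trajectory; if no new one arrives
    within timeout_s, keep following the most recent one until the machine stops."""

    def __init__(self, timeout_s: float = 0.2):
        self.timeout_s = timeout_s
        self.latest: Optional[List[float]] = None  # most recent stop trajectory (setpoints)
        self.latest_time = 0.0
        self.index = 0                             # progress along the latest trajectory

    def on_new_stop_trajectory(self, traj: List[float]) -> None:
        self.latest = traj
        self.latest_time = time.monotonic()
        self.index = 0

    def next_setpoint(self) -> Optional[float]:
        if self.latest is None:
            return None
        if time.monotonic() - self.latest_time < self.timeout_s:
            # Fresh plans keep arriving: following only the first portion of each
            # stitches together into the destination trajectory.
            return self.latest[0]
        # Planner went silent: continue along the most recent stop trajectory to a halt.
        self.index = min(self.index + 1, len(self.latest) - 1)
        return self.latest[self.index]
```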
An integrated sensor package for an electronic device may include a matrix material defining a body structure of the integrated sensor package, a light emitting diode at least partially encapsulated in the matrix material, a photodiode at least partially encapsulated in the matrix material and configured to detect light emitted by the light emitting diode and reflected by an object external to the integrated sensor package, a via structure at least partially encapsulated in the matrix material, a permanent magnet at least partially encapsulated in the matrix material, a first conductive member on a first side of the integrated sensor package and conductively coupling the light emitting diode to a first end of the via structure, and a second conductive member on a second side of the integrated sensor package opposite the first side and conductively coupled to a second end of the via structure.
H01L 31/167 - SEMICONDUCTOR DEVICES NOT COVERED BY CLASS - Details thereof structurally associated with, e.g. formed in or on a common substrate with, one or more electric light sources, e.g. electroluminescent light sources, and electrically or optically coupled thereto the semiconductor device sensitive to radiation being controlled by the light source or sources the light sources and the devices sensitive to radiation all being semiconductor devices characterised by at least one potential or surface barrier
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
G04C 10/00 - Arrangements of electric power supplies in time-pieces
G04G 21/02 - Detectors of external physical values, e.g. temperature
G06F 1/16 - Constructional details or arrangements
H02J 50/10 - Circuit arrangements or systems for wireless supply or distribution of electric power using inductive coupling
A cartridge including a housing, a processor positioned within the housing to provide video output to a display unit of a head-mounted display (HMD), and an attachment interface configured to removably attach the cartridge to the display unit.
Systems and methods for providing non-active subscriber identity module (SIM) services leveraging multiple SIMs configured at multiple UEs are disclosed herein. A primary user equipment (UE) activates a first SIM of a dual SIM dual standby (DSDS) configuration used by the primary UE to perform active-mode network communications on the first SIM and sends, to a secondary UE, an indication to activate a second SIM of the DSDS configuration in response to activating the first SIM. The secondary UE receives the indication to activate the second SIM of the DSDS configuration used by the primary UE and activates the second SIM for use by the secondary UE in response to receiving the indication. Accordingly, the secondary UE may handle any network communications on the second SIM during the network communications on the first SIM at the primary UE.
A hearing aid profile is updated by sending a hearing aid profile update request to a hearing aid profile service, receiving the updated hearing aid profile from the hearing aid profile service, and replacing the current hearing aid profile in the hearing aid with the updated hearing aid profile. Other aspects are also described and claimed.
An image sensor includes a semiconductor substrate. The semiconductor substrate includes a set of one or more substrate portions. Each substrate portion of the set of one or more substrate portions is electrically isolated from other substrate portions of the set of substrate portions. The image sensor further includes a set of photodiodes, a set of charge storage nodes, a set of charge transfer gates, and a control circuit. Each charge transfer gate of the set of charge transfer gates is disposed on and biased by a different substrate portion of the set of substrate portions. Each charge transfer gate of the set of charge transfer gates is operable to selectively connect a respective photodiode of the set of photodiodes to the charge storage node. The control circuit is operable to dynamically bias each substrate portion of the set of substrate portions independently of each other substrate portion of the set of substrate portions.
A system may include an electronic device such as a head-mounted device and a handheld input device for controlling the electronic device. A lanyard may be removably attached to the handheld input device or a non-electronic object. The lanyard may include visual markers, such as infrared light-emitting diodes and/or fiducials, that can be detected by an external camera and used to track a location of the lanyard. For example, the lanyard may be fabric, and the visual markers may be incorporated into the fabric or attached to the fabric. The lanyard may also include motion sensors, visual-inertial odometry cameras, or other sensors to determine the location of the lanyard. The lanyard may be electrically coupled to the handheld input device, such as to transfer power and/or data. Alternatively, the lanyard may be coupled to a non-electronic object and may include a battery and/or a haptic output component.
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
A housing for an electronic device can include an exterior titanium portion, an interior metal joined to the exterior titanium portion, the interior metal being a different metal than the exterior titanium portion, and an intermetallic compound having a thickness of less than 1 μm disposed between the interior metal and the exterior titanium portion.
The subject technology receives assessment values determined by a first machine learning model deployed on a client electronic device, the assessment values being indicative of classifications of input data and the assessment values being associated with constraint data that comprises a probability distribution of the assessment values with respect to the classifications of the input data. The subject technology applies the assessment values determined by the first machine learning model to a second machine learning model to determine the classifications of the input data. The subject technology determines whether accuracies of the classifications determined by the second machine learning model conform with the probability distribution for corresponding assessment values determined by the first machine learning model. The subject technology retrains the first machine learning model when the accuracies of the classifications determined by the second machine learning model do not conform with the probability distribution.
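A hedged sketch of the conformance check described above, treating the on-device model's assessment values as predicted confidences and comparing observed accuracy per confidence bucket against the expected distribution. The bucketing and tolerance are assumptions:

```python
from collections import defaultdict

def needs_retraining(records, tolerance: float = 0.1) -> bool:
    """records: iterable of (assessment_value, correct) pairs, where `correct` is whether
    the second model's classification agreed with ground truth for that assessment value."""
    buckets = defaultdict(list)
    for value, correct in records:
        buckets[round(value, 1)].append(correct)       # group by predicted confidence
    for expected, outcomes in buckets.items():
        observed = sum(outcomes) / len(outcomes)
        if abs(observed - expected) > tolerance:       # accuracy does not match the distribution
            return True                                # trigger retraining of the first model
    return False

print(needs_retraining([(0.9, True), (0.9, True), (0.9, False), (0.2, False)]))  # True
```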
This application describes a phased approach to provision eSIM profiles to a wireless device. Credentials are preloaded to an eUICC during manufacture of the eUICC and used subsequently to load eSIM profiles to the eUICC without requiring an active, real-time connection to an MNO provisioning server. Multiple bound profile packages (BPPs) can be pre-generated and encrypted by MNO provisioning servers for an eUICC and transferred to a BPP aggregator server before assembly of the eUICC in a respective wireless device. A local provisioning server in a manufacturing facility mutually authenticates and connects to the BPP aggregator server to download and store one or more of the encrypted BPPs for later installation on the eUICC. The local provisioning server subsequently mutually authenticates and connects to the eUICC to load at least one of the one or more pre-generated, encrypted BPPs to the eUICC during assembly and/or testing of the wireless device.
In an example method, a mobile device obtains a signal indicating an acceleration measured by a sensor over a time period. The mobile device determines an impact experienced by the user based on the signal. The mobile device also determines, based on the signal, one or more first motion characteristics of the user during a time prior to the impact, and one or more second motion characteristics of the user during a time after the impact. The mobile device determines that the user has fallen based on the impact, the one or more first motion characteristics of the user, and the one or more second motion characteristics of the user, and in response, generates a notification indicating that the user has fallen.
A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
G01C 5/06 - Measuring height; Measuring distances transverse to line of sight; Levelling between separated points; Surveyors' levels; by using barometric means
G01C 21/12 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning
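A minimal sketch of the fall-detection logic described in the abstract above. The thresholds, window lengths, and standard-deviation features are illustrative assumptions, not the claimed method:

```python
import numpy as np

def detect_fall(accel: np.ndarray, fs: float = 100.0,
                impact_thresh: float = 3.0,        # g, minimum impact magnitude
                pre_activity_thresh: float = 0.3,  # g, motion variability before impact
                post_still_thresh: float = 0.1) -> bool:  # g, stillness after impact
    """accel: 1-D array of acceleration magnitudes (in g) over a time window."""
    impact_idx = int(np.argmax(accel))
    if accel[impact_idx] < impact_thresh:
        return False                                # no impact detected
    pre = accel[max(0, impact_idx - int(2 * fs)):impact_idx]
    post = accel[impact_idx + 1:impact_idx + 1 + int(2 * fs)]
    was_moving = pre.size > 0 and np.std(pre) > pre_activity_thresh
    is_still = post.size > 0 and np.std(post) < post_still_thresh
    return was_moving and is_still                  # impact + motion before + stillness after
```

If this returns True, the device would generate the "user has fallen" notification described above.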
While displaying a three-dimensional scene including physical elements and virtual elements, a computer system detects a sequence of user inputs for increasing a level of immersion of the three-dimensional scene. In response, the computer system increases the quantity of virtual elements displayed in the scene, including displaying an animated transition to replace a portion of a first region occupied by a first set of physical elements with virtual elements in response to a first input, and displaying another animated transition that replaces a portion of a second region occupied by a subset of the remaining physical elements with virtual elements in response to a second input following the first input, wherein virtual elements occupy increasing portions of the scene after each of the first input and the second input.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06V 10/143 - Sensing or illuminating at different wavelengths
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
Optoelectronic apparatus includes an array of emitters configured to emit respective beams of optical radiation. A steering module is mounted to intercept the emitted beams and includes an optical substrate and an active diffraction grating, which is fixed to the optical substrate and has a pitch that varies in response to an electrical signal applied thereto, so as to deflect the beams of optical radiation by a variable angle dependent on the pitch. An optical metasurface is disposed on the optical substrate and configured to collimate the beams of optical radiation so that the beams form a pattern of spots on a target scene. A controller is coupled to vary the electrical signal applied to the active diffractive grating so as to shift the pattern of spots across the target scene.
G02F 1/29 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the position or the direction of light beams, i.e. deflection
G03B 21/00 - Projectors or projection-type viewers; Accessories therefor
An actuator assembly comprising: a multi-layer board comprising a first side and a second side defining an actuation region having a thickness that is reduced relative to a remainder of the multi-layer board; a planar voice coil formed by a trace on at least one of the first side or the second side defining the actuation region; and a polarized magnet array having a magnetic field aligned to the planar voice coil to actuate the actuation region upon application of a voltage to the planar voice coil.
An electronic device includes a first interconnection between a first circuitry and a second circuitry that does not include a switch. A second interconnection may couple or connect third circuitry to the first interconnection via a first switch. Signals sent from the first circuitry to the second circuitry may have a higher priority and/or a different frequency than those sent from the third circuitry to the second circuitry. The first switch may be disposed near a first junction of the first interconnection and the second interconnection. In some embodiments, a ground or a termination resistance may also be coupled to the first interconnection via a second switch. The second switch may be coupled to the first interconnection at a second junction near the first circuitry.
H04B 1/00 - Details of transmission systems, not covered by a single one of groups; Details of transmission systems not characterised by the medium used for transmission
One embodiment described herein takes the form of a user equipment (UE), such as a phone. The UE includes a transceiver and a processor. The processor is configured to establish a first radio resource control (RRC) connection with a base station via the transceiver. The processor is configured to associate with a secondary UE for collaboration of transmission of a data payload of one of the UE or the secondary UE to the base station, and transmit a first portion of the data payload to the base station via the transceiver.
A wearable device that communicates with a host device can be used to initiate a communication functionality of the host device (e.g., telephone calls, text messages). The wearable device can obtain user input indicating a recipient of the communication and in some instances content for the communication and can provide an instruction to the host device. The host device can use the indicated recipient and content to initiate communication and where applicable to send the content. Recipients and/or content can be selected from predefined lists available on the wearable device.
H04M 1/72484 - User interfaces specially adapted for cordless or mobile telephones wherein functions are triggered by incoming communication events
H04M 1/72409 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
H04M 1/72412 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
H04M 1/7243 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
H04M 1/72454 - User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
H04W 68/00 - User notification, e.g. alerting or paging, for incoming communication, change of service or the like
A video decoding method according to the present invention may comprise: a step for determining whether to divide a current block into a plurality of sub-blocks; a step for determining an intra prediction mode for the current block; and a step for performing intra prediction for each sub-block on the basis of the intra prediction mode, when the current block is divided into the plurality of sub-blocks.
H04N 19/119 - Adaptive subdivision aspects e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/179 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scene or a shot
H04N 19/182 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
H04N 19/194 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding the adaptation method, adaptation tool or adaptation type being iterative or recursive involving only two passes
H04N 19/196 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
H04N 19/31 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the temporal domain
H04N 19/33 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
H04N 19/426 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements using memory downsizing methods
H04N 19/46 - Embedding additional information in the video signal during the compression process
H04N 19/61 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
H04N 19/625 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using discrete cosine transform [DCT]
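An illustrative decoder-side sketch of the sub-block intra prediction described in the abstract above: split a block into sub-blocks and run the same intra prediction mode on each. The quad-split rule and the `predict` callback are placeholders, not the codec's actual partitioning:

```python
def decode_block(block_size, split: bool, intra_mode, predict):
    """predict(x, y, w, h, mode) reconstructs one region; it is called once per
    sub-block when the current block is divided, otherwise once for the whole block."""
    w, h = block_size
    if not split:
        return [predict(0, 0, w, h, intra_mode)]
    subs = [(0, 0, w // 2, h // 2), (w // 2, 0, w - w // 2, h // 2),
            (0, h // 2, w // 2, h - h // 2), (w // 2, h // 2, w - w // 2, h - h // 2)]
    return [predict(x, y, sw, sh, intra_mode) for x, y, sw, sh in subs]
```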
An electronic device may include a transceiver with mixer circuitry that up-converts or down-converts signals based on a voltage-controlled oscillator (VCO) signal. The transceiver circuitry may include first, second, third, and fourth VCOs. Each VCO may include a VCO core that receives a control voltage and an inductor coupled to the VCO core. Fixed linear capacitors may be coupled between the VCO cores. A switching network may be coupled between the VCOs. Control circuitry may place the VCO circuitry in one of four different operating modes and may switch between the operating modes to selectively control current direction in each of the inductors. The VCO circuitry may generate the VCO signal within a respective frequency range in each of the operating modes. The VCO circuitry may exhibit a relatively wide frequency range across all of the operating modes while introducing minimal phase noise to the system.
H03B 5/12 - Generation of oscillations using amplifier with regenerative feedback from output to input with frequency-determining element comprising lumped inductance and capacitance active element in amplifier being semiconductor device
H03B 5/04 - Modifications of generator to compensate for variations in physical values, e.g. power supply, load, temperature
H03L 7/089 - Details of the phase-locked loop concerning mainly the frequency- or phase-detection arrangement including the filtering or amplification of its output signal the phase or frequency detector generating up-down pulses
H03L 7/097 - Details of the phase-locked loop concerning mainly the frequency- or phase-detection arrangement including the filtering or amplification of its output signal using a comparator for comparing the voltages obtained from two frequency to voltage converters
H03L 7/099 - Details of the phase-locked loop concerning mainly the controlled oscillator of the loop
The present disclosure generally relates to user interfaces for altering visual media. In some embodiments, user interfaces are described for capturing visual media (e.g., via a synthetic depth-of-field effect), playing back visual media (e.g., via a synthetic depth-of-field effect), editing visual media (e.g., that has a synthetic depth-of-field effect applied), and/or managing media capture.
H04N 23/959 - Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics
The present disclosure generally relates to user interfaces for editing avatars. In some embodiments, user interfaces are shown for editing an avatar and avatar stickers. In some embodiments, user interfaces are shown for editing colors of one or more avatar features.
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
31.
RANDOM ACCESS IN LOWER-LAYER TRIGGERED MOBILITY PROCEDURE
A method for performing a contention-free random access (CFRA) in a lower-layer triggered mobility (LTM) procedure from a serving cell to a target candidate cell is provided. The method includes determining, by a user equipment (UE), that random access response (RAR) reception is not configured for the UE. The method includes receiving, by the UE, a physical downlink control channel (PDCCH) order for transmitting a physical random access channel (PRACH) signal to the target candidate cell. The method includes determining, by the UE and based on the PDCCH order, a transmission power for transmitting the PRACH signal.
H04W 52/36 - Transmission power control [TPC] using constraints in the total amount of available transmission power with a discrete range or set of values, e.g. step size, ramping or offsets
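For context, PRACH transmission power is commonly computed in the generic open-loop form P = min(P_CMAX, target + pathloss); the sketch below shows only that generic form with placeholder parameter names, not the specific power determination claimed above:

```python
def prach_tx_power(p_cmax_dbm: float, preamble_target_dbm: float, pathloss_db: float) -> float:
    """Generic open-loop PRACH power: target received power plus pathloss, capped at P_CMAX."""
    return min(p_cmax_dbm, preamble_target_dbm + pathloss_db)

print(prach_tx_power(23.0, -100.0, 110.0))  # 10.0 dBm
```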
Electronic package structures and systems are described in which a 3D interconnect structure is integrated into a package redistribution layer and/or chiplet for power and signal delivery to a die. Such structures may significantly improve input/output (IO) density and routing quality for signals, while keeping power delivery feasible.
H01L 23/538 - Arrangements for conducting electric current within the device in operation from one component to another the interconnection structure between a plurality of semiconductor chips being formed on, or in, insulating substrates
H01L 25/065 - Assemblies consisting of a plurality of individual semiconductor or other solid-state devices all the devices being of a type provided for in a single subclass of subclasses , , , , or , e.g. assemblies of rectifier diodes the devices not having separate containers the devices being of a type provided for in group
H05K 1/11 - Printed elements for providing electric connections to or between printed circuits
H05K 1/18 - Printed circuits structurally associated with non-printed electric components
A separator of a battery cell includes first pores disposed through a first segment of a thickness of the separator, and second pores disposed through a second segment of the thickness. In an unwound state of the separator, a first cross-sectional area of the first pores differs in size from a second cross-sectional area of the second pores by a first amount. In a wound state of the separator, the first cross-sectional area is substantially equal in size to the second cross-sectional area, or the first cross-sectional area differs in size from the second cross-sectional area by a second amount that is less than the first amount.
In an aspect, a method is performed by user equipment (UE) that is in communication with a network. The method includes determining that an artificial intelligence (AI) model is to be used by the UE to perform an AI task. In response to the UE being in a radio resource control (RRC) idle or RRC inactive state, the UE sends, to the network, an AI indicator that is associated with the AI model. The UE receives, from the network, the AI model or a training dataset that is associated with the AI model.
A base station configured to decode, from signaling received from a user equipment (UE), a parameter indicating the UE supports gapless measuring for a first type of measurement of a Synchronization Signal Block (SSB), wherein the SSB is located in frequency within a channel bandwidth (CBW) of the UE and outside an active bandwidth part (BWP) of the UE in the CBW, determine, based on at least the parameter, the UE supports gapless measuring for a second type of measurement of the SSB and configure transceiver circuitry to transmit to the UE a configuration for the second type of measurement of the SSB, wherein the configuration comprises a gapless measurement.
A user equipment (UE) configured to identify a condition configured to trigger a power headroom report (PHR) for a first panel and a second panel of the UE that are configured for simultaneous multi-panel transmission (STxMP) and configure transceiver circuitry to transmit the PHR to a network using a physical uplink shared channel (PUSCH) on a transmission occasion using one of the first panel or the second panel of the UE.
H04W 52/36 - Transmission power control [TPC] using constraints in the total amount of available transmission power with a discrete range or set of values, e.g. step size, ramping or offsets
37.
CANDIDATE CELL ACTIVATION FOR LAYER 1/LAYER 2 TRIGGERED MOBILITY
A user equipment (UE) configured to decode, from signaling received from a base station, configuration information for a layer 1 (L1) /layer 2 (L2) triggered mobility (LTM) procedure, configure transceiver circuitry to transmit a channel state information (CSI) report for LTM to the base station of the serving cell, the CSI report comprising measurement data for one or more candidate cells, decode, from signaling received from the base station, a medium access control (MAC) control element (CE) cell switch command for the LTM procedure and switch from the serving cell to one of the candidate cells based on the decoded MAC CE cell switch command for the LTM procedure.
An apparatus configured to decode, based on signaling received from a base station, a configured maximum transmission power, wherein the signaling is radio resource control (RRC) signaling and generate, for simultaneous transmission via a first antenna panel and a second antenna panel, uplink transmissions that do not exceed the configured maximum transmission power.
H04W 52/36 - Transmission power control [TPC] using constraints in the total amount of available transmission power with a discrete range or set of values, e.g. step size, ramping or offsets
H04W 52/42 - TPC being performed in particular situations in systems with time, space, frequency or polarisation diversity
39.
TECHNIQUES TO SUPPORT CARRIER AGGREGATION WITH AGGREGATED BANDWIDTH REPORTING
Embodiments provide resource-efficient techniques to report UE carrier aggregation capabilities. Various embodiments may utilize a maximum aggregated bandwidth supported by a UE to reduce signaling overhead in reporting UE carrier aggregation capabilities. In various such embodiments, the maximum aggregated bandwidth supported by the UE may indicate that the UE is compatible with each combination of component carriers up to the maximum aggregated bandwidth. Thus, the need to report UE carrier aggregation capabilities for different combinations of component carriers can be reduced or removed. In some embodiments, the maximum aggregated bandwidth may include a maximum aggregated bandwidth per band combination. In many embodiments, the maximum aggregated bandwidth may include a maximum aggregated bandwidth per band in a band combination. In several embodiments, the maximum aggregated bandwidth may include a maximum aggregated bandwidth per carrier aggregation bandwidth class in a fallback group.
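A minimal sketch of the simplification described above: instead of listing every supported component-carrier combination, the UE reports one maximum aggregated bandwidth and the network checks each combination against it. The function and values are illustrative assumptions:

```python
def combination_supported(component_carrier_bw_mhz, max_aggregated_bw_mhz) -> bool:
    """A CC combination is treated as compatible if its total bandwidth fits under the reported cap."""
    return sum(component_carrier_bw_mhz) <= max_aggregated_bw_mhz

print(combination_supported([100, 100, 60], 300))      # True
print(combination_supported([100, 100, 60, 60], 300))  # False
```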
09 - Scientific and electric apparatus and instruments
Goods & Services
Downloadable and recorded computer software for storing,
recording, encrypting, searching, accessing, prompting,
viewing, updating, sharing, synchronizing, and managing
passwords, passkeys, and other personal identity
information; downloadable and recorded computer software for
the suggestion and generation of passwords for use with
third-party applications and websites and for providing
alerts on data breaches and the security of passwords;
downloadable and recorded computer software for the
automatic retrieval and insertion of passwords, passkeys,
and other personal identity information for use with
third-party applications and websites; downloadable and
recorded computer software for user identification and
authentication; downloadable and recorded computer software
for protecting and monitoring the security of passwords,
passkeys, and other personal identity information, and for
providing related alerts and notifications; downloadable and
recorded computer software used in developing other software
applications.
Techniques are disclosed relating to selective rate limiting and reducing clock frequency of fabric circuitry in response to certain power management events. Disclosed techniques may advantageously allow power management circuitry to reduce or avoid negative impacts of power events by reducing the clock frequency of a communication fabric while using rate limiting of relatively lower-priority traffic to reduce impacts of the frequency reduction on high-priority traffic. For example, rate limiting of lower-quality-of-service virtual channels may continue after recovery of the clock frequency until higher-quality-of-service virtual channels have recovered from the frequency reduction.
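A hedged sketch of the recovery ordering described above: on a power event the fabric clock drops and low-QoS virtual channels are rate limited; the clock is restored first, and the low-QoS rate limiting is lifted only once high-QoS traffic has recovered. The state names and recovery test are assumptions:

```python
class FabricPowerManager:
    def __init__(self):
        self.clock_reduced = False
        self.low_qos_limited = False

    def on_power_event(self) -> None:
        self.clock_reduced = True        # reduce fabric clock frequency
        self.low_qos_limited = True      # throttle lower-priority virtual channels

    def on_power_event_cleared(self, high_qos_backlog_empty: bool) -> None:
        self.clock_reduced = False       # restore fabric clock frequency immediately
        if high_qos_backlog_empty:
            self.low_qos_limited = False  # release low-QoS rate limits only after high-QoS recovery
```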
This disclosure relates to techniques for time division multiplexing sounding reference signal ports in a wireless communication system. A wireless device and a cellular base station may establish a wireless link. The wireless device may be configured to perform a sounding reference signal transmission with time division multiplexing sounding reference signal ports. A maximum transmit power for the wireless device for the sounding reference signal transmission may be determined based at least in part on the time division multiplexing sounding reference signal ports. A transmit power may be selected for the sounding reference signal transmission based at least in part on the maximum transmit power for the wireless device for the sounding reference signal transmission. The sounding reference signal transmission may be performed using the selected transmit power.
H04W 52/36 - Transmission power control [TPC] using constraints in the total amount of available transmission power with a discrete range or set of values, e.g. step size, ramping or offsets
Routing substrates, methods of manufacture, and electronic assemblies including routing substrates are described. In an embodiment, a routing substrate includes a plurality of metal routing layers, a plurality of dielectric layers including a top dielectric layer forming a topmost surface, and a cavity formed in the topmost surface. The cavity may include a bottom cavity surface, a first plurality of first surface mount (SMT) metal bumps embedded within the top dielectric layer and protruding from the topmost surface of the top dielectric layer, and a second plurality of second SMT metal bumps embedded within an intermediate dielectric layer of the plurality of dielectric layers and protruding from the bottom cavity surface.
A flexible busbar comprises a first busbar layer including a first plurality of busbar segments and a first bend between first busbar segments of the first plurality of busbar segments and a second busbar layer coupled to the first busbar layer. The second busbar layer includes a second plurality of busbar segments and a second bend between second busbar segments of the second plurality of busbar segments. The first plurality of busbar segments is substantially colinear with the second plurality of busbar segments and the second bend is substantially parallel with the first bend.
H01M 50/503 - Interconnectors for connecting terminals of adjacent batteries; Interconnectors for connecting cells outside a battery casing characterised by the shape of the interconnectors
H01M 50/204 - Racks, modules or packs for multiple batteries or multiple cells
H01M 50/514 - Methods for interconnecting adjacent batteries or cells
A battery pack that comprises a plurality of battery cells and a plurality of plates coupled between adjacent battery cells of the plurality of battery cells. The plates include a plate body and support ledges extending from the plate body. The battery pack further comprises a plurality of support tabs. Each support tab of the plurality of support tabs is positioned between, and abuts against, support ledges of adjacent plates of the plurality of plates. Each support tab of the plurality of support tabs secures adjacent plates of the plurality of plates to each other. The battery pack further comprises a busbar electrically coupling the plurality of battery cells together.
H01M 50/291 - Mountings; Secondary casings or frames; Racks, modules or packs; Suspension devices; Shock absorbers; Transport or carrying devices; Holders characterised by spacing elements or positioning means within frames, racks or packs characterised by their shape
H01M 50/204 - Racks, modules or packs for multiple batteries or multiple cells
H01M 50/244 - Secondary casings; Racks; Suspension devices; Carrying devices; Holders characterised by their mounting method
H01M 50/502 - Interconnectors for connecting terminals of adjacent batteries; Interconnectors for connecting cells outside a battery casing
H01M 50/516 - Methods for interconnecting adjacent batteries or cells by welding, soldering or brazing
A head-mountable device includes a display portion, a facial interface, a stiffness profile modifier, a sensor, and a securement assembly. The display portion includes a display. The facial interface has a variable stiffness profile. The stiffness profile modifier is configured to automatically change the facial interface from having a first stiffness profile to having a second stiffness profile in response to sensor data. The sensor is configured to generate the sensor data. The securement assembly is connectable to the display portion. The securement assembly includes a removable strap and a retention band that is connectable to the removable strap. The removable strap includes electronics.
In one implementation, a method of storing object information in association with contextual information is performed at a device including an image sensor, one or more processors, and non-transitory memory. The method includes capturing, using the image sensor, an image of an environment. The method includes detecting a user engagement with an object in the environment based on the image of the environment. The method includes, in response to detecting the user engagement with the object, obtaining information regarding the object, obtaining contextual information, and storing, in a database, an entry including the information regarding the object in association with the contextual information.
G06F 16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
G06F 16/587 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
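A small sketch of the storage step described in the abstract above: one database entry pairing information about the engaged-with object with contextual information. The schema and field names are hypothetical:

```python
import sqlite3
import time

def store_engagement(db: sqlite3.Connection, object_info: str, context: str) -> None:
    """Store one entry associating object information with contextual information."""
    db.execute("CREATE TABLE IF NOT EXISTS engagements (ts REAL, object_info TEXT, context TEXT)")
    db.execute("INSERT INTO engagements VALUES (?, ?, ?)", (time.time(), object_info, context))
    db.commit()

db = sqlite3.connect(":memory:")
store_engagement(db, "keys", "kitchen counter, morning, before leaving home")
```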
48.
METHOD FOR ENCODING/DECODING VIDEO SIGNAL, AND APPARATUS THEREFOR
A method for decoding a video, according to the present invention, may comprise the steps of: parsing a first flag indicating whether inter prediction on the basis of a merge mode is applied to a current block; if the first flag is true, parsing a second flag indicating whether a regular merge mode or a merge offset encoding mode is applied to the current block; and if the second flag is true, parsing a third flag indicating whether the merge offset encoding mode is applied to the current block.
H04N 19/51 - Motion estimation or motion compensation
H04N 19/109 - Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
H04N 19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
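A sketch of the flag-parsing order described in the abstract above; `read_flag` stands in for the bitstream parser and the returned labels are illustrative:

```python
def parse_merge_flags(read_flag):
    """read_flag(name) returns the next flag value from the bitstream (stand-in)."""
    merge_flag = read_flag("merge")                  # first flag: merge-based inter prediction?
    if not merge_flag:
        return {"merge": False}
    regular_or_offset = read_flag("regular_merge")   # second flag
    if not regular_or_offset:
        return {"merge": True, "mode": "other"}
    offset_flag = read_flag("merge_offset")          # third flag
    return {"merge": True, "mode": "offset" if offset_flag else "regular"}

flags = iter([1, 1, 0])
print(parse_merge_flags(lambda name: bool(next(flags))))  # {'merge': True, 'mode': 'regular'}
```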
49.
VOICE ACTIVATED DEVICE FOR USE WITH A VOICE-BASED DIGITAL ASSISTANT
A voice activated device for interaction with a digital assistant is provided. The device comprises a housing, one or more processors, and memory, the memory coupled to the one or more processors and comprising instructions for automatically identifying and connecting to a digital assistant server. The device further comprises a power supply, a wireless network module, and a human-machine interface. The human-machine interface consists essentially of: at least one speaker, at least one microphone, an ADC coupled to the microphone, a DAC coupled to the at least one speaker, and zero or more additional components selected from the set consisting of: a touch-sensitive surface, one or more cameras, and one or more LEDs. The device is configured to act as an interface for speech communications between the user and a digital assistant of the user on the digital assistant server.
A portable computer includes a display portion comprising a display and a base portion pivotally coupled to the display portion. The base portion may include a bottom case and a top case, formed from a dielectric material, coupled to the bottom case. The top case may include a top member defining a top surface of the base portion and a sidewall integrally formed with the top member and defining a side surface of the base portion. The portable computer may also include a sensing system including a first sensing system configured to determine a location of a touch input applied to the top surface of the base portion and a second sensing system configured to determine a force of the touch input.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 1/16 - Constructional details or arrangements
G06F 1/26 - Power supply means, e.g. regulation thereof
G06F 3/02 - Input arrangements using manually operated switches, e.g. using keyboards or dials
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
G06F 3/044 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
G06F 3/04886 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
H01H 13/703 - Switches having rectilinearly-movable operating part or parts adapted for pushing or pulling in one direction only, e.g. push-button switch having a plurality of operating members associated with different sets of contacts, e.g. keyboard with contacts carried by or formed from layers in a multilayer structure, e.g. membrane switches characterised by spacers between contact carrying layers
H01H 13/705 - Switches having rectilinearly-movable operating part or parts adapted for pushing or pulling in one direction only, e.g. push-button switch having a plurality of operating members associated with different sets of contacts, e.g. keyboard with contacts carried by or formed from layers in a multilayer structure, e.g. membrane switches characterised by construction, mounting or arrangement of operating parts, e.g. push-buttons or keys
H01H 13/785 - Switches having rectilinearly-movable operating part or parts adapted for pushing or pulling in one direction only, e.g. push-button switch having a plurality of operating members associated with different sets of contacts, e.g. keyboard characterised by the contacts or the contact sites characterised by the material of the contacts, e.g. conductive polymers
H01H 13/85 - Switches having rectilinearly-movable operating part or parts adapted for pushing or pulling in one direction only, e.g. push-button switch having a plurality of operating members associated with different sets of contacts, e.g. keyboard characterised by ergonomic functions, e.g. for miniature keyboards; Switches having rectilinearly-movable operating part or parts adapted for pushing or pulling in one direction only, e.g. push-button switch having a plurality of operating members associated with different sets of contacts, e.g. keyboard characterised by operational sensory functions, e.g. sound feedback characterised by tactile feedback features
H01H 13/86 - Switches having rectilinearly-movable operating part or parts adapted for pushing or pulling in one direction only, e.g. push-button switch having a plurality of operating members associated with different sets of contacts, e.g. keyboard characterised by the casing, e.g. sealed casings or casings reducible in size
H03K 17/94 - Electronic switching or gating, i.e. not by contact-making and -breaking characterised by the way in which the control signals are generated
H03K 17/98 - Switches controlled by moving an element forming part of the switch using a capacitive movable element having a plurality of control members, e.g. keyboard
H10N 30/20 - Piezoelectric or electrostrictive devices with electrical input and mechanical output, e.g. functioning as actuators or vibrators
Various embodiments disclosed herein include mechanical iris assemblies, as well as camera modules and devices that incorporate these mechanical iris assemblies. The mechanical iris assemblies described herein include a housing, a rotor plate, a set of blade elements, and an actuator arrangement. The actuator arrangement may include a voice coil actuator. In some variations, a voice coil actuator includes a magnet, a first coil, and a second coil, and a controller is configured to concurrently drive current through the first coil and the second coil in opposite directions. In other variations, a voice coil actuator includes at least one magnet and a set of arcuate-shaped coils. In still other variations, a voice coil actuator includes at least one magnet and a printed circuit that defines a coil that includes an inner trace, an outer trace, and a set of connecting traces that connect the inner trace to the outer trace.
A computer system displays a first user interface object in a first view of a three-dimensional environment at a first position in the three-dimensional environment and with a first spatial arrangement relative to a respective portion of the user. The computer system detects movement of a viewpoint of the user from a first location to a second location in a physical environment. In accordance with a determination that the movement does not satisfy a threshold amount of movement, the computer system maintains display of the first user interface object at the first position. In accordance with a determination that the movement satisfies the threshold amount of movement, the computer system ceases to display the first user interface object at the first position and displays the first user interface object at a second position having the first spatial arrangement relative to the respective portion of the user.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
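A hedged sketch of the "follow only after enough movement" behavior described in the abstract above: the UI object keeps its spatial arrangement relative to the user only once the viewpoint has moved more than a threshold. The threshold value and vector math are assumptions:

```python
import numpy as np

def update_object_position(object_pos, old_viewpoint, new_viewpoint,
                           offset_from_user, threshold: float = 0.5):
    """Positions are 3-vectors; offset_from_user encodes the preserved spatial arrangement."""
    moved = np.linalg.norm(np.asarray(new_viewpoint) - np.asarray(old_viewpoint))
    if moved < threshold:
        return np.asarray(object_pos)   # movement below threshold: stay at the first position
    # Movement satisfies the threshold: re-display at a position with the same
    # spatial arrangement relative to the user's new viewpoint.
    return np.asarray(new_viewpoint) + np.asarray(offset_from_user)
```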
A system is disclosed that includes a stylus with an inertial measurement unit (IMU) and force/touch/temperature sensors on its sides and/or tip, in some instances in conjunction with force/touch sensors on a tablet computing device or sensors on a watch, and that obtains measurements such as stylus grip pressure, tilt and touch location, stylus tip pressure, stylus motion, and user temperature. In some instances, these measurements can be obtained through everyday use of the stylus, while in other instances, the user can be prompted to perform certain tasks (e.g., draw specific contours) to assist in data collection. With these measurements, a user profile and baseline profile data can be established and tracked over time, which can include parameters such as tremor amplitude and grip strength. Deviations from baseline profile data can be computed, and when those deviations exceed a threshold, an alert or other wellness insights can be presented to the user.
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G16H 50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics; for calculating health indices; for individual health risk assessment
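As a rough, non-authoritative sketch of the baseline-tracking idea in the stylus abstract above: per-user baseline values (such as tremor amplitude and grip strength) are built from sensor measurements, and an alert is produced when a new measurement deviates from the baseline by more than a threshold. The field names, the use of a simple mean as the baseline, and the 20% threshold are illustrative assumptions.

```python
from dataclasses import dataclass
from statistics import mean

DEVIATION_THRESHOLD = 0.20  # hypothetical: 20% relative deviation


@dataclass
class Measurement:
    tremor_amplitude: float   # e.g. derived from IMU data
    grip_strength: float      # e.g. derived from force sensors on the barrel


def baseline(history: list[Measurement]) -> Measurement:
    # Simple running baseline: the mean of the historical measurements.
    return Measurement(
        tremor_amplitude=mean(m.tremor_amplitude for m in history),
        grip_strength=mean(m.grip_strength for m in history),
    )


def wellness_alerts(history: list[Measurement], new: Measurement) -> list[str]:
    base = baseline(history)
    alerts = []
    for field in ("tremor_amplitude", "grip_strength"):
        ref, val = getattr(base, field), getattr(new, field)
        if ref and abs(val - ref) / ref > DEVIATION_THRESHOLD:
            alerts.append(f"{field} deviates from baseline: {val:.2f} vs {ref:.2f}")
    return alerts


if __name__ == "__main__":
    history = [Measurement(0.10, 5.0), Measurement(0.11, 5.2), Measurement(0.09, 4.9)]
    print(wellness_alerts(history, Measurement(0.16, 3.5)))
```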
A first electronic device establishes a wireless connection with a second electronic device that controls display of a user interface on a second display. The first device displays a first user interface that includes a first representation of a media item. While displaying the first user interface, the first electronic device detects a first user input. In response to detecting the first user input, the first electronic device transmits to the second electronic device instructions enabling display of at least a portion of the media item on substantially the entire second display controlled by the second electronic device. While the at least the portion of the media item is displayed on the second electronic device, the first electronic device displays additional information different from but related to the at least the portion of the media item that is displayed on the second electronic device.
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
A glasses assembly can include a front cover, a suspension, and a waveguide held by the suspension. The front cover and the waveguide can define a gap. The glasses assembly can include a shroud at least partially disposed in the gap. The shroud can define a first end and a second end. The shroud can include an elastomer body and a core coupled with the front cover at the first end of the shroud.
A user equipment (UE) may initiate an application which utilizes a network slice via a packet data network (PDN) connection with a first cellular network. The first cellular network may support network slicing and voice over new radio (VoNR), and one or more cells of the first cellular network may support evolved packet system (EPS) fallback procedures. The UE may receive or initiate a voice call with another UE and adjust one or more capabilities of the UE such that an EPS fallback procedure is deemphasized. The UE may receive signaling from the first cellular network to establish, based at least in part on the one or more adjusted capabilities of the UE, the voice call as a VoNR call via the first cellular network, and the network slice may be maintained during the VoNR call.
An electronic device displays a compass user interface with a direction indicator and a bearing indicator. The direction indicator provides an indication of a respective compass direction, wherein the appearance of the direction indicator is determined based on the orientation of the electronic device relative to the respective compass direction. The bearing indicator provides an indication of an offset from the respective compass direction. While displaying the bearing indicator, the electronic device detects rotation of the rotatable input mechanism and, in response, changes the displayed position of the bearing indicator from a first position to a second position by an amount that is determined in accordance with a magnitude of the rotation of the rotatable input mechanism.
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
G01C 21/20 - Instruments for performing navigational calculations
G06F 1/16 - Constructional details or arrangements
G06F 3/0362 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 1D translations or rotations of an operating part of the device, e.g. scroll wheels, sliders, knobs, rollers or belts
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
H04W 4/02 - Services making use of location information
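A minimal sketch of how the bearing indicator described in the compass abstract above could be moved by an amount determined from the magnitude of rotation of the rotatable input mechanism; the one-to-one degree mapping and the function name are assumptions for illustration.

```python
DEGREES_PER_CROWN_DEGREE = 1.0  # hypothetical mapping of crown rotation to bearing change


def updated_bearing(current_bearing_deg: float, crown_rotation_deg: float) -> float:
    """Move the bearing indicator by an amount determined by the rotation magnitude."""
    new_bearing = current_bearing_deg + crown_rotation_deg * DEGREES_PER_CROWN_DEGREE
    return new_bearing % 360.0  # keep the offset within a full compass turn


if __name__ == "__main__":
    print(updated_bearing(90.0, 30.0))    # 120.0
    print(updated_bearing(350.0, 30.0))   # 20.0
```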
The present disclosure relates to techniques that may be utilized to transmit radio frequency (RF) signals to an electronic device. A distance that a first electronic device of a plurality of electronic devices is located from a second electronic device of the plurality of electronic devices may be determined. Additionally, it may be determined that the distance exceeds a first threshold distance. Whether the first electronic device is communicatively coupled to a cellular network, whether the distance exceeds a second threshold distance that is greater than the first threshold distance, or both may also be determined. The RF signals may be transmitted to the first electronic device by a satellite based on the first electronic device not being communicatively coupled to the cellular network, the distance exceeding the second threshold distance, or both.
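A small sketch of the decision logic the abstract describes, assuming illustrative threshold values and a boolean cellular-connectivity flag; nothing beyond the two-threshold-plus-connectivity test is taken from the disclosure.

```python
FIRST_THRESHOLD_KM = 10.0    # hypothetical first threshold distance
SECOND_THRESHOLD_KM = 50.0   # hypothetical second threshold, greater than the first


def should_use_satellite(distance_km: float, has_cellular: bool) -> bool:
    """Decide whether RF signals should be routed to the device via satellite."""
    if distance_km <= FIRST_THRESHOLD_KM:
        return False  # within the first threshold, satellite delivery is not considered
    # Beyond the first threshold, use satellite if the device has no cellular
    # connectivity or is beyond the second, larger threshold.
    return (not has_cellular) or distance_km > SECOND_THRESHOLD_KM


if __name__ == "__main__":
    print(should_use_satellite(30.0, has_cellular=True))    # False
    print(should_use_satellite(30.0, has_cellular=False))   # True
    print(should_use_satellite(80.0, has_cellular=True))    # True
```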
Systems and methods for down-scaling are provided. In one example, a method for processing image data includes determining a plurality of output pixel locations using a position value stored by a position register, using the position value to select a center input pixel from the image data and selecting an index value, selecting a set of input pixels adjacent to the center input pixel, selecting a set of filtering coefficients from a filter coefficient lookup table using the index value, filtering the set of input pixels to apply a respective one of the set of filtering coefficients to each of the set of input pixels to determine an output value for a current output pixel at the position value, and correcting chromatic aberrations in the set of input pixels.
H04N 9/64 - Circuits for processing colour signals
H04N 9/77 - Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase
H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders
H04N 23/661 - Transmitting camera control signals through networks, e.g. control via the Internet
H04N 23/68 - Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
H04N 23/80 - Camera processing pipelines; Components thereof
H04N 23/84 - Camera processing pipelines; Components thereof for processing colour signals
H04N 25/13 - Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
H04N 25/133 - Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements including elements passing panchromatic light, e.g. filters passing white light
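As an illustrative, one-dimensional sketch of the position-register and coefficient-lookup scheme described in the down-scaling abstract above (the disclosure concerns 2D image data; a single row is shown to keep the example short). The phase count, tap count, and edge clamping are assumptions.

```python
def downscale_row(row, scale, coeff_lut):
    """Downscale a row of pixel values by `scale` (>1) using a phase-indexed LUT.

    coeff_lut: list of filter kernels, one per fractional phase; each kernel has
    an odd number of taps centered on the selected input pixel.
    """
    num_phases = len(coeff_lut)
    taps = len(coeff_lut[0])
    half = taps // 2
    out = []
    position = 0.0                                   # the "position register"
    while int(position) < len(row):
        center = int(position)                       # center input pixel
        frac = position - center
        index = min(int(frac * num_phases), num_phases - 1)   # LUT index value
        coeffs = coeff_lut[index]
        # Gather neighboring input pixels, clamping at the row edges.
        neighborhood = [row[min(max(center + k - half, 0), len(row) - 1)]
                        for k in range(taps)]
        out.append(sum(c * p for c, p in zip(coeffs, neighborhood)))
        position += scale                            # advance to the next output location
    return out


if __name__ == "__main__":
    lut = [[0.25, 0.5, 0.25], [0.1, 0.4, 0.5]]   # two phases, three taps each
    print(downscale_row([10, 20, 30, 40, 50, 60], scale=1.5, coeff_lut=lut))
```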
A broadcast service network (separate from a cellular network) includes broadcast service signal (BSS) nodes that broadcast signals to receivers of user equipment (UE). The BSS nodes may provide broadcast service information to user equipment, such as emergency notifications, even when the user equipment is out of range of the cellular network. Further, the broadcast service signals may provide information that improves the user equipment's search and access time for connecting to the cellular network (e.g., a cellular service station) and/or that assists the user equipment in locating a nearby region of cellular network coverage. In addition, the broadcast service signals may be wake-up signals configured to activate the receiver when the receiver is in a sleep or inactive mode. In some embodiments, the broadcast service signals may be pointer signals configured to direct the user equipment to a broadcast service channel to receive the broadcast service information.
A method includes receiving an indication to transmit a first set of signals using a first standard (e.g., Long Term Evolution) via a first set of antennas of a radio frequency device and a second set of signals using a second standard (e.g., New Radio) via a second set of antennas. The method also includes transmitting the first set of signals via the first set of antennas using a first power based on positions of the first set and second set of antennas, exposure conditions of the first set and the second set of signals on a user, and/or priorities of the first and the second set of signals. Moreover, the method includes transmitting the second set of signals via the second set of antennas using a second power based on the positions of the antennas, the exposure conditions of the signals on the user, and/or priorities of the signals.
H04B 7/06 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
H01Q 21/28 - Combinations of substantially independent non-interacting antenna units or systems
H04W 52/28 - TPC being performed according to specific parameters using user profile, e.g. mobile speed, priority or network state, e.g. standby, idle or non-transmission
H04W 72/044 - Wireless resource allocation based on the type of the allocated resource
H04W 72/566 - Allocation or scheduling criteria for wireless resources based on priority criteria of the information or information source or recipient
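A rough sketch, under assumed inputs, of splitting a total transmit-power budget between the first (e.g., LTE) and second (e.g., NR) antenna sets based on priority and an exposure weight standing in for antenna position and exposure conditions; the budget value, weights, and function name are illustrative, not taken from the disclosure.

```python
def split_tx_power(total_budget_mw, lte_priority, nr_priority,
                   lte_exposure_weight=1.0, nr_exposure_weight=1.0):
    """Return (lte_power_mw, nr_power_mw).

    Higher priority earns a larger share; a larger exposure weight (e.g. an
    antenna set positioned closer to the user) shrinks the share allotted to it.
    """
    lte_score = lte_priority / lte_exposure_weight
    nr_score = nr_priority / nr_exposure_weight
    total_score = lte_score + nr_score
    lte_power = total_budget_mw * lte_score / total_score
    return lte_power, total_budget_mw - lte_power


if __name__ == "__main__":
    # NR has higher priority but its antennas are assumed closer to the user.
    print(split_tx_power(200.0, lte_priority=1.0, nr_priority=2.0,
                         lte_exposure_weight=1.0, nr_exposure_weight=1.5))
```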
A separator of a battery cell includes first pores disposed through a first segment of a thickness of the separator, and second pores disposed through a second segment of the thickness. In an unwound state of the separator, a first cross-sectional area of the first pores differs in size from a second cross-sectional area of the second pores by a first amount. In a wound state of the separator, the first cross-sectional area is substantially equal in size to the second cross-sectional area, or the first cross-sectional area differs in size from the second cross-sectional area by a second amount that is less than the first amount.
H01M 10/04 - Construction or manufacture in general
H01M 50/449 - Separators, membranes or diaphragms characterised by the material having a layered structure
63.
SYSTEMS AND METHODS FOR MULTICAST DATA TRAFFIC SERVICE CONTINUITY UNDER MULTICAST-BROADCAST SERVICES OPERATION DURING RADIO RESOURCE CONTROL STATE TRANSITION
Systems and methods for multicast data traffic service continuity under multicast-broadcast services (MBS) operation during a radio resource control (RRC) state or mode transition are described. In some embodiments, a user equipment (UE) receives, from a network, while in an RRC connected mode, an RRC release message comprising an RRC inactive mode point to multipoint (PTM) configuration that uses an MBS session to receive multicast data traffic while the UE is in an RRC inactive mode; enters the RRC inactive mode from the RRC connected mode in response to the RRC release message; and uses the RRC inactive mode PTM configuration to receive, from the network, the multicast data traffic of the MBS session while the UE is in the RRC inactive mode. Cases where a UE instead re-uses an RRC connected mode PTM configuration after transitioning to the RRC inactive mode are also described. Analogous network behaviors are described.
A module comprising: a module substrate; a system-on-chip die coupled to the module substrate; a thermal interface material layer coupled to the system-on-chip die; a stiffener structure positioned around the system-on-chip die and coupled to the module substrate; and a lid having a first portion coupled to the thermal interface material layer, a second portion coupled to the stiffener structure and a recessed region formed around the first portion and having a reduced thickness relative to the first portion and the second portion.
A baseband processor of a user equipment (UE) includes a memory and is configured to transmit, to a cellular carrier, a request to activate the UE with the cellular carrier. The baseband processor is also configured to, in response to receiving an authentication request for authenticating a user of the UE, transmit, to the cellular carrier, information identifying another UE and authentication information for authenticating the user; obtain verification information transmitted to the other UE; transmit the verification information to the cellular carrier; and after transmitting the verification information to the cellular carrier, receive an embedded subscriber identity module (eSIM) subscription transferred from the other UE.
The present application relates to devices and components including apparatus, systems, and methods for performance of candidate beam detection operations with discontinuous transmission implementation.
H04B 7/08 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the receiving station
A voice-controlled electronic device includes a device housing having a longitudinal axis bisecting opposing top and bottom surfaces and a side surface extending between the top and bottom surfaces. The device can further include one or more microphones disposed within the device housing and distributed radially around the longitudinal axis; a processor configured to execute computer instructions stored in a computer-readable memory for interacting with a user and processing voice commands received by the one or more microphones; and first and second transducers configured to generate sound waves within different frequency ranges.
F21V 23/04 - Arrangement of electric circuit elements in or on lighting devices the elements being switches
F21V 33/00 - Structural combinations of lighting devices with other articles, not otherwise provided for
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
G06F 3/044 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means
H01H 13/02 - Switches having rectilinearly-movable operating part or parts adapted for pushing or pulling in one direction only, e.g. push-button switch; Details
H04R 1/26 - Spatial arrangement of separate transducers responsive to two or more frequency ranges
H04R 1/28 - Transducer mountings or enclosures designed for specific frequency response; Transducer enclosures modified by provision of mechanical or acoustic impedances, e.g. resonator, damping means
H04R 1/30 - Combinations of transducers with horns, e.g. with mechanical matching means
H04R 1/40 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
Certain embodiments relate generally to modifying audio playing on a first device based on detection of that audio by a second device. Other embodiments relate to transferring audio between a first device and a second device. More particularly, audio playing from a first device may be muted, stopped, or adjusted in volume based on detection of that audio by, or interaction with, a second device. Likewise, audio may be transferred from a first device to a second device based on communications between the first and second devices, proximity of the first and second devices relative to one another, proximity of a user to either the first or second device, and so on.
In one implementation, a method includes: obtaining a user input to view SR content associated with video content; if the video content includes a first scene when the user input was detected: obtaining first SR content for a first time period of the video content associated with the first scene; obtaining a task associated with the first scene; and causing presentation of the first SR content and a first indication of the task associated with the first scene; and if the video content includes a second scene when the user input was detected: obtaining second SR content for a second time period of the video content associated with the second scene; obtaining a task associated with the second scene; and causing presentation of the second SR content and a second indication of the task associated with the second scene.
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
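For illustration, a minimal sketch of the per-scene dispatch described in the SR-content abstract above: the scene playing when the user input is detected selects which SR content and task indication are presented. The scene names, the content store, and present() are placeholders assumed for the example.

```python
SR_CONTENT_BY_SCENE = {
    "scene_1": {"sr_content": "first SR content", "task": "first task"},
    "scene_2": {"sr_content": "second SR content", "task": "second task"},
}


def present(sr_content: str, task_indication: str) -> None:
    # Placeholder for causing presentation of SR content and a task indication.
    print(f"Presenting {sr_content!r} with task indication {task_indication!r}")


def handle_view_sr_request(current_scene: str) -> None:
    """Look up the SR content and task associated with the scene currently playing."""
    entry = SR_CONTENT_BY_SCENE.get(current_scene)
    if entry is None:
        return  # no SR content associated with this scene
    present(entry["sr_content"], entry["task"])


if __name__ == "__main__":
    handle_view_sr_request("scene_2")
```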
71.
Electronic Device with Idle Mode Out-of-Service Mitigation Capabilities
A communications system may include a user equipment (UE) device that communicates with a cellular network having base stations. A UE may enter a connected mode in which the UE and a first base station in a first cell convey wireless data. While in connected mode, the first base station may transmit measurement objects to the UE, each being associated with a different cell neighboring the first cell. The UE may store the measurement objects. When the UE enters an idle mode, the UE may perform measurements on the neighboring cells using the measurement objects. The UE may then use the measurements to determine whether to re-select to a neighbor cell when the UE is unable to acquire a measurement configuration from the base station in idle mode. This may help to mitigate the occurrence of idle mode out-of-service (OOS) conditions for the UE device.
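A small, non-authoritative sketch of the idle-mode mitigation described above: measurement objects received in connected mode are cached and reused in idle mode to measure neighbor cells and pick a reselection candidate when no idle-mode measurement configuration can be acquired. The data shapes and the RSRP-style comparison are assumptions.

```python
class IdleModeHelper:
    def __init__(self):
        self.measurement_objects = {}  # cell_id -> measurement object (e.g. frequency info)

    def store_measurement_object(self, cell_id, meas_object):
        """Called in connected mode when the serving base station sends the object."""
        self.measurement_objects[cell_id] = meas_object

    def best_neighbor(self, measure):
        """In idle mode, measure each stored neighbor and return the strongest one.

        `measure` is a callable mapping a measurement object to a signal level
        (e.g. RSRP in dBm); it stands in for the actual radio measurement.
        """
        if not self.measurement_objects:
            return None
        return max(self.measurement_objects,
                   key=lambda cell: measure(self.measurement_objects[cell]))


if __name__ == "__main__":
    helper = IdleModeHelper()
    helper.store_measurement_object("cell_a", {"arfcn": 100})
    helper.store_measurement_object("cell_b", {"arfcn": 200})
    fake_rsrp = {100: -110.0, 200: -95.0}
    print(helper.best_neighbor(lambda obj: fake_rsrp[obj["arfcn"]]))  # cell_b
```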
A method for performing a contention-free random access (CFRA) in a lower-layer triggered mobility (LTM) procedure from a serving cell to a target candidate cell is provided. The method includes determining, by a user equipment (UE), that random access response (RAR) reception is not configured for the UE. The method includes receiving, by the UE, a physical downlink control channel (PDCCH) order for transmitting a physical random access channel (PRACH) signal to the target candidate cell. The method includes determining, by the UE and based on the PDCCH order, a transmission power for transmitting the PRACH signal.
In one aspect, a method performed by user equipment (UE) in communication with a network includes in response to one or more first conditions being satisfied, transitioning to a first sub-state of a radio resource control (RRC) inactive state or an RRC idle state, and in response to one or more second conditions being satisfied, transitioning to a second sub-state of the RRC inactive or the RRC idle state. In the first sub-state, the UE decodes a message associated with an artificial intelligence (AI) model when the message is received from the network. In the second sub-state, the UE ignores the message associated with the AI model when the message is received from the network.
A user equipment (UE) configured to decode, from signaling received from a base station, one or more transmission configuration indicator (TCI) switch conditions for each of one or more candidate TCIs of a serving cell, determine that one of the one or more candidate TCIs satisfies the corresponding one or more TCI switch conditions, and perform a TCI switch from a current active TCI to the one of the one or more candidate TCIs.
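As an illustrative sketch of condition-driven TCI switching: each candidate TCI carries one or more switch conditions, and the UE switches to a candidate only when all of its conditions are satisfied. Representing a condition as a threshold on a measured quantity is an assumption made for the example.

```python
def select_tci(current_tci, candidates, measurements):
    """Pick the TCI to use.

    candidates: {tci_id: [(metric_name, threshold), ...]}
    measurements: {metric_name: measured_value}
    Returns a candidate TCI if all of its conditions are met, else the current TCI.
    """
    for tci_id, conditions in candidates.items():
        if all(measurements.get(metric, float("-inf")) >= threshold
               for metric, threshold in conditions):
            return tci_id     # perform the switch to this candidate
    return current_tci        # keep the current active TCI


if __name__ == "__main__":
    candidates = {"tci_2": [("rsrp_beam_2", -100.0)], "tci_3": [("rsrp_beam_3", -100.0)]}
    print(select_tci("tci_1", candidates, {"rsrp_beam_2": -105.0, "rsrp_beam_3": -92.0}))
```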
The present application relates to devices and components including apparatus, systems, and methods for Internet Protocol multimedia subsystem security.
A user equipment (UE) configured to configure transceiver circuitry to transmit, to a base station, dynamic waveform switching (DWS) capability information, decode, from signaling received from the base station, a DWS indication, enable, based on the DWS indication, a DWS configuration, and perform one or more uplink transmissions to the base station using the DWS configuration.
A method includes obtaining a first map element from a first map, identifying second map elements from a second map based on locations of the second map elements relative to the first map element, and identifying first and second points on the second map elements based on proximity to beginning and ending points of the first map element. One or more of the second map elements define a corresponding portion of the second map between the first point and the second point. The method also includes determining a registration score for the first map element relative to the corresponding portion of the second map, and in response to determining that the registration score indicates a match between the first map element and the corresponding portion of the second map, defining registration information that describes a relationship between the first map element and the corresponding portion of the second map.
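A rough sketch of the registration step described above, assuming a simple distance-based score: a score between the first map element and the corresponding portion of the second map is computed, and registration information is recorded only when the score clears a match threshold. The scoring formula, threshold, and offset estimate are illustrative assumptions.

```python
import math

MATCH_THRESHOLD = 0.8  # hypothetical match threshold


def registration_score(first_element_pts, second_portion_pts):
    """Score in [0, 1]; higher means the two polylines agree more closely."""
    dists = [min(math.dist(p, q) for q in second_portion_pts)
             for p in first_element_pts]
    avg = sum(dists) / len(dists)
    return 1.0 / (1.0 + avg)


def register(first_element_pts, second_portion_pts):
    score = registration_score(first_element_pts, second_portion_pts)
    if score >= MATCH_THRESHOLD:
        # Registration information: here, the score and a simple offset estimate.
        dx = second_portion_pts[0][0] - first_element_pts[0][0]
        dy = second_portion_pts[0][1] - first_element_pts[0][1]
        return {"score": score, "offset": (dx, dy)}
    return None  # no match between the element and the corresponding portion


if __name__ == "__main__":
    a = [(0, 0), (1, 0), (2, 0)]
    b = [(0.1, 0.0), (1.1, 0.0), (2.1, 0.0)]
    print(register(a, b))
```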
An apparatus includes a retainer, a rotational portion that is connected to the retainer so that it is able to rotate with respect to the retainer on a rotation axis, a rotor that is connected to the rotational portion for rotation in unison with the rotational portion, and a caliper assembly that is connected to the retainer so that the caliper assembly is able to move according to a line of action. The apparatus also includes a damper assembly that is connected to the retainer and is connected to the caliper assembly to regulate movement of the caliper assembly with respect to the retainer along the line of action, wherein the caliper assembly and the damper assembly cooperate to define a mass damper system that damps vibration of the rotational portion.
F16D 55/226 - Brakes with substantially-radial braking surfaces pressed together in axial direction, e.g. disc brakes with axially-movable discs or pads pressed against axially-located rotating members by clamping an axially-located rotating disc between movable braking members, e.g. movable brake discs or brake pads with a common actuating member for the braking members the braking members being brake pads in which the common actuating member is moved axially
F16D 65/18 - Actuating mechanisms for brakes; Means for initiating operation at a predetermined position arranged in or on the brake adapted for drawing members together
A flexure for a camera module is provided. The flexure includes a dynamic platform to which an image sensor is connected such that the image sensor moves with the dynamic platform. The flexure also includes a static platform connected to a static portion of the camera. The flexure further includes a plurality of flexure arms that mechanically connect the dynamic platform to the static platform. The plurality of flexure arms includes a first flexure arm including one or more signal traces having a first impedance and a base layer having a first width. The plurality of flexure arms includes a second flexure arm including one or more signal traces having a second impedance and a base layer having a second width. The first impedance is greater than the second impedance. The first width is less than the second width. The second flexure arm routes image data from the image sensor.
Techniques are disclosed that relate to executing pairs of instructions. A processor may include fusion detector circuitry configured to detect a pair of fetched instructions and fuse the pair of fetched instructions into a fused instruction operation, and execution circuitry coupled to the fusion detector circuitry and configured to execute the fused instruction operation. In some embodiments the pair of instructions is executable to generate a remainder of a division operation. In some embodiments the pair of instructions is executable to compare two operands and perform a write operation based on the comparison. In some embodiments the pair of instructions is executable to perform an operation and apply a mask bit sequence to the result. The fusion detector circuitry may also be configured to obtain first and second portions of a constant value from first and second instructions and store the first and second portions in a destination register.
Techniques are disclosed, whereby graphical information for a first image frame to be rendered is obtained at a first device, the graphical information comprising at least depth information for at least a portion of the pixels within the first image frame. Next, a regional depth value may be determined for a region of pixels in the first image frame. Next, the region of pixels may be coded as either a “skipped” region or a “non-skipped” region based, at least in part, on the determined regional depth value for the region of pixels. Finally, if the region of pixels is coded as a non-skipped region, a representation of the region of pixels may be rendered and composited with any other graphical content, as desired, to a display of the first device; whereas, if the region of pixels is coded as a skipped region, the first device may avoid rendering the region.
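A minimal sketch, under assumed values, of the region-skip decision described above: a regional depth value is derived from the per-pixel depths in a region, and the region is coded as skipped (and not rendered) when it lies beyond a cutoff. Using the mean depth and a fixed cutoff are assumptions for illustration.

```python
SKIP_DEPTH_CUTOFF = 10.0  # hypothetical cutoff, in meters


def code_region(depths):
    """depths: iterable of per-pixel depth values for one region of the frame."""
    depths = list(depths)
    regional_depth = sum(depths) / len(depths)     # regional depth value
    return "skipped" if regional_depth > SKIP_DEPTH_CUTOFF else "non-skipped"


def render_frame(regions):
    """regions: {region_id: [depth, ...]}. Returns the region ids actually rendered."""
    return [rid for rid, depths in regions.items()
            if code_region(depths) == "non-skipped"]


if __name__ == "__main__":
    regions = {"top_left": [2.0, 3.0, 2.5], "sky": [50.0, 60.0, 55.0]}
    print(render_frame(regions))   # ['top_left']
```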
A computer-generated environment may include a virtual agent and a plurality of targets. Movements of the virtual agent to the plurality of targets can be defined, and the movements of the virtual agent to the plurality of targets may be interpolated so as to generate an interpolated animation path of movement of the virtual agent to a first target and to a second target.
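For illustration only, a sketch of interpolating a virtual agent's movement through a sequence of targets to produce an animation path; linear interpolation and the per-segment step count are assumptions standing in for whatever interpolation the implementation actually uses.

```python
def lerp(a, b, t):
    # Linear interpolation between two points represented as tuples.
    return tuple(av + (bv - av) * t for av, bv in zip(a, b))


def interpolated_path(start, targets, steps_per_segment=4):
    """Return a list of positions from `start` through each target in order."""
    path = [start]
    current = start
    for target in targets:
        for i in range(1, steps_per_segment + 1):
            path.append(lerp(current, target, i / steps_per_segment))
        current = target
    return path


if __name__ == "__main__":
    print(interpolated_path((0.0, 0.0), [(4.0, 0.0), (4.0, 2.0)], steps_per_segment=2))
```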
A physical keyboard can be used to collect user input in a typing mode or in a tracking mode. To use a tracking mode, first movement data is detected for a hand of a user in relation to a physical keyboard at a first location. A determination is made that the first movement data is associated with a tracking movement. In response to determining that the first movement data is associated with the tracking movement, a tracking mode is initiated. User input is provided based on the movement data and in accordance with the tracking mode. Contact data and non-contact data are used to determine a user intent, and a user instruction is processed based on the user intent.
A ball bearing sensor shift arrangement for a camera may include one or more voice coil motor (VCM) actuators that include the fixed magnets, optical image stabilization (OIS) coils, and/or one or more autofocus (AF) coils. The ball bearing sensor shift arrangement may be coupled with an image sensor of the camera, and may include carrier frames configured to move on ball bearings so as to enable motion of the image sensor in multiple degrees-of-freedom (DOF). An OIS carrier frame(s) may be coupled with the OIS coils, which may be positioned proximate the fixed magnets and used for moving the image sensor in directions orthogonal to an optical axis of the camera. An AF carrier frame may be coupled with the AF coil(s), which may be positioned proximate the fixed magnets and used for moving the image sensor in at least one direction parallel to the optical axis.
In one implementation, a camera rig comprises: a first array of image sensors arranged in a planar configuration, wherein the first array of image sensors is provided to capture a first image stream from a first perspective of a physical environment; a second array of image sensors arranged in a non-planar configuration, wherein the second array of image sensors is provided to capture a second image stream from a second perspective of the physical environment different from the first perspective; a buffer provided to store the first and second image streams; and an image processing engine provided to generate a 3D reconstruction of the physical environment based on the first and second image streams.
H04N 13/25 - Image signal generators using stereoscopic image cameras using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; Image signal generators using stereoscopic image cameras using image signals from one sensor to control the characteristics of another sensor
G06T 15/00 - 3D [Three Dimensional] image rendering
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
09 - Scientific and electric apparatus and instruments
Goods & Services
Computer software; voice recognition software; speech to
text conversion software; voice-enabled software
applications; mobile telephone software; information
retrieval software; computer software for enabling
hands-free use of consumer electronics through voice
recognition; computer software for accessing, browsing and
searching online databases; computer software for the
redirection of messages, Internet e-mail, and/or other data
to one or more electronic handheld devices from a data store
on or associated with a personal computer or a server;
computer software used to process voice commands, and create
audio responses to voice commands; computer software for
dictation; computer software for scheduling appointments,
reminders, and events on an electronic calendar; computer
software for storing, organizing, and accessing phone
numbers, addresses, and other personal contact information;
computer software for global positioning and for providing
travel directions; computer software for making travel
arrangements; computer software for making reservations at
hotels and restaurants; computer software for personal
information management; computer software for travel and
tourism, travel planning, navigation, travel route planning,
geographic, destination, transportation and traffic
information, driving and walking directions, customized
mapping of locations, street atlas information, electronic
map display, and destination information; computer software
for distributing, downloading, transmitting, receiving,
playing, editing, displaying, storing and organizing text,
data, graphics, images, audio, video, and other multimedia
content, electronic publications; computer software for use
in recording, organizing, transmitting, manipulating, and
reviewing text, data, audio files, video files and
electronic games in connection with computers, audio
players, video players, media players; computer software for
providing consumer resources for searching, locating,
rating, evaluating and providing directions for the
purchase, consumption and use of a wide range of consumer
products, services and information over a global
communications network namely, computer software for
accessing, browsing and searching online databases; home
automation and home device integration software; personal
vehicle integration software; computer software used for
controlling stand alone devices; computer software for use
in communicating with, monitoring, configuring, adjusting,
and controlling residential alarm, security, and
surveillance devices and systems, smoke and carbon monoxide
detectors and monitors, lighting, electrical and electronic
switches and outlets, energy management devices and systems,
air conditioning, heating, and ventilation devices and
systems, and doors, drapes, curtains, window shades,
shutters, blinds, and garage doors; computer software for
creating, authoring, distributing, downloading,
transmitting, receiving, playing, editing, extracting,
encoding, decoding, displaying, storing and organizing text,
data, graphics, images, audio, video, and other multimedia
content; computer software for use in recording, organizing,
transmitting, manipulating, and reviewing text, data, audio
files, video files in connection with computers, television
set-top boxes, audio players, video players, media players,
and handheld digital electronic devices.