The disclosed computer-implemented method may include rendering frame data for pixel data values, converting a first portion of each pixel data value using a first digital driving scheme, and converting a second portion of each pixel data value using a second digital driving scheme. The method may also include displaying the frame data by driving pixels of a display using pulse width modulation based on a converted pixel value formed from the converted first and second portions of the pixel data value for each pixel. Various other methods, systems, and computer-readable media are also disclosed.
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
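As a rough, hypothetical illustration of the two-scheme conversion (the 8-bit depth, the 4/4 bit split, and the pulse weighting below are assumptions, not the disclosed implementation), the two converted portions of a pixel value can be recombined into a single PWM on-time:

```python
def split_convert(pixel, msb_bits=4):
    """Split an 8-bit pixel value into two portions and convert each with a
    different (hypothetical) digital driving scheme before PWM output."""
    lsb_bits = 8 - msb_bits
    msb = pixel >> lsb_bits               # first portion of the pixel value
    lsb = pixel & ((1 << lsb_bits) - 1)   # second portion of the pixel value
    # First scheme: binary-weighted, each MSB step is worth 2**lsb_bits slots.
    msb_pulses = msb * (1 << lsb_bits)
    # Second scheme: LSBs map one-to-one onto single PWM slots.
    lsb_pulses = lsb
    return msb_pulses + lsb_pulses        # total on-time out of 255 slots
```

The recombined value sets the pixel's duty cycle; splitting the conversion lets each portion of the value be handled by circuitry suited to its bit weight.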
2.
SWITCHABLE BATTERY CELL CONFIGURATION, AND SYSTEMS AND METHODS OF USE THEREOF
A method of switching a battery cell configuration for a head-worn extended-reality headset is described. The method includes, in accordance with a determination that a battery of the head-worn extended-reality headset is in a first state, operating at least two cells of the battery in series using a first control switch to produce a first voltage and, in accordance with a determination that the battery of the head-worn extended-reality headset is in a second state, operating the at least two cells of the battery in parallel using a second control switch to produce a second voltage, wherein the first and second voltages are within an operating voltage range of the electrical components of the head-worn extended-reality headset. Switching the configuration in which the battery cells operate increases voltage headroom and decreases power losses by increasing the voltage of the battery cells and decreasing the current drawn by the battery cells.
G06F 1/16 - Constructional details or arrangements
H01M 50/509 - Interconnectors for connecting terminals of adjacent batteries; Interconnectors for connecting cells outside a battery casing characterised by the type of connection, e.g. mixed connections
H02J 7/00 - Circuit arrangements for charging or depolarising batteries or for supplying loads from batteries
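For a fixed load power, the effect of the series/parallel switch can be sketched as follows (the two-cell arithmetic is illustrative; the actual cell count, voltages, and switching policy are not specified by the abstract):

```python
def pack_output(cell_voltage, load_power, mode):
    """Illustrative two-cell arithmetic: in series the cell voltages add, so
    for the same load power the pack draws half the current (reducing I^2*R
    losses); in parallel the pack runs at the single-cell voltage."""
    pack_voltage = 2 * cell_voltage if mode == "series" else cell_voltage
    pack_current = load_power / pack_voltage
    return pack_voltage, pack_current
```

At 3.7 V per cell and a 7.4 W load, the series configuration delivers 7.4 V at 1 A where the parallel configuration would draw 2 A at 3.7 V, matching the voltage-headroom and current-reduction rationale above.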
3.
METHODS, DEVICES, AND SYSTEMS FOR DIRECTIONAL SPEECH RECOGNITION WITH ACOUSTIC ECHO CANCELLATION
An example method of providing speech-to-text transcription includes receiving, at an electronic device, multiple channels of audio data from a plurality of microphones, where the multiple channels of audio data comprise speech from a user of the electronic device and speech from one or more other persons. The method also includes generating refined audio data by applying a multi-path acoustic echo cancellation (AEC) technique to the multiple channels of audio data. The method further includes generating directional audio data by applying beamforming to the refined audio data. The method also includes identifying, by inputting the directional audio data to an automatic speech recognizer (ASR), the speech from the user of the electronic device and the speech from the one or more other persons, and generating a textual transcription for the conversation.
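The beamforming step can be illustrated with a toy delay-and-sum beamformer, a standard technique (the abstract does not specify which beamformer is actually used). Each echo-cancelled channel is advanced by its per-microphone steering delay in samples, then the channels are averaged so that sound from the steered direction adds coherently:

```python
def delay_and_sum(channels, delays):
    """Toy delay-and-sum beamformer: advance each channel by its arrival
    delay (in samples), then average across channels."""
    n = len(channels[0])
    out = [0.0] * n
    for ch, d in zip(channels, delays):
        for i in range(n):
            j = i + d                 # advance to undo the arrival delay
            if 0 <= j < n:
                out[i] += ch[j]
    return [v / len(channels) for v in out]
```

An impulse that reaches the second microphone one sample late is realigned with the first, so the averaged output keeps full amplitude for the steered direction while off-axis sound averages down.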
The disclosed system may include (1) a support structure (e.g., a frame), (2) a lens, mounted to the support structure, and (3) a slot antenna (e.g., an open-ended slot antenna) formed from the support structure. Various other wearable devices, apparatuses, and methods of manufacturing are also disclosed.
In one embodiment, a method includes rendering, for one or more displays of a VR display device, a first output image of a VR environment based on a field of view of a first user. The method includes determining whether a second user is approaching within a threshold distance of the first user and outside the field of view of the first user. The method includes rendering, responsive to determining the second user is approaching within the threshold distance of the first user and outside the field of view of the first user, for the one or more displays of the VR display device, a second output image comprising a directional warning. The directional warning may indicate a direction of movement of the second user relative to the first user.
G08B 7/06 - Signalling systems according to more than one of groups G08B 3/00 - G08B 6/00; Personal calling systems according to more than one of groups G08B 3/00 - G08B 6/00 using electric transmission
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
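The proximity-plus-visibility check above can be sketched in 2D as follows (the threshold distance, field-of-view width, and coordinate conventions are hypothetical):

```python
import math

def directional_warning(user_pos, heading_deg, other_pos,
                        threshold=1.5, fov_deg=110.0):
    """Return the second user's relative bearing (degrees, positive to the
    right of the heading) when they are within `threshold` metres AND outside
    the first user's field of view; otherwise return None (no warning)."""
    dx = other_pos[0] - user_pos[0]
    dy = other_pos[1] - user_pos[1]
    if math.hypot(dx, dy) > threshold:
        return None                          # too far away: no warning
    bearing = math.degrees(math.atan2(dx, dy))             # 0 deg = +y axis
    rel = (bearing - heading_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    if abs(rel) <= fov_deg / 2:
        return None                          # already visible: no warning
    return rel                               # drives the directional warning
```

The returned bearing is what the rendered directional warning would indicate: the direction of the approaching second user relative to the first user's gaze.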
This disclosure is related to automatic generation of short clips based on a user input at a device including a camera. A method can include capturing image frames of a scene with the camera, selecting key frames from among the image frames by detecting targets and motion in the scene, applying, with processing logic that is remote from the device that includes the camera, a visual effect to the key frames, and recording an audio recording of the scene in a same time frame that the image frames are captured with the camera. The audio recording is recorded with a microphone of the device, and the method also includes generating an audio clip that matches the visual effect applied to the key frames and generating the short clip by combining the key frames having the visual effect applied and the audio clip.
A system is provided. The system includes a display panel and an imaging assembly including a plurality of optical elements configured to guide a backlight to illuminate the display panel. The system also includes a waveguide including a reflective polarizer and disposed between the plurality of optical elements included in the imaging assembly. The display panel is configured to modulate the backlight into an image light representing a virtual image. The imaging assembly is configured to guide the image light toward the reflective polarizer. The reflective polarizer is configured to couple the image light into the waveguide.
A method of generating recommended commands using artificial intelligence is described. The method includes, in response to a first user input, initiating a recommended command workflow. The recommended command workflow includes presenting a first recommended command that can be performed by the computing device and/or an application in communication with the computing device. The first recommended command is one of a plurality of recommended commands determined based on user data and/or device data. The recommended command workflow also includes, in response to a second user input selecting the first recommended command, causing performance of the first recommended command at the computing device and/or the application, and presenting a second recommended command that can be performed by the computing device and/or the application. The second recommended command is one of the plurality of recommended commands and augments the first recommended command.
Devices may include an optical assembly and a frame supporting the optical assembly. The optical assembly may include a first optical element, a second optical element, and a third optical element. The second optical element and the third optical element may form a cavity therebetween. The first optical element may be mounted within the cavity. Various other systems, devices, and methods are also disclosed.
A depth sub-frame is captured with a first region of depth pixels configured to image a first zone of a field illuminated by near-infrared illumination light. A visible-light sub-frame is captured with a second region of image pixels that is distanced from the first region of the depth pixels. The second region of the image pixels is configured to image a second zone of the field while the first region of the depth pixels images the first zone, as the near-infrared illumination light illuminates that first zone.
H04N 25/705 - Pixels for depth measurement, e.g. RGBZ
G01S 7/4865 - Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
H04N 23/11 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
H04N 23/56 - Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
H04N 23/698 - Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
H04N 25/532 - Control of the integration time by controlling global shutters in CMOS SSIS
H04N 25/771 - Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components comprising storage means other than floating diffusion
H04N 25/79 - Arrangements of circuitry being divided between different or multiple substrates, chips or circuit boards, e.g. stacked image sensors
The disclosed system may include (1) a support structure (e.g., a frame), (2) a lens, mounted to the support structure, and (3) a slot antenna (e.g., an open-ended slot antenna) formed from the support structure. Various other wearable devices, apparatuses, and methods of manufacturing are also disclosed.
A display panel of a near-eye display system includes a first substrate having a first region and a second region adjacent to the first region; a first active region on the first region of the first substrate and characterized by a first display resolution; a silicon backplane bonded on the second region of the first substrate; and a second active region on the silicon backplane, the second active region characterized by a second display resolution higher than the first display resolution.
H01L 25/16 - Assemblies consisting of a plurality of individual semiconductor or other solid-state devices the devices being of types provided for in two or more different subclasses, e.g. forming hybrid circuits
An actuator includes a primary electrode, a secondary electrode overlapping at least a portion of the primary electrode, and an electroactive layer disposed between and abutting the primary electrode and the secondary electrode, where a mechanical deformation of the electroactive layer is locally controllable over an area of the actuator. The mechanical deformation of the electroactive layer may be configured to generate compound curvature, e.g., in an optical element co-integrated with the actuator, without buckling the optical element.
An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system can include a menu that can be activated and interacted with using one hand. In response to detecting a menu activation gesture performed using one hand, the artificial reality system can cause a menu to be rendered. A menu sliding gesture (e.g., horizontal motion) of the hand can be used to cause a slidably engageable user interface (UI) element to move along a horizontal dimension of the menu while vertical positioning of the UI menu is held constant. Motion of the hand orthogonal to the menu sliding gesture (e.g., non-horizontal motion) can cause the menu to be repositioned. The implementation of the artificial reality system does not require use of both hands or use of other input devices in order to interact with the artificial reality system.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
G06T 19/00 - Manipulating 3D models or images for computer graphics
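The one-handed menu gesture decomposition described above can be sketched as follows (the tolerance and the flat 2D coordinate handling are hypothetical; the actual system tracks full 3D hand poses):

```python
def apply_menu_gesture(menu_x, menu_y, slider_pos, dx, dy, tol=0.02):
    """One-handed menu interaction sketch: motion along the menu's horizontal
    dimension slides the engageable UI element while the menu stays put;
    motion orthogonal to the sliding axis repositions the whole menu."""
    if abs(dy) <= tol:                           # mostly horizontal: slide
        return menu_x, menu_y, slider_pos + dx
    return menu_x + dx, menu_y + dy, slider_pos  # otherwise: move the menu
```

A mostly horizontal hand motion updates only the slider; any motion with a significant orthogonal component drags the menu itself, leaving the slider value unchanged.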
A method includes receiving scene image data comprising a left image and a right image, one for each eye. Through an eye tracking module, the method includes determining the user's gaze direction or eye vergence. Using this information, the method includes identifying an object in the scene that the user is focusing on and determining, using a depth estimation module, a left depth from the left eye to the object and a right depth from the right eye to the object. Further, based on the computed left and right depths, the method includes generating, for the left and right eyes, constant left and right depth meshes, and generating left and right output images by projecting the left and right images on the corresponding constant left and right depth meshes. Additionally, the method includes displaying the left output image for the left eye and displaying the right output image for the right eye.
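The vergence-based depth estimation can be illustrated by triangulating the fixation distance from the two eyes' horizontal gaze angles; the symmetric planar geometry below is an assumption for illustration, not the disclosed depth estimation module:

```python
import math

def vergence_depth(ipd_m, left_angle_deg, right_angle_deg):
    """Triangulate the distance to the fixation point from the inward gaze
    angles of the two eyes (degrees from straight ahead) and the
    inter-pupillary distance (metres). At depth d, each gaze ray covers
    d*tan(angle) of the IPD, so d = ipd / (tan(l) + tan(r))."""
    l = math.radians(left_angle_deg)
    r = math.radians(right_angle_deg)
    return ipd_m / (math.tan(l) + math.tan(r))
```

With a 64 mm IPD, inward gaze angles of about 1.8 degrees per eye place the fixation point roughly 1 m away; that recovered depth is what the constant-depth meshes would be placed at.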
The disclosed computer-implemented method may optimize the timing of when intelligent selection suggestions are provided within a VR/AR environment. In one example, the systems and methods described herein determine a probability that a potential action within a user interface is an intended action; quantify, over a period of time, a value of suggesting the potential action within the user interface; select a time at which to suggest the potential action based on the quantified value over the period of time; and suggest the potential action within the user interface at the selected time. Various other methods, systems, and computer-readable media are also disclosed.
Augmented and/or virtual reality (AR/VR) near-eye display devices that implement eye tracking are disclosed. In examples, an eye tracking system for an augmented reality (AR)/virtual reality (VR) display device includes a light source to emit an incident light beam in a first direction, a fisheye lens to propagate the incident light beam in the first direction, and a micro-electromechanical systems (MEMS) mirror to pivot from a first position to a second position to reflect the incident light beam in a second direction to enable angular amplification and scanning of a field-of-view (FOV).
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
Augmented and/or virtual reality (AR/VR) near-eye display devices that implement eye tracking via dense point cloud scanning are disclosed. In examples, an eye tracking system for an augmented reality/virtual reality (AR/VR) display device comprises a light beam emission and sensor element located in a first location of the display device to emit a light beam. The eye tracking system may comprise a redirection element located in a second location of the display device to redirect the light beam to illuminate one of a line-of-sight and a field-of-view (FOV) of the display device.
The disclosed system may include (1) a first wearable device with an antenna array that alternates between different antenna configurations, each of which yields a different radiation pattern, (2) a second wearable device with at least one antenna, and (3) a control module configured to select, for the antenna array in response to a triggering event, an antenna configuration yielding a radiation pattern that, relative to the radiation patterns yielded by the other antenna configurations within the different antenna configurations, results in the highest radiation correlation with a radiation pattern yielded by the second wearable device's at least one antenna. Various other wearable devices, apparatuses, and methods of manufacturing are also disclosed.
A display engine includes an illumination module, an LCoS display panel, a waveguide, and projection optics configured to direct source light from the illumination module to the LCoS display panel and direct image light from the LCoS display panel to the waveguide.
Techniques are described for utilizing photonic integrated circuits (PICs) to provide light to a spatial light modulator for holographic display systems. A device may comprise a PIC arranged to output light onto a spatial light modulator (SLM). The SLM may have multiple pixels that can be independently controlled to modulate the amplitude and/or phase of light received from the PIC and thereby produce modulated light suitable for generation of a holographic image. Conventional light sources may be unable to generate a sufficiently large eyebox because the beams they direct onto the SLM have a small cone angle. With a PIC, however, light with a greater cone angle may be delivered to the SLM, thereby allowing the SLM to produce images capable of generating a larger eyebox.
The disclosed system may include (1) a first wearable device with an antenna array that alternates between different antenna configurations, each of which yields a different radiation pattern, (2) a second wearable device with at least one antenna, and (3) a control module configured to select, for the antenna array in response to a triggering event, an antenna configuration yielding a radiation pattern that, relative to the radiation patterns yielded by the other antenna configurations within the different antenna configurations, results in the highest radiation correlation with a radiation pattern yielded by the second wearable device's at least one antenna. Various other wearable devices, apparatuses, and methods of manufacturing are also disclosed.
A headset includes a camera, a 3D stacked memory configured to store image data captured by the camera, and a System-on-Chip (SoC) configured to process the image data stored in the 3D stacked memory. The 3D stacked memory includes a plurality of first drivers/receivers and a plurality of memory banks that are accessible in parallel. Each memory bank is accessible via a corresponding first driver/receiver. The SoC includes a memory controller with a plurality of second drivers/receivers. The plurality of the second drivers/receivers of the SoC are respectively connected to the plurality of the first drivers/receivers of the 3D stacked memory by a plurality of channels. The SoC and the 3D stacked memory are vertically stacked together. The plurality of the memory banks include at least eight memory banks.
An extended-reality headset with a joint assembly that includes a first ring, a second ring, and a connecting ring is described. The first ring is a rigid ring connected to an arm of the extended-reality headset. The second ring is a rigid ring connected to a main body of the extended-reality headset. The connecting ring is a flexible ring that couples the first ring to the second ring and allows the arm and the main body to move relative to each other in at least two degrees of freedom.
An example collapsible charging case for a pair of smart glasses comprises a foldable component that is configured to operate in two states, including: a folded state that defines a first interior volume, and an unfolded state that defines a second interior volume that is greater than the first interior volume, the second interior volume being configured to house the pair of smart glasses. The collapsible charging case also includes a deployable mechanism configured to contact a nose bridge portion of the pair of smart glasses, and the deployable mechanism is configured to operate in two states, including: an undeployed state that occurs when the foldable component of the collapsible charging case is in the folded state, and a deployed state that occurs when the foldable component of the collapsible charging case is in the unfolded state.
A first wireless multi-link device (MLD) may include a transceiver configured to transmit a first frame to a first station (STA) of a second wireless MLD over a first link of a plurality of links after waiting for an amount of time during which the first STA switches from a listening mode over the first link to a frame exchange mode over the first link. The transceiver may determine whether the first STA is in a power save mode or an active mode, and can transmit, in response to determining that the first STA is in a power save mode, a second frame to a second STA of the second wireless MLD over a second link of the plurality of links without waiting for an amount of time during which the second STA switches from a listening mode over the second link to a frame exchange mode over the second link.
An apparatus, system, and method for a lens assembly having corner voids are disclosed. The lens assembly may include a glass layer including four corners and a plastic layer including an optical element. In some examples, the plastic layer includes four corner voids aligned over the four corners of the rectangular glass layer.
An example collapsible charging case for a pair of smart glasses comprises a foldable component that is configured to operate in two states, including: a folded state that defines a first interior volume, and an unfolded state that defines a second interior volume that is greater than the first interior volume, the second interior volume being configured to house the pair of smart glasses. The collapsible charging case also includes a deployable mechanism configured to contact a nose bridge portion of the pair of smart glasses, and the deployable mechanism is configured to operate in two states, including: an undeployed state that occurs when the foldable component of the collapsible charging case is in the folded state, and a deployed state that occurs when the foldable component of the collapsible charging case is in the unfolded state.
The disclosed computer-implemented method can include sensing, by a computing device, a plurality of wireless communication channels available for narrow band communication. The method can also include detecting, by the computing device, wireless communication data usage of the sensed plurality of communication channels over a predetermined time period. Additionally, the method can include estimating, by the computing device, the wireless communication data usage of the plurality of communication channels over the predetermined time period. Finally, the method can include selecting, by the computing device, at least one channel from the plurality of communication channels to be used for narrow band data transmission, such that the wireless communication data and the narrow band data transmission coexist without interference.
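A minimal stand-in for the selection step might rank channels by their estimated usage and pick the least-loaded ones for narrow band transmission (the usage metric and channel numbering here are hypothetical):

```python
def select_channels(estimated_usage, n=1):
    """Pick the n channels with the lowest estimated wireless data usage so
    that narrow band transmission can coexist with existing traffic.
    `estimated_usage` maps channel number -> estimated utilisation (0..1)."""
    ranked = sorted(estimated_usage, key=estimated_usage.get)
    return ranked[:n]
```

Channels with the lowest estimated utilisation are the least likely to see interference between the existing wireless traffic and the new narrow band transmission.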
Aspects of the disclosure are directed to an artificial reality (XR) browser configured to trigger an immersive experience. Implementations display an element at a browser chrome of the XR browser when an immersive experience is loaded. For example, the XR browser can include an application programming interface (API) that supports configuration of a browser chrome element by components of a webpage. The API call can cause the display of the browser chrome element, change a display property for the browser chrome element, or configure the browser chrome element in any other suitable manner. Upon receiving input at the browser chrome element (configured by the API call), the XR browser can transition from displaying a two-dimensional panel view of a webpage supported by loaded web resources (e.g., hypertext transfer protocol (HTTP) pages, graphic images, etc.) to a three-dimensional environment supported by preloaded immersive resources (e.g., three-dimensional models, graphic images, etc.).
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
A method of displaying images by vector scanning in a head-mounted display includes determining, based on content of an image to be displayed by the head-mounted display, a vector scanning pattern for displaying the image, the vector scanning pattern including a plurality of scan lines and one or more transition lines connecting the plurality of scan lines, at least one transition line of the one or more transition lines including a curved section (or loop); generating control signals for controlling a scanner to steer, according to the vector scanning pattern, a light beam emitted by a light source towards a waveguide display of the head-mounted display; and generating modulation signals for modulating the light source such that the light beam is turned on when the scanner operates according to the plurality of scan lines and is turned off when the scanner operates according to the one or more transition lines.
Optimizing storage of images at an electronic device by monitoring available storage and providing for multiple image-management modes, and systems and methods of use thereof
A method for optimizing storage of images at a wearable device includes obtaining information about an amount of storage remaining at the wearable device. Upon determining the amount of storage at the wearable device is less than a first of multiple storage-depletion thresholds, the method provides the user an indication that a first image-management mode is available. In the first image-management mode, the method deletes images that are not of a predetermined image type. Then, upon a determination that the amount of storage at the wearable device is less than or equal to a second of multiple storage-depletion thresholds, the method automatically causes the wearable device to operate in a second image-management mode. While in the second image-management mode, the method blocks a user from storing additional images until the method determines that the amount of storage remaining at the wearable device is above the second storage-depletion threshold.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
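The two-threshold image-management behaviour above can be sketched as a simple mode selector (the threshold values and mode labels below are placeholders):

```python
def image_management_mode(free_bytes, first_threshold, second_threshold):
    """Map remaining storage to the behaviour described above.
    Assumes second_threshold < first_threshold; values are hypothetical."""
    if free_bytes <= second_threshold:
        return "second-mode"           # block new images until storage recovers
    if free_bytes < first_threshold:
        return "first-mode-available"  # offer deletion of non-essential images
    return "normal"
```

Crossing the first threshold only makes the first image-management mode available to the user, while crossing the second threshold automatically forces the blocking mode until storage rises back above it.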
A method for improved heat discharge may include sensing a temperature value of a battery circuit; activating a heat dissipation element within the battery circuit when the temperature value reaches a threshold; discharging heat from the battery circuit via the activated heat dissipation element; and deactivating the heat dissipation element when the temperature value falls below the threshold.
H01M 10/637 - Control systems characterised by the use of reversible temperature-sensitive devices, e.g. NTC, PTC or bimetal devices; Control systems characterised by control of the internal current flowing through the cells, e.g. by switching
G06F 1/16 - Constructional details or arrangements
G06T 19/00 - Manipulating 3D models or images for computer graphics
H01M 10/48 - Accumulators combined with arrangements for measuring, testing or indicating the condition of cells, e.g. the level or density of the electrolyte
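The activate/deactivate cycle above can be sketched as a threshold controller over a sequence of temperature readings (the threshold value is illustrative):

```python
def dissipation_states(readings_c, threshold_c=45.0):
    """Trace the heat-dissipation element's state for each temperature
    reading: activate when the temperature reaches the threshold,
    deactivate once it falls back below it."""
    active = False
    trace = []
    for t in readings_c:
        if not active and t >= threshold_c:
            active = True             # start discharging heat
        elif active and t < threshold_c:
            active = False            # cooled down: deactivate
        trace.append(active)
    return trace
```

With readings rising through the threshold and then cooling, the element switches on exactly while the sensed temperature is at or above the threshold, matching the method's activation and deactivation conditions.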
A deformable substrate includes first and second sets of layers. The first and second sets of layers encapsulate a channel. Spacers within the channel contact the first and second sets of layers. The channel is bounded by an interior layer of each of the first and second sets of layers that includes a wicking composition, an interior face, and an exterior face. An intermediate layer overlays the exterior face of each of the first and second sets of layers. An exterior layer overlays the intermediate layer of the first set of layers, and an exterior layer overlays the intermediate layer of the second set of layers. The exterior layer of each of the first and second sets of layers includes a first polymeric composition, and the intermediate layer of each of the first and second sets of layers includes a second polymeric composition.
The present application is directed to optical dispatching circuits that may reduce the footprint of an illumination system. In particular, embodiments of the present application provide an illumination system that splits and spreads incoming light sources (e.g., RGB laser light sources) into a plurality of emitters that cover a two-dimensional (2D) area. The present application describes various implementations of an optical dispatching circuit, which receives light as input and is configured to spread this light across a number of waveguides that each emit light from a plurality of locations. The optical dispatching circuits described herein may be configured to receive light from multiple sources emitting at different wavelengths (such as, but not limited to, red, green and blue light) and effectively deliver the light from the multiple sources in a substantially uniform manner to a plurality of emitters that cover a 2D area.
G02B 6/293 - Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals with wavelength selective means
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00 - G02B 33/00
37.
Solution Of Body-Garment Collisions In Avatars For Immersive Reality Applications
A method for resolving body-garment collisions in avatars for immersive reality applications is provided. The method includes forming a two-dimensional projection of a dressed avatar in an immersive reality application running in a headset, identifying, from the two-dimensional projection, an area that includes a garment collision, and replacing a pixel in the area that includes the garment collision with a pixel indicative of a garment for the dressed avatar, to form a new two-dimensional projection of the dressed avatar. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
An eyewear device may include (1) a graphics pipeline configured to output a digital representation of an image, (2) an image compensation component configured to shift the digital representation of the image in at least one direction in response to a head movement of a user, and (3) a display device configured to display a shifted version of the image to the user due at least in part to the digital representation having been shifted. Various other apparatuses, systems, and methods are also disclosed.
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
39.
REPRESENTATIONS OF HUMAN BODY PARTS WITH APPROPRIATELY-POSITIONED SENSORS AND MECHANICAL CHARACTERISTICS FOR TESTING AND VALIDATING WEARABLE DEVICES, AND SYSTEMS AND METHODS OF USE THEREOF
A physical representation of a human body part is described. The physical representation of the human body part includes a first physical representation of a first portion of the human body part, a second physical representation of a second portion of the human body part, and an amplifier. The first physical representation of the first portion of the human body part includes a first actuator and interfaces with a portion of a head-wearable device. The second physical representation of the second portion of the human body part includes a second actuator. The amplifier is coupled with the first and second actuators. The amplifier drives the first and second actuators based on an incoming signal, such that the respective physical representations of the respective portions of the human body part are caused to imitate human reactions.
An extended-reality headset with a joint assembly that includes a first ring, a second ring, and a connecting ring is described. The first ring is a rigid ring connected to an arm of the extended-reality headset. The second ring is a rigid ring connected to a main body of the extended-reality headset. The connecting ring is a flexible ring that couples the first ring to the second ring and allows the arm and the main body to move relative to each other in at least two degrees of freedom.
TECHNIQUES FOR COORDINATING ARTIFICIAL-REALITY INTERACTIONS USING AUGMENTED-REALITY INTERACTION GUIDES FOR PERFORMING INTERACTIONS WITH PHYSICAL OBJECTS WITHIN A USER'S PHYSICAL SURROUNDINGS, AND SYSTEMS AND METHODS FOR USING SUCH TECHNIQUES
A method of coordinating artificial-reality (AR) interactions by presenting augmented-reality interaction guides is provided. The method includes, after receiving a user input requesting an augmented-reality assisted interaction to be directed to a physical surface, presenting, via the AR headset, an augmented-reality interaction guide that is (i) co-located with the physical surface and (ii) presented with a first orientation relative to the AR headset. The method includes obtaining data indicating that a user interaction has caused a modification to the physical surface. The method further includes, responsive to obtaining additional data, via the AR headset, indicating movement of the physical surface relative to the orientation of the AR headset, presenting the augmented-reality interaction guide to appear at the physical surface with a second orientation relative to the AR headset, different from the first orientation. The second orientation is determined based on the modification to the physical surface and the movement of the physical surface.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the userAccessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
42.
METHODS AND APPARATUSES FOR LOW LATENCY BODY STATE PREDICTION BASED ON NEUROMUSCULAR DATA
A computer-implemented method is disclosed. The method includes receiving signal data from at least one neuromuscular sensor in contact with a user's body in response to a gesture performed by the user. The received signal data is representative of a plurality of neuromuscular signals associated with a plurality of biological structures. The method further includes separating the received signal data into a plurality of data channels. Each data channel is associated with a respective one of the plurality of the biological structures. The method further includes controlling a device based, at least in part, on one or more of the plurality of data channels.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/02 - Input arrangements using manually operated switches, e.g. using keyboards or dials
G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
43.
WEARABLE DEVICES AND ASSOCIATED BAND STRUCTURES FOR SENSING NEUROMUSCULAR SIGNALS AND IDENTIFYING HAND GESTURES AND METHODS OF USE THEREOF
A wearable apparatus for gesture control is disclosed. The wearable apparatus includes one or more first sensors configured to contact skin on a wrist of a user when the wearable apparatus is worn by the user. The one or more first sensors are configured to generate first signals associated with muscle activity of the user. The wearable apparatus further includes an inertial sensor configured to generate second signals associated with a motion of the user. The wearable apparatus further includes one or more processors configured to receive the first signals generated by the one or more first sensors and receive the second signals generated by the inertial sensor. The one or more processors are further configured to determine a gesture of the user based at least in part on an analysis of the first signals and the second signals, and perform an action associated with the gesture.
A system may include an antenna feed having various antenna feed components. The system may also include a multi-layer capacitive touch sensor that is secured to at least a portion of the support structure. The system may further include a conductive element that electrically connects the antenna feed to the multi-layer capacitive touch sensor, so that at least a portion of the multi-layer capacitive touch sensor acts as a radiator for the antenna feed. Various other mobile electronic devices and apparatuses are also disclosed.
A deformable substrate includes first and second sets of layers. The first and second sets of layers encapsulate a channel. Spacers within the channel contact the first and second sets of layers. The channel is bounded by an interior layer of each of the first and second sets of layers that includes a wicking composition, an interior face, and an exterior face. An intermediate layer overlays the exterior face of each of the first and second sets of layers. An exterior layer overlays the intermediate layer of the first set of layers, and an exterior layer overlays the intermediate layer of the second set of layers. The exterior layer of each of the first and second sets of layers includes a first polymeric composition, and the intermediate layer of each of the first and second sets of layers includes a second polymeric composition.
Features described herein pertain to low-latency hierarchical image capture. An image can be captured using a pixel array of an image sensing system. A region of interest (ROI) can be detected in the image. Image characteristics of the image can be determined based on the ROI, including an image quality level of the ROI. Image capturing instructions for capturing a set of images can be determined based on the image characteristics. The set of images can be combined into a single image, and an object can be recognized in that image.
H04N 23/61 - Control of cameras or camera modules based on recognised objects
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/98 - Detection or correction of errors, e.g. by rescanning the pattern or by human interventionEvaluation of the quality of the acquired patterns
47.
Wireless configuration based on antenna connection status
Disclosed herein are systems and methods related to adaptively configuring a wireless device according to a connection status of one or more antennas. In one aspect, the wireless device includes a wireless interface and a plurality of antenna ports coupled to the wireless interface. In one aspect, the wireless device includes one or more sensors coupled to the plurality of antenna ports. The one or more sensors may be configured to provide an indication of whether each of the plurality of antenna ports is connected to a corresponding antenna. In one aspect, the wireless device includes a processor configured to set a configuration of the wireless interface according to the indication.
The various implementations described herein include methods and devices for artificial-reality systems. In one aspect, a head-wearable device includes a depth-tracking component configured to obtain depth information associated with one or more objects in a physical environment of the head-wearable device. The head-wearable device also includes a set of peripheral camera components, and a set of forward-facing camera components, where each forward-facing camera component of the set of forward-facing camera components includes a monochrome camera and a color camera.
H04N 23/90 - Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
G06T 7/521 - Depth or shape recovery from laser ranging, e.g. using interferometryDepth or shape recovery from the projection of structured light
G06T 7/70 - Determining position or orientation of objects or cameras
H04N 23/13 - Cameras or camera modules comprising electronic image sensorsControl thereof for generating image signals from different wavelengths with multiple sensors
H04N 23/84 - Camera processing pipelinesComponents thereof for processing colour signals
Aspects of the present disclosure integrate pixelated multi-state panels (e.g., liquid crystal dimming panels) into artificial reality (XR) displays (e.g., augmented reality (AR) glasses) or conventional glasses. Conventional visual displays for XR systems can be expensive, have a low field-of-view, and consume high power. Pixelated multi-state panels, in contrast, consume lower power, have a wide field-of-view, and are light, inexpensive, and computationally simple. The multi-state panels can have four configurations: 1) included on the periphery of an XR display, 2) as a standalone display, 3) as a display that is overlaid onto an XR display, and/or 4) as a secondary externally facing display. The multi-state panels can be selectively activated based on a trigger, such as identification of an object in the real-world environment, an event occurring in an XR application executing on the XR display, detected audio, etc.
A method for seamless display switching between a head-wearable device and one or more display devices is described. The method includes causing the head-wearable device to display a user interface. The method further includes, in accordance with a determination, at a first point in time, that the user is looking at a first display device of the one or more display devices: (i) causing the head-wearable device to cease displaying the user interface and (ii) causing the first display device to display the user interface. The method further includes, in accordance with a determination, at a second point in time after the first point in time, that the user is not looking at the one or more display devices: (i) causing the first display device to cease displaying the user interface and (ii) causing the head-wearable device to display the user interface.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
52.
REPRESENTATIONS OF HUMAN BODY PARTS WITH APPROPRIATELY-POSITIONED SENSORS AND MECHANICAL CHARACTERISTICS FOR TESTING AND VALIDATING WEARABLE DEVICES, AND SYSTEMS AND METHODS OF USE THEREOF
A physical representation of a human body part is described. The physical representation of the human body part includes a first physical representation of a first portion of the human body part, a second physical representation of a second portion of the human body part, and an amplifier. The first physical representation of the first portion of the human body part includes a first actuator and interfaces with a portion of a head-wearable device. The second physical representation of the second portion of the human body part includes a second actuator. The amplifier is coupled with the first and second actuators. The amplifier drives the first and second actuators based on an incoming signal, such that the respective physical representations of the respective portions of the human body part are caused to imitate human reactions.
The disclosed computer-implemented method may include (i) detecting a battery condition of a wearable battery-operated device that indicates a threat to a battery's health and (ii) in response to detecting the battery condition, performing a battery-protection action by initiating a reverse power flow across a bidirectional connection from the wearable battery-operated device to a portable charging case that is designed to charge the wearable battery-operated device. Various other methods, systems, and computer-readable media are also disclosed.
A method for seamless display switching between a head-wearable device and one or more display devices is described. The method includes causing the head-wearable device to display a user interface. The method further includes, in accordance with a determination, at a first point in time, that the user is looking at a first display device of the one or more display devices: (i) causing the head-wearable device to cease displaying the user interface and (ii) causing the first display device to display the user interface. The method further includes, in accordance with a determination, at a second point in time after the first point in time, that the user is not looking at the one or more display devices: (i) causing the first display device to cease displaying the user interface and (ii) causing the head-wearable device to display the user interface.
Ghost images can interfere with eye tracking in a system that uses folded optics, such as an artificial-reality display. Optical elements, such as waveplates and/or polarizers, can be used to attenuate or eliminate the light causing the ghost images.
An optical dimmer including a stack of active dimming elements is disclosed. Each active dimming element includes a pair of transparent electrodes and an electroactive material between the electrodes. A first electrical bridge couples first transparent electrodes of the active dimming elements. A second electrical bridge couples second transparent electrodes of the active dimming elements. A powering structure is coupled to one of the active dimming elements for applying voltage to that active dimming element. The first and second electrical bridges couple the voltage to other active dimming element(s) of the stack. The first and second electrical bridges and the powering structure are spaced apart from one another along a perimeter of the stack, providing for a compact and customizable overall configuration.
G02F 1/1347 - Arrangement of liquid crystal layers or cells in which the final condition of one light beam is achieved by the addition of the effects of two or more layers or cells
G02F 1/137 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulatingNon-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering
57.
SYSTEMS AND METHODS OF INDICATING CONGESTION SEVERITY
Systems and methods of indicating congestion severity may include a wireless communication endpoint that receives, via a transceiver from a base station, one or more first packets. The one or more first packets may include one or more bits configured by the base station according to a network congestion level. The wireless communication endpoint may determine the network congestion level from a plurality of network congestion levels, based on the one or more bits configured by the base station in the one or more first packets. The wireless communication endpoint may selectively update one or more configurations for generating one or more second packets for transmission to the base station. The one or more configurations may be selectively updated according to the network congestion level.
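As an illustrative sketch of the determination and update steps the abstract describes, an endpoint might map the base-station-configured bits to one of several congestion levels and then selectively adjust how it generates uplink packets. The two-bit encoding, the level names, and the rate-halving policy below are assumptions for illustration, not details from the abstract:

```python
# Assumed mapping: two base-station-configured bits select one of four levels.
CONGESTION_LEVELS = ["none", "low", "medium", "high"]

def decode_congestion_level(bits: int) -> str:
    """Map the bits carried in the first packets to a congestion level."""
    if not 0 <= bits < len(CONGESTION_LEVELS):
        raise ValueError("unexpected congestion bit pattern")
    return CONGESTION_LEVELS[bits]

def update_uplink_config(level: str, config: dict) -> dict:
    """Selectively update the configuration used to generate second packets.

    Hypothetical policy: halve the packet rate under medium/high congestion,
    leave the configuration unchanged otherwise.
    """
    if level in ("medium", "high"):
        config = {**config, "packet_rate_hz": config["packet_rate_hz"] // 2}
    return config
```

For example, `update_uplink_config(decode_congestion_level(3), {"packet_rate_hz": 100})` would produce a halved packet rate under this assumed policy.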
Disclosed herein are pupil expanders for pupil replication in near-eye display systems. In some examples, a pupil expander includes a waveguide and a variable reflectivity transflective mirror, where display light propagating in the waveguide is transmitted through a plurality of regions of the variable reflectivity transflective mirror to form a plurality of replicas of the pupil along one direction. Two such pupil expanders can be used to replicate the pupil in two directions. In some examples, a pupil expander includes a waveguide, an output coupler formed on or in the waveguide, and two or more transflective mirrors in the waveguide and parallel to a surface of the waveguide for improving the pupil replication density and light intensity uniformity of the eyebox.
Systems and methods for selection of cellular slices for application traffic may include a user equipment which receives a message indicating a change in session capabilities, from a first session capability indicating one or more first services used in a first session to a second session capability indicating one or more second services to be used in a second session. The first session may correspond to a first bearer within a first quality of service (QoS) slice and associated with a first application identifier of an application. The user equipment may configure a second application identifier for the application based on the second session capability. The user equipment may transmit, via a transceiver to a base station, traffic of the application, using the second application identifier, via the second session.
Systems and methods of using transmission blanking may include one or more processors of a communication device which receive data corresponding to a thermal level of a device. The communication device may receive one or more packets for transmission to a wireless communication node. The communication device may apply the thermal level to one or more criteria, to selectively transmit, or forego transmission of, the one or more packets to the wireless communication node.
H04W 72/21 - Control channels or signalling for resource management in the uplink direction of a wireless link, i.e. towards the network
H04W 72/566 - Allocation or scheduling criteria for wireless resources based on priority criteria of the information or information source or recipient
61.
SYSTEMS AND METHODS FOR EXPOSED PIXEL ARRAY CHIP SCALE PACKAGING
The disclosed method may include applying adhesive and a release layer to a temporary carrier. The method may additionally include bonding a sensor wafer to the temporary carrier face down. The method may also include forming one or more package features at least one of in or on the sensor wafer bonded to the temporary carrier. Various other methods, systems, and computer-readable media are also disclosed.
H10F 39/00 - Integrated devices, or assemblies of multiple devices, comprising at least one element covered by group , e.g. radiation detectors comprising photodiode arrays
H01L 21/78 - Manufacture or treatment of devices consisting of a plurality of solid state components or integrated circuits formed in, or on, a common substrate with subsequent division of the substrate into plural individual devices
H01L 23/00 - Details of semiconductor or other solid state devices
62.
Techniques For Determining Tasks Based On Data From A Sensor Set Of An Extended-Reality System Using A Sensor Set Agnostic Contrastively-Trained Learning Model, And Systems And Methods Of Use Thereof
Systems and methods of using a shared sensor set for detecting multiple tasks are disclosed. An example method includes receiving, via a first sensor of the shared sensor set, first data representative of visual intent and receiving, via a second sensor of the shared sensor set, second data representative of a hand input. The method includes determining, using a contrastively-trained model, third data that describes a relationship between the first data and the second data, and determining, using a task-inferring model and the third data, a task to be performed at an XR system. The method further includes providing instructions for causing performance of the task at the XR system.
Systems and methods for performing low-density parity-check (LDPC) coding may include a wireless communication device that determines an LDPC codeword length from among a predetermined set of codeword lengths, according to a payload bit count corresponding to a plurality of information bits. The wireless communication device may determine a number of LDPC codewords according to the payload bit count and the selected LDPC codeword length. The wireless communication device may provide, via an LDPC encoder, the number of LDPC codewords, each LDPC codeword encoding a respective portion of the plurality of information bits and having the selected LDPC codeword length.
H03M 13/25 - Error detection or forward error correction by signal space coding, i.e. adding redundancy in the signal constellation, e.g. Trellis Coded Modulation [TCM]
H03M 13/00 - Coding, decoding or code conversion, for error detection or error correctionCoding theory basic assumptionsCoding boundsError probability evaluation methodsChannel modelsSimulation or testing of codes
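The length selection and codeword-count determination described in the LDPC abstract above can be sketched as a ceiling division over a per-codeword information capacity. The predetermined codeword lengths and the fixed code rate below are hypothetical illustration values, not figures taken from the abstract:

```python
import math

# Assumed predetermined set of codeword lengths (bits), shortest first,
# and an assumed code rate (information bits per coded bit).
CODEWORD_LENGTHS = (648, 1296, 1944)
CODE_RATE = 1 / 2

def select_codeword_length(payload_bits: int) -> int:
    """Pick the shortest predetermined length whose information capacity
    covers the payload; fall back to the longest length otherwise."""
    for n in CODEWORD_LENGTHS:
        if payload_bits <= int(n * CODE_RATE):
            return n
    return CODEWORD_LENGTHS[-1]

def num_codewords(payload_bits: int, codeword_length: int) -> int:
    """Number of codewords needed: ceil(payload / per-codeword capacity)."""
    capacity = int(codeword_length * CODE_RATE)
    return math.ceil(payload_bits / capacity)
```

Under these assumptions, a 300-bit payload fits in one 648-bit codeword, while a 2000-bit payload falls back to the longest length and is split across three codewords.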
64.
HIGH REFRACTIVE INDEX AND HIGHLY BIREFRINGENT SOLID ORGANIC MATERIALS
An organic thin film includes an organic solid crystal material and has mutually orthogonal refractive indices, nx, ny, nz, each having a value at 589 nm of between 1.5 and 2.6, with Δnxy<Δnxz<Δnyz, where the organic solid crystal material includes an organic molecule selected from 1,2,3-trichlorobenzene, 1,2-diphenylethyne, phenazine, terphenyl, 1,2-bis(4-(methylthio)phenyl)ethyne, and anthracene. The organic thin film may be birefringent and may be configured as a single layer thin film, or plural organic thin films may be stacked to form a multilayer that may be incorporated into an optical element, such as a reflective polarizer.
C30B 7/08 - Single-crystal growth from solutions using solvents which are liquid at normal temperature, e.g. aqueous solutions by cooling of the solution
Systems, devices, and methods for eye tracking and other sensing using self-mixing interferometry (SMI) through a waveguide in a near-eye device, i.e., in-field SMI eye tracking. The near-eye display device may include a waveguide having SMI coupler(s) and eye-side coupler(s) where one or more SMI(s): (i) project light into the SMI coupler(s) through the waveguide and out the eye-side coupler(s) to the user's eye; and (ii) receive light reflected back from the user's eye into the eye-side coupler(s) through the waveguide and out the SMI coupler(s). The one or more SMIs mix/modulate the received reflected light with presently projected light to provide an electrical signal for eye tracking. The SMI and/or eye-side couplers may be reflective, refractive, and/or diffractive; interior/exterior gratings, lenses, and/or coatings may be utilized in/on the waveguide to redirect, modulate, and/or otherwise alter the projected/received light.
Systems and methods for forward error correction for cellular communication may include an endpoint which receives, from a wireless communication node, an indication of a portion of packets which are to be dropped by the wireless communication node, according to a first forward error correction (FEC) ratio. The endpoint may receive quality of service (QoS) feedback from another endpoint. The endpoint may determine whether to switch from the first FEC ratio to a second FEC ratio, according to the QoS feedback and the indication.
Battery assemblies may include a battery cell unit enclosed in a steel can and a battery management unit overmolded to the steel can with an overmold material. Various other devices and systems are also disclosed.
H01M 10/42 - Methods or arrangements for servicing or maintenance of secondary cells or secondary half-cells
H01M 50/103 - Primary casingsJackets or wrappings characterised by their shape or physical structure prismatic or rectangular
H01M 50/209 - Racks, modules or packs for multiple batteries or multiple cells characterised by their shape adapted for prismatic or rectangular cells
H01M 50/247 - MountingsSecondary casings or framesRacks, modules or packsSuspension devicesShock absorbersTransport or carrying devicesHolders specially adapted for portable devices, e.g. mobile phones, computers, hand tools or pacemakers
H01M 50/503 - Interconnectors for connecting terminals of adjacent batteriesInterconnectors for connecting cells outside a battery casing characterised by the shape of the interconnectors
H01M 50/572 - Means for preventing undesired use or discharge
Systems and methods of adjusting a codec rate may include a wireless communication endpoint which estimates an uplink rate for a wireless communication node, based on a grant provided by the wireless communication node to the wireless communication endpoint. The wireless communication endpoint may send information relating to the estimated uplink rate to an application of the wireless communication endpoint. The wireless communication endpoint may transmit, to the wireless communication node, one or more packets generated using a codec rate configured by the application according to the information relating to the estimated uplink rate.
Systems and methods for adjusting throttling based on device performance may include a wireless communication device which identifies a thermal condition of a device. The wireless communication device may determine, from a plurality of performance characteristics, one or more performance characteristics in which to throttle performance of the wireless communication device, according to an application executing on the device. The wireless communication device may modify the one or more performance characteristics to reduce the thermal condition of the device to within a thermal condition range.
H04W 52/36 - Transmission power control [TPC] using constraints in the total amount of available transmission power with a discrete range or set of values, e.g. step size, ramping or offsets
70.
SYSTEMS AND METHODS FOR SELECTION OF CELLULAR SLICE FOR APPLICATION TRAFFIC
Systems and methods for selection of cellular slices for application traffic may include a user equipment which receives a message indicating a change in session capabilities, from a first session capability indicating one or more first services used in a first session to a second session capability indicating one or more second services to be used in a second session. The first session may correspond to a first bearer within a first quality of service (QoS) slice and associated with a first application identifier of an application. The user equipment may configure a second application identifier for the application based on the second session capability. The user equipment may transmit, via a transceiver to a base station, traffic of the application, using the second application identifier, via the second session.
Systems and methods of using transmission blanking may include one or more processors of a communication device which receive data corresponding to a thermal level of a device. The communication device may receive one or more packets for transmission to a wireless communication node. The communication device may apply the thermal level to one or more criteria, to selectively transmit, or forego transmission of, the one or more packets to the wireless communication node.
A system comprising (1) an eyewear device dimensioned to be worn by a user and configured to provide an artificial-reality experience to the user, (2) at least one coherent light source coupled to the eyewear device and configured to illuminate an eye of a user, (3) at least one optical sensor configured to generate data that represents images of the eye, and (4) circuitry configured to (A) identify a representation of at least one speckle pattern in the data and (B) determine at least one attribute of the eye based at least in part on the speckle pattern. Various other apparatuses, systems, and methods are also disclosed.
A thermal system configured to bend in multiple directions and provide a thermal conduit to transfer heat through an electronic device having a bent or curved profile. A thermal system may include a first thermal management component, a second thermal management component, and a memory material coupler coupled to the first thermal management component and the second thermal management component. The memory material coupler is configured to provide mechanical articulation of the first thermal management component relative to the second thermal management component and transfer heat from a first location to a second location of the electronic device.
Described herein are non-drying, ionic liquid-based electrolyte gels (ion gels) fabricated using a mixture of free and polymerizable imidazolium bistriflimide (TFSI) ionic liquids and a polyurethane diacrylate crosslinker. The ionic liquid-based electrolyte gels described herein exhibit softness comparable to that of epidermal skin and water-based electrolyte gels, with suitable storage moduli. The ion gels described herein do not dry out when exposed to air and have stable storage modulus values, making them suitable for long-term use across a wide range of temperatures.
C08L 39/04 - Homopolymers or copolymers of monomers containing heterocyclic rings having nitrogen as ring member
A61B 5/266 - Bioelectric electrodes therefor characterised by the electrode materials containing electrolytes, conductive gels or pastes
C08F 2/50 - Polymerisation initiated by wave energy or particle radiation by ultraviolet or visible light with sensitising agents
C08F 26/06 - Homopolymers or copolymers of compounds having one or more unsaturated aliphatic radicals, each having only one carbon-to-carbon double bond, and at least one being terminated by a single or double bond to nitrogen or by a heterocyclic ring containing nitrogen by a heterocyclic ring containing nitrogen
75.
SYSTEMS AND METHODS OF ADJUSTING MONITORING PERIODS
Systems and methods for adjusting monitoring periods may include a wireless communication device which receives, from a wireless communication node, an information element comprising a timing offset and a frequency block. The wireless communication device may switch from a radio resource control (RRC) connected mode to at least one of an RRC-idle mode or an RRC-inactive mode, subsequent to receiving the information element. The wireless communication device may perform a wake-up process at a time determined according to a cycle period and the timing offset and frequency block of the information element, to receive a tracking reference signal (TRS) message from the wireless communication node.
Systems and methods of adjusting a codec rate may include a wireless communication endpoint which estimates an uplink rate for a wireless communication node, based on a grant provided by the wireless communication node to the wireless communication endpoint. The wireless communication endpoint may send information relating to the estimated uplink rate to an application of the wireless communication endpoint. The wireless communication endpoint may transmit, to the wireless communication node, one or more packets generated using a codec rate configured by the application according to the information relating to the estimated uplink rate.
Systems and methods for adjusting throttling based on device performance may include a wireless communication device which identifies a thermal condition of a device. The wireless communication device may determine, from a plurality of performance characteristics, one or more performance characteristics in which to throttle performance of the wireless communication device, according to an application executing on the device. The wireless communication device may modify the one or more performance characteristics to reduce the thermal condition of the device to within a thermal condition range.
Systems and methods of low latency communication may include receiving, by a communication device, a grant from a wireless communication node, defining a plurality of sub-frames in which to transmit one or more data packets. The communication device may generate, for a data packet, a plurality of duplicate packets of the data packet. The communication device may transmit, in successive sub-frames of the plurality of sub-frames, a respective packet from the data packet and the plurality of duplicate packets.
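The mapping of an original packet and its duplicates onto successive granted sub-frames can be sketched as follows; the names are illustrative and not taken from the claims:

```python
def schedule_duplicates(packet, granted_subframes, num_duplicates):
    """Map a packet plus its duplicates onto successive granted sub-frames.

    Returns (sub-frame, payload) pairs for transmission in order.
    """
    payloads = [packet] * (1 + num_duplicates)  # original followed by duplicates
    if len(payloads) > len(granted_subframes):
        raise ValueError("grant does not cover the original and all duplicates")
    return list(zip(granted_subframes, payloads))

# The original and two duplicates occupy three successive sub-frames.
print(schedule_duplicates("pkt-7", [4, 5, 6, 7], 2))
# → [(4, 'pkt-7'), (5, 'pkt-7'), (6, 'pkt-7')]
```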
Aspects of the present disclosure are directed to controlling an immersive environment via an application programming interface (API). Some applications executing via artificial reality systems provide immersive content for display to the user. However, other types of artificial reality applications provide lighter-weight content (e.g., two-dimensional content, three-dimensional content that is not immersive, etc.), such as a web browser, video player, social media application, communication application, and many others. These executing applications that provide content for portions of the artificial reality system's display often have limited control over the immersive elements of the artificial reality system, such as the immersive environment in which a two-dimensional or three-dimensional virtual object is displayed. Implementations of an immersive controller expose an API that these applications can call to control these elements of the immersive environment.
Battery assemblies may include a battery cell unit enclosed in a steel can and a battery management unit overmolded to the steel can with an overmold material. Various other devices and systems are also disclosed.
Methods and systems are described to facilitate automated determination of responses to detected statements or events. The system may detect a statement(s), question, event or action. The system may further determine an intent(s), and a starter sentence(s) associated with the statement(s), question(s), event(s) or action(s). The system may further output audio or text of one or more words of the starter sentence(s). The system may further provide the intent(s) and the starter sentence(s) to at least one large language model to enable the large language model to determine a complete sentence(s) in response to the statement(s), question(s), event(s) or action(s). The system may further output audio or text of a subset(s) of the complete sentence(s) immediately after the output of the audio or the text of the one or more words of the starter sentence(s).
A63F 13/424 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
G06F 40/40 - Processing or translation of natural language
82.
SYSTEMS AND METHODS OF INDICATING CONGESTION SEVERITY
Systems and methods of indicating congestion severity may include a wireless communication endpoint that receives, via a transceiver from a base station, one or more first packets. The one or more first packets may include one or more bits configured by the base station according to a network congestion level. The wireless communication endpoint may determine the network congestion level from a plurality of network congestion levels. The network congestion level may be based on the one or more bits configured by the base station in the one or more first packets. The wireless communication endpoint may selectively update one or more configurations for generating one or more second packets for transmission to the base station. The one or more configurations may be selectively updated according to the network congestion level.
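The bit-to-level decoding and selective configuration update can be sketched as follows; the two-bit mapping, level names, and bitrate scaling are illustrative assumptions:

```python
CONGESTION_LEVELS = ("none", "mild", "moderate", "severe")  # assumed 2-bit mapping

def decode_congestion(bits):
    """Decode a two-bit congestion indication carried in downlink packets."""
    return CONGESTION_LEVELS[(bits[0] << 1) | bits[1]]

def update_uplink_config(config, level):
    """Selectively scale the uplink bitrate according to the decoded level."""
    scale = {"none": 1.0, "mild": 0.8, "moderate": 0.5, "severe": 0.25}[level]
    updated = dict(config)
    updated["bitrate_kbps"] = config["bitrate_kbps"] * scale
    return updated

# Bits (1, 0) decode to "moderate", halving the bitrate used for second packets.
print(update_uplink_config({"bitrate_kbps": 1000}, decode_congestion((1, 0))))
```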
An antenna system comprising (1) an array of antennas capable of beamforming and (2) at least one controller communicatively coupled to the array of antennas, wherein the controller (A) collects a first set of measurements taken at each of the antennas as the antennas are activated individually, (B) collects a second set of measurements taken at each of the antennas as pairs of the antennas are activated together, (C) determines one or more inefficiencies in the beamforming of the antennas based at least in part on the first and second sets of measurements, and (D) calibrates the antennas to improve the beamforming by modifying one or more phase shifters of the antennas to compensate for the inefficiencies in the beamforming. Various other apparatuses, systems, and methods are also disclosed.
H04B 17/12 - MonitoringTesting of transmitters for calibration of transmit antennas, e.g. of amplitude or phase
H04B 7/06 - Diversity systemsMulti-antenna systems, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
86.
Combined charging stand with controllers and headborne device
Systems and methods are provided for presenting a qualitative descriptor of a user's physiological state at a wrist-wearable device. The method includes monitoring, via one or more sensors, values for a plurality of physiological parameters for a user wearing the wrist-wearable device. The method includes comparing the values for the plurality of physiological parameters to baseline values for the physiological parameters. The baseline values are determined based on values for the plurality of physiological parameters that were measured over a predetermined period of time. The method includes, based on the comparison, determining a qualitative descriptor of the user's physiological state from among a set of three or more predefined qualitative descriptors. The method includes presenting, on a display that is in communication with the wrist-wearable device, the qualitative descriptor of the user's physiological state without displaying a numeric score representing the user's physiological state.
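The baseline comparison and descriptor selection can be sketched as follows; the parameter names, ratio test, and descriptor words are illustrative assumptions:

```python
def qualitative_descriptor(values, baselines):
    """Map physiological readings onto one of three descriptors.

    values and baselines are dicts of parameter -> reading; the point is that
    the wearer sees a word rather than a numeric score.
    """
    ratios = [values[k] / baselines[k] for k in baselines]
    average = sum(ratios) / len(ratios)
    if average >= 1.05:
        return "energized"
    if average >= 0.95:
        return "steady"
    return "run down"

# Readings close to baseline yield the middle descriptor.
print(qualitative_descriptor({"hrv": 48, "sleep_h": 7.5},
                             {"hrv": 50, "sleep_h": 7.0}))  # → steady
```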
A method and system to conduct thermal energy between two hinged portions of an electronic device. In examples, the method employs a thermal hinge system configured to transfer or spread thermal energy, and optionally electrical energy, through a mechanical articulation or hinge in an electronic device. A thermal hinge may include a thermally conductive living hinge, complementary and/or mating thermal interface components, or a combination of both.
The disclosed method may include using a wafer-level process to build up a plurality of redistribution layers. The method may additionally include wafer-level mounting a plurality of flip chip die atop the plurality of redistribution layers. The method may also include wafer-level wire bonding the plurality of flip chip die to the plurality of redistribution layers. Various other methods, systems, and computer-readable media are also disclosed.
H01L 21/48 - Manufacture or treatment of parts, e.g. containers, prior to assembly of the devices, using processes not provided for in a single one of the groups or
H01L 21/56 - Encapsulations, e.g. encapsulating layers, coatings
H01L 21/683 - Apparatus specially adapted for handling semiconductor or electric solid state devices during manufacture or treatment thereofApparatus specially adapted for handling wafers during manufacture or treatment of semiconductor or electric solid state devices or components for supporting or gripping
H01L 23/00 - Details of semiconductor or other solid state devices
H01L 23/31 - Encapsulation, e.g. encapsulating layers, coatings characterised by the arrangement
H01L 23/538 - Arrangements for conducting electric current within the device in operation from one component to another the interconnection structure between a plurality of semiconductor chips being formed on, or in, insulating substrates
H01L 25/07 - Assemblies consisting of a plurality of individual semiconductor or other solid-state devices all the devices being of a type provided for in a single subclass of subclasses , , , , or , e.g. assemblies of rectifier diodes the devices not having separate containers the devices being of a type provided for in subclass
H10B 80/00 - Assemblies of multiple devices comprising at least one memory device covered by this subclass
90.
DEVICES, SYSTEMS, AND METHODS FOR IMPROVED ANTENNA PERFORMANCE IN EYEWEAR FRAMES
An eyewear device that facilitates and/or supports improved antenna performance may include a temple that comprises an at least partial cavity. In some examples, the eyewear device may also include an antenna that is placed inside the at least partial cavity and/or dimensioned commensurate with a full wavelength of a carrier frequency to be applied to the antenna. Various other devices, systems, and methods are also disclosed.
Systems and methods for forward error correction for cellular communication may include an endpoint which receives, from a wireless communication node, an indication of a portion of packets which are to be dropped by the wireless communication node, according to a first forward error correction (FEC) ratio. The endpoint may receive quality of service (QOS) feedback from another endpoint. The endpoint may determine whether to switch from the first FEC ratio to a second FEC ratio, according to the QoS feedback and the indication.
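The FEC-switching decision can be sketched as a simple rule over the node's drop indication and the peer's QoS feedback; the ratios, threshold, and names below are illustrative assumptions:

```python
def choose_fec_ratio(current_ratio, indicated_drop_fraction, qos_residual_loss,
                     loss_target=0.01, heavier_ratio=0.5):
    """Decide whether to switch FEC ratios (illustrative decision rule).

    indicated_drop_fraction: portion of packets the node signalled it will drop;
    qos_residual_loss: loss the peer endpoint still reports after FEC recovery.
    Switch to the heavier ratio only when residual loss exceeds the target and
    the indicated drops exceed what the current ratio can repair.
    """
    if qos_residual_loss > loss_target and indicated_drop_fraction > current_ratio:
        return heavier_ratio
    return current_ratio

# 30% indicated drops overwhelm a 20% FEC ratio once residual loss shows up.
print(choose_fec_ratio(0.2, 0.3, 0.05))  # → 0.5
```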
Systems, methods, and/or apparatuses are described for a holographic projection module in a near-eye display device, which may display augmented reality/virtual reality (AR/VR) content to a user. In one aspect, a spatial light modulator (SLM) is illuminated by a planar wavefront, which the SLM modulates with a pattern and projects the patterned light through a projection lens to form a 2D hologram that is input to a waveguide display. Some examples may include a high-order filter to form an aperture in the Fourier domain controllable by the phase pattern displayed on the SLM; other examples may have no projection lenses, where the displayed SLM image corresponds to the user-perceived Fourier-domain image representation. Some examples may use two stacked SLMs, a complex wavefront modulation SLM, and/or a phase SLM with a mask (such as, e.g., a binary amplitude mask).
The disclosed semiconductor device package may include a compute chip configured to perform contextual artificial intelligence and machine perception operations. The disclosed semiconductor device package may additionally include a sensor positioned above the compute chip in the semiconductor device package. The disclosed semiconductor device package may also include one or more electrical connections configured to facilitate communication between the compute chip and the sensor, between the compute chip and a printed circuit board, and between the sensor and the printed circuit board. Various other methods, systems, and computer-readable media are also disclosed.
H10F 39/00 - Integrated devices, or assemblies of multiple devices, comprising at least one element covered by group , e.g. radiation detectors comprising photodiode arrays
H01L 23/00 - Details of semiconductor or other solid state devices
H01L 23/538 - Arrangements for conducting electric current within the device in operation from one component to another the interconnection structure between a plurality of semiconductor chips being formed on, or in, insulating substrates
H01L 25/00 - Assemblies consisting of a plurality of individual semiconductor or other solid-state devices
H01L 25/16 - Assemblies consisting of a plurality of individual semiconductor or other solid-state devices the devices being of types provided for in two or more different subclasses of , , , , or , e.g. forming hybrid circuits
H10F 39/95 - Assemblies of multiple devices comprising at least one integrated device covered by group , e.g. comprising integrated image sensors
94.
METHODS OF UTILIZING REINFORCEMENT LEARNING FOR ENHANCED TEXT SUGGESTIONS, AND SYSTEMS AND DEVICES THEREFOR
Techniques and apparatuses for enhanced text suggestions are described. An example method includes detecting a user gesture performed by a user of a computing system based on data from one or more neuromuscular sensors and identifying a set of text characters corresponding to the user gesture. The method further includes causing display of the set of text characters in a user interface and determining whether a cognitive load of the user meets one or more criteria. The method also includes providing a text suggestion to the user based on the set of text characters in accordance with a determination that the cognitive load of the user meets the one or more criteria, and forgoing providing the text suggestion to the user based on the set of text characters, in accordance with a determination that the cognitive load of the user does not meet the one or more criteria.
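The cognitive-load gating of suggestions can be sketched as follows; the scalar load value, threshold, and function names are illustrative assumptions:

```python
def maybe_suggest(text_chars, cognitive_load, suggest_fn, load_threshold=0.7):
    """Gate text suggestions on the user's cognitive load.

    suggest_fn turns the recognized characters into a suggestion; it is invoked
    only when the load meets the criterion, and skipped otherwise.
    """
    if cognitive_load >= load_threshold:
        return suggest_fn(text_chars)
    return None  # forgo the suggestion

def correct(s):
    return s.replace("helo", "hello")

print(maybe_suggest("helo", 0.9, correct))  # → hello
print(maybe_suggest("helo", 0.3, correct))  # → None
```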
Methods and systems of coordinating display of biometric data at a head-worn wearable device based on sensor data from a wrist-wearable device are disclosed. A method includes receiving an indication that a user of a head-worn wearable device is performing a physical activity. The head-worn wearable device includes a light-emitting diode visible to the user while wearing the head-worn wearable device and is in communication with a wrist-wearable device worn by the user. The wrist-wearable device is configured to sense biometric data for the user during the physical activity. The method includes, after receiving the indication and while the user is performing the activity, in accordance with a determination that the biometric data satisfies a physiological-based threshold indicating that information about the biometric data would assist the user in performing the physical activity, causing the head-worn wearable device to present, via the light-emitting diode, the information about the biometric data.
WEARABLE DEVICE INCLUDING AN ARTIFICIALLY INTELLIGENT ASSISTANT FOR GENERATING RESPONSES BASED ON SHARED CONTEXTUAL DATA, AND SYSTEMS AND METHODS OF USE THEREOF
Systems and methods including an artificially intelligent assistant are described. An example method includes, in response to a user input initiating an artificially intelligent (AI) assistant, capturing contextual data including one or more of image data and audio data. The method includes generating, based on the contextual data, user query data including a user query and a portion of the contextual data. The method includes determining, using an AI assistant model that receives the user query data, a user prompt based on at least the user query and the portion of the contextual data, and generating, by the AI assistant model, a response to the user prompt. The method further includes causing presentation of the response to the user prompt at a head-wearable device.
An apparatus that facilitates and/or supports efficiently testing device radiation for spurious emissions may include a chamber that includes a plurality of interior sides. This apparatus may also include a plurality of antennas coupled to the plurality of interior sides, and the plurality of antennas may be configured to receive radiation emitted by a device under test. This apparatus may further include a controller communicatively coupled to the plurality of antennas, and the controller may be configured to obtain measurements of spurious emissions in the radiation. Various other apparatuses, systems, and methods are also disclosed.
A computer implemented method for facilitating system user interface (UI) interactions in an artificial reality (XR) environment is provided. The method includes rendering the system UI in the XR environment as a 3D virtual element. The method further includes tracking a position of a hand of a user and a pre-defined stable point on the user. The method further includes identifying, based on the tracking, that the hand has grasped a portion of the system UI and, in response, rotating the position of the system UI around the grasped portion of the system UI such that a line between the stable point and the surface of the system UI is moved to be perpendicular, or at a predefined angle from perpendicular, to the surface of the system UI as the user moves the system UI via the grasped portion.
Systems and methods for streaming-based object recognition may include a device which transmits, to a server, video data and first coordinates of a first viewport for the device at a first time instance. The video data may include one or more objects in the first viewport. The device may transmit second coordinates of a second viewport for the device at a second time instance. The device may receive, from the server, data corresponding to the one or more objects within the second viewpoint. The data may be received in a sequence according to coordinates of the one or more objects relative to the second viewport. The device may render the data relative to the one or more objects within a third viewport at a third time instance.
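The viewport-relative sequencing of object data can be sketched as a nearest-first ordering; the object identifiers, 2D geometry, and function name are illustrative assumptions:

```python
import math

def order_by_viewport(objects, viewport_center):
    """Order recognized-object data by distance from the viewport centre.

    objects maps an object id to (x, y) coordinates; the server would stream
    data for the nearest objects first.
    """
    cx, cy = viewport_center
    return sorted(objects,
                  key=lambda k: math.hypot(objects[k][0] - cx, objects[k][1] - cy))

# After the viewport moves to the origin, the cup's data is streamed first.
objs = {"sign": (10, 2), "cup": (1, 1), "door": (5, 5)}
print(order_by_viewport(objs, (0, 0)))  # → ['cup', 'door', 'sign']
```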