Augmented reality and virtual reality display systems and devices are configured for efficient use of projected light. In some aspects, a display system includes a light projection system and a head-mounted display configured to project light into an eye of the user to display virtual image content. The head-mounted display includes at least one waveguide comprising a plurality of in-coupling elements each configured to receive, from the light projection system, light corresponding to a portion of the user's field of view and to in-couple the light into the waveguide; and a plurality of out-coupling elements configured to out-couple the light out of the waveguide to display the virtual image content, wherein each of the out-coupling elements is configured to receive light from a different one of the in-coupling elements.
Examples of an imaging system for use with a head mounted display (HMD) are disclosed. The imaging system can include a forward-facing imaging camera and a surface of a display of the HMD can include an off-axis diffractive optical element (DOE) or hot mirror configured to reflect light to the imaging camera. The DOE or hot mirror can be segmented. The imaging system can be used for eye tracking, biometric identification, multiscopic reconstruction of the three-dimensional shape of the eye, etc.
A61B 3/10 - Objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions
A61B 3/113 - Objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions for determining or recording eye movement
A61B 3/12 - Objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
A61B 3/14 - Arrangements specially adapted for eye photography
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
Methods and systems are disclosed for triggering presentation of virtual content based on sensor information. The display system may be an augmented reality display system configured to provide virtual content on a plurality of depth planes using different amounts of wavefront divergence. The system may monitor information detected via the sensors and, based on the monitored information, trigger access to virtual content identified in the sensor information. Virtual content can be obtained and presented as augmented reality content via the display system. The system may monitor information detected via the sensors to identify a QR code or the presence of a wireless beacon. The QR code or wireless beacon can trigger the display system to obtain virtual content for presentation; an illustrative sketch of this trigger logic appears after the classification entries below.
A63F 13/211 - Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
G06T 19/00 - Manipulating 3D models or images for computer graphics
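As referenced above, a minimal Python sketch of sensor-triggered content retrieval. The `Trigger` type, the function names, and the idea of keying a content store by the trigger payload are assumptions made for illustration, not details from the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Trigger:
    kind: str      # "qr" or "beacon"
    payload: str   # decoded QR contents or beacon identifier

def find_trigger(qr_payload: Optional[str], beacon_ids: list) -> Optional[Trigger]:
    """Return the first trigger found in the monitored sensor information."""
    if qr_payload is not None:
        return Trigger("qr", qr_payload)
    if beacon_ids:
        return Trigger("beacon", beacon_ids[0])
    return None

def on_sensor_update(qr_payload, beacon_ids, content_store):
    """Obtain the virtual content identified by a trigger, if any."""
    trigger = find_trigger(qr_payload, beacon_ids)
    if trigger is None:
        return None
    return content_store.get(trigger.payload)  # content keyed by trigger payload

# Example: a QR code encoding a content identifier triggers a lookup.
store = {"content/42": "fountain model"}
print(on_sensor_update("content/42", [], store))  # -> fountain model
```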
4.
METHOD OF WAKING A DEVICE USING SPOKEN VOICE COMMANDS
Disclosed herein are systems and methods for processing speech signals in mixed reality applications. A method may include receiving an audio signal; determining, via first processors, whether the audio signal comprises a voice onset event; in accordance with a determination that the audio signal comprises the voice onset event: waking second processors; determining, via the second processors, that the audio signal comprises a predetermined trigger signal; in accordance with a determination that the audio signal comprises the predetermined trigger signal: waking third processors; performing, via the third processors, automatic speech recognition based on the audio signal; and in accordance with a determination that the audio signal does not comprise the predetermined trigger signal: forgoing waking the third processors; and in accordance with a determination that the audio signal does not comprise the voice onset event: forgoing waking the second processors.
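For illustration, a minimal Python sketch of the staged wake cascade described above, treating the three processor tiers as states. The `Stage` enum and the boolean inputs are assumptions made for the example; the disclosure does not specify this structure.

```python
from enum import Enum, auto

class Stage(Enum):
    IDLE = auto()            # only the first processors are active
    ONSET_DETECTED = auto()  # second processors awake, checking for the trigger
    RECOGNIZING = auto()     # third processors awake, running full ASR

def step(stage: Stage, voice_onset: bool, trigger_present: bool) -> Stage:
    """Advance the wake cascade by one decision."""
    if stage is Stage.IDLE:
        # The first processors screen cheaply for a voice onset event.
        return Stage.ONSET_DETECTED if voice_onset else Stage.IDLE
    if stage is Stage.ONSET_DETECTED:
        # The second processors check for the predetermined trigger signal;
        # without it, the third processors are never woken.
        return Stage.RECOGNIZING if trigger_present else Stage.IDLE
    return Stage.IDLE  # after ASR completes, return to idle

s = step(Stage.IDLE, voice_onset=True, trigger_present=False)
s = step(s, voice_onset=True, trigger_present=True)
print(s)  # Stage.RECOGNIZING
```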
A foveated display for projecting an image to an eye of a viewer is provided. The foveated display includes a first projector and a dynamic eyepiece optically coupled to the first projector. The dynamic eyepiece comprises a waveguide having a variable surface profile. The foveated display also includes a second projector and a fixed depth plane eyepiece optically coupled to the second projector.
G02B 30/52 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels the 3D volume being constructed from a stack or sequence of 2D planes, e.g. depth sampling systems
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00 - G02B 33/00
6.
OPTICAL LAYERS TO IMPROVE PERFORMANCE OF EYEPIECES FOR USE WITH VIRTUAL AND AUGMENTED REALITY DISPLAY SYSTEMS
Improved diffractive optical elements for use in an eyepiece for an extended reality system are disclosed. The diffractive optical elements comprise a diffraction structure having a waveguide substrate, a surface grating positioned on a first side of the waveguide substrate, and one or more optical layer pairs disposed between the waveguide substrate and the surface grating. Each optical layer pair comprises a low index layer and a high index layer disposed directly on an exterior side of the low index layer.
In some embodiments, an augmented reality system includes at least one waveguide that is configured to receive and redirect light toward a user, and is further configured to allow ambient light from an environment of the user to pass therethrough toward the user. The augmented reality system also includes a first adaptive lens assembly positioned between the at least one waveguide and the environment, a second adaptive lens assembly positioned between the at least one waveguide and the user, and at least one processor operatively coupled to the first and second adaptive lens assemblies. Each lens assembly of the augmented reality system is selectively switchable between at least two different states in which the respective lens assembly is configured to impart at least two different optical powers to light passing therethrough, respectively. The at least one processor is configured to cause the first and second adaptive lens assemblies to synchronously switch between different states in a manner such that the first and second adaptive lens assemblies impart a substantially constant net optical power to ambient light from the environment passing therethrough.
G02F 1/01 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour
G02F 1/13 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells
G02F 1/1337 - Surface-induced orientation of the liquid crystal molecules, e.g. by alignment layers
G02F 1/29 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the position or the direction of light beams, i.e. deflection
8.
ANGULARLY SELECTIVE ATTENUATION OF LIGHT TRANSMISSION ARTIFACTS IN WEARABLE DISPLAYS
A wearable display system includes an eyepiece stack having a world side and a user side opposite the world side. During use, a user positioned on the user side views displayed images delivered by the wearable display system via the eyepiece stack which augment the user's field of view of the user's environment. The system also includes an optical attenuator arranged on the world side of the eyepiece stack, the optical attenuator having a layer of a birefringent material having a plurality of domains each having a principal optic axis oriented in a corresponding direction different from the direction of other domains. Each domain of the optical attenuator reduces transmission of visible light incident on the optical attenuator for a corresponding different range of angles of incidence.
G02B 27/28 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00 - G02B 33/00, for polarising
G02F 1/01 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour
G02F 1/1335 - Structural association of cells with optical devices, e.g. polarisers or reflectors
G02F 1/13363 - Birefringent elements, e.g. for optical compensation
G02F 1/1337 - Surface-induced orientation of the liquid crystal molecules, e.g. by alignment layers
G02F 1/139 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering based on orientation effects in which the liquid crystal remains transparent
A head mounted display system configured to project a first image to an eye of a user includes at least one waveguide comprising a first major surface, a second major surface opposite the first major surface, and a first edge and a second edge between the first major surface and the second major surface. The at least one waveguide also includes a first reflector disposed between the first major surface and the second major surface. The head mounted display system also includes at least one light source disposed closer to the first major surface than the second major surface and a spatial light modulator configured to form a second image and disposed closer to the first major surface than the second major surface, wherein the first reflector is configured to reflect light toward the spatial light modulator.
An apparatus including a set of three illumination sources disposed in a first plane. Each of the set of three illumination sources is disposed at a position in the first plane offset from others of the set of three illumination sources by 120 degrees measured in polar coordinates. The apparatus also includes a set of three waveguide layers disposed adjacent the set of three illumination sources. Each of the set of three waveguide layers includes an incoupling diffractive element disposed at a lateral position offset by 180 degrees from a corresponding illumination source of the set of three illumination sources.
A wearable display system includes one or more emissive micro-displays, e.g., micro-LED displays. The micro-displays may be monochrome micro-displays or full-color micro-displays. The micro-displays may include arrays of light emitters. Light collimators may be utilized to narrow the angular emission profile of light emitted by the light emitters. Where a plurality of emissive micro-displays is utilized, the micro-displays may be positioned at different sides of an optical combiner, e.g., an X-cube prism which receives light rays from different micro-displays and outputs the light rays from the same face of the cube. The optical combiner directs the light to projection optics, which outputs the light to an eyepiece that relays the light to a user's eye. The eyepiece may output the light to the user's eye with different amounts of wavefront divergence, to place virtual content on different depth planes.
G02B 27/14 - Beam splitting or combining systems operating by reflection only
G02B 27/18 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00 - G02B 33/00, for optical projection, e.g. combination of mirror and condenser and objective
G02B 27/62 - Optical apparatus specially adapted for adjusting optical elements during the assembly of optical systems
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
G09G 3/32 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix using controlled light sources using electroluminescent panels semiconductive, e.g. using light-emitting diodes [LED]
H02N 2/02 - Electric machines in general using piezoelectric effect, electrostriction or magnetostriction producing linear motion, e.g. actuators; Linear positioners
Apparatus and methods for displaying an image by a rotating structure are provided. The rotating structure can comprise blades of a fan. The fan can be a cooling fan for an electronics device such as an augmented reality display. In some embodiments, the rotating structure comprises light sources that emit light to generate the image. The light sources can comprise light-field emitters. In other embodiments, the rotating structure is illuminated by an external (e.g., non-rotating) light source.
G09G 3/02 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes by tracing or scanning a light beam on a screen
F04D 17/16 - Centrifugal pumps for displacing without appreciable compression
F04D 25/08 - Units comprising pumps and their driving means the working fluid being air, e.g. for ventilation
F04D 29/00 - Details, component parts, or accessories
F04D 29/42 - Casings; Connections for working fluid for radial or helico-centrifugal pumps
G02B 30/56 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels by projecting aerial or floating images
Head-mounted augmented reality (AR) devices can track pose of a wearer's head to provide a three-dimensional virtual representation of objects in the wearer's environment. An electromagnetic (EM) tracking system can track head or body pose. A handheld user input device can include an EM emitter that generates an EM field, and the head-mounted AR device can include an EM sensor that senses the EM field. EM information from the sensor can be analyzed to determine location and/or orientation of the sensor and thereby the wearer's pose. The EM emitter and sensor may utilize time division multiplexing (TDM) or dynamic frequency tuning to operate at multiple frequencies. Voltage gain control may be implemented in the transmitter, rather than the sensor, allowing smaller and lighter weight sensor designs. The EM sensor can implement noise cancellation to reduce the level of EM interference generated by nearby audio speakers.
G01S 1/68 - Marker, boundary, call-sign, or like beacons transmitting signals not carrying directional information
G01S 1/70 - Beacons or beacon systems transmitting signals having a characteristic or characteristics capable of being detected by non-directional receivers and defining directions, positions, or position lines fixed relatively to the beacon transmitters; Receivers co-operating therewith using electromagnetic waves other than radio waves
G01S 5/02 - Position-fixing by co-ordinating two or more direction or position-line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00 - G02B 33/00
G06F 1/16 - Constructional details or arrangements
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A method of presenting a signal to a speech processing engine is disclosed. According to an example of the method, an audio signal is received via a microphone. A portion of the audio signal is identified, and a probability is determined that the portion comprises speech directed by a user of the speech processing engine as input to the speech processing engine. In accordance with a determination that the probability exceeds a threshold, the portion of the audio signal is presented as input to the speech processing engine. In accordance with a determination that the probability does not exceed the threshold, the portion of the audio signal is not presented as input to the speech processing engine.
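For illustration, a minimal Python sketch of the threshold gate described above. The 0.5 threshold is an assumption for the example; the disclosure only requires that some threshold be exceeded.

```python
def gate_for_speech_engine(portion: bytes, probability: float,
                           threshold: float = 0.5):
    """Present the portion to the speech processing engine only when the
    estimated probability of user-directed speech exceeds the threshold."""
    return portion if probability > threshold else None

print(gate_for_speech_engine(b"audio", probability=0.9))  # presented
print(gate_for_speech_engine(b"audio", probability=0.2))  # None (withheld)
```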
In some embodiments, eye tracking is used on an AR or VR display system to determine if a user of the display system is blinking or otherwise cannot see. In response, current drain or power usage of a display associated with the display system may be reduced, for example, by dimming or turning off a light source associated with the display, or by configuring a graphics driver to skip a designated number of frames or reduce a refresh rate for a designated period of time. An illustrative sketch of this power policy appears after the classification entries below.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 1/3231 - Monitoring the presence, absence or movement of users
G06F 1/3234 - Power saving characterised by the action undertaken
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G09G 3/34 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix by control of light from an independent source
G09G 3/36 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix by control of light from an independent source using liquid crystals
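As referenced above, a minimal Python sketch of the blink-driven power policy. The brightness values and frame-skip count are illustrative assumptions.

```python
def display_power_settings(eye_can_see: bool, base_brightness: float = 0.8,
                           skip_frames: int = 3) -> dict:
    """Return reduced-power display settings while the eye cannot see."""
    if eye_can_see:
        return {"brightness": base_brightness, "skip_frames": 0}
    # During a blink: dim or turn off the light source and skip a few frames.
    return {"brightness": 0.0, "skip_frames": skip_frames}

print(display_power_settings(eye_can_see=False))
# {'brightness': 0.0, 'skip_frames': 3}
```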
16.
SYSTEMS AND METHODS FOR CROSS-APPLICATION AUTHORING, TRANSFER, AND EVALUATION OF RIGGING CONTROL SYSTEMS FOR VIRTUAL CHARACTERS
Various examples of cross-application systems and methods for authoring, transferring, and evaluating rigging control systems for virtual characters are disclosed. Embodiments of a method include the steps or processes of creating, in a first application which implements a first rigging control protocol, a rigging control system description; writing the rigging control system description to a data file; and initiating transfer of the data file to a second application. In such embodiments, the second application may implement a second rigging control protocol different from the first. The rigging control system description may specify a rigging control input, such as a lower-order rigging element (e.g., a core skeleton for a virtual character), and at least one rule for operating on the rigging control input to produce a rigging control output, such as a higher-order skeleton or other higher-order rigging element.
Various embodiments of a user-wearable device can comprise a frame configured to mount on a user. The device can include a display attached to the frame and configured to direct virtual images to an eye of the user. The device can also include a light source configured to provide polarized light to the eye of the user such that the polarized light is reflected from the eye of the user. The device can further include a light analyzer configured to determine a polarization angle rotation of the reflected light from the eye of the user such that a glucose level of the user can be determined based at least in part on the polarization angle rotation of the reflected light.
A61B 5/1455 - Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value using optical sensors, e.g. spectral photometrical oximeters
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
A61B 5/0205 - Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
A display system includes a wearable display device for displaying augmented reality content. The display device comprises a display area comprising light redirecting features that are configured to direct light to a user. The display area is at least partially transparent and is configured to provide a view of an ambient environment through the display area. The display device is configured to determine that a reflection of the user is within the user's field of view through the display area. After making this determination, augmented reality content is displayed in the display area with the augmented reality content augmenting the user's view of the reflection. In some embodiments, the augmented reality content may be overlaid on the user's view of the reflection, thereby allowing all or portions of the reflection to appear to be modified to provide a realistic view of the user with various modifications made to their appearance.
An optical device may include a wedge-shaped light turning element having a first surface that is parallel to a horizontal axis and a second surface opposite to the first surface that is inclined with respect to the horizontal axis by a wedge angle, as well as a light module including light emitters. The light module can be configured to combine light emitted by the emitters. The optical device can further include a light input surface that is between the first and the second surfaces and disposed with respect to the light module to receive light emitted from the emitters. The optical device may include an end reflector disposed on a side opposite the light input surface. Light coupled into the light turning element may be reflected by the end reflector and/or reflected from the second surface towards the first surface.
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00 - G02B 33/00
G02B 27/14 - Beam splitting or combining systems operating by reflection only
G02B 30/26 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the autostereoscopic type
G02B 30/52 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels the 3D volume being constructed from a stack or sequence of 2D planes, e.g. depth sampling systems
G02F 1/137 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering
G03B 21/00 - Projectors or projection-type viewers; Accessories therefor
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
G09G 3/02 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes by tracing or scanning a light beam on a screen
G09G 3/24 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix using controlled light sources using incandescent filaments
H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
H04N 13/279 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
20.
SYSTEMS AND METHODS FOR VIRTUAL AND AUGMENTED REALITY
Examples of the disclosure describe systems and methods relating to mobile computing. According to an example method, a first user location of a user of a mobile computing system is determined. A first communication device in proximity to the first user location is identified based on the first user location. A first signal is communicated to the first communication device. A first information payload based on the first user location is received from the first communication device, in response to the first communication device receiving the first signal. Video or audio data based on the first information payload is presented to the user at a first time during which the user is at the first user location.
Head mounted display systems configured to project light to an eye of a user to display augmented reality image content in a vision field of the user are disclosed. In embodiments, the system includes a frame configured to be supported on a head of the user, an image projector configured to project images into the user's eye, a camera coupled to the frame, a waveguide optically coupled to the camera, a coupling optical element, an out-coupling element configured to direct light emitted from the waveguide to the camera, and a first light source configured to direct light to the user's eye through the waveguide. Electronics control the camera to capture images periodically and further control the first light source to pulse in time with the camera such that light emitted by the light source has a reduced intensity when the camera is not capturing images.
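For illustration, a minimal Python sketch of pulsing the light source in time with the camera, as described above. The intensity levels and the four-tick capture cadence are assumptions for the example.

```python
def light_intensity(camera_exposing: bool, capture_level: float = 1.0,
                    idle_level: float = 0.1) -> float:
    """Full intensity only while the camera integrates a frame,
    reduced intensity otherwise."""
    return capture_level if camera_exposing else idle_level

# Simulated loop: the camera captures every fourth tick.
for tick in range(8):
    print(tick, light_intensity(camera_exposing=(tick % 4 == 0)))
```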
An eyepiece waveguide for augmented reality applications includes a substrate and a set of incoupling diffractive optical elements coupled to the substrate. A first subset of the set of incoupling diffractive optical elements is operable to diffract light into the substrate along a first range of propagation angles and a second subset of the set of incoupling diffractive optical elements is operable to diffract light into the substrate along a second range of propagation angles. The eyepiece waveguide also includes a combined pupil expander diffractive optical element coupled to the substrate.
Examples of a light field metrology system for use with a display are disclosed. The light field metrology may capture images of a projected light field, and determine focus depths or lateral focus positions for various regions of the light field using the captured images. The determined focus depths or lateral positions may then be compared with intended focus depths or lateral positions, to quantify the imperfections of the display. Based on the measured imperfections, an appropriate error correction may be performed on the light field to correct for the measured imperfections. The display can be an optical display element in a head mounted display, for example, an optical display element capable of generating multiple depth planes or a light field display.
G01B 11/14 - Measuring arrangements characterised by the use of optical techniques for measuring distance or clearance between spaced objects or spaced apertures
G01B 11/22 - Measuring arrangements characterised by the use of optical techniques for measuring depth
G06T 19/00 - Manipulating 3D models or images for computer graphics
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
G09G 3/34 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix by control of light from an independent source
G09G 5/02 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
H04N 13/144 - Processing image signals for flicker reduction
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/383 - Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
H04N 13/395 - Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume with depth sampling, i.e. the volume being constructed from a stack or sequence of 2D image planes
In some implementations, an optical device includes a one-way mirror formed by a polarization selective mirror and an absorptive polarizer. The absorptive polarizer has a transmission axis aligned with the transmission axis of the polarization selective mirror. The one-way mirror may be provided on the world side of a head-mounted display system. Advantageously, the one-way mirror may reflect light from the world, which provides privacy and may improve the cosmetics of the display. In some implementations, the one-way mirror may include one or more of a depolarizer and a pair of opposing waveplates to improve alignment tolerances and reduce reflections to a viewer. In some implementations, the one-way mirror may form a compact integrated structure with a dimmer for reducing light transmitted to the viewer from the world.
A display system comprises a waveguide having light incoupling or light outcoupling optical elements formed of a metasurface. The metasurface is a multilevel (e.g., bi-level, tri-level, etc.) structure having a first level defined by spaced apart protrusions formed of a first optically transmissive material and a second optically transmissive material between the protrusions. The metasurface can also include a second level formed by the second optically transmissive material. The protrusions on the first level may be patterned by nanoimprinting the first optically transmissive material, and the second optically transmissive material may be deposited over and between the patterned protrusions. The widths of the protrusions and the spacing between the protrusions may be selected to diffract light, and a pitch of the protrusions may be 10-600 nm.
G02B 6/00 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings
G02B 6/122 - Basic optical elements, e.g. light-guiding paths
G02B 6/293 - Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals with wavelength selective means
The disclosure describes an improved drop-on-demand, controlled volume technique for dispensing resist onto a substrate, which is then imprinted to create a patterned optical device suitable for use in optical applications such as augmented reality and/or mixed reality systems. The technique enables the dispensation of drops of resist at precise locations on the substrate, with precisely controlled drop volume corresponding to an imprint template having different zones associated with different total resist volumes. Controlled drop size and placement also provides for substantially less variation in residual layer thickness across the surface of the substrate after imprinting, compared to previously available techniques. The technique employs resist having a refractive index closer to that of the substrate, reducing optical artifacts in the device. To ensure reliable dispensing of the higher index and higher viscosity resist in smaller drop sizes, the dispensing system can continuously circulate the resist.
F21V 8/00 - Use of light guides, e.g. fibre optic devices, in lighting devices or systems
G03F 7/00 - Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printed surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
27.
DISPLAY SYSTEM WITH SPATIAL LIGHT MODULATOR ILLUMINATION FOR DIVIDED PUPILS
Illumination systems that separate different colors into laterally displaced beams may be used to direct different color image content into an eyepiece for displaying images in the eye. Such an eyepiece may be used, for example, for an augmented reality head mounted display. Illumination systems may be provided that utilize one or more waveguides to direct light from a light source towards a spatial light modulator. Light from the spatial light modulator may be directed towards an eyepiece. Some aspects of the invention provide for light of different colors to be outcoupled at different angles from the one or more waveguides and directed along different beam paths.
In an example method for forming a variable optical viewing optics assembly (VOA) for a head mounted display, a prepolymer is deposited onto a substrate having a first optical element for the VOA. Further, a mold is applied to the prepolymer to conform the prepolymer to a curved surface of the mold on a first side of the prepolymer and to conform the prepolymer to a surface of the substrate on a second side of the prepolymer opposite the first side. Further, the prepolymer is exposed to actinic radiation sufficient to form a solid polymer from the prepolymer, such that the solid polymer forms an ophthalmic lens having a curved surface corresponding to the curved surface of the mold, and the substrate and the ophthalmic lens form an integrated optical component. The mold is released from the solid polymer, and the VOA is assembled using the integrated optical component.
In one aspect, an optical device comprises a plurality of waveguides formed over one another and having formed thereon respective diffraction gratings, wherein the respective diffraction gratings are configured to diffract visible light incident thereon into respective waveguides, such that visible light diffracted into the respective waveguides propagates therewithin. The respective diffraction gratings are configured to diffract the visible light into the respective waveguides within respective fields of view (FOVs) with respect to layer normal directions of the respective waveguides. The respective FOVs are such that the plurality of waveguides are configured to diffract the visible light within a combined FOV that is continuous and greater than each of the respective FOVs.
G02B 6/10 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings, of the optical waveguide type
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00 - G02B 33/00
G02F 1/29 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the position or the direction of light beams, i.e. deflection
H04N 9/31 - Projection devices for colour picture display
30.
METHODS AND APPARATUSES FOR PROVIDING INPUT FOR HEAD-WORN IMAGE DISPLAY DEVICES
An apparatus for use with an image display device configured to be worn on a head of a user includes: a screen; and a processing unit configured to assign a first area of the screen to sense finger-action of the user; wherein the processing unit is configured to generate an electronic signal to cause a change in a content displayed by the display device based on the finger-action of the user sensed by the assigned first area of the screen of the apparatus.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04812 - Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
G06F 3/04886 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
G06T 19/00 - Manipulating 3D models or images for computer graphics
31.
IMAGING MODIFICATION, DISPLAY AND VISUALIZATION USING AUGMENTED AND VIRTUAL REALITY EYEWEAR
A display system can include a head-mounted display configured to project light to an eye of a user to display augmented reality image content to the user. The display system can include one or more user sensors configured to sense the user and can include one or more environmental sensors configured to sense surroundings of the user. The display system can also include processing electronics in communication with the display, the one or more user sensors, and the one or more environmental sensors. The processing electronics can be configured to sense a situation involving user focus, determine user intent for the situation, and alter user perception of a real or virtual object within the vision field of the user based at least in part on the user intent and/or sensed situation involving user focus. The processing electronics can be configured to at least one of enhance or de-emphasize the user perception of the real or virtual object within the vision field of the user.
A61B 17/00 - Surgical instruments, devices or methods
A61B 34/00 - Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
A61B 34/20 - Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
A61B 90/00 - Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B 1/00 - A61B 50/00, e.g. for luxation treatment or for protecting wound edges
A61B 90/50 - Supports for surgical instruments, e.g. articulated arms
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06T 19/00 - Manipulating 3D models or images for computer graphics
32.
HYBRID POLYMER WAVEGUIDE AND METHODS FOR MAKING THE SAME
In some embodiments, a head-mounted augmented reality display system comprises one or more hybrid waveguides configured to display images by directing modulated light containing image information into the eyes of a viewer. Each hybrid waveguide is formed of two or more layers of different materials. A first (e.g., thicker) layer is a highly optically transparent core layer, and a second (e.g., thinner) auxiliary layer includes a pattern of protrusions and indentations, e.g., to form a diffractive optical element. The pattern may be formed by imprinting. The hybrid waveguide may include additional layers, e.g., forming a plurality of alternating core layers and thinner patterned layers. Multiple waveguides may be stacked to form an integrated eyepiece, with each waveguide configured to receive and output light of a different component color.
G02B 1/04 - Optical elements characterised by the material of which they are made; Optical coatings for optical elements made of organic materials, e.g. plastics
A viewing optics assembly for augmented reality includes a projector configured to generate image light and an eyepiece optically coupled to the projector. The eyepiece includes at least one eyepiece layer comprising a waveguide having a surface, an incoupling grating coupled to the waveguide, and an outcoupling grating coupled to the waveguide. The outcoupling grating comprises a first array of first ridges protruding from the surface of the waveguide, each of the first ridges having a first height in a direction perpendicular to the surface and a first width in a direction parallel to the surface and a plurality of second ridges, each of the plurality of second ridges protruding from a respective first ridge of the first ridges and having a second height and a second width. At least one of the first width or the second width varies as a function of position across the surface.
Disclosed are methods, systems, and articles of manufacture for managing and displaying web pages and web resources in a virtual three-dimensional (3D) space with an extended reality system. These techniques receive an input for a 3D transform of a web page or a web page panel therefor. In response to the input, a browser engine coupled to a processor of an extended reality system determines 3D transform data for the web page or the web page panel based at least in part upon the 3D transform of the web page or the web page panel, wherein the 3D transform comprises a change in 3D position, rotation, or scale of the web page or the web page panel in a virtual 3D space. A universe browser engine may present contents of the web page in a virtual 3D space based at least in part upon the 3D transform data.
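For illustration, a minimal Python sketch of the 3D-transform data a browser engine might determine for a web page panel. The field layout (position, quaternion rotation, scale) and the class names are assumptions; the disclosure does not fix a representation.

```python
from dataclasses import dataclass, field

@dataclass
class Transform3D:
    """Pose data a browser engine could hand to a universe browser engine."""
    position: tuple = (0.0, 0.0, 0.0)       # location in the virtual 3D space
    rotation: tuple = (0.0, 0.0, 0.0, 1.0)  # quaternion (x, y, z, w)
    scale: tuple = (1.0, 1.0, 1.0)

@dataclass
class WebPagePanel:
    url: str
    transform: Transform3D = field(default_factory=Transform3D)

    def apply_input(self, new_transform: Transform3D) -> None:
        """Update the panel's pose in response to a 3D-transform input."""
        self.transform = new_transform

panel = WebPagePanel("https://example.com")
panel.apply_input(Transform3D(position=(0.0, 1.5, -2.0)))
print(panel.transform.position)  # (0.0, 1.5, -2.0)
```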
Systems and methods for rendering audio signals are disclosed. In some embodiments, a method may receive an input signal including a first portion and a second portion. A first processing stage comprising a first filter is applied to the first portion to generate a first filtered signal. A second processing stage comprising a second filter is applied to the first portion to generate a second filtered signal. A third processing stage comprising a third filter is applied to the second portion to generate a third filtered signal. A fourth processing stage comprising a fourth filter is applied to the second portion to generate a fourth filtered signal. A first output signal is determined based on a sum of the first filtered signal and the third filtered signal. A second output signal is determined based on a sum of the second filtered signal and the fourth filtered signal. The first output signal is presented to a first ear of a user of a virtual environment, and the second output signal is presented to a second ear of the user. The first portion of the input signal corresponds to a first location in the virtual environment, and the second portion of the input signal corresponds to a second location in the virtual environment.
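For illustration, a minimal sketch of the four-filter, two-ear summing structure described above, using NumPy convolution as a stand-in for the filter stages. The one-tap kernels are placeholders; a real renderer would use filters derived from the virtual locations (e.g., HRTFs).

```python
import numpy as np

def render_binaural(portion1, portion2, f1, f2, f3, f4):
    """Apply the four filter stages and sum per ear: f1/f2 filter the first
    portion, f3/f4 the second; left sums f1+f3, right sums f2+f4."""
    left = np.convolve(portion1, f1) + np.convolve(portion2, f3)
    right = np.convolve(portion1, f2) + np.convolve(portion2, f4)
    return left, right

p1 = np.array([1.0, 0.0, 0.0])  # source at a first virtual location
p2 = np.array([0.0, 1.0, 0.0])  # source at a second virtual location
left, right = render_binaural(p1, p2,
                              f1=np.array([0.9]), f2=np.array([0.4]),
                              f3=np.array([0.3]), f4=np.array([0.8]))
print(left, right)
```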
The present disclosure relates to display systems and, more particularly, to augmented reality display systems including diffraction grating(s), and methods of fabricating same. A diffraction grating includes a plurality of different diffracting zones having a periodically repeating lateral dimension corresponding to a grating period adapted for light diffraction. The diffraction grating additionally includes a plurality of different liquid crystal layers corresponding to the different diffracting zones. The different liquid crystal layers include liquid crystal molecules that are aligned differently, such that the different diffracting zones have different optical properties associated with light diffraction.
Disclosed are dimming assemblies and display systems for reducing artifacts produced by optically-transmissive displays. A system may include a substrate upon which a plurality of electronic components are disposed. The electronic components may include a plurality of pixels, a plurality of conductors, and a plurality of circuit modules. The plurality of pixels may be arranged in a two-dimensional array, with each pixel having a two-dimensional geometry corresponding to a shape with at least one curved side. The plurality of conductors may be arranged adjacent to the plurality of pixels. The system may also include control circuitry electrically coupled to the plurality of conductors. The control circuitry may be configured to apply electrical signals to the plurality of circuit modules by way of the plurality of conductors.
G02F 1/139 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering based on orientation effects in which the liquid crystal remains transparent
38.
METHOD AND SYSTEM FOR PERFORMING DYNAMIC FOVEATION BASED ON EYE GAZE
A method of forming a foveated image includes (a) setting dimensions of a first region, (b) receiving an image having a first resolution, and (c) forming the foveated image including a primary quality region having the dimensions of the first region and the first resolution and a secondary quality region having a second resolution less than the first resolution. The method also includes (d) outputting the foveated image, (e) determining an eye gaze location, and (f) determining an eye gaze velocity. If the eye gaze velocity is less than a threshold velocity, the method includes decreasing the dimensions of the primary quality region and repeating (b) - (f). If the eye gaze velocity is greater than or equal to the threshold velocity, the method includes repeating (a) - (f).
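For illustration, a minimal Python sketch of the gaze-velocity branch in steps (e)-(f) above. The threshold, shrink factor, and minimum size are assumptions for the example.

```python
INITIAL_DIMS = 1.0  # normalized size of the primary quality region

def update_primary_region(dims: float, gaze_velocity: float,
                          threshold: float = 1.0, min_dims: float = 0.2,
                          shrink: float = 0.9) -> float:
    """Shrink the primary quality region while the gaze is nearly still
    (repeat from (b)); reset it when the gaze moves fast (repeat from (a))."""
    if gaze_velocity < threshold:
        return max(min_dims, dims * shrink)
    return INITIAL_DIMS

dims = INITIAL_DIMS
for v in [0.1, 0.1, 5.0, 0.1]:  # illustrative gaze velocities
    dims = update_primary_region(dims, v)
    print(round(dims, 3))        # 0.9, 0.81, 1.0, 0.9
```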
A method includes rendering an original image at a first processor, encoding the original image to provide an encoded image, and transmitting the encoded image to a second processor. The method also includes decoding the encoded image to provide a decoded image, determining an eye gaze location, splitting the decoded image into N sections based on the eye gaze location, and processing N-1 sections of the N sections to produce N-1 secondary quality sections. The method further includes processing one section of the N sections to provide one primary quality section, combining the one primary quality section and the N-1 secondary quality sections to form a foveated image, and transmitting the foveated image to a display.
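For illustration, a minimal NumPy sketch of splitting a decoded image into N sections around the gaze location and degrading the other N-1 sections. The horizontal split and the quantization stand-in for "secondary quality" processing are assumptions for the example.

```python
import numpy as np

def split_by_gaze(image: np.ndarray, gaze_row: int, n: int = 3):
    """Split a decoded image into n horizontal sections and find the one
    containing the eye gaze location."""
    sections = np.array_split(image, n, axis=0)
    bounds = np.cumsum([s.shape[0] for s in sections])
    primary = int(np.searchsorted(bounds, gaze_row, side="right"))
    return sections, primary

def foveate(image: np.ndarray, gaze_row: int, n: int = 3) -> np.ndarray:
    sections, primary = split_by_gaze(image, gaze_row, n)
    out = [s if i == primary else (s // 2) * 2  # stand-in for cheaper processing
           for i, s in enumerate(sections)]
    return np.vstack(out)

img = np.arange(36, dtype=np.uint8).reshape(6, 6)
print(foveate(img, gaze_row=1).shape)  # (6, 6)
```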
A mixed reality virtual environment is sharable among multiple users through the use of multiple view modes that are selectable by a presenter. Multiple users with wearable display systems may wish to view a common virtual object, which may be presented in a virtual room to any suitable number of users. A presentation may be controlled by a presenter using a presenter wearable system that leads multiple participants through information associated with the virtual object. Use of different viewing modes allows individual users to see different virtual content through their wearable display systems despite being in a shared viewing space, or alternatively to see the same virtual content in different locations within a shared space.
G09B 5/12 - Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations different stations being capable of presenting different information simultaneously
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04L 12/18 - Arrangements for providing special services to substations for broadcast or conference
Systems and methods for generating a face model for a user of a head-mounted device are disclosed. The head-mounted device can include one or more eye cameras configured to image the face of the user while the user is putting the device on or taking the device off. The images obtained by the eye cameras may be analyzed using a stereoscopic vision technique, a monocular vision technique, or a combination, to generate a face model for the user. The face model can be used to generate a virtual image of at least a portion of the user's face, for example to be presented as an avatar.
A mixed reality (MR) device can allow a user to switch between input modes to allow interactions with a virtual environment via devices such as a six degrees of freedom (6DoF) handheld controller and a touchpad input device. A default input mode for interacting with virtual content may rely on the user's head pose, which may be difficult to use in selecting virtual objects that are far away in the virtual environment. Thus, the system may be configured to allow the user to use a 6DoF cursor, and a visual ray that extends from the handheld controller to the cursor, to enable precise targeting. Input via a touchpad input device (e.g., that allows three degrees of freedom movements) may also be used in conjunction with the 6DoF cursor.
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
H04N 13/361 - Reproducing mixed stereoscopic images; Reproducing mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
H04N 13/383 - Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
43.
TECHNIQUES FOR DETERMINING SETTINGS FOR A CONTENT CAPTURE DEVICE
A method includes receiving a first image captured by a content capture device, identifying a first object in the first image, and determining a first update to a first setting of the content capture device. The method further includes receiving a second image captured by the content capture device, identifying a second object in the second image, and determining a second update to a second setting of the content capture device. The method further includes updating the first setting of the content capture device using the first update, receiving a third image using the updated first setting of the content capture device, updating the second setting of the content capture device using the second update, receiving a fourth image using the updated second setting of the content capture device, and stitching the third image and the fourth image together to form a composite image. An illustrative sketch of this alternating update-and-stitch flow appears after the classification entries below.
H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders
H04N 23/71 - Circuitry for evaluating the brightness variation
H04N 23/73 - Circuitry for compensating brightness variation in the scene by influencing the exposure time
H04N 23/741 - Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
H04N 23/743 - Bracketing, i.e. taking a series of images with varying exposure conditions
H04N 23/76 - Circuitry for compensating brightness variation in the scene by influencing the image signals
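As referenced above, a minimal Python sketch of alternating setting updates across captures and stitching the results. `FakeCamera` and the concatenating `stitch` are stand-ins invented for the example; real systems would drive hardware and perform image stitching.

```python
class FakeCamera:
    """Stand-in for the content capture device."""
    def __init__(self):
        self.settings = {"exposure": 1.0, "gain": 1.0}
    def set(self, name: str, value: float) -> None:
        self.settings[name] = value
    def capture(self) -> dict:
        return dict(self.settings)  # a 'frame' tagged with its settings

def stitch(frame_a: dict, frame_b: dict) -> list:
    return [frame_a, frame_b]  # placeholder for real image stitching

cam = FakeCamera()
cam.set("exposure", 2.0)   # first update, derived from the first image
third = cam.capture()      # third image, taken with the updated exposure
cam.set("gain", 0.5)       # second update, derived from the second image
fourth = cam.capture()     # fourth image, taken with the updated gain
print(stitch(third, fourth))
```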
44.
METHOD AND SYSTEM FOR PERFORMING EYE TRACKING IN AUGMENTED REALITY DEVICES
A wearable device for projecting image light to an eye of a viewer and forming an image of virtual content in an augmented reality display is provided. The wearable device includes a projector and a stack of waveguides optically connected to the projector. The wearable device also includes an eye tracking system comprising a plurality of illumination sources, an optical element having optical power, and a set of cameras. The optical element is disposed between the plurality of illumination sources and the set of cameras. In some embodiments, the augmented reality display includes an eyepiece operable to output virtual content from an output region and a plurality of illumination sources. At least some of the plurality of illumination sources overlap with the output region.
A high-resolution image sensor suitable for use in an augmented reality (AR) system to provide low latency image analysis with low power consumption. The AR system can be compact, and may be small enough to be packaged within a wearable device such as a set of goggles or mounted on a frame resembling ordinary eyeglasses. The image sensor may receive information about a region of an imaging array associated with a movable object, selectively output imaging information for that region, and synchronously output high-resolution image frames. The region may be updated dynamically as the image sensor and/or the object moves. The image sensor may output the high-resolution image frames less frequently than the region being updated when the image sensor and/or the object moves. Such an image sensor provides a small amount of data from which object information used in rendering an AR scene can be developed.
A head-mounted display system configured to be worn over eyes of a user includes a frame configured to be worn on a head of the user. The system also includes a display disposed on the frame over the eyes of the user. The system further includes an inwardly-facing light source disposed on the frame and configured to emit light toward the eyes of the user to improve visibility of respective portions of a face and the eyes of the user through the display. Moreover, the system includes a processor configured to control a brightness of the display, an opacity of the display, and an intensity of the light emitted by the inwardly-facing light source.
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
G09G 5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory
A method is disclosed, the method comprising the steps of receiving, from a first client application, first graphical data comprising a first node; receiving, from a second client application independent of the first client application, second graphical data comprising a second node; and generating a scenegraph, wherein the scenegraph describes a hierarchical relationship between the first node and the second node according to visual occlusion relative to a perspective from a display.
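For illustration, a minimal Python sketch of merging nodes from two independent client applications into one scenegraph, encoding occlusion as back-to-front child order. The `Node` class and the ordering convention are assumptions for the example.

```python
class Node:
    def __init__(self, name: str):
        self.name = name
        self.children = []

def build_scenegraph(first: Node, second: Node, first_in_front: bool) -> Node:
    """Merge nodes from two independent client applications into a single
    scenegraph whose child order encodes back-to-front visual occlusion
    relative to the display's perspective."""
    root = Node("root")
    root.children = [second, first] if first_in_front else [first, second]
    return root

sg = build_scenegraph(Node("appA/quad"), Node("appB/quad"), first_in_front=True)
print([n.name for n in sg.children])  # ['appB/quad', 'appA/quad'] (back to front)
```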
A display system, such as a virtual reality or augmented reality display system, can control a display to present image data including a plurality of color components, on a plurality of depth planes supported by the display. The presentation of the image data through the display can be controlled based on control information that is embedded in the image data, for example to activate or deactivate a color component and/or a depth plane. In some examples, light sources and/or spatial light modulators that relay illumination from the light sources may receive signals from a display controller to adjust a power setting to the light source or spatial light modulator based on control information embedded in an image data frame.
Systems and methods for managing multi-objective alignments in imprinting (e.g., single-sided or double-sided) are provided. An example system includes rollers for moving a template roll, a stage for holding a substrate, a dispenser for dispensing resist on the substrate, a light source for curing the resist to form an imprint on the substrate when a template of the template roll is pressed into the resist on the substrate, a first inspection system for registering a fiducial mark of the template to determine a template offset, a second inspection system for registering the imprint on the substrate to determine a wafer registration offset between a target location and an actual location of the imprint, and a controller configured to move the substrate with the resist below the template based on the template offset and to determine an overlay bias of the imprint on the substrate based on the wafer registration offset.
G03F 9/00 - Registration or positioning of originals, masks, frames, photographic sheets or textured or patterned surfaces, e.g. automatically
G03F 7/00 - Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printed surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
50.
DISPLAY SYSTEM HAVING A PLURALITY OF LIGHT PIPES FOR A PLURALITY OF LIGHT EMITTERS
A display system includes a plurality of light pipes and a plurality of light sources configured to emit light into the light pipes. The display system also comprises a spatial light modulator configured to modulate light received from the light pipes to form images. The display system may also comprise one or more waveguides configured to receive modulated light from the spatial light modulator and to relay that light to a viewer.
AR/VR display systems limit the display of content that exceeds an accommodation-vergence mismatch (AVM) threshold, which may define a volume around the viewer. The volume may be subdivided into two or more zones, including an innermost loss-of-fusion (LoF) zone in which no content is displayed, and one or more outer AVM zones in which the displaying of content may be stopped, or clipped, under certain conditions. For example, content may be clipped if the viewer is verging within an AVM zone and if the content is displayed within the AVM zone for more than a threshold duration. A further possible condition for clipping content is that the user is verging on that content. In addition, the boundaries of the AVM zone and/or the acceptable amount of time that the content is displayed may vary depending upon the type of content being displayed, e.g., whether the content is user-locked content or in-world content.
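One way the clipping conditions above could be expressed, with purely illustrative zone radii and dwell thresholds (the specific numbers and function names are assumptions, not values from the entry):

    def should_display(content_depth_m, verging_in_zone, dwell_s, content_type="in-world",
                       lof_radius_m=0.1, avm_radius_m=0.37):
        thresholds = {"user-locked": 3.0, "in-world": 1.0}  # illustrative seconds
        if content_depth_m < lof_radius_m:
            return False                      # inside loss-of-fusion zone: never display
        if content_depth_m < avm_radius_m:    # inside an AVM zone
            if verging_in_zone and dwell_s > thresholds[content_type]:
                return False                  # clip after the allowed dwell time
        return True

    print(should_display(0.05, verging_in_zone=False, dwell_s=0.0))  # False (LoF zone)
    print(should_display(0.2, verging_in_zone=True, dwell_s=2.0))    # False (clipped)
    print(should_display(0.2, verging_in_zone=True, dwell_s=2.0,
                         content_type="user-locked"))                # True (longer dwell allowed)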
Methods and systems for depth-based foveated rendering in a display system are disclosed. The display system may be an augmented reality display system configured to provide virtual content on a plurality of depth planes using different wavefront divergences. Some embodiments include monitoring eye orientations of a user of the display system. A fixation point can be determined based on the eye orientations, the fixation point representing a three-dimensional location with respect to a field of view. Location information of virtual object(s) to present is obtained, with the location information including three-dimensional position(s) of the virtual object(s). A resolution of the virtual object(s) can be adjusted based on the proximity of the location(s) of the virtual object(s) to the fixation point. The virtual object(s) are presented by the display system according to the adjusted resolution(s).
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
G06T 15/00 - 3D [Three Dimensional] image rendering
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04N 13/279 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
H04N 13/341 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using temporal multiplexing
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/383 - Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
H04N 13/395 - Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume with depth sampling, i.e. the volume being constructed from a stack or sequence of 2D image planes
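As a sketch of the resolution adjustment described in the foveated-rendering entry above, resolution can be made to fall off with three-dimensional distance from the fixation point; the Gaussian falloff and its parameters below are assumptions, not the claimed distribution.

    import math

    def render_resolution(object_pos, fixation_point, full_res=1.0, floor=0.25, sigma=0.5):
        # 3D distance between the virtual object and the fixation point (meters).
        d = math.dist(object_pos, fixation_point)
        # Gaussian falloff toward a minimum resolution fraction.
        return floor + (full_res - floor) * math.exp(-(d * d) / (2 * sigma * sigma))

    fixation = (0.0, 0.0, 1.5)                           # from monitored eye orientations
    print(render_resolution((0.0, 0.0, 1.5), fixation))  # ~1.0: full resolution at fixation
    print(render_resolution((1.0, 0.5, 3.0), fixation))  # reduced resolution far from fixation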
A virtual, augmented, or mixed reality display system includes a display configured to display virtual, augmented, or mixed reality image data, the display including one or more optical components which introduce optical distortions or aberrations to the image data. The system also includes a display controller configured to provide the image data to the display. The display controller includes memory for storing optical distortion correction information, and one or more processing elements to at least partially correct the image data for the optical distortions or aberrations using the optical distortion correction information.
Methods and systems for depth-based foveated rendering in a display system are disclosed. The display system may be an augmented reality display system configured to provide virtual content on a plurality of depth planes using different wavefront divergences. Some embodiments include determining a fixation point of a user's eyes. Location information associated with a first virtual object to be presented to the user via a display device is obtained. A resolution-modifying parameter of the first virtual object is obtained. A particular resolution at which to render the first virtual object is identified based on the location information and the resolution-modifying parameter of the first virtual object. The particular resolution is based on a resolution distribution specifying resolutions for corresponding distances from the fixation point. The first virtual object rendered at the identified resolution is presented to the user via the display system.
An eyepiece for an augmented reality display system. The eyepiece can include a waveguide substrate. The waveguide substrate can include an input coupler grating (ICG), an orthogonal pupil expander (OPE) grating, a spreader grating, and an exit pupil expander (EPE) grating. The ICG can couple at least one input light beam into at least a first guided light beam that propagates inside the waveguide substrate. The OPE grating can divide the first guided light beam into a plurality of parallel, spaced-apart light beams. The spreader grating can receive the light beams from the OPE grating and spread their distribution. The spreader grating can include diffractive features oriented at approximately 90° to diffractive features of the OPE grating. The EPE grating can re-direct the light beams from the OPE grating and the spreader grating such that they exit the waveguide substrate.
A wearable display device, such as an augmented reality display device, can present virtual content to the wearer for applications in a healthcare setting. The wearer may be a patient or a healthcare provider (HCP). Applications can include, but are not limited to, access, display, and modification of patient medical records and sharing patient medical records among authorized HCPs, detecting one or more anomalies in a medical environment and presenting virtual content (e.g., alerts) indicating the one or more anomalies, detecting the presence of physical objects (e.g., medical instruments or devices) in the medical environment, enabling communication with and/or remote control of a medical device in the environment, and so forth.
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
A61B 3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
A61B 3/10 - Objective types, i.e. instruments for examining the eyes independent of the patients perceptions or reactions
A61B 3/113 - Objective types, i.e. instruments for examining the eyes independent of the patients perceptions or reactions for determining or recording eye movement
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
A61B 5/06 - Devices, other than using radiation, for detecting or locating foreign bodies
A61B 5/1171 - Identification of persons based on the shapes or appearances of their bodies or parts thereof
A61B 17/00 - Surgical instruments, devices or methods
A61B 34/20 - Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
A61B 90/00 - Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups, e.g. for luxation treatment or for protecting wound edges
A61B 90/50 - Supports for surgical instruments, e.g. articulated arms
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
G16H 40/67 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
A method of operating an optical system includes identifying a set of angle-dependent transmittance levels for light passing through pixels of a segmented dimmer that exhibits viewing-angle transmittance variations when a same voltage is applied to all pixels of the segmented dimmer. The method also includes determining a set of voltages to apply to pixels of the segmented dimmer. Determining the set of voltages includes using the set of angle-dependent transmittance levels. The method includes applying the set of voltages to the pixels of the segmented dimmer to achieve light transmittance through the segmented dimmer corresponding to the set of angle-dependent transmittance levels.
G09G 3/36 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix by control of light from an independent source using liquid crystals
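A sketch of the voltage determination described in the entry above, assuming a measured voltage-to-transmittance curve per viewing-angle bin; the curves here are fabricated placeholders, not device data.

    import numpy as np

    voltages = np.linspace(0.0, 5.0, 51)
    # One transmittance curve per angle bin (0, 20, 40 degrees): higher viewing
    # angle is assumed to transmit more at the same voltage.
    curves = {
        0: 1.0 / (1.0 + voltages),
        20: 1.1 / (1.0 + voltages),
        40: 1.2 / (1.0 + voltages),
    }
    curves = {a: np.clip(c, 0.0, 1.0) for a, c in curves.items()}

    def voltage_for(angle_bin, target_transmittance):
        # Each curve is monotonically decreasing, so invert it by picking the
        # voltage whose transmittance is closest to the target.
        curve = curves[angle_bin]
        idx = int(np.argmin(np.abs(curve - target_transmittance)))
        return float(voltages[idx])

    # Same desired dimming everywhere, but a different voltage per angle bin,
    # compensating the viewing-angle transmittance variation.
    for angle in (0, 20, 40):
        print(angle, voltage_for(angle, target_transmittance=0.5))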
Apparatuses and methods for displaying a 3-D representation of an object are described. Apparatuses can include a rotatable structure, a motor, and multiple light field sub-displays disposed on the rotatable structure. The apparatuses can store a light field image to be displayed, the light field image providing multiple different views of the object at different viewing directions. A processor can drive the motor to rotate the rotatable structure and map the light field image to each of the light field sub-displays based in part on the rotation angle, and illuminate the light field sub-displays based in part on the mapped light field image. The apparatuses can include a display panel configured to be viewed from a fiducial viewing direction, where the display panel is curved out of a plane that is perpendicular to the fiducial viewing direction, and a plurality of light field sub-displays disposed on the display panel.
H04N 13/393 - Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume the volume being generated by a moving, e.g. vibrating or rotating, surface
G02B 30/27 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the autostereoscopic type involving lenticular arrays
G02B 30/54 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels the 3D volume being generated by moving a 2D surface, e.g. by vibrating or rotating the 2D surface
G02B 30/56 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels by projecting aerial or floating images
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
H04N 13/307 - Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using fly-eye lenses, e.g. arrangements of circular lenses
H04N 13/32 - Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using arrays of controllable light sources; Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using moving apertures or moving light sources
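A minimal sketch of the mapping step in the rotating light field display entry above, assuming the light field image is stored as one view per azimuthal direction: as the structure rotates, each sub-display is fed the view for the direction it currently faces. The storage layout and names are assumptions.

    import numpy as np

    n_views, h, w = 36, 8, 8                     # 36 views, 10 degrees apart
    light_field = np.random.rand(n_views, h, w)  # placeholder view images

    def view_for_subdisplay(subdisplay_angle_deg, rotation_angle_deg):
        # The direction a sub-display faces is its mounting angle plus the
        # current rotation angle of the structure.
        facing = (subdisplay_angle_deg + rotation_angle_deg) % 360.0
        return light_field[int(round(facing / 360.0 * n_views)) % n_views]

    # Four sub-displays mounted 90 degrees apart, structure rotated 37 degrees:
    frames = [view_for_subdisplay(mount, 37.0) for mount in (0, 90, 180, 270)]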
An augmented reality system includes a projector assembly and a set of imaging optics optically coupled to the projector assembly. The augmented reality system also includes an eyepiece optically coupled to the set of imaging optics. The eyepiece has a world side and a user side opposite the world side and includes one or more eyepiece waveguides. Each of the one or more eyepiece waveguides includes an incoupling interface and an outcoupling interface operable to output virtual content toward the user side. The augmented reality system further includes an optical notch filter disposed on the world side of the eyepiece.
A method of operating an eyepiece waveguide of an augmented reality system includes projecting virtual content using a projector assembly and diffracting the virtual content into the eyepiece waveguide via a first order diffraction. A first portion of the virtual content is clipped to produce a remaining portion of the virtual content. The method also includes propagating the remaining portion of the virtual content in the eyepiece waveguide, outcoupling the remaining portion of the virtual content out of the eyepiece waveguide, and diffracting the virtual content into the eyepiece waveguide via a second order diffraction. A second portion of the virtual content is clipped to produce a complementary portion. The method further includes propagating the complementary portion of the virtual content in the eyepiece waveguide and outcoupling the complementary portion of the virtual content out of the eyepiece waveguide.
A wearable display device includes waveguide(s) that present virtual image elements as an augmentation to the real-world environment. The display device includes a first extended depth of field (EDOF) refractive lens arranged between the waveguide(s) and the user's eye(s), and a second EDOF refractive lens located outward from the waveguide(s). The first EDOF lens has a (e.g., negative) optical power to alter the depth of the virtual image elements. The second EDOF lens has a substantially equal and opposite (e.g., positive) optical power to that of the first EDOF lens, such that the depth of real-world objects is not altered along with the depth of the virtual image elements. To reduce the weight and/or size of the device, one or both EDOF lenses is a compact lens, e.g., Fresnel lens or flattened periphery lens. The compact lens may be coated and/or embedded in another material to enhance its performance.
G02B 6/10 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings of the optical waveguide type
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
62.
CUSTOMIZED POLYMER/GLASS DIFFRACTIVE WAVEGUIDE STACKS FOR AUGMENTED REALITY/MIXED REALITY APPLICATIONS
A diffractive waveguide stack includes first, second, and third diffractive waveguides for guiding light in first, second, and third visible wavelength ranges, respectively. The first diffractive waveguide includes a first material having a first refractive index at a selected wavelength and a first target refractive index at a midpoint of the first visible wavelength range. The second diffractive waveguide includes a second material having a second refractive index at the selected wavelength and a second target refractive index at a midpoint of the second visible wavelength range. The third diffractive waveguide includes a third material having a third refractive index at the selected wavelength and a third target refractive index at a midpoint of the third visible wavelength range. A difference between any two of the first target refractive index, the second target refractive index, and the third target refractive index is less than 0.005 at the selected wavelength.
Methods and systems for reducing switching between depth planes of a multi-depth-plane display system are disclosed. The display system may be an AR display system configured to provide virtual content on a plurality of depth planes using different wavefront divergences. The system may monitor fixation points based upon the gaze of each of the user's eyes, with each fixation point being a three-dimensional location in the user's field of view. Location information of virtual objects to be presented to the user is obtained, with each virtual object being associated with a depth plane. The depth plane on which a virtual object is to be presented may be modified based upon the fixation point of the user's eyes.
H04N 13/332 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
H04N 13/383 - Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
H04N 13/395 - Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume with depth sampling, i.e. the volume being constructed from a stack or sequence of 2D image planes
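One plausible way to reduce switching, sketched below with assumed plane depths and a hysteresis margin; this is not necessarily the claimed modification rule.

    class DepthPlaneSelector:
        def __init__(self, plane_depths_m=(0.5, 1.0, 2.0, 4.0), margin_m=0.15):
            self.planes = plane_depths_m
            self.margin = margin_m
            self.current = 0

        def select(self, fixation_depth_m):
            dists = [abs(p - fixation_depth_m) for p in self.planes]
            best = min(range(len(self.planes)), key=lambda i: dists[i])
            # Switch only when the nearest plane beats the current plane by
            # more than the hysteresis margin, suppressing rapid toggling.
            if dists[self.current] - dists[best] > self.margin:
                self.current = best
            return self.current

    sel = DepthPlaneSelector()
    print(sel.select(1.02))  # 1: switches to the 1.0 m plane
    print(sel.select(0.74))  # 1: slightly nearer the 0.5 m plane, but within the margin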
64.
WEARABLE SYSTEM WITH HEADSET AND CONTROLLER INSIDE-OUT TRACKING
Wearable systems and methods for operation thereof incorporating headset and controller inside-out tracking are disclosed. A wearable system may include a headset and a controller. The wearable system may cause fiducials of the controller to flash. The wearable system may track a pose of the controller by capturing headset images using a headset camera, identifying the fiducials in the headset images, and tracking the pose of the controller based on the identified fiducials in the headset images and based on a pose of the headset. While tracking the pose of the controller, the wearable system may capture controller images using a controller camera. The wearable system may identify two-dimensional feature points in each controller image and determine three-dimensional map points based on the two-dimensional feature points and the pose of the controller.
Techniques for operating a depth sensor are discussed. A first sequence of operation steps and a second sequence of operation steps can be stored in memory on the depth sensor to define, respectively, a first depth sensing mode of operation and a second depth sensing mode of operation. In response to a first request for depth measurement(s) according to the first depth sensing mode of operation, the depth sensor can operate in the first mode of operation by executing the first sequence of operation steps. In response to a second request for depth measurement(s) according to the second depth sensing mode of operation, and without performing an additional configuration operation, the depth sensor can operate in the second mode of operation by executing the second sequence of operation steps.
H04N 23/667 - Camera operation mode switching, e.g. between still and video, sport and normal or high and low resolution modes
G01S 17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
H04N 13/139 - Format conversion, e.g. of frame-rate or size
H04N 23/959 - Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics
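An illustrative sketch of the stored-sequence scheme in the depth-sensor entry above: each mode's operation steps are stored up front, so a request for either mode simply executes its sequence with no reconfiguration in between. The step names are hypothetical.

    SEQUENCES = {
        "short_range": ["enable_laser(power=low)", "integrate(50us)", "readout()"],
        "long_range":  ["enable_laser(power=high)", "integrate(500us)", "readout()"],
    }

    def request_depth(mode):
        # Execute the pre-stored sequence for the requested mode; each step
        # is just echoed here in place of real hardware commands.
        for step in SEQUENCES[mode]:
            print(f"[{mode}] {step}")

    request_depth("short_range")
    request_depth("long_range")   # no configuration step between mode switches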
66.
DIFFRACTIVE OPTICAL ELEMENTS WITH MITIGATION OF REBOUNCE-INDUCED LIGHT LOSS AND RELATED SYSTEMS AND METHODS
Display devices include waveguides with in-coupling optical elements that mitigate re-bounce of in-coupled light to improve in-coupling efficiency and/or uniformity. A waveguide receives light from a light source and includes an in-coupling optical element that in-couples the received light to propagate by total internal reflection within the waveguide. The in-coupled light may undergo re-bounce, in which the light reflects off a waveguide surface and, after the reflection, strikes the in-coupling optical element. Upon striking the in-coupling optical element, the light may be partially absorbed and/or out-coupled by the optical element, thereby reducing the amount of in-coupled light propagating through the waveguide. The in-coupling optical element can be truncated or have reduced diffraction efficiency along the propagation direction to reduce the occurrence of light loss due to re-bounce of in-coupled light, resulting in less in-coupled light being prematurely out-coupled and/or absorbed during subsequent interactions with the in-coupling optical element.
F21V 8/00 - Use of light guides, e.g. fibre optic devices, in lighting devices or systems
G02B 6/10 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings of the optical waveguide type
A wearable display system includes an eyepiece stack having a world side and a user side opposite the world side, wherein during use a user positioned on the user side views displayed images delivered by the system via the eyepiece stack which augment the user's view of the user's environment. The wearable display system also includes an angularly selective film arranged on the world side of the eyepiece stack. The angularly selective film includes a polarization-adjusting film arranged between a pair of linear polarizers. The linear polarizers and the polarization-adjusting film significantly reduce transmission of visible light incident on the angularly selective film at large angles of incidence without significantly reducing transmission of light incident on the angularly selective film at small angles of incidence.
G02F 1/1335 - Structural association of cells with optical devices, e.g. polarisers or reflectors
G02F 1/137 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulatingNon-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering
68.
DISPLAY SYSTEMS AND METHODS FOR DETERMINING REGISTRATION BETWEEN A DISPLAY AND A USER'S EYES
A display system may include a head-mounted display (HMD) for rendering a three-dimensional virtual object which appears to be located in an ambient environment of a user of the display. One or more eyes of the user may not be in desired positions, relative to the HMD, to receive, or register, image information outputted by the HMD and/or to view an external environment. For example, the HMD-to-eye alignment may vary for different users and/or may change over time (e.g., as the HMD is displaced). The display system may determine a relative position or alignment between the HMD and the user's eyes. Based on the relative positions, the wearable device may determine if it is properly fitted to the user, may provide feedback on the quality of the fit to the user, and/or may take actions to reduce or minimize effects of any misalignment.
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
A61B 3/11 - Objective types, i.e. instruments for examining the eyes independent of the patients perceptions or reactions for measuring interpupillary distance or diameter of pupils
A61B 3/113 - Objective types, i.e. instruments for examining the eyes independent of the patients perceptions or reactions for determining or recording eye movement
G02B 30/00 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
G02B 30/40 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images giving the observer of a single two-dimensional [2D] image a perception of depth
G06F 1/16 - Constructional details or arrangements
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
G06V 10/42 - Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06V 10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
G06V 40/18 - Eye characteristics, e.g. of the iris
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/383 - Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
Disclosed herein are systems and methods for displays, such as for a head wearable device. An example display can include an infrared illumination layer, the infrared illumination layer including a substrate, one or more LEDs disposed on a first surface of the substrate, and a first encapsulation layer disposed on the first surface of the substrate, where the encapsulation layer can include a nano-patterned surface. In some examples, the nano-patterned surface can be configured to improve a visible light transmittance of the illumination layer. In one or more examples, embodiments disclosed herein may provide a robust illumination layer that can reduce the haze associated with an illumination layer.
A display system can include a head-mounted display configured to project light to an eye of a user to display virtual image content at different amounts of divergence and collimation. The display system can include an inward-facing imaging system, possibly comprising a plurality of cameras, that images the user's eye and glints formed thereon, and processing electronics that are in communication with the inward-facing imaging system and that are configured to obtain an estimate of a center of rotation of the user's eye using cornea data derived from the glint images. The display system may render virtual image content with a render camera positioned at the determined position of the center of rotation of said eye.
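A sketch of one plausible estimation step, not necessarily the claimed method: cornea centers recovered from glint images across many gaze directions sweep out a sphere about the center of rotation, whose center can be fit by linear least squares.

    import numpy as np

    def fit_center_of_rotation(cornea_centers):
        # Sphere fit: |p - c|^2 = r^2  =>  2 p.c + (r^2 - |c|^2) = |p|^2,
        # which is linear in the center c and in k = r^2 - |c|^2.
        P = np.asarray(cornea_centers, dtype=float)
        A = np.hstack([2.0 * P, np.ones((len(P), 1))])
        b = (P ** 2).sum(axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        return sol[:3]  # center-of-rotation estimate

    # Synthetic cornea centers on a sphere of radius 5.9 mm about (0, 0, 10) mm:
    rng = np.random.default_rng(0)
    dirs = rng.normal(size=(50, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    centers = np.array([0.0, 0.0, 10.0]) + 5.9 * dirs
    print(fit_center_of_rotation(centers))   # ~ [0, 0, 10]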
An eyepiece waveguide for an augmented reality display system may include an optically transmissive substrate, an input coupling grating (ICG) region, a multi-directional pupil expander (MPE) region, and an exit pupil expander (EPE) region. The ICG region may receive an input beam of light and couple the input beam into the substrate as a guided beam. The MPE region may include a plurality of diffractive features which exhibit periodicity along at least a first axis of periodicity and a second axis of periodicity. The MPE region may be positioned to receive the guided beam from the ICG region and to diffract it in a plurality of directions to create a plurality of diffracted beams. The EPE region may overlap the MPE region and may out couple one or more of the diffracted beams from the optically transmissive substrate as output beams.
Head-mounted display systems with power saving functionality are disclosed. The systems can include a frame configured to be supported on the head of the user. The systems can also include a head-mounted display disposed on the frame, one or more sensors, and processing electronics in communication with the display and the one or more sensors. In some implementations, the processing electronics can be configured to cause the system to reduce power of one or more components based at least in part on a determination that the frame is in a certain position (e.g., upside-down or on top of the head of the user). In some implementations, the processing electronics can be configured to cause the system to reduce power of one or more components based at least in part on a determination that the frame has been stationary for at least a threshold period of time.
Wearable systems and methods for operation thereof incorporating headset and controller localization using headset cameras and controller fiducials are disclosed. A wearable system may include a headset and a controller. The wearable system may alternate between performing headset tracking and performing controller tracking by repeatedly capturing images using a headset camera of the headset during headset tracking frames and controller tracking frames. The wearable system may cause the headset camera to capture a first exposure image having an exposure above a threshold and cause the headset camera to capture a second exposure image having an exposure below the threshold. The wearable system may determine a fiducial interval during which fiducials of the controller are to flash at a fiducial frequency and a fiducial period. The wearable system may cause the fiducials to flash during the fiducial interval in accordance with the fiducial frequency and the fiducial period.
A head mounted display system can process images by assessing relative motion between the head mounted display and one or more features in a user's environment. The assessment of relative motion can include determining whether the head mounted display has moved, is moving, and/or is expected to move with respect to one or more features in the environment. Additionally or alternatively, the assessment can include determining whether one or more features in the environment have moved, are moving, and/or are expected to move relative to the head mounted display. The image processing can further include determining one or more virtual image content locations in the environment that correspond to locations where renderable virtual image content appears to a user when those locations appear in the display, and comparing the one or more virtual image content locations in the environment with a viewing zone.
A method for measuring performance of a head-mounted display module includes arranging the head-mounted display module relative to a plenoptic camera assembly so that an exit pupil of the head-mounted display module coincides with a pupil of the plenoptic camera assembly; emitting light from the head-mounted display module while the head-mounted display module is arranged relative to the plenoptic camera assembly; filtering the light at the exit pupil of the head-mounted display module; acquiring, with the plenoptic camera assembly, one or more light field images projected from the head-mounted display module with the filtered light; and determining information about the performance of the head-mounted display module based on the acquired light field images.
Architectures are provided for selectively outputting light for forming images, the light having different wavelengths and being outputted with low levels of crosstalk. In some embodiments, light is incoupled into a waveguide and deflected to propagate in different directions, depending on wavelength. The incoupled light is then outcoupled by outcoupling optical elements that outcouple light based on the direction of propagation of the light. In some other embodiments, color filters are disposed between a waveguide and outcoupling elements. The color filters limit the wavelengths of light that interact with and are outcoupled by the outcoupling elements. In yet other embodiments, a different waveguide is provided for each range of wavelengths to be outputted. Incoupling optical elements selectively incouple light of the appropriate range of wavelengths into a corresponding waveguide, from which the light is outcoupled.
Examples are described of wearable devices that can present to a user an audible or visual representation of an audio file comprising a plurality of stem tracks, the stem tracks representing different audio content of the audio file. Systems and methods are described that determine the pose of the user; generate, based on the pose of the user, an audio mix of at least one of the plurality of stem tracks of the audio file; generate, based on the pose of the user and the audio mix, a visualization of the audio mix; communicate an audio signal representative of the audio mix to the speaker; and communicate a visual signal representative of the visualization of the audio mix to the display.
Systems and methods for fusing multiple types of sensor data to determine a heart rate of a user. An accelerometer obtains accelerometer data associated with the user over a time period, and a gyroscope obtains gyroscope data associated with the user over the time period. Also, a camera obtains a plurality of images of the user's eye over the time period. The plurality of images are analyzed to generate image data of the user's eyelid over the time period. The accelerometer data, the gyroscope data, and the image data are fused into fused sensor data, and a heart rate of the user is determined from the fused sensor data.
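A hedged sketch of one possible fusion approach (the entry does not specify the exact algorithm): normalize each stream, average their magnitude spectra, and take the spectral peak inside a plausible heart-rate band.

    import numpy as np

    def heart_rate_bpm(accel, gyro, eyelid, fs=60.0, band=(0.7, 3.0)):
        spectra = []
        for sig in (accel, gyro, eyelid):
            sig = (sig - np.mean(sig)) / (np.std(sig) + 1e-9)  # normalize each stream
            spectra.append(np.abs(np.fft.rfft(sig)))
        fused = np.mean(spectra, axis=0)                       # fuse in the frequency domain
        freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])         # ~42-180 bpm window
        return 60.0 * freqs[mask][np.argmax(fused[mask])]

    t = np.arange(0, 30, 1 / 60.0)          # 30 s of data at 60 Hz
    pulse = np.sin(2 * np.pi * 1.2 * t)     # a 72 bpm component present in every stream
    rng = np.random.default_rng(1)
    print(heart_rate_bpm(pulse + 0.5 * rng.normal(size=t.size),
                         pulse + 0.8 * rng.normal(size=t.size),
                         pulse + 0.3 * rng.normal(size=t.size)))   # ~72.0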
A device for viewing a projected image includes an input coupling grating operable to receive light related to the projected image from a light source and an expansion grating having a first grating structure characterized by a first set of grating parameters varying in one or more dimensions. The expansion grating is operable to receive light from the input coupling grating and to multiply the light related to the projected image. The device also includes an output coupling grating having a second grating structure characterized by a second set of grating parameters and operable to output the multiplied light in a predetermined direction.
G02B 6/293 - Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals with wavelength selective means
G02B 6/34 - Optical coupling means utilising prism or grating
G02B 7/00 - Mountings, adjusting means, or light-tight connections, for optical elements
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/147 - Digital output to display device using display panels
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
H04N 9/31 - Projection devices for colour picture display
H05K 7/20 - Modifications to facilitate cooling, ventilating, or heating
82.
METHOD AND SYSTEM FOR DETECTING FIBER POSITION IN A FIBER SCANNING PROJECTOR
A method of measuring a position of a scanning cantilever includes providing a housing including an actuation region, a position measurement region including an aperture, and an oscillation region. The method also includes providing a drive signal to an actuator disposed in the actuation region, oscillating the scanning cantilever in response to the drive signal, generating a first light beam using a first optical source, directing the first light beam toward the aperture, detecting at least a portion of the first light beam using a first photodetector, generating a second light beam using a second optical source, directing the second light beam toward the aperture, detecting at least a portion of the second light beam using a second photodetector, and determining the position of the scanning cantilever based on the detected portion of the first light beam and the detected portion of the second light beam.
A wearable display system includes a mixed reality display for presenting a virtual image to a user, an outward-facing imaging system configured to image an environment of the user, and a hardware processor operably coupled to the mixed reality display and to the imaging system. The hardware processor is programmed to generate a virtual remote associated with a parent device, render the virtual remote and a virtual control element on the mixed reality display, determine when the user of the wearable system interacts with the virtual control element of the virtual remote, and perform certain functions in response to user interaction with the virtual control element of the virtual remote. These functions may include causing the virtual control element to move on the mixed reality display and, when movement of the virtual control element surpasses a threshold condition, generating a focus indicator for the virtual control element.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
Disclosed herein are systems and methods for presenting audio content in mixed reality environments. A method may include receiving a first input from an application program; in response to receiving the first input, receiving, via a first service, an encoded audio stream; generating, via the first service, a decoded audio stream based on the encoded audio stream; receiving, via a second service, the decoded audio stream; receiving a second input from one or more sensors of a wearable head device; receiving, via the second service, a third input from the application program, wherein the third input corresponds to a position of one or more virtual speakers; generating, via the second service, a spatialized audio stream based on the decoded audio stream, the second input, and the third input; and presenting, via one or more speakers of the wearable head device, the spatialized audio stream.
A virtual image generation system for use by an end user comprises memory, a display subsystem, an object selection device configured for receiving input from the end user and persistently selecting at least one object in response to the end user input, and a control subsystem configured for rendering a plurality of image frames of a three-dimensional scene, conveying the image frames to the display subsystem, generating audio data originating from the at least one selected object, and for storing the audio data within the memory.
A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
A63F 13/424 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
A63F 13/5255 - Changing parameters of virtual cameras according to dedicated instructions from a player, e.g. using a secondary joystick to rotate the camera around a player's character
A63F 13/5372 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04R 1/32 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
H04R 1/40 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
A user may interact with and view virtual elements, such as avatars and objects, and/or real-world elements in three-dimensional space in an augmented reality (AR) session. The system may allow one or more spectators to view, from a stationary or dynamic camera, a third-person view of the user's AR session. The third-person view may be synchronized with the user view, and the virtual elements of the user view may be composited onto the third-person view.
F21V 13/04 - Combinations of only two kinds of elements the elements being reflectors and refractors
F21V 8/00 - Use of light guides, e.g. fibre optic devices, in lighting devices or systems
G02B 6/12 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings of the optical waveguide type of the integrated circuit kind
G02B 23/06 - Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices involving prisms or mirrors having a focusing action, e.g. parabolic mirror
G02B 30/50 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels
H04N 13/315 - Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using parallax barriers the parallax barriers being time-variant
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
A cross reality system enables any of multiple devices to efficiently and accurately access previously stored maps and render virtual content specified in relation to those maps. Both stored maps and tracking maps used by portable devices may have wireless fingerprints associated with them. The portable devices may maintain wireless fingerprints based on wireless scans performed repetitively, based on one or more trigger conditions, as the devices move around the physical world. The wireless information obtained from these scans may be used to create or update wireless fingerprints associated with locations in a tracking map on the devices. One or more of these wireless fingerprints may be used when a previously stored map is to be selected based on its coverage of an area in which the portable device is operating. Maintaining wireless fingerprints in this way provides a reliable and low latency mechanism for performing map-related operations.
H04W 24/02 - Arrangements for optimising operational condition
G01S 5/02 - Position-fixing by co-ordinating two or more direction or position-line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04W 4/02 - Services making use of location information
H04W 4/029 - Location-based management or tracking services
A display system aligns the location of its exit pupil with the location of a viewer's pupil by changing the location of the portion of a light source that outputs light. The light source may include an array of pixels that output light, thereby allowing an image to be displayed on the light source. The display system includes a camera that captures image(s) of the eye, and negatives of the eye image(s) are displayed by the light source. In the negative image, the dark pupil of the eye is a bright spot which, when displayed by the light source, defines the exit pupil of the display system, such that image content may be presented by modulating the light source. The location of the pupil of the eye may be tracked by capturing the images of the eye.
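The negative-image step lends itself to a very short sketch: invert the captured eye image so the dark pupil becomes the brightest region, and light only the pupil-aligned source pixels. The image values and threshold below are placeholders.

    import numpy as np

    eye = np.full((120, 160), 200, dtype=np.uint8)   # bright sclera/iris (placeholder)
    eye[50:70, 70:90] = 10                           # dark pupil region

    negative = 255 - eye                             # pupil is now the bright spot
    source_pattern = (negative > 128).astype(np.uint8)  # light only pupil-aligned pixels

    # The lit region of the source, and hence the exit pupil, tracks the eye pupil.
    r, c = np.argwhere(source_pattern).mean(axis=0)
    print(r, c)                                      # ~ (59.5, 79.5): pupil center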
A virtual reality (VR) and/or augmented reality (AR) display system is configured to control a display using control information that is embedded in or otherwise included with imagery data to be presented through the display. The control information can indicate depth plane(s) and/or color plane(s) to be used to present the imagery data, depth plane(s) and/or color plane(s) to be activated or inactivated, shift(s) of at least a portion of the imagery data (e.g., one or more pixels) laterally within a depth plane and/or longitudinally between depth planes, and/or other suitable controls.
G06T 7/579 - Depth or shape recovery from multiple images from motion
G06T 19/00 - Manipulating 3D models or images for computer graphics
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/395 - Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume with depth sampling, i.e. the volume being constructed from a stack or sequence of 2D image planes
Disclosed herein are systems and methods for distributed computing and/or networking for mixed reality systems. A method may include capturing an image via a camera of a head-wearable device. Inertial data may be captured via an inertial measurement unit of the head-wearable device. A position of the head-wearable device can be estimated based on the image and the inertial data via one or more processors of the head-wearable device. The image can be transmitted to a remote server. A neural network can be trained based on the image via the remote server. A trained neural network can be transmitted to the head-wearable device.
G06F 18/214 - Generating training patternsBootstrap methods, e.g. bagging or boosting
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
G06V 40/18 - Eye characteristics, e.g. of the iris
A waveguide stack having color-selective regions on one or more waveguides. The color-selective regions are configured to absorb incident light of a first wavelength range in such a way as to reduce or prevent the incident light of the first wavelength range from coupling into a waveguide configured to transmit light of a second wavelength range.
Systems and methods of generating a three-dimensional (3D) reconstruction of a scene or environment surrounding a user of a spatial computing system, such as a virtual reality, augmented reality, or mixed reality system, using only multiview images, without the need for depth sensors or depth data from sensors. Features are extracted from a sequence of frames of RGB images and back-projected, using known camera intrinsics and extrinsics, into a 3D voxel volume wherein each pixel of the voxel volume is mapped to a ray in the voxel volume. The back-projected features are fused into the 3D voxel volume. The 3D voxel volume is passed through a 3D convolutional neural network to refine the features and regress truncated signed distance function values at each voxel of the 3D voxel volume.
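A compact sketch of the back-projection step, assuming a pinhole camera with intrinsics K and a world-from-camera pose (R, t): 2D feature maps are splatted into a voxel grid and mean-fused, standing in for the fusion that precedes the 3D CNN. All sizes and names are illustrative.

    import numpy as np

    def backproject(features, K, R, t, grid_origin, voxel_size, grid_shape):
        acc = np.zeros(grid_shape + (features.shape[-1],))
        cnt = np.zeros(grid_shape)
        idx = np.indices(grid_shape).reshape(3, -1).T
        centers = grid_origin + (idx + 0.5) * voxel_size         # voxel centers, world frame
        cam = (R.T @ (centers - t).T).T                          # world -> camera frame
        z = cam[:, 2]
        uv = (K @ cam.T).T
        u, v = uv[:, 0] / z, uv[:, 1] / z                        # perspective projection
        h, w = features.shape[:2]
        ok = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)   # voxels visible in the image
        ui, vi = u[ok].astype(int), v[ok].astype(int)
        sel = idx[ok]
        acc[sel[:, 0], sel[:, 1], sel[:, 2]] += features[vi, ui]  # splat feature along each ray
        cnt[sel[:, 0], sel[:, 1], sel[:, 2]] += 1
        return acc / np.maximum(cnt, 1)[..., None]                # mean-fused feature volume

    K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])
    feats = np.random.rand(64, 64, 8)                             # e.g. CNN features of one frame
    vol = backproject(feats, K, np.eye(3), np.zeros(3),
                      grid_origin=np.array([-1.0, -1.0, 0.5]),
                      voxel_size=0.05, grid_shape=(40, 40, 40))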
Images perceived to be substantially full color or multi-colored may be formed using component color images that are distributed in unequal numbers across a plurality of depth planes. The distribution of component color images across depth planes may vary based on color. In some embodiments, a display system includes a stack of waveguides that each output light of a particular color, with some colors having fewer numbers of associated waveguides than other colors. The waveguide stack may include multiple pluralities (e.g., first and second pluralities) of waveguides, each configured to produce an image by outputting light corresponding to a particular color. The total number of waveguides in the second plurality of waveguides may be less than the total number of waveguides in the first plurality of waveguides.
Systems and methods of presenting an output audio signal to a listener located at a first location in a virtual environment are disclosed. According to embodiments of a method, an input audio signal is received. A first intermediate audio signal corresponding to the input audio signal is determined, based on a location of the sound source in the virtual environment, and the first intermediate audio signal is associated with a first bus. A second intermediate audio signal is determined. The second intermediate audio signal corresponds to a reverberation of the input audio signal in the virtual environment. The second intermediate audio signal is determined based on a location of the sound source, and further based on an acoustic property of the virtual environment. The second intermediate audio signal is associated with a second bus. The output audio signal is presented to the listener via the first and second buses.
G10K 15/10 - Arrangements for producing a reverberation or echo sound using time-delay networks comprising electromechanical or electro-acoustic devices
H04R 3/04 - Circuits for transducers for correcting frequency response
H04R 3/12 - Circuits for transducers for distributing signals to two or more loudspeakers
H04R 5/033 - Headphones for stereophonic communication
A method of operating a virtual image generation system comprises allowing an end user to interact with a three-dimensional environment comprising at least one virtual object, presenting a stimulus to the end user in the context of the three-dimensional environment, sensing at least one biometric parameter of the end user in response to the presentation of the stimulus to the end user, generating biometric data for each of the sensed biometric parameter(s), determining if the end user is in at least one specific emotional state based on the biometric data for each of the sensed biometric parameter(s), and performing an action discernible to the end user to facilitate a current objective, based at least in part on whether the end user is determined to be in the specific emotional state(s).
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A63F 13/21 - Input arrangements for video game devices characterised by their sensors, purposes or types
A63F 13/212 - Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
An eyepiece includes an optical waveguide, a transmissive input coupler at a first end of the optical waveguide, an output coupler at a second end of the optical waveguide, and a polymeric color-absorbing region along a portion of the optical waveguide between the transmissive input coupler and the output coupler. The transmissive input coupler is configured to couple incident visible light into the optical waveguide, and the color-absorbing region is configured to absorb a component of the visible light as the visible light propagates through the optical waveguide.
G02B 1/118 - Anti-reflection coatings having sub-optical wavelength surface structures designed to provide an enhanced transmittance, e.g. moth-eye structures
Systems and methods for enhanced depth determination using projection spots. An example method includes obtaining images of a real-world object, the images being obtained from image sensors positioned about the real-world object, and the images depicting projection spots projected onto the real-world object via projectors positioned about the real-world object. A projection spot map is accessed, the projection spot map including information indicative of real-world locations of projection spots based on locations of the projection spots in the obtained images. Location information is assigned to the projection spots based on the projection spot map. Generation of a three-dimensional representation of the real-world object is caused.
A method for determining a focal point depth of a user of a three-dimensional (“3D”) display device includes tracking a first gaze path of the user. The method also includes analyzing 3D data to identify one or more virtual objects along the first gaze path of the user. The method further includes, when only one virtual object intersects the first gaze path of the user, identifying a depth of the only one virtual object as the focal point depth of the user.
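A sketch of the single-intersection case, with virtual objects simplified to spheres; the representation is an assumption made for illustration only.

    import numpy as np

    def focal_depth(ray_origin, ray_dir, objects):
        d = np.asarray(ray_dir, float)
        d /= np.linalg.norm(d)
        hits = []
        for center, radius, depth in objects:
            oc = np.asarray(center, float) - ray_origin
            along = float(oc @ d)                    # closest approach along the gaze ray
            if along > 0 and np.linalg.norm(oc - along * d) <= radius:
                hits.append(depth)
        return hits[0] if len(hits) == 1 else None   # unambiguous only if exactly one hit

    objects = [((0, 0, 2.0), 0.3, 2.0), ((1.5, 0, 4.0), 0.3, 4.0)]
    print(focal_depth(np.zeros(3), (0, 0, 1.0), objects))  # 2.0: one object on the gaze path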
Enhanced eye-tracking techniques for augmented or virtual reality display systems. An example method includes obtaining an image of an eye of a user of a wearable system, the image depicting glints on the eye caused by respective light emitters, wherein the image is a low dynamic range (LDR) image; generating a high dynamic range (HDR) image via computation of a forward pass of a machine learning model using the image; determining location information associated with the glints as depicted in the HDR image, wherein the location information is usable to inform an eye pose of the eye.
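A sketch of the pipeline shape only: a stand-in for the learned LDR-to-HDR model followed by glint localization on the HDR output. The "model" below is a placeholder gamma expansion, not a trained network, and the threshold is an assumption.

    import numpy as np

    def hdr_model_forward(ldr):
        # Placeholder for a learned LDR->HDR network: undo a gamma curve and
        # rescale, which expands the highlights where glints saturate.
        return (ldr.astype(float) / 255.0) ** 2.2 * 1000.0

    def glint_locations(hdr, threshold=900.0):
        ys, xs = np.where(hdr >= threshold)          # bright specular peaks
        return list(zip(ys.tolist(), xs.tolist()))

    ldr = np.zeros((60, 80), dtype=np.uint8)
    ldr[20, 30] = ldr[20, 50] = 255                  # two saturated glints
    hdr = hdr_model_forward(ldr)
    print(glint_locations(hdr))                      # [(20, 30), (20, 50)]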
Devices are described for high-accuracy displacement of tools. In particular, embodiments provide a device for adjusting a position of a tool, such as a camera. The device includes a threaded shaft having a first end and a second end and a shaft axis extending from the first end to the second end, and a motor that actuates the threaded shaft to move in a direction of the shaft axis. In some examples, the motor is operatively coupled to the threaded shaft. The device includes a carriage coupled to the camera, and a bearing assembly coupled to the threaded shaft and the carriage. In some examples, the bearing assembly permits movement of the carriage with respect to the threaded shaft. The movement of the carriage allows the position of the camera to be adjusted.