The present disclosure relates to display systems and, more particularly, to augmented reality display systems including diffraction grating(s), and methods of fabricating same. A diffraction grating includes a plurality of different diffracting zones having a periodically repeating lateral dimension corresponding to a grating period adapted for light diffraction. The diffraction grating additionally includes a plurality of different liquid crystal layers corresponding to the different diffracting zones. The different liquid crystal layers include liquid crystal molecules that are aligned differently, such that the different diffracting zones have different optical properties associated with light diffraction.
Systems and methods for rendering audio signals are disclosed. In some embodiments, a method may receive an input signal including a first portion and a second portion. A first processing stage comprising a first filter is applied to the first portion to generate a first filtered signal. A second processing stage comprising a second filter is applied to the first portion to generate a second filtered signal. A third processing stage comprising a third filter is applied to the second portion to generate a third filtered signal. A fourth processing stage comprising a fourth filter is applied to the second portion to generate a fourth filtered signal. A first output signal is determined based on a sum of the first filtered signal and the third filtered signal. A second output signal is determined based on a sum of the second filtered signal and the fourth filtered signal. The first output signal is presented to a first ear of a user of a virtual environment, and the second output signal is presented to a second ear of the user. The first portion of the input signal corresponds to a first location in the virtual environment, and the second portion of the input signal corresponds to a second location in the virtual environment.
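For illustration only, the filter-and-sum structure described above can be sketched as a few lines of array processing. The sketch below assumes each processing stage is a simple FIR convolution; the filter taps h1-h4, the two source signals, and all names are hypothetical placeholders rather than details from the disclosure.

```python
# Minimal sketch of the filter-and-sum rendering described above.
# h1/h2 filter the first portion toward the first/second ear; h3/h4 do the
# same for the second portion. All names are illustrative placeholders.
import numpy as np

def render_two_sources(portion_1, portion_2, h1, h2, h3, h4):
    first = np.convolve(portion_1, h1)    # first processing stage
    second = np.convolve(portion_1, h2)   # second processing stage
    third = np.convolve(portion_2, h3)    # third processing stage
    fourth = np.convolve(portion_2, h4)   # fourth processing stage

    left = np.zeros(max(len(first), len(third)))
    right = np.zeros(max(len(second), len(fourth)))
    left[:len(first)] += first            # first output = first + third filtered signals
    left[:len(third)] += third
    right[:len(second)] += second         # second output = second + fourth filtered signals
    right[:len(fourth)] += fourth
    return left, right                    # presented to the first and second ears
```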
Disclosed are dimming assemblies and display systems for reducing artifacts produced by optically-transmissive displays. A system may include a substrate upon which a plurality of electronic components are disposed. The electronic components may include a plurality of pixels, a plurality of conductors, and a plurality of circuit modules. The plurality of pixels may be arranged in a two-dimensional array, with each pixel having a two-dimensional geometry corresponding to a shape with at least one curved side. The plurality of conductors may be arranged adjacent to the plurality of pixels. The system may also include control circuitry electrically coupled to the plurality of conductors. The control circuitry may be configured to apply electrical signals to the plurality of circuit modules by way of the plurality of conductors.
G02F 1/139 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering based on orientation effects in which the liquid crystal remains transparent
4.
METHOD AND SYSTEM FOR PERFORMING DYNAMIC FOVEATION BASED ON EYE GAZE
A method of forming a foveated image includes (a) setting dimensions of a first region, (b) receiving an image having a first resolution, and (c) forming the foveated image including a primary quality region having the dimensions of the first region and the first resolution and a secondary quality region having a second resolution less than the first resolution. The method also includes (d) outputting the foveated image, (e) determining an eye gaze location, and (f) determining an eye gaze velocity. If the eye gaze velocity is less than a threshold velocity, the method includes decreasing the dimensions of the primary quality region and repeating (b) - (f). If the eye gaze velocity is greater than or equal to the threshold velocity, the method includes repeating (a) - (f).
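Read as a control loop, steps (a)-(f) and the branching on gaze velocity can be sketched as below. The helper callables and the shrink factor are hypothetical placeholders; only the branching logic follows the method as described.

```python
# Illustrative control loop for steps (a)-(f). Helper callables and the
# shrink factor are placeholders; only the velocity branching follows the method.
def dynamic_foveation(initial_dims, threshold_velocity,
                      receive_image, form_foveated, output, measure_gaze,
                      max_frames=1000):
    dims = initial_dims                                  # (a) set first-region dimensions
    for _ in range(max_frames):
        image = receive_image()                          # (b) image at the first resolution
        frame = form_foveated(image, dims)               # (c) primary + secondary quality regions
        output(frame)                                    # (d) output the foveated image
        gaze_location, gaze_velocity = measure_gaze()    # (e) and (f)
        if gaze_velocity < threshold_velocity:
            dims = (dims[0] * 0.9, dims[1] * 0.9)        # decrease primary region; repeat (b)-(f)
        else:
            dims = initial_dims                          # repeat from (a) with original dimensions
```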
A method includes rendering an original image at a first processor, encoding the original image to provide an encoded image, and transmitting the encoded image to a second processor. The method also includes decoding the encoded image to provide a decoded image, determining an eye gaze location, splitting the decoded image into N sections based on the eye gaze location, and processing N-1 sections of the N sections to produce N-1 secondary quality sections. The method further includes processing one section of the N sections to provide one primary quality section, combining the one primary quality section and the N-1 secondary quality sections to form a foveated image, and transmitting the foveated image to a display.
A mixed reality virtual environment is sharable among multiple users through the use of multiple view modes that are selectable by a presenter. Multiple users with wearable display systems may wish to view a common virtual object, which may be presented in a virtual room to any suitable number of users. A presentation may be controlled by a presenter using a presenter wearable system that leads multiple participants through information associated with the virtual object. Use of different viewing modes allows individual users to see different virtual content through their wearable display systems, despite being in a shared viewing space or alternatively, to see the same virtual content in different locations within a shared space.
G09B 5/12 - Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations; different stations being capable of presenting different information simultaneously
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04L 12/18 - Arrangements for providing special services to substations for broadcast or conference
Systems and methods for generating a face model for a user of a head-mounted device are disclosed. The head-mounted device can include one or more eye cameras configured to image the face of the user while the user is putting the device on or taking the device off. The images obtained by the eye cameras may be analyzed using a stereoscopic vision technique, a monocular vision technique, or a combination, to generate a face model for the user. The face model can be used to generate a virtual image of at least a portion of the user's face, for example to be presented as an avatar.
A mixed reality (MR) device can allow a user to switch between input modes to allow interactions with a virtual environment via devices such as a six degrees of freedom (6DoF) handheld controller and a touchpad input device. A default input mode for interacting with virtual content may rely on the user's head pose, which may be difficult to use in selecting virtual objects that are far away in the virtual environment. Thus, the system may be configured to allow the user to use a 6DoF cursor, and a visual ray that extends from the handheld controller to the cursor, to enable precise targeting. Input via a touchpad input device (e.g., that allows three degrees of freedom movements) may also be used in conjunction with the 6DoF cursor.
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
H04N 13/361 - Reproducing mixed stereoscopic images; Reproducing mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
H04N 13/383 - Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
9.
TECHNIQUES FOR DETERMINING SETTINGS FOR A CONTENT CAPTURE DEVICE
A method includes receiving a first image captured by a content capture device, identifying a first object in the first image and determining a first update to a first setting of the content capture device. The method further includes receiving a second image captured by the content capture device, identifying a second object in the second image, and determining a second update to a second setting of the content capture device. The method further includes updating the first setting of the content capture device using the first update, receiving a third image using the updated first setting of the content capture device, updating the second setting of the content capture device using the second update, receiving a fourth image using the updated second setting of the content capture device, and stitching the third image and the fourth image together to form a composite image.
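The ordering of the setting updates and captures can be pictured as below. The camera interface, the choice of settings, and the stitch step are assumptions made only to illustrate the sequence of operations.

```python
# Hypothetical sketch of the two-setting capture-and-stitch flow. The camera
# object, setting names, and stitch helper are assumptions; only the ordering
# of updates and captures follows the method described above.
def capture_composite(camera, detect_object, compute_update, stitch):
    img1 = camera.capture()
    update1 = compute_update(detect_object(img1), setting="exposure")  # first update
    img2 = camera.capture()
    update2 = compute_update(detect_object(img2), setting="gain")      # second update

    camera.apply(update1)          # update the first setting
    img3 = camera.capture()        # third image with the updated first setting
    camera.apply(update2)          # update the second setting
    img4 = camera.capture()        # fourth image with the updated second setting
    return stitch(img3, img4)      # composite image
```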
H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders
H04N 23/71 - Circuitry for evaluating the brightness variation
H04N 23/73 - Circuitry for compensating brightness variation in the scene by influencing the exposure time
H04N 23/741 - Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
H04N 23/743 - Bracketing, i.e. taking a series of images with varying exposure conditions
H04N 23/76 - Circuitry for compensating brightness variation in the scene by influencing the image signals
10.
METHOD AND SYSTEM FOR PERFORMING EYE TRACKING IN AUGMENTED REALITY DEVICES
A wearable device for projecting image light to an eye of a viewer and forming an image of virtual content in an augmented reality display is provided. The wearable device includes a projector and a stack of waveguides optically connected to the projector. The wearable device also includes an eye tracking system comprising a plurality of illumination sources, an optical element having optical power, and a set of cameras. The optical element is disposed between the plurality of illumination sources and the set of cameras. In some embodiments, the augmented reality display includes an eyepiece operable to output virtual content from an output region and a plurality of illumination sources. At least some of the plurality of illumination sources overlap with the output region.
A high-resolution image sensor suitable for use in an augmented reality (AR) system to provide low latency image analysis with low power consumption. The AR system can be compact, and may be small enough to be packaged within a wearable device such as a set of goggles or mounted on a frame resembling ordinary eyeglasses. The image sensor may receive information about a region of an imaging array associated with a movable object, selectively output imaging information for that region, and synchronously output high-resolution image frames. The region may be updated dynamically as the image sensor and/or the object moves. The image sensor may output the high-resolution image frames less frequently than the region being updated when the image sensor and/or the object moves. Such an image sensor provides a small amount of data from which object information used in rendering an AR scene can be developed.
A head-mounted display system configured to be worn over eyes of a user includes a frame configured to be worn on a head of the user. The system also includes a display disposed on the frame over the eyes of the user. The system further includes an inwardly-facing light source disposed on the frame and configured to emit light toward the eyes of the user to improve visibility of respective portions of a face and the eyes of the user through the display. Moreover, the system includes a processor configured to control a brightness of the display, an opacity of the display, and an intensity of the light emitted by the inwardly-facing light source.
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
G09G 5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory
A method is disclosed, the method comprising the steps of receiving, from a first client application, first graphical data comprising a first node; receiving, from a second client application independent of the first client application, second graphical data comprising a second node; and generating a scenegraph, wherein the scenegraph describes a hierarchical relationship between the first node and the second node according to visual occlusion relative to a perspective from a display.
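One way to read this is as a merge of per-client nodes into a single graph whose parent/child ordering encodes occlusion from the display's perspective. The sketch below is a minimal interpretation under that assumption; the depth-based ordering criterion and all names are illustrative.

```python
# Minimal sketch: merge nodes from two independent client applications into one
# scenegraph whose hierarchy encodes occlusion from the display's perspective
# (nearer nodes are parented above, and occlude, farther ones). Illustrative only.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    depth: float                  # distance from the display's viewpoint
    children: list = field(default_factory=list)

def generate_scenegraph(first_node: Node, second_node: Node) -> Node:
    root = Node(name="root", depth=0.0)
    near, far = sorted([first_node, second_node], key=lambda n: n.depth)
    root.children.append(near)    # nearer node occludes...
    near.children.append(far)     # ...the farther node placed beneath it
    return root

# Example: one node received from each client application.
scene = generate_scenegraph(Node("app1_quad", depth=1.0), Node("app2_quad", depth=2.5))
```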
A display system, such as a virtual reality or augmented reality display system, can control a display to present image data including a plurality of color components, on a plurality of depth planes supported by the display. The presentation of the image data through the display can be controlled based on control information that is embedded in the image data, for example to activate or deactivate a color component and/or a depth plane. In some examples, light sources and/or spatial light modulators that relay illumination from the light sources may receive signals from a display controller to adjust a power setting to the light source or spatial light modulator based on control information embedded in an image data frame.
Systems and methods for managing multi-objective alignments in imprinting (e.g., single-sided or double-sided) are provided. An example system includes rollers for moving a template roll, a stage for holding a substrate, a dispenser for dispensing resist on the substrate, a light source for curing the resist to form an imprint on the substrate when a template of the template roll is pressed into the resist on the substrate, a first inspection system for registering a fiducial mark of the template to determine a template offset, a second inspection system for registering the imprint on the substrate to determine a wafer registration offset between a target location and an actual location of the imprint, and a controller for controlling movement of the substrate with the resist below the template based on the template offset and for determining an overlay bias of the imprint on the substrate based on the wafer registration offset.
G03F 9/00 - Registration or positioning of originals, masks, frames, photographic sheets or textured or patterned surfaces, e.g. automatically
G03F 7/00 - Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printed surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
16.
DISPLAY SYSTEM HAVING A PLURALITY OF LIGHT PIPES FOR A PLURALITY OF LIGHT EMITTERS
A display system includes a plurality of light pipes and a plurality of light sources configured to emit light into the light pipes. The display system also comprises a spatial light modulator configured to modulate light received from the light pipes to form images. The display system may also comprise one or more waveguides configured to receive modulated light from the spatial light modulator and to relay that light to a viewer.
AR/VR display systems limit displaying content that exceeds an accommodation-vergence mismatch threshold, which may define a volume around the viewer. The volume may be subdivided into two or more zones, including an innermost loss-of-fusion zone (LoF) in which no content is displayed, and one or more outer AVM zones in which the displaying of content may be stopped, or clipped, under certain conditions. For example, content may be clipped if the viewer is verging within an AVM zone and if the content is displayed within the AVM zone for more than a threshold duration. A further possible condition for clipping content is that the user is verging on that content. In addition, the boundaries of the AVM zone and/or the acceptable amount of time that the content is displayed may vary depending upon the type of content being displayed, e.g., whether the content is user-locked content or in-world content.
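The clipping conditions can be summarized as a small decision function. The zone radii, dwell-time thresholds, and per-content-type table below are invented example values chosen only to illustrate the logic described above.

```python
# Illustrative clipping decision for content near the viewer. Distances,
# durations, and the per-content-type table are hypothetical example values.
THRESHOLDS_S = {"user_locked": 2.0, "in_world": 0.5}   # allowed seconds inside the AVM zone

def should_clip(content_distance_m, vergence_distance_m, time_in_zone_s, content_type,
                lof_radius_m=0.1, avm_radius_m=0.3,
                require_verging_on_content=True, verging_on_content=False):
    if content_distance_m < lof_radius_m:
        return True                                     # loss-of-fusion zone: no content displayed
    in_avm_zone = content_distance_m < avm_radius_m
    viewer_verging_in_zone = vergence_distance_m < avm_radius_m
    if not (in_avm_zone and viewer_verging_in_zone):
        return False
    if require_verging_on_content and not verging_on_content:
        return False                                    # optional further condition
    return time_in_zone_s > THRESHOLDS_S[content_type]  # per-content dwell-time threshold
```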
Methods and systems for depth-based foveated rendering in a display system are disclosed. The display system may be an augmented reality display system configured to provide virtual content on a plurality of depth planes using different wavefront divergence. Some embodiments include monitoring eye orientations of a user of the display system. A fixation point can be determined based on the eye orientations, the fixation point representing a three-dimensional location with respect to a field of view. Location information of virtual object(s) to present is obtained, with the location information including three-dimensional position(s) of the virtual object(s). A resolution of the virtual object(s) can be adjusted based on a proximity of the location(s) of the virtual object(s) to the fixation point. The virtual object(s) are presented by the display system according to the adjusted resolution(s).
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
G06T 15/00 - 3D [Three Dimensional] image rendering
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04N 13/279 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
H04N 13/341 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using temporal multiplexing
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/383 - Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
H04N 13/395 - Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume with depth sampling, i.e. the volume being constructed from a stack or sequence of 2D image planes
A virtual, augmented, or mixed reality display system includes a display configured to display virtual, augmented, or mixed reality image data, the display including one or more optical components which introduce optical distortions or aberrations to the image data. The system also includes a display controller configured to provide the image data to the display. The display controller includes memory for storing optical distortion correction information, and one or more processing elements to at least partially correct the image data for the optical distortions or aberrations using the optical distortion correction information.
Methods and systems for depth-based foveated rendering in a display system are disclosed. The display system may be an augmented reality display system configured to provide virtual content on a plurality of depth planes using different wavefront divergence. Some embodiments include determining a fixation point of a user's eyes. Location information associated with a first virtual object to be presented to the user via a display device is obtained. A resolution-modifying parameter of the first virtual object is obtained. A particular resolution at which to render the first virtual object is identified based on the location information and the resolution-modifying parameter of the first virtual object. The particular resolution is based on a resolution distribution specifying resolutions for corresponding distances from the fixation point. The first virtual object rendered at the identified resolution is presented to the user via the display system.
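A resolution distribution of this kind can be expressed as a falloff function of three-dimensional distance from the fixation point, scaled by a per-object resolution-modifying parameter. The particular falloff shape and constants below are arbitrary examples, not values from the disclosure.

```python
# Example resolution distribution: full resolution at the fixation point,
# falling off with 3D distance and scaled by a per-object modifier.
# The exponential falloff and constants are illustrative only.
import math

def render_resolution(object_pos, fixation_point, modifier=1.0,
                      full_res=1.0, min_res=0.1, falloff=0.5):
    distance = math.dist(object_pos, fixation_point)    # 3D distance from fixation
    base = full_res * math.exp(-falloff * distance)     # resolution distribution
    return max(min_res, min(full_res, base * modifier)) # clamp to the supported range

# Example: an object 1.8 m from fixation, with a modifier that keeps it sharper.
res = render_resolution((0.5, 0.0, 2.5), (0.0, 0.0, 1.0), modifier=1.5)
```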
An eyepiece for an augmented reality display system. The eyepiece can include a waveguide substrate. The waveguide substrate can include an input coupler grating (ICG), an orthogonal pupil expander (OPE) grating, a spreader grating, and an exit pupil expander (EPE) grating. The ICG can couple at least one input light beam into at least a first guided light beam that propagates inside the waveguide substrate. The OPE grating can divide the first guided light beam into a plurality of parallel, spaced-apart light beams. The spreader grating can receive the light beams from the OPE grating and spread their distribution. The spreader grating can include diffractive features oriented at approximately 90° to diffractive features of the OPE grating. The EPE grating can re-direct the light beams from the OPE grating and the spreader grating such that they exit the waveguide substrate.
A wearable display device, such as an augmented reality display device, can present virtual content to the wearer for applications in a healthcare setting. The wearer may be a patient or a healthcare provider (HCP). Applications can include, but are not limited to, access, display, and modification of patient medical records and sharing patient medical records among authorized HCPs, detecting one or more anomalies in a medical environment and presenting virtual content (e.g., alerts) indicating the one or more anomalies, detecting the presence of physical objects (e.g., medical instruments or devices) in the medical environment, enabling communication with and/or remote control of a medical device in the environment, and so forth.
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
A61B 3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
A61B 3/10 - Objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions
A61B 3/113 - Objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions for determining or recording eye movement
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
A61B 5/06 - Devices, other than using radiation, for detecting or locating foreign bodies
A61B 5/1171 - Identification of persons based on the shapes or appearances of their bodies or parts thereof
A61B 17/00 - Surgical instruments, devices or methods
A61B 34/20 - Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
A61B 90/00 - Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups, e.g. for luxation treatment or for protecting wound edges
A61B 90/50 - Supports for surgical instruments, e.g. articulated arms
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
G16H 40/67 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
A method of operating an optical system includes identifying a set of angle dependent transmittance levels for light passing through pixels of a segmented dimmer exhibiting viewing angle transmittance variations for application of a same voltage to all pixels of the segmented dimmer. The method also includes determining a set of voltages to apply to pixels of the segmented dimmer. Determining the set of voltages includes using the set of angle dependent transmittance levels. The method includes applying the set of voltages to the pixels of the segmented dimmer to achieve light transmittance through the segmented dimmer corresponding to the set of angle dependent transmittance levels.
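The voltage determination can be pictured as inverting a per-angle transmittance-versus-voltage curve for each pixel. The calibration table and numbers below are made up purely to illustrate the interpolation step.

```python
# Illustrative sketch: given a target transmittance for each pixel at its viewing
# angle, interpolate the required voltage from per-angle transmittance-vs-voltage
# curves. The calibration values below are hypothetical.
import numpy as np

VOLTAGES = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
CURVES = {0: np.array([0.95, 0.70, 0.40, 0.15, 0.05]),     # on-axis pixels
          30: np.array([0.90, 0.60, 0.30, 0.10, 0.03])}    # pixels viewed at 30 degrees

def voltage_for(target_transmittance, viewing_angle_deg):
    curve = CURVES[viewing_angle_deg]
    # np.interp requires increasing x, so interpolate over the reversed curve.
    return float(np.interp(target_transmittance, curve[::-1], VOLTAGES[::-1]))

# Same target transmittance, different voltages depending on viewing angle.
pixel_voltages = [voltage_for(0.5, angle) for angle in (0, 30)]
```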
G09G 3/36 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix by control of light from an independent source using liquid crystals
Apparatuses and methods for displaying a 3-D representation of an object are described. Apparatuses can include a rotatable structure, motor, and multiple light field sub-displays disposed on the rotatable structure. The apparatuses can store a light field image to be displayed, the light field image providing multiple different views of the object at different viewing directions. A processor can drive the motor to rotate the rotatable structure and map the light field image to each of the light field sub-displays based in part on the rotation angle, and illuminate the light field sub-displays based in part on the mapped light field image. The apparatuses can include a display panel configured to be viewed from a fiducial viewing direction, where the display panel is curved out of a plane that is perpendicular to the fiducial viewing direction, and a plurality of light field sub-displays disposed on the display panel.
H04N 13/393 - Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume the volume being generated by a moving, e.g. vibrating or rotating, surface
G02B 30/27 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the autostereoscopic type involving lenticular arrays
G02B 30/54 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels the 3D volume being generated by moving a 2D surface, e.g. by vibrating or rotating the 2D surface
G02B 30/56 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels by projecting aerial or floating images
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
H04N 13/307 - Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using fly-eye lenses, e.g. arrangements of circular lenses
H04N 13/32 - Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using arrays of controllable light sources; Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using moving apertures or moving light sources
An augmented reality system includes a projector assembly and a set of imaging optics optically coupled to the projector assembly. The augmented reality system also includes an eyepiece optically coupled to the set of imaging optics. The eyepiece has a world side and a user side opposite the world side and includes one or more eyepiece waveguides. Each of the one or more eyepiece waveguides includes an incoupling interface and an outcoupling interface operable to output virtual content toward the user side. The augmented reality system further includes an optical notch filter disposed on the world side of the eyepiece.
A method of operating an eyepiece waveguide of an augmented reality system includes projecting virtual content using a projector assembly and diffracting the virtual content into the eyepiece waveguide via a first order diffraction. A first portion of the virtual content is clipped to produce a remaining portion of the virtual content. The method also includes propagating the remaining portion of the virtual content in the eyepiece waveguide, outcoupling the remaining portion of the virtual content out of the eyepiece waveguide, and diffracting the virtual content into the eyepiece waveguide via a second order diffraction. A second portion of the virtual content is clipped to produce a complementary portion. The method further includes propagating the complementary portion of the virtual content in the eyepiece waveguide and outcoupling the complementary portion of the virtual content out of the eyepiece waveguide.
A wearable display device includes waveguide(s) that present virtual image elements as an augmentation to the real-world environment. The display device includes a first extended depth of field (EDOF) refractive lens arranged between the waveguide(s) and the user's eye(s), and a second EDOF refractive lens located outward from the waveguide(s). The first EDOF lens has a (e.g., negative) optical power to alter the depth of the virtual image elements. The second EDOF lens has a substantially equal and opposite (e.g., positive) optical power to that of the first EDOF lens, such that the depth of real-world objects is not altered along with the depth of the virtual image elements. To reduce the weight and/or size of the device, one or both EDOF lenses is a compact lens, e.g., Fresnel lens or flattened periphery lens. The compact lens may be coated and/or embedded in another material to enhance its performance.
G02B 6/10 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings of the optical waveguide type
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
28.
CUSTOMIZED POLYMER/GLASS DIFFRACTIVE WAVEGUIDE STACKS FOR AUGMENTED REALITY/MIXED REALITY APPLICATIONS
A diffractive waveguide stack includes first, second, and third diffractive waveguides for guiding light in first, second, and third visible wavelength ranges, respectively. The first diffractive waveguide includes a first material having a first refractive index at a selected wavelength and a first target refractive index at a midpoint of the first visible wavelength range. The second diffractive waveguide includes a second material having a second refractive index at the selected wavelength and a second target refractive index at a midpoint of the second visible wavelength range. The third diffractive waveguide includes a third material having a third refractive index at the selected wavelength and a third target refractive index at a midpoint of the third visible wavelength range. A difference between any two of the first target refractive index, the second target refractive index, and the third target refractive index is less than 0.005 at the selected wavelength.
Methods and systems for reductions in switching between depth planes of a multi-depth plane display system are disclosed. The display system may be an AR display system configured to provide virtual content on a plurality of depth planes using different wavefront divergence. The system may monitor fixation points based upon the gaze of each of the user's eyes, with each fixation point being a three-dimensional location in the user's field of view. Location information of virtual objects to be presented to the user is obtained, with each virtual object being associated with a depth plane. The depth plane on which a virtual object is to be presented may be modified based upon the fixation point of the user's eyes.
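A switching-reduction policy of this kind can be illustrated with a small hysteresis check on the fixation depth. The plane depth ranges and the hysteresis margin below are invented example values.

```python
# Illustrative depth-plane selection with hysteresis to reduce switching.
# Plane depth ranges (in diopters) and the margin are example values only.
PLANES = [(0.0, 0.66), (0.66, 2.0)]   # (near_diopter, far_diopter) per depth plane

def select_plane(fixation_diopters, current_plane, margin=0.1):
    lo, hi = PLANES[current_plane]
    # Stay on the current plane while fixation remains within its expanded range.
    if lo - margin <= fixation_diopters <= hi + margin:
        return current_plane
    for idx, (lo, hi) in enumerate(PLANES):
        if lo <= fixation_diopters <= hi:
            return idx                 # switch only when fixation clearly leaves the range
    return current_plane
```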
H04N 13/332 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
H04N 13/383 - Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
H04N 13/395 - Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume with depth sampling, i.e. the volume being constructed from a stack or sequence of 2D image planes
30.
WEARABLE SYSTEM WITH HEADSET AND CONTROLLER INSIDE-OUT TRACKING
Wearable systems and methods for operation thereof incorporating headset and controller inside-out tracking are disclosed. A wearable system may include a headset and a controller. The wearable system may cause fiducials of the controller to flash. The wearable system may track a pose of the controller by capturing headset images using a headset camera, identifying the fiducials in the headset images, and tracking the pose of the controller based on the identified fiducials in the headset images and based on a pose of the headset. While tracking the pose of the controller, the wearable system may capture controller images using a controller camera. The wearable system may identify two-dimensional feature points in each controller image and determine three-dimensional map points based on the two-dimensional feature points and the pose of the controller.
Techniques for operating a depth sensor are discussed. A first sequence of operation steps and a second sequence of operation steps can be stored in memory on the depth sensor to define, respectively, a first depth sensing mode of operation and a second depth sensing mode of operation. In response to a first request for depth measurement(s) according to the first depth sensing mode of operation, the depth sensor can operate in the first mode of operation by executing the first sequence of operation steps. In response to a second request for depth measurement(s) according to the second depth sensing mode of operation, and without performing an additional configuration operation, the depth sensor can operate in the second mode of operation by executing the second sequence of operation steps.
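The stored-sequence idea can be pictured as a table of operation steps keyed by mode, executed on request with no intervening configuration step. The step names and executor below are placeholders for illustration.

```python
# Sketch of per-mode operation sequences stored ahead of time and executed on
# request without reconfiguration. Step names and the executor are placeholders.
SEQUENCES = {
    "short_range": ["enable_emitter_low", "expose_1ms", "read_array", "disable_emitter"],
    "long_range":  ["enable_emitter_high", "expose_4ms", "read_array", "disable_emitter"],
}

def handle_request(mode, execute_step):
    # No additional configuration: the pre-stored sequence fully defines the mode.
    for step in SEQUENCES[mode]:
        execute_step(step)

# Example: two requests in different modes served back-to-back.
handle_request("short_range", execute_step=print)
handle_request("long_range", execute_step=print)
```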
H04N 23/667 - Camera operation mode switching, e.g. between still and video, sport and normal or high and low resolution modes
G01S 17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
H04N 13/139 - Format conversion, e.g. of frame-rate or size
H04N 23/959 - Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics
32.
ATTENUATION OF LIGHT TRANSMISSION ARTIFACTS IN WEARABLE DISPLAYS
A wearable display system includes an eyepiece stack having a world side and a user side opposite the world side, wherein during use a user positioned on the user side views displayed images delivered by the system via the eyepiece stack which augment the user's view of the user's environment. The wearable display system also includes an angularly selective film arranged on the world side of the eyepiece stack. The angularly selective film includes a polarization adjusting film arranged between a pair of linear polarizers. The linear polarizers and the polarization adjusting film significantly reduce transmission of visible light incident on the angularly selective film at large angles of incidence without significantly reducing transmission of light incident on the angularly selective film at small angles of incidence.
G02F 1/1335 - Structural association of cells with optical devices, e.g. polarisers or reflectors
G02F 1/137 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering
33.
DISPLAY SYSTEMS AND METHODS FOR DETERMINING REGISTRATION BETWEEN A DISPLAY AND A USER'S EYES
A display system may include a head-mounted display (HMD) for rendering a three-dimensional virtual object which appears to be located in an ambient environment of a user of the display. One or more eyes of the user may not be in desired positions, relative to the HMD, to receive, or register, image information outputted by the HMD and/or to view an external environment. For example, the HMD-to-eye alignment may vary for different users and/or may change over time (e.g., as the HMD is displaced). The display system may determine a relative position or alignment between the HMD and the user's eyes. Based on the relative positions, the wearable device may determine if it is properly fitted to the user, may provide feedback on the quality of the fit to the user, and/or may take actions to reduce or minimize effects of any misalignment.
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
A61B 3/11 - Objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions for measuring interpupillary distance or diameter of pupils
A61B 3/113 - Objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions for determining or recording eye movement
G02B 30/00 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
G02B 30/40 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images giving the observer of a single two-dimensional [2D] image a perception of depth
G06F 1/16 - Constructional details or arrangements
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
G06V 10/42 - Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06V 10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
G06V 40/18 - Eye characteristics, e.g. of the iris
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/383 - Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
34.
DIFFRACTIVE OPTICAL ELEMENTS WITH MITIGATION OF REBOUNCE-INDUCED LIGHT LOSS AND RELATED SYSTEMS AND METHODS
Display devices include waveguides with in-coupling optical elements that mitigate re-bounce of in-coupled light to improve in-coupling efficiency and/or uniformity. A waveguide receives light from a light source and includes an in-coupling optical element that in-couples the received light to propagate by total internal reflection within the waveguide. The in-coupled light may undergo re-bounce, in which the light reflects off a waveguide surface and, after the reflection, strikes the in-coupling optical element. Upon striking the in-coupling optical element, the light may be partially absorbed and/or out-coupled by the optical element, thereby reducing the amount of in-coupled light propagating through the waveguide. The in-coupling optical element can be truncated or have reduced diffraction efficiency along the propagation direction to reduce the occurrence of light loss due to re-bounce of in-coupled light, resulting in less in-coupled light being prematurely out-coupled and/or absorbed during subsequent interactions with the in-coupling optical element.
F21V 8/00 - Use of light guides, e.g. fibre optic devices, in lighting devices or systems
G02B 6/10 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings of the optical waveguide type
Disclosed herein are systems and methods for displays, such as for a head wearable device. An example display can include an infrared illumination layer, the infrared illumination layer including a substrate, one or more LEDs disposed on a first surface of the substrate, and a first encapsulation layer disposed on the first surface of the substrate, where the encapsulation layer can include a nano-patterned surface. In some examples, the nano-patterned surface can be configured to improve a visible light transmittance of the illumination layer. In one or more examples, embodiments disclosed herein may provide a robust illumination layer that can reduce the haze associated with an illumination layer.
A display system can include a head-mounted display configured to project light to an eye of a user to display virtual image content at different amounts of divergence and collimation. The display system can include an inward-facing imaging system possibly comprising a plurality of cameras that image the user's eye and glints thereon, and processing electronics that are in communication with the inward-facing imaging system and that are configured to obtain an estimate of a center of rotation of the user's eye using cornea data derived from the glint images. The display system may render virtual image content with a render camera positioned at the determined position of the center of rotation of said eye.
An eyepiece waveguide for an augmented reality display system may include an optically transmissive substrate, an input coupling grating (ICG) region, a multi-directional pupil expander (MPE) region, and an exit pupil expander (EPE) region. The ICG region may receive an input beam of light and couple the input beam into the substrate as a guided beam. The MPE region may include a plurality of diffractive features which exhibit periodicity along at least a first axis of periodicity and a second axis of periodicity. The MPE region may be positioned to receive the guided beam from the ICG region and to diffract it in a plurality of directions to create a plurality of diffracted beams. The EPE region may overlap the MPE region and may out couple one or more of the diffracted beams from the optically transmissive substrate as output beams.
Head-mounted display systems with power saving functionality are disclosed. The systems can include a frame configured to be supported on the head of the user. The systems can also include a head-mounted display disposed on the frame, one or more sensors, and processing electronics in communication with the display and the one or more sensors. In some implementations, the processing electronics can be configured to cause the system to reduce power of one or more components in response, at least in part, to a determination that the frame is in a certain position (e.g., upside-down or on top of the head of the user). In some implementations, the processing electronics can be configured to cause the system to reduce power of one or more components in response, at least in part, to a determination that the frame has been stationary for at least a threshold period of time.
Wearable systems and methods for operation thereof incorporating headset and controller localization using headset cameras and controller fiducials are disclosed. A wearable system may include a headset and a controller. The wearable system may alternate between performing headset tracking and performing controller tracking by repeatedly capturing images using a headset camera of the headset during headset tracking frames and controller tracking frames. The wearable system may cause the headset camera to capture a first exposure image having an exposure above a threshold and cause the headset camera to capture a second exposure image having an exposure below the threshold. The wearable system may determine a fiducial interval during which fiducials of the controller are to flash at a fiducial frequency and a fiducial period. The wearable system may cause the fiducials to flash during the fiducial interval in accordance with the fiducial frequency and the fiducial period.
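One way to picture the interleaving is a frame scheduler that alternates headset-tracking and controller-tracking captures, lowers the exposure on controller-tracking frames, and flashes the fiducials only during those frames. The camera and fiducial interfaces and all timing values below are invented for illustration.

```python
# Illustrative frame scheduler: alternate headset-tracking (higher-exposure) and
# controller-tracking (lower-exposure) captures, flashing the controller fiducials
# during controller-tracking frames. Interfaces and timing values are invented.
def run_tracking(camera, fiducials, frames=8,
                 high_exposure_ms=8.0, low_exposure_ms=1.0,
                 fiducial_frequency_hz=60.0, fiducial_period_s=0.005):
    images = []
    for frame_index in range(frames):
        if frame_index % 2 == 0:                       # headset tracking frame
            images.append(camera.capture(exposure_ms=high_exposure_ms))
        else:                                          # controller tracking frame
            fiducials.flash(frequency_hz=fiducial_frequency_hz,
                            period_s=fiducial_period_s)
            images.append(camera.capture(exposure_ms=low_exposure_ms))
    return images
```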
A head mounted display system can process images by assessing relative motion between the head mounted display and one or more features in a user's environment. The assessment of relative motion can include determining whether the head mounted display has moved, is moving and/or is expected to move with respect to one or more features in the environment. Additionally or alternatively, the assessment can include determining whether one or more features in the environment have moved, are moving and/or are expected to move relative to the head mounted display. The image processing can further include determining one or more virtual image content locations in the environment that correspond to a location where renderable virtual image content appears to a user when the location appears in the display and comparing the one or more virtual image content locations in the environment with a viewing zone.
A method for measuring performance of a head-mounted display module, the method including arranging the head-mounted display module relative to a plenoptic camera assembly so that an exit pupil of the head-mounted display module coincides with a pupil of the plenoptic camera assembly; emitting light from the head-mounted display module while the head-mounted display module is arranged relative to the plenoptic camera assembly; filtering the light at the exit pupil of the head-mounted display module; acquiring, with the plenoptic camera assembly, one or more light field images projected from the head-mounted display module with the filtered light; and determining information about the performance of the head-mounted display module based on the acquired one or more light field images.
Architectures are provided for selectively outputting light for forming images, the light having different wavelengths and being outputted with low levels of crosstalk. In some embodiments, light is incoupled into a waveguide and deflected to propagate in different directions, depending on wavelength. The incoupled light is then outcoupled by outcoupling optical elements that outcouple light based on the direction of propagation of the light. In some other embodiments, color filters are between a waveguide and outcoupling elements. The color filters limit the wavelengths of light that interact with and are outcoupled by the outcoupling elements. In yet other embodiments, a different waveguide is provided for each range of wavelengths to be outputted. Incoupling optical elements selectively incouple light of the appropriate range of wavelengths into a corresponding waveguide, from which the light is outcoupled.
Examples of wearable devices that can present to a user of the display device an audible or visual representation of an audio file comprising a plurality of stem tracks that represent different audio content of the audio file are described. Systems and methods are described that determine the pose of the user; generate, based on the pose of the user, an audio mix of at least one of the plurality of stem tracks of the audio file; generate, based on the pose of the user and the audio mix, a visualization of the audio mix; communicate an audio signal representative of the audio mix to the speaker; and communicate a visual signal representative of the visualization of the audio mix to the display.
Systems and methods for fusing multiple types of sensor data to determine a heart rate of a user are disclosed. An accelerometer obtains accelerometer data associated with the user over a time period, and a gyroscope obtains gyroscope data associated with the user over the time period. Also, a camera obtains a plurality of images of the user's eye over the time period. The plurality of images is analyzed to generate image data of the user's eyelid over the time period. The accelerometer data, the gyroscope data, and the image data are fused into fused sensor data, and a heart rate of the user is determined from the fused sensor data.
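The fusion step can be illustrated by estimating a dominant frequency in a physiological band from each sensor stream and combining the estimates. The 0.7-3.0 Hz band, the FFT-peak estimator, and the equal weights are assumptions made for the sketch, not details from the disclosure.

```python
# Illustrative fusion: estimate a heart-rate frequency from each sensor stream
# (accelerometer, gyroscope, eyelid image data) and combine the estimates.
# The band limits, FFT-peak estimator, and weights are assumptions.
import numpy as np

def dominant_bpm(signal, sample_rate_hz, band=(0.7, 3.0)):
    signal = np.asarray(signal, dtype=float) - np.mean(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
    spectrum = np.abs(np.fft.rfft(signal))
    mask = (freqs >= band[0]) & (freqs <= band[1])      # physiological band only
    return 60.0 * freqs[mask][np.argmax(spectrum[mask])]

def fused_heart_rate(accel, gyro, eyelid, sample_rate_hz, weights=(1/3, 1/3, 1/3)):
    estimates = [dominant_bpm(s, sample_rate_hz) for s in (accel, gyro, eyelid)]
    return float(np.dot(weights, estimates))            # fused heart-rate estimate
```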
A device for viewing a projected image includes an input coupling grating operable to receive light related to the projected image from a light source and an expansion grating having a first grating structure characterized by a first set of grating parameters varying in one or more dimensions. The expansion grating structure is operable to receive light from the input coupling grating and to multiply the light related to the projected image. The device also includes an output coupling grating having a second grating structure characterized by a second set of grating parameters and operable to output the multiplied light in a predetermined direction.
G02B 6/293 - Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals with wavelength selective means
G02B 6/34 - Optical coupling means utilising prism or grating
G02B 7/00 - Mountings, adjusting means, or light-tight connections, for optical elements
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/147 - Digital output to display device using display panels
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
H04N 9/31 - Projection devices for colour picture display
H05K 7/20 - Modifications to facilitate cooling, ventilating, or heating
48.
METHOD AND SYSTEM FOR DETECTING FIBER POSITION IN A FIBER SCANNING PROJECTOR
A method of measuring a position of a scanning cantilever includes providing a housing including an actuation region, a position measurement region including an aperture, and an oscillation region. The method also includes providing a drive signal to an actuator disposed in the actuation region, oscillating the scanning cantilever in response to the drive signal, generating a first light beam using a first optical source, directing the first light beam toward the aperture, detecting at least a portion of the first light beam using a first photodetector, generating a second light beam using a second optical source, directing the second light beam toward the aperture, detecting at least a portion of the second light beam using a second photodetector, and determining the position of the scanning cantilever based on the detected portion of the first light beam and the detected portion of the second light beam.
A wearable display system includes a mixed reality display for presenting a virtual image to a user, an outward-facing imaging system configured to image an environment of the user, and a hardware processor operably coupled to the mixed reality display and to the imaging system. The hardware processor is programmed to generate a virtual remote associated with a parent device, render the virtual remote and a virtual control element of the virtual remote on the mixed reality display, determine when the user of the wearable system interacts with the virtual control element of the virtual remote, and perform certain functions in response to user interaction with the virtual control element. These functions may include causing the virtual control element to move on the mixed reality display and, when movement of the virtual control element surpasses a threshold condition, generating a focus indicator for the virtual control element.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
Disclosed herein are systems and methods for presenting audio content in mixed reality environments. A method may include receiving a first input from an application program; in response to receiving the first input, receiving, via a first service, an encoded audio stream; generating, via the first service, a decoded audio stream based on the encoded audio stream; receiving, via a second service, the decoded audio stream; receiving a second input from one or more sensors of a wearable head device; receiving, via the second service, a third input from the application program, wherein the third input corresponds to a position of one or more virtual speakers; generating, via the second service, a spatialized audio stream based on the decoded audio stream, the second input, and the third input; and presenting, via one or more speakers of the wearable head device, the spatialized audio stream.
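The two-service split can be sketched as a decode service that feeds a spatialization service, with the latter also consuming head-pose sensor input and the application's virtual-speaker positions. The class and method names, the placeholder decode, and the distance-based gain below are invented for illustration.

```python
# Sketch of the two-service audio path: a decode service produces a decoded
# stream, and a spatialization service combines that stream with sensor input
# (head pose) and virtual speaker positions. All names are invented.
class DecodeService:
    def decode(self, encoded_stream, codec):
        # Placeholder decode: a real service would invoke the named codec.
        return [float(b) for b in encoded_stream]

class SpatializationService:
    def spatialize(self, decoded_stream, head_pose, speaker_positions):
        # Placeholder spatialization: attenuate by distance to each virtual speaker.
        gains = [1.0 / (1.0 + abs(p - head_pose)) for p in speaker_positions]
        gain = sum(gains) / len(gains)
        return [sample * gain for sample in decoded_stream]

decoded = DecodeService().decode(b"\x10\x20\x30", codec="opus")
output_stream = SpatializationService().spatialize(decoded, head_pose=0.5,
                                                   speaker_positions=[0.0, 2.0])
```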
A virtual image generation system for use by an end user comprises memory, a display subsystem, an object selection device configured for receiving input from the end user and persistently selecting at least one object in response to the end user input, and a control subsystem configured for rendering a plurality of image frames of a three-dimensional scene, conveying the image frames to the display subsystem, generating audio data originating from the at least one selected object, and for storing the audio data within the memory.
A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
A63F 13/424 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
A63F 13/5255 - Changing parameters of virtual cameras according to dedicated instructions from a player, e.g. using a secondary joystick to rotate the camera around a player's character
A63F 13/5372 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for tagging characters, objects or locations in the game scene, e.g. displaying a circle under the character controlled by the player
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04R 1/32 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
H04R 1/40 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
A user may interact with and view virtual elements such as avatars and objects and/or real world elements in three-dimensional space in an augmented reality (AR) session. The system may allow one or more spectators to view, from a stationary or dynamic camera, a third person view of the user's AR session. The third person view may be synchronized with the user view, and the virtual elements of the user view may be composited onto the third person view.
F21V 13/04 - Combinations of only two kinds of elements the elements being reflectors and refractors
F21V 8/00 - Use of light guides, e.g. fibre optic devices, in lighting devices or systems
G02B 6/12 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings of the optical waveguide type of the integrated circuit kind
G02B 23/06 - Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices involving prisms or mirrors having a focusing action, e.g. parabolic mirror
G02B 30/50 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels
H04N 13/315 - Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using parallax barriers the parallax barriers being time-variant
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
A cross reality system enables any of multiple devices to efficiently and accurately access previously stored maps and render virtual content specified in relation to those maps. Both stored maps and tracking maps used by portable devices may have wireless fingerprints associated with them. The portable devices may maintain wireless fingerprints based on wireless scans performed repetitively, based on one or more trigger conditions, as the devices move around the physical world. The wireless information obtained from these scans may be used to create or update wireless fingerprints associated with locations in a tracking map on the devices. One or more of these wireless fingerprints may be used when a previously stored map is to be selected based on its coverage of an area in which the portable device is operating. Maintaining wireless fingerprints in this way provides a reliable and low latency mechanism for performing map-related operations.
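A rough illustration of fingerprint-based map selection, assuming a wireless fingerprint is represented as a mapping from access-point identifier (BSSID) to received signal strength (RSSI, in dBm); the similarity score and map structures below are placeholders, not the system's actual matching criteria.

from typing import Dict

Fingerprint = Dict[str, float]  # BSSID -> RSSI in dBm

def fingerprint_similarity(a: Fingerprint, b: Fingerprint) -> float:
    """Higher is more similar: rewards shared access points with close RSSI values."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    rssi_gap = sum(abs(a[ap] - b[ap]) for ap in shared) / len(shared)
    return len(shared) / (1.0 + rssi_gap)

def select_stored_map(current_scan: Fingerprint, stored_maps: Dict[str, Fingerprint]) -> str:
    """Pick the previously stored map whose fingerprint best covers the device's current area."""
    return max(stored_maps, key=lambda name: fingerprint_similarity(current_scan, stored_maps[name]))

tracking_scan = {"ap:01": -48.0, "ap:02": -60.0, "ap:03": -72.0}   # from a repetitive wireless scan
stored = {
    "office_map": {"ap:01": -50.0, "ap:02": -58.0, "ap:09": -80.0},
    "home_map": {"ap:07": -45.0, "ap:08": -55.0},
}
print(select_stored_map(tracking_scan, stored))  # -> "office_map"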
H04W 24/02 - Arrangements for optimising operational condition
G01S 5/02 - Position-fixing by co-ordinating two or more direction or position-line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04W 4/02 - Services making use of location information
H04W 4/029 - Location-based management or tracking services
A display system aligns the location of its exit pupil with the location of a viewer's pupil by changing the location of the portion of a light source that outputs light. The light source may include an array of pixels that output light, thereby allowing an image to be displayed on the light source. The display system includes a camera that captures image(s) of the eye and negatives of the eye image(s) are displayed by the light source. In the negative image, the dark pupil of the eye is a bright spot which, when displayed by the light source, defines the exit pupil of the display system, such that image content may be presented by modulating the light source. The location of the pupil of the eye may be tracked by capturing the images of the eye.
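Purely as an illustration of the negative-image idea: inverting a grayscale eye image turns the dark pupil into the brightest region, whose centroid can then select the portion of the light source that emits. The synthetic image and centroid step are assumptions, not the disclosed tracking pipeline.

import numpy as np

def make_synthetic_eye(h=64, w=64, pupil_center=(40, 22), pupil_radius=6):
    yy, xx = np.mgrid[0:h, 0:w]
    img = np.full((h, w), 200.0)                                     # bright iris/sclera background
    mask = (yy - pupil_center[0]) ** 2 + (xx - pupil_center[1]) ** 2 <= pupil_radius ** 2
    img[mask] = 20.0                                                 # dark pupil
    return img

def exit_pupil_location(eye_image):
    negative = eye_image.max() - eye_image                           # dark pupil -> bright spot
    bright = negative > 0.5 * negative.max()                         # isolate the bright spot
    ys, xs = np.nonzero(bright)
    return negative, (ys.mean(), xs.mean())                          # where the light source should emit

eye = make_synthetic_eye()
negative, spot = exit_pupil_location(eye)
print(spot)  # ~ (40.0, 22.0), tracking the pupil location in the captured eye image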
A virtual reality (VR) and/or augmented reality (AR) display system is configured to control a display using control information that is embedded in or otherwise included with imagery data to be presented through the display. The control information can indicate depth plane(s) and/or color plane(s) to be used to present the imagery data, depth plane(s) and/or color plane(s) to be activated or inactivated, shift(s) of at least a portion of the imagery data (e.g., one or more pixels) laterally within a depth plane and/or longitudinally between depth planes, and/or other suitable controls.
G06T 7/579 - Depth or shape recovery from multiple images from motion
G06T 19/00 - Manipulating 3D models or images for computer graphics
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/395 - Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume with depth sampling, i.e. the volume being constructed from a stack or sequence of 2D image planes
Disclosed herein are systems and methods for distributed computing and/or networking for mixed reality systems. A method may include capturing an image via a camera of a head-wearable device. Inertial data may be captured via an inertial measurement unit of the head-wearable device. A position of the head-wearable device can be estimated based on the image and the inertial data via one or more processors of the head-wearable device. The image can be transmitted to a remote server. A neural network can be trained based on the image via the remote server. A trained neural network can be transmitted to the head-wearable device.
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
G06V 40/18 - Eye characteristics, e.g. of the iris
A waveguide stack having color-selective regions on one or more waveguides. The color-selective regions are configured to absorb incident light of a first wavelength range in such a way as to reduce or prevent the incident light of the first wavelength range from coupling into a waveguide configured to transmit a light of a second wavelength range.
Systems and methods of generating a three-dimensional (3D) reconstruction of a scene or environment surrounding a user of a spatial computing system, such as a virtual reality, augmented reality or mixed reality system, using only multiview images, without the need for depth sensors or depth data from sensors. Features are extracted from a sequence of frames of RGB images and back-projected, using known camera intrinsics and extrinsics, into a 3D voxel volume, wherein each image pixel is mapped to a ray in the voxel volume. The back-projected features are fused into the 3D voxel volume. The 3D voxel volume is passed through a 3D convolutional neural network to refine the features and regress truncated signed distance function values at each voxel of the 3D voxel volume.
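A rough sketch of the back-projection step only, assuming pinhole intrinsics and a camera-from-world extrinsic matrix; feature fusion across frames and the 3D CNN that regresses TSDF values are omitted, and the conventions below are illustrative.

import numpy as np

def backproject_features(feat, K, cam_from_world, voxel_origin, voxel_size, dims):
    """Accumulate per-pixel features into an (X, Y, Z, C) voxel volume by projecting voxel centers."""
    H, W, C = feat.shape
    volume = np.zeros((*dims, C), dtype=np.float32)
    weight = np.zeros(dims, dtype=np.float32)
    xs, ys, zs = np.meshgrid(*[np.arange(d) for d in dims], indexing="ij")
    centers = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3) * voxel_size + voxel_origin
    cam_pts = (cam_from_world[:3, :3] @ centers.T + cam_from_world[:3, 3:4]).T
    in_front = cam_pts[:, 2] > 1e-6
    uv = (K @ cam_pts.T).T
    uv = uv[:, :2] / np.maximum(uv[:, 2:3], 1e-6)
    u, v = np.round(uv[:, 0]).astype(int), np.round(uv[:, 1]).astype(int)
    valid = in_front & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    idx = np.flatnonzero(valid)
    flat_vol, flat_w = volume.reshape(-1, C), weight.reshape(-1)
    flat_vol[idx] += feat[v[idx], u[idx]]                 # every voxel on a pixel's ray gets that feature
    flat_w[idx] += 1.0                                    # visibility weight, used when averaging frames
    return volume, weight

K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 32.0], [0.0, 0.0, 1.0]])
pose = np.eye(4)                                          # camera at the world origin, looking down +Z
feats = np.random.rand(64, 64, 8).astype(np.float32)      # per-pixel features from a 2D backbone
vol, w = backproject_features(feats, K, pose, voxel_origin=np.array([-1.0, -1.0, 0.5]), voxel_size=0.05, dims=(40, 40, 40))
print(vol.shape, int(w.sum()))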
Images perceived to be substantially full color or multi-colored may be formed using component color images that are distributed in unequal numbers across a plurality of depth planes. The distribution of component color images across depth planes may vary based on color. In some embodiments, a display system includes a stack of waveguides that each output light of a particular color, with some colors having fewer numbers of associated waveguides than other colors. The waveguide stack may include multiple pluralities (e.g., first and second pluralities) of waveguides, each configured to produce an image by outputting light corresponding to a particular color. The total number of waveguides in the second plurality of waveguides may be less than the total number of waveguides in the first plurality of waveguides.
An eyepiece includes an optical waveguide, a transmissive input coupler at a first end of the optical waveguide, an output coupler at a second end of the optical waveguide, and a polymeric color absorbing region along a portion of the optical waveguide between the transmissive input coupler and the output coupler. The transmissive input coupler is configured to couple incident visible light to the optical waveguide, and the color-absorbing region is configured to absorb a component of the visible light as the visible light propagates through the optical waveguide.
G02B 1/118 - Anti-reflection coatings having sub-optical wavelength surface structures designed to provide an enhanced transmittance, e.g. moth-eye structures
Systems and methods of presenting an output audio signal to a listener located at a first location in a virtual environment are disclosed. According to embodiments of a method, an input audio signal is received. A first intermediate audio signal corresponding to the input audio signal is determined, based on a location of the sound source in the virtual environment, and the first intermediate audio signal is associated with a first bus. A second intermediate audio signal is determined. The second intermediate audio signal corresponds to a reverberation of the input audio signal in the virtual environment. The second intermediate audio signal is determined based on a location of the sound source, and further based on an acoustic property of the virtual environment. The second intermediate audio signal is associated with a second bus. The output audio signal is presented to the listener via the first and second buses.
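A much-simplified two-bus render for illustration: the first bus applies distance attenuation to the direct sound, and the second convolves the source with a synthetic exponentially decaying room response derived from an RT60 value. The gains, noise-based impulse response, and mixing are assumptions, not the disclosed renderer.

import numpy as np

def render_output(dry, src_pos, listener_pos, rt60_s, fs=48000):
    dist = np.linalg.norm(np.asarray(src_pos) - np.asarray(listener_pos))
    direct_bus = dry / max(dist, 1.0)                          # bus 1: location-dependent direct path
    t = np.arange(int(rt60_s * fs)) / fs
    impulse = np.random.randn(len(t)) * np.exp(-6.91 * t / rt60_s)  # -60 dB after rt60_s (ln 1000 ~ 6.91)
    reverb_bus = np.convolve(dry, impulse)[: len(dry)] * 0.05  # bus 2: acoustic property of the environment
    return direct_bus + reverb_bus                             # both buses feed the presented output

fs = 48000
dry = np.sin(2 * np.pi * 220 * np.arange(fs) / fs)             # one second of source signal
out = render_output(dry, src_pos=[2.0, 0.0, 1.0], listener_pos=[0.0, 0.0, 0.0], rt60_s=0.6)
print(out.shape)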
G10K 15/10 - Arrangements for producing a reverberation or echo sound using time-delay networks comprising electromechanical or electro-acoustic devices
H04R 3/04 - Circuits for transducers for correcting frequency response
H04R 3/12 - Circuits for transducers for distributing signals to two or more loudspeakers
H04R 5/033 - Headphones for stereophonic communication
A method of operating a virtual image generation system comprises allowing an end user to interact with a three-dimensional environment comprising at least one virtual object, presenting a stimulus to the end user in the context of the three-dimensional environment, sensing at least one biometric parameter of the end user in response to the presentation of the stimulus to the end user, generating biometric data for each of the sensed biometric parameter(s), determining if the end user is in at least one specific emotional state based on the biometric data for each of the sensed biometric parameter(s), and performing an action discernible to the end user to facilitate a current objective based at least partially on whether the end user is determined to be in the specific emotional state(s).
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A63F 13/21 - Input arrangements for video game devices characterised by their sensors, purposes or types
A63F 13/212 - Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
Systems and methods for enhanced depth determination using projection spots. An example method includes obtaining images of a real-world object, the images being obtained from image sensors positioned about the real-world object, and the images depicting projection spots projected onto the real-world object via projectors positioned about the real-world object. A projection spot map is accessed, the projection spot map including information indicative of real-world locations of projection spots based on locations of the projection spots in the obtained images. Location information is assigned to the projection spots based on the projection spot map. Generation of a three-dimensional representation of the real-world object is caused.
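Illustration only: a projection spot map can be thought of as a lookup from (projector, spot index) to a calibrated real-world location, used to assign 3D coordinates to spots detected in the captured images. The keys and data layout below are assumptions.

from typing import Dict, Tuple

SpotKey = Tuple[str, int]                                   # (projector id, spot index)
spot_map: Dict[SpotKey, Tuple[float, float, float]] = {
    ("projector_A", 0): (0.10, 0.25, 1.40),
    ("projector_A", 1): (0.12, 0.27, 1.38),
    ("projector_B", 0): (-0.05, 0.30, 1.52),
}

def assign_locations(detections):
    """detections: (projector id, spot index, pixel coords) tuples found in the obtained images."""
    located = []
    for projector, spot_idx, pixel in detections:
        world = spot_map.get((projector, spot_idx))
        if world is not None:
            located.append({"pixel": pixel, "world": world})   # feeds the 3D representation step
    return located

print(assign_locations([("projector_A", 1, (312, 201)), ("projector_B", 0, (87, 455))]))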
A method for determining a focal point depth of a user of a three-dimensional (“3D”) display device includes tracking a first gaze path of the user. The method also includes analyzing 3D data to identify one or more virtual objects along the first gaze path of the user. The method further includes, when only one virtual object intersects the first gaze path of the user, identifying a depth of the only one virtual object as the focal point depth of the user.
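A minimal sketch of the single-intersection case, assuming virtual objects are approximated as spheres and the gaze path as a ray; the ray-sphere test and object representation are illustrative only.

import numpy as np

def ray_sphere_depth(origin, direction, center, radius):
    """Distance along the gaze ray to the sphere, or None if the ray misses it."""
    oc = np.asarray(center, dtype=float) - np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    t_mid = oc @ d
    closest_sq = oc @ oc - t_mid ** 2
    if t_mid < 0 or closest_sq > radius ** 2:
        return None
    return t_mid - np.sqrt(radius ** 2 - closest_sq)

def focal_point_depth(gaze_origin, gaze_dir, objects):
    hits = [ray_sphere_depth(gaze_origin, gaze_dir, c, r) for c, r in objects]
    hits = [h for h in hits if h is not None]
    return hits[0] if len(hits) == 1 else None              # unambiguous only when exactly one object is hit

objects = [((0.0, 0.0, 2.0), 0.3), ((1.5, 0.0, 4.0), 0.3)]  # (center, radius) pairs from the 3D data
print(focal_point_depth((0, 0, 0), (0, 0, 1), objects))     # ~1.7: depth of the single intersected object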
Enhanced eye-tracking techniques for augmented or virtual reality display systems. An example method includes obtaining an image of an eye of a user of a wearable system, the image depicting glints on the eye caused by respective light emitters, wherein the image is a low dynamic range (LDR) image; generating a high dynamic range (HDR) image via computation of a forward pass of a machine learning model using the image; determining location information associated with the glints as depicted in the HDR image, wherein the location information is usable to inform an eye pose of the eye.
Devices are described for high accuracy displacement of tools. In particular, embodiments provide a device for adjusting a position of a tool. The device includes a threaded shaft having a first end and a second end and a shaft axis extending from the first end to the second end, a motor that actuates the threaded shaft to move in a direction of the shaft axis. In some examples, the motor is operatively coupled to the threaded shaft. The device includes a carriage coupled to the camera, and a bearing assembly coupled to the threaded shaft and the carriage. In some examples, the bearing assembly permits a movement of the carriage with respect to the threaded shaft. The movement of the carriage allows the position of the camera to be adjusted.
Systems and methods are provided for interpolation of disparate inputs. A radial basis function neural network (RBFNN) may be used to interpolate the pose of a digital character. Input parameters to the RBFNN may be separated by data type (e.g. angular vs. linear) and manipulated within the RBFNN by distance functions specific to the data type (e.g. use an angular distance function for the angular input data). A weight may be applied to each distance to compensate for input data representing different variables (e.g. clavicle vs. shoulder). The output parameters of the RBFNN may be a set of independent values, which may be combined into combination values (e.g. representing x, y, z, w angular value in SO(3) space).
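A compact sketch of type-aware RBF interpolation: each input channel uses a distance suited to its data type (angular vs. linear) plus a per-channel weight, and target poses are blended with Gaussian kernel weights. This is a plain interpolator for illustration, not the production rig solver, and the sample data are invented.

import numpy as np

def angular_dist(a, b):
    """Shortest distance between two angles in radians."""
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

def linear_dist(a, b):
    return np.abs(a - b)

def combined_dist(x, y, dist_fns, channel_weights):
    return sum(w * fn(xi, yi) for fn, w, xi, yi in zip(dist_fns, channel_weights, x, y))

def rbf_interpolate(query, samples, targets, dist_fns, channel_weights, sigma=0.5):
    d = np.array([combined_dist(query, s, dist_fns, channel_weights) for s in samples])
    w = np.exp(-(d ** 2) / (2 * sigma ** 2))                 # Gaussian radial basis weights
    w /= w.sum()
    return w @ np.asarray(targets)                           # independent output values, combined downstream

# Inputs: (clavicle angle [rad], shoulder angle [rad], hand height [m]); outputs: two pose values.
samples = [(0.0, 0.1, 0.2), (1.2, 0.9, 0.5), (2.5, 1.6, 0.8)]
targets = [(0.0, 1.0), (0.4, 0.6), (1.0, 0.0)]
dist_fns = (angular_dist, angular_dist, linear_dist)         # distance function matched to each data type
weights = (0.5, 1.0, 1.0)                                    # e.g., de-emphasize the clavicle channel
print(rbf_interpolate((1.0, 0.8, 0.45), samples, targets, dist_fns, weights))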
A method of presenting an audio signal to a user of a mixed reality environment is disclosed, the method comprising the steps of detecting a first audio signal in the mixed reality environment, where the first audio signal is a real audio signal; identifying a virtual object intersected by the first audio signal in the mixed reality environment; identifying a listener coordinate associated with the user; determining, using the virtual object and the listener coordinate, a transfer function; applying the transfer function to the first audio signal to produce a second audio signal; and presenting, to the user, the second audio signal.
Examples of the disclosure describe systems and methods for estimating acoustic properties of an environment. In an example method, a first audio signal is received via a microphone of a wearable head device. An envelope of the first audio signal is determined, and a first reverberation time is estimated based on the envelope of the first audio signal. A difference between the first reverberation time and a second reverberation time is determined. A change in the environment is determined based on the difference between the first reverberation time and the second reverberation time. A second audio signal is presented via a speaker of a wearable head device, wherein the second audio signal is based on the second reverberation time.
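An envelope-based RT60 sketch under simple assumptions: smooth the signal magnitude into an envelope, fit a line to its decay in dB, and extrapolate to -60 dB; the decay-segment selection and the 0.1 s change threshold are placeholders, not the disclosed estimator.

import numpy as np

def estimate_rt60(signal, fs):
    envelope = np.abs(signal)
    win = max(1, int(0.010 * fs))                             # 10 ms moving average
    envelope = np.convolve(envelope, np.ones(win) / win, mode="same")
    level_db = 20 * np.log10(np.maximum(envelope, 1e-12))
    decay = level_db[np.argmax(level_db):]                    # analyze from the peak onward
    t = np.arange(len(decay)) / fs
    slope, _ = np.polyfit(t, decay, 1)                        # dB per second (negative for a decay)
    return -60.0 / slope if slope < 0 else np.inf

fs = 16000
t = np.arange(int(0.8 * fs)) / fs
impulse_like = np.random.randn(len(t)) * np.exp(-6.91 * t / 0.4)   # synthetic decay with RT60 ~ 0.4 s
rt_now = estimate_rt60(impulse_like, fs)
rt_previous = 0.7                                             # e.g., from a stored room profile
if abs(rt_now - rt_previous) > 0.1:                           # assumed threshold for "environment changed"
    print(f"environment changed: RT60 {rt_previous:.2f} s -> {rt_now:.2f} s")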
An augmented reality device includes a projector, projector optics optically coupled to the projector, and a substrate structure including a substrate having an incident surface and an opposing exit surface and a first variable thickness film coupled to the incident surface. The substrate structure can also include a first combined pupil expander coupled to the first variable thickness film, a second variable thickness film coupled to the opposing exit surface, an incoupling grating coupled to the opposing exit surface, and a second combined pupil expander coupled to the opposing exit surface.
G02B 6/13 - Integrated optical circuits characterised by the manufacturing method
F21V 8/00 - Use of light guides, e.g. fibre optic devices, in lighting devices or systems
G02B 6/12 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings of the optical waveguide type of the integrated circuit kind
G02B 6/122 - Basic optical elements, e.g. light-guiding paths
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
71.
APPARATUS FOR OPTICAL SEE-THROUGH HEAD MOUNTED DISPLAY WITH MUTUAL OCCLUSION AND OPAQUENESS CONTROL
The present invention comprises a compact optical see-through head-mounted display capable of combining a see-through image path with a virtual image path such that the opaqueness of the see-through image path can be modulated and the virtual image occludes parts of the see-through image and vice versa.
G02B 27/14 - Beam splitting or combining systems operating by reflection only
G02B 27/28 - Optical systems or apparatus not provided for by any of the groups , for polarising
G03B 37/02 - Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe with scanning movement of lens or camera
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04N 23/45 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
H04N 23/698 - Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
72.
CROSS REALITY SYSTEM WITH QUALITY INFORMATION ABOUT PERSISTENT COORDINATE FRAMES
A cross reality system that provides an immersive user experience shared by multiple user devices by providing quality information about a shared map. The quality information may be specific to individual user devices rendering virtual content specified with respect to the shared map. The quality information may be provided for persistent coordinate frames (PCFs) in the map. The quality information about a PCF may indicate positional uncertainty of virtual content, specified with respect to the PCF, when rendered on the user device. The quality information may be computed as upper bounding errors by determining error statistics for one or more steps in a process of specifying position with respect to the PCF or transforming that positional expression to a coordinate frame local to the device for rendering the virtual content. Applications running on individual user devices may adjust the rendering of virtual content based on the quality information about the shared map.
This disclosure relates to the use of variable-pitch light-emitting devices for display applications, including for displays in augmented reality, virtual reality, and mixed reality environments. In particular, it relates to small (e.g., micron-size) light emitting devices (e.g., micro-LEDs) of variable pitch that provide advantages such as compactness, manufacturability, and color rendition, as well as computational and power savings. Systems and methods for emitting multiple lights by multiple panels where a pitch of one panel is different than pitch(es) of other panels are disclosed. Each panel may comprise a respective array of light emitters. The multiple lights may be combined by a combiner.
H01L 25/075 - Assemblies consisting of a plurality of individual semiconductor or other solid-state devices all the devices being of a type provided for in a single subclass of subclasses , , , , or , e.g. assemblies of rectifier diodes the devices not having separate containers the devices being of a type provided for in group
A head-worn sound reproduction device is provided in the form of left and right earphones, which can either be clipped to each ear or mounted on other headgear. The earphones deliver high fidelity audio to a user's eardrums from near-ear range, in a lightweight form factor that is fully “non-blocking” (allows coupling in and natural hearing of ambient sound). Each earphone has a woofer component that produces bass frequencies, and a tweeter component that produces treble frequencies. The woofer outputs the bass frequencies from a position close to the ear canal, while the tweeter outputs treble frequencies from a position that is either close to the ear canal or further away. In certain embodiments, the tweeter is significantly further from the ear canal than the woofer, leading to a more expansive perceived “sound stage”, but still with a “pure” listening experience.
H04R 1/26 - Spatial arrangement of separate transducers responsive to two or more frequency ranges
H04R 1/28 - Transducer mountings or enclosures designed for specific frequency response; Transducer enclosures modified by provision of mechanical or acoustic impedances, e.g. resonator, damping means
H04R 1/34 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by using a single transducer with sound reflecting, diffracting, directing or guiding means
A display system is configured to direct a plurality of parallactically-disparate intra-pupil images into a viewer's eye. The parallactically-disparate intra-pupil images provide different parallax views of a virtual object, and impinge on the pupil from different angles. The wavefronts of light forming the images approximate a continuous divergent wavefront and provide selectable accommodation cues for the user, depending on the amount of parallax disparity between the intra-pupil images. The images may be formed by an emissive micro-display. Each pixel formed by the micro-display may be formed by one of a group of light emitters, which are at different locations such that the emitted light takes different paths to the eye to provide different amounts of parallax disparity.
G02B 30/24 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the stereoscopic type involving temporal multiplexing, e.g. using sequentially activated left and right shutters
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
The disclosure relates to systems and methods for authorization of a user in a spatial 3D environment. The systems and methods can include receiving a request from an application executing on a mixed reality display system to authorize the user with a web service, displaying to the user an authorization window configured to accept user input associated with authorization by the web service and to prevent the application or other applications from receiving the user input, communicating the user input to the web service, receiving an access token from the web service, in which the access token is indicative of successful authorization by the web service, and communicating the access token to the application for authorization of the user. The authorization window can be a modal window displayed in an immersive mode by the mixed reality display system.
Diffraction gratings provide optical elements, e.g., in a head-mountable display system, that can affect light, for example by incoupling light into a waveguide, outcoupling light out of a waveguide, and/or multiplying light propagating in a waveguide. The diffraction gratings may be configured to have reduced polarization sensitivity such that light of different polarization states, or polarized and unpolarized light, is incoupled, outcoupled, multiplied, or otherwise affected with a similar level of efficiency. The reduced polarization sensitivity may be achieved through provision of a transmissive layer and a metallic layer on one or more gratings. A diffraction grating may comprise a blazed grating or other suitable configuration.
Systems and methods of disabling user control interfaces during attachment of a wearable electronic device to a portion of a user's clothing or accessory are disclosed. The wearable electronic device can include inertial measurement units (IMUs), optical sources, optical sensors or electromagnetic sensors. Based on the information provided by the IMUs, optical sources, optical sensors or electromagnetic sensors, an electrical processing and control system can make a determination that the electronic device is being grasped and picked up for attaching to a portion of a user's clothing or accessory or that the electronic device is in the process of being attached to a portion of a user's clothing or accessory and temporarily disable one or more user control interfaces disposed on the outside of the wearable electronic device.
A wearable display system includes a fiber scanner including an optical fiber and a scanning mechanism configured to scan a tip of the optical fiber along an emission trajectory defining an optical axis. The wearable display system also includes an eyepiece positioned in front of the tip of the optical fiber and including a planar waveguide, an incoupling diffractive optical element (DOE) coupled to the planar waveguide, and an outcoupling DOE coupled to the planar waveguide. The wearable display system further includes a collimating optical element configured to receive light reflected by the incoupling DOE and collimate and reflect light toward the eyepiece.
An augmented reality head mounted display system includes an eyepiece having a transparent emissive display. The eyepiece and transparent emissive display are positioned in an optical path of a user's eye in order to transmit light into the user's eye to form images. Due to the transparent nature of the display, the user can see an outside environment through the transparent emissive display. The transparent emissive display comprises a plurality of emitters configured to emit light into the eye of the user.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
G09G 3/3208 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix using controlled light sources using electroluminescent panels semiconductive, e.g. using light-emitting diodes [LED] organic, e.g. using organic light-emitting diodes [OLED]
An eyepiece for projecting an image light field to an eye of a viewer for forming an image of virtual content includes a waveguide, a light source configured to deliver a light beam to be incident on the waveguide, a controller coupled to the light source and configured to modulate an intensity of the light beam in a plurality of time slots, a dynamic input coupling grating (ICG) configured to, for each time slot, diffract a respective portion of the light beam into the waveguide at a respective total internal reflection (TIR) angle corresponding to a respective field angle, and an outcoupling diffractive optical element (DOE) configured to diffract each respective portion of the light beam out of the waveguide toward the eye at the respective field angle, thereby projecting the light field to the eye of the viewer.
A multiple degree of freedom hinge system is provided, which is particularly well adapted for eyewear, such as spatial computing headsets. In the context of such spatial computing headsets having an optics assembly supported by opposing temple arms, the hinge system provides protection against over-extension of the temple arms or extreme deflections that may otherwise arise from undesirable torsional loading of the temple arms. The hinge systems also allow the temple arms to splay outwardly to enable proper fit and enhanced user comfort.
Systems include three optical elements arranged along an optical axis each having a different cylinder axis and a variable cylinder refractive power. Collectively, the three elements form a compound optical element having an overall spherical refractive power (SPH), cylinder refractive power (CYL), and cylinder axis (Axis) that can be varied according to a prescription (Rx).
In some embodiments, a first audio signal is received via a first microphone, and a first probability of voice activity is determined based on the first audio signal. A second audio signal is received via a second microphone, and a second probability of voice activity is determined based on the first and second audio signals. Whether a first threshold of voice activity is met is determined based on the first and second probabilities of voice activity. In accordance with a determination that a first threshold of voice activity is met, it is determined that a voice onset has occurred, and an alert is transmitted to a processor based on the determination that the voice onset has occurred. In accordance with a determination that a first threshold of voice activity is not met, it is not determined that a voice onset has occurred.
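A simplified sketch of the two-microphone logic: one probability from the primary microphone's energy, a second from the correlation between the two microphones, then a combined threshold test that raises an alert on onset. The probability models, combination rule, and threshold are assumptions.

import numpy as np

def prob_from_energy(frame, noise_floor=1e-4):
    energy = float(np.mean(frame ** 2))
    return energy / (energy + noise_floor)                    # squashed into (0, 1)

def prob_from_pair(frame_a, frame_b):
    corr = np.corrcoef(frame_a, frame_b)[0, 1]                # wearer's voice is correlated across both mics
    return max(0.0, float(corr))

def voice_onset(frame_a, frame_b, threshold=0.5):
    combined = 0.5 * (prob_from_energy(frame_a) + prob_from_pair(frame_a, frame_b))
    return combined >= threshold

fs = 16000
t = np.arange(fs // 100) / fs                                 # one 10 ms frame
speech = 0.1 * np.sin(2 * np.pi * 200 * t)
mic1 = speech + 0.005 * np.random.randn(len(t))
mic2 = speech + 0.005 * np.random.randn(len(t))
if voice_onset(mic1, mic2):
    print("voice onset detected -> alert the processor")      # e.g., wake downstream speech processing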
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 17/18 - Complex mathematical operations for evaluating statistical data
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
A method for placing content in an augmented reality system. A notification is received regarding availability of new content to display in the augmented reality system. A confirmation is received that indicates acceptance of the new content. Three dimensional information that describes the physical environment is provided to an external computing device, to enable the external computing device to be used for selecting an assigned location in the physical environment for the new content. Location information is received, from the external computing device, that indicates the assigned location. Based on the location information, a display location on a display system of the augmented reality system is determined at which to display the new content so that the new content appears to the user to be displayed as an overlay at the assigned location in the physical environment. The new content is displayed on the display system at the display location.
Systems and methods for reducing error from noisy data received from a high frequency sensor by fusing the received input with data received from a low frequency sensor. The methods include collecting a first set of dynamic inputs from the high frequency sensor, collecting a correction input point from the low frequency sensor, and adjusting a propagation path of a second set of dynamic inputs from the high frequency sensor based on the correction input point, either by full translation to the correction input point or by a dampened approach towards the correction input point.
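A small sketch of the described fusion, with invented names: high-rate deltas propagate the state, and when a low-rate correction point arrives the state either snaps to it (damping = 1.0) or approaches it by a damping factor so later high-rate samples propagate from the adjusted path.

import numpy as np

class FusedTracker:
    def __init__(self, damping=0.3):
        self.position = np.zeros(3)
        self.damping = damping                    # 1.0 = full translation to the correction point

    def propagate(self, delta):
        """High-frequency update (e.g., integrated motion from the noisy sensor)."""
        self.position = self.position + np.asarray(delta, dtype=float)
        return self.position

    def correct(self, measured_position):
        """Low-frequency correction input point."""
        error = np.asarray(measured_position, dtype=float) - self.position
        self.position = self.position + self.damping * error   # dampened approach towards the fix
        return self.position

tracker = FusedTracker(damping=0.3)
for _ in range(10):
    tracker.propagate([0.01, 0.0, 0.0])           # drifting high-rate path
print(tracker.correct([0.08, 0.0, 0.0]))          # pulled partway toward the correction point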
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/0483 - Interaction with page-structured environments, e.g. book metaphor
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06T 3/18 - Image warping, e.g. rearranging pixels individually
G06T 7/277 - Analysis of motion involving stochastic approaches, e.g. using Kalman filters
G06T 19/00 - Manipulating 3D models or images for computer graphics
A display system includes a waveguide assembly having a plurality of waveguides, each waveguide associated with an in-coupling optical element configured to in-couple light into the associated waveguide. A projector outputs light from one or more spatially-separated pupils, and at least one of the pupils outputs light of two different ranges of wavelengths. The in-coupling optical elements for two or more waveguides are inline, e.g. vertically aligned, with each other so that the in-coupling optical elements are in the path of light of the two different ranges of wavelengths. The in-coupling optical element of a first waveguide selectively in-couples light of one range of wavelengths into the waveguide, while the in-coupling optical element of a second waveguide selectively in-couples light of another range of wavelengths. Absorptive color filters are provided forward of an in-coupling optical element to limit the propagation of undesired wavelengths of light to that in-coupling optical element.
G02B 6/10 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings of the optical waveguide type
Disclosed herein are systems and methods for sharing and synchronizing virtual content. A method may include receiving, from a host application via a wearable device comprising a transmissive display, a first data package comprising first data; identifying virtual content based on the first data; presenting a view of the virtual content via the transmissive display; receiving, via the wearable device, first user input directed at the virtual content; generating second data based on the first data and the first user input; sending, to the host application via the wearable device, a second data package comprising the second data, wherein the host application is configured to execute via one or more processors of a computer system remote to the wearable device and in communication with the wearable device.
G06F 30/12 - Geometric CAD characterised by design entry means specially adapted for CAD, e.g. graphical user interfaces [GUI] specially adapted for CAD
This disclosure is related to systems and methods for rendering audio for a mixed reality environment. Methods according to embodiments of this disclosure include receiving an input audio signal, via a wearable device in communication with a mixed reality environment, the input audio signal corresponding to a sound source originating from a real environment. In some embodiments, the system can determine one or more acoustic properties associated with the mixed reality environment. In some embodiments, the system can determine a signal modification parameter based on the one or more acoustic properties associated with the mixed reality environment. In some embodiments, the system can apply the signal modification parameter to the input audio signal to determine a second audio signal. The system can present the second audio signal to the user.
Disclosed is an improved diffraction structure for 3D display systems. The improved diffraction structure includes an intermediate layer that resides between a waveguide substrate and a top grating surface. The top grating surface comprises a first material that corresponds to a first refractive index value, the intermediate layer comprises a second material that corresponds to a second refractive index value, and the substrate comprises a third material that corresponds to a third refractive index value.
A wearable device may include a head-mounted display (HMD) for rendering a three-dimensional (3D) virtual object which appears to be located in an ambient environment of a user of the display. The relative positions of the HMD and one or more eyes of the user may not be in desired positions to receive image information outputted by the HMD. For example, the HMD-to-eye vertical alignment may be different between the left and right eyes. The wearable device may determine if the HMD is level on the user's head and may then provide the user with a left-eye alignment marker and a right-eye alignment marker. Based on user feedback, the wearable device may determine if there is any left-right vertical misalignment and may take actions to reduce or minimize the effects of any misalignment.
G09G 5/38 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory with means for controlling the display position
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
This disclosure describes techniques for device authentication and/or pairing. A display system can comprise a head mountable display, computer memory, and processor(s). In response to receiving a request to authenticate a connection between the display system and a companion device (e.g., controller or other computer device), first data may be determined, the first data based at least partly on biometric data associated with a user. The first data may be sent to an authentication device configured to compare the first data to second data received from the companion device, the second data based at least partly on the biometric data. Based at least partly on a correspondence between the first and second data, the authentication device can send a confirmation to the display system to permit communication between the display system and companion device.
H04M 1/60 - Substation equipment, e.g. for use by subscribers including speech amplifiers
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
An apparatus configured for head-worn by a user, includes: a screen configured to present graphics for the user; a camera system configured to view an environment in which the user is located; and a processing unit coupled to the camera system, the processing unit configured to: obtain a feature detection response for a first image, divide the feature detection response into a plurality of patches having a first patch and a second patch, determine a first maximum value in the first patch of the feature detection response, and identify a first set of one or more features for a first region of the first image based on a first criterion that relates to the determined first maximum value.
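A sketch of the patch-based selection under an assumed criterion: split the feature-detection response into patches, take each patch's maximum, and keep responses above a fraction of that maximum as the features for that region. The keep_fraction criterion and patch size are illustrative, not the disclosed criterion.

import numpy as np

def select_features(response, patch=16, keep_fraction=0.8):
    h, w = response.shape
    features = []
    for y0 in range(0, h, patch):
        for x0 in range(0, w, patch):
            block = response[y0:y0 + patch, x0:x0 + patch]
            block_max = block.max()
            if block_max <= 0:
                continue
            ys, xs = np.nonzero(block >= keep_fraction * block_max)   # criterion relative to the patch max
            features.extend((y0 + y, x0 + x) for y, x in zip(ys, xs))
    return features

response = np.zeros((64, 64))
response[5, 7] = 1.0                               # strong corner in one patch
response[40, 50] = 0.2                             # weaker corner, still kept: judged against its own patch max
print(select_features(response))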
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06V 10/771 - Feature selection, e.g. selecting representative features from a multi-dimensional feature space
The disclosure describes an improved drop-on-demand, controlled volume technique for dispensing resist onto a substrate, which is then imprinted to create a patterned optical device suitable for use in optical applications such as augmented reality and/or mixed reality systems. The technique enables the dispensation of drops of resist at precise locations on the substrate, with precisely controlled drop volume corresponding to an imprint template having different zones associated with different total resist volumes. Controlled drop size and placement also provides for substantially less variation in residual layer thickness across the surface of the substrate after imprinting, compared to previously available techniques. The technique employs resist having a refractive index closer to that of the substrate index, reducing optical artifacts in the device. To ensure reliable dispensing of the higher index and higher viscosity resist in smaller drop sizes, the dispensing system can continuously circulate the resist.
F21V 8/00 - Use of light guides, e.g. fibre optic devices, in lighting devices or systems
G03F 7/00 - Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printed surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
97.
METHODS AND SYSTEMS FOR GENERATING VIRTUAL CONTENT DISPLAY WITH A VIRTUAL OR AUGMENTED REALITY APPARATUS
Several unique configurations for interferometric recording of volumetric phase diffractive elements with relatively high angle diffraction for use in waveguides are disclosed. Separate layer EPE and OPE structures produced by various methods may be integrated in side-by-side or overlaid constructs, and multiple such EPE and OPE structures may be combined or multiplexed to exhibit EPE/OPE functionality in a single, spatially-coincident layer. Multiplexed structures reduce the total number of layers of materials within a stack of eyepiece optics, each of which may be responsible for displaying a given focal depth range of a volumetric image. Volumetric phase type diffractive elements are used to offer properties including spectral bandwidth selectivity that may enable registered multi-color diffracted fields, angular multiplexing capability to facilitate tiling and field-of-view expansion without crosstalk, and all-optical, relatively simple prototyping compared to other diffractive element forms, enabling rapid design iteration.
G02B 30/24 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the stereoscopic type involving temporal multiplexing, e.g. using sequentially activated left and right shutters
G02B 30/26 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the autostereoscopic type
G02F 1/1334 - Constructional arrangements based on polymer-dispersed liquid crystals, e.g. microencapsulated liquid crystals
G03H 1/04 - Processes or apparatus for producing holograms
98.
METHOD OF FABRICATING DISPLAY DEVICE HAVING PATTERNED LITHIUM-BASED TRANSITION METAL OXIDE
The present disclosure generally relates to display systems, and more particularly to augmented reality display systems and methods of fabricating the same. A method of fabricating a display device includes providing a substrate comprising a lithium (Li)-based oxide and forming an etch mask pattern exposing regions of the substrate. The method additionally includes plasma etching the exposed regions of the substrate using a gas mixture comprising CHF3 to form a diffractive optical element, wherein the diffractive optical element comprises Li-based oxide features configured to diffract visible light incident thereon.
An apparatus for providing a virtual or augmented reality experience, includes: a screen, wherein the screen is at least partially transparent for allowing a user of the apparatus to view an object in an environment surrounding the user; a surface detector configured to detect a surface of the object; an object identifier configured to obtain an orientation and/or an elevation of the surface of the object, and to make an identification for the object based on the orientation and/or the elevation of the surface of the object; and a graphic generator configured to generate an identifier indicating the identification for the object for display by the screen, wherein the screen is configured to display the identifier.
G06F 3/04812 - Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06T 7/70 - Determining position or orientation of objects or cameras
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Methods and apparatus for providing a representation of an environment, for example, in an XR system or any suitable computer vision or robotics application. A representation of an environment may include one or more planar features. The representation of the environment may be provided by jointly optimizing plane parameters of the planar features and the sensor poses at which the planar features are observed. The joint optimization may be based on a reduced matrix and a reduced residual vector in lieu of the Jacobian matrix and the original residual vector.