A head-worn sound reproduction device is provided in the form of left and right earphones, which can either be clipped to each ear or mounted on other headgear. The earphones deliver high-fidelity audio to a user's eardrums from near-ear range, in a lightweight form factor that is fully “non-blocking” (allowing ambient sound to couple in and be heard naturally). Each earphone has a woofer component that produces bass frequencies, and a tweeter component that produces treble frequencies. The woofer outputs the bass frequencies from a position close to the ear canal, while the tweeter outputs treble frequencies from a position that is either close to the ear canal or further away. In certain embodiments, the tweeter is significantly further from the ear canal than the woofer, leading to a more expansive perceived “sound stage” while preserving a “pure” listening experience.
H04R 1/26 - Spatial arrangement of separate transducers responsive to two or more frequency ranges
H04R 1/28 - Transducer mountings or enclosures designed for specific frequency response; Transducer enclosures modified by provision of mechanical or acoustic impedances, e.g. resonator, damping means
H04R 1/34 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by using a single transducer with sound reflecting, diffracting, directing or guiding means
This disclosure relates to the use of variable-pitch light-emitting devices for display applications, including displays in augmented reality, virtual reality, and mixed reality environments. In particular, it relates to small (e.g., micron-size) light emitting devices (e.g., micro-LEDs) of variable pitch that provide advantages such as compactness, manufacturability, color rendition, and computational and power savings. Systems and methods are disclosed for emitting multiple lights by multiple panels, where the pitch of one panel is different from the pitch(es) of the other panels. Each panel may comprise a respective array of light emitters. The multiple lights may be combined by a combiner.
H01L 25/075 - Assemblies consisting of a plurality of individual semiconductor or other solid state devices all the devices being of a type provided for in the same subgroup of groups H01L 27/00 - H01L 49/00, or in a single subclass of H10K, H10N, e.g. assemblies of rectifier diodes the devices not having separate containers the devices being of a type provided for in group H01L 33/00
3.
CROSS REALITY SYSTEM WITH QUALITY INFORMATION ABOUT PERSISTENT COORDINATE FRAMES
A cross reality system provides an immersive user experience shared by multiple user devices by providing quality information about a shared map. The quality information may be specific to individual user devices rendering virtual content specified with respect to the shared map. The quality information may be provided for persistent coordinate frames (PCFs) in the map. The quality information about a PCF may indicate positional uncertainty of virtual content, specified with respect to the PCF, when rendered on the user device. The quality information may be computed as an upper-bounding error by determining error statistics for one or more steps in the process of specifying position with respect to the PCF, or of transforming that positional expression to a coordinate frame local to the device for rendering the virtual content. Applications running on individual user devices may adjust the rendering of virtual content based on the quality information about the shared map.
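As a rough illustration of how per-step error statistics could be rolled up into a single upper-bounding quality value, here is a minimal sketch. The step names, sigma values, and the `pcf_position_uncertainty` helper are hypothetical, assuming independent zero-mean error contributions whose variances add; the patent does not specify this model.

```python
import math

def pcf_position_uncertainty(step_sigmas, confidence_z=3.0):
    """Combine per-step positional error statistics along the transform
    chain (e.g., localization against the shared map, map alignment,
    PCF-to-device-frame transform) into an upper-bounding error.

    Assumes independent, zero-mean contributions, so variances add;
    the bound is a z-sigma envelope on the combined error.
    """
    combined_sigma = math.sqrt(sum(s * s for s in step_sigmas))
    return confidence_z * combined_sigma

# Hypothetical per-step standard deviations, in metres.
steps = [0.004, 0.010, 0.002]            # tracking, map alignment, transform
bound = pcf_position_uncertainty(steps)
if bound > 0.05:                         # app-specific quality threshold
    print(f"adjust rendering: {bound:.3f} m positional uncertainty")
```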
A display system is configured to direct a plurality of parallactically-disparate intra-pupil images into a viewer's eye. The parallactically-disparate intra-pupil images provide different parallax views of a virtual object, and impinge on the pupil from different angles. The wavefronts of light forming the images approximate a continuous divergent wavefront and provide selectable accommodation cues for the user, depending on the amount of parallax disparity between the intra-pupil images. The images may be formed by an emissive micro-display. Each pixel formed by the micro-display may be formed by one of a group of light emitters, which are at different locations such that the emitted light takes different paths to the eye to provide different amounts of parallax disparity.
G02B 30/24 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the stereoscopic type involving temporal multiplexing, e.g. using sequentially activated left and right shutters
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/398 - Synchronisation thereof; Control thereof
5.
METHOD AND SYSTEM FOR FIBER SCANNING PROJECTOR WITH ANGLED EYEPIECE
A wearable display system includes a fiber scanner including an optical fiber and a scanning mechanism configured to scan a tip of the optical fiber along an emission trajectory defining an optical axis. The wearable display system also includes an eyepiece positioned in front of the tip of the optical fiber and including a planar waveguide, an incoupling diffractive optical element (DOE) coupled to the planar waveguide, and an outcoupling DOE coupled to the planar waveguide. The wearable display system further includes a collimating optical element configured to receive light reflected by the incoupling DOE and collimate and reflect light toward the eyepiece.
The disclosure relates to systems and methods for authorization of a user in a spatial 3D environment. The systems and methods can include receiving a request from an application executing on a mixed reality display system to authorize the user with a web service, displaying to the user an authorization window configured to accept user input associated with authorization by the web service and to prevent the application or other applications from receiving the user input, communicating the user input to the web service, receiving an access token from the web service, in which the access token is indicative of successful authorization by the web service, and communicating the access token to the application for authorization of the user. The authorization window can be a modal window displayed in an immersive mode by the mixed reality display system.
Systems and methods of disabling user control interfaces during attachment of a wearable electronic device to a portion of a user's clothing or accessory are disclosed. The wearable electronic device can include inertial measurement units (IMUs), optical sources, optical sensors or electromagnetic sensors. Based on the information provided by the IMUs, optical sources, optical sensors or electromagnetic sensors, an electrical processing and control system can make a determination that the electronic device is being grasped and picked up for attaching to a portion of a user's clothing or accessory or that the electronic device is in the process of being attached to a portion of a user's clothing or accessory and temporarily disable one or more user control interfaces disposed on the outside of the wearable electronic device.
Diffraction gratings serve as optical elements, e.g., in a head-mountable display system, that can affect light, for example by incoupling light into a waveguide, outcoupling light out of a waveguide, and/or multiplying light propagating in a waveguide. The diffraction gratings may be configured to have reduced polarization sensitivity, such that light of different polarization states, or polarized and unpolarized light, is incoupled, outcoupled, multiplied, or otherwise affected with a similar level of efficiency. The reduced polarization sensitivity may be achieved through provision of a transmissive layer and a metallic layer on one or more gratings. A diffraction grating may comprise a blazed grating or other suitable configuration.
An augmented reality head mounted display system includes an eyepiece having a transparent emissive display. The eyepiece and transparent emissive display are positioned in an optical path of a user's eye in order to transmit light into the user's eye to form images. Due to the transparent nature of the display, the user can see an outside environment through the transparent emissive display. The transparent emissive display comprises a plurality of emitters configured to emit light into the eye of the user.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
G09G 3/3208 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix using controlled light sources using electroluminescent panels semiconductive, e.g. using light-emitting diodes [LED] organic, e.g. using organic light-emitting diodes [OLED]
An eyepiece for projecting an image light field to an eye of a viewer for forming an image of virtual content includes a waveguide, a light source configured to deliver a light beam to be incident on the waveguide, a controller coupled to the light source and configured to modulate an intensity of the light beam in a plurality of time slots, a dynamic input coupling grating (ICG) configured to, for each time slot, diffract a respective portion of the light beam into the waveguide at a respective total internal reflection (TIR) angle corresponding to a respective field angle, and an outcoupling diffractive optical element (DOE) configured to diffract each respective portion of the light beam out of the waveguide toward the eye at the respective field angle, thereby projecting the light field to the eye of the viewer.
A multiple degree of freedom hinge system is provided, which is particularly well adapted for eyewear, such as spatial computing headsets. In the context of such spatial computing headsets having an optics assembly supported by opposing temple arms, the hinge system provides protection against over-extension of the temple arms or extreme deflections that may otherwise arise from undesirable torsional loading of the temple arms. The hinge system also allows the temple arms to splay outwardly to enable proper fit and enhanced user comfort.
Systems include three optical elements arranged along an optical axis each having a different cylinder axis and a variable cylinder refractive power. Collectively, the three elements form a compound optical element having an overall spherical refractive power (SPH), cylinder refractive power (CYL), and cylinder axis (Axis) that can be varied according to a prescription (Rx).
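For thin elements in contact, the overall SPH/CYL/Axis can be computed by summing the elements in Thibos power-vector form (M, J0, J45), whose components add linearly. The sketch below assumes that standard optometric formalism; the element powers shown are made-up values, not a prescription from the disclosure.

```python
import math

def to_power_vector(sph, cyl, axis_deg):
    """Thibos power-vector form (M, J0, J45) of a sphero-cylinder."""
    a = math.radians(axis_deg)
    return (sph + cyl / 2.0,
            -(cyl / 2.0) * math.cos(2 * a),
            -(cyl / 2.0) * math.sin(2 * a))

def from_power_vector(m, j0, j45):
    """Back to (SPH, CYL, Axis) in minus-cylinder convention."""
    cyl = -2.0 * math.hypot(j0, j45)
    sph = m - cyl / 2.0
    axis = math.degrees(0.5 * math.atan2(j45, j0)) % 180.0
    return sph, cyl, axis

# Three variable-power cylinder elements at different axes (hypothetical Rx).
elements = [(0.0, -1.00, 0.0), (0.0, -0.75, 60.0), (0.0, -0.50, 120.0)]
m = j0 = j45 = 0.0
for e in elements:
    dm, dj0, dj45 = to_power_vector(*e)
    m, j0, j45 = m + dm, j0 + dj0, j45 + dj45
print(from_power_vector(m, j0, j45))     # overall SPH / CYL / Axis
```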
A method for placing content in an augmented reality system. A notification is received regarding availability of new content to display in the augmented reality system. A confirmation is received that indicates acceptance of the new content. Three-dimensional information that describes the physical environment is provided to an external computing device, to enable the external computing device to be used for selecting an assigned location in the physical environment for the new content. Location information that indicates the assigned location is received from the external computing device. Based on the location information, a display location is determined on a display system of the augmented reality system at which to display the new content, such that the new content appears to the user as an overlay at the assigned location in the physical environment. The new content is displayed on the display system at the display location.
Systems and methods for reducing error from noisy data received from a high frequency sensor by fusing the received input with data received from a low frequency sensor: collecting a first set of dynamic inputs from the high frequency sensor, collecting a correction input point from the low frequency sensor, and adjusting a propagation path of a second set of dynamic inputs from the high frequency sensor based on the correction input point, either by full translation to the correction input point or by a dampened approach toward the correction input point.
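The two correction modes can be made concrete with a minimal 1-D sketch; the `fuse` helper and the blending factor `alpha` are hypothetical stand-ins for whatever damping rule an implementation would choose.

```python
def fuse(high_freq_samples, correction_point, alpha=1.0):
    """Adjust the propagation path of high-frequency samples toward a
    low-frequency correction point.

    alpha == 1.0 -> full translation to the correction point;
    0 < alpha < 1 -> dampened approach, applying only part of the
    correction to avoid a visible jump in the propagated path.
    """
    if not high_freq_samples:
        return []
    offset = correction_point - high_freq_samples[0]
    return [s + alpha * offset for s in high_freq_samples]

# Drifting high-rate samples and an occasional accurate low-rate fix.
drifting = [10.2, 10.4, 10.7, 11.1]
fix = 9.8
print(fuse(drifting, fix, alpha=1.0))    # full translation
print(fuse(drifting, fix, alpha=0.25))   # dampened approach
```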
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/0483 - Interaction with page-structured environments, e.g. book metaphor
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06T 3/18 - Image warping, e.g. rearranging pixels individually
G06T 7/277 - Analysis of motion involving stochastic approaches, e.g. using Kalman filters
G06T 19/00 - Manipulating 3D models or images for computer graphics
A display system includes a waveguide assembly having a plurality of waveguides, each waveguide associated with an in-coupling optical element configured to in-couple light into the associated waveguide. A projector outputs light from one or more spatially-separated pupils, and at least one of the pupils outputs light of two different ranges of wavelengths. The in-coupling optical elements for two or more waveguides are inline, e.g. vertically aligned, with each other so that the in-coupling optical elements are in the path of light of the two different ranges of wavelengths. The in-coupling optical element of a first waveguide selectively in-couples light of one range of wavelengths into the waveguide, while the in-coupling optical element of a second waveguide selectively in-couples light of another range of wavelengths. Absorptive color filters are provided forward of an in-coupling optical element to limit the propagation of undesired wavelengths of light to that in-coupling optical element.
G02B 6/10 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings of the optical waveguide type
Disclosed herein are systems and methods for sharing and synchronizing virtual content. A method may include receiving, from a host application via a wearable device comprising a transmissive display, a first data package comprising first data; identifying virtual content based on the first data; presenting a view of the virtual content via the transmissive display; receiving, via the wearable device, first user input directed at the virtual content; generating second data based on the first data and the first user input; sending, to the host application via the wearable device, a second data package comprising the second data, wherein the host application is configured to execute via one or more processors of a computer system remote to the wearable device and in communication with the wearable device.
G06F 30/12 - Geometric CAD characterised by design entry means specially adapted for CAD, e.g. graphical user interfaces [GUI] specially adapted for CAD
Disclosed is an improved diffraction structure for 3D display systems. The improved diffraction structure includes an intermediate layer that resides between a waveguide substrate and a top grating surface. The top grating surface comprises a first material that corresponds to a first refractive index value, the intermediate layer comprises a second material that corresponds to a second refractive index value, and the substrate comprises a third material that corresponds to a third refractive index value.
A wearable device may include a head-mounted display (HMD) for rendering a three-dimensional (3D) virtual object which appears to be located in an ambient environment of a user of the display. The relative positions of the HMD and one or more eyes of the user may not be in desired positions to receive image information outputted by the HMD. For example, the HMD-to-eye vertical alignment may be different between the left and right eyes. The wearable device may determine if the HMD is level on the user's head and may then provide the user with a left-eye alignment marker and a right-eye alignment marker. Based on user feedback, the wearable device may determine if there is any left-right vertical misalignment and may take actions to reduce or minimize the effects of any misalignment.
G09G 5/38 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory with means for controlling the display position
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
This disclosure describes techniques for device authentication and/or pairing. A display system can comprise a head mountable display, computer memory, and processor(s). In response to receiving a request to authenticate a connection between the display system and a companion device (e.g., controller or other computer device), first data may be determined, the first data based at least partly on biometric data associated with a user. The first data may be sent to an authentication device configured to compare the first data to second data received from the companion device, the second data based at least partly on the biometric data. Based at least partly on a correspondence between the first and second data, the authentication device can send a confirmation to the display system to permit communication between the display system and companion device.
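As a sketch of the comparison step, both devices might derive their first/second data by keying a digest of the biometric-derived bytes with a shared session nonce, and the authentication device can then compare in constant time. Real biometric templates are noisy, so an exact-match digest is a simplification; the nonce and helper names are hypothetical.

```python
import hashlib
import hmac

def derive_auth_data(biometric_bytes: bytes, session_nonce: bytes) -> bytes:
    """Derive comparison data from biometric input; only the keyed
    digest leaves the device, never the raw biometric data."""
    return hmac.new(session_nonce, biometric_bytes, hashlib.sha256).digest()

def authenticate(first_data: bytes, second_data: bytes) -> bool:
    """Authentication-device check: constant-time correspondence test
    between data derived independently on the two devices."""
    return hmac.compare_digest(first_data, second_data)

nonce = b"shared-session-nonce"                      # hypothetical pairing nonce
display_data = derive_auth_data(b"iris-template", nonce)
companion_data = derive_auth_data(b"iris-template", nonce)
assert authenticate(display_data, companion_data)    # pairing confirmed
```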
H04M 1/60 - Substation equipment, e.g. for use by subscribers including speech amplifiers
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
An apparatus configured to be worn on the head of a user includes: a screen configured to present graphics for the user; a camera system configured to view an environment in which the user is located; and a processing unit coupled to the camera system, the processing unit configured to: obtain a feature detection response for a first image, divide the feature detection response into a plurality of patches having a first patch and a second patch, determine a first maximum value in the first patch of the feature detection response, and identify a first set of one or more features for a first region of the first image based on a first criterion that relates to the determined first maximum value.
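The patch-wise criterion might look like the sketch below, which keeps responses within a fixed ratio of each patch's maximum so that features are spread evenly across the image; the `ratio` parameter and the random response map are illustrative stand-ins, not values from the disclosure.

```python
import numpy as np

def patch_features(response, patch, ratio=0.8):
    """Divide a feature-detection response map into patches, find each
    patch's maximum, and keep responses within `ratio` of that maximum
    (a criterion relating features to the patch maximum)."""
    keep = []
    h, w = response.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            block = response[y:y + patch, x:x + patch]
            m = block.max()
            if m <= 0:
                continue
            ys, xs = np.nonzero(block >= ratio * m)
            keep.extend(zip(ys + y, xs + x))
    return keep

resp = np.random.rand(480, 640).astype(np.float32)   # stand-in response map
print(len(patch_features(resp, patch=80)))
```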
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06V 10/771 - Feature selection, e.g. selecting representative features from a multi-dimensional feature space
This disclosure is related to systems and methods for rendering audio for a mixed reality environment. Methods according to embodiments of this disclosure include receiving an input audio signal, via a wearable device in communication with a mixed reality environment, the input audio signal corresponding to a sound source originating from a real environment. In some embodiments, the system can determine one or more acoustic properties associated with the mixed reality environment. In some embodiments, the system can determine a signal modification parameter based on the one or more acoustic properties associated with the mixed reality environment. In some embodiments, the system can apply the signal modification parameter to the input audio signal to determine a second audio signal. The system can present the second audio signal to the user.
The disclosure describes an improved drop-on-demand, controlled volume technique for dispensing resist onto a substrate, which is then imprinted to create a patterned optical device suitable for use in optical applications such as augmented reality and/or mixed reality systems. The technique enables the dispensation of drops of resist at precise locations on the substrate, with precisely controlled drop volume corresponding to an imprint template having different zones associated with different total resist volumes. Controlled drop size and placement also provides for substantially less variation in residual layer thickness across the surface of the substrate after imprinting, compared to previously available techniques. The technique employs resist having a refractive index closer to that of the substrate index, reducing optical artifacts in the device. To ensure reliable dispensing of the higher index and higher viscosity resist in smaller drop sizes, the dispensing system can continuously circulate the resist.
F21V 8/00 - Use of light guides, e.g. fibre optic devices, in lighting devices or systems
G03F 7/00 - Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printed surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
25.
METHODS AND SYSTEMS FOR GENERATING VIRTUAL CONTENT DISPLAY WITH A VIRTUAL OR AUGMENTED REALITY APPARATUS
Several unique configurations for interferometric recording of volumetric phase diffractive elements with relatively high angle diffraction for use in waveguides are disclosed. Separate-layer exit pupil expander (EPE) and orthogonal pupil expander (OPE) structures produced by various methods may be integrated in side-by-side or overlaid constructs, and multiple such EPE and OPE structures may be combined or multiplexed to exhibit EPE/OPE functionality in a single, spatially-coincident layer. Multiplexed structures reduce the total number of layers of materials within a stack of eyepiece optics, each of which may be responsible for displaying a given focal depth range of a volumetric image. Volumetric phase type diffractive elements are used to offer properties including spectral bandwidth selectivity that may enable registered multi-color diffracted fields, angular multiplexing capability to facilitate tiling and field-of-view expansion without crosstalk, and all-optical, relatively simple prototyping compared to other diffractive element forms, enabling rapid design iteration.
G02B 30/24 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the stereoscopic type involving temporal multiplexing, e.g. using sequentially activated left and right shutters
G02B 30/26 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the autostereoscopic type
G02F 1/1334 - Constructional arrangements based on polymer-dispersed liquid crystals, e.g. microencapsulated liquid crystals
G03H 1/04 - Processes or apparatus for producing holograms
26.
METHOD OF FABRICATING DISPLAY DEVICE HAVING PATTERNED LITHIUM-BASED TRANSITION METAL OXIDE
The present disclosure generally relates to display systems, and more particularly to augmented reality display systems and methods of fabricating the same. A method of fabricating a display device includes providing a substrate comprising a lithium (Li)-based oxide and forming an etch mask pattern exposing regions of the substrate. The method additionally includes plasma etching the exposed regions of the substrate using a gas mixture comprising CHF3 to form a diffractive optical element, wherein the diffractive optical element comprises Li-based oxide features configured to diffract visible light incident thereon.
An apparatus for providing a virtual or augmented reality experience includes: a screen, wherein the screen is at least partially transparent for allowing a user of the apparatus to view an object in an environment surrounding the user; a surface detector configured to detect a surface of the object; an object identifier configured to obtain an orientation and/or an elevation of the surface of the object, and to make an identification for the object based on the orientation and/or the elevation of the surface of the object; and a graphic generator configured to generate an identifier indicating the identification for the object for display by the screen, wherein the screen is configured to display the identifier.
G06F 3/04812 - Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06T 7/70 - Determining position or orientation of objects or cameras
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Methods and apparatus for providing a representation of an environment, for example in an XR system or other computer vision and robotics applications. A representation of an environment may include one or more planar features. The representation of the environment may be provided by jointly optimizing plane parameters of the planar features and the sensor poses at which the planar features are observed. The joint optimization may be based on a reduced matrix and a reduced residual vector in lieu of the Jacobian matrix and the original residual vector.
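One standard way to obtain a reduced matrix and residual is to eliminate the plane-parameter block from the Gauss-Newton normal equations via the Schur complement. The abstract does not say this is the reduction used, so the sketch below is only an illustrative stand-in, with randomly generated Jacobian blocks in place of real plane/pose derivatives.

```python
import numpy as np

def reduced_step(J_planes, J_poses, r):
    """One Gauss-Newton step with the plane block eliminated via the
    Schur complement, leaving a reduced matrix and reduced residual
    over the sensor poses only."""
    Hpp = J_planes.T @ J_planes              # plane-plane block
    Hpx = J_planes.T @ J_poses               # plane-pose block
    Hxx = J_poses.T @ J_poses                # pose-pose block
    bp, bx = J_planes.T @ r, J_poses.T @ r
    Hpp_inv = np.linalg.inv(Hpp + 1e-9 * np.eye(Hpp.shape[0]))
    H_red = Hxx - Hpx.T @ Hpp_inv @ Hpx      # reduced matrix
    b_red = bx - Hpx.T @ Hpp_inv @ bp        # reduced residual vector
    dx = np.linalg.solve(H_red, -b_red)      # pose update
    dp = Hpp_inv @ (-bp - Hpx @ dx)          # back-substituted plane update
    return dx, dp

rng = np.random.default_rng(0)               # 40 residuals, 8 plane / 12 pose params
dx, dp = reduced_step(rng.normal(size=(40, 8)),
                      rng.normal(size=(40, 12)),
                      rng.normal(size=40))
```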
A display system can include a head-mounted display configured to project light to an eye of a user to display virtual image content at different amounts of divergence and collimation. The display system can include an inward-facing imaging system that images the user's eye and processing electronics that are in communication with the inward-facing imaging system and that are configured to obtain an estimate of a center of rotation of the user's eye. The display system may render virtual image content with a render camera positioned at or relative to the center of rotation of the eye.
A61B 3/113 - Objective types, i.e. instruments for examining the eyes independent of the patients perceptions or reactions for determining or recording eye movement
A61B 3/11 - Objective types, i.e. instruments for examining the eyes independent of the patients perceptions or reactions for measuring interpupillary distance or diameter of pupils
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00 - G02B 30/60
G02B 30/40 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images giving the observer of a single two-dimensional [2D] image a perception of depth
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06T 3/40 - Scaling of a whole image or part thereof
G06V 10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
G06V 40/18 - Eye characteristics, e.g. of the iris
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/383 - Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
An imaging system includes a light source configured to generate a light beam. The system also includes first and second light guiding optical elements having respective first and second entry portions, and configured to propagate at least respective first and second portions of the light beam by total internal reflection. The system further includes a light distributor having a light distributor entry portion, a first exit portion, and a second exit portion. The light distributor is configured to direct the first and second portions of the light beam toward the first and second entry portions, respectively. The light distributor entry portion and the first exit portion are aligned along a first axis. The light distributor entry portion and the second exit portion are aligned along a second axis different from the first axis.
An audio system and method of spatially rendering audio signals that uses modified virtual speaker panning is disclosed. The audio system may include a fixed number F of virtual speakers, and the modified virtual speaker panning may dynamically select and use a subset P of the fixed virtual speakers. The subset P of virtual speakers may be selected using a low-energy speaker detection and culling method, a source geometry-based culling method, or both. One or more processing blocks in the decoder/virtualizer may be bypassed based on the energy level of the associated audio signal or the location of the sound source relative to the user/listener, respectively. In some embodiments, a virtual speaker that is designated as an active virtual speaker at a first time may also be designated as an active virtual speaker at a second time, to ensure the processing completes.
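The two culling rules can be sketched as below, assuming virtual speakers on a horizontal ring; the energy floor, the angular cutoff, and the `select_active_speakers` helper are illustrative choices, not the patented method. A production implementation would also add the hysteresis noted above, keeping a speaker active across consecutive frames so in-flight processing can complete.

```python
import math

def select_active_speakers(energies, azimuths, source_azimuth,
                           energy_floor=1e-4, max_angle=math.pi / 2):
    """Pick the subset P of the F virtual speakers to render: cull
    speakers with negligible panned energy (low-energy culling) or
    facing away from the source (geometry-based culling)."""
    active = []
    for i, (e, az) in enumerate(zip(energies, azimuths)):
        if e < energy_floor:
            continue                         # bypass this decoder block
        if abs(math.remainder(az - source_azimuth, 2 * math.pi)) > max_angle:
            continue                         # source too far off-axis
        active.append(i)
    return active

# Eight fixed virtual speakers on a ring; a source at 30 degrees azimuth.
azimuths = [i * math.pi / 4 for i in range(8)]
energies = [0.3, 0.2, 0.05, 0.0, 0.0, 0.0, 0.01, 0.25]
print(select_active_speakers(energies, azimuths, math.radians(30)))
```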
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
G10L 19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
G10L 25/21 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being power information
H04S 3/00 - Systems employing more than two channels, e.g. quadraphonic
An HMD comprises a head-mountable frame and a light projection assembly supported by the frame. The light projection assembly comprises a micro-display having a two-dimensional array of pixels, each of which comprises a group of light emitters configured for emitting image light. The light projection assembly further comprises projection optics configured for receiving the image light at an entrance pupil from the group of light emitters of each of the array of pixels, and projecting focused image light from an exit pupil. The light projection assembly further comprises a two-dimensional array of steerable light collimators disposed between the micro-display and the projection optics, each configured for redirecting the emission profile of the corresponding group of light emitters towards a center of the entrance pupil of the projection optics.
33.
MAPPING OF ENVIRONMENTAL AUDIO RESPONSE ON MIXED REALITY DEVICE
This disclosure relates in general to augmented reality (AR), mixed reality (MR), or extended reality (XR) environmental mapping. Specifically, this disclosure relates to AR, MR, or XR audio mapping in an AR, MR, or XR environment. In some embodiments, the disclosed systems and methods allow the environment to be mapped based on a recording. In some embodiments, the audio mapping information is associated with voxels located in the environment.
Disclosed herein are systems and methods for presenting mixed reality audio. In an example method, audio is presented to a user of a wearable head device. A first position of the user's head at a first time is determined based on one or more sensors of the wearable head device. A second position of the user's head at a second time later than the first time is determined based on the one or more sensors. An audio signal is determined based on a difference between the first position and the second position. The audio signal is presented to the user via a speaker of the wearable head device. Determining the audio signal comprises determining an origin of the audio signal in a virtual environment. Presenting the audio signal to the user comprises presenting the audio signal as if originating from the determined origin. Determining the origin of the audio signal comprises applying an offset to a position of the user's head.
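One plausible reading of these steps is sketched below, with the head-position difference used for simple extrapolation and the offset anchoring the source relative to the head; the extrapolation step and all coordinate values are assumptions for illustration, not the patented computation.

```python
def audio_origin(head_pos_t1, head_pos_t2, offset):
    """Determine the audio-signal origin from the change in head
    position between two sensor reads, then apply an offset to the
    head position to anchor the source in the virtual environment."""
    delta = [b - a for a, b in zip(head_pos_t1, head_pos_t2)]
    predicted = [p + d for p, d in zip(head_pos_t2, delta)]   # naive extrapolation
    return [p + o for p, o in zip(predicted, offset)]

# Hypothetical 3-D head positions (metres) and a source 1 m ahead.
origin = audio_origin([0.00, 1.60, 0.00], [0.02, 1.60, 0.00], [0.0, 0.0, -1.0])
print(origin)    # where the spatializer places the signal's origin
```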
An augmented reality display having a world side and a user side includes a world side optical structure including a geometric-phase lens, an eyepiece waveguide, and a user side optical device. A dimming structure having a linear polarizer, a liquid crystal cell, and a quarter-wave plate provides attenuation. A second geometric-phase lens may be part of the user side optical device.
G02F 1/1347 - Arrangement of liquid crystal layers or cells in which the final condition of one light beam is achieved by the addition of the effects of two or more layers or cells
G02F 1/29 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the position or the direction of light beams, i.e. deflection
36.
Nose pad for a head mounted audio-visual display system
Various embodiments relate to a thermal management system for an electronic device, such as an augmented reality or virtual reality device. The thermal management system can comprise an active cooling mechanism in various embodiments to dynamically or actively cool components of the device, for example, by adjusting fan speeds of a fan assembly. In some embodiments, a hardware shutdown mechanism can be provided to shut down the device if software-based thermal management mechanisms are inoperable. In some embodiments, the air flow into and/or within the electronic device can be adjusted to cool various components of the device.
H02H 5/04 - Emergency protective circuit arrangements for automatic disconnection directly responsive to an undesired change from normal non-electric working conditions with or without subsequent reconnection responsive to abnormal temperature
H05K 7/20 - Modifications to facilitate cooling, ventilating, or heating
Disclosed herein are systems and methods for capturing a sound field, in particular using a mixed reality device. In some embodiments, a method comprises: detecting, with a microphone of a first wearable head device, a sound of an environment; determining a digital audio signal based on the detected sound, the digital audio signal associated with a sphere having a position in the environment; detecting, concurrently with detecting the sound, a microphone movement with respect to the environment; and adjusting the digital audio signal, wherein the adjusting comprises adjusting the position of the sphere based on the detected microphone movement.
One embodiment is directed to a system for enabling two or more users to interact within a virtual world comprising virtual world data, comprising a computer network comprising one or more computing devices, the one or more computing devices comprising memory, processing circuitry, and software stored at least in part in the memory and executable by the processing circuitry to process at least a portion of the virtual world data; wherein at least a first portion of the virtual world data originates from a first user virtual world local to a first user, and wherein the computer network is operable to transmit the first portion to a user device for presentation to a second user, such that the second user may experience the first portion from the location of the second user, such that aspects of the first user virtual world are effectively passed to the second user.
H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
A63F 13/35 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers - Details of game servers
A63F 13/92 - Video game devices specially adapted to be hand-held while playing
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 16/954 - Navigation, e.g. using categorised browsing
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
H04L 67/131 - Protocols for games, networked simulations or virtual reality
An eyepiece waveguide for an augmented reality display system includes an optically transmissive substrate, a first in-coupling grating (ICG) region, a second ICG region and one or more pupil expander and extraction gratings. The first ICG region can receive input beams of light corresponding to a first color component of an input image, and can couple them into the substrate. The second ICG region can receive input beams of light corresponding to a second color component of the input image, and can couple them into the substrate. The pupil expander and extraction gratings can replicate the in-coupled beams and out-couple them from the substrate. The first and second ICG regions can be provided at angularly separated locations around the substrate. The eyepiece waveguide can be capable of reducing color distortion in an output image.
G02B 30/22 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the stereoscopic type
G02B 30/25 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the stereoscopic type using polarisation techniques
G02B 30/26 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the autostereoscopic type
G02B 30/20 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes
G02B 30/27 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the autostereoscopic type involving lenticular arrays
G02B 30/28 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer’s left and right eyes of the autostereoscopic type involving lenticular arrays involving active lenticular arrays
G02B 30/34 - Stereoscopes providing a stereoscopic pair of separated images corresponding to parallactically displaced views of the same object, e.g. 3D slide viewers
An augmented reality system includes a light source configured to generate a virtual light beam. The system also includes a light guiding optical element having an entry portion, an exit portion, and a surface having a diverter disposed adjacent thereto. The light source and the light guiding optical element are configured such that the virtual light beam enters the light guiding optical element through the entry portion, propagates through the light guiding optical element by at least partially reflecting off of the surface, and exits the light guiding optical element through the exit portion. The light guiding optical element is transparent to a first real-world light beam. The diverter is configured to modify a light path of a second real-world light beam at the surface.
G02B 30/52 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels the 3D volume being constructed from a stack or sequence of 2D planes, e.g. depth sampling systems
An augmented reality (AR) device can be configured to generate a virtual representation of a user's physical environment. The AR device can capture images of the user's physical environment to generate a mesh map. The AR device can project graphics at designated locations on a virtual bounding box to guide the user to capture images of the user's physical environment. The AR device can provide visual, audible, or haptic guidance to direct the user of the AR device to look toward waypoints to generate the mesh map of the user's environment.
Disclosed herein are systems and methods for presenting and annotating virtual content. According to an example method, a virtual object is presented to a first user at a first position via a transmissive display of a wearable device. A first input is received from the first user. In response to receiving the first input, a virtual annotation is presented at a first displacement from the first position. A first data is transmitted to a second user, the first data associated with the virtual annotation and the first displacement. A second input is received from the second user. In response to receiving the second input, the virtual annotation is presented to the first user at a second displacement from the first position. Second data is transmitted to a remote server, the second data associated with the virtual object, the virtual annotation, the second displacement, and the first position.
Systems and methods for compressing dynamic unstructured point clouds. A dynamic unstructured point cloud can be mapped to a skeletal system of a subject to form one or more structured point cloud representations. One or more sequences of the structured point cloud representations can be formed. The one or more sequences of structured point cloud representations can then be compressed.
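One way to map an unstructured cloud onto a skeletal system is sketched below, assuming the skeleton is available as a set of joint positions: each point is re-expressed as a nearest-joint index plus an offset, a structured form that is stable across frames and thus amenable to sequence compression (e.g., delta-coding the offsets). The nearest-joint rule is an illustrative choice, not necessarily the patented mapping.

```python
import numpy as np

def structure_points(points, joint_positions):
    """Re-express each point as (nearest joint index, offset from that
    joint), turning an unstructured cloud into a structured one."""
    diffs = points[:, None, :] - joint_positions[None, :, :]
    nearest = np.argmin((diffs ** 2).sum(axis=-1), axis=1)
    offsets = points - joint_positions[nearest]
    return nearest, offsets

pts = np.random.rand(1000, 3).astype(np.float32)     # one frame of the cloud
joints = np.random.rand(20, 3).astype(np.float32)    # skeletal system of the subject
idx, off = structure_points(pts, joints)
print(idx.shape, off.shape)                          # (1000,) (1000, 3)
```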
Controllable three-dimensional (3D) virtual dioramas in a rendered 3D environment, such as a virtual reality or augmented reality environment including one or more rendered objects. The 3D diorama is associated with a spatial computing content item, such as a downloadable application executable by a computing device. 3D diorama assets may include visual and/or audio content and are presented with rendered 3D environment objects in a composite view, which is presented to a user through a display of the computing device. The 3D diorama is rotatable in the composite view, and at least one 3D diorama asset at least partially occludes, or is at least partially occluded by, at least one rendered 3D environment object. The 3D diorama may depict or provide a preview of a spatial computing user experience generated by the downloadable application.
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04N 13/279 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
An extended reality display system includes a display subsystem configured to present an image corresponding to image data to a user. The display subsystem includes an optical component that introduces a non-uniformity to the image, a segmented illumination light source, and a spatial light modulator (SLM) configured to receive light from the segmented illumination light source. The system also includes a display controller configured to control the segmented illumination light source. The display controller includes a memory for storing non-uniformity correction information, and a processor to control the segmented illumination light source based on the non-uniformity correction information. The segmented illumination light source is configured to differentially illuminate first and second portions of the SLM using respective first and second portions of the segmented illumination light source.
47.
METHODS AND APPARATUSES FOR CASTING POLYMER PRODUCTS
In an example method of forming a waveguide part having a predetermined shape, a photocurable material is dispensed into a space between a first mold portion and a second mold portion opposite the first mold portion. A relative separation between a surface of the first mold portion with respect to a surface of the second mold portion opposing the surface of the first mold portion is adjusted to fill the space between the first and second mold portions. The photocurable material in the space is irradiated with radiation suitable for photocuring the photocurable material to form a cured waveguide film so that different portions of the cured waveguide film have different rigidity. The cured waveguide film is separated from the first and second mold portions. The waveguide part is singulated from the cured waveguide film. The waveguide part corresponds to portions of the cured waveguide film having a higher rigidity than other portions of the cured waveguide film.
A fan assembly is disclosed. The fan assembly can include a first support frame. The fan assembly can comprise a shaft assembly having a first end coupled with the first support frame and a second end disposed away from the first end. A second support frame can be coupled with the first support frame and disposed at or over the second end of the shaft assembly. An impeller can have fan blades coupled with a hub, the hub being disposed over the shaft assembly for rotation between the first and second support frames about a longitudinal axis. Transverse loading on the shaft assembly can be controlled by the first and second support frames.
A wearable ophthalmic device may include a head-mounted light field display configured to generate a physical light field comprising a beam of light. Camera(s) on or in communication with the device may receive light from the surroundings, and a light field processor may determine, based on the light, left and right numerical light field image data describing image(s) to be displayed to the left and right eyes respectively. The left and/or right numerical light field image data can be modified to computationally introduce a shift based on a determined convergence point of the eyes, and the physical light field presented to the user can be generated corresponding to the modified numerical light field image data, e.g., to correct for a convergence deficiency of the eye(s).
Systems and methods for displaying a virtual reticle in an augmented or virtual reality environment by a wearable device are described. The environment can include real or virtual objects that may be interacted with by the user through a variety of poses, such as, e.g., head pose, eye pose or gaze, or body pose. The user may select objects by pointing the virtual reticle toward a target object by changing pose or gaze. The wearable device can recognize that an orientation of a user's head or eyes is outside of a range of acceptable or comfortable head or eye poses and accelerate the movement of the reticle away from a default position and toward a position in the direction of the user's head or eye movement, which can reduce the amount of movement by the user to align the reticle and target.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04812 - Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
G06V 40/18 - Eye characteristics, e.g. of the iris
51.
EYEPIECE FOR HEAD-MOUNTED DISPLAY AND METHOD FOR MAKING THE SAME
A method includes providing a wafer including a first surface grating extending over a first area of a surface of the wafer and a second surface grating extending over a second area of the surface of the wafer; de-functionalizing a portion of the surface grating in at least one of the first surface grating area and the second surface grating area; and singulating an eyepiece from the wafer, the eyepiece including a portion of the first surface grating area and a portion of the second surface grating area. The first surface grating in the eyepiece corresponds to an input coupling grating for a head-mounted display, and the second surface grating corresponds to a pupil expander grating for the head-mounted display.
Disclosed herein are systems and methods for setting, accessing, and modifying user privacy settings using a distributed ledger. In an aspect, a system can search previously stored software contracts to locate an up-to-date version of a software contract associated with a user based on a request for access to user data for the particular user. Then, the system determines that the user data is permitted to be shared. The system transmits, to a data virtualization platform, instructions to extract encrypted user data from a data platform. The system can then make available, to a data verification system, a private encryption key and details associated with the software contract to verify that the private encryption key and the user data match. Then the system transmits, to the data virtualization platform, the private encryption key so that the data virtualization platform can decrypt the encrypted user data.
Neutral avatars are neutral with reference to physical characteristics of the corresponding user, such as weight, ethnicity, gender, or even identity. Thus, neutral avatars may be desirable in various copresence environments where the user wishes to maintain privacy with reference to the above-noted characteristics. Neutral avatars may be configured to convey, in real-time, actions and behaviors of the corresponding user without using literal forms of the user's actions and behaviors.
Systems, apparatus, and methods for double-sided imprinting are provided. An example system includes first rollers for moving a first web including a first template having a first imprinting feature, second rollers for moving a second web including a second template having a second imprinting feature, dispensers for dispensing resist, a locating system for locating reference marks on the first and second webs for aligning the first and second templates, a light source for curing the resist, such that a cured first resist has a first imprinted feature corresponding to the first imprinting feature on one side of the substrate and a cured second resist has a second imprinted feature corresponding to the second imprinting feature on the other side of the substrate, and a moving system for feeding in the substrate between the first and second templates and unloading the double-imprinted substrate from the first and second webs.
B29C 59/04 - Surface shaping, e.g. embossing; Apparatus therefor by mechanical means, e.g. pressing using rollers or endless belts
B29C 43/22 - Compression moulding, i.e. applying external pressure to flow the moulding material; Apparatus therefor of articles of indefinite length
B29C 43/28 - Compression moulding, i.e. applying external pressure to flow the moulding material; Apparatus therefor of articles of indefinite length incorporating preformed parts or layers, e.g. compression moulding around inserts or for coating articles
B29C 43/30 - Making multilayered or multicoloured articles
B29C 43/34 - Feeding the material to the mould or the compression means
B29C 51/26 - Component parts, details or accessories; Auxiliary operations
G03F 7/00 - Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printed surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
55.
POWER-EFFICIENT HAND TRACKING WITH TIME-OF-FLIGHT SENSOR
Techniques are disclosed for operating a time-of-flight (TOF) sensor. The TOF sensor may be operated in a low power mode by repeatedly performing a low power mode sequence, which may include performing a depth frame by emitting light pulses, detecting reflected light pulses, and computing a depth map based on the detected reflected light pulses. Performing the low power mode sequence may also include performing an amplitude frame at least one time by emitting a light pulse, detecting a reflected light pulse, and computing an amplitude map based on the detected reflected light pulse. In response to determining that an activation condition is satisfied, the TOF sensor may be switched to operate in a high accuracy mode by repeatedly performing a high accuracy mode sequence, which may include performing the depth frame multiple times.
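The two sequences can be viewed as a small mode loop, sketched below; the `sensor` object and its frame methods, the repeat count of four, and the activation test are all hypothetical stand-ins rather than the disclosed implementation.

```python
import time

LOW_POWER, HIGH_ACCURACY = "low_power", "high_accuracy"

def run_tof(sensor, activation_condition):
    """Mode loop: in low power, one depth frame plus at least one
    amplitude frame per sequence; once the activation condition is
    satisfied (e.g., a hand enters the field of view), switch to a
    high accuracy sequence performing the depth frame multiple times."""
    mode = LOW_POWER
    while True:
        if mode == LOW_POWER:
            depth = sensor.depth_frame()          # emit, detect, compute depth map
            amplitude = sensor.amplitude_frame()  # single-pulse amplitude map
            if activation_condition(depth, amplitude):
                mode = HIGH_ACCURACY
        else:
            depths = [sensor.depth_frame() for _ in range(4)]
            if not activation_condition(depths[-1], None):
                mode = LOW_POWER                  # drop back to save power
        time.sleep(0.01)                          # frame-pacing placeholder
```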
Examples of systems and methods for rendering an avatar in a mixed reality environment are disclosed. The systems and methods may be configured to automatically select avatar characteristics that optimize gaze perception by the user, based on context parameters associated with the virtual environment.
A display assembly suitable for use with a virtual or augmented reality headset is described and includes the following: an input coupling grating; a scanning mirror configured to rotate about two or more different axes of rotation; an optical element; and optical fibers, each of which have a light emitting end disposed between the input coupling grating and the scanning mirror and oriented such that light emitted from the light emitting end is refracted through at least a portion of the optical element, reflected off the scanning mirror, refracted back through the optical element and into the input coupling grating. The scanning mirror can be built upon a MEMS type architecture.
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
A method of forming a waveguide for an eyepiece for a display system to reduce optical degradation of the waveguide during segmentation is disclosed herein. The method includes providing a substrate having top and bottom major surfaces and a plurality of surface features, and using a laser beam to cut out a waveguide from said substrate by cutting along a path contacting and/or proximal to said plurality of surface features. The waveguide has edges formed by the laser beam and a main region and a peripheral region surrounding the main region. The peripheral region is surrounded by the edges.
An augmented reality (AR) system includes a handheld device comprising handheld fiducials affixed to the handheld device. The AR system also includes a wearable device comprising a display operable to display virtual content and an imaging device mounted to the wearable device and having a field of view that at least partially includes the handheld fiducials and a hand of a user. The AR system also includes a computing apparatus configured to receive hand pose data associated with the hand based on an image captured by the imaging device and receive handheld device pose data associated with the handheld device based on the image captured by the imaging device. The computing apparatus is also configured to determine a pose discrepancy between the hand pose data and the handheld device pose data and perform an operation to fuse the hand pose data with the handheld device pose data.
Disclosed herein are systems and methods for colocating virtual content. A method may include receiving first persistent coordinate data, second persistent coordinate data, and relational data. A third persistent coordinate data and a fourth persistent coordinate data may be determined based on input received via one or more sensors of a head-wearable device. It can be determined whether the first persistent coordinate data corresponds to the third persistent coordinate data. In accordance with a determination that the first persistent coordinate data corresponds to the third persistent coordinate data, it can be determined whether the second persistent coordinate data corresponds to the fourth persistent coordinate data. In accordance with a determination that the second persistent coordinate data corresponds to the fourth persistent coordinate data, a virtual object can be displayed using the relational data and the second persistent coordinate data via a display of the head-wearable device. In accordance with a determination that the second persistent coordinate data does not correspond to the fourth persistent coordinate data, the virtual object can be displayed using the relational data and the first persistent coordinate data via the display of the head-wearable device. In accordance with a determination that the first persistent coordinate data does not correspond to the third persistent coordinate data, the method may forgo displaying the virtual object via the display of the head-wearable device.
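The correspondence checks above form a simple decision tree, sketched below in Python; corresponds and display are assumed callables standing in for the persistent-coordinate comparison and the rendering step, and are not the patent's API.

    def place_virtual_object(first_pcd, second_pcd, third_pcd, fourth_pcd,
                             relational_data, display, corresponds):
        """Decision tree from the abstract; `corresponds` compares persistent
        coordinate data and `display(relational_data, pcd)` renders."""
        if not corresponds(first_pcd, third_pcd):
            return None                              # forgo displaying
        if corresponds(second_pcd, fourth_pcd):
            return display(relational_data, second_pcd)
        return display(relational_data, first_pcd)   # fall back to first PCD

For instance, calling it with corresponds=lambda a, b: a == b reproduces the three outcomes the abstract enumerates.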
An electronic device is disclosed. The electronic device comprises a first clock configured to operate at a frequency. First circuitry of the electronic device is configured to synchronize with the first clock. Second circuitry is configured to determine a second clock based on the first clock. The second clock is configured to operate at the frequency of the first clock, and is further configured to operate with a phase shift with respect to the first clock. Third circuitry is configured to synchronize with the second clock.
H03L 7/081 - Automatic control of frequency or phase; Synchronisation using a reference signal applied to a frequency- or phase-locked loop - Details of the phase-locked loop provided with an additional controlled phase shifter
62.
WIDE FIELD-OF-VIEW POLARIZATION SWITCHES WITH LIQUID CRYSTAL OPTICAL ELEMENTS WITH PRETILT
A switchable optical assembly comprises a switchable waveplate configured to be electrically activated and deactivated to selectively alter the polarization state of light incident on the switchable waveplate. The switchable waveplate comprises first and second surfaces and a liquid crystal layer disposed between the first and second surfaces. The liquid crystal layer comprises a plurality of liquid crystal molecules. The first surface and/or the second surface may be planar. The first surface and/or the second surface may be curved. The plurality of liquid crystal molecules may vary in tilt with respect to the first and second surfaces with outward radial distance from an axis through the first and second surfaces and the liquid crystal layer in a plurality of radial directions. The switchable waveplate can include a plurality of electrodes to apply an electrical signal across the liquid crystal layer.
G02F 1/137 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering
G02B 30/34 - Stereoscopes providing a stereoscopic pair of separated images corresponding to parallactically displaced views of the same object, e.g. 3D slide viewers
G02F 1/01 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour
G02F 1/13 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells
An optical projection system includes a source of collimated light, a first microelectromechanical system mirror positioned to receive collimated light from the source, and an optical relay system positioned to receive collimated light from the first microelectromechanical system mirror. The optical relay system includes a single-pass relay having a first component, a second component, and a third component. The optical projection system also includes a second microelectromechanical system mirror positioned to receive collimated light from the optical relay system and an eyepiece positioned to receive light reflected from the second microelectromechanical system mirror.
An optical scanner includes a base region and a cantilevered silicon beam protruding from the base region. The optical scanner also includes a waveguide disposed on the base region and the cantilevered silicon beam and a transducer assembly comprising one or more piezoelectric actuators coupled to the cantilevered silicon beam and configured to induce motion of the cantilevered silicon beam in a scan pattern.
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using depth data to update camera calibration data. In some implementations, a frame of data is captured including (i) depth data from a depth sensor of a device, and (ii) image data from a camera of the device. Selected points from the depth data are transformed, using camera calibration data for the camera, to a three-dimensional space that is based on the image data. The transformed points are projected onto the two-dimensional image data from the camera. Updated camera calibration data is generated based on differences between (i) the locations of the projected points and (ii) the locations at which features representing the selected points appear in the two-dimensional image data from the camera. The updated camera calibration data can be used in a simultaneous localization and mapping process.
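The update is driven by reprojection differences, which can be sketched as follows; the pinhole projection with an intrinsics matrix K is an assumed camera model, and the function name is illustrative.

    import numpy as np

    def reprojection_residuals(points_cam, feature_px, K):
        """points_cam: Nx3 depth points already transformed into the camera
        frame using the current calibration; feature_px: Nx2 observed
        feature locations; K: 3x3 intrinsics (assumed pinhole model)."""
        proj = (K @ points_cam.T).T           # homogeneous pixel coordinates
        proj = proj[:, :2] / proj[:, 2:3]     # perspective divide
        return proj - feature_px              # per-point reprojection error

An update step would then adjust the calibration data (for example the depth-to-camera extrinsics) to shrink these residuals, e.g. by least squares, before handing the result to the simultaneous localization and mapping process.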
An example head-mounted display device includes a light projector and an eyepiece. The eyepiece is arranged to receive light from the light projector and direct the light to a user during use of the device. The eyepiece includes a waveguide having an edge positioned to receive light from the light projector and couple the light into the waveguide. The waveguide includes a first surface and a second surface opposite the first surface. The waveguide includes several different regions, each having different grating structures configured to diffract light according to different sets of grating vectors.
Blazed diffraction gratings provide optical elements in head-mounted display systems to, e.g., incouple light into or outcouple light out of a waveguide. These blazed diffraction gratings may be configured to have reduced polarization sensitivity. Such gratings may, for example, incouple or outcouple light of different polarizations with similar levels of efficiency. The blazed diffraction gratings and waveguides may be formed in a high refractive index substrate such as lithium niobate. In some implementations, the blazed diffraction gratings may include diffractive features having a feature height of 40 nm to 120 nm, for example, 80 nm. The diffractive features may be etched into the high index substrate, e.g., lithium niobate.
Methods are disclosed for fabricating molds for forming waveguides with integrated spacers for forming eyepieces. The molds are formed by etching large features (e.g., 1 μm to 1000 μm deep) into a substrate comprising single crystalline material using an anisotropic wet etch. The etch masks for defining the large features may comprise a plurality of holes, wherein the size and shape of each hole at least partially determine the depth of the corresponding large feature. The holes may be aligned along a crystal axis of the substrate, and the etching may stop automatically due to the crystal structure of the substrate. The patterned substrate may be utilized as a mold onto which a flowable polymer may be introduced and allowed to harden; hardened polymer in the holes may form a waveguide with integrated spacers. The mold may also be used to fabricate a platform comprising a plurality of vertically extending microstructures of precise heights, to test the curvature or flatness of a sample, e.g., based on the amount of contact between the microstructures and the sample.
B29C 43/02 - Compression moulding, i.e. applying external pressure to flow the moulding material; Apparatus therefor of articles of definite length, i.e. discrete articles
B29C 33/42 - SHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING - Details thereof or accessories therefor characterised by the shape of the moulding surface, e.g. ribs or grooves
B29C 43/40 - Moulds for making articles of definite length, i.e. discrete articles with means for cutting the article
B29L 11/00 - Optical elements, e.g. lenses, prisms
69.
CROSS REALITY SYSTEM WITH SIMPLIFIED PROGRAMMING OF VIRTUAL CONTENT
A cross reality system that renders virtual content generated by executing native mode applications may be configured to render web-based content using components that render content from native applications. The system may include a Prism manager that provides Prisms in which content from executing native applications is rendered. For rendering web-based content, a browser accessing the web-based content may be associated with a Prism and may render content into its associated Prism, creating the same immersive experience for the user as when content is generated by a native application. The user may access the web application from the same program launcher menu as native applications. The system may have tools that enable a user to access these capabilities, including by creating, for a web location, an installable entity that, when processed by the system, results in an icon for the web content in a program launcher menu.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
Techniques are described for operating an optical system. In some embodiments, light associated with a world object is received at the optical system. Virtual image light is projected onto an eyepiece of the optical system. A portion of a system field of view of the optical system to be at least partially dimmed is determined based on information detected by the optical system. A plurality of spatially-resolved dimming values for the portion of the system field of view may be determined based on the detected information. The detected information may include light information, gaze information, and/or image information. A dimmer of the optical system may be adjusted to reduce an intensity of light associated with the world object in the portion of the system field of view according to the plurality of dimming values.
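One plausible way to realize spatially-resolved dimming values is a per-pixel mask that is strongest near the gazed-at world object and scaled by ambient light; the falloff model, radius, and parameter names in the Python sketch below are all assumptions, not the patent's formulation.

    import numpy as np

    def dimming_values(h, w, gaze_px, ambient_lux, radius=80.0, max_dim=0.8):
        """Per-pixel dimming values over the dimmed portion of the system
        field of view, peaking at the gaze point and scaled by brightness."""
        ys, xs = np.mgrid[0:h, 0:w]
        dist = np.hypot(xs - gaze_px[0], ys - gaze_px[1])
        falloff = np.clip(1.0 - dist / radius, 0.0, 1.0)  # 1 at gaze point
        brightness = min(ambient_lux / 1000.0, 1.0)       # dim more when bright
        return max_dim * falloff * brightness             # values in [0, max_dim]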
A wearable device can present virtual content to the wearer for many applications in a healthcare setting. The wearer may be a patient or a healthcare provider (HCP). Such applications can include, but are not limited to, access, display, and modification of patient medical records and sharing patient medical records among authorized HCPs.
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
A61B 3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
A61B 3/10 - Objective types, i.e. instruments for examining the eyes independent of the patients perceptions or reactions
A61B 3/113 - Objective types, i.e. instruments for examining the eyes independent of the patients perceptions or reactions for determining or recording eye movement
A61B 5/00 - Measuring for diagnostic purposes ; Identification of persons
A61B 5/06 - Devices, other than using radiation, for detecting or locating foreign bodies
A61B 5/1171 - Identification of persons based on the shapes or appearances of their bodies or parts thereof
A61B 17/00 - Surgical instruments, devices or methods, e.g. tourniquets
A61B 34/20 - Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
A61B 90/00 - Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups , e.g. for luxation treatment or for protecting wound edges
A61B 90/50 - Supports for surgical instruments, e.g. articulated arms
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups ,
G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
G16H 40/67 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
72.
TWO-DIMENSIONAL MICRO-ELECTRICAL MECHANICAL SYSTEM MIRROR AND ACTUATION METHOD
A two-dimensional scanning micromirror device includes a base, a first platform coupled to the base by first support flexures, and a second platform including a reflector and coupled to the first platform by second support flexures. The first platform is oscillatable about a first axis and the second platform is oscillatable about a second axis orthogonal to the first axis. The first platform, the second platform, and the second support flexures together exhibit a first resonance having a first frequency; the first resonance corresponds to oscillatory motion of at least the first platform, the second platform, and the second support flexures about the first axis. The first platform, the second platform, and the second support flexures together exhibit a second resonance having a second frequency, and the second resonance corresponds to oscillatory motion of at least the second platform about the second axis.
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
G03B 21/00 - Projectors or projection-type viewers; Accessories therefor
A display system can include a head-mounted display configured to project light to an eye of a user to display virtual image content at different amounts of divergence and collimation. The display system can include an inward-facing imaging system, possibly comprising a plurality of cameras, that images the user's eye and glints thereon, and processing electronics that are in communication with the inward-facing imaging system and configured to obtain an estimate of the center of the cornea of the user's eye using data derived from the glint images. The display system may use spherical and aspheric cornea models to estimate a location of the corneal center of the user's eye.
A61B 3/113 - Objective types, i.e. instruments for examining the eyes independent of the patients perceptions or reactions for determining or recording eye movement
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
Examples of systems and methods for matching a base mesh to a target mesh for a virtual avatar or object are disclosed. The systems and methods may be configured to automatically match a base mesh of an animation rig to a target mesh, which may represent a particular pose of the virtual avatar or object. Base meshes may be obtained by manipulating an avatar or object into a particular pose, while target meshes may be obtained by scanning, photographing, or otherwise obtaining information about a person or object in the particular pose. The systems and methods may automatically match a base mesh to a target mesh using rigid transformations in regions of higher error and non-rigid deformations in regions of lower error.
A sensory eyewear system for a mixed reality device can facilitate a user's interactions with other people or with the environment. As one example, the sensory eyewear system can recognize and interpret a sign language and present the translated information to a user of the mixed reality device. The wearable system can also recognize text in the user's environment, modify the text (e.g., by changing the content or display characteristics of the text), and render the modified text to occlude the original text.
A wearable computing system that includes a head-mounted display implements a gaze timer feature for enabling the user to temporarily extend the functionality of a handheld controller or other user input device. In one embodiment, when the user gazes at, or in the vicinity of, a handheld controller for a predetermined period of time, the functionality of one or more input elements (e.g., buttons) of the handheld controller is temporarily modified. For example, the function associated with a particular controller button may be modified to enable the user to open a particular menu using the button. The gaze timer feature may, for example, be used to augment the functionality of a handheld controller or other user input device during mixed reality and/or augmented reality sessions. A minimal dwell-timer sketch, using assumed names and thresholds, follows the classification entries below.
G06F 3/023 - Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
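Here is that sketch of the gaze-timer behavior from the abstract above, assuming a dwell threshold and a single extended/default state; the class name and the one-second default are illustrative, not the patent's.

    import time

    class GazeTimer:
        """Dwell timer: gazing at (or near) the controller for dwell_s
        seconds temporarily remaps its input elements."""
        def __init__(self, dwell_s=1.0):
            self.dwell_s = dwell_s
            self.gaze_start = None
            self.extended = False

        def update(self, gazing_at_controller, now=None):
            now = time.monotonic() if now is None else now
            if gazing_at_controller:
                if self.gaze_start is None:
                    self.gaze_start = now
                if now - self.gaze_start >= self.dwell_s:
                    self.extended = True   # e.g. button now opens a menu
            else:
                self.gaze_start = None
                self.extended = False      # restore the default function
            return self.extended

An application would call update() once per frame with the result of a gaze-ray test against the controller's bounds.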
Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a source location corresponding to the audio signal is identified. For each of the user's left and right ears, a virtual speaker position of a virtual speaker array is determined, the virtual speaker position being collinear with the source location and with a position of the respective ear. For each ear, a head-related transfer function (HRTF) corresponding to the virtual speaker position and to the respective ear is determined, and the output audio signal is presented to the respective ear of the user via one or more speakers associated with the wearable head device. Processing the audio signal includes applying the HRTF to the audio signal.
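The collinearity constraint reduces to a ray-sphere intersection if the virtual speaker array is modeled as a sphere about the head center; that spherical geometry and the radius are assumptions for illustration, since the abstract does not specify the array's shape.

    import numpy as np

    def virtual_speaker_position(source, ear, array_radius=1.0):
        """Point on a virtual speaker sphere (centered at the origin, taken
        here as the head center) collinear with the source and the ear."""
        d = source - ear
        d = d / np.linalg.norm(d)
        # Ray-sphere intersection: |ear + t*d| = array_radius, with |d| = 1.
        b = 2.0 * np.dot(ear, d)
        c = np.dot(ear, ear) - array_radius ** 2
        t = (-b + np.sqrt(b * b - 4.0 * c)) / 2.0  # forward intersection
        return ear + t * d   # the HRTF for this position/ear is then applied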
The disclosure relates to systems and methods for displaying three-dimensional (3D) content in a spatial 3D environment. The systems and methods can include receiving a request from a web domain to display 3D content of certain dimensions at a location within the spatial 3D environment, identifying whether the requested placement is within an authorized portion of the spatial 3D environment, expanding the authorized portion of the spatial 3D environment to display the 3D content based on a user authorization to resize the authorized portion, and displaying the 3D content.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 16/954 - Navigation, e.g. using categorised browsing
H04N 13/332 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
80.
METHOD AND SYSTEM FOR PERFORMING SPATIAL FOVEATION BASED ON EYE GAZE
A method includes determining an eye gaze location of a user and generating a spatial foveation map based on the eye gaze location. The method also includes receiving an image, forming a spatially foveated image using the image and the spatial foveation map, and transmitting the spatially foveated image to a wearable device. The method further includes spatially defoveating the spatially foveated image to produce a spatially defoveated image and displaying the spatially defoveated image.
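A spatial foveation map can be sketched as a per-pixel downsampling factor that grows with eccentricity from the gaze point; the level and radius choices below are illustrative assumptions, not values from the patent.

    import numpy as np

    def spatial_foveation_map(h, w, gaze_px,
                              levels=(1, 2, 4), radii=(100, 250)):
        """Per-pixel downsampling factor as a function of eccentricity
        from the gaze point."""
        ys, xs = np.mgrid[0:h, 0:w]
        ecc = np.hypot(xs - gaze_px[0], ys - gaze_px[1])
        fmap = np.full((h, w), levels[2])   # coarsest in the periphery
        fmap[ecc < radii[1]] = levels[1]    # mid-resolution ring
        fmap[ecc < radii[0]] = levels[0]    # full resolution at the fovea
        return fmap

Defoveation on the wearable would invert the map, upsampling each region by its recorded factor before display.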
An eyepiece for projecting an image to a viewer includes a substrate positioned in a substrate lateral plane and a set of color filters disposed on the substrate. The set of color filters comprise a first color filter disposed at a first lateral position and operable to pass a first wavelength range, a second color filter disposed at a second lateral position and operable to pass a second wavelength range, and a third color filter disposed at a third lateral position and operable to pass a third wavelength range. The eyepiece further includes a first planar waveguide positioned in a first lateral plane adjacent the substrate lateral plane, a second planar waveguide positioned in a second lateral plane adjacent to the first lateral plane, and a third planar waveguide positioned in a third lateral plane adjacent to the second lateral plane.
A method of fabricating a fiber scanning system includes forming a set of piezoelectric elements. The method also includes coating an interior surface and an exterior surface of each of the set of piezoelectric elements with a first conductive material. The method also includes providing a fiber optic element having an actuation region and coating the actuation region of the fiber optic element with a second conductive material. The method also includes joining the interior surfaces of the set of piezoelectric elements to the actuation region of the fiber optic element and poling the set of piezoelectric elements. The method also includes forming electrical connections to the exterior surface of each of the set of piezoelectric elements and the fiber optic element.
H02N 2/00 - Electric machines in general using piezoelectric effect, electrostriction or magnetostriction
H02N 2/02 - Electric machines in general using piezoelectric effect, electrostriction or magnetostriction producing linear motion, e.g. actuators; Linear positioners
H10N 30/045 - Treatments to modify a piezoelectric or electrostrictive property, e.g. polarisation characteristics, vibration characteristics or mode tuning by polarising
A thin transparent layer can be integrated in a head mounted display device and disposed in front of the eye of a wearer. The thin transparent layer may be configured to output light such that light is directed onto the eye to create reflections therefrom that can be used, for example, for glint-based tracking. The thin transparent layer can be configured to reduce obstructions in the user's field of view.
A handheld controller includes a housing having a frame, one or more external surfaces, and a plurality of vibratory external surfaces. The handheld controller also includes a plurality of vibration sources disposed in the housing, where the one or more external surfaces and the plurality of vibration sources are mechanically coupled to the frame. The handheld controller also includes a plurality of structural members, where each of the plurality of structural members mechanically couples one of the plurality of vibratory external surfaces to one of the plurality of vibration sources.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A63F 13/245 - Constructional details thereof, e.g. game controllers with detachable joystick handles specially adapted to a particular type of game, e.g. steering wheels
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
85.
EYE-IMAGING APPARATUS USING DIFFRACTIVE OPTICAL ELEMENTS
Examples of eye-imaging apparatus using diffractive optical elements are provided. For example, an optical device comprises a substrate having a proximal surface and a distal surface, a first coupling optical element disposed on one of the proximal and distal surfaces of the substrate, and a second coupling optical element disposed on one of the proximal and distal surfaces of the substrate and offset from the first coupling optical element. The first coupling optical element can be configured to deflect light at an angle to totally internally reflect (TIR) the light between the proximal and distal surfaces and toward the second coupling optical element, and the second coupling optical element can be configured to deflect the light at an angle out of the substrate. The eye-imaging apparatus can be used in a head-mounted display such as an augmented or virtual reality display.
One embodiment is directed to a system for enabling two or more users to interact within a virtual world comprising virtual world data. The system comprises a computer network of one or more computing devices, each comprising memory, processing circuitry, and software stored at least in part in the memory and executable by the processing circuitry to process at least a portion of the virtual world data. At least a first portion of the virtual world data originates from a first user's virtual world local to the first user, and the computer network is operable to transmit the first portion to a user device for presentation to a second user, such that the second user may experience the first portion from the second user's location, with aspects of the first user's virtual world effectively passed to the second user.
Disclosed herein are systems and methods for calculating angular acceleration based on inertial data using two or more inertial measurement units (IMUs). The calculated angular acceleration may be used to estimate a position of a wearable head device comprising the IMUs. Virtual content may be presented based on the position of the wearable head device. In some embodiments, a first IMU and a second IMU share a coincident measurement axis. A worked sketch of the underlying rigid-body relation follows the classification entries below.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G01P 15/08 - Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration by making use of inertia forces with conversion into electric or magnetic values
G01P 15/16 - Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration by evaluating the time-derivative of a measured speed signal
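The two-IMU computation rests on the rigid-body relation a2 = a1 + alpha x r + omega x (omega x r), where r is the lever arm between the IMUs. Solving for the component of alpha observable from one IMU pair gives the sketch below; the names and the single-pair simplification are assumptions, not the patent's formulation.

    import numpy as np

    def angular_acceleration(a1, a2, omega, r):
        """Estimate angular acceleration from two rigidly mounted IMUs.
        a1, a2: accelerometer readings; omega: angular velocity from the
        gyroscopes; r: lever arm from IMU 1 to IMU 2. Only the component
        of alpha perpendicular to r is observable from a single pair."""
        residual = a2 - a1 - np.cross(omega, np.cross(omega, r))  # alpha x r
        return np.cross(r, residual) / np.dot(r, r)  # perpendicular component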
88.
INTERACTIONS WITH 3D VIRTUAL OBJECTS USING POSES AND MULTIPLE-DOF CONTROLLERS
A wearable system can comprise a display system configured to present virtual content in a three-dimensional space, a user input device configured to receive a user input, and one or more sensors configured to detect a user's pose. The wearable system can support various user interactions with objects in the user's environment based on contextual information. As an example, the wearable system can adjust the size of an aperture of a virtual cone during a cone cast (e.g., with the user's poses) based on the contextual information. As another example, the wearable system can adjust the amount of movement of virtual objects associated with an actuation of the user input device based on the contextual information. An illustrative aperture-adjustment sketch follows the classification entries below.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06F 1/16 - Constructional details or arrangements
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
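The context-driven aperture adjustment might look like the following sketch; the specific context parameters (object density, pose jitter) and the scaling rules are assumptions chosen for illustration, not the patent's.

    def cone_aperture(base_aperture, object_density, pose_jitter=0.0):
        """Adjust the virtual cone's aperture from contextual information:
        sparse scenes widen the cone, dense scenes narrow it, and an
        unsteady pose widens it."""
        density_scale = 1.0 / (1.0 + object_density)  # dense -> narrower
        jitter_scale = 1.0 + pose_jitter              # unsteady -> wider
        return base_aperture * density_scale * jitter_scale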
89.
THREAD WEAVE FOR CROSS-INSTRUCTION SET ARCHITECTURE PROCEDURE CALLS
The invention provides a method of initiating code including (i) storing an application having first, second and third functions, the first function being a main function that calls the second and third functions to run the application, (ii) compiling the application for first and second heterogeneous processors to create first and second central processing unit (CPU) instruction set architecture (ISA) objects respectively, (iii) pruning the first and second CPU ISA objects by removing the third function from the first CPU ISA objects and removing the first and second functions from the second CPU ISA objects, (iv) proxy inserting first and second remote procedure calls (RPCs) in the first and second CPU ISA objects respectively, pointing respectively to the third function in the second CPU ISA objects and the second function in the first CPU ISA objects, and (v) section renaming the second CPU ISA objects to a common application library.
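The proxy-insertion step (iv) can be illustrated in miniature: a pruned function is replaced by a stub that forwards the call over an inter-processor channel. The Python below is a conceptual sketch only; the patent operates on compiled CPU ISA objects rather than Python functions, and LoopbackTransport is a stand-in for the real channel.

    class LoopbackTransport:
        """Stand-in for the channel between the two processors."""
        def __init__(self, remote_functions):
            self.remote_functions = remote_functions

        def call_remote(self, name, args):
            # Marshal the call to the processor that kept the function.
            return self.remote_functions[name](*args)

    def make_rpc_proxy(func_name, transport):
        """Proxy inserted where a pruned function used to live."""
        def proxy(*args):
            return transport.call_remote(func_name, args)
        proxy.__name__ = func_name
        return proxy

    # On the first processor, the pruned third function becomes a proxy:
    transport = LoopbackTransport({"third_function": lambda x: x * 2})
    third_function = make_rpc_proxy("third_function", transport)
    assert third_function(21) == 42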
Head-mounted virtual and augmented reality display systems include a light projector with one or more emissive micro-displays having a first resolution and a pixel pitch. The projector outputs light forming frames of virtual content having at least a portion associated with a second resolution greater than the first resolution. The projector outputs light forming a first subframe of the rendered frame at the first resolution, and parts of the projector are shifted using actuators such that the physical positions of light output for individual pixels move into the gaps between their previous locations. The projector then outputs light forming a second subframe of the rendered frame. The first and second subframes are outputted within the flicker fusion threshold. Advantageously, an emissive micro-display (e.g., micro-LED display) having a low resolution can form a frame having a higher resolution by using the same light emitters to function as multiple pixels of that frame.
G02B 27/14 - Beam splitting or combining systems operating by reflection only
G02B 27/18 - Optical systems or apparatus not provided for by any of the groups , for optical projection, e.g. combination of mirror and condenser and objective
G02B 27/62 - Optical apparatus specially adapted for adjusting optical elements during the assembly of optical systems
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
G09G 3/32 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix using controlled light sources using electroluminescent panels semiconductive, e.g. using light-emitting diodes [LED]
H02N 2/02 - Electric machines in general using piezoelectric effect, electrostriction or magnetostriction producing linear motion, e.g. actuators; Linear positioners
A fiber scanning projector includes a piezoelectric element and a scanning fiber passing through and mechanically coupled to the piezoelectric element. The scanning fiber emits light propagating along an optical path. The fiber scanning projector also includes a first polarization sensitive reflector disposed along and perpendicular to the optical path. The first polarization sensitive reflector includes an aperture and the scanning fiber passes through the aperture. The fiber scanning projector also includes a second polarization sensitive reflector disposed along and perpendicular to the optical path.
A high-resolution image sensor suitable for use in an augmented reality (AR) system. The AR system may be small enough to be packaged within a wearable device such as a set of goggles or mounted on a frame resembling ordinary eyeglasses. The image sensor may have pixels configured to output events indicating changes in sensed IR light. Those pixels may be sensitive to IR light of the same frequency as an active IR light source, and may be part of an eye tracking camera oriented toward a user's eye. Changes in IR light may be used to determine the location of the user's pupil, which may be used in rendering virtual objects. The events may be generated and processed at a high rate, enabling the system to render the virtual object based on the user's gaze so that the virtual object will appear more realistic to the user.
A two-dimensional waveguide light multiplexer can efficiently multiplex and distribute a light signal in two dimensions. An example of a two-dimensional waveguide light multiplexer can include a waveguide, a first diffraction grating, and a second diffraction grating arranged such that the grating direction of the first diffraction grating is perpendicular to the grating direction of the second diffraction grating. In some examples, the first and second diffraction gratings are on opposite sides of a waveguide. In some examples, the first and second diffraction gratings are on a same side of a waveguide, with the second grating over the first grating.
G02F 1/00 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics
G02F 1/1335 - Structural association of cells with optical devices, e.g. polarisers or reflectors
94.
METHOD AND SYSTEM FOR PERFORMING OPTICAL IMAGING IN AUGMENTED REALITY DEVICES
An image projection system includes an illumination source and an eyepiece waveguide including a plurality of diffractive incoupling optical elements. The eyepiece waveguide includes a region operable to transmit light from the illumination source. The image projection system also includes a first optical element including a reflective polarizer, a second optical element including a partial reflector, a first quarter waveplate disposed between the first optical element and the second optical element, a reflective spatial light modulator, and a second quarter waveplate disposed between the second optical element and the reflective spatial light modulator.
An imprint lithography method of configuring an optical layer includes selecting one or more parameters of a nanolayer to be applied to a substrate for changing an effective refractive index of the substrate and imprinting the nanolayer on the substrate to change the effective refractive index of the substrate such that a relative amount of light transmittable through the substrate is changed by a selected amount.
G03F 7/00 - Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printed surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
G02B 1/118 - Anti-reflection coatings having sub-optical wavelength surface structures designed to provide an enhanced transmittance, e.g. moth-eye structures
96.
REAL-TIME PREVIEW OF CONNECTABLE OBJECTS IN A PHYSICALLY-MODELED VIRTUAL SPACE
Virtual reality (VR), augmented reality (AR), and/or mixed reality (MR) systems may enable one or more users to connect two or more connectable objects together. These connectable objects may be real objects from the user's environment, virtual objects, or a combination thereof. A preview system may be included as part of the VR, AR, and/or MR system to provide a preview of the connection between the connectable objects before the user(s) connect them. The preview may include a representation of the connectable objects in a connected state along with an indication of whether the connected state is valid or invalid. The preview system may continuously physically model the connectable objects while simultaneously displaying a preview of the connection process to the user of the VR, AR, or MR system.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
An image projection system includes an illumination source, a linear polarizer, and an eyepiece waveguide including a plurality of diffractive in-coupling optical elements. The eyepiece waveguide includes a region operable to transmit illumination light from the illumination source. The image projection system also includes a polarizing beamsplitter, a reflective structure, a quarter waveplate disposed between the polarizing beamsplitter and the reflective structure, and a reflective spatial light modulator.
A system includes a first chuck operable to support a stencil including a plurality of apertures, a wafer chuck operable to support and move a wafer including a plurality of incoupling gratings, a first light source operable to direct light to impinge on a first surface of the stencil, and one or more second light sources operable to direct light to impinge on the wafer. The system also includes one or more lens and camera assemblies operable to receive light from the first light source passing through the plurality of apertures in the stencil and receive light from the one or more second light sources diffracted from the plurality of incoupling gratings in the wafer. The system also includes an alignment system operable to move the wafer with respect to the stencil to reduce an offset between aperture locations and incoupling grating locations.
B29C 39/02 - Shaping by casting, i.e. introducing the moulding material into a mould or between confining surfaces without significant moulding pressure; Apparatus therefor for making articles of definite length, i.e. discrete articles
An eye tracking system can include one or more eye-tracking cameras configured to obtain images of the eye at different exposure times or different frame rates. For example, images of the eye taken at a longer exposure time can show iris or pupil features, while shorter exposure glint images can show peaks of glints reflected from the eye. The shorter exposure glint images may be taken at a higher frame rate than the longer exposure images for more accurate gaze prediction. The shorter exposure glint images can be analyzed to provide glint locations to subpixel accuracy. The longer exposure images can be analyzed for pupil center and/or center of rotation. The eye tracking system can predict gaze direction, which can be used for foveated rendering by a wearable display system. In some instances, the eye-tracking system may estimate the location of a partially or totally occluded glint.
Systems and methods for interacting with virtual objects in a three-dimensional space using a wearable system are disclosed. The wearable system can be programmed to permit user interaction with interactable objects in a field of regard (FOR) of a user. The FOR includes a portion of the environment around the user that is capable of being perceived by the user via the wearable system. The system can determine a group of interactable objects in the FOR of the user and determine a pose of the user. The system can update, based on a change in the pose or a field of view (FOV) of the user, a subgroup of the interactable objects that are located in the FOV of the user and receive a selection of a target interactable object from the subgroup of interactable objects. The system can initiate a selection event on the target interactable object. A minimal FOV-filtering sketch follows the classification entries below.
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 1/16 - Constructional details or arrangements
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
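That FOV-subgroup update reduces to filtering the FOR's objects against a view cone for the current pose; the half-angle, the dictionary representation, and the function name below are assumptions for illustration.

    import numpy as np

    def objects_in_fov(objects, head_pos, head_dir, half_angle_deg=25.0):
        """Filter the FOR's interactable objects down to the subgroup
        inside the user's field of view for the current pose."""
        cos_half = np.cos(np.radians(half_angle_deg))
        head_dir = head_dir / np.linalg.norm(head_dir)
        subgroup = []
        for obj_id, pos in objects.items():
            to_obj = np.asarray(pos, dtype=float) - head_pos
            to_obj = to_obj / np.linalg.norm(to_obj)
            if np.dot(to_obj, head_dir) >= cos_half:  # inside the view cone
                subgroup.append(obj_id)
        return subgroup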