A system may comprise a selectively transparent projection device for projecting an image toward an eye of a viewer from a projection device position in space relative to the eye of the viewer, the projection device being capable of assuming a substantially transparent state when no image is projected; an occlusion mask device coupled to the projection device and configured to selectively block light traveling toward the eye from one or more positions opposite of the projection device from the eye of the viewer in an occluding pattern correlated with the image projected by the projection device; and a zone plate diffraction patterning device interposed between the eye of the viewer and the projection device and configured to cause light from the projection device to pass through a diffraction pattern having a selectable geometry as it travels to the eye.
G02B 30/24 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic effects, by providing first and second parallax images to an observer's left and right eyes, of the stereoscopic type, involving temporal multiplexing, e.g. using sequentially activated left and right shutters
G02B 30/34 - Stereoscopes providing a stereoscopic pair of separated images corresponding to parallelly displaced views of the same object, e.g. 3D slide viewers
G02B 30/52 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic effects, the image being built up from image elements distributed over a 3D volume, e.g. voxels, the 3D volume being constructed from a stack or sequence of 2D planes, e.g. depth sampling systems
G03B 21/00 - Projectors or projection-type viewers; Accessories therefor
G03B 35/08 - Stereoscopic photography by simultaneous recording
G03B 35/18 - Stereoscopic photography by simultaneous viewing
2.
METHOD AND SYSTEM FOR USING CHARACTERIZATION LIGHT TO DETECT FIBER POSITION IN A FIBER SCANNING PROJECTOR
A projector including a cantilever position detection system includes a chassis and an actuator mounted to the chassis. The projector also includes a cantilever light source having a longitudinal axis and mechanically coupled to the actuator. The cantilever light source is operable to transmit light. The projector further includes an optical assembly section operable to receive the light. The optical assembly section includes a polarizing beamsplitter having an incidence surface and an opposing surface. The polarizing beamsplitter is operable to transmit light incident on the incidence surface and to reflect at least a portion of light incident on the opposing surface. The projector includes a position measurement device operable to receive the reflected portion of the light and an optical waveguide disposed between the optical assembly section and the position measurement device. The optical waveguide is operable to transmit at least a second portion of the light.
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
H04N 9/31 - Projection devices for colour picture display
3.
Display panel or portion thereof with a transitional mixed reality graphical user interface
A head-mounted display system includes a waveguide configured to guide light from a light projection system coupled into the waveguide; and a grating structure optically coupled to the waveguide, the grating structure being configured to couple light from the light projection system into the waveguide. The grating structure includes a grating layer having a grating with multiple ridges having a blaze profile in at least one cross-section, the blaze profile having an anti-blaze angle of 85° or less; and one or more additional layers on the grating layer, the additional layers including a first layer of a material having a refractive index of 1.5 or less at an operative wavelength of the head-mounted display, the first layer being an outermost layer of the grating structure.
A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured in different orientations and at different distances from the object and combined into a composite image representing a three-dimensional image of the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three-dimensional depth map. Coordinates of the points in the depth map may be estimated with a level of certainty. The level of certainty may be used to determine which points are included in the composite image. The selected points may be smoothed and a mesh model may be formed by creating a convex hull of the selected points. The mesh model and associated texture information may be used to render a three-dimensional representation of the object on a two-dimensional display. Additional techniques include processing and formatting of the three-dimensional representation data to be printed by a three-dimensional printer so a three-dimensional model of the object may be formed.
G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
H04N 13/211 - Image signal generators using stereoscopic image cameras using a single 2D image sensor using temporal multiplexing
H04N 13/221 - Image signal generators using stereoscopic image cameras using a single 2D image sensor using the relative movement between cameras and objects
H04N 13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
H04N 13/275 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
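The certainty-based point selection and smoothing steps described in the abstract above can be sketched as follows. This is an illustrative sketch only, not the patented implementation: function names, the 0.8 threshold, and the centroid-based smoothing are our assumptions, and the convex-hull meshing step is omitted.

```python
def select_points(points, certainties, threshold=0.8):
    """Keep depth-map points whose coordinate estimate is certain enough."""
    return [p for p, c in zip(points, certainties) if c >= threshold]

def smooth(points, weight=0.9):
    """Pull each selected point slightly toward the centroid (a simple denoise)."""
    n = len(points)
    centroid = [sum(p[i] for p in points) / n for i in range(3)]
    return [tuple(weight * p[i] + (1 - weight) * centroid[i] for i in range(3))
            for p in points]
```

A mesh model could then be built from the smoothed points, e.g. as their convex hull.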
A visual perception device is described. The visual perception device has corrective optics for viewing virtual and real-world content. An insert for the corrective optics is attached using a magnetic set, pins and/or a nose piece. Interchangeable nose pieces allow for height adjustments to accommodate different users. The visual perception device has pliable components to absorb forces exerted on a nose piece and a protective barrier for limiting electric shock or ingress of dirt.
Disclosed herein are systems and methods for presenting an audio signal associated with presentation of a virtual object colliding with a surface. The virtual object and the surface may be associated with a mixed reality environment. Generation of the audio signal may be based on at least one of an audio stream from a microphone and a video stream from a sensor. In some embodiments, the collision between the virtual object and the surface is associated with a footstep on the surface.
G06V 20/40 - Scenes; Scene-specific elements in video content
G10L 25/57 - Speech or voice analysis techniques not restricted to a single one of the groups, specially adapted for particular use, for comparison or discrimination, for processing of video signals
A head-mounted display system configured to be worn over eyes of a user includes a display disposed over the eyes of the user and configured to emit light into the eyes of the user, where the display includes a dimmer configured to modify a transparency of the display. The system also includes a light sensor separated from an external environment of the user by the display and configured to measure a transmitted light intensity of light from the external environment of the user transmitted through the dimmer. The system further includes a processor operatively coupled to the display and the light sensor, and configured to control an emitted light intensity of the light emitted by the display based on the transmitted light intensity.
G09G 3/22 - Control arrangements or circuits, of interest only to the display using visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, using controlled light sources
G09G 3/06 - Control arrangements or circuits, of interest only to the display using visual indicators other than cathode-ray tubes, for presentation of a single character, either by selecting a single character from a plurality of characters or by composing the character by combination of individual elements, e.g. segments, using controlled light sources
H05B 45/10 - Controlling the intensity of the light
G06F 3/038 - Control arrangements and interfaces therefor, e.g. drivers or device-incorporated control circuitry
H05B 41/38 - Controlling the intensity of the light
G09G 5/02 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators, characterised by the way in which colour is displayed
G09G 3/20 - Control arrangements or circuits, of interest only to the display using visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
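The idea in the abstract above, scaling the display's emitted intensity from the light measured through the dimmer, can be illustrated with a minimal control rule. This is a hypothetical sketch, not the patented control law; the proportional form and the parameter names are our assumptions.

```python
def emitted_intensity(transmitted, contrast_ratio=2.0, max_emit=1.0):
    """Scale display output with the ambient light passed by the dimmer,
    clamped to the display's maximum output."""
    return min(max_emit, contrast_ratio * transmitted)
```

Brighter ambient light transmitted through the dimmer yields a brighter display, keeping virtual content at a roughly constant contrast against the world.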
9.
METHOD AND SYSTEM FOR ALTERNATING MASK-BASED AND FRAME-BASED DATA COMPRESSION
A method of displaying a reconstructed image includes receiving an image, determining an eye gaze location of a user, compressing a primary quality region of the image using a compression ratio of X → Y bits per pixel (bpp) to form a primary quality image, and compressing a secondary quality region of the image using a compression ratio of X → Z bpp, wherein Z < Y, to form a secondary quality image. The method also includes overlaying the primary quality image on the secondary quality image to form the reconstructed image and displaying the reconstructed image.
H04N 19/132 - Sampling, masking or truncating of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or masking of high-frequency transform coefficients
H04N 19/182 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding, characterised by the coding unit, i.e. the structural or semantic part of the video signal being the subject of the adaptive coding, the unit being a pixel
H04N 19/42 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals, characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
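The two-quality overlay in the abstract above can be sketched with quantization step size standing in for the codec's bits-per-pixel setting (a coarser step models a lower bpp). This is an illustrative sketch under that assumption, not the patented compression pipeline.

```python
def quantize(img, step):
    """Crude stand-in for lossy compression: coarser step = fewer bpp."""
    return [[(v // step) * step for v in row] for row in img]

def reconstruct(img, gaze, radius, fine_step=4, coarse_step=32):
    """Coarsely quantize the frame, then overlay a finely quantized gaze window."""
    out = quantize(img, coarse_step)                 # secondary quality image
    fine = quantize(img, fine_step)                  # primary quality image
    r0, c0 = gaze
    for r in range(max(0, r0 - radius), min(len(img), r0 + radius + 1)):
        for c in range(max(0, c0 - radius), min(len(img[0]), c0 + radius + 1)):
            out[r][c] = fine[r][c]                   # overlay primary on secondary
    return out
```

Pixels near the gaze location keep fine detail while the periphery is represented coarsely.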
An eyepiece for an augmented reality headset includes an eyepiece waveguide and an incoupling diffractive optical element coupled to the eyepiece waveguide. The incoupling diffractive optical element is disposed at a first lateral location. The eyepiece also includes an outcoupling diffractive optical element coupled to the eyepiece waveguide. The outcoupling diffractive optical element is disposed at a second lateral location different than the first lateral location. The eyepiece further includes an optical structure coupled to the eyepiece waveguide. The optical structure is disposed at a third lateral location between the first lateral location and the second lateral location.
Display systems are described including augmented 1-dimensional pixel arrays and scanning mirrors. In one example, a pixel array includes first and second columns of pixels, relay optics configured to receive incident light and to output the incident light to a viewer, and a scanning mirror disposed to receive the light from the first and second columns of pixels and to reflect the received light toward the relay optics. The scanning mirror may move between a plurality of positions while the first and second columns emit light in temporally spaced pulses so as to form a perceived image at the relay optics having a higher resolution relative to the pixel pitch of the individual columns. Foveated rendering may provide for more efficient use of power and processing resources.
An optical device can include a liquid crystal layer including a first plurality of liquid crystal molecules arranged in a first pattern and a second plurality of liquid crystal molecules arranged in a second pattern. The first and the second patterns may be separated from each other by a suitable distance, e.g., about 20 nm to about 100 nm, along a longitudinal or a transverse axis of the liquid crystal layer. The first and the second pluralities of liquid crystal molecules can be configured as first and second grating structures that can redirect light of visible or infrared wavelengths. In some examples, the optical device includes electrode layers arranged on either side of the liquid crystal layer to control an alignment of the liquid crystal molecules. Methods of fabricating such devices are also described.
G02F 1/1347 - Arrangement of liquid crystal layers or cells in which a light beam is modified by the addition of the effects of several layers or cells
B81C 1/00 - Manufacture or treatment of devices or systems in or on a substrate
G02B 6/00 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. coupling means
G02B 27/00 - Optical systems or apparatus not provided for by any of the other groups
G02B 27/10 - Beam splitting or combining systems
G02B 30/36 - Stereoscopes providing a stereoscopic pair of separated images corresponding to parallelly displaced views of the same object, e.g. 3D slide viewers, using refractive optical elements, e.g. prisms, in the optical path between the images and the observer
G02F 1/1334 - Constructional arrangements based on polymer-dispersed liquid crystals, e.g. microencapsulated liquid crystals
G02F 1/1337 - Surface-induced orientation of the liquid crystal molecules, e.g. by alignment layers
G02F 1/29 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the position or the direction of light beams, i.e. deflection
13.
SYSTEMS AND METHODS FOR PERFORMING A MOTOR SKILLS NEUROLOGICAL TEST USING AUGMENTED OR VIRTUAL REALITY
Systems and methods for performing a motor skills neurological test using augmented reality that provide an objective assessment of the test results. A virtual target is displayed to a user in an AR field of view of an AR system at a target location. The movement of a body part (e.g., a finger) of the user is tracked as the user moves the body part from a starting location to the target location. A total traveled distance of the body part in moving from the starting location to the target location is determined based on the tracking. A linear distance between the starting location and the target location is determined. An efficiency index is then determined which represents an overall quality of movement of the body part from the starting location to the target location based on the total traveled distance and the linear distance.
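The efficiency index described above can be sketched as follows. The abstract only says the index is based on the total traveled distance and the linear distance; the ratio formulation below (1.0 = perfectly direct movement) is one plausible choice, and the names are ours.

```python
import math

def efficiency_index(path):
    """path: ordered (x, y, z) samples of the tracked body part,
    from the starting location to the target location."""
    total = sum(math.dist(a, b) for a, b in zip(path, path[1:]))   # traveled distance
    linear = math.dist(path[0], path[-1])                          # straight-line distance
    return linear / total if total else 0.0
```

A meandering reach toward the target produces a lower index than a direct one.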
A method of processing an acoustic signal is disclosed. According to one or more embodiments, a first acoustic signal is received via a first microphone. The first acoustic signal is associated with a first speech of a user of a wearable headgear unit. A first sensor input is received via a sensor. A control parameter is determined based on the first sensor input. The control parameter is applied to one or more of the first acoustic signal, the wearable headgear unit, and the first microphone. Determining the control parameter comprises determining, based on the first sensor input, a relationship between the first speech and the first acoustic signal.
Examples of systems and methods for rendering an avatar in a mixed reality environment are disclosed. The systems and methods may be configured to automatically scale an avatar or to render an avatar based on a determined intention of a user, an interesting impulse, environmental stimuli, or user saccade points. The disclosed systems and methods may apply discomfort curves when rendering an avatar. The disclosed systems and methods may provide a more realistic interaction between a human user and an avatar.
Disclosed herein are systems and methods for fabricating nanostructures on a substrate that can be used in eyepieces for displays, e.g., in head wearable devices. Fabricating and/or etching such a substrate can include submerging the substrate in a bath and applying ultrasonication to the bath for a first time period. The ultrasonication applied to the first bath can agitate the fluid to provide a substantially uniform first reactive environment across the surface of the substrate. The substrate can be submerged in a second bath and ultrasonication can be applied to the second bath for a second time period. The ultrasonication applied to the second bath can agitate the fluid to provide a substantially uniform second reactive environment across the surface of the substrate. A predetermined amount of material can be removed from the surface of the substrate during the second time period to produce an etched substrate.
B82Y 20/00 - Nanooptics, e.g. quantum optics or photonic crystals
B82Y 40/00 - Manufacture or treatment of nanostructures
G02B 1/02 - Optical elements characterised by the material of which they are made; Optical coatings for optical elements made of crystals, e.g. rock salt, semiconductors
A wearable device can include an inward-facing imaging system configured to acquire images of a user's periocular region. The wearable device can determine a relative position between the wearable device and the user's face based on the images acquired by the inward-facing imaging system. The relative position may be used to determine whether the user is wearing the wearable device, whether the wearable device fits the user, and/or whether an adjustment to a rendering location of a virtual object can be made to compensate for a deviation of the wearable device from its normal resting position relative to the user's face.
G06T 3/20 - Linear translation of whole images or parts thereof, e.g. panning
G06T 11/60 - Editing figures and text; Combining figures or text
G06V 10/46 - Descriptors for shape, contour- or point-related descriptors, e.g. scale-invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
G06V 40/18 - Eye characteristics, e.g. of the iris
A display system includes a head-mounted display configured to project light, having different amounts of wavefront divergence, to an eye of a user to display virtual image content appearing to be disposed at different depth planes. The wavefront divergence may be changed in discrete steps, with the change in steps being triggered based upon whether the user is fixating on a particular depth plane. The display system may be calibrated for switching depth planes for a main user. Upon determining that a guest user is utilizing the system, rather than undergoing a full calibration, the display system may be configured to switch depth planes based on a rough determination of the virtual content that the user is looking at. The virtual content may have an associated depth plane and the display system may be configured to switch to the depth plane of that virtual content.
Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. According to an example method, a source location corresponding to the audio signal is identified. For each of the respective left and right ear of the user, a virtual speaker position, of a virtual speaker array, is determined, the virtual speaker position collinear with the source location and with a position of the respective ear. For each of the respective left and right ear of the user, a head-related transfer function (HRTF) corresponding to the virtual speaker position and to the respective ear is determined; and the output audio signal is presented to the respective ear of the user via one or more speakers associated with the wearable head device. Processing the audio signal includes applying the HRTF to the audio signal.
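The geometric step in the abstract above, finding a virtual speaker position collinear with the source location and the ear position, can be sketched as a ray-sphere intersection against a head-centred virtual speaker array of radius R. This sketch is illustrative only: the head-centred sphere and all names are our assumptions, and the HRTF lookup and filtering themselves are omitted.

```python
import math

def virtual_speaker_position(ear, source, radius=1.0):
    """Point on the head-centred speaker sphere collinear with ear and source."""
    d = [s - e for s, e in zip(source, ear)]
    norm = math.sqrt(sum(x * x for x in d))
    d = [x / norm for x in d]                        # ray direction: ear -> source
    # Solve |ear + t*d|^2 = radius^2 for the positive root t.
    b = 2.0 * sum(e * x for e, x in zip(ear, d))
    c = sum(e * e for e in ear) - radius ** 2
    t = (-b + math.sqrt(b * b - 4.0 * c)) / 2.0
    return [e + t * x for e, x in zip(ear, d)]
```

The HRTF associated with the returned sphere point would then be applied to the audio signal for that ear.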
Configurations are disclosed for a health system to be used in various healthcare applications, e.g., for patient diagnostics, monitoring, and/or therapy. The health system may comprise a light generation module to transmit light or an image to a user, one or more sensors to detect a physiological parameter of the user's body, including their eyes, and processing circuitry to analyze an input received in response to the presented images to determine one or more health conditions or defects.
A61B 3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
A61B 3/02 - Apparatus for testing the eyes of subjective types, i.e. testing apparatus requiring the active assistance of the patient
A61B 3/024 - Subjective-type apparatus for determination of the visual field, e.g. perimeters
A61B 3/028 - Subjective-type apparatus for testing visual acuity; Subjective-type apparatus for determination of refraction, e.g. phoropters
A61B 3/06 - Subjective-type apparatus for testing light sensitivity, e.g. adaptation; Subjective-type apparatus for testing colour vision
A61B 3/08 - Subjective-type apparatus for testing binocular or stereoscopic vision, e.g. for checking strabismus
A61B 3/10 - Apparatus for examining the eyes of objective types, i.e. instruments for examining the eyes independently of the perceptions or reactions of the patient
A61B 3/103 - Objective-type instruments for determination of refraction, e.g. refractometers, skiascopes
A61B 3/113 - Objective-type instruments for determining or recording eye movement
A61B 3/12 - Objective-type instruments for looking at the eye fundus, e.g. ophthalmoscopes
A61B 3/14 - Arrangements specially adapted for eye photography
A61B 3/16 - Objective-type instruments for measuring intraocular pressure, e.g. tonometers
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
A61B 5/01 - Measuring temperature of body parts
A61B 5/145 - Measuring characteristics of blood in vivo, e.g. gas concentration or pH value of blood
A61B 5/1455 - Measuring characteristics of blood in vivo using optical sensors, e.g. spectral photometry oximeters
A61F 7/00 - Heating or cooling appliances for medical or therapeutic treatment of the human body
A61F 9/00 - Methods or devices for treatment of the eyes; Devices for putting in contact lenses; Devices for correcting squinting; Apparatus for guiding the blind; Protective devices for the eyes, carried on the body or in the hand
A61F 9/008 - Methods or devices for eye surgery using a laser
A61M 21/00 - Other devices or methods for causing a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical or acoustical means, e.g. for hypnosis
A61M 21/02 - Devices for inducing sleep or relaxation, e.g. by direct nerve stimulation, hypnosis or analgesia
A61N 5/06 - Radiation therapy using light
G06T 19/00 - Manipulating 3D models or images for computer graphics
G16H 40/63 - ICT specially adapted for the management or operation of medical equipment or devices, for local operation
G16H 40/67 - ICT specially adapted for the management or operation of medical equipment or devices, for remote operation
21.
METHODS TO IMPROVE THE PERCEPTUAL QUALITY OF FOVEATED RENDERED IMAGES
A method includes rendering a foveated image that includes a foveated zone and a peripheral zone. The foveated zone has a first set of image data and is rendered at a first pixel resolution and the peripheral zone has a second set of image data and is rendered at a second, lower, pixel resolution. The method also includes packing the first set of image data into a first image block, packing the second set of image data into a second image block, and generating a control packet that includes rendering information associated with the foveated image. The method further includes concatenating the control packet with the first image block and the second image block to form a frame, transmitting the frame to a display unit, parsing the control packet, decoding the control packet to obtain the rendering information, and projecting a display image rendered according to the rendering information.
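The frame layout described above, a control packet of rendering metadata concatenated with the packed foveated and peripheral image blocks, can be sketched as follows. The header fields and byte layout are illustrative assumptions, not the format defined by the method.

```python
import struct

HDR = "<4H2I"  # fovea w/h, periphery w/h (u16); fovea/periphery byte lengths (u32)

def pack_frame(fovea, periphery, fovea_res, periphery_res):
    """fovea/periphery: packed image blocks as bytes; *_res: (w, h) resolutions."""
    ctrl = struct.pack(HDR, *fovea_res, *periphery_res, len(fovea), len(periphery))
    return ctrl + fovea + periphery            # control packet + both image blocks

def parse_frame(frame):
    """Parse the control packet, then slice out the two image blocks."""
    fw, fh, pw, ph, flen, plen = struct.unpack_from(HDR, frame)
    off = struct.calcsize(HDR)
    return (fw, fh), (pw, ph), frame[off:off + flen], frame[off + flen:off + flen + plen]
```

The display unit would parse the control packet first and use the recovered resolutions to place the high-resolution foveated block within the lower-resolution peripheral image.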
An augmented reality headset includes a frame and a plurality of eyepiece waveguide displays supported in the frame. Each of the plurality of eyepiece waveguide displays includes a projector and an eyepiece having a world side and a user side. The eyepiece includes one or more eyepiece waveguide layers, each of the one or more eyepiece waveguide layers including an in-coupling diffractive optical element and an out-coupling diffractive optical element. Each of the plurality of eyepiece waveguide displays also includes a first extended depth of field (EDOF) refractive element disposed adjacent the world side, a dimmer assembly disposed adjacent the world side, a second EDOF refractive element disposed adjacent the user side, and an optical absorber disposed adjacent the eyepiece and overlapping in plan view with a portion of the eyepiece.
A light-emitting user input device can include a touch sensitive portion configured to accept user input (e.g., from a user's thumb) and a light-emitting portion configured to output a light pattern. The light pattern can be used to assist the user in interacting with the user input device. Examples include emulating a multi-degree-of-freedom controller, indicating scrolling, swiping, or other actions, indicating presence of objects nearby the device, indicating receipt of notifications, assisting pairing the user input device with another device, or assisting calibrating the user input device. The light-emitting user input device can be used to provide user input to a wearable device, such as, e.g., a head mounted display device.
G06F 3/04815 - Interaction taking place in a metaphor-based or object-based environment with three-dimensional display, e.g. changing the user's viewpoint with respect to the environment or object
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/02 - Input arrangements using manually operated switches, e.g. using keyboards or dials
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, six-degrees-of-freedom [6-DOF] pointers using gyroscopes, accelerometers or tilt sensors
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G06F 3/038 - Control arrangements and interfaces therefor, e.g. drivers or device-incorporated control circuitry
G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using a touch screen or digitiser, e.g. input of commands through traced gestures, for inputting data by handwriting, e.g. gestures or text
G06F 3/04886 - Interaction techniques based on graphical user interfaces [GUI] using a touch screen or digitiser, by partitioning the display area of the touch screen or digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06V 10/145 - Illumination specially adapted for pattern recognition, e.g. using gratings
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
G06V 40/18 - Eye characteristics, e.g. of the iris
G09G 3/32 - Control arrangements or circuits, of interest only to the display using visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, using controlled light sources, using semiconductor electroluminescent panels, e.g. using light-emitting diodes [LEDs]
H04W 4/80 - Services using short-range communication, e.g. near-field communication, radio-frequency identification or low-energy communication
A display system can include a head-mounted display configured to project light to an eye of a user to display virtual image content at different amounts of divergence and collimation. The display system can include an inward-facing imaging system, possibly comprising a plurality of cameras, that images the user's eye and glints thereon, and processing electronics that are in communication with the inward-facing imaging system and that are configured to obtain an estimate of a center of the cornea and/or a center of rotation of the user's eye, and/or other parameter(s), using data derived from the glint images. The display system may use spherical and/or aspheric cornea models to estimate a location of the corneal center of the user's eye and/or other parameter(s).
A61B 3/113 - Apparatus for examining the eyes; Apparatus for clinical examination of the eyes of objective type, i.e. instruments for examining the eyes independently of the patient's perceptions or reactions, for determining or recording eye movement
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
Systems and methods for matching content elements to surfaces in a spatially organized 3D environment. The method includes receiving content, identifying one or more elements in the content, determining one or more surfaces, matching the one or more elements to the one or more surfaces, and displaying the one or more elements as virtual content onto the one or more surfaces.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
H04N 21/254 - Management at the additional data server, e.g. shopping server or rights-management server
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
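The element-to-surface matching step described in the abstract above can be sketched in a few lines. This is a minimal illustration under assumed attributes, not the patent's actual algorithm: the element fields (`min_area`, `preferred_orientation`), the surface fields, and the additive scoring rule are all hypothetical.

```python
# Illustrative sketch: pair each content element with the detected
# surface whose attributes best fit the element's display preferences.
# Attribute names and scoring are assumptions, not the patented method.

def match_elements_to_surfaces(elements, surfaces):
    """Assign each element the highest-scoring surface by name."""
    def score(element, surface):
        s = 0
        if surface["area"] >= element["min_area"]:
            s += 1  # surface is large enough for the element
        if surface["orientation"] == element["preferred_orientation"]:
            s += 1  # surface faces the way the element prefers
        return s
    return {e["name"]: max(surfaces, key=lambda srf: score(e, srf))["name"]
            for e in elements}
```

For example, a video element preferring a large vertical surface would be matched to a wall rather than a tabletop.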
An example head-mounted display device includes a light projector, an optical assembly arranged to direct light from a light projector to a user, and an actuator module. The optical assembly includes a variable focus lens assembly including a rigid refractive component, a shaper ring defining an aperture, and a flexible lens membrane between the shaper ring and the rigid refractive component and covering the aperture. The refractive component, the shaper ring, and the lens membrane are arranged along an axis. The refractive component and the lens membrane define a chamber containing a volume of fluid. The actuator module is configured to adjust an optical power of the variable focus lens by moving the shaper ring relative to the refractive component along the axis, such that a curvature of the lens membrane in the aperture is modified.
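The relation the actuator module exploits can be illustrated with the standard single-surface refraction formula, P = (n_fluid - n_air) / R: moving the shaper ring changes the membrane's radius of curvature R, which changes the optical power P. This is a simplified first-order sketch, not the device's actual optics, and the fluid index of 1.5 is an illustrative assumption.

```python
# Simplified sketch: optical power (in dioptres) of the fluid lens,
# modeled as one spherical refracting surface of radius R (metres).
# The fluid refractive index is an assumed value.

def membrane_power(radius_m, n_fluid=1.5, n_air=1.0):
    """Power of a spherical surface: P = (n_fluid - n_air) / R."""
    return (n_fluid - n_air) / radius_m

# Flattening the membrane (larger R) weakens the lens, e.g.
# membrane_power(0.05) is about 10 D while membrane_power(0.10) is about 5 D.
```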
A switchable optical assembly comprises a switchable waveplate configured to be electrically activated and deactivated to selectively alter the polarization state of light incident on the switchable waveplate. The switchable waveplate comprises first and second surfaces and a liquid crystal layer disposed between the first and second surfaces. The liquid crystal layer comprises a plurality of liquid crystal molecules. The first surface and/or the second surface may be planar. The first surface and/or the second surface may be curved. The plurality of liquid crystal molecules may vary in tilt with respect to the first and second surfaces with outward radial distance from an axis through the first and second surfaces and the liquid crystal layer in a plurality of radial directions. The switchable waveplate can include a plurality of electrodes to apply an electrical signal across the liquid crystal layer.
G02F 1/137 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics; for the control of the intensity, phase, polarisation or colour; based on liquid crystals, e.g. single liquid crystal display cells; characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering
G02B 30/34 - Stereoscopes providing a stereoscopic pair of separated images corresponding to parallactically displaced views of the same object, e.g. 3D slide viewers
G02F 1/01 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics; for the control of the intensity, phase, polarisation or colour
G02F 1/13 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics; for the control of the intensity, phase, polarisation or colour; based on liquid crystals, e.g. single liquid crystal display cells
Disclosed herein are systems and methods for calculating angular acceleration based on inertial data using two or more inertial measurement units (IMUs). The calculated angular acceleration may be used to estimate a position of a wearable head device comprising the IMUs. Virtual content may be presented based on the position of the wearable head device. In some embodiments, a first IMU and a second IMU share a coincident measurement axis.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G01P 15/08 - Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration; by making use of inertia forces with conversion into electric or magnetic values
G01P 15/16 - Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration; by evaluating the time derivative of a measured speed signal
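The kinematics behind the two-IMU entry above are standard: for two accelerometers on the same rigid body separated by a lever arm r, the readings differ by a2 - a1 = alpha x r + omega x (omega x r), so angular acceleration alpha can be recovered from inertial data. The sketch below is an illustrative simplification, not the patent's method: it assumes omega comes from a gyroscope and that alpha is perpendicular to the lever arm, which makes the cross product invertible.

```python
# Illustrative sketch: estimate rigid-body angular acceleration from two
# accelerometer readings a1, a2 separated by lever arm r, given the
# angular velocity omega. Assumes alpha is perpendicular to r.

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

def angular_accel_from_imus(a1, a2, omega, r):
    """Solve a2 - a1 = alpha x r + omega x (omega x r) for alpha."""
    # Remove the centripetal term, leaving the tangential part alpha x r.
    lin = sub(sub(a2, a1), cross(omega, cross(omega, r)))
    # With alpha perpendicular to r: r x (alpha x r) = alpha * |r|^2.
    r2 = sum(c * c for c in r)
    return tuple(c / r2 for c in cross(r, lin))
```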
A head mounted display system can include at least one imaging device, a waveguide, and optical elements formed on or in the waveguide, including at least one coupling optical element configured to in-couple, into the waveguide, eye illumination light that is reflected off of a user's eye, and at least one out-coupling optical element configured to out-couple, from the waveguide and toward the imaging device(s), the reflected eye illumination light, such that the imaging device(s) can generate image(s) of the eye based on the reflected eye illumination light. The waveguide may also include an in-coupling optical element configured to couple, into the waveguide, image light that conveys virtual image content, and another out-coupling optical element that may be separate from the at least one coupling optical element and that is configured to couple the image light out of the waveguide toward the user's eye.
A method of presenting a signal to a speech recognition engine is disclosed. According to an example of the method, an audio signal is received from a user. A portion of the audio signal is identified, the portion having a first time and a second time. A pause in the portion of the audio signal, the pause comprising the second time, is identified. It is determined whether the pause indicates the completion of an utterance of the audio signal. In accordance with a determination that the pause indicates the completion of the utterance, the portion of the audio signal is presented as input to the speech recognition engine. In accordance with a determination that the pause does not indicate the completion of the utterance, the portion of the audio signal is not presented as input to the speech recognition engine.
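The pause-based gating decision described above can be sketched with a simple energy test on audio frames. This is a hypothetical illustration, not the patented endpointer: the frame-energy representation, the silence threshold, and the minimum pause length are all assumed values.

```python
# Illustrative sketch: decide whether the portion of audio ends in a
# pause long enough to indicate a completed utterance. Only a completed
# utterance would be presented to the speech recognition engine.
# Threshold and pause length are assumptions.

def utterance_complete(frame_energies, silence_thresh=0.01, min_pause_frames=5):
    """True if the portion ends with at least min_pause_frames of silence."""
    trailing_silence = 0
    for e in reversed(frame_energies):
        if e < silence_thresh:
            trailing_silence += 1
        else:
            break
    return trailing_silence >= min_pause_frames
```

A portion ending in a long trailing silence would be forwarded to the recognizer; a portion with only a brief dip would be withheld.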
A portable electronic system receives a set of one or more canonical maps and determines a sparse map based at least in part upon one or more anchors pertaining to the physical environment. The sparse map is localized to at least one canonical map in the set of one or more canonical maps, and a new canonical map is created at least by merging sparse map data of the sparse map into the at least one canonical map. The set of one or more canonical maps may be determined from a universe of canonical maps comprising a plurality of canonical maps by applying a hierarchical filtering scheme to the universe. The sparse map may be localized to the at least one canonical map at least by splitting the sparse map into a plurality of connected components and by one or more merger operations.
An imprint lithography method of configuring an optical layer includes selecting one or more parameters of a nanolayer to be applied to a substrate for changing an effective refractive index of the substrate and imprinting the nanolayer on the substrate to change the effective refractive index of the substrate such that a relative amount of light transmittable through the substrate is changed by a selected amount.
G03F 7/00 - Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printing surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
G02B 1/118 - Anti-reflecting coatings having sub-optical wavelength surface structures designed to provide an enhanced transmittance, e.g. moth-eye structures
33.
SYSTEMS AND METHODS FOR VIRTUAL AND AUGMENTED REALITY
Disclosed are methods, systems, and computer program products that present, by a spatial computing system, an extended reality presentation to a user, wherein the extended reality presentation comprises virtual object(s) generated by a spatial computing system and transmitted to one or both eyes of the user and a physical environment that includes a first physical object and a second physical object and is perceived by the user via light reflected from the physical environment. The extended reality presentation is rendered for an object interaction between the first physical object and the first virtual object or the user. A physics-based effect on the first physical object by the object interaction is determined by using at least a stereoscopic model for the first physical object based at least in part upon the object interaction. The extended reality presentation is updated into an updated extended reality presentation based at least in part upon the physics-based effect.
A cross reality system enables any of multiple devices to efficiently access previously stored maps. Both stored maps and tracking maps used by portable devices may have any of multiple types of location metadata associated with them. The location metadata may be used to select a set of candidate maps for operations, such as localization or map merge, that involve finding a match between a location defined by location information from a portable device and any of a number of previously stored maps. The types of location metadata may be prioritized for use in selecting the subset. To aid in selection of candidate maps, a universe of stored maps may be indexed based on geo-location information. A cross reality platform may update that index as it interacts with devices that supply geo-location information in connection with location information and may propagate that geo-location information to devices that do not supply it.
G06F 16/29 - Geographical information databases
G06F 16/907 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
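The geo-indexed candidate selection described in the abstract above can be sketched with a coarse spatial bucketing scheme. This is an illustrative stand-in for the hierarchical filtering, not the patented index: the cell size and the quantization rule are assumed values.

```python
# Illustrative sketch: index stored maps by a coarse geo cell, then use
# a device's geo-location to narrow the universe of maps to a small
# candidate set before any expensive matching. Cell size is an assumption.

def geo_cell(lat, lon, cell_deg=0.01):
    """Quantize a coordinate into a coarse index cell (roughly 1 km)."""
    return (round(lat / cell_deg), round(lon / cell_deg))

def index_maps(stored_maps):
    """Build a cell -> [map id] index over the universe of stored maps."""
    index = {}
    for m in stored_maps:
        index.setdefault(geo_cell(*m["geo"]), []).append(m["id"])
    return index

def candidate_maps(index, lat, lon):
    """Return only the maps whose cell matches the device's location."""
    return index.get(geo_cell(lat, lon), [])
```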
35.
DISPLAY SYSTEM WITH OPTICAL ELEMENTS FOR IN-COUPLING MULTIPLEXED LIGHT STREAMS
Architectures are provided for optical devices such as waveguides including structures that are configured to selectively in-couple one or more streams of light from a multiplexed light stream into the waveguide. The multiplexed light stream can include light with different characteristics (e.g., different wavelengths and/or different polarizations). The waveguide can comprise one or more in-coupling elements that can selectively couple one or more streams of light from the multiplexed light stream into the waveguide while transmitting, e.g., without in-coupling, one or more other streams of light from the multiplexed light stream.
G02F 1/13 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics; for the control of the intensity, phase, polarisation or colour; based on liquid crystals, e.g. single liquid crystal display cells
Systems and methods for providing accurate and independent control of reverberation properties are disclosed. In some embodiments, a system may include a reverberation processing system, a direct processing system, and a combiner. The reverberation processing system can include a reverb initial power (RIP) control system and a reverberator. The RIP control system can include a reverb initial gain (RIG) and a RIP corrector. The RIG can be configured to apply a RIG value to the input signal, and the RIP corrector can be configured to apply a RIP correction factor to the signal from the RIG. The reverberator can be configured to apply reverberation effects to the signal from the RIP control system. In some embodiments, one or more values and/or correction factors can be calculated and applied such that the signal output from a component in the reverberation processing system is normalized to a predetermined value (e.g., unity (1.0)).
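The normalization described above can be illustrated numerically: the RIP correction factor is chosen so that the combined gain applied ahead of the reverberator has a power equal to a predetermined target (unity by default). This is a hedged sketch of that relationship only; the function names and the square-law power model are assumptions, not the patent's API.

```python
# Illustrative sketch: a reverb initial gain (RIG) value scales the
# input, and a RIP correction factor is chosen so the net gain's power,
# (rig * correction)**2, equals the target (unity by default).

def rip_correction_factor(rig_value, target_power=1.0):
    """Correction so that (rig_value * correction)**2 == target_power."""
    return (target_power ** 0.5) / rig_value

def reverb_input(sample, rig_value, target_power=1.0):
    """Signal entering the reverberator after RIG and RIP correction."""
    return sample * rig_value * rip_correction_factor(rig_value, target_power)
```

With unity target power, the RIG and its correction cancel to a net gain of 1.0, so the reverberator sees a power-normalized signal regardless of the chosen RIG value.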
A method of presenting audio comprises: identifying a first ear listener position and a second ear listener position in a mixed reality environment; identifying a first virtual sound source in the mixed reality environment; identifying a first object in the mixed reality environment; determining a first audio signal in the mixed reality environment, wherein the first audio signal originates at the first virtual sound source and intersects the first ear listener position; determining a second audio signal in the mixed reality environment, wherein the second audio signal originates at the first virtual sound source, intersects the first object, and intersects the second ear listener position; determining a third audio signal based on the second audio signal and the first object; presenting, to a first ear of a user, the first audio signal; and presenting, to a second ear of the user, the third audio signal.
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04R 1/40 - Arrangements for obtaining the desired frequency or directional characteristics; for obtaining the desired directional characteristic only; by combining a number of identical transducers
H04R 3/12 - Circuits for transducers; for distributing signals to two or more loudspeakers
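The derivation of the third audio signal above (from the second signal and the intersected object) can be sketched as a per-object attenuation. This is a deliberately simple illustration, not the patented processing: a single acoustic transmission coefficient stands in for whatever object-dependent filtering the system applies, and its value is assumed.

```python
# Illustrative sketch: derive the third audio signal from the second by
# scaling with the intersected object's acoustic transmission
# (0 = fully blocking, 1 = acoustically transparent). The coefficient
# is an assumed property of the object.

def occlude(samples, transmission):
    """Attenuate the occluded path's samples by the object's transmission."""
    return [s * transmission for s in samples]

# The first ear hears the direct signal; the second ear hears the
# occluded one.
direct = [0.8, -0.4, 0.2]
occluded = occlude(direct, 0.25)
```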
A cross reality system enables any of multiple devices to efficiently and accurately access previously persisted maps of very large scale environments and render virtual content specified in relation to those maps. The cross reality system may build a persisted map, which may be in canonical form, by merging tracking maps from the multiple devices. A map merge process determines mergibility of a tracking map with a canonical map and merges a tracking map with a canonical map in accordance with mergibility criteria, such as, when a gravity direction of the tracking map aligns with a gravity direction of the canonical map. Refraining from merging maps when the orientation of the tracking map with respect to gravity is not preserved avoids distortions in persisted maps and enables multiple devices, which may use the maps to determine their locations, to present more realistic and immersive experiences for their users.
Disclosed herein are systems and methods for storing, organizing, and maintaining acoustic data for mixed reality systems. A system may include one or more sensors of a head-wearable device, a speaker of the head-wearable device, and one or more processors. A method performed by the one or more processors may include receiving a request to present an audio signal. An environment may be identified via the one or more sensors of the head-wearable device. One or more audio model components associated with the environment may be retrieved. A first audio model may be generated based on the audio model components. A second audio model may be generated based on the first audio model. A modified audio signal may be determined based on the second audio model and based on the request to present an audio signal. The modified audio signal may be presented via the speaker of the head-wearable device.
Methods and systems for manufacturing an optical waveguide include depositing an adhesion promoting layer on a substrate. Multiple curable resist droplets are dispensed on the adhesion promoting layer. The adhesion promoting layer is disposed between and contacts the substrate and the curable resist droplets. The curable resist droplets define an optical eyepiece layer such that a zero residual layer thickness (RLT) region of the optical eyepiece layer is free of the curable resist droplets. The optical eyepiece layer is incised from the substrate to form the optical waveguide.
G03F 7/00 - Photomechanical, e.g. photolithographic, production of textured or patterned surfaces, e.g. printing surfaces; Materials therefor, e.g. comprising photoresists; Apparatus specially adapted therefor
A head mounted display system can include a camera, at least one waveguide, at least one coupling optical element that is configured such that light is coupled into said waveguide and guided therein, and at least one out-coupling element. The at least one out-coupling element can be configured to couple light that is guided within said waveguide out of said waveguide and direct said light to said camera. The at least one coupling element may comprise a diffractive optical element having optical power.
A cross reality system enables any of multiple devices to efficiently and accurately access previously persisted maps, even maps of very large environments, and render virtual content specified in relation to those maps. The cross reality system may quickly process a batch of images acquired with a portable device to determine whether there is sufficient consistency across the batch in the computed localization. Processing on at least one image from the batch may determine a rough localization of the device to the map. This rough localization result may be used in a refined localization process for the image for which it was generated. The rough localization result may also be selectively propagated to a refined localization process for other images in the batch, enabling rough localization processing to be skipped for the other images.
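The batch-propagation idea above can be sketched as a seed that is computed once and reused. This is an illustrative simplification, not the patented localization pipeline: the consistency check is omitted, and `rough_localize` and `refine` are hypothetical callables standing in for the rough and refined localization stages.

```python
# Illustrative sketch: the expensive rough localization runs on the
# first image of a batch only; its result seeds refined localization of
# the remaining images, so the rough stage is skipped for them.

def localize_batch(images, rough_localize, refine):
    """Refine every image against a rough pose computed once per batch."""
    results, seed = [], None
    for img in images:
        if seed is None:
            seed = rough_localize(img)  # expensive, first image only
        results.append(refine(img, seed))
    return results
```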
Augmented reality systems and methods for creating, saving and rendering designs comprising multiple items of virtual content in a three-dimensional (3D) environment of a user. The designs may be saved as a scene, which is built by a user from pre-built sub-components, built components, and/or previously saved scenes. Location information, expressed as a saved scene anchor and position relative to the saved scene anchor for each item of virtual content, may also be saved. Upon opening the scene, the saved scene anchor node may be correlated to a location within the mixed reality environment of the user for whom the scene is opened. The virtual items of the scene may be positioned with the same relationship to that location as they have to the saved scene anchor node. That location may be selected automatically and/or by user input.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning; using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning; using neural networks
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
Disclosed are systems and methods for mixed reality collaboration. A method may include receiving persistent coordinate data; presenting a first virtual session handle to a first user at a first position via a transmissive display of a wearable device, wherein the first position is based on the persistent coordinate data; presenting a virtual object to the first user at a second position via the transmissive display, wherein the second position is based on the first position; receiving location data from a second user, wherein the location data relates a position of the second user to a position of a second virtual session handle; presenting a virtual avatar to the first user at a third position via the transmissive display, wherein the virtual avatar corresponds to the second user, wherein the third position is based on the location data, and wherein the third position is further based on the first position.
Techniques for operating an optical system are disclosed. World light may be linearly polarized along a first axis. When the optical system is operating in accordance with a first state, a polarization of the world light may be rotated by 90 degrees, the world light may be linearly polarized along a second axis perpendicular to the first axis, and zero net optical power may be applied to the world light. When the optical system is operating in accordance with a second state, virtual image light may be projected onto an eyepiece of the optical system, the world light and the virtual image light may be linearly polarized along the second axis, a polarization of the virtual image light may be rotated by 90 degrees, and non-zero net optical power may be applied to the virtual image light.
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04N 13/279 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals; the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD]; with head-mounted left-right displays
Systems and methods for generating an animation rig corresponding to a pose of a subject include accessing image data corresponding to the pose of the subject. The image data can include the face of the subject. The systems and methods process the image data by successively analyzing subregions of the image according to a solver order. The solver order can be biologically or anatomically ordered to proceed from subregions that cause larger scale movements to subregions that cause smaller scale movements. In each subregion, the systems and methods can perform an optimization technique to fit parameters of the animation rig to the input image data. After all subregions have been processed, the animation rig can be used to animate an avatar to appear to be performing the pose of the subject.
A wearable display system includes a light projection system having one or more emissive micro-displays, e.g., micro-LED displays. The light projection system projects time-multiplexed left-eye and right-eye images, which pass through an optical router having a polarizer and a switchable polarization rotator. The optical router is synchronized with the generation of images by the light projection system to impart a first polarization to left-eye images and a second different polarization to right-eye images. Light of the first polarization is incoupled into an eyepiece having one or more waveguides for outputting light to one of the left and right eyes, while light of the second polarization may be incoupled into another eyepiece having one or more waveguides for outputting light to the other of the left and right eyes. Each eyepiece may output incoupled light with variable amounts of wavefront divergence, to elicit different accommodation responses from the user's eyes.
G02B 27/28 - Optical systems or apparatus not provided for by any of the groups , for polarising
G02B 30/24 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic effects, by providing first and second parallax images to each of an observer's left and right eyes; of the stereoscopic type; involving temporal multiplexing, e.g. using sequentially activated left and right shutters
G02B 30/25 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic effects, by providing first and second parallax images to each of an observer's left and right eyes; of the stereoscopic type; using polarisation techniques
H04N 13/341 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD]; using temporal multiplexing
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD]; with head-mounted left-right displays
Examples of the disclosure describe systems and methods for recording augmented reality and mixed reality experiences. In an example method, an image of a real environment is received via a camera of a wearable head device. A pose of the wearable head device is estimated, and a first image of a virtual environment is generated based on the pose. A second image of the virtual environment is generated based on the pose, wherein the second image of the virtual environment comprises a larger field of view than a field of view of the first image of the virtual environment. A combined image is generated based on the second image of the virtual environment and the image of the real environment.
G06T 7/70 - Determining position or orientation of objects or cameras
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD]; with head-mounted left-right displays
Examples of an imaging system for use with a head mounted display (HMD) are disclosed. The imaging system can include a forward-facing imaging camera and a surface of a display of the HMD can include an off-axis diffractive optical element (DOE) or hot mirror configured to reflect light to the imaging camera. The DOE or hot mirror can be segmented, for example, with different segments having different angles or different optical power. The imaging system can be used for eye tracking, biometric identification, multiscopic reconstruction of the three-dimensional shape of the eye, etc. Methods for manufacturing angularly segmented optical elements are also provided. The methods can include injection molding.
B29C 45/00 - Injection moulding, i.e. forcing the required volume of moulding material through a nozzle into a closed mould; Apparatus therefor
B29C 45/16 - Making multilayered or multicoloured articles
B29K 33/00 - Use of polymers of unsaturated acids or derivatives thereof, as moulding material
B29K 67/00 - Use of polyesters as moulding material
B29K 69/00 - Use of polycarbonates as moulding material
B29L 11/00 - Optical elements, e.g. lenses, prisms
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups ,
According to an example method, a location of a first virtual speaker array is determined. A first virtual speaker density is determined. Based on the first virtual speaker density, a location of a second virtual speaker of the first virtual speaker array is determined. A source location in a virtual environment is determined for an audio signal. A virtual speaker of the first virtual speaker array is selected based on the source location and based further on a position or an orientation of a listener in the virtual environment. A head-related transfer function (HRTF) is identified that corresponds to the selected virtual speaker of the first virtual speaker array. The HRTF is applied to the audio signal to produce a first filtered audio signal. The first filtered audio signal is presented to the listener via a first speaker.
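The speaker placement and selection steps above can be sketched for a simple ring-shaped array. This is an illustrative reduction, not the patented method: a single circle of speakers stands in for the virtual speaker array, the density directly fixes the azimuths, and listener orientation is folded in as a yaw offset.

```python
import math

# Illustrative sketch: a ring of virtual speakers whose positions follow
# from a chosen density, and selection of the speaker nearest the
# source direction as seen by the listener (the selected speaker's HRTF
# would then be applied to the audio signal).

def speaker_azimuths(density_per_circle):
    """Place speakers evenly on a circle; density fixes their locations."""
    step = 2 * math.pi / density_per_circle
    return [i * step for i in range(density_per_circle)]

def select_speaker(azimuths, source_azimuth, listener_azimuth=0.0):
    """Pick the speaker with the smallest circular distance to the
    source direction, after removing the listener's yaw."""
    rel = (source_azimuth - listener_azimuth) % (2 * math.pi)
    return min(azimuths,
               key=lambda a: min(abs(a - rel), 2 * math.pi - abs(a - rel)))
```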
A cross reality system enables any of multiple devices to efficiently and accurately access previously persisted maps of very large scale environments and render virtual content specified in relation to those maps. The cross reality system may quickly determine whether a 2D set of features derived from images acquired with a portable device match a set of 3D features of an environment map and, if so, determine the relative pose of the feature sets. The pose may be used in quickly and accurately localizing the portable device to the environment map. Pairs of features in the 2D and 3D features sets may be identified based on matching feature descriptors and may be scored in a neural network trained to assess the quality of the match. Poses may be identified based on subsets of the matching features weighted towards pairs of features with high quality.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 18/22 - Matching criteria, e.g. proximity measures
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06V 10/44 - Local feature extraction by analysing parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/98 - Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
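The descriptor-matching step in the entry above can be sketched as follows. Note the patent scores candidate pairs with a trained neural network; this minimal sketch substitutes a simple distance-based score, and the function names and greedy nearest-neighbour strategy are assumptions for illustration only.

```python
def match_features(desc2d, desc3d, max_dist=0.5):
    """Greedily pair 2D and 3D feature descriptors by nearest neighbour.
    Returns (i, j, score) tuples; score is in (0, 1], higher = closer match."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    pairs = []
    used = set()  # each 3D feature may be claimed by at most one 2D feature
    for i, d2 in enumerate(desc2d):
        best_j, best_d = None, max_dist
        for j, d3 in enumerate(desc3d):
            if j in used:
                continue
            d = dist(d2, d3)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            used.add(best_j)
            pairs.append((i, best_j, 1.0 / (1.0 + best_d)))
    return pairs
```

A pose solver would then weight these pairs by score, favouring high-quality matches as the abstract describes.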
An augmented reality headset includes a projector and an eyepiece waveguide stack optically coupled to the projector. The eyepiece waveguide stack includes: a first eyepiece waveguide including a first incoupling diffractive optical element and a first combined pupil expander; and a second eyepiece waveguide including a second incoupling diffractive optical element and a second combined pupil expander. The second incoupling diffractive optical element is laterally offset from the first incoupling diffractive optical element in a lateral direction.
G02B 6/293 - Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals, with wavelength selective means
An augmented reality system includes a projector and projection optics coupled to the projector. The augmented reality system also includes an eyepiece waveguide including hybrid diffractive structure including: one or more first nanofeatures having first material with a first index of refraction; and one or more second nanofeatures having a second material with a second index of refraction.
Embodiments of the present disclosure can provide systems and methods for presenting audio signals based on an analysis of a voice of a speaker in an augmented reality or mixed reality environment. Methods according to embodiments of this disclosure can include receiving audio data from a microphone of a first wearable head device, the first wearable head device in communication with a virtual environment, the audio data comprising speech data. In some examples, the methods can include identifying a voice parameter based on the audio data. In some examples, the methods can include determining an acoustic parameter based on the voice parameter. In some examples, the methods can include applying the acoustic parameter to the audio data to generate a spatialized audio signal. In some examples, the methods can include presenting the spatialized audio signal to a second wearable head device in communication with the virtual environment.
A thin transparent layer can be integrated in a head mounted display device and disposed in front of the eye of a wearer. The thin transparent layer may be configured to output light such that light is directed onto the eye to create reflections therefrom that can be used, for example, for glint based tracking. The thin transparent layer can be configured to reduce obstructions in the user's field of view.
A cross reality system enables any of multiple types of devices to efficiently and accurately access previously stored maps and render virtual content specified in relation to those maps. The cross reality system may include a cloud-based localization service that responds to requests from devices to localize with respect to a stored map. Devices of any type, with native hardware and software configured for augmented reality operations, may be configured to work with the cross reality system by incorporating components that interface between the native AR framework of the device and the cloud-based localization service. These components may present position information about the device in a format recognized by the localization service. Additionally, these components may filter or otherwise process perception data provided by the native AR framework to increase the accuracy of localization.
Systems, methods, and computer program products for displaying virtual contents using a wearable electronic device determine the location of the sighting centers of both eyes of a user wearing the wearable electronic device and estimate an error or precision for these sighting centers. A range of operation may be determined for a focal distance or a focal plane at the focal distance based at least in part upon the error or the precision and a criterion pertaining to vergence and accommodation of binocular vision of the virtual contents with the wearable electronic device. A virtual content may be adjusted into an adjusted virtual content for presentation with respect to the focal plane or the focal distance based at least in part upon the range of operation. The adjusted virtual content may be presented to the user with respect to the focal distance or the focal plane.
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/383 - Tracking of viewers for tracking gaze, i.e. detecting the axis of vision of the viewer's eyes
Examples of the disclosure describe systems and methods for reducing audio effects of fan noise, specifically for a wearable system. A method includes: operating a fan of a wearable head device; detecting, with a microphone of the wearable head device, noise generated by the fan; generating a fan reference signal, wherein the fan reference signal represents at least one of a speed of the fan, a mode of the fan, a power output of the fan, and a phase of the fan; deriving a transfer function based on the fan reference signal and based further on the detected noise of the fan; generating a compensation signal based on the transfer function; and, while operating the fan of the wearable head device, outputting, by a speaker of the wearable head device, an anti-noise signal, wherein the anti-noise signal is based on the compensation signal.
G10K 11/178 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
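The transfer-function and anti-noise steps in the entry above can be illustrated with a narrowband sketch. Assumptions not taken from the patent: the fan noise is modelled as a single tone at a known frequency, the transfer function is estimated by projecting both signals onto a complex tone, and all function names are hypothetical.

```python
import cmath
import math

def estimate_transfer(ref, measured, freq, rate):
    """Estimate a single-frequency transfer function H = measured / reference
    by projecting both signals onto a complex tone at `freq` (Hz)."""
    tone = [cmath.exp(-2j * math.pi * freq * t / rate) for t in range(len(ref))]
    r = sum(x * w for x, w in zip(ref, tone))
    m = sum(x * w for x, w in zip(measured, tone))
    return m / r

def anti_noise(h, freq, rate, n):
    """Compensation tone with the predicted noise magnitude but opposite sign,
    so that speaker output cancels the fan tone at the microphone."""
    w = 2 * math.pi * freq / rate
    return [-abs(h) * math.cos(w * t + cmath.phase(h)) for t in range(n)]
```

For a reference cosine and a measured copy scaled by 0.5 and phase-shifted, the recovered H has magnitude 0.5 and the anti-noise signal cancels the measured noise sample-for-sample.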
60.
SYSTEMS AND METHODS FOR DISPLAY BINOCULAR DEFORMATION COMPENSATION
A display subsystem for a virtual image generation system used by an end user, comprises first and second waveguide apparatuses, first and second projection subassemblies configured for introducing first and second light beams respectively into the first and second waveguide apparatuses, such that at least a first light ray and at least a second light ray respectively exit the first and second waveguide apparatuses to display first and second monocular images as a binocular image to the end user, and a light sensing assembly configured for detecting at least one parameter indicative of a mismatch between the displayed first and second monocular images as the binocular image.
An anti-reflective waveguide assembly comprising a waveguide substrate having a first index of refraction, a plurality of diffractive optical elements disposed upon a first surface of the waveguide, and an anti-reflective coating disposed upon a second surface of the waveguide. The anti-reflective coating preferably increases the transmission of light into the waveguide through the surface to which it is applied, so that at least 97 percent of the light is transmitted. The anti-reflective coating is composed of four layers of material having indices of refraction different from the first index of refraction and an imaginary refractive index less than 1×10−3, but preferably less than 5×10−4.
A wearable device can have a field of view through which a user can perceive real or virtual objects. The device can display a visual aura representing contextual information associated with an object that is outside the user's field of view. The visual aura can be displayed near an edge of the field of view and can dynamically change as the contextual information associated with the object changes, e.g., the relative position of the object and the user (or the user's field of view) changes.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
G06V 40/18 - Eye characteristics, e.g. of the iris
Extended reality display systems and methods of manufacturing for improving the compactness and manufacturability of the display system. The display systems include an integral projection lens barrel into which a lens stack is directly loaded. The display systems may also have the illuminator and spatial light modulator on opposite sides of a waveguide, which reduces the overall volume of the display system. Embodiments of the display systems and methods also include the use of structural datums for accurately positioning components of the display systems to improve the manufacturing tolerances of the display systems, which ensures that the display systems function at an acceptable optical performance.
A cross reality system enables any of multiple devices to efficiently render shared location-based content. The cross reality system may include a cloud-based service that responds to requests from devices to localize with respect to a stored map. The service may return to the device information that localizes the device with respect to the stored map. In conjunction with localization information, the service may provide information about locations in the physical world proximate the device for which virtual content has been provided. Based on information received from the service, the device may render, or stop rendering, virtual content to each of multiple users based on the user's location and specified locations for the virtual content.
G10K 11/178 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
H04R 1/40 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
An augmented reality (AR) display device can display a virtual assistant character that interacts with the user of the AR device. The virtual assistant may be represented by a robot (or other) avatar that assists the user with contextual objects and suggestions depending on what virtual content the user is interacting with. Animated images may be displayed above the robot's head to display its intents to the user. For example, the robot can run up to a menu and suggest an action and show the animated images. The robot can materialize virtual objects that appear on its hands. The user can remove such an object from the robot's hands and place it in the environment. If the user does not interact with the object, the robot can dematerialize it. The robot can rotate its head to keep looking at the user and/or an object that the user has picked up.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups ,
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
67.
WAVEGUIDES WITH INTEGRATED OPTICAL ELEMENTS AND METHODS OF MAKING THE SAME
This disclosure describes optical devices, such as waveguides, and methods of manufacturing same. An example waveguide can include a polymer layer having substantially optically transparent material with first and second major surfaces configured such that light containing image information can propagate through the polymer layer being guided therein by reflecting from the first and second major surfaces via total internal reflection. The first surface can include first smaller and second larger surface portions monolithically integrated with the polymer layer and with each other. The first smaller surface portion can include at least a part of an in-coupling optical element configured to couple light incident on the in-coupling optical element into the polymer layer for propagation therethrough by reflection from the second major surface and the second larger surface portion of the first major surface.
G02B 6/28 - Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals
G02B 27/18 - Optical systems or apparatus not provided for by any of the groups , for optical projection, e.g. combination of mirror, condenser and objective
G06T 19/00 - Manipulating 3D models or images for computer graphics
68.
EXTENDED REALITY DISPLAY SYSTEM WITH VISION CORRECTION
An extended reality (XR) system includes an XR display configured to be movably disposed in a line of sight of a user. The system also includes a vision correction component configured to be disposed in the line of sight of the user. The system further includes a displacement mechanism configured to guide the XR display out of the line of sight of the user while the vision correction component remains in the line of sight of the user, and limit relative positions of the XR display and the vision correction component.
According to an example method, it is determined whether a difference between first acoustic data and second acoustic data exceeds a threshold. The first acoustic data is associated with a first client application in communication with an audio service. The second acoustic data is associated with a second client application. A first input audio signal associated with the first client application is received via the audio service. In accordance with the determination that the difference does not exceed the threshold, the second acoustic data is applied to the first input audio signal to produce a first output audio signal. In accordance with a determination that the difference exceeds the threshold, the first acoustic data is applied to the first input audio signal to produce the first output audio signal. The first output audio signal is presented to a user of a wearable head device in communication with the audio service.
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
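The threshold-based selection described in the entry above reduces to a small branch. The sketch below is illustrative only: the acoustic data are modelled as flat parameter tuples, the difference metric is a maximum absolute component difference, and "applying" the data is a toy gain stage; none of these specifics come from the patent.

```python
def select_acoustic_data(first, second, threshold):
    """Reuse the second client's acoustic data when it differs from the
    first client's by no more than `threshold`; otherwise use the first's."""
    diff = max(abs(a - b) for a, b in zip(first, second))
    return first if diff > threshold else second

def apply_acoustic_data(params, signal):
    """Toy 'application' of acoustic data: scale by the first parameter."""
    return [params[0] * x for x in signal]
```

Sharing acoustic data between similar clients this way avoids recomputing per-client processing when two applications describe nearly the same acoustic environment.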
A virtual image generation system comprises a planar optical waveguide having opposing first and second faces, an in-coupling (IC) element configured for optically coupling a collimated light beam from an image projection assembly into the planar optical waveguide as an in-coupled light beam, a first orthogonal pupil expansion (OPE) element associated with the first face of the planar optical waveguide for splitting the in-coupled light beam into a first set of orthogonal light beamlets, a second orthogonal pupil expansion (OPE) element associated with the second face of the planar optical waveguide for splitting the in-coupled light beam into a second set of orthogonal light beamlets, and an exit pupil expansion (EPE) element associated with the planar optical waveguide for splitting the first and second sets of orthogonal light beamlets into an array of out-coupled light beamlets that exit the planar optical waveguide.
G02B 6/12 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings, of the optical waveguide type of the integrated circuit kind
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
A method of fabricating an optical element includes providing a substrate, forming a castable material coupled to the substrate, and casting the castable material using a mold. The method also includes curing the castable material and removing the mold. The optical element includes a planar region and a clear aperture adjacent the planar region and characterized by an optical power.
Disclosed herein is an augmented reality (AR) system that provides information about purchasing alternatives to a user who is about to purchase an item or product (e.g., a target product) in a physical retail location. In some variations, offers to purchase the product and/or an alternative product are provided by the merchant and/or competitors via the AR system. An offer negotiation server (ONS) aggregates offer data provided by various external parties (EPs) and displays these offers to the user as the user is considering the purchase of a target product. In some variations, an AR system may be configured to facilitate the process of purchasing items at a retail location.
A head-mounted apparatus includes an eyepiece that includes a variable dimming assembly and a frame mounting the eyepiece so that a user side of the eyepiece faces towards a user and a world side of the eyepiece opposite the first side faces away from the user. The dynamic dimming assembly selectively modulates an intensity of light transmitted parallel to an optical axis from the world side to the user side during operation. The dynamic dimming assembly includes a variable birefringence cell having multiple pixels each having an independently variable birefringence, a first linear polarizer arranged on the user side of the variable birefringence cell, the first linear polarizer being configured to transmit light propagating parallel to the optical axis linearly polarized along a pass axis of the first linear polarizer orthogonal to the optical axis, a quarter wave plate arranged between the variable birefringence cell and the first linear polarizer, a fast axis of the quarter wave plate being arranged relative to the pass axis of the first linear polarizer to transform linearly polarized light transmitted by the first linear polarizer into circularly polarized light, and a second linear polarizer on the world side of the variable birefringence cell.
G02B 27/28 - Optical systems or apparatus not provided for by any of the groups , for polarising
G02F 1/1335 - Structural association of cells with optical devices, e.g. polarisers or reflectors
G02F 1/13363 - Birefringent elements, e.g. for optical compensation
G02F 1/139 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour, based on liquid crystals, e.g. single liquid crystal display cells, characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering, based on orientation effects in which the liquid crystals remain transparent
75.
EYE TRACKING BASED VIDEO TRANSMISSION AND COMPRESSION
A computer-implemented method includes receiving gaze information about an observer of a video stream; determining a video compression spatial map for the video stream based on the received gaze information and performance characteristics of a network connection with the observer; compressing the video stream according to the video compression spatial map; and sending the compressed video stream to the observer.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06T 7/50 - Depth or shape recovery
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
H04N 19/59 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
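The gaze-and-bandwidth-driven compression map described in the entry above can be sketched per block. This is a minimal foveation-style illustration under stated assumptions: the quality falloff curve, the 100-pixel foveation radius, and the 5000 kbps bandwidth normaliser are invented for the example, not taken from the patent.

```python
def compression_map(width, height, block, gaze_xy, bandwidth_kbps):
    """Per-block quality map in [0, 1]: quality falls off with distance from
    the gaze point, and the whole map scales down when bandwidth is scarce."""
    gx, gy = gaze_xy
    scale = min(1.0, bandwidth_kbps / 5000.0)   # assumed bandwidth budget
    rows = []
    for by in range(0, height, block):
        row = []
        for bx in range(0, width, block):
            cx, cy = bx + block / 2, by + block / 2
            dist = ((cx - gx) ** 2 + (cy - gy) ** 2) ** 0.5
            falloff = 1.0 / (1.0 + dist / 100.0)  # assumed foveation radius
            row.append(round(scale * falloff, 3))
        rows.append(row)
    return rows
```

An encoder would translate each block's quality value into a quantisation level before compressing and sending the stream.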
Systems and methods of presenting an output audio signal to a listener located at a first location in a virtual environment are disclosed. According to embodiments of a method, an input audio signal is received. For each sound source of a plurality of sound sources in the virtual environment, a respective first intermediate audio signal corresponding to the input audio signal is determined, based on a location of the respective sound source in the virtual environment, and the respective first intermediate audio signal is associated with a first bus. For each of the sound sources of the plurality of sound sources in the virtual environment, a respective second intermediate audio signal is determined. The respective second intermediate audio signal corresponds to a reflection of the input audio signal in a surface of the virtual environment. The respective second intermediate audio signal is determined based on a location of the respective sound source, and further based on an acoustic property of the virtual environment. The respective second intermediate audio signal is associated with a second bus. The output audio signal is presented to the listener via the first bus and the second bus.
G10K 15/10 - Arrangements for producing a reverberation or echo sound using time-delay networks comprising electromechanical or electro-acoustic devices
H04R 3/04 - Circuits for transducers for correcting frequency response
H04R 3/12 - Circuits for transducers for distributing signals to two or more loudspeakers
H04R 5/033 - Headphones for stereophonic communication
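The two-bus structure described in the entry above (a direct signal per source, plus a reflection per source) can be sketched with a simplified 2D image-source model. Everything specific here is assumed for illustration: the 1/distance gain law, a single wall at x = 0, and a scalar absorption coefficient standing in for the virtual environment's acoustic property.

```python
def render_buses(sources, listener, signal, absorption=0.5):
    """Mix one input signal into a direct bus and a reflection bus.
    Direct gain ~ 1/distance; each reflection uses the source mirrored in a
    wall at x=0, attenuated by the wall's absorption (simplified 2D model)."""
    def gain(src):
        d = ((src[0] - listener[0]) ** 2 + (src[1] - listener[1]) ** 2) ** 0.5
        return 1.0 / max(d, 1.0)  # clamp to avoid blow-up at the listener

    direct_bus = [0.0] * len(signal)
    reflect_bus = [0.0] * len(signal)
    for src in sources:
        g = gain(src)
        mirrored = (-src[0], src[1])            # image source behind the wall
        r = gain(mirrored) * (1.0 - absorption)
        for i, x in enumerate(signal):
            direct_bus[i] += g * x
            reflect_bus[i] += r * x
    return direct_bus, reflect_bus
```

Keeping direct and reflected contributions on separate buses lets the renderer apply different downstream processing (e.g. reverberation) to each before summing the output for the listener.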
This disclosure describes a wearable display system configured to project light to the eye(s) of a user to display virtual (e.g., augmented reality) image content in a vision field of the user. The system can include light source(s) that output light, spatial light modulator(s) that modulate the light to provide the virtual image content, and an eyepiece configured to convey the modulated light toward the eye(s) of the user. The eyepiece can include waveguide(s) and a plurality of in-coupling optical elements arranged on or in the waveguide(s) to in-couple the modulated light received from the spatial light modulator(s) into the waveguide(s) to be guided toward the user's eye(s). The spatial light modulator(s) may be movable, and/or may include movable components, to direct different portions of the modulated light toward different ones of the in-coupling optical elements at different times.
Embodiments of this disclosure provide systems and methods for displays. In embodiments, a display system includes a frame, an eyepiece coupled to the frame, and a first adhesive bond disposed between the frame and the eyepiece. The eyepiece can include a light input region and a light output region. The first adhesive bond can be disposed along a first portion of a perimeter of the eyepiece, where the first portion of the perimeter of the eyepiece borders the light input region such that the first adhesive bond is configured to maintain a position of the light input region relative to the frame.
The invention provides a content provisioning system. A mobile device has a mobile device processor. The mobile device has a communication interface connected to the mobile device processor and to a first resource device communication interface, and operating under the control of the mobile device processor to receive first content transmitted by the first resource device transmitter. The mobile device has a mobile device output device connected to the mobile device processor and, under control of the mobile device processor, capable of providing an output that can be sensed by a user.
A method of processing an audio signal is disclosed. According to embodiments of the method, magnitude response information of a prototype filter is determined. The magnitude response information includes a plurality of gain values, at least one of which includes a first gain corresponding to a first frequency. The magnitude response information of the prototype filter is stored. The magnitude response information of the prototype filter at the first frequency is retrieved. Gains are computed for a plurality of control frequencies based on the retrieved magnitude response information of the prototype filter at the first frequency, and the computed gains are applied to the audio signal.
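The stored-magnitude-response and control-frequency-gain steps above can be sketched with linear interpolation. This is an illustrative reading, not the patent's algorithm: the (frequency, gain) pair representation, the interpolation scheme, and the function names are assumptions.

```python
def gain_at(magnitude_response, freq):
    """Linearly interpolate a stored magnitude response, given as a sorted
    list of (frequency_hz, gain) pairs, at an arbitrary frequency."""
    pts = magnitude_response
    if freq <= pts[0][0]:
        return pts[0][1]
    if freq >= pts[-1][0]:
        return pts[-1][1]
    for (f0, g0), (f1, g1) in zip(pts, pts[1:]):
        if f0 <= freq <= f1:
            t = (freq - f0) / (f1 - f0)
            return g0 + t * (g1 - g0)

def control_gains(magnitude_response, control_freqs):
    """Gain for each control frequency, read off the prototype response."""
    return [gain_at(magnitude_response, f) for f in control_freqs]
```

The computed gains would then parameterise a filter (e.g. a bank of band shelves) applied to the audio signal.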
An augmented reality viewer includes components, assemblies, and executable logic to provide a user with the perception of rich augmented reality experiences, including aspects of the aquatic world.
A pupil separation system includes a first optical element disposed at a first position along an axis and including a first input surface, a first beamsplitter optically coupled to the first input surface, and a first output surface optically coupled to the first beamsplitter. The pupil separation system also includes a second optical element disposed at a second position along the axis and including a second input surface, a second beamsplitter optically coupled to the second input surface, and a second output surface optically coupled to the second beamsplitter. The pupil separation system further includes a third optical element disposed at a third position along the axis and including a third input surface, a reflective surface optically coupled to the third input surface, and a third output surface optically coupled to the reflective surface.
G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
G06F 18/22 - Matching criteria, e.g. proximity measures
A user may interact and select positions in three-dimensional space arbitrarily through conversion of a two-dimensional positional input into a three-dimensional point in space. The system may allow a user to use one or more user input devices for pointing, annotating, or drawing on virtual objects, real objects or empty space in reference to the location of the three-dimensional point in space within an augmented reality or mixed reality session.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers, using gyroscopes, accelerometers or tilt-sensors
G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
G06F 3/04812 - Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06T 19/00 - Manipulating 3D models or images for computer graphics
A method is disclosed, the method comprising the steps of identifying a first real object in a mixed reality environment, the mixed reality environment having a user; identifying a second real object in the mixed reality environment; generating, in the mixed reality environment, a first virtual object corresponding to the second real object; identifying, in the mixed reality environment, a collision between the first real object and the first virtual object; determining a first attribute associated with the collision; determining, based on the first attribute, a first audio signal corresponding to the collision; and presenting to the user, via one or more speakers, the first audio signal.
A method of forming a reconstructed image includes receiving an image having a plurality of lines of pixel data and forming a mask for each line. Each line is defined by a plurality of pixel groups. The method includes, for each pixel group of the plurality of pixel groups, defining a first bit if pixels in the pixel group are characterized by pixel values less than a brightness threshold and defining a second bit if pixels in the pixel group are characterized by pixel values greater than or equal to the brightness threshold. The method further includes providing pixel values for pixels in pixel groups having the second bit, storing the mask and the provided pixel values for each line in a memory, extracting the mask and the provided pixel values for each line from the memory, and forming a reconstructed image using the mask and the provided pixel values.
H04N 1/64 - Systems for the transmission or the storage of the colour picture signal; Details therefor, e.g. coding or decoding means therefor
H04N 19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06N 5/04 - Inference or reasoning models
H04N 25/535 - Control of the integration time by using differing integration times for different sensor regions by dynamic region selection
H04N 1/41 - Bandwidth or redundancy reduction
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups ,
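The mask-based encoding and reconstruction described in the entry above is directly codable. One detail is ambiguous in the abstract (whether a group is flagged when any or all of its pixels reach the threshold); this sketch assumes "any", and the function names are illustrative.

```python
def encode_line(pixels, group, threshold):
    """Encode one line: one mask bit per pixel group (1 if any pixel meets
    the brightness threshold) plus pixel values only for the bright groups."""
    mask, values = [], []
    for i in range(0, len(pixels), group):
        g = pixels[i:i + group]
        if any(p >= threshold for p in g):
            mask.append(1)
            values.extend(g)
        else:
            mask.append(0)
    return mask, values

def decode_line(mask, values, group):
    """Reconstruct the line: dark groups become zeros, bright groups are
    refilled from the stored values in order."""
    out, v = [], 0
    for bit in mask:
        if bit:
            out.extend(values[v:v + group])
            v += group
        else:
            out.extend([0] * group)
    return out
```

For mostly dark images this stores one bit per group plus values for the few bright groups, which is the compression the abstract relies on.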
86.
TEMPERATURE DEPENDENT CALIBRATION OF MOVEMENT DETECTION DEVICES
An electronics system has a board with a thermal interface having an exposed surface. A thermoelectric device is placed against the thermal interface to heat the board. Heat transfers through the board from a first region where the thermal interface is located to a second region where an electronics device is mounted. The electronics device has a temperature sensor that detects the temperature of the electronics device. The temperature of the electronics device is used to calibrate an accelerometer and a gyroscope in the electronics device. Calibration data includes a temperature and a corresponding acceleration offset and a corresponding angle offset. A field computer simultaneously senses a temperature, an acceleration and an angle from the temperature sensor, accelerometer and gyroscope and adjusts the measured data with the offset data at the same temperature. The field computer provides corrected data to a controlled system.
G01C 25/00 - Manufacturing, calibrating, cleaning or repairing instruments or devices referred to in the other groups of this subclass
B81B 7/02 - Microstructural systems containing distinct electrical or optical devices of particular importance for their function, e.g. microelectromechanical systems [MEMS]
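The temperature-indexed correction described above (a calibration table of temperature, acceleration offset and angle offset, applied to readings sensed at the same temperature) can be sketched as a lookup with linear interpolation. The table values, function names, and the interpolation choice are illustrative assumptions, not taken from the patent.

```python
import bisect

# Assumed calibration data: (temperature in deg C, accel offset, angle offset)
CAL_TABLE = [
    (10.0, 0.02, 0.1),
    (25.0, 0.05, 0.3),
    (40.0, 0.09, 0.6),
]

def offsets_at(temp):
    """Interpolate the acceleration and angle offsets at a sensed temperature."""
    temps = [t for t, _, _ in CAL_TABLE]
    i = bisect.bisect_left(temps, temp)
    if i == 0:
        return CAL_TABLE[0][1:]      # clamp below the calibrated range
    if i == len(CAL_TABLE):
        return CAL_TABLE[-1][1:]     # clamp above the calibrated range
    (t0, a0, g0), (t1, a1, g1) = CAL_TABLE[i - 1], CAL_TABLE[i]
    f = (temp - t0) / (t1 - t0)
    return a0 + f * (a1 - a0), g0 + f * (g1 - g0)

def correct(temp, accel, angle):
    """Subtract the temperature-dependent offsets from raw sensor readings."""
    a_off, g_off = offsets_at(temp)
    return accel - a_off, angle - g_off
```

A field device would sense temperature, acceleration and angle together, then apply `correct` before handing the data to the controlled system.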
An augmented reality (AR) system includes a wearable device comprising a display inside the wearable device and operable to display virtual content and an imaging device mounted to the wearable device. The AR system also includes a handheld device comprising handheld fiducials affixed to the handheld device and a computing apparatus configured to perform localization of the handheld device with respect to the wearable device.
Examples of the disclosure describe systems and methods for generating and displaying a virtual companion. In an example method, a first input from an environment of a user is received at a first time via a first sensor. An occurrence of an event in the environment is determined based on the first input. A second input from the user is received via a second sensor, and an emotional reaction of the user is identified based on the second input. An association is determined between the emotional reaction and the event. A view of the environment is presented at a second time later than the first time via a display. A stimulus is presented at the second time via a virtual companion displayed via the display, wherein the stimulus is determined based on the determined association between the emotional reaction and the event.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems, during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure, by executing in a restricted environment, e.g. sandbox or secure virtual machine
A method of operating an eyepiece waveguide having a first diffractive region and a second diffractive region includes directing light from a first projector to impinge on a first incoupling grating (ICG) and directing light from a second projector to impinge on a second ICG. Light from the first projector is diffracted into a first portion of the first diffractive region, diffracted into the second diffractive region, and subsequently diffracted out of the eyepiece waveguide. Light from the second projector is diffracted into a third portion of the second diffractive region, diffracted into the first portion of the first diffractive region, and subsequently diffracted out of the eyepiece waveguide.
A method of operating an imaging system to display an image includes providing a light source having an emission area and a light-guiding optical element having an output field of view. The method also includes activating a segment of the light source, causing the light source to produce a light beam that illuminates a portion of the emission area, encoding the light beam with image data, thereby producing an encoded light beam, focusing the encoded light beam onto the light-guiding optical element, and projecting the encoded light beam from a portion of the output field of view of the light-guiding optical element that corresponds to the portion of the emission area of the light source.
G02B 30/24 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images, by providing first and second parallax images to an observer's left and right eyes, of the stereoscopic type, involving temporal multiplexing, e.g. using sequentially activated left and right shutters
G03H 1/22 - Processes or apparatus for obtaining an optical image from a hologram
G03H 1/26 - Processes or apparatus specially adapted to produce multiple holograms or to obtain images from them, e.g. multicolour holography
H04N 9/31 - Projection devices for colour picture display
H04N 23/56 - Cameras or camera modules comprising electronic image sensors; Control thereof, provided with illuminating means
91.
METHOD AND SYSTEM FOR VARIABLE OPTICAL THICKNESS WAVEGUIDES FOR AUGMENTED REALITY DEVICES
An augmented reality device includes a projector, projector optics optically coupled to the projector, and a substrate structure including a substrate having an incident surface and an opposing exit surface and a first variable thickness film coupled to the incident surface. The substrate structure can also include a first combined pupil expander coupled to the first variable thickness film, a second variable thickness film coupled to the opposing exit surface, an incoupling grating coupled to the opposing exit surface, and a second combined pupil expander coupled to the opposing exit surface.
G02B 6/13 - Integrated optical circuits characterised by the manufacturing method
F21V 8/00 - Use of light guides, e.g. fibre optic devices, in lighting devices or systems
G02B 6/12 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings, of the optical waveguide type of the integrated circuit kind
G02B 6/122 - Basic optical elements, e.g. light-guiding paths
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
92.
METHOD OF WAKING A DEVICE USING SPOKEN VOICE COMMANDS
Disclosed herein are systems and methods for processing speech signals in mixed reality applications. A method may include receiving an audio signal; determining, via first processors, whether the audio signal comprises a voice onset event; in accordance with a determination that the audio signal comprises the voice onset event: waking second processors; determining, via the second processors, whether the audio signal comprises a predetermined trigger signal; in accordance with a determination that the audio signal comprises the predetermined trigger signal: waking third processors and performing, via the third processors, automatic speech recognition based on the audio signal; in accordance with a determination that the audio signal does not comprise the predetermined trigger signal: forgoing waking the third processors; and in accordance with a determination that the audio signal does not comprise the voice onset event: forgoing waking the second processors.
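The staged wake logic above reduces to a chain of gates, each later (and more power-hungry) stage woken only if the previous stage passes. The function and parameter names below are hypothetical placeholders; the stage predicates stand in for the onset detector, trigger-phrase check, and speech recognizer.

```python
def process_audio(audio, has_voice_onset, is_trigger_phrase, run_asr):
    """Wake each processing stage only if the preceding stage passes.

    has_voice_onset:   stage-1 predicate on the low-power first processors
    is_trigger_phrase: stage-2 predicate on the second processors
    run_asr:           stage-3 automatic speech recognition on the third processors
    """
    if not has_voice_onset(audio):
        return None                 # forgo waking the second processors
    if not is_trigger_phrase(audio):
        return None                 # forgo waking the third processors
    return run_asr(audio)           # full speech recognition runs last
```

The design keeps the always-on stage cheap: most audio is rejected by the onset detector before any higher-power processor is woken.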
Augmented reality and virtual reality display systems and devices are configured for efficient use of projected light. In some aspects, a display system includes a light projection system and a head-mounted display configured to project light into an eye of the user to display virtual image content. The head-mounted display includes at least one waveguide comprising a plurality of in-coupling elements, each configured to receive, from the light projection system, light corresponding to a portion of the user's field of view and to in-couple the light into the waveguide; and a plurality of out-coupling elements configured to out-couple the light out of the waveguide to display the virtual content, wherein each of the out-coupling elements is configured to receive light from different ones of the in-coupling elements.
A foveated display for projecting an image to an eye of a viewer is provided. The foveated display includes a first projector and a dynamic eyepiece optically coupled to the first projector. The dynamic eyepiece comprises a waveguide having a variable surface profile. The foveated display also includes a second projector and a fixed depth plane eyepiece optically coupled to the second projector.
G02B 30/52 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images, the image being built up from image elements distributed over a 3D volume, e.g. voxels, the 3D volume being constructed from a stack or sequence of 2D planes, e.g. depth sampling systems
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups ,
Examples of an imaging system for use with a head mounted display (HMD) are disclosed. The imaging system can include a forward-facing imaging camera and a surface of a display of the HMD can include an off-axis diffractive optical element (DOE) or hot mirror configured to reflect light to the imaging camera. The DOE or hot mirror can be segmented. The imaging system can be used for eye tracking, biometric identification, multiscopic reconstruction of the three-dimensional shape of the eye, etc.
A61B 3/10 - Apparatus for testing the eyes; Instruments for examining the eyes of objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions
A61B 3/113 - Apparatus for testing the eyes; Instruments for examining the eyes of objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions, for determining or recording eye movement
A61B 3/12 - Apparatus for testing the eyes; Instruments for examining the eyes of objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions, for looking at the eye fundus, e.g. ophthalmoscopes
A61B 3/14 - Arrangements specially adapted for eye photography
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, or mobility of a limb
Methods and systems for triggering presentation of virtual content based on sensor information. The display system may be an augmented reality display system configured to provide virtual content on a plurality of depth planes using different wavefront divergences. The system may monitor information detected via the sensors, and based on the monitored information, trigger access to virtual content identified in the sensor information. Virtual content can be obtained, and presented as augmented reality content via the display system. The system may monitor information detected via the sensors to identify a QR code, or a presence of a wireless beacon. The QR code or wireless beacon can trigger the display system to obtain virtual content for presentation.
A63F 13/211 - Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
G06T 19/00 - Manipulating 3D models or images for computer graphics
97.
OPTICAL LAYERS TO IMPROVE PERFORMANCE OF EYEPIECES FOR USE WITH VIRTUAL AND AUGMENTED REALITY DISPLAY SYSTEMS
Improved diffractive optical elements for use in an eyepiece for an extended reality system. The diffractive optical elements comprise a diffraction structure having a waveguide substrate, a surface grating positioned on a first side of the waveguide substrate, and one or more optical layer pairs disposed between the waveguide substrate and the surface grating. Each optical layer pair comprises a low index layer and a high index layer disposed directly on an exterior side of the low index layer.
In some embodiments, an augmented reality system includes at least one waveguide that is configured to receive and redirect light toward a user, and is further configured to allow ambient light from an environment of the user to pass therethrough toward the user. The augmented reality system also includes a first adaptive lens assembly positioned between the at least one waveguide and the environment, a second adaptive lens assembly positioned between the at least one waveguide and the user, and at least one processor operatively coupled to the first and second adaptive lens assemblies. Each lens assembly of the augmented reality system is selectively switchable between at least two different states in which the respective lens assembly is configured to impart at least two different optical powers to light passing therethrough, respectively. The at least one processor is configured to cause the first and second adaptive lens assemblies to synchronously switch between different states in a manner such that the first and second adaptive lens assemblies impart a substantially constant net optical power to ambient light from the environment passing therethrough.
G02F 1/01 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour
G02F 1/13 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour, based on liquid crystals, e.g. single liquid crystal display cells
G02F 1/1337 - Surface-induced orientation of the liquid crystal molecules, e.g. by alignment layers
G02F 1/29 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the position or the direction of light beams, i.e. deflection
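The synchronous switching described in the adaptive-lens abstract above has a simple arithmetic core: to first order, stacked thin-lens powers add, so whatever power the user-side assembly applies to virtual content, the world-side assembly applies the complement so ambient light sees a roughly constant net power. The sketch below is a toy illustration under that thin-lens assumption; the state values (in dioptres) and names are invented for the example.

```python
NET_POWER = 0.0  # assumed target net optical power for ambient light, dioptres

def world_side_power(user_side_power):
    """Pick the world-side lens power that complements the user-side power.

    Under the thin-lens approximation, the powers of the two stacked
    assemblies add, so ambient light sees user + world = NET_POWER.
    """
    return NET_POWER - user_side_power

# Example: as the user-side assembly switches between depth-plane states,
# the world-side assembly switches synchronously to the complementary state.
for user_power in (-1.0, -0.5, 0.0):
    assert user_power + world_side_power(user_power) == NET_POWER
```

The virtual content still experiences only the user-side power (it never passes through the world-side assembly), which is what lets the system vary perceived depth without distorting the real world.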
99.
ANGULARLY SELECTIVE ATTENUATION OF LIGHT TRANSMISSION ARTIFACTS IN WEARABLE DISPLAYS
A wearable display system includes an eyepiece stack having a world side and a user side opposite the world side. During use, a user positioned on the user side views displayed images delivered by the wearable display system via the eyepiece stack, which augment the user's field of view of the user's environment. The system also includes an optical attenuator arranged on the world side of the eyepiece stack, the optical attenuator having a layer of a birefringent material having a plurality of domains, each having a principal optic axis oriented in a corresponding direction different from the direction of the other domains. Each domain of the optical attenuator reduces transmission of visible light incident on the optical attenuator for a corresponding different range of angles of incidence.
G02B 27/28 - Optical systems or apparatus not provided for by any of the groups , for polarising
G02F 1/01 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour
G02F 1/1335 - Structural association of cells with optical devices, e.g. polarisers or reflectors
G02F 1/13363 - Birefringent elements, e.g. for optical compensation
G02F 1/1337 - Surface-induced orientation of the liquid crystal molecules, e.g. by alignment layers
G02F 1/139 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour, based on liquid crystals, e.g. single liquid crystal display cells, characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering, based on orientation effects in which the liquid crystals remain transparent
A head mounted display system is configured to project a first image to an eye of a user. The head mounted display system includes at least one waveguide comprising a first major surface, a second major surface opposite the first major surface, and a first edge and a second edge between the first major surface and the second major surface. The at least one waveguide also includes a first reflector disposed between the first major surface and the second major surface. The head mounted display system also includes at least one light source disposed closer to the first major surface than the second major surface, and a spatial light modulator configured to form a second image and disposed closer to the first major surface than the second major surface, wherein the first reflector is configured to reflect light toward the spatial light modulator.