Systems and methods for generating and serving stylized map tiles for a social media platform's map-based graphical user interface. Multiple earth imagery tiles corresponding to a geographical area are retrieved, each comprising a photographic image of a corresponding portion of the Earth's surface. Based on the earth imagery tiles, multiple stylized map tiles are generated. In response to receiving a request from a user device for display of a target area in the map-based GUI, a set of stylized map tiles corresponding to the target area is retrieved and transmitted to the user device. The generation of stylized map tiles may include retrieving a target earth imagery tile together with neighboring tiles, generating an expanded earth imagery tile, stylizing the expanded tile, and cropping to produce the final stylized map tile. Different neural networks may be used for stylizing tiles at different zoom levels.
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
G06F 16/487 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
G06F 16/9535 - Search customisation based on user profiles and personalisation
G06F 16/9537 - Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
G06T 11/20 - Drawing from basic elements, e.g. lines or circles
G06T 11/60 - Editing figures and text; Combining figures or text
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
H04L 41/28 - Restricting access to network management systems or functions, e.g. using authorisation function to access network configuration
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
H04L 67/52 - Network services specially adapted for the location of the user terminal
H04W 4/02 - Services making use of location information
H04W 4/029 - Location-based management or tracking services
H04W 4/18 - Information format or content conversion, e.g. adaptation by the network of the transmitted or received information for the purpose of wireless delivery to users or terminals
H04W 4/21 - Services signalling; Auxiliary data signalling, i.e. transmitting data via a non-traffic channel for social networking applications
H04W 12/02 - Protecting privacy or anonymity, e.g. protecting personally identifiable information [PII]
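A minimal sketch, in Python with NumPy, of the expand-stylize-crop flow described in the abstract above: the target earth imagery tile is combined with its neighboring tiles, the expanded mosaic is passed to a zoom-specific stylization model, and the result is cropped back to a single stylized map tile. The tile size, the per-zoom model table, and the neighbor layout are illustrative assumptions, not details from the source.

```python
import numpy as np

TILE = 256  # assumed square tile size in pixels

def expand_tile(target: np.ndarray, neighbors: dict) -> np.ndarray:
    """Assemble a 3x3 mosaic of the target earth-imagery tile and its eight neighbors.

    `neighbors` maps (dx, dy) offsets in {-1, 0, 1} to tiles; a missing neighbor
    is simply replaced by the target tile in this sketch.
    """
    mosaic = np.zeros((3 * TILE, 3 * TILE, 3), dtype=np.uint8)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            tile = neighbors.get((dx, dy), target)
            mosaic[(dy + 1) * TILE:(dy + 2) * TILE, (dx + 1) * TILE:(dx + 2) * TILE] = tile
    return mosaic

def stylize_tile(target, neighbors, zoom, models):
    """Expand, stylize with a zoom-specific model, then crop back to the center tile."""
    expanded = expand_tile(target, neighbors)
    styled = models[zoom](expanded)               # hypothetical per-zoom-level network
    return styled[TILE:2 * TILE, TILE:2 * TILE]   # crop to the final stylized map tile

# Usage with a stand-in "network" that only remaps colors:
fake_model = lambda img: 255 - img
tile = np.random.randint(0, 255, (TILE, TILE, 3), dtype=np.uint8)
print(stylize_tile(tile, {}, zoom=12, models={12: fake_model}).shape)  # (256, 256, 3)
```

Stylizing the expanded tile before cropping gives the model context across tile borders, which is presumably what avoids visible seams between adjacent stylized tiles.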
Systems herein describe a stylization system that accesses an input image, generates a paired image dataset using a first neural network, generates a stylized target image based on the input image by applying the stylization effect on an entire portion of the input image using a second neural network trained on the paired image dataset, and causes display of the stylized target image on a graphical user interface of a computing device.
G06T 11/60 - Editing figures and text; Combining figures or text
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
G06T 3/60 - Rotation of whole images or parts thereof
G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
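The stylization abstract above describes a first network that produces a paired image dataset and a second network, trained on those pairs, that applies the effect to the entire input image. A PyTorch-style sketch of that pattern follows; the model architecture, layer sizes, and training loop are invented for illustration, not taken from the source.

```python
import torch
from torch import nn

class TinyStylizer(nn.Module):
    """Stand-in convolutional stylizer; a real architecture would differ."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

teacher = TinyStylizer().eval()   # first network: generates the paired targets
student = TinyStylizer()          # second network: trained on the paired dataset

# Generate a paired image dataset (input image, stylized image).
inputs = torch.rand(32, 3, 64, 64)
with torch.no_grad():
    targets = teacher(inputs)

# Train the second network on the pairs.
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for _ in range(5):
    loss = nn.functional.mse_loss(student(inputs), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Apply the stylization effect to an entire input image and hand the result to the UI.
with torch.no_grad():
    stylized = student(torch.rand(1, 3, 64, 64))
print(stylized.shape)  # torch.Size([1, 3, 64, 64])
```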
A system to navigate a browser based on image data may perform operations that include: receiving a scan request from a client device, the scan request including an image that comprises image data; identifying an object depicted within the image based on the image data; determining a classification of the object; and navigating a browser associated with the client device to a resource based on the classification.
Methods and devices for wired charging and communication with a wearable device are described. In one embodiment, a symmetrical contact interface comprises a first contact pad and a second contact pad, and particular wired circuitry is coupled to the first and second contact pads to enable charging as well as receive and transmit communications via the contact pads as part of various device states.
H01L 27/02 - Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including integrated passive circuit elements with at least one potential-jump barrier or surface barrier
H01R 13/62 - Means for facilitating engagement or disengagement of coupling parts or for holding them in engagement
H02J 7/04 - Regulation of the charging current or voltage
H02J 7/34 - Parallel operation in networks using both storage and other DC sources, e.g. providing buffering
H03K 19/0185 - Coupling arrangements; Interface arrangements using field-effect transistors only
H04B 3/54 - Systems for transmission via power distribution lines
H04B 3/56 - Circuits for coupling, blocking, or by-passing of signals
An artificial intelligence (AI) network or neural network is trained to generate three-dimensional (3D) models or shapes with color from two-dimensional (2D) input images and input text describing the 3D model with color. Example methods include converting a first three-dimensional (3D) model from a first representation to a second representation, the second representation including color information for the 3D model and inputting the second representation into an encoder to generate a third representation having a lower dimension than the second representation. The method further includes inputting the third representation into a decoder to generate a fourth representation having a same dimension as the second representation and generating a second 3D model from the fourth representation. The method further includes determining losses between the first 3D model and the second 3D model and updating weights of the encoder and the decoder based on the losses.
Example embodiments described herein therefore relate to an AR guidance system to perform operations that include: detecting a client device at a location within a geo-fenced area, wherein the geo-fenced area may include within it, a destination of interest; determining a route to the destination of interest from the location of the client device within the geo-fenced area; causing display of a presentation of an environment within an AR interface at the client device; detecting a display of real-world signage within the presentation of the environment; generating a media item in response to the detecting the display of the signage within the presentation of the environment, wherein the media item is based on the route to the destination of interest; and causing display of the media item within the AR interface based on the position of the signage within the presentation of the environment.
Systems herein describe a stylization system that accesses an input image, generates a paired image dataset using a first neural network, generates a stylized target image based on the input image by applying the stylization effect on an entire portion of the input image using a second neural network trained on the paired image dataset, and causes display of the stylized target image on a graphical user interface of a computing device.
A method for recalibrating an augmented reality (AR) device includes generating and storing a ground truth map of a real-world environment when the AR device is operating with a high likelihood of having an accurate factory calibration. During operation of the AR device, new map data is generated for the real-world environment. The new map data is compared to the ground truth map to detect potential calibration errors. If calibration errors are detected, a recalibration procedure is executed by determining an optimal path through the real-world environment that allows for observing parameters requiring recalibration. Visual cues are generated to guide a user of the AR device through the optimal path. As the user follows the visual cues, calibration parameters are iteratively adjusted to eliminate detected calibration errors. The recalibration procedure may be presented as an interactive game to improve user engagement, with rewards provided for accurately following guidance.
Bending data is used to facilitate tracking operations of an extended reality (XR) device, such as hand tracking or other object tracking operations. The XR device obtains bending data indicative of bending of the XR device to accommodate a body part of a user wearing the XR device. The XR device determines, based on the bending data, whether to use previously identified biometric data in a tracking operation. A mode of the XR device is selected responsive to determining whether to use the previously identified biometric data. The selected mode is used to initialize the tracking operation. The selected mode may be a first mode in which the previously identified biometric data is used in the tracking operation or a second mode in which the previously identified biometric data is not used in the tracking operation.
A method for recalibrating an augmented reality (AR) device includes generating and storing a ground truth map of a real-world environment when the AR device is operating with a high likelihood of having an accurate factory calibration. During operation of the AR device, new map data is generated for the real-world environment. The new map data is compared to the ground truth map to detect potential calibration errors. If calibration errors are detected, a recalibration procedure is executed by determining an optimal path through the real-world environment that allows for observing parameters requiring recalibration. Visual cues are generated to guide a user of the AR device through the optimal path. As the user follows the visual cues, calibration parameters are iteratively adjusted to eliminate detected calibration errors. The recalibration procedure may be presented as an interactive game to improve user engagement, with rewards provided for accurately following guidance.
G01C 25/00 - Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
G01C 21/00 - Navigation; Navigational instruments not provided for in groups
G01C 21/16 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
G06T 19/00 - Manipulating 3D models or images for computer graphics
11.
LOW-POWER HAND-TRACKING SYSTEM FOR WEARABLE DEVICE
A method for a low-power hand-tracking system is described. In one aspect, a method includes polling a proximity sensor of a wearable device to detect a proximity event, the wearable device including a low-power processor and a high-power processor; in response to detecting the proximity event, operating a low-power hand-tracking application on the low-power processor based on proximity data from the proximity sensor; and ending an operation of the low-power hand-tracking application in response to at least one of: detecting and recognizing a gesture based on the proximity data, detecting without recognizing the gesture based on the proximity data, or detecting a lack of activity from the proximity sensor within a timeout period based on the proximity data.
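A schematic Python state loop for the low-power hand-tracking flow in the abstract above. The polling API, the timeout value, and the gesture recognizer are placeholders, not details from the source.

```python
import time

TIMEOUT_S = 5.0  # assumed inactivity timeout

def run_low_power_hand_tracking(poll_proximity, recognize_gesture):
    """Poll the proximity sensor for a proximity event, then run the low-power
    hand-tracking application until one of the three end conditions occurs."""
    # Stage 1: wait for a proximity event (work suited to the low-power processor).
    while True:
        sample = poll_proximity()
        if sample and sample.get("proximity_event"):
            break
        time.sleep(0.05)

    # Stage 2: low-power hand tracking based on proximity data.
    last_activity = time.monotonic()
    while True:
        sample = poll_proximity()
        if sample is None:
            if time.monotonic() - last_activity > TIMEOUT_S:
                return "timeout"                  # lack of activity within the timeout period
            time.sleep(0.01)
            continue
        last_activity = time.monotonic()
        gesture, recognized = recognize_gesture(sample)
        if gesture and recognized:
            return f"recognized:{gesture}"        # gesture detected and recognized
        if gesture and not recognized:
            return "detected_unrecognized"        # gesture detected without being recognized

# Example with stub callables:
samples = iter([{"proximity_event": True}, {"motion": [0.1, 0.4]}])
print(run_low_power_hand_tracking(lambda: next(samples, None), lambda s: ("swipe", True)))
```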
Interactive augmented reality experiences with an eyewear device including a virtual eyewear beam. The user can direct the virtual beam by orienting the eyewear device, by the user's eye gaze, or by both. The eyewear device may detect the direction of an opponent's eyewear device or eye gaze, or both. The eyewear device may calculate a score based on hits of the virtual beam of the user and the opponent on respective target areas, such as the other player's head or face.
Examples described herein relate to automatic image generation. A plurality of inputs is accessed. The inputs include first input data and second input data. The first input data includes a text prompt describing a desired image and the second input data is indicative of one or more structural features of the desired image. One or more intermediate outputs are generated via a first generative machine learning model that uses the plurality of inputs as first control signals. An output image is generated via a second generative machine learning model that uses at least a subset of the plurality of inputs and at least a subset of the one or more intermediate outputs as second control signals. The output image is presented at a user device of a user.
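The two-stage flow in the automatic image generation abstract above (a text prompt plus structural input drive a first generative model, whose intermediate outputs then condition a second model) reduces to a small orchestration function. Both model callables below are stand-ins, not real APIs.

```python
from dataclasses import dataclass
from typing import Any, Callable, Sequence

@dataclass
class GenerationInputs:
    text_prompt: str   # first input data: description of the desired image
    structure: Any     # second input data: e.g. an edge map or pose skeleton

def generate_image(
    inputs: GenerationInputs,
    first_model: Callable[[GenerationInputs], Sequence[Any]],
    second_model: Callable[[GenerationInputs, Sequence[Any]], Any],
) -> Any:
    """Intermediate outputs from the first model join the original inputs as
    control signals for the second model, whose output is presented to the user."""
    intermediates = first_model(inputs)          # first control signals -> intermediate outputs
    return second_model(inputs, intermediates)   # second control signals -> output image

# Stand-in models that only echo their conditioning:
result = generate_image(
    GenerationInputs("a cabin by a lake at dusk", structure="edge_map"),
    first_model=lambda x: [f"layout({x.structure})"],
    second_model=lambda x, mids: f"image<{x.text_prompt} | {mids[0]}>",
)
print(result)
```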
An artificial intelligence (AI) network or neural network is trained to generate three-dimensional (3D) models or shapes with color from two-dimensional (2D) input images and input text describing the 3D model with color. Example methods include converting a first three-dimensional (3D) model from a first representation to a second representation, the second representation including color information for the 3D model and inputting the second representation into an encoder to generate a third representation having a lower dimension than the second representation. The method further includes inputting the third representation into a decoder to generate a fourth representation having a same dimension as the second representation and generating a second 3D model from the fourth representation. The method further includes determining losses between the first 3D model and the second 3D model and updating weights of the encoder and the decoder based on the losses.
A content collection is shared between a first user and a second user. A content collection interface is presented on a second user device of the second user. The content collection interface enables the second user to navigate the shared content collection. The shared content collection includes a first content item. Responsive to receiving, from the second user device, an indication of a first combination selection, a second content item is accessed and the second user is enabled to combine the first content item with the second content item to create a first combined content item. Responsive to receiving, from the second user device, an indication of a first content addition selection, the first combined content item is stored in association with the shared content collection. The first combined content item is presented within the content collection interface on a first user device of the first user.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
16.
WAVEGUIDE AND DIFFRACTION GRATING FOR AUGMENTED REALITY OR VIRTUAL REALITY DISPLAY
A waveguide for use in a virtual reality (VR) or augmented reality (AR) device is disclosed. The waveguide comprises an input region configured to couple light into the waveguide so that it propagates under total internal reflection (TIR) within the waveguide, and an output region comprising optical structures configured to receive image-bearing light from the input region. The output region comprises a plurality of zones whose diffraction efficiencies differ from one another so as to reduce rainbow artefacts.
A system and method for enabling augmented reality effects in a web browser without requiring installation of additional software is disclosed. A web server provides a website with a gallery of selectable special effects. Upon selecting an effect, the website loads a page specific to that effect which includes a live preview showing the effect applied to a video feed from the user's webcam. This allows the user to view themselves with the effect applied in real-time. The website requests access to the webcam and microphone through the browser's built-in permission system. Captured photos and videos with the effect applied can be saved locally or shared through native operating system tools. The system provides an engaging augmented reality experience accessible directly via a standard web browser, without needing to install a dedicated app.
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies or resolving scheduling conflicts
H04N 21/4788 - Supplemental services, e.g. displaying phone caller identification or shopping application; communicating with other users, e.g. chatting
A system and method for enabling augmented reality effects in a web browser without requiring installation of additional software is disclosed. A web server provides a website with a gallery of selectable special effects. Upon selecting an effect, the website loads a page specific to that effect which includes a live preview showing the effect applied to a video feed from the user's webcam. This allows the user to view themselves with the effect applied in real-time. The website requests access to the webcam and microphone through the browser's built-in permission system. Captured photos and videos with the effect applied can be saved locally or shared through native operating system tools. The system provides an engaging augmented reality experience accessible directly via a standard web browser, without needing to install a dedicated app.
A method of adjusting visual content. The method comprises selecting, on a client terminal, visual content, extracting visual content data pertaining to the visual content, forwarding a request which includes the visual content data to a network node via a network, receiving, in response to the request, a list of a plurality of visual content editing functions from the network node, presenting, on the client terminal, the plurality of visual content editing functions to a user, receiving a selection of at least one member of the list from the user, adjusting the visual content using the at least one member, and outputting the adjusted visual content.
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
G06V 20/40 - Scenes; Scene-specific elements in video content
G11B 27/034 - Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
H04M 1/72445 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting Internet browser applications
H04M 1/72457 - User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to geographic location
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies or resolving scheduling conflicts
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
H04N 21/4788 - Supplemental services, e.g. displaying phone caller identification or shopping application; communicating with other users, e.g. chatting
H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders
A survey distribution system receives a selection of a first subset of a user population. For example, an administrator of the system may select one or more user attributes of the users among the user population. In response, the survey distribution system identifies the first subset of users based on the selected attributes. In some example embodiments, the administrator of the system may additionally define a maximum or minimum number of users to be exposed to the content, as well as targeting parameters for the content, such as a period of time in which to distribute the content to the first subset of users and location criteria, such that the content may only be distributed to users located in specific areas.
Methods and systems are disclosed for performing operations comprising: receiving a monocular image that includes a depiction of a whole body of a user; generating a segmentation of the whole body of the user based on the monocular image; accessing a video feed comprising a plurality of monocular images received prior to the monocular image; smoothing, using the video feed, the segmentation of the whole body generated based on the monocular image to provide a smoothed segmentation; and applying one or more visual effects to the monocular image based on the smoothed segmentation.
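One plausible reading of the "smoothing using the video feed" step above is a temporal blend of the current whole-body mask with masks from earlier frames; the abstract does not name a specific filter, so the blend weight and window below are assumptions.

```python
import numpy as np

def smooth_segmentation(current_mask: np.ndarray,
                        previous_masks: list,
                        alpha: float = 0.6) -> np.ndarray:
    """Blend the current whole-body mask with the average of earlier-frame masks.

    `alpha` weights the newest mask: higher values react faster to motion but
    retain more per-frame flicker. Returns a soft mask in [0, 1].
    """
    if not previous_masks:
        return current_mask.astype(np.float32)
    history = np.mean(np.stack(previous_masks), axis=0)
    return alpha * current_mask.astype(np.float32) + (1.0 - alpha) * history

# Usage: the smoothed mask gates where the visual effects are applied.
prev = [(np.random.rand(4, 4) > 0.5).astype(np.float32) for _ in range(3)]
current = (np.random.rand(4, 4) > 0.5).astype(np.float32)
print(smooth_segmentation(current, prev).round(2))
```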
Examples described herein relate to automatic image generation. A plurality of inputs is accessed. The inputs include first input data and second input data. The first input data includes a text prompt describing a desired image and the second input data is indicative of one or more structural features of the desired image. One or more intermediate outputs are generated via a first generative machine learning model that uses the plurality of inputs as first control signals. An output image is generated via a second generative machine learning model that uses at least a subset of the plurality of inputs and at least a subset of the one or more intermediate outputs as second control signals. The output image is presented at a user device of a user.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
23.
IMPLEMENTING USER INTERFACES OF OTHER APPLICATIONS
A first application uses a user interface (UI) component of a second application to determine a user intent based on user input and then determines an action to perform based on the determined user intent. The first application makes it easier for the user to learn the UI of the second application. Example methods include a first application displaying a first content item, the first content item being content of the first application, and the first application displaying a second content item, the second content item being content of a second application. The method may further include, in response to a second selection of a second user interface item associated with the second content item, the first application determining a user intent and an action associated with the user intent based on a second user interface, the second user interface associated with the second application.
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
24.
CHUNKED TRANSCODING AND UPLOADING FOR VIDEO TRANSMISSION
Uploading of a video file is performed by transcoding, processing and uploading portions of the video file in parallel, to reduce total processing and upload time. The processing of the video file may include applying associated augmented reality effects to a raw video recording, to generate an enhanced video recording for transmission and viewing at a recipient device. The uploaded portions of the video file may be assembled into a fragmented file format such as fMP4, in which portions of the video file are stored as fragments.
H04N 19/436 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
H04N 19/40 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
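A rough standard-library sketch of the parallel chunked pipeline in the abstract above: the recording is split into portions, each portion is transcoded (with its AR effects applied) and uploaded concurrently, and the ordered fragments are then ready for fMP4-style assembly. The chunk duration, worker count, and both placeholder steps are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK_SECONDS = 5  # assumed portion duration

def split_into_chunks(frames, fps=30):
    """Split a raw recording into fixed-duration portions."""
    size = CHUNK_SECONDS * fps
    return [frames[i:i + size] for i in range(0, len(frames), size)]

def transcode_with_effects(chunk):
    """Placeholder: re-encode the portion and bake in its augmented reality effects."""
    return bytes(len(chunk))

def upload_fragment(indexed_fragment):
    """Placeholder upload; keeps the fragment's position in the fragmented file."""
    index, fragment = indexed_fragment
    return index, fragment

def process_and_upload(frames):
    chunks = split_into_chunks(frames)
    with ThreadPoolExecutor(max_workers=4) as pool:
        # Transcode/process and upload portions in parallel rather than serially.
        fragments = list(pool.map(transcode_with_effects, chunks))
        uploaded = list(pool.map(upload_fragment, enumerate(fragments)))
    # The ordered fragments can then be assembled into a fragmented format such as fMP4.
    return [frag for _, frag in sorted(uploaded)]

print(len(process_and_upload(list(range(900)))))  # 900 frames at 30 fps -> 6 fragments
```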
A user with geographically nearby users is offered the opportunity to send the nearby users a friend request. Example methods include accessing a location of a user system of a user, where the user is a member of an interaction platform, determining a list of other users, where the list of other users includes other users associated with other user systems that are within a threshold distance of the location of the user system, where the other users have a threshold number of connections with the user, and where the other users are members of the interaction platform. The method may further include causing to be displayed on a screen of the user system indications of the other users of the list of other users and user interface items for the user to send a friend request to a corresponding other user of the list of other users.
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
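A small sketch of the nearby-user filter from the abstract above: a great-circle distance check against a threshold plus a minimum number of shared connections. The specific thresholds and data shapes are assumptions.

```python
from math import radians, sin, cos, asin, sqrt

MAX_DISTANCE_KM = 1.0        # assumed proximity threshold
MIN_MUTUAL_CONNECTIONS = 2   # assumed connection threshold

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def nearby_friend_candidates(me, others):
    """Return platform members near `me` who share enough connections with `me`;
    each candidate would get a friend-request user interface item on screen."""
    return [
        o["id"]
        for o in others
        if haversine_km(me["location"], o["location"]) <= MAX_DISTANCE_KM
        and len(me["connections"] & o["connections"]) >= MIN_MUTUAL_CONNECTIONS
    ]

me = {"id": "u1", "location": (40.7128, -74.0060), "connections": {"a", "b", "c"}}
others = [
    {"id": "u2", "location": (40.7130, -74.0055), "connections": {"a", "b"}},
    {"id": "u3", "location": (41.0000, -74.0000), "connections": {"a", "b", "c"}},
]
print(nearby_friend_candidates(me, others))  # ['u2'] -- u3 is outside the threshold distance
```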
Augmented reality eyewear devices allow users to experience a version of our “real” physical world augmented with virtual objects. Augmented reality eyewear may present a user with a graphical user interface that appears to be in the airspace directly in front of the user thereby encouraging the user to interact with virtual objects in socially undesirable ways, such as by making sweeping hand gestures in the airspace in front of the user. Anchoring various input mechanisms or the graphical user interface of an augmented reality eyewear application to a wristwatch may allow a user to interact with an augmented reality eyewear device in a more socially acceptable manner. Combining the displays of a smartwatch and an augmented reality eyewear device into a single graphical user interface may provide enhanced display function and more responsive gestural input.
Eyewear including an optical element, a controller, a support structure configured to support the optical element and the controller, light sources coupled to the controller and supported by the support structure, and a diffuser positioned adjacent to the light sources and supported by the support structure, the diffuser including microstructures that diffuse light emitted by the light sources in a radial anisotropic diffusion pattern or a prism-like diffusion pattern.
A text string provided by a second client device of a second user is received by a first client device of a first user. The text string is parsed into one or more text portions. A score is assigned to each of the one or more text portions based on a specified criterion. One or more relevant tags of a plurality of tags are determined based on the one or more text portions. One or more media overlays are selected based on the one or more relevant tags and the assigned score for each of the one or more text portions. The text string with a reply interface for sending a reply message to the second client device is displayed.
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
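The parse-score-match flow in the reply-overlay abstract above might look like the sketch below; the tokenization, the scoring criterion (token length here), and the tag table are illustrative only.

```python
def select_media_overlays(text, tag_table, top_n=2):
    """Parse a received text string into portions, score each portion, map portions
    to relevant tags, and pick overlays for the highest-scoring matches."""
    portions = text.lower().split()                          # parse into text portions
    scores = {p: len(p) for p in portions}                   # score on a specified criterion
    relevant = [(tag_table[p], scores[p]) for p in portions if p in tag_table]
    relevant.sort(key=lambda pair: pair[1], reverse=True)    # prefer higher-scoring portions
    return [overlay for overlay, _ in relevant[:top_n]]

tag_table = {"congrats": "confetti_overlay", "birthday": "balloons_overlay"}
print(select_media_overlays("Congrats on the birthday win", tag_table))
# ['confetti_overlay', 'balloons_overlay']
```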
29.
GENERATING GROUND TRUTH DATASETS FOR VIRTUAL REALITY EXPERIENCES
Systems and methods of generating ground truth datasets for producing virtual reality (VR) experiences, for testing simulated sensor configurations, and for training machine-learning algorithms. In one example, a recording device with one or more cameras and one or more inertial measurement units captures images and motion data along a real path through a physical environment. A SLAM application uses the captured data to calculate the trajectory of the recording device. A polynomial interpolation module uses Chebyshev polynomials to generate a continuous time trajectory (CTT) function. The method includes identifying a virtual environment and assembling a simulated sensor configuration, such as a VR headset. Using the CTT function, the method includes generating a ground truth output dataset that represents the simulated sensor configuration in motion along a virtual path through the virtual environment. The virtual path is closely correlated with the motion along the real path as captured by the recording device. Accordingly, the output dataset produces a realistic and life-like VR experience. In addition, the methods described can be used to generate multiple output datasets, at various sample rates, which are useful for training the machine-learning algorithms which are part of many VR systems.
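The abstract above fits a continuous-time trajectory (CTT) with Chebyshev polynomials; NumPy's `numpy.polynomial.chebyshev` module can illustrate the idea on a single trajectory component. The polynomial degree and the synthetic data are arbitrary, and a real pipeline would fit all pose components from the SLAM output.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Discrete poses from the SLAM trajectory: timestamps and one component (e.g. x position).
t = np.linspace(0.0, 10.0, 50)
x = np.sin(0.8 * t) + 0.02 * np.random.randn(t.size)

# Fit a Chebyshev series to obtain a continuous-time trajectory function.
ctt_x = C.Chebyshev.fit(t, x, deg=12)

# The CTT can be sampled at whatever rates the simulated sensor configuration needs,
# e.g. a high-rate IMU alongside a lower-rate camera, all consistent with the real path.
imu_times = np.linspace(0.0, 10.0, 10_000)
camera_times = np.linspace(0.0, 10.0, 300)
print(ctt_x(imu_times).shape, ctt_x(camera_times).shape)  # (10000,) (300,)
```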
A machine includes a processor and a memory connected to the processor. The memory stores instructions executed by the processor to receive a message and a message parameter indicative of a characteristic of the message, where the message includes a photograph or a video. A determination is made that the message parameter corresponds to a selected gallery, where the selected gallery includes a sequence of photographs or videos. The message is posted to the selected gallery in response to the determination. The selected gallery is supplied in response to a request.
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
G06F 3/0489 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using dedicated keyboard keys or combinations thereof
G06F 40/169 - Annotation, e.g. comment data or footnotes
G06T 11/60 - Editing figures and text; Combining figures or text
G11B 27/031 - Electronic editing of digitised analogue information signals, e.g. audio or video signals
G11B 27/32 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on separate auxiliary tracks of the same or an auxiliary record carrier
H04L 51/214 - Monitoring or handling of messages using selective forwarding
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
H04L 69/329 - Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the application layer [OSI layer 7]
A method for creating a marker-based shared augmented reality (AR) session starts with initializing a shared AR session by a first device and by a second device. The first device displays on a display a marker. The second device detects the marker using a camera included in the second device and captures an image of the marker using the camera. The second device determines a transformation between the first device and the second device using the image of the marker. A common coordinate frame is then determined using the transformation, the shared AR session is generated using the common coordinate frame, and the shared AR session is caused to be displayed by the first device and by the second device. Other embodiments are described herein.
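A sketch of the coordinate-frame arithmetic implied by the abstract above: the second device estimates its pose relative to the marker shown on the first device's display, and composing that with the marker's known pose in the first device's frame yields the transformation between devices and hence a common frame. The 4x4 homogeneous-transform helper is generic, not taken from the source.

```python
import numpy as np

def make_pose(rotation_deg, translation):
    """4x4 homogeneous transform: rotation about the Z axis plus a translation."""
    th = np.radians(rotation_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]]
    T[:3, 3] = translation
    return T

# Marker pose in the first device's frame (known, since it draws the marker on screen).
T_a_marker = make_pose(0, [0.0, 0.1, 0.0])
# Marker pose in the second device's frame (estimated from the captured marker image).
T_b_marker = make_pose(30, [0.5, 0.0, 1.2])

# Transformation from the second device's frame into the first device's frame.
T_a_b = T_a_marker @ np.linalg.inv(T_b_marker)

# Any point the second device sees can now be expressed in the common coordinate frame.
point_in_b = np.array([0.2, 0.0, 1.0, 1.0])
print(np.round(T_a_b @ point_in_b, 3))
```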
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for displaying object names in association with augmented reality content. The program and method provide for receiving, by a messaging application running on a device, a first request to identify plural objects based on an image captured by a camera of the device; identifying, in response to receiving the first request, the plural objects based on the image; for each of the plural objects, determining at least one attribute of the object, and calculating a number of augmented reality content items, from plural augmented reality content items, corresponding to the at least one attribute of the object; selecting, from the plural objects, an object with a largest calculated number of corresponding augmented reality content items; and displaying a name for each of the plural objects based on the selecting.
The present invention relates to a method for generating and causing display of a communication interface that facilitates the sharing of emotions through the creation of 3D avatars, and more particularly to the creation of such interfaces for displaying 3D avatars for use with mobile devices, cloud-based systems and the like.
A system and method for suggesting relevant groups and recipients when replying to messages in a messaging application. In response to a first received message, the system identifies groups with membership comprising the sender and receiver. Interface elements representing these mutual groups are displayed as selectable suggestions. The receiving user can choose groups to include in the reply, along with other users. Suggested groups are determined based on recent interactions, mutual connections, and message content. Users can also create new groups from suggestions for ongoing messaging. By recommending shared groups and relevant recipients, the system enables efficient context-based selection when replying. The suggestions aim to streamline recipient picking through intuitive interfaces and machine learning algorithms. This improves the user experience for seamless messaging discussions with appropriate recipients.
G06Q 10/107 - Computer-aided management of electronic mailing [e-mailing]
H04L 51/216 - Handling conversation history, e.g. grouping of messages in sessions or threads
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
35.
SUGGESTING RELEVANT GROUPS AND INDIVIDUALS IN MESSAGE REPLIES
A system and method for suggesting relevant groups and recipients when replying to messages in a messaging application. In response to a first received message, the system identifies groups with membership comprising the sender and receiver. Interface elements representing these mutual groups are displayed as selectable suggestions. The receiving user can choose groups to include in the reply, along with other users. Suggested groups are determined based on recent interactions, mutual connections, and message content. Users can also create new groups from suggestions for ongoing messaging. By recommending shared groups and relevant recipients, the system enables efficient context-based selection when replying. The suggestions aim to streamline recipient picking through intuitive interfaces and machine learning algorithms. This improves the user experience for seamless messaging discussions with appropriate recipients.
H04L 51/216 - Handling conversation history, e.g. grouping of messages in sessions or threads
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
H04L 51/043 - Real-time or near real-time messaging, e.g. instant messaging [IM] using or handling presence information
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
36.
IMPLEMENTING USER INTERFACES OF OTHER APPLICATIONS
A first application uses a user interface (UI) component of a second application to determine a user intent based on user input and then determines an action to perform based on the determined user intent. The first application makes it easier for the user to learn the UI of the second application. Example methods include a first application displaying a first content item, the first content item being content of the first application, and the first application displaying a second content item, the second content item being content of a second application. The method may further include, in response to a second selection of a second user interface item associated with the second content item, the first application determining a user intent and an action associated with the user intent based on a second user interface, the second user interface associated with the second application.
G06F 3/04886 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
Systems, methods, and computer readable media for object counting on augmented reality (AR) wearable devices are disclosed. Embodiments are disclosed that enable display of a count of objects as part of a user view. Upon receipt of a request to count objects, the AR wearable device captures an image of the user view. The AR wearable device transmits the image to a backend for processing to determine the objects in the image. The AR wearable device selects a group of objects of the determined objects to count and overlays boundary boxes over counted objects within the user view. The position of the boundary boxes is adjusted to account for movement of the AR wearable device. A hierarchy of objects is used to group together objects that are related but have different labels or names.
Methods and systems are disclosed for performing operations comprising: receiving a monocular image that includes a depiction of a person wearing an article of clothing; generating a segmentation of the article of clothing worn by the person in the monocular image; obtaining one or more audio-track related augmented reality elements; and applying the one or more audio-track related augmented reality elements to the article of clothing worn by the person based on the segmentation of the article of clothing worn by the person.
G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
An electronics-enabled eyewear device provides a primary command channel and a secondary command channel for receiving user input during untethered wear, one of the command channels providing for tap input detected by one or more motion sensors incorporated in a body of the eyewear device. A predefined tap sequence or pattern can be applied to the frame of the device to trigger a device function. In one example, a double tap of the device's frame causes a charge-level display indicating a battery charge level.
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
H04N 23/54 - Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
H04N 23/56 - Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
H04N 23/57 - Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
H04N 23/667 - Camera operation mode switching, e.g. between still and video, sport and normal or high and low resolution modes
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for detecting a pose of a user. The program and method include operations comprising receiving a monocular image that includes a depiction of a body of a user; detecting a plurality of skeletal joints of the body based on the monocular image; accessing a video feed comprising a plurality of monocular images received prior to the monocular image; filtering, using the video feed, the plurality of skeletal joints of the body detected based on the monocular image; and determining a pose represented by the body depicted in the monocular image based on the filtered plurality of skeletal joints of the body.
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestriansBody parts, e.g. hands
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
H04L 51/04 - Real-time or near real-time messaging, e.g. instant messaging [IM]
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
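The filtering step in the pose-detection abstract above (per-frame skeletal joints smoothed against the preceding video feed) can be approximated by a per-joint moving average; the window size and joint count are assumptions.

```python
import numpy as np

def filter_joints(current, history, window=5):
    """Average each detected 2D joint over the last `window` frames to reduce jitter.

    `current` has shape (num_joints, 2); `history` holds detections from the
    monocular images received prior to the current one.
    """
    recent = history[-(window - 1):] + [current]
    return np.mean(np.stack(recent), axis=0)

# Usage: the filtered joints feed the pose determination for the current frame.
history = [np.random.rand(17, 2) for _ in range(10)]   # e.g. 17 COCO-style joints
current = np.random.rand(17, 2)
print(filter_joints(current, history).shape)  # (17, 2)
```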
41.
SELECTIVE IDENTIFICATION AND ORDER OF IMAGE MODIFIERS
Systems, devices, media and methods are presented for presentation of modified objects within a video stream. The systems and methods select an object of interest depicted within a user interface based on an associated image modifier, determine a modifier context based at least in part on one or more characteristics of the selected object, identify a set of image modifiers based on the modifier context, rank a first portion of the identified set of image modifiers based on a primary ordering characteristic, rank a second portion of the identified set of image modifiers based on a secondary ordering characteristic and cause presentation of the modifier icons for the ranked set of image modifiers.
Devices, media, and methods are presented for an immersive augmented reality (AR) experience using an eyewear device with spatial audio. The eyewear device has a processor, a memory, an image sensor, and a speaker system. The eyewear device captures image information for an environment surrounding the device and identifies an object location within the same environment. The eyewear device then associates a virtual object with the identified object location. The eyewear device monitors the position of the device with respect to the virtual object and presents audio signals to alert the user that the identified object is in the environment.
An optical arrangement to transmit an image from an image plane to a user's eye. The arrangement provides a folded optical transmission path comprising a collimating element having a first optical element with a first plurality of optically powered surfaces, and a second optical element comprising at least one optically powered surface. The collimating element receives light forming the image from an image source and collimates and outputs the light. The optically powered surfaces form a plurality of interfaces along the folded optical path. A refractive index change at each interface is predetermined to control the direction of light passing through each interface. One surface of each of the first and second optical elements is adjacent to the other. The adjacent surfaces have dissimilar shapes, and each defines an angle with a respective other surface of the relevant optical element at opposing ends of the adjacent surfaces.
Systems, methods, devices, computer readable media, and other various embodiments are described for location management processes in wearable electronic devices. Performance of such devices is improved with reduced time to first fix of location operations in conjunction with low-power operations. In one embodiment, low-power circuitry manages high-speed circuitry and location circuitry to provide location assistance data from the high-speed circuitry to the low-power circuitry automatically on initiation of location fix operations as the high-speed circuitry and location circuitry are booted from low-power states. In some embodiments, the high-speed circuitry is returned to a low-power state prior to completion of a location fix and after capture of content associated with initiation of the location fix. In some embodiments, high-speed circuitry is booted after completion of a location fix to update location data associated with content.
G01S 5/00 - Position-fixing by co-ordinating two or more direction or position-line determinations; Position-fixing by co-ordinating two or more distance determinations
Methods and systems are disclosed for performing operations comprising: receiving a video that includes a depiction of a real-world object; generating a three-dimensional (3D) body mesh associated with the real-world object that tracks movement of the real-world object across frames of the video; determining UV positions of the real-world object depicted in the video to obtain pixel values associated with the UV positions; generating an external mesh and associated augmented reality (AR) element representing the real-world object based on the pixel values associated with the UV positions; deforming the external mesh based on changes to the 3D body mesh and a deformation parameter; and modifying the video to replace the real-world object with the AR element based on the deformed external mesh.
Among other things, embodiments of the present disclosure improve the functionality of electronic messaging software and systems by generating customized images with avatars of different users within electronic messages. For example, users of different mobile computing devices can exchange electronic communications with images generated to include avatars representing themselves as well as their friends, colleagues, and other acquaintances.
H04L 51/08 - Annexed information, e.g. attachments
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
H04L 67/52 - Network services specially adapted for the location of the user terminal
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
Disclosed is an augmented reality system to generate and cause display of an augmented reality interface at a client device. Various embodiments may detect speech, identify a source of the speech, transcribe the speech to a text string, generate a speech bubble based on properties of the speech and that includes a presentation of the text string, and cause display of the speech bubble at a location in the augmented reality interface based on the source of the speech.
G10L 21/10 - Transforming into visible information
G10L 25/63 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for estimating an emotional state
A crowd-sourced modeling system to perform operations that include: receiving image data that comprises image attributes; accessing a 3D model based on at least the image attributes of the image data, wherein the 3D model comprises a plurality of parts that collectively depict an object or environment; identifying a change in the object or environment based on a comparison of the image data with the plurality of parts of the 3D model, the change corresponding to a part of the 3D model from among the plurality of parts; and generating an update to the part of the 3D model based on the image attributes of the image data.
A method for calibrating a visual-inertial tracking system is described. A device operates the visual-inertial tracking system without receiving a tracking request from a virtual object display application. In response to operating the visual-inertial tracking system, the device accesses sensor data from sensors at the device. The device identifies, based on the sensor data, a first calibration parameter value of the visual-inertial tracking system and stores the first calibration parameter value. The system detects a tracking request from the virtual object display application. In response to the tracking request, the system accesses the first calibration parameter value and determines a second calibration parameter value from the first calibration parameter value.
A mixed-reality media content system may be configured to perform operations that include: causing display of image data at a client device, the image data comprising a depiction of an object that includes a graphical code at a position upon the object; detecting the graphical code at the position upon the depiction of the object based on the image data; accessing media content within a media repository based on the graphical code scanned by the client device; and causing display of a presentation of the media content at the position of the graphical code upon the depiction of the object at the client device.
H04N 21/8545 - Content authoring for generating interactive applications
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
Systems and methods are provided for retrieving first query result data associated with a first user account and rendering the first query result data into a first result item, generating a shareable search result stream comprising the first result item associated with the first user account, retrieving second query result data associated with a second user account and rendering the second query result data into a second result item, adding the second result item to the shareable search result stream associated with the first user account, and providing the sharable search result stream comprising the first result item and the second result item to a first computing device associated with the first user account and a second computing device associated with the second user account.
A user with geographically nearby users is offered the opportunity to send the nearby users a friend request. Example methods include accessing a location of a user system of a user, where the user is a member of an interaction platform, determining a list of other users, where the list of other users includes other users associated with other user systems that are within a threshold distance of the location of the user system, where the other users have a threshold number of connections with the user, and where the other users are members of the interaction platform. The method may further include causing to be displayed on a screen of the user system indications of the other users of the list of other users and user interface items for the user to send a friend request to a corresponding other user of the list of other users.
G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
H04L 51/222 - Monitoring or handling of messages using geographical location information, e.g. messages transmitted or received in proximity of a certain spot or area
H04W 4/02 - Services making use of location information
53.
VIRTUAL MANIPULATION OF AUGMENTED AND VIRTUAL REALITY OBJECTS
Systems and methods are provided. For example, a method includes determining a position of a user's hand and identifying a manipulation gesture performed by the user targeting a virtual object. The method also includes determining a three-dimensional (3D) origin point based on the position of the user's hand when the manipulation gesture is performed, and determining a 3D end point based on a movement of the user's hand from the origin point. The method additionally includes deriving a 3D vector based on the 3D origin point and the 3D end point, and applying an action to the targeted virtual object based on the 3D vector, wherein the targeted virtual object is at a distance greater than the user's arm reach.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
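As a minimal sketch of the gesture handling described in the virtual object manipulation abstract above, the following Python fragment derives a 3D vector from a hand-motion origin and end point and applies it as a scaled translation to a distant virtual object; the gain constant and all function names are illustrative assumptions, not taken from the disclosure.

    # Illustrative sketch only; GAIN and the function names are hypothetical.
    import numpy as np

    GAIN = 5.0  # assumed amplification so small hand motions can move far-away objects

    def manipulate(object_position, hand_positions):
        """Apply a translation to a distant virtual object from a hand-motion gesture.

        hand_positions: sequence of 3D hand positions sampled while the
        manipulation gesture is held; the first sample marks the gesture start.
        """
        origin = np.asarray(hand_positions[0], dtype=float)   # 3D origin point
        end = np.asarray(hand_positions[-1], dtype=float)     # 3D end point
        vector = end - origin                                   # derived 3D vector
        # Action applied to the targeted object: here, a scaled translation,
        # letting the user act on objects beyond arm's reach.
        return np.asarray(object_position, dtype=float) + GAIN * vector

    if __name__ == "__main__":
        print(manipulate([0.0, 0.0, 10.0], [[0.1, 1.0, 0.5], [0.3, 1.2, 0.5]]))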
An optical waveguide manufacturing method includes: receiving a master template having a plurality of individual waveguide structures imprinted thereon; coating the master template with a curable master template stamp material; curing the master template stamp material to form a master template stamp; separating the master template stamp from the master template; imprinting the master template stamp onto one or more first substrates having an imprintable coating to form one or more master template copies, each master template copy having a plurality of individual waveguide structure copies imprinted thereon; curing the one or more master template copies; separating the master template stamp from the one or more master template copies; coating one of the one or more master template copies with a curable working stamp material; curing the working stamp material to form a working stamp for manufacturing optical waveguides; and separating the master template copy from the working stamp.
G02B 6/132 - Integrated optical circuits characterised by the manufacturing method by deposition of thin films
G02B 6/10 - Light guidesStructural details of arrangements comprising light guides and other optical elements, e.g. couplings of the optical waveguide type
G02B 6/12 - Light guidesStructural details of arrangements comprising light guides and other optical elements, e.g. couplings of the optical waveguide type of the integrated circuit kind
G02B 6/136 - Integrated optical circuits characterised by the manufacturing method by etching
Systems and methods are provided for performing operations comprising: detecting, by one or more electromyograph (EMG) electrodes of an EMG communication device, subthreshold muscle activation signals of one or more muscles associated with speech production, the subthreshold muscle activation signals being generated in response to inner speech of a user; applying a machine learning technique to the subthreshold muscle activation signals to estimate one or more speech features corresponding to the subthreshold muscle activation signals, the machine learning technique being trained to establish a relationship between a plurality of training subthreshold muscle activation signals and ground truth speech features; generating visual or audible output based on the one or more speech features; and causing the visual or audible output to be processed by a messaging application to engage a feature of the messaging application.
H04M 1/7243 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
H04R 5/033 - Headphones for stereophonic communication
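The EMG-to-speech abstract above maps subthreshold muscle activation signals to speech features with a trained model. In the sketch below a plain least-squares regressor stands in for that machine learning technique; the array shapes and names are illustrative assumptions.

    # Minimal sketch of the signal-to-speech-feature mapping; not the disclosed model.
    import numpy as np

    def train_decoder(train_emg, train_features):
        """train_emg: (n_samples, n_channels) subthreshold EMG windows;
        train_features: (n_samples, n_features) ground-truth speech features."""
        w, *_ = np.linalg.lstsq(train_emg, train_features, rcond=None)
        return w

    def decode(emg_window, w):
        """Estimate speech features for one EMG window, e.g. to drive visual or
        audible output that a messaging application could then consume."""
        return emg_window @ w

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X, Y = rng.normal(size=(200, 8)), rng.normal(size=(200, 3))
        w = train_decoder(X, Y)
        print(decode(X[:1], w).shape)  # (1, 3) estimated speech features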
A gesture-based wake process for an AR system is described herein. The AR system places a hand-tracking input pipeline of the AR system in a suspended mode. A camera component of the hand-tracking input pipeline detects a possible visual wake command being made by a user of the AR system. On the basis of detecting the possible visual wake command, the AR system wakes the hand-tracking input pipeline and places the camera component in a fully operational mode. If the AR system, using the hand-tracking input pipeline, verifies the possible visual wake command as an actual wake command, the AR system initiates execution of an AR application.
Systems and methods are provided for receiving a selection to add an event invite media overlay to a media content item, receiving content to be added to the event invite media overlay, the content corresponding to an event, and adding to the event invite media overlay, the content corresponding to the event to generate a custom event invite media overlay. The systems and methods further comprise causing display of the custom event invite media overlay on the media content item, receiving at least one user to which to send an invite to the event, and sending, to a second computing device associated with the at least one user, an invite to the event, the invite comprising the custom event invite media overlay and the media content item.
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference
A method of producing a light emitting diode (LED) array comprises: forming a plurality of layers of semiconductor material; forming a dielectric mask layer over the plurality of layers, the dielectric mask layer having an array of holes through it each exposing an area of one of the layers of semiconductor material, and growing an LED structure in each of the holes arranged to emit light over a range of wavelengths. At least some of the plurality of layers form a distributed Bragg reflector (DBR) arranged to reflect light of at least some of said range of wavelengths.
H01L 27/15 - Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components with at least one potential-jump barrier or surface barrier, specially adapted for light emission
H01L 33/00 - SEMICONDUCTOR DEVICES NOT COVERED BY CLASS - Details thereof
H01L 33/08 - SEMICONDUCTOR DEVICES NOT COVERED BY CLASS - Details thereof characterised by the semiconductor bodies with a plurality of light emitting regions, e.g. laterally discontinuous light emitting layer or photoluminescent region integrated within the semiconductor body
H01L 33/32 - Materials of the light emitting region containing only elements of group III and group V of the periodic system containing nitrogen
Methods, systems, user interfaces, media, and devices are described for sharing the location of participants of a communication session established via a messaging system. Consistent with some embodiments, an electronic communication containing location information is received from a location sensor coupled to a first client device. A current location of the first user is determined based on the location information. The current location of the first user is displayed on a display screen of a second client device, within a messaging UI, during a communication session between the first client device and the second client device. The location information may be updated during the communication session as messages are exchanged and as a current location changes. Various embodiments may include additional information with the current location, such as a time period associated with the location, or other such information.
H04W 4/029 - Location-based management or tracking services
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
H04W 4/14 - Short messaging services, e.g. short message service [SMS] or unstructured supplementary service data [USSD]
A compact generative neural network can be distilled from a teacher generative neural network using a training network. The compact network can be trained on the input data and output data of the teacher network. The training network trains the student network using a discrimination layer and one or more types of losses, such as perception loss and adversarial loss.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/774 - Generating sets of training patternsBootstrap methods, e.g. bagging or boosting
G06V 10/778 - Active pattern-learning, e.g. online learning of image or video features
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
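The distillation abstract above trains a compact (student) network on the teacher's inputs and outputs using a discrimination layer with perception and adversarial losses. The PyTorch sketch below shows one plausible form of that objective; the toy networks, layer sizes, and loss weights are assumptions, not the reference implementation.

    # Rough sketch of a distillation step with perception + adversarial losses.
    import torch
    import torch.nn as nn

    teacher = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 3, 3, padding=1))
    student = nn.Sequential(nn.Conv2d(3, 4, 3, padding=1), nn.ReLU(), nn.Conv2d(4, 3, 3, padding=1))
    discriminator = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(), nn.Flatten(), nn.LazyLinear(1))

    opt_s = torch.optim.Adam(student.parameters(), lr=1e-4)
    bce = nn.BCEWithLogitsLoss()

    def student_step(x):
        """One distillation step: the student mimics the teacher's output on the
        same input, combining a perception loss with an adversarial loss from the
        discrimination layer (only the student is updated here)."""
        with torch.no_grad():
            target = teacher(x)                  # teacher output used as training data
        fake = student(x)
        perception_loss = nn.functional.l1_loss(fake, target)
        adv_loss = bce(discriminator(fake), torch.ones(x.size(0), 1))  # try to fool the discriminator
        loss = perception_loss + 0.01 * adv_loss
        opt_s.zero_grad()
        loss.backward()
        opt_s.step()
        return loss.item()

    if __name__ == "__main__":
        print(student_step(torch.randn(2, 3, 32, 32)))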
61.
INTEGRATED DISPLAY MODULE OR APPARATUS AND METHODS FOR OPERATING AND MANUFACTURING THE SAME
Systems, methods, apparatuses and devices provide an integrated display module or apparatus including a liquid crystal assembly with highly integrated components, including display driver circuitry and backplane circuitry. These approaches provide for packaging of small form-factor displays and microdisplays and, in aspects, for usage in virtual and augmented reality devices.
Systems and methods are provided. For example, a method includes determining a position of a user's hand and identifying a manipulation gesture performed by the user targeting a virtual object. The method also includes determining a three-dimensional (3D) origin point based on the position of the user's hand when the manipulation gesture is performed, and determining a 3D end point based on a movement of the user's hand from the origin point. The method additionally includes deriving a 3D vector based on the 3D origin point and the 3D end point, and applying an action to the targeted virtual object based on the 3D vector, wherein the targeted virtual object is at a distance greater than the user's arm reach.
A method and system for augmenting live video feeds with augmented reality (AR) effects. A live video feed comprising a plurality of video frames is received and the format of the video frames is determined. The video frames are converted to a format compatible with an AR software development kit (SDK). One or more AR effects from the AR SDK are applied to the converted frames. This can include detecting depictions of objects in the frames and applying effects to the detected objects. The effects can be selected based on detected object types. The frames are then converted back to the original format. If the frame rate differs between the video feed and the AR SDK, frame rate conversion is performed before and after applying the AR effects. The augmented video frames including the AR effects are provided as output, such as for broadcast or display.
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
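A simplified sketch of the format-conversion wrapper described in the live-video AR abstract above: each frame is converted to an SDK-compatible format, AR effects are applied, and the frame is converted back. The apply_ar_effects placeholder and the BGR/RGB formats are assumptions standing in for the real SDK and feed formats.

    # Illustrative wrapper only; the AR SDK call is a stand-in.
    import numpy as np

    def apply_ar_effects(rgb_frame):
        # Placeholder for the AR SDK: e.g. detect objects and draw effects on them.
        return rgb_frame

    def bgr_to_rgb(frame):
        return frame[..., ::-1]

    def augment_feed(frames_bgr):
        """Convert each incoming BGR frame to the SDK-compatible RGB format,
        apply the AR effects, then convert back to the original format."""
        out = []
        for frame in frames_bgr:
            rgb = bgr_to_rgb(frame)          # to the SDK-compatible format
            rgb = apply_ar_effects(rgb)      # AR effects from the SDK
            out.append(bgr_to_rgb(rgb))      # back to the original format
        return out

    if __name__ == "__main__":
        feed = [np.zeros((4, 4, 3), dtype=np.uint8)]
        print(len(augment_feed(feed)))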
Systems and methods for device handshaking are described. Embodiments for client device and associated wearable device initiated handshaking are described. In certain embodiments, a device such as wearable camera eyeglasses having both high-speed wireless circuitry and low-power wireless circuitry communicates with a client device. The low-power wireless circuitry is used for signaling and to manage power-on handshaking for the high-speed circuitry in order to reduce power consumption. An analysis of a high-speed connection status may be performed by a client device, and used to conserve power at the glasses with signaling from the client device to indicate when the high-speed circuitry of the glasses should be powered on.
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
H04W 52/38 - TPC being performed in particular situations
The present invention relates to a joint automatic audio-visual driven facial animation system that, in some example embodiments, includes a full-scale, state-of-the-art Large Vocabulary Continuous Speech Recognition (LVCSR) system with a strong language model for speech recognition, and obtains phoneme alignment from the word lattice.
The subject technology generates a segmentation mask based on first image data. The subject technology applies the segmentation mask on first depth data to reduce a set of artifacts in a depth map based on the first depth data. The subject technology generates a packed depth map based at least in part on the depth map. The subject technology converts a single channel floating point texture to a raw depth map. The subject technology generates multiple channels. The subject technology applies, to the first image data and the first depth data, a first augmented reality content generator corresponding to a selected first selectable graphical item, the first image data and the first depth data being captured with a camera. The subject technology generates a message including the first augmented reality content generator as applied to the first image data and the first depth data.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for recreating keyboard and mouse sounds within a virtual working environment. The program and method provide for receiving, from a first client device of a first participant of a group of participants within a virtual working environment, a timing of keyboard and mouse input detected at the first client device, the group of participants having been selected from among plural participants of the virtual working environment; generating, in response to the receiving, keyboard and mouse sounds that correspond to the timing of the keyboard and mouse input; and providing the generated keyboard and mouse sounds to one or more second client devices of respective one or more second participants of the group of participants, for presentation on the one or more second client devices.
Methods and systems are disclosed for using machine learning models to recommend fashion item fit styles based on body surface landmarks. The methods and systems access one or more images depicting a person wearing one or more fashion items and process, using one or more machine learning models, the one or more images to estimate a data set comprising a set of body landmarks of the person, a set of garment classifications associated with the one or more fashion items, and a set of garment segmentations for the one or more fashion items. The methods and systems identify one or more fit styles associated with the person based on the estimated data set and cause presentation of one or more real-world fashion items matching the identified one or more fit styles.
G06V 10/26 - Segmentation of patterns in the image fieldCutting or merging of image elements to establish the pattern region, e.g. clustering-based techniquesDetection of occlusion
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestriansBody parts, e.g. hands
A response component determines the context of a received message and provides a user with a similar context to generate a response to the message. Example methods include accessing a first content item, determining an application used to generate the first content item, causing to be displayed on a display of the computing device an indication of the first content item and an indication of the application, and responding to a selection of the indication of the application by a user by running the application to generate a second content item.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
70.
CORNER SHIELD PROTECTION FOR SURFACE MOUNT DEVICES
In some examples, a corner shield is provided for protecting a surface-mount device (SMD) on a printed circuit board (PCB). An example corner shield comprises a rigid structure configured to conform to a corner area of the SMD, and one or more mounting surfaces configured to mount the rigid structure to one or more soldering pads on the PCB adjacent to the SMD.
A method and a system include providing for a group conversation between plural users including a first user and a second user; determining that the second user is active within one of the main conversation view or the experience page; upon determining that the second user is active in the main conversation view, providing a first graphical element for display on a first device associated with the first user, the first graphical element including an avatar and name of the second user; and upon determining that the second user is active in the experience page, providing a second graphical element for display on the first device associated with the first user, the second graphical element including the avatar and name of the second user together with an icon representing the experience page.
Systems and methods for providing personalized videos are provided. An example method includes receiving preprocessed videos including facial expression parameters, modifying a source face to adopt the facial expression parameters thereby generating a modified source face, inserting the modified source face into the preprocessed videos to generate one or more personalized videos, providing a first user interface enabling a user to select a personalized video from the one or more personalized videos, determining that the user has selected the personalized video from the one or more personalized videos, and, in response to the determination, providing a second user interface enabling the user to select, from a list of actions, an action to be applied to the selected personalized video.
Various embodiments include systems, methods, and non-transitory computer-readable media for sharing and managing media galleries. Consistent with these embodiments, a method includes receiving a request from a first device to share a media gallery that includes a user avatar; generating metadata associated with the media gallery; generating a message associated with the media gallery, the message at least including the media gallery identifier and the identifier of the user avatar; and transmitting the message to a second device of the recipient user.
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
H04L 67/146 - Markers for unambiguous identification of a particular session, e.g. session cookie or URL-encoding
A response component determines the context of a received message and provides a user with a similar context to generate a response to the message. Example methods include accessing a first content item, determining an application used to generate the first content item, causing to be displayed on a display of the computing device an indication of the first content item and an indication of the application, and responding to a selection of the indication of the application by a user by running the application to generate a second content item.
Methods and systems are disclosed for sharing collections of content items in chat sessions. The methods and systems receive a request to share a first content item and present a GUI comprising a first set of options and a second set of options, the first set of options being associated with adding the first content item to a collection of content items that is accessible to a plurality of recipients, the second set of options being associated with sending the first content item to individual recipients. The methods and systems determine a set of target recipients of the first content item and select a content sharing link between a first link to the collection of content items and a second link directly to the first content item. The methods and systems send, to a target recipient, the content sharing link that has been selected.
Methods and systems are disclosed for using machine learning models to recommend fashion item fit styles based on body surface landmarks. The methods and systems access one or more images depicting a person wearing one or more fashion items and process, using one or more machine learning models, the one or more images to estimate a data set comprising a set of body landmarks of the person, a set of garment classifications associated with the one or more fashion items, and a set of garment segmentations for the one or more fashion items. The methods and systems identify one or more fit styles associated with the person based on the estimated data set and cause presentation of one or more real-world fashion items matching the identified one or more fit styles.
In some examples, a corner shield is provided for protecting a surface-mount device (SMD) on a printed circuit board (PCB). An example corner shield comprises a rigid structure configured to conform to a corner area of the SMD, and one or more mounting surfaces configured to mount the rigid structure to one or more soldering pads on the PCB adjacent to the SMD.
H05K 3/30 - Assembling printed circuits with electric components, e.g. with resistor
H05K 3/34 - Assembling printed circuits with electric components, e.g. with resistor electrically connecting electric components or wires to printed circuits by soldering
H05K 1/11 - Printed elements for providing electric connections to or between printed circuits
Examples herein describe a product scan system for identifying packaged items in an image. The product scan system accesses image frames, detects a packaged item in the image frames, generates text feature data by extracting text features from the packaged item in the image frames, generates image feature data by extracting image features from the packaged item in the image frames, generates a first ranked set of query results using the generated text feature data, generates a second ranked set of query results using the generated image feature data, generates a final ranked set of query results, and presents a subset of the final ranked set of query results on a graphical user interface of the computing device.
G06F 16/532 - Query formulation, e.g. graphical querying
G06F 16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
G06F 16/783 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
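The product scan abstract above produces one ranking from text features and another from image features, then merges them into a final ranked set. The abstract does not specify the fusion rule; the sketch below uses reciprocal rank fusion as one plausible, assumed strategy.

    # Sketch of merging two ranked result sets; the fusion rule is an assumption.
    def fuse_rankings(text_results, image_results, k=60, top_n=5):
        """text_results / image_results: product identifiers ordered best-first."""
        scores = {}
        for results in (text_results, image_results):
            for rank, item in enumerate(results):
                # Items ranked highly in either list accumulate a larger score.
                scores[item] = scores.get(item, 0.0) + 1.0 / (k + rank + 1)
        final = sorted(scores, key=scores.get, reverse=True)
        return final[:top_n]  # subset presented on the graphical user interface

    if __name__ == "__main__":
        print(fuse_rankings(["soda-12oz", "soda-6pk", "water"],
                            ["soda-6pk", "chips", "soda-12oz"]))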
A method and system for augmenting live video feeds with augmented reality (AR) effects. A live video feed comprising a plurality of video frames is received and the format of the video frames is determined. The video frames are converted to a format compatible with an AR software development kit (SDK). One or more AR effects from the AR SDK are applied to the converted frames. This can include detecting depictions of objects in the frames and applying effects to the detected objects. The effects can be selected based on detected object types. The frames are then converted back to the original format. If the frame rate differs between the video feed and the AR SDK, frame rate conversion is performed before and after applying the AR effects. The augmented video frames including the AR effects are provided as output, such as for broadcast or display.
Methods and systems are disclosed for sharing collections of content items in chat sessions. The methods and systems receive a request to share a first content item and present a GUI comprising a first set of options and a second set of options, the first set of options being associated with adding the first content item to a collection of content items that is accessible to a plurality of recipients, the second set of options being associated with sending the first content item to individual recipients. The methods and systems determine a set of target recipients of the first content item and select a content sharing link between a first link to the collection of content items and a second link directly to the first content item. The methods and systems send, to a target recipient, the content sharing link that has been selected.
H04N 21/4788 - Supplemental services, e.g. displaying phone caller identification or shopping application communicating with other users, e.g. chatting
H04N 21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers or prioritizing data for deletion
H04N 21/431 - Generation of visual interfacesContent or additional data rendering
Examples herein describe a product scan system for identifying packaged items in an image. The product scan system accesses image frames, detects a packaged item in the image frames, generates text feature data by extracting text features from the packaged item in the image frames, generates image feature data by extracting image features from the packaged item in the image frames, generates a first ranked set of query results using the generated text feature data, generates a second ranked set of query results using the generated image feature data, generates a final ranked set of query results, and presents a subset of the final ranked set of query results on a graphical user interface of the computing device.
A system is provided that detects a start of a camera session, captures initial raw data frames and stores them in memory. Upon determining that the camera session corresponds to a video recording session, the system activates a video recording pipeline and upon determining that the video recording pipeline is active, the system retrieves the initial raw data frames, encodes the initial raw data frames using the video recording pipeline, accesses additional captured raw data frames, and encodes the additional captured raw data frames using the video recording pipeline until detection of an end of the camera session. Upon detecting an end of the camera session, the system deactivates the video recording pipeline.
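A minimal sketch of the buffering behaviour described in the camera-session abstract above: initial raw frames are held in memory and encoded retroactively once the session is confirmed to be a video recording. The class and encoder names are illustrative placeholders.

    # Illustrative sketch; the encoder is a stand-in object.
    class CameraSession:
        def __init__(self, encoder):
            self.encoder = encoder
            self.initial_frames = []      # raw frames stored before intent is known
            self.recording = False

        def on_frame(self, raw_frame):
            if not self.recording:
                self.initial_frames.append(raw_frame)   # keep early frames in memory
            else:
                self.encoder.encode(raw_frame)          # additional captured frames

        def confirm_video_recording(self):
            """Session turned out to be a video recording: activate the pipeline
            and encode the frames captured before activation."""
            self.recording = True
            for frame in self.initial_frames:
                self.encoder.encode(frame)
            self.initial_frames.clear()

        def end_session(self):
            self.recording = False        # deactivate the video recording pipeline

    class PrintEncoder:
        def encode(self, frame):
            print("encoded", frame)

    if __name__ == "__main__":
        s = CameraSession(PrintEncoder())
        s.on_frame("f0"); s.on_frame("f1")
        s.confirm_video_recording()       # f0 and f1 are encoded retroactively
        s.on_frame("f2")
        s.end_session()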
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and method for performing operations comprising: receiving, by one or more processors that implement a messaging application, a video feed from a camera of a user device; detecting, by the messaging application, a face in the video feed; in response to detecting the face in the video feed, retrieving a three-dimensional (3D) caption; modifying the video feed to include the 3D caption at a position in 3D space of the video feed proximate to the face; and displaying a modified video feed that includes the face and the 3D caption.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
Techniques are described for reconciling events timestamped in different time domains in multi-node systems supporting low-latency hardware timestamping. First and second nodes having independent time bases are synchronized by the first node generating an event that is received effectively simultaneously at the first and second nodes, the first and second nodes recording a timestamp of receipt of the event, the first node asynchronously querying the second node for its timestamp of receipt of the event and comparing its timestamp of receipt of the event with the timestamp of receipt of the event by the second node, and the first node using a difference in the timestamps of receipt of the event by the first and second nodes to align the time bases of the first and second nodes. The nodes may include hardware timestamping functionality or use an external component (e.g., field programmable gate array) to provide the timestamping functionality.
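A worked sketch of the offset computation in the timestamp-reconciliation abstract above, assuming both nodes have already recorded a timestamp for the same event; the numbers and function names are illustrative.

    # Offset between two independent time bases from a shared event.
    def clock_offset(t_event_node1, t_event_node2):
        """Both nodes timestamp the same (effectively simultaneous) event in their
        own time bases; the difference is the offset between those bases."""
        return t_event_node2 - t_event_node1

    def to_node1_time(t_node2, offset):
        """Map a node-2 timestamp into node 1's time domain."""
        return t_node2 - offset

    if __name__ == "__main__":
        offset = clock_offset(1_000_000, 1_000_750)   # node 2 runs 750 ticks ahead
        print(to_node1_time(1_002_000, offset))       # 1_001_250 in node 1's domain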
An augmented reality display is disclosed. A colour projector 2 emits an image in a narrow beam comprising three primary colours: red, green and blue. A pair of waveguides 4, 6 is provided in the path of the projected beam. A first input grating 8 receives light from the projector 2 and diffracts the received light so that diffracted wavelengths of the light in first and second primary colours are coupled into the first waveguide 6, and so that diffracted wavelengths of the light in second and third primary colours are coupled out of the first waveguide in a direction towards the second waveguide 4. A second input diffraction grating 10 receives light coupled out of the first waveguide 6 and diffracts the second and third primary colours so that they are coupled into the second waveguide 4.
Methods, systems, and devices for predicting a departure time of a user from a labeled place. In some embodiments, the location sharing system accesses historical location data of the user and extracts, for one or more labeled places of the user, an attendance record of the user at the labeled place. Then, when the location sharing system receives current location data of the user and determines that the user is currently at the labeled place, the system predicts a departure time of the user from the labeled place based on the attendance record of the user at the labeled place. Some embodiments share the predicted departure time of the user with the user's friends via a map GUI.
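The departure-prediction abstract above does not specify a model, so the sketch below uses one simple assumed rule: predict the median of past departure times at the labeled place for the same weekday.

    # Illustrative prediction rule; not the disclosed method.
    from datetime import datetime
    from statistics import median

    def predict_departure(attendance_record, now):
        """attendance_record: list of (arrival, departure) datetimes at the labeled place."""
        same_weekday = [dep for _, dep in attendance_record if dep.weekday() == now.weekday()]
        past = same_weekday or [dep for _, dep in attendance_record]
        minutes = median(dep.hour * 60 + dep.minute for dep in past)
        return now.replace(hour=int(minutes) // 60, minute=int(minutes) % 60,
                           second=0, microsecond=0)

    if __name__ == "__main__":
        record = [(datetime(2024, 5, 6, 9), datetime(2024, 5, 6, 17, 30)),
                  (datetime(2024, 5, 13, 9), datetime(2024, 5, 13, 17, 0))]
        print(predict_departure(record, datetime(2024, 5, 20, 12, 0)))  # 17:15 that day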
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
(1) Peripherals; augmented reality glasses; augmented reality headsets; computer hardware, peripherals and software for remotely accessing, capturing, transmitting and displaying pictures, video, audio and data; software for setting up, configuring, and controlling wearable computer hardware and peripherals; software for setting up, configuring, and controlling wearable computer hardware and peripheral devices in the field of augmented reality; downloadable computer operating software for augmented reality; downloadable mobile operating system software; downloadable computer operating system software; downloadable computer operating system for operating augmented reality devices (1) Providing temporary use of online non-downloadable middleware for providing an interface between augmented reality devices and operating systems; providing temporary use of online non-downloadable software for providing an interface between augmented reality devices and operating systems; providing temporary use of online non-downloadable software for providing an interface between computer peripheral devices and operating systems
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Computer peripherals; augmented reality glasses; augmented reality headsets; computer hardware, peripherals and software for remotely accessing, capturing, transmitting and displaying pictures, video, audio and data; software for setting up, configuring, and controlling wearable computer hardware and peripherals; software for setting up, configuring, and controlling wearable computer hardware and peripheral devices in the field of augmented reality; downloadable computer operating software for augmented reality; downloadable mobile operating system software; downloadable computer operating system software; downloadable computer operating system for operating augmented reality devices. Providing temporary use of online non-downloadable middleware for providing an interface between augmented reality devices and operating systems; providing temporary use of online non-downloadable software for providing an interface between augmented reality devices and operating systems; providing temporary use of online non-downloadable software for providing an interface between computer peripheral devices and operating systems.
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Administration of a program enabling participants to access pre-release, exclusive, and experimental features, software, and games via a mobile application, global computer networks, wireless networks, and electronic communications networks. Downloadable mobile application software for users to view and access ad-free content; Software for modifying the appearance and enabling transmission of photographs and videos; software for use in taking and editing photographs and recording and editing videos; software to enable the transmission of photographs and videos to mobile telephones; software for the collection, editing, organizing, modifying, transmission, storage and sharing of data and information; computer software for use as an application programming interface (api); software to enable uploading, downloading, accessing, storing, posting, displaying, tagging, distributing, streaming, linking, sharing, transmitting or otherwise providing electronic media, photographic and video content, digital data or information via computer and communication networks; software for streaming audio-visual media content via a global computer network and to mobile and digital electronic devices; computer software which allows users to build and access social network information including address book, friend lists, profiles, preferences and personal data; software for managing contact information in mobile device address books; electronic database in the field of entertainment recorded on computer media; downloadable software for sending digital photos, videos, images, audio-visual content and text to others via a global computer network; downloadable computer software for use in mobile devices, namely, augmented reality software for integrating electronic data with real world environments for the purpose of viewing, capturing, recording and editing augmented images and augmented videos; downloadable computer software application which allows users to create avatars, graphic icons, symbols, graphical depictions of people, places and things, fanciful designs, comics and phrases that can be posted, shared and transmitted via multi-media messaging (mms), text messaging (sms), the internet, and other communication networks; downloadable software for the purpose of analyzing the interactions and engagement between users for the purpose of ranking relationships; downloadable software for the purpose of tracking, accessing and sharing the location of users; downloadable software for recording, tracking, accessing and sharing of past and real-time location data for the purpose of sharing a user's location with others; downloadable software granting users early or exclusive access to new and experimental features relating to all of the foregoing. 
Providing temporary use of non-downloadable software allowing users to view and access ad-free content; Providing temporary use of non-downloadable software for modifying the appearance and enabling transmission of photographs and videos; providing temporary use of non-downloadable software for use in taking and editing photographs and recording and editing videos; providing temporary use of non-downloadable software for the collection, recommendation, editing, organizing, modifying, transmission, uploading, display, storage and sharing of data, information, photographs, games, music, videos, audio-visual material and user generated content; providing temporary use of non-downloadable software to enable uploading, downloading, accessing, storing, posting, displaying, tagging, distributing, streaming, linking, sharing, transmitting or otherwise providing electronic media, photographic and video content, digital data or information via computer and communication networks; providing temporary use of non-downloadable software for streaming audio-visual media content via a global computer network and to mobile and digital electronic devices; providing temporary use of non-downloadable computer software which allows users to build and access social network information including address book, friend lists, profiles, preferences and personal data; providing temporary use of non-downloadable software for managing contact information in mobile device address books; providing temporary use of non-downloadable software for sending digital photos, videos, images, audio-visual content and text to others via a global computer network; providing temporary use of non-downloadable computer software for use in mobile devices, namely, augmented reality software for integrating electronic data with real world environments for the purpose of viewing, capturing, recording and editing augmented images and augmented videos; providing temporary use of non-downloadable computer software application which allows users to create avatars, graphic icons, symbols, graphical depictions of people, places and things, fanciful designs, comics and phrases that can be posted, shared and transmitted via multi-media messaging (mms), text messaging (sms), the internet, and other communication networks; providing temporary use of non-downloadable software for the purpose of analyzing the interactions and engagement between users for the purpose of ranking relationships; providing temporary use of non-downloadable software for the purpose of tracking, accessing and sharing the location of users; providing temporary use of non-downloadable software for recording, tracking, accessing and sharing of past and real-time location data for the purpose of sharing a user's location with others; hosting of digital content on the internet; providing information from searchable indexes and databases of information, including text, electronic documents, databases, graphics, photographic images and audio visual information, by means of computer and communication networks; computer services, namely, creating virtual communities for registered users to participate in discussions and engage in social, business and community networking; application service provider (asp) featuring software to enable or facilitate the uploading, downloading, streaming, posting, displaying, linking, sharing or otherwise providing electronic media or information over communication networks; providing temporary use of non-downloadable software granting users early or exclusive access to new and experimental features relating to all of the foregoing
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Peripherals; augmented reality glasses; augmented reality headsets; computer hardware, peripherals and software for remotely accessing, capturing, transmitting and displaying pictures, video, audio and data; software for setting up, configuring, and controlling wearable computer hardware and peripherals; software for setting up, configuring, and controlling wearable computer hardware and peripheral devices in the field of augmented reality; downloadable computer operating software for augmented reality; downloadable mobile operating system software; downloadable computer operating system software; downloadable computer operating system for operating augmented reality devices Providing temporary use of online non-downloadable middleware for providing an interface between augmented reality devices and operating systems; providing temporary use of online non-downloadable software for providing an interface between augmented reality devices and operating systems; providing temporary use of online non-downloadable software for providing an interface between computer peripheral devices and operating systems
91.
TIME SYNCHRONIZATION FOR SHARED EXTENDED REALITY EXPERIENCES
A first extended reality (XR) device and a second XR device are colocated in an environment. The first XR device captures sensory data of a wearer of the second XR device. The sensory data is used to determine a time offset between a first clock of the first XR device and a second clock of the second XR device. The first clock and the second clock are synchronized based on the time offset and a shared coordinate system is established. The shared coordinate system enables alignment of virtual content that is simultaneously presented by the first XR device and the second XR device based on the synchronization of the first clock and the second clock.
A method for transferring a gait pattern of a first user to a second user to simulate augmented reality content in a virtual simulation environment is described. In one aspect, the method includes identifying a gait pattern of a first user operating a first visual tracking system in a first physical environment, identifying a trajectory from a second visual tracking system operated by a second user in a second physical environment, the trajectory based on poses of the second visual tracking system over time, modifying the trajectory from the second visual tracking system based on the gait pattern of the first user, applying the modified trajectory in a plurality of virtual environments, and generating simulated ground truth data based on the modified trajectory in the plurality of virtual environments.
A method for adjusting an over-rendered area of a display in an AR device is described. The method includes identifying an angular velocity of a display device, a most recent pose of the display device, previous warp poses, and previous over-rendered areas, and adjusting a size of a dynamic over-rendered area based on a combination of the angular velocity, the most recent pose, the previous warp poses, and the previous over-rendered areas.
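An illustrative heuristic, not the disclosed method, for sizing a dynamic over-rendered area from the inputs named in the abstract above (angular velocity and recent pose/warp history); the constants and names are made up.

    # Larger head rotation or larger recent warp corrections -> larger margin.
    def over_render_margin(angular_velocity_dps, recent_pose_errors_px,
                           base_px=8, velocity_gain=0.5, max_px=128):
        history_term = max(recent_pose_errors_px, default=0.0)
        margin = base_px + velocity_gain * angular_velocity_dps + history_term
        return min(int(margin), max_px)

    if __name__ == "__main__":
        print(over_render_margin(120.0, [10.0, 14.0]))  # fast rotation -> wide margin
        print(over_render_margin(5.0, [2.0]))           # slow rotation -> narrow margin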
One aspect disclosed is a method including determining a location from a positioning system receiver, determining, using a hardware processor and the location, that the location is approaching a turn of visual direction information, displaying the visual direction information on a display of a wearable device in response to the determining, determining, using the positioning system receiver, whether the turn of the visual direction information has been made, determining, by the hardware processor, a first period of time for display of the content data based on whether the turn of the visual direction information has been made, powering on the display and displaying, using the display, content data for the first period of time, and turning off the display and the hardware processor following display of the content data.
A gesture-based text entry user interface for an Augmented Reality (AR) system is provided. The AR system detects a start text entry gesture made by a user of the AR system, generates a virtual keyboard user interface including a virtual keyboard having a plurality of virtual keys, and provides to the user the virtual keyboard user interface. The AR system detects a hold of an enter text gesture made by the user. While the user holds the enter text gesture, the AR system collects continuous motion gesture data of a continuous motion as the user makes the continuous motion through the virtual keys of the virtual keyboard. The AR system detects a release of the enter text gesture by the user and generates entered text data based on the continuous motion gesture data.
G06F 3/04886 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
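A toy sketch of decoding the continuous motion gesture described in the gesture-based text entry abstract above: sampled hand positions snap to the nearest virtual key and consecutive repeats collapse. The key layout and decoding rule are simplified assumptions, not the disclosed decoder.

    # Simplified continuous-motion decoding over a made-up key layout.
    KEYS = {"h": (0, 0), "e": (1, 0), "l": (2, 0), "o": (3, 0)}

    def nearest_key(point):
        return min(KEYS, key=lambda k: (KEYS[k][0] - point[0]) ** 2 + (KEYS[k][1] - point[1]) ** 2)

    def decode_path(points):
        """points: hand positions sampled while the enter-text gesture is held."""
        text = []
        for p in points:
            key = nearest_key(p)
            if not text or text[-1] != key:   # collapse consecutive hits on the same key
                text.append(key)
        return "".join(text)

    if __name__ == "__main__":
        print(decode_path([(0.1, 0), (0.9, 0.1), (2.1, 0), (2.0, 0.1), (3.0, 0)]))  # "helo"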
Systems, methods, and computer readable media for a customizable avatar generation system, where the methods include accessing text data, processing, using at least one processor, the text data to determine first characteristics of the text data, selecting a personalized avatar of a plurality of personalized avatars for the text data based on matching the first characteristics with second characteristics of the plurality of personalized avatars, generating a customized avatar based on the text data and the selected personalized avatar, and causing the customized avatar to be displayed on a display of a computing device.
Methods and systems are disclosed for performing operations for providing an augmented reality unboxing experience. The operations include retrieving an augmented reality element comprising a virtual box that is in a closed state. The operations include obtaining triggers associated with the virtual box, the triggers configured to change the virtual box from the closed state to an open state. The operations include displaying the virtual box. The operations include receiving input associated with the virtual box. The operations include determining that the received input corresponds to the one or more triggers associated with the virtual box. The operations include modifying the virtual box from being displayed in the closed state to being displayed in the open state.
H04N 1/00 - Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmissionDetails thereof
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 16/16 - File or folder operations, e.g. details of user interfaces specifically adapted to file systems
G06T 19/00 - Manipulating 3D models or images for computer graphics
98.
DIRECT SCALE LEVEL SELECTION FOR MULTILEVEL FEATURE TRACKING UNDER MOTION BLUR
A method for mitigating motion blur in a visual-inertial tracking system is described. In one aspect, the method includes accessing a first image generated by an optical sensor of the visual tracking system, accessing a second image generated by the optical sensor of the visual tracking system, the second image following the first image, determining a first motion blur level of the first image, determining a second motion blur level of the second image, identifying a scale change between the first image and the second image, determining a first optimal scale level for the first image based on the first motion blur level and the scale change, and determining a second optimal scale level for the second image based on the second motion blur level and the scale change.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersectionsConnectivity analysis, e.g. of connected components
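A sketch of one plausible direct scale-level selection rule consistent with the abstract above: heavier motion blur (and a larger scale change) pushes the tracker toward a coarser pyramid level. The formula and constants are assumptions, not the disclosed method.

    # Illustrative scale-level selection under motion blur.
    import math

    def optimal_scale_level(blur_level_px, scale_change=1.0, num_levels=4):
        """blur_level_px: estimated blur in pixels; scale_change: image-to-image scale factor."""
        # Coarser levels halve the resolution, so a blur of b px shrinks to b / 2**L px.
        level = math.ceil(math.log2(max(blur_level_px, 1.0)))   # level where blur is roughly <= 1 px
        level += round(math.log2(scale_change))                  # compensate for the scale change
        return min(max(level, 0), num_levels - 1)

    if __name__ == "__main__":
        print(optimal_scale_level(6.0))        # noticeable blur -> coarser level
        print(optimal_scale_level(1.0, 2.0))   # sharp but zoomed -> adjust for scale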
An audio response system can generate multimodal messages that can be dynamically updated on a viewer's client device based on a type of audio response detected. The audio responses can include keywords or continuum-based signals (e.g., levels of wind noise). A machine learning scheme can be trained to output classification data from the audio response data for content selection and dynamic display updates.
An AR or VR display device. First and third input gratings receive light of a first color from first and second projectors, respectively, coupling the light into a first waveguide. Second and fourth input gratings receive light of a second color from the first and second projectors, respectively, coupling the light into a second waveguide. An output diffractive optical element couples light out of the waveguides towards a viewing position. The first and second projectors provide light to the input diffractive optical elements in directions that are at a first and second angle, respectively, to a waveguide normal vector. The output diffractive optical element couples light out of the waveguides in a first range of angles for light from the first projector and in a second range of angles for light from the second projector, the first range of angles and the second range of angles differing but partially overlapping.