An image system dynamically updates drive sequences, the image display settings or display driving characteristics with which a display is operated. The image system may determine a drive sequence at least partially based on input from one or more sensors. For example, the image system may include sensors such as an inertial measurement unit, a light sensor, a camera, a temperature sensor, or other sensors from which sensor data may be collected. The image system may analyze the sensor data to calculate drive sequence settings or to select a drive sequence from a number of predetermined drive sequences. Displaying image content on a display thus includes both providing the display with image data and operating the display with various drive sequences.
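To make the selection step concrete, here is a minimal Python sketch of choosing among predetermined drive sequences from sensor readings. The sequence names, the refresh-rate and luminance fields, and the thresholds are illustrative assumptions, not details taken from the abstract.

```python
# Illustrative sketch only: selecting a drive sequence from sensor data.
# All names, thresholds, and heuristics here are hypothetical.

from dataclasses import dataclass

@dataclass
class DriveSequence:
    name: str
    refresh_hz: int   # display refresh rate
    peak_nits: int    # maximum luminance

# A hypothetical library of predetermined drive sequences.
SEQUENCES = [
    DriveSequence("low-power", refresh_hz=30, peak_nits=200),
    DriveSequence("standard", refresh_hz=60, peak_nits=500),
    DriveSequence("outdoor", refresh_hz=60, peak_nits=1000),
]

def select_drive_sequence(ambient_lux: float, temperature_c: float) -> DriveSequence:
    """Pick a predetermined drive sequence based on sensor readings."""
    if temperature_c > 45.0:       # throttle when the device runs hot
        return SEQUENCES[0]
    if ambient_lux > 10_000.0:     # bright sunlight calls for high luminance
        return SEQUENCES[2]
    return SEQUENCES[1]

print(select_drive_sequence(ambient_lux=20_000.0, temperature_c=30.0).name)  # outdoor
```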
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/14 - Digital output to display device
Systems and methods are provided for emotion-based text-to-speech. The systems and methods perform operations comprising: accessing a text string; storing a plurality of embeddings associated with a plurality of speakers, a first embedding for a first speaker being associated with a first emotion and a second embedding for a second speaker of the plurality of speakers being associated with a second emotion; selecting the first speaker to speak one or more words of the text string; determining that the one or more words are associated with the second emotion; generating, based on the first embedding and the second embedding, a third embedding for the first speaker associated with the second emotion; and applying the third embedding and the text string to a vocoder to generate an audio stream comprising the one or more words being spoken by the first speaker with the second emotion.
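A minimal numpy sketch of one plausible way to form the third embedding: assume, hypothetically, that each stored embedding decomposes additively into a speaker-identity component and an emotion component, so the second speaker's emotion can be transferred onto the first speaker by vector arithmetic. The neutral reference embeddings and the additive decomposition are assumptions, not details from the abstract.

```python
# Hedged sketch: emotion transfer between speaker embeddings by vector
# arithmetic. The decomposition into identity + emotion is an assumption.

import numpy as np

rng = np.random.default_rng(0)
dim = 256

# Stored embeddings: (speaker, emotion) pairs.
first_embedding = rng.normal(size=dim)    # first speaker, first emotion
second_embedding = rng.normal(size=dim)   # second speaker, second emotion

# Hypothetical neutral (emotion-free) references for each speaker.
first_neutral = rng.normal(size=dim)
second_neutral = rng.normal(size=dim)

# Emotion direction exhibited by the second speaker.
emotion_delta = second_embedding - second_neutral

# Third embedding: first speaker's identity carrying the second emotion.
third_embedding = first_neutral + emotion_delta

# third_embedding would then be fed with the text string to a vocoder.
print(third_embedding.shape)  # (256,)
```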
G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G10L 13/033 - Methods for producing synthetic speech; Speech synthesisers Voice editing, e.g. manipulating the voice of the synthesiser
G10L 13/047 - Architecture of speech synthesisers
G10L 25/18 - Speech or voice analysis techniques not restricted to a single one of the groups, characterised by the type of extracted parameters, the extracted parameters being spectral information of each sub-band
A text entry process for an Augmented Reality (AR) system. The AR system detects, using one or more cameras of the AR system, a start text entry gesture made by a user of the AR system. During text entry, the AR system detects, using the one or more cameras, a symbol corresponding to a fingerspelling sign made by the user. The AR system generates entered text data based on the symbol and provides text in a text scene component of an AR overlay provided by the AR system to the user based on the entered text data.
Systems, methods, and computer readable media for an authentication orchestration system. Example methods include receiving, from an authentication client, an authentication request, the authentication request comprising an indication of an account and an indication of a goal authentication level. The method further includes accessing a current authentication level and adjusting, based on a risk level, the goal authentication level to an adjusted goal authentication level. The method further includes selecting a challenge method of a plurality of challenge methods based on a difference between the adjusted goal authentication level and the current authentication level. The method further includes performing the selected challenge method with a user associated with the account, and causing an indication of whether the adjusted goal authentication level was achieved to be sent to the authentication client.
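A hedged sketch of the challenge-selection step follows: pick the challenge whose strength best covers the gap between the adjusted goal level and the current level. The numeric levels, method names, strengths, and risk policy are invented for illustration; the abstract does not specify them.

```python
# Illustrative challenge selection based on an authentication-level gap.
# Levels, strengths, and the risk threshold are hypothetical.

CHALLENGES = {          # method -> authentication strength it adds
    "sms_otp": 1,
    "email_link": 1,
    "totp_app": 2,
    "webauthn_key": 3,
}

def adjust_goal(goal: int, risk: float) -> int:
    """Raise the goal level under elevated risk (assumed policy)."""
    return goal + 1 if risk > 0.7 else goal

def select_challenge(current: int, goal: int, risk: float) -> str | None:
    adjusted = adjust_goal(goal, risk)
    gap = adjusted - current
    if gap <= 0:
        return None  # already authenticated strongly enough
    # Choose the weakest challenge that still closes the gap.
    viable = [(s, m) for m, s in CHALLENGES.items() if s >= gap]
    return min(viable)[1] if viable else max((s, m) for m, s in CHALLENGES.items())[1]

print(select_challenge(current=1, goal=2, risk=0.9))  # totp_app
```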
Systems, devices, methods, and instructions are described for generating and displaying a reply menu within a graphical user interface (GUI). One embodiment involves receiving a selection of messages received at a client device, detecting a reply message generated by the client device in response to the selected messages, generating a reply menu comprising an ordered list of user account identifiers, the user account identifiers representing user accounts, each user account associated with a corresponding selected message, causing display of the reply menu within a GUI, receiving a selection of a subset of user account identifiers from the reply menu, initiating independent communication sessions with the selected subset of user account identifiers, and transmitting the reply message via the independent communication sessions.
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range of values
H04M 1/72436 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. SMS or e-mail
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for detecting a pose of a user. The program and method include receiving a monocular image that includes a depiction of a body of a user; detecting a plurality of skeletal joints of the body depicted in the monocular image; and determining a pose represented by the body depicted in the monocular image based on the detected plurality of skeletal joints of the body. A pose of an avatar is modified to match the pose represented by the body depicted in the monocular image by adjusting a set of skeletal joints of a rig of an avatar based on the detected plurality of skeletal joints of the body; and the avatar having the modified pose that matches the pose represented by the body depicted in the monocular image is generated for display.
Described is a system for emphasizing XR content based on user intent by gathering interaction data from use of one or more interaction functions by a user, accessing a camera feed of a camera system from the XR device, analyzing a combination of data corresponding to the interaction data and the camera feed using a first machine learning model to identify a priority for individual media content items, and determining that a first subset of media content items are of a higher priority than a second subset of media content items. Then the system displays the media content items on the XR device of the user, the first subset of the media content items displayed differently than the second subset of the media content items based on the identified priority.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING Scene-specific elements in augmented reality scenes
8.
Display screen or portion thereof with a graphical user interface
Examples disclosed herein relate to the use of shared pose data in extended reality (XR) tracking. A communication link is established between a first XR device and a second XR device. The second XR device is worn by a user. The first XR device receives pose data of the second XR device via the communication link and captures an image of the user. The user is identified based on the image and the pose data.
Methods and systems are disclosed for performing operations for estimating power usage of an AR experience. The operations include: accessing resource utilization data associated with execution of an augmented reality (AR) experience; applying a machine learning technique to the resource utilization data to estimate power consumption of the AR experience, the machine learning technique being trained to establish a relationship between a plurality of training resource utilization data associated with training AR experiences and corresponding ground-truth power consumption of the training AR experiences; and adjusting one or more operations of the AR experience to reduce power consumption based on the estimated power consumption of the AR experience.
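As a stand-in for the trained machine-learning technique, the sketch below fits a linear least-squares mapping from resource-utilization features to measured power and uses it to estimate the draw of a new AR experience. The linear model, the feature set, and the throttling threshold are assumptions for illustration only.

```python
# Illustrative power-estimation sketch: linear regression from resource
# utilization to watts. Feature choices and numbers are hypothetical.

import numpy as np

rng = np.random.default_rng(1)

# Training data: rows of [cpu_util, gpu_util, mem_bw_gbps] for training AR
# experiences, with ground-truth power consumption in watts.
X_train = rng.uniform(0, 1, size=(200, 3))
true_w = np.array([1.5, 3.0, 0.8])
y_train = X_train @ true_w + 0.5 + rng.normal(0, 0.05, size=200)

# Fit weights and an intercept by least squares.
A = np.hstack([X_train, np.ones((200, 1))])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

def estimate_power(cpu: float, gpu: float, mem_bw: float) -> float:
    return float(np.array([cpu, gpu, mem_bw, 1.0]) @ coef)

est = estimate_power(cpu=0.9, gpu=0.8, mem_bw=0.5)
print(f"estimated power: {est:.2f} W")
if est > 3.5:
    print("throttle AR effect quality to reduce power")  # assumed adjustment
```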
In a graphical user interface (GUI) for a social media platform that provides access to posted social media items, social media content is surfaced with variable visual attributes based on activity metrics. An activity metric is determined for the underlying social media activity associated with each of multiple collections of social media items. The GUI displays user-selectable interface elements, such as icons, corresponding to these collections, each icon having a visual attribute that varies with the activity metric. This results in visual differences between collection icons based on differences in their corresponding activity metrics.
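A minimal sketch of mapping an activity metric to a visual attribute, in the spirit of the abstract; the logistic mapping to an opacity value and its midpoint are assumed examples, since the abstract does not specify the attribute.

```python
# Hypothetical mapping from an activity metric to icon opacity.

import math

def icon_opacity(activity_metric: float, midpoint: float = 50.0) -> float:
    """Map an activity metric (e.g. recent post count) to icon opacity."""
    return 1.0 / (1.0 + math.exp(-(activity_metric - midpoint) / 10.0))

for metric in (10, 50, 90):
    print(metric, round(icon_opacity(metric), 2))
# Busier collections render with more opaque icons than quiet ones.
```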
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or on a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor changing behaviour, using icons
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or by the nature of the input device, e.g. gestures as a function of the exerted pressure, using a touch screen or digitiser, e.g. input of commands through traced gestures
G06F 16/248 - Presentation of query results
G06F 16/29 - Geographical information databases
G06F 16/487 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using geographical or spatial information, e.g. location
G06F 16/9535 - Search customisation based on user profiles and personalisation
G06F 16/9537 - Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
G06Q 50/00 - Systems or methods specially adapted for a specific business sector, e.g. utilities or tourism
G06T 11/20 - Drawing from basic elements, e.g. lines or circles
G06T 11/60 - Editing figures and text; Combining figures or text
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, comprising specially adapted graphical user interfaces [GUI]
H04L 41/28 - Restricting access to network management systems or functions, e.g. using authorisation function to access network configuration
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, for supporting social networking services
H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
H04L 67/52 - Network services specially adapted for the location of the user terminal
H04W 4/02 - Services making use of location information
H04W 4/029 - Location-based management or tracking services
H04W 4/18 - Information format or content conversion, e.g. adaptation by the network of received or transmitted information for wireless delivery to users or terminals
H04W 4/21 - Services signalling; Auxiliary data signalling, i.e. transmission of data via a non-traffic channel, for social networking applications
H04W 12/02 - Protecting privacy or anonymity, e.g. protecting personally identifiable information [PII]
12.
CONTEXTUAL ACTION MECHANISMS IN CHAT USER INTERFACES
A graphical user interface (GUI) for a messaging or chat application on a mobile electronic device launches, responsive to user selection of a particular message cell in the GUI, a contextual action menu overlaid on an underlying scrollable message board or list. The action menu comprises a preview area displaying a preview of message content of the selected message cell, and further comprises one or more user-selectable action items for executing respective corresponding user actions with respect to the selected message. The preview area is automatically scaled and positioned dependent on one or more attributes of the selected message cell.
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or by the nature of the input device, e.g. gestures as a function of the exerted pressure, using a touch screen or digitiser, e.g. input of commands through traced gestures
G06T 3/20 - Linear translation of a whole image or part of an image, e.g. panning
G06T 3/40 - Scaling a whole image or part of an image
H04L 51/04 - Real-time or near real-time messaging, e.g. instant messaging [IM]
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, for supporting social networking services
13.
HAND GESTURES FOR ANIMATING AND CONTROLLING VIRTUAL AND GRAPHICAL ELEMENTS
Examples are described for controlling virtual elements on a display in response to hand gestures detected by an eyewear device that is capturing frames of video data with its camera system. An image processing system detects a hand and presents a menu icon on the display in accordance with a detected current hand location. The image processing system detects a series of hand shapes in the captured frames of video data and determines whether the detected hand shapes match any of a plurality of predefined hand gestures stored in a hand gesture library. If a match is found, an action executes in accordance with the matching hand gesture. In response to an opening gesture, an element animation system presents one or more graphical elements incrementally moving along a path extending away from the menu icon. A closing hand gesture causes the elements to retreat along the path toward the menu icon.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or on a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor changing behaviour, using icons
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06V 20/40 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING Scene-specific elements in video content
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
Provided are systems and methods for template-based generation of personalized videos. An example method includes receiving a sequence of frame images, face area parameters corresponding to positions of a face area in a frame image of the sequence of frame images, and facial landmark parameters corresponding to the frame image of the sequence of frame images, where the facial landmark parameters are absent from the frame images, receiving an image of a source face, modifying, based on the facial landmark parameters corresponding to the frame image, the image of the source face to obtain a further face image featuring the source face adopting a facial expression corresponding to the facial landmark parameters, and inserting the further face image into the frame image at a position determined by the face area parameters corresponding to the frame image, thereby generating an output frame of an output video.
G06T 19/20 - Manipulating 3D models or images for computer graphics Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06T 13/40 - 3D [Three-Dimensional] animation of characters, e.g. humans, animals or virtual beings
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
Aspects of the present disclosure involve a system for presenting AR items. The system receives a video that includes a depiction of a real-world object in a real-world environment. The system generates a three-dimensional (3D) bounding box for the real-world object and stabilizes the 3D bounding box based on one or more sensors of the device. The system determines a position, orientation, and dimensions of the real-world object based on the stabilized 3D bounding box and renders a display of an augmented reality (AR) item within the video based on the position, orientation, and dimensions of the real-world object.
G06T 7/70 - Determining position or orientation of objects or cameras
G06T 19/20 - Manipulating 3D models or images for computer graphics Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING Scene-specific elements in augmented reality scenes
Examples disclosed herein relate to the use of shared pose data in extended reality (XR) tracking. A communication link is established between a first XR device and a second XR device. The second XR device is worn by a user. The first XR device receives pose data of the second XR device via the communication link and captures an image of the user. The user is identified based on the image and the pose data.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointing devices using gyroscopes, accelerometers or tilt sensors
G06F 3/038 - Control arrangements and interfaces therefor, e.g. drivers or device-embedded control circuitry
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING Scene-specific elements in augmented reality scenes
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
G06T 19/00 - Manipulating 3D models or images for computer graphics
17.
PHOTO-REALISTIC TEMPORALLY STABLE HAIRSTYLE CHANGE IN REAL-TIME
The subject technology trains a neural network based on a training process. The subject technology selects a frame from an input video, the selected frame comprising image data including a representation of a face and hair, the representation of the hair being masked. The subject technology determines a previous predicted frame. The subject technology concatenates the selected frame and the previous predicted frame to generate a concatenated frame, the concatenated frame being provided to the neural network. The subject technology generates, using the neural network, a set of outputs including an output tensor, a warping field, and a soft mask. The subject technology performs, using the warping field, a warp of the selected frame and the output tensor. The subject technology generates a prediction corresponding to a corrected texture rendering of the selected frame.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06T 3/18 - Image warping, e.g. rearranging pixels individually
18.
MESSAGE-BASED PROCESSING BASED ON MULTICAST PATTERNS
A message-based processor includes a plurality of processor components. In response to receiving an input message, the message-based processor accesses a multicast pattern that includes at least one set of pattern elements. For each pattern element, a target processor component of the plurality of processor components and a target memory location are determined based on a mapping applied for the pattern element. Respective target instructions are multicast to the target processor components. The respective target instruction of each of the target processor components identifies the target memory location associated with the target processor component. A state value stored at the target memory location identified by the respective target instruction is updated by each of the target processor components to obtain an updated state value. Output messages related to the updated state values are selectively provided.
A neural network device includes a shared physical memory that has a plurality of independently accessible memory sections. The neural network device further includes a data processor core to execute instructions. The instructions include at least one instruction involving multiple memory access operations specifying respective logical memory addresses in a plurality of logical memories. During configuration of the neural network device for a particular application, respective memory sections of the plurality of independently accessible memory sections are assigned to respective logical memories of the plurality of logical memories. In accordance with the configuration, each logical memory address of the respective logical memory addresses is mapped to a physical address by providing an indication of a memory section of the plurality of independently accessible memory sections and a row address within the memory section.
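A hedged sketch of the mapping the abstract describes: at configuration time each logical memory is assigned a physical section, and a logical address then resolves to a (section, row) pair. The section size and the assignment table below are invented for illustration.

```python
# Illustrative logical-to-physical address mapping for a sectioned memory.
# Section size and assignments are hypothetical.

SECTION_ROWS = 1024  # rows per independently accessible section (assumed)

# Configuration step: logical memory name -> assigned physical section index.
assignment = {"weights": 0, "activations": 1, "partial_sums": 2}

def map_address(logical_memory: str, logical_addr: int) -> tuple[int, int]:
    """Resolve a logical address to (physical section, row within section)."""
    if not 0 <= logical_addr < SECTION_ROWS:
        raise ValueError("logical address outside the assigned section")
    return assignment[logical_memory], logical_addr

print(map_address("activations", 37))  # (1, 37)
```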
G06F 3/06 - Digital input from, or digital output to, record carriers
20.
METHOD, SYSTEM, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM FOR ANALYZING FACIAL FEATURES FOR AUGMENTED REALITY EXPERIENCES OF PHYSICAL PRODUCTS IN A MESSAGING SYSTEM
The subject technology receives image data including a representation of a face of a user. The subject technology analyzes the image data to determine a set of characteristics of the representation of the face. The subject technology, based at least in part on the determined set of characteristics, selects a particular product and a set of media content associated with the particular product. The subject technology causes display, at a client device, of at least one recommendation corresponding to the set of media content associated with the particular product.
G06V 10/56 - Extraction of image or video features relating to colour
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING Scene-specific elements in augmented reality scenes
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
21.
SEMICONDUCTOR STRUCTURES GROWN ON HETERO-INTERFACE WITHOUT ETCH DAMAGE
An array of semiconductor structures is grown on a hetero-interface barrier layer by forming successive semiconductor layers within holes formed through a dielectric layer deposited above the hetero-interface barrier layer. The hetero-interface forms a two-dimensional charge carrier gas. Each semiconductor structure is grown within one of the holes and includes at least one LED active layer between an n-type semiconductor layer and a p-type semiconductor layer. The bottom one of these two semiconductor layers has the same conductivity type as the barrier layer on which it is formed. The hetero-interface is defined between the barrier layer and a buffer layer. The barrier layer and buffer layer can be formed from GaN, AlGaN, and/or InGaN of varying concentrations. The two-dimensional charge carrier gas can be a 2D electron gas or a 2D hole gas.
H01L 27/15 - Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components with at least one potential-jump barrier or surface barrier, specially adapted for light emission
H01L 33/00 - SEMICONDUCTOR DEVICES NOT COVERED BY CLASS - Details
H01L 33/06 - SEMICONDUCTOR DEVICES NOT COVERED BY CLASS - Details characterised by the semiconductor bodies having a quantum effect structure or superlattice, e.g. tunnel junction, within the light emitting region, e.g. quantum confinement structure or tunnel barrier
H01L 33/32 - Materials of the light emitting region containing only elements of Group III and Group V of the Periodic Table containing nitrogen
H01L 33/38 - SEMICONDUCTOR DEVICES NOT COVERED BY CLASS - Details characterised by the electrodes having a particular shape
The present disclosure relates to off-platform messaging. A selection of a third-party communication mechanism is received from a first device of a first user. A message is generated for communication to a second device of a second user via the third-party communication mechanism. The message identifies a network resource containing information relating to an event. A request to access the network resource is received. An event invitation interface is caused to be presented at the second device. The event invitation interface comprises a user-selectable indicium to cause issuing of a request to download an application associated with a messaging system. The request to download the application comprises a user identifier associated with the second user and an event identifier associated with the event. The user identifier and the event identifier are used to join the second user to a group chat hosted by the messaging system and pertaining to the event.
H04L 51/046 - Interoperability with other network applications or services
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or on a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor changing behaviour, using icons
H04L 12/18 - Arrangements for providing special services to substations for broadcast or conference
H04L 51/224 - Monitoring or handling of messages providing notification on incoming messages, e.g. pushed notifications of received messages
H04W 4/14 - Short messaging services, e.g. SMS or unstructured supplementary service data [USSD]
H04W 8/26 - Network addressing or numbering for mobility support
23.
PHOTO-REALISTIC TEMPORALLY STABLE HAIRSTYLE CHANGE IN REAL-TIME
The subject technology trains a neural network based on a training process. The subject technology selects a frame from an input video, the selected frame comprising image data including a representation of a face and hair, the representation of the hair being masked. The subject technology determines a previous predicted frame. The subject technology concatenates the selected frame and the previous predicted frame to generate a concatenated frame, the concatenated frame being provided to the neural network. The subject technology generates, using the neural network, a set of outputs including an output tensor, a warping field, and a soft mask. The subject technology performs, using the warping field, a warp of the selected frame and the output tensor. The subject technology generates a prediction corresponding to a corrected texture rendering of the selected frame.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06T 3/00 - Geometric image transformations in the plane of the image
G06V 20/40 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING Scene-specific elements in video content
24.
WAVEGUIDE COMBINER ASSEMBLIES FOR AUGMENTED REALITY OR VIRTUAL REALITY DISPLAYS
A waveguide combiner assembly for an augmented reality or virtual reality display. A waveguide combiner has front and rear surfaces substantially parallel to a waveguide plane. A chassis includes a support arm for supporting the waveguide combiner and defining a cavity between front and rear walls of the support arm which are spaced along a first direction by a distance greater than the thickness of the waveguide combiner. A waveguide axis normal to the waveguide plane and the first direction define therebetween a zero or nonzero offset angle. An edge portion of the waveguide combiner extends into the cavity. At least one actively adjustable mounting structure is configured to hold a respective point of the edge portion of the waveguide combiner at a selected position relative to the support arm, thereby enabling adjustment of the offset angle.
A method includes determining participation in an interaction function by a first user of an interaction system with a second user of the interaction system. The method also includes accessing profile data of the first user, and determining, based on the profile data, whether the first user has captured or designated a first-user self-image for use in the interaction function. In response to determining that the first user has not captured or designated the first-user self-image, the method includes accessing a media content item that includes a character, identifying a head portion of the character in the media content item, replacing the head portion with a placeholder space, and displaying the media content item with the placeholder space in a user interface corresponding to the interaction function.
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06T 13/40 - 3D [Three-Dimensional] animation of characters, e.g. humans, animals or virtual beings
H04L 51/046 - Interoperability with other network applications or services
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, for supporting social networking services
A method for detecting changes in a scene includes accessing a first set of images and corresponding pose data in a first coordinate system associated with a first user session of an augmented reality (AR) device, and accessing a second set of images and corresponding pose data in a second coordinate system associated with a second user session. The method identifies images from the first set corresponding to a second image from the second set based on the pose data of the first set of images being spatially closest to the pose data of the second image after aligning the first coordinate system and the second coordinate system. A trained neural network generates a synthesized image from the first set of images. Features of the second image are subtracted from features of the synthesized image. Areas of change are identified based on the subtracted features.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointing devices using gyroscopes, accelerometers or tilt sensors
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/40 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING Scene-specific elements in video content
Systems, methods, and computer readable media for a messaging system with augmented reality (AR) makeup are presented. Methods include processing a first image to extract a makeup portion of the first image, the makeup portion representing the makeup from the first image, and training a neural network to process images of people to add AR makeup representing the makeup from the first image. The methods may further include receiving, via a messaging application implemented by one or more processors of a user device, input that indicates a selection to add the AR makeup to a second image of a second person. The methods may further include processing the second image with the neural network to add the AR makeup to the second image and causing the second image with the AR makeup to be displayed on a display device of the user device.
Systems, methods, and computer instructions are provided. The method includes retrieving a first set of media content transmitted by a plurality of interaction clients in chronological order, wherein the first set of media content has been saved as part of communications of ephemeral messages between at least two users of the plurality of interaction clients. The method further includes creating a visual representation of the first set of media content, and causing the visual representation of the first set of media content to be displayed on at least one of the plurality of interaction clients.
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, for supporting social networking services
29.
BIMANUAL GESTURES FOR CONTROLLING VIRTUAL AND GRAPHICAL ELEMENTS
Example systems, devices, media, and methods are described for controlling the presentation of one or more virtual or graphical elements on a display in response to bimanual hand gestures detected by an eyewear device that is capturing frames of video data with its camera system. An image processing system detects a first hand and defines an input plane relative to a surface of the detected first hand. The image processing system also detects a series of bimanual hand shapes, including the detected first hand and at least one fingertip of a second hand. In response, the system presents a first movable element on the display at a location that is correlated with the current fingertip location. In addition, the image processing system determines whether the detected series of bimanual hand shapes matches a predefined hand gesture. In response to a matching gesture, the system executes a selecting action of an element nearest the current fingertip location.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interacting with sliders or dials
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06V 20/40 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING Scene-specific elements in video content
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
Systems, methods, and computer instructions are provided. The method includes retrieving a first set of media content transmitted by a plurality of interaction clients in chronological order, wherein the first set of media content has been saved as part of communications of ephemeral messages between at least two users of the plurality of interaction clients. The method further includes creating a visual representation of the first set of media content, and causing the visual representation of the first set of media content to be displayed on at least one of the plurality of interaction clients.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range of values
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or by the nature of the input device, e.g. gestures as a function of the exerted pressure, using a touch screen or digitiser, e.g. input of commands through traced gestures
H04L 51/046 - Interoperability with other network applications or services
31.
DESIGNATING A SELF-IMAGE FOR USE IN AN INTERACTION FUNCTION
A method includes determining participation in an interaction function by a first user of an interaction system with a second user of the interaction system. The method also includes accessing profile data of the first user, and determining, based on the profile data, whether the first user has captured or designated a first-user self-image for use in the interaction function. In response to determining that the first user has not captured or designated the first-user self-image, the method includes accessing a media content item that includes a character, identifying a head portion of the character in the media content item, replacing the head portion with a placeholder space, and displaying the media content item with the placeholder space in a user interface corresponding to the interaction function.
A UAV having a GPS spoofing detector allowing the UAV to determine during flight whether the GPS is being spoofed by a third party. The UAV includes a 3-axis magnetometer that is utilized by a controller to determine whether the GPS data is correct, or whether it is incorrect and perhaps being spoofed. The controller compares GPS heading data with heading data provided by the 3-axis magnetometer and generates an alert if the headings are not correlated. This allows the controller to respond to the spoofing detection, such as by directing the UAV to return home.
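A sketch of the heading cross-check described in the abstract: compare the GPS-derived course with the magnetometer heading and alert when they disagree. The 30-degree tolerance and the return-home response are assumed values, not taken from the patent.

```python
# Illustrative GPS-vs-magnetometer heading comparison for spoofing detection.

def angle_diff(a_deg: float, b_deg: float) -> float:
    """Smallest absolute difference between two headings, in degrees."""
    d = abs(a_deg - b_deg) % 360.0
    return min(d, 360.0 - d)

def check_spoofing(gps_heading: float, mag_heading: float,
                   tolerance_deg: float = 30.0) -> bool:
    """Return True if the headings are uncorrelated (possible GPS spoofing)."""
    return angle_diff(gps_heading, mag_heading) > tolerance_deg

if check_spoofing(gps_heading=270.0, mag_heading=85.0):
    print("ALERT: GPS heading disagrees with magnetometer; returning home")
```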
G01S 19/01 - Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Navigation Satellite System] or GALILEO
A survey distribution system receives a selection of a first subset of a user population. For example, an administrator of the system may select one or more user attributes of the users among the user population. In response, the survey distribution system identifies the first subset of users based on the selected attributes. In some example embodiments, the administrator of the system may additionally define a maximum or minimum number of users to be exposed to the content, as well as targeting parameters for the content, such as a period of time in which to distribute the content to the first subset of users, and location criteria, such that the content is distributed only to users located in specific areas.
Systems and methods for performing operations comprising: storing, by one or more processors of a server, an encrypted profile for a user; receiving encrypted information from a first application that is installed on a user device associated with the user; updating the encrypted profile based on the received encrypted information without the server decrypting the profile and the information; selecting a first advertisement from a plurality of advertisements based on the updated encrypted profile; and transmitting the first advertisement to the user device.
A neural network processor is designed to process sequential windows (W1, W2, ..., Wn) of a time-dependent signal. Each window contains multiple samples (ns) of the signal over a time interval (T), with each window shifted relative to the previous one by a time-step (ΔT) smaller than T. This time-step corresponds to a base shift amount (h) defined by h = [(ΔT/T)·ns]. The processor executes a neural network with multiple layers (L), each containing a plurality of neurons N(..., Y, ..., L) indexed in the time domain by (Y). For each sample window, it performs operations including computing a differential result signal for a neuron by referencing a neuron whose index value (Y) is determined by the base shift amount and an accumulated up/down-sampling factor (S).
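A worked example of the base shift amount h = [(ΔT/T)·ns]: with a 1 s window of 100 samples advanced in 10 ms steps, each new window is shifted by one sample relative to the previous one. Reading the bracket as floor rounding is an assumption about the notation in the abstract.

```python
# Worked example of the base shift amount from the abstract.

T = 1.0        # window length in seconds
ns = 100       # samples per window
dT = 0.01      # time-step between successive windows (dT < T)

h = int((dT / T) * ns)   # base shift amount in samples
print(h)  # 1
```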
G06N 3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
G06F 7/483 - Computations with numbers represented by a non-linear combination of denominational numbers, e.g. rational numbers, logarithmic number system or floating-point numbers
Methods and systems are disclosed for enhancing or modifying an image by a diffusion model. The methods and systems receive a first image depicting a real-world scene including a target object and receive input associated with adjusting a zoom level of the first image. The methods and systems, in response to receiving the input, modify the zoom level associated with the first image to generate a second image having a view of the target object that is different from a view of the target object in the first image. The methods and systems analyze the second image using a generative machine learning model to generate an artificial image that modifies portions of the second image to improve the view of the target object relative to the second image.
Described is a system for identifying content augmentations based on an interaction function initiated by a user by determining an initiation of an interaction function from a first user of an interaction system, processing data associated with the interaction function using a first machine learning model to generate a feature vector, and identifying at least one recommended content augmentation based on a comparison of the feature vector for the interaction function to a feature vector for the at least one recommended content augmentation. The system then displays the at least one recommended content augmentation to the first user with a corresponding selectable user interface element for individual recommended content augmentations.
G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
G06V 10/77 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/776 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation Performance evaluation
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING Scene-specific elements in augmented reality scenes
39.
GLOBAL CONFIGURATION INTERFACE FOR DEFAULT SELF-IMAGES
Provided are systems and methods for operating a messaging system. An example method includes receiving, by a computing device, a personalized video including at least a part of a self-image of a user associated with the computing device and at least a part of a stock video, where the personalized video is received from a further computing device, receiving, by the computing device, a user input including an indication of whether the user has authorized using the self-image in the personalized video, and, in response to the user input, sending, by the computing device, the indication of whether the user has authorized using the self-image in the personalized video to the further computing device.
An image is updated on a display device. Display driver logic receives image frame data and processes the image frame data to generate updated image data and compute a map. The map identifies one or more active areas of the display device for updating to display at least a portion of the updated image data. The one or more active areas of the display device are determined at least in part by analyzing whether or not there are any non-black pixels in the active areas.
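An illustrative numpy sketch of the active-area map: tile the frame, and mark a tile active only if it contains any non-black pixel. The tile size and the all-black test are assumptions consistent with the abstract's description, not details from it.

```python
# Hypothetical active-area map: flag tiles containing non-black pixels.

import numpy as np

TILE = 8  # tile edge in pixels (assumed)

def active_area_map(frame: np.ndarray) -> np.ndarray:
    """frame: (H, W) grayscale array; returns a boolean map of active tiles."""
    h, w = frame.shape
    tiles = frame[: h - h % TILE, : w - w % TILE].reshape(
        h // TILE, TILE, w // TILE, TILE)
    return tiles.any(axis=(1, 3))  # True where a tile has a non-black pixel

frame = np.zeros((32, 32), dtype=np.uint8)
frame[4, 20] = 255  # one lit pixel
print(active_area_map(frame).astype(int))
# Only the tile containing the lit pixel is flagged for update.
```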
A projector having a curved field lens and a displaced light modulating display. The system includes at least one light source configured to generate colored light beams, and a prism routing the light beams to the display. The curved field lens is coupled to a face of the prism, and the prism and curved field lens together decenter the light beam from the prism face and uniformly illuminate the display. A center of the display is displaced from the projection lens optical axis. The decentered light beam and the displaced display together generate a favorable shifted boresight of the created image. Dimensions of components of the projector are a function of a curvature of the curved field lens. The greater the curvature of the curved field lens, the smaller the dimensions of the components and the overall projector. The projector may be used in eyewear.
Devices and methods for dynamic power configuration (e.g., reduction) for thermal management (e.g., mitigation) in a wearable electronic device such as an eyewear device. The wearable electronic device monitors its temperature and, responsive to the temperature, configures the services it provides to operate in different modes for thermal mitigation (e.g., to prevent overheating). For example, based on temperature, the wearable electronic device adjusts sensors (e.g., turns cameras on or off, changes the sampling rate, or a combination thereof) and adjusts display components (e.g., the rate at which a graphics processing unit generates images and the visual display is updated). This enables the wearable electronic device to consume less power when temperatures are too high, providing thermal mitigation.
Eyewear having a frame, a hinge, and a hyperextendable temple. An extender is coupled to the hinge and the temple, and the extender extends with respect to the hinge, allowing hyperextension of the temple with respect to the frame. A cam is configured to leverage the temple away from the frame during hyperextension to reduce wear. A cosmetic trim may include a recess that receives a protrusion of the frame in the open position; the protrusion moves out of the recess during hyperextension, creating the cam action.
Methods and systems are disclosed for enhancing or modifying an image by a diffusion model. The methods and systems receive a first image depicting a real-world scene including a target object and receive input associated with adjusting a zoom level of the first image. The methods and systems, in response to receiving the input, modify the zoom level associated with the first image to generate a second image having a view of the target object that is different from a view of the target object in the first image. The methods and systems analyze the second image using a generative machine learning model to generate an artificial image that modifies portions of the second image to improve the view of the target object relative to the second image.
Described is a system for identifying content augmentations based on an interaction function initiated by a user by determining an initiation of an interaction function from a first user of an interaction system, processing data associated with the interaction function using a first machine learning model to generate a feature vector, and identifying at least one recommended content augmentation based on a comparison of the feature vector for the interaction function to a feature vector for the at least one recommended content augmentation. The system then displays the at least one recommended content augmentation to the first user with a corresponding selectable user interface element for individual recommended content augmentations.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING Scene-specific elements in augmented reality scenes
A content suggestion system to generate and cause display of a set of chat suggestions based on messages received at a client device. The content suggestion system is configured to display messages that include message content at a client device, and to identify content selected by a user of the client device to be included in a response to the messages received at the client device. The content suggestion system tracks and stores the number of times a particular pair of content items appears in succession in a chat context, and calculates a ranking of the content among a set of available content. When subsequent messages that include the content of the content pair are displayed at the client device, the content suggestion system retrieves and presents a set of content as suggestions, based on the corresponding ranks.
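A hedged sketch of the ranking step: count how often one content item follows another in chat history, then surface the most frequent followers of the incoming content as suggestions. The data structures and sticker names below are assumed for illustration; the abstract does not specify them.

```python
# Illustrative successive-pair counting for chat content suggestions.

from collections import Counter, defaultdict

follows: defaultdict[str, Counter] = defaultdict(Counter)

def record(history: list[str]) -> None:
    """Track successive content pairs observed in a chat."""
    for a, b in zip(history, history[1:]):
        follows[a][b] += 1

def suggest(incoming: str, k: int = 2) -> list[str]:
    """Top-k contents most often sent in reply to `incoming`."""
    return [c for c, _ in follows[incoming].most_common(k)]

record(["hi", "wave_sticker", "hi", "smile_sticker", "hi", "wave_sticker"])
print(suggest("hi"))  # ['wave_sticker', 'smile_sticker']
```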
H04L 51/04 - Real-time or near real-time messaging, e.g. instant messaging [IM]
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 16/958 - Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
47.
EYEWEAR BIDIRECTIONAL COMMUNICATION USING TIME GATING POWER TRANSFER
Eyewear that is configured to be wirelessly charged and to wirelessly communicate with a case charger having a battery, using time-gating power transfer. In one example, wireless charging and bidirectional communication of the eyewear can be performed using a unidirectional communication protocol, such as the Qi baseline power profile (BPP), which by itself supports only unidirectional communication. The eyewear has a processor configured to send data to the wireless power charger that instructs the wireless power charger to stop wireless charging for a time period that is correlated to a state of charge (SOC) of the wireless power charger battery. The wireless power charger resumes charging of the eyewear battery after the time period, and the eyewear determines the SOC of the wireless power charger battery to be a percentage that correlates to the time period.
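A sketch of the time-gating idea: the charger's state of charge is conveyed by how long charging is paused, so a value can flow back to the eyewear even over a unidirectional protocol such as Qi BPP. The linear 10-ms-per-percent coding below is an invented example, not the patent's actual mapping.

```python
# Hypothetical encoding of charger SOC as a charging-pause duration.

def pause_ms_for_soc(soc_percent: float) -> float:
    """Charger side: encode its battery SOC as a charging pause duration."""
    return soc_percent * 10.0  # e.g. 85% -> 850 ms pause (assumed scale)

def soc_from_pause(pause_ms: float) -> float:
    """Eyewear side: decode the charger SOC from the measured pause."""
    return pause_ms / 10.0

pause = pause_ms_for_soc(85.0)
print(soc_from_pause(pause))  # 85.0
```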
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G09G 3/00 - Control arrangements or circuits, of interest only to the display using visual indicators other than cathode-ray tubes
H02J 7/04 - Regulation of the charging current or voltage
H02J 50/10 - Circuit arrangements or systems for wireless supply or distribution of electric power using inductive coupling
H02J 50/80 - Circuit arrangements or systems for wireless supply or distribution of electric power involving the exchange of data, concerning the supply or distribution of electric power, between transmitting devices and receiving devices
H04B 5/72 - for local communication inside a device
H04B 5/79 - for data transfer in combination with power transfer
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors whose relative position is equal to or corresponds to the interocular distance
A system for hand tracking for an Augmented Reality (AR) system. The AR system uses a camera of the AR system to capture tracking video frame data of a hand of a user of the AR system. The AR system generates a skeletal model based on the tracking video frame data and determines a location of the hand of the user based on the skeletal model. The AR system causes a steerable camera of the AR system to focus on the hand of the user.
A first neural network is trained to generate a ground truth using a small set of example images that illustrate the goal ground-truth output images, which can be full-body images of people in an AR style. The first neural network is used to generate ground-truth output images from random input images. Example methods of the first neural network include determining poses in input images, changing values of pixels within areas of the input images, and inputting the poses, the areas of the changed input images, and a text prompt describing the input images into a neural network to generate output images. The methods further include determining losses between the output images and the input images and updating weights of the neural network based on the losses. A second neural network is then trained using the generated ground truth, and an application is generated that uses the second neural network.
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
G06T 11/60 - Edition de figures et de texte; Combinaison de figures ou de texte
G06T 19/20 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie Édition d'images tridimensionnelles [3D], p.ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
G06V 10/774 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant l’intégration et la réduction de données, p.ex. analyse en composantes principales [PCA] ou analyse en composantes indépendantes [ ICA] ou cartes auto-organisatrices [SOM]; Séparation aveugle de source méthodes de Bootstrap, p.ex. "bagging” ou “boosting”
G06V 40/10 - Corps d’êtres humains ou d’animaux, p.ex. occupants de véhicules automobiles ou piétons; Parties du corps, p.ex. mains
50.
SUPPORT ARM THERMAL STRUCTURE FOR EXTENDED REALITY GLASSES
A support arm assembly for a head-worn device includes a metal support arm configured to form a rear face, a bottom face, and a top face of an enclosure for a projector, thermally coupled to the projector to act as a heatsink, configured to structurally attach to a rear structural element of the head-worn device, and configured to structurally attach to an optical element holder of the head-worn device, such that the metal support arm forms a structural support joining the optical element holder to the rear structural element without placing mechanical load on the projector.
A technique for deriving a mashup is described. Given a collection of media content items for sequential playback, a subset of the media content items is selected for inclusion in a mashup based on selection criteria specified in a template associated with the story. The subset of media content items is then arranged in a mashup, which is prepended to the story. By condensing the story into a more digestible and captivating abbreviated version, the automatically generated mashup increases user engagement and encourages sharing. With optimized content selection criteria, the mashup includes only the best and most impactful moments, highlights, or key elements of the story. The shorter version grabs the viewer's attention, maintaining their interest and prompting them to share the condensed experience with others, enticing them to discover the full story.
G06Q 50/00 - Systèmes ou procédés spécialement adaptés à un secteur particulier d’activité économique, p.ex. aux services d’utilité publique ou au tourisme
G06F 3/0482 - Interaction avec des listes d’éléments sélectionnables, p.ex. des menus
A mobile device can implement a neural network-based style transfer scheme to modify an image in a first style to a second style. The style transfer scheme can be configured to detect an object in the image, apply an effect to the image, and blend the image using color space adjustments and blending schemes to generate a realistic result image. The style transfer scheme can further be configured to efficiently execute on the constrained device by removing operational layers based on resources available on the mobile device.
G06Q 50/00 - Systèmes ou procédés spécialement adaptés à un secteur particulier d’activité économique, p.ex. aux services d’utilité publique ou au tourisme
G06T 5/40 - Amélioration ou restauration d'image en utilisant des techniques d'histogrammes
G06T 5/92 - basée sur les propriétés globales des images
G06T 7/90 - Détermination de caractéristiques de couleur
G06V 10/774 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant l’intégration et la réduction de données, p.ex. analyse en composantes principales [PCA] ou analyse en composantes indépendantes [ ICA] ou cartes auto-organisatrices [SOM]; Séparation aveugle de source méthodes de Bootstrap, p.ex. "bagging” ou “boosting”
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
G06V 40/16 - Visages humains, p.ex. parties du visage, croquis ou expressions
Disclosed is a method of providing a proximity warning using a head-worn device. A distance is determined between the head-worn device and a relevant object, such as a cyclist, jogger or vehicle, using image-processing techniques. A speed of the head-worn device is determined using a GPS receiver or other position components located in the head-worn device or an associated user device. A braking distance for the head-worn device is determined based on the speed of the head-worn device, and compared to the distance to the relevant object. A warning notification is provided by the head-worn device if the distance to the relevant object is less than the braking distance.
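The warning decision reduces to a short kinematic check. A minimal sketch, with an assumed deceleration and reaction time (neither value appears in the source):

def braking_distance_m(speed_mps: float,
                       deceleration_mps2: float = 3.0,  # assumed braking rate
                       reaction_time_s: float = 1.0) -> float:
    """Reaction distance plus kinematic stopping distance v**2 / (2a)."""
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2 * deceleration_mps2)

def should_warn(object_distance_m: float, speed_mps: float) -> bool:
    return object_distance_m < braking_distance_m(speed_mps)

# A cyclist 12 m ahead at 5 m/s: stopping needs about 9.2 m, so no warning yet.
print(should_warn(12.0, 5.0))  # False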
G06V 20/58 - Reconnaissance d’objets en mouvement ou d’obstacles, p.ex. véhicules ou piétons; Reconnaissance des objets de la circulation, p.ex. signalisation routière, feux de signalisation ou routes
A personalized preview system to receive a request to access a collection of media items from a user of a user device. Responsive to receiving the request to access the collection of media items, the personalized preview system accesses user profile data associated with the user, wherein the user profile data includes an image. For example, the image may comprise a depiction of a face, wherein the face comprises a set of facial landmarks. Based on the image, the personalized preview system generates one or more media previews based on corresponding media templates and the image, and displays the one or more media previews within a presentation of the collection of media items at a client device of the user.
G06F 16/535 - Filtrage basé sur des données supplémentaires, p.ex. sur des profils d'utilisateurs ou de groupes
G06F 16/538 - Présentation des résultats des requêtes
G06F 16/54 - Navigation; Visualisation à cet effet
G06F 16/58 - Recherche caractérisée par l’utilisation de métadonnées, p.ex. de métadonnées ne provenant pas du contenu ou de métadonnées générées manuellement
G06V 40/16 - Visages humains, p.ex. parties du visage, croquis ou expressions
Systems, devices, media, and methods are presented for gaze-based control of device operations. One method includes receiving a video stream from an imaging device, the video stream depicting one or more eyes, determining a gaze direction for the one or more eyes depicted in the video stream, detecting a change in the gaze direction of the one or more eyes, and triggering an operation in a client device based on the change in the gaze direction.
Example systems, devices, media, and methods are described for presenting a virtual experience using the display of an eyewear device in augmented reality. A content delivery application implements and controls the detecting of beacons broadcast from beacon transmitters deployed at fixed locations and determining the current eyewear location based on the detected beacons. The method includes retrieving content and presenting a virtual experience based on the retrieved content, the beacon data, and a user profile. The virtual experience includes playing audio messages, presenting text on the display, playing video segments on the display, and combinations thereof. In addition to wireless detection of beacons, the method includes scanning and decoding a beacon activation code positioned near the beacon transmitter to access a beacon.
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
G06F 1/16 - TRAITEMENT ÉLECTRIQUE DE DONNÉES NUMÉRIQUES - Détails non couverts par les groupes G06F 3/00-G06F 13/00 et G06F 21/00 - Détails ou dispositions de structure
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G06F 3/0354 - Dispositifs de pointage déplacés ou positionnés par l'utilisateur; Leurs accessoires avec détection des mouvements relatifs en deux dimensions [2D] entre le dispositif de pointage ou une partie agissante dudit dispositif, et un plan ou une surface, p.ex. souris 2D, boules traçantes, crayons ou palets
G06F 3/0482 - Interaction avec des listes d’éléments sélectionnables, p.ex. des menus
An interactive sticker system to perform operations that include: causing display of a presentation of media at a client device, the presentation of the media including a display of an icon within the presentation of the media; receiving an input that selects the icon from the client device, the input comprising an input attribute; generating a menu element based on the icon and the input attribute in response to the input that selects the icon; and presenting the menu element at a position within the presentation of the media at the client device.
Examples described herein relate to techniques for facilitating selection of stickers for inclusion in messages within the context of an interaction system. According to some examples, message content is detected and a set of candidate stickers is identified based on the message content. A search icon is dynamically replaced with a representation of respective ones of the set of candidate stickers. At a first point in time, the search icon represents a first candidate sticker of the set of candidate stickers. At a second point in time, the search icon represents a second candidate sticker of the set of candidate stickers.
G06F 3/04817 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] fondées sur des propriétés spécifiques de l’objet d’interaction affiché ou sur un environnement basé sur les métaphores, p.ex. interaction avec des éléments du bureau telles les fenêtres ou les icônes, ou avec l’aide d’un curseur changeant de comport utilisant des icônes
H04L 51/046 - Interopérabilité avec d'autres applications ou services réseau
H04L 51/52 - Messagerie d'utilisateur à utilisateur dans des réseaux à commutation de paquets, transmise selon des protocoles de stockage et de retransmission ou en temps réel, p.ex. courriel pour la prise en charge des services des réseaux sociaux
A stepped diffraction grating and method for manufacture thereof are disclosed. A plurality of parallel grating lines are each formed on a substrate surface by forming a plurality of stacked layers of optically transmissive material. In cross-section, each grating line has an upper layer having an upper surface having a first end and a second end; a bottom layer having a bottom surface abutting the substrate surface and an upper surface having a first end and a second end; a rising staircase portion extending at a rising staircase angle between 10 degrees and 60 degrees; and a falling staircase portion extending at a falling staircase angle between the rising staircase angle and 89 degrees.
A technique for deriving a mashup is described. Given a collection of media content items for sequential playback, a subset of the media content items is selected for inclusion in a mashup based on selection criteria specified in a template associated with the story. The subset of media content items is then arranged in a mashup, which is prepended to the story. By condensing the story into a more digestible and captivating abbreviated version, the automatically generated mashup increases user engagement and encourages sharing. With optimized content selection criteria, the mashup includes only the best and most impactful moments, highlights, or key elements of the story. The shorter version grabs the viewer's attention, maintaining their interest and prompting them to share the condensed experience with others, enticing them to discover the full story.
A vehicle identification system may perform operations that include: receiving a scan request that includes an image that comprises image data; identifying one or more vehicles within the image based on the image data, using computer vision and object recognition; generating bounding boxes based on the identified vehicles; cropping the image based on one or more of the bounding boxes; classifying a vehicle depicted within the cropped image; and presenting a notification that includes a display of the classification of the vehicle at a client device.
G06F 16/954 - Navigation, p.ex. en utilisant la navigation par catégories
G06F 16/955 - Recherche dans le Web utilisant des identifiants d’information, p.ex. des localisateurs uniformisés de ressources [uniform resource locators - URL]
G06F 18/214 - Génération de motifs d'entraînement; Procédés de Bootstrapping, p.ex. ”bagging” ou ”boosting”
An Extended Reality (XR) display system includes a Light Emitting Diode (LED) display driver, and a Light Emitting Diode (LED) near-eye display element operatively coupled to the LED display driver. The LED near-eye display element includes one or more motors and an LED array operably connected to the one or more motors. During operation, the LED display driver receives video data including a rendered virtual object of an XR experience and generates LED array control signals based on the video data, the LED array control signals causing one or more LEDs of the LED array to be energized in a sequence. The LED display driver also generates synchronized motor control signals and simultaneously communicates the LED array control signals to the LED array and the synchronized motor control signals to the one or more motors causing the LED near-eye display element to display the rendered virtual object.
G09G 3/32 - Dispositions ou circuits de commande présentant un intérêt uniquement pour l'affichage utilisant des moyens de visualisation autres que les tubes à rayons cathodiques pour la présentation d'un ensemble de plusieurs caractères, p.ex. d'une page, en composant l'ensemble par combinaison d'éléments individuels disposés en matrice utilisant des sources lumineuses commandées utilisant des panneaux électroluminescents semi-conducteurs, p.ex. utilisant des diodes électroluminescentes [LED]
A head-wearable extended reality device includes a display arrangement mounted to a frame. The display arrangement includes a first display layer, a second display layer, and a light source that is arranged to illuminate the first display layer and the second display layer. At least one of the first display layer or the second display layer is selectively displaceable relative to the frame. One or more processors are provided to control the display arrangement such that the light source is deactivated during displacement of the first display layer or the second display layer relative to the frame.
G09G 3/00 - Dispositions ou circuits de commande présentant un intérêt uniquement pour l'affichage utilisant des moyens de visualisation autres que les tubes à rayons cathodiques
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G09G 3/34 - Dispositions ou circuits de commande présentant un intérêt uniquement pour l'affichage utilisant des moyens de visualisation autres que les tubes à rayons cathodiques pour la présentation d'un ensemble de plusieurs caractères, p.ex. d'une page, en composant l'ensemble par combinaison d'éléments individuels disposés en matrice en commandant la lumière provenant d'une source indépendante
A user interface with a message composition area is presented at a user device. The message composition area includes message content presented in a base size. The user interface further includes a resizing graphical element presented at a base position within the user interface. A resizing gesture commences at the base position. While the resizing gesture is in progress, the resizing gesture is tracked and the message content is dynamically resized as positioning of the resizing gesture changes relative to the base position. Transmission of a message is caused when ending of the resizing gesture is detected at an adjusted position relative to the base position. The message includes the message content in an adjusted size relative to the base size.
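A minimal sketch of the gesture-to-size mapping, assuming the vertical offset from the base position scales the text linearly and that releasing the gesture transmits the message; the sensitivity constant and function names are illustrative.

BASE_SIZE_PT = 16.0
PT_PER_PIXEL = 0.05  # assumed sensitivity of the resizing gesture

def adjusted_size(base_y: float, current_y: float) -> float:
    """Track the gesture: dragging up from the base position grows the text."""
    return max(8.0, BASE_SIZE_PT + (base_y - current_y) * PT_PER_PIXEL)

def on_gesture_end(base_y: float, end_y: float, content: str) -> dict:
    """Ending the gesture transmits the message at the adjusted size."""
    return {"content": content, "size_pt": adjusted_size(base_y, end_y)}

print(on_gesture_end(base_y=400.0, end_y=200.0, content="hello"))  # 26 pt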
G06F 3/04845 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] pour la commande de fonctions ou d’opérations spécifiques, p.ex. sélection ou transformation d’un objet, d’une image ou d’un élément de texte affiché, détermination d’une valeur de paramètre ou sélection d’une plage de valeurs pour la transformation d’images, p.ex. glissement, rotation, agrandissement ou changement de couleur
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G06F 3/0488 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] utilisant des caractéristiques spécifiques fournies par le périphérique d’entrée, p.ex. des fonctions commandées par la rotation d’une souris à deux capteurs, ou par la nature du périphérique d’entrée, p.ex. des gestes en fonction de la pression exer utilisant un écran tactile ou une tablette numérique, p.ex. entrée de commandes par des tracés gestuels
65.
CURATED CONTEXTUAL OVERLAYS FOR AUGMENTED REALITY EXPERIENCES
Example systems, devices, media, and methods are described for curating and presenting a contextual overlay that includes graphical elements and virtual elements in an augmented reality experience. A contextual overlay application implements and controls the capturing of frames of video data within the field of view of a camera. An image processing system detects, in the captured frames of video data, one or more food items in the physical environment. Detecting food items may involve computer vision and machine-trained classification models. The method includes retrieving data associated with the detected food items, curating a contextual overlay based on the retrieved data and a configurable profile, and presenting the contextual overlay on the display.
G06F 3/04842 - Sélection des objets affichés ou des éléments de texte affichés
G06F 3/0488 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] utilisant des caractéristiques spécifiques fournies par le périphérique d’entrée, p.ex. des fonctions commandées par la rotation d’une souris à deux capteurs, ou par la nature du périphérique d’entrée, p.ex. des gestes en fonction de la pression exer utilisant un écran tactile ou une tablette numérique, p.ex. entrée de commandes par des tracés gestuels
Systems and methods for generating static and articulated 3D assets are provided that include a 3D autodecoder at their core. The 3D autodecoder framework embeds properties learned from the target dataset in the latent space, which can then be decoded into a volumetric representation for rendering view-consistent appearance and geometry. The appropriate intermediate volumetric latent space is then identified and robust normalization and de-normalization operations are implemented to learn a 3D diffusion from 2D images or monocular videos of rigid or articulated objects. The methods are flexible enough to use either existing camera supervision or no camera information at all—instead efficiently learning the camera information during training. The generated results are shown to outperform state-of-the-art alternatives on various benchmark datasets and metrics, including multi-view image datasets of synthetic objects, real in-the-wild videos of moving people, and a large-scale, real video dataset of static objects.
G06T 19/20 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie Édition d'images tridimensionnelles [3D], p.ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
Systems and methods for generating static and articulated 3D assets are provided that include a 3D autodecoder at their core. The 3D autodecoder framework embeds properties learned from the target dataset in the latent space, which can then be decoded into a volumetric representation for rendering view-consistent appearance and geometry. The appropriate intermediate volumetric latent space is then identified and robust normalization and de-normalization operations are implemented to learn a 3D diffusion from 2D images or monocular videos of rigid or articulated objects. The methods are flexible enough to use either existing camera supervision or no camera information at all – instead efficiently learning the camera information during training. The generated results are shown to outperform state-of-the-art alternatives on various benchmark datasets and metrics, including multi-view image datasets of synthetic objects, real in-the-wild videos of moving people, and a large-scale, real video dataset of static objects.
A chatbot system for an interactive platform is disclosed. The chatbot system retrieves a conversation history of one or more conversations between a user and a chatbot from a conversation history datastore and generates one or more summarized memories using the conversation history. One or more moderated memories are generated using the summarized memories. The moderated memories are stored in a memories datastore. A user prompt is received, and a current conversation context is generated from a current conversation between the user and the chatbot. One or more memories are retrieved from the memories datastore using the current conversation context. An augmented prompt is generated using the user prompt and the one or more memories and is communicated to a generative AI model. A response to the augmented prompt is received from the generative AI model and provided to the user.
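A hedged sketch of the memory pipeline: summarize past conversations into moderated memories, retrieve the ones relevant to the current context, and prepend them to the user prompt. The summarizer, moderation pass, and retrieval heuristic below are stand-ins, not the disclosed components.

from typing import List

memories_datastore: List[str] = []

def summarize(history: List[str]) -> str:   # stand-in summarizer
    return "summary: " + " | ".join(history[-2:])

def moderate(memory: str) -> str:           # stand-in moderation pass
    return memory.replace("secret", "[redacted]")

def retrieve(context: str, k: int = 3) -> List[str]:
    """Naive relevance: keep memories sharing any word with the context."""
    words = set(context.lower().split())
    return [m for m in memories_datastore if words & set(m.lower().split())][:k]

def augmented_prompt(user_prompt: str, context: str) -> str:
    return "\n".join(retrieve(context) + [user_prompt])

memories_datastore.append(moderate(summarize(["I like hiking", "and camping"])))
print(augmented_prompt("Suggest a weekend trip", context="hiking plans"))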
G06F 40/35 - Représentation du discours ou du dialogue
H04L 51/02 - Messagerie d'utilisateur à utilisateur dans des réseaux à commutation de paquets, transmise selon des protocoles de stockage et de retransmission ou en temps réel, p.ex. courriel en utilisant des réactions automatiques ou la délégation par l’utilisateur, p.ex. des réponses automatiques ou des messages générés par un agent conversationnel
H04L 51/216 - Gestion de l'historique des conversations, p.ex. regroupement de messages dans des sessions ou des fils de conversation
A hydrofoil board having a hydrofoil and individually controllable flaps configured to stabilize the board in a level position even when incurring waves. The flaps are spaced from the hydrofoil to form a gap and direct fluid flowing under the hydrofoil through the gap and over the flaps. The flaps control the pitch and direction of the hydrofoil board when it is propelled in motion. A processor uses an inertial measurement unit (IMU) to obtain orientation and acceleration information of the hydrofoil board. A global positioning system (GPS) unit is also used as an additional speed and location sensor. The processor combines the IMU data with a user/rider's input, such as a speed and direction selected via a handheld wireless controller, individually controls the flap motors to position the flaps, and controls the propulsion motor to set speed. In one example, the controller is configured to bring the hydrofoil board to a complete and stable stop.
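A simplified control sketch: a proportional controller that combines IMU pitch and roll readings with the rider's setpoint and positions a left/right flap pair individually. The gains and flap layout are assumptions for illustration, not the disclosed control law.

KP_PITCH, KP_ROLL = 2.0, 1.5  # assumed proportional gains

def flap_commands(pitch_deg: float, roll_deg: float,
                  pitch_setpoint_deg: float = 0.0) -> dict:
    """Return flap deflections (degrees) for a left/right flap pair."""
    pitch_err = pitch_setpoint_deg - pitch_deg
    base = KP_PITCH * pitch_err        # both flaps correct pitch together
    differential = KP_ROLL * roll_deg  # opposite deflections level the roll
    return {"left_flap": base + differential, "right_flap": base - differential}

# Nose 3 degrees down and rolling 2 degrees: both flaps deflect up,
# asymmetrically, to pitch the nose up and level the board.
print(flap_commands(pitch_deg=-3.0, roll_deg=2.0))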
B63B 1/28 - Caractéristiques hydrodynamiques ou hydrostatiques des coques ou des ailes portantes tirant une portance supplémentaire des forces hydrodynamiques du type ailes portantes à ailes portantes réglables
B63B 1/24 - Caractéristiques hydrodynamiques ou hydrostatiques des coques ou des ailes portantes tirant une portance supplémentaire des forces hydrodynamiques du type ailes portantes
B63B 34/40 - Structures de soutien du corps supportées par des foils sous l'eau
B63B 79/40 - Surveillance des caractéristiques ou des paramètres de fonctionnement des navires en opération pour le suivi des operations des navires, p.ex. le suivi de leur vitesse, de leur itinéraire ou de leur calendrier d’entretien
Systems, methods, and computer readable media are described for remotely changing settings on augmented reality (AR) wearable devices. Embodiments are disclosed that enable a user to change settings of an AR wearable device on a user interface (UI) provided by a host client device that can communicate wirelessly with the AR wearable device. The host client device and AR wearable device provide remote procedure calls (RPCs) and an application program interface (API) to access settings and determine if settings have been changed. The API enables the host client device to determine the settings on the AR wearable device without any prior knowledge of the settings on the AR wearable device. The RPCs and the API enable the host client device to automatically update the settings on the AR wearable device when the user changes the settings on the host client device.
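An illustrative sketch of that handshake: the host queries the wearable's settings through an API without prior knowledge of the keys, then pushes only the values the user changed. The RPC layer is mocked with a module-level dict; all names are hypothetical.

ar_device_settings = {"brightness": 7, "volume": 4}  # lives on the wearable

def rpc_get_settings() -> dict:
    """Host-side RPC: discover the device's current settings and keys."""
    return dict(ar_device_settings)

def rpc_set_setting(key: str, value) -> None:
    """Host-side RPC: remotely update one setting on the wearable."""
    ar_device_settings[key] = value

def sync_from_host(host_ui_settings: dict) -> None:
    """Push every setting the user changed on the host UI to the device."""
    device = rpc_get_settings()
    for key, value in host_ui_settings.items():
        if device.get(key) != value:
            rpc_set_setting(key, value)

sync_from_host({"brightness": 9, "volume": 4})
print(ar_device_settings)  # {'brightness': 9, 'volume': 4}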
A waveguide has an output diffractive optical element to couple light out of the waveguide towards a viewer, and a returning diffractive optical element to receive light from the output diffractive optical element and return the received light toward the output diffractive optical element. The output diffractive optical element has overlaid first and second output diffractive optical elements. The first output diffractive optical element receives light from an input direction and couples it toward the second output diffractive optical element in a first direction that is oblique to the input direction. The second output diffractive optical element receives light from the input direction and couples it towards the first output diffractive optical element in a second direction that is oblique to the input direction. The returning diffractive optical element has first and second returning diffractive optical elements that return light opposite the first and second directions, respectively.
G02B 1/00 - OPTIQUE ÉLÉMENTS, SYSTÈMES OU APPAREILS OPTIQUES Éléments optiques caractérisés par la substance dont ils sont faits; Revêtements optiques pour éléments optiques
A stepped diffraction grating and method for manufacture thereof are disclosed. A plurality of parallel grating lines are each formed on a substrate surface by forming a plurality of stacked layers of optically transmissive material. In cross-section, each grating line has an upper layer having an upper surface having a first end and a second end; a bottom layer having a bottom surface abutting the substrate surface and an upper surface having a first end and a second end; a rising staircase portion extending at a rising staircase angle between 10 degrees and 60 degrees; and a falling staircase portion extending at a falling staircase angle between the rising staircase angle and 89 degrees.
A diffraction grating for use as an output element of a diffractive waveguide combiner for an augmented reality or virtual reality display. First and second periodic arrays of optical structures are arranged on a plane according to a common unit cell which is oblique, first and second periods, respectively, of the diffraction grating being defined by a spacing between neighbouring optical structures of one of the first and second periodic arrays along a first side and second side, respectively, of the common unit cell. The first periodic array of optical structures is overlaid on the second periodic array of optical structures in the plane such that the arrays are spatially offset from one another on the plane.
Methods and systems are disclosed for applying machine learning models to compressed videos. The system receives a video, depicting an object, that has previously been compressed using one or more video compression processes. The system analyzes, using one or more machine learning models, the video that has previously been compressed to generate a prediction corresponding to the object depicted in the video, with one or more artifacts resulting from application of the one or more machine learning models to the video that has been previously compressed being absent from the prediction. The system generates a visual output based on the prediction in which the one or more artifacts are absent.
G06T 11/60 - Edition de figures et de texte; Combinaison de figures ou de texte
H04N 19/86 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le pré-traitement ou le post-traitement spécialement adaptés pour la compression vidéo mettant en œuvre la diminution des artéfacts de codage, p.ex. d'artéfacts de blocs
75.
GENERATIVE NEURAL NETWORKS FOR STYLIZING MEDIA CONTENT
A mobile application with an improved user interface facilitates generating stylized media content items including images and videos. An end-user selects a desired visual effect from a set of options. The mobile application captures or accesses an image. The image is processed on a server using a generative neural network pre-trained to apply stylizations based on the selected effect. The server sends back the stylized image to the mobile application for display. The end-user can then save the stylized image or generate a video (e.g., an animation) showing the original image transitioning to the stylized image. The user interface provides an efficient creative workflow to apply aesthetic enhancements in a visual style chosen by the end-user. Generative machine learning techniques automate stylization to enable accessible media customization and sharing.
G06F 3/04845 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] pour la commande de fonctions ou d’opérations spécifiques, p.ex. sélection ou transformation d’un objet, d’une image ou d’un élément de texte affiché, détermination d’une valeur de paramètre ou sélection d’une plage de valeurs pour la transformation d’images, p.ex. glissement, rotation, agrandissement ou changement de couleur
A method for manufacturing a liquid crystal (LC) display includes determining an amount of an LC material to be used in the LC display, determining a silane material to be mixed with the LC material and an amount of the silane material to be mixed with the LC material based on the LC material and the amount of the LC material, mixing the amount of the silane material with the amount of the LC material to generate an LC mixture, and heat treating the LC mixture in contact with a display substrate to bond at least a portion of the silane material to one or more surfaces of the display substrate, such that the silane material acts as a surfactant. The amount of the silane material may constitute at least 0.8% of the LC mixture by weight.
G02F 1/1337 - Orientation des molécules des cristaux liquides induite par les caractéristiques de surface, p.ex. par des couches d'alignement
C09K 19/02 - Substances formant des cristaux liquides caractérisées par les propriétés optiques, électriques ou physiques des constituants, en général
G02F 1/137 - Dispositifs ou dispositions pour la commande de l'intensité, de la couleur, de la phase, de la polarisation ou de la direction de la lumière arrivant d'une source lumineuse indépendante, p.ex. commutation, ouverture de porte ou modulation; Optique non linéaire pour la commande de l'intensité, de la phase, de la polarisation ou de la couleur basés sur des cristaux liquides, p.ex. cellules d'affichage individuelles à cristaux liquides caractérisés par l'effet électro-optique ou magnéto-optique, p.ex. transition de phase induite par un champ, effet d'orientation, interaction entre milieu récepteur et matière additive ou diffusion dynamique
G02F 1/139 - Dispositifs ou dispositions pour la commande de l'intensité, de la couleur, de la phase, de la polarisation ou de la direction de la lumière arrivant d'une source lumineuse indépendante, p.ex. commutation, ouverture de porte ou modulation; Optique non linéaire pour la commande de l'intensité, de la phase, de la polarisation ou de la couleur basés sur des cristaux liquides, p.ex. cellules d'affichage individuelles à cristaux liquides caractérisés par l'effet électro-optique ou magnéto-optique, p.ex. transition de phase induite par un champ, effet d'orientation, interaction entre milieu récepteur et matière additive ou diffusion dynamique basés sur des effets d'orientation où les cristaux liquides restent transparents
A chatbot system for an interactive platform is disclosed. The chatbot system retrieves a conversation history of one or more conversations between a user and a chatbot from a conversation history datastore and generates one or more summarized memories using the conversation history. One or more moderated memories are generated using the summarized memories. The moderated memories are stored in a memories datastore. A user prompt is received, and a current conversation context is generated from a current conversation between the user and the chatbot. One or more memories are retrieved from the memories datastore using the current conversation context. An augmented prompt is generated using the user prompt and the one or more memories and is communicated to a generative AI model. A response to the augmented prompt is received from the generative AI model and provided to the user.
H04L 51/02 - Messagerie d'utilisateur à utilisateur dans des réseaux à commutation de paquets, transmise selon des protocoles de stockage et de retransmission ou en temps réel, p.ex. courriel en utilisant des réactions automatiques ou la délégation par l’utilisateur, p.ex. des réponses automatiques ou des messages générés par un agent conversationnel
G06F 16/34 - Navigation; Visualisation à cet effet
H04L 51/216 - Gestion de l'historique des conversations, p.ex. regroupement de messages dans des sessions ou des fils de conversation
78.
ADDING GRAPHICAL REPRESENTATION OF REAL-WORLD OBJECT
Methods and systems are disclosed for modifying an image. For example, a messaging application implemented on a client device displays an image comprising a real-world object and determines a current location of the client device. The messaging application identifies a venue associated with the current location of the client device and obtains a list of items available for purchase at the venue. The messaging application receives input that selects a given item from the list of items that corresponds to the real-world object. The messaging application adds, to the image, a graphical representation of the given item that corresponds to the real-world object depicted in the image.
G06Q 50/00 - Systèmes ou procédés spécialement adaptés à un secteur particulier d’activité économique, p.ex. aux services d’utilité publique ou au tourisme
G06V 10/75 - Appariement de motifs d’image ou de vidéo; Mesures de proximité dans les espaces de caractéristiques utilisant l’analyse de contexte; Sélection des dictionnaires
G06V 20/20 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène dans les scènes de réalité augmentée
A laser scanning projection system comprises a laser source configured to emit light towards a pair of polarizing beam splitters. The polarizing beam splitters direct the light to reflect an odd number of times from a plurality of mirrors through one or more quarter waveplates, and a laser scanner comprising a scanning mirror directs the light across an angular field of view, forming an exit pupil.
An optical device for use in an augmented reality or virtual reality display, comprising: a planar waveguide; an input diffractive optical element, DOE, configured to receive light from a projector and couple the light into the waveguide; an output DOE configured to receive light from the input diffractive optical element in an input direction, expand the light in two dimensions and couple the light out of the waveguide towards a user, the output DOE comprising a first diffractive region configured to diffract light within the waveguide, the first diffractive region having a first direction of periodicity and a second direction of periodicity, wherein an angle between the input direction and the first direction of periodicity is equal and opposite to an angle between the input direction and the second direction of periodicity, wherein the first diffractive region comprises an array of optical structures, wherein each optical structure is oriented in a third direction that is non-parallel to the first and second directions of periodicity, and is configured to couple light out of the waveguide towards the user.
A method for manufacturing a liquid crystal (LC) display includes determining an amount of an LC material to be used in the LC display, determining a silane material to be mixed with the LC material and an amount of the silane material to be mixed with the LC material based on the LC material and the amount of the LC material, mixing the amount of the silane material with the amount of the LC material to generate an LC mixture, and heat treating the LC mixture in contact with a display substrate to bond at least a portion of the silane material to one or more surfaces of the display substrate, such that the silane material acts as a surfactant. The amount of the silane material may constitute at least 0.8% of the LC mixture by weight.
Described is a system for performing a set of machine learning model training operations that include: accessing media content items associated with interaction functions initiated by users of an interaction system; generating training data including labels for the media content items; extracting features from a media content item of the media content items; identifying additional media content items to include in the training data based on the extracted features from the media content item; processing the training data using a machine learning model to generate a media content item output; and updating one or more parameters of the machine learning model based on the media content item output. The system checks whether retraining criteria have been met, and repeats the set of machine learning model training operations to retrain the machine learning model.
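A hedged sketch of the retrain loop around the feature-based expansion of training data; the feature extractor, toy training step, and retraining criterion below are stand-ins chosen only to make the control flow concrete.

def extract_features(item: str) -> set:
    return set(item.split())   # stand-in feature extractor

def similar_items(seed: str, pool: list) -> list:
    """Expand training data with items sharing features with the seed."""
    seed_features = extract_features(seed)
    return [it for it in pool if extract_features(it) & seed_features]

def train_once(training_data: list) -> float:
    return 1.0 / (1 + len(training_data))  # toy loss that shrinks with data

def should_retrain(loss: float, threshold: float = 0.2) -> bool:
    return loss > threshold                # stand-in retraining criterion

pool = ["beach sunset video", "city sunset photo", "cat photo"]
data = ["beach sunset video"] + similar_items("beach sunset video", pool)
loss = train_once(data)
while should_retrain(loss):                # repeat the training operations
    data += similar_items(data[-1], pool)
    loss = train_once(data)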
H04N 21/25 - Opérations de gestion réalisées par le serveur pour faciliter la distribution de contenu ou administrer des données liées aux utilisateurs finaux ou aux dispositifs clients, p.ex. authentification des utilisateurs finaux ou des dispositifs clients ou
G06V 10/774 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant l’intégration et la réduction de données, p.ex. analyse en composantes principales [PCA] ou analyse en composantes indépendantes [ ICA] ou cartes auto-organisatrices [SOM]; Séparation aveugle de source méthodes de Bootstrap, p.ex. "bagging” ou “boosting”
G06V 20/40 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène dans le contenu vidéo
H04N 21/234 - Traitement de flux vidéo élémentaires, p.ex. raccordement de flux vidéo ou transformation de graphes de scènes MPEG-4
83.
CUSTOMIZING A CAPTURE BUTTON USED DURING VIDEO RECORDING
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for customizing a capture button used during video recording. The program and method provide for determining that a user of an application has access to exclusive features within the application; customizing a capture button for replacing display of a shutter button during video recording; displaying a first user interface for user selection of the capture button from among plural available capture buttons; receiving user input selecting the capture button from among the plural available capture buttons; displaying a second user interface for presenting real-time image data captured by a camera, the second user interface including the shutter button, which is user-selectable to initiate video recording in response to second user input; and replacing, upon detecting the second user input, display of the shutter button with the selected capture button.
H04N 23/63 - Commande des caméras ou des modules de caméras en utilisant des viseurs électroniques
G06F 3/04817 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] fondées sur des propriétés spécifiques de l’objet d’interaction affiché ou sur un environnement basé sur les métaphores, p.ex. interaction avec des éléments du bureau telles les fenêtres ou les icônes, ou avec l’aide d’un curseur changeant de comport utilisant des icônes
G06F 3/0488 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] utilisant des caractéristiques spécifiques fournies par le périphérique d’entrée, p.ex. des fonctions commandées par la rotation d’une souris à deux capteurs, ou par la nature du périphérique d’entrée, p.ex. des gestes en fonction de la pression exer utilisant un écran tactile ou une tablette numérique, p.ex. entrée de commandes par des tracés gestuels
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
Methods and systems are disclosed for performing operations for applying augmented reality elements to a person depicted in an image. The operations include receiving an image that includes data representing a depiction of a person; generating a segmentation of the data representing the person depicted in the image; extracting a portion of the image corresponding to the segmentation of the data representing the person depicted in the image; applying a machine learning model to the portion of the image to predict a surface normal tensor for the data representing the depiction of the person, the surface normal tensor representing surface normals of each pixel within the portion of the image; and applying one or more augmented reality (AR) elements to the image based on the surface normal tensor.
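Once per-pixel normals are available, AR elements can follow the body's surface with, for example, simple Lambertian shading. A minimal sketch with illustrative normals and light direction (the abstract does not commit to any particular shading model):

import numpy as np

def lambert_shading(normals: np.ndarray, light_dir) -> np.ndarray:
    """normals: (H, W, 3) unit vectors; returns (H, W) brightness in [0, 1]."""
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    return np.clip(normals @ light, 0.0, 1.0)

# A 2x2 "person crop": two pixels facing the camera, two angled away.
normals = np.array([[[0, 0, 1], [0, 0, 1]],
                    [[0.707, 0, 0.707], [0, 0.707, 0.707]]], dtype=float)
print(lambert_shading(normals, light_dir=(0, 0, 1)))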
G06T 19/20 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie Édition d'images tridimensionnelles [3D], p.ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
85.
INTEGRATING AUGMENTED REALITY INTO THE WEB VIEW PLATFORM
A methodology is described that provides access to an augmented reality (AR) component maintained by a messaging server system directly from a web view application. When a user activates, from a web view application executing in the messaging client, a user selectable element that references an AR component, a web view AR system obtains the identification of the AR component, performs validation of the identification and of any additional launch data, and launches a camera view user interface (UI) with the AR component loaded in the camera view UI. Content captured from the camera view UI can be shared to other computing devices.
G06F 3/0488 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] utilisant des caractéristiques spécifiques fournies par le périphérique d’entrée, p.ex. des fonctions commandées par la rotation d’une souris à deux capteurs, ou par la nature du périphérique d’entrée, p.ex. des gestes en fonction de la pression exer utilisant un écran tactile ou une tablette numérique, p.ex. entrée de commandes par des tracés gestuels
A map-based graphical user interface (GUI) for a public messaging platform allows a user location-based access to their own expired ephemeral content. Such expired content is no longer available to other users for online viewing. The user can, however, switch the GUI between a live mode and a historical mode, access to their own expired content in the historical mode being facilitated in a manner closely similar to that for viewing live publicly available content.
G06F 3/04817 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] fondées sur des propriétés spécifiques de l’objet d’interaction affiché ou sur un environnement basé sur les métaphores, p.ex. interaction avec des éléments du bureau telles les fenêtres ou les icônes, ou avec l’aide d’un curseur changeant de comport utilisant des icônes
G06F 3/0482 - Interaction avec des listes d’éléments sélectionnables, p.ex. des menus
G06F 3/04842 - Sélection des objets affichés ou des éléments de texte affichés
G06F 3/0488 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] utilisant des caractéristiques spécifiques fournies par le périphérique d’entrée, p.ex. des fonctions commandées par la rotation d’une souris à deux capteurs, ou par la nature du périphérique d’entrée, p.ex. des gestes en fonction de la pression exer utilisant un écran tactile ou une tablette numérique, p.ex. entrée de commandes par des tracés gestuels
G06F 16/248 - Présentation des résultats de requêtes
G06F 16/29 - Bases de données d’informations géographiques
G06F 16/487 - Recherche caractérisée par l’utilisation de métadonnées, p.ex. de métadonnées ne provenant pas du contenu ou de métadonnées générées manuellement utilisant des informations géographiques ou spatiales, p.ex. la localisation
G06F 16/9535 - Adaptation de la recherche basée sur les profils des utilisateurs et la personnalisation
G06F 16/9537 - Recherche à dépendance spatiale ou temporelle, p.ex. requêtes spatio-temporelles
G06Q 50/00 - Systèmes ou procédés spécialement adaptés à un secteur particulier d’activité économique, p.ex. aux services d’utilité publique ou au tourisme
G06T 11/20 - Traçage à partir d'éléments de base, p.ex. de lignes ou de cercles
G06T 11/60 - Edition de figures et de texte; Combinaison de figures ou de texte
H04L 41/22 - Dispositions pour la maintenance, l’administration ou la gestion des réseaux de commutation de données, p.ex. des réseaux de commutation de paquets comprenant des interfaces utilisateur graphiques spécialement adaptées [GUI]
H04L 41/28 - Restriction de l’accès aux systèmes ou aux fonctions de gestion de réseau, p.ex. en utilisant la fonction d’autorisation pour accéder à la configuration du réseau
H04L 51/52 - Messagerie d'utilisateur à utilisateur dans des réseaux à commutation de paquets, transmise selon des protocoles de stockage et de retransmission ou en temps réel, p.ex. courriel pour la prise en charge des services des réseaux sociaux
H04L 67/12 - Protocoles spécialement adaptés aux environnements propriétaires ou de mise en réseau pour un usage spécial, p.ex. les réseaux médicaux, les réseaux de capteurs, les réseaux dans les véhicules ou les réseaux de mesure à distance
H04L 67/52 - Services réseau spécialement adaptés à l'emplacement du terminal utilisateur
H04W 4/02 - Services utilisant des informations de localisation
H04W 4/029 - Services de gestion ou de suivi basés sur la localisation
H04W 4/18 - Conversion de format ou de contenu d'informations, p.ex. adaptation, par le réseau, des informations reçues ou transmises pour une distribution sans fil aux utilisateurs ou aux terminaux
H04W 4/21 - Signalisation de services; Signalisation de données auxiliaires, c. à d. transmission de données par un canal non destiné au trafic pour applications de réseaux sociaux
H04W 12/02 - Protection de la confidentialité ou de l'anonymat, p.ex. protection des informations personnellement identifiables [PII]
A redundant tracking system comprising multiple redundant tracking sub-systems, enabling seamless transitions between them, merges multiple tracking approaches into a single tracking system. This system can track objects with six degrees of freedom (6DoF) or three degrees of freedom (3DoF) by combining and transitioning between the tracking sub-systems based on the availability of the tracking indicia each sub-system relies on. Thus, as the indicia tracked by any one sub-system become unavailable, the redundant tracking system seamlessly switches between 6DoF and 3DoF tracking, providing the user with an uninterrupted experience.
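A minimal sketch of the fallback logic: prefer a 6DoF pose from the visual tracker while its indicia are available, otherwise fall back to 3DoF orientation from an IMU so the experience never interrupts. The trackers are simulated stubs.

from typing import Optional, Tuple

Pose6DoF = Tuple[float, float, float, float, float, float]  # x, y, z, roll, pitch, yaw
Pose3DoF = Tuple[float, float, float]                       # roll, pitch, yaw

def visual_pose() -> Optional[Pose6DoF]:
    """Returns None when the tracked indicia are lost."""
    return None  # simulate losing visual tracking

def imu_orientation() -> Pose3DoF:
    return (0.0, 1.5, 90.0)

def tracked_pose(last_position=(0.0, 0.0, 0.0)):
    pose = visual_pose()
    if pose is not None:
        return pose  # full 6DoF while indicia are available
    # Seamless degradation: keep the last known position, update orientation only.
    return last_position + imu_orientation()

print(tracked_pose())  # (0.0, 0.0, 0.0, 0.0, 1.5, 90.0)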
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
A63F 13/211 - Dispositions d'entrée pour les dispositifs de jeu vidéo caractérisées par leurs capteurs, leurs finalités ou leurs types utilisant des capteurs d’inertie, p.ex. des accéléromètres ou des gyroscopes
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G06F 3/03 - Dispositions pour convertir sous forme codée la position ou le déplacement d'un élément
G06F 3/0346 - Dispositifs de pointage déplacés ou positionnés par l'utilisateur; Leurs accessoires avec détection de l’orientation ou du mouvement libre du dispositif dans un espace en trois dimensions [3D], p.ex. souris 3D, dispositifs de pointage à six degrés de liberté [6-DOF] utilisant des capteurs gyroscopiques, accéléromètres ou d’inclinaiso
G06F 3/038 - Dispositions de commande et d'interface à cet effet, p.ex. circuits d'attaque ou circuits de contrôle incorporés dans le dispositif
G06F 11/08 - Détection ou correction d'erreur par introduction de redondance dans la représentation des données, p.ex. en utilisant des codes de contrôle
G06T 7/246 - Analyse du mouvement utilisant des procédés basés sur les caractéristiques, p.ex. le suivi des coins ou des segments
A passive flash system for illuminating images being captured on a user device while maintaining a preview of the content being captured. The passive flash system can display a portion of the screen as an elevated brightness element that is brighter than the content being captured. The elevated brightness element can surround or overlap the content being captured to passively increase the lighting of the imaged environment.
G06T 5/73 - Élimination des flous; Accentuation de la netteté
G06T 7/90 - Détermination de caractéristiques de couleur
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p.ex. des objets vidéo
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
G06V 40/16 - Visages humains, p.ex. parties du visage, croquis ou expressions
Methods and systems are disclosed for applying machine learning models to compressed videos. The system receives a video, depicting an object, that has previously been compressed using one or more video compression processes. The system analyzes, using one or more machine learning models, the video that has previously been compressed to generate a prediction corresponding to the object depicted in the video, with one or more artifacts resulting from application of the one or more machine learning models to the video that has been previously compressed being absent from the prediction. The system generates a visual output based on the prediction in which the one or more artifacts are absent.
G06V 20/40 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène dans le contenu vidéo
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
Described is a system for improving machine learning models. In some cases, the system improves such models by identifying a performance characteristic for machine learning model blocks in an iterative denoising process of a machine learning model, connecting a prior machine learning model block with a subsequent machine learning model block of the machine learning model blocks within the machine learning model based on the identified performance characteristic, identifying a prompt of a user, the prompt indicative of an intent of the user for generative images, and analyzing data corresponding to the prompt using the machine learning model to generate one or more images, the machine learning model trained to generate images based on data corresponding to prompts.
Described is a system for improving machine learning models by accessing a first latent diffusion machine learning model, the first latent diffusion machine learning model trained to perform a first number of denoising steps, accessing a second latent diffusion machine learning model that was derived from the first latent diffusion machine learning model, the second latent diffusion machine learning model trained to perform a second number of denoising steps, generating noise data, processing the noise data via the first latent diffusion machine learning model to generate one or more first images, processing the noise data via the second latent diffusion machine learning model to generate one or more second images, and modifying a parameter of the second latent diffusion machine learning model based on a comparison of the one or more first images with the one or more second images.
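A hedged PyTorch sketch of that comparison: the same noise is run through a many-step first model and a few-step second model, and the second model's parameters are nudged toward the first model's output. Both denoisers are toy networks, and the update rule is illustrative.

import torch
import torch.nn.functional as F

first_model = torch.nn.Conv2d(3, 3, 3, padding=1)   # trained for many steps
second_model = torch.nn.Conv2d(3, 3, 3, padding=1)  # derived, fewer steps
opt = torch.optim.Adam(second_model.parameters(), lr=1e-4)

def denoise(model, x, steps):
    for _ in range(steps):
        x = x - 0.1 * model(x)  # toy iterative denoising update
    return x

noise = torch.randn(2, 3, 32, 32)  # generated noise data
with torch.no_grad():
    first_images = denoise(first_model, noise, steps=16)
second_images = denoise(second_model, noise, steps=4)

loss = F.mse_loss(second_images, first_images)  # compare the two image sets
opt.zero_grad()
loss.backward()
opt.step()  # modify the second model's parameters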
A video montage is assembled by one or more processors by selecting a number of media items for use in the video montage from a collection of media items. An audio track having a theme parameter corresponding to a theme parameter of the number of media items is identified, and a video montage incorporating the media items and the audio track is generated. A data structure may specify an identity and order of the media items and a start location of the audio track, and the video montage may be created by generating individual video segments from each media item in the number of media items, and assembling the individual video segments into the video montage based on an order specified in the data structure. Updates or edits to the video montage are represented as changes to the data structure, which is used to generate an updated video montage.
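An illustrative sketch of such a data structure and the regeneration step; field names are hypothetical, and the per-item rendering is a string stand-in.

montage_spec = {
    "media_item_ids": ["clip_12", "photo_7", "clip_3"],  # identity and order
    "audio_track_id": "track_88",
    "audio_start_s": 4.5,                                # start location
}

def render_segment(item_id: str) -> str:
    return f"segment({item_id})"  # stand-in for per-item video rendering

def generate_montage(spec: dict) -> str:
    segments = [render_segment(i) for i in spec["media_item_ids"]]
    return " + ".join(segments) + f" @ {spec['audio_track_id']}+{spec['audio_start_s']}s"

print(generate_montage(montage_spec))

# An edit is represented as a change to the data structure, then regeneration.
montage_spec["media_item_ids"].reverse()
print(generate_montage(montage_spec))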
Methods and systems are disclosed for generating mirrored 3D assets for an extended reality (XR) experience. The system receives a three-dimensional (3D) object comprising a target and analyzes the 3D object using one or more machine learning models to generate data associated with a mirrored version of the target of the 3D object. The system applies the mirrored version of the target to a mirrored version of the 3D object using the generated data and generates a new 3D object comprising the mirrored version of the 3D object and the mirrored version of the target.
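One geometric ingredient of such a pipeline can be sketched directly: reflecting vertices across the x = 0 plane and flipping triangle winding so the mirrored mesh keeps outward-facing normals. The ML-generated mirrored target (e.g., a texture) is out of scope here.

from typing import List, Tuple

Vertex = Tuple[float, float, float]
Triangle = Tuple[int, int, int]

def mirror_mesh(vertices: List[Vertex], triangles: List[Triangle]):
    mirrored_vertices = [(-x, y, z) for (x, y, z) in vertices]
    # Reversing the index order flips the winding, restoring front faces.
    mirrored_triangles = [(a, c, b) for (a, b, c) in triangles]
    return mirrored_vertices, mirrored_triangles

verts = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
tris = [(0, 1, 2)]
print(mirror_mesh(verts, tris))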
G06T 19/20 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie Édition d'images tridimensionnelles [3D], p.ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
94.
PROVIDING DRAGGABLE SHUTTER BUTTON DURING VIDEO RECORDING
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for providing a draggable shutter button during video recording. The program and method provide for displaying a user interface within an application running on a device, the user interface presenting real-time image data captured by a camera of the device, the user interface including a shutter button which is configured to be selectable by a user to initiate video recording in response to a first user gesture; and upon detecting the first user gesture selecting the shutter button, initiating video recording with respect to the real-time image data, and providing for the shutter button to be draggable in predefined directions to perform respective functions related to the video recording.
H04N 23/62 - Control of parameters via user interfaces
H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders
H04N 23/667 - Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
H04N 23/45 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
H04N 23/69 - Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance, using icons
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
H04N 5/222 - PICTORIAL COMMUNICATION, e.g. TELEVISION - Details of television systems; Studio equipment
G11B 27/031 - Electronic editing of digitised analogue information signals, e.g. audio or video signals
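A hedged sketch of the drag-to-function dispatch that the draggable-shutter abstract of entry 94 describes follows. The direction classification is a standard dominant-axis test; the specific direction-to-action mapping is an assumption made purely for illustration, since the disclosure says only that predefined drag directions trigger functions related to the recording.

```python
from enum import Enum

class Direction(Enum):
    UP = "up"
    DOWN = "down"
    LEFT = "left"
    RIGHT = "right"

def classify_drag(dx: float, dy: float) -> Direction:
    # Pick the dominant axis of the drag vector (screen y grows downward).
    if abs(dx) >= abs(dy):
        return Direction.RIGHT if dx > 0 else Direction.LEFT
    return Direction.DOWN if dy > 0 else Direction.UP

# Hypothetical mapping of predefined drag directions to recording functions.
ACTIONS = {
    Direction.UP: "zoom_in",
    Direction.DOWN: "zoom_out",
    Direction.LEFT: "lock_recording",
    Direction.RIGHT: "add_effect",
}

def on_shutter_drag(dx: float, dy: float) -> str:
    return ACTIONS[classify_drag(dx, dy)]

print(on_shutter_drag(0.0, -42.0))  # -> "zoom_in"
```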
A lift reporting system to perform operations that include: accessing user behavior data associated with one or more machine-learned (ML) models, the ML models associated with identifiers; determining causal conversions associated with the ML models based on the user behavior data, the causal conversions comprising values; performing a comparison between the values that represent the causal conversions; determining a ranking of the ML models based on the comparison; and causing display of a graphical user interface (GUI) that includes a display of identifiers associated with the ML models.
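As an illustration of how causal-conversion values might be compared and ranked, the sketch below uses the standard uplift definition (treatment conversion rate minus control conversion rate). That formula, the counts, and all names are assumptions; the abstract does not specify how the causal values are computed.

```python
def causal_lift(treated_conversions: int, treated: int,
                control_conversions: int, control: int) -> float:
    # Uplift: conversion rate among treated users minus the control rate.
    return treated_conversions / treated - control_conversions / control

# Hypothetical per-model behavior counts: (treated conv., treated, control conv., control).
behavior = {
    "model_a": (120, 1000, 80, 1000),
    "model_b": (95, 1000, 90, 1000),
}
values = {model_id: causal_lift(*counts) for model_id, counts in behavior.items()}
ranking = sorted(values, key=values.get, reverse=True)
print(ranking)  # model identifiers ordered for the GUI display
```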
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for providing an indication of video recording. The program and method provide for displaying a user interface within an application running on a device, the user interface presenting real-time image data captured by a camera of the device, the user interface including a shutter button which is selectable to initiate video recording in response to a first user gesture; and upon detecting the first user gesture selecting the shutter button, initiating video recording with respect to the real-time image data, replacing a first set of interface elements within the user interface with a second set of interface elements within the user interface, and updating an appearance of the shutter button.
H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders
H04N 5/77 - Interface circuits between an apparatus for recording and another apparatus, between a recording apparatus and a television camera
H04N 23/667 - Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
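The interface-element swap in the preceding abstract can be pictured as a simple state change on recording start; the element names and shutter styles below are hypothetical stand-ins for whatever the application actually shows.

```python
IDLE_ELEMENTS = ["flash_toggle", "timer", "filters_carousel"]
RECORDING_ELEMENTS = ["elapsed_time", "pause_button", "stop_hint"]

class CameraUI:
    def __init__(self):
        self.elements = list(IDLE_ELEMENTS)
        self.shutter_style = "ring"

    def start_recording(self):
        # Replace the first set of interface elements with the second set
        # and update the shutter button's appearance, per the abstract.
        self.elements = list(RECORDING_ELEMENTS)
        self.shutter_style = "pulsing_red"

ui = CameraUI()
ui.start_recording()
print(ui.elements, ui.shutter_style)
```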
A system includes one or more hardware processors and at least one memory storing instructions that cause the one or more hardware processors to perform operations including retrieving a first set of media content captured by an interaction client included in a client device, and retrieving a second set of media content captured by the interaction client included in the client device. The operations also include assigning the first set of media content a first ranking value, assigning the second set of media content a second ranking value, creating a first visual representation of the first set of media content and a second visual representation of the second set of media content based on the first ranking value and on the second ranking value, and causing display, on a display of the client device, of the first visual representation and the second visual representation.
G06T 11/60 - Editing figures and text; Combining figures or text
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
G06F 3/14 - Digital output to display device
G06Q 50/00 - Systems or methods specially adapted for a specific business sector, e.g. utilities or tourism
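To make the rank-to-representation step of the preceding abstract concrete, here is a toy sketch in which each media set's ranking value scales the size of its visual representation. The scoring rule (recency-weighted item count) and the sizing rule are illustrative assumptions only.

```python
def rank_value(media_set: list[dict]) -> float:
    # Hypothetical score: recency-weighted count of items in the set.
    return sum(item.get("recency_weight", 1.0) for item in media_set)

def tile_size(rank: float, max_rank: float, base_px: int = 120) -> int:
    # Higher-ranked sets get a proportionally larger representation.
    return int(base_px * (0.5 + 0.5 * rank / max_rank))

first_set = [{"recency_weight": 1.0}, {"recency_weight": 0.8}]
second_set = [{"recency_weight": 0.3}]
ranks = [rank_value(first_set), rank_value(second_set)]
sizes = [tile_size(r, max(ranks)) for r in ranks]
print(sizes)  # pixel sizes for the first and second visual representations
```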