Snap Inc.

United States of America


1-100 of 5,555 results for Snap Inc. and 5 subsidiaries
Aggregations
IP Type
        Patent 5,189
        Trademark 366
Jurisdiction
        United States 4,234
        World 1,125
        Canada 100
        Europe 96
Owner / Subsidiary
[Owner] Snap Inc. 5,517
Snapchat, Inc. 34
Bitstrips Inc. 1
Flite, Inc. 1
Scan, Inc. 1
Date
New (last 4 weeks) 86
2025 April (MTD) 14
2025 March 72
2025 February 63
2025 January 51
IPC Class
G06T 19/00 - Manipulating 3D models or images for computer graphics 813
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer 655
G02B 27/01 - Head-up displays 584
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus 425
H04L 12/58 - Message switching systems 373
NICE Class
09 - Scientific and electric apparatus and instruments 246
42 - Scientific, technological and industrial services, research and design 144
41 - Education, entertainment, sporting and cultural services 135
35 - Advertising and business services 106
38 - Telecommunications services 67
Status
Pending 1,061
Registered / In Force 4,494

1.

STYLIZED MAP TILE GENERATION AND SERVING FOR ONLINE PLATFORMS

      
Application Number 18981026
Status Pending
Filing Date 2024-12-13
First Publication Date 2025-04-03
Owner Snap Inc. (USA)
Inventor
  • Amitay, Daniel
  • Brody, Jonathan
  • Gorkin, Leonid
  • Johnson, Jeffrey Arthur
  • Lin, Andrew
  • Lin, Walton
  • Samaranayake, Nayana
  • Spiegel, Evan
  • Yung, Marcel M.

Abstract

Systems and methods for generating and serving stylized map tiles for a social media platform's map-based graphical user interface (GUI). Multiple earth imagery tiles corresponding to a geographical area are retrieved, each comprising a photographic image of a corresponding portion of the Earth's surface. Based on the earth imagery tiles, multiple stylized map tiles are generated. In response to receiving a request from a user device for display of a target area in the map-based GUI, a set of stylized map tiles corresponding to the target area is retrieved and transmitted to the user device. The generation of stylized map tiles may include retrieving a target earth imagery tile together with neighboring tiles, generating an expanded earth imagery tile, stylizing the expanded tile, and cropping to produce the final stylized map tile. Different neural networks may be used for stylizing tiles at different zoom levels.
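
The expand–stylize–crop pipeline described above can be sketched as follows. This is a minimal illustration, not the patented implementation: tiles are plain 2D lists of pixel values, `TILE` is a hypothetical tile edge length, and `stylize` is a trivial stand-in for the per-zoom-level neural network.

```python
# Sketch of the tile-stylization pipeline: stitch a target tile with its
# 3x3 neighborhood into an expanded image, stylize the expanded image,
# then crop back to the target tile's footprint so edges blend cleanly.

TILE = 4  # hypothetical tile edge length in pixels

def expand(tiles3x3):
    """Stitch a 3x3 neighborhood of tiles into one expanded image."""
    rows = []
    for tr in range(3):
        for r in range(TILE):
            row = []
            for tc in range(3):
                row.extend(tiles3x3[tr][tc][r])
            rows.append(row)
    return rows

def stylize(img, zoom):
    # Placeholder for the zoom-specific neural network: here we simply
    # scale pixel values so the effect of neighboring context is visible.
    return [[px * (zoom + 1) for px in row] for row in img]

def crop_center(img):
    """Crop the expanded, stylized image back to the central tile."""
    return [row[TILE:2 * TILE] for row in img[TILE:2 * TILE]]

def stylize_tile(tiles3x3, zoom):
    return crop_center(stylize(expand(tiles3x3), zoom))
```

Stylizing the expanded tile rather than the target tile alone is what avoids visible seams at tile borders, since the stand-in network sees the same context a neighboring tile's stylization would.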

IPC Classes

  • G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06F 3/04842 - Selection of displayed objects or displayed text elements
  • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
  • G06F 9/54 - Interprogram communication
  • G06F 16/248 - Presentation of query results
  • G06F 16/29 - Geographical information databases
  • G06F 16/487 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
  • G06F 16/9535 - Search customisation based on user profiles and personalisation
  • G06F 16/9537 - Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
  • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
  • G06T 11/20 - Drawing from basic elements, e.g. lines or circles
  • G06T 11/60 - Editing figures and text; Combining figures or text
  • H04L 9/40 - Network security protocols
  • H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
  • H04L 41/28 - Restricting access to network management systems or functions, e.g. using authorisation function to access network configuration
  • H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
  • H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
  • H04L 67/306 - User profiles
  • H04L 67/50 - Network services
  • H04L 67/52 - Network services specially adapted for the location of the user terminal
  • H04W 4/02 - Services making use of location information
  • H04W 4/029 - Location-based management or tracking services
  • H04W 4/18 - Information format or content conversion, e.g. adaptation by the network of the transmitted or received information for the purpose of wireless delivery to users or terminals
  • H04W 4/21 - Services signalling; Auxiliary data signalling, i.e. transmitting data via a non-traffic channel for social networking applications
  • H04W 12/02 - Protecting privacy or anonymity, e.g. protecting personally identifiable information [PII]

2.

GENERALIZING IMAGE STYLIZATION EFFECTS

      
Application Number 18478783
Status Pending
Filing Date 2023-09-29
First Publication Date 2025-04-03
Owner Snap Inc. (USA)
Inventor
  • Gomez Zharkov, Andrey Alejandrovich
  • Yevtushenko, Bohdan
  • Gudkov, Konstantin

Abstract

Systems herein describe a stylization system that accesses an input image, generates a paired image dataset using a first neural network, generates a stylized target image based on the input image by applying the stylization effect on an entire portion of the input image using a second neural network trained on the paired image dataset, and causes display of the stylized target image on a graphical user interface of a computing device.
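
The two-network flow in the abstract can be sketched as below. All names are hypothetical and the "networks" are stand-ins: the first model generates the paired dataset, and the second model is "trained" on those pairs (here a per-pixel linear gain fit by least squares, purely for illustration) and then applied to the entire input image.

```python
# Minimal sketch: teacher model -> paired dataset -> student model ->
# stylize the whole image with the student.

def make_paired_dataset(image, teacher_gain=3.0):
    # Stand-in for the first network: produces (input, stylized) pairs.
    return [(px, px * teacher_gain) for row in image for px in row]

def train_student(pairs):
    # Fit y = g * x by least squares over the paired dataset.
    num = sum(x * y for x, y in pairs)
    den = sum(x * x for x, _ in pairs) or 1.0
    return num / den

def stylize_full_image(image):
    gain = train_student(make_paired_dataset(image))
    return [[px * gain for px in row] for row in image]
```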

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
  • G06T 3/60 - Rotation of whole images or parts thereof
  • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

3.

IMAGE BASED BROWSER NAVIGATION

      
Application Number 18977018
Status Pending
Filing Date 2024-12-11
First Publication Date 2025-04-03
Owner Snap Inc. (USA)
Inventor Li, Yanjia

Abstract

A system to navigate a browser based on image data may perform operations that include: receiving a scan request from a client device, the scan request including an image that comprises image data; identifying an object depicted within the image based on the image data; determining a classification of the object; and navigating a browser associated with the client device to a resource based on the classification.
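
The scan-to-navigate flow above reduces to classify-then-route. A minimal sketch, with a hypothetical routing table and a toy classifier keyed on a tag in the fake image bytes:

```python
# Receive a scan request carrying image data, classify the depicted
# object, then resolve the browser destination from the classification.

ROUTES = {
    "barcode": "https://example.com/product-lookup",
    "poster": "https://example.com/events",
    "unknown": "https://example.com/search",
}

def classify(image_data):
    # Placeholder object classifier; a real system would run a vision model.
    if image_data.startswith(b"BC:"):
        return "barcode"
    if image_data.startswith(b"PO:"):
        return "poster"
    return "unknown"

def handle_scan_request(request):
    label = classify(request["image"])
    return {"navigate_to": ROUTES[label], "classification": label}
```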

IPC Classes

  • G06F 16/954 - Navigation, e.g. using categorised browsing
  • G06F 16/955 - Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
  • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06F 18/24 - Classification techniques
  • G06T 7/11 - Region-based segmentation
  • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]

4.

CIRCUITS AND METHODS FOR WEARABLE DEVICE CHARGING AND WIRED CONTROL

      
Application Number 18978737
Status Pending
Filing Date 2024-12-12
First Publication Date 2025-04-03
Owner Snap Inc. (USA)
Inventor
  • Tham, Yu Jiang
  • Larson, Nicholas
  • Brook, Peter
  • Patton, Russell Douglas
  • Alhaideri, Miran
  • Hong, Zhihao

Abstract

Methods and devices for wired charging and communication with a wearable device are described. In one embodiment, a symmetrical contact interface comprises a first contact pad and a second contact pad, and particular wired circuitry is coupled to the first and second contact pads to enable charging as well as receive and transmit communications via the contact pads as part of various device states.

IPC Classes

  • H02J 7/00 - Circuit arrangements for charging or depolarising batteries or for supplying loads from batteries
  • G02C 1/00 - Assemblies of lenses with bridges or browbars
  • G02C 5/14 - Side-members
  • G02C 11/00 - Non-optical adjuncts; Attachment thereof
  • H01L 27/02 - Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including integrated passive circuit elements with at least one potential-jump barrier or surface barrier
  • H01R 13/62 - Means for facilitating engagement or disengagement of coupling parts or for holding them in engagement
  • H02J 7/04 - Regulation of the charging current or voltage
  • H02J 7/34 - Parallel operation in networks using both storage and other DC sources, e.g. providing buffering
  • H03K 19/0185 - Coupling arrangements; Interface arrangements using field-effect transistors only
  • H04B 3/54 - Systems for transmission via power distribution lines
  • H04B 3/56 - Circuits for coupling, blocking, or by-passing of signals

5.

GENERATING 3D MODELS WITH TEXTURE

      
Application Number 18375332
Status Pending
Filing Date 2023-09-29
First Publication Date 2025-04-03
Owner Snap Inc. (USA)
Inventor
  • Han, Songfang
  • Korolev, Sergei
  • Lee, Hsin-Ying
  • Stoliar, Aleksei

Abstract

An artificial intelligence (AI) network or neural network is trained to generate three-dimensional (3D) models or shapes with color from two-dimensional (2D) input images and input text describing the 3D model with color. Example methods include converting a first three-dimensional (3D) model from a first representation to a second representation, the second representation including color information for the 3D model and inputting the second representation into an encoder to generate a third representation having a lower dimension than the second representation. The method further includes inputting the third representation into a decoder to generate a fourth representation having a same dimension as the second representation and generating a second 3D model from the fourth representation. The method further includes determining losses between the first 3D model and the second 3D model and updating weights of the encoder and the decoder based on the losses.
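
The encode–decode–loss–update loop described above can be written out as a toy training step. This is an illustration only: the real encoder and decoder are neural networks, whereas here single scalar weights stand in for them, and the "second representation" is a flat list of values.

```python
# One training step: flatten (second representation) -> encode to a
# lower-dimensional code (third) -> decode back to full dimension
# (fourth) -> reconstruction loss -> gradient update of both weights.

def encode(x, w_e):
    return w_e * sum(x) / len(x)           # lower-dimensional code (a scalar)

def decode(z, w_d, n):
    return [w_d * z] * n                    # back to the input dimension

def train_step(x, w_e, w_d, lr=0.01):
    z = encode(x, w_e)
    x_hat = decode(z, w_d, len(x))
    loss = sum((a - b) ** 2 for a, b in zip(x_hat, x))
    mean_x = sum(x) / len(x)
    # Exact gradients of the squared-error loss w.r.t. the two weights.
    g_d = sum(2 * (a - b) * z for a, b in zip(x_hat, x))
    g_e = sum(2 * (a - b) * w_d * mean_x for a, b in zip(x_hat, x))
    return w_e - lr * g_e, w_d - lr * g_d, loss
```

Running the step repeatedly drives the reconstruction loss down, which is the signal the abstract describes for updating encoder and decoder weights.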

IPC Classes

  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06T 7/90 - Determination of colour characteristics

6.

AUGMENTED REALITY GUIDANCE INTERFACE

      
Application Number 18978381
Status Pending
Filing Date 2024-12-12
First Publication Date 2025-04-03
Owner Snap Inc. (USA)
Inventor
  • Cowburn, Piers George
  • Li, David
  • Müller Sandvik, Isac Andreas
  • Pan, Qi

Abstract

Example embodiments described herein relate to an AR guidance system that performs operations including: detecting a client device at a location within a geo-fenced area, wherein the geo-fenced area may include a destination of interest; determining a route to the destination of interest from the location of the client device within the geo-fenced area; causing display of a presentation of an environment within an AR interface at the client device; detecting a display of real-world signage within the presentation of the environment; generating a media item in response to detecting the display of the signage within the presentation of the environment, wherein the media item is based on the route to the destination of interest; and causing display of the media item within the AR interface based on the position of the signage within the presentation of the environment.

IPC Classes

  • G01C 21/36 - Input/output arrangements for on-board computers
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06V 30/413 - Classification of content, e.g. text, photographs or tables
  • H04W 4/021 - Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
  • H04W 4/029 - Location-based management or tracking services

7.

GENERALIZING IMAGE STYLIZATION EFFECTS

      
Application Number US2024048696
Publication Number 2025/072545
Status In Force
Filing Date 2024-09-26
Publication Date 2025-04-03
Owner SNAP INC. (USA)
Inventor
  • Gomez Zharkov, Andrey Alejandrovich
  • Yevtushenko, Bohdan
  • Gudkov, Konstantin

Abstract

Systems herein describe a stylization system that accesses an input image, generates a paired image dataset using a first neural network, generates a stylized target image based on the input image by applying the stylization effect on an entire portion of the input image using a second neural network trained on the paired image dataset, and causes display of the stylized target image on a graphical user interface of a computing device.

IPC Classes

  • G06T 11/00 - 2D [Two Dimensional] image generation

8.

FIELD CALIBRATION OF AN AUGMENTED REALITY DEVICE

      
Application Number US2024048233
Publication Number 2025/072216
Status In Force
Filing Date 2024-09-24
Publication Date 2025-04-03
Owner SNAP INC. (USA)
Inventor
  • Birklbauer, Clemens
  • Halmetschlager-Funek, Georg
  • Kalkgruber, Matthias
  • Pereira Torres, Tiago Miguel
  • Schreiberhuber, Simon

Abstract

A method for recalibrating an augmented reality (AR) device includes generating and storing a ground truth map of a real-world environment when the AR device is operating with a high likelihood of having an accurate factory calibration. During operation of the AR device, new map data is generated for the real-world environment. The new map data is compared to the ground truth map to detect potential calibration errors. If calibration errors are detected, a recalibration procedure is executed by determining an optimal path through the real-world environment that allows for observing parameters requiring recalibration. Visual cues are generated to guide a user of the AR device through the optimal path. As the user follows the visual cues, calibration parameters are iteratively adjusted to eliminate detected calibration errors. The recalibration procedure may be presented as an interactive game to improve user engagement, with rewards provided for accurately following guidance.
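
The compare-then-recalibrate loop above can be sketched with a single additive bias standing in for the device's real calibration parameters (the names, threshold, and step size are all hypothetical):

```python
# Compare new map data against the stored ground-truth map; if the
# discrepancy exceeds a threshold, iteratively adjust the calibration
# parameter (here one additive bias) until the error is eliminated.

THRESHOLD = 0.05  # hypothetical error tolerance

def map_error(ground_truth, observed, bias):
    return max(abs((o - bias) - g) for g, o in zip(ground_truth, observed))

def recalibrate(ground_truth, observed, bias=0.0, step=0.5, iters=100):
    if map_error(ground_truth, observed, bias) <= THRESHOLD:
        return bias, False                  # factory calibration still good
    for _ in range(iters):                  # user follows visual cues here
        residual = sum((o - bias) - g for g, o in zip(ground_truth, observed))
        bias += step * residual / len(ground_truth)
        if map_error(ground_truth, observed, bias) <= THRESHOLD:
            break
    return bias, True
```

Each loop iteration corresponds to one observation made while the user follows the visual cues along the planned path; the parameter converges toward the bias that reconciles the new map with the ground truth.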

IPC Classes

  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

9.

BENDING-ASSISTED CALIBRATION FOR EXTENDED REALITY TRACKING

      
Application Number US2024048934
Publication Number 2025/072727
Status In Force
Filing Date 2024-09-27
Publication Date 2025-04-03
Owner SNAP INC. (USA)
Inventor
  • Faeulhammer, Thomas
  • Kalkgruber, Matthias
  • Muttenthaler, Thomas
  • Pereira Torres, Tiago Miguel
  • Wolf, Daniel

Abstract

Bending data is used to facilitate tracking operations of an extended reality (XR) device, such as hand tracking or other object tracking operations. The XR device obtains bending data indicative of bending of the XR device to accommodate a body part of a user wearing the XR device. The XR device determines, based on the bending data, whether to use previously identified biometric data in a tracking operation. A mode of the XR device is selected responsive to determining whether to use the previously identified biometric data. The selected mode is used to initialize the tracking operation. The selected mode may be a first mode in which the previously identified biometric data is used in the tracking operation or a second mode in which the previously identified biometric data is not used in the tracking operation.

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer

10.

FIELD CALIBRATION OF AN AUGMENTED REALITY DEVICE

      
Application Number 18477297
Status Pending
Filing Date 2023-09-28
First Publication Date 2025-04-03
Owner Snap Inc. (USA)
Inventor
  • Birklbauer, Clemens
  • Halmetschlager-Funek, Georg
  • Kalkgruber, Matthias
  • Pereira Torres, Tiago Miguel
  • Schreiberhuber, Simon

Abstract

A method for recalibrating an augmented reality (AR) device includes generating and storing a ground truth map of a real-world environment when the AR device is operating with a high likelihood of having an accurate factory calibration. During operation of the AR device, new map data is generated for the real-world environment. The new map data is compared to the ground truth map to detect potential calibration errors. If calibration errors are detected, a recalibration procedure is executed by determining an optimal path through the real-world environment that allows for observing parameters requiring recalibration. Visual cues are generated to guide a user of the AR device through the optimal path. As the user follows the visual cues, calibration parameters are iteratively adjusted to eliminate detected calibration errors. The recalibration procedure may be presented as an interactive game to improve user engagement, with rewards provided for accurately following guidance.

IPC Classes

  • G01C 25/00 - Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
  • G01C 21/00 - Navigation; Navigational instruments not provided for in groups
  • G01C 21/16 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

11.

LOW-POWER HAND-TRACKING SYSTEM FOR WEARABLE DEVICE

      
Application Number 18978713
Status Pending
Filing Date 2024-12-12
First Publication Date 2025-04-03
Owner Snap Inc. (USA)
Inventor
  • Feinman, Alex
  • Arya, Ashwani

Abstract

A method for a low-power hand-tracking system is described. In one aspect, the method includes polling a proximity sensor of a wearable device to detect a proximity event, where the wearable device includes a low-power processor and a high-power processor; in response to detecting the proximity event, operating a low-power hand-tracking application on the low-power processor based on proximity data from the proximity sensor; and ending an operation of the low-power hand-tracking application in response to at least one of: detecting and recognizing a gesture based on the proximity data, detecting without recognizing the gesture based on the proximity data, or detecting a lack of activity from the proximity sensor within a timeout period based on the proximity data.
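
The start and three termination conditions described above form a small state machine. A sketch under simplified assumptions (sensor samples reduced to labeled strings, timeout counted in quiet samples):

```python
# State-machine sketch: a proximity event starts the low-power tracking
# app; it stops when a gesture is recognized, when motion is detected
# but not recognized, or when the sensor stays quiet past a timeout.

TIMEOUT = 3  # hypothetical number of quiet samples before giving up

def run_tracking(samples):
    tracking, quiet = False, 0
    for s in samples:
        if not tracking:
            if s == "proximity":
                tracking = True          # proximity event starts the app
            continue
        if s == "gesture":
            return "recognized"          # gesture detected and recognized
        if s == "motion":
            return "unrecognized"        # detected without recognizing
        quiet = quiet + 1 if s == "idle" else 0
        if quiet >= TIMEOUT:
            return "timeout"             # lack of activity within window
    return "no_event" if not tracking else "interrupted"
```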

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G01S 13/08 - Systems for measuring distance only

12.

AUGMENTED REALITY GAMING USING VIRTUAL EYEWEAR BEAMS

      
Application Number 18980214
Status Pending
Filing Date 2024-12-13
First Publication Date 2025-04-03
Owner Snap Inc. (USA)
Inventor
  • Canberk, Ilteris
  • Knipfing, Jacob

Abstract

Interactive augmented reality experiences with an eyewear device including a virtual eyewear beam. The user can direct the virtual beam by orienting the eyewear device, the user's eye gaze, or both. The eyewear device may detect the direction of an opponent's eyewear device or eye gaze, or both. The eyewear device may calculate a score based on hits of the virtual beam of the user and the opponent on respective target areas, such as the other player's head or face.

IPC Classes

  • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
  • G02B 27/01 - Head-up displays
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer

13.

AUTOMATIC IMAGE GENERATION USING LATENT STRUCTURAL DIFFUSION

      
Application Number US2024048430
Publication Number 2025/072344
Status In Force
Filing Date 2024-09-25
Publication Date 2025-04-03
Owner SNAP INC. (USA)
Inventor
  • Ding, Erli
  • Eles, Colin
  • Fruchtman, Amir
  • Guler, Riza Alp
  • Li, Yanyu
  • Liu, Xian
  • Muca, Ergeta
  • Rami Koujan, Mohammad
  • Ren, Jian
  • Sagar, Dhritiman
  • Siarohin, Aliaksandr
  • Skorokhodov, Ivan
  • Tulyakov, Sergey

Abstract

Examples described herein relate to automatic image generation. A plurality of inputs is accessed. The inputs include first input data and second input data. The first input data includes a text prompt describing a desired image and the second input data is indicative of one or more structural features of the desired image. One or more intermediate outputs are generated via a first generative machine learning model that uses the plurality of inputs as first control signals. An output image is generated via a second generative machine learning model that uses at least a subset of the plurality of inputs and at least a subset of the one or more intermediate outputs as second control signals. The output image is presented at a user device of a user.

IPC Classes

  • G06T 11/00 - 2D [Two Dimensional] image generation

14.

GENERATING 3D MODELS WITH TEXTURE

      
Application Number US2024048211
Publication Number 2025/072202
Status In Force
Filing Date 2024-09-24
Publication Date 2025-04-03
Owner SNAP INC. (USA)
Inventor
  • Han, Songfang
  • Korolev, Sergei
  • Lee, Hsin-Ying
  • Stoliar, Aleksei

Abstract

An artificial intelligence (AI) network or neural network is trained to generate three-dimensional (3D) models or shapes with color from two-dimensional (2D) input images and input text describing the 3D model with color. Example methods include converting a first three-dimensional (3D) model from a first representation to a second representation, the second representation including color information for the 3D model and inputting the second representation into an encoder to generate a third representation having a lower dimension than the second representation. The method further includes inputting the third representation into a decoder to generate a fourth representation having a same dimension as the second representation and generating a second 3D model from the fourth representation. The method further includes determining losses between the first 3D model and the second 3D model and updating weights of the encoder and the decoder based on the losses.

IPC Classes

  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

15.

COMBINING CONTENT ITEMS IN A SHARED CONTENT COLLECTION

      
Application Number 18725071
Status Pending
Filing Date 2022-12-29
First Publication Date 2025-03-27
Owner Snap Inc. (USA)
Inventor
  • Cooper, Andrew Grosvenor
  • Heikkinen, Christie Marie
  • Tagare, Neil
  • Taitz, David Phillip

Abstract

A content collection is shared between a first user and a second user. A content collection interface is presented on a second user device of the second user. The content collection interface enables the second user to navigate the shared content collection. The shared content collection includes a first content item. Responsive to receiving, from the second user device, an indication of a first combination selection, a second content item is accessed and the second user is enabled to combine the first content item with the second content item to create a first combined content item. Responsive to receiving, from the second user device, an indication of a first content addition selection, the first combined content item is stored in association with the shared content collection. The first combined content item is presented within the content collection interface on a first user device of the first user.

IPC Classes

  • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus

16.

WAVEGUIDE AND DIFFRACTION GRATING FOR AUGMENTED REALITY OR VIRTUAL REALITY DISPLAY

      
Application Number 18976111
Status Pending
Filing Date 2024-12-10
First Publication Date 2025-03-27
Owner Snap Inc. (USA)
Inventor
  • Crai, Alexandra
  • Phelan, Claran
  • Valera, Ohmed Salim Ibrahim
  • Crosby, David Nicholas

Abstract

A waveguide for use in a virtual reality (VR) or augmented reality (AR) device is disclosed. The waveguide comprises an input region configured to couple light into the waveguide so that it propagates under total internal reflection (TIR), and an output region comprising optical structures configured to receive image-bearing light from the input region. The output region comprises a plurality of zones whose diffraction efficiencies differ from one another so as to reduce rainbow artefacts.

IPC Classes

  • G02B 27/01 - Head-up displays
  • G02B 5/18 - Diffracting gratings
  • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

17.

LIVE WEB-BASED SPECIAL EFFECTS

      
Application Number US2024047391
Publication Number 2025/064612
Status In Force
Filing Date 2024-09-19
Publication Date 2025-03-27
Owner SNAP INC. (USA)
Inventor
  • Connolly, Michael James
  • Fu, Xuelun
  • Hasrouni, Christine
  • Taitz, David Phillip

Abstract

A system and method for enabling augmented reality effects in a web browser without requiring installation of additional software is disclosed. A web server provides a website with a gallery of selectable special effects. Upon selecting an effect, the website loads a page specific to that effect which includes a live preview showing the effect applied to a video feed from the user's webcam. This allows the user to view themselves with the effect applied in real-time. The website requests access to the webcam and microphone through the browser's built-in permission system. Captured photos and videos with the effect applied can be saved locally or shared through native operating system tools. The system provides an engaging augmented reality experience accessible directly via a standard web browser, without needing to install a dedicated app.

IPC Classes

  • H04N 21/4223 - Cameras
  • H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies or resolving scheduling conflicts
  • H04N 21/4788 - Supplemental services, e.g. displaying phone caller identification or shopping application communicating with other users, e.g. chatting

18.

LIVE WEB-BASED SPECIAL EFFECTS

      
Application Number 18631354
Status Pending
Filing Date 2024-04-10
First Publication Date 2025-03-27
Owner Snap Inc. (USA)
Inventor
  • Connolly, Michael James
  • Fu, Xuelun
  • Hasrouni, Christine
  • Taitz, David Phillip

Abstract

A system and method for enabling augmented reality effects in a web browser without requiring installation of additional software is disclosed. A web server provides a website with a gallery of selectable special effects. Upon selecting an effect, the website loads a page specific to that effect which includes a live preview showing the effect applied to a video feed from the user's webcam. This allows the user to view themselves with the effect applied in real-time. The website requests access to the webcam and microphone through the browser's built-in permission system. Captured photos and videos with the effect applied can be saved locally or shared through native operating system tools. The system provides an engaging augmented reality experience accessible directly via a standard web browser, without needing to install a dedicated app.

IPC Classes

  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

19.

PROVIDING VISUAL CONTENT EDITING FUNCTIONS

      
Application Number 18974733
Status Pending
Filing Date 2024-12-09
First Publication Date 2025-03-27
Owner Snap Inc. (USA)
Inventor
  • Hogeg, Moshe
  • Shemesh, Yosef

Abstract

A method of adjusting visual content. The method comprises selecting, on a client terminal, visual content, extracting visual content data pertaining to the visual content, forwarding a request which includes the visual content data to a network node via a network, receiving, in response to the request, a list of a plurality of visual content editing functions from the network node, presenting, on the client terminal, the plurality of visual content editing functions to a user, receiving a selection of at least one member of the list from the user, adjusting the visual content using the at least one member, and outputting the adjusted visual content.

IPC Classes

  • H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
  • G06F 40/30 - Semantic analysis
  • G06T 1/00 - General purpose image data processing
  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • G11B 27/034 - Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
  • H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
  • H04M 1/72445 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting Internet browser applications
  • H04M 1/72457 - User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to geographic location
  • H04N 5/44 - Receiver circuitry
  • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
  • H04N 21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
  • H04N 21/4223 - Cameras
  • H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies or resolving scheduling conflicts
  • H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
  • H04N 21/4788 - Supplemental services, e.g. displaying phone caller identification or shopping application communicating with other users, e.g. chatting
  • H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders

20.

SURVEY DISTRIBUTION SYSTEM

      
Application Number 18976978
Status Pending
Filing Date 2024-12-11
First Publication Date 2025-03-27
Owner Snap Inc. (USA)
Inventor
  • Can, Tolga
  • Chen, Yu
  • Ma, Yiwei
  • Siegel, Joshua
  • Wu, Shuo

Abstract

A survey distribution system receives a selection of a first subset of a user population. For example, an administrator of the system may select one or more user attributes of the users among the user population. In response, the survey distribution system identifies the first subset of users based on the selected attributes. In some example embodiments, the administrator of the system may additionally define a maximum or minimum number of users to be exposed to the content, along with targeting parameters for the content, such as a period of time in which to distribute the content to the first subset of users and location criteria, so that the content is distributed only to users located in specific areas.

IPC Classes

21.

WHOLE BODY SEGMENTATION

      
Application Number 18977691
Status Pending
Filing Date 2024-12-11
First Publication Date 2025-03-27
Owner Snap Inc. (USA)
Inventor
  • Dudovitch, Gal
  • Harel, Peleg
  • Hsieh, Chia-Hao
  • Korolev, Sergei
  • Mishin Shuvi, Ma'Ayan

Abstract

Methods and systems are disclosed for performing operations comprising: receiving a monocular image that includes a depiction of a whole body of a user; generating a segmentation of the whole body of the user based on the monocular image; accessing a video feed comprising a plurality of monocular images received prior to the monocular image; smoothing, using the video feed, the segmentation of the whole body generated based on the monocular image to provide a smoothed segmentation; and applying one or more visual effects to the monocular image based on the smoothed segmentation.
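The smoothing operation is not specified in the abstract; a minimal sketch, assuming a simple blend of the current soft mask with the mean of prior-frame masks (function and parameter names are hypothetical):

```python
import numpy as np

def smooth_segmentation(current_mask, previous_masks, alpha=0.6):
    """Blend the latest frame's soft segmentation mask with the average of
    masks from earlier frames of the video feed.

    current_mask: HxW float array in [0, 1] for the latest monocular image.
    previous_masks: list of HxW float arrays from prior frames.
    alpha: weight of the current mask; higher means less smoothing.
    """
    if not previous_masks:
        return current_mask
    history = np.mean(previous_masks, axis=0)  # average the prior masks
    return alpha * current_mask + (1.0 - alpha) * history

# Example: a pixel flickers on in one frame but was off throughout history.
prev = [np.zeros((2, 2)), np.zeros((2, 2))]
cur = np.array([[1.0, 0.0], [0.0, 0.0]])
smoothed = smooth_segmentation(cur, prev, alpha=0.6)
# The flickering pixel is damped from 1.0 toward the historical value.
```

A visual effect applied with `smoothed` as the matte then shows less frame-to-frame boundary jitter than one applied with `cur` directly.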

IPC Classes

  • G06T 7/11 - Region-based segmentation
  • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
  • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06T 5/70 - Denoising; Smoothing
  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

22.

AUTOMATIC IMAGE GENERATION USING LATENT STRUCTURAL DIFFUSION

      
Application Number 18429251
Status Pending
Filing Date 2024-01-31
First Publication Date 2025-03-27
Owner SNAP INC. (USA)
Inventor
  • Ding, Erli
  • Eles, Colin
  • Fruchtman, Amir
  • Guler, Riza Alp
  • Li, Yanyu
  • Liu, Xian
  • Muca, Ergeta
  • Rami Koujan, Mohammad
  • Ren, Jian
  • Sagar, Dhritiman
  • Siarohin, Aliaksandr
  • Skorokhodov, Ivan
  • Tulyakov, Sergey

Abstract

Examples described herein relate to automatic image generation. A plurality of inputs is accessed. The inputs include first input data and second input data. The first input data includes a text prompt describing a desired image and the second input data is indicative of one or more structural features of the desired image. One or more intermediate outputs are generated via a first generative machine learning model that uses the plurality of inputs as first control signals. An output image is generated via a second generative machine learning model that uses at least a subset of the plurality of inputs and at least a subset of the one or more intermediate outputs as second control signals. The output image is presented at a user device of a user.

IPC Classes

  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

23.

IMPLEMENTING USER INTERFACES OF OTHER APPLICATIONS

      
Application Number US2024047265
Publication Number 2025/064531
Status In Force
Filing Date 2024-09-18
Publication Date 2025-03-27
Owner SNAP INC. (USA)
Inventor
  • Burckle, Chris
  • Wehrman, Ian Anthony

Abstract

A first application uses a user interface (UI) component of a second application to determine a user intent based on user input and then determines an action to perform based on the determined user intent. The first application makes it easier for the user to learn the UI of the second application. Example methods include a first application displaying a first content item, the first content item being content of the first application, and the first application displaying a second content item, the second content item being content of a second application. The method may further include, in response to a second selection of a second user interface item associated with the second content item, the first application determining a user intent and an action associated with the user intent based on a second user interface, the second user interface associated with the second application.

IPC Classes

  • H04L 51/10 - Multimedia information
  • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
  • G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons

24.

CHUNKED TRANSCODING AND UPLOADING FOR VIDEO TRANSMISSION

      
Application Number 18469256
Status Pending
Filing Date 2023-09-18
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor
  • Wang, Yichen
  • Li, Yuechuan
  • Wang, Si
  • Zhou, Yihuan
  • Wu, Haoyun
  • Nie, Junhong

Abstract

Uploading of a video file is performed by transcoding, processing and uploading portions of the video file in parallel, to reduce total processing and upload time. The processing of the video file may include applying associated augmented reality effects to a raw video recording, to generate an enhanced video recording for transmission and viewing at a recipient device. The uploaded portions of the video file may be assembled into a fragmented file format such as fMP4, in which portions of the video file are stored as fragments.
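The parallel transcode-and-upload flow described above can be sketched as follows; the transcoding and network upload are replaced with placeholders, and all function names are illustrative, not from the filing:

```python
from concurrent.futures import ThreadPoolExecutor

def transcode(chunk):
    # Stand-in for transcoding one portion (and applying any AR effects).
    return chunk.upper()

def process_and_upload(index, chunk, fragments):
    # Each worker transcodes its portion, then "uploads" it as a fragment.
    fragments[index] = transcode(chunk)

def chunked_upload(video, chunk_size=4, workers=4):
    """Split a recording into portions, transcode and upload them in
    parallel, then reassemble the fragments in order, as a fragmented
    container format such as fMP4 allows."""
    chunks = [video[i:i + chunk_size] for i in range(0, len(video), chunk_size)]
    fragments = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(process_and_upload, i, c, fragments)
                   for i, c in enumerate(chunks)]
        for f in futures:
            f.result()  # surface any worker exceptions
    # Fragments are joined by index, regardless of completion order.
    return "".join(fragments[i] for i in range(len(chunks)))

assembled = chunked_upload("a raw video recording", chunk_size=5)
```

Because each portion is handled independently, transcoding of later chunks overlaps with upload of earlier ones, which is where the reduction in total processing-plus-upload time comes from.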

IPC Classes

  • H04N 19/436 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
  • G06F 21/60 - Protecting data
  • H04N 19/136 - Incoming video signal characteristics or properties
  • H04N 19/162 - User input
  • H04N 19/40 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
  • H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression

25.

ADDING NEARBY USERS

      
Application Number 18534342
Status Pending
Filing Date 2023-12-08
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor
  • Srivastava, Nikesh
  • Taitz, David Phillip
  • Vallecillo, Yamill Antonio

Abstract

A user with geographically nearby users is offered the opportunity to send the nearby users a friend request. Example methods include accessing a location of a user system of a user, where the user is a member of an interaction platform, and determining a list of other users, where the list includes other users associated with other user systems that are within a threshold distance of the location of the user system, where the other users have a threshold number of connections with the user, and where the other users are members of the interaction platform. The method may further include causing to be displayed on a screen of the user system indications of the other users of the list and user interface items for the user to send a friend request to a corresponding other user of the list.
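The two filters described in the abstract, a distance threshold and a minimum number of shared connections, can be sketched as below; the haversine distance and both threshold values are illustrative assumptions:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def nearby_candidates(user, others, max_km=1.0, min_mutual=1):
    """Other platform members within max_km of the user who also share at
    least min_mutual connections with them."""
    names = []
    for other in others:
        distance = haversine_km(user["lat"], user["lon"], other["lat"], other["lon"])
        mutual = len(set(user["friends"]) & set(other["friends"]))
        if distance <= max_km and mutual >= min_mutual:
            names.append(other["name"])
    return names

me = {"lat": 0.0, "lon": 0.0, "friends": {"ana", "ben"}}
others = [
    {"name": "close_mutual", "lat": 0.0, "lon": 0.005, "friends": {"ben"}},   # ~0.6 km away
    {"name": "far_mutual", "lat": 10.0, "lon": 10.0, "friends": {"ana"}},     # too far
    {"name": "close_stranger", "lat": 0.0, "lon": 0.001, "friends": {"zoe"}}, # no mutuals
]
suggested = nearby_candidates(me, others)
```

Only users passing both filters would be surfaced with a friend-request interface item.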

IPC Classes

  • H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
  • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
  • H04L 67/306 - User profiles
  • H04L 67/52 - Network services specially adapted for the location of the user terminal

26.

WRISTWATCH BASED INTERFACE FOR AUGMENTED REALITY EYEWEAR

      
Application Number 18963218
Status Pending
Filing Date 2024-11-27
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor Rodriguez II, Jonathan M.

Abstract

Augmented reality eyewear devices allow users to experience a version of our “real” physical world augmented with virtual objects. Augmented reality eyewear may present a user with a graphical user interface that appears to be in the airspace directly in front of the user, thereby encouraging the user to interact with virtual objects in socially undesirable ways, such as by making sweeping hand gestures in the airspace in front of the user. Anchoring various input mechanisms or the graphical user interface of an augmented reality eyewear application to a wristwatch may allow a user to interact with an augmented reality eyewear device in a more socially acceptable manner. Combining the displays of a smartwatch and an augmented reality eyewear device into a single graphical user interface may provide enhanced display function and more responsive gestural input.

IPC Classes

  • G02B 27/01 - Head-up displays
  • G06F 1/16 - Constructional details or arrangements
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 21/36 - User authentication by graphic or iconic representation
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

27.

DIFFUSERS IN WEARABLE DEVICES

      
Application Number 18963756
Status Pending
Filing Date 2024-11-29
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor Carlson, Cècile Claire Madeleine

Abstract

Eyewear including an optical element, a controller, a support structure configured to support the optical element and the controller, light sources coupled to the controller and supported by the support structure, and a diffuser positioned adjacent to the light sources and supported by the support structure, the diffuser including microstructures that diffuse light emitted by the light sources in a radial anisotropic diffusion pattern or a prism-like diffusion pattern.

IPC Classes

  • G02C 11/00 - Non-optical adjuncts; Attachment thereof
  • F21V 3/04 - Globes; Bowls; Cover glasses characterised by materials, surface treatments or coatings
  • F21V 23/00 - Arrangement of electric circuit elements in or on lighting devices
  • F21W 111/00 - Use or application of lighting devices or systems for signalling, marking or indicating, not provided for in groups
  • F21Y 103/33 - Elongate light sources, e.g. fluorescent tubes curved annular
  • F21Y 115/10 - Light-emitting diodes [LED]
  • G02B 5/02 - Diffusing elements; Afocal elements

28.

SMART STICKER SELECTION FOR A MESSAGING SYSTEM

      
Application Number 18968003
Status Pending
Filing Date 2024-12-04
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor He, Jiayu

Abstract

A text string provided by a second client device of a second user is received by a first client device of a first user. The text string is parsed into one or more text portions. A score is assigned to each of the one or more text portions based on a specified criterion. One or more relevant tags of a plurality of tags are determined based on the one or more text portions. One or more media overlays are selected based on the one or more relevant tags and the assigned score for each of the one or more text portions. The text string with a reply interface for sending a reply message to the second client device is displayed.
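One way to sketch the parse, score, and tag-matching pipeline from this abstract; the scoring criterion used here is plain word overlap, which is an assumption, since the abstract leaves the criterion unspecified, and the tag vocabulary is invented:

```python
def select_overlay_tags(text, tag_index, top_n=2):
    """Parse a received text string into portions, score each tag by how
    many portions trigger it, and return the top-ranked relevant tags.

    tag_index: dict mapping a tag to the set of words that trigger it.
    """
    portions = [word.strip(".,!?").lower() for word in text.split()]
    scores = {tag: sum(portion in words for portion in portions)
              for tag, words in tag_index.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    # Keep only tags that matched at least one portion.
    return [tag for tag in ranked if scores[tag] > 0][:top_n]

tags = {
    "birthday": {"birthday", "cake"},
    "love": {"love", "heart"},
    "rain": {"rain", "umbrella"},
}
picked = select_overlay_tags("Happy birthday! Hope you get cake and love.", tags)
```

Media overlays (stickers) associated with the returned tags would then be offered in the reply interface.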

IPC Classes

  • H04L 51/04 - Real-time or near real-time messaging, e.g. instant messaging [IM]
  • G06F 40/205 - Parsing
  • G06F 40/247 - Thesauruses; Synonyms
  • H04L 51/10 - Multimedia information
  • H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services

29.

GENERATING GROUND TRUTH DATASETS FOR VIRTUAL REALITY EXPERIENCES

      
Application Number 18968137
Status Pending
Filing Date 2024-12-04
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor
  • Zhou, Kai
  • Qi, Qi
  • Hol, Jeroen

Abstract

Systems and methods of generating ground truth datasets for producing virtual reality (VR) experiences, for testing simulated sensor configurations, and for training machine-learning algorithms. In one example, a recording device with one or more cameras and one or more inertial measurement units captures images and motion data along a real path through a physical environment. A SLAM application uses the captured data to calculate the trajectory of the recording device. A polynomial interpolation module uses Chebyshev polynomials to generate a continuous time trajectory (CTT) function. The method includes identifying a virtual environment and assembling a simulated sensor configuration, such as a VR headset. Using the CTT function, the method includes generating a ground truth output dataset that represents the simulated sensor configuration in motion along a virtual path through the virtual environment. The virtual path is closely correlated with the motion along the real path as captured by the recording device. Accordingly, the output dataset produces a realistic and life-like VR experience. In addition, the methods described can be used to generate multiple output datasets, at various sample rates, which are useful for training the machine-learning algorithms which are part of many VR systems.
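The Chebyshev-based continuous-time trajectory (CTT) step can be illustrated with NumPy's Chebyshev fitting; a toy 2D path stands in for a real SLAM-derived device trajectory, and the degree and sample counts are arbitrary choices:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def fit_ctt(timestamps, positions, degree=6):
    """Fit one Chebyshev series per axis to discrete pose samples,
    yielding a continuous-time trajectory function that can be
    resampled at any rate."""
    series = [Chebyshev.fit(timestamps, positions[:, axis], degree)
              for axis in range(positions.shape[1])]
    return lambda t: np.stack([s(t) for s in series], axis=-1)

# Discrete samples of a smooth 2D path, as a SLAM pipeline might produce.
t = np.linspace(0.0, 1.0, 50)
xy = np.stack([np.sin(t), t ** 2], axis=1)
trajectory = fit_ctt(t, xy)

midpoint = trajectory(0.5)                           # evaluate between samples
resampled = trajectory(np.linspace(0.0, 1.0, 200))   # denser virtual sample rate
```

Because the CTT is continuous, the same recorded motion can be resampled at whatever rates the simulated sensor configurations require, which is what makes generating multiple ground truth datasets from one recording practical.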

IPC Classes

30.

GALLERY OF MESSAGES FROM INDIVIDUALS WITH A SHARED INTEREST

      
Application Number 18968630
Status Pending
Filing Date 2024-12-04
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor Sehn, Timothy

Abstract

A machine includes a processor and a memory connected to the processor. The memory stores instructions executed by the processor to receive a message and a message parameter indicative of a characteristic of the message, where the message includes a photograph or a video. A determination is made that the message parameter corresponds to a selected gallery, where the selected gallery includes a sequence of photographs or videos. The message is posted to the selected gallery in response to the determination. The selected gallery is supplied in response to a request.

IPC Classes

  • G06F 3/14 - Digital output to display device
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06F 3/04842 - Selection of displayed objects or displayed text elements
  • G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
  • G06F 3/0489 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using dedicated keyboard keys or combinations thereof
  • G06F 40/169 - Annotation, e.g. comment data or footnotes
  • G06T 11/60 - Editing figures and textCombining figures or text
  • G11B 27/031 - Electronic editing of digitised analogue information signals, e.g. audio or video signals
  • G11B 27/32 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on separate auxiliary tracks of the same or an auxiliary record carrier
  • H04L 51/10 - Multimedia information
  • H04L 51/214 - Monitoring or handling of messages using selective forwarding
  • H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
  • H04L 69/329 - Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the application layer [OSI layer 7]

31.

MARKER-BASED SHARED AUGMENTED REALITY SESSION CREATION

      
Application Number 18970043
Status Pending
Filing Date 2024-12-05
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor
  • Cowburn, Piers
  • Li, David
  • Müller Sandvik, Isac Andreas
  • Pan, Qi
  • Zohar, Matan

Abstract

A method for creating a marker-based shared augmented reality (AR) session starts with initializing a shared AR session by a first device and by a second device. The first device displays a marker on a display. The second device detects the marker using a camera included in the second device and captures an image of the marker using the camera. The second device determines a transformation between the first device and the second device using the image of the marker. A common coordinate frame is then determined using the transformation, the shared AR session is generated using the common coordinate frame, and the shared AR session is caused to be displayed by the first device and by the second device. Other embodiments are described herein.
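The device-to-device transformation via a shared marker can be sketched with homogeneous transforms; this is a rotation-free toy example, and real marker detection and pose estimation are out of scope:

```python
import numpy as np

def pose(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    m = np.eye(4)
    m[:3, :3] = rotation
    m[:3, 3] = translation
    return m

def second_to_first(marker_in_first, marker_in_second):
    """Transform mapping coordinates in the second device's frame into the
    first device's frame, given each device's estimate of the marker pose:
    T = M_first @ inv(M_second)."""
    return marker_in_first @ np.linalg.inv(marker_in_second)

# Toy example: both devices see the marker with identity rotation, but the
# second device sits one unit along x relative to the first.
m_first = pose(np.eye(3), [2.0, 0.0, 0.0])   # marker pose seen by device 1
m_second = pose(np.eye(3), [1.0, 0.0, 0.0])  # same marker seen by device 2
T = second_to_first(m_first, m_second)

# The second device's origin, expressed in the first device's frame.
origin_in_first = T @ np.array([0.0, 0.0, 0.0, 1.0])
```

With this transform known, both devices can express AR content in one common coordinate frame, which is what makes the session "shared".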

IPC Classes

  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

32.

DISPLAYING OBJECT NAMES IN ASSOCIATION WITH AUGMENTED REALITY CONTENT

      
Application Number 18970064
Status Pending
Filing Date 2024-12-05
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor
  • Anvaripour, Kaveh
  • Mourkogiannis, Celia Nicole

Abstract

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for displaying object names in association with augmented reality content. The program and method provide for receiving, by a messaging application running on a device, a first request to identify plural objects based on an image captured by a camera of the device; identifying, in response to receiving the first request, the plural objects based on the image; for each of the plural objects, determining at least one attribute of the object, and calculating a number of augmented reality content items, from plural augmented reality content items, corresponding to the at least one attribute of the object; selecting, from the plural objects, an object with a largest calculated number of corresponding augmented reality content items; and displaying a name for each of the plural objects based on the selecting.
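The selection step, counting AR content items per object attribute and taking the object with the largest count, can be sketched as follows; all object names and attributes are invented for illustration:

```python
def select_primary_object(object_attributes, ar_item_attributes):
    """For each identified object, count the AR content items matching its
    attribute, then pick the object with the largest count.

    object_attributes: dict of object name -> attribute.
    ar_item_attributes: attributes tagged on the available AR content items.
    """
    counts = {name: sum(1 for a in ar_item_attributes if a == attr)
              for name, attr in object_attributes.items()}
    return max(counts, key=counts.get), counts

objects = {"mug": "drinkware", "fern": "plant", "poster": "artwork"}
ar_items = ["drinkware", "plant", "drinkware", "artwork", "drinkware"]
selected, counts = select_primary_object(objects, ar_items)
```

The selected object's name would then be emphasized in the display, since it has the most augmented reality content available.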

IPC Classes

  • G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
  • G06F 16/9532 - Query formulation
  • G06F 16/9538 - Presentation of query results
  • G06F 16/955 - Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • H04L 51/046 - Interoperability with other network applications or services

33.

ANIMATED CHAT PRESENCE

      
Application Number 18970425
Status Pending
Filing Date 2024-12-05
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor
  • Chand, Jesse
  • Voss, Jeremy

Abstract

The present invention relates to a method for generating and causing display of a communication interface that facilitates the sharing of emotions through the creation of 3D avatars, and more particularly to the creation of such interfaces for displaying 3D avatars for use with mobile devices, cloud-based systems and the like.

IPC Classes

  • H04L 65/403 - Arrangements for multi-party communication, e.g. for conferences
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • G06V 10/56 - Extraction of image or video features relating to colour
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • G06V 40/19 - Sensors therefor
  • G10L 15/26 - Speech to text systems
  • H04L 51/10 - Multimedia information
  • H04L 65/1069 - Session establishment or de-establishment
  • H04N 7/15 - Conference systems

34.

SUGGESTING RELEVANT GROUPS AND INDIVIDUALS IN MESSAGE REPLIES

      
Application Number US2024046338
Publication Number 2025/059269
Status In Force
Filing Date 2024-09-12
Publication Date 2025-03-20
Owner SNAP INC. (USA)
Inventor
  • Boyd, Nathan Kenneth
  • Connolly, Michael James
  • Grippi, Daniel Vincent
  • Taitz, David Phillip

Abstract

A system and method for suggesting relevant groups and recipients when replying to messages in a messaging application. In response to a first received message, the system identifies groups with membership comprising the sender and receiver. Interface elements representing these mutual groups are displayed as selectable suggestions. The receiving user can choose groups to include in the reply, along with other users. Suggested groups are determined based on recent interactions, mutual connections, and message content. Users can also create new groups from suggestions for ongoing messaging. By recommending shared groups and relevant recipients, the system enables efficient context-based selection when replying. The suggestions aim to streamline recipient picking through intuitive interfaces and machine learning algorithms. This improves the user experience for seamless messaging discussions with appropriate recipients.

IPC Classes

  • G06Q 10/107 - Computer-aided management of electronic mailing [e-mailing]
  • H04L 51/216 - Handling conversation history, e.g. grouping of messages in sessions or threads
  • H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services

35.

SUGGESTING RELEVANT GROUPS AND INDIVIDUALS IN MESSAGE REPLIES

      
Application Number 18468276
Status Pending
Filing Date 2023-09-15
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor
  • Boyd, Nathan Kenneth
  • Conolly, Michael James
  • Grippi, Daniel Vincent
  • Taitz, David Phillip

Abstract

A system and method for suggesting relevant groups and recipients when replying to messages in a messaging application. In response to a first received message, the system identifies groups with membership comprising the sender and receiver. Interface elements representing these mutual groups are displayed as selectable suggestions. The receiving user can choose groups to include in the reply, along with other users. Suggested groups are determined based on recent interactions, mutual connections, and message content. Users can also create new groups from suggestions for ongoing messaging. By recommending shared groups and relevant recipients, the system enables efficient context-based selection when replying. The suggestions aim to streamline recipient picking through intuitive interfaces and machine learning algorithms. This improves the user experience for seamless messaging discussions with appropriate recipients.

IPC Classes

  • H04L 51/216 - Handling conversation history, e.g. grouping of messages in sessions or threads
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • H04L 51/043 - Real-time or near real-time messaging, e.g. instant messaging [IM] using or handling presence information
  • H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services

36.

IMPLEMENTING USER INTERFACES OF OTHER APPLICATIONS

      
Application Number 18796695
Status Pending
Filing Date 2024-08-07
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor
  • Burckle, Chris
  • Wehrman, Ian Anthony

Abstract

A first application uses a user interface (UI) component of a second application to determine a user intent based on user input and then determines an action to perform based on the determined user intent. The first application makes it easier for the user to learn the UI of the second application. Example methods include a first application displaying a first content item, the first content item being content of the first application, and the first application displaying a second content item, the second content item being content of a second application. The method may further include, in response to a second selection of a second user interface item associated with the second content item, the first application determining a user intent and an action associated with the user intent based on a second user interface, the second user interface associated with the second application.

IPC Classes

  • G06F 3/04886 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
  • G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons

37.

OBJECT COUNTING ON AR WEARABLE DEVICES

      
Application Number 18959194
Status Pending
Filing Date 2024-11-25
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor
  • Gurgul, Piotr
  • Moll, Sharon
  • Zakrzewski, Tomasz

Abstract

Systems, methods, and computer readable media for object counting on augmented reality (AR) wearable devices are disclosed. Embodiments are disclosed that enable display of a count of objects as part of a user view. Upon receipt of a request to count objects, the AR wearable device captures an image of the user view. The AR wearable device transmits the image to a backend for processing to determine the objects in the image. The AR wearable device selects a group of objects of the determined objects to count and overlays boundary boxes over counted objects within the user view. The position of the boundary boxes is adjusted to account for movement of the AR wearable device. A hierarchy of objects is used to group together objects that are related but have different labels or names.

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
  • G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
  • G06V 20/68 - Food, e.g. fruit or vegetables

38.

INTERACTIVE FASHION WITH MUSIC AR

      
Application Number 18965105
Status Pending
Filing Date 2024-12-02
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor
  • Berger, Itamar
  • Dudovitch, Gal
  • Sasson, Gal
  • Mishin Shuvi, Ma'Ayan
  • Zohar, Matan

Abstract

Methods and systems are disclosed for performing operations comprising: receiving a monocular image that includes a depiction of a person wearing an article of clothing; generating a segmentation of the article of clothing worn by the person in the monocular image; obtaining one or more audio-track related augmented reality elements; and applying the one or more audio-track related augmented reality elements to the article of clothing worn by the person based on the segmentation of the article of clothing worn by the person.

IPC Classes

  • G10H 1/36 - Accompaniment arrangements
  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • G06T 7/11 - Region-based segmentation
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
  • G10H 1/00 - Details of electrophonic musical instruments

39.

EYEWEAR DEVICE INPUT MECHANISM

      
Application Number 18965470
Status Pending
Filing Date 2024-12-02
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor Hanover, Matthew

Abstract

An electronics-enabled eyewear device provides a primary command channel and a secondary command channel for receiving user input during untethered wear, one of the command channels providing for tap input detected by motion sensor(s) incorporated in a body of the eyewear device. A predefined tap sequence or pattern can be applied to the frame of the device to trigger a device function. In one example, a double tap of the device's frame causes a charge-level display indicating the battery charge level.

IPC Classes

  • H04N 23/62 - Control of parameters via user interfaces
  • G02C 11/00 - Non-optical adjuncts; Attachment thereof
  • G02C 11/04 - Illuminating means
  • G03B 17/48 - Details of cameras or camera bodies; Accessories therefor adapted for combination with other photographic or optical apparatus
  • G03B 17/56 - Accessories
  • G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
  • H04N 23/54 - Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
  • H04N 23/56 - Cameras or camera modules comprising electronic image sensorsControl thereof provided with illuminating means
  • H04N 23/57 - Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
  • H04N 23/667 - Camera operation mode switching, e.g. between still and video, sport and normal or high and low resolution modes

40.

SKELETAL TRACKING USING PREVIOUS FRAMES

      
Application Number 18965798
Status Pending
Filing Date 2024-12-02
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor
  • Assouline, Avihay
  • Berger, Itamar
  • Dudovitch, Gal
  • Zohar, Matan

Abstract

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for detecting a pose of a user. The program and method include operations comprising receiving a monocular image that includes a depiction of a body of a user; detecting a plurality of skeletal joints of the body based on the monocular image; accessing a video feed comprising a plurality of monocular images received prior to the monocular image; filtering, using the video feed, the plurality of skeletal joints of the body detected based on the monocular image; and determining a pose represented by the body depicted in the monocular image based on the filtered plurality of skeletal joints of the body.
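
The filtering step above, which stabilizes per-frame joint detections against the prior video feed, might be approximated by a simple exponential moving average over joint coordinates. This is an illustrative sketch under that assumption, not the disclosed method.

```python
def smooth_joints(prev_filtered, current, alpha=0.7):
    """Blend the current frame's 2D joint estimates with the previous
    filtered frame to suppress per-frame jitter. `alpha` weights the
    current detection; (1 - alpha) weights the history."""
    if prev_filtered is None:
        # First frame: no history yet, pass detections through unchanged.
        return list(current)
    return [
        (alpha * cx + (1 - alpha) * px, alpha * cy + (1 - alpha) * py)
        for (cx, cy), (px, py) in zip(current, prev_filtered)
    ]
```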

IPC Classes

  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
  • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • G06V 20/64 - Three-dimensional objects
  • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
  • H04L 51/04 - Real-time or near real-time messaging, e.g. instant messaging [IM]
  • H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display

41.

SELECTIVE IDENTIFICATION AND ORDER OF IMAGE MODIFIERS

      
Application Number 18967192
Status Pending
Filing Date 2024-12-03
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor
  • Charlton, Ebony James
  • Evans, Michael John
  • Hare, Samuel Edward
  • Mcphee, Andrew James
  • Murphy, Robert Cornelius
  • Pilipski, Eitan

Abstract

Systems, devices, media and methods are presented for presentation of modified objects within a video stream. The systems and methods select an object of interest depicted within a user interface based on an associated image modifier, determine a modifier context based at least in part on one or more characteristics of the selected object, identify a set of image modifiers based on the modifier context, rank a first portion of the identified set of image modifiers based on a primary ordering characteristic, rank a second portion of the identified set of image modifiers based on a secondary ordering characteristic, and cause presentation of the modifier icons for the ranked set of image modifiers.
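
The two-tier ranking described above, where one portion of the modifier set is ordered by a primary characteristic and the remainder by a secondary one, can be sketched as two sorts concatenated. The field names and split parameter below are illustrative assumptions.

```python
def rank_modifiers(modifiers, split, primary_key, secondary_key):
    """Rank the first `split` modifiers by `primary_key` (descending)
    and the remainder by `secondary_key` (descending), then concatenate
    so the primary-ranked portion is presented first."""
    first = sorted(modifiers[:split], key=lambda m: m[primary_key], reverse=True)
    rest = sorted(modifiers[split:], key=lambda m: m[secondary_key], reverse=True)
    return first + rest
```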

IPC Classes

  • H04N 5/272 - Means for inserting a foreground image in a background image, i.e. inlay, outlay
  • G11B 27/11 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information not detectable on the record carrier
  • G11B 27/34 - Indicating arrangements
  • H04N 23/62 - Control of parameters via user interfaces
  • H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders

42.

AUGMENTED REALITY SPATIAL AUDIO EXPERIENCE

      
Application Number 18967473
Status Pending
Filing Date 2024-12-03
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor
  • Canberk, Ilteris
  • Kang, Shin Hwun

Abstract

Devices, media, and methods are presented for an immersive augmented reality (AR) experience using an eyewear device with spatial audio. The eyewear device has a processor, a memory, an image sensor, and a speaker system. The eyewear device captures image information for an environment surrounding the device and identifies an object location within the same environment. The eyewear device then associates a virtual object with the identified object location. The eyewear device monitors the position of the device with respect to the virtual object and presents audio signals to alert the user that the identified object is in the environment.
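
The spatial-audio alert described above depends on the device's position relative to the virtual object. A minimal constant-power stereo panning sketch follows; the function names and the azimuth-to-pan mapping are assumptions, not the disclosed speaker-system design.

```python
import math

def pan_gains(listener_xy, source_xy, heading_rad):
    """Constant-power left/right gains for a source at `source_xy`,
    heard by a listener at `listener_xy` facing `heading_rad`."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    azimuth = math.atan2(dy, dx) - heading_rad
    # Clamp azimuth to [-pi/2, pi/2] and map to a pan value in [-1, 1].
    pan = max(-1.0, min(1.0, azimuth / (math.pi / 2)))
    theta = (pan + 1.0) * math.pi / 4.0   # 0 .. pi/2
    return math.cos(theta), math.sin(theta)  # (left, right)
```

Constant-power panning keeps the summed power of the two channels at 1.0 for every pan position, so the alert stays at a steady perceived loudness as the user turns.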

IPC Classes

  • H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
  • G02B 27/01 - Head-up displays

43.

OPTICAL ARRANGEMENT FOR A DISPLAY

      
Application Number 18968401
Status Pending
Filing Date 2024-12-04
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor
  • Mills, Rory Thomas Alexander
  • Macken, Ian Thomas

Abstract

An optical arrangement to transmit an image from an image plane to a user's eye. The arrangement providing a folded optical transmission path comprising a collimating element, having a first optical element with a first plurality of optically powered surfaces; and a second optical element comprising at least one optically powered surface. The collimating element to receive light forming the image from an image source and collimate and output the light. The optically powered surfaces having a plurality of interfaces along the folded optical path. A refractive index change at each interface is predetermined to control the direction of light passing through each interface. One surface of each of the first and the second optical elements being adjacent to one another. The adjacent surfaces having dissimilar shapes and each defining an angle with a respective other surface of the relevant optical element at opposing ends of the adjacent surfaces.

IPC Classes

  • G02B 27/01 - Head-up displays
  • G02B 17/08 - Catadioptric systems
  • G02B 27/09 - Beam shaping, e.g. changing the cross-sectioned area, not otherwise provided for

44.

WEARABLE DEVICE LOCATION SYSTEMS

      
Application Number 18969480
Status Pending
Filing Date 2024-12-05
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor
  • Tham, Yu Jiang
  • Robertson, John James
  • Nilles, Gerald
  • Heger, Jason
  • Vadivelu, Praveen Babu

Abstract

Systems, methods, devices, computer readable media, and other various embodiments are described for location management processes in wearable electronic devices. Performance of such devices is improved with reduced time to first fix of location operations in conjunction with low-power operations. In one embodiment, low-power circuitry manages high-speed circuitry and location circuitry to provide location assistance data from the high-speed circuitry to the low-power circuitry automatically on initiation of location fix operations as the high-speed circuitry and location circuitry are booted from low-power states. In some embodiments, the high-speed circuitry is returned to a low-power state prior to completion of a location fix and after capture of content associated with initiation of the location fix. In some embodiments, high-speed circuitry is booted after completion of a location fix to update location data associated with content.

IPC Classes

  • H04W 52/02 - Power saving arrangements
  • G01S 5/00 - Position-fixing by co-ordinating two or more direction or position-line determinations; Position-fixing by co-ordinating two or more distance determinations
  • H04B 1/3827 - Portable transceivers
  • H04W 4/029 - Location-based management or tracking services

45.

DEFORMING REAL-WORLD OBJECT USING AN EXTERNAL MESH

      
Application Number 18969689
Status Pending
Filing Date 2024-12-05
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor
  • Zohar, Matan
  • Zhao, Yanli
  • Fulkerson, Brian
  • Guler, Riza Alp

Abstract

Methods and systems are disclosed for performing operations comprising: receiving a video that includes a depiction of a real-world object; generating a three-dimensional (3D) body mesh associated with the real-world object that tracks movement of the real-world object across frames of the video; determining UV positions of the real-world object depicted in the video to obtain pixel values associated with the UV positions; generating an external mesh and associated augmented reality (AR) element representing the real-world object based on the pixel values associated with the UV positions; deforming the external mesh based on changes to the 3D body mesh and a deformation parameter; and modifying the video to replace the real-world object with the AR element based on the deformed external mesh.

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06T 7/20 - Analysis of motion
  • G06T 7/60 - Analysis of geometric attributes
  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation

46.

GENERATING AND DISPLAYING CUSTOMIZED AVATARS IN ELECTRONIC MESSAGES

      
Application Number 18969983
Status Pending
Filing Date 2024-12-05
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor
  • Baldwin, Dorian Franklin
  • Blackstock, Jacob Edward
  • Kennedy, David James
  • Panth, Shahan

Abstract

Among other things, embodiments of the present disclosure improve the functionality of electronic messaging software and systems by generating customized images with avatars of different users within electronic messages. For example, users of different mobile computing devices can exchange electronic communications with images generated to include avatars representing themselves as well as their friends, colleagues, and other acquaintances.

IPC Classes

  • H04L 51/08 - Annexed information, e.g. attachments
  • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
  • G06T 13/80 - 2D animation, e.g. using sprites
  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • H04L 51/04 - Real-time or near real-time messaging, e.g. instant messaging [IM]
  • H04L 51/063 - Content adaptation, e.g. replacement of unsuitable content
  • H04L 51/10 - Multimedia information
  • H04L 67/52 - Network services specially adapted for the location of the user terminal
  • H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication

47.

AUGMENTED REALITY SPEECH BALLOON SYSTEM

      
Application Number 18969990
Status Pending
Filing Date 2024-12-05
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor
  • Cowburn, Piers
  • Pan, Qi
  • Pilipski, Eitan

Abstract

Disclosed is an augmented reality system to generate and cause display of an augmented reality interface at a client device. Various embodiments may detect speech, identify a source of the speech, transcribe the speech to a text string, generate a speech bubble that is based on properties of the speech and includes a presentation of the text string, and cause display of the speech bubble at a location in the augmented reality interface based on the source of the speech.

IPC Classes

  • G06F 40/58 - Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
  • G06F 40/205 - Parsing
  • G06F 40/30 - Semantic analysis
  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06T 11/60 - Editing figures and text; Combining figures or text
  • G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • G10L 15/25 - Speech recognition using non-acoustical features using position of the lips, movement of the lips or face analysis
  • G10L 15/26 - Speech to text systems
  • G10L 21/10 - Transforming into visible information
  • G10L 25/63 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for estimating an emotional state

48.

CROWD SOURCED MAPPING SYSTEM

      
Application Number 18970016
Status Pending
Filing Date 2024-12-05
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor
  • Cowburn, Piers
  • Müller Sandvik, Isac Andreas
  • Pan, Qi
  • Li, David

Abstract

A crowd-sourced modeling system to perform operations that include: receiving image data that comprises image attributes; accessing a 3D model based on at least the image attributes of the image data, wherein the 3D model comprises a plurality of parts that collectively depict an object or environment; identifying a change in the object or environment based on a comparison of the image data with the plurality of parts of the 3D model, the change corresponding to a part of the 3D model from among the plurality of parts; and generating an update to the part of the 3D model based on the image attributes of the image data.

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06F 16/51 - Indexing; Data structures therefor; Storage structures
  • G06F 16/54 - Browsing; Visualisation therefor
  • G06T 7/00 - Image analysis
  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • H04L 51/224 - Monitoring or handling of messages providing notification on incoming messages, e.g. pushed notifications of received messages
  • H04L 67/01 - Protocols
  • H04L 67/51 - Discovery or management thereof, e.g. service location protocol [SLP] or web services

49.

PERIODIC PARAMETER ESTIMATION FOR VISUAL-INERTIAL TRACKING SYSTEMS

      
Application Number 18970124
Status Pending
Filing Date 2024-12-05
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor
  • Halmetschlager-Funek, Georg
  • Kalkgruber, Matthias
  • Wolf, Daniel
  • Zillner, Jakob

Abstract

A method for calibrating a visual-inertial tracking system is described. A device operates the visual-inertial tracking system without receiving a tracking request from a virtual object display application. In response to operating the visual-inertial tracking system, the device accesses sensor data from sensors at the device. The device identifies, based on the sensor data, a first calibration parameter value of the visual-inertial tracking system and stores the first calibration parameter value. The system detects a tracking request from the virtual object display application. In response to the tracking request, the system accesses the first calibration parameter value and determines a second calibration parameter value from the first calibration parameter value.
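
The warm-start behaviour described above, in which a calibration value estimated while the tracker is idle is reused when a tracking request arrives, can be sketched as a small cache. The class and method names are illustrative assumptions.

```python
class CalibrationCache:
    """Stores a calibration parameter estimated in the background, so a
    later tracking request can start from it instead of recalibrating
    from scratch."""

    def __init__(self):
        self._stored = None

    def background_update(self, estimate):
        # First calibration parameter value, identified without any
        # tracking request pending.
        self._stored = estimate

    def on_tracking_request(self, default=0.0, refine=lambda v: v):
        # Derive the second calibration value from the stored first one;
        # fall back to a default when nothing has been stored yet.
        seed = self._stored if self._stored is not None else default
        return refine(seed)
```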

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 3/038 - Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
  • H04L 67/131 - Protocols for games, networked simulations or virtual reality

50.

MIXED REALITY MEDIA CONTENT

      
Application Number 18970441
Status Pending
Filing Date 2024-12-05
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor
  • Moll, Sharon
  • Gurgul, Piotr
  • Zhang, Dawei

Abstract

A mixed-reality media content system may be configured to perform operations that include: causing display of image data at a client device, the image data comprising a depiction of an object that includes a graphical code at a position upon the object; detecting the graphical code at the position upon the depiction of the object based on the image data; accessing media content within a media repository based on the graphical code scanned by the client device; and causing display of a presentation of the media content at the position of the graphical code upon the depiction of the object at the client device.

IPC Classes

  • H04N 21/8545 - Content authoring for generating interactive applications
  • G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
  • G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code

51.

GENERATING A CONTEXTUAL SEARCH STREAM

      
Application Number 18970471
Status Pending
Filing Date 2024-12-05
First Publication Date 2025-03-20
Owner Snap Inc. (USA)
Inventor Lo, Bobby

Abstract

Systems and methods are provided for retrieving first query result data associated with a first user account and rendering the first query result data into a first result item, generating a shareable search result stream comprising the first result item associated with the first user account, retrieving second query result data associated with a second user account and rendering the second query result data into a second result item, adding the second result item to the shareable search result stream associated with the first user account, and providing the sharable search result stream comprising the first result item and the second result item to a first computing device associated with the first user account and a second computing device associated with the second user account.

IPC Classes

52.

ADDING NEARBY USERS

      
Application Number US2024045777
Publication Number 2025/058969
Status In Force
Filing Date 2024-09-09
Publication Date 2025-03-20
Owner SNAP INC. (USA)
Inventor
  • Srivastava, Nikesh
  • Taitz, David Phillip
  • Vallecillo, Yamill Antonio

Abstract

A user is offered the opportunity to send geographically nearby users a friend request. Example methods include accessing a location of a user system of a user, where the user is a member of an interaction platform, and determining a list of other users, where the list includes other users associated with other user systems that are within a threshold distance of the location of the user system, where the other users have a threshold number of connections with the user, and where the other users are members of the interaction platform. The method may further include causing to be displayed on a screen of the user system indications of the other users of the list and user interface items for the user to send a friend request to a corresponding other user of the list.
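
The distance and mutual-connection thresholds in the abstract can be sketched with a haversine filter. The threshold values and record field names below are assumptions for illustration.

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(h))

def nearby_candidates(my_loc, others, max_km=1.0, min_mutuals=2):
    """Users within `max_km` of `my_loc` who also share at least
    `min_mutuals` connections with the requesting user."""
    return [u["id"] for u in others
            if haversine_km(my_loc, u["loc"]) <= max_km
            and u["mutuals"] >= min_mutuals]
```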

IPC Classes

  • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
  • H04L 51/222 - Monitoring or handling of messages using geographical location information, e.g. messages transmitted or received in proximity of a certain spot or area
  • H04W 4/02 - Services making use of location information

53.

VIRTUAL MANIPULATION OF AUGMENTED AND VIRTUAL REALITY OBJECTS

      
Application Number 18463113
Status Pending
Filing Date 2023-09-07
First Publication Date 2025-03-13
Owner Snap Inc. (USA)
Inventor Spong, Mason

Abstract

Systems and methods are provided. For example, a method includes determining a position of a user's hand and identifying a manipulation gesture performed by the user targeting a virtual object. The method also includes determining a three-dimensional (3D) origin point based on the position of the user's hand when the manipulation gesture is performed, and determining a 3D end point based on a movement of the user's hand from the origin point. The method additionally includes deriving a 3D vector based on the 3D origin point and the 3D end point, and applying an action to the targeted virtual object based on the 3D vector, wherein the targeted virtual object is at a distance greater than the user's arm reach.
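
The vector derivation above, with an origin point at the start of the manipulation gesture and an end point after the hand movement, can be sketched as plain 3D arithmetic. The gain factor that amplifies hand motion for out-of-reach objects is an illustrative assumption.

```python
def manipulation_vector(origin, end):
    """3D vector from the gesture's origin point to its end point."""
    return tuple(e - o for o, e in zip(origin, end))

def apply_translation(object_pos, vec, gain=5.0):
    """Translate a distant virtual object along the hand vector,
    amplified by `gain` so small hand motions can move objects
    beyond arm's reach."""
    return tuple(p + gain * v for p, v in zip(object_pos, vec))
```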

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
  • G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
  • G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
  • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
  • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition

54.

OPTICAL WAVEGUIDE MANUFACTURING METHOD

      
Application Number 18719675
Status Pending
Filing Date 2022-12-21
First Publication Date 2025-03-13
Owner Snap Inc. (USA)
Inventor
  • Alexeev, Arseny
  • Nagareddy, Venkata Karthik
  • Ojha, Nirdesh
  • Greenhalgh, Philip Andrew
  • Hayes, David

Abstract

An optical waveguide manufacturing method includes: receiving a master template having a plurality of individual waveguide structures imprinted thereon; coating the master template with a curable master template stamp material; curing the master template stamp material to form a master template stamp; separating the master template stamp from the master template; imprinting the master template stamp onto one or more first substrates having an imprintable coating to form one or more master template copies, each master template copy having a plurality of individual waveguide structure copies imprinted thereon; curing the one or more master template copies; separating the master template stamp from the one or more master template copies; coating one of the one or more master template copies with a curable working stamp material; curing the working stamp material to form a working stamp for manufacturing optical waveguides; and separating the master template copy from the working stamp.

IPC Classes

  • G02B 6/132 - Integrated optical circuits characterised by the manufacturing method by deposition of thin films
  • G02B 6/10 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings of the optical waveguide type
  • G02B 6/12 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings of the optical waveguide type of the integrated circuit kind
  • G02B 6/136 - Integrated optical circuits characterised by the manufacturing method by etching
  • G02B 6/136 - Integrated optical circuits characterised by the manufacturing method by etching

55.

EMG-BASED SPEECH DETECTION AND COMMUNICATION

      
Application Number 18887848
Status Pending
Filing Date 2024-09-17
First Publication Date 2025-03-13
Owner Snap Inc. (USA)
Inventor Ziv, Assif

Abstract

Systems and methods are provided for performing operations comprising: detecting, by one or more electromyograph (EMG) electrodes of an EMG communication device, subthreshold muscle activation signals of one or more muscles associated with speech production, the subthreshold muscle activation signals being generated in response to inner speech of a user; applying a machine learning technique to the subthreshold muscle activation signals to estimate one or more speech features corresponding to the subthreshold muscle activation signals, the machine learning technique being trained to establish a relationship between a plurality of training subthreshold muscle activation signals and ground truth speech features; generating visual or audible output based on the one or more speech features; and causing the visual or audible output to be processed by a messaging application to engage a feature of the messaging application.

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 1/16 - Constructional details or arrangements
  • G10L 15/02 - Feature extraction for speech recognition; Selection of recognition unit
  • G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
  • G10L 15/08 - Speech classification or search
  • G10L 15/16 - Speech classification or search using artificial neural networks
  • G10L 15/26 - Speech to text systems
  • H04M 1/7243 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
  • H04R 5/033 - Headphones for stereophonic communication

56.

VISUAL AND AUDIO WAKE COMMANDS

      
Application Number 18952335
Status Pending
Filing Date 2024-11-19
First Publication Date 2025-03-13
Owner Snap Inc. (USA)
Inventor
  • Colascione, Daniel
  • Hanover, Matthew
  • Korolev, Sergei
  • Marr, Michael David
  • Myers, Scott
  • Powderly, James

Abstract

A gesture-based wake process for an AR system is described herein. The AR system places a hand-tracking input pipeline of the AR system in a suspended mode. A camera component of the hand-tracking input pipeline detects a possible visual wake command being made by a user of the AR system. On the basis of detecting the possible visual wake command, the AR system wakes the hand-tracking input pipeline and places the camera component in a fully operational mode. If the AR system, using the hand-tracking input pipeline, verifies the possible visual wake command as an actual wake command, the AR system initiates execution of an AR application.
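
The two-stage wake flow above (suspended pipeline, coarse detection of a possible wake command, then full verification) can be sketched as a three-state machine. The state names and transitions are illustrative assumptions.

```python
class WakeStateMachine:
    """Two-stage gesture wake: a low-power detector promotes the system
    to a verifying state; the fully woken hand-tracking pipeline then
    confirms the wake command or returns the system to suspension."""
    SUSPENDED, VERIFYING, ACTIVE = "suspended", "verifying", "active"

    def __init__(self):
        self.state = self.SUSPENDED

    def on_possible_wake(self):
        if self.state == self.SUSPENDED:
            # Wake the pipeline; camera goes to fully operational mode.
            self.state = self.VERIFYING

    def on_verification(self, confirmed):
        if self.state == self.VERIFYING:
            self.state = self.ACTIVE if confirmed else self.SUSPENDED
```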

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition

57.

EVENT PLANNING IN A CONTENT SHARING PLATFORM

      
Application Number 18954079
Status Pending
Filing Date 2024-11-20
First Publication Date 2025-03-13
Owner Snap Inc. (USA)
Inventor
  • Taitz, David
  • Mourkogiannis, Celia Nicole
  • Lipowicz, David Zak
  • Parrott, Nathaniel
  • Samaranayake, Nayana
  • Giacalone, Ty

Abstract

Systems and methods are provided for receiving a selection to add an event invite media overlay to a media content item, receiving content to be added to the event invite media overlay, the content corresponding to an event, and adding to the event invite media overlay, the content corresponding to the event to generate a custom event invite media overlay. The systems and methods further comprise causing display of the custom event invite media overlay on the media content item, receiving at least one user to which to send an invite to the event, and sending, to a second computing device associated with the at least one user, an invite to the event, the invite comprising the custom event invite media overlay and the media content item.

IPC Classes

  • H04L 51/04 - Real-time or near real-time messaging, e.g. instant messaging [IM]
  • G06F 3/14 - Digital output to display device
  • G06T 11/60 - Editing figures and text; Combining figures or text
  • H04L 51/10 - Multimedia information
  • H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
  • H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time sensitive sessions, e.g. white board sharing or spawning of a subconference

58.

LED ARRAYS WITH DBR

      
Application Number 18954133
Status Pending
Filing Date 2024-11-20
First Publication Date 2025-03-13
Owner Snap Inc. (USA)
Inventor Wang, Tao

Abstract

A method of producing a light emitting diode (LED) array comprises: forming a plurality of layers of semiconductor material; forming a dielectric mask layer over the plurality of layers, the dielectric mask layer having an array of holes through it, each exposing an area of one of the layers of semiconductor material; and growing an LED structure in each of the holes arranged to emit light over a range of wavelengths. At least some of the plurality of layers form a distributed Bragg reflector (DBR) arranged to reflect light of at least some of said range of wavelengths.
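
A DBR reflects strongly when each of its layers has quarter-wave optical thickness at the target wavelength. The helper below computes that thickness from the textbook relation d = λ / (4n); it is general background, not a value taken from the application.

```python
def quarter_wave_thickness_nm(wavelength_nm, refractive_index):
    """Physical layer thickness giving quarter-wave optical thickness
    for a DBR layer: d = lambda / (4 * n)."""
    return wavelength_nm / (4.0 * refractive_index)
```

For example, a layer of refractive index 2.5 in a stack tuned to 450 nm (blue) would be 45 nm thick.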

IPC Classes

  • H01L 27/15 - Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components with at least one potential-jump barrier or surface barrier, specially adapted for light emission
  • H01L 33/00 - SEMICONDUCTOR DEVICES NOT COVERED BY CLASS - Details thereof
  • H01L 33/08 - SEMICONDUCTOR DEVICES NOT COVERED BY CLASS - Details thereof characterised by the semiconductor bodies with a plurality of light emitting regions, e.g. laterally discontinuous light emitting layer or photoluminescent region integrated within the semiconductor body
  • H01L 33/32 - Materials of the light emitting region containing only elements of group III and group V of the periodic system containing nitrogen
  • H01L 33/60 - Reflective elements

59.

LOCATION-BASED CONTEXT INFORMATION SHARING IN A MESSAGING SYSTEM

      
Application Number 18954968
Status Pending
Filing Date 2024-11-21
First Publication Date 2025-03-13
Owner Snap Inc. (USA)
Inventor
  • Dancie, Nicolas
  • Fallourd, Nicolas
  • Latargère, Ugo
  • Martin, Antoine

Abstract

Methods, systems, user interfaces, media, and devices are described for sharing the location of participants of a communication session established via a messaging system. Consistent with some embodiments, an electronic communication containing location information is received from a location sensor coupled to a first client device. A current location of the first user is determined based on the location information. A current location of the first user is displayed, on a display screen of a second client device, the current location of the first user being displayed within a messaging UI during a communication session between the computing device and the second computing device. The location information may be updated during the communication session as messages are exchanged and as a current location changes. Various embodiments may include additional information with the current location, such as a time period associated with the location, or other such information.

IPC Classes

  • H04W 4/029 - Location-based management or tracking services
  • G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
  • H04W 4/14 - Short messaging services, e.g. short message service [SMS] or unstructured supplementary service data [USSD]
  • H04W 76/10 - Connection setup

60.

GENERATIVE NEURAL NETWORK DISTILLATION

      
Application Number 18955297
Status Pending
Filing Date 2024-11-21
First Publication Date 2025-03-13
Owner Snap Inc. (USA)
Inventor
  • Tulyakov, Sergey
  • Korolev, Sergei
  • Stoliar, Aleksei
  • Gusarov, Maksim
  • Kotcur, Sergei
  • Crutchfield, Christopher Yale
  • Wan, Andrew

Abstract

A compact generative neural network can be distilled from a teacher generative neural network using a training network. The compact (student) network can be trained on the input data and output data of the teacher network. The training network trains the student network using a discrimination layer and one or more types of losses, such as perception loss and adversarial loss.
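The distillation objective the abstract describes can be sketched as a weighted sum of a perception term and an adversarial term. The following is a minimal, framework-free illustration in NumPy; the mean-squared perception term, the non-saturating adversarial term, and the weights `w_perc`/`w_adv` are illustrative assumptions, not details from the patent.

```python
import numpy as np

def perception_loss(student_feats, teacher_feats):
    # L2 distance between student and teacher outputs (a hypothetical
    # stand-in for a learned perceptual metric)
    return float(np.mean((student_feats - teacher_feats) ** 2))

def adversarial_loss(discriminator_scores):
    # Non-saturating generator loss: push discrimination-layer scores
    # on student outputs toward "real"
    eps = 1e-8
    return float(-np.mean(np.log(discriminator_scores + eps)))

def distillation_loss(student_out, teacher_out, disc_scores,
                      w_perc=1.0, w_adv=0.1):
    # Weighted combination of the two loss types named in the abstract;
    # the weighting scheme is an assumption.
    return (w_perc * perception_loss(student_out, teacher_out)
            + w_adv * adversarial_loss(disc_scores))
```

A perfectly-matching student with fully-fooled discriminator scores drives the combined loss toward zero, which is the signal the training network would minimize.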

IPC Classes

  • G06N 3/088 - Non-supervised learning, e.g. competitive learning
  • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
  • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06N 3/045 - Combinations of networks
  • G06N 3/08 - Learning methods
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
  • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06V 10/778 - Active pattern-learning, e.g. online learning of image or video features
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

61.

INTEGRATED DISPLAY MODULE OR APPARATUS AND METHODS FOR OPERATING AND MANUFACTURING THE SAME

      
Application Number 18956900
Status Pending
Filing Date 2024-11-22
First Publication Date 2025-03-13
Owner Snap Inc. (USA)
Inventor
  • Haisch, Mark E.
  • Chen, Fernando Y.
  • Kent, Rock Edward
  • Goetz, Howard V.
  • Thornton, Patrick R.
  • Heskett, Tyson
  • Kyles, Ian

Abstract

Systems, methods, apparatuses and devices provide an integrated display module or apparatus including a liquid crystal assembly with highly integrated components, including display driver circuitry and backplane circuitry. These approaches provide for packaging of small form-factor displays and microdisplays and, in some aspects, for their usage in virtual and augmented reality devices.

IPC Classes

62.

VIRTUAL MANIPULATION OF AUGMENTED AND VIRTUAL REALITY OBJECTS

      
Application Number US2024045570
Publication Number 2025/054441
Status In Force
Filing Date 2024-09-06
Publication Date 2025-03-13
Owner SNAP INC. (USA)
Inventor Spong, Mason

Abstract

Systems and methods are provided. For example, a method includes determining a position of a user's hand and identifying a manipulation gesture performed by the user targeting a virtual object. The method also includes determining a three-dimensional (3D) origin point based on the position of the user's hand when the manipulation gesture is performed, and determining a 3D end point based on a movement of the user's hand from the origin point. The method additionally includes deriving a 3D vector based on the 3D origin point and the 3D end point, and applying an action to the targeted virtual object based on the 3D vector, wherein the targeted virtual object is at a distance greater than the user's arm reach.
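The origin-to-end vector derivation described above can be sketched in a few lines. The linear gain that scales hand motion up for objects beyond arm reach is an illustrative assumption; the abstract does not specify how the derived vector is mapped onto the distant object.

```python
import numpy as np

def derive_action_vector(origin, end, object_distance, arm_reach=0.7):
    """Derive a 3D displacement to apply to a distant virtual object.

    origin/end: hand positions (metres) when the manipulation gesture
    starts and ends. The gain rule below (proportional to how far the
    object sits beyond arm reach) is a hypothetical choice.
    """
    vec = np.asarray(end, dtype=float) - np.asarray(origin, dtype=float)
    gain = max(object_distance / arm_reach, 1.0)  # never shrink near motion
    return vec * gain
```

For example, a 10 cm hand movement targeting an object at twice arm's reach would, under this gain rule, translate the object by 20 cm along the same 3D vector.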

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 3/0486 - Drag-and-drop

63.

SYSTEM AND METHOD FOR AUGMENTED REALITY BROADCAST INTEGRATION

      
Application Number US2024045596
Publication Number 2025/054457
Status In Force
Filing Date 2024-09-06
Publication Date 2025-03-13
Owner SNAP INC. (USA)
Inventor
  • Charlton, Ebony James
  • Jackson, Micah D.
  • Lo, Benjamin
  • Pessian, Arash
  • Cavins, Christopher

Abstract

A method and system for augmenting live video feeds with augmented reality (AR) effects. A live video feed comprising a plurality of video frames is received and the format of the video frames is determined. The video frames are converted to a format compatible with an AR software development kit (SDK). One or more AR effects from the AR SDK are applied to the converted frames. This can include detecting depictions of objects in the frames and applying effects to the detected objects. The effects can be selected based on detected object types. The frames are then re-converted back to the original format. If the frame rate differs between the video feed and AR SDK, frame rate conversion is performed before and after applying the AR effects. The augmented video frames including the AR effects are provided as output, such as for broadcast or display.
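The convert → apply-effects → re-convert flow, with optional frame-rate conversion on either side of the AR step, can be sketched as below. The `to_sdk`/`from_sdk`/`apply_effect` callables are hypothetical stand-ins for the SDK's format converters and effects, and the duplicate-or-drop resampling is a simplification of real frame-rate conversion.

```python
def resample_frames(frames, src_fps, dst_fps):
    # Naive frame-rate conversion by index mapping (duplicates or drops
    # frames); a production pipeline might interpolate instead.
    n_out = round(len(frames) * dst_fps / src_fps)
    return [frames[min(int(i * src_fps / dst_fps), len(frames) - 1)]
            for i in range(n_out)]

def augment_feed(frames, src_fps, sdk_fps, to_sdk, apply_effect, from_sdk):
    # Mirror the steps in the abstract: convert format, match the SDK's
    # frame rate, apply AR effects, then restore the original rate/format.
    converted = [to_sdk(f) for f in frames]
    if src_fps != sdk_fps:
        converted = resample_frames(converted, src_fps, sdk_fps)
    augmented = [apply_effect(f) for f in converted]
    if src_fps != sdk_fps:
        augmented = resample_frames(augmented, sdk_fps, src_fps)
    return [from_sdk(f) for f in augmented]
```

With matching frame rates the resampling steps are skipped and the pipeline reduces to a per-frame convert/augment/convert chain.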

IPC Classes

  • H04N 21/2187 - Live feed
  • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
  • H04N 21/81 - Monomedia components thereof

64.

SYSTEMS AND METHODS FOR WEARABLE INITIATED HANDSHAKING

      
Application Number 18954651
Status Pending
Filing Date 2024-11-21
First Publication Date 2025-03-13
Owner Snap Inc. (USA)
Inventor
  • Bamberger, Alex
  • Brook, Peter
  • Dahlquist, Nicolas
  • Hanover, Matthew
  • Patton, Russell Douglas
  • Rodriguez II, Jonathan M

Abstract

Systems and methods for device handshaking are described. Embodiments for client device and associated wearable device initiated handshaking are described. In certain embodiments, a device such as wearable camera eyeglasses having both high-speed wireless circuitry and low-power wireless circuitry communicates with a client device. The low-power wireless circuitry is used for signaling and to manage power on handshaking for the high-speed circuitry in order to reduce power consumption. An analysis of a high-speed connection status may be performed by a client device, and used to conserve power at the glasses with signaling from the client device to indicate when the high-speed circuitry of the glasses should be powered on.

IPC Classes

  • H04W 52/02 - Power saving arrangements
  • H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
  • H04W 52/38 - TPC being performed in particular situations

65.

JOINT AUDIO-VIDEO FACIAL ANIMATION SYSTEM

      
Application Number 18955286
Status Pending
Filing Date 2024-11-21
First Publication Date 2025-03-13
Owner Snap Inc. (USA)
Inventor
  • Cao, Chen
  • Chen, Xin
  • Chu, Wei
  • Xue, Zehao

Abstract

The present invention relates to a joint automatic audio-visual driven facial animation system that, in some example embodiments, includes a full-scale, state-of-the-art Large Vocabulary Continuous Speech Recognition (LVCSR) system with a strong language model for speech recognition, obtaining phoneme alignment from the word lattice.

IPC Classes

  • G06T 13/20 - 3D [Three Dimensional] animation
  • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • G10L 21/10 - Transforming into visible information
  • H04L 51/08 - Annexed information, e.g. attachments
  • H04R 27/00 - Public address systems

66.

AUGMENTED REALITY CONTENT GENERATORS INCLUDING 3D DATA IN A MESSAGING SYSTEM

      
Application Number 18955887
Status Pending
Filing Date 2024-11-21
First Publication Date 2025-03-13
Owner Snap Inc. (USA)
Inventor
  • Goodrich, Kyle
  • Hare, Samuel Edward
  • Lazarov, Maxim Maximov
  • Mathew, Tony
  • Mcphee, Andrew James
  • Moreno, Daniel
  • Shang, Wentao

Abstract

The subject technology generates a segmentation mask based on first image data. The subject technology applies the segmentation mask on first depth data to reduce a set of artifacts in a depth map based on the first depth data. The subject technology generates a packed depth map based at least in part on the depth map. The subject technology converts a single channel floating point texture to a raw depth map. The subject technology generates multiple channels. The subject technology applies, to the first image data and the first depth data, a first augmented reality content generator corresponding to a selected first selectable graphical item, the first image data and the first depth data being captured with a camera. The subject technology generates a message including the first augmented reality content generator as applied to the first image data and the first depth data.

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
  • G06T 5/77 - Retouching; Inpainting; Scratch removal

67.

RECREATING KEYBOARD AND MOUSE SOUNDS WITHIN VIRTUAL WORKING ENVIRONMENT

      
Application Number 18957507
Status Pending
Filing Date 2024-11-22
First Publication Date 2025-03-13
Owner Snap Inc. (USA)
Inventor
  • Francis, Brandon
  • Lin, Andrew Cheng-Min
  • Lin, Walton

Abstract

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for recreating keyboard and mouse sounds within a virtual working environment. The program and method provide for receiving, from a first client device of a first participant of a group of participants within a virtual working environment, a timing of keyboard and mouse input detected at the client device, the group of participants having been selected from among plural participants of the virtual working environment; generating, in response to the receiving, keyboard and mouse sounds that correspond to the timing of the keyboard and mouse input; and providing the generated keyboard and mouse sounds to one or more second client devices of respective one or more second participants of the group of participants, for presentation on the one or more second client devices.

IPC Classes

  • H04N 7/15 - Conference systems
  • G06F 3/02 - Input arrangements using manually operated switches, e.g. using keyboards or dials
  • H04L 65/1083 - In-session procedures
  • H04N 7/14 - Systems for two-way working

68.

RECOMMENDING FASHION ITEM FIT STYLE USING LANDMARKS

      
Application Number 18458807
Status Pending
Filing Date 2023-08-30
First Publication Date 2025-03-06
Owner Snap Inc. (USA)
Inventor
  • Assouline, Avihay
  • Berger, Itamar

Abstract

Methods and systems are disclosed for using machine learning models to recommend fashion item fit styles based on body surface landmarks. The methods and systems access one or more images depicting a person wearing one or more fashion items and process, using one or more machine learning models, the one or more images to estimate a data set comprising a set of body landmarks of the person, a set of garment classifications associated with the one or more fashion items, and a set of garment segmentations for the one or more fashion items. The methods and systems identify one or more fit styles associated with the person based on the estimated data set and cause presentation of one or more real-world fashion items matching the identified one or more fit styles.

IPC Classes

  • G06Q 30/0601 - Electronic shopping [e-shopping]
  • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
  • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

69.

CONTEXT BASED RESPONSES

      
Application Number 18797253
Status Pending
Filing Date 2024-08-07
First Publication Date 2025-03-06
Owner Snap Inc. (USA)
Inventor
  • Boyd, Nathan Kenneth
  • Connolly, Michael James
  • Grippi, Daniel Vincent
  • Taitz, David Phillip

Abstract

A response component determines the context of a received message and provides a user with a similar context to generate a response to the message. Example methods include accessing a first content item, determining an application used to generate the first content item, causing an indication of the first content item and an indication of the application to be displayed on a display of the computing device, and responding to a selection of the indication of the application by a user by running the application to generate a second content item.

IPC Classes

  • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
  • G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus

70.

CORNER SHIELD PROTECTION FOR SURFACE MOUNT DEVICES

      
Application Number 18819749
Status Pending
Filing Date 2024-08-29
First Publication Date 2025-03-06
Owner Snap Inc. (USA)
Inventor
  • Besseler, Edmilson
  • Tsau, Shu-Fong

Abstract

In some examples, a corner shield is provided for protecting a surface-mount device (SMD) on a printed circuit board (PCB). An example corner shield comprises a rigid structure configured to conform to a corner area of the SMD, and one or more mounting surfaces configured to mount the rigid structure to one or more soldering pads on the PCB adjacent to the SMD.

IPC Classes

  • H05K 1/18 - Printed circuits structurally associated with non-printed electric components

71.

USER PRESENCE STATUS INDICATORS GENERATION AND MANAGEMENT

      
Application Number 18949572
Status Pending
Filing Date 2024-11-15
First Publication Date 2025-03-06
Owner Snap Inc. (USA)
Inventor
  • Desserrey, Laurent
  • Eirinberg, Dylan Shane
  • Voss, Jeremy Baker

Abstract

A method and a system include providing for a group conversation between plural users including a first user and a second user; determining that the second user is active within one of the main conversation view or the experience page; upon determining that the second user is active in the main conversation view, providing a first graphical element for display on a first device associated with the first user, the first graphical element including an avatar and name of the second user; and upon determining that the second user is active in the experience page, providing a second graphical element for display on the first device associated with the first user, the second graphical element including the avatar and name of the second user together with an icon representing the experience page.

IPC Classes

  • H04L 51/043 - Real-time or near real-time messaging, e.g. instant messaging [IM] using or handling presence information
  • H04L 51/046 - Interoperability with other network applications or services
  • H04L 51/18 - Commands or executable codes
  • H04L 67/50 - Network services

72.

PERSONALIZED VIDEOS

      
Application Number 18950318
Status Pending
Filing Date 2024-11-18
First Publication Date 2025-03-06
Owner Snap Inc. (USA)
Inventor
  • Shaburov, Victor
  • Mashrabov, Alexander
  • Tkachenko, Grigoriy
  • Semenov, Ivan

Abstract

Systems and methods for providing personalized videos are provided. An example method includes receiving preprocessed videos including facial expression parameters, modifying a source face to adopt the facial expression parameters thereby generating a modified source face, inserting the modified source face into the preprocessed videos to generate one or more personalized videos, providing a first user interface enabling a user to select a personalized video from the one or more personalized videos, determining that the user has selected the personalized video from the one or more personalized videos, and, in response to the determination, providing a second user interface enabling the user to select, from a list of actions, an action to be applied to the selected personalized video.

IPC Classes

  • H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • H04N 5/265 - Mixing
  • H04N 5/272 - Means for inserting a foreground image in a background image, i.e. inlay, outlay
  • H04N 23/611 - Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body

73.

MEDIA GALLERY SHARING AND MANAGEMENT

      
Application Number 18953642
Status Pending
Filing Date 2024-11-20
First Publication Date 2025-03-06
Owner Snap Inc. (USA)
Inventor
  • Kennedy, David James
  • Muñoz Escalante, Diego
  • Spool, Arianne
  • Xia, Yinghua David

Abstract

Various embodiments include systems, methods, and non-transitory computer-readable media for sharing and managing media galleries. Consistent with these embodiments, a method includes receiving a request from a first device to share a media gallery that includes a user avatar; generating metadata associated with the media gallery; generating a message associated with the media gallery, the message at least including the media gallery identifier and the identifier of the user avatar; and transmitting the message to a second device of the recipient user.

IPC Classes

  • H04L 51/10 - Multimedia information
  • G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
  • H04L 67/146 - Markers for unambiguous identification of a particular session, e.g. session cookie or URL-encoding

74.

CONTEXT BASED RESPONSES

      
Application Number US2024044045
Publication Number 2025/049482
Status In Force
Filing Date 2024-08-27
Publication Date 2025-03-06
Owner SNAP INC. (USA)
Inventor
  • Boyd, Nathan Kenneth
  • Connolly, Michael James
  • Grippi, Daniel Vincent
  • Taitz, David Phillip

Abstract

A response component determines the context of a received message and provides a user with a similar context to generate a response to the message. Example methods include accessing a first content item, determining an application used to generate the first content item, causing an indication of the first content item and an indication of the application to be displayed on a display of the computing device, and responding to a selection of the indication of the application by a user by running the application to generate a second content item.

IPC Classes

  • G06F 9/451 - Execution arrangements for user interfaces
  • G06F 3/04842 - Selection of displayed objects or displayed text elements

75.

SHARING CONTENT ITEM COLLECTIONS IN A CHAT

      
Application Number US2024044072
Publication Number 2025/049497
Status In Force
Filing Date 2024-08-27
Publication Date 2025-03-06
Owner SNAP INC. (USA)
Inventor
  • Boyd, Nathan Kenneth
  • Grippi, Daniel Vincent
  • Taitz, David Phillip
  • Xia, Xingnan
  • Moreno Cuellar, Daniel

Abstract

Methods and systems are disclosed for sharing collections of content items in chat sessions. The methods and systems receive a request to share a first content item and present a GUI comprising a first set of options and a second set of options, the first set of options being associated with adding the first content item to a collection of content items that is accessible to a plurality of recipients, the second set of options being associated with sending the first content item to individual recipients. The methods and systems determine a set of target recipients of the first content item and select a content sharing link between a first link to the collection of content items and a second link directly to the first content item. The methods and systems send, to a target recipient, the content sharing link that has been selected.

IPC Classes

  • G06F 16/45 - Clustering; Classification
  • G06F 16/48 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
  • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]

76.

RECOMMENDING FASHION ITEM FIT STYLE USING LANDMARKS

      
Application Number US2024044274
Publication Number 2025/049635
Status In Force
Filing Date 2024-08-28
Publication Date 2025-03-06
Owner SNAP INC. (USA)
Inventor
  • Assouline, Avihay
  • Berger, Itamar

Abstract

Methods and systems are disclosed for using machine learning models to recommend fashion item fit styles based on body surface landmarks. The methods and systems access one or more images depicting a person wearing one or more fashion items and process, using one or more machine learning models, the one or more images to estimate a data set comprising a set of body landmarks of the person, a set of garment classifications associated with the one or more fashion items, and a set of garment segmentations for the one or more fashion items. The methods and systems identify one or more fit styles associated with the person based on the estimated data set and cause presentation of one or more real-world fashion items matching the identified one or more fit styles.

IPC Classes

  • G06Q 30/0601 - Electronic shopping [e-shopping]
  • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning

77.

CORNER SHIELD PROTECTION FOR SURFACE MOUNT DEVICES

      
Application Number US2024044500
Publication Number 2025/049791
Status In Force
Filing Date 2024-08-29
Publication Date 2025-03-06
Owner SNAP INC. (USA)
Inventor
  • Besseler, Edmilson
  • Tsau, Shu-Fong

Abstract

In some examples, a corner shield is provided for protecting a surface-mount device (SMD) on a printed circuit board (PCB). An example corner shield comprises a rigid structure configured to conform to a corner area of the SMD, and one or more mounting surfaces configured to mount the rigid structure to one or more soldering pads on the PCB adjacent to the SMD.

IPC Classes

  • H05K 3/30 - Assembling printed circuits with electric components, e.g. with resistor
  • H05K 3/34 - Assembling printed circuits with electric components, e.g. with resistor electrically connecting electric components or wires to printed circuits by soldering
  • H05K 1/11 - Printed elements for providing electric connections to or between printed circuits

78.

PACKAGED PRODUCT SCAN

      
Application Number US2024044841
Publication Number 2025/050025
Status In Force
Filing Date 2024-08-30
Publication Date 2025-03-06
Owner SNAP INC. (USA)
Inventor
  • Dela Rosa, Kevin Sarabia
  • Jeon, Byung Eun
  • Wang, Zelun

Abstract

Examples herein describe a product scan system for identifying packaged items in an image. The product scan system accesses image frames, detects a packaged item in the image frames, generates text feature data by extracting text features from the packaged item in the image frames, generates image feature data by extracting image features from the packaged item in the image frames, generates a first ranked set of query results using the generated text feature data, generates a second ranked set of query results using the generated image feature data, generates a final ranked set of query results, and presents a subset of the final ranked set of query results on a graphical user interface of the computing device.
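Merging the text-based and image-based ranked result sets into a final ranking can be sketched with a standard list-fusion technique. The abstract does not say how the final ranked set is produced; reciprocal-rank fusion below is an illustrative stand-in, and the constant `k=60` is a conventional default, not a detail from the patent.

```python
def fuse_rankings(text_results, image_results, k=60):
    """Combine two ranked query-result lists into one final ranking.

    Uses reciprocal-rank fusion: each item scores 1/(k + rank) per list
    it appears in, so items ranked highly by both modalities rise to
    the top of the final set.
    """
    scores = {}
    for results in (text_results, image_results):
        for rank, item in enumerate(results, start=1):
            scores[item] = scores.get(item, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)
```

An item found only by OCR text matching or only by visual matching still appears in the fused list, just below items supported by both feature types.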

IPC Classes

  • G06F 16/532 - Query formulation, e.g. graphical querying
  • G06F 16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • G06F 16/732 - Query formulation
  • G06F 16/783 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • G06Q 30/0601 - Electronic shopping [e-shopping]

79.

SYSTEM AND METHOD FOR AUGMENTED REALITY BROADCAST INTEGRATION

      
Application Number 18439491
Status Pending
Filing Date 2024-02-12
First Publication Date 2025-03-06
Owner Snap Inc. (USA)
Inventor
  • Charlton, Ebony James
  • Goedjen, Maxwell
  • Jackson, Micah D.
  • Lo, Benjamin
  • Pessian, Arash
  • Cavins, Christopher

Abstract

A method and system for augmenting live video feeds with augmented reality (AR) effects. A live video feed comprising a plurality of video frames is received and the format of the video frames is determined. The video frames are converted to a format compatible with an AR software development kit (SDK). One or more AR effects from the AR SDK are applied to the converted frames. This can include detecting depictions of objects in the frames and applying effects to the detected objects. The effects can be selected based on detected object types. The frames are then re-converted back to the original format. If the frame rate differs between the video feed and AR SDK, frame rate conversion is performed before and after applying the AR effects. The augmented video frames including the AR effects are provided as output, such as for broadcast or display.

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • H04N 21/2187 - Live feed
  • H04N 21/431 - Generation of visual interfaces; Content or additional data rendering

80.

SHARING CONTENT ITEM COLLECTIONS IN A CHAT

      
Application Number 18457003
Status Pending
Filing Date 2023-08-28
First Publication Date 2025-03-06
Owner Snap Inc. (USA)
Inventor
  • Boyd, Nathan Kenneth
  • Grippi, Daniel Vincent
  • Taitz, David Phillip
  • Xia, Xingnan
  • Moreno Cuellar, Daniel

Abstract

Methods and systems are disclosed for sharing collections of content items in chat sessions. The methods and systems receive a request to share a first content item and present a GUI comprising a first set of options and a second set of options, the first set of options being associated with adding the first content item to a collection of content items that is accessible to a plurality of recipients, the second set of options being associated with sending the first content item to individual recipients. The methods and systems determine a set of target recipients of the first content item and select a content sharing link between a first link to the collection of content items and a second link directly to the first content item. The methods and systems send, to a target recipient, the content sharing link that has been selected.

IPC Classes

  • H04N 21/4788 - Supplemental services, e.g. displaying phone caller identification or shopping application communicating with other users, e.g. chatting
  • H04N 21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers or prioritizing data for deletion
  • H04N 21/431 - Generation of visual interfaces; Content or additional data rendering

81.

PACKAGED PRODUCT SCAN

      
Application Number 18460393
Status Pending
Filing Date 2023-09-01
First Publication Date 2025-03-06
Owner Snap Inc. (USA)
Inventor
  • Dela Rosa, Kevin Sarabia
  • Jeon, Byung Eun
  • Wang, Zelun

Abstract

Examples herein describe a product scan system for identifying packaged items in an image. The product scan system accesses image frames, detects a packaged item in the image frames, generates text feature data by extracting text features from the packaged item in the image frames, generates image feature data by extracting image features from the packaged item in the image frames, generates a first ranked set of query results using the generated text feature data, generates a second ranked set of query results using the generated image feature data, generates a final ranked set of query results, and presents a subset of the final ranked set of query results on a graphical user interface of the computing device.

IPC Classes

  • G06V 20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
  • G06F 16/2457 - Query processing with adaptation to user needs
  • G06F 16/248 - Presentation of query results
  • G06V 10/40 - Extraction of image or video features
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06V 30/10 - Character recognition

82.

BUFFERED VIDEO RECORDING

      
Application Number 18516655
Status Pending
Filing Date 2023-11-21
First Publication Date 2025-03-06
Owner Snap Inc. (USA)
Inventor
  • Pang, Chao
  • Hao, Jianliang
  • Wu, Haoyun
  • Ma, Xiangying

Abstract

A system is provided that detects a start of a camera session, captures initial raw data frames, and stores them in memory. Upon determining that the camera session corresponds to a video recording session, the system activates a video recording pipeline. Upon determining that the video recording pipeline is active, the system retrieves the initial raw data frames, encodes them using the video recording pipeline, accesses additional captured raw data frames, and encodes those frames using the video recording pipeline until detection of an end of the camera session. Upon detecting the end of the camera session, the system deactivates the video recording pipeline.
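The buffer-then-encode flow in the abstract can be sketched as a bounded pre-session buffer that is drained into the encoder once the pipeline activates. The buffer size and the string-based `_encode` stand-in below are illustrative assumptions; a real pipeline would hand raw frames to a hardware encoder.

```python
from collections import deque

class BufferedRecorder:
    """Sketch of buffered video recording: frames captured before the
    session is known to be a recording are held in a bounded buffer,
    then encoded first when the recording pipeline activates."""

    def __init__(self, buffer_size=30):
        self.buffer = deque(maxlen=buffer_size)  # pre-decision raw frames
        self.encoded = []
        self.pipeline_active = False

    def capture(self, frame):
        if self.pipeline_active:
            self.encoded.append(self._encode(frame))
        else:
            self.buffer.append(frame)  # session intent not yet known

    def start_recording(self):
        # Activate the pipeline and drain buffered frames in order,
        # so the recording includes frames from before activation.
        self.pipeline_active = True
        while self.buffer:
            self.encoded.append(self._encode(self.buffer.popleft()))

    def stop_recording(self):
        # End of camera session: deactivate the pipeline.
        self.pipeline_active = False

    def _encode(self, frame):
        return f"enc({frame})"  # placeholder for the encoder
```

The bounded `deque` mirrors the practical constraint that only a short window of pre-recording frames can be kept in memory.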

IPC Classes

  • H04N 5/77 - Interface circuits between an apparatus for recording and another apparatus; between a recording apparatus and a television camera
  • H04N 5/91 - Television signal processing therefor

83.

3D CAPTIONS WITH FACE TRACKING

      
Application Number 18950395
Status Pending
Filing Date 2024-11-18
First Publication Date 2025-03-06
Owner Snap Inc. (USA)
Inventor
  • Goodrich, Kyle
  • Hare, Samuel Edward
  • Lazarov, Maxim Maximov
  • Mathew, Tony
  • Mcphee, Andrew James
  • Moreno, Daniel
  • Shang, Wentao

Abstract

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and method for performing operations comprising: receiving, by one or more processors that implement a messaging application, a video feed from a camera of a user device; detecting, by the messaging application, a face in the video feed; in response to detecting the face in the video feed, retrieving a three-dimensional (3D) caption; modifying the video feed to include the 3D caption at a position in 3D space of the video feed proximate to the face; and displaying a modified video feed that includes the face and the 3D caption.

IPC Classes

  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
  • G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
  • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
  • H04L 51/10 - Multimedia information

84.

RECONCILING EVENTS IN MULTI-NODE SYSTEMS USING HARDWARE TIMESTAMPS

      
Application Number 18950990
Status Pending
Filing Date 2024-11-18
First Publication Date 2025-03-06
Owner Snap Inc. (USA)
Inventor Feinman, Alex

Abstract

Techniques are described for reconciling events timestamped in different time domains in multi-node systems supporting low-latency hardware timestamping. First and second nodes having independent time bases are synchronized by the first node generating an event that is received effectively simultaneously at the first and second nodes, the first and second nodes recording a timestamp of receipt of the event, the first node asynchronously querying the second node for its timestamp of receipt of the event and comparing its timestamp of receipt of the event with the timestamp of receipt of the event by the second node, and the first node using a difference in the timestamps of receipt of the event by the first and second nodes to align the time bases of the first and second nodes. The nodes may include hardware timestamping functionality or use an external component (e.g., field programmable gate array) to provide the timestamping functionality.
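
The core of the synchronization step, computing a clock offset from the two nodes' hardware timestamps of the same event and using it to map timestamps between time domains, can be sketched as below; function and parameter names are illustrative:

```python
def align_time_bases(t_first_ns, t_second_ns):
    """Offset between two nodes' clocks, derived from each node's hardware
    timestamp of a single event received effectively simultaneously at both."""
    return t_second_ns - t_first_ns

def to_first_domain(event_ts_second_ns, offset_ns):
    # Map a timestamp recorded in the second node's time domain into the
    # first node's time domain using the measured offset.
    return event_ts_second_ns - offset_ns
```

With the offset in hand, the first node can reconcile any later event timestamped by the second node into its own time base.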

IPC Classes

  • H04J 3/06 - Synchronising arrangements
  • G02B 27/01 - Head-up displays
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 9/54 - Interprogram communication
  • G06F 13/40 - Bus structure

85.

OPTICAL STRUCTURE FOR AUGMENTED REALITY DISPLAY

      
Application Number 18952108
Status Pending
Filing Date 2024-11-19
First Publication Date 2025-03-06
Owner Snap Inc. (USA)
Inventor Valera, Mohmed Salim

Abstract

An augmented reality display is disclosed. A colour projector 2 emits an image in a narrow beam comprising three primary colours: red, green and blue. A pair of waveguides 4, 6 is provided in the path of the projected beam. A first input grating 8 receives light from the projector 2 and diffracts the received light so that diffracted wavelengths of the light in first and second primary colours are coupled into the first waveguide 6, and so that diffracted wavelengths of the light in second and third primary colours are coupled out of the first waveguide in a direction towards the second waveguide 4. A second input diffraction grating 10 receives light coupled out of the first waveguide 6 and diffracts the second and third primary colours so that they are coupled into the second waveguide 4.

IPC Classes

86.

Departure time estimation in a location sharing system

      
Application Number 16351436
Grant Number 12242979
Status In Force
Filing Date 2019-03-12
First Publication Date 2025-03-04
Grant Date 2025-03-04
Owner Snap Inc. (USA)
Inventor Baylin, Benoît

Abstract

Methods, systems, and devices for predicting a departure time of a user from a labeled place. In some embodiments, the location sharing system accesses historical location data of the user and extracts, for one or more labeled locations of the user, an attendance record of the user at the labeled place. Then, when the location sharing system receives current location data of the user and determines that the user is currently at the labeled place, the system predicts a departure time of the user from the labeled place based on the attendance record of the user at that place. Some embodiments share the predicted departure time of the user with the user's friends via a map GUI.
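
A minimal stand-in for the attendance-record predictor described in this abstract is a per-weekday median of historical departure times; the record format and function name are assumptions for illustration, not the learned model the filing covers:

```python
from statistics import median

def predict_departure(attendance, weekday):
    """Median historical departure hour at a labeled place for a given
    weekday. `attendance` is a list of (weekday, departure_hour) records
    extracted from the user's historical location data."""
    hours = [hour for day, hour in attendance if day == weekday]
    if not hours:
        return None  # no attendance history for that weekday
    return median(hours)
```

The IPC classes on this record (recurrent networks, ensemble learning) suggest the actual embodiments use trained models rather than a simple summary statistic.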

IPC Classes

  • G06N 20/00 - Machine learning
  • G06F 16/28 - Databases characterised by their database models, e.g. relational or object models
  • G06F 16/29 - Geographical information databases
  • G06N 5/04 - Inference or reasoning models
  • H04L 67/01 - Protocols
  • H04L 67/52 - Network services specially adapted for the location of the user terminal
  • G06N 3/044 - Recurrent networks, e.g. Hopfield networks
  • G06N 3/08 - Learning methods
  • G06N 20/20 - Ensemble learning

87.

SNAP OS

      
Application Number 238299800
Status Pending
Filing Date 2025-02-27
Owner Snap Inc. (USA)
NICE Classes
  • 09 - Scientific and electric apparatus and instruments
  • 42 - Scientific, technological and industrial services, research and design

Goods & Services

(1) Peripherals; augmented reality glasses; augmented reality headsets; computer hardware, peripherals and software for remotely accessing, capturing, transmitting and displaying pictures, video, audio and data; software for setting up, configuring, and controlling wearable computer hardware and peripherals; software for setting up, configuring, and controlling wearable computer hardware and peripheral devices in the field of augmented reality; downloadable computer operating software for augmented reality; downloadable mobile operating system software; downloadable computer operating system software; downloadable computer operating system for operating augmented reality devices

(1) Providing temporary use of online non-downloadable middleware for providing an interface between augmented reality devices and operating systems; providing temporary use of online non-downloadable software for providing an interface between augmented reality devices and operating systems; providing temporary use of online non-downloadable software for providing an interface between computer peripheral devices and operating systems

88.

SNAP OS

      
Application Number 019149104
Status Pending
Filing Date 2025-02-27
Owner Snap Inc. (USA)
NICE Classes
  • 09 - Scientific and electric apparatus and instruments
  • 42 - Scientific, technological and industrial services, research and design

Goods & Services

Computer peripherals; augmented reality glasses; augmented reality headsets; computer hardware, peripherals and software for remotely accessing, capturing, transmitting and displaying pictures, video, audio and data; software for setting up, configuring, and controlling wearable computer hardware and peripherals; software for setting up, configuring, and controlling wearable computer hardware and peripheral devices in the field of augmented reality; downloadable computer operating software for augmented reality; downloadable mobile operating system software; downloadable computer operating system software; downloadable computer operating system for operating augmented reality devices. Providing temporary use of online non-downloadable middleware for providing an interface between augmented reality devices and operating systems; providing temporary use of online non-downloadable software for providing an interface between augmented reality devices and operating systems; providing temporary use of online non-downloadable software for providing an interface between computer peripheral devices and operating systems.

89.

SNAPCHAT PLATINUM

      
Serial Number 99058735
Status Pending
Filing Date 2025-02-27
Owner Snap Inc.
NICE Classes
  • 35 - Advertising and business services
  • 09 - Scientific and electric apparatus and instruments
  • 42 - Scientific, technological and industrial services, research and design

Goods & Services

Administration of a program enabling participants to access pre-release, exclusive, and experimental features, software, and games via a mobile application, global computer networks, wireless networks, and electronic communications networks.

Downloadable mobile application software for users to view and access ad-free content; Software for modifying the appearance and enabling transmission of photographs and videos; software for use in taking and editing photographs and recording and editing videos; software to enable the transmission of photographs and videos to mobile telephones; software for the collection, editing, organizing, modifying, transmission, storage and sharing of data and information; computer software for use as an application programming interface (api); software to enable uploading, downloading, accessing, storing, posting, displaying, tagging, distributing, streaming, linking, sharing, transmitting or otherwise providing electronic media, photographic and video content, digital data or information via computer and communication networks; software for streaming audio-visual media content via a global computer network and to mobile and digital electronic devices; computer software which allows users to build and access social network information including address book, friend lists, profiles, preferences and personal data; software for managing contact information in mobile device address books; electronic database in the field of entertainment recorded on computer media; downloadable software for sending digital photos, videos, images, audio-visual content and text to others via a global computer network; downloadable computer software for use in mobile devices, namely, augmented reality software for integrating electronic data with real world environments for the purpose of viewing, capturing, recording and editing augmented images and augmented videos; downloadable computer software application which allows users to create avatars, graphic icons, symbols, graphical depictions of people, places and things, fanciful designs, comics and phrases that can be posted, shared and transmitted via multi-media messaging (mms), text messaging (sms), the internet, and other communication networks; downloadable software for the purpose of analyzing the interactions and engagement between users for the purpose of ranking relationships; downloadable software for the purpose of tracking, accessing and sharing the location of users; downloadable software for recording, tracking, accessing and sharing of past and real-time location data for the purpose of sharing a user's location with others; downloadable software granting users early or exclusive access to new and experimental features relating to all of the foregoing.

Providing temporary use of non-downloadable software allowing users to view and access ad-free content; Providing temporary use of non-downloadable software for modifying the appearance and enabling transmission of photographs and videos; providing temporary use of non-downloadable software for use in taking and editing photographs and recording and editing videos; providing temporary use of non-downloadable software for the collection, recommendation, editing, organizing, modifying, transmission, uploading, display storage and sharing of data, information, photographs, games, music, videos, audio-visual material and user generate content; providing temporary use of non-downloadable software to enable uploading, downloading, accessing, storing, posting, displaying, tagging, distributing, streaming, linking, sharing, transmitting or otherwise providing electronic media, photographic and video content, digital data or information via computer and communication networks; providing temporary use of non-downloadable software for streaming audio-visual media content via a global computer network and to mobile and digital electronic devices; providing temporary use of non-downloadable computer software which allows users to build and access social network information including address book, friend lists, profiles, preferences and personal data; providing temporary use of non-downloadable software for managing contact information in mobile device address books; providing temporary use of non-downloadable software for sending digital photos, videos, images, audio-visual content and text to others via a global computer network; providing temporary use of non-downloadable computer software for use in mobile devices, namely, augmented reality software for integrating electronic data with real world environments for the purpose of viewing, capturing, recording and editing augmented images and augmented videos; providing temporary use of non-downloadable computer software application which allows users to create avatars, graphic icons, symbols, graphical depictions of people, places and things, fanciful designs, comics and phrases that can be posted, shared and transmitted via multi-media messaging (mms), text messaging (sms), the internet, and other communication networks; providing temporary use of non-downloadable software for the purpose of analyzing the interactions and engagement between users for the purpose of ranking relationships; providing temporary use of non-downloadable software for the purpose of tracking, accessing and sharing the location of users; providing temporary use of non-downloadable software for recording, tracking, accessing and sharing of past and real-time location data for the purpose of sharing a user's location with others; hosting of digital content on the internet; providing information from searchable indexes and databases of information, including text, electronic documents, databases, graphics, photographic images and audio visual information, by means of computer and communication networks; computer services, namely, creating virtual communities for registered users to participate in discussions and engage in social, business and community networking; application service provider (asp) featuring software to enable or facilitate the uploading, downloading, streaming, posting, displaying, linking, sharing or otherwise providing electronic media or information over communication networks; providing temporary use of non-downloadable software granting users early or exclusive access to new and experimental features relating to all of the foregoing

90.

SNAP OS

      
Serial Number 99059873
Status Pending
Filing Date 2025-02-27
Owner Snap Inc.
NICE Classes
  • 09 - Scientific and electric apparatus and instruments
  • 42 - Scientific, technological and industrial services, research and design

Goods & Services

Peripherals; augmented reality glasses; augmented reality headsets; computer hardware, peripherals and software for remotely accessing, capturing, transmitting and displaying pictures, video, audio and data; software for setting up, configuring, and controlling wearable computer hardware and peripherals; software for setting up, configuring, and controlling wearable computer hardware and peripheral devices in the field of augmented reality; downloadable computer operating software for augmented reality; downloadable mobile operating system software; downloadable computer operating system software; downloadable computer operating system for operating augmented reality devices

Providing temporary use of online non-downloadable middleware for providing an interface between augmented reality devices and operating systems; providing temporary use of online non-downloadable software for providing an interface between augmented reality devices and operating systems; providing temporary use of online non-downloadable software for providing an interface between computer peripheral devices and operating systems

91.

TIME SYNCHRONIZATION FOR SHARED EXTENDED REALITY EXPERIENCES

      
Application Number 18481804
Status Pending
Filing Date 2023-10-05
First Publication Date 2025-02-27
Owner Snap Inc. (USA)
Inventor
  • Ajanohoun, Jordy Innocentius
  • Diem, Markus
  • Evangelidis, Georgios
  • Penney, Matthew

Abstract

A first extended reality (XR) device and a second XR device are colocated in an environment. The first XR device captures sensory data of a wearer of the second XR device. The sensory data is used to determine a time offset between a first clock of the first XR device and a second clock of the second XR device. The first clock and the second clock are synchronized based on the time offset and a shared coordinate system is established. The shared coordinate system enables alignment of virtual content that is simultaneously presented by the first XR device and the second XR device based on the synchronization of the first clock and the second clock.

IPC Classes

  • G06F 3/14 - Digital output to display device
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer

92.

AR DATA SIMULATION WITH GAITPRINT IMITATION

      
Application Number 18943047
Status Pending
Filing Date 2024-11-11
First Publication Date 2025-02-27
Owner Snap Inc. (USA)
Inventor Zhou, Kai

Abstract

A method for transferring a gait pattern of a first user to a second user to simulate augmented reality content in a virtual simulation environment is described. In one aspect, the method includes identifying a gait pattern of a first user operating a first visual tracking system in a first physical environment, identifying a trajectory from a second visual tracking system operated by a second user in a second physical environment, the trajectory based on poses of the second visual tracking system over time, modifying the trajectory from the second visual tracking system based on the gait pattern of the first user, applying the modified trajectory in a plurality of virtual environments, and generating simulated ground truth data based on the modified trajectory in the plurality of virtual environments.
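
One simple way to picture the trajectory-modification step is to superimpose the first user's gait oscillation (amplitude and step frequency) on the second user's pose trajectory; the vertical head-bob model below is an illustrative simplification, not the filing's actual method:

```python
import math

def apply_gait(trajectory, step_amplitude, step_frequency_hz, dt):
    """Superimpose a gait oscillation onto a pose trajectory.
    `trajectory` is a list of (x, y, z) positions sampled every `dt`
    seconds; the gait is modeled as a sinusoidal vertical bob."""
    modified = []
    for i, (x, y, z) in enumerate(trajectory):
        bob = step_amplitude * math.sin(2 * math.pi * step_frequency_hz * i * dt)
        modified.append((x, y + bob, z))  # head-bob on the vertical axis
    return modified
```

The modified trajectory can then be replayed through multiple virtual environments to generate simulated ground-truth data, as the abstract describes.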

IPC Classes

  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
  • G06T 11/60 - Editing figures and text; combining figures or text
  • G06V 20/20 - Scenes; scene-specific elements in augmented reality scenes
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition

93.

DYNAMIC OVER-RENDERING IN LATE-WARPING

      
Application Number 18943110
Status Pending
Filing Date 2024-11-11
First Publication Date 2025-02-27
Owner Snap Inc. (USA)
Inventor
  • Jung, Bernhard
  • Lee Kim-Koon, Edward

Abstract

A method for adjusting an over-rendered area of a display in an AR device is described. The method includes identifying an angular velocity of a display device, a most recent pose of the display device, previous warp poses, and previous over-rendered areas, and adjusting a size of a dynamic over-rendered area based on a combination of the angular velocity, the most recent pose, the previous warp poses, and the previous over-rendered areas.
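
A rough sketch of the described adjustment, growing the over-rendered margin with head angular velocity and smoothing against recent sizes, might look like this; the gain constant, smoothing weights, and function name are assumptions for illustration:

```python
def over_render_size(base_px, angular_velocity, history, k=40.0):
    """Margin (in pixels) for the dynamic over-rendered area: faster head
    rotation needs a larger margin for late-warping, smoothed against the
    previous over-rendered areas kept in `history`."""
    target = base_px + k * abs(angular_velocity)  # faster motion -> larger margin
    recent = history[-3:] if history else [base_px]
    smoothed = 0.5 * target + 0.5 * (sum(recent) / len(recent))
    history.append(smoothed)
    return smoothed
```

Keeping the margin small when the head is still saves GPU work; growing it under fast rotation keeps the late-warped frame from revealing unrendered edges.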

IPC Classes

  • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

94.

REDUCING BOOT TIME AND POWER CONSUMPTION IN DISPLAYING DATA CONTENT

      
Application Number 18943739
Status Pending
Filing Date 2024-11-11
First Publication Date 2025-02-27
Owner Snap Inc. (USA)
Inventor
  • Bamberger, Alex
  • Brook, Peter
  • Dahlquist, Nicolas
  • Hanover, Matthew
  • Patton, Russell Douglas
  • Rodriguez II, Jonathan M.

Abstract

One aspect disclosed is a method including determining a location from a positioning system receiver, determining, using a hardware processor and the location, that the location is approaching a path of direction of visual direction information, displaying the visual direction information on a display of a wearable device in response to the determining, determining, using the positioning system receiver, whether the turn of the visual direction information has been made, determining, by the hardware processor, a first period of time for display of the content data based on whether the turn of the visual direction information has been made, powering on the display and displaying, using the display, content data for the first period of time, turning off the display and the hardware processor following display of the content data.

IPC Classes

  • G02B 27/01 - Head-up displays
  • G02C 11/00 - Non-optical adjuncts; attachment thereof
  • G06F 1/16 - Constructional details or arrangements
  • G06F 1/32 - Means for saving power
  • G06F 1/3203 - Power management, i.e. event-based initiation of a power-saving mode
  • G06F 1/3206 - Monitoring of events, devices or parameters that trigger a change in power modality
  • G06F 1/3287 - Power saving characterised by the action undertaken by switching off individual functional units in the computer system
  • G06F 1/3293 - Power saving characterised by the action undertaken by switching to a less power-consuming processor, e.g. sub-CPU
  • H04N 5/76 - Television signal recording
  • H04N 23/60 - Control of cameras or camera modules
  • H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders
  • H04N 23/65 - Control of camera operation in relation to power supply
  • H04N 23/66 - Remote control of cameras or camera parts, e.g. by remote control devices
  • H04N 23/661 - Transmitting camera control signals through networks, e.g. control via the Internet

95.

GESTURE-BASED KEYBOARD TEXT ENTRY

      
Application Number 18944816
Status Pending
Filing Date 2024-11-12
First Publication Date 2025-02-27
Owner Snap Inc. (USA)
Inventor
  • Moll, Sharon
  • Zhang, Dawei

Abstract

A gesture-based text entry user interface for an Augmented Reality (AR) system is provided. The AR system detects a start text entry gesture made by a user of the AR system, generates a virtual keyboard user interface including a virtual keyboard having a plurality of virtual keys, and provides to the user the virtual keyboard user interface. The AR system detects a hold of an enter text gesture made by the user. While the user holds the enter text gesture, the AR system collects continuous motion gesture data of a continuous motion as the user makes the continuous motion through the virtual keys of the virtual keyboard. The AR system detects a release of the enter text gesture by the user and generates entered text data based on the continuous motion gesture data.
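
The decode step, turning the continuous motion gesture data into text, can be illustrated with a toy decoder that maps each sampled point to its nearest virtual key and collapses consecutive repeats; the key layout and function names are invented for the sketch and real swipe decoders use language models rather than raw key collapse:

```python
KEY_CENTERS = {  # illustrative 3-key layout: key -> (x, y) center
    "a": (0.0, 0.0), "b": (1.0, 0.0), "c": (2.0, 0.0),
}

def nearest_key(point):
    px, py = point
    return min(KEY_CENTERS,
               key=lambda k: (KEY_CENTERS[k][0] - px) ** 2
                             + (KEY_CENTERS[k][1] - py) ** 2)

def decode_gesture(path):
    """Decode the sampled points of a held enter-text gesture into text:
    nearest virtual key per point, consecutive duplicates collapsed."""
    if not path:
        return ""
    keys = [nearest_key(p) for p in path]
    text = [keys[0]]
    for k in keys[1:]:
        if k != text[-1]:
            text.append(k)
    return "".join(text)
```

The AR system collects the path while the gesture is held and decodes it on release, matching the hold/release flow in the abstract.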

IPC Classes

  • G06F 3/04886 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06F 3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means

96.

CUSTOMIZABLE AVATAR GENERATION SYSTEM

      
Application Number 18946045
Status Pending
Filing Date 2024-11-13
First Publication Date 2025-02-27
Owner Snap Inc. (USA)
Inventor
  • Kozakov, Michael
  • Seegobin, Avie
  • Cabuena, Mark Anthony

Abstract

Systems, methods, and computer readable media for a customizable avatar generation system, where the methods include accessing text data, processing, using at least one processor, the text data to determine first characteristics of the text data, selecting a personalized avatar of a plurality of personalized avatars for the text data based on matching the first characteristics with second characteristics of the plurality of personalized avatars, generating a customized avatar based on the text data and the selected personalized avatar, and causing the customized avatar to be displayed on a display of a computing device.
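
The matching step, selecting the personalized avatar whose characteristics best match those extracted from the text, can be sketched as a set-overlap score; the trait representation and function name are assumptions, and the filing's IPC classes suggest learned models rather than this toy metric:

```python
def select_avatar(text_traits, avatars):
    """Pick the avatar whose trait set best overlaps the traits extracted
    from the text. `text_traits` is a set of characteristics; `avatars`
    maps avatar id -> set of characteristics."""
    return max(avatars, key=lambda a: len(text_traits & avatars[a]))
```

The selected avatar is then customized with the text itself before display.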

IPC Classes

  • G06T 11/60 - Editing figures and text; combining figures or text
  • G06F 40/109 - Font handlingTemporal or kinetic typography
  • G06F 40/20 - Natural language analysis
  • G06N 3/08 - Learning methods
  • G06N 7/01 - Probabilistic graphical models, e.g. probabilistic networks
  • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting

97.

AUGMENTED REALITY UNBOXING EXPERIENCE

      
Application Number 18946303
Status Pending
Filing Date 2024-11-13
First Publication Date 2025-02-27
Owner Snap Inc. (USA)
Inventor
  • Dudovitch, Gal
  • Engle, Stephanie
  • Heikkinen, Christie Marie
  • Mishin Shuvi, Ma'Ayan

Abstract

Methods and systems are disclosed for performing operations for providing an augmented reality unboxing experience. The operations include retrieving an augmented reality element comprising a virtual box that is in a closed state. The operations include obtaining triggers associated with the virtual box, the triggers configured to change the virtual box from the closed state to an open state. The operations include displaying the virtual box. The operations include receiving input associated with the virtual box. The operations include determining that the received input corresponds to the one or more triggers associated with the virtual box. The operations include modifying the virtual box from being displayed in the closed state to being displayed in the open state.
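
The closed-to-open transition described here is essentially a small state machine keyed on the configured triggers; the class and trigger names below are illustrative:

```python
class VirtualBox:
    """Augmented reality virtual box that changes from a closed state to
    an open state only when a configured trigger input is received."""

    def __init__(self, triggers):
        self.triggers = set(triggers)
        self.state = "closed"

    def handle_input(self, user_input):
        # Only an input matching one of the triggers opens the box.
        if self.state == "closed" and user_input in self.triggers:
            self.state = "open"
        return self.state
```

Non-trigger inputs leave the box closed, matching the abstract's requirement that the input correspond to the configured triggers.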

IPC Classes

  • H04N 1/00 - Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; details thereof
  • G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
  • G06F 16/16 - File or folder operations, e.g. details of user interfaces specifically adapted to file systems
  • G06T 19/00 - Manipulating 3D models or images for computer graphics

98.

DIRECT SCALE LEVEL SELECTION FOR MULTILEVEL FEATURE TRACKING UNDER MOTION BLUR

      
Application Number 18947329
Status Pending
Filing Date 2024-11-14
First Publication Date 2025-02-27
Owner Snap Inc. (USA)
Inventor
  • Kalkgruber, Matthias
  • Wolf, Daniel

Abstract

A method for mitigating motion blur in a visual-inertial tracking system is described. In one aspect, the method includes accessing a first image generated by an optical sensor of the visual tracking system, accessing a second image generated by the optical sensor of the visual tracking system, the second image following the first image, determining a first motion blur level of the first image, determining a second motion blur level of the second image, identifying a scale change between the first image and the second image, determining a first optimal scale level for the first image based on the first motion blur level and the scale change, and determining a second optimal scale level for the second image based on the second motion blur level and the scale change.
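
The "direct scale level selection" in the title can be pictured as choosing an image-pyramid level straight from the estimated blur extent, instead of searching all levels; the formula and clamp below are an illustrative simplification, not the patented method:

```python
import math

def scale_level(blur_px, scale_change=1.0, max_level=3):
    """Pick a pyramid level directly from the estimated motion-blur extent
    (in pixels), adjusted by the inter-frame scale change. Each level
    halves resolution, so blur shrinks by half per level."""
    effective = max(blur_px * scale_change, 1.0)
    level = int(round(math.log2(effective)))
    return min(max(level, 0), max_level)  # clamp to valid pyramid levels
```

Tracking at the level where blur collapses to roughly a pixel keeps feature matching stable under fast camera motion.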

IPC Classes

  • H04N 23/68 - Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
  • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
  • G06T 5/70 - Denoising; smoothing
  • G06T 7/20 - Analysis of motion
  • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersectionsConnectivity analysis, e.g. of connected components

99.

AUDIO RESPONSE MESSAGES

      
Application Number 18947355
Status Pending
Filing Date 2024-11-14
First Publication Date 2025-02-27
Owner Snap Inc. (USA)
Inventor
  • Gorumkonda, Gurunandan Krishnan
  • Nayar, Shree K.

Abstract

An audio response system can generate multimodal messages that can be dynamically updated on viewer's client device based on a type of audio response detected. The audio responses can include keywords or continuum-based signal (e.g., levels of wind noise). A machine learning scheme can be trained to output classification data from the audio response data for content selection and dynamic display updates.

IPC Classes

  • G10L 25/84 - Detection of presence or absence of voice signals for discriminating voice from noise
  • G06F 3/16 - Sound input; sound output
  • G06N 3/08 - Learning methods
  • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog

100.

DISPLAY FOR AUGMENTED REALITY OR VIRTUAL REALITY

      
Application Number 18947648
Status Pending
Filing Date 2024-11-14
First Publication Date 2025-02-27
Owner Snap Inc. (USA)
Inventor Valera, Mohmed Salim

Abstract

An AR or VR display device. First and third input gratings receive light of a first color from first and second projectors, respectively, coupling the light into a first waveguide. Second and fourth input gratings receive light of a second color from the first and second projectors, respectively, coupling the light into a second waveguide. An output diffractive optical element couples light out of the waveguides towards a viewing position. The first and second projectors provide light to the input diffractive optical elements in directions that are at a first and second angle, respectively, to a waveguide normal vector. The output diffractive optical element couples light out of the waveguides in a first range of angles for light from the first projector and in a second range of angles for light from the second projector, the first range of angles and the second range of angles differing but partially overlapping.

IPC Classes

  • G02B 27/01 - Head-up displays
  • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
  • G02B 27/10 - Beam splitting or combining systems