A localization approach uses a 3D map of an area made up of 3D points. A mobile device downloads the 3D map and localizes itself against it by comparing images captured by a camera on the mobile device to the 3D map. On-device localization obviates the need to send keyframes to the server, and greater localization accuracy may be achieved because a larger number of images (e.g., all of the images captured by the device's camera) may be compared to the map. Tracking may also be performed on-device by comparing additional images captured by the camera to the 3D map in view of sensor data (e.g., inertial data).
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
A63F 13/211 - Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/216 - Input arrangements for video game devices characterised by their sensors, purposes or types using geographical information, e.g. location of the game device or player using GPS
2.
ON-DEVICE LOCALIZATION AND TRACKING WITHOUT KEYFRAMES
A localization approach uses a 3D map of an area made up of 3D points. A mobile device downloads the 3D map and localizes itself against it by comparing images captured by a camera on the mobile device to the 3D map. On-device localization obviates the need to send keyframes to the server, and greater localization accuracy may be achieved because a larger number of images (e.g., all of the images captured by the device's camera) may be compared to the map. Tracking may also be performed on-device by comparing additional images captured by the camera to the 3D map in view of sensor data (e.g., inertial data).
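As an illustrative sketch of the image-to-map comparison, the snippet below scores a candidate device pose by counting how many 3D map points reproject near detected 2D keypoints. The toy translation-only pinhole camera and all names (`project`, `localization_score`, `tol`) are assumptions made for illustration, not details of the approach above:

```python
import math

def project(point3d, pose):
    """Project a 3D map point into the image for a camera translated to
    `pose` (toy model: translation only, focal length 1)."""
    x, y, z = (point3d[i] - pose[i] for i in range(3))
    if z <= 0:
        return None  # point is behind the camera
    return (x / z, y / z)

def localization_score(map_points, keypoints, pose, tol=0.05):
    """Score a candidate pose by counting map points whose projections
    land within `tol` of the matched 2D keypoint (inliers)."""
    inliers = 0
    for p3d, kp in zip(map_points, keypoints):
        proj = project(p3d, pose)
        if proj is not None and math.dist(proj, kp) < tol:
            inliers += 1
    return inliers
```

A localizer would evaluate many candidate poses (e.g., hypotheses from a RANSAC loop) and keep the highest-scoring one.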
The present disclosure describes an online system that applies template-based facial augmentations to images as part of an augmented reality experience. A facial augmentation template is a template for an augmented reality (AR) modification to apply to a user's face. These templates include a structure for how the augmentation should be applied to the face and display parameters that a user can set for how the templates should be rendered (e.g., color, specularity, or opacity). The online system identifies feature points on a 3D model of the user's face that correspond to features on the user's actual face, and localizes the template relative to the 3D model by mapping anchor points of the template to corresponding feature points on the 3D model. The online system renders the facial augmentation template at the localization to generate an augmented image of the user's face.
An online computing system generates and uses usability heatmaps to ensure that AR content is presented in suitable locations within geographic areas. To generate a usability heatmap, the online system accesses geographic data describing a geographic area for which the online system provides AR content services and identifies a set of portions of the geographic area. The online system identifies subsets of the geographic data for each of the portions of the geographic area and computes a usability score for each of the portions. The online system generates a usability heatmap based on the computed usability scores for the portions of the geographic area and stores the heatmap in a heatmap database that stores usability heatmaps for different geographic areas. The online system may use the heatmaps in the database to generate AR content when requested by client devices.
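A minimal sketch of the heatmap-building step, assuming the usability score of a grid cell is simply the count of geographic data points that fall in it (the actual scoring is unspecified above; the function name and grid partition are invented for illustration):

```python
def usability_heatmap(area_data, cell_size):
    """Partition 2D data points into grid cells and score each cell by
    how many points fall in it (a stand-in usability score)."""
    heatmap = {}
    for x, y in area_data:
        cell = (int(x // cell_size), int(y // cell_size))
        heatmap[cell] = heatmap.get(cell, 0) + 1
    return heatmap
```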
An online system uses a pose prior model and a pose objective function to estimate the pose of a client device. A pose prior model is a model for prior information known about client devices and their poses without reference to a particular client device and its pose data. The online system receives pose data from a client device and computes an estimated pose for the client device based on the received pose data, the pose prior model, and a generated initial candidate pose for the client device. The online system uses these as inputs to a pose objective function and optimizes the pose objective function to estimate a pose for the client device. The online system transmits this estimated pose to the client device, and may use the estimated pose as the pose for the client device for the purposes of delivering content to the user.
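The objective-function idea can be illustrated with a toy quadratic objective combining a data term (fit to the received pose data) and a prior term, minimized by coordinate descent from an initial candidate pose. Poses are simplified to (x, y, heading), and every name and weight here is an assumption for illustration:

```python
def pose_objective(pose, measured_pose, prior_mean, prior_weight=0.1):
    """Objective combining a data term (fit to the device's reported
    pose) and a prior term (fit to prior knowledge about poses)."""
    data = sum((p - m) ** 2 for p, m in zip(pose, measured_pose))
    prior = sum((p - m) ** 2 for p, m in zip(pose, prior_mean))
    return data + prior_weight * prior

def estimate_pose(measured_pose, prior_mean, steps=200, lr=0.1):
    """Gradient-free coordinate descent starting from an initial
    candidate pose (here, the measured pose itself)."""
    pose = list(measured_pose)
    for _ in range(steps):
        for i in range(len(pose)):
            for delta in (-lr, lr):
                trial = list(pose)
                trial[i] += delta
                if pose_objective(trial, measured_pose, prior_mean) < \
                        pose_objective(pose, measured_pose, prior_mean):
                    pose = trial
    return pose
```

With `prior_weight=0.1`, the estimate is pulled slightly away from the measured pose toward the prior mean, which is the qualitative behavior the abstract describes.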
A63F 13/428 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
A63F 13/211 - Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
A63F 13/216 - Input arrangements for video game devices characterised by their sensors, purposes or types using geographical information, e.g. location of the game device or player using GPS
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
A set of code syntax is parsed to identify relevant resources rather than being executed or compiled. The syntax serves as a domain-specific language in that it can be used only to obtain resources that have been validated and included in the syntax. Developers write code to implement their components in a browser-based development environment. The development environment provides a library of user interface components that may be used and manipulated by developers. The studio parses the developer's code and syntax statements to infer the set of user interface components used, which allows other developers to configure these components using the development environment's user interface.
A development system provides an editor and a simulator between which changes are bi-directionally synchronized. As the simulator runs the application logic, objects may be spawned and destroyed. Object properties (e.g., colors, positions, behaviors, etc.) can also change as time passes. The developer can see these changes in the editor view in real time. The developer can make changes in the editor view that persist and are synchronized in real time to the simulator.
A system may provision an application-specific cloud account for a backend component of a software application. The system may receive a request from a client system executing a client side component of the software application, the request including authorization credentials and a version identifier for the backend component. The system may validate the authorization credentials to determine the client system is authorized to access the requested virtual reality content. The system may identify a backend version of the backend component associated with the version identifier in the request. The system may load, into the cloud account, the backend component corresponding to the identified backend version. The system may execute the loaded backend component to process the request and generate the requested virtual reality content. The system may return the requested virtual reality content to the client system.
A machine learning model classifies points of interest in a parallel reality game hosted by a server. The server generates training data sets that include verified properties for points of interest. The machine learning model may predict unverified properties for points of interest. Players in the parallel reality game may input properties for the points of interest. The machine learning model uses the properties received from players as inputs to verify unverified properties or to generate new properties for the points of interest. The server may classify the points of interest as suitable for particular activities, and the server may use the classifications for future activities within the parallel reality game.
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
A63F 13/216 - Input arrangements for video game devices characterised by their sensors, purposes or types using geographical information, e.g. location of the game device or player using GPS
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
10.
MONOCULAR DEPTH ESTIMATION WITH GEOMETRY-INFORMED DEPTH HINT
A depth estimation model leverages a geometry-rendered depth map from a low-cost geometry model to provide depth hints. The model is trained and configured to input a time series of frames including a target frame. The time series of frames is captured as monocular video data by a camera assembly. Applying the model includes: applying a feature encoder to extract visual features forming a feature map for each frame, matching features across the feature maps to form a cost volume, obtaining a geometry-rendered depth map from the low-cost geometry model of the scene based on a pose of the target frame, modifying the cost volume based on the geometry-rendered depth map, and applying a depth decoder to the modified cost volume to generate the depth map for the target frame. A client device implementing the model may generate virtual content using the depth map to display the target frame of the scene augmented with the virtual content.
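The "modify the cost volume" step could look like the following sketch, where the matching cost of the depth bin nearest the geometry-rendered hint is discounted so the decoder favours geometry-consistent depths. Per-pixel lists stand in for tensors, and `boost` and all names are illustrative assumptions:

```python
def apply_depth_hint(cost_volume, depth_hint, depth_bins, boost=0.5):
    """Lower the matching cost of the depth bin closest to the hinted
    depth for each pixel.
    cost_volume: list of per-pixel cost lists, one cost per depth bin.
    depth_hint: per-pixel depths rendered from the geometry model."""
    out = []
    for costs, hint in zip(cost_volume, depth_hint):
        nearest = min(range(len(depth_bins)),
                      key=lambda i: abs(depth_bins[i] - hint))
        out.append([c - boost if i == nearest else c
                    for i, c in enumerate(costs)])
    return out
```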
The present disclosure describes a method for estimating a pose of a client device using a magnetic field vector map. The method includes receiving a plurality of magnetic field measurements from a plurality of client devices, each magnetic field measurement describing a magnetic field vector at a geographic location. The method further includes grouping the magnetic field measurements into one or more region groups, aggregating the magnetic field measurements in each region group to generate a probability distribution of magnetic field vectors associated with the geographic region, determining a magnetic field vector within each geographic region, and generating a magnetic field vector map. Based on the magnetic field vector map, the method may include estimating a pose of a client device based on a location of the client device and a magnetic field vector received from the client device.
A client device estimates an error in a heading component of a magnetometer measurement based on an error in a vertical component of the magnetometer measurement. The client device receives a three-dimensional measurement from a magnetometer including a heading and a vertical component. The client device identifies an expected vertical component based on a position of the client device and a magnetic map. The client device generates an error estimate based on the vertical component of the three-dimensional measurement and the expected vertical component. The client device compares the error to an error threshold and, in response to the error estimate being less than an error threshold, determines a pose of the client device based on the heading. The client device displays augmented reality content to the user using the pose.
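The error-gating logic reduces to a few lines. This hedged sketch assumes degrees for the heading, microtesla for the vertical component, and an arbitrary default threshold; none of these specifics come from the abstract above:

```python
def heading_if_reliable(measurement, expected_vertical, error_threshold=5.0):
    """Gate the magnetometer heading on the vertical-component error:
    if the measured vertical field deviates too far from the magnetic
    map's expected value, reject the heading.
    measurement: dict with 'heading' (deg) and 'vertical' (uT)."""
    error = abs(measurement["vertical"] - expected_vertical)
    if error < error_threshold:
        return measurement["heading"]
    return None  # heading too unreliable to use for the pose
```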
A method for determining a metric relative pose between a target image and a reference image is disclosed. The method includes receiving the target image depicting a scene captured by a camera assembly of a client device. The method includes applying a machine-learning model to the target image to determine a metric relative pose between the target image and a reference image, wherein the metric relative pose represents a transformation from a pose of the reference image to a pose of the target image that is scaled to physical dimensions of the scene. The machine-learning model may include a keypoint network for determining a keypoint distribution including the spatial coordinates of keypoints extracted from each image. The machine-learning model may establish correspondences between the keypoint distribution of the target image and the keypoint distribution of the reference image. Based on the identified correspondences, the machine-learning model may regress the metric relative pose. With the metric relative pose, augmented reality content may be generated and displayed informed by the physical dimensions of the scene described by the metric relative pose.
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
A message router partially decodes messages to determine how to route the messages. The message router receives a message and identifies a field of the message as a candidate field for including an envelope identifier that indicates an envelope type of the message. The envelope type of the message indicates where information, such as where to route the message, is stored within the message. The message router attempts to decode the candidate field to determine whether the candidate field includes the envelope identifier, and responsive to the candidate field including the envelope identifier, the message router determines the envelope type of the message. The message router routes the message according to the envelope type.
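A sketch of envelope-based routing, with a dict standing in for a partially decoded message (real messages would be binary-encoded; the field name "envelope" and the "default" fallback route are invented for illustration):

```python
def route_message(message, routes, candidate_field="envelope"):
    """Decode only the candidate field to find the envelope identifier,
    then route without decoding the full message body."""
    envelope_id = message.get(candidate_field)  # partial decode
    if envelope_id is None or envelope_id not in routes:
        return "default"  # no recognizable envelope type
    return routes[envelope_id]
```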
An image of a hand is captured by a camera. The captured image is rectified by establishing a canonical camera space and mapping predictions back to an original camera space. A set of 2D keypoints, a set of root-relative vertices, and a set of weights are predicted based on the rectified image. Using the set of root-relative vertices, a set of 3D keypoints that correspond to the set of 2D keypoints are obtained. A global camera space hand mesh prediction is generated in 3D space based on the set of 2D keypoints, the set of weights, and the set of 3D keypoints. A virtual element is output in a virtual space based on the generated global camera space hand mesh prediction.
Depth maps are generated based on a sequence of posed images captured by a camera, the depth maps are fused into a truncated signed distance function (TSDF), and an initial estimate of 3-dimensional (3D) scene geometry is generated by extracting a 3D mesh via the TSDF. 3D embeddings are estimated for each vertex in the 3D mesh by mapping each vertex to a multi-view consistent plane embedding space such that vertices on a same plane map to nearly a same place in the embedding space. The vertices are clustered into 3D plane instances based on respective 3D embeddings and geometry information defined by the 3D mesh to create a planar representation of the scene. A location of a virtual element in a virtual world of an augmented reality game is determined based on the planar representation.
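The clustering of vertices into plane instances can be sketched with a greedy one-dimensional version (real plane embeddings are multi-dimensional and the clustering also uses mesh geometry; the tolerance and names here are invented assumptions):

```python
def cluster_plane_instances(embeddings, tol=0.1):
    """Greedily assign each vertex embedding to the first cluster
    centre within `tol`, creating a new plane instance otherwise."""
    centers, labels = [], []
    for e in embeddings:
        for idx, c in enumerate(centers):
            if abs(e - c) <= tol:
                labels.append(idx)
                break
        else:
            centers.append(e)
            labels.append(len(centers) - 1)
    return labels
```

Vertices whose embeddings land "nearly in the same place" thus receive the same plane-instance label, matching the property described above.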
G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
A user selects an object on the display of a mobile device and a corresponding coordinate in a 3D map including the object is selected. A bounding volume and center point of the object in 3D space are estimated based on segmenting the image to identify the object and the depth of pixels in the segmented region depicting the object. The user follows an approximately circular path around the object, pointing a camera at the object, and receives audio and/or haptic feedback regarding orientation of the camera, distance from the object, or speed of progression along the path. As the user moves the camera, a 3D scan of the object is generated. The center point of the object may be recalculated as the user progresses along the path.
This disclosure pertains to a scene-agnostic, map-relative pose regression method. The pose regressor is conditioned on a scene-specific map representation such that its pose predictions are relative to the scene map. This allows training of the pose regressor across multiple scenes to learn the generic relation between a scene-specific map representation and the camera pose. The map-relative pose regressor can then be applied to new map representations.
G06T 7/70 - Determining position or orientation of objects or cameras
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/655 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
The present disclosure describes a method for calibrating a magnetic sensor of a client device. The method may include receiving a set of magnetic field measurements, each of which includes a device location, an orientation of the client device, and an observed magnetic field vector measured by the magnetic sensor. The method may include computing a device correction vector for the client device based on the set of magnetic field measurements. For each magnetic field measurement, the method includes determining a world magnetic field vector at the device location of the magnetic field measurement, computing an expected measured magnetic field vector at the device location, accessing an estimated device correction vector for the client device, computing an expected adjusted vector for the client device, comparing the observed magnetic field vector associated with the magnetic field measurement and the expected adjusted vector, and computing the device correction vector based on the comparison.
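A heavily simplified sketch of the correction-vector computation: assuming all measurements share one device orientation (so no rotation into the device frame is needed, a simplification of the method above), the correction is the mean difference between the expected world field and the observed field:

```python
def device_correction_vector(measurements, world_field):
    """Estimate a hard-iron-style correction vector as the mean of
    (expected world field - observed field) over all measurements.
    measurements: list of observed (x, y, z) field vectors."""
    n = len(measurements)
    correction = [0.0, 0.0, 0.0]
    for observed in measurements:
        for i in range(3):
            correction[i] += (world_field[i] - observed[i]) / n
    return correction
```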
A relocalizer model for an environment is trained using an iterative process. To initialize the relocalizer model, an initial image is registered with its camera pose established as the reference. In each subsequent iteration of training, the relocalizer model is applied to additional images to predict pose estimates for the images. The images and their pose estimates are then leveraged in retraining of the relocalizer model. In general, the training of the relocalizer model entails extracting scene coordinates for pixels of a training image. The scene coordinates are then projected into a projection based on the pose estimate of the training image. A loss is calculated between the projection and the training image. And parameters of the relocalizer model are adjusted to minimize the loss. The iterative training may continue until an end condition is met. The trained relocalizer model is configured to input an image of the environment and to output the camera pose for the image.
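The loss described above — project the predicted scene coordinates with the pose estimate and compare against the training image's pixels — can be sketched as follows (translation-only pinhole camera with focal length 1, a deliberate simplification; all names are illustrative):

```python
def reprojection_loss(scene_coords, pixels, pose):
    """Squared error between pixel positions and the projection of
    their predicted 3D scene coordinates under the pose estimate."""
    loss = 0.0
    for (x, y, z), (u, v) in zip(scene_coords, pixels):
        cx, cy, cz = x - pose[0], y - pose[1], z - pose[2]
        loss += (cx / cz - u) ** 2 + (cy / cz - v) ** 2
    return loss
```

Training would adjust the relocalizer's parameters (which produce `scene_coords`) to drive this loss toward zero.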
An augmented reality system generates computer-mediated reality on a client device. The client device has sensors including a camera configured to capture image data of an environment and a location sensor to capture location data describing a geolocation of the client device. The client device creates a three-dimensional (3-D) map with the image data and the location data for use in generating virtual objects to augment reality. The client device transmits the created 3-D map to an external server that may utilize the 3-D map to update a world map stored on the external server. The external server sends a local portion of the world map to the client device. The client device determines a distance between the client device and a mapping point to generate a computer-mediated reality image at the mapping point to be displayed on the client device.
The present disclosure describes an online system that applies template-based facial augmentations to images as part of an augmented reality experience. A facial augmentation template is a template for an augmented reality (AR) modification to apply to a user's face. These templates include a structure for how the augmentation should be applied to the face and display parameters that a user can set for how the templates should be rendered (e.g., color, specularity, or opacity). The online system identifies feature points on a 3D model of the user's face that correspond to features on the user's actual face, and localizes the template relative to the 3D model by mapping anchor points of the template to corresponding feature points on the 3D model. The online system renders the facial augmentation template at the localization to generate an augmented image of the user's face.
An AR client device generates and uses a background model to identify portions of images that depict the sky. A background model is a model that represents where the sky is visible for the client device. To identify a sky background portion of an image, a client device can map an image onto the background model and thereby determine which portion of the image represents the sky. The client device can use the identified sky background portion to augment the image to include AR content in the sky. To generate the background model, the client device applies a background detection model to a set of images to generate background probability images. The background probability images are mapped onto a background model using orientation data captured by the client device to update the background model based on the background probability image.
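The model update can be sketched as a running per-direction average of sky probabilities. Pixels and viewing directions are given as plain dicts, the orientation mapping is supplied precomputed, and all names and the 0.5 threshold are illustrative assumptions:

```python
def update_background_model(model, prob_image, pixel_to_direction):
    """Fold one background-probability image into a per-direction sky
    model by accumulating a running mean at each viewing direction."""
    for pixel, prob in prob_image.items():
        direction = pixel_to_direction[pixel]
        count, mean = model.get(direction, (0, 0.0))
        model[direction] = (count + 1, mean + (prob - mean) / (count + 1))
    return model

def is_sky(model, direction, threshold=0.5):
    """Report whether a viewing direction is modeled as sky."""
    count, mean = model.get(direction, (0, 0.0))
    return count > 0 and mean > threshold
```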
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
G06T 19/00 - Manipulating 3D models or images for computer graphics
26.
STRUCTURE LINE GENERATION FOR USER DEVICE POSE PREDICTION
A client device, or an online system, uses structure lines that are generated based on an image to predict a pose of the client device. Structure lines are lines that delineate structures in the physical world depicted in the image. The client device also uses a structure model to predict its pose. A structure model is a model that represents structures in the physical world within an area. The client device predicts its pose based on the structure model and the structure lines by applying an objective function. The client device may then iteratively update the estimated pose and score the updated poses until the client device identifies an estimated pose at which the structure lines sufficiently fit the structure model.
A method, system, and computer-readable storage medium are disclosed for displaying virtual elements (e.g., AR content) in a physical environment by a client device using a virtual camera pose that is different from the pose of the physical camera used to capture images of the physical environment. The client device uses a three-dimensional (3D) map (e.g., a topographical mesh) of the physical environment to determine a pose (a position and orientation) of the camera of the client device. The 3D map can include geometry, colors, textures, or any other suitable information describing the physical environment.
A63F 13/5378 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for displaying an additional top view, e.g. radar screens or maps
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
A method of determining a position for a virtual object is described. A location of a client device is determined and, based on the determined location, a set of map segments is retrieved. A virtual object is determined to be displayed on the client device. Relation vectors between the virtual object and each map segment of the retrieved set of map segments are obtained. Each relation vector is weighted based on object parameters of the virtual object. A position to display the virtual object is determined based on the weighted relation vectors. The virtual object is provided for display on the client device at the determined position.
A63F 13/5378 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for displaying an additional top view, e.g. radar screens or maps
The present disclosure describes approaches for evaluating interest points for use in localization based on the repeatability of detecting the interest point in images capturing a scene that includes it. The repeatability of interest points is determined using a trained repeatability model. The repeatability model is trained by analyzing a time series of images of a scene and determining repeatability functions for each interest point in the scene. The repeatability function is determined by identifying which images in the time series allowed for the detection of the interest point by an interest point detection model.
A client device provides AR content to a user by tracking features of the user's ears. The client device builds a model of the user's head that represents the distances between the user's facial features and features of the user's ears. The client device generates this user head model by identifying facial feature points in an image. The client device also applies an ear feature detection model to the image, which identifies 2D points in the image where ear features are depicted. The client device generates the model based on the 3D facial feature points and the 2D ear feature points. For example, the client device may use a feature relationship model that describes the general relationships between facial features and ear points. Once the client device has generated the user head model, the client device uses the user head model to estimate the position of the ear features in 3D.
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
A client device selects animations for an AR character by prompting a large language model (LLM) to select from a set of possible animations for the AR character. The client device captures an image of its environment using a camera and identifies objects that are depicted in the image. The client device generates a prompt for an LLM that instructs the LLM to select from a set of candidate actions for an AR character to perform based on the identified objects. The LLM returns a response to the client device and the client device extracts a set of selected actions from the LLM's response. The client device identifies a set of animations that correspond to the actions selected by the LLM and renders AR content that depicts the AR character performing those actions. The client device augments the captured image to include the AR content and displays the augmented image.
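Prompt assembly and response parsing might look like the sketch below. The wording of the prompt is an invented example, not the system's actual prompt, and `extract_actions` filters to the candidate set so a hallucinated action cannot trigger an animation that does not exist:

```python
def build_action_prompt(objects, candidate_actions):
    """Assemble an LLM prompt listing the detected objects and the
    candidate actions the AR character may perform."""
    return (
        "You control an AR character. Visible objects: "
        + ", ".join(objects)
        + ". Choose actions from: "
        + ", ".join(candidate_actions)
        + ". Reply with a comma-separated list of chosen actions."
    )

def extract_actions(llm_response, candidate_actions):
    """Keep only actions that are in the candidate set."""
    chosen = [a.strip() for a in llm_response.split(",")]
    return [a for a in chosen if a in candidate_actions]
```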
An augmented reality (“AR”) device applies smooth correction methods to correct the location of the virtual objects presented to a user. The AR device may apply an angular threshold to determine whether a virtual object can be moved from an original location to a target location. An angular threshold is a maximum angle by which a line from the AR device to the virtual object can change within a timestep. Similarly, the AR device may apply a motion threshold, which is a maximum on the distance that a virtual object's location can be corrected based on the motion of the virtual object. Furthermore, the AR device may apply a pixel threshold to the correction of the virtual object's location. A pixel threshold is a maximum on the distance that a pixel projection of the virtual object can change based on the virtual object's change in location.
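The motion threshold described above amounts to clamping the per-timestep correction distance; here is a 2D sketch (the function names and threshold semantics are assumptions for illustration, and the angular and pixel thresholds would be applied analogously):

```python
import math

def smooth_correction(current, target, motion_threshold):
    """Move a virtual object toward its corrected (target) location
    without exceeding the per-timestep motion threshold, spreading a
    large correction over several frames instead of popping."""
    dx, dy = target[0] - current[0], target[1] - current[1]
    dist = math.hypot(dx, dy)
    if dist <= motion_threshold:
        return target
    scale = motion_threshold / dist
    return (current[0] + dx * scale, current[1] + dy * scale)
```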
An online system generates and stores virtual models of physical spaces. These virtual models represent physical objects within the physical spaces through 3D representations of those objects. A content development system can request a virtual model associated with a physical space and modify the virtual model to include virtual objects. The content development system transmits the modified virtual model to be stored by the online system. A client device requests the modified virtual model from the online system when the client device is posed near the physical space. The client device uses the modified virtual model to display a modified video feed that displays the virtual content generated by the content development system as an augmented reality experience.
A system is presented that determines the accuracy of a set of sensor location measurements of a client device and generates a set of sensor location measurements labeled with associated accuracy estimates. The system receives a plurality of sensor location measurements of the client device generated by a location sensor. The system may determine the accuracy of a sensor location measurement by comparing it to a reference location measurement, such as a visual-inertial odometry (VIO) location measurement computed from VIO data. The system computes a first set of location translations for the set of sensor location measurements and a second set of location translations for the set of VIO location measurements. The system may calculate a measurement difference between each corresponding pair of location translations from the first and second sets, identify measurement differences that exceed a threshold, and label the corresponding sensor location measurements as inaccurate.
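A toy version of the translation-comparison step, assuming 2D positions and a simple Euclidean difference test (both simplifications; the actual system works on richer measurements):

```python
import math

def label_measurements(sensor_xy, vio_xy, threshold):
    """Label each sensor step as accurate/inaccurate by comparing per-step
    translations against the corresponding VIO translations."""
    def translations(points):
        # Per-step displacement vectors between consecutive positions.
        return [(b[0] - a[0], b[1] - a[1]) for a, b in zip(points, points[1:])]
    labels = []
    for (sx, sy), (vx, vy) in zip(translations(sensor_xy), translations(vio_xy)):
        diff = math.hypot(sx - vx, sy - vy)
        labels.append("inaccurate" if diff > threshold else "accurate")
    return labels
```

Comparing translations rather than raw positions makes the test insensitive to any constant offset between the sensor frame and the VIO frame, which is presumably why the abstract compares translation pairs.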
G01S 19/00 - Satellite radio beacon positioning systemsDetermining position, velocity or attitude using signals transmitted by such systems
A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
G01C 21/00 - NavigationNavigational instruments not provided for in groups
G01C 21/16 - NavigationNavigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigatedDead reckoning by integrating acceleration or speed, i.e. inertial navigation
G01C 21/18 - Stabilised platforms, e.g. by gyroscope
G01S 19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
G01S 19/40 - Correcting position, velocity or attitude
A63F 13/216 - Input arrangements for video game devices characterised by their sensors, purposes or types using geographical information, e.g. location of the game device or player using GPS
A system is presented that determines the accuracy of a set of sensor location measurements of a client device and generates a set of sensor location measurements labeled with associated accuracy estimates. The system receives a plurality of sensor location measurements of the client device generated by a location sensor. The system may determine the accuracy of a sensor location measurement by comparing it to a reference location measurement, such as a visual-inertial odometry (VIO) location measurement computed from VIO data. The system computes a first set of location translations for the set of sensor location measurements and a second set of location translations for the set of VIO location measurements. The system may calculate a measurement difference between each corresponding pair of location translations from the first and second sets, identify measurement differences that exceed a threshold, and label the corresponding sensor location measurements as inaccurate.
G01S 19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
G01S 19/40 - Correcting position, velocity or attitude
36.
High-Speed Real-Time Scene Reconstruction from Input Image Data
A computer-implemented method is disclosed for generating scene reconstructions from image data. The method includes: receiving image data of a scene captured by a camera; inputting the image data of the scene into a scene reconstruction model; receiving, from the scene reconstruction model, a final spatial model of the scene, wherein the scene reconstruction model generates the final spatial model by: predicting a depth map for each image of the image data, extracting a feature map for each image of the image data, generating a first spatial model based on the predicted depth maps of the images, generating a second spatial model based on the extracted feature maps of the images, and determining the final spatial model by combining the first spatial model and the second spatial model; and providing functionality on a computing device related to the scene and based on the final spatial model.
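The final combination step might resemble a per-voxel blend of the two intermediate models. This sketch assumes each spatial model is a dict mapping voxel coordinates to occupancy values — the representation and the weighted-average fusion rule are guesses for illustration, not the disclosed method:

```python
def fuse_models(depth_model, feature_model, w_depth=0.5):
    """Fuse two voxel grids (dicts: voxel -> occupancy in [0, 1]) by weighted
    average, keeping voxels present in either model."""
    out = {}
    for v in set(depth_model) | set(feature_model):
        d = depth_model.get(v)
        f = feature_model.get(v)
        if d is None:
            out[v] = f          # only the feature-based model saw this voxel
        elif f is None:
            out[v] = d          # only the depth-based model saw this voxel
        else:
            out[v] = w_depth * d + (1 - w_depth) * f
    return out
```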
42 - Scientific, technological and industrial services, research and design
Goods & Services
Providing online non-downloadable software for detecting and
sharing a user's location; providing online non-downloadable
software for displaying relevant local information of
general interest; providing online non-downloadable software
for establishing geolocation information and geospatial
data; providing online non-downloadable software that
enables users to search, view, share, review, upload,
compile, and post geolocation data and 3D mapping
activities; application service provider featuring
application programming interface (API) software for
generation of software applications; application service
provider featuring application programming interface (API)
software for use in searching, transmitting, receiving,
accessing, and viewing geographic location information and
providing content based on location; providing online
non-downloadable augmented reality, virtual reality, mixed
reality, and extended reality software for integrating
electronic data with real world environments; providing
online non-downloadable software for integrating electronic
data with real world environments for the purpose of
locating points of interest, events, routes, and locations;
providing online non-downloadable computer software for
providing geographic information, interactive geographic
maps; software-as-a-service for use in uploading, embedding,
and sharing 3-dimensional scans; software-as-a-service for
organizing, viewing, editing, and manipulating 3-dimensional
images and objects; software-as-a-service for viewing
augmented reality, virtual reality, mixed reality, and
extended reality content; computer programming services for
creating augmented reality, virtual reality, mixed reality,
and extended reality software applications; platform as a
service (PAAS) featuring computer software platforms,
namely, software for creating and designing software
applications featuring content integrating electronic data
with real world or virtual environments; application service
provider featuring application programming interface (API)
software for integrating electronic data with real world or
virtual environments.
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Downloadable software for detecting a user's location;
downloadable software for displaying relevant local
information of general interest; downloadable software that
enables users to view information about locations, events,
and points of interest; downloadable software for
establishing geolocation information and geospatial data;
downloadable software that enables users to search, view,
share, review, upload, compile, and post geolocation data
and 3D mapping activities; downloadable application
programming interface (API) software for generation of
mobile applications; downloadable application programming
interface (API) software for use in searching, transmitting,
receiving, accessing, and viewing geographic location
information and providing content based on location;
downloadable augmented reality software for integrating
electronic data with real world environments; downloadable
software for integrating electronic data with real world
environments for the purpose of locating points of interest,
events, routes, and locations; downloadable software for
providing geographic information and interactive geographic
maps; downloadable software for use in uploading, embedding,
and sharing 3-dimensional scans; downloadable software for
organizing, viewing, editing, and manipulating 3-dimensional
images and objects; downloadable software for viewing
augmented reality, virtual reality, and mixed reality
content; downloadable software for creating augmented
reality, virtual reality, and mixed reality content;
downloadable software for creating and designing software
applications featuring content integrating electronic data
with real world or virtual environments; application
programming interface (API) software for integrating
electronic data with real world or virtual environments;
downloadable computer game software; downloadable computer
game software for use on wireless devices; downloadable
video game programs; downloadable interactive video game
programs; downloadable electronic game programs and computer
software platforms for social networking. Providing online non-downloadable software for detecting and
sharing a user's location; providing online non-downloadable
software for displaying relevant local information of
general interest; providing online non-downloadable software
for establishing geolocation information and geospatial
data; providing online non-downloadable software that
enables users to search, view, share, review, upload,
compile, and post geolocation data and 3D mapping
activities; application service provider featuring
application programming interface (API) software for
generation of software applications; application service
provider featuring application programming interface (API)
software for use in searching, transmitting, receiving,
accessing, and viewing geographic location information and
providing content based on location; providing online
non-downloadable augmented reality software for integrating
electronic data with real world environments; providing
online non-downloadable software for integrating electronic
data with real world environments for the purpose of
locating points of interest, events, routes, and locations;
providing online non-downloadable computer software for
providing geographic information, interactive geographic
maps; software-as-a-service for use in uploading, embedding,
and sharing 3-dimensional scans; software-as-a-service for
organizing, viewing, editing, and manipulating 3-dimensional
images and objects; software-as-a-service for viewing
virtual reality content; computer programming services for
creating augmented reality, virtual reality, and mixed
reality software applications; platform as a service (PAAS)
featuring computer software platforms, namely, software for
creating and designing software applications featuring
content integrating electronic data with real world or
virtual environments; application service provider featuring
application programming interface (API) software for
integrating electronic data with real world or virtual
environments.
42 - Scientific, technological and industrial services, research and design
Goods & Services
Providing online non-downloadable software for detecting and
sharing a user's location; providing online non-downloadable
software for displaying relevant local information of
general interest; providing online non-downloadable software
for establishing geolocation information and geospatial
data; providing online non-downloadable software that
enables users to search, view, share, review, upload,
compile, and post geolocation data and 3D mapping
activities; application service provider featuring
application programming interface (API) software for
generation of software applications; application service
provider featuring application programming interface (API)
software for use in searching, transmitting, receiving,
accessing, and viewing geographic location information and
providing content based on location; providing online
non-downloadable augmented reality, virtual reality, mixed
reality, and extended reality software for integrating
electronic data with real world environments; providing
online non-downloadable software for integrating electronic
data with real world environments for the purpose of
locating points of interest, events, routes, and locations;
providing online non-downloadable computer software for
providing geographic information, interactive geographic
maps; software-as-a-service for use in uploading, embedding,
and sharing 3-dimensional scans; software-as-a-service for
organizing, viewing, editing, and manipulating 3-dimensional
images and objects; software-as-a-service for viewing
augmented reality, virtual reality, mixed reality, and
extended reality content; computer programming services for
creating augmented reality, virtual reality, mixed reality,
and extended reality software applications; platform as a
service (PAAS) featuring computer software platforms,
namely, software for creating and designing software
applications featuring content integrating electronic data
with real world or virtual environments; application service
provider featuring application programming interface (API)
software for integrating electronic data with real world or
virtual environments.
40.
Magnetic field vector map for orientation determination
The present disclosure describes a method for estimating a pose of a client device using a magnetic field vector map. The method includes receiving a plurality of magnetic field measurements from a plurality of client devices, each magnetic field measurement describing a magnetic field vector at a geographic location. The method further includes grouping the magnetic field measurements into one or more region groups, aggregating the magnetic field measurements in each region group to generate a probability distribution of magnetic field vectors associated with that geographic region, determining a magnetic field vector within each geographic region, and generating a magnetic field vector map. Based on the magnetic field vector map, the method may include estimating a pose of a client device based on a user location of the client device and a magnetic field vector received from the client device.
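The grouping-and-aggregation step can be sketched with a simple grid: measurements are bucketed into square cells and the field vectors in each cell are averaged. The grid scheme, cell size, and plain averaging (rather than a full probability distribution) are simplifying assumptions:

```python
from collections import defaultdict

def build_field_map(measurements, cell_size=10.0):
    """Bucket (x, y, vector) measurements into square grid cells and average
    the 3-component magnetic field vector per cell."""
    cells = defaultdict(list)
    for x, y, vec in measurements:
        cells[(int(x // cell_size), int(y // cell_size))].append(vec)
    return {
        key: tuple(sum(v[i] for v in vecs) / len(vecs) for i in range(3))
        for key, vecs in cells.items()
    }
```

At query time, a device's measured field vector would be matched against the map cell for its approximate location to constrain its orientation.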
A system includes a first sensor system and a second sensor system. The first sensor system includes a first internal clock, a first sensor configured to generate data describing an environment, and a Universal Serial Bus (USB) module. The second sensor system includes a second internal clock and a controller configured to perform a precision time protocol (PTP) to synchronize the second internal clock with the first internal clock. The precision time protocol includes transmitting timestamped messages encoded in USB bulk messages between the first sensor system and the second sensor system. The timestamped messages may be encoded in USB musical instrument digital interface (MIDI) bulk messages.
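The clock-offset arithmetic at the heart of PTP is small enough to show directly. Given the four timestamps of one sync/delay-request exchange, and assuming a symmetric link delay, the second clock's offset from the first is:

```python
def ptp_offset(t1, t2, t3, t4):
    """PTP clock-offset estimate from one timestamped exchange:
    t1 = master send time, t2 = slave receive time,
    t3 = slave send time,  t4 = master receive time.
    Assumes the link delay is the same in both directions."""
    return ((t2 - t1) - (t4 - t3)) / 2
```

For example, a slave clock running 5 units ahead over a 2-unit link yields timestamps (0, 7, 10, 7) and an offset of 5. In the system described here, these timestamped messages would travel inside USB bulk (e.g., MIDI) messages rather than Ethernet frames.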
An online system uses a visual positioning system (VPS) model to verify the location of a client device for anti-spoofing measures. The online system receives ostensible pose data and image data from the client device. This pose data and image data are ostensibly captured by the client device at the same time, or within some threshold time of each other. The online system determines whether they match according to a VPS model. The online system uses the VPS model to output candidate poses for the client device based on the received image data and compares those candidate poses to the pose in the received pose data. If the differences between the candidate poses and the pose from the received pose data exceed a threshold, the online system may determine that the received pose data and image data do not match and thus are likely being spoofed.
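The comparison step reduces to a distance test between the reported pose and the VPS candidate poses. This sketch assumes 2D positions and a single distance threshold; the real check presumably also considers orientation:

```python
import math

def is_spoofed(candidate_poses, reported_pose, threshold):
    """Flag the report as spoofed when every VPS candidate pose lies farther
    than `threshold` from the reported (x, y) position."""
    return all(
        math.hypot(c[0] - reported_pose[0], c[1] - reported_pose[1]) > threshold
        for c in candidate_poses
    )
```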
A63F 13/216 - Input arrangements for video game devices characterised by their sensors, purposes or types using geographical information, e.g. location of the game device or player using GPS
A63F 13/655 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
A63F 13/75 - Enforcing rules, e.g. detecting foul play or generating lists of cheating players
G06T 7/70 - Determining position or orientation of objects or cameras
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
(1) Downloadable software for detecting a user's location; downloadable software for displaying relevant local information of general interest; downloadable software that enables users to view information about locations, events, and points of interest; downloadable software for establishing geolocation information and geospatial data; downloadable software that enables users to search, view, share, review, upload, compile, and post geolocation data and 3D mapping activities; downloadable application programming interface (API) software for generation of mobile applications; downloadable application programming interface (API) software for use in searching, transmitting, receiving, accessing, and viewing geographic location information and providing content based on location; downloadable augmented reality software for integrating electronic data with real world environments; downloadable software for integrating electronic data with real world environments for the purpose of locating points of interest, events, routes, and locations; downloadable software for providing geographic information and interactive geographic maps; downloadable software for use in uploading, embedding, and sharing 3-dimensional scans; downloadable software for organizing, viewing, editing, and manipulating 3-dimensional images and objects; downloadable software for viewing augmented reality, virtual reality, and mixed reality content; downloadable software for creating augmented reality, virtual reality, and mixed reality content; downloadable software for creating and designing software applications featuring content integrating electronic data with real world or virtual environments; application programming interface (API) software for integrating electronic data with real world or virtual environments; downloadable computer game software; downloadable computer game software for use on wireless devices; downloadable video game programs; downloadable interactive video game programs; 
downloadable electronic game programs and computer software platforms for social networking. (1) Providing online non-downloadable software for detecting and sharing a user's location; providing online non-downloadable software for displaying relevant local information of general interest; providing online non-downloadable software for establishing geolocation information and geospatial data; providing online non-downloadable software that enables users to search, view, share, review, upload, compile, and post geolocation data and 3D mapping activities; application service provider featuring application programming interface (API) software for generation of software applications; application service provider featuring application programming interface (API) software for use in searching, transmitting, receiving, accessing, and viewing geographic location information and providing content based on location; providing online non-downloadable augmented reality software for integrating electronic data with real world environments; providing online non-downloadable software for integrating electronic data with real world environments for the purpose of locating points of interest, events, routes, and locations; providing online non-downloadable computer software for providing geographic information, interactive geographic maps; software-as-a-service for use in uploading, embedding, and sharing 3-dimensional scans; software-as-a-service for organizing, viewing, editing, and manipulating 3-dimensional images and objects; software-as-a-service for viewing virtual reality content; computer programming services for creating augmented reality, virtual reality, and mixed reality software applications; platform as a service (PAAS) featuring computer software platforms, namely, software for creating and designing software applications featuring content integrating electronic data with real world or virtual environments; application service provider featuring application
programming interface (API) software for integrating electronic data with real world or virtual environments.
44.
DETERMINING VISUAL OVERLAP OF IMAGES BY USING BOX EMBEDDINGS
An image matching system for determining visual overlaps between images by using box embeddings is described herein. The system receives two images depicting a 3D surface with different camera poses. The system inputs the images (or a crop of each image) into a machine learning model that outputs a box encoding for the first image and a box encoding for the second image. A box encoding includes parameters defining a box in an embedding space. Then the system determines an asymmetric overlap factor that measures asymmetric surface overlaps between the first image and the second image based on the box encodings. The asymmetric overlap factor includes an enclosure factor indicating how much surface from the first image is visible in the second image and a concentration factor indicating how much surface from the second image is visible in the first image.
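For axis-aligned boxes, the two factors are just the intersection volume normalized two different ways. A sketch assuming each box is a list of per-dimension (min, max) intervals — the learned encoder that produces the box embeddings is not shown:

```python
def overlap_factors(box_a, box_b):
    """Asymmetric overlap between two axis-aligned boxes:
    enclosure     = vol(A ∩ B) / vol(A)
    concentration = vol(A ∩ B) / vol(B)
    Each box is a list of (min, max) intervals, one per dimension."""
    inter = vol_a = vol_b = 1.0
    for (a_lo, a_hi), (b_lo, b_hi) in zip(box_a, box_b):
        inter *= max(0.0, min(a_hi, b_hi) - max(a_lo, b_lo))
        vol_a *= a_hi - a_lo
        vol_b *= b_hi - b_lo
    return inter / vol_a, inter / vol_b
```

The asymmetry is the point: a close-up crop fully enclosed by a wide shot gives an enclosure factor of 1 in one direction but a small concentration factor in the other, which IoU-style symmetric measures cannot express.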
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video featuresCoarse-fine approaches, e.g. multi-scale approachesImage or video pattern matchingProximity measures in feature spaces using context analysisSelection of dictionaries
G06F 18/214 - Generating training patternsBootstrap methods, e.g. bagging or boosting
G06N 3/088 - Non-supervised learning, e.g. competitive learning
G06V 10/42 - Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
G06V 10/50 - Extraction of image or video features by performing operations within image blocksExtraction of image or video features by using histograms, e.g. histogram of oriented gradients [HoG]Extraction of image or video features by summing image-intensity valuesProjection analysis
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Downloadable software for detecting a user's location; downloadable software for displaying relevant local information of general interest; downloadable software that enables users to view information about locations, events, and points of interest; downloadable software for establishing geolocation information and geospatial data; downloadable software that enables users to search, view, share, review, upload, compile, and post geolocation data and 3D mapping activities; Downloadable application programming interface (API) software for generation of mobile applications; downloadable application programming interface (API) software for use in searching, transmitting, receiving, accessing, and viewing geographic location information and providing content based on location; downloadable augmented reality software for integrating electronic data with real world environments; downloadable software for integrating electronic data with real world environments for the purpose of locating points of interest, events, routes, and locations; downloadable software for providing geographic information and interactive geographic maps; downloadable software for use in uploading, embedding, and sharing 3-dimensional scans; downloadable software for organizing, viewing, editing, and manipulating 3-dimensional images and objects; downloadable software for viewing augmented reality, virtual reality, and mixed reality content; downloadable software for creating augmented reality, virtual reality, and mixed reality content; downloadable software for creating and designing software applications featuring content integrating electronic data with real world or virtual environments; downloadable application programming interface (API) software for integrating electronic data with real world or virtual environments; downloadable computer game software; downloadable computer game software for use on wireless devices; downloadable video game programs; downloadable interactive video game 
programs; downloadable electronic game programs and computer software platforms for social networking. Providing online non-downloadable software for detecting and sharing a user's location; providing online non-downloadable software for displaying relevant local information of general interest; providing online non-downloadable software for establishing geolocation information and geospatial data; providing online non-downloadable software that enables users to search, view, share, review, upload, compile, and post geolocation data and 3D mapping activities; application service provider featuring application programming interface (API) software for generation of software applications; application service provider featuring application programming interface (API) software for use in searching, transmitting, receiving, accessing, and viewing geographic location information and providing content based on location; providing online non-downloadable augmented reality software for integrating electronic data with real world environments; providing online non-downloadable software for integrating electronic data with real world environments for the purpose of locating points of interest, events, routes, and locations; providing online non-downloadable computer software for providing geographic information, interactive geographic maps; software-as-a-service (SAAS) services featuring software for use in uploading, embedding, and sharing 3-dimensional scans; software-as-a-service (SAAS) services featuring software for organizing, viewing, editing, and manipulating 3-dimensional images and objects; software-as-a-service (SAAS) services featuring software for viewing virtual reality content; computer programming services for creating augmented reality, virtual reality, and mixed reality software applications; platform as a service (PAAS) featuring computer software platforms, namely, software for creating and designing software applications featuring content integrating
electronic data with real world or virtual environments; application service provider featuring application programming interface (API) software for integrating electronic data with real world or virtual environments
42 - Scientific, technological and industrial services, research and design
Goods & Services
(1) Providing online non-downloadable software for detecting and sharing a user's location; providing online non-downloadable software for displaying relevant local information of general interest; providing online non-downloadable software for establishing geolocation information and geospatial data; providing online non-downloadable software that enables users to search, view, share, review, upload, compile, and post geolocation data and 3D mapping activities; application service provider featuring application programming interface (API) software for generation of software applications; application service provider featuring application programming interface (API) software for use in searching, transmitting, receiving, accessing, and viewing geographic location information and providing content based on location; providing online non-downloadable augmented reality, virtual reality, mixed reality, and extended reality software for integrating electronic data with real world environments; providing online non-downloadable software for integrating electronic data with real world environments for the purpose of locating points of interest, events, routes, and locations; providing online non-downloadable computer software for providing geographic information, interactive geographic maps; software-as-a-service for use in uploading, embedding, and sharing 3-dimensional scans; software-as-a-service for organizing, viewing, editing, and manipulating 3-dimensional images and objects; software-as-a-service for viewing augmented reality, virtual reality, mixed reality, and extended reality content; computer programming services for creating augmented reality, virtual reality, mixed reality, and extended reality software applications; platform as a service (PAAS) featuring computer software platforms, namely, software for creating and designing software applications featuring content integrating electronic data with real world or virtual environments; application service
provider featuring application programming interface (API) software for integrating electronic data with real world or virtual environments.
42 - Scientific, technological and industrial services, research and design
Goods & Services
(1) Providing online non-downloadable software for detecting and sharing a user's location; providing online non-downloadable software for displaying relevant local information of general interest; providing online non-downloadable software for establishing geolocation information and geospatial data; providing online non-downloadable software that enables users to search, view, share, review, upload, compile, and post geolocation data and 3D mapping activities; application service provider featuring application programming interface (API) software for generation of software applications; application service provider featuring application programming interface (API) software for use in searching, transmitting, receiving, accessing, and viewing geographic location information and providing content based on location; providing online non-downloadable augmented reality, virtual reality, mixed reality, and extended reality software for integrating electronic data with real world environments; providing online non-downloadable software for integrating electronic data with real world environments for the purpose of locating points of interest, events, routes, and locations; providing online non-downloadable computer software for providing geographic information, interactive geographic maps; software-as-a-service for use in uploading, embedding, and sharing 3-dimensional scans; software-as-a-service for organizing, viewing, editing, and manipulating 3-dimensional images and objects; software-as-a-service for viewing augmented reality, virtual reality, mixed reality, and extended reality content; computer programming services for creating augmented reality, virtual reality, mixed reality, and extended reality software applications; platform as a service (PAAS) featuring computer software platforms, namely, software for creating and designing software applications featuring content integrating electronic data with real world or virtual environments; application service
provider featuring application programming interface (API) software for integrating electronic data with real world or virtual environments.
48.
Depth Image Generation Using a Graphics Processor for Augmented Reality
An AR device displays virtual objects to users as part of an AR experience by generating a depth image rendered from a three-dimensional (3D) world model. The AR device receives an image from a camera and estimates its physical pose in the real world when the image was captured. The AR device accesses the 3D world model and estimates a virtual pose within the 3D world model that corresponds to the estimated physical pose in the real world. The AR device uses the virtual pose to render the depth image from the 3D world model. The AR device may use a graphics processor to render the depth image from a camera view corresponding to the virtual pose. The AR device uses the depth image to present content to the user over the image captured by the camera.
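The render pass amounts to projecting world-model geometry through a pinhole camera at the virtual pose and keeping the nearest depth per pixel. A CPU stand-in for that GPU pass, assuming camera-space 3D points and a simple focal-length intrinsic (both assumptions):

```python
def render_depth(points_cam, width, height, f):
    """Render a tiny depth image from camera-space points by pinhole projection
    with a z-buffer. Camera looks down +z; f is the focal length in pixels."""
    inf = float("inf")
    depth = [[inf] * width for _ in range(height)]
    cx, cy = width / 2, height / 2  # principal point at the image center
    for x, y, z in points_cam:
        if z <= 0:
            continue  # behind the camera
        u, v = int(f * x / z + cx), int(f * y / z + cy)
        if 0 <= u < width and 0 <= v < height:
            depth[v][u] = min(depth[v][u], z)  # keep the nearest surface
    return depth
```

With this depth image, virtual content can be composited over the camera frame with correct occlusion: a virtual object pixel is drawn only where its depth is less than the rendered world-model depth.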
An online system uses a pose prior model and a pose objective function to estimate the pose of a client device. A pose prior model is a model for prior information known about client devices and their poses without reference to a particular client device and its pose data. The online system receives pose data from a client device and computes an estimated pose for the client device based on the received pose data, the pose prior model, and a generated initial candidate pose for the client device. The online system uses these as inputs to a pose objective function and optimizes the pose objective function to estimate a pose for the client device. The online system transmits this estimated pose to the client device, and may use the estimated pose as the pose for the client device for the purposes of delivering content to the user.
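A minimal sketch of combining a measurement term with a prior term in a pose objective, assuming a discrete set of candidate poses and a log-probability prior. All names here are hypothetical; the actual objective function and optimizer are not specified by the abstract.

```python
def estimate_pose(measured_xy, prior_logprob, candidates, weight=1.0):
    """Return the candidate pose minimizing a toy objective that trades off
    distance to the device-reported position against a location prior."""
    def objective(p):
        dx, dy = p[0] - measured_xy[0], p[1] - measured_xy[1]
        return dx * dx + dy * dy - weight * prior_logprob(p)
    return min(candidates, key=objective)
```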
A63F 13/428 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
A63F 13/211 - Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
A63F 13/216 - Input arrangements for video game devices characterised by their sensors, purposes or types using geographical information, e.g. location of the game device or player using GPS
G06F 3/0346 - Pointing devices displaced or positioned by the userAccessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
50.
Self-Supervised Training of a Depth Estimation System
A method for training a depth estimation model and methods for use thereof are described. A plurality of images is acquired and input into a depth model to extract a depth map for each of the images based on parameters of the depth model. The method includes inputting the images into a pose decoder to extract a pose for each image. The method includes generating a plurality of synthetic frames based on the depth map and the pose for each image. The method includes calculating a loss value with an input scale occlusion and motion aware loss function based on a comparison of the synthetic frames and the images. The method includes adjusting the parameters of the depth model based on the loss value. The trained model can receive an image of a scene and generate a depth map of the scene according to the image.
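The occlusion- and motion-aware loss is not specified in detail, but a common self-supervised formulation takes the per-pixel minimum photometric error over the synthesized frames; the following is a sketch under that assumption, not the disclosed loss function.

```python
import numpy as np

def min_reprojection_loss(target, synth_frames):
    """Occlusion-aware photometric loss: per pixel, take the minimum
    absolute error across the synthesized frames, then average. A pixel
    occluded in one source view is scored by a view where it is visible."""
    errors = np.stack([np.abs(s - target) for s in synth_frames])  # (N, H, W)
    return float(np.mean(np.min(errors, axis=0)))
```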
The disclosure describes a method for calibrating a magnetic sensor of a client device. The method may include receiving a set of magnetic field measurements, each of which includes a device location, an orientation of the client device, and an observed magnetic field vector measured by the magnetic sensor. The method may include computing a device correction vector for the client device based on the set of magnetic field measurements. For each magnetic field measurement, the method includes determining a world magnetic field vector at the device location of the magnetic field measurement, computing an expected measured magnetic field vector at the device location, accessing an estimated device correction vector for the client device, computing an expected adjusted vector for the client device, comparing the observed magnetic field vector associated with the magnetic field measurement and the expected adjusted vector, and computing the device correction vector based on the comparison.
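One plausible reading of the correction-vector computation is a fixed per-device magnetometer bias estimated by averaging, over measurements, the gap between the observed reading and the world field rotated into the device frame. This is a hedged sketch; `device_correction_vector` and its input layout are assumptions, and the abstract's actual comparison step may differ.

```python
import numpy as np

def device_correction_vector(measurements):
    """Average, over measurements of (R_world_to_device, observed, world_field),
    the difference between the observed reading and the world field rotated
    into the device frame; the mean is taken as the per-device bias."""
    diffs = [obs - R @ world for R, obs, world in measurements]
    return np.mean(diffs, axis=0)
```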
A machine learned model may calculate a relative pose between a pair of overlapping images of a scene. The model may be applied to predict one or more errors (e.g., translation error and/or rotation error) in the relative pose between the pair of overlapping images. The model may leverage epipolar geometry to compare features of the overlapping images in a dense manner. For example, the two-view geometry model may incorporate the epipolar geometry into an attention layer of a neural network for one or more different fundamental matrix hypotheses. The model may output one or more predicted errors for the pair of images along with a proposed fundamental matrix hypothesis. A client device may select a fundamental matrix associated with the lowest predicted one or more errors. The client device may then display content that accounts for the predicted one or more errors.
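Selecting the fundamental-matrix hypothesis with the lowest predicted error could look like the following sketch, where `trans_err` and `rot_err` are assumed names for the model's predicted translation and rotation errors and the two are simply summed:

```python
def select_fundamental(hypotheses):
    """Pick the fundamental-matrix hypothesis whose predicted pose errors
    (here simply summed) are smallest."""
    return min(hypotheses, key=lambda h: h["trans_err"] + h["rot_err"])["F"]
```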
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/216 - Input arrangements for video game devices characterised by their sensors, purposes or types using geographical information, e.g. location of the game device or player using GPS
A63F 13/655 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
The disclosure describes a method for calibrating a magnetic sensor of a client device. The method may include receiving a set of magnetic field measurements, each of which includes a device location, an orientation of the client device, and an observed magnetic field vector measured by the magnetic sensor. The method may include computing a device correction vector for the client device based on the set of magnetic field measurements. For each magnetic field measurement, the method includes determining a world magnetic field vector at the device location of the magnetic field measurement, computing an expected measured magnetic field vector at the device location, accessing an estimated device correction vector for the client device, computing an expected adjusted vector for the client device, comparing the observed magnetic field vector associated with the magnetic field measurement and the expected adjusted vector, and computing the device correction vector based on the comparison.
The present disclosure describes approaches to camera re-localization that improve the accuracy of re-localization determinations by performing simulated consistency checks for three-dimensional maps. Client devices associated with users of a location-based application transmit image scans to a game server, which divides the received scan data into mapping sets used to generate 3D maps of environments and validation sets used to test the accuracy of the maps. To perform the testing, the game server identifies query scans in the validation set having GPS coordinates within a threshold distance of the mapped location and uses the 3D map of the environment to generate a pose estimate for each frame. The results of the localization queries are analyzed by comparing differences between the localization pose estimates and differences between the poses of independent pairs of frames in the query scan to evaluate the accuracy of the 3D map.
42 - Scientific, technological and industrial services, research and design
Goods & Services
Providing online non-downloadable software for detecting and sharing a user's location; providing online non-downloadable software for displaying relevant local information of general interest; providing online non-downloadable software for establishing geolocation information and geospatial data; providing online non-downloadable software that enables users to search, view, share, review, upload, compile, and post geolocation data and 3D mapping activities; application service provider featuring application programming interface (API) software for generation of software applications; application service provider featuring application programming interface (API) software for use in searching, transmitting, receiving, accessing, and viewing geographic location information and providing content based on location; providing online non-downloadable augmented reality, virtual reality, mixed reality, and extended reality software for integrating electronic data with real world environments; providing online non-downloadable software for integrating electronic data with real world environments for the purpose of locating points of interest, events, routes, and locations; providing online non-downloadable computer software for providing geographic information, interactive geographic maps; software-as-a-service for use in uploading, embedding, and sharing 3-dimensional scans; software-as-a-service for organizing, viewing, editing, and manipulating 3-dimensional images and objects; software-as-a-service for viewing augmented reality, virtual reality, mixed reality, and extended reality content; computer programming services for creating augmented reality, virtual reality, mixed reality, and extended reality software applications; platform as a service (PAAS) featuring computer software platforms, namely, software for creating and designing software applications featuring content integrating electronic data with real world or virtual environments; application service provider featuring application programming interface (API) software for integrating electronic data with real world or virtual environments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Providing online non-downloadable software for detecting and sharing a user's location; providing online non-downloadable software for displaying relevant local information of general interest; providing online non-downloadable software for establishing geolocation information and geospatial data; providing online non-downloadable software that enables users to search, view, share, review, upload, compile, and post geolocation data and 3D mapping activities; application service provider featuring application programming interface (API) software for generation of software applications; application service provider featuring application programming interface (API) software for use in searching, transmitting, receiving, accessing, and viewing geographic location information and providing content based on location; providing online non-downloadable augmented reality, virtual reality, mixed reality, and extended reality software for integrating electronic data with real world environments; providing online non-downloadable software for integrating electronic data with real world environments for the purpose of locating points of interest, events, routes, and locations; providing online non-downloadable computer software for providing geographic information, interactive geographic maps; software-as-a-service for use in uploading, embedding, and sharing 3-dimensional scans; software-as-a-service for organizing, viewing, editing, and manipulating 3-dimensional images and objects; software-as-a-service for viewing augmented reality, virtual reality, mixed reality, and extended reality content; computer programming services for creating augmented reality, virtual reality, mixed reality, and extended reality software applications; platform as a service (PAAS) featuring computer software platforms, namely, software for creating and designing software applications featuring content integrating electronic data with real world or virtual environments; application service provider featuring application programming interface (API) software for integrating electronic data with real world or virtual environments
The present disclosure describes a location-based application in which users can attach media content to geographic locations. Other users can later experience the media content when in proximity to the geographic location or via a map interface displaying indications of media content in the vicinity of the viewing user. When users experience media content attached to a geographic location they may initiate communication with the creator of the media content. Additionally, the creator or viewer of a video or photograph of a geographic location may select an object depicted in the video or photograph to create a 2D or 3D virtual object representing the depicted object. This virtual object may be held by the user (e.g., in a virtual bag) or dropped at the same or a different geographic location where it may be viewed and interacted with by other users in an augmented reality environment.
Methods and systems for creating accurate three-dimensional representations of environments using neural radiance fields regularized by denoising diffusion models are disclosed. A plurality of images representing an environment is received, and a scene representation model is trained to create the three-dimensional model of the environment. Using these images, virtual rays are sampled from training viewpoints within the environment. The scene representation model is then applied to these rays to generate simulated images of the environment from the training viewpoints. These simulated images undergo a regularization process that uses a denoising diffusion model to determine color gradients and depth gradients in each simulated image. The scene representation model is trained with this data to create the final three-dimensional model of the environment. This model is provided to the requesting client device to generate the three-dimensional representation and create a virtual object within the environment.
A63F 13/216 - Input arrangements for video game devices characterised by their sensors, purposes or types using geographical information, e.g. location of the game device or player using GPS
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/655 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
A63F 13/847 - Cooperative playing, e.g. requiring coordinated actions from several players to achieve a common goal
An augmented reality system generates computer-mediated reality on a client device. The client device has sensors including a camera configured to capture image data of an environment. The augmented reality system generates a first 3D map of the environment around the client device based on captured image data. The server receives image data captured from a second client device in the environment and generates a second 3D map of the environment. The server links the first and second 3D maps together into a singular 3D map. The singular 3D map may be a graphical representation of the real world using nodes that represent 3D maps generated by image data captured at client devices and edges that represent transformations between the nodes.
A63F 13/5378 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for displaying an additional top view, e.g. radar screens or maps
A63F 13/216 - Input arrangements for video game devices characterised by their sensors, purposes or types using geographical information, e.g. location of the game device or player using GPS
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
G06T 15/00 - 3D [Three Dimensional] image rendering
G06T 19/00 - Manipulating 3D models or images for computer graphics
62.
Refining camera re-localization determination using prior pose model
The present disclosure describes approaches to camera re-localization that improve the speed and accuracy with which pose estimates are generated by fusing output of a computer vision algorithm with data from a prior model of a geographic area in which a user is located. For each candidate pose estimate output by the algorithm, a game server maps the estimate to a position on the prior model (e.g., a specific cell on a heatmap-style histogram) and retrieves a probability corresponding to the mapped position. A data fusion module fuses, for each candidate pose estimate, a confidence score generated by the computer vision algorithm with the location probability from the prior model to generate an updated confidence score. If an updated confidence score meets or exceeds a score threshold, a re-localization module initiates a location-based application (e.g., a parallel reality game) based on the associated candidate pose estimate.
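The fusion step might be sketched as multiplying each candidate's vision confidence by the prior probability of its mapped position and thresholding the result. This is illustrative only; `fuse_and_select` and the multiplicative fusion rule are assumptions, as the disclosed fusion is not specified beyond "fusing" the two scores.

```python
def fuse_and_select(candidates, prior_prob, threshold=0.5):
    """Fuse each candidate pose's vision confidence with the prior model's
    probability for its mapped position; return the best (pose, score) pair
    whose fused score meets the threshold, or None if none does."""
    best = None
    for pose, confidence in candidates:
        score = confidence * prior_prob(pose)
        if score >= threshold and (best is None or score > best[1]):
            best = (pose, score)
    return best
```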
A63F 13/53 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/216 - Input arrangements for video game devices characterised by their sensors, purposes or types using geographical information, e.g. location of the game device or player using GPS
A63F 13/5378 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for displaying an additional top view, e.g. radar screens or maps
G06T 7/70 - Determining position or orientation of objects or cameras
G06T 19/00 - Manipulating 3D models or images for computer graphics
63.
Accelerated Coordinate Encoding: Learning to Relocalize in Minutes Using RGB and Poses
A set of training images of one or more environments and corresponding metadata are received. The metadata includes camera pose and intrinsics. A relocalizer model is trained using the set of training images and the corresponding metadata to predict scene coordinates corresponding to pixels in an image of an environment. The relocalizer model includes a scene-agnostic convolutional network and a scene-specific regression network. A set of query images of an environment is received and the trained relocalizer model is applied to the set of query images of the environment to generate predicted scene coordinates corresponding to the pixels in a query image. A pose solver algorithm is applied to the predicted scene coordinates to generate a camera pose.
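The abstract does not specify the pose solver; as a stand-in, a rigid (Kabsch) alignment between camera-frame points and their predicted scene coordinates illustrates how a pose can be recovered from scene-coordinate predictions. A real solver would typically use PnP with RANSAC over 2D-3D matches.

```python
import numpy as np

def solve_pose(cam_points, scene_points):
    """Kabsch rigid alignment: find R, t such that scene ≈ R @ cam + t,
    standing in for a PnP-style pose solver over scene-coordinate predictions."""
    cc, sc = cam_points.mean(axis=0), scene_points.mean(axis=0)
    H = (cam_points - cc).T @ (scene_points - sc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, sc - R @ cc
```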
A system generates augmented reality content by generating an occlusion mask via implicit depth estimation. The system receives input image(s) of a real-world environment captured by a camera assembly. The system generates a feature map from the input image(s), wherein the feature map comprises abstract features representing depth of object(s) in the real-world environment. The system generates an occlusion mask from the feature map and a depth map for the virtual object. The depth map for the virtual object indicates a depth of each pixel of the virtual object. The occlusion mask indicates pixel(s) of the virtual object that are occluded by an object in the real-world environment. The system generates a composite image based on a first input image at a current timestamp, the virtual object, and the occlusion mask. The composite image may then be displayed on an electronic display.
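The occlusion test itself reduces to a per-pixel depth comparison: a virtual pixel is occluded wherever the real scene is closer than the virtual object. The sketch below assumes an explicit real-world depth image for clarity; the patented system instead derives depth implicitly from a feature map.

```python
import numpy as np

def occlusion_mask(real_depth, virtual_depth):
    """A virtual-object pixel is occluded where the real scene is closer
    to the camera than the virtual object at that pixel; pixels with no
    virtual content (infinite depth) are never marked occluded."""
    return (real_depth < virtual_depth) & np.isfinite(virtual_depth)
```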
A machine learning model classifies points of interest in a parallel reality game hosted by a server. The server generates training data sets that include verified properties for points of interest. The machine learning model may predict unverified properties for points of interest. Players in the parallel reality game may input properties for the points of interest. The machine learning model uses the properties received from players as inputs to verify unverified properties or generate new properties for the points of interest. The server may classify the points of interest as suitable for particular activities, and the server may use the classifications for future activities within the parallel reality game.
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
A63F 13/216 - Input arrangements for video game devices characterised by their sensors, purposes or types using geographical information, e.g. location of the game device or player using GPS
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Downloadable software for use in scanning 3-dimensional images and objects; downloadable software for use in creating 3-dimensional scans; downloadable software for organizing, viewing, editing, and manipulating 3-dimensional images and objects; downloadable software for transmission of scanned 3-dimensional images and objects; downloadable software for use in creating virtual reality content. Software-as-a-service for use in uploading, embedding, and sharing 3-dimensional scans; software-as-a-service for organizing, viewing, editing, and manipulating 3-dimensional images and objects; software-as-a-service for viewing virtual reality content.
67.
Location determination and mapping with 3D line junctions
A system and method for determining a location of a client device is described herein. In particular, a client device receives images captured by a camera at the client device. The client device identifies features in the images. The features may be line junctions, lines, curves, or any other features found in images. The client device retrieves a 3D map of the environment from a map database and compares the identified features to the 3D map of the environment, which includes map features such as map line junctions, map lines, map curves, and the like. The client device identifies a correspondence between the features identified from the images and the map features and determines a location of the client device in the real world based on the correspondence. The client device may display visual data representing a location in a virtual world corresponding to the location in the real world.
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersectionsConnectivity analysis, e.g. of connected components
A63F 13/25 - Output arrangements for video game devices
G06V 20/20 - ScenesScene-specific elements in augmented reality scenes
68.
Smooth object correction for augmented reality devices
An augmented reality (“AR”) device applies smooth correction methods to correct the location of the virtual objects presented to a user. The AR device may apply an angular threshold to determine whether a virtual object can be moved from an original location to a target location. An angular threshold is a maximum angle by which a line from the AR device to the virtual object can change within a timestep. Similarly, the AR device may apply a motion threshold, which is a maximum on the distance that a virtual object's location can be corrected based on the motion of the virtual object. Furthermore, the AR device may apply a pixel threshold to the correction of the virtual object's location. A pixel threshold is a maximum on the distance that a pixel projection of the virtual object can change based on the virtual object's change in location.
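The motion threshold described above amounts to clamping each correction step to a maximum distance per timestep; a sketch follows (`clamp_correction` and its signature are illustrative, and the angular and pixel thresholds would be applied analogously, limiting angle change and projected-pixel change instead of distance).

```python
import numpy as np

def clamp_correction(current, target, max_step):
    """Move a virtual object toward its corrected target location, but by
    no more than max_step per timestep, so corrections appear smooth."""
    cur = np.asarray(current, dtype=float)
    delta = np.asarray(target, dtype=float) - cur
    dist = np.linalg.norm(delta)
    if dist <= max_step:
        return cur + delta  # close enough: snap to the target
    return cur + delta * (max_step / dist)
```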
The present disclosure describes approaches to camera re-localization that improve the accuracy of re-localization determinations by performing simulated consistency checks for three-dimensional maps. Client devices associated with users of a location-based application transmit image scans to a game server, which divides the received scan data into mapping sets used to generate 3D maps of environments and validation sets used to test the accuracy of the maps. To perform the testing, the game server identifies query scans in the validation set having GPS coordinates within a threshold distance of the mapped location and uses the 3D map of the environment to generate a pose estimate for each frame. The results of the localization queries are analyzed by comparing differences between the localization pose estimates and differences between the poses of independent pairs of frames in the query scan to evaluate the accuracy of the 3D map.
An image matching system for determining visual overlaps between images by using box embeddings is described herein. The system receives two images depicting a 3D surface with different camera poses. The system inputs the images (or a crop of each image) into a machine learning model that outputs a box encoding for the first image and a box encoding for the second image. A box encoding includes parameters defining a box in an embedding space. Then the system determines an asymmetric overlap factor that measures asymmetric surface overlaps between the first image and the second image based on the box encodings. The asymmetric overlap factor includes an enclosure factor indicating how much surface from the first image is visible in the second image and a concentration factor indicating how much surface from the second image is visible in the first image.
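For axis-aligned boxes in the embedding space, the enclosure and concentration factors can be computed from the intersection volume. This is a sketch assuming hard box boundaries; the actual model may use soft or learned box volumes.

```python
import numpy as np

def overlap_factors(box_a, box_b):
    """Asymmetric overlap between axis-aligned boxes given as (min, max)
    corner pairs: enclosure = |A∩B| / |A|, concentration = |A∩B| / |B|."""
    lo = np.maximum(box_a[0], box_b[0])
    hi = np.minimum(box_a[1], box_b[1])
    inter = np.prod(np.clip(hi - lo, 0.0, None))
    volume = lambda box: np.prod(box[1] - box[0])
    return inter / volume(box_a), inter / volume(box_b)
```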
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video featuresCoarse-fine approaches, e.g. multi-scale approachesImage or video pattern matchingProximity measures in feature spaces using context analysisSelection of dictionaries
G06F 18/214 - Generating training patternsBootstrap methods, e.g. bagging or boosting
G06N 3/088 - Non-supervised learning, e.g. competitive learning
G06V 10/42 - Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
G06V 10/50 - Extraction of image or video features by performing operations within image blocksExtraction of image or video features by using histograms, e.g. histogram of oriented gradients [HoG]Extraction of image or video features by summing image-intensity valuesProjection analysis
A method of determining a position for a virtual object is described. A location of a client device is determined, and, based on the determined location, a set of map segments is retrieved. A virtual object is determined to be displayed on the client device. Relation vectors between the virtual object and each map segment of the retrieved set of map segments are obtained. Each relation vector is weighted based on object parameters of the virtual object. A position to display the virtual object is determined based on the weighted relation vectors. The virtual object is provided for display on the client device at the determined position.
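The weighted-placement step can be sketched as a weighted average of per-segment positions offset by their relation vectors. The function name and the averaging scheme are assumptions; the disclosure does not specify how the weighted vectors are combined.

```python
import numpy as np

def place_object(segment_positions, relation_vectors, weights):
    """Weighted placement: offset each map segment by its relation vector to
    the object, then average the offsets using the object-parameter weights."""
    w = np.asarray(weights, dtype=float)
    candidates = np.asarray(segment_positions, float) + np.asarray(relation_vectors, float)
    return (w[:, None] * candidates).sum(axis=0) / w.sum()
```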
A63F 13/5378 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for displaying an additional top view, e.g. radar screens or maps
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
(1) Downloadable software for use in scanning 3-dimensional images and objects; downloadable software for use in creating 3-dimensional scans; downloadable software for organizing, viewing, editing, and manipulating 3-dimensional images and objects; downloadable software for transmission of scanned 3-dimensional images and objects; downloadable software for use in creating virtual reality content. (1) Software-as-a-service for use in uploading, embedding, and sharing 3-dimensional scans; software-as-a-service for organizing, viewing, editing, and manipulating 3-dimensional images and objects; software-as-a-service for viewing virtual reality content.
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
downloadable software for use in scanning 3-dimensional images and objects; downloadable software for use in creating 3-dimensional scans; downloadable software for organizing, viewing, editing, and manipulating 3-dimensional images and objects; downloadable software for transmission of scanned 3-dimensional images and objects; downloadable software for use in creating virtual reality content software-as-a-service for use in uploading, embedding, and sharing 3-dimensional scans; software-as-a-service for organizing, viewing, editing, and manipulating 3-dimensional images and objects; software-as-a-service for viewing virtual reality content
A head-mounted device (HMD) may include a front portion, a rear portion, and one or more bands. The front portion may include an optical display configured to output image light to a user's eyes. The rear portion may be arranged with the front portion to balance the weight of the HMD. The one or more bands connect the front portion and the rear portion so that the two portions rest on either side of the user's head.
A63F 13/25 - Output arrangements for video game devices
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
A reference image and recorded sound of an environment of a client device are obtained. The recorded sound may be captured by a microphone of the client device in a period of time after generation of a localization sound by the client device. The location of the client device in the environment may be determined using the reference image and the recorded sound.
A method or a system for map-free visual relocalization of a device. The system obtains a reference image of an environment captured by a reference camera at a reference pose. The system also receives a query image taken by a camera of the device. The system determines a relative pose of the camera of the device relative to the reference camera based in part on the reference image and the query image. The system determines a pose of the camera of the device in the environment based on the reference pose and the relative pose.
An augmented reality (“AR”) device applies smooth correction methods to correct the location of the virtual objects presented to a user. The AR device may apply an angular threshold to determine whether a virtual object can be moved from an original location to a target location. An angular threshold is a maximum angle by which a line from the AR device to the virtual object can change within a timestep. Similarly, the AR device may apply a motion threshold, which is a maximum on the distance that a virtual object's location can be corrected based on the motion of the virtual object. Furthermore, the AR device may apply a pixel threshold to the correction of the virtual object's location. A pixel threshold is a maximum on the distance that a pixel projection of the virtual object can change based on the virtual object's change in location.
A depth estimation module may receive a reference image and a set of source images of an environment. The depth module may receive image features of the reference image and the set of source images. The depth module may generate a four-dimensional (4D) feature volume that includes the image features and metadata associated with the reference image and the set of source images. The image features and the metadata may be arranged in the feature volume based on relative pose distances between the reference image and the set of source images. The depth module may reduce the 4D feature volume to generate a three-dimensional (3D) cost volume. The depth module may apply a depth estimation model to the 3D cost volume and data based on the reference image to generate a two-dimensional (2D) depth map for the reference image.
A model predicts the geometry of both visible and occluded traversable surfaces from input images. The model may be trained from stereo video sequences, using camera poses, per-frame depth, and semantic segmentation to form training data, which is used to supervise an image-to-image network. In various embodiments, the model is applied to a single RGB image depicting a scene to produce information describing the traversable space of the scene, including occluded traversable surfaces. The information describing traversable space can include a segmentation mask distinguishing traversable space (both visible and occluded) from non-traversable space, and a depth map indicating an estimated depth to the traversable surface corresponding to each pixel determined to correspond to traversable space.
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
45 - Legal and security services; personal services for individuals.
Goods & Services
Downloadable game software; downloadable game software for use on mobile devices; downloadable video game software; downloadable interactive game software; downloadable augmented reality game software; downloadable computer software for social networking; downloadable game software for creating, customizing, and interacting with an animal-like character. Providing online non-downloadable game software; providing online non-downloadable video game software; providing online non-downloadable interactive game software; providing online non-downloadable augmented reality game software; providing online non-downloadable game software for creating, customizing, and interacting with an animal-like character. Online social networking services.
81.
Refining camera re-localization determination using prior pose model
The present disclosure describes approaches to camera re-localization that improve the speed and accuracy with which pose estimates are generated by fusing output of a computer vision algorithm with data from a prior model of a geographic area in which a user is located. For each candidate pose estimate output by the algorithm, a game server maps the estimate to a position on the prior model (e.g., a specific cell on a heatmap-style histogram) and retrieves a probability corresponding to the mapped position. A data fusion module fuses, for each candidate pose estimate, a confidence score generated by the computer vision algorithm with the location probability from the prior model to generate an updated confidence score. If an updated confidence score meets or exceeds a score threshold, a re-localization module initiates a location-based application (e.g., a parallel reality game) based on the associated candidate pose estimate.
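The fusion step described above can be sketched as follows. The multiplicative fusion rule and all names below are assumptions for illustration, not the patented method:

```python
def fuse_confidence(candidates, prior_grid, cell_of, score_threshold=0.5):
    """Fuse each candidate pose's vision confidence with a location prior.

    candidates: iterable of (pose, confidence) from the CV algorithm.
    prior_grid: dict mapping a cell key to a prior probability
                (a heatmap-style histogram over the geographic area).
    cell_of:    maps a pose to its cell key on the prior model.
    Returns the best (pose, updated_score) clearing the threshold, or None.
    """
    best = None
    for pose, conf in candidates:
        prior = prior_grid.get(cell_of(pose), 0.0)
        updated = conf * prior  # simple multiplicative fusion (an assumption)
        if updated >= score_threshold and (best is None or updated > best[1]):
            best = (pose, updated)
    return best
```

A returned candidate whose updated score clears the threshold would then be used to initiate the location-based application.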
A63F 13/5378 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game; using indicators, e.g. showing the condition of a game character on screen; for displaying an additional top view, e.g. radar screens or maps
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/216 - Input arrangements for video game devices characterised by their sensors, purposes or types using geographical information, e.g. location of the game device or player using GPS
G06T 7/70 - Determining position or orientation of objects or cameras
G06T 19/00 - Manipulating 3D models or images for computer graphics
82.
MAPPING TRAVERSABLE SPACE IN A SCENE USING A THREE-DIMENSIONAL MESH
A parallel-reality game uses a virtual game board having tiles placed over an identified traversable space corresponding to flat regions of a scene. A game board generation module receives one or more images of the scene captured by a camera of a mobile device. The game board generation module obtains a topographical mesh of the scene based on the received one or more images. The game board generation module then identifies a traversable space within the scene based on the obtained topographical mesh. The game board generation module determines a location for each of a set of polygon tiles in the identified traversable space. The game board generation module also allows for queries to identify parts of the game board that meet one or more provided criteria.
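One plausible way to identify flat, traversable regions of a topographical mesh is to test each face's normal against the up direction. This sketch assumes a triangle mesh and a hypothetical slope threshold; it is illustrative only, not the claimed method:

```python
import numpy as np

def traversable_faces(vertices, faces, up=(0.0, 0.0, 1.0), max_slope_deg=15.0):
    """Flag mesh faces whose normal is within max_slope_deg of `up`
    as traversable (flat enough to place a game-board tile on)."""
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces)
    # Per-face normals from two triangle edges.
    n = np.cross(v[f[:, 1]] - v[f[:, 0]], v[f[:, 2]] - v[f[:, 0]])
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    cos_max = np.cos(np.radians(max_slope_deg))
    return np.abs(n @ np.asarray(up, dtype=float)) >= cos_max
```

Tile locations could then be chosen within the connected regions of faces flagged traversable.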
A63F 13/655 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor; automatically by game devices or servers from real world data, e.g. measurement in live racing competition; by importing photos, e.g. of the player
A63F 13/216 - Input arrangements for video game devices characterised by their sensors, purposes or types using geographical information, e.g. location of the game device or player using GPS
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
83.
Efficient GPU/CPU pipeline for providing augmented reality content
Implementations generally relate to providing augmented reality in a web browser. In one implementation, a method includes capturing images of a physical scene with a camera of a device. The method further includes determining motion of the camera using six degrees of freedom (6DoF) markerless tracking. The method further includes overlaying virtual three-dimensional (3D) content onto a depicted physical scene in the images, resulting in augmented reality (AR) images. The method further includes rendering the AR images in a browser of the device.
A scene reconstruction model is disclosed that outputs a heightfield for a series of input images. The model, for each input image, predicts a depth map and extracts a feature map. The model builds a 3D model utilizing the predicted depth maps and camera poses for the images. The model raycasts the 3D model to determine a raw heightfield for the scene. The model utilizes the raw heightfield to sample features from the feature maps corresponding to positions on the heightfield. The model aggregates the sampled features into an aggregate feature map. The model regresses a refined heightfield based on the aggregate feature map. The model determines the final heightfield based on a combination of the raw heightfield and the refined heightfield. With the final heightfield, a client device may generate virtual content augmented on real-world images captured by the client device.
A client device and a controller allow a user to control AR content by selecting real-world locations or objects. The client receives position data indicating a position and orientation of the controller, the position data defining an axis of the controller. The client device performs ray casting to determine a location in a 3D map of a real world that intersects the axis. The client device receives a selection indication (e.g., a user pressing a button on the controller). The client device selects, subsequent to the selection indication, the location in the 3D map that intersects the axis as a waypoint. The client device defines a route for a virtual object based on the waypoint.
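The ray-casting selection can be sketched as a nearest-point-to-ray query over the 3D map. All parameter names and the distance threshold below are illustrative assumptions:

```python
import numpy as np

def pick_waypoint(origin, direction, map_points, max_dist=0.25):
    """Return the map point nearest the controller's axis ray, or None
    if no point lies within max_dist of the ray (illustrative sketch)."""
    d = direction / np.linalg.norm(direction)
    t = (map_points - origin) @ d        # projection length along the ray
    t = np.clip(t, 0.0, None)            # ignore points behind the controller
    closest = origin + t[:, None] * d    # nearest point on the ray per map point
    dist = np.linalg.norm(map_points - closest, axis=1)
    i = int(np.argmin(dist))
    return map_points[i] if dist[i] <= max_dist else None
```

On a selection indication (e.g., a button press), the returned point would be stored as a waypoint and used to define the virtual object's route.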
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
45 - Legal and security services; personal services for individuals.
Goods & Services
Downloadable game software; downloadable game software for use on mobile devices; downloadable video game software; downloadable interactive game software; downloadable augmented reality game software; downloadable computer software for social networking; downloadable game software for creating, customizing, and interacting with an animal-like character. Providing virtual environments in which users can interact for recreational, leisure or entertainment purposes; providing online non-downloadable game software; providing online non-downloadable video game software; providing online non-downloadable interactive game software; providing online non-downloadable augmented reality game software; providing online non-downloadable game software for creating, customizing, and interacting with an animal-like character. Online social networking services.
Implementations generally relate to metaverse content modality mapping. In some implementations, a method includes obtaining functionality developed for a first modality of a virtual environment. The method further includes mapping the functionality to a second modality of the virtual environment. The method further includes executing the functionality developed for the first modality based on user interaction associated with the second modality.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser; using a touch-screen or digitiser, e.g. input of commands through traced gestures; for inputting data by handwriting, e.g. gesture or text
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
45 - Legal and security services; personal services for individuals.
Goods & Services
Downloadable game software; Downloadable game software for use on mobile devices; Downloadable video game software; Downloadable interactive game software; Downloadable augmented reality game software; Downloadable computer software for social networking; Downloadable game software for creating, customizing, and interacting with an animal-like character. Providing virtual environments in which users can interact for recreational, leisure or entertainment purposes; providing online non-downloadable game software; providing online non-downloadable video game software; providing online non-downloadable interactive game software; providing online non-downloadable augmented reality game software; providing online non-downloadable game software for creating, customizing, and interacting with an animal-like character. Online social networking services.
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
45 - Legal and security services; personal services for individuals.
Goods & Services
Downloadable game software; Downloadable game software for use on mobile devices; Downloadable video game software; Downloadable interactive game software; Downloadable augmented reality game software; Downloadable computer software for social networking; Downloadable game software for creating, customizing, and interacting with an animal-like character. Providing virtual environments in which users can interact for recreational, leisure or entertainment purposes; providing online non-downloadable game software; providing online non-downloadable video game software; providing online non-downloadable interactive game software; providing online non-downloadable augmented reality game software; providing online non-downloadable game software for creating, customizing, and interacting with an animal-like character. Online social networking services.
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
45 - Legal and security services; personal services for individuals.
Goods & Services
(1) Downloadable game software; downloadable game software for use on mobile devices; downloadable video game software; downloadable interactive game software; downloadable augmented reality game software; downloadable computer software for social networking; downloadable game software for creating, customizing, and interacting with an animal-like character. (1) Providing virtual environments in which users can interact for recreational, leisure or entertainment purposes; providing online non-downloadable game software; providing online non-downloadable video game software; providing online non-downloadable interactive game software; providing online non-downloadable augmented reality game software; providing online non-downloadable game software for creating, customizing, and interacting with an animal-like character.
(2) Online social networking services.
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
45 - Legal and security services; personal services for individuals.
Goods & Services
(1) Downloadable computer game software; downloadable computer game software for use on mobile devices; downloadable video game software; downloadable interactive video game software; downloadable augmented reality game software; downloadable computer software for social networking; downloadable game software for creating, customizing, and interacting with an animal-like character (1) Providing online non-downloadable game software via the Internet; providing online non-downloadable video game software; providing online non-downloadable interactive game software via the Internet; providing online non-downloadable augmented reality game software; providing online non-downloadable game software for creating, customizing, and interacting with an animal-like character
(2) Online social networking services.
09 - Scientific and electric apparatus and instruments
41 - Education, entertainment, sporting and cultural services
42 - Scientific, technological and industrial services, research and design
45 - Legal and security services; personal services for individuals.
Goods & Services
Downloadable game software; Downloadable game software for use on mobile devices; Downloadable video game software; Downloadable interactive game software; Downloadable augmented reality game software; Downloadable computer software for social networking; Downloadable game software for creating, customizing, and interacting with an animal-like character Providing virtual environments in which users can interact for recreational, leisure or entertainment purposes Providing online non-downloadable game software; providing online non-downloadable video game software; providing online non-downloadable interactive game software; providing online non-downloadable augmented reality game software; providing online non-downloadable game software for creating, customizing, and interacting with an animal-like character Online social networking services
09 - Scientific and electric apparatus and instruments
41 - Education, entertainment, sporting and cultural services
42 - Scientific, technological and industrial services, research and design
45 - Legal and security services; personal services for individuals.
Goods & Services
Downloadable game software; Downloadable game software for use on mobile devices; Downloadable video game software; Downloadable interactive game software; Downloadable augmented reality game software; Downloadable computer software for social networking; Downloadable game software for creating, customizing, and interacting with an animal-like character Providing virtual environments in which users can interact for recreational, leisure or entertainment purposes Providing online non-downloadable game software; providing online non-downloadable video game software; providing online non-downloadable interactive game software; providing online non-downloadable augmented reality game software; providing online non-downloadable game software for creating, customizing, and interacting with an animal-like character Online social networking services
09 - Scientific and electric apparatus and instruments
38 - Telecommunications services
42 - Scientific, technological and industrial services, research and design
45 - Legal and security services; personal services for individuals.
Goods & Services
Downloadable computer software for detecting a user's location and displaying relevant local information of general interest; downloadable computer software enabling users to search, view, share, review, upload, compile, and post information about locations, events, points of interest, routes, geographic features, geospatial data, and 3D mapping activities; downloadable software for establishing geolocation information and geospatial data; downloadable software for taking photographs, recording audio and videos, and posting photographs, audio, and videos with geolocation information on an interactive digital map; downloadable software for listening to audio recordings and viewing social feeds, photographs, and videos with geolocation information on an interactive digital map; downloadable augmented reality software for integrating electronic data with real world environments for the purposes of entertainment and education; downloadable software for integrating electronic data with real world environments for the purpose of locating points of interest, events, routes, and locations; downloadable augmented reality software for integrating electronic data with real world environments for the purpose of viewing photographs and videos taken at specific locations; downloadable augmented reality software for integrating electronic data with real world environments for the purposes of accessing and viewing location-based photographs, videos, audio, text, and social feeds. Electronic and digital transmission of messages, sound, video, photographs, and information; electronic transmission of geospatial data from mobile devices; electronic data transmission; transmission of location-based messaging. Providing online non-downloadable software for detecting a user's location and displaying relevant local information of general interest; providing online non-downloadable software enabling users to search, view, share, review, upload, compile, and post information about locations, events, points of interest, routes, geographic features, geospatial data, and 3D mapping activities; providing online non-downloadable software for establishing geolocation information and geospatial data; providing online non-downloadable software for taking photographs, recording audio and videos, and posting photographs, audio, and videos with geolocation information on an interactive digital map; providing online non-downloadable software for listening to audio recordings and viewing social feeds, photographs, and videos with geolocation information on an interactive digital map; providing online non-downloadable augmented reality software for integrating electronic data with real world environments for the purposes of entertainment and education; providing online non-downloadable software for integrating electronic data with real world environments for the purpose of locating points of interest, events, routes, and locations; providing online non-downloadable augmented reality software for integrating electronic data with real world environments for the purpose of viewing photographs and videos taken at specific locations; providing online non-downloadable augmented reality software for integrating electronic data with real world environments for the purposes of accessing and viewing location-based photographs, videos, audio, text, and social feeds; providing online non-downloadable software for social networking; providing virtual digital environments through cloud computing in which users can interact for recreational, leisure or entertainment purposes. Online social networking services; online social networking services accessible by means of downloadable mobile applications and online non-downloadable software.
Systems and methods for providing a shared augmented reality environment are provided. In particular, the latency of communication is reduced by using a peer-to-peer protocol to determine where to send datagrams. Datagrams describe actions that occur within the shared augmented reality environment, and the processing of datagrams is split between an intermediary node of a communications network (e.g., a cell tower) and a server. As a result, the intermediary node may provide updates to a local state of a client device when a datagram is labelled peer-to-peer, and otherwise provides updates to the master state on the server. This may reduce the latency of communication and allow users of the location-based parallel reality game to see actions occur more quickly in the shared augmented reality environment.
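The routing split can be illustrated with a minimal dispatch function; the `p2p` label and the node interfaces below are hypothetical stand-ins for whatever the protocol actually uses:

```python
def dispatch(datagram, edge_node, server):
    """Route a datagram: peer-to-peer-labelled actions update the local
    state at the intermediary (edge) node; everything else updates the
    master state on the server (illustrative sketch)."""
    if datagram.get("p2p"):
        edge_node.apply_local(datagram)
        return "edge"
    server.apply_master(datagram)
    return "server"
```

Keeping peer-to-peer-labelled datagrams at the intermediary node is what avoids the round trip to the server and reduces perceived latency.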
A63F 13/358 - Adapting the game course according to the network or server load, e.g. for reducing latency due to different connection speeds between clients
A63F 13/34 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers; using peer-to-peer connections
A63F 13/332 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers; using wide area network [WAN] connections; using wireless networks, e.g. cellular phone networks
96.
Data hierarchy protocol for data transmission pathway selection
A dataflow hierarchy protocol is implemented by one or more devices to optimize how the one or more devices process datagrams for network communications. The dataflow hierarchy considers various available network pathways for dataflow. A device implementing the dataflow hierarchy selects one or more of the available network pathways to provide low latency in data communication with other devices. The device may sample various available network pathways to determine pathway metrics (e.g., latency) and select one or more network pathways based on the metrics. The available network pathways can include pathways through one or more intermediary nodes, such as pathways through a game server, pathways through a cell tower, and pathways through a network.
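The sampling-and-selection step can be sketched as follows, assuming a `probe` callable that returns one round-trip time per call (an illustration of the selection idea, not the full protocol):

```python
import statistics

def select_pathways(pathways, probe, k=1, samples=3):
    """Sample each available network pathway and keep the k pathways
    with the lowest median round-trip time (illustrative sketch)."""
    scored = []
    for p in pathways:
        rtts = [probe(p) for _ in range(samples)]
        scored.append((statistics.median(rtts), p))
    scored.sort(key=lambda s: s[0])
    return [p for _, p in scored[:k]]
```

The candidate pathways here would be the routes through a game server, a cell tower, or another network described in the abstract.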
H04L 43/0817 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters; by checking availability; by checking functioning
A message router partially decodes messages to determine how to route them. The message router receives a message and identifies a field of the message as a candidate field for including an envelope identifier that indicates an envelope type of the message. The envelope type indicates where information, such as where to route the message, is stored within the message. The message router attempts to decode the candidate field to determine whether it includes the envelope identifier, and responsive to the candidate field including the envelope identifier, the message router determines the envelope type of the message. The message router then routes the message according to the envelope type.
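Partial decoding can be illustrated with a toy wire format. The envelope identifiers, offsets, and layout below are invented for the example and do not come from the abstract:

```python
import struct

# Invented wire layout: the first two bytes are the candidate field.  If
# they decode to a known envelope identifier, the identifier tells us
# which byte of the message holds the routing destination.
ENVELOPES = {0xCAFE: ("p2p", 2), 0xBEEF: ("server", 4)}

def route(message: bytes):
    (candidate,) = struct.unpack_from(">H", message, 0)
    if candidate not in ENVELOPES:
        return None                       # candidate field is not an envelope id
    kind, dest_offset = ENVELOPES[candidate]
    return kind, message[dest_offset]     # (envelope type, destination id)
```

Only the candidate field is decoded; the rest of the message is read at the offset the envelope type dictates, which is the point of partial decoding.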
A depth prediction model for predicting a depth map from an input image is disclosed. The depth prediction model leverages wavelet decomposition to minimize computation. The model comprises a plurality of encoding layers, a coarse depth prediction layer, a plurality of decoding layers, and a plurality of inverse discrete wavelet transforms (IDWTs). The encoding layers are configured to input the image and downsample it into feature maps, including a coarse feature map. The coarse depth prediction layer is configured to input the coarse feature map and output a coarse depth map. The decoding layers are configured to input the feature maps and predict wavelet coefficients based on the feature maps. The IDWTs are configured to upsample the coarse depth map, based on the predicted wavelet coefficients, into a final depth map at the same resolution as the input image.
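The IDWT upsampling step can be illustrated with a single-level inverse 2D Haar transform, which doubles the resolution of a coarse map using three detail-coefficient bands. This sketch uses the orthonormal Haar convention (so a constant map's values halve per level) and is a stand-in for whatever wavelet basis the model actually uses:

```python
import numpy as np

def ihaar2(cA, cH, cV, cD):
    """Single-level inverse 2D Haar transform: upsample a coarse map cA
    to twice its resolution using detail coefficients cH, cV, cD."""
    h, w = cA.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (cA + cH + cV + cD) / 2.0  # top-left of each 2x2 block
    out[0::2, 1::2] = (cA + cH - cV - cD) / 2.0  # top-right
    out[1::2, 0::2] = (cA - cH + cV - cD) / 2.0  # bottom-left
    out[1::2, 1::2] = (cA - cH - cV + cD) / 2.0  # bottom-right
    return out
```

In the described model, the detail bands would come from the decoding layers rather than from a forward transform, and the IDWT would be applied once per decoding level until full resolution is reached.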
Processing of actions within a shared augmented reality experience is split between an edge node of a communications network (e.g., a cell tower) and a server. As a result, computation of the current state may be sharded naturally based on real-world location, with state updates generally provided by the edge node and the server providing conflict resolution based on a master state (e.g., where actions connected to different edge nodes potentially interfere with each other). In this way, latency may be reduced as game actions are communicated between clients connected to the same edge node using a peer-to-peer (P2P) protocol without routing the actions via the game server.
A63F 13/34 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers; using peer-to-peer connections
A63F 13/352 - Details of game servers involving special game server arrangements, e.g. regional servers connected to a national server or a plurality of servers managing partitions of the game world
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04L 67/131 - Protocols for games, networked simulations or virtual reality
100.
Cloud assisted generation of local map data using novel viewpoints
An augmented reality system generates computer-mediated reality on a client device. The client device has sensors including a camera configured to capture image data of an environment and a location sensor to capture location data describing a geolocation of the client device. The client device creates a three-dimensional (3-D) map with the image data and the location data for use in generating virtual objects to augment reality. The client device transmits the created 3-D map to an external server that may utilize the 3-D map to update a world map stored on the external server. The external server sends a local portion of the world map to the client device. The client device determines a distance between the client device and a mapping point to generate a computer-mediated reality image at the mapping point to be displayed on the client device.