A splat generation system and associated methods perform splatting based on a controlled sampling of a three-dimensional (3D) asset. The controlled sampling ensures minimal coverage of the 3D asset primitives and provides enhanced coverage for primitives that have greater detail or variation. The system projects rays in different directions from a set of the primitives towards a bounding volume, and detects points at which some of the projected rays intersect the bounding volume. The system defines virtual cameras in the 3D space of the 3D asset based on a set of the intersection points, and obtains the minimal and/or enhanced coverage based on one or more views of the primitives captured in images taken by the virtual cameras. The system generates a set of splats from the one or more views in the captured images with an acceptable amount of loss and with less total data than the 3D asset.
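The ray-to-bounding-volume camera placement described above can be sketched as follows. This is a hypothetical illustration using a bounding sphere; the function names and data shapes are assumptions, not the claimed implementation:

```python
import math

def ray_sphere_intersection(origin, direction, center, radius):
    """Return the far intersection point of a ray with a bounding sphere, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b + math.sqrt(disc)) / (2 * a)  # far hit: the ray exits the sphere here
    if t < 0:
        return None
    return tuple(origin[i] + t * d for i, d in enumerate(direction))

def place_virtual_cameras(primitive_centers, directions, center, radius):
    """One camera per (primitive, direction) ray that reaches the bounding volume."""
    cameras = []
    for p in primitive_centers:
        for d in directions:
            hit = ray_sphere_intersection(p, d, center, radius)
            if hit is not None:
                # The camera sits on the bounding volume, looking back at the primitive.
                cameras.append({"position": hit, "look_at": p})
    return cameras
```

Each camera then renders the primitives it faces, and those images drive splat generation.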
A system and associated methods optimize the streaming of three-dimensional (3D) content based on a pre-culled segmentation of the 3D content. The pre-culled segmentation involves performing server-side occlusion culling or preprocessing of the 3D content so that only the visible primitives within a requested field-of-view are streamed to client devices rather than all primitives within the requested field-of-view. The system segments the 3D content primitives into different tiles and filters each tile to differentiate a first subset of visible primitives in each tile from a second subset of non-visible primitives in each tile. The system receives a request for the 3D content, retrieves the one or more tiles with primitives positioned within the requested field-of-view, and streams the first subset of visible primitives from each tile of the one or more tiles in response to the request without the second subset of non-visible primitives.
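The tile-and-precull flow described above can be sketched as follows. This is a minimal illustration with an assumed 2D tile grid and a caller-supplied visibility test; it is not the patented culling algorithm:

```python
def segment_into_tiles(primitives, tile_size):
    """Group primitives into square tiles keyed by (x, y) tile coordinates."""
    tiles = {}
    for prim in primitives:
        key = (int(prim["x"] // tile_size), int(prim["y"] // tile_size))
        tiles.setdefault(key, []).append(prim)
    return tiles

def precull(tiles, is_visible):
    """Split each tile into visible and occluded subsets ahead of any request."""
    return {
        key: {
            "visible": [p for p in prims if is_visible(p)],
            "occluded": [p for p in prims if not is_visible(p)],
        }
        for key, prims in tiles.items()
    }

def stream(preculled, requested_tiles):
    """Serve only the pre-filtered visible subset of the requested tiles."""
    out = []
    for key in requested_tiles:
        out.extend(preculled.get(key, {}).get("visible", []))
    return out
```

Because the visibility filtering happens once at ingest, each request only pays for a dictionary lookup per tile.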
A real-time streaming system and associated methods are provided for initially presenting three-dimensional (3D) content in a low-fidelity first encoding format, so that the initial visualization of the 3D content on the requesting device appears with little or no delay, and for streaming prioritized 3D assets within the initial visualization at higher fidelities in different encoding formats that preserve the real-time responsiveness of the system. The system generates and streams a two-dimensional (2D) image for a first presentation of the 3D content at a first fidelity. The system increases the fidelity of a prioritized 3D asset while the 3D content does not change by streaming 3D primitives for the prioritized 3D asset in a second encoding format that increases the fidelity of the prioritized 3D asset in a second presentation of the 3D content from the first fidelity to a greater second fidelity.
A system and associated methods perform smooth temporally segmented encoding of dynamic unstructured spatial data or dynamic three-dimensional (3D) content. The system receives the primitives that define visual changes to the dynamic 3D content across multiple frames. The system determines a first set of primitives that remain unchanged for at least N frames and a second set of primitives that remain unchanged for fewer than N frames. The system generates a first data stream that encodes the first set of primitives with temporal values at a reduced frame rate, and generates a second data stream that encodes the second set of primitives without temporal values at the desired frame rate. The system streams the first data stream and the second data stream in response to a request for the dynamic 3D content.
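The stable/dynamic split described above can be sketched as follows. This is a hypothetical illustration in which primitives are ids mapped to per-frame values; the stability test and the two-stream layout are assumptions for demonstration:

```python
def split_by_stability(frames, n):
    """frames: list of dicts mapping primitive id -> value, one dict per frame.
    A primitive is 'stable' if its value repeats across at least n consecutive frames."""
    stable, dynamic = set(), set()
    ids = set().union(*frames)
    for pid in ids:
        values = [f.get(pid) for f in frames]
        run, best = 1, 1
        for prev, cur in zip(values, values[1:]):
            run = run + 1 if cur == prev else 1
            best = max(best, run)
        (stable if best >= n else dynamic).add(pid)
    return stable, dynamic

def encode_streams(frames, n, reduced_rate):
    """Stable primitives are keyframed at a reduced rate; dynamic ones every frame."""
    stable, dynamic = split_by_stability(frames, n)
    stream_a = [{pid: f[pid] for pid in stable if pid in f}
                for i, f in enumerate(frames) if i % reduced_rate == 0]
    stream_b = [{pid: f[pid] for pid in dynamic if pid in f} for f in frames]
    return stream_a, stream_b
```

Encoding the unchanging primitives at a lower rate is what shrinks the first stream without touching the full-rate detail in the second.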
H04N 19/136 - Incoming video signal characteristics or properties
H04N 19/119 - Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
H04N 19/172 - Methods or arrangements for encoding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a picture, frame or field
5.
Systems and Methods for Splatting with Adaptive Density Control
A splat generation system and associated methods implement splatting with adaptive density control to generate a splat representation of a three-dimensional (3D) model in which the splat density and quality in regions of the 3D model that are prioritized based on creator intent or that are commonly or consistently in a user’s field-of-view are increased relative to the splat density and quality in other regions of the 3D model. The system receives the different priorities associated with different parts of the 3D model based on the creator intent or tracked view paths. The system generates a first set of splats that represent the 3D model with a first fidelity, and generates a second set of splats at a greater second fidelity for a first subset of parts of the 3D model that have a higher priority than a second subset of parts of the 3D model.
A system and associated methods select splats at different fidelities to optimize a requested field-of-view of a three-dimensional (3D) model for presentation on a local device or for streaming to a remote device. The system determines priority values that are associated with parts of the 3D model within the field-of-view, selects first splats that represent a first part of the 3D model at a first fidelity in response to a first priority value being associated with the first part, selects second splats that represent a second part of the 3D model at a second fidelity in response to a second priority value being associated with the second part, and presents or streams the first splats and the second splats in response to the request in order to generate the field-of-view with different fidelities for the first part and the second part of the 3D model.
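The priority-to-fidelity selection described above can be sketched as follows. The priority tiers, the fidelity labels, and the library layout are illustrative assumptions, not the claimed data model:

```python
# Hypothetical mapping from part priority to the fidelity tier served for it.
FIDELITY_FOR_PRIORITY = {"high": "full", "medium": "half", "low": "quarter"}

def select_splats(parts, splat_library):
    """parts: {part_id: priority}; splat_library: {(part_id, fidelity): [splats]}.
    Pick, per part, the splat set encoded at the fidelity its priority calls for."""
    selection = []
    for part_id, priority in parts.items():
        fidelity = FIDELITY_FOR_PRIORITY[priority]
        selection.extend(splat_library[(part_id, fidelity)])
    return selection
```

The same field-of-view thereby mixes full-fidelity splats for prioritized parts with coarser splats elsewhere.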
An adaptive streaming system dynamically streams different parts of a three-dimensional (3D) model at different resolutions to maximize visual detail and quality in response to changing network performance and/or client device rendering performance. The system receives a request to view the 3D model from a particular field-of-view. The system associates different priorities to different parts of the 3D model, selects first Gaussian splats associated with nodes at a first level in a tree structure based on the first Gaussian splats representing the different parts of the 3D model in the particular field-of-view with a first priority, and selects second Gaussian splats associated with nodes at a second level in the tree structure based on the second Gaussian splats representing the different parts of the 3D model in the particular field-of-view with a second priority. The system streams the selected Gaussian splats in response to the request.
A streaming system performs a dynamic densification of streamed content based on tracked user focus. The streaming system streams different content having a common classification to one or more users, and tracks a focus of the one or more users on different parts of the different content. The streaming system receives a request for new content, classifies the new content with the common classification, and streams first parts of the new content with greater detail than second parts of the new content in response to the request based on corresponding first parts from the different parts of the different content receiving more of the focus than corresponding second parts from the different parts of the different content.
09 - Scientific and electric apparatus and instruments
38 - Telecommunications services
42 - Scientific, technological and industrial services, research and design
Goods and services
Downloadable software for integrating depth measurements with color photography to generate digital three-dimension content; Downloadable decoder software; Downloadable software for generating, manipulating, rendering, viewing, and editing point clouds and three-dimension content; Downloadable software for processing digital images for three-dimension content via the Internet and global communications networks; Downloadable software for generating visual effects or animations in three-dimensional spaces; Downloadable mobile applications for the distribution of spatial streaming content via the Internet and global communications networks; Downloadable software for the distribution of spatial streaming content via the Internet and global communications networks
Streaming of audiovisual and multimedia material on the Internet; Video-on-demand transmission services; Streaming of audio, visual and audiovisual material via a global computer network; Streaming of video, audiovisual, video game, data, and software applications material on the Internet; Broadcasting of video and audio programming over the Internet; Transmission and delivery of visual, game, and multimedia content via the Internet; Transmission and delivery of audiovisual and multimedia content via the Internet and global communications networks
Computer graphics design services; Computer graphics design services, namely, creating of three-dimensional computer models; Software as a service (SAAS) services featuring software for integrating depth measurements with color photography to generate digital three-dimension content; Providing temporary use of on-line non-downloadable software for processing digital images; Providing temporary use of on-line non-downloadable software for generating visual effects or animations in three-dimensional space; Data encryption and decoding services; Providing a website featuring non-downloadable software for downloading, distributing, and streaming spatial image data; Providing a website for the electronic storage of spatial image data; Software as a service (SAAS) services featuring software for generating, manipulating, rendering, viewing, and editing point clouds and other three-dimension content; Providing temporary use of on-line non-downloadable software for generating, manipulating, rendering, viewing and editing three-dimension content
10.
Systems and methods for loss weighted image sampling for non-uniform splat generation
A splat generation system and associated methods generate a non-uniform splat representation of a three-dimensional (3D) asset based on a loss-weighted sampling of reference images that capture the 3D asset from different viewpoints. The system performs a first training iteration that defines a different set of splats to reconstruct the field-of-view captured by each of the reference images. The system associates an amount of loss with each reference image based on an amount of variation by which the set of splats trained on that reference image reconstructs the field-of-view of that reference image. The system selects a next image to train on based on the amount of loss associated with each of the reference images, and retrains the set of splats representing the field-of-view of the selected next image by adjusting one or more of those splats to increase the reconstructed field-of-view accuracy.
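The loss-weighted image selection above can be sketched as follows. The sampling rule (probability proportional to per-image loss) and the `step_fn` callback are illustrative assumptions standing in for the actual training step:

```python
import random

def pick_next_image(losses, rng=random):
    """Sample the next reference image with probability proportional to its loss,
    so poorly reconstructed viewpoints are revisited more often."""
    images = list(losses)
    weights = [losses[img] for img in images]
    return rng.choices(images, weights=weights, k=1)[0]

def train(losses, steps, step_fn, rng=random):
    """Repeatedly retrain on the loss-weighted pick; step_fn returns the image's new loss."""
    for _ in range(steps):
        img = pick_next_image(losses, rng)
        losses[img] = step_fn(img, losses[img])
    return losses
```

High-loss viewpoints draw more retraining iterations, which is what makes the resulting splat set non-uniform.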
Systems and associated methods are provided for distributed three-dimensional (3D) content generation whereby different 3D assets for a 3D scene are generated by different asset generators at different network tiers according to the latency sensitivity of each 3D asset. The system retrieves the 3D assets for the requested 3D scene, and differentiates a first 3D asset that is latency sensitive from a second 3D asset that is latency tolerant. The system generates the first 3D asset with a first asset generator at a first network tier, and generates the second 3D asset with a second asset generator at a more distant second network tier. The system distributes the generated primitives for the first 3D asset to the user device with a first amount of latency and the generated primitives for the second 3D asset to the user device with a second amount of latency.
A distribution system adaptively provides different lossy encodings for different views of a point cloud to a client device based on network or rendering performance of the client device. The distribution system receives a request to access the point cloud, and determines the one or more performance parameters that limit an amount of the point cloud data that the client device is able to receive or process in a given time. The distribution system selects and provides the client device with different sets of optimized splats for different views of the point cloud that satisfy the one or more performance parameters based on a cumulative amount of data encoded within the different sets of optimized splats being equal to or less than the amount of point cloud data that the client device is able to receive or process in the given time.
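The budget-constrained view selection above can be sketched as follows. The greedy smallest-first policy is an illustrative assumption; the abstract only requires that the cumulative size stay within what the client can receive or process:

```python
def select_views_within_budget(view_sets, budget_bytes):
    """view_sets: {view_name: (size_bytes, payload)}. Greedily pick the smallest
    encodings first so as many views as possible fit the client's budget."""
    chosen, total = {}, 0
    for view, (size, payload) in sorted(view_sets.items(), key=lambda kv: kv[1][0]):
        if total + size <= budget_bytes:
            chosen[view] = payload
            total += size
    return chosen, total
```

The budget itself would come from measured network or rendering performance parameters.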
G06T 17/00 - Three-dimensional [3D] modelling for computer graphics
H04N 19/597 - Methods or arrangements for encoding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
13.
Systems and methods for reducing point cloud and texture data using adapted splatting techniques
An optimization system reduces the data encoded within a point cloud for streaming and/or rendering of a lossy representation of the point cloud. The optimization system generates a first optimized splat for a first visual characteristic of the point cloud by replacing a first set of points having a first common value for the first visual characteristic with a first replacement primitive, and generates a second optimized splat for a second visual characteristic of the point cloud by replacing a second set of points having a second common value for the second visual characteristic with a second replacement primitive. The optimization system provides the first optimized splat and the second optimized splat instead of the original points of the point cloud in response to a request to access the point cloud.
H04N 19/136 - Incoming video signal characteristics or properties
H04N 19/119 - Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
H04N 19/172 - Methods or arrangements for encoding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being an area of the picture, e.g. an object, the area being a picture, frame or field
15.
Systems and methods for automated rigging and generating animations based on real musculoskeletal movements
A three-dimensional (3D) animation system and associated methods generate a musculoskeletal framework for a 3D model, automatically rig virtual muscles and virtual bones of the musculoskeletal framework to the 3D model, and realistically animate the 3D model based on real musculoskeletal movements associated with the virtual muscles and the virtual bones. The 3D animation system receives multiple scans of a subject, generates primitives that form a 3D model of the subject based on a first scan, and rigs the 3D model for animation with the virtual muscles and virtual bones of the musculoskeletal framework that are defined from data of other scans. The 3D animation system animates the 3D model by determining an association between a virtual muscle and a set of primitives created from the rigging, and by adjusting the set of primitives according to a movement created from a simulated contraction of the virtual muscle.
A streaming system and associated methods provide adaptive streaming of point cloud data for out-of-order presentation of important visual features before less important visual features. The adaptive streaming includes retrieving the points for a requested point cloud, differentiating different sets of points that represent different features in the point cloud, prioritizing each set of points based on a feature that is represented by that set of points, and streaming the different sets of points across a data network to a client device in an order that is determined from the prioritization. Moreover, the adaptive streaming may dynamically select a different subset of points from each set of points and different subset of point data to stream with each selected subset of points to visually convey important detail of each feature without all the corresponding points or data that make up that feature in the point cloud.
A three-dimensional (3D) content creation system modifies operation of a radiance field, neural network, and/or other generative artificial intelligence in order to generate 3D content based on constraints that modify the 3D content modeling. The system receives constraints for reducing a first size of a first 3D representation of a 3D object, and generates different sets of 3D primitives with values for one or more parameters of the 3D primitives that satisfy the constraints. The system selects a particular set of 3D primitives that produces a visual representation that differs from the first 3D representation by less than a threshold amount, and presents the particular set of 3D primitives as a size-optimized second 3D representation of the 3D object with a second size that is less than the first size of the first 3D representation.
A three-dimensional (3D) streaming system provides client-controlled adaptive streaming of 3D content in which the client may receive the 3D primitives of 3D content at different fidelities that the client selects. The system includes generating a 3D model at different fidelities, partitioning the 3D model into different volumetric spatial units, generating a manifest with an index for each volumetric spatial unit and an identifier for each fidelity, and presenting the 3D primitives in a first set of volumetric spatial units at a first fidelity and the 3D primitives in a second set of volumetric spatial units at a second fidelity in response to a request that includes indices of the first set of volumetric spatial units with a first identifier for the first fidelity and indices of the second set of volumetric spatial units with a second identifier for the second fidelity.
A splat compression system and associated methods are provided to efficiently compress and decompress data of a three-dimensional (3D) splat representation using textures so that the size of the splat representation is reduced with minimal loss in fidelity. The compression includes receiving the splats that make up the 3D splat representation, determining clusters that are associated with a different set of the splats that are positioned about a different common plane, and defining each cluster with a position based on the positional data from the different set of splats associated with that cluster. The compression includes converting the positional data from the different set of splats associated with each cluster to offsets from the position of the associated cluster, and generating the compressed 3D representation with a definition for each cluster and a texture that stores the offsets for the different set of splats associated with each cluster.
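The cluster-plus-offsets encoding above can be sketched as follows. Using the cluster centroid as the anchor is an illustrative assumption (the abstract derives the cluster position from the member splats' positional data), and the offsets stand in for what would be packed into a texture:

```python
def compress_clusters(splats, clusters):
    """splats: {splat_id: (x, y, z)}; clusters: {cluster_id: [splat_ids]}.
    Store one anchor position per cluster (the centroid here) plus per-splat
    offsets, which are small numbers that quantize far better than absolute
    coordinates."""
    compressed = {}
    for cid, members in clusters.items():
        pts = [splats[sid] for sid in members]
        anchor = tuple(sum(c) / len(pts) for c in zip(*pts))
        offsets = {sid: tuple(p - a for p, a in zip(splats[sid], anchor))
                   for sid in members}
        compressed[cid] = {"anchor": anchor, "offsets": offsets}
    return compressed

def decompress_clusters(compressed):
    """Rebuild absolute splat positions from anchors plus offsets."""
    splats = {}
    for entry in compressed.values():
        for sid, off in entry["offsets"].items():
            splats[sid] = tuple(a + o for a, o in zip(entry["anchor"], off))
    return splats
```

In a real texture-backed encoder the offsets would be quantized, which is where the (minimal) loss in fidelity comes from.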
A three-dimensional (3D) interactivity system automatically and dynamically generates shape-conforming and computationally efficient colliders for detecting collisions with automatically differentiated features represented by different sets of points in a point cloud. The system selects a set of points that represent a particular feature of a 3D object, decimates the set of points to a subset of points that represent an approximate shape of the particular feature with fewer points than the set of points, and generates a collider with the approximate shape represented by the subset of points. The system may then use the collider in determining whether a collision element collides with the particular feature.
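The decimate-then-collide flow above can be sketched as follows. Keeping every k-th point and fitting an axis-aligned bounding box are deliberately crude stand-ins for the patent's decimation and shape-conforming collider:

```python
def decimate(points, keep_every):
    """Keep every k-th point: a crude stand-in for the decimation step."""
    return points[::keep_every]

def make_aabb_collider(points):
    """Approximate the feature's shape with an axis-aligned bounding box."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def collides(aabb, point):
    """True if the collision element's point lies inside the collider box."""
    lo, hi = aabb
    return all(lo[i] <= point[i] <= hi[i] for i in range(3))
```

The payoff is that collision tests run against the small decimated proxy rather than every point of the feature.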
G06T 17/00 - Three-dimensional [3D] modelling for computer graphics
G06T 3/4023 - Scaling of whole images or parts of images, e.g. enlarging or shrinking, based on decimation of pixels or lines of pixels
24.
Systems and Methods for Generating Point Clouds with Infinitely Scalable Resolutions from a Three-Dimensional Mesh Model
A modeling system converts polygons of a three-dimensional (3D) mesh model to points of a point cloud in an automated manner that increases the resolution and visual fidelity of the point cloud relative to the 3D mesh model. The system receives the polygons of the 3D mesh model, and generates points over the flat plane of each polygon according to a density and arrangement that increases the resolution of the points relative to the original polygon. The system receives an enhancement map with values for displacing the polygons of the 3D mesh model. The system displaces the generated points by mapping the values from positions in the enhancement map to corresponding positions of the generated points. The system generates the point cloud with the displaced points to provide improved visual quality and detail relative to the polygons of the 3D mesh model after enhancement with the enhancement map.
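The polygon-to-points conversion above can be sketched as follows. The barycentric grid and the `height_fn` lookup are illustrative assumptions: the former stands in for the density/arrangement step, the latter for sampling the enhancement map:

```python
def sample_triangle(v0, v1, v2, n):
    """Generate a regular barycentric grid of points over one triangle."""
    points = []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            a, b = i / n, j / n
            c = 1.0 - a - b
            points.append(tuple(a * p0 + b * p1 + c * p2
                                for p0, p1, p2 in zip(v0, v1, v2)))
    return points

def displace(points, normal, height_fn):
    """Push each point along the polygon normal by a height sampled from an
    enhancement map, here abstracted as height_fn(x, y)."""
    return [tuple(p[i] + height_fn(p[0], p[1]) * normal[i] for i in range(3))
            for p in points]
```

Raising `n` raises the point resolution arbitrarily, which is the "infinitely scalable" aspect of the title.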
Disclosed is a graphics system and associated methods for generating and animating three-dimensional (“3D”) assets with a dynamic resolution. The graphics system receives a 3D asset at a first resolution, defines procedural surfaces that recreate the overall shape of the 3D asset, and generates the 3D asset at any desired resolution from the defined procedural surfaces. Specifically, the graphics system partitions the overall shape of the 3D object into simpler shapes, defines equations that recreate the simpler shapes, and generates new points amongst the existing points at positions along surfaces that are created by each of the equations. The graphics system generates the 3D asset at a second resolution that is greater than the first resolution by rendering the new points with the existing points.
G06T 13/40 - Three-dimensional [3D] animation of characters, e.g. humans, animals or virtual beings
G06T 3/4053 - Scaling of whole images or parts of images, e.g. enlarging or shrinking, based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
Disclosed is an encoding system and associated methods for generating a graph-integrated tree-based representation of data that provides for direct lateral traversals of nodes in each layer of the tree-based representation. The encoding system organizes data from a dataset to a tree-based representation with multiple layers and multiple nodes in each layer. The encoding system detects the nodes in each layer, and defines a graph structure that links the nodes in each layer for direct lateral access. The encoding system searches the tree-based representation in response to a query for a particular subset of the data by performing a single downward traversal to a particular layer with individual nodes that satisfy part of the query, and by laterally traversing the nodes in the particular layer using the graph structure to directly access a second node in the particular layer from a first node in the particular layer.
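The lateral-link construction above can be sketched as follows. The `Node` class and the breadth-first linking pass are illustrative assumptions about how the graph structure threads through each layer:

```python
class Node:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)
        self.next = None  # lateral link, filled in by link_layers

def link_layers(root):
    """Breadth-first pass that threads a 'next' pointer through each layer,
    so siblings and cousins are reachable without re-descending from the root."""
    layer = [root]
    heads = []
    while layer:
        heads.append(layer[0])
        for left, right in zip(layer, layer[1:]):
            left.next = right
        layer = [child for node in layer for child in node.children]
    return heads  # first node of each layer, as lateral entry points

def collect_layer(head):
    """A single downward traversal reaches `head`; then walk laterally."""
    out = []
    while head:
        out.append(head.value)
        head = head.next
    return out
```

A query thus descends once to the right layer and then follows `next` pointers instead of repeated root-to-leaf walks.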
A three-dimensional (3D) streaming system and associated methods optimize 3D content streaming to remote devices by receiving 3D data that is distributed across a 3D space. The system partitions the 3D space into different regions, and compresses a different subset of the 3D data that is within each partitioned region as a separate compressed file. The system receives a request for a particular field-of-view in the 3D space and streams a set of compressed files that contain the different subset of 3D data for a set of the partitioned regions within the particular field-of-view. In response to a request for an updated field-of-view with a region that is not within the particular field-of-view, the system selects the compressed file that contains the 3D data within that region and streams the selected compressed file in response to the request for the updated field-of-view.
G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04L 65/613 - Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio, for the control of the source by the destination
H04L 65/80 - Arrangements, protocols or services in data packet communication networks for supporting real-time applications responding to the quality of service [QoS]
33.
Systems and methods for integrating spatial audio into point clouds
Spatial audio integrated point clouds expand the definition of point cloud points to include acoustic characteristics in addition to positional coordinates for the positioning of the points in a three-dimensional (3D) space and visual characteristics for how the individual points are presented at their respective positions in the 3D space. A system may generate a 3D scene based on the positioning of the points, and may track a path for sound that is emitted from a sound source. The system determines that the path reaches a position of one or more of the points, and performs a first adjustment to the sound according to the defined acoustic characteristics of the one or more points.
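As an illustration of acoustic characteristics attached to points, the sketch below attenuates a sound's amplitude whenever its tracked path passes near a point; the `absorption` field and the fixed contact radius are assumptions of this sketch, not the patented encoding:

```python
from dataclasses import dataclass

@dataclass
class AudioPoint:
    position: tuple    # positional coordinates (x, y, z) in the 3D space
    color: tuple       # visual characteristics (r, g, b)
    absorption: float  # acoustic characteristic: fraction of sound energy absorbed

def propagate(amplitude, path, points, radius=0.5):
    """Attenuate the sound each time its path reaches a point's position."""
    for waypoint in path:
        for p in points:
            dist = sum((a - b) ** 2 for a, b in zip(waypoint, p.position)) ** 0.5
            if dist <= radius:
                # first adjustment: apply the point's defined acoustic characteristic
                amplitude *= 1.0 - p.absorption
    return amplitude
```

A real system would track reflections and frequency-dependent effects; the point here is only that the per-point acoustic data drives the adjustment.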
Disclosed is a system and associated methods for rigging points of a point cloud for animation and customizing the animation for different subsets of rigged points in order to rapidly and easily generate complex animations. Generating a complex animation involves defining an animation element in the point cloud space, defining an animation for moving the animation element, linking points of the point cloud to the animation element, and adjusting the animation from the animation element that is applied to a first subset of the linked points based on a selection of the first subset of linked points that is made using an adjustment tool. The system renders the complex animation by moving a second subset of the linked points according to the defined animation of the animation element and by moving the first subset of linked points according to the defined animation as adjusted by the adjustment tool.
Disclosed is a system that streams true three-dimensional (“3D”) image data over a data network in a manner that preserves the dimensionality and detail of a dynamic and changing 3D scene. The system generates the 3D image data to represent the 3D scene, and streams different sets of the 3D image data that are within different viewing frustums requested by different devices. The system generates updates to the 3D image data based on changes occurring at different parts of the 3D scene. The system streams a first update to a first device in response to image data updated by the first update being within the first device's viewing frustum, and streams a second update to a second device in response to image data updated by the second update being within the second device's viewing frustum.
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
G06T 19/00 - Manipulating 3D models or images for computer graphics
36.
Systems and methods for compressing motion in a point cloud
Disclosed is a system and associated methods for compressing motion within an animated point cloud. The resulting compressed file encodes different transforms that recreate the motion of different sets of points across different point clouds or frames of the animation in place of the data for the different sets of points from the different point clouds. The compression involves detecting a motion that changes positioning of a set of points between a first point cloud and subsequent point clouds of an uncompressed encoding of two or more frames of an animation. The compression further involves defining a transform that models the motion, and generating a compressed animated point cloud by encoding the data of the first point cloud in the compressed animated point cloud, and by replacing the data for the set of points in the one or more subsequent point clouds with the transform.
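A minimal sketch of the transform-based compression, assuming the motion is a pure per-frame translation shared by an identified set of points (real transforms could also encode rotation or scaling); frames are hypothetical dicts mapping point ids to positions:

```python
def detect_translation(frame_a, frame_b, point_ids):
    """Return the single translation shared by all identified points, else None."""
    deltas = {tuple(b - a for a, b in zip(frame_a[i], frame_b[i])) for i in point_ids}
    return deltas.pop() if len(deltas) == 1 else None

def compress_animation(frames, moving_ids):
    """Encode the first frame fully; replace the moving points of each later
    frame with the transform that recreates their motion."""
    compressed = {"keyframe": frames[0], "transforms": []}
    for prev, cur in zip(frames, frames[1:]):
        t = detect_translation(prev, cur, moving_ids)
        if t is None:
            raise ValueError("motion is not a shared translation; store points verbatim")
        compressed["transforms"].append(("translate", list(moving_ids), t))
    return compressed
```

One tuple per frame replaces the full positional data of every moved point, which is where the compression comes from.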
Disclosed is a graphics system and associated methodologies for selectively increasing the level-of-detail at specific parts of a mesh model based on a point cloud that provides a higher detailed representation of the same or similar three-dimensional (“3D”) object. The graphics system receives the mesh model and the point cloud of the 3D object. The graphics system determines a region-of-interest of the 3D object based in part on differences amongst points that represent part or all of the region-of-interest. The graphics system reconstructs the region-of-interest in the mesh model and generates a modified mesh model by modifying a first set of meshes representing the region-of-interest in the mesh model to a second set of meshes based on the positional elements of the point cloud points. The second set of meshes has more meshes and represents the region-of-interest at a higher level-of-detail than the first set of meshes.
Disclosed is a system and associated methods for color correcting three-dimensional (“3D”) objects by leveraging the 3D positional data associated with point cloud data points that form the 3D objects and by adjusting the defined or inherited color values of the data points to account for multiple factors that are derived from the data point 3D positions. The system determines a color variance between the color values of the first set of points and the color values of the second set of points, determines a variance factor based on the positional elements of the second set of points that contributes to the color variance between the points, and adjusts the color variance to a modified color variance based on the variance factor. The system then modifies or color corrects the color values of the second set of points based on the modified color variance.
Systems and associated methods are provided for distributed three-dimensional (3D) content generation whereby different 3D assets for a 3D scene are generated by different asset generators at different network tiers according to the latency sensitivity of each 3D asset. The system retrieves 3D assets for the requested 3D scene, and differentiates a first 3D asset that is latency sensitive from a second 3D asset that is latency tolerant. The system generates the first 3D asset with a first asset generator at a first network tier, and generates the second 3D asset with a second asset generator at a more distant second network tier. The system distributes the generated primitives for the first 3D asset to the user device with a first amount of latency and the generated primitives for the second 3D asset to the user device with a second amount of latency.
A three-dimensional (3D) graphics system automatically defines or accurately adjusts normals for primitives of a 3D model. The system receives the 3D model primitives, selects a particular primitive at a specific position relative to other primitives of the 3D model, and defines a normal with a first direction for the particular primitive based on an association of the normal with the first direction to the specific position. The system then selects a set of primitives with positions next to the particular primitive, and defines a normal for each primitive of the set of primitives with a second direction that is perpendicular to a surface spanned between positions of the particular primitive and each primitive of the set of primitives.
A three-dimensional (3D) animation system automatically assigns accurate animation physics to points of a point cloud to realistically simulate motion of the points in response to different applied forces. The 3D animation system receives the points that are defined with positions in a 3D space and with visual characteristics. The 3D animation system analyzes one or more of the positions and the visual characteristics of the points, classifies the points based on a commonality in the positions or the visual characteristics of the points being associated with a particular classification, and maps a set of animation physics that is defined for the particular classification to the points. The 3D animation system may then animate the points based on the set of animation physics generating an effect in response to a force that is applied to the points.
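The classify-then-map step can be sketched as follows; the color-saturation rule and the physics profiles are purely illustrative assumptions standing in for the system's classifier and its defined animation physics:

```python
# hypothetical physics profiles keyed by classification (names are illustrative)
PHYSICS_BY_CLASS = {
    "metal": {"stiffness": 0.95, "damping": 0.1},
    "cloth": {"stiffness": 0.2, "damping": 0.8},
}

def classify(points):
    """Toy rule standing in for the classifier: points whose colors are mostly
    desaturated share a 'metal' commonality, otherwise 'cloth'."""
    desaturated = sum(1 for _, (r, g, b) in points if max(r, g, b) - min(r, g, b) < 16)
    return "metal" if desaturated > len(points) / 2 else "cloth"

def assign_physics(points):
    """Map the set of animation physics defined for the classification to the points."""
    return PHYSICS_BY_CLASS[classify(points)]
```

Whatever the actual classifier, the mapping step is a lookup from classification to a reusable physics profile rather than hand-tuned parameters per point.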
G06T 17/00 - Three-dimensional [3D] modelling for computer graphics
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
42.
Systems and methods for dynamic backfilling of a three-dimensional object
Disclosed is a graphics system and associated methods for dynamically backfilling a point cloud to conceal gaps that appear when zooming into the point cloud. The dynamic backfilling fills a gap with dynamically generated points that continue a texture, contour, pattern, shape, coloring, and/or other commonality of an original set of points from the point cloud that form a single continuous surface with that gap. The system defines a model based on images of an object or scene. The model represents the object or scene at different zoom depths with different dynamically generated points. The system determines that a requested render position exposes a gap in the original set of points. The system fills the gap by using the model to generate points at positions over the gap that continue a shape, form, or structure of a single continuous surface formed by the original set of points.
A three-dimensional (3D) graphics system automatically corrects the orientation of different 3D models based on a classification of the objects represented by each 3D model. The 3D graphics system receives a 3D model that is defined with multiple primitives distributed in a 3D space. The 3D graphics system determines a classification based on the primitives having a unique pattern, commonality, or feature that differentiates a particular object from other objects. The 3D graphics system maps points-of-reference that are associated with the classification to two or more primitives in the 3D space of the 3D model, generates an orientation vector based on the points-of-reference, and adjusts an orientation with which the 3D model is presented based on the orientation vector.
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
44.
Systems and methods for generating customized enhanced reality experiences based on multi-angle fiducial markers
Multi-angle fiducial markers with different sets of encoded cells are used for the generation of customized enhanced reality experiences on user devices. A device detects a multi-angle fiducial marker from its current position. The device decodes first data from a particular set of cells of the multi-angle fiducial marker that is resolvable from the device's current position relative to the marker position, while one or more cells from the other sets of cells remain unresolvable from that position. The device retrieves a first presentation of content from a source identified in the first data, defines a second presentation for the content based on the first data, and generates an enhanced reality experience with the second presentation of the content presented at a specific position relative to the position of the multi-angle fiducial marker in a display of the device.
G06K 7/14 - Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
G06T 3/60 - Rotation of whole images or parts thereof
45.
Systems and methods for improved interactivity with three-dimensional objects
Disclosed is a system and associated methods for improving interactions with three-dimensional (“3D”) objects in a 3D space by dynamically defining the positioning of the handles used to control the interactions with the camera and/or the 3D objects. The system analyzes the positioning of different constructs that form the 3D objects. From the analysis, the system defines handles at different dynamically determined positions about the 3D objects. The system applies an edit from a first dynamically determined position about a particular 3D object in response to a user interaction with a first handle defined at the first dynamically determined position, and applies the edit from a second dynamically determined position about the particular 3D object in response to a user interaction with a second handle defined at the second dynamically determined position.
A modeling system converts polygons of a three-dimensional (3D) mesh model to points of a point cloud in an automated manner that increases the resolution and visual fidelity of the point cloud relative to the 3D mesh model. The system receives the polygons of the 3D mesh model, and generates points over the flat plane of each polygon according to a density and arrangement that increases the resolution of the points relative to the original polygon. The system receives an enhancement map with values for displacing the polygons of the 3D mesh model. The system displaces the generated points by mapping the values from positions in the enhancement map to corresponding positions of the generated points. The system generates the point cloud with the displaced points to provide improved visual quality and detail relative to the polygons of the 3D mesh model after enhancement with the enhancement map.
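Point generation over a polygon's plane followed by displacement can be sketched with barycentric sampling over a triangle; the `height_at` callback stands in for the enhancement-map lookup and is an assumption of this sketch:

```python
def sample_triangle(v0, v1, v2, n):
    """Generate points over a triangle's flat plane via barycentric coordinates,
    at a density controlled by n (roughly n*n points per triangle)."""
    points = []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            a, b = i / n, j / n
            c = 1.0 - a - b
            points.append(tuple(a * p + b * q + c * r for p, q, r in zip(v0, v1, v2)))
    return points

def displace(points, normal, height_at):
    """Offset each generated point along the polygon normal by the value the
    enhancement map defines at that position (height_at is a stand-in lookup)."""
    return [tuple(c + height_at(p) * nc for c, nc in zip(p, normal)) for p in points]
```

Raising `n` increases point density beyond the single source polygon, and the displacement adds the detail the flat polygon could not carry.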
Disclosed is a graphics system and associated methods for generating and animating three-dimensional (“3D”) assets with a dynamic resolution. The graphics system receives a 3D asset at a first resolution, defines procedural surfaces that recreate the overall shape of the 3D asset, and generates the 3D asset at any desired resolution from the defined procedural surfaces. Specifically, the graphics system partitions the overall shape of the 3D object into simpler shapes, defines equations that recreate the simpler shapes, and generates new points amongst the existing points at positions along surfaces that are created by each of the equations. The graphics system generates the 3D asset at a second resolution that is greater than the first resolution by rendering the new points with the existing points.
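As a one-dimensional analogue of the procedural surfaces, the sketch below treats consecutive points as defining line equations and generates new points along them at any requested density:

```python
def upsample(points, factor):
    """Generate new points along the line equations defined between consecutive
    existing points, multiplying the sampling density by `factor`."""
    out = []
    for (x0, y0, z0), (x1, y1, z1) in zip(points, points[1:]):
        for i in range(factor):
            t = i / factor  # parameter along the segment's equation
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), z0 + t * (z1 - z0)))
    out.append(points[-1])
    return out
```

The graphics system fits richer surface equations (not just segments), but the principle is the same: resolution becomes a sampling parameter of the fitted equations rather than a fixed property of the asset.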
G06T 13/40 - 3D [Three-dimensional] animation of characters, e.g. humans, animals or virtual beings
G06T 3/4053 - Scaling of whole images or parts thereof, e.g. expanding or contracting, based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
A three-dimensional (“3D”) interactive system and associated methods automatically add interactivity to a 3D scene by detecting and segmenting the primitives that represent different objects in the 3D scene, associating different animation models to the primitives of the represented objects, and separately animating the primitives of a represented object in response to a user interaction with one or more of those primitives based on the associated animation model. The system receives an undifferentiated 3D model of the scene, selects different sets of primitives that share unique commonality associated with different objects, and generates a differentiated 3D model with a different classification for each set of the different sets of primitives that represent a different object. The system detects an interaction with primitives of a particular classification, and animates the primitives according to an animation model that is defined for an object identified with the particular classification.
G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
G06T 7/50 - Depth or shape recovery
G06T 11/20 - Drawing from basic elements, e.g. lines or circles
G06T 17/00 - Three-dimensional [3D] modelling for computer graphics
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/77 - Processing of image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
49.
Systems and methods for modifying a user interface based on eye focus
Disclosed is a computing system and associated methods that use changes in eye focus or the depth at which a user is looking to modify a user interface. The computing system presents a three-dimensional (“3D”) environment with different user interface (“UI”) elements that are positioned in a foreground or near plane of the 3D environment and that partially or wholly obscure a background or far plane of the 3D environment. The computing system detects a change in user eye focus from the foreground to the background by using a sensor to track changes to the pupil or the amount of light reflecting off the user's eye. The computing system produces an unobstructed view to all or part of the background by adjusting positioning, opacity, or other properties of the UI elements in the foreground in response to detecting the change in the user eye focus.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
50.
Systems and methods for generative drawing and customization of three-dimensional (“3D”) objects in 3D space using gestures
Disclosed is a system and associated methods for the generative drawing and customization of three-dimensional (“3D”) objects in 3D space using hand gestures. The system adapts the hand gestures as intuitive controls for rapidly creating and customizing the 3D objects to have a desired artistic effect or a desired look. The system selects a 3D model of a particular object in response to a first user input, sets a position in a virtual space at which to generate the particular object in response to a mapped position of a first hand gesture tracked in a physical space, and generates a first state representation of the particular object at the position in the virtual space in response to a second hand gesture. The first state representation presents the particular object at one of different modeled stages of the particular object's lifecycle.
A three-dimensional (“3D”) interactive system uses the positional data of the point cloud points to identify exactly where user input contacts part of a 3D object represented by the point cloud points, and to generate precise haptic feedback based on the haptic characteristics of the contacted points. Specifically, the system determines that coordinates of the user input match or are within a threshold distance of a particular data point from a set of data points that form the 3D object. The system retrieves the haptic characteristics of the particular data point, and generates the haptic response on a haptic input device based on the haptic characteristics of the particular data point.
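Locating the contacted point by threshold distance reduces to a nearest-neighbor test over the point positions; the dictionary layout for positions and haptic characteristics is an assumption of this sketch:

```python
def haptic_response(touch, points, threshold=0.1):
    """Return the haptic characteristics of the nearest point that the user
    input contacts, or None when nothing is within the threshold distance."""
    best, best_dist = None, threshold
    for point in points:
        dist = sum((a - b) ** 2 for a, b in zip(touch, point["position"])) ** 0.5
        if dist <= best_dist:
            best, best_dist = point, dist
    return best["haptic"] if best else None
```

The returned characteristics would then drive the haptic input device, so touching a rough region of the 3D object feels different from touching a smooth one.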
Disclosed is an encoding and decoding system and associated methods for producing a compressed waveform that encodes data points of a point cloud in a format and size that may be transmitted over a data network, decompressed, decoded, and rendered on a remote device without the buffering or lag associated with transmitting and rendering an uncompressed point cloud. The encoder receives a request from a remote device to access the point cloud, encodes a set of data points from the point cloud as one or more signals derived from values defined for the positional and non-positional elements of each data point from the set of data points, generates one or more compressed waveforms by compressing the one or more signals, and transmits the one or more compressed waveforms to the remote device in response to the request for decompression, decoding, and image rendering.
Disclosed is a system and associated methods for compressing data in a three-dimensional (“3D”) model. The system receives the constructs that form different shapes of a 3D object represented by the 3D model. The system selects a set of the constructs based on the set of constructs forming a particular shape that is compressible with a function. The system defines the function that generates an approximate shape for the particular shape formed by the set of constructs, and compresses the 3D model by replacing the set of constructs with the function. The system may tune the function so that the approximate shape matches the particular shape with more specificity, may define a noise pattern that approximates and applies the non-uniformity of the particular shape to the approximate shape, and may define a gradient pattern that approximates and applies the coloring of the set of constructs to the approximate shape.
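Replacing a set of constructs with a fitted function can be illustrated with a circle fit in 2D; the centroid-plus-mean-radius fit and the tolerance check below are simplifications of the tuning the system describes:

```python
import math

def fit_circle(points):
    """Approximate a set of 2D points with a circle: centroid center, mean radius."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    r = sum(math.hypot(x - cx, y - cy) for x, y in points) / len(points)
    return cx, cy, r

def compress_shape(points, tolerance=0.05):
    """Replace the points with the circle function when the approximate shape
    matches within tolerance; otherwise keep the raw constructs."""
    cx, cy, r = fit_circle(points)
    err = max(abs(math.hypot(x - cx, y - cy) - r) for x, y in points)
    return ("circle", cx, cy, r) if err <= tolerance else ("raw", points)
```

Three parameters stand in for arbitrarily many points; noise and gradient patterns, as the abstract notes, would be layered on top to restore non-uniformity and coloring.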
Disclosed is a graphics system and associated methods for preserving or improving image quality when rendering a decimated image or three-dimensional (“3D”) model that has been decimated to remove some primitives from the original representation of the undecimated image or 3D model. The graphics system receives decimated primitives that are defined with a position, visual characteristics, and at least first and second surface normals. The graphics system defines a light source for illuminating the decimated primitives, determines that the second surface normal of a particular decimated primitive receives more light from the light source than the first surface normal of the particular decimated primitive, and generates a visualization for the particular primitive at the position of the particular primitive with the visual characteristics of the particular primitive adjusted according to an amount of light from the light source reaching the particular primitive via the second surface normal.
G06T 3/4023 - Scaling of whole images or parts thereof, e.g. expanding or contracting, based on decimation of pixels or lines of pixels; Scaling of whole images or parts thereof, e.g. expanding or contracting, based on insertion of pixels or lines of pixels
Disclosed are systems and methods for the out-of-order predictive streaming of elements from a three-dimensional (“3D”) image file so that a recipient device is able to produce a first visualization of at least a first streamed element from a particular perspective, similar to the instant transfer of two-dimensional (“2D”) images, while the additional elements and perspectives of the 3D image are streamed. The sending device prioritizes the 3D image elements based on a predicted viewing order, streams a particular element from a particular perspective with a priority that is greater than a priority associated with other elements and other perspectives, determines a next element to stream after the particular element based on the next element being positioned adjacent to the particular element and having a priority that is greater than adjacent elements, and streams the next element to the recipient device.
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation
A cognitive modeling system uses a cognitive model to efficiently execute a variety of tasks over large datasets. The cognitive modeling system receives an input dataset and a query specifying a task to execute in relation to the input dataset. The cognitive modeling system determines an amount of similarity between each child node of the cognitive model and one or more of the input dataset and the query, selects a particular child node with the greatest determined amount of similarity, and executes the task using the particular child node. The task execution includes searching the particular child node for a connected set of neurons that match a particular part of the input dataset by a threshold amount, and applying an output that is associated with the connected set of neurons to the particular part of the input dataset.
G06N 3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
57.
Systems and methods for multi-modality interactions in a spatial computing environment
Disclosed is a spatial computing system and associated methods that provide multi-modality interactions for precise and imprecise interactions with two-dimensional (“2D”) and three-dimensional (“3D”) user interfaces (“UI”) that are presented in a 3D interactive space. The multi-modality interactions are provided by a dynamic spatial pointer. The dynamic spatial pointer has a first 3D representation for navigating the 3D interactive space and selecting one of the presented UI elements. The dynamic spatial pointer converts from the first 3D representation to a different second 3D representation in response to attaching to one of the 3D UI elements, and converts to a first 2D representation in response to attaching to one of the 2D UI elements. The second 3D representation remains attached to and tracks the 3D form of the 3D UI element, and the first 2D representation remains attached to and tracks the 2D plane of the 2D UI element.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
G06T 19/00 - Manipulating 3D models or images for computer graphics
58.
Systems and methods for automatic and dynamic generation of shape-conforming and computationally efficient colliders for point clouds
A three-dimensional (3D) interactivity system automatically and dynamically generates shape-conforming and computationally efficient colliders for detecting collisions with automatically differentiated features represented by different sets of points in a point cloud. The system selects a set of points that represent a particular feature of a 3D object, decimates the set of points to a subset of points that represent an approximate shape of the particular feature with fewer points than the set of points, and generates a collider with the approximate shape represented by the subset of points. The system may then use the collider in determining whether a collision element collides with the particular feature.
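Decimation followed by collider construction can be sketched as follows; the axis-aligned bounding box is a deliberately coarse stand-in for the shape-conforming collider the system generates:

```python
def decimate(points, keep_every=4):
    """Reduce the set of points to a subset that approximates the feature's
    shape with fewer points."""
    return points[::keep_every]

def aabb_collider(points):
    """Build an axis-aligned bounding-box collider from the decimated subset."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def collides(collider, element):
    """Test whether a collision element (here a single point) hits the collider."""
    lo, hi = collider
    return all(a <= c <= b for a, c, b in zip(lo, element, hi))
```

Collision tests then run against the small subset-derived collider instead of every point of the feature, which is what makes the collider computationally efficient.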
G06T 17/00 - Three-dimensional [3D] modelling for computer graphics
G06T 3/4023 - Scaling of whole images or parts thereof, e.g. expanding or contracting, based on decimation of pixels or lines of pixels; Scaling of whole images or parts thereof, e.g. expanding or contracting, based on insertion of pixels or lines of pixels
59.
Systems and methods for removing lighting effects from three-dimensional models
Disclosed is a system and associated methods that account for the change in coloring or tint that some wavelengths of light have on materials of an object, and that generate an object model with the accounted-for change in coloring or tint removed from the pixels or constructs of that model. The system receives spectral data in different electromagnetic spectrum bands for a particular surface of the object. The system measures a first quality of the light that illuminates the object, and determines a reactivity of the particular surface to the first quality of the light based on the spectral data matching a spectral signature of a material having that reactivity. The system removes the lighting effects on the particular surface by adjusting the spectral data according to the reactivity and the measured first quality of the light illuminating the object.
G01N 21/25 - Colour; Spectral properties, i.e. comparison of the effect of the material on the light at two or more different wavelengths or wavelength bands
G01N 21/31 - Colour; Spectral properties, i.e. comparison of the effect of the material on the light at two or more different wavelengths or wavelength bands, by investigating the relative effect of the material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry
G06T 17/00 - Three-dimensional [3D] modelling for computer graphics
60.
Systems and methods for customizing motion associated with point cloud animations
Disclosed is a system and associated methods for rigging points of a point cloud for animation and customizing the animation for different subsets of rigged points in order to rapidly and easily generate complex animations. Generating a complex animation involves defining an animation element in the point cloud space, defining an animation for moving the animation element, linking points of the point cloud to the animation element, and adjusting the animation from the animation element that is applied to a first subset of the linked points based on a selection of the first subset of linked points that is made using an adjustment tool. The system renders the complex animation by moving a second subset of the linked points according to the defined animation of the animation element and by moving the first subset of linked points according to the defined animation as adjusted by the adjustment tool.
Disclosed is a system for differentiating the selection of three-dimensional (“3D”) image data in a 3D space from other unselected 3D image data that may be positioned in front of the selected 3D image data, and for customizing the editing operations that are presented in a user interface based on the object or material property represented in the selection. The system selects a set of 3D image data in response to a user input, and adjusts the transparency of unselected 3D image data that is positioned in front of the selected set of 3D image data. The system presents a differentiated visualization by rendering the selected set of 3D image data according to the original size, position, and visual characteristics defined for the selected set of 3D image data, and by performing a partially or fully transparent rendering of the unselected 3D image data as a result of the transparency adjustment.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
62.
Systems and methods for editing three-dimensional image data with realistic camera and lens effects
Disclosed is an editing system for postprocessing three-dimensional (“3D”) image data to realistically recreate the effects associated with viewing or imaging a represented scene with different camera settings or lenses. The system receives an original image and an edit command with a camera setting or a camera lens. The system associates the selection to multiple image adjustments. The system performs a first of the multiple image adjustments on a first set of 3D image data from the original image in response to the first set of 3D image data satisfying specific positional or non-positional values defined for the first image adjustment, and performs a second of the multiple image adjustments on a second set of 3D image data from the original image in response to the second set of 3D image data satisfying the specific positional or non-positional values defined for the second image adjustment.
Disclosed is a system and associated methods for compressing motion within an animated point cloud. The resulting compressed file encodes different transforms that recreate the motion of different sets of points across different point clouds or frames of the animation in place of the data for the different sets of points from the different point clouds. The compression involves detecting a motion that changes positioning of a set of points between a first point cloud and subsequent point clouds of an uncompressed encoding of two or more frames of an animation. The compression further involves defining a transform that models the motion, and generating a compressed animated point cloud by encoding the data of the first point cloud in the compressed animated point cloud, and by replacing the data for the set of points in the one or more subsequent point clouds with the transform.
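The transform-based compression in the abstract above can be sketched as follows; this is a minimal illustration in which the "transform" is a single average translation and the moving set of points is assumed to be known (both are assumptions, not the claimed implementation):

```python
def compress_animation(frames, moving_ids):
    """Replace per-frame data for the moving point set with one translation
    transform per frame. frames: list of {point_id: (x, y, z)} dicts."""
    base = frames[0]
    transforms = []
    for frame in frames[1:]:
        # Average translation of the moving set relative to frame 0.
        sums = [0.0, 0.0, 0.0]
        for i in moving_ids:
            for axis in range(3):
                sums[axis] += frame[i][axis] - base[i][axis]
        transform = tuple(s / len(moving_ids) for s in sums)
        # Only points the transform does not model keep explicit data.
        static = {i: p for i, p in frame.items() if i not in moving_ids}
        transforms.append((transform, static))
    return {"base": base, "transforms": transforms}

def decompress_frame(compressed, k, moving_ids):
    """Rebuild frame k (k >= 1) from the base frame plus transform k - 1."""
    transform, static = compressed["transforms"][k - 1]
    frame = dict(static)
    for i in moving_ids:
        frame[i] = tuple(b + t for b, t in zip(compressed["base"][i], transform))
    return frame
```

Real implementations would presumably model rotation and non-rigid motion as well; the point of the sketch is that the per-point data of later frames is replaced by one compact transform.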
Disclosed is a system and associated methods for generating a composite image from scans or images that are aligned using invisible fiducials. The invisible fiducial is a transparent substance or a projected specific wavelength that is applied to and changes the reflectivity of a surface at the specific wavelength without interfering with a capture of positions or visible color characteristics across the surface. The system performs a first capture and a second capture of a scene with the surface, and detects a position of the invisible fiducial in each capture based on values measured across the specific wavelength that satisfy a threshold associated with the invisible fiducial. The system aligns the first capture with the second capture based on the detected positions of the invisible fiducial, and generates a composite image by merging or combining the positions or visible color characteristics from the aligned captures.
G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
G06T 7/50 - Depth or shape recovery
G06T 7/90 - Determination of colour characteristics
G06T 17/00 - Three-dimensional [3D] modelling for computer graphics
G06V 10/24 - Aligning, centring, orientation detection or correction of the image
G06V 10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. the histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
65.
Systems and methods for defining and automatically executing 2D/3D data manipulation workflows
An editing system and associated methods execute a workflow involving a sequence of two-dimensional/three-dimensional (“2D/3D”) data manipulations and/or associated operations to 2D/3D data of different formats by calling functions of different applications that are compatible with the changing format of the 2D/3D data throughout the workflow. The system determines that the 2D/3D data provided as input to a first workflow node is in a first format, and executes the first node by invoking a function of a first application that implements the operations associated with the first node on the 2D/3D data in the first format. Execution of the first node converts the 2D/3D data to a different second format that is passed to a second workflow node. The system executes a second workflow node by invoking a function of a second application that implements the operations associated with the second node on the 2D/3D data in the second format.
Disclosed is a system and associated methods for dynamically enhancing a three-dimensional (“3D”) animation that is generated from points of one or more point clouds. The system reduces noise and corrects gaps, holes, and/or distortions that are created in different frames as a result of adjusting the point cloud points to create the 3D animation. The system detects a set of points that share positional and/or non-positional commonality of a feature in the 3D animation. The system applies one or more adjustments to the set of points to animate the feature from a current frame to a next frame, and detects a point from the set of points that deviates from the positional and/or non-positional commonality of the feature after applying the adjustments. The system dynamically enhances the 3D animation by correcting the point prior to rendering the next frame of the 3D animation.
Disclosed is a system and associated methods for improving interactions with three-dimensional (“3D”) objects in a 3D space by dynamically defining the positioning of the handles used to control the interactions with the camera and/or the 3D objects. The system analyzes the positioning of different constructs that form the 3D objects. From the analysis, the system defines handles at different dynamically determined positions about the 3D objects. The system applies an edit from a first dynamically determined position about a particular 3D object in response to a user interaction with a first handle defined at the first dynamically determined position, and applies the edit from a second dynamically determined position about the particular 3D object in response to a user interaction with a second handle defined at the second dynamically determined position.
Disclosed is a graphics system and associated methods for dynamically backfilling a point cloud to conceal gaps that appear when zooming into the point cloud. The dynamic backfilling fills a gap with dynamically generated points that continue a texture, contour, pattern, shape, coloring, and/or other commonality of an original set of points from the point cloud that form a single continuous surface with that gap. The system defines a model based on images of an object or scene. The model represents the object or scene at different zoom depths with different dynamically generated points. The system determines that a requested render position exposes a gap in the original set of points. The system fills the gap by using the model to generate points at positions over the gap that continue a shape, form, or structure of a single continuous surface formed by the original set of points.
Disclosed are systems and methods for the out-of-order predictive streaming of elements from a three-dimensional (“3D”) image file so that a recipient device is able to produce a first visualization of at least a first streamed element from a particular perspective, similar to the instant transfer of two-dimensional (“2D”) images, while the additional elements and perspectives of the 3D image are streamed. The sending device prioritizes the 3D image elements based on a predicted viewing order, streams a particular element from a particular perspective with a priority that is greater than a priority associated with other elements and other perspectives, determines a next element to stream after the particular element based on the next element being positioned adjacent to the particular element and having a priority that is greater than adjacent elements, and streams the next element to the recipient device.
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation
A multi-dimensional encoder (“MDE”) receives a file with different data points. Each data point is defined with uncompressed multi-dimensional data. The multi-dimensional data may include multiple elements for defining a data point position (e.g., a multi-dimensional position including x, y, and/or z coordinates), color values of the data point (e.g., red, green, and blue), and/or other data point attributes. The MDE assigns an index to each data point or each data point element, and maps the data points to a frequency domain based on a frequency with which values occur in two or more of the elements (e.g., multiple dimensions). The MDE generates a line that represents the frequency, and provides a compressed file format to a requesting device that includes the line and different sets of indices that are associated with different frequencies represented by the line.
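One way to read the index-to-frequency mapping described above is as a grouping of data points by repeated element values; the sketch below is an assumed, simplified interpretation (the generated "line" that represents the frequencies is omitted, and all function names are illustrative):

```python
from collections import defaultdict

def frequency_encode(points):
    """points: list of equal-length tuples, e.g. (x, y, z, r, g, b).
    Returns distinct value tuples paired with the indices of the data
    points that share them, most frequent first -- the 'different sets of
    indices associated with different frequencies'."""
    groups = defaultdict(list)
    for index, point in enumerate(points):
        groups[point].append(index)  # assign an index to each data point
    # Most frequent value tuples first; these offer the most savings.
    return sorted(groups.items(), key=lambda kv: -len(kv[1]))

def frequency_decode(table, n):
    """Invert the mapping back to the original list of n points."""
    points = [None] * n
    for value, indices in table:
        for index in indices:
            points[index] = value
    return points
```

Each repeated value tuple is stored once, so storage shrinks in proportion to how often values recur across the multiple dimensions.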
An encoder is disclosed that uses hyperspectral data to produce a unified three-dimensional (“3D”) scan that incorporates depth for various points, surfaces, and features within a scene. The encoder may scan a particular point of the scene using frequencies from different electromagnetic spectrum bands, may determine spectral properties of the particular point based on returns measured across a first set of bands, may measure a distance of the particular point using frequencies of another band that does not interfere with the spectral properties at each of the first set of bands, and may encode the spectral properties and the distance of the particular point in a single hyperspectral dataset. The spectral signature encoded within the dataset may be used to classify the particular point or generate a point cloud or other visualization that accurately represents the spectral properties and distances of the scanned points.
G01S 17/89 - Lidar systems specially adapted for specific applications, for mapping or imaging
G06T 7/521 - Depth or shape recovery from laser ranging, e.g. using interferometry; Depth or shape recovery from the projection of structured light
G06V 10/143 - Sensing or illuminating at different wavelengths
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
72.
Systems and methods for full lateral traversal across layers of a tree-based representation
Disclosed is an encoding system and associated methods for generating a graph-integrated tree-based representation of data that provides for direct lateral traversals of the nodes in each layer of the tree-based representation. The encoding system organizes data from a dataset to a tree-based representation with multiple layers and multiple nodes in each layer. The encoding system detects the nodes in each layer, and defines a graph structure that links the nodes in each layer for direct lateral access. The encoding system searches the tree-based representation in response to a query for a particular subset of the data by performing a single downward traversal to a particular layer with individual nodes that satisfy part of the query, and by laterally traversing the nodes in the particular layer using the graph structure to directly access a second node in the particular layer from a first node in the particular layer.
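The layer-linking graph structure described above can be sketched with a breadth-first pass that chains each layer's nodes; node and function names are illustrative assumptions:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.children = []
        self.next = None  # lateral (graph) link to the next node in the layer

def link_layers(root):
    """Define the graph structure: after this breadth-first pass, any node
    can reach the rest of its layer by following .next pointers, without
    re-descending from the root."""
    layer = [root]
    while layer:
        for left, right in zip(layer, layer[1:]):
            left.next = right
        layer = [child for node in layer for child in node.children]

def scan_layer(node):
    """Single downward entry into a layer, then direct lateral traversal."""
    while node is not None:
        yield node.key
        node = node.next
```

A query would first descend to the node satisfying part of its criteria, then call `scan_layer` from that node instead of repeating downward traversals for each sibling subtree.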
Disclosed is a graphics system and associated methodologies for selectively increasing the level-of-detail at specific parts of a mesh model based on a point cloud that provides a higher detailed representation of the same or similar three-dimensional (“3D”) object. The graphics system receives the mesh model and the point cloud of the 3D object. The graphics system determines a region-of-interest of the 3D object based in part on differences amongst points that represent part or all of the region-of-interest. The graphics system reconstructs the region-of-interest in the mesh model and generates a modified mesh model by modifying a first set of meshes representing the region-of-interest in the mesh model to a second set of meshes based on the positional elements of the point cloud points. The second set of meshes has more meshes and represents the region-of-interest at a higher level-of-detail than the first set of meshes.
Disclosed is a compression system for compressing image data. The compression system receives an uncompressed image file with data points that are defined with absolute values for elements representing the data point position in a space. The compression system stores the absolute values defined for a first data point in a compressed image file, determines a difference between the absolute values of the first data point and the absolute values of a second data point, derives a relative value for the absolute values of the second data point from the difference, and stores the relative value in place of the absolute values of the second data point in the compressed image file.
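The absolute-to-relative encoding described above amounts to delta coding of successive data points; a minimal sketch, assuming chained differences between consecutive points:

```python
def delta_compress(points):
    """Store the first point's absolute values; every later point is stored
    as its element-wise difference from the preceding point."""
    if not points:
        return []
    out = [points[0]]
    for prev, cur in zip(points, points[1:]):
        out.append(tuple(c - p for c, p in zip(cur, prev)))
    return out

def delta_decompress(deltas):
    """Rebuild absolute values by accumulating the stored differences."""
    if not deltas:
        return []
    points = [deltas[0]]
    for delta in deltas[1:]:
        points.append(tuple(p + d for p, d in zip(points[-1], delta)))
    return points
```

The relative values tend to be small numbers near zero, which take fewer bits to store than full absolute coordinates.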
Disclosed is an encoding and decoding system and associated methods for producing a compressed waveform that encodes data points of a point cloud in a format and size that may be transmitted over a data network, decompressed, decoded, and rendered on a remote device without the buffering or lag associated with transmitting and rendering an uncompressed point cloud. The encoder receives a request from a remote device to access the point cloud, encodes a set of data points from the point cloud as one or more signals derived from values defined for the positional and non-positional elements of each data point from the set of data points, generates one or more compressed waveforms by compressing the one or more signals, and transmits the one or more compressed waveforms to the remote device in response to the request for decompression, decoding, and image rendering.
Disclosed is a system for encoding and/or rendering animations without temporal or spatial restrictions. The system may encode an animation as a point cloud with first data points having a first time value and different positional and non-positional values, and second data points having a second time value and different positional and non-positional values. Rendering the animation may include generating and presenting a first image for the first time value of the animation based on the positional and non-positional values of the first data points, and generating and presenting a second image for the second time value of the animation by changing a visualization at a first position in the first image based on the positional values of a data point from the second data points corresponding to the first position and the data point non-positional values differing from the visualization.
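The time-value-based rendering described above can be sketched by treating each frame as an incremental update of the previous visualization; positions and colors below are simplified stand-ins for the full positional and non-positional values:

```python
def render_animation(points):
    """points: list of (time, position, color) tuples. The frame for each
    time value starts from the previous frame's visualization and changes
    only the positions where a data point with that time value lands and
    its color differs from what is already shown."""
    canvas = {}
    frames = []
    for t in sorted({time for time, _, _ in points}):
        for time, pos, color in points:
            if time == t and canvas.get(pos) != color:
                canvas[pos] = color
        frames.append(dict(canvas))
    return frames
```

Because only differing points are applied per time value, the encoding carries no fixed frame rate or spatial grid: any set of time values and positions can coexist in one point cloud.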
A graphics system and associated methods produce a continuous presentation and/or visualization from a point cloud with a distributed and disconnected set of data points that otherwise produce a discontinuous presentation and/or visualization of a scene. The graphics system receives the data points, and expands a polygonal mesh from the position of each particular data point such that each side of the polygonal mesh connects to a side of a polygonal mesh that is expanded from the position of each data point of a set of data points that neighbors the particular data point. The polygonal mesh of the particular data point spans a larger area or volume of the space than the particular data point. The graphics system produces the continuous visualization of the scene from rendering the polygonal mesh that is expanded from the position of each particular data point instead of rendering the data points.
Disclosed is a system and associated methods for color correcting three-dimensional (“3D”) objects by leveraging the 3D positional data associated with point cloud data points that form the 3D objects and by adjusting the defined or inherited color values of the data points to account for multiple factors that are derived from the data point 3D positions. The system determines a color variance between the color values of the first set of points and the color values of the second set of points, determines a variance factor based on the positional elements of the second set of points that contributes to the color variance between the points, and adjusts the color variance to a modified color variance based on the variance factor. The system then modifies or color corrects the color values of the second set of points based on the modified color variance.
Provided is a system for structured and controlled movement and viewing within a point cloud. The system may generate or obtain a plurality of data points and one or more waypoints for the point cloud, present a first subset of the plurality of data points in a field-of-view of a camera at an initial position and an initial orientation of a first waypoint, change the camera field-of-view from at least one of (i) the initial position to a modified position within a volume of positions defined by orientation controls of the first waypoint or (ii) the initial orientation to a modified orientation within a range of orientations defined by the orientation controls of the first waypoint, and may present a second subset of the plurality of data points in the camera field-of-view at one or more of the modified position and the modified orientation.
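The waypoint-constrained camera movement described above reduces to clamping a requested pose against per-waypoint bounds; the field names below are assumptions, since the abstract only specifies a volume of positions and a range of orientations:

```python
def clamp(value, low, high):
    return min(max(value, low), high)

def constrain_camera(position, orientation, waypoint):
    """Keep a requested camera pose inside a waypoint's permitted bounds.

    waypoint: {"position_bounds": [(lo, hi), ...] for each axis,
               "orientation_bounds": [(lo, hi), ...] e.g. for yaw, pitch}
    """
    pos = tuple(clamp(p, lo, hi)
                for p, (lo, hi) in zip(position, waypoint["position_bounds"]))
    orient = tuple(clamp(o, lo, hi)
                   for o, (lo, hi) in zip(orientation, waypoint["orientation_bounds"]))
    return pos, orient
```

Any movement or look input is passed through this constraint, so the presented subset of data points always corresponds to a pose the waypoint permits.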
Disclosed is a three-dimensional (“3D”) scanning system that synchronizes the scanning of a scene with the viewing of the scan results relative to a live view of the scene. The system includes a first device that scans a first set of surfaces that are exposed to the first device from a first position. The system further includes a second device that receives the scan data as it is generated for each scanned surface of the first set of surfaces. The second device augments a visualization of a second set of surfaces, within a field-of-view of the second device from a second position, with the scan data that is generated for a subset of scanned surfaces from the first position corresponding to one or more surfaces of the second set of surfaces visualized from the second position.
Disclosed is an editing system that accounts for or leverages the three-dimensional (“3D”) positioning of 3D image data to edit attributes of a selected first set of image data based on attributes of an unselected second set of image data that is determined to be a threshold distance from the first set of image data and on the same surface as the first set of image data. The system leverages x, y, and z coordinates as well as surface normals to exclude image data from the editing that is within the threshold distance but that forms part of a different object, surface, or side about an edge of a surface than the first set of image data. The system also modifies the attribute adjustment based on the distance separating the first set of image data from a render position or each instance of the second set of image data.
Disclosed is a system and associated methods for controlling computer operation through three-dimensional (“3D”) menus, 3D toolbars, and other 3D elements that provide an efficient and dynamic organization of application functionality and icons across three dimensions. The system generates a first 3D element with a first number of regions, with each region of the first number of regions providing access to different functionality from a first set of functionality, and a second 3D element with a second number of regions, with each region of the second number of regions providing access to different functionality from a second set of functionality. The system connects the first 3D element to the second 3D element, and rotates one or more of the first 3D element and the second 3D element to change the functionality that is accessible from the one or more of the first 3D element and the second 3D element.
G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
G06F 3/04815 - Interaction with a metaphor-based environment or with an object displayed three-dimensionally, e.g. changing the user's viewpoint with respect to the environment or object
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or on a metaphor-based environment, e.g. interaction with desktop elements such as windows or icons, or assisted by a cursor changing its behaviour or appearance, using icons
G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or by the nature of the input device, e.g. gestures based on the pressure sensed by a digitiser
G06F 9/451 - Execution arrangements for user interfaces
83.
Systems and methods for interacting with three-dimensional graphical user interface elements to control computer operation
Disclosed are three-dimensional (“3D”) graphical user interface (“GUI”) elements for improving user interactions with a digital environment or a device by simplifying access to different data, functionality, and operations of the digital environment or the device. A 3D GUI element may include first visual information at a first position and second visual information at a second position within the 3D space represented by the 3D GUI element. In response to first input directed to the first visual information, the 3D GUI or system may perform a first action that is mapped to the first input and the first visual information within the 3D GUI element. In response to second input directed to the second visual information, the 3D GUI or system may perform a second action that is mapped to the second input and the second visual information within the 3D GUI element.
G06F 3/04815 - Interaction with a metaphor-based environment or with an object displayed three-dimensionally, e.g. changing the user's viewpoint with respect to the environment or object
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or on a metaphor-based environment, e.g. interaction with desktop elements such as windows or icons, or assisted by a cursor changing its behaviour or appearance, using icons
G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or by the nature of the input device, e.g. gestures based on the pressure sensed by a digitiser
G06F 9/451 - Execution arrangements for user interfaces
84.
Systems and methods for LiDAR-based camera metering, exposure adjustment, and image postprocessing
Disclosed is Light Detection and Ranging (“LiDAR”)-based camera metering, exposure adjustment, and image postprocessing. The LiDAR-based exposure adjustment may include emitting a laser from an imaging device, obtaining one or more measurements based on the laser reflecting off one or more objects in a scene and returning to the imaging device, adjusting exposure settings of the imaging device based on the one or more measurements, and capturing an image of the scene using the exposure settings. The LiDAR-based image postprocessing may include receiving an image of a scene and measurements or outputs from a LiDAR scan of the scene, and performing different adjustments to color values, contrast, brightness, saturation, levels, and other visual characteristics of different sets of pixels in the image based on different distance, material property, and/or other measurements obtained by the LiDAR for objects represented by the different sets of pixels.
Disclosed is an interface and/or system for presenting three-dimensional (“3D”) graphical user interface (“GUI”) elements to improve user interactions with a device. The system determines a first angle from which to render a 3D GUI. The system receives 3D images that are linked to the 3D GUI elements. The system generates the 3D GUI elements by rendering each of the 3D images from a first render position that aligns with the first angle. The system detects an input that changes the first angle to a second angle, and updates a visualization of the 3D GUI elements by rendering each of the 3D images from a second render position that aligns with the second angle.
G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
G06F 3/04815 - Interaction with a metaphor-based environment or with an object displayed three-dimensionally, e.g. changing the user's viewpoint with respect to the environment or object
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or on a metaphor-based environment, e.g. interaction with desktop elements such as windows or icons, or assisted by a cursor changing its behaviour or appearance, using icons
G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or by the nature of the input device, e.g. gestures based on the pressure sensed by a digitiser
G06F 9/451 - Execution arrangements for user interfaces
86.
Systems and methods for compressing three-dimensional image data
Disclosed is a system and associated methods for compressing data in a three-dimensional (“3D”) model. The system receives the constructs that form different shapes of a 3D object represented by the 3D model. The system selects a set of the constructs based on the set of constructs forming a particular shape that is compressible with a function. The system defines the function that generates an approximate shape for the particular shape formed by the set of constructs, and compresses the 3D model by replacing the set of constructs with the function. The system may tune the function so that the approximate shape matches the particular shape with more specificity, may define a noise pattern that approximates and applies the non-uniformity of the particular shape to the approximate shape, and may define a gradient pattern that approximates and applies the coloring of the set of constructs to the approximate shape.
Disclosed is a system for differentiating the selection of three-dimensional (“3D”) image data in a 3D space from other unselected 3D image data that may be positioned in front of the selected 3D image data, and for customizing the editing operations that are presented in a user interface based on the object or material property represented in the selection. The system selects a set of 3D image data in response to a user input, and adjusts the transparency of unselected 3D image data that is positioned in front of the selected set of 3D image data. The system presents a differentiated visualization by rendering the selected set of 3D image data according to the original size, position, and visual characteristics defined for the selected set of 3D image data, and by performing a partially or fully transparent rendering of the unselected 3D image data as a result of the transparency adjustment.
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
88.
Systems and methods for removing lighting effects from three-dimensional models
Disclosed is a system and associated methods that account for the change in coloring or tint that some wavelengths of light have on materials of an object, and that generate an object model with that change in coloring or tint removed from the pixels or constructs of the model. The system receives spectral data in different electromagnetic spectrum bands for a particular surface of the object. The system measures a first quality of the light that illuminates the object, and determines a reactivity of the particular surface to the first quality of the light based on the spectral data matching a spectral signature of a material having that reactivity. The system removes the lighting effects on the particular surface by adjusting the spectral data according to the reactivity to the first quality that is measured in the light illuminating the object.
G01N 21/31 - Colour; Spectral properties, i.e. comparison of the effect of the material on the light at several different wavelengths or wavelength bands, by investigating the relative effect of the material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry
G01N 21/25 - Colour; Spectral properties, i.e. comparison of the effect of the material on the light at several different wavelengths or wavelength bands
G06T 17/00 - Three-dimensional [3D] modelling for computer graphics
89.
Systems and methods for the accurate mapping of in-focus image data from two-dimensional images of a scene to a three-dimensional model of the scene
Disclosed is an imaging system and associated methods for mapping in-focus image data from two-dimensional (“2D”) images of a scene to a three-dimensional (“3D”) model of the scene. The imaging system receives the 2D images and the 3D model, determines the depth of field (“DOF”) and the field of view (“FOV”) for each 2D image, and selects a subset of 3D model constructs that form the FOV and are within the DOF of a particular 2D image. The imaging system determines pixels of the particular 2D image that represent a same set of points in the scene as the subset of 3D model constructs, and maps the visual characteristics from those pixels to non-positional elements of the subset of 3D model constructs.
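The selection step could be sketched as follows, assuming each 3D model construct carries a depth value and that the FOV test and DOF bounds come from the 2D image's metadata; all names here are hypothetical.

```python
# Minimal sketch of keeping only the constructs that a particular 2D
# image can map in-focus data onto: inside its field of view and within
# its depth of field. The construct representation is an assumption.

def select_constructs(constructs, fov_test, dof_near, dof_far):
    """Keep constructs inside the image's FOV and depth of field."""
    return [c for c in constructs
            if fov_test(c) and dof_near <= c["depth"] <= dof_far]
```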
G06T 7/90 - Determination of colour characteristics
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06T 7/521 - Depth or shape recovery from laser ranging, e.g. using interferometry; Depth or shape recovery from the projection of structured light
90.
Systems and methods for generating consistently sharp, detailed, and in-focus three-dimensional models from pixels of two-dimensional images
Disclosed is a system and associated methods for generating a consistently sharp, detailed, and in-focus three-dimensional (“3D”) model of an object from two-dimensional (“2D”) images that collectively capture all sides of the object with multiple depths-of-field. The system receives a set of 2D images that capture a particular part of the object with different depths-of-field. The system determines a first pixel from a first 2D image and a second pixel from a second 2D image that represent a common point of the object, determines that the first pixel is out of focus based on the first 2D image depth-of-field and that the second pixel is in focus based on the second 2D image depth-of-field, and defines a 3D construct, that represents the common point in a 3D model of the object, using data of the in-focus second pixel instead of data of the out-of-focus first pixel.
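The per-point pixel selection described above might look like the following sketch: among pixels from several 2D images that capture the same object point, use the one whose depth lies inside its own image's depth of field. The data layout is an assumption.

```python
# Illustrative sketch only: pick the in-focus pixel for a common object
# point. Each candidate pairs a pixel's color with the point's depth
# and the source image's DOF bounds.

def pick_in_focus(candidates):
    """candidates: (color, depth, dof_near, dof_far) per source image."""
    for color, depth, near, far in candidates:
        if near <= depth <= far:
            return color  # first in-focus pixel defines the 3D construct
    return None
```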
Disclosed is a computing system and associated methods that use changes in eye focus or the depth at which a user is looking to modify a user interface. The computing system presents a three-dimensional (“3D”) environment with different user interface (“UI”) elements that are positioned in a foreground or near plane of the 3D environment and that partially or wholly obscure a background or far plane of the 3D environment. The computing system detects a change in user eye focus from the foreground to the background by using a sensor to track changes to the pupil or the amount of light reflecting off the user's eye. The computing system produces an unobstructed view to all or part of the background by adjusting positioning, opacity, or other properties of the UI elements in the foreground in response to detecting the change in the user eye focus.
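A minimal sketch of the foreground adjustment, assuming gaze depth is already tracked by the sensor; the opacity values and the depth comparison are arbitrary illustrative choices.

```python
# Sketch: fade foreground UI elements once the tracked gaze depth moves
# past the foreground plane, producing an unobstructed view of the
# background. Thresholds and opacity levels are assumptions.

def update_foreground(ui_elements, gaze_depth, foreground_depth):
    revealed = gaze_depth > foreground_depth
    for el in ui_elements:
        # lowering opacity reveals the background behind each element
        el["opacity"] = 0.1 if revealed else 1.0
    return ui_elements
```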
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
92.
Systems and methods for dynamic decimation of point clouds and data points in a three-dimensional space
An editing system may dynamically and intelligently determine which data points to remove, replace, and/or modify from a point cloud space so that more features, color information, and/or detail of the point cloud are preserved after decimation. The system may receive data points that are distributed in space, and may select one or more elements of the data points on which to base the decimation. For instance, the system may decimate a first subset of the data points by a first amount based on a first difference in values defined for the one or more elements of the first subset of data points, and may decimate a different second subset of the data points by a different second amount based on a second difference in values defined for the one or more elements of the second subset of data points.
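The variation-driven decimation could be sketched as below, assuming points are dicts and the spread of one selected element decides how aggressively a subset is thinned; the specific rates are illustrative.

```python
# Sketch: decimate a subset of data points by an amount tied to how much
# a chosen element varies within the subset, so detailed regions keep
# more points. Keep-rates are arbitrary assumptions.

def decimate_subset(points, element, threshold):
    values = [p[element] for p in points]
    spread = max(values) - min(values)
    # high variation -> remove fewer points so features survive;
    # low variation -> remove more points
    keep_every = 2 if spread > threshold else 4
    return points[::keep_every]
```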
Disclosed is a system and associated methods for the generative drawing and customization of three-dimensional (“3D”) objects in 3D space using hand gestures. The system adapts the hand gestures as intuitive controls for rapidly creating and customizing the 3D objects to have a desired artistic effect or a desired look. The system selects a 3D model of a particular object in response to a first user input, sets a position in a virtual space at which to generate the particular object in response to a mapped position of a first hand gesture tracked in a physical space, and generates a first state representation of the particular object at the position in the virtual space in response to a second hand gesture. The first state representation presents the particular object at one of different modeled stages of the particular object lifecycle.
A system prioritizes the rendering and streaming of image data based on risk maps that predict change in a three-dimensional (“3D”) environment. The system receives primitives that are distributed across a 3D space to represent the 3D environment. The system generates a first image based on primitives that fall within a first view frustum, and generates a risk map with a risk value for each particular pixel of the first image. Each risk value quantifies a probability that a pixel of the first image associated with that risk value changes as a result of changing the first view frustum to a second view frustum. The system then performs an out-of-order rendering of primitives that fall within the second view frustum based on the risk value for each first image pixel that is replaced in a second image with a rendered primitive from the second view frustum.
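One plausible way to use the risk map for out-of-order rendering is sketched here: re-render the pixels with the highest predicted probability of changing first. The 2D-list layout and the ordering step are assumptions.

```python
# Sketch: order pixel coordinates so the highest-risk pixels (most
# likely to change under the new view frustum) are rendered first.

def render_order(risk_map):
    coords = [(risk, (row, col))
              for row, line in enumerate(risk_map)
              for col, risk in enumerate(line)]
    coords.sort(key=lambda item: item[0], reverse=True)
    return [pos for _, pos in coords]
```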
Disclosed is a system and associated methods for generating a mutable tree to efficiently access data within a three-dimensional (“3D”) environment. The system generates the mutable tree with a root node defined at a root node position, a first branch with nodes for each of a first set of subdivided regions that are a first distance from the root node position, and a second branch with nodes for each of a second set of subdivided regions that are a second distance from the root node position. The system sorts the mutable tree in response to a request to access data from a first position within the 3D environment so that the first node in the first branch is the first subtree node that is closest to the first position, and the first node in the second branch is the second subtree node that is closest to the first position.
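The branch-sorting step might be sketched as follows, assuming each branch is a list of nodes carrying the center of its subdivided region; the representation is hypothetical.

```python
import math

# Sketch: reorder each branch of the mutable tree so the node closest
# to the query position comes first, making nearest-region access cheap.

def sort_tree(branches, position):
    """Put the node closest to `position` first in every branch."""
    for branch in branches:
        branch.sort(key=lambda node: math.dist(node["center"], position))
    return branches
```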
Disclosed are editing tools for manipulating a three-dimensional (“3D”) data file or point cloud. An editing application may generate a visualization of the 3D data file or point cloud, and a user may invoke an editing tool over a particular region of the visualization that is rendered based on the positional and non-positional values of a first data point set and a second data point set from the 3D data file or point cloud. The editing tool may differentiate the first data point set from the second data point set based on unique commonality in the positional and/or non-positional values of the first data point set, and may edit less than all of the particular region by adjusting one or more of the positional and/or non-positional values of the first data point set while retaining the positional and non-positional values of the second data point set.
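The differentiation step could be sketched like this, using a single scalar element (such as one color channel) as the shared commonality; the names and the tolerance test are illustrative assumptions.

```python
# Sketch: split a region's data points into an editable first set that
# shares a common element value and a second set that is left untouched.

def split_point_sets(points, element, target, tolerance):
    first_set, second_set = [], []
    for p in points:
        if abs(p[element] - target) <= tolerance:
            first_set.append(p)   # shares the common value -> editable
        else:
            second_set.append(p)  # retained unchanged
    return first_set, second_set
```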
Disclosed is a spatial computing system and associated methods that provide multi-modality interactions for precise and imprecise interactions with two-dimensional (“2D”) and three-dimensional (“3D”) user interfaces (“UI”) that are presented in a 3D interactive space. The multi-modality interactions are provided by a dynamic spatial pointer. The dynamic spatial pointer has a first 3D representation for navigating the 3D interactive space and selecting one of the presented UI elements. The dynamic spatial pointer converts from the first 3D representation to a different second 3D representation in response to attaching to one of the 3D UI elements, and converts to a first 2D representation in response to attaching to one of the 2D UI elements. The second 3D representation remains attached to and tracks the 3D form of the 3D UI element, and the first 2D representation remains attached to and tracks the 2D plane of the 2D UI element.
G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06T 19/00 - Manipulating 3D models or images for computer graphics
98.
Systems and methods for editing three-dimensional image data with realistic camera and lens effects
Disclosed is an editing system for postprocessing three-dimensional (“3D”) image data to realistically recreate the effects associated with viewing or imaging a represented scene with different camera settings or lenses. The system receives an original image and an edit command that selects a camera setting or a camera lens. The system associates the selection with multiple image adjustments. The system performs a first of the multiple image adjustments on a first set of 3D image data from the original image in response to the first set of 3D image data satisfying specific positional or non-positional values defined for the first image adjustment, and performs a second of the multiple image adjustments on a second set of 3D image data from the original image in response to the second set of 3D image data satisfying the specific positional or non-positional values defined for the second image adjustment.
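The mapping of one selection to several conditional adjustments could be sketched as below; the predicate and edit callables stand in for the positional or non-positional criteria named above and are purely illustrative.

```python
# Sketch: apply each of the image adjustments associated with one
# camera-setting selection, but only to the points of 3D image data
# that satisfy that adjustment's criteria.

def apply_adjustments(points, adjustments):
    """adjustments: list of (predicate, edit) pairs from one selection."""
    for predicate, edit in adjustments:
        for point in points:
            if predicate(point):
                edit(point)
    return points
```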
Disclosed is a system and associated methods for generating a composite image from scans or images that are aligned using invisible fiducials. The invisible fiducial is a transparent substance or a projected specific wavelength that is applied to and changes reflectivity of a surface at the specific wavelength without interfering with a capture of positions or visible color characteristics across the surface. The system performs first and second capture of a scene with the surface, and detects a position of the invisible fiducial in each capture based on values measured across the specific wavelength that satisfy a threshold associated with the invisible fiducial. The system aligns the first capture with the second capture based on the detected positions of the invisible fiducial, and generates a composite image by merging or combining the positions or visible color characteristics from the aligned captures.
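A minimal sketch of the fiducial-based alignment, restricted to a 2D translation for clarity: find the point in each capture whose response at the fiducial's wavelength crosses the threshold, then shift the second capture so the two positions coincide. The data layout is an assumption.

```python
# Sketch: detect the invisible fiducial in each capture from its
# response at a specific wavelength, then align the captures by
# translating the second onto the first.

def find_fiducial(capture, wavelength, threshold):
    for point in capture:
        if point["spectral"].get(wavelength, 0.0) >= threshold:
            return point["pos"]
    return None

def align_captures(first, second, wavelength, threshold):
    ax, ay = find_fiducial(first, wavelength, threshold)
    bx, by = find_fiducial(second, wavelength, threshold)
    dx, dy = ax - bx, ay - by
    # translate the second capture so its fiducial overlaps the first's
    for point in second:
        x, y = point["pos"]
        point["pos"] = (x + dx, y + dy)
    return first + second  # merged composite
```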
G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
G06T 17/00 - Three-dimensional [3D] modelling for computer graphics
G06T 7/50 - Depth or shape recovery
G06T 7/90 - Determination of colour characteristics
G06V 10/50 - Extraction of image or video features by performing operations within image blocks; Extraction of image or video features by using histograms, e.g. histogram of oriented gradients [HoG]; Extraction of image or video features by summing image-intensity values; Projection analysis
G06V 10/24 - Aligning, centring, orientation detection or correction of the image
100.
Systems and methods for splat filling a three-dimensional image using semi-measured data
Disclosed are systems and methods for splat filling a three-dimensional (“3D”) model using semi-measured data. The splat filling includes generating measured data points for a 3D representation of a scene with positions that are measured from scanning the scene, and with color values defined from measured color values of used pixels from a two-dimensional (“2D”) image of the scene. The splat filling includes generating a semi-measured data point based on an unused pixel of the 2D image. The position of the semi-measured data point is derived based on a separation between the unused pixel and one or more used pixels of the 2D image, and based on the positions of measured data points that are defined with the measured color values of the one or more used pixels. The color values of the semi-measured data point are defined directly from the measured color values associated with the unused pixel.
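The derivation of a semi-measured data point could be sketched as follows: its position is interpolated from measured neighbors, weighted by how close their pixels are to the unused pixel, while its color comes directly from the unused pixel. The weighting scheme and data structures are assumptions.

```python
# Sketch: build a semi-measured data point from an unused pixel. The
# position is a weighted average of measured neighbor positions; the
# color is taken directly from the unused pixel's measured color.

def fill_semi_measured(unused_pixel_color, neighbors):
    """neighbors: list of (weight, (x, y, z)) measured data points,
    where each weight reflects pixel proximity to the unused pixel."""
    total = sum(w for w, _ in neighbors)
    pos = tuple(sum(w * p[axis] for w, p in neighbors) / total
                for axis in range(3))
    return {"pos": pos, "color": unused_pixel_color}
```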