09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
2D/3D vision sensor system comprised of a light source and
an optical sensor for inline assembly verification, rapid
recognition of defects, quick identification of object types
and recognition of objects position in 3D space; vision
sensor system comprised of a light source and an optical
sensor for capturing 3D data; coordinate measuring machines;
portable articulated measuring devices for measuring
physical properties of objects; portable laser measurement
scanners for measuring physical properties of objects; laser
measurement scanner for measuring physical properties of
objects; downloadable computer software used to display 3D
data and measurements from the 3D data for use with
coordinate measuring machines; downloadable computer
software used to display, modify, or provide measurements from
3D data for use in measurement equipment for computer-aided
manufacturing equipment, and downloadable user manuals for
the aforementioned software, all sold as a unit;
downloadable computer software used to display, modify,
register or provide measurements based on 3D data for use
with laser measurement scanners, and downloadable user
manuals, all sold as a unit; portable laser measuring
scanner for measuring physical properties of objects and
laser measurement scanners for measuring physical properties
of objects; handheld 3D scanner capable of providing
midrange measurement volume for use in a variety of
industrial and scientific applications, including forensics
and architecture engineering and constructions services. Providing temporary use of online non-downloadable computer
software for displaying, modifying, registering, and
measuring 3D data for use with laser measurement scanners,
along with non-downloadable user manuals provided as a unit.
2.
COORDINATE MEASUREMENT DEVICE WITH AN INDIRECT TIME OF FLIGHT SENSOR
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01S 7/4915 - Time delay measurement, e.g. operational details for pixel components; Phase measurement
G01S 17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
G01S 17/36 - Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated with phase comparison between the received signal and the contemporaneously transmitted signal
G01S 17/42 - Simultaneous measurement of distance and other coordinates
G01S 17/48 - Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
3.
COORDINATE MEASUREMENT DEVICE WITH AN INDIRECT TIME OF FLIGHT SENSOR
A method includes: projecting, while a device is at a first pose, a pattern at a first angular position; acquiring, using an iToF device, a first frame of the pattern; projecting, while the device is at the first pose, the pattern at a second angular position; acquiring, using the iToF device, a second frame of the pattern; and computing first measurements to points in the environment. The method further includes projecting, while the device is at a second pose, the pattern at a third angular position; acquiring, using the iToF device, a third frame of the pattern; projecting, while the device is at the second pose, the pattern at a fourth angular position; acquiring, using the iToF device, a fourth frame of the pattern; and computing second measurements to points in the environment based at least in part on at least one of the third frame and the fourth frame.
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01S 17/36 - Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated with phase comparison between the received signal and the contemporaneously transmitted signal
G01S 17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
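As a rough illustration of the continuous-wave ranging principle behind the G01S 17/36 classification above, the sketch below converts a phase shift measured by an iToF pixel at the modulation frequency into a distance. The four-tap readout and the sign convention are common textbook conventions, not details taken from the abstract.

```python
import numpy as np

C_AIR = 299_702_547.0  # approximate speed of light in air, m/s

def phase_from_four_taps(q0, q90, q180, q270):
    """Phase shift from the classic four-tap (0/90/180/270 deg) correlation
    samples of an iToF pixel; one common sign convention."""
    return np.mod(np.arctan2(q270 - q90, q0 - q180), 2.0 * np.pi)

def itof_distance(phase_shift_rad, mod_freq_hz):
    """Continuous-wave ranging: d = c * dphi / (4 * pi * f_mod)."""
    return C_AIR * phase_shift_rad / (4.0 * np.pi * mod_freq_hz)

# Example: a target at 2.0 m measured with a 20 MHz modulation frequency.
f_mod = 20e6
phi = 4.0 * np.pi * f_mod * 2.0 / C_AIR
print(itof_distance(phi, f_mod))  # ~2.0; ambiguity range is c / (2 * f_mod), about 7.5 m
```

Projecting the pattern at more than one angular position, as in the abstract, changes which scene points are illuminated; the per-pixel phase-to-distance conversion itself is unchanged.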
4.
MEASURING THE POSITION OF A MIRROR SURFACE IN POINT CLOUDS CAPTURED BY A SCANNER
Embodiments for measuring or determining the position and shape of a mirror surface in 3D point clouds are provided, given knowledge of the position from which the points of the point cloud were captured. Starting from an initial rough estimate of a modelled mirror surface, the point cloud is divided into two parts: the presumable virtual points that lie behind the modelled mirror with respect to the scanner position, and the remainder of the points in the point cloud. The presumable virtual points are mirrored back across the modelled mirror surface and then registered against the rest of the point cloud. The registered points, cleaned of outliers, together with the associated virtual points define the actual position and shape of the mirror surface.
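The core geometric step described above, mirroring the presumed virtual points back across the modelled mirror surface before registering them against the rest of the cloud, reduces to a plane reflection when the modelled mirror is treated as a plane. A minimal NumPy sketch under that assumption; the registration step (e.g. ICP) is not shown.

```python
import numpy as np

def split_by_mirror(points, scanner_pos, plane_point, plane_normal):
    """Split a cloud into presumed virtual points (behind the modelled mirror as
    seen from the scanner) and the remaining points."""
    n = plane_normal / np.linalg.norm(plane_normal)
    if np.dot(scanner_pos - plane_point, n) < 0:
        n = -n                                   # make the normal face the scanner
    behind = (points - plane_point) @ n < 0.0
    return points[behind], points[~behind], n

def reflect_across_plane(points, plane_point, n):
    """Mirror points back across the plane (Householder reflection)."""
    d = (points - plane_point) @ n               # signed distances to the plane
    return points - 2.0 * d[:, None] * n
```

The reflected virtual points would then be registered against the remaining points, with outliers discarded, to refine the estimate of the mirror's position and shape.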
Examples described herein provide a method that includes receiving a video of an environment. The method further includes extracting keyframes from the video using a machine learning model to generate extracted keyframes. The method further includes performing blur detection on the extracted keyframes to remove invalid keyframes from the extracted keyframes to generate candidate keyframes. The method further includes performing image enhancement on at least one of the invalid keyframes to generate at least one enhanced keyframe, the at least one enhanced keyframe being added to the candidate keyframes. The method further includes generating a desired output based at least in part on the candidate keyframes.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/40 - Scenes; Scene-specific elements in video content
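One widely used blur test for the keyframe filtering step described in the abstract above is the variance of the Laplacian; frames whose response variance falls below a threshold are treated as invalid keyframes. The sketch below uses plain NumPy, and both the metric and the threshold are illustrative choices, not taken from the filing.

```python
import numpy as np

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=np.float64)

def variance_of_laplacian(gray):
    """Blur score for a grayscale image (2D float array); lower means blurrier."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())

def split_keyframes(frames, threshold=100.0):
    """Partition extracted keyframes into candidate (sharp) and invalid (blurred)."""
    candidates, invalid = [], []
    for f in frames:
        (candidates if variance_of_laplacian(f) >= threshold else invalid).append(f)
    return candidates, invalid
```

Invalid frames can then be handed to an image-enhancement step, as in the abstract, and promoted to candidates if they recover enough detail.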
6.
TRAJECTORY ESTIMATION AND ALIGNMENT USING OMNIDIRECTIONAL IMAGES / VIDEOS
A system includes a camera to capture image data of an environment and to generate a series of trajectories. Each trajectory is generated based at least in part on a reliability threshold. The system further includes a processing system communicatively coupled to the camera, the processing system performing operations for aligning trajectories to a known layout. The operations include receiving, from the camera, the image data and the series of trajectories. The operations further include generating point clouds for each of the series of trajectories using the image data. The operations further include generating a layout for the environment based at least in part on the point clouds. The operations further include mapping the layout to the known layout and computing, during the mapping, mapping parameters. The operations further include, for each of the plurality of trajectories, aligning the trajectory to the known layout using the mapping parameters.
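The "mapping parameters" computed while mapping the generated layout onto the known layout are, in the simplest 2D case, a rotation, a translation, and a uniform scale. Below is a hedged sketch of that estimation (the Umeyama/Kabsch least-squares fit) from corresponding 2D points; finding the correspondences and applying the reliability threshold are outside the sketch.

```python
import numpy as np

def fit_similarity_2d(src, dst):
    """Least-squares rotation R, scale s and translation t mapping 2D source
    points onto corresponding destination points (Umeyama's method)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    a, b = src - mu_s, dst - mu_d
    cov = b.T @ a / len(src)
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])   # avoid mirror solutions
    R = U @ D @ Vt
    s = np.trace(D @ np.diag(S)) / a.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return R, s, t

def align_trajectory(points, R, s, t):
    """Apply the mapping parameters to align one trajectory to the known layout."""
    return s * points @ R.T + t
```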
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
2D/3D vision sensor system comprised of a light source and an optical sensor for inline assembly verification, rapid recognition of defects, quick identification of object types and recognition of objects position in 3D space; vision sensor system comprised of a light source and an optical sensor for capturing 3D data; Coordinate measuring machines; portable articulated measurement devices for measuring physical properties of objects; portable laser measurement scanners for measuring physical properties of objects; laser measurement scanners for measuring physical properties of objects; downloadable computer software used to display 3D data and measurements from the 3D data for use with coordinate measuring machines; downloadable computer software used to display, modify, or provide measurements from 3D data for use in measurement equipment for computer-aided manufacturing equipment, and user manuals for the aforementioned software, all sold as a unit; downloadable computer software used to display, modify, register, or provide measurements based on 3D data for use with laser measurement scanners, and user manuals, all sold as a unit; Portable laser measurement scanner for measuring physical properties of objects and laser measurement scanners for measuring physical properties of objects; handheld 3D scanner capable of providing midrange measurement volume for use in a variety of industrial and scientific applications including forensics and architecture engineering and constructions services. Providing temporary use of online non-downloadable computer software used to display, modify, register, or provide measurements based on 3D data for use with laser measurement scanners, and user manuals, all sold as a unit.
Examples described herein provide a computer-implemented method that includes receiving a point cloud representative of a real-world environment. The method further includes simulating a projection of a laser projector into a virtual environment based at least in part on the point cloud, the virtual environment representative of the real-world environment. The method further includes evaluating the projection to determine whether at least one projector preference is satisfied. The method further includes, responsive to determining that the at least one projector preference is not satisfied, adjusting at least one of a position of the laser projector, an orientation of the laser projector, or a property of the laser projector.
Examples described herein provide a method for generating a three-dimensional (3D) model of an object of interest using panoramic images of an environment. The method includes detecting, using a trained machine learning model, the object of interest in a panoramic image of the environment. The method further includes determining 3D coordinates for the object of interest. The method further includes combining the 3D coordinates for the object of interest with an existing 3D model of the object of interest to create a revised 3D model of the object of interest.
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06V 10/50 - Extraction of image or video features by performing operations within image blocks; Extraction of image or video features by using histograms, e.g. histogram of oriented gradients [HoG]; Extraction of image or video features by summing image-intensity values; Projection analysis
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
Examples described herein provide a method for performing a predictive collision analysis. The method includes initiating the predictive collision analysis to be performed on a processing system, the predictive collision analysis being performed using a plurality of prediction properties. The method further includes setting a first prediction property of the plurality of prediction properties using a first signal received from an electronic steering wheel communicatively coupled to the processing system. The method further includes performing, by the processing system, the predictive collision analysis using the first prediction property.
Examples described herein provide a computer-implemented method for sharpening an image acquired during movement of a three-dimensional (3D) coordinate measurement device. The method includes receiving the image from the 3D coordinate measurement device, wherein the image was acquired while the 3D coordinate measurement device was moving. The method further includes sharpening the image to generate a sharpened image based at least in part on at least one of movement information about the movement of the 3D coordinate measurement device or depth information. The method further includes performing, using the sharpened image, a scanning operation.
G01S 17/42 - Simultaneous measurement of distance and other coordinates
G01S 17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
G06T 3/4053 - Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
12.
SYSTEMS AND METHODS FOR VISUALIZING FLOOR DATA IN MIXED REALITY ENVIRONMENT
Described herein are systems and methods for point cloud alignment with a floor plan or a video stream of an environment. The systems and methods comprise overlaying a graphical representation of a point cloud onto an image of the floor plan and the video stream. The systems and methods further comprise aligning the graphical representation of the point cloud and the image of the floor plan with the video stream using a point alignment. The systems and methods further comprise displaying an update of the graphical representation based at least in part on a further point alignment or a movement alignment.
A system for measuring 3D coordinates of surfaces in the environment is provided. The system includes a body configured to rotate about an axis. A light source is configured to emit a pattern of light. A two-dimensional array of pixels is coupled to the body and configured to receive a reflection of the pattern of light. A controller is electrically coupled to the light source and the two-dimensional array of pixels, the controller configured to determine a distance to at least one surface in the environment based at least in part on a reflection of the pattern of light from a surface in the environment and a speed of light in air.
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01S 7/4863 - Detector arrays, e.g. charge-transfer gates
G01S 7/4865 - Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
G01S 7/4914 - Detector arrays, e.g. charge-transfer gates
G01S 7/4915 - Time delay measurement, e.g. operational details for pixel components; Phase measurement
G01S 17/36 - Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated with phase comparison between the received signal and the contemporaneously transmitted signal
G01S 17/42 - Simultaneous measurement of distance and other coordinates
G01S 17/87 - Combinations of systems using electromagnetic waves other than radio waves
A handheld three-dimensional (3D) measuring system operates in a target mode and a geometry mode. In the target mode, a target-mode projector projects a first line of light onto an object, and a first illuminator sends light to markers on or near the object. A first camera captures an image of the first line of light and the illuminated markers. In the geometry mode, a geometry-mode projector projects onto the object a first multiplicity of lines, which are captured by the first camera and a second camera. One or more processors determines 3D coordinates in the target mode and the geometry mode.
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G01C 11/02 - Picture-taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
H04N 23/55 - Optical parts specially adapted for electronic image sensors; Mounting thereof
H04N 23/56 - Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
H04N 23/80 - Camera processing pipelines; Components thereof
15.
IMAGE-BASED LOCALIZATION AND TRACKING USING THREE-DIMENSIONAL DATA
An example method collects first data comprising first surface points within an environment by a sensor associated with a processing system. The method further determines an estimated position of the processing system by analyzing the first data using a simultaneous localization and mapping algorithm and identifies a first set of surface features from the first data. The method further collects second data comprising second surface points within the environment by a three-dimensional (3D) coordinate measuring device associated with the processing system and identifies a second set of surface features from the second data. The method further matches the first set of surface features to the second set of surface features and refines the estimated position of the processing system to generate a refined position of the processing system. The method further displays an augmented reality representation of the second data based at least in part on the refined position.
42 - Scientific, technological and industrial services, research and design
Goods & Services
Providing temporary use of online non-downloadable software
that connects to a cloud-based service for comparison and
analysis of scans and measures design data for use with
shared web services, use workflows, and featuring central
online access point for architecture engineering and
construction applications, namely, as-build documentation
and modelling, quality control, scan processing and
registration, 3d metrology for use in the inspection,
assembly, meshing, statistical process control, and public
safety analytics featuring 2d diagramming, 3d diagramming,
animation, and blood spatter analysis; providing temporary
use of online non-downloadable software for use with
cloud-based applications for providing access to remote
diagnostics on software and hardware, predictive
maintenance, and training, storage, integration and data
connection to third party apps, mobile device support and
browser based access and real-time access to highly accurate
3d reality data in real time; providing temporary use of
online non-downloadable cloud-based hosting platform
featuring cloud based applications for use with shared web
services for comparison and analysis of scans and measures
design data, for use with shared web services, use
workflows, and featuring central online access point for
architecture engineering and construction applications,
namely, as-build documentation and modelling, quality
control, scan processing and registration, 3d metrology for
use in the inspection, assembly, meshing, statistical
process control, and public safety analytics featuring 2d
diagramming, 3d diagramming, animation, and blood spatter
analysis; providing temporary use of online non-downloadable
software for providing access to remote diagnostics on
software and hardware, predictive maintenance, and training,
storage, integration and data connection to third party
apps, mobile device support and browser based access and
real-time access to highly accurate 3d reality data in real
time; software as a service (SAAS) services featuring
software for cloud-based applications for use with shared
web services for comparison and analysis of scans and
measures design data retrieved from a device compared to
measurements in a database, for use with shared web
services, use workflows, and featuring central online access
point for architecture engineering and construction
applications, namely, as-build documentation and modelling,
quality control, scan processing and registration, 3d
metrology for use in the inspection, assembly, meshing,
statistical process control, and public safety analytics
featuring 2d diagramming, 3d diagramming, and animation,
blood spatter analysis; software as a service (SAAS)
services featuring software for providing access to remote
diagnostics on software and hardware, predictive
maintenance, and training, storage, integration and data
connection to third party apps, mobile device support and
browser based access and real-time access to highly accurate
3d reality data in real time.
17.
SYSTEM AND METHOD OF COMPRESSING SERIAL POINT CLOUD DATA
Examples described herein provide a method that is performed by a processing system. The method includes receiving a first point cloud comprising a first set of points associated with an environment. The first point cloud is organized into a plurality of segments. A second point cloud is received comprising a second set of points associated with the environment. The second point cloud is aligned with at least one of the plurality of segments. A change is identified between at least a portion of the first set of points within the at least one plurality of segments and a corresponding second set of points in the second point cloud. A third point cloud is generated based on the first point cloud, the second point cloud and the change.
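A rough sketch of the segment-wise change detection the abstract describes: the first cloud is organised into grid segments, the aligned second cloud is compared against it, and a third cloud keeps unchanged segments from the first cloud while taking changed segments from the second. Grid cells stand in for whatever segmentation the patent actually uses, and the distance tolerance is an arbitrary choice.

```python
import numpy as np
from scipy.spatial import cKDTree

def segment_ids(points, cell=0.5):
    """Assign each point to an axis-aligned grid segment of size `cell` metres."""
    return [tuple(v) for v in np.floor(points / cell).astype(np.int64)]

def changed_segments(first, second_aligned, cell=0.5, tol=0.05):
    """Segments of `first` containing points with no close counterpart in the
    aligned second cloud are flagged as changed."""
    dist, _ = cKDTree(second_aligned).query(first, k=1)
    return {seg for seg, moved in zip(segment_ids(first, cell), dist > tol) if moved}

def third_cloud(first, second_aligned, cell=0.5, tol=0.05):
    """Keep unchanged segments from the first cloud, replace changed ones with
    the corresponding points from the second cloud."""
    changed = changed_segments(first, second_aligned, cell, tol)
    keep = np.array([s not in changed for s in segment_ids(first, cell)])
    take = np.array([s in changed for s in segment_ids(second_aligned, cell)])
    return np.vstack([first[keep], second_aligned[take]])
```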
42 - Scientific, technological and industrial services, research and design
Goods & Services
Providing temporary use of online non-downloadable software
that connects to a cloud-based service for comparison and
analysis of scans and measures design data for use with
shared web services, use workflows, and featuring central
online access point for architecture engineering and
construction applications, namely, as-build documentation
and modelling, quality control, scan processing and
registration, 3d metrology for use in the inspection,
assembly, meshing, statistical process control, and public
safety analytics featuring 2d diagramming, 3d diagramming,
animation, and blood spatter analysis; providing temporary
use of online non-downloadable software for use with
cloud-based applications for providing access to remote
diagnostics on software and hardware, predictive
maintenance, and training, storage, integration and data
connection to third party apps, mobile device support and
browser based access and real-time access to highly accurate
3d reality data in real time; providing temporary use of
online non-downloadable cloud-based hosting platform
featuring cloud-based applications for use with shared web
services for comparison and analysis of scans and measures
design data, for use with shared web services, use
workflows, and featuring central online access point for
architecture engineering and construction applications,
namely, as-build documentation and modelling, quality
control, scan processing and registration, 3d metrology for
use in the inspection, assembly, meshing, statistical
process control, and public safety analytics featuring 2d
diagramming, 3d diagramming, animation, and blood spatter
analysis; providing temporary use of online non-downloadable
software for providing access to remote diagnostics on
software and hardware, predictive maintenance, and training,
storage, integration and data connection to third party
apps, mobile device support and browser based access and
real-time access to highly accurate 3d reality data in real
time; software as a service (SAAS) services featuring
software for cloud-based applications for use with shared
web services for comparison and analysis of scans and
measures of design data retrieved from a device compared to
measurements in a database, for use with shared web
services, use workflows, and featuring central online access
point for architecture engineering and construction
applications, namely, as-build documentation and modelling,
quality control, scan processing and registration, 3d
metrology for use in the inspection, assembly, meshing,
statistical process control, and public safety analytics
featuring 2d diagramming, 3d diagramming, and animation,
blood spatter analysis; software as a service (SAAS)
services featuring software for providing access to remote
diagnostics on software and hardware, predictive
maintenance, and training, storage, integration and data
connection to third party apps, mobile device support and
browser based access and real-time access to highly accurate
3d reality data in real time.
19.
REAL-TIME FEEDBACK DURING VIDEO CAPTURE FOR VIDEOGRAMMETRY
A method for real-time feedback during video capture for videogrammetry is provided. The method includes capturing a video stream of an environment using a camera of a processing system. During the capturing, activity data about the processing system is collected using a sensor. The activity data is analyzed to determine whether the video stream satisfies a threshold capture quality, and an indicator is generated as feedback to a user based on the analyzing. A setting of the camera is modified based at least in part on the indicator. Videogrammetry is performed to generate a point cloud using images extracted from the video stream, including at least one image captured after the modifying.
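One plausible reading of the activity-data analysis is a simple rotation-rate check on gyroscope samples: fast rotation tends to produce blurred, hard-to-match frames for videogrammetry. The sensor choice, percentile, and threshold below are assumptions for illustration only.

```python
import numpy as np

def capture_quality_ok(gyro_rad_s, max_rate=0.6):
    """True when recent gyroscope samples (N x 3, rad/s) stay mostly below a
    rotation-rate threshold."""
    rates = np.linalg.norm(gyro_rad_s, axis=1)
    return float(np.percentile(rates, 95)) <= max_rate

def feedback_indicator(gyro_rad_s):
    """User-facing indicator derived from the activity analysis."""
    return "OK" if capture_quality_ok(gyro_rad_s) else "Move the camera more slowly"
```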
A method includes capturing images of an object in a three dimensional (3D) space and converting a first image into a first depth map having first pixels and converting a second image into a second depth map having second pixels, at least one of the second pixels overlapping at least one of the first pixels. The method further includes identifying from the first depth map a first pixel that overlaps a second pixel from the second depth map, the second pixel representing a correct position of the first pixel and the second pixel in the 3D space. The method further includes determining a correction vector for a position of the first pixel, determining adjusted positions of the first pixels using the correction vector, determining an adjusted first depth map with the adjusted positions of first pixels, and merging the second depth map with the adjusted first depth map.
Examples described herein provide a method performed by a processing device. The method includes receiving three-dimensional (3D) data from a 3D coordinate measurement device. The method further includes structuring the 3D data to generate structured data based on a context between points of the 3D data. The method further includes performing a first compression on the structured data to generate first compressed structured data. The method further includes performing a second compression on the first compressed structured data to generate second compressed structured data. The method further includes, responsive to a request, providing the 3D data as uncompressed data by decompressing the second compressed structured data to generate the first compressed structured data, and decompressing the first compressed structured data to generate the 3D data.
H04N 19/12 - Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 19/91 - Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
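A toy round trip of the structure-then-compress-twice idea described in the abstract above: points are structured by sorting and delta-encoding neighbours (exploiting the context between nearby points), packed into fixed-width integers, and the resulting byte stream is deflated. Millimetre quantisation, the sort order, and zlib are stand-ins for whatever the filing actually uses.

```python
import numpy as np
import zlib

MM = 1000.0  # quantise coordinates to millimetres

def compress_points(points):
    """Structure (sort + delta-encode), then compress twice (int32 packing, zlib)."""
    q = np.round(points * MM).astype(np.int64)
    q = q[np.lexsort((q[:, 2], q[:, 1], q[:, 0]))]        # neighbours end up adjacent
    deltas = np.diff(q, axis=0, prepend=np.zeros((1, 3), dtype=np.int64))
    return zlib.compress(deltas.astype(np.int32).tobytes(), level=9)

def decompress_points(blob):
    """Reverse both stages: inflate, undo the delta encoding, de-quantise."""
    deltas = np.frombuffer(zlib.decompress(blob), dtype=np.int32).reshape(-1, 3)
    return np.cumsum(deltas.astype(np.int64), axis=0) / MM
```

The round trip returns the points in sorted order at millimetre resolution; a real implementation would carry the ordering and quantisation parameters as metadata.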
22.
GENERATING A PARALLAX FREE TWO AND A HALF (2.5) DIMENSIONAL POINT CLOUD USING A HIGH RESOLUTION IMAGE
A computer-implemented method and a system are provided. The method includes retrieving a three-dimensional (3D) point cloud of an environment and a two-dimensional (2D) image of the environment, the 3D point cloud comprising 3D points, the 2D image comprising pixels having resolution information. The method includes transforming the 3D point cloud to a coordinate system of the 2D image. Further, the method includes generating a two and a half dimension (2.5D) point cloud by creating 2.5D points, wherein the generating comprises providing resolution information to 3D points in a field-of-view captured by the 2D image thereby creating the 2.5D points, the generating further comprising creating at least one of the 2.5D points between a given at least three nearest 3D points of the 3D points.
G06T 3/4053 - Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
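The first step of the 2.5D method above, transforming the 3D cloud into the coordinate system of the 2D image, is an ordinary rigid transform followed by a pinhole projection; the projected pixel coordinates are what allow the image's resolution information to be attached to the 3D points and interpolated between nearest neighbours. A sketch under an assumed calibrated pinhole model:

```python
import numpy as np

def project_to_image(points_3d, R, t, fx, fy, cx, cy):
    """Transform world points into the camera frame of the 2D image and project
    them with a pinhole model; returns pixel coordinates, depth, and a mask of
    points in front of the camera."""
    cam = points_3d @ R.T + t
    z = cam[:, 2]
    valid = z > 1e-6
    u = fx * cam[:, 0] / z + cx
    v = fy * cam[:, 1] / z + cy
    return np.stack([u, v], axis=1), z, valid
```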
A method (1800) includes steering (1802), with a projection mechanism of a three-dimensional (3D) projector, a beam of light to a projection angle relative to a gimbal mechanism of the 3D projector; steering, with the gimbal mechanism of the 3D projector, the 3D projector to a gimbal angle relative to a stand; determining (1804) a six degree-of-freedom (six-DOF) pose of the 3D projector based at least in part on steering at least one of the projecting mechanism or the gimbal mechanism to place the beam of light on each of a plurality of anchor targets in each of a first instance and a second instance, each anchor target having an anchor point with 3D coordinates known in a computer-aided design (CAD) model, each projection angle and gimbal angle to each of the plurality of anchor targets being different in the first instance and the second instance; and storing (1806) the determined six-DOF pose of the 3D projector.
H04N 9/31 - Projection devices for colour picture display
B23Q 17/24 - Arrangements for indicating or measuring on machine tools using optics
B25H 1/00 - Work benches; Portable stands or supports for positioning portable tools or work to be operated on thereby
F16M 11/12 - Means for attachment of apparatus; Means allowing adjustment of the apparatus relatively to the stand allowing pivoting in more than one direction
G01C 15/00 - Surveying instruments or accessories not provided for in groups
G06T 19/00 - Manipulating 3D models or images for computer graphics
24.
GENERATING A PARALLAX FREE TWO AND A HALF (2.5) DIMENSIONAL POINT CLOUD USING A HIGH RESOLUTION IMAGE
A computer-implemented method, a system, and computer program product are provided. The method includes retrieving a three-dimensional (3D) point cloud of an environment and a two-dimensional (2D) image of the environment, the 3D point cloud comprising 3D points, the 2D image comprising pixels having resolution information. The method includes transforming the 3D point cloud to a coordinate system of the 2D image. Further, the method includes generating a two and a half dimension (2.5D) point cloud by creating 2.5D points, wherein the generating comprises providing resolution information to 3D points in a field-of-view captured by the 2D image thereby creating the 2.5D points, the generating further comprising creating one or more of the 2.5D points between a given at least three nearest 3D points of the 3D points.
A method includes attaching a 3D projector to a stand, the 3D projector having a projecting mechanism and a gimbal mechanism; determining a six degree-of-freedom pose of the 3D projector based at least in part on steering at least one of the projecting mechanism and the gimbal mechanism to place the beam of light on each of a plurality of anchor targets in each of a first instance and a second instance, each anchor target having an anchor point with 3D coordinates known in a CAD model, each projection angle and gimbal angle to each of the plurality of anchor targets being different in the first instance and the second instance.
Examples described herein provide a method for point cloud alignment. The method includes receiving a first set of three-dimensional (3D) points of an environment, the first set of 3D points being captured by a 3D coordinate measurement device. The method further includes capturing a second set of 3D points of the environment using a sensor of a processing system. The method further includes aligning the first set of 3D points of the environment with the second set of 3D points of the environment to create a point cloud of the environment. The method further includes generating, on a display of the processing system, a graphical representation of the point cloud of the environment. The graphical representation displays at least a portion of the first set of 3D points of the environment as an augmented reality element.
A computer-implemented method includes establishing a connection between a point cloud browser and a transfer agent. The method further includes establishing a connection between the transfer agent and a target application. The method further includes in response to selection of one or more points in the point cloud browser, generating one or more objects in the target application.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/04842 - Selection of displayed objects or displayed text elements
Examples described herein provide a method for point cloud alignment. The method includes receiving a first set of three-dimensional (3D) points of an environment. The method further includes capturing a second set of 3D points of the environment using a sensor of a processing system. The method further includes aligning the first set of 3D points of the environment with the second set of 3D points of the environment to create a point cloud of the environment. The method further includes generating, on a display of the processing system, a graphical representation of the point cloud of the environment. The graphical representation displays at least a portion of the first set of 3D points of the environment as an augmented reality element.
A method of operating a coordinate measurement device includes selecting an operating mode on the coordinate measurement device. A first light is emitted from at least one light source of the coordinate measurement device. At least two angles associated with the emitting of the first light are measured. A second light is received with an optical detector of the coordinate measurement device. The second light is a reflection of the first light off of at least one of the retroreflector and the surface. A first distance is determined based at least in part on a mode of the coordinate measurement device that is selected, the emitting of the first light, and the receiving of the second light. Three-dimensional coordinates of a point in the environment are determined based on the measuring of the at least two angles and at least one of the first distance and the second distance.
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
2D/3D vision sensor system comprised of a light source and an optical sensor for inline assembly verification, rapid recognition of defects, quick identification of object types and recognition of objects position in 3D space; vision sensor system comprised of a light source and an optical sensor for capturing 3D data; Coordinate measuring machines; portable articulated measurement devices for measuring physical properties of objects; portable laser measurement scanners for measuring physical properties of objects; laser measurement scanners for measuring physical properties of objects; downloadable computer software used to display 3D data and measurements from the 3D data for use with coordinate measuring machines; downloadable computer software used to display, modify, or provide measurements from 3D data for use in measurement equipment for computer-aided manufacturing equipment, and user manuals for the aforementioned software, all sold as a unit; downloadable computer software used to display, modify, register, or provide measurements based on 3D data for use with laser measurement scanners, and user manuals, all sold as a unit; Portable laser measurement scanner for measuring physical properties of objects and laser measurement scanners for measuring physical properties of objects; handheld 3D scanner capable of providing midrange measurement volume for use in a variety of industrial and scientific applications including forensics and architecture engineering and constructions services Providing temporary use of online non-downloadable computer software used to display, modify, register, or provide measurements based on 3D data for use with laser measurement scanners, and user manuals, all sold as a unit
31.
REALITY CAPTURE USING CLOUD BASED COMPUTER NETWORKS
Reality capture using cloud based computer networks is provided. Techniques include receiving user input of an object to capture, the user input including a location, an accuracy category, and a size category of the object, and generating at least one option to capture the object, in response to user input. Techniques include responsive to a user selecting the at least one option to capture the object, configuring a plurality of drones with a first setting for capturing at least a first portion of the object, and configuring a scanner with a second setting for capturing at least a second portion of the object. Techniques include causing the plurality of drones to capture the first portion of the object, in response to the drones being initiated at the location and causing the scanner to capture the second portion of the object, in response to the scanner being initiated at the location.
A system and method for detecting construction site defects and hazards using artificial intelligence (AI) is provided. The system includes a movable base unit, a coordinate measurement scanner, a vision based sensor, and one or more processors. The one or more processors perform operations that include generating a two-dimensional (2D) map of the environment based at least in part on output from the coordinate measurement scanner, applying image recognition to the video stream data to identify and label a defect or hazard in the video data stream, correlating a location of the defect or hazard in the video stream data with the location in the 2D map, and recording the location of the defect or hazard in the 2D map.
42 - Scientific, technological and industrial services, research and design
Goods & Services
(1) Providing temporary use of online non-downloadable software that connects to a cloud-based service for comparison and analysis of scans and measures design data for use with shared web services, use workflows, and featuring central online access point for architecture engineering and construction applications, namely, as-build documentation and modelling, quality control, scan processing and registration, 3d metrology for use in the inspection, assembly, meshing, statistical process control, and public safety analytics featuring 2d diagramming, 3d diagramming, animation, and blood spatter analysis; providing temporary use of online non-downloadable software for use with cloud-based applications for providing access to remote diagnostics on software and hardware, predictive maintenance, and training, storage, integration and data connection to third party apps, mobile device support and browser based access and real-time access to highly accurate 3d reality data in real time; providing temporary use of online non-downloadable cloud-based hosting platform featuring cloud-based applications for use with shared web services for comparison and analysis of scans and measures design data, for use with shared web services, use workflows, and featuring central online access point for architecture engineering and construction applications, namely, as-build documentation and modelling, quality control, scan processing and registration, 3d metrology for use in the inspection, assembly, meshing, statistical process control, and public safety analytics featuring 2d diagramming, 3d diagramming, animation, and blood spatter analysis; providing temporary use of online non-downloadable software for providing access to remote diagnostics on software and hardware, predictive maintenance, and training, storage, integration and data connection to third party apps, mobile device support and browser based access and real-time access to highly accurate 3d reality data in real time; software as a service (SAAS) services featuring software for cloud-based applications for use with shared web services for comparison and analysis of scans and measures of design data retrieved from a device compared to measurements in a database, for use with shared web services, use workflows, and featuring central online access point for architecture engineering and construction applications, namely, as-build documentation and modelling, quality control, scan processing and registration, 3d metrology for use in the inspection, assembly, meshing, statistical process control, and public safety analytics featuring 2d diagramming, 3d diagramming, and animation, blood spatter analysis; software as a service (SAAS) services featuring software for providing access to remote diagnostics on software and hardware, predictive maintenance, and training, storage, integration and data connection to third party apps, mobile device support and browser based access and real-time access to highly accurate 3d reality data in real time.
34.
GAP FILLING FOR THREE-DIMENSIONAL DATA VISUALIZATION
Examples described herein provide a method that includes receiving three- dimensional (3D) data associated with an environment. The method further includes generating a graphical representation based at least in part on at least one of the 3D data. The method further includes filling in a gap in the graphical representation using downsampled frame buffer objects.
G06T 15/00 - 3D [Three Dimensional] image rendering
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06T 7/174 - Segmentation; Edge detection involving the use of two or more images
42 - Scientific, technological and industrial services, research and design
Goods & Services
(1) Providing temporary use of online non-downloadable software that connects to a cloud-based service for comparison and analysis of scans and measures design data for use with shared web services, use workflows, and featuring central online access point for architecture engineering and construction applications, namely, as-build documentation and modelling, quality control, scan processing and registration, 3d metrology for use in the inspection, assembly, meshing, statistical process control, and public safety analytics featuring 2d diagramming, 3d diagramming, animation, and blood spatter analysis; providing temporary use of online non-downloadable software for use with cloud-based applications for providing access to remote diagnostics on software and hardware, predictive maintenance, and training, storage, integration and data connection to third party apps, mobile device support and browser based access and real-time access to highly accurate 3d reality data in real time; providing temporary use of online non-downloadable cloud-based hosting platform featuring cloud based applications for use with shared web services for comparison and analysis of scans and measures design data, for use with shared web services, use workflows, and featuring central online access point for architecture engineering and construction applications, namely, as-build documentation and modelling, quality control, scan processing and registration, 3d metrology for use in the inspection, assembly, meshing, statistical process control, and public safety analytics featuring 2d diagramming, 3d diagramming, animation, and blood spatter analysis; providing temporary use of online non downloadable software for providing access to remote diagnostics on software and hardware, predictive maintenance, and training, storage, integration and data connection to third party apps, mobile device support and browser based access and real-time access to highly accurate 3d reality data in real time; software as a service (SAAS) services featuring software for cloud-based applications for use with shared web services for comparison and analysis of scans and measures design data retrieved from a device compared to measurements in a database, for use with shared web services, use workflows, and featuring central online access point for architecture engineering and construction applications, namely, as-build documentation and modelling, quality control, scan processing and registration, 3d metrology for use in the inspection, assembly, meshing, statistical process control, and public safety analytics featuring 2d diagramming, 3d diagramming, and animation, blood spatter analysis; software as a service (SAAS) services featuring software for providing access to remote diagnostics on software and hardware, predictive maintenance, and training, storage, integration and data connection to third party apps, mobile device support and browser based access and real-time access to highly accurate 3d reality data in real time.
36.
GENERATING GRAPHICAL REPRESENTATIONS FOR VIEWING 3D DATA AND/OR IMAGE DATA
A method includes receiving three-dimensional (3D) data and image data. The method further includes generating a graphical representation based at least in part on at least one of the 3D data or the image data, the graphical representation including a first region selectively switchable between a single-sub-region mode and a multi-sub-region mode. Responsive to the single-sub-region mode being enabled, the first region displays at least a first portion of the 3D data or at least a first portion of the image data. Responsive to the multi-sub-region mode being enabled, the first region includes at least a first sub-region and a second sub-region. The first sub-region displays at least a second portion of the 3D data or at least a second portion of the image data, and the second sub-region displays at least a third portion of the 3D data or at least a third portion of the image data.
Examples described herein provide a method that includes receiving three-dimensional (3D) data associated with an environment. The method further includes generating a graphical representation based at least in part on at least one of the 3D data. The method further includes filling in a gap in the graphical representation using downsampled frame buffer objects.
An example method for feature extraction includes receiving a selection of a point from a plurality of points, the plurality of points representing an object. The method further includes identifying a feature of interest for the object based at least in part on the point. The method further includes classifying the feature of interest. The method further includes constructing, based at least in part on results of the classifying, a geometric primitive or mathematical function associated with the plurality of points associated with the feature of interest. The method further includes generating a graphical representation of the feature of interest based at least in part on the geometric primitive or mathematical function.
G06V 10/40 - Extraction of image or video features
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
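Constructing a geometric primitive with a best fit to the selected points, as in the feature-extraction abstract above, often comes down to a least-squares fit. A plane through the feature's points, fitted with an SVD, is the simplest case and is only a stand-in for whichever primitives or mathematical functions the filing covers.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: returns the centroid and the unit
    normal (direction of smallest variance)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[-1]

def plane_residuals(points, centroid, normal):
    """Signed point-to-plane distances, useful for judging the fit quality."""
    return (points - centroid) @ normal
```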
An example method for feature extraction includes receiving a selection of a point from a plurality of points, the plurality of points representing an object. The method further includes identifying a feature of interest for the object based at least in part on the point. The method further includes performing edge extraction on the feature of interest. The method further includes performing pre-processing on results of the edge extraction. The method further includes classifying the feature of interest based at least in part on results of the pre-processing. The method further includes constructing, based at least in part on results of the classifying, a geometric primitive or mathematical function that has a best fit to a set of points from the plurality of points associated with the feature of interest. The method further includes generating a graphical representation of the feature of interest using the geometric primitive or mathematical function.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
09 - Scientific and electric apparatus and instruments
38 - Telecommunications services
42 - Scientific, technological and industrial services, research and design
Goods & Services
Downloadable and recorded software for processing images and
text; recorded computer software platform for use with
cameras and scanners in a variety of industrial and
scientific applications including forensics, architecture
engineering and construction services capturing 360 degree
views and 2d images; recorded and downloadable software for
the collection, editing, organizing and modifying of data
and images; downloadable software which interacts directly
with cloud-based applications; recorded and downloadable
software applications for use with cameras and scanners for
use in a variety of industrial and scientific applications
including forensics, architecture engineering and
construction services; downloadable mobile applications used
to display, modify, edit or provide measurements and capture
360 degree views and 2D images; recorded computer software,
namely, data storage software for use in a variety of
industrial and scientific applications including forensics,
architecture engineering and construction services. Electronic transmission of digital files among internet
users; providing multiple-user access to global computer
information networks concerning the development of
construction projects. Hosting of digital content on the internet concerning the
development and status of construction projects; providing
temporary use of web-based applications that enable users
to manage and share digital images and related digital
content concerning the development and status of
construction projects; developing and managing application
software for delivery of digital multi-media content
provided for construction project management for use on
wireless mobile devices; development of a computer database in
the field of digital multi-media content provided for
construction project management for use on wireless mobile
devices; providing computer software for digital image
processing.
41.
Photosensor processing for improved line scanner performance
A method includes providing a measuring device having a projector, a camera with a photosensitive array, and at least one processor, projecting with the projector a line of light onto an object, capturing with the camera an image of the projected line of light on the object within a window subregion of the photosensitive array, and calculating with the at least one processor three-dimensional (3D) coordinates of points on the object based at least in part on the projected line of light and on the captured image.
G06T 7/70 - Determining position or orientation of objects or cameras
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
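The 3D computation in a line scanner of this kind is classically a plane-ray intersection: the projected line of light lies in a known laser plane, each bright pixel inside the window subregion defines a camera ray, and their intersection gives the 3D point. A sketch assuming an idealised, calibrated pinhole camera with the laser plane expressed in camera coordinates; the windowed capture and the patented processing details are not reproduced.

```python
import numpy as np

def triangulate_line(pixels, fx, fy, cx, cy, plane_point, plane_normal):
    """Intersect camera rays through the imaged laser-line pixels with the known
    plane of projected light; returns 3D points in the camera frame."""
    u, v = pixels[:, 0], pixels[:, 1]
    rays = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones_like(u)], axis=1)
    n = plane_normal / np.linalg.norm(plane_normal)
    s = (plane_point @ n) / (rays @ n)          # ray parameter at the intersection
    return rays * s[:, None]
```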
A 3D measuring instrument and method of operation is provided that includes a registration camera and an autofocus camera. The method includes capturing with the registration camera a first registration image of a first plurality of points and a first image with the first camera with the instrument in a first pose. A plurality of three-dimensional (3D) coordinates of points are determined based on the first image. A second registration image of a second plurality of points is captured in a second pose and a focal length of the autofocus camera is adjusted. A second surface image is captured with the first camera having the adjusted focal length. A compensation parameter is determined based in part on the captured second surface image. The determined compensation parameter is stored.
A method and system of correcting a point cloud is provided. The method includes selecting a region within the point cloud. At least two objects within the region are identified. The at least two objects are re-aligned. At least a portion of the point cloud is aligned based at least in part on the realignment of the at least two objects.
A mobile three-dimensional (3D) measuring system includes a 3D measuring device configured to capture 3D data in a multi-level architecture, and an orientation sensor configured to estimate an altitude. One or more processing units coupled with the 3D measuring device and the orientation sensor perform a method that includes receiving a first portion of the 3D data captured by the 3D measuring device. The method further includes determining a level index based on the altitude. The level index indicates a level of the multi-level architecture at which the first portion is captured. The level index is associated with the first portion. Further, a map of the multi-level architecture is generated using the first portion, the generating comprises registering the first portion with a second portion of the 3D data responsive to the level index of the first portion being equal to the level index of the second portion.
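The level index in the abstract above can be as simple as binning the altitude estimate by an assumed storey height; the base altitude and storey height below are illustrative parameters, not values from the filing.

```python
def level_index(altitude_m, base_altitude_m, storey_height_m=3.0):
    """Map an altitude estimate to a level of the multi-level architecture, so
    that portions of 3D data are only registered against the same level."""
    return int(round((altitude_m - base_altitude_m) / storey_height_m))
```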
According to some aspects of the invention, auxiliary axis measurement systems for determining three-dimensional coordinates of an object are provided as shown and described herein. According to some aspects of the invention, methods for operating auxiliary axis measurement systems for determining three-dimensional coordinates of an object are provided as shown and described herein.
G05B 19/401 - Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by control arrangements for measuring, e.g. calibration and initialisation, measuring workpiece for machining purposes
G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
G01B 21/04 - Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant for measuring length, width, or thickness by measuring coordinates of points
46.
SYSTEM AND METHOD OF SCANNING AN ENVIRONMENT AND GENERATING TWO DIMENSIONAL IMAGES OF THE ENVIRONMENT
A system and method for scanning an environment and generating an annotated 2D map is provided. The method includes acquiring, via a 2D scanner, a plurality of 2D coordinates on object surfaces in the environment, the 2D scanner having a light source and an image sensor, the image sensor being arranged to receive light reflected from the object points. A first 360° image is acquired at a first position of the environment, via a 360° camera having a plurality of cameras and a controller, the controller being operable to merge the images acquired by the plurality of cameras to generate an image having a 360° view, the 360° camera being movable from the first to a second position. A 2D map is generated based at least in part on the plurality of two-dimensional coordinates of points. The first 360° image is integrated with the 2D map.
A computer-implemented method is provided that includes retrieving at least one selected image from a plurality of aerial images of an environment, the at least one selected image comprising surface regions that are concurrently in a three-dimensional (3D) point cloud of the environment. The method further includes detecting areas of the surface regions in the at least one selected image, such that coordinates of the areas of the surface regions are extracted from the at least one selected image. The method further includes comparing the at least one selected image to the 3D point cloud to align common locations in both the at least one selected image and the 3D point cloud. The method further includes displaying an integration of a drawing of the coordinates of the areas of the surface regions in a representation of the 3D point cloud.
A computer-implemented method is provided that includes causing an aerial vehicle to scan an environment in a predesignated pattern, such that a first set of images are captured. The method further includes detecting an emergency scene in the first set of images of the environment. The method further includes determining locations at which the aerial vehicle is to capture a second set of images of the emergency scene in the environment. The method further includes causing the aerial vehicle to acquire the second set of images at the locations. The method further includes determining selected images of the second set of images focused on the emergency scene. The method further includes extracting the selected images from the second set of images, the selected images comprising a representation of the emergency scene.
A computer-implemented method is provided that includes detecting at least one reflective surface in at least one two-dimensional (2D) image of an environment. The method further includes generating bounding coordinates encompassing the at least one reflective surface in the 2D image. The method further includes projecting the bounding coordinates of the 2D image into a three-dimensional (3D) space of the environment. The method further includes identifying a reflection artifact encompassed by the bounding coordinates in the 3D space. The method further includes removing the reflection artifact identified in the bounding coordinates.
Examples described herein provide a computer-implemented method that includes receiving a video stream from a camera. The method further includes detecting, within the video stream, an object of interest using a first trained machine learning model. The method further includes, responsive to determining that a confidence score associated with the object of interest fails to satisfy a threshold, determining, using a second trained machine learning model, a direction to move the camera to cause the confidence score to satisfy the threshold. The method further includes presenting an indication of the direction to move the camera to cause the confidence score to satisfy the threshold.
A system and method for determining a distance is provided. The system includes a scanner that captures a scan-point by emitting a light having a base frequency and at least one measurement frequency and receiving a reflection of the light. Processors determine the distance to the scan-point by using a method that comprises: generating a signal in response to receiving the reflection of light; determining a first distance to the scan-point based on a phase-shift of the signal and the measurement frequency; determining a second distance and a third distance based on a phase-shift of the signal determined using a Fourier transform at the measurement frequency on a pair of adjacent half-cycles; determining a corrected second distance and a corrected third distance by compensating for an error in the second distance and third distance by performing the Fourier transform on the pair of adjacent half-cycles.
G01S 7/4915 - Time delay measurement, e.g. operational details for pixel components; Phase measurement
G01S 17/36 - Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated with phase comparison between the received signal and the contemporaneously transmitted signal
G01S 17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
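The phase-based distance determination described in the abstract above can be illustrated with a small simulation: the phase of the return signal at the measurement frequency is extracted with a single DFT bin and converted to a distance. The 100 MHz measurement frequency, sampling rate, and noise-free signal are assumptions of this sketch; the abstract's half-cycle correction step is not reproduced.

```python
# Minimal sketch of phase-based distance determination for a
# continuous-wave signal, assuming an idealized, noise-free return.
import numpy as np

C = 299_792_458.0   # speed of light, m/s
F_MEAS = 100e6      # illustrative measurement frequency, Hz
FS = 2e9            # illustrative sampling rate, Hz
N = 2000            # chosen so the record spans an integer number of cycles

def simulated_return(distance_m):
    """Return signal delayed by the round trip to the scan point."""
    tau = 2.0 * distance_m / C
    t = np.arange(N) / FS
    return np.cos(2 * np.pi * F_MEAS * (t - tau))

def distance_from_phase(signal):
    """Estimate distance from the phase of the DFT bin at F_MEAS."""
    t = np.arange(len(signal)) / FS
    bin_value = np.sum(signal * np.exp(-1j * 2 * np.pi * F_MEAS * t))
    phase = np.angle(bin_value)              # equals -2*pi*F_MEAS*tau, wrapped
    tau = (-phase) % (2 * np.pi) / (2 * np.pi * F_MEAS)
    return C * tau / 2.0                     # unambiguous only within C/(2*F_MEAS)

if __name__ == "__main__":
    print(distance_from_phase(simulated_return(1.234)))  # ~1.234 m
```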
52.
Correction of current scan data using pre-existing data
A system and method for measuring coordinate values of an environment is provided. The system includes a coordinate measurement scanner that includes a light source that steers a beam of light to illuminate object points in the environment, and an image sensor arranged to receive light reflected from the object points to determine coordinates of the object points in the environment. The system also includes one or more processors for performing a method that includes receiving a previously generated map of the environment and causing the scanner to measure a plurality of coordinate values as the scanner is moved through the environment, the coordinate values forming a point cloud. The plurality of coordinate values are registered with the previously generated map into a single frame of reference. A current map of the environment is generated based at least in part on the previously generated map and the point cloud.
G06T 3/00 - Geometric image transformations in the plane of the image
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G06F 30/13 - Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
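Registering newly measured coordinate values with a previously generated map, as described in the abstract above, can be illustrated with a generic point-to-point ICP loop. This is a stand-in technique, not the patented registration method; the synthetic data, perturbation, and iteration count are assumptions.

```python
# Generic point-to-point ICP sketch: register a new scan into the frame
# of a previously generated map. Illustrative only; not the claimed method.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(scan, prior_map, iterations=30):
    """Iteratively align `scan` to `prior_map` using nearest neighbours."""
    tree = cKDTree(prior_map)
    moved = scan.copy()
    for _ in range(iterations):
        _, idx = tree.query(moved)           # closest map point per scan point
        R, t = best_rigid_transform(moved, prior_map[idx])
        moved = moved @ R.T + t
    return moved

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prior_map = rng.uniform(0, 10, size=(500, 3))
    theta = np.deg2rad(2)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    scan = prior_map @ R_true.T + np.array([0.2, -0.1, 0.05])
    aligned = icp(scan, prior_map)
    # Mean residual shrinks toward zero as the registration converges.
    print(np.linalg.norm(aligned - prior_map, axis=1).mean())
```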
A system and a method for removing artifacts from 3D coordinate data are provided. The system includes one or more processors and a 3D measuring device. The one or more processors are operable to receive training data and train the 3D measuring device to identify artifacts by analyzing the training data. The one or more processors are further operable to identify artifacts in live data based on the training. The one or more processors are further operable to generate clear scan data by filtering the artifacts from the live data and output the clear scan data.
A method is provided that includes recording a landmark at a first scan position of a scanner, the landmark based at least in part on a semantic feature of scan data captured by the scanner. The semantic feature is identified using line-segments of the scan data. The method further includes capturing, by the scanner while moving through the environment, additional scan data at a second scan position. The method further includes, responsive to the scanner returning to the first scan position associated with the landmark, computing a measurement error. The method further includes correcting, using the measurement error, at least a portion of the scan data or the additional scan data.
Examples described herein provide a method that includes capturing, using a camera, a first image of an environment. The method further includes performing, by a processing system, a first positioning to establish a position of the first image in a layout of the environment. The method further includes detecting, by the processing system, a feature in the first image. The method further includes performing, by the processing system, a second positioning based at least in part on the feature to refine the position of the first image in the layout. The method further includes capturing, using the camera, a second image of the environment and automatically registering the second image to the layout. The method further includes generating a digital twin representation of the environment using the first image based at least in part on the refined position of the first image in the layout and using the second image.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
09 - Scientific and electric apparatus and instruments
38 - Telecommunications services
42 - Scientific, technological and industrial services, research and design
Goods & Services
(1) Downloadable and recorded software for processing images and text; recorded computer software platform for use with cameras and scanners in a variety of industrial and scientific applications including forensics, architecture engineering and construction services capturing 360 degree views and 2d images; recorded and downloadable software for the collection, editing, organizing and modifying of data and images; downloadable software which interacts directly with cloud-based applications; recorded and downloadable software applications for use with cameras and scanners for use in a variety of industrial and scientific applications including forensics, architecture engineering and construction services; downloadable mobile applications used to display, modify, edit or provide measurements and capture 360 degree views and 2D images; recorded computer software, namely, data storage software for use in a variety of industrial and scientific applications including forensics, architecture engineering and construction services. (1) Electronic transmission of digital files among internet users; providing multiple-user access to global computer information networks concerning the development of construction projects.
(2) Hosting of digital content on the internet concerning the development and status of construction projects; providing temporary use of web-based applications that enable users to manage and share digital images and related digital content concerning the development and status of construction projects; developing and managing application software for delivery of digital multi-media content provided for construction project management for use on wireless mobile devices; development of computer database in the field of digital multi-media content provided for construction project management for use on wireless mobile devices; providing computer software for digital image processing.
A system includes one or more processors that are configured to compensate a measurement tool by performing a method. The method includes capturing a first data using the measurement tool. The method further includes capturing a second data using the measurement tool. The method further includes detecting a first natural feature in the first data. The method further includes computing a difference in positions of the first natural feature in the first data and the second data respectively. The method further includes computing a compensation parameter to adjust the measurement tool based on the difference computed.
G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
58.
SEGMENTATION OF COMPUTED TOMOGRAPHY VOXEL DATA USING MACHINE LEARNING
Examples described herein provide a method that includes creating two-dimensional (2D) slices from a plurality of computed tomography (CT) voxel data sets. The method further includes adding artificial noise to the 2D slices to generate artificially noisy 2D slices. The method further includes creating patches from the 2D slices and the artificially noisy 2D slices. The method further includes training an autoencoder using the patches.
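The data-preparation steps described in the abstract above (slicing voxel data, adding artificial noise, cutting patches) can be sketched as follows. The patch size, noise model, and random volume are assumptions; the autoencoder training itself is left to any standard framework.

```python
# Illustrative sketch: slices, artificial noise, and patches for training
# a denoising autoencoder. Values and the random volume are assumptions.
import numpy as np

def make_slices(volume):
    """Split a CT voxel volume (z, y, x) into 2D slices along z."""
    return [volume[z] for z in range(volume.shape[0])]

def add_artificial_noise(slice_2d, sigma=0.05, rng=None):
    """Additive Gaussian noise as a simple stand-in for CT artifacts."""
    if rng is None:
        rng = np.random.default_rng()
    return slice_2d + rng.normal(0.0, sigma, size=slice_2d.shape)

def make_patches(slice_2d, patch=32, stride=32):
    """Cut non-overlapping square patches out of a slice."""
    h, w = slice_2d.shape
    return [slice_2d[i:i + patch, j:j + patch]
            for i in range(0, h - patch + 1, stride)
            for j in range(0, w - patch + 1, stride)]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    volume = rng.random((8, 128, 128)).astype(np.float32)
    clean, noisy = [], []
    for s in make_slices(volume):
        clean += make_patches(s)
        noisy += make_patches(add_artificial_noise(s, rng=rng))
    # (noisy, clean) patch pairs would then be fed to an autoencoder.
    print(len(clean), "patch pairs of shape", clean[0].shape)
```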
A method for measuring gaps between material layers includes inserting a probe tip within a through-hole defined in a structural component. The probe tip is arranged at the end of a probe assembly attached to an articulated arm coordinate measuring machine (AACMM). The method further includes contacting the probe tip with a hole surface of the through-hole. The method further includes translating the probe tip along the hole surface in a direction parallel to an axis through the through-hole. The probe tip passes over a gap along the through-hole. The method further includes measuring a radial position of the probe tip during the translation along the hole surface and across the gap including a deflection of radial position of the probe tip as the probe tip crosses the gap. The method further includes calculating a gap size of the gap based on the deflection and a size of the probe tip.
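One possible geometric relation between the measured deflection and the gap size assumes a spherical probe tip of radius r that dips into the gap until it rests on both edges, giving g = 2·sqrt(d·(2r − d)) for a deflection d. This geometry is an assumption of the sketch below, not necessarily the calculation used by the method above.

```python
# Gap width from probe-tip deflection, assuming a spherical tip that dips
# into the gap until it contacts both edges (illustrative assumption).
import math

def gap_size(deflection_m, tip_radius_m):
    """Gap width g from radial deflection d: g = 2*sqrt(d*(2r - d))."""
    if not 0.0 <= deflection_m <= tip_radius_m:
        raise ValueError("deflection must lie between 0 and the tip radius")
    return 2.0 * math.sqrt(deflection_m * (2.0 * tip_radius_m - deflection_m))

if __name__ == "__main__":
    # A 3 mm radius tip that dips 0.1 mm while crossing the gap.
    print(f"{gap_size(0.1e-3, 3e-3) * 1e3:.3f} mm")   # ~1.536 mm
```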
A system includes a three-dimensional (3D) scanner that captures a 3D point cloud corresponding to one or more objects in a surrounding environment. The system further includes a camera that captures a control image by capturing a plurality of images of the surrounding environment, and an auxiliary camera configured to capture an ultrawide-angle image of the surrounding environment. One or more processors of the system colorize the 3D point cloud using the ultrawide-angle image by mapping the ultrawide-angle image to the 3D point cloud. The system performs a limited system calibration before colorizing each 3D point cloud, and a periodic full system calibration before/after a plurality of 3D point clouds are colorized.
G01S 17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G06T 3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
H04N 17/00 - Diagnosis, testing or measuring for television systems or their details
H04N 23/698 - Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
61.
TRACKING DATA ACQUIRED BY COORDINATE MEASUREMENT DEVICES THROUGH A WORKFLOW
A method is provided that includes providing a database for storing meta-data that describes steps in a workflow and an order of the steps in the workflow. The meta-data includes, for each of the steps: a reference to an input data file for the step; a description of a transaction performed at the step; and a reference to an output data file generated by the step based at least in part on applying the transaction to the input data file. Data that includes meta-data for a step in the workflow is received, and the data is stored in the database. A trace of the workflow is generated based at least in part on contents of the database. The generating is based on receiving a request from a requestor for the trace of the workflow. At least a subset of the trace is output to the requestor.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
09 - Scientific and electric apparatus and instruments
Goods & Services
Simultaneous localization and mapping mobile handheld
scanner; simultaneous localization and mapping mobile
handheld scanner featuring an integrated panoramic camera;
downloadable mobile app.
09 - Scientific and electric apparatus and instruments
35 - Advertising and business services
37 - Construction and mining; installation and repair services
42 - Scientific, technological and industrial services, research and design
Goods & Services
Handheld mapper, namely, surveying apparatus, sensors for
the determination of distances for creating maps and floor
plans of buildings; handheld scanner to capture and generate
2D and 3D floorplans and high-resolution data; handheld
device, namely, 2D and 3D scanners, sensors for the
determination of distances that enables rapid measurement
and documentation of building floor plans in 2D and 3D;
coordinate measuring machines (CMMs); portable articulated
measurement devices for measuring physical properties of
objects and locations; portable laser measurement scanners
for measuring physical properties of objects and locations;
computer software for use with CMMs and for use in
measurement equipment for computer-aided manufacturing
equipment, and software user manuals, all sold as a unit;
computer software for use with laser measurement scanners,
and user manuals, all sold as a unit; all of the
aforementioned for use in the field of construction,
manufacturing, and forensics. Data processing services in the field of construction,
manufacturing, forensics and coordinate measuring; data
collection and retrieval services in the field of
construction, manufacturing, forensics and coordinate
measuring. Portable laser projection and measurement system
installation, maintenance, and repair; consultation in the
field of the installation, maintenance, and repair of
portable laser projection and measurement systems; surveying
apparatus and instruments for use in the field of
construction, manufacturing, and forensics. Providing temporary use of online non-downloadable computer
software used to display, modify, register, analyze, and
provide measurements based on 3D data for use with laser
measurement scanners, and user manuals, all sold as a unit;
digitized 3D data capture services; mapping services,
namely, digitized 3D data mapping capture; geophysical
survey services; design and development of computer hardware
and software for mapping and surveying; all of the
aforementioned for use in the field of construction,
manufacturing, and forensics; advice, information and
consultancy services relating to all the aforesaid services;
data storage.
64.
LASER SCANNER FOR VERIFYING POSITIONING OF COMPONENTS OF ASSEMBLIES
Examples described herein provide a method that includes receiving, from a camera, a first image captured at a first location of an environment. The method further includes receiving, by a three-dimensional (3D) coordinate measurement device, first 3D coordinate data captured at the first location of the environment. The method further includes receiving, from the camera, a second image captured at a second location of the environment. The method further includes detecting, by a processing system, first features of the first image and second features of the second image. The method further includes determining, by the processing system, whether a correspondence exists between the first image and the second image. The method further includes, responsive to determining that the correspondence exists between the first image and the second image, causing the 3D coordinate measurement device to capture, at the second location, second 3D coordinate data.
A mobile three-dimensional (3D) measuring system includes a 3D measuring device comprising a first sensor and a second sensor. The 3D measuring system further includes a computing system coupled with the 3D measuring device. A computing device is coupled with the computing system. The 3D measuring device continuously transmits a first data from the first sensor, and a second data from the second sensor to the computing system as it is moved in an environment. The computing system generates a 3D point cloud representing the environment. The computing system generates a 2D projection corresponding to the 3D point cloud. The computing device displays the 2D projection as a live feedback of a movement of the 3D measuring device.
A mobile 3D measuring system includes a 3D measuring device comprising a sensor that emits a plurality of scan lines in a field of view of the sensor. The 3D system further includes a field of view manipulator coupled with the 3D measuring device, the field of view manipulator comprising a passive optic element that redirects a first scan line from the plurality of scan lines. The 3D system further includes a computing system coupled with the 3D measuring device. The 3D measuring device continuously transmits a captured data from the sensor to the computing system as the 3D measuring device is moved in an environment, the captured data is based on receiving reflections corresponding to the plurality of scan lines, including a reflection of the first scan line that is redirected. The computing system generates a 3D point cloud representing the environment based on the captured data.
A mobile three-dimensional (3D) measuring system includes a 3D measuring device, and a support apparatus. The 3D measuring device is coupled to the support apparatus. The support apparatus includes a pole mount that includes a gimbal at the top of the pole mount, wherein the 3D measuring device is attached to the gimbal. The support apparatus further includes a counterweight at the bottom of the pole mount, the counterweight matches a weight of the 3D measuring device.
Examples described herein provide a method that includes communicatively connecting a camera to a processing system. The processing system includes a light detecting and ranging (LIDAR) sensor. The method further includes capturing, by the processing system, three-dimensional (3D) coordinate data of an environment using the LIDAR sensor while the processing system moves through the environment. The method further includes capturing, by the camera, a panoramic image of the environment. The method further includes associating the panoramic image of the environment with the 3D coordinate data of the environment to generate a dataset for the environment. The method further includes generating a digital twin representation of the environment using the dataset for the environment.
G01S 17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
G03B 37/04 - Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe, with cameras or projectors providing touching or overlapping fields of view
Examples described herein provide a method that includes receiving a model corresponding to an assembly. The method further includes defining an object of interest in the model. The method further includes receiving a point cloud generated based on data obtained by scanning the assembly using a laser scanner. The method further includes aligning the point cloud to the model. The method further includes determining whether a component corresponding to the object of interest is located correctly relative to the assembly based at least in part on the point cloud aligned to the model. The method further includes, responsive to determining that the component is not located correctly, taking a corrective action.
42 - Scientific, technological and industrial services, research and design
Goods & Services
Providing temporary use of online non-downloadable software that connects to a cloud-based service for comparison and analysis of scans and measures design data for use with shared web services, use workflows, and featuring central online access point for architecture engineering and construction applications, namely, as-build documentation and modelling, quality control, scan processing and registration, 3d metrology for use in the inspection, assembly, meshing, statistical process control, and public safety analytics featuring 2d diagramming, 3d diagramming, animation, and blood spatter analysis; providing temporary use of online non-downloadable software for use with cloud-based applications for providing access to remote diagnostics on software and hardware, predictive maintenance, and training, storage, integration and data connection to third party apps, mobile device support and browser based access and real-time access to highly accurate 3d reality data in real time; providing temporary use of online non-downloadable cloud-based hosting platform featuring cloud-based applications for use with shared web services for comparison and analysis of scans and measures design data, for use with shared web services, use workflows, and featuring central online access point for architecture engineering and construction applications, namely, as-build documentation and modelling, quality control, scan processing and registration, 3d metrology for use in the inspection, assembly, meshing, statistical process control, and public safety analytics featuring 2d diagramming, 3d diagramming, animation, and blood spatter analysis;
providing temporary use of online non-downloadable software for providing access to remote diagnostics on software and hardware, predictive maintenance, and training, storage, integration and data connection to third party apps, mobile device support and browser based access and real-time access to highly accurate 3d reality data in real time; software as a service (SAAS) services featuring software for cloud-based applications for use with shared web services for comparison and analysis of scans and measures of design data retrieved from a device compared to measurements in a database, for use with shared web services, use workflows, and featuring central online access point for architecture engineering and construction applications, namely, as-build documentation and modelling, quality control, scan processing and registration, 3d metrology for use in the inspection, assembly, meshing, statistical process control, and public safety analytics featuring 2d diagramming, 3d diagramming, and animation, blood spatter analysis; software as a service (SAAS) services featuring software for providing access to remote diagnostics on software and hardware, predictive maintenance, and training, storage, integration and data connection to third party apps, mobile device support and browser based access and real-time access to highly accurate 3d reality data in real time
42 - Scientific, technological and industrial services, research and design
Goods & Services
Providing temporary use of online non-downloadable software that connects to a cloud-based service for comparison and analysis of scans and measures design data for use with shared web services, use workflows, and featuring central online access point for architecture engineering and construction applications, namely, as-build documentation and modelling, quality control, scan processing and registration, 3d metrology for use in the inspection, assembly, meshing, statistical process control, and public safety analytics featuring 2d diagramming, 3d diagramming, animation, and blood spatter analysis; providing temporary use of online non-downloadable software for use with cloud-based applications for providing access to remote diagnostics on software and hardware, predictive maintenance, and training, storage, integration and data connection to third party apps, mobile device support and browser based access and real-time access to highly accurate 3d reality data in real time; providing temporary use of online non-downloadable cloud-based computer software platform featuring cloud based applications for use as a hosting platform for shared web services for comparison and analysis of scans and measures design data, for use with shared web services, use workflows, and featuring central online access point for architecture engineering and construction applications, namely, as-build documentation and modelling, quality control, scan processing and registration, 3d metrology for use in the inspection, assembly, meshing, statistical process control, and public safety analytics featuring 2d diagramming, 3d diagramming, animation, and blood spatter analysis; providing temporary use of online non downloadable software for providing access to remote diagnostics on software and hardware, predictive maintenance, and training, storage, integration and data connection to third party apps, mobile device support and browser based access and real-time access to highly accurate 3d reality data in real time; software as a service (SAAS) services featuring software for cloud-based applications for use with shared web services for comparison and analysis of scans and measures design data retrieved from a device compared to measurements in a database, for use with shared web services, use workflows, and featuring central online access point for architecture engineering and construction applications, namely, as-build documentation and modelling, quality control, scan processing and registration, 3d metrology for use in the inspection, assembly, meshing, statistical process control, and public safety analytics featuring 2d diagramming, 3d diagramming, and animation, blood spatter analysis; software as a service (SAAS) services featuring software for providing access to remote diagnostics on software and hardware, predictive maintenance, and training, storage, integration and data connection to third party apps, mobile device support and browser based access and real-time access to highly accurate 3d reality data in real time
72.
METHOD OF REMOTELY CONTROLLING A LASER TRACKER USING A MOBILE COMPUTING DEVICE
A laser tracker system and a method of operating the laser tracker system are provided. The method includes providing a mobile computing device coupled for communication to a computer network and identifying, with the mobile computing device, at least one laser tracker device on the computer network, the at least one laser tracker device including a first laser tracker device. The mobile computing device is connected to the first laser tracker device to transmit signals therebetween via the computer network in response to a first input from a user. One or more control functions are performed on the first laser tracker device in response to one or more second inputs from the user, wherein at least one of the one or more control functions includes selecting, with the mobile computing device, a retroreflective target and locking a light beam of the first laser tracker device onto the retroreflective target.
Examples described herein provide a method that includes communicatively connecting a camera to a processing system. The processing system includes a light detecting and ranging (LIDAR) sensor. The method further includes capturing, by the processing system, three-dimensional (3D) coordinate data of an environment using the LIDAR sensor while the processing system moves through the environment. The method further includes capturing, by the camera, a panoramic image of the environment. The method further includes associating the panoramic image of the environment with the 3D coordinate data of the environment to generate a dataset for the environment. The method further includes generating a digital twin representation of the environment using the dataset for the environment.
09 - Scientific and electric apparatus and instruments
Goods & Services
(1) Simultaneous localization and mapping mobile handheld scanner; simultaneous localization and mapping mobile handheld scanner featuring an integrated panoramic camera; downloadable mobile app.
75.
Software camera view lock allowing editing of drawing without any shift in the view
A software camera lock is provided. A first image is displayed as a 3D image, and a semi-transparent second image overlays the first image. A software camera is inserted at a fixed location in the 3D image, wherein the software camera provides a field-of-view (FOV) displaying a portion of the 3D image, the FOV displaying a first reference, and the second image displaying a second reference that represents the first reference and comprises an object. The software camera is locked in the FOV using a lock-software-camera mode. A model is inserted in the first image to match a location of the object in the second image, wherein locking the software camera in the FOV causes the FOV of the first image to be maintained in place as the model is moved in the first image to match the location of the object in the second image.
A method for creating an augmented reality scene is provided. The method comprises, by a computing device with a processor and a memory: receiving first video image data and second video image data; calculating an error value for a current pose between the two images by comparing the pixel colors in the first video image data and the second video image data; warping pixel coordinates into the second video image data through the use of a map of depth hypotheses for each pixel; varying the pose between the first video image data and the second video image data to find a warp that corresponds to a minimum error value; and calculating, using the estimated poses, a new depth measurement for each pixel that is visible in both the first video image data and the second video image data.
A computer-implemented method includes identifying, by a controller, a part that is being transported to a workstation. The method further includes capturing a 3D scan of the part using a dynamic machine vision sensor. The method further includes validating the part by comparing the 3D scan of the part with a 3D model of the part. The method further includes, based on a determination that the part is valid, projecting a hologram that includes a sequence of assembly steps associated with the part. The method further includes, upon completion of the sequence of assembly steps, capturing a 3D scan of an item that is assembled using the part. The method further includes validating the item by comparing the 3D scan of the item with a 3D model of the item. The method further includes notifying a validity of the item.
G05B 19/4099 - Surface or curve machining, making 3D objects, e.g. desktop manufacturing
B23P 19/04 - Machines for simply fitting together or separating metal parts or objects, or metal and non-metal parts, whether or not involving some deformation; Tools or devices therefor, so far as not provided for in other classes, for assembling or disassembling parts
Examples described herein provide a method that includes receiving three-dimensional (3D) data of an object in an environment. The method further includes generating a point-cloud-defined boundary around the object based at least in part on the 3D data.
Examples described herein provide a method that includes capturing data about an environment. The method further includes generating a database of two-dimensional (2D) features and associated three-dimensional (3D) coordinates based at least in part on the data about the environment. The method further includes determining a position (x, y, z) and an orientation (pitch, roll, yaw) of a device within the environment based at least in part on the database of 2D features and associated 3D coordinates. The method further includes causing the device to display, on a display of the device, an augmented reality element at a predetermined location based at least in part on the position and the orientation of the device.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 10/40 - Extraction of image or video features
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
80.
ALIGNING SCANS OF AN ENVIRONMENT USING A REFERENCE OBJECT
An example method includes receiving, from at least one aerial position, a first plurality of coordinate measurement points capturing a portion of an environment and a reference object within the environment, the first plurality of coordinate measurement points defining at least a portion of a first point cloud. The method further includes receiving a second plurality of coordinate measurement points from a position other than the at least one aerial position, the second plurality of coordinate measurement points capturing at least some of the portion of the environment and the reference object within the environment, the second plurality of coordinate measurement points defining at least a portion of a second point cloud. The method further includes aligning the first point cloud and the second point cloud based at least in part on the reference object captured in the first point cloud and the reference object captured in the second point cloud to generate a combined point cloud.
42 - Scientific, technological and industrial services, research and design
Goods & Services
Providing temporary use of non-downloadable software for use with laser alignment devices, to assist in detecting and operating global positioning systems and sensors for determining position and distances
82.
AUGMENTED REALITY ALIGNMENT AND VISUALIZATION OF A POINT CLOUD
An example method includes generating a graphical representation of a point cloud of an environment overlaid on a video stream of the environment. The method further includes receiving a first selection of a first point pair, the first point pair including a first virtual point of the point cloud and a first real point of the environment, the first real point corresponding to the first virtual point. The method further includes receiving a second selection of a second point pair, the second point pair including a second virtual point of the point cloud and a second real point of the environment, the second real point corresponding to the second virtual point. The method further includes aligning the point cloud to the environment based at least in part on the first point pair and the second point pair and updating the graphical representation based at least in part on the aligning.
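The alignment step described in the abstract above can be illustrated with a small sketch that computes a transform from two point pairs. Two pairs alone do not constrain a full 3D rotation, so the sketch assumes both frames share the gravity (Z) direction and solves only for a yaw rotation plus translation; that simplification is an assumption of mine, not something stated in the abstract.

```python
# Align a point cloud to the environment from two point pairs, assuming
# only a yaw rotation and a translation are needed (shared Z/up axis).
import numpy as np

def yaw_rotation(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def align_from_two_pairs(virtual_pts, real_pts):
    """Return R, t such that R @ v + t maps virtual points onto real ones."""
    v1, v2 = np.asarray(virtual_pts, float)
    r1, r2 = np.asarray(real_pts, float)
    dv, dr = v2 - v1, r2 - r1
    yaw = np.arctan2(dr[1], dr[0]) - np.arctan2(dv[1], dv[0])
    R = yaw_rotation(yaw)
    t = r1 - R @ v1
    return R, t

if __name__ == "__main__":
    virtual = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
    real = [(1.0, 1.0, 0.2), (1.0, 3.0, 0.2)]     # rotated 90 deg and shifted
    R, t = align_from_two_pairs(virtual, real)
    for v, r in zip(virtual, real):
        print(R @ np.array(v) + t, "should match", r)
```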
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Downloadable software for management of point clouds and
images from a diverse range of sensors including terrestrial
and mobile laser scanners, drones, 360° cameras, and mobile
phones. 3D point cloud processing software, namely, online
non-downloadable computer software that connects hardware with
cloud-based applications and services to allow users to
georeference, align, merge, and classify point clouds and
logically organize data.
84.
SYSTEM AND METHOD OF COMBINING THREE DIMENSIONAL DATA
According to one aspect of the disclosure, a method for generating a three-dimensional model of an environment is provided. The method includes acquiring a first plurality of 3D coordinates of surfaces in the environment in a first coordinate frame of reference using a first measurement device, the first plurality of 3D coordinates including at least one subset of 3D coordinates of a target, the first measurement device optically measuring the first plurality of 3D coordinates. A second plurality of 3D coordinates of the environment is acquired in a second frame of reference using a second measurement device, the second measurement device being operably disposed in a fixed relationship to the target. The second plurality of 3D coordinates is registered with the first plurality of 3D coordinates in the first coordinate frame of reference based at least in part on the at least one subset of 3D coordinates and the fixed relationship.
G06F 30/13 - Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
Techniques are described to generate a 3D scene by mapping a point cloud with a 2D image, and colorize portions of the 3D scene synthetically. An input is received to select, from the 3D scene, a portion to be colorized synthetically. The colorizing includes generating a reflectance image based on an intensity image of the point cloud. The colorizing further includes generating an occlusion mask that identifies the selected portion in the reflectance image. The colorizing further includes estimating, using a trained machine learning model, a color for each of the one or more points in the selected portion based on the reflectance image, the occlusion mask, and the 2D image. The 3D scene is updated by using the estimated colors from the trained machine learning model to colorize the selected portion.
A system and method for measuring three-dimensional (3D) coordinate values of an environment is provided. The system includes a movable base unit, a first scanner, and a second scanner. One or more processors perform a method that includes causing the first scanner to determine a first plurality of coordinate values in a first frame of reference based at least in part on a measurement by at least one sensor. The second scanner determines a second plurality of 3D coordinate values in a second frame of reference as the base unit is moved from a first position to a second position, the determining of the first plurality of coordinate values and the second plurality of 3D coordinate values being performed simultaneously. The second plurality of 3D coordinate values is registered in a common frame of reference based on the first plurality of coordinate values.
G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
G01B 5/008 - Measuring arrangements characterised by the use of mechanical techniques for measuring coordinates of points using coordinate measuring machines
G01C 3/02 - Measuring distances in line of sight; Optical rangefinders; Details
G01C 21/16 - Navigation; Navigational instruments not provided for in groups by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
G01S 17/06 - Systems determining position data of a target
G06T 7/70 - Determining position or orientation of objects or cameras
87.
User interface for three-dimensional measurement device
A system and method for providing feedback on the quality of a 3D scan are provided. The system includes a coordinate scanner configured to optically measure and determine a plurality of three-dimensional coordinates to a plurality of locations on at least one surface in the environment, the coordinate scanner being configured to move through the environment while acquiring the plurality of three-dimensional coordinates. The system also includes a display having a graphical user interface. One or more processors are provided that are configured to determine a quality attribute of a process of measuring the plurality of three-dimensional coordinates based at least in part on the movement of the coordinate scanner in the environment and to display a graphical quality indicator on the graphical user interface based at least in part on the quality attribute, the quality indicator being a graphical element having at least one movable element.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
(1) Downloadable software for management of point clouds and images from a diverse range of sensors including terrestrial and mobile laser scanners, drones, 360° cameras, and mobile phones. (1) 3D point cloud processing software, namely, online non-downloadable computer software that connects hardware with cloud-based applications and services to allow users to georeference, align, merge, and classify point clouds and logically organize data.
89.
Artificial panorama image production and in-painting for occluded areas in images
A system includes a three-dimensional (3D) scanner, a camera with a viewpoint that is different from a viewpoint of the 3D scanner, and one or more processors coupled with the 3D scanner and the camera. The processors access a point cloud from the 3D scanner and one or more images from the camera, the point cloud comprises a plurality of 3D scan-points, a 3D scan-point represents a distance of a point in a surrounding environment from the 3D scanner, and an image comprises a plurality of pixels, a pixel represents a color of a point in the surrounding environment. The processors generate, using the point cloud and the one or more images, an artificial image that represents a portion of the surrounding environment viewed from an arbitrary position in an arbitrary direction, wherein generating the artificial image comprises colorizing each pixel in the artificial image.
H04N 13/25 - Image signal generators using stereoscopic image cameras using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; Image signal generators using stereoscopic image cameras using image signals from one sensor to control the characteristics of another sensor
H04N 13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
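The colorization step described in the abstract above can be illustrated by mapping each 3D point's direction to a pixel of a panoramic image. The equirectangular image model and the camera-at-origin assumption are mine; the abstract's artificial-view synthesis and in-painting for occluded areas are not reproduced here.

```python
# Illustrative sketch: colorize 3D scan points from an equirectangular
# panorama by mapping point directions to pixel coordinates.
import numpy as np

def colorize_points(points_xyz, pano_rgb):
    """Return one RGB colour per 3D point, sampled from the panorama."""
    h, w, _ = pano_rgb.shape
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    r = np.linalg.norm(points_xyz, axis=1)
    azimuth = np.arctan2(y, x)                                   # -pi .. pi
    elevation = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    u = ((azimuth + np.pi) / (2 * np.pi) * (w - 1)).astype(int)  # column
    v = ((np.pi / 2 - elevation) / np.pi * (h - 1)).astype(int)  # row
    return pano_rgb[v, u]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    pano = rng.integers(0, 256, size=(512, 1024, 3), dtype=np.uint8)
    pts = rng.normal(size=(1000, 3))
    print(colorize_points(pts, pano).shape)   # (1000, 3) colours
```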
Examples described herein provide a method that includes receiving point cloud data from a three-dimensional (3D) coordinate measurement device, the point cloud data corresponding at least in part to the object. The method further includes analyzing, by a processing system, the point cloud data by comparing a point of the point cloud data to a corresponding reference point from reference data to determine a distance between the point and the corresponding reference point, wherein the point and the corresponding reference point are associated with the object. The method further includes determining, by the processing system, whether a change to a location of the object occurred by comparing the distance to a distance threshold. The method further includes, responsive to determining that the change to the location of the object occurred, displaying a change indicium on a display of the processing system.
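The comparison step in the abstract above reduces to measuring, for each point, the distance to its corresponding reference point and flagging distances that exceed a threshold. In the sketch below the correspondence is approximated by a nearest-neighbour lookup, and the 5 mm threshold and synthetic data are illustrative assumptions.

```python
# Flag measured points that moved relative to reference data by comparing
# nearest-neighbour distances to a threshold (illustrative values).
import numpy as np
from scipy.spatial import cKDTree

def changed_points(point_cloud, reference, threshold=0.005):
    """Boolean mask of points farther than `threshold` from the reference."""
    distances, _ = cKDTree(reference).query(point_cloud)
    return distances > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    reference = rng.uniform(0, 1, size=(1000, 3))
    measured = reference.copy()
    measured[:10] += 0.05                     # simulate a moved object
    mask = changed_points(measured, reference)
    print(int(mask.sum()), "points indicate a change")   # about 10
```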
Examples described herein provide a method that includes performing at least one scan with a laser scanner, the laser scanner to generate a data set that includes a plurality of three-dimensional coordinates of a floor. The method further includes determining, from the plurality of three-dimensional coordinates, with a processing device, a floor flatness and levelness deviation relative to a reference plane. The method further includes displaying, on a computer display, a graphical representation of the floor flatness and levelness deviation. The method further includes adjusting the floor flatness and levelness to be within a predetermined specification in response to determining the floor flatness and levelness deviation.
E01C 23/01 - Devices or auxiliary means for setting-out or checking the configuration of new surfacing, e.g. templates, screed supports; Applications of apparatus for measuring, indicating, or recording the surface configuration of existing surfacing, e.g. profilographs
G01C 5/00 - Measuring height; Measuring distances transverse to line of sight; Levelling between separated points; Surveyors' levels
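A simple way to illustrate the flatness and levelness determination described in the abstract above is to fit a least-squares plane to the floor points and report point deviations from it and the tilt of the fitted normal. The synthetic floor, noise level, and the use of a best-fit plane as the reference plane are assumptions of this sketch; industry FF/FL numbers are not reproduced.

```python
# Floor flatness/levelness sketch: best-fit plane via SVD, then signed
# point deviations and the tilt of the plane normal from vertical.
import numpy as np

def plane_deviation(points_xyz):
    """Signed distances of points from their best-fit plane, plus the normal."""
    centroid = points_xyz.mean(axis=0)
    # The plane normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(points_xyz - centroid, full_matrices=False)
    normal = vt[-1]
    return (points_xyz - centroid) @ normal, normal

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    xy = rng.uniform(0, 20, size=(5000, 2))
    z = 0.002 * xy[:, 0] + rng.normal(0, 0.001, size=5000)   # slight slope + noise
    floor = np.column_stack([xy, z])
    deviation, normal = plane_deviation(floor)
    print("max deviation from best-fit plane: %.4f m" % np.abs(deviation).max())
    print("levelness (normal tilt from vertical): %.4f rad" % np.arccos(abs(normal[2])))
```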
Examples described herein provide a method for denoising data. The method includes receiving an image pair, a disparity map associated with the image pair, and a scanned point cloud associated with the image pair. The method includes generating, using a machine learning model, a predicted point cloud based at least in part on the image pair and the disparity map. The method includes comparing the scanned point cloud to the predicted point cloud to identify noise in the scanned point cloud. The method includes generating a new point cloud without at least some of the noise based at least in part on comparing the scanned point cloud to the predicted point cloud.
09 - Scientific and electric apparatus and instruments
38 - Telecommunications services
Goods & Services
Downloadable mobile application for construction progress
management in 360-degree views; downloadable construction
progress management software for documentation of
photographs, documents, organization, viewing, sharing, and
managing construction site progress in 360-degree views;
downloadable mobile application and computer software for
use in the safeguarding of digital files, including text,
graphics, still images, and multimedia files concerning the
development of construction projects; downloadable mobile
application and computer software for upload, storage,
retrieval, download, transmission and delivery of digital
content concerning the development of construction projects. Electronic transmission of digital files among internet
users for use in the development of construction projects;
internet services, namely, providing multiple-user access to
information on the internet concerning the development of
construction projects.
A scanner that can detect types of targets in a scan area includes a processor, a housing, and a 3D scanner disposed within the housing. The processor is configured to identify locations of one or more checkerboard targets disposed in the scan area by: identifying transition locations where adjacent segments on a single scan line transition from a first color to a second color; recording locations of the transition locations as first-to-second color transition locations; identifying and recording transition locations where adjacent segments on a single scan line transition from the second color to the first color as second-to-first color transition locations; forming a transition line through adjacent first-to-second color transition locations and adjacent second-to-first color transition locations; and identifying a location of a checkerboard target based on the transition line.
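The transition-finding step described above can be illustrated on a single scan line. The sketch below thresholds a 1D intensity profile as a stand-in for the scanner's per-point color/intensity values, which is an assumption of the example rather than the patented processing chain.

```python
# Find dark->bright and bright->dark transitions along one scan line,
# using a thresholded 1D intensity profile as a stand-in for scan data.
import numpy as np

def transitions(scan_line_intensity, threshold=0.5):
    """Indices where adjacent segments switch between the two colours."""
    binary = (np.asarray(scan_line_intensity) > threshold).astype(int)
    d = np.diff(binary)
    change = np.flatnonzero(d)                       # index before each change
    first_to_second = change[d[change] > 0] + 1      # dark -> bright
    second_to_first = change[d[change] < 0] + 1      # bright -> dark
    return first_to_second, second_to_first

if __name__ == "__main__":
    line = [0.1] * 5 + [0.9] * 5 + [0.1] * 5 + [0.9] * 5
    up, down = transitions(line)
    print("dark->bright at", up.tolist(), "bright->dark at", down.tolist())
```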
A system includes a first type of measurement device that captures first 2D images and a second type of measurement device that captures 3D scans. A 3D scan includes a point cloud and a second 2D image. The system also includes processors that register the first 2D images. The registering includes accessing the 3D scan that records at least a portion of the surrounding environment that is also captured by a first 2D image. Further, 2D features in the second 2D image are detected, and 3D coordinates from the point cloud are associated with the 2D features. 2D features are also detected in the first 2D image, and matching 2D features from the first 2D image and the second 2D image are identified. A position and orientation of the first 2D image are calculated in a coordinate system of the 3D scan using the matching 2D features.
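Once matched 2D features in the first image have inherited 3D coordinates from the point cloud, the image's position and orientation in the scan's coordinate system can be estimated from the 2D-3D correspondences. The sketch below uses a plain direct linear transform (no RANSAC, no refinement) on a synthetic camera, which is a generic stand-in for the pose calculation rather than the method claimed above.

```python
# Direct-linear-transform (DLT) sketch: estimate a 3x4 projection matrix
# from 2D-3D correspondences, then verify by reprojection.
import numpy as np

def dlt_projection_matrix(points_3d, points_2d):
    """Estimate P such that [u, v, 1]^T ~ P @ [X, Y, Z, 1]^T."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    return vt[-1].reshape(3, 4)               # null-space vector, up to scale

def project(P, points_3d):
    homo = np.column_stack([points_3d, np.ones(len(points_3d))]) @ P.T
    return homo[:, :2] / homo[:, 2:3]

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    pts_3d = rng.uniform(-1, 1, size=(20, 3)) + np.array([0, 0, 5])
    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)
    P_true = K @ np.column_stack([np.eye(3), [0.1, -0.2, 0.3]])
    pts_2d = project(P_true, pts_3d)
    P_est = dlt_projection_matrix(pts_3d, pts_2d)
    print(np.abs(project(P_est, pts_3d) - pts_2d).max())   # near-zero reprojection error
```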
A system includes a three-dimensional (3D) scanner, a camera, and one or more processors coupled with the 3D scanner and the camera. The processors capture a frame that includes a point cloud comprising a plurality of 3D scan points and a 2D image. A 3D scan point represents a distance of a point in a surrounding environment from the 3D scanner. A pixel of the 2D image represents a color of a point in the surrounding environment. The processors identify, using a machine learning model, a subset of pixels that represents a reflective surface in the 2D image. Further, for each pixel in the subset of pixels, one or more corresponding 3D scan points are determined. An updated point cloud is created in the frame by removing the corresponding 3D scan points from the point cloud.
H04N 13/25 - Image signal generators using stereoscopic image cameras using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; Image signal generators using stereoscopic image cameras using image signals from one sensor to control the characteristics of another sensor
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Downloadable software for the management of point clouds and images from a diverse range of sensors, namely, terrestrial and mobile laser scanners, drones, 360° cameras, and mobile phones. Providing temporary use of online non-downloadable computer software, namely, 3D point cloud processing software that connects hardware with cloud-based applications and services to allow users to georeference, align, merge, and classify point clouds and logically organize data
09 - Scientific and electric apparatus and instruments
Goods & Services
Electronic transmission of digital files among internet users for use in the development of construction projects; internet services, namely, providing multiple-user access to information on the internet concerning the development of construction projects. Downloadable mobile application software for construction progress management in 360-degree views; downloadable construction progress management software for documentation of photographs and documents and for organization, viewing, sharing, and managing of construction site progress in 360-degree views; Downloadable mobile application software and downloadable computer software for use in the safeguarding of digital files, including text, graphics, still images, and multimedia files concerning the development of construction projects; downloadable mobile application software and downloadable computer software for upload, storage, retrieval, download, transmission and delivery of digital content concerning the development of construction projects
09 - Scientific and electric apparatus and instruments
35 - Advertising and business services
37 - Construction and mining; installation and repair services
42 - Scientific, technological and industrial services, research and design
Goods & Services
(1) Handheld mapper, namely, surveying apparatus, sensors for the determination of distances for creating maps and floor plans of buildings; handheld scanner to capture and generate 2D and 3D floorplans and high-resolution data; handheld device, namely, 2D and 3D scanners, sensors for the determination of distances that enables rapid measurement and documentation of building floor plans in 2D and 3D; coordinate measuring machines (CMMs); portable articulated measurement devices for measuring physical properties of objects and locations; portable laser measurement scanners for measuring physical properties of objects and locations; computer software for use with CMMs and for use in measurement equipment for computer-aided manufacturing equipment, and software user manuals, all sold as a unit; computer software for use with laser measurement scanners, and user manuals, all sold as a unit; all of the aforementioned for use in the field of construction, manufacturing, and forensics. (1) Data processing services in the field of construction, manufacturing, forensics and coordinate measuring; data collection and retrieval services in the field of construction, manufacturing, forensics and coordinate measuring.
(2) Portable laser projection and measurement system installation, maintenance, and repair; consultation in the field of the installation, maintenance, and repair of portable laser projection and measurement systems; surveying apparatus and instruments for use in the field of construction, manufacturing, and forensics.
(3) Providing temporary use of online non-downloadable computer software used to display, modify, register, analyze, and provide measurements based on 3D data for use with laser measurement scanners, and user manuals, all sold as a unit; digitized 3D data capture services; mapping services, namely, digitized 3D data mapping capture; geophysical survey services; design and development of computer hardware and software for mapping and surveying; all of the aforementioned for use in the field of construction, manufacturing, and forensics; advice, information and consultancy services relating to all the aforesaid services; data storage.
42 - Scientific, technological and industrial services, research and design
Goods & Services
Electronic transmission of digital files among internet users; providing multiple-user access to global computer information networks concerning the development of construction projects. Hosting of digital content on the internet concerning the development and status of construction projects; Providing temporary use of non-downloadable web-based decentralized applications that enable users to manage and share digital images and related digital content concerning the development and status of construction projects; Developing and managing application software for delivery of digital multi-media content provided for construction project management for use on wireless mobile devices; development of computer database in the field of digital multi-media content provided for construction project management for use on wireless mobile devices; providing temporary use of online non-downloadable computer software for digital image processing