Computerized systems and methods are disclosed, including a computer system that executes software that may receive a geographic location having one or more coordinates of a structure, receive a validation of the structure location, and generate unmanned aircraft information based on the one or more coordinates of the validated location. The unmanned aircraft information may include an offset from the walls of the structure to direct an unmanned aircraft to fly an autonomous flight path offset from the walls, and camera control information to direct a camera of the unmanned aircraft to capture images of the walls at a predetermined time interval while the unmanned aircraft is flying the flight path. The computer system may receive images of the walls captured by the camera while the unmanned aircraft is flying the autonomous flight path and generate a structure report based at least in part on the images.
G01S 19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
B60R 1/27 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
B64C 39/02 - Aircraft not otherwise provided for characterised by special use
G06F 16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
G06F 16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
G06F 16/587 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
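The offset flight path and timed capture described in the abstract above can be sketched as follows. This is purely illustrative Python, not the patented method: the function names, the convex-footprint assumption, and the centroid-scaling shortcut (in place of a true polygon buffer) are all ours.

```python
import math

def offset_path(footprint, offset_m):
    """Standoff flight path: push each footprint vertex outward from the
    centroid by offset_m metres (assumes a convex footprint; a real
    planner would buffer the polygon properly)."""
    cx = sum(x for x, _ in footprint) / len(footprint)
    cy = sum(y for _, y in footprint) / len(footprint)
    path = []
    for x, y in footprint:
        d = math.hypot(x - cx, y - cy)
        s = (d + offset_m) / d
        path.append((cx + (x - cx) * s, cy + (y - cy) * s))
    return path

def capture_times(flight_duration_s, interval_s):
    """Camera triggers at the predetermined fixed interval."""
    return list(range(0, flight_duration_s + 1, interval_s))

square = [(0, 0), (10, 0), (10, 10), (0, 10)]   # wall footprint, metres
standoff = offset_path(square, 5.0)              # path offset 5 m from walls
times = capture_times(60, 15)                    # triggers over a 60 s leg
```

Each path vertex ends up exactly 5 m farther from the centroid than the corresponding wall corner, and the capture schedule fires every 15 seconds while the path is flown.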
Methods and systems are disclosed including an imaging system comprising an image-capturing system having two or more image-capturing devices and positioned on a platform over a predefined target area at a first altitude above the Earth, the image-capturing devices configured to capture a set of images depicting contiguous, substantially contiguous, or partially overlapping geographic coverage sub-areas within the predefined target area, the image-capturing devices having variable focal lengths and variable fields of view; and a computer system selectively adjusting the orientation of the field of view of at least one of the image-capturing devices based at least in part on a change in the focal length of the image-capturing device(s).
H04N 23/698 - Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
G01C 11/02 - Picture-taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
H04N 23/90 - Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
H04N 23/695 - Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
G03B 37/04 - Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe, with cameras or projectors providing touching or overlapping fields of view
G03B 15/00 - Special procedures for taking photographs; Apparatus therefor
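The geometric relationship behind the abstract above (a focal-length change alters the field of view, so camera orientations must be re-aimed to keep sub-areas contiguous) can be sketched with the thin-lens relation FOV = 2·atan(sensor / 2f). The function names and the equal-camera, edges-touch tilt rule are illustrative assumptions, not the patent's.

```python
import math

def field_of_view_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field of view from the thin-lens relation
    FOV = 2 * atan(sensor / (2 * f))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def adjacent_tilt_deg(sensor_width_mm, focal_length_mm):
    """Tilt for a neighbouring camera so its coverage sub-area just abuts
    the nadir camera's: with identical cameras the edges touch when the
    tilt equals one full field of view."""
    return field_of_view_deg(sensor_width_mm, focal_length_mm)

wide = field_of_view_deg(36, 18)    # 90 degrees at f = 18 mm
zoomed = field_of_view_deg(36, 36)  # narrower FOV after zooming in
```

Doubling the focal length narrows the field of view, so the tilt that keeps sub-areas contiguous shrinks with it — the orientation adjustment the abstract describes.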
Systems and methods for roof condition assessment from digital images using machine learning are disclosed, including receiving an image of a structure having roof characteristic(s), first pixel values depicting the structure, second pixel values outside of the structure depicting a background surrounding the structure, and first geolocation data; generating a synthetic shape image of the structure from the image using machine learning, including pixel values forming a synthetic outline shape, and having second geolocation data; mapping the synthetic shape onto the image, based on the first and second geolocation data, and changing the second pixel values so as to not depict the background; assessing roof characteristic(s) based on the first pixel values with a second machine learning algorithm resulting in a plurality of probabilities, each for a respective roof condition classification category, and determining a composite probability based upon the plurality of probabilities so as to classify the roof characteristic(s).
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
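The masking and composite-probability steps in the abstract above might look like the following sketch. The severity-weighted expectation is one plausible reading of "composite probability" — the abstract does not fix the formula — and every name here is hypothetical.

```python
def mask_background(pixels, shape_mask, fill=0):
    """Zero out pixel values outside the synthetic roof outline so only
    the structure contributes to the assessment."""
    return [[v if m else fill for v, m in zip(prow, mrow)]
            for prow, mrow in zip(pixels, shape_mask)]

def composite_score(probabilities, severities):
    """Collapse per-category condition probabilities into one composite
    value as a severity-weighted expectation."""
    return sum(p * s for p, s in zip(probabilities, severities))

masked = mask_background([[5, 6], [7, 8]], [[1, 0], [0, 1]])
probs = [0.7, 0.2, 0.1]        # e.g. good / fair / poor (illustrative)
severity = [0.0, 0.5, 1.0]
score = composite_score(probs, severity)
```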
4.
SYSTEMS AND METHODS FOR AUTOMATED DETECTION OF CHANGES IN EXTENT OF STRUCTURES USING IMAGERY
Systems and methods for automated detection of changes in extent of structures using imagery are disclosed, including a non-transitory computer readable medium storing computer executable code that when executed by a processor cause the processor to: align an outline of a structure at a first instance of time to pixels within an image depicting the structure, the image captured at a second instance of time; assess a degree of alignment between the outline and the pixels within the image depicting the structure, using a machine learning model to generate an alignment confidence score; determine an existence of a change in extent of the structure based upon the alignment confidence score indicating that the outline and the pixels within the image are not aligned; identify a shape of the change in extent of the structure; and store the shape of the change in extent of the structure.
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
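As a minimal stand-in for the learned alignment-confidence score in the abstract above, intersection-over-union between a rasterized outline and the pixels depicting the structure gives the flavour of the threshold decision. The IoU proxy and the names are assumptions; the patent uses a machine learning model, not IoU.

```python
def iou(mask_a, mask_b):
    """Intersection-over-union of two binary masks — a crude stand-in
    for the learned alignment confidence score."""
    inter = union = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            inter += 1 if (a and b) else 0
            union += 1 if (a or b) else 0
    return inter / union if union else 1.0

def extent_changed(outline_mask, image_mask, threshold=0.9):
    """Declare a change in extent when the alignment score falls below
    the confidence threshold."""
    return iou(outline_mask, image_mask) < threshold
```

A shrunken footprint scores low and is flagged; an unchanged one aligns perfectly and is not.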
Apparatuses, systems, methods, and medium are disclosed for precise geospatial structure geometry extraction from multi-view imagery, including a non-transitory computer readable medium storing computer executable code that when executed by a processor cause the processor to: receive an image of a structure having an outline, the image having pixels with first pixel values depicting the structure and second pixel values outside of the structure depicting a background of a geographic area surrounding the structure, and image metadata including first geolocation data; and generate a synthetic shape image of the structure from the image using a machine learning algorithm, the synthetic shape image including pixels having pixel values forming a synthetic shape of the outline, the synthetic shape image having second geolocation data derived from the first geolocation data.
Systems and methods are disclosed, including a non-transitory computer readable medium storing computer executable instructions that when executed by a processor cause the processor to identify a first image, a second image, and a third image, the first image overlapping the second image and the third image, the second image overlapping the third image; determine a first connectivity between the first image and the second image; determine a second connectivity between the first image and the third image; determine a third connectivity between the second image and the third image, the second connectivity being less than the first connectivity, the third connectivity being greater than the second connectivity; assign the first image, the second image, and the third image to a cluster based on the first connectivity and the third connectivity; conduct a bundle adjustment process on the cluster of the first image, the second image, and the third image.
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
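The connectivity-based clustering ahead of bundle adjustment, as described in the abstract above, could be sketched with union-find. Using the shared feature-match count as the connectivity measure is our assumption — the abstract leaves the measure open — and all names are illustrative.

```python
def cluster_images(n, matches, min_conn):
    """Union-find grouping: images whose pairwise connectivity (here the
    number of shared feature matches) meets min_conn share a cluster
    and would be bundle-adjusted together."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for (i, j), conn in matches.items():
        if conn >= min_conn:
            parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# image 0 overlaps 1 strongly, 2 weakly; 1 overlaps 2 moderately
matches = {(0, 1): 120, (0, 2): 5, (1, 2): 80}
labels = cluster_images(3, matches, min_conn=50)
```

With a threshold of 50, the strong 0-1 and 1-2 links pull all three images into one cluster despite the weak 0-2 connectivity; raising the threshold splits them apart.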
7.
SYSTEMS AND METHODS FOR AUTOMATED STRUCTURE MODELING FROM DIGITAL IMAGERY
Methods and systems for automated structure modeling from digital imagery are disclosed, including a method comprising receiving target digital images depicting a target structure; automatically identifying target elements of the target structure in the target digital images using convolutional neural network semantic segmentation; automatically generating a heat map model depicting a likelihood of a location of the target elements of the target structure; automatically generating a two-dimensional model or a three-dimensional model of the target structure based on the heat map model without further utilizing the target digital images; and extracting information regarding the target elements from the two-dimensional or the three-dimensional model of the target structure.
G06F 30/13 - Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
Methods and systems are disclosed for creating a computerized 3D model to include material property information for one or more regions of image textures of the computerized 3D model, including a method comprising: creating a computerized 3D model having image textures; examining a portion of a first image texture of the computerized 3D model having unknown material properties; assigning a material having a material property to the portion of the first image texture to indicate a physical material of a physical object represented by the portion of the first image texture, the material property having material property information about the physical materials; associating the material property information with the portion of the first image texture; and replacing the portion of the first image texture in the 3D model with a simulated texture of the assigned material.
G06T 15/00 - 3D [Three Dimensional] image rendering
G09G 5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory
Systems and methods for automated detection of changes in extent of structures using imagery are disclosed, including a non-transitory computer readable medium storing computer executable code that when executed by a processor cause the processor to: align, with an image classifier model, an outline of a structure at a first instance of time to pixels within an image depicting the structure captured at a second instance of time; assess a degree of alignment between the outline and the pixels depicting the structure, so as to classify similarities between the structure depicted within the pixels of the image and the outline using a machine learning model to generate an alignment confidence score; and determine an existence of a change in the structure based upon the alignment confidence score indicating a level of confidence below a predetermined threshold level of confidence that the outline and the pixels within the image are aligned.
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Image capture systems are disclosed, including an image capture system, comprising: an image capture device mounted on a moving platform, the image capture device having a sensor for capturing an aerial image having pixels; and a detection computer executing an abnormality detection algorithm for detecting an abnormality in the pixels of the aerial image immediately after the aerial image is captured by scanning the aerial image utilizing predetermined parameters indicative of characteristics of the abnormality and then automatically scheduling a re-shoot of the aerial image such that the re-shoot occurs prior to landing of the moving platform, wherein the abnormality detection algorithm causes the detection computer to scan the aerial image using pattern recognition techniques to detect the abnormality in the pixels of the aerial image.
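The detect-then-reschedule loop in the abstract above can be sketched with a deliberately simple abnormality test. Counting saturated pixels is one possible "predetermined parameter", not the patent's; the threshold values and queue are illustrative.

```python
def detect_abnormality(pixels, threshold=250, max_bad=3):
    """Flag a frame when too many pixel values reach saturation — one
    simple stand-in for the patent's pattern-recognition scan. A
    flagged frame would be queued for re-shoot before landing."""
    bad = sum(1 for row in pixels for v in row if v >= threshold)
    return bad > max_bad

reshoot_queue = []
frame = [[120, 255, 255], [255, 255, 130], [90, 100, 110]]
if detect_abnormality(frame):        # five saturated pixels -> abnormal
    reshoot_queue.append("frame_001")
```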
Computerized systems and methods are disclosed, including a computer system that executes software that may receive a geographic location having one or more coordinates of a structure, receive a validation of the structure location, and generate unmanned aircraft information based on the one or more coordinates of the validated location. The unmanned aircraft information may include an offset from the walls of the structure to direct an unmanned aircraft to fly an autonomous flight path offset from the walls, and camera control information to direct a camera of the unmanned aircraft to capture images of the walls at a predetermined time interval while the unmanned aircraft is flying the flight path. The computer system may receive images of the walls captured by the camera while the unmanned aircraft is flying the autonomous flight path and generate a structure report based at least in part on the images.
G01S 19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
G06F 16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
G06F 16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
G06T 11/60 - Editing figures and text; Combining figures or text
B64U 101/30 - UAVs specially adapted for particular uses or applications for imaging, photography or videography
Methods and systems are disclosed including an imaging system comprising an image-capturing system having two or more image-capturing devices and positioned on a platform over a predefined target area at a first altitude above the Earth, the image-capturing devices configured to capture a set of images depicting contiguous, substantially contiguous, or partially overlapping geographic coverage sub-areas within the predefined target area, the image-capturing devices having variable focal lengths and variable fields of view; and a computer system selectively adjusting the orientation of the field of view of at least one of the image-capturing devices based at least in part on a change in the focal length of the image-capturing device(s).
G01C 11/02 - Picture-taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
G03B 15/00 - Special procedures for taking photographs; Apparatus therefor
G03B 37/04 - Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe, with cameras or projectors providing touching or overlapping fields of view
13.
VARIABLE FOCAL LENGTH MULTI-CAMERA AERIAL IMAGING SYSTEM AND METHOD
Methods and systems are disclosed including an imaging system comprising an image-capturing system having two or more image-capturing devices and positioned on a platform over a predefined target area at a first altitude above the Earth, the image-capturing devices configured to capture a set of images depicting contiguous, substantially contiguous, or partially overlapping geographic coverage sub-areas within the predefined target area, the image-capturing devices having variable focal lengths and variable fields of view; and a computer system selectively adjusting the orientation of the field of view of at least one of the image-capturing devices based at least in part on a change in the focal length of the image-capturing device(s).
G01C 11/02 - Picture-taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
G03B 15/00 - Special procedures for taking photographs; Apparatus therefor
G03B 37/04 - Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe, with cameras or projectors providing touching or overlapping fields of view
14.
SYSTEMS AND METHODS FOR TAKING, PROCESSING, RETRIEVING, AND DISPLAYING IMAGES FROM UNMANNED AERIAL VEHICLES
Systems and methods for taking, processing, retrieving, and/or displaying images from unmanned aerial vehicles are disclosed, including an unmanned aerial vehicle, comprising: an image capture device; and a controller configured to: determine a flight plan of the unmanned aerial vehicle, the flight plan configured such that the unmanned aerial vehicle and fields of view of the image capture device are restricted to a geographic area within boundaries of a geographic location identified by coordinates of the geographic location; execute the flight plan; and capture, with the image capture device, one or more aerial images restricted to fields of view within the boundaries of the geographic location while executing the flight plan, such that items outside of the boundaries are not captured in the one or more aerial images.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
G05D 1/10 - Simultaneous control of position or course in three dimensions
G06F 16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
B64U 101/30 - UAVs specially adapted for particular uses or applications for imaging, photography or videography
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
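The geofencing constraint in the abstract above — restricting both the vehicle and its camera's fields of view to the identified boundary — rests on a point-in-polygon test. The ray-casting routine and the waypoint-clipping wrapper below are an illustrative sketch; none of the names come from the patent.

```python
def inside(point, polygon):
    """Ray-casting point-in-polygon test, used here to keep waypoints
    (and by extension camera footprints) inside the boundary."""
    x, y = point
    hit = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            hit = not hit
    return hit

def clip_plan(waypoints, boundary):
    """Drop any waypoint that would take the vehicle outside the
    geofence before the flight plan is executed."""
    return [w for w in waypoints if inside(w, boundary)]

boundary = [(0, 0), (10, 0), (10, 10), (0, 10)]
plan = clip_plan([(5, 5), (12, 5), (1, 9)], boundary)   # (12, 5) is outside
```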
15.
SYSTEMS AND METHODS FOR AUTOMATED STRUCTURE MODELING FROM DIGITAL IMAGERY
Methods and systems for automated structure modeling from digital imagery are disclosed, including a method comprising receiving target digital images depicting a target structure; automatically identifying target elements of the target structure in the target digital images using convolutional neural network semantic segmentation; automatically generating a heat map model depicting a likelihood of a location of the target elements of the target structure; automatically generating a two-dimensional model or a three-dimensional model of the target structure based on the heat map model without further utilizing the target digital images; and extracting information regarding the target elements from the two-dimensional or the three-dimensional model of the target structure.
G06F 30/13 - Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
Automated methods and systems are disclosed, including a method comprising: obtaining a first three-dimensional-data point cloud of a horizontal surface of an object of interest, the first three-dimensional-data point cloud having a first resolution and having a three-dimensional location associated with each point in the first three-dimensional-data point cloud; capturing one or more aerial image, at one or more oblique angle, depicting at least a vertical surface of the object of interest; analyzing the one or more aerial image with a computer system to determine three-dimensional locations of additional points on the object of interest; and updating the first three-dimensional-data point cloud with the three-dimensional locations of the additional points on the object of interest to create a second three-dimensional-data point cloud having a second resolution greater than the first resolution of the first three-dimensional-data point cloud.
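The densification step above — augmenting a top-down point cloud with wall points derived from oblique imagery — reduces, at its simplest, to a de-duplicating merge, with point count standing in (crudely) for resolution. This is a sketch under those assumptions; the names are ours.

```python
def densify(base_cloud, oblique_points):
    """Add oblique-image-derived points that are not already in the base
    (e.g. top-down lidar) cloud. The grown point count is a crude proxy
    for the higher second resolution in the abstract."""
    merged = list(base_cloud)
    seen = set(base_cloud)
    for p in oblique_points:
        if p not in seen:
            merged.append(p)
            seen.add(p)
    return merged

roof = [(0.0, 0.0, 3.0), (1.0, 0.0, 3.0)]          # horizontal-surface points
walls = [(0.0, 0.0, 1.5), (1.0, 0.0, 3.0)]         # second point already known
dense = densify(roof, walls)
```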
Computer systems and methods are described for automatically generating a 3D model, including, with computer processor(s), obtaining geo-referenced images representing the geographic location of a structure containing one or more real façade texture of the structure; locating a geographical position of real façade texture(s) of the structure; selecting base oblique image(s) from the images by analyzing image raster content of the real façade texture depicted in the images with selection logic; analyzing the real façade texture to locate a geographical position of at least one occlusion using pixel pattern recognition of the real façade texture; locating oblique image(s) having an unoccluded image characteristic of the occlusion in the real façade texture; applying the real façade texture to wire-frame data of the structure to create a 3D model of the structure; and applying the unoccluded image characteristic to the real façade texture to remove the occlusion from the real façade texture.
Systems and methods are disclosed for creating a mosaic image of two or more geo-referenced source images, the geo-referenced source images having the same orientation, based on a ground confidence map created by analyzing pixels of one or more of the geo-referenced source images, the ground confidence map having values and data indicative of particular geographic locations represented by the values, at least one of the values indicative of a statistical probability that the particular geographic locations represented by the values represents the ground; and using routes for steering mosaic cut lines based at least in part on the values indicative of the statistical probability that the particular geographic locations represented by the values represents the ground of the ground confidence map, such that the routes have an increased statistical probability of cutting through pixels representative of the ground versus routes not based on the ground confidence map.
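Steering cut lines through likely-ground pixels, as the abstract above describes, can be sketched as a dynamic-programming seam where the per-pixel cost is one minus the ground probability. The seam-carving formulation is our stand-in for the patent's route-steering, and every name is illustrative.

```python
def ground_seam(ground_conf):
    """Top-to-bottom cut line: per-pixel cost is 1 - P(ground), so the
    seam prefers pixels the confidence map says are ground."""
    rows, cols = len(ground_conf), len(ground_conf[0])
    cost = [[1 - p for p in row] for row in ground_conf]
    for r in range(1, rows):
        for c in range(cols):
            cost[r][c] += min(cost[r - 1][max(c - 1, 0):min(c + 2, cols)])
    # backtrack from the cheapest bottom-row pixel
    path = [min(range(cols), key=lambda c: cost[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = path[-1]
        lo, hi = max(c - 1, 0), min(c + 2, cols)
        path.append(min(range(lo, hi), key=lambda cc: cost[r][cc]))
    return path[::-1]

# middle column is confidently ground; the seam should follow it
seam = ground_seam([[0.1, 0.9, 0.1],
                    [0.1, 0.9, 0.1],
                    [0.1, 0.9, 0.1]])
```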
Systems and methods are disclosed, including a non-transitory computer readable medium storing computer executable instructions that when executed by a processor cause the processor to identify a first image, a second image, and a third image, the first image overlapping the second image and the third image, the second image overlapping the third image; determine a first connectivity between the first image and the second image; determine a second connectivity between the first image and the third image; determine a third connectivity between the second image and the third image, the second connectivity being less than the first connectivity, the third connectivity being greater than the second connectivity; assign the first image, the second image, and the third image to a cluster based on the first connectivity and the third connectivity; conduct a bundle adjustment process on the cluster of the first image, the second image, and the third image.
Systems and methods are disclosed, including a non-transitory computer readable medium storing computer executable instructions that when executed by a processor cause the processor to identify a first image, a second image, and a third image, the first image overlapping the second image and the third image, the second image overlapping the third image; determine a first connectivity between the first image and the second image; determine a second connectivity between the first image and the third image; determine a third connectivity between the second image and the third image, the second connectivity being less than the first connectivity, the third connectivity being greater than the second connectivity; assign the first image, the second image, and the third image to a cluster based on the first connectivity and the third connectivity; conduct a bundle adjustment process on the cluster of the first image, the second image, and the third image.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
21.
SYSTEMS FOR THE CLASSIFICATION OF INTERIOR STRUCTURE AREAS BASED ON EXTERIOR IMAGES
Methods and systems are disclosed, including a computer system configured to automatically determine home living areas from digital imagery, comprising receiving digital image(s) depicting an exterior surface of a structure with exterior features having feature classification(s) of an interior of the structure; processing the depicted exterior surface into exterior feature segments with an exterior surface feature classifier model, each of the exterior feature segments corresponding to exterior feature(s); projecting each of the plurality of exterior feature segments into a coordinate system based at least in part on geographic image metadata, the projected exterior feature segments forming a structure model; and generating a segmented classification map of the interior of the structure by fitting one or more geometric sections into the structure model in a position and orientation based at least in part on the plurality of exterior feature segments.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 20/17 - Terrestrial scenes taken from planes or by drones
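As a toy reading of the entry above, the link from classified exterior features to interior-area classes can be sketched as a lookup over projected segments. The feature-to-area mapping below is entirely illustrative — the patent fits geometric sections into a structure model, which this lookup only gestures at.

```python
# Illustrative mapping, not from the patent
EXTERIOR_TO_INTERIOR = {
    "window": "living area",
    "garage door": "garage",
    "vent": "attic",
}

def classify_sections(exterior_segments):
    """Assign each projected exterior feature segment an interior-area
    class — a minimal stand-in for fitting geometric sections into the
    structure model."""
    return [(seg_id, EXTERIOR_TO_INTERIOR.get(label, "unknown"))
            for seg_id, label in exterior_segments]

sections = classify_sections([(1, "window"), (2, "garage door"), (3, "awning")])
```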
22.
SYSTEMS FOR THE CLASSIFICATION OF INTERIOR STRUCTURE AREAS BASED ON EXTERIOR IMAGES
Methods and systems are disclosed, including a computer system configured to automatically determine home living areas from digital imagery, comprising receiving digital image(s) depicting an exterior surface of a structure with exterior features having feature classification(s) of an interior of the structure; processing the depicted exterior surface into exterior feature segments with an exterior surface feature classifier model, each of the exterior feature segments corresponding to exterior feature(s); projecting each of the plurality of exterior feature segments into a coordinate system based at least in part on geographic image metadata, the projected exterior feature segments forming a structure model; and generating a segmented classification map of the interior of the structure by fitting one or more geometric sections into the structure model in a position and orientation based at least in part on the plurality of exterior feature segments.
Apparatuses, systems, methods, and medium are disclosed for precise geospatial structure geometry extraction from multi-view imagery, including a non-transitory computer readable medium storing computer executable code that when executed by a processor cause the processor to: receive an image of a structure having an outline, the image having pixels with first pixel values depicting the structure and second pixel values outside of the structure depicting a background of a geographic area surrounding the structure, and image metadata including first geolocation data; and generate a synthetic shape image of the structure from the image using a machine learning algorithm, the synthetic shape image including pixels having pixel values forming a synthetic shape of the outline, the synthetic shape image having second geolocation data derived from the first geolocation data.
Apparatuses, systems, methods, and medium are disclosed for precise geospatial structure geometry extraction from multi-view imagery, including a non-transitory computer readable medium storing computer executable code that when executed by a processor cause the processor to: receive an image of a structure having an outline, the image having pixels with first pixel values depicting the structure and second pixel values outside of the structure depicting a background of a geographic area surrounding the structure, and image metadata including first geolocation data; and generate a synthetic shape image of the structure from the image using a machine learning algorithm, the synthetic shape image including pixels having pixel values forming a synthetic shape of the outline, the synthetic shape image having second geolocation data derived from the first geolocation data.
Methods and systems are disclosed, including a computer system configured to automatically determine home living areas from digital imagery, comprising receiving digital image(s) depicting an exterior surface of a structure with exterior features having feature classification(s) of an interior of the structure; processing the depicted exterior surface into exterior feature segments with an exterior surface feature classifier model, each of the exterior feature segments corresponding to exterior feature(s); projecting each of the plurality of exterior feature segments into a coordinate system based at least in part on geographic image metadata, the projected exterior feature segments forming a structure model; and generating a segmented classification map of the interior of the structure by fitting one or more geometric sections into the structure model in a position and orientation based at least in part on the plurality of exterior feature segments.
G06F 30/13 - Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
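The projection step described in the entry above can be sketched as follows. A GDAL-style six-element affine geotransform is assumed for the "geographic image metadata"; the abstract does not specify the metadata format, so this is an illustrative choice.

```python
def pixel_to_world(col, row, gt):
    """Apply a 6-element affine geotransform (origin_x, pixel_w, rot_x,
    origin_y, rot_y, pixel_h) to a pixel coordinate."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

def project_segments(segments, gt):
    """Project segments (lists of (col, row) pixel vertices) into the world
    coordinate system, forming a simple structure model."""
    return [[pixel_to_world(c, r, gt) for c, r in seg] for seg in segments]

# Example: 0.1 m pixels, no rotation, origin at a UTM-like easting/northing.
gt = (500000.0, 0.1, 0.0, 4000000.0, 0.0, -0.1)
model = project_segments([[(0, 0), (100, 0), (100, 50)]], gt)
```

Geometric sections for the interior classification map would then be fitted against the world-coordinate vertices in `model`.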
Apparatuses, systems, methods, and media are disclosed for precise geospatial structure geometry extraction from multi-view imagery, including a non-transitory computer readable medium storing computer executable code that, when executed by a processor, causes the processor to: receive an image of a structure having an outline, the image having pixels with first pixel values depicting the structure and second pixel values outside of the structure depicting a background of a geographic area surrounding the structure, and image metadata including first geolocation data; and generate a synthetic shape image of the structure from the image using a machine learning algorithm, the synthetic shape image including pixels having pixel values forming a synthetic shape of the outline, the synthetic shape image having second geolocation data derived from the first geolocation data.
Methods and systems for automated faux-manual image-marking of a digital image are disclosed, including a method comprising obtaining results of an automated analysis of one or more digital images indicative of determinations of structure abnormalities of one or more portions of a structure depicted in the one or more digital images; applying automatically, on the one or more digital images, with one or more computer processors, standardized markings indicative of the location in the image of the structure abnormalities of the structure depicted in the image; and generating, automatically with the one or more computer processors, one or more faux-manual markings by modifying one or more of the standardized markings, utilizing one or more image-manipulation algorithms, wherein the faux-manual markings mimic an appearance of manual markings on the structure in the real world.
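One simple way to realize the "faux-manual" modification described above is to jitter the vertices of a standardized marking with seeded pseudo-random offsets. The abstract does not name the image-manipulation algorithm, so this vertex-jitter approach is an illustrative assumption.

```python
import random

def faux_manual_marking(vertices, max_jitter=3.0, seed=0):
    """Displace each vertex of a standardized marking by a small seeded
    pseudo-random offset so machine-drawn edges look hand-drawn."""
    rng = random.Random(seed)
    return [(x + rng.uniform(-max_jitter, max_jitter),
             y + rng.uniform(-max_jitter, max_jitter))
            for x, y in vertices]

box = [(10, 10), (110, 10), (110, 60), (10, 60)]  # standardized rectangle
wobbly = faux_manual_marking(box)
```

A renderer would then draw the jittered polygon (possibly with varying stroke width) over the abnormality in the digital image.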
Systems and methods for roof condition assessment from digital images using machine learning are disclosed, including receiving an image of a structure having roof characteristic(s), first pixel values depicting the structure, second pixel values outside of the structure depicting a background surrounding the structure, and first geolocation data; generating a synthetic shape image of the structure from the image using machine learning, including pixel values forming a synthetic outline shape, and having second geolocation data; mapping the synthetic shape onto the image, based on the first and second geolocation data, and changing the second pixel values so as to not depict the background; assessing roof characteristic(s) based on the first pixel values with a second machine learning algorithm resulting in a plurality of probabilities, each for a respective roof condition classification category, and determining a composite probability based upon the plurality of probabilities so as to classify the roof characteristic(s).
G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
Systems and methods for roof condition assessment from digital images using machine learning are disclosed, including receiving an image of a structure having roof characteristic(s), first pixel values depicting the structure, second pixel values outside of the structure depicting a background surrounding the structure, and first geolocation data; generating a synthetic shape image of the structure from the image using machine learning, including pixel values forming a synthetic outline shape, and having second geolocation data; mapping the synthetic shape onto the image, based on the first and second geolocation data, and changing the second pixel values so as to not depict the background; assessing roof characteristic(s) based on the first pixel values with a second machine learning algorithm resulting in a plurality of probabilities, each for a respective roof condition classification category, and determining a composite probability based upon the plurality of probabilities so as to classify the roof characteristic(s).
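The composite-probability step in the entry above can be sketched as follows. The abstract does not define the combination rule, so ordered severity categories and an expectation-style composite are assumptions; the category names and weights are hypothetical.

```python
# Hypothetical severity weights for ordered roof condition categories.
SEVERITY = {"excellent": 0.0, "good": 0.25, "fair": 0.5, "poor": 0.75, "damaged": 1.0}

def composite_condition(probs):
    """probs: dict mapping condition category -> probability. Returns a
    composite score in [0, 1] (expected severity) and the argmax category."""
    total = sum(probs.values())
    composite = sum(SEVERITY[c] * p for c, p in probs.items()) / total
    return composite, max(probs, key=probs.get)

score, label = composite_condition(
    {"excellent": 0.05, "good": 0.15, "fair": 0.5, "poor": 0.2, "damaged": 0.1})
```

The scalar `score` supports thresholding (e.g., flag roofs above 0.6 for inspection) while `label` gives the single most likely condition class.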
Automated methods and systems for feature extraction are disclosed, including automated methods performed by at least one processor running computer executable instructions stored on at least one non-transitory computer readable medium, comprising determining and isolating an object of interest within a point cloud; forming a modified point cloud having one or more data points with first location coordinates of the object of interest; and generating a boundary outline having second location coordinates of the object of interest using spectral analysis of at least one section of at least one image identified with the first location coordinates and depicting the object of interest.
G06K 9/62 - Methods or arrangements for recognition using electronic means
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/56 - Extraction of image or video features relating to colour
Systems and methods for roof condition assessment from digital images using machine learning are disclosed, including receiving an image of a structure having roof characteristic(s), first pixel values depicting the structure, second pixel values outside of the structure depicting a background surrounding the structure, and first geolocation data; generating a synthetic shape image of the structure from the image using machine learning, including pixel values forming a synthetic outline shape, and having second geolocation data; mapping the synthetic shape onto the image, based on the first and second geolocation data, and changing the second pixel values so as to not depict the background; assessing roof characteristic(s) based on the first pixel values with a second machine learning algorithm resulting in a plurality of probabilities, each for a respective roof condition classification category, and determining a composite probability based upon the plurality of probabilities so as to classify the roof characteristic(s).
A method of automatically transforming a computerized 3D model having regions of images utilized as textures on one or more physical objects represented in the 3D model (such as building sides and roofs, walls, landscapes, mountain sides, trees and the like) to include material property information for one or more regions of the textures of the 3D model. In this method, image textures applied to the 3D model are examined by comparing, utilizing a computer, at least a portion of each image texture to entries in a palette of material entries. The material palette entry that best matches the one contained in the image texture is assigned to indicate a physical material of the physical object represented by the 3D model. Then, material property information is stored in the computerized 3D model for the image textures that are assigned a material palette entry.
G06T 15/00 - 3D [Three Dimensional] image rendering
G09G 5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory
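The palette-matching comparison in the material-property entry above can be sketched as a nearest mean-color lookup. The real comparison likely uses richer texture statistics; the RGB-distance metric and the palette entries here are illustrative assumptions.

```python
# Hypothetical material palette keyed by representative mean (R, G, B).
PALETTE = {
    "asphalt_shingle": (70, 70, 75),
    "red_clay_tile": (170, 80, 60),
    "vegetation": (60, 110, 55),
}

def mean_color(texture):
    """Average (R, G, B) of a list of pixel tuples."""
    n = len(texture)
    return tuple(sum(px[i] for px in texture) / n for i in range(3))

def best_material(texture):
    """Palette entry whose color is nearest (squared Euclidean distance)
    to the texture's mean color."""
    m = mean_color(texture)
    return min(PALETTE, key=lambda k: sum((a - b) ** 2 for a, b in zip(PALETTE[k], m)))

roof_patch = [(165, 85, 58), (175, 75, 62), (172, 78, 61)]
```

The winning palette key would then be stored in the 3D model as the material property for that texture region.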
Systems and methods are disclosed for using spatial filter to reduce bundle adjustment block size, including a method comprising: assigning a plurality of feature tracks to a voxel corresponding to a region of a geographic area, the voxel having a length, a width and a height, each feature track including a geographic coordinate within the region, a first image identifier identifying a first image, a second image identifier identifying a second image, a first pixel coordinate identifying a first location of a first feature in the first image, and a second pixel coordinate identifying a second location of the first feature within the second image; determining a quality metric value of the feature tracks assigned to the voxel; and conducting bundle adjustment on a subset of the feature tracks assigned to the voxel, the subset of the feature tracks based on the quality metric value.
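The spatial-filter idea above can be sketched as follows: bucket feature tracks into voxels by geographic coordinate, score each voxel's tracks with a quality metric, and keep only the best few per voxel for bundle adjustment. The abstract does not name the quality metric, so track length (number of observing images) is an assumption here.

```python
from collections import defaultdict

def voxel_key(coord, size):
    """Bucket a geographic (x, y, z) coordinate into a cubic voxel."""
    return tuple(int(c // size) for c in coord)

def filter_tracks(tracks, voxel_size=10.0, keep_per_voxel=2):
    """tracks: list of dicts with 'coord' (x, y, z) and 'observations'
    (list of (image_id, (pixel_x, pixel_y)) pairs). Keeps only the
    highest-quality tracks in each voxel."""
    buckets = defaultdict(list)
    for t in tracks:
        buckets[voxel_key(t["coord"], voxel_size)].append(t)
    kept = []
    for bucket in buckets.values():
        bucket.sort(key=lambda t: len(t["observations"]), reverse=True)
        kept.extend(bucket[:keep_per_voxel])
    return kept

tracks = [
    {"coord": (1.0, 1.0, 1.0), "observations": [("img1", (0, 0)), ("img2", (1, 1))]},
    {"coord": (2.0, 2.0, 2.0), "observations": [("img1", (5, 5)), ("img2", (6, 6)), ("img3", (7, 7))]},
    {"coord": (3.0, 3.0, 3.0), "observations": [("img1", (9, 9)), ("img2", (8, 8)), ("img3", (7, 6)), ("img4", (5, 4))]},
]
reduced = filter_tracks(tracks)
```

All three example tracks fall in the same 10-unit voxel, so only the two longest survive, shrinking the bundle-adjustment block.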
Systems and methods for automated detection of changes in extent of structures using imagery are disclosed, including a non-transitory computer readable medium storing computer executable code that, when executed by a processor, causes the processor to: align, with an image classifier model, a structure shape of a structure at a first instance of time to pixels within an aerial image depicting the structure captured at a second instance of time; assess a degree of alignment between the structure shape and the pixels, so as to classify similarities between the structure depicted within the pixels and the structure shape using a machine learning model to generate an alignment confidence score; and determine an existence of a change in the structure based upon the alignment confidence score indicating a level of confidence below a predetermined threshold level of confidence that the structure shape and the pixels within the aerial image are aligned.
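The change test in the entry above reduces to comparing a confidence score against a threshold. In the patent the confidence comes from a machine learning model; as a stand-in, this sketch uses intersection-over-union between the rasterized structure shape and the structure pixels, which is a hypothetical proxy, not the disclosed model.

```python
def iou(mask_a, mask_b):
    """Intersection-over-union of two flat binary masks."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 0.0

def structure_changed(shape_mask, image_mask, threshold=0.7):
    """Flag a change when the alignment confidence falls below threshold."""
    return iou(shape_mask, image_mask) < threshold

unchanged = [1, 1, 1, 1, 0, 0]   # structure pixels match the stored shape
shrunk = [1, 1, 0, 0, 0, 0]      # part of the structure is gone
```

A low score (here 0.5 for the shrunken footprint) falls below the 0.7 threshold and flags a change in the structure's extent.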
The present disclosure describes systems and processes, including processes in which first location data is received. Visual access to a first image corresponding to the first location data is provided, the first image including a roof structure of a building. A first computer input capable of signaling a designation from the user of a building roof structure location within the first image is provided. A designation of the building roof structure location within the first image is received. Responsive to receiving the designation of the building roof structure location, a second computer input capable of signaling user-acceptance of the building roof structure location within the first image is provided. Subsequent to receiving the user-acceptance confirming the designation of the building roof structure location, a report of the roof structure is provided.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G01B 11/28 - Measuring arrangements characterised by the use of optical techniques for measuring areas
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
G06F 30/13 - Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
39.
Systems and methods for automated detection of changes in extent of structures using imagery
Systems and methods for automated detection of changes in extent of structures using imagery are disclosed, including a non-transitory computer readable medium storing computer executable code that, when executed by a processor, causes the processor to: align, with an image classifier model, a structure shape of a structure at a first instance of time to pixels within an aerial image depicting the structure captured at a second instance of time; assess a degree of alignment between the structure shape and the pixels, so as to classify similarities between the structure depicted within the pixels and the structure shape using a machine learning model to generate an alignment confidence score; and determine an existence of a change in the structure based upon the alignment confidence score indicating a level of confidence below a predetermined threshold level of confidence that the structure shape and the pixels within the aerial image are aligned.
Systems and methods are disclosed for using spatial filter to reduce bundle adjustment block size, including a method comprising: assigning a plurality of feature tracks to a voxel corresponding to a region of a geographic area, the voxel having a length, a width and a height, each feature track including a geographic coordinate within the region, a first image identifier identifying a first image, a second image identifier identifying a second image, a first pixel coordinate identifying a first location of a first feature in the first image, and a second pixel coordinate identifying a second location of the first feature within the second image; determining a quality metric value of the feature tracks assigned to the voxel; and conducting bundle adjustment on a subset of the feature tracks assigned to the voxel, the subset of the feature tracks based on the quality metric value.
Systems and methods for automated detection of changes in extent of structures using imagery are disclosed, including a non-transitory computer readable medium storing computer executable code that, when executed by a processor, causes the processor to: align, with an image classifier model, a structure shape of a structure at a first instance of time to pixels within an aerial image depicting the structure captured at a second instance of time; assess a degree of alignment between the structure shape and the pixels, so as to classify similarities between the structure depicted within the pixels and the structure shape using a machine learning model to generate an alignment confidence score; and determine an existence of a change in the structure based upon the alignment confidence score indicating a level of confidence below a predetermined threshold level of confidence that the structure shape and the pixels within the aerial image are aligned.
Image capture systems including a moving platform; an image capture device, mounted on the moving platform, having a sensor for capturing an image, the image having pixels; and a detection computer executing an abnormality detection algorithm for detecting an abnormality in the pixels of the image immediately after the image is captured by scanning the image utilizing predetermined parameters indicative of characteristics of the abnormality and then automatically and immediately causing a re-shoot of the image.
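The post-capture check described above can be sketched as a pixel scan against predetermined parameters. The specific abnormality and thresholds are assumptions; here the check flags a sun-glint hot spot as an excessive fraction of saturated pixels.

```python
def needs_reshoot(pixels, saturation=255, max_fraction=0.02):
    """pixels: flat iterable of grayscale values. True when the fraction of
    saturated pixels exceeds the allowed maximum (predetermined parameter)."""
    values = list(pixels)
    saturated = sum(1 for p in values if p >= saturation)
    return saturated / len(values) > max_fraction

clean = [120] * 98 + [255] * 2    # 2% saturated: within tolerance
glint = [120] * 90 + [255] * 10   # 10% saturated: triggers a re-shoot
```

On the moving platform, a `True` result would immediately command the capture device to re-shoot while the aircraft is still over the scene.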
A method for creating image products includes the following steps. Image data and positional data corresponding to the image data are captured and processed to create geo-referenced images. Edge detection procedures are performed on the geo-referenced images to identify edges and produce geo-referenced, edge-detected images. The geo-referenced, edge-detected images are saved in a database. A user interface to view and interact with the geo-referenced image is also provided such that the user can consistently select the same Points of Interest between multiple interactions and multiple users.
Image processing systems and methods are disclosed, including an image processing system comprising a computer running image processing software causing the computer to: divide an oblique aerial image into a plurality of sections; choose reference aerial image(s), having a consistent color distribution, for a first section and a second section; create a color-balancing transformation for the first and second sections of the oblique aerial image such that the first color distribution of the first section matches the consistent color distribution of the chosen reference aerial image and the second color distribution of the second section matches the consistent color distribution of the chosen reference aerial image; and color-balance pixel(s) in the first and second sections of the oblique aerial image, such that at least one color-balancing transformation of the first and second sections matches the consistent color distribution of the reference aerial image(s).
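A per-section color-balancing transformation of the kind described above can be sketched with moment matching: scale and shift a section's channel values so their mean and standard deviation match the reference image's. The abstract does not specify the transform, so moment matching is an illustrative assumption.

```python
import statistics

def balance_channel(section, ref_mean, ref_std):
    """Linearly remap one color channel of a section so its mean and
    standard deviation match the reference image's distribution."""
    mean = statistics.fmean(section)
    std = statistics.pstdev(section) or 1.0  # guard against flat sections
    return [(v - mean) / std * ref_std + ref_mean for v in section]

section = [10.0, 20.0, 30.0]  # one channel of one image section
balanced = balance_channel(section, ref_mean=100.0, ref_std=8.0)
```

Applying the same per-channel remap independently to each section drives both sections toward the reference's consistent color distribution.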
Computer systems and methods are described for automatically generating a 3D model, including, with computer processor(s), obtaining geo-referenced images representing the geographic location of a structure containing one or more real façade texture of the structure; locating a geographical position of real façade texture(s) of the structure; selecting base oblique image(s) from the images by analyzing image raster content of the real façade texture depicted in the images with selection logic; analyzing the real façade texture to locate a geographical position of at least one occlusion using pixel pattern recognition of the real façade texture; locating oblique image(s) having an unoccluded image characteristic of the occlusion in the real façade texture; applying the real façade texture to wire-frame data of the structure to create a 3D model of the structure; and applying the unoccluded image characteristic to the real façade texture to remove the occlusion from the real façade texture.
Automated methods and systems are disclosed, including a method comprising: obtaining a first three-dimensional-data point cloud of a horizontal surface of an object of interest, the first three-dimensional-data point cloud having a first resolution and having a three-dimensional location associated with each point in the first three-dimensional-data point cloud; capturing one or more aerial image, at one or more oblique angle, depicting at least a vertical surface of the object of interest; analyzing the one or more aerial image with a computer system to determine three-dimensional locations of additional points on the object of interest; and updating the first three-dimensional-data point cloud with the three-dimensional locations of the additional points on the object of interest to create a second three-dimensional-data point cloud having a second resolution greater than the first resolution of the first three-dimensional-data point cloud.
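The updating step in the entry above can be sketched as merging the image-derived points into the original cloud. The duplicate-suppression rule (skip points closer than a minimum separation, via brute-force distance checks) is an illustrative assumption, not the disclosed method.

```python
def merge_point_clouds(base, extra, min_separation=0.05):
    """Append image-derived points to a LIDAR cloud, skipping any point
    closer than min_separation to an existing one (brute-force check)."""
    merged = list(base)
    for p in extra:
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) > min_separation ** 2
               for q in merged):
            merged.append(p)
    return merged

lidar = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]          # sparse horizontal points
from_imagery = [(0.0, 0.0, 0.01), (0.0, 0.0, 5.0)]  # wall points from oblique images
dense = merge_point_clouds(lidar, from_imagery)
```

The near-duplicate ground point is dropped while the new vertical-surface point is kept, so the second cloud has strictly more detail than the first.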
Computerized systems and methods are disclosed, including a computer system that executes software that may receive a geographic location having one or more coordinates of a structure, receive a validation of the structure location, and generate unmanned aircraft information based on the one or more coordinates of the validated location. The unmanned aircraft information may include an offset from the walls of the structure to direct an unmanned aircraft to fly an autonomous flight path offset from the walls, and camera control information to direct a camera of the unmanned aircraft to capture images of the walls at a predetermined time interval while the unmanned aircraft is flying the flight path. The computer system may receive images of the walls captured by the camera while the unmanned aircraft is flying the autonomous flight path and generate a structure report based at least in part on the images.
G06F 16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
G06F 16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
G01S 19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
G06T 11/60 - Editing figures and text; Combining figures or text
A light deflection system for a LIDAR system on an aerial vehicle and a method of use are disclosed herein. The light deflection system includes a light deflection element having a first end and a second end. The light deflection element is rotatable and balanced about an axis extending from the first end to the second end. The light deflection element has a first side with a first reflective surface at a first angle in relation to the axis that deflects light in a nadir direction relative to the aerial vehicle. The light deflection element also has a second side having a second reflective surface at a second angle in relation to the axis. The first angle is different from the second angle and is configured to deflect light at an oblique angle relative to the aerial vehicle.
Systems and methods for creating oblique-mosaic image(s) for geographical area(s) are disclosed, including a computer system running software that when executed causes the system to create a mathematical model of a sensor of a virtual camera having an elevation greater than an elevation of a desired geographical area to be imaged, the mathematical model having an oblique-mosaic pixel map; determine surface locations for pixels included in the oblique-mosaic pixel map; select source oblique images of the surface locations of the pixels captured at an oblique angle and compass direction similar to an oblique angle and compass direction of the virtual camera; and reproject source oblique image pixels for pixels included in the oblique-mosaic pixel map such that reprojected pixels have differing sizes determined by matching projections from the virtual camera so as to present an oblique appearance, and thereby create the oblique-mosaic image of the desired geographical area.
Systems and methods are disclosed including a moving platform system suitable for mounting and use on a moving platform for communicating in real-time, comprising: a position system monitoring location of the moving platform and generating a sequence of time-based position data; a non-line of sight communication system; a high-speed line of sight communication system; and a computer system monitoring an availability of the non-line of sight communication system and the high-speed line of sight communication system, initiating connections when the non-line of sight communication system and the high-speed line of sight communication system are available, and receiving the sequence of time-based position data and transmitting it via at least one of the currently available non-line of sight communication system and the high-speed line of sight communication system.
Automated methods and systems are disclosed, including a method comprising: capturing images and three-dimensional LIDAR data of a geographic area with an image capturing device and a LIDAR system, the images depicting an object of interest and the three-dimensional LIDAR data including the object of interest, the image capturing device capturing the images of a vertical surface of the object of interest at one or more oblique angle, and the LIDAR system capturing the three-dimensional LIDAR data of a horizontal surface of the object of interest at a nadir angle; analyzing the images with a computer system to determine three dimensional locations of points on the object of interest; and updating the three-dimensional LIDAR data with the three dimensional locations of points on the object of interest determined by analyzing the images to create a 3D point cloud having a resolution greater than a resolution of the three-dimensional LIDAR data.
Systems and methods are disclosed for creating a mosaic image of two or more geo-referenced source images, the geo-referenced source images having the same orientation, based on a ground confidence map created by analyzing pixels of one or more of the geo-referenced source images, the ground confidence map having values and data indicative of particular geographic locations represented by the values, at least one of the values indicative of a statistical probability that the particular geographic locations represented by the values represents the ground; and using routes for steering mosaic cut lines based at least in part on the values indicative of the statistical probability that the particular geographic locations represented by the values represents the ground of the ground confidence map, such that the routes have an increased statistical probability of cutting through pixels representative of the ground versus routes not based on the ground confidence map.
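Steering a cut line with a ground confidence map, as described above, can be sketched as a shortest-path search: treat each cell's (1 - ground probability) as a traversal cost and run Dijkstra so the seam prefers likely-ground pixels. The grid layout and 4-connectivity are illustrative assumptions.

```python
import heapq

def steer_cut_line(conf, start, goal):
    """conf: 2D list of ground-confidence values in [0, 1]. Returns the
    lowest-cost 4-connected path of (row, col) cells from start to goal,
    where entering a cell costs (1 - confidence)."""
    rows, cols = len(conf), len(conf[0])
    best = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        cost, cell = heapq.heappop(heap)
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nxt = cost + (1.0 - conf[nr][nc])
                if nxt < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = nxt
                    prev[(nr, nc)] = cell
                    heapq.heappush(heap, (nxt, (nr, nc)))
    path, cell = [], goal
    while cell != start:
        path.append(cell)
        cell = prev[cell]
    path.append(start)
    return path[::-1]

# High ground confidence (0.9) down the left column and along the bottom row.
grid = [[0.9, 0.1, 0.1],
        [0.9, 0.1, 0.1],
        [0.9, 0.9, 0.9]]
seam = steer_cut_line(grid, (0, 0), (2, 2))
```

The seam hugs the high-confidence cells, increasing the statistical probability that the mosaic cut runs through ground rather than through buildings.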
The present disclosure describes systems and processes, including processes in which first location data is received. Visual access to a first image corresponding to the first location data is provided, the first image including a roof structure of a building. A first computer input capable of signaling a designation from the user of a building roof structure location within the first image is provided. A designation of the building roof structure location within the first image is received. Responsive to receiving the designation of the building roof structure location, a second computer input capable of signaling user-acceptance of the building roof structure location within the first image is provided. Subsequent to receiving the user-acceptance confirming the designation of the building roof structure location, a report of the roof structure is provided.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G01B 11/28 - Measuring arrangements characterised by the use of optical techniques for measuring areas
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 30/13 - Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
54.
Automated system and methodology for feature extraction
Automated methods and systems for feature extraction are disclosed, including automated methods performed by at least one processor running computer executable instructions stored on at least one non-transitory computer readable medium, comprising determining and isolating an object of interest within a point cloud; forming a modified point cloud having one or more data points with first location coordinates of the object of interest; and generating a boundary outline having second location coordinates of the object of interest using spectral analysis of at least one section of at least one image identified with the first location coordinates and depicting the object of interest.
A computerized system is disclosed. The computer system executes software that receives a geographic location having one or more coordinates of a roof, receives a validation of the location of the roof, and generates unmanned aircraft information based on the one or more coordinates of the validated location. The unmanned aircraft information includes an offset from the roof to direct an unmanned aircraft to fly an autonomous flight path above the roof, and camera control information to direct a camera of the unmanned aircraft to capture images of the roof at a predetermined time interval while the unmanned aircraft is flying the flight path. The computer system receives images of the roof captured by the camera while the unmanned aircraft is flying the autonomous flight path and generates a structure report for the roof based at least in part on the images.
G06F 16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
G06F 16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
G01S 19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
G06T 11/60 - Editing figures and text; Combining figures or text
Methods and systems are disclosed including a method comprising, with one or more computer processors, associating geographic position data and orientation data of the one or more video capture devices with each video frame of a geographic area; analyzing the geographic position data and orientation data and the video frames to generate geo-referencing data for pixels of the video frames; determining a geographical boundary of the video frame from the geo-referencing data; receiving, one or more layers of geographic information system (GIS) data using the determined geographical boundary of the video frame; and determining overlay position of the geographic information system (GIS) data on the video frames in real time based at least in part on the geo-referencing data; and overlaying at least a portion of the geographic information system (GIS) data on the video frames based on the overlay position.
Image capture systems including a moving platform; an image capture device having a sensor for capturing an image, the image having pixels, mounted on the moving platform; and a detection computer executing an abnormality detection algorithm for detecting an abnormality in the pixels of the image immediately after the image is captured by scanning the image utilizing predetermined parameters indicative of characteristics of the abnormality and then automatically and immediately causing a re-shoot of the image.
A method of automatically transforming a computerized 3D model having regions of images utilized as textures on one or more physical objects represented in the 3D model (such as building sides and roofs, walls, landscapes, mountain sides, trees and the like) to include material property information for one or more regions of the textures of the 3D model. In this method, image textures applied to the 3D model are examined by comparing, utilizing a computer, at least a portion of each image texture to entries in a palette of material entries. The material palette entry that best matches the one contained in the image texture is assigned to indicate a physical material of the physical object represented by the 3D model. Then, material property information is stored in the computerized 3D model for the image textures that are assigned a material palette entry.
G06T 15/00 - 3D [Three Dimensional] image rendering
G09G 5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory
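The palette-matching step in the abstract above (assigning each image texture the best-matching material entry) could be sketched as a nearest-color lookup. The palette values and the squared-Euclidean distance metric are assumptions for illustration, not the patented comparison.

```python
# Illustrative material palette: each entry carries a representative RGB
# color; a texture region is assigned the entry nearest its mean color.

MATERIAL_PALETTE = {
    "asphalt shingle": (70, 70, 75),
    "red clay tile":   (178, 80, 56),
    "vegetation":      (60, 110, 50),
    "concrete":        (160, 160, 155),
}

def mean_color(pixels):
    """Average RGB of a list of (r, g, b) pixels."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def assign_material(pixels):
    """Return the palette entry closest (squared Euclidean) to the region."""
    region = mean_color(pixels)
    return min(
        MATERIAL_PALETTE,
        key=lambda name: sum(
            (c - m) ** 2 for c, m in zip(region, MATERIAL_PALETTE[name])
        ),
    )
```

The assigned entry is what would then be stored in the 3D model as the material property for that texture region.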
Computer systems and methods are described for automatically generating a 3D model, including identifying wire-frame data of a structure within an area of interest; obtaining, using a geographical location of the structure, multiple geo-referenced images representing the geographic location of the structure and containing one or more real façade texture of the structure; locating a geographical position of one or more real façade texture of the structure; selecting one or more base oblique image from the multiple geo-referenced images by analyzing image raster content of the real façade texture depicted in the multiple geo-referenced images with selection logic, the selection logic analyzing at least two factors of each of the multiple geo-referenced images; and, applying the real façade texture of the one or more base oblique image to the wire-frame data of the structure to create a three dimensional model providing a real-life representation of physical characteristics of the structure.
Image processing systems and methods are disclosed, including an image processing system comprising a computer running image processing software causing the computer to divide an oblique aerial image into a plurality of sections having different color distributions, choose a first reference aerial image, having a consistent color distribution, for the first section, color-balance at least one pixel in the first section of the oblique aerial image, such that the first color distribution of the first section matches the consistent color distribution of the first reference aerial image, by performing color-balancing transformations for color bands for each of the at least one pixel in the first section.
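A minimal sketch of the per-band color-balancing transformation described above: shift and scale one band of a section so its mean and standard deviation match the reference image's. This global mean/std transfer is a common stand-in for the per-pixel transformations the abstract describes; the function name and approach are illustrative assumptions.

```python
import statistics

def balance_band(section, reference):
    """Transform one color band of `section` (a list of 0-255 values) so
    its distribution matches that of `reference` (same-length band)."""
    s_mean, s_std = statistics.mean(section), statistics.pstdev(section)
    r_mean, r_std = statistics.mean(reference), statistics.pstdev(reference)
    scale = r_std / s_std if s_std else 1.0
    out = []
    for v in section:
        # Center on the section mean, rescale, re-center on the reference mean.
        balanced = (v - s_mean) * scale + r_mean
        out.append(max(0, min(255, round(balanced))))
    return out
```

Running this once per color band for each section is the shape of the "color-balancing transformations for color bands" the abstract refers to.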
Methods and systems are disclosed including a computer storage medium, comprising instructions that when executed by one or more processors included in an Unmanned Aerial Vehicle (UAV), cause the UAV to perform operations, comprising: receiving, by the UAV, a flight plan configured to direct the UAV to fly a flight path having a plurality of waypoints adjacent to and above a structure and to capture sensor data of the structure from a camera on the UAV while the UAV is flying the flight path; adjusting an angle of an optical axis of the camera mounted to a gimbal to a predetermined angle within a range of 25 degrees to 75 degrees relative to a downward direction, and capturing sensor data of at least a portion of a roof of the structure with the optical axis of the camera aligned with at least one predetermined location on the structure.
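The camera-angle adjustment above constrains the optical axis to a 25-75 degree window relative to straight down; that constraint can be sketched as a simple clamp (the constant and function names are illustrative):

```python
# Allowed oblique range for the gimbal-mounted camera's optical axis,
# in degrees from the downward direction, per the abstract above.
MIN_ANGLE_DEG = 25.0
MAX_ANGLE_DEG = 75.0

def gimbal_angle(requested_deg):
    """Clamp a requested optical-axis angle into the allowed oblique range."""
    return max(MIN_ANGLE_DEG, min(MAX_ANGLE_DEG, requested_deg))
```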
A computerized system, comprising: a computer system having an input unit, a display unit, one or more processors and one or more non-transitory computer readable medium, the one or more processors executing image display and analysis software to cause the one or more processors to: receive an identification of a structure from the input device, the structure having multiple sides, an outline, and a height; obtain characteristics of a camera mounted onto an unmanned aircraft; generate unmanned aircraft information including: flight path information configured to direct the unmanned aircraft to fly a flight path around the structure that is laterally and vertically offset from the structure, the lateral and vertical offset being dependent upon the height of the structure, an orientation of the camera relative to the unmanned aircraft, and the characteristics of the camera; and, store the unmanned aircraft information on the one or more non-transitory computer readable medium.
G01S 19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06F 16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
G06F 16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
G06T 11/60 - Editing figures and text; Combining figures or text
H04N 5/445 - Receiver circuitry for displaying additional information
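One way the height-dependent offsets in the record above could work is to stand off far enough that a wall of the given height fills a chosen fraction of the camera's vertical field of view, with the camera flown at mid-wall height. The geometry, fill fraction, and names below are assumptions for illustration only.

```python
import math

def lateral_offset(structure_height_m, vertical_fov_deg, fill=0.8):
    """Standoff distance so the wall spans `fill` of the vertical FOV.

    Assumes the optical axis is level and centered on the wall.
    """
    half_fov = math.radians(vertical_fov_deg) / 2.0
    visible_height = structure_height_m / fill
    return (visible_height / 2.0) / math.tan(half_fov)

def vertical_offset(structure_height_m):
    """Fly the camera at mid-wall height to keep the wall centered."""
    return structure_height_m / 2.0
```

With a 90-degree vertical FOV and the wall exactly filling the frame, a 10 m wall gives a 5 m standoff, which matches the intuition that both offsets scale with structure height as the abstract states.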
A computerized system, comprising: a computer system having an input unit, a display unit, one or more processors and one or more non-transitory computer readable medium, the one or more processors executing image display and analysis software to cause the one or more processors to: receive an identification of a structure from the input device, the structure having multiple sides, an outline, and a height; obtain characteristics of a camera mounted onto an unmanned aircraft; generate unmanned aircraft information including: flight path information configured to direct the unmanned aircraft to fly a flight path around the structure that is laterally and vertically offset from the structure, the lateral and vertical offset being dependent upon the height of the structure, an orientation of the camera relative to the unmanned aircraft, and the characteristics of the camera; and, store the unmanned aircraft information on the one or more non-transitory computer readable medium.
G06F 17/30 - Information retrieval; Database structures therefor
G06T 11/60 - Editing figures and text; Combining figures or text
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
G01S 19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
Computer systems and methods are described for automatically generating a 3D model, including locating a geographical location of a structure using wire-frame data of the structure; obtaining, using the geographical location of the structure, geo-referenced images representing the geographic location of the structure and containing one or more real façade texture of the structure; locating a geographical position of one or more real façade texture of the structure; selecting one or more base oblique image from the geo-referenced images by analyzing, with selection logic, image raster content of the real façade texture depicted in the multiple geo-referenced images, the selection logic using a factorial analysis of the image raster content; and applying the real façade texture of the base oblique image to the wire-frame data of the structure to create a three dimensional model providing a real-life representation of physical characteristics of the structure.
A method of automatically transforming a computerized 3D model having regions of images utilized as textures on one or more physical objects represented in the 3D model (such as building sides and roofs, walls, landscapes, mountain sides, trees and the like) to include material property information for one or more regions of the textures of the 3D model. In this method, image textures applied to the 3D model are examined by comparing, utilizing a computer, at least a portion of each image texture to entries in a palette of material entries. The material palette entry that best matches the one contained in the image texture is assigned to indicate a physical material of the physical object represented by the 3D model. Then, material property information is stored in the computerized 3D model for the image textures that are assigned a material palette entry.
G06T 15/00 - 3D [Three Dimensional] image rendering
G09G 5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory
Image capture systems including a moving platform; an image capture device having a sensor for capturing an image, the image having pixels, mounted on the moving platform; and a detection computer executing an abnormality detection algorithm for detecting an abnormality in the pixels of the image immediately after the image is captured by scanning the image utilizing predetermined parameters indicative of characteristics of the abnormality and then automatically and immediately causing a re-shoot of the image.
A computer system running image processing software receives an identification of a desired scene of a geographical area for which an oblique-mosaic image is desired including one or more geometry parameters of a virtual camera; creates a mathematical model of the virtual camera having mathematical values that define the camera geometry parameters that configure the model to capture the geographical area, and looking down at an oblique angle; creates a ground elevation model of the ground and vertical structures within the oblique-mosaic pixel map, wherein source images were captured at an oblique angle and compass direction similar to the oblique angle and compass direction of the virtual camera; and reprojects, with the mathematical model, source oblique image pixels of the overlapping source images for pixels included in the oblique-mosaic pixel map using the ground elevation model to thereby create the oblique-mosaic image of the geographical area.
A system and method for generating multi-3D perspective floor plans having real-life physical characteristics. The multi-3D perspective floor plans may be generated using image data and related to a floor plan of a structure.
G06F 30/13 - Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04W 4/02 - Services making use of location information
G01C 15/00 - Surveying instruments or accessories not provided for in groups
The present disclosure describes systems and processes, including processes in which first location data is received. Visual access to a first image corresponding to the first location data is provided, the first image including a roof structure of a building. A first computer input capable of signaling a designation from the user of a building roof structure location within the first image is provided. A designation of the building roof structure location within the first image is received. Responsive to receiving the designation of the building roof structure location, a second computer input capable of signaling user-acceptance of the building roof structure location within the first image is provided. Responsive to receiving the user-acceptance confirming the designation of the building roof structure location, visual access to one or more second images corresponding to geographic location coordinates of the building roof structure location is provided.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
Systems and methods for creating a ground confidence map of a geographic area, comprising the steps of creating a ground confidence map of a geographic area, the ground confidence map having a plurality of pixels with each pixel corresponding to a particular geographic location; assigning the pixels in the ground confidence map with pixel values indicative of composite ground confidence scores; and storing pixel values indicative of a statistical probability that the geographical location represented by the particular pixels represent the ground.
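The composite ground-confidence scores above could be sketched as combining several per-source ground "votes" for each pixel into one probability-like value. Simple averaging is an assumption here; the abstract only says the stored pixel value reflects a statistical probability that the location is ground.

```python
# Hypothetical sketch of a composite ground-confidence map. Each pixel
# carries a list of per-source confidences in [0, 1] (e.g. from different
# overlapping images or elevation tests), combined here by averaging.

def composite_ground_score(votes):
    """Combine per-source ground confidences for one pixel."""
    return sum(votes) / len(votes)

def ground_confidence_map(vote_grid):
    """vote_grid[row][col] is a list of per-source scores for that pixel;
    returns a grid of composite scores, one per geographic location."""
    return [[composite_ground_score(v) for v in row] for row in vote_grid]
```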
37 - Construction and mining; installation and repair services
41 - Education, entertainment, sporting and cultural services
42 - Scientific, technological and industrial services, research and design
Goods & Services
Providing construction and repair data, information and reports regarding buildings and other structures to homeowners, contractors, insurance adjusters, insurance companies, and infrastructure managers; providing damage assessment data, information and reports regarding buildings and other structures to homeowners, contractors, insurance adjusters, insurance companies, and infrastructure managers.
Digital imaging services; aerial imagery services; educational services, namely, conducting conferences and workshops in the field of geo-referenced, orthogonal, and oblique aerial imaging.
Providing information and analysis relating to measurements of roofs, walls, windows, solar panels, other architectural or design features of buildings or other structures, vegetation, or terrain obtained from aerial imagery for use by professionals in the fields of insurance, finance, real estate, assessment, construction, and public safety and planning and infrastructure management; providing temporary use of on-line non-downloadable software for photogrammetric analysis; providing online application services featuring software for use in generating damage assessments, cost estimates, material lists, and proposals for construction and repair of buildings and other structures, and for ordering materials from building material suppliers; providing information and analysis relating to measurements of roofs, walls, windows, solar panels, other architectural or design features of buildings or other structures, vegetation, or terrain made on orthogonal and oblique imagery for purposes of construction and repair by third parties; providing computer modeling services based upon aerial imagery, namely, creating from aerial imagery computer representations of roofs, walls, windows, solar panels, other architectural or design features of buildings or other structures, vegetation, or terrain.
74. System and method for performing sensitive geo-spatial processing in non-sensitive operator environments
Methods and systems are disclosed including dividing a sensitive geographic region into three or more work regions; selecting geo-referenced aerial images for each particular work region such that at least a portion of the particular work region is depicted; transmitting, without information indicative of the geographic location depicted, a first of the images for a first of the work regions to an operator user device; receiving, from the device, at least one image coordinate selected within the first image by the operator, wherein the image coordinate is a relative coordinate based on a unique pixel location within the image raster content of the first image; and transmitting, without information indicative of the geographic location depicted, a second of the images for a second of the work regions to the operator user device, wherein the first and second work regions are not geographically contiguous.
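The privacy-preserving round trip above hinges on the operator returning only relative pixel coordinates, with the geographic meaning recovered server-side from geo-referencing the operator never receives. A sketch of that server-side resolution, assuming a simple north-up image with constant per-pixel ground spacing (the record fields and model are illustrative):

```python
# Hypothetical server-side record holding the withheld geo-referencing:
# the image's upper-left corner (lon, lat) and per-pixel spacing in degrees.

def pixel_to_geo(record, col, row):
    """Resolve an operator-selected pixel coordinate back to (lon, lat).

    The operator's device only ever saw the image raster; this lookup
    happens where the geographic metadata is kept.
    """
    lon = record["ul_lon"] + col * record["deg_per_px_x"]
    lat = record["ul_lat"] - row * record["deg_per_px_y"]  # rows go south
    return lon, lat
```

Keeping this mapping server-side, and serving non-contiguous work regions, is what prevents an operator from reassembling the sensitive geographic area.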
A computer system is described for automatically generating a 3D model, including non-transitory computer readable medium storing instructions that, when executed by hardware, cause the hardware to obtain a series of geographical points regarding a structure within a geographic area; identify a geographic location of the structure; retrieve multiple geo-referenced oblique images representing the geographic location and containing a real façade texture of the structure; locate a geographical position of a real façade texture of the structure; select one or more base oblique image from the multiple oblique images by analyzing, with selection logic, image raster content of the real façade texture depicted in the multiple oblique images; and apply the real façade texture of the one or more base oblique image to the series of geographical points of the structure to create a three dimensional model providing a real-life representation of physical characteristics of the structure.
A computerized method performed by an unmanned aerial vehicle (UAV), comprising: receiving, by the UAV, a flight plan comprising a plurality of inspection locations for a structure, wherein the plurality of inspection locations each comprise a waypoint having a geospatial reference; navigating to ascend to a first altitude above the structure; conducting an inspection for an object of interest in at least one of the plurality of inspection locations according to the flight plan, the inspection comprising: navigating to a position above a surface of the structure associated with the object of interest based on monitoring an active sensor, and obtaining, while within a particular distance from the surface of the structure, information from one or more sensors describing the structure such that obtained information includes at least a particular level of detail; navigating to another inspection location of the plurality of inspection locations; and navigating to a landing location.
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
G01S 19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
G06F 17/30 - Information retrieval; Database structures therefor
G06T 11/60 - Editing figures and text; Combining figures or text
H04N 5/445 - Receiver circuitry for displaying additional information
Apparatuses, systems, and methods are disclosed including an unmanned aerial vehicle (UAV), comprising: a collision detection and avoidance system comprising at least one active distance detector; and one or more processors configured to: receive a flight path with instructions for the UAV to travel from its current location to at least one other location; determine direction priorities for the collision detection and avoidance system based at least in part on the flight path; determine an obstacle for avoidance by the UAV based on the flight path going through the obstacle; receive distance data generated by the collision detection and avoidance system concerning the obstacle; process the distance data based at least in part on the determined direction priorities; and execute a target path for traveling around the obstacle and to the at least one other location based at least in part on the flight path and the distance data.
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
G01S 19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
G06F 17/30 - Information retrieval; Database structures therefor
G06T 11/60 - Editing figures and text; Combining figures or text
H04N 5/445 - Receiver circuitry for displaying additional information
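The direction-priority determination in the record above could be sketched as ranking the active distance detectors by how closely their bearing faces the direction of travel, so sensors along the flight path are polled first. This weighting scheme is an assumption for illustration, not the patented logic.

```python
def direction_priorities(heading_deg, sensor_bearings_deg):
    """Rank sensor bearings from highest to lowest priority.

    heading_deg: direction of travel along the flight path, in degrees
    sensor_bearings_deg: bearings of the distance detectors, in degrees
    """
    def angular_gap(bearing):
        # Smallest absolute angle between the bearing and the heading.
        return abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)

    return sorted(sensor_bearings_deg, key=angular_gap)
```

Distance data from high-priority bearings would then dominate the obstacle-avoidance processing while the UAV executes its target path.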
Methods and systems are disclosed including a computer storage medium, comprising instructions that when executed by one or more processors included in an Unmanned Aerial Vehicle (UAV), cause the UAV to perform operations, comprising: receiving, by the UAV, a flight plan configured to direct the UAV to fly a flight path having a plurality of waypoints adjacent to and above a structure and to capture sensor data of the structure from a camera on the UAV while the UAV is flying the flight path; adjusting an angle of an optical axis of the camera mounted to a gimbal to a predetermined angle within a range of 25 degrees to 75 degrees relative to a downward direction, and capturing sensor data of at least a portion of a roof of the structure with the optical axis of the camera aligned with at least one predetermined location on the structure.
G01C 23/00 - Combined instruments indicating more than one navigational value, e.g. for aircraft; Combined measuring devices for measuring two or more variables of movement, e.g. distance, speed or acceleration
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
G01S 19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
G06F 17/30 - Information retrieval; Database structures therefor
G06T 11/60 - Editing figures and text; Combining figures or text
H04N 5/445 - Receiver circuitry for displaying additional information
Systems and methods are disclosed for early access to captured images including receiving a request for at least one image of a geographic area from a client application of an operator user device; querying records within a geospatial database to locate one or more records of images accessible by the geospatial database and depicting at least a portion of the geographic area; reading information within the one or more records depicting at least a portion of the geographic area to determine a status of an image within the one or more records, the status of the image indicating that the image is an in process captured image in which the image has not been fully processed; and presenting at least a portion of the image to the client application of the operator user device with a status indicator indicating the stage in processing of the geographic area.
Methods and systems are disclosed including a computerized system, comprising: a computer system having an input unit, a display unit, one or more processors, and one or more non-transitory computer readable medium, the one or more processors executing software to cause the one or more processors to: display on the display unit one or more images, from an image database, depicting a structure; receive a validation from the input unit indicating a validation of a location of the structure depicted in the one or more images; generate unmanned aircraft information including flight path information configured to direct an unmanned aircraft to fly a flight path above the structure and capture sensor data from a camera on the unmanned aircraft while the unmanned aircraft is flying the flight path; receive the sensor data from the unmanned aircraft; and generate a structure report based at least in part on the sensor data.
G01C 23/00 - Combined instruments indicating more than one navigational value, e.g. for aircraft; Combined measuring devices for measuring two or more variables of movement, e.g. distance, speed or acceleration
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
G01S 19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
G06F 17/30 - Information retrieval; Database structures therefor
G06T 11/60 - Editing figures and text; Combining figures or text
H04N 5/445 - Receiver circuitry for displaying additional information
A computer system running image processing software receives an identification of a desired scene of a geographical area for which an oblique-mosaic image is desired including one or more geometry parameters of a virtual camera; creates a mathematical model of the virtual camera having mathematical values that define the camera geometry parameters that configure the model to capture the geographical area, and looking down at an oblique angle; creates a ground elevation model of the ground and vertical structures within the oblique-mosaic pixel map, wherein source images were captured at an oblique angle and compass direction similar to the oblique angle and compass direction of the virtual camera; and reprojects, with the mathematical model, source oblique image pixels of the overlapping source images for pixels included in the oblique-mosaic pixel map using the ground elevation model to thereby create the oblique-mosaic image of the geographical area.
37 - Construction and mining; installation and repair services
41 - Education, entertainment, sporting and cultural services
42 - Scientific, technological and industrial services, research and design
Goods & Services
(1) Providing construction and repair data, information and reports regarding buildings and other residential, commercial and industrial structures to homeowners, contractors, insurance adjusters, insurance companies and infrastructure managers; providing damage assessment data, information and reports regarding buildings and other residential, commercial and industrial structures to homeowners, contractors, insurance adjusters, insurance companies and infrastructure managers.
(2) Digital imaging services; aerial imagery services.
(3) Educational services, namely, conducting conferences and workshops in the field of geo-referenced, orthogonal and oblique aerial imaging.
(4) Providing information and analysis relating to measurements of roofs, walls, windows, solar panels, other architectural or design features of buildings and other residential, commercial and industrial structures, vegetation and terrain obtained from aerial imagery for use by professionals in the fields of insurance, finance, real estate, assessment, construction, and public safety and planning and infrastructure management; providing temporary use of on-line non-downloadable software for photogrammetric analysis.
(5) Providing online application services featuring software for use in generating damage assessments, cost estimates, material lists and proposals for construction and repair of buildings and other residential, commercial and industrial structures, and for ordering materials from building material suppliers; providing information and analysis relating to measurements of roofs, walls, windows, solar panels, other architectural and design features of buildings and other residential, commercial and industrial structures, vegetation and terrain made on orthogonal and oblique imagery for purposes of construction and repair by third parties.
(6) Providing computer modeling services based upon aerial imagery, namely, creating from aerial imagery computer representations of roofs, walls, windows, solar panels, other architectural or design features of buildings and other residential, commercial and industrial structures, vegetation and terrain.
An automated method performed by at least one processor running computer executable instructions stored on at least one non-transitory computer readable medium, comprising: classifying first data points identifying at least one man-made roof structure within a point cloud and classifying second data points associated with at least one of natural structures and ground surface to form a modified point cloud; identifying at least one feature of the man-made roof structure in the modified point cloud; and generating a roof report including the at least one feature.
G01B 13/20 - Measuring arrangements characterised by the use of fluids for measuring areas, e.g. pneumatic planimeters
G01C 11/02 - Picture-taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/46 - Extraction of features or characteristics of the image
G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning
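A toy illustration of the classification step in the record above: points well above local ground with low color "greenness" are labeled man-made roof, green elevated points are labeled vegetation, and the rest ground; a simple feature (peak roof height) is then available for the report. The thresholds, attributes, and feature choice are assumptions, not the patented classifiers.

```python
def classify_points(points, ground_z=0.0, min_roof_height=2.5):
    """points: list of dicts with 'z' (meters) and 'green' (0-1 greenness).

    Returns a modified point cloud: class name -> list of points.
    """
    cloud = {"roof": [], "vegetation": [], "ground": []}
    for p in points:
        height = p["z"] - ground_z
        if height < min_roof_height:
            cloud["ground"].append(p)
        elif p["green"] > 0.5:
            cloud["vegetation"].append(p)
        else:
            cloud["roof"].append(p)
    return cloud

def roof_feature(cloud):
    """One simple report feature: the roof's peak height above datum."""
    return max(p["z"] for p in cloud["roof"])
```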
An automated method performed by at least one processor running computer executable instructions stored on at least one non-transitory computer readable medium, comprising: classifying first data points identifying at least one man-made roof structure within a point cloud and classifying second data points associated with at least one of natural structures and ground surface to form a modified point cloud; identifying at least one feature of the man-made roof structure in the modified point cloud; and generating a roof report including the at least one feature.
An automated method performed by at least one processor running computer executable instructions stored on at least one non-transitory computer readable medium, comprising: classifying first data points identifying at least one man-made roof structure within a point cloud and classifying second data points associated with at least one of natural structures and ground surface to form a modified point cloud; identifying at least one feature of the man-made roof structure in the modified point cloud; and generating a roof report including the at least one feature.
Image capture systems including a moving platform; an image capture device having a sensor for capturing an image, the image having pixels, mounted on the moving platform; and a detection computer executing an abnormality detection algorithm for detecting an abnormality in the pixels of the image immediately after the image is captured by scanning the image utilizing predetermined parameters indicative of characteristics of the abnormality and then automatically and immediately causing a re-shoot of the image.
A method comprising receiving aerial images captured by one or more unmanned aerial vehicles; receiving metadata associated with the aerial images captured by the one or more unmanned aerial vehicles; geo-referencing the aerial images based on a geographic location of a surface to determine geographic coordinates of pixels of the aerial images; receiving a geographic location from a user; retrieving one or more of the aerial images associated with the geographic location based on the determined geographic coordinates; and displaying to the user one or more overview images depicting the geographic location and overlaid with one or more icons indicative of and associated with the retrieved aerial images associated with the geographic location.
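The geo-referencing step above — determining geographic coordinates for image pixels — is commonly expressed with an affine geotransform mapping pixel (column, row) indices to map coordinates. The GDAL-style six-parameter layout and the numeric values below are illustrative assumptions, not taken from the patent.

```python
# Minimal geo-referencing sketch: a six-parameter affine geotransform
# (origin, pixel size, rotation terms) maps each pixel (col, row) of an
# aerial image to geographic coordinates. Values are illustrative.

def pixel_to_geo(gt, col, row):
    """gt = (x_origin, px_width, x_rot, y_origin, y_rot, px_height)."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# north-up image: 0.5 m pixels, origin at easting/northing (440720, 3751320)
gt = (440720.0, 0.5, 0.0, 3751320.0, 0.0, -0.5)
x, y = pixel_to_geo(gt, col=10, row=10)
```

Retrieval by user-supplied location then reduces to testing whether the queried coordinates fall inside the geo-referenced footprint of each image.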
A computer system is described for automatically generating a 3D model, including hardware and non-transitory computer readable medium accessible by the hardware and storing instructions that when executed by the hardware cause it to obtain wire-frame data of a structure within a geographic area; identify a geographic location of the structure; retrieve multiple geo-referenced oblique images representing the geographic location and containing a real façade texture of the structure; locate a geographical position of a real façade texture of the structure; select one or more base oblique image from the multiple oblique images by analyzing, with selection logic, image raster content of the real façade texture depicted in the multiple oblique images, and, apply the real façade texture of the one or more base oblique image to the wire-frame data of the structure to create a three dimensional model providing a real-life representation of physical characteristics of the structure.
Automated methods and systems are disclosed, including a method comprising: capturing images and three-dimensional LIDAR data of a geographic area with an image capturing device and a LIDAR system, as well as location and orientation data for each of the images corresponding to the location and orientation of the image capturing device capturing the images, the images depicting an object of interest and the three-dimensional LIDAR data including the object of interest; storing the three-dimensional LIDAR data on a non-transitory computer readable medium; analyzing the images with a computer system to determine three dimensional locations of points on the object of interest; and updating the three-dimensional LIDAR data with the three dimensional locations of points on the object of interest determined by analyzing the images to create a 3D point cloud.
G16H 40/20 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the management or administration of healthcare resources or facilities, e.g. managing hospital staff or surgery rooms
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
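The update step in the abstract above — merging image-derived 3D point locations into stored LIDAR data to create a denser point cloud — can be sketched as a merge that skips locations the LIDAR already covers. The coarse voxel-key deduplication and the voxel size are assumptions for illustration only.

```python
# Sketch of the cloud-update step: 3D locations triangulated from the
# images are merged into the stored LIDAR cloud; a coarse voxel key avoids
# duplicating points the LIDAR already measured. Voxel size is illustrative.

def update_cloud(lidar_points, image_points, voxel=0.25):
    occupied = {tuple(int(c // voxel) for c in p) for p in lidar_points}
    merged = list(lidar_points)
    for p in image_points:
        key = tuple(int(c // voxel) for c in p)
        if key not in occupied:
            merged.append(p)
            occupied.add(key)
    return merged

lidar = [(0.0, 0.0, 3.0), (1.0, 0.0, 3.0)]
from_images = [(0.05, 0.02, 3.01),  # duplicate of an existing LIDAR point
               (5.0, 5.0, 2.5)]     # new point seen only in the images
cloud = update_cloud(lidar, from_images)
```

The image-derived points themselves would come from multi-view triangulation using the recorded location and orientation of the image capturing device; that step is outside this sketch.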
Processes and systems including providing at least one computer input field for a user to input location data generally corresponding to the location of the building; providing visual access to a first aerial image of a region including a roof structure of a building corresponding to said location data, the first aerial image having a first resolution; providing a computer input capable of signaling user-acceptance of a final location within the aerial image; and providing visual access to one or more second aerial images of an aerial imagery database corresponding to location coordinates of the final location, the one or more second aerial images having a second resolution that is higher resolution than the first resolution.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
93.
Mosaic oblique images and methods of making and using same
A computer system running image processing software receives identification of a geographical area for which an oblique-mosaic image is desired; assigns surface locations to pixels included in an oblique-mosaic pixel map of the geographical area encompassing multiple source images, the oblique-mosaic pixel map being part of a mathematical model of a virtual camera looking down at an oblique angle onto the geographical area; creates a ground elevation model of the ground and vertical structures within the oblique-mosaic pixel map using overlapping source images of the geographical area, wherein the source images were captured at an oblique angle and compass direction similar to the oblique angle and compass direction of the virtual camera; and reprojects, with the mathematical model, source oblique image pixels of the overlapping source images for pixels included in the oblique-mosaic pixel map using the ground elevation model to thereby create an oblique-mosaic image of the geographical area.
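The reprojection described above can be sketched as a per-pixel loop: each pixel of the oblique-mosaic pixel map has a ground location, the ground elevation model supplies its height, and the source image's projection maps that 3D point back to a source pixel whose value fills the mosaic. The function names, the toy projection, and the grid layout are illustrative assumptions.

```python
# Minimal sketch of the oblique-mosaic reprojection loop. `elevation` stands
# in for the ground elevation model and `project_to_source` for the source
# image's camera projection; both are illustrative stand-ins.

def build_mosaic(mosaic_grid, elevation, project_to_source, source_pixels):
    mosaic = {}
    for (col, row), (gx, gy) in mosaic_grid.items():
        gz = elevation(gx, gy)                    # ground elevation model
        s_col, s_row = project_to_source(gx, gy, gz)
        mosaic[(col, row)] = source_pixels.get((s_col, s_row), 0)
    return mosaic

grid = {(0, 0): (100.0, 200.0), (1, 0): (101.0, 200.0)}
flat = lambda x, y: 10.0                          # flat terrain at 10 m
proj = lambda x, y, z: (int(x - 100), int(200 - y + z - 10))  # toy camera
src = {(0, 0): 42, (1, 0): 77}
out = build_mosaic(grid, flat, proj, src)
```

The key property the abstract emphasizes — choosing source images captured at an oblique angle and compass direction similar to those of the virtual camera — is what keeps vertical structures from smearing during this resampling.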
A method of automatically transforming a computerized 3D model having regions of images utilized as textures on one or more physical objects represented in the 3D model (such as building sides and roofs, walls, landscapes, mountain sides, trees and the like) to include material property information for one or more regions of the textures of the 3D model. In this method, image textures applied to the 3D model are examined by comparing, utilizing a computer, at least a portion of each image texture to entries in a palette of material entries. The material palette entry that best matches the one contained in the image texture is assigned to indicate a physical material of the physical object represented by the 3D model. Then, material property information is stored in the computerized 3D model for the image textures that are assigned a material palette entry.
G06T 15/00 - 3D [Three Dimensional] image rendering
G09G 5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory
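The palette-matching comparison described above can be sketched as a nearest-entry search: summarize a texture region by a simple descriptor and assign the material whose palette entry is closest. Using the mean RGB color as the descriptor, and the palette colors themselves, are assumptions for illustration; a production system would use richer texture statistics.

```python
# Hedged sketch of the material-palette matching step: the mean RGB color of
# a texture region is compared (squared Euclidean distance) against a palette
# of material entries, and the best-matching material is assigned.

def match_material(texture_pixels, palette):
    n = len(texture_pixels)
    mean = tuple(sum(p[i] for p in texture_pixels) / n for i in range(3))
    def dist(color):
        return sum((a - b) ** 2 for a, b in zip(mean, color))
    return min(palette, key=lambda name: dist(palette[name]))

palette = {
    "brick": (150, 60, 50),
    "glass": (160, 200, 220),
    "roof_shingle": (90, 85, 80),
}
region = [(148, 62, 49), (155, 58, 52), (147, 61, 48)]  # reddish facade patch
material = match_material(region, palette)
```

Once assigned, the material name is what gets stored as material property information alongside the texture in the 3D model.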
An unmanned aircraft structure evaluation system includes a computer system with an input unit, a display unit, one or more processors, and one or more non-transitory computer readable medium. Image display and analysis software causes the one or more processors to generate unmanned aircraft information. The unmanned aircraft information includes flight path information configured to direct an unmanned aircraft to fly a flight path around the structure.
G01C 23/00 - Combined instruments indicating more than one navigational value, e.g. for aircraft; Combined measuring devices for measuring two or more variables of movement, e.g. distance, speed or acceleration
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
B60R 1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
G01S 19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
G06F 17/30 - Information retrieval; Database structures therefor
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06T 11/60 - Editing figures and text; Combining figures or text
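The flight path information described in the entry above — a path around the structure — can be sketched, for a rectangular footprint, as corner waypoints offset outward from the walls by a fixed standoff distance. The footprint, offset value, and function name are assumptions for the example, not the patented planner.

```python
# Illustrative sketch of flight-path generation: waypoints of a closed loop
# a fixed `offset` distance outside a rectangular structure footprint, so
# the unmanned aircraft circles the walls at a constant standoff.

def offset_waypoints(min_x, min_y, max_x, max_y, offset):
    """Corner waypoints of a loop `offset` meters outside the footprint."""
    return [
        (min_x - offset, min_y - offset),
        (max_x + offset, min_y - offset),
        (max_x + offset, max_y + offset),
        (min_x - offset, max_y + offset),
        (min_x - offset, min_y - offset),  # repeat first point to close loop
    ]

# 10 m x 8 m footprint, 5 m standoff from the walls
path = offset_waypoints(0.0, 0.0, 10.0, 8.0, offset=5.0)
```

For non-rectangular footprints the same idea generalizes to a polygon buffer; camera trigger points would then be interpolated along the loop at the desired capture interval.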
96.
System and process for color-balancing a series of oblique images
Image processing systems and methods are disclosed, including an image processing system comprising a computer running image processing software causing the computer to receive an oblique aerial image having at least a first section and a second section, wherein the first section has a first color distribution and the second section has a second color distribution; receive at least two reference images having consistent color distributions; receive geographic information about the at least two reference images and the oblique aerial image; choose at least one reference image for each of the first and second sections based on the geographic information; and create at least one color-balancing transformation for the first and second sections.
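A common form of color-balancing transformation is per-channel mean/standard-deviation matching of a section against its reference image; whether the patented system uses this particular statistic is an assumption made here for illustration. The sketch below works on a single channel to stay short.

```python
# Hedged sketch of one color-balancing transformation: shift and scale a
# section's channel values so their mean and standard deviation match those
# of a reference image with a consistent color distribution.

def balance_channel(section, reference):
    def stats(vals):
        m = sum(vals) / len(vals)
        sd = (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5
        return m, sd if sd else 1.0   # guard against zero spread
    s_m, s_sd = stats(section)
    r_m, r_sd = stats(reference)
    return [(v - s_m) * (r_sd / s_sd) + r_m for v in section]

dark_section = [40, 50, 60]          # under-exposed section of the image
reference = [100, 120, 140]          # consistently exposed reference values
balanced = balance_channel(dark_section, reference)
```

Applied independently to each section with its geographically chosen reference, this removes the seam between sections with different color distributions.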
Automated methods and systems of creating three dimensional LIDAR data are disclosed, including a method comprising capturing images of a geographic area with one or more image capturing devices as well as location and orientation data for each of the images corresponding to the location and orientation of the one or more image capturing devices capturing the images, the images depicting an object of interest; capturing three-dimensional LIDAR data of the geographic area with one or more LIDAR system such that the three-dimensional LIDAR data includes the object of interest; storing the three-dimensional LIDAR data on a non-transitory computer readable medium; analyzing the images with a computer system to determine three dimensional locations of points on the object of interest; and updating the three-dimensional LIDAR data with the three dimensional locations of points on the object of interest determined by analyzing the images.
Systems and methods are disclosed for early access to captured images including generating and storing within a geospatial database a plurality of placeholder records having first information identifying a particular captured image and including at least one geographic image boundary field containing information indicative of a real-world geographic area depicted within the image, an image file location field, and an image status field; receiving a plurality of signals from one or more processing computers, at least two of the signals having the first information identifying particular captured images, and second information indicative of updates indicating a change in at least one of the image location and image processing status for the image identified by the first information; and populating at least one of the image location and the image processing status of the placeholders within the geospatial database with the information indicative of updates for identified captured images.
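The placeholder-record scheme above can be sketched as records created with empty location and pending status, which update signals from processing computers later populate. The field names, status values, and dictionary-backed store are illustrative assumptions, not the patented database schema.

```python
# Sketch of the placeholder-record scheme: each captured image gets a record
# with a geographic boundary field, an image file location field, and an
# image status field; update signals then populate location and status.

def make_placeholder(image_id, boundary):
    return {"image_id": image_id, "boundary": boundary,
            "file_location": None, "status": "pending"}

def apply_update(db, image_id, file_location=None, status=None):
    """Populate a placeholder from an update signal's second information."""
    record = db[image_id]
    if file_location is not None:
        record["file_location"] = file_location
    if status is not None:
        record["status"] = status

db = {"img-001": make_placeholder("img-001", (35.0, -97.0, 35.1, -96.9))}
apply_update(db, "img-001", status="processing")
apply_update(db, "img-001", file_location="/store/img-001.tif", status="ready")
```

Because the boundary field exists before processing finishes, spatial queries can already find the image, which is what enables the "early access" the abstract names.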