Methods and systems are described herein for navigating a vehicle. A target location may be received, and based on the target location and vehicle parameters, an acceleration profile for the vehicle may be generated. The acceleration profile may include a first duration for accelerating the vehicle and a second duration for not accelerating the vehicle. When the acceleration profile is generated, a trajectory profile for travelling to the target location may be generated using the acceleration profile. The trajectory profile may include multiple instances of the acceleration profile. When the trajectory profile is being executed, the distance travelled by the vehicle after each instance of the acceleration profile is executed may be calculated. Based on the distance travelled after each instance of the acceleration profile is executed, the instant at which the target location is reached may be determined.
G01C 21/10 - Navigation; navigational instruments not provided for in other groups, using measurements of speed or acceleration
G01C 21/00 - Navigation; navigational instruments not provided for in other groups
G01C 21/34 - Route searching; route guidance
G01C 22/00 - Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers or pedometers
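As a rough illustration of the abstract above, the sketch below accumulates distance over repeated accelerate-then-coast instances until the target distance is reached. All names, parameters, and the constant-acceleration kinematics are illustrative assumptions; the abstract specifies no particular model.

```python
# Hypothetical sketch of the accelerate-then-coast profile; the patent
# abstract does not specify this kinematics model or these parameters.
from dataclasses import dataclass

@dataclass
class AccelerationProfile:
    accel: float    # m/s^2 applied during the first duration
    t_accel: float  # s, first duration (accelerating)
    t_coast: float  # s, second duration (not accelerating)

def distance_after_instance(v0: float, p: AccelerationProfile) -> tuple[float, float]:
    """Distance covered and final speed after one instance of the profile."""
    d_accel = v0 * p.t_accel + 0.5 * p.accel * p.t_accel ** 2
    v1 = v0 + p.accel * p.t_accel
    d_coast = v1 * p.t_coast  # no acceleration: constant speed
    return d_accel + d_coast, v1

def instances_to_target(target_m: float, p: AccelerationProfile) -> int:
    """Repeat the profile, accumulating distance, until the target is reached."""
    travelled, v, n = 0.0, 0.0, 0
    while travelled < target_m:
        d, v = distance_after_instance(v, p)
        travelled += d
        n += 1
    return n

print(instances_to_target(500.0, AccelerationProfile(accel=2.0, t_accel=3.0, t_coast=5.0)))
```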
2.
COMPUTER VISION CLASSIFIER DEFINED PATH PLANNING FOR UNMANNED AERIAL VEHICLES
Methods and systems are described herein for enabling aerial vehicle navigation in GPS-denied areas. The system may use a camera to record images of terrain as the aerial vehicle is flying to a target location. The system may then detect (e.g., using a machine learning model) objects within those images and compare those objects with objects within an electronic map that was loaded onto the aerial vehicle. When the system finds one or more objects within the electronic map that match the objects detected within the recorded images, the system may retrieve locations (e.g., GPS coordinates) of the objects within the electronic map and calculate, based on the coordinates, the location of the aerial vehicle. Once the location of the aerial vehicle is determined, the system may navigate to a target location or otherwise adjust a flight path of the aerial vehicle.
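A minimal sketch of the position-fix step described above, assuming each matched map object contributes a known map coordinate plus a measured offset of that landmark from the vehicle (which in practice would come from pixel geometry, altitude, and camera intrinsics). All names and values are illustrative.

```python
# Hypothetical sketch: estimating vehicle position from landmarks matched
# between camera detections and a preloaded electronic map.
def estimate_position(matches):
    """matches: list of ((map_x, map_y), (rel_dx, rel_dy)) pairs, where
    (rel_dx, rel_dy) is the landmark's offset from the vehicle in metres."""
    xs = [mx - dx for (mx, _my), (dx, _dy) in matches]
    ys = [my - dy for (_mx, my), (_dx, dy) in matches]
    return sum(xs) / len(xs), sum(ys) / len(ys)

matches = [((100.0, 50.0), (10.0, 5.0)),   # landmark seen 10 m east, 5 m north
           ((80.0, 40.0), (-10.0, -5.0))]  # landmark seen behind the vehicle
print(estimate_position(matches))  # -> (90.0, 45.0)
```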
Methods and systems are described herein for detecting motion-induced errors received from inertial-type input devices and for generating accurate vehicle control commands that account for operator movement. These methods and systems may determine, using motion data from inertial sensors, whether the hand/arm of the operator is moving in the same motion as the body of the operator, and if both are moving in the same way, these systems and methods may determine that the motion is not intended to be a motion-induced command. However, if the hand/arm of the operator is moving in a different motion from the body of the operator, these methods and systems may determine that the operator intended the motion to be a motion-induced command to a vehicle.
G06F 3/0346 - Pointing devices displaced or positioned by the user; accessories therefor, with detection of the device's orientation or free movement in a three-dimensional [3D] space, e.g. 3D mice, six-degrees-of-freedom [6-DOF] pointing devices, using gyroscopes, accelerometers or tilt sensors
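One plausible reading of the intent test above, sketched below: compare the hand/arm inertial reading against the body inertial reading and treat near-identical motion as body-induced rather than commanded. The threshold and function names are assumptions, not taken from the patent.

```python
import math

def is_intended_command(hand_accel, body_accel, threshold=0.5):
    """Hypothetical intent test: if the hand/arm moves the same way as the
    body (difference below threshold), treat the motion as body-induced
    noise rather than a motion-induced command."""
    return math.dist(hand_accel, body_accel) > threshold

print(is_intended_command((0.1, 0.0, 9.8), (0.1, 0.1, 9.8)))  # False: same motion
print(is_intended_command((2.0, 0.5, 9.8), (0.1, 0.1, 9.8)))  # True: distinct hand motion
```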
Methods and systems are described herein for determining three-dimensional locations of objects within identified portions of images. An image processing system may receive an image and an identification of a location within the image. The image may be input into a machine learning model to detect one or more objects within the identified location. Multiple images may then be used to generate location estimations of those objects. Based on the location estimations, an accurate three-dimensional location may be calculated.
G06V 10/774 - Generating sets of training patterns; processing of image or video features in feature spaces; arrangements for image or video recognition or understanding using pattern recognition or machine learning, using data integration and reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; blind source separation; bootstrap methods, e.g. bagging or boosting
G06V 20/17 - Terrestrial scenes taken from planes or by drones
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
H04N 23/661 - Transmitting camera control signals through networks, e.g. control via the Internet
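A minimal sketch of the multi-image fusion step, assuming per-image 3D estimates of the same object are already available. The component-wise median used here is one robust fusion choice, not necessarily the patent's.

```python
import statistics

def fuse_location_estimates(estimates):
    """Hypothetical fusion step: combine per-image 3D location estimates of
    one object into a single location via the component-wise median, which
    tolerates occasional bad detections."""
    xs, ys, zs = zip(*estimates)
    return (statistics.median(xs), statistics.median(ys), statistics.median(zs))

estimates = [(10.2, 4.9, 1.1), (10.0, 5.1, 1.0), (10.1, 5.0, 0.9), (14.0, 5.0, 1.0)]
print(fuse_location_estimates(estimates))  # -> (10.15, 5.0, 1.0); outlier x ignored
```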
Methods and systems are described herein for a payload management system that may detect that a payload has been attached to an uncrewed vehicle and determine whether the payload is a restricted payload or an unrestricted payload. Based on determining that the payload is an unrestricted payload, the payload management system may establish a connection between the payload and the operator using a first communication channel that has already been established between the uncrewed vehicle and the operator. Based on determining that the payload is a restricted payload, the payload management system may establish a connection between the payload and the operator using a second communication channel. The payload management system may listen for restricted payload commands over the second communication channel, and when a payload command is received via the second communication channel, the payload command may be executed using the restricted payload.
G05B 15/02 - Systems controlled by a computer, electric
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
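The channel-selection logic described above might look roughly like the following sketch; the class, channel names, and payload representation are all hypothetical.

```python
# Hypothetical sketch: unrestricted payloads share the vehicle's existing
# operator link; restricted payloads get a separate, monitored channel.
class PayloadManager:
    def __init__(self, primary_channel, secondary_channel):
        self.primary = primary_channel      # established vehicle<->operator link
        self.secondary = secondary_channel  # dedicated link for restricted payloads

    def attach(self, payload):
        channel = self.secondary if payload["restricted"] else self.primary
        payload["channel"] = channel
        return channel

    def on_command(self, payload, command, channel):
        # Restricted payload commands are honoured only on the second channel.
        if payload["restricted"] and channel is not self.secondary:
            raise PermissionError("restricted payload commands must use channel 2")
        print(f"executing {command!r} over {channel}")

mgr = PayloadManager("channel-1", "channel-2")
sensor = {"restricted": False}
effector = {"restricted": True}
print(mgr.attach(sensor))    # -> channel-1
print(mgr.attach(effector))  # -> channel-2
mgr.on_command(effector, "activate", "channel-2")
```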
Methods and systems are described herein for enabling aerial vehicle navigation in GPS-denied areas. The system may use a camera to record images of terrain as the aerial vehicle is flying to a target location. The system may then detect (e.g., using a machine learning model) objects within those images and compare those objects with objects within an electronic map that was loaded onto the aerial vehicle. When the system finds one or more objects within the electronic map that match the objects detected within the recorded images, the system may retrieve locations (e.g., GPS coordinates) of the objects within the electronic map and calculate, based on the coordinates, the location of the aerial vehicle. Once the location of the aerial vehicle is determined, the system may navigate to a target location or otherwise adjust a flight path of the aerial vehicle.
G05D 1/80 - Arrangements for reacting to or preventing system or operator failure
G05D 1/46 - Control of position or course in three dimensions
G05D 1/243 - Means for capturing signals occurring naturally in the environment, e.g. ambient optical, acoustic, gravitational or magnetic signals
G05D 1/246 - Arrangements for determining position or orientation using environment maps, e.g. simultaneous localisation and mapping [SLAM]
G06V 20/17 - Terrestrial scenes taken from planes or by drones
G06T 7/90 - Determination of colour characteristics
G06T 7/70 - Determination of position or orientation of objects or cameras
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using clustering, e.g. of similar faces in social networks
B64U 20/87 - Mounting of imaging devices, e.g. mounting of gimbals
Methods and systems are described herein for a layered fail-safe redundancy system and architecture for privileged operation execution. The system may receive vehicle maneuvering commands from a controller over a first channel. When a user input is received to initiate a privileged mode for executing privileged commands, the system may receive a privileged command over a second channel. The system may identify, based on the privileged mode of operation and the privileged command, a privileged operation to be performed by a vehicle. The system may then transmit a request to the vehicle to perform the privileged operation.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
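A toy sketch of the two-channel split described above: ordinary maneuvering commands are accepted on the first channel, while privileged commands require both privileged mode and the second channel. Channel numbering and method names are illustrative assumptions.

```python
# Hypothetical sketch of the layered command routing in the abstract above.
class CommandRouter:
    def __init__(self):
        self.privileged_mode = False

    def enable_privileged_mode(self):
        """Corresponds to the user input that initiates the privileged mode."""
        self.privileged_mode = True

    def handle(self, channel: int, command: str) -> str:
        if channel == 1:
            return f"maneuver: {command}"
        if channel == 2 and self.privileged_mode:
            return f"request privileged operation: {command}"
        raise PermissionError("privileged command rejected: mode not enabled")

router = CommandRouter()
print(router.handle(1, "turn left"))
router.enable_privileged_mode()
print(router.handle(2, "perform privileged operation"))
```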
12.
SYSTEMS AND METHODS OF DETECTING INTENT OF SPATIAL CONTROL
Systems and methods of manipulating/controlling robots are described. In many scenarios, data collected by a sensor (connected to a robot) may not have very high precision (e.g., with a regular commercial/inexpensive sensor) or may be subject to dynamic environmental changes. Thus, the data collected by the sensor may not indicate the parameter captured by the sensor with high accuracy. The present robotic control system is directed to such scenarios. In some embodiments, the disclosed embodiments can be used for computing a sliding velocity limit boundary for a spatial controller. In some embodiments, the disclosed embodiments can be used for teleoperation of a vehicle located in the field of view of a camera.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members
G05D 1/222 - Remote-control arrangements operated by humans
G05D 1/223 - Command input arrangements on the remote controller, e.g. joysticks or touchscreens
G05D 1/224 - Output arrangements on the remote controller, e.g. displays, haptic devices or loudspeakers
G05D 1/24 - Arrangements for determining position or orientation
G06F 3/0346 - Pointing devices displaced or positioned by the user; accessories therefor, with detection of the device's orientation or free movement in a three-dimensional [3D] space, e.g. 3D mice, six-degrees-of-freedom [6-DOF] pointing devices, using gyroscopes, accelerometers or tilt sensors
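The abstract does not define a "sliding velocity limit boundary"; one speculative reading, sketched below, shrinks the allowed velocity as recent sensor readings become noisier, so an imprecise sensor yields more conservative commands. Every name and constant here is an assumption.

```python
# Speculative sketch: a velocity limit that slides with recent sensor noise.
from collections import deque
import statistics

class SlidingVelocityLimit:
    def __init__(self, v_max=5.0, window=10, noise_gain=2.0):
        self.v_max = v_max
        self.noise_gain = noise_gain
        self.readings = deque(maxlen=window)

    def update(self, sensor_reading: float) -> float:
        """Recompute the limit: noisier recent readings lower the boundary."""
        self.readings.append(sensor_reading)
        noise = statistics.pstdev(self.readings) if len(self.readings) > 1 else 0.0
        return max(0.0, self.v_max - self.noise_gain * noise)

    def clamp(self, commanded_v: float, sensor_reading: float) -> float:
        limit = self.update(sensor_reading)
        return max(-limit, min(limit, commanded_v))

limiter = SlidingVelocityLimit()
for reading in [1.0, 1.1, 0.9, 1.6, 0.4]:  # increasingly noisy sensor
    print(limiter.clamp(4.0, reading))
```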
13.
LAYERED FAIL-SAFE REDUNDANCY ARCHITECTURE AND PROCESS FOR USE BY SINGLE DATA BUS MOBILE DEVICE
Methods and systems are described herein for a layered fail-safe redundancy system and architecture for privileged operation execution. The system may receive vehicle maneuvering commands from a controller over a first channel. When a user input is received to initiate a privileged mode for executing privileged commands, the system may receive a privileged command over a second channel. The system may identify, based on the privileged mode of operation and the privileged command, a privileged operation to be performed by a vehicle. The system may then transmit a request to the vehicle to perform the privileged operation.
Methods and systems are described herein for determining three-dimensional locations of objects within identified portions of images. An image processing system may receive an image and an identification of a location within the image. The image may be input into a machine learning model to detect one or more objects within the identified location. Multiple images may then be used to generate location estimations of those objects. Based on the location estimations, an accurate three-dimensional location may be calculated.
G06T 7/73 - Determination of position or orientation of objects or cameras using feature-based methods
G06V 10/774 - Generating sets of training patterns; processing of image or video features in feature spaces; arrangements for image or video recognition or understanding using pattern recognition or machine learning, using data integration and reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; blind source separation; bootstrap methods, e.g. bagging or boosting
G06V 20/17 - Terrestrial scenes taken from planes or by drones
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
H04N 23/661 - Transmitting camera control signals through networks, e.g. control via the Internet
Methods and systems are described herein for hosting and arbitrating algorithms for the generation of structured frames of data from one or more sources of unstructured input frames. A plurality of frames may be received from a recording device, and a plurality of object types to be recognized in those frames may be determined. Multiple machine learning models for recognizing the object types may then be determined. The frames may be sequentially input into the machine learning models to obtain a plurality of sets of objects, and object indicators may be received from those machine learning models. A set of composite frames with the plurality of indicators corresponding to the plurality of objects may be generated, and an output stream may be generated including the set of composite frames to be played back in chronological order.
G06V 10/96 - Management of image or video recognition tasks
G05B 13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to obtain optimum performance according to a predetermined criterion, electric
G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
G06V 10/94 - Hardware or software architectures specially adapted for image or video understanding
G06V 20/56 - Context or environment of the image exterior to a vehicle, from on-board sensors
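A schematic sketch of the arbitration flow above: each frame passes sequentially through the model selected for each object type, and the detections are merged into composite frames emitted in chronological order. The model objects here are stand-in callables, not any real API.

```python
# Hypothetical sketch of the sequential multi-model frame pipeline.
def run_pipeline(frames, models):
    """frames: iterable of frame payloads; models: mapping object_type -> model,
    where each model is a callable frame -> list of (label, bbox) detections."""
    composite_stream = []
    for i, frame in enumerate(frames):
        indicators = []
        for object_type, model in models.items():
            indicators.extend(model(frame))  # sequential per-model inference
        composite_stream.append({"index": i, "frame": frame,
                                 "indicators": indicators})
    return composite_stream  # already in chronological order for playback

models = {
    "vehicle": lambda f: [("vehicle", (10, 10, 50, 40))],
    "person":  lambda f: [("person", (60, 20, 80, 70))],
}
for item in run_pipeline(["frame0", "frame1"], models):
    print(item["index"], item["indicators"])
```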
18.
Systems and methods of detecting intent of spatial control
Systems and methods of manipulating/controlling robots are described. In many scenarios, data collected by a sensor (connected to a robot) may not have very high precision (e.g., with a regular commercial/inexpensive sensor) or may be subject to dynamic environmental changes. Thus, the data collected by the sensor may not indicate the parameter captured by the sensor with high accuracy. The present robotic control system is directed to such scenarios. In some embodiments, the disclosed embodiments can be used for computing a sliding velocity limit boundary for a spatial controller. In some embodiments, the disclosed embodiments can be used for teleoperation of a vehicle located in the field of view of a camera.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members
G05D 1/222 - Remote-control arrangements operated by humans
G05D 1/223 - Command input arrangements on the remote controller, e.g. joysticks or touchscreens
G05D 1/224 - Output arrangements on the remote controller, e.g. displays, haptic devices or loudspeakers
G05D 1/24 - Arrangements for determining position or orientation
G06F 3/0346 - Pointing devices displaced or positioned by the user; accessories therefor, with detection of the device's orientation or free movement in a three-dimensional [3D] space, e.g. 3D mice, six-degrees-of-freedom [6-DOF] pointing devices, using gyroscopes, accelerometers or tilt sensors
19.
ARCHITECTURE FOR DISTRIBUTED ARTIFICIAL INTELLIGENCE AUGMENTATION
Methods and systems are described herein for determining three-dimensional locations of objects within a video stream and linking those objects with known objects. An image processing system may receive an image and image metadata and detect an object and a location of the object within the image. The estimated location of each object within the three-dimensional space is then determined. In addition, the image processing system may retrieve, for a plurality of known objects, a plurality of known locations within the three-dimensional space and determine, based on the estimated location and the known location data, which of the known objects matches the detected object in the image. An indicator for the object is then generated at the location of the object within the image.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 7/70 - Determination of position or orientation of objects or cameras
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
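The linking step above might reduce to a nearest-neighbour match in 3D, as in this sketch; the distance threshold and record layout are assumptions.

```python
import math

def link_to_known_object(estimated_xyz, known_objects, max_dist=5.0):
    """Hypothetical matching step: pick the known object whose stored 3D
    location is nearest to the estimate, if within max_dist metres."""
    best = min(known_objects, key=lambda o: math.dist(o["xyz"], estimated_xyz))
    if math.dist(best["xyz"], estimated_xyz) <= max_dist:
        return best["id"]
    return None  # no known object close enough; leave the detection unlinked

known = [{"id": "truck-7", "xyz": (101.0, 49.0, 0.0)},
         {"id": "antenna-2", "xyz": (300.0, 12.0, 15.0)}]
print(link_to_known_object((100.0, 50.0, 0.0), known))  # -> "truck-7"
```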
Methods and systems are described herein for detecting motion-induced errors received from inertial-type input devices and for generating accurate vehicle control commands that account for operator movement. These methods and systems may determine, using motion data from inertial sensors, whether the hand/arm of the operator is moving in the same motion as the body of the operator, and if both are moving in the same way, these systems and methods may determine that the motion is not intended to be a motion-induced command. However, if the hand/arm of the operator is moving in a different motion from the body of the operator, these methods and systems may determine that the operator intended the motion to be a motion-induced command to a vehicle.
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using features specific to the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or the nature of the input device, e.g. gestures based on pressure sensed by a digitiser, using a touchscreen or digitiser, e.g. input of commands through traced gestures
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
G06F 3/0346 - Pointing devices displaced or positioned by the user; accessories therefor, with detection of the device's orientation or free movement in a three-dimensional [3D] space, e.g. 3D mice, six-degrees-of-freedom [6-DOF] pointing devices, using gyroscopes, accelerometers or tilt sensors
22.
Architecture for distributed artificial intelligence augmentation with objects detected using artificial intelligence models from data received from multiple sources
Methods and systems are described herein for generating composite data streams. A data stream processing system may receive multiple data streams from, for example, multiple unmanned vehicles and determine, based on the type of data within each data stream, a machine learning model for each data stream for processing the type of data. Each machine learning model may receive the frames of a corresponding data stream and output indications and locations of objects within those data streams. The data stream processing system may then generate a composite data stream with indications of the detected objects.
G06V 20/40 - Scenes; scene-specific elements in video content
B64C 39/02 - Aircraft not otherwise provided for, characterised by special use
G06N 20/20 - Ensemble learning techniques in machine learning
G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
B64U 101/30 - Unmanned aerial vehicles specially adapted for specific uses or applications for imaging, photography or videography
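A minimal sketch of the per-stream model selection described above: each stream declares a data type, a model is looked up for that type, and all outputs are merged into one composite stream. The registry and stream format are hypothetical.

```python
# Hypothetical sketch: choosing a model per data type across multiple streams.
MODEL_REGISTRY = {
    "thermal": lambda frame: [("hotspot", (5, 5, 9, 9))],
    "rgb":     lambda frame: [("vehicle", (10, 10, 50, 40))],
}

def composite(streams):
    """streams: list of {"type": ..., "frames": [...]} from multiple vehicles."""
    out = []
    for stream in streams:
        model = MODEL_REGISTRY[stream["type"]]  # model chosen by data type
        for frame in stream["frames"]:
            out.append({"source_type": stream["type"],
                        "indicators": model(frame)})
    return out  # composite data stream with detected-object indications

streams = [{"type": "rgb", "frames": ["f0"]}, {"type": "thermal", "frames": ["f0"]}]
print(composite(streams))
```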
Methods and systems are described herein for detecting motion-induced errors received from inertial-type input devices and for generating accurate vehicle control commands that account for operator movement. These methods and systems may determine, using motion data from inertial sensors, whether the hand/arm of the operator is moving in the same motion as the body of the operator, and if both are moving in the same way, these systems and methods may determine that the motion is not intended to be a motion-induced command. However, if the hand/arm of the operator is moving in a different motion from the body of the operator, these methods and systems may determine that the operator intended the motion to be a motion-induced command to a vehicle.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; accessories therefor, with detection of the device's orientation or free movement in a three-dimensional [3D] space, e.g. 3D mice, six-degrees-of-freedom [6-DOF] pointing devices, using gyroscopes, accelerometers or tilt sensors
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
25.
UNIVERSAL CONTROL ARCHITECTURE FOR CONTROL OF UNMANNED SYSTEMS
A common command and control architecture (alternatively termed herein a "universal control architecture") is disclosed that allows different unmanned systems, including different types of unmanned systems (e.g., air, ground, and/or maritime unmanned systems), to be controlled simultaneously through a common control device (e.g., a controller that can be an input and/or output device). The universal control architecture brings significant efficiency gains in engineering, deployment, training, maintenance, and future upgrades of unmanned systems. In addition, the disclosed common command and control architecture breaks the traditional stovepiped development and deployment model, thus reducing hardware and software maintenance, creating a streamlined training/proficiency initiative, reducing physical space requirements for transport, and creating a scalable, more connected, interoperable approach to controlling unmanned systems compared with existing unmanned systems technology.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
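One way to picture the common control layer described above: a single normalized command fanned out through per-domain adapters, as in this sketch. The adapter classes and message fields are invented for illustration; the actual architecture is not specified at this level in the abstract.

```python
# Hypothetical sketch: one common command, translated per vehicle domain.
class AirAdapter:
    def send(self, cmd):
        return f"air link: set velocity {cmd['velocity']} m/s, heading {cmd['heading']}"

class GroundAdapter:
    def send(self, cmd):
        return f"ground bus: throttle for {cmd['velocity']} m/s, steer to {cmd['heading']}"

def broadcast(common_command, adapters):
    """Fan one normalized command out to every connected unmanned system."""
    return [adapter.send(common_command) for adapter in adapters]

cmd = {"velocity": 3.0, "heading": 90}
for line in broadcast(cmd, [AirAdapter(), GroundAdapter()]):
    print(line)
```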
26.
Systems and methods of detecting intent of spatial control
Systems and methods of manipulating/controlling robots are described. In many scenarios, data collected by a sensor (connected to a robot) may not have very high precision (e.g., with a regular commercial/inexpensive sensor) or may be subject to dynamic environmental changes. Thus, the data collected by the sensor may not indicate the parameter captured by the sensor with high accuracy. The present robotic control system is directed to such scenarios. In some embodiments, the disclosed embodiments can be used for computing a sliding velocity limit boundary for a spatial controller. In some embodiments, the disclosed embodiments can be used for teleoperation of a vehicle located in the field of view of a camera.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members
G05D 1/02 - Control of position or course in two dimensions
G06F 3/0346 - Pointing devices displaced or positioned by the user; accessories therefor, with detection of the device's orientation or free movement in a three-dimensional [3D] space, e.g. 3D mice, six-degrees-of-freedom [6-DOF] pointing devices, using gyroscopes, accelerometers or tilt sensors
27.
Systems and methods of remote teleoperation of robotic vehicles
Systems and methods of manipulating/controlling robots are described. In many scenarios, data collected by a sensor (connected to a robot) may not have very high precision (e.g., with a regular commercial/inexpensive sensor) or may be subject to dynamic environmental changes. Thus, the data collected by the sensor may not indicate the parameter captured by the sensor with high accuracy. The present robotic control system is directed to such scenarios. In some embodiments, the disclosed embodiments can be used for computing a sliding velocity limit boundary for a spatial controller. In some embodiments, the disclosed embodiments can be used for teleoperation of a vehicle located in the field of view of a camera.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members
G05D 1/222 - Remote-control arrangements operated by humans
G05D 1/223 - Command input arrangements on the remote controller, e.g. joysticks or touchscreens
G05D 1/224 - Output arrangements on the remote controller, e.g. displays, haptic devices or loudspeakers
G05D 1/24 - Arrangements for determining position or orientation
G06F 3/0346 - Pointing devices displaced or positioned by the user; accessories therefor, with detection of the device's orientation or free movement in a three-dimensional [3D] space, e.g. 3D mice, six-degrees-of-freedom [6-DOF] pointing devices, using gyroscopes, accelerometers or tilt sensors
09 - Scientific and electrical apparatus and instruments
35 - Advertising; business affairs
42 - Scientific, technological and industrial services; research and design
Goods and services
Software for monitoring and controlling communication between computers and automated machine systems; automated process control system, namely, software used to monitor the status of industrial processes, namely, power generation, electrical distribution, and oil and gas processing; computer software system for remotely monitoring environmental conditions and controlling devices; computer software for use in connection with the operation of autonomous vehicles, drones, robotic control systems, and remote monitoring devices
Business advisory services in the field of robotic control systems and remote monitoring systems; business management consultation in the field of robotic control systems and remote monitoring systems
Integration of computer systems for engineering computer programs in the fields of robotics, control systems, computer systems and electrical systems; engineering design services in the fields of robotics, control systems, computer systems, process automation systems, electrical systems; installation of computer software for automation and systems integration services in the fields of robotics, control systems, computer systems, process automation systems, electrical systems; providing temporary use of non-downloadable cloud-based software for use in connection with the operation of autonomous vehicles, drones, robotic control systems, and remote monitoring devices; software as a service (SAAS) services, namely, hosting software for use by others for providing integrated communication with computerized global networks in the field of robotics; software as a service (SAAS) services, namely, hosting software for use by others for video telephony in the field of robotics; software as a service (SAAS) services, namely, hosting software for use by others for the transmission of voice, data, images, audio, video and text between fixed or remote stations and devices in the field of robotics; software as a service (SAAS) services, namely, hosting software and data for use by others for wireless content delivery in the field of robotics; electronic data storage services for archiving electronic data; cloud computing featuring software to enable uploading, posting, showing, displaying, sharing, analyzing or otherwise providing electronic media or information over the internet or other communications networks and database management in the field of robotics; providing temporary use of online non-downloadable cloud computing software for use by others for viewing, analyzing, displaying, playing, uploading, downloading, sharing, analyzing and transmitting audio, video, text and data in the field of robotics; design and implementation of custom robotic software and custom robotic technology solutions to be used in connection with customized robotic control systems and remote monitoring devices, all the foregoing excluding consulting services relating to prefabricated hardware and prefabricated software systems for others
09 - Scientific and electrical apparatus and instruments
42 - Scientific, technological and industrial services; research and design
Goods and services
Downloadable computer software for use in connection with the operation of autonomous vehicles, drones, robotic control systems, and remote monitoring devices
Integration of computer systems for engineering computer programs in the fields of robotics and robotics control systems; integration services in the fields of robotics and robotics control systems; providing temporary use of non-downloadable cloud-based software for use in connection with drones, robotic control systems and remote monitoring devices; design and implementation of software and technology solutions to be used in connection with robotic control systems and remote monitoring devices; downloading, sharing, analyzing and transmitting audio, video, text and data