A method of manipulating boxes includes receiving a minimum box size for a plurality of boxes varying in size located in a walled container. The method also includes dividing a grip area of a gripper into a plurality of zones. The method further includes locating a set of candidate boxes based on an image from a visual sensor. For each zone, the method additionally includes determining an overlap of the respective zone with one or more boxes neighboring the set of candidate boxes. The method also includes determining a grasp pose for a target candidate box that avoids one or more walls of the walled container. The method further includes executing the grasp pose to lift the target candidate box by the gripper, where the gripper activates each zone of the plurality of zones that does not overlap a respective box neighboring the target candidate box.
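To make the zone-activation step concrete, here is a minimal Python sketch, assuming axis-aligned rectangles in the grasp plane for both the gripper zones and the neighboring boxes; the rectangle layout, the `overlap_area` helper, and the activation rule are illustrative assumptions, not the claimed method.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned rectangle in the gripper's grasp plane (metres)."""
    x: float
    y: float
    w: float
    h: float

def overlap_area(a: Rect, b: Rect) -> float:
    """Area of the intersection of two axis-aligned rectangles."""
    dx = min(a.x + a.w, b.x + b.w) - max(a.x, b.x)
    dy = min(a.y + a.h, b.y + b.h) - max(a.y, b.y)
    return max(dx, 0.0) * max(dy, 0.0)

def zones_to_activate(zones: list[Rect], neighbors: list[Rect]) -> list[int]:
    """Return indices of gripper zones that do not overlap any neighboring box."""
    return [
        i for i, zone in enumerate(zones)
        if all(overlap_area(zone, box) == 0.0 for box in neighbors)
    ]

if __name__ == "__main__":
    # Four quadrant zones of a 0.4 m x 0.4 m gripper face.
    zones = [Rect(0.0, 0.0, 0.2, 0.2), Rect(0.2, 0.0, 0.2, 0.2),
             Rect(0.0, 0.2, 0.2, 0.2), Rect(0.2, 0.2, 0.2, 0.2)]
    # One neighboring box intrudes on the right-hand zones.
    neighbors = [Rect(0.3, -0.1, 0.3, 0.6)]
    print(zones_to_activate(zones, neighbors))  # -> [0, 2]
```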
Systems and methods are described for detecting changes at a location based on image data by a mobile robot. A system can instruct navigation of the mobile robot to a location. For example, the system can instruct navigation to the location as part of an inspection mission. The system can obtain input identifying a change detection. Based on the change detection and obtained image data associated with the location, the system can perform the change detection and detect a change associated with the location. For example, the system can perform the change detection based on one or more regions of interest of the obtained image data. Based on the detected change and a reference model, the system can determine presence of an anomaly condition in the obtained image data.
G05D 1/689 - Pointing payloads towards fixed or moving targets (positioning of towed, pushed or suspended implements G05D 1/672)
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members, with alternately or sequentially lifted feet or skids
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
G05D 1/225 - operated by off-board computers
G05D 1/229 - Command input data, e.g. waypoints
G05D 1/249 - from positioning sensors located off-board the vehicle, e.g. cameras
G05D 1/246 - using environment maps, e.g. simultaneous localisation and mapping [SLAM]
G05D 1/689 - Pointing payloads towards fixed or moving targets (positioning of towed, pushed or suspended implements G05D 1/672)
A dynamic planning controller receives a maneuver for a robot and a current state of the robot and transforms the maneuver and the current state of the robot into a nonlinear optimization problem. The nonlinear optimization problem is configured to optimize an unknown force and an unknown position vector. At a first time instance, the controller linearizes the nonlinear optimization problem into a first linear optimization problem and determines a first solution to the first linear optimization problem using quadratic programming. At a second time instance, the controller linearizes the nonlinear optimization problem into a second linear optimization problem based on the first solution at the first time instance and determines a second solution to the second linear optimization problem based on the first solution using the quadratic programming. The controller also generates a joint command to control motion of the robot during the maneuver based on the second solution.
G05B 13/04 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion; electric; involving the use of models or simulators
G06N 5/01 - Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
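The abstract above describes repeatedly linearizing a nonlinear program and solving each resulting quadratic subproblem starting from the previous solution. The toy sketch below (plain NumPy, not the described controller) illustrates that pattern with a Gauss-Newton loop on a small nonlinear least-squares problem: each iteration linearizes the residuals about the last solution and solves the resulting convex quadratic subproblem in closed form. The residual functions and damping value are invented for the example.

```python
import numpy as np

def residuals(x: np.ndarray) -> np.ndarray:
    """Toy nonlinear residuals standing in for force/position constraints."""
    return np.array([x[0] + x[1] - 3.0,
                     x[0] * x[1] - 2.0])

def jacobian(x: np.ndarray) -> np.ndarray:
    """Analytic Jacobian of the residuals."""
    return np.array([[1.0, 1.0],
                     [x[1], x[0]]])

def solve_by_successive_linearization(x0, iterations=10, damping=1e-6):
    """Linearize about the previous solution and solve each quadratic subproblem.

    Each iteration minimizes ||r(xk) + J(xk) dx||^2 + damping * ||dx||^2,
    a convex quadratic in dx, then warm-starts the next iteration from xk + dx.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        r, J = residuals(x), jacobian(x)
        H = J.T @ J + damping * np.eye(len(x))   # quadratic cost Hessian
        dx = np.linalg.solve(H, -J.T @ r)        # closed-form solution of the subproblem
        x = x + dx
    return x

if __name__ == "__main__":
    sol = solve_by_successive_linearization([1.2, 2.5])
    print(sol, residuals(sol))  # residuals should be near zero
```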
Systems and methods are described for outputting light and/or audio using one or more light and/or audio sources of a robot. The light sources may be located on one or more legs of the robot, a bottom portion of the robot, and/or a top portion of the robot. The audio sources may include a speaker and/or an audio resonator. A system can obtain sensor data associated with an environment of the robot. Based on the sensor data, the system can identify an alert. For example, the system can identify an entity based on the sensor data and identify an alert for the entity. The system can instruct an output of light and/or audio indicative of the alert using the one or more light and/or audio sources. The system can adjust parameters of the output based on the sensor data.
A method includes receiving sensor data for an environment about a robot. The sensor data is captured by one or more sensors of the robot. The method includes detecting one or more objects in the environment using the received sensor data. For each detected object, the method includes authoring an interaction behavior indicating a behavior that the robot is capable of performing with respect to the corresponding detected object. The method also includes augmenting a localization map of the environment to reflect the respective interaction behavior of each detected object.
Systems and methods are described for outputting light and/or audio using one or more light and/or audio sources of a robot. The light sources may be located on one or more legs of the robot, a bottom portion of the robot, and/or a top portion of the robot. The audio sources may include a speaker and/or an audio resonator. A system can obtain sensor data associated with an environment of the robot. Based on the sensor data, the system can identify an alert. For example, the system can identify an entity based on the sensor data and identify an alert for the entity. The system can instruct an output of light and/or audio indicative of the alert using the one or more light and/or audio sources. The system can adjust parameters of the output based on the sensor data.
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members, with alternately or sequentially lifted feet or skids
B62D 57/024 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members, specially adapted for moving on inclined or vertical surfaces
Disclosed herein are systems and methods directed to an industrial robot that can perform mobile manipulation (e.g., dexterous mobile manipulation). A robotic arm may be capable of precise control when reaching into tight spaces, may be robust to impacts and collisions, and/or may limit the mass of the robotic arm to reduce the load on the battery and increase runtime. A robotic arm may include differently configured proximal joints and/or distal joints. Proximal joints may be designed to promote modularity and may include separate functional units, such as modular actuators, encoder, bearings, and/or clutches. Distal joints may be designed to promote integration and may include offset actuators to enable a through-bore for the internal routing of vacuum, power, and signal connections.
Techniques and apparatuses for recognizing accented speech are described. In some embodiments, an accent module recognizes accented speech using an accent library based on device data, uses different speech recognition correction levels based on an application field into which recognized words are set to be provided, or updates an accent library based on corrections made to incorrectly recognized speech.
G10L 15/01 - Assessment or evaluation of speech recognition systems
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
G10L 15/187 - Phonemic context, e.g. pronunciation rules, phonotactical constraints or phoneme n-grams
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
G10L 15/30 - Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
10. ENVIRONMENTAL FEATURE-SPECIFIC ACTIONS FOR ROBOT NAVIGATION
Systems and methods are described for reacting to a feature in an environment of a robot based on a classification of the feature. A system can detect the feature in the environment using a first sensor on the robot. For example, the system can detect the feature using a feature detection system based on sensor data from a camera. The system can detect a mover in the environment using a second sensor on the robot. For example, the system can detect the mover using a mover detection system based on sensor data from a lidar sensor. The system can fuse the data from detecting the feature and detecting the mover to produce fused data. The system can classify the feature based on the fused data and react to the feature based on classifying the feature.
Systems and methods for a perception system for a lower body powered exoskeleton device are provided. The perception system includes a camera configured to capture one or more images of terrain in proximity to the exoskeleton device, and at least one processor. The at least one processor is programmed to perform footstep planning for the exoskeleton device based, at least in part, on the captured one or more images of terrain, and issue an instruction to perform a first action based, at least in part, on the footstep planning.
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
A61F 5/01 - Orthopaedic devices, e.g. devices for long-term immobilisation or for applying pressure for the treatment of broken or deformed bones, such as splints, casts or braces
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members, with alternately or sequentially lifted feet or skids
An example robot includes: a motor disposed at a joint configured to control motion of a member of the robot; a transmission including an input member coupled to and configured to rotate with the motor, an intermediate member, and an output member, where the intermediate member is fixed such that as the input member rotates, the output member rotates therewith at a different speed; a pad frictionally coupled to a side surface of the output member of the transmission and coupled to the member of the robot; and a spring configured to apply an axial preload on the pad, where the axial preload defines a torque limit such that, when the torque limit is exceeded by a torque load on the member of the robot, the output member of the transmission slips relative to the pad.
Methods and apparatus for operating a mobile robot in a loading dock environment are provided. The method comprises capturing, by a camera system of the mobile robot, at least one image of the loading dock environment, and processing, by at least one hardware processor of the mobile robot, the at least one image using a machine learning model trained to identify one or more features of the loading dock environment.
B25J 5/00 - Manipulators mounted on wheels or on carriages
G05D 1/243 - Means capturing signals occurring naturally from the environment, e.g. ambient optical, acoustic, gravitational or magnetic signals (using passive navigation aids external to the vehicle G05D 1/244; using signals provided by positioning sensors located off-board the vehicle G05D 1/249)
G05D 107/70 - Industrial sites, e.g. warehouses or factories
G06T 17/10 - Volume description, e.g. cylinders, cubes or using CSG [constructive solid geometry]
G06V 10/40 - Extraction of image or video features
G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
G06V 20/56 - Context or environment of the image exterior to a vehicle from on-board sensors
16. METHODS AND APPARATUS FOR REDUCING MULTIPATH ARTIFACTS FOR A CAMERA SYSTEM OF A MOBILE ROBOT
Methods and apparatus for determining a pose of an object sensed by a camera system of a mobile robot are described. The method includes acquiring, using the camera system, a first image of the object from a first perspective and a second image of the object from a second perspective, and determining, by a processor of the camera system, a pose of the object based, at least in part, on a first set of sparse features associated with the object detected in the first image and a second set of sparse features associated with the object detected in the second image.
A method of grasping and/or placing multiple objects by a suction-based gripper of a mobile robot is provided. The multi-grasp method includes determining one or more candidate groups of objects to grasp by the suction-based gripper of the mobile robot, each of the one or more candidate groups of objects including a plurality of objects, determining a grasp quality score for each of the one or more candidate groups of objects, and grasping, by the suction-based gripper of the mobile robot, all objects in a candidate group of objects based, at least in part, on the grasp quality score. The multi-place method includes determining an allowed width associated with a conveyor, selecting a multi-place technique based, at least in part, on the allowed width and a dimension of the multiple grasped objects, and controlling the mobile robot to place the multiple grasped objects on the conveyor based on the selected multi-place technique.
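As a rough illustration of the multi-grasp selection step, the sketch below enumerates candidate groups, assigns each a toy grasp quality score, and picks the highest-scoring group. The width/height heuristic, the gripper dimension, and the example objects are assumptions made for the example; the publication does not specify how the score is computed.

```python
from itertools import combinations

# Each object: (id, width_m, height_of_top_face_m)
OBJECTS = [("A", 0.25, 0.50), ("B", 0.30, 0.50), ("C", 0.20, 0.42)]
GRIPPER_WIDTH_M = 0.60  # assumed usable suction-face width

def grasp_quality(group):
    """Toy grasp-quality score: prefer groups that fill the gripper face
    and whose top faces sit at nearly the same height."""
    total_width = sum(w for _, w, _ in group)
    if total_width > GRIPPER_WIDTH_M:
        return 0.0  # group does not fit under the gripper
    heights = [h for _, _, h in group]
    height_spread = max(heights) - min(heights)
    coverage = total_width / GRIPPER_WIDTH_M
    return coverage * max(0.0, 1.0 - 10.0 * height_spread)

def best_candidate_group(objects, max_group_size=3):
    """Enumerate candidate groups and return the one with the best score."""
    groups = [list(c) for n in range(1, max_group_size + 1)
              for c in combinations(objects, n)]
    return max(groups, key=grasp_quality)

if __name__ == "__main__":
    best = best_candidate_group(OBJECTS)
    print([obj_id for obj_id, _, _ in best], grasp_quality(best))  # -> ['A', 'B'] ...
```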
A computer-implemented method includes generating a joint-torque-limit model for an articulated arm of a robot based on allowable joint torque sets corresponding to a base pose of a base of the robot. The method also includes receiving a first requested joint torque set for a first arm pose of the articulated arm and determining, using the joint-torque-limit model, an optimized joint torque set corresponding to the first requested joint torque set. The method also includes receiving a second requested joint torque set for a second arm pose of the articulated arm and generating an adjusted joint torque set by adjusting the second requested joint torque set based on the optimized joint torque set. The method also includes sending the adjusted joint torque set to the articulated arm.
An example method may include i) determining a first distance between a pair of feet of a robot at a first time, where the pair of feet is in contact with a ground surface; ii) determining a second distance between the pair of feet of the robot at a second time, where the pair of feet remains in contact with the ground surface from the first time to the second time; iii) comparing a difference between the determined first and second distances to a threshold difference; iv) determining that the difference between the determined first and second distances exceeds the threshold difference; and v) based on the determination that the difference between the determined first and second distances exceeds the threshold difference, causing the robot to react.
B62D 57/02 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members
B25J 13/08 - Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members, with alternately or sequentially lifted feet or skids
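A minimal sketch of the distance-difference check described above, assuming 3D foot positions in a common frame and an illustrative 2 cm threshold (the threshold and positions are not values from the publication):

```python
import math

SLIP_THRESHOLD_M = 0.02  # assumed allowable change in stance-foot separation

def foot_separation(foot_a, foot_b):
    """Euclidean distance between two foot positions (x, y, z in metres)."""
    return math.dist(foot_a, foot_b)

def detect_slip(feet_t1, feet_t2, threshold=SLIP_THRESHOLD_M):
    """Flag a slip when the separation of a stance foot pair changes by more
    than the threshold between two times at which both feet stayed in contact."""
    d1 = foot_separation(*feet_t1)
    d2 = foot_separation(*feet_t2)
    return abs(d2 - d1) > threshold

if __name__ == "__main__":
    stance_t1 = ((0.00, 0.20, 0.0), (0.00, -0.20, 0.0))
    stance_t2 = ((0.00, 0.20, 0.0), (0.00, -0.15, 0.0))  # one foot slid 5 cm inward
    if detect_slip(stance_t1, stance_t2):
        print("slip detected: trigger a reaction (e.g. replan or widen stance)")
```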
Disclosed are techniques that provide a “best” picture taken within a few seconds of the moment when a capture command is received (e.g., when the “shutter” button is pressed). In some situations, several still images are automatically (that is, without the user's input) captured. These images are compared to find a “best” image that is presented to the photographer for consideration. Video is also captured automatically and analyzed to see if there is an action scene or other motion content around the time of the capture command. If the analysis reveals anything interesting, then the video clip is presented to the photographer. The video clip may be cropped to match the still-capture scene and to remove transitory parts. Higher-precision horizon detection may be provided based on motion analysis and on pixel-data analysis.
H04N 23/60 - Control of cameras or camera modules
H04N 1/00 - Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
H04N 1/21 - Intermediate information storage
H04N 23/61 - Control of cameras or camera modules based on recognised objects
H04N 23/611 - Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
H04N 23/62 - Control of parameters via user interfaces
H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders
H04N 23/667 - Camera operating mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
H04N 23/68 - Control of cameras or camera modules for stable pick-up of the scene, e.g. by compensating for vibrations of the camera body
H04N 23/80 - Camera processing pipelines; Components thereof
Embodiments are provided for communicating notifications and other textual data associated with applications installed on an electronic device. According to certain aspects, a user can interface with an input device to send a wake up trigger to the electronic device. The electronic device retrieves application notifications and converts the application notifications to audio data. The electronic device also sends the audio data to an audio output device for annunciation. The user may also use the input device to send a request to the electronic device to activate the display screen. The electronic device identifies an application corresponding to an annunciated notification, and activates the display screen and initiates the application.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range of values
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
G10L 13/04 - Methods for producing synthetic speech; Speech synthesisers - Details of speech synthesis systems, e.g. synthesiser structure or memory management
25. Method and Apparatus for Using Image Data to Aid Voice Recognition
A device performs a method for using image data to aid voice recognition. The method includes the device capturing image data of a vicinity of the device and adjusting, based on the image data, a set of parameters for voice recognition performed by the device. The set of parameters for the device performing voice recognition includes, but is not limited to: a trigger threshold of a trigger for voice recognition; a set of beamforming parameters; a database for voice recognition; and/or an algorithm for voice recognition. The algorithm may include using noise suppression or using acoustic beamforming.
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
B60N 2/00 - Seats specially adapted for vehicles; Arrangement or mounting of seats in vehicles
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06V 20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
G06V 40/18 - Eye characteristics, e.g. of the iris
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
G10L 15/20 - Speech recognition techniques specially adapted for robustness against environmental disturbances, e.g. in noisy surroundings or for speech uttered under stress
G10L 15/24 - Speech recognition using non-acoustical features
G10L 15/25 - Speech recognition using non-acoustical features using position of the lips, movement of the lips or face analysis
G10L 15/26 - Speech-to-text systems
Methods and apparatus for controlling a robotic gripper of a robotic device are provided. The method includes activating a plurality of vacuum assemblies of the robotic gripper to grasp one or more objects, disabling one or more of the plurality of vacuum assemblies having a seal quality with the one or more objects that is less than a first threshold, assigning a score to each of the one or more disabled vacuum assemblies, reactivating the one or more disabled vacuum assemblies in an order based, at least in part, on the assigned scores, and grasping the one or more objects with the robotic gripper when a grasp quality of the robotic gripper is higher than a second threshold.
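The sketch below illustrates the disable/score/reactivate flow in Python, under the assumption that the score of a disabled vacuum cup is simply its measured seal quality and that the overall grasp quality is the mean seal quality of all cups; both choices are placeholders for the scoring left unspecified in the abstract.

```python
def order_reactivation(seal_qualities, seal_threshold=0.6):
    """Disable cups whose seal quality is below the threshold, score them,
    and return the disabled cup indices in the order they should be retried.

    Here the score is the measured seal quality, so the cups that came
    closest to sealing are retried first.
    """
    disabled = {i: q for i, q in enumerate(seal_qualities) if q < seal_threshold}
    return sorted(disabled, key=disabled.get, reverse=True)

def grasp_ready(seal_qualities, grasp_threshold=0.75):
    """Overall grasp quality, taken here as the mean seal quality of all cups."""
    return sum(seal_qualities) / len(seal_qualities) >= grasp_threshold

if __name__ == "__main__":
    cups = [0.9, 0.4, 0.55, 0.95]        # per-cup seal quality after activation
    print(order_reactivation(cups))      # -> [2, 1]: retry the best of the leakers first
    print(grasp_ready([0.9, 0.8, 0.7, 0.95]))  # -> True
```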
Methods and apparatus for estimating a ceiling location of a container within which a mobile robot is configured to operate are provided. The method comprises sensing distance measurement data associated with the ceiling of the container using one or more distance sensors arranged on an end effector of a mobile robot, and determining a ceiling estimate of the container based on the distance measurement data.
Methods and apparatus for automated calibration for a LIDAR system of a mobile robot are provided. The method comprises capturing a plurality of LIDAR measurements. The plurality of LIDAR measurements include a first set of LIDAR measurements as the mobile robot spins in a first direction at a first location, the first location being a first distance to a calibration target, and a second set of LIDAR measurements as the mobile robot spins in a second direction at a second location, the second location being a second distance to the calibration target, wherein the first direction and the second direction are different and the second distance is different than the first distance. The method further comprises processing the plurality of LIDAR measurements to determine calibration data, and generating alignment instructions for the LIDAR system based, at least in part, on the calibration data.
A computer-implemented method, when executed by data processing hardware of a robot having an articulated arm and a base, causes the data processing hardware to perform operations. The operations include determining a first location of a workspace of the articulated arm associated with a current base configuration of the base of the robot. The operations also include receiving a task request defining a task for the robot to perform outside of the workspace of the articulated arm at the first location. The operations also include generating base parameters associated with the task request. The operations further include instructing, using the generated base parameters, the base of the robot to move from the current base configuration to an anticipatory base configuration.
A method of identifying stairs from footfalls includes receiving a plurality of footfall locations of a robot traversing an environment. Each respective footfall location indicates a location where a leg of the robot contacted a support surface. The method also includes determining a plurality of candidate footfall location pairs based on the plurality of footfall locations. Each candidate footfall location pair includes a first candidate footfall location and a second candidate footfall location. The method further includes clustering the first candidate footfall location into a first cluster group based on a height of the first candidate footfall location and clustering the second candidate footfall location into a second cluster group based on a height of the second candidate footfall location. The method additionally includes generating a stair model by representing each of the cluster groups as a corresponding stair and delineating each stair based on a respective midpoint between each adjacent cluster group.
B62D 57/024 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members, specially adapted for moving on inclined or vertical surfaces
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members, with alternately or sequentially lifted feet or skids
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces on social networks
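As a simplified illustration of the clustering step, the sketch below groups footfall heights (rather than footfall pairs) into clusters by a height tolerance, treats each cluster as a stair, and delineates adjacent stairs at the midpoint between cluster means. The tolerance and the one-dimensional clustering are assumptions made for the example, not the publication's procedure.

```python
def cluster_footfalls_by_height(footfall_heights, tolerance_m=0.08):
    """Group footfall heights into clusters whose members are within a
    tolerance of the running cluster mean; each cluster becomes one stair."""
    clusters = []
    for z in sorted(footfall_heights):
        if clusters and abs(z - sum(clusters[-1]) / len(clusters[-1])) <= tolerance_m:
            clusters[-1].append(z)
        else:
            clusters.append([z])
    return clusters

def stair_model(clusters):
    """Represent each cluster as a stair height and delineate adjacent stairs
    at the midpoint between their mean heights."""
    means = [sum(c) / len(c) for c in clusters]
    boundaries = [(a + b) / 2.0 for a, b in zip(means, means[1:])]
    return means, boundaries

if __name__ == "__main__":
    heights = [0.01, 0.02, 0.18, 0.19, 0.21, 0.37, 0.39]  # metres
    means, boundaries = stair_model(cluster_footfalls_by_height(heights))
    print(means)       # approximate stair heights
    print(boundaries)  # heights that delineate adjacent stairs
```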
33. SYSTEMS AND METHODS FOR CONTROLLING MOVEMENTS OF ROBOTIC ACTUATORS
An electronic circuit comprises a charge storing component, a set of one or more switching components coupled to the charge storing component, and an additional switching component coupled to each of the one or more switching components in the set. The additional switching component is configured to operate in a first state or a second state based on a received current or voltage. The first state prevents current from flowing from the charge storing component to each of the one or more switching components in the set, and the second state allows current to flow from the charge storing component to each of the one or more switching components in the set.
B25J 9/12 - Programme-controlled manipulators characterised by positioning means for manipulator elements; electric
G05F 1/10 - Regulating voltage or current
H02P 3/22 - Arrangements for stopping or slowing electric motors, generators, or dynamo-electric converters for stopping or slowing an individual dynamo-electric motor or dynamo-electric converter for stopping or slowing an AC motor by short-circuit or resistive braking
H03K 17/56 - Electronic switching or gating, i.e. not by contact-making and -breaking, characterised by the use of specified components by the use, as active elements, of semiconductor devices
34. ANCHORING BASED TRANSFORMATION FOR ALIGNING SENSOR DATA OF A ROBOT WITH A SITE MODEL
Systems and methods are described for the display of a transformed virtual representation of sensor data overlaid on a site model. A system can obtain a site model identifying a site. For example, the site model can include a map, a blueprint, or a graph. The system can obtain sensor data from a sensor of a robot. The sensor data can include route data identifying route waypoints and/or route edges associated with the robot. The system can receive input identifying an association between a virtual representation of the sensor data and the site model. Based on the association, the system can transform the virtual representation of the sensor data and instruct display of the transformed data overlaid on the site model.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members, with alternately or sequentially lifted feet or skids
G05D 1/229 - Command input data, e.g. waypoints
G05D 1/246 - using environment maps, e.g. simultaneous localisation and mapping [SLAM]
G06T 3/40 - Scaling of whole images or parts thereof
G06T 11/20 - Drawing from basic elements, e.g. lines or circles
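One way to realize the described transformation is a least-squares 2D similarity fit (scale, rotation, translation) between route waypoints and the points a user anchors them to on the site model. The sketch below uses the standard SVD-based solution; it is an illustration of one possible alignment, not the publication's algorithm, and the example waypoints and anchors are invented.

```python
import numpy as np

def fit_similarity_2d(robot_pts, site_pts):
    """Least-squares 2D similarity transform (scale, rotation, translation)
    mapping robot-frame points onto their user-anchored site-model points."""
    P = np.asarray(robot_pts, dtype=float)
    Q = np.asarray(site_pts, dtype=float)
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - mu_p, Q - mu_q
    U, S, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])  # keep a proper rotation
    R = (U @ D @ Vt).T
    scale = np.trace(np.diag(S) @ D) / np.sum(Pc ** 2)
    t = mu_q - scale * R @ mu_p
    return scale, R, t

def transform(points, scale, R, t):
    """Apply the fitted transform to route waypoints for display on the site model."""
    return scale * (np.asarray(points, dtype=float) @ R.T) + t

if __name__ == "__main__":
    # Waypoints in the robot's odometry frame and the pixels the user anchored them to.
    robot_waypoints = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0)]
    site_anchors = [(10.0, 10.0), (10.0, 14.0), (8.0, 14.0)]
    s, R, t = fit_similarity_2d(robot_waypoints, site_anchors)
    print(np.round(transform(robot_waypoints, s, R, t), 3))  # recovers the anchors
```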
35. OBJECT CLIMBING BY LEGGED ROBOTS USING TRAINING OBJECTS
Systems and methods are described for climbing of objects in an environment of a robot based on sensor data. A system can obtain sensor data of the environment. For example, the system can obtain sensor data from one or more sensors of the robot. The system can identify an object based on the sensor data. Further, the system can determine that the object is climbable based on determining that the object corresponds to a particular training object. The system can determine that the object corresponds to the particular training object based on a particular characteristic of the object. The system can identify a climbing operation associated with the training object and instruct the robot to climb on the object based on the climbing operation.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members, with alternately or sequentially lifted feet or skids
36. ROBOT MOVEMENT AND INTERACTION WITH MASSIVE BODIES
The invention includes systems and methods for determining movement of a robot. A computing system of the robot receives information comprising a reference behavior specification, a current state of the robot, and a characteristic of a massive body coupled to or expected to be coupled to the robot. The computing system determines, based on the information, a set of movement parameters for the robot, the set of movement parameters reflecting a goal trajectory for the robot. The computing system instructs the robot to move consistent with the set of movement parameters.
The invention includes systems and methods for fabrication and use of an assembly for a component of a robot. The assembly includes a first member including a set of electrically conductive annular surfaces, and a second member including a set of electrically conductive components configured to contact the set of electrically conductive annular surfaces. The first member and the second member are included within the component of the robot. Each component in the set of electrically conductive components includes a first convex curvilinear portion configured to contact a corresponding annular surface in the set of electrically conductive annular surfaces.
H01R 39/10 - Slip-rings other than with an external cylindrical contact surface, e.g. flat slip-rings
H01R 39/24 - Laminated contacts; Wire contacts, e.g. metallic brushes, carbon fibres
H01R 39/64 - Devices for uninterrupted current collection
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members, with alternately or sequentially lifted feet or skids
H01R 13/24 - Contacts for co-operating by abutting, resiliently mounted
The invention includes systems and methods for routing data packets in a robot. The method comprises routing, using a first switching device, data packets between a first host processor and a first electronic device of the robot, and routing, using the first switching device, data packets between a second host processor and a second electronic device of the robot.
An apparatus for a robot includes a set of at least three proximal links. Each proximal link is configured to rotate about a respective joint. Each joint is aligned on a common axis. The apparatus also includes a set of at least three distal links. Each distal link is coupled to a corresponding proximal link and configured to rotate about a second respective joint. Each proximal link comprises an actuator configured to move at least one of the proximal link or the corresponding distal link.
The invention includes systems and methods for fabrication and use of an assembly for a component of a robot. The assembly includes a first member including a set of electrically conductive annular surfaces, and a second member including a set of electrically conductive components configured to contact the set of electrically conductive annular surfaces. The first member and the second member are included within the component of the robot. Each component in the set of electrically conductive components includes a first convex curvilinear portion configured to contact a corresponding annular surface in the set of electrically conductive annular surfaces.
B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with the manipulators or specially designed to be used in association with these manipulators
A method for a stair tracking for modeled and perceived terrain includes receiving, at data processing hardware, sensor data about an environment of a robot. The method also includes generating, by the data processing hardware, a set of maps based on voxels corresponding to the received sensor data. The set of maps includes a ground height map and a map of movement limitations for the robot. The map of movement limitations identifies illegal regions within the environment that the robot should avoid entering. The method further includes generating a stair model for a set of stairs within the environment based on the sensor data, merging the stair model and the map of movement limitations to generate an enhanced stair map, and controlling the robot based on the enhanced stair map or the ground height map to traverse the environment.
B62D 57/024 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members, specially adapted for moving on inclined or vertical surfaces
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members, with alternately or sequentially lifted feet or skids
Example methods and devices for touch-down detection for a robotic device are described herein. In an example embodiment, a computing system may receive a force signal due to a force experienced at a limb of a robotic device. The system may receive an output signal from a sensor of the end component of the limb. Responsive to the received signals, the system may determine whether the force signal satisfies a first threshold and determine whether the output signal satisfies a second threshold. Based on at least one of the force signal satisfying the first threshold or the output signal satisfying the second threshold, the system of the robotic device may provide a touch-down output indicating touch-down of the end component of the limb with a portion of an environment.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members, with alternately or sequentially lifted feet or skids
G01L 5/00 - Apparatus for, or methods of, measuring force, work, mechanical power or torque, specially adapted for specific purposes
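A minimal sketch of the dual-threshold touch-down test described above, with illustrative threshold values and sample readings (none of the numbers come from the publication):

```python
FORCE_THRESHOLD_N = 25.0    # assumed contact-force threshold for the limb
SENSOR_THRESHOLD = 0.5      # assumed threshold for the end-component sensor output

def touch_down_detected(force_n: float, sensor_output: float) -> bool:
    """Report touch-down when either the limb force signal or the end-component
    sensor output crosses its threshold (an OR of the two conditions)."""
    return force_n >= FORCE_THRESHOLD_N or sensor_output >= SENSOR_THRESHOLD

if __name__ == "__main__":
    samples = [(3.0, 0.1), (12.0, 0.2), (31.0, 0.3), (40.0, 0.8)]
    for force, sensor in samples:
        print(f"force={force:5.1f} N  sensor={sensor:.1f}  "
              f"touch-down={touch_down_detected(force, sensor)}")
```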
A method for estimating a ground plane of a legged robot includes determining one or more physical contact points of the legged robot based on first sensor information of the legged robot, determining one or more virtual contact points of the legged robot based on second sensor information of the legged robot, determining a ground plane estimation of the ground surface based on both the one or more physical contact points and the one or more virtual contact points, and controlling a pose of the legged robot based on the ground plane estimation.
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members, with alternately or sequentially lifted feet or skids
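A simple way to combine physical and virtual contact points into a ground plane estimate is an ordinary least-squares plane fit, sketched below; the plane parameterization z = a*x + b*y + c and the example contact points are assumptions for illustration, not the publication's estimator.

```python
import numpy as np

def fit_ground_plane(contact_points):
    """Least-squares plane z = a*x + b*y + c through physical and virtual
    contact points; returns the coefficients (a, b, c)."""
    pts = np.asarray(contact_points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs

if __name__ == "__main__":
    physical = [(0.3, 0.2, 0.00), (0.3, -0.2, 0.01)]   # stance-foot contacts
    virtual = [(-0.3, 0.2, 0.05), (-0.3, -0.2, 0.04)]  # e.g. expected swing-foot touchdowns
    a, b, c = fit_ground_plane(physical + virtual)
    print(f"slope x: {a:.3f}, slope y: {b:.3f}, offset: {c:.3f}")
```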
A method on a mobile device for a wireless network is described. An audio input is monitored for a trigger phrase spoken by a user of the mobile device. A command phrase spoken by the user after the trigger phrase is buffered. The command phrase corresponds to a call command and a call parameter. A set of target contacts associated with the mobile device is selected based on respective voice validation scores and respective contact confidence scores. The respective voice validation scores are based on the call parameter. The respective contact confidence scores are based on a user context associated with the user. A call to a priority contact of the set of target contacts is automatically placed if the voice validation score of the priority contact meets a validation threshold and the contact confidence score of the priority contact meets a confidence threshold.
H04M 1/60 - TELEPHONIC COMMUNICATION - Substation equipment, e.g. for use by subscribers, including speech amplifiers
G06F 16/60 - Information retrieval; Database structures therefor; File system structures therefor of audio data
G10L 15/20 - Speech recognition techniques specially adapted for robustness against environmental disturbances, e.g. in noisy surroundings or for speech uttered under stress
G10L 15/28 - Speech recognition - Constructional details of speech recognition systems
H04M 1/27 - Devices whereby a plurality of signals may be stored simultaneously
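The selection logic above can be pictured as filtering candidate contacts by both thresholds and then ranking the survivors, as in the sketch below; the threshold values, the ranking order, and the example contacts are illustrative assumptions.

```python
VALIDATION_THRESHOLD = 0.80   # assumed voice-validation threshold
CONFIDENCE_THRESHOLD = 0.60   # assumed contact-confidence threshold

def pick_priority_contact(candidates):
    """candidates: list of (name, voice_validation_score, contact_confidence_score).

    Returns the name to auto-dial, or None if no candidate clears both thresholds."""
    eligible = [c for c in candidates
                if c[1] >= VALIDATION_THRESHOLD and c[2] >= CONFIDENCE_THRESHOLD]
    if not eligible:
        return None  # fall back to asking the user instead of auto-placing the call
    # Rank primarily by how well the spoken name matched, then by user context.
    return max(eligible, key=lambda c: (c[1], c[2]))[0]

if __name__ == "__main__":
    contacts = [("Ann Smith", 0.91, 0.75),
                ("Anne Smythe", 0.88, 0.40),
                ("Andy Smit", 0.55, 0.90)]
    print(pick_priority_contact(contacts))  # -> "Ann Smith"
```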
48. Systems and methods for communicating notifications and textual data associated with applications
Embodiments are provided for communicating notifications and other textual data associated with applications installed on an electronic device. According to certain aspects, a user can interface with an input device to send (218) a wake up trigger to the electronic device. The electronic device retrieves (222) application notifications and converts (288) the application notifications to audio data. The electronic device also sends (230) the audio data to an audio output device for annunciation (232). The user may also use the input device to send (242) a request to the electronic device to activate the display screen. The electronic device identifies (248) an application corresponding to an annunciated notification, and activates (254) the display screen and initiates the application.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range of values
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
G10L 13/04 - Methods for producing synthetic speech; Speech synthesisers - Details of speech synthesis systems, e.g. synthesiser structure or memory management
A method includes obtaining, from an operator of a robot, a return execution lease associated with one or more commands for controlling the robot that is scheduled within a sequence of execution leases. The robot is configured to execute commands associated with a current execution lease that is an earliest execution lease in the sequence of execution leases that is not expired. The method includes obtaining an execution lease expiration trigger triggering expiration of the current execution lease. After obtaining the trigger, the method includes determining that the return execution lease is a next current execution lease in the sequence. While the return execution lease is the current execution lease, the method includes executing the one or more commands for controlling the robot associated with the return execution lease which cause the robot to navigate to a return location remote from a current location of the robot.
G05D 1/02 - Control of position or course in two dimensions
G05B 19/042 - Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
Methods and apparatus for implementing a safety system for a mobile robot are described. The method comprises receiving first sensor data from one or more sensors, the first sensor data being captured at a first time, identifying, based on the first sensor data, a first unobserved portion of a safety field in an environment of a mobile robot, assigning, to each of a plurality of contiguous regions within the first unobserved portion of the safety field, an occupancy state, updating, at a second time after the first time, the occupancy state of one or more of the plurality of contiguous regions, and determining one or more operating parameters for the mobile robot, the one or more operating parameters based, at least in part, on the occupancy state of at least some regions of the plurality of contiguous regions at the second time.
According to one disclosed method, one or more sensors of a robot may receive data corresponding to one or more locations of the robot along a path the robot is following within an environment on a first occasion. Based on the received data, a determination may be made that one or more stairs exist in a first region of the environment. Further, when the robot is at a position along the path the robot is following on the first occasion, a determination may be made that the robot is expected to enter the first region. The robot may be controlled to operate in a first operational mode associated with traversal of stairs when it is determined that one or more stairs exist in the first region and the robot is expected to enter the first region.
B62D 57/024 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members, specially adapted for moving on inclined or vertical surfaces
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members, with alternately or sequentially lifted feet or skids
G05D 1/02 - Control of position or course in two dimensions
A robot leg assembly includes a hip joint and an upper leg member, with a proximal end portion of the upper leg member rotatably coupled to the hip joint. The robot leg assembly also includes a knee joint rotatably coupled to a distal end portion of the upper leg member, a lower leg member rotatably coupled to the knee joint, a linear actuator disposed on the upper leg member and defining a motion axis, a motor coupled to the linear actuator, and a linkage coupled to a translation stage of the linear actuator and to the lower leg member. The translation stage is moveable along the motion axis to translate rotational motion of the motor into linear motion of the translation stage along the motion axis, which moves the linkage to rotate the lower leg member relative to the upper leg member at the knee joint.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members, with alternately or sequentially lifted feet or skids
A computing system may provide a model of a robot. The model may be configured to determine simulated motions of the robot based on sets of control parameters. The computing system may also operate the model with multiple sets of control parameters to simulate respective motions of the robot. The computing system may further determine respective scores for each respective simulated motion of the robot, wherein the respective scores are based on constraints associated with each limb of the robot and a goal. The constraints include actuator constraints and joint constraints for limbs of the robot. Additionally, the computing system may select, based on the respective scores, a set of control parameters associated with a particular score. Further, the computing system may modify a behavior of the robot based on the selected set of control parameters to perform a coordinated exertion of forces by actuators of the robot.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members, with alternately or sequentially lifted feet or skids
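The simulate-score-select loop described above can be sketched as below, where the stand-in "model", the constraint penalties, and the candidate parameter sets are all invented for the example and carry no values from the publication.

```python
import random

def simulate_motion(control_params):
    """Stand-in for the robot model: pretends to simulate a motion and returns
    a final position from a deliberately simple function of the parameters."""
    gain, damping = control_params
    return gain / (1.0 + damping) + random.gauss(0.0, 0.01)

def score(control_params, goal):
    """Score a simulated motion: closeness to the goal minus penalties for
    violating assumed actuator and joint constraints."""
    gain, damping = control_params
    reached = simulate_motion(control_params)
    actuator_penalty = max(0.0, gain - 2.0)      # assumed actuator limit on gain
    joint_penalty = max(0.0, 0.1 - damping)      # assumed minimum damping for joints
    return -abs(goal - reached) - actuator_penalty - joint_penalty

def select_parameters(candidates, goal):
    """Simulate each candidate parameter set, score it, and pick the best."""
    return max(candidates, key=lambda p: score(p, goal))

if __name__ == "__main__":
    random.seed(0)
    candidate_sets = [(0.5, 0.2), (1.2, 0.2), (1.8, 0.5), (2.5, 0.05)]
    print(select_parameters(candidate_sets, goal=1.0))  # -> (1.2, 0.2)
```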
An example implementation involves controlling robots with non-constant body pitch and height. The implementation involves obtaining a model of the robot that represents the robot as a first point mass rigidly coupled with a second point mass along a longitudinal axis. The implementation also involves determining a state of a first pair of legs, and determining a height of the first point mass based on the model and the state of the first pair of legs. The implementation further involves determining a first amount of vertical force for at least one leg of the first pair of legs to apply along a vertical axis against a surface while the at least one leg is in contact with the surface. Additionally, the implementation involves causing the at least one leg of the first pair of legs to begin applying the amount of vertical force against the surface.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless tracks, alone or in addition to wheels or endless tracks, with ground-engaging propulsion means, e.g. walking members, with alternately or sequentially lifted feet or skids
A kit includes a computing device configured to control motion of equipment for receiving one or more parcels in an environment of a mobile robot. The kit also includes a structure configured to couple to the equipment. The structure comprises an identifier configured to be sensed by a sensor of the mobile robot.
A computing device receives location information for a mobile robot. The computing device also receives location information for an entity in an environment of the mobile robot. The computing device determines a distance between the mobile robot and the entity in the environment of the mobile robot. The computing device determines one or more operating parameters for the mobile robot. The one or more operating parameters are based on the determined distance.
G05D 1/02 - Control of position or course in two dimensions
B25J 5/00 - Manipulators mounted on wheels or on carriages
B66F 9/06 - Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes movable, with their loads, on wheels or the like, e.g. fork-lift trucks
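A distance-dependent operating parameter can be as simple as a banded speed limit, as in the sketch below; the band edges and speeds are illustrative values, not taken from the publication.

```python
def speed_limit_for_distance(distance_m: float) -> float:
    """Map the robot-to-entity distance to a maximum drive speed in m/s."""
    if distance_m < 1.0:
        return 0.0   # stop when an entity is very close
    if distance_m < 3.0:
        return 0.5   # creep speed inside the caution band
    return 1.5       # nominal speed otherwise

if __name__ == "__main__":
    for d in (0.4, 2.0, 7.5):
        print(f"distance {d:4.1f} m -> max speed {speed_limit_for_distance(d)} m/s")
```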
An example robot includes a hydraulic actuator cylinder controlling motion of a member of the robot. The hydraulic actuator cylinder comprises a piston, a first chamber, and a second chamber. A valve system controls hydraulic fluid flow between a hydraulic supply line of pressurized hydraulic fluid, the first and second chambers, and a return line. A controller may provide a first signal to the valve system so as to begin moving the piston based on a trajectory comprising moving in a forward direction, stopping, and moving in a reverse direction. The controller may provide a second signal to the valve system so as to cause the piston to override the trajectory as it moves in the forward direction and stop at a given position, and then provide a third signal to the valve system so as to resume moving the piston in the reverse direction based on the trajectory.
F15B 9/09 - Servomoteurs à asservissement, c. à d. dans lesquels la position de l'organe commandé correspond à celle de l'organe qui commande les servomoteurs étant du type à mouvement possible alternatif ou oscillant commandés par des clapets agissant sur l'alimentation de fluide ou sur l'orifice de sortie du fluide du servomoteur avec moyens de commande électriques
B62D 57/032 - Véhicules caractérisés par des moyens de propulsion ou de prise avec le sol autres que les roues ou les chenilles, seuls ou en complément aux roues ou aux chenilles avec moyens de propulsion en prise avec le sol, p.ex. par jambes mécaniques avec des pieds ou des patins soulevés alternativement ou dans un ordre déterminé
B25J 5/00 - Manipulateurs montés sur roues ou sur support mobile
Systems and methods for determining movement of a robot about an environment are provided. A computing system of the robot (i) receives information including a navigation target for the robot and a kinematic state of the robot; (ii) determines, based on the information and a trajectory target for the robot, a retargeted trajectory for the robot; (iii) determines, based on the retargeted trajectory, a centroidal trajectory for the robot and a kinematic trajectory for the robot consistent with the centroidal trajectory; and (iv) determines, based on the centroidal trajectory and the kinematic trajectory, a set of vectors having a vector for each of one or more joints of the robot.
B25J 13/08 - Commandes pour manipulateurs au moyens de dispositifs sensibles, p.ex. à la vue ou au toucher
G05D 1/02 - Commande de la position ou du cap par référence à un système à deux dimensions
B62D 57/02 - Véhicules caractérisés par des moyens de propulsion ou de prise avec le sol autres que les roues ou les chenilles, seuls ou en complément aux roues ou aux chenilles avec moyens de propulsion en prise avec le sol, p.ex. par jambes mécaniques
A method for detecting boxes includes receiving a plurality of image frame pairs for an area of interest including at least one target box. Each image frame pair includes a monocular image frame and a respective depth image frame. For each image frame pair, the method includes determining corners for a rectangle associated with the at least one target box within the respective monocular image frame. Based on the determined corners, the method includes the following: performing edge detection and determining faces within the respective monocular image frame; and extracting planes corresponding to the at least one target box from the respective depth image frame. The method includes matching the determined faces to the extracted planes and generating a box estimation based on the determined corners, the performed edge detection, and the matched faces of the at least one target box.
G06V 10/25 - Détermination d’une région d’intérêt [ROI] ou d’un volume d’intérêt [VOI]
G06V 10/44 - Extraction de caractéristiques locales par analyse des parties du motif, p.ex. par détection d’arêtes, de contours, de boucles, d’angles, de barres ou d’intersections; Analyse de connectivité, p.ex. de composantes connectées
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p.ex. des objets vidéo
G06V 10/80 - Fusion, c. à d. combinaison des données de diverses sources au niveau du capteur, du prétraitement, de l’extraction des caractéristiques ou de la classification
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
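The face-to-plane matching step above can be illustrated with a hedged sketch that assumes each detected face and each extracted plane is summarized by a unit normal and a centroid; faces are matched greedily to the plane with the most similar normal and nearest centroid. The thresholds are assumptions, not values from the disclosure.

    import numpy as np

    def match_faces_to_planes(faces, planes, max_angle_deg=15.0, max_dist_m=0.05):
        matches = {}
        cos_thresh = np.cos(np.radians(max_angle_deg))
        for fi, (f_normal, f_centroid) in enumerate(faces):
            best, best_dist = None, float("inf")
            for pi, (p_normal, p_centroid) in enumerate(planes):
                if abs(np.dot(f_normal, p_normal)) < cos_thresh:
                    continue                     # surface orientations disagree too much
                dist = float(np.linalg.norm(np.asarray(f_centroid) - np.asarray(p_centroid)))
                if dist < best_dist and dist <= max_dist_m:
                    best, best_dist = pi, dist
            matches[fi] = best                   # None if no plane is close enough
        return matches

    faces = [(np.array([0.0, 0.0, 1.0]), (0.30, 0.20, 0.90))]
    planes = [(np.array([0.02, 0.01, 0.999]), (0.31, 0.21, 0.92)),
              (np.array([1.0, 0.0, 0.0]), (0.10, 0.20, 0.90))]
    print(match_faces_to_planes(faces, planes))  # face 0 -> plane 0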
An actuation pressure to actuate one or more hydraulic actuators may be determined based on a load on the one or more hydraulic actuators of a robotic device. Based on the determined actuation pressure, a pressure rail from among a set of pressure rails at respective pressures may be selected. One or more valves may connect the selected pressure rail to a metering valve. The hydraulic drive system may operate in a discrete mode in which the metering valve opens such that hydraulic fluid flows from the selected pressure rail through the metering valve to the one or more hydraulic actuators at approximately the supply pressure. Responsive to a control state of the robotic device, the hydraulic drive system may operate in a continuous mode in which the metering valve throttles the hydraulic fluid such that the supply pressure is reduced to the determined actuation pressure.
F15B 11/18 - Systèmes de servomoteurs dépourvus d'asservissement avec plusieurs servomoteurs utilisés en combinaison pour obtenir le fonctionnement par étape d'un organe commandé unique
G05D 1/02 - Commande de la position ou du cap par référence à un système à deux dimensions
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
B25J 9/14 - Manipulateurs à commande programmée caractérisés par des moyens pour régler la position des éléments manipulateurs à fluide
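A hedged sketch of the rail-selection logic described above, under the assumption that the controller picks the lowest rail that meets or exceeds the actuation pressure computed from the load, then either passes the rail pressure through (discrete mode) or throttles down to the target (continuous mode); all numbers are illustrative.

    def required_pressure(load_newtons, piston_area_m2, margin=1.1):
        return margin * load_newtons / piston_area_m2        # pascals

    def select_rail(rails_pa, actuation_pa):
        candidates = [p for p in rails_pa if p >= actuation_pa]
        return min(candidates) if candidates else max(rails_pa)

    def metering_valve_output(rail_pa, actuation_pa, continuous_mode):
        # Discrete mode: the valve opens fully and delivers roughly the rail pressure.
        # Continuous mode: the valve throttles so the pressure is reduced to the target.
        return actuation_pa if continuous_mode else rail_pa

    rails = [3.0e6, 10.0e6, 21.0e6]                          # assumed set of pressure rails, Pa
    target = required_pressure(9_000.0, 0.0012)
    rail = select_rail(rails, target)
    print(rail, round(metering_valve_output(rail, target, continuous_mode=True)))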
09 - Appareils et instruments scientifiques et électriques
42 - Services scientifiques, technologiques et industriels, recherche et conception
Produits et services
Database management being management of data collected by robotic equipment and internet of things (IoT) enabled devices; database management services in the nature of databases comprised of data collected by robotic equipment and internet of things (IoT) sensors; data processing services, in the nature of collecting and analyzing data gathered by robotic equipment and internet of things (IoT) enabled devices to identify historical trends, predict outcomes, and assess performance

Downloadable and recorded computer software, both for the remote management of robotic equipment and internet of things (IoT) enabled devices for controlling, operating, and monitoring the status of robots, machine tools, industrial machines, and internet of things (IoT) enabled devices, for the management of data collected by robotic equipment and internet of things (IoT) enabled devices, for analyzing the data gathered by robotic equipment and internet of things (IoT) enabled devices to identify historical trends, predict outcomes, and assess performance, sending real-time instructions, setting routes and missions, and modifying the scheduled tasks of robotic equipment and internet of things (IoT) enabled devices, for the real-time monitoring of on-sight sensors being laser scanning sensors, gas sensors, radiation sensors, vibration sensors, partial discharge sensors, thermal sensors, and audio sensors, for the real-time monitoring of camera and video images collected by robotic equipment and internet of things (IoT) enabled devices, and for teleoperation being the remote operation of robotic equipment and internet of things (IoT) enabled devices; computer interface software, namely, downloadable and recorded computer software both for use in database management, storing and managing electronic data

Providing online non-downloadable computer software platforms for the remote management of robotic equipment and internet of things (IoT) enabled devices in the nature of controlling, operating, and monitoring the status of robots, machine tools, industrial machines and internet of things (IoT) enabled devices, for the management of data collected by robotic equipment and internet of things (IoT) enabled devices, for analyzing the data gathered by robotic equipment and internet of things (IoT) enabled devices to identify historical trends, predict outcomes, and assess performance, for sending real-time instructions, setting routes and missions, and modifying the scheduled tasks of robotic equipment and internet of things (IoT) enabled devices, for the real-time monitoring of on-sight sensors being laser scanning sensors, gas sensors, radiation sensors, vibration sensors, partial discharge sensors, thermal sensors, and audio sensors, for the real-time monitoring of camera and video images collected by robotic equipment and internet of things (IoT) enabled devices, and for teleoperation being the remote operation of robotic equipment and internet of things (IoT) enabled devices; providing temporary use of non-downloadable cloud-based software for managing electronic data interfaces
A method for calibrating a position measurement system includes receiving measurement data from the position measurement system and determining that the measurement data includes periodic distortion data. The position measurement system includes a nonius track and a master track. The method also includes modifying the measurement data by decomposing the periodic distortion data into periodic components and removing the periodic components from the measurement data.
G01D 5/244 - Moyens mécaniques pour le transfert de la grandeur de sortie d'un organe sensible; Moyens pour convertir la grandeur de sortie d'un organe sensible en une autre variable, lorsque la forme ou la nature de l'organe sensible n'imposent pas un moyen de conversion déterminé; Transducteurs non spécialement adaptés à une variable particulière utilisant des moyens électriques ou magnétiques produisant des impulsions ou des trains d'impulsions
G01D 5/347 - Moyens mécaniques pour le transfert de la grandeur de sortie d'un organe sensible; Moyens pour convertir la grandeur de sortie d'un organe sensible en une autre variable, lorsque la forme ou la nature de l'organe sensible n'imposent pas un moyen de conversion déterminé; Transducteurs non spécialement adaptés à une variable particulière utilisant des moyens optiques, c. à d. utilisant de la lumière infrarouge, visible ou ultraviolette avec atténuation ou obturation complète ou partielle des rayons lumineux les rayons lumineux étant détectés par des cellules photo-électriques en utilisant le déplacement d'échelles de codage
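A hedged sketch of the decomposition idea above, assuming the periodic distortion can be isolated by detrending the measurement data and keeping only the strongest Fourier components of the residual; the real nonius/master-track calibration is more involved, and the signal values here are illustrative.

    import numpy as np

    def remove_periodic_distortion(measured, n_harmonics=2):
        x = np.asarray(measured, dtype=float)
        n = np.arange(len(x))
        trend = np.polyval(np.polyfit(n, x, 1), n)           # slowly varying true signal
        residual = x - trend
        spectrum = np.fft.rfft(residual)
        keep = np.argsort(np.abs(spectrum))[-n_harmonics:]   # strongest periodic components
        periodic = np.zeros_like(spectrum)
        periodic[keep] = spectrum[keep]
        distortion = np.fft.irfft(periodic, n=len(x))
        return x - distortion, distortion

    n = np.arange(360)
    measured = 0.02 * n + 0.5 * np.sin(2 * np.pi * n / 90) + 0.2 * np.sin(2 * np.pi * n / 45)
    corrected, removed = remove_periodic_distortion(measured)
    print(round(float(np.max(np.abs(corrected - 0.02 * n))), 3))   # small residual after correction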
Using the techniques discussed herein, a set of images is captured by one or more array imagers (106). Each array imager includes multiple imagers configured in various manners. Each array imager captures multiple images of substantially a same scene at substantially a same time. The images captured by each array imager are encoded by multiple processors (112, 114). Each processor can encode sets of images captured by a different array imager, or each processor can encode different sets of images captured by the same array imager. The encoding of the images is performed using various image-compression techniques so that the information that results from the encoding is smaller, in terms of storage size, than the uncompressed images.
H04N 13/282 - Générateurs de signaux d’images pour la génération de signaux d’images correspondant à au moins trois points de vue géométriques, p.ex. systèmes multi-vues
G06T 1/20 - Architectures de processeurs; Configuration de processeurs p.ex. configuration en pipeline
H04N 13/161 - Encodage, multiplexage ou démultiplexage de différentes composantes des signaux d’images
H04N 19/107 - Sélection du mode de codage ou du mode de prédiction entre codage prédictif spatial et temporel, p.ex. rafraîchissement d’image
H04N 19/42 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques - caractérisés par les détails de mise en œuvre ou le matériel spécialement adapté à la compression ou à la décompression vidéo, p.ex. la mise en œuvre de logiciels spécialisés
H04N 19/436 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques - caractérisés par les détails de mise en œuvre ou le matériel spécialement adapté à la compression ou à la décompression vidéo, p.ex. la mise en œuvre de logiciels spécialisés utilisant des dispositions de calcul parallélisées
H04N 19/503 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage prédictif mettant en œuvre la prédiction temporelle
H04N 19/593 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage prédictif mettant en œuvre des techniques de prédiction spatiale
H04N 19/597 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage prédictif spécialement adapté pour l’encodage de séquences vidéo multi-vues
H04N 19/62 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant un codage par transformée par transformation en fréquence en trois dimensions
H04N 23/45 - Caméras ou modules de caméras comprenant des capteurs d'images électroniques; Leur commande pour générer des signaux d'image à partir de plusieurs capteurs d'image de type différent ou fonctionnant dans des modes différents, p. ex. avec un capteur CMOS pour les images en mouvement en combinaison avec un dispositif à couplage de charge [CCD]
H04N 23/75 - Circuits de compensation de la variation de luminosité dans la scène en agissant sur la partie optique de la caméra
H04N 23/80 - Chaînes de traitement de la caméra; Leurs composants
H04N 23/957 - Caméras ou modules de caméras à champ lumineux ou plénoptiques
A method of manipulating boxes includes receiving a minimum box size for a plurality of boxes varying in size located in a walled container. The method also includes dividing a grip area of a gripper into a plurality of zones. The method further includes locating a set of candidate boxes based on an image from a visual sensor. For each zone, the method additionally includes, determining an overlap of a respective zone with one or more neighboring boxes to the set of candidate boxes. The method also includes determining a grasp pose for a target candidate box that avoids one or more walls of the walled container. The method further includes executing the grasp pose to lift the target candidate box by the gripper where the gripper activates each zone of the plurality of zones that does not overlap a respective neighboring box to the target candidate box.
Methods and apparatus for online camera calibration are provided. The method comprises receiving a first image captured by a first camera of a robot, wherein the first image includes an object having at least one known dimension, receiving a second image captured by a second camera of the robot, wherein the second image includes the object, wherein a field of view of the first camera and a field of view of the second camera at least partially overlap, projecting a plurality of points on the object in the first image to pixel locations in the second image, and determining, based on pixel locations of the plurality of points on the object in second image and the projected plurality of points on the object, a reprojection error.
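A hedged sketch of the reprojection-error computation described above, assuming pinhole intrinsics for the second camera and a known rigid transform between the two cameras; the matrices and points below are illustrative, not values from the disclosure.

    import numpy as np

    def project(points_cam, K):
        """Project 3-D points expressed in a camera frame to pixel coordinates."""
        uvw = (K @ points_cam.T).T
        return uvw[:, :2] / uvw[:, 2:3]

    def reprojection_error(points_cam1, K2, R_21, t_21, observed_px_cam2):
        points_cam2 = (R_21 @ points_cam1.T).T + t_21        # move object points into camera 2
        projected_px = project(points_cam2, K2)
        return float(np.mean(np.linalg.norm(projected_px - observed_px_cam2, axis=1)))

    K2 = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
    R_21, t_21 = np.eye(3), np.array([0.10, 0.0, 0.0])       # assumed camera-to-camera transform
    pts = np.array([[0.0, 0.0, 2.0], [0.2, 0.1, 2.5]])       # object points seen by camera 1
    observed = project((R_21 @ pts.T).T + t_21, K2) + 0.5    # pretend detections, 0.5 px off
    print(round(reprojection_error(pts, K2, R_21, t_21, observed), 2))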
Methods and apparatus for performing automated inspection of one or more assets in an environment using a mobile robot are provided. The method, comprises defining, within an image captured by a sensor of a robot, a region of interest that includes an asset in an environment of the robot, wherein the asset is associated with an asset identifier, configuring at least one parameter of a computer vision model based on the asset identifier, processing image data within the region of interest using the computer vision model to determine whether an alert should be generated, and outputting the alert when it is determined that the alert should be generated.
One disclosed method involves at least one application controlling navigation of a robot through an environment based at least in part on a topological map, the topological map including at least a first waypoint, a second waypoint, and a first edge representing a first path between the first waypoint and the second waypoint. The at least one application determines that the topological map includes at least one feature that identifies a first service that is configured to control the robot to perform at least one operation, and instructs the first service to perform the at least one operation as the robot travels along at least a portion of the first path.
A method for online authoring of robot autonomy applications includes receiving sensor data of an environment about a robot while the robot traverses through the environment. The method also includes generating an environmental map representative of the environment about the robot based on the received sensor data. While generating the environmental map, the method includes localizing a current position of the robot within the environmental map and, at each corresponding target location of one or more target locations within the environment, recording a respective action for the robot to perform. The method also includes generating a behavior tree for navigating the robot to each corresponding target location and controlling the robot to perform the respective action at each corresponding target location within the environment during a future mission when the current position of the robot within the environmental map reaches the corresponding target location.
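A hedged sketch of the recording idea above: each target location becomes a node that navigates to the location and then runs the recorded action, and the nodes are strung into a simple sequence standing in for the behavior tree. The node classes and action names are illustrative assumptions.

    class NavigateTo:
        def __init__(self, waypoint):
            self.waypoint = waypoint
        def run(self, robot):
            robot["position"] = self.waypoint                # stand-in for the navigation stack
            return True

    class PerformAction:
        def __init__(self, action):
            self.action = action
        def run(self, robot):
            robot.setdefault("log", []).append((robot["position"], self.action))
            return True

    class Sequence:
        def __init__(self, children):
            self.children = children
        def run(self, robot):
            return all(child.run(robot) for child in self.children)

    mission = Sequence([NavigateTo("valve_station"), PerformAction("capture_thermal_image"),
                        NavigateTo("gauge_7"), PerformAction("read_gauge")])
    robot = {"position": "dock"}
    mission.run(robot)
    print(robot["log"])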
According to one disclosed method, one or more sensors of a robot may receive data corresponding to one or more locations of the robot along a path the robot is following within an environment on a first occasion. Based on the received data, a determination may be made that one or more stairs exist in a first region of the environment. Further, when the robot is at a position along the path the robot is following on the first occasion, a determination may be made that the robot is expected to enter the first region. The robot may be controlled to operate in a first operational mode associated with traversal of stairs when it is determined that one or more stairs exist in the first region and the robot is expected to enter the first region.
B62D 57/024 - Véhicules caractérisés par des moyens de propulsion ou de prise avec le sol autres que les roues ou les chenilles, seuls ou en complément aux roues ou aux chenilles avec moyens de propulsion en prise avec le sol, p.ex. par jambes mécaniques spécialement adaptés pour se déplacer sur des surfaces inclinées ou verticales
B62D 57/032 - Véhicules caractérisés par des moyens de propulsion ou de prise avec le sol autres que les roues ou les chenilles, seuls ou en complément aux roues ou aux chenilles avec moyens de propulsion en prise avec le sol, p.ex. par jambes mécaniques avec des pieds ou des patins soulevés alternativement ou dans un ordre déterminé
Methods and apparatus for navigating a robot along a route through an environment, the route being associated with a mission, are provided. The method comprises identifying, based on sensor data received by one or more sensors of the robot, a set of potential obstacles in the environment, determining, based at least in part on stored data indicating a set of footfall locations of the robot during a previous execution of the mission, that at least one of the potential obstacles in the set is an obstacle, and navigating the robot to avoid stepping on the obstacle.
B62D 57/032 - Véhicules caractérisés par des moyens de propulsion ou de prise avec le sol autres que les roues ou les chenilles, seuls ou en complément aux roues ou aux chenilles avec moyens de propulsion en prise avec le sol, p.ex. par jambes mécaniques avec des pieds ou des patins soulevés alternativement ou dans un ordre déterminé
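One plausible reading of the determination above, offered only as an assumption: a potential obstacle is kept as a real obstacle to avoid stepping on unless it lies where a footfall already landed during a previous execution of the mission.

    import math

    def classify_obstacles(potential_obstacles_xy, prior_footfalls_xy, radius_m=0.15):
        obstacles = []
        for candidate in potential_obstacles_xy:
            near_footfall = any(math.dist(candidate, f) <= radius_m for f in prior_footfalls_xy)
            if not near_footfall:
                obstacles.append(candidate)      # the robot never stepped here before: avoid it
        return obstacles

    footfalls = [(0.5, 0.0), (1.0, 0.1), (1.5, 0.0)]
    candidates = [(1.02, 0.12), (2.3, -0.4)]
    print(classify_obstacles(candidates, footfalls))   # only the second candidate is avoided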
An example implementation involves receiving measurements from an inertial sensor coupled to a legged robot and detecting an occurrence of a foot of the legged robot making contact with a surface. The implementation also involves reducing a gain value of an amplifier from a nominal value to a reduced value upon detecting the occurrence. The amplifier receives the measurements from the inertial sensor and provides a modulated output based on the gain value. The implementation further involves increasing the gain value from the reduced value to the nominal value over a predetermined duration of time after detecting the occurrence. The gain value is increased according to a profile indicative of a manner in which to increase the gain value over the predetermined duration of time. The implementation also involves controlling at least one actuator of the legged robot based on the modulated output during the predetermined duration of time.
B62D 57/02 - Véhicules caractérisés par des moyens de propulsion ou de prise avec le sol autres que les roues ou les chenilles, seuls ou en complément aux roues ou aux chenilles avec moyens de propulsion en prise avec le sol, p.ex. par jambes mécaniques
G01L 5/00 - Appareils ou procédés pour la mesure des forces, du travail, de la puissance mécanique ou du couple, spécialement adaptés à des fins spécifiques
B62D 57/032 - Véhicules caractérisés par des moyens de propulsion ou de prise avec le sol autres que les roues ou les chenilles, seuls ou en complément aux roues ou aux chenilles avec moyens de propulsion en prise avec le sol, p.ex. par jambes mécaniques avec des pieds ou des patins soulevés alternativement ou dans un ordre déterminé
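A hedged sketch of the gain schedule above, assuming a linear ramp profile: on touchdown the amplifier gain drops from a nominal value to a reduced value, then is increased back to nominal over a fixed duration; the values are illustrative.

    def gain_profile(t_since_contact_s, nominal=1.0, reduced=0.2, ramp_duration_s=0.15):
        if t_since_contact_s <= 0.0:
            return reduced
        if t_since_contact_s >= ramp_duration_s:
            return nominal
        frac = t_since_contact_s / ramp_duration_s
        return reduced + frac * (nominal - reduced)

    def modulated_output(raw_imu_sample, t_since_contact_s):
        return gain_profile(t_since_contact_s) * raw_imu_sample   # fed to the actuator controller

    for t in (0.0, 0.05, 0.10, 0.20):
        print(t, round(gain_profile(t), 3))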
82.
SYSTEMS AND METHODS FOR SYNCHRONIZING MULTIPLE ELECTRONIC DEVICES
Embodiments are provided for syncing multiple electronic devices for collective audio playback. According to certain aspects, a master device connects (218) to a slave device via a wireless connection. The master device calculates (224) a network latency via a series of network latency pings with the slave device and sends (225) the network latency to the slave device. Further, the master device sends (232) a portion of an audio file as well as a timing instruction including a system time to the slave device. The master device initiates (234) playback of the portion of the audio file and the slave device initiates (236) playback of the portion of the audio file according to the timing instruction and a calculated system clock offset value.
H04H 20/38 - Dispositions de distribution lorsque des stations inférieures, p.ex. des récepteurs, interagissent avec la radiodiffusion
H04H 20/61 - Dispositions spécialement adaptées à des applications spécifiques, p.ex. aux informations sur le trafic ou aux récepteurs mobiles à la radiodiffusion locale, p.ex. la radiodiffusion en interne
H04H 20/08 - Dispositions pour la retransmission des informations radiodiffusées entre des appareils terminaux
H04H 20/18 - Dispositions de synchronisation de la radiodiffusion ou de la distribution par l'intermédiaire de plusieurs systèmes
H04N 21/43 - Traitement de contenu ou données additionnelles, p.ex. démultiplexage de données additionnelles d'un flux vidéo numérique; Opérations élémentaires de client, p.ex. surveillance du réseau domestique ou synchronisation de l'horloge du décodeur; Intergiciel de client
H04N 21/242 - Procédés de synchronisation, p.ex. traitement de références d'horloge de programme [PCR]
H04H 60/88 - Dispositions caractérisées par des systèmes de transmission autres que ceux utilisés pour la radiodiffusion, p.ex. Internet caractérisées par le système de transmission lui-même le système de transmission étant Internet l’accès se faisant au moyen de réseaux informatiques qui sont des réseaux sans fil
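A hedged sketch of the timing math above, assuming a symmetric link: half of the best ping round trip stands in for the one-way network latency, and the slave applies a clock offset so both devices start playback together. The names are illustrative and no real networking is performed.

    import time

    def one_way_latency(ping_round_trips_s):
        return min(ping_round_trips_s) / 2.0                 # best-case round trip, halved

    def playback_start_times(master_now_s, clock_offset_s, lead_s=0.5):
        master_start = master_now_s + lead_s                 # timing instruction sent to the slave
        slave_start = master_start + clock_offset_s          # expressed in the slave's clock
        return master_start, slave_start

    rtts = [0.012, 0.015, 0.011]
    latency = one_way_latency(rtts)
    m_start, s_start = playback_start_times(time.time(), clock_offset_s=0.030)
    print(round(latency * 1000, 2), "ms one-way;", round(s_start - m_start, 3), "s clock offset")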
A method for generating intermediate waypoints for a navigation system of a robot includes receiving a navigation route. The navigation route includes a series of high-level waypoints that begin at a starting location and end at a destination location and is based on high-level navigation data. The high-level navigation data is representative of locations of static obstacles in an area the robot is to navigate. The method also includes receiving image data of an environment about the robot from an image sensor and generating at least one intermediate waypoint based on the image data. The method also includes adding the at least one intermediate waypoint to the series of high-level waypoints of the navigation route and navigating the robot from the starting location along the series of high-level waypoints and the at least one intermediate waypoint toward the destination location.
A content moving device enables content stored on a first user device, such as a DVR, in a first format and resolution to be provided to a second user device, such as a portable media player (PMP), in a second format and resolution. The content moving device identifies content on the first user device as candidate content which may be desired by the PMP and receives the candidate content from the DVR. The content moving device transcodes the candidate content at times independent of a request from the PMP for the content. The content moving device may provide a list of available transcoded content to the PMP for selection, and provide selected content to the PMP. The content moving device may also provide information relating to any protection schemes of the content provided to the PMP, such as DRM rights and decryption keys. The content moving device performs the often computationally intense and time-consuming transcoding of user content to enable the user to move content between different user devices in a convenient manner.
H04N 21/4363 - Adaptation du flux vidéo à un réseau local spécifique, p.ex. un réseau IEEE 1394 ou Bluetooth®
H04N 21/4402 - Traitement de flux élémentaires vidéo, p.ex. raccordement d'un clip vidéo récupéré d'un stockage local avec un flux vidéo en entrée ou rendu de scènes selon des graphes de scène MPEG-4 impliquant des opérations de reformatage de signaux vidéo pour la redistribution domestique, le stockage ou l'affichage en temps réel
H04N 21/45 - Opérations de gestion réalisées par le client pour faciliter la réception de contenu ou l'interaction avec le contenu, ou pour l'administration des données liées à l'utilisateur final ou au dispositif client lui-même, p.ex. apprentissage des préféren
H04N 21/41 - Structure de client; Structure de périphérique de client
85.
ALERT PERIPHERAL FOR NOTIFICATION OF EVENTS OCCURRING ON A PROGRAMMABLE USER EQUIPMENT WITH COMMUNICATION CAPABILITIES
An alert peripheral device that provides sensory notification to a user of the device includes: a power subsystem; a communication mechanism by which notification signals are received from a first user equipment (UE) that generates and transmits the notification signals in response to detection of specific events at the first UE; and a response notification mechanism that provides a sensory response of the peripheral device following receipt of a notification of a detected event (NDE) signal. The device further includes an embedded controller coupled to each of the other components and which includes firmware that, when executed on the embedded controller, configures the embedded controller to: establish a communication link between the communication mechanism and the first UE; and in response to detecting a receipt of the NDE signal from the first UE, trigger the response notification mechanism to exhibit the sensory response.
H04W 68/02 - Dispositions pour augmenter l'efficacité du canal d'avertissement ou de messagerie
H04W 4/12 - Messagerie; Boîtes aux lettres; Annonces
H04M 19/04 - Dispositions d'alimentation de courant pour systèmes téléphoniques fournissant un courant de sonnerie ou des tonalités de surveillance, p.ex. tonalité de numérotation ou tonalité d’occupation le courant de sonnerie étant produit aux sous-stations
H04B 1/3827 - TRANSMISSION - Détails des systèmes de transmission non caractérisés par le milieu utilisé pour la transmission Émetteurs-récepteurs, c. à d. dispositifs dans lesquels l'émetteur et le récepteur forment un ensemble structural et dans lesquels au moins une partie est utilisée pour des fonctions d'émission et de réception Émetteurs-récepteurs portatifs
H04W 76/40 - Gestion de la connexion pour la distribution ou la diffusion sélective
H04W 76/14 - Gestion de la connexion Établissement de la connexion Établissement de la connexion en mode direct
H04M 1/72412 - Interfaces utilisateur spécialement adaptées aux téléphones sans fil ou mobiles avec des moyens de soutien local des applications accroissant la fonctionnalité par interfaçage avec des accessoires externes utilisant des interfaces sans fil bidirectionnelles à courte portée
H04W 4/14 - Services d'envoi de messages courts, p.ex. SMS ou données peu structurées de services supplémentaires [USSD]
A drive system includes a linear actuator with a drive shaft and having an actuation axis extending along a length of the linear actuator. A motor assembly of the drive system couples to the drive shaft and is configured to rotate the drive shaft about the actuation axis of the linear actuator. The drive system further includes a nut attached to the drive shaft and a carrier housing the nut. A linkage system of the drive system extends from a proximal end away from the motor assembly to a distal end. The proximal end of the linkage system rotatably attaches to the carrier at a first proximal attachment location where the first proximal attachment location is offset from the actuation axis. The drive system also includes an output link rotatably coupled to the distal end of the linkage system where the output link is offset from the actuation axis.
F16H 37/12 - Transmissions comportant principalement une transmission à engrenages ou à friction, des maillons ou des leviers, des cames, ou bien des organes appartenant à deux des trois types ci-dessus au moins
87.
System and method for decoding using parallel processing
An apparatus for decoding frames of a compressed video data stream having at least one frame divided into partitions, includes a memory and a processor configured to execute instructions stored in the memory to read partition data information indicative of a partition location for at least one of the partitions, decode a first partition of the partitions that includes a first sequence of blocks, decode a second partition of the partitions that includes a second sequence of blocks identified from the partition data information using decoded information of the first partition.
H04N 19/61 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant un codage par transformée combiné avec un codage prédictif
H04N 19/91 - Codage entropique, p.ex. codage à longueur variable ou codage arithmétique
H04N 19/82 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques - Détails des opérations de filtrage spécialement adaptées à la compression vidéo, p.ex. pour l'interpolation de pixels mettant en œuvre le filtrage dans une boucle de prédiction
H04N 19/17 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c. à d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p.ex. un objet
H04N 19/593 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage prédictif mettant en œuvre des techniques de prédiction spatiale
H04N 19/44 - Décodeurs spécialement adaptés à cet effet, p.ex. décodeurs vidéo asymétriques par rapport à l’encodeur
H04N 19/174 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c. à d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p.ex. un objet la zone étant une tranche, p.ex. une ligne de blocs ou un groupe de blocs
H04N 19/176 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques utilisant le codage adaptatif caractérisés par l’unité de codage, c. à d. la partie structurelle ou sémantique du signal vidéo étant l’objet ou le sujet du codage adaptatif l’unité étant une zone de l'image, p.ex. un objet la zone étant un bloc, p.ex. un macrobloc
H04N 19/436 - Procédés ou dispositions pour le codage, le décodage, la compression ou la décompression de signaux vidéo numériques - caractérisés par les détails de mise en œuvre ou le matériel spécialement adapté à la compression ou à la décompression vidéo, p.ex. la mise en œuvre de logiciels spécialisés utilisant des dispositions de calcul parallélisées
H04N 19/51 - Estimation ou compensation du mouvement
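A hedged sketch of the dependency above (not a real video decoder): the two partitions are decoded on separate threads, and each block of the second partition waits until the corresponding block of the first partition has been decoded, since its decode uses that decoded information.

    import threading

    NUM_BLOCKS = 4
    first_done = [threading.Event() for _ in range(NUM_BLOCKS)]
    decoded_first = [None] * NUM_BLOCKS
    decoded_second = [None] * NUM_BLOCKS

    def decode_first_partition(blocks):
        for i, block in enumerate(blocks):
            decoded_first[i] = f"F({block})"                 # stand-in for entropy decoding
            first_done[i].set()                              # publish per-block progress

    def decode_second_partition(blocks):
        for i, block in enumerate(blocks):
            first_done[i].wait()                             # needs context from the first partition
            decoded_second[i] = f"S({block}) using {decoded_first[i]}"

    t1 = threading.Thread(target=decode_first_partition, args=(["a", "b", "c", "d"],))
    t2 = threading.Thread(target=decode_second_partition, args=(["w", "x", "y", "z"],))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
    print(decoded_second)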
A robot includes a drive system configured to maneuver the robot about an environment and data processing hardware in communication with memory hardware and the drive system. The memory hardware stores instructions that when executed on the data processing hardware cause the data processing hardware to perform operations. The operations include receiving image data of the robot maneuvering in the environment and executing at least one waypoint heuristic. The at least one waypoint heuristic is configured to trigger a waypoint placement on a waypoint map. In response to the at least one waypoint heuristic triggering the waypoint placement, the operations include recording a waypoint on the waypoint map where the waypoint is associated with at least one waypoint edge and includes sensor data obtained by the robot. The at least one waypoint edge includes a pose transform expressing how to move between two waypoints.
A robotic device includes a control system. The control system receives a first measurement indicative of a first distance between a center of mass of the machine and a first position in which a first leg of the machine last made initial contact with a surface. The control system also receives a second measurement indicative of a second distance between the center of mass of the machine and a second position in which the first leg of the machine was last raised from the surface. The control system further determines a third position in which to place a second leg of the machine based on the received first measurement and the received second measurement. Additionally, the control system provides instructions to move the second leg of the machine to the determined third position.
B62D 57/032 - Véhicules caractérisés par des moyens de propulsion ou de prise avec le sol autres que les roues ou les chenilles, seuls ou en complément aux roues ou aux chenilles avec moyens de propulsion en prise avec le sol, p.ex. par jambes mécaniques avec des pieds ou des patins soulevés alternativement ou dans un ordre déterminé
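One simple placement rule consistent with the abstract above, offered as an assumption rather than the claimed controller: the second leg's touchdown is placed ahead of the center of mass by half of the span the first leg just swept between touchdown and liftoff.

    def third_position(com_x, d_touchdown, d_liftoff):
        """com_x:       current center-of-mass position along the direction of travel.
        d_touchdown: CoM-to-last-initial-contact distance of the first leg (first measurement).
        d_liftoff:   CoM-to-last-liftoff distance of the first leg (second measurement)."""
        stance_span = d_touchdown + d_liftoff    # contact was ahead of the CoM, liftoff behind it
        return com_x + stance_span / 2.0

    print(third_position(com_x=1.20, d_touchdown=0.25, d_liftoff=0.20))  # 1.425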
A method for palletizing by a robot includes positioning an object at an initial position adjacent to a target object location, tilting the object at an angle relative to a ground plane, shifting the object in a first direction from the initial position toward a first alignment position, shifting the object in a second direction from the first alignment position toward a second alignment position, and releasing the object from the robot to pivot the object toward the target object location.
B25J 15/06 - Têtes de préhension avec moyens de retenue magnétiques ou fonctionnant par succion
B65G 57/24 - Empilage d'objets de forme particulière à trois dimensions, p.ex. cubiques, cylindriques en couches, chacune selon une disposition horizontale prédéterminée les couches étant transférées comme un tout, p.ex. sur palettes
91.
NAME COMPOSITION ASSISTANCE IN MESSAGING APPLICATIONS
A method includes identifying, at an electronic device, a candidate name responsive to user input indicating a salutational trigger during composition of a body of a message of a messaging application. Identifying the candidate name includes at least one of: parsing a recipient-specific portion of a recipient message address of the message; parsing a display name associated with the recipient message address; parsing a content of the message body; parsing an attachment name associated with an attachment field of the message; identifying the candidate name from a contact record selected from a contacts database based on a recipient-specific portion of a recipient message address of the message; and parsing user-readable content of an application from which composition of the message was triggered. The method further includes facilitating composition of a recipient name in the body of the message based on the candidate name.
H04L 51/48 - Adressage des messages, p.ex. format des adresses ou messages anonymes, alias
H04L 51/02 - Messagerie d'utilisateur à utilisateur dans des réseaux à commutation de paquets, transmise selon des protocoles de stockage et de retransmission ou en temps réel, p.ex. courriel en utilisant des réactions automatiques ou la délégation par l’utilisateur, p.ex. des réponses automatiques ou des messages générés par un agent conversationnel
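A hedged sketch of one of the listed strategies: when a salutational trigger such as "Hi" is typed at the start of the message body, a candidate first name is derived from the recipient-specific portion of the recipient's address. The trigger words and parsing pattern are illustrative assumptions.

    import re

    SALUTATIONS = ("hi", "hello", "dear", "hey")

    def salutation_triggered(body_text):
        words = body_text.strip().lower().split()
        return bool(words) and words[0].rstrip(",") in SALUTATIONS

    def candidate_name_from_address(recipient_address):
        local_part = recipient_address.split("@", 1)[0]      # recipient-specific portion
        first_token = re.split(r"[._\-+]", local_part)[0]
        return first_token.capitalize() if first_token.isalpha() else None

    if salutation_triggered("Hi "):
        print(candidate_name_from_address("jane.doe@example.com"))   # -> "Jane"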
An example implementation includes (i) receiving sensor data that indicates topographical features of an environment in which a robotic device is operating, (ii) processing the sensor data into a topographical map that includes a two-dimensional matrix of discrete cells, the discrete cells indicating sample heights of respective portions of the environment, (iii) determining, for a first foot of the robotic device, a first step path extending from a first lift-off location to a first touch-down location, (iv) identifying, within the topographical map, a first scan patch of cells that encompass the first step path, (v) determining a first high point among the first scan patch of cells; and (vi) during the first step, directing the robotic device to lift the first foot to a first swing height that is higher than the determined first high point.
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
B62D 57/032 - Véhicules caractérisés par des moyens de propulsion ou de prise avec le sol autres que les roues ou les chenilles, seuls ou en complément aux roues ou aux chenilles avec moyens de propulsion en prise avec le sol, p.ex. par jambes mécaniques avec des pieds ou des patins soulevés alternativement ou dans un ordre déterminé
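A hedged sketch of steps (iii) through (vi) above: the height map is a sparse 2-D grid of cells, the cells a step path crosses form the scan patch, and the foot is lifted a fixed clearance above the patch's high point. The cell size and clearance are assumptions.

    def cells_along_path(lift_off, touch_down, cell_size_m=0.05, padding_cells=1):
        (x0, y0), (x1, y1) = lift_off, touch_down
        c0 = (int(x0 // cell_size_m), int(y0 // cell_size_m))
        c1 = (int(x1 // cell_size_m), int(y1 // cell_size_m))
        xs = range(min(c0[0], c1[0]) - padding_cells, max(c0[0], c1[0]) + padding_cells + 1)
        ys = range(min(c0[1], c1[1]) - padding_cells, max(c0[1], c1[1]) + padding_cells + 1)
        return [(i, j) for i in xs for j in ys]              # a rectangular scan patch of cells

    def swing_height(height_map, lift_off, touch_down, clearance_m=0.08):
        patch = cells_along_path(lift_off, touch_down)
        high_point = max(height_map.get(cell, 0.0) for cell in patch)
        return high_point + clearance_m

    heights = {(2, 0): 0.02, (3, 0): 0.12, (4, 0): 0.04}     # sparse topographical map, metres
    print(swing_height(heights, lift_off=(0.10, 0.02), touch_down=(0.25, 0.02)))  # 0.20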
93.
SYSTEMS AND METHODS FOR EQUALIZING AUDIO FOR PLAYBACK ON AN ELECTRONIC DEVICE
Embodiments are provided for receiving a request to output audio at a first speaker and a second speaker of an electronic device, determining that the electronic device is oriented in a portrait orientation or a landscape orientation, identifying, based on the determined orientation, a first equalization setting for the first speaker and a second equalization setting for the second speaker, providing, for output at the first speaker, a first audio signal with the first equalization setting, and providing, for output at the second speaker, a second audio signal with the second equalization setting.
G06F 3/0346 - Dispositifs de pointage déplacés ou positionnés par l'utilisateur; Leurs accessoires avec détection de l’orientation ou du mouvement libre du dispositif dans un espace en trois dimensions [3D], p.ex. souris 3D, dispositifs de pointage à six degrés de liberté [6-DOF] utilisant des capteurs gyroscopiques, accéléromètres ou d’inclinaiso
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G06F 3/03 - Dispositions pour convertir sous forme codée la position ou le déplacement d'un élément
H04R 29/00 - Dispositifs de contrôle; Dispositifs de tests
A method includes receiving sensor data of an environment about a robot and generating a plurality of waypoints and a plurality of edges each connecting a pair of the waypoints. The method includes receiving a target destination for the robot to navigate to and determining a route specification based on waypoints and corresponding edges for the robot to follow for navigating the robot to the target destination selected from waypoints and edges previously generated. For each waypoint, the method includes generating a goal region encompassing the corresponding waypoint and generating at least one constraint region encompassing a goal region. The at least one constraint region establishes boundaries for the robot to remain within while traversing toward the target destination. The method includes navigating the robot to the target destination by traversing the robot through each goal region while maintaining the robot within the at least one constraint region.
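A hedged sketch of the region bookkeeping above, with circles standing in for the goal region around a waypoint and for the larger constraint region that bounds the traverse; the radii are assumptions.

    import math

    def in_circle(point, center, radius):
        return math.dist(point, center) <= radius

    def check_progress(robot_xy, waypoint_xy, goal_radius_m=0.3, constraint_radius_m=2.0):
        return {
            "reached_goal_region": in_circle(robot_xy, waypoint_xy, goal_radius_m),
            "within_constraint_region": in_circle(robot_xy, waypoint_xy, constraint_radius_m),
        }

    print(check_progress((1.2, 0.4), (1.0, 0.5)))   # inside both regions
    print(check_progress((4.0, 0.0), (1.0, 0.5)))   # outside the constraint region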
An example implementation for determining mechanically-timed footsteps may involve a robot having a first foot in contact with a ground surface and a second foot not in contact with the ground surface. The robot may determine a position of its center of mass and center of mass velocity, and based on these, determine a capture point for the robot. The robot may also determine a threshold position for the capture point, where the threshold position is based on a target trajectory for the capture point after the second foot contacts the ground surface. The robot may determine that the capture point has reached this threshold position and, based on this determination, cause the second foot to contact the ground surface.
B62D 57/032 - Véhicules caractérisés par des moyens de propulsion ou de prise avec le sol autres que les roues ou les chenilles, seuls ou en complément aux roues ou aux chenilles avec moyens de propulsion en prise avec le sol, p.ex. par jambes mécaniques avec des pieds ou des patins soulevés alternativement ou dans un ordre déterminé
G05D 1/08 - Commande de l'attitude, c. à d. élimination ou réduction des effets du roulis, du tangage ou des embardées
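A hedged sketch of the capture-point test above, using the standard linear-inverted-pendulum approximation in which the capture point leads the center of mass by the velocity scaled by sqrt(height / g); the threshold below is a fixed illustrative value, whereas the disclosure derives it from a target trajectory after touchdown.

    import math

    GRAVITY = 9.81

    def capture_point(com_pos_x, com_vel_x, com_height):
        return com_pos_x + com_vel_x * math.sqrt(com_height / GRAVITY)

    def should_touch_down(com_pos_x, com_vel_x, com_height, threshold_x):
        return capture_point(com_pos_x, com_vel_x, com_height) >= threshold_x

    print(round(capture_point(0.0, 0.6, 0.55), 3))             # metres ahead of the CoM
    print(should_touch_down(0.0, 0.6, 0.55, threshold_x=0.14)) # trigger the second foot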
A robot includes an input link, an output link, and a wire routing. The output link is coupled to the input link at an inline twist joint where the output link is configured to rotate about the longitudinal axis of the output link relative to the input link. The wire routing traverses the inline twist joint to couple the input link and the output link. The wire routing includes an input link section, an output link section, and an omega section. A first position of the wire routing at a start of the omega section on the input link coaxially aligns with a second position of the wire routing at an end of the omega section on the output link.
B25J 19/00 - Accessoires adaptés aux manipulateurs, p.ex. pour contrôler, pour observer; Dispositifs de sécurité combinés avec les manipulateurs ou spécialement conçus pour être utilisés en association avec ces manipulateurs
A gripper mechanism includes a pair of gripper jaws, a linear actuator, and a rocker bogey. The linear actuator drives a first gripper jaw to move relative to a second gripper jaw. Here, the linear actuator includes a screw shaft and a drive nut where the drive nut includes a protrusion having a protrusion axis extending along a length of the protrusion. The protrusion axis is perpendicular to an actuation axis of the linear actuator along a length of the screw shaft. The rocker bogey is coupled to the drive nut at the protrusion to form a pivot point for the rocker bogey and to enable the rocker bogey to pivot about the protrusion axis when the linear actuator drives the first gripper jaw to move relative to the second gripper jaw.
A robot system includes: an upper body section including one or more end-effectors; a lower body section including one or more legs; and an intermediate body section coupling the upper and lower body sections. An upper body control system operates at least one of the end-effectors. The intermediate body section experiences a first intermediate body linear force and/or moment based on an end-effector force acting on the at least one end-effector. A lower body control system operates the one or more legs. The one or more legs experience respective surface reaction forces. The intermediate body section experiences a second intermediate body linear force and/or moment based on the surface reaction forces. The lower body control system operates the one or more legs so that the second intermediate body linear force balances the first intermediate body linear force and the second intermediate body moment balances the first intermediate body moment.
B62D 57/00 - Véhicules caractérisés par des moyens de propulsion ou de prise avec le sol autres que les roues ou les chenilles, seuls ou en complément aux roues ou aux chenilles
B62D 57/02 - Véhicules caractérisés par des moyens de propulsion ou de prise avec le sol autres que les roues ou les chenilles, seuls ou en complément aux roues ou aux chenilles avec moyens de propulsion en prise avec le sol, p.ex. par jambes mécaniques
B62D 57/024 - Véhicules caractérisés par des moyens de propulsion ou de prise avec le sol autres que les roues ou les chenilles, seuls ou en complément aux roues ou aux chenilles avec moyens de propulsion en prise avec le sol, p.ex. par jambes mécaniques spécialement adaptés pour se déplacer sur des surfaces inclinées ou verticales
B62D 57/032 - Véhicules caractérisés par des moyens de propulsion ou de prise avec le sol autres que les roues ou les chenilles, seuls ou en complément aux roues ou aux chenilles avec moyens de propulsion en prise avec le sol, p.ex. par jambes mécaniques avec des pieds ou des patins soulevés alternativement ou dans un ordre déterminé
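A hedged two-dimensional (sagittal-plane) sketch of the balance condition above: given the linear force and moment the end-effector load induces at the intermediate body, compute the net surface-reaction force and moment the legs must produce so the two cancel; gravity support is folded into the required force, and all numbers are illustrative.

    import numpy as np

    def induced_wrench(ee_force, ee_offset):
        """Force and moment the end-effector load applies at the intermediate body."""
        fx, fz = ee_force
        rx, rz = ee_offset
        return np.array([fx, fz]), rx * fz - rz * fx        # 2-D cross product r x f

    def required_leg_wrench(ee_force, ee_offset, body_mass_kg, g=9.81):
        f_ee, m_ee = induced_wrench(ee_force, ee_offset)
        weight = np.array([0.0, -body_mass_kg * g])
        # The legs must cancel both the end-effector wrench and the robot's own weight.
        return -(f_ee + weight), -m_ee

    f_req, m_req = required_leg_wrench(ee_force=(20.0, -50.0), ee_offset=(0.40, 0.30),
                                       body_mass_kg=30.0)
    print("net leg force needed:", f_req, "net leg moment needed:", round(m_req, 2))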
Aspects of the present disclosure provide techniques to undo a portion of a mission recording of a robot by physically moving the robot back through the mission recording in reverse. As a result, after the undo process is completed, the robot is positioned at an earlier point in the mission and the user can continue to record further mission data from that point. The portion of the mission recording that was performed in reverse can be omitted from subsequent performance of the mission, for example by deleting that portion from the mission recording or otherwise marking that portion as inactive. In this manner, the mistake in the initial mission recording is not retained, but the robot need not perform the entire mission recording again.
The disclosure provides a method for generating a joint command. The method includes receiving a maneuver script including a plurality of maneuvers for a legged robot to perform where each maneuver is associated with a cost. The method further includes identifying that two or more maneuvers of the plurality of maneuvers of the maneuver script occur at the same time instance. The method also includes determining a combined maneuver for the legged robot to perform at the time instance based on the two or more maneuvers and the costs associated with the two or more maneuvers. The method additionally includes generating a joint command to control motion of the legged robot at the time instance where the joint command commands a set of joints of the legged robot. Here, the set of joints correspond to the combined maneuver.
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
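One plausible way to combine simultaneous maneuvers, offered as an assumption rather than the claimed method: joint targets of maneuvers occurring at the same time instance are blended with weights that favor the lower-cost maneuver, and the blend becomes the joint command for that instance.

    def combined_maneuver(maneuvers):
        """maneuvers: list of (joint_targets: dict, cost: float) active at one time instance."""
        weights = [1.0 / (cost + 1e-6) for _, cost in maneuvers]
        joints = set().union(*(targets.keys() for targets, _ in maneuvers))
        command = {}
        for joint in joints:
            num = sum(w * targets[joint]
                      for (targets, _), w in zip(maneuvers, weights) if joint in targets)
            den = sum(w for (targets, _), w in zip(maneuvers, weights) if joint in targets)
            command[joint] = num / den
        return command

    sit = ({"hip": 0.9, "knee": 1.6}, 2.0)        # higher-cost maneuver
    wave = ({"hip": 0.4, "shoulder": 1.2}, 1.0)   # lower-cost maneuver dominates shared joints
    print(combined_maneuver([sit, wave]))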