A method of grasp generation for a robot includes searching within a configuration space of a robot hand model for robot hand configurations to engage an object model with a grasp type. The method includes generating a set of candidate grasps based on the robot hand configurations. Grasping of the object model with the robot hand model is simulated in a physics engine using simulated grasps generated based on a given candidate grasp. A simulated grasp is assigned a score based on a response of the object model to an applied wrench disturbance when the object model is engaged with the simulated grasp. The method includes generating a set of feasible grasps for the given candidate grasp based on the respective simulated grasps having a score above a score threshold at a target wrench disturbance.
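As a rough illustration of the filtering step in the abstract above, the sketch below keeps only simulated grasps whose score exceeds a threshold at the target wrench disturbance. The dataclass fields, the scoring convention, and the 0.8 threshold are assumptions for illustration, not details taken from the patent.

```python
# Hypothetical sketch: simulated grasps are scored by how well the object
# model resists an applied wrench disturbance; only grasps scoring above a
# threshold at the target disturbance are kept as feasible.
from dataclasses import dataclass

@dataclass
class SimulatedGrasp:
    candidate_id: int
    score: float             # e.g., 1.0 = object fully retained under disturbance
    wrench_magnitude: float  # magnitude of the applied wrench disturbance

def feasible_grasps(simulated, target_wrench, score_threshold=0.8):
    """Return simulated grasps that hold the object at the target disturbance."""
    return [
        g for g in simulated
        if g.wrench_magnitude >= target_wrench and g.score > score_threshold
    ]

sims = [
    SimulatedGrasp(0, 0.95, 2.0),
    SimulatedGrasp(0, 0.40, 2.0),  # object slipped: rejected on score
    SimulatedGrasp(1, 0.90, 1.0),  # disturbance below target: rejected
]
print(feasible_grasps(sims, target_wrench=2.0))
```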
In an implementation, a stereo vision system comprises a first, a second, and a third camera. The first and second cameras are separated by a first baseline, the second and third cameras by a second baseline, and the first and third cameras by a third baseline. The third baseline is greater than the second baseline, which is greater than the first baseline. The stereo vision system is operable to determine a first depth characterization of a first object using data received from the first camera and the second camera, determine a second depth characterization of a second object using data received from the second camera and the third camera, and determine a third depth characterization of a third object using data received from the first camera and the third camera. The third object is farther away than the second object, which is farther away than the first object.
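A toy calculation may help show why the widest camera pair is matched to the farthest object. For a rectified pinhole pair, depth z = f·B/d (focal length f in pixels, baseline B, disparity d), so for a fixed disparity error the depth error grows as z²/(f·B), and a longer baseline preserves accuracy at range. The focal length and baseline values below are assumptions for illustration.

```python
# Illustrative numbers only: pinhole stereo depth, depth = f * B / disparity.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

f = 1000.0                                           # focal length in pixels (assumed)
baselines = {"near": 0.05, "mid": 0.15, "far": 0.30}  # hypothetical pair spacings

# At a fixed object depth, the wider pair yields a larger disparity, so a
# 1-pixel matching error corrupts the depth estimate less.
z = 5.0  # object at 5 m
for name, b in baselines.items():
    disparity = f * b / z
    print(name, "pair: disparity =", round(disparity, 1), "px at", z, "m")
```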
The present disclosure relates to protecting fragile members of robots from damage during fall events. In response to detecting a fall event, a fragile member of a robot can be actuated to a defensive configuration to avoid or reduce damage. An actuatable protective member can be actuated to protect a fragile member to avoid or reduce damage to the fragile member. Actuatable protective members can be dedicated protective members, or can be other members of the robot which serve different functionality outside of a fall event but act as a protective member during a fall event.
B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted supporting base and leg; Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
In an implementation of a method of operation of a robot fleet management system, the robot fleet management system accesses a set of tasks available to be performed by a fleet of robots, accesses a respective power consumption for each task from the set of tasks, and accesses a respective power state of each robot in the fleet. The robot fleet management system allocates a selected robot to a selected task, based at least in part on the power state of at least the selected robot and the power consumption for at least the selected task. The power consumption may be determined by the robot fleet management system and/or be provided by the task provider. The set of tasks includes tethered and untethered tasks. The robot fleet management system allocates the selected robot to an untethered task after determining the selected robot has sufficient power to complete the untethered task.
In an implementation of a method of operation of a robot fleet management system, the robot fleet management system accesses a set of tasks available to be performed by a fleet of robots, accesses a respective power consumption for each task from the set of tasks, and accesses a respective power state of each robot in the fleet. The robot fleet management system allocates a selected robot to a selected task, based at least in part on the power state of at least the selected robot and the power consumption for at least the selected task. The power consumption may be determined by the robot fleet management system and/or be provided by the task provider. The set of tasks includes tethered and untethered tasks. The robot fleet management system allocates the selected robot to an untethered task after determining the selected robot has sufficient power to complete the untethered task.
In an implementation of a method of operation of a robot fleet management system, the robot fleet management system accesses a set of tasks available to be performed by a fleet of robots, accesses a respective power consumption for each task from the set of tasks, and accesses a respective power state of each robot in the fleet. The robot fleet management system allocates a selected robot to a selected task, based at least in part on the power state of at least the selected robot and the power consumption for at least the selected task. The power consumption may be determined by the robot fleet management system and/or be provided by the task provider. The set of tasks includes tethered and untethered tasks. The robot fleet management system allocates the selected robot to an untethered task after determining the selected robot has sufficient power to complete the untethered task.
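A minimal sketch of the allocation rule described above: untethered tasks are assigned only to robots whose available energy covers the task's consumption, while tethered tasks draw external power and need no such check. The greedy ordering, units, and names are assumptions, not the patent's method.

```python
# Power-aware task allocation sketch (all names and the greedy policy assumed).
def allocate(robots, tasks):
    """robots: {name: available_Wh}; tasks: list of (task, Wh, tethered)."""
    assignments = {}
    free = dict(robots)
    for task, consumption_wh, tethered in sorted(tasks, key=lambda t: -t[1]):
        for name, charge_wh in sorted(free.items(), key=lambda r: -r[1]):
            if tethered or charge_wh >= consumption_wh:
                assignments[task] = name
                free[name] = charge_wh - (0 if tethered else consumption_wh)
                break
    return assignments

fleet = {"robot_a": 120.0, "robot_b": 40.0}
jobs = [("sort_parcels", 90.0, False), ("inspect_dock", 30.0, True)]
print(allocate(fleet, jobs))  # untethered job goes to the robot with enough charge
```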
Robot control systems, methods, control modules and computer program products that leverage one or more large language model(s) (LLMs) and a repository of omnichannel customer data in order to autonomously interact with a customer are described. A robot identifies a customer and accesses data about the customer from a database of omnichannel customer data. The robot generates a natural language (NL) query that includes customer data expressed in NL, contextual information expressed in NL, and a request for something to say to the customer. The LLM provides something to say for the robot, which the robot converts into audio signals and projects to the customer. The interaction may continue bidirectionally, with the robot transcribing responses from the customer in NL and querying the LLM for return responses.
Systems, methods, and computer program products for generating training data are described. Action data and context data are recorded for a robot body performing an action or task in an environment. The context data is augmented virtually to include variations from the recorded environment while the action data remains unchanged, and instances of training data are generated including the augmentations, to produce a large and varied training data set.
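The sketch below illustrates the augmentation idea from the abstract above: the recorded action data stays fixed while the recorded context is varied, multiplying one demonstration into many training instances. The field names and the specific variations (lighting, background) are assumptions for illustration.

```python
# Sketch: virtually augment recorded context while action data is unchanged.
import random

def augment_context(context):
    out = dict(context)
    out["lighting"] = random.uniform(0.5, 1.5) * context["lighting"]
    out["background"] = random.choice(["plain", "cluttered", "textured"])
    return out

def make_training_set(action_data, context, n_variants=100, seed=0):
    random.seed(seed)
    return [{"action": action_data, "context": augment_context(context)}
            for _ in range(n_variants)]

demo_actions = [{"t": 0.0, "joint_angles": [0.1, 0.5]}]  # same in every instance
demo_context = {"lighting": 1.0, "background": "plain"}
print(len(make_training_set(demo_actions, demo_context)))  # -> 100
```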
A robot includes a mobile base comprising a base body with a set of active wheels, an upper body comprising a torso with arms, and a pedestal linkage having a first end coupled to the base body by a first pivotable joint and a second end coupled to the torso by a second pivotable joint, wherein the pedestal linkage is pivotable relative to the base body to transform the robot between an elevated, elongated, or standing configuration and a lowered, contracted, or sitting configuration. In the lowered configuration, an omniwheel positioned at the base of the torso contacts the ground to improve stability of the system, and the pedestal linkage is received in a slot in the base body to produce a congruous work surface over which the torso may be rotated to face and upon which objects may be placed and manipulated during transport.
Robot control systems, methods, control modules and computer program products that leverage one or more large language model(s) (LLMs) in order to achieve at least some degree of autonomy are described. Robot control parameters, environment details, and/or instructions may advantageously be specified in natural language (NL) and communicated with the LLM via an NL prompt or query. The NL query may include a request for one or more work objectives from the LLM, such as “What can I do here?”, thereby establishing a form of agency by which the robot system may identify activities to perform without operator intervention. The LLM may also be queried to convert each work objective into a task plan providing a sequence of steps that the robot system may execute to complete the work objective. Optionally, the robot system may communicate with an operator to determine whether or not to execute a task plan.
Disclosed techniques for decreasing teach times of robot systems may obtain a first set of parameters of a first trained robot-control model of a first robot trained to perform a task and determine, based on the first set of parameters, a second set of parameters of a second robot-control model of a second robot before the second robot is trained to perform the task. In some cases, a plurality of sets of parameters from trained robot-control models of respective robots trained to perform a task may be obtained. Thus, for example, a convergence of values of those parameters on a value, or range of potential values, may be determined. Embodiments may determine values for parameters of the control model of the robot to be trained (e.g., the second robot) to be within a range, or within a threshold, of the values of corresponding parameters of the trained robot(s).
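A toy sketch of this warm-starting idea: where trained models' corresponding parameters converge (low spread), initialize the new model at their mean; elsewhere fall back to a default. The spread test, threshold, and default are illustrative assumptions, not the disclosed technique itself.

```python
# Warm-start a second robot-control model from parameters of trained peers.
import statistics

def warm_start(trained_param_sets, default=0.0, spread_threshold=0.05):
    new_params = []
    for values in zip(*trained_param_sets):   # corresponding parameters
        spread = statistics.pstdev(values)
        new_params.append(statistics.mean(values) if spread <= spread_threshold
                          else default)
    return new_params

robot_1 = [0.90, 0.12, -0.40]
robot_2 = [0.91, 0.55, -0.41]              # second parameter has not converged
print(warm_start([robot_1, robot_2]))      # -> [0.905, 0.0, -0.405]
```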
Robot control systems, methods, control modules, and computer program products that leverage one or more large language model(s) (LLMs) in order to achieve at least some degree of autonomy are described. Robot control parameters, environment details, and/or instruction sets may advantageously be specified in natural language (NL) and communicated with the LLM via an NL prompt or query. An NL response from the LLM may then be converted into a task plan. A task plan that successfully completes a first instance of a work objective may be parameterized and re-used to complete a second instance of the work objective. Parameterization of a task plan may include replacing one or more nouns/objects in the NL task plan with variables, while optionally preserving one or more verbs/actions in the NL task plan.
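The parameterization step above lends itself to a short sketch: nouns/objects in the NL task plan become variables while verbs/actions are preserved, so a plan that completed one instance of a work objective can be re-instantiated for another. The hard-coded noun list is an assumption; a real system might use a part-of-speech tagger instead.

```python
# Sketch: replace nouns/objects in an NL task plan with variables.
def parameterize(task_plan, nouns):
    variables = {}
    steps = []
    for step in task_plan:
        for noun in nouns:
            var = variables.setdefault(noun, f"<object_{len(variables)}>")
            step = step.replace(noun, var)
        steps.append(step)
    return steps, variables

plan = ["pick up the red mug", "place the red mug on the tray"]
template, bindings = parameterize(plan, nouns=["red mug", "tray"])
print(template)   # -> ['pick up the <object_0>', 'place the <object_0> on the <object_1>']
# Re-use: substitute new objects for the variables to instantiate the template.
```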
Systems, methods, and control modules for operating robot systems are described. A touch heatmap is generated for an object, indicating at least one touch region for the object. Touch regions indicate regions of an object which are most suitable or most prone to touching when the object is grasped. At least one end effector is controlled in accordance with at least one grasp primitive, to grasp the object in accordance with at least one touch region specified in the touch heatmap.
Systems, methods, and control modules for operating robot systems are described. A touch heatmap is generated for an object, indicating at least one touch region for the object. Touch regions indicate regions of an object which are most suitable or most prone to touching when the object is grasped. At least one end effector is controlled in accordance with at least one grasp primitive, to grasp the object in accordance with at least one touch region specified in the touch heatmap.
Systems, methods, and control modules for operating robot systems are described. A touch heatmap is generated for an object, indicating at least one touch region for the object. Touch regions indicate regions of an object which are most suitable or most prone to touching when the object is grasped. At least one end effector is controlled in accordance with at least one grasp primitive, to grasp the object in accordance with at least one touch region specified in the touch heatmap.
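A minimal sketch of one way the touch heatmap could drive grasp selection: score each grasp primitive by how well its contact pattern overlaps the highest-scoring touch regions, then command the end effector with the winner. All structures and values here are assumptions for illustration.

```python
# Pick the grasp primitive whose contacts best cover the touch heatmap.
def best_grasp(touch_heatmap, grasp_primitives):
    """touch_heatmap: {region: score}; grasp_primitives: {name: [regions touched]}."""
    def coverage(regions):
        return sum(touch_heatmap.get(r, 0.0) for r in regions)
    return max(grasp_primitives, key=lambda name: coverage(grasp_primitives[name]))

heatmap = {"handle": 0.9, "body": 0.4, "lid": 0.1}
primitives = {"pinch": ["handle"], "power_wrap": ["body", "lid"]}
print(best_grasp(heatmap, primitives))   # -> "pinch" (coverage 0.9 vs 0.5)
```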
A fluidic tactile sensor includes a core having an outer core portion, an inner core portion, and a first channel having a first opening at a first surface portion of the outer core portion. An elastic skin is disposed over the first surface portion. A cell is formed between the first surface portion and the elastic skin and is fluidly connected to the first channel. The cell contains a fluid. A contact force applied to the elastic skin produces a measurable change in fluid pressure inside the cell.
A fluidic tactile sensor includes a core having an outer surface, channels formed within the core, and an elastic skin disposed over a surface portion of the outer surface. Cells are formed between the surface portion and the elastic skin. Each cell is connected to one of the channels through an opening of the channel on the surface portion. Compressible fluid volumes extend between the elastic skin and the core. Each compressible fluid volume includes a first fluid volume formed inside one of the cells and a second fluid volume formed inside the channel connected to the one of the cells. The first and second fluid volumes contain portions of a continuous compressible fluid medium. A contact force applied to the elastic skin at a location corresponding to a given cell produces a measurable change in a fluid pressure of the continuous compressible fluid medium associated with the given cell.
A robotic digit includes a digit base frame, a joint head coupled to the digit base frame, and an articulated digit body coupled to the joint head. A first actuator and a second actuator are mounted to the digit base frame. The first actuator includes a first actuator output coupled to the joint head by a first mechanical linkage. The first actuator output causes a first relative movement between the joint head and the digit base frame through the first mechanical linkage. The second actuator has a second actuator output coupled to the joint head by a second mechanical linkage. The second actuator output causes, through the second mechanical linkage, a second relative movement between the joint head and the digit base frame that is different from the first relative movement.
Robots, systems, methods, and computer program products for completing work objectives and evaluating states of robots are described. A robot accesses a library of reusable work primitives, each reusable work primitive corresponding to a respective basic sub-action that the robot is trained to autonomously perform. Each reusable work primitive is paired with an associated percept, which is used to evaluate a state representation of a robot to determine whether a desired outcome for the reusable work primitive is achieved.
Robots, systems, methods, and computer program products for completing work objectives and evaluating states of robots are described. A robot accesses a library of reusable work primitives, each reusable work primitive corresponding to a respective basic sub-action that the robot is trained to autonomously perform. Each reusable work primitive is paired with an associated percept, which is used to evaluate a state representation of a robot to determine whether a desired outcome for the reusable work primitive is achieved.
G05B 19/4155 - Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by programme execution, i.e. part programme or machine function execution, e.g. selection of a programme
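The primitive-plus-percept pairing in the two abstracts above can be sketched compactly: after a reusable work primitive executes, its paired percept evaluates the robot's state representation and reports whether the desired outcome was achieved. Function and field names below are assumptions.

```python
# Sketch: pair each reusable work primitive with a percept that checks its outcome.
def run_primitive(primitive, percept, state):
    primitive(state)               # basic sub-action mutates the state
    return percept(state)          # paired percept evaluates the state representation

def grasp_object(state):
    state["gripper_closed"] = True
    state["object_in_gripper"] = True   # in reality reported by sensors

def object_grasped(state):
    return state.get("gripper_closed") and state.get("object_in_gripper")

state = {}
print(run_primitive(grasp_object, object_grasped, state))   # -> True
```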
21.
SYSTEMS, METHODS, AND COMPUTER PROGRAM PRODUCTS FOR AUTOMATING TASKS
Systems, methods, and computer program products for automating tasks are described. A multi-step framework enables a gradient towards task automation. An agent performs a task while sensors collect data. The data are used to generate a script that characterizes the discrete actions executed by the agent in the performance of the task. The script is used by a robot teleoperation system to control a robot to perform the task. The robot teleoperation system maps the script into an ordered set of action commands that the robot is operative to auto-complete to enable the robot to semi-autonomously perform the task. The ordered set of action commands is converted into an automation program that may be accessed by an autonomous robot and executed to cause the autonomous robot to autonomously perform the task. In training, simulated instances of the robot may perform simulated instances of the task in simulated environments.
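One step of the framework above, mapping the recorded script into an ordered set of action commands the robot can auto-complete, is sketched below. The action vocabulary and command names are invented for illustration and are not the patent's interface.

```python
# Sketch: map a recorded script of discrete actions to robot action commands.
ACTION_MAP = {
    "reach":   "CMD_MOVE_ARM",
    "grasp":   "CMD_CLOSE_GRIPPER",
    "lift":    "CMD_MOVE_ARM",
    "release": "CMD_OPEN_GRIPPER",
}

def script_to_commands(script):
    """script: ordered list of (action, target) pairs derived from sensor data."""
    return [(ACTION_MAP[action], target) for action, target in script]

recorded = [("reach", "box_3"), ("grasp", "box_3"), ("lift", "shelf_2")]
print(script_to_commands(recorded))
```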
Robot control systems, methods, control modules and computer program products that leverage one or more large language model(s) (LLMs) in order to achieve at least some degree of autonomy are described. Robot control parameters and/or instructions may advantageously be specified in natural language (NL) and communicated with the LLM via an NL prompt or query. The LLM module provides a task plan in NL, which can be evaluated for at least one fault or error. If at least one fault or error is identified, the LLM module can be queried to provide a resolution.
A robot has a controller and a sensor. The sensor is communicatively coupled to the controller. The controller includes a contact response system. A safety response of the contact response system is activated for the sensor. A method of operation of the robot includes detecting, by the sensor, a contact between the robot and a human, the contact resulting from a motion of the robot, and determining, by the controller, whether the contact between the robot and the human is an expected or unexpected contact. In response to determining the contact between the robot and the human is an expected contact, the safety response is deactivated for the sensor to allow the robot to proceed with its motion uninterrupted. In response to determining the contact between the robot and the human is an unexpected contact, the contact response system causes the robot to interrupt the motion of the robot.
A robot has a controller and a sensor. The sensor is communicatively coupled to the controller. The controller includes a contact response system. A safety response of the contact response system is activated for the sensor. A method of operation of the robot includes detecting, by the sensor, a contact between the robot and a human, the contact resulting from a motion of the robot, and determining, by the controller, whether the contact between the robot and the human is an expected or unexpected contact. In response to determining the contact between the robot and the human is an expected contact, the safety response is deactivated for the sensor to allow the robot to proceed with its motion uninterrupted. In response to determining the contact between the robot and the human is an unexpected contact, the contact response system causes the robot to interrupt the motion of the robot.
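A hedged sketch of the expected/unexpected decision described above: a contact is treated as expected when it matches a contact predicted by the current motion, and otherwise the safety response interrupts the motion. The matching rule (a simple location tolerance) is an illustrative assumption.

```python
# Sketch: classify a detected contact as expected or unexpected and respond.
def handle_contact(contact_location, planned_contacts, tolerance=0.05):
    expected = any(
        abs(contact_location[0] - px) <= tolerance and
        abs(contact_location[1] - py) <= tolerance
        for px, py in planned_contacts
    )
    if expected:
        return "proceed"        # safety response deactivated: motion continues
    return "interrupt_motion"   # unexpected contact: stop the robot

plan = [(0.30, 0.10)]           # e.g., an intended handover touch point
print(handle_contact((0.31, 0.11), plan))   # -> "proceed"
print(handle_contact((0.80, 0.50), plan))   # -> "interrupt_motion"
```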
A robotic wrist includes a wrist frame, a first actuator having a first actuator output, and a second actuator having a second actuator output. A first mechanical linkage includes a first input coupled to the first actuator output and a first output coupled to the wrist frame. A second mechanical linkage includes a second input coupled to the second actuator output and a second output coupled to the wrist frame. A rotational position of the first output about a first axis is responsive to a position of the first actuator output. A rotational position of the second output about a second axis that is transverse to the first axis is responsive to a difference between a position of the first actuator output and a position of the second actuator output.
B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
A robotic wrist for a robotic arm includes a hybrid differential with a first cam, a second cam, a first differential input, and a second differential input. The first cam and the second cam are disposed about first and second pivots oriented along a first rotational axis. An abduction output is coupled to the second cam and has a second rotational axis transverse to the first rotational axis. The robotic wrist includes a first actuator, a second actuator, a first link coupling an output of the first actuator to the first differential input, and a second link coupling an output of the second actuator to the second differential input. Synchronous motion of the actuators causes flexion of the abduction output about the first rotational axis, and asynchronous motion of the actuators causes abduction motion of the abduction output about the second rotational axis.
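The differential behavior in the two wrist abstracts above admits a simple toy model (conventions and unit gear ratios assumed, not taken from the source): the synchronous component of the two actuator motions maps to flexion, and the asynchronous (difference) component maps to abduction.

```python
# Toy differential wrist model: common mode -> flexion, differential mode -> abduction.
def wrist_outputs(theta_1, theta_2):
    flexion = 0.5 * (theta_1 + theta_2)     # synchronous: both actuators together
    abduction = 0.5 * (theta_1 - theta_2)   # asynchronous: actuators oppose
    return flexion, abduction

print(wrist_outputs(0.4, 0.4))    # synchronous motion: pure flexion    -> (0.4, 0.0)
print(wrist_outputs(0.2, -0.2))   # asynchronous motion: pure abduction -> (0.0, 0.2)
```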
Robot control systems, methods, control modules and computer program products that leverage one or more large language model(s) (LLMs) in order to achieve at least some degree of autonomy are described. Robot control parameters and/or instructions may advantageously be specified in natural language (NL) and communicated with the LLM via a recursive sequence of NL prompts or queries. Corresponding NL responses from the LLM may then be converted into robot control parameters and/or instructions. In this way, an LLM may be leveraged by the robot control system to enhance the autonomy of various operations and/or functions, including without limitation task planning, motion planning, human interaction, and/or reasoning about the environment.
A robotic elbow has a first actuator having a rotary output and a first interface for coupling the elbow to a first robotic arm segment. A flexion frame rotatably mounts to the first actuator. A first link is coupled at one end to the output of the first actuator and at another end to the flexion frame. A second link is coupled at one end to the output of the first actuator and at another end to the flexion frame. The first link and the second link are coupled to the rotary output in a selected rotary phase displacement. A second actuator is mounted in the flexion frame; the second actuator has an output and a second interface for coupling the robotic elbow to a second robotic arm segment.
Systems, methods, and control modules for controlling robot systems are described. A state of a robot body is identified based on environment and context data, and a state prediction model is applied to predict subsequent states of the robot body. The robot body is controlled to transition to predicted states. Transitions to states can be validated, and predicted states updated when transitioning of the robot body is not aligned with predicted states.
Systems, methods, and control modules for controlling robot systems are described. A state of a robot body is identified based on environment and context data, and a state prediction model is applied to predict subsequent states of the robot body. The robot body is controlled to transition to predicted states. Transitions to states can be validated, and predicted states updated when transitioning of the robot body is not aligned with predicted states.
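A minimal sketch of the predict-transition-validate loop described above: a state prediction model proposes the next state, the robot is commanded toward it, and if the observed state diverges from the prediction the loop re-anchors on the observation. All functions here are illustrative stand-ins.

```python
# Sketch: control toward predicted states and validate each transition.
def control_loop(state, predict, transition, observe, steps=5, tol=0.1):
    for _ in range(steps):
        predicted = predict(state)
        transition(predicted)           # command the robot toward the prediction
        actual = observe()
        if abs(actual - predicted) > tol:
            state = actual              # misaligned: update from observation
        else:
            state = predicted           # validated: keep following the model
    return state

# Toy 1-D example: the model predicts +1.0 per step; the plant drifts slightly.
plant = {"x": 0.0}
def transition(target): plant["x"] = target - 0.02
print(control_loop(0.0, lambda s: s + 1.0, transition, lambda: plant["x"]))  # -> 5.0
```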
A mobile robot system has a robot body attached to a mobile base. The robot body has a torso, a first robotic arm mechanically coupled to the torso, a first robotic leg, and a second robotic leg. The first robotic leg and the second robotic leg are controllably actuatable to enable the robot body to execute bipedal walking. The mobile base has a platform to receive a lower end of the first robotic leg and a lower end of the second robotic leg, at least one wheel and a controllable steering mechanism to enable the mobile base to travel both while the robot body is positioned on the platform and while the robot body is not positioned on the platform. The mobile base also has a plurality of components, at least one of which is operable to support at least one function of the robot body.
B62D 57/028 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members having wheels and mechanical legs
B60L 53/80 - Exchanging energy storage elements, e.g. removable batteries
B60L 58/12 - Methods or circuit arrangements for monitoring or controlling batteries or fuel cells, specially adapted for electric vehicles for monitoring or controlling batteries responding to state of charge [SoC]
32.
SYSTEMS, DEVICES, AND METHODS FOR A MOBILE ROBOT SYSTEM
A mobile robot system includes a mobile base and a humanoid robot. The mobile base includes a chassis having a platform. The mobile base includes a propulsion system that is coupled to the chassis and operable to propel the chassis within an environment. The humanoid robot includes a torso and two robotic legs. The humanoid robot has a first locomotion mode in which the humanoid robot is supported on the platform and travel of the humanoid robot within the environment is by movement of the mobile base within the environment. The humanoid robot has a second locomotion mode in which the humanoid robot is not supported on the platform and travel of the humanoid robot within the environment is by movement of the two robotic legs.
B62D 57/028 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members having wheels and mechanical legs
B60L 53/126 - Methods for pairing a vehicle and a charging station, e.g. establishing a one-to-one relation between a wireless power transmitter and a wireless power receiver
B60L 58/12 - Methods or circuit arrangements for monitoring or controlling batteries or fuel cells, specially adapted for electric vehicles for monitoring or controlling batteries responding to state of charge [SoC]
B60L 58/22 - Balancing the charge of battery modules
33.
METHOD AND SYSTEM OF GENERATING A FEASIBLE SMOOTH REFERENCE TRAJECTORY FOR AN ACTUATOR
A method of controlling an actuator having a maximum velocity and a maximum acceleration includes accessing a reference trajectory signal comprising temporally-spaced reference positions for the actuator and accessing a trajectory template having at least a third derivative continuity and at least one characteristic constrained by the maximum velocity and the maximum acceleration. The method includes generating a smooth reference trajectory signal based on the reference trajectory signal and the trajectory template and outputting the smooth reference trajectory signal. Controls for the actuator can be generated based on the smooth reference trajectory signal.
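One possible trajectory template matching the description above is a minimum-jerk quintic between consecutive reference positions, which is smooth (indeed C-infinity) within each segment, with the segment duration stretched so peak velocity and peak acceleration stay within the actuator limits. This is a hedged sketch, not the patented method; the stretch factors use the standard quintic peaks (15/8 · d/T for velocity, about 5.77 · d/T² for acceleration).

```python
# Sketch: minimum-jerk quintic template constrained by v_max and a_max.
def quintic(p0, p1, duration, t):
    s = min(max(t / duration, 0.0), 1.0)
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5   # minimum-jerk blend shape
    return p0 + (p1 - p0) * blend

def segment_duration(p0, p1, dt_nominal, v_max, a_max):
    d = abs(p1 - p0)
    t_v = 1.875 * d / v_max             # enforce peak velocity limit (15/8 * d/T)
    t_a = (5.7735 * d / a_max) ** 0.5   # enforce peak acceleration limit
    return max(dt_nominal, t_v, t_a)

p0, p1 = 0.0, 0.5                       # temporally-spaced reference positions
T = segment_duration(p0, p1, dt_nominal=0.1, v_max=1.0, a_max=4.0)
samples = [round(quintic(p0, p1, T, k * T / 4), 4) for k in range(5)]
print(T, samples)                       # stretched duration and a smooth S-curve
```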
Systems, methods, and computer program products for autonomous, multi-agent goal-seeking are described. In an exemplary implementation, a hierarchical operational structure of a business employs autonomous AI-based controllers and robot systems in a bidirectional communication network to leverage fast and complete data sharing across all levels of the business. The comprehensive data collection is used to support the formulation of, and measure progress against, a hierarchical goal structure in which the top-level business controller specifies top-level business objectives and successive lower-level tiers of the business control hierarchy execute tasks and specify successively lower-level objectives for the tiers below. The ground level of the hierarchy comprises autonomous robot workers that autonomously perform ground-level tasks and data collection to support the business, and deliver reports back upstream to inform the higher-level objective setting.
G05B 19/4155 - Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by programme execution, i.e. part programme or machine function execution, e.g. selection of a programme
G05B 13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
35.
SYSTEMS, DEVICES, AND METHODS FOR OPERATING A ROBOTIC SYSTEM
A robotic system includes a robot, and an interface to a large language model (LLM). The robot operates in an environment that includes a human. In an example method of operation of the robotic system, the robot initiates a task. After initiating the task, the robot detects that the human is waiting for the robot to complete the task. The interface sends a query to the LLM. The query includes a natural language statement describing a context for the query. The interface receives a response from the LLM in reply to the query. The response includes a natural language statement describing material related to the context and suitable for an interim interaction that can be initiated by the robot with the human. The robot initiates the interim interaction with the human. The interim interaction may be initiated autonomously by the robot, and may include a diversion.
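A small sketch of building the interim-interaction query follows. The LLM call is stubbed with a placeholder function, and the prompt wording and the `llm` callable are assumptions, not the source's API.

```python
# Sketch: query an LLM for something to say while a human waits.
def interim_interaction(task, wait_seconds, llm):
    context = (
        f"I am a robot currently performing the task: {task}. "
        f"A person has been waiting for about {wait_seconds} seconds. "
        "Suggest something brief I could say to keep them engaged while I finish."
    )
    return llm(context)   # NL response: material suitable for a diversion

def fake_llm(prompt):     # stand-in so the sketch runs without a real model
    return "Thanks for your patience -- I'm almost done. Fun fact: ..."

print(interim_interaction("sorting your order", 45, fake_llm))
```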
Robot control systems, methods, control modules and computer program products that leverage one or more large language model(s) (LLMs) in order to achieve at least some degree of autonomy are described. Robot control parameters and/or instructions may advantageously be specified in natural language (NL) and communicated with the LLM via an NL prompt or query. The LLM module provides a task plan in NL, which can be evaluated for at least one fault or error. If at least one fault or error is identified, the LLM module can be queried to provide a resolution.
Robot control systems, methods, control modules and computer program products that leverage one or more large language model(s) (LLMs) in order to achieve at least some degree of autonomy are described. Robot control parameters and/or instructions may advantageously be specified in natural language (NL) and communicated with the LLM via an NL prompt or query. An NL response from the LLM may then be converted into robot control parameters and/or instructions. In this way, an LLM may be leveraged by the robot control system to enhance the autonomy of various operations and/or functions, including without limitation task planning, motion planning, human interaction, and/or reasoning about the environment.
Robot control systems, methods, control modules, and computer program products that leverage one or more large language model(s) (LLMs) in order to achieve at least some degree of autonomy are described. Robot control parameters, environment details, and/or instruction sets may advantageously be specified in natural language (NL) and communicated with the LLM via an NL prompt or query. An NL response from the LLM may then be converted into a task plan. A task plan that successfully completes a first instance of a work objective may be parameterized and re-used to complete a second instance of the work objective. Parameterization of a task plan may include replacing one or more nouns/objects in the NL task plan with variables, while optionally preserving one or more verbs/actions in the NL task plan.
Robot control systems, methods, control modules and computer program products that leverage one or more large language model(s) (LLMs) in order to achieve at least some degree of autonomy are described. Robot control parameters and/or instructions may advantageously be specified in natural language (NL) and communicated with the LLM via a recursive sequence of NL prompts or queries. Corresponding NL responses from the LLM may then be converted into robot control parameters and/or instructions. In this way, an LLM may be leveraged by the robot control system to enhance the autonomy of various operations and/or functions, including without limitation task planning, motion planning, human interaction, and/or reasoning about the environment.
Robot control systems, methods, control modules and computer program products that leverage one or more large language model(s) (LLMs) in order to achieve at least some degree of autonomy are described. Robot control parameters, environment details, and/or instructions may advantageously be specified in natural language (NL) and communicated with the LLM via an NL prompt or query. The NL query may include a request for one or more work objectives from the LLM, such as "What can I do here?", thereby establishing a form of agency by which the robot system may identify activities to perform without operator intervention. The LLM may also be queried to convert each work objective into a task plan providing a sequence of steps that the robot system may execute to complete the work objective. Optionally, the robot system may communicate with an operator to determine whether or not to execute a task plan.
Robot control systems, methods, control modules and computer program products that leverage one or more large language model(s) (LLMs) and a repository of omnichannel customer data in order to autonomously interact with a customer are described. A robot identifies a customer and accesses data about the customer from a database of omnichannel customer data. The robot generates a natural language (NL) query that includes customer data expressed in NL, contextual information expressed in NL, and a request for something to say to the customer. The LLM provides something to say for the robot, which the robot converts into audio signals and projects to the customer. The interaction may continue bidirectionally, with the robot transcribing responses from the customer in NL and querying the LLM for return responses.
A robotic system includes a robot, and an interface to a large language model (LLM). The robot operates in an environment that includes a human. In an example method of operation of the robotic system, the robot initiates a task. After initiating the task, the robot detects that the human is waiting for the robot to complete the task. The interface sends a query to the LLM. The query includes a natural language statement describing a context for the query. The interface receives a response from the LLM in reply to the query. The response includes a natural language statement describing material related to the context and suitable for an interim interaction that can be initiated by the robot with the human. The robot initiates the interim interaction with the human. The interim interaction may be initiated autonomously by the robot, and may include a diversion.
A robotic system includes a robot, an object recognition subsystem, an interface to a large language model (LLM), and a system controller. The robot operates in an environment that includes a first and a second object. In an example method of operation of the robotic system, the object recognition subsystem assigns a first label to the first object. The interface sends a query, including the first label, to the LLM. The interface receives a response from the LLM, the response in reply to the query and including a second label. The object recognition subsystem assigns the second label to the second object. In some implementations, the object recognition subsystem includes sensors and a sensor data processor. The sensors scan the environment to generate sensor data, and the sensor data processor detects the presence of the first and the second object based at least in part on the sensor data.
Robot control systems, methods, control modules and computer program products that leverage one or more large language model(s) (LLMs) in order to achieve at least some degree of autonomy are described. Robot control parameters and/or instructions may advantageously be specified in natural language (NL) and communicated with the LLM via an NL prompt or query. An NL response from the LLM may then be converted into robot control parameters and/or instructions. In this way, an LLM may be leveraged by the robot control system to enhance the autonomy of various operations and/or functions, including without limitation task planning, motion planning, human interaction, and/or reasoning about the environment.
Robot control systems, methods, control modules, and computer program products that leverage one or more large language model(s) (LLMs) in order to achieve at least some degree of autonomy are described. Robot control parameters, environment details, and/or instruction sets may advantageously be specified in natural language (NL) and communicated with the LLM via an NL prompt or query. An NL response from the LLM may then be converted into a task plan. A task plan that successfully completes a first instance of a work objective may be parameterized and re-used to complete a second instance of the work objective. Parameterization of a task plan may include replacing one or more nouns/objects in the NL task plan with variables, while optionally preserving one or more verbs/actions in the NL task plan.
Robot control systems, methods, control modules and computer program products that leverage one or more large language model(s) (LLMs) in order to achieve at least some degree of autonomy are described. Robot control parameters, environment details, and/or instructions may advantageously be specified in natural language (NL) and communicated with the LLM via an NL prompt or query. The NL query may include a request for one or more work objectives from the LLM, such as “What can I do here?”, thereby establishing a form of agency by which the robot system may identify activities to perform without operator intervention. The LLM may also be queried to convert each work objective into a task plan providing a sequence of steps that the robot system may execute to complete the work objective. Optionally, the robot system may communicate with an operator to determine whether or not to execute a task plan.
A robotic system includes a robot, and an interface to a large language model (LLM). The robot operates in an environment that includes a human. In an example method of operation of the robotic system, the robot initiates a task. After initiating the task, the robot detects that the human is waiting for the robot to complete the task. The interface sends a query to the LLM. The query includes a natural language statement describing a context for the query. The interface receives a response from the LLM in reply to the query. The response includes a natural language statement describing material related to the context and suitable for an interim interaction that can be initiated by the robot with the human. The robot initiates the interim interaction with the human. The interim interaction may be initiated autonomously by the robot, and may include a diversion.
A robotic system includes a robot, an object recognition subsystem, an interface to a large language model (LLM), and a system controller. The robot operates in an environment that includes a first and a second object. In an example method of operation of the robotic system, the object recognition subsystem assigns a first label to the first object. The interface sends a query, including the first label, to the LLM. The interface receives a response from the LLM, the response in reply to the query and including a second label. The object recognition subsystem assigns the second label to the second object. In some implementations, the object recognition subsystem includes sensors and a sensor data processor. The sensors scan the environment to generate sensor data, and the sensor data processor detects the presence of the first and the second object based at least in part on the sensor data.
A robotic system includes a robot, an object recognition subsystem, an interface to a large language model (LLM), and a system controller. The robot operates in an environment that includes a first and a second object. In an example method of operation of the robotic system, the object recognition subsystem assigns a first label to the first object. The interface sends a query, including the first label, to the LLM. The interface receives a response from the LLM, the response in reply to the query and including a second label. The object recognition subsystem assigns the second label to the second object. In some implementations, the object recognition subsystem includes sensors and a sensor data processor. The sensors scan the environment to generate sensor data, and the sensor data processor detects the presence of the first and the second object based at least in part on the sensor data.
Systems, devices, and methods for training and operating (semi-)autonomous robots to complete multiple different work objectives are described. A robot control system stores a library of reusable work primitives each corresponding to a respective basic sub-task or sub-action that the robot is operative to autonomously perform. A work objective is analyzed to determine a sequence (i.e., a combination and/or permutation) of reusable work primitives that, when executed by the robot, will complete the work objective. The robot executes the sequence of reusable work primitives to complete the work objective. The reusable work primitives may include one or more reusable grasp primitives that enable(s) a robot's end effector to grasp objects. Simulated instances of real physical robots may be trained in simulated environments to develop control instructions that, once uploaded to the real physical robots, enable such real physical robots to autonomously perform reusable work primitives.
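The decomposition described above can be sketched as an ordered sequence of reusable work primitives drawn from a library and executed in turn. The library contents and the hard-coded plan are assumptions for illustration; in the described systems the sequence is determined by analyzing the work objective.

```python
# Sketch: execute a work objective as a sequence of reusable work primitives.
PRIMITIVE_LIBRARY = {
    "navigate_to": lambda target: f"navigated to {target}",
    "grasp":       lambda target: f"grasped {target}",
    "place":       lambda target: f"placed {target}",
}

def execute_objective(sequence):
    """sequence: ordered (primitive_name, argument) pairs drawn from the library."""
    return [PRIMITIVE_LIBRARY[name](arg) for name, arg in sequence]

plan = [("navigate_to", "shelf"), ("grasp", "cup"),
        ("navigate_to", "table"), ("place", "cup")]
for result in execute_objective(plan):
    print(result)
```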
Robot control systems, methods, control modules and computer program products that leverage one or more large language model(s) (LLMs) in order to achieve at least some degree of autonomy are described. Robot control parameters and/or instructions may advantageously be specified in natural language (NL) and communicated with the LLM via an NL prompt or query. The LLM module provides a task plan in NL, which can be evaluated for at least one fault or error. If at least one fault or error is identified, the LLM module can be queried to provide a resolution.
An actuator for a robotic joint includes an actuator housing. A motor is disposed within the actuator housing. The motor includes a stator that is rotationally fixed relative to the actuator housing and a rotor that is rotatable relative to the stator and actuator housing. An output shaft is coupled to a rotational motion of the rotor by a gear reducer. A first rotary encoder is coupled to the rotor to measure one or more parameters of the rotational motion of the rotor. A second rotary encoder is coupled to the output shaft to measure one or more parameters of a rotational motion of the output shaft.
A robotic torso for a humanoid robot includes a first torso member having an axial axis aligned with a reference axis and a second torso member having at least one mounting portion for at least one humanoid component. The robotic torso includes a series of actuators arranged in a column between the first torso member and the second torso member and coupling the first torso member to the second torso member. Each of the actuators has a rotatable member defining a rotational axis. The rotational axes of each adjacent pair of actuators in the series of actuators are orthogonal to each other.
A hydraulic valve includes a valve housing, two ports, a chamber within the valve housing that, in operation, is at least partially filled with a hydraulic fluid, and a fluid switch within the chamber. The fluid switch is movable between at least a first position and a second position. In the first position, the ports are fluidly coupled to each other. In the second position, the ports are fluidly isolated from each other. An external surface of the fluid switch is separated from an internal surface of the chamber by a first micro-gap. Another external surface of the fluid switch is separated from another internal surface of the chamber by a second micro-gap. The first micro-gap and the second micro-gap are fluidly coupled to the chamber, and each have a respective size of less than about five micrometers.
F16K 11/24 - Multiple-way valves, e.g. mixing valves; Pipe fittings incorporating such valves; Arrangement of valves and flow lines specially adapted for mixing fluid with two or more closure members not moving as a unit operated by separate actuating members with an electromagnetically-operated valve, e.g. for washing machines
B25J 9/14 - Programme-controlled manipulators characterised by positioning means for manipulator elements; fluid
A hydraulic valve includes a valve housing, two ports, a chamber within the valve housing that, in operation, is at least partially filled with a hydraulic fluid, and a fluid switch within the chamber. The fluid switch is movable between at least a first position and a second position. In the first position, the ports are fluidly coupled to each other. In the second position, the ports are fluidly isolated from each other. An external surface of the fluid switch is separated from an internal surface of the chamber by a first micro-gap. Another external surface of the fluid switch is separated from another internal surface of the chamber by a second micro-gap. The first micro-gap and the second micro-gap are fluidly coupled to the chamber, and each have a respective size of less than about five micrometers.
F16K 11/065 - Multiple-way valves, e.g. mixing valves; Pipe fittings incorporating such valves; Arrangement of valves and flow lines specially adapted for mixing fluid with all movable sealing faces moving as one unit comprising only sliding valves with linearly sliding closure members
56.
SYSTEMS, METHODS, AND CONTROL MODULES FOR CONTROLLING END EFFECTORS OF ROBOT SYSTEMS
Systems, methods, and control modules for controlling robot systems are described. A present state and a future state of an end effector are identified based on haptic feedback from touching an object. The end effector is transformed towards the future state. Deviations in the transformation are corrected based on further haptic feedback from touching the object. Transformation and correction of deviations are further informed by additional sensor data such as image data and/or proprioceptive data.
An electrohydraulic valve includes a valve housing having a common chamber and a metering port in communication with the common chamber. The valve housing is coupled to a valve manifold having an inlet port and an outlet port. A first nozzle in fluid communication with the inlet port has a first orifice. A second nozzle in fluid communication with the outlet port has a second orifice. A first valve disposed within the common chamber is movable between a closed position, in which the first valve closes the first orifice, and an open position, in which the first valve opens the first orifice. A second valve disposed within the common chamber is movable between a closed position, in which the second valve closes the second orifice, and an open position, in which the second valve opens the second orifice.
F16K 11/24 - Multiple-way valves, e.g. mixing valves; Pipe fittings incorporating such valves; Arrangement of valves and flow lines specially adapted for mixing fluid with two or more closure members not moving as a unit operated by separate actuating members with an electromagnetically-operated valve, e.g. for washing machines
F15B 13/044 - Fluid distribution or supply devices characterised by their adaptation to the control of servomotors for use with a single servomotor operated by electrically-controlled means, e.g. solenoids, torque-motors
58.
SYSTEMS, METHODS, AND CONTROL MODULES FOR CONTROLLING END EFFECTORS OF ROBOT SYSTEMS
Systems, methods, and control modules for controlling robot systems are described. A present state and a future state of an end effector are identified based on haptic feedback from touching an object. The end effector is transformed towards the future state. Deviations in the transformation are corrected based on further haptic feedback from touching the object. Transformation and correction of deviations are further informed by additional sensor data such as image data and/or proprioceptive data.
Robot control systems, methods, control modules and computer program products that leverage one or more large language model(s) (LLMs) in order to achieve at least some degree of autonomy are described. Robot control parameters and/or instructions may advantageously be specified in natural language (NL) and communicated with the LLM via a recursive sequence of NL prompts or queries. Corresponding NL responses from the LLM may then be converted into robot control parameters and/or instructions. In this way, an LLM may be leveraged by the robot control system to enhance the autonomy of various operations and/or functions, including without limitation task planning, motion planning, human interaction, and/or reasoning about the environment.
An electrohydraulic valve includes a valve housing having a common chamber and a metering port in communication with the common chamber. The valve housing is coupled to a valve manifold having an inlet port and an outlet port. A first nozzle in fluid communication with the inlet port has a first orifice. A second nozzle in fluid communication with the outlet port has a second orifice. A first valve disposed within the common chamber is movable between a closed position, in which the first valve closes the first orifice, and an open position, in which the first valve opens the first orifice. A second valve disposed within the common chamber is movable between a closed position, in which the second valve closes the second orifice, and an open position, in which the second valve opens the second orifice.
A robot includes a robot body having a first robotic leg, a second robotic leg, and a robotic torso. The first robotic leg includes a first foot, a first lower leg member coupled to the first foot, and a first upper leg member coupled to the first lower leg member. The second robotic leg includes a second foot, a second lower leg member coupled to the second foot, and a second upper leg member coupled to the second lower leg member. The robot includes a mobile base having a platform to which the first and second feet of the robot body are fastened.
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted supporting base and leg; Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
B60L 50/60 - Electric propulsion with power supplied within the vehicle using propulsion power supplied by batteries or fuel cells using power supplied by batteries
B60P 3/06 - Vehicles adapted to transport, to carry or to comprise special loads or objects for carrying vehicles
62.
SYSTEMS, METHODS, AND COMPUTER PROGRAM PRODUCTS FOR AUTOMATING TASKS FOR ROBOTS
Systems, methods, and computer program products for automating tasks are described. A multi-step framework enables a gradient towards task automation. An agent performs a task while sensors collect data. The data are used to generate a script that characterizes the discrete actions executed by the agent in the performance of the task. The script is used by a robot teleoperation system to control a robot to perform the task. The robot teleoperation system maps the script into an ordered set of action commands that the robot is operative to auto-complete to enable the robot to semi-autonomously perform the task. The ordered set of action commands is converted into an automation program that may be accessed by an autonomous robot and executed to cause the autonomous robot to autonomously perform the task. In training, simulated instances of the robot may perform simulated instances of the task in simulated environments.
G05B 19/042 - Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
B25J 3/04 - Manipulators of leader-follower type, i.e. both controlling unit and controlled unit perform corresponding spatial movements involving servo mechanisms
Systems, methods, and computer program products for automating tasks are described. A multi-step framework enables a gradient towards task automation. An agent performs a task while sensors collect data. The data are used to generate a script that characterizes the discrete actions executed by the agent in the performance of the task. The script is used by a robot teleoperation system to control a robot to perform the task. The robot teleoperation system maps the script into an ordered set of action commands that the robot is operative to auto-complete to enable the robot to semi-autonomously perform the task. The ordered set of action commands is converted into an automation program that may be accessed by an autonomous robot and executed to cause the autonomous robot to autonomously perform the task. In training, simulated instances of the robot may perform simulated instances of the task in simulated environments.
Systems, methods, and computer program products for automating tasks are described. A multi-step framework enables a gradient towards task automation. An agent performs a task while sensors collect data. The data are used to generate a script that characterizes the discrete actions executed by the agent in the performance of the task. The script is used by a robot teleoperation system to control a robot to perform the task. The robot teleoperation system maps the script into an ordered set of action commands that the robot is operative to auto-complete to enable the robot to semi-autonomously perform the task. The ordered set of action commands is converted into an automation program that may be accessed by an autonomous robot and executed to cause the autonomous robot to autonomously perform the task. In training, simulated instances of the robot may perform simulated instances of the task in simulated environments.
Systems, methods, and computer program products for automating tasks are described. A multi-step framework enables a gradient towards task automation. An agent performs a task while sensors collect data. The data are used to generate a script that characterizes the discrete actions executed by the agent in the performance of the task. The script is used by a robot teleoperation system to control a robot to perform the task. The robot teleoperation system maps the script into an ordered set of action commands that the robot is operative to auto-complete to enable the robot to semi-autonomously perform the task. The ordered set of action commands is converted into an automation program that may be accessed by an autonomous robot and executed to cause the autonomous robot to autonomously perform the task. In training, simulated instances of the robot may perform simulated instances of the task in simulated environments.
G05B 19/42 - Recording and playback systems, i.e. in which the programme is recorded from a cycle of operations, e.g. the cycle of operations being manually controlled, after which this record is played back on the same machine
G05D 1/223 - Command input arrangements on the remote controller, e.g. joysticks or touch screens
G05D 1/225 - Remote-control arrangements operated by off-board computers
G05D 1/243 - Means capturing signals occurring naturally from the environment, e.g. ambient optical, acoustic, gravitational or magnetic signals
G05D 1/648 - Performing a task within a working area or space, e.g. cleaning
66.
SYSTEMS, METHODS, AND COMPUTER PROGRAM PRODUCTS FOR AUTOMATING TASKS
Systems, methods, and computer program products for automating tasks are described. A multi-step framework enables a gradient towards task automation. An agent performs a task while sensors collect data. The data are used to generate a script that characterizes the discrete actions executed by the agent in the performance of the task. The script is used by a robot teleoperation system to control a robot to perform the task. The robot teleoperation system maps the script into an ordered set of action commands that the robot is operative to auto-complete to enable the robot to semi-autonomously perform the task. The ordered set of action commands is converted into an automation program that may be accessed by an autonomous robot and executed to cause the autonomous robot to autonomously perform the task. In training, simulated instances of the robot may perform simulated instances of the task in simulated environments.
Systems, methods, and computer program products for managing and populating environment models are described. An environment model representing an environment is accessed, and the environment model is populated with instances of object models. Locations where the instances of object models should be positioned in the environment model are identified by determining where in the environment model the apparent size of each instance, when viewed from a vantage point in the environment model, matches the apparent size of the object represented by that instance when viewed from a corresponding vantage point in the environment.
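The size-matching criterion can be made concrete with a pinhole-camera approximation: an object of true size S at distance d subtends an image size of roughly s = f*S/d, so matching apparent sizes fixes the placement distance along the view ray. The sketch below assumes this model; the focal length and values are illustrative, not drawn from the abstract itself.

    def placement_distance(true_size_m, observed_pixels, focal_px):
        """Distance along the view ray at which a model instance of
        true_size_m subtends observed_pixels in the camera image."""
        return focal_px * true_size_m / observed_pixels

    # Example: a 0.30 m object observed at 120 px with a 600 px focal length
    # should be placed 1.5 m from the vantage point in the environment model.
    print(placement_distance(0.30, 120, 600.0))  # -> 1.5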
Robot control systems, methods, control modules and computer program products that leverage one or more large language model(s) (LLMs) in order to achieve at least some degree of autonomy are described. Robot control parameters and/or instructions may advantageously be specified in natural language (NL) and communicated with the LLM via an NL prompt or query. An NL response from the LLM may then be converted into robot control parameters and/or instructions. In this way, an LLM may be leveraged by the robot control system to enhance the autonomy of various operations and/or functions, including without limitation task planning, motion planning, human interaction, and/or reasoning about the environment.
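As a loose illustration of the NL round-trip, the sketch below composes an NL prompt, parses a JSON-formatted NL response into robot control parameters, and treats query_llm as a stand-in for whatever LLM interface is actually used; the response format is an assumption of the example.

    import json

    def query_llm(prompt: str) -> str:
        # Placeholder: a real system would call an LLM service here.
        return '{"action": "move_to", "x": 0.4, "y": 0.1, "z": 0.2}'

    def plan_with_llm(task_nl: str) -> dict:
        prompt = (
            "You control a robot arm. Reply with JSON "
            '{"action": ..., "x": ..., "y": ..., "z": ...} for the task: '
            + task_nl
        )
        return json.loads(query_llm(prompt))

    print(plan_with_llm("move the gripper above the red block"))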
An example computer-implemented method reduces a number of polygons in a polygon mesh model of an object used in a computer simulation. The method determines one or more viewpoints and, for each viewpoint of the one or more viewpoints, determines a respective first subset of the model's polygons, the respective first subset of polygons being candidates for removal from the polygon mesh model. The method further determines a first intersection of the respective first subsets of polygons, and removes from the polygon mesh model at least some of the polygons in the first intersection. Candidates for removal from the polygon mesh model are identified as those polygons which are non-visible in the computer simulation from the one or more viewpoints.
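The candidate-intersection rule is naturally expressed with set operations. A minimal sketch, assuming a hypothetical visible_polygons visibility query (which a real pipeline might back with ray casting or rasterization):

    def reduce_mesh(polygons, viewpoints, visible_polygons):
        # A polygon is a removal candidate from a viewpoint if it is not
        # visible from that viewpoint; only polygons invisible from every
        # viewpoint (the intersection of candidate sets) are removed.
        candidate_sets = []
        for vp in viewpoints:
            visible = visible_polygons(polygons, vp)
            candidate_sets.append(set(polygons) - visible)
        removable = set.intersection(*candidate_sets) if candidate_sets else set()
        return [p for p in polygons if p not in removable]

    # Toy usage: polygons are labels; polygon "c" is hidden from both viewpoints.
    polys = ["a", "b", "c"]
    vis = lambda ps, vp: {"a", "b"}
    print(reduce_mesh(polys, ["front", "back"], vis))  # -> ['a', 'b']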
Systems, methods, and computer program products for managing simulated environments are described. A simulated environment is accessed which represents a physical environment, and representations of objects are included, maintained, or removed in the simulated environment based on whether objects are represented in image data of the physical environment, and based on whether objects are occluded by other objects in the image data of the physical environment. Future occlusion can also be predicted based on motion paths of objects.
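One way to read the include/maintain/remove rule is as a three-way test per object, sketched below with illustrative names; occlusion detection itself is assumed to happen elsewhere.

    def update_environment(model_objects, detected_ids, occluded_ids):
        kept = []
        for obj in model_objects:
            if obj in detected_ids:
                kept.append(obj)     # directly observed: maintain
            elif obj in occluded_ids:
                kept.append(obj)     # hidden behind another object: maintain
            # else: expected location is visible but object absent -> remove
        return kept

    print(update_environment(["cup", "plate", "fork"],
                             detected_ids={"cup"},
                             occluded_ids={"plate"}))  # -> ['cup', 'plate']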
A fluidic tactile sensor includes a core having an outer core portion, an inner core portion, and a first channel having a first opening at a first surface portion of the outer core portion. An elastic skin is disposed over the first surface portion. A cell is formed between the first surface portion and the elastic skin and is fluidly connected to the first channel. The cell contains a fluid. A contact force applied to the elastic skin produces a measurable change in fluid pressure inside the cell.
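The sensing principle reduces to a back-of-envelope relation: for a sealed cell with effective contact area A, a normal force F raises the cell pressure by roughly dP = F/A. The sketch below inverts that relation with an illustrative area; it is an approximation, not the sensor's calibration.

    def contact_force_newtons(delta_pressure_pa, cell_area_m2=1e-4):
        # F ~= dP * A for a sealed cell of effective area A (assumed here).
        return delta_pressure_pa * cell_area_m2

    # A 5 kPa rise over a 1 cm^2 cell suggests ~0.5 N of contact force.
    print(contact_force_newtons(5_000.0))  # -> 0.5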
A robot is provided having a kinematic chain comprising a plurality of joints and links, including a root joint connected to a robot pedestal, and at least one end effector. A plurality of actuators are fixedly mounted on the robot pedestal. A plurality of tendons is connected to a corresponding plurality of actuation points on the kinematic chain and to actuators in the plurality of actuators, arranged to translate actuator position and force to actuation points for tendon-driven joints on the kinematic chain with losses in precision due to variability of tendons in the plurality of tendons. A controller operates the kinematic chain to perform a task. The controller is configured to generate actuator command data in dependence on the actuator states and image data in a manner that compensates for the losses in precision in the tendon-driven mechanisms.
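In the spirit of the compensation described above, the toy loop below nudges an actuator command using a joint angle "observed" from image data until the joint reaches its target despite a simulated tendon shortfall; the loss model and gain are invented for the sketch.

    def compensated_command(target_angle, observed_angle, prev_command, gain=0.5):
        """One step of a simple proportional correction on top of the
        nominal command; a real controller would be far more elaborate."""
        error = target_angle - observed_angle
        return prev_command + gain * error

    cmd = 1.0  # nominal actuator command for a 1.0 rad joint target
    for _ in range(5):
        observed = 0.9 * cmd  # toy tendon loss model: 10% shortfall
        cmd = compensated_command(1.0, observed, cmd)
    print(round(cmd, 3))  # converges toward the command that yields 1.0 rad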
Robots, robot systems, and methods for operating the same based on environment models including haptic data are described. An environment model which includes representations of objects in an environment is accessed, and a robot system is controlled based on the environment model. The environment model includes haptic data, which provides more effective control of the robot. The environment model is populated based on visual profiles, haptic profiles, and/or other data profiles for objects or features retrieved from respective databases. Identification of objects or features can be based on cross-referencing between visual and haptic profiles, to populate the environment model with data not directly collected by a robot which is populating the model, or data not directly collected from the actual objects or features in the environment.
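The cross-referencing idea can be sketched as a join between visual and haptic profile databases keyed by object identity, so the model acquires haptic data the robot never directly measured. The tiny in-memory "databases" below are stand-ins for real ones.

    VISUAL_DB = {"mug_01": {"color": "white", "shape": "cylinder"}}
    HAPTIC_DB = {"mug_01": {"stiffness": "rigid", "texture": "smooth"}}

    def populate_object(object_id):
        """Merge visual and haptic profiles for one identified object."""
        return {
            "id": object_id,
            "visual": VISUAL_DB.get(object_id, {}),
            "haptic": HAPTIC_DB.get(object_id, {}),  # not directly sensed
        }

    print(populate_object("mug_01"))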
Robots, systems, methods, and computer program products for training and operating (semi-)autonomous robots to complete work objectives are described. A robot accesses a library of reusable work primitives from a catalog of libraries of reusable work primitives, each reusable work primitive corresponding to a respective basic sub-action that the robot is trained to autonomously perform. A work objective is analyzed to determine a sequence of reusable work primitives that complete the work objective, and the robot executes the sequence to complete the work objective. A robot can be deployed with access to an appropriate library of reusable work primitives, based on expectations for the robot. The robot is trained to perform reusable work primitives in multiple libraries, by generating control instructions which cause the robot to perform each reusable work primitive. Training is performed by real-world robots performing reusable work primitives, or simulated robot instances performing the reusable work primitives.
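A minimal sketch of executing a work objective as a sequence of library primitives follows; the library contents and the particular decomposition are invented, since a real system would derive the sequence by analyzing the objective.

    LIBRARY = {
        "locate": lambda obj: f"locate {obj}",
        "grasp": lambda obj: f"grasp {obj}",
        "transport": lambda obj: f"transport {obj}",
        "release": lambda obj: f"release {obj}",
    }

    def execute_objective(primitive_names, target):
        for name in primitive_names:
            print(LIBRARY[name](target))  # each primitive runs autonomously

    # "Clear the cup from the table" as a sequence of library primitives:
    execute_objective(["locate", "grasp", "transport", "release"], "cup")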
A robotic joint has a first portion that includes a first actuator and a second actuator, a first spherical linkage having a first end mechanically coupled to the first actuator and a second end mechanically coupled to a second portion of the robotic joint, and a second spherical linkage having a third end mechanically coupled to the second actuator and a fourth end mechanically coupled to the second portion. The first and second spherical linkages are segments of a spherical shell. The first and second actuators are operable in combination to control movement of the second portion relative to the first portion with two degrees of freedom: the actuators move in the same direction as each other to control a flexion or an extension, and in opposite directions to each other to control an abduction or an adduction.
B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
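The two-degree-of-freedom behavior implies a differential mapping: common-mode actuator motion produces flexion/extension and differential motion produces abduction/adduction. Treating the actuator displacements as additive, as the sketch below does, is an assumption for illustration rather than the joint's actual kinematics.

    def joint_angles(a1, a2):
        flexion = (a1 + a2) / 2.0    # common-mode motion
        abduction = (a1 - a2) / 2.0  # differential motion
        return flexion, abduction

    print(joint_angles(0.2, 0.2))   # -> (0.2, 0.0): pure flexion
    print(joint_angles(0.2, -0.2))  # -> (0.0, 0.2): pure abduction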
A software compensated robotic system makes use of recurrent neural networks and image processing to control operation and/or movement of an end effector. Images are used to compensate for variations in the response of the robotic system to command signals. This compensation allows for the use of components having lower reproducibility, precision, and/or accuracy than would otherwise be practical.
G06F 1/16 - Constructional details or arrangements
G05B 19/4155 - Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by programme execution, i.e. part programme or machine function execution, e.g. selection of a programme
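As a very loose stand-in for the recurrent network of the preceding abstract, the sketch below keeps a hidden state that accumulates image-derived command errors and adds a correction to each new command; the dynamics and gains are invented.

    def make_corrector(decay=0.8, gain=0.3):
        state = {"h": 0.0}
        def correct(command, observed_error):
            # Update hidden state from the latest image-derived error,
            # then emit a compensated command.
            state["h"] = decay * state["h"] + (1 - decay) * observed_error
            return command + gain * state["h"]
        return correct

    correct = make_corrector()
    print(correct(1.0, 0.10))  # first correction, small
    print(correct(1.0, 0.10))  # grows as the error persists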
A robot is provided having a kinematic chain comprising a plurality of joints and links, including a root joint connected to a robot pedestal, and at least one end effector. A plurality of actuators are fixedly mounted on the robot pedestal. A plurality of tendons is connected to a corresponding plurality of actuation points on the kinematic chain and to actuators in the plurality of actuators, arranged to translate actuator position and force to actuation points for tendon-driven joints on the kinematic chain with losses in precision due to variability of tendons in the plurality of tendons. A controller operates the kinematic chain to perform a task. The controller is configured to generate actuator command data in dependence on the actuator states and image data in a manner that compensates for the losses in precision in the tendon-driven mechanisms.
Systems, devices, and methods for training and operating (semi-)autonomous robots to complete multiple different work objectives are described. A robot control system stores a library of reusable work primitives each corresponding to a respective basic sub-task or sub-action that the robot is operative to autonomously perform. A work objective is analyzed to determine a sequence (i.e., a combination and/or permutation) of reusable work primitives that, when executed by the robot, will complete the work objective. The robot executes the sequence of reusable work primitives to complete the work objective. The reusable work primitives may include one or more reusable grasp primitives that enable(s) a robot's end effector to grasp objects. Simulated instances of real physical robots may be trained in simulated environments to develop control instructions that, once uploaded to the real physical robots, enable such real physical robots to autonomously perform reusable work primitives.
In an implementation, a position transducer includes a printed circuit board (PCB) and a wiper in sliding contact with the PCB. The PCB includes a first and a second connector pad, and a conductive trace comprising two legs. One leg has an end electrically communicatively coupled to the first connector pad, and the other leg has an end electrically communicatively coupled to the second connector pad. The wiper includes a first blade electrically communicatively coupled to the first leg and a second blade electrically communicatively coupled to the second leg. In operation, an electrical path length of a conductive path between the first and the second connector pad depends, at least in part, on a relative position of the PCB and the wiper. One or more of the position transducers can be used to determine a relative position of actuatable components of a robotic digit.
G01D 5/165 - Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using electric or magnetic means influencing the magnitude of a current or voltage by varying resistance by relative movement of a point of contact and a resistive track
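Reading such a transducer amounts to inverting a linear resistance model: the conductive path runs from one pad down a leg, across the wiper blades, and back up the other leg, so resistance grows with path length and hence with wiper position. The constants below are illustrative.

    def wiper_position_mm(measured_ohms, ohms_per_mm=10.0, min_ohms=50.0):
        """Invert the linear resistance model R = min_ohms + ohms_per_mm * x."""
        return (measured_ohms - min_ohms) / ohms_per_mm

    # 250 ohms measured -> wiper 20 mm from the reference end.
    print(wiper_position_mm(250.0))  # -> 20.0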
96.
Method and system of measuring tactile force feedback
Provided is a process that includes: obtaining low-sensitivity values mapped to respective locations within an area of a sensor array; determining differences between respective ones of the low-sensitivity values and respective ones of prior values for the locations, one example being tare values for the locations; obtaining a set of detection locations within the area of the sensor array based on differences exceeding a threshold; obtaining, after switching at least a portion of the sensor array to a high-sensitivity mode, high-sensitivity values mapped to respective locations within the area of the sensor array, each of the locations mapped to a respective second prior value, one example being a tare value in the high-sensitivity mode; determining, for each location within the set of detection locations, a value based on the respective high-sensitivity value and the respective second prior value; and outputting the values and the locations.
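Condensed to its control flow, the process scans at low sensitivity, thresholds the change from tare, then re-reads only the flagged locations at high sensitivity. A sketch with illustrative data:

    def detect_contacts(low_vals, low_tare, high_vals, high_tare, threshold):
        # Flag locations whose low-sensitivity change from tare exceeds the
        # threshold, then report high-sensitivity deltas for just those.
        detections = {
            loc for loc, v in low_vals.items()
            if abs(v - low_tare[loc]) > threshold
        }
        return {loc: high_vals[loc] - high_tare[loc] for loc in detections}

    low_vals = {(0, 0): 5, (0, 1): 22}
    low_tare = {(0, 0): 4, (0, 1): 5}
    high_vals = {(0, 0): 40, (0, 1): 310}
    high_tare = {(0, 0): 38, (0, 1): 60}
    print(detect_contacts(low_vals, low_tare, high_vals, high_tare, threshold=10))
    # -> {(0, 1): 250}: only the location whose low-sensitivity delta exceeded 10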
The present disclosure describes robots, tele-operation systems, methods, and computer program products where a robot is selectively operable in a plurality of control modes. Based on identification of a fault condition (when the robot fails to act in a suitable or sufficient manner), a control mode of the robot can be changed to provide a human operator with more explicit control over the robot. In this way, the fault condition can be resolved by human operator input, and the control modes, AI, or control paradigm for the robot can be trained to perform better in the future.
B25J 3/04 - Manipulators of leader-follower type, i.e. both controlling unit and controlled unit perform corresponding spatial movements involving servo mechanisms
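The mode change reads naturally as a small state machine that hands the operator progressively more explicit control on each detected fault; the mode names below are illustrative.

    MODES = ["autonomous", "supervised", "direct_teleop"]

    def next_mode(current, fault_detected):
        if not fault_detected:
            return current
        i = MODES.index(current)
        return MODES[min(i + 1, len(MODES) - 1)]  # more operator control

    mode = "autonomous"
    for fault in [False, True, True]:
        mode = next_mode(mode, fault)
    print(mode)  # -> 'direct_teleop'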