A robot control method, and a computer-readable storage medium and a wheel-legged biped robot using the same are provided. The method includes: determining a kinetic model of the wheel-legged biped robot; determining, using the kinetic model, a sliding surface of the wheel-legged biped robot; determining, according to the sliding surface, a double power reaching law and a sliding mode control law of the wheel-legged biped robot; and controlling, according to the sliding surface, the double power reaching law and the sliding mode control law, the wheel-legged biped robot. Through the above-mentioned method, the adaptability of the wheel-legged biped robot to uncertain external disturbances can be enhanced, thereby improving its robustness so that it can effectively maintain its balance even in environments with complex terrain.
G05D 1/495 - Control of attitude, i.e. control of roll, pitch or yaw to ensure stability
B62D 57/028 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members having wheels and mechanical legs
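The double power reaching law named in the abstract above has a standard generic form; the following is a minimal numerical sketch of that generic form, not the patent's specific formulation, and the gains k1, k2 and exponents a, b are illustrative assumptions.

```python
import numpy as np

def double_power_reaching(s, k1=2.0, k2=1.5, a=1.5, b=0.5):
    """Generic double power reaching law:
        s_dot = -k1*|s|^a*sign(s) - k2*|s|^b*sign(s),  a > 1, 0 < b < 1.
    The |s|^a term dominates far from the sliding surface and the |s|^b term
    dominates near it, giving fast reaching with reduced chattering."""
    return -k1 * abs(s) ** a * np.sign(s) - k2 * abs(s) ** b * np.sign(s)

# Euler-integrate the reaching phase from s(0) = 5.
s, dt = 5.0, 1e-3
for step in range(10_000):
    s += double_power_reaching(s) * dt
    if abs(s) < 1e-4:
        print(f"reached the sliding surface after {step * dt:.3f} s")
        break
```

In a full controller, the sliding mode control law would add an equivalent-control term derived from the kinetic model on top of this reaching term.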
Path planning for a mobile machine in large-scale navigation is disclosed. A path for moving a mobile machine is planned by: determining a start map node in a map graph based on a start point in the path and a goal map node in the map graph based on a goal point in the path; determining whether the start map node and the goal map node correspond to the same submap; and if so, planning the path between the start point and the goal point using a real-time path planning method; otherwise, obtaining the path between the start point and the goal point by merging a node path between the start map node and the goal map node, a first real-time path between the start point and a first stop point, and a second real-time path between the goal point and a last stop point.
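As a rough illustration of the submap test and path-merging logic described above, here is a toy sketch; the dict-based map graph, the breadth-first node search, and the straight-line stand-in for the real-time planner are all invented for the example.

```python
from collections import deque

# Toy map graph: node -> (submap id, position); edges between map nodes.
NODES = {"A": ("s1", (0, 0)), "B": ("s1", (5, 0)), "C": ("s2", (10, 0))}
EDGES = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}

def realtime_plan(p, q):
    # Stand-in for the real-time planner: a straight segment from p to q.
    return [p, q]

def node_path(u, v):
    # Breadth-first search over the map graph.
    prev, queue = {u: None}, deque([u])
    while queue:
        n = queue.popleft()
        if n == v:
            break
        for m in EDGES[n]:
            if m not in prev:
                prev[m] = n
                queue.append(m)
    path = []
    while v is not None:
        path.append(v)
        v = prev[v]
    return path[::-1]

def plan(start, goal, start_node, goal_node):
    if NODES[start_node][0] == NODES[goal_node][0]:  # same submap
        return realtime_plan(start, goal)
    nodes = node_path(start_node, goal_node)
    first_stop, last_stop = NODES[nodes[0]][1], NODES[nodes[-1]][1]
    # Merge: start -> first stop, node path, last stop -> goal.
    return (realtime_plan(start, first_stop)
            + [NODES[n][1] for n in nodes]
            + realtime_plan(last_stop, goal))

print(plan((0, 1), (9, 1), "A", "C"))
```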
A swimming pool cleaning system includes: a pool cleaning robot (10) that is to at least clean a floor and pool walls of a swimming pool; and a surface cleaning device (20) that is to at least clean a pool surface of the swimming pool. The surface cleaning device (20) is wirelessly connectable to an external control device and has a connecting cable (21) that electrically connects the surface cleaning device (20) to the pool cleaning robot (10), so that the pool cleaning robot (10) communicates with the external control device via the surface cleaning device (20).
A palletizing method includes: detecting, by one or more depth cameras, a stacking pattern on a pallet; generating a height map matrix based on the stacking pattern, wherein a value at each element of a plurality of elements in the height map matrix indicates a height of the stacking pattern at a position on the pallet corresponding to the element; for each box of one or more boxes to be palletized: traversing the elements in the height map matrix to obtain a mask matrix generated based on hypothetical situations in which a target vertex of the box rests on a position corresponding to each of the elements; determining reward function values corresponding to one or more elements in the mask matrix with an element value of 1; and determining a box number and a box placement posture corresponding to a largest reward function value of the reward function values.
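The height-map bookkeeping in the abstract above can be made concrete with a small sketch; the level-support feasibility test and the corner-seeking placement preference are simplistic stand-ins for the patent's mask matrix and reward function.

```python
import numpy as np

def placement_mask(height_map, box_w, box_l):
    """Mask of candidate positions for the box's target vertex: 1 where the
    box footprint fits on the pallet and rests on a level support surface
    (a toy feasibility criterion)."""
    rows, cols = height_map.shape
    mask = np.zeros_like(height_map, dtype=int)
    for i in range(rows - box_l + 1):
        for j in range(cols - box_w + 1):
            footprint = height_map[i:i + box_l, j:j + box_w]
            if footprint.max() == footprint.min():
                mask[i, j] = 1
    return mask

def place(height_map, i, j, box_w, box_l, box_h):
    """Update the height map after committing a placement."""
    height_map[i:i + box_l, j:j + box_w] += box_h

pallet = np.zeros((10, 10))
mask = placement_mask(pallet, box_w=3, box_l=2)
# Toy reward stand-in: prefer low, corner-adjacent placements.
best = min(zip(*np.nonzero(mask)), key=lambda ij: (pallet[ij], ij[0] + ij[1]))
place(pallet, *best, box_w=3, box_l=2, box_h=4)
```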
An object tracking method, and a terminal device and a computer-readable storage medium using the same are provided. The method includes: obtaining first feature information of a target human body in a first image and a first detection frame of a head area of the target human body; obtaining second feature information of each human object in a second image and a second detection frame of a head area of the human object by performing a first image detection on the second image; and recognizing the target human body from the human objects in the second image according to a first similarity between the first feature information and the second feature information and a second similarity between the first detection frame and the second detection frame. The above-mentioned method can effectively improve the accuracy of object matching, thereby enhancing the reliability of the results of multi-object tracking.
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
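A minimal sketch of how the two similarities described in the abstract above might be fused into a single matching score; the cosine similarity, the IoU measure, and the weight w are assumptions, since the abstract does not fix the concrete measures.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def match_score(feat_a, feat_b, box_a, box_b, w=0.6):
    """Weighted fusion of appearance similarity (features) and head-box IoU."""
    cos = feat_a @ feat_b / (np.linalg.norm(feat_a) * np.linalg.norm(feat_b) + 1e-9)
    return w * cos + (1 - w) * iou(box_a, box_b)
```

The target is then matched to the human object with the highest fused score, optionally gated by a minimum-score threshold.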
6.
METHOD AND DEVICE FOR OBSTACLE AVOIDANCE CONTROL FOR ROBOT AND COMPUTER-READABLE STORAGE MEDIUM
An obstacle avoidance control method for a robot includes: obtaining, by a depth camera perception system, envelope motion information of each movable obstacle in a target operating environment in a current control cycle, and accessing a pre-stored actual voxel block position of each environmental obstacle in the target operating environment, wherein the actual voxel block position of each environmental obstacle was measured by the depth camera perception system before the movable obstacles were present in the target operating environment; and determining a number of target obstacles that need to be avoided by a number of key obstacle-avoidance parts of a target robot in the target operating environment according to all of the envelope motion information and all of the actual voxel block positions.
The present application relates to the technical field of cleaning robot apparatuses, and in particular to a filtering and dust collecting device and a cleaning robot. The filtering and dust collecting device comprises a housing, an inner housing component, a first filtering structure, a second filtering structure and a third filtering structure. In the filtering and dust collecting device, the second filtering structure, the first filtering structure and the third filtering structure sequentially and hierarchically perform layer-by-layer filtering and separation on an airflow carrying debris and dust, so that the debris and dust suctioned along with the airflow are more thoroughly intercepted in the filtering and dust collecting device, and the hierarchical filtering and separation of debris and dust is not prone to causing blockage of the filtering structures. The technical solution thus addresses the problem in existing cleaning robots whereby a HEPA filter screen is prone to being blocked by dust or debris, weakening the suction force of the cleaning robot and degrading the dust collection and cleaning performance during cleaning work.
A47L 11/40 - Parts or details of machines not provided for in groups, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
A blink detection method includes: obtaining a number of first keypoints of at least one eye in a first image and a number of second keypoints of the at least one eye in a second image, wherein the first image is a previous image frame prior to the second image; adjusting the second keypoints based on a position offset between the first keypoints and the second keypoints, to obtain a number of adjusted second keypoints; and detecting a blink based on the first keypoints and the adjusted second keypoints.
G06V 40/18 - Eye characteristics, e.g. of the iris
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
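The offset-adjustment step in the blink-detection abstract above can be illustrated with a short sketch; the mean-offset alignment and the vertical-extent openness measure are simplifications invented for the example.

```python
import numpy as np

def adjust_keypoints(kps_prev, kps_curr):
    """Cancel global head motion: shift the current eye keypoints by the mean
    offset between the two frames before comparing eye openness."""
    offset = (kps_prev - kps_curr).mean(axis=0)
    return kps_curr + offset

def eye_openness(kps):
    # Toy openness measure: vertical extent of the eye keypoints.
    return kps[:, 1].max() - kps[:, 1].min()

def is_blink(kps_prev, kps_curr, ratio=0.5):
    """Blink if the motion-compensated openness drops below a fraction of the
    previous frame's openness (the ratio is an assumption)."""
    adjusted = adjust_keypoints(kps_prev, kps_curr)
    return eye_openness(adjusted) < ratio * eye_openness(kps_prev)
```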
A cleaning device (300), comprising: a body assembly (310), which is provided with an assembly space and an exhaust port (312); a dust collection assembly (320), which comprises an inner housing (321) and a dust collection box (100) mounted to the inner housing (321), the inner housing (321) being provided with a first air inlet and a first air outlet (3212), and the first air inlet and the first air outlet (3212) both being communicated with the dust collection box (100); a fan (330), the fan (330) having an airflow inlet (331) and an airflow outlet (332), the airflow inlet (331) being coupled to and communicated with the first air outlet (3212), and a first elastic member (351) being provided between the airflow inlet (331) and the first air outlet (3212); and an air path assembly (340), the air path assembly (340) having an inlet end (341) and an outlet end (342), the inlet end (341) being connected to the fan (330), the inlet end (341) being communicated with the airflow outlet (332), the outlet end (342) being connected to the body assembly (310) by means of a second elastic member (352), and the outlet end (342) extending to the inner wall of the assembly space so as to be communicated with the exhaust port (312). Thus, the present application solves the problem that vibration of fans (330) of existing cleaning devices (300) not only causes noise but also causes faults due to loosening of connection structures in the cleaning devices (300).
A47L 11/40 - Parts or details of machines not provided for in groups, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
A47L 9/00 - Details or accessories of suction cleaners, e.g. mechanical means for controlling the suction or for effecting pulsating action; Storing devices specially adapted to suction cleaners or parts thereof; Carrying-vehicles specially adapted for suction cleaners
10.
CLEANING CONTROL METHOD AND APPARATUS FOR ROBOTIC VACUUM CLEANER, AND STORAGE MEDIUM AND INTELLIGENT ROBOT
The present application is applicable to the technical field of artificial intelligence. Provided are a cleaning control method and apparatus for a robotic vacuum cleaner, and a storage medium and an intelligent robot. The method comprises: when a robotic vacuum cleaner is controlled to execute edge cleaning, if a specified event is triggered, determining whether there is an unswept area between an event object in the specified event and a corresponding target environment boundary, wherein triggering the specified event comprises the robotic vacuum cleaner having a collision or the robotic vacuum cleaner detecting an obstacle, and the unswept area is an area that the robotic vacuum cleaner can pass through; and when it is determined that there is an unswept area, controlling the robotic vacuum cleaner to clean the unswept area. The present application can effectively improve the thoroughness and effectiveness of cleaning by a robotic vacuum cleaner, helping to avoid missed spots or large areas being left unswept, thereby enhancing the user experience.
A47L 11/40 - Parts or details of machines not provided for in groups, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
11.
TARGET DETECTION MODEL TRAINING METHOD AND APPARATUS, AND ELECTRONIC DEVICE
A method and an apparatus for training target detection models and an electronic device are provided. The method includes: predicting unlabeled training data through a teacher model and a student model to obtain a first prediction result output by the teacher model and a second prediction result output by the student model; determining a target pseudo-label category to which the first prediction result belongs according to the confidence in the first prediction result; calculating a current pseudo-label loss based on the first prediction result, the second prediction result, and the pseudo-label loss function corresponding to the target pseudo-label category; and updating the student model according to the current pseudo-label loss and updating the teacher model based on the updated student model, and returning to predicting the unlabeled training data through the teacher model and the student model until a preset training end condition is met.
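A compact sketch of two distinctive pieces of the loop described above, confidence-based pseudo-label routing and the teacher update; the exponential-moving-average update and the thresholds are common semi-supervised practice assumed here, not details taken from the abstract.

```python
def pseudo_label_category(confidence, hi=0.9, lo=0.5):
    """Route a teacher prediction to a pseudo-label category by confidence;
    each category has its own pseudo-label loss function."""
    if confidence >= hi:
        return "reliable"   # e.g. full classification + regression loss
    if confidence >= lo:
        return "uncertain"  # e.g. soft or down-weighted loss
    return "ignore"         # excluded from the pseudo-label loss

def update_teacher(teacher, student, momentum=0.999):
    """EMA teacher update from the freshly updated student parameters."""
    return {k: momentum * teacher[k] + (1 - momentum) * student[k]
            for k in teacher}
```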
A point cloud data processing method, and a robot and a robot control method using the same are provided. The method includes: obtaining image data including an RGB image and a depth image that is collected through an RGBD camera; obtaining an original mask image by segmenting out targets from the RGB image using a target segmentation mode; obtaining an optimized mask image by performing a pixel-level processing on the original mask image; obtaining, based on the optimized mask image and the depth image, a plane equation of each of the targets in the optimized mask image; performing, using the plane equation of each of the targets, a depth value assignment on a plane position of the target in the optimized mask image that is not assigned with the depth value; and obtaining target point cloud data by performing a point cloud conversion on the depth image.
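A rough sketch of the plane-fitting and depth-assignment steps; for simplicity the plane is fitted directly in pixel-plus-depth coordinates (u, v, z), which is an assumption rather than the patent's formulation.

```python
import numpy as np

def fit_plane(points_uvz):
    """Least-squares plane z = a*u + b*v + c through an Nx3 array of
    (u, v, z) samples taken from one target's mask region."""
    A = np.c_[points_uvz[:, 0], points_uvz[:, 1], np.ones(len(points_uvz))]
    (a, b, c), *_ = np.linalg.lstsq(A, points_uvz[:, 2], rcond=None)
    return a, b, c

def fill_depth(depth, mask, plane):
    """Assign plane depths to masked pixels that have no measured depth."""
    a, b, c = plane
    v, u = np.nonzero(mask & (depth == 0))
    depth[v, u] = a * u + b * v + c
    return depth
```

The filled depth image is then back-projected through the camera intrinsics to obtain the target point cloud.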
An object tracking method, and a terminal device and a computer-readable storage medium using the same are provided. The method includes: obtaining a first filtered image by filtering out the moving object in the i-th image frame, where the moving object is an object in the i-th image frame that has a positional change relative to the object in the (i−1)-th image frame, and i is an integer larger than 1; determining, based on the first filtered image, a pixel mapping relationship between the (i−1)-th image frame and the i-th image frame; and tracking, according to the pixel mapping relationship, the moving object. Through the above-mentioned method, the reliability of the trajectory matching results can be improved, thereby improving the reliability of object tracking.
An image generation method includes: generating multiple sets of paired data using a trained first model, each set of the multiple sets of paired data including a first face image and a first cartoon image corresponding to the first face image; training a second model based on the multiple sets of paired data to obtain a trained second model; and inputting a second face image to be processed into the trained second model to obtain a second cartoon image corresponding to the second face image.
G06T 11/60 - Editing figures and text; Combining figures or text
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
A filtering method for a two-dimensional laser point cloud includes: acquiring a pitch angle and a roll angle of a mobile robot when the mobile robot is collecting a two-dimensional laser point cloud; determining a target tilt direction corresponding to the mobile robot and a target tilt angle corresponding to the two-dimensional laser point cloud according to the pitch angle and the roll angle; determining an angular filtering interval according to the target tilt direction and the target tilt angle, wherein the angular filtering interval is an angle interval corresponding to the laser point cloud to be filtered out; and filtering, by the mobile robot, the two-dimensional laser point cloud based on the angular filtering interval.
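A sketch of the angular-interval filtering described above; mapping the tilt angle to the interval half-width with a linear gain k is an assumption, as the abstract does not specify the mapping.

```python
import numpy as np

def angular_filter(angles, ranges, tilt_dir, tilt_angle, k=2.0):
    """Drop scan points whose bearing falls inside an interval centred on the
    tilt direction, with a half-width that grows with the tilt angle."""
    half_width = k * tilt_angle
    diff = np.angle(np.exp(1j * (angles - tilt_dir)))  # wrap to [-pi, pi]
    keep = np.abs(diff) > half_width
    return angles[keep], ranges[keep]
```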
A method for area dividing in a map for a mobile robot includes: obtaining a binary map corresponding to a target area; determining one or more first areas in the binary map according to partition information, wherein the one or more first areas are undivided areas in the binary map, the partition information corresponds to second areas in the binary map, and the second areas are divided areas in the binary map; for each first area of the one or more first areas, determining, according to size of the first area, the first area as a newly added second area, or merging the first area into a third area that is one of the second areas, thereby obtaining a target area-dividing map; and controlling the mobile robot according to the target area-dividing map.
A method for measuring a center of mass of an object includes: controlling a multi-arm robot to carry an object through multiple robotic arms to perform pose changing movements of the object; obtaining actual end pose data and actual end force parameters of an end effector of each robotic arm after the pose changing movements; according to the actual end pose data and actual end force parameters, performing center of mass position calculation based on a torque balance relationship of the end effectors of the robotic arms on the object, to obtain a current candidate position of a center of mass of the object; and determining whether a distance between the current candidate position of the center of mass and a most recently calculated historical candidate position of the center of mass before the pose changing movements is less than a distance threshold.
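The torque-balance step above can be sketched as a stacked least-squares problem; the formulation below assumes the end-effector positions, forces, torques, and the gravity vector have all been expressed in the object's body frame for each pose (so the centre of mass is constant across poses), which is an illustrative choice rather than the patent's stated construction.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix S(v) with S(v) @ x == np.cross(v, x)."""
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

def com_from_wrenches(pose_measurements):
    """Centre of mass c from end-effector wrenches over several object poses.
    Each entry is (g, wrenches): the gravity vector in the object's body frame
    for that pose, and a list of (p, f, tau) per end effector, also in the
    body frame. Static balance per pose:
        sum_i f_i + m*g = 0
        sum_i (p_i x f_i + tau_i) + c x (m*g) = 0  =>  S(m*g) @ c = sum_i (...)
    Each pose contributes a rank-2 equation; stacking poses makes c observable."""
    A, b = [], []
    for g, wrenches in pose_measurements:
        f_total = sum(f for _, f, _ in wrenches)
        m = -(f_total @ g) / (g @ g)        # mass from the force balance
        A.append(skew(m * g))
        b.append(sum(np.cross(p, f) + tau for p, f, tau in wrenches))
    c, *_ = np.linalg.lstsq(np.vstack(A), np.hstack(b), rcond=None)
    return c
```

Consecutive candidate positions from successive pose changes can then be compared against the distance threshold to decide convergence.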
A cat litter machine, comprising a machine body and a drum (100) rotatably arranged on the machine body, wherein the drum (100) has an accommodation cavity (111), and an opening (112) in communication with the accommodation cavity (111) is formed in a side wall of the drum (100), the open angle of the opening (112) being α in the direction perpendicular to the axis of rotation of the drum (100), and the open angle of the opening (112) being β in the direction of the axis of rotation of the drum (100), satisfying 70°≤α≤120° and 70°≤β≤120°.
A method for partition cleaning planning of a cleaning robot includes: obtaining a first position of the cleaning robot at a current cleaning moment, and a second position of the cleaning robot at a cleaning moment previous to the current cleaning moment; in response to a distance between the first position and the second position being greater than a predetermined distance threshold, determining a partition to which the first position of the cleaning robot belongs; determining an uncleaned area within the partition to which the first position belongs, and performing path planning based on the uncleaned area; and controlling the cleaning robot according to a result of the path planning.
A47L 9/28 - Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
A47L 11/40 - Parts or details of machines not provided for in groups, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
G05D 1/644 - Optimisation of travel parameters, e.g. of energy consumption, journey time or distance
G05D 105/10 - Specific applications of the controlled vehicles for cleaning, vacuuming or polishing
20.
FACIAL RECOGNITION METHOD AND APPARATUS, AND COMPUTER-READABLE STORAGE MEDIUM AND ROBOT
The present application belongs to the technical field of image processing, and particularly relates to a facial recognition method and apparatus, and a computer-readable storage medium and a robot. The method comprises: acquiring a target image to be recognized; and using a preset facial recognition model to perform facial recognition on the target image, so as to obtain a facial recognition result, wherein the facial recognition model uses a margin-based loss function in a training process, and the value of a margin is positively correlated with the image quality of a sample image for training. By means of the method, the value of a margin can be determined on the basis of the image quality of a sample image in a training process, and the value of the margin is positively correlated with the image quality of the sample image, such that the value of the margin can be flexibly adjusted on the basis of the image quality, thereby facilitating an improvement in the robustness of facial recognition.
A robot (5) control method and apparatus, a readable storage medium and a robot (5). The robot (5) control method comprises: acquiring an expected attitude angle of an ankle of a robot (5) (S101); determining an expected stroke length of a linear motor of the ankle on the basis of the expected attitude angle (S102); and, according to the expected stroke length, controlling the linear motor to move (S103).
A mechanical arm (5) motion generation method and apparatus, a computer-readable storage medium, and a mechanical arm (5). The mechanical arm (5) motion generation method comprises: decomposing a mechanical arm (5) grabbing task oriented to a goods shelf environment into a plurality of sub-tasks; on the basis of a geometric dynamical system, separately determining a Riemannian motion policy of each sub-task; and, on the basis of graph calculation processes of the Riemannian motion policies, integrating the Riemannian motion policy of each sub-task to obtain a global motion policy of a mechanical arm (5).
The present application relates to the technical field of robot control. Provided are a robot obstacle avoidance control method and a related device. In the present application, an RMP mapping tree is constructed on the basis of the actual spatial positions of all obstacles currently present in the operating environment where a target robot is located, such that the root node task of the RMP mapping tree corresponds to the robot joint space, and the corresponding leaf node tasks comprise a position motion task and a pose motion task of a robot tail end executing an expected operation, as well as obstacle avoidance motion tasks in which a plurality of key parts of the robot respectively perform obstacle avoidance on the obstacles. Then, geometric dynamical systems involving speed information are respectively constructed for the various motion tasks, and an expected joint acceleration is solved on the basis of an RMP push-forward operation and an RMP pull-back operation, so as to control the target robot to move, thereby enabling the robot to achieve the expected operation execution effects while avoiding dynamic obstacles with high agility and in real time.
A torque transferring assembly (100), a head mechanism, and a robot. The torque transferring assembly (100) comprises a rotary output shaft (1), a transmission shaft (2), a rotating member (3), and an axial limiting structure (4). The rotary output shaft (1) can output rotary motion; a first circumferential limiting structure (61) used for enabling the transmission shaft (2) to rotate along with the rotary output shaft (1) is provided at the joint of the rotary output shaft (1) and the transmission shaft (2); a second circumferential limiting structure (62) enabling the rotating member (3) to rotate along with the transmission shaft (2) is provided at the joint of one end of the rotating member (3) and the transmission shaft (2); and the axial limiting structure (4) is used for enabling axial positions of the rotary output shaft (1), the transmission shaft (2), and the rotating member (3) to be relatively fixed. By means of separate arrangement of the axial limiting structure (4) and the circumferential limiting structures, the space occupied by the limiting structures can be reduced, the size of the torque transferring assembly (100) in the axial direction of the transmission shaft (2) is increased as little as possible, and torque can be transferred in a narrow space, so that the torque transferring assembly is applied to the head mechanism of the robot.
F16D 1/10 - Quick-acting couplings in which the parts are connected by simply bringing them together axially
F16D 1/06 - Couplings for rigidly connecting two coaxial shafts or other movable machine elements for attachment of a member on a shaft or on a shaft-end
F16D 1/00 - Couplings for rigidly connecting two coaxial shafts or other movable machine elements
B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
The present application relates to the field of path planning, in particular to a robot, a path planning method and apparatus therefor and a storage medium. The method comprises: acquiring a motion trajectory of a robot; determining, on the basis of the motion trajectory, that a movement path of the robot meets a preset closed path condition, and determining the type of the closed path; and determining an exit path of the robot on the basis of the type of the closed path, and, on the basis of the exit path, controlling the robot to exit the closed path. Thus, the present application can reduce the probability that robots are stuck in a looped motion pattern, thereby improving the task execution efficiency of robots.
The present application is applicable to the technical field of driving devices, and provides a driving device. The driving device comprises: a housing, a driving part, and a deceleration output part, wherein the housing is internally provided with an accommodating cavity and a first mounting hole communicated with the accommodating cavity; the driving part is arranged in the accommodating cavity; the input end of the deceleration output part is arranged in the accommodating cavity, and the output end of the deceleration output part is arranged in the first mounting hole; the position of the deceleration output part corresponds radially to the position of the driving part; the output end of the driving part is in driving connection with the input end of the deceleration output part, and the driving part is used for driving the deceleration output part. According to the driving device provided by the present application, the axial size of the driving device can be reduced in the manner that the position of the deceleration output part corresponds radially to the position of the driving part; moreover, since the deceleration output part is arranged inside the housing and corresponds radially to the driving part, the space in the housing is fully utilized, improving the torque density of the driving device provided by the present application.
A pet feeder and a temperature control method and device therefor, and a storage medium. The control method comprises: acquiring feeding time of a pet feeder; on the basis of the feeding time, determining a time range corresponding to a refrigeration gear of the pet feeder, wherein the refrigeration gear comprises an ON duration and an OFF duration of a refrigeration module (20); and on the basis of the ON duration and the OFF duration of the refrigeration module (20), controlling the temperature of a food storage space (10) of the pet feeder.
G05D 23/20 - Control of temperature characterised by the use of electric means with sensing elements having variation of electric or magnetic properties with change of temperature
The present application belongs to the technical field of robots, and particularly relates to a scenario recognition method and apparatus, a computer readable storage medium and a robot. The method comprises: acquiring laser point cloud data of a robot; performing image mapping processing on the laser point cloud data to obtain a target scenario image corresponding to the laser point cloud data; calculating the graphic complexity corresponding to the target scenario image; and, according to the graphic complexity, determining a scenario where the robot is located. The method can calculate the graphic complexity corresponding to the target scenario image, and determine the scenario where the robot is located according to the graphic complexity, thereby using proper repositioning methods in different scenarios, and improving the robustness and scenario adaptability of robot repositioning.
A sound source localization method includes: obtaining a first audio frame and at least two second audio frames, wherein the first audio frame and the at least two second audio frames are synchronously sampled, the first audio frame is obtained by processing sound signals collected by a first microphone, and the at least two second audio frames are obtained by processing sound signals collected by at least two second microphones, respectively; calculating a time delay estimation between the first audio frame and each of the at least two second audio frames; and determining a sound source orientation corresponding to the first audio frame and the at least two second audio frames through a preset time delay-orientation lookup table according to the time delay estimation between the first audio frame and each of the at least two second audio frames.
H04R 1/40 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
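The per-pair delay estimation in the abstract above is commonly implemented with GCC-PHAT; here is a minimal sketch under that assumption (the abstract does not name the estimator).

```python
import numpy as np

def gcc_phat_delay(sig, ref, fs, max_tau=None):
    """Delay (in seconds) of `sig` relative to `ref` via GCC-PHAT."""
    n = len(sig) + len(ref)                  # linear (non-circular) correlation
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n=n)   # PHAT weighting
    max_shift = n // 2 if max_tau is None else int(fs * max_tau)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs
```

Each first/second frame pair yields one delay estimate; the tuple of delays is then quantised and used to index the precomputed delay-to-orientation lookup table.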
31.
PET FEEDER AND FEEDING CONTROL METHOD AND DEVICE THEREFOR, AND STORAGE MEDIUM
A feeding control method for a pet feeder, comprising: acquiring a preset feeding plan, the feeding plan comprising a correspondence between food types and feeding times; on the basis of the feeding plan, by incorporating a preset correspondence between the food types and rotation angles, determining a correspondence between the rotation angles of a rotating tray and the feeding times; and on the basis of the correspondence between the rotation angles of the rotating tray and the feeding times, controlling the rotation of the rotating tray, such that the rotating tray rotates until a food slot of the food type corresponding to a feeding time is in an unblocked state. Food is separately contained in food slots in the rotating tray, which is conducive to the separation of dry and wet food; and food delivery of the food slots is controlled by means of the feeding plan, which is conducive to the precise control of the meal time of pets for different types of food, the food types and the food amount, thereby improving the feeding convenience of users. Further disclosed are a feeding control device for a pet feeder, a pet feeder, and a computer readable storage medium.
An image quality assessing method, an electronic device, and a computer-readable storage medium are provided. The method includes: obtaining a to-be-assessed original image; obtaining a grayscale image and a histogram equalization image corresponding to the obtained original image by performing an image conversion on the original image; calculating an image similarity between the obtained grayscale image and the obtained histogram equalization image; and determining an image quality assessment result of the original image according to the calculated image similarity. Through the foregoing method, the original image can be converted to obtain the corresponding grayscale image and histogram equalization image, and the image quality assessment result of the original image can be determined according to the image similarity between the grayscale image and the histogram equalization image, which does not involve statistics and calculations of multiple feature indicators, and is helpful to improve the efficiency of the image quality assessment method.
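A minimal sketch of the similarity-based assessment; using Pearson correlation as the image similarity is an assumption, since the abstract leaves the measure open.

```python
import cv2
import numpy as np

def quality_score(bgr):
    """Image-quality proxy: similarity between the grayscale image and its
    histogram-equalised version. A well-exposed, well-contrasted image changes
    little under equalisation, so a higher similarity suggests better quality."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    eq = cv2.equalizeHist(gray)
    g = gray.astype(np.float64).ravel()
    e = eq.astype(np.float64).ravel()
    return float(np.corrcoef(g, e)[0, 1])  # 1.0 means identical
```

The score can then be thresholded to produce the final assessment result.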
A method for area dividing in a map for a mobile robot includes: obtaining a target binary map of a target area; obtaining a first area map and a second area map according to the target binary map; obtaining a target area-dividing map of the target area according to the first area map and the second area map; and controlling the mobile robot according to the target area-dividing map.
A47L 11/40 - Parts or details of machines not provided for in groups, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
G01C 21/32 - Structuring or formatting of map data
34.
HUMAN BODY INFORMATION EXTRACTION METHOD AND APPARATUS, AND COMPUTER-READABLE STORAGE MEDIUM AND ROBOT
The present application belongs to the technical field of image processing, and particularly relates to a human body information extraction method and apparatus, and a computer-readable storage medium and a robot. The method comprises: acquiring a target image to be subjected to detection; performing shallow feature extraction on the target image by using a first feature extraction network in a preset human body information extraction model, so as to obtain shallow features of the target image; performing deep feature extraction on the shallow features by using a second feature extraction network in the human body information extraction model, so as to obtain deep features of the target image; performing multi-scale feature fusion on the shallow features and the deep features by using a feature fusion network in the human body information extraction model, so as to obtain fused features of the target image; and performing full connection processing on the fused features by using a fully-connected network in the human body information extraction model, so as to obtain human body information corresponding to the target image, wherein the human body information extraction model is an artificial intelligence model which is obtained by means of pre-training and is used for human body information extraction, and human body information comprises human body orientation information.
A localization method, a localization apparatus, a self-moving device and a computer-readable storage medium. The method comprises: during the process of a self-moving device executing a mobile task, performing target detection on a first real-time image collected by the self-moving device (101); when there is a target object in the first real-time image, determining position information of the target object in a device coordinate system (102), wherein the target object is an object which is used for assisting in localization and is arranged in an environment where the self-moving device is located; and on the basis of the position information of the target object in the device coordinate system, a pre-constructed target grid map, and an output of an odometer mounted on the self-moving device, determining a localization result of the self-moving device (103), wherein the target grid map is constructed on the basis of the target object. By means of the method, a self-moving device can realize accurate self-localization even in special scenarios.
A method for docking a cleaning robot includes: obtaining a map of a scene where the cleaning robot is located, and a position of the cleaning robot in the map; determining a map contour where the position of the cleaning robot is located according to the map; determining a plurality of traversal points for searching for a charging station and a plurality of traversal areas corresponding to the plurality of traversal points according to the map contour; and searching for the charging station in the traversal areas corresponding to the traversal points according to a predetermined search order, until the charging station is found in the traversal areas, or the traversal areas corresponding to all of the traversal points have been searched.
A47L 11/40 - Parts or details of machines not provided for in groups, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
G01C 21/00 - Navigation; Navigational instruments not provided for in groups
Provided in the present application are a wheel-foot structure and a robot, the wheel-foot structure being configured to be mounted on one side of a thoracic body structure. The thoracic body structure comprises a first driving member, and the wheel-foot structure comprises a thigh, a rocker, a link, a lower leg and a wheel assembly, wherein one end of the thigh is fixedly connected to a fixing portion of the first driving member, and the other end of the thigh is hinged to a non-end part of the lower leg; one end of the rocker is fixedly connected to a rotating portion of the first driving member, and the other end of the rocker is hinged to one end of the link; and one end of the lower leg is hinged to the other end of the link, and the other end of the lower leg is rotationally connected to the wheel assembly. The robot comprises the thoracic body structure and wheel-foot structures, wherein the thoracic body structure is movably connected to the wheel-foot structures. In the present application, by using equivalent structures of parallel four-bar mechanisms, upward displacement of knee-joint revolute joints of the robot is realized, thereby reducing the inertia of the wheel-foot structures relative to knee joints; moreover, the volume of the overall structure of the robot is reduced.
B62D 57/028 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members having wheels and mechanical legs
38.
ECHO CANCELLATION METHOD AND APPARATUS, COMPUTER READABLE STORAGE MEDIUM, AND TERMINAL DEVICE
An echo cancellation method and apparatus, a computer readable storage medium, and a terminal device. The method comprises: acquiring a reference alignment signal and a microphone alignment signal (S201); using a preset adaptive filter to filter the reference alignment signal and the microphone alignment signal to obtain a filtered output signal (S202); on the basis of the reference alignment signal and the microphone alignment signal, performing speech compensation processing on the filtered output signal to obtain a compensated speech signal (S203); and performing nonlinear processing on the compensated speech signal to obtain a speech signal having undergone echo cancellation (S204). According to the method, speech compensation processing can be performed on the basis of the reference alignment signal and the microphone alignment signal to obtain the compensated speech signal, thereby weakening the cancellation of a speech signal during echo cancellation, reducing the loss of the speech signal, and facilitating the normal conversation with a user.
A human-computer interaction method, the method comprising: acquiring a first human-computer interaction voice and a first human-computer interaction image (S301); determining a reply text corresponding to the first human-computer interaction voice (S302); performing emotional analysis on the first human-computer interaction image, so as to obtain emotional information corresponding to the first human-computer interaction image (S303); performing voice synthesis on the reply text according to the emotional information, so as to obtain a second human-computer interaction voice (S304); and performing human-computer interaction according to the second human-computer interaction voice (S305).
Provided in the present application are a thoracic cavity main body structure and a robot. The thoracic cavity main body structure comprises a thoracic cavity housing assembly, driving assemblies located in the thoracic cavity housing assembly and a mounting assembly for spanning and mounting the driving assemblies between two opposite sides of the thoracic cavity housing assembly. Each driving assembly comprises a first rotation driving part and a second rotation driving part which are connected in series. The mounting assembly comprises connecting brackets, and two first mounting parts and two second mounting parts mounted on the connecting brackets, the connecting brackets spanning and being mounted between the two opposite sides of the thoracic cavity housing assembly by means of the first mounting parts. The robot comprises a leg-shaped structure and the described thoracic cavity main body structure, and the leg-shaped structure is movably connected to the thoracic cavity main body structure. In the present application, the driving assemblies are suspended in the thoracic cavity housing assembly to form the core weight of the thoracic cavity main body structure, and other parts needing to be arranged in the thoracic cavity housing assembly can be arranged around the core, thus allowing for a reasonable structural layout, and effectively achieving miniaturization by means of layout design.
The present application relates to the field of robotic vacuum cleaners, and provides a robotic vacuum cleaner and a pile searching method and device thereof, and a storage medium. The method comprises: obtaining a map of an environment where a robotic vacuum cleaner is located and the position of the robotic vacuum cleaner in the map; on the basis of the map, determining a map contour where the position of the robotic vacuum cleaner is located; on the basis of the map contour, determining traversal points for searching for a charging pile and traversal areas corresponding to the traversal points; and on the basis of a predetermined traversal point searching sequence, searching the traversal areas corresponding to the traversal points for the charging pile, until the charging pile is found in the traversal areas, or until the searching of the contour areas of all the traversal points is completed. By means of the traversal points determined on the contour, the searching for the charging pile is rapidly carried out on the basis of the predetermined traversal point searching sequence, so that the uncertainty caused by random searching is reduced, and the efficiency of searching for the charging pile is improved.
A47L 11/40 - Parts or details of machines not provided for in groups, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
A method for trajectory planning of a turning motion of a spider-type quadruped robot includes: acquiring a desired turning angle of the spider-type quadruped robot in a floating base coordinate system during a current gait cycle; calculating a desired displacement for each support leg of the spider-type quadruped robot in the floating base coordinate system during the current gait cycle based on the desired turning angle; and performing discrete trajectory planning in the floating base coordinate system based on the desired displacements of the support legs, to obtain a desired turning motion trajectory for each of the support legs of the spider-type quadruped robot in the floating base coordinate system during the current gait cycle.
G05D 1/43 - Control of position or course in two dimensions
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted supporting base and leg; Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
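A small sketch of the per-leg displacement computation described in the abstract above, under the assumption that each stance foot stays fixed in the world while the base yaws, so its floating-base-frame position rotates by the inverse of the desired turn.

```python
import numpy as np

def support_leg_displacements(foot_xy, turn_angle):
    """Desired planar displacement of each stance foot in the floating base
    frame for a desired base yaw of `turn_angle` (radians): each foot's
    base-frame position is rotated by -turn_angle about the base origin."""
    c, s = np.cos(-turn_angle), np.sin(-turn_angle)
    R = np.array([[c, -s], [s, c]])
    return foot_xy @ R.T - foot_xy

feet = np.array([[0.3, 0.2], [0.3, -0.2], [-0.3, 0.2], [-0.3, -0.2]])
print(support_leg_displacements(feet, np.deg2rad(15.0)))
```

Discretising each displacement over the stance phase of the gait cycle then yields the per-leg turning trajectory.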
43.
FEATURE MAP PROCESSING METHOD, IMAGE RECOGNITION METHOD AND RELATED APPARATUSES
The present application relates to the technical field of image processing. Provided are a feature map processing method, an image recognition method and related apparatuses. The feature map processing method of the present application comprises: after a global feature map of an image to be subjected to recognition is acquired, calling a target channel attention model to extract low-order image information and high-order image information of the global feature map to perform deep learning, so as to obtain a low-order channel attention vector corresponding to the low-order image information, and a high-order channel attention vector corresponding to the high-order image information; and then, on the basis of the low-order channel attention vector and the high-order channel attention vector, performing attention vector fusion weighting processing on the global feature map of said image, so as to obtain an expected feature map of said image. Therefore, during a visual recognition process, a channel attention mechanism is introduced to fuse the low-order image information and high-order image information of said image to perform image feature extraction, thereby improving the quality of extracted image features of said image.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
Provided are a sweeper and a motion control method and apparatus therefor, and a storage medium. The method comprises: when a sweeper is located at a channel position, controlling a second wheel of the sweeper to rotate around a first wheel of the sweeper, such that the second wheel moves in the direction of passing through a channel (S201); when the distance between the sweeper and an obstacle is less than a preset first distance threshold value, controlling the sweeper to move backwards (S202); and when the distance between the sweeper and the obstacle is greater than or equal to the first distance threshold value, controlling the sweeper to rotate by a first preset angle around the center of the sweeper, such that the first wheel moves in the direction of passing through the channel, and repeating the execution of the second wheel rotating around the first wheel, moving backwards and rotating around the center until the sweeper leaves the channel (S203). By means of the motion control method for a sweeper, the sweeper can move forwards along an edge of a channel, such that the sweeper can effectively enter a relatively narrow channel to perform sweeping, thereby facilitating an increase in the sweeping coverage rate of the sweeper.
Provided are a roller mechanism and a cat litter box robot. The roller mechanism comprises: a roller body (100); a bin door assembly (200) movably connected to the roller body (100) and provided with a pressing portion (221); and a feces collecting bin (300) arranged in the roller body (100) and provided with an annular opening portion (310), wherein the pressing portion (221) is snap-fitted with at least part of the opening portion (310) when the bin door assembly (200) is closed. In the roller mechanism, when the bin door assembly (200) is closed, the pressing portion (221) is snap-fitted with at least part of the opening portion (310). At this time, the pressing portion (221) and the opening portion (310) are mutually limited and supported, the pressing portion (221) can limit the deformation of the opening portion (310), and the opening portion (310) can also limit the deformation of the pressing portion (221), thereby reducing litter leakage caused by the deformation of the bin door assembly (200) and the feces collecting bin (300). In addition, when any one of the pressing portion (221) and the opening portion (310) is deformed, the other one of the pressing portion (221) and the opening portion (310) is deformed accordingly, so that the pressing portion (221) and the opening portion (310) can still maintain good sealing fit, and the litter leakage can be reduced.
The present application relates to the technical field of target recognition, and in particular to a video feature extraction method and apparatus, a computer-readable storage medium, and a terminal device. The method comprises: acquiring a video sequence to be processed; respectively performing image feature extraction on video frames in the video sequence to obtain first image features of the video frames; on the basis of the first image features of the video frames, calculating a first video feature of the video sequence; on the basis of the first video feature, respectively performing feature optimization on the first image features of the video frames to obtain second image features of the video frames; and on the basis of the second image features of the video frames, calculating a second video feature of the video sequence. The present application can effectively weaken the impact of video frames of poor quality on the quality of finally extracted video features, thereby improving the robustness of video feature extraction.
A psychological state detection method and apparatus, a computer readable storage medium, and a terminal device (6). The psychological state detection method comprises: acquiring multi-modal information of a user during a dialogue (S101); and processing the multi-modal information by using a preset psychological state detection model to obtain a psychological state detection result of the user (S102), wherein the psychological state detection model is a deep learning model obtained by training a preset training sample set, the training sample set comprises a preset number of training samples, and each training sample comprises a multi-modal information sample and a corresponding expected detection result. By means of the psychological state detection method, the multi-modal information of the user can be acquired when having a dialogue with the user, and the multi-modal information is processed by using the psychological state detection model to obtain the psychological state detection result of the user, so that the accuracy of the psychological state detection result is improved.
Disclosed in the present application are a cognitive test method, a cognitive test apparatus, an electronic device and a computer-readable storage medium. The method comprises: on the basis of an answer of a user to a cognitive test question, acquiring data to be evaluated, wherein the data to be evaluated comprises: vital sign data of the user, video data containing face information of the user and audio data containing an answer audio of the user; extracting face key point data contained in the video data; performing answer discrimination according to the face key point data and the audio data to obtain a voice discrimination result of a cognitive test; and performing state analysis according to the face key point data, an intermediate output of the answer discrimination and the vital sign data to obtain a state analysis result of the cognitive test. The solution of the present application can achieve whole-process evaluation of specialized voice interaction for cognitive tests.
G10L 15/24 - Speech recognition using non-acoustical features
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
G10L 25/72 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for transmitting results of analysis
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
49.
SEMANTIC SEGMENTATION METHOD AND APPARATUS, AND COMPUTER-READABLE STORAGE MEDIUM AND ROBOT
The present application belongs to the technical field of robots, and in particular relates to a semantic segmentation method and apparatus, and a computer-readable storage medium and a robot. The method comprises: acquiring a first image and a second image, wherein the first image is an image to be subjected to semantic segmentation, and the second image is an image frame previous to the first image; performing optical-flow calculation on the first image and the second image, so as to obtain first optical-flow data; fusing the first optical-flow data with the first image, so as to obtain an optical-flow fusion image; and using a preset semantic segmentation model to perform semantic segmentation on the optical-flow fusion image, so as to obtain a semantic segmentation result corresponding to the first image. By means of the method, semantic segmentation can be performed in combination with optical-flow data of an image, such that deep features of the image can be mined, and semantic segmentation can be better performed on unknown objects, thereby effectively improving the accuracy of a semantic segmentation method.
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
50.
METHOD AND DEVICE FOR SYNTHESIZING TALKING HEAD VIDEO AND COMPUTER-READABLE STORAGE MEDIUM
A method for synthesizing a talking head video includes: obtaining speech data to be synthesized and observation data, wherein the observation data is data, other than the speech data, that is obtained through observation; performing feature extraction on the speech data to obtain speech features corresponding to the speech data, and performing feature extraction on the observation data to obtain non-speech features corresponding to the observation data; performing temporal modeling on the speech features and first non-speech features to obtain low-dimensional representations, wherein the first non-speech features are non-speech features that are sensitive to temporal changes; and performing video synthesis based on the low-dimensional representations and second non-speech features, wherein the second non-speech features are non-speech features insensitive to temporal changes.
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
G10L 15/02 - Feature extraction for speech recognition; Selection of recognition unit
51.
TRAVELABLE REGION SEGMENTATION METHOD AND APPARATUS, READABLE STORAGE MEDIUM, AND ROBOT
The present application relates to the technical field of robots, and in particular to a travelable region segmentation method and apparatus, a computer-readable storage medium, and a robot. The method comprises: performing training on the basis of a prior image and prior point cloud data that are synchronously collected so as to obtain a travelable region segmentation model; and performing region segmentation on a target image by using the travelable region segmentation model so as to obtain a travelable region segmented image corresponding to the target image. According to the method, when the robot is in a new road environment, the prior image can be used as input and the prior point cloud data collected synchronously is used as expected output for model training, and a large amount of sample collection and manual annotation work are not needed, so that the training costs of the travelable region segmentation model can be effectively reduced, and the training efficiency of the travelable region segmentation model is improved.
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
A pet feeder includes: a housing, a heat conduction member, a temperature adjustment structure and a heat dissipation member that are arranged in the housing, a tray, a cover, and an actuating mechanism. The temperature adjustment structure includes a thermoelectric cooling member and a control module. The thermoelectric cooling member includes a first side connected to the heat conduction member and a second side in contact with the heat dissipation member. The control module is to control the first side of the thermoelectric cooling member to heat or cool. The tray is arranged in the housing and connected to the heat conduction member, and defines at least two compartments for placing pet food. The cover is arranged on the tray and defines a window in communication with the at least two compartments. The actuating mechanism is arranged in the housing and is to rotate the tray or the cover.
F25B 21/04 - Machines, plants or systems, using electric or magnetic effects using Peltier effect; Machines, plants or systems, using electric or magnetic effects using Nernst-Ettinghausen effect; reversible
53.
USER EXERCISE DETECTION METHOD, ROBOT AND COMPUTER-READABLE STORAGE MEDIUM
A user exercise detection method applicable in a robot includes: obtaining first measurement data from at least one inertial measurement unit (IMU) sensor that is arranged at a designated body part of the user, and detecting a posture of the user relative to the robot based on the first measurement data; obtaining second measurement data from the at least one IMU sensor, and determining whether an exercise of the user corresponding to the posture is detected according to a preset threshold parameter and the second measurement data; in response to detection of the exercise, obtaining exercise data when the user performs the exercise multiple times through the at least one IMU sensor; and adjusting the threshold parameter according to the exercise data.
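As a rough sketch of the threshold-based detection and adaptation steps above, the code below counts repetitions in a single-axis acceleration stream and re-estimates the threshold from the observed peaks. The synthetic signal, the `min_gap` debouncing, and the 0.6 peak ratio are illustrative assumptions, not the patent's parameters.

```python
import numpy as np

def count_reps(signal, threshold, min_gap=10):
    """Count rising threshold crossings as exercise repetitions,
    ignoring crossings closer than min_gap samples to debounce noise."""
    reps, last = [], -min_gap
    for i in range(1, len(signal)):
        if signal[i - 1] < threshold <= signal[i] and i - last >= min_gap:
            reps.append(i)
            last = i
    return reps

def adapt_threshold(signal, reps, ratio=0.6):
    """Re-estimate the threshold as a fixed fraction of the mean peak
    value observed around the detected repetitions."""
    if not reps:
        return None
    peaks = [signal[max(0, r - 5):r + 5].max() for r in reps]
    return ratio * float(np.mean(peaks))

t = np.linspace(0, 10, 500)
accel = 1.2 * np.abs(np.sin(2 * np.pi * 0.5 * t)) + 0.05 * np.random.randn(500)
reps = count_reps(accel, threshold=0.8)
print(len(reps), adapt_threshold(accel, reps))
```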
A robot control method includes: building a two-wheeled inverted pendulum model based on a wheel-legged robot; constructing initial state-space equations based on the two-wheeled inverted pendulum model; linearizing the initial state-space equations to obtain the state-space equations for a linear time-invariant system; obtaining a quadratic performance objective function according to the state-space equations for the linear time-invariant system; and solving the quadratic performance objective function by a linear quadratic regulator to obtain wheel torques of the wheel-legged robot, and controlling the wheel-legged robot according to the wheel torques.
B60L 15/20 - Methods, circuits or devices for controlling the propulsion of electrically-propelled vehicles, e.g. their traction-motor speed, to achieve a desired performance; Adaptation of control equipment on electrically-propelled vehicles for remote actuation from a stationary place, from alternative parts of the vehicle or from alternative vehicles of the same vehicle train for control of the vehicle or its driving motor to achieve a desired performance, e.g. speed, torque, programmed variation of speed
B62D 57/028 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members having wheels and mechanical legs
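A minimal sketch of the linearize-then-LQR step described above, using scipy's continuous-time Riccati solver. The state-space matrices below are a hand-written, simplified inverted-pendulum linearization with assumed parameters, not the model derived in the method.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Linearized two-wheeled inverted pendulum: states
# x = [position, velocity, pitch, pitch rate], input u = wheel torque.
m_b, m_w, l, g, r = 10.0, 1.0, 0.3, 9.81, 0.1   # assumed parameters

A = np.array([[0, 1, 0, 0],
              [0, 0, -m_b * g / m_w, 0],        # simplified coupling terms
              [0, 0, 0, 1],
              [0, 0, (m_b + m_w) * g / (m_b * l), 0]])
B = np.array([[0], [1 / (m_w * r)], [0], [-1 / (m_b * l * r)]])

Q = np.diag([1.0, 0.1, 10.0, 0.1])   # quadratic state weights (assumed)
R = np.array([[0.01]])               # input weight (assumed)

P = solve_continuous_are(A, B, Q, R)   # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)        # optimal feedback gain, u = -K x

x = np.array([0.1, 0.0, 0.05, 0.0])    # small perturbation from upright
print("wheel torque:", (-K @ x).item())
```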
A foot mechanism (100) and a robot. The foot mechanism (100) comprises a main framework (110), a rotating assembly (120), a first sole portion (130), a second sole portion (140), and an elastic assembly (150). The main framework (110) is used for being connected to a leg of a robot; the rotating assembly (120) comprises a first rotating member (121) and a second rotating member (122); the first sole portion (130) is arranged at the bottom of the main framework (110), and is rotatably connected to the main framework (110) by means of the first rotating member (121); the second sole portion (140) is arranged at the bottom of the main framework (110), and is rotatably connected to the main framework (110) by means of the second rotating member (122); the first sole portion (130) and the second sole portion (140) are located on two opposite sides of the main framework (110); the elastic assembly (150) comprises a first elastic member (151) and a second elastic member (152); the first elastic member (151) is sleeved on the first rotating member (121), and the first elastic member (151) has one end abutting against the first sole portion (130) and the other end abutting against the main framework (110); and the second elastic member (152) abuts between the second sole portion (140) and the main framework (110).
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted supporting base and leg; Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
F16F 15/067 - Suppression of vibrations of non-rotating, e.g. reciprocating, systems; Suppression of vibrations of rotating systems by use of members not moving with the rotating system using elastic means with metal springs using only wound springs
21 - Household or kitchen utensils, containers and materials; glassware; porcelain; earthenware
Goods & Services
Robotic vacuum cleaners, vacuum cleaners for household purposes, robotic lawn mowers, swimming pool vacuum cleaners, electric window cleaning machines, snow blowers, power-operated blowers, 3D printers, hair shearing machines for animals, electrical squeezers for fruits and vegetables; Cat litter pans, litter boxes for pets, pet grooming device comprising a brush and an attachment that connects to vacuums, pet grooming device comprising a pet hair clipper and an attachment that connects to vacuums, grooming tools for pets, namely, combs and brushes, automatic pet feeders, pet drinking bowls, cages for pets, water flossers, electrical toothbrushes
57.
POSE ESTIMATION METHOD, POSE ESTIMATION APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM
Disclosed in the present application are a pose estimation method, a pose estimation apparatus, an electronic device and a computer-readable storage medium. The method comprises: performing target detection on an image to be processed, and determining position information and category information of a target object included in said image; extracting a first target two-dimensional feature of the target object according to the position information; determining, according to the category information, a target point cloud model corresponding to the target object from object point cloud models obtained by means of multiple instances of offline training; and performing pose estimation on the target object according to the first target two-dimensional feature and the target point cloud model, so as to obtain pose information of the target object. By means of the solution of the present application, the pose of an object can be quickly and accurately estimated.
The present application is suitable for the technical field of terminals, and particularly relates to a map data management method, an apparatus, a terminal device, and a readable storage medium. In the method, when map construction is performed, environment data collected when a terminal device is located at each target location is obtained; first key data corresponding to the environment data is determined; a first location at which the terminal device is currently located is obtained, and a second location according to the first location is determined, wherein the first location is one of target locations, the second location is a location other than the first location among the target locations, and a preset condition is met between the second location and the first location; and second key data is determined from the first key data according to the second location, and the second key data is stored on a preset database. That is, when map construction is performed in the present application, the terminal device may determine the second location according to the current first location, and obtain the second key data according to the second location and store the second key data on the preset database, so as to reduce consumption of memory resources of the terminal device.
A method for binocular depth estimation is provided, including: obtaining binocular images and performing feature extraction on the binocular images to obtain left and right feature mappings; performing disparity construction by using the left and right feature mappings to obtain a disparity cost volume with a reduced dimension; performing attention feature learning on the disparity cost volume to obtain an attention feature vector and performing feature weighting on the disparity cost volume by using the attention feature vector to obtain a weighted cost volume; performing disparity regression on the weighted cost volume based on a two-dimensional convolution to obtain a prediction disparity map; and performing disparity depth conversion on the prediction disparity map to obtain a depth map of the binocular images.
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
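The final disparity-to-depth conversion follows the standard rectified-stereo relation depth = f·B/d. A small sketch with assumed focal length and baseline:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Convert a predicted disparity map (pixels) to metric depth using
    depth = f * B / d for a rectified stereo pair; eps guards division
    by zero where disparity vanishes."""
    return focal_px * baseline_m / np.maximum(disparity, eps)

disp = np.array([[32.0, 16.0], [8.0, 64.0]])   # toy prediction disparity map
print(disparity_to_depth(disp, focal_px=700.0, baseline_m=0.12))
```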
60.
INSPECTION METHOD, INSPECTION APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
An inspection method, an inspection apparatus, an electronic device, and a computer-readable storage medium. The method comprises: controlling a robot to move to an inspection location that corresponds to the current inspection target (101); after the robot reaches the inspection location, obtaining a first inspection image captured by a camera of the robot for the current inspection target (102); adjusting the pose of the camera on the basis of the first inspection image, such that the current inspection target is located at the center of a photographic screen of the camera (103); obtaining a plurality of second inspection images captured by the camera for the current inspection target (104), wherein photographic focal lengths for the plurality of second inspection images are different but photographic magnifications for the plurality of second inspection images are the same, and the photographic magnifications for the plurality of second inspection images are greater than the photographic magnification for the first inspection image; and determining a second inspection image, which has the best image quality, among the plurality of second inspection images as a target inspection image (105). By means of the technical solution, the accuracy of an inspection result can be ensured while the success rate of inspection is increased.
A method for extracting video frame features includes: obtaining a number of initial features of each video frame in a video sequence; calculating global channel attention information of the video sequence based on the initial features of each video frame in the video sequence; calculating local channel attention information of a target video frame according to the initial features of the target video frame, wherein the target video frame is one of the video frames in the video sequence; and performing channel attention mechanism processing on the initial features of the target video frame according to the global channel attention information and the local channel attention information to obtain optimized features of the target video frame.
G06V 20/40 - Scenes; Scene-specific elements in video content
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
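A toy sketch of fusing global (whole-sequence) and local (target-frame) channel attention as described above. A real model would compute the gates with learned layers; the sigmoid gates and the blending weight `alpha` here are placeholders showing only the data flow.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(frame_feats, target_idx, alpha=0.5):
    """frame_feats: (T, C) pooled per-frame features. The global gate
    pools over all frames, the local gate uses the target frame only,
    and the blended gate reweights the target frame channel-wise."""
    global_gate = sigmoid(frame_feats.mean(axis=0))   # (C,) sequence-level
    local_gate = sigmoid(frame_feats[target_idx])     # (C,) frame-level
    gate = alpha * global_gate + (1 - alpha) * local_gate
    return frame_feats[target_idx] * gate             # optimized features

feats = np.random.randn(8, 16)   # 8 frames, 16 channels (assumed sizes)
print(channel_attention(feats, target_idx=3).shape)
```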
62.
METHOD FOR DETERMINING POSE OF ROBOT, ROBOT AND COMPUTER-READABLE STORAGE MEDIUM
A method for determining a pose of a robot having a lidar includes: obtaining a first pose of the robot in a map coordinate system; determining first positions of laser points corresponding to the lidar in the map coordinate system according to the first pose when the lidar performs laser scanning; determining matching scores between the first positions and grids where the first positions are located according to the first positions and mean values of the grids where the first positions are located, wherein the grids are grids in a probability map corresponding to the map coordinate system; determining a first confidence level for the first pose based on the matching scores; and determining a target pose according to the first confidence level.
A method for extracting video features may include: obtaining a target video sequence that comprises a number of video frames; performing video frame feature extraction on the target video sequence to obtain video frame features of each of the video frames; performing feature weight calculation on each of the video frame features to obtain the feature weight of each of the video frame features; wherein the feature weight of each of the video frame features is determined by the video frame features of all of the video frames in the target video sequence; and performing feature weighting on each of the video frame features according to the feature weight of each of the video frame features to obtain video features of the target video sequence.
G06V 20/40 - Scenes; Scene-specific elements in video content
G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
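A minimal sketch of sequence-dependent frame weighting: each frame's weight is derived from all frames in the sequence (here via similarity to the sequence mean followed by a softmax, an assumed scoring choice), and the weighted sum yields the video-level feature.

```python
import numpy as np

def weight_video_features(frame_feats):
    """frame_feats: (T, C). Score each frame against the sequence mean,
    softmax the scores into weights that depend on all frames, and
    return the weighted sum as the sequence-level video feature."""
    mean_feat = frame_feats.mean(axis=0)
    scores = frame_feats @ mean_feat            # (T,) similarity scores
    w = np.exp(scores - scores.max())
    w /= w.sum()                                # softmax feature weights
    return (w[:, None] * frame_feats).sum(axis=0)

feats = np.random.randn(10, 32)   # 10 frames, 32-dim features (assumed)
print(weight_video_features(feats).shape)      # (32,)
```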
64.
HUMAN-MACHINE CONVERSATION METHOD AND APPARATUS, AND ELECTRONIC DEVICE
A human-machine conversation method and apparatus, and an electronic device, applicable to the technical field of human-machine conversation. The method comprises: performing object extraction on a current sentence input by a user to obtain a target object, the target object comprising a target entity and/or a target keyword; acquiring a historical conversation record in a current conversation, the historical conversation record comprising sentences generated by an electronic device; and generating a reply sentence for the current sentence on the basis of the target object and the historical conversation record.
A collided position determination method, a computer-readable storage medium, and a robot are provided. The method includes: obtaining, from a collision sensor of the robot, a triggered signal corresponding to a collision of the robot; determining at least two candidate positions of the collision based on the triggered signal corresponding to the collision; obtaining a motion trajectory of the robot after the collision; and obtaining a collided position by screening each of the candidate positions of the collision according to the motion trajectory. In this manner, when the robot collides during its movement, the collision signal generated by the collision sensor of the robot can accurately mark the collided position, thereby reducing the probability of the robot colliding again at the same position.
A47L 11/40 - Parts or details of machines not provided for in groups, or not restricted to one of these groups, e.g. handles, arrangements of switches, skirts, buffers or levers
A47L 9/28 - Installation of the electric equipment, e.g. adaptation or attachment to the suction cleaner; Controlling suction cleaners by electric means
G05D 1/02 - Control of position or course in two dimensions
69.
METHOD FOR AVOIDING SINGULARITIES OF ROBOTIC ARM, CONTROL DEVICE AND COMPUTER-READABLE STORAGE MEDIUM
A method for avoiding singularities of a robotic arm includes: calculating a virtual environment external force required by the robotic arm to avoid singularities based on current joint positions of joints on the robotic arm when it is determined that the robotic arm needs to avoid singularities; obtaining a current end force of the robotic arm and a desired end trajectory of the robotic arm; performing admittance control calculation based on the virtual environment external force, the current end force and the desired end trajectory to obtain a corrected end trajectory of the robotic arm; and controlling the robotic arm to move based on the corrected end trajectory.
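The admittance-control correction above can be illustrated with a one-dimensional mass-spring-damper update. The constant `f_virtual` stands in for the virtual environment external force computed from the joint positions, which this sketch does not model; all gains are assumed.

```python
import numpy as np

def admittance_step(dx, dv, f_total, M=2.0, D=30.0, K=100.0, dt=0.002):
    """One Euler step of the admittance dynamics M*a + D*v + K*x = f,
    returning the updated trajectory offset and its velocity."""
    a = (f_total - D * dv - K * dx) / M
    dv = dv + a * dt
    dx = dx + dv * dt
    return dx, dv

dx, dv = 0.0, 0.0
for _ in range(500):
    f_virtual = 5.0    # repulsive force away from the singularity (assumed)
    f_measured = 0.0   # measured end force
    dx, dv = admittance_step(dx, dv, f_virtual + f_measured)
# The offset converges toward f/K = 0.05 and is added to the desired
# end trajectory to obtain the corrected trajectory.
print("trajectory correction:", round(dx, 4))
```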
A method for pedestrian body part feature extraction is provided, including: performing global feature extraction on a target pedestrian image to obtain a global feature map; learning each of the body parts in the global feature map using a self-regulated channel attention model based on self-produced supervision signals to output first channel attention vectors each describing a respective one of the body parts; weighting the first channel attention vectors with the global feature map to obtain a weighted feature map describing the body parts; and extracting body part features of the target pedestrian image from the weighted feature map.
G06V 10/42 - Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
71.
METHOD FOR ROBOT TELEOPERATION CONTROL, ROBOT, AND ELECTRONIC DEVICE
A method for robot teleoperation control is provided. The method includes: acquiring target action data and displacement data of a target object, wherein the target action data includes head action data and arm action data; controlling a target robot to act according to the target action data to enable the target robot to complete an action corresponding to the target action data; and performing centroid trajectory planning on the target robot based on a model predictive control (MPC) algorithm according to the displacement data to obtain a target centroid trajectory, and establishing a spring-damping system to track the target centroid trajectory so as to enable the target robot to move to a position corresponding to the displacement data.
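A one-dimensional sketch of tracking a planned centroid trajectory with a spring-damping system; the ramp below stands in for the MPC-planned trajectory, and the gains are illustrative rather than taken from the method.

```python
import numpy as np

def track_com(x, v, x_ref, k=400.0, d=40.0, m=1.0, dt=0.002):
    """One step of a spring-damper system pulling the state x toward
    the planned centroid trajectory point x_ref."""
    a = (k * (x_ref - x) - d * v) / m
    v = v + a * dt
    return x + v * dt, v

x, v = 0.0, 0.0
for x_ref in np.linspace(0.0, 0.3, 1500):   # assumed 1-D MPC trajectory, 3 s
    x, v = track_com(x, v, x_ref)
print(round(x, 3))   # lags only slightly behind the final reference 0.3
```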
A bidirectional energy storage device for a joint includes: a sleeve comprising an open end and an opposite, closed end; an elastic member attached to the sleeve; a sliding member slidably arranged at the open end of the sleeve, wherein opposite ends of the elastic member are respectively in contact with the closed end of the sleeve and the sliding member; a first telescopic link comprising a first end pivotally connected to the sliding member and an opposite, second end; and a second telescopic link comprising a first end pivotally connected to the sliding member and a second end. The second ends of the first telescopic link and the second telescopic link are pivotally connected to a rotating member at an end of the joint, and the first telescopic link and the second telescopic link are to extend and retract to drive the sliding member to slide along the sleeve.
A bidirectional energy storage device for a joint includes: a sleeve comprising two, opposite open ends; a first sliding member and a second sliding member that are slidably disposed at the open ends of the sleeve, respectively; an elastic member comprising two, opposite ends that are respectively in contact with the first sliding member and the second sliding member; a first telescopic link comprising a first end and an opposite, second end, the first end of the first telescopic link pivotally connected to the first sliding member, the first telescopic link configured to rotate to drive the first sliding member to slide; a second telescopic link comprising a first end and an opposite, second end, the first end of the second telescopic link pivotally connected to the second sliding member, the second telescopic link configured to rotate to drive the second sliding member to slide.
A target identification method includes: obtaining an image containing a target to be identified; performing feature extraction on the image to obtain image features in the image; and inputting the image features into a target identification network model to obtain an identification result that determines a class to which the target to be identified belongs. The target identification network model includes a loss function that is based on intra-class constraints and inter-class constraints. The intra-class constraints are to constrain an intra-class distance between sample image features of a sample target and a class center of a class to which the sample target belongs, and the inter-class constraints are to constrain inter-class distances between class centers of different classes, and/or inter-class angles between the class centers of different classes.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
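A numpy illustration of a loss combining intra-class constraints (features pulled toward their class center) with inter-class constraints (centers pushed at least a margin apart). The hinge form and equal weighting are assumptions for illustration, not the patented loss function.

```python
import numpy as np

def class_constrained_loss(feats, labels, centers, margin=1.0):
    """Intra-class term: mean squared distance of each sample feature to
    its class center. Inter-class term: hinge penalty whenever two class
    centers are closer than `margin`."""
    intra = np.mean([np.sum((f - centers[y]) ** 2)
                     for f, y in zip(feats, labels)])
    inter = 0.0
    n = len(centers)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(centers[i] - centers[j])
            inter += max(0.0, margin - d) ** 2
    return intra + inter

feats = np.random.randn(6, 8)              # 6 samples, 8-dim features
labels = np.array([0, 0, 1, 1, 2, 2])
centers = np.stack([feats[labels == c].mean(axis=0) for c in range(3)])
print(class_constrained_loss(feats, labels, centers))
```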
75.
TARGET IDENTIFICATION METHOD, DEVICE AND COMPUTER-READABLE STORAGE MEDIUM
A target identification method includes: obtaining an image containing a target to be identified; performing feature extraction on the image to obtain image features in the image; and inputting the image features into a target identification network model to obtain an identification result that determines a class to which the target to be identified belongs. The target identification network model includes a loss function that is to constrain a first distance corresponding to each triplet in a number of triplets and a second distance corresponding to each triplet in the triplets. The first distance represents a distance between anchor image features and positive sample image features in each triplet in the triplets, and the second distance represents a distance between the anchor image features and a class center mean of a number of classes in each triplet in the triplets.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
76.
Robot control method, robot, and computer-readable storage medium
A robot control method, a robot, and a computer-readable storage medium are provided. The method includes: obtaining a trajectory planning parameter of joint(s) of the robot, force data of an end of the robot, and force data of the joint(s); obtaining an end admittance compensation amount; determining a first joint parameter and a first slack variable corresponding to the end admittance compensation amount in a joint space of each of the joint(s) based on the end admittance compensation amount and the trajectory planning parameter; obtaining a joint admittance compensation amount; determining a second joint parameter based on the first joint parameter, the first slack variable, the joint admittance compensation amount, and the trajectory planning parameter; determining a target joint commanding position based on the second joint parameter; and controlling the robot to move according to the target joint commanding position.
09 - Scientific and electric apparatus and instruments
Goods & Services
security surveillance robots; navigational instruments; Humanoid robots having communication and learning functions for assisting and entertaining people; teaching robots; humanoid robots with artificial intelligence for use in scientific research; Computer chatbot software for simulating conversations; laboratory robots; User-programmable humanoid robots, not configured; telepresence robots; Humanoid robots with artificial intelligence for preparing beverages.
09 - Scientific and electric apparatus and instruments
Goods & Services
Humanoid robots having communication and learning functions for assisting and entertaining people; Humanoid robots with artificial intelligence for preparing beverages; Humanoid robots with artificial intelligence for use in scientific research; User-programmable humanoid robots, not configured
79.
METHOD FOR HUMAN FALL DETECTION AND METHOD FOR OBTAINING FEATURE EXTRACTION MODEL, AND TERMINAL DEVICE
A method for obtaining a feature extraction model, a method for human fall detection and a terminal device are provided. The method for human fall detection includes: inputting a human body image into a feature extraction model for feature extraction to obtain a target image feature; in response to a distance between the target image feature and a pre-stored mean value of standing category image features being greater than or equal to a preset distance threshold, determining that the human body image is a human falling image; and in response to the distance being less than the preset distance threshold, determining that the human body image is a human standing image. The feature extraction model is obtained based on constraint training to aggregate standing category image features and separate falling category image features from the standing category image features.
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06V 10/77 - Processing image or video features in feature spaces; Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
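The detection rule itself reduces to a distance test against the stored mean of standing-category features; a minimal sketch with an assumed feature dimension and threshold:

```python
import numpy as np

def detect_fall(feature, standing_mean, dist_threshold):
    """Classify an extracted image feature as a falling image when it is
    at least dist_threshold away from the pre-stored mean of
    standing-category features, otherwise as a standing image."""
    d = np.linalg.norm(feature - standing_mean)
    return "fall" if d >= dist_threshold else "standing"

standing_mean = np.zeros(128)          # assumed pre-stored mean feature
feat = np.random.randn(128) * 0.1      # feature from the extraction model
print(detect_fall(feat, standing_mean, dist_threshold=3.0))
```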
A method for controlling a robotic arm that includes an end effector and a sensor that are mounted at an end of the robotic arm includes: obtaining, by the sensor, n gravity matrix data, wherein the n gravity matrix data are gravity matrix data of the end effector in an end coordinate system when the robotic arm is in n different poses, n≥3; determining n rotation transformation matrices from a base coordinate system of the robotic arm to the end coordinate system when the robotic arm is in the n different poses; calculating coordinates of a center of mass and mass of the end effector based on the n gravity matrix data and the n rotation transformation matrices; and controlling the robotic arm based on the coordinates of the center of mass and the mass.
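A least-squares sketch of the calculation step: with the gravity wrench (force and torque) measured in the end frame at several poses, the mass follows from the force norms and the center of mass from the stacked equations tau = c × f. The synthetic data below assume an ideal, noise-free sensor; this illustrates the algebra, not the patent's exact formulation.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def estimate_com_and_mass(forces, torques, g=9.81):
    """forces/torques: (n, 3) gravity wrenches in the end frame at n >= 3
    poses. Mass from the force norms; COM c from the stacked linear
    system tau_i = c x f_i = -skew(f_i) @ c."""
    forces, torques = np.asarray(forces), np.asarray(torques)
    mass = float(np.mean(np.linalg.norm(forces, axis=1)) / g)
    A = np.vstack([-skew(f) for f in forces])
    b = torques.ravel()
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c, mass

# Synthetic noise-free check: mass 1.5 kg, COM at (0.01, 0.02, 0.05) m.
rng = np.random.default_rng(0)
c_true, m_true = np.array([0.01, 0.02, 0.05]), 1.5
Rs = [np.linalg.qr(rng.normal(size=(3, 3)))[0] for _ in range(4)]
fs = [R.T @ np.array([0.0, 0.0, -m_true * 9.81]) for R in Rs]
ts = [np.cross(c_true, f) for f in fs]
c_est, m_est = estimate_com_and_mass(fs, ts)
print(np.round(c_est, 3), round(m_est, 2))
```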
21 - Household or kitchen utensils, containers and materials; glassware; porcelain; earthenware
Goods & Services
(1) Automatic swimming pool cleaners and parts therefor; cleaning machines for ponds; conveyors being machines; electric lawnmowers; electric vacuum cleaners; electric window cleaning machines; floor cleaning machines; hair clipping machines for animals; kitchen machines, namely, electric standing mixers; robotic floor cleaners; snow ploughs; swimming pool sweepers; swimming pool vacuum cleaners
(2) Automatic pet feeders; battery-powered dental flossers; brushes for pets; cages for pets; cat litter boxes; cosmetic brushes; electric toothbrushes; litter trays for pets; pet drinking bowls; ultrasonic pest repellers
21 - Household or kitchen utensils, containers and materials; glassware; porcelain; earthenware
Goods & Services
Robotic vacuum cleaners, vacuum cleaners for household purposes, robotic lawn mowers, swimming pool vacuum cleaners, electric window cleaning machines, snow blowers, power-operated blowers, 3D printers, hair shearing machines for animals, electrical squeezers for fruits and vegetables; Cat litter pans, litter boxes for pets, pet grooming device comprising a brush and an attachment that connects to vacuums, pet grooming device comprising a pet hair clipper and an attachment that connects to vacuums, grooming tools for pets, namely, combs and brushes, automatic pet feeders, pet drinking bowls, cages for pets, water flossers, electrical toothbrushes
21 - Household or kitchen utensils, containers and materials; glassware; porcelain; earthenware
Goods & Services
Grooming machines for clipping animals' hair; sweeping machines; electric kitchen machines; electric window cleaning machines; electric lawnmowers; machines and apparatus for cleaning, electric; electric vacuum cleaners; snow ploughs; conveyors being machines; swimming pool cleaning machines; robotic swimming pool cleaning machines. Ultrasonic pest repellers; litter trays for pets; pet drinking bowls; cages for pets; brushes for pets; automatic pet feeders; water flossers; toothbrushes, electric; cosmetic utensils; cat litter boxes.
84.
ADMITTANCE CONTROL METHOD, ROBOT, AND COMPUTER-READABLE STORAGE MEDIUM
An admittance control method, a robot, and a storage medium are provided. The method includes: obtaining, based on a first admittance controller transfer function between force and position, a desired position of a robot in a current control cycle; determining a corresponding Jacobian matrix according to a configuration of the robot in the current control cycle, and calculating an ill condition number of the Jacobian matrix; and controlling the robot to move by inputting the obtained desired position in the current control cycle to a corresponding joint, in response to the ill condition number being less than a preset maximum ill condition number. In this manner, the configuration of the robot can be maintained within a reasonable range of the ill condition number, and singularities caused by the admittance controller exceeding the workspace can be avoided while the velocity reachability and force reachability of the robot can be ensured.
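The ill condition number of a Jacobian is the ratio of its largest to smallest singular value, growing without bound near a singularity. A sketch of the gating logic, with an assumed limit of 50:

```python
import numpy as np

def ill_condition_number(J):
    """Ratio of the largest to smallest singular value of the Jacobian."""
    s = np.linalg.svd(J, compute_uv=False)
    return s[0] / max(s[-1], 1e-12)

def gate_desired_position(q_desired, q_previous, J, max_cond=50.0):
    """Forward the admittance controller's desired position only while
    the configuration stays well-conditioned; otherwise hold the
    previous command."""
    return q_desired if ill_condition_number(J) < max_cond else q_previous

J = np.array([[1.0, 0.5], [0.0, 0.01]])   # nearly singular 2-DoF Jacobian
print(ill_condition_number(J))
```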
A robot autonomous operation method, a robot, and a computer-readable storage medium are provided. The method includes: moving the robot, under the control of a user, along a guide path in an operation scene; generating a map including the guide path by positioning and mapping while the robot is moved along the guide path in the operation scene; generating a plurality of operation points on the guide path in the map; generating an operation path, wherein the operation path passes through all of the unpassed operation points and has the shortest total distance; and moving the robot, according to the operation path, to each of the unpassed operation points so as to perform an operation. In this manner, the robot is controlled to explore the guide path in the operation scene through manual guiding, which can improve the exploration efficiency and reduce the risk of exploring unknown operation scenes.
A mapping method for a robot includes: detecting a plurality of linear trajectories of the robot in a process of building a map; inserting a positioning key frame corresponding to each of the linear trajectories, wherein the positioning key frame comprises, when the robot is located on a corresponding one of the linear trajectories, a first pose in a positioning coordinate system, and a second pose in a map coordinate system; and for each two adjacent ones of the linear trajectories, according to one of the first poses determined according to a displacement between the positioning key frames of the two adjacent ones of the linear trajectories, performing optimization of loop closure constraints on the second poses of the positioning key frames, and generating a map based on the optimized positioning key frames.
A robot control method, a legged robot using the same, and a computer-readable storage medium are provided. The method includes: obtaining a motion parameter of a driving mechanism of a target part of the robot; and obtaining an end pose of the target part by processing the motion parameter of the driving mechanism according to a preset forward kinematics solving model, where the forward kinematics solving model is a neural network model trained by a preset training sample set constructed according to a preset inverse kinematics function relationship. In this manner, a complex forward kinematics solving process can be transformed into a relatively simple inverse kinematics solving process and neural network model processing process, which reduces the computational complexity and shortens the computation time, thereby meeting the demand for real-time control of the robot.
B25J 9/10 - Programme-controlled manipulators characterised by positioning means for manipulator elements
B62D 57/032 - Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted supporting base and leg; Vehicles characterised by having other propulsion or other ground-engaging means than wheels or endless track, alone or in addition to wheels or endless track with ground-engaging propulsion means, e.g. walking members with alternately or sequentially lifted feet or skid
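A sketch of the training-sample construction described above, for a planar 2-link arm whose inverse kinematics has a closed form: end poses are sampled, the analytic IK recovers the joint angles, and the resulting (joints → pose) pairs would then train the forward-kinematics network (the network itself is omitted here). Link lengths and sampling ranges are assumed.

```python
import numpy as np

def ik_planar_2link(x, y, l1=0.3, l2=0.25):
    """Closed-form inverse kinematics of a planar 2-link arm
    (elbow-down branch)."""
    d2 = x * x + y * y
    c2 = (d2 - l1 ** 2 - l2 ** 2) / (2 * l1 * l2)
    q2 = np.arccos(np.clip(c2, -1.0, 1.0))
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return q1, q2

# Build the FK training set "backwards": sample reachable end poses,
# recover the joints with the analytic IK, and store (joints -> pose)
# pairs as input/target for the forward-kinematics network.
rng = np.random.default_rng(1)
samples = []
while len(samples) < 1000:
    r = rng.uniform(0.1, 0.54)            # within the annular workspace
    a = rng.uniform(-np.pi, np.pi)
    x, y = r * np.cos(a), r * np.sin(a)
    q1, q2 = ik_planar_2link(x, y)
    samples.append(((q1, q2), (x, y)))    # input: joints, target: pose
print(len(samples), samples[0])
```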
88.
ROBOT STEP LENGTH CONTROL METHOD, ROBOT CONTROLLER, AND COMPUTER-READABLE STORAGE MEDIUM
A robot step length control method, a robot controller, and a computer-readable storage medium are provided. The method includes: in response to detecting that a humanoid robot is not in a balanced state at a current time, obtaining a torso deflection posture parameter, a lower limb parameter and a leg swing frequency of the legs of the humanoid robot at the current time; and calculating, using a swinging leg capture point algorithm, a step length that meets a posture balance requirement of the humanoid robot at the current time based on the torso deflection posture parameter, the lower limb parameter, and the leg swing frequency, so that the humanoid robot can be restored to the balanced state after moving with the calculated step length, thereby improving the anti-interference ability of the robot.
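At its core, the capture point of a linear inverted pendulum lands at x_cp = v·sqrt(z/g) ahead of the center of mass; the full method above additionally folds in the torso deflection posture and swing frequency, which this one-liner omits.

```python
import numpy as np

def capture_point_step(com_vel, com_height, g=9.81):
    """Instantaneous capture point offset of the linear inverted
    pendulum: stepping roughly v * sqrt(z / g) ahead of the COM
    absorbs the current forward momentum."""
    return com_vel * np.sqrt(com_height / g)

print(capture_point_step(com_vel=0.4, com_height=0.8))   # ~0.114 m
```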
A method includes: performing semantic segmentation on an RGBD image to obtain a semantic label of each pixel of the image; performing reconstruction of a point cloud based on the image and mapping the semantic label of each pixel of the image into the point cloud to respectively obtain a semantic point cloud of a current frame with the semantic labels and a three-dimensional scene semantic map with the semantic labels; generating two-dimensional discrete semantic feature points for each of three-dimensional semantic point clouds in the current frame and the semantic map to obtain a corresponding two-dimensional semantic feature point image, and performing a three-dimensional semantic feature description on each feature point in the two-dimensional semantic feature point image; and performing feature matching on all feature points in the current frame and all feature points in the semantic map to obtain positioning information based on the three-dimensional semantic feature description.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
90.
DYNAMIC TARGET TRACKING METHOD, ROBOT AND COMPUTER-READABLE STORAGE MEDIUM
A dynamic target tracking method for a robot having multiple joints includes: obtaining a motion state of a tracked dynamic target in real time; performing motion prediction according to the motion state at a current moment to obtain a predicted position of the dynamic target; performing lag compensation on the predicted position to obtain a compensated predicted position; performing on-line trajectory planning according to the compensated predicted position to obtain planning quantities of multi-step joint motion states at multiple future moments, and determining a multi-step optimization trajectory according to the planning quantities and a multi-step optimization objective function; and controlling the joints of the robot according to the multi-step optimization trajectory.
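A minimal constant-velocity sketch of the prediction and lag-compensation steps; the horizon and lag values are placeholders, and a real implementation would use a proper motion model of the tracked target.

```python
import numpy as np

def predict_with_lag(pos, vel, horizon, lag):
    """Constant-velocity prediction of the target position, extended by
    the control/communication lag so the robot aims at where the target
    will be, not where it was observed."""
    return pos + vel * (horizon + lag)

print(predict_with_lag(np.array([0.5, 0.2]),   # observed position (m)
                       np.array([0.1, 0.0]),   # estimated velocity (m/s)
                       horizon=0.1, lag=0.03))
```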
A method for controlling a robot based on a trajectory of a center of mass (COM) of the robot includes: calculating a desired periodic trajectory of the robot in a single gait cycle based on a dynamic equation constructed according to a simplified model of the robot; correcting the desired periodic trajectory according to a fed-back state of the COM of the robot; generating a desired trajectory of the COM of the robot using the corrected desired periodic trajectory; and controlling motion of the robot according to the desired trajectory of the COM of the robot.
A robot calibration method, a robot, and a computer-readable storage medium are provided. The method includes: obtaining operation space information of the execution end of the robot; obtaining operation space points by gridding an operation space of the robot based on the operation space information; obtaining calibration data by controlling the execution end to move to the operation space points meeting a preset requirement; and calibrating the hand and the image detection device of the robot based on the obtained calibration data. In this manner, the operation space points are determined by gridding the operation space based on the operation space information, and the execution end can be automatically controlled to move to the operation space points that meet the preset requirement so as to obtain the calibration data in an automatic and accurate manner, thereby simplifying the calibration process and improving the efficiency.
A storage medium, a robot, and a method for generating a navigation map are provided. By disposing a first lidar and a second lidar located higher than the first lidar, the method constructs a first map corresponding to the first lidar based on first laser data collected by the first lidar, calculates second positioning data corresponding to the second lidar while constructing the first map, constructs a second map corresponding to the second lidar based on the second positioning data and second laser data collected by the second lidar, and obtains a navigation map corresponding to the robot by fusing the first map with the second map, such that the fused map includes not only positioning information provided by the first map but also obstacle information provided by both the first map and the second map.
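One simple way to realize the fusion step, assuming the two maps are aligned occupancy grids, is a per-cell maximum so that obstacles seen by either lidar survive in the navigation map; this rule is an assumption for illustration, not the patented fusion.

```python
import numpy as np

def fuse_maps(grid_low, grid_high):
    """Fuse two aligned occupancy grids (values in [0, 1], 0.5 unknown)
    by keeping the more occupied estimate per cell."""
    return np.maximum(grid_low, grid_high)

low = np.array([[0.9, 0.5], [0.1, 0.5]])    # first (lower) lidar map
high = np.array([[0.5, 0.8], [0.1, 0.5]])   # second (higher) lidar map
print(fuse_maps(low, high))
```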
A robotic arm angle interval inverse solving method and a robotic arm using the same are provided. The method includes: obtaining a joint angle calculation model and a differential relationship model of a target joint of the robotic arm; obtaining extreme arm angles corresponding to a joint angle of the differential relationship model at extreme values based on the differential relationship model; obtaining a joint arm angle interval corresponding to the target joint based on the extreme arm angles and the joint angle calculation model; and obtaining a target arm angle interval corresponding to the robotic arm based on the joint arm angle interval corresponding to the target joint of the robotic arm. Compared with existing methods for solving the arm angle interval of a robotic arm, a more accurate arm angle interval can be obtained.
A linkage mechanism includes: a base member; a first link having a first end rotatably connected to the base member; a second link rotatably connected to the first link; a connecting member rotatably connected to the base member and the second link; and an actuating mechanism having a linear actuator, a pushing member, and a transmission member, the pushing member slidably connected to an output shaft of the linear actuator, the pushing member having a pushing surface, the transmission member including a first end hinged to the pushing member, and a second end pivoted to the first end of the first link. When the output shaft extends to push the pushing surface, the pushing member moves and the first link rotates relative to the base member.
A center of mass (COM) planning method includes: obtaining a planning position of the COM and a planning speed of the COM of a robot, and calculating a planning capture point of the robot according to the planning position of the COM and the planning speed of the COM; obtaining a measured position of the COM and a measured speed of the COM, and calculating a measured capture point of the robot according to the measured position and the measured speed; calculating a desired zero moment point (ZMP) of the robot based on the planning capture point and the measured capture point; obtaining a measured ZMP of the robot, and calculating an amount of change in a position of the COM according to the desired ZMP and the measured ZMP; and correcting the planning position of the COM according to the amount of change in the position of the COM.
G05B 19/4155 - Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by programme execution, i.e. part programme or machine function execution, e.g. selection of a programme
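An illustrative reading of the correction pipeline above: a desired ZMP is formed from the planned and measured capture points, and the ZMP error is scaled into a COM position change. The feedback form and gains below are assumptions rather than the patented control law.

```python
import numpy as np

def com_correction(cp_plan, cp_meas, zmp_meas, k_cp=1.5, k_zmp=0.02):
    """Form a desired ZMP from the planned vs. measured capture points,
    then scale the ZMP error into an amount of change in COM position."""
    zmp_desired = cp_plan + k_cp * (cp_meas - cp_plan)
    return k_zmp * (zmp_desired - zmp_meas)

cp_plan = np.array([0.10, 0.00])    # planning capture point (x, y)
cp_meas = np.array([0.13, 0.01])    # measured capture point
zmp_meas = np.array([0.11, 0.00])   # measured ZMP
print(com_correction(cp_plan, cp_meas, zmp_meas))
```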
A linkage mechanism includes: a base member; a first link rotatably connected to the base member, the first link defining a first arc-shaped guide groove centered on a pivot axis about which the first link rotates relative to the base member; a second link rotatably connected to the first link; a connecting member rotatably connected to the base member and the second link; and an actuating mechanism including a linear actuator and a transmission member that is driven by the linear actuator, the transmission member having a first end rotatably connected to an output shaft of the linear actuator, and a second end slidably received in the first arc-shaped guide groove. When the linear actuator drives the transmission member to extend and move, the second end of the transmission member abuts against one end of the first arc-shaped guide groove, which drives the first link to rotate relative to the base member.
A text-to-speech synthesis method, an electronic device, and a computer-readable storage medium are provided. The method includes: obtaining prosodic pause features of an input text by performing a prosodic pause prediction processing on the input text, and dividing the input text into a plurality of prosodic phrases according to the prosodic pause features; synthesizing short sentence audios from the prosodic phrases by performing a streamed speech synthesis processing on each of the prosodic phrases in the input text in a manner of asynchronous processing of a thread pool; and performing an audio playback operation of the input text starting from the short sentence audio corresponding to the first prosodic phrase of the input text, in response to the short sentence audio corresponding to the first prosodic phrase having been synthesized.
A method for detecting contact of a swinging leg of a robot with ground includes: obtaining a torque on each joint of the swinging leg when the robot is in a swing phase; estimating a force on a foot of the swinging leg by using a force Jacobian matrix based on the torque on each joint of the swinging leg, and calculating a rate of change of force of the foot in a vertical direction according to the force on the foot; and determining that the swinging leg has contacted the ground in response to a preset consecutive number of values of the rate of change of force being greater than a preset threshold.
B25J 13/08 - Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
G05B 19/4155 - Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by programme execution, i.e. part programme or machine function execution, e.g. selection of a programme
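A sketch of the two steps named in the entry above: the foot force is estimated from joint torques through the force Jacobian (tau = Jᵀf), and touchdown is declared after a preset number of consecutive samples in which the vertical-force rate exceeds a threshold. The Jacobian handling is generic and the thresholds are assumed values.

```python
import numpy as np

def foot_force(J, tau):
    """Estimate the foot contact force from joint torques via the force
    Jacobian: tau = J^T f  =>  f = pinv(J^T) @ tau."""
    return np.linalg.pinv(J.T) @ tau

def contact_detected(fz_history, dt, rate_thresh=200.0, consecutive=3):
    """Declare touchdown once the vertical force grows faster than
    rate_thresh (N/s) for `consecutive` successive samples."""
    rates = np.diff(fz_history) / dt
    run = 0
    for r in rates:
        run = run + 1 if r > rate_thresh else 0
        if run >= consecutive:
            return True
    return False

fz = np.array([2.0, 2.1, 2.2, 15.0, 40.0, 80.0, 120.0])   # touchdown spike
print(contact_detected(fz, dt=0.002))
```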