Patent classifications
G05D2109/12
MOVEMENT TRAJECTORY DETERMINATION
Aspects of the disclosure include a method and an apparatus. The method includes generating second relational data indicating a relationship between a movement duration and force(s) applied to a legged robot and third relational data indicating a relationship between the force(s) and a predicted rotational angle of the legged robot. The second relational data includes a vector C to be determined. Fourth relational data is generated based on the third relational data. The fourth relational data indicates a positive correlation between a target value J associated with C and a status data error between predicted status data and target status data. C that minimizes J is determined based on the second relational data and the fourth relational data. First relational data with the determined C representing a movement trajectory of a torso of the legged robot is determined. The legged robot is caused to move based on the movement trajectory.
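The abstract describes choosing the vector C that minimizes a target value J tied to the error between predicted and target status data. The sketch below illustrates that step under the assumption (not stated in the abstract) that the predicted status is linear in C, so that J becomes an ordinary least-squares objective; the matrix A and all data shapes are illustrative.

```python
# Sketch only: assumes predicted status = A @ C and J = ||A @ C - target||^2.
# A stands in for the combined relational data; its contents are hypothetical.
import numpy as np

def solve_trajectory_coefficients(A: np.ndarray, target_status: np.ndarray) -> np.ndarray:
    """Return the coefficient vector C that minimizes J = ||A @ C - target_status||^2."""
    C, *_ = np.linalg.lstsq(A, target_status, rcond=None)
    return C

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(20, 6))      # hypothetical relational-data matrix
    target = rng.normal(size=20)      # hypothetical target status data
    C = solve_trajectory_coefficients(A, target)
    print("C:", C)
    print("J:", float(np.sum((A @ C - target) ** 2)))
```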
METHOD AND SYSTEM FOR GENERATING SCAN DATA OF AN AREA OF INTEREST
A system and a method for generating three-dimensional scan data of areas of interest, the method comprising a user defining the areas of interest using a mobile device in the environment, and a scanning device performing a scanning procedure at each defined area of interest to generate the scan data of the respective area of interest, wherein defining the areas of interest comprises, for each area of interest, generating identification data, wherein generating the identification data at least comprises generating image data of the respective area of interest, and the scanning procedure at each defined area of interest is performed by a mobile robot comprising the scanning device and being configured for autonomously performing a scan of a surrounding area using the scanning device, the mobile robot having a SLAM functionality for simultaneous localization and mapping and being configured to autonomously move through the environment using the SLAM functionality.
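As a rough illustration of the workflow in this abstract, the sketch below models each area of interest as identification data (an image plus an approximate position) and has the robot visit and scan each one in turn; all names, fields, and the visiting order are assumptions rather than details from the patent.

```python
# Illustrative workflow only; the real system would use SLAM-based navigation
# and a 3D scanning device rather than these placeholder prints.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class AreaOfInterest:
    name: str                              # hypothetical label
    image_file: str                        # identification data: image of the area
    approx_position: Tuple[float, float]   # rough position in the environment

def run_scanning_mission(areas: List[AreaOfInterest]) -> None:
    for area in areas:
        print(f"navigate (via SLAM) toward {area.approx_position}")
        print(f"match identification image {area.image_file}")
        print(f"perform 3D scan of '{area.name}'")

if __name__ == "__main__":
    run_scanning_mission([
        AreaOfInterest("pump-room corner", "aoi_01.jpg", (12.0, 3.5)),
        AreaOfInterest("loading dock", "aoi_02.jpg", (40.2, -7.1)),
    ])
```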
ENVIRONMENTAL FEATURE-SPECIFIC ACTIONS FOR ROBOT NAVIGATION
Systems and methods are described for reacting to a feature in an environment of a robot based on a classification of the feature. A system can detect the feature in the environment using a first sensor on the robot. For example, the system can detect the feature using a feature detection system based on sensor data from a camera. The system can detect a mover in the environment using a second sensor on the robot. For example, the system can detect the mover using a mover detection system based on sensor data from a lidar sensor. The system can fuse the data from detecting the feature and detecting the mover to produce fused data. The system can classify the feature based on the fused data and react to the feature based on classifying the feature.
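A compact sketch of the fuse-and-classify step described above; the proximity-matching rule, the distance threshold, and the reactions are assumptions, since the abstract only states that camera-based feature detections and lidar-based mover detections are fused and the feature is classified from the fused data.

```python
# Sketch: a feature detected by the camera is classified as "moving" if a
# lidar-detected mover lies within match_radius of it; otherwise "static".
from dataclasses import dataclass
from math import dist
from typing import List, Optional

@dataclass
class Detection:
    label: str
    position: tuple  # (x, y) in the robot's map frame

def fuse_and_classify(feature: Detection, movers: List[Detection],
                      match_radius: float = 1.0) -> str:
    nearest: Optional[Detection] = min(
        movers, key=lambda m: dist(feature.position, m.position), default=None)
    if nearest is not None and dist(feature.position, nearest.position) <= match_radius:
        return "moving"
    return "static"

def react(classification: str) -> str:
    # Hypothetical reactions; the abstract only says the robot reacts based on
    # the classification.
    return "slow down and yield" if classification == "moving" else "continue on path"

if __name__ == "__main__":
    feature = Detection("person", (2.0, 0.5))
    movers = [Detection("mover", (2.3, 0.6))]
    c = fuse_and_classify(feature, movers)
    print(c, "->", react(c))
```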
Path planning method and biped robot using the same
A path planning method and a biped robot using the same are provided. The method includes: generating a candidate node set for the next foot placement based on the biped robot's own parameters and the joint information of the current node, and adding valid candidate nodes in the candidate node set to a priority queue so as to select optimal nodes for the next node expansion. These optimal nodes are output to generate a foot placement sequence from an initial node to a target node, which can greatly reduce the amount of path-node searching when the robot's legs intersect and touch the ground, thereby improving the efficiency of path planning.
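The abstract outlines a best-first footstep search: generate candidate placements from the current node, keep the valid ones, push them onto a priority queue, and expand the most promising node. The sketch below condenses this to one dimension with made-up step lengths, validity checks, and costs; none of these values come from the patent.

```python
# 1-D footstep search sketch (A*-style): candidates are generated per node,
# invalid ones discarded, valid ones pushed into a priority queue keyed by
# cost-so-far plus distance-to-goal, and the cheapest node is expanded next.
import heapq

def plan_footsteps(start: float, goal: float, max_step: float = 0.4, tol: float = 0.05):
    open_set = [(abs(goal - start), 0.0, start, [start])]  # (priority, cost, pos, path)
    visited = set()
    while open_set:
        _, cost, pos, path = heapq.heappop(open_set)
        if abs(pos - goal) <= tol:
            return path                         # foot placement sequence
        key = round(pos, 2)
        if key in visited:
            continue
        visited.add(key)
        for step in (0.1, 0.2, 0.3, max_step):  # candidate node set (hypothetical)
            nxt = pos + step
            if nxt > goal + max_step:           # simple validity check
                continue
            new_cost = cost + step
            heapq.heappush(open_set, (new_cost + abs(goal - nxt), new_cost, nxt, path + [nxt]))
    return None

if __name__ == "__main__":
    print(plan_footsteps(0.0, 1.0))
```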
USER INTERFACE DEVICE FOR ROBOTS, ROBOT SYSTEMS AND RELATED METHODS
A wearable human-machine interface device includes a base, a finger, a sensor, and an interface controller. The finger extends longitudinally from the base and includes first and second rigid finger segments. A proximal end of the first finger segment is coupled to the base, and a proximal end of the second finger segment is coupled to a distal end of the first finger segment by a joint. The joint is adapted to enable rotational movement of the second finger segment relative to the first finger segment. The sensor is coupled to the finger and configured to provide a sensor signal representative of a position and/or movement of the second finger segment relative to the first finger segment. The interface controller is configured to provide a control signal representative of a flexion of the finger and/or a position of a fingertip of the finger based on the sensor signal.
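To illustrate how a joint sensor reading could be turned into the two quantities the interface controller reports, the sketch below uses a planar two-link model of the finger; the segment lengths and the specific kinematic model are assumptions, not details from the patent.

```python
# Planar forward kinematics for a two-segment finger (illustrative only).
from math import cos, sin

def fingertip_state(base_angle: float, joint_angle: float,
                    l1: float = 0.05, l2: float = 0.04):
    """base_angle: first segment relative to the base (rad);
    joint_angle: second segment relative to the first (rad), i.e. what the
    joint sensor reports. Returns (flexion, (x, y) fingertip position)."""
    flexion = base_angle + joint_angle
    x = l1 * cos(base_angle) + l2 * cos(flexion)
    y = l1 * sin(base_angle) + l2 * sin(flexion)
    return flexion, (x, y)

if __name__ == "__main__":
    flexion, tip = fingertip_state(0.3, 0.6)
    print(f"flexion = {flexion:.2f} rad, fingertip at ({tip[0]:.3f}, {tip[1]:.3f}) m")
```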
MOVING APPARATUS, MOVING APPARATUS CONTROL METHOD, AND PROGRAM
To provide an apparatus and a method that efficiently determine the best path on which a moving apparatus such as a walking robot can travel safely. The apparatus includes a path planning unit that determines a travel path of the moving apparatus such as a walking robot. The path planning unit is configured to calculate a path cost for each of a plurality of path candidates by applying a cost calculation algorithm in which a path that enables more stable traveling has a lower cost, and to determine the path candidate with the lowest calculated path cost as the best travel path. The path planning unit sets a plurality of sampling points in a path, calculates a cost for each sampling point by applying the cost calculation algorithm in which the cost becomes higher as the difference between the landing heights of the left and right legs at that sampling point becomes larger, and calculates the sum of the costs of the sampling points as the path cost.
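A small sketch of the cost idea described above: sample points along each candidate path, give each point a cost that grows with the difference between the left-foot and right-foot landing heights, sum those costs, and keep the cheapest path. The terrain lookup, stance width, and weight are illustrative assumptions.

```python
from typing import Callable, List, Sequence, Tuple

Point = Tuple[float, float]

def path_cost(samples: Sequence[Point],
              terrain_height: Callable[[float, float], float],
              half_stance: float = 0.1, weight: float = 10.0) -> float:
    cost = 0.0
    for x, y in samples:
        left = terrain_height(x, y + half_stance)   # left-foot landing height
        right = terrain_height(x, y - half_stance)  # right-foot landing height
        cost += weight * abs(left - right)          # larger difference -> higher cost
    return cost

def best_path(paths: List[Sequence[Point]],
              terrain_height: Callable[[float, float], float]) -> Sequence[Point]:
    return min(paths, key=lambda p: path_cost(p, terrain_height))

if __name__ == "__main__":
    terrain = lambda x, y: 0.2 * max(0.0, y)        # toy sloped terrain
    flat = [(i * 0.5, 0.0) for i in range(5)]
    slanted = [(i * 0.5, 0.3) for i in range(5)]
    print("chosen path:", best_path([flat, slanted], terrain))
```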
LIGHT OUTPUT USING LIGHT SOURCES OF A ROBOT
Systems and methods are described for outputting light and/or audio using one or more light and/or audio sources of a robot. The light sources may be located on one or more legs of the robot, a bottom portion of the robot, and/or a top portion of the robot. The audio sources may include a speaker and/or an audio resonator. A system can obtain sensor data associated with an environment of the robot. Based on the sensor data, the system can identify an alert. For example, the system can identify an entity based on the sensor data and identify an alert for the entity. The system can instruct an output of light and/or audio indicative of the alert using the one or more light and/or audio sources. The system can adjust parameters of the output based on the sensor data.
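As a rough illustration of the alerting flow, the sketch below maps a detected entity to an alert and scales the light and audio output with distance; the entity types, colors, sounds, and scaling are assumptions, since the abstract only states that output parameters are adjusted based on the sensor data.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    color: str
    pattern: str
    sound: str

# Hypothetical entity-to-alert mapping.
ALERTS = {
    "person": Alert("yellow", "slow_pulse", "chime"),
    "forklift": Alert("red", "fast_flash", "horn"),
}

def output_alert(entity: str, distance_m: float) -> None:
    alert = ALERTS.get(entity)
    if alert is None:
        return
    # Closer entities get brighter light and louder audio (illustrative scaling).
    brightness = min(1.0, 2.0 / max(distance_m, 0.5))
    volume = min(1.0, 3.0 / max(distance_m, 0.5))
    print(f"LEDs: {alert.color} {alert.pattern}, brightness {brightness:.2f}")
    print(f"speaker: {alert.sound}, volume {volume:.2f}")

if __name__ == "__main__":
    output_alert("person", distance_m=4.0)
    output_alert("forklift", distance_m=1.5)
```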
METHOD FOR AUTOMATICALLY MAPPING THE RADIATION IN A PORTION OF A BUILDING AND A ROBOT VEHICLE
A method is provided for automatically mapping the radiation in a portion of a building (7) using a robot vehicle (1). The portion of the building includes a plurality of building surfaces (9, 10). The method includes acquiring (1010) a 3D map (42) of the portion of the building (7). The 3D map (42) includes a plurality of segments (44), each representing a substantially flat building surface. The method further comprises applying (1020) to each segment (44) a plurality of sectors forming a grid of sectors (46), each sector (46) having a border (48); physically marking, by the robot vehicle (1), the border (48) of each sector (46) with paint on the corresponding building surface (9, 10); and, for one or more sectors (46), scanning, by the robot vehicle (1), each sector (46) with a radiation sensor (28) to measure the radioactive radiation within that sector.
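The gridding step can be pictured as dividing each flat segment into fixed-size sectors, each with a border to be marked and then scanned. The sketch below assumes rectangular segments and a 1 m sector size, neither of which is specified in the abstract.

```python
from typing import List, Tuple

Sector = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) on the segment plane

def grid_segment(width: float, height: float, sector_size: float = 1.0) -> List[Sector]:
    sectors = []
    y = 0.0
    while y < height:
        x = 0.0
        while x < width:
            sectors.append((x, y, min(x + sector_size, width), min(y + sector_size, height)))
            x += sector_size
        y += sector_size
    return sectors

def survey_segment(width: float, height: float) -> None:
    for i, sector in enumerate(grid_segment(width, height)):
        # In the described method the robot would paint the border and then
        # scan the sector with its radiation sensor.
        print(f"sector {i}: mark border {sector}, then scan with radiation sensor")

if __name__ == "__main__":
    survey_segment(3.2, 2.0)
```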
ROBOT DEVICE AND ROBOT CONTROL METHOD
In a robot device that identifies obstacles on the basis of detection information from a sensor, highly accurate robot control through correct obstacle identification is realized without the robot erroneously recognizing its own leg, arm, or the like as an obstacle. A self-region filter processing unit removes object information corresponding to a component of the robot device from the object information included in the detection information of a visual sensor, a map image generation unit generates map data based on the object information from which the object information corresponding to the component of the robot device has been removed, and a robot control unit controls the robot device on the basis of the generated map data. The self-region filter processing unit calculates variable filter regions of different sizes according to the motion speed of the movable parts of the robot device, and removes the object information lying in the variable filter regions from the detection information of the visual sensor.
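A compact sketch of the self-region filter: each movable part gets a filter region whose size grows with that part's motion speed, and sensor points inside any region are removed before the map is generated. The link positions, base radius, and speed gain below are assumptions for illustration.

```python
from math import dist
from typing import Dict, List, Tuple

Point = Tuple[float, float, float]

def filter_radius(base_radius: float, speed: float, gain: float = 0.2) -> float:
    """Variable filter region: faster-moving parts get a larger region."""
    return base_radius + gain * speed

def remove_self_points(points: List[Point],
                       links: Dict[str, Tuple[Point, float]],
                       base_radius: float = 0.15) -> List[Point]:
    """Drop sensor points inside any link's variable filter region.

    links maps link name -> (link position, link speed in m/s)."""
    kept = []
    for p in points:
        inside = any(dist(p, pos) <= filter_radius(base_radius, speed)
                     for pos, speed in links.values())
        if not inside:
            kept.append(p)
    return kept

if __name__ == "__main__":
    cloud = [(0.5, 0.0, 0.3), (2.0, 1.0, 0.0)]
    links = {"left_arm": ((0.5, 0.05, 0.3), 1.2)}  # fast-moving arm (hypothetical)
    print(remove_self_points(cloud, links))        # the point near the arm is removed
```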
LOCATION BASED CHANGE DETECTION WITHIN IMAGE DATA BY A MOBILE ROBOT
Systems and methods are described for detecting changes at a location based on image data by a mobile robot. A system can instruct navigation of the mobile robot to a location. For example, the system can instruct navigation to the location as part of an inspection mission. The system can obtain input identifying a change detection. Based on the change detection and obtained image data associated with the location, the system can perform the change detection and detect a change associated with the location. For example, the system can perform the change detection based on one or more regions of interest of the obtained image data. Based on the detected change and a reference model, the system can determine presence of an anomaly condition in the obtained image data.
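A minimal sketch of change detection within a region of interest: compare the current image against a reference inside the ROI and flag an anomaly when the mean pixel difference exceeds a threshold. The metric and threshold are assumptions; the abstract only states that a change is detected in regions of interest and checked against a reference model.

```python
import numpy as np

def roi_changed(current: np.ndarray, reference: np.ndarray,
                roi: tuple, threshold: float = 20.0) -> bool:
    """Return True if the mean absolute pixel difference inside the ROI
    (y0, y1, x0, x1) exceeds the threshold."""
    y0, y1, x0, x1 = roi
    diff = np.abs(current[y0:y1, x0:x1].astype(float) -
                  reference[y0:y1, x0:x1].astype(float))
    return float(diff.mean()) > threshold

if __name__ == "__main__":
    reference = np.zeros((100, 100), dtype=np.uint8)
    current = reference.copy()
    current[40:60, 40:60] = 200               # e.g. a gauge or valve that changed
    print("anomaly:", roi_changed(current, reference, roi=(30, 70, 30, 70)))
```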