Patent classification: G05D1/2435
Information processing apparatus, control method for same, non-transitory computer-readable storage medium, and vehicle driving support system
This invention provides an information processing apparatus comprising a first acquiring unit configured to acquire first distance information from a distance sensor, a second acquiring unit configured to acquire an image from an image capturing device, a holding unit configured to hold a learning model for estimating distance information from images, an estimating unit configured to estimate second distance information corresponding to the image acquired by the second acquiring unit using the learning model, and a generating unit configured to generate third distance information based on the first distance information and the second distance information.
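The generating unit's fusion step can be illustrated with a minimal sketch. The abstract does not specify how the first and second distance information are combined, so the rule below (prefer the sensor's measurement per pixel, fall back to the model's estimate where the sensor has no return) is an assumption; the function name and the `invalid` sentinel are hypothetical.

```python
def fuse_distances(sensor_depth, model_depth, invalid=0.0):
    """Generate 'third distance information' from first (sensor) and
    second (learning-model) distance information.

    Assumption: where the distance sensor reports no valid return
    (marked by `invalid`), use the model's estimate; otherwise keep
    the sensor's measurement.
    """
    fused = []
    for s, m in zip(sensor_depth, model_depth):
        fused.append(m if s == invalid else s)
    return fused


# Example: the sensor is missing a reading at the second position,
# so the model's estimate fills the gap.
fused = fuse_distances([1.0, 0.0, 3.0], [1.1, 2.2, 2.9])
```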
Methods and systems for navigating autonomous and semi-autonomous vehicles
A system and method for forecasting perceived transitions to the four annual seasons in geographic areas is disclosed. The perceived transitions are identified by comparing forecasted daily temperatures in each geographic area to thresholds generated from the normal daily temperatures in those geographic areas. The forecasted daily temperatures may be calculated using both forecasted temperatures and forecasted perceived ambient temperatures (the latter calculated from temperature together with humidity, cloud cover, sun intensity, and/or wind speed).
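The threshold comparison at the core of the method can be sketched as follows. The abstract does not state how the threshold is derived from the normal daily temperature, so the additive `margin` below is a placeholder assumption; the function name is hypothetical.

```python
def perceived_season_transition(forecast_temps, normal_temp, margin=5.0):
    """Return the index of the first forecast day whose temperature
    crosses a threshold derived from the normal daily temperature.

    Assumption: threshold = normal + margin models a perceived
    transition to a warmer season; a real system would derive
    thresholds per area and per season from climatological normals.
    """
    threshold = normal_temp + margin
    for day, temp in enumerate(forecast_temps):
        if temp >= threshold:
            return day
    return None  # no perceived transition within the forecast window


# Example: with a normal of 10 degrees, day 2 (16 degrees) is the
# first day crossing the 15-degree threshold.
day = perceived_season_transition([10.0, 12.0, 16.0], normal_temp=10.0)
```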
MOBILE ROBOT FOR DETERMINING WHETHER TO BOARD ELEVATOR, AND OPERATING METHOD THEREFOR
A mobile robot for determining whether to board an elevator may include a camera configured for capturing an inside of the elevator, an object recognition unit configured for recognizing an area of the elevator and the number of passengers from an image captured by the camera, and a control unit configured for calculating a density of the elevator based on the area and the number of passengers. The control unit may perform a determination of whether to board the elevator based on the density, and control a driving wheel motor based on the determination.
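The control unit's density calculation and boarding decision can be sketched in a few lines. The abstract specifies only that density is computed from the recognized elevator area and passenger count; the per-passenger footprint, the robot's own footprint, and the maximum acceptable density below are assumed parameters, and the function name is hypothetical.

```python
def should_board(elevator_area_m2, passenger_count,
                 area_per_passenger_m2=0.5,
                 robot_footprint_m2=0.4,
                 max_density=0.7):
    """Decide whether the mobile robot should board the elevator.

    Density is the fraction of floor area that would be occupied
    after the robot boards (assumed model; the patent leaves the
    exact density formula unspecified).
    """
    occupied = passenger_count * area_per_passenger_m2 + robot_footprint_m2
    density = occupied / elevator_area_m2
    return density <= max_density


# Example: a 4 m^2 car with 2 passengers is sparse enough to board;
# with 6 passengers it is not.
ok = should_board(4.0, 2)
full = should_board(4.0, 6)
```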
AUTOMATED UTILITY MARKOUT ROBOT SYSTEM AND METHOD
A portable robotic platform system and method for automatically detecting, locating, and marking underground assets are provided. The portable robotic platform includes a housing with a sensor module including ground penetrating radar (GPR), LiDAR, and electromagnetic (EM) sensors. The robotic platform automatically collects GPR and EM data and uses onboard post-processing techniques to interpret the sensor data and identify the location(s) of underground infrastructure. The portable robotic platform can be deployed to apply paint to a ground surface to identify the located underground assets.
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM
There is provided an information processing device that realizes rich motion expression of an autonomous mobile object through simpler attitude control. The information processing device includes a motion control unit that controls a motion of the autonomous mobile object, wherein the autonomous mobile object includes a wheel that can be stored inside a main body and protruded to the outside of the main body. The motion control unit keeps the autonomous mobile object in a standing state by protruding the wheel to the outside of the main body, performs driving control of the wheel and attitude control of the autonomous mobile object during movement, and makes the autonomous mobile object remain still in a seated state while stopped by storing the wheel inside the main body.
System for controlling an autonomous driving vehicle or air vessel, which can be controlled on the basis of steering and acceleration values, and an autonomous driving vehicle or air vessel provided with such a system
System for controlling an autonomous vehicle on the basis of control values and acceleration values, having a safety-determining module configured to receive live images from a camera, to receive recorded stored images preprocessed for image recognition from an internal safety-determining module data storage, to receive navigation instruction(s) from a navigation module; to compare the live images with the stored images to determine a degree of correspondence; and to determine a safety value which indicates the extent to which the determined degree of correspondence suffices to execute the navigation instruction(s); wherein a control module is configured to receive the navigation instruction(s); to receive the live images; to receive the safety value; to compare the safety value to a predetermined value; and if the safety value is greater than the predetermined value, to convert the navigation instruction(s) and the camera images into control values and acceleration values for the autonomous vehicle.
Vacuum cleaner and control method thereof
A self-driving vacuum cleaner may include: a main body; a drive unit; a pattern irradiation unit, disposed on a front surface of the main body, for irradiating light; a camera, disposed on the front surface of the main body; and a controller for detecting a light pattern formed by the pattern irradiation unit using an image photographed by the camera, determining whether an obstacle exists in front of the main body on the basis of the brightness of the detected light pattern, and controlling the drive unit to pass or avoid the obstacle on the basis of the determination result.
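The brightness-based obstacle decision can be sketched as a simple thresholding step. The abstract says only that the controller uses the brightness of the detected light pattern; the specific rule below (a nearby obstacle reflects the irradiated pattern back brightly, so a high fraction of bright pattern pixels indicates an obstacle) is an assumption, and the parameter values are placeholders.

```python
def obstacle_ahead(pattern_pixels, brightness_threshold=200,
                   min_bright_fraction=0.3):
    """Decide from the detected light pattern whether an obstacle
    exists in front of the main body.

    Assumption: pixel intensities are 0-255, and a sufficiently
    large fraction of bright pattern pixels signals a reflecting
    obstacle close ahead.
    """
    bright = sum(1 for p in pattern_pixels if p >= brightness_threshold)
    return bright / len(pattern_pixels) >= min_bright_fraction


# Example: three of five pattern pixels are bright, so the drive
# unit would be controlled to avoid (or deliberately pass) the obstacle.
blocked = obstacle_ahead([250, 240, 10, 20, 230])
```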
REMOTE CONTROL SYSTEM FOR A CONSTRUCTION MACHINE AND METHOD FOR CONTROLLING A CONSTRUCTION MACHINE
A remote control system includes a mobile terminal configured to control a construction machine in a first operating mode using one or more control elements of the mobile terminal and to control at least one imaging device in a second operating mode using the one or more control elements of the mobile terminal. The at least one imaging device is controllable by the one or more control elements of the mobile terminal to record an environment of the construction machine and/or a working tool of the construction machine. A position and/or alignment of the at least one imaging device is controllable via the one or more control elements of the mobile terminal.
Object pose estimation
A plurality of virtual three-dimensional points distributed on a 3D reference plane for a camera array including a plurality of cameras are randomly selected. The plurality of cameras includes a host camera and one or more additional cameras. Respective two-dimensional projections of the plurality of virtual 3D points for the plurality of cameras are determined based on respective poses of the cameras. For the respective one or more additional cameras, respective homography matrices are determined based on the 2D projections for the respective camera and the 2D projections for the host camera. The respective homography matrices map the 2D projections for the respective camera to the 2D projections for the host camera. A stitched image is generated based on respective images captured by the plurality of cameras and the respective homography matrices.
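The mapping step (applying a homography to the 2D projections of one camera to bring them into the host camera's frame) can be sketched in pure Python. Estimating the homography itself from point correspondences is omitted here; the 3x3 matrix is taken as given, and the function name is hypothetical.

```python
def apply_homography(H, point):
    """Map a 2D point through a 3x3 homography matrix using
    homogeneous coordinates: [x', y', w]^T = H * [x, y, 1]^T,
    then divide by w."""
    x, y = point
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)


# Example: a pure-translation homography shifts a point by (2, 3);
# in the described system, H would instead be estimated from the
# corresponding 2D projections of the virtual 3D plane points.
shifted = apply_homography([[1, 0, 2], [0, 1, 3], [0, 0, 1]], (1, 1))
```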
3-D IMAGE SYSTEM FOR VEHICLE CONTROL
A control system uses visual odometry (VO) data to identify a position of the vehicle while moving along a path next to the row and to detect the vehicle reaching an end of the row. The control system can also use the VO data to turn the vehicle around from a first position at the end of the row to a second position at a start of another row. The control system may detect an end of row based on 3-D image data, VO data, and GNSS data. The control system also may adjust the VO data so the end of row detected from the VO data corresponds with the end of row location identified with the GNSS data.
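The final adjustment step can be sketched as a simple bias correction. The abstract does not say how the VO data are adjusted, so the uniform translation below (shifting the VO track so its detected end-of-row coincides with the GNSS-identified location) is an assumed, simplest-case model; the function name is hypothetical.

```python
def correct_vo_positions(vo_positions, vo_end_of_row, gnss_end_of_row):
    """Shift visual-odometry positions so the end of row detected
    from the VO data corresponds with the end-of-row location
    identified with the GNSS data (assumed constant-offset model;
    real VO drift also accumulates scale and heading error)."""
    dx = gnss_end_of_row[0] - vo_end_of_row[0]
    dy = gnss_end_of_row[1] - vo_end_of_row[1]
    return [(x + dx, y + dy) for x, y in vo_positions]


# Example: VO placed the end of row at (0, 10) but GNSS says (1, 12),
# so the whole VO track is shifted by (+1, +2).
track = correct_vo_positions([(0, 0), (0, 10)], (0, 10), (1, 12))
```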