Patent classifications
G05D1/0253
Prioritized constraints for a navigational system
Systems and methods are provided for vehicle navigation. In one implementation, a system may comprise at least one processor. The processor may be programmed to receive images representative of an environment of the host vehicle and analyze the images to identify a first object and a second object. The processor may determine a first predefined navigational constraint implicated by the first object and a second predefined navigational constraint implicated by the second object, wherein the first and second predefined navigational constraints cannot both be satisfied, and the second predefined navigational constraint has a priority higher than the first predefined navigational constraint. The processor may determine a navigational action for the host vehicle that satisfies the second predefined navigational constraint but not the first, and cause an adjustment of a navigational actuator of the host vehicle in response to the determined navigational action.
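The resolution logic this abstract describes can be pictured with a short sketch. Everything below is illustrative rather than taken from the patent: the Constraint class, the priority values, and the toy action space are all assumptions.

```python
# Minimal sketch of priority-based constraint resolution (illustrative only;
# constraint names, priorities, and the action space are hypothetical).
from dataclasses import dataclass

@dataclass
class Constraint:
    name: str
    priority: int           # higher value = higher priority
    satisfied_by: callable  # action -> bool

def choose_action(candidate_actions, constraints):
    """Return the action satisfying the highest-priority constraints.

    When not all constraints can be met, actions are ranked by the
    priorities of the constraints they satisfy, so a higher-priority
    constraint wins over a lower-priority one."""
    def score(action):
        return sum(c.priority for c in constraints if c.satisfied_by(action))
    return max(candidate_actions, key=score)

# Hypothetical example: swerving violates lane keeping but preserves
# pedestrian clearance, which has the higher priority.
lane_keep = Constraint("lane_keep", priority=1,
                       satisfied_by=lambda a: a != "swerve")
ped_clear = Constraint("pedestrian_clearance", priority=10,
                       satisfied_by=lambda a: a != "continue")
print(choose_action(["continue", "swerve"], [lane_keep, ped_clear]))  # swerve
```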
Systems and methods for navigating a vehicle among encroaching vehicles
Systems and methods use cameras to provide autonomous navigation features. In one implementation, a method for navigating a user vehicle may include acquiring, using at least one image capture device, a plurality of images of an area in a vicinity of the user vehicle; determining from the plurality of images a first lane constraint on a first side of the user vehicle and a second lane constraint on a second side of the user vehicle opposite to the first side of the user vehicle; enabling the user vehicle to pass a target vehicle if the target vehicle is determined to be in a lane different from the lane in which the user vehicle is traveling; and causing the user vehicle to abort the pass before completion of the pass if the target vehicle is determined to be entering the lane in which the user vehicle is traveling.
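A minimal sketch of the pass/abort decision follows, under assumptions not in the abstract: the lane constraints are modeled as lateral bounds in meters, and the update_pass_state helper is hypothetical.

```python
# Sketch of the pass/abort decision described above (hypothetical
# interfaces; lane constraints modeled as lateral bounds in meters).
def update_pass_state(ego_lane_bounds, target_lateral_pos, passing):
    """Enable a pass while the target stays outside the ego lane;
    abort mid-pass if the target crosses into it."""
    left, right = ego_lane_bounds  # lateral positions of the lane constraints
    target_in_ego_lane = left < target_lateral_pos < right
    if not passing:
        return ("begin_pass", True) if not target_in_ego_lane else ("hold", False)
    # Already passing: abort before completion if the target encroaches.
    return ("abort_pass", False) if target_in_ego_lane else ("continue_pass", True)

print(update_pass_state((-1.8, 1.8), 3.0, passing=False))  # ('begin_pass', True)
print(update_pass_state((-1.8, 1.8), 1.2, passing=True))   # ('abort_pass', False)
```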
Control transfer of a vehicle
A method for finding at least one trigger for human intervention in the control of a vehicle. The method may include receiving, by an I/O module of a computerized system and from a plurality of vehicles, visual information acquired during situations suspected of requiring human intervention in the control of at least one of the plurality of vehicles; determining, based at least on the visual information, the at least one trigger for human intervention; and transmitting the at least one trigger to one or more of the plurality of vehicles.
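The server-side flow might look like the toy sketch below; the report format, the frequency threshold, and determine_triggers are hypothetical, and a real system would mine the visual information itself rather than pre-made labels.

```python
# Sketch of server-side trigger mining (all names hypothetical): reports
# derived from the received visual information are grouped by situation
# label, and labels that recur often enough become triggers broadcast
# back to the fleet.
from collections import Counter

def determine_triggers(reports, min_count=3):
    """reports: iterable of (vehicle_id, situation_label) pairs."""
    counts = Counter(label for _, label in reports)
    return [label for label, n in counts.items() if n >= min_count]

reports = [(1, "occluded_crosswalk"), (2, "occluded_crosswalk"),
           (3, "occluded_crosswalk"), (4, "glare")]
print(determine_triggers(reports))  # ['occluded_crosswalk']
```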
Robot and method for ascertaining a distance traveled by a robot
A semiautonomous robot. The robot includes at least two powered locomotion devices and a monocular capture unit. The at least two locomotion devices are designed to rotate at least the capture unit about a rotational axis, which is situated in a fixed position relative to the capture unit, the capture unit and the rotational axis being set apart from each other. The robot further includes at least one control and/or regulating unit for ascertaining a distance traveled. Based on a movement of the capture unit about the rotational axis, which is fixed during the movement, in particular at a known distance from and/or in a known orientation relative to the rotational axis, the control and/or regulating unit is configured to determine a distance conversion parameter used for ascertaining the distance traveled.
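One way to read the distance conversion parameter: during a calibration rotation the capture unit travels a known arc (radius times angle), so comparing that arc to the raw odometry output yields a meters-per-unit factor. The sketch below is a guess at that arithmetic with made-up numbers.

```python
# Sketch of deriving a distance-conversion parameter from a calibration
# rotation (hypothetical numbers): the capture unit sweeps a known arc,
# so the true distance is radius * angle, and the ratio of true distance
# to raw visual-odometry distance calibrates later measurements.
import math

def conversion_parameter(radius_m, swept_angle_rad, raw_odometry_units):
    true_arc_m = radius_m * swept_angle_rad
    return true_arc_m / raw_odometry_units  # meters per raw unit

scale = conversion_parameter(radius_m=0.10,
                             swept_angle_rad=math.pi,   # half turn
                             raw_odometry_units=420.0)  # e.g., pixels
print(f"traveled: {1234.0 * scale:.3f} m")  # apply to a later raw reading
```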
Object determining system and auto clean machine using the object determining system
An object determining system comprises: an air ejection device configured to eject air; a distance detecting circuit configured to detect distances between an electronic device comprising the object determining system and at least one location of an object while the air ejection device ejects air at the object; and a determining circuit configured to determine a type of the object according to variations of the distances.
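The determining circuit's idea can be illustrated with a sketch: a soft object deflects under the air jet, so its distance readings vary more than a rigid object's. The threshold and the two categories below are assumptions, not from the patent.

```python
# Sketch of classifying an object by how measured distance varies while
# air is blown at it (threshold and categories are assumptions): a soft
# object such as fabric deflects, so its distance readings fluctuate
# more than those of a rigid obstacle.
from statistics import pstdev

def classify_object(distances_mm, soft_threshold_mm=2.0):
    variation = pstdev(distances_mm)  # spread of readings during ejection
    return "soft" if variation > soft_threshold_mm else "rigid"

print(classify_object([120, 126, 118, 125, 121]))  # soft (fabric-like)
print(classify_object([120, 120, 121, 120, 120]))  # rigid (wall-like)
```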
Modular robot
Provided is a robot including: a chassis; wheels; electric motors; a network card; sensors; a processor; and a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuate operations including: capturing, with at least one exteroceptive sensor, a first image and a second image; determining, with the processor, an overlapping area of the first image and the second image by comparing the raw pixel intensity values of the first image to the raw pixel intensity values of the second image; combining, with the processor, the first image and the second image at the overlapping area to generate a digital spatial representation of the environment; and estimating, with the processor using a statistical ensemble of simulated positions of the robot, a corrected position of the robot to replace a last known position of the robot within the digital spatial representation of the environment.
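Two of the recited operations, overlap detection by raw intensity comparison and the ensemble position estimate, can be sketched in simplified one-dimensional form. Everything below (list-based "images", the Gaussian ensemble) is a toy stand-in, not the patent's method.

```python
# Sketch of overlap detection and ensemble localization in 1-D.
import random

def overlap_length(a, b, min_len=2):
    """Largest suffix of image a matching a prefix of image b."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a[-n:] == b[:n]:
            return n
    return 0

def combine(a, b):
    n = overlap_length(a, b)
    return a + b[n:]  # stitch the images at the overlapping area

def ensemble_position(last_pos, n=1000, noise=0.5):
    # Toy "statistical ensemble" of simulated positions around the
    # last known position; the mean replaces the last known position.
    sims = [last_pos + random.gauss(0, noise) for _ in range(n)]
    return sum(sims) / n

img1, img2 = [3, 5, 7, 9, 11], [9, 11, 13, 15]
print(combine(img1, img2))               # [3, 5, 7, 9, 11, 13, 15]
print(round(ensemble_position(4.2), 1))  # ~4.2
```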
Precision agricultural treatment based on growth stage in real time
Various embodiments of an apparatus, methods, systems and computer program products described herein are directed to an agricultural observation and treatment system and method of operation. The agricultural treatment system may determine a first real-world geo-spatial location of the treatment system. The system can receive captured images depicting real-world agricultural objects of a geographic scene. The system can associate captured images with the determined geo-spatial location of the treatment system. The treatment system can identify, from a group of mapped and indexed images, images having a second real-world geo-spatial location that is proximate with the first real-world geo-spatial location. The treatment system can compare at least a portion of the identified images with at least a portion of the captured images. The treatment system can determine a target object and emit a fluid projectile at the target object using a treatment device.
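The retrieval step, finding indexed images whose geo-spatial location is proximate to the system's current fix, might be sketched as below; the data model, radius, and equirectangular distance approximation are assumptions.

```python
# Sketch of proximate-image retrieval (illustrative data model): indexed
# images carry a geo-spatial location, and candidates within a radius of
# the treatment system's current fix are kept for comparison against the
# live captures.
import math

def proximate_images(indexed, current_latlon, radius_m=5.0):
    def dist_m(a, b):  # small-area equirectangular approximation
        dlat = (a[0] - b[0]) * 111_320
        dlon = (a[1] - b[1]) * 111_320 * math.cos(math.radians(a[0]))
        return math.hypot(dlat, dlon)
    return [img for img in indexed
            if dist_m(img["latlon"], current_latlon) <= radius_m]

indexed = [{"id": 1, "latlon": (37.00001, -122.00001)},
           {"id": 2, "latlon": (37.10000, -122.00000)}]
print(proximate_images(indexed, (37.0, -122.0)))  # only image 1 is proximate
```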
Method and Apparatus for Scale Calibration and Optimization of a Monocular Visual-Inertial Localization System
A method and system are disclosed for capturing, by a camera disposed on a device moving in an environment, a plurality of image frames recorded in a first coordinate reference frame at respective locations within a portion of the environment in a first time period; capturing, by an inertial measurement unit disposed on the device, sets of inertial odometry data recorded in a second coordinate reference frame; determining a rotational transformation matrix that corresponds to a relative rotation between the first reference frame and the second reference frame; and determining a scale factor from matching pairs of image frames. The rotational transformation matrix defines an orientation of the device, and the scale factor and the rotational transformation matrix calibrate the plurality of image frames captured by the camera.
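On synthetic data, the two quantities named in the abstract can be recovered as sketched below: the relative rotation via a Kabsch/Procrustes fit (an assumption; the patent does not name its method) and the scale factor as a ratio of displacement norms over matching pairs.

```python
# Sketch of rotation and scale estimation on synthetic matched data.
import numpy as np

def kabsch(p, q):
    """Rotation R (q_i ~ R @ p_i) aligning matched displacement sets."""
    h = p.T @ q
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    return vt.T @ np.diag([1, 1, d]) @ u.T

# Synthetic ground truth: a yaw rotation and a metric scale of 2.5.
theta, true_scale = 0.3, 2.5
r_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
cam_disp = np.random.default_rng(0).normal(size=(20, 3))  # camera frame, up-to-scale
imu_disp = true_scale * (cam_disp @ r_true.T)             # inertial frame, metric

r_est = kabsch(cam_disp, imu_disp)
scale = np.mean(np.linalg.norm(imu_disp, axis=1) /
                np.linalg.norm(cam_disp, axis=1))
print(np.allclose(r_est, r_true, atol=1e-6), round(scale, 3))  # True 2.5
```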
Adaptive region division method and system
An adaptive region division method and system are provided. The adaptive region division method includes: building an environmental map based on laser radar data and odometer data, to determine information about an environment in which a target device is located (S11); performing feature extraction according to the laser radar data, to determine feature data, where the feature data includes line feature data and point feature data (S12); generating a virtual door according to the feature data and the information about the environment in which the target device is located (S13); and dividing a to-be-divided region where the target device is located according to the virtual door (S14). In this way, a virtual door is generated from laser data of the current environment to achieve adaptive region division, so that the target device can cover the whole space more efficiently and quickly.
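A guess at the geometry behind step S13: endpoints of extracted wall-line features that face each other across a door-sized gap are joined into a virtual door segment, which then acts as a region boundary in step S14. The gap thresholds and data below are invented for illustration.

```python
# Sketch of virtual-door generation (geometry only, hypothetical data):
# line-feature endpoints separated by a plausible doorway width are
# joined into virtual door segments.
import math

def virtual_doors(line_endpoints, min_gap=0.6, max_gap=1.2):
    doors = []
    for i, a in enumerate(line_endpoints):
        for b in line_endpoints[i + 1:]:
            gap = math.dist(a, b)
            if min_gap <= gap <= max_gap:  # plausible doorway width (m)
                doors.append((a, b))
    return doors

# Two wall segments ending on either side of a ~0.9 m opening:
endpoints = [(2.0, 0.0), (2.0, 0.9), (5.0, 4.0)]
print(virtual_doors(endpoints))  # [((2.0, 0.0), (2.0, 0.9))]
```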
Enhanced object detection for autonomous vehicles based on field of view
Systems and methods for enhanced object detection for autonomous vehicles based on field of view. An example method includes obtaining an image from an image sensor of one or more image sensors positioned about a vehicle. A field of view for the image is determined, with the field of view being associated with a vanishing line. A crop portion corresponding to the field of view is generated from the image, with a remaining portion of the image being downsampled. Information associated with detected objects depicted in the image is outputted based on a convolutional neural network, with object detection being based on performing a forward pass through the convolutional neural network on the crop portion and the remaining portion.
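The crop-plus-downsample preprocessing might look like the sketch below; the band height around the vanishing line, the stride-based downsample, and the array shapes are all assumptions.

```python
# Sketch of crop-plus-downsample preprocessing: a full-resolution crop
# is taken around the vanishing line, where distant objects appear
# small, and the remaining rows are downsampled before both portions
# are fed to the detector.
import numpy as np

def split_for_detection(image, vanish_row, band=64, factor=4):
    top = max(0, vanish_row - band)
    bot = min(image.shape[0], vanish_row + band)
    crop = image[top:bot]                       # full-res band at the horizon
    rest = np.concatenate([image[:top], image[bot:]])
    rest_ds = rest[::factor, ::factor]          # cheap stride downsample
    return crop, rest_ds

image = np.zeros((480, 640, 3), dtype=np.uint8)
crop, rest = split_for_detection(image, vanish_row=200)
print(crop.shape, rest.shape)  # (128, 640, 3) (88, 160, 3)
```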