Patent classifications
G05D1/0246
Driving assistance apparatus
In a driving assistance apparatus, an image acquiring unit acquires an image captured by an onboard camera. Based on the acquired image, a boundary line recognizing unit recognizes a boundary line that demarcates the traffic lane in which the own vehicle is driving. A road information acquiring unit acquires road information related to the road on which the own vehicle is driving. Based on the acquired road information, a degree-of-reliability setting unit sets a degree of reliability for the recognized boundary line. Based on the recognized boundary line, a driving assisting unit performs driving assistance of the own vehicle and varies the control content of the driving assistance based on the degree of reliability.
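The reliability-gated assistance described above can be sketched as follows. The road-information fields, thresholds, and control gain are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: deriving a boundary-line reliability score from
# road information, then varying the assistance control content with it.

def boundary_reliability(road_info: dict) -> float:
    """Lower the reliability of a recognized boundary line when the
    road information suggests the lane markings may be degraded."""
    score = 1.0
    if road_info.get("in_construction_zone"):
        score -= 0.5
    if road_info.get("faded_markings"):
        score -= 0.3
    return max(score, 0.0)

def assist_command(lateral_offset_m: float, reliability: float) -> float:
    """Steering correction scaled by reliability: full assistance when
    the boundary is trusted, weaker or no assistance otherwise."""
    if reliability < 0.3:
        return 0.0               # suppress assistance entirely
    gain = 0.5 * reliability     # proportional gain shrinks with doubt
    return -gain * lateral_offset_m
```

Suppressing (rather than merely attenuating) assistance below a floor reflects the "varies control content" language: the apparatus can switch behavior, not just scale it.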
Temporal information prediction in autonomous machine applications
In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in fields of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle.
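Under a constant-velocity assumption, the TTC prediction named above reduces to a simple closed form that the DNN's world-space velocity outputs could feed; this sketch (function and parameter names are assumptions) shows the relationship.

```python
def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Constant-velocity time-to-collision between an object and the
    ego-vehicle; infinite when the object is not closing."""
    if closing_speed_mps <= 0.0:
        return float("inf")      # object is receding or stationary
    return distance_m / closing_speed_mps
```

For example, an object 30 m ahead closing at 10 m/s yields a 3 s TTC.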
Incorporating rules into complex automated decision making
A set of input conditions is obtained. A plurality of potential decisions is obtained based at least in part on the set of input conditions. A rule-based system is used to process the plurality of potential decisions and obtain a set of one or more updated potential decisions. The rule-based system specifies a plurality of rules; each rule specifies a rule condition and a corresponding action, where the corresponding action is to be performed when the rule condition is met. Processing the plurality of potential decisions with the rule-based system includes: for a selected potential decision in the plurality of potential decisions, determining whether the rule condition of a selected rule among the plurality of rules is met, where that rule condition depends, at least in part, on the selected potential decision; and, in response to the rule condition being met, performing the corresponding action. The set of one or more updated potential decisions to be executed is output.
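The rule-processing loop described above can be sketched minimally as follows. The rule representation and the sample rules are illustrative assumptions; actions here may modify, replace, or drop a potential decision.

```python
# Minimal sketch of applying a rule-based system to potential decisions.

def apply_rules(decisions, rules):
    """For each potential decision, evaluate each rule's condition
    against it; when a condition is met, perform the corresponding
    action. Returns the set of updated potential decisions."""
    updated = []
    for decision in decisions:
        keep = decision
        for condition, action in rules:
            if condition(keep):
                keep = action(keep)
                if keep is None:
                    break            # rule vetoed this decision
        if keep is not None:
            updated.append(keep)
    return updated

# Example rules: veto fast lane changes, otherwise cap the speed.
rules = [
    (lambda d: d["maneuver"] == "lane_change" and d["speed"] > 30,
     lambda d: None),
    (lambda d: d["speed"] > 25,
     lambda d: {**d, "speed": 25}),
]
decisions = [
    {"maneuver": "lane_change", "speed": 35},
    {"maneuver": "keep_lane", "speed": 28},
]
```

Note that each rule condition takes the selected potential decision as input, mirroring the "dependent on the selected potential decision" language.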
Systems and methods for utilizing images to determine the position and orientation of a vehicle
Described are systems and methods to utilize images to determine the position and/or orientation of a vehicle (e.g., an autonomous ground vehicle) operating in an unstructured environment (e.g., environments such as sidewalks which are typically absent lane markings, road markings, etc.). The described systems and methods can determine the vehicle's position and orientation based on an alignment of annotated images captured during operation of the vehicle with a known annotated reference map. The translation and rotation applied to obtain alignment of the annotated images with the known annotated reference map can provide the position and the orientation of the vehicle.
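The alignment step can be illustrated with a standard least-squares 2D rigid registration (a Procrustes-style solution): given matched annotation points from the live image (in the vehicle frame) and the reference map (in the world frame), the recovered rotation and translation are the vehicle's orientation and position. All names are illustrative assumptions.

```python
import math

def align_2d(vehicle_pts, map_pts):
    """Least-squares rigid alignment m = R(theta) v + t between matched
    2D point sets; returns (position, heading) of the vehicle."""
    n = len(vehicle_pts)
    cvx = sum(p[0] for p in vehicle_pts) / n    # vehicle-frame centroid
    cvy = sum(p[1] for p in vehicle_pts) / n
    cmx = sum(p[0] for p in map_pts) / n        # map-frame centroid
    cmy = sum(p[1] for p in map_pts) / n
    s_sin = s_cos = 0.0
    for (vx, vy), (mx, my) in zip(vehicle_pts, map_pts):
        ax, ay = vx - cvx, vy - cvy
        bx, by = mx - cmx, my - cmy
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    theta = math.atan2(s_sin, s_cos)            # vehicle orientation
    tx = cmx - (cvx * math.cos(theta) - cvy * math.sin(theta))
    ty = cmy - (cvx * math.sin(theta) + cvy * math.cos(theta))
    return (tx, ty), theta                       # position, orientation
```

In practice the matched points would come from annotated features (curb corners, poles, etc.) rather than lane markings, consistent with the unstructured-environment setting.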
System and method for determining agricultural vehicle guidance quality based on a crop row boundary consistency parameter
A system for determining agricultural vehicle guidance quality includes an imaging device configured to capture image data depicting a plurality of crop rows present within a field as an agricultural vehicle travels across the field. Additionally, the system includes a controller communicatively coupled to the imaging device. The controller is configured to determine a guidance line for guiding the agricultural vehicle relative to the plurality of crop rows based on the captured image data. Furthermore, the controller is configured to determine a crop row boundary consistency parameter associated with one or more crop rows of the plurality of crop rows present within a region of interest of the captured image data. Moreover, the controller is configured to determine a quality metric for the guidance line based on the crop row boundary consistency parameter.
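One plausible reading of the consistency parameter is the spread of detected crop-row boundary points about the fitted guidance line, with quality falling as the spread grows. The formulas and scale below are illustrative assumptions, not taken from the patent.

```python
# Hypothetical consistency parameter and quality metric for a guidance line.

def boundary_consistency(offsets_m):
    """Standard deviation of the lateral offsets of boundary points
    from the guidance line; smaller means more consistent rows."""
    n = len(offsets_m)
    mean = sum(offsets_m) / n
    var = sum((o - mean) ** 2 for o in offsets_m) / n
    return var ** 0.5

def guidance_quality(consistency_m, scale_m=0.1):
    """Map consistency onto a 0..1 quality metric (1 = perfect rows)."""
    return scale_m / (scale_m + consistency_m)
```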
Using mapped elevation to determine navigational parameters
Systems and methods for navigating a host vehicle. The system may perform operations including receiving, from an image capture device, at least one image representative of an environment of the host vehicle; analyzing the at least one image to identify an object in the environment of the host vehicle; determining a location of the host vehicle; receiving map information associated with the determined location of the host vehicle, wherein the map information includes elevation information associated with the environment of the host vehicle; determining a distance from the host vehicle to the object based on at least the elevation information; and determining a navigational action for the host vehicle based on the determined distance.
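A hedged sketch of how mapped elevation can enter a monocular range estimate: a camera at height h sees the object's ground contact at some depression angle below the horizon, and if the map says the ground at the object is higher than at the host, the effective camera height shrinks. Parameter names are illustrative assumptions.

```python
import math

def distance_to_object(camera_height_m: float,
                       depression_angle_rad: float,
                       delta_elevation_m: float = 0.0) -> float:
    """Flat-ground range estimate corrected by mapped elevation:
    delta_elevation_m is the object's ground elevation minus the
    host's, taken from the map."""
    effective_height = camera_height_m - delta_elevation_m
    return effective_height / math.tan(depression_angle_rad)
```

Without the elevation term, an uphill object would be placed too far away; the map correction is what makes the distance, and hence the navigational action, trustworthy.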
Artificial intelligence apparatus for cleaning in consideration of user's action and method for the same
An AI robot for cleaning in consideration of a user's action includes a camera to acquire first image data of the user, a cleaning unit including a suction unit and a mopping unit, a driving unit configured to drive the AI robot, and a processor to determine the user's action using the first image data, determine a cleaning schedule in consideration of the user's action, and control the cleaning unit and the driving unit based on the determined cleaning schedule.
Remote control apparatus, system, method, and program
A remote control apparatus performs: calculating a path and a moving speed for a control target apparatus to reach a desired destination from its current position; measuring a communication delay time between the remote control apparatus and the control target apparatus; estimating an overshoot region based on the communication delay time, a stored size of the control target apparatus, and the moving speed; predicting whether the control target apparatus will come into contact with a peripheral object, based on the path, the overshoot region, and stored peripheral object information; calculating moving speed information to be given to the control target apparatus so that the moving direction of the control target apparatus changes by a predetermined value or more when it is predicted that the control target apparatus will come into contact with a peripheral object; and transmitting a control signal including the moving speed information to the control target apparatus.
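The overshoot estimate above combines exactly the three stated inputs: during the communication delay the target keeps moving at its commanded speed, and its body size extends the swept region. This sketch assumes a worst-case linear model; the safe-speed inversion is an illustrative addition.

```python
# Hedged sketch of the delay-based overshoot estimate.

def overshoot_distance(speed_mps: float, delay_s: float,
                       body_length_m: float) -> float:
    """Worst-case travel beyond the commanded stop point: distance
    covered during the communication delay plus the body size."""
    return speed_mps * delay_s + body_length_m

def safe_speed(obstacle_distance_m: float, delay_s: float,
               body_length_m: float) -> float:
    """Largest speed whose overshoot still clears the nearest obstacle."""
    if delay_s <= 0:
        return float("inf")
    margin = obstacle_distance_m - body_length_m
    return max(margin / delay_s, 0.0)
```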
Automated restaurant
The present application discloses an automated restaurant comprising: a kitchen; a customer-tracking area comprising a dining area; and a plurality of vehicles. The kitchen comprises a storage apparatus to store ingredient containers, a transfer apparatus to move ingredient containers, and one or more cooking stations. Each vehicle is configured to move one or more food containers from cooking stations to dining tables. A tracking system comprises fixedly mounted sensors such as cameras and lidars. The tracking system can dynamically map out the fixtures, humans, and vehicles in the restaurant, and information from the tracking system is used to control the motion of the vehicles. The tracking system can also dynamically track the positions of customers in the customer-tracking area, so that food ordered by specific customers may be automatically delivered by vehicles to the customers' locations.
Mobile robot system and method for generating map data using straight lines extracted from visual images
A mobile robot is configured to navigate on a sidewalk and deliver a delivery to a predetermined location. The robot has a body and an enclosed space within the body for storing the delivery during transit. At least two cameras are mounted on the robot body and are adapted to take visual images of an operating area. A processing component is adapted to extract straight lines from the visual images taken by the cameras and generate map data based at least partially on the images. A communication component is adapted to send and receive image and/or map data. A mapping system includes at least two such mobile robots, with the communication component of each robot adapted to send and receive image data and/or map data to the other robot. A method involves operating such a mobile robot in an area of interest in which deliveries are to be made.
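The straight-line extraction step is classically done with a Hough transform over edge points, which is one plausible realization of the processing component described above. This toy sketch works on a point list rather than an image, and the angular resolution and vote threshold are illustrative assumptions.

```python
import math

def hough_lines(edge_points, n_theta=180, rho_step=1.0, threshold=3):
    """Return (rho, theta) line parameters supported by at least
    `threshold` edge points, using the normal form
    rho = x*cos(theta) + y*sin(theta)."""
    acc = {}                                   # (rho_bin, theta_bin) -> votes
    for x, y in edge_points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round((x * math.cos(theta) + y * math.sin(theta)) / rho_step)
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return [(rho * rho_step, math.pi * t / n_theta)
            for (rho, t), votes in acc.items() if votes >= threshold]
```

The extracted (rho, theta) pairs are compact and viewpoint-stable, which is what makes straight lines attractive primitives for the map data the robots exchange.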