Patent classifications
G05D1/249
Magnetic apparatus for centering caster wheels
An autonomous delivery vehicle having one or more caster wheels that may be held off the ground for a portion of the time that the autonomous delivery vehicle travels. Each caster wheel is mounted in a pivot with a centering mechanism that holds the caster wheel in a design orientation. The caster wheel in the design orientation maximizes the view of forward-looking sensors on the autonomous delivery vehicle. The centering mechanism uses magnetic attraction/repulsion to center the caster wheel and may incorporate a plurality of permanent magnets and/or electromagnets.
Semiautonomous apparatus for distribution of edible products, and respective operation process
The present invention relates to a semiautonomous apparatus (1) for the preparation and distribution of edible products, such as beverages. The apparatus presents a subsystem for preparing edible products from portions (2) of edible substances, for example capsules of roasted and ground coffee for preparing espresso-type coffee, comprising at least one brewing device (4); a propulsion and locomotion subsystem (5); and a control device (8) arranged so that it can control said subsystems and so that the operation of said brewing device (4) and of said propulsion and locomotion means (5) is mutually exclusive, such that the semiautonomous apparatus (1) does not dispense edible products while being moved by the propulsion and locomotion means (5).
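The mutual exclusion the control device (8) enforces can be illustrated with a toy interlock. This is a minimal sketch, not the patent's implementation; the class and method names are invented for illustration.

```python
# Toy interlock illustrating the mutually exclusive operation described in
# the abstract: the controller refuses to start brewing while the drive is
# active, and refuses to move while brewing is in progress.

class BeverageRobotController:
    def __init__(self):
        self.brewing = False
        self.moving = False

    def start_brewing(self):
        if self.moving:
            return False          # interlock: cannot brew while moving
        self.brewing = True
        return True

    def start_moving(self):
        if self.brewing:
            return False          # interlock: cannot move while brewing
        self.moving = True
        return True

    def stop_brewing(self):
        self.brewing = False

    def stop_moving(self):
        self.moving = False

ctrl = BeverageRobotController()
assert ctrl.start_moving()
assert not ctrl.start_brewing()   # refused while in motion
ctrl.stop_moving()
assert ctrl.start_brewing()       # allowed once stationary
```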
Convolved augmented range LIDAR nominal area
A method of lidar imaging pulses a scene with laser pulse sequences from a laser light source. Reflected light from the scene is measured for each laser pulse to form a sequence of time-resolved light signals. Adjoining time bins in the time-resolved light signals are combined to form super time bins. A three-dimensional image of the scene is created from distances determined based on maximum-intensity super time bins. One or more objects are located within the image. For each object, the time-resolved light signals are combined to form a single object time-resolved light signal from which to determine distance to the object.
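The super-time-bin step can be sketched as follows. This is a minimal illustration of the idea under stated assumptions: the bin width, pairing factor, and toy return histogram are invented, and range is taken from the peak super bin via the usual round-trip relation d = c·t/2.

```python
# Adjacent time bins of a time-resolved return signal are summed into
# super bins, and range is read from the maximum-intensity super bin.

C = 299_792_458.0  # speed of light, m/s

def range_from_signal(signal, bin_width_s, combine=2):
    """Combine `combine` adjoining bins into super bins and return the
    distance implied by the maximum-intensity super bin."""
    n = len(signal) // combine * combine
    super_bins = [sum(signal[i:i + combine]) for i in range(0, n, combine)]
    peak = max(range(len(super_bins)), key=super_bins.__getitem__)
    # Round-trip time at the centre of the peak super bin.
    t = (peak + 0.5) * combine * bin_width_s
    return C * t / 2.0

signal = [0, 1, 0, 2, 9, 8, 1, 0]       # toy return histogram, 1 ns bins
print(range_from_signal(signal, 1e-9))  # range (m) from the peak super bin
```

Summing adjoining bins trades range resolution for signal-to-noise, which is why the peak becomes easier to detect in the super-binned signal.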
Autonomous electric vehicle charging
Methods and systems for autonomous vehicle recharging or refueling are disclosed. Autonomous electric vehicles may be automatically recharged by routing the vehicles to available charging stations when not in operation, according to methods described herein. A charge level of the battery of an autonomous electric vehicle may be monitored until it reaches a recharging threshold, at which point an on-board computer may generate a predicted use profile for the vehicle. Based upon the predicted use profile, a time and location for the vehicle to recharge may be determined. In some embodiments, the vehicle may be controlled to automatically travel to a charging station while not in use, recharge the battery, and return to its starting location.
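The threshold check and use-profile-based scheduling described above can be sketched as below. The threshold value, the hour-indexed use-profile representation, and the station fields are illustrative assumptions, not the patent's actual parameters.

```python
# Sketch of the recharging decision: monitor charge against a threshold,
# then pick an idle hour from a predicted use profile and a nearby station.

def needs_recharge(charge_level, threshold=0.3):
    """True once the battery charge fraction falls to the recharge threshold."""
    return charge_level <= threshold

def pick_charging_window(predicted_use, stations):
    """Pick the first predicted-idle hour and the nearest station.
    `predicted_use` maps hour -> expected trip demand; idle means zero demand."""
    idle_hours = [h for h, demand in sorted(predicted_use.items()) if demand == 0]
    if not idle_hours or not stations:
        return None
    nearest = min(stations, key=lambda s: s["distance_km"])
    return {"hour": idle_hours[0], "station": nearest["name"]}

profile = {9: 3, 10: 0, 11: 2}   # predicted trips per hour
stations = [{"name": "A", "distance_km": 4.0}, {"name": "B", "distance_km": 1.5}]
if needs_recharge(0.25):
    print(pick_charging_window(profile, stations))  # {'hour': 10, 'station': 'B'}
```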
System and method for generating precise road lane map data
An in-vehicle system for generating precise, lane-level road map data includes a GPS receiver operative to acquire positional information associated with a track along a road path. An inertial sensor provides local measurements of acceleration and turn rate over time along the track, and a camera acquires image data of the road path along the track. A processor is operative to receive the local measurements from the inertial sensor and image data from the camera over time in conjunction with multiple tracks along the road path, and to improve upon the accuracy of the GPS receiver through curve fitting. Any or all of the GPS receiver, inertial sensor, and camera may be disposed in a smartphone. The road map data may be uploaded to a central data repository for post-processing when the vehicle passes through a WiFi cloud to generate the precise road map data, which may include data collected from multiple drivers.
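One simple way multiple noisy passes over the same road can sharpen a track through curve fitting is a least-squares fit pooled across all passes. This toy sketch assumes a straight road segment modeled as latitude linear in longitude; the data and the linear model are illustrative, not the patent's method.

```python
# Pool GPS samples from several passes over one road segment and fit an
# ordinary-least-squares line y = a*x + b through all of them, so the
# fitted curve is more accurate than any single noisy track.

def fit_line(points):
    """Ordinary least squares y = a*x + b over (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Three noisy passes over the same straight segment with true y = 2x + 1.
tracks = [
    [(0, 1.1), (1, 2.9), (2, 5.2)],
    [(0, 0.8), (1, 3.1), (2, 4.9)],
    [(0, 1.0), (1, 3.0), (2, 5.0)],
]
pooled = [p for t in tracks for p in t]
a, b = fit_line(pooled)
print(round(a, 2), round(b, 2))   # close to the true slope 2 and intercept 1
```

Real lane-level mapping would fit splines or clothoids in a projected coordinate frame rather than a single line, but the pooling-then-fitting structure is the same.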
Automated inspection of autonomous vehicle equipment
An equipment inspection system receives data captured by a sensor of an autonomous vehicle (AV). The captured data describes a current state of equipment for servicing the AV. The equipment inspection system compares the captured data to a model describing an expected state of the equipment. The equipment inspection system determines, based on the comparison, that the equipment differs from the expected state. The equipment inspection system may transmit data describing the current state of the equipment to an equipment manager. The equipment manager may schedule maintenance for the equipment based on the current state of the equipment.
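The comparison step (captured state versus a model of the expected state) can be sketched as a field-by-field diff. The field names, tolerance, and dict-based state representation are illustrative assumptions, not the patent's data model.

```python
# Diff captured equipment state against an expected-state model and report
# deviations, as an equipment manager might use to schedule maintenance.

def find_deviations(captured, expected, tolerance=0.05):
    """Return fields whose captured value differs from the expected model.
    Numeric fields use a relative tolerance; other fields must match exactly."""
    deviations = {}
    for field, want in expected.items():
        got = captured.get(field)
        if isinstance(want, (int, float)) and isinstance(got, (int, float)):
            if abs(got - want) > tolerance * abs(want):
                deviations[field] = {"expected": want, "captured": got}
        elif got != want:
            deviations[field] = {"expected": want, "captured": got}
    return deviations

expected = {"hose_pressure_psi": 120.0, "nozzle_attached": True}
captured = {"hose_pressure_psi": 96.0, "nozzle_attached": True}
print(find_deviations(captured, expected))  # flags the low hose pressure
```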
Information processing apparatus, information processing method, and program
Provided is an information processing apparatus including: a motion control unit (107) that controls a motion of an autonomous moving body (10), in which, when internal data related to the autonomous moving body is transmitted or received, the motion control unit causes the autonomous moving body to express, by an action, that the transmission/reception of the internal data is being executed.
Autonomous painting systems and related methods
An automated mobile paint robot, according to particular embodiments, comprises: (1) a wheeled base; (2) at least one paint sprayer; (3) at least one pump; (4) a vision system; (5) a GPS navigation system; and (6) a computer controller configured to: (A) generate a room painting plan using one or more inputs from the GPS navigation system, vision system, etc.; (B) control movement of the automated mobile paint robot across a support surface; (C) use the vision system to position the wheeled base in a suitable position from which to paint a desired area using the at least one paint sprayer; and (D) use the at least one pump to activate the at least one paint sprayer to paint a swath (e.g., swatch) of paint from the suitable position.
Control method for carpet drift in robot motion, chip, and cleaning robot
A control method for carpet drift in robot motion, a chip, and a cleaning robot are disclosed. The control method includes: performing a fusion calculation on the current position coordinate of the robot according to data sensed by a sensor every first preset time, calculating the amount of drift of the robot relative to a preset direction according to the relative position relationship between the current position and an initial position of the robot, and accumulating to obtain a drift statistical value; and counting the number of acquisitions of the position coordinate within a second preset time, averaging to obtain a drift average value, determining the state of the robot deviating from the preset direction according to the drift average value, and setting a corresponding Proportional-Integral-Derivative (PID) proportionality coefficient to synchronously adjust the speeds of the left and right drive wheels of the robot while reducing the deviation angle of the robot.
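The accumulate-average-correct loop described above can be sketched as follows. The gain, base speed, sign convention, and sample data are illustrative assumptions; a full implementation would also use the integral and derivative terms of the PID controller.

```python
# Drift samples relative to the preset heading are accumulated over a
# window, averaged, and fed through a proportional term that speeds up
# one drive wheel and slows the other to steer back on course.

class DriftCorrector:
    def __init__(self, kp=0.5, base_speed=100.0):
        self.kp = kp                 # PID proportional coefficient
        self.base_speed = base_speed
        self.samples = []

    def add_drift_sample(self, drift):
        self.samples.append(drift)

    def wheel_speeds(self):
        """Average the windowed drift and split the correction across wheels."""
        if not self.samples:
            return self.base_speed, self.base_speed
        avg = sum(self.samples) / len(self.samples)
        correction = self.kp * avg
        # Positive drift (veering right, say) -> slow the left wheel,
        # speed up the right wheel, turning the robot back on heading.
        return self.base_speed - correction, self.base_speed + correction

c = DriftCorrector()
for d in [2.0, 4.0, 3.0]:      # drift measurements over one window
    c.add_drift_sample(d)
print(c.wheel_speeds())        # (98.5, 101.5)
```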
Scenario identification for validation and training of machine learning based models for autonomous vehicles
A system uses a machine learning based model to determine attributes describing states of mind and behavior of traffic entities in video frames captured by an autonomous vehicle. The system classifies video frames according to traffic scenarios depicted, where each scenario is associated with a filter based on vehicle attributes, traffic attributes, and road attributes. The system identifies a set of video frames associated with ground truth scenarios for validating the accuracy of the machine learning based model and predicts attributes of traffic entities in the video frames. The system analyzes video frames captured after the set of video frames to determine actual attributes of the traffic entities. Based on a comparison of the predicted attributes and actual attributes, the system determines a likelihood of the machine learning based model making accurate predictions and uses the likelihood to generate a navigation action table for controlling the autonomous vehicle.
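The validation step (predicted attributes checked against attributes observed in later frames) can be sketched as a match-rate computation. The entity and attribute names are illustrative; the real system would derive actual attributes from subsequent video, not from hand-labeled dicts.

```python
# Compare predicted traffic-entity attributes against those observed in
# later frames; the match rate serves as the model's accuracy likelihood.

def accuracy_likelihood(predicted, actual):
    """Fraction of (entity, attribute) predictions confirmed by later frames."""
    total = matches = 0
    for entity, attrs in predicted.items():
        for attr, value in attrs.items():
            total += 1
            if actual.get(entity, {}).get(attr) == value:
                matches += 1
    return matches / total if total else 0.0

predicted = {"pedestrian_1": {"intends_to_cross": True},
             "cyclist_3": {"aware_of_vehicle": False}}
actual = {"pedestrian_1": {"intends_to_cross": True},
          "cyclist_3": {"aware_of_vehicle": True}}
print(accuracy_likelihood(predicted, actual))   # 0.5
```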