Patent classifications
G05D1/20
DIGITAL CO-PILOT
A system and method for a digital co-pilot are provided. The method includes receiving a plurality of inputs from vehicle operating systems, wherein the plurality of inputs comprise engine parameters, control system parameters, or electrical system parameters; identifying one or more first trends in the plurality of inputs; diagnosing one or more first potential conditions based on the first trends; determining a first course of action based on diagnosing the one or more first potential conditions; and generating one or more first commands to vehicle controls based on determining the first course of action.
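The abstract recites a pipeline of inputs → trends → diagnoses → course of action → commands. A minimal Python sketch of one pass through that pipeline is below; the parameter names, thresholds, and the overheat example are illustrative assumptions, not details from the patent.

```python
def copilot_step(readings):
    """One illustrative pass: inputs -> trends -> diagnosed
    conditions -> course of action -> generated commands.
    Names and thresholds are hypothetical, not from the patent."""
    # Identify trends: compare the latest reading to the series mean.
    trends = {}
    for name, series in readings.items():
        mean = sum(series) / len(series)
        trends[name] = "rising" if series[-1] > mean else "stable"
    # Diagnose potential conditions based on the trends.
    conditions = []
    if trends.get("engine_temp") == "rising":
        conditions.append("possible_overheat")
    # Determine a course of action and generate commands to vehicle controls.
    commands = []
    if "possible_overheat" in conditions:
        commands.append({"control": "engine_power", "action": "reduce"})
    return commands

# Example: a rising engine-temperature series yields a power-reduction command.
cmds = copilot_step({"engine_temp": [80, 85, 95], "bus_voltage": [28, 28, 28]})
```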
APPARATUS, SYSTEM, AND METHOD OF USING DEPTH ASSESSMENT FOR AUTONOMOUS ROBOT NAVIGATION
An apparatus, system, and method of operating an autonomous mobile robot having a height of at least one meter are provided. The robot includes a robot body; at least two three-dimensional depth camera sensors affixed to the robot body proximate to the height, wherein the sensors are directed toward a floor surface and, in combination, provide a substantially 360-degree field of view of the floor surface around the robot body; and a processing system for receiving pixel data within the field of view of the sensors; obtaining missing or erroneous pixels from the pixel data; comparing the missing or erroneous pixels to a template, wherein the template comprises at least an indication of ones of the missing or erroneous pixels indicative of the robot body and a shadow of the robot body; and outputting an indication of obstacles in or near the field of view based on the comparing.
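The claimed comparison step can be pictured as masking out the missing pixels that the template says are expected (the robot's own body and shadow) and treating any remaining missing or erroneous pixels as candidate obstacles. A minimal NumPy sketch under that reading follows; the invalid-pixel convention (depth value 0) and the array shapes are assumptions for illustration only.

```python
import numpy as np

def detect_obstacles(depth_frame, body_template, invalid_value=0):
    """Compare missing/erroneous depth pixels against a template of
    pixels expected to be missing (robot body and its shadow), and
    flag the unexpected ones as a candidate-obstacle mask.
    The invalid_value convention is an assumption for this sketch."""
    # Missing or erroneous pixels: the sensor returned the invalid value.
    missing = (depth_frame == invalid_value)
    # Pixels the template marks True are expected to be missing
    # (body/shadow); only the rest indicate possible obstacles.
    return missing & ~body_template

# Example: one missing pixel outside the body/shadow template.
frame = np.array([[0, 5, 5],
                  [0, 0, 5],
                  [5, 5, 0]])
template = np.array([[True,  False, False],
                     [True,  True,  False],
                     [False, False, False]])
mask = detect_obstacles(frame, template)
```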