Patent classifications
G05D1/60
Artificial intelligence apparatus for determining path of user and method for the same
An embodiment of the present invention provides an artificial intelligence apparatus comprising: a communication unit configured to communicate with a plurality of external AI apparatuses; and a processor configured to receive sound signals of the user from the plurality of external AI apparatuses, calculate a distance and a variation of the distance from each of the plurality of external AI apparatuses to the user based on the received sound signals, determine a current path of the user based on the calculated distance and the calculated variation of the distance, and determine a future path of the user based on the current path.
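The abstract does not specify how distances translate into a path; a minimal sketch under common assumptions is to trilaterate the user's position from the per-device ranges at two instants, take the displacement as the current path, and extrapolate it for the future path. All function names and the choice of three anchors are illustrative, not from the patent:

```python
def trilaterate(anchors, dists):
    """Closed-form 2-D position fix from exactly three anchor positions and ranges."""
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = dists
    # Linearised range equations: A @ [x, y] = b
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = d0**2 - d1**2 - x0**2 + x1**2 - y0**2 + y1**2
    b2 = d0**2 - d2**2 - x0**2 + x2**2 - y0**2 + y2**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - a12 * b2) / det, (a11 * b2 - a21 * b1) / det)

def predict_position(anchors, dists_t0, dists_t1, dt, horizon):
    """Current path = velocity between two fixes; future path extrapolates it."""
    x0, y0 = trilaterate(anchors, dists_t0)
    x1, y1 = trilaterate(anchors, dists_t1)
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return (x1 + vx * horizon, y1 + vy * horizon)
```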
System, method and apparatus for object identification
The present disclosure provides a system, a method and an apparatus for object identification, capable of solving the problem in the related art that a system for centralized control and management of unmanned vehicles may not be able to identify an object effectively. The system for object identification includes a sensing device, a control device and one or more unmanned vehicles. The control device is configured to determine an object not belonging to a predetermined category as an unknown object by performing object identification based on sensed data; mark the unknown object in the sensed data including the unknown object; determine an unmanned vehicle within a predetermined range from the unknown object; transmit the sensed data with the marked unknown object and an instruction to identify the unknown object to the determined unmanned vehicle; receive a feedback message from the unmanned vehicle, and when the feedback message carries information on an object category, save the information on the object category and mark a category of the unknown object as the saved object category.
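The control-device flow in the abstract (classify, flag unknowns, dispatch the nearest in-range vehicle, record feedback) can be sketched in a few lines. The class, field names, and data shapes below are illustrative assumptions, not from the patent:

```python
import math
from dataclasses import dataclass, field

@dataclass
class ControlDevice:
    """Toy control device: flags objects outside known categories, dispatches
    the nearest in-range unmanned vehicle to identify them, and records feedback."""
    known_categories: set
    vehicles: dict                                # vehicle_id -> (x, y) position
    catalog: dict = field(default_factory=dict)   # object_id -> category

    def classify(self, object_id, category, position, max_range):
        """Return the vehicle tasked with identification, or None."""
        if category in self.known_categories:
            self.catalog[object_id] = category
            return None
        # Unknown object: pick the closest vehicle within max_range.
        best = None
        for vid, (vx, vy) in self.vehicles.items():
            dist = math.hypot(vx - position[0], vy - position[1])
            if dist <= max_range and (best is None or dist < best[1]):
                best = (vid, dist)
        return best[0] if best else None

    def on_feedback(self, object_id, category=None):
        """Save the category a vehicle reports back for an unknown object."""
        if category is not None:
            self.catalog[object_id] = category
```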
Methods of climb and glide operations of a high altitude long endurance aircraft
Systems, devices, and methods including: at least one unmanned aerial vehicle (UAV); at least one battery pack comprising at least one battery; and at least one motor of the at least one UAV, where the at least one battery is configured to transfer energy to the at least one motor; where power from the at least one motor is configured to ascend the at least one UAV from a first altitude to a second altitude when the at least one battery is at or near capacity, and where the second altitude is higher than the first altitude; and where power from the at least one motor is configured to descend the at least one UAV to the first altitude after the Sun has set to conserve energy stored in the at least one battery.
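The climb-and-glide policy amounts to a small decision rule: climb while solar energy is surplus and the battery is near full, glide back down after sunset to spare the battery. A minimal sketch, with all parameter names and the 95% threshold chosen for illustration:

```python
def climb_glide_action(battery_frac, sun_up, altitude, low_alt, high_alt,
                       full_thresh=0.95):
    """Decide motor action for a solar high-altitude long-endurance aircraft:
    climb on surplus solar energy, glide down at night to conserve battery."""
    if sun_up and battery_frac >= full_thresh and altitude < high_alt:
        return "climb"   # store surplus energy as altitude
    if not sun_up and altitude > low_alt:
        return "glide"   # trade altitude for endurance; battery stays idle
    return "hold"
```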
Integrated vision-based and inertial sensor systems for use in vehicle navigation
A navigation system useful for providing speed, heading, and other navigational data to a drive system of a moving body, e.g., a vehicle body or a mobile robot, to navigate through a space. The navigation system integrates an inertial navigation system, e.g., a unit or system based on an inertial measurement unit (IMU), with a vision-based navigation system unit or system such that the inertial navigation system can provide real time navigation data and the vision-based navigation can provide periodic, but more accurate, navigation data that is used to correct the inertial navigation system's output. The navigation system was designed to provide low-effort integration of inertial and video data. The methods and devices used in the new navigation system address problems associated with high accuracy dead reckoning systems (such as a typical vision-based navigation system) and enhance performance with low cost IMUs.
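The correction scheme described above, with high-rate inertial dead reckoning periodically reset by sparse but accurate vision fixes, can be illustrated in a few lines. This is a deliberately simplified sketch (a hard reset rather than a filter); the function and argument names are assumptions:

```python
def fuse(imu_deltas, vision_fixes, start=(0.0, 0.0)):
    """Dead-reckon position from per-step IMU displacement estimates,
    snapping to a vision fix whenever one is available for that step.

    imu_deltas:   [(dx, dy), ...] high-rate inertial displacements
    vision_fixes: {step_index: (x, y)} sparse, accurate position fixes
    """
    x, y = start
    track = []
    for i, (dx, dy) in enumerate(imu_deltas):
        x, y = x + dx, y + dy      # real-time inertial propagation
        if i in vision_fixes:      # periodic drift correction
            x, y = vision_fixes[i]
        track.append((x, y))
    return track
```

A full system would blend the fix with the inertial estimate (e.g., a Kalman update) rather than overwrite it, but the reset makes the drift-correction role of the vision data explicit.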
Trajectory prediction from precomputed or dynamically generated bank of trajectories
Among other things, techniques are described for predicting how an agent (e.g., a vehicle, bicycle, pedestrian, etc.) will move in an environment based on prior movement, the road network, the surrounding objects and/or other relevant environmental factors. One trajectory prediction technique involves generating a probability map for an agent's movement. Another trajectory prediction technique involves generating a trajectory lattice for an agent's movement. In addition, a different trajectory prediction technique involves multi-modal regression where a classifier (e.g., a neural network) is trained to classify the probability of a number of (learned) modes such that each mode produces a trajectory based on the current input.
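Of the three techniques, the trajectory lattice is the easiest to make concrete: enumerate a bank of kinematically plausible candidate trajectories, which a downstream scorer then ranks against map and agent context. A minimal sketch using constant-speed, straight-line candidates fanned over a set of headings (the simplification and all names are illustrative):

```python
import math

def trajectory_lattice(speed, headings, horizon, steps):
    """One candidate trajectory per heading: `steps` waypoints sampled
    uniformly over `horizon` seconds at constant `speed`."""
    dt = horizon / steps
    lattice = []
    for h in headings:
        vx, vy = speed * math.cos(h), speed * math.sin(h)
        lattice.append([(vx * dt * k, vy * dt * k) for k in range(1, steps + 1)])
    return lattice
```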
Multi-task learning for real-time semantic and/or depth aware instance segmentation and/or three-dimensional object bounding
A machine-learning (ML) architecture for determining three or more outputs, such as a two and/or three-dimensional region of interest, semantic segmentation, direction logits, depth data, and/or instance segmentation associated with an object in an image. The ML architecture may output these outputs at a rate of 30 or more frames per second on consumer grade hardware.
Remote position management
The application relates to systems and techniques for remotely navigating a marine vessel. The systems can include a dynamic positioning system and/or a marine location management system for remotely navigating a marine vessel. The marine location management system can include a communication module for receiving a geographic location of the marine vessel and transmitting a navigation plan to a vessel control system. The marine location management system can also include a processor adapted to determine the geographical coordinates of the marine location and the marine vessel. In some cases, the marine vessel can include a thruster system adapted to receive the navigation plan and determine a set of thrust vectors based on the navigation plan.
Task Management For Unmanned Aerial Vehicles
Technology is disclosed herein for operating a tasking service for UAVs. In an implementation, a tasking service receives task parameters which include a desired state of the UAVs for performing a task and service information associated with performing the task. The tasking service continuously receives state information from the UAVs which identifies a present state of the UAVs and continuously evaluates the present state of the UAVs with respect to the desired state. When the present state of a UAV matches the desired state, the tasking service assigns the task to the UAV and provides the service information to the UAV. In an implementation, the tasking service receives task parameters via an application programming interface from a client application in communication with the tasking service.
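The assignment loop described above reduces to matching each reported present state against queued desired states. A toy sketch, treating states as key-value dictionaries and "matches" as subset equality (both choices, and all names, are assumptions for illustration):

```python
class TaskingService:
    """Queue (desired_state, task, service_info) tuples and assign a task
    to a UAV as soon as its reported state matches the desired state."""

    def __init__(self):
        self.pending = []      # [(desired_state, task, service_info)]
        self.assignments = {}  # uav_id -> (task, service_info)

    def submit(self, desired_state, task, service_info):
        """Task parameters, e.g. received via the client-facing API."""
        self.pending.append((desired_state, task, service_info))

    def report_state(self, uav_id, state):
        """Called continuously with each UAV's present state."""
        for i, (desired, task, info) in enumerate(self.pending):
            # Match: every desired key is present with the required value.
            if all(state.get(k) == v for k, v in desired.items()):
                self.assignments[uav_id] = (task, info)
                del self.pending[i]
                return task
        return None
```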
MOTION CONTROL METHOD, CONTROLLER, AND STORAGE MEDIUM
The present disclosure discloses a motion control method and apparatus for a legged robot, a legged robot, a computer-readable storage medium, and a computer program product. The legged robot includes at least two foot-ends. The method includes: receiving a bound instruction for the legged robot in response to the foot-ends of the legged robot standing in unit regions that are independent of one another, the bound instruction being used for instructing the legged robot to bound to a target unit region from at least two unit regions in which the legged robot is currently located; and controlling the legged robot to bound to the target unit region in response to the bound instruction, a distance between any two foot-ends of the legged robot in the target unit region being less than a distance between two corresponding foot-ends before bounding.
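The claim's post-condition, that every pair of foot-ends ends up closer after the bound than the corresponding pair was before, can be stated as a small predicate over the foot positions. A sketch with assumed names and 2-D foot coordinates:

```python
import math
from itertools import combinations

def bound_tightens_stance(feet_before, feet_after):
    """True iff, for every pair of foot-ends, the post-bound distance is
    strictly less than the pre-bound distance between the same two feet.
    feet_before/feet_after are parallel lists of (x, y) positions."""
    return all(math.dist(a_after, b_after) < math.dist(a_before, b_before)
               for (a_before, a_after), (b_before, b_after)
               in combinations(zip(feet_before, feet_after), 2))
```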