Patent classifications
G05D1/2435
Information processing apparatus, information processing method, information processing system, and storage medium
An information processing apparatus for determining control values for controlling a position of a vehicle for conveying a cargo includes an acquisition unit configured to acquire first information for identifying a three-dimensional shape of the cargo based on a captured first image of the cargo, and second information for identifying, based on a captured second image of an environment where the vehicle moves, a distance between an object in the environment and the vehicle, and a determination unit configured to, based on the first information and the second information, determine the control values for preventing the cargo and the object from coming closer than a predetermined distance.
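The distance constraint described in this abstract can be sketched in simplified form. This is a hypothetical illustration, not the patented method: the cargo's shape is reduced to a half-width, the environment to a single object distance, and the control value to a speed command; all names and the proportional slow-down rule are assumptions.

```python
# Hypothetical sketch: scale the commanded speed so the cargo and a
# nearby object never come closer than a predetermined gap.
def control_value(cargo_half_width: float, object_distance: float,
                  min_gap: float, max_speed: float) -> float:
    """Return a speed command that keeps at least min_gap of clearance
    between the cargo edge and the object; stop if the gap is violated."""
    gap = object_distance - cargo_half_width  # clearance from cargo edge to object
    if gap <= min_gap:
        return 0.0                            # too close: stop the vehicle
    # slow down proportionally as the clearance shrinks toward min_gap
    return min(max_speed, max_speed * (gap - min_gap) / min_gap)
```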
Hull behavior control system and marine vessel
A hull behavior control system for controlling behavior of a hull of a marine vessel includes a memory and at least one controller coupled to the memory. The at least one controller is configured or programmed to control a steering device that changes the traveling direction of the marine vessel, obtain a water surface shape around the marine vessel, estimate movement of a wave based on the water surface shape, and, when it is determined that the hull rides the wave whose movement has been estimated, control the steering device so as to reduce an influence of the wave on the hull.
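The steering step in this abstract can be sketched as a simple heading correction toward the estimated wave direction. This is an illustrative assumption about what "reduce an influence of the wave" might mean in control terms (meeting the wave closer to head-on); the gain and the signed-angle convention are not from the patent.

```python
# Hypothetical steering correction: when the hull is riding an
# estimated wave, steer proportionally toward the wave's direction.
def steering_command(hull_heading_deg: float,
                     wave_direction_deg: float,
                     riding_wave: bool) -> float:
    """Return a heading correction in degrees (0.0 when not riding a wave)."""
    if not riding_wave:
        return 0.0
    # smallest signed angle from the hull heading to the wave direction
    error = (wave_direction_deg - hull_heading_deg + 180.0) % 360.0 - 180.0
    return 0.5 * error  # proportional steering gain (assumed value)
```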
Method for constructing episodic memory model based on rat brain visual pathway and entorhinal-hippocampal cognitive mechanism
A method for constructing an episodic memory model based on the rat brain visual pathway and the entorhinal-hippocampal structure is provided, mainly applied to environment cognition and navigation of an intelligent mobile robot to complete the tasks of cognitive map construction and target-oriented navigation. Image information of the environment and the head-direction angle and speed of the robot are collected; the head-direction angle and speed are input into the entorhinal-hippocampal CA3 neural computational model to obtain the robot's precise position. The visual information is input into the computational model of the visual pathway to obtain the scene information in the robot's current view. The two kinds of information are fused and stored in a cognitive node with a topological relationship. The scene information is used to correct path integration errors during the robot's exploration, thereby constructing an episodic cognitive map representing the environment.
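The core of the position estimate above is path integration from head-direction angle and speed. A minimal dead-reckoning step can be sketched as follows; the neural CA3 model itself is far richer, and the function and variable names here are purely illustrative.

```python
import math

# Illustrative dead-reckoning step: integrate the robot's head-direction
# angle and speed over one time step to advance its estimated position.
def path_integrate(x: float, y: float, heading_rad: float,
                   speed: float, dt: float) -> tuple:
    """Advance the (x, y) position estimate by one integration step."""
    x += speed * math.cos(heading_rad) * dt
    y += speed * math.sin(heading_rad) * dt
    return x, y
```

In the abstract's scheme, the accumulated drift of exactly this kind of integration is what the scene information is used to correct.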
BINOCULAR VISION-BASED ENVIRONMENT SENSING METHOD AND APPARATUS, AND UNMANNED AERIAL VEHICLE
A binocular vision-based environment sensing method and apparatus applied to an unmanned aerial vehicle. The unmanned aerial vehicle is provided with five binocular cameras. The first binocular camera is disposed at the front portion of the fuselage of the unmanned aerial vehicle. The second binocular camera is inclined upward and disposed between the left side and the upper portion of the fuselage. The third binocular camera is inclined upward and disposed between the right side and the upper portion of the fuselage. The fourth binocular camera is disposed at the lower portion of the fuselage. The fifth binocular camera is disposed at the rear portion of the fuselage. The method can simplify an omni-directional sensing system while reducing the sensing blind area.
Identifying elements in an environment
An example method of detecting an element using an autonomous vehicle includes the following operations: using a sensor on the autonomous vehicle to capture image data in a region of interest containing the element, where the image data represents components of the element; filtering the image data to produce filtered data having less of an amount of data than the image data; identifying the components of the element by analyzing the filtered data using a deterministic process; and detecting the element based on the components.
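The filter-then-deterministic-detect pattern in this abstract can be illustrated on a 1-D intensity scan: a threshold filter shrinks the data, then a deterministic grouping rule identifies the element's components. The threshold, the gap rule, and all names are assumptions for illustration, not the patented process.

```python
# Illustrative filter step: keep only samples above a threshold, so the
# filtered data has less of an amount of data than the raw scan.
def filter_scan(scan: list, threshold: float) -> list:
    """Return (index, value) pairs for samples exceeding the threshold."""
    return [(i, v) for i, v in enumerate(scan) if v > threshold]

# Illustrative deterministic step: group filtered samples into
# contiguous components (no learning, no randomness).
def count_components(filtered: list, max_gap: int = 1) -> int:
    """Count contiguous groups among the filtered sample indices."""
    components = 0
    prev_index = None
    for i, _ in filtered:
        if prev_index is None or i - prev_index > max_gap:
            components += 1  # a new component starts here
        prev_index = i
    return components
```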
CONTROLLER AND METHOD
Disclosed is a load handling controller and a method for localizing a load handling vehicle and a load target. A spatial map describing a surrounding of the load handling vehicle is obtained. First sensor signals are received from a first sensor system for generating point clouds of the surrounding of the load handling vehicle and second sensor signals are received from a second sensor system for generating images of the surrounding of the load handling vehicle. The second sensor signals are synchronized in time with the first sensor signals. The load handling vehicle and the load target are localized in the spatial map based on the first sensor signals and the second sensor signals.
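The time synchronization of the two sensor streams above can be sketched with a simple nearest-timestamp pairing: each point-cloud timestamp from the first sensor system is matched to the closest image timestamp from the second. This pairing rule is an assumption for illustration; the patent does not specify the synchronization mechanism.

```python
import bisect

# Hypothetical sketch: pair each point-cloud timestamp with the index
# of the nearest image timestamp (image_times must be sorted).
def synchronize(cloud_times: list, image_times: list) -> list:
    """Return (cloud_time, image_index) pairs, nearest-neighbour matched."""
    pairs = []
    for t in cloud_times:
        j = bisect.bisect_left(image_times, t)
        # the nearest timestamp is either just before or just at/after t
        candidates = [k for k in (j - 1, j) if 0 <= k < len(image_times)]
        best = min(candidates, key=lambda k: abs(image_times[k] - t))
        pairs.append((t, best))
    return pairs
```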
METHOD AND APPARATUS FOR DETECTING MOTION INFORMATION OF TARGET, DEVICE AND MEDIUM
Disclosed are a method for detecting motion information of a target, a device and a storage medium. The method includes: performing target detection on a first image to obtain a detection box of a first target; acquiring depth information of the first image in a corresponding first camera coordinate system, determining depth information of the detection box therefrom, and determining first coordinates of the first target in the first camera coordinate system based on a location of the detection box in an image coordinate system and the depth information of the detection box; transforming second coordinates of a second target in a second camera coordinate system corresponding to a second image into third coordinates in the first camera coordinate system based on pose change information of an image capturing device; and determining motion information of the first target based on the first coordinates and the third coordinates. The disclosure avoids abundant computational processing and improves processing efficiency.
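The coordinate transformation at the heart of this abstract can be sketched as a rigid-body transform: given the pose change of the capturing device as a rotation R and translation t, a point in the second camera's frame is mapped into the first camera's frame, and motion is then a same-frame difference. Representing the pose change as (R, t) is a standard assumption; the names below are illustrative.

```python
import numpy as np

# Hypothetical sketch: map second-camera coordinates into the first
# camera's frame via the device's pose change (rotation R, translation t).
def transform_point(p2: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Return the "third coordinates": p2 expressed in the first camera frame."""
    return R @ p2 + t

def motion_vector(p1: np.ndarray, p3: np.ndarray) -> np.ndarray:
    """Target displacement between the first and third coordinates."""
    return p1 - p3
```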
Vehicle-mounted control unit, and method and apparatus for FPGA based automatic driving of vehicle
Embodiments of the present disclosure provide a vehicle-mounted control unit, and a method and an apparatus for FPGA-based automatic driving of a vehicle. The vehicle-mounted control unit, which is set on an automatic driving vehicle, includes an MCU and a first SoC in which an FPGA is integrated with an ARM processor. The FPGA of the first SoC receives video data sent by a vehicle-mounted camera, performs visual perception on the video data by using a first neural network algorithm to obtain first perception information, and sends the first perception information to the ARM of the first SoC. The ARM of the first SoC processes the first perception information to obtain first decision information and sends the first decision information to the MCU. Finally, the MCU generates a control command according to the first decision information and sends it to the corresponding execution mechanism.
Distributed neural network processing on an intelligent image sensor stack
Systems, methods, and apparatus for intelligent image sensing devices. In one example, a host interface of a sensing device receives sensor data from a host system. The sensing device stores the sensor data in response to a write command received from the host system. The sensing device also stores data from an image stream generated by one or more image sensors included in the sensing device. An inference engine of the sensing device generates inference results using both the image stream and the sensor data as input. The sensing device stores the inference results in a non-volatile memory for access by the host system. In response to receiving a read command from the host system, the sensing device provides the inference results to the host system.
STRUCTURED LIGHT MODULE AND SELF-MOVING DEVICE
The application provides a structured light module and an autonomous mobile device. The structured light module includes line laser emitters and a first camera for collecting a first environmental image containing the laser stripes generated when the line laser encounters an object. The structured light module can also capture a visible-light image, a second environmental image that does not contain laser stripes. Together, the first and second environmental images help to detect more accurate and richer environmental information, expanding the application range of laser sensors.
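One common way to isolate laser stripes from a pair of images like those above is differencing: subtract the stripe-free second image from the first, so only pixels brightened by the line laser remain. The patent does not state that it uses this technique; the sketch below, including the threshold value, is an assumption for illustration.

```python
import numpy as np

# Illustrative stripe extraction: the difference between the laser-on
# image and the laser-off image highlights only the stripe pixels.
def extract_stripes(first_image: np.ndarray,
                    second_image: np.ndarray,
                    threshold: int = 30) -> np.ndarray:
    """Return a boolean mask of pixels brightened by the line laser."""
    # cast to a signed type so the subtraction cannot wrap around
    diff = first_image.astype(np.int32) - second_image.astype(np.int32)
    return diff > threshold
```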