
Obstacle detection method and device, apparatus, and storage medium

An obstacle detection method and device, an apparatus and a storage medium are provided, which relate to the field of intelligent transportation. The specific implementation includes: acquiring position information of a two-dimensional (2D) detection frame and position information of a three-dimensional (3D) detection frame of an obstacle in an image; converting the position information of the 3D detection frame of the obstacle into position information of a 2D projection frame of the obstacle; and optimizing the position information of the 3D detection frame of the obstacle by using the position information of the 2D detection frame, the position information of the 3D detection frame and the position information of the 2D projection frame of the obstacle in the image. The accuracy with which roadside, on-board, or other sensing devices predict the 3D position of an obstacle may thereby be improved.
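The core of the approach is projecting the 3D detection frame into the image and comparing it with the 2D detection frame. A minimal sketch of those two building blocks follows; the pinhole projection, the axis-aligned projection frame, and the IoU comparison are illustrative assumptions, not the patent's exact optimization.

```python
import numpy as np

def project_box_3d(corners_3d, K):
    """Project 3D box corners (8x3, camera coordinates) to a 2D projection
    frame [x_min, y_min, x_max, y_max]. K is an assumed 3x3 pinhole
    intrinsic matrix, not the patent's actual parameterization."""
    pts = (K @ corners_3d.T).T            # (8, 3) homogeneous image points
    pts = pts[:, :2] / pts[:, 2:3]        # perspective divide
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    return np.array([x_min, y_min, x_max, y_max])

def iou_2d(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] frames; a natural
    (assumed) agreement measure between projection frame and 2D detection."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0
```

An optimizer could then adjust the 3D frame's pose to maximize the IoU between the projected frame and the detected 2D frame.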

Electronic device and method for assisting with driving of vehicle
11688195 · 2023-06-27

An electronic device may include a processor configured to: obtain a plurality of images, the plurality of images including a first image captured by a first camera or image sensor, and a second image captured by a second camera or image sensor. The processor may further be configured to: determine whether a first object and a second object respectively included in the first image and the second image are a same object, and based on determining that the first object and the second object are the same object, train a neural network data recognition model to recognize the first object and the second object as the same object in both the first image and the second image by using the first image and the second image as training data of the neural network data recognition model.

VACUUM CLEANER

A vacuum cleaner includes a main casing, driving wheels, a control unit, a cleaning unit, cameras, and a depth calculation part. The driving wheels enable the main casing to travel. The control unit controls drive of the driving wheels to make the main casing autonomously travel. The cleaning unit cleans a floor surface. The cameras are disposed apart from each other in the main casing to pick up images on a traveling-direction side of the main casing with their fields of view overlapping with each other. The depth calculation part calculates a depth of an object distanced from the cameras based on images picked up by the cameras. The vacuum cleaner has improved obstacle detection precision.
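The depth calculation from two cameras with overlapping fields of view typically rests on the classic stereo disparity relation. A minimal sketch, assuming the standard pinhole model (the patent's depth calculation part may differ in detail):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a point from the disparity between the two camera images,
    via the classic relation depth = focal * baseline / disparity.
    Parameter names are illustrative assumptions."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of both cameras")
    return focal_px * baseline_m / disparity_px
```

With a 700 px focal length and a 0.5 m baseline, a 35 px disparity corresponds to a 10 m depth; obstacle precision improves because nearby objects produce large, easily measured disparities.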

Sensor fusion for autonomous machine applications using machine learning

In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
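The idea of feeding the outputs of several per-sensor models into one fusion network can be sketched as a single learned layer over the concatenated outputs. This is a toy late-fusion stand-in, not the DNN architecture described above; shapes, names, and the sigmoid head are all illustrative assumptions.

```python
import numpy as np

def fuse_outputs(per_sensor_feats, w, b):
    """Toy late-fusion layer: concatenate per-sensor model outputs and apply
    one learned linear layer plus a sigmoid to produce fused confidence
    scores. A real multi-sensor fusion DNN would be far deeper and would
    learn associations across overlapping fields of view."""
    x = np.concatenate(per_sensor_feats)      # fused input vector
    logits = w @ x + b                        # learned combination
    return 1.0 / (1.0 + np.exp(-logits))      # fused confidence scores
```

Because the fusion weights are trained jointly on all sensors, duplicate detections of the same object in overlapping regions can be down-weighted rather than double-counted.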

Method for Measuring a Distance Between an Object and an Optical Sensor, Control Device for Carrying Out Such a Method, Distance Measuring Apparatus Comprising Such a Control Device, and Motor Vehicle Comprising Such a Distance Measuring Apparatus
20230194719 · 2023-06-22

A method for measuring a distance between an object and an optical sensor by means of an illumination device and the optical sensor. A spatial position of a visible distance region in an observation region of the optical sensor is specified. An image of the visible distance region is captured by the optical sensor. A start image line and an end image line of the visible distance region are determined in the captured image. A base point image line is ascertained in the captured image as the image line with the shortest distance to the start image line in which the object can be detected. The distance to the object is ascertained by evaluating the image position of the base point image line relative to the start image line and the end image line while taking account of the spatial position of the visible distance region.
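The final evaluation step can be sketched as mapping the base point image line's position between the start and end lines onto the near and far edges of the visible distance region. The linear mapping and all parameter names here are assumptions; the patent leaves the exact mapping open.

```python
def distance_from_base_line(base_row, start_row, end_row, near_m, far_m):
    """Estimate object distance from the image row of its base point by
    linearly interpolating between the near and far edges of the visible
    distance region. The linear model is an illustrative assumption."""
    if start_row == end_row:
        raise ValueError("degenerate visible distance region")
    t = (base_row - start_row) / (end_row - start_row)
    t = min(max(t, 0.0), 1.0)      # clamp to the visible region
    return near_m + t * (far_m - near_m)
```

For example, a base point line halfway between the start and end lines of a region spanning 10 m to 50 m yields an estimated distance of 30 m.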

Image recognition system for a vehicle and corresponding method

An image recognition system and method for a vehicle, including: at least two camera units, each configured to record an image of a road in the vicinity of the vehicle and to provide image data representing the respective image of the road; a first image processor configured to combine the image data provided by the at least two camera units into a first top-view image aligned to a road image plane; a first feature extractor configured to extract lines from the first top-view image; a second feature extractor configured to extract an optical flow from the first top-view image and a second top-view image generated before the first top-view image by the first image processor; and a curb detector configured to detect curbs in the road based on the extracted lines and the extracted optical flow and to provide curb data representing the detected curbs.
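Combining the two cues can be sketched on a top-view grid: a curb candidate is a column where an extracted line coincides with a jump in optical-flow magnitude, since raised structures move differently than the road plane under a top-view warp. The column-wise formulation and thresholds are illustrative assumptions.

```python
import numpy as np

def detect_curb_columns(line_mask, flow_mag, flow_jump=0.5):
    """Toy curb detector on a top-view grid.

    line_mask: boolean (rows, cols) grid of extracted-line pixels.
    flow_mag:  (rows, cols) grid of optical-flow magnitudes.
    Returns the column indices where a line coincides with a
    column-to-column jump in mean flow magnitude."""
    # Gradient of flow magnitude across columns, averaged over rows.
    jump = np.abs(np.diff(flow_mag.mean(axis=0)))
    has_line = line_mask.any(axis=0)[1:]           # align with diff output
    return np.where((jump > flow_jump) & has_line)[0] + 1
```

Requiring both cues suppresses painted lane markings (a line with no flow jump) and shadows (a flow artifact with no extracted line).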

Map creation apparatus, map creation method, and computer-readable recording medium

A map creation apparatus includes: an image receiver configured to receive images in time series while moving; a first calculator configured to extract an image area indicating an object from the images and calculate a coordinate of the object in a world coordinate system; a second calculator configured to track the object in the extracted image area across the images and calculate an optical flow of the object; an eliminator configured to calculate a difference in coordinate between the vanishing points generated by the movement of the image receiver and by the object based on the optical flow, and eliminate the image area of the object from the images when the object is determined to be a moving object based on the calculated difference; and a storage controller configured to store map information including the coordinates, in the world coordinate system, of the objects not eliminated by the eliminator.
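The elimination criterion can be sketched with the focus-of-expansion geometry: for a static scene under forward ego-motion, optical-flow vectors point radially away from the ego-motion's vanishing point, so a flow vector that deviates from that radial direction suggests an independently moving object. The angular threshold and 2D formulation are illustrative assumptions.

```python
import math

def is_moving(point, flow, foe, angle_thresh_deg=20.0):
    """Flag an object as moving when its optical-flow vector deviates from
    the radial direction out of the focus of expansion (the vanishing point
    induced by the camera's own motion). Threshold is an assumption."""
    rx, ry = point[0] - foe[0], point[1] - foe[1]   # radial direction
    fx, fy = flow
    norm = math.hypot(rx, ry) * math.hypot(fx, fy)
    if norm == 0:
        return False                 # no flow or at the FOE: undecidable
    cos_a = (rx * fx + ry * fy) / norm
    cos_a = max(-1.0, min(1.0, cos_a))
    return math.degrees(math.acos(cos_a)) > angle_thresh_deg
```

Objects flagged this way are dropped before their coordinates are written into the map, keeping the stored map restricted to static structure.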

IN-VEHICLE DEVICE, VEHICLE MOVEMENT ANALYSIS SYSTEM
20230196847 · 2023-06-22

An object of the present invention is to provide a technique for reducing data volume by appropriately selecting the data items that need to be collected in order to analyze an accident situation or the like. An in-vehicle device according to the present invention acquires data describing the state of a vehicle and the surrounding situation of the vehicle, and selects and outputs, from among the records described in the data, those records in which at least one of an operation of the vehicle or an operation of a surrounding object exceeds a threshold (see FIG. 1).
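The selection step reduces to a threshold filter over the acquired records. A minimal sketch, in which the field names and threshold values are illustrative assumptions:

```python
def select_records(records, accel_thresh=3.0, rel_speed_thresh=5.0):
    """Keep only records in which the vehicle's own motion or a surrounding
    object's motion exceeds a threshold, to cap stored data volume.
    Field names and thresholds are illustrative assumptions."""
    return [
        r for r in records
        if abs(r["ego_accel"]) > accel_thresh
        or abs(r["object_rel_speed"]) > rel_speed_thresh
    ]
```

Uneventful driving records are thus discarded at the source, and only records plausibly relevant to an accident analysis are output.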

DETERMINING POSITION OF THE CENTRAL POINT OF POINT CLOUD DATA
20230196615 · 2023-06-22

The present disclosure relates to a computer-implemented method for determining a central point of point cloud data in an automotive system for monitoring the environment of a vehicle. The point cloud data is generated by one or more sensors of the vehicle with respect to a reference coordinate system. The point cloud data defines a connected subspace of the reference coordinate system. The method comprises a step a) of determining a bounding box of the point cloud data, a step b) of selecting a starting agent position of an agent within the bounding box, and a step c) of selecting a coordinate system relative to the bounding box. In a further step d) a plurality of agent moving operations are performed. Each agent moving operation comprises moving the agent from the current agent position to a new agent position parallel to a coordinate axis of the selected coordinate system. The new agent position is determined based on an intersecting line through the current agent position parallel to the coordinate axis. The method further comprises a step e) of determining, after step d) is completed, the new agent position as the central point of the point cloud data.
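Steps a) through e) can be sketched as a coordinate-wise search on a point set: start the agent at the bounding-box center, and on each pass, for each coordinate axis, take the chord of points lying on the line through the agent parallel to that axis and move the agent to the chord's midpoint. The tolerance used to approximate "on the line", the pass count, and the 2D restriction are illustrative assumptions.

```python
import numpy as np

def central_point(points, n_passes=5, tol=0.5):
    """Approximate the agent-based central-point search on a point cloud
    given as an (n, d) array. Each agent moving operation slides the agent
    along one coordinate axis to the midpoint of the intersecting chord."""
    agent = (points.min(axis=0) + points.max(axis=0)) / 2.0   # bbox center
    for _ in range(n_passes):
        for axis in range(points.shape[1]):
            other = [i for i in range(points.shape[1]) if i != axis]
            # Points near the line through the agent parallel to this axis.
            mask = np.all(np.abs(points[:, other] - agent[other]) <= tol, axis=1)
            chord = points[mask, axis]
            if chord.size:
                agent[axis] = (chord.min() + chord.max()) / 2.0
    return agent
```

Because each move uses only the chord through the current position, the result stays inside the connected subspace even for non-convex clouds, unlike the raw bounding-box center.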

MOTION PLANNER CONSTRAINT GENERATION BASED ON ROAD SURFACE HAZARDS
20230192067 · 2023-06-22

Provided are methods for motion planner constraint generation based on road surface hazards, which can include receiving information about an object, identifying the object as a particular road hazard, generating one or more motion constraints based on the road hazard, and controlling a vehicle based on the motion constraints. Systems and computer program products are also provided.
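The constraint-generation step can be sketched as a lookup from the identified hazard type to planner constraints, plus a lateral-avoidance constraint when the hazard lies in the lane. Hazard names, constraint fields, and all numeric limits are illustrative assumptions, not values from the disclosure.

```python
def constraints_for_hazard(hazard_type, lateral_offset_m):
    """Map an identified road-surface hazard to motion-planner constraints.
    The hazard taxonomy and limits below are illustrative assumptions."""
    table = {
        "ice":     {"max_speed_mps": 8.0,  "min_headway_s": 4.0},
        "pothole": {"max_speed_mps": 12.0, "min_headway_s": 2.0},
        "debris":  {"max_speed_mps": 10.0, "min_headway_s": 3.0},
    }
    c = dict(table.get(hazard_type, {"max_speed_mps": 15.0, "min_headway_s": 2.0}))
    # Ask the planner to steer around the hazard when it is in the lane.
    if abs(lateral_offset_m) < 1.0:
        c["lateral_avoidance_m"] = 1.0 - abs(lateral_offset_m)
    return c
```

The vehicle is then controlled subject to these constraints, so the planner slows or steers around the hazard rather than treating it as a hard obstacle.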