G06T2207/30252

Camera height calculation method and image processing apparatus

A camera height calculation method causes a computer to execute a process that includes: obtaining one or more images captured by an in-vehicle camera; extracting one or more feature points from the one or more images; identifying, from the one or more feature points, first feature points that lie on a road surface; and calculating a height of the in-vehicle camera above the road surface based on positions of the identified first feature points.
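The height-from-feature-points idea can be sketched as a plane fit. Assuming the first feature points have already been triangulated into 3-D camera coordinates (the abstract does not say how), the camera height is the distance from the camera origin to the plane fitted through them; all names and the synthetic data below are illustrative.

```python
import numpy as np

def estimate_camera_height(road_points):
    """Fit a plane to road-surface feature points given in the camera
    frame (camera at the origin) and return the camera height above it."""
    pts = np.asarray(road_points, dtype=float)
    centroid = pts.mean(axis=0)
    # The plane normal is the singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    # Height = unsigned distance from the origin to the fitted plane.
    return abs(np.dot(normal, centroid))

# Synthetic road plane 1.4 m below the camera, with mild measurement noise.
rng = np.random.default_rng(0)
xz = rng.uniform(-5, 5, size=(200, 2))
y = -1.4 + rng.normal(0, 0.01, 200)          # y axis points "up" here
points = np.column_stack([xz[:, 0], y, xz[:, 1]])
height = estimate_camera_height(points)
```

The SVD-based fit is a standard total-least-squares plane estimate; a real system would additionally reject non-road outliers (e.g. with RANSAC) before fitting.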

System and method for presenting tire-related information to customers

A cloud-based system for use by retail store employees or customers at any location to facilitate the sale of automotive tires to consumers is provided. The system accesses multiple independent tire inventory systems from different distributors/manufacturers and provides a personalized set of recommended tire options and accompanying TPMS service packs.

Mobile robot system and method for generating map data using straight lines extracted from visual images

A mobile robot is configured to navigate on a sidewalk and deliver a delivery to a predetermined location. The robot has a body and an enclosed space within the body for storing the delivery during transit. At least two cameras are mounted on the robot body and are adapted to take visual images of an operating area. A processing component is adapted to extract straight lines from the visual images taken by the cameras and generate map data based at least partially on the images. A communication component is adapted to send and receive image and/or map data. A mapping system includes at least two such mobile robots, with the communication component of each robot adapted to send and receive image data and/or map data to the other robot. A method involves operating such a mobile robot in an area of interest in which deliveries are to be made.
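The straight-line extraction step can be sketched with a minimal Hough transform. The patent does not specify the extraction algorithm (a production system would likely use a library routine such as OpenCV's `HoughLinesP`); this NumPy-only version finds the strongest line through a set of edge pixels.

```python
import numpy as np

def hough_peak(edge_points, img_shape, n_theta=180):
    """Return (rho, theta) of the strongest line through the edge pixels,
    using a minimal Hough accumulator over the line's normal form
    rho = x*cos(theta) + y*sin(theta)."""
    h, w = img_shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for y, x in edge_points:
        # Vote for every (rho, theta) pair this pixel is consistent with.
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    r_idx, t_idx = np.unravel_index(acc.argmax(), acc.shape)
    return r_idx - diag, thetas[t_idx]

# Synthetic edge map: a horizontal line at row y = 20.
edges = [(20, x) for x in range(60)]
rho, theta = hough_peak(edges, (64, 64))
```

For the horizontal line, the peak lands at rho = 20 with theta = pi/2, i.e. the line y = 20, which is the kind of compact line parameterization the robots could exchange as map data.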

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING PROGRAM, AND INFORMATION PROCESSING METHOD

The processing load in a case where a plurality of different sensors is used can be reduced. An information processing apparatus according to an embodiment includes a recognition processing unit (15, 40b) configured to perform recognition processing for recognizing a target object by adding, to an output of a first sensor (23), region information generated according to an object likelihood detected in the course of object recognition processing based on an output of a second sensor (21) different from the first sensor.
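How region information reduces load can be sketched as follows: likelihood-gated regions from the second sensor restrict recognition on the first sensor's output to a few crops instead of the full frame. The tuple layout and threshold are assumptions, not taken from the patent.

```python
import numpy as np

def regions_from_likelihood(detections, threshold=0.5):
    """Keep only candidate regions whose object likelihood, detected in
    the course of recognition on the second sensor, clears a threshold.
    Each detection is a hypothetical (x, y, w, h, likelihood) tuple."""
    return [(x, y, w, h) for x, y, w, h, p in detections if p >= threshold]

def crop_regions(image, regions):
    """Restrict processing of the first sensor's output to those regions."""
    return [image[y:y + h, x:x + w] for x, y, w, h in regions]

image = np.arange(100).reshape(10, 10)          # stand-in for sensor 1 output
dets = [(1, 1, 3, 3, 0.9), (5, 5, 4, 4, 0.2)]   # candidates from sensor 2
rois = regions_from_likelihood(dets)
crops = crop_regions(image, rois)
```

Only the high-likelihood region survives, so the downstream recognizer processes a 3x3 crop rather than the whole 10x10 frame.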

OWN-POSITION ESTIMATING DEVICE, MOVING BODY, OWN-POSITION ESTIMATING METHOD, AND OWN-POSITION ESTIMATING PROGRAM

An own-position estimating device estimates the own-position of a moving body by matching a feature extracted from an acquired image against a database in which position information and features are associated with each other in advance. The device includes an extracting unit that extracts the feature from the image, an estimating unit that estimates the own-position of the moving body by matching the extracted feature with the database, and a determination threshold value adjusting unit that adjusts a determination threshold value for extracting the feature. The adjusting unit acquires the database in a state in which the determination threshold value is adjusted, and adjusts the determination threshold value on the basis of the determination threshold value linked to each of the position information items in the database; the extracting unit then extracts the feature from the image using the adjusted determination threshold value.
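The position-linked threshold adjustment can be sketched as follows: each database entry carries the threshold that was in effect when it was recorded, and the extractor adopts the threshold of the nearest entry before filtering keypoints. The database layout and nearest-neighbor rule are assumptions for illustration.

```python
import numpy as np

# Hypothetical database: position information linked to the determination
# threshold that was in effect when each entry was recorded.
database = [
    {"position": (0.0, 0.0), "threshold": 0.30},
    {"position": (5.0, 0.0), "threshold": 0.55},
]

def adjust_threshold(rough_position, db):
    """Adopt the threshold linked to the nearest database entry."""
    nearest = min(db, key=lambda e: np.hypot(rough_position[0] - e["position"][0],
                                             rough_position[1] - e["position"][1]))
    return nearest["threshold"]

def extract_features(responses, threshold):
    """Keep only keypoints whose detector response clears the threshold.
    responses: list of ((x, y), score) pairs."""
    return [pt for pt, score in responses if score >= threshold]

t = adjust_threshold((4.8, 0.1), database)
kept = extract_features([((10, 12), 0.7), ((30, 5), 0.4)], t)
```

Near the second recorded position the stricter threshold 0.55 applies, so only the strong keypoint survives for matching.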

IMAGING SYSTEM, DRIVING ASSISTANCE SYSTEM, AND PROGRAM
20230044180 · 2023-02-09

The driving assistance system includes an imaging device capable of capturing a first monochrome image in a vehicle traveling direction, a first neural network for segmentation processing, a second neural network for depth estimation processing, a determination portion that determines the center of a portion to be cut out from the first monochrome image on the basis of the segmentation processing and the depth estimation processing, a third neural network for colorization processing of only the cut-out second monochrome image, and a display device for enlarged display of the colorized second monochrome image.
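The determination step can be sketched as combining the two network outputs: take the depth-weighted centroid of the target class in the segmentation mask as the cut-out center, so nearer pixels dominate. The class id and the inverse-depth weighting are assumptions; the patent only says both results are used.

```python
import numpy as np

def crop_center(seg_mask, depth, target_class=1):
    """Center of the portion to cut out: the depth-weighted centroid of
    the target class in the segmentation mask (nearer pixels weigh more)."""
    ys, xs = np.nonzero(seg_mask == target_class)
    if ys.size == 0:
        return None
    w = 1.0 / (depth[ys, xs] + 1e-6)   # smaller depth -> larger weight
    return int(round(np.average(ys, weights=w))), int(round(np.average(xs, weights=w)))

mask = np.zeros((8, 8), dtype=int)
mask[2:5, 5:8] = 1                      # a detected object
depth = np.ones((8, 8))                 # uniform depth for the demo
center = crop_center(mask, depth)
```

With uniform depth the weights cancel and the center is simply the object's centroid; a nearer sub-region would pull the crop toward itself.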

OBJECT RECOGNITION DEVICE, DRIVING ASSISTANCE DEVICE, SERVER, AND OBJECT RECOGNITION METHOD
20230042572 · 2023-02-09

Included are: an information acquiring unit to acquire information; a periphery recognizing unit to acquire, based on the acquired information and a first machine learning model, peripheral environment information regarding the state of the peripheral environment, and to acquire calculation process information indicating the calculation process by which the peripheral environment information was acquired; an explanatory information generating unit to generate, based on the calculation process information, explanatory information indicating which of the acquired information had a large influence on the peripheral environment information in the calculation process; and an evaluation information generating unit to generate, based on the acquired information and the explanatory information, evaluation information indicating the adequacy of the peripheral environment information.
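A toy version of the explanatory/evaluation pipeline, using a linear stand-in for the unspecified recognition model: each input's influence is |weight * input|, the most influential inputs form the explanatory information, and a simple assumed criterion judges adequacy.

```python
import numpy as np

def explanatory_information(weights, inputs, top_k=2):
    """For a linear stand-in model, the contribution of each input to the
    output is weight * input; the indices with the largest absolute
    contribution form the explanatory information."""
    contrib = np.abs(np.asarray(weights) * np.asarray(inputs))
    return list(np.argsort(-contrib)[:top_k])

def evaluation_information(influences, trusted_inputs):
    """Assumed adequacy criterion: the result is adequate only if its
    most influential inputs are among the trusted ones."""
    return all(i in trusted_inputs for i in influences)

infl = explanatory_information([0.5, 2.0, -0.1], [1.0, 1.0, 10.0])
ok = evaluation_information(infl, trusted_inputs={1, 2})
```

Real recognizers would need an attribution method (e.g. gradient-based saliency) in place of the elementwise product, but the information flow is the same.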

HIGH-DEFINITION MAP CREATION METHOD AND DEVICE, AND ELECTRONIC DEVICE

A high-definition map creation method includes: obtaining point cloud data collected with respect to a target region, the point cloud data including K frames of point clouds and an initial pose of each frame of point cloud, K being an integer greater than 1; associating the K frames of point clouds with each other in accordance with the initial pose to obtain a first point cloud relation graph of the K frames of point clouds; performing point cloud registration on the K frames of point clouds in accordance with the first point cloud relation graph and the initial pose to obtain a target relative pose of each frame of point cloud in the K frames of point clouds; and splicing the K frames of point clouds in accordance with the target relative pose to obtain a point cloud map of the target region.
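The final splicing step can be sketched in 2-D: once registration has produced a pose per frame, each frame's points are transformed by its pose and stacked into one map-frame cloud. SE(2) is used here for brevity; the SE(3) case the patent implies works identically with 4x4 matrices.

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2-D pose matrix; the 3-D SE(3) case is analogous."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def splice(frames, poses):
    """Transform each frame of points by its registered pose and stack
    the results into a single map-frame point cloud."""
    spliced = []
    for pts, T in zip(frames, poses):
        homo = np.column_stack([pts, np.ones(len(pts))])
        spliced.append((T @ homo.T).T[:, :2])
    return np.vstack(spliced)

frame0 = np.array([[0.0, 0.0], [1.0, 0.0]])
frame1 = np.array([[1.0, 0.0]])                  # seen again from a new pose
poses = [se2(0, 0, 0), se2(2, 0, np.pi / 2)]     # poses from registration
cloud = splice([frame0, frame1], poses)
```

The second frame's point (1, 0) lands at (2, 1) in the map frame: rotated a quarter turn, then translated by the frame's position.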

Secure Camera Based Inertial Measurement Unit Calibration for Stationary Systems
20230039129 · 2023-02-09 ·

Described are techniques and systems for secure camera-based IMU calibration for stationary systems, including vehicles. Existing vehicle camera systems are employed, with enhanced security to prevent malicious attempts by hackers to cause a vehicle to enter IMU calibration mode. IMU calibration occurs when a calibration system determines the vehicle is parked in a controlled environment; calibration targets are positioned at different viewing angles to vehicle cameras to act as sources of optical patterns of encoded data. Features of the patterns serve both security and alignment functions. Images of the calibration targets enable inference of a vehicle coordinate system, from which calculations for IMU mounting-error compensations are performed. A relative rotation between the IMU and the vehicle coordinate system is applied to IMU data to compensate for relative rotations between the vehicle and the IMU, thereby improving vehicle slope and bank metrics.
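The compensation step reduces to rotating each IMU sample into the vehicle frame. A yaw-only sketch (mounting errors about the other axes work the same way with full 3-D rotations; the 90-degree offset is purely illustrative):

```python
import numpy as np

def yaw_rotation(yaw):
    """Rotation about the vertical axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def compensate(imu_sample, R_imu_to_vehicle):
    """Apply the relative rotation between the IMU and the vehicle
    coordinate system to an IMU measurement (e.g. an acceleration)."""
    return R_imu_to_vehicle @ np.asarray(imu_sample)

# Suppose calibration found the IMU mounted with a 90-degree yaw offset.
R = yaw_rotation(np.pi / 2)
accel_vehicle = compensate([1.0, 0.0, 0.0], R)
```

An acceleration the IMU reports along its own x axis is correctly reinterpreted as lateral in the vehicle frame, which is exactly what keeps slope and bank estimates honest.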

METHOD OF PROCESSING IMAGE, ELECTRONIC DEVICE, AND STORAGE MEDIUM

A method of processing an image, an electronic device, and a storage medium, which relate to the artificial intelligence field, in particular to fields of computer vision and intelligent transportation technologies. The method includes: determining at least one key frame image in a scene image sequence captured by a target camera; determining a camera pose parameter associated with each key frame image in the at least one key frame image, according to a geographic feature associated with the key frame image; and projecting each scene image in the scene image sequence to obtain a target projection image according to the camera pose parameter associated with the key frame image, so as to generate a scene map based on the target projection image. The geographic feature associated with any key frame image indicates localization information of the target camera at a time instant of capturing the corresponding key frame image.
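The projection step can be sketched as a rigid transform per key frame: points expressed in the key frame's camera coordinates are carried into the map frame by the pose recovered from the geographic feature. The (x, y, yaw) pose form and metric ground-plane inputs are simplifying assumptions; a full implementation would first project pixels through the camera model.

```python
import numpy as np

def project_to_map(points_cam, pose):
    """Transform ground-plane points from a key frame's camera
    coordinates into the map frame using that frame's pose (x, y, yaw)."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return (R @ np.asarray(points_cam, dtype=float).T).T + np.array([x, y])

# A lane marking 3 m ahead of a camera located at (10, 5), heading 90 deg.
pts = project_to_map([[3.0, 0.0]], (10.0, 5.0, np.pi / 2))
```

The marking lands at (10, 8) in map coordinates; accumulating such projections over the whole scene image sequence is what yields the scene map.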