Patent classification: G06T2207/20164
MULTI-SENSOR MOTION ANALYSIS TO CHECK CAMERA PIPELINE INTEGRITY
This specification describes a method that includes receiving, at one or more processing devices at one or more locations, one or more image frames; receiving a set of signals representing outputs of one or more sensors of a device; estimating, based on the one or more image frames, a first set of one or more motion values; estimating, based on the set of signals, a second set of one or more motion values; determining that a degree of correlation between (i) a first motion represented by the first set of one or more motion values and (ii) a second motion represented by the second set of one or more motion values fails to satisfy a threshold condition; and, in response to determining that the degree of correlation fails to satisfy the threshold condition, determining the presence of an adverse condition associated with the device.
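The correlation check this abstract describes can be sketched roughly as follows. All names are illustrative, and the choice of Pearson correlation with a 0.8 threshold is an assumption for the sketch, not the patent's actual formulation:

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient between two 1-D motion signals."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def adverse_condition_suspected(image_motion, sensor_motion, threshold=0.8):
    """Flag an adverse condition (e.g. a stalled or tampered camera
    pipeline) when the image-derived and sensor-derived motion
    estimates fail to correlate above the threshold."""
    return pearson(image_motion, sensor_motion) < threshold
```

When the camera pipeline is healthy, the two motion estimates track each other and the check passes; a frozen, delayed, or injected video feed decorrelates them and trips the check.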
Crossing point detector, camera calibration system, crossing point detection method, camera calibration method, and recording medium
A crossing point detector includes memory and a crossing point detection unit that reads out a square image from a captured image in the memory and detects a crossing point of two boundary lines in a checker pattern depicted in the square image. The crossing point detection unit determines multiple parameters of a function model that treats two-dimensional image coordinates as variables, the parameters optimizing an evaluation value based on the difference between corresponding pixel values of the function model and the square image, and computes the position of the crossing point of the two straight lines expressed by the determined parameters, thereby detecting the crossing point with subpixel precision. The function model uses a curved surface that is at least first-order differentiable to express pixel values at respective positions in a two-dimensional coordinate system at the boundary between black and white regions.
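The patent's key step is fitting a differentiable intensity surface to the checker image; the sketch below illustrates only the final geometric step under simpler assumptions: each boundary line is fitted to sampled edge points by total least squares, and the two fitted lines are intersected, which yields a subpixel result even though the input points lie on an integer pixel grid. All function names are illustrative:

```python
import numpy as np

def fit_line(points):
    """Total-least-squares fit of a boundary line to 2-D points;
    returns (a, b, c) for the line a*x + b*y = c."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    a, b = vt[-1]                        # normal = smallest singular vector
    return a, b, a * centroid[0] + b * centroid[1]

def crossing_point(line1, line2):
    """Subpixel-precise intersection of two lines given as (a, b, c)."""
    A = np.array([line1[:2], line2[:2]], float)
    c = np.array([line1[2], line2[2]], float)
    return np.linalg.solve(A, c)         # (x, y)
```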
Automated Determination Of Acquisition Locations Of Acquired Building Images Based On Determined Surrounding Room Data
Techniques are described for computing devices to perform automated operations to determine the acquisition locations of images, such as within a building interior based on automatically determined shapes of rooms of the building, and for using the determined image acquisition location information in further automated manners. The image may be a panorama image or of another type (e.g., a rectilinear perspective image), acquired at an acquisition location in a multi-room building's interior, and the determined acquisition location for such an image may be at least a location on the building's floor plan and optionally an orientation/direction for at least a part of the image. In addition, the automated image acquisition location determination may be performed without having or using information from any depth sensors or other distance-measuring devices about distances from the image's acquisition location to walls or other objects in the surrounding building.
SYSTEM AND METHOD FOR LANE DEPARTURE WARNING WITH EGO MOTION AND VISION
An apparatus includes at least one camera configured to capture at least one image of a traffic lane, an inertial measurement unit (IMU) configured to detect motion characteristics, and at least one processor. The at least one processor is configured to obtain a vehicle motion trajectory using the IMU and based on one or more vehicle path prediction parameters, obtain a vehicle vision trajectory based on the at least one image, wherein the vehicle vision trajectory includes at least one lane boundary, determine distances between one or more points on the vehicle and one or more intersection points of the at least one lane boundary based on the obtained vehicle motion trajectory, determine at least one time to line crossing (TTLC) based on the determined distances and a speed of the vehicle, and activate a lane departure warning indicator based on the determined at least one TTLC.
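The TTLC decision this abstract describes reduces to dividing each boundary distance by the vehicle speed and warning when any result falls below a threshold. The sketch below assumes distances in metres, speed in metres per second, and a 1.5 s threshold; all names and the threshold value are illustrative, not the patent's parameters:

```python
def time_to_line_crossing(distance_m, speed_mps):
    """Seconds until the vehicle covers the given distance to a lane
    boundary at its current speed."""
    return float("inf") if speed_mps <= 0 else distance_m / speed_mps

def lane_departure_warning(distances_m, speed_mps, ttlc_threshold_s=1.5):
    """Activate the warning indicator if any TTLC drops below the
    threshold."""
    return any(time_to_line_crossing(d, speed_mps) < ttlc_threshold_s
               for d in distances_m)
```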
METHODOLOGY TO ESTIMATE SLOT LINE DIRECTION FOR PARKING SLOT DETECTION
An Automated Parking System (“APS”) for a motor vehicle includes a plurality of cameras for generating a vision signal, in response to the cameras capturing an image of a region surrounding the motor vehicle. The APS includes a processor for receiving the vision signal from the cameras. The APS further includes a non-transitory computer readable storage medium for storing instructions such that the processor is programmed to execute a plurality of routines. The routines include a detection module for detecting one or more landmark points in the image. The routines further include an estimate slot module for determining one or more corners of a parking slot based on the landmark points. The routines further include a maneuver module for generating an action signal, with a power steering system maneuvering the motor vehicle into the parking slot based on the corners.
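A minimal sketch of the slot-corner estimation step, under assumptions not stated in the abstract: the two detected landmark points mark the slot entrance, the slot line direction runs along the entrance, the slot is rectangular, and its depth is known. All names are illustrative:

```python
import numpy as np

def estimate_slot_corners(entrance_p1, entrance_p2, slot_depth_m):
    """Estimate the four corners of a rectangular parking slot from two
    detected entrance landmark points: the slot line direction is taken
    along the entrance, and the slot extends perpendicular to it."""
    p1 = np.asarray(entrance_p1, float)
    p2 = np.asarray(entrance_p2, float)
    d = p2 - p1
    d = d / np.linalg.norm(d)              # slot line (entrance) direction
    n = np.array([-d[1], d[0]])            # perpendicular, into the slot
    return [tuple(p1), tuple(p2),
            tuple(p2 + n * slot_depth_m), tuple(p1 + n * slot_depth_m)]
```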
CALIBRATING SYSTEM FOR COLORIZING POINT-CLOUDS
A system includes a three-dimensional (3D) scanner that captures a 3D point cloud corresponding to one or more objects in a surrounding environment. The system further includes a camera that captures a control image by capturing a plurality of images of the surrounding environment, and an auxiliary camera configured to capture an ultrawide-angle image of the surrounding environment. One or more processors of the system colorize the 3D point cloud using the ultrawide-angle image by mapping the ultrawide-angle image to the 3D point cloud. The system performs a limited system calibration before colorizing each 3D point cloud, and a periodic full system calibration before/after a plurality of 3D point clouds are colorized.
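The core of the colorization step is projecting each 3-D point into the calibrated image and sampling the pixel it lands on. The patent maps an ultrawide-angle image, which needs a fisheye model; the sketch below substitutes a simple pinhole model with intrinsic matrix K and points already in camera coordinates, so it is an illustration of the mapping idea, not the system's actual camera model:

```python
import numpy as np

def colorize_point_cloud(points_xyz, image, K):
    """Assign each 3-D point (in camera coordinates) the colour of the
    pixel it projects to under the pinhole intrinsic matrix K."""
    h, w = image.shape[:2]
    colors = np.zeros((len(points_xyz), 3), dtype=image.dtype)
    for i, (x, y, z) in enumerate(points_xyz):
        if z <= 0:
            continue                       # behind the camera: leave black
        u = int(round(K[0, 0] * x / z + K[0, 2]))
        v = int(round(K[1, 1] * y / z + K[1, 2]))
        if 0 <= u < w and 0 <= v < h:
            colors[i] = image[v, u]
    return colors
```

The quality of this mapping is exactly what the limited and full system calibrations protect: any drift in K (or in the scanner-to-camera extrinsics, omitted here) smears colours across object boundaries.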
Alignment Of Map Segments
A computer-implemented method of aligning a plurality of map segments, each having a local reference frame, to a reference map having a global reference frame, each map segment overlapping a portion of the area represented by the reference map, the method comprising: for each map segment, independently generating one or more candidate alignments that align the map segment to the reference map; for each candidate alignment, evaluating a cost function representing the likelihood that the candidate alignment is correct; based on the candidate alignments and associated cost functions, generating one or more candidate solutions, each candidate solution comprising a single candidate alignment for each map segment; for each candidate solution, evaluating a cost function based on at least the cost functions of the candidate alignments included in that solution; and based on evaluation of the cost functions, determining the alignment of the plurality of map segments.
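The candidate-solution search can be sketched as below, under two simplifying assumptions not made by the claim: every combination of per-segment candidates is enumerated as a candidate solution, and the solution cost is just the sum of the per-candidate costs (a real system would likely add cross-segment consistency terms). All names are illustrative:

```python
from itertools import product

def align_segments(candidates_per_segment):
    """candidates_per_segment: one list per map segment of
    (alignment, cost) pairs, lower cost = more likely correct.
    Returns the per-segment alignments and total cost of the
    cheapest candidate solution."""
    best, best_cost = None, float("inf")
    for combo in product(*candidates_per_segment):
        cost = sum(c for _, c in combo)
        if cost < best_cost:
            best, best_cost = [a for a, _ in combo], cost
    return best, best_cost
```

Exhaustive enumeration is exponential in the number of segments; with many segments one would prune combinations or optimize the solution-level cost function directly.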
DAMAGE DIAGRAM CREATION METHOD, DAMAGE DIAGRAM CREATION DEVICE, DAMAGE DIAGRAM CREATION SYSTEM, AND RECORDING MEDIUM
Provided are a damage diagram creation method, a damage diagram creation device, a damage diagram creation system, and a recording medium capable of detecting damage with high accuracy based on a plurality of images acquired by split imaging of a subject.
In the damage diagram creation method, damage to the subject is detected from each individual image constituting the plurality of images acquired by split imaging, before the images are composed, so detection performance does not suffer from the image-quality degradation that composition introduces in overlapping areas. Damage can therefore be detected with high accuracy from a plurality of images acquired by split imaging of the subject. The detection results for the respective images can then be composed using a composition parameter calculated from correspondence points between the images.
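A minimal sketch of the final composition step, assuming (as the abstract does not specify) that the composition parameter is a 3x3 homography estimated from the correspondence points: per-image damage coordinates are mapped into the composite frame rather than re-detected on the composed image. All names are illustrative:

```python
import numpy as np

def compose_detection_points(points, H):
    """Map per-image detection coordinates (x, y) into the composite
    image's frame using a 3x3 composition (homography) matrix H."""
    out = []
    for x, y in points:
        px, py, pw = H @ np.array([x, y, 1.0])
        out.append((px / pw, py / pw))
    return out
```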
INFORMATION PROCESSING APPARATUS, RECORDING MEDIUM, AND POSITIONING METHOD
An information processing apparatus comprises a processing unit that acquires, based on an image captured by an imaging unit and including an identifier disposed in a space, a plurality of positions on an outline of a shape of the identifier in an image coordinate system, and determines a position of the imaging unit, based on the plurality of positions on the outline of the shape of the identifier in the image coordinate system and positions on the outline of the shape of the identifier in a world coordinate system.
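Determining a camera pose from outline points in image and world coordinates is, in general, a perspective-n-point problem; the heavily simplified sketch below instead assumes a square identifier of known physical size that roughly faces the camera, recovering its position relative to the camera from the outline's apparent size and centroid under a pinhole model. All names and the facing-camera assumption are illustrative:

```python
import numpy as np

def marker_position_in_camera_frame(corners_px, marker_size_m, K):
    """Rough 3-D position of a square identifier relative to the
    camera: range from apparent side length (pinhole model), lateral
    offset from the outline centroid."""
    c = np.asarray(corners_px, float)
    side_px = np.mean([np.linalg.norm(c[i] - c[(i + 1) % 4])
                       for i in range(4)])
    z = K[0, 0] * marker_size_m / side_px     # range along optical axis
    u, v = c.mean(axis=0)
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.array([x, y, z])
```

Inverting this (known identifier position in world coordinates, solve for the camera) gives the positioning described in the abstract; a production system would use a full PnP solve over all outline points.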
Automated determination of image acquisition locations in building interiors using determined room shapes
Techniques are described for computing devices to perform automated operations to determine the acquisition locations of images, such as within a building interior based on automatically determined shapes of rooms of the building, and for using the determined image acquisition location information in further automated manners. The image may be a panorama image or of another type (e.g., a rectilinear perspective image), acquired at an acquisition location in a multi-room building's interior, and the determined acquisition location for such an image may be at least a location on the building's floor plan and optionally an orientation/direction for at least a part of the image. In addition, the automated image acquisition location determination may be performed without having or using information from any depth sensors or other distance-measuring devices about distances from the image's acquisition location to walls or other objects in the surrounding building.