Patent classifications
G06T2207/30252
METHOD FOR ACQUIRING DISTANCE FROM MOVING BODY TO AT LEAST ONE OBJECT LOCATED IN ANY DIRECTION OF MOVING BODY BY PERFORMING NEAR REGION SENSING AND IMAGE PROCESSING DEVICE USING THE SAME
A method for acquiring a distance from a moving body to an object located in any direction of the moving body includes steps of: an image processing device (a) instructing a rounded cuboid sweep network to project pixels of images, generated by cameras covering all directions of the moving body, onto N virtual rounded cuboids to generate rounded cuboid images and applying a 3D concatenation operation thereon to generate an initial 4D cost volume, (b) instructing a cost volume computation network to generate a final 3D cost volume from the initial 4D cost volume, and (c) generating inverse radius indices, corresponding to inverse radii representing inverse values of separation distances of the N virtual rounded cuboids, by referring to the final 3D cost volume, and extracting the inverse radii by using the inverse radius indices, to acquire the separation distances and thus the distance from the moving body to the object.
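Step (c) of the abstract can be sketched in code. The following is an illustrative reading, not the patent's actual implementation: the shapes, the near/far radius bounds, the uniform-in-inverse-radius sampling, and the argmin convention for the cost volume are all assumptions made here for concreteness.

```python
import numpy as np

def distances_from_cost_volume(cost_volume, r_min=0.5, r_max=50.0):
    """Convert a final 3D cost volume into per-pixel separation distances.

    cost_volume: (N, H, W) matching costs for N virtual rounded cuboids.
    The N cuboids are assumed to be sampled uniformly in inverse radius
    (1/r), giving finer depth resolution near the moving body.
    """
    n = cost_volume.shape[0]
    # Inverse radii sampled from 1/r_min (near) down to 1/r_max (far).
    inv_radii = np.linspace(1.0 / r_min, 1.0 / r_max, n)
    # Inverse radius index = cuboid with the lowest matching cost per pixel.
    idx = np.argmin(cost_volume, axis=0)   # (H, W) inverse radius indices
    inv_r = inv_radii[idx]                 # (H, W) extracted inverse radii
    return 1.0 / inv_r                     # separation distances

# Toy usage: a monotonically increasing cost picks cuboid 0 everywhere.
cost = np.broadcast_to(np.arange(32.0).reshape(32, 1, 1), (32, 4, 4))
dist = distances_from_cost_volume(cost)    # all pixels at r_min = 0.5
```

The choice of argmin assumes the cost volume encodes matching *cost*; if it instead encoded matching probability, argmax would be used.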
Method And Apparatus for Image Registration
An image registration apparatus including at least one processor configured to project, to a first model, a first image generated based on an image obtained from a first camera to generate a first intermediate image, to map the first intermediate image to a first output model to generate a first output image, to project, to a second model, a second image generated based on an image obtained from a second camera to generate a second intermediate image, to map the second intermediate image to a second output model to generate a second output image, and to determine a match rate between the first output image and the second output image and transform at least one of the first model and the second model based on the determined match rate and a preset reference match rate.
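The projection-then-compare step can be illustrated with stand-ins: a plain affine warp as the "model" projection and normalized cross-correlation as the match rate. Both are assumptions for illustration; the patent's models and match metric are left abstract. The full loop would perturb one model until the match rate reaches the preset reference.

```python
import numpy as np

def warp_affine(img, m):
    """Nearest-neighbour warp of 2D `img` by a 2x3 affine matrix `m`
    (a toy stand-in for projecting an image through a model)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src = np.stack([xs, ys, np.ones_like(xs)], axis=-1) @ m.T   # (h, w, 2)
    sx = np.clip(np.round(src[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(src[..., 1]).astype(int), 0, h - 1)
    return img[sy, sx]

def match_rate(a, b):
    """Normalized cross-correlation in [-1, 1] as an example match score."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0
```

With the identity matrix `[[1,0,0],[0,1,0]]`, `warp_affine` returns the input unchanged and `match_rate` of an image with itself is 1.0, the trivial fixed point of the registration loop.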
System for Determining Road Slipperiness in Bad Weather Conditions
Systems and methods are disclosed for estimating slipperiness of a road surface. This estimate may be obtained using an image sensor mounted on a vehicle. The estimated road slipperiness may be utilized when calculating a risk index for the road, or for an area including the road. If a predetermined threshold for slipperiness is exceeded, corrective actions may be taken. For instance, warnings may be generated for human drivers who are in control of the vehicle, and autonomous vehicles may automatically adjust vehicle speed based upon the detected road slipperiness.
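The threshold-and-corrective-action logic can be sketched minimally. The slipperiness scale, the threshold value, and the action names are placeholders invented here; the estimate itself would come from the vehicle-mounted image sensor.

```python
def corrective_action(slipperiness, is_autonomous, threshold=0.6):
    """slipperiness in [0, 1]; returns the action the system would take.

    Below or at the predetermined threshold, no action is needed; above
    it, warn a human driver or have an autonomous vehicle slow down.
    """
    if slipperiness <= threshold:
        return "none"
    return "reduce_speed" if is_autonomous else "warn_driver"
```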
SYSTEMS AND METHODS FOR VALUATION OF A VEHICLE
Aspects described herein provide systems and methods that relate generally to image analysis and, more specifically, to identifying individual components and elements in an image. The systems and methods include a valuation application executing one or more application programming interfaces (APIs) that communicate with one or more websites via a network, where the user is prompted to enter information and/or take pictures or videos of the vehicle they would like to sell. The valuation application utilizes a machine learning model to identify and value the various vehicle components within the images and videos. Based on the machine learning model, the valuation application identifies each component according to the images and videos and performs a search to determine the value of the components identified. The valuation application tabulates and summarizes the vehicle component resale values and resale information for the user to view.
Systems and Methods for Image Based Perception
Systems and methods for image-based perception. The methods comprise: capturing images by a plurality of cameras with overlapping fields of view; generating, by a computing device, spatial feature maps indicating locations of features in the images; identifying, by the computing device, overlapping portions of the spatial feature maps; generating, by the computing device, at least one combined spatial feature map by combining the overlapping portions of the spatial feature maps together; and/or using, by the computing device, the at least one combined spatial feature map to define a predicted cuboid for at least one object in the images.
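The fusion of overlapping spatial feature maps can be illustrated as follows. The horizontal-overlap layout and the element-wise mean as the combining operator are assumptions made here; the patent does not specify the fusion operator.

```python
import numpy as np

def combine_overlap(map_a, map_b, overlap):
    """Fuse the last `overlap` columns of map_a with the first `overlap`
    columns of map_b, modeling two adjacent cameras whose fields of view
    overlap horizontally, then stitch into one combined feature map."""
    fused = (map_a[:, -overlap:] + map_b[:, :overlap]) / 2.0
    return np.concatenate(
        [map_a[:, :-overlap], fused, map_b[:, overlap:]], axis=1
    )

# Toy usage: two 2x5 maps with a 2-column overlap give a 2x8 combined map.
combined = combine_overlap(np.ones((2, 5)), np.ones((2, 5)) * 3.0, 2)
```

The combined map would then feed the downstream network that predicts a cuboid for each object, as the abstract describes.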
PHOTOELECTRIC CONVERSION DEVICE
A photoelectric conversion device includes a substrate provided with pixels each including a photoelectric converter that accumulates charge generated by an incidence of light, a charge holding portion that holds charge transferred from the photoelectric converter, and an amplifier unit that includes an input node that receives charge transferred from the charge holding portion, a metal film disposed over a side of a first surface of the substrate so as to cover at least the charge holding portion, and a trench structure provided in the substrate on the side of the first surface of the substrate. The photoelectric conversion device is configured such that the light is incident from the side of the first surface of the substrate. The trench structure is disposed between the photoelectric converter and the charge holding portion of a first pixel.
RECEIVING-SIDE APPARATUS, IMAGE QUALITY IMPROVEMENT SYSTEM, AND IMAGE QUALITY IMPROVEMENT METHOD
A receiving-side apparatus executes: reception processing for receiving a camera image from a transmitting-side apparatus; determination processing for determining whether execution conditions for performing image quality improvement processing with respect to the camera image are established; in a case where the execution conditions are established, order processing for transmitting the camera image to an external server apparatus connected through a network to the receiving-side apparatus, and ordering the image quality improvement processing with respect to the camera image; processing for receiving an image-quality-improved image generated by performing image quality improvement processing on the camera image from the external server apparatus; and processing for displaying the image-quality-improved image on a display apparatus.
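The receiving-side control flow can be sketched as a single decision: if the execution conditions are established, order image quality improvement from the external server; otherwise display the camera image as received. The condition names and callable interfaces below are invented placeholders, not the patent's apparatus.

```python
def handle_camera_image(image, conditions, improve_remote, display):
    """conditions: dict of boolean execution conditions.
    improve_remote: callable standing in for ordering improvement from
    the external server over the network; display: callable standing in
    for the display apparatus."""
    if conditions and all(conditions.values()):
        # Execution conditions established: order the image quality
        # improvement processing and receive the improved image back.
        image = improve_remote(image)
    display(image)            # show the (possibly improved) image
    return image
```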
Systems and Methods for Image Based Perception
Systems and methods for image-based perception. The methods comprise: obtaining, by a computing device, images captured by a plurality of cameras with overlapping fields of view; generating, by the computing device, spatial feature maps indicating locations of features in the images; defining, by the computing device, predicted cuboids at each location of an object in the images based on the spatial feature maps; and assigning, by the computing device, at least two cuboids of said predicted cuboids to a given object when predictions from images captured by separate cameras of the plurality of cameras should be associated with a same detected object.
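The cross-camera assignment step can be illustrated with a simple association rule: cuboids from separate cameras whose 3D centers fall within a distance threshold are treated as the same detected object. The threshold and the center-distance criterion are assumptions; the abstract leaves the association test abstract.

```python
def associate_cuboids(cuboids_cam1, cuboids_cam2, max_dist=1.0):
    """Each cuboid is (x, y, z, l, w, h) with (x, y, z) its 3D center.
    Returns (i, j) index pairs whose cuboids are assigned to the same
    detected object across the two cameras."""
    pairs = []
    for i, a in enumerate(cuboids_cam1):
        for j, b in enumerate(cuboids_cam2):
            center_dist = sum((a[k] - b[k]) ** 2 for k in range(3)) ** 0.5
            if center_dist < max_dist:
                pairs.append((i, j))
    return pairs
```

A production system would more likely use 3D intersection-over-union and one-to-one matching, but the center-distance test keeps the sketch minimal.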
Vehicular vision system that dynamically calibrates a vehicular camera
A vehicular vision system includes a camera disposed at a vehicle and operable to capture multiple frames of image data during a driving maneuver of the vehicle. A control includes an image processor that processes frames of captured image data to determine feature points in an image frame when the vehicle is operated within a first range of steering angles, and to determine motion trajectories of those feature points in subsequent image frames for the respective range of steering angles. The control determines a horizon line based on the determined motion trajectories. Responsive to a determination that the determined horizon line is non-parallel to the horizontal axis of the image plane, at least one of pitch, roll or yaw of the camera is adjusted. Image data captured by the camera is processed at the control for object detection.
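The non-parallel-horizon check can be sketched as fitting a line to points believed to lie on the horizon (e.g., vanishing points derived from the feature-point trajectories) and comparing its tilt against the horizontal axis. The least-squares fit and the 0.5-degree tolerance are illustrative choices, not the patent's calibration procedure.

```python
import math

def horizon_roll_degrees(points):
    """Least-squares slope of a line through (x, y) horizon points,
    returned as an angle in degrees relative to the horizontal axis."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return math.degrees(math.atan2(num, den))

def needs_adjustment(points, tol_deg=0.5):
    """True when the fitted horizon is non-parallel to the image's
    horizontal axis beyond the tolerance, triggering a camera adjustment."""
    return abs(horizon_roll_degrees(points)) > tol_deg
```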
Method and apparatus for image processing and computer storage medium
A method and an apparatus for processing an image are provided. The method may include: acquiring a set of image sequences, the set of image sequences including a plurality of image sequence subsets divided according to similarity measurements between image sequences, each image sequence subset including a basic image sequence and at least one other image sequence, wherein a first similarity measurement corresponding to the basic image sequence is greater than or equal to a first similarity measurement corresponding to each other image sequence; creating an original three-dimensional model using the basic image sequence; and creating a final three-dimensional model using the other image sequences based on the original three-dimensional model.
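The selection of the basic image sequence within a subset can be sketched as picking the sequence with the greatest-or-equal similarity measurement; it seeds the original 3D model while the remaining sequences refine it into the final model. The `(sequence_id, similarity)` representation is an assumption made here for illustration.

```python
def split_basic_and_others(subset):
    """subset: list of (sequence_id, similarity_measurement) pairs for
    one image sequence subset. Returns (basic, others), where the basic
    sequence's similarity measurement is >= every other sequence's."""
    ordered = sorted(subset, key=lambda s: s[1], reverse=True)
    return ordered[0], ordered[1:]

# Toy usage: s2 has the highest similarity, so it becomes the basic
# sequence used to create the original three-dimensional model.
basic, others = split_basic_and_others(
    [("s1", 0.7), ("s2", 0.9), ("s3", 0.5)])
```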