Patent classifications
G06T7/596
SYSTEMS AND METHODS FOR GENERATING 3D IMAGES BASED ON FLUORESCENT ILLUMINATION
There is provided a computer implemented method for generating a three dimensional (3D) image based on fluorescent illumination, comprising: receiving in parallel, by each of at least three imaging sensors positioned at a respective parallax towards an object having a plurality of regions with fluorescent illumination therein, a respective sequence of a plurality of images including fluorescent illumination of the plurality of regions, each of the plurality of images separated by an interval of time; analyzing the respective sequences to create a volume-dataset indicative of the depth of each respective region of the plurality of regions; and generating a 3D image according to the volume-dataset.
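The abstract does not specify how the multi-sensor sequences are analyzed into depths, but the underlying parallax-to-depth relation can be sketched as follows. The function names, the pinhole-camera model, and the dict-based "volume-dataset" are illustrative assumptions, not details from the patent:

```python
def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Classic parallax relation for a rectified sensor pair:
    depth = focal * baseline / disparity (pinhole model assumed)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def build_volume_dataset(region_disparities, baseline_m, focal_px):
    """Map each fluorescent region to an estimated depth, standing in
    for the patent's volume-dataset (illustrative structure only)."""
    return {region: depth_from_disparity(d, baseline_m, focal_px)
            for region, d in region_disparities.items()}
```

For two sensors 0.1 m apart with an 800 px focal length, a region observed with 16 px of disparity sits 5 m away.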
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND STORAGE MEDIUM
The image processing apparatus of the present invention is an image processing apparatus that generates a virtual viewpoint image based on image data obtained by capturing an image capturing area from a plurality of directions with a plurality of cameras, the image processing apparatus including: an acquisition unit configured to acquire viewpoint information of a virtual viewpoint; an area determination unit configured to determine a three-dimensional area in accordance with a position and a size of a specific object within the image capturing area; and a generation unit configured to generate the virtual viewpoint image in accordance with the virtual viewpoint indicated by the viewpoint information, such that an object that is within the field of view of the virtual viewpoint but is not included in the three-dimensional area determined by the area determination unit is not displayed in the virtual viewpoint image.
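A minimal sketch of the area-based filtering step, assuming the determined three-dimensional area is an axis-aligned box around the specific object. The class and function names are hypothetical, and the patent does not mandate a box shape:

```python
from dataclasses import dataclass

@dataclass
class Box3D:
    """Axis-aligned 3D area centred on a specific object (illustrative)."""
    cx: float
    cy: float
    cz: float
    half: float  # half-extent, derived from the object's size

    def contains(self, point):
        x, y, z = point
        return (abs(x - self.cx) <= self.half and
                abs(y - self.cy) <= self.half and
                abs(z - self.cz) <= self.half)

def visible_objects(objects, area):
    """Keep only objects inside the determined 3D area; anything outside
    it is excluded from the virtual viewpoint image."""
    return [name for name, pos in objects.items() if area.contains(pos)]
```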
Generating images from light fields utilizing virtual viewpoints
Systems and methods for the synthesis of light field images from virtual viewpoints in accordance with embodiments of the invention are disclosed. In one embodiment of the invention, a system includes a processor and a memory configured to store captured light field image data and an image manipulation application, wherein the captured light field image data includes image data, pixel position data, and a depth map, and wherein the image manipulation application configures the processor to obtain captured light field image data, determine a virtual viewpoint for the captured light field image data, where the virtual viewpoint includes a virtual location and virtual depth information, compute a virtual depth map based on the captured light field image data and the virtual viewpoint, and generate an image from the perspective of the virtual viewpoint based on the captured light field image data and the virtual depth map.
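The step of computing a virtual depth map can be sketched as a back-project/translate/re-project loop. This assumes a pinhole model and a pure-translation virtual viewpoint; the abstract fixes neither, and the function name is illustrative:

```python
def reproject_depth(depth_map, pixel_positions, focal, cam_shift):
    """Sketch of a virtual depth map: back-project each pixel to 3D
    using its depth, translate into the virtual camera frame, and
    re-project, keeping the nearest surface per pixel (z-buffering)."""
    virtual = {}
    tx, ty, tz = cam_shift
    for (u, v), z in zip(pixel_positions, depth_map):
        # back-project pixel (u, v) with depth z to a camera-space point
        x, y = u * z / focal, v * z / focal
        # translate into the virtual-viewpoint frame
        xv, yv, zv = x - tx, y - ty, z - tz
        if zv <= 0:
            continue  # point lies behind the virtual camera
        uv, vv = focal * xv / zv, focal * yv / zv
        key = (round(uv), round(vv))
        if key not in virtual or zv < virtual[key]:
            virtual[key] = zv  # keep the nearest surface
    return virtual
```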
Gesture operation method based on depth values and system thereof
A gesture operation method based on depth values, and a system thereof, are disclosed. A stereoscopic-image camera module acquires a first stereoscopic image. An algorithm is then performed to judge whether the first stereoscopic image includes a triggering gesture. The stereoscopic-image camera module then acquires a second stereoscopic image, and another algorithm is performed to judge whether the second stereoscopic image includes a command gesture, in which case the operation corresponding to the command gesture is performed.
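The two-stage trigger/command flow can be mirrored with a small state machine. The class name and gesture labels are illustrative, and the stereoscopic gesture classification itself is abstracted into plain labels here:

```python
class GestureController:
    """Arm on a triggering gesture, then run the operation mapped to
    the next command gesture (sketch of the two-stage flow)."""

    def __init__(self, commands):
        self.armed = False
        self.commands = commands  # gesture label -> operation callable

    def process(self, gesture):
        if not self.armed:
            if gesture == "trigger":
                self.armed = True  # triggering gesture recognized
            return None
        self.armed = False  # each trigger enables one command
        op = self.commands.get(gesture)
        return op() if op else None
```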
Apparatus for inspecting and sorting
A method and apparatus for sorting are described, which include: providing a product stream formed of individual objects of interest having feature aspects which can be detected; generating multiple images of each of the respective objects of interest; classifying the feature aspects of the objects of interest; identifying complementary images by analyzing some of the multiple images; fusing the complementary images to form an aggregated region representation of the complementary images; and sorting the respective objects of interest based at least in part upon the aggregated region representation which is formed.
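One plausible reading of the fuse-then-sort steps, with a pixel-wise OR over binary feature masks standing in for the unspecified fusion operator, and a simple flagged-area threshold standing in for the sorting criterion:

```python
def fuse_complementary(images):
    """Fuse complementary binary feature masks of the same object into
    an aggregated region representation (here: pixel-wise OR)."""
    h, w = len(images[0]), len(images[0][0])
    return [[int(any(img[r][c] for img in images)) for c in range(w)]
            for r in range(h)]

def sort_object(aggregated, defect_threshold):
    """Accept or reject the object based on its total flagged area."""
    area = sum(sum(row) for row in aggregated)
    return "reject" if area >= defect_threshold else "accept"
```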
HAIR RENDERING SYSTEM BASED ON DEEP NEURAL NETWORK
A deep neural network based hair rendering system is presented to model the high-frequency components of furry objects. Compared with existing approaches, the present method can generate photo-realistic rendering results. An acceleration method is applied in the framework, which speeds up both the training and rendering processes. In addition, a patch-based training scheme is introduced, which significantly increases the quality of outputs and preserves high-frequency details.
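Patch-based training in this spirit draws small crops rather than whole frames, so high-frequency detail dominates each training example. A sketch of only the sampling step, with the network and loss omitted and all names illustrative:

```python
import random

def sample_patches(image, patch, count, rng=None):
    """Draw `count` random patch x patch crops from a 2D image
    (nested lists), as a patch-based training scheme would."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    h, w = len(image), len(image[0])
    out = []
    for _ in range(count):
        r = rng.randrange(h - patch + 1)
        c = rng.randrange(w - patch + 1)
        out.append([row[c:c + patch] for row in image[r:r + patch]])
    return out
```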
BINOCULAR VISION-BASED ENVIRONMENT SENSING METHOD AND APPARATUS, AND UNMANNED AERIAL VEHICLE
A binocular vision-based environment sensing method and apparatus are applied to an unmanned aerial vehicle. The unmanned aerial vehicle is provided with five binocular cameras. The first binocular camera is disposed at the front portion of the fuselage of the unmanned aerial vehicle. The second binocular camera is inclined upward and disposed between the left side and the upper portion of the fuselage. The third binocular camera is inclined upward and disposed between the right side and the upper portion of the fuselage. The fourth binocular camera is disposed at the lower portion of the fuselage. The fifth binocular camera is disposed at the rear portion of the fuselage. The method can simplify an omni-directional sensing system while reducing the sensing blind area.
SYSTEMS AND METHODS FOR SCANNING THREE-DIMENSIONAL OBJECTS AND MATERIALS
According to at least one aspect, a system is provided. The system comprises a rotation device configured to rotate an object about an axis; a plurality of imaging sensors configured to image the object and comprising two imaging sensors that are disposed a fixed distance apart; and a computing device configured to: control the plurality of imaging sensors to capture a first set of images of the object in a first position; control the rotation device to rotate the object from the first position to a second position; control the plurality of imaging sensors to capture a second set of images of the object in the second position; generate a 3-dimensional (3D) model of the object using the first and second sets of images; and identify a scale of the 3D model of the object using the fixed distance.
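The scale-identification step reduces to a ratio: the two sensors are a known fixed distance apart, and the same pair appears in the reconstructed model at some model-space distance. A sketch under that reading (function names are illustrative):

```python
def model_scale(fixed_distance_m, model_distance_units):
    """Metric scale factor: known sensor baseline in metres divided by
    the same baseline measured in model-space units."""
    return fixed_distance_m / model_distance_units

def apply_scale(points, s):
    """Scale a list of (x, y, z) model points into metric units."""
    return [(x * s, y * s, z * s) for x, y, z in points]
```

For sensors 0.25 m apart that reconstruct 2.0 model units apart, every model coordinate is multiplied by 0.125 to obtain metres.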
STRUCTURED LIGHT MATCHING OF A SET OF CURVES FROM TWO CAMERAS
A method for matching points between images of a scene comprises: retrieving three images acquired by a sensor; extracting blobs from the reflection of the projected structured light in two of the images; for each given extracted blob of the first image: selecting an epipolar plane from a set of epipolar planes, identifying plausible combinations, calculating a matching error, and repeating the steps of selecting, identifying and calculating for each epipolar plane of the set; determining a most probable combination; identifying matching points between the two images; validating the matching points between the two images, said validating comprising, for each pair of matching points: determining a projection of the pair of matching points in a third image of the third camera, determining whether that projection is located on a blob, and identifying the pair of matching points as validated if it is; and providing the validated pairs of matching points.
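The third-image validation step can be sketched as follows, with `project_to_third` standing in for the calibrated triangulate-and-reproject chain the abstract leaves unspecified, and blobs reduced to centroid points with a pixel tolerance (both assumptions for illustration):

```python
def validate_matches(pairs, project_to_third, third_blobs, tol=1.5):
    """Keep a candidate pair of matching points only if its projection
    into the third image lands within `tol` pixels of a blob there."""
    validated = []
    for pair in pairs:
        u, v = project_to_third(pair)
        on_blob = any((u - bu) ** 2 + (v - bv) ** 2 <= tol ** 2
                      for bu, bv in third_blobs)
        if on_blob:
            validated.append(pair)
    return validated
```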
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
An information processing device includes a controller. In a case where multiple images are formed in air in a depth direction, the controller controls a display of at least one of the images corresponding to one position or multiple positions in accordance with a command from a user.