Patent classifications
H04N2013/0092
CONTENT NAVIGATION
One embodiment provides a method comprising receiving a piece of content and salient moments data for the piece of content. The method further comprises, based on the salient moments data, determining a first path for a viewport for the piece of content. The method further comprises displaying the viewport on a display device. Movement of the viewport is based on the first path during playback of the piece of content. The method further comprises generating an augmentation for a salient moment occurring in the piece of content, and presenting the augmentation in the viewport during a portion of the playback. The augmentation comprises an interactive hint for guiding the viewport to the salient moment.
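The abstract above describes moving a viewport along a path derived from salient moments data. A minimal sketch of one way such a path could be computed is shown below, assuming salient moments are given as (timestamp, pan-angle) pairs and the viewport pans linearly between them; this data format and interpolation scheme are illustrative assumptions, not taken from the patent.

```python
from bisect import bisect_right

def viewport_path(salient_moments, t):
    """Return an interpolated viewport pan angle at playback time t.

    salient_moments: list of (timestamp_seconds, pan_angle_degrees) pairs.
    The representation is an assumption for illustration only.
    """
    moments = sorted(salient_moments)
    times = [m[0] for m in moments]
    if t <= times[0]:           # before the first salient moment
        return moments[0][1]
    if t >= times[-1]:          # after the last salient moment
        return moments[-1][1]
    i = bisect_right(times, t)  # first moment strictly after t
    (t0, a0), (t1, a1) = moments[i - 1], moments[i]
    frac = (t - t0) / (t1 - t0)
    return a0 + frac * (a1 - a0)
```

An interactive hint such as the one the abstract mentions could then be rendered from the angular offset between the current viewport angle and the next salient moment's angle.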
PLANT FEATURE DETECTION USING CAPTURED IMAGES
Described are methods for identifying the in-field positions of plant features on a plant by plant basis. These positions are determined based on images captured as a vehicle (e.g., tractor, sprayer, etc.) including one or more cameras travels through the field along a row of crops. The in-field positions of the plant features are useful for a variety of purposes including, for example, generating three-dimensional data models of plants growing in the field, assessing plant growth and phenotypic features, determining what kinds of treatments to apply including both where to apply the treatments and how much, determining whether to remove weeds or other undesirable plants, and so on.
Method and apparatus for controlling image display
A method of controlling image display includes: receiving an image capture instruction from a client device, wherein the image capture instruction includes a photographing direction, and the photographing direction is determined by the client device according to a relative position relationship between a position of the client device and a user-specified display position; controlling an image capture device to perform image capture according to the photographing direction, to obtain a depth image comprising a target object image; extracting the target object image from the depth image; and sending the target object image to the client device such that the client device displays the target object image at the display position.
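The photographing direction above is determined from the relative position of the user-specified display position with respect to the client device. One way this could be expressed is as yaw and pitch angles of the displacement vector, sketched below; the 3-D coordinate convention (x right, y forward, z up) and the angle parameterization are illustrative assumptions.

```python
import math

def photographing_direction(client_pos, display_pos):
    """Compute an illustrative (yaw, pitch) photographing direction in
    degrees from the client device position toward the display position.

    Positions are (x, y, z) tuples; x right, y forward, z up is an
    assumed convention, as the abstract does not specify one.
    """
    dx = display_pos[0] - client_pos[0]
    dy = display_pos[1] - client_pos[1]
    dz = display_pos[2] - client_pos[2]
    yaw = math.degrees(math.atan2(dx, dy))                 # 0 deg = straight ahead
    pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))  # 0 deg = level
    return yaw, pitch
```

The image capture device would then be oriented (or its capture cropped) according to the returned angles before the depth image is acquired.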
IMAGE PROJECTION
According to one example for outputting image data, an image comprising a surface and an object is captured on a sensor. An object mask based on the captured image is created on a processor. A first composite image based on the object mask and a source content file is created. In an example, the first composite image is projected onto the surface.
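Compositing a source content frame with an object mask can be sketched as follows, assuming grayscale images represented as row-major lists and a binary mask; masked pixels (where an object occludes the surface) are replaced with a fill value so content is not projected onto the object. The representation and fill policy are assumptions for illustration.

```python
def make_composite(source, object_mask, fill=0):
    """Build a composite image from a source frame and a binary mask.

    source: rows of grayscale pixel values.
    object_mask: rows of 0/1 flags, same shape; 1 marks object pixels.
    Pixels under the mask are set to `fill` (black by default) so the
    projector leaves the detected object unlit.
    """
    return [
        [fill if m else s for s, m in zip(src_row, mask_row)]
        for src_row, mask_row in zip(source, object_mask)
    ]
```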
METHOD AND APPARATUS FOR USER INTERACTION FOR VIRTUAL MEASUREMENT USING A DEPTH CAMERA SYSTEM
A method and apparatus provide user interaction for virtual measurement using a depth camera system. According to a possible embodiment, an image of a scene can be displayed on a display of the apparatus. A first frame of the scene can be captured using a first camera on an apparatus. A second frame of the scene can be captured using a second camera on the apparatus. A depth map can be generated based on the first frame and the second frame. A user input can be received that generates at least a human generated segment for measurement on the displayed scene. A measurement overlay can be generated based on the user input and the depth map. The measurement overlay can indicate a measurement in the scene. The measurement overlay can be displayed on a frame of the scene on the display.
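The measurement step described above rests on two standard relations: pinhole stereo depth, Z = f·B/d (focal length times baseline over disparity), and back-projection of image points to 3-D before measuring distance. A minimal sketch under those assumptions (pixel coordinates relative to the principal point; the patent does not specify its depth-map formulation):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo depth in meters: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def segment_length(p1, p2, z1, z2, focal_px):
    """Length in meters of a segment drawn on the image.

    p1, p2: endpoint pixel coordinates relative to the principal point.
    z1, z2: depths at the endpoints, e.g. from depth_from_disparity().
    Each endpoint is back-projected to 3-D, then the Euclidean distance
    is returned -- the value a measurement overlay would display.
    """
    def to_3d(p, z):
        return (p[0] * z / focal_px, p[1] * z / focal_px, z)
    a, b = to_3d(p1, z1), to_3d(p2, z2)
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
```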
SHAPE RECONSTRUCTION USING ELECTRONIC LIGHT DIFFUSING LAYERS (E-GLASS)
Shape measurement of a specular object is performed even in the presence of multiple intra-object reflections, such as those at concave regions of the object. Silhouettes of the object are extracted by positioning the object between a camera and a background. A visual hull of the object is reconstructed based on the extracted silhouettes, such as by image capture of shadows of the object projected onto a screen, and image capture of reflections by the surface of the object of coded patterns onto the screen. The visual hull is used to distinguish between direct (single) reflections of the coded patterns at the surface of the object and multiple reflections. Only the direct (single) reflections are used to triangulate camera rays and light rays onto the surface of the object, with multiple reflections being excluded. The 3D surface shape may be derived by voxel carving of the visual hull, in which voxels along the light path of direct reflections are eliminated. For surface reconstruction of heterogeneous objects, which exhibit both diffuse and specular reflectivity, variations in the polarization state of polarized light may be used to separate between a diffuse component of reflection and a specular component.
User interface control device, user interface control method, computer program and integrated circuit
A user interface control device provides a GUI allowing a depth of a graphic to be easily set when composing the graphic with a stereoscopic image. The device comprises: a graphic information obtaining unit that specifies an area occupied by the graphic when the graphic is arranged on one of two viewpoint images forming a stereoscopic image; a depth information analyzing unit that acquires a depth of a subject appearing within the specified area occupied by the graphic in the one viewpoint image; and a depth setting presenting unit that presents a first alternative and a second alternative for setting a depth of the graphic, the first alternative corresponding to the depth of the subject, and the second alternative corresponding to a depth differing from the depth of the subject.
Interior Camera Apparatus, Driver Assistance Apparatus Having The Same And Vehicle Having The Same
An interior camera apparatus includes a frame body and a stereo camera provided in the frame body and including a first camera and a second camera. The interior camera apparatus also includes a light module provided in the frame body and configured to radiate infrared light; and a circuit board connected to the stereo camera and the light module. The light module includes a first light emitting element and a second light emitting element. The interior camera apparatus is configured to direct infrared light emitted from the first light emitting element in a first irradiation direction and to direct infrared light emitted from the second light emitting element in a second irradiation direction different from the first irradiation direction.
Conversion of a digital stereo image into multiple views with parallax for 3D viewing without glasses
A method for generating additional views from a stereo image defined by a left eye image and a right eye image. The method includes receiving as input at least one stereo image. The method includes, for each of the stereo images, generating a plurality of additional images. The method includes interlacing the additional images for each of the stereo images to generate three dimensional (3D) content made up of multiple views of the scenes presented by each of the stereo images. The interlacing may be performed such that the generated 3D content is displayable on a 3D display device including a barrier grid or a lenticular lens array on the monitor screen. The additional images may include 12 to 40 or more frames providing views of the one or more scenes from differing viewing angles than provided by the left and right cameras used to generate the original stereo image.
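Interlacing multiple views for a barrier-grid or lenticular display can be sketched as a column interleave, where output column c takes its pixels from view c mod N. This is one common scheme and an illustrative assumption; real panels vary in lens pitch, slant, and subpixel layout, which the patent does not detail.

```python
def interlace_views(views):
    """Column-interleave N equal-sized views into one display image.

    views: list of N images, each a row-major list of pixel rows.
    Output column c is sourced from view (c mod N), so adjacent
    columns present adjacent viewing angles under the lens array.
    """
    n = len(views)
    height = len(views[0])
    width = len(views[0][0])
    return [
        [views[c % n][r][c] for c in range(width)]
        for r in range(height)
    ]
```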
PLANT IDENTIFICATION USING HETEROGENOUS MULTI-SPECTRAL STEREO IMAGING
A farming machine identifies and treats a plant as the farming machine travels through a field. The farming machine includes a pair of image sensors for capturing images of a plant. The image sensors are different, and their output images are used to generate a depth map to improve the plant identification process. A control system identifies a plant using the depth map. The control system captures images, identifies a plant, and actuates a treatment mechanism in real time.