
IMAGE PROCESSING METHOD, APPARATUS, AND STORAGE MEDIUM

The present disclosure discloses an image processing method, apparatus, and non-transitory computer-readable medium. The method includes: acquiring a three-dimensional (3D) model and original texture images of an object, wherein the original texture images are acquired by an imaging device; determining a mapping relationship between the 3D model and the original texture images of the object; determining, among the original texture images, a subset of texture images associated with a first perspective of the imaging device; splicing the subset of texture images into a spliced texture image corresponding to the first perspective; and mapping the spliced texture image to the 3D model according to the mapping relationship.
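The perspective-based selection and splicing steps can be sketched as follows. This is a minimal illustration, not the patented implementation: the `angle` field, the fixed tolerance, and row-wise concatenation standing in for "splicing" are all assumptions.

```python
def select_by_perspective(images, perspective, tolerance=10.0):
    """Return the subset of texture images whose capture angle is
    within `tolerance` degrees of the requested perspective."""
    return [img for img in images if abs(img["angle"] - perspective) <= tolerance]

def splice(subset):
    """Concatenate the selected images row-wise into one spliced texture.
    Images are represented as equal-height lists of pixel rows."""
    if not subset:
        return []
    height = len(subset[0]["pixels"])
    return [sum((img["pixels"][r] for img in subset), []) for r in range(height)]

images = [
    {"angle": 0.0,  "pixels": [[1, 2], [3, 4]]},
    {"angle": 5.0,  "pixels": [[5, 6], [7, 8]]},
    {"angle": 90.0, "pixels": [[9, 9], [9, 9]]},  # excluded: wrong perspective
]
subset = select_by_perspective(images, perspective=0.0)
spliced = splice(subset)
# spliced == [[1, 2, 5, 6], [3, 4, 7, 8]]
```

The spliced result would then be mapped onto the 3D model via the previously determined mapping relationship.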

Estimating depth from a single image

During a training phase, a machine accesses reference images with corresponding depth information. The machine calculates visual descriptors and corresponding depth descriptors from this information, then generates a mapping that correlates these visual descriptors with their corresponding depth descriptors. After the training phase, the machine may perform depth estimation based on a single query image devoid of depth information. The machine may calculate one or more visual descriptors from the single query image and obtain a corresponding depth descriptor for each visual descriptor from the generated mapping. Based on the obtained depth descriptors, the machine creates depth information that corresponds to the single query image.
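The descriptor lookup can be sketched as follows, with the learned mapping approximated by a nearest-neighbor search over toy descriptor pairs. The descriptor vectors and the Euclidean distance metric are illustrative assumptions, not details from the patent.

```python
import math

def nearest_depth(query_desc, mapping):
    """Return the depth descriptor paired with the visual descriptor
    closest to the query descriptor."""
    visual, depth = min(mapping, key=lambda pair: math.dist(pair[0], query_desc))
    return depth

# Training phase: (visual descriptor, depth descriptor) pairs.
mapping = [
    ((0.0, 0.0), 1.0),   # e.g. flat region, near
    ((1.0, 1.0), 5.0),   # e.g. textured region, far
]

# Query phase: a descriptor computed from a single image with no depth.
estimated = nearest_depth((0.9, 1.1), mapping)
# estimated == 5.0
```

Aggregating the per-descriptor depths over the image would yield the created depth information.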

SCENE ANALYZING METHOD AND MONITORING DEVICE USING THE SAME

A scene analyzing method includes the steps of: receiving captured scene information; analyzing different targets in the scene information to obtain characteristic information for each of the targets; and sending the obtained characteristic information to an external device, or correlating the characteristic information with the scene information and storing them to a storage device so that the scene information corresponding to stored characteristic information can be retrieved. The scene information corresponding to the characteristic information is searchable to extract a specific target when the method is used for monitoring applications, reducing labor costs and increasing the efficiency of monitoring.
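The store-and-retrieve step can be sketched as follows. This is a minimal sketch under stated assumptions: characteristic information is modeled as tag strings per target, and the "storage device" as an in-memory index.

```python
class SceneStore:
    """Correlates per-target characteristic information with scene information."""

    def __init__(self):
        self._records = []  # (scene_id, target_id, tags)

    def store(self, scene_id, target_id, tags):
        """Store a target's characteristics correlated with its scene."""
        self._records.append((scene_id, target_id, set(tags)))

    def search(self, tag):
        """Retrieve the scenes containing a target with the given characteristic."""
        return sorted({scene for scene, _, tags in self._records if tag in tags})

store = SceneStore()
store.store("frame_001", "person_a", ["red jacket", "walking"])
store.store("frame_002", "person_b", ["blue car"])
store.store("frame_007", "person_a", ["red jacket", "running"])
hits = store.search("red jacket")
# hits == ["frame_001", "frame_007"]
```

A monitoring application would query by characteristic to extract only the frames containing the specific target.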

REAL-TIME CAMERA POSITION ESTIMATION WITH DRIFT MITIGATION IN INCREMENTAL STRUCTURE FROM MOTION
20180315221 · 2018-11-01 ·

A system provides camera position and point cloud estimation for 3D reconstruction. The system receives images and attempts existing structure integration, integrating the images into an existing reconstruction under a sequential image reception assumption. If existing structure integration fails, the system attempts dictionary overlap detection by accessing a dictionary database and searching it for a closest matching frame in the existing reconstruction. If overlaps are found, the system matches the images against the overlaps to determine a highest-probability frame, and attempts existing structure integration again. If no overlaps are found, or if existing structure integration fails again, the system attempts bootstrapping based on the images. If any of existing structure integration, dictionary overlap detection, or bootstrapping succeeds, and if multiple disparate tracks exist, the system attempts reconstructed track merging.
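The staged fallback described above can be expressed as control flow. All stage callables below are hypothetical stand-ins for the patent's components, supplied by the caller; each returns a truthy value on success.

```python
def process_images(images, integrate, detect_overlaps, match_best,
                   bootstrap, merge_tracks, num_tracks):
    """Run the staged estimation pipeline; return True if any stage succeeds."""
    succeeded = integrate(images)                # 1. existing structure integration
    if not succeeded:
        overlaps = detect_overlaps(images)       # 2. dictionary overlap detection
        if overlaps:
            best = match_best(images, overlaps)  # highest-probability frame
            succeeded = integrate(images, anchor=best)  # retry integration
    if not succeeded:
        succeeded = bootstrap(images)            # 3. bootstrapping
    if succeeded and num_tracks > 1:
        merge_tracks()                           # 4. reconstructed track merging
    return succeeded

# Toy stubs: integration fails without an anchor frame, succeeds with one.
merged = []
result = process_images(
    ["img"],
    integrate=lambda imgs, anchor=None: anchor is not None,
    detect_overlaps=lambda imgs: ["frame_12", "frame_40"],
    match_best=lambda imgs, overlaps: overlaps[0],
    bootstrap=lambda imgs: False,
    merge_tracks=lambda: merged.append(True),
    num_tracks=2,
)
# result is True, and track merging ran once because two tracks existed
```

The ordering matters: bootstrapping (which starts a new disparate track) is only attempted once both integration paths have failed.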

APPARATUS AND METHOD FOR APPLYING HAPTIC ATTRIBUTES USING TEXTURE PERCEPTUAL SPACE
20180308246 · 2018-10-25 ·

Embodiments relate to an apparatus for applying a haptic property using a texture perceptual space and a method therefor, the apparatus including an image acquirer configured to acquire an image of a part of a virtual object inside a virtual space, a perceptual space position determiner configured to determine a position of the image inside a texture perceptual space in which a plurality of haptic models are arranged at predetermined positions, using feature points of the acquired image, a haptic model determiner configured to determine a haptic model that is closest to the determined position of the image, and a haptic property applier configured to apply a haptic property of the determined haptic model to the part of the virtual object, in which each of the haptic models includes a texture image and a haptic property for a specific object.
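The nearest-model selection can be sketched as follows. This is a hedged sketch: the texture perceptual space is modeled as 2D, the haptic models and their coordinates are invented for illustration, and the image's position is assumed to come from a feature extractor not shown here.

```python
import math

# Hypothetical haptic models arranged at fixed positions in a 2D
# texture perceptual space.
HAPTIC_MODELS = {
    "silk":  {"pos": (0.1, 0.9), "stiffness": 0.2,  "friction": 0.1},
    "wood":  {"pos": (0.5, 0.4), "stiffness": 0.7,  "friction": 0.5},
    "stone": {"pos": (0.9, 0.2), "stiffness": 0.95, "friction": 0.6},
}

def closest_haptic_model(image_pos):
    """Return the name of the haptic model nearest the image's position
    inside the texture perceptual space."""
    return min(HAPTIC_MODELS,
               key=lambda name: math.dist(HAPTIC_MODELS[name]["pos"], image_pos))

def apply_haptics(virtual_part, image_pos):
    """Copy the chosen model's haptic properties onto the object part."""
    model = HAPTIC_MODELS[closest_haptic_model(image_pos)]
    virtual_part.update(stiffness=model["stiffness"], friction=model["friction"])
    return virtual_part

part = apply_haptics({"name": "table_top"}, image_pos=(0.6, 0.35))
# the "wood" model is closest, so its stiffness and friction are applied
```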

Method for creating binary code and electronic device thereof

A method for creating a binary code in an electronic device is provided, which includes operations of confirming an image resource for an application, based on a request for creating a binary code for the application; determining an attribute for the image resource; selectively converting the image resource into a compressed texture, based on the attribute; and, if the image resource is converted, creating the binary code for the application, based on the converted image resource.
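The selective conversion step can be sketched as follows, with a hypothetical attribute rule invented for illustration: opaque images are converted to a compressed texture, while images with an alpha channel are kept as-is before the binary is built.

```python
def should_compress(resource):
    """Decide from the resource's attributes whether to convert it."""
    return not resource.get("has_alpha", False)

def build_binary(resources):
    """Selectively convert image resources, then assemble the binary manifest."""
    converted = []
    for res in resources:
        if should_compress(res):
            res = {**res, "format": "compressed_texture"}  # converted copy
        converted.append(res)
    return {"resources": converted}

binary = build_binary([
    {"name": "icon.png", "has_alpha": True, "format": "png"},
    {"name": "background.png", "has_alpha": False, "format": "png"},
])
# background.png is converted to "compressed_texture"; icon.png stays "png"
```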

Vehicle photographic tunnel
10063758 · 2018-08-28 ·

A system and method for automatically photographing vehicles in a drive-thru structure is provided, where the passage of a vehicle triggers an automated process that captures a series of vehicle images and uploads them to a web template for display and recordation. The captured images have controlled reflections from multiple angles and perspectives, so a viewer is able to discern whether there are surface imperfections, scratches, or dents on a vehicle surface. Reflections are controlled in the structure, a circular chamber whose curved walls are covered with a light-scattering sheet material, such as white canvas, or are finished in gray. The vehicle is illuminated with a sunset-horizon style of lighting, in which the lights are hidden below the curved wall, which may be gray or white, so as to produce a sunset-style reflection on the vehicle surface through subtractive lighting.

System and method for an otitis media database constructing and an image analyzing

The invention provides a system and a method for constructing an otitis media database. The method comprises the following steps. First, a plurality of tympanic membrane images is received, wherein the tympanic membrane images depict ear infections of different types. Second, one of the tympanic membrane images is chosen and classified into a plurality of anatomic regions based on a plurality of tissue types. Each anatomic region is coded with a numerical value describing its morbid condition, and an eigenvalue is obtained by collecting the numerical values of the anatomic regions. The remaining tympanic membrane images are chosen in turn until an eigenvalue has been obtained for each tympanic membrane image. Finally, a matrix is obtained by collecting the eigenvalue of each tympanic membrane image, generating the otitis media database.
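The coding and collection steps can be sketched as follows. This is a sketch under stated assumptions: the region names and morbidity codes are hypothetical, and the per-image "eigenvalue" is taken to be the vector of per-region codes, so stacking the vectors yields the database matrix.

```python
# Hypothetical fixed list of anatomic regions of the tympanic membrane.
REGIONS = ["pars_tensa", "pars_flaccida", "malleus", "light_reflex"]

def image_eigenvalue(region_codes):
    """Collect the numeric morbidity code of each anatomic region
    into one vector for a single tympanic membrane image."""
    return [region_codes[r] for r in REGIONS]

def build_database(images):
    """Stack each image's eigenvalue into the database matrix."""
    return [image_eigenvalue(codes) for codes in images]

matrix = build_database([
    {"pars_tensa": 2, "pars_flaccida": 0, "malleus": 1, "light_reflex": 0},
    {"pars_tensa": 0, "pars_flaccida": 0, "malleus": 0, "light_reflex": 1},
])
# matrix == [[2, 0, 1, 0], [0, 0, 0, 1]]
```

Each row of the matrix describes one image's morbid condition per region, which is what later image analysis would query.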

Information processing device and information processing method
10019058 · 2018-07-10 ·

An information processing apparatus includes: a gaze position detection section that detects a gaze position of a user with respect to a display screen; a position information acquisition section that acquires position information of the user; and a control section that judges, based on the detected gaze position, a scene and an object in the scene that the user focuses on in content displayed on the display screen, and generates search information from the judged object and the acquired position information.
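The control section's judgment can be sketched as control flow. The object labels, bounding boxes, and location string below are invented for illustration; the gaze position is assumed to already be in screen coordinates.

```python
def generate_search_info(gaze_pos, user_location, objects):
    """Pick the on-screen object whose bounding box contains the gaze
    position, and pair it with the acquired position information."""
    for obj in objects:
        x0, y0, x1, y1 = obj["bbox"]
        if x0 <= gaze_pos[0] <= x1 and y0 <= gaze_pos[1] <= y1:
            return {"query": obj["label"], "near": user_location}
    return None  # user is not focusing on any known object

info = generate_search_info(
    gaze_pos=(120, 80),
    user_location="Shibuya, Tokyo",
    objects=[
        {"label": "ramen shop", "bbox": (100, 60, 200, 120)},
        {"label": "bus stop",   "bbox": (300, 40, 380, 110)},
    ],
)
# info == {"query": "ramen shop", "near": "Shibuya, Tokyo"}
```

Combining the focused object with the user's location narrows the generated search to results relevant to where the user actually is.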