G06T7/74

IMAGING SYSTEM
20180003932 · 2018-01-04 ·

An imaging system for creating an image of a target object, comprising: a mirror mounted on a gimbal and arranged to rotate about at least one axis; a gimbal drive unit configured to control the orientation of the gimbal; and a camera having its optical axis directed onto the mirror so that an image reflected in the mirror is within a field of view of the camera, wherein the gimbal drive unit is arranged to position the gimbal such that a reflection of the target object is within the field of view of the camera.
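As a rough geometric sketch of the steering principle (not the patented implementation): rotating a flat mirror by an angle θ deflects the reflected line of sight by 2θ, so the gimbal only needs to turn the mirror by half the target's bearing. The function names, the single-axis simplification, and the field-of-view width are all illustrative assumptions.

```python
def mirror_angle_for_target(target_bearing_deg: float) -> float:
    """A flat mirror rotated by theta deflects the reflected ray by
    2*theta, so the gimbal turns the mirror by half the target bearing."""
    return target_bearing_deg / 2.0

def in_field_of_view(target_bearing_deg: float, mirror_deg: float,
                     fov_deg: float = 10.0) -> bool:
    """True when the reflected line of sight lands on the target,
    i.e. the target's reflection falls within the camera's field of view."""
    reflected_bearing = 2.0 * mirror_deg
    return abs(reflected_bearing - target_bearing_deg) <= fov_deg / 2.0
```

A drive unit built on this relation would command `mirror_angle_for_target(bearing)` and then verify the target's reflection with `in_field_of_view`.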

Pattern Matching Device and Computer Program for Pattern Matching
20180005363 · 2018-01-04 ·

The purpose of the present invention is to provide a pattern matching device and computer program that carry out highly accurate positioning even if edge positions and the number of edges change. The present invention proposes a computer program and a pattern matching device wherein a plurality of edges included in first pattern data to be matched and a plurality of edges included in second pattern data to be matched with the first pattern data are associated, a plurality of different association combinations are prepared, the plurality of association combinations are evaluated using index values for the plurality of edges, and matching processing is carried out using the association combination selected through the evaluation.
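A minimal sketch of the idea of preparing several edge-association combinations and evaluating them with an index value, assuming 1-D edge positions and using the post-alignment residual as the index value (both are illustrative choices, not details from the abstract):

```python
from itertools import permutations

def association_score(assoc, edges_a, edges_b):
    """Index value for one association: after removing the mean shift
    between associated edges, sum the residual distances (lower is better)."""
    shifts = [edges_b[j] - edges_a[i] for i, j in assoc]
    mean_shift = sum(shifts) / len(shifts)
    return sum(abs(s - mean_shift) for s in shifts)

def best_association(edges_a, edges_b):
    """Prepare a plurality of different association combinations between
    the two edge sets, evaluate each, and select the best one."""
    k = min(len(edges_a), len(edges_b))
    candidates = [
        list(zip(range(k), perm))
        for perm in permutations(range(len(edges_b)), k)
    ]
    return min(candidates, key=lambda a: association_score(a, edges_a, edges_b))
```

With `edges_b` a uniformly shifted copy of `edges_a`, the identity association scores zero residual and is selected, even if extra spurious edges are present in one pattern.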

CALIBRATION OF AN INPUT DEVICE TO A DISPLAY USING THE INPUT DEVICE

Examples disclosed herein involve calibrating an input device to a display using the input device. An example method includes determining first coordinates of a position-encoded film corresponding to a location of an input device relative to the position-encoded film of a display of a computing device based on first position points of the position-encoded film, determining second coordinates of a pixel array corresponding to the location of the input device relative to the pixel array of the display based on a first reference pixel of the display, measuring a first offset between the first coordinates of the position-encoded film and the second coordinates of the pixel array, and calculating a calibration transformation to control the computing device based on the first offset.
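The offset-then-transformation step can be sketched as follows, assuming a pure-translation calibration from a single measured point (a real calibration would likely use several reference points and solve for rotation and scale as well; all names here are hypothetical):

```python
def measure_offset(film_xy, pixel_xy):
    """First offset between the position-encoded-film coordinates and the
    pixel-array coordinates of the same input-device location."""
    return (pixel_xy[0] - film_xy[0], pixel_xy[1] - film_xy[1])

def make_calibration(offset):
    """Calibration transformation mapping film coordinates to pixel
    coordinates, so input-device positions can control the computer."""
    dx, dy = offset
    def transform(film_xy):
        return (film_xy[0] + dx, film_xy[1] + dy)
    return transform
```

Once built, the transformation converts every subsequent film reading into a display position without re-measuring the offset.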

Position and attitude estimation device, position and attitude estimation method, and storage medium

According to one embodiment, a position and attitude estimation device includes a processor. The processor is configured to acquire time-series images continuously captured by a capture device installed on a mobile object, estimate a first position and attitude of the mobile object based on the acquired time-series images, estimate a distance to a subject included in the acquired time-series images, and, based on the estimated distance, correct the estimated first position and attitude to a second position and attitude on an actual scale.
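The scale correction rests on a standard property of monocular visual odometry: position is recovered only up to an unknown scale, and the ratio of a measured subject distance to its estimated counterpart fixes that scale. A sketch under that assumption (function and parameter names are illustrative):

```python
def correct_scale(position, estimated_depth, measured_depth):
    """Monocular estimates are known only up to scale; the ratio of the
    measured subject distance to the estimated one recovers the actual
    scale, which is then applied to the translation component."""
    scale = measured_depth / estimated_depth
    return tuple(scale * c for c in position)
```

For example, if the subject is measured at 4 m but estimated at 2 m, the whole trajectory is doubled.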

AUTOMATIC ANALYSER
20180012375 · 2018-01-11 ·

A two-dimensional code is attached to a location of a reagent storage unit which is visually recognizable from the outside, and a coordinate position of the two-dimensional code in a coordinate system of the two-dimensional code and coordinate information of an installation position of a reagent bottle are held. After that, an image of the two-dimensional code is captured by a portable terminal so that a coordinate system of an image capture unit of the portable terminal is converted into the coordinate system of the two-dimensional code using AR technology. The coordinate information of the installation position of the reagent bottle in the coordinate system of the two-dimensional code is regarded as positional coordinates in the captured image on the basis of the conversion, thereby ascertaining the position of the reagent bottle on the captured image and displaying the ascertained position on a display unit.
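A simplified sketch of regarding positions in the code's coordinate system as positions in the captured image: given the code's known physical size and its detected location in pixels, build a mapping from code-frame coordinates to image pixels. This assumes the code appears axis-aligned at a uniform scale (a full AR solution would estimate a homography from the code's corners); all names and values are illustrative.

```python
def code_to_image_mapping(code_origin_px, code_side_px, code_side_mm):
    """Map coordinates in the two-dimensional code's own frame (mm) to
    pixel positions in the captured image (axis-aligned simplification)."""
    scale = code_side_px / code_side_mm
    ox, oy = code_origin_px
    def to_pixels(xy_mm):
        return (ox + scale * xy_mm[0], oy + scale * xy_mm[1])
    return to_pixels
```

The held installation coordinates of each reagent bottle can then be pushed through `to_pixels` and drawn as an overlay on the display unit.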

METHOD OF AUTOMATIC POSITIONING OF A SEAT
20180009533 · 2018-01-11 ·

A method of automatically positioning a seat in an apparatus comprising two cameras located on either side of the seat, each in a position able to acquire images of the face of a user seated on the seat. The seat comprises at least one motor, each motor acting on a position of the seat along a predefined axis. The method comprises, for each camera: obtaining a position of a predefined image zone in which at least one eye of a user of the apparatus should be located; acquiring an image of a user seated on the seat; detecting at least one eye of the seated user in the image acquired; and obtaining a relative position between each eye detected and the predefined zone. Using each relative position obtained, at least one motor is actuated until each predefined zone contains at least one eye of the seated user.
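The actuate-until-contained loop can be sketched as a simple one-axis feedback loop. Here the eye position stands in for a fresh detection after each motor step, and the step size is an arbitrary illustrative value:

```python
def seat_adjustment_steps(eye_y, zone_top, zone_bottom, step=5.0):
    """Actuate the seat motor step by step until the detected eye lies
    inside the predefined image zone; returns the steps commanded.
    In the real apparatus a new image is acquired and the eye is
    re-detected after every step."""
    steps = []
    while not (zone_top <= eye_y <= zone_bottom):
        direction = step if eye_y < zone_top else -step
        steps.append(direction)
        eye_y += direction  # stand-in for re-detecting the eye
    return steps
```

With two cameras, one such loop per predefined axis runs until both zones contain an eye.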

Identifying objects within images from different sources

Techniques are disclosed for providing a notification that a person is at a particular location. For example, a resident device may receive from a user device an image that shows a face of a first person, the image being captured by a first camera of the user device. The resident device may also receive, from another device having a second camera, a second image showing a portion of a face of a second person, the second camera having a viewable area showing a particular location. The resident device may determine a score indicating a level of similarity between a first set of characteristics associated with the face of the first person and a second set of characteristics associated with the face of the second person. The resident device may then provide to the user device a notification based on determining the score.
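One common way to realize such a similarity score is cosine similarity between face-characteristic vectors, thresholded to decide whether to notify. The abstract does not specify the metric, so this is an illustrative sketch with a hypothetical threshold:

```python
import math

def similarity_score(features_a, features_b):
    """Cosine similarity between two face-characteristic vectors, in [-1, 1]."""
    dot = sum(a * b for a, b in zip(features_a, features_b))
    norm_a = math.sqrt(sum(a * a for a in features_a))
    norm_b = math.sqrt(sum(b * b for b in features_b))
    return dot / (norm_a * norm_b)

def should_notify(features_a, features_b, threshold=0.8):
    """Provide a notification only when the two faces are similar enough."""
    return similarity_score(features_a, features_b) >= threshold
```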

Method for Testing a Graphical Interface and Corresponding Test System
20180011784 · 2018-01-11 ·

This test method for validating a specification of a graphical interface consists of developing a scenario file corresponding to the validation test to be performed. The scenario file includes a plurality of instructions, in a natural programming language, each instruction including a function, parameters and an expected state of the graphical interface following the application of the function. The test is automatically performed by interpreting the scenario file so as to generate commands intended for an engine capable of interacting with the graphical interface and monitoring the evolution of its current state, and then analyzing a result file associating each instruction of the scenario file with a result corresponding to the comparison of the current state of the graphical interface following the application of the corresponding command with the expected state.
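The interpret-apply-compare cycle can be sketched as follows, with the scenario modeled as (function, parameters, expected state) tuples and the engine as a callable that applies a command and reports the interface's current state (both representations are illustrative assumptions):

```python
def run_scenario(scenario, engine):
    """Interpret each instruction of the scenario file, apply it through
    the engine, and record whether the graphical interface's current
    state matches the instruction's expected state."""
    results = []
    for function, params, expected_state in scenario:
        current_state = engine(function, params)
        results.append((function, current_state == expected_state))
    return results
```

The returned list plays the role of the result file: one pass/fail entry per instruction, obtained by comparing current state against expected state.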

Localization determination for mixed reality systems

To enable shared user experiences using augmented reality systems, shared reference points must be provided to have consistent placement (position and orientation) of virtual objects. Furthermore, the position and orientation (pose) of the users must be determinable with respect to the same shared reference points. However, without highly sensitive and expensive global positioning system (GPS) devices, pose information can be difficult to determine to a reasonable level of accuracy. Therefore, what is provided is an alternative approach to determining pose information for augmented reality systems, which can be used to perform location based content acquisition and sharing. Further, what is provided is an alternative approach to determining pose information for augmented reality systems that uses information from already existing GPS devices.

ANALYZING POSTURE-BASED IMAGE DATA
20180012357 · 2018-01-11 ·

Various embodiments are directed to systems and methods for determining whether an individual uses proper posture to perform a job duty/task. For example, systems may determine whether an individual utilizes proper posture when lifting a heavy item from a floor. Accordingly, various embodiments comprise an image capture device and a central computing entity configured to receive item information/data for an item to be moved by an individual and to determine whether the item information/data satisfies one or more image collection criteria. Upon determining the item information/data satisfies one or more of the image collection criteria, the computing entity may activate an image capture device to collect image information/data of individuals performing the job duty/task, to compare collected image information/data against a plurality of reference images, and to determine whether the collected image information/data is indicative of the individual performing the job duty/task according to proper posture considerations.
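The two-stage logic (check item data against collection criteria, and only then collect and judge posture) can be sketched as below. The weight criterion, the back-angle measure, and both thresholds are illustrative stand-ins, not details from the abstract:

```python
def collect_and_check(item_weight_kg, back_angle_deg,
                      weight_threshold_kg=15.0, max_back_angle_deg=30.0):
    """Activate image collection only when the item information satisfies
    the collection criteria (here: weight above a threshold), then compare
    the observed back angle against a proper-posture reference."""
    if item_weight_kg < weight_threshold_kg:
        return None  # criteria not met: no images are collected
    return back_angle_deg <= max_back_angle_deg
```

Light items return `None` (no collection triggered); heavy items return a pass/fail posture judgment.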