Patent classifications
G06T2207/30204
COORDINATING ALIGNMENT OF COORDINATE SYSTEMS USED FOR A COMPUTER GENERATED REALITY DEVICE AND A HAPTIC DEVICE
A first electronic device controls a second electronic device to measure a position of the first electronic device. The first electronic device includes a motion sensor, a network interface circuit, a processor, and a memory. The motion sensor senses motion of the first electronic device. The network interface circuit communicates with the second electronic device. The memory stores program code that is executed by the processor to perform operations that include, responsive to determining that the first electronic device has a level of motion that satisfies a defined rule, transmitting a request for the second electronic device to measure a position of the first electronic device. The position of the first electronic device is sensed and then stored in the memory. An acknowledgement is received from the second electronic device indicating that it has stored sensor data that can be used to measure the position of the first electronic device.
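The trigger-and-acknowledge flow described above can be sketched as follows. The motion rule, class names, and acknowledgement fields are illustrative assumptions (the abstract does not specify them); the rule assumed here is that a measurement is requested only while the device is nearly still.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

MOTION_THRESHOLD = 0.05  # illustrative "defined rule": low motion counts as stable

@dataclass
class Ack:
    """Acknowledgement from the second device (fields are assumptions)."""
    stored: bool
    position: Tuple[float, float, float]

class SecondDevice:
    """Stub for the measuring device; stores sensor data and acknowledges."""
    def request_position_measurement(self, device: "FirstDevice") -> Ack:
        # A real device would sense the first device's position here;
        # this stub returns a fixed position for illustration.
        return Ack(stored=True, position=(1.0, 2.0, 0.5))

class FirstDevice:
    def __init__(self) -> None:
        self.stored_position: Optional[Tuple[float, float, float]] = None

    def on_motion_sample(self, motion_level: float, peer: SecondDevice) -> None:
        # Transmit the request only when the sensed motion satisfies the rule.
        if motion_level < MOTION_THRESHOLD:
            ack = peer.request_position_measurement(self)
            if ack.stored:
                self.stored_position = ack.position
```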
REGISTRATION CHAINING WITH INFORMATION TRANSFER
A registration chaining system provides information transfer along a chain of registrations of images of same or different modalities. A registration at each link is based on a shared feature readily distinguished in a pair of images. The information is transferred using the registration.
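The chaining can be pictured as composing the per-link registrations in sequence, transferring a coordinate from the first image in the chain to the last. A minimal sketch, with each registration reduced to an illustrative callable coordinate map (the abstract does not specify the transform model):

```python
def transfer_point(point, registrations):
    """Transfer a point through a chain of pairwise registrations.

    Each registration maps coordinates of one image into the next image
    in the chain; applying them in order carries information from the
    first image to the last.
    """
    for reg in registrations:
        point = reg(point)
    return point
```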
TECHNIQUES FOR THREE-DIMENSIONAL ANALYSIS OF SPACES
An example method includes receiving a 2D image of a 3D space from an optical camera and identifying, in the 2D image, a virtual image generated by an optical instrument refracting and/or reflecting light. The example method further includes identifying, in the 2D image, a first object depicting a subject disposed in the 3D space from a first direction extending from the optical camera to the subject, and identifying, in the virtual image, a second object depicting the subject disposed in the 3D space from a second direction extending from the optical camera to the subject via the optical instrument, the second direction being different from the first direction. A 3D image depicting the subject is generated based on the first object and the second object. Alternatively, a location of the subject in the 3D space is determined based on the first object and the second object.
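One plausible way to realize the two-direction localization is standard two-ray triangulation, with the view through the optical instrument unfolded into a second ray from a virtual origin. This is a sketch under that assumption, not the patent's exact procedure:

```python
import numpy as np

def locate_subject(cam_origin, d1, virt_origin, d2):
    """Triangulate the subject as the midpoint of the closest points on
    two rays: one direct from the camera (origin cam_origin, direction d1)
    and one via the optical instrument, unfolded so it starts at a virtual
    origin virt_origin with direction d2.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Minimize |(o1 + s*d1) - (o2 + t*d2)| over s, t (normal equations).
    b = virt_origin - cam_origin
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    s, t = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
    p1 = cam_origin + s * d1
    p2 = virt_origin + t * d2
    return (p1 + p2) / 2
```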
VEHICULAR ACCESS CONTROL BASED ON VIRTUAL INDUCTIVE LOOP
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for monitoring events using a Virtual Inductive Loop system. In some implementations, image data is obtained from cameras. A region depicted in the obtained image data is identified, the region comprising lines spaced by a distance that satisfies a distance threshold. For each line included in the region, it is determined whether an object depicted crossing the line satisfies a height criterion indicating that the line is activated. In response to determining that an object depicted crossing a line satisfies the height criterion, an event is determined to have likely occurred using data indicating (i) which of the lines were activated and (ii) the order in which the lines were activated. In response to determining that an event likely occurred, actions are performed using at least some of the data.
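The order-of-activation logic can be sketched for a two-line region, mirroring how a physical inductive loop pair infers direction of travel. The line indices, event names, and timestamps are illustrative assumptions:

```python
def classify_event(activations):
    """Infer an event from line activations in a two-line region.

    activations: list of (line_index, timestamp) pairs, one per
    activated line. The activation order determines direction:
    crossing line 0 then line 1 is labeled "entry", the reverse "exit".
    """
    ordered = [line for line, _ in sorted(activations, key=lambda a: a[1])]
    if ordered == [0, 1]:
        return "entry"
    if ordered == [1, 0]:
        return "exit"
    return "unknown"
```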
CONTROL APPARATUS, CONTROL METHOD, RADIATION IMAGING SYSTEM, AND STORAGE MEDIUM
An apparatus includes an acquisition unit and a display control unit. The acquisition unit is configured to acquire information about an orientation of a detector. The detector is configured to capture a radiation image by detecting radiation, and includes a plurality of receptor fields for performing automatic exposure control and a mark enabling identification of the orientation of the detector. The display control unit is configured to display an icon related to the detector on a display unit based on the acquired information about the orientation of the detector.
Systems and methods for reconstruction and rendering of viewpoint-adaptive three-dimensional (3D) personas
An exemplary method includes maintaining a receiver-side mesh-vertices list, receiving duplicative-vertex information from a sender, responsively reducing the receiver-side mesh-vertices list in accordance with the received duplicative-vertex information, and rendering, using the reduced receiver-side mesh-vertices list, viewpoint-adaptive three-dimensional (3D) personas of a subject at least in part by weighting video pixel colors from different video-camera vantage points of video cameras that capture video streams of the subject, the weighting being performed according to a respective geometric relationship of each video-camera vantage point to a user-selected viewpoint.
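One plausible reading of the vantage-point weighting is to weight each camera's contribution by how well its viewing direction aligns with the user-selected viewpoint. A sketch under that assumption (the abstract does not specify the weighting function):

```python
def blend_colors(samples, view_dir):
    """Blend per-camera pixel colors for one surface point.

    samples: list of (camera_dir, (r, g, b)) where camera_dir is a unit
    vector from the point toward the camera. view_dir: unit vector
    toward the user-selected viewpoint. Cameras better aligned with the
    viewpoint get larger weights; back-facing cameras are ignored.
    """
    weights = []
    for cam_dir, _ in samples:
        dot = sum(a * b for a, b in zip(cam_dir, view_dir))
        weights.append(max(dot, 0.0))
    total = sum(weights) or 1.0
    blended = [0.0, 0.0, 0.0]
    for w, (_, color) in zip(weights, samples):
        for i in range(3):
            blended[i] += w * color[i] / total
    return tuple(blended)
```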
Autonomous mobile apparatus and control method thereof
The present disclosure provides an autonomous mobile apparatus and a control method thereof. The method includes: starting a SLAM mode; obtaining first image data captured by a first camera; extracting a first tag image of positioning tag(s) from the first image data; calculating a three-dimensional camera coordinate of feature points of the positioning tag(s) in a first camera coordinate system of the first camera based on the first tag image; calculating a three-dimensional world coordinate of the feature points of the positioning tag(s) in a world coordinate system based on a first camera pose of the first camera when obtaining the first image data in the world coordinate system and the three-dimensional camera coordinate; and generating a map file based on the three-dimensional world coordinate of the feature points of the positioning tag(s).
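The camera-to-world mapping step above is a rigid transform using the first camera's pose. A minimal sketch, assuming the pose is given as a world-frame rotation matrix and translation vector (the representation is an assumption, not stated in the abstract):

```python
import numpy as np

def camera_to_world(points_cam, R_wc, t_wc):
    """Map tag feature points from camera coordinates to world coordinates.

    points_cam: (N, 3) array of 3D points in the camera frame.
    R_wc, t_wc: the camera's pose in the world frame, so that
    p_world = R_wc @ p_cam + t_wc for each point.
    """
    return (R_wc @ points_cam.T).T + t_wc
```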
Method and apparatus for sensing moving ball
Provided are an apparatus and method for sensing a moving ball, which extract a feature portion, such as a trademark or logo indicated on the ball, from consecutive images of the moving ball acquired by an image acquisition unit embodied by a predetermined camera device, and calculate a spin axis and spin amount of the moving ball based on the feature portion. Spin of the ball is thus calculated simply, rapidly, and accurately with low computational load, achieving fast and stable calculation of ball spin even in a relatively low-performance system. The sensing apparatus includes an image acquisition unit for acquiring consecutive images, an image processing unit for extracting a feature portion from the acquired images, and a spin calculation unit for calculating spin using the extracted feature portion.
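A 2D simplification of the feature-based spin estimate: track the logo's angular position about an assumed fixed image-plane spin axis through the ball center across two consecutive frames. The geometry here is an illustrative reduction, not the patent's full 3D method:

```python
import math

def spin_from_feature(p1, p2, center, dt):
    """Estimate spin rate (rad/s) from the feature portion's position in
    two consecutive frames, assuming rotation about a fixed axis through
    `center` perpendicular to the image plane.

    p1, p2: (x, y) feature positions in frames separated by dt seconds.
    """
    a1 = math.atan2(p1[1] - center[1], p1[0] - center[0])
    a2 = math.atan2(p2[1] - center[1], p2[0] - center[0])
    # Wrap the angular difference into [-pi, pi) to handle the branch cut.
    dtheta = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi
    return dtheta / dt
```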
PURE POSE SOLUTION METHOD AND SYSTEM FOR MULTI-VIEW CAMERA POSE AND SCENE
A pure pose solution method and system for a multi-view camera pose and scene are provided. The method includes: a pure rotation recognition (PRR) step: performing PRR on all views and marking views having a pure rotation abnormality, to obtain marked views and non-marked views; a global translation linear (GTL) calculation step: selecting one of the non-marked views as a reference view, constructing a constraint t_r = 0, constructing a GTL constraint, solving a global translation (I), reconstructing a global translation of the marked views according to t_r and (I), and screening out a correct solution of the global translation; and a structure analytical reconstruction (SAR) step: performing analytical reconstruction on coordinates of all 3D points according to a correct solution of a global pose. The method and system can greatly improve the computational efficiency and robustness of multi-view camera pose and scene structure reconstruction.
TEMPORAL CODING OF MARKERS FOR OBJECT TRACKING
There is provided a method of motion tracking comprising arranging one or more active marker devices on an object, the active marker devices being configured to emit light and each having an associated temporally repeating pattern comprising a plurality of time frames; controlling the one or more active marker devices to emit light according to their respective temporally repeating patterns, wherein each temporally repeating pattern is such that the corresponding active marker device does not emit light during at least one time frame of the plurality of time frames; detecting light emitted by the one or more active marker devices using one or more cameras; and determining a spatial configuration of the object using the light detected by the one or more cameras.
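Because each pattern includes at least one dark frame, an observed on/off sequence can be matched against the known repeating patterns at every cyclic phase to identify which marker produced it. A minimal sketch with illustrative patterns and identifiers:

```python
def identify_marker(observed, patterns):
    """Match an observed on/off sequence to a marker's temporal code.

    observed: list of booleans, one per camera frame (True = light seen).
    patterns: dict mapping marker id to its repeating on/off pattern.
    Tries every cyclic phase of each pattern; returns the first marker id
    whose pattern is consistent with the observation, else None.
    """
    n = len(observed)
    for marker_id, pat in patterns.items():
        period = len(pat)
        for phase in range(period):
            if all(observed[i] == pat[(phase + i) % period] for i in range(n)):
                return marker_id
    return None
```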