Patent classifications
G06T2207/30241
Augmenting a Moveable Entity with a Hologram
- Daniel Joseph McCulloch
- Nicholas Gervase Fajt
- Adam G. Poulos
- Christopher Douglas Edmonds
- Lev Cherkashin
- Brent Charles Allen
- Constantin Dulu
- Muhammad Jabir Kapasi
- Michael Grabner
- Michael Edward Samples
- Cecilia Bong
- Miguel Angel Susffalich
- Varun Ramesh Mani
- Anthony James Ambrus
- Arthur C. Tomlin
- James Gerard Dack
- Jeffrey Alan Kohler
- Eric S. Rehmeyer
- Edward D. Parker
In embodiments of augmenting a moveable entity with a hologram, an alternate reality device includes a tracking system that can recognize an entity in an environment and track movement of the entity in the environment. The alternate reality device can also include a detection algorithm implemented to identify the entity recognized by the tracking system based on identifiable characteristics of the entity. A hologram positioning application is implemented to receive motion data from the tracking system, receive entity characteristic data from the detection algorithm, and determine a position and an orientation of the entity in the environment based on the motion data and the entity characteristic data. The hologram positioning application can then generate a hologram that appears associated with the entity as the entity moves in the environment.
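The core positioning step described above, combining a tracked position with an orientation so the hologram stays attached to the entity as it moves, can be sketched in 2-D. This is a minimal illustration, not the patent's method; the function name, the (forward, left) offset convention, and the flat 2-D frame are all assumptions:

```python
import math

def hologram_pose(entity_position, entity_orientation_deg, offset):
    """Place a hologram at a fixed offset in the entity's local frame so it
    appears attached to the entity as the entity moves (2-D sketch).
    entity_position: (x, y); offset: (forward, left) in the entity's frame."""
    th = math.radians(entity_orientation_deg)
    ox, oy = offset
    hx = entity_position[0] + ox * math.cos(th) - oy * math.sin(th)
    hy = entity_position[1] + ox * math.sin(th) + oy * math.cos(th)
    return (hx, hy, entity_orientation_deg)
```

Because the offset is expressed in the entity's local frame, the hologram both translates and rotates with the entity rather than staying at a fixed world position.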
METHOD AND APPARATUS FOR GENERATING AN EXTRAPOLATED IMAGE BASED ON OBJECT DETECTION
A method and apparatus are provided for generating an extrapolated image from existing film or video content, which can be displayed beyond the borders of that content to increase viewer immersion. The present principles provide for generating the extrapolated image without salient objects included therein, that is, objects that may distract the viewer from the main image. Such an extrapolated image is generated by determining salient areas and generating the extrapolated image with less salient content in their place. Alternatively, salient objects can be detected in the extrapolated image and removed. Additionally, selected salient objects may be added to the extrapolated image.
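The idea of suppressing salient content before extrapolating past the image border can be sketched as follows. The saliency measure here (deviation from the global mean intensity) and the border-replication extrapolation are deliberately toy stand-ins for whatever detectors and fill methods an implementation would actually use:

```python
import numpy as np

def saliency_map(img):
    """Toy saliency: absolute deviation from the global mean intensity."""
    return np.abs(img - img.mean())

def extrapolate_border(img, width, saliency_thresh=0.5):
    """Extend the image to the left by repeating its border column.
    If the border column is salient (likely a distracting object), replace
    it with a neutral fill so the object is not propagated into the
    extension."""
    sal = saliency_map(img)
    col = img[:, 0].copy()
    if sal[:, 0].mean() > saliency_thresh:
        col[:] = img.mean()  # suppress salient border content
    return np.concatenate([np.tile(col[:, None], (1, width)), img], axis=1)
```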
AUXILIARY BERTHING METHOD AND SYSTEM FOR VESSEL
The present invention provides an auxiliary berthing method and system for a vessel. In the berthing method, using solar-blind ultraviolet imaging, the position and attitude of a vessel relative to the shoreline of a port berth during berthing are calculated by at least two solar-blind ultraviolet imaging modules from light signals received from a solar-blind ultraviolet light source array arranged in advance on the shore. Further, when more than three solar-blind ultraviolet imaging modules are used, the method and device of the present invention apply a normalized correlation algorithm and a data fusion algorithm to improve the accuracy of the position and attitude data of the vessel.
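The fusion step, combining per-module estimates into one more accurate position/attitude estimate, can be sketched as a weighted average. This is only an illustrative stand-in for the patent's data fusion algorithm; the function name and the uniform-weight default are assumptions:

```python
def fuse_estimates(estimates, weights=None):
    """Weighted fusion of per-module (position, attitude) estimates from
    multiple imaging modules. estimates: list of equal-length tuples;
    weights: optional per-module confidences (uniform by default)."""
    n = len(estimates)
    if weights is None:
        weights = [1.0] * n
    total = sum(weights)
    dims = len(estimates[0])
    return tuple(
        sum(w * e[d] for w, e in zip(weights, estimates)) / total
        for d in range(dims)
    )
```

A real system would derive the weights from each module's measurement quality (e.g. correlation score), which is where the normalized correlation algorithm would feed in.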
System and method for object tracking and metric generation
Disclosed herein is a system and method directed to object tracking and metric generation using a plurality of cameras. The system includes the plurality of cameras disposed around a playing surface in a mirrored configuration, where the plurality of cameras are time-synchronized. The system further includes logic that, when executed by a processor, causes performance of operations including: obtaining a sequence of images from the plurality of cameras, continuously detecting an object in image pairs at successive points in time, wherein each image pair corresponds to a single point in time, continuously determining a location of the object within the playing space through triangulation of the object within each image pair, detecting a player and the object within each image of a subset of image pairs of the sequence of images, identifying a sequence of interactions between the object and the player, and storing the sequence of interactions.
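The per-pair triangulation step, recovering a 3-D object location from a time-synchronized image pair, can be sketched with the standard linear (DLT) method. The patent does not specify its triangulation math; this is a conventional approach, assuming known 3×4 projection matrices for the two cameras:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one object from a synchronized image
    pair. P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel
    observations of the same object at the same time instant."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize to a 3-D point
```

Running this on every detection pair at successive time instants yields the continuous object track described in the abstract.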
Detecting an excursion of a CMP component using time-based sequence of images and machine learning
Monitoring operations of a polishing system includes obtaining a time-based sequence of reference images of a component of the polishing system performing operations during a test operation of the polishing system, receiving from a camera a time-based sequence of monitoring images of an equivalent component of an equivalent polishing system performing operations during polishing of a substrate, determining a difference value for the time-based sequence of monitoring images by comparing the time-based sequence of reference images to the time-based sequence of monitoring images using an image processing algorithm, determining whether the difference value exceeds a threshold, and in response to determining the difference value exceeds the threshold, indicating an excursion.
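The compare-and-threshold step can be sketched with a simple frame-by-frame comparison. Mean absolute pixel difference is an assumed stand-in for the unspecified image processing algorithm:

```python
import numpy as np

def excursion_detected(reference_seq, monitoring_seq, threshold):
    """Compare a time-based sequence of monitoring images against the
    reference sequence frame by frame. The difference value is the mean
    absolute pixel difference across the sequence; an excursion is
    indicated when it exceeds the threshold."""
    diffs = [np.abs(m.astype(float) - r.astype(float)).mean()
             for r, m in zip(reference_seq, monitoring_seq)]
    difference_value = float(np.mean(diffs))
    return difference_value > threshold, difference_value
```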
SYSTEM AND METHOD FOR USING IMAGE DATA TO TRIGGER CONTACTLESS CARD TRANSACTIONS
A method for controlling near field communication between a device and a transaction card is disclosed. The method includes capturing, by a front-facing camera of the device, a series of images of the transaction card, and processing each image of the series to identify a darkness level associated with the distance of the transaction card from the front of the device. Each identified darkness level is compared to a predetermined darkness level associated with a preferred distance for a near field communication read operation. In response to an identified darkness level corresponding to the predetermined darkness level, a near field communication read operation is automatically triggered between the device and the transaction card for communication of a cryptogram from an applet of the transaction card to the device.
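The darkness-to-distance trigger can be sketched as follows: as the card approaches the front camera it occludes light, so mean luminance falls. The darkness definition (one minus mean luminance) and the tolerance band are assumptions for illustration:

```python
def should_trigger_read(image_gray, trigger_level, tolerance=0.05):
    """image_gray: 2-D nested list of 0..1 luminance values from the
    front-facing camera. Returns True when the frame's darkness level
    matches (within tolerance) the predetermined level associated with
    the preferred NFC read distance."""
    flat = [v for row in image_gray for v in row]
    darkness = 1.0 - sum(flat) / len(flat)  # 0 = fully bright, 1 = fully dark
    return abs(darkness - trigger_level) <= tolerance
```

In the described system, a True result would be the cue to start the NFC read operation that transfers the cryptogram.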
AGENT TRAJECTORY PREDICTION USING ANCHOR TRAJECTORIES
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for agent trajectory prediction using anchor trajectories.
MIGRATION PROPERTY CALCULATING APPARATUS, MIGRATION PROPERTY EVALUATING METHOD, AND COMPUTER PROGRAM CAUSING COMPUTER TO PERFORM MIGRATION PROPERTY EVALUATING METHOD
The first aspect of the present invention provides a migration ability evaluation method comprising: a trajectory generation step of generating a movement trajectory of an observed object of a living body on the basis of a plurality of images acquired by capturing observation images of the object multiple times in a time-series manner; a migration ability calculation step of calculating a migration ability measure indicating the degree of migration of the observed object in a certain direction on the basis of the movement trajectory; and a migration ability evaluation step of evaluating whether or not the observed object satisfies a predetermined condition on the basis of its migration ability measure.
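The three steps can be sketched with a common directedness measure: net displacement projected onto the direction of interest, divided by total path length. The patent leaves the measure unspecified, so this particular formula, the 2-D coordinates, and the threshold are all assumptions:

```python
import math

def migration_measure(trajectory, direction):
    """Degree of migration of an observed object in a given direction:
    net displacement projected onto the unit direction vector, divided by
    total path length (1.0 = perfectly straight motion along the
    direction). trajectory: list of (x, y) positions over time."""
    dx, dy = direction
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm
    path = sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]))
    net = (trajectory[-1][0] - trajectory[0][0]) * ux + \
          (trajectory[-1][1] - trajectory[0][1]) * uy
    return net / path if path else 0.0

def satisfies(trajectory, direction, min_measure=0.5):
    """Evaluation step: does the object meet a minimum migration measure?"""
    return migration_measure(trajectory, direction) >= min_measure
```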
MULTI-CAMERA PERSON ASSOCIATION VIA PAIR-WISE MATCHING IN CONTINUOUS FRAMES FOR IMMERSIVE VIDEO
Techniques related to performing object or person association or correspondence in multi-view video are discussed. Such techniques include determining correspondences at a particular time instance based on separately optimizing correspondence sub-matrices for distance sub-matrices based on two-way minimum distance pairs between frame pairs, generating and fusing tracklets across time instances, and adjusting correspondence, after such tracklet processing, via elimination of outlier object positions and rearrangement of object correspondence.
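The two-way minimum distance pairing described above is the classic mutual-nearest-neighbour rule over a distance sub-matrix, which can be sketched directly (the matrix entries would be appearance or position distances between detections in two camera frames):

```python
import numpy as np

def two_way_min_pairs(dist):
    """Mutual-nearest-neighbour correspondences from a distance sub-matrix:
    pair (i, j) is kept only when j is row i's minimum AND i is column
    j's minimum, i.e. each picks the other as its closest match."""
    row_min = dist.argmin(axis=1)   # best column for each row
    col_min = dist.argmin(axis=0)   # best row for each column
    return [(i, j) for i, j in enumerate(row_min) if col_min[j] == i]
```

Pairs that fail the mutual check are left unmatched, which is what lets the later tracklet fusion and outlier-elimination stages repair ambiguous correspondences instead of locking in a bad one.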
THROWING POSITION ACQUISITION METHOD AND APPARATUS, COMPUTER DEVICE AND STORAGE MEDIUM
Provided are a throwing position acquisition method and apparatus, a computer device, and a storage medium. The method includes: acquiring image frames of a target video; acquiring a position of the projectile (the target object) in each image frame; acquiring a trajectory starting point of the target object based on the projectile positions in the image frames; acquiring, based on projectile positions corresponding to at least one group of image frames, a first height value for the case in which the target object is thrown; and acquiring a throwing position of the target object based on the first height value and the trajectory starting point of the target object.
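Recovering a trajectory starting point and a height value from per-frame projectile positions can be sketched by fitting a ballistic model to the detections: linear in the horizontal, quadratic in the vertical. This is an assumed reconstruction, not the patent's algorithm, and it works in world units with y pointing up:

```python
import numpy as np

def estimate_throw(frame_times, positions_xy):
    """Fit a ballistic parabola to per-frame projectile detections.
    Returns (trajectory start point, peak height above the start).
    positions_xy: list of (x, y) with y pointing up, in world units."""
    t = np.asarray(frame_times, dtype=float)
    xs = np.array([p[0] for p in positions_xy], dtype=float)
    ys = np.array([p[1] for p in positions_xy], dtype=float)
    bx = np.polyfit(t, xs, 1)          # horizontal motion is linear
    a, b, c = np.polyfit(t, ys, 2)     # vertical motion is quadratic
    start = (np.polyval(bx, t[0]), np.polyval((a, b, c), t[0]))
    t_peak = -b / (2 * a)              # vertex of the parabola
    peak_height = np.polyval((a, b, c), t_peak) - start[1]
    return start, peak_height
```

The fitted start point plays the role of the trajectory starting point, and the peak height is one candidate for the "first height value" used to locate the throwing position.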