Patent classifications
G06T7/292
Arrangement for Monitoring State and Sequence of Movement in an Aseptic Work Chamber of a Containment
The arrangement is provided for monitoring the state and sequence of movement in an aseptic work chamber of a containment located in an installation room. At least one work glove projects into the work chamber; each work glove can reach up to a maximum grasping range along the three spatial axes of the chamber. The arrangement comprises a tracking system whose recordings serve to continuously localize the at least one work glove in three dimensions and are stored in a computer unit. At least one three-dimensional or two-dimensional prohibited region, which may be adjoined by a warning region, is defined in the work chamber. Individual surface sections, or the entire floor of the containment, can be defined as a prohibited region. A prohibited region, and possibly a warning region in front of it, can be set up around machines that are installed in the work chamber and that constitute a danger to the operator. The coordinates of the prohibited region and the warning region are stored in the computer unit. The at least one work glove must not be used to intervene in the prohibited region and should not be used to intervene in the warning region.
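A minimal sketch of the described region check, assuming axis-aligned box regions and a tracked 3D glove position; the `Box3D` class, the `classify` function, and all coordinates are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Box3D:
    """Axis-aligned box given by min/max corners in chamber coordinates (illustrative)."""
    min_xyz: tuple
    max_xyz: tuple

    def contains(self, p):
        return all(lo <= c <= hi for lo, c, hi in zip(self.min_xyz, p, self.max_xyz))

def classify(glove_pos, prohibited, warning):
    """Return 'prohibited', 'warning', or 'free' for one tracked glove position."""
    if any(b.contains(glove_pos) for b in prohibited):
        return "prohibited"
    if any(b.contains(glove_pos) for b in warning):
        return "warning"
    return "free"
```

In this sketch the computer unit would run `classify` on every tracking sample; a "prohibited" result would raise an alarm, a "warning" result a pre-warning.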
MULTI-CAMERA PERSON ASSOCIATION VIA PAIR-WISE MATCHING IN CONTINUOUS FRAMES FOR IMMERSIVE VIDEO
Techniques related to performing object or person association or correspondence in multi-view video are discussed. Such techniques include determining correspondences at a particular time instance by separately optimizing correspondence sub-matrices for distance sub-matrices derived from two-way minimum-distance pairs between frame pairs, generating and fusing tracklets across time instances, and, after such tracklet processing, adjusting correspondence via elimination of outlier object positions and rearrangement of object correspondence.
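The two-way minimum-distance pairing step between one frame pair can be sketched as mutual nearest-neighbor selection on a distance matrix; the function name and the plain NumPy formulation are assumptions for illustration, not the patent's implementation:

```python
import numpy as np

def mutual_nearest_pairs(dist):
    """Return (i, j) index pairs from a distance matrix between the detected
    objects of two camera frames, keeping only two-way minimum-distance pairs:
    i's closest match in frame B is j AND j's closest match in frame A is i."""
    row_min = dist.argmin(axis=1)  # best frame-B match for each object in frame A
    col_min = dist.argmin(axis=0)  # best frame-A match for each object in frame B
    return [(i, int(j)) for i, j in enumerate(row_min) if col_min[j] == i]
```

Pairs surviving this mutual test would then seed the correspondence sub-matrices that are optimized separately per the abstract.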
USING 6DOF POSE INFORMATION TO ALIGN IMAGES FROM SEPARATED CAMERAS
Techniques for aligning images generated by an integrated camera physically mounted to an HMD with images generated by a detached camera physically unmounted from the HMD are disclosed. A 3D feature map is generated and shared with the detached camera. Both the integrated camera and the detached camera use the 3D feature map to relocalize themselves and to determine their respective 6 DOF poses. The HMD receives the detached camera's image of the environment and the 6 DOF pose of the detached camera. A depth map of the environment is accessed. An overlaid image is generated by reprojecting a perspective of the detached camera's image to align with a perspective of the integrated camera and by overlaying the reprojected detached camera's image onto the integrated camera's image.
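Assuming each relocalized 6 DOF pose is expressed as a 4x4 world-from-camera matrix, the relative transform underlying the reprojection could be sketched as follows (function names and conventions are hypothetical):

```python
import numpy as np

def relative_transform(T_world_int, T_world_det):
    """Matrix mapping points in the detached camera's frame into the integrated
    (HMD-mounted) camera's frame, from the two 6 DOF poses recovered against
    the shared 3D feature map."""
    return np.linalg.inv(T_world_int) @ T_world_det

def reproject_point(p_det, T_int_det):
    """Move one 3D point (e.g. from the depth map, in the detached camera's
    frame) into the integrated camera's frame via homogeneous coordinates."""
    return (T_int_det @ np.append(p_det, 1.0))[:3]
```

Reprojecting every depth-map point this way, then projecting through the integrated camera's intrinsics, would yield the aligned image that is overlaid per the abstract.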
Systems and methods for tracking and interacting with zones in 3D space
Systems and methods are provided for automatically controlling zone interactions in a three dimensional virtual environment. A computing device provides a graphical user interface (GUI) to assign zone attributes to a zone, which is a volume of space in the virtual environment. A virtual object is assigned to the zone, as well as an interaction and a responsive operation that follows the detected interaction. The virtual object's position in the virtual environment corresponds to a physical object's position in a physical environment. For example, when the computing system detects that the virtual object has entered or left the zone, according to an assigned interaction, then an assigned operation is executed to control a physical device in the physical environment.
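A minimal sketch of the enter/leave detection and the triggered operation; the `Zone` class, its callbacks, and the membership test are illustrative assumptions, not the patented system:

```python
class Zone:
    """Illustrative zone: a named volume with a membership test and operations
    assigned to enter/leave interactions."""
    def __init__(self, name, contains, on_enter=None, on_leave=None):
        self.name = name
        self.contains = contains      # function: position -> bool
        self.on_enter = on_enter      # operation run when an object enters
        self.on_leave = on_leave      # operation run when an object leaves
        self._inside = set()          # object ids currently inside the zone

    def update(self, obj_id, pos):
        """Feed one tracked position; fire the assigned operation on transitions."""
        inside = self.contains(pos)
        was_inside = obj_id in self._inside
        if inside and not was_inside:
            self._inside.add(obj_id)
            if self.on_enter:
                self.on_enter(obj_id)
        elif was_inside and not inside:
            self._inside.discard(obj_id)
            if self.on_leave:
                self.on_leave(obj_id)
```

Because the virtual object's position mirrors a physical object's position, the `on_enter`/`on_leave` operations are where a real system would command the physical device.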
Mapping multiple views to an identity
Disclosed are systems and methods for mapping multiple views to an identity. The systems and methods may include receiving a plurality of images that depict an object. Attributes associated with the object may be extracted from the plurality of images. An identity of the object may be determined based on processing the attributes.
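One simple way to realize "determine an identity from attributes extracted across multiple views" is to pool the per-view attribute vectors and match against a gallery of known identities; this nearest-mean scheme is an assumption for illustration, not the patent's method:

```python
import numpy as np

def identify(view_features, gallery):
    """Pool attribute vectors extracted from multiple views of one object, then
    return the name of the closest known identity in the gallery
    (a dict mapping identity name -> reference attribute vector)."""
    probe = np.mean(view_features, axis=0)
    names = list(gallery)
    dists = [float(np.linalg.norm(probe - gallery[n])) for n in names]
    return names[int(np.argmin(dists))]
```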
Systems and methods for navigating a vehicle among encroaching vehicles
Systems and methods use cameras to provide autonomous navigation features. In one implementation, a method for navigating a user vehicle may include acquiring, using at least one image capture device, a plurality of images of an area in a vicinity of the user vehicle; determining from the plurality of images a first lane constraint on a first side of the user vehicle and a second lane constraint on a second side of the user vehicle opposite to the first side of the user vehicle; enabling the user vehicle to pass a target vehicle if the target vehicle is determined to be in a lane different from the lane in which the user vehicle is traveling; and causing the user vehicle to abort the pass before completion of the pass, if the target vehicle is determined to be entering the lane in which the user vehicle is traveling.
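Stripped of the image-processing front end, the pass/abort logic described above reduces to a small decision function; the signature and state flags below are illustrative assumptions:

```python
def pass_decision(ego_lane, target_lane, target_entering_ego_lane, pass_in_progress):
    """Return 'pass', 'abort', or 'hold' following the described rules:
    enable a pass only when the target vehicle is in a different lane, and
    abort an in-progress pass if the target is entering the ego lane."""
    if pass_in_progress and target_entering_ego_lane:
        return "abort"
    if not pass_in_progress and target_lane != ego_lane and not target_entering_ego_lane:
        return "pass"
    return "hold"
```

In the patented system the lane assignments and the "entering" flag would come from the lane constraints estimated from the captured images.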
Dense depth computations aided by sparse feature matching
A system for dense depth computation aided by sparse feature matching generates a first image using a first camera, a second image using a second camera, and a third image using a third camera. The system generates a sparse disparity map using the first image and the third image by (1) identifying a set of feature points within the first image and a set of corresponding feature points within the third image, and (2) identifying feature disparity values based on the set of feature points and the set of corresponding feature points. The system also applies the first image, the second image, and the sparse disparity map as inputs for generating a dense disparity map.
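The sparse disparity map of step (2) can be sketched as follows, assuming a rectified horizontal camera pair so disparity is the x-offset between corresponding feature points; the function name and zero-fill convention are illustrative assumptions:

```python
import numpy as np

def sparse_disparity_map(shape, feats_first, feats_third):
    """Build a sparse disparity map: the feature disparity value at each matched
    feature location from the first image, zero elsewhere. feats_first and
    feats_third are corresponding (x, y) pixel locations in the two images."""
    disp = np.zeros(shape, dtype=np.float32)
    for (x1, y1), (x3, y3) in zip(feats_first, feats_third):
        disp[int(y1), int(x1)] = x1 - x3  # horizontal disparity, rectified pair
    return disp
```

This sparse map, together with the first and second images, would then be fed to the dense-disparity stage described in the abstract.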