Patent classifications
H04N13/289
METHODS AND SYSTEMS FOR SELECTIVE SENSOR FUSION
A method for determining a physical state of a movable object includes determining an estimated physical state based on first sensing data obtained by a first sensing system of the movable object, during a time duration in which second sensing data from a second sensing system is unavailable or is not updated; in response to determining that the second sensing data from the second sensing system becomes available or is updated, determining an observed physical state of the movable object based on the second sensing data; and based on a deviation between the observed physical state and the estimated physical state of the movable object, determining whether to update the physical state of the movable object based on the observed physical state. The first and second sensing systems have different sampling frequencies. The deviation is indicative of a validity of the sensing data of the second sensing system.
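The deviation-gated fusion described in this abstract can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the high-rate first sensor is reduced to trivial velocity integration, and `THRESHOLD`, `predict`, and `fuse` are hypothetical names, not anything specified by the patent.

```python
THRESHOLD = 1.0  # maximum accepted deviation (units illustrative) - assumption

def predict(state, velocity_samples):
    """Propagate the estimated state from high-rate first-sensor data."""
    for v in velocity_samples:      # trivial integration stands in for the
        state = state + v           # real state-propagation model
    return state

def fuse(estimated, observed, threshold=THRESHOLD):
    """Accept the second sensor's observation only when it is consistent
    with the estimate; otherwise keep the estimate (observation invalid)."""
    deviation = abs(observed - estimated)
    if deviation <= threshold:
        return observed             # second sensing data deemed valid
    return estimated                # deviation too large: discard update

# Fast sensor runs between slow-sensor updates, then an observation arrives.
state = predict(0.0, [0.1, 0.1, 0.1])
state = fuse(state, 0.35)
```

The key point the sketch captures is that the slow sensor's sample is not fused unconditionally: the deviation acts as a validity check on the second sensing system.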
Adjusting camera exposure for three-dimensional depth sensing and two-dimensional imaging
An example method includes setting an exposure time of a camera of a distance sensor to a first value, instructing the camera to acquire a first image of an object in a field of view of the camera, where the first image is acquired while the exposure time is set to the first value, instructing a pattern projector of the distance sensor to project a pattern of light onto the object, setting the exposure time of the camera to a second value that is different than the first value, and instructing the camera to acquire a second image of the object, where the second image includes the pattern of light, and where the second image is acquired while the exposure time is set to the second value.
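The alternating-exposure sequence above can be sketched as a short control loop. The camera and projector classes below are stand-ins, not a real device API; the method names and exposure values are assumptions.

```python
class FakeCamera:
    """Stand-in for the distance sensor's camera - assumption."""
    def __init__(self):
        self.exposure = None
    def set_exposure(self, value):
        self.exposure = value
    def acquire(self):
        return {"exposure": self.exposure}

class FakeProjector:
    """Stand-in for the pattern projector - assumption."""
    def __init__(self):
        self.on = False
    def project_pattern(self):
        self.on = True

def capture_2d_and_depth(camera, projector, t_2d, t_3d):
    """First image at one exposure (plain 2D), second at a different
    exposure with the light pattern projected (for depth sensing)."""
    camera.set_exposure(t_2d)       # first value
    image_2d = camera.acquire()
    projector.project_pattern()     # pattern of light onto the object
    camera.set_exposure(t_3d)       # second, different value
    image_3d = camera.acquire()     # this frame contains the pattern
    return image_2d, image_3d

img2d, img3d = capture_2d_and_depth(FakeCamera(), FakeProjector(), 10.0, 2.0)
```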
Methods and systems for selective sensor fusion
A method includes obtaining a spatial configuration of a plurality of imaging devices relative to one another and to a movable object. The imaging devices are coupled to the movable object and comprise a first imaging device configured to operate in a multi-ocular mode and a second imaging device configured to operate in a monocular mode. The method further includes determining at least one of a distance of the movable object to an object or surface lying within a field-of-view of at least one of the imaging devices, a disparity between matched points in stereoscopic images acquired by the first imaging device, or an environment in which the plurality of imaging devices are operated. The distance is determined based in part on the spatial configuration. The method also includes selecting either the first imaging device or the second imaging device to acquire image data based on the determination.
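The selection between the multi-ocular and monocular devices can be sketched as a threshold test on distance and disparity. The thresholds and the stereo-failure conditions below are illustrative assumptions; the patent does not specify concrete values.

```python
MAX_STEREO_RANGE = 10.0  # metres beyond which stereo depth degrades - assumption
MIN_DISPARITY = 2.0      # pixels; below this, matched points carry little depth

def select_device(distance_m, disparity_px):
    """Pick the first (multi-ocular) device when its stereo depth is usable,
    otherwise fall back to the second (monocular) device."""
    if distance_m <= MAX_STEREO_RANGE and disparity_px >= MIN_DISPARITY:
        return "stereo"      # first imaging device acquires the image data
    return "monocular"       # second imaging device acquires the image data
```

For example, a nearby object with large disparity keeps the stereo device active, while a distant object whose disparity collapses toward zero triggers the monocular fallback.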
Image processing apparatus, camera apparatus, and output control method
There is provided an image processing apparatus connected to a camera head capable of imaging a left eye image and a right eye image having parallax on one screen based on light at a target site incident on an optical instrument, the apparatus including: an image processor that performs signal processing of the left eye image and the right eye image imaged by the camera head; and an output controller that outputs the left eye image and the right eye image on which the signal processing is performed to a monitor via each of a first channel and a second channel, in which the output controller outputs one of the left eye image and the right eye image on which the signal processing is performed to the monitor via each of the first channel and the second channel in accordance with switching from a 3D mode to a 2D mode.
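The output controller's channel routing reduces to a small switch: in 3D mode each channel carries one eye image, while in 2D mode one of the two images is duplicated on both channels. This sketch is an assumption about the routing only; the signal-processing stage is omitted.

```python
def route_outputs(left_eye, right_eye, mode):
    """Return (channel_1, channel_2) for the monitor, per display mode."""
    if mode == "3D":
        return left_eye, right_eye   # parallax pair drives the 3D display
    return left_eye, left_eye        # 2D mode: one eye image on both channels
```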
METHOD AND APPARATUS FOR DISPLAYING STEREOSCOPIC STRIKE ZONE
In a method and an apparatus for displaying a 3D strike zone according to an embodiment, rotation information and translation information of a ball are estimated on the basis of correspondences between 3D coordinates and 2D coordinate values, and the strike zone is displayed in a multichannel image or rendered three-dimensionally on the basis of the rotation information and the translation information, such that whether a ball thrown by a pitcher has passed through the strike zone may be determined from various angles.
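Once the ball's position is expressed in the strike zone's coordinate frame (using the estimated rotation and translation), the pass-through test itself is a simple containment check. The axis-aligned box below is an illustrative assumption; the pose-estimation step from 2D-3D correspondences is omitted.

```python
def in_strike_zone(point, zone_min, zone_max):
    """Axis-aligned containment test in the strike zone's own frame.
    point, zone_min, zone_max are (x, y, z) triples."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, zone_min, zone_max))
```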
Image processing apparatus, camera apparatus, and image processing method
An image processing apparatus is connected to a camera head capable of imaging a left eye image and a right eye image having parallax on one screen based on light at a target site incident on an optical instrument. The image processing apparatus includes: a deriver that derives parameters of signal processing for the left eye image and the right eye image which are imaged by the camera head in accordance with switching from a 2D mode to a 3D mode; an image processor that performs the signal processing of the left eye image and the right eye image which are imaged by the camera head, based on the derived parameters; and an output controller that outputs the left eye image and the right eye image on which the signal processing has been performed to a monitor.
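The deriver's behaviour on a mode switch can be sketched as re-deriving a parameter set and applying it to both eye images. The parameter names (`gain`, `white_balance`) and the idea of locking them in 3D mode so the two eye images match are assumptions for illustration, not details from the abstract.

```python
def derive_parameters(mode):
    """Derive signal-processing parameters when the display mode changes."""
    if mode == "3D":
        # Hypothetical: lock parameters so left/right eye images agree,
        # which stereoscopic viewing typically requires.
        return {"gain": 1.0, "white_balance": "locked"}
    return {"gain": 1.0, "white_balance": "auto"}

def process(image, params):
    """Apply the derived parameters to one eye image (trivial stand-in)."""
    return {**image, **params}

params = derive_parameters("3D")          # switch from 2D to 3D mode
left = process({"eye": "left"}, params)   # same parameters for both eyes
right = process({"eye": "right"}, params)
```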