Patent classifications
G01S3/00
Method and apparatus for focusing
A method and an apparatus for focusing are disclosed. The apparatus includes: a determining module, configured to determine that an imaging mode switches from a first imaging mode to a second imaging mode; an image position estimating module, configured to estimate a position of an image of a target object on a picture taking device in the second imaging mode according to a position of the image of the target object on the picture taking device in the first imaging mode and the principle of epipolar geometry; and a searching module, configured to search for the image of the target object in the second imaging mode according to the estimated position of the image of the target object on the picture taking device in the second imaging mode.
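The epipolar constraint underlying the estimating module can be illustrated with a small sketch. Everything here is an assumption for illustration, not from the patent: a fundamental matrix `F` is taken to relate the two imaging modes, and the helper names (`epipolar_line`, `candidate_positions`) are hypothetical. The point is only that the target's image in the second mode is constrained to a line computed from its position in the first mode, which is what makes the subsequent search cheap.

```python
import numpy as np

def epipolar_line(F, x1):
    """Given a fundamental matrix F relating the two imaging modes and a
    point x1 (homogeneous pixel coordinates) in the first-mode image,
    return the epipolar line l2 = F @ x1 in the second-mode image on
    which the target's image must lie."""
    l2 = F @ x1
    return l2 / np.linalg.norm(l2[:2])  # normalize so ax+by+c gives pixel distance

def candidate_positions(l2, width, step=8):
    """Sample search positions along the epipolar line (a, b, c): ax+by+c=0,
    restricting the searching module to a 1-D sweep instead of a 2-D scan."""
    a, b, c = l2
    xs = np.arange(0, width, step, dtype=float)
    ys = -(a * xs + c) / b
    return np.stack([xs, ys], axis=1)
```

For a rectified pair (pure horizontal translation between modes), the epipolar line of a point is simply its own scanline, which is a convenient sanity check for the sketch.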
Method for commissioning a network of optical sensors across a floor space
A method includes: accessing a floorplan representing the floorspace; and extracting from the floorplan a set of floorplan features representing areas of interest in the floorspace. The method also includes calculating a set of target locations relative to the floorplan that, when occupied by a set of sensor blocks: locate the areas of interest in the floorspace within fields of view of the set of sensor blocks; and yield a minimum overlap in fields of view of adjacent sensor blocks in the set of sensor blocks. The method further includes, for each sensor block in the set of sensor blocks installed over the floorspace: receiving, from the sensor block, an image of the floorspace; based on overlaps in the image with images from other sensor blocks in the set of sensor blocks, estimating an installed location of the sensor block; and mapping the sensor block to a target location in the set of target locations.
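The target-location step can be sketched as a grid-placement calculation. This is a minimal illustration under assumed simplifications (flat rectangular floorspace, downward-facing sensors with a square field of view, overlap expressed as a fraction of footprint width); the function name and parameters are hypothetical, not the patent's.

```python
import math

def target_locations(room_w, room_h, mount_height, fov_deg, overlap=0.1):
    """Place sensor blocks on a grid so that adjacent fields of view
    overlap by roughly `overlap` of their footprint width.

    The ground footprint half-width follows from mounting height and
    field-of-view angle; the grid pitch shrinks the footprint spacing
    by the desired overlap fraction."""
    half = mount_height * math.tan(math.radians(fov_deg / 2))
    pitch = 2 * half * (1 - overlap)          # center-to-center spacing
    nx = max(1, math.ceil(room_w / pitch))    # sensors needed along each axis
    ny = max(1, math.ceil(room_h / pitch))
    sx, sy = room_w / nx, room_h / ny         # spread evenly across the room
    return [((i + 0.5) * sx, (j + 0.5) * sy)
            for j in range(ny) for i in range(nx)]
```

For a 10 m by 10 m floorspace with 3 m mounting height and a 90-degree field of view, this yields a 2-by-2 grid of target locations, each field of view overlapping its neighbors.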
Detachable mini-camera device
Integrated but detachable mini-camera for a mobile device. The mobile device comprises a main body and a detachable mini-camera configured to attach to and detach from a socket in the main body. The detachable mini-camera may comprise at least one camera and a rechargeable battery configured to, while the detachable mini-camera is attached to the main body, charge from a battery in the main body via the socket. While the detachable mini-camera is detached from the main body, a wireless transceiver in the detachable mini-camera wirelessly communicates with a wireless transceiver in the main body, and a mobile application, executed by a processor in the main body, controls the detachable mini-camera and receives image data from the detachable mini-camera via the wireless communication.
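The attach/detach behavior described above amounts to a small state machine: docked, the camera charges and talks over the socket; detached, control and image data switch to the wireless link. A toy model, with all class and method names purely illustrative:

```python
class MiniCamera:
    """Toy model of the detachable mini-camera's two states: charging and
    communicating over the socket while docked, communicating wirelessly
    while detached."""

    def __init__(self):
        self.docked = True
        self.battery = 20  # percent charge of the camera's own battery

    def link(self):
        """Active control/data path as seen by the mobile application."""
        return "socket" if self.docked else "wireless"

    def charge_tick(self):
        """Charging from the main body's battery only happens while docked."""
        if self.docked and self.battery < 100:
            self.battery += 1

    def detach(self):
        self.docked = False
```

The design point the abstract makes is that the mobile application need not care which path is active; only the transport underneath changes.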
360 degree video capture and playback
In a system for 360 degree video capture and playback, 360 degree video may be captured, stitched, encoded, decoded, rendered, and played back. In one or more implementations, a stitching device may be configured to stitch the 360 degree video using an intermediate coordinate system between an input picture coordinate system and a capture coordinate system. In one or more implementations, the stitching device may be configured to stitch the 360 degree video into at least two different projection formats using a projection format decision, and an encoding device may be configured to encode the stitched 360 degree video with signaling that indicates the at least two different projection formats. In one or more implementations, the stitching device may be configured to stitch the 360 degree video with multiple viewing angles, and a rendering device may be configured to render the decoded bitstream using one or more suggested viewing angles.
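The intermediate-coordinate-system idea can be sketched for one common case: an input picture in equirectangular projection is lifted to a 3D direction in an intermediate world frame, then rotated into the capture frame. The conventions below (longitude/latitude mapping, yaw-then-pitch rotation order) are illustrative assumptions, not the patent's specification.

```python
import numpy as np

def equirect_to_capture(u, v, width, height, yaw=0.0, pitch=0.0):
    """Map an equirectangular input pixel (u, v) to a unit direction in an
    intermediate frame, then rotate into the capture frame by the rig's
    yaw and pitch (radians). Conventions chosen for illustration."""
    lon = (u / width) * 2 * np.pi - np.pi       # longitude in [-pi, pi)
    lat = np.pi / 2 - (v / height) * np.pi      # latitude in [-pi/2, pi/2]
    d = np.array([np.cos(lat) * np.sin(lon),    # unit vector, intermediate frame
                  np.sin(lat),
                  np.cos(lat) * np.cos(lon)])
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    return Ry @ Rx @ d
```

Factoring the mapping through the intermediate frame keeps the projection math (pixel to direction) separate from the rig calibration (direction to capture frame), which is the practical appeal of such a split.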
POSITIONAL ZERO LATENCY
Based on view tracking data, a viewer's view direction to a three-dimensional (3D) scene depicted by a first video image is determined. The first video image has been streamed in a video stream to a streaming client device before a first time point and rendered with the streaming client device to the viewer at the first time point. Based on the viewer's view direction, a target view portion is identified in a second video image to be streamed in the video stream to the streaming client device to be rendered at a second time point subsequent to the first time point. The target view portion is encoded into the video stream with a higher target spatiotemporal resolution than that used to encode remaining non-target view portions in the second video image.
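Selecting the target view portion from the view direction can be sketched as a gaze-to-tile mapping followed by per-tile resolution assignment. The tiling, the yaw/pitch parameterization, and the 0.25 scale for non-target tiles are all illustrative assumptions standing in for the unspecified encoder details.

```python
def target_tile(yaw_deg, pitch_deg, tiles_x=8, tiles_y=4):
    """Map the viewer's gaze (yaw, pitch in degrees) onto the tile grid of
    the second video image; that tile becomes the target view portion."""
    u = (yaw_deg % 360) / 360                       # wrap yaw into [0, 1)
    v = (90 - max(-90.0, min(90.0, pitch_deg))) / 180  # clamp pitch, map to [0, 1]
    tx = min(tiles_x - 1, int(u * tiles_x))
    ty = min(tiles_y - 1, int(v * tiles_y))
    return tx, ty

def per_tile_scale(target, tiles_x=8, tiles_y=4):
    """Full spatiotemporal resolution for the target tile, a reduced scale
    (here 0.25, chosen arbitrarily) for the remaining non-target tiles."""
    return {(x, y): 1.0 if (x, y) == target else 0.25
            for y in range(tiles_y) for x in range(tiles_x)}
```

Because the target is chosen from tracking data gathered before the second image is encoded, the high-resolution region is already in place when the viewer looks there, which is the latency-hiding effect the title alludes to.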