Patent classifications
H04N13/271
Dynamic adjustment of structured light for depth sensing systems based on contrast in a local area
A depth camera assembly (DCA) determines depth information. The DCA projects a dynamic structured light pattern into a local area and captures images including a portion of the dynamic structured light pattern. The DCA determines regions of interest in which it may be beneficial to increase or decrease an amount of texture added to the region of interest using the dynamic structured light pattern. For example, the DCA may identify the regions of interest based on contrast values calculated using a contrast algorithm, or based on parameters received from a mapping server that maintains a virtual model of the local area. The DCA may selectively increase or decrease an amount of texture added by the dynamic structured light pattern in portions of the local area. By selectively controlling portions of the dynamic structured light pattern, the DCA may decrease power consumption and/or increase the accuracy of depth sensing measurements.
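The abstract does not disclose a particular contrast algorithm; the following is a minimal sketch of the idea, assuming local standard deviation over fixed-size tiles as the contrast measure and a hypothetical threshold below which a region would benefit from added structured-light texture.

```python
import numpy as np

def low_contrast_regions(image, tile=16, threshold=10.0):
    """Split a grayscale image into tiles and flag tiles whose local
    contrast (standard deviation of pixel intensities) falls below a
    threshold -- candidate regions where projecting more structured-light
    texture could improve depth-matching accuracy."""
    h, w = image.shape
    flags = np.zeros((h // tile, w // tile), dtype=bool)
    for i in range(h // tile):
        for j in range(w // tile):
            block = image[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
            flags[i, j] = block.std() < threshold
    return flags
```

A DCA-like controller could then brighten the projected pattern only in flagged tiles and dim it elsewhere, which is one plausible reading of how selective control reduces power consumption.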
Methods and systems for producing content in multiple reality environments
This disclosure describes methods and systems that allow filmmakers to apply filmmaking and editing skills to produce content for use in other environments, such as video game environments and augmented reality, virtual reality, mixed reality, and non-linear storytelling environments.
Methods and apparatus for dynamically routing robots based on exploratory on-board mapping
Methods and apparatus for dynamically routing robots based on exploratory on-board mapping are disclosed. A control system of a robot includes an image manager to command a depth camera to capture depth images of an environment. The depth camera has a field of view. The control system further includes a map generator to generate a map of the environment based on the depth images. The map includes a representation of unoccupied space within the environment, and a path extending through the unoccupied space from a reference location of the robot to a target location of the robot. The control system further includes a field of view evaluator to determine whether the field of view associated with the reference location satisfies a threshold. The control system further includes a route generator to generate, in response to the field of view associated with the reference location satisfying the threshold, a route to be followed by the robot within the environment. The route includes a first candidate location located along the path of the map between the reference location and the target location. The first candidate location is within the field of view associated with the reference location.
THREE-DIMENSIONAL NOISE REDUCTION
Systems and methods are disclosed for image signal processing. For example, methods may include receiving a current image of a sequence of images from an image sensor; combining the current image with a recirculated image to obtain a noise reduced image, where the recirculated image is based on one or more previous images of the sequence of images from the image sensor; determining a noise map for the noise reduced image, where the noise map is determined based on estimates of noise levels for pixels in the current image, a noise map for the recirculated image, and a set of mixing weights; recirculating the noise map with the noise reduced image to combine the noise reduced image with a next image of the sequence of images from the image sensor; and storing, displaying, or transmitting an output image that is based on the noise reduced image.
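The recirculation scheme described above can be illustrated with a single update step. The abstract does not specify the mixing weights, so this sketch assumes inverse-variance weighting of the current frame against the recirculated frame, with the noise map propagated through the same weights; all names are illustrative.

```python
import numpy as np

def temporal_denoise(current, recirc_img, recirc_noise, sensor_noise):
    """One step of recirculating (3D) noise reduction: blend the current
    frame with the recirculated frame using per-pixel weights derived
    from the two noise estimates, then update the noise map that will be
    recirculated alongside the denoised frame. All inputs are float
    arrays of the same shape."""
    # Inverse-variance weight for the current frame: trust whichever
    # source has the lower estimated noise.
    w = recirc_noise**2 / (recirc_noise**2 + sensor_noise**2 + 1e-12)
    denoised = w * current + (1.0 - w) * recirc_img
    # Propagated noise level of the weighted combination.
    new_noise = np.sqrt((w * sensor_noise)**2 + ((1.0 - w) * recirc_noise)**2)
    return denoised, new_noise
```

Feeding `denoised` and `new_noise` back in as `recirc_img` and `recirc_noise` for the next frame reproduces the recirculation loop: the noise map shrinks over static scene content, so later frames lean more heavily on the accumulated history.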
APPARATUS AND METHODS FOR THREE-DIMENSIONAL SENSING
A three-dimensional (3D) sensing apparatus together with a projector subassembly is provided. The 3D sensing apparatus includes two cameras, which may be configured to capture ultraviolet and/or near-infrared light. The 3D sensing apparatus may also contain an optical filter and one or more computing processors that trigger simultaneous capture by the two cameras and process the captured images into depth information. The projector subassembly of the 3D sensing apparatus includes a laser diode, one or more optical elements, and a photodiode that are usable to enable 3D capture.
USER INTERFACE FOR CAMERA EFFECTS
The present disclosure generally relates to user interfaces. In some examples, the electronic device transitions between user interfaces for capturing photos based on data received from a first camera and a second camera. In some examples, the electronic device provides enhanced zooming capabilities that produce visually pleasing results for a displayed digital viewfinder and for captured videos. In some examples, the electronic device provides user interfaces for transitioning a digital viewfinder from a first camera with an applied digital zoom to a second camera with no digital zoom. In some examples, the electronic device prepares to capture media at various magnification levels. In some examples, the electronic device provides enhanced capabilities for navigating through a plurality of values.
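The handoff from a digitally zoomed first camera to a second camera with no digital zoom can be sketched as a simple crossover rule. This is not the patented implementation; it assumes a hypothetical wide/telephoto pair where the telephoto lens has a fixed 2x native magnification, so at exactly that zoom level the second camera needs no digital zoom.

```python
def select_camera(requested_zoom, tele_native_zoom=2.0):
    """Choose which camera feeds the viewfinder and how much digital zoom
    to apply on top of it. Below the telephoto camera's native
    magnification, the wide camera is digitally zoomed; at or above it,
    the viewfinder switches to the telephoto camera and only the residual
    zoom is applied digitally (camera names and the 2x factor are
    illustrative assumptions)."""
    if requested_zoom < tele_native_zoom:
        return "wide", requested_zoom                   # digital zoom on wide camera
    return "tele", requested_zoom / tele_native_zoom    # residual digital zoom on tele
```

At the crossover point the residual digital zoom is exactly 1.0, which matches the abstract's "second camera with no digital zoom" at the moment of transition.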
ELECTRONIC DEVICE
Disclosed is an electronic device. The electronic device includes a first housing, a second housing, and an input-output assembly. The second housing is arranged on the side opposite the display screen of the electronic device. The first housing and the second housing are connected to define a receiving space. The second housing defines a light through hole. The input-output assembly is arranged on the first housing and received in the receiving space. A side of the first housing facing the second housing is arranged with a limiting member, and the limiting member is configured to fix the input-output assembly on the first housing. The input-output assembly includes a plurality of input-output modules including a laser transmitter, a laser receiver, and at least one image collector. Each input-output module faces the light through hole.