Patent classifications
H04N23/61
DEVICE AND METHOD FOR ACQUIRING DEPTH OF SPACE BY USING CAMERA
A device and method of obtaining a depth of a space are provided. The method includes obtaining a plurality of images by photographing a periphery of a camera a plurality of times while sequentially rotating the camera by a preset angle, identifying a first feature region in a first image and an n-th feature region in an n-th image, the n-th feature region being identical with the first feature region, by comparing adjacent images between the first image and the n-th image from among the plurality of images, obtaining a base line value with respect to the first image and the n-th image, obtaining a disparity value between the first feature region and the n-th feature region, and determining a depth of the first feature region or the n-th feature region based on at least the base line value and the disparity value.
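The final depth determination this abstract describes is standard stereo triangulation. A minimal sketch, where the function name, the focal-length parameter, and treating the baseline as an already-computed distance are illustrative assumptions rather than details from the patent:

```python
def depth_from_disparity(baseline_m, disparity_px, focal_px):
    """Triangulate depth via the pinhole stereo relation Z = f * B / d.

    baseline_m:   distance between the two camera positions (meters),
                  here assumed already derived from the rotation between
                  the first and the n-th image.
    disparity_px: pixel offset between the matched feature regions.
    focal_px:     focal length in pixels (hypothetical value).
    """
    if disparity_px == 0:
        # Zero disparity corresponds to a point at infinity.
        raise ValueError("zero disparity implies a point at infinity")
    return focal_px * baseline_m / disparity_px

# Example: 0.1 m baseline, 20 px disparity, 800 px focal length -> 4.0 m
depth = depth_from_disparity(0.1, 20, 800)
```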
CAMERA SETTING ADJUSTMENT BASED ON EVENT MAPPING
Systems, methods, and non-transitory media are provided for adjusting camera settings based on event data. An example method can include obtaining, via an image capture device of a mobile device, an image depicting at least a portion of an environment; determining a match between one or more visual features extracted from the image and one or more visual features associated with a keyframe; and based on the match, adjusting one or more settings of the image capture device.
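The match-then-adjust flow might be sketched as follows; the descriptor representation (hashable tokens standing in for real ORB/SIFT-style descriptors) and the match threshold are assumptions for illustration:

```python
def match_and_adjust(image_features, keyframe_features, keyframe_settings,
                     min_matches=10):
    """Count visual features shared with a mapped keyframe; on a match,
    return the camera settings stored with that keyframe (e.g. exposure,
    white balance) so the capture device can be adjusted.

    Features are modeled as hashable tokens; a real system would compare
    descriptor distances rather than exact equality.
    """
    matches = set(image_features) & set(keyframe_features)
    if len(matches) >= min_matches:
        return keyframe_settings  # adjust device to the mapped settings
    return None  # no match: keep current settings
```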
LENS CAP AND METHOD FOR AUTOMATIC RECOGNITION OF THE SAME
A visual sensor having a light sensitive element and a processor, the processor being adapted to recognize whether a cap is on or off the light sensitive element by recognizing a unique identification pattern coded into the cap.
Privacy-protecting multi-pass street-view photo-stitch
Generating a controllable panoramic image while eliminating unsuitable dynamic elements by receiving a plurality of images of a location from a user device, wherein the plurality of images includes images of the location captured at various times, identifying an object in one or more images of the plurality of images, wherein the object corresponds to an unsuitable condition for a database, determining a score of the one or more images of the plurality of images based at least in part on the identified object, determining a base image from the one or more images of the plurality of images, and generating a set of replacement images of the location based at least in part on respective determined scores of the one or more images of the plurality of images.
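The scoring and base-image selection steps could look like the sketch below, assuming a hypothetical penalty of one point per detected unsuitable object; the data layout and scoring rule are illustrative, not taken from the patent:

```python
def select_base_image(images):
    """Pick the base image for the panorama.

    images: list of dicts, each with an 'id' and a list of
    'unsuitable_objects' (e.g. detections corresponding to conditions
    unsuitable for the database). Each detection lowers the score by
    one; the highest-scoring image becomes the base.
    """
    def score(img):
        return -len(img["unsuitable_objects"])
    return max(images, key=score)
```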
Systems and methods of detecting and identifying an object
Systems and methods of detecting and identifying objects are provided. In one exemplary embodiment, a method performed by one of a plurality of network nodes, with each network node having an optical sensor and being operable to wirelessly communicate with at least one other network node, comprises sending, by a network node over a wireless communication channel, to another network node, an indication associated with an object that is detected and identified by the network node based on one or more images of that object that are captured by the optical sensor of the network node. Further, the detection and identification of the object is contemporaneous with the capture of the one or more images of that object. Also, the network node is operable to control a spatial orientation of the sensor so that the sensor has a viewing angle towards the object.
Conditional camera control via automated assistant commands
Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environmental feature is present. In this way, the user can rely on the automated assistant to identify and capture certain moments without necessarily requiring the user to constantly monitor a viewing window of the camera. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data that is associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
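Conditional capture of this kind reduces to evaluating user-specified predicates against the current context (camera frame features, application data); a minimal sketch in which all names are hypothetical:

```python
def maybe_capture(conditions, context, capture):
    """Capture media only when every user-specified condition holds.

    conditions: callables taking the current context (e.g. detected
                frame features, app data) and returning a bool.
    capture:    callable invoked to record media when all conditions
                are satisfied.
    Returns the captured media, or None if any condition fails.
    """
    if all(cond(context) for cond in conditions):
        return capture(context)
    return None
```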
IMAGE-CAPTURING APPARATUS
An image-capturing apparatus according to the present invention includes: an image sensor; a gaze detecting sensor configured to detect a gaze of a user; and at least one memory and at least one processor which function as: an object-detecting unit configured to detect an object from an image captured by the image sensor; and a control unit configured to, if the object-detecting unit detects an object of a specific type while the gaze detecting sensor is in a first state for detecting the gaze, change the state of the gaze detecting sensor to a second state in which the electric power consumption of the gaze detecting sensor is less than in the first state.
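The control unit's behavior can be sketched as a small state-transition function; the set of triggering object types is a placeholder, since the abstract leaves the "specific type" open:

```python
FULL_POWER = "first_state"   # gaze detection active
LOW_POWER = "second_state"   # reduced power consumption

def update_gaze_sensor_state(current_state, detected_object_types,
                             trigger_types=frozenset({"tripod"})):
    """If the gaze sensor is in the full-power (gaze-detecting) state
    and an object of a specific type appears in the captured image,
    switch it to the low-power state; otherwise keep the state as-is.

    trigger_types is a hypothetical example set.
    """
    if current_state == FULL_POWER and detected_object_types & trigger_types:
        return LOW_POWER
    return current_state
```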
Compounding device system
A system for compounding medications. The system includes one or more compounding devices, and a central computer system. The central computer system receives requests, at least some of which require the compounding of one or more medications, and pushes assignments of respective compounding tasks to the one or more compounding devices. The assignments are made in accordance with a set of rules designed to promote efficient use of compounding resources.
Infrared and visible imaging system
Methods, systems, and apparatus for an infrared and visible imaging system. In some implementations, image data from a visible-light camera is obtained. A position of a device is determined based at least in part on the image data from the visible-light camera. An infrared camera is positioned so that the device is in a field of view of the infrared camera, with the field of view of the infrared camera being narrower than the field of view of the visible-light camera. Infrared image data from the infrared camera that includes regions representing the device is obtained. Infrared image data from the infrared camera that represents the device is recorded. Position data is also recorded that indicates the location and pose of the infrared camera when the infrared image data is acquired by the infrared camera.
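Positioning the narrow-field infrared camera from a detection in the wide visible-light image amounts to converting the detected image position into pan/tilt angles. Below is a simplified linear approximation (it ignores lens projection); the field-of-view values and names are assumptions for illustration:

```python
def aim_infrared_camera(device_xy, visible_fov_deg=90.0):
    """Convert a device position found in the wide visible-light image
    (normalized coordinates in [0, 1], origin at top-left) into pan and
    tilt angles, in degrees, for the narrower infrared camera.

    Uses a linear angle-per-pixel approximation; a real system would
    account for the lens projection model.
    """
    pan = (device_xy[0] - 0.5) * visible_fov_deg
    tilt = (device_xy[1] - 0.5) * visible_fov_deg
    return pan, tilt

# A device centered in the visible frame needs no pan or tilt.
print(aim_infrared_camera((0.5, 0.5)))  # -> (0.0, 0.0)
```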
Imaging apparatus, imaging system, imaging method, and imaging program including sequential recognition processing on units of readout
An imaging apparatus according to an embodiment includes: an imaging unit (10) having a pixel region in which a plurality of pixels is arranged; a readout controller (11) that controls readout of pixel signals from pixels included in the pixel region; a unit-of-readout controller (123) that controls a unit of readout that is set as a part of the pixel region and for which the readout controller performs the readout; and a recognition unit (14) that has been trained on training data for each of the units of readout. The recognition unit performs a recognition process on the pixel signal of each unit of readout and outputs the result of that recognition process.
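Sequential per-unit recognition (for example, on groups of sensor rows as they are read out) can be sketched as follows; `recognizer` stands in for the trained per-unit model, and the row-group representation is an assumption for illustration:

```python
def sequential_recognition(frame_rows, unit_size, recognizer):
    """Read the pixel region in units (here: groups of rows) and run
    the per-unit recognizer on each unit, producing an intermediate
    result as soon as each unit is read out instead of waiting for
    the full frame.
    """
    results = []
    for start in range(0, len(frame_rows), unit_size):
        unit = frame_rows[start:start + unit_size]
        results.append(recognizer(unit))
    return results

# Toy example: rows are numbers, the "recognizer" just sums each unit.
partial = sequential_recognition([1, 2, 3, 4], 2, sum)  # -> [3, 7]
```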