H04N23/90

Sheet lighting for particle detection in drug product containers

In a method for imaging a container holding a sample, the container is illuminated with a laser sheet that impinges upon the container in a first direction corresponding to a first axis. A plane of the laser sheet is defined by the first axis and a second axis orthogonal to the first axis. The method also includes capturing, by a camera having an imaging axis that is substantially orthogonal to at least the first axis, an image of the container. The method further includes analyzing, by one or more processors, the image of the container to detect particles within, and/or on an exterior surface of, the container.
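A minimal sketch of the analysis step, assuming a simple intensity-threshold detector over a grayscale frame; the function name, threshold, and list-based image representation are illustrative and not taken from the patent:

```python
def detect_particles(image, threshold=200):
    """Return (row, col) coordinates of pixels whose brightness exceeds
    the threshold -- a stand-in for the particle-detection analysis.

    `image` is a 2-D list of grayscale values (0-255); under laser-sheet
    illumination, particles scatter light and appear as bright spots
    against a darker background.
    """
    return [(r, c)
            for r, row in enumerate(image)
            for c, value in enumerate(row)
            if value > threshold]

# Example: a 3x3 frame with one bright pixel at row 1, column 2.
frame = [[10, 12,  11],
         [ 9, 14, 230],
         [11, 10,  12]]
print(detect_particles(frame))  # → [(1, 2)]
```

A production system would of course operate on full camera frames and apply blob grouping and size filtering rather than raw per-pixel thresholding.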

System, method and apparatus for macroscopic inspection of reflective specimens

An inspection apparatus includes a specimen stage configured to retain a specimen, at least three imaging devices arranged in a triangular array positioned above the specimen stage, each of the at least three imaging devices configured to capture an image of the specimen, one or more sets of lights positioned between the specimen stage and the at least three imaging devices, and a control system in communication with the at least three imaging devices.

Image-based drive-thru management system

The subject matter of this specification can be implemented in, among other things, methods, systems, and computer-readable storage media. A method can include receiving, by a processing device, image data including one or more image frames indicative of a current state of a drive-thru area. The processing device detects a vehicle disposed within the drive-thru area based on the image data. The processing device receives order data including a pending meal order. The processing device determines a first association between the vehicle and the pending meal order based on the image data. The processing device determines a meal delivery procedure based on the first association between the vehicle and the pending meal order. The processing device may perform the meal delivery procedure. The processing device may provide the meal delivery procedure for display on a graphical user interface (GUI).
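One simple way to realize the association and procedure-selection steps is sketched below; pairing vehicles to orders by queue position, and choosing a procedure from order size, are illustrative assumptions rather than the patent's actual logic:

```python
def associate_orders(vehicles, pending_orders):
    """Pair each detected vehicle with a pending meal order by queue
    position -- a deliberately simple stand-in for the image-based
    association the abstract describes.

    `vehicles` lists vehicle IDs front-to-back in the drive-thru lane;
    `pending_orders` is ordered oldest-first.
    """
    return dict(zip(vehicles, pending_orders))

def delivery_procedure(order):
    """Pick a delivery procedure from the order contents (hypothetical
    rule: large orders go to a waiting bay, small ones to the window)."""
    return "waiting_bay" if order["items"] > 3 else "pickup_window"

pairs = associate_orders(
    ["car_17", "car_18"],
    [{"id": 101, "items": 5}, {"id": 102, "items": 1}],
)
print(delivery_procedure(pairs["car_17"]))  # → waiting_bay
print(delivery_procedure(pairs["car_18"]))  # → pickup_window
```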

SYSTEMS AND METHODS FOR GENERATING DEPTH MAPS USING CAMERA ARRAYS INCORPORATING MONOCHROME AND COLOR CAMERAS

A camera array, an imaging device, and/or a method for capturing images that employ a plurality of imagers fabricated on a substrate is provided. Each imager includes a plurality of pixels. The plurality of imagers includes a first imager having first imaging characteristics and a second imager having second imaging characteristics. The images generated by the plurality of imagers are processed to obtain an image enhanced relative to the images captured by the individual imagers. Each imager may be associated with an optical element fabricated using wafer level optics (WLO) technology.
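Depth-map generation from a camera array typically rests on the standard pinhole-stereo relation, depth = f · B / d, where f is the focal length in pixels, B the baseline between two imagers, and d the measured disparity. The sketch below shows that relation only; the patent's actual processing pipeline is not reproduced here:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard pinhole-stereo relation: depth = f * B / d.

    A zero (or negative) disparity corresponds to a point at infinity.
    """
    if disparity_px <= 0:
        return float("inf")
    return focal_px * baseline_m / disparity_px

# A feature shifted 14 px between two imagers 5 cm apart (f = 700 px):
print(depth_from_disparity(700.0, 0.05, 14.0))  # → 2.5  (meters)
```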

MODULAR CAMERA BLOCKS FOR VIRTUAL REALITY CAPTURE

An apparatus comprises: a camera module for obtaining a first image, the camera module having at least one port, each of the at least one port being associated with an attachment position for receiving a second camera module for obtaining a second image; a processor for detecting a position of a second camera module and providing, to an image processing controller, information relating to at least one of the position of the second camera module and the first image obtained by the camera module; and a memory for storing the information relating to at least one of the position of the second camera module and the first image obtained by the camera module.

VISION SYSTEM
20180005529 · 2018-01-04 ·

A vision system for a motor vehicle and a method for providing information to a driver of a motor vehicle, where the system includes at least one video camera, including a lens, adapted and configured to be mounted on a side of the vehicle with the lens facing forward, the lens having an angle of view that provides a field of view for allowing capture, simultaneously, of images of a view alongside the vehicle and of at least part of the side of the vehicle from the rear towards the front of the vehicle, and a processor coupled to the at least one video camera, the processor adapted and configured to process the captured images and provide processed images for display on a monitor and/or vehicle orientation information to a computerized vehicle operation system.

CAMERA CONFIGURATION ON MOVABLE OBJECTS
20180004232 · 2018-01-04 ·

Systems and methods for obstacle detection and state information determination are provided. In some embodiments, a movable object may carry one or more imaging devices. The imaging devices may be arranged on the movable object so as to have a field of view oriented vertically relative to the movable object. The arrangement of the imaging devices may complement or supplant existing arrangement schemes and provide an efficient, multi-functional, and cost-effective means of arranging imaging devices on movable objects.

SPLIT-CAMERA AUTOALIGNMENT

An electronic device comprises a camera and a retaining member. The camera includes an objective portion configured to collect light from a subject, a sensor portion reversibly separable from the objective portion, an alignment-sensing system configured to sense a state of alignment between the objective portion and the sensor portion, an actuator configured to move the objective or sensor portion, and a computer configured to control the actuator responsive to output of the alignment-sensing system, so as to bring the objective and sensor portions into alignment. The retaining member is configured to couple the objective portion to the sensor portion when the objective and sensor portions are aligned and when the objective portion is separated from the sensor portion.
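The computer-controlled actuator loop described above can be sketched as a simple proportional feedback loop that drives the sensed misalignment toward zero; the gain, tolerance, and 2-D offset model are illustrative assumptions, not details from the patent:

```python
def align(offset_x, offset_y, gain=0.5, tol=0.01, max_steps=100):
    """Iteratively reduce the measured misalignment with a proportional
    actuator command -- a sketch of the closed loop in which the
    computer moves the objective (or sensor) portion based on
    alignment-sensing output.

    Each step, the actuator corrects a fraction `gain` of the remaining
    offset; the loop stops when both axes are within `tol`.
    """
    for _ in range(max_steps):
        if abs(offset_x) < tol and abs(offset_y) < tol:
            break
        offset_x -= gain * offset_x
        offset_y -= gain * offset_y
    return offset_x, offset_y

x, y = align(1.0, -0.8)
print(abs(x) < 0.01 and abs(y) < 0.01)  # → True
```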

Systems and Methods for Assessing Viewer Engagement
20180007431 · 2018-01-04 ·

A system for quantifying viewer engagement with a video playing on a display includes at least one camera to acquire image data of a viewing area in front of the display. A microphone acquires audio data emitted by a speaker coupled to the display. The system also includes a memory to store processor-executable instructions and a processor. Upon execution of the processor-executable instructions, the processor receives the image data and the audio data and determines an identity of the video displayed on the display based on the audio data. The processor also estimates a first number of people present in the viewing area and a second number of people engaged with the video. The processor further quantifies the viewer engagement of the video based on the first number of people and the second number of people.
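One plausible reading of "quantifies the viewer engagement ... based on the first number of people and the second number of people" is a simple engaged-to-present ratio, sketched below (the specific metric is an assumption):

```python
def engagement_score(num_present, num_engaged):
    """Quantify viewer engagement as the fraction of people in the
    viewing area who are engaged with the video.

    Returns 0.0 for an empty viewing area to avoid division by zero.
    """
    if num_present == 0:
        return 0.0
    return num_engaged / num_present

# Four people in front of the display, three watching the video:
print(engagement_score(4, 3))  # → 0.75
```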

CAMERA CONTROL APPARATUS
20180007282 · 2018-01-04 ·

A camera control apparatus of the present disclosure includes an interface and a controller. The interface receives first image data generated by a first camera performing image capturing, second image data generated by a second camera performing image capturing, and altitude information relating to altitude, the altitude information being output by an altitude sensor, and transmits a drive signal to a first actuator capable of changing an image capturing direction of the first camera and to a second actuator capable of changing an image capturing direction of the second camera. The controller outputs the drive signal driving at least one of the first actuator and the second actuator to the interface so that an image capturing region of composite image data in which the first image data and the second image data are combined is narrower when the altitude indicated by the altitude information is lower.
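The altitude-dependent control rule (lower altitude → narrower composite capture region) might look like the sketch below; the linear scaling, the constants, and the pan-angle convention are hypothetical choices for illustration only:

```python
def capture_region_width(altitude_m, max_width_m=100.0, full_alt_m=120.0):
    """Scale the composite image's capture region with altitude: lower
    altitude yields a narrower region, as the abstract specifies.
    The linear mapping and constants are illustrative."""
    ratio = min(max(altitude_m / full_alt_m, 0.0), 1.0)
    return max_width_m * ratio

def drive_signal(altitude_m):
    """Derive per-camera pan angles (degrees, hypothetical convention)
    for the two actuators: a wider region splays the cameras outward,
    a narrower region converges their image-capturing directions."""
    width = capture_region_width(altitude_m)
    pan = 30.0 * (width / 100.0)
    return {"cam1_pan": -pan, "cam2_pan": +pan}

# At half the reference altitude the region (and pan) halves:
print(drive_signal(60.0))  # → {'cam1_pan': -15.0, 'cam2_pan': 15.0}
```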