Patent classifications
H04N23/617
Image processing device including neural network processor and operating method thereof
An image processing device includes: an image sensor configured to generate first image data by using a color filter array; and processing circuitry configured to select a processing mode from a plurality of processing modes for the first image data, the selecting being based on information about the first image data; generate second image data by reconstructing the first image data using a neural network processor based on the processing mode; and generate third image data by post-processing the second image data apart from the neural network processor based on the processing mode.
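The mode-driven flow described above (select a mode from metadata, reconstruct with a neural network processor, then post-process separately) can be sketched as follows. All function names, the mode rule, and the stand-in operations are illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch of the mode-driven pipeline: mode selection from image
# metadata, neural-network reconstruction, then separate post-processing.
# Mode rules and operations here are illustrative, not from the patent.

def select_mode(info):
    """Pick a processing mode from information about the first image data."""
    return "low_light" if info.get("iso", 100) > 800 else "normal"

def reconstruct(first_image, mode):
    """Stand-in for the neural network processor's reconstruction step."""
    gain = 2 if mode == "low_light" else 1
    return [px * gain for px in first_image]

def post_process(second_image, mode):
    """Post-processing performed apart from the neural network processor."""
    ceiling = 255
    return [min(px, ceiling) for px in second_image]

def pipeline(first_image, info):
    mode = select_mode(info)
    second = reconstruct(first_image, mode)
    third = post_process(second, mode)
    return mode, third
```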
Networked cameras configured for camera replacement
A camera is provided that has an image sensor for recording image data from a field of vision, a communication interface for connection to at least one further camera in a network, a control and evaluation unit for reading the image data, and a memory in which a parameter set for the operation of the camera is stored. At least one further parameter set for the operation of at least one further camera of the network is also stored in the memory.
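One way to picture the stored "further parameter set" is each camera mirroring a neighbor's configuration so a replacement camera can adopt it. The class and method names below are assumptions for illustration only:

```python
# Illustrative sketch (names are assumptions): each camera's memory holds its
# own parameter set plus parameter sets of further cameras in the network, so
# a replacement camera can take over a failed camera's configuration.

class Camera:
    def __init__(self, name, params):
        self.name = name
        self.memory = {name: params}  # own parameter set

    def mirror(self, other):
        """Store a further camera's parameter set in this camera's memory."""
        self.memory[other.name] = other.memory[other.name]

    def adopt(self, replaced_name):
        """Return the stored parameter set of a replaced camera."""
        return self.memory[replaced_name]
```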
NEURAL NETWORK BASED AUTO-WHITE-BALANCING
A method of auto white balancing, including, receiving an original image, determining an RG logarithmic ratio of a set of red to green channel values of the original image, determining a BG logarithmic ratio of a set of blue to green channel values of the original image, determining an original two-dimensional histogram utilizing the RG logarithmic ratio and the BG logarithmic ratio, determining a Gaussian-blur two-dimensional histogram utilizing the RG logarithmic ratio and the BG logarithmic ratio, determining a sharpened two-dimensional histogram of a sharpened image utilizing the RG logarithmic ratio and the BG logarithmic ratio, determining a Laplacian-edge two-dimensional histogram of a Laplacian-edge image utilizing the RG logarithmic ratio and the BG logarithmic ratio and determining a white balancing gain utilizing a neural network based on the original 2D histogram, the Gaussian-blur 2D histogram, the sharpened 2D histogram and the Laplacian-edge 2D histogram.
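The core feature in this abstract is a two-dimensional histogram over log(R/G) and log(B/G). A minimal pure-Python sketch follows; the bin count and value range are arbitrary choices, not from the patent, and the four variants (original, Gaussian-blurred, sharpened, Laplacian-edge) would each be binned this same way before being fed to the network:

```python
import math

# Sketch of a log-chromaticity 2D histogram: each pixel contributes one count
# at the bin indexed by (log(R/G), log(B/G)). Bin count and range [-2, 2) are
# illustrative assumptions.

def log_chroma_histogram(pixels, bins=8, lo=-2.0, hi=2.0):
    """Bin (r, g, b) pixels by log(R/G) and log(B/G) into a bins x bins grid."""
    hist = [[0] * bins for _ in range(bins)]
    width = (hi - lo) / bins
    for r, g, b in pixels:
        if min(r, g, b) <= 0:
            continue  # log ratios are undefined for non-positive channels
        rg = math.log(r / g)
        bg = math.log(b / g)
        i = min(bins - 1, max(0, int((rg - lo) / width)))
        j = min(bins - 1, max(0, int((bg - lo) / width)))
        hist[i][j] += 1
    return hist
```

A neutral gray pixel has both log ratios equal to zero, so it falls in the central bin; the neural network then maps the stacked histograms to a white balancing gain.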
DETERMINING IMAGE SENSOR SETTINGS USING LIDAR
Methods and devices related to determining image sensor settings using LiDAR are described. In an example, a method can include receiving, at a processing resource via a LiDAR sensor, first signaling indicative of location data, elevation data, and/or light energy intensity data associated with an object, receiving, at the processing resource via an image sensor, second signaling indicative of data representing an image of the object, generating, based at least in part on the first signaling, additional data representing a frame of reference for the object, transmitting to a user interface third signaling indicative of the data representing the frame of reference for the object and the data representing the image of the object, and displaying, at the user interface and based at least in part on the third signaling, another image that comprises a combination of the frame of reference and the data representing the image.
DISPLAY DEVICE AND METHOD OF DRIVING THE SAME
A display device includes: a display panel including a first display area having a first light transmittance and a second display area having a second light transmittance that is higher than the first light transmittance; a camera module under the second display area that is configured to output a raw image signal; a compensation module configured to: activate in response to a compensation control signal; receive the raw image signal; and compensate the raw image signal through a compensation program utilizing a learning data-based deep learning algorithm to generate a compensation image signal; and a control module configured to control operations of the display panel, the camera module, and the compensation module.
PATH-BASED SURVEILLANCE IMAGE CAPTURE
Systems, methods, and computer readable media for performing task assignment, completion, and management within a crowdsourced surveillance platform. A remote server may identify targets for image capture and may assign capture tasks to users based on travel plans of the user. Users may be assigned tasks to capture images of target locations lying along a travel path. The remote server may aggregate data related to the captured images and use it to update a map and log changes to the target location over time.
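The path-based assignment step can be sketched as a simple proximity filter: a target qualifies when it lies within some detour distance of a waypoint on the user's travel path. The distance metric and threshold below are illustrative assumptions:

```python
# Hedged sketch of path-based assignment: a capture target is assigned when
# it lies within max_detour of any waypoint on the user's travel path.
# Euclidean distance and the threshold value are assumptions for illustration.

def distance(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def assign_tasks(targets, travel_path, max_detour=1.0):
    """Return the capture targets lying along the travel path."""
    return [t for t in targets
            if any(distance(t, wp) <= max_detour for wp in travel_path)]
```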
Root level controls to enable privacy mode for device cameras
An approach is provided that detects when a digital camera has been set to a privacy mode that limits access to the digital camera. When in privacy mode, the digital camera receives a request to access the digital camera from an application. The approach determines whether the requesting application is allowed access to the digital camera while the digital camera is in the privacy mode. The requesting application is allowed access to the digital camera in response to a determination that the requesting application is allowed access. Likewise, the requesting application is denied access to the digital camera in response to a determination that the requesting application is not allowed access.
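The access decision described above amounts to a per-application check gated by the privacy-mode flag. The allowlist mechanism in this sketch is an assumption for illustration; the patent does not specify how the policy is stored:

```python
# Minimal sketch of a root-level privacy-mode check. The allowlist is an
# illustrative assumption; the patent only describes allowing or inhibiting
# access per requesting application while privacy mode is active.

class DigitalCamera:
    def __init__(self):
        self.privacy_mode = False
        self.allowed_apps = set()

    def request_access(self, app_name):
        """Grant access unless privacy mode excludes the requesting app."""
        if self.privacy_mode and app_name not in self.allowed_apps:
            return False  # access inhibited under privacy mode
        return True
```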
Multi-Stage Autonomous Localization Architecture for Charging Electric Vehicles
An automated charging system for an electric vehicle is disclosed that includes a plug with a built-in camera assembly. The camera assembly captures images of a charging port of the electric vehicle, which are processed by one or more processors to estimate the location of the charging port relative to the plug. A multi-stage localization architecture is described that includes a gross localization procedure and a fine localization procedure. The gross localization procedure can implement a first convolutional neural network (CNN) to estimate a position of an object in the image. The fine localization procedure can implement a second CNN to estimate a position and orientation of the object. Actuators for moving the plug in a three-dimensional space can be controlled by the multi-stage localization architecture.
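The two-stage control flow (gross localization with one CNN, then fine localization with a second) can be outlined with stub estimators standing in for the networks; everything below, including the returned fields, is an illustrative assumption:

```python
# Control-flow sketch of the multi-stage localization architecture. The two
# stub functions stand in for the first and second CNNs; in practice each
# would run inference on the camera image.

def gross_localize(image):
    """First CNN stub: coarse (x, y) position of the charging port."""
    return image["approx_center"]

def fine_localize(image):
    """Second CNN stub: refined position plus orientation."""
    return image["true_pose"]

def localize_port(image):
    coarse = gross_localize(image)
    # Actuators would first move the plug toward the coarse estimate, then
    # the fine stage refines position and orientation for the final approach.
    pose = fine_localize(image)
    return {"coarse": coarse, "pose": pose}
```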
CAMERA SERVER FOR HETEROGENEOUS CAMERAS
Systems and techniques are provided for a camera server for heterogeneous cameras. A camera server may include a computing device that may receive image data from a first camera and second image data from a second camera of a heterogeneous system that may be a trapped ion quantum computer. The first camera may observe trapped ions. The second camera may observe optical systems and laser beams. The second image data may have a different format than the first image data. The computing device may convert the image data and the second image data into a format for a common data structure for image data, send the image data in the format for the common data structure for image data to client computing devices, and send the second image data in the format for the common data structure for image data to additional client computing devices.
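The conversion step can be sketched as normalizing each camera's payload into one common record before fan-out to clients. The source names and field names below are assumptions, not from the patent:

```python
# Sketch of the common data structure conversion: heterogeneous camera
# payloads are normalized into one record format. Source and field names
# are illustrative assumptions.

def to_common_format(source, payload):
    """Normalize a camera-specific payload into the common structure."""
    if source == "ion_camera":
        return {"source": source, "pixels": payload["frame"]}
    if source == "optics_camera":
        return {"source": source, "pixels": payload["raw"]}
    raise ValueError(f"unknown camera source: {source}")
```

With every camera reduced to the same structure, the server can route records to client computing devices without per-camera handling on the client side.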
Image capturing apparatus, information processing apparatus, methods for controlling the same, image capturing apparatus system, and storage medium
An image capturing apparatus includes a reception unit configured to connect with an external device, which is able to transmit a plurality of learned models, and receive list information of the plurality of learned models, a selection unit configured to select, based on the list information of the plurality of learned models, a learned model from the plurality of learned models, and a transmission unit configured to transmit a transmission request for the learned model selected by the selection unit to the external device, wherein the reception unit receives the selected learned model transmitted from the external device.
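The selection step based on list information can be sketched as below. The selection criterion (largest learned model that fits available memory) is an illustrative assumption; the patent only states that selection is based on the list information:

```python
# Hedged sketch of the selection-unit logic: given list information about the
# learned models on the external device, pick one and form a transmission
# request. The size-based criterion is an illustrative assumption.

def select_model(model_list, free_memory):
    """Pick the largest learned model that fits the available memory."""
    candidates = [m for m in model_list if m["size"] <= free_memory]
    return max(candidates, key=lambda m: m["size"]) if candidates else None

def build_transmission_request(model_list, free_memory):
    """Return a transmission request for the selected learned model."""
    chosen = select_model(model_list, free_memory)
    if chosen is None:
        return None
    return {"request": "transmit", "model": chosen["name"]}
```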