H04N23/61

IMAGE PROCESSING DEVICE AND IMAGE PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD
20230007167 · 2023-01-05

Provided are a device and method that calculate a predicted motion vector corresponding to the type and posture of a tracked subject, and generate the camera control signal needed to capture an image of that subject. The device includes a predicted subject motion vector calculation unit that detects a tracked subject of a previously designated type from a captured image input from an imaging unit and calculates a predicted motion vector corresponding to the type and posture of the detected subject, and a camera control signal generation unit that generates, on the basis of the predicted motion vector calculated by the predicted subject motion vector calculation unit, a camera control signal for capturing a tracked image of the tracked subject. Using a neural network or the like, the predicted subject motion vector calculation unit detects a tracked subject of the type designated by the user from the captured image and calculates the predicted motion vector.
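
A minimal sketch of the control chain this abstract describes: a predicted image-space motion vector, keyed by subject type and posture, is converted into a pan/tilt camera control signal. All names (`predict_motion_vector`, `CameraControl`, the lookup table, and the gain) are illustrative assumptions standing in for the patent's neural-network predictor, not the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class CameraControl:
    pan_deg_per_s: float
    tilt_deg_per_s: float

def predict_motion_vector(subject_type: str, posture: str) -> tuple[float, float]:
    # Stand-in for the neural-network predictor: a lookup keyed by subject
    # type and posture, returning (dx, dy) in image pixels per frame.
    table = {
        ("person", "running"): (40.0, 0.0),
        ("person", "standing"): (0.0, 0.0),
        ("dog", "running"): (60.0, -5.0),
    }
    return table.get((subject_type, posture), (0.0, 0.0))

def control_signal(dx: float, dy: float, gain: float = 0.1) -> CameraControl:
    # Convert predicted image-space motion into pan/tilt rates so the
    # camera leads the subject rather than lagging it.
    return CameraControl(pan_deg_per_s=gain * dx, tilt_deg_per_s=gain * dy)

ctrl = control_signal(*predict_motion_vector("person", "running"))
```

A real system would derive the table from a trained model and calibrate the gain to the lens field of view; the structure of the loop is the same.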

METHOD, SYSTEM, AND IMAGE PROCESSING DEVICE FOR CAPTURING AND/OR PROCESSING ELECTROLUMINESCENCE IMAGES, AND AN AERIAL VEHICLE

A method (400) of capturing and processing electroluminescence (EL) images (1910) of a PV array (40) is disclosed herein. In a described embodiment, the method (400) includes: controlling an aerial vehicle (20) to fly along a flight path to capture EL images (1910) of corresponding PV array subsections (512b) of the PV array (40); deriving respective image quality parameters from at least some of the captured EL images; dynamically adjusting a flight speed of the aerial vehicle along the flight path, based on the respective image quality parameters, for capturing the EL images (1910) of the PV array subsections (512b); extracting a plurality of frames (1500) of a PV array subsection (512b) from the EL images (1910); determining a reference frame having a highest image quality of the PV array subsection (512b) from among the extracted frames (2100); performing image alignment of the extracted frames (2100) to the reference frame to generate image-aligned frames (2130); and processing the image-aligned frames (2130) to produce an enhanced image (2140) of the PV array subsection (512b) having a higher resolution than the reference frame. A system, an image processing device, and an aerial vehicle for performing the method are also disclosed.
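
A hypothetical sketch of the frame-selection and merging steps: pick the extracted frame with the highest quality score as the reference, align the remaining frames to it (alignment is stubbed out here), and average the aligned stack to suppress noise. Function names and the toy quality score are illustrative assumptions, not the patent's method.

```python
def pick_reference(frames, quality):
    # quality: callable returning an image-quality score for a frame
    return max(frames, key=quality)

def enhance(frames, quality):
    ref = pick_reference(frames, quality)
    aligned = [align_to(ref, f) for f in frames]  # alignment stub below
    # Pixel-wise mean of the aligned stack (frames as 2-D lists).
    h, w = len(ref), len(ref[0])
    return [[sum(f[y][x] for f in aligned) / len(aligned)
             for x in range(w)] for y in range(h)]

def align_to(ref, frame):
    # Placeholder: a real implementation would estimate and apply a
    # homography; here frames are assumed already registered.
    return frame

frames = [[[1, 1], [1, 1]], [[3, 3], [3, 3]]]
sharpness = lambda f: f[0][0]  # toy quality score for the demo
merged = enhance(frames, sharpness)
```

In practice the quality score would be something like variance of the Laplacian, and the averaging step would be replaced by a multi-frame super-resolution pass to exceed the reference frame's resolution, as the abstract claims.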

PROCESSING SYSTEM, IMAGE PROCESSING METHOD, LEARNING METHOD, AND PROCESSING DEVICE
20230005247 · 2023-01-05

A processing system includes a processor with hardware. The processor is configured to: acquire a detection target image captured by an endoscope apparatus; control the endoscope apparatus based on control information; detect, based on the detection target image, a region of interest included in the detection target image and calculate estimated probability information representing a probability of the detected region of interest; identify, based on the detection target image, control information for improving the estimated probability information related to the region of interest within the detection target image; and control the endoscope apparatus based on the identified control information.
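
A hypothetical sketch of the control loop the abstract describes: detect a region of interest with an estimated probability, then choose the control action predicted to improve that probability. The candidate actions, the `detect` callable, and the `score_after` predictor are illustrative assumptions; the patent leaves these to the implementation.

```python
def choose_control(image, detect, score_after):
    # Candidate endoscope control actions (illustrative).
    candidates = ["hold", "advance", "rotate_left", "rotate_right"]
    roi, prob = detect(image)
    # Keep the action whose predicted post-control probability is best,
    # falling back to "hold" when nothing beats the current estimate.
    best = max(candidates, key=lambda a: score_after(image, roi, a))
    return best if score_after(image, roi, best) > prob else "hold"

# Demo with stubbed detector and predictor.
probs = {"hold": 0.5, "advance": 0.7, "rotate_left": 0.4, "rotate_right": 0.3}
action = choose_control("frame",
                        detect=lambda img: ("roi", 0.5),
                        score_after=lambda img, roi, a: probs[a])
```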

SHOOTING METHOD, SHOOTING INSTRUCTION METHOD, SHOOTING DEVICE, AND SHOOTING INSTRUCTION DEVICE

A shooting method executed by a shooting device includes: shooting first images of a target space; generating a first three-dimensional point cloud of the target space based on the first images and a first shooting position and a first shooting orientation of each of the first images; and determining a first region of the target space for which generating a second three-dimensional point cloud denser than the first three-dimensional point cloud is difficult, using the first three-dimensional point cloud and without generating the second three-dimensional point cloud. The determining includes generating a mesh using the first three-dimensional point cloud, and determining, as the first region, a region other than a second region of the target space for which the mesh is generated.
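
A minimal sketch of the final determining step, assuming the target space is discretized into cells: the difficult first region is simply the target space minus the meshed second region. The cell grid and function name are illustrative assumptions, not the patent's representation.

```python
def difficult_region(all_cells, meshed_cells):
    # First region = target space minus the second (mesh-covered) region;
    # no dense point cloud is ever generated to make this call.
    return sorted(set(all_cells) - set(meshed_cells))

# Demo: a 3x3 cell grid where the mesh covers the lower-left 2x2 block.
grid = [(x, y) for x in range(3) for y in range(3)]
meshed = [(0, 0), (0, 1), (1, 0), (1, 1)]
hard = difficult_region(grid, meshed)
```

The set difference is the whole trick: it lets the device flag where denser capture is needed using only the sparse cloud and its mesh.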

AUTOMATED IMAGE CAPTURING APPARATUS AND SYSTEM THEREOF

A system and apparatus for automated image capturing comprises a microcontroller, an image capturing device operatively coupled to a pair of guiding apparatuses using a first electric rotary actuator, and a rotary plate operatively mounted on a second electric rotary actuator. The pair of guiding apparatuses and the first electric rotary actuator are actuated to change the position of the image capturing device relative to an object positioned on the rotary plate, and the second electric rotary actuator is actuated to change the angle of orientation of the object positioned on the rotary plate. By varying lighting conditions and background images, a plurality of images of the object are captured using the image capturing device by actuating the electro-mechanical components of the apparatus.
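
A hypothetical sketch of the capture schedule: the Cartesian product of plate angles, lighting presets, and backgrounds yields one shot per combination. The actuator and camera calls are stubbed out; the parameter names are illustrative assumptions.

```python
import itertools

def capture_plan(angles, lightings, backgrounds):
    shots = []
    for angle, light, bg in itertools.product(angles, lightings, backgrounds):
        # In hardware: rotate the plate to `angle`, set the `light`
        # preset, swap in background `bg`, then trigger the camera.
        shots.append({"angle": angle, "lighting": light, "background": bg})
    return shots

plan = capture_plan(angles=[0, 90, 180, 270],
                    lightings=["soft", "hard"],
                    backgrounds=["white", "black"])
```

Four angles, two lighting presets, and two backgrounds produce sixteen images of the object, which is the kind of systematic coverage the apparatus automates.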

METHOD AND SYSTEM FOR PROVIDING INTELLIGENT CONTROL BY USING RADAR SECURITY CAMERA
20230007183 · 2023-01-05

An intelligent control method and system using a radar security camera are disclosed. A target is detected by 360° radar sensing, regardless of the rotation radius of the camera, using a security camera having a built-in radar; after the target is identified, sequentially according to a decision priority order, as a person or a vehicle, the camera tracks the target according to the target's moving direction and specific signs.
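
A minimal sketch of the decision-priority step, assuming person takes precedence over vehicle as the abstract's ordering suggests: classify the radar detection in priority order, and only hand recognized targets to the camera tracker. The classifier stubs and names are illustrative assumptions.

```python
def classify(target, is_person, is_vehicle):
    # Decision priority order: check person before vehicle.
    if is_person(target):
        return "person"
    if is_vehicle(target):
        return "vehicle"
    return "ignore"

def should_track(target, is_person, is_vehicle):
    # Only recognized classes are handed to the camera tracker.
    return classify(target, is_person, is_vehicle) != "ignore"

label = classify("car",
                 is_person=lambda t: t == "pedestrian",
                 is_vehicle=lambda t: t == "car")
```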

ELECTRONIC DEVICE AND METHOD FOR OFFERING VIRTUAL REALITY SERVICE

An electronic device may include a communication module, a camera, and at least one processor operably coupled with the communication module and the camera. The at least one processor may be configured to: establish a communication connection with an external device and a wearable device by using the communication module; acquire an image through the camera; acquire first object information for rendering a first virtual reality object based on the image; transmit the first object information to the external device through the communication module; while transmitting the first object information to the external device, determine whether a specified event occurs; provide map information on a peripheral environment of the electronic device based on at least one of first peripheral space information, acquired by using the camera, including information on an object located within a first region, or second peripheral space information, received from the wearable device, including information on an object located within a second region; and, based on the occurrence of the specified event, transmit to the external device second object information that includes the map information and information on a location of the first virtual reality object within the map information.

Digital zoom conferencing
11570401 · 2023-01-31

Methods, apparatuses, and techniques for security and/or automation systems are described. In one embodiment, the method includes identifying a presence of a first person at a first location, capturing a first video related to the first person at the first location, and initiating an adjustment of a display of the first video based at least in part on identifying the presence of the first person.

Carrier-assisted tracking

A method includes receiving selection of a target within an image captured by an image sensor of a payload and displayed on a user interface of the payload, detecting a deviation of the target from an expected target state within the image, generating, based at least partly on the deviation, a payload control signal including a first angular velocity for rotating the payload about an axis of a carrier to reduce the deviation about the axis in a subsequent image, and generating a base support control signal including a second angular velocity for rotating the payload with respect to the axis. When the first and second angular velocities are received, the carrier is controlled to rotate the payload at a third angular velocity about the axis. The third angular velocity is the first angular velocity, the second angular velocity, or a combination of both.
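
A minimal sketch of the final control step: the carrier rotates the payload at the first angular velocity, the second, or a combination of both. The selection mode and the blend weight are illustrative assumptions; the patent does not specify how the combination is formed.

```python
def third_angular_velocity(first, second, mode="combine", weight=0.5):
    # first:  angular velocity from the payload control signal
    # second: angular velocity from the base support control signal
    if mode == "first":
        return first
    if mode == "second":
        return second
    # Weighted combination of the two commands (assumed linear blend).
    return weight * first + (1.0 - weight) * second

combined = third_angular_velocity(10.0, 2.0)  # deg/s, default blend
```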

Using remote sensors to resolve start up latency in battery-powered cameras and doorbell cameras
11570358 · 2023-01-31

An apparatus comprises a camera and one or more sensors. The camera generally has a low-power deactivated mode. The one or more sensors are generally remotely located with respect to the camera. The one or more sensors may be configured to communicate a signal to the camera in response to a trigger condition. The camera is generally configured to activate in response to receiving the signal from the one or more sensors.
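
A minimal sketch of the wake-up path: the camera sleeps in its low-power deactivated mode until a remote sensor reports its trigger condition, then activates. The class and method names are illustrative assumptions, not the patent's interfaces.

```python
class Camera:
    def __init__(self):
        self.active = False  # low-power deactivated mode

    def on_sensor_signal(self, trigger: bool):
        # A remote sensor communicates a signal on its trigger
        # condition; the camera activates on receipt.
        if trigger:
            self.active = True

cam = Camera()
cam.on_sensor_signal(trigger=True)  # e.g. motion sensed at the door
```

Keeping the sensor remote and always-on is what hides the camera's start-up latency: by the time motion reaches the camera's field of view, activation is already underway.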