G06V20/13

Unmanned system (US) for safety device testing

Methods, devices, and systems for an unmanned system (US) for safety device testing are described herein. In some examples, one or more embodiments include a processor and a memory having instructions stored thereon which, when executed by the processor, cause the processor to capture, using an imaging device of the US, an image of a safety device, determine information corresponding to the safety device based on the image, communicate the determined information to a fire system network, receive, from the fire system network, a test procedure associated with the safety device, perform the test procedure on the safety device, and communicate a result of the test procedure to the fire system network.
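The capture → identify → fetch-procedure → test → report loop described in this abstract can be sketched in Python. All names below (`FireSystemNetwork`, `run_safety_test`, the `device_id` lookup) are illustrative assumptions, not terms from the patent:

```python
from dataclasses import dataclass

@dataclass
class TestProcedure:
    device_id: str
    steps: list  # callables returning True on pass

class FireSystemNetwork:
    """Stand-in for the fire system network: serves test procedures
    for known devices and records reported results."""
    def __init__(self, procedures):
        self._procedures = procedures
        self.results = {}

    def lookup(self, device_info):
        return self._procedures[device_info["device_id"]]

    def report(self, device_id, result):
        self.results[device_id] = result

def run_safety_test(image, network):
    # Determine device information from the captured image; a real
    # system might decode a placard or run a classifier here.
    device_info = {"device_id": image["label"]}
    # Communicate the information and receive the matching procedure.
    procedure = network.lookup(device_info)
    # Perform each test step and aggregate a pass/fail result.
    result = all(step() for step in procedure.steps)
    # Communicate the result back to the network.
    network.report(procedure.device_id, result)
    return result
```

A run against a network holding one procedure simply returns the aggregated pass/fail and leaves the result recorded on the network side.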

Creating a ground control point file using an existing landmark shown in images

In some examples, a system includes a memory configured to store a first image and a second image captured by one or more cameras mounted on one or more vehicles and store locations and orientations of the one or more cameras at times when the first and second images were captured. The system also includes processing circuitry configured to identify an existing landmark in the first and second images. The processing circuitry is also configured to determine a latitude, a longitude, and an altitude of the existing landmark based on the locations and orientations of the one or more cameras at the times when the images were captured. The processing circuitry is configured to create a file including the location of the existing landmark and pixel coordinates of the existing landmark in the first and second images.
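The geometric core of this abstract — recovering a landmark's 3D position from two or more camera poses, then writing a ground-control-point file — can be sketched as a least-squares ray intersection. The file format and function names below are assumptions for illustration; a real system would work in geodetic coordinates and model lens distortion:

```python
import numpy as np

def triangulate_landmark(origins, directions):
    """Least-squares intersection of viewing rays (camera center plus
    viewing direction) that observe the same landmark."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projects onto the plane perpendicular to d
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

def write_gcp_file(path, location, observations):
    """Write a ground-control-point file: the landmark's location plus
    its pixel coordinates in each image (format is illustrative)."""
    with open(path, "w") as f:
        f.write("lat lon alt image px py\n")
        for image_name, (px, py) in observations.items():
            f.write(f"{location[0]} {location[1]} {location[2]} {image_name} {px} {py}\n")
```

With two rays from known camera positions that both pass through the landmark, the solver recovers its coordinates directly.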

Methods and devices for earth remote sensing using stereoscopic hyperspectral imaging in the visible (VIS) and infrared (IR) bands

A hyperspectral stereoscopic CubeSat with computer vision and artificial intelligence capabilities consists of a device and a data processing methodology. The device comprises a number of VIS-NIR-TIR hyperspectral sensors, a central processor with memory, a supervisor system running independently of the imager system, radios, a solar panel and battery system, and an active attitude control system. The device is launched into low earth orbit to capture, process, and transmit stereoscopic hyperspectral imagery in the visible and infrared portions of the electromagnetic spectrum. The processing methodology comprises computer vision and convolutional neural network algorithms to perform spectral feature identification and data transformations.
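The patent's processing methodology uses convolutional neural networks; as a minimal stand-in, spectral feature identification can be illustrated with the classical spectral angle mapper, which labels each pixel by the library spectrum closest in angle. The spectral library and cube shapes below are assumptions:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle between a pixel's spectrum and a reference spectrum."""
    cos = pixel @ reference / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def identify_features(cube, library):
    """Label each pixel of an (H, W, bands) hyperspectral cube with the
    library material whose reference spectrum makes the smallest angle."""
    names = list(library)
    refs = np.stack([library[n] for n in names])
    H, W, B = cube.shape
    flat = cube.reshape(-1, B)
    angles = np.stack([[spectral_angle(p, r) for r in refs] for p in flat])
    return np.array(names)[angles.argmin(axis=1)].reshape(H, W)
```

Because the angle is scale-invariant, the mapper is insensitive to overall illumination differences between pixels, which is why it is a common baseline for spectral matching.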

APPARATUS FOR, METHOD OF, AND COMPUTER PROGRAM PRODUCT HAVING PROGRAM OF DISPLAYING BIOLOGICAL INFORMATION

A biological information displaying apparatus according to an embodiment includes a picture obtaining apparatus and a processor. The picture obtaining apparatus obtains a picture signal of a predetermined site of a subject as a moving image. The processor generates a hue moving image by extracting a luminance or an image-based photoplethysmogram (iPPG) related to a pulse wave for each pixel of the moving image and assigning a hue in accordance with a value of luminance information or iPPG information. The processor displays the generated hue moving image such that the hue moving image is superimposed on an image of the subject.
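The hue-assignment step can be sketched as follows: take a stack of grayscale frames, use the per-pixel temporal peak-to-peak range as a crude proxy for the luminance/iPPG pulse information, and map it to a hue angle. The amplitude proxy, the red-to-blue hue scale, and the alpha blend are all assumptions; the patent does not specify them:

```python
import numpy as np

def hue_map(frames):
    """Assign a hue per pixel from a (T, H, W) stack of grayscale
    frames. The temporal peak-to-peak range stands in for the pulse
    amplitude; strong pulsation maps to red (0 deg), weak to blue (240 deg)."""
    amplitude = frames.max(axis=0) - frames.min(axis=0)
    lo, hi = amplitude.min(), amplitude.max()
    norm = (amplitude - lo) / (hi - lo) if hi > lo else np.zeros_like(amplitude)
    return 240.0 * (1.0 - norm)

def overlay(base, hue_image, alpha=0.5):
    """Alpha-blend the hue visualization over the subject image."""
    return (1 - alpha) * base + alpha * hue_image
```

In practice the per-pixel signal would first be band-pass filtered around plausible heart rates before measuring amplitude; the range here keeps the sketch short.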

IMAGE RECOGNITION SUPPORT APPARATUS, IMAGE RECOGNITION SUPPORT METHOD, AND IMAGE RECOGNITION SUPPORT PROGRAM
20220398831 · 2022-12-15

The invention supports the creation of models that recognize attributes in an image with high accuracy. An image recognition support apparatus includes an image input unit configured to acquire an image, a pseudo label generation unit configured to recognize the acquired image based on a plurality of types of image recognition models and output recognition information, and generate pseudo labels indicating attributes of the acquired image based on the output recognition information, and a new label generation unit configured to generate new labels based on the generated pseudo labels.
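A minimal sketch of the pseudo-label generation step, assuming the recognition models are callables returning attribute labels and that a simple agreement threshold stands in for the (unspecified) rule that turns recognition information into pseudo labels:

```python
from collections import Counter

def generate_pseudo_labels(image, models, min_agreement=2):
    """Run several image recognition models over the same image and
    keep as pseudo labels the attributes that at least min_agreement
    models agree on (the voting rule is an illustrative assumption)."""
    votes = Counter()
    for model in models:
        votes.update(set(model(image)))  # set() so one model votes once per label
    return {label for label, count in votes.items() if count >= min_agreement}
```

Raising `min_agreement` trades label coverage for label precision, which is the usual knob when pseudo labels feed a downstream training step.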

EXTRANEOUS CONTENT REMOVAL FROM IMAGES OF A SCENE CAPTURED BY A MULTI-DRONE SWARM

A method for removing extraneous content in a first plurality of images, captured at a corresponding plurality of poses and a corresponding first plurality of times, by a first drone, of a scene in which a second drone is present includes the following steps, for each of the first plurality of captured images. The first drone predicts a 3D position of the second drone at a time of capture of that image. The first drone defines, in an image plane corresponding to that captured image, a region of interest (ROI) including a projection of the predicted 3D position of the second drone at a time of capture of that image. A drone mask for the second drone is generated, and then applied to the defined ROI, to generate an output image free of extraneous content contributed by the second drone.
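The per-image steps — project the predicted 3D position into the image plane, define an ROI around the projection, and apply a mask inside it — can be sketched with a pinhole camera model. The intrinsics, the fixed ROI half-size, and the zero-fill mask are illustrative assumptions (in practice the ROI would scale with the second drone's predicted distance):

```python
import numpy as np

def project_point(K, point_cam):
    """Pinhole projection of a 3D point in camera coordinates to pixel
    coordinates using the intrinsic matrix K."""
    uvw = K @ point_cam
    return uvw[:2] / uvw[2]

def roi_around(center_px, half_size, image_shape):
    """Axis-aligned region of interest around the projected position,
    clipped to the image bounds. Returns (x0, y0, x1, y1)."""
    h, w = image_shape
    u, v = center_px
    x0, x1 = int(max(0, u - half_size)), int(min(w, u + half_size))
    y0, y1 = int(max(0, v - half_size)), int(min(h, v + half_size))
    return x0, y0, x1, y1

def apply_mask(image, roi, mask_value=0):
    """Apply the drone mask inside the ROI (here: zero out pixels) to
    produce an output image free of the second drone's content."""
    x0, y0, x1, y1 = roi
    out = image.copy()
    out[y0:y1, x0:x1] = mask_value
    return out
```

A drone predicted on the optical axis at 2 m projects to the principal point, and only pixels inside the resulting ROI are rewritten.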

Image-based productivity tracking system
11525243 · 2022-12-13

A work machine including a sensing device, a user interface, and a control unit is disclosed. The control unit may be configured to generate a productivity layer based on productivity data, and generate an image layer based on image data. The image data may include information relating to an image corresponding to a state of an operation associated with a worksite and a geospatial reference associated with the image. The control unit may be configured to generate a composite image of the worksite based on a map layer, the image layer, and the productivity layer, and cause the composite image to be displayed via the user interface. The composite image may position the image layer relative to the map layer based on the geospatial reference and geographical coordinates corresponding to the geospatial reference, and position the productivity layer relative to the image layer.
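The layer-positioning logic can be sketched with NumPy arrays standing in for the map, image, and productivity layers. The linear geographic-to-pixel mapping and the rule that nonzero productivity cells overwrite the image layer are assumptions; a real worksite map would use a projected coordinate system:

```python
import numpy as np

def geo_to_pixel(lat, lon, origin, scale):
    """Map geographic coordinates to (row, col) offsets on the map
    layer, relative to the map's top-left origin (lat, lon)."""
    row = int((origin[0] - lat) * scale)
    col = int((lon - origin[1]) * scale)
    return row, col

def composite(map_layer, image_layer, productivity_layer, geo_ref, origin, scale):
    """Position the image layer on the map using its geospatial
    reference, then position the productivity layer relative to the
    image layer (nonzero productivity cells overwrite the image)."""
    out = map_layer.copy()
    r, c = geo_to_pixel(geo_ref[0], geo_ref[1], origin, scale)
    h, w = image_layer.shape
    out[r:r + h, c:c + w] = image_layer
    region = out[r:r + h, c:c + w]          # view into out
    mask = productivity_layer > 0
    region[mask] = productivity_layer[mask]
    return out
```

Anchoring both overlays to the same geospatial reference is what keeps the productivity data registered to the imagery as the map scrolls or rescales.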

System and method for intelligent infrastructure calibration
11527160 · 2022-12-13

A system for infrastructure system calibration includes a sensor configured to be mounted to an infrastructure component and configured to detect an object. A corner reflector has an optical pattern and is arranged within a field of view of the sensor. The corner reflector has three surfaces that meet at a point.