
Information processing apparatus, information processing system, and material identification method

An information processing apparatus includes an imaging apparatus that irradiates a subject with reference light in a predetermined wavelength band and captures the reflection of the reference light from the subject to acquire captured-image data including polarized images in multiple orientations (S30). Based on the polarized images, the apparatus acquires a polarization degree image representing the distribution of polarization degrees (S32). The apparatus extracts a region whose polarization degree falls within a predetermined range as an image of a subject made of a predetermined material (S34), then performs relevant processing on the subject image to generate output data and outputs the generated data (S36).
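The extraction step above can be sketched with the standard Stokes-parameter formulation, assuming four polarizer orientations (0°, 45°, 90°, 135°); the function names and threshold values are illustrative, not taken from the patent:

```python
import numpy as np

def polarization_degree(i0, i45, i90, i135):
    """Degree of linear polarization from images at four polarizer orientations."""
    s0 = (i0 + i45 + i90 + i135) / 2.0  # total intensity (Stokes S0)
    s1 = i0 - i90                        # Stokes S1
    s2 = i45 - i135                      # Stokes S2
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)

def extract_material_mask(dolp, lo, hi):
    """Keep pixels whose polarization degree falls within [lo, hi]."""
    return (dolp >= lo) & (dolp <= hi)
```

A region whose degree of polarization falls in the material-specific range then serves as the subject image for subsequent processing.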

METHOD FOR TURBINE COMPONENT QUALIFICATION

A method for evaluating a turbine component includes inducing a thermal response of the component at an initial time, capturing a two-dimensional infrared image of the thermal response of the component with a thermal imaging device, wherein the two-dimensional infrared image comprises a plurality of infrared image pixels, generating a two-dimensional-to-three-dimensional mapping template to correlate two-dimensional infrared image data with three-dimensional locations on the component, mapping at least a subset of the plurality of infrared image pixels of the two-dimensional infrared image to three-dimensional coordinates using the mapping template, and generating a three-dimensional infrared image and infrared data of the component from the infrared image pixels mapped to three-dimensional coordinates, wherein the three-dimensional infrared image and infrared data are used to qualify the component for use.
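The mapping step might be sketched as follows; representing the template as a lookup from pixel coordinates to 3D surface points is an assumption for illustration, not the patented template format:

```python
import numpy as np

def map_pixels_to_3d(ir_image, template):
    """Map IR pixel intensities to 3D coordinates via a lookup template.

    template: dict mapping (row, col) -> (x, y, z) on the component surface
              (an assumed representation of the 2D-to-3D mapping template).
    Returns an (N, 4) array of [x, y, z, intensity] points.
    """
    points = []
    for (r, c), (x, y, z) in template.items():
        points.append([x, y, z, ir_image[r, c]])
    return np.array(points)
```

The resulting point set carries the thermal response into component-space coordinates, where it can be compared against qualification criteria.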

Methods and devices for unmanned aerial vehicle based site inspection and scale rendered analysis
11710415 · 2023-07-25

Various embodiments of the present technology generally relate to unmanned aerial vehicle (UAV) scale rendered analysis, orthomosaic, and 3D mapping and landing platform systems. More specifically, some embodiments relate to systems, methods, and means for the collection and processing of images captured during a UAV flight sequence. In some embodiments, the UAV landing platform retrieves flight information and initial map information over a unidirectional virtual private network from a multitenant cloud-based scheduling application. The UAV landing platform sends the initial map information to a UAV over a WiFi, Bluetooth, or radio frequency network and initiates a drone flight sequence once the drone flight sequence has been approved by a local user. The UAV landing platform receives property image data from a UAV after a UAV flight sequence has ended and transmits the received property image data back to the cloud application.

Sensor fusion eye tracking
11710350 · 2023-07-25

Some implementations of the disclosure involve, at a device having one or more processors, one or more image sensors, and an illumination source, detecting a first attribute of an eye based on pixel differences associated with different wavelengths of light in a first image of the eye. These implementations next determine a first location associated with the first attribute in a three dimensional (3D) coordinate system based on depth information from a depth sensor. Various implementations detect a second attribute of the eye based on a glint resulting from light of the illumination source reflecting off a cornea of the eye. These implementations next determine a second location associated with the second attribute in the 3D coordinate system based on the depth information from the depth sensor, and determine a gaze direction in the 3D coordinate system based on the first location and the second location.
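The final step, deriving a gaze direction from the two 3D locations, can be sketched as a normalized vector between them; treating the glint-derived location as the cornea center and the first attribute as the pupil center is an illustrative assumption:

```python
import numpy as np

def gaze_direction(pupil_xyz, cornea_xyz):
    """Unit gaze vector from the cornea-center location toward the
    pupil location, both expressed in the shared 3D coordinate system."""
    v = np.asarray(pupil_xyz, dtype=float) - np.asarray(cornea_xyz, dtype=float)
    return v / np.linalg.norm(v)
```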

CONTROLLING LIGHTING LOADS TO ACHIEVE A DESIRED LIGHTING PATTERN

A visible light sensor may be configured to sense environmental characteristics of a space using an image of the space. The visible light sensor may be controlled in one or more modes, including a daylight glare sensor mode, a daylighting sensor mode, a color sensor mode, and/or an occupancy/vacancy sensor mode. In the daylight glare sensor mode, the visible light sensor may be configured to decrease or eliminate glare within a space. In the daylighting sensor mode and the color sensor mode, the visible light sensor may be configured to provide a preferred amount of light and color temperature, respectively, within the space. In the occupancy/vacancy sensor mode, the visible light sensor may be configured to detect an occupancy/vacancy condition within the space and adjust one or more control devices according to the occupation or vacancy of the space. The visible light sensor may be configured to protect the privacy of users within the space via software, a removable module, and/or a special sensor.

MULTISCALE MODELING TO DETERMINE MOLECULAR PROFILES FROM RADIOLOGY

Systems and methods for analyzing pathologies utilizing quantitative imaging are presented herein. Advantageously, the systems and methods of the present disclosure utilize a hierarchical analytics framework that identifies and quantifies biological properties/analytes from imaging data and then identifies and characterizes one or more pathologies based on the quantified biological properties/analytes. This hierarchical approach of using imaging to examine underlying biology as an intermediary to assessing pathology provides many analytic and processing advantages over systems and methods that are configured to directly determine and characterize pathology from underlying imaging data.

Systems and Methods for Measuring Vital Signs Using Multimodal Health Sensing Platforms

Systems and methods for measuring vitals in accordance with embodiments of the invention are illustrated. One embodiment includes a method for measuring vital signs. The method includes steps for identifying regions of interest (ROIs) from video data of an individual, generating temporal waveforms from the ROIs, analyzing the generated temporal waveforms to extract vital sign measurements, and generating outputs based on the analyzed temporal waveforms.
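The waveform-to-vital-sign step can be sketched with a remote-photoplethysmography-style pipeline; averaging the green channel over the ROI and picking the dominant frequency in the human pulse band are common choices assumed here, not details from the patent:

```python
import numpy as np

def roi_waveform(frames, roi):
    """Mean green-channel intensity inside an ROI for each frame.
    frames: (T, H, W, 3) video array; roi: (r0, r1, c0, c1) bounds."""
    r0, r1, c0, c1 = roi
    return frames[:, r0:r1, c0:c1, 1].mean(axis=(1, 2))

def heart_rate_bpm(waveform, fps):
    """Dominant frequency of the detrended waveform, in beats per minute."""
    x = waveform - waveform.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    # restrict to a plausible human pulse band (0.7-4 Hz ~ 42-240 bpm)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```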

NODE-BASED NEAR-MISS DETECTION
20230237804 · 2023-07-27

A system includes one or more video capture devices and a processor coupled to each video capture device. Each processor is operable to direct its respective video capture device to obtain an image of a monitored area and process the image to identify objects of interest represented in the image. The processor is also operable to generate bounding perimeter virtual objects for the identified objects of interest, each bounding perimeter virtual object surrounding at least part of its respective object of interest. The processor is further operable to determine danger zones for the identified objects of interest based on the bounding perimeter virtual objects. The processor is further operable to determine at least one near-miss condition based at least in part on an actual or predicted overlap of danger zones for multiple objects of interest, and may optionally generate an alert at least partially in response to the near-miss condition.
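The danger-zone and overlap logic can be sketched as axis-aligned rectangle tests; expanding each bounding perimeter by a fixed safety margin is an assumed simplification of how danger zones might be derived:

```python
def danger_zone(box, margin):
    """Expand a bounding box (x0, y0, x1, y1) by a safety margin."""
    x0, y0, x1, y1 = box
    return (x0 - margin, y0 - margin, x1 + margin, y1 + margin)

def zones_overlap(a, b):
    """Axis-aligned rectangle intersection test."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def near_miss(boxes, margin):
    """True if any two objects' danger zones overlap."""
    zones = [danger_zone(b, margin) for b in boxes]
    return any(zones_overlap(zones[i], zones[j])
               for i in range(len(zones)) for j in range(i + 1, len(zones)))
```

A positive result would then trigger the optional alert described above.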

Systems and methods for detecting movement of at least one non-line-of-sight object

A system and method for detecting movement of a non-line-of-sight (NLOS) object in a space outside a line-of-sight (LOS) of a camera is disclosed. The camera acquires a sequence of successive images that each include a set of pixels, at least a subset of which represents a target: at least part of a visible object located within the LOS of the camera and impacted by light scattered from the NLOS object. The sets of pixels of two images of the sequence, acquired from different positions of the camera, are registered into a common coordinate system, giving rise to two registered images. A target light intensity value is calculated for both registered images based on at least part of the set of pixels representing the target in the respective registered image. Movement of the NLOS object is detected based on a variation in the target light intensity value between registered images.
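Once the images are registered, the detection step reduces to comparing target intensities; this minimal sketch assumes registration has already been done and that a boolean mask marks the target pixels:

```python
import numpy as np

def target_intensity(image, target_mask):
    """Mean light intensity over the pixels representing the visible target."""
    return float(image[target_mask].mean())

def nlos_movement(img_a, img_b, target_mask, threshold):
    """Flag movement of the hidden object when the target's intensity varies
    between two registered images by more than a threshold (assumed fixed)."""
    return abs(target_intensity(img_a, target_mask)
               - target_intensity(img_b, target_mask)) > threshold
```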

GEOLOGICALLY CONSTRAINED INFRARED IMAGING DETECTION METHOD AND SYSTEM FOR URBAN DEEPLY-BURIED STRIP-LIKE PASSAGE

Provided in the present invention are a geologically constrained infrared imaging detection method and system for an urban deeply-buried strip-like passage, at the intersection of geophysics and remote sensing technology. The method includes: establishing an urban hierarchical three-dimensional temperature field model according to urban street DEM data and geological data corresponding to urban streets; acquiring urban stratum geological background heat flux from the urban hierarchical three-dimensional temperature field model; using a total solar radiation energy distribution model to calculate urban surface total solar radiation energy; sequentially filtering the urban surface total solar radiation energy and the urban stratum geological background heat flux out of an infrared remote sensing image of the region corresponding to a strip-like underground target, to acquire a perturbation signal image of an urban street deeply-buried strip-like passage; and, after preprocessing the perturbation signal image, applying a grayscale closing operation combined with an edge detection algorithm to perform detection and positioning, to acquire location information of the urban strip-like underground passage. The present invention thereby achieves inversion-based detection and positioning of an urban street deeply-buried strip-like passage.
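The filtering and morphological steps above can be sketched as follows; the subtraction model, 3x3 structuring element, and gradient-magnitude edge map are illustrative assumptions, not the patented formulation:

```python
import numpy as np

def perturbation_signal(ir_image, solar_energy, background_flux):
    """Filter modeled solar radiation and geological background heat flux
    out of the infrared image, leaving the passage's perturbation signal."""
    return ir_image - solar_energy - background_flux

def _filter3x3(img, op):
    """3x3 sliding max/min filter (edge pixels replicated)."""
    p = np.pad(img, 1, mode='edge')
    stack = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return op(np.stack(stack), axis=0)

def grey_closing(img):
    """Grayscale closing: dilation (max) followed by erosion (min),
    which fills small dark gaps along the strip-like anomaly."""
    return _filter3x3(_filter3x3(img, np.max), np.min)

def edge_magnitude(img):
    """Simple gradient-magnitude edge map for locating the passage outline."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)
```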