Patent classifications
G06T2207/10032
SPACECRAFT, COMMUNICATION METHOD, AND COMMUNICATION SYSTEM
A disclosed spacecraft is provided with: an attitude control actuator configured to control an attitude of the spacecraft; an imaging device configured to receive an optical communication signal from another spacecraft; and an attitude controller configured to control the attitude control actuator, based on a position of the optical communication signal in an image obtained by the imaging device.
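The control loop in this abstract can be sketched as a simple proportional rule: locate the signal in the image and command the attitude so the offset from the image centre shrinks. A minimal illustration follows; `fov_deg`, the sign conventions, and the brightest-pixel detector are assumptions, not details from the patent.

```python
# Hypothetical sketch: steer toward the optical communication signal by
# mapping its pixel offset from the image centre to attitude errors.

def brightest_pixel(image):
    """Return (row, col) of the maximum-intensity pixel in a 2-D list."""
    best = (0, 0)
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            if v > image[best[0]][best[1]]:
                best = (r, c)
    return best

def attitude_error(image, fov_deg=10.0):
    """Map the signal's pixel offset from centre to (pitch, yaw) errors
    in degrees, assuming a linear pinhole approximation."""
    rows, cols = len(image), len(image[0])
    r, c = brightest_pixel(image)
    # Offset from the image centre, normalised by the image size
    dy = (r - (rows - 1) / 2) / rows
    dx = (c - (cols - 1) / 2) / cols
    return (-dy * fov_deg, dx * fov_deg)  # pitch, yaw commands

img = [[0, 0, 0, 0],
       [0, 0, 9, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
pitch, yaw = attitude_error(img)
```

Feeding these errors to the attitude control actuator each frame closes the loop described in the abstract.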
HIGH DYNAMIC RANGE IMAGE SYNTHESIS METHOD AND APPARATUS, IMAGE PROCESSING CHIP AND AERIAL CAMERA
Embodiments of the present invention are a high dynamic range (HDR) synthesis method and apparatus, an image processing chip and an aerial camera. The method includes: acquiring a plurality of to-be-synthesized images having different exposure times; calculating a mean brightness of the to-be-synthesized images; determining an image brightness type of the to-be-synthesized images according to the mean brightness; calculating a brightness difference between adjacent pixel points in one to-be-synthesized image; calculating an inter-frame difference of different to-be-synthesized images at a same pixel point position according to the brightness difference; determining a motion state of the to-be-synthesized images at the pixel point position according to the inter-frame difference; and weighting and synthesizing the to-be-synthesized images into a corresponding HDR image according to the image brightness type and the motion state.
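The weighting-and-synthesis step can be illustrated with a toy merge: well-exposed (mid-tone) pixels get higher weights, and pixel positions whose inter-frame difference signals motion fall back to a single reference frame to avoid ghosting. The triangular weight, the fixed threshold, and the assumption that frames are already exposure-normalised are all simplifications of the method in the abstract.

```python
# Minimal sketch of exposure-weighted HDR merging with a motion check.
# Pixel values are assumed to be in 0..255 and exposure-normalised.

def well_exposed_weight(v):
    """Triangular weight: highest for mid-tone pixels."""
    return 1.0 - abs(v - 127.5) / 127.5

def merge_hdr(frames, motion_thresh=60, ref=0):
    """Weight-and-sum frames pixel-wise; where the inter-frame
    difference exceeds motion_thresh, keep only the reference frame."""
    rows, cols = len(frames[0]), len(frames[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [f[r][c] for f in frames]
            if max(vals) - min(vals) > motion_thresh:
                out[r][c] = float(frames[ref][r][c])  # moving pixel
            else:
                w = [well_exposed_weight(v) + 1e-6 for v in vals]
                out[r][c] = sum(wi * vi for wi, vi in zip(w, vals)) / sum(w)
    return out
```

A full implementation would also use the image brightness type from the mean-brightness step to bias the weights; that step is omitted here.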
SYSTEM AND METHOD FOR REAL-TIME CROP MANAGEMENT
The present invention discloses a method for selective crop management in real time. The method comprises the steps of: (a) producing a biosensor plant, said biosensor plant comprising a visual biomarker, said biomarker being encoded by at least one modified genetic locus comprising (i) a preselected reporter gene allele having a phenotype detectable by a sensor, and (ii) a regulatory region of a preselected gene allele responsive to at least one parameter or condition of said plant or its environment, said regulatory region being operably linked to said reporter gene, such that the expression of said reporter gene phenotype is correlated with the status of said at least one parameter or condition of said biosensor plant or its environment; (b) acquiring image data of a target area comprising a plurality of said biosensor plants via said sensor and processing said data to generate a signal indicative of the phenotypic expression of said reporter gene allele of said biosensor plant; and (c) communicating said signal to an execution unit communicably linked to the sensor, said execution unit being capable of exerting in real time selective monitoring and/or treatment of said target area or a portion thereof comprising said biosensor plants, said treatment being responsive to said status of said parameter or condition of the biosensor plant or its environment. The present invention further discloses systems and plants related to the aforementioned method.
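Steps (b) and (c) reduce, at their simplest, to thresholding the reporter signal per plant region and flagging regions for treatment. A hedged sketch, with an illustrative data format and threshold not taken from the patent:

```python
# Toy stand-in for the signal-to-treatment decision: each plant region
# is summarised by a mean reporter intensity in 0..1 (an assumption),
# and regions whose stress-responsive reporter is switched on are
# flagged for selective treatment.

def plants_needing_treatment(regions, threshold=0.6):
    """regions: dict of plant_id -> mean reporter intensity (0..1).
    Returns the sorted ids to pass to the execution unit."""
    return sorted(pid for pid, v in regions.items() if v >= threshold)
```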
TREE CROWN EXTRACTION METHOD BASED ON UNMANNED AERIAL VEHICLE MULTI-SOURCE REMOTE SENSING
A tree crown extraction method based on UAV multi-source remote sensing includes: obtaining a visible light image and LIDAR point clouds, taking a digital orthophoto map (DOM) and the LIDAR point clouds as data sources, and using a combination of watershed segmentation and object-oriented multi-scale segmentation to extract single tree crown information under different canopy densities. The object-oriented multi-scale segmentation method is used to separate crown and non-crown areas, and the tree crown distribution range is extracted with the crown area as a mask; a preliminary single tree crown segmentation result is obtained by the watershed segmentation method based on a canopy height model; then, taking the brightness value of the DOM as a feature, secondary segmentation is performed on the crown area of the DOM based on the crown boundary to obtain optimized single tree crown boundary information, which greatly increases the accuracy of remote sensing tree crown extraction.
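The treetop-seeded segmentation step can be approximated in a few lines: visit canopy height model (CHM) pixels from tallest to lowest and attach each to the segment of its tallest already-labelled neighbour, seeding a new crown at each unclaimed local maximum. This is a simplified stand-in for the watershed segmentation named above, with an assumed `ground` cutoff.

```python
# Simplified watershed-style crown labelling on a CHM (2-D list of
# heights in metres). Pixels at or below `ground` stay unlabelled.

def segment_crowns(chm, ground=0.5):
    rows, cols = len(chm), len(chm[0])
    order = sorted(((chm[r][c], r, c)
                    for r in range(rows) for c in range(cols)
                    if chm[r][c] > ground), reverse=True)
    labels = [[0] * cols for _ in range(rows)]
    next_label = 1
    for h, r, c in order:
        best, best_h = 0, -1.0
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr][nc]:
                if chm[nr][nc] > best_h:
                    best_h, best = chm[nr][nc], labels[nr][nc]
        if best:
            labels[r][c] = best            # join the neighbouring crown
        else:
            labels[r][c] = next_label      # new local maximum -> new crown
            next_label += 1
    return labels
```

The DOM-brightness refinement described in the abstract would then re-segment within each labelled crown; that stage is not sketched here.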
DETERMINING MINIMUM REGION FOR FINDING PLANAR SURFACES
Systems, devices, methods, and computer-readable media for determining planarity in a 3D data set are provided. A method can include receiving or retrieving three-dimensional (3D) data of a geographical region, dividing the 3D data into first contiguous regions of specified first geographical dimensions, determining, for each first contiguous region of the first contiguous regions, respective measures of variation, identifying, based on the respective measures of variation, a search radius, dividing the 3D data into respective second contiguous or overlapping regions with dimensions equal to the identified search radius, and determining, based on the identified search radius, a planarity of each of the respective second contiguous or overlapping regions.
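A minimal sketch of the tiling-and-variation idea, assuming height spread (max minus min) as the measure of variation and a fixed tolerance; the patent does not specify either choice:

```python
# Tile a height raster, measure per-tile height spread, pick the
# coarsest tile size at which the data still looks planar, and flag
# planar tiles. All thresholds are illustrative assumptions.

def tile_spread(grid, size):
    """Yield ((row, col) origin, max-min height) per size x size tile."""
    rows, cols = len(grid), len(grid[0])
    for r0 in range(0, rows, size):
        for c0 in range(0, cols, size):
            vals = [grid[r][c]
                    for r in range(r0, min(r0 + size, rows))
                    for c in range(c0, min(c0 + size, cols))]
            yield (r0, c0), max(vals) - min(vals)

def pick_search_radius(grid, sizes, tol=0.1):
    """Largest candidate tile size whose tiles all stay within tol."""
    best = min(sizes)
    for size in sorted(sizes):
        if all(spread <= tol for _, spread in tile_spread(grid, size)):
            best = size
    return best

def planar_tiles(grid, size, tol=0.1):
    """Tiles whose height spread stays within tol count as planar."""
    return {origin: spread <= tol for origin, spread in tile_spread(grid, size)}
```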
System and method for determining position of multi-dimensional object from satellite images
Various aspects of a system and a method for determining a position of one or more multi-dimensional objects are disclosed herein. In accordance with an embodiment, the system may include a memory and a processor. The processor may be configured to obtain, from a plurality of satellite images, shadow data of a first multi-dimensional object of a plurality of multi-dimensional objects on a visible surface. The processor may be configured to obtain, from a server, base elevation data and height data of the first multi-dimensional object. The processor may be further configured to generate a Digital Elevation Model (DEM) of the plurality of multi-dimensional objects. The processor may be further configured to determine a position of a second multi-dimensional object of the plurality of multi-dimensional objects on the visible surface, based on the generated DEM.
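The geometry underlying the shadow data is the standard relation between object height, shadow length, and sun elevation. A back-of-envelope sketch; the function names and the flat-ground assumption are illustrative, not from the patent:

```python
import math

# With the sun at elevation angle e, an object of height h casts a
# shadow of length h / tan(e) on flat ground; the shadow points away
# from the sun azimuth.

def height_from_shadow(shadow_len_m, sun_elev_deg):
    """Estimate object height from a measured shadow length."""
    return shadow_len_m * math.tan(math.radians(sun_elev_deg))

def shadow_tip(base_xy, height_m, sun_elev_deg, sun_azimuth_deg):
    """Predict where the shadow tip lands (x east, y north), used to
    associate a shadow in the image with the object that cast it."""
    length = height_m / math.tan(math.radians(sun_elev_deg))
    az = math.radians(sun_azimuth_deg)
    return (base_xy[0] - length * math.sin(az),
            base_xy[1] - length * math.cos(az))
```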
Adaptive gaussian derivative sigma systems and methods
In one embodiment, a method is provided. The method comprises determining a first value of a coefficient of an edge-determining algorithm in response to a spatial resolution of a first image acquired with an image capture device onboard a vehicle, a spatial resolution of a second image, and a second value of the coefficient in response to which the edge-determining algorithm generated a second edge map corresponding to the second image. The method further comprises determining, with the edge-determining algorithm in response to the coefficient having the first value, at least one edge of at least one object in the first image. The method further comprises generating, in response to the determined at least one edge, a first edge map corresponding to the first image. The method further comprises determining at least one navigation parameter of the vehicle in response to the first and second edge maps.
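One plausible reading of the coefficient update is to scale the Gaussian-derivative sigma with the ratio of spatial resolutions, so the kernel covers the same ground distance in both images. The proportional rule below is an assumption; the patent does not give an explicit formula.

```python
import math

def adapt_sigma(sigma_prev, res_prev_m, res_new_m):
    """Sigma in pixels scales inversely with metres-per-pixel so the
    kernel's ground footprint stays constant (assumed rule)."""
    return sigma_prev * res_prev_m / res_new_m

def gaussian_derivative(sigma, half_width=None):
    """Unnormalised 1-D derivative-of-Gaussian kernel for edge
    detection: g'(x) = -x / sigma^2 * exp(-x^2 / (2 sigma^2))."""
    if half_width is None:
        half_width = int(3 * sigma + 0.5)
    return [-x / (sigma ** 2) * math.exp(-x * x / (2 * sigma ** 2))
            for x in range(-half_width, half_width + 1)]
```

With this rule, halving the ground sample distance of the new image doubles the sigma used to build its edge map, keeping the two edge maps comparable for navigation.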
METHOD FOR COMPUTATIONAL METROLOGY AND INSPECTION FOR PATTERNS TO BE MANUFACTURED ON A SUBSTRATE
Methods include generating a scanner aerial image using a neural network, where the scanner aerial image is generated using a mask inspection image that has been generated by a mask inspection machine. Embodiments also include training the neural network with a set of images, such as with a simulated scanner aerial image and another image selected from a simulated mask inspection image, a simulated Critical Dimension Scanning Electron Microscope (CD-SEM) image, a simulated scanner emulator image and a simulated actinic mask inspection image.
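The supervised setup (inspection-style image in, scanner aerial image out) can be illustrated with a deliberately tiny stand-in: a per-pixel affine map trained by gradient descent on (input, target) image pairs. A real system would use a convolutional network; this only shows the shape of the training loop, and all names are assumptions.

```python
# Toy image-to-image "network": learn y = w*x + b per pixel from
# training pairs by stochastic gradient descent on squared error.

def train_affine(pairs, lr=0.01, epochs=2000):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x_img, y_img in pairs:
            for x_row, y_row in zip(x_img, y_img):
                for x, y in zip(x_row, y_row):
                    err = (w * x + b) - y   # prediction error
                    w -= lr * err * x       # gradient step on w
                    b -= lr * err           # gradient step on b
    return w, b

def predict(img, w, b):
    return [[w * x + b for x in row] for row in img]

# Noise-free training pair satisfying y = 2*x + 1
x_train = [[0, 1], [2, 3]]
y_train = [[1, 3], [5, 7]]
w, b = train_affine([(x_train, y_train)])
```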
Performing 3D reconstruction via an unmanned aerial vehicle
In some examples, an unmanned aerial vehicle (UAV) employs one or more image sensors to capture images of a scan target and may use distance information from the images for determining respective locations in three-dimensional (3D) space of a plurality of points of a 3D model representative of a surface of the scan target. The UAV may compare a first image with a second image to determine a difference between a current frame of reference position for the UAV and an estimate of an actual frame of reference position for the UAV. Further, based at least on the difference, the UAV may determine, while the UAV is in flight, an update to the 3D model including at least one of an updated location of at least one point in the 3D model, or a location of a new point in the 3D model.
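The in-flight update can be sketched as two small steps: shift points reconstructed under the stale pose by the measured pose difference, then fold observations into the model, updating points that land near existing ones and appending the rest. The pure-translation model and the distance tolerance are simplifying assumptions.

```python
# Minimal sketch of the pose-correction and model-update steps.

def correct_points(points, pose_delta):
    """Apply the (dx, dy, dz) frame-of-reference correction to points
    reconstructed under the stale pose (translation-only assumption)."""
    dx, dy, dz = pose_delta
    return [(x + dx, y + dy, z + dz) for x, y, z in points]

def update_model(model, observed, tol=0.1):
    """Fold observed points into the model: a point within tol of an
    existing point replaces it (updated location); others are new."""
    out = list(model)
    for p in observed:
        for i, q in enumerate(out):
            if sum((a - b) ** 2 for a, b in zip(p, q)) <= tol ** 2:
                out[i] = p
                break
        else:
            out.append(p)
    return out
```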
Systems and methods for improved 3-D data reconstruction from stereo-temporal image sequences
In some aspects, the techniques described herein relate to systems, methods, and computer readable media for data pre-processing for stereo-temporal image sequences to improve three-dimensional data reconstruction. In some aspects, the techniques described herein relate to systems, methods, and computer readable media for improved correspondence refinement for image areas affected by oversaturation. In some aspects, the techniques described herein relate to systems, methods, and computer readable media configured to fill missing correspondences to improve three-dimensional (3-D) reconstruction. The techniques include identifying image points without correspondences, using existing correspondences and/or other information to generate approximated correspondences, and cross-checking the approximated correspondences to determine whether the approximated correspondences should be used for the image processing.