Patent classification: G06T2207/20164
CONTOUR SHAPE RECOGNITION METHOD
Provided is a contour shape recognition method, including: sampling and extracting salient feature points of the contour of a shape sample; calculating feature functions of the shape sample at a semi-global scale by using three types of shape descriptors; sampling the scale at single-pixel spacing to acquire the shape feature functions in a full-scale space; storing the feature function values at the various scales into a matrix to acquire three grayscale-map representations of the shape sample in the full-scale space; synthesizing the three grayscale-map representations, as the three RGB channels, into a color feature representation image; constructing a two-stream convolutional neural network that takes the shape sample and the feature representation image as simultaneous inputs; and training the two-stream convolutional neural network and inputting a test sample into the trained network model to achieve shape classification.
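The full-scale feature construction described above can be sketched as follows. The three descriptors used here (chord length, triangle area, and a scale-weighted centroid distance) are illustrative stand-ins, not necessarily the patent's choices:

```python
import numpy as np

def full_scale_feature_image(contour, n_scales=None):
    """Toy full-scale-space feature image for a closed contour of n points.

    Three per-point descriptors (stand-ins for the patent's descriptors)
    are evaluated at every integer scale s and stacked row-by-row into
    three grayscale maps, which become the R, G, B channels of one image."""
    n = len(contour)
    n_scales = n_scales or n // 2
    centroid = contour.mean(axis=0)
    maps = np.zeros((3, n_scales, n))
    for s in range(1, n_scales + 1):
        left = np.roll(contour, s, axis=0)    # point i-s along the contour
        right = np.roll(contour, -s, axis=0)  # point i+s along the contour
        l, r = left - contour, right - contour
        chord = np.linalg.norm(right - left, axis=1)                 # descriptor 1
        area = 0.5 * np.abs(l[:, 0] * r[:, 1] - l[:, 1] * r[:, 0])   # descriptor 2
        cdist = np.linalg.norm(contour - centroid, axis=1) * s / n_scales  # descriptor 3
        for k, f in enumerate((chord, area, cdist)):
            maps[k, s - 1] = f
    # normalize each channel to [0, 255] and stack as an RGB image
    rgb = np.zeros((n_scales, n, 3), dtype=np.uint8)
    for k in range(3):
        m = maps[k]
        rgb[..., k] = np.uint8(255 * (m - m.min()) / (m.max() - m.min() + 1e-12))
    return rgb
```

Each row of the resulting image corresponds to one scale, so the network's first stream sees the shape itself while the second sees its behavior across all scales at once.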
IMAGE REGISTRATION METHOD AND ELECTRONIC DEVICE
An image registration method includes: acquiring a target image comprising a target object; inputting the target image to a preset network model, and outputting position information and rotation angle information of the target object; obtaining a reference image comprising the target object by querying a preset image database according to the position information and the rotation angle information; and performing image registration on the target image and the reference image to obtain a corresponding position of the target object of the target image in the reference image.
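A minimal sketch of the database query step above, assuming the database stores each reference image with its recorded position and rotation angle; the weighted cost and its weights are hypothetical:

```python
import numpy as np

def query_reference(db, position, angle, w_pos=1.0, w_ang=0.1):
    """Pick the stored reference entry whose recorded position and
    rotation angle are closest to the network's prediction.
    The weighting between position and angle is illustrative."""
    def cost(entry):
        dp = np.linalg.norm(np.asarray(entry["position"]) - np.asarray(position))
        da = abs((entry["angle"] - angle + 180) % 360 - 180)  # wrap to [-180, 180]
        return w_pos * dp + w_ang * da
    return min(db, key=cost)
```

The selected entry's image would then be registered against the target image in the final step.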
Plant group identification
A farming machine moves through a field and includes an image sensor that captures an image of a plant in the field. A control system accesses the captured image and applies the image to a machine learned plant identification model. The plant identification model identifies pixels representing the plant and categorizes the plant into a plant group (e.g., plant species). The identified pixels are labeled as the plant group and a location of the pixels is determined. The control system actuates a treatment mechanism based on the identified plant group and location. Additionally, the images from the image sensor and the plant identification model may be used to generate a plant identification map. The plant identification map is a map of the field that indicates the locations of the plant groups identified by the plant identification model.
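The final localization-and-actuation step can be sketched as follows; the group names and the treatment lookup table are hypothetical:

```python
import numpy as np

TREATMENTS = {"weed": "spray", "crop": "fertilize"}  # illustrative mapping

def plan_treatment(label_mask, group_name):
    """From the pixels labelled as one plant group, compute the plant's
    location (pixel centroid) and look up the treatment to actuate."""
    ys, xs = np.nonzero(label_mask)
    location = (float(xs.mean()), float(ys.mean()))
    return location, TREATMENTS.get(group_name, "none")
```

Accumulating these (location, group) pairs over the field is essentially what the plant identification map records.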
System for performing convolutional image transformation estimation
A method for training a neural network includes receiving a plurality of images and, for each individual image of the plurality of images, generating a training triplet including a subset of the individual image, a subset of a transformed image, and a homography based on the subset of the individual image and the subset of the transformed image. The method also includes, for each individual image, generating, by the neural network, an estimated homography based on the subset of the individual image and the subset of the transformed image, comparing the estimated homography to the homography, and modifying the neural network based on the comparison.
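The triplet generation is in the spirit of the common four-point corner-perturbation scheme; a sketch under that assumption (patch size and perturbation range are illustrative defaults, and the homography is solved with a plain DLT):

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct Linear Transform: the 3x3 homography mapping four src
    points to four dst points (h33 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def make_training_triplet(image, patch=64, rho=8, seed=0):
    """Crop a patch, perturb its four corners by up to +/-rho pixels,
    and return (patch, matching patch of the transformed image, H)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    x0 = int(rng.integers(rho, w - patch - rho))
    y0 = int(rng.integers(rho, h - patch - rho))
    corners = np.array([[x0, y0], [x0 + patch, y0],
                        [x0 + patch, y0 + patch], [x0, y0 + patch]], float)
    perturbed = corners + rng.integers(-rho, rho + 1, size=(4, 2))
    H = homography_from_points(corners, perturbed)
    # transformed image: sample the source at H-mapped coordinates (nearest neighbour)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = H @ pts
    src /= src[2]
    sx = np.clip(np.round(src[0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(src[1]).astype(int), 0, h - 1)
    warped = image[sy, sx].reshape(h, w)
    return (image[y0:y0 + patch, x0:x0 + patch],
            warped[y0:y0 + patch, x0:x0 + patch], H)
```

The network's estimated homography would then be compared against the returned ground-truth H to drive the weight update.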
METHOD AND APPARATUS FOR GENERATING AN INITIAL SUPERPIXEL LABEL MAP FOR AN IMAGE
A method and an apparatus for generating an initial superpixel label map for a current image from an image sequence are described. The apparatus includes a feature detector that determines features in the current image. A feature tracker then tracks the determined features back into a previous image. Based on the tracked features, a transformer transforms a superpixel label map associated with the previous image into an initial superpixel label map for the current image.
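Assuming the transformer fits a simple global motion model to the tracked feature correspondences (the abstract does not commit to a particular model), the label-map transfer can be sketched as:

```python
import numpy as np

def estimate_affine(prev_pts, cur_pts):
    """Least-squares 2D affine transform from previous-frame feature
    positions to current-frame positions (an illustrative motion model)."""
    n = len(prev_pts)
    A = np.zeros((2 * n, 6)); b = np.zeros(2 * n)
    for i, ((x, y), (u, v)) in enumerate(zip(prev_pts, cur_pts)):
        A[2 * i] = [x, y, 1, 0, 0, 0]; b[2 * i] = u
        A[2 * i + 1] = [0, 0, 0, x, y, 1]; b[2 * i + 1] = v
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([[p[0], p[1], p[2]], [p[3], p[4], p[5]], [0, 0, 1]])

def transfer_labels(prev_labels, T):
    """Warp the previous superpixel label map into the current frame by
    inverse mapping with nearest-neighbour lookup (labels must not be
    interpolated, so nearest neighbour is the natural choice)."""
    h, w = prev_labels.shape
    Tinv = np.linalg.inv(T)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = Tinv @ pts
    sx = np.clip(np.round(src[0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(src[1]).astype(int), 0, h - 1)
    return prev_labels[sy, sx].reshape(h, w)
```

The warped map then seeds the superpixel algorithm for the current image instead of a from-scratch initialization.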
MOVING OBJECT DETECTION METHOD IN DYNAMIC SCENE USING MONOCULAR CAMERA
The present invention relates to a method for detecting moving objects in a dynamic scene using a monocular camera mounted on a moving platform such as a vehicle, and for warning the driver of dangerous situations. Because it requires only the monocular camera, the method can detect moving objects in a dynamic scene without a stereo camera.
METHOD, SYSTEM, AND IMAGE PROCESSING DEVICE FOR CAPTURING AND/OR PROCESSING ELECTROLUMINESCENCE IMAGES, AND AN AERIAL VEHICLE
A method (400) of capturing and processing electroluminescence (EL) images (1910) of a PV array (40) is disclosed herein. In a described embodiment, the method (400) includes: controlling an aerial vehicle (20) to fly along a flight path to capture EL images (1910) of corresponding PV array subsections (512b) of the PV array (40); deriving respective image quality parameters from at least some of the captured EL images; dynamically adjusting a flight speed of the aerial vehicle along the flight path, based on the respective image quality parameters, for capturing the EL images (1910) of the PV array subsections (512b); extracting a plurality of frames (1500) of a PV array subsection (512b) from the EL images (1910); determining, from among the extracted frames (2100), a reference frame having the highest image quality of the PV array subsection (512b); performing image alignment of the extracted frames (2100) to the reference frame to generate image-aligned frames (2130); and processing the image-aligned frames (2130) to produce an enhanced image (2140) of the PV array subsection (512b) having a higher resolution than the reference frame. A system, an image processing device, and an aerial vehicle for the method are also disclosed.
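The speed-adjustment and reference-frame steps can be sketched as follows; the proportional controller form, the gain, and the speed limits are hypothetical, as the abstract only states that speed is adjusted based on image quality:

```python
def adjust_flight_speed(current_speed, quality, target_quality=0.8,
                        gain=0.5, v_min=0.5, v_max=5.0):
    """Slow down when EL image quality falls below the target, speed up
    when quality has headroom, clamped to the vehicle's speed envelope."""
    new_speed = current_speed * (1.0 + gain * (quality - target_quality))
    return max(v_min, min(v_max, new_speed))

def select_reference(frames, quality_fn):
    """Pick the extracted frame with the highest image quality as the
    alignment reference."""
    return max(frames, key=quality_fn)
```

After alignment to the selected reference, the aligned stack would be fused (e.g. by multi-frame super-resolution) into the enhanced image.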
Apparatus for real-time monitoring for construction object and monitoring method and computer program for the same
Disclosed herein is an apparatus for the real-time monitoring of construction objects. The apparatus includes: a communication unit configured to receive image data acquired by photographing a construction site and to transmit safety information to an external device; and a monitoring unit provided with a prediction model pre-trained using binary image sequences of construction objects at the construction site as training data. The monitoring unit is configured to detect a plurality of construction objects from image frames included in the image data received via the communication unit and convert the detected construction objects into binary images, to generate future frames by inputting the resulting binary images to the prediction model, to derive the proximity between the construction objects by comparing the generated future frames with the resulting binary images, and to generate the safety information based on the derived proximity.
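The proximity derivation can be sketched with a simple centroid distance between predicted binary masks; the actual proximity measure is not specified in the abstract, so this is only one plausible choice:

```python
import numpy as np

def centroid(mask):
    """Pixel centroid (x, y) of a binary object mask."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

def proximity_alert(future_mask_a, future_mask_b, threshold):
    """Compare centroids of two predicted binary object masks and flag a
    safety condition when they come closer than a pixel threshold."""
    d = np.linalg.norm(centroid(future_mask_a) - centroid(future_mask_b))
    return d, d < threshold
```

Applying this to the predicted future frames rather than the current ones is what lets the apparatus warn before a collision actually occurs.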
Path planning method and device and mobile device
The present disclosure provides a path planning method and device and a mobile device. The method comprises: collecting environmental information within a viewing angle by a sensor of the mobile device, processing the environmental information by using a SLAM algorithm, and constructing a grid map; dividing the grid map to obtain a plurality of pixel blocks, using the area composed of the pixel blocks not occupied by obstacles as a search area for path planning, and obtaining a processed grid map; determining reference points by using pixel points in the search area, deploying topological points on the processed grid map according to the determined reference points, and constructing a topological map; and calculating an optimal path from a starting point to a preset target point by using a predetermined algorithm according to the constructed topological map. The present disclosure improves path planning efficiency and saves storage resources.
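The block division, topological graph construction, and path search can be sketched as follows; unit edge costs, 4-connectivity, and Dijkstra as the "predetermined algorithm" are simplifying assumptions:

```python
import heapq

def block_graph(grid, block=4):
    """Divide an occupancy grid into block x block pixel blocks; blocks
    containing no obstacle pixels become topological nodes, and adjacent
    free blocks (4-connected) become edges."""
    h, w = grid.shape
    free = set()
    for by in range(h // block):
        for bx in range(w // block):
            cells = grid[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            if not cells.any():          # no obstacle inside the block
                free.add((bx, by))
    edges = {n: [] for n in free}
    for (bx, by) in free:
        for nb in ((bx + 1, by), (bx - 1, by), (bx, by + 1), (bx, by - 1)):
            if nb in free:
                edges[(bx, by)].append(nb)
    return edges

def shortest_path(edges, start, goal):
    """Dijkstra over the topological graph (unit edge cost)."""
    dist = {start: 0.0}; prev = {}; pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in edges[u]:
            nd = d + 1.0
            if nd < dist.get(v, float("inf")):
                dist[v] = nd; prev[v] = u
                heapq.heappush(pq, (nd, v))
    if goal not in dist:
        return None
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```

Because the search runs over blocks rather than individual grid cells, the graph is much smaller than the raw grid map, which is consistent with the stated efficiency and storage gains.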
MIXED REALITY (MR) PROVIDING DEVICE FOR PROVIDING IMMERSIVE MR, AND CONTROL METHOD THEREOF
A mixed reality (MR) providing device is disclosed. The MR providing device includes: a camera; a communication unit comprising circuitry configured to communicate with an electronic device providing video; an optical display unit comprising a display configured to simultaneously display real space within a preset range of viewing angle and a virtual image; and a processor. The processor is configured to: capture the preset range of viewing angle through the camera to acquire an image; identify, in the acquired image, at least one semantic anchor spot at which an object may be positioned; transmit characteristic information of the semantic anchor spot to the electronic device through the communication unit; receive, from the electronic device through the communication unit, an object region that includes the object corresponding to the characteristic information and is included in an image frame of the video; and control the optical display unit to display the received object region on the semantic anchor spot.