Semantic segmentation method and system for high-resolution remote sensing image based on random blocks
11189034 · 2021-11-30
Assignee
Inventors
- Jianwei Yin (Hangzhou, CN)
- Ge Su (Hangzhou, CN)
- Yongheng Shang (Hangzhou, CN)
- Zhengwei Shen (Hangzhou, CN)
CPC classification
G06V10/267
PHYSICS
G06V10/454
PHYSICS
G06F17/18
PHYSICS
G06T7/143
PHYSICS
International classification
G06T7/143
PHYSICS
G06F17/18
PHYSICS
Abstract
A semantic segmentation method and system for a high-resolution remote sensing image based on random blocks. In the semantic segmentation method, the high-resolution remote sensing image is divided into random blocks, and semantic segmentation is performed for each individual random block separately, thus avoiding overflow of GPU memory during semantic segmentation of the high-resolution remote sensing image. In addition, feature data in random blocks neighboring each random block is incorporated into the process of semantic segmentation, overcoming the technical shortcoming that the existing segmentation method for the remote sensing image weakens the correlation within the image. Moreover, in the semantic segmentation method, semantic segmentation is separately performed on mono-spectral feature data in each band of the high-resolution remote sensing image, thus enhancing the accuracy of semantic segmentation of the high-resolution remote sensing image.
Claims
1. A semantic segmentation method for a high-resolution remote sensing image based on random blocks, comprising: partitioning a high-resolution remote sensing image into a plurality of random blocks; extracting mono-spectral feature data in each band from each random block; performing semantic segmentation separately on the mono-spectral feature data in each band of each random block by using a supervised semantic segmentation network and with reference to mono-spectral feature data in each band of random blocks neighboring each random block, to obtain a mono-spectral semantic segmentation probability plot regarding each band of each random block; and fusing mono-spectral semantic segmentation probability plots regarding all bands of each random block by using a trained weight, to obtain a fused semantic segmentation probability plot of each random block.
2. The semantic segmentation method for a high-resolution remote sensing image based on random blocks according to claim 1, wherein the partitioning the high-resolution remote sensing image into random blocks, to obtain a plurality of random blocks specifically comprises: randomly selecting a pixel point d.sub.0 in a central area of the high-resolution remote sensing image; cropping a square from the high-resolution remote sensing image to obtain a random block p.sub.0, wherein the square is centered at the pixel point d.sub.0 and has a randomly generated side length of len(p.sub.0); further cropping squares which are respectively centered at the four vertices d.sub.01, d.sub.02, d.sub.03, and d.sub.04 of the random block p.sub.0 and have randomly generated side lengths len(p.sub.01), len(p.sub.02), len(p.sub.03), len(p.sub.04) from the high-resolution remote sensing image, to generate random blocks p.sub.01, p.sub.02, p.sub.03 and p.sub.04 which neighbor the random block p.sub.0, wherein the side length of each square is in a range of 512≤len(·)≤1024; and repeating the step of “further cropping squares which are respectively centered at the four vertices d.sub.01, d.sub.02, d.sub.03 and d.sub.04 of the random block p.sub.0 and have randomly generated side lengths len(p.sub.01), len(p.sub.02), len(p.sub.03), len(p.sub.04) from the high-resolution remote sensing image, to generate random blocks p.sub.01, p.sub.02, p.sub.03, and p.sub.04 which neighbor the random block p.sub.0”, to continuously generate random blocks neighboring each newly generated random block until these newly generated random blocks all reach edges of the high-resolution remote sensing image.
3. The semantic segmentation method for a high-resolution remote sensing image based on random blocks according to claim 1, wherein the supervised semantic segmentation network comprises an encoder, an RNN network, a decoder, a first supervision module, a second supervision module, and a third supervision module; the encoder, the RNN network, and the decoder are successively connected; and the first supervision module is arranged in the last layer of the encoder, the second supervision module is arranged in the first layer of the decoder, and the third supervision module is arranged in the second layer of the decoder.
4. The semantic segmentation method for a high-resolution remote sensing image based on random blocks according to claim 3, wherein the performing semantic segmentation separately on the mono-spectral feature data in each band of each random block by using a supervised semantic segmentation network and with reference to mono-spectral feature data in each band of random blocks neighboring each random block, to obtain a mono-spectral semantic segmentation probability plot regarding each band of each random block specifically comprises: extracting abstract features from mono-spectral feature data in the jth bands of the ith random block p.sub.i and a random block p.sub.im neighboring the ith random block p.sub.i by using the encoder and according to a formula
5. The semantic segmentation method for a high-resolution remote sensing image based on random blocks according to claim 1, wherein before the step of fusing mono-spectral semantic segmentation probability plots regarding all bands of each random block by using a trained weight, to obtain a fused semantic segmentation probability plot of each random block, the method further comprises: constructing a weight training network for the plurality of mono-spectral semantic segmentation probability plots, wherein the weight training network comprises a plurality of parallel supervised semantic segmentation networks and a convolution fusion module; and based on the multiple pieces of mono-spectral feature data of the random block, performing weight training on the mono-spectral semantic segmentation probability plot regarding each band by using the weight training network, to obtain a trained weight.
6. A semantic segmentation system for a high-resolution remote sensing image based on random blocks, comprising: an image partitioning module, configured to partition a high-resolution remote sensing image into a plurality of random blocks; a mono-spectral feature data extraction module, configured to extract mono-spectral feature data in each band from each random block; a semantic segmentation module, configured to: perform semantic segmentation separately on the mono-spectral feature data in each band of each random block by using a supervised semantic segmentation network and with reference to mono-spectral feature data in each band of random blocks neighboring each random block, to obtain a mono-spectral semantic segmentation probability plot regarding each band of each random block; and a fusion module, configured to fuse mono-spectral semantic segmentation probability plots regarding all bands of each random block by using a trained weight, to obtain a fused semantic segmentation probability plot of each random block.
7. The semantic segmentation system for a high-resolution remote sensing image based on random blocks according to claim 6, wherein the image partitioning module specifically comprises: a pixel point selection submodule, configured to randomly select a pixel point d.sub.0 in a central area of the high-resolution remote sensing image; a first image partitioning submodule, configured to crop a square from the high-resolution remote sensing image to obtain a random block p.sub.0, wherein the square is centered at the pixel point d.sub.0 and has a randomly generated side length of len(p.sub.0); a second image partitioning submodule, configured to further crop squares which are respectively centered at the four vertices d.sub.01, d.sub.02, d.sub.03 and d.sub.04 of the random block p.sub.0 and have randomly generated side lengths len(p.sub.01), len(p.sub.02), len(p.sub.03), len(p.sub.04) from the high-resolution remote sensing image, to generate random blocks p.sub.01, p.sub.02, p.sub.03, and p.sub.04 which neighbor the random block p.sub.0, wherein the side length of each square is in a range of 512≤len(·)≤1024; and a third image partitioning submodule, configured to repeat the step of “further cropping squares which are respectively centered at the four vertices d.sub.01, d.sub.02, d.sub.03 and d.sub.04 of the random block p.sub.0 and have randomly generated side lengths len(p.sub.01), len(p.sub.02), len(p.sub.03), len(p.sub.04) from the high-resolution remote sensing image, to generate random blocks p.sub.01, p.sub.02, p.sub.03, and p.sub.04 which neighbor the random block p.sub.0”, to continuously generate random blocks neighboring each newly generated random block until these newly generated random blocks all reach edges of the high-resolution remote sensing image.
8. The semantic segmentation system for a high-resolution remote sensing image based on random blocks according to claim 6, wherein the supervised semantic segmentation network comprises an encoder, an RNN network, a decoder, a first supervision module, a second supervision module, and a third supervision module; the encoder, the RNN network, and the decoder are successively connected; and the first supervision module is arranged in the last layer of the encoder, the second supervision module is arranged in the first layer of the decoder, and the third supervision module is arranged in the second layer of the decoder.
9. The semantic segmentation system for a high-resolution remote sensing image based on random blocks according to claim 8, wherein the semantic segmentation module specifically comprises: an encoding submodule, configured to extract abstract features from mono-spectral feature data in the jth bands of the ith random block p.sub.i and a random block p.sub.im neighboring the ith random block p.sub.i by using the encoder and according to a formula
10. The semantic segmentation system for a high-resolution remote sensing image based on random blocks according to claim 6, further comprising: a weight training network construction module, configured to construct a weight training network for the plurality of mono-spectral semantic segmentation probability plots, wherein the weight training network comprises a plurality of parallel supervised semantic segmentation networks and a convolution fusion module; and a weight training module, configured to: based on the multiple pieces of mono-spectral feature data of the random block, perform weight training on the mono-spectral semantic segmentation probability plot regarding each band by using the weight training network, to obtain the trained weight.
Description
BRIEF DESCRIPTION OF DRAWINGS
(1) To describe the technical solutions in the embodiments of the present disclosure or in the related art more clearly, the following briefly describes the accompanying drawings required for the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments of the present disclosure, and those of ordinary skill in the art may still derive other accompanying drawings from these accompanying drawings without creative efforts.
DETAILED DESCRIPTION
(6) Certain embodiments herein provide a semantic segmentation method and system for a high-resolution remote sensing image based on random blocks, so as to overcome the technical shortcoming that the existing semantic segmentation method for the high-resolution remote sensing image causes the overflow of GPU memory and is unable to identify objects with the same or similar colors, thus improving the accuracy of semantic segmentation of the high-resolution remote sensing image.
(7) To make the foregoing objective, features, and advantages of the present disclosure clearer and more comprehensible, the present disclosure is further described in detail below with reference to the accompanying drawings and specific embodiments.
(8) To achieve the above objective, the present disclosure provides the following solutions: Generally, a high-resolution remote sensing image covers a very wide geographic area and has a huge data amount which may be gigabyte-sized. In addition, the high-resolution remote sensing image usually contains four or more spectral bands, among which a blue band from 0.45-0.52 μm, a green band from 0.52-0.60 μm, a red band from 0.62-0.69 μm, and a near-infrared band from 0.76-0.96 μm are the most common spectral bands. However, the existing semantic segmentation networks seldom consider the effects of the different bands on semantic segmentation. In addition, limited by the receptive field, most convolutional neural networks (CNNs) for semantic segmentation can only acquire limited context information, easily resulting in divergence in classification of visually similar pixels. Therefore, certain embodiments herein focus on the effects of different spectral bands on semantic segmentation, and employ a recurrent neural network (RNN) to enhance dependency between pixels.
(9) As shown in the accompanying drawing, the semantic segmentation method includes the following steps.
(10) Step 101: Partition a high-resolution remote sensing image into a plurality of random blocks.
(11) The step of partitioning a high-resolution remote sensing image into a plurality of random blocks specifically includes: randomly selecting a pixel point d.sub.0 in a central area of the high-resolution remote sensing image; cropping a square from the high-resolution remote sensing image to obtain a random block p.sub.0, where the square is centered at the pixel point d.sub.0 and has a randomly generated side length of len(p.sub.0); further cropping squares which are respectively centered at the four vertices d.sub.01, d.sub.02, d.sub.03 and d.sub.04 of the random block p.sub.0 and have randomly generated side lengths len(p.sub.01), len(p.sub.02), len(p.sub.03), len(p.sub.04) from the high-resolution remote sensing image, to generate random blocks p.sub.01, p.sub.02, p.sub.03, and p.sub.04 which neighbor the random block p.sub.0, where the side length of each square is in a range of 512≤len(·)≤1024; and repeating the step of “further cropping squares which are respectively centered at the four vertices d.sub.01, d.sub.02, d.sub.03 and d.sub.04 of the random block p.sub.0 and have randomly generated side lengths len(p.sub.01), len(p.sub.02), len(p.sub.03), len(p.sub.04) from the high-resolution remote sensing image, to generate random blocks p.sub.01, p.sub.02, p.sub.03, and p.sub.04 which neighbor the random block p.sub.0”, to continuously generate random blocks neighboring each newly generated random block until these newly generated random blocks all reach edges of the high-resolution remote sensing image.
(12) Specifically, as shown in the accompanying drawing, the height and width of the high-resolution remote sensing image are denoted as H and W respectively. First, a pixel point d.sub.0 is randomly selected from the high-resolution remote sensing image, and the position of d.sub.0 may be denoted as a vector (x.sub.0, y.sub.0). A square is randomly cropped from the image at the point d.sub.0, to generate a random block p.sub.0. The side length of the random block p.sub.0 is denoted as len(p.sub.0). The four vertices of the block p.sub.0 from the upper left corner to the lower right corner in a clockwise direction are d.sub.01, d.sub.02, d.sub.03, and d.sub.04 respectively:
(13) d.sub.01=(x.sub.0−len(p.sub.0)/2, y.sub.0−len(p.sub.0)/2), d.sub.02=(x.sub.0+len(p.sub.0)/2, y.sub.0−len(p.sub.0)/2), d.sub.03=(x.sub.0+len(p.sub.0)/2, y.sub.0+len(p.sub.0)/2), d.sub.04=(x.sub.0−len(p.sub.0)/2, y.sub.0+len(p.sub.0)/2)
(14) To realize proliferation of random blocks from p.sub.0, four square images (generated based on the same rule as the random block p.sub.0) respectively centered at the four vertices d.sub.01, d.sub.02, d.sub.03, and d.sub.04 of the block p.sub.0 are randomly captured, to generate new random blocks p.sub.i, i=1, 2, 3, 4. Likewise, the four vertices of each newly generated random block are named d.sub.i1, d.sub.i2, d.sub.i3, d.sub.i4, i=1, 2, 3, 4. The foregoing process is repeated t times, till these captured random blocks p.sub.i reach the edges of the image (if one of the random blocks reaches an edge of the image, proliferation based on this random block is stopped), thus guaranteeing that the random blocks are spread all over the whole high-resolution remote sensing image.
(15) After t proliferations (t is an integer), the total number of the random blocks reaches num(t), which is calculated as follows:
(16) num(t)=1+4+4.sup.2+ . . . +4.sup.t=(4.sup.t+1−1)/3
(17) In order that a combination of all the random blocks can cover all pixels of the remote sensing image, the side length of each random block is limited as follows:
(18) The side length of each square is limited in a range of 512≤len(·)≤1024.
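The partitioning and proliferation procedure above can be sketched in Python as follows. This is a minimal illustration under stated assumptions: the function names, the stack-based proliferation order, and the max_blocks safeguard are our own additions (the description only fixes the side-length range, the vertex-spawning rule, and the stop-at-edge condition), and coordinates are clipped rather than allowed to leave the image.

```python
import random

def crop_block(cx, cy, side, H, W):
    """Clip a square of the given side centered at (cx, cy) to the image,
    returning the block as (x0, y0, x1, y1)."""
    half = side // 2
    return (max(cx - half, 0), max(cy - half, 0),
            min(cx + half, W), min(cy + half, H))

def partition_random_blocks(H, W, min_len=512, max_len=1024,
                            max_blocks=4096, seed=0):
    """Sketch of step 101: proliferate random blocks from a pixel in the
    central area until newly generated blocks reach the image edges.
    max_blocks is a safeguard not present in the original description."""
    rng = random.Random(seed)
    # randomly select a pixel point d0 in the central area of the image
    cx = rng.randint(W // 4, 3 * W // 4)
    cy = rng.randint(H // 4, 3 * H // 4)
    stack, blocks = [(cx, cy)], []
    while stack and len(blocks) < max_blocks:
        x, y = stack.pop()
        x0, y0, x1, y1 = crop_block(x, y, rng.randint(min_len, max_len), H, W)
        blocks.append((x0, y0, x1, y1))
        # a block that reaches an image edge stops proliferating
        if x0 > 0 and y0 > 0 and x1 < W and y1 < H:
            # otherwise spawn new blocks centered at its four vertices
            stack.extend([(x0, y0), (x1, y0), (x1, y1), (x0, y1)])
    return blocks
```

Because each square is centered on a vertex of its parent, neighboring blocks overlap by construction, which is what step 103 later exploits for neighborhood association.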
(19) Step 102: Extract mono-spectral feature data in each band from each random block. Each random block and its neighboring random blocks are composed of multiple bands, and specific ground objects have different sensitivities to different bands; therefore, extraction is performed for each band separately, to acquire multiple pieces of mono-spectral feature data from each random block and from its neighboring random blocks. Generally, a remote sensing image is composed of four spectral bands, which are a blue band from 0.45 μm to 0.52 μm, a green band from 0.52 μm to 0.60 μm, a red band from 0.62 μm to 0.69 μm, and a near-infrared band from 0.76 μm to 0.96 μm. The remote sensing image is usually represented by a computer as four-channel data, and these bands may be read directly by using the Geospatial Data Abstraction Library (GDAL) in Python.
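Step 102 can be sketched as follows. With GDAL one would open the raster and call `dataset.GetRasterBand(j).ReadAsArray()` once per band; since a GDAL dataset is not available in a self-contained snippet, the sketch below splits an in-memory (H, W, bands) array instead, and the function name and dummy data are our own.

```python
import numpy as np

def extract_mono_spectral(block):
    """Split a (H, W, bands) random-block array into per-band 2-D arrays.
    With GDAL: [ds.GetRasterBand(j).ReadAsArray() for j in range(1, n+1)]."""
    return [block[:, :, j] for j in range(block.shape[2])]

# a dummy 4-band block: blue, green, red, near-infrared
block = np.random.rand(64, 64, 4).astype(np.float32)
bands = extract_mono_spectral(block)
```

Each element of `bands` is the mono-spectral feature data for one band, ready to be fed to its own semantic segmentation subnetwork.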
(20) Step 103: Perform semantic segmentation separately on the mono-spectral feature data in each band of each random block by using a supervised semantic segmentation network and with reference to mono-spectral feature data in each band of random blocks neighboring each random block, to obtain a mono-spectral semantic segmentation probability plot regarding each band of each random block.
(21) As shown in the accompanying drawing, h.sub.F.sub.im denotes the hidden state of the abstract feature F.sub.im at a hidden layer in the RNN network, h.sub.F.sub.i denotes the hidden state of the abstract feature F.sub.i at the hidden layer in the RNN network, and y.sub.F.sub.i denotes the output of the RNN network for the random block p.sub.i. F.sub.i denotes an advanced abstract feature generated after a random block p.sub.i is processed by an encoder En(·), and F.sub.im denotes an advanced abstract feature generated after one neighboring random block p.sub.im of the random block p.sub.i is processed by the encoder En(·), where m is a subscript.
(22) The step 103 of performing semantic segmentation separately on the mono-spectral feature data in each band of each random block by using a supervised semantic segmentation network and with reference to mono-spectral feature data in each band of random blocks neighboring each random block, to obtain a mono-spectral semantic segmentation probability plot regarding each band of each random block specifically includes the following process:
(23) Abstract features are extracted from mono-spectral feature data in the jth bands of the ith random block p.sub.i and a random block p.sub.im neighboring the ith random block p.sub.i by using the encoder and according to a formula
(24) F.sub.i.sup.j=En(p.sub.i.sup.j), F.sub.im.sup.j=En(p.sub.im.sup.j), m=1, 2, 3, 4,
to obtain abstract feature maps regarding the jth bands of the ith random block p.sub.i and the random block p.sub.im neighboring the ith random block p.sub.i, where F.sub.i.sup.j denotes an abstract feature map regarding the jth band of the random block p.sub.i, En(·) denotes the encoder, and F.sub.im.sup.j denotes an abstract feature map regarding the jth band of the mth random block p.sub.im neighboring the random block p.sub.i. Specifically, by using the random block p.sub.i as an image unit, the neighborhood of the random block p.sub.i covers four random blocks randomly captured at the four vertices d.sub.i1, d.sub.i2, d.sub.i3, d.sub.i4 of p.sub.i, which are denoted as p.sub.i1, p.sub.i2, p.sub.i3, p.sub.i4 herein for convenience. The four random blocks are nearest to the random block p.sub.i, and there are overlapping image regions; therefore, they are highly correlated in content. Based on this dependence relationship between images, the semantic segmentation subnetworks can output semantic segmentation probability plots with the same size as an input image, for ease of fusion.
(25) To achieve a semantic segmentation function, certain embodiments herein use a typical framework U-Net for semantic segmentation. First, advanced abstract features are extracted from the image by using the encoder.
(26) Afterwards, F.sub.im.sup.j, (m=1, 2, 3, 4) and F.sub.i.sup.j are sequentially input into the RNN network, to establish a dependence relationship between the four neighboring random blocks and the random block p.sub.i. Based on the abstract feature maps regarding the jth bands of the ith random block p.sub.i and the random block p.sub.im neighboring the ith random block p.sub.i, neighborhood association is established between abstract feature maps regarding the jth bands of the ith random block p.sub.i and four random blocks neighboring the ith random block p.sub.i via the RNN network and by using the formula
(27) h.sub.F.sub.im.sup.j=tanh(W.sub.x F.sub.im.sup.j+W.sub.h h.sub.F.sub.i(m−1).sup.j+b), m=1, 2, 3, 4; h.sub.F.sub.i.sup.j=tanh(W.sub.x F.sub.i.sup.j+W.sub.h h.sub.F.sub.i4.sup.j+b),
to obtain abstract features of the jth band of the ith random block p.sub.i after the neighborhood association, where h.sub.F.sub.im.sup.j and h.sub.F.sub.i.sup.j denote the hidden states of the abstract features F.sub.im.sup.j and F.sub.i.sup.j at the hidden layer of the RNN network (with the initial hidden state set to zero), W.sub.x and W.sub.h denote weight matrices, and b denotes a bias term.
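The neighborhood association can be illustrated with a plain tanh recurrence over the four neighbor features followed by the block's own feature, so that the final hidden state depends on all five maps. This is an illustrative sketch only: the function name, dimensions, and the specific tanh cell are our own assumptions, not the exact recurrent cell of the patented network, and feature maps are flattened to vectors for brevity.

```python
import numpy as np

def rnn_neighborhood(features, Wx, Wh, b):
    """Run a plain tanh RNN over [F_i1, F_i2, F_i3, F_i4, F_i] so the final
    hidden state carries context from the four neighboring random blocks."""
    h = np.zeros(Wh.shape[0])       # initial hidden state set to zero
    for f in features:              # neighbors first, the block itself last
        h = np.tanh(Wx @ f + Wh @ h + b)
    return h

d, hdim = 8, 16                     # toy feature and hidden dimensions
rng = np.random.default_rng(0)
Wx = rng.standard_normal((hdim, d))
Wh = rng.standard_normal((hdim, hdim))
b = np.zeros(hdim)
feats = [rng.standard_normal(d) for _ in range(5)]   # F_i1..F_i4, F_i
h = rnn_neighborhood(feats, Wx, Wh, b)
```

In the actual network the inputs are the encoder outputs F.sub.im.sup.j and F.sub.i.sup.j, and the final hidden state is passed on to the decoder.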
(28) Feature data output by the encoder, feature data output by the RNN network, and feature data output by the decoder are supervised with the first, second, and third supervision modules respectively. Specifically, in order to improve the performance of semantic segmentation, classification is made pixel by pixel by a convolutional layer, and upsampling is then performed by means of a bilinear interpolation to restore the image to its original size; this is done respectively in the last layer of the encoder, in the first layer of the decoder, and in the second layer of the decoder. Finally, a cross-entropy loss function is used to evaluate the performance of the encoder, the RNN network, and the decoder, thus supervising the network from these three aspects. The calculation equations are as follows:
y.sub.pre=bi(conv1(F))
ℒ=−Σ[y.sub.true log y.sub.pre+(1−y.sub.true)log(1−y.sub.pre)]
(29) where y.sub.pre denotes a predicted probability, which is a semantic segmentation probability plot, obtained after the output features F of a supervised layer are processed by the convolutional layer and a bilinear interpolation layer; conv1(·) denotes a convolution for classification; bi(·) denotes a bilinear interpolation operation; and ℒ denotes the loss difference, calculated by using the cross-entropy loss function, between the predicted probability y.sub.pre and a true label y.sub.true.
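The cross-entropy term used by each supervision module can be written directly. The sketch below is a minimal NumPy version for the binary case; the function name and the eps clipping guard (which avoids log(0)) are our own additions.

```python
import numpy as np

def supervision_loss(y_pre, y_true, eps=1e-7):
    """Cross-entropy between a predicted probability plot y_pre and the true
    label map y_true: -sum[y*log(p) + (1-y)*log(1-p)]."""
    y_pre = np.clip(y_pre, eps, 1.0 - eps)  # guard against log(0)
    return float(-np.sum(y_true * np.log(y_pre)
                         + (1.0 - y_true) * np.log(1.0 - y_pre)))
```

A confident correct prediction gives a small loss, a confident wrong one a large loss, which is what lets the three modules supervise the encoder, RNN, and decoder stages separately.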
(30) A weight training network is constructed for the plurality of mono-spectral semantic segmentation probability plots. As shown in the accompanying drawing, the weight training network includes a plurality of parallel supervised semantic segmentation networks and a convolution fusion module.
(31) Spectral images in different bands show different sensitivities to different ground objects, and therefore weight training can be performed according to an identified target. Specifically, based on the multiple pieces of mono-spectral feature data of the random block, weight training is performed on the mono-spectral semantic segmentation probability plot regarding each band by using the weight training network, to obtain a trained weight. By continuously inputting mono-spectral feature data of new random blocks and their neighboring random blocks, outputs from an input layer to the hidden layer and from the hidden layer to an output layer are calculated by means of forward propagation, and the network is optimized by means of back propagation, such that weight parameters in the weight training network are continuously updated till convergence.
(32) Step 104: Fuse mono-spectral semantic segmentation probability plots regarding all bands of each random block by using a trained weight, to obtain a fused semantic segmentation probability plot of each random block.
(33) After the mono-spectral feature data in each band of each random block is processed by the semantic segmentation subnetworks, a semantic segmentation probability plot p.sub.i.sup.j is generated. These semantic segmentation probability plots are fused to obtain a fused semantic segmentation probability plot, which may be specifically expressed as follows:
out=conv2(p.sub.i.sup.1, . . . , p.sub.i.sup.j), j=1, 2, . . . , max(j)
(34) where out denotes the fused semantic segmentation probability plot, conv2 denotes an operation of using a convolutional layer for spectral fusion, and max (j) denotes the number of bands contained in the high-resolution remote sensing image.
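Because conv2 is a 1×1 convolution over max(j) input channels, fusing the per-band probability plots reduces to a per-pixel weighted sum. The sketch below shows this; the function name is our own, and the final sigmoid squashing is an assumption for a binary (e.g. road / non-road) output, where the source only specifies a convolutional fusion.

```python
import numpy as np

def fuse_bands(prob_plots, weights, bias=0.0):
    """Fuse per-band semantic segmentation probability plots with trained
    weights; a 1x1 convolution across max(j) channels reduces to this."""
    stacked = np.stack(prob_plots, axis=-1)        # (H, W, max_j)
    logits = stacked @ np.asarray(weights) + bias  # per-pixel weighted sum
    return 1.0 / (1.0 + np.exp(-logits))           # squash to a probability

# four constant per-band plots standing in for p_i^1 .. p_i^4
plots = [np.full((4, 4), p) for p in (0.2, 0.4, 0.6, 0.8)]
fused = fuse_bands(plots, weights=[0.25, 0.25, 0.25, 0.25])
```

The `weights` here correspond to the trained fusion weights of step 104; during training they are updated by back propagation as described in paragraph (31).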
(35) Road information, bridge information, and the like to be detected are acquired according to the fused semantic segmentation probability plot from the high-resolution remote sensing image.
(36) Certain embodiments herein further provide a semantic segmentation system for a high-resolution remote sensing image based on random blocks, where the semantic segmentation system includes the following modules:
(37) An image partitioning module is configured to partition a high-resolution remote sensing image into a plurality of random blocks.
(38) The image partitioning module specifically includes: a pixel point selection submodule, configured to randomly select a pixel point d.sub.0 in a central area of the high-resolution remote sensing image; a first image partitioning submodule, configured to crop a square from the high-resolution remote sensing image to obtain a random block p.sub.0, where the square is centered at the pixel point d.sub.0 and has a randomly generated side length of len(p.sub.0); a second image partitioning submodule, configured to further crop squares which are respectively centered at the four vertices d.sub.01, d.sub.02, d.sub.03 and d.sub.04 of the random block p.sub.0 and have randomly generated side lengths len(p.sub.01), len(p.sub.02), len(p.sub.03), len(p.sub.04) from the high-resolution remote sensing image, to generate random blocks p.sub.01, p.sub.02, p.sub.03, and p.sub.04 which neighbor the random block p.sub.0, where the side length of each square is in a range of 512≤len(·)≤1024; and a third image partitioning submodule, configured to repeat the step of “further cropping squares which are respectively centered at the four vertices d.sub.01, d.sub.02, d.sub.03 and d.sub.04 of the random block p.sub.0 and have randomly generated side lengths len(p.sub.01), len(p.sub.02), len(p.sub.03), len(p.sub.04) from the high-resolution remote sensing image, to generate random blocks p.sub.01, p.sub.02, p.sub.03, and p.sub.04 which neighbor the random block p.sub.0”, to continuously generate random blocks neighboring each newly generated random block until these newly generated random blocks all reach edges of the high-resolution remote sensing image.
(39) A mono-spectral feature data extraction module is configured to extract mono-spectral feature data in each band from each random block.
(40) A semantic segmentation module is configured to: perform semantic segmentation separately on the mono-spectral feature data in each band of each random block by using a supervised semantic segmentation network and with reference to mono-spectral feature data in each band of random blocks neighboring each random block, to obtain a mono-spectral semantic segmentation probability plot regarding each band of each random block.
(41) The supervised semantic segmentation network includes an encoder, an RNN network, a decoder, a first supervision module, a second supervision module, and a third supervision module. The encoder, the RNN network, and the decoder are successively connected. The first supervision module is arranged in the last layer of the encoder, the second supervision module is arranged in the first layer of the decoder, and the third supervision module is arranged in the second layer of the decoder.
(42) The semantic segmentation module specifically includes: an encoding submodule, configured to extract abstract features from mono-spectral feature data in the jth bands of the ith random block p.sub.i and a random block p.sub.im neighboring the ith random block p.sub.i by using the encoder and according to a formula
(43) F.sub.i.sup.j=En(p.sub.i.sup.j), F.sub.im.sup.j=En(p.sub.im.sup.j), m=1, 2, 3, 4,
to obtain abstract feature maps regarding the jth bands of the ith random block p.sub.i and the random block p.sub.im neighboring the ith random block p.sub.i, where F.sub.i.sup.j denotes an abstract feature map regarding the jth band of the random block p.sub.i, En(·) denotes the encoder, and F.sub.im.sup.j denotes an abstract feature map regarding the jth band of the mth random block p.sub.im neighboring the random block p.sub.i; a neighborhood feature association submodule, configured to: based on the abstract feature maps regarding the jth bands of the ith random block p.sub.i and the random block p.sub.im neighboring the ith random block p.sub.i, establish neighborhood association between abstract feature maps regarding the jth bands of the ith random block p.sub.i and four random blocks neighboring the ith random block p.sub.i via the RNN network and by using the formula
(44) h.sub.F.sub.im.sup.j=tanh(W.sub.x F.sub.im.sup.j+W.sub.h h.sub.F.sub.i(m−1).sup.j+b), m=1, 2, 3, 4; h.sub.F.sub.i.sup.j=tanh(W.sub.x F.sub.i.sup.j+W.sub.h h.sub.F.sub.i4.sup.j+b),
to obtain abstract features of the jth band of the ith random block p.sub.i after the neighborhood association, where h.sub.F.sub.im.sup.j and h.sub.F.sub.i.sup.j denote the hidden states of the abstract features F.sub.im.sup.j and F.sub.i.sup.j at the hidden layer of the RNN network, W.sub.x and W.sub.h denote weight matrices, and b denotes a bias term.
(45) A fusion module is configured to fuse mono-spectral semantic segmentation probability plots regarding all bands of each random block by using a trained weight, to obtain a fused semantic segmentation probability plot of each random block.
(46) The semantic segmentation system further includes: a weight training network construction module, configured to construct a weight training network for the plurality of mono-spectral semantic segmentation probability plots, where the weight training network includes a plurality of parallel supervised semantic segmentation networks and a convolution fusion module; and a weight training module, configured to: based on the multiple pieces of mono-spectral feature data of the random block, perform weight training on the mono-spectral semantic segmentation probability plot regarding each band by using the weight training network, to obtain the trained weight.
(47) The technical solutions of certain embodiments herein achieve the following advantages: Because a high-resolution remote sensing image is multispectral and has a large data amount, certain embodiments herein partition the remote sensing image into a plurality of small sections by means of random blocks, which further achieves data enhancement. Moreover, a remote sensing image in different spectral bands shows different sensitivities to different ground objects, and therefore the convolutional layer used in certain embodiments herein is equivalent to subjecting the predicted images of the different bands to weighted summation. Certain embodiments herein divide the high-resolution remote sensing image into random blocks and perform semantic segmentation for each individual random block separately, thus avoiding overflow of GPU memory during semantic segmentation of the high-resolution remote sensing image. In addition, certain embodiments herein incorporate feature data in random blocks neighboring each random block in the process of semantic segmentation, overcoming the technical shortcoming that the existing segmentation method for the remote sensing image weakens the correlation within the image. Moreover, certain embodiments herein perform semantic segmentation separately on mono-spectral feature data in each band of the high-resolution remote sensing image, so that objects with the same or similar colors can be accurately identified according to the characteristic that different ground objects have different sensitivities to light in different spectral bands, thus enhancing the accuracy of semantic segmentation of the high-resolution remote sensing image.
(48) Each embodiment of the present specification is described in a progressive manner, each embodiment focuses on the difference from other embodiments, and identical or similar parts of the embodiments may be obtained with reference to each other.
(49) The principles and implementations of the present disclosure have been described with reference to specific embodiments. The description of the above examples is only for facilitating understanding of the method and the core idea of the present disclosure, and the described embodiments are only a part of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without departing from the inventive scope are the scope of the present disclosure.