VIDEO MONITORING APPARATUS, METHOD OF CONTROLLING THE SAME, COMPUTER-READABLE STORAGE MEDIUM, AND VIDEO MONITORING SYSTEM
20210271896 · 2021-09-02
Inventors
CPC classification
H04N7/181
ELECTRICITY
G06V20/52
PHYSICS
International classification
Abstract
According to the present invention, switching of monitoring images matching the intention of the observer can be performed automatically for images from a plurality of image capturing apparatuses, reducing the workload of the observer. The video monitoring apparatus includes an estimating unit configured to estimate attention degrees of a user for a plurality of images acquired from the plurality of image capturing apparatuses, a designating unit configured to designate one of the acquired images as an image to be displayed in accordance with an instruction from the user, a learning unit configured to train the estimating unit so as to increase an attention degree of the designated image, and a selecting unit configured to select one of the plurality of images based on the estimated attention degree of each image.
Claims
1. A video monitoring apparatus comprising: an acquiring unit configured to acquire a video containing a plurality of images; a designation unit configured to designate, in accordance with an instruction from a user, one of the plurality of images as an image to be displayed; a learning unit configured to train a trained model so that a degree of attention of the designated image becomes higher than that of an image not designated; an estimation unit configured to estimate a degree of attention for each of the plurality of images by using the trained model; and a selection unit configured to select, based on the degree of attention estimated for each image, an image whose degree of attention is larger than that of the other images among the plurality of images.
2. The apparatus according to claim 1, wherein the trained model estimates the degree of attention based on a set parameter, and wherein the set parameter is updated so that a degree of attention of the designated image becomes higher than that of the image not designated.
3. The apparatus according to claim 1, wherein the degree of attention is estimated based on a set parameter, and the set parameter is updated so as to increase the degree of attention for the designated image.
4. The apparatus according to claim 3, wherein the learning unit updates the set parameter until the number of times the parameter has been updated reaches a preset number.
5. The apparatus according to claim 1, wherein the trained model is updated until the user inputs a predetermined instruction.
6. The apparatus according to claim 1, wherein the trained model is trained based on the time of an image selection operation by the user.
7. The apparatus according to claim 1, wherein an image is divided into a plurality of areas, and degrees of attention estimated for the respective divided areas are integrated.
8. The apparatus according to claim 7, wherein the degrees of attention for the areas are integrated into the highest degree of attention among the areas.
9. The apparatus according to claim 7, wherein the degrees of attention for the areas are integrated into an average of the degrees of attention for the areas.
10. The apparatus according to claim 1, wherein the degree of attention estimated for an image and an identifier of the video monitoring camera that captured the image are included in learning data for the trained model.
11. The apparatus according to claim 1, wherein the trained model is updated when images have been acquired a predetermined number of times.
12. The apparatus according to claim 1, wherein the degree of attention is estimated from a time-space image obtained by coupling a plurality of image frames.
13. The apparatus according to claim 1, wherein the one of the plurality of images is displayed on a main screen while at least one of the plurality of images other than the image displayed on the main screen is displayed on a sub screen.
14. The apparatus according to claim 1, wherein the plurality of images are acquired from a plurality of image capturing apparatuses.
15. The apparatus according to claim 1, wherein the plurality of images are acquired from a storage device storing a plurality of images captured previously.
16. A method of controlling a video monitoring apparatus, the method comprising: acquiring a video containing a plurality of images; designating, in accordance with an instruction from a user, one of the plurality of images as an image to be displayed; training a trained model so that a degree of attention of the designated image becomes higher than that of an image not designated; estimating a degree of attention for each of the plurality of images by using the trained model; and selecting, based on the degree of attention estimated for each image, an image whose degree of attention is larger than that of the other images among the plurality of images.
17. A non-transitory computer-readable storage medium storing a program which, when executed by a computer, causes the computer to execute steps of a method of controlling a video monitoring apparatus, the method comprising: acquiring a video containing a plurality of images; designating, in accordance with an instruction from a user, one of the plurality of images as an image to be displayed; training a trained model so that a degree of attention of the designated image becomes higher than that of an image not designated; estimating a degree of attention for each of the plurality of images by using the trained model; and selecting, based on the degree of attention estimated for each image, an image whose degree of attention is larger than that of the other images among the plurality of images.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DESCRIPTION OF THE EMBODIMENTS
[0022] An embodiment according to the present invention will be described with reference to the accompanying drawings.
[0024] The display unit 300 is formed from a liquid crystal panel, an external monitor, or the like, and outputs images captured by the cameras together with various kinds of information. The screen switching operation unit 400 is formed from a mouse, a keyboard, a touch panel device, or buttons, and accepts a screen switching operation for the videos captured by the plurality of cameras. Units 110 to 160 are implemented by an arithmetic processing apparatus formed from a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and memories. These components execute operation programs, to be described later, to implement the present invention. The respective processing units can communicate with each other and are connected via a bus or the like.
[0025] The image acquisition unit 110 acquires image data captured by the cameras 200-1 to 200-N. The image data is a still image or time-series image data. When the cameras 200-1 to 200-N are installed on the network, the correspondence between each image and each camera can be specified from the name or address (for example, an IP address) of each camera on the network. In this embodiment, information representing correspondence between each image and each camera is represented by a camera identification number.
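The correspondence between each image and each camera can be illustrated with a minimal sketch. The function name and the sequential numbering scheme below are assumptions for illustration; the embodiment only requires that each camera have a unique identification number.

```python
# Illustrative sketch (not part of the disclosed embodiment): derive a
# camera identification number for each camera from its network address.
# Sequential numbering over sorted addresses is an assumed scheme.

def build_camera_id_table(camera_addresses):
    """Assign a camera identification number (1..N) to each address."""
    return {addr: idx + 1 for idx, addr in enumerate(sorted(camera_addresses))}

# Each image acquired from a camera can then be tagged with
# id_table[source_address] to record which camera captured it.
id_table = build_camera_id_table(["192.168.0.12", "192.168.0.11", "192.168.0.13"])
```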
[0026] The display unit 300 displays an image acquired by the image acquisition unit 110. The attention degree estimating unit 120 estimates, for each image acquired by the image acquisition unit 110, an index value (to be referred to as an attention degree hereinafter) representing the attention degree of a user (observer), using the parameter stored in the estimation parameter storing unit 130. As a matter of course, a screen (image) selected by the user's screen switching operation has a higher attention degree than an unselected image.
[0027] The screen switching control unit 140 automatically switches the screens displayed on the display unit 300 in accordance with the attention degrees estimated by the attention degree estimating unit 120 for the images of the plurality of cameras. The observer can also manually switch the screens displayed on the display unit 300 via the screen switching operation unit 400. Switching by the screen switching operation unit 400 has a higher priority than the operation of the screen switching control unit 140. The operation information acquisition unit 150 acquires operation information of the screen switching operation unit 400. The learning unit 160 learns a parameter from the attention degrees of the plurality of cameras estimated by the attention degree estimating unit 120 and the operation information acquired by the operation information acquisition unit 150, and stores the learned parameter in the estimation parameter storing unit 130.
[0028] The operation of the video monitoring system according to the embodiment at the time of learning (learning stage) will be described with reference to the processing sequence shown in
[0029] The image acquisition unit 110 acquires image data captured by the cameras 200-1 to 200-N (step S100). The image data to be acquired is two-dimensional data made of 8-bit R, G, and B pixels, and can be acquired sequentially in time series. The acquired image data is held in a memory (not shown).
[0030] The attention degree estimating unit 120 estimates the attention degree of each image acquired by the image acquisition unit 110 using the parameter stored in the estimation parameter storing unit 130. The arrangement of the attention degree estimating unit 120 is shown in
[0031] The feature amount extracting unit 122 and the estimating unit 123 estimate the attention degree of each of the image areas divided by the area dividing unit 121 (step S120). The feature amount extracting unit 122 and the estimating unit 123 are made from a deep neural network shown in
[0032] The attention degree estimating unit 120 repeats the above estimation processing for each of the areas divided by the area dividing unit 121. The integrating unit 124 of the attention degree estimating unit 120 integrates the outputs from the estimating unit 123 of the attention degree estimating unit 120 (step S130). The integrating unit 124 according to this embodiment compares the attention degrees estimated for the plurality of areas and obtains the highest attention degree. Note that, alternatively, the attention degrees estimated for the plurality of areas may be averaged, or another integration method may be used.
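The integration of step S130 can be sketched as follows. This is a minimal illustration assuming plain Python lists of per-area attention degrees; the function name is hypothetical.

```python
def integrate_attention(area_degrees, method="max"):
    """Integrate per-area attention degrees into one value per image.

    The embodiment takes the highest degree among the areas; averaging
    is mentioned as an alternative integration method.
    """
    if method == "max":
        return max(area_degrees)
    if method == "mean":
        return sum(area_degrees) / len(area_degrees)
    raise ValueError("unknown integration method: %s" % method)

# Example: the second area dominates, so its degree becomes the
# integrated attention degree of the whole image.
integrated = integrate_attention([0.12, 0.85, 0.40, 0.33])
```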
[0033] The display unit 300 displays an image acquired by the image acquisition unit 110 (step S140). An example of a screen displayed on the display unit 300 is shown in
[0034] The screen switching operation unit 400 accepts a screen switching operation from a user who monitors the screen displayed on the display unit 300 and switches the screens to be displayed on the display unit 300 (step S150). An example of the screen after the screen switching operation is shown in
[0035] The operation information acquisition unit 150 acquires operation information of the screen switching operation unit 400 (step S160). The operation information to be acquired here is a camera identification number for specifying the camera of the selected image. The learning unit 160 acquires, as learning data, the attention degrees of the images obtained by the plurality of cameras and estimated by the attention degree estimating unit 120 and the camera identification number acquired by the operation information acquisition unit 150 (step S170). When the user does not select one image, the process returns to step S100 for the next image acquisition.
[0036] On the other hand, when the user selects one image, the process advances to step S180. In step S180, the learning unit 160 updates the parameter used in the attention degree estimating unit 120 by using the acquired learning data and stores the updated parameter in the estimation parameter storing unit 130.
[0037] The above processing is repeated until the number of times the images are acquired from each camera reaches a predetermined value.
[0038] The stochastic gradient descent method, which obtains the estimation parameter from the gradient of the average loss, is used for learning of the neural network. Let A^p be the integrated attention degree obtained in step S130 for the image from the camera corresponding to the acquired camera identification number, out of the attention degrees estimated by the attention degree estimating unit 120, and let A^m_i be the integrated attention degree obtained in step S130 for an image from another camera, where i is an index over the learning data. In this embodiment, the difference between the attention degree of the camera selected by the user and that of a camera not selected by the user is evaluated as the average loss. The loss function is given by:

L = Σ_i I(A^p − A^m_i < 0)   (1)

where I(·) is the indicator function, which outputs 1 if the condition in the parentheses is true and 0 otherwise, and Σ_i denotes the sum over the learning data of index i. All data may be used for learning, or a predetermined number of data items may be selected at random.
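Evaluating equation (1) can be sketched directly; the function name is hypothetical. Note that the indicator itself is a count, so the gradient computation described in the following paragraph operates on the underlying attention degree estimates rather than on this count.

```python
def indicator_loss(a_p, a_m_list):
    """Equation (1): L = sum_i I(A^p - A^m_i < 0).

    Counts how many unselected cameras received a strictly higher
    attention degree than the camera selected by the user.
    """
    return sum(1 for a_m in a_m_list if a_p - a_m < 0)

# The selected camera (0.9) is exceeded only by the 0.95 camera, so L = 1.
loss = indicator_loss(0.9, [0.5, 0.95, 0.7])
```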
[0039] The learning unit 160 obtains a gradient based on equation (1) from the estimating unit 123 of the attention degree estimating unit 120, that is, from an attention degree estimation value obtained by changing the parameter of each of the sixth and seventh layers of the neural network shown in
[0040] Processing on the learning stage according to this embodiment has been described above. In the above description, the learning stage processing is triggered when the number of times images have been acquired from each camera reaches a predetermined value. However, a higher learning effect can be expected when both the number of image acquisitions and the number of image selection operations by the user are large. Accordingly, the condition may instead be that both of these counts reach predetermined values.
[0041] The operation of the display control of the video monitoring system according to this embodiment at the time of automatic screen switching control (operation stage) will now be described with reference to the processing sequence shown in
[0042] The image acquisition unit 110 acquires image data captured by the cameras 200-1 to 200-N (step S200). The area dividing unit 121 of the attention degree estimating unit 120 divides each image data obtained by the image acquisition unit 110 by predetermined numbers in the vertical and horizontal directions, thereby obtaining image areas having the same size, and normalizes the divided image areas into a predetermined size set in advance (step S210).
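The division and normalization of steps S200 and S210 can be sketched as follows, assuming NumPy arrays. The function name, the grid parameters, and the nearest-neighbour resizing are illustrative assumptions, not details from the disclosure.

```python
import numpy as np

def divide_and_normalize(image, rows, cols, out_size=(32, 32)):
    """Divide an H x W x 3 image into rows x cols equal areas and
    resize each area to out_size by nearest-neighbour sampling."""
    h, w = image.shape[:2]
    ah, aw = h // rows, w // cols
    areas = []
    for r in range(rows):
        for c in range(cols):
            area = image[r * ah:(r + 1) * ah, c * aw:(c + 1) * aw]
            # Nearest-neighbour index maps for the normalized size.
            ys = np.arange(out_size[0]) * area.shape[0] // out_size[0]
            xs = np.arange(out_size[1]) * area.shape[1] // out_size[1]
            areas.append(area[ys][:, xs])
    return areas
```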
[0043] The feature amount extracting unit 122 and the estimating unit 123 of the attention degree estimating unit 120 estimate the attention degree for each of the areas divided by the area dividing unit 121 (step S220). At this time, when the estimation parameter is updated in the learning processing described above, the attention degree is estimated using the latest parameter. In addition, the attention degree estimating unit 120 repeats the estimation processing by the number of areas divided by the area dividing unit 121.
[0044] The integrating unit 124 of the attention degree estimating unit 120 integrates outputs from the estimating unit 123 of the attention degree estimating unit 120 (step S230).
[0045] On the other hand, the display unit 300 displays the image acquired by the image acquisition unit 110 on a subscreen (step S240). Processing from step S200 to step S240 is repeated for each of the cameras 200-1 to 200-N.
[0046] The screen switching control unit 140 compares the integrated attention degrees obtained in step S230 for the image of each camera and obtains a camera identification number of a camera which has captured an image having the largest attention degree (step S250).
[0047] The screen switching control unit 140 displays the image from the camera corresponding to the obtained camera identification number on the main screen of the display unit 300, thereby automatically switching the screens (step S260).
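Steps S250 and S260 amount to an argmax over the integrated attention degrees; a minimal sketch with hypothetical names:

```python
def select_main_camera(integrated_degrees):
    """Return the identification number of the camera whose image has
    the largest integrated attention degree (step S250). The caller
    then shows that camera's image on the main screen (step S260)."""
    return max(integrated_degrees, key=integrated_degrees.get)

# Camera 2 has the largest integrated attention degree here.
selected = select_main_camera({1: 0.31, 2: 0.77, 3: 0.52})
```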
[0048] The operation at the time of automatic control is thus complete. Note that processing continues until an automatic control end instruction is input via an operation unit (not shown).
[0049] As has been described above, according to this embodiment, by using the screen switching operations together with the attention degree estimated from the image of each camera, learning is performed such that the attention degree of the camera selected by the user becomes larger than the attention degrees of the other cameras. For this reason, learning of screen switching matching the intention of the user can be performed. Since the learned parameter can be updated, the processing time does not pose a problem even if the number of cameras is increased.
[0050] Note that in this embodiment, the attention degree estimating unit is formed from a neural network. However, the attention degree estimating unit may instead use another machine learning method, such as a support vector machine.
[0051] The attention degree estimating unit according to this embodiment estimates the attention degree from a still image, but can estimate the attention degree from a time-space image (moving image) obtained by coupling the areas of a plurality of frames of a time-series image.
[0052] For example, if a camera captures a moving image at 30 frames/sec, a neural network is used which receives 30 feature amounts (for one second) arranged along the latest time axis, or the attention degrees shown in the above embodiment. Learning is performed such that the time-axis video from the camera selected (given attention) by the user is distinguished from the time-axis video from an unselected camera.
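Forming such a time-axis input can be sketched with a per-camera ring buffer of the most recent frames. The class name and buffer design below are illustrative assumptions, not part of the disclosure.

```python
from collections import deque

import numpy as np

class FrameBuffer:
    """Keep the most recent `length` frames of one camera so that a
    spatio-temporal (T, H, W, C) input can be formed; length=30
    corresponds to one second at 30 frames/sec."""

    def __init__(self, length=30):
        self.frames = deque(maxlen=length)

    def push(self, frame):
        # Oldest frames are dropped automatically once the buffer is full.
        self.frames.append(frame)

    def ready(self):
        return len(self.frames) == self.frames.maxlen

    def as_input(self):
        # Stack along a new leading time axis.
        return np.stack(self.frames, axis=0)
```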
[0053] In the above embodiment, the learning unit acquires, as learning data for each area, the plurality of estimation results in the image which are estimated by the attention degree estimating unit. However, the estimation results may be integrated into one estimation result by the integrating unit of the attention degree estimating unit, and the integrated estimation result may then be set as the learning data for each camera image. Alternatively, the plurality of estimation results of the attention degree estimating unit may be integrated and estimated using a recursive neural network, and the learning unit may receive the output from this neural network as the learning data. An RNN (Recurrent Neural Network) or an LSTM (Long Short-Term Memory) may be used as the recursive neural network.
[0054] In this embodiment, the learning unit performs learning such that the attention degree of the camera selected by the screen switching operation unit is larger than the attention degree of another camera. However, pieces of information before and after the screen switching operation may be used. For example, learning may be performed such that the attention degree of the selected camera is set larger than the attention degree of the camera which is displayed on the main screen before selection.
[0055] The learning stage and the screen switching stage (operation stage) may be automatically switched based on the time when the user performs the screen switching operation.
[0056] In the above embodiment, the images are acquired from the cameras 200-1 to 200-N on the learning stage. Alternatively, images captured in the past (for example, on the immediately preceding day) by the cameras 200-1 to 200-N may be stored in a storage device such as a hard disk in association with the camera identification numbers. Learning may then be performed by acquiring the images of each camera from the storage device while the user repeats the selection operation.
Other Embodiments
[0057] Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
[0058] While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
[0059] This application claims the benefit of Japanese Patent Application No. 2017-004617, filed Jan. 13, 2017, which is hereby incorporated by reference herein in its entirety.