
Yang X, Xu W-D, Liu J, Jia Q, Zhu W-N. Simulation Evaluation Method for Fusion Characteristics of the Optical Camouflage Pattern. FIBRES & TEXTILES in Eastern Europe 2021; 29, 3(147): 103-110. DOI: 10.5604/01.3001.0014.7795

Simulation Evaluation Method for Fusion Characteristics of the Optical Camouflage Pattern

DOI: 10.5604/01.3001.0014.7795

Abstract

A comprehensive evaluation system for camouflage design combining local effect evaluation and global sampling is developed. Different from previous models, this method can sample and evaluate target camouflage across a wide range of combat areas, thereby obtaining a comprehensive evaluation effect. In evaluating local effects, the Gaussian pyramid model is adopted to decompose the image on a multi-scale so that it conforms to the multi-resolution property of human eyes. The Universal Image Quality Index (UIQI), which conforms to features of eye movements, is then adopted to measure the similarities between multi-scale target and background brightness, colour and textural features. For the imitation camouflage pattern design algorithm, uniform sampling is used to obtain the evaluation distribution in the background, while for the deformation camouflage pattern the sampling distribution is improved to conform to the movement rule of the target in the background. The evaluation results of the model for different designs were investigated. The experimental results suggest that the model can compare and evaluate the indicators involved in the process of camouflage design, including integration, polychromatic adaptability and algorithm stability. This method can be applied in the evaluation and comparison of camouflage pattern design algorithms, in parameter optimisation of camouflage design and in scheme comparison in engineering practice, and can provide methodological support for camouflage design theories.

Key words: camouflage pattern evaluation, effect evaluation, sampling statistics, visual feature.

Xin Yang¹*, Wei-Dong Xu¹, Jun Liu¹, Qi Jia¹, Wan-Nian Zhu²

¹ Army Engineering University, National Key Laboratory of Lightning Protection and Electromagnetic Camouflage, Nanjing, Jiangsu, 210007, China
* e-mail: 1435227062@qq.com

² Army Engineering University, Teaching and Research Office of Camouflage in Training Center, Xuzhou, Jiangsu, 221004, China

Introduction

As one of the significant military disguising technologies, camouflage technology has been used widely in the sheltering of personnel, weaponry and engineering targets.

According to the usage or object of the camouflage pattern, it can be classified as an imitation camouflage pattern for fixed objects and a deformation camouflage pattern for targets moving within a large scope [1, 2]. Hitherto, design methods aiming at these two camouflage patterns have been developed steadily.

Meanwhile, various evaluation methods have been proposed. According to the stage at which a camouflage pattern is evaluated, the evaluation method can be divided into engineering evaluation after completion of the construction and simulation evaluation after completion of the design.

As a matter of fact, engineering evaluation generally requires much time and effort, and currently there is little research on simulation evaluation. Simulation evaluation in the design stage plays a vital role in the performance measurement and comparison of camouflage design algorithms, as well as in quantification of the effect of the deformation camouflage pattern, and thus it could affect the development of camouflage design theory.

At present, camouflage evaluation technologies can be differentiated subjectively and objectively. Subjective evaluation mainly includes traditional detection probability statistics and target significance, both of which require huge financial and material resources in actual operation and have low efficiency. Jia Qi et al. [3] applied Itti's visual attention mechanism model to the extraction of disguise features and fitted the features to detection probability parameters with a BP (Back Propagation) network. Lin Wei et al. [4] established the relations between detection probability and the features and fitted the parameters according to the psychological stimulation equation. Objective evaluation mainly evaluates the comprehensive similarities between the evaluated background and the target zone based on a similarity measure. The general idea is to decompose the target and background image regions into features such as colour, brightness, texture, shape and structure, and then to establish similarity relationship models using certain measuring methods [5], including the BP neural network method [6], an evaluation method combining colour and distribution information [7], the Gabor wavelet texture-based method [8] and the multi-index evaluation method based on grey theory [9]. Xue Feng et al. [10] designed a camouflage pattern based on the feedback of the evaluation methods, and the evaluation process extracted and quantised the target boundary characteristics based on the gradient method. Daniela H. et al. [11] evaluated the features of camouflage pattern textures based on a human observer. The above methods have mainly two limitations. First, most of them are engineering evaluation methods, in which the objects evaluated are specific targets and the scope of the evaluation is a local background; the overall performance of the camouflage pattern and its effects cannot be evaluated well. Second, the evaluation methods are not guided by visual relevance theory when measuring feature similarity, and the evaluation processes are not carried out at a multi-scale resolution. Given the multi-resolution character of the human eye when observing a target, the fusion of the target and the background cannot be well reflected by those models.

All these limitations have restricted the development of camouflage patterns to some degree. In engineering practice, for the design and application of these two types of camouflage patterns, it is necessary to consider evaluation of the advantages and disadvantages of the imitation camouflage pattern design algorithm, the overall effect of the deformation camouflage pattern against an extensive background, and the adaptability to a complex background. Actually, most of the present camouflage pattern design theories do not offer parallel comparison.

Based on the principle of sampling statistics, this work puts forward a simulation evaluation theory for camouflage patterns and systematically expounds the evaluation steps. According to the visual observation mechanism, the theory of local fusion measurement of camouflage patterns is studied. The contribution mainly includes two aspects: one is to propose a sampling theory and statistical method for evaluation of the global camouflage effect, and the other is to improve the local evaluation simulation model to make it more consistent with the human visual characteristic model. By examining local fusion of the camouflage pattern and its statistical characteristics in the background, the overall performance of the camouflage pattern and its design method is given according to the numerical method, which could provide theoretical support for the performance measurement of camouflage pattern effects and design methods.

Methodology

Fusion characteristic evaluation based on visual features

First, it is necessary to delimit the target and background regions. The traditional method is to compare the features of the background and the target within a certain peripheral scope directly. However, during field reconnaissance the background in far regions has little influence on the target, and the evaluation result is not stable when the main colours of the selected background scope are not uniform. The eight-way domain partition map method [12] is a relatively rational way to separate the target and the background: it takes the target as the origin and the eight regions of equal dimension surrounding the target as the background. To calculate the result, the mean value over all the background regions is taken as the effect of the local region.
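The eight-way partition described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the bounding-box convention and the plug-in similarity measure are all assumptions.

```python
import numpy as np

def eight_way_backgrounds(image, top, left, h, w):
    """Extract the eight equal-sized regions surrounding the target box
    (top, left, h, w), i.e. the eight-way domain partition. Regions that
    would fall outside the image are skipped."""
    H, W = image.shape[:2]
    regions = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue  # the centre cell is the target itself
            y, x = top + dy * h, left + dx * w
            if 0 <= y and y + h <= H and 0 <= x and x + w <= W:
                regions.append(image[y:y + h, x:x + w])
    return regions

def local_score(image, box, similarity):
    """Mean of the target/background similarities over the eight regions,
    matching the paper's averaging of local effects."""
    top, left, h, w = box
    target = image[top:top + h, left:left + w]
    scores = [similarity(target, r)
              for r in eight_way_backgrounds(image, top, left, h, w)]
    return float(np.mean(scores))
```

Any feature-similarity function (e.g. the UIQI comparison used later in the paper) can be passed in as `similarity`.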

At present, there are various theoretical methods to evaluate the target and the background. The most common is to decompose the image into brightness, colour, texture, structure and shape features, which are combined in a certain compound mode. However, there is no theoretical or experimental study reflecting the advantages and disadvantages of the various combination modes of these characteristic parameters. To conform to low-level visual features, the multi-layer Gaussian pyramid is adopted to decompose the target and background images, since image comparison on a multi-scale can reflect the observation results within different visual scopes. Suppose the initial image A(i, j) is the 0th layer of the pyramid; the next layer is obtained by Gaussian filtering and down-sampling the upper layer. Suppose M and N are, respectively, the length and width of the original image; then the image size of layer l is 2⁻ˡM × 2⁻ˡN, where the two-dimensional Gaussian convolution function is as shown in Equation (1).

G(x, y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²))     (1)
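The pyramid construction described above — Gaussian filtering followed by down-sampling by 2 at each layer — can be sketched as below. The filter size and σ are assumptions; the paper does not specify them.

```python
import numpy as np

def _gaussian_kernel1d(sigma):
    # Truncate the Gaussian at ~3 sigma, then normalise to unit sum.
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def _blur(image, sigma):
    # Separable 2-D Gaussian: filter rows, then columns ('same' keeps size).
    k = _gaussian_kernel1d(sigma)
    rows = np.apply_along_axis(np.convolve, 1, image, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")

def gaussian_pyramid(image, levels=4, sigma=1.0):
    """Layer 0 is the original image; each further layer is the blurred
    previous layer down-sampled by 2, so an M x N image becomes
    2^-l M x 2^-l N at layer l."""
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        pyramid.append(_blur(pyramid[-1], sigma)[::2, ::2])
    return pyramid
```

Each feature (brightness, colour, texture) is then extracted per layer, so similarity can be measured at every scale.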

The brightness, colour and texture features of the images at each layer are extracted. Equation (2) represents the calculation of the brightness feature. The colour feature is represented by the a and b values of the Lab uniform colour space; during colour space conversion, the image must first be transferred to XYZ colour space and then to Lab space. The texture feature is described with the Gabor filter, as shown by Equations (3)-(5). The Gabor filter is the result of cosine modulation of the two-dimensional Gaussian function, and biological experiments have found that it approximates the simple-cell receptive field function well [13-16]. Four orientations are selected to extract the texture feature.

(2)

g(x, y) = exp(−(x′² + γ²y′²)/(2σ²)) cos(2πx′/λ)     (3)

x′ = x cos θ + y sin θ     (4)

y′ = −x sin θ + y cos θ     (5)

Similarities between feature results with the same dimensions are investigated. Literature [17] verified the relative superiority of the Universal Image Quantity Index (UIQI) by an eye movement experiment, which has high correlations with the indicators of eye movement and is easy to calculate. However, it is impossible to reveal the influences of chromaticity on visual effects by comparing the results in a grey level image. Calculation of UIQI is as shown in formulas 6-11, where and are, respectively, their corresponding pixel values. With the same scale and their corresponding features, this index is adopted to calculate the similarities between the target and background features. (6)

(7) (8) (9) (10) (1) Extract the brightness, colour and tex- ture features of the images at each layer. Equation (2) represents the calculation of the feature of brightness. The colour feature is represented by value a and val- ue b with Lab uniform colour space. Dur- ing colour space conversion, it is neces- sary to transfer the image to XYZ colour space and then to Lab space. The texture feature is described with the Gabor filter, as shown by Equations (3)-(5). The Ga- bor filter is the result of consine modu- lation of the two-dimensional Gaussian function, and it has been found by bio- logical experiment that it can approach the one-cellular reception field function better [13-16]. 4 orientations are select- ed to extract the texture feature, that is, Suppose the initial image is the 0th layer of the pyramid and the next layer image can be obtained by Gaussian filtering and down-sampling the upper layer image. Suppose and are, respectively, the length and width of the original image, and the image size of layer is . Where, the two-dimensional Gaussian convolution function is as shown in formula 1. (1) Extract the brightness, colour and texture features of the images at each layer. Formula 2 represents the calculation of the feature of brightness. The colour feature is represented by value a and value b with Lab uniform colour space. During colour space conversion, it is necessary to transfer the image to XYZ colour space and then to Lab space. The texture feature is described with the filter, as shown by formulas 3-5. The filter is the result of consine modulation of the two-dimensional Gaussian function, and it has been found by biological experiment that it can approach the one-cellular reception field function better [13-16]. 4 orientations are selected to extract the texture feature, that is, . (2)

(3)

(4)

(5)

Similarities between feature results with the same dimensions are investigated. Literature [17] verified the relative superiority of the Universal Image Quantity Index (UIQI) by an eye movement experiment, which has high correlations with the indicators of eye movement and is easy to calculate. However, it is impossible to reveal the influences of chromaticity on visual effects by comparing the results in a grey level image. Calculation of UIQI is as shown in formulas 6-11, where and are, respectively, their corresponding pixel values. With the same scale and their corresponding features, this index is adopted to calculate the similarities between the target and background features. (6)

(7) (8) (9) (10) . Suppose the initial image is the 0th layer of the pyramid and the next layer image can be obtained by Gaussian filtering and down-sampling the upper layer image. Suppose and are, respectively, the length and width of the original image, and the image size of layer is . Where, the two-dimensional Gaussian convolution function is as shown in formula 1. (1) Extract the brightness, colour and texture features of the images at each layer. Formula 2 represents the calculation of the feature of brightness. The colour feature is represented by value a and value b with Lab uniform colour space. During colour space conversion, it is necessary to transfer the image to XYZ colour space and then to Lab space. The texture feature is described with the filter, as shown by formulas 3-5. The filter is the result of consine modulation of the two-dimensional Gaussian function, and it has been found by biological experiment that it can approach the one-cellular reception field function better [13-16]. 4 orientations are selected to extract the texture feature, that is, . (2)

(3)

(4)

(5)

Similarities between feature results with the same dimensions are investigated. Literature [17] verified the relative superiority of the Universal Image Quantity Index (UIQI) by an eye movement experiment, which has high correlations with the indicators of eye movement and is easy to calculate. However, it is impossible to reveal the influences of chromaticity on visual effects by comparing the results in a grey level image. Calculation of UIQI is as shown in formulas 6-11, where and are, respectively, their corresponding pixel values. With the same scale and their corresponding features, this index is adopted to calculate the similarities between the target and background features. (6)

(7) (8) (9) (10)   (2) Suppose the initial image is the 0th layer of the pyramid and the next layer image can be obtained by Gaussian filtering and down-sampling the upper layer image. Suppose and are, respectively, the length and width of the original image, and the image size of layer is . Where, the two-dimensional Gaussian convolution function is as shown in formula 1. (1) Extract the brightness, colour and texture features of the images at each layer. Formula 2 represents the calculation of the feature of brightness. The colour feature is represented by value a and value b with Lab uniform colour space. During colour space conversion, it is necessary to transfer the image to XYZ colour space and then to Lab space. The texture feature is described with the filter, as shown by formulas 3-5. The filter is the result of consine modulation of the two-dimensional Gaussian function, and it has been found by biological experiment that it can approach the one-cellular reception field function better [13-16]. 4 orientations are selected to extract the texture feature, that is, . (2)

(3)

(4)

(5)

Similarities between feature results with the same dimensions are investigated. Literature [17] verified the relative superiority of the Universal Image Quantity Index (UIQI) by an eye movement experiment, which has high correlations with the indicators of eye movement and is easy to calculate. However, it is impossible to reveal the influences of chromaticity on visual effects by comparing the results in a grey level image. Calculation of UIQI is as shown in formulas 6-11, where and are, respectively, their corresponding pixel values. With the same scale and their corresponding features, this index is adopted to calculate the similarities between the target and background features. (6)

(7) (8) (9) (10)     (3) Suppose the initial image is the 0th layer of the pyramid and the next layer image can be obtained by Gaussian filtering and down-sampling the upper layer image. Suppose and are, respectively, the length and width of the original image, and the image size of layer is . Where, the two-dimensional Gaussian convolution function is as shown in formula 1. (1) Extract the brightness, colour and texture features of the images at each layer. Formula 2 represents the calculation of the feature of brightness. The colour feature is represented by value a and value b with Lab uniform colour space. During colour space conversion, it is necessary to transfer the image to XYZ colour space and then to Lab space. The texture feature is described with the filter, as shown by formulas 3-5. The filter is the result of consine modulation of the two-dimensional Gaussian function, and it has been found by biological experiment that it can approach the one-cellular reception field function better [13-16]. 4 orientations are selected to extract the texture feature, that is, . (2)

(3)

(4)

(5)

Similarities between feature results with the same dimensions are investigated. Literature [17] verified the relative superiority of the Universal Image Quantity Index (UIQI) by an eye movement experiment, which has high correlations with the indicators of eye movement and is easy to calculate. However, it is impossible to reveal the influences of chromaticity on visual effects by comparing the results in a grey level image. Calculation of UIQI is as shown in formulas 6-11, where and are, respectively, their corresponding pixel values. With the same scale and their corresponding features, this index is adopted to calculate the similarities between the target and background features. (6)

(7) (8) (9) (10) Suppose the initial image is the 0th layer of the pyramid and the next layer image can be obtained by Gaussian filtering and down-sampling the upper layer image. Suppose and are, respectively, the length and width of the original image, and the image size of layer is . Where, the two-dimensional Gaussian convolution function is as shown in formula 1. (1) Extract the brightness, colour and texture features of the images at each layer. Formula 2 represents the calculation of the feature of brightness. The colour feature is represented by value a and value b with Lab uniform colour space. During colour space conversion, it is necessary to transfer the image to XYZ colour space and then to Lab space. The texture feature is described with the filter, as shown by formulas 3-5. The filter is the result of consine modulation of the two-dimensional Gaussian function, and it has been found by biological experiment that it can approach the one-cellular reception field function better [13-16]. 4 orientations are selected to extract the texture feature, that is, . (2)

(3)

(4)

(5)

Similarities between feature results with the same dimensions are investigated. Literature [17] verified the relative superiority of the Universal Image Quantity Index (UIQI) by an eye movement experiment, which has high correlations with the indicators of eye movement and is easy to calculate. However, it is impossible to reveal the influences of chromaticity on visual effects by comparing the results in a grey level image. Calculation of UIQI is as shown in formulas 6-11, where and are, respectively, their corresponding pixel values. With the same scale and their corresponding features, this index is adopted to calculate the similarities between the target and background features. (6)

(7) (8) (9) (10)    (4) Suppose the initial image is the 0th layer of the pyramid and the next layer image can be obtained by Gaussian filtering and down-sampling the upper layer image. Suppose and are, respectively, the length and width of the original image, and the image size of layer is . Where, the two-dimensional Gaussian convolution function is as shown in formula 1. (1) Extract the brightness, colour and texture features of the images at each layer. Formula 2 represents the calculation of the feature of brightness. The colour feature is represented by value a and value b with Lab uniform colour space. During colour space conversion, it is necessary to transfer the image to XYZ colour space and then to Lab space. The texture feature is described with the filter, as shown by formulas 3-5. The filter is the result of consine modulation of the two-dimensional Gaussian function, and it has been found by biological experiment that it can approach the one-cellular reception field function better [13-16]. 4 orientations are selected to extract the texture feature, that is, . (2)

(3)

(4)

(5)

Similarities between feature results with the same dimensions are investigated. Literature [17] verified the relative superiority of the Universal Image Quantity Index (UIQI) by an eye movement experiment, which has high correlations with the indicators of eye movement and is easy to calculate. However, it is impossible to reveal the influences of chromaticity on visual effects by comparing the results in a grey level image. Calculation of UIQI is as shown in formulas 6-11, where and are, respectively, their corresponding pixel values. With the same scale and their corresponding features, this index is adopted to calculate the similarities between the target and background features. (6)

(7) (8) (9) (10)    (5) Similarities between feature results with the same dimensions are investigated. Literature [17] verified the relative supe- riority of the Universal Image Quantity Index (UIQI) by an eye movement exper- iment, which has high correlations with the indicators of eye movement and is easy to calculate. However, it is impossi- ble to reveal the influences of chromatic- ity on visual effects by comparing the re- sults in a grey level image. Calculation of UIQI is as shown in Equations (6)-(11), where bij and cij are, respectively, their corresponding pixel values. With the same scale and their corresponding fea- tures, this index is adopted to calculate the similarities between the target and background features. Suppose the initial image is the 0th layer of the pyramid and the next layer image can be obtained by Gaussian filtering and down-sampling the upper layer image. Suppose and are, respectively, the length and width of the original image, and the image size of layer is . Where, the two-dimensional Gaussian convolution function is as shown in formula 1. (1) Extract the brightness, colour and texture features of the images at each layer. Formula 2 represents the calculation of the feature of brightness. The colour feature is represented by value a and value b with Lab uniform colour space. During colour space conversion, it is necessary to transfer the image to XYZ colour space and then to Lab space. The texture feature is described with the filter, as shown by formulas 3-5. The filter is the result of consine modulation of the two-dimensional Gaussian function, and it has been found by biological experiment that it can approach the one-cellular reception field function better [13-16]. 4 orientations are selected to extract the texture feature, that is, . (2)

(3)

(4)

(5)

Similarities between feature results with the same dimensions are investigated. Literature [17] verified the relative superiority of the Universal Image Quantity Index (UIQI) by an eye movement experiment, which has high correlations with the indicators of eye movement and is easy to calculate. However, it is impossible to reveal the influences of chromaticity on visual effects by comparing the results in a grey level image. Calculation of UIQI is as shown in formulas 6-11, where and are, respectively, their corresponding pixel values. With the same scale and their corresponding features, this index is adopted to calculate the similarities between the target and background features. (6)

(7) (8) (9) (10) Suppose the initial image is the 0th layer of the pyramid and the next layer image can be obtained by Gaussian filtering and down-sampling the upper layer image. Suppose and are, respectively, the length and width of the original image, and the image size of layer is . Where, the two-dimensional Gaussian convolution function is as shown in formula 1. (1) Extract the brightness, colour and texture features of the images at each layer. Formula 2 represents the calculation of the feature of brightness. The colour feature is represented by value a and value b with Lab uniform colour space. During colour space conversion, it is necessary to transfer the image to XYZ colour space and then to Lab space. The texture feature is described with the filter, as shown by formulas 3-5. The filter is the result of consine modulation of the two-dimensional Gaussian function, and it has been found by biological experiment that it can approach the one-cellular reception field function better [13-16]. 4 orientations are selected to extract the texture feature, that is, . (2)

(3)

(4)

(5)

Similarities between feature results with the same dimensions are investigated. Literature [17] verified the relative superiority of the Universal Image Quantity Index (UIQI) by an eye movement experiment, which has high correlations with the indicators of eye movement and is easy to calculate. However, it is impossible to reveal the influences of chromaticity on visual effects by comparing the results in a grey level image. Calculation of UIQI is as shown in formulas 6-11, where and are, respectively, their corresponding pixel values. With the same scale and their corresponding features, this index is adopted to calculate the similarities between the target and background features. (6)

(7) (8) (9) (10)   (6) Suppose the initial image is the 0th layer of the pyramid and the next layer image can be obtained by Gaussian filtering and down-sampling the upper layer image. Suppose and are, respectively, the length and width of the original image, and the image size of layer is . Where, the two-dimensional Gaussian convolution function is as shown in formula 1. (1) Extract the brightness, colour and texture features of the images at each layer. Formula 2 represents the calculation of the feature of brightness. The colour feature is represented by value a and value b with Lab uniform colour space. During colour space conversion, it is necessary to transfer the image to XYZ colour space and then to Lab space. The texture feature is described with the filter, as shown by formulas 3-5. The filter is the result of consine modulation of the two-dimensional Gaussian function, and it has been found by biological experiment that it can approach the one-cellular reception field function better [13-16]. 4 orientations are selected to extract the texture feature, that is, . (2)

(3)

(4)

(5)

Similarities between feature results with the same dimensions are investigated. Literature [17] verified the relative superiority of the Universal Image Quantity Index (UIQI) by an eye movement experiment, which has high correlations with the indicators of eye movement and is easy to calculate. However, it is impossible to reveal the influences of chromaticity on visual effects by comparing the results in a grey level image. Calculation of UIQI is as shown in formulas 6-11, where and are, respectively, their corresponding pixel values. With the same scale and their corresponding features, this index is adopted to calculate the similarities between the target and background features. (6)

(7) (8) (9) (10)     (7) Suppose the initial image is the 0th layer of the pyramid and the next layer image can be obtained by Gaussian filtering and down-sampling the upper layer image. Suppose and are, respectively, the length and width of the original image, and the image size of layer is . Where, the two-dimensional Gaussian convolution function is as shown in formula 1. (1) Extract the brightness, colour and texture features of the images at each layer. Formula 2 represents the calculation of the feature of brightness. The colour feature is represented by value a and value b with Lab uniform colour space. During colour space conversion, it is necessary to transfer the image to XYZ colour space and then to Lab space. The texture feature is described with the filter, as shown by formulas 3-5. The filter is the result of consine modulation of the two-dimensional Gaussian function, and it has been found by biological experiment that it can approach the one-cellular reception field function better [13-16]. 4 orientations are selected to extract the texture feature, that is, . (2)

(3)

(4)

(5)

Similarities between feature results with the same dimensions are investigated. Literature [17] verified the relative superiority of the Universal Image Quantity Index (UIQI) by an eye movement experiment, which has high correlations with the indicators of eye movement and is easy to calculate. However, it is impossible to reveal the influences of chromaticity on visual effects by comparing the results in a grey level image. Calculation of UIQI is as shown in formulas 6-11, where and are, respectively, their corresponding pixel values. With the same scale and their corresponding features, this index is adopted to calculate the similarities between the target and background features. (6)

(7)

(8)

(9)

(10)

(11)
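The UIQI admits a compact closed form (the product of a correlation, a luminance and a contrast term). A minimal sketch of its global-statistics variant is shown below; note that the index is often computed over a sliding window and averaged, so this single-window version is only for illustration:

```python
import numpy as np

def uiqi(x, y):
    # Global-statistics form of the Universal Image Quality Index:
    # Q = 4 * cov * mean_x * mean_y / ((var_x + var_y) * (mean_x^2 + mean_y^2)),
    # i.e. the product of correlation, luminance and contrast comparisons.
    x = x.ravel().astype(float)
    y = y.ravel().astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

a = np.random.rand(32, 32) + 0.5   # strictly positive toy feature map
print(round(uiqi(a, a), 6))        # identical inputs give 1.0
```

Q reaches its maximum of 1 only when the two inputs are identical, which is what makes it usable as a target-background similarity score.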

Considering that the brightness, colour and texture features have equal status in stimulating low-level visual perception, at each pyramid scale the average of the three similarity measures is taken as the evaluation result of that layer. Similarly, the mean of the two colour features and of the texture features in the four directions is taken as the measurement of each feature layer. Thus, measurements of target-background similarity at different scales are obtained. Since human eyes tend to focus on the whole when observing local effects, and attend to details only when the overall difference is large, when integrating the results across pyramid layers, the higher the layer number, the greater the weight assigned, as shown in Equation (12).

(12)

Thus, the local simulation evaluation method for camouflage patterns in the background follows the procedure shown in Figure 1. First, classify the target and the background regions according to the eight-connected-region method. Decompose the target and each background region on the Gaussian pyramid scale and calculate their brightness, colour and texture features. Then, use UIQI to evaluate the similarities between features of the same scale and type. Last, integrate the results across dimensions and calculate the results of the eight backgrounds in turn to obtain the mean value, which represents the integration effect of the camouflage pattern.

FIGURE 1. Flow chart of local evaluation based on visual features.
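The coarse-to-fine weighting across pyramid layers can be sketched as a weighted average. The linear weights 1, 2, …, n used here are an illustrative choice only; the paper's actual Equation (12) weights are not reproduced:

```python
import numpy as np

def fuse_layers(layer_scores):
    # layer_scores[l] is the similarity at pyramid layer l (0 = finest).
    # Coarser layers receive larger weights, matching the observation that
    # the eye judges the whole before attending to detail.
    n = len(layer_scores)
    w = np.arange(1, n + 1, dtype=float)
    return float(np.dot(w / w.sum(), layer_scores))

print(fuse_layers([0.6, 0.7, 0.9]))  # weights 1/6, 2/6, 3/6 -> ~0.783
```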

Algorithm 1 Evaluation procedure of the imitation pattern scheme

Require: background data set D, number of main colours n, number of repetitions m

1: for i = 1, 2, 3, …, m do
2:   randomly select a background image from data set D
3:   generate uniform sample positions successively by the linear congruential method
4:   calculate the camouflage pattern with the design algorithm
5:   evaluate the pattern with the local evaluation model
6: calculate the mean and variance of the evaluation results by estimation
7: return the mean and variance
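Algorithm 1 amounts to a short Monte-Carlo loop. In the sketch below, `design_pattern` and `local_score` are hypothetical callables standing in for the design algorithm and the local evaluation model of the previous subsection, and the LCG constants are the classic ANSI C values, chosen only for illustration:

```python
import random
import statistics

def lcg_samples(n, w, h, seed=1, a=1103515245, c=12345, m=2 ** 31):
    # Uniform sample positions from a linear congruential generator.
    pts, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        px = x % w
        x = (a * x + c) % m
        pts.append((px, x % h))
    return pts

def evaluate_scheme(backgrounds, design_pattern, local_score, repeats=30):
    # Monte-Carlo loop of Algorithm 1: pick a background, sample placement
    # sites, design a pattern, score it locally; then summarise the scores.
    scores = []
    for _ in range(repeats):
        bg = random.choice(backgrounds)                  # step 2
        sites = lcg_samples(8, bg['w'], bg['h'])         # step 3
        pattern = design_pattern(bg)                     # step 4
        scores.append(local_score(pattern, bg, sites))   # step 5
    return statistics.mean(scores), statistics.pvariance(scores)  # steps 6-7

# Toy run with stub callables: every placement scores 0.8.
bgs = [{'w': 200, 'h': 100}, {'w': 300, 'h': 150}]
mean, var = evaluate_scheme(bgs, lambda bg: None,
                            lambda p, bg, s: 0.8, repeats=5)
print(mean, var)  # 0.8 0.0
```

The returned mean measures the overall effect of the design algorithm and the variance its stability, mirroring the two indicators discussed below.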

B. Evaluation of imitation camouflage pattern design scheme

The camouflage design algorithm is assessed in terms of the overall effect of the algorithm, its stability and the complexity of the camouflage pattern. The complexity of the camouflage determines the difficulty of construction and is governed mainly by the size of the spots and the number of main colours. Since the spot size is deduced from the reconnaissance conditions, the main determinant is the number of main colours, which in practice is usually restricted by the construction conditions. Therefore, when conducting a theoretical evaluation, it is reasonable to set 3-5 main colours based on experience [18-20]. Comprehensive evaluation of a

