
Comparative analysis of information security assessment and management methods


Summary

Information security, at a time of growing requirements for flexibility, has become one of the fundamental points of interest of various kinds of organizations, and for most companies it is one of the most complex areas to manage. There are numerous widely known methods for information security assessment and a number of information security management methods. None of them, however, has become either a national or an international standard, and each is used only locally, on a smaller or larger area. Of course, due to the differences between them, some should be more universal and flexible than others. This paper provides a comparison of the most widely known information security management and information security assessment methods in search of the most universal solution that can be used in any type of company.

Keywords: information security, information security management, information security assessment

1. Introduction

Distributed information systems are more and more widely applied not only in large enterprises but also in the SME sector as well as in state-run institutions and offices. Due to the development of public telecommunications networks, a dynamic and unrelenting increase in the number of customers of such systems may be observed. Together with the growing number of users and the increasing amount of both data and information gathered and processed by distributed information systems, it is increasingly difficult to guarantee an adequate level of information security and confidentiality. The United States Code defines information security as “protection of information and information systems from unauthorized access, use, disclosure, modification or destruction” [15]. This means that problems with information security may be caused by a number of factors, such as:

• lack of control over data sent by a public network;

• lack of control over information made available to third parties;

• limited control over the internal flow of information outside the IT system;

• lack of awareness among users and managers.

Disclosing information or making it available to an unauthorized party is not the only problem associated with information security in distributed systems. The above-mentioned systems usually use extensive broadband networks. Apart from the indisputable threat of interception or alteration of the data transmitted over such channels, the physical dispersion of the system components is also significant. In many cases, the system components are independent. This means that a malfunction or inactivation of any of the elements does not result in the malfunction or inactivation of the remaining components. On the contrary, in most cases the other components will work normally, but due to the lack of data they may generate incomplete or erroneous results. There are a number of reasons for the unavailability of individual system components, which in turn can lead to the incorrect (i.e. incompatible with expected) operation of the entire system. These are:

• unintentional and intentional actions of people;

• problems with powering computer hardware and telecommunications devices;

• device failures;

• unsuitable operating conditions of the equipment (e.g. overheating of devices);

• malfunction of software (errors);

• lack of periodic maintenance service of hardware and software.

Due to the fact that even seemingly unimportant local events may have a significant impact on the entire system, a general analysis is essential, rather than an analysis of every single component on its own. Standardization is a way to avoid uncontrollable changes. The aim of standardization is to develop a series of standards for information security and a number of methods and methodologies. Unfortunately, the various solutions and normative acts, which have emerged separately over time and have been independently updated ever since, are not compatible with each other. Standards use different definitions for the same issues and take various approaches to similar problems, whereas methods choose different lines of reasoning to solve what seems to be exactly the same problem. As a result, in most cases a direct comparison of results generated by two different methods is simply impossible, and an indirect comparison (assuming the required conversion is possible at all) is difficult and laborious. A question arises as to whether these widely applied methods are credible. Do they enable an accurate description of any kind of information system, or do they constitute an exaggerated copy of a particular case? Has the declared conformity with standards actually been achieved, and why is there no such conformity with other standards? In order to answer these questions, in-depth research in this field has been conducted. The purpose of the research was to:

• demonstrate differences between existing solutions;

• evaluate and classify already existing solutions;

• identify potential areas for the development of existing solutions;

• determine a possible degree of automation of information security management.

On the basis of the results of this research, it shall then be possible to conduct further research on the development of a decision support system helping to choose the right assessment methodology on the basis of the requirements and preferences set by the law and by the organization itself.

Chapter 2 contains general characteristics of the selected methods and methodologies of information security evaluation and management. Chapter 3 presents the criteria for the analysis as well as the scoring rules. Chapter 4 contains a comparative analysis of the selected methods and methodologies. The analysis was conducted on the basis of the criteria described in Chapter 3. Chapter 5 contains the summary.


2. Characteristics of security assessment methods/information security management systems

2.1. Standards of information security

Information security is regulated by a number of national and international standards. The best known and most commonly used are the ISO 27000 family of standards [9] and the Common Criteria [3], described also by the ISO 15408 [7] and ISO 15443 [8] standards. The first, called ISMS, is a general technical standard describing an information security management system. The remaining two describe how a declaration of conformity with the requirements of a given level of confidence is verified for general solutions and for specific implementations, respectively. In addition to these basic standards, there are many others. Some of them are older equivalents of ISO 27000, while the others specify a selected area by implicitly extending the provisions of the general standards.

2.2. MEHARI

MEHARI [11], the successor to Marion, is a security assessment methodology developed in 1995 by CLUSIF. The security assessment is based on the probability of occurrence and the effects of scenarios associated with the resources. The resources are assigned to individual business processes and to the organizational structure of the enterprise. Identification of risks is based on an audit during which the system users are asked questions. Their responses enable the identification of risk and vulnerability areas. The responses are subject to verification and are cross-checked to rule out false information. After the calculation of risk values for each scenario, countermeasures can be applied. The culmination of the process is to calculate the residual risk, i.e. to carry out a security assessment of the system after the application of risk mitigation solutions. The method is supported by a number of auxiliary tools, including official ones, and is widespread around the world.
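To make the scenario-based reasoning above more concrete, the sketch below combines likelihood and impact levels through a simple risk matrix and recomputes the residual risk after countermeasures. The four-level scales, the matrix values and the scenarios are illustrative assumptions, not MEHARI's actual knowledge base.

```python
# Simplified, assumed illustration of scenario-based risk evaluation; the
# 4-level scales, matrix and scenarios below are invented for this sketch.

RISK_MATRIX = [        # rows: likelihood 1..4, columns: impact 1..4
    [1, 1, 2, 2],
    [1, 2, 2, 3],
    [2, 2, 3, 4],
    [2, 3, 4, 4],
]

def scenario_risk(likelihood: int, impact: int) -> int:
    """Risk level of a scenario, read from the likelihood/impact matrix."""
    return RISK_MATRIX[likelihood - 1][impact - 1]

# Hypothetical scenarios: (name, likelihood, impact, reduction of likelihood
# and impact expected from the selected countermeasures).
scenarios = [
    ("laptop theft",        3, 3, (1, 1)),
    ("ransomware outbreak", 2, 4, (0, 2)),
]

for name, lik, imp, (d_lik, d_imp) in scenarios:
    initial = scenario_risk(lik, imp)
    residual = scenario_risk(max(1, lik - d_lik), max(1, imp - d_imp))
    print(f"{name}: risk {initial} -> residual risk {residual}")
```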

2.3. Cramm

Cramm [4] is a method developed in 1987 by a UK governmental agency. The method consists of three main parts:

1. Determination of main objectives;

2. Risk identification and assessment;

3. Selection of countermeasures.

The aim of the first stage is to estimate the total value of fixed assets, software and data possessed by a company. It is also at this stage that the scope of the analysis is set. During the second stage, threats and vulnerabilities are identified. Both risk and result are calculated on the basis of a model which, by means of a fixed range of values, presents the final value in British pounds. This means that high risk is rigidly associated with very high amounts of potential losses, rendering the method of little use for the analysis of security systems in the SME sector; despite a greater number of risk levels, the number of levels useful from the standpoint of such an undertaking is too small. Moreover, the selection of countermeasures, which takes place in the third stage, is related to the value of risk. This means that, despite an array of as many as 70 groups of countermeasures to choose from, in the case of the SME sector the selection may be suboptimal, as the scale of risk is often not fully adjusted to the size of the enterprise.
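The consequence of binding impact levels to fixed monetary amounts can be illustrated with the sketch below; the threshold values are invented for this example and are not the amounts used by Cramm.

```python
# Assumed illustration of fixed monetary impact bands (thresholds invented,
# not Cramm's actual values): losses typical for an SME all fall into the
# lowest levels, so the higher risk levels remain effectively unreachable.

IMPACT_BANDS_GBP = [10_000, 100_000, 1_000_000, 10_000_000]  # upper bounds of levels 1..4

def impact_level(loss_gbp: float) -> int:
    """Map a monetary loss onto a fixed impact level."""
    for level, upper_bound in enumerate(IMPACT_BANDS_GBP, start=1):
        if loss_gbp <= upper_bound:
            return level
    return len(IMPACT_BANDS_GBP) + 1

for loss in (2_000, 8_000, 50_000):      # losses plausible for a small company
    print(f"loss of {loss} GBP -> impact level {impact_level(loss)}")
```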

2.4 EBIOS

EBIOS [5] is a method of security analysis developed by the French Ministry of Defense. The method allows measuring the risks associated with the functioning of information systems, developing mechanisms to respond to risk, and developing a security policy tailored to the needs of a given organization. Its aim is to develop a set of rules which, if complied with, ensure that the desired level of security is achieved. This means that, unlike Cramm and Mehari, this method focuses on the requirements for security rather than on hazard and vulnerability analysis. The final step is to define a set of conditions to be satisfied so that the system reaches a given level of security. These conditions are presented in the form of a security policy, which may also specify the scope of necessary security and "remedial measures" to ensure an adequate level of security. Methods of ensuring compliance with information security standards are currently being developed.

2.5 Octave

Octave [10] is a method of security analysis. It comes in three variants. The basic variant, the Octave Method, was developed primarily for larger organizations and became the basis for what is called the "Octave knowledge base". The second variant, Octave-S, was designed for smaller businesses. The third variant, Octave-Allegro, is suitable for carrying out simple and rapid security assessments. The method is based on the Octave Criteria, i.e. a set of its own standardized measures for evaluating security. In addition to the documentation, the method is supported by a number of auxiliary tools. Its basic variant consists of three stages: a) identification of critical resources and the risks associated with those resources, b) identification of vulnerability areas related to the critical resources and risk assessment, c) development of security plans and risk transfer, both tailored to the needs of a given organization. Octave-S uses a much simplified analysis of the resources due to the fact that small businesses often tend to outsource IT (and other areas), thus limiting proper security control within this area.

2.6 Cobra

Cobra [2] is a methodology built into software tools. Its purpose is to automate the risk management process by automatically generating customized questionnaires. Cobra is highly flexible and has the ability to adapt itself to the nature of the organization. It has a built-in auto-verification mechanism, which reduces the likelihood of errors. Cobra, like many other solutions based on questionnaires, greatly assists an audit but cannot substitute for the audit itself. The identification of risks is not automatic; what is subject to automation is the selection of the content of the forms that are necessary for a manual audit. Cobra is currently not available for sale – a new version of the tool is being worked on. The release date of the update has not yet been disclosed, nor have any details concerning the scale of the changes.
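The idea of automatically tailoring audit questionnaires to an organization's profile, as described above, can be sketched as follows; the profile attributes and question sets are invented and do not reflect Cobra's actual question base.

```python
# Assumed sketch of profile-driven questionnaire generation; the attributes
# and questions are invented, not Cobra's actual content.

QUESTION_SETS = {
    "outsourced_it": ["Are security requirements part of the outsourcing contract?"],
    "online_sales":  ["Is customer payment data stored or only processed?"],
    "any":           ["Is there a documented information security policy?"],
}

def build_questionnaire(profile: dict[str, bool]) -> list[str]:
    """Select the question sets matching the declared organization profile."""
    questions = list(QUESTION_SETS["any"])
    for attribute, applies in profile.items():
        if applies and attribute in QUESTION_SETS:
            questions += QUESTION_SETS[attribute]
    return questions

print(build_questionnaire({"outsourced_it": True, "online_sales": False}))
```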


2.7 Common Criteria – Common Evaluation Method

Common Criteria [3] is a methodology for assessing the security of a system. The methodology sets out the steps and actions that must be performed to verify the system's compatibility with the chosen level of confidence. It includes a detailed description of how the various provisions of a declaration of compliance with a given CC level of confidence ought to be verified. The outcome of the verification process is binary (pass / fail); however, until the assessment of a given module of the declaration has been completed, its status is non-binding (inconclusive). This methodology does not aim to determine the actual level of system security. Its main objective is to test whether the security level declared by the manufacturer has been reached. Any non-compliance with the requirements for the declared level of confidence proves the declaration incompatible with the actual state and causes its rejection; cases of non-compliance do not reduce the level of confidence in the tested product. This method is usually used when designing new solutions and products whose security has to be certified.
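The verdict logic described above can be summarised by the following sketch: every evaluated unit of the declaration remains inconclusive until examined, ends in a pass or fail, and a single fail rejects the declaration instead of lowering the claimed level of confidence. The unit identifiers are placeholders, not actual CEM work units.

```python
# Assumed sketch of the pass/fail/inconclusive verdict aggregation described
# above; the evaluated unit names are placeholders.
from enum import Enum

class Verdict(Enum):
    INCONCLUSIVE = "inconclusive"   # not yet evaluated
    PASS = "pass"
    FAIL = "fail"

def overall_verdict(unit_verdicts: dict[str, Verdict]) -> Verdict:
    if any(v is Verdict.FAIL for v in unit_verdicts.values()):
        return Verdict.FAIL          # any non-compliance rejects the declaration
    if any(v is Verdict.INCONCLUSIVE for v in unit_verdicts.values()):
        return Verdict.INCONCLUSIVE  # evaluation not finished yet
    return Verdict.PASS              # declared level of confidence confirmed

units = {"unit A": Verdict.PASS, "unit B": Verdict.INCONCLUSIVE}
print(overall_verdict(units).value)  # -> inconclusive
```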

2.8 Cobit

Cobit (Control Objectives for Information and Related Technology) [1] is a set of indicators and targets used to assess the current state of the information system of a given enterprise. Information security is only one of the areas within the scope of this method. Cobit is a universal tool for information management in the information system of an enterprise, in which security is one of the most important issues. As in the case of the other methods, raw data is collected in a non-automated manner, by means of audit or expertise.

3. Description of comparative methodology

The comparison was carried out by applying a point scale. Each criterion, depending on its relevance, is assigned a weight on a scale ranging from 0 to 5 points. The maximum number of points for a criterion is awarded only when the criterion is completely fulfilled. In the case of partial fulfilment, points are awarded in proportion to the degree of compliance with the conditions of the criterion. A detailed justification for granting a certain number of points can be found in Chapter 4, along with the detailed characteristics of individual solutions. Each evaluated method can score a maximum of 100 points.
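As an illustration of this scoring scheme, the sketch below computes a method's total score from the degree of fulfilment of each weighted criterion; the criterion subset and the fulfilment degrees are hypothetical, not the values used in Table 2.

```python
# Assumed illustration of the scoring scheme: each criterion carries a weight
# (its maximum score) and each method a fulfilment degree between 0.0 and 1.0.

criterion_weights = {
    "Cost": 5,
    "Documentation in English": 2,
    "Declared compliance with standards": 3,
    # ... in the paper, the 35 criteria weights sum to 100 points
}

def method_score(fulfilment: dict[str, float]) -> float:
    """Award points in proportion to the degree of fulfilment of each criterion."""
    return sum(criterion_weights[name] * max(0.0, min(1.0, degree))
               for name, degree in fulfilment.items())

# Hypothetical fulfilment degrees for a single method:
example = {
    "Cost": 1.0,                                # free solution
    "Documentation in English": 1.0,            # documentation available
    "Declared compliance with standards": 2/3,  # compliance with two standards
}
print(method_score(example))  # -> 9.0 out of the 10 points available here
```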

Table 1. Description of the criteria

Cost of the method/methodology (5 points)
Cost of documentation, licensing and other fees necessary for acquiring the right to use the method. It does not include software supporting the method if it is available for an additional fee, nor the cost of implementing the solution. The cost has a significant impact on the availability of solutions for companies and organizations that have a smaller budget or manage assets of lesser value.
• 5 points – a free solution;
• 2 points – a paid solution not exceeding 1000 Euro;
• 0 points – solutions over 1000 Euro.

Documentation in English (2 points)
Availability of the documentation of the method/methodology and of its standard tools in English. If the method is accompanied by custom supporting software, this also applies to the language of the software (localization). English is widely recognized as the primary language of communication in the IT industry. Other languages, although used in many countries, can be considered national languages.
• Documentation available (2 points);
• Documentation not available (0 points).

National standard (2 points)
Is the method/methodology a standard in at least one country? Standardization of a method/methodology at the national level may, on the one hand, imply the requirement to apply the method in a given country. On the other hand, the fact that a given method/methodology is recognized as a standard demonstrates both the quality and the recognition of that method/methodology.
• Yes (2 points);
• Yes, there is a legal requirement for its application (1 point);
• No (0 points).

International standard (4 points)
Is the method/methodology an international standard? On the one hand, standardization of a method/methodology at the international level may indicate a legal requirement to use the method. On the other hand, the fact that a given method/methodology is recognized as a standard demonstrates both the quality and the recognition of that method/methodology.
• Yes (4 points);
• No (0 points).

Declared compliance with standards (3 points)
Compliance with information security standards declared by the authors of the method/methodology. Compliance with the requirements established by international information security standards indicates the possibility of using the method in a standardized information security management system.
• No declared compliance (0 points);
• Declaration of compliance with one standard (1 point);
• Declaration of compliance with two standards (2 points);
• Declaration of compliance with multiple standards (3 points).

Target group (5 points)
Recipients of the method/methodology in terms of the type of activity, size, area of operation, etc. The more universal the method/methodology, the larger the target group. High specialization of a method/methodology towards a small group of customers significantly reduces its usefulness in other areas (profiles) of business.
• Without limitation (5 points);
• Limitation of the size or type of entity (3 points);
• Limitation of the size and type of entity (1 point).

Complexity of usage/implementation (5 points)
The level of difficulty related to the implementation and maintenance of the method/methodology, associated with the required experience. A high level of complexity restricts the users of the method/methodology to a group of specialists, in many cases preventing its use by managers at the strategic and operational levels.
• Easy + easy (5 points);
• Easy + normal, normal + normal (3 points);
• Normal + difficult, difficult + difficult (1 point).

Popularity (2 points)
Popularity of the method/methodology. Wide use on the international market is proof of the solution's universality and of its high quality as a product.
• Large, international (2 points);
• Small, local (0 points).

Flexibility (3 points)
The ability of the method/methodology to adapt to a specific company (custom assessment of events). The same event can have different meanings depending on the point of view, the specifics of the market and of the company (location, team of people, etc.).
• Large (3 points);
• Average (2 points);
• Small (1 point).

Scope (5 points)
Area of operation of the method/methodology (risk management, risk assessment and analysis). Risk assessment and analysis is one of the stages of risk management; a method/methodology covering only this stage must cooperate with others covering the remaining areas of risk management (such as risk identification and minimization). Reducing the scope of the method to the IT system, in turn, means that it does not take into account the part of the information system that is not supported by the IT system.
• Risk management – information system (5 points);
• Risk management – IT system, risk assessment – information system (3 points);
• Risk assessment – IT system (1 point).

Risk identification method (2 points)
Risk identification mechanism (audit, checklists, etc.) used in the method/methodology.
• Direct (2 points);
• Indirect (1 point);
• None (0 points).

Verification of risk completeness (3 points)
Does the method/methodology provide a mechanism for verifying the consistency of the identified risks and whether all business areas are covered by risk scenarios?
• Automatic (3 points);
• Manual (1 point);
• None (0 points).

Number of standard scenarios (2 points)
Number of scenarios or variants that describe causes and consequences of adverse events. A large number of predefined scenarios means a high level of detail in the reproduction of reality by the method/methodology.
• Large (2 points);
• Average (1 point);
• Low / none (0 points).

Analysis of scenario relationships (4 points)
Does the method/methodology include a mechanism for controlling relationships between scenarios? Events can have an impact not only directly on the company, but may also cause other events (or prevent them).
• Automatic (4 points);
• Manual (2 points);
• None (0 points).

Data gathering methods (3 points)
The way data about the system is gathered. Data obtained from people may contain intentional or unintentional false statements. Such statements are difficult to detect and usually require cross-verification of the received data/information.
• Verified (3 points);
• Demanding verification (1 point).

Data verification (3 points)
Method of verifying the correctness of the information obtained about the system.
• Comprehensive and adequate (3 points);
• Simplified (2 points);
• None (0 points).

The basis for risk calculation (2 points)
Method of determining the risk value (equation, matrix, other) and the component values used to calculate it.
• Based on qualitative values (2 points);
• Based on quantitative values (0 points).

Number of risk levels (2 points)
Determines how precisely risk is identified. An insufficient number of risk levels limits or prevents its effective analysis and may cause too large a deviation from the model of reality. Too much detail blurs the differences between the levels and reduces the readability of the results and the analysis.
• 3 to 5 (2 points) – optimal;
• 2 or 6 (1 point) – insufficient or excessive detail;
• 1, 7 or above (0 points) – risk levels are indistinguishable.

Basis for determining the probability (2 points)
Method of determining the probability (formula, table, other) and the partial values used to calculate it.
• Based on qualitative values (2 points);
• Based on quantitative values (0 points).

Number of probability levels (2 points)
Determines how precisely probability is recognized. An insufficient number of probability levels may cause too large a deviation from the model of reality. Too much detail blurs the differences between the levels and reduces the readability of the results and the analysis.
• 3 to 5 (2 points) – optimal;
• 2 or 6 (1 point) – insufficient or excessive detail;
• 1, 7 or above (0 points) – probability levels are indistinguishable.

Basis for determining the effect (2 points)
Method of determining the effect value (formula, table, other) and the partial values used to calculate it.
• Based on qualitative values (2 points);
• Based on quantitative values (0 points).

Number of effect levels (2 points)
Determines how precisely effects are recognized. An insufficient number of effect levels may cause too large a deviation from the model of reality. Too much detail blurs the differences between the levels and reduces the readability of the results and the analysis.
• 3 to 5 (2 points) – optimal;
• 2 or 6 (1 point) – insufficient or excessive detail;
• 1, 7 or above (0 points) – effect levels are indistinguishable.

Measure of effect (1 point)
Unit of effect and the representation of the classification result.
• Relative, e.g. % (1 point);
• Absolute, e.g. Dollar (0 points).

Measure of probability (1 point)
Unit of probability and the representation of the classification result.
• Relative, e.g. % (1 point);
• Other (0 points).

Measure of risk (1 point)
Unit of risk and the means of its classification.
• Relative, e.g. based on % (1 point);
• Absolute, e.g. based on money (0 points).

Selection of countermeasures (2 points)
Does the method contain mechanisms for selecting countermeasures appropriate to the identified problem?
• Yes (2 points);
• No (0 points).

Analysis of countermeasures' dependencies (3 points)
Countermeasures, like the scenarios, may affect each other by increasing or reducing each other's effectiveness. The method/methodology should take this phenomenon into account and propose corrections of the parameters in the course of selection.
• Automatic (3 points);
• Manual (1 point);
• None (0 points).

Analysis of the impact of countermeasures (3 points)
Countermeasures can work with varying degrees of success. The method/methodology should consider this fact and evaluate the suitability of the solution in the analyzed case.
• Automatic (3 points);
• Manual (1 point);
• None (0 points).

Assessment of effectiveness of risk treatment (4 points)
The method/methodology should compare the effects of a countermeasure with the costs of its use.
• Automatic (3 points);
• Manual (1 point);
• None (0 points).

Risk monitoring (2 points)
Does the method/methodology enable risk monitoring? Detection of an event recognized as a risk element should be recorded. The way such events are handled is assessed.
• Automatic (2 points);
• Manual (1 point);
• None (0 points).

New risks detection (2 points)
Does the method/methodology detect (identify and report) new types of risks that had not been described before they occurred? The way such events are handled is assessed.
• Automatic (2 points);
• Manual (1 point);
• None (0 points).

Adaptability (4 points)
The ability of the method/methodology to adapt to changing environmental conditions and to be used in non-standard conditions (other than those assumed by the authors).
• Automatic (4 points);
• Manual (2 points);
• None (0 points).

Automatic correction of associated risk (2 points)
The method should update the risk value assigned to events (scenarios) related to the examined event. The update can be caused by a change in the classification of the probability or of the effect of the risk, or indirectly by applied countermeasures.
• Full (2 points);
• Partial (1 point);
• None (0 points).

Support for security policy framework generation (5 points)
The method should include mechanisms for creating a security policy framework based on the results of the risk analysis and the applied countermeasures (residual risk), aiming to secure the desired risk level.
• Automatic selection of security policy elements (5 points);
• General guidelines for the manual selection of the security policy records (2 points);
• No support (0 points).

Support for procedures generation (5 points)
The method should include mechanisms for creating procedures based on the results of the risk analysis and the applied countermeasures (residual risk), aiming to secure the desired risk level.
• Automatic selection of key elements of procedures (5 points);
• General guidelines for the manual selection of the procedure records (2 points);
• No support (0 points).

Source: own elaboration.

4. Comparative analysis

Among the methods presented in Chapter 2, the best-known, well-established methods on the market, described in [6], [12], have been chosen for further detailed analysis. Cobit has been rejected at this stage due to its excessive generality. Cobit is a versatile, robust and recognized method of managing systems and IT resources. Risk management is only one of the elements of this method, closely related to other functional areas. Comparing it with methods specialized in risk assessment or risk management is not justified, because the results would not be reliable.

The second method rejected at this stage is the Common Criteria Methodology. The reason is that this method allows neither a risk assessment of the system nor risk management. Its aim is to verify the fulfilment of the sets of requirements for a computer system or component that provide the desired level of confidence. In contrast to EBIOS, which also uses the CC-Methodology, this method does not analyze the observed differences between the current and expected states with the purpose of applying the necessary changes and countermeasures. The evaluation result is binary (compliant / not compliant), which makes it useless in terms of risk assessment and risk management. It is widely used in the process of verifying products' compliance with standards and security requirements. In contrast to the CC-Methodology, EBIOS can also be successfully used for existing systems and solutions. The results and the reports generated by this method contain recommended countermeasures and actions. This means that the evaluation results are useful in risk management and are a data source for the risk management process integrated with the method. For this reason, in contrast to the CC-Methodology, EBIOS has undergone a comparative analysis and evaluation. Table 2 presents the results of the evaluation of the methods using the comparison criteria defined in Section 3.

Table 2. Comparison results

No. Comparison criteria Weight Mehari Cramm Ebios Octave Cobra

1 Cost 5 2 0 5 5 0

2 English documentation 2 2 2 2 2 2

3 National standard 2 0 0 0 0 0

4 International standard 4 0 0 0 0 0

5 Declared compliance with standards 3 2 1 3 0 1

6 Target group 5 5 1 5 1 1

7 Sophistication of usage/implementation 5 3 1 3 3 5

8 Popularity 2 2 2 2 0 0

9 Flexibility 3 2 1 3 3 3

10 Method’s scope of action 5 3 3 5 5 3

11 Method of risk identification 2 1 1 2 1 1

12 Risk completeness verification 3 0 0 3 1 3

13 Number of standard "scenarios" 2 2 2 2 2 2

14 Analysis of "scenarios" dependencies 4 4 4 4 2 4

15 Data gathering method 3 1 1 3 1 1

16 Data verification 3 3 3 3 2 3

17 Basis for risk calculation 2 2 0 2 2 2

18 Number of risk levels 2 2 0 2 2 2

19 Basis for probability estimation 2 2 2 2 2 2

20 Number of probability levels 2 2 0 2 2 2

21 Basis for effect estimation 2 2 0 2 2 2

22 Number of effect levels 2 2 0 2 2 2

23 Effect metric 1 1 0 1 1 1

24 Probability metric 1 1 1 1 1 1

25 Risk metric 1 1 0 1 1 1

26 Choice of countermeasures 2 2 2 2 2 2

27 Analysis of countermeasures’ dependencies 3 1 3 1 1 0

28 Analysis of countermeasures’ influence 3 3 3 3 1 3

29 Estimation of risk treatment efficiency 4 0 0 0 2 0

30 Risk monitoring 2 0 0 0 1 0

31 Detection of new risks 2 0 0 1 1 1

32 Adaptability 4 2 0 2 2 4

33 Automatic correction of dependent risk 2 1 0 0 0 2

34 Support for security policy framework generation 5 0 0 5 5 2

35 Procedures generation support 5 0 0 5 5 2

Total: 100 56 33 79 63 60


Each of the analyzed methods could gain up to 100 points. Although they are among the most widely used, none of them reached 80% of the points.

The best of the compared methods is EBIOS. This is caused mostly by the fact that it is an information security management method, so it is not limited to risk evaluation and analysis. Moreover, thanks to its complex report generation functionality, it delivers the information necessary for the creation of an information security policy and procedures adjusted to the current level of information security. It is also important that this method is available free of charge, which helps reduce the costs of security management. Another strong point of the EBIOS method is its declared compliance with many information security standards – their number is significantly greater than for the other methods. This underlines the universal and complex character of the method as well as its high flexibility. Although, as a result of using the CC-Methodology rules described in ISO 15408, the evaluation rules proposed by this method differ significantly from those of the other analysed methods, this does not give it any advantage over its competitors. The sources of data, similarly to CC, are highly formalized, which limits the probability of mistakes and improves data authenticity. The other methods rely on data gathered during an audit or in the form of questionnaires filled in by the system users.

The fact that the best of the compared methods, EBIOS, gained less than 80% of the points confirms that the proposed criteria also reveal the weaknesses of the method. Like its competitors, it is neither an international nor a national standard. As a consequence, there is no legal basis that would justify pointing to this method as required in any given market, industry area or company. Another significant disadvantage of this method is the lack of evaluation of the efficiency of risk treatment. A different scheme of security evaluation is not, and cannot be, a justification for the lack of analysis of the costs of the chosen countermeasures limiting risk and thereby raising the trust level of the system. No matter what requirements are defined, it is always possible to identify more than one solution to the problem, which means that the goal can be achieved at a different price in terms of assets, time and money. The analyzed method, due to its alternative process of security evaluation and management, does not include risk monitoring – there are no mechanisms responsible for the periodic control of the system. From the perspective of the method such monitoring is unnecessary, because it is assumed that any change of security can be caused only by introduced changes, which require a full evaluation. As a consequence, monitoring was reduced and does not exist in the form of a separate process. From the perspective of a standard risk management scheme (refer to [13], [14]), this is a significant simplification.

The weakest of the compared methods is Cramm. It is widely accepted and has a stable market position, which makes the result even more surprising. The result is mainly caused by the target group of the method: it was created as a solution for institutions and large or very large corporations, and it was adjusted to the expectations and requirements of such a target group – not only the procedures, but also the basis for the evaluation of risk and effect. The most fundamental charge against this method is its reliance on quantitative values to determine the effect and risk levels. Both of them are bound by a fixed assignment to amounts stated in GBP. Because of the chosen levels, despite their large number, higher risks are by definition unreachable for companies with smaller assets, which reduces the applicability of the method and its flexibility. Moreover, the division of risk and effect into too many levels results in excessive blurring of the sets' borders; as a consequence, it is hard to determine the difference between neighbouring levels. In addition, inflation creates a danger that, over a period of time, risk can be overestimated simply because of a change in the nominal value of the asset or the nominal value of potential losses, despite an unchanged real value. It is clear that this can lead to misleading results of an information security evaluation.

Moreover, this method, expensive in implementation and maintenance (due to the complexity of its use, which results in the cost of employing specialists), has almost no adaptability capabilities and is limited to the evaluation and analysis of information security, with no mechanisms supporting the other phases of the risk management process. According to its declarations, Cramm is compliant with one standard only (ISO 17799), which is old, not up-to-date and has already been replaced (by the ISO 27000 family). This means that Cramm is a method adjusted strictly to the needs of a very narrow target group and is far from being a universal method, which is confirmed by its final result at the level of only 33%.

The remaining three methods present quite a comparable level, gaining around 60% of the points. The first of them, Mehari, loses points for the lack of evaluation of risk treatment efficiency (it is limited to the verification of the effectiveness of the chosen actions) and for limiting its scope to the evaluation and analysis of information security, which means that a separate method is required for the rest of the risk management process. Another significant disadvantage is the lack of mechanisms supporting the generation of procedures and security policies on the basis of the analysis results and the applied countermeasures – all actions are treated as solutions to individual problems, not as part of a security management process.

Like all the other methods, it is recognized neither as an international nor as a national standard. Although it declares compliance with two information security standards, neither of them is up-to-date and both have been replaced by the ISO 27000 family. Despite the continuous development of the method, a declaration of compliance with ISO 27001 has not been issued.

Octave and Cobra are further examples of methods addressed to a limited target group. Both of them gained only local popularity, despite the lack of licence fees in the case of Octave and the significantly simplified implementation and maintenance of the latter. Neither of them declares compliance with current information security standards, which eliminates them from use, e.g. in companies certified to comply with ISO 27001.

Octave is the only one of the compared methods that contains any form of risk treatment efficiency evaluation. As a risk management method, it includes support for risk monitoring and the identification of new risks. Although neither of these is fully automated, the method describes how to input such data and make use of it. Octave also provides support for the generation of documentation – procedures and the security policy – adjusted to the requirements of the organization, the results of the evaluation and the applied countermeasures. Unfortunately, in most cases that support is limited to describing the proper way of processing the data. Gathering data, its input and its processing are not automated by Octave, which means that the method expects already-processed input data that will then be used in the process of evaluation, analysis or information security management. As a consequence, the method is rather time- and resource-consuming and requires additional tools to prepare the input data.

Cobra, in contrast to Octave, is limited in functionality to evaluation and analysis only. It stands out for its simplicity of maintenance and its high level of flexibility and adaptability. It is the only method that provides mechanisms for adjusting the questionnaires and evaluation schemes to the defined characteristics of a given organization. As a result, for the same input data describing a risk, the method may identify a different risk level and offer different countermeasures, depending e.g. on the size, goals and profile of the company. This is functionality that is clearly missing in the other methods.

It is also the only one of the methods that do not rely on the Common Criteria (i.e. all except EBIOS) to have a full capability of verifying the completeness of risk coverage by the evaluated scenarios. The method also includes simplified mechanisms supporting the creation of procedures and a security policy adjusted to the results it generates.

5. Conclusion

The presented analysis of information security evaluation and management methods shows that none of them is a perfect, universal tool for ensuring information security. Even if the fact that none of them is a national or international standard is disregarded, it must be said that all of them require improvement in numerous areas. Even in the case of the method that gained the most points, it cannot be said that it can be used in any given environment. It is simply appropriate in more situations than the other methods, but it still has to be verified whether it is applicable in the case of a given system or organization.

The biggest problem of the analyzed methods is minimal automation. At almost every step they require data in a specific format and with specified content, but they lack mechanisms for extracting such information from the data and information processed daily. Most information is gathered through a manual audit, either with paper forms or computer tools – if someone has to be questioned, it is a manual process. Especially in the case of the computerized areas of the information system, it would be possible to gain the same knowledge without human involvement. The problem is that the analyzed methods provide no support for the necessary processes.

To sum up, further research on the development of information security evaluation and management methods is necessary. The main directions of development should include the automation of gathering the data and information necessary for a proper evaluation of the state of security. At the same time, recent events on global markets show that it is necessary to work on eliminating the dependency between the evaluation of risk and the nominal value of assets, and on the creation of a universal model applicable in a company of any size and profile. A lot can also be done in the area of the methods' adjustability to various expectations regarding the information security level.

Bibliography

[1] COBIT 4.1 Executive Summary Framework, 2007, available online: http://www.isaca.org/Knowledge-Center/cobit/Documents/COBIT4.pdf.

[2] COBRA – Security Risk Assessment, http://www.riskworld.net/.

[3] Common Criteria, v3.1, Release 3, http://www.commoncriteriaportal.org/cc/.

[4] Cramm overview, http://www.cramm.com/capabilities/risk.htm.

[5] EBIOS 2010 – Expression of Needs and Identification of Security Objectives, http://www.ssi.gouv.fr/en/the-anssi/publications-109/methods-to-achieve-iss/ebios-2010-expression-of-needs-and-identification-of-security-objectives.html.

[6] ENISA, Risk Management: Implementation principles and Inventories for Risk Management/Risk Assessment methods and tools, European Network and Information Security Agency (ENISA), 2006.

(15)

[7] ISO/IEC 15408:2009 – Information technology – Security techniques – Evaluation criteria for IT security.

[8] ISO/IEC 15443:2005 – Information technology – Security techniques – A framework for IT security assurance.

[9] ISO/IEC 27001:2005 – Information technology – Security techniques – Information security management systems – Requirements.

[10] OCTAVE® Information Security Risk Evaluation, http://www.cert.org/octave/octavemethod.html.

[11] Présentation de Mehari, https://www.clusif.asso.fr/fr/production/mehari/.

[12] Rot, A., IT Risk Assessment: Quantitative and Qualitative Approach, Proceedings of the World Congress on Engineering and Computer Science 2008, San Francisco, USA, 2008.

[13] Szyjewski, Z., Zarządzanie projektami informatycznymi, Placet, Warszawa 2001 (in Polish).

[14] Szyjewski, Z. and Klasa, T. Computer aided risk tree method in risk management, Polish Journal of Environmental Studies, Vol. 17, No. 3B, 2008, Hard, Olsztyn, 2008.


COMPARATIVE ANALYSIS OF INFORMATION SECURITY ASSESSMENT AND MANAGEMENT METHODS

Summary

Information security, at a time of growing requirements for flexibility, has become one of the fundamental areas of interest of all kinds of organizations, and for many companies it is one of the most complex areas to manage. There are many widely known methods of information security assessment and several methods of information security management. None of them has become a national or an international standard, and each of them is applied only locally, on a smaller or larger area. Due to the differences between them, some of them should be more universal and flexible than the others. This article compares the best-known information security assessment and management methods and attempts to identify the most universal method, which could be applied in any company.

Keywords: information security, information security assessment, information security management

Luiza Fabisiak (1), Tomasz Hyla (2), Tomasz Klasa (2)

(1) Uniwersytet Szczeciński, Wydział Nauk Ekonomicznych i Zarządzania, ul. Mickiewicza 64, Szczecin
(2) Zachodniopomorski Uniwersytet Technologiczny w Szczecinie, Wydział Informatyki, ul. Żołnierska 52, Szczecin

e-mail: luiza.fabisiak@gmail.com, tklasa@wi.zut.edu.pl, thyla@wi.zut.edu.pl
