Thursday 30 July 2015

2.3.6 Types of requirement 


functional requirement 

In software engineering (and systems engineering), a functional requirement defines a function of a system and its components. A function is described as a set of inputs, the behavior, and outputs.
Functional requirements may be calculations, technical details, data manipulation and processing, and other specific functionality that define what a system is supposed to accomplish. Behavioral requirements describing all the cases where the system uses the functional requirements are captured in use cases. Functional requirements are supported by non-functional requirements (also known as quality requirements), which impose constraints on the design or implementation (such as performance, security, or reliability requirements). Generally, functional requirements are expressed in the form "system shall do <requirement>", while non-functional requirements take the form "system shall be <requirement>". The plan for implementing functional requirements is detailed in the system design. The plan for implementing non-functional requirements is detailed in the system architecture.
As defined in requirements engineering, functional requirements specify particular results of a system. This should be contrasted with non-functional requirements which specify overall characteristics such as cost and reliability. Functional requirements drive the application architecture of a system, while non-functional requirements drive the technical architecture of a system.
In some cases a requirements analyst generates use cases after gathering and validating a set of functional requirements. The hierarchy of functional requirements is: user/stakeholder request → feature → use case → business rule. Each use case illustrates behavioral scenarios through one or more functional requirements. Often, though, an analyst will begin by eliciting a set of use cases, from which the analyst can derive the functional requirements that must be implemented to allow a user to perform each use case.

non-functional requirement

In systems engineering and requirements engineering, a non-functional requirement is a requirement that specifies criteria that can be used to judge the operation of a system, rather than specific behaviors. This should be contrasted with functional requirements that define specific behavior or functions. The plan for implementing functional requirements is detailed in the system design. The plan for implementing non-functional requirements is detailed in the system architecture.
Broadly, functional requirements define what a system is supposed to do and non-functional requirements define how a system is supposed to be. Functional requirements are usually in the form of "system shall do <requirement>", an individual action or part of the system, perhaps explicitly in the sense of a mathematical function, or a black box description of input, output, process and control (a functional or IPO model). In contrast, non-functional requirements are in the form of "system shall be <requirement>", an overall property of the system as a whole or of a particular aspect, and not a specific function. The system's overall properties commonly mark the difference between whether the development project has succeeded or failed.
Non-functional requirements are often called qualities of a system. Other terms for non-functional requirements are "constraints", "quality attributes", "quality goals", "quality of service requirements" and "non-behavioral requirements".[1] Informally these are sometimes called the "ilities", from attributes like stability and portability. Qualities, that is, non-functional requirements, can be divided into two main categories:
  1. Execution qualities, such as security and usability, which are observable at run time.
  2. Evolution qualities, such as testability, maintainability, extensibility and scalability, which are embodied in the static structure of the software system.[2][3]
2.3.7 Metrics
As we stated earlier, the basic purpose of metrics at any point during a development project is to provide quantitative information to the management process so that the information can be used effectively to control the development process. Unless the metric is useful in some form to monitor or control the cost, schedule, or quality of the project, it is of little use for a project.
  In this section, we will discuss some of the metrics and how they can be used.
2.3.7.1 Size—Function Points
        As the primary factor that determines the cost (and schedule) of a software project is its size, a metric that can help get an idea of the size of the project will be useful for estimating cost.
              A commonly used size metric for requirements is the size of the text of the SRS. The size could be in number of pages, number of paragraphs, number of functional requirements, etc. As can be imagined, these measures are highly dependent on the authors of the document. A verbose analyst who likes to make heavy use of illustrations may produce an SRS that is many times the size of the SRS of a terse analyst. Similarly, how much an analyst refines the requirements has an impact on the size of the document. Generally, such metrics cannot be accurate indicators of the size of the project. They are used mostly to convey a general sense about the size of the project.
              Function points are one of the most widely used measures of software size.
           The basis of function points is that the "functionality" of a system, that is, what the system performs, is the measure of the system size. Functionality is independent of how the requirements of the system are specified, and even of how they are eventually implemented.
        In function points, the system functionality is calculated in terms of the number of functions it implements, the number of inputs, the number of outputs, etc.—parameters that can be obtained after requirements analysis and that are independent of the specification (and implementation) language.
            The original formulation for computing the function points uses the count of five different parameters, namely, external input types, external output types, logical internal file types, external interface file types, and external inquiry types. According to the function point approach, these five parameters capture the entire functionality of a system. However, two elements of the same type may differ in their complexity and hence should not contribute the same amount to the "functionality" of the system.
             To account for complexity, each parameter in a type is classified as simple, average, or complex. The definitions of these types and the interpretation of their complexity levels are given below.
Each unique input (data or control) type that is given as input to the application from outside is considered of external input type and is counted.
             An external input type is considered unique if the format is different from others or if the specifications require a different processing for this type from other inputs of the same format. The source of the external input can be the user, some other application, or files. An external input type is considered
simple if it has a few data elements and affects only a few internal files of the application. It is considered complex if it has many data items and many internal logical files are needed for processing them. The complexity is average if it is in between.
             Similarly, each unique output that leaves the system boundary is counted as an external output type. Again, an external output type is considered unique if its format or processing is different. Reports or messages to the users or other applications are counted as external output types. The complexity criteria are similar to those of the external input type. For a report, if it contains a few columns it is considered simple, if it has multiple columns it is considered average, and if it contains a complex structure of data and references many files for production, it is considered complex.
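To make this classification concrete, here is a minimal sketch in Python. The numeric thresholds are illustrative assumptions chosen only to mirror the "few"/"many" wording above; they are not the official function point counting rules.

```python
# Illustrative classification of one external input/output element as
# simple, average, or complex. The numeric thresholds are assumptions that
# mirror the "few"/"many" wording above, not official counting rules.

def classify_element(num_data_items: int, num_files_referenced: int) -> str:
    """Return 'simple', 'average', or 'complex' for one element."""
    if num_data_items <= 4 and num_files_referenced <= 1:
        return "simple"
    if num_data_items > 15 or num_files_referenced > 2:
        return "complex"
    return "average"

# Example: an input screen with 20 fields that touches 3 internal files.
print(classify_element(num_data_items=20, num_files_referenced=3))  # complex
```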
Each application maintains information internally for performing its functions.




Function type               Simple   Average   Complex
External input                 3        4         6
External output                4        5         7
Logical internal file          7       10        15
External interface file        5        7        10
External inquiry               3        4         6

                         Table 3.3: Function point contribution of an element.
            Each logical group of data or control information that is generated, used, and maintained by the application is counted as a logical internal file type. A logical internal file is simple if it contains a few record types, complex if it has many record types, and average if it is in between.
            Files that are passed or shared between applications are counted as external interface file type. Note that each such file is counted for all the applications sharing it. The complexity levels are defined as for logical internal file type.
             A system may have queries also, where a query is defined as an input-output combination where the input causes the output to be generated almost immediately. Each unique input-output pair is counted as an external inquiry type. A query is unique if it differs from others in the format of input or output or if it requires different processing. For classifying the query type, the input and output are classified as for the external input type and external output type, respectively. The query complexity is the larger of the two.
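The "larger of the two" rule can be expressed directly; this small sketch assumes the input and output sides of a query have already been classified as described above.

```python
# The "larger of the two" rule for external inquiries, assuming the input and
# output sides have already been classified.

RANK = {"simple": 0, "average": 1, "complex": 2}

def inquiry_complexity(input_cx: str, output_cx: str) -> str:
    """The complexity of a query is the larger of its input and output complexities."""
    return max(input_cx, output_cx, key=RANK.get)

print(inquiry_complexity("simple", "average"))  # average
```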
Each element of the same type and complexity contributes the same fixed amount to the overall function point count of the system (which is a measure of the functionality of the system), but the contribution differs between types, and within a type it differs between complexity levels. The contribution of an element is shown in Table 3.3.
              Once the counts for all five different types are known for all three different complexity classes, the raw or unadjusted function point (UFP) can be computed as a weighted sum as follows:

                   UFP = Σ (i = 1 to 5) Σ (j = 1 to 3) w_ij * C_ij


where i reflects the row and j reflects the column in Table 3.3; w_ij is the entry in the ith row and jth column of the table (i.e., it represents the contribution of an element of type i and complexity j); and C_ij is the count of the number of elements of type i that have been classified as having the complexity corresponding to column j.
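The following sketch shows this computation using the weights of Table 3.3; the element counts are hypothetical example data, not taken from any real project.

```python
# Sketch of the UFP computation using the weights of Table 3.3.
# The element counts below are hypothetical example data.

WEIGHTS = {
    # function type: (simple, average, complex) weights from Table 3.3
    "external input":          (3, 4, 6),
    "external output":         (4, 5, 7),
    "logical internal file":   (7, 10, 15),
    "external interface file": (5, 7, 10),
    "external inquiry":        (3, 4, 6),
}

def unadjusted_function_points(counts: dict) -> int:
    """counts maps each function type to a (simple, average, complex) tuple
    of element counts; the result is the weighted sum UFP."""
    return sum(
        w * c
        for ftype, weights in WEIGHTS.items()
        for w, c in zip(weights, counts.get(ftype, (0, 0, 0)))
    )

# Hypothetical project: 10 simple, 5 average, 2 complex external inputs, etc.
counts = {
    "external input":          (10, 5, 2),
    "external output":         (8, 4, 1),
    "logical internal file":   (3, 2, 0),
    "external interface file": (1, 1, 0),
    "external inquiry":        (6, 2, 0),
}
print(unadjusted_function_points(counts))  # 200 for this example
```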
                Once the UFP is obtained, it is adjusted for the environment complexity. For this, 14 different characteristics of the system are given. These are data communications, distributed processing, performance objectives, operation configuration load, transaction rate, on-line data entry, end user efficiency, on-line update, complex processing logic, re-usability, installation ease, operational ease, multiple sites, and desire to facilitate change. The degree of influence of each of these factors is taken to be from 0 to 5, representing the six different levels: not present (0), insignificant influence (1), moderate influence (2), average influence (3), significant influence (4), and strong influence (5). The 14 degrees of influence for the system are then summed, giving a total N (N ranges from 0 to 14*5 = 70). This N is used to obtain a complexity adjustment factor (CAF) as follows:
                      CAF = 0.65 + 0.01 N.
With this equation, the value of CAF ranges between 0.65 and 1.35. The delivered function points (DFP) are simply computed by multiplying the UFP by CAF.
 That is,
                          Delivered Function Points = CAF * Unadjusted Function Points.
As we can see, by adjustment for environment complexity, the DFP can differ from the UFP by at most 35%. The final function point count for an application is the computed DFP.
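A sketch of the adjustment step follows; the 14 ratings used in the example are hypothetical.

```python
# Sketch of the adjustment step: sum the 14 degrees of influence (each 0-5),
# compute CAF = 0.65 + 0.01*N, and scale the UFP. The ratings below are
# hypothetical example values.

def delivered_function_points(ufp: float, degrees_of_influence: list) -> float:
    assert len(degrees_of_influence) == 14
    assert all(0 <= d <= 5 for d in degrees_of_influence)
    n = sum(degrees_of_influence)   # N ranges from 0 to 70
    caf = 0.65 + 0.01 * n           # CAF ranges from 0.65 to 1.35
    return ufp * caf

# Example: every factor rated "average influence" (3), so N = 42 and CAF = 1.07.
print(delivered_function_points(200, [3] * 14))  # about 214 delivered FP
```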
Function points have been used as a size measure extensively and have been used for cost estimation. Studies have also been done to establish correlation between DFP and the final size of the software (measured in lines of code). For example, according to one such conversion given in www.theadvisors.com/langcomparison.htm, one function point is approximately equal to about 125 lines of C code, and about 50 lines of C++ or Java code. By building such models between function points and delivered lines of code, a function point count can be converted into an estimate of the final size of the software.
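As a rough worked example using the ratios quoted above (treating them as approximate, source-dependent figures rather than exact constants):

```python
# Worked example of the rough conversion quoted above. The ratios are
# approximate, source-dependent figures, not exact constants.

LINES_PER_FP = {"C": 125, "C++": 50, "Java": 50}

def estimated_loc(function_points: float, language: str) -> float:
    return function_points * LINES_PER_FP[language]

print(estimated_loc(200, "C"))     # about 25,000 lines of C
print(estimated_loc(200, "Java"))  # about 10,000 lines of Java
```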
             A major drawback of the function point approach is that the process of computing the function points involves subjective evaluation at various points:
 (1) different interpretations of the SRS (e.g., whether something should count as an external input type or an external interface type; whether or not something constitutes a logical internal file; if two reports differ in a very minor way, should they be counted as two or one);
 (2) the complexity estimation of a user function is totally subjective and depends entirely on the analyst (one analyst may classify something as complex that another classifies as average), and complexity can have a substantial impact on the final count, as the weights for simple and complex frequently differ by a factor of 2;
 (3) value judgments for the environment complexity.
These factors make the process of function point counting somewhat subjective.

The main advantage of function points over the size metric of KLOC, the other commonly used approach, is that the definition of DFP depends only on information available from the specifications, whereas the size in KLOC cannot be directly determined from specifications. Furthermore, the DFP count is independent of the language in which the project is implemented.

  
2.3.7.2 Quality Metrics

Number of errors found is a process metric that is useful for assessing the quality of requirement specifications. Once the number of errors of different categories found during the requirement review of the project is known, some assessment can be made about the SRS from the size of the project and historical data. This assessment is possible if the development process is under statistical control. In this situation, the error distribution during requirement reviews of a project will show a pattern similar to other projects executed following the same development process. From the pattern of errors
to be expected for this process and the size of the current project (say, in function points), the volume and distribution of errors expected to be found during requirement reviews of this project can be estimated. These estimates can be used for evaluation.
                        For example, if far fewer errors than expected were detected, it means that either the SRS was of very high quality or the requirement reviews were not careful. Further analysis can reveal the true situation. If too many clerical errors were detected and too few omission type errors were detected,
it might mean that the SRS was written poorly or that the requirements review meeting could not focus on "larger issues" and spent too much effort on "minor" issues. Again, further analysis will reveal the true situation. Similarly, a large number of errors that reflect ambiguities in the SRS can imply that the problem analysis has not been done properly and many more ambiguities may still exist in the SRS. Some project management decision to control this can then be taken (e.g., build a prototype or do further analysis). Clearly, review data about the number of errors and their distribution can be used effectively by the project manager to control quality of the requirements. From the historical data, a rough estimate of the number of errors that remain in the SRS after the reviews can also be estimated.
This can be useful in the rest of the development process as it gives some handle on how many requirement errors should be revealed by later quality assurance activities.
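A minimal sketch of such an evaluation follows, assuming a hypothetical historical error density per 100 function points; in practice both the density and the project size would come from the organization's process database.

```python
# Minimal sketch of judging a requirements review against historical data.
# The error density and project size below are hypothetical.

HISTORICAL_ERRORS_PER_100_FP = 12.0   # assumed average from past projects

def expected_review_errors(size_in_fp: float) -> float:
    return size_in_fp * HISTORICAL_ERRORS_PER_100_FP / 100.0

errors_found = 8
expected = expected_review_errors(214)   # roughly 25-26 errors expected
if errors_found < 0.5 * expected:
    print("Far fewer errors than expected: very clean SRS, or a shallow review.")
```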

  Change request frequency can be used as a metric to assess the stability of the requirements and how many changes in requirements to expect during the later stages. Many organizations have formal methods for requesting and incorporating changes in requirements. We have earlier seen a requirements change
management process. Change data can be easily extracted from these formal change approval procedures. The frequency of changes can also be plotted against time. For most projects, the frequency decreases with time. This is to be expected; most of the changes will occur early, when the requirements are being analyzed and understood. During the later phases, requests for changes should decrease.
For a project, if the change requests are not decreasing with time, it could mean that the requirements analysis has not been done properly. Frequency of change requests can also be used to "freeze" the requirements—when the frequency goes below an acceptable threshold, the requirements can be
considered frozen and the design can proceed. The threshold has to be determined based on experience and historical data.
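A small sketch of such a freeze check is given below; the weekly counts and the threshold are hypothetical and would have to be calibrated from experience and historical data.

```python
# Sketch of a freeze check based on change request frequency. The weekly
# counts and the threshold are hypothetical.

def ready_to_freeze(weekly_change_requests: list, threshold: int, window: int = 3) -> bool:
    """Freeze once the last `window` weeks are each at or below the threshold."""
    recent = weekly_change_requests[-window:]
    return len(recent) == window and all(c <= threshold for c in recent)

requests_per_week = [9, 7, 6, 4, 2, 1, 1]   # typically decreasing over time
print(ready_to_freeze(requests_per_week, threshold=2))  # True
```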
