  • use case points
    here i want to discuss the notion of use case points, which are gaining currency, at least in some organizations. this is really nothing more or less than counting and is based on work by gustav karner in 1993 (use case points - resource estimation for objectory projects, gustav karner, objective systems sf ab). this work was, in turn, a modification of the work by allen albrecht on function points. use case models can be used for estimating the notion of "effort" in software development and testing. it is pretty much an accepted fact now that the understandability of a use case model is influenced by its structure (i.e., the structure of its use cases) and this structure, in turn, has a direct effect on the precision of any estimates made based on use cases. in fact, in the paper estimating software development effort based on use-cases by bente anda, hege dreiem, dag i.k. sjøberg, and magne jørgensen they showed that there were very specific aspects of structure that made a difference. they pointed out:

    the use of generalization between actors
    the use of included and extended use cases
    the level of detail in the descriptions
    as the authors state:

    "two actors can be generalized into a superactor if there is a large description that is common between those two actors ... common behaviour is factored out in included use cases. optional sequences of events are separated out in extending use cases."

    the main point here, in all such models, is that if one wants to apply use cases for estimation, one should have use cases identified at a suitable level of detail such that, at the very least, functional requirements have been broken down into a suitable number of use cases. also, in order to utilize the concept of use case points, the fundamental requirement is that one should be able to count the number of transactions in each use case. this should pose no problem for standard use cases because a transaction is an event that occurs between an actor and the system being modeled. the actor element is absolutely fundamental to this measuring scheme. if you are using a use case driven project model, like that which is part of the rational unified process, your functional requirements will be defined as a number of use cases. to estimate the effort required to complete the whole project, you will need to create a model that estimates the total number of hours based on the number of use cases and what you know about them. the amount of work required to analyse and implement a use case varies with its complexity. a good way to start is to group the use cases into three categories: simple, average, and complex. (others use schemes like easy, moderate, and difficult. whatever works for you.) simple use cases would be standard data access use cases with no complicated logic or relations; complex ones would be use cases that affect many objects and where complicated logic is often required.

    so, let us take it step by step. first, you have to determine the number of actors in the system. this gives you what is called the unadjusted actor weight (uaw). actors are external to the system and interface with it. examples are end-users, other programs, data stores, etc. they should be included in any use case specification document. actors come in three types, just like our use cases: simple, average, and complex. a simple actor is another system that your system interfaces with via a programming interface of some sort, like a standard application programming interface (api). an average actor is either another system that your system interfaces with via a protocol (such as http or tcp/ip) or a text-based user interface; a data store also qualifies as average, since the results of test case runs might need to be verified manually by running sql statements on the store, verifying timing information for the protocol transfer, etc. a complex actor is a person interacting via a graphical interface. (end-users are often referred to as complex actors.)

    after determining the number of actors, you have to weight them. a simple actor has a factor of 1, an average actor has a factor of 2, and a complex actor has a factor of 3. these are called weighting factors, and the sum of the products gives the total unadjusted actor weight. so, total uaw is calculated by counting the number of actors in each category, multiplying each total by its weighting factor, and then adding the products. we can consider this in a table format:

    actor type | description | factor | number of actors | result
    simple | system interface | 1 | 2 | 2
    average | interactive or protocol-driven interface | 2 | 2 | 4
    complex | graphical interface | 3 | 4 | 12
    total actor weight (uaw): 18
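    the uaw arithmetic above is just a weighted sum; a minimal python sketch, using the hypothetical actor counts from the table, looks like this:

```python
# actor categories as (weighting factor, number of actors);
# the counts are the hypothetical ones from the example table
actors = {
    "simple": (1, 2),
    "average": (2, 2),
    "complex": (3, 4),
}

# uaw: multiply each count by its weighting factor and sum the products
uaw = sum(factor * count for factor, count in actors.values())
print(uaw)  # 18
```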

    secondly, you have to determine the number of use cases in the system, which is referred to as the unadjusted use case weight (uucw). the use cases are assigned weights depending on the number of transactions and/or scenarios they contain: determine which use cases are simple, average, and complex, where a transaction is defined as an atomic set of activities that are either performed entirely or not at all. a simple use case usually has three or fewer transactions and gets a factor of 5. an average use case usually has four to seven transactions and gets a factor of 10. a complex use case usually has more than seven transactions and gets a factor of 15.

    use case type | description | factor | number of use cases | result
    simple | 1 - 3 transactions | 5 | 8 | 40
    average | 4 - 7 transactions | 10 | 12 | 120
    complex | 8 or more transactions | 15 | 4 | 60
    total use case weight (uucw): 220
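    the same weighted-sum sketch works for the use case side (the counts are again the hypothetical ones from the table):

```python
# use case categories as (weighting factor, number of use cases);
# counts are the hypothetical ones from the example table
use_cases = {
    "simple": (5, 8),     # 1 - 3 transactions
    "average": (10, 12),  # 4 - 7 transactions
    "complex": (15, 4),   # 8 or more transactions
}

uucw = sum(factor * count for factor, count in use_cases.values())
print(uucw)  # 220
```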

    the sum of these products gives the total unadjusted use case weight: it is calculated by counting the number of use cases in each category, multiplying each count by its weight, and adding the products. the unadjusted actor weight, from the first step, is then added to the unadjusted use case weight to get what is called the unadjusted use case points (uucp). so your simple calculation is:

    uucp = uaw + uucw

    so, with our example here, you would have:

    uucp = 18 + 220 = 238

    thirdly, the use case points are adjusted based on the values assigned to a number of technical factors and environmental factors. this is the step that people often find a little odd, so it is best to use a standard measure of some sort; that is not to say, however, that there is one standard measure in use. this can be relative to a given organization. first let us consider the technical complexity of the system. to do this you use the table below. you walk through the table and rate the technical factors on a scale from 0 to 5, where an assigned value of 0 means that the factor is irrelevant and an assigned value of 5 means that the factor is very important.

    factor number | description | weight | assigned value | calc. factor | comment
    t1 | distributed system | 2 | 0 | 0 | central system
    t2 | response time or throughput performance objectives | 1 | 3 | 3 | speed is probably limited by human input
    t3 | end-user efficiency | 1 | 5 | 5 | needs to be efficient
    t4 | complex internal processing | 1 | 1 | 1 | no complex calculations
    t5 | code must be reusable | 1 | 0 | 0 | no
    t6 | easy to install | 0.5 | 5 | 2.5 | must be very easy to install
    t7 | easy to use | 0.5 | 5 | 2.5 | very user-friendly
    t8 | portable | 2 | 0 | 0 | no
    t9 | easy to change | 1 | 4 | 4 | low maintenance cost
    t10 | concurrent | 1 | 0 | 0 | no
    t11 | includes special security objectives | 1 | 3 | 3 | normal security
    t12 | provides direct access for third parties | 1 | 5 | 5 | web users have direct access
    t13 | special user training facilities are required | 1 | 1 | 1 | few internal users, easy-to-use system
    total technical factor (tfactor): 27

    each factor is assigned a value between 0 and 5 depending on its assumed influence on the project. a rating of 0 means the factor is irrelevant for this project; 5 means it is essential. the idea is that the technical complexity factor (tcf) is calculated by multiplying the value of each factor (t1-t13) by its weight and then adding all these products to get the sum called the tfactor (or what some call the tef multiplier in simplified models). the tcf is then calculated via the following formula:

    tcf = 0.6 + (.01 × tfactor)

    so with our current example we have:

    tcf = 0.6 + (.01 × 27) = 0.6 + 0.27 = 0.87
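    the tfactor and tcf computation can be sketched directly from the table; the (weight, assigned value) pairs below are the hypothetical example values:

```python
# technical factors t1-t13 as (weight, assigned value) from the example table
technical = [
    (2, 0),    # t1  distributed system
    (1, 3),    # t2  response time / throughput objectives
    (1, 5),    # t3  end-user efficiency
    (1, 1),    # t4  complex internal processing
    (1, 0),    # t5  reusable code
    (0.5, 5),  # t6  easy to install
    (0.5, 5),  # t7  easy to use
    (2, 0),    # t8  portable
    (1, 4),    # t9  easy to change
    (1, 0),    # t10 concurrent
    (1, 3),    # t11 special security objectives
    (1, 5),    # t12 direct access for third parties
    (1, 1),    # t13 special user training facilities
]

tfactor = sum(weight * value for weight, value in technical)
tcf = 0.6 + 0.01 * tfactor
print(tfactor)        # 27.0
print(round(tcf, 2))  # 0.87
```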

    the training and skills of the staff also have a great impact on your time estimates. this is captured in the environment factor (ef). use the table below and assign values that are appropriate for your project, in a similar way as with the technical factors.

    factor number | description | weight | assigned value | calc. factor | comment
    e1 | familiar with the project model that is used | 1.5 | 1 | 1.5 | most staff are not familiar with the model
    e2 | application experience | 0.5 | 4 | 2 | most staff have worked many years in this application
    e3 | object-oriented experience | 1 | 1 | 1 | most staff are former cobol programmers
    e4 | lead analyst capability | 0.5 | 5 | 2.5 | a consultant from callista is used
    e5 | motivation | 1 | 5 | 5 | the team is highly motivated
    e6 | stable requirements | 2 | 2 | 4 | we expect some changes
    e7 | part-time staff | -1 | 3 | -3 | unfortunately several staff members work part-time
    e8 | difficult programming language | -1 | 1 | -1 | visual basic
    total environment factor (efactor): 12

    ef is calculated accordingly by multiplying the value of each factor (e1-e8) by its weight and adding all the products to get the sum called the efactor. the formula below is applied:

    ef = 1.4+(-0.03 × efactor)

    given our current numbers so far, we would have:

    ef = 1.4 + (-0.03 × 12) = 1.4 - 0.36 = 1.04
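    with the table values, efactor is 12 and the formula gives ef = 1.4 - 0.36 = 1.04; the same computation sketched in python:

```python
# environmental factors e1-e8 as (weight, assigned value) from the example table
environmental = [
    (1.5, 1),  # e1 familiarity with the project model
    (0.5, 4),  # e2 application experience
    (1, 1),    # e3 object-oriented experience
    (0.5, 5),  # e4 lead analyst capability
    (1, 5),    # e5 motivation
    (2, 2),    # e6 stable requirements
    (-1, 3),   # e7 part-time staff (negative weight: more is worse)
    (-1, 1),   # e8 difficult programming language
]

efactor = sum(weight * value for weight, value in environmental)
ef = 1.4 - 0.03 * efactor
print(efactor)       # 12.0
print(round(ef, 2))  # 1.04
```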

    now you can compute the adjusted use case points (aucp). (note that adjusted use case points is generally just referred to as use case points (ucp).) we do this by multiplying the factors we have calculated. we use this formula:

    aucp = uucp × tcf × ef

    given the numbers we have derived, we have:

    aucp = 238 × 0.87 × 1.04 = 215.3424

    now we (finally!) arrive at the final effort. we simply multiply the adjusted ucp by a conversion factor. this conversion factor denotes the person-hours of effort required per use case point for a given language/technology combination. the organization will have to determine the conversion factors for various such combinations. karner originally suggested that each aucp would require about twenty person-hours, which with our hypothetical numbers would be:

    person-hours: 215.3424 × 20 = 4306.848
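    putting the whole chain together with the example numbers:

```python
uaw, uucw = 18, 220
uucp = uaw + uucw       # unadjusted use case points: 238
tcf = 0.6 + 0.01 * 27   # technical complexity factor
ef = 1.4 - 0.03 * 12    # environment factor
aucp = uucp * tcf * ef  # adjusted use case points
hours = aucp * 20       # karner's suggested 20 person-hours per point
print(round(aucp, 4))   # 215.3424
print(round(hours, 3))  # 4306.848
```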

    naturally, you could modify the factors in this model to work better in your own projects. for example, some have argued against a flat twenty staff hours per use case point, saying that the number should instead be taken as a range between fifteen and thirty hours per use case point. what tends to be common is the recommendation that the environmental factors should determine the number of staff hours per use case point: count the factors in e1 through e6 that are below 3 and add the number of factors in e7 through e8 that are above 3. if the total is 2 or less, the general idea is to use twenty staff hours per ucp; if the total is 3 or 4, use twenty-eight staff hours per ucp. if the total is 5 or more, it is usually recommended that changes be made to the project so the number can be brought down because, in this case, the risk is unacceptably high. another possibility is to increase the number of staff hours to thirty-six per use case point.
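    that common recommendation can be sketched as a small helper; this is a hedged sketch of the rule as described above, not karner's original text:

```python
def staff_hours_per_ucp(e_values):
    """pick staff hours per use case point from the assigned values e1..e8.

    counts the e1-e6 entries below 3 plus the e7-e8 entries above 3;
    returns None when the total is 5 or more, meaning the project should
    be adjusted (or, alternatively, 36 hours per point could be used).
    """
    total = sum(1 for v in e_values[:6] if v < 3)
    total += sum(1 for v in e_values[6:8] if v > 3)
    if total <= 2:
        return 20
    if total <= 4:
        return 28
    return None

# with the example table's assigned values (1, 4, 1, 5, 5, 2, 3, 1),
# three of e1-e6 are below 3 and none of e7-e8 is above 3, so 28 hours
print(staff_hours_per_ucp([1, 4, 1, 5, 5, 2, 3, 1]))  # 28
```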

    all of this was a somewhat gentle introduction to use case points, just to give you a very general idea. two other papers you might want to consider checking out in relation to this subject are the estimation of effort based on use cases by john smith and estimating object-oriented software projects with use cases by kirsten ribu.

    www.globaltester.com