
se3

Terms

MM-path endpoints
event quiescence, message quiescence, data quiescence
event quiescence
system is nearly idle, waiting for port input to trigger further processing
message quiescence
a unit that sends no messages is reached (module C in the example)
data quiescence
sequence of processing results in stored data that is not immediately used
ASF
atomic system function: an action that is observable at the system level in terms of port input and output events. It begins with a port input event, traverses one or more MM-paths, and ends with a port output event.
ASFs are an ____ ____ for MM-paths
upper limit: MM-paths should not cross ASF boundaries
ASFs represent the seam between _______ testing and _______ testing
system and integration testing: they are the largest item for integration testing and the smallest item for system testing
system testing is closest to _______ ________, that is we evaluate a product with respect to our experience
everyday experience
we tend to approach system testing from a _________ standpoint rather than a _________ one
we tend to approach system testing from a functional standpoint rather than a structural one
system testing is more than functional. what else does system testing involve?
- load/stress testing
- usability testing
- performance testing
- resource testing
functional testing: objective: ? basis: ? test case: ?
functional testing:
objective: assess whether the application does what it is supposed to do
basis: behavioural/functional specification
test case: sequences of ASFs (a thread)
stress testing
pushing system to its limit and beyond
performance testing: performance seen by: ? specified: ? unspecified: ?
performance testing:
performance seen by:
- users: delay, throughput
- system owner: memory, CPU, communications
specified: explicitly specified, expected to do well
unspecified: find the limit!
Usability Testing: ________ _________ in system operation.
Usability Testing: human element in system operation (GUI, messages, reports)
unit level thread
an execution-time path of instructions, or some flow path on a program graph
integration level thread
a sequence of MM-paths that implements some atomic system function, denoted perhaps as a sequence of module executions and messages
system level thread
a sequence of ASFs
views of a thread
there are many, such as:
- a sequence of ASFs
- a sequence of machine instructions
- a sequence of source instructions
- a sequence of transitions in a state machine
- a sequence of interleaved port input and output events
- a system-level test case
- a scenario of normal usage
unit thread
- two levels of thread in integration testing: MM-paths and ASFs
MM-Path (in context of thread definitions)
- a path in the MM-path graph of a set of units
- the MM-path graph is a directed graph in which module execution paths are nodes and edges show execution-time sequence
ASF Graph (context: thread defs)
for a system defined in terms of ASFs, the ASF graph is the directed graph in which nodes are ASFs and edges represent sequential flow
Source ASF (Thread Defs)
an ASF that appears as a source node in the ASF graph of a system
Sink ASF (Thread Defs)
an ASF that appears as a sink node in the ASF graph (a sketch for finding source and sink ASFs follows)
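A minimal Python sketch of finding source and sink ASFs in an ASF graph; the graph and the ASF names are hypothetical examples, not from the deck:
asf_graph = {  # adjacency list: ASF -> ASFs that can follow it
    "login": ["query", "update"],
    "query": ["logout"],
    "update": ["logout"],
    "logout": [],
}

def source_asfs(graph):
    # source nodes: ASFs with no incoming edges
    targets = {dst for dsts in graph.values() for dst in dsts}
    return [asf for asf in graph if asf not in targets]

def sink_asfs(graph):
    # sink nodes: ASFs with no outgoing edges
    return [asf for asf, dsts in graph.items() if not dsts]

print(source_asfs(asf_graph))  # ['login']
print(sink_asfs(asf_graph))    # ['logout']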
Thread Graph (Thread Defs)
for a system defined in terms of threads, directed graph in which nodes are threads of a system and edges represent sequential execution of individual threads
System testing for a system that is described in terms of data, the focus is ________________, and for example _______ models are useful at the highest level
System testing for a system that is described in terms of data: the focus is the information used and created by the system (described in terms of variables, data structures, fields, records, data stores, and files), and for example ER models are useful at the highest level
test plan objectives and motivation: objective
to get ready and organized for test execution
what do test plan documents do for test plan objectives and motivation?
- provide guidance for executive management to support the test project and release the necessary resources
- provide a foundation for the system being tested as part of the overall project
- assure test coverage via a traceability matrix
- outline an orderly schedule of events / test milestones to be achieved
- specify the personnel, financial, and facility resources required
test case life cycle main idea:
test cases as products. therefore test cases have a lifecycle.
stages of test case life cycle
create -> draft -> review -> released
review -> deleted
released -> deprecated
released -> update -> review
update -> released
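A small Python sketch encoding these transitions as a table so that illegal moves are rejected; the state names come from the card above, everything else is illustrative:
TRANSITIONS = {
    "create": {"draft"},
    "draft": {"review"},
    "review": {"released", "deleted"},
    "released": {"deprecated", "update"},
    "update": {"review", "released"},  # a major update goes back to review
    "deleted": set(),
    "deprecated": set(),
}

def move(state, new_state):
    # advance a test case to new_state, rejecting illegal transitions
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

state = "create"
for nxt in ("draft", "review", "released", "update", "review", "released"):
    state = move(state, nxt)  # ends in 'released'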
released phase of test case cycle
- test case is ready for execution
- owner is the test organization
- review the test case for reusability
- if there is a need for an update, move to update
Create stage of test case life cycle:
test case id, requirements id, title, originator, creator, category, etc.
draft phase of test case life cycle
author, objective, environment, test steps, clean up, pass/fail criteria, candidate for automation, automation priority
review phase of test case life cycle:
- owner invites test engineers & developers to review
- ensure the case is executable and the pass/fail criteria are clearly stated
- changes may occur; once approved, the case goes to released
update phase of test case life cycle
- strengthen the test case as functionality/environment changes
- improve reliability
- form an idea about its automation potential
- a major update means another review
deleted phase of test case life cycle
the test case is no longer a valid one
deprecated phase of test case life cycle
- obsolete: system functionality changed without proper maintenance of the tests
- test cases not designed with reusability in mind
- carried forward due to carelessness; the original justification has long since disappeared
Test Suite Effort Estimation
- number of test cases created by one person per day
- number of test cases executed per person per day
- effort needed to create the test environment
- effort needed to train test engineers
- availability of engineers when they are needed
(these factors are combined in the sketch below)
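A back-of-envelope Python sketch combining the factors above; all of the numbers are hypothetical:
num_test_cases = 400
created_per_day = 5    # test cases created per person per day
executed_per_day = 20  # test cases executed per person per day
env_setup_days = 10    # effort to create the test environment
training_days = 5      # effort to train test engineers

creation_days = num_test_cases / created_per_day    # 80 person-days
execution_days = num_test_cases / executed_per_day  # 20 person-days
total = creation_days + execution_days + env_setup_days + training_days
print(f"estimated effort: {total:.0f} person-days")  # 115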
For large projects, test execution is measured on a ______ basis at the beginning and a ________ at the end of testing
weekly, daily
defect trend
- # of test cases in each category (passed, failed, blocked, invalid)
- # of defects in different states (open, resolved, irreproducible, hold, postponed, FAD, closed)
test execution trend
how many test cases are executed on a weekly basis?
status of work related to bug fixing:
- bug resolution rate (weekly rate of bug fixing)
- percentage of fixes that do not work on re-test
- turnaround time for fixing defects (defect aging)
measuring effectiveness
# defects found in testing / (# defects found + # defects not found)
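A one-function Python sketch of this metric, reading "# defects not found" as defects that escaped testing and were found later; the counts are hypothetical:
def test_effectiveness(found_in_testing, found_after_release):
    # fraction of all known defects that testing caught
    return found_in_testing / (found_in_testing + found_after_release)

print(test_effectiveness(90, 10))  # 0.9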
software metrics
measures to determine the degree of each quality characteristic attained by the product
Software reliability:
probability that the software will not cause a failure for some specified time
failure
divergence from expected behaviour
fault
the cause or representation of an error (i.e., a bug)
error
a programmer mistake (perhaps a misrepresentation of the specifications?)
software reliability big question, major issues, growth models
big question: how to estimate the reliability growth of software as its errors are being removed?
major issues:
- testing (how much, when to stop)
- field use (# of trained personnel, support staff)
s/w reliability growth models: observe past failure behaviour and give an estimate of future failure behaviour; about 40 models have been proposed
Simple Measure of Reliability
MTBF = MTTF + MTTR
(mean time between failures = mean time to fail + mean time to repair)
availability formula
probability that a system is still operating within requirements at a given point in time:
availability = MTTF / (MTTF + MTTR) * 100
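A Python sketch of the MTBF and availability formulas from the two cards above; the MTTF/MTTR values are hypothetical:
mttf = 450.0  # mean time to fail (hours)
mttr = 50.0   # mean time to repair (hours)

mtbf = mttf + mttr                         # mean time between failures
availability = mttf / (mttf + mttr) * 100  # percent

print(mtbf)          # 500.0 hours
print(availability)  # 90.0 %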
reliability growth
measures and predicts reliability through the testing process using a growth function to represent the process
- independent variables of the growth function could be time or number of test cases (or stages)
- dependent variables can be reliability, failure rate, or cumulative number of errors
failure intensity rate (failures per units time)
lambda(tau) = (# of failures in (tau, tau + delta_tau]) / delta_tau
tau = program CPU time (in time-shared systems) or wall clock time (in embedded systems)
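A Python sketch that estimates failure intensity over a window (tau, tau + delta_tau] from recorded failure times; the data are hypothetical:
failure_times = [2.1, 5.4, 9.8, 14.0, 21.5]  # CPU hours at each failure

def failure_intensity(times, tau, delta_tau):
    # failures observed in (tau, tau + delta_tau], per unit time
    count = sum(1 for t in times if tau < t <= tau + delta_tau)
    return count / delta_tau

print(failure_intensity(failure_times, 0.0, 10.0))  # 0.3 failures/hour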
operational profile
description of input events expected to occur in actual software operation (how the software will be used in practice)
consequence: without it, we are unable to go from lambda_test to lambda_field
6 basic assumptions of musa
1. errors in the program are independent and distributed with a constant average occurrence rate
2. execution time between failures is large with respect to instruction execution time
3. the potential test space covers its use space
4. the set of inputs per test run is randomly selected
5. all failures are observed
6. the error causing a failure is immediately fixed, or else its re-occurrence is not counted again
musa basic model: failure intensity
number of failures per unit time
operational phase
once the software has been released and is operational, no features are added and no repairs are made between releases, so failure intensity becomes constant
both models reduce to homogeneous Poisson processes that take FI as a parameter: failures in a given time period follow a Poisson distribution, while failure intervals follow an exponential distribution
operational phase reliability
R(tau) = exp(-lambda * tau), with reliability R and failure intensity lambda
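A Python sketch of the operational-phase reliability formula; the failure intensity value is hypothetical:
import math

def reliability(failure_intensity, tau):
    # probability of surviving an interval of length tau failure-free
    return math.exp(-failure_intensity * tau)

print(reliability(0.01, 50))  # ~0.6065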
fan-in
number of local flows into a procedure plus the number of global structures read by the procedure
fan-out
number of local flows from a procedure plus the number of global variables updated by a procedure
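A Python sketch counting fan-in and fan-out from a call graph; this covers local flows only (a full count would also include global reads/updates), and the procedures are hypothetical:
calls = {  # caller -> callees
    "main": ["parse", "report"],
    "parse": ["lex"],
    "report": ["lex"],
    "lex": [],
}

def fan_out(proc):
    # local flows out of proc
    return len(calls[proc])

def fan_in(proc):
    # local flows into proc
    return sum(1 for callees in calls.values() if proc in callees)

print(fan_in("lex"), fan_out("main"))  # 2 2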
software evolution
a continuous change from a lesser, simpler, or worse state to a higher or better state
software maintenance
- consists of activities required to keep a software system operational and responsive after it is accepted and placed in production
importance of software evolution
- huge investments in software make it a critical asset
- to maintain the value of these assets, they must be changed and updated
- the majority of the software budget in large companies is devoted to evolving existing software rather than developing new software
program evolution dynamics
study of processes of system change
Lehman's law of continuing change
a program must change or become progressively less useful
Lehman's law of increasing complexity
as a program changes, its structure becomes increasingly complex unless extra resources are devoted to maintaining it
Lehman's law of large program evolution
system attributes (size, time between releases) are ~invariant for each system release
Lehman's law of organizational stability
a program's rate of development is ~constant
Lehman's law of conservation of familiarity
the incremental change in each release is ~constant
Lehman's law of continuing growth
functionality has to continually increase in order to maintain user satisfaction
Lehman's law of declining quality
the quality of systems appears to decline unless they are adapted to changing environments
Lehman's law of feedback system
evolution processes involve feedback systems for product improvement
types of maintenance - reasons
- maintenance to repair software faults: changing a system to correct deficiencies in the way it meets its requirements
- maintenance to adapt to a different operating environment: changing a system so that it operates in a different environment (computer, OS, etc.) from its initial implementation
- maintenance to add to or modify the system's functions: to satisfy new requirements
- maintenance to detect and correct latent faults before they become effective (preventive)
types of maintenance
corrective maintenance:
- focuses on fixing defects (fault repair); a reactive process (17%)
adaptive maintenance:
- includes changes needed to meet the evolving needs of users (changed hardware, environment, OS: 18%) or changes to enhance system functionality (business changes / new requirements: 65%)
perfective maintenance:
- efforts to improve the quality of the software (restructuring code, creating/updating docs, improving reliability/efficiency/performance)
- often done concurrently with adaptive and corrective maintenance
what are process metrics?
- process measurements that may be used to assess maintainability, such as:
- number of requests for corrective maintenance
- average time for impact analysis
- average time taken to implement a change request
- number of outstanding change requests
- if any or all of these are increasing, this may indicate a decline in maintainability
What are the prime objectives of software re-engineering? (there are 2)
1) increase quality and reduce complexity
2) reduce actual or anticipated maintenance costs over the software's lifetime
What is the prime driver for whether or not to re-engineer?
- whether the estimated benefits of re-engineering outweigh the costs
- otherwise, consider replacement
Software Re-eng personnel metrics?
- maintenance ability
- knowledge of the environment
- application knowledge
- experience
Software Re-eng environment metrics?
- office characteristics
- tools available
- hardware
- software
Software Re-eng product metrics?
- quantity (LOCs, FPs, etc.)
- complexity (graph, data, design)
- quality (maintainability, reusability, interoperability, portability)
Maintenance Costs without renewal - definition/formula?
T1 = t * (C + P)
where T1 is the total maintenance cost without renewal, t is elapsed time in months, C is computer costs per month, and P is personnel costs per month
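A Python sketch of the formula; all of the figures are hypothetical:
t = 24        # elapsed time in months
C = 2_000.0   # computer costs per month
P = 30_000.0  # personnel costs per month

T1 = t * (C + P)  # total maintenance cost without renewal
print(T1)         # 768000.0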
