CppDepend Rules Explorer
- All CppDepend Rules
- Quality Gates
- Code Smells
- Code Smells Regression
- Object Oriented Design
- CWE Coding Standard
- Memory Management
- STL
- Primitive Types Usage
- IO usage
- 64-bit portability
- Misc
- Vera++
- API Breaking Changes
- Code Diff Summary
- Code Coverage
- Dead Code
- Modernize C++ Code
- HICPP coding standard
- Cert coding standard
- Naming Conventions
- CPD Queries
- Hot Spots
- Statistics
- Samples of Custom rules
- Trend Metrics
- Defining JustMyCode
All CppDepend Rules
- Quality Gates Evolution
- Percentage Code Coverage
- Percentage Coverage on New Code
- Percentage Coverage on Refactored Code
- Blocker Issues
- Critical Issues
- New Blocker / Critical / High Issues
- Critical Rules Violated
- Percentage Debt
- Debt
- New Debt since Baseline
- Debt Rating per Namespace
- Annual Interest
- New Annual Interest since Baseline
- Avoid types too big
- Avoid types with too many methods
- Avoid types with too many fields
- Avoid methods too big, too complex
- Avoid methods with too many parameters
- Avoid methods with too many local variables
- Avoid methods with too many overloads
- Avoid methods potentially poorly commented
- Avoid types with poor cohesion
- From now, all types added should respect basic quality principles
- From now, all types added should be 100% covered by tests
- From now, all methods added should respect basic quality principles
- Avoid decreasing code coverage by tests of types
- Avoid making complex methods even more complex
- Avoid making large methods even larger
- Avoid adding methods to a type that already had many methods
- Avoid adding instance fields to a type that already had many instance fields
- Avoid transforming an immutable type into a mutable one
- Base class should not use derivatives
- Class shouldn't be too deep in inheritance tree
- Constructor should not call a virtual methods
- Don't assign static fields from instance methods
- Avoid Abstract Classes with too many methods
- Type should not have too many responsibilities
- Nested types should not be visible
- Projects with poor cohesion (RelationalCohesion)
- Projects that don't satisfy the Abstractness/Instability principle
- Higher cohesion - lower coupling
- Constructors of abstract classes should be declared as protected or private
- The class does not have a constructor.
- Class has a constructor with 1 argument that is not explicit.
- Value of pointer var, which points to allocated memory, is copied in copy constructor instead of allocating new memory.
- class class does not have a copy constructor which is recommended since the class contains a pointer to allocated memory.
- Member variable is not initialized in the constructor.
- Member variable is not assigned a value in classname::operator=.
- Unused private function: classname::funcname
- Using memfunc on class that contains a classname.
- Using memfunc on class that contains a reference.
- Using memset() on class which contains a floating point number.
- Memory for class instance allocated with malloc(), but class provides constructors.
- Memory for class instance allocated with malloc(), but class contains a std::string.
- class::operator= should return class &.
- Class Base which is inherited by class Derived does not have a virtual destructor.
- Suspicious pointer subtraction. Did you intend to write ->?
- operator= should return reference to this instance.
- No return statement in non-void function causes undefined behavior.
- operator= should either return reference to this instance or be declared private and left unimplemented.
- operator= should check for assignment to self to avoid problems with dynamic memory.
- Variable is assigned in constructor body. Consider performing initialization in initialization list.
- Member variable is initialized by itself.
- The class class defines member variable with name variable also defined in its parent class class.
- Buffer Copy without Checking Size of Input ('Classic Buffer Overflow')
- Divide By Zero
- Unchecked Error Condition
- Declaration of Catch for Generic Exception
- Improper Release of Memory Before Removing Last Reference ('Memory Leak')
- Double Free
- Use of Uninitialized Variable
- Incomplete Cleanup
- NULL Pointer Dereferenced
- Use of Obsolete Functions
- Comparing instead of Assigning
- Omitted Break Statement in Switch
- Dead Code
- Assignment to Variable without Use ('Unused Variable')
- Expression is Always False
- Expression is Always True
- Function Call with Incorrectly Specified Arguments
- Use of Potentially Dangerous Function
- Operator Precedence Logic Error
- Returning/dereferencing p after it is deallocated / released
- Memory pointed to by varname is freed twice.
- Allocation with funcName, funcName doesn't release it.
- Return value of allocation function funcName is not stored.
- Possible leak in public function. The pointer varname is not deallocated before it is allocated.
- Class class is unsafe, class::varname can leak by wrong usage.
- Memory leak: varname
- Resource leak: varname
- Deallocating a deallocated pointer: varname
- Dereferencing varname after it is deallocated / released
- The allocated size sz is not a multiple of the underlying type's size.
- Mismatching allocation and deallocation: varname
- Common realloc mistake: varname nulled but not freed upon failure
- Address of local auto-variable assigned to a function parameter.
- Address of an auto-variable returned.
- Pointer to local array variable returned.
- Reference to auto variable returned.
- Reference to temporary returned.
- Deallocation of an auto-variable results in undefined behaviour.
- Address of function parameter parameter returned.
- Assignment of function parameter has no effect outside the function.
- Assignment of function parameter has no effect outside the function. Did you forget dereferencing it?
- Array array[2] index array[1][1] out of bounds.
- Buffer is accessed out of bounds: buffer
- Dangerous usage of strncat - 3rd parameter is the maximum number of characters to append.
- index is out of bounds: Supplied size 2 is larger than actual size 1.
- The size argument is given as a char constant.
- Array index -1 is out of bounds.
- Buffer overrun possible for long command line arguments.
- Undefined behaviour, pointer arithmetic is out of bounds.
- Array index index is used before limits check.
- Possible buffer overflow if strlen(source) is larger than or equal to sizeof(destination).
- The array array is too small, the function function expects a bigger one.
- Memory allocation size is negative.
- Declaration of array with negative size is undefined behaviour
- Array x[10] accessed at index 20, which is out of bounds. Otherwise condition y==20 is redundant.
- Invalid iterator: iterator
- Same iterator is used with different containers container1 and container2.
- Iterators of different containers are used together.
- Invalid iterator iter used.
- When i==foo.size(), foo[i] is out of bounds.
- After push_back|push_front|insert(), the iterator iterator may be invalid.
- Invalid pointer pointer after push_back().
- Dangerous comparison using operator< on iterator.
- Suspicious condition. The result of find() is an iterator, but it is not properly checked.
- Inefficient usage of string::find() in condition; string::compare() would be faster.
- Dangerous usage of c_str(). The value returned by c_str() is invalid after this call.
- Returning the result of c_str() in a function that returns std::string is slow and redundant.
- Passing the result of c_str() to a function that takes std::string as argument no. 0 is slow and redundant.
- Possible inefficient checking for list emptiness.
- Missing bounds check for extra iterator increment in loop.
- Redundant checking of STL container element existence before removing it.
- Copying auto_ptr pointer to another does not create two equal objects since one has lost its ownership of the pointer.
- You can randomly lose access to pointers if you store auto_ptr pointers in an STL container.
- Object pointed by an auto_ptr is destroyed using operator delete. You should not use auto_ptr for pointers obtained with operator new[].
- Object pointed by an auto_ptr is destroyed using operator delete. You should not use auto_ptr for pointers obtained with function malloc.
- It is inefficient to call str.find(str) as it always returns 0.
- It is inefficient to swap an object with itself by calling str.swap(str).
- Ineffective call of function substr because it returns a copy of the object. Use operator= instead.
- Ineffective call of function empty(). Did you intend to call clear() instead?
- Return value of std::remove() ignored. Elements remain in container.
- Possible dereference of an invalid iterator: i
- Boolean value assigned to pointer.
- Boolean value assigned to floating point variable.
- Comparison of a function returning boolean value using relational (<, >, <= or >=) operator.
- Comparison of two functions returning boolean value using relational (<, >, <= or >=) operator.
- Comparison of a variable having boolean value using relational (<, >, <= or >=) operator.
- Incrementing a variable of type bool with postfix operator++ is deprecated by the C++ Standard. You should assign it the value true instead.
- Comparison of a boolean expression with an integer other than 0 or 1.
- Converting pointer arithmetic result to bool. The bool is always true unless there is undefined behaviour.
- Modifying string literal directly or indirectly is undefined behaviour.
- Undefined behavior: Variable varname is used as parameter and destination in s[n]printf().
- Unusual pointer arithmetic. A value of type char is added to a string literal.
- String literal Hello World doesn't match length argument for substr().
- String literal compared with variable foo. Did you intend to use strcmp() instead?
- Char literal compared with pointer foo. Did you intend to dereference it?
- Conversion of string literal Hello World to bool always evaluates to true.
- Unnecessary comparison of static strings.
- Comparison of identical string variables.
- Shifting 32-bit value by 64 bits is undefined behaviour
- Signed integer overflow for expression .
- Suspicious code: sign conversion of var in calculation, even though var can have a negative value
- int result is assigned to long variable. If the variable is long to avoid loss of information, then you have loss of information.
- int result is returned as long value. If the return value is long to avoid loss of information, then you have loss of information.
- scanf is deprecated: This function or variable may be unsafe. Consider using scanf_s instead.
- Invalid usage of output stream: << std::cout.
- fflush() called on input stream stdin may result in undefined behaviour on non-linux systems.
- Read and write operations without a call to a positioning function (fseek, fsetpos or rewind) or fflush in between result in undefined behaviour.
- Read operation on a file that was opened only for writing.
- Write operation on a file that was opened only for reading.
- Used file that is not opened.
- Repositioning operation performed on a file opened in append mode has no effect.
- scanf() without field width limits can crash with huge input data.
- printf format string requires 3 parameters but only 2 are given.
- %s in format string (no. 1) requires a char * but the argument type is Unknown.
- %d in format string (no. 1) requires int * but the argument type is Unknown.
- %f in format string (no. 1) requires float * but the argument type is Unknown.
- %s in format string (no. 1) requires char * but the argument type is Unknown.
- %n in format string (no. 1) requires int * but the argument type is Unknown.
- %p in format string (no. 1) requires an address but the argument type is Unknown.
- %X in format string (no. 1) requires unsigned int but the argument type is Unknown.
- %u in format string (no. 1) requires unsigned int but the argument type is Unknown.
- %i in format string (no. 1) requires int but the argument type is Unknown.
- %f in format string (no. 1) requires double but the argument type is Unknown.
- I in format string (no. 1) is a length modifier and cannot be used without a conversion specifier.
- Width 5 given in format string (no. 10) is larger than destination buffer [0], use %-1s to prevent overflowing it.
- printf: referencing parameter 2 while 1 arguments given
- Either the condition is redundant or there is division by zero at line 0.
- Instance of varname object is destroyed immediately.
- Casting between float* and double* which have an incompatible binary data representation.
- Shifting a negative value is undefined behaviour
- Buffer varname must have size of 2 integers if used as parameter of pipe().
- Race condition: non-interlocked access after InterlockedDecrement(). Use InterlockedDecrement() return value instead.
- Buffer var is being written before its old content has been used.
- Variable var is reassigned a value before the old one has been used.
- Comparison of two identical variables with isless(varName,varName) always evaluates to false.
- Storing func_name() return value in char variable and then comparing with EOF.
- Function parameter parametername should be passed by reference.
- Redundant code: Found a statement that begins with type constant.
- Signed char type used as array index.
- char type used as array index.
- When using char variables in bit operations, sign extension can generate unexpected results.
- The scope of the variable varname can be reduced.
- Variable var is reassigned a value before the old one has been used. break; missing?
- Buffer var is being written before its old content has been used. break; missing?
- Redundant assignment of varname to itself.
- memset() called to fill 0 bytes.
- The 2nd memset() argument varname is a float, its representation is implementation defined.
- The 2nd memset() argument varname doesn't fit into an unsigned char.
- Clarify calculation precedence for + and ?.
- Ineffective statement similar to *A++;. Did you intend to write (*A)++;?
- Same expression on both sides of &&.
- Same expression in both branches of ternary operator.
- Consecutive return, break, continue, goto or throw statements are unnecessary.
- Statements following return, break, continue, goto or throw will never be executed.
- Checking if unsigned variable varname is less than zero.
- Unsigned variable varname can't be negative so it is unnecessary to test it.
- A pointer can not be negative so it is either pointless or an error to check if it is.
- A pointer can not be negative so it is either pointless or an error to check if it is not.
- Passing NULL after the last typed argument to a variadic function leads to undefined behaviour.
- Using NaN/Inf in a computation.
- Comma is used in return statement. The comma can easily be misread as a ;.
- Redundant pointer operation on varname - it's already a pointer.
- Label is not used. Should this be a case of the enclosing switch()?
- Label is not used.
- Expression x = x++; depends on order of evaluation of side effects
- Prefer prefix ++/-- operators for non-primitive types.
- Source files should not use the '\r' (CR) character
- File names should be well-formed
- No trailing whitespace
- Don't use tab characters
- No leading and no trailing empty lines
- Line cannot be too long
- There should not be too many consecutive empty lines
- Source file should not be too long
- One-line comments should not have forced continuation
- Reserved names should not be used for preprocessor macros
- Some keywords should be followed by a single space
- Some keywords should be immediately followed by a colon
- Keywords break and continue should be immediately followed by a semicolon
- Keywords return and throw should be immediately followed by a semicolon or a single space
- Semicolons should not be isolated by spaces or comments from the rest of the code
- Keywords catch, for, if, switch and while should be followed by a single space
- Comma should not be preceded by whitespace, but should be followed by one
- Identifiers should not be composed of 'l' and 'O' characters only
- Curly brackets from the same pair should be either in the same line or in the same column
- Negation operator should not be used in its short form
- Source files should contain the copyright notice
- HTML links in comments and string literals should be correct
- Calls to min/max should be protected against accidental macro substitution
- Unnamed namespaces are not allowed in header files
- Using namespace is not allowed in header files
- Control structures should have complete curly-braced block of code
- New Projects
- Projects removed
- Projects where code was changed
- New namespaces
- Namespaces removed
- Namespaces where code was changed
- New types
- Types removed
- Types where code was changed
- Heuristic to find types moved from one namespace or project to another
- Types directly using one or several types changed
- Types indirectly using one or several types changed
- New methods
- Methods removed
- Methods where code was changed
- Methods directly calling one or several methods changed
- Methods indirectly calling one or several methods changed
- New fields
- Fields removed
- Third party types that were not used and that are now used
- Third party types that were used and that are not used anymore
- Third party methods that were not used and that are now used
- Third party methods that were used and that are not used anymore
- Third party fields that were not used and that are now used
- Third party fields that were used and that are not used anymore
- Use auto specifier
- Use nullptr
- Modernize loops
- Use unique_ptr instead of auto_ptr
- Use override keyword
- Pass By Value
- Avoid Bind
- Modernize deprecated headers
- Modernize make_shared
- Modernize make_unique
- Modernize raw string literal
- Modernize redundant void arg
- Modernize random shuffle
- Modernize return braced init list
- Modernize shrink-to-fit
- Modernize unary static-assert
- Modernize use bool literals
- Modernize use default member init
- Modernize use emplace
- Modernize use equals default
- Modernize use equals delete
- Modernize use noexcept
- Modernize use transparent functors
- Modernize use using
- Braces around statements
- Deprecated headers
- Exception baseclass
- Explicit conversions
- Function size
- Invalid access moved
- Member init
- Move const arg
- Named parameter
- New and delete overloads
- No array decay
- No assembler
- No malloc
- Signed bitwise
- Special member functions
- Undelegated constructor
- Use emplace
- Use noexcept
- Use auto
- HICPP-Use nullptr
- Use equals default
- Use equals delete
- Static assert
- Check Postfix operators
- Check C-style variadic functions
- Delete null pointer
- Check new and delete overloads
- Check change of std or posix namespace
- Finds anonymous namespaces in headers.
- Do not call system()
- Finds violations of the rule Throw by value, catch by reference.
- Detect errors when converting a string to a number.
- Do not use setjmp() or longjmp().
- Handle all exceptions thrown before main() begins executing
- Exception objects must be nothrow copy constructible.
- Do not copy a FILE object.
- Do not use floating-point variables as loop counters.
- Check the usage of std::rand()
- Performance of move constructor init
- Instance fields should be prefixed with a 'm_'
- Static fields should be prefixed with a 's_'
- Exception class name should be suffixed with 'Exception'
- Types name should begin with an Upper character
- Avoid types with name too long
- Avoid methods with name too long
- Avoid fields with name too long
- Avoid naming types and namespaces with the same identifier
- Most used types (Rank)
- Most used methods (Rank)
- Most used namespaces (#NamespacesUsingMe )
- Most used types (#TypesUsingMe )
- Most used methods (#MethodsCallingMe )
- Namespaces that use many other namespaces (#NamespacesUsed )
- Types that use many other types (#TypesUsed )
- Methods that use many other methods (#MethodsCalled )
- High-level to low-level Projects (Level)
- High-level to low-level namespaces (Level)
- High-level to low-level types (Level)
- High-level to low-level methods (Level)
- Max # Lines of Code for Methods (JustMyCode)
- Average # Lines of Code for Methods
- Average # Lines of Code for Methods with at least 3 Lines of Code
- Max # Lines of Code for Types (JustMyCode)
- Average # Lines of Code for Types
- Max Cyclomatic Complexity for Methods
- Max Cyclomatic Complexity for Types
- Average Cyclomatic Complexity for Methods
- Average Cyclomatic Complexity for Types
- Max Nesting Depth for Methods
- Average Nesting Depth for Methods
- Max # of Methods for Types
- Average # Methods for Types
- Max # of Methods for Interfaces
- Average # Methods for Interfaces
+450 CppDepend Rules
More than 450 default code rules to check against best practices. Support for Code Query over LINQ (CQLinq) to easily write custom rules and query code.
Quality Gates Evolution
Shows the evolution of quality gates between the baseline and now.
When a quality gate relies on a diff between now and the baseline (like New Debt since Baseline), it is not executed against the baseline, so its evolution is not available.
Double-click a quality gate for editing.
Percentage Code Coverage
Code coverage is a measure used to describe the degree to which the source code of a program is exercised by a particular test suite. A program with high code coverage, measured as a percentage, has had more of its source code executed during testing, which suggests it has a lower chance of containing undetected bugs than a program with low code coverage.
Code coverage is arguably the most important code quality metric. But coverage alone is not enough: the team needs to ensure that results are checked at test time. These checks can be done both in test code and in application code through assertions. The important point is that a test must fail explicitly when a check is invalidated during test execution.
This quality gate defines a fail threshold (70%) and a warn threshold (80%). These are indicative thresholds, and in practice the higher the better. To achieve high coverage and low risk, make sure that new and refactored classes get 100% covered by tests, and that the application and test code contain as many checks/assertions as possible.
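As an illustration of why checks must fail explicitly, here is a minimal C++ sketch; the function under test, `parse_percent`, is invented for the example. Covering a line is only useful if an assertion fails when that line produces a wrong result.

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Hypothetical function under test: parses a percentage in [0, 100].
int parse_percent(const std::string& s) {
    int value = std::stoi(s);
    if (value < 0 || value > 100)
        throw std::out_of_range("percent out of range: " + s);
    return value;
}

void test_parse_percent() {
    // The result is checked, not merely executed: a wrong value
    // makes the test fail explicitly.
    assert(parse_percent("70") == 70);

    // The error path is both covered AND checked.
    bool threw = false;
    try { parse_percent("150"); } catch (const std::out_of_range&) { threw = true; }
    assert(threw);
}
```

Both branches of `parse_percent` are executed and validated, which is what makes the coverage figure meaningful.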
Percentage Coverage on New Code
New Code is defined as methods added since the baseline.
To achieve high code coverage it is essential that new code is properly tested and covered. It is advised that all new non-UI classes get 100% coverage.
Typically, 90% of a class is easy to cover with tests while the remaining 10% is hard to reach. That hard-to-reach 10% is not easily testable, which suggests it is not well designed, and such code is often especially error-prone. This is why it is important to reach 100% coverage for a class: to make sure that potentially error-prone code gets tested.
Percentage Coverage on Refactored Code
Refactored Code is defined as methods where code was changed since the baseline.
Comment and formatting changes are not considered refactoring.
To achieve high code coverage it is essential that refactored code is properly tested and covered. When refactoring a class or a method, also write tests to make sure it gets 100% covered.
Typically, 90% of a class is easy to cover with tests while the remaining 10% is hard to reach. That hard-to-reach 10% is not easily testable, which suggests it is not well designed, and such code is often especially error-prone. This is why it is important to reach 100% coverage for a class: to make sure that potentially error-prone code gets tested.
Blocker Issues
An issue with the severity Blocker cannot move to production; it must be fixed.
The severity of an issue is either defined explicitly in the rule source code, or inferred from the issue's annual interest and the thresholds defined in CppDepend Project Properties > Issue and Debt.
Critical Issues
An issue with the severity Critical shouldn't move to production. It still can, for imperative business needs, but at worst it must be fixed during the next iterations.
The severity of an issue is either defined explicitly in the rule source code, or inferred from the issue's annual interest and the thresholds defined in CppDepend Project Properties > Issue and Debt.
New Blocker / Critical / High Issues
An issue with the severity Blocker cannot move to production; it must be fixed.
An issue with the severity Critical shouldn't move to production. It still can, for imperative business needs, but at worst it must be fixed during the next iterations.
An issue with a severity level High should be fixed quickly, but can wait until the next scheduled interval.
The severity of an issue is either defined explicitly in the rule source code, or inferred from the issue's annual interest and the thresholds defined in CppDepend Project Properties > Issue and Debt.
Critical Rules Violated
The concept of critical rule is useful to pinpoint certain rules that should not be violated.
A rule can be made critical simply by checking the Critical button in the rule edit control and then saving the rule.
This quality gate fails if any critical rule gets any violations.
When no baseline is available, rules that rely on a diff are not counted. If this quality gate's count decreases slightly for no apparent reason, it is most likely because diff-based rules are not counted when the baseline is not defined.
Percentage Debt
% Debt is defined as a percentage relating:
• the estimated total effort to develop the code base
• and the estimated total time to fix all issues (the Debt)
The estimated total effort to develop the code base is inferred from the number of lines of code of the code base and from the Estimated number of man-days to develop 1000 logical lines of code setting found in CppDepend Project Properties > Issue and Debt.
Debt documentation: http://cppdepend.com/Doc_TechnicalDebt#Debt
This quality gate fails if the estimated debt is more than 30% of the estimated effort to develop the code base, and warns if it is more than 20%.
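The gate's arithmetic can be sketched as follows. This is not CppDepend's API; the function names, and the 18 man-days-per-1000-lines figure used when exercising it, are assumptions for illustration.

```cpp
#include <cassert>

enum class GateStatus { Pass, Warn, Fail };

// Development effort inferred from logical lines of code and the
// "man-days to develop 1000 logical lines" project setting.
double debt_percentage(double debt_man_days, int logical_loc,
                       double man_days_per_kloc) {
    double dev_effort = (logical_loc / 1000.0) * man_days_per_kloc;
    return 100.0 * debt_man_days / dev_effort;
}

// Fail above 30% debt, warn above 20%, as described in the text.
GateStatus percentage_debt_gate(double pct) {
    if (pct > 30.0) return GateStatus::Fail;
    if (pct > 20.0) return GateStatus::Warn;
    return GateStatus::Pass;
}
```

For example, 50 man-days of debt on a 100,000-line code base (at 18 man-days per 1,000 lines, i.e. 1,800 man-days of effort) is under 3% debt, so the gate passes.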
Debt
This Quality Gate is disabled by default because the fail and warn thresholds for unacceptable Debt in man-days depend on the project size, the number of developers and the overall context.
However, you can refer to the default Quality Gate Percentage Debt.
The Debt is defined as the sum of estimated effort to fix all issues. Debt documentation: http://cppdepend.com/Doc_TechnicalDebt#Debt
New Debt since Baseline
This Quality Gate fails if the estimated effort to fix new or worsened issues (what is called the New Debt since Baseline) is higher than 2 man-days.
This Quality Gate warns if this estimated effort is positive.
Debt documentation: http://cppdepend.com/Doc_TechnicalDebt#Debt
Debt Rating per Namespace
Forbids namespaces with a poor Debt Rating of E or D.
The Debt Rating of a code element is derived from its Debt Ratio and from the rating thresholds defined in the project Debt Settings.
The Debt Ratio of a code element is the ratio, as a percentage, of the Debt amount (in man-days) to the estimated effort to develop the code element (also in man-days).
The estimated effort to develop the code element is inferred from the code element's number of logical lines of code and from the project Debt Settings parameter estimated number of man-days to develop 1000 logical lines of code.
Logical lines of code correspond to the number of debug breakpoints in a method and do not depend on code formatting or comments.
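As a sketch, the ratio-to-rating mapping might look like the function below. The actual cut-offs come from the project Debt Settings; the thresholds used here are assumptions, not CppDepend defaults.

```cpp
#include <cassert>

// Illustrative Debt Ratio (%) to letter-rating mapping; cut-offs are assumed.
char debt_rating(double debt_ratio_percent) {
    if (debt_ratio_percent <= 5.0)  return 'A';
    if (debt_ratio_percent <= 10.0) return 'B';
    if (debt_ratio_percent <= 20.0) return 'C';
    if (debt_ratio_percent <= 50.0) return 'D';
    return 'E';
}

// The quality gate forbids code elements rated D or E.
bool gate_fails(double debt_ratio_percent) {
    char rating = debt_rating(debt_ratio_percent);
    return rating == 'D' || rating == 'E';
}
```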
The Quality Gate can be modified to match projects, types or methods with a poor Debt Rating, instead of matching namespaces.
Annual Interest
This Quality Gate is disabled by default because the fail and warn thresholds for unacceptable Annual-Interest in man-days depend on the project size, the number of developers and the overall context.
However, you can refer to the default Quality Gate New Annual Interest since Baseline.
The Annual-Interest is defined as the estimated annual cost, in man-days, of leaving all issues unfixed.
Each rule can either provide a formula to compute the Annual-Interest per issue, or assign a Severity level for each issue. Some thresholds defined in Project Properties > Issue and Debt > Annual Interest are used to infer an Annual-Interest value from a Severity level. Annual Interest documentation: http://cppdepend.com/Doc_TechnicalDebt#AnnualInterest
New Annual Interest since Baseline
This Quality Gate fails if the estimated annual cost of leaving all issues unfixed has increased by more than 2 man-days since the baseline.
This Quality Gate warns if this estimated annual cost is positive.
This estimated annual cost is named the Annual-Interest.
Each rule can either provide a formula to compute the Annual-Interest per issue, or assign a Severity level for each issue. Some thresholds defined in Project Properties > Issue and Debt > Annual Interest are used to infer an Annual-Interest value from a Severity level. Annual Interest documentation: http://cppdepend.com/Doc_TechnicalDebt#AnnualInterest
Avoid types too big
This rule matches types with more than 200 lines of code. Only lines of code in JustMyCode methods are taken into account.
Types where NbLinesOfCode > 200 are extremely complex to develop and maintain. See the definition of the NbLinesOfCode metric here http://www.cppdepend.com/Metrics.aspx#NbLinesOfCode
Maybe you are facing the God Class phenomenon: A God Class is a class that controls way too many other classes in the system and has grown beyond all logic to become The Class That Does Everything.
How to Fix:
Types with many lines of code should be split into a group of smaller types.
To refactor a God Class you'll need patience, and you might even need to recreate everything from scratch. Here is some refactoring advice:
• The logic in the God Class must be split into smaller classes. These smaller classes can even become private classes nested in the original God Class, whose instances are then composed of instances of the smaller nested classes.
• The partitioning into smaller classes should be driven by the multiple responsibilities handled by the God Class. To identify these responsibilities, it often helps to look for subsets of methods strongly coupled with subsets of fields.
• If the God Class contains far more logic than state, a good option can be to define one or several static classes that contain no static fields, only pure static methods. A pure static method is a function that computes its result solely from its input parameters; it neither reads nor assigns any static or instance field. The main advantage of pure static methods is that they are easy to test.
• Try to maintain the interface of the God Class at first and delegate calls to the new extracted classes. In the end the God Class should be a pure facade without its own logic. Then you can keep it for convenience or throw it away and start to use the new classes only.
• Unit Tests can help: write tests for each method before extracting it to ensure you don't break functionality.
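The delegation step can be sketched like this; every class and function name below is invented for illustration. The former God Class keeps its public interface but only forwards to the extracted pieces, one of which is a namespace of pure static (state-free) functions.

```cpp
#include <cassert>
#include <string>

// Pure static logic extracted from the God Class: computes its result from
// parameters only, touching no static or instance state, so it is easy to test.
namespace pricing {
    inline double discounted(double price, double rate) {
        return price * (1.0 - rate);
    }
}

// Another extracted responsibility: formatting.
class OrderFormatter {
public:
    static std::string label(int id) { return "order#" + std::to_string(id); }
};

// The former God Class, now a thin facade that only delegates.
class OrderManager {
public:
    double total(double price, double rate) const {
        return pricing::discounted(price, rate);
    }
    std::string label(int id) const { return OrderFormatter::label(id); }
};
```

Callers of OrderManager keep compiling unchanged, while the logic now lives in small, independently testable units.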
The estimated Debt, which means the effort to fix such an issue, varies linearly from 1 hour for a type with 200 lines of code up to 10 hours for a type with 2,000 or more lines of code.
In Debt and Interest computation, this rule takes into account the fact that static types with no mutable fields are just a collection of static methods that can easily be split and moved from one type to another.
Avoid types with too many methods
This rule matches types with more than 20 methods. Such type might be hard to understand and maintain.
Notice that methods like constructors or property and event accessors are not taken into account.
Having many methods for a type might be a symptom of too many responsibilities implemented.
Maybe you are facing the God Class phenomenon: A God Class is a class that controls way too many other classes in the system and has grown beyond all logic to become The Class That Does Everything.
How to Fix:
To refactor a God Class properly, please read the How to Fix advice of the default rule Avoid types too big. The estimated Debt, which means the effort to fix such issue, varies linearly from 1 hour for a type with 20 methods, up to 10 hours for a type with 200 or more methods.
In Debt and Interest computation, this rule takes into account the fact that static types with no mutable fields are just collections of static methods that can easily be split and moved from one type to another.
Avoid types with too many fields
This rule matches types with more than 15 fields. Such type might be hard to understand and maintain.
Notice that constant fields and static-readonly fields are not counted. Enumeration types are also not counted.
Having many fields for a type might be a symptom of too many responsibilities implemented.
How to Fix:
To refactor such a type and increase code quality and maintainability, you'll certainly have to group subsets of fields into smaller types and dispatch the logic implemented in the methods into these smaller types.
More refactoring advice can be found in the How to Fix section of the default rule Avoid types too big.
The estimated Debt, which means the effort to fix such issue, varies linearly from 1 hour for a type with 15 fields, up to 10 hours for a type with 200 or more fields.
Avoid methods too big, too complex
This rule matches methods where ILNestingDepth > 2 and (NbLinesOfCode > 35 or CyclomaticComplexity > 20). Such a method is typically hard to understand and maintain.
Maybe you are facing the God Method phenomenon. A God Method is a method that handles way too many processes in the system and has grown beyond all logic to become The Method That Does Everything. As the need for new processing grows, some programmers figure: why create a new method for each process when I can just add an if?
See the definition of the CyclomaticComplexity metric here: http://www.cppdepend.com/Metrics.aspx#CC
How to Fix:
A large and complex method should be split into smaller methods, or one or several classes can even be created for that purpose.
During this process it is important to question the scope of each variable local to the method. This can indicate whether such a local variable should become an instance field of the newly created class(es).
Large switch…case structures might be refactored with the help of a set of types that implement a common interface, the interface polymorphism playing the role of the switch…case tests.
Unit Tests can help: write tests for each method before extracting it to ensure you don't break functionality.
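The switch…case advice above can be sketched as follows; Shape, Circle and Square are illustrative names, and the virtual area() call plays the role formerly played by a switch on a shape-kind enum:

```cpp
// Each former switch case becomes a class behind a common interface;
// the virtual call replaces the switch on a shape-kind enum.
class Shape {
public:
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

class Circle : public Shape {
public:
    explicit Circle(double r) : r_(r) {}
    double area() const override { return 3.14159265358979 * r_ * r_; }
private:
    double r_;
};

class Square : public Shape {
public:
    explicit Square(double side) : side_(side) {}
    double area() const override { return side_ * side_; }
private:
    double side_;
};
```

Adding a new shape kind now means adding a class, not editing every switch in the code base.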
The estimated Debt, which means the effort to fix such issue, varies from 40 minutes to 8 hours, linearly from a weighted complexity score.
Avoid methods with too many parameters
This rule matches methods with more than 8 parameters. Such method is painful to call and might degrade performance. See the definition of the NbParameters metric here: http://www.cppdepend.com/Metrics.aspx#NbParameters
How to Fix:
More properties/fields can be added to the declaring type to handle numerous states. An alternative is to provide a class or structure dedicated to handling argument passing.
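The dedicated argument class mentioned above can be sketched like this; ConnectionOptions and describe() are hypothetical names used only for illustration:

```cpp
#include <string>

// Instead of a call like open(host, port, user, password, timeout, retries,
// useTls), the arguments are grouped into one dedicated struct.
struct ConnectionOptions {
    std::string host;
    int port = 5432;
    std::string user;
    std::string password;
    int timeoutSeconds = 30;
    int retries = 3;
    bool useTls = true;
};

// The consuming function now takes a single readable argument; callers
// set only the fields that differ from the defaults.
std::string describe(const ConnectionOptions& opt) {
    return opt.host + ":" + std::to_string(opt.port) +
           (opt.useTls ? " (tls)" : " (plain)");
}
```

Default member initializers also remove the need for many overloads covering partial argument sets.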
The estimated Debt, which means the effort to fix such issue, varies linearly from 1 hour for a method with 7 parameters, up to 6 hours for a method with 40 or more parameters.
Avoid methods with too many local variables
This rule matches methods with more than 15 variables.
Methods where NbVariables > 8 are hard to understand and maintain. Methods where NbVariables > 15 are extremely complex and must be refactored.
See the definition of the NbVariables metric here: http://www.cppdepend.com/Metrics.aspx#Nbvariables
How to Fix:
To refactor such a method and increase code quality and maintainability, you'll certainly have to split the method into several smaller methods, or even create one or several classes to implement the logic.
During this process it is important to question the scope of each variable local to the method. This can be an indication if such local variable will become an instance field of the newly created class(es).
The estimated Debt, which means the effort to fix such issue, varies linearly from 10 minutes for a method with 15 variables, up to 2 hours for a method with 80 or more variables.
Avoid methods with too many overloads
Method overloading is the ability to create multiple methods with the same name but different implementations and parameter sets.
This rule matches sets of methods with 6 overloads or more.
Such a method set might be a problem to maintain and provokes higher coupling than necessary.
See the definition of the NbOverloads metric here http://www.cppdepend.com/Metrics.aspx#NbOverloads
How to Fix:
Typically the too-many-overloads phenomenon appears when an algorithm takes a varying set of in-parameters, each overload being offered as a facility for providing a different combination of in-parameters. The too-many-overloads phenomenon can also be a consequence of using the visitor design pattern http://en.wikipedia.org/wiki/Visitor_pattern, since a method named Visit() must be provided for each subtype. In such a situation there is no need for a fix.
Sometimes the too-many-overloads phenomenon is not the symptom of a problem, for example when a numeric-to-something conversion method applies to all numeric and nullable numeric types.
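The visitor situation described above can be sketched like this (illustrative names); the overload count grows with the number of node types by design:

```cpp
#include <string>

class Circle {};
class Square {};

// One Visit() overload per concrete node type: with many node types this
// rule fires, yet the overload count is intrinsic to the visitor pattern
// and is not a design smell here.
class ShapeVisitor {
public:
    std::string Visit(const Circle&) { return "circle"; }
    std::string Visit(const Square&) { return "square"; }
};
```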
The estimated Debt, which means the effort to fix such issue, is of 2 minutes per method overload.
Avoid methods potentially poorly commented
This rule matches methods with less than 20% comment lines that have at least 20 lines of code. Such a method might need to be commented more thoroughly.
See the definitions of the Comments metric here: http://www.cppdepend.com/Metrics.aspx#PercentageComment http://www.cppdepend.com/Metrics.aspx#NbLinesOfComment
Notice that only comments about the method implementation (comments in the method body) are taken into account.
How to Fix:
Typically, add more comments. But code commenting is subject to controversy: while poorly written and designed code needs a lot of comments to be understood, clean code doesn't need that many comments, especially if variables and methods are properly named and convey enough information. Unit-test code can also play the role of code comments.
However, even when writing clean and well-tested code, you will have to write hacks at some point, usually to circumvent some API limitations or bugs. A hack is a non-trivial piece of code that doesn't make sense at first glance and that took time and web research to be found. In such a situation, comments must absolutely be used to express the intention, the need for the hack, and the source where the solution was found.
The estimated Debt, which means the effort to comment such a method, varies linearly from 2 minutes for 10 uncommented lines of code, up to 20 minutes for 200 or more uncommented lines of code.
Avoid types with poor cohesion
This rule is based on the LCOM code metric, LCOM stands for Lack Of Cohesion of Methods. See the definition of the LCOM metric here http://www.cppdepend.com/Metrics.aspx#LCOM
The LCOM metric measures the fact that most methods are using most fields. A class is considered utterly cohesive (which is good) if all its methods use all its instance fields.
Only types with enough methods and fields are taken into account to avoid bias. The LCOM metric takes its values in the range [0-1].
This rule matches types with LCOM higher than 0.8. Such value generally pinpoints a poorly cohesive class.
How to Fix:
To refactor a poorly cohesive type and increase code quality and maintainability, you'll certainly have to split the type into several smaller, more cohesive types that together implement the same logic.
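A minimal sketch of such a split, assuming a hypothetical Employee class whose methods clustered around two unrelated sets of fields:

```cpp
#include <string>

// Cluster 1: the methods that only touched the name fields.
class EmployeeIdentity {
public:
    EmployeeIdentity(std::string first, std::string last)
        : first_(std::move(first)), last_(std::move(last)) {}
    std::string fullName() const { return first_ + " " + last_; }
private:
    std::string first_;
    std::string last_;
};

// Cluster 2: the methods that only touched the salary field.
class EmployeePayroll {
public:
    explicit EmployeePayroll(double monthlySalary) : monthly_(monthlySalary) {}
    double yearlySalary() const { return monthly_ * 12; }
private:
    double monthly_;
};
```

Each resulting class now has all its methods using all its fields, i.e. an LCOM close to 0.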
The estimated Debt, which means the effort to fix such issue, varies linearly from 5 minutes for a type with a low poorCohesionScore, up to 4 hours for a type with high poorCohesionScore.
From now, all types added should respect basic quality principles
This rule is executed only if a baseline for comparison is defined (diff mode). This rule operates only on types added since baseline.
This rule can easily be modified to also match types refactored since the baseline that don't satisfy all quality criteria.
Types matched by this rule have not only been recently added or refactored, but also somehow violate one or several basic quality principles: they have too many methods, have too many fields, or use too many types. Any of these criteria is often a symptom of a type with too many responsibilities.
Notice that when counting methods and fields, methods like constructors or property and event accessors are not taken into account. Constant fields and static-readonly fields are not counted. Enumeration types are not counted either.
How to Fix:
To refactor such a type and increase code quality and maintainability, you'll certainly have to split the type into several smaller types that together implement the same logic.
Issues of this rule have a constant 10 minutes Debt, because the Debt, which means the effort to fix such issue, is already estimated for issues of rules in the category Code Quality.
However issues of this rule have a High severity, with even more interest for issues on new types since the baseline, because the proper time to increase the quality of these types is now, before they get committed in the next production release.
From now, all types added should be 100% covered by tests
This rule is executed only if a baseline for comparison is defined (diff mode). This rule operates only on types added since baseline.
This rule can easily be modified to also match types refactored since the baseline that are not 100% covered by tests.
This rule is executed only if some code coverage data is imported from some code coverage files.
Often, covering the last 10% of a class's uncovered code requires as much work as covering the first 90%. For this reason, teams typically estimate that 90% coverage is enough. However, untestable code usually means poorly written code, which usually leads to error-prone code. So it might be worth refactoring and covering the remaining 10%, because the trickiest bugs often come from this small portion of hard-to-test code.
Not all classes should be 100% covered by tests (UI code, for example, can be hard to test), but you should make sure that most of the logic of your application is defined in easy-to-test classes, 100% covered by tests.
In this context, this rule warns when a type added or refactored since the baseline, is not fully covered by tests.
How to Fix:
Write more unit-tests dedicated to cover code not covered yet. If you find some hard-to-test code, it is certainly a sign that this code is not well designed and hence, needs refactoring.
You'll find code impossible to cover with unit tests, like calls to MessageBox.Show(). An infrastructure must be defined to mock such code at test time.
Issues of this rule have a constant 10 minutes Debt, because the Debt, which means the effort to write tests for the culprit type, is already estimated for issues in the category Code Coverage.
However issues of this rule have a High severity, with even more interest for issues on new types since the baseline, because the proper time to write tests for these types is now, before they get committed in the next production release.
From now, all methods added should respect basic quality principles
This rule is executed only if a baseline for comparison is defined (diff mode). This rule operates only on methods added since the baseline.
This rule can easily be modified to also match methods refactored since the baseline that don't satisfy all quality criteria.
Methods matched by this rule have not only been recently added or refactored, but also somehow violate one or several basic quality principles: the method is too large (too many lines of code), too complex (too many if, switch…case, loops…), has too many variables, has too many parameters, or has too many overloads.
How to Fix:
To refactor such a method and increase code quality and maintainability, you'll certainly have to split the method into several smaller methods, or even create one or several classes to implement the logic.
During this process it is important to question the scope of each variable local to the method. This can be an indication if such local variable will become an instance field of the newly created class(es).
Large switch…case structures might be refactored through the help of a set of types that implement a common interface, the interface polymorphism playing the role of the switch cases tests.
Unit Tests can help: write tests for each method before extracting it to ensure you don't break functionality.
Issues of this rule have a constant 5 minutes Debt, because the Debt, which means the effort to fix such issue, is already estimated for issues of rules in the category Code Quality.
However issues of this rule have a High severity, with even more interest for issues on new methods since the baseline, because the proper time to increase the quality of these methods is now, before they get committed in the next production release.
Avoid decreasing code coverage by tests of types
This rule is executed only if a baseline for comparison is defined (diff mode).
This rule is executed only if some code coverage data is imported from some code coverage files.
This rule warns when the number of lines of a type covered by tests has decreased since the baseline. In case the type has been refactored since the baseline, this loss in coverage is estimated only for types with more lines of code, where the number of lines of code covered now is lower than the number of lines of code covered at the baseline plus the extra number of lines of code.
Such situation can mean that some tests have been removed but more often, this means that the type has been modified, and that changes haven't been covered properly by tests.
To visualize changes in code, right-click a matched type and select:
• Compare older and newer versions of source file
How to Fix:
Write more unit-tests dedicated to cover changes in matched types not covered yet. If you find some hard-to-test code, it is certainly a sign that this code is not well designed and hence, needs refactoring.
The estimated Debt, which means the effort to cover by tests code that used to be covered, varies linearly from 15 minutes to 3 hours, depending on the number of lines of code that are not covered by tests anymore.
Severity of issues of this rule varies from High to Critical depending on the number of lines of code that are not covered by tests anymore. Because the loss in code coverage happened since the baseline, the severity is high because it is important to focus on these issues now, before such code gets released in production.
Avoid making complex methods even more complex
This rule is executed only if a baseline for comparison is defined (diff mode).
The method complexity is measured through the code metric Cyclomatic Complexity defined here: http://www.cppdepend.com/Metrics.aspx#CC
This rule warns when a method that is already complex (i.e. with a Cyclomatic Complexity higher than 6) becomes even more complex since the baseline.
To visualize changes in code, right-click a matched method and select:
• Compare older and newer versions of source file
How to Fix:
A large and complex method should be split into smaller methods, or one or several classes can even be created for that purpose.
During this process it is important to question the scope of each variable local to the method. This can indicate whether such a local variable should become an instance field of the newly created class(es).
Large switch…case structures might be refactored with the help of a set of types that implement a common interface, the interface polymorphism playing the role of the switch…case tests.
Unit Tests can help: write tests for each method before extracting it to ensure you don't break functionality.
The estimated Debt, which means the effort to fix such issue, varies linearly from 15 to 60 minutes depending on the extra complexity added.
Issues of this rule have a High severity, because it is important to focus on these issues now, before such code gets released in production.
Avoid making large methods even larger
This rule is executed only if a baseline for comparison is defined (diff mode).
This rule warns when a method that was already large (i.e. with more than 15 lines of code) becomes even larger since the baseline.
The method size is measured through the code metric # Lines of Code defined here: http://www.cppdepend.com/Metrics.aspx#NbLinesOfCode
To visualize changes in code, right-click a matched method and select:
• Compare older and newer versions of source file
How to Fix:
Usually, methods that are too big should be split into smaller methods.
But long methods with no branch conditions, which typically initialize some data, are not necessarily a problem to maintain and might not need refactoring.
The estimated Debt, which means the effort to fix such issue, varies linearly from 5 to 20 minutes depending on the number of lines of code added.
Issues of this rule have a High severity, because it is important to focus on these issues now, before such code gets released in production.
Avoid adding methods to a type that already had many methods
This rule is executed only if a baseline for comparison is defined (diff mode).
Types where number of methods is greater than 15 might be hard to understand and maintain.
This rule lists types that already had more than 15 methods at the baseline time, and for which new methods have been added.
Having many methods for a type might be a symptom of too many responsibilities implemented.
Notice that constructors and methods generated by the compiler are not taken into account.
How to Fix:
To refactor such a type and increase code quality and maintainability, you'll certainly have to split the type into several smaller types that together implement the same logic.
The estimated Debt, which means the effort to fix such issue, is equal to 10 minutes per method added.
Issues of this rule have a High severity, because it is important to focus on these issues now, before such code gets released in production.
Avoid adding instance fields to a type that already had many instance fields
This rule is executed only if a baseline for comparison is defined (diff mode).
Types where number of fields is greater than 15 might be hard to understand and maintain.
This rule lists types that already had more than 15 fields at the baseline time, and for which new fields have been added.
Having many fields for a type might be a symptom of too many responsibilities implemented.
Notice that constant fields and static-readonly fields are not taken into account. Enumeration types are also not taken into account.
How to Fix:
To refactor such a type and increase code quality and maintainability, you'll certainly have to group subsets of fields into smaller types and dispatch the logic implemented in the methods into these smaller types.
The estimated Debt, which means the effort to fix such issue, is equal to 10 minutes per field added.
Issues of this rule have a High severity, because it is important to focus on these issues now, before such code gets released in production.
Avoid transforming an immutable type into a mutable one
This rule is executed only if a baseline for comparison is defined (diff mode).
A type is considered immutable if its instance fields cannot be modified once an instance has been built by a constructor.
Being immutable has several fortunate consequences for a type. For example, its instances can be used concurrently from several threads without the need to synchronize accesses.
Hence users of such a type often rely on the fact that the type is immutable. If an immutable type becomes mutable, chances are this will break its users' code.
This is why this rule warns about immutable types that become mutable.
How to Fix:
If being immutable is an important property for a matched type, then the code must be refactored to preserve immutability.
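A minimal sketch of preserving immutability in C++ (Money is an illustrative name): all fields are const, and "modifications" return a new instance instead of mutating the existing one:

```cpp
#include <string>

class Money {
public:
    Money(long long cents, std::string currency)
        : cents_(cents), currency_(std::move(currency)) {}
    long long cents() const { return cents_; }
    const std::string& currency() const { return currency_; }
    // Mutation is expressed by returning a new instance.
    Money add(long long extraCents) const {
        return Money(cents_ + extraCents, currency_);
    }
private:
    const long long cents_;      // const fields cannot be reassigned,
    const std::string currency_; // so instances stay immutable
};
```

With const fields, any future attempt to add a setter fails to compile, which protects the immutability guarantee users rely on.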
The estimated Debt, which means the effort to fix such issue, is equal to 10 minutes plus 10 minutes per instance field of the matched type that is now mutable.
Issues of this rule have a High severity, because it is important to focus on these issues now, before such code gets released in production.
Base class should not use derivatives
In Object-Oriented Programming, the open/closed principle states: software entities (components, classes, methods, etc.) should be open for extension, but closed for modification. http://en.wikipedia.org/wiki/Open/closed_principle
Hence a base class should be designed properly to make it easy to derive from: that is the extension part. But creating a new derived class, or modifying an existing one, shouldn't provoke any modification in the base class. And if a base class is somehow using some of its derived classes, there are good chances that such a modification will be needed.
Extending the base class is then no longer a simple operation; this is not good design.
How to Fix:
Understand the need for using derivatives, then imagine a new design, and then refactor.
Typically an algorithm in the base class needs to access something from derived classes. You can try to encapsulate this access behind an abstract or a virtual method.
If you see in the base class conditions on the dynamic type of derived classes (for example typeid comparisons or dynamic_cast tests), urgent refactoring is needed. Such a condition can easily be replaced by an abstract or a virtual method.
Sometimes you'll see a base class that creates instances of some of its derived classes. In such a situation, using the factory method pattern http://en.wikipedia.org/wiki/Factory_method_pattern or the abstract factory pattern http://en.wikipedia.org/wiki/Abstract_factory_pattern will certainly improve the design.
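The virtual-method advice above can be sketched with the template method idiom (Exporter and CsvExporter are illustrative names): the base class drives the algorithm without ever naming a derived class:

```cpp
#include <string>

class Exporter {
public:
    virtual ~Exporter() = default;
    // The base class owns the algorithm skeleton...
    std::string run() const { return "header\n" + body() + "footer\n"; }
protected:
    // ...and the varying part is an abstract method, so the base never
    // tests for or mentions any derived type.
    virtual std::string body() const = 0;
};

class CsvExporter : public Exporter {
protected:
    std::string body() const override { return "a,b,c\n"; }
};
```

New exporters can be added without touching Exporter, which keeps the base class closed for modification.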
The estimated Debt, which means the effort to fix such issue, is equal to 3 minutes per derived class used by the base class + 3 minutes per member of a derived class used by the base class.
Class shouldn't be too deep in inheritance tree
This rule warns about classes with 3 or more base classes. Notice that third-party base classes are not counted, because this rule is about the design of your code, not the design of consumed third-party libraries.
In theory, there is nothing wrong with a long inheritance chain if the modelization has been well thought out and each base class is a well-designed refinement of the domain.
In practice, properly modeling a domain demands a lot of effort and experience, and more often than not a long inheritance chain is a sign of confused design that will be hard to work with and maintain.
How to Fix:
In Object-Oriented Programming, a well-known motto is Favor Composition over Inheritance.
This is because inheritance comes with pitfalls. In general, the implementation of a derived class is very bound up with the base class implementation. A base class also exposes implementation details to its derived classes, which is why it is often said that inheritance breaks encapsulation.
On the other hand, composition favors binding to interfaces over binding to implementations. Hence, not only is encapsulation preserved, but the design is clearer, because interfaces make it explicit and less coupled.
Hence, to break a long inheritance chain, Composition is often a powerful way to enhance the design of the refactored underlying logic.
You can also read: http://en.wikipedia.org/wiki/Composition_over_inheritance and http://stackoverflow.com/questions/49002/prefer-composition-over-inheritance
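A minimal sketch of favoring composition (Sink, MemorySink and Logger are illustrative names): instead of stacking Logger subclasses, the varying behavior is injected through a small interface:

```cpp
#include <memory>
#include <string>

class Sink {
public:
    virtual ~Sink() = default;
    virtual std::string write(const std::string& msg) = 0;
};

class MemorySink : public Sink {
public:
    std::string write(const std::string& msg) override {
        buffer_ += msg;
        return buffer_;
    }
private:
    std::string buffer_;
};

class Logger {
public:
    explicit Logger(std::unique_ptr<Sink> sink) : sink_(std::move(sink)) {}
    // The logger is composed of a Sink rather than derived from one.
    std::string log(const std::string& msg) {
        return sink_->write("[log] " + msg + "\n");
    }
private:
    std::unique_ptr<Sink> sink_;
};
```

Swapping a file sink for a network sink is then a constructor argument, not a new level in an inheritance tree.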
The estimated Debt, which means the effort to fix such issue, depends linearly upon the depth of inheritance.
Constructor should not call virtual methods
This rule matches constructors of a non-sealed class that call one or several virtual methods.
When a C++ object is constructed, constructors run in order from the base class to the most derived class.
During construction, the dynamic type of the object is the type whose constructor is currently running, not the final most derived type. A virtual call made from a constructor therefore resolves to the version defined in the constructor's own class, never to an override defined in a more derived class; calling a pure virtual method from a constructor is even undefined behavior.
When you combine these two facts, you are left with the problem that a virtual call made from a base class constructor silently skips the derived override, so it cannot rely on any behavior or state provided by the derived class, whose part of the object has not been constructed yet.
Hence this situation makes the class fragile to derive from.
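A minimal sketch of the pitfall (names are illustrative): in standard C++, the virtual call made from the Base constructor resolves to Base::name(), not to the Derived override:

```cpp
#include <string>

class Base {
public:
    Base() { constructedAs = name(); }  // virtual call from a constructor
    virtual ~Base() = default;
    virtual std::string name() const { return "Base"; }
    std::string constructedAs;          // records what the constructor saw
};

class Derived : public Base {
public:
    std::string name() const override { return "Derived"; }
};
```

Constructing a Derived still records "Base": during the Base constructor the derived part of the object does not exist yet, so its override cannot be reached.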
How to Fix:
Violations reported can be solved by re-designing object initialization or, if possible, by declaring the class final so that it can no longer be derived from.
Don't assign static fields from instance methods
Assigning static fields from instance methods leads to poorly maintainable and non-thread-safe code.
More discussion on the topic can be found here: http://codebetter.com/patricksmacchia/2011/05/04/back-to-basics-usage-of-static-members/
How to Fix:
If the static field is assigned only once in the program's lifetime, make sure to declare it const (or equivalent) and initialize it at its definition.
In Object-Oriented Programming, the natural artifact for holding modifiable state is the instance field.
Hence, to fix violations of this rule, make sure to hold assignable state in instance fields, not in static fields.
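A minimal sketch of the fix (RequestCounter is an illustrative name): the modifiable counter becomes an instance field, so each object owns its own state instead of all instances racing on one static:

```cpp
// Before (problematic): an instance method assigned `static int count_;`,
// sharing mutable state across all instances and threads.
// After: the state is an instance field.
class RequestCounter {
public:
    int increment() { return ++count_; }  // touches instance state only
    int count() const { return count_; }
private:
    int count_ = 0;
};
```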
Avoid Abstract Classes with too many methods
This rule matches abstract classes with more than 10 methods. Abstract classes are abstractions and are meant to simplify the code structure. An abstraction should represent a single responsibility. Making an abstract class too large or too complex necessarily means that it has too many responsibilities.
A property with a getter, a setter, or both counts as one method. An event counts as one method.
How to Fix:
Typically, to fix such an issue, the interface must be refactored into a group of smaller, single-responsibility interfaces.
A classic example is a large ISession interface, responsible for holding state, running commands, and offering various accesses and facilities.
The classic problem with a large public interface is that it has many clients that consume it. As a consequence, splitting it into smaller interfaces has an important impact and is not always feasible.
The estimated Debt, which means the effort to fix such issue, varies linearly from 20 minutes for an interface with 10 methods, up to 7 hours for an interface with 100 or more methods. The Debt is divided by two if the interface is not publicly visible, because in such situation only the current project is impacted by the refactoring.
Nested types should not be visible
This rule warns about nested types not declared as private.
A nested type is a type declared within the scope of another type. Nested types are useful for encapsulating private implementation details of the containing type. Used for this purpose, nested types should not be externally visible.
Do not use externally visible nested types for logical grouping or to avoid name collisions; instead use namespaces.
Nested types include the notion of member accessibility, which some programmers do not understand clearly.
Protected types can be used in subclasses and nested types in advanced customization scenarios.
How to Fix:
If you do not intend the nested type to be externally visible, change the type's accessibility.
Otherwise, remove the nested type from its parent and make it non-nested.
If the purpose of the nesting is to group some nested types, use a namespace to create the hierarchy instead.
The estimated Debt, which means the effort to fix such issue, is 2 minutes per nested type plus 4 minutes per outer type using such a nested type.
Projects with poor cohesion (RelationalCohesion)
This rule computes the Relational Cohesion metric for the application's projects and warns about abnormal values.
The Relational Cohesion of a project is the total number of relationships between types of the project, divided by the number of types. In other words, it is the average number of types in the project used by a type in the project.
As classes inside a project should be strongly related, the cohesion should be high. On the other hand, a value that is too high may indicate over-coupling. A good range for Relational Cohesion is 1.5 to 4.0.
Notice that projects with fewer than 20 types are ignored.
How to Fix:
Matches of this rule might reveal either projects with specific coding constraints (like generated code with a particular structure) or issues in design.
In the second case, a large refactoring can be planned, not to satisfy this rule in particular, but to improve the overall design and code maintainability.
The severity of issues of this rule is Low because the Relational Cohesion metric is information about the state of the code structure but is not actionable: it doesn't tell precisely what to do to obtain a better score.
Fixing actionable issues of other Architecture and Code Quality default rules will necessarily increase the Relational Cohesion scores.
Projects that don't satisfy the Abstractness/Instability principle
The Abstractness versus Instability diagram shown in the CppDepend report helps assess which projects are potentially painful to maintain (i.e. concrete and stable) and which projects are potentially useless (i.e. abstract and unstable).
• Abstractness: if a project contains many abstract types (i.e. interfaces and abstract classes) and few concrete types, it is considered abstract.
• Stability: a project is considered stable if its types are used by a lot of types from other projects. In this context, stable means painful to modify.
From these metrics, we define the perpendicular normalized distance of a project from the idealized line A + I = 1 (called the main sequence). This metric is an indicator of the project's balance between abstractness and stability. The word normalized means that the range of values is [0.0 … 1.0].
This rule warns about projects with a normalized distance greater than 0.7.
This rule uses the default project code metric Normalized Distance from the Main Sequence, explained here: http://www.cppdepend.com/Metrics#DitFromMainSeq
These concepts have been originally introduced by Robert C. Martin in 1994 in this paper: http://www.objectmentor.com/resources/articles/oodmetrc.pdf
How to Fix:
Violations of this rule indicate projects with an improper abstractness/stability balance.
• Either the project is potentially painful to maintain (i.e. it is massively used and contains mostly concrete types). This can be fixed by creating abstractions to avoid too-high coupling with concrete implementations.
• Or the project is potentially useless (i.e. it contains mostly abstractions and is not used enough). In such a situation, the design must be reviewed to see if it can be enhanced.
The severity of issues of this rule is Low because the Abstractness/Instability principle is information about the state of the code structure but is not actionable: it doesn't tell precisely what to do to obtain a better score.
Fixing actionable issues of other Architecture and Code Quality default rules will necessarily push the Abstractness/Instability scores in the right direction.
Code should be tested
This rule lists methods not covered at all, or only partially covered, by tests.
For each match, the rule estimates the technical debt, i.e. the effort to write unit and integration tests for the method. The estimation is based on the effort to develop the code element, multiplied by factors in the range ]0,1.3] based on:
• the method code size and complexity
• the actual percentage coverage
• the abstractness of the types used, because relying on classes instead of interfaces makes the code more difficult to test
• the method visibility, because testing private or protected methods is more difficult than testing public and internal ones
• the fields used by the method, because it is more complicated to write tests for methods that read mutable static fields whose changing state is shared across test executions
• whether the method is considered JustMyCode, because NotMyCode is often generated code that is easier to get tested since its tests can be generated as well
This rule is necessarily a large source of technical debt, since code left untested is by definition part of the technical debt.
This rule also estimates the annual interest, i.e. the annual cost of leaving the code uncovered, based on the effort to develop the code element, multiplied by factors based on usage of the code element.
How to Fix:
Write unit tests to test and cover the methods and their parent classes matched by this rule.
New Methods should be tested
This rule is executed only if a baseline for comparison is defined (diff mode). This rule operates only on methods added or refactored since the baseline.
This rule is executed only if some code coverage data is imported from some code coverage files.
It is important to write code mostly covered by tests to achieve maintainable and non-error-prone code.
In the real world, many code bases are poorly covered by tests. However, it is not practical to stop development for months to refactor and write tests to achieve a high code-coverage ratio.
Hence it is recommended that each time a method (or a type) gets added, the developer takes the time to write associated unit-tests to cover it.
Doing so will significantly increase the maintainability of the code base. You'll quickly notice that refactoring is also driven by testability and, as a consequence, the overall code structure and design improve as well.
Issues of this rule have a High severity because they reflect an actual tendency not to care about writing tests for new code.
How to Fix:
Write unit-tests to cover the code of most methods and classes added.
Methods refactored should be tested
This rule is executed only if a baseline for comparison is defined (diff mode). This rule operates only on methods added or refactored since the baseline.
This rule is executed only if some code coverage data is imported from some code coverage files.
It is important to write code mostly covered by tests to achieve maintainable and non-error-prone code.
In the real world, many code bases are poorly covered by tests. However, it is not practical to stop development for months to refactor and write tests to achieve a high code-coverage ratio.
Hence it is recommended that each time a method (or a type) gets refactored, the developer takes the time to write associated unit-tests to cover it.
Doing so will significantly increase the maintainability of the code base. You'll quickly notice that refactoring is also driven by testability and, as a consequence, the overall code structure and design improve as well.
Issues of this rule have a High severity because they reflect an actual tendency not to care about writing tests on refactored code.
How to Fix:
Write unit-tests to cover the code of most methods and classes refactored.
Types almost 100% tested should be 100% tested
This rule is executed only if some code coverage data is imported from some code coverage files.
Often, covering the few remaining percent of uncovered code of a class requires as much work as covering the first 90%. For this reason, teams often consider that 90% coverage is enough. However, untestable code usually means poorly written code, which usually leads to error-prone code. So it may be worth refactoring and covering the few uncovered lines of code, because the trickiest bugs often come from this small portion of hard-to-test code.
Not all classes should be 100% covered by tests (UI code, for instance, can be hard to test), but you should make sure that most of the logic of your application is defined in easy-to-test classes that are 100% covered by tests.
Issues of this rule have a High severity because, as explained, such a situation is bug-prone.
How to Fix:
Write more unit-tests dedicated to cover code not covered yet. If you find some hard-to-test code, it is certainly a sign that this code is not well designed and hence, needs refactoring.
Namespaces almost 100% tested should be 100% tested
This rule is executed only if some code coverage data is imported from some code coverage files.
Often, covering the few remaining percent of uncovered code of one or several classes in a namespace requires as much work as covering the first 90%. For this reason, teams often consider that 90% coverage is enough. However, untestable code usually means poorly written code, which usually leads to error-prone code. So it may be worth refactoring and covering the few uncovered lines of code, because the trickiest bugs often come from this small portion of hard-to-test code.
Not all classes should be 100% covered by tests (UI code, for instance, can be hard to test), but you should make sure that most of the logic of your application is defined in easy-to-test classes that are 100% covered by tests.
Issues of this rule have a High severity because, as explained, such a situation is bug-prone.
How to Fix:
Write more unit-tests dedicated to cover code not covered yet in the namespace. If you find some hard-to-test code, it is certainly a sign that this code is not well designed and hence, needs refactoring.
Types that used to be 100% covered by tests should still be 100% covered
This rule is executed only if a baseline for comparison is defined (diff mode).
This rule is executed only if some code coverage data is imported from some code coverage files.
Often, covering the remaining 10% of uncovered code of a class requires as much work as covering the first 90%. For this reason, teams typically consider that 90% coverage is enough. However, untestable code usually means poorly written code, which usually leads to error-prone code. So it may be worth refactoring and covering the remaining 10%, because the trickiest bugs often come from this small portion of hard-to-test code.
Not all classes should be 100% covered by tests (UI code, for instance, can be hard to test), but you should make sure that most of the logic of your application is defined in easy-to-test classes that are 100% covered by tests.
In this context, this rule warns when a type fully covered by tests is now only partially covered.
Issues of this rule have a High severity because a type that used to be 100% covered and no longer is represents a bug-prone situation that should be handled carefully.
How to Fix:
Write more unit-tests dedicated to cover code not covered anymore. If you find some hard-to-test code, it is certainly a sign that this code is not well designed and hence, needs refactoring.
Some code is impossible to cover by unit-tests, such as calls to MessageBox.Show(). An infrastructure must be defined to be able to mock such code at test time.
Methods should have a low C.R.A.P score
This rule is executed only if some code coverage data is imported from some code coverage files.
This rule is currently disabled because other code coverage rules properly assess code coverage issues.
Change Risk Analyzer and Predictor (i.e. CRAP) is a code metric that helps pinpoint code that is both overly complex and untested. It was first defined here: http://www.artima.com/weblogs/viewpost.jsp?thread=215899
The formula is: CRAP(m) = CC(m)^2 * (1 - cov(m)/100)^3 + CC(m)
• where CC(m) is the cyclomatic complexity of the method m
• and cov(m) is the percentage coverage by tests of the method m
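The formula above can be sketched in Python (an illustrative re-implementation, not CppDepend's code):

```python
def crap(cc: int, coverage_percent: float) -> float:
    """CRAP(m) = CC(m)^2 * (1 - cov(m)/100)^3 + CC(m)."""
    return cc ** 2 * (1.0 - coverage_percent / 100.0) ** 3 + cc

print(crap(10, 0.0))    # 110.0: complex and untested -> well above the 30 threshold
print(crap(10, 100.0))  # 10.0: full coverage leaves only the complexity term
print(crap(31, 100.0))  # 31.0: CC > 30 stays in CRAP territory even at 100% coverage
```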
Matched methods combine two highly error-prone code smells:
• A complex method, difficult to develop and maintain.
• Code that is not 100% covered, difficult to refactor without introducing regression bugs.
The higher the CRAP score, the more painful to maintain and the more error-prone the method is.
An arbitrary threshold of 30 is set for this code rule, as suggested by its inventors.
Notice that no amount of testing will keep methods with a Cyclomatic Complexity higher than 30 out of CRAP territory.
Notice that this rule doesn't match short methods with fewer than 10 lines of code.
How to Fix:
In such a situation, it is recommended both to refactor the complex method logic into several smaller and less complex methods (which may belong to newly created types), and to write unit-tests to fully cover the refactored logic.
Some code is impossible to cover by unit-tests, such as calls to MessageBox.Show(). An infrastructure must be defined to be able to mock such code at test time.
Types Hot Spots
This query lists types with the most Debt, or in other words, types whose issues would need the largest effort to get fixed.
Issues on both the type and its members are taken into account.
Since untested code often generates a lot of Debt, the type size and percentage coverage are shown (just uncomment t.PercentageCoverage in the query source code once you've imported the coverage data).
The Debt Rating and Debt Ratio are also shown for informational purposes.
--
The amount of Debt is not a measure to prioritize the effort to fix issues; it is an estimation of how far the team is from clean code that abides by the rule set.
For each issue, the Annual Interest estimates the annual cost of leaving the issue unfixed. The Severity of an issue is estimated through thresholds applied to the Annual Interest.
The Debt Breaking Point represents the duration from now after which the estimated cost of leaving the issue unfixed equals the estimated effort to fix it.
Hence the shorter the Debt Breaking Point, the larger the Return on Investment for fixing the issue. The Breaking Point is the right metric to prioritize fixing issues.
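Under the definitions above, the Breaking Point is simply the Debt divided by the Annual Interest (a hypothetical sketch of the relationship, not CppDepend's implementation; units are man-days):

```python
def breaking_point_days(debt_man_days: float, annual_interest_man_days: float) -> float:
    """Number of days after which the cumulative cost of leaving the issue
    unfixed equals the effort to fix it (illustrative sketch only)."""
    return 365.0 * debt_man_days / annual_interest_man_days

# An issue costing 2 man-days to fix, with 4 man-days/year of interest,
# breaks even after half a year -> fixing it soon has a high return.
print(breaking_point_days(2.0, 4.0))  # 182.5
```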
Types to Fix Priority
This query lists types by increasing Debt Breaking Point.
For each issue, the Debt estimates the effort to fix the issue, and the Annual Interest estimates the annual cost of leaving the issue unfixed. The Severity of an issue is estimated through thresholds applied to the issue's Annual Interest.
The Debt Breaking Point represents the duration from now after which the estimated cost of leaving the issue unfixed equals the estimated effort to fix it.
Hence the shorter the Debt Breaking Point, the larger the Return on Investment for fixing the issues.
Often, types new or refactored since the baseline will be listed first, because issues on these types get a higher Annual Interest, since it is important to focus first on new issues.
--
Issues on both the type and its members are taken into account.
Only types with at least 30 minutes of Debt are listed, to avoid cluttering the list with the numerous types with little Debt, for which the Breaking Point value makes less sense.
The Annual Interest estimates the cost per year, in man-days, of leaving these issues unfixed.
Since untested code often generates a lot of Debt, the type size and percentage coverage are shown (just uncomment t.PercentageCoverage in the query source code once you've imported the coverage data).
The Debt Rating and Debt Ratio are also shown for informational purposes.
Issues to Fix Priority
This query lists issues by increasing Debt Breaking Point.
Double-click an issue to edit its rule and select the issue in the rule result. This way you can view all information concerning the issue.
For each issue, the Debt estimates the effort to fix the issue, and the Annual Interest estimates the annual cost of leaving the issue unfixed. The Severity of an issue is estimated through thresholds applied to the issue's Annual Interest.
The Debt Breaking Point represents the duration from now after which the estimated cost of leaving the issue unfixed equals the estimated effort to fix it.
Hence the shorter the Debt Breaking Point, the larger the Return on Investment for fixing the issue.
Often, issues on code elements new or refactored since the baseline will be listed first, because such issues get a higher Annual Interest, since it is important to focus first on new issues in recent code.
Debt and Issues per Rule
This query lists violated rules with the most Debt first.
A violated rule has issues. For each issue, the Debt estimates the effort to fix the issue.
--
The amount of Debt is not a measure to prioritize the effort to fix issues; it is an estimation of how far the team is from clean code that abides by the rule set.
For each issue, the Annual Interest estimates the annual cost of leaving the issue unfixed. The Severity of an issue is estimated through thresholds applied to the Annual Interest.
The Debt Breaking Point represents the duration from now after which the estimated cost of leaving the issue unfixed equals the estimated effort to fix it.
Hence the shorter the Debt Breaking Point, the larger the Return on Investment for fixing the issue. The Breaking Point is the right metric to prioritize fixing issues.
--
Notice that rules can be grouped by Rule Category. This way you'll see the categories that generate the most Debt.
Typically, the rules that generate the most Debt are the ones related to Code Coverage by Tests, Architecture and Code Smells.
New Debt and Issues per Rule
This query lists violated rules that have new issues since the baseline, with the most new Debt first.
A violated rule has issues. For each issue, the Debt estimates the effort to fix the issue.
--
New issues since the baseline are a consequence of recent code refactoring sessions. They represent good opportunities for fixes because recently refactored code is fresh in the developers' minds, which means fixing now costs less than fixing later.
Fixing issues on recently touched code is also a good way to foster practices that lead to higher code quality and maintainability, including writing unit-tests and avoiding unnecessarily complex code.
--
Notice that rules can be grouped by Rule Category. This way you'll see the categories that generate the most Debt.
Typically, the rules that generate the most Debt are the ones related to Code Coverage by Tests, Architecture and Code Smells.
Debt and Issues per Code Element
This query lists code elements that have issues, with the most Debt first.
For each code element, the Debt estimates the effort to fix the element's issues.
The amount of Debt is not a measure to prioritize the effort to fix issues; it is an estimation of how far the team is from clean code that abides by the rule set.
For each element, the Annual Interest estimates the annual cost of leaving the element's issues unfixed. The Severity of an issue is estimated through thresholds applied to the issue's Annual Interest.
The Debt Breaking Point represents the duration from now after which the estimated cost of leaving the issues unfixed equals the estimated effort to fix them.
Hence the shorter the Debt Breaking Point, the larger the Return on Investment for fixing the issue. The Breaking Point is the right metric to prioritize fixing issues.
New Debt and Issues per Code Element
This query lists code elements that have new issues since the baseline, with the most new Debt first.
For each code element, the Debt estimates the effort to fix the element's issues.
New issues since the baseline are a consequence of recent code refactoring sessions. They represent good opportunities for fixes because recently refactored code is fresh in the developers' minds, which means fixing now costs less than fixing later.
Fixing issues on recently touched code is also a good way to foster practices that lead to higher code quality and maintainability, including writing unit-tests and avoiding unnecessarily complex code.
Max C.R.A.P Score
Change Risk Analyzer and Predictor (i.e. CRAP) is a code metric that helps pinpoint code that is both overly complex and untested. It was first defined here: http://www.artima.com/weblogs/viewpost.jsp?thread=215899
The formula is: CRAP(m) = CC(m)^2 * (1 - cov(m)/100)^3 + CC(m)
• where CC(m) is the cyclomatic complexity of the method m
• and cov(m) is the percentage coverage by tests of the method m
Matched methods combine two highly error-prone code smells:
• A complex method, difficult to develop and maintain.
• Code that is not 100% covered, difficult to refactor without introducing regression bugs.
The higher the CRAP score, the more painful to maintain and the more error-prone the method is.
An arbitrary threshold of 30 is set for this code rule, as suggested by its inventors.
Notice that no amount of testing will keep methods with a Cyclomatic Complexity higher than 30 out of CRAP territory.
Notice that the CRAP score is not computed for short methods with fewer than 10 lines of code.
To list methods with highest C.R.A.P scores, please refer to the default rule: Test and Code Coverage > C.R.A.P method code metric
Average C.R.A.P Score
Change Risk Analyzer and Predictor (i.e. CRAP) is a code metric that helps pinpoint code that is both overly complex and untested. It was first defined here: http://www.artima.com/weblogs/viewpost.jsp?thread=215899
The formula is: CRAP(m) = CC(m)^2 * (1 - cov(m)/100)^3 + CC(m)
• where CC(m) is the cyclomatic complexity of the method m
• and cov(m) is the percentage coverage by tests of the method m
Matched methods combine two highly error-prone code smells:
• A complex method, difficult to develop and maintain.
• Code that is not 100% covered, difficult to refactor without introducing regression bugs.
The higher the CRAP score, the more painful to maintain and the more error-prone the method is.
An arbitrary threshold of 30 is set for this code rule, as suggested by its inventors.
Notice that no amount of testing will keep methods with a Cyclomatic Complexity higher than 30 out of CRAP territory.
Notice that the CRAP score is not computed for short methods with fewer than 10 lines of code.
To list methods with highest C.R.A.P scores, please refer to the default rule: Test and Code Coverage > C.R.A.P method code metric
Discard methods with a pattern name
The domain JustMyCode represents a facility of CQLinq to eliminate generated code elements from CQLinq query results. For example, the following query will only match large methods that are not generated by a tool (like a UI designer):
from m in JustMyCode.Methods where m.NbLinesOfCode > 30 select m
The set of generated code elements is defined by CQLinq queries prefixed with the CQLinq keyword notmycode. For example, the query below matches methods defined in source files whose names end with a pattern. These are files generated by some tools:
notmycode from m in Methods where m.SourceFileDeclAvailable && m.SourceDecl.SourceFile.FileName.ToLower().EndsWith(".designer.cs") select m
The CQLinq queries runner executes all notmycode queries before queries relying on JustMyCode; hence the domain JustMyCode is defined once and for all. Obviously, the CQLinq compiler emits an error if a notmycode query relies on the JustMyCode domain.
Discard types with a pattern name
The domain JustMyCode represents a facility of CQLinq to eliminate generated code elements from CQLinq query results. For example, the following query will only match large types that are not generated by a tool (like a UI designer):
from t in JustMyCode.Types where t.NbLinesOfCode > 3000 select t
The set of generated code elements is defined by CQLinq queries prefixed with the CQLinq keyword notmycode. For example, the query below matches types defined in source files whose names end with a pattern. These are files generated by some tools:
notmycode from t in Types where t.SourceFileDeclAvailable && t.SourceDecl.SourceFile.FileName.ToLower().EndsWith("pattern to exclude") select t
The CQLinq queries runner executes all notmycode queries before queries relying on JustMyCode; hence the domain JustMyCode is defined once and for all. Obviously, the CQLinq compiler emits an error if a notmycode query relies on the JustMyCode domain.
Quality Gates Evolution - Quality Gates
When a quality gate relies on diff between now and baseline (like New Debt since Baseline) it is not executed against the baseline and as a consequence its evolution is not available.
Double-click a quality gate for editing.
Percentage Code Coverage - Quality Gates
Code coverage is certainly the most important code quality metric. But coverage is not enough: the team needs to ensure that results are checked at test time. These checks can be done both in test code and in application code through assertions. The important part is that a test must fail explicitly when a check is violated during the test execution.
This quality gate defines a warn threshold (70%) and a fail threshold (80%). These are indicative thresholds, and in practice the higher the better. To achieve high coverage and low risk, make sure that new and refactored classes get 100% covered by tests and that the application and test code contain as many checks/assertions as possible.
Percentage Coverage on New Code - Quality Gates
To achieve high code coverage it is essential that new code gets properly tested and covered by tests. It is advised that all new non-UI classes get 100% covered.
Typically, 90% of a class is easy to cover by tests and 10% is hard to reach through tests. This remaining 10% is not easily testable, which means it is not well designed, which often means that this code is especially error-prone. This is why it is important to reach 100% coverage for a class: to make sure that potentially error-prone code gets tested.
Percentage Coverage on Refactored Code - Quality Gates
Comment changes and formatting changes are not considered as refactoring.
To achieve high code coverage it is essential that refactored code gets properly tested and covered by tests. When refactoring a class or a method, it is important to also write tests to make sure it gets 100% covered.
Typically, 90% of a class is easy to cover by tests and 10% is hard to reach through tests. This remaining 10% is not easily testable, which means it is not well designed, which often means that this code is especially error-prone. This is why it is important to reach 100% coverage for a class: to make sure that potentially error-prone code gets tested.
Blocker Issues - Quality Gates
The severity of an issue is either defined explicitly in the rule source code, or inferred from the issue's annual interest and the thresholds defined in CppDepend Project Properties > Issue and Debt.
Critical Issues - Quality Gates
The severity of an issue is either defined explicitly in the rule source code, or inferred from the issue's annual interest and the thresholds defined in CppDepend Project Properties > Issue and Debt.
New Blocker / Critical / High Issues - Quality Gates
An issue with a severity level Critical shouldn't move to production. It still can for imperative business reasons, but at worst it must be fixed during the next iterations.
An issue with a severity level High should be fixed quickly, but can wait until the next scheduled interval.
The severity of an issue is either defined explicitly in the rule source code, or inferred from the issue's annual interest and the thresholds defined in CppDepend Project Properties > Issue and Debt.
Critical Rules Violated - Quality Gates
A rule can be made critical just by checking the Critical button in the rule edition control and then saving the rule.
This quality gate fails if any critical rule gets any violations.
When no baseline is available, rules that rely on diff are not counted. If you observe that this quality gate's count slightly decreases for no apparent reason, it is most likely because rules that rely on diff are not counted when the baseline is not defined.
Percentage Debt - Quality Gates
• the estimated total effort to develop the code base
• and the estimated total time to fix all issues (the Debt)
The estimated total effort to develop the code base is inferred from the number of lines of code of the code base and from the Estimated number of man-days to develop 1000 logical lines of code setting found in CppDepend Project Properties > Issue and Debt.
Debt documentation: http://cppdepend.com/Doc_TechnicalDebt#Debt
This quality gate fails if the estimated debt is more than 30% of the estimated effort to develop the code base, and warns if the estimated debt is more than 20% of the estimated effort to develop the code base.
Debt - Quality Gates
However you can refer to the default Quality Gate Percentage Debt.
The Debt is defined as the sum of estimated effort to fix all issues. Debt documentation: http://cppdepend.com/Doc_TechnicalDebt#Debt
New Debt since Baseline - Quality Gates
This Quality Gate warns if this estimated effort is positive.
Debt documentation: http://cppdepend.com/Doc_TechnicalDebt#Debt
Debt Rating per Namespace - Quality Gates
The Debt Rating for a code element is estimated from the value of its Debt Ratio and from the rating thresholds defined in the project Debt Settings.
The Debt Ratio of a code element is the percentage of the Debt Amount (in floating man-days) relative to the estimated effort to develop the code element (also in floating man-days).
The estimated effort to develop the code element is inferred from the code element's number of logical lines of code and from the project Debt Settings parameter estimated number of man-days to develop 1000 logical lines of code.
The logical lines of code correspond to the number of debug breakpoints in a method and don't depend on code formatting or comments.
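The Debt Ratio computation described above can be sketched as follows (an illustrative example: the 18 man-days per 1000 logical lines default is a hypothetical value; the real one comes from the project Debt Settings):

```python
def estimated_dev_effort_man_days(logical_loc: int,
                                  man_days_per_1000_lloc: float = 18.0) -> float:
    """Development effort inferred from logical lines of code.
    The 18 man-days default is a hypothetical setting value."""
    return logical_loc / 1000.0 * man_days_per_1000_lloc

def debt_ratio_percent(debt_man_days: float, logical_loc: int) -> float:
    """Debt Ratio: Debt as a percentage of the estimated development effort."""
    return 100.0 * debt_man_days / estimated_dev_effort_man_days(logical_loc)

# A 2000-logical-line namespace carrying 9 man-days of Debt:
print(debt_ratio_percent(9.0, 2000))  # 25.0
```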
The Quality Gate can be modified to match projects, types or methods with a poor Debt Rating, instead of matching namespaces.
Annual Interest - Quality Gates
However you can refer to the default Quality Gate New Annual Interest since Baseline.
The Annual-Interest is defined as the sum of estimated annual cost in man-days, to leave all issues unfixed.
Each rule can either provide a formula to compute the Annual-Interest per issue, or assign a Severity level for each issue. Some thresholds defined in Project Properties > Issue and Debt > Annual Interest are used to infer an Annual-Interest value from a Severity level. Annual Interest documentation: http://cppdepend.com/Doc_TechnicalDebt#AnnualInterest
New Annual Interest since Baseline - Quality Gates
This Quality Gate warns if this estimated annual cost is positive.
This estimated annual cost is named the Annual-Interest.
Each rule can either provide a formula to compute the Annual-Interest per issue, or assign a Severity level for each issue. Some thresholds defined in Project Properties > Issue and Debt > Annual Interest are used to infer an Annual-Interest value from a Severity level. Annual Interest documentation: http://cppdepend.com/Doc_TechnicalDebt#AnnualInterest
Avoid types too big - Code Smells
Types where NbLinesOfCode > 200 are extremely complex to develop and maintain. See the definition of the NbLinesOfCode metric here http://www.cppdepend.com/Metrics.aspx#NbLinesOfCode
Maybe you are facing the God Class phenomenon: A God Class is a class that controls way too many other classes in the system and has grown beyond all logic to become The Class That Does Everything.
How to Fix:
Types with many lines of code should be split in a group of smaller types.
To refactor a God Class you'll need patience, and you might even need to recreate everything from scratch. Here are a few refactoring tips:
• The logic in the God Class must be split into smaller classes. These smaller classes can eventually become private classes nested in the original God Class, whose instances become composed of instances of the smaller nested classes.
• The partitioning into smaller classes should be driven by the multiple responsibilities handled by the God Class. To identify these responsibilities it often helps to look for subsets of methods strongly coupled with subsets of fields.
• If the God Class contains far more logic than state, a good option can be to define one or several static classes that contain no static fields but only pure static methods. A pure static method is a function that computes a result only from its input parameters; it neither reads nor assigns any static or instance field. The main advantage of pure static methods is that they are easily testable.
• Try to maintain the interface of the God Class at first and delegate calls to the newly extracted classes. In the end, the God Class should be a pure facade without its own logic. Then you can keep it for convenience or throw it away and start to use the new classes only.
• Unit tests can help: write tests for each method before extracting it to ensure you don't break functionality.
The estimated Debt, i.e. the effort to fix such an issue, varies linearly from 1 hour for a 200-lines-of-code type up to 10 hours for a type with 2,000 or more lines of code.
In Debt and Interest computation, this rule takes into account the fact that static types with no mutable fields are just collections of static methods that can easily be split and moved from one type to another.
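The "pure static method" advice above can be sketched as extracting logic that depends only on its inputs (a hypothetical example with invented names, shown in Python for brevity):

```python
class OrderGodClass:
    """Hypothetical God Class: mixes state with computation."""
    def __init__(self, items):
        self.items = items  # list of (price, quantity) tuples

    def total(self) -> float:
        # After refactoring, delegates to the extracted pure function below.
        return order_total(self.items)

def order_total(items) -> float:
    """Pure function: the result depends only on its input, so it is
    trivially unit-testable without constructing the God Class at all."""
    return sum(price * qty for price, qty in items)

print(order_total([(2.0, 3), (1.5, 2)]))  # 9.0
```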
Avoid types with too many methods - Code Smells
Notice that methods like constructors or property and event accessors are not taken into account.
Having many methods for a type might be a symptom of too many responsibilities implemented.
Maybe you are facing the God Class phenomenon: A God Class is a class that controls way too many other classes in the system and has grown beyond all logic to become The Class That Does Everything.
How to Fix:
To properly refactor a God Class, please read the How to Fix advice of the default rule Avoid types too big. The estimated Debt, i.e. the effort to fix such an issue, varies linearly from 1 hour for a type with 20 methods up to 10 hours for a type with 200 or more methods.
In Debt and Interest computation, this rule takes into account the fact that static types with no mutable fields are just collections of static methods that can easily be split and moved from one type to another.
Avoid types with too many fields - Code Smells
Notice that constant fields and static-readonly fields are not counted. Enumeration types are not counted either.
Having many fields for a type might be a symptom of too many responsibilities implemented.
How to Fix:
To refactor such a type and increase code quality and maintainability, you'll certainly have to group subsets of fields into smaller types and dispatch the logic implemented in the methods into these smaller types.
More refactoring advice can be found in the How to Fix section of the default rule Avoid types too big.
The estimated Debt, i.e. the effort to fix such an issue, varies linearly from 1 hour for a type with 15 fields up to 10 hours for a type with 200 or more fields.
Avoid methods too big, too complex - Code Smells
Maybe you are facing the God Method phenomenon. A "God Method" is a method that performs way too many processes in the system and has grown beyond all logic to become The Method That Does Everything. As the need for new processes grows, some programmers reason: why should I create a new method for each process if I can just add an if?
See the definition of the CyclomaticComplexity metric here: http://www.cppdepend.com/Metrics.aspx#CC
How to Fix:
A large and complex method should be split into smaller methods, or one or several classes can even be created for that.
During this process it is important to question the scope of each variable local to the method. This can indicate whether such a local variable should become an instance field of the newly created class(es).
Large switch…case structures can be refactored with the help of a set of types implementing a common interface, the polymorphic calls playing the role of the switch-case tests.
Unit tests can help: write tests for each method before extracting it to ensure you don't break functionality.
The estimated Debt, which means the effort to fix such an issue, varies from 40 minutes to 8 hours, depending linearly on a weighted complexity score.
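To make the switch-to-polymorphism advice concrete, here is a minimal hypothetical sketch (Shape, Circle, Rectangle and TotalArea are illustrative names, not part of any real codebase): each former switch case becomes an override of a common virtual method, and the God Method shrinks to a simple loop.

```cpp
#include <memory>
#include <vector>

// Before: double Area(const ShapeData& s) { switch (s.kind) { ... } }
// After: each switch case becomes a class implementing a common interface.
class Shape {
public:
    virtual ~Shape() = default;
    virtual double Area() const = 0;  // one override replaces one switch case
};

class Circle : public Shape {
    double radius_;
public:
    explicit Circle(double r) : radius_(r) {}
    double Area() const override { return 3.14159265358979 * radius_ * radius_; }
};

class Rectangle : public Shape {
    double w_, h_;
public:
    Rectangle(double w, double h) : w_(w), h_(h) {}
    double Area() const override { return w_ * h_; }
};

// The former God Method shrinks to a loop: the virtual call
// plays the role of the switch dispatch.
double TotalArea(const std::vector<std::unique_ptr<Shape>>& shapes) {
    double total = 0.0;
    for (const auto& s : shapes) total += s->Area();
    return total;
}
```

Adding a new shape now means adding a new class, without touching the existing dispatch logic.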
Avoid methods with too many parameters - Code Smells
How to Fix:
More properties/fields can be added to the declaring type to handle the numerous states. An alternative is to provide a class or a structure dedicated to handling the passing of arguments.
The estimated Debt, which means the effort to fix such an issue, varies linearly from 1 hour for a method with 7 parameters, up to 6 hours for a method with 40 or more parameters.
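The dedicated-argument-structure alternative can be sketched as follows (ReportOptions and RenderReport are hypothetical names used only for illustration): the long parameter list is grouped into one cohesive struct with sensible defaults.

```cpp
#include <string>

// Before: RenderReport(title, pageWidth, pageHeight, showHeader, showFooter, ...)
// After: one argument object groups the related parameters.
struct ReportOptions {
    std::string title;
    int pageWidth   = 80;   // defaults reduce call-site noise
    int pageHeight  = 60;
    bool showHeader = true;
    bool showFooter = true;
};

std::string RenderReport(const ReportOptions& opts) {
    std::string out;
    if (opts.showHeader) out += "[header]";
    out += opts.title;
    if (opts.showFooter) out += "[footer]";
    return out;
}
```

Call sites then set only the options they care about, and new options can be added without breaking existing callers.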
Avoid methods with too many local variables - Code Smells
Methods where NbVariables > 8 are hard to understand and maintain. Methods where NbVariables > 15 are extremely complex and must be refactored.
See the definition of the Nbvariables metric here: http://www.cppdepend.com/Metrics.aspx#Nbvariables
How to Fix:
To refactor such a method and increase code quality and maintainability, you will most likely have to split the method into several smaller methods, or even create one or several classes to implement the logic.
During this process it is important to question the scope of each variable local to the method. This can indicate whether such a local variable should become an instance field of the newly created class(es).
The estimated Debt, which means the effort to fix such an issue, varies linearly from 10 minutes for a method with 15 variables, up to 2 hours for a method with 80 or more variables.
Avoid methods with too many overloads - Code Smells
This rule matches sets of methods with 6 overloads or more.
Such a set of methods might be a problem to maintain, and it provokes higher coupling than necessary.
See the definition of the NbOverloads metric here http://www.cppdepend.com/Metrics.aspx#NbOverloads
How to Fix:
Typically the too-many-overloads phenomenon appears when an algorithm accepts various sets of in-parameters; each overload is offered as a convenient way to pass a different set of in-parameters. The phenomenon can also be a consequence of using the visitor design pattern http://en.wikipedia.org/wiki/Visitor_pattern since a method named Visit() must be provided for each subtype. In such situations there is no need for a fix.
Sometimes the too-many-overloads phenomenon is not the symptom of a problem, for example when a numeric-to-something conversion method applies to all numeric and nullable numeric types.
The estimated Debt, which means the effort to fix such an issue, is 2 minutes per method overload.
Avoid methods potentially poorly commented - Code Smells
See the definitions of the Comments metric here: http://www.cppdepend.com/Metrics.aspx#PercentageComment http://www.cppdepend.com/Metrics.aspx#NbLinesOfComment
Notice that only comments about the method implementation (comments in the method body) are taken into account.
How to Fix:
Typically, add more comments. But code commenting is subject to controversy: while poorly written and designed code needs a lot of comments to be understood, clean code doesn't need that many comments, especially if variables and methods are properly named and convey enough information. Unit-test code can also play the role of code commenting.
However, even when writing clean and well-tested code, one will have to write hacks at some point, usually to circumvent some API limitations or bugs. A hack is a non-trivial piece of code that doesn't make sense at first glance and that took time and web research to find. In such situations, comments must absolutely be used to express the intention, the need for the hack, and the source where the solution was found.
The estimated Debt, which means the effort to comment such a method, varies linearly from 2 minutes for 10 lines of code not commented, up to 20 minutes for 200 or more lines of code not commented.
Avoid types with poor cohesion - Code Smells
The LCOM metric measures the extent to which the methods of a class use its fields. A class is considered utterly cohesive (which is good) if all its methods use all its instance fields.
Only types with enough methods and fields are taken into account, to avoid bias. The LCOM metric takes its values in the range [0-1].
This rule matches types with LCOM higher than 0.8. Such value generally pinpoints a poorly cohesive class.
How to Fix:
To refactor a poorly cohesive type and increase code quality and maintainability, you will most likely have to split the type into several smaller, more cohesive types that together implement the same logic.
The estimated Debt, which means the effort to fix such an issue, varies linearly from 5 minutes for a type with a low poorCohesionScore, up to 4 hours for a type with a high poorCohesionScore.
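A minimal hypothetical sketch of such a split (Customer, Address and PaymentInfo are illustrative names): when a class's fields form two clusters, each used by a disjoint subset of methods, each cluster can become its own cohesive type, composed by the original class.

```cpp
#include <string>
#include <utility>

// Cluster 1: only the address-related methods touched these fields.
class Address {
    std::string street_, city_;
public:
    Address(std::string s, std::string c)
        : street_(std::move(s)), city_(std::move(c)) {}
    std::string Label() const { return street_ + ", " + city_; }
};

// Cluster 2: only the payment-related methods touched these fields.
class PaymentInfo {
    std::string cardNumber_;
public:
    explicit PaymentInfo(std::string n) : cardNumber_(std::move(n)) {}
    std::string MaskedNumber() const {
        return "****" + cardNumber_.substr(cardNumber_.size() - 4);
    }
};

// The original class keeps its responsibilities by composing both:
// each smaller type is now fully cohesive on its own fields.
class Customer {
    Address address_;
    PaymentInfo payment_;
public:
    Customer(Address a, PaymentInfo p)
        : address_(std::move(a)), payment_(std::move(p)) {}
    std::string ShippingLabel() const { return address_.Label(); }
    std::string Receipt() const { return payment_.MaskedNumber(); }
};
```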
From now, all types added should respect basic quality principles - Code Smells Regression
This rule can easily be modified to also match types refactored since the baseline that don't satisfy all quality criteria.
Types matched by this rule have not only been recently added or refactored, but also somehow violate one or several basic quality principles: they have too many methods, too many fields, or use too many types. Any of these criteria is often a symptom of a type with too many responsibilities.
Notice that when counting methods and fields, methods like constructors or property and event accessors are not taken into account. Notice that constant fields and static-readonly fields are not counted. Enumeration types are not counted either.
How to Fix:
To refactor such a type and increase code quality and maintainability, you will most likely have to split the type into several smaller types that together implement the same logic.
Issues of this rule have a constant 10 minutes Debt, because the Debt, which means the effort to fix such an issue, is already estimated for issues of rules in the category Code Quality.
However issues of this rule have a High severity, with even more interest for issues on new types since the baseline, because the proper time to increase the quality of these types is now, before they get committed in the next production release.
From now, all types added should be 100% covered by tests - Code Smells Regression
This rule can easily be modified to also match types refactored since the baseline that are not 100% covered by tests.
This rule is executed only if some code coverage data is imported from some code coverage files.
Often, covering the last 10% of a class's uncovered code requires as much work as covering the first 90%. For this reason, teams typically estimate that 90% coverage is enough. However, untestable code usually means poorly written code, which usually leads to error-prone code. So it might be worth refactoring and covering the remaining 10%, because the trickiest bugs often come from this small portion of hard-to-test code.
Not all classes should be 100% covered by tests (UI code, for example, can be hard to test), but you should make sure that most of the logic of your application is defined in easy-to-test classes that are 100% covered by tests.
In this context, this rule warns when a type added or refactored since the baseline is not fully covered by tests.
How to Fix:
Write more unit tests dedicated to covering the code that is not covered yet. If you find some hard-to-test code, it is certainly a sign that this code is not well designed and hence needs refactoring.
You'll find some code impossible to cover with unit tests, like calls to MessageBox.Show(). An infrastructure must be defined to be able to mock such code at test time.
Issues of this rule have a constant 10 minutes Debt, because the Debt, which means the effort to write tests for the culprit type, is already estimated for issues in the category Code Coverage.
However issues of this rule have a High severity, with even more interest for issues on new types since the baseline, because the proper time to write tests for these types is now, before they get committed in the next production release.
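One common way to build the mocking infrastructure mentioned above can be sketched as follows (IUserNotifier, FakeNotifier and WarnIfOverBudget are hypothetical names): the untestable UI call is hidden behind an interface, so tests substitute a recording fake and the surrounding logic becomes fully coverable.

```cpp
#include <string>

// The untestable call (a MessageBox.Show()-style popup) is hidden
// behind a small interface.
class IUserNotifier {
public:
    virtual ~IUserNotifier() = default;
    virtual void Notify(const std::string& message) = 0;
};

// In production, an implementation of IUserNotifier would invoke
// the real UI API; it stays thin and is the only uncovered piece.

// Test double: records the message instead of showing a dialog.
class FakeNotifier : public IUserNotifier {
public:
    std::string lastMessage;
    void Notify(const std::string& message) override { lastMessage = message; }
};

// The logic under test depends only on the interface,
// so it can be 100% covered by tests.
void WarnIfOverBudget(double spent, double budget, IUserNotifier& notifier) {
    if (spent > budget) notifier.Notify("over budget");
}
```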
From now, all methods added should respect basic quality principles - Code Smells Regression
This rule can easily be modified to also match methods refactored since the baseline that don't satisfy all quality criteria.
Methods matched by this rule have not only been recently added or refactored, but also somehow violate one or several basic quality principles: the method is too large (too many lines of code), too complex (too many if, switch case, loops…), has too many variables, too many parameters, or has too many overloads.
How to Fix:
To refactor such a method and increase code quality and maintainability, you will most likely have to split the method into several smaller methods, or even create one or several classes to implement the logic.
During this process it is important to question the scope of each variable local to the method. This can indicate whether such a local variable should become an instance field of the newly created class(es).
Large switch…case structures can be refactored with the help of a set of types implementing a common interface, the polymorphic calls playing the role of the switch-case tests.
Unit tests can help: write tests for each method before extracting it to ensure you don't break functionality.
Issues of this rule have a constant 5 minutes Debt, because the Debt, which means the effort to fix such an issue, is already estimated for issues of rules in the category Code Quality.
However issues of this rule have a High severity, with even more interest for issues on new methods since the baseline, because the proper time to increase the quality of these methods is now, before they get committed in the next production release.
Avoid decreasing code coverage by tests of types - Code Smells Regression
This rule is executed only if some code coverage data is imported from some code coverage files.
This rule warns when the number of lines of a type covered by tests has decreased since the baseline. In case the type has been refactored since the baseline, this loss in coverage is estimated only for types with more lines of code, where the number of lines of code covered now is lower than the number of lines of code covered at the baseline plus the extra number of lines of code.
Such a situation can mean that some tests have been removed but, more often, it means that the type has been modified and the changes haven't been properly covered by tests.
To visualize changes in code, right-click a matched type and select:
• Compare older and newer versions of source file
How to Fix:
Write more unit tests dedicated to covering the changes in matched types that are not covered yet. If you find some hard-to-test code, it is certainly a sign that this code is not well designed and hence needs refactoring.
The estimated Debt, which means the effort to cover by tests code that used to be covered, varies linearly from 15 minutes to 3 hours, depending on the number of lines of code that are no longer covered by tests.
The severity of issues of this rule varies from High to Critical depending on the number of lines of code that are no longer covered by tests. Because the loss in code coverage happened since the baseline, it is important to focus on these issues now, before such code gets released in production.
Avoid making complex methods even more complex - Code Smells Regression
The method complexity is measured through the code metric Cyclomatic Complexity defined here: http://www.cppdepend.com/Metrics.aspx#CC
This rule warns when a method that was already complex (i.e. with a Cyclomatic Complexity higher than 6) has become even more complex since the baseline.
To visualize changes in code, right-click a matched method and select:
• Compare older and newer versions of source file
How to Fix:
A large and complex method should be split into smaller methods, or one or several classes can even be created for that.
During this process it is important to question the scope of each variable local to the method. This can indicate whether such a local variable should become an instance field of the newly created class(es).
Large switch…case structures can be refactored with the help of a set of types implementing a common interface, the polymorphic calls playing the role of the switch-case tests.
Unit tests can help: write tests for each method before extracting it to ensure you don't break functionality.
The estimated Debt, which means the effort to fix such an issue, varies linearly from 15 to 60 minutes depending on the extra complexity added.
Issues of this rule have a High severity, because it is important to focus on these issues now, before such code gets released in production.
Avoid making large methods even larger - Code Smells Regression
This rule warns when a method that was already large (i.e. with more than 15 lines of code) has become even larger since the baseline.
The method size is measured through the code metric # Lines of Code defined here: http://www.cppdepend.com/Metrics.aspx#NbLinesOfCode
To visualize changes in code, right-click a matched method and select:
• Compare older and newer versions of source file
How to Fix:
Usually, methods that are too big should be split into smaller methods.
But long methods with no branch conditions, which typically initialize some data, are not necessarily a problem to maintain and might not need refactoring.
The estimated Debt, which means the effort to fix such an issue, varies linearly from 5 to 20 minutes depending on the number of lines of code added.
Issues of this rule have a High severity, because it is important to focus on these issues now, before such code gets released in production.
Avoid adding methods to a type that already had many methods - Code Smells Regression
Types where number of methods is greater than 15 might be hard to understand and maintain.
This rule lists types that already had more than 15 methods at the baseline time, and for which new methods have been added.
Having many methods for a type might be a symptom of too many responsibilities implemented.
Notice that constructors and compiler-generated methods are not taken into account.
How to Fix:
To refactor such a type and increase code quality and maintainability, you will most likely have to split the type into several smaller types that together implement the same logic.
The estimated Debt, which means the effort to fix such an issue, is equal to 10 minutes per method added.
Issues of this rule have a High severity, because it is important to focus on these issues now, before such code gets released in production.
Avoid adding instance fields to a type that already had many instance fields - Code Smells Regression
Types where number of fields is greater than 15 might be hard to understand and maintain.
This rule lists types that already had more than 15 fields at the baseline time, and for which new fields have been added.
Having many fields for a type might be a symptom of too many responsibilities implemented.
Notice that constant fields and static-readonly fields are not taken into account. Enumeration types are not taken into account either.
How to Fix:
To refactor such a type and increase code quality and maintainability, you will most likely have to group subsets of fields into smaller types and dispatch the logic implemented in the methods into these smaller types.
The estimated Debt, which means the effort to fix such an issue, is equal to 10 minutes per field added.
Issues of this rule have a High severity, because it is important to focus on these issues now, before such code gets released in production.
Avoid transforming an immutable type into a mutable one - Code Smells Regression
A type is considered as immutable if its instance fields cannot be modified once an instance has been built by a constructor.
Being immutable has several fortunate consequences for a type. For example its instance objects can be used concurrently from several threads without the need to synchronize accesses.
Hence users of such a type often rely on the fact that it is immutable. If an immutable type becomes mutable, chances are this will break its users' code.
This is why this rule warns about immutable types that become mutable.
How to Fix:
If being immutable is an important property for a matched type, then the code must be refactored to preserve immutability.
The estimated Debt, which means the effort to fix such an issue, is equal to 10 minutes, plus 10 minutes per instance field of the matched type that is now mutable.
Issues of this rule have a High severity, because it is important to focus on these issues now, before such code gets released in production.
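As a minimal sketch of preserving immutability in C++ (the Money class is a hypothetical example): all fields are const and set once in the constructor, so instances are safe to share across threads, and "mutating" operations return a new instance.

```cpp
#include <string>
#include <utility>

// An immutable type: fields cannot be modified after construction.
class Money {
    const long long cents_;       // const members cannot be reassigned
    const std::string currency_;
public:
    Money(long long cents, std::string currency)
        : cents_(cents), currency_(std::move(currency)) {}

    long long Cents() const { return cents_; }
    const std::string& Currency() const { return currency_; }

    // "Mutation" returns a fresh instance instead of modifying this one,
    // so existing references keep observing the same value forever.
    Money Add(long long cents) const { return Money(cents_ + cents, currency_); }
};
```

If a later change turned cents_ into a plain mutable field with a setter, every caller relying on value stability (caching, concurrent reads) could silently break, which is exactly what this rule guards against.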
Base class should not use derivatives - Object Oriented Design
A base class should be designed properly to make it easy to derive from: this is extension. But creating a new derived class, or modifying an existing one, shouldn't provoke any modification in the base class. And if a base class uses some of its derived classes somehow, chances are good that such modifications will be needed.
Extending the base class is then no longer a simple operation; this is not good design.
How to Fix:
Understand the need for using derivatives, then imagine a new design, and then refactor.
Typically an algorithm in the base class needs to access something from derived classes. You can try to encapsulate this access behind an abstract or a virtual method.
If you see in the base class conditions on typeof(DerivedClass), urgent refactoring is needed: such a condition can easily be replaced by an abstract or a virtual method.
Sometimes you'll see a base class that creates instances of some of its derived classes. In such a situation, using the factory method pattern http://en.wikipedia.org/wiki/Factory_method_pattern or the abstract factory pattern http://en.wikipedia.org/wiki/Abstract_factory_pattern will certainly improve the design.
The estimated Debt, which means the effort to fix such an issue, is equal to 3 minutes per derived class used by the base class, plus 3 minutes per member of a derived class used by the base class.
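The factory method fix can be sketched as follows (Application, Document, TextDocument are illustrative names in the spirit of the classic pattern example): the base class stops naming its derivatives and defers creation to a virtual hook that derived classes implement.

```cpp
#include <memory>
#include <string>

class Document {
public:
    virtual ~Document() = default;
    virtual std::string Kind() const = 0;
};

class Application {
public:
    virtual ~Application() = default;
    // Before: this method did `new TextDocument()` directly, coupling
    // the base class to one of its derivatives. Now the derived
    // application decides which document to create.
    std::string OpenNew() { return CreateDocument()->Kind(); }
protected:
    virtual std::unique_ptr<Document> CreateDocument() = 0;  // factory method
};

class TextDocument : public Document {
public:
    std::string Kind() const override { return "text"; }
};

class TextApplication : public Application {
protected:
    std::unique_ptr<Document> CreateDocument() override {
        return std::make_unique<TextDocument>();
    }
};
```

Adding a new document type now requires a new Application subclass, but no change to the Application base class itself.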
Class shouldn't be too deep in inheritance tree - Object Oriented Design
In theory, there is nothing wrong with a long inheritance chain, as long as the modelization has been well thought out and each base class is a well-designed refinement of the domain.
In practice, modeling a domain properly demands a lot of effort and experience, and more often than not a long inheritance chain is a sign of confused design that will be hard to work with and maintain.
How to Fix:
In Object-Oriented Programming, a well-known motto is Favor Composition over Inheritance.
This is because inheritance comes with pitfalls. In general, the implementation of a derived class is tightly bound to the base class implementation. Also, a base class exposes implementation details to its derived classes, which is why it is often said that inheritance breaks encapsulation.
On the other hand, composition favors binding to interfaces over binding to implementations. Hence not only is encapsulation preserved, but the design is clearer, because interfaces make it explicit and less coupled.
Hence, to break a long inheritance chain, Composition is often a powerful way to enhance the design of the refactored underlying logic.
You can also read: http://en.wikipedia.org/wiki/Composition_over_inheritance and http://stackoverflow.com/questions/49002/prefer-composition-over-inheritance
The estimated Debt, which means the effort to fix such an issue, depends linearly on the depth of inheritance.
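A minimal sketch of replacing one level of a deep chain with composition (IWriter, PlainWriter, TimestampWriter are hypothetical names): instead of Logger → FileLogger → TimestampedFileLogger → …, each refinement is a decorator wrapping a small interface, so refinements combine freely without deepening the hierarchy.

```cpp
#include <memory>
#include <string>
#include <utility>

class IWriter {
public:
    virtual ~IWriter() = default;
    virtual std::string Write(const std::string& msg) = 0;
};

class PlainWriter : public IWriter {
public:
    std::string Write(const std::string& msg) override { return msg; }
};

// Composition: this decorator adds one refinement by wrapping another
// IWriter, instead of extending an ever-deeper inheritance chain.
class TimestampWriter : public IWriter {
    std::unique_ptr<IWriter> inner_;
public:
    explicit TimestampWriter(std::unique_ptr<IWriter> inner)
        : inner_(std::move(inner)) {}
    std::string Write(const std::string& msg) override {
        return "[ts] " + inner_->Write(msg);  // real timestamp stubbed for brevity
    }
};
```

Each behavior remains one level deep, and new combinations (timestamped, filtered, buffered…) are built at construction time rather than by multiplying subclasses.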
Constructor should not call a virtual methods - Object Oriented Design
When an object written in C++ is constructed, constructors run in order from the base class to the most derived class.
During construction, the dynamic type of the object is the type whose constructor is currently running. This means that a virtual call made from a constructor dispatches to the version defined in the class under construction (or inherited from its bases), never to an override defined in a not-yet-constructed derived class. Calling a pure virtual function from a constructor is even undefined behavior.
As a consequence, a virtual call in a constructor rarely does what the author of a derived class expects: the derived override is silently ignored, because the derived part of the object is not initialized yet.
Hence this situation makes the class fragile to derive from.
How to Fix:
Violations reported can be solved by re-designing object initialisation or, when possible, by marking the class as final so that it cannot be derived from.
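In C++ specifically, a virtual call made from a constructor dispatches to the constructor's own class, which this minimal sketch (illustrative Base/Derived names) demonstrates:

```cpp
#include <string>

struct Base {
    std::string constructedAs;
    // Virtual call in a constructor: during Base's constructor the
    // dynamic type is still Base, so this resolves to Base::Name(),
    // never to Derived::Name().
    Base() { constructedAs = Name(); }
    virtual ~Base() = default;
    virtual std::string Name() const { return "Base"; }
};

struct Derived : Base {
    std::string Name() const override { return "Derived"; }
};
```

The author of Derived might reasonably expect constructedAs to be "Derived"; it is not, which is exactly why deriving from such a class is fragile.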
Don't assign static fields from instance methods - Object Oriented Design
More discussion on the topic can be found here: http://codebetter.com/patricksmacchia/2011/05/04/back-to-basics-usage-of-static-members/
How to Fix:
If the static field is assigned just once in the program's lifetime, make sure to declare it as const and initialize it at its definition.
In Object-Oriented Programming, the natural artifact for holding modifiable state is the instance field.
Hence, to fix violations of this rule, make sure to hold assignable state through instance fields, not through static fields.
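As a sketch (the Counter class is hypothetical), modifiable state held in an instance field instead of a static field keeps each object's state independent and avoids hidden global coupling:

```cpp
// Before: a static int shared by all instances, assigned from
// instance methods. After: each object owns its own state.
class Counter {
    int count_ = 0;          // instance state, not static
public:
    void Increment() { ++count_; }
    int Value() const { return count_; }
};

// A value assigned once for the program's lifetime can stay static,
// but should then be const and initialized at its definition:
// static const int kMaxRetries = 3;
```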
Avoid Abstract Classes with too many methods - Object Oriented Design
A property with a getter, a setter, or both counts as one method. An event counts as one method.
How to Fix:
Typically, to fix such an issue, the abstract class must be refactored into a group of smaller, single-responsibility abstractions.
A classic example is a large ISession interface, responsible for holding state, running commands, and offering various accesses and facilities.
The classic problem with a large public interface is that it has many clients consuming it. As a consequence, splitting it into smaller interfaces has an important impact and is not always feasible.
The estimated Debt, which means the effort to fix such an issue, varies linearly from 20 minutes for an interface with 10 methods, up to 7 hours for an interface with 100 or more methods. The Debt is divided by two if the interface is not publicly visible, because in that situation only the current project is impacted by the refactoring.
Type should not have too many responsibilities - Object Oriented Design
Nested types should not be visible - Object Oriented Design
A nested type is a type declared within the scope of another type. Nested types are useful for encapsulating private implementation details of the containing type. Used for this purpose, nested types should not be externally visible.
Do not use externally visible nested types for logical grouping or to avoid name collisions; instead use namespaces.
Nested types include the notion of member accessibility, which some programmers do not understand clearly.
Protected types can be used in subclasses and nested types in advanced customization scenarios.
How to Fix:
If you do not intend the nested type to be externally visible, change the type's accessibility.
Otherwise, remove the nested type from its parent and make it non-nested.
If the purpose of the nesting is to group some nested types, use a namespace to create the hierarchy instead.
The estimated Debt, which means the effort to fix such an issue, is 2 minutes per nested type, plus 4 minutes per outer type using such a nested type.
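The two fixes above can be sketched together (geometry, Point, Path, Segment are illustrative names): a namespace provides the grouping that the externally visible nested type used to, while a genuinely private implementation detail stays nested and invisible to clients.

```cpp
// Grouping: done with a namespace, not with a public nested type.
namespace geometry {
struct Point { double x, y; };
}

class Path {
    // Private nested type: a legitimate implementation detail,
    // not visible outside Path.
    struct Segment { geometry::Point from, to; };
    int segmentCount_ = 0;
public:
    void AddSegment(geometry::Point a, geometry::Point b) {
        Segment s{a, b};
        (void)s;             // a real implementation would store it
        ++segmentCount_;
    }
    int SegmentCount() const { return segmentCount_; }
};
```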
Projects with poor cohesion (RelationalCohesion) - Object Oriented Design
The Relational Cohesion of a project is the total number of relationships between types of the project, divided by the number of types. In other words, it is the average number of types in the project used by a type in the project.
As classes inside a project should be strongly related, the cohesion should be high. On the other hand, a value that is too high may indicate over-coupling. A good range for Relational Cohesion is 1.5 to 4.0.
Notice that projects with less than 20 types are ignored.
How to Fix:
Matches of this rule might reveal either projects with specific coding constraints (like generated code with a particular structure) or issues in design.
In the second case, a large refactoring can be planned, not to satisfy this rule in particular, but to improve the overall design and code maintainability.
The severity of issues of this rule is Low because the Relational Cohesion code metric provides information about the state of the code structure but is not actionable: it doesn't tell precisely what to do to obtain a better score.
Fixing actionable issues of the other Architecture and Code Quality default rules will necessarily improve the Relational Cohesion scores.
Projects that don't satisfy the Abstractness/Instability principle - Object Oriented Design
• Abstractness: if a project contains many abstract types (i.e. interfaces and abstract classes) and few concrete types, it is considered abstract.
• Stability: a project is considered stable if its types are used by a lot of types from other projects. In this context, stable means painful to modify.
From these metrics, we define the perpendicular normalized distance of a project from the idealized line A + I = 1 (called the main sequence). This metric is an indicator of the project's balance between abstractness and stability. Note that normalized means the range of values is [0.0 … 1.0].
This rule warns about projects with a normalized distance greater than 0.7.
This rule uses the default project code metric Normalized Distance from the Main Sequence, explained here: http://www.cppdepend.com/Metrics#DitFromMainSeq
These concepts have been originally introduced by Robert C. Martin in 1994 in this paper: http://www.objectmentor.com/resources/articles/oodmetrc.pdf
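The metrics described above can be sketched numerically as follows (function names are illustrative; the formulas A = abstract types / total types, I = Ce / (Ca + Ce), D = |A + I - 1| follow Martin's paper referenced above):

```cpp
#include <cmath>

// A: fraction of abstract types in the project.
double Abstractness(int abstractTypes, int totalTypes) {
    return static_cast<double>(abstractTypes) / totalTypes;
}

// I: efferent coupling Ce over total coupling (Ca + Ce).
// I == 0 means maximally stable (many incoming dependencies),
// I == 1 means maximally unstable (only outgoing dependencies).
double Instability(int afferentCoupling, int efferentCoupling) {
    return static_cast<double>(efferentCoupling)
         / (afferentCoupling + efferentCoupling);
}

// Normalized distance from the main sequence A + I = 1.
double NormalizedDistance(double a, double i) {
    return std::fabs(a + i - 1.0);
}
```

A half-abstract, half-stable project sits exactly on the main sequence (distance 0), while a fully concrete, fully stable project sits at the "painful" extreme (distance 1), which is the kind of imbalance this rule flags above 0.7.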
How to Fix:
Violations of this rule indicate projects with an improper abstractness / stability balance.
• Either the project is potentially painful to maintain (i.e. it is massively used and contains mostly concrete types). This can be fixed by creating abstractions to avoid too high a coupling with concrete implementations.
• Or the project is potentially useless (i.e. it contains mostly abstractions and is not used enough). In such a situation, the design must be reviewed to see if it can be enhanced.
The severity of issues of this rule is Low because the Abstractness/Instability principle provides information about the state of the code structure but is not actionable: it doesn't tell precisely what to do to obtain a better score.
Fixing actionable issues of the other Architecture and Code Quality default rules will necessarily push the Abstractness/Instability scores in the right direction.
Higher cohesion - lower coupling - Object Oriented Design
Constructors of abstract classes should be declared as protected or private - Object Oriented Design
The class does not have a constructor. - Object Oriented Design
Class has a constructor with 1 argument that is not explicit. - Object Oriented Design
Value of pointer var, which points to allocated memory, is copied in copy constructor instead of allocating new memory. - Object Oriented Design
class class does not have a copy constructor which is recommended since the class contains a pointer to allocated memory. - Object Oriented Design
Member variable is not initialized in the constructor. - Object Oriented Design
Member variable is not assigned a value in classname::operator=. - Object Oriented Design
Unused private function: classname::funcname - Object Oriented Design
Using memfunc on class that contains a classname. - Object Oriented Design
Using memfunc on class that contains a reference. - Object Oriented Design
Using memset() on class which contains a floating point number. - Object Oriented Design
Memory for class instance allocated with malloc(), but class provides constructors. - Object Oriented Design
Memory for class instance allocated with malloc(), but class contains a std::string. - Object Oriented Design
class::operator= should return class &. - Object Oriented Design
Class Base which is inherited by class Derived does not have a virtual destructor. - Object Oriented Design
Suspicious pointer subtraction. Did you intend to write ->? - Object Oriented Design
operator= should return reference to this instance. - Object Oriented Design
No return statement in non-void function causes undefined behavior. - Object Oriented Design
operator= should either return reference to this instance or be declared private and left unimplemented. - Object Oriented Design
operator= should check for assignment to self to avoid problems with dynamic memory. - Object Oriented Design
Variable is assigned in constructor body. Consider performing initialization in initialization list. - Object Oriented Design
Member variable is initialized by itself. - Object Oriented Design
The class class defines member variable with name variable also defined in its parent class class. - Object Oriented Design
Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') - CWE Coding Standard
Divide By Zero - CWE Coding Standard
Unchecked Error Condition - CWE Coding Standard
Declaration of Catch for Generic Exception - CWE Coding Standard
Improper Release of Memory Before Removing Last Reference ('Memory Leak') - CWE Coding Standard
Double Free - CWE Coding Standard
Use of Uninitialized Variable - CWE Coding Standard
Incomplete Cleanup - CWE Coding Standard
NULL Pointer Dereferenced - CWE Coding Standard
Use of Obsolete Functions - CWE Coding Standard
Comparing instead of Assigning - CWE Coding Standard
Omitted Break Statement in Switch - CWE Coding Standard
Dead Code - CWE Coding Standard
Assignment to Variable without Use ('Unused Variable') - CWE Coding Standard
Expression is Always False - CWE Coding Standard
Expression is Always True - CWE Coding Standard
Function Call with Incorrectly Specified Arguments - CWE Coding Standard
Use of Potentially Dangerous Function - CWE Coding Standard
Operator Precedence Logic Error - CWE Coding Standard
Returning/dereferencing p after it is deallocated / released - Leaks
Memory pointed to by varname is freed twice. - Leaks
Allocation with funcName, funcName doesn't release it. - Leaks
Return value of allocation function funcName is not stored. - Leaks
Possible leak in public function. The pointer varname is not deallocated before it is allocated. - Leaks
Class class is unsafe, class::varname can leak by wrong usage. - Leaks
Memory leak: varname - Leaks
Resource leak: varname - Leaks
Deallocating a deallocated pointer: varname - Leaks
Dereferencing varname after it is deallocated / released - Leaks
The allocated size sz is not a multiple of the underlying type's size. - Leaks
Mismatching allocation and deallocation: varname - Leaks
Common realloc mistake: varname nulled but not freed upon failure - Leaks
Null pointer dereference - Null Pointer
Possible null pointer dereference if the default parameter value is used: pointer - Null Pointer
Either the condition is redundant or there is possible null pointer dereference: pointer. - Null Pointer
Address of local auto-variable assigned to a function parameter. - Auto Variables
Address of an auto-variable returned. - Auto Variables
Pointer to local array variable returned. - Auto Variables
Reference to auto variable returned. - Auto Variables
Reference to temporary returned. - Auto Variables
Deallocation of an auto-variable results in undefined behaviour. - Auto Variables
Address of function parameter parameter returned. - Auto Variables
Assignment of function parameter has no effect outside the function. - Auto Variables
Assignment of function parameter has no effect outside the function. Did you forget dereferencing it? - Auto Variables
Array array[2] index array[1][1] out of bounds. - Bounds Checking
Buffer is accessed out of bounds: buffer - Bounds Checking
Dangerous usage of strncat - 3rd parameter is the maximum number of characters to append. - Bounds Checking
index is out of bounds: Supplied size 2 is larger than actual size 1. - Bounds Checking
The size argument is given as a char constant. - Bounds Checking
Array index -1 is out of bounds. - Bounds Checking
Buffer overrun possible for long command line arguments. - Bounds Checking
Undefined behaviour, pointer arithmetic is out of bounds. - Bounds Checking
Array index index is used before limits check. - Bounds Checking
Possible buffer overflow if strlen(source) is larger than or equal to sizeof(destination). - Bounds Checking
The array array is too small, the function function expects a bigger one. - Bounds Checking
Memory allocation size is negative. - Bounds Checking
Declaration of array with negative size is undefined behaviour - Bounds Checking
Array x[10] accessed at index 20, which is out of bounds. Otherwise condition y==20 is redundant. - Bounds Checking
Invalid iterator: iterator - STL
Same iterator is used with different containers container1 and container2. - STL
Iterators of different containers are used together. - STL
Invalid iterator iter used. - STL
When i==foo.size(), foo[i] is out of bounds. - STL
After push_back|push_front|insert(), the iterator iterator may be invalid. - STL
Invalid pointer pointer after push_back(). - STL
Dangerous comparison using operator< on iterator. - STL
Suspicious condition. The result of find() is an iterator, but it is not properly checked. - STL
Inefficient usage of string::find() in condition; string::compare() would be faster. - STL
Dangerous usage of c_str(). The value returned by c_str() is invalid after this call. - STL
Returning the result of c_str() in a function that returns std::string is slow and redundant. - STL
Passing the result of c_str() to a function that takes std::string as argument no. 0 is slow and redundant. - STL
Possible inefficient checking for list emptiness. - STL
Missing bounds check for extra iterator increment in loop. - STL
Redundant checking of STL container element existence before removing it. - STL
Copying auto_ptr pointer to another does not create two equal objects since one has lost its ownership of the pointer. - STL
You can randomly lose access to pointers if you store auto_ptr pointers in an STL container. - STL
Object pointed by an auto_ptr is destroyed using operator delete. You should not use auto_ptr for pointers obtained with operator new[]. - STL
Object pointed by an auto_ptr is destroyed using operator delete. You should not use auto_ptr for pointers obtained with function malloc. - STL
It is inefficient to call str.find(str) as it always returns 0. - STL
It is inefficient to swap an object with itself by calling str.swap(str). - STL
Ineffective call of function substr because it returns a copy of the object. Use operator= instead. - STL
Ineffective call of function empty(). Did you intend to call clear() instead? - STL
Return value of std::remove() ignored. Elements remain in container. - STL
Possible dereference of an invalid iterator: i - STL
Boolean value assigned to pointer. - Boolean
Boolean value assigned to floating point variable. - Boolean
Comparison of a function returning boolean value using relational (<, >, <= or >=) operator. - Boolean
Comparison of two functions returning boolean value using relational (<, >, <= or >=) operator. - Boolean
Comparison of a variable having boolean value using relational (<, >, <= or >=) operator. - Boolean
Incrementing a variable of type bool with postfix operator++ is deprecated by the C++ Standard. You should assign it the value true instead. - Boolean
Comparison of a boolean expression with an integer other than 0 or 1. - Boolean
Converting pointer arithmetic result to bool. The bool is always true unless there is undefined behaviour. - Boolean
Modifying string literal directly or indirectly is undefined behaviour. - String
Undefined behavior: Variable varname is used as parameter and destination in s[n]printf(). - String
Unusual pointer arithmetic. A value of type char is added to a string literal. - String
String literal Hello World doesn't match length argument for substr(). - String
String literal compared with variable foo. Did you intend to use strcmp() instead? - String
Char literal compared with pointer foo. Did you intend to dereference it? - String
Conversion of string literal Hello World to bool always evaluates to true. - String
Unnecessary comparison of static strings. - String
Comparison of identical string variables. - String
Shifting 32-bit value by 64 bits is undefined behaviour - Type
Signed integer overflow for expression . - Type
Suspicious code: sign conversion of var in calculation, even though var can have a negative value - Type
int result is assigned to long variable. If the variable is long to avoid loss of information, then you have loss of information. - Type
int result is returned as long value. If the return value is long to avoid loss of information, then you have loss of information. - Type
scanf is deprecated: This function or variable may be unsafe. Consider using scanf_s instead. - IO usage
Invalid usage of output stream: << std::cout. - IO usage
fflush() called on input stream stdin may result in undefined behaviour on non-linux systems. - IO usage
Read and write operations without a call to a positioning function (fseek, fsetpos or rewind) or fflush in between result in undefined behaviour. - IO usage
Read operation on a file that was opened only for writing. - IO usage
Write operation on a file that was opened only for reading. - IO usage
Used file that is not opened. - IO usage
Repositioning operation performed on a file opened in append mode has no effect. - IO usage
scanf() without field width limits can crash with huge input data. - IO usage
printf format string requires 3 parameters but only 2 are given. - IO usage
%s in format string (no. 1) requires a char * but the argument type is Unknown. - IO usage
%d in format string (no. 1) requires int * but the argument type is Unknown. - IO usage
%f in format string (no. 1) requires float * but the argument type is Unknown. - IO usage
%s in format string (no. 1) requires char * but the argument type is Unknown. - IO usage
%n in format string (no. 1) requires int * but the argument type is Unknown. - IO usage
%p in format string (no. 1) requires an address but the argument type is Unknown. - IO usage
%X in format string (no. 1) requires unsigned int but the argument type is Unknown. - IO usage
%u in format string (no. 1) requires unsigned int but the argument type is Unknown. - IO usage
%i in format string (no. 1) requires int but the argument type is Unknown. - IO usage
%f in format string (no. 1) requires double but the argument type is Unknown. - IO usage
I in format string (no. 1) is a length modifier and cannot be used without a conversion specifier. - IO usage
Width 5 given in format string (no. 10) is larger than destination buffer [0], use %-1s to prevent overflowing it. - IO usage
printf: referencing parameter 2 while 1 arguments given - IO usage
Assigning a pointer to an integer is not portable. - 64-bit portability
Assigning an integer to a pointer is not portable. - 64-bit portability
Returning an integer in a function with pointer return type is not portable. - 64-bit portability
Returning an address value in a function with integer return type is not portable. - 64-bit portability
Either the condition is redundant or there is division by zero at line 0. - Misc
Instance of varname object is destroyed immediately. - Misc
Casting between float* and double* which have an incompatible binary data representation. - Misc
Shifting a negative value is undefined behaviour - Misc
Buffer varname must have size of 2 integers if used as parameter of pipe(). - Misc
Race condition: non-interlocked access after InterlockedDecrement(). Use InterlockedDecrement() return value instead. - Misc
Buffer var is being written before its old content has been used. - Misc
Variable var is reassigned a value before the old one has been used. - Misc
Comparison of two identical variables with isless(varName,varName) always evaluates to false. - Misc
Storing func_name() return value in char variable and then comparing with EOF. - Misc
Function parameter parametername should be passed by reference. - Misc
Redundant code: Found a statement that begins with type constant. - Misc
Signed char type used as array index. - Misc
char type used as array index. - Misc
When using char variables in bit operations, sign extension can generate unexpected results. - Misc
The scope of the variable varname can be reduced. - Misc
Variable var is reassigned a value before the old one has been used. break; missing? - Misc
Buffer var is being written before its old content has been used. break; missing? - Misc
Redundant assignment of varname to itself. - Misc
memset() called to fill 0 bytes. - Misc
The 2nd memset() argument varname is a float, its representation is implementation defined. - Misc
The 2nd memset() argument varname doesn't fit into an unsigned char. - Misc
Clarify calculation precedence for + and ?. - Misc
Ineffective statement similar to *A++;. Did you intend to write (*A)++;? - Misc
Same expression on both sides of &&. - Misc
Same expression in both branches of ternary operator. - Misc
Consecutive return, break, continue, goto or throw statements are unnecessary. - Misc
Statements following return, break, continue, goto or throw will never be executed. - Misc
Checking if unsigned variable varname is less than zero. - Misc
Unsigned variable varname can't be negative so it is unnecessary to test it. - Misc
A pointer can not be negative so it is either pointless or an error to check if it is. - Misc
A pointer can not be negative so it is either pointless or an error to check if it is not. - Misc
Passing NULL after the last typed argument to a variadic function leads to undefined behaviour. - Misc
Using NaN/Inf in a computation. - Misc
Comma is used in return statement. The comma can easily be misread as a ;. - Misc
Redundant pointer operation on varname - it's already a pointer. - Misc
Label is not used. Should this be a case of the enclosing switch()? - Misc
Label is not used. - Misc
Expression x = x++; depends on order of evaluation of side effects - Misc
Prefer prefix ++/-- operators for non-primitive types. - Misc
Source files should not use the '\r' (CR) character - Vera++
File names should be well-formed - Vera++
No trailing whitespace - Vera++
Don't use tab characters - Vera++
No leading and no trailing empty lines - Vera++
Line cannot be too long - Vera++
There should not be too many consecutive empty lines - Vera++
Source file should not be too long - Vera++
One-line comments should not have forced continuation - Vera++
Reserved names should not be used for preprocessor macros - Vera++
Some keywords should be followed by a single space - Vera++
Some keywords should be immediately followed by a colon - Vera++
Keywords break and continue should be immediately followed by a semicolon - Vera++
Keywords return and throw should be immediately followed by a semicolon or a single space - Vera++
Semicolons should not be isolated by spaces or comments from the rest of the code - Vera++
Keywords catch, for, if, switch and while should be followed by a single space - Vera++
Comma should not be preceded by whitespace, but should be followed by one - Vera++
Identifiers should not be composed of 'l' and 'O' characters only - Vera++
Curly brackets from the same pair should be either in the same line or in the same column - Vera++
Negation operator should not be used in its short form - Vera++
Source files should contain the copyright notice - Vera++
HTML links in comments and string literals should be correct - Vera++
Calls to min/max should be protected against accidental macro substitution - Vera++
Unnamed namespaces are not allowed in header files - Vera++
Using namespace is not allowed in header files - Vera++
Control structures should have complete curly-braced block of code - Vera++
API Breaking Changes: Types - API Breaking Changes
API Breaking Changes: Methods - API Breaking Changes
API Breaking Changes: Fields - API Breaking Changes
API Breaking Changes: Interfaces and Abstract Classes - API Breaking Changes
Avoid transforming immutable types into mutable types - API Breaking Changes
New Projects - Code Diff Summary
Projects removed - Code Diff Summary
Projects where code was changed - Code Diff Summary
New namespaces - Code Diff Summary
Namespaces removed - Code Diff Summary
Namespaces where code was changed - Code Diff Summary
New types - Code Diff Summary
Types removed - Code Diff Summary
Types where code was changed - Code Diff Summary
Heuristic to find types moved from one namespace or project to another - Code Diff Summary
Types directly using one or several types changed - Code Diff Summary
Types indirectly using one or several types changed - Code Diff Summary
New methods - Code Diff Summary
Methods removed - Code Diff Summary
Methods where code was changed - Code Diff Summary
Methods directly calling one or several methods changed - Code Diff Summary
Methods indirectly calling one or several methods changed - Code Diff Summary
New fields - Code Diff Summary
Fields removed - Code Diff Summary
Third party types that were not used and that are now used - Code Diff Summary
Third party types that were used and that are not used anymore - Code Diff Summary
Third party methods that were not used and that are now used - Code Diff Summary
Third party methods that were used and that are not used anymore - Code Diff Summary
Third party fields that were not used and that are now used - Code Diff Summary
Third party fields that were used and that are not used anymore - Code Diff Summary
Code should be tested - Code Coverage
For each match, the rule estimates the technical debt, i.e. the effort to write unit and integration tests for the method. The estimation is based on the effort to develop the code element, multiplied by factors in the range ]0,1.3] based on
• the method code size and complexity
• the actual percentage coverage
• the abstractness of the types used, because relying on classes instead of interfaces makes the code more difficult to test
• the method visibility, because testing private or protected methods is more difficult than testing public and internal ones
• the fields used by the method, because it is more complicated to write tests for methods that read mutable static fields, whose changing state is shared across test executions
• whether the method is considered JustMyCode or not, because NotMyCode is often generated code, which is easier to get tested since tests can be generated as well.
This rule is necessarily a large source of technical debt, since the code left untested is by definition part of the technical debt.
This rule also estimates the annual interest, i.e. the annual cost of leaving the code uncovered, based on the effort to develop the code element, multiplied by factors based on the usage of the code element.
How to Fix:
Write unit tests to test and cover the methods and their parent classes matched by this rule.
New Methods should be tested - Code Coverage
This rule is executed only if some code coverage data is imported from some code coverage files.
It is important to write code mostly covered by tests to achieve maintainable and non-error-prone code.
In the real world, many code bases are poorly covered by tests. However, it is not practical to stop development for months to refactor and write tests to achieve a high code-coverage ratio.
Hence it is recommended that each time a method (or a type) gets added, the developer takes the time to write associated unit-tests to cover it.
Doing so will significantly increase the maintainability of the code base. You'll quickly notice that refactoring becomes driven by testability and, as a consequence, the overall code structure and design improve as well.
Issues of this rule have a High severity because they reflect an actual trend to not care about writing tests on newly added code.
How to Fix:
Write unit-tests to cover the code of most methods and classes added.
Methods refactored should be tested - Code Coverage
This rule is executed only if some code coverage data is imported from some code coverage files.
It is important to write code mostly covered by tests to achieve maintainable and non-error-prone code.
In the real world, many code bases are poorly covered by tests. However, it is not practical to stop development for months to refactor and write tests to achieve a high code-coverage ratio.
Hence it is recommended that each time a method (or a type) gets refactored, the developer takes the time to write associated unit-tests to cover it.
Doing so will significantly increase the maintainability of the code base. You'll quickly notice that refactoring becomes driven by testability and, as a consequence, the overall code structure and design improve as well.
Issues of this rule have a High severity because they reflect an actual trend to not care about writing tests on refactored code.
How to Fix:
Write unit-tests to cover the code of most methods and classes refactored.
Types almost 100% tested should be 100% tested - Code Coverage
Often, covering the few percent of remaining uncovered code of a class requires as much work as covering the first 90%. For this reason, teams often estimate that 90% coverage is enough. However, untestable code usually means poorly written code, which usually leads to error-prone code. So it might be worth refactoring and making sure to cover the few uncovered lines of code, because the trickiest bugs may come from this small portion of hard-to-test code.
Not all classes should be 100% covered by tests (for example, UI code can be hard to test), but you should make sure that most of the logic of your application is defined in easy-to-test classes, 100% covered by tests.
Issues of this rule have a High severity because, as explained, such a situation is bug-prone.
How to Fix:
Write more unit-tests dedicated to covering the code not yet covered. If you find some hard-to-test code, it is certainly a sign that this code is not well designed and hence needs refactoring.
Namespaces almost 100% tested should be 100% tested - Code Coverage
Often, covering the few percent of remaining uncovered code of one or several classes in a namespace requires as much work as covering the first 90%. For this reason, teams often estimate that 90% coverage is enough. However, untestable code usually means poorly written code, which usually leads to error-prone code. So it might be worth refactoring and making sure to cover the few uncovered lines of code, because the trickiest bugs may come from this small portion of hard-to-test code.
Not all classes should be 100% covered by tests (for example, UI code can be hard to test), but you should make sure that most of the logic of your application is defined in easy-to-test classes, 100% covered by tests.
Issues of this rule have a High severity because, as explained, such a situation is bug-prone.
How to Fix:
Write more unit-tests dedicated to covering the code in the namespace not yet covered. If you find some hard-to-test code, it is certainly a sign that this code is not well designed and hence needs refactoring.
Types that used to be 100% covered by tests should still be 100% covered - Code Coverage
This rule is executed only if some code coverage data is imported from some code coverage files.
Often, covering the 10% of remaining uncovered code of a class requires as much work as covering the first 90%. For this reason, teams typically estimate that 90% coverage is enough. However, untestable code usually means poorly written code, which usually leads to error-prone code. So it might be worth refactoring and making sure to cover the 10% remaining code, because the trickiest bugs may come from this small portion of hard-to-test code.
Not all classes should be 100% covered by tests (for example, UI code can be hard to test), but you should make sure that most of the logic of your application is defined in easy-to-test classes, 100% covered by tests.
In this context, this rule warns when a type fully covered by tests is now only partially covered.
Issues of this rule have a High severity because a type that used to be 100% covered and is no longer fully covered is often a bug-prone situation that should be carefully handled.
How to Fix:
Write more unit-tests dedicated to covering the code not covered anymore. If you find some hard-to-test code, it is certainly a sign that this code is not well designed and hence needs refactoring.
You'll find code impossible to cover by unit-tests, like calls to MessageBox.Show(). An infrastructure must be defined to be able to mock such code at test-time.
Methods should have a low C.R.A.P score - Code Coverage
So far this rule is disabled because other code coverage rules properly assess code coverage issues.
Change Risk Analyzer and Predictor (i.e. CRAP) is a code metric that helps pinpoint code that is both overly complex and untested. It was first defined here: http://www.artima.com/weblogs/viewpost.jsp?thread=215899
The Formula is: CRAP(m) = CC(m)^2 * (1 – cov(m)/100)^3 + CC(m)
• where CC(m) is the cyclomatic complexity of the method m
• and cov(m) is the percentage coverage by tests of the method m
Matched methods accumulate two highly error-prone code smells:
• A complex method, difficult to develop and maintain.
• Non 100% covered code, difficult to refactor without introducing any regression bug.
The higher the CRAP score, the more painful to maintain and the more error-prone the method.
An arbitrary threshold of 30 is fixed for this code rule, as suggested by its inventors.
Notice that no amount of testing will keep methods with a Cyclomatic Complexity higher than 30 out of CRAP territory.
Notice that this rule doesn't match short methods with fewer than 10 lines of code.
How to Fix:
In such a situation, it is recommended both to refactor the complex method logic into several smaller and less complex methods (which might belong to new types created for the purpose), and to write unit-tests to fully cover the refactored logic.
You'll find code impossible to cover by unit-tests, like calls to MessageBox.Show(). An infrastructure must be defined to be able to mock such code at test-time.
Potentially dead Types - Dead Code
Potentially dead Methods - Dead Code
Potentially dead Fields - Dead Code
Use auto specifier - Modernize C++ Code
Use nullptr - Modernize C++ Code
Modernize loops - Modernize C++ Code
Use unique_ptr instead of auto_ptr - Modernize C++ Code
Use override keyword - Modernize C++ Code
Pass By Value - Modernize C++ Code
Avoid Bind - Modernize C++ Code
Modernize deprecated headers - Modernize C++ Code
Modernize make_shared - Modernize C++ Code
Modernize make_unique - Modernize C++ Code
Modernize raw string literal - Modernize C++ Code
Modernize redundant void arg - Modernize C++ Code
Modernize random shuffle - Modernize C++ Code
Modernize return braced init list - Modernize C++ Code
Modernize shrink-to-fit - Modernize C++ Code
Modernize unary static-assert - Modernize C++ Code
Modernize use bool literals - Modernize C++ Code
Modernize use default member init - Modernize C++ Code
Modernize use emplace - Modernize C++ Code
Modernize use equals default - Modernize C++ Code
Modernize use equals delete - Modernize C++ Code
Modernize use noexcept - Modernize C++ Code
Modernize use transparent functors - Modernize C++ Code
Modernize use using - Modernize C++ Code
Braces around statements - HICPP coding standard
Deprecated headers - HICPP coding standard
Exception baseclass - HICPP coding standard
Explicit conversions - HICPP coding standard
Function size - HICPP coding standard
Invalid access moved - HICPP coding standard
Member init - HICPP coding standard
Move const arg - HICPP coding standard
Named parameter - HICPP coding standard
New and delete overloads - HICPP coding standard
No array decay - HICPP coding standard
No assembler - HICPP coding standard
No malloc - HICPP coding standard
Signed bitwise - HICPP coding standard
Special member functions - HICPP coding standard
Undelegated constructor - HICPP coding standard
Use emplace - HICPP coding standard
Use noexcept - HICPP coding standard
Use auto - HICPP coding standard
HICPP-Use nullptr - HICPP coding standard
Use equals default - HICPP coding standard
Use equals delete - HICPP coding standard
Static assert - Cert coding standard
Check Postfix operators - Cert coding standard
Check C-style variadic functions - Cert coding standard
Delete null pointer - Cert coding standard
Check new and delete overloads - Cert coding standard
check change of std or posix namespace - Cert coding standard
Finds anonymous namespaces in headers. - Cert coding standard
Do not call system() - Cert coding standard
Finds violations of the rule Throw by value, catch by reference. - Cert coding standard
Detect errors when converting a string to a number. - Cert coding standard
Do not use setjmp() or longjmp(). - Cert coding standard
Handle all exceptions thrown before main() begins executing - Cert coding standard
Exception objects must be nothrow copy constructible. - Cert coding standard
Do not copy a FILE object. - Cert coding standard
Do not use floating-point variables as loop counters. - Cert coding standard
Check the usage of std::rand() - Cert coding standard
Performance of move constructor init - Cert coding standard
Instance fields should be prefixed with a 'm_' - Naming Conventions
Static fields should be prefixed with a 's_' - Naming Conventions
Exception class name should be suffixed with 'Exception' - Naming Conventions
Types name should begin with an Upper character - Naming Conventions
Avoid types with name too long - Naming Conventions
Avoid methods with name too long - Naming Conventions
Avoid fields with name too long - Naming Conventions
Avoid naming types and namespaces with the same identifier - Naming Conventions
All CPD duplications - CPD Queries
Most duplicated code lines - CPD Queries
Big duplications - CPD Queries
Classes containing big duplication - CPD Queries
Classes containing Many duplications - CPD Queries
Types Hot Spots - Hot Spots
Both issues on the type and on its members are taken into account.
Since untested code often generates a lot of Debt, the type size and percentage coverage are shown (just uncomment t.PercentageCoverage in the query source code once you've imported the coverage data).
The Debt Rating and Debt Ratio are also shown for informational purposes.
--
The amount of Debt is not a measure to prioritize the effort to fix issues; it is an estimation of how far the team is from clean code that abides by the rules set.
For each issue, the Annual Interest estimates the annual cost of leaving the issue unfixed. The Severity of an issue is estimated through thresholds on its Annual Interest.
The Debt Breaking Point represents the duration from now after which the estimated cost of leaving the issue unfixed equals the estimated effort to fix it.
Hence, the shorter the Debt Breaking Point, the larger the Return on Investment for fixing the issue. The Breaking Point is the right metric to prioritize issue fixes.
Types to Fix Priority - Hot Spots
For each issue, the Debt estimates the effort to fix the issue, and the Annual Interest estimates the annual cost of leaving it unfixed. The Severity of an issue is estimated through thresholds on its Annual Interest.
The Debt Breaking Point represents the duration from now after which the estimated cost of leaving the issue unfixed equals the estimated effort to fix it.
Hence, the shorter the Debt Breaking Point, the larger the Return on Investment for fixing the issues.
Often, new and refactored types since the baseline will be listed first, because issues on these types get a higher Annual Interest: it is important to focus first on new issues.
--
Both issues on the type and on its members are taken into account.
Only types with at least 30 minutes of Debt are listed, to avoid cluttering the list with the numerous types carrying small Debt, for which the Breaking Point value makes less sense.
The Annual Interest estimates the cost per year, in man-days, of leaving these issues unfixed.
Since untested code often generates a lot of Debt, the type size and percentage coverage are shown (just uncomment t.PercentageCoverage in the query source code once you've imported the coverage data).
The Debt Rating and Debt Ratio are also shown for informational purposes.
Issues to Fix Priority - Hot Spots
Double-click an issue to edit its rule and select the issue in the rule result. This way you can view all information concerning the issue.
For each issue, the Debt estimates the effort to fix the issue, and the Annual Interest estimates the annual cost of leaving it unfixed. The Severity of an issue is estimated through thresholds on its Annual Interest.
The Debt Breaking Point represents the duration from now after which the estimated cost of leaving the issue unfixed equals the estimated effort to fix it.
Hence, the shorter the Debt Breaking Point, the larger the Return on Investment for fixing the issue.
Often, issues on new and refactored code elements since the baseline will be listed first, because such issues get a higher Annual Interest: it is important to focus first on new issues on recent code.
Debt and Issues per Rule - Hot Spots
A violated rule has issues. For each issue, the Debt estimates the effort to fix the issue.
--
The amount of Debt is not a measure to prioritize the effort to fix issues; it is an estimation of how far the team is from clean code that abides by the rules set.
For each issue, the Annual Interest estimates the annual cost of leaving the issue unfixed. The Severity of an issue is estimated through thresholds on its Annual Interest.
The Debt Breaking Point represents the duration from now after which the estimated cost of leaving the issue unfixed equals the estimated effort to fix it.
Hence, the shorter the Debt Breaking Point, the larger the Return on Investment for fixing the issue. The Breaking Point is the right metric to prioritize issue fixes.
--
Notice that rules can be grouped by Rule Category. This way you'll see the categories that generate the most Debt.
Typically, the rules that generate the most Debt are the ones related to Code Coverage by Tests, Architecture and Code Smells.
New Debt and Issues per Rule - Hot Spots
A violated rule has issues. For each issue, the Debt estimates the effort to fix the issue.
--
New issues since the baseline are a consequence of recent refactoring sessions. They represent good opportunities for fixes, because recently refactored code is fresh in the developers' minds, which means fixing now costs less than fixing later.
Fixing issues on recently touched code is also a good way to foster practices that lead to higher code quality and maintainability, including writing unit-tests and avoiding unnecessarily complex code.
--
Notice that rules can be grouped by Rule Category. This way you'll see the categories that generate the most Debt.
Typically, the rules that generate the most Debt are the ones related to Code Coverage by Tests, Architecture and Code Smells.
Debt and Issues per Code Element - Hot Spots
For each code element, the Debt estimates the effort to fix the element's issues.
The amount of Debt is not a measure to prioritize the effort to fix issues; it is an estimation of how far the team is from clean code that abides by the rules set.
For each element, the Annual Interest estimates the annual cost of leaving the element's issues unfixed. The Severity of an issue is estimated through thresholds on its Annual Interest.
The Debt Breaking Point represents the duration from now after which the estimated cost of leaving the issues unfixed equals the estimated effort to fix them.
Hence, the shorter the Debt Breaking Point, the larger the Return on Investment for fixing the issues. The Breaking Point is the right metric to prioritize issue fixes.
New Debt and Issues per Code Element - Hot Spots
For each code element, the Debt estimates the effort to fix the element's issues.
New issues since the baseline are a consequence of recent refactoring sessions. They represent good fix opportunities because recently refactored code is fresh in the developers' minds, which means fixing now costs less than fixing later.
Fixing issues in recently touched code is also a good way to foster practices that lead to higher code quality and maintainability, including writing unit tests and avoiding unnecessarily complex code.
Most used types (Rank) - Statistics
Most used methods (Rank) - Statistics
Most used namespaces (#NamespacesUsingMe) - Statistics
Most used types (#TypesUsingMe) - Statistics
Most used methods (#MethodsCallingMe) - Statistics
Namespaces that use many other namespaces (#NamespacesUsed) - Statistics
Types that use many other types (#TypesUsed) - Statistics
Methods that use many other methods (#MethodsCalled) - Statistics
High-level to low-level Projects (Level) - Statistics
High-level to low-level namespaces (Level) - Statistics
High-level to low-level types (Level) - Statistics
High-level to low-level methods (Level) - Statistics
Check that all types that derive from Foo have a name that ends with Foo - Custom Naming Conventions
Check that all namespaces begin with CompanyName.ProductName - Custom Naming Conventions
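In CppDepend such checks are expressed as code queries. As a language-neutral illustration of the two conventions only (this is not CppDepend's query syntax, and the element names are hypothetical), a minimal sketch:

```python
def violates_derived_naming(type_name: str, base_name: str = "Foo") -> bool:
    """A type deriving from `base_name` should have a name ending with it."""
    return not type_name.endswith(base_name)

def violates_namespace_prefix(namespace: str,
                              prefix: str = "CompanyName.ProductName") -> bool:
    """Every namespace should begin with the company/product prefix."""
    return not namespace.startswith(prefix)

# Hypothetical code elements to check:
derived_types = ["CustomerFoo", "OrderFoo", "Invoice"]   # all derive from Foo
namespaces = ["CompanyName.ProductName.Core", "Legacy.Utils"]

print([t for t in derived_types if violates_derived_naming(t)])  # ['Invoice']
print([n for n in namespaces if violates_namespace_prefix(n)])   # ['Legacy.Utils']
```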
# Lines of Code - Code Size
# Lines of Code (JustMyCode) - Code Size
# Lines of Code (NotMyCode) - Code Size
# Lines of Code Added since the Baseline - Code Size
# Source Files - Code Size
# Lines of Comments - Code Size
# Projects - Code Size
# Namespaces - Code Size
# Types - Code Size
# Classes - Code Size
# Abstract Classes - Code Size
# Interfaces - Code Size
# Structures - Code Size
# Methods - Code Size
# Abstract Methods - Code Size
# Concrete Methods - Code Size
# Fields - Code Size
Max # Lines of Code for Methods (JustMyCode) - Maximum and Average
Average # Lines of Code for Methods - Maximum and Average
Average # Lines of Code for Methods with at least 3 Lines of Code - Maximum and Average
Max # Lines of Code for Types (JustMyCode) - Maximum and Average
Average # Lines of Code for Types - Maximum and Average
Max Cyclomatic Complexity for Methods - Maximum and Average
Max Cyclomatic Complexity for Types - Maximum and Average
Average Cyclomatic Complexity for Methods - Maximum and Average
Average Cyclomatic Complexity for Types - Maximum and Average
Max Nesting Depth for Methods - Maximum and Average
Average Nesting Depth for Methods - Maximum and Average
Max # of Methods for Types - Maximum and Average
Average # Methods for Types - Maximum and Average
Max # of Methods for Interfaces - Maximum and Average
Average # Methods for Interfaces - Maximum and Average
Percentage Code Coverage - Coverage
# Lines of Code Covered - Coverage
# Lines of Code Not Covered - Coverage
# Lines of Code in Types 100% Covered - Coverage
# Lines of Code in Methods 100% Covered - Coverage
Max C.R.A.P Score - Coverage
The formula is: CRAP(m) = CC(m)^2 * (1 - cov(m)/100)^3 + CC(m)
• where CC(m) is the cyclomatic complexity of the method m
• and cov(m) is the percentage coverage by tests of the method m
Matched methods cumulate two highly error-prone code smells:
• A complex method, difficult to develop and maintain.
• Code not 100% covered by tests, difficult to refactor without risking a regression bug.
The higher the CRAP score, the more painful to maintain and the more error-prone the method is.
An arbitrary threshold of 30 is fixed for this code rule, as suggested by the inventors of the metric.
Notice that no amount of testing will keep methods with a Cyclomatic Complexity higher than 30 out of CRAP territory.
Notice that the CRAP score is not computed for short methods with fewer than 10 lines of code.
To list methods with highest C.R.A.P scores, please refer to the default rule: Test and Code Coverage > C.R.A.P method code metric
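The formula above can be checked with a small calculator. The CC and coverage values below are illustrative:

```python
def crap(cc: int, coverage_percent: float) -> float:
    """C.R.A.P. score: CRAP(m) = CC(m)^2 * (1 - cov(m)/100)^3 + CC(m)."""
    return cc ** 2 * (1 - coverage_percent / 100) ** 3 + cc

# A complex, completely untested method is deep in CRAP territory:
print(crap(15, 0))      # 240.0
# Full coverage reduces the score to the complexity itself ...
print(crap(15, 100))    # 15.0
# ... so a method with CC > 30 stays above the threshold of 30
# no matter how thoroughly it is tested:
print(crap(31, 100))    # 31.0
```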
Average C.R.A.P Score - Coverage
The formula is: CRAP(m) = CC(m)^2 * (1 - cov(m)/100)^3 + CC(m)
• where CC(m) is the cyclomatic complexity of the method m
• and cov(m) is the percentage coverage by tests of the method m
Matched methods cumulate two highly error-prone code smells:
• A complex method, difficult to develop and maintain.
• Code not 100% covered by tests, difficult to refactor without risking a regression bug.
The higher the CRAP score, the more painful to maintain and the more error-prone the method is.
An arbitrary threshold of 30 is fixed for this code rule, as suggested by the inventors of the metric.
Notice that no amount of testing will keep methods with a Cyclomatic Complexity higher than 30 out of CRAP territory.
Notice that the CRAP score is not computed for short methods with fewer than 10 lines of code.
To list methods with highest C.R.A.P scores, please refer to the default rule: Test and Code Coverage > C.R.A.P method code metric