They find architectural complexity accounts for a ~3x difference in defects, a 50% difference in productivity, and an order-of-magnitude difference in staff turnover.
The "McCabe cyclomatic complexity metric" doesn't predict the last two. Instead they use the "MacCormack" method (MacCormack supervised this PhD...), so I wonder how it differs.
fun fact: they reference an evaluation of Mozilla's refactoring (the one Joel said you must never do).
It's a PhD thesis, and it's long, detailed and academic (tautology), yet lacks an introduction, so:
The Missing Introduction
"Architectural complexity" here doesn't mean the complexity of architectural features themselves (e.g. architecture astronauts making too many layers, etc.), but the direct and indirect interactions between parts.
Instead of node-and-arc diagrams, they argue a matrix representation shows complexity better (vertical axis: using; horizontal axis: used - or maybe vice versa - with a dot at each dependency), called a "Design Structure Matrix" (DSM).
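To make the DSM idea concrete, here is a minimal sketch with an invented three-file dependency graph (the file names and dependencies are mine, not the thesis's; the thesis may also orient the axes the other way):

```python
# Toy Design Structure Matrix (DSM): rows = "using" file, columns =
# "used" file, with a 1 wherever a direct dependency exists.
# The dependency graph below is invented for illustration.
deps = {"main": ["parser", "util"], "parser": ["util"], "util": []}

files = sorted(deps)  # fixed row/column order: ["main", "parser", "util"]
dsm = [[1 if used in deps[using] else 0 for used in files] for using in files]

for name, row in zip(files, dsm):
    print(f"{name:>7} " + " ".join(str(v) for v in row))
```

Direct dependencies show up as marks off the diagonal; the point of the representation is that indirect reachability (computed later via transitive closure) fills in additional cells that an arc diagram hides.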
The key sections seem to be:
3.6-7 [p.34-42] about DSMs
5.1 [p.67-83] about complexity by the "MacCormack approach" [pasted below, from p. 70]
[5.1.2.1] Capture a network representation of a software product's source-code using dependency extraction tools
[5.1.2.2] Find all the indirect paths between files in the network by computing the graph's transitive closure
[5.1.2.3] Assign two visibility scores to each file that represent its reachability from other files or ability to reach other files in the network. [Visibility Fan In, Visibility Fan Out]
[5.1.3] Use these two visibility scores to classify each file as one of four types: peripheral, utility, control, or core.
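The steps above can be sketched in a few lines under my reading of the method. Everything here is a hedged approximation: the dependency graph is invented, and the median split used to decide "high" vs "low" visibility is a placeholder - the thesis defines the actual thresholds and the exact fan-in/fan-out conventions (e.g. whether a file counts as reaching itself).

```python
# Sketch of 5.1.2.2 - 5.1.3: transitive closure, visibility scores,
# and a four-way classification. Toy graph and thresholds are mine.
from statistics import median

deps = {"main": ["parser", "util"], "parser": ["util"],
        "util": [], "log": ["util"]}  # invented dependency graph

def reachable(start):
    """All files reachable from `start` via direct/indirect dependencies."""
    seen, stack = set(), [start]
    while stack:
        for nxt in deps[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

closure = {f: reachable(f) for f in deps}                 # transitive closure
vfo = {f: len(closure[f]) for f in deps}                  # Visibility Fan-Out
vfi = {f: sum(f in closure[g] for g in deps) for f in deps}  # Visibility Fan-In

mi, mo = median(vfi.values()), median(vfo.values())       # placeholder split

def classify(f):
    high_in, high_out = vfi[f] > mi, vfo[f] > mo
    if high_in and high_out:
        return "core"        # reaches many, reached by many
    if high_out:
        return "control"     # reaches many, reached by few
    if high_in:
        return "utility"     # reached by many, reaches few
    return "peripheral"      # low on both

for f in sorted(deps):
    print(f, vfi[f], vfo[f], classify(f))
```

On this toy graph, `util` ends up with high fan-in (everything can reach it) and `main` with high fan-out, which matches the intuition behind the utility/control labels.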
They also have "network density" and "propagation cost" measures [p.76]
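As I understand these two measures (following the MacCormack et al. line of work the thesis builds on), network density is the fraction of possible file pairs with a direct dependency, and propagation cost is the same fraction computed on the transitive closure - roughly, how much of the system a change can reach on average. The denominator convention (n² vs n(n-1), diagonal in or out) varies, so treat this as a sketch:

```python
# Hedged sketch: network density on direct edges vs propagation cost
# on the transitive closure. Toy graph and n*n denominator are mine.
deps = {"main": ["parser"], "parser": ["util"],
        "util": [], "log": ["util"]}  # invented dependency graph

def reachable(start):
    seen, stack = set(), [start]
    while stack:
        for nxt in deps[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

n = len(deps)
direct = sum(len(v) for v in deps.values())           # 3 direct edges
indirect = sum(len(reachable(f)) for f in deps)       # 4 reachable pairs

network_density = direct / (n * n)                    # 3 / 16
propagation_cost = indirect / (n * n)                 # 4 / 16
print(network_density, propagation_cost)
```

Note that propagation cost exceeds density here only because of the indirect path `main -> parser -> util`; in a system with no indirect dependencies the two would coincide.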
---
Aside: A PhD thesis takes some time to read before commenting. I wish there were a way to facilitate discussion of the submission itself, in addition to the title-based opinion and experience we have now. The patient_hackernews experiment (https://old.reddit.com/r/patient_hackernews/) didn't seem to work out.
Perhaps just a re-submission (or a bump?) labelled "for readers only" 24 hours later?