You may have seen the terms flitting around the internet: variants, variant management, product line engineering (PLE). You may have read that they represent an onrushing trend in product development. You may be wondering, “Just what is all this, and how is it going to affect us?” In this 3-part series, based on a new whitepaper from Polarion Software, we’re going to try to answer these questions and others, and talk about solutions.
In Part 1 “Variety is the Spice of Life” we talked about some of the drivers behind the trend of demand for variation and customization. In this second part, we look into tactical approaches to variation.
Traditional Responses to Software Variation are Insufficient
Software development organizations face constant pressure to deliver more, to do it faster, and to release higher-quality deliverables. The introduction of software variants places even more pressure on development teams, and traditional responses to the ensuing disruption become insufficient as levels of software variation increase and accelerate.
Agile Development Practices
Recently, the adoption of agile development approaches has gained widespread popularity as a way both to develop faster and to deliver results that more reliably fulfill requirements. While Agile methods certainly allow developers to iterate releases more rapidly, and ensure ongoing validation of delivered functionality throughout the development process, they do not specifically address variant management in the fundamental ways needed. Adopting an agile methodology is therefore not, in and of itself, an effective strategy for implementing variant management in the development lifecycle (although combining agile practices with other means of variant management can yield strongly synergistic benefits). Simply increasing the speed of development, i.e., release rapidity, does not scale to accommodate growing numbers of variants: it demands unrealistic resourcing levels, introduces organizational complexity, and even then provides no contextual framework for managing variants efficiently.
Adaptations of Version-Oriented Release Management
Development practice has long leveraged ‘cloning’ strategies (a.k.a. “clone & own,” “branching and merging,” among other terms) as a reusability technique, both to speed development processes and to achieve quality outcomes. Reuse is essentially basic human behavior as a response mechanism: solve thorny problems with known existing solutions, or by adapting similar ones, in order to save time.
While reusing components from predecessor and parallel projects is one of the keys to variant management, adaptations of traditional version-oriented approaches are inherently limited in accommodating a broad base of reusable assets, and are inherently single-project focused. Mass variation, i.e., development of numerous, possibly similar components across multiple projects, soon overwhelms the efficacy of branching techniques. Conflicts between branches may remain unidentified, or the ability to establish a new branch is inhibited by known conflicts. The same functionality may be developed repeatedly for slight variations, while seemingly identical functions behave differently depending on their position in the branching hierarchy.

Traceability and accountability are the first casualties when single-project, version-oriented techniques are adapted for variant management: testing cycles lengthen, quality becomes compromised, and validation for regulatory or standards-compliance purposes becomes problematic. Maintenance effort grows exponentially, as each branching operation adds complexity while traceability becomes ever more obscured. Merging changes becomes increasingly convoluted, and in the worst cases merging becomes entirely impossible because of conflicting changes.
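The duplication problem can be sketched in a few lines of C (the function names and tax rates here are purely hypothetical, invented for illustration): after a clone & own copy, two near-identical functions begin to drift, and a later fix to one will not automatically reach the other.

```c
#include <assert.h>

/* Original project: compute a gross price with the standard tax rate. */
int price_std(int net) {
    return net + net * 19 / 100;   /* 19% tax */
}

/* "Clone & own" copy made for a regional variant: the logic was
   duplicated and one constant changed. From this point on, any bug
   fix or improvement must be remembered and applied twice. */
int price_regional(int net) {
    return net + net * 21 / 100;   /* 21% tax, otherwise identical */
}
```

With only two clones this looks harmless; with dozens of variants across multiple projects, exactly this pattern is what overwhelms branch-based reuse.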
Re-architecting an existing solution to provide more robust variation points is often viewed as a tempting alternative, the idea being that if we can entrench more modularity and capability in the technical architecture, we can better accommodate variability. Stable and flexible architectures are important, but focusing too heavily on technical architecture results in shallow implementations that do little to broaden ongoing development processes with the capabilities needed to support increased variation. Similarly, conditional compilation is often viewed as a related architectural technique for facilitating software variation. However, conditional compilation is not applicable across all programming paradigms, and wherever it is used, the variation remains “buried” within source code: difficult to interpret and to represent due to its lack of transparency, especially where conditional dependencies are nested. Integrating re-architected solutions is an even more disruptive activity, often requiring so many changes and compromises that it becomes unacceptable or impractical, technologically or economically, and quality remains, as always, a significant barrier. Market resistance may also be an overriding consideration: customers have come to expect a familiar configuration, and newly architected solutions may not be well received, threatening continued success or precluding market growth.
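As an illustrative C sketch (all flag and function names here are hypothetical), nested conditional compilation quickly obscures which variant combinations exist, which are valid, and what each one actually does, exactly the “buried” variation described above:

```c
/* Hypothetical product-line flags; in a real build each variant would
   be selected via compiler options, e.g. -DVARIANT_EU -DSPORT_PACKAGE.
   Which combinations are valid is invisible without reading every branch. */
int max_speed_kmh(void) {
#ifdef VARIANT_EU
  #ifdef SPORT_PACKAGE
    return 250;          /* EU variant with sport package */
  #else
    return 180;          /* EU base variant */
  #endif
#else
  #ifdef SPORT_PACKAGE
    return 220;          /* non-EU variant with sport package */
  #else
    return 160;          /* non-EU base variant (default build) */
  #endif
#endif
}
```

With two flags there are already four build outcomes; each additional flag doubles the space, and no single compiled binary, test run, or code review ever sees more than one of them.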
Watch for Part 3 of this series coming soon: “Application Lifecycle Management”.
VALUABLE FREE WEBINAR
Featuring OVUM Principal Analyst Michael Azoff
Live Event: September 24, 2015
Recorded stream approx. 1 week later