Auditing Software Development: Five Rules of Thumb

Abstract
Auditors have little time to assess a situation and separate the critical compliance risks and issues from mere annoyances. The ability to identify patterns of behavior and their signs is a key attribute of a successful auditor. We are deluged with information and required, in short order, to make sense of it. Strategies (shortcuts) and tools (rules of thumb) are needed to quickly and effectively “size up” a situation and present a compliance judgment (observations of noncompliance).

In this article we define certain rules of thumb or strategies to help identify the “patterns of behavior” among software development firms. These strategies should provide auditors with some effective tools to focus on the issues that may impact product quality and software maintainability while filtering out the background radiation.

Introduction
After a few hundred audits of software development companies and other software providers, patterns begin to emerge and habits come into view. Although it is fundamentally true that, as Heraclitus claimed, when auditing a company “you never step in the same river twice,” there are sufficient similarities among companies, large and small, to permit an outline of what we could call “patterns of behavior,” “cultural archetypes,” perhaps even “epistemes” or “paradigms,” depending on one’s inclinations. Family resemblances can be discerned across companies of all types in subtle yet visible micro-practices: how code is reviewed and documented, how tests and documents are traced, how bugs are classified, tracked, and resolved. All of these constitute practices that reflect and sustain an underlying socio-cultural frame.

If we were to classify or typify software development within the Food and Drug Administration (FDA) regulated arena, we could distinguish between those companies whose primary business is to supply a software product to bio-pharma companies and those who provide services and develop code to meet those needs (e.g., clinical databases). Having done that, we might then distinguish those companies that are development centric, putting an emphasis on coding, from those that are project management or process centric. We could then move along the product/service continuum, transected by the code/process axis, and with a bit of multi-dimensional scaling, map the world of software development. With such a map, the auditor could enter a setting and immediately assess the situation, focusing directly on the known shortcomings of a given configuration (read: cultural archetype).

At one extreme, say where “the project is the product” (McLuhan), a concession to the Software Development LifeCycle (SDLC) may translate into a poorly articulated whole, where the phase deliverables (e.g., trace matrix) take on a dimension of their own, detached from the code objects the SDLC attempts to manage. Such a map would indeed be a powerful tool in the quixotic compliance struggle. Alas, I must admit up front, in order not to mislead the reader, that I currently possess no such map (nor clear insight into the socio-cultural dynamics of a software development firm, a sociological thesis unto itself). I do, however, possess what might be taken as the basis for such a map: the legends, the keys, and most importantly, a sense of “which way is up.”

It is this “sense” that I would like to share here in the form of various “rules of thumb”: a handful of anecdotes (legends) that can be taken as the keys to, dare I say, predictable deficiencies (see Note 1) in the development of software. These rules of thumb represent the surface structure, to borrow a structuralist idiom, that betrays, despite itself, an underlying deep structure or dispositif. I have not labeled the underlying structure (that would be the thesis referenced above), yet I have provided strategies for identifying the symptoms and made recommendations on what to observe. What I present here is, in effect, a symptomatology of software development practices within the defined context of the regulated environment.

The Software Development Process
Software development, in a nutshell, begins with an idea, a business process. The software must do something (excluding, for the sake of argument and simplification, the class of software that does nothing, or more correctly allows one to do something, say manage workflows, relations, and schemas, but has no inherent business process). Marketing analysis and requirements gathering initiate a software development project. They establish what the product will be, but not how it will be. Within this initiation phase (see Note 2) much groundwork is laid. An analogy, such as the foundation of a house, might provide an acceptable image here. Without a well-articulated process model you cannot make the leap from “that” it is to “what” it is (to borrow a Wittgensteinian distinction). A product in search of a process is an anomaly; unlike the new chemical entity in search of an indication in our industry, it is perhaps the exception that proves the rule. Our first rule of thumb, then, we will call The Cart Before the Horse. (More follows below on how to detect such a phenomenon and its consequences.)

Note 1: Since we are dealing with human beings (behaviors, psyches) and institutions (systems, administrations), we cannot talk in absolutes, but rather in terms of tendencies.

Note 2: Different organizations have defined these phases, and the terminology is less important than the associated activities. Consult ISPE GAMP, ISO, and/or IEEE for various models and terms. See Reference Section following this article.

Once the process has been mapped and the business has defined its requirements (or the marketing specialists their opportunities), the application (hardware, development platform, tools) begins to take form. During the elaboration and design phase, critical decisions regarding architecture, scalability and performance, speed to market, and resources converge to define how the requirements will be implemented in code (see Note 3). Functional specifications and detailed design specifications are outlined to define how the application will function, perform, interact, and integrate. These documents become the basis and the baseline for coding, assembling, and configuring the various components that will constitute the “system as a whole,” to borrow a turn of phrase from an FDA guidance document. Data flow diagrams, system architecture diagrams, entity relationship diagrams, and module dependency matrices illustrate how the application components interact.
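One way to read “interact” concretely: the dependency matrix should let an auditor ask, on demand, what else is affected when a given module changes. A minimal sketch in Python follows (the module names are hypothetical, and a real inventory is far larger), assuming the dependencies have been recorded as the design documents require:

    # A minimal impact-analysis sketch (module names are hypothetical):
    # given the inventoried dependencies, which modules are potentially
    # affected when one of them changes?

    DEPENDS_ON = {
        "ui": {"api"},
        "api": {"business_rules", "audit_trail"},
        "business_rules": {"database"},
        "audit_trail": {"database"},
        "database": set(),
    }

    def impacted_by(changed: str) -> set[str]:
        """Every module that directly or transitively depends on `changed`."""
        impacted, frontier = set(), {changed}
        while frontier:
            current = frontier.pop()
            for module, deps in DEPENDS_ON.items():
                if current in deps and module not in impacted:
                    impacted.add(module)
                    frontier.add(module)
        return impacted

    # ui, api, business_rules, audit_trail (in some order)
    print(impacted_by("database"))

A firm that cannot produce such an impact set on demand has design documentation that exists on paper only.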

Applications today are an assemblage, collage, or pastiche of disparate elements, coded in a variety of languages, no longer monolithic entities developed on a single platform. Component-based architecture and object-oriented programming have introduced and amplified the need to clearly document module interactions, inventory configurable objects, and catalogue stored procedures. Unfortunately, it is all too common to find applications developed for use in the regulated industry that have little to no documented design basis, much less clear and accurate traces across components. More often than I care to recall, I have been in the situation of trying to understand a system, simply to assess whether the testing was adequate for its complexity, by having the development manager whiteboard the interaction between components for the first time before my very eyes. My second rule of thumb I will call Traces in a Cloud Chamber, to mark the difficulty of identifying an object through its effects on the surrounding environment (e.g., through black-box testing).
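What would “clear and accurate traces” look like in practice? A minimal sketch, with all identifiers hypothetical: each requirement should trace forward to at least one design element, and each design element to at least one test, so that orphans surface mechanically rather than for the first time on a whiteboard.

    # A minimal traceability check (all identifiers hypothetical): flag
    # requirements with no design basis and design elements with no test.

    REQ_TO_DESIGN = {
        "REQ-001": ["DS-010", "DS-011"],
        "REQ-002": ["DS-012"],
        "REQ-003": [],                     # orphan: no design basis
    }
    DESIGN_TO_TEST = {
        "DS-010": ["TC-100"],
        "DS-011": [],                      # design element never tested
        "DS-012": ["TC-101", "TC-102"],
    }

    for req, designs in REQ_TO_DESIGN.items():
        if not designs:
            print(f"{req}: no design element traces to this requirement")
        for ds in designs:
            if not DESIGN_TO_TEST.get(ds):
                print(f"{req}: design element {ds} has no test coverage")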

Once coded, the product is tested. Simple enough; and yet what can we really conclude from testing? It comes in a variety of flavors, from unit and module testing to functional, black-box testing, passing through regression, performance, integration, and system testing. Testing can be specification based, fault based, use-case driven, or driven by the nature of the application (GUI). But testing is fraught with limitations, indeterminacies, and incompleteness: the oracle problem (pass/fail is ultimately a judgment) and Dijkstra’s aphorism (the absence of bugs cannot be demonstrated), to name a few. The evaluation of testing has to be one of the most challenging activities to face an auditor - after he or she has determined what the system is and how it has been coded. Luckily, it is much easier to determine that testing is inadequate (i.e., incomplete, incommensurate with application complexity and criticality, unplanned and ad hoc) than to conclude that it is necessary and sufficient. Dijkstra’s aphorism inverted: it is easier to point to the absence of quality than to itemize its presence. Our third rule of thumb, to continue with the construction analogy, we will call The House of Cards.
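To make the oracle problem concrete, consider a deliberately small sketch (the conversion function, expected value, and tolerance are all hypothetical). Pass or fail hinges on a tolerance someone had to choose and defend:

    # The oracle problem in miniature (function and tolerance are
    # hypothetical). The assertion passes or fails only relative to a
    # tolerance someone had to choose; the test cannot justify that
    # choice, only the specification can.

    import math

    def kg_to_lb(kg: float) -> float:
        return kg * 2.20462  # truncated conversion factor: a design decision

    def test_kg_to_lb():
        result = kg_to_lb(70.0)
        expected = 154.324
        assert math.isclose(result, expected, abs_tol=0.01), (
            f"got {result}, expected {expected} within 0.01"
        )

    test_kg_to_lb()  # passes, given the chosen tolerance; tighten
                     # abs_tol to 0.0001 and the same code "fails"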

Finally, the application is released as alpha, beta, or release candidate 1.0, build x. The maintenance phase begins. A whole apparatus of people and processes is set in motion: call centers and help desks; change, configuration, and release management; Corrective And Preventive Action (CAPA) systems are populated and metrics are gathered. The maintenance phase is, in fact, the culmination of all the activities that preceded it and their ultimate arbiter. Good code, bad code, adequate or inadequate testing are all judged on this day, and the shortcomings cannot be hidden in anodyne test summaries or sanitized release notes.

Issues are bound to surface as a product gets used, or abused, in the conduct of a business process. New and previously unanticipated combinations (keystrokes, data entry) challenge the application beyond its test regimen. A product, and a company, will be judged not on adherence to standards, but on the ability to manage issues. This process is complex, but should follow a simple motto: “anomalies are symptomatic of process failures, not simply product failures” (paraphrased from FDA Guidance on Software Validation).
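The motto can be made tangible in the structure of the anomaly record itself. A minimal sketch, with all fields and identifiers hypothetical, forces the question of process cause alongside product cause:

    # A minimal anomaly record (fields hypothetical), built on the premise
    # that every product failure should be traced to the process failure
    # that let it escape.

    from dataclasses import dataclass

    @dataclass
    class Anomaly:
        anomaly_id: str
        description: str
        severity: str               # e.g., "critical", "major", "minor"
        product_cause: str          # the defect itself
        process_cause: str          # the gap that let it through
        capa_id: str | None = None  # corrective/preventive action, once opened

    bug = Anomaly(
        anomaly_id="AN-042",
        description="Audit trail drops entries under concurrent writes",
        severity="critical",
        product_cause="Missing row lock in the audit trail module",
        process_cause="No concurrency cases in the integration test plan",
        capa_id="CAPA-007",
    )
    print(f"{bug.anomaly_id}: process cause -> {bug.process_cause}")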

Poorly documented, and poorly understood, code will only undermine the maintainability of the product. Each new bug fix will engender additional issues, and since the arithmetic is geometric (each fix that spawns, on average, more than one new defect multiplies the backlog), one is soon faced with an unstable application, which we know is the antithesis of the validated state to which we all aspire. The fourth rule of thumb I would have liked to call Fraser’s Figure, after the optical illusion of a downward spiral, but that would have been too dramatic; let us call it the Just Fix It rule, after the Nike motto. This one is the easiest to detect, but the hardest to redress.
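The arithmetic behind the spiral is simple enough to sketch (the regression rate r is hypothetical, and real defect data is far noisier): anything above one new defect per fix, on average, is fatal.

    # Back-of-the-envelope arithmetic for the spiral (the regression
    # rate r is hypothetical; real defect data is noisier): if each fix
    # introduces, on average, r new defects, the open count after n
    # rounds of fixing is initial * r**n, stable only when r < 1.

    def open_defects(initial: int, r: float, rounds: int) -> float:
        defects = float(initial)
        for _ in range(rounds):
            defects *= r  # each round of fixes spawns r new defects per fix
        return defects

    for r in (0.5, 1.0, 1.2):
        print(f"r={r}: 100 defects -> {open_defects(100, r, 10):.0f} after 10 rounds")
    # r=0.5 -> 0 (converging), r=1.0 -> 100 (treading water), r=1.2 -> 619 (spiral)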

Five Rules of Thumb
The SDLC activities described above are brought together under the direction of the company’s policies and procedures. And in fact, this is what an auditor first encounters. The first act of auditing is to determine how a company has defined its universe: its pantheon of gods, its daily rituals, and its names for things. Since there is no single prescribed or approved way of doing software development, it is important for the auditor to read the many policies and procedures that are designed to guide and direct practices. Here one can immediately encounter a culture clash between the bio-pharma industry and the software development industry. To cite (out of context) a white paper from Ivar Jacobson International (on the new Essential Unified Process, successor to the RUP methodology): “Developers are tired of process (see Note 4).” Indeed there can be a significant culture shock for auditors who assume that procedures are intended to be determinant and prescriptive (as they need to be when manufacturing drug products), only to find that they are merely illustrative and suggestive (guidelines that reflect a company’s ethos). As such, my final rule of thumb, called “When It’s Too Good To Be True,” reflects this differend, aporia, schism, delta, gap, or Rubicon, depending on the facts at hand, when practices are out of alignment with the “gods” (industry or regulatory standards). Let us explore this rule first.


By: Jacques Mourrain