Validation is important, but it's largely stupid - like much of work, I suppose. Perhaps stupid is too strong a word. Dumb is probably a better choice, the opposite of smart. Validation is dumb, and there's no escaping it.
What to do?
Well, we can talk about it first (even if this is not doing anything about it). Validation is the antimatter of development - put the two together and you destroy engineering life itself (overdramatising somewhat). What I mean is this: development is subtle, replete with success and failure, nuance, puzzlement and, hopefully, finally, understanding. Validation is a box-ticking exercise that brings nothing of interest to anybody, except the approval to sell product, which is a key aspect of business, I have been led to understand... so validation is important.
Testing is an integral part of development, one that I certainly don't want to discard. When we develop components, we have to pass testing and other approval gateways; based on the resulting performance, either we confirm that parts need to be redesigned because they don't meet a particular specification, confirm that they do meet it, or write our own spec if one doesn't exist already. At the end of a development project, validation confirms that the first parts off the real production process are as good as all those expensive prototype parts that were tested previously: validation can be the joyous culmination of all that development effort, champagne all round. So really, my gripe isn't with concept or process validation testing - these are valid steps in getting a product to market.
Get over it
My gripe is with approval testing: testing set as a hurdle to delivery to a given customer.
Hurdles are good in many ways. If everything we did were trivial, anybody could do it. Fortunately, what we do is not trivial, and there is a limited number of companies involved in our market, each with their own strengths and... opportunities. Validation testing from the customer side is a way of filtering out duff suppliers.
But the work that is sucking out the oxygen of development for me right now is not linked to any new development or any new product. It has the Kafkaesque whiff of bureaucracy generating a lot of effort for no tangible benefit at all.
Validation is time-consuming, expensive and is (supposed to be) a key component in my other important-stupid bugbear: PPAPs (Production Part Approval Process submissions).
So, as I asked before, what to do with it?
The phrase "creative destruction" comes to mind - destroy validation and PPAPs and all of that dumb junk to free ourselves to think creatively. Let's explore that: can we ditch it all?
Ideally, yes: with all the data swilling around in our company, from development results to production and quality control systems, we should be able to show that production parameters haven't changed significantly since the product was first approved, and that the standard production checks will have filtered out any weaknesses.
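In principle, "the process hasn't changed since approval" is a claim you could test directly against the data. A minimal sketch of that idea - all names, measurements and thresholds here are hypothetical illustrations, not values from any standard or from my company's systems:

```python
import statistics

def process_unchanged(baseline, current, lsl, usl,
                      max_shift_sigma=1.5, min_cpk=1.33):
    """Crude check that a production parameter is still where it was
    at approval: the mean hasn't drifted, and the process is still
    capable against the spec limits. Thresholds are illustrative only."""
    mu_b, sd_b = statistics.mean(baseline), statistics.stdev(baseline)
    mu_c, sd_c = statistics.mean(current), statistics.stdev(current)
    # Mean drift, expressed in baseline standard deviations
    shift = abs(mu_c - mu_b) / sd_b
    # Process capability index (Cpk) of the current sample
    cpk = min(usl - mu_c, mu_c - lsl) / (3 * sd_c)
    return shift <= max_shift_sigma and cpk >= min_cpk

# Hypothetical bore-diameter readings (mm): at approval vs. today
baseline = [10.00, 10.01, 9.99, 10.02, 9.98, 10.00, 10.01, 9.99]
current  = [10.01, 10.00, 10.02, 9.99, 10.00, 10.01, 9.98, 10.00]
print(process_unchanged(baseline, current, lsl=9.90, usl=10.10))  # True
```

A real version would pull thousands of points from the SPC and quality-control databases rather than eight readings, but the shape of the argument to the customer is the same: here is the approved state, here is today, nothing has moved.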
This can work - but often the customer wants to use an existing part on a new platform, or a new part on a new platform, and so wants a full set of confirmation test results, in their unique format. Nor does it avoid the question of whim: a customer can demand results against a particular set of specifications whenever they want. So, you test and you validate.
But why me?
If there's one industry where whim and whimsy are at a minimum, it's motorsport. What do they do there? Well, even if they are producing a "series of one-offs", they test and they inspect and they destroy things, too. Yes, they validate. How they go about it is hinted at in this puff-piece by and from the Lotus F1 team.
The article doesn't tell us much, but the important point is that they have an inspection team. Just as software smithies have test teams, engineers should have inspection and validation teams. This is what I am missing where I am now. We have fallen in slow motion into the trap of mixing development and validation in one pot. Alas, because the output of validation is a ticked box (or a boxed tick), and those ticks in boxes are prerequisites to supply and therefore to making money, validation more often than not takes priority, hindering development.
Unite and advance, Divide and conquer
The answer, then, is twofold - and also expensive. First of all, the vexing question of validation testing needs to be tackled at the source - with the customer. To come to a sensible agreement on what's relevant and useful to test, the onus is on the supplier to show its understanding of the product: how it performs, how stable the processes are, what could potentially go wrong. This is expensive in terms of effort - hunting and gathering the data, condensing it into digestible information and then taking it to the customer for (in all probability) a series of meetings and discussions to reach a sensible arrangement.
Then validation testing must be separated from development. The two should be distinct teams with limited overlap, ideally with separate facilities. This, too, is expensive - but not focussing sufficiently on development is more expensive still in the long run.
But perhaps I'm biased.