Are you engaged in the requirements of Automotive SPICE®, especially for "Software Integration" and its tests? On this compact page you will find relevant info on the key process SWE.5 from the VDA scope, including a video and a free whitepaper.
Would you like to understand more about the Automotive SPICE process "Software Integration (SWE.5)" from the VDA scope? In our free whitepaper you will find all the information summarized, plus a reading sample from the book "Automotive SPICE® Essentials", the book for beginners in the topic of process improvement.
Automotive SPICE is a brand of the VDA QMC.
The Software Integration and Integration Test process in Automotive SPICE® (also known as SWE.5) helps your organization ensure that the individual elements of the software architecture are integrated and then tested to prove that they work together as planned and interact as described in the software architecture.
In my classes and assessments, I get a lot of questions like "Hey, we already do a huge amount of requirements testing, is the extra effort for integration testing really worth it?"
So, what's your opinion? Ever thought about it?
To answer this question, we must first clarify what integration tests mean.
The purpose of integration tests is to check compliance with the software architecture. This includes checking the interfaces, the dynamic behavior, and the resource consumption according to SWE.2, the Software Architectural Design process.
Well, let's modify our initial question to: "Can requirements tests prove conformity with the software architecture?" And the answer is: of course not, simply because these tests test against requirements and not against the architecture. So far, so good. But some of you might say "OK, got it. But what is the added value of testing against the architecture anyway? If the functionality works well, isn't that good enough?"
Let's take a closer look: Can a nonconformity with the architecture be detected in requirements testing? Yes, it can, but there is no guarantee. You could multiply the number of requirement tests by one thousand, ten thousand, one hundred thousand, but there is still no guarantee of that. The cost would explode, but the software would still have errors that could have been avoided simply by testing against the architecture! Now here's the bottom line: Performing integration testing gives you more robust software at a lower cost compared to exhaustive requirement testing. Congratulations! You now belong to the exclusive circle of people who understand the value of this process.
The following are the most important aspects of Software Integration and Integration Test in Automotive SPICE®:
As you may know from some of our videos, a strategy is an easy-to-understand instructional description. This is especially important for larger, distributed projects, so that everyone involved knows how we do things.
This strategy first describes the workflow of the software team, starting with the individual developer, who could perform some simple integration tests before delivering their software into the team workstream. The team could then perform some integration tests as part of its nightly test runs and, when finished, some final integration tests before delivering their work into the project workstream. At project level, there could be additional integration testing. In addition, Automotive SPICE requires a "regression test strategy". Regression testing simply means that if you change something in the software, you make sure that everything that hasn't changed still works well.
A practical example: If you use Continuous Integration, you typically run all integration tests again during your nightly tests.
So, the strategy could simply be the workflow just described.
Traceability means that for each relevant architectural element, such as an interface, you can locate the corresponding test. If you can also do this in reverse, the traceability is bidirectional.
Now what does consistency mean? In our previous example, consistency would require that the interface is linked to the correct test (and not to the test of something else), and that this test is suitable to test the interface completely. If this is not the case, additional tests must be linked. Consistency also requires that the tests actually test the interface correctly. In other words, no faulty tests.
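As an illustration, bidirectional traceability and a first, simple consistency check could be sketched like this. All element and test names are invented for the example; real projects would hold these links in a requirements or test management tool:

```python
# Sketch of bidirectional traceability between architectural elements
# and integration tests (element and test names are made up).

arch_to_tests = {
    "CAN interface": ["test_can_timing", "test_can_payload"],
    "Diagnostic interface": ["test_diag_request"],
    "Shared memory buffer": [],  # no test linked yet -> traceability gap
}

# Invert the mapping: from each test back to its architectural element.
# This is the "vice versa" direction that makes traceability bidirectional.
test_to_arch = {
    test: element
    for element, tests in arch_to_tests.items()
    for test in tests
}

# A first consistency check: every architectural element must be linked
# to at least one test. (This checks the completeness of the linkage only;
# whether each linked test is itself correct still needs a review.)
untested = [elem for elem, tests in arch_to_tests.items() if not tests]
print(untested)  # ['Shared memory buffer']
```

Note that this only finds missing links; the harder part of consistency, verifying that a linked test really exercises its interface completely and correctly, remains a review activity.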
This is usually referred to as a test summary report. And this summary report should be sent to the people who need this information, such as the development team, project manager, quality engineers, and so on. Let's take a closer look at how this report should look. As the name suggests, it should summarize the results and hide unnecessary details. What is the main message that the summary report should convey? What do you think? Well, it should, of course, show compliance with the software architecture.
I'll give you a counterexample that I often see in assessments. The report contains pie charts showing the 539 tests that should have been performed, but of which 75 could not be performed and 34 failed. That's it. There is no information as to why the 75 tests could not be carried out and what the risks are. There is also no information about how big the problem is with the 34 failed tests. Nor does the report show compliance with the software architecture. In fact, the software architecture is not mentioned at all. They would have to relate the pie chart to the 247 architectural elements, not just to the 539 tests. I think you have understood the point – this of course leads to weaknesses in the assessment.
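As a hedged sketch, a summary metric that relates the results to the architecture instead of to raw test counts could look like this. The 247 architectural elements come from the counterexample above; the number of elements proven by passed tests is invented for illustration:

```python
# Sketch: report coverage of the software architecture, not raw test counts.
# 247 architectural elements is taken from the counterexample in the text;
# the number of proven elements below is a hypothetical figure.

total_elements = 247     # architectural elements defined in SWE.2
elements_proven = 190    # hypothetical: elements whose linked tests all passed
elements_at_risk = total_elements - elements_proven

coverage = elements_proven / total_elements
print(f"Architecture coverage: {coverage:.0%}, elements at risk: {elements_at_risk}")
```

A summary report built around a figure like this immediately answers the question the pie charts could not: how much of the software architecture is actually proven, and how many elements still carry a risk.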