
Machine learning makes cars smarter

Innovative functions are realized via software, which increasingly incorporates software artifacts created using machine learning. In our video, you will learn how to reliably and safely integrate machine learning into your product development.

At a glance

You can download the comprehensive information on Machine Learning Engineering in our free whitepaper.

The real strength of machine learning is the fast identification of patterns. In automotive electronics, this ability is a huge asset. Applications, functional software, or even sensors and actuators benefit from the fact that machine learning models recognize patterns faster than humans. Machine learning is, to put it bluntly, a form of artificial intelligence designed to make devices smarter. We need these capabilities for advanced driver-assistance systems and autonomous driving, but they are also useful in other systems, e.g. for predicting wear and tear.

The standard on machine learning shows how you can use ML-based applications reliably and safely in the development of automotive electronics. It complements and deepens the application of Automotive SPICE. It thus provides an approach that supports quality-driven development and, with that, a reliable basis for functional safety, SOTIF and cybersecurity.

We're here for you

Need support with a key project? We’re your first port of call when it comes to management consulting and improvement programmes in electronics development.

Steffen Herrmann and the sales team

Machine Learning as a Challenge and an Engineering Discipline

ML for newbies: How do artificial intelligence (AI), machine learning (ML), neural networks and deep learning (DL) differ from each other?

If you look at the literature, you will find many different explanations and categorizations of artificial intelligence. In this context, we use the following categorization, which relates to existing applications.

Artificial intelligence is the umbrella term. The currently hotly discussed chatbots belong to this category. At their center is a so-called foundation model, a pre-trained language model: a large neural network that has been trained on massive amounts of text data, typically using unsupervised learning techniques. Based on this data corpus, a chatbot can generate answers to questions posed to it by a human. For this purpose, the AI relies on statistical probabilities.

In the middle circle, we have already arrived at machine learning. ML is a subset of artificial intelligence. Here, too, algorithms are created on the basis of the data provided. These are used to make a machine, i.e. application software or components, more powerful. In the case of automotive electronics, for example, machine learning provides the ability to recognize and interpret traffic signs.

The inner ring - deep learning - is a subset of machine learning. Here, the algorithm is more complex and so-called neural networks are used which simulate the structure of neurons in the brain.

At the beginning, there is a problem we would like to solve with machine learning; it should be a problem in which patterns can be exploited. For this problem, we collect representative data. The developers, or in more complex cases the organization, have to collect a large amount of sample data, such as images of traffic signs. Now the algorithm comes into play: it transfers the given, unstructured information from the dataset into a standardized format.
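
To make the idea of a standardized format concrete, here is a minimal Python sketch. It assumes the sample data is available as NumPy arrays (raw RGB images of varying size); the function name and the 32x32 target size are illustrative choices, not part of the standard.

```python
import numpy as np

def to_standard_format(image: np.ndarray, size: int = 32) -> np.ndarray:
    """Bring one raw RGB image (height x width x 3, uint8) into a
    standardized, fixed-length representation for training."""
    h, w, _ = image.shape
    # Crude nearest-neighbour resize so every sample has the same shape.
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = image[rows][:, cols]
    # Scale pixel values to [0, 1] and flatten into a single vector.
    return resized.astype(np.float32).flatten() / 255.0

# Example: a synthetic 48x64 "image" becomes a 32*32*3 = 3072-element vector.
sample = np.random.default_rng(0).integers(0, 256, (48, 64, 3), dtype=np.uint8)
print(to_standard_format(sample).shape)   # (3072,)
```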

What is important to understand from a machine learning engineering perspective is that no human programmed this software artifact itself. The algorithm optimizes and refines itself through the training process. Through this process, it tries to approximate the desired behavior and to learn meaningful patterns from examples. However, there is not just one algorithm, but countless ways to reach the goal. The machine learning engineer selects the most suitable candidate for further use.
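
As a hedged illustration of this candidate selection, the following Python sketch compares a few scikit-learn classifiers on the same synthetic, stand-in data and keeps the best-scoring one; the candidates, feature size and class count are assumptions made for the example.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((300, 24))                # stand-in feature vectors
y = rng.integers(0, 4, 300)              # stand-in labels (4 sign classes)

candidates = {
    "decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "nearest neighbours": KNeighborsClassifier(n_neighbors=5),
    "logistic regression": LogisticRegression(max_iter=1000),
}

# Score every candidate on the same data and keep the most suitable one.
scores = {name: cross_val_score(model, X, y, cv=3).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)
```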

In essence, the algorithm distills instructions independently from the data set. This is why it is called machine learning: The algorithm teaches itself how to structure the given data into information. For the training to produce the desired result, the algorithm is told how to interpret the environment from an engineering point of view. In our example, traffic sign recognition, it may detect patterns by analyzing the color distribution and assigning the most probable traffic sign names.
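
A minimal sketch of what "analyzing the color distribution" could look like in practice, assuming images as NumPy arrays; the per-channel histograms and the bin count are illustrative choices.

```python
import numpy as np

def color_distribution(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Describe an RGB image by its colour distribution: one small
    histogram per channel, concatenated into a single feature vector."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 255))[0]
             for c in range(3)]
    features = np.concatenate(hists).astype(np.float32)
    return features / features.sum()      # independent of image size

sample = np.random.default_rng(1).integers(0, 256, (48, 64, 3), dtype=np.uint8)
print(color_distribution(sample).shape)   # (24,) -> 8 bins x 3 channels
```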

This training is an iterative process. Machine learning engineers regularly check the algorithm: Can it already structure the training data to assign it to the names of the traffic signs? The algorithm is then refined with additional training material. This is done until the end result is a robust model. The engineers expose the algorithm to the data and check the outputs. This is an R&D approach with trial-and-error cycles.
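
The trial-and-error cycle could be sketched roughly as follows in Python; the target accuracy, the step size and the synthetic stand-in data are assumptions for illustration only.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X_pool, y_pool = rng.random((1000, 24)), rng.integers(0, 4, 1000)   # training material
X_check, y_check = rng.random((200, 24)), rng.integers(0, 4, 200)   # regular check set
TARGET = 0.90                                    # assumed quality criterion

model, n = DecisionTreeClassifier(random_state=0), 100
while n <= len(X_pool):
    model.fit(X_pool[:n], y_pool[:n])            # train on the material so far
    score = accuracy_score(y_check, model.predict(X_check))
    print(f"{n} samples -> accuracy {score:.2f}")
    if score >= TARGET:                          # robust enough, stop refining
        break
    n += 200                                     # otherwise add more material
```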

The model is the asset that contains all the information. This model can then be implemented in the application software. For example, the model contains a decision tree that tells us how the image colors are linked to different traffic signs.
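
As a hedged example of such a model, the scikit-learn snippet below fits a small decision tree on synthetic colour-distribution features and prints its decision rules; the feature and class labels are made up for the sketch.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((200, 24))                  # colour-distribution features
y = rng.integers(0, 4, 200)                # traffic-sign class labels

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
# The fitted model is the asset: its decision rules can be inspected,
# and it can be embedded in the application software.
print(export_text(model, feature_names=[f"bin_{i}" for i in range(24)]))
```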

The model is verified against an independent data set to ensure that it has not merely been fitted to the training data but generalizes to the possible situations in the actual environment.
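
A minimal sketch of this verification step, using a held-out split of synthetic stand-in data; comparing the training and verification scores is one simple way to spot a model that only memorized its training data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = rng.random((500, 24)), rng.integers(0, 4, 500)    # stand-in data

# Keep an independent verification set that the training never sees.
X_train, X_verify, y_train, y_verify = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
train_acc = accuracy_score(y_train, model.predict(X_train))
verify_acc = accuracy_score(y_verify, model.predict(X_verify))
# A large gap between the two scores hints that the model only memorized
# the training data instead of generalizing to the real environment.
print(f"train {train_acc:.2f} vs. verification {verify_acc:.2f}")
```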

Because the software was not written by a human being, it forms a black box: The algorithm delivers a result, the model. However, the structure of this model cannot be understood with reasonable effort. So, we are faced with the challenge of having to write reliable software for a safety-critical system - and we don't know how it works. From a quality assurance perspective, this is unacceptable.

With the introduction of machine learning models in car software and in the backend, we are facing a new challenge in terms of quality, but also in terms of functional safety and cybersecurity. The reason lies in the development approach of machine learning.

The conventional processes provided by standards like Automotive SPICE do not work in conjunction with machine learning. In conventional development, you describe the requirements: what do you want the software to do? Then you design the architecture, a solution for how you would organize your software to provide those functions. Finally, for each unit, you define the details that need to be implemented in the code. Along these levels of detail, you run a series of tests. In machine learning, there are no such formalized tests. Once the developer is satisfied that the result meets the predefined criteria, the algorithm is tested and released. Verification and testing of machine learning is therefore mostly based on data labeling, training, anti-biasing, and ensuring that KPIs are met.
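
A rough sketch of such a KPI-based release gate in Python; the KPI names and thresholds are hypothetical and would in practice come from the machine learning requirements.

```python
# Hypothetical KPIs and thresholds -- the concrete figures come from the
# machine learning requirements, not from this sketch.
KPI_THRESHOLDS = {"accuracy": 0.95, "false_negative_rate": 0.01}

def meets_release_criteria(measured: dict) -> bool:
    """Release gate: every predefined KPI must be met before the model
    leaves the trial-and-error loop."""
    ok_accuracy = measured["accuracy"] >= KPI_THRESHOLDS["accuracy"]
    ok_fnr = measured["false_negative_rate"] <= KPI_THRESHOLDS["false_negative_rate"]
    return ok_accuracy and ok_fnr

print(meets_release_criteria({"accuracy": 0.97, "false_negative_rate": 0.005}))  # True
```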

From a quality assurance perspective, we need a reliable and systematic approach. We address this question by imagining the training, verification and testing process as the layers of an onion. This onion illustrates the prescribed sequence as well as the dependencies.

At the heart of this onion, we see the algorithm. It has been configured for the desired functionality by the machine learning engineer. This customization can, for example, control the number of so-called nodes in the neural network. If an update of the algorithm is required, new training and verification are needed. Together, these two layers form the trained model. The testing of the model then acts on these two layers.
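
As an illustration of what such a configuration could look like, the sketch below sets up a small scikit-learn neural network; the layer sizes and other parameters are assumed values chosen for the example.

```python
from sklearn.neural_network import MLPClassifier

# The engineer's configuration of the algorithm -- here, the number of
# nodes per hidden layer -- sits at the core of the "onion".
core_algorithm = MLPClassifier(hidden_layer_sizes=(64, 32),   # assumed sizes
                               activation="relu",
                               max_iter=500,
                               random_state=0)
# Changing this configuration means the outer layers -- training and
# verification, then testing -- have to be repeated.
print(core_algorithm.get_params()["hidden_layer_sizes"])
```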

Since the model is based on training data, data quality is obviously a key issue. In the diagram, we see that there is a separate process addressing the data.

Processes for software artifacts created by machine learning are software processes. The approach therefore builds on the Automotive SPICE software engineering processes. The requirements for the software to be created are formulated in the requirements analysis process, SWE.1, as is the case for any software for automotive electronics. For this purpose, the relevant system requirements are transformed into a set of software requirements, some of which then concern software artifacts that are to be created using machine learning. The software architecture, SWE.2, determines which parts these are.

And this brings us to machine learning. Because the software artifacts are to be generated by an algorithm, we need special processes that replace the classical software detailed design.

At the top left of the imaginary vee is the Machine Learning Requirements Analysis process, MLE.1. The task of this process is to derive the specific machine learning requirements from the software requirements.

This process is followed by the Architecture for Machine Learning, MLE.2. This process addresses the ML architecture that supports the training and creation of the algorithm, but also other software that may be necessary, such as pre- and post-processing software.
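
A hedged sketch of such an architecture in Python: a pre-processing step ahead of the model and a hypothetical post-processing function behind it. The sign names and the use of a scikit-learn pipeline are illustrative choices, not prescribed by the standard.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = rng.random((200, 24)), rng.integers(0, 4, 200)    # stand-in data

# The ML architecture covers more than the model itself: pre-processing
# ahead of the model and, if needed, post-processing after it.
pipeline = Pipeline([
    ("pre_processing", StandardScaler()),         # e.g. normalize features
    ("ml_model", DecisionTreeClassifier(random_state=0)),
])
pipeline.fit(X, y)

def post_processing(class_id: int) -> str:
    """Hypothetical post-processing: map the raw class id to a sign name."""
    return {0: "stop", 1: "yield", 2: "speed_limit", 3: "no_entry"}[class_id]

print(post_processing(int(pipeline.predict(X[:1])[0])))
```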

And this brings us to the right side of the vee, to the actual training of the algorithm. It is crucial that the model not only performs, but above all that it meets the specified requirements (MLE.3). As mentioned above, the training is a systematic approach on a trial-and-error basis.

During testing, it must then be ensured that both the trained ML model and, finally, the deployed ML model meet the specifications of the machine learning requirements (MLE.4). We distinguish here between the trained model, which has been verified, and the deployed model, which can be used in the actual software of the system.
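
One way to sketch this distinction in code: the trained model is "deployed" here simply by serializing and reloading it, and both variants are then checked for identical behavior on the same test data. A real project might export to ONNX or generate embedded code instead; the stand-in data and the pickle round-trip are assumptions for the example.

```python
import pickle
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = rng.random((200, 24)), rng.integers(0, 4, 200)      # stand-in data
X_test = rng.random((50, 24))                              # test inputs

trained_model = DecisionTreeClassifier(random_state=0).fit(X, y)

# "Deployment" stands in here for serializing the model as it would be
# shipped in the target software.
deployed_model = pickle.loads(pickle.dumps(trained_model))

# MLE.4-style check: both variants must behave identically on the test data.
assert np.array_equal(trained_model.predict(X_test), deployed_model.predict(X_test))
print("trained and deployed model agree on the test set")
```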

Then, in software integration testing (SWE.5), all artifacts are integrated and tested together. This is where all artifacts come together, regardless of whether they were coded in a conventional way or trained using a machine learning approach.

However, this still leaves the question of data quality unanswered. This is the purpose of the new support process SUP.11, ML Data Management: defining the data relevant for machine learning in accordance with the ML data requirements and ensuring its integrity.
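
A minimal sketch of one possible integrity measure, assuming a hypothetical folder layout: fingerprinting every data file with a SHA-256 hash so later process steps can detect whether training or verification data was modified.

```python
import hashlib
import json
from pathlib import Path

def dataset_fingerprint(data_dir: str) -> dict:
    """Record a SHA-256 hash per file so later stages can detect whether
    the training or verification data was modified."""
    fingerprint = {}
    for path in sorted(Path(data_dir).rglob("*")):
        if path.is_file():
            fingerprint[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return fingerprint

# Example with a hypothetical data folder: store the manifest alongside
# the data and compare it every time the data set is used.
if Path("data/").is_dir():
    manifest = dataset_fingerprint("data/")
    Path("dataset_manifest.json").write_text(json.dumps(manifest, indent=2))
```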

We address this only in a supporting process. However, data management is a very complex task that is often handled by large groups within an organization. Hence, there is a separate working group that has developed its own process assessment model for data management.

What changes with Automotive SPICE v4.0?

Learn more about how version 4.0 of Automotive SPICE® will impact your R&D processes. Watch our short video, or join our training classes.

With version 4.0 of Automotive SPICE, machine learning becomes part of the standard. Together with the conventionally developed software, machine learning development can now be addressed from a quality point of view. This also provides a good basis for functional safety and cybersecurity.

We can support you with

  • Using Automotive SPICE® to achieve the required maturity levels within your key processes in development
  • Systematically improving existing workflows and methods
  • Evaluating the status of your process improvements through formal assessments and gap analysis
  • Fulfilling the requirements of Automotive SPICE® in harmony with SECURITY, FUNCTIONAL SAFETY and AGILE METHODS
  • Training your staff and assessors

Download Whitepaper