Automotive
Can ADAS systems mitigate driver error and reduce traffic deaths? The evidence suggests that, yes, they help prevent accidents. That said, an ADAS system can itself cause harm if it malfunctions. Imagine, for example, an adaptive cruise control system that underestimates the distance to the car ahead. Which raises the question: how can you trust the safety claims for an ADAS system? And how do you establish that the evidence for those claims is sufficient?
Enter ISO 26262. This standard, introduced in 2011, provides a comprehensive framework for validating the functional safety claims of ADAS systems, digital instrument clusters, and other electrical or electronic systems in production passenger vehicles.
ISO 26262 isn’t for the faint of heart. It’s a rigorous, 10-part standard that recommends tools, techniques, and methodologies for the entire development cycle, from specification to decommissioning. In fact, to develop a deep understanding of 26262 you must first become versed in another standard, IEC 61508, which forms the basis of 26262.
ISO 26262 starts from the premise that no system is 100% safe. Consequently, the system designer must perform a hazard and risk analysis to identify the safety requirements and residual risks of the system being developed. The outcome of that analysis determines the Automotive Safety Integrity Level (ASIL) of the system, as defined by 26262. ASILs range from A to D, where A represents the lowest degree of hazard and D, the highest. The higher the ASIL, the greater the rigor that must be applied to reduce the residual risk to an acceptable level.
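To make the classification a little more concrete, here is a minimal sketch, in Python, of how the severity (S), exposure (E), and controllability (C) classes from a hazard and risk analysis map to an ASIL. The mapping follows the risk graph published in part 3 of the standard; the function name, the class values, and the adaptive-cruise-control hazard in the example are my own illustrative assumptions, not content from the standard itself.

```python
# Illustrative sketch: mapping one hazardous event's S/E/C classification to
# an ASIL, following the risk graph in ISO 26262-3 (2011 edition).
# The table is reproduced here via the common "sum of class indices" shortcut:
# the indices add up to 10 for ASIL D, 9 for C, 8 for B, 7 for A, and anything
# lower is QM (quality management only, no ASIL assigned).
ASIL_BY_SUM = {7: "A", 8: "B", 9: "C", 10: "D"}


def determine_asil(severity: int, exposure: int, controllability: int) -> str:
    """Return the ASIL for one hazardous event.

    severity:        S1..S3 (S3 = life-threatening or fatal injuries)
    exposure:        E1..E4 (E4 = high probability of the driving situation)
    controllability: C1..C3 (C3 = difficult or impossible to control)
    """
    if not (1 <= severity <= 3 and 1 <= exposure <= 4 and 1 <= controllability <= 3):
        raise ValueError("classes must be in the ranges S1-S3, E1-E4, C1-C3")
    return ASIL_BY_SUM.get(severity + exposure + controllability, "QM")


if __name__ == "__main__":
    # Hypothetical hazard: adaptive cruise control underestimates the distance
    # to the vehicle ahead at highway speed (S3, E4, C2 is an assumed rating).
    print(determine_asil(severity=3, exposure=4, controllability=2))  # -> "C"
```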
Having determined the risks (and the ASIL), the system designer selects an appropriate architecture. The designer must also validate that architecture, using tools and techniques that 26262 either recommends or highly recommends. If the designer believes that a recommended tool or technique isn't appropriate to the project, he or she must provide a solid rationale for the decision and must justify why the technique actually used is as good as, or better than, the one recommended by 26262.
The designer must also prepare a safety case. True to its name, this document presents the case that the system is sufficiently safe for its intended application and environment. It comprises three main components: 1) a clear statement of what is claimed about the system, 2) the argument that the claim has been met, and 3) the evidence that supports the argument. The safety case should convince not only the 26262 auditor, but also the entire development team, the company’s executives, and, of course, the customer. And because no system is safe unless it is deployed and used correctly, the system designer must also produce a safety manual that sets out the constraints within which the product must be deployed.
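The claim/argument/evidence structure is easy to picture as a simple record. Below is a minimal sketch, again in Python, of how one element of a safety case might be captured; the type names, fields, and example content are illustrative assumptions, not terminology or structure mandated by ISO 26262.

```python
# Illustrative sketch of the three-part structure of a safety case element:
# a claim, the argument that the claim is met, and the supporting evidence.
# All names and example content here are assumptions for illustration only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Evidence:
    description: str  # e.g. a test report, code review record, or analysis
    reference: str    # where the artifact lives (document ID, path, etc.)


@dataclass
class SafetyCaseElement:
    claim: str                             # what is claimed about the system
    argument: str                          # why the evidence shows the claim is met
    evidence: List[Evidence] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A claim without evidence is just an assertion."""
        return bool(self.evidence)


if __name__ == "__main__":
    element = SafetyCaseElement(
        claim="The ACC software detects loss of the radar input within 50 ms",
        argument="Fault-injection tests exercise every loss-of-input path, and "
                 "all runs met the 50 ms detection deadline",
        evidence=[Evidence("Fault-injection test report", "TR-ACC-014")],
    )
    print(element.is_supported())  # -> True
```

In practice a safety case is a structured argument spanning many such claims, but the point of the sketch is the same one the standard makes: every claim needs an explicit argument and traceable evidence behind it.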
Achieving 26262 compliance is a major undertaking. That said, any conscientious team working on a safety-critical project would probably apply most of the recommended techniques. The standard was created to ensure that safety isn’t treated as an afterthought during final testing, but as a matter of due diligence in every stage of development.
If you’re a system designer or implementer, where do you start? I would suggest “A Developer’s View of ISO 26262”, an article recently authored by my colleague Chris Hobbs and published in EE Times Automotive Europe. The article provides an introduction to the standard, based on experience of certifying software to ISO 26262, and covers key topics such as ASILs, recommended verification tools and techniques, the safety case, and confidence from use.
I also have two whitepapers that may prove useful: Architectures for ISO 26262 systems with multiple ASIL requirements, written by my colleague Yi Zheng, and Protecting software components from interference in an ISO 26262 system, written by Chris Hobbs and Yi Zheng.