Reliability and Maintenance Assessment: From Statistics to Probability – Part 1 of 2
To assess reliability in the mechanical field, recording only statistical data on the behaviour of structural mechanical items is ineffective: it is a very expensive task and gives no direct knowledge of the causes behind the evolution of the items' behaviour. Instead, it is possible to conceive probabilistic models of the processes leading first to potential failure (degradation) and then to failure.
Reliability forecasting generally makes use of the hazard rate approach. The idea is to put several identical items into operation at the same instant t₀ and to record their behaviour over time. For example, suppose we put N = 1 million integrated circuits into operation and record the number of failures over time. The simplest case that could occur is recording 1 failure each hour, each time replacing the failed item with a new identical one. In this case the number of expected failures in 1 hour would be G₁ = 1, in 2 hours G₂ = 2, and so on, and the hourly probability of failure of a single item would be F = 1×10⁻⁶.
The problem can be generalized by introducing the hazard rate λ, i.e. the number of expected failures per item and per unit of time; the number of expected failures at any time t is then G(t) = N·λ·t. It is easy to demonstrate that, in this case of a simple Boolean (Safe OR Failed) observation, the probability of having no failure (the Reliability) at any time t is R(t) = e^(−λt), and the probability of having at least 1 failure is F(t) = 1 − R(t). Generally, observations are not so precise, and the evaluation of the hazard rate λ requires a statistically significant amount of data. In any case, the main advantage of such an approach is its very simple mathematics.
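As a minimal numerical illustration of this constant hazard rate model (a sketch in Python, using the values of the example above):

```python
import math

lam = 1e-6      # hazard rate: expected failures per item per hour
N = 1_000_000   # identical items put in operation at t0
t = 2.0         # observation time in hours

G = N * lam * t           # expected number of failures by time t
R = math.exp(-lam * t)    # reliability of a single item: no failure up to t
F = 1.0 - R               # probability of at least one failure of that item

print(f"G(t) = {G:.1f}")  # 2.0 expected failures in 2 hours
print(f"R(t) = {R:.8f}")
print(f"F(t) = {F:.2e}")  # ~2e-6
```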
This mathematical model can be applied to any reliability layout, from the simplest to the most complicated, including redundancy paths of every nature and failure conditions such as k-out-of-n patterns. However, there are at least two disadvantages:
- The approach does not provide or suggest any information about the causes or about the process evolution leading to the failure.
- It requires a lot of experimental data on the finished products in order to provide reliable forecasting: it is only possible to average the times to failure, not to obtain the behaviour over time of each item.
Figure 1. The probabilistic approach overcomes the problems that the failure rate approach presents in processes where the failure causes are few and known and random-oriented approaches fail [9].
Indeed, this approach may give satisfactory results only when at least the following conditions are met:
- The analysis concerns a large quantity of products.
- There are numerous causes of failure.
- There is a sufficient amount of test data to derive meaningful statistics.
- The population behind the statistical data is strictly homogeneous.
As for the first and second conditions, it should be noted that the mathematical hypothesis at the basis of the Poisson formula is that the phenomena leading to failure are essentially of a random nature: randomness prevails heavily over the cause-effect process. The most famous reliability handbook, MIL-HDBK-217 [1], afterwards corrects the assumed initial hypothesis of complete randomness by introducing some correction factors concerning the activation energy (as in the Arrhenius law), environmental influence, quality level, etc. This kind of approach works very well in reliability prediction for electronic boards, which fully meet conditions 1 and 2 above.
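By way of illustration only, the sketch below shows the general shape of such a corrected prediction: a base failure rate multiplied by correction (π) factors, with an Arrhenius-type temperature factor. All numerical values are made up for the example, not taken from the handbook:

```python
import math

K_BOLTZMANN = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_factor(t_celsius, t_ref_celsius=25.0, ea_ev=0.4):
    """Arrhenius-type temperature acceleration factor; the activation
    energy ea_ev is an illustrative value, not a handbook entry."""
    t, t_ref = t_celsius + 273.15, t_ref_celsius + 273.15
    return math.exp(-ea_ev / K_BOLTZMANN * (1.0 / t - 1.0 / t_ref))

lambda_b = 1e-7                 # base failure rate [1/h] (made up)
pi_t = arrhenius_factor(85.0)   # temperature factor at 85 degC
pi_e, pi_q = 4.0, 2.0           # environment and quality factors (made up)

lambda_p = lambda_b * pi_t * pi_e * pi_q   # corrected hazard rate
print(f"pi_T = {pi_t:.2f}, lambda_p = {lambda_p:.3e} /h")
```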
Conditions 1 and 2 above do not occur in mechanical devices, especially customized ones, and reliability assessment using the methods of the mechanical reliability handbook NSWC [3], similar to MIL-HDBK-217, is highly questionable. In fact, the handbook considers the general randomness still valid and treats the results of any process leading to a Failure Mode merely as correction factors. As a consequence, the handbook forgoes any probabilistic analysis of the processes considered.
As for the third condition, it can be remarked that it is impossible to perform forecasting for new products under development by means of the hazard rate approach, since nobody has experience of their operation and one at least runs the risk of using old data [4]. As for the fourth condition, it contradicts the second one.
Further problems arise when we consider engines instead of electronic integrated circuits:
- The costs involved in executing checks of such huge dimensions.
- Mechanical items are life-limited devices, and each one has different causes limiting its life: wear, fatigue, corrosion, etc., or a combination of them.
- Implementing a failure mode catalogue does not fix the problem, since isolating a failure mode does not explain the evolution of the degradation process leading to the failure.
- The combination of several degradation causes concentrates the end of life around a certain, but not precisely determined, time.
- The causes of degradation and failure are far fewer than in electronics, and the randomness of the phenomena leading to failures concerns the behaviour of the materials rather than the participation of each part in the failure.
The adoption of the Weibull model (where η represents the characteristic failure time and β the tendency of failures to concentrate around η) instead of the Poisson model does not solve the problem of assessing the evolution of the process towards failure.
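A sketch contrasting the two models with illustrative parameters; a shape factor β > 1 concentrates failures around η, something the constant-λ exponential model cannot express:

```python
import math

def weibull_reliability(t, eta, beta):
    """R(t) = exp(-(t/eta)**beta): eta = characteristic life, beta = shape."""
    return math.exp(-((t / eta) ** beta))

def exponential_reliability(t, lam):
    """Constant hazard rate (Poisson/exponential) model."""
    return math.exp(-lam * t)

eta, beta = 10_000.0, 3.0   # illustrative values for a wear-out dominated item
for t in (2_000, 6_000, 10_000):
    print(t, round(weibull_reliability(t, eta, beta), 4),
          round(exponential_reliability(t, 1.0 / eta), 4))
```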
In the past, some commendable attempts [5] have been made to adapt hazard rate reliability assessment to non-mass-production items, but with very questionable results.
In fact, considering that mechanical structures are generally customized, the presence of uncertainties in the design parameters has traditionally been handled by means of safety coefficients; for several years, doubts have been expressed about the validity of this approach [6], [7].
Approach to Probability
The two different approaches are shown in Figure 1. The traditional hazard rate approach to analyzing the evolution of operability can be reversed: one can start from deterministic models of the physical processes and turn them into probabilistic ones.
Generally, a physical process describes a situation where some parameters act as components of an activation energy, and a material has the task of containing the domain of the phenomenon. As long as the stress loading the material is less than its strength, the domain is safe; it fails when the stress becomes greater than the strength.
From the probabilistic standpoint, this circumstance can be expressed in terms of probability, and two scenarios can be considered: time independent and time dependent. In the first case, the simulation of the probabilistic process can be expressed as a function of the rated values and the uncertainties (tolerances) of the physical and geometrical characteristics of the domain and of the applied loads. In the second case, the degradation of the material strength is also considered. A process simulation, even a deterministic one, can be expressed by means of polynomial equations or by matrix relationships, as in the Finite Elements method.
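For the time independent case, under the common simplifying assumption that stress and strength are independent and normally distributed (the classical stress/strength interference picture [8]), the probability of failure has a closed form; a minimal sketch:

```python
import math

def failure_probability(mu_s, sd_s, mu_r, sd_r):
    """P(stress > strength) for independent normal stress S and strength R:
    P_f = Phi((mu_S - mu_R) / sqrt(sd_S**2 + sd_R**2))."""
    z = (mu_s - mu_r) / math.hypot(sd_s, sd_r)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative values in MPa
pf = failure_probability(mu_s=300.0, sd_s=30.0, mu_r=450.0, sd_r=40.0)
print(f"probability of failure = {pf:.3e}")   # reliability = 1 - pf
```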
Time Independency
If the phenomenon is time independent, a polynomial simulation of the process can be approached in two ways:
- With the Monte Carlo method: extracting at random at least 1000 values inside each tolerance gap Δᵢ of the design parameters (see the sketch after this list).
- By assigning a probability density function to each design parameter and integrating them in the multi-domain space. The advantage is the generality of the solutions and the precision of the output; the disadvantage is the need for a lot of data and cumbersome calculations.
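A minimal sketch of the first (Monte Carlo) option; the polynomial process model and the tolerance bands below are made up for the example:

```python
import random

def stress(load, area, k):
    """Illustrative polynomial process model, standing in for the real one."""
    return k * load / area

# Rated value and tolerance gap (half-width Delta_i) of each design parameter
rated  = [1000.0, 2.0, 1.2]    # load [N], section area [cm^2], shape factor
deltas = [  50.0, 0.1, 0.05]

allowable = 650.0              # illustrative strength limit
n_runs = 1000
failures = 0
for _ in range(n_runs):
    sample = [random.uniform(r - d, r + d) for r, d in zip(rated, deltas)]
    if stress(*sample) > allowable:
        failures += 1
print(f"estimated failure probability = {failures / n_runs:.3f}")
```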
There is also a third possible approach: describe the process as in the deterministic way and assign to the parameters both their rated values and their uncertainties, expressed in standard deviation units. The output is the expected value and standard deviation of the stress, and hence the probability of success.
The advantages are evident: simplicity and generality of the input, ease of finding the necessary data, and the possibility of assessing the interdependences and synergies between the parameters and the importance of each parameter in the process. The disadvantages may be the need to build a proper pseudo-algebra and, in order to perform calculations, to predict the probability density function of the output parameter; calculation precision may also suffer in some cases, when the ratio between the standard deviation and the expected value of the input parameters is greater than 20%.
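The pseudo-algebra itself is specific to the authors' tools [10], [11] and is not reproduced here; in the same spirit, the sketch below propagates expected values and standard deviations through a generic model with a first-order (delta-method) scheme, assuming independent inputs and reusing the illustrative model above:

```python
import math

def propagate(f, means, sds, h=1e-6):
    """First-order propagation of mean and standard deviation through f,
    assuming independent inputs (a delta-method sketch, not the authors'
    pseudo-algebra)."""
    mu, var = f(means), 0.0
    for i, s in enumerate(sds):
        hi, lo = list(means), list(means)
        hi[i] += h
        lo[i] -= h
        dfdx = (f(hi) - f(lo)) / (2.0 * h)   # central-difference sensitivity
        var += (dfdx * s) ** 2
    return mu, math.sqrt(var)

f = lambda p: p[2] * p[0] / p[1]   # the illustrative stress model again
mu, sd = propagate(f, means=[1000.0, 2.0, 1.2], sds=[20.0, 0.04, 0.02])
print(f"stress: expected value = {mu:.1f}, standard deviation = {sd:.1f}")
```

From the output mean and standard deviation, assuming a distribution shape for the output (for instance normal), the probability of success follows as in the stress/strength sketch above.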
As for the pseudo-algebra, once it is implemented in a dedicated software tool, it lends itself to recursive calculations [10], [11]. So far, no complete commercial solutions are available for the Finite Elements (FE) simulation approach. The MSC (MSC Software Corporation) solution [2] does not face this issue: it is useful for specific optimization problems but does not take into account the uncertainties of the loads.
Since FE meshes generally contain many thousands of elements and nodes, a single calculation requires 10 to 60 minutes. If we were to simulate a process with 5 parameters with FE, performing 1000 random extractions for each parameter, the complete simulation would require 10¹⁴ to 10¹⁵ hours of calculation: impracticable! The authors [12], making use of a particular algorithm within variance analysis, reduced the number of calculations per parameter from 1000 to 3, or to a minimum of 2, as reported in Example 2. The method has been implemented in the reliability tool RELYSOFT.
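The arithmetic behind the quoted figures, as a quick check (assuming the 10 to 60 minutes per FE solution mentioned above):

```python
runs_full = 1000 ** 5              # 1000 extractions for each of 5 parameters
hours_low = runs_full * (10 / 60)  # at 10 minutes per FE solution
hours_high = runs_full * 1.0       # at 60 minutes per FE solution
print(f"{runs_full:.0e} runs -> {hours_low:.1e} to {hours_high:.1e} hours")

# One-at-a-time sensitivity scheme: 2-3 runs per parameter instead of 1000
print(f"reduced scheme: {3 * 5} runs -> at most {3 * 5} hours")
```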
Time Dependency
One of the most common time dependent phenomena leading to failure is fatigue, which occurs when a material is stressed by cyclic loading. In the simplest case, the material is in tension during one half cycle and in compression during the other half. The fatigue evolution can be examined by means of Wohler's curves or by directly integrating the Paris-Erdogan law.
In Wohler's model, the number of cycles to which the structure is subjected is reported on the abscissa and the maximum equivalent stress on the ordinate.
Its use has so far been deterministic: the graph shows the correspondence between the maximum number of cycles at failure and the maximum stress to which the item is subjected.
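One common analytic form of the Wohler curve is Basquin's relation; the material constants below are illustrative, not values for any specific material:

```python
def basquin_cycles_to_failure(stress_amplitude, sigma_f=900.0, b=-0.09):
    """Basquin form of the Wohler (S-N) curve:
    sigma_a = sigma_f * (2N)**b  ->  N = 0.5 * (sigma_a / sigma_f)**(1/b);
    sigma_f [MPa] and b are illustrative material constants."""
    return 0.5 * (stress_amplitude / sigma_f) ** (1.0 / b)

for s in (300.0, 400.0, 500.0):   # stress amplitudes in MPa
    print(f"sigma_a = {s} MPa -> N = {basquin_cycles_to_failure(s):.3e} cycles")
```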
The integration of the Paris-Erdogan law describes the evolution of crack growth during the application of the cyclic stress; its traditional use is also deterministic: the structure fails as soon as the maximum allowed number of cycles is reached.
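A sketch of that deterministic integration, assuming the usual stress intensity range ΔK = Δσ·Y·√(πa) with a constant geometry factor Y; the Paris constants C and m below are illustrative:

```python
import math

def paris_cycles_to_failure(a0, a_crit, delta_sigma,
                            C=1e-12, m=3.0, Y=1.0, da=1e-5):
    """Numerically integrate da/dN = C * (dK)**m from the initial crack
    size a0 to the critical size a_crit (sizes in m, stress in MPa;
    C, m and Y are illustrative, not data for a real material/geometry)."""
    a, n_cycles = a0, 0.0
    while a < a_crit:
        dk = delta_sigma * Y * math.sqrt(math.pi * a)  # stress intensity range
        n_cycles += da / (C * dk ** m)                 # dN = da / (C * dK**m)
        a += da
    return n_cycles

n = paris_cycles_to_failure(a0=1e-3, a_crit=1e-2, delta_sigma=100.0)
print(f"cycles to reach the critical crack size = {n:.3e}")
```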
Three examples will be described in detail in the second part of this article, due to be published in the next issue of Maintworld.
References
[1] MIL-HDBK-217-F2-N2, "Reliability Prediction of Electronic Equipment".
[2] Ken Blakely, "Using Design Sensitivity for Statistical Response Analysis", The MacNeal-Schwendler Corporation, CA, 1975.
[3] Naval Surface Warfare Center – Carderock Division, "Handbook of Reliability Prediction Procedures for Mechanical Equipment", CARDEROCKDIV, NSWC-07, Sept 2007.
[4] Richard E. Barlow, "A Bayes Explanation of an Apparent Failure Rate Paradox", IEEE Trans. on Reliability, Vol. R-34, No. 2, June 1985.
[5] S. R. Calabro, B. G. Horowitz, "Evaluating Equipment Reliability Using Techniques Based on Kappa Square Statistics", IEEE Trans. on Power Apparatus and Systems, Vol. PAS-100, No. 8, Aug 1981.
[6] NTIS# PB2003-01115-SSC-420, "Failure Definition for Structural Reliability Assessment", Ship Structure Committee, 2002.
[7] Ministero delle Infrastrutture e dei Trasporti, "Testo Unico, Norme Tecniche per le Costruzioni" (Consolidated Text of Technical Standards for Construction).
[8] AD813574, "Reliability Prediction – Mechanical Stress/Strength Interference", Technical Report RADC-TR-66-710, March 1967.
[9] RAC, Reliability Analysis Center, DoD, NPRD, "Non Electronic Parts Reliability Data", 1995.
[10] R. Paggi, G. Mariotti, et al., "An Integrated Approach to RCM", 17th International Conference of the Israel Society for Quality, Jerusalem, 16–20 Nov 2008, page 65.
[11] R. Paggi, V. Bisti, et al., "Posterior Probability Density Function in Simulation Processes via Bayesian Approach", 18th International Conference of the Israel Society for Quality, Jerusalem, 15–19 Nov 2010.
[12] R. Paggi, A. Paggi, et al., "Probabilistic Approach to the Design through the Use of Nastran", 2011 MSC Italian Users Conference, Torino, 12–13 Oct 2011.