Neural network-based response surface approximation
The structural analysis of complex structures is usually solved by means of a computational model. The computational model is an abstraction of the structure and its environment, including their interactions. Such a model is used to map structural parameters onto structural responses. Structural analysis forms the basis for structural design in a variety of approaches. The values of the structural parameters are generally not fixed but vary within specific ranges. This phenomenon is referred to as uncertainty and is accounted for with the aid of an uncertain structural analysis, e.g., stochastic analysis, fuzzy analysis, or fuzzy stochastic analysis. In any case, an uncertain structural analysis requires the repeated application of the underlying deterministic computational model. The more complex the model, the higher the computational effort needed for each computation. This effort may be reduced by simplifying the computational model and by reducing the number of calls of the deterministic computational model. In this context, an adequate approximation of the structural response is the most effective measure to achieve a reasonable reduction of the numerical effort. Subsequently, an approximation by means of the response surface method based on neural networks is elucidated.
Neural Networks
The idea of artificial neural networks is based on the design of the human brain. The brain is constituted by neurons (information-processing units) which are connected by synapses (information-transferring links); it has the ability to map input signals onto output signals and to adapt to certain tasks during a training phase. The output produced by a neural network is called the response surface. The goal is to replace the deterministic computational model for structural analysis by a neural network. That is, the input signals comprise structural parameters such as loads, material parameters, and geometrical parameters, and the network output provides the associated response surface in the form of stresses, displacements, or deformations. A variety of alternatives exist for designing a neural network. The focus of this study is on feedforward neural networks, which have already been applied successfully in many fields of engineering. A specific network architecture is built by combining the two main components of a neural network, the neurons and the synapses. Each synapse connects two neurons with each other and possesses a synaptic weight w; it enables the signal flow from one neuron to the next. A neuron represents an information-processing unit that maps an input signal onto an output signal, as illustrated in Fig. 1(a). It contains a summing junction which lumps together the incoming signals x, each weighted by a specific synaptic weight w. The summation of input signals for neuron k also involves an external term called the bias b.
Fig. 1: Neural network
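As an illustration of this neuron model, the following short Python/NumPy sketch computes the output of a single neuron from the weighted input signals x and the bias b. The tanh activation function and all numerical values are illustrative assumptions and are not taken from the study.

```python
import numpy as np

def neuron_output(x, w, b, activation=np.tanh):
    """Output of a single neuron: activation of the weighted input sum plus bias."""
    v = np.dot(w, x) + b   # summing junction: incoming signals x weighted by w, plus bias b
    return activation(v)   # activation maps the summed signal onto the output signal

# Illustrative input signals (e.g., structural parameters) and synaptic weights.
x = np.array([0.5, -1.2, 2.0])   # input signals x
w = np.array([0.8, 0.1, -0.4])   # synaptic weights w
b = 0.3                          # bias b
print(neuron_output(x, w, b))
```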
Comparison of Training Algorithms
The iterative process of adjusting the weights of a neural network is called training. The aim is to obtain a neural network response that approximates the underlying data set with maximum quality. In this study the focus is on the backpropagation algorithm as one form of supervised error-correction learning. This algorithm requires training data in the form of input-output pairs that are obtained by repeatedly applying the computational model for structural analysis. The backpropagation algorithm may be coarsely described by the following three steps, which are repeated over many iterations.
- Forward computation of the input signal of a training sample and determination of the neural network response
- Computation of the error between the desired response and the neural network response
- Backward computation of the error and calculation of corrections to the synaptic weights and biases
By applying these corrections to the weights, the error is driven towards a minimum of the error surface. Backpropagation is based on a standard gradient method. The backpropagation algorithm can be applied in two different modes, referred to as the incremental mode and the batch mode. The incremental mode applies a weight correction after each presentation of one sample of the training data; the training sample is chosen randomly from the training data. The batch mode applies a weight correction only once after each epoch; during one epoch, every sample of the training data is presented to the neural network. For the comparison of the incremental mode and the batch mode, calculations are performed under identical conditions. The training behavior varies depending on the randomly selected initial values for the synaptic weights and biases and on the training mode, incremental or batch. The evaluation of the training results is performed by means of the root mean square error of the approximation.
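To make the difference between the two modes concrete, the following Python/NumPy sketch trains a small one-hidden-layer feedforward network on a hypothetical one-dimensional data set in both modes and evaluates the result by the root mean square error. The network size, learning rate, number of epochs, and the synthetic data are illustrative assumptions, not the settings used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic input-output pairs standing in for data obtained from a
# deterministic computational model (hypothetical 1D response).
X = np.linspace(-1.0, 1.0, 40).reshape(-1, 1)
Y = np.sin(np.pi * X)

def init_params(n_in=1, n_hidden=8, n_out=1):
    """Randomly selected initial synaptic weights and biases."""
    return {
        "W1": rng.normal(0.0, 0.5, (n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0.0, 0.5, (n_hidden, n_out)),
        "b2": np.zeros(n_out),
    }

def forward(p, X):
    """Forward computation of the network response (tanh hidden layer, linear output)."""
    H = np.tanh(X @ p["W1"] + p["b1"])
    return H @ p["W2"] + p["b2"], H

def backward(p, X, Y):
    """Backward computation: gradients of the squared error w.r.t. weights and biases."""
    Yhat, H = forward(p, X)
    E = Yhat - Y                         # error between network and desired response
    dW2 = H.T @ E / len(X)
    db2 = E.mean(axis=0)
    dH = (E @ p["W2"].T) * (1.0 - H**2)  # error backpropagated through the tanh layer
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)
    return {"W1": dW1, "b1": db1, "W2": dW2, "b2": db2}

def rmse(p, X, Y):
    Yhat, _ = forward(p, X)
    return np.sqrt(np.mean((Yhat - Y) ** 2))

def train(mode, epochs=500, eta=0.1):
    p = init_params()
    for _ in range(epochs):
        if mode == "batch":
            # Batch mode: one correction per epoch, averaged over all samples.
            g = backward(p, X, Y)
            for k in p:
                p[k] -= eta * g[k]
        else:
            # Incremental mode: a correction after each randomly chosen sample.
            for i in rng.permutation(len(X)):
                g = backward(p, X[i:i+1], Y[i:i+1])
                for k in p:
                    p[k] -= eta * g[k]
    return p

for mode in ("incremental", "batch"):
    print(mode, "RMSE:", rmse(train(mode), X, Y))
```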
The numerical example deals with the planned bridge across the Strait of Messina in Italy (see Fig. 3(a)). The region around the bridge is subject to seismic activity; therefore, the bridge behavior under seismic loads needs to be examined. The input parameters are seismic loads, and the structural response is the displacement at the middle of the bridge in the transverse direction. Training and testing results are shown in Fig. 3(b).
Fig. 3: Numerical example
Improvement of Approximation Accuracy
In many cases the quality of a response surface based on a neural network still leaves room for further improvement. Different approaches to improve the approximation quality are, e.g.,
- an increase of the complexity of the neural network by adding more hidden layers and/or more neurons, and
- the use of more than one neural network for the response approximation, e.g., by
  - committee machines,
  - section-wise approximations, and
  - neural network composites.
Neural Network Composites
Subsequently, the specific use of a neural network composite is explained. The proposed approach is visualized in Fig. 4. Herein, a first neural network is trained to approximate a certain response. The underlying task is very complex; thus, the first neural network is only capable of approximating certain features of the desired response. The idea is to use the computed remaining error surface to train a second neural network. The overall response is then constituted by adding the responses of both networks. For a further improvement of the approximation, more neural networks may be added in the same manner.
Fig. 4: Neural network composite
Fig. 5: Responses
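A minimal Python/NumPy sketch of this composite idea is given below: a first network is fitted to a hypothetical response, the remaining error surface is computed, a second network is trained on that error surface, and the overall response is obtained by adding both network responses. The data set, network size, and plain gradient-descent training are illustrative assumptions and do not reproduce the setup of the numerical example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical target response that a single small network approximates only roughly.
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
Y = np.sin(3 * np.pi * X) + 0.3 * X**2

def fit_network(X, Y, n_hidden=6, epochs=3000, eta=0.05):
    """Train a small one-hidden-layer network by gradient descent; return a predictor."""
    W1 = rng.normal(0.0, 0.5, (1, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)
        E = (H @ W2 + b2) - Y
        dW2 = H.T @ E / len(X); db2 = E.mean(axis=0)
        dH = (E @ W2.T) * (1.0 - H**2)
        dW1 = X.T @ dH / len(X); db1 = dH.mean(axis=0)
        W1 -= eta * dW1; b1 -= eta * db1; W2 -= eta * dW2; b2 -= eta * db2
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ W2 + b2

# Step 1: first network approximates the desired response only partially.
net1 = fit_network(X, Y)
residual = Y - net1(X)            # remaining error surface

# Step 2: second network is trained on the remaining error surface.
net2 = fit_network(X, residual)

# Overall composite response: sum of both network responses.
def composite(Xq):
    return net1(Xq) + net2(Xq)

def rmse(Yhat, Y):
    return np.sqrt(np.mean((Yhat - Y) ** 2))

print("RMSE single network:", rmse(net1(X), Y))
print("RMSE composite     :", rmse(composite(X), Y))
```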
Conclusions
This study discusses two different issues of response surface approximation based on neural networks. First, the differences between the incremental mode and the batch mode of the backpropagation algorithm applied for training neural networks are investigated. For the examined examples it is found that the incremental mode possesses advantages such as faster convergence and less computational effort compared to the batch mode. Consequently, the application of the incremental mode is proposed for approaches built on this basis, such as structural design and optimization. The second issue is the improvement of the quality of the response approximation by using more than one neural network. A network composite applies a stepwise reduction of the approximation error by using the error surface for network training. In the chosen example it is shown that a network composite is able to approximate several features of the desired response.