
Virtual Thermo-Mechanical Prototyping of

Microelectronics Devices

Dissertation

submitted in fulfilment of the requirements for the degree of Doctor at the Technische Universiteit Delft,

by authority of the Rector Magnificus, Prof. Dr. Ir. J.T. Fokkema, chairman of the Board for Doctorates,

to be defended in public on Tuesday 2 October 2007 at 15:00

by

Willem Dirk VAN DRIEL

Medical Mechanical Engineer


This dissertation has been approved by the promotors: Prof. Dr. Ir. G.Q. Zhang

Prof. Dr. Ir. L.J. Ernst

Composition of the doctoral committee:

Rector Magnificus, chairman

Prof. Dr. Ir. G.Q. Zhang, Technische Universiteit Delft, promotor

Prof. Dr. Ir. L.J. Ernst, Technische Universiteit Delft, co-promotor

Prof. Dr. C.I.M. Beenakker, Technische Universiteit Delft

Prof. X.J. Fan, Taiyuan University of Technology, China

Prof. A.A.O. Tay, National University of Singapore, Singapore

Prof. B. Michel, Technical University Berlin, Germany

Ir. J.H.J. Janssen, NXP Semiconductors Nijmegen, advisor

Prof. Dr. Ir. F. van Keulen, Technische Universiteit Delft, reserve member

ISBN: 978-90-9022179-3

Copyright © 2007 by W.D. van Driel

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the publisher.

Cover by C.C.M. Rijkers


Contents

Glossary ... vi

1. Introduction ... 1

1.1 Microelectronics Development ... 1

1.2 IC Packaging Development ... 2

1.3 Thermo-Mechanical Reliability: Literature Review ... 5

1.4 Virtual Prototyping ... 15

1.5 Objectives and Approach ... 17

1.6 Outline of the Thesis ... 18

2. Microelectronics Technology ... 20

2.1 IC Backend Processes ... 20

2.2 Packaging Processes ... 23

2.3 Reliability Testing for IC Packages ... 27

3. Simulation-based Optimisation ... 31

3.1 Strategy, Methodology and Procedures ... 31

3.2 DOE, RSM and Design Optimisation Techniques ... 33

3.2.1 Design of Experiments ... 33

3.2.2 Response Surface Models ... 36

3.2.3 Design Optimisation, Robust Design and Parameter Sensitivity ... 40

3.3 Efficient Global Optimisation ... 42

4. Accurate and Efficient Prediction Models ... 46

4.1 IC Backend Process Induced Warpage ... 49

4.1.1 Characterization of Thin Films ... 51

4.1.2 Prediction of Thin Film Warpage and Stresses ... 55

4.2 Process Induced Warpage of Electronic Packages ... 58

4.2.1 Finite Element Models ... 59

4.2.2 3D Interferometry – Theory and Principle ... 63


4.2.4 Results and Discussion ... 66

4.3 Interface Strength Characterization ... 72

4.3.1 The Four Point Bending Test ... 73

4.3.2 Sample and Finite Element Model Description ... 76

4.3.3 Results and Discussion ... 77

4.4 Characterization and Prediction of Moisture Driven Failures ... 82

4.4.1 Characterization of Moisture Properties ... 82

4.4.2 Prediction of Moisture Driven Failures ... 85

5. Response Surface Modelling for Non-linear Packaging Stresses ... 88

5.1 Introduction ... 88

5.2 Reliable FEM-based Physics of Failure Models ... 89

5.3 Results for Nominal Design ... 93

5.4 Response Surface Modelling and Optimisation ... 98

5.5 Package Qualification ... 106

5.6 Conclusions ... 107

6. Virtual Prototyping based IC Package Stress Design Rules ... 109

6.1 Introduction ... 109

6.2 Methodology ... 111

6.3 Results ... 115

6.4 Response Surface based Design Rules ... 119

6.5 Conclusions ... 121

7. Structural Similarity Rules for the BGA Package Family ... 123

7.1 Introduction ... 123

7.2 The Ball Grid Array Package Family ... 125

7.3 Multi-Physics Finite Element Modelling ... 128

7.4 Response Surface based Structural Similarity Rules ... 132

7.5 Conclusions ... 137

8. Driving Mechanisms of Delamination Related Reliability Problems in Exposed Pad Packages ... 138

8.1 Introduction ... 138


8.3 Multi-Physics FE Modelling ... 142

8.4 Results ... 144

8.5 RSM based Design Rules ... 150

8.6 Conclusions ... 152

9. Conclusions and Recommendations ... 154


Glossary

ARE Area Release Energy

ASTM American Society for Testing and Materials

BEOL Back End of Line

BOM Bill Of Materials

BT Bismaleimide Triazine

BGA Ball Grid Array

CMOS Complementary Metal Oxide Semiconductor

COB Chip On Board

C-SAM C-mode Scanning Acoustic Microscopy

CSP Chip Scale Package or Chip Size Package

CME Coefficient of Moisture Expansion

CTE Coefficient of Thermal Expansion

CVD Chemical Vapour Deposition

DBS DIL Bent SIL power package

DIP Dual Inline Package

DNP Distance to Neutral Point

DOE Design Of Experiment

DRAM Dynamic Random Access Memory

DSP Digital Signal Processor

EGO Efficient Global Optimisation

EI Expected Improvement

FC Flip Chip

FEM Finite Element Method

FET Field Effect Transistor

FR4 Flame Retardant Type 4

Ge Germanium

GQS General Quality System

HAST Highly Accelerated Stress Test

HBGA Heatsink Ball Grid Array

HTOL High Temperature Operating Life


IC Integrated Circuit

IEC International Electrotechnical Commission

IEEE Institute of Electrical and Electronics Engineers

IMAPS International Microelectronics And Packaging Society

IMC Inter Metallic Compound

IO Input Output

IPC Institute for Interconnecting and Packaging Electronic Circuits

ITRI Interconnection Technology Research Institute

ITRS International Technology Roadmap for Semiconductors

JEDEC Joint Electron Device Engineering Council

LED Light Emitting Diode

LEFM Linear Elastic Fracture Mechanics

LH Latin Hypercube

LSI Large Scale Integration IC

LTCC Low Temperature Co-fired Ceramic

MCM Multi Chip Module

MEMS Micro Electro Mechanical Systems

MPW Multi Project Wafer

MSL Moisture Sensitivity Level

NEMI National Electronics Manufacturing Initiative

NIST National Institute of Standards and Technology

PCB Printed Circuit Board

PoP Package on Package

ppm parts per million

ppf pre-plated frame

PPOT Pressure POT test

Precon Preconditioning

PSG PhosphoSilicate Glass

PV PassiVation

QFN Quad Flat package No lead

QFP Quad Flat Package


RH Relative Humidity

RSM Response Surface Model

SEM Scanning Electron Microscope

SEMATECH SEmiconductor MAnufacturing TECHnology consortium

SEMI Semiconductor Equipment and Materials International

Si Silicon

SIL Single In Line package

SiP System in Package

SMD Surface Mount Device

SMT Surface Mount Technology

SO Small Outline package

SOC System On Chip

SOT Small Outline Transistor

SSOP Shrink Small Outline Package

TEOS TetraEthOxySilane

TFBGA Thin Fine pitch Ball Grid Array

TO Transistor Outline

Tg Glass transition temperature

TMCL Temperature Cycling Testing

UBM Under Bump Metallisation

UPOT Unsaturated Pressure POT test

VCCT Virtual Crack Closure Technique

VLSI Very Large Scale Integration

VP Virtual Prototyping


Chapter 1

Introduction

1.1 Microelectronics Development

The semiconductor industry and its suppliers are cornerstones of today's high-tech economy, representing a worldwide sales value of 250 billion euros in 2004 (ITRS, 2005). The microelectronics sector supported a global market of more than 6 trillion euros in terms of electronic systems and services. For the past fifty years, microelectronics products have pervaded our lives, with massive penetration into health, mobility, security and identification, communications, education, entertainment and virtually every aspect of human life.

The era of semiconductors started in 1959, when Jack S. Kilby, then at Texas Instruments, submitted a patent request on miniaturized electronic circuits. His invention demonstrated the feasibility of realizing resistors and capacitors, based on semiconductor technology, together with transistors in one and the same substrate. With that, the integrated circuit (IC) was born, and Jack S. Kilby received the Nobel Prize in Physics in 2000, together with Zhores I. Alferov and Herbert Kroemer. Just a few years later, in 1965, Fairchild engineer Gordon E. Moore made a bold prediction of exponential growth in the semiconductor industry [Moore, 1965]. He stated that:

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year.


dimensions measured in nanometers will make the semiconductor sector even more pervasive than it is today.

The smallest feature sizes on ICs have already fallen to 65 nm and beyond. The International Technology Roadmap for Semiconductors projects that in 15 years the smallest feature size will be smaller than 10 nm. Compare this with the well-known 1977 quote from Scientific American, stating that:

Present technology can routinely reproduce elements a few μm’s across, and it appears possible to reduce the smallest features to about one μm,

and one instantly grasps the enormous development over the years. Modern semiconductor technology is characterized by key requirements such as development speed (lowest possible time to volume, first-time-right), optimised cost structures and highest quality (zero defects). But with the increase of interconnect density, an increase of power dissipation density, and thus of temperature, is anticipated. The combination of the miniaturization and function integration trends drives microelectronics technology into an unknown level of complexity, characterized by ongoing miniaturization down to the nano-scale and by heterogeneity and multiplicity in functionality, discipline, scale, technology, process, materials and interfaces, and damage and failure modes. As a consequence, we are confronted with ever-increasing design complexity, dramatically decreased design margins, increased chances and consequences of failures, decreased product development and qualification times, an increased gap between technological advances and the development of fundamental knowledge, and increased difficulty in meeting quality, robustness and reliability requirements. If the industry is to rapidly decrease these sizes in high-volume production during the next 15 years, we need to face the challenge of designing for sufficient product reliability today.

1.2 IC Packaging Development


With the introduction of transistors in the late 1960s, protective packaging methods were needed. In the beginning, ceramic materials were the most commonly used carriers, covered by a cap. As time progressed, substantially cheaper metal-based carriers, leadframes, were introduced. As volumes increased and production became automated, metal leadframes became the favourite carrier for IC packaging technologies. Many leadframe-based variants were introduced over time, ranging from Dual In Line (DIP) and Small Outline (SO) packages in the 1970s to Quad Flat Packages (QFP) and DIL Bent SIL packages (DBS) in the 1980s. Exposed pad packages were introduced because of their excellent thermal and electrical performance. Examples are HTQFP and HTSSOP, where H stands for heat and T for thin. The exposed pad is a standard feature of QFN (Quad Flat No lead) packages, which were introduced only recently.

In the early 1990s, BGA-like packages were introduced based on a new multi-layer process: double-sided flex circuit pairs were stacked and laminated using adhesives to provide vertical connections. During the development stage, ideas emerged that moved closer to the current BGA packaging concept. The BGA family allows for low profiles and outlines and is currently the standard for high-density IO packages. The concept is based on an organic laminate, be it FR4 or BT, including copper traces that connect to the IC using wire bonds, which is then encapsulated with a moulding compound. Many variations are available on the market, such as TFBGA and HBGA, where F stands for fine pitch.


Continuing the drive for smaller packages, Chip Scale and Wafer Level packages (CSP, WLP) were introduced in the mid-1990s. CSP is a potentially attractive technology for a large variety of microelectronics applications and one of the most advanced packaging concepts. It combines the advantages of flip chip and conventional surface mount technologies. Like flip chips, CSPs offer area array arrangements of interconnects and can be solder bumped efficiently at the back end structures of wafer processes. The increasing interest in CSP is mainly due to the fact that it offers the smallest form factor available - a real chip size! Combined with the fact that CSPs are generally manufactured at lower cost than conventional packages, this makes them the perfect choice for products in the low-cost and handheld market. The strength of this concept - that it is real chip size - is also its limitation: the number of IOs that the package allows is the number of IOs that can be routed in the application, usually at a pitch of 0.5 mm or at best 0.4 mm.

Figure 1.1: Packaging development trend.


package is stacking multiple packages leading to the so-called Package on Package (PoP) technology.

Due to the introduction of semiconductor solutions into other fields, the added value of packaging technologies has increased substantially. Examples are Micro Electro Mechanical Systems (MEMS), digital cameras in mobile phones and Light Emitting Diodes (LEDs). In these applications, packaging is a critical factor affecting the final costs. As such, the economic and technical importance of electronic packaging has greatly increased. According to the roadmaps, ICs will have more IOs, larger sizes, smaller pitches and higher operating temperatures in the coming years. IC packages will be subjected to harsher environments: not only increased temperatures but also increased mechanical loading conditions such as vibration and impact. The key question for next-generation packaging is whether it can fill its traditional role of interconnecting, powering, cooling and protecting semiconductor ICs while also addressing these consequences. Combined with the business trends, mainly represented by cost reduction and shorter time-to-market, IC packaging technologies are driven into an unknown level of complexity. Again, the consequences are clear: dramatically decreased design margins, increased chances and consequences of failures, decreased product development and qualification times, an increased gap between technological advances and the development of fundamental knowledge, and increased difficulties in meeting quality, robustness and reliability requirements.

1.3 Thermo-Mechanical Reliability: Literature Review


from the design phase of product and process. Examples of failures are cracks, voids, delamination, buckling, hillocks, wire fatigue and many more. Thermo-mechanical reliability is becoming one of the major bottlenecks for both current and future microelectronics technologies.

Mechanics has been playing a prominent role in industrial and technological development for decades, for instance in the aerospace and transport industries and in a very broad application spectrum of mechanical and civil engineering. Computational mechanics, such as Finite Element Methods, has also been impacting various industries and technologies for several decades, especially due to the rapid development of computational capability. Microelectronics products can be recognized as layered structures. The earliest study of thermal stress in multi-layered structures was presented by Timoshenko [1925], who proposed a general theory, based on beam theory, of the bending of a bi-material subjected to a uniform thermal loading. With the introduction of the Finite Element (FE) method [Hughes, 1987; Zienkiewicz and Taylor, 1989], both 2D and 3D models were created that facilitate the prediction of stress and strain levels in bonded dissimilar materials under thermal loading conditions [Suhir, 1989; Eischen et al., 1990; Pan and Pao, 1990; Jiang et al., 1997]. At present, FE techniques are widely used to predict stress and strain levels and their evolution during the different manufacturing stages of microelectronics devices. Shen et al. [1996] investigated the stress and strain response of patterned lines on silicon wafers using the traditional Al/SiO2 CMOS technology. van Silfhout et al. [2001] gave an overview of the state of the art of thermo-mechanical modelling of ICs during backend processes and identified an industry need to speed up the development and application of advanced numerical techniques to optimise the mechanical behaviour of ICs. With the introduction of Cu/lowk technology to replace the traditional Al/SiO2, this need became urgent in order to investigate new failure modes in ICs, such as debonding between the different layers [Du et al., 2002]. This debonding turned out to be related to the (changed) backend process, as investigated by Gonda et al. [2004] and Orain et al. [2004], and to the metal layout, as investigated by van Silfhout et al. [2004].
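Timoshenko's [1925] bi-material result referenced above can be sketched numerically. The function below implements the classical curvature formula for a bi-material strip under a uniform temperature change; the material values in the example are illustrative and not taken from this thesis.

```python
# Timoshenko [1925]: curvature of a bi-material strip under a uniform
# temperature change dT. For identical layers the result reduces to
# the well-known kappa = 1.5 * (a2 - a1) * dT / h.
def bimaterial_curvature(t1, t2, E1, E2, a1, a2, dT):
    """Curvature 1/rho of a bi-material strip.

    t1, t2 : layer thicknesses (m)
    E1, E2 : Young's moduli (Pa)
    a1, a2 : coefficients of thermal expansion (1/K)
    dT     : uniform temperature change (K)
    """
    h = t1 + t2
    m = t1 / t2          # thickness ratio
    n = E1 / E2          # stiffness ratio
    num = 6.0 * (a2 - a1) * dT * (1.0 + m) ** 2
    den = h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
    return num / den

# Illustrative example: a silicon die (CTE ~2.6 ppm/K) on a copper
# leadframe (CTE ~17 ppm/K), cooled by 150 K after die attach.
kappa = bimaterial_curvature(t1=0.3e-3, t2=0.2e-3, E1=130e9, E2=120e9,
                             a1=2.6e-6, a2=17e-6, dT=-150.0)
```

For two identical layers the formula collapses to the textbook result, which is a convenient sanity check on any implementation.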


plastic packages. Liu and Mei [1995] studied the stress response of IC packages during epoxy moulding, moisture absorption and subsequent wave soldering processes; Yueng and Yuen [1996] did so during IC packaging, including the time-dependent response of the epoxy. Chen et al. [2000, 2002] used Raman spectroscopy to measure IC stress levels after manufacture into a package. FE studies of the wire bonding process indicate a strong interaction with the thin IC layers to which the wire is bonded [Degryse et al., 2004; Liu et al., 2004; Srikanth et al., 2005; Viswanath et al., 2005].

All of the above mentioned FE studies have investigated a certain process, be it somewhere during IC or packaging processes, but none of them have integrated both. For the successful development of IC structures and processes, it is essential to take into account the influence of packaging manufacturing and reliability qualification. In this thesis, this so-called integral approach, accounting for all the major loading sources and history during the complete product creation process, will be demonstrated.

Given the large size differences, from a nm-scale IC layer to a mm-scale package, reliability predictions at the integral IC package level require advanced modelling techniques. These techniques include the use of contact algorithms, multi-level (or multi-scale) modelling, element birth and death (also referred to as activation and deactivation) techniques, advanced material constitutive models to include the time- and temperature-dependent response of the constituents, and advanced mechanics theory, such as fracture mechanics to predict interface delamination and/or hygro-mechanics to predict the effect of moisture in the microelectronics device. The remainder of this section presents literature and background information on these topics.

Mercado et al. [2003a, 2003b] and Wang et al. [2003a] have used multi-level techniques to predict debonding in Cu/lowk technology for Flip Chip packages, while Fiori and Orain [2005] used them to evaluate bondpad architectures subjected to wire bonding forces. Their results showed that multi-level techniques can be successfully applied to cover the large size differences between ICs and packages. In this thesis, multi-level techniques combined with contact algorithms and element (de-)activation are used to cover the integral IC package approach.


theory and associated measurements for epoxy moulding compounds. Yi and Sze [1998] and Yueng and Yuen [1996] reported that by using a visco-elastic model for the moulding compound, the predicted stresses and deformations are closer to the real situation. A cure-dependent visco-elastic constitutive model has also been established to describe the evolution of material properties during the curing process of a thermosetting polymer [Ernst et al., 2002; Ernst et al., 2004; Jansen et al., 2004b; Yang et al., 2004b; van 't Hof, 2006]. Such a constitutive model is needed because package concepts such as QFN and BGA will be assembled in full map systems instead of the traditional strip systems. One critical issue in manufacturing such map systems is the warpage induced during the curing of the moulding compound, which makes a significant contribution [Yang et al., 2004a; Yang et al., 2005]. In this thesis, where applicable, epoxy moulding compounds are described by linear visco-elastic behaviour, and its effect on the accuracy of predicted packaging-induced warpage, compared to a linear elastic approach, is presented.

For the metals used in packages, visco-plastic constitutive models are typically used [Neu et al., 2001; Sharma and Dasgupta, 2002; Schubert et al., 2003; Wiese and Meusel, 2003; Erinc et al., 2004; Dudek, 2006]. A variety of metals are applied, including copper for leadframes and solders as interconnect material. Different solder compositions can be used; the traditionally applied alloys contain lead, which is currently being replaced by other materials because of its toxic nature. The solder still typical for electronic applications is the near-eutectic Sn60Pb40 alloy, frequently with 1-2% Ag, which has been studied for decades [Dudek, 2006]. Much less is known about the lead-free alloys [Lau and Chang, 1998b; Wiese and Meusel, 2003]. The mechanical behaviour of solder is non-linear and temperature dependent, and creep processes dominate the deformation kinetics. A variety of studies have been published concerning creep constitutive models and related properties of different solders [Grivas et al., 1979; Darveaux and Banerji, 1998; Schubert et al., 2003; Pang et al., 2003; Clech, 2005]. In this thesis, where applicable, metal leadframes are described as ideally elasto-plastic materials and solders by a visco-plastic constitutive model based on Darveaux and Banerji [1998].


Soestbergen et al., 2006]. The dielectric film is constrained by two parallel aluminium electrodes of which the bottom electrode is bonded onto the substrate. A uniform pressure is applied around the free edges of the capacitor. Due to this pressure the capacitor will be compressed and the thickness of the dielectric film will decrease, causing a measurable change in the capacitance. In this thesis, due to the simplicity of the method, wafer warpage is used to determine the isotropic properties of the thin IC layers.
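Determining thin-film properties from wafer warpage, as mentioned above, is commonly done with Stoney's equation, which relates the measured wafer curvature to the biaxial film stress. The sketch below assumes the standard thin-film limit (film much thinner than the substrate); the numerical values are illustrative, not taken from this thesis.

```python
# Stoney's equation: biaxial stress in a thin film from the measured
# curvature of the wafer it is deposited on. Valid for t_f << t_s.
def stoney_stress(E_s, nu_s, t_s, t_f, R):
    """Film stress (Pa).

    E_s, nu_s : substrate Young's modulus (Pa) and Poisson ratio
    t_s, t_f  : substrate and film thickness (m)
    R         : measured radius of curvature of the wafer (m)
    """
    return E_s * t_s ** 2 / (6.0 * (1.0 - nu_s) * t_f * R)

# Illustrative: a 1 um film on a 725 um silicon wafer that is bent
# to a 50 m radius of curvature.
sigma = stoney_stress(E_s=130e9, nu_s=0.28, t_s=725e-6, t_f=1e-6, R=50.0)
```

The attraction of the method, as noted in the text, is its simplicity: only the wafer curvature before and after deposition needs to be measured, and no film properties enter the formula.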


such, the interface is mimicked by so-called interface elements whose properties describe the adhesion between the two materials. Because of this, cohesive zone methods require more (material) input, for instance the (initial) interface stiffness, and dedicated experiments are required to obtain this input. Besides this, when using cohesive zone techniques long calculation times are to be anticipated, especially when many and brittle interfaces are present [Chandra et al., 2002; Tomar et al., 2004]. This is due to the fact that high mesh densities are needed to describe the fracture properly. A major advantage of this method, besides the fact that it is easy to use (relative to the other techniques), is the ability to predict both initiation and propagation of interface delamination. For analysing and comparing different back end structures, a novel failure index, the so-called Area Release Energy (ARE), was recently developed [van Gils et al., 2005; van der Sluis et al., 2006]. This index predicts the chance of delamination of critical interfaces without knowing a priori the exact location of the delamination. The amount of energy released upon delamination is calculated for any position along a critical interface. The advantage of the ARE method is that it does not require any presupposed position of an initial crack. Instead, at any desired position within the specimen, an area energy release value is calculated, which basically results from releasing an area (of defined dimension) around each point in the specimen. In this thesis, the numerical techniques for interface delamination mainly focus on the J-integral method.


In the ball-on-ring or shaft-loaded blister test, a stainless steel cylindrical shaft with a concave end is attached to the load cell of a universal testing machine. The specimen, with the hole facing up, is placed on a ring support so that the path of the shaft is not obstructed. A steel ball is placed inside the blind hole and the shaft is adjusted to just touch the steel ball. A crosshead speed is set on the universal testing machine, and the applied load versus shaft displacement is recorded throughout the entire loading process. Tay et al. [1999] used this set-up to measure the adhesion in a QFP package between moulding compound and leadframe as a function of temperature and moisture. Their test set-up differs in the sense that the load is placed on the moulding compound instead of on the frame. Using this set-up, Tay et al. [1999] measured an interface strength of 4.5 J/m2 (ψ = 0º) for the leadframe-compound interface. The mixed mode bending test can be regarded as a superposition of the dual cantilever beam test, which is pure mode I, and the end notched flexure test, which is pure mode II. The mixed mode bending test provides stable crack growth over the full range of mode angles [Reeder and Crews, 1990]. Via a lever balance system, the magnitudes of the forces acting on the sample can be changed, and thus the mode mixity can be varied. Merrill and Ho [2004] used a mixed mode loading apparatus to conduct an asymmetric dual cantilever beam test, which exhibits stable crack growth. However, their test apparatus involves a complex loading mechanism, which causes a substantial amount of friction. Thijsse et al. [2006] improved their set-up and applied it to measure the interface strength between moulding compound and leadframe. An advantage of the mixed mode bending test is that only one type of sample geometry is needed. Of course, the mode mixity should not change during crack growth, as this would complicate the interpretation of experimental data. In this thesis, the experimental techniques for interface delamination mainly focus on the four point bending method.


encapsulation cooling, moisture absorption and wave soldering. Galloway and Miles [1997] made an excellent contribution to moisture diffusion modelling and characterization for various kinds of plastic materials. Wong et al. [1998, 2000] followed Galloway's approach and proposed an alternative variable for moisture diffusion modelling. Tee and Zhong [2003] and Tee et al. [2004] developed a fully integrated modelling approach to investigate the moisture behaviour during reflow. Dudek et al. [2002] presented parametric studies on moisture diffusion to investigate popcorn cracking. Liu et al. [2003] and Fan et al. [2004] recently introduced a micromechanics approach to study moisture behaviour and delamination initiation, and studied the impact of non-uniform moisture distribution on the characterisation of hygroscopic material behaviour. In this thesis, the wetness approach is used for moisture diffusion modelling, and measurement results for the diffusivity as a function of humidity and temperature conditions, as well as hygroscopic swelling values for epoxy materials, are presented.
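The wetness approach mentioned above normalises the moisture concentration by its saturated value, w = C/C_sat, so that the field variable is continuous across material interfaces while still obeying Fick's law. A minimal 1D explicit finite-difference sketch of moisture soaking into an epoxy layer, with illustrative (not thesis-specific) values, is:

```python
import numpy as np

# 1D Fickian moisture diffusion in "wetness" form: dw/dt = D d2w/dx2,
# with w = 1 at the exposed surface and zero flux at the mid-plane.
D = 5e-12            # diffusivity (m^2/s), typical order for epoxies
L = 1e-3             # half-thickness of the epoxy layer (m)
nx = 51
dx = L / (nx - 1)
dt = 0.4 * dx * dx / D   # below the explicit stability limit dx^2/(2D)

w = np.zeros(nx)     # initially dry
w[0] = 1.0           # exposed surface held at saturation

t_end = 24 * 3600.0  # 24-hour soak
for _ in range(int(t_end / dt)):
    # explicit update of the interior nodes
    w[1:-1] += D * dt / dx ** 2 * (w[2:] - 2.0 * w[1:-1] + w[:-2])
    w[0] = 1.0       # surface stays saturated
    w[-1] = w[-2]    # symmetry (zero-flux) boundary at the mid-plane
```

The resulting wetness profile decays monotonically from the saturated surface towards the mid-plane; the local moisture concentration in each material is recovered as C = w·C_sat.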

To verify simulation results in microelectronics, different techniques exist to measure package stresses and/or deformations, among which are micro-Raman spectroscopy, interferometry, digital image speckle correlation and strain gauges. Micro-Raman spectroscopy is used to measure silicon stress levels [Chen et al., 2000; Chen et al., 2002]. An important feature of micro-Raman spectroscopy is that the frequency of the silicon Raman peak depends on the level of mechanical stress: under tensile stress the silicon Raman frequency shifts downwards, under compressive stress it shifts upwards. For more complicated stress distributions, the relation between the stress tensor components and the measured Raman shift is not so straightforward, and special modelling is necessary if quantitative stress values are required. Interferometry is mainly used to measure package deformation (warpage). Several techniques exist, such as [Dai et al., 1990; Post et al., 1990; Shield and Kim, 1991]:

• Laser profile interferometry, using a laser beam that scans the surface of the sample;

• Projection/Shadow Moiré interferometry, for out-of-plane deformation measurement;

• Twyman/Green interferometry, for in-plane deformation measurement;

• Holographic interferometry.


Basically, all of the above techniques can be used to measure package deformation, with different levels of resolution and/or sensitivity in the order of 0.3-0.5 μm/fringe. Digital image speckle correlation is a technique that correlates a pair of digital speckle patterns obtained at two different loading conditions [Shi et al., 2004]. By searching for the location of each point of one speckle pattern within the other, maximizing the correlation coefficient of the pair of patterns, the deformation of the specimen under a given loading is determined. Digital image correlation can be used as an experimental tool to characterize the properties of electronic materials and to verify the findings of theoretical and/or numerical models. Other techniques that can be used for this purpose are, for instance, classical strain gauges, which are able to measure strain levels at certain locations in microelectronics products. In this thesis, interferometry measurements are presented to verify the prediction of process-induced warpage of microelectronics devices.
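The correlation search described above can be illustrated in a few lines: shift a subset of the reference speckle pattern over the deformed image and keep the displacement that maximizes the (zero-normalized) correlation coefficient. This is a toy integer-pixel sketch on synthetic data, not the algorithm of Shi et al. [2004].

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation coefficient of two subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def find_displacement(ref, deformed, top, left, size=15, search=5):
    """Integer-pixel displacement of one subset, by maximizing ZNCC."""
    subset = ref[top:top + size, left:left + size]
    best, best_uv = -2.0, (0, 0)
    for du in range(-search, search + 1):
        for dv in range(-search, search + 1):
            cand = deformed[top + du:top + du + size,
                            left + dv:left + dv + size]
            c = zncc(subset, cand)
            if c > best:
                best, best_uv = c, (du, dv)
    return best_uv, best

# Synthetic speckle pattern, rigidly shifted by (2, 3) pixels.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
deformed = np.roll(np.roll(ref, 2, axis=0), 3, axis=1)
uv, c = find_displacement(ref, deformed, top=20, left=20)
# uv recovers the imposed shift (2, 3); c is close to 1.
```

Practical DIC codes refine this with sub-pixel interpolation and subset shape functions, but the core idea of maximizing the correlation coefficient is as shown.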

1.4 Virtual Prototyping


1.5 Objectives and Approach

In this thesis, the research objectives are set as follows:

• The development of a general virtual thermo-mechanical prototyping framework that is able to predict the non-linear responses in microelectronics devices during manufacturing and testing. This includes:

o The development of accurate and efficient DOE and RSM methods.

o The development of accurate and efficient thermo-mechanical prediction models.

• The application of the developed virtual prototyping framework to several reliability problems in microelectronics devices during manufacturing and testing.

To meet the first objective, a general virtual thermo-mechanical prototyping framework is developed; Figure 1.2 shows a schematic representation of the framework, wherein two core building blocks are present:

1. Simulation-based Optimisation Methods

Simulation-based optimisation is optimisation based on, and integrated with, advanced simulation models that can predict product and process behaviour accurately and efficiently. Optimisation here refers to different design needs, such as finding the maximum or minimum, knowing the parameter sensitivity, obtaining a robust design, developing design rules and knowing the probability of failure for a given variation of design parameters.

2. Accurate and Efficient Prediction Models


Both above-mentioned building blocks are seamlessly integrated in order to predict, qualify, optimise and design microelectronics against the actual requirements prior to major physical prototyping, manufacturing investments and reliability qualification tests, in an effective and efficient manner. This is part of the second objective, in which the techniques are applied to various reliability problems in microelectronics. This includes: the occurrence of die cracks in power packages; the interaction between IC and packaging processes to obtain IC package stress design rules; the generation of structural similarity rules for BGA packages in order to reduce reliability testing; and delamination-related reliability problems in exposed pad packages. This variety of practical problems demonstrates the applicability of the developed techniques to a wide range of industrial problems in microelectronics.
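The DOE-plus-RSM step at the heart of the simulation-based optimisation building block can be sketched as follows: evaluate a design of experiments, fit a quadratic response surface to the responses, and read the optimum off the fitted surface. The `simulate` function below is a synthetic stand-in for an expensive FE simulation; all names and values are illustrative, not taken from this thesis.

```python
import numpy as np

# Step 1: a two-level full-factorial DOE augmented with axial and
# centre points, for two design parameters in coded units [-1, 1].
doe = np.array([[-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0], [1.0, 1.0],
                [0.0, 0.0], [-1.0, 0.0], [1.0, 0.0],
                [0.0, -1.0], [0.0, 1.0]])

def simulate(x):
    # Stand-in for an expensive FE run: a smooth response (e.g. a
    # packaging stress) with its minimum inside the design domain.
    x1, x2 = x
    return 3.0 + (x1 - 0.3) ** 2 + 2.0 * (x2 + 0.2) ** 2

y = np.array([simulate(x) for x in doe])

# Step 2: fit a full quadratic response surface by least squares:
# y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
X = np.column_stack([np.ones(len(doe)), doe[:, 0], doe[:, 1],
                     doe[:, 0] ** 2, doe[:, 1] ** 2,
                     doe[:, 0] * doe[:, 1]])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Step 3: the stationary point of the fitted quadratic is the
# candidate optimum of the response surface.
A = np.array([[2.0 * b[3], b[5]], [b[5], 2.0 * b[4]]])
x_opt = np.linalg.solve(A, -b[1:3])
```

Once the cheap surrogate is available, sensitivities, robustness measures and design rules can all be derived from it instead of from repeated FE runs, which is precisely the point of the framework.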

[Figure 1.2 schematic: simulation-based optimisation methods and accurate & efficient prediction models combine into virtual thermo-mechanical prototyping, leading to sustainable profitability.]

Figure 1.2: General virtual thermo-mechanical prototyping framework.

1.6 Outline of the Thesis


techniques. Chapter 4 describes the numerical and experimental techniques used to obtain accurate and efficient prediction models. Chapters 5, 6, 7 and 8 describe the application of the developed virtual prototyping framework through a series of examples. Finally, the conclusions of the present research work and recommendations for future work are presented in Chapter 9. The outline of the thesis is schematically depicted in Figure 1.3. This scheme may serve as a guideline for the reader. Chapters 3 and 4 contain descriptions of techniques and results that are used in the application Chapters 5, 6, 7 and 8.

It should be noted that all chapters are written on the basis of journal publications and/or contributions to conference proceedings; they can be read independently but may contain some overlap of content.

[Figure 1.3 scheme: outline of the thesis]

Chapter 2: Microelectronics Technology

Simulation-based Optimisation Methods:
Chapter 3: Simulation-based Optimisation

Accurate and Efficient Prediction Models (Chapter 4):
4.1: IC Backend Process Induced Warpage
4.2: Process Induced Warpage of Electronic Packages
4.3: Interface Strength Characterisation
4.4: Characterization and Prediction of Moisture Driven Failures

Applications of Virtual Thermo-Mechanical Prototyping:
Chapter 5: Response Surface Modelling for Non-linear Packaging Stresses
Chapter 6: Virtual Prototyping based IC Package Stress Design Rules
Chapter 7: Structural Similarity Rules for the BGA Package Family
Chapter 8: Driving Mechanisms of Delamination Related Reliability Problems in Exposed Pad Packages

Chapter 9: Conclusions and Recommendations


Chapter 2

Microelectronics Technology

The semiconductor industry has seen the continuous development of new and improved processes leading to highly integrated and reliable circuits and packages. From the circuit side, these improvements led to a variety of different technologies to make logic functions, high- and low-voltage transistors, diodes, bipolar transistors and/or field-effect transistors (FET), made in either germanium (Ge) or silicon (Si). From the packaging side, new and cheaper concepts entered the market with better electronic performance; examples are ball grid arrays (BGA), quad flat no-lead (QFN) and chip scale packages (CSP). This chapter describes the major processes used to manufacture electronic devices, with examples for complementary metal oxide semiconductor (CMOS) technology and leadframe-based packages such as QFP. At the end of this chapter, the testing methods used to assess the reliability of microelectronics products are listed, together with the failure modes found as a result of these tests.

2.1 IC Backend Processes

Over 80% of today’s microelectronics products depend on CMOS baseline technology, with Moore’s Law as the guiding light. A basic description of the CMOS device and how it can be made is given here; details about physical design in CMOS technology can be found in [van Zant, 2000]. An IC is a layered stack of substrate and thin films with thicknesses ranging from approximately 100nm to 1μm. For a typical CMOS process these films include semiconductors (as active part), metal interconnects and via plugs (as carrier for current), dielectrics (for electrical isolation) and passivation layers (for mechanical protection). The relatively thick single-crystal silicon substrate serves as ground material and as a mechanical carrier during processing. Total IC processing can be divided into two sequential processes:

1. Frontend process: the active devices (the CMOS transistors) are formed in and on the silicon substrate.

2. Backend process: metal interconnect lines, dielectrics, via plugs and passivation layers are deposited on the frontend wafer. This multi-layered stack of ductile and brittle thin films is deposited by cycles of:

• Deposition of the thin film material (and further treatment if necessary). The growing number of thin film materials has resulted in a variety of deposition techniques. Still, the majority of the films are deposited by a Chemical Vapour Deposition (CVD) technique. Chemicals containing the atoms or molecules required in the final film are mixed and reacted in a deposition chamber to form a vapour. The atoms or molecules deposit on the wafer surface and build up to form a film. During the process the deposited film grows until the required thickness is obtained. Typical process temperature is 400˚C.

• Lithographic patterning. Photolithography is one of the most critical operations in semiconductor processing. It is the process that sets the surface dimensions of the various parts of the devices and circuits. The required pattern is formed by using reticles or photomasks and takes place in two steps. First, the pattern on the reticle or mask is transferred into a layer of photoresist. Exposure to light changes the photoresist from a soluble to an insoluble condition, which enables the formation of the pattern. Typical process temperature is 450˚C.

• Removal of the photoresist. The second step in the patterning is the removal of the soluble photoresist material. The chemistry of photoresists is such that they do not dissolve in the chemical etching solutions.

• Annealing, if necessary. Repair of disrupted crystal structures and/or metal alloying is obtained by a heat treatment between 450 and 1000°C, called annealing.


Process Step | Purpose
1. Surface preparation | Clean and dry surface
2. Photoresist apply | Spin coat a thin layer of photoresist on surface
3. Exposure | Exposure of photoresist via mask / reticle
4. Develop, bake, and inspect | Removal of unpolymerized resist, inspection of surface
5. Etch | Top layer of wafer is removed through opening in resist layer
6. Photoresist removal | Remove photoresist from layer
7. Final inspection | Surface inspection for etch irregularities

Figure 2.1: Schematic representation of the step-wise photomasking process.

Backend processes are carried out in a waferfab, where continuous monitoring takes place. Monitoring is regularly performed after each process step, regarding film thickness, warpage of the wafer and contamination by moisture and dust. From the measured bending of the wafer (warpage), the biaxial stress state in a thin film bonded to a thick substrate can be derived. During and after the backend process, significant stress levels are observed in ICs. The total stress level in a particular film is considered to be the result of two contributions: intrinsic stresses and thermo-mechanical stresses. Intrinsic stresses are induced during the deposition process due to the non-equilibrium microstructure of the film. Subsequently, during cool-down from the deposition temperature, thermo-mechanical stresses occur due to the thermal mismatch between the different materials. Failures observed during and after backend processing can be attributed to these stress levels. Typical backend-related materials are listed in Table 2.1. Figure 2.2 shows a cross-section of a typical IC.
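The warpage-to-stress relation mentioned above is commonly evaluated with the standard Stoney formula. A minimal sketch (not from the thesis; all numerical values below are illustrative assumptions):

```python
# Illustrative sketch: the Stoney relation converts a measured wafer curvature
# into the biaxial stress of a thin film bonded to a thick substrate.

def stoney_film_stress(E_s, nu_s, t_s, t_f, R):
    """Biaxial film stress [Pa] from the substrate curvature radius R [m].

    E_s, nu_s : Young's modulus [Pa] and Poisson ratio of the substrate
    t_s, t_f  : substrate and film thickness [m] (valid for t_f << t_s)
    """
    return E_s * t_s ** 2 / (6.0 * (1.0 - nu_s) * t_f * R)

# Example (assumed values): a 1 um dielectric film on a 725 um silicon wafer
# whose measured warpage corresponds to a curvature radius of 50 m.
sigma = stoney_film_stress(E_s=130e9, nu_s=0.28, t_s=725e-6, t_f=1e-6, R=50.0)
# sigma comes out in the range of a few hundred MPa, the order of magnitude
# typically reported for backend thin films.
```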

Table 2.1: Typical backend materials.

Application | Material
Substrate | Silicon (Si)
Via plugs | Tungsten (W), filled up with Cu / Al
Dielectric | Amorphous Silicon Oxide (SiO2), TetraEthylOxiSilane (TEOS), Polymers (FPI, FPAE), BoroPhosphoSilicate Glass (BPSG), Black Diamond BDI / BDII (SiOC:H)
Planarisation | Spin-on glass (SOG), Silicon Nitride (Si3N4), PhosphoSilicate Glass (PSG), Benzocyclobutene (BCB)

Figure 2.2: Cross-section of a typical IC.

2.2 Packaging Processes


packages developed by the assembly companies. The most commonly used classification is the distinction between:

I. Through Hole Mounted IC Packages II. Surface Mounted IC Packages III. Contactless Mounted IC Packages

Packages are manufactured through a series of sequential processes, widely using metals and polymers in various forms such as in leadframes, encapsulants, adhesives, underfills, moulding compounds and coatings [Seraphim et al., 1989; Harper, 1991; Tummula et al., 1997; Lau, 1998a]. For a surface mounted package the manufacturing process involves:

0. Probing of each individual IC on the wafer. By means of a probing card, each IC on the wafer is tested for the desired electrical function using a so-called probing station. Only functional ICs will be packaged; this is denoted as the ‘Known Good Die’ concept.

1. Grinding, Etching and Sawing of the wafer. ICs are cut out of the wafer and can be used for further ‘single’ processing. This is done at room temperature using a circular sawing or laser cutting process.

2. Die-attach. The single IC is attached to a carrier material, here a metal, using some kind of glue. Typical process temperature is 150-175˚C and strongly depends on the die-attach type. Many different die-attach types exist; all are related either to their function or to specific processing demands. For example, die-attach may be thermally and/or electrically conductive (by adding silver flakes), non-bleeding, snap-cured or oven-cured.

3. Wire bond. Thin metal wires, typically gold, copper or aluminium, connect the bond pads on the IC to the leads of the carrier.

4. Chipcoat. To protect the IC top surface a chipcoat material can be used. Chipcoat is a highly viscous liquid or paste that encapsulates the IC - mostly epoxies or silicones, with some inorganic fillers. Typical process temperature is 150˚C.

5. Mould. IC, wires and carrier are encapsulated by an epoxy. Moulding compounds are very complex mixtures of epoxy resin(s), hardener(s), a mixture of accelerator(s), (very) high filler loadings, adhesion promoters, release agents, flow additives, carbon black, ion trapping agents, stress absorbers and flame retardants [Bressers et al., 2006]. The chemistry of widely used moulding compounds can be described by a combination of different building blocks with epoxy and hydroxyl reactive groups. Phenol novolac and cresol novolac based resins and hardeners are common, but newcomers such as biphenyl- and multi-aromatic based precursors and mixtures thereof are also being used, driven by environmental (‘green’) requirements and/or the need for very good performance under 260°C reflow soldering conditions. The moulding compound is injected under high pressure at a process temperature of about 175˚C, followed by a 3 to 4 hour curing step at 175˚C.

6. Solder plate. The leads are treated to obtain a better contact with the Printed Circuit Board (PCB) in later manufacturing stages. Plating is done at room temperature in a plating bath.

7. Mark, Trim and Form. The package is marked, and redundant material is removed by a trim and form process.

8. Final test. The package is electronically tested to verify that it fulfils the aimed function.

9. Pack. A number of packages are packed in either tubes, trays or reels. This is done for easy transport to the (end-)customer.


[Figure 2.3 flowchart: Probe, Grind, Saw, Die-attach, Wire bond, Chipcoat, Mould, Solder plate, Mark, Trim & Form, Final test, Pack]

Figure 2.3: Assembly process flowchart for a leadframe based package.

Table 2.2: Typical packaging materials.

Application | Material
Leadframe and Heatsink | Copper (Cu), Copper alloys (CuNi3), Iron (FeNi3)
Substrate (BGA-like) | BT (bismaleimide triazine) based; Flame Retardant Type 4 (woven glass reinforced epoxy resin); Tape, teflon or polyimide based (PTFE, PI); Ceramic
Die-attach | Conductive and non-conductive adhesive; Metal-filled epoxies (thermoset) or polyimide siloxanes; Underfill (silica-filled epoxy resins)
Wire bond | Gold (Au), Copper (Cu), Aluminium (Al)
Compound | Granulated and powdered resin, with hardener, accelerator, fillers, flame retardants and other modifiers


[Figure 2.4 annotations: lead, compound, diepad, wire, IC, die-attach]

Figure 2.4: Picture of a) leadframe, b) QFP cross-section and c) 3D view of a package.

2.3 Reliability Testing for IC Packages

Reliability is defined as the probability that a product in operation will survive under certain conditions during a certain period of time [Kuper and Fan, 2006]. From this definition it is clear that all products eventually fail; a failure probability of zero is physically impossible. Nevertheless, for semiconductor devices, zero is approached quite closely, and the probability that a device is returned within the guarantee period is typically expressed in failed ‘parts per million (ppm)’. To uncover specific construction, material and/or process related marginalities, semiconductor devices are qualified using specially designed tests to ensure that they have sufficient life, so that failures do not occur during the normal usage period. These tests are called reliability tests; their specific purposes are to determine failure distributions, evaluate new designs, components, processes and materials, discover safety problems, collect reliability data and perform reliability control.


International Electrotechnical Commission (IEC). Table 2.3 lists typically used test conditions and requirements for currently classified reliability tests.

Table 2.3: Typical reliability tests and their conditions.

Test (abbreviation) | Typical Condition | Typical Requirement
Preconditioning (PRECON) | Per MSL 1 - 6 | Pass level
Temperature Cycling (TMCL) | -65ºC to +150ºC, unbiased | 200 or 500 cycles
Pressure Cooker, Saturated (PPOT) | 121ºC, 100%RH, unbiased | 96 hours
Pressure Cooker, Unsaturated (UPOT) | 130ºC, 85%RH, unbiased | 96 hours
Temperature Humidity Bias, Static / Cycled (THBS/C) | 85ºC, 85%RH, biased | 1000 hours
Highly Accelerated Stress Test (HAST) | 130ºC, 85%RH, biased | 96 hours
High Temperature Storage Life (HTSL) | 150ºC, biased | 1000 hours

A short description of the different test methods is given below.

1. Preconditioning (Precon)


• MSL1: unlimited floor life at 85%RH and a temperature lower than 30°C. In an MSL assessment, the IC package should withstand experimental conditions of 85%RH/85°C for a period of 168 hours.

• MSL3: limited floor life of 168 hours at 60%RH and a temperature lower than 30°C. In an MSL assessment, the IC package should withstand experimental conditions of 60%RH/30°C for a period of 168 hours.

• MSL6: limited floor life of 6 hours at 60%RH and a temperature lower than 30°C. In an MSL assessment, the IC package should withstand experimental conditions of 60%RH/30°C for a period of 6 hours.

2. Temperature Cycling (TMCL)

Temperature cycling is used to simulate both ambient and internal temperature changes that occur during device power-up, operation and storage in controlled and uncontrolled environments. During the test, IC packages are subjected to a typical temperature change from -65°C to 150°C for a number of cycles. Typical numbers of cycles are 200 to 500, depending on the application. For instance, the demand for IC packages aimed at automotive applications (under the hood of a car the temperature change is large) is higher than for those aimed at end-customer applications (for instance a mobile phone).

3. Combined Pressure, Moisture and Temperature (PPOT, THB, HAST)

A combination of pressure, moisture and temperature is used to accelerate the penetration of moisture into the IC package. Bias can be applied to accelerate the test even further. The tests are used to identify failure mechanisms internal to the IC package, such as metal migration, corrosion and dielectric breakdown, and are destructive for the device.

4. High Temperature Storage Life (HTSL)

This test is used to simulate a use environment where a device is continuously powered or stored at high temperature. The IC package is subjected to a temperature of 150°C and is switched on and off continuously. In the test, the operational life is accelerated, and the product is qualified if it can sustain 1000 to 2000 hours under these conditions.


whether the long-term reliability demands can be met. From a virtual prototyping point of view, these tests form part of the loading regime that the IC packages are subjected to.

As a consequence of the reliability tests, which mimic the real-life operation of the device, failures may occur in the IC packages. Figure 2.5 lists the currently observed failure modes that are thermo-mechanically related. The modes include overstress modes, such as die, package, passivation and substrate cracks, and fatigue modes, such as broken wires and solder fatigue. For die crack, surface scratches and cracks in dies may form during packaging processes, such as dicing. If an initial flaw equal to or greater than the critical crack size exists, the die can fracture catastrophically in a brittle manner during temperature cycling. Interface delamination between two adjacent materials is one of the major problems in microelectronics. Moisture ingress, either through the bulk epoxy or along the interface, can accelerate delamination in plastic IC packages. So-called C-mode Scanning Acoustic Microscopy (C-SAM) can be used to identify delamination. From a virtual prototyping point of view, these failure modes are to be transformed into allowable stress and strain levels that form the objectives to be optimised.

Figure 2.5: Thermo-mechanically related failure modes: 1: Package Crack; 2: Warpage; 3: Delamination; 4: Passivation crack; 5: Die Crack; 6: Die Lift; 7: Stitch break; 8: Bond ball lift; 9: Broken wire; 10: Ball neck break; 11: Solder fatigue; 12: Substrate cracks.


Chapter 3

Simulation-based Optimisation

Currently, the microelectronics industry is driven by an experience-based design and qualification method that cannot lead to competitive products with shorter time-to-market, optimised performance, low costs and guaranteed quality, robustness and reliability. Therefore, there is an urgent need to develop and exploit virtual prototyping methods. With pre-knowledge and proper execution, these simulation-based methods can be considerably faster and less expensive, and able to provide more insight than physical prototyping and testing. As mentioned in the introduction, the developed virtual thermo-mechanical prototyping framework consists of two building blocks: accurate and efficient prediction models and advanced simulation-based optimisation methods. This chapter describes the developed simulation-based optimisation methods, based on the theories and methodologies of Design Of Experiments (DOE), Response Surface Models (RSM) and optimisation. The chapter contains three sections. The first section describes the developed strategy, methodology and procedures; the second section covers fundamental DOE, RSM and design optimisation techniques. The final section presents an improved approach, called Efficient Global Optimisation (EGO), in which DOE points are automatically chosen in such a way as to provide accurate response surfaces.

3.1 Strategy, Methodology and Procedures

Figure 3.1 shows the methodology and procedure to conduct the simulation-based optimisation. The procedures consist of the following steps:

1. Problem specification


be any FE code combined with a statistical evaluation code. Once the problem of study is targeted, the next step is to specify the design parameters and their ranges, and to define the design space. These ranges are bounded by values that can be manufactured.

2. Sampling the design space

In this step, one has to select from a number of pre-defined experimental plans to sample the design space. This is similar to generating several physical prototypes and testing them to see how each of them performs. Design of Experiments (DOE) methodologies automatically generate several designs composed of certain combinations of the design parameters. In this thesis, the Latin Hypercube DOE method is used.

3. Generation of Response Surfaces

Using a simulation tool, in this case an FE code, response parameters are generated for each DOE set, for instance deformations, stress levels at certain critical locations, etc. A Response Surface Model (RSM) approximates the response in the form of a functional relationship between the response and the design space. Note that the response surface is an approximation, which should satisfy certain statistical criteria, such as accuracy. In this thesis, both quadratic and Kriging RSM methods are used.

4. Selecting the best design

The selection of the best design among all the alternatives generated in the previous step is achieved through interactive search, visualization and data analysis techniques. This step answers questions such as:

• Which of the design parameters affect the responses the most?

• What amount of change is necessary for each design parameter to achieve the target values for the responses?


[Figure 3.1 flowchart: Problem specification → DOE → Simulation (simulation tool) → Response parameter(s) → RSM → criteria satisfied? If no, return to DOE; if yes, proceed to Optimisation (maximum & minimum, parameter sensitivity, robust design).]

Figure 3.1: Simulation-based optimisation methodology and procedure.

3.2 DOE, RSM and Design Optimisation Techniques

3.2.1 Design of Experiments

The Design of Experiments (DOE) technique is a systematic approach to extract the maximum amount of information from various types of experiments while minimizing their number. Unfortunately, the number of experiments to be done grows exponentially with the number of design parameters. Therefore, the basic challenge is to design the optimal DOE, which includes experiments that provide appropriate information to the model and skips those experiments that are overlapping or not required [Beauregard et al., 1989; Trocine and Malone, 2000]. Until now, many DOE methods have been developed and are available for different kinds of applications. No attempt will be made here to summarize all of those methods. In principle, DOE methods can be classified in two categories, being orthogonal and random designs.

Figure 3.2 shows examples of classic orthogonal designs for three design parameters p1, p2 and p3. The starting point in such a classic orthogonal DOE design for constructing response surfaces is that experiments are subject to noise. This typically holds for physical experiments. This approach of DOE takes the response surface as a deterministic function of which one can only observe noisy values. To control the effect of the noise, linear or quadratic functions are fitted through the responses to obtain the response surface. As such, model fitting becomes a statistical parameter estimation issue that can be resolved by using regression techniques [Green and Launsby, 1995; Montgomery, 2005]. In order to get more confidence in the eventual response surface, the number of experiments can be increased. In case of orthogonal designs, the number of experiments, N, increases exponentially with the number of design parameters, p:

N = 2^p for a 2-level DOE
N = 3^p for a 3-level DOE   (3.1)

In fact, orthogonal designs can be accepted only for a limited number of design parameters; otherwise the number of experiments becomes too large. Besides this, other disadvantages are:

• Initially it is usually not clear which factor is important and which not. Since the underlying function is deterministic there is a potential hazard that some of the initial design points collapse and one or more of the time consuming computer experiments become useless. This is called the collapse problem.

• Most classic DOEs are only applicable for rectangular design regions.
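As a small illustration of the exponential growth expressed by equation (3.1), a full-factorial (orthogonal) design can be generated directly; the sketch below is illustrative and not part of the thesis:

```python
from itertools import product

def full_factorial(levels, p):
    """All level combinations for p design parameters: levels**p runs, cf. eq. (3.1)."""
    return list(product(range(levels), repeat=p))

runs_2lvl = full_factorial(2, 5)  # 2-level DOE, 5 parameters: 2**5 = 32 experiments
runs_3lvl = full_factorial(3, 5)  # 3-level DOE, 5 parameters: 3**5 = 243 experiments
```

Adding one more parameter multiplies the number of experiments by the number of levels, which is why orthogonal designs quickly become impractical.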

[Figure 3.2: three orthogonal design schemes a), b) and c) in the design parameters p1, p2 and p3.]


In the case that the DOE is built up from a certain number of computer simulations, and as long as the simulation models are verified and reliable, the concern for experimental noise can be eliminated. In case of numerical experiments it is almost a rule of thumb to use experiments that are:

• Space filling. Space filling indicates that numerical experiments are evenly spread out throughout the design region.

• Non-collapsing. Every experiment gives information about the influence of the other design parameters on the response.

• Sequential. Numerical experiments allow minimizing the required number of experiments by selecting an initial scheme and then carrying out additional experiments in order to improve the accuracy of the response surface.

• Able to handle non-box design regions. In most cases the feasible design region is non-box as points outside this region may have no physical interpretation.


design. The required number of experiments for a LH design is determined by the complexity of the underlying model (linear, non-linear, continuous or discontinuous). Figure 3.3 gives a schematic representation of a LH design in case of two design parameters, p1 and p2.

Figure 3.3: Example of a random LH design in case of two design parameters p1 and p2. Each black dot represents one experiment.
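A random LH design of the kind sketched in Figure 3.3 can be generated in a few lines. The following is an illustrative stdlib-only sketch, not the thesis implementation:

```python
import random

def latin_hypercube(n_samples, n_params, seed=0):
    """Random Latin Hypercube design on the unit cube [0, 1]^n_params.

    Each parameter range is divided into n_samples equal strata, and every
    stratum is used exactly once per parameter (space filling, non-collapsing).
    """
    rng = random.Random(seed)
    columns = []
    for _ in range(n_params):
        # one random point inside each stratum, then shuffled
        column = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(column)
        columns.append(column)
    # transpose the per-parameter columns into a list of sample points
    return [tuple(col[i] for col in columns) for i in range(n_samples)]

points = latin_hypercube(n_samples=5, n_params=2)
```

Projecting the points onto either parameter axis hits every stratum exactly once, which is the non-collapsing property mentioned above.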

3.2.2 Response Surface Models


1996; Meyers 1999]. In this thesis, the least squares based RSM methodology and the stochastic interpolating Kriging based method are used.

Given a set of observations, one often wants to condense and summarize the data by fitting it to a model that depends on adjustable parameters. The basic relation between a given set of input design parameters X and measured and/or simulated output variables Y can be described as:

Y = f(X, Z)   (3.2)

where X = (x1, …, xi), Y = (y1, …, yi) and Z reflects the unknown or uncontrollable variables, such as noise. In case of physical experiments this noise cannot be neglected, and approximating RSM techniques are the most common response surface functions to use. Polynomials are mainly used because of their simplicity. Examples are linear and quadratic functions, described as:

y = \alpha_0 + \sum_{i=1}^{k} \alpha_i x_i   (linear functions)

y = \alpha_0 + \sum_{i=1}^{k} \alpha_i x_i + \sum_{i=1}^{k} \alpha_{ii} x_i^2 + \sum_{i=1}^{k-1} \sum_{j=i+1}^{k} \alpha_{ij} x_i x_j   (quadratic functions)   (3.3)

where \alpha = (\alpha_0, …, \alpha_{ij}) is the set of adjustable parameters. The general method of least squares is to determine the parameters \alpha in the model such that the sum of squares of the error attains a minimum, that is:

\min_{\alpha} \sum_{i=1}^{k} [y_i - y(x_i; \alpha)]^2   (3.4)

Other approximation functions that can be used are higher order polynomials and spline functions. The basic approach is as follows: choose or design a function that measures the agreement between the data and the model for a particular choice of parameters, then adjust the parameters of the model to achieve a minimum of this function, yielding the best-fit parameters. The linear least squares method, which minimises the sum of squares of the error between the approximated value and the exact observation, can be shown to be a maximum likelihood estimator.
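A minimal sketch of the least squares fit of equations (3.3)-(3.4) for one design parameter, solving the normal equations directly (illustrative, not the thesis implementation):

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = a0 + a1*x + a2*x**2 via the 3x3 normal equations."""
    basis = lambda x: (1.0, x, x * x)  # monomial basis (1, x, x^2)
    A = [[sum(basis(x)[i] * basis(x)[j] for x in xs) for j in range(3)]
         for i in range(3)]
    b = [sum(basis(x)[i] * y for x, y in zip(xs, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting, then back substitution.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    a = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        a[r] = (b[r] - sum(A[r][c] * a[c] for c in range(r + 1, 3))) / A[r][r]
    return a

# Noise-free samples of y = 2 + 3x + 0.5x^2 are recovered up to rounding error.
xs = [0, 1, 2, 3, 4]
coeffs = fit_quadratic(xs, [2 + 3 * x + 0.5 * x * x for x in xs])
```

With noisy responses the same procedure returns the best-fit (maximum likelihood) coefficients instead of the exact ones.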


is based on considering the deterministic response y(X) as a realization of a stochastic process, which means that the error is replaced by a random process. One of the most popular methods for such stochastic model interpolation is Kriging. Kriging is extremely flexible due to the wide range of correlation functions that may be chosen. Depending on the choice of correlation function, Kriging can either result in exact interpolation of the data points or in smooth, inexact interpolation. It is worth noticing that Kriging is different from fitting splines and is in fact often considered to perform even better than splines. Numerical analysis is deterministic and not subject to measurement error; therefore the usual uncertainty derived from least squares residuals has no meaning. Because of this, the response model can be treated as a combination of a polynomial model and an additional factor representing the deviation from the assumed model. The most straightforward way to fit a response to data is by linear regression (approximation of the sampled data):

y = \sum_{i=1}^{k} \alpha_i f_i(x_i) + \varepsilon(x_i)   (3.5)

Where f(xi) can be a linear or non-linear function, α are the unknown coefficients to

be estimated and finally ε(xi) are error terms of the systematic deviation with a normal

distribution. In this way we can get an interpolation of the sampled data. The function

ε(xi) represents the realization of a stochastic process and is assumed to have zero

mean; the covariance V between ε(u) and ε(v) for two inputs u and v is given by:

V(u, v) = \sigma^2 R(u, v)   (3.6)

where \sigma^2 is the process variance and R(u, v) is a correlation function.

The covariance structure of ε relates to the smoothness of the approximating surface. For a smooth response, a covariance function with some derivatives might be adequate, whereas an irregular response might call for a function with no derivatives. The fitting procedure can be viewed as a two-stage problem:

• Calculation of the generalized least-squares predictor.

• Interpolation of the residuals at the design points as if there were no regression. Because computer simulation is deterministic by nature, the error is totally due to modelling error and not to e.g. measurement error, and then it is justified to treat the error εi as a continuous function of xi. As the error is a continuous function then

(47)

points. If the points are close together, then the errors should also be similar, which means high correlation. Following this approach it can be assumed that the correlation between errors would be related to the distance between the corresponding points. A special weighted distance formula can be used, which in comparison to the Euclidean distance does not weight all the variables equally:

= − Θ = k h p h j h i j i h x x x x d 1 ) , ( (3.7)

Where Θ≥0 and ph[1,2]. Using this distance function, the correlation between the

errors can be defined as follows:

[

]

( , ) ) ( ), ( d xixj j i ij corr x x e r = ε ε = − (3.8)

The so-defined correlation function has the obvious properties that in case of a small distance the correlation is high, while in case of a large distance the correlation approaches zero. The values of the correlation function r_{ij} define the correlation matrix R, and it is possible to get a simple linear regression model and avoid a quite complicated functional form of the response. The functional form of the stochastic interpolating Kriging technique can be written as:

f(x_1, …, x_k) = μ + Σ_{i=1}^{n} δ_i·exp(−Σ_{j=1}^{k} θ_j·|x_j − x_ij|^{p_j})    (3.9)

To define the Kriging model, 2k+2 parameters have to be estimated: μ, σ², θ_1,…,θ_k and p_1,…,p_k. This task can be achieved by maximizing the likelihood function F of the sample, which is defined as follows:

F = 1 / [(2π)^{n/2}·(σ²)^{n/2}·|R|^{1/2}] · exp[−(y − 1μ)′ R^{−1} (y − 1μ) / (2σ²)]    (3.10)

where 1 denotes the n-vector of ones and y denotes the n-vector of observed function values. One of the most crucial factors of the Kriging method is the estimation of the correlation parameters θ_j and p_j.
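To make Eqs. (3.6)–(3.10) concrete, the following Python sketch builds the correlation matrix R and evaluates the maximum-likelihood estimates of μ and σ² for fixed θ and p (in practice θ and p are themselves found by maximizing F numerically); the sample data are illustrative, not from the thesis:

```python
import numpy as np

def corr_matrix(X, theta, p):
    # Pairwise application of Eqs. (3.7)-(3.8) over the n design points.
    n = X.shape[0]
    R = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.sum(theta * np.abs(X[i] - X[j]) ** p)
            R[i, j] = R[j, i] = np.exp(-d)
    return R

def mle_mu_sigma2(y, R):
    # For fixed theta and p, maximizing F in Eq. (3.10) gives closed forms:
    #   mu     = (1' R^-1 y) / (1' R^-1 1)
    #   sigma2 = (y - 1 mu)' R^-1 (y - 1 mu) / n
    n = len(y)
    ones = np.ones(n)
    Rinv = np.linalg.inv(R)
    mu = (ones @ Rinv @ y) / (ones @ Rinv @ ones)
    r = y - mu * ones
    sigma2 = (r @ Rinv @ r) / n
    return mu, sigma2

# Toy sample: n = 4 design points in k = 1 dimension.
X = np.array([[0.0], [0.3], [0.6], [1.0]])
y = np.sin(2.0 * np.pi * X[:, 0])
theta, p = np.array([2.0]), np.array([2.0])

R = corr_matrix(X, theta, p)
mu, sigma2 = mle_mu_sigma2(y, R)
```

Plugging these μ and σ² back into Eq. (3.10) leaves a concentrated likelihood in θ and p only, which is what a numerical optimiser then maximizes.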


3.2.3 Design Optimisation, Robust Design and Parameter Sensitivity

Design optimisation deals with the selection of the “best” alternative amongst the many possible designs [Schwefel, 1981; Gill et al., 1981; Rao, 1996; Edwards and Jutan, 1997; Papalambros and Wilde, 2000]. It involves the:

• Selection of a set of variables to describe the design alternatives.

• Selection of one or more objectives, expressed in terms of the design variables, which we seek to optimise.

• Determination of a set of constraints, expressed in terms of the design variables, which must be satisfied by any acceptable design.

• Determination of a set of values for the design variables that optimise the objective(s), while satisfying all the constraints.

In mathematical terms, design optimisation amounts to minimizing the set of functions describing the relation between the design parameters and the output variables, subject to the above constraints:

min f(x) = (f_1(x), …, f_k(x))^t,  x ∈ ℝ^n    (3.11)

Note that the above functions (objectives and constraints) can be linear or non-linear, continuous or discontinuous, and may involve continuous as well as discrete design variables. There are two major classes of methodologies for solving general optimisation problems: local and global optimisation methods.

Local optimisation methods assume continuity and a unique optimal solution. In solving optimisation problems with the aim of finding a local optimum, gradient-based numerical iterative methods can be deployed. A number of such methodologies exist, utilizing first-order derivatives of the objective and constraint functions in order to iteratively progress towards the optimum. Two well-known and established methodologies are the Sequential Quadratic Programming method and the Generalized Reduced Gradient method [Powell, 1978; Papalambros and Wilde, 2000]. Other methods exist as well, such as the Method of Moving Asymptotes [Svanberg, 1987].
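As a minimal sketch of the gradient-based iteration that such local methods share (plain steepest descent rather than full SQP or GRG, applied to an arbitrary unconstrained quadratic objective chosen for illustration):

```python
import numpy as np

def steepest_descent(grad, x0, lr=0.1, tol=1e-8, max_iter=10_000):
    # Iterate against the gradient until the first-order optimality
    # condition ||grad f(x)|| ~ 0 is (approximately) satisfied.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - lr * g
    return x

# Smooth convex objective with its minimum at x* = (1, -2).
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 2.0) ** 2
grad_f = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)])

x_opt = steepest_descent(grad_f, x0=[0.0, 0.0])
print(x_opt)  # converges to approximately [1.0, -2.0]
```

Production methods such as SQP replace the fixed step by a quadratic subproblem and handle the constraints of Eq. (3.11) explicitly, but the iterative, derivative-driven skeleton is the same.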
