8
Chemical and Morphological Characterization

In Chapter 7 we dealt with several physical–chemical aspects. In this chapter we discuss the usual characterization methods for coating materials and coatings. These methods can be divided into chemical and morphological (this chapter), thermomechanical (Chapter 9), and rheological (Chapter 10) characterization. Up front we have to state that any in‐depth discussion is out of the question. The purpose of the present chapter is therefore, even more so than for Chapter 7, to make readers aware of the possibilities of generally available techniques and of a few less well‐known (and possibly less available) techniques that are nevertheless rather useful. Although written with bulk polymers in mind, a rather useful, though not exhaustive, more in‐depth overview is given by Simon [1], while a similar review for surface characterization is provided by Stamm [2].

8.1 The Need for Characterization

Materials are typically characterized to some (smaller or larger) degree, and coating materials form no exception. A characterization method comprises sampling and sample preparation including the possibly required solvents or other reagents, the instrument(s) involved including calibration, and data collection and interpretation. The choice of a method depends on several factors, including availability, cost, and personal preference, and more than one method may yield an appropriate answer. Furthermore, the various methods capable of solving the same problem almost inevitably will have a different detection limit and a different resolution/accuracy. Therefore, it is essential to realize what is really required to solve the problem at hand.

For coatings, chemical characterization plays a role at least twice. The first is for the starting materials to be used: resins, crosslinkers, pigments, and additives. Here one is usually interested in aspects such as purity, dispersity, and effective functionality. The second is for the network formed after crosslinking. In this case one wants to know, for example, the extent of the curing reaction, the amount of leachables, and the homogeneity of curing over the film thickness or a possible anisotropy. Finally, one might also be interested in the kinetics of the resin and/or network formation. The morphology of the coating realized is the result of the chemistry and the processing conditions used, such as time, temperature, atmosphere, and wet film thickness.

In the following sections we first discuss the most common chemical characterization methods, applicable to both resins and networks (or their leachables). For chemical characterization nowadays, mainly spectroscopic techniques are used. We note that for interaction between radiation and a molecule to be possible, there must be some electric or magnetic change produced by the dynamics of the molecule so that it can interact with the electric or magnetic component of the radiation. The main tools for polymer architecture characterization are infrared (IR) and nuclear magnetic resonance (NMR) spectroscopy. For surface characterization, X‐ray photoelectron spectroscopy (XPS), secondary ion mass spectrometry (SIMS), and low energy ion scattering (LEIS) are options. For molar mass distributions, size exclusion chromatography (SEC) and matrix‐assisted laser desorption/ionization (MALDI), normally combined with mass spectrometry (MS), yield the required information. For functional group analysis, classical titration techniques are employed. Second, we deal with the common morphological characterization methods. Diffraction techniques like X‐ray diffraction (XRD) provide information on the crystallinity and/or secondary structure. Microscopic techniques yield morphological information at various length scales, and we discuss optical microscopy (OM), electron microscopy (EM), and scanning probe microscopy (SPM).

Throughout this chapter we illustrate a number of these techniques with spectra obtained for a polycarbonate, synthesized by cationic ring‐opening polymerization of trimethylene carbonate (TMC) over a trifunctional initiator, trimethylolpropane (TMP), at 90 °C using fumaric acid as catalyst at an acid/initiator ratio of 2.5 (see Figure 8.1 [3]), as this polymer shows some characterization issues.

Figure 8.1 Synthesis of polycarbonate using TMP and TMC [3].

8.2 IR and Raman Spectroscopy

Both IR and Raman spectroscopy make use of the vibrations of molecules. In essence, an N‐atom molecule has six external coordinates (overall translation and rotation) and 3N − 6 internal coordinates, representing the internal vibrations, rotations, and librations. These 3N − 6 internal coordinates can be decoupled into what are called normal coordinates by a coordinate transformation. To a good first approximation, these normal coordinates are independent coordinates that can each be described by a harmonic oscillator. In brief, a harmonic oscillator describes the motion of a (pseudo‐)particle in a parabolic potential given by φ = ½k(r − r_eq)², where k is the force constant, r the position (normal) coordinate, and r_eq its equilibrium value. Classically, the frequency is given by ν = (1/2π)(k/μ)^1/2, or equivalently the angular frequency by ω = 2πν = (k/μ)^1/2, where μ is the reduced mass (recall, for example, that for a diatomic molecule having atoms with mass m_1 and m_2, 1/μ = 1/m_1 + 1/m_2). Equivalently, one uses the wave number ϖ = (1/2πc)(k/μ)^1/2, where c is the speed of light. However, quantum mechanics is required, which leads to the set of energy levels E_n = (n + ½)ℏω or, in wave numbers, ε_n = E_n/hc = (n + ½)ϖ, with n the quantum number and ℏ = h/2π the reduced Planck constant. For example, for water we have 3N − 6 = 3 normal coordinates, conventionally labeled as the symmetric stretching vibration (ϖ = 3652 cm−1), the symmetric bending vibration (ϖ = 1595 cm−1), and the antisymmetric stretching vibration (ϖ = 3756 cm−1). In this case the normal coordinates are still easily visualized, but with increasing number of atoms, this becomes increasingly difficult. Quantum mechanics teaches us further that there is a selection rule for absorption saying that only transitions with Δn = ±1 are allowed. This implies that absorption occurs at frequency ν or at wave number ϖ. Obviously, if the potential is only approximately a parabola, the behavior is not exactly described by the harmonic oscillator, and we have the anharmonic oscillator. Although the harmonic oscillator description is usually very good, anharmonicity leads to the modified selection rule Δn = ±1, ±2, ±3, and so on. Usually only a few Δn values are relevant, say, Δn = ±1, ±2, and ±3, which are referred to as the fundamental absorption and the first and second overtone, respectively. For polyatomic molecules, we will also have combination and difference bands due to the addition and subtraction of two fundamental frequencies or overtones. Fortunately, the fundamental absorption is usually by far the strongest.
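
As a side note, the relation ϖ = (1/2πc)(k/μ)^1/2 is easily evaluated numerically. The minimal Python sketch below estimates the fundamental wavenumber of a diatomic oscillator; the force constant of roughly 1860 N m−1 assumed for CO is an illustrative, literature‐typical value, not one quoted in this chapter.

```python
import math

# Physical constants
c_cm = 2.998e10        # speed of light in cm/s, so the result comes out in cm^-1
amu = 1.66054e-27      # atomic mass unit in kg

def wavenumber(k, m1, m2):
    """Fundamental vibrational wavenumber (cm^-1) of a diatomic harmonic
    oscillator with force constant k (N/m) and atomic masses m1, m2 (amu)."""
    mu = (m1 * m2) / (m1 + m2) * amu      # reduced mass in kg
    omega = math.sqrt(k / mu)             # angular frequency in rad/s
    return omega / (2 * math.pi * c_cm)   # wavenumber = omega / (2*pi*c)

# Illustrative example: CO with an assumed force constant of ~1860 N/m
print(f"CO stretch: {wavenumber(1860, 12.000, 15.995):.0f} cm^-1")  # ~2145 cm^-1
```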

In vibrational spectroscopy, typically a wave number range of ϖ = 4000–667 cm−1 is used. Normal coordinates can be divided into skeleton vibrations and characteristic group vibrations. Skeleton vibrations, in which all atoms are involved to much the same extent, are typically found in the range 1400–700 cm−1. It is seldom possible to assign particular bands to specific modes without doing extensive calculations, but the whole complex of bands is highly typical of the molecular structure at hand. Changing a substituent on the chain or in a ring often changes the absorption significantly, leading to what are called fingerprint bands. A moiety may be recognized merely from the appearance of this part of the spectrum. The interesting point is that in the characteristic group vibrations only a small portion of the atoms in the molecule is involved and that these vibrations are often almost independent of the structure of the molecule as a whole. With a few exceptions, they fall well in ranges above and below that of the skeleton vibrations. Due to this near independence of the structure as a whole, their values can be largely transferred from one molecule to another. For example, the C=O stretching vibration is approximately such a normal coordinate, and the wave number range of 1600–1750 cm−1 is characteristic for this bond. Similarly, for the C-H bond we have 2800–3000 cm−1. This is true for quite a few structural features of molecules, and the molecule may be recognized from such bands. In fact, this is the only reason why vibrational analysis is such a useful tool: one can, from the spectra, with hardly any calculation at all, assess whether such a characteristic group is present or not. Table 8.1 displays the characteristic frequencies for a few of these groups in organic molecules [4]. An extensive compilation of characteristic frequencies is given in [5].

Table 8.1 Characteristic vibrations for several groups.

Group                ϖ (cm−1)
-O-H                 3600
-NH2                 3400
≡C-H                 3300
=C-H (aromatic)      3060
=CH2                 3030
-CH3                 2970 (asym. stretch), 2870 (sym. stretch), 1460 (asym. deform.), 1375 (sym. deform.)
-CH2-                2930 (asym. stretch), 2860 (sym. stretch), 1470 (deform.)
-SH                  2580
-C≡N                 2250
-C≡C-                2220
>C=O                 1750–1600
>C=C<                1650
>C=N-                1600
>C=S                 1100
-NCO                 2300–2250 (asym. stretch), 1460–1340 (sym. stretch), 650–580 (deform.)
-O-CO-NH2            3450–3400 (asym. stretch), 3240–3200 (sym. stretch)
Ring and other groups (structures not reproduced)   1280–120 (ring vib.), 1200–1000, 1050, 950–750 (ring vib.), 880–780 (ring vib.), 725, 650, 550

Not all normal coordinates respond to IR radiation: the normal coordinate should yield a change in dipole moment; otherwise the IR absorption is zero. Similarly, for Raman analysis the normal coordinate should display a change in polarizability. For a simple molecule such as water, straightforward considerations show that all three normal coordinates are Raman active. For complex molecules, group theory can be used to determine which normal coordinates are Raman active and which are not. Generally, symmetric modes produce relatively strong Raman lines. Also, if a molecule has a center of symmetry, Raman active vibrations are IR inactive and vice versa. If such a center is not present, some but not necessarily all vibrations may be both IR and Raman active. Finally, we note that under the action of light of sufficient energy, the molecules may add to or subtract from the light a small amount of energy corresponding to the energy of some particular vibration, thereby giving the outgoing light a slightly different wavelength from the incident light. The light is said to be scattered instead of absorbed.

The IR signal is recorded by a transmission measurement in the appropriate wavelength regime. The absorbance A is expressed by the relation A = εcl, where c is the concentration, l the path length, and ε the extinction coefficient, characteristic for the material. Fourier techniques, largely aided by the availability of the fast Fourier transform (FFT) algorithm, render the achievable accuracy mainly dependent on the number of scans (for given equipment). One usually refers to the results as Fourier‐transform infrared spectroscopy (FTIR) spectra. The signal can be measured not only in transmission but also in reflection, as in the attenuated total reflection (ATR) technique. With these instruments the reflected signal is recorded through an IR transparent crystal (such as diamond or germanium) that can be pressed against a solid sample. This avoids both dissolution and/or powdering of the coating material, thereby making the technique nondestructive. In this case, quantification depends not only on the equipment but also on the surface roughness of the sample, as well as on the condition that the first few micrometers are representative of the bulk material, as the penetration depth is limited to about that value. Obviously, this condition is not necessarily fulfilled.
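
As a minimal numerical illustration of A = εcl, the sketch below solves the relation for the concentration; the extinction coefficient and path length used are purely hypothetical.

```python
def concentration(absorbance, epsilon, path_length):
    """Beer-Lambert relation A = eps*c*l solved for the concentration c.
    Units must be consistent, e.g. epsilon in L mol^-1 cm^-1 and l in cm."""
    return absorbance / (epsilon * path_length)

# Hypothetical example: A = 0.85, eps = 420 L mol^-1 cm^-1, l = 0.01 cm
c = concentration(0.85, 420.0, 0.01)
print(f"c = {c:.2f} mol/L")
```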

In Figure 8.2 we show, as an example of an ATR FTIR result, the spectrum of a polycarbonate film [3]. It will be clear not only that the observed features are consistent with the chemical or primary structure but also that secondary structural features, such as branching, cannot be assessed. Generally, the kinetics of film formation and curing for a range of temperatures can be studied using films deposited on a Si wafer (which is largely IR transparent) by following the intensity change of a characteristic group, for example, the NCO group. For following other chemical changes over the depth, for example, as they occur during weathering, a technique sometimes denoted as beveling can be used, in which the specimen is ground at a small angle with the surface, so that the depth profile is significantly enlarged and becomes amenable to analysis. For a detailed example, see [6].

Figure 8.2 Baseline‐corrected ATR FTIR spectrum of polycarbonate showing the C-H stretching (2955 cm−1), the >C=O stretching (1782 cm−1), and the asymmetric C-O vibration (1188 cm−1).

Combining spectroscopic techniques with microscopy has led to IR and Raman microscopy, in which the signals are recorded locally. The lateral resolution δ is determined by the laser wavelength λ and the numerical aperture (NA) used via δ ≅ 0.61λ/NA. For a true confocal design (which incorporates a fully adjustable confocal pinhole aperture), a depth resolution on the order of 1–2 µm is possible, allowing individual layers of a sample to be analyzed discretely. The achievable depth resolution depends strongly on the laser wavelength, the microscope objective, and the sample structure. By scanning the surface, chemical mapping can be done.
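
A quick estimate of the attainable lateral resolution follows directly from δ ≅ 0.61λ/NA; the wavelength and numerical aperture in the sketch below are illustrative choices, not values quoted here.

```python
def lateral_resolution(wavelength_nm, numerical_aperture):
    """Diffraction-limited lateral resolution delta ~= 0.61*lambda/NA (in nm)."""
    return 0.61 * wavelength_nm / numerical_aperture

# Illustrative: 532 nm laser with a 0.9 NA objective
print(f"delta ~ {lateral_resolution(532, 0.9):.0f} nm")   # roughly 360 nm
```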

For example, for good adhesion between polyolefins and other materials, it is usually necessary to pretreat and prime the polyolefinic substrate (see Section 9.8.4). One type of such primers is waterborne poly(urethane–urea)s (wPUUs) that can be used on biaxially oriented polypropylene (BOPP) substrates [7]. The type of chain extender used influences the intermolecular hydrogen bonding efficiency, the molecular ordering, and the intermolecular interactions with water molecules, thereby significantly affecting the final dry and wet adhesion properties of the primer/BOPP films. A wPUU primer using hydrazine as chain extender, with a good parallel orientation of the chains to the polymer surface, appeared to be the most efficient primer due to better phase separation driven by higher hydrogen bonding efficiency. Figure 8.3a shows the grazing angle Fourier‐transform infrared spectroscopy (GA‐FTIR) spectra for these wPUUs, while Figure 8.3b and Figure 8.3c show a map of the C=O stretching region for the optimum wPUU after deposition on BOPP and after delamination of a laminated BOPP film using a T‐peel test (see Section 9.8.5). Here mapping in combination with GA‐FTIR was used in view of the rather small thickness of the primer layer. After deposition a rather homogeneous coverage of the wPUU was observed, but after delamination certain areas show the presence of wPUU, while in other areas the wPUU signal is absent. This indicates that adhesive failure at the BOPP–primer interface occurred. The overall result is a good adhesion, even after storing the laminates for one week in water.

Figure 8.3 GA‐FTIR mapping. (a) Spectra of biaxially oriented polypropylene (BOPP) and BOPP coated with different PUUs; (b) C=O stretch signal after deposition of PUU1 on BOPP, showing mainly PUU1 signal; (c) C=O stretch signal after delamination of laminated BOPP, showing both areas with PUU1 signal and areas without PUU1 signal (lower area). For both (b) and (c), the field of view is 250 µm².

IR microscopy in combination with the introduction of a depth profile in a coating can provide a detailed, depth‐dependent quantitative chemical characterization, as has been shown for an artificially degraded polyester–urethane clear coat [6]. A consistent method for data analysis and an accurate determination of the depth profile are important in this respect. Both aspects can be dealt with by a proper normalization on an internal standard and by optical profilometry measurements, respectively.

Another development is the use of tip‐enhanced Raman spectroscopy (TERS). In this technique the electromagnetic field is enhanced by a sharp conductive tip approaching a film deposited on a Ag or Au substrate. Significant enhancement can be realized (a factor of 10–10³, dependent on conditions and on the definition of the enhancement factor), which is a distinct advantage as the Raman signal is generally weak. The technique can be combined with mapping, so that a map of a characteristic frequency can be obtained over a certain area. A topographical resolution down to 15 nm can be reached, while the lateral resolution is determined by the tip radius. Figure 8.4 provides an example of the results of this technique showing the phase separation between poly(methyl methacrylate) (PMMA) and poly(styrene‐co‐acrylonitrile) (SAN) in a thin film and the determination of the width of the transition region between the phase‐separated regions [8].

Figure 8.4 High‐resolution chemical identification of polymer blend thin films of PMMA and SAN using tip‐enhanced Raman mapping after annealing at 250 °C for, respectively, 2 and 5 min. Coarsening during annealing is clearly observed, while the interface width is determined to be about 200 nm, in good agreement with the width as predicted by Flory–Huggins theory. Source: Xue et al. 2011 [8]. Reproduced with permission of American Chemical Society.

The literature on IR and Raman spectroscopy is large. The classic reference on IR and Raman spectroscopy of molecules is [9]. A more recent treatise is [10], while [11–14] deal specifically with polymers. A concise discussion can be found in [15].

8.3 NMR

In NMR techniques, one employs the nuclear spins of the elements of a sample positioned in a stationary magnetic field, which are excited by a radio frequency (RF) electromagnetic field. As we change the frequency of the RF field, the nuclei that are lined up by the stationary field absorb energy and flip their orientation. The technique is most often used for molecules (resins, extractables) in solution, for example, in deuterated chloroform, but can be applied to solvent‐swollen samples as well as to samples in the solid state. Since the resonance frequency in solutions is usually sharp, we also have a sharp response. In solution NMR the most frequently used nucleus is 1H. Other nuclei used are 13C, 15N, 19F, 29Si, and 31P.

In brief [16], all nuclei with an odd mass number possess a spin with angular momentum ℏI, where ℏ = h/2π is the reduced Planck constant and I an odd integral multiple of ½. Nuclei with even mass number either are spinless, if the nuclear charge is even, or possess a spin I with value 1, 2, 3, etc. Having both a spin and a charge renders the nucleus to have a magnetic moment μ = γ_nℏI = g_nβ_nI, where γ_n is the gyromagnetic ratio of the nucleus and g_n the nuclear factor. The nuclear magneton β_n = eℏ/2m_pc combines e and m_p, the charge and mass of the proton, respectively, while c is the velocity of light. The values of g_n and I are the quantities that distinguish the nuclei. The spin states are quantized such that the component m_I in any given direction can take the values I, I − 1, …, −I, leading to 2I + 1 components. Hence, for a proton with I = ½, to which we limit our discussion mainly from now on, we have two states, |α〉 with m_I = +½ and |β〉 with m_I = −½. Applying a magnetic field H, the interaction between the field and the moment is given by

8.1  ℋ_H = −μ·H = −g_nβ_nHI_z

if we take the direction of the field along the z‐axis. In a macroscopic assembly of protons subjected to a magnetic field H, the distribution between α and β spins is governed by the Boltzmann distribution. In thermal equilibrium, the number N_β of β spins divided by the number N_α of α spins reads exp(−ΔE/kT) with the energy difference ΔE = g_nβ_nH. To induce transitions between these two levels, one applies an oscillating electromagnetic field with frequency ν (or angular frequency ω = 2πν) and thus with energy hν = ℏω. Resonance occurs when

8.2  hν = ℏω = ΔE = g_nβ_nH

Quite generally, transitions between levels a and b occur as dictated by the transition probability

8.3  P_ab = (2π/ℏ)|〈a|V|b〉|² δ(E_a − E_b − ℏω)

Here, V is the perturbation that mixes states a and b and δ(x) the Dirac delta function.1 It holds that P_ab = P_ba ≡ P. The total number of spins is N = N_α + N_β, the difference is n = N_α − N_β, so that N_α = ½(N + n) and N_β = ½(N − n). The rate of change of state |α〉 is then given by

8.4  dN_α/dt = P(N_β − N_α) = −Pn

Therefore, dn/dt =  − 2Pn or

8.5  n(t) = n(0) exp(−2Pt)

where n(0) is the difference at time t = 0. The rate of absorption of energy reads

8.6  dE/dt = N_αPℏω − N_βPℏω = nPℏω

which approaches zero for n → 0. This state of affairs is labeled as saturation. However, the (nonradiative) interactions between the nuclei and the surroundings inevitably lead the spin configuration to change, a process that is called spin–lattice relaxation.2 The consequence is that the upward and downward relaxation rates become W_αβ and W_βα, for which W_αβ ≠ W_βα. By analogy we now obtain

8.7  dN_α/dt = W_βαN_β − W_αβN_α

In thermal equilibrium dN_α/dt = 0, and we have N_β0/N_α0 = W_αβ/W_βα, where N_α0 and N_β0 are the equilibrium populations. Since that ratio equals the Boltzmann ratio exp(−ΔE/kT), we obtain

8.8  dn/dt = −(n − n_0)/T_1 with 1/T_1 = W_αβ + W_βα and n_0 = N(W_βα − W_αβ)/(W_βα + W_αβ)

The quantity T_1 has the dimension of time and is called the spin–lattice relaxation time. Depending on the mobility in the system at hand, it may be minutes or sometimes even longer. Now introducing the field again,

8.9  dn/dt = −2Pn − (n − n_0)/T_1

so that at equilibrium with dn/dt = 0, we have

8.10  n = n_0/(1 + 2PT_1)

The rate of absorption of energy now becomes

8.11  dE/dt = nPℏω = n_0Pℏω/(1 + 2PT_1)

and as long as 2PT_1 ≪ 1, saturation can be avoided.
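
To get a feeling for the numbers involved, the sketch below evaluates the proton resonance frequency ν = γ_nH/2π and the Boltzmann ratio N_β/N_α = exp(−ΔE/kT) for an assumed static field of 9.4 T (a common 400 MHz magnet); the field strength is an illustrative choice.

```python
import math

# Constants (SI)
hbar = 1.0546e-34        # reduced Planck constant (J s)
k_B = 1.3807e-23         # Boltzmann constant (J/K)
gamma_H = 2.675e8        # proton gyromagnetic ratio (rad s^-1 T^-1)

def proton_resonance(H, T=298.0):
    """Return the resonance frequency (Hz) and the Boltzmann ratio N_beta/N_alpha
    for protons in a static field H (tesla) at temperature T (kelvin)."""
    nu = gamma_H * H / (2 * math.pi)         # resonance frequency nu = gamma*H/(2*pi)
    delta_E = hbar * gamma_H * H             # energy gap between the two spin states
    ratio = math.exp(-delta_E / (k_B * T))   # Boltzmann factor exp(-dE/kT)
    return nu, ratio

nu, ratio = proton_resonance(9.4)            # illustrative 9.4 T magnet
print(f"nu = {nu/1e6:.0f} MHz, N_beta/N_alpha = {ratio:.6f}")
```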

So far, we have discussed the effect of the externally applied magnetic field (the Zeeman effect), ℋ_H = −g_nβ_nHI_z, but a nucleus is shielded by its surrounding electrons, which modifies the expression to

8.12  ℋ_H = −g_nβ_nH(1 − σ)I_z

The parameter σ is the chemical shift,3 characteristic for the near environment of the nucleus. The signals of the various protons in a compound thus show a specific shift with respect to a reference signal. Finally, there is also the interaction between spins, and using an isotropic spin–spin interaction, we have ℋ_I = J I_1·I_2 with coupling constant J. Spin coupling leads to the splitting of peaks into two or more components, but the effect of the spin coupling ℋ_I is often much smaller than that of the magnetic field term ℋ_H. In total the interaction is thus

8.13  ℋ = ℋ_H + ℋ_I = −g_nβ_nH(1 − σ)I_z + J I_1·I_2

Let us consider in a fairly general way the macroscopic magnetization M_z for a system of N_α spins in the α state and N_β spins in the β state, for the moment without an externally applied field. The z‐component of M is then

8.14  M_z = ½γ_nℏ(N_α − N_β) = ½γ_nℏn

and since dn/dt = −n/T_1, we have dM_z/dt = −M_z/T_1, decaying to zero. As there is no distinction between the various directions, M_x and M_y decay similarly. Now applying a steady field H_0 along the z‐axis, M_z is governed by

8.15  dM_z/dt = −(M_z − M_0)/T_1

The final value of M_z no longer vanishes but reaches a steady‐state value M_0 = ½γ_nℏn_0 = χ_0H_0, where χ_0 = Nγ_n²ℏ²I(I + 1)/3kT is the susceptibility. However, for the x‐ and y‐components, one still has an exponential decay given by

8.16  dM_x/dt = −M_x/T_2 and dM_y/dt = −M_y/T_2

For these components the transverse relaxation time T_2 is different because changes in M_x and M_y are the result of a different process than that for the M_z component. To elaborate briefly, forget for these components the relaxation for a moment, and recall that the nucleus has not only a magnetic moment but also angular momentum. The field produces a couple of strength G = μ × H, which forces the spin into a precession, the Larmor precession, around a cone making a constant angle with the field. This leads to ℏ dI/dt = G = γ_nℏ(I × H), that is, dI/dt = γ_n(I × H). The bulk magnetic moment behaves similarly according to dM/dt = γ_n(M × H). Defining the Larmor frequency by ω_0 = γ_nH, in components the equations are thus

8.17  dM_x/dt = ω_0M_y, dM_y/dt = −ω_0M_x, dM_z/dt = 0

Adding the effects of relaxation, we have in total what are called the Bloch equations:

8.18  dM_x/dt = ω_0M_y − M_x/T_2, dM_y/dt = −ω_0M_x − M_y/T_2, dM_z/dt = −(M_z − M_0)/T_1

Using these equations the response can be calculated when details of the mechanisms for T_1 and T_2 are provided. For that discussion, we refer to the literature [16]. Moreover, the line shape of the response can be derived from these equations and appears quite generally to be given by a Lorentzian expression.

Many practical difficulties occur if one wants absolute data. Hence, to avoid the influence of the solvent, typically deuterated chloroform (CDCl3), the chemical shifts are frequently taken with respect to the response of a reference, typically tetramethylsilane (TMS), as for this molecule there is only one strong proton signal. Moreover, they are expressed in ppm (parts per million) of the applied frequency, and when using TMS = 0 ppm, we use what is called the δ‐scale (alternatively, one uses the τ‐scale with TMS = 10 ppm). As the chemical shift of a signal and its splitting into components are representative for the environment of the proton, one can, by analyzing the various peaks/multiplets, infer the chemical structure of the molecule, as long as ℋ_H ≫ ℋ_I. For example, acetaldehyde CH3CHO contains two types of protons: those of the methyl group, all equivalent, and the aldehyde proton. The response of the methyl group in the 1H NMR spectrum is influenced by the spin state of the aldehyde proton (α or β), so that this peak splits into a doublet. The three protons of the methyl group lead to a quadruplet for the aldehyde proton response, as the three spins of the methyl group can combine to a total z‐component of 3/2 (ααα), 1/2 (βαα, αβα, ααβ), −1/2 (ββα, βαβ, αββ), and −3/2 (βββ). From this we see that the quadruplet peak ratios are 1 : 3 : 3 : 1. Similarly, for ethanol the CH2 signal is split into a quadruplet due to the three neighboring CH3 protons, and the CH3 signal becomes a triplet due to the CH2 protons (Figure 8.5a). The OH group typically shows one signal, as this proton is rapidly exchanged with other protons, mainly due to impurity water (and to a lesser extent also with other alcohol molecules), effectively yielding no interaction. It can be shown that for a coupling constant of J Hz, the coupled nucleus must exchange more rapidly than J/2π to collapse a multiplet to a singlet. As for the OH proton J ≅ 6 Hz, an exchange rate of about one per second suffices. Proper drying will lead to the expected triplet structure in the spectrum. An obvious use of this effect is the study of exchange rate kinetics as a function of temperature and concentration. Moreover, the position of the peaks depends somewhat on the solvent used. For example, in D2O the CH3 and CH2 signals will show up at approximately 1.0 and 3.5 ppm, while in CDCl3 they show up at about 1.5 and 3.9 ppm, respectively. The position of the OH peak is highly variable, again dependent on the solvent. For example, in D2O the location is at about 4.8 ppm, while in CDCl3 the OH peak lies in between the CH2 and CH3 signals at about 2.2 ppm. Typical ranges observed for various functional groups are shown in Figure 8.5b. Finally, one might wonder whether equivalent nuclei, for example, those of the methyl group in acetaldehyde, do not influence each other. It can be shown that, in many cases but not always, this interaction does not influence the spectrum, so that it can usually be ignored [16].
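
The multiplet intensity ratios quoted above (1 : 3 : 3 : 1 for the quadruplet, 1 : 2 : 1 for a triplet) are simply binomial coefficients for coupling to n equivalent spin‐½ nuclei, as the short sketch below illustrates.

```python
from math import comb

def multiplet(n_equivalent_protons):
    """Relative intensities of the (n+1) lines produced by coupling to
    n equivalent spin-1/2 nuclei (Pascal's triangle)."""
    return [comb(n_equivalent_protons, k) for k in range(n_equivalent_protons + 1)]

print(multiplet(3))   # three CH3 neighbours -> quadruplet [1, 3, 3, 1]
print(multiplet(2))   # two CH2 neighbours   -> triplet    [1, 2, 1]
```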

Figure 8.5 (a) 1H NMR spectrum of ethanol in D2O showing the TMS peak at δ = 0 ppm and the CH3 triplet, CH2 quadruplet, and OH singlet and (b) typical δ‐ranges for various functional groups.

As soon as ℋ_H and ℋ_I become comparable, a much more involved analysis is generally required (note that the additional fine structure in both multiplets of the ethanol spectrum is already indicative of this). An example of a more elaborate analysis is the one for the polycarbonate indicated before [3], using both 1H (Figure 8.6) and 13C spectra (Figure 8.7). Similar to the IR characterization, the expected features are present, but a full characterization of its structure requires additional methods.

Figure 8.6 1H NMR spectrum of polycarbonate in CDCl3 obtained at 400 MHz.

Figure 8.7 13C NMR spectrum of polycarbonate in CDCl3 obtained at 125 MHz.

As we have seen, the relaxation times, representing the mobility of the material, play an enormous role. The value of T_1 ranges from 10⁻² to 10⁴ s for solids and from 10⁻⁴ to 10 s for liquids. For solids T_2 is typically 10⁻⁴ s, while for liquids T_2 ≈ T_1. These values have a pronounced effect on the line width, as easily understood from the Heisenberg uncertainty principle. This principle tells us that ΔEΔt ≥ ℏ, where ΔE is the uncertainty in the energy E and Δt the lifetime, so that the frequency spread becomes Δν = ΔE/h ≥ (2πΔt)⁻¹. If both T_1 and T_2 are large, Δν will be small, while if either T_1 or T_2 is small, Δν will be large. For a typical liquid, T_1 ≈ T_2 ≈ 1 s, so that Δν ≈ 0.1 Hz, while for a typical solid, T_2 ≈ 10⁻⁴ s, leading to Δν ≈ 10³ Hz. Therefore, as in the solid state the mobility is limited, this leads to broad, somewhat noncharacteristic features in a solid‐state spectrum. For polymers in the solid state, often the 13C response is used, with as advantage a wider spread of chemical shift values, but as disadvantages a lower signal‐to‐noise ratio and rather long relaxation times, leading to long total scan times. For example, in polyethylene, T_1(13C) ≥ 1000 s [17], while T_1(1H) ≤ 1 s. By using so‐called magic angle spinning (MAS) [18], a rapid rotation of the sample about an axis at the magic angle (54.7°) with the magnetic field, the mobility issue can be overcome. The fast rotation renders only the trace, that is, the isotropic part, of the chemical shift tensor observable. The quality of the spectra then depends on the rotation rate and the magnetic field strength. The issue of the long relaxation time can be overcome using cross‐polarization (CP) [19], in which magnetization is transferred from the 1H spins to the 13C spins under the condition of matching RF fields. This leads to a fourfold increase in signal‐to‐noise ratio, due to the larger Boltzmann population of the 1H spins, as well as to much shorter scan times. Using CP–MAS techniques, solid‐state NMR has become a powerful tool for the analysis of polymers.
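
The line‐width estimate Δν ≥ (2πΔt)⁻¹ is easily checked; the relaxation times in the sketch below are simply the typical orders of magnitude mentioned above.

```python
import math

def linewidth(T2):
    """Order-of-magnitude line width (Hz) from the lifetime-broadening
    relation delta_nu ~ 1/(2*pi*T2)."""
    return 1.0 / (2 * math.pi * T2)

print(f"liquid (T2 ~ 1 s):    {linewidth(1.0):.2f} Hz")    # ~0.1 Hz
print(f"solid  (T2 ~ 1e-4 s): {linewidth(1e-4):.0f} Hz")   # ~10^3 Hz
```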

Using nuclei other than 1H requires further considerations. Such nuclei often have a spin larger than ½, which is accompanied by a nuclear quadrupole moment. While the increased spin complicates the spectra, quadrupole relaxation broadens the line width. Typically the chemical shift range is larger, though. For example, while the range for 1H is about 15 ppm, the range for 19F is some 600 ppm. Further developments are multidimensional NMR [20] and NMR imaging [21]. As an example, we mention a multidimensional NMR study on SiO2 particles grown in situ via the sol–gel process in hexylamine‐swollen natural rubber and ethylene–propylene diene rubber, using the 1H and 29Si nuclei simultaneously [22]. Chemical mapping of the silica particles was performed by solid‐state NMR (and XPS) and revealed the presence of residual ethoxy groups and hexylamine on the silica surface. Ethoxy groups appeared to be present inside the silica structure as well as on the surface. Together with the hexylamine present at the surface, this results in an increase of hydrophobicity for these silica particles as compared with silica particles in conventional rubber–silica nanocomposites. The chemical structure of the in situ grown silica inside the rubber matrix has a quality comparable to that of commercial high‐density silica, as judged from the Q2 : Q3 : Q4 ratios (see Figure 8.8c) determined by 29Si MAS–NMR measurements. The excellent dispersion of the sol–gel synthesized silica particles in the rubber matrix, their hydrophobic surface, and the entrapment of the rubber chains in the growing silica particles provide options for creating nanocomposites with improved mechanical properties as compared with conventional nanocomposites. For an example of NMR imaging, see Section 9.2.

Figure 8.8 Silica in rubber showing 2D 29Si{1H} HETCOR NMR spectra recorded using a MAS frequency of 10.0 kHz and a CP time of 4.0 ms. (a) Sol–gel synthesized silica and (b) high‐density silica. (c) Schematic drawing of the silicon tetrahedra denoted as Q n , with n = 2, 3, and 4, corresponding to the number of Si-O-Si bonds. The gray shading represents bulk silica corresponding to Q4 silicon atoms.

For further details of all these methods, we refer to the extensive NMR literature from which we mention here only a few treatises [12, 23, 24].

8.4 Functional Group Analysis

As stated in Chapter 1, most polymer coatings are crosslinked thermoplasts. Crosslinking is necessary to fulfill requirements such as solvent resistance, impact resistance, and durability. The network properties are determined by the chemistry and physical characteristics of the applied resin and crosslinker and determine, for example, the T_g, the crosslink density, and the resistance against weathering. Together with the physical properties of the resins and crosslinker, these set properties such as flexibility, mechanical behavior, and adhesion.

A paramount factor in tuning these properties is the curing process, which involves the reaction between the reactive groups of the resin and the crosslinker and possibly a reaction of reactive groups with species from the environment (water, oxygen). Curing can take place spontaneously at room temperature, although often very slowly, but most of the reactions are triggered by temperature, light, or other irradiation. The properties of the final network depend on the amount of functional groups, the functionality of the applied resin and crosslinker, and the functional group distribution over the polymer chains. The presence of nonfunctional polymer chains is also an important factor, codetermining, for example, the amount of extractable material.

For these reasons the determination of functional groups in all its aspects receives a great deal of attention. The quantification of functional groups can be done in most cases by standard ASTM methods [25]. The determination of isocyanate, hydroxyl, and epoxy groups is based on titration techniques. Also the amount of double bonds in air‐drying alkyds can be quantified by titration. But in these alkyds the presence of one, two, or three double bonds per fatty acid as well as the character of these bonds (e.g. cis vs. trans and isolated vs. conjugated) strongly determines the reactivity and the drying time. The quantification of these variables needs a deeper investigation using methanolysis or hydrolysis, after which the fatty acids are identified and quantified by gas chromatography–mass spectrometry (GC–MS). Also from NMR experiments relevant information can be obtained.

The quantification of amine groups is less straightforward. The standard ASTM methods do not provide detailed information on the distribution over primary, secondary, and tertiary amines. The presence of hydroxyl groups makes the quantification even more complicated. More recently, a new method was introduced based on derivatization with phospholane derivatives (e.g. a dioxa‐chlorophospholane), followed by identification and quantification by 31P NMR [26]. This method yields in one analysis the whole picture of amine and hydroxyl groups. An extra advantage is that the method can also discriminate between primary and secondary hydroxyl groups. An example is shown in Figure 8.9a.

Figure 8.9 Functional group analysis. (a) Determination of monoglyceride (MG) and diglycerides (DG) in diacylglycerol oil in pyridine–chloroform solution by 202.2 MHz 31P NMR using cyclohexanol as internal standard after derivatization by dioxa‐chlorophospholane and (b) normal phase gradient polymer elution chromatography (GPEC) of an acid functional polyester showing the signal as obtained by an evaporative light scattering detector (ELSD) and a UV detector.

The next step in the analysis of functional groups is the determination of the group distribution. Liquid chromatography techniques, possibly combined with MALDI, provide information about the number of polymer chains carrying 0, 1, 2, and so on functional groups. In Figure 8.9b an example is given of an acid functional polyester showing the amount of nonfunctional (cyclic) chains and the 1‐ and 2‐functional chains [27].

8.5 XPS, SIMS, and LEIS

In this section, we deal briefly with three useful techniques. While XPS is widely available, SIMS is much less widely available, and this is even more true for LEIS.

In XPS the sample is irradiated with soft X‐rays, usually from a Mg Kα (1253.6 eV) or Al Kα (1486.6 eV) source, in high vacuum (<10−7 torr). The X‐ray irradiation generates photoelectrons, and the kinetic energy E_kin of the emitted electrons is directly related to the binding energy E_bon, the energy hν of the radiation used, where h is Planck's constant and ν the frequency, and the work function φ, which depends on the sample and the spectrometer used for measuring. From the energy balance, we have E_bon = hν − E_kin − φ. The binding energies of the electrons are characteristic of the element and of the environment of the atom in the molecule. This yields specific information for the various elements present in the sample. Hence, XPS can characterize the composition and the chemical state of the near‐surface region. In practice, for each element sensitivity factors have to be used, which are tabulated in handbooks. In principle, the technique is nondestructive, although sample damage due to electron radiation and evaporative losses may occur [28]. For example, the fluorine‐to‐carbon F(1s)/C(1s) ratio has been found to decrease during an X‐ray exposure of several minutes, depending on the experimental conditions [29].
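
The energy balance E_bon = hν − E_kin − φ translates directly into code; the kinetic energy and work function in the sketch below are hypothetical values, chosen only to illustrate the arithmetic for an Al Kα source.

```python
def binding_energy(photon_energy_eV, kinetic_energy_eV, work_function_eV):
    """XPS energy balance: E_bond = h*nu - E_kin - phi (all energies in eV)."""
    return photon_energy_eV - kinetic_energy_eV - work_function_eV

# Hypothetical example with an Al K-alpha source (1486.6 eV)
E_b = binding_energy(1486.6, 1197.0, 4.5)
print(f"E_bond = {E_b:.1f} eV")   # ~285 eV, the region of the C 1s line
```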

The typical beam diameter is about 1 mm, so that an average composition is obtained over the corresponding area. Much narrower beam diameters are possible though. Generally, sample surfaces have to be rather flat as roughness influences the signal because the penetration depth is limited to, say, less than 10 nm.

XPS can yield qualitative and quantitative information on adsorbed surfactants and segregation from the bulk. The overlayer on the substrate decreases the intensity I e of a photoelectron peak, originating from a component in the substrate, by a factor

8.19  I_c/I_e = exp(−δ/λ_o sin α)

where δ is the thickness of the overlayer sampled, I_c the intensity of the photoelectron peak originating from the covered substrate, λ_o the inelastic mean free path (IMFP) of the electron in the overlayer or attenuation length of the emerging electron, and α the electron takeoff angle relative to the sample surface [30]. The IMFP, the average distance a photoelectron travels before an inelastic collision, depends on the binding energy of the photoelectron and the composition of the sample and is, in turn, affected by the energy of the radiation source. The angle‐dependent ratio R of the composition of the overlayer to that of the substrate can be calculated using a simplified expression given by Fadley [31]:

8.20  R(α) = K[exp(τ/sin α) − 1]

where K is a function of the atom densities, the instrument response, the kinetic energies of the photoelectrons from the substrate and overlayer atoms for the measured levels, and the effective cross sections of the atoms. The effective overlayer thickness τ is given by τ = δ/λ_o, where δ is the actual overlayer thickness. About 95% of the signal emerges from a distance 3λ_o within the solid. The sampling depth has a maximum when α = 90° and is often below 5 nm. Angle‐dependent measurements thus offer the possibility to estimate the concentration profile of a segregated component at the surface. A general reference is [32].
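
Based on the attenuation relation of Eq. (8.19), the thickness of an overlayer can be estimated from the ratio of the attenuated to the unattenuated substrate signal; the IMFP, takeoff angle, and intensity ratio in the sketch below are hypothetical.

```python
import math

def overlayer_thickness(I_ratio, imfp_nm, takeoff_deg):
    """Estimate the overlayer thickness (nm) from the attenuation of a substrate
    photoelectron peak, I_c/I_e = exp(-delta/(lambda_o*sin(alpha)))."""
    alpha = math.radians(takeoff_deg)
    return -imfp_nm * math.sin(alpha) * math.log(I_ratio)

# Hypothetical: substrate signal attenuated to 40% with IMFP 3 nm at 45 deg takeoff
print(f"delta ~ {overlayer_thickness(0.40, 3.0, 45.0):.1f} nm")   # ~1.9 nm
```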

In SIMS a primary ion beam hits the surface, which kinetically excites the atoms in the sample. Some of these atoms escape as ions, and these secondary ions are characteristic of the sample. This implies that the technique is destructive. The penetration depth is about 1–2 nm. Broad reviews are given in [33–35].

Finally, in LEIS [36], the sample is bombarded with low energy ions (He+, Ne+, Ar+, etc.), which are elastically scattered by the atoms in the outermost surface layer of the sample. This is actually a kind of billiards, but with ions. In elastic scattering the energy of the scattered ions is directly determined by the mass and energy of the incident ions and the mass of the scattering atoms. Measuring the energy of the scattered ions thus yields the mass of the scattering atoms. Using a sufficiently low dose, the technique is nondestructive. Moreover, it is a quantitative technique without the need for sensitivity factors, although, since the technique really probes the outermost atomic layer, shadowing effects can occur. A brief overview can be found in [37].

As an example, in Figure 8.10, XPS results are given for fluorinated films [38], showing the various C peaks in the spectrum and the F/C ratio as a function of the added amount of fluorine at different takeoff angles. From these data it can be clearly seen not only that a shallower penetration depth leads to a higher F/C ratio but also that the surface concentration rapidly saturates with fluorine content.

Figure 8.10 XPS results on films prepared from solventless liquid oligoesters (SLOs) and partially fluorinated isocyanates cured at 80 °C. (a) Spectra for a film with a fluorine content of 1 wt%. The labels CF3 and CF2 indicate the response for tri‐ and di‐fluorinated carbon atoms. Takeoff angles: (a) 75°, (b) 45°, and (c) 15°. (b) Surface F/C atomic ratio as a function of the added fluorine content in the films from SLO and F6‐N3300.

8.6 SEC

Information about the molar mass distribution is required in many cases. The usually employed technique is SEC, which separates the compounds of interest on the basis of size, sometimes coupled with mass spectrometry (MS). When characterizing polymers, it is important to consider the average value of the molar mass as well as the dispersity Đ (see Section 2.1). Polymers can be characterized by a variety of molar mass averages, including the number average molar mass M_n, the weight (mass) average molar mass M_w, the z-average molar mass M_z, and the viscosity average molar mass M_v. Standard SEC determines the M_v distribution, and based on these data, M_n, M_w, and M_z can be calculated.

SEC separates on the basis of the size or hydrodynamic volume (radius of gyration) of the compounds. This differs from other chromatographic separation techniques, which generally depend upon chemical or physical interactions to separate different molecules. Separation occurs via the use of porous beads packed in a column. The smaller molecules can enter the pores more easily and therefore spend more time in these pores, increasing their retention time. Conversely, larger molecules spend less time, if any, in the pores and are eluted quickly. All columns have a range of molar masses that can be separated. A drawback is the possibility of interaction between the stationary phase and the product to be analyzed. Any interaction leads to a later elution time and thus mimics a smaller compound size. Another important point is clustering of polymeric molecules (e.g. charged ones), leading to apparently higher molar masses. Adjustment of the eluent and/or addition of suitable modifiers can overcome this problem.

If a molecule is either too large or too small, it will be either not retained or completely retained, respectively. Molecules that are not retained are eluted with the free volume outside of the particles, V_0, while molecules that are completely retained are eluted with the volume of solvent held in the pores, V_i. The total volume V_t can be calculated from V_t = V_g + V_i + V_0, where V_g is the volume of the polymer gel.

As can be inferred, there is a limited range of molar masses that can be separated by each column, and therefore the size of the pores for the packing should be chosen according to the range of molar mass of molecules to be separated. For polymer separations therefore the pore sizes should be in the order of the size of the polymers being analyzed. If a sample has a broad molar mass range, it may be necessary to use several SEC columns in tandem with one another to fully resolve the sample.

Although SEC is often used to determine the relative molar mass, what SEC truly measures is the molar volume and shape function as defined by the intrinsic viscosity (see Section 10.2.2). These relative data can be used to determine molar masses within ±5% accuracy if comparable standards are used. Generally, however, one uses polystyrene (PS) standards with dispersity Đ < 1.2 to calibrate SEC experiments. Unfortunately, PS tends to be a very linear polymer, and therefore as a standard it is only useful for comparison with other polymers that are known to be linear and of roughly the same size. Commercially available narrow M_w standards are PS and PMMA for nonaqueous SEC and poly(ethylene oxide), poly(ethylene glycol), pullulan (a linear polysaccharide), dextran, and sodium poly(styrene sulfonate) for aqueous SEC. Conventionally, compound detection is done with a refractive index or a UV detector. For determining absolute molar mass values, compound detection in SEC can be done with a (differential) capillary viscometer acting as detector or using a light scattering detector. SEC does not perform well in low molar mass regions of less than, say, 400 Da.
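
The averages mentioned above follow directly from their definitions, M_n = ΣN_iM_i/ΣN_i, M_w = ΣN_iM_i²/ΣN_iM_i, and M_z = ΣN_iM_i³/ΣN_iM_i²; the sketch below applies them to an arbitrary, made-up discrete distribution.

```python
def molar_mass_averages(masses, numbers):
    """Compute (M_n, M_w, M_z, dispersity) for a discrete distribution given as
    lists of molar masses M_i (g/mol) and the corresponding numbers N_i of chains."""
    s1 = sum(n * m for n, m in zip(numbers, masses))       # sum N_i * M_i
    s2 = sum(n * m**2 for n, m in zip(numbers, masses))    # sum N_i * M_i^2
    s3 = sum(n * m**3 for n, m in zip(numbers, masses))    # sum N_i * M_i^3
    M_n = s1 / sum(numbers)
    M_w = s2 / s1
    M_z = s3 / s2
    return M_n, M_w, M_z, M_w / M_n

# Made-up example distribution
masses = [2000, 5000, 10000, 20000]        # g/mol
numbers = [10, 40, 30, 5]                  # number of chains of each mass
M_n, M_w, M_z, D = molar_mass_averages(masses, numbers)
print(f"M_n = {M_n:.0f}, M_w = {M_w:.0f}, M_z = {M_z:.0f}, D = {D:.2f}")
```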

As an example, Figure 8.11 shows the SEC chromatogram of the polycarbonate alluded to before [3]. Having only this information leads to the conclusion that this polymer has a unimodal molar mass distribution (but see Section 8.7). General references for SEC are [39–41].

Figure 8.11 SEC chromatogram of polycarbonate showing an apparent monomodal distribution.

8.7 MALDI–MS

MALDI–MS is a soft ionization technique that produces high‐mass (quasi‐)molecular ions. It allows the analysis of large organic molecules (such as polymers, dendrimers, and other macromolecules) that are embedded in a suitable matrix compound and irradiated with a pulsed laser beam. These molecular ions tend to be fragile and fragment when ionized by more conventional ionization methods. The laser pulse desorbs and indirectly ionizes the molecules to be analyzed. A short‐pulse (a few nanoseconds) UV laser is typically used for desorption, but different wavelengths, such as IR, have been investigated as alternatives. The ionized species are analyzed by MS. Here, we use the brief review [42] as a guide.

In practice, MALDI analysis is a three‐step process. In the first step the sample is mixed with a suitable matrix material and applied to a metal plate. The use of organic matrices has become routine for MALDI. Key to a successful MALDI analysis is primarily the uniform mixing of the matrix and the analyte. The use of common solvents for matrix and polymer permits direct mixing and promotes polymer–matrix interaction. The matrix serves as a solvent for the analyte molecules and separates them from each other, thereby reducing strong intermolecular forces (matrix isolation) and minimizing analyte cluster formation. Samples are typically prepared at an analyte : matrix concentration ratio of 1 : 10⁴ in a suitable solvent such as water, acetone, or tetrahydrofuran. A few microliters of this mixture is deposited onto a substrate and dried, and the solid mixture is then placed into the MS. Insoluble polymers are generally not analyzable by MALDI, as an intimate matrix/analyte mixture cannot be created because of the lack of a common solvent.

In the second step, a pulsed laser irradiates the sample, triggering ablation and desorption of the sample and matrix material. Upon irradiation by the laser pulse, the matrix molecules absorb most of the laser's energy. The high matrix‐to‐analyte concentration ratio ensures that most of the photon energy is absorbed by the matrix and minimizes direct irradiation of the analyte. The energy absorbed by the matrix molecules is transferred into electronic excitation of the matrix within the solid sample mixture, creating an instantaneous phase transition from the solid phase to the gas phase. To initiate an effective desorption and subsequent ionization process, MALDI matrices should have strong absorption coefficients at the chosen laser wavelength. Many organic compounds that absorb strongly in the UV range have been evaluated as potential MALDI matrices, but only a small number function well: the search for efficient matrices is still very often carried out on a trial‐and‐error basis. Sample preparation for soluble polymers, be it in water or an organic solvent, is fairly well established though. Matrices for these polymers include 1,8,9‐anthracenetriol (or dithranol), 2,5‐dihydroxybenzoic acid, trans‐3‐indoleacrylic acid, and 2‐(4‐hydroxyphenylazo)benzoic acid. These matrices are often used in conjunction with alkali metal salts (LiCl, NaCl, KCl) or silver salts such as silver trifluoroacetate (AgTFA) to form matrix–cationization agent mixtures. Cationization can be further enhanced by adding other metal salts. In some instances, the selection of the metal salt is more critical than that of the matrix itself. Because different polymers can have drastically different ionization or cationization efficiencies, there is no universal matrix–cationization agent combination applicable to all polymers in these classes.

In the third step the molecules to be analyzed are ionized, by being protonated or deprotonated in the hot plume of ablated gases, and can then be accelerated into whichever MS is used to analyze them. The material removed by each laser pulse is estimated to be approximately 10–100 µm in diameter (the laser spot size) and a few hundred nanometers (tens of monolayers) deep. A dense gas cloud is formed and expands supersonically (at about Mach 3) into the vacuum. It is believed that, for the molecules to be analyzed, ionization occurs in the expanding plume as the direct result of collisions between analyte neutrals, excited matrix ions, and protons and cations such as sodium and silver. The analyte ions are introduced into the MS and analyzed. MALDI mass spectra are most often acquired using time‐of‐flight (TOF) MS techniques, which determine m/z ratios by measuring ion drift times between the points of formation and detection. As the spectrum contains the abundance and mass of each oligomer, polymer characteristics that can readily be determined by MALDI–MS include the molar mass averages M_n and M_w according to their definition (see Section 2.1), the dispersity, the mass of the repeat units, and the end‐group mass and structure.
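
Since each peak of a single oligomer series appears at m/z = n·M_repeat + M_end + M_cation, the repeat-unit mass follows from the spacing of consecutive peaks and the (combined) end-group mass, modulo one repeat unit, from the residual. The peak list in the sketch below is synthetic, assuming a TMC-like repeat unit of 102.1 Da, a hypothetical end-group mass of 92.1 Da, and Na+ cationization.

```python
def analyze_series(mz_peaks, cation_mass):
    """For a single MALDI oligomer series (peaks sorted by m/z), estimate the
    repeat-unit mass from the mean spacing of consecutive peaks and the total
    end-group mass modulo one repeat unit (this ambiguity cannot be resolved
    from peak positions alone)."""
    spacings = [b - a for a, b in zip(mz_peaks, mz_peaks[1:])]
    repeat = sum(spacings) / len(spacings)                   # average peak spacing
    residuals = [(m - cation_mass) % repeat for m in mz_peaks]
    end_group_mod = sum(residuals) / len(residuals)
    return repeat, end_group_mod

# Synthetic series: n*102.1 (repeat) + 92.1 (end groups) + 23.0 (Na+), n = 5..9
peaks = [n * 102.1 + 92.1 + 23.0 for n in range(5, 10)]
rep, end_mod = analyze_series(peaks, cation_mass=23.0)
print(f"repeat unit ~ {rep:.1f} Da, end groups ~ {end_mod:.1f} Da (mod repeat)")
```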

MALDI becomes increasingly complex for high molar mass values of polydisperse polymers, say, M_w/M_n > 1.2. The upper mass limit for MALDI is about 350 kDa. Numerous factors, including sample preparation, mass‐dependent desorption/ionization effects, and instrumental configurations, contribute to significant mass discrimination effects in analyzing polymers with high dispersity. This difficulty can be overcome by combining MALDI with SEC [43].

Nevertheless, MALDI has become one of the most powerful tools for the mass analysis of high molar mass compounds, such as biomolecules and polymers. For the ever‐returning problem in polymer (coatings) science of determining the molar mass distribution, MALDI can, in principle, determine a rather precise and absolute molar mass distribution, largely independent of the polymer structure. This is largely due to the better than 1 Da resolution that is possible nowadays for the mass of the various fragments produced. In case the molar mass distribution is crucial, use of this technique may be imperative. For example, Figure 8.12 shows the spectrum of the polycarbonate indicated before [3]. Although SEC informs us that the molar mass distribution is simply unimodal, MALDI clearly shows that in reality two distributions are present, one due to a branched polymer and another due to a linear polymer. It appears rather difficult, if not impossible, to retrieve such information otherwise.

Figure 8.12 MALDI–MS spectrum of polycarbonate clearly showing two distributions (from a linear and branched polymer) corresponding to the sample for which in Figure 8.11 a unimodal SEC chromatogram is shown.

As for all characterization methods, also for MALDI, a large literature exists. We refer for further details to, for example, the reviews [42, 44, 45] and monographs [46, 47].

8.8 XRD

For determining a detailed molecular structure, one generally employs diffraction experiments on crystalline samples. To that purpose one can use, in principle, neutrons, electrons, and X‐rays. As neutron diffraction is cumbersome and electron diffraction has methodological difficulties (e.g. multiple scattering), we limit ourselves here to some remarks about XRD.

For XRD, typically monochromated Kα radiation is used (Cu, λ = 0.1541 nm, or Mo, λ = 0.07093 nm), and these X‐rays are mainly scattered by the electrons. In a scattering experiment, a part of the incident radiation is scattered in various directions, characterized by the angle θ between the incoming and outgoing radiation. A scattering parameter s = (4π/λ) sin(θ/2) is defined, where λ is the wavelength of the radiation used and s = |s| = |s_sca − s_inc| is the length of the difference between the wave vectors of the scattered and incident radiation. In the first Born approximation, representing elastic single scattering, the amplitude of the outgoing radiation A(s) is directly related to the Fourier transform of the electron density ρ(r), that is, A(s) ∼ ∫ρ(r) exp(−i s·r)dr. Chemical bonding in principle changes the spherically symmetric electron distribution around a free atom into a nonsymmetric distribution around the atom in the molecule. However, these changes are small (see, e.g. the author's favorite dealing with the charge distribution in pyrazine [48] or, more generally, [49, 50]). To a good approximation, the electron distribution in molecules can thus be broken up into spherical distributions around the nuclei. Hence, it is usually permissible not to distinguish between the location of the center of the electron distribution and that of the atomic nucleus. Each atom acts as a scatterer, which upon irradiation emits spherically symmetric waves. If the phase of two emitted waves from different atoms is the same, constructive interference occurs. For a regular pattern of atoms, as in crystals, this leads to Bragg's law reading 2d sin θ = nλ (Figure 8.13), where d is the distance between the scatterers, θ the diffraction angle, n the order of the reflexion, and λ the wavelength. This leads to a strong response, the so‐called reflexions, in particular directions with respect to the crystal. In liquids and amorphous solid systems, the constructive distances are randomly dispersed, leading to a response that is equally distributed over a cone at a certain angle with respect to the direction of the incident radiation.
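
In practice Bragg's law is mostly used in the inverse direction, converting a measured diffraction angle into a lattice spacing; the 2θ value in the sketch below is illustrative.

```python
import math

LAMBDA_CU = 0.1541   # Cu K-alpha wavelength in nm

def d_spacing(two_theta_deg, wavelength=LAMBDA_CU, order=1):
    """Bragg's law n*lambda = 2*d*sin(theta), solved for the spacing d (nm);
    the diffraction angle is entered as 2*theta in degrees."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength / (2.0 * math.sin(theta))

print(f"d = {d_spacing(21.5):.3f} nm")   # illustrative 2theta of 21.5 degrees
```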


Figure 8.13 X‐ray diffraction. (a) Schematic illustrating the diffraction geometry and (b) schematic illustrating Bragg's law.

A somewhat more formal description is obtained using the electron density ρ(r) at position r in a material. Describing the total electron density ρ(r) as the sum of the atomic densities ρ_j(r), we have ρ dV as the number of electrons in a volume element dV. The scattering from that element is given by ρ exp(i s·r)dV, where r is the vector from the origin to dV. The total scattering in the first Born approximation (see, e.g. [51]) is then

8.21 A(s) = ∫ρ(r) exp(i s·r)dV

with s the scattering vector. As ρ(r) = Σ_j ρ_j(r), atomic scattering factors f_j are defined by

8.22 f_j(s) = ∫ρ_j(r) exp(i s·r)dV = 4π∫ρ_j(r) [sin(sr)/(sr)] r² dr

where the last step can be made since ρ_j(r) is spherically symmetric. Finally, we have to add the effect of atoms deviating from their equilibrium positions due to thermal motion, which is usually described by

8.23 f_j,T(s) = f_j(s) exp[−8π²⟨u²⟩(sin²θ)/λ²]

with ⟨u²⟩ representing the average square deviation. This leads to the temperature factor exp(−M) with M = 8π²⟨u²⟩(sin²θ)/λ². The scattered intensity is then

8.24 I(s) = kF(s)F*(s) = k|F(s)|², with F(s) = Σ_j f_j exp(−M_j) exp(i s·r_j) and r_j the position of atom j

Here k is a factor that depends on the experimental configuration and on the size, shape, and absorption of the crystal, and F is the structure factor. Hence, by measuring the intensities, correcting for the instrumental factors, and solving the phase problem that arises because only intensities, that is, F² values, can be measured, one can determine the structure. Normally, this is done by assuming a structure and iterating the theoretical structure factors so as to obtain a best fit with the experimental ones. This also includes the phase factors, for which a good first estimate for crystals can be obtained using the so-called direct methods [52]. Once a best fit is obtained, a Fourier inversion leads to the electron density ρ(r). In essence, the peaks in this map represent the atomic densities, with a peak height proportional to the atomic number, but heavily smeared because the atoms vibrate due to thermal motion. For single crystals an anisotropic exponent in the temperature factor is used, but for amorphous (and powdered) specimens, an isotropic exponent is employed. The quality of such a fit is characterized by factors such as R_I = Σ_j|I_j − ⟨I⟩|/Σ_j I_j, where I_j is the intensity of an individual independent reflexion and ⟨I⟩ the average intensity for the set of equivalent reflexions, and R = Σ_j ΔF_j/Σ_j F_obs, where ΔF_j = |F_obs − F_cal| and F_obs and F_cal are the observed and calculated structure factors, respectively, summed over the independent reflexions, or, alternatively, by the R_w factor defined by R_w = Σ_j w_j ΔF_j²/Σ_j w_j F_obs², where w_j = 1/σ²(F_obs) with σ the standard deviation for a set of equivalent reflexions. Good structure determinations typically have an R or R_w factor of a few percent. For example, the structure determination of pyrazine mentioned before [48] yielded R_I = 2.6%, R = 7.7%, and R_w = 5.0%. Corrections for extinction or thermal diffuse scattering affect the final values.
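A minimal sketch, with made-up structure factor values purely for illustration, of how the agreement factors R and R_w defined above can be evaluated from lists of observed and calculated structure factors:

```python
def r_factor(F_obs, F_cal):
    """Conventional R = sum|dF| / sum F_obs with dF = |F_obs - F_cal|."""
    dF = [abs(o - c) for o, c in zip(F_obs, F_cal)]
    return sum(dF) / sum(F_obs)

def rw_factor(F_obs, F_cal, sigma):
    """Weighted R_w = sum w*dF^2 / sum w*F_obs^2 with w = 1/sigma^2,
    following the definition given in the text (no square root taken)."""
    num = den = 0.0
    for o, c, s in zip(F_obs, F_cal, sigma):
        w = 1.0 / s**2
        num += w * (o - c)**2
        den += w * o**2
    return num / den

# Hypothetical observed/calculated structure factors and their standard deviations
F_obs = [120.0, 85.5, 60.2, 33.1]
F_cal = [118.2, 87.0, 59.0, 34.0]
sigma = [1.2, 0.9, 0.8, 0.7]

print(f"R  = {100 * r_factor(F_obs, F_cal):.1f}%")
print(f"Rw = {100 * rw_factor(F_obs, F_cal, sigma):.1f}%")
```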

Although the potential of diffraction methods is best exploited for crystalline materials, semicrystalline and amorphous structures can also be analyzed with success. Figure 8.14 shows the diffraction pattern for semicrystalline PE, from which the amorphous part (the broad hump) as well as the 110 and 200 spacings can be distinguished. Crystallinity depends not only on the processing conditions but also on the tacticity of the material. For example, syndiotactic PS may be partially crystalline or fully amorphous (Figure 8.15). Crystallinity is enhanced with increasing regularity of the polymer chain. The degree of crystallinity X_cryst can be determined from the XRD pattern if an amorphous (I_am) and a crystalline (I_cryst) response can be distinguished and it is assumed that their intensities reflect their amounts. In that case we have X_cryst = I_cryst/(I_am + I_cryst). Moreover, the width of the crystalline peaks can be used to estimate the crystallite size. According to Scherrer's equation, the crystallite size L is related to the width β (in radians at half maximum) of a crystalline peak by L = Kλ/(β cos θ), where K is a shape factor and λ the wavelength used. The value of K depends on the shape of the crystallites as well as on the definition of size [53]. For spherical particles, K = 2(ln 2/π)^(1/2) ≅ 0.9. The analysis is, however, not as simple as it seems because lattice distortions, structural disorder, and instrumental effects (typically leading to line broadening) may play a role as well.
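The crystallinity and Scherrer estimates above amount to simple arithmetic; the sketch below (with hypothetical integrated intensities and peak width, Cu Kα assumed) shows the two expressions in use:

```python
import math

def crystallinity(I_cryst, I_am):
    """X_cryst = I_cryst / (I_am + I_cryst), assuming the integrated intensities
    reflect the amounts of crystalline and amorphous material."""
    return I_cryst / (I_am + I_cryst)

def scherrer_size(beta_deg, two_theta_deg, wavelength_nm=0.1541, K=0.9):
    """Scherrer equation L = K*lambda / (beta * cos(theta)), beta the FWHM in radians."""
    beta = math.radians(beta_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))

# Hypothetical integrated intensities and peak width for a semicrystalline polymer
print(f"X_cryst = {crystallinity(I_cryst=350.0, I_am=650.0):.2f}")
print(f"L       = {scherrer_size(beta_deg=0.5, two_theta_deg=21.5):.1f} nm")
```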


Figure 8.14 PE XRD pattern. (a) Azimuthal (φ) and angular (θ) intensity distribution; (b) Angular intensity (2θ) distribution with the 110 and 200 reflexions indicated.


Figure 8.15 s‐PS XRD pattern. (a) Partially crystalline PS; (b) Fully amorphous PS.

In the abovementioned example, crystallinity occurred throughout the sample. This is not necessarily the case. If crystallinity occurs only at the surface, conventional XRD will possibly not detect it if the probed thickness is relatively large as compared with the crystalline layer thickness. In that case grazing angle XRD may be employed, which uses very shallow, varying incidence angles, leading to different penetration depths (Figure 8.16a). An example is provided by PDMS–PCL coatings [54] using PDMS with a monohydroxyalkyl terminal group and an M_n of approximately 500, 1000, and 2000 g mol−1 (denoted as PDMS500, PDMS1000, and PDMS2000), coupled to ε-caprolactone blocks of length 16 (PCL16) or 32 (PCL32). After thermal curing, most of the coatings were transparent. However, the coatings based on PDMS1000–PCL16 and PDMS1000–PCL32 showed a slight haziness that disappeared at 60 °C. Polarized light OM suggested the presence of PCL crystals (the typical melting point of PCL being 30–50 °C), but neither DSC nor wide-angle (conventional) XRD was able to detect crystallinity. Grazing angle XRD did, however, unequivocally establish the presence of PCL crystals (Figure 8.16b).


Figure 8.16 Grazing angle XRD. (a) Schematic showing penetration depth as a function of the grazing angle; (b) Diffraction peak of PDMS1000–PCL32 coatings on an Al substrate showing the presence of characteristic peaks for PCL crystals at 2θ = 21.4° and 2θ = 23.7°. The lowest grazing angle corresponds to a thickness of less than ≈10 nm.

For isotropic materials the X-ray response is a ring pattern with constant intensity over the full range of the azimuthal angle φ. An angular variation indicates that a directional preference is present. Such directionality is often described by the Hermans orientation factor f, calculated from the azimuthal intensity distribution as

8.25 f = (3⟨cos²φ⟩ − 1)/2, with ⟨cos²φ⟩ = ∫I(φ) cos²φ sin φ dφ/∫I(φ) sin φ dφ

The orientation factor becomes f = −0.5 for full perpendicular alignment, f = 1 for full parallel alignment, and f = 0 for a random orientation. As an example, Figure 8.17 shows the orientation for PS–PPO composite films with different volume fractions φ of carbon nanotubes (CNTs) [55]. The intensity was radially averaged as a function of q (0.02 Å−1 < q < 0.05 Å−1), and the resulting intensity was plotted as a function of the azimuthal angle. The analysis led to f = −0.032 ± 0.008, f = −0.040 ± 0.003, and f = −0.010 ± 0.005 for φ ≅ 0.01, φ ≅ 0.03, and the pure PS–PPO matrix, respectively, where ± indicates the standard deviation of the fit. Alternatively, one may calculate the full width at half maximum (FWHM) of the azimuthal distribution. For both CNT volume fractions, the distributions show an FWHM value of 88°, indicating that the use of the f-factor is to be preferred from a sensitivity point of view. This analysis clearly illustrates that in this case the distribution of CNTs is basically isotropic in all the samples examined.
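A minimal numerical sketch of Eq. 8.25, assuming a uniform azimuthal grid and evaluating the intensity-weighted average ⟨cos²φ⟩ as a discrete sum; the two profiles below are made up (one flat, one peaked) and are not the data of Figure 8.17:

```python
import numpy as np

def herman_orientation_factor(phi_deg, intensity):
    """f = (3<cos^2 phi> - 1)/2 with the intensity-weighted average
    <cos^2 phi> = sum I(phi) cos^2(phi) sin(phi) / sum I(phi) sin(phi),
    evaluated here as a discrete sum on a uniform phi grid (Eq. 8.25)."""
    phi = np.radians(np.asarray(phi_deg, dtype=float))
    w = np.asarray(intensity, dtype=float) * np.sin(phi)
    cos2 = np.sum(w * np.cos(phi) ** 2) / np.sum(w)
    return 0.5 * (3.0 * cos2 - 1.0)

# Hypothetical azimuthal scans over 0-90 degrees (illustrative only)
phi = np.linspace(0.0, 90.0, 91)
I_iso = np.ones_like(phi)                          # isotropic -> f close to 0
I_aligned = np.exp(-0.5 * (phi / 20.0) ** 2)       # intensity peaked at phi = 0

print(f"f (isotropic) = {herman_orientation_factor(phi, I_iso):+.3f}")
print(f"f (aligned)   = {herman_orientation_factor(phi, I_aligned):+.3f}")
```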


Figure 8.17 Anisotropy analysis by XRD. (a) PS–PPO blend; (b) PS–PPO blend containing a volume fraction φ = 0.01 CNTs; (c) PS–PPO blend containing a volume fraction φ = 0.03 CNTs; (d) Radially averaged intensities showing a rather limited anisotropy for the CNT distribution.

However, a small deviation from isotropy may be difficult to detect. In that case terahertz polarization-sensitive measurements can provide a reliable, noninvasive, and fast way of identifying anisotropy. For example, for electrically conductive CNT composite networks, a small anisotropy could be detected using electrical conductivity measurements, although, as judged by SEM, TEM, AFM, and SAXS, the samples seemed morphologically isotropic. This anisotropy could be confirmed by the high frequency polarization measurements [56].

Again the literature is large, and we mention here the general standard handbook [57] and similar ones for polymers [58]. A brief review for polymers is given in [59], while small‐angle scattering for polymers is reviewed in [60].

8.9 Optical Microscopy

For morphological characterization, several types of microscopy are available, ranging from OM via EM to SPM. In this section we deal with OM, while Sections 8.10 and 8.11 describe EM and SPM, respectively. Many general references about OM are available. We quote here the books by Rawls, Spencer, and Delly [61–63]. A general reference for microscopy is [64].

An optical microscope is nowadays a sophisticated instrument that can provide high-resolution images of a variety of specimens [65]. Almost all optical microscopes in current use are compound microscopes (Figure 8.18a), in which a magnified image of an object is produced by the objective lens and this image is magnified by a second lens system (the ocular or eyepiece) for viewing. Thus, the final magnification of the microscope is the magnifying power of the objective times the magnifying power of the ocular. Objective magnification powers range from 4× to 100×. Ocular magnifications typically range from 8× to 12×, though 10× oculars are most common. As a result, a standard microscope provides a final magnification range of ≈40× up to ≈1000×. Lower magnification is impractical on a compound microscope because of spatial constraints with image correction and illumination. Higher magnification is impractical because of limitations in light-gathering ability: the short working distances require very strong lenses and, moreover, higher magnification is not useful once one reaches the diffraction limit. OM can be done in transmission, where light is transmitted from a source on the opposite side of the specimen to the objective lens. Usually the light passes through a condenser lens to focus it on the specimen and obtain maximum illumination. One also uses reflection OM, in which the (usually nontransparent) specimen is illuminated by a source on the same side as the objective lens.


Figure 8.18 (a) Image formation in a compound microscope; (b) Schematic of a phase contrast microscope.

Around 1870, Ernst Abbe formulated his sine theory for the resolving power of the optical microscope. The resolution of an objective lens is the ability to show two object details separated by a distance d_min from each other in the microscope image and depends on the width of the cone of illumination and therefore on both the condenser and the objective lens. It is given by

8.26 d_min = 0.61λ/(n sin θ) = 0.61λ/NA

where n is the refractive index of the medium (usually air or oil) separating the specimen from the objective and condenser lenses, λ the wavelength of the light used (for white light, λ ≈ 0.53 µm is often used), and θ the angular half-width of the cone of rays collected by the objective lens from a typical point in the specimen. In Eq. 8.26, n sin θ is the so-called numerical aperture (NA), ranging from 0.25 to 1.4. For dry experiments, where the medium is air, the NA ≤ 1, since θ is maximally 90° and therefore sin θ has a maximum value of 1. For oil immersion experiments, where the medium is an optically clear oil, the NA can be as high as 1.4, dependent on the refractive index of the oil. The higher the NA and the shorter the λ, the better the resolution. A higher NA also yields a brighter image. However, the increase in resolution and brightness is obtained at the expense of a short working distance and a small depth of field. The theoretically possible resolution in normal OM using white light (λ ≈ 0.53 µm) is therefore approximately 0.2 µm. Note that θ refers to half the opening angle of the light collected by the objective, so that for transmission the NA of the condenser is as important as the NA of the objective lens in determining resolution. It is for this reason that closing a condenser diaphragm results in a loss of resolution. There are practical limits to how short the wavelength used can be. The human eye is best adapted to green light (see Chapter 11.2), and the human ability to see detail may be compromised with the use of blue or violet light. Most manufacturers of microscopes correct their simplest lenses (achromats) for green light.
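A one-line calculation suffices to reproduce the numbers quoted above; the sketch below evaluates Eq. 8.26 for green light and two assumed numerical apertures:

```python
def abbe_resolution(wavelength_um, numerical_aperture):
    """Abbe/Rayleigh resolution limit d_min = 0.61 * lambda / NA (Eq. 8.26)."""
    return 0.61 * wavelength_um / numerical_aperture

lam = 0.53  # green light in micrometres, as assumed in the text
for name, na in [("dry objective, NA = 0.95", 0.95),
                 ("oil immersion, NA = 1.4 ", 1.40)]:
    print(f"{name}: d_min = {abbe_resolution(lam, na):.2f} um")
```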

The interaction of light with the glass of a lens produces aberrations that result in a loss of image quality, because light waves are refracted differently in different parts of a lens and different wavelengths of (white) light are refracted to different extents by the glass. Spherical aberration (i.e. due to the use of spherical lens surfaces) can be corrected by using lenses with different curvatures on their surfaces, and chromatic aberration (i.e. due to the use of different wavelengths) can be minimized by combining multiple kinds of glass. These corrections are largely responsible for the relatively high cost of apochromatic objective lenses with full color correction and high NA.

For proper sample illumination in transmission, a condenser lens focuses the light into a parallel beam, resulting in an evenly illuminated specimen and a bright image (bright-field mode). This optimum setup for specimen illumination and image generation in transmission is known as Köhler illumination, after its inventor August Köhler, and uses two apertures for regulation of the illumination beam diameter by closing or opening iris diaphragms. In dark-field mode, the specimen is illuminated obliquely, with no direct light entering the objective. Features in the specimen plane that scatter light can then be seen against a dark background. For high-resolution objectives, dark-field illumination is best provided by a specially designed dark-field condenser, preventing oblique rays from entering the wide aperture of the objective. The microscope techniques requiring a transmitted light path include, apart from the bright-field and dark-field modes described above, phase contrast, polarization, and differential interference contrast (DIC) optics.

Figure 8.19 shows two examples of optical micrographs of cross sections. In Figure 8.19a, a waterborne alkyd coating as applied by brushing on teakwood is shown [66]. The alkyd emulsion, based on tall oil fatty acids with a long oil length and a low molecular weight, was prepared using a 2% load of nonionic surfactant to obtain an average particle size of 200 nm; the final alkyd composition had a solid content of 35 wt% and a surfactant amount of 5.1 wt%, leading to T_g = −18.7 °C (DSC) or 5.7 °C (DMA). This micrograph clearly shows the wood structure and that the coating penetrated only marginally into the pores of the wood. Note also that for this coating, a relatively large difference in T_g values is obtained as measured with DSC and DMA. In Figure 8.19b, a cross section of an automotive coating is given [67], showing the typical stack of highly specialized individual coating layers. On the initially cathodically electrodeposited material that provides adhesion and active corrosion inhibition (ED coat), a spray-coated layer (primer) is applied for smoothening and for protection of the ED coat against UV light. Onto the primer surface, two layers are consecutively spray-applied, namely, a base coat that provides the color, followed by a transparent top coat (clear coat), which renders the whole system shiny, smooth, and resistant toward chemicals (bird droppings, tar, rosin, acid rain) and surface-related mechanical impact like the scratching in the course of car washing. Except for the base coat itself, all other layers were chemically cured after physical drying and film formation, which implies that the car body has to pass three bake cycles up to 170 °C (ED) or 140 °C (combined base and clear layer). The micrograph clearly shows the individual layers as well as the orientation of the flakes in the base coat.


Figure 8.19 Optical micrographs of coating cross sections. (a) Waterborne alkyd coating on teakwood with an average thickness of 57 µm; (b) Typical automotive coating.

8.9.1 Phase Contrast Microscopy

The speed of light within a sample depends on the refractive index n of the sample material and hence leads to a phase difference with light that passed through the surrounding medium. As the amplitudes of the radiation are additive, light rays that are ½λ out of phase annihilate each other. Based on this effect, Zernike invented the phase contrast microscope in the 1930s to generate contrast by changing invisible phase differences into visible amplitude differences (Figure 8.18b).

To separate the light transmitted through the specimen from the light that did not encounter the specimen, a transparent ring (known as an annulus) is placed in an opaque disk, and this disk is inserted into the optical path of the condenser. Another ring is placed inside the objective lens. Nearly all of the light that passes through the sample plane but misses the specimen then passes through this ring in the objective lens. Most of the light that passes through the specimen is scattered, and some of it enters the objective lens in such a way that it will not pass through the objective lens ring but through the same plane at some other location. The glass plate holding the ring is designed in such a way that all light missing the ring encounters an additional ¼λ of phase shift relative to the beams of light that do not interact with the specimen. As a result, light rays that interact with the specimen are out of phase by ½λ with rays that do not interact with the specimen. When these rays combine, varying degrees of constructive and destructive interference occur, which produce the characteristic light and dark features in the image. For some specimens, the difference in contrast between specimen and background obtained with phase contrast as compared with bright-field mode can be significant.

In DIC microscopy, two slightly separated plane-polarized beams of light are used to create a three-dimensional (3D)-like image with shades of gray. Wollaston prisms situated in the condenser and above the objective produce the effect, and additional elements add color to the image. Care must be taken in interpreting DIC images, as the apparent hills and valleys in the specimen can be misleading. The height of a hill due to some feature is the product of both the actual thickness of the feature (i.e. the path length) and its refractive index n. Variations of the DIC system are named after their originators, Nomarski and de Sénarmont. Options exist to maximize either resolution or contrast.

8.9.2 Fluorescence Microscopy

Certain atoms and molecules, excited by radiation, absorb the radiation and thereafter lose energy in the form of heat and light emission. If during excitation the electron keeps (changes) its spin, the electron is said to enter a singlet (triplet) state, and the light that is emitted as the electron returns to ground state is called fluorescence (phosphorescence). Phosphorescence is much longer lived than fluorescence. Both fluorescence and phosphorescence show specific emissions, dependent on specific wavelengths of the excitation light. To utilize fluorescence, the specimen needs to be labeled with a suitable molecule (fluorochrome) whose distribution can be measured after illumination, using a fluorescence microscope. Fluorescence can be used to identify particular molecules and has the advantage of providing a high signal‐to‐noise ratio, which enables one to distinguish spatial distributions of rare molecules. A key feature of fluorescence microscopy is that it employs reflected rather than transmitted light, which means transmitted light techniques, such as phase contrast and DIC, can be combined with fluorescence microscopy. In practice the limit of resolution is about 0.2 µm with the best available objective lenses and a good specimen.

In order to excite the fluorochrome properly and observe its fluorescence emission, the appropriate filters must be present in the microscope. A fluorochrome may not fluoresce at all if the specimen is illuminated with an inappropriate wavelength. Finally, the specimen should not exhibit excessive autofluorescence (that is, should not glow in the absence of the fluorochrome).

8.9.3 Confocal Scanning Microscopy

One of the main problems of conventional light microscopy is blurring of the images by out-of-focus contributions, which obscures structures of interest, particularly in thick specimens. In conventional microscopy, not only the plane of focus is illuminated, but also much of the specimen above and below this plane. This out-of-focus light leads to a reduction in image contrast and a decrease in resolution. In a confocal microscope, all out-of-focus contributions are suppressed during image formation. This is obtained by an arrangement of diaphragms that, at optically conjugated points of the path of rays, act as a point source and as a point detector, respectively. A detection pinhole does not permit rays of light from out-of-focus points to pass through. The consequence is that all out-of-focus information is removed from the image, and the confocal image is basically an optical cross section of a not necessarily thin specimen. The cross-sectional thickness may approach the limit of resolution (which is, as a rule of thumb, 2–3 times the lateral resolution) but in practice is somewhat greater, say, 0.4–0.8 µm. The wavelength of the light, the NA of the objective, and the diameter of the diaphragm (a wider detection pinhole reduces the confocal effect) affect the depth of the focal plane. To obtain a full image, the point of light is moved across the specimen in an xy raster pattern by scanning mirrors. The emitted/reflected light passing through the detector pinhole is transformed into electrical signals by a photomultiplier and displayed on a computer monitor. As light source, either monochromatic (laser) light or white light is used. An air suspension table is often added to the equipment to eliminate vibrations present in the building.

The lateral resolution is also limited by the spot size of the optical beam and approaches 0.12–0.15 µm for an ideal specimen and with the best available objective lenses. The resolution in the z-direction is usually considerably higher, say, tens of nanometers, depending on whether monochromatic (laser) light or white light is used. If the confocal images are stored in a computer, it is possible to stack them and generate 3D reconstructions by using various depths of focus for a (transparent) specimen.

8.9.4 Polarized Light Microscopy

Polarized light microscopy uses plane-polarized light to analyze structures that are anisotropic. For anisotropic objects, such as a (para)crystalline material, the refractive index n is dependent on the orientation of the object relative to the incident light beam. Structures that have two different refractive indices at right angles to one another are called birefringent. There are two kinds of birefringence: intrinsic birefringence, which results from atomic or molecular anisotropic order (as in a crystal), and form birefringence, which results from supramolecular association into paracrystalline arrays.

The polarized light microscope must be equipped with both a polarizer, positioned in the light path somewhere before the specimen, and an analyzer (a second polarizer), placed in the optical pathway after the objective rear aperture. Image contrast arises from the interaction of plane-polarized light with a birefringent specimen, producing two individual wave components that are each polarized in mutually perpendicular planes. The velocities of these components differ and vary with the propagation direction through the specimen. After exiting the specimen, the light components are out of phase but are recombined with constructive and destructive interference when they pass through the analyzer. In practice, the object is rotated relative to the plane of polarization to maximize the intensity differences in the object (usually, the dominant object axis is at a 45° angle relative to the plane of polarization). Polarized light microscopy can be used to obtain information about the molecular structure of the birefringent object (e.g. orientation) and for polymer coatings is often used to assess the presence of crystals.

8.10 Electron Microscopy

The electron microscope uses a beam of electrons to create an image of the specimen. It is capable of much higher magnifications and has a greater resolving power than an optical microscope, allowing one to see much smaller objects in finer detail.

All electron microscopes use electromagnetic and/or electrostatic lenses to control the path of electrons that are emitted by a cathode. The basic design of an electromagnetic lens is a solenoid (a coil of wire around the outside of a tube) through which one can pass a current, thereby inducing an electromagnetic field within the tube. The electrons passing through the center of such solenoids on their way down the column of the electron microscope toward the sample are sensitive to magnetic fields, and their path can therefore be controlled by changing the current through the lenses. The resolving power is the ability to distinguish between two points, expressed as a distance. The faster the electrons travel, that is, the higher the accelerating voltage, the shorter their wavelength, and a reduced wavelength increases the resolving power. For electrons, λ = h/[2m₀eΨ(1 + eΨ/2m₀c²)]^(1/2) ≅ h/(2m₀eΨ)^(1/2), where Ψ is the acceleration voltage; m₀ and e the rest mass and charge of the electron, respectively; and c the speed of light. This results in λ ≅ 1.23/√Ψ nm (with Ψ in volts) or λ ≅ 2 pm at 300 kV. Typical accelerating voltages are 200 and 300 kV, but 1000 kV electron microscopes exist, though they are not commonly found. Resolution is what the microscope delivers and depends, among others, on the constancy of the voltage and the NA. For electron microscopy, the resolution is given approximately by the Abbe expression d ≅ 0.61λ/sin α, with α half the angular aperture, as the refractive index can be taken as n = 1. The half-angle α is typically 10⁻² radians, so that d ≅ 0.75/(α√Ψ) nm or d ≅ 0.14 nm at 300 kV. Aberrations and distortions present will limit the practical resolution. Several types of electron microscopes exist. Here we deal only with SEM, TEM, and STEM (scanning transmission electron microscopy). General references are [68, 69]; analytical EM is discussed in [70], while [71] specifically deals with polymers.
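The electron wavelength expression is easily evaluated numerically; the sketch below (standard physical constants, no book-specific data) reproduces the ≈2 pm value quoted for 300 kV:

```python
import math

# Physical constants (SI units)
H = 6.626e-34   # Planck constant (J s)
M0 = 9.109e-31  # electron rest mass (kg)
E = 1.602e-19   # elementary charge (C)
C = 2.998e8     # speed of light (m/s)

def electron_wavelength(psi_volts, relativistic=True):
    """de Broglie wavelength of an electron accelerated through psi_volts,
    with or without the relativistic correction used in the text."""
    energy = E * psi_volts
    if relativistic:
        p_squared = 2 * M0 * energy * (1 + energy / (2 * M0 * C**2))
    else:
        p_squared = 2 * M0 * energy
    return H / math.sqrt(p_squared)

for kv in (100, 200, 300):
    lam = electron_wavelength(kv * 1e3)
    print(f"{kv:4d} kV: lambda = {lam * 1e12:.2f} pm")
```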

8.10.1 TEM

In TEM, the electron beam is transmitted through a thin specimen, semitransparent for electrons, and supported on a grid. An electron beam that has been partially transmitted through the thin specimen carries information about the structure of the specimen. The spatial variation in this information (the “image”) is then magnified by the magnetic lenses until it is recorded by hitting a fluorescent screen, photographic plate, or light sensitive sensor such as a charge‐coupled device (CCD) camera. The image detected by the CCD may be displayed in real time on a monitor or computer (Figure 8.20a).


Figure 8.20 Electron pathways in transmission electron microscopy. (a) Conventional TEM, showing Köhler illumination; (b) STEM showing scanning illumination and the various angular regimes.

The resolution of TEM is also limited by spherical and chromatic aberration, but a new generation of aberration correctors has been able to limit these aberrations. Software correction of spherical aberration has allowed the production of images with sufficient resolution to show carbon atoms in diamond separated by only 0.089 nm and atoms in silicon at 0.078 nm at magnifications of 50 million times. The ability to determine the positions of atoms within materials has made the TEM an indispensable tool for nanotechnology research and development in many fields, including heterogeneous catalysis and the development of semiconductor devices for electronics and photonics. In many cases though, it is still mainly the specimen itself, be it the preparation or its beam sensitivity, which limits the resolution of what we can see in the electron microscope, rather than the microscope itself.

Electron diffraction can also be done in TEM, and, similarly to XRD, one obtains a diffuse ring pattern for amorphous materials, sharp rings for nanocrystalline materials with random orientation, and a spot pattern for (single) crystalline materials. As specimens are normally rather thin, scattering theory neglecting multiple scattering (as for XRD) can be used in many cases. For a detailed interpretation of diffraction patterns, however, multiple scattering has to be taken into account (see, e.g. [51]).

TEM is also capable of 3D imaging, referred to as electron tomography, which involves taking a succession of images while tilting the specimen through increasing angles, typically up to 60° or 70° (Figure 8.21). For the optimum acquisition of these so-called tilt series, various protocols are available [72]. After aligning all the images, a 3D image of the specimen can be reconstructed, leading to a 3D data cube of intensities from which numerical cross sections through the specimen can be extracted. For the reconstructed image of a particle of diameter D, the resolution d is often stated to be given by the approximate Crowther criterion [73] d ≅ πD/m, where m is the minimum number of views, equally spaced over a single tilt range of 90°. In practice, many other considerations have to be made; see, e.g. [74]. For liquid dispersions one can use cryomicroscopy. A cryosample is created, after removing excess fluid from the TEM grid (blotting), by rapidly plunging the liquid sample on the grid into (usually) liquid ethane (at about −185 °C ≅ 88 K). In this way rapid cooling is ensured, crystallization avoided, and the sample vitrified in its native state. Thereafter this vitrified sample can be examined in a cryomicroscope, operating typically at about −180 °C ≅ 93 K. As an example of tomography experiments, Figure 8.22a shows one section of a composite silica particle [75]. The 3D representation of the complete set of sections is given in Figure 8.22b, clearly showing the raspberry nature of these particles. Such particles can be used in superhydrophobic coatings (see Section 7.3.5).
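As a rough numerical illustration of the Crowther criterion, the sketch below estimates the attainable resolution for an assumed particle diameter of 80 nm (of the order of the raspberry particles in Figure 8.22) and the number of views needed for a target resolution; the numbers are illustrative only:

```python
import math

def crowther_resolution(diameter_nm, n_views):
    """Approximate Crowther criterion d = pi * D / m for a tilt series of m
    equally spaced views of a particle of diameter D."""
    return math.pi * diameter_nm / n_views

def views_needed(diameter_nm, target_resolution_nm):
    """Minimum number of equally spaced views to reach a target resolution."""
    return math.ceil(math.pi * diameter_nm / target_resolution_nm)

D = 80.0  # assumed particle diameter in nm
for m in (35, 70, 140):
    print(f"{m:3d} views: d = {crowther_resolution(D, m):.1f} nm")
print(f"views needed for 1 nm resolution: {views_needed(D, 1.0)}")
```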


Figure 8.21 Electron tomography. (a) Acquisition of images, in this case with tilting up to 60°; (b) Reconstruction of 3D object.


Figure 8.22 Raspberry silica particle composed of diameter ≅10 and ≅80 nm individual particles. (a) Reconstructed section; (b) Reconstructed 3D representation.

The physics of image formation is dealt with in detail in [76]. Concise reviews on imaging soft matter in TEM and cryo‐TEM are given in [77], while [78] deals with many details.

8.10.2 SEM

Unlike TEM, where the electrons in the primary beam are transmitted through the sample, SEM produces images by detecting secondary or backscattered electrons (Figure 8.23), which are emitted from the surface due to excitation by the primary electron beam (secondary electron imaging). In the SEM, the electron beam is scanned across the surface of the sample in a raster pattern, with detectors building up an image by mapping the detected signals with beam position.


Figure 8.23 Electron interaction with a specimen. (a) Electron excitation resulting in characteristic X‐rays or Auger electrons; (b) The volume within a specimen from which the various types of signals originate.

Because SEM imaging relies on electron interactions at the surface rather than on transmission, it enables imaging of (the surface of) thick samples. SEM has a greater depth of field than an optical microscope and can therefore produce images that are a good representation of the 3D topography of the sample surface.

In SEM, much lower accelerating voltages than in TEM are used to prevent beam penetration into the sample, since what is required is the generation of secondary electrons from the true surface structure of the sample. Therefore, it is not uncommon to use a low voltage, in the range of 1–5 kV, even though SEMs are capable of acceleration voltages of, say, 30 kV. However, other signals arise as well (Figure 8.23a), and one can also use the backscattered electrons instead of the secondary electrons, leading to backscatter imaging. Analyzing the energy (energy dispersive spectrometry, EDS) or the wavelength (wavelength dispersive spectrometry, WDS) of the X-rays generated yields chemical information. For quantitative analysis EDS is used, but reference samples with known composition are required. The X-rays result from ionization of one of the inner shells, for example, the K-, L-, or M-shell, if the incident electron radiation has sufficient energy to dislodge an electron. The subsequent transition of an outer electron into the vacancy of the inner shell leads to the emission of characteristic X-rays. Instead of an X-ray photon, another electron, an Auger electron, may be emitted, which is also characteristic of the element. The Auger yield is high for the light elements, while the X-ray yield is high for the heavier elements. Hence, detection of the elements Be, C, N, O, and F requires special precautions.

The useful resolution in SEM is limited by the raster lines used in the scanning process and by the beam divergence. The size of the pear-shaped interaction volume increases with increasing acceleration voltage, while the volume close to the surface from which secondary electrons are emitted stays roughly the same (Figure 8.23b). For EDS and WDS, the resolution is also limited by the response volume, which is typically 1 µm in diameter. TEM resolution is thus about an order of magnitude better than SEM resolution.

8.10.3 STEM

STEM, as implied by the name, combines transmission microscopy with scanning [79]. The difference in electron pathways between TEM and STEM is shown schematically in Figure 8.20. This combination allows for a larger dose as, in principle, a spot examined is not visited twice. This is generally important for soft matter as these materials are easily beam damaged, that is, degraded by the electron beam. Moreover, typically somewhat thicker specimens can be used as compared with conventional TEM, say, up to 1–2 µm. Protocols to retrieve the maximum amount of information from such experiments are steadily being developed, including sampling limitations (see, e.g. [80]) and assessing the effect of beam damage (see, e.g. [81]). Tomography can be done not only in TEM but also in STEM, possibly under cryoconditions. For a general review, see [82], and for reviews dedicated to soft matter, see [83, 84].

8.10.4 Sample Preparation and Related Issues

Materials to be viewed in an electron microscope generally require processing to produce a suitable sample. This is mainly because the whole of the inside of an electron microscope is under high vacuum in order to enable the electron beam to travel over sufficient distances. The technique necessary to analyze a sample varies depending on the specimen, the type of analysis required, and the type of microscope. In any case, since artefacts are easily produced, information about the sample is important. Questions such as the following should be asked: What is the problem? How are the samples taken? Are the samples stable to vacuum and/or electron irradiation and in time?

The samples to be viewed in EM are examined in vacuum, as air scatters the electrons. Hence, the samples need to be specially prepared by sometimes lengthy and difficult techniques to result in samples with a proper surface (SEM) or thickness (TEM), which can withstand the environment inside an electron microscope. Nowadays somewhat hydrated samples can be imaged using an environmental scanning electron microscope (ESEM) in which the specimen is kept relatively moist but, by differential pumping, the beam path is largely in high vacuum.

For conventional SEM, one often uses embedding, that is, fixation with a resin such as Araldite®, which is polymerized to yield a hard block that can be examined, possibly after polishing.

Scanning electron microscopes usually image conductive or semiconductive materials best. Nonconductive materials are usually examined after sputter coating, that is, the deposition of an ultrathin layer of electrically conducting material in a low vacuum. This is done to prevent charging of the specimen, which would otherwise occur because of the accumulation of static electric fields due to the electron irradiation required during imaging. It also increases the number of secondary electrons that can be detected from the surface of the sample in the SEM and therefore increases the signal-to-noise ratio. Such coatings include gold, gold/palladium, platinum, chromium, etc., and have a layer thickness of a few nanometers up to, say, 10 nm. This process, however, potentially disturbs delicate samples and can obscure details.

Sectioning is the production of thin slices of the specimen to be used in TEM. For EM the sections must be very thin so that they are semitransparent to electrons, typically less than 100 nm, but up to 1–2 µm for low Z elements. These sections for EM are cut on an ultramicrotome with a glass or diamond knife. Glass knives can easily be made in the laboratory and are much cheaper than diamond, but they blunt very quickly and therefore need replacing frequently. It is also possible to use embedded specimens for TEM by sectioning with a microtome.

For TEM, stained samples can be prepared by exposure to rather aggressive chemicals to reveal otherwise invisible detail, although this may introduce artefacts purely as a result of the procedure. For staining one uses heavy metals, such as lead, uranium, and osmium, to provide extra contrast between different structures, since many soft materials are nearly transparent to the electron beam. By staining the samples with heavy metals, the local mass density is increased, which increases the contrast in the resulting image.

Already mentioned is cryofixation, in which one cools a soft or liquid specimen rapidly, typically to liquid nitrogen temperature or below, so that the solvent, usually water, freezes. This is done so rapidly that the solvent vitrifies, leading to an amorphous instead of a crystalline state. Such a procedure preserves the specimen in a snapshot of its solution state with a minimum of artefacts. The entire field called cryoelectron microscopy has branched from this technique. With the development of cryoelectron microscopy, for which the 2017 Nobel Prize in Chemistry was awarded to Dubochet, Frank, and Henderson, it is now possible to observe virtually any liquid specimen close to its native state. However, cryofixation techniques are not without their own preparation artefacts, and ice crystal damage is a common problem when trying to image a large specimen (larger than 200 µm), which cannot be frozen rapidly enough to vitrify the water. Recently, direct examination of liquid samples has become possible by liquid-phase TEM. In this technique the liquid sample (thickness ≤ 500 nm) is contained between two electron-transparent windows (typically Si₃Nₓ, thickness ≅ 30–50 nm), thereby preventing evaporation of the liquid [85].

Electron microscopes are expensive to buy and to maintain. They require extremely stable high voltage supplies, extremely stable currents to each electromagnetic coil/lens, continuously pumped high/ultrahigh vacuum systems, and a cooling water supply circulation through the lenses and pumps. As they are very sensitive to vibration and external magnetic fields, microscopes aimed at achieving high resolutions must be housed in buildings with special services. A significant amount of training is required in order to operate an electron microscope successfully, and EM is considered a specialized skill.

Finally, it must be emphasized that every electron micrograph, in a sense, is an artefact. Changes in the microstructure may be almost inevitable in sample preparation. With experience, microscopists learn to recognize the difference between an artefact of preparation and the true structure, mainly by looking at the same or similar specimens prepared in the same or a different way. Moreover, a part of the material has to be selected, which, however, may be insufficiently representative. Bias in choice of area may be eliminated though. In (S)TEM, high‐resolution, large‐area scanning is nowadays possible by stitching a large number of high‐resolution images (see, e.g. [80]), so that a really representative area can be examined.

8.11 Surface Probe Microscopy

For surfaces, specially dedicated techniques are available that are collectively addressed as surface probe microscopy (SPM). Of these, atomic force microscopy (AFM) is by far the most useful one. In this technique, a sharp, vibrating tip approaches the surface in a controlled way, while the force and displacement due to the interaction between tip and substrate are registered (Figure 8.24). Both the real and imaginary components of the force response can be monitored. By using a feedback loop to keep the force constant, the distance can be recorded, so that the topology of the surface can be determined by scanning over the surface. Typical image sizes range from 10 µm down to less than 1 µm, with a resolution that obviously depends, apart from the electronics and software used, on the radius of the tip. A tip is usually a few micrometers long and often has a diameter of less than 100 Å, and such a tip is located at the free end of a cantilever that is 100–200 µm long. There are different imaging modes that are used for different types of analysis. Generally, in the contact mode, the cantilever is held less than a few angstroms from the sample surface, and the interatomic force between the cantilever and the sample is repulsive. In the noncontact mode, the cantilever is held on the order of tens to hundreds of angstroms from the sample surface, and the interatomic force between the cantilever and sample is attractive. In the intermittent or tapping mode, the interaction fluctuates between repulsion and attraction, that is, between the contact and noncontact mode. Apart from van der Waals forces, two other forces arise during the scan: a capillary force caused by a buildup of water around the tip, as water is normally present unless a protective, inert environment is used, and the force exerted by the cantilever itself. Obviously, the first contribution should be avoided (unless there is a specific interest in it), while the second can be used to advantage by matching the stiffness of the cantilever to the stiffness of the material probed.
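The division into contact (repulsive) and noncontact (attractive) regimes can be visualized with a generic pair potential. The sketch below uses a Lennard-Jones force purely as a qualitative stand-in for the real tip-sample interaction; the parameters ε and σ are arbitrary illustrative values and do not come from the text:

```python
def lennard_jones_force(r_nm, epsilon=1.0, sigma=0.34):
    """Force derived from a Lennard-Jones pair potential: positive values are
    repulsive (contact regime), negative values attractive (noncontact regime)."""
    sr6 = (sigma / r_nm) ** 6
    return 24.0 * epsilon / r_nm * (2.0 * sr6 ** 2 - sr6)

for r in (0.30, 0.35, 0.40, 0.60, 1.00):
    f = lennard_jones_force(r)
    regime = "repulsive (contact)" if f > 0 else "attractive (noncontact)"
    print(f"r = {r:4.2f} nm: F = {f:+8.2f} (arb. units), {regime}")
```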


Figure 8.24 AFM. (a) Schematic of operation, (b) typical sharp tip, and (c) spherical tip with diameter of about 10 µm.

A modification of AFM is atomic force acoustic microscopy (AFAM). In AFAM the principle is to excite the cantilever into flexural vibrations while the tip is in contact with the sample surface [86]. A piezoelectric transducer placed below the sample generates acoustic waves, which cause vibrations of the sample surface and of the cantilever close to its resonance frequency. The out-of-plane vibrations of the sample surface and the cantilever vibrations are measured. The resulting acoustic contact tip–sample resonance frequency (CRF) images, corresponding to a shift of the resonance frequency of the sample/tip system near the surface, are recorded simultaneously with the topography images. These data allow a precise mapping of local variations in the elasticity of the sample. An example is provided by highly interpenetrated and phase-separated UV-cured interpenetrating methacrylate–epoxide polymer networks [87]. In Figure 8.25a, the results for a coating containing 20% acrylate and 80% epoxide are presented, showing a bright matrix of epoxide with dark domains of methacrylate containing epoxide nodules, while Figure 8.25b shows the CRF histograms clearly confirming these results. This coating clearly contains two different populations centered at different CRF values. The first one, ranging between 314 and 320 kHz, corresponds to the soft methacrylate domains. It is three times broader than the one corresponding to the hard epoxide domains, centered at 321.5 kHz. It is possible to estimate the surface area corresponding to the soft and hard parts by integration, which in this case leads to 50% methacrylate and 50% epoxide at the surface, differing significantly from the bulk composition.
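The integration of the CRF histogram into surface area fractions can be mimicked with a few lines of code. The sketch below uses a synthetic bimodal histogram whose centers and relative widths loosely follow the values quoted above (soft population around 314-320 kHz, three times broader than the hard population at 321.5 kHz); the actual data of [87] are not reproduced here.

```python
import numpy as np

def surface_fractions(crf_khz, counts, threshold_khz):
    """Estimate area fractions of two phases by integrating a bimodal CRF
    histogram below and above a chosen threshold frequency."""
    crf = np.asarray(crf_khz, dtype=float)
    cnt = np.asarray(counts, dtype=float)
    soft = cnt[crf < threshold_khz].sum()
    hard = cnt[crf >= threshold_khz].sum()
    total = soft + hard
    return soft / total, hard / total

# Synthetic bimodal histogram (illustrative numbers only)
crf = np.arange(313.0, 324.0, 0.5)
counts = (np.exp(-0.5 * ((crf - 317.0) / 1.8) ** 2) +          # broad soft population
          3.0 * np.exp(-0.5 * ((crf - 321.5) / 0.6) ** 2))     # narrow hard population

soft, hard = surface_fractions(crf, counts, threshold_khz=320.5)
print(f"soft (methacrylate) fraction: {soft:.2f}")
print(f"hard (epoxide) fraction:      {hard:.2f}")
```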


Figure 8.25 AFAM on a 20% methacrylate–80% epoxide coating. (a) The topography showing methacrylate nodules containing epoxide inclusions in an epoxide matrix; (b) The CRF distribution indicating the presence of about 50% methacrylate and 50% epoxide at the surface.

Still another modification is conductive atomic force microscopy (CAFM, [88]), used to obtain information about the local conductive properties of a sample by employing a conductive tip connected to a current meter. CAFM has been applied mainly to solid‐state materials with heterogeneous transport properties but is also used to measure electrochemical transport through conductive buffers. Electrostatic force microscopy (EFM) is another way to extract conductivity information using an AFM setup. In this case the electrostatic interaction between the tip and specimen is employed. Figure 8.26a shows the principles of both techniques. As an example we show results for CAFM experiments on a graphene network inside a graphene–PS composite in Figure 8.26b [89]. Studying the graphene distribution inside a polymer matrix, for example, by conventional SPM using phase shift measurements in tapping mode, is difficult because of the small thickness of graphene sheets in the composite. As a result the contact area between a tip and the graphene sheets is much smaller than the total tip–sample contact area, and the contribution of the graphene properties to the measured signal is rather low.


Figure 8.26 CAFM and EFM. (a) Schematic of the principle of CAFM (left) and EFM (right); (b) EFM images of the same area at (1) U_t = −2 V and (2) U_t = −6 V; (3) Topography of the measured area; (4) Cross section AA′ at U_t = −2 V (1), U_t = −4 V (2), and U_t = −6 V (3).

In addition, the surface inhomogeneities influencing the phase image render image interpretation problematic (Figure 8.26b, (3)). On the other hand, CAFM allows for easy distinguishing of the nanoparticles of a conductive network inside the composite sample, and the topography and current distribution could be measured by CAFM in contact mode with a tip–sample force of ≅10 nN (Figure 8.26b, (1) and (2)). Such a force allows for nondestructive surface analysis, as testified by a subsequent tapping mode measurement on the same area, which did not reveal any changes of the sample surface. The current distribution image shows the places where the conductive network of graphene sheets appears at the surface. The lateral resolution of CAFM is limited by the tip–sample contact area, which can be estimated from Hertzian contact theory. In this case the diameter of the contact area between the spherical gold tip (radius ≅ 50 nm) and the PS surface is less than 10 nm at a tip–sample force of 10 nN, in agreement with the full width at half maximum of the current signal for the graphene sheets (Figure 8.26b, (4)).
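The Hertzian estimate of the contact diameter quoted above can be checked with a short calculation; the elastic constants for gold and polystyrene used below are typical literature values assumed for illustration and are not given in the text:

```python
def hertz_contact_diameter(force_N, tip_radius_m, E_tip, nu_tip, E_sample, nu_sample):
    """Hertzian contact radius a = (3*F*R / (4*E_star))**(1/3); returns the diameter 2a,
    with 1/E_star = (1 - nu_tip^2)/E_tip + (1 - nu_sample^2)/E_sample."""
    inv_E_star = (1 - nu_tip**2) / E_tip + (1 - nu_sample**2) / E_sample
    E_star = 1.0 / inv_E_star
    a = (3.0 * force_N * tip_radius_m / (4.0 * E_star)) ** (1.0 / 3.0)
    return 2.0 * a

# Gold tip (R = 50 nm) on polystyrene at a 10 nN load; assumed moduli/Poisson ratios
d = hertz_contact_diameter(force_N=10e-9, tip_radius_m=50e-9,
                           E_tip=78e9, nu_tip=0.44,       # gold (assumed)
                           E_sample=3e9, nu_sample=0.34)  # polystyrene (assumed)
print(f"contact diameter = {d * 1e9:.1f} nm")  # of the order of 10 nm, as in the text
```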

A brief review of AFM is given in [90], while a more detailed, but still concise review is provided in [91]. An overview on fundamentals and applications of SPM in general can be found in [92, 93].

8.12 Thickness and Beyond

The thickness of coatings and the possible presence of defects are important characteristics for nearly every coating. However, in particular for industrial coatings, such as automotive and marine coatings, the control of these characteristics is imperative, and for that first of all proper detection is required. Thickness may be measured from one or more cross sections using optical or electron microscopy, or directly using a caliper. While fine for the laboratory, such a destructive technique is not applicable in production control.

One of the techniques for thickness measurements is ultrasonic gauging, based on the difference in acoustical impedance between the substrate and the coating [94]. Typical accuracy is about 3%. For different polymers the impedance may be very similar, and in that case mainly the overall thickness can be monitored. A modern development led to the simultaneous determination of thickness, modulus, and attenuation [95, 96]; a comparison of ultrasonic data with DMA results is provided in [97]. With ultrasonic techniques it is also possible to follow film formation [98]. Whenever required, such an approach can also be combined with IR spectroscopy [99].
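A hedged sketch of the arithmetic behind pulse-echo ultrasonic gauging, assuming that the thickness follows from the sound speed in the coating and the echo round-trip time and that the echo strength is set by the acoustic impedance contrast; the numbers are illustrative, not taken from [94]:

```python
def thickness_from_echo(sound_speed_m_s, round_trip_time_s):
    """Pulse-echo thickness gauging: d = v * t / 2 (the pulse crosses the
    coating thickness twice before returning to the transducer)."""
    return sound_speed_m_s * round_trip_time_s / 2.0

def reflection_coefficient(Z1, Z2):
    """Amplitude reflection coefficient at an interface between media with
    acoustic impedances Z1 and Z2: R = (Z2 - Z1) / (Z2 + Z1)."""
    return (Z2 - Z1) / (Z2 + Z1)

# Assumed values: a polymer coating with sound speed ~2400 m/s and a 50 ns echo delay
print(f"thickness = {thickness_from_echo(2400.0, 50e-9) * 1e6:.0f} um")

# Assumed impedance contrast polymer (~3 MRayl) vs. steel (~45 MRayl) -> strong echo
print(f"R (coating/steel) = {reflection_coefficient(3.0e6, 45.0e6):.2f}")
```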

Another option for measuring thickness is using eddy current gauging, in which the response of a metal substrate is recorded upon applying locally a magnetic field. The thickness of nonmetallic coatings on metal substrates can be determined simply from the effect of lift‐off on impedance. The coating serves as a spacer between the probe and the conductive surface. As the distance between the probe and the conductive base metal increases, the eddy current field strength decreases because less of the probe's magnetic field can interact with the base metal. Thicknesses between 0.5 and 25 µm can be measured to an accuracy between 10% for lower values and 4% for higher values, preferably using a calibration specimen. Contributions to impedance changes due to conductivity variations should be phased out, unless it is known that conductivity variations are negligible, as normally found at higher frequencies. This technique obviously only measures the total thickness. It can also be used for conductive coatings [100].

A modern development is the use of terahertz radiation [101, 102]. This technique has developed rapidly in the last decade, and at present it is possible to monitor complex coatings in-line. For example, for automotive coatings it has been shown that the results compare favorably with those of ultrasound measurements [103]. Advanced calibration allows accuracies of up to ±2 µm in quantifying the thickness of paint films. Analysis of time domain data yielded results for measurements made on real, industrially applied wet-on-wet structures containing five layers with poorly defined interfaces and thicknesses below the conventional resolution limit of ultrasound [104]. An attractive aspect is that 3D imaging of the various coating layers becomes possible. For marine coatings consisting of three antifouling coatings and two anticorrosive coatings [105], not only could the thickness be monitored, but it was also shown how subsurface defects, such as delamination between coating and substrate, as well as the presence of corrosion on the substrate, can be identified.

8.13 Final Remarks

It will be clear that a full chemical and morphological characterization requires many different techniques and therefore a wide variety of knowledge. This chapter inevitably cannot deal with all these aspects and should be considered as an appetizer or a guide to the specialized literature. Nevertheless, a coating technologist should be able to discuss his or her problems with experts on these techniques, for which a basic understanding of the possibilities and limitations of the various techniques is, if not a prerequisite, at least rather helpful.

References

  1. Simon, G.P. ed. (2002). Polymer Characterization Techniques and their Application to Blends. Washington, DC: Oxford University Press.
  2. Stamm, M. ed. (2008). Polymer Surfaces and Interfaces. Berlin: Springer.
  3. Jiménez‐Pardo, I., van der Ven, L.G.J., van Benthem, R.A.T.M. et al. (2017). J. Polym. Sci. (Polym. Chem.) A55: 1502.
  4. Banwell, C.N. (1966). Fundamentals of Molecular Spectroscopy. New York: McGraw‐Hill.
  5. Socrates, G. (2004). Infrared and Raman Characteristic Group Frequencies, 3e. Chichester: Wiley.
  6. Adema, K.N.S., Makki, H., Peters, E.A.J.F. et al. (2014). Polym. Degrad. Stab. 110: 422.
  7. Villani, M., Scheerder, J., van Benthem, R.A.T.M. and de With, G. (2014). Eur. Polym. J. 56: 118.
  8. Xue, L., Li, W., Hoffmann, G.G. et al. (2011). Macromolecules 44: 2852.
  9. Wilson, E.B. Jr., Decius, J.C. and Cross, P.C. (1955). Molecular Vibrations. New York: McGraw‐Hill (also Dover, 1980).
  10. Colthup, N.B., Daly, L.H. and Wiberley, S.E. (1990). Introduction to Infrared and Raman Spectroscopy, 3e. San Diego: Academic Press.
  11. Siesler, H.W. and Holland‐Moritz, K. (1980). Infrared and Raman Spectroscopy of Polymers. New York: Marcel Dekker.
  12. Koenig, J.L. (1992). Spectroscopy of Polymers. Washington, DC: American Chemical Society.
  13. Garton, A. (1992). Infrared Spectroscopy of Polymer Blends, Composites and Surfaces. New York: Oxford University Press.
  14. Painter, P.C., Coleman, M.M. and Koenig, J.L. (1982). The Theory of Vibrational Spectroscopy and its Application to Polymeric Materials. New York: Wiley.
  15. Painter, P.C. and Coleman, M.M. (1997). Chapter 6 in: Fundamentals of Polymer Science, 2e. Lancaster, PA: Technomic.
  16. Carrington, A. and McLachlan, A.D. (1969). Introduction to Magnetic Resonance. New York: Harper and Row.
  17. Axelson, D.E., Mandelkern, L., Popli, R. and Mathieu, P. (1983). J. Polym. Sci., Polym. Phys. Ed. 21: 2319.
  18. (a) Andrew, E.R., Bradbury, A. and Eades, R.G. (1959). Nature 183: 1802. (b) Lowe, I.J. (1959). Phys. Rev. Lett. 2: 285.
  19. Pines, A., Gibby, M.G. and Waugh, J.S. (1973). J. Chem. Phys. 59: 569.
  20. Schmidt‐Rohr, K. and Spiess, H.W. (1994). Multidimensional Solid‐State NMR and Polymers. London: Academic Press.
  21. Callaghan, P.T. (1991). Principles of Nuclear Magnetic Resonance Microscopy. Oxford: Oxford University Press.
  22. Miloskovska, E., Friedrichs, C., Hristova‐Bogaerds, D. et al. (2015). Macromolecules 48: 1093.
  23. McBrierty, V.J. and Packer, K.J. (1993). Nuclear Magnetic Resonance in Solid Polymers. Cambridge: Cambridge University Press.
  24. Mathias, L.J. ed. (1991). Solid‐State NMR of Polymers. New York: Plenum Press.
  25. (a) ASTM E222‐17 (2017). Standard test methods for hydroxyl groups using acetic anhydride acetylation. (b) ASTM D664‐11a (2017). Standard test method for acid number of petroleum products by potentiometric titration.
  26. Hatzakis, E., Agiomyrgianaki, A., Kostidis, S. and Dais, P. (2011). J. Am. Oil Chem. Soc. 88: 1695.
  27. Cools, P.J.C.H. (1999). Characterization of Copolymers by Gradient Elution Chromatography. PhD thesis. Eindhoven: Eindhoven University of Technology.
  28. Phillips, R.W. and Dettre, R.G. (1976). J. Colloid Interf. Sci. 56: 251.
  29. Batts, G.N. (1987). Colloids Surf. 22: 133.
  30. Gerenser, L.J., Pochan, J.M., Mason, M.G. and Elman, J.F. (1985). Langmuir 1: 305.
  31. Fadley, C.S. (1976). Prog. Solid State Chem. 2: 265.
  32. van der Heide, P. (2012). X‐ray Photoelectron Spectroscopy: An Introduction to Principles and Practices. New York: Wiley.
  33. Benninghoven, A., Rudenauer, F.G. and Werner, H.W. (1987). Secondary Ion Mass Spectrometry: Basic Concepts, Instrumental Aspects, Applications and Trends. New York: Wiley.
  34. Benninghoven, A. (1994). Angew. Chem. Int. Ed. 33: 1023.
  35. McPhail, D.S. (2006). J. Mater. Sci. 41: 873.
  36. Brongersma, H.H., Draxler, M., de Ridder, M. and Bauer, P. (2007). Surf. Sci. Rep. 62: 63.
  37. Grehl, T., Niehuis, E. and Brongersma, H.H. (2011). Microscopy Today 19: 34.
  38. Ming, W., Tian, M., van de Grampel, R.D. et al. (2002). Macromolecules 35: 6920.
  39. Wu, C.‐S. (2004). Handbook of Size Exclusion Chromatography and Related Techniques, 2e. New York: Marcel Dekker.
  40. Striegel, A.M., Yau, W.W., Kirkland, J.J. and Bly, D.D. (2009). Modern Size Exclusion Chromatography, 2e. Hoboken, NJ: Wiley.
  41. Mori, S. and Barth, H.G. (1999). Size Exclusion Chromatography. Berlin: Springer‐Verlag.
  42. Wu, K.J. and Odom, R.W. (1998). Anal. Chem. News & Features, July 1: 456.
  43. Simonsick, W.J. and Prokai, L. (1993). Rapid Commun. Mass Spectrom. 7: 853.
  44. Montaudo, G., Samperi, F. and Montaudo, M.S. (2006). Prog. Polym. Sci. 31: 277.
  45. Nielen, M.W.F. (1999). Mass Spectrom. Rev. 18: 309.
  46. Hillenkamp, F. and Peter‐Katalinic, J. (2007). MALDI MS: A Practical Guide to Instrumentation, Methods and Applications. Weinheim: Wiley‐VCH.
  47. (a) Lane, J.E. and Spurling, T.H. (1979). Chem. Phys. Lett. 67: 107. (b) Mitlin, V.S. and Sharma, M.M. (1995). J. Colloid Interf. Sci. 170: 407.
  48. de With, G., Harkema, S. and Feil, D. (1976). Acta Crystallogr. B32: 3178.
  49. Coppens, P. (1997). X‐ray Charge Densities and Chemical Bonding. Oxford: Oxford University Press.
  50. Tsirelson, V.G. and Ozerov, R.P. (1996). Electron Density and Bonding in Crystals. New York: Taylor & Francis.
  51. Cowley, J.M. (1995). Diffraction Physics, 3e. Amsterdam: Elsevier.
  52. Giacovazzo, C. (2014). Phasing in Crystallography: A Modern Perspective. Oxford: Oxford University Press.
  53. Patterson, A.L. (1939). Phys. Rev. 56: 972.
  54. Zhang, Y., Karasu, F., Rocco, C. et al. (2016). Polymer 107: 249.
  55. Gnanasekaran, K., de With, G. and Friedrich, H. (2016). J. Phys. Chem. C120: 27618.
  56. Bârsan, O.A., He, G., Alkorre, H. et al. (2017). Compos. Sci. Technol. 151: 10.
  57. Klug, H.P. and Alexander, L.E. (1974). X‐ray Diffraction Procedures for Polycrystalline and Amorphous Materials, 2e. New York: Wiley.
  58. (a) Alexander, L.E. (1969). X‐ray Diffraction Methods in Polymer Science. New York: Wiley. (b) Baltà‐Calleja, F.J. (1989). X‐ray Scattering of Synthetic Polymers. Amsterdam: Elsevier.
  59. Sanjeeva Murthy, N. (2004). Rigaku J. 21: 15.
  60. Chu, B. and Hsiao, B.S. (2001). Chem. Rev. 101: 1727.
  61. Rawlins, D.J. (1992). Light Microscopy. Oxford: Bios Scientific Publishers.
  62. Spencer, M. (1982). Fundamentals of Light Microscopy. Cambridge: Cambridge University Press.
  63. Delly, J.G. (1988). Photography through the Microscope. Eastman Kodak Co.
  64. Rochow, T.G. and Tucker, P.A. (1994). Introduction to Microscopy by Means of Light, Electrons, X‐rays or Acoustics, 2e. New York: Plenum Press.
  65. Murphy, D.B. (2001). Fundamentals of Light Microscopy and Electronic Imaging. New York: Wiley.
  66. Gezici‐Koç, Ö., Erich, S.J.F., Huinink, H.P. et al. (2018). Prog. Org. Coat. 114: 135.
  67. Hintze‐Bruening, H. and Leroux, F. (2012). Nanocomposite Based Multifunctional Coatings, Chapter 2. In: New Advances in Vehicular Technology and Automotive Engineering. InTech.
  68. Williams, D.B. and Carter, C.B. (2009). Transmission Electron Microscopy: A Textbook for Materials Science. New York: Springer Science+Business Media.
  69. Hawkes, P. and Spence, J.C.H. ed. (2007). Science of Microscopy. New York: Springer Science+Business Media.
  70. Shindo, D. and Oikawa, T. (2002). Analytical Electron Microscopy for Materials Science. Tokyo: Springer.
  71. Michler, G.H. (2008). Electron Microscopy of Polymers. Heidelberg: Springer.
  72. Chen, D., Goris, B., Bleichrodt, F. et al. (2014). Ultramicroscopy 147: 137.
  73. Crowther, R.A., DeRosier, D.J. and Klug, A. (1970). Proc. Roy. Soc. London A317: 319.
  74. Chen, D., Friedrich, H. and de With, G. (2014). J. Phys. Chem. C118: 1248.
  75. Carcouët, C.C.M.C., Esteves, A.C.C., Hendrix, M.M.R.M. et al. (2014). Adv. Funct. Mater. 24: 5745.
  76. Reimer, L. and Kohl, H. (2008). Transmission Electron Microscopy, 5e. Berlin: Springer.
  77. Friedrich, H., Frederik, P.M., de With, G. and Sommerdijk, N.A.J.M. (2010). Angew. Chem. Int. Ed. 49: 7850.
  78. Frank, J. ed. (2005). Electron Tomography, 2e. New York: Springer.
  79. Keyse, R.J., Garratt‐Reed, A.J., Goodhew, P.J. and Lorimer, G.W. (1998). Introduction to Scanning Transmission Electron Microscopy. New York: Springer.
  80. Gnanasekaran, K., Snel, R., de With, G. and Friedrich, H. (2016). Ultramicroscopy 160: 130.
  81. Leijten, Z.J.W.A., Keizer, A.D.A., de With, G. and Friedrich, H. (2017). J. Phys. Chem. C121: 10552.
  82. Midgley, P.A. and Dunin‐Borkowski, R.E. (2009). Nature Mater. 8: 271.
  83. Nudelman, F., de With, G. and Sommerdijk, N.A.J.M. (2011). Soft Matter 7: 17.
  84. Patterson, J.P., Xu, Y., Moradi, M.‐A. et al. (2017). Acc. Chem. Res. 50: 1494.
  85. (a) Ross, F.M. (2015). Science 350: aaa9886. (b) de Jonge, N. and Ross, F.M. (2011). Nat. Nanotechnol. 6: 695.
  86. Karagiannidis, P.G., Kassavetis, S., Pitsalidis, C. and Logothetidis, S. (2011). Thin Solid Films 519: 4105.
  87. Rocco, C., Karasu, F., Croutxé‐Barghorn, C. et al. (2016). Mater. Today Commun. 6: 17.
  88. Ionescu‐Zanetti, C. and Mechler, M. (2005). Microsc. Anal. January: 9.
  89. Alekseev, A., Chen, D., Tkalya, E.E. et al. (2012). Adv. Funct. Mater. 22: 1311.
  90. Giessibl, F.J. (2003). Rev. Mod. Phys. 75: 949.
  91. Seo, Y. and Jhe, W. (2008). Rep. Progr. Phys. 71: 016101.
  92. Mironov, V.L. (2004). Fundamentals of Scanning Probe Microscopy. Nizhniy Novgorod.
  93. Tsukruk, V.V. and Singamaneni, S. (2012). Scanning Probe Microscopy of Soft Matter: Fundamentals and Practices. Weinheim: Wiley‐VCH.
  94. Beamish, D. (2004). Mater. Perf. September: 1.
  95. Lavrentyev, A.I. and Rokhlin, S.I. (2001). Ultrasonics 39: 211.
  96. Alig, I., Lellinger, D., Sulimma, J. and Tadjbakhsch, S. (1997). Rev. Sci. Instrum. 68: 1536.
  97. Alig, I., Tadjbakhsch, S. and Zosel, A. (1998). J. Polym. Sci., Polym. Phys. Ed. 36: 1703.
  98. Alig, I., Oehler, H., Lellinger, D. and Tadjbach, S. (2007). Prog. Org. Coat. 58: 200.
  99. Alig, I., Steeman, P.A.M., Lellinger, D. et al. (2006). Prog. Org. Coat. 55: 88.
  100. Moulder, J.C., Uzal, E. and Rose, J.H. (1992). Rev. Sci. Instrum. 63: 3455.
  101. Wallace, V.P., MacPherson, E., Zeitler, J.A. and Reid, C. (2008). J. Opt. Soc. Am. A25: 3120.
  102. van Mechelen, D. (2015). Optics & Photonics News, November: 16.
  103. Gregory, I.S., May, R.K., Su, K. and Zeitler, J.A. (2014). 39th Int. Conf. on Infrared, Millimeter, and Terahertz Waves (IRMMW‐THz), IEEE Xplore, November. doi: 10.1109/IRMMW‐THz.2014.6956024.
  104. Gregory, I.S., May, R.K., Taday, P.F. and Mounaix, P. (2016). 41st Int. Conf. on Infrared, Millimeter, and Terahertz Waves (IRMMW‐THz), IEEE Xplore, December. doi: 10.1109/IRMMW‐THz.2016.7758543.
  105. Tu, W., Zhong, S., Shen, Y. and Incecik, A. (2016). Ocean Eng. 111: 582.

Further Reading

  1. Goldschmidt, A. and Streitberger, H.‐J. (2003). BASF Handbook on Basics of Coating Technology. Münster: BASF.
  2. Stoye, D. and Freitag, W. (1998). Paints, Coatings and Solvents. Weinheim: Wiley‐VCH.
  3. Wicks, Z.W. Jr., Jones, F.N., Pappas, S.P. and Wicks, D.A. (2007). Organic Coatings: Science and Technology. Hoboken, NJ: Wiley.
