Chapter 17

Modeling and Characterization

In this chapter, following on from Chapter 3, some more of the important tools used by the mineral process engineer are briefly described. Under modeling is included computer simulation for circuit design, computational fluid dynamics (CFD) and discrete element method (DEM) for equipment design, and design of experiments (DOE) for empirical model building. Under characterization, geometallurgy and applied mineralogy are the topics; another characterization tool, surface analysis, is covered in Chapter 12.

Keywords

Computer simulation; computational fluid dynamics; discrete element method; experimental design; geometallurgy; applied mineralogy

17.1 Introduction

In this chapter, following on from Chapter 3, some more of the important tools used by the mineral process engineer are briefly described. Under modeling is included computer simulation for circuit design, computational fluid dynamics (CFD) and discrete element method (DEM) for equipment design, and design of experiments (DOE) for empirical model building. Under characterization, geometallurgy and applied mineralogy are the topics; another characterization tool, surface analysis, is covered in Chapter 12, Section 12.4.4.

17.2 Circuit Design and Optimization by Computer Simulation

Computer simulation has become an important tool in the design and optimization of mineral processing plants. The capital and operating costs of mineral processing circuits are high and in order to reduce them consistent with desired metallurgical performance, the design engineer must be able to predict the metallurgical performance, relate performance to costs, and select the circuit for detailed design based on these data. Simulation techniques are suitable for this purpose, provided that the unit models are valid, and considerable progress continues to be made in this area.

Computer simulation is intimately associated with mathematical modeling, and realistic simulation relies heavily on the availability of accurate and physically meaningful models. King (2012) provides an excellent review of unit modeling and circuit simulation methods. Comminution and classification models are well developed and in routine use (Napier-Munn et al., 1996; Sbárbaro and Del Villar, 2010). Kinetic models of flotation have been used for many years (Lynch et al., 1981), and new, more powerful approaches to modeling flotation are now available for practical use (Alexander et al., 2005; Fuerstenau et al., 2007; Collins et al., 2009). Models of gravity and dense medium processes are available, although they are not as widely used (Napier-Munn and Lynch, 1992), and the modeling of liberation has progressed, though is not yet routine (Gay, 2004a, b). Advances in modeling and simulation of dewatering systems can be found in Concha (2014).

Several commercial simulators for mineral processing are available, including: JKSimMet/Float, USimPac (Brochot et al., 2002), Modsim (King, 2012), and Plant Designer (Hedvall and Nordin, 2002). The first two simulators also provide data analysis and model calibration capabilities. Limn (Leroux and Hardie, 2003) is a flowsheet solution package implemented as a layer on top of Microsoft Excel with simulation capability.

Figure 17.1 illustrates the procedure for using simulation to predict process performance for design or optimization. The key inputs are the material characteristics (e.g., grindability or floatability) and the parameters of the process models. The latter are obtained by estimation from operating data (preferred) or from a parameter library.

image
Figure 17.1 Procedure for simulation to optimize design or performance (From Napier-Munn et al., 1996; Courtesy JKMRC, The University of Queensland).

Compared to laborious and expensive plant trials, computer simulation offers clear advantages in assessing alternative circuits, optimizing design, and estimating flow rates of process streams, which can be used to size material handling equipment (conveyors, pumps, and pipelines). However, the dangers of computer simulation also come from its computational power and relative ease of use, which encourage searching the “what-if” space. It is always necessary to respect the operating range over which the models are valid, as well as the realistic limits which must be placed on equipment operation, such as pumping capacity. In addition, it is worth remembering that good simulation models combined with poor data or poor model parameter estimates can produce highly plausible looking nonsense. Simulation studies are a powerful and useful tool, complementary to sound metallurgical judgment and familiarity with the circuit being simulated and its metallurgical objectives.
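To make the convergence idea behind such simulators concrete, the following minimal Python sketch (not any commercial package) iterates the mass balance of a hypothetical ball mill closed by a cyclone until the mill feed tonnage stabilizes; the fresh feed rate and the fraction of mill discharge returned by the cyclone are invented illustration values.

```python
# Minimal sketch: iterative convergence of a closed ball mill / cyclone
# circuit mass balance. Feed rate and cyclone split are hypothetical values.

def converge_circuit(new_feed_tph=250.0, underflow_split=0.60,
                     tol=1e-6, max_iter=200):
    """Return mill feed (t/h), circulating load (%) and iterations used.

    new_feed_tph    : fresh feed to the circuit (t/h)
    underflow_split : fraction of mill discharge the cyclone returns to the mill
    """
    mill_feed = new_feed_tph                  # initial guess: no recycle
    for i in range(max_iter):
        mill_discharge = mill_feed            # steady state: what goes in comes out
        recycle = underflow_split * mill_discharge
        updated = new_feed_tph + recycle      # next estimate of mill feed
        if abs(updated - mill_feed) < tol:
            mill_feed = updated
            break
        mill_feed = updated
    circulating_load = 100.0 * (mill_feed - new_feed_tph) / new_feed_tph
    return mill_feed, circulating_load, i + 1

if __name__ == "__main__":
    feed, cl, iters = converge_circuit()
    print(f"Mill feed {feed:.1f} t/h, circulating load {cl:.0f}%, {iters} iterations")
```

For these assumed values the calculation settles at a mill feed of 625 t/h, that is, a circulating load of 150%; a full simulator performs the same kind of flowsheet convergence, but over size distributions and multiple unit models rather than a single tonnage.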

17.3 Machine Design

To improve particle size reduction and particle separation systems and thus increase productivity of mineral processing, new devices of varied designs and operating principles are constantly being developed. Mineral processing has always invited the talented innovator, as the patent literature makes evident. An important cost consideration in the pursuit of the optimal design is the number of iterations in the evolution of the design from the original concept.

Still today much testing is empirical, which can mean years to reach a conclusion straining budgets and financial backers’ patience. Employing mathematical modeling tools can significantly reduce the time and cost involved. Advances in computational power enable multiple simulated iterations of a device’s operation. Computational Fluid Dynamics and Discrete Element Method are two important groups of modeling techniques that are beginning to pay dividends in the development of new mineral processing devices and the re-design of existing units. This section describes the general principle of these two modeling approaches, and stresses the need to validate their outcomes.

17.3.1 Computational Fluid Dynamics

Computational fluid dynamics (CFD) is the application of algorithms and numerical techniques to solve fluid flow problems (Versteeg and Malalasekera, 2007). In CFD, the fluid body (i.e., the interior shape containing the fluid) is divided into small fluid elements called cells (usually with tetrahedral or hexahedral shape). Algebraic variables are attributed to each flow characteristic of each cell (e.g., mass, pressure, velocity, temperature). The interfaces of the fluid body are used as boundary conditions where some of those flow characteristics are known. The interfaces can be of many types such as: walls, inlets/outlets with imposed flowrate, and openings with fixed pressure drop. The known characteristics at the boundaries are attributed to the corresponding cells and variables. The conservation principles of mass, momentum, and energy are applied to each cell with consideration of the neighboring cells. This creates a system of algebraic equations with the variables representing each cell’s characteristics (some known from the boundary conditions, some unknown). This system of equations is solved with an iterative technique by the CFD code. The solutions are the flow characteristics for every cell. Knowing the discrete fluid behavior of each cell then enables one to determine the whole fluid body behavior (e.g., pressure, velocity, flowrate). Figure 17.2 shows a simplified application of the CFD technique to a 2D flow problem.

image
Figure 17.2 Simplified steps of a CFD study: (a) discretization, (b) boundary conditions, (c) algebraic equations, and (d) solution.
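As a toy illustration of steps (a)–(d) in Figure 17.2, the short Python sketch below discretizes a rectangular fluid body into a grid of cells, imposes fixed values on two boundaries, and solves the resulting system of algebraic equations by Jacobi iteration. The grid size, boundary values, and the Laplace-type balance used here are simplifying assumptions for illustration, not a full Navier–Stokes treatment.

```python
# Minimal sketch of the iterative solution step in a CFD-style calculation:
# a 2D scalar field (e.g., pressure) on a small structured grid of cells,
# with fixed values on the boundaries, solved by Jacobi iteration.
import numpy as np

def solve_field(nx=20, ny=20, left=1.0, right=0.0, tol=1e-6, max_iter=10_000):
    phi = np.zeros((ny, nx))
    phi[:, 0] = left       # boundary condition: one side held at a fixed value
    phi[:, -1] = right     # boundary condition: opposite side
    for it in range(max_iter):
        new = phi.copy()
        # each interior cell is updated from its four neighbouring cells
        new[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                                  phi[1:-1, :-2] + phi[1:-1, 2:])
        if np.max(np.abs(new - phi)) < tol:   # convergence check
            return new, it + 1
        phi = new
    return phi, max_iter

field, iters = solve_field()
print(f"converged in {iters} iterations; centre cell value {field[10, 10]:.3f}")
```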

CFD is now applied to complex fluid flow problems with a high degree of confidence in the retrieved solution, even in the case of a mixture of two fluids like air and water. A sound knowledge of fluid dynamics, however, remains central to comprehending the simulations.

An important point to remember when applying CFD techniques to mineral processing problems is that slurries do not behave like water. This is due to the influence of variable viscosity and density depending on the local solids concentration and particle size. The model needs to take account of those slurry properties. Adequately incorporating slurry properties is a major focus of particle technology research.

17.3.2 Discrete Element Method

The discrete element method (DEM) is a numerical technique to simulate the behavior of a population of independent particles (Cundall and Strack, 1979). In this technique, each particle is represented numerically and is identified with its specific properties (e.g., shape, size, material properties, initial velocity). The interior shape of the vessel containing the particles is used as the domain of the simulation and is separated into a grid to identify the particle’s position. Particles are then subjected to a small motion (based on Newton’s laws) over a small time interval (iteration). The small motion will cause some particles to contact other particles or the domain boundaries. Each of these contacts is monitored and produces discrete reaction forces on each particle. The magnitude of the contact forces is determined by a contact model (e.g., spring-dashpot model). The summation of the total force on each particle is then computed and forces created by external factors (e.g., fluid drag, magnetism, buoyancy, acceleration) can be added to this balance at this point. Newton’s laws of motion are then used to determine the motion parameters of each particle (e.g., acceleration, velocity, displacement) over the small time interval.

An example of this process is shown in Figure 17.3. The new position of the particles is computed and the process of contact detection can restart for the next iteration. After computation of every time step, every particle’s behavior is known over the total simulation time, hence the bulk behavior is known.

image
Figure 17.3 Example of a DEM iteration on a particle: (a) contacts identification, (b) forces application, (c) forces summation, and (d) particle displacement computation.
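The time-stepping loop of Figure 17.3 can be sketched in a few lines of Python. The example below drops a short one-dimensional column of particles onto a floor using a linear spring-dashpot contact model; all physical parameters (stiffness, damping, time step) are chosen purely for illustration and are not taken from any validated DEM code.

```python
# Minimal 1D DEM sketch: a small column of particles falling under gravity
# onto a floor, with a linear spring-dashpot normal contact model between
# overlapping particles and with the floor. Parameter values are illustrative.
import numpy as np

n = 5                    # number of particles in the column
radius = 0.01            # particle radius (m)
mass = 0.05              # particle mass (kg)
k_n = 5.0e4              # normal spring stiffness (N/m)
c_n = 20.0               # normal dashpot coefficient (N s/m)
g = -9.81                # gravity (m/s^2)
dt = 1.0e-5              # time step (s)

z = 0.05 + 0.03 * np.arange(n)   # initial heights above the floor at z = 0 (m)
v = np.zeros(n)                  # initial velocities (m/s)

def contact_force(overlap, closing_speed):
    """Repulsive linear spring-dashpot normal force; zero when not in contact."""
    if overlap <= 0.0:
        return 0.0
    return max(0.0, k_n * overlap + c_n * closing_speed)

for step in range(200_000):               # 2 s of simulated time
    f = np.full(n, mass * g)              # body force (gravity) on every particle
    for i in range(n):                    # particle-floor contacts
        f[i] += contact_force(radius - z[i], -v[i])
    for i in range(n - 1):                # contacts between neighbours in the column
        overlap = 2.0 * radius - (z[i + 1] - z[i])
        fc = contact_force(overlap, v[i] - v[i + 1])
        f[i]     -= fc                    # equal and opposite reaction forces
        f[i + 1] += fc
    v += (f / mass) * dt                  # Newton's second law (semi-implicit Euler)
    z += v * dt

print("final particle heights (m):", np.round(z, 4))
```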

Combinations of CFD and DEM have been used to describe the behavior of particles moving and colliding inside a flowing fluid. In mineral processing where the ore is mostly processed wet, these CFD-DEM coupled approaches are of interest as they promise to optimize equipment design. However, it should be noted that these coupled simulations require large computational power and advanced technical knowledge.

17.3.3 Model Validation by Direct Observation of Particle Behavior

A model has no value without validation to assess the fidelity of the simulation to the real process. This requires a certain level of physical testing, which can be performed on simple set-ups representing specific parts of the process, or on lab-scale equipment representing the whole process. In validating a mineral processing operation, one of the difficulties is the observation of the slurry flow and how particles interact in this flow. Few techniques are able to quantitatively track and display particle or fluid behavior. Some of the techniques are: particle image velocimetry (PIV), laser Doppler anemometry (LDA), dye injection, high-speed imaging, and acoustic monitoring. While widely used, each has limitations, mostly the opacity and the lack of precision with the high solids content slurries common to mineral systems. The technique of Positron Emission Particle Tracking (PEPT) is attractive as it enables particles to be tracked in opaque and high particle concentration systems.

PEPT is based on tracking a radioactive tracer particle that is mixed with the feed to a unit (Parker et al., 1993). The tracer particle can be taken from the bulk material to be irradiated and tracked, an advantage of the technique as its surface and body properties are the same as those of the particles being processed. By recycling the tracer, a picture of the particle motion is built up over time as it flows, tumbles, collides, and floats (in the case of flotation), showing how a particle responds to the system design and yielding its velocity, acceleration, residence time, etc. In mineral separation, knowing the motion of gangue and valuable mineral (at present tracked individually) will help to refine the geometrical features of separators and to validate competing models of the separation process. The current technology can track particles down to about 50 µm, and extending to finer sizes is a current challenge.
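As an illustration of how a tracked trajectory is turned into the quantities mentioned above, the short sketch below differentiates a synthetic time-stamped tracer path to obtain velocity, acceleration, and residence time. The trajectory is invented and is not output from any particular PEPT installation.

```python
# Minimal sketch of post-processing a PEPT-style tracer trajectory: given
# time-stamped (x, y, z) tracer locations, estimate velocity, acceleration
# and residence time by finite differences. The trajectory is synthetic.
import numpy as np

t = np.linspace(0.0, 5.0, 501)                     # s
x = 0.3 * np.cos(2 * np.pi * 0.5 * t)              # m, synthetic spiral-like path
y = 0.3 * np.sin(2 * np.pi * 0.5 * t)
z = 0.02 * t                                       # slow axial drift

pos = np.column_stack([x, y, z])
vel = np.gradient(pos, t, axis=0)                  # finite-difference velocity
acc = np.gradient(vel, t, axis=0)                  # finite-difference acceleration
speed = np.linalg.norm(vel, axis=1)

residence_time = t[-1] - t[0]                      # time spent in the field of view
print(f"mean speed {speed.mean():.2f} m/s, residence time {residence_time:.1f} s")
```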

Figure 17.4 presents an example of particle tracking inside the trough of a spiral (gravity) concentrator where particles in a slurry are separated based on density (Chapter 10). The anticipated inner zone of high density particles (hematite in this case) surrounded by light particles (quartz) is evident. The outer band of high density particles was not expected; it appears to be due to these particles entering the trough on the outside and the turbulence (and possibly the Bagnold force) keeping them there. This finding adds an interesting dimension to the observation of Bazin et al. (2014) (Chapter 10) that some coarse heavy particles are misplaced to the lights product.

image
Figure 17.4 (a) Modular positron emission particle tracking detector mounted around a spiral concentrator to observe a slice of the separation process, and (b) Slice top view showing the combination of many trajectories of a hematite tracer (inner and outer zone) and a quartz (middle zone) tracer (particle diameters ≈1.5 mm) in an iron ore slurry (Courtesy D. Boucher).

17.4 Design of Experiments

Beginning with its earliest incarnation called factorial design, the experimental approach now known as Design of Experiments (DOE) (also as Experimental Design, Industrial Designed Experiments (IDE) and Statistically Designed Experiments (SDE)) has been around for almost a century. Developed by the English statistician and evolutionary biologist Sir Ronald Fisher (1890–1962) as a method for testing and assessing crop yields at agricultural stations in England (Fisher, 1926), the method was slow to gain acceptance in other industries until the 1960s. It then became a mainstay of the continuous improvement revolution that launched the resurgence of Japanese manufacturing. With the advent of computer software to simplify the experimental design and data analysis (e.g., Stat Ease™, Minitab™), the approach is now gaining acceptance across engineering disciplines. It is particularly well suited for processing and manufacturing industries where many variables may interact in complex ways.

Experimental design is a structured process for investigating the relationship between input variables (factors) and output effects (responses) in a process. Multiple input factors are considered and controlled simultaneously to ensure that the effects on the output responses are causal and statistically significant. Each variable is given equal weight across its full range by coding the variables in the regression analysis (e.g., max=1, min=−1), thus yielding the relative magnitude of the effect of each factor and a measure of the interaction between factors. DOE, therefore, represents a large improvement over the traditional one-factor-at-a-time (OFAT) experimental approach, by providing statistical information on the significance and magnitude of each factor and their interactions, through the method of ANOVA (analysis of variance) developed by Fisher (note: in doing so Fisher also developed the applied statistics concepts of the F-test and z distribution). The ANOVA provides equations (transfer functions) for each response in terms of the input factors exhibiting the desired level of significance (e.g., 90% or 95% confidence levels). Process models developed by DOE methods fall into the realm of empirical modeling; that is, they are not based on a physical or chemical representation of the process, but rather a mathematical regression that should only be applied within the range of the input factors used in its development. The DOE method, once learned, has the potential to provide the maximum information for the least cost in terms of time and resources. Montgomery (2012) provides an excellent reference for the subject. Somewhat easier to digest for the beginner is the reference by Schmidt and Launsby (1994).

Modern experimental design has evolved from the full factorial designs of Fisher, which were seriously constrained in the number of factors that could reasonably be investigated. The following relationship illustrates the point. If k factors are to be tested at L levels, the number of tests (runs) N is given by:

$$N = L^{k} \tag{17.1}$$

At 3 levels (low, high, mid-point), a 3 factor design would require 27 tests, a 4 factor design, 81 tests, and for 5 factors, 243 tests. If only high and low levels are tested, the test numbers become a more reasonable 8, 16, and 32, respectively. Full factorial designs are therefore typically conducted at low and high levels only and are suitable for up to, at most, 5 (=32 runs) or 6 (=64 runs) factors. Fractional factorial designs were introduced in the 1930s by Yates (1937) to extend the number of factors that could be reasonably tested. The number of tests now required becomes:

$$N = L^{k-q} \tag{17.2}$$

where q = 1, 2, 3, etc., representing a half, quarter, eighth, etc. fractional design. Because not all conditions are tested for each variable against all other variables, some information about factor interactions is indiscernible from other factor interactions. This is known as aliasing and is a limitation of fractional designs. Fractional designs that involve even fewer runs for a given number of variables have been developed (Plackett and Burman, 1946; Box and Behnken, 1960; Taguchi, 1987), but in these designs the aliasing patterns are more severe than in basic fractional designs, and care and understanding of the process factors under test need to be exercised. Factors in DOE can be either quantitative or qualitative, a feature which enhances their applicability to mineral processing where not all factors are continuous (e.g., ON-OFF, or Chemical A vs. Chemical B).
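To illustrate the mechanics of Eq. (17.1) for a two-level design in coded units, the sketch below builds the 2³ = 8-run design matrix, generates a synthetic response, and recovers the main effects and two-factor interactions by least squares regression. The response function and noise level are invented purely for illustration.

```python
# Minimal sketch of a two-level full factorial in coded (-1, +1) units:
# build the design matrix, generate a synthetic response, and estimate
# main effects and two-factor interactions by least squares.
import itertools
import numpy as np

rng = np.random.default_rng(1)
levels = [-1, 1]
runs = np.array(list(itertools.product(levels, repeat=3)))   # 2**3 = 8 runs
A, B, C = runs.T

# hypothetical response: A and B matter, with an A*B interaction
y = 50 + 5 * A + 3 * B + 2 * A * B + rng.normal(0, 0.5, len(runs))

# model matrix: intercept, main effects, and all two-factor interactions
X = np.column_stack([np.ones(len(runs)), A, B, C, A * B, A * C, B * C])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
names = ["mean", "A", "B", "C", "AB", "AC", "BC"]
for name, c in zip(names, coef):
    print(f"{name:>4}: {c:+.2f}")   # for coded units, effect = 2 x coefficient
```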

Modern DOE has divided the overall design process into two stages. In the first stage many factors are tested to determine those that are critical to the process, and is called the factorial testing or screening stage. In the second stage these critical factors are re-tested at multiple levels to establish a more robust mathematical relationship that is better suited for process optimization. This stage is referred to as response surface modeling (RSM) (because the resulting equations may be non-linear) or optimization stage. Figure 17.5 schematically represents the concept. If the key process factors are known then testing can proceed directly to the optimization stage.

image
Figure 17.5 Two-step process for modern experimental design.

A favored experimental design for optimization is the Central Composite Design (CCD), shown in Figure 17.6(a), which tests factors at 5 levels and requires 15 tests (not counting replicates) for 3 factors and 25 tests for 4 factors. Also shown for comparison purposes in Figure 17.6(b) is the 3D representation of a 3 factor, 2 level (23) design. Note that in the CCD, each factor is tested at 5 levels which greatly enhances the response definition over the 2-level factorial design. An example response surface using CCD to identify optimum conditions for recovery of tungsten from silica in a dry Knelson concentrator is shown in Figure 17.7. As with any experimental testwork, once optimum conditions have been identified, a set of confirmation runs should be performed to validate results.

image
Figure 17.6 3D representation of: (a) central composite design (CCD) having 3 factors (x, y and z axes), and (b) full factorial design.
image
Figure 17.7 Examples of response surface plot from a CCD designed DOE for tungsten recovery from silica in a 3 inch laboratory Knelson gravity separator run under dry conditions: (a) BS=bowl speed (G), SFR=solids feed rate (g min−1), and (b) AFP=air fluidizing pressure (psi) (Adapted from Kökkılıç et al., 2015). (The curved response surfaces indicate a strong non-linear interaction for the 3 factors).
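A minimal sketch of the CCD idea in Figure 17.6(a) is given below: the 15-run design (8 factorial, 6 axial, 1 centre point) is assembled in coded units and a full quadratic response-surface model is fitted by least squares. The axial distance, synthetic response, and noise are illustrative assumptions, not values from the Knelson study of Figure 17.7.

```python
# Minimal sketch of a 3-factor central composite design (8 factorial + 6 axial
# + 1 centre point = 15 runs) and a full quadratic response-surface fit.
import itertools
import numpy as np

alpha = 1.682                                  # rotatable axial distance for k = 3
factorial = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
axial = np.vstack([a * row for a in (-alpha, alpha) for row in np.eye(3)])
centre = np.zeros((1, 3))
design = np.vstack([factorial, axial, centre])          # 15 x 3 in coded units

def quad_terms(X):
    """Model matrix for a full quadratic model in three coded factors."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1*x2, x1*x3, x2*x3])

rng = np.random.default_rng(7)
# hypothetical response with curvature in x1 and an x1*x2 interaction
y = (80 - 4*design[:, 0]**2 + 3*design[:, 0] + 2*design[:, 0]*design[:, 1]
     + rng.normal(0, 0.5, len(design)))

coef, *_ = np.linalg.lstsq(quad_terms(design), y, rcond=None)
print(np.round(coef, 2))   # intercept, linear, quadratic, interaction terms
```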

The rigor of the experimental design method makes it well suited for laboratory and pilot testing where the effect of uncontrolled variables can be minimized (Anderson, 2006). One of the challenges of any plant testwork is the control of process conditions during testing, and this is as much the case in the application of DOE methodology as it is for any other testing approach in a plant (Napier-Munn, 2010). The EVOP (evolutionary operation) approach to process optimization was introduced by Box (1957) and uses simple factorial designs with limited factor range to search for optimum process conditions. It is widely used in other process industries, but not extensively in mineral processing (see Chapter 3). The DOE approach to optimization of plant processes and equipment has been reported for 3-product cyclones (Obeng et al., 2005), multi-gravity separators (Aslan, 2008), stirred mills (Celep et al., 2011), and flotation air rate and froth depth (Venkatesan et al., 2014). An interesting option for plant optimization is to first develop a robust process simulator based on available phenomenological models (e.g., JKSimMet™) with careful calibration. DOE optimization can then be performed using such a simulator in place of actual testing in the plant. Success using this approach has been reported for optimization of cyclone variables in a grinding circuit (Leroux et al., 1999), and for mill liner design in a SAG mill using discrete element modeling (Radziszewski et al., 2003).

17.5 Geometallurgy

The traditional approach to plant design involved the extensive testing of a single large composite sample or a small number of composite samples that are reputed to represent the ore body. It is accepted that laboratory tests can accurately measure factors such as the grindability, floatability, or other process parameter of the sample by a technique representing that to be used in the plant. The size of equipment required to achieve a specified throughput and product quality is then calculated from one of a variety of models that have been developed over the years, with some examples being given in previous chapters. Since these tests and models are tried and tested, they are accepted as reasonably precise. During operation of the resulting plant, the design is sometimes found to be inadequate. It is then suspected that the flaw in the design process lies in the samples not being sufficiently representative of the ore body, since using only a single or small number of composite samples does not recognize the variability of the ore, nor does it allow for the lack of precision in the value of the metallurgical parameter used in the design.

A geometallurgical approach uses a design procedure suited to an ore body that is described by a geostatistical analysis of a reasonably large number of small samples of drill core. The analysis requires the identification of the location of the sample points and a geological plan of the ore body, together with a mine plan of the blocks to be mined during the proposed life of the mine. The plant design can then be made using the estimated metallurgical parameter of each individual block. A statistical error can be assigned to the estimated parameter for each block and for each production period such that the final design includes risk estimates and safety factors based on the possible errors that arise from having a limited amount of sample data (Kosick et al., 2002).

Consequently, we can define “geometallurgy” as:

• The geologically informed selection of a number of samples for the determination of metallurgical parameters

• The distribution of these parameters across the blocks of the ore body by some accepted geostatistical technique, where the distribution is usually influenced by the geology because lithology/alteration/texture has an effect on the parameters

• The subsequent use of the distributed data in metallurgical process models to generate economic parameters such as throughput, grind size, grade and recovery for each mine block for plant design and production forecasting that can be used in mine planning.

This “geometallurgical approach” is needed because:

• Ore bodies are variable in both grade and metallurgical response

• The variability is a source of uncertainty that affects plant design, results (both metallurgical and financial) and capital investment decisions

• Deposits are becoming lower grade and more complex

• Throughputs are necessarily increasing and profit margins reduced; the financial risks are escalating

• Mining industry risk must be more carefully managed for projects to attract the necessary finance.

A “geometallurgical project” takes a step-by-step approach:

• Geologically informed selection of a number of variability samples (i.e., samples selected to reveal variability)

• Determination of relevant metallurgical parameters for those samples

• Populating a spatial model of the ore body by geostatistical distribution of those parameters

• Use of the distributed dataset of parameters in metallurgical models for design and forecasting

• Estimation of the uncertainties in the knowledge of the ore body to calculate lack of precision in the results

• Managing risk by adding safety factors to designs and calculating error bars for the forecasts.

17.5.1 Variability Sampling

It is important to realize the interrelated technologies involved in a mining operation and to include all departments in the design and production forecasting project:

• Metallurgy determines expected results for throughput and recovery of the plant by:

• The use of tests on drill core samples to generate metallurgical parameters for the ore: for example, grindability, flotation kinetics

• The use of these parameters in process models.

• Mining determines temporal variability in the production sequence; consequently, the metallurgical parameters vary from time to time in the plant

• Geology affects geographic variability in the mineral assemblage of the ore body; consequently, the metallurgical parameters vary from place to place

• Geostatistics uses the geological information to estimate the metallurgical parameters for each block and to quantify the estimation errors (i.e., the uncertainty). It is self-evident that the more samples that are tested and the better they are selected, the greater the certainty in the data used for forecasting and the lower the project risk.

Effective sampling of an ore body is both difficult and expensive, but important to reduce the greater cost from the financial risk of failure to meet the expected results in production (Chapter 3). The sample requirements and subsequent risk analysis vary with:

• The stage of the project: that is, preliminary, pre-feasibility, or full feasibility

• The size of the ore body, the complexity of the geology, and the resulting metallurgy.

The number of samples depends on the project stage, plant throughput, and ore variability.

Some guidelines for sample selection of drill core are:

• Always consult with the Geology and Mine Planning departments

• Try to include the variability of the ore types, that is, lithology, alteration, mineral occurrence of both values and gangue minerals; use geochemical and structural information

• Choose a representative number of each ore type; validate the sample set against the resource population. Outliers or unusual ore types should not be over- or under-represented

• Fresh core is better. Old badly stored core can supply erroneous data. But near-surface weathered ore must be included for testing, as it will be included in the mine plan

• Full or half-core is better than “assay rejects”

• Space the samples to allow uncertainty to be calculated; some close together, but most to cover the complete area of interest between drill holes and down-hole. A random distribution is acceptable for preliminary stages, but use of the drilling grid is better for feasibility design and forecasts

• Select samples to match the mining method while still showing variability; for example, composite by length equal to bench height but interrupt by ore type change where necessary

• Choose a relevant mining time period from the mine plan as a source of most of the samples; for example, most of them from the first 5 years of production for new plant design. Add more samples for testing each year for on-going production forecasting

• Identify the number of samples needed for the current stage of the project:

• Preliminary stage of a large project may need 35 samples to demonstrate the variability of the ore body and allow estimate of equipment size (albeit imprecise)

• Rule-of-thumb for pre-feasibility life-of-mine sampling is one sample per million tons of ore under evaluation, or 1 sample per 400,000 m3 (100 m×100 m×40 m)

• A full feasibility study will require more samples for a large ore body, but only the first 10 years of the mine plan are of immediate interest. The number of samples should be based on a statistical analysis to ensure that it meets the required level of confidence

• Collect a sample set that is representative of the variability of the section of the deposit that is of most interest when considering the financial risk in the project—not a “representative sample” (since there is no such thing).

17.5.2 Metallurgical Testing

Variability samples must be tested for the relevant metallurgical parameters. Ball mill design requires a Bond work index, BWi, for ball mills at the correct passing size; SAG mill design requires an appropriate SAG test, for example, SPI (Chapter 5). Flotation design needs a valid measure of kinetics for each sample, including the maximum attainable recovery and rate constants for each mineral (Chapter 12). Take care to avoid unnecessary testing for inappropriate parameters, saving the available funds for more variability samples rather than more tests on few samples. Remember that it must be possible to use the measured values for the samples to estimate the metallurgical parameters for the mine blocks in order to describe the ore body, and these estimates will be used in process models to forecast results for the plant. Always include some basic mineralogical examination of each sample.

17.5.3 Populating the Mine Block Model

Understanding and using the measured metallurgical parameters of the whole ore body requires that the test data are distributed across all the blocks in the mine plan. This exercise involves the consideration of much more information about the mine than would traditionally be used in plant design, which includes:

• Location of each sample within the ore body in terms of co-ordinates and section of core used

• Geological description of the sample, for example, lithology, alteration, rock type, and perhaps metal grade

• Mine block plan with similar geological information

• Planned mining schedule for the mine blocks, for example, by year.

The objective of the analysis is the distribution of the metallurgical test data across the blocks in the mine plan, assigning each block (and each mining period) an estimated metallurgical parameter value (e.g., BWi), and a precision of each estimate.

A suitable method involves:

• Basic statistical study to identify geometallurgical (geomet) domains, that is, areas where it can be shown by an ANOVA that the geological description coincides with distinctly different metallurgical parameters. There must be a sufficient number of samples in a domain for only those samples to be used in estimation of blocks of similar geology

• Use of a geostatistical technique, such as Kriging, or regression analysis, for distribution of metallurgical parameters within each geometallurgical domain. Independent variables such as BWi that can be used in statistical analysis can be distributed directly to mine blocks by the distance weighting method known as Kriging. Variables that are dependent on other parameters, such as maximum attainable recovery that is dependent on head grade, may be distributed to blocks by regression analysis using the estimated block value for head grade, and the estimate of maximum recovery can be improved by Kriging of “residuals.” Parameters that are not amenable to statistical analysis, such as flotation rate constant, must be recognized and handled by transposing to some variable that is suitable for Kriging or some other geostatistical technique

• Acceptance of the uncertainty in the estimated value of each block. Distribution of metallurgical parameters is best done using a method that involves a measure of the statistical error. The error can be included in simulations of plant performance to show the uncertainty in the forecast results.

A short description of Kriging is a pre-requisite to understanding the geometallurgical approach (Amelunxen et al., 2001). This involves the construction of geostatistical variograms for each metallurgical parameter (e.g., BWi) by plotting the semi-variance of differences in the value of pairs of samples of equal (within a range) distance apart against that distance (or mid-range) – see Figure 17.8. It is evident that the variability between samples close together is less than for those further and further apart, till eventually there is no longer any influence on the variance. The establishment of a model (equation) for the variogram allows for the estimation (by interpolation) of the metallurgical parameter and the statistical error of each estimate to be made for each block by combination of the values of samples that are within the range of influence. That “range” is determined by the shortest distance apart for pairs having attained the maximum variance, known as the “sill.” The value where the curve cuts the y-axis is referred to as the “nugget” (estimated from the variance of pairs of coincident samples) and is a measure of the inherent errors in sampling and measurement of individual datum points.

image
Figure 17.8 Typical variogram.
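The construction just described can be sketched directly: the snippet below computes an experimental semivariogram for a synthetic set of BWi samples by binning sample pairs by separation distance and averaging half the squared differences in each lag bin. The sample locations, BWi values, and lag bins are invented for illustration.

```python
# Minimal sketch of an experimental semivariogram like Figure 17.8: pairs of
# samples are binned by separation distance and the mean semi-variance of the
# BWi differences is computed per lag bin. Sample data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n = 200
xy = rng.uniform(0, 1000, size=(n, 2))                 # sample co-ordinates (m)
# synthetic spatially correlated BWi: smooth trend plus noise (the "nugget")
bwi = 14 + 0.004 * xy[:, 0] + rng.normal(0, 0.5, n)

# all pairwise distances and semi-variances
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
g = 0.5 * (bwi[:, None] - bwi[None, :]) ** 2
iu = np.triu_indices(n, k=1)                           # each pair counted once
dist, semi = d[iu], g[iu]

bins = np.arange(0, 800, 50)                           # lag bins (m)
which = np.digitize(dist, bins)
for b in range(1, len(bins)):
    mask = which == b
    if mask.any():
        print(f"lag {bins[b-1]:4.0f}-{bins[b]:4.0f} m: "
              f"gamma = {semi[mask].mean():.3f}  ({mask.sum()} pairs)")
```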

In practice, the estimate for a block is made from samples within an ellipsoid with dimensions selected after consideration of the geological structure, using the weighted average of a minimum of 3 samples that are not all in the same drill hole. If there are insufficient samples within those dimensions, a further ellipsoid is chosen with larger dimensions, and so on until there are sufficient samples. Enlarging the dimensions to use samples that are further away produces block estimates with larger standard errors. The use of data averaged from at least 3 samples results in a smoothing of variability in the block estimates. It is always instructive to compare the frequency distribution of parameters measured from the samples with that attributed to the blocks; excessive smoothing or shift in the mean indicates too few samples were tested.
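The expanding-search logic can be sketched as follows, with inverse-distance-squared weights standing in for the Kriging weights (which in practice would come from the fitted variogram) and a spherical search standing in for the ellipsoid; the sample coordinates, search radii, and minimum-sample rule are illustrative assumptions only.

```python
# Minimal sketch of expanding-search block estimation, using inverse-distance
# weights as a simple stand-in for Kriging weights. All data are synthetic.
import numpy as np

def estimate_block(block_xyz, sample_xyz, sample_values,
                   radii=(100.0, 200.0, 400.0), min_samples=3):
    d = np.linalg.norm(sample_xyz - block_xyz, axis=1)
    for r in radii:                          # enlarge the search until enough samples
        idx = np.where(d <= r)[0]
        if len(idx) >= min_samples:
            w = 1.0 / np.maximum(d[idx], 1.0) ** 2     # inverse-distance-squared weights
            w /= w.sum()
            return float(w @ sample_values[idx]), len(idx), r
    return float(np.mean(sample_values)), len(sample_values), np.inf  # fallback: global mean

rng = np.random.default_rng(5)
samples = np.column_stack([rng.uniform(0, 1000, 80),   # x (m)
                           rng.uniform(0, 1000, 80),   # y (m)
                           rng.uniform(0, 100, 80)])   # z (m)
bwi = rng.normal(15.0, 2.0, 80)                        # sample BWi values (kWh/t)

est, used, radius = estimate_block(np.array([500.0, 500.0, 50.0]), samples, bwi)
print(f"block BWi estimate {est:.1f} kWh/t from {used} samples within {radius:.0f} m")
```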

Since blocks are normally identified by year on the mining plan, the geostatistical analysis also allows the determination of annual average parameter values and their statistical errors by the same Kriging technique. So, it is possible to design the plant to deliver a specified throughput and with optimum grade and recovery in each production year. It is also possible to extend the analysis to calculate how many more samples need to be tested from within the range of the blocks mined in a production year in order to improve the precision to any desired level.

17.5.4 Process Models for Plant Design and Forecasting

Metallurgical parameters (such as hardness work indices or mineral flotation kinetics) are estimated for each block in the mine model and all the blocks are used as an input dataset for process models. These process models are used for simulations to determine throughput, grade and recovery per block for new plant designs or for forecasting the results from existing operations (Bennett et al., 2004). The models must be capable of using the measured parameters and rapidly simulating many thousands of mine blocks to supply an optimal design or average forecast. Changing the plant or process design must be easy to accommodate within the model to allow various options to be considered.

It is important that the blocks are populated with the estimated metallurgical parameters. The results of the simulations in terms of throughput or recovery can then be assigned to each block. Remember that these results are specific to the plant design used in the model, and that changing the plant design will produce a new set of results for allocation to each block. Never try to populate the blocks with plant results that are determined by simulation from individual samples.

17.5.5 Estimating Uncertainty

Using a statistical approach to distribution of metallurgical parameters ensures that each block value is accompanied by a standard error, for example, Kriging or regression errors. This allows process models to be run as Monte Carlo simulations using the estimated value as the mean of a distribution of possible values that has a standard deviation based on the standard error. Hence the standard error of the resultant plant forecast can be determined, as a measure of uncertainty. Since the geostatistics also allows the estimation of annual average parameter values and their statistical errors, it is possible to calculate the uncertainty in the forecast for each production year.
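The Monte Carlo step can be sketched as below: each block's BWi estimate is drawn from a normal distribution whose standard deviation is the block's standard error, a simple Bond-type relation converts BWi to throughput at fixed mill power, and the spread of the simulated annual averages gives the forecast uncertainty. The block data are invented and the throughput relation is a simplifying assumption, not the CEET model or any other commercial simulator.

```python
# Minimal sketch of Monte Carlo propagation of block-estimate uncertainty
# into a throughput forecast. Block estimates, standard errors and the
# Bond-type power-to-throughput conversion are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(11)
n_blocks, n_sims = 500, 2000
bwi_est = rng.normal(15.0, 2.0, n_blocks)       # block BWi estimates (kWh/t)
bwi_se = rng.uniform(0.5, 1.5, n_blocks)        # kriging standard errors (kWh/t)
mill_power_kw = 12_000.0

def throughput_tph(bwi, f80=12_000.0, p80=150.0):
    """Bond specific energy -> throughput at fixed installed power (simplified)."""
    e = 10.0 * bwi * (1.0 / np.sqrt(p80) - 1.0 / np.sqrt(f80))   # kWh/t
    return mill_power_kw / e

annual_means = np.empty(n_sims)
for s in range(n_sims):
    bwi_draw = rng.normal(bwi_est, bwi_se)      # sample each block within its error
    annual_means[s] = throughput_tph(bwi_draw).mean()

lo, hi = np.percentile(annual_means, [5, 95])
print(f"forecast throughput {annual_means.mean():.0f} t/h, "
      f"90% interval {lo:.0f}-{hi:.0f} t/h")
```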

17.5.6 Managing Risk

Error bars can be fitted to the production forecasts for blocks and for average annual production. The size of the bar is determined from standard error and the confidence that is to be applied. The more confidence we wish to place in the forecast, the larger the error bar to encompass the possible result. This concept may be used in design to choose a safety factor: enlarging the equipment size, for example, for grinding mills, will increase the forecast throughput to a point where the bottom of the error bar (minimum estimated throughput) will match the specification. Remember that there is always a statistical chance that the result will be below the error bar because of uncertainty in the estimated metallurgical parameter. The larger the acceptable risk, the lower the confidence limits required, and the smaller the grinding mill (in this example). The testing of more samples that are closer to the blocks will probably generate lower standard errors in the block estimates and narrower error bars, allowing for the design to involve a smaller safety factor (and smaller mill) for the same risk.

Adding a safety factor to equipment size may not reduce the risk in failing to achieve specified flotation recovery to an acceptable level, since maximum attainable recovery is a fixed limit. However, this approach quantifies the remaining risk, which can be invaluable to the financial viability of the project.

17.5.7 Case Study

In a study for the design of a SAG-Ball mill grinding plant (Bulled, 2007), 100 samples were selected, each comprising a 12 m bench intercept, from a total of 27 vertical drill holes. The samples were from six main lithologies further split into eleven sub-types. Minimum drill hole spacing was 50 m and minimum spacing within a hole was 12 m.

The samples were tested for SPI value for SAG design and BWi value for ball mill design. The variability in each case was slightly less than typical; standard deviation is normally about 20% of the mean value in terms of specific energy, kWh t−1, for both grindability parameters.

The sample data were distributed across 10,903 mine blocks, representing approximately 130 million tons of ore to produce a dataset for the ore body on which to base the plant design.

Statistical analysis indicated significant grindability differences between lithologies but, with only 100 samples in total, there was insufficient data to do separate geostatistical analysis of each type. This difference indicated that lithology can be used as a guide to the hardness of any block, but the wide range of values within each domain suggested that there is some other factor (such as degree of alteration) that has a regional basis. The samples and mine blocks were grouped into three different ore types and there were sufficient samples in two of these to conduct a separate geostatistical analysis for each, allowing block values to be estimated from samples of the same ore type treated as domains. Blocks from the third ore type were estimated from a combination of all samples. Subsequently, the block values were estimated using Kriging within each domain using only the neighboring samples within that domain. Spacing of samples was adequate since, for each ore type, at least 75% of the mine blocks had a sample of the same type within 100 m of the block.

The power required to achieve the average specified throughput and target grind size from the 10,903 blocks mined over an 18-year-life was determined using simulations in the CEET process model (Kosick and Bennett, 1999). The throughput and grind size were allowed to vary within a specified range across the blocks, as would be expected from the variability in grindability parameters (SPI and BWi). Variations in both grindability parameters resulted in the bottleneck shifting between the SAG and ball mill; it is important to note that had the design been based on average grindability for each stage rather than the complete dataset, the mills would be about 5% too small to achieve target throughput due to the changing bottlenecks.

Examination of datasets of blocks grouped into annual production periods indicated years when throughput was below target due to limitations from either the SAG or the ball mill circuits. Model simulations indicated that this could be avoided to some extent by planning changes to the SAG discharge screen aperture. However, it was necessary to increase the SAG mill power requirement by 5% to ensure that target throughput was met on average during the year with hardest ore.

The statistical errors in average annual estimates of grindability were used in Monte Carlo simulations within the CEET simulator to determine uncertainty in the forecast throughput. Error bars were put on annual forecasts at 90% confidence limits, indicating that a safety factor of approximately 14% must be added to the power requirements for both mills to ensure that the specified throughput was achieved in every year of the mine life. This still left the 5% statistical chance that specified throughput will not be achieved on average in every year.

17.6 Applied Mineralogy

Quantitative automated mineralogy (QEMSCAN: Quantitative Evaluation of Minerals by Scanning Electron Microscopy; MLA: Mineral Liberation Analyser; and recently TIMA–Tescan Integrated Mineral Analyser) is increasingly being applied to the study of ore deposits and the evaluation of mineral processing operations. The three instruments identified are all based on electron microscopes, although they may be operationally different, and form the main basis of this section. They provide information on polished sections, that is, 2D data, and have now been joined by X-ray microtomography, which provides 3D data, some details of which will be outlined later. The quantification and improved understanding of mineralogical parameters is making significant contributions to exploration, modeling of ore bodies to predict comminution and separation performance, and in diagnosing plant operations (Hoal et al., 2009; Miller and Lin, 2009; Evans et al., 2011; Lotter, 2011; Smythe et al., 2013).

Samples can be analyzed in many forms: as intact drill core, coarse reject material, or composite plant samples when evaluating mineral processing operations. Intact drill core samples are analyzed to determine rock-types and provide textural characterization. Intact core can be mounted on a polished section (or polished thin section) where chemical spectra are collected at a set interval within the field of view. Each field of view is then processed offline and a pseudo color image of the sample is produced, from which the modal mineralogy and texture of the sample can be extracted. (“Modal” refers to mineral proportion or weight% of mineral in a sample calculated, taking into consideration the mineral specific gravity.) Data are acquired over the polished sections at a varied pixel size (e.g., 5 or 25 μm). Multiple mineral types can be identified and color coded for easy visual inspection.

The information is presented according to the questions being asked, as illustrated below. The emphasis is on quantitative data presented in a way to aid the mineral processor reach an informed decision.

17.6.1 Mineral Variability

Blended and homogenized coarse reject samples from, for example, 2-3 meter drill core intervals used for geochemical analyses, can be analyzed to determine mineral variability which might have an impact on the metallurgical response. These coarse reject samples are further ground and homogenized and a pre-defined number of particles are mapped at a selected resolution. Such studies are critical in defining the distribution of minerals in a deposit. As an illustration, Figure 17.9 shows the variation of rare earth minerals among the ore zones of the Nechalacho rare earth element (REE) deposit in NWT (Canada). The important observation is that the zones carry significantly different proportions of heavy (HREE) and light rare earth elements (LREE).

image
Figure 17.9 Variability of REE minerals in the Nechalacho REE Deposit, NWT (A-F refer to ore zones with specific geological and mineralogical characteristics). (Courtesy T. Grammatikopoulos, SGS Canada Inc).

17.6.2 Grain Size, Liberation, and Association

Grain size distribution of the individual minerals can be extracted from an automated mineralogical analysis. This information illustrates the relationship of the grain size of the various phases within a sample. An example is shown in Chapter 4, Figure 4.15.

Particles are classified into groups based on mineral-of-interest area percent: for example, free (≥95% of the total particle area), liberated (≥80%), and non-liberated (<80%). The non-liberated grains can be further classified according to association characteristics into binary and complex groups. Figure 17.10 illustrates an example of liberation and mineral associations from a Cu deposit. The analysis is conducted on sized samples, then the data are combined to assess the whole sample. From Figure 17.10, in the combined sample, copper sulfides in the free plus liberated forms account for about 75% of the mass of the Cu sulfides. The Cu sulfides in associations are roughly evenly distributed among complex, micas/clays, and pyrite. The increase in liberation (liberated + free) with decreasing particle size is clearly evident. The information can be used to inform the grind size necessary to reach target performance. Rather than liberation, which is a bulk parameter, the exposure of mineral on the surface of a particle may be more relevant in some processes, notably flotation and leaching.

image
Figure 17.10 Liberation and association of copper sulfides (mass %) calculated for the sample (combined) and by size fraction. (Courtesy T. Grammatikopoulos, SGS Canada Inc).
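The classification described above can be tabulated with a few lines of code. The sketch below bins a synthetic particle population by the area percent of the mineral of interest and reports each class on a mineral-mass basis; the particle data are invented, and treating area percent directly as a mass fraction skips the specific-gravity correction a real analysis would apply.

```python
# Minimal sketch of liberation classification: each particle is binned by the
# area percent of the mineral of interest into free (>=95%), liberated (80-95%)
# and non-liberated (<80%) classes. Particle data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
particle_mass = rng.lognormal(mean=0.0, sigma=0.8, size=n)      # arbitrary units
cu_sulfide_pct = np.clip(rng.beta(0.5, 0.5, n) * 100, 0, 100)   # area % of Cu sulfide

cu_mass = particle_mass * cu_sulfide_pct / 100.0                # Cu-sulfide mass per particle
classes = {"free (>=95%)":         cu_sulfide_pct >= 95,
           "liberated (80-95%)":   (cu_sulfide_pct >= 80) & (cu_sulfide_pct < 95),
           "non-liberated (<80%)": cu_sulfide_pct < 80}
total = cu_mass.sum()
for name, mask in classes.items():
    print(f"{name:22s}: {100 * cu_mass[mask].sum() / total:5.1f}% of Cu-sulfide mass")
```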

17.6.3 Metal Distribution

Valuable metals can occur in different minerals and in trace quantities in gangue minerals. Instrumentation used to quantify elements includes electron probe micro analysis, Laser Ablation ICP-MS, dynamic Secondary Ion Mass Spectrometry, and micro-PIXE. (Which instrument to employ is mainly dependent on the ore type and the mineral assemblage.) Coupled with automated mineralogical analyses, the distribution of metals among the minerals can be quantified.

As an example, consider a Cu-Ni deposit. Commonly, Ni is carried by the sulfides pentlandite, millerite, and violarite, but can also occur in small amounts (a few ppm to ~1 wt%) in the lattice of ferromagnesian minerals (e.g., olivine), Fe-oxides, pyrrhotite, and other minerals. Figure 17.11 illustrates the distribution of Ni from two deposits. For Deposit-1, 90% of the Ni is in sulfide form and is considered recoverable. For Deposit-2, only about 72% of the Ni is hosted by the sulfides, with the balance hosted by gangue minerals. These results will impact the resource calculations and overall economics of a project. This information can be incorporated in geometallurgical models to help define the ore’s amenability to concentration. Instrument and technique advances continue to push these analyses to ever smaller particles (Brodusch et al., 2014).

image
Figure 17.11 Distribution of Ni among sulfides, silicates and oxides. The Ni distribution is calculated based on the mineral mass estimated by the QEMSCAN analysis and electron probe micro analysis. (Courtesy T. Grammatikopoulos, SGS Canada Inc).
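The deportment calculation behind a plot like Figure 17.11 is simply a weighted apportionment, as the sketch below shows for a hypothetical Ni ore; the mineral abundances and Ni contents are invented values, not the data plotted in the figure.

```python
# Minimal sketch of an elemental deportment calculation: modal mineralogy
# (wt% of each mineral) combined with each mineral's Ni assay apportions
# the total Ni among its hosts. All values are illustrative.
modal_wt_pct = {"pentlandite": 1.2, "pyrrhotite": 6.0,
                "olivine": 35.0, "other gangue": 57.8}
ni_in_mineral_pct = {"pentlandite": 34.0, "pyrrhotite": 0.8,
                     "olivine": 0.25, "other gangue": 0.01}

# Ni contributed by each mineral, in wt% Ni of the whole sample
ni_units = {m: modal_wt_pct[m] * ni_in_mineral_pct[m] / 100.0 for m in modal_wt_pct}
total_ni = sum(ni_units.values())
print(f"head grade {total_ni:.2f}% Ni")
for mineral, units in ni_units.items():
    print(f"  {mineral:13s}: {100 * units / total_ni:5.1f}% of the Ni")
```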

17.6.4 Liberation-limited Grade Recovery

Introduced in Chapter 1, the liberation-limited recovery is determined from the mineral composition of the particles presented to the separator. The mineralogical data are assembled to mimic a perfect separator, that is, one that accepts particles to the concentrate stream based purely on the mineral content, from highest liberated to least liberated. After allowing for mineral specific gravity to convert to a mass basis, the liberation-limited grade recovery curve is produced.

An example is given in Figure 17.12, which compares the liberation-limited curve determined by mineralogical analysis (MIN) with actual metallurgical test results (MET) from two rare earth element (REE) deposits. In the case of C-2, the metallurgical result approaches the theoretical; in case C-1, the metallurgical result falls well below the theoretical due to the loss of fines (slimes) rejected in the testwork.

image
Figure 17.12 REE grade and recovery: liberation-limited curve calculated from mineralogical analysis (MIN) and compared to actual metallurgical results (MET) for two carbonatite samples, C-1 and C-2 (Courtesy T. Grammatikopoulos, SGS Canada Inc).
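The “perfect separator” construction is easily reproduced, as the sketch below shows for a synthetic particle population: particles are ranked by mineral content and accepted cumulatively into the concentrate, tracing out the liberation-limited grade-recovery curve. The particle masses and compositions are invented, and a real calculation would first convert measured area percent to mass using mineral specific gravities.

```python
# Minimal sketch of a liberation-limited grade-recovery curve: rank particles
# by mineral content (highest first) and accept them cumulatively into the
# concentrate, one grade-recovery point per particle. Data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
n = 2000
particle_mass = rng.lognormal(0.0, 0.7, n)                 # arbitrary units
mineral_frac = np.clip(rng.beta(0.3, 1.5, n), 0.0, 1.0)    # mass fraction of value mineral

order = np.argsort(mineral_frac)[::-1]                     # best particles first
mass = particle_mass[order]
mineral = mass * mineral_frac[order]

cum_mass = np.cumsum(mass)
cum_mineral = np.cumsum(mineral)
grade = 100 * cum_mineral / cum_mass                       # % mineral in concentrate
recovery = 100 * cum_mineral / cum_mineral[-1]             # % of mineral recovered

for r in (50, 80, 95):
    i = np.searchsorted(recovery, r)
    print(f"recovery {r:3d}%: liberation-limited grade {grade[i]:.1f}%")
```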

Being based on electron beam analysis of polished sections, the mineralogical data are 2D. At one time the apparent need to correct to 3D was a major research activity (Barbery, 1991). At issue is that locked particles can be sectioned to appear liberated, meaning the degree of liberation is over-estimated. Correction procedures have ranged from simple to complex (Lin et al., 1999). In routine use, however, no corrections are usually made, but the issue has spurred development of 3D analysis using tomographic techniques. This cutting edge research is a fitting place to finish the chapter.

17.6.5 High-resolution X-ray Microtomography (HRXMT)

Cone beam X-ray microcomputed tomography (micro CT) systems were introduced commercially a decade ago. They allow for 3-dimensional visualization, characterization and analysis of multiphase systems at a voxel (value on a regular grid in three-dimensional space) resolution of 10 μm to generate three-dimensional images of particulate systems (Miller and Lin, 2009). An example application is described by Miller et al. (2009), who used the technique to diagnose separation performance on a phosphate project. They derived the limiting grade recovery curve from the 3D (volume) data and compared it with that generated from 2D (area) data and showed, as expected, that the 2D result over-estimated the liberation, evidenced by the grade being higher at any given recovery than shown by the 3D result. They also compared with plant data and found that the operating point (grade/recovery) fell on the 3D derived curve. As instrumentation advances and costs decline, this attractive direct measure of volume-based mineralogical data may become more widespread.

References

1. Alexander DJ, et al. Flotation performance improvement at Placer Dome Kanowna Belle Gold Mine. Proc 37th Annual Meeting of the Canadian Mineral Processors Conf Ottawa, ON, Canada: CIM; 2005; pp. 171–201.

2. Amelunxen, P., et al., 2001. Use of geostatistics to generate an ore body hardness dataset and to quantify the relationship between sample spacing and the precision of the throughput predictions. Proc. International Conference on Autogenous and Semiautogenous Grinding Technology (SAG) Conf., vol. 4. Vancouver, BC, Canada, pp. 207–220.

3. Anderson, C.G., 2006. The use of design of experimentation software in laboratory testing and plant optimization. Proc. 38th Annual Meeting of the Canadian Mineral Processors Conf., Ottawa, ON, Canada, pp. 577–606.

4. Aslan N. Application of response surface methodology and central composite rotatable design for modeling and optimization of a multi-gravity separator for chromite concentration. Powder Technol. 2008;185(1):80–86.

5. Barbery G. Mineral Liberation, Measurement, Simulation and Practical Use in Mineral Processing Quebec, Canada: Les Editions GB; 1991.

6. Bazin C, et al. Simulation of an iron ore concentration circuit using mineral size recovery curves of industrial spirals. Proc 46th Annual Meeting of the Canadian Mineral Processors Conf Ottawa, ON, Canada: CIM; 2014; pp. 387–402.

7. Bennett C, et al. Geometallurgical modeling: applied to project evaluation and plant design. Proc 36th Annual Meeting of the Canadian Mineral Processors Conf Ottawa, ON, Canada: CIM; 2004; pp. 227–240.

8. Box GEP. Evolutionary operation: a method for increasing industrial productivity. J Royal Statist Soc Series C (Applied Statistics). 1957;6(2):81–101.

9. Box GEP, Behnken DW. Some new three level design for the study of quantitative variables. Technometrics. 1960;2(4):455–475.

10. Brochot S, et al. USIM PAC3: design and optimization of mineral processing plants from crushing to refining. Mineral Processing Plant Design, Practice and Control. vol. 1 Littleton, CO, USA: SME; 2002; pp. 479–494.

11. Brodusch N, et al. Ionic liquid-based observation technique for nonconductive materials in the scanning electron microscope: application to the characterization of a rare earth ore. Microsc Res Tech. 2014;77(3):225–235.

12. Bulled D. Grinding circuit design for Adanac Moly Corp using a geometallurgical approach. Proc 37th Annual Meeting of the Canadian Mineral Processors Conf Ottawa, ON, Canada: CIM; 2007; pp. 101–121.

13. Celep O, et al. Optimization of some parameters of stirred mill for ultra-fine grinding of refractory Au/Ag ores. Powder Technol. 2011;208(1):121–127.

14. Collins DA, et al. Designing modern flotation circuits using JKFIT and JKSimFloat. In: Malhotra D, ed. Recent Advances in Mineral Processing Plant Design. Littleton, CO, USA: SME; 2009; pp. 197–203.

15. Concha AF. Solid-Liquid Separation in the Mining Industry. Fluid Mechanics and Its Applications (Series), vol. 105. Cham, Heidelberg, New York, Dordrecht, London: Springer; 2014.

16. Cundall PA, Strack ODL. A discrete numerical model for granular assemblies. Géotechnique. 1979;29(1):47–65.

17. Evans CL, et al. Application of process mineralogy as a tool in sustainable processing. Miner Eng. 2011;24(12):1242–1248.

18. Fisher RA. The arrangement of field experiments. J Ministry Agric Great Brit. 1926;33:503–513.

19. Fuerstenau MC, ed. Froth Flotation: A Century of Innovation. Littleton, CO, USA: SME; 2007.

20. Gay SL. A liberation model for comminution based on probability theory. Miner Eng. 2004a;17(4):525–534.

21. Gay SL. Simple texture-based liberation modelling of ores. Miner Eng. 2004b;17(11-12):1209–1216.

22. Hedvall P, Nordin M. PlantDesigner®: a crushing and screening modeling tool. Mineral Processing Plant Design, Practice and Control. vol. 1 Littleton, CO, USA: SME; 2002; pp. 421–441.

23. Hoal KO, et al. Research in quantitative mineralogy: examples from diverse applications. Miner Eng. 2009;22(4):402–408.

24. King RP. Modeling and Simulation of Mineral Processing Systems. second ed. Englewood, CO, USA: SME; 2012.

25. Kökkılıç O, et al. A design of experiments investigation into dry separation using a Knelson concentrator. Miner Eng. 2015;72(0):73–86.

26. Kosick G, Bennett C. The value of orebody power requirement profiles for SAG circuit design. Proc 31st Annual Meeting of the Canadian Mineral Processors Conf Ottawa, ON, Canada: CIM; 1999; pp. 241–253.

27. Kosick, G., et al., 2002. Managing company risk by incorporating the Mine Resource Model into design and optimization of mineral processing plants. SGS Mineral Services, Technical Paper 21, Technical Bull. pp. 1–7.

28. Leroux D, Hardie C. Simulation of closed circuit mineral processing operations using Limn® Flowsheet Processing Software. Proc 35th Annual Meeting of the Canadian Mineral Processors Conf Ottawa, ON, Canada: CIM; 2003; pp. 543–558.

29. Leroux DL, et al. The application of simulation and Design-of-Experiments techniques to the optimization of grinding circuits at the Heath Steele Concentrator. Proc 31st Annual Meeting of the Canadian Mineral Processors Conf Ottawa, ON, Canada: CIM; 1999; pp. 227–240.

30. Lin D, et al. Comparison of stereological correction procedures for liberation measurements by use of a standard material. Trans Inst Min Metal Sec C. 1999;108:C127–C137.

31. Lotter NO. Modern process mineralogy: an integrated multi-disciplined approach to flowsheeting. Miner Eng. 2011;24(12):1229–1237.

32. Lynch AJ, et al. Mineral and Coal Flotation Circuits: Their Simulation and Control Developments in Mineral Processing. vol. 3 New York, NY, USA: Elsevier Scientific Publishing Company; 1981.

33. Miller JD, Lin CL. High resolution X-ray Micro CT (HRXMT)—advances in 3D particle characterization for mineral processing operations. In: Malhotra D, ed. Recent Advances in Mineral Processing Plant Design. Littleton, CO, USA: SME; 2009:48–59.

34. Miller JD, et al. Liberation-limited grade/recovery curves from X-ray micro CT analysis of feed material for the evaluation of separation efficiency. Int J Miner Process. 2009;93(1):48–53.

35. Montgomery DC. Design and Analysis of Experiments eighth ed. New York, NY, USA: John Wiley and Sons; 2012.

36. Napier-Munn T. Designing and analysing plant trials. In: Greet CJ, ed. Flotation Plant Optimization. AusIMM 2010:175–190. Spectrum Series, No 15.

37. Napier-Munn TJ, Lynch AJ. The modelling and computer simulation of mineral treatment processes – current status and future trends. Miner Eng. 1992;5(2):143–167.

38. Napier-Munn TJ, et al. Mineral Comminution Circuits: Their Operation and Optimisation. Brisbane, Australia: Julius Kruttschnitt Mineral Research Centre (JKMRC), The University of Queensland; 1996.

39. Obeng DP, et al. Application of central composite rotatable design to modelling the effect of some operating variables on the performance of the three product cyclone. Int J Miner Process. 2005;76(3):181–192.

40. Parker DJ, et al. Positron emission particle tracking-a technique for studying flow within engineering equipment. Nucl Instrum Methods Phys Res Sect A. 1993;326(3):592–607.

41. Plackett RL, Burman JP. The design of optimum multifactorial experiments. Biometrika. 1946;33(4):305–325.

42. Radziszewski P, et al. Lifter design using a DOE approach with a DEM charge motion model. Proc 35th Annual Meeting of the Canadian Mineral Processors Conf Ottawa, ON, Canada: CIM; 2003; pp. 527–542.

43. Sbárbaro D, Del Villar R, eds. Advanced Control and Supervision of Mineral Processing Plants. London, Dordrecht, Heidelberg, New York, NY, USA: Springer; 2010.

44. Schmidt SR, Launsby RG. In: Kiemele MJ, ed. Understanding Industrial Designed Experiments. fourth ed. Air Academy Press 1994.

45. Smythe DM, et al. Rare earth element deportment studies utilising QEMSCAN technology. Miner Eng. 2013;52:52–61.

46. Taguchi G. System of Experimental Design: Engineering Methods to Optimize Quality and Minimize Costs White Plains, NY, USA: Kraus International Publications; 1987.

47. Venkatesan L, et al. Optimisation of air rate and froth depth in flotation using a CCRD factorial design-PGM case study. Miner Eng. 2014;66-68:221–229.

48. Versteeg HK, Malalasekera W. An Introduction to Computational Fluid Dynamics: The Finite Volume Method second ed. Harlow, Essex, UK: Prentice Hall; 2007.

49. Yates, F., 1937. The design and analysis of factorial experiments (Technical report). Technical Communication No. 35 of the Commonwealth Bureau of Soils (alternatively attributed to the Imperial Bureau of Soil Science), Harpenden, UK.
