
Chapter 5 Calibrations, Standardizations, and Blank Corrections

In Chapter 3 we introduced a relationship between the measured signal, Smeas, and the absolute amount of analyte

Smeas = k × nA + Sreag     (5.1)

or the relative amount of analyte in a sample

Smeas = k × CA + Sreag     (5.2)

where nA is the moles of analyte, CA is the analyte's concentration, k is the method's sensitivity, and Sreag is the contribution to Smeas from constant errors introduced by the reagents used in the analysis. To obtain an accurate value for nA or CA it is necessary to avoid determinate errors affecting Smeas, k, and Sreag. This is accomplished by a combination of calibrations, standardizations, and reagent blanks.

5A Calibrating Signals

Signals are measured using equipment or instruments that must be properly calibrated if Smeas is to be free of determinate errors. Calibration is accomplished against a standard, adjusting Smeas until it agrees with the standard's known signal. Several common examples of calibration are discussed here.

When the signal is a measurement of mass, Smeas is determined with an analytical balance. Before a balance can be used, it must be calibrated against a reference weight meeting standards established by either the National Institute of Standards and Technology or the American Society for Testing and Materials. With an electronic balance the sample's mass is determined by the current required to generate an upward electromagnetic force counteracting the sample's downward gravitational force. The balance's calibration procedure invokes an internally programmed calibration routine specifying the reference weight to be used. The reference weight is placed on the balance's weighing pan, and the relationship between the displacement of the weighing pan and the counteracting current is automatically adjusted. Calibrating a balance, however, does not eliminate all sources of determinate error.
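As a quick sketch of how equation 5.2 is applied, solving it for the analyte's concentration gives CA = (Smeas – Sreag)/k. The numbers below are hypothetical illustrations, not values from the text:

```python
def analyte_concentration(s_meas, k, s_reag=0.0):
    """Solve equation 5.2, Smeas = k*CA + Sreag, for CA.

    k is the method's sensitivity and Sreag is the constant
    contribution of the reagents to the measured signal.
    """
    return (s_meas - s_reag) / k

# Hypothetical example: Smeas = 0.30, k = 0.10 signal units per ppm,
# Sreag = 0.05, giving CA = (0.30 - 0.05) / 0.10 = 2.5 ppm
print(analyte_concentration(0.30, 0.10, s_reag=0.05))
```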
Due to the buoyancy of air, an object's weight in air is always lighter than its weight in vacuum. If there is a difference between the density of the object being weighed and the density of the weights used to calibrate the balance, then a correction to the object's weight must be made.1 An object's true weight in vacuo, Wv, is related to its weight in air, Wa, by the equation

Wv = Wa × [1 + (1/Do – 1/Dw) × 0.0012]

where Do is the object's density, Dw is the density of the calibration weight, and 0.0012 is the density of air under normal laboratory conditions (all densities are in units of g/cm3). Clearly the greater the difference between Do and Dw, the more serious the error in the object's measured weight.

The buoyancy correction for a solid is small and frequently ignored. It may be significant, however, for liquids and gases of low density. This is particularly important when calibrating glassware. For example, a volumetric pipet is calibrated by carefully filling the pipet with water to its calibration mark, dispensing the water into a tared beaker, and determining the mass of water transferred. After correcting for the buoyancy of air, the density of water is used to calculate the volume of water dispensed by the pipet.

EXAMPLE 5.1

A 10-mL volumetric pipet was calibrated following the procedure just outlined, using a balance calibrated with brass weights having a density of 8.40 g/cm3. At 25 °C the pipet was found to dispense 9.9736 g of water. What is the actual volume dispensed by the pipet?

SOLUTION

At 25 °C the density of water is 0.99705 g/cm3. The water's true weight, therefore, is

Wv = 9.9736 g × [1 + (1/0.99705 – 1/8.40) × 0.0012] = 9.9842 g

and the actual volume of water dispensed by the pipet is

9.9842 g / 0.99705 g/cm3 = 10.014 cm3 = 10.014 mL

If the buoyancy correction is ignored, the pipet's volume is reported as

9.9736 g / 0.99705 g/cm3 = 10.003 cm3 = 10.003 mL

introducing a negative determinate error of –0.11%.
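The buoyancy correction and Example 5.1 can be sketched in code. This is a minimal illustration (the function name is my own); the densities are those given in the example:

```python
def buoyancy_corrected_weight(w_air, d_object, d_weights, d_air=0.0012):
    """True weight in vacuo, Wv, from the weight in air, Wa.

    Implements Wv = Wa * [1 + (1/Do - 1/Dw) * d_air], with all
    densities in g/cm^3; 0.0012 g/cm^3 is the density of air
    under normal laboratory conditions.
    """
    return w_air * (1 + (1 / d_object - 1 / d_weights) * d_air)

# Example 5.1: water (0.99705 g/cm^3 at 25 deg C) weighed against
# brass calibration weights (8.40 g/cm^3)
w_v = buoyancy_corrected_weight(9.9736, 0.99705, 8.40)
volume = w_v / 0.99705      # g divided by g/cm^3 gives cm^3 (i.e. mL)
print(round(w_v, 4), round(volume, 3))    # 9.9842 10.014
```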
Balances and volumetric glassware are examples of laboratory equipment. Laboratory instrumentation also must be calibrated using a standard providing a known response. For example, a spectrophotometer's accuracy can be evaluated by measuring the absorbance of a carefully prepared solution of 60.06 ppm K2Cr2O7 in 0.0050 M H2SO4, using 0.0050 M H2SO4 as a reagent blank.2 The spectrophotometer is considered calibrated if the resulting absorbance at a wavelength of 350.0 nm is 0.640 ± 0.010 absorbance units. Be sure to read and carefully follow the calibration instructions provided with any instrument you use.

5B Standardizing Methods

The American Chemical Society's Committee on Environmental Improvement defines standardization as the process of determining the relationship between the measured signal and the amount of analyte.3 A method is considered standardized when the value of k in equation 5.1 or 5.2 is known.

In principle, it should be possible to derive the value of k for any method by considering the chemical and physical processes responsible for the signal. Unfortunately, such calculations are often of limited utility due either to an insufficiently developed theoretical model of the physical processes or to nonideal chemical behavior. In such situations the value of k must be determined experimentally by analyzing one or more standard solutions containing known amounts of analyte. In this section we consider several approaches for determining the value of k. For simplicity we will assume that Sreag has been accounted for by a proper reagent blank, allowing us to replace Smeas in equations 5.1 and 5.2 with the signal for the species being measured.

5B.1 Reagents Used as Standards

The accuracy of a standardization depends on the quality of the reagents and glassware used to prepare standards.
For example, in an acid–base titration, the amount of analyte is related to the absolute amount of titrant used in the analysis by the stoichiometry of the chemical reaction between the analyte and the titrant. The amount of titrant used is the product of the signal (which is the volume of titrant) and the titrant's concentration. Thus, the accuracy of a titrimetric analysis can be no better than the accuracy to which the titrant's concentration is known.

Primary Reagents  Reagents used as standards are divided into primary reagents and secondary reagents. A primary reagent is a reagent of known purity that can be used to prepare a standard containing an accurately known amount of analyte. For example, an accurately weighed sample of 0.1250 g K2Cr2O7 contains exactly 4.249 × 10–4 mol of K2Cr2O7. If this same sample is placed in a 250-mL volumetric flask and diluted to volume, the concentration of the resulting solution is exactly 1.700 × 10–3 M. A primary reagent must have a known stoichiometry, a known purity (or assay), and be stable during long-term storage both in solid and solution form. Because of the difficulty in establishing the degree of hydration, even after drying, hydrated materials usually are not considered primary reagents. Reagents not meeting these criteria are called secondary reagents; a secondary reagent's purity in solid form, or the concentration of a standard prepared from it, must be determined relative to a primary reagent. Lists of acceptable primary reagents are available.4 Appendix 2 contains a selected listing of primary standards.

Other Reagents  Preparing a standard often requires additional substances that are not primary or secondary reagents. When a standard is prepared in solution, for example, a suitable solvent and solution matrix must be used.
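The K2Cr2O7 arithmetic above can be sketched as follows. This is a hedged illustration: the molar mass of 294.18 g/mol is my addition and is not stated in the text:

```python
def molarity_from_mass(mass_g, molar_mass_g_mol, volume_L):
    """Concentration of a standard prepared by dissolving a weighed
    primary reagent and diluting to a known volume."""
    return mass_g / molar_mass_g_mol / volume_L

# 0.1250 g K2Cr2O7 (molar mass ~294.18 g/mol) diluted to 250 mL:
moles = 0.1250 / 294.18                            # ~4.249e-4 mol
conc = molarity_from_mass(0.1250, 294.18, 0.250)   # ~1.700e-3 M
print(f"{moles:.3e} mol, {conc:.3e} M")            # 4.249e-04 mol, 1.700e-03 M
```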
Each of these solvents and reagents is a potential source of additional analyte that, if unaccounted for, leads to a determinate error. If available, reagent grade chemicals, conforming to standards set by the American Chemical Society, should be used.5 The packaging label included with a reagent grade chemical (Figure 5.1) either lists the maximum allowed limits for specific impurities or provides the actual assayed values for the impurities as reported by the manufacturer. The purity of a reagent grade chemical can be improved by purification or by conducting a more accurate assay. As discussed later in the chapter, contributions to Smeas from impurities in the sample matrix can be compensated for by including an appropriate blank determination in the analytical procedure.

Figure 5.1 Examples of typical packaging labels from reagent grade chemicals. Label (a) provides the actual lot assay for the reagent as determined by the manufacturer. Note that potassium has been flagged with an asterisk (*) because its assay exceeds the maximum limit established by the American Chemical Society (ACS). Label (b) does not provide assayed values, but indicates that the reagent meets the specifications of the ACS for the listed impurities. An assay for the reagent also is provided. © David Harvey/Marilyn Culler, photographer.

Preparing Standard Solutions  Solutions of primary standards generally are prepared in class A volumetric glassware to minimize determinate errors. Even so, the relative error in preparing a primary standard is typically ±0.1%. The relative error can be improved if the glassware is first calibrated as described in Example 5.1.
It also is possible to prepare standards gravimetrically by taking a known mass of standard, dissolving it in a solvent, and weighing the resulting solution. Relative errors of ±0.01% can typically be achieved in this fashion.

It is often necessary to prepare a series of standard solutions, each with a different concentration of analyte. Such solutions may be prepared in two ways. If the range of concentrations is limited to only one or two orders of magnitude, the solutions are best prepared by transferring a known mass or volume of the pure standard to a volumetric flask and diluting to volume. When working with larger concentration ranges, particularly those extending over more than three orders of magnitude, standards are best prepared by a serial dilution from a single stock solution. In a serial dilution a volume of a concentrated stock solution, which is the first standard, is diluted to prepare a second standard. A portion of the second standard is then diluted to prepare a third standard, and the process is repeated until all necessary standards have been prepared. Serial dilutions must be prepared with extra care because a determinate error in the preparation of any single standard is passed on to all succeeding standards.

5B.2 Single-Point versus Multiple-Point Standardizations*

The simplest way to determine the value of k in equation 5.2 is by a single-point standardization, in which a single standard containing a known concentration of analyte, CS, is prepared and its signal, Sstand, is measured. The value of k is calculated as

k = Sstand / CS     (5.3)

A single-point standardization is the least desirable way to standardize a method. When using a single standard, all experimental errors, both determinate and indeterminate, are carried over into the calculated value for k.
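A single-point standardization (equation 5.3) can be sketched in code; the standard concentration and signal below are hypothetical, not from the text:

```python
def k_single_point(s_stand, c_standard):
    """Equation 5.3: sensitivity k from one standard's signal and
    its known analyte concentration."""
    return s_stand / c_standard

# Hypothetical: a 10.0-ppm standard gives a signal of 0.850
k = k_single_point(0.850, 10.0)

# Reported concentration of a sample whose signal is 0.425,
# assuming k is constant over the concentration range:
c_sample = 0.425 / k
print(round(k, 3), round(c_sample, 3))
```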
Any uncertainty in the value of k increases the uncertainty in the analyte's concentration. In addition, equation 5.3 establishes the standardization relationship for only a single concentration of analyte. Extending equation 5.3 to samples containing concentrations of analyte different from that in the standard assumes that the value of k is constant, an assumption that is often not true.6 Figure 5.2 shows how assuming a constant value of k may lead to a determinate error. Despite these limitations, single-point standardizations are routinely used in many laboratories when the analyte's range of expected concentrations is limited. Under these conditions it is often safe to assume that k is constant (although this assumption should be verified experimentally). This is the case, for example, in clinical laboratories where many automated analyzers use only a single standard.

The preferred approach to standardizing a method is to prepare a series of standards, each containing the analyte at a different concentration. Standards are chosen such that they bracket the expected range for the analyte's concentration.

Figure 5.2 Example showing how an improper use of a single-point standardization can lead to a determinate error in the reported concentration of analyte.

*The following discussion of standardizations assumes that the amount of analyte is expressed as a concentration. It also applies, however, when the absolute amount of analyte is given in grams or moles.
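The determinate error that Figure 5.2 warns about can be illustrated numerically. The quadratic response curve below is invented purely for illustration; it is not from the text:

```python
# Suppose the true response is slightly nonlinear: S = 0.10*C - 0.001*C**2
def true_signal(c):
    return 0.10 * c - 0.001 * c**2

# Single-point standardization at Cs = 10.0 (equation 5.3):
k = true_signal(10.0) / 10.0    # k = 0.09, not the low-concentration 0.10

# Using that constant k on a sample whose true concentration is 5.0:
c_reported = true_signal(5.0) / k
error_pct = (c_reported - 5.0) / 5.0 * 100
print(round(c_reported, 3), round(error_pct, 1))
# reports ~5.278 instead of 5.0: a ~+5.6% determinate error
```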