
11 Reviewing environmental impact statements

11.1 META-ASSESSMENT: REVIEWING ENVIRONMENTAL STATEMENTS

In an impact-assessment study, after the various areas of impact are studied, all the “threads” are joined again to arrive at an overall assessment, and the whole discussion must be presented in a report containing the main points of all the areas covered, following specific guidelines. The report as a whole is also the subject of scrutiny as part of the control process, and this is discussed in this chapter. The review of environmental statements is really a completely different stage in the process and also requires a completely different approach, which we can use now to finish our discussion of different types of expert systems for impact assessment. Reviewing impact-assessment reports can be labelled meta-IA, as it involves “assessing the assessments” of impacts.

Advice on how to prepare Environmental Statements has been forthcoming from the Department of the Environment (DoE, 1995), and the “reverse” task of assessing the quality of such statements has also been progressively explored in increasing depth and detail. Early attempts by Lee and Colley (1990) and by the Commission of the European Communities (CEC, 1993) were followed by the first good-practice guide from the Department of the Environment (DoE, 1994), culminating in the definitive report (DoE, 1996) – which we shall take here as our main source – based on research at Oxford Brookes University on the changing quality of Environmental Statements (Glasson et al., 1997). Weston (2000) has also reviewed this question in the light of the new 1999 legislation, but without developing the methodology any further.

The task of reviewing impact-assessment reports can be summarised as assessing the completeness and presentation of such reports. It does not involve assessing the project impacts or the acceptability of the project – that will be for the development control procedures to determine – but assessing the impact report as a public document in its suitability for use in that development control process. An obvious implication for our discussion is that GIS is unlikely to play a part in this evaluation, which only involves “comparing” the different parts of the report with some preconceived ideas about what they should contain and how they should be presented.

The assessment must involve many different criteria which have to be assessed on their own merit, and whose partial assessments must be assembled into some form of overall evaluation of the qualities and deficiencies of the report as a whole, following some form of so-called multi-criteria evaluation (Voogd, 1983). The questions raised by such an approach can be grouped under several different headings:

1 identifying the aspects to be assessed and how to extract the relevant information about each aspect;
2 determining how to assess each aspect of the report;
3 deciding the nature of any – one or several – overall evaluation(s) of the report to be derived from the partial evaluations of the various aspects;
4 specifying how the partial evaluations are to be combined into the overall evaluation(s).

The details of how these questions are addressed vary depending on the approach; a minimal sketch of how partial evaluations might be combined (points 3 and 4) is given below.
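As an illustration only, the following Python fragment shows one way such a multi-criteria combination could work: partial scores for individual criteria are merged into section scores, and section scores into an overall mark, using weighted averages. The criterion scores, weights and 0–4 scale are hypothetical assumptions made for the sketch, not the scheme of any particular review package.

```python
# A minimal sketch of multi-criteria aggregation: partial scores for
# individual criteria are combined into section scores, and section
# scores into one overall mark, using weighted averages. The weights
# and the 0-4 scale are illustrative assumptions.

def weighted_mean(scores, weights):
    """Combine partial scores using normalised weights."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Hypothetical partial evaluations for two sections of a report
sections = {
    "Description of the development": {"scores": [4, 3, 2], "weights": [2, 1, 1]},
    "Description of the environment": {"scores": [3, 4], "weights": [1, 1]},
}

section_scores = {
    name: weighted_mean(data["scores"], data["weights"])
    for name, data in sections.items()
}
overall = weighted_mean(list(section_scores.values()), [1] * len(section_scores))

for name, score in section_scores.items():
    print(f"{name}: {score:.2f}")
print(f"Overall report mark: {overall:.2f}")
```

Equal weights at the top level reproduce a simple average of section scores; unequal weights would let a designer make some sections count for more in the overall mark.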
The “Environmental Impact Statement Review Package” – by the Impacts Assessment Unit at Oxford Brookes University (DoE, 1996; Glasson et al., 1999; Weston, 2000) – presents an expert assessor with a checklist of criteria grouped under main headings and divided into sections, and the assessor is expected to “score” each of the aspects of the report and, at the end of each section, also the overall worth of that section. Similarly, at the end, the assessor is expected to provide an overall mark for the whole report and to list overall good and bad points for the various agents/groups involved: the developer, the local authority, the public, etc. In this approach, the fact that the assessor is an expert has implications for the methodology:

• The assessor can be asked to judge directly each of the different aspects of the report and attach “scores” to those judgements.
• Similarly, the assessor can be asked to develop in the process an overall impression of different groups of aspects (“sections”).
• Finally, an impression of the overall quality of the report is also formed in the assessor’s mind as he/she progresses through the detailed evaluation.

If, on the other hand, we are to use an expert-systems approach to the same problem, such an approach would be designed for use by non-experts, and this has fundamental implications for every step in the process. A non-expert cannot be expected to form opinions to the same extent as an expert about the quality of various aspects of the report, let alone the overall quality of whole sections of the report or of the report as a whole. The aspects of the report for the assessor to consider must be converted into relatively simple questions for the expert-system user, and the expert-system designer must “translate” as much as possible any evaluative steps involved into “statements of fact” or descriptive questions which a non-expert can answer.

Such questions can sometimes be yes–no questions to deal with sharply defined issues (“Does the Statement contain maps/diagrams describing the project?”), and the answers to such questions can be converted into “scores” automatically (for example, yes is “good” and no is “bad”). Sometimes it is better for questions to offer a wider range of answers (typically in the form of a menu) to which a “scale” of possible scores can easily be attached. For example, the aspect described in the already mentioned ES Review Package by the phrase “Indicates the nature and status of the decision(s) for which the environmental information has been prepared” can translate into an expert-system menu-question like:

Does the Statement indicate the nature and status of the decision(s) for which the information has been prepared?
– yes, in detail
– it only refers to a planning application being submitted
– it does not specify what type and status of decision it is prepared for.

This is an example where the question is simple enough and, at the same time, the three possible answers provide the rudiments of a “scoring scale” from best to worst that is sufficiently detailed for our scoring purposes. The approach should avoid starting at the lowest level of the assessment with scales whose excessive detail will get lost once the scores of all the different aspects begin to combine.
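As a minimal sketch of this automatic conversion of answers into scores, the fragment below attaches a simple best-to-worst scale to the menu question just quoted. The 2/1/0 point values are an assumption for illustration; any monotonic scale would serve the same purpose.

```python
# Sketch: converting a menu answer into a score automatically.
# The 2/1/0 values are illustrative assumptions, not the Review
# Package's actual scoring scheme.

ANSWER_SCORES = {
    "yes, in detail": 2,
    "it only refers to a planning application being submitted": 1,
    "it does not specify what type and status of decision it is prepared for": 0,
}

def score_menu_answer(answer: str) -> int:
    """Return the score attached to the user's menu choice."""
    return ANSWER_SCORES[answer]

print(score_menu_answer("yes, in detail"))  # -> 2
```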
This aspect of expert-system design requires considerable judgement on the part of the designer, as a balance must be struck between accuracy – trying to reflect the true quality of the aspect being assessed – and the simplicity of the options offered to the non-expert. Extensive lists of options could be offered to deal with every aspect, showing all the possible nuances reflecting the different levels of quality that could be present, but that could make the job of answering those questions excessively onerous for the non-expert. Also, the accumulation of such questions would make the whole evaluation process too long and complicated, and therefore impractical. No simple rule can be suggested to solve this problem,[55] but it is the designer’s job to provide a sufficient range of possible answers to cover a meaningful scale of “qualities” while at the same time making it easy for the non-expert to understand the answers and the differences between them.

[55] This is a variation of the well-known “Law of Requisite Variety” of Ross Ashby (Ashby, 1956), which postulates that an information system should not define its information at a level of detail greater than it is capable of processing.

Questions can also take the form of lists (multiple-choice menus) from which the user picks out the items which are relevant: for instance, the item that in the ES Review Package reads “Indicates the methods by which the quantities of residuals and wastes were estimated, acknowledges any uncertainty, and gives ranges or confidence limits where appropriate” can be broken down into several items (one for each project stage), which in turn can translate into list questions like:

Does the Statement discuss the calculations used to estimate quantities of waste and/or residual materials expected during the construction stage?
– it indicates the methods used to calculate them
– it defines levels of uncertainty associated with the calculations
– it gives ranges of confidence limits for the results
– none of the above (if this one is chosen, all other choices are excluded)

The user selects one or several of these options, and the scoring for the purposes of evaluation derives from the combination of the scores attached to each of the choices, as the sketch below illustrates.
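A minimal sketch of such combined scoring, assuming one illustrative point per positive option and treating “none of the above” as exclusive, might look as follows.

```python
# Sketch: scoring a multiple-choice (multi-select) list question.
# Each selected option contributes a partial score; "none of the
# above" excludes all other choices. Point values are illustrative
# assumptions.

OPTION_SCORES = {
    "indicates the methods used to calculate them": 1,
    "defines levels of uncertainty associated with the calculations": 1,
    "gives ranges of confidence limits for the results": 1,
    "none of the above": 0,
}

def score_list_question(selected: list[str]) -> int:
    """Combine the scores of all selected options into one score."""
    if "none of the above" in selected:
        if len(selected) > 1:
            raise ValueError('"none of the above" excludes all other choices')
        return 0
    return sum(OPTION_SCORES[option] for option in selected)

# A Statement that gives methods and confidence limits scores 2 of 3
print(score_list_question([
    "indicates the methods used to calculate them",
    "gives ranges of confidence limits for the results",
]))
```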
As can be seen, the transition from a review procedure for experts to an expert system for non-experts requires translating each and every aspect assessed into simple questions, and this simplification often means that one aspect to be assessed translates into several questions in the expert system. For example, the item that reads in the ES Review Package “Describes any additional services (water, electricity, emergency services, etc.) and developments required as a consequence of the project” can translate (applied to every project stage) into two questions for the user:

Does the Statement say if any additional services or development will be required during the construction/preparation stage?
– no, it does not say
– this type of project wouldn’t need any additional services/developments
– yes, it specifies which additional services/developments it requires.

(then, if the answer to the previous question is no. 3)

Which additional services or developments will be required during the construction/preparation stage?
– water
– electricity
– gas
– sewage disposal
– waste disposal
– additional infrastructure (roads, etc.) to be built
– other developments
– emergency services
– none of the above (if this one is chosen, all other choices are excluded).

The range of possible variations extends beyond what can be discussed here,[56] but the point of this discussion is to show that the translation of aspects to be studied into useful questions for the expert-system user is not trivial and requires careful consideration by the expert-system designer. The following sections now discuss in greater detail the aspects that such an expert system should cover.

[56] The above examples have been taken from the REVIEW prototype expert system developed at Oxford Brookes University by Agustin Rodriguez-Bachiller for teaching/demonstration purposes, which is “attached” to the SCREEN and SCOPE prototypes mentioned in Chapter 6.

11.2 THE BUILDING BLOCKS OF THE ASSESSMENT

The best-practice ES Review Package (DoE, 1996; Glasson et al., 1999; Weston, 2000) groups all the aspects to be reviewed into a structure of numbered sections and headings (a data-structure sketch of this checklist follows the list):

1 Description of the development
• principal features of the project;
• land requirements;
• project inputs;
• residues and emissions.

2 Description of the environment
• description of the area occupied by and surrounding the project;
• baseline conditions.

3 Scoping, consultation and impact identification
• scoping and consultation;
• impact identification.

4 Prediction and evaluation of impacts
• prediction of magnitude of impacts;
• methods and data;
• evaluation of impact significance.

5 Alternatives

6 Mitigation and monitoring
• description of mitigation measures;
• commitment to mitigation and monitoring;
• environmental effects of mitigation.
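In an expert system, a checklist of this kind might be held as a simple nested data structure from which question prompts are generated and under which partial scores are later accumulated. The following is a hypothetical sketch using the section and heading names above; the representation itself (a dictionary of lists) is a design assumption, not part of the Review Package.

```python
# Sketch: the Review Package checklist as a nested data structure.
# Section and sub-heading names follow the list above; the dict-of-
# lists representation is an illustrative design choice.

REVIEW_CHECKLIST = {
    "1 Description of the development": [
        "principal features of the project",
        "land requirements",
        "project inputs",
        "residues and emissions",
    ],
    "2 Description of the environment": [
        "description of the area occupied by and surrounding the project",
        "baseline conditions",
    ],
    "3 Scoping, consultation and impact identification": [
        "scoping and consultation",
        "impact identification",
    ],
    "4 Prediction and evaluation of impacts": [
        "prediction of magnitude of impacts",
        "methods and data",
        "evaluation of impact significance",
    ],
    "5 Alternatives": [],
    "6 Mitigation and monitoring": [
        "description of mitigation measures",
        "commitment to mitigation and monitoring",
        "environmental effects of mitigation",
    ],
}

# Walk the checklist to list the aspects a question set must cover
for section, headings in REVIEW_CHECKLIST.items():
    for heading in headings:
        print(f"{section} -> {heading}")
```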