

9.7.2. Addressing specific protection goals in (semi)field studies

Specific protection goals (SPGs) are proposed for in-soil organisms as drivers of particular ecosystem services, both in-field and off-field. When interpreting field studies, it needs to be ensured that they are able to address the SPGs and to detect relevant effects on the key drivers. The currently proposed specific protection goals for in-soil animals are as follows (these may be adapted following the risk manager consultation):

In-field: For earthworms, enchytraeids, microarthropods, macrodetritivores (e.g. isopods), nematodes and molluscs (slugs and snails), small effects (10 to < 35%) up to months on abundance/biomass of populations are tolerable.

For enchytraeids, microarthropods, macrodetritivores (e.g. isopods) and nematodes, optionally also medium effects (35 to < 65%) up to weeks on abundance/biomass of populations are tolerable.

Off-field: For all organisms, only negligible effects (≤ 10% or NEL) are tolerable.

9.7.2.2. Statistical power to detect relevant magnitudes of effects in field studies

It is important to understand the power of various field study designs to detect effects at magnitudes relevant to the specific protection goals. Brock et al. (2015) developed a structured approach to the application of the MDD (minimum detectable difference) in the aquatic context, and this may provide a basis for a similar development for in-soil organisms. For earthworms, it is indicated in Section 3 that in-field small effects on abundance and biomass (10–35%) for months are acceptable, whereas off-field no effects (≤ 10% or NEL) are acceptable. In the present earthworm field study according to ISO 11268-3 (ISO, 1999, 2014), with the improvements of Kula et al. (2006), effects are assessed after 1, 6 and 12 months; however, the assessment endpoint is recovery after 1 year. Often, only a limited number of dosages is used and, in the test design used, it is not possible in practice to detect effects of less than 50% on overall abundance and/or biomass with sufficient statistical significance. In more recent studies, it appeared possible to lower the MDD to 30–40% for total abundance (Vollmer et al., 2016). For individual species, which have lower abundances, the ability to detect effects will be lower. The standard endpoints of the test thus do not fit with the required level of protection. The test protocol also prescribes measuring effects after 3 and 6 months, indicating that the duration of effects is potentially in line with the data requirements. In the present design, however, the field test with earthworms will not be able to show whether effects of 10–35% occur, or to derive a NEL or an EC10. An example of how to calculate the power of the test is given in De Jong et al. (2006). This implies that it might be necessary to adapt the study protocol in order to obtain results that can be used to address the acceptability of effects identified for the SPGs. In general, pre-sampling within a given field is essential to the study's evaluation.
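As an illustration only (this is not the calculation of De Jong et al. (2006), and all numerical values are hypothetical assumptions), the sketch below approximates the power of a one-sided two-sample t-test to detect a given percentage reduction in mean abundance, starting from an assumed coefficient of variation among replicate plots:

```python
# Minimal power sketch for a control-vs-treatment comparison of plot means.
# Assumptions (illustrative, not taken from the cited studies): equal variances,
# equal replication, and a coefficient of variation (CV = SD/mean) of 40%.
import numpy as np
from scipy import stats

def power_two_sample(percent_effect, cv, n_per_group, alpha=0.05):
    """Approximate power of a one-sided two-sample t-test to detect a
    'percent_effect' reduction from the control mean."""
    d = (percent_effect / 100.0) / cv       # standardised effect size (Cohen's d)
    df = 2 * n_per_group - 2
    nc = d * np.sqrt(n_per_group / 2.0)     # non-centrality of the t statistic
    t_crit = stats.t.ppf(1 - alpha, df)
    return 1 - stats.nct.cdf(t_crit, df, nc)

# How much power does a 4-replicate design have for the effect sizes named in the SPGs?
for eff in (10, 35, 50):
    print(f"{eff}% reduction, n = 4 plots: power = {power_two_sample(eff, cv=0.40, n_per_group=4):.2f}")
```

Under these assumptions, the power to demonstrate a small (10–35%) effect with only a few replicate plots is low, which illustrates the point made above about the limitations of the standard field design.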

In TME studies, the number of replicates can be higher and a dose-response design can be followed, which means that it is more realistic to derive ECx values from the results, with a higher statistical reliability than in the case of field studies. Since the distribution of in-soil organisms in agricultural fields might be highly variable, variation in TME studies can also be high and the statistical power should be checked. Earthworms can be tested in TME studies as well (Römbke et al., 2004). However, the variation appeared to be relatively high, so it is also questionable in this case whether TME studies with earthworms would be able to yield a reliable EC10 or EC35 value.

For nematodes in small-scale microcosms, the MDD was found to be < 20% (Höss et al., 2014). In field studies with collembolans, an MDD of ≥ 40% was found (Mack and Knaebe, 2016); Scholz-Starke (2013) conducted TME studies that on average were able to detect between approximately 5–10% (nematodes) and 40–50% (collembolans and enchytraeids) deviation from the control level. The MDD can be decreased by increasing the number of replicates. This can be the number of TME cores, but in the case of low numbers an option can be to increase the size of the cores so that the number of samples within a core (and thus the number of individuals) can be increased. MDDs can also be decreased by improving the sampling techniques (Brock et al., 2015). It should be elaborated how the sampling techniques can be adapted to be able to detect a certain level of effect.
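A minimal sketch, assuming a simple two-group design with equal replication, equal variances and an illustrative coefficient of variation, of how the MDD (expressed as a percentage of the control mean) shrinks as the number of replicate TME cores or field plots increases:

```python
# MDD (% of control mean) for a one-sided two-sample t-test, as a function of
# the number of replicates. CV = 40% is an illustrative assumption only.
import numpy as np
from scipy import stats

def mdd_percent(cv, n_per_group, alpha=0.05):
    df = 2 * n_per_group - 2
    t_crit = stats.t.ppf(1 - alpha, df)        # one-sided critical value
    se_rel = cv * np.sqrt(2.0 / n_per_group)   # SE of the difference, relative to the control mean
    return 100 * t_crit * se_rel

for n in (4, 8, 16):
    print(f"n = {n:2d} replicates: MDD = {mdd_percent(cv=0.40, n_per_group=n):.0f}% of control")
```

Doubling the number of replicates lowers the MDD by roughly a factor of √2, which is why designs aiming at the 10–35% effect range named in the SPGs require either substantially more replicates or lower between-replicate variability (e.g. through improved sampling).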

In the case of experiments using natural communities of in-soil organisms (suggested as a surrogate reference tier), control and treatment communities will often have a different composition. If the differences in composition are potentially large, a large number of replicates will be needed to achieve an MDD of less than 100%. Such differences can partially be accounted for by the Henderson and Tilton (1955) calculation or related methods. However, with such a method, it is not possible to obtain a statistical quantity like the MDD.
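For illustration, a minimal sketch of the Henderson and Tilton (1955) correction, which expresses the treatment effect relative to the pre-treatment abundances in both the treated and the control plots; the counts used are hypothetical:

```python
# Henderson-Tilton corrected percentage effect; accounts for pre-treatment
# differences between control and treated communities. Example counts are invented.
def henderson_tilton(treated_before, treated_after, control_before, control_after):
    return 100.0 * (1.0 - (treated_after * control_before)
                          / (treated_before * control_after))

# Treated plots start at lower abundance than controls; the correction separates
# the treatment effect from that initial difference.
print(henderson_tilton(treated_before=80, treated_after=30,
                       control_before=120, control_after=110))   # ~ 59%
```

Because the result is a point estimate of the corrected effect rather than a test statistic, it does not come with an MDD or a confidence statement, which is the limitation noted above.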

Until now, power calculations have been performed for only a limited number of the field protocols available for in-soil organisms. To be able to use the field protocols in regulatory practice, sufficient replication and abundance of relevant species should be ensured in order to detect the above-mentioned magnitudes of effect.

9.7.2.3. Assessing long-term effects and studying recovery

At present, the only field study that is widely used to study effects on in-soil organisms is the field study with earthworms. Currently, recovery 1 year after application is taken as an endpoint; for further details see Section 2.

It is therefore recommended, for the time being, to study community recovery only experimentally in field studies. As noted earlier (see Section 7.4), recovery over long time periods may be best addressed by complementing experimental studies with population modelling. As stated in Section 7.3, population modelling may potentially address (if sufficiently verified/validated) the impact of, and recovery from, year-on-year application of PPPs on in-soil organism species in a system approach. Effects of PPP use and recovery at community level, however, cannot for the time being be assessed using population models and therefore need to be investigated in (semi)field studies with intact communities of in-soil organisms.

Depending on the fate and behaviour of the substance in soil, the application timing, the life stage of the exposed organisms, as well as toxicokinetics and toxicodynamics, effects on the community might only become detectable at later stages. This might be the case even if no effects are visible in the short term or if recovery from short-term effects is observed. Long-term impacts may be related, e.g. to disrupted trophic interactions or to reproductive effects. Several species of in-soil organisms are univoltine, and some may only be able to complete one generation over several years (please refer to Sections 3.2.1 and 7.9). Therefore, the timeframes for assessing impacts and recovery at community level in (semi)field studies with in-soil organisms need to be appropriate to detect delayed effects of the application of a test substance. Effects may also take years to manifest: as reported by Pelosi et al. (2015), who compared the effects of different cropping systems on earthworms over 15 years, it took more than 9 years for a reduction in earthworm abundance in conventional cropping systems to become detectable compared with organic cropping. It should therefore be ensured that experimental approaches are able to detect magnitudes of effects that may only become visible after several years under variable field conditions (e.g. due to external recovery and diverging environmental conditions). This emphasises that experimental higher tier approaches need to be as controlled as necessary to be able to understand and predict long-term effects of pesticides.

As can be seen in Figure 25, field sizes in Europe are mostly above 10 ha per field, and in intensively managed areas up to 60 ha per field. It is likely that, considering the relatively low dispersal ability of in-soil organisms (see Section 3.2.2) and the average field sizes in Europe, recolonisation of fields by in-soil organisms from off-field areas within a year will be very limited. Experiments studying external and internal recovery in the same plots will therefore be of only very limited informative value as to whether the SPGs are fulfilled.

Figure 25: Average field sizes in Europe (see Reuter and Eden, 2008)

In the example of earthworm field studies performed according to ISO 11268-3 (ISO, 1999), plot sizes are 8 × 8 to 12 × 12 m and plots lie 2–5 m apart in a randomised block design. Considering the mean dispersal rates for different earthworm species (up to 10 m per year; see Section 3.2.2), it is very likely that significant migration between plots and the surrounding area will occur within a year. This will occur to a much lower extent in the majority of European fields, which are mostly > 1000 times larger than the study plots of earthworm field studies. Therefore, external recovery is likely to be severely overestimated in such studies.
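A quick arithmetic check of the size comparison above, using the smallest ISO 11268-3 plot dimension and the lower end of the field sizes in Figure 25 (the input values come from the text; the calculation itself is only illustrative):

```python
plot_area_m2 = 8 * 8              # smallest study plot: 8 m x 8 m = 64 m2
field_area_m2 = 10 * 10_000       # 10 ha field = 100,000 m2
print(field_area_m2 / plot_area_m2)   # ~ 1560; a 12 m x 12 m plot still gives a ~ 700-fold difference
```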

A recent study by Ernst et al. (2016) aims to address recovery of Collembola in a multigeneration study testing F. candida. This approach does not, however, sufficiently address the properties of the types of potential stressors and the specific features of the landscape, i.e. variations in land use and the types, spatial distribution and connectivity of habitats, as reported in Section 7.4, and is therefore, by itself, not considered suitable for assessing recovery (please refer to Appendix J).

In order to incorporate recovery in the risk assessment, it is important to keep in line with the specific protection goals. Protection goals are defined in Section 6, indicating an acceptable magnitude and duration of effects as well as the relevant spatial scale. On the basis of the protection goals, focal taxa, focal communities and/or focal landscapes should be identified, based on relevant traits. For in-soil organisms, it is suggested that the risk assessment is performed at the in-field scale (i.e. encompassing variation of factors that determine local differentiation of populations) and at the field-boundary level (encompassing variation between in-field and off-field habitats); please refer to Section 6.1.

Scholz-Starke et al. (2013) conducted TME studies with lindane. Stability of the TMEs could be influenced by, e.g. effects of isolation, effects on diversity by predators, or removal of soil for sampling purposes. Scholz-Starke defined four criteria that should be met in order to ensure stability of the system: abundance, diversity, similarity and soil removal. The results show that TMEs are relatively stable and therefore suited to detect the possibility of recovery, even for a relatively persistent substance such as lindane and for a recovery period of 1 year. This implies that the duration of a TME study is such that the majority of the taxa present will be able to show reproduction.

For field experiments to assess the risk at the field scale, it is suggested to choose a test design that excludes unrealistic external recovery (e.g. by surrounding the plot with a sufficiently large strip of PPP-treated area, preventing even animals with relatively high dispersal ability from recolonising the plot being assessed). This will allow a realistic worst-case prediction of long-term effects at the local scale.

It is possible to study the interaction at the field-boundary scale to determine the contribution of migration to recovery under realistic conditions. In order to be able to make generalised predictions, however, the information from field-boundary experiments needs to be integrated into the risk assessment.