New population metrics for top-down-bottom-up


[1] Polishchuk L. V. New population metrics for top-down-bottom-up // Oikos Blog, online publication. — 2013.

Arguably, one of the saddest fallacies in ecology is the notion that everything is connected to everything else (known as Barry Commoner's first law of ecology). The key assumption underlying this notion is that all interactions within a system are equally strong. Let us examine what kind of science this assumption implies. Even in a modest system of 10 species, the number of pairwise interactions amounts to 55 (including the effect of a species on itself), and to 5050 for a system of 100 species, leaving aside interactions with the abiotic environment. Such a number is too big for the interactions to be studied one by one, but probably too small for their individuality to be ignored altogether. The latter becomes possible when the number of interacting entities is on the order of 10²³, the Avogadro constant, but that would lead us into the realm of statistical physics rather than ecology. The Commoner law, if correct, would make our attempts to understand Nature almost hopeless and turn ecology into little more than a casebook of idiosyncratic examples. Or, following Ernest Rutherford's famous dichotomy, ecology would be closer to stamp collecting than to hard science. (Rutherford actually said "physics" and was basically right, because physics is a role model for genuine science. But we do not think that "physics envy" can really motivate the ecologist.) The picture is not all gloom, however. Rather than falling into despondency, one could quantify species interactions to see whether they are of equal strength or not. The actual problem, as often happens, is therefore an operational one: how to measure the things of interest. Let us focus our attention on trophic interactions, that is, on bottom-up and top-down effects.
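The counts above follow from simple combinatorics: each unordered pair of species interacts once, plus each species affects itself. A minimal sketch:

```python
def n_pairwise(n):
    # Unordered species pairs plus each species' effect on itself:
    # C(n, 2) + n = n * (n + 1) / 2
    return n * (n + 1) // 2

print(n_pairwise(10))   # 55 pairwise interactions among 10 species
print(n_pairwise(100))  # 5050 among 100 species
```

The quadratic growth of this count is precisely why studying all interactions one by one becomes impractical so quickly.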
One way to assess them dates back to Justus von Liebig and consists of adding nutrients to see which of them elicits a strong response from the pot plant, in terms of its growth, or from the planktonic algae in a water sample, in terms of primary production. These simple experiments, which in the era of ANOVA are called factorial-design experiments, immediately disprove the Commoner law. Liebig's law of the minimum states that there is a single factor that produces the biggest response in a given species, or a set of species with similar requirements, and thus affects them most strongly. Hence, not only are the interactions different in strength but, under any given circumstances, only one of them is most important. Clearly, the Liebig law stands in contrast to the Commoner law. While the factorial-design experiment is a powerful and efficient tool to reduce the number of significant interactions and detect the strongest one, it has its shortcomings. The imposed shifts in food and/or predator abundance, while not completely arbitrary, may not reflect the current situation in the system. Often, for example, one of the treatments completely excludes predators despite their presence in the environment. In his 2001 review, Mark Hunter sarcastically notes that if we were to completely exclude food, this would inevitably reveal an "obvious and dramatic bottom-up effect". Of course, nobody would act that way with regard to food, but this reductio-ad-absurdum example points to a general problem: the manipulative (addition/removal) approach does not take into account the actual (rather than imposed) dynamics observed in the system. Dynamics is a fundamental feature of natural systems (Pimm 1991), implying that one driving factor, e.g. food, may quickly be replaced by another, e.g. predation, over time and space.
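The logic of a Liebig-style factorial experiment can be illustrated with a toy model; the min-of-nutrients response function and the nutrient levels here are illustrative assumptions, not values from any real experiment:

```python
def liebig_growth(n, p):
    # Liebig's law of the minimum as a toy response function:
    # growth is set by the scarcer of the two nutrients
    return min(n, p)

# 2x2 factorial design: nothing added, +N, +P, or both added
base_n, base_p = 1.0, 5.0  # nitrogen is the limiting nutrient here
treatments = {
    "control": (base_n,       base_p),
    "+N":      (base_n + 4.0, base_p),
    "+P":      (base_n,       base_p + 4.0),
    "+N+P":    (base_n + 4.0, base_p + 4.0),
}
response = {t: liebig_growth(n, p) for t, (n, p) in treatments.items()}
# Only treatments adding the limiting nutrient (N) raise growth;
# adding P alone changes nothing, contra the "everything is connected" view
```

In this sketch the factorial design singles out the one interaction that matters, which is exactly how such experiments cut the Commoner law down to size.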
The factorial-design experiment is not tuned to track these changes, whereas a truly dynamic approach might be able to do so. These considerations naturally bring us to the field of population dynamics. In the paper, we have focused on zooplankton, in particular Daphnia, a well-known model organism in ecology (Lampert 2011, see Figure), though we believe that our approach is a general one and need not be limited to zooplankton. The population characteristic we deal with is birth rate. In part, this is because planktonologists can take advantage of the Edmondson-Paloheimo model for birth rate. Interestingly, birth rate as a response variable is somewhat similar to the growth or production rates often taken as response variables in manipulation experiments, but our use of it is different. The Edmondson-Paloheimo model, slightly modified (Polishchuk 1995), relates birth rate to fecundity and the proportion of adults in the population. Fecundity is closely associated with food conditions, and the proportion of adults with size-selective predation, the latter being common in zooplankton. Thus, birth rate depends on both bottom-up and top-down effects, which is another reason why it is used here. To quantify the role of fecundity (and hence bottom-up effects) and that of the proportion of adults (and hence top-down effects) in birth rate dynamics, we employ a mathematical approach called contribution analysis (Caswell 1989, Polishchuk 1995, 1999, Polishchuk and Vijverberg 2005, Hairston et al. 2005, Ellner et al. 2011). This provides us with the ratio of the contributions of changes in the proportion of adults and in fecundity to the change in birth rate, taken as a measure of the relative strength of top-down vs. bottom-up effects. We view the ratio of contributions as a kind of measuring instrument, something like a thermometer.
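The decomposition idea can be sketched in code. The birth-rate formula b = ln(1 + F·p)/D (egg ratio factored into fecundity F and proportion of adults p, with egg development time D) and the symmetric one-factor-at-a-time splitting rule below are illustrative assumptions standing in for the modified Edmondson-Paloheimo model and the paper's exact contribution analysis:

```python
import math

def birth_rate(F, p, D):
    # Paloheimo-type instantaneous birth rate, b = ln(1 + F * p) / D,
    # where F is fecundity (eggs per adult), p the proportion of adults,
    # and D the egg development time (days)
    return math.log(1.0 + F * p) / D

def contributions(F1, p1, F2, p2, D):
    # Split the observed change in b between the change in fecundity
    # (bottom-up proxy) and the change in proportion of adults (top-down
    # proxy); averaging the two substitution orders makes the pieces
    # sum exactly to b2 - b1
    b11, b22 = birth_rate(F1, p1, D), birth_rate(F2, p2, D)
    c_F = 0.5 * ((birth_rate(F2, p1, D) - b11) + (b22 - birth_rate(F1, p2, D)))
    c_p = 0.5 * ((birth_rate(F1, p2, D) - b11) + (b22 - birth_rate(F2, p1, D)))
    return c_F, c_p

# Between two samplings, fecundity rises (better food) while size-selective
# predation removes adults; the ratio c_p / c_F then compares the top-down
# to the bottom-up contribution to the birth-rate change
c_F, c_p = contributions(F1=2.0, p1=0.5, F2=4.0, p2=0.2, D=3.0)
```

Here c_F comes out positive and c_p negative, and their ratio plays the role of the "thermometer reading" described above.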
The comparison of an ecological instrument to a physical one is, of course, a metaphor – primarily because ecological variables do not obey simple and general quantitative relations such as those used to construct physical instruments; an example is the relation describing the thermal expansion of a physical body, which underlies the functioning of the thermometer. But it is a useful metaphor, for it leads to the next task: calibrating the ratio of contributions as a tool to measure the strength of top-down vs. bottom-up effects. This calibration is based on microcosm and computer experiments, and constitutes a major part of the paper. The main experimental result is that the ratio of contributions allows one to distinguish a strong top-down effect from a strong bottom-up effect. In closing, we would like to emphasize some points not mentioned in the paper. First, while our approach focuses on population dynamics and, as such, is intended to avoid inappropriate averaging (used, though implicitly, in manipulative experiments), some time-averaging seems necessary. The ratio of contributions is found to be sufficiently robust only when applied to a set of successive sampling intervals rather than to an individual interval. (This set covers the second part of the experiments, where top-down and bottom-up effects appeared in full strength; see Online Appendix 3 of the paper.) In our experiments, this set was identified by means of ANOVA, a procedure that will not apply to field populations for lack of "replicate populations". Hence, we need to understand how to recognize, in natural populations, a set of intervals over which the ratio of contributions remains roughly constant. This will open the way to applying the approach to natural Daphnia (and other zooplankton) populations. Second, the Edmondson-Paloheimo model, when appropriately modified, has the potential to estimate birth rate in animals other than Daphnia, such as mammals.
If applied to a wider range of organisms, this approach may be a useful supplement to conventional Liebig-style factorial-design experiments.
