Part 2: SOM sums and some science, aka lies, damn lies and statistics.
Harper Adams would have been remiss if they had not shown us the challenges of farming regeneratively. In one module, ‘Food Sustainability and Ethics’, we analysed the UK and global food systems, and the concept of sustainability itself. We had lengthy discussions about the need for change throughout the food system, from production to consumption, and about the ways in which education, consumer habits, government policy and the media can affect agriculture.
It became apparent that while farmers may wish to farm in harmony with nature, markets and economics often work against them. The phasing out of the Basic Payment Scheme is removing a big tranche of the financial support that might have helped some farmers change their approach. The urgent need for new subsidies which prioritise the environment as well as production, such as the Environmental Land Management scheme (ELMs), made me think about writing my dissertation on the challenge of quantifying environmental benefits so that farmers can be paid properly for them. Public money for public goods!
I chose to focus on a current hot topic – the concept of a carbon market, or a subsidy for carbon capture. This idea has created a growing need for an accurate way to measure carbon on farms. Methods for measuring the carbon in trees have already been developed, and carbon credits are generated from them, but measuring the carbon stored in soil is not so simple because it is always changing. Many factors affect it – soil type, weather, microbial populations and management, to name a few – which makes accurate measurement, even within a single field, extremely difficult.
For my dissertation I decided to explore the statistical considerations surrounding soil sampling for the measurement of soil organic matter (SOM). SOM is a proxy for the amount of carbon stored in soil and comprises plant and animal material at various stages of decomposition. The microbes within the soil break this organic matter down into ever smaller fractions, which are then stored in soil aggregates – this is how carbon is ‘locked up’ in soil. Roughly 58% of SOM is carbon. Soil organic matter is often measured instead of soil carbon directly because it is a useful indicator of soil fertility, health and microbial populations as well as carbon content. Now back to my earlier comment about statistics…
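(A quick aside before we get to the statistics: if you would like the SOM-to-carbon sums spelled out, here is a minimal sketch in Python. The 0.58 factor is the ~58% figure above; the 5% SOM reading is just an invented example.)

```python
# Converting a soil organic matter (SOM) reading to an approximate
# carbon content, using the ~58% figure above. Illustrative only.

def som_to_carbon(som_percent: float) -> float:
    """Approximate soil organic carbon (%) from a SOM measurement (%)."""
    return som_percent * 0.58

print(som_to_carbon(5.0))  # a soil with 5% SOM holds roughly 2.9% carbon
```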
My hypothesis was that the current standard soil sampling methodology for SOM does not take enough samples to account for its high variability. To test this, I re-sampled a previous SOM experiment at Harper Adams in which six different organic additions – farmyard manure, slurry, green compost and so on – had been applied to plots to see which material caused the greatest increase in SOM. These were all compared with a control application of inorganic fertiliser. The previous study had concluded that there were no statistically significant differences in SOM between the organic additions – not even between the organic additions and the inorganic fertiliser.
But when I re-sampled the plot using 45 samples per application zone instead of the standard three, I found significant differences. Firstly, all organic treatments performed better than the control treatment of inorganic fertiliser. Moreover (for all of you dying to know what you should be spreading on your fields to increase SOM), green compost (compost made solely from plant material such as woodchip or grass cuttings) was the most effective addition, closely followed by farmyard manure and food waste compost. It was a radically different result.
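My field data isn’t reproduced here, but a toy simulation makes the point about sample size. The SOM means, the spread and the simple two-treatment comparison below are all invented for illustration – the question is simply how often a genuine difference gets detected at each sample size.

```python
# A toy simulation - invented SOM means and spread, not my field data -
# counting how often a real treatment difference reaches significance
# at each sample size, using a simple two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control_mean = 4.0   # hypothetical SOM % under inorganic fertiliser
treated_mean = 4.6   # hypothetical SOM % under an organic addition
field_sd = 0.9       # hypothetical within-plot variability
reps = 2000          # number of simulated experiments

for n in (3, 45):
    detected = sum(
        stats.ttest_ind(rng.normal(control_mean, field_sd, n),
                        rng.normal(treated_mean, field_sd, n)).pvalue < 0.05
        for _ in range(reps)
    )
    print(f"n = {n:2d} samples per plot: difference detected in "
          f"{detected / reps:.0%} of simulated experiments")
```

With three samples per plot, the (genuinely present) simulated difference is picked up in only around one run in ten; with 45, it is picked up nearly nine times in ten.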
Now for something a little technical. (You can skim this if it is too detailed, but it is interesting.) In statistics we have ways of testing whether an observed difference in SOM content between treatments reflects one treatment genuinely being better than another, or simply the natural variation in SOM within the soil. The ability of a test to detect a true difference between treatments is called its ‘power’. Power can be calculated and is determined by:
1. The number of groups being tested (the number of treatments)
2. Sample size (how many samples are being taken from each treatment)
3. Effect size (a standardised measure, usually between 0 and 1 in practice, chosen by the statistician to represent the size of difference between SOM values which you would like your experiment to detect. Very small differences in SOM can be significant for carbon sequestration when multiplied over a large area – see the worked example after this list – so ideally our experiment should be able to detect small but meaningful differences. The effect size for SOM experiments is usually set between 0.1 and 0.2; I chose 0.2.)
4. Significance level (the probability of making a Type I error – essentially a false positive, concluding that there is a difference in SOM increase between treatments when there is not. In statistics this is usually set at 5%, meaning there is a 5% chance of declaring one treatment better than another when it is not.)
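Here is the worked example promised in point 3 – why tiny SOM differences matter at scale. The bulk density and sampling depth below are assumed, illustrative values, not figures from my dissertation.

```python
# Back-of-envelope sums: what a 0.1 percentage-point rise in SOM means
# per hectare. Bulk density and depth are assumed illustrative values.

SOM_RISE = 0.1          # percentage points of SOM
CARBON_FRACTION = 0.58  # ~58% of SOM is carbon (see above)
BULK_DENSITY = 1.3      # g/cm3 - assumed typical arable topsoil
DEPTH_CM = 30           # assumed sampling depth

# t C/ha = (%C / 100) * bulk density (g/cm3) * depth (cm) * 100
carbon_t_ha = SOM_RISE * CARBON_FRACTION / 100 * BULK_DENSITY * DEPTH_CM * 100
co2_t_ha = carbon_t_ha * 44 / 12  # convert carbon to CO2 equivalent

print(f"{carbon_t_ha:.1f} t C/ha ≈ {co2_t_ha:.1f} t CO2/ha")
```

Under those assumed values, a rise of just 0.1 percentage points works out at roughly 2.3 tonnes of carbon – over 8 tonnes of CO2 – per hectare: exactly the kind of small-but-valuable change an under-powered sampling regime will miss.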
Once calculated, a test’s power tells you how likely it is to detect a difference when one exists. I calculated the power of the analysis when only three samples were taken per treatment and compared it to the power of the same analysis performed with 45 samples. Get ready, here comes the pièce de résistance of my whole dissertation.
The power of the previous experiment was 0.07210377, meaning that under the conditions of that experiment there was only a 7.2% chance that significant differences in SOM would be detected. Switching it round – there was a 92.8% chance that no differences would be detected when they were in fact there. In comparison, by taking 45 samples I increased the power to 0.7059873, meaning that my method of sampling had a 70.6% chance of detecting a difference where one existed and, conversely, a 29.4% chance of missing it.
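For anyone who wants to check the sums: the exact software I used isn’t shown here, but the same power calculation can be sketched with Python’s statsmodels, assuming a balanced one-way ANOVA with seven groups (the six organic additions plus the inorganic control) and the settings described above.

```python
# Power of a balanced one-way ANOVA under the settings described above.
# Assumed design: 7 groups = 6 organic additions + 1 inorganic control.
from statsmodels.stats.power import FTestAnovaPower

K_GROUPS = 7       # assumed number of treatment groups
EFFECT_SIZE = 0.2  # Cohen's f, as chosen in the dissertation
ALPHA = 0.05       # 5% significance level

for n_per_group in (3, 45):
    power = FTestAnovaPower().power(effect_size=EFFECT_SIZE,
                                    nobs=n_per_group * K_GROUPS,  # total samples
                                    alpha=ALPHA,
                                    k_groups=K_GROUPS)
    print(f"{n_per_group:2d} samples per treatment -> power = {power:.4f}")
# expected:  3 samples per treatment -> power = 0.0721
#           45 samples per treatment -> power = 0.7060
```

Under those assumptions the numbers come out almost exactly where my dissertation’s did.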
It was no surprise that I concluded that the standard soil sampling methodology used in the previous study – in fact, in most soil studies – was not sufficient for the accurate measurement of soil organic matter. Soil is incredibly variable and changeable, and if economic and time pressures mean that studies are under-sampled, the results are going to be inaccurate. If the carbon accounting for any future carbon credit and carbon-based subsidy schemes is to be at all accurate, changes to the standard are required.
The statistics generated from any experiment are only as good as the data they come from. My work showed the dangers of relying on conclusions drawn from under-sampled experiments, such as the study I re-sampled. Its conclusion – that there was no difference in the increase in soil organic matter between the various treatments – had been extremely influential: it was used by the Agriculture and Horticulture Development Board as part of an information leaflet for farmers about soil health and SOM. Leaflets like this provide the information that directs understanding of soil processes and, even more alarmingly, soil management.
What I had learnt was that just because there are statistics to support an argument, it does not mean the argument is true. My dissertation showed me how complicated soil science can be and how much more there is to learn. It also pointed me in the direction I wanted to take on the next stage of my scientific journey – to travel and find out what is going on in regenerative farming and soil carbon research around the world.