Wednesday, April 24, 2024

Questions over Overseer accuracy

An exercise to test the sensitivity of the Overseer nutrient model to the data entered into it has found a potential variance of up to 55kg nitrogen (N)/ha/year on the Lincoln University Dairy Farm (LUDF). South Island Dairying Development Centre (SIDDC) executive director Ron Pellow used the farm's real data as inputs to the Overseer model, which is increasingly being used as the go-to tool for determining whether farms will meet regulatory limits, particularly for N leaching. He presented his findings at the Fertilizer and Lime Research Centre workshop at Massey University in February.

Pellow warned that the exercise had shown how difficult, and possibly inappropriate, it is to use the model to definitively pinpoint a farm's N loss. Instead, he suggested that a range of losses, or an average loss over time, would be a more accurate description when using Overseer, which is a long-term average annual model.

He warned that variation from year to year could be far greater than the impact of specific changes themselves, so having multi-year average data was important. But the first question should be the purpose of the nutrient budget and what its outcomes are to be used for.

Depending on that use, the decision over how data is entered might vary, as could the decision over whether to average input or output data over time and create a rolling average.
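The rolling-average approach described above can be sketched in a few lines; this is a minimal illustration, not part of Overseer itself, and the annual figures used are made up for the example:

```python
from collections import deque

def rolling_average(values, window=3):
    """Yield the trailing-window mean of a sequence of annual model outputs."""
    buf = deque(maxlen=window)  # keeps only the most recent `window` values
    for v in values:
        buf.append(v)
        yield sum(buf) / len(buf)

# Hypothetical annual N-loss outputs (kg N/ha/year); illustrative only.
annual = [33, 37, 44, 35, 36]
print([round(a, 1) for a in rolling_average(annual)])
```

A trailing average like this smooths out the year-to-year swings Pellow warned about, at the cost of reacting more slowly to genuine changes in farm practice.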

In the exercise Pellow ran the model for LUDF largely using default options for data over five seasons, from 2007/08 to 2011/12.

That provided him with a base N loss averaging 37kg N/ha/year over the period, although the year-to-year range was 33kg N/ha/year to 44kg N/ha/year. He then selected 10 different scenarios where real information could be entered rather than a default, choosing each based on how likely it was that other farmers would have similar data available, and expecting each would affect the results and their relevance.

They included inputs such as pasture quality and N concentration in pasture, actual irrigation volumes, monthly cow numbers, increasing the effluent area to cover the whole farm, actively managing irrigation, milk components and clover levels.

He then ran the model for each of the seasons, this time changing one of the factors for which he could enter farm-specific data. He also ran it using farm-specific data for as many inputs as he had good data for.
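The one-factor-at-a-time method Pellow used can be expressed generically. The sketch below is purely illustrative: `run_model` is a hypothetical stand-in with a toy linear response, not Overseer, and the input names and numbers are invented for the example:

```python
# One-at-a-time sensitivity sketch. run_model() is a toy stand-in for a
# nutrient model run; its linear response is invented for illustration.
def run_model(inputs):
    return (37
            + 2.0 * (inputs["irrigation_mm"] - 500) / 100   # more water, more loss
            - 0.5 * inputs["effluent_area_pct"] / 10)       # wider spread, less loss

defaults = {"irrigation_mm": 500, "effluent_area_pct": 20}   # model defaults
farm_data = {"irrigation_mm": 700, "effluent_area_pct": 100} # farm-specific values

base = run_model(defaults)
for key, value in farm_data.items():
    scenario = {**defaults, key: value}  # swap in one real input at a time
    delta = run_model(scenario) - base
    print(f"{key}: {delta:+.1f} kg N/ha/year vs default")
```

Changing one input per run, as here, isolates each factor's contribution; the final all-farm-data run Pellow describes corresponds to passing `farm_data` in full.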

Pellow said some specific inputs made little difference. For instance, somewhat surprisingly, increasing the effluent area from 20% of the farm to the whole area reduced N loss to water by only 1-2kg N/ha/year. But some had a much larger impact.

Selecting active irrigation management (which requires no overlapping of irrigation, use of soil moisture monitoring and no irrigation within five days before rain) predicted N losses would decrease from 37kg N/ha/year to 30kg N/ha/year averaged over the five seasons, although it was as low as 26kg N/ha/year in one season.

If he used as much farm-specific data as possible, apart from the actual amount of irrigation water used each year, N loss fell to 22kg N/ha/year averaged across all five seasons, and was as low as 19kg N/ha/year in one season.

But, alarmingly, some inputs increased N loss dramatically. If actual irrigation volumes per year were included, along with as much farm-specific data as possible, N loss leapt to 54kg N/ha/year averaged over the five seasons, and in some individual years reached as high as 68kg N/ha/year. If the defaults were used and only rainfall was adjusted, to 858mm/year rather than the National Institute of Water and Atmospheric Research (NIWA) default of 593mm/year, N loss could be as high as 74kg N/ha/year.
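The headline variance can be reproduced from the figures quoted above; a minimal sketch, where the numbers are those reported in the article and the labels are paraphrased for readability:

```python
# Reported Overseer N-loss results for LUDF (kg N/ha/year), as quoted above.
results = {
    "defaults, 5-season average": 37,
    "defaults, lowest season": 33,
    "defaults, highest season": 44,
    "active irrigation management, average": 30,
    "farm-specific data excl. actual irrigation, lowest season": 19,
    "farm-specific data incl. actual irrigation, highest season": 68,
    "defaults with actual rainfall (858mm)": 74,
}

lo, hi = min(results.values()), max(results.values())
print(f"range: {lo}-{hi} kg N/ha/year, spread: {hi - lo} kg")
# The 74 - 19 = 55kg spread matches the variance quoted in the opening paragraph.
```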

Pellow asked whether the industry was prepared to accept this range of variability, and said significant thought needed to go into how it was dealt with and how inputs into the model were reported.

“Should the results have been averaged over time bearing in mind it is a long term average model with long term data behind the model calculations?”

Consistency was the important factor, both between years and between farms, and the more the outputs were used for benchmarking, the more consistent the decision rules over model inputs needed to be.
