The NIWA data adjusters got away with this by arguing in court that their tampering was based on standard scientific practice, citing a station that had moved to a higher elevation and therefore needed an adjustment.
The claim of “standard scientific practice” is grossly inadequate for data sets delivered to politicians to influence major policy decisions. Once scientists start tampering with data, it opens the door to confirmation bias at best, and to outright fraud at worst. Even if one adjustment was done in a way that can’t easily be proven incorrect, that doesn’t mean all other possible sources of error were accounted for. They could, for example, focus on adjustments which cool the past and largely ignore errors in the other direction, or simply weight adjustments in one direction more heavily than adjustments in the other.
The fact that nearly every single data set has been repeatedly adjusted to increase the slope of warming is overwhelming evidence that the adjustments are illegitimate, particularly since many of the tampering scientists are vocal global warming advocates with a huge conflict of interest.
The best policy (which is standard practice for large data sets) is to leave the measured data alone and carry the known sources of error as an uncertainty estimate, for example by Monte Carlo propagation of an assumed error distribution.
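As an illustration of what that looks like, here is a minimal sketch in Python with entirely made-up numbers: the raw measurements are never altered; instead, an assumed per-reading error distribution is propagated through the trend calculation by Monte Carlo, and the result is reported with an error bar.

```python
import numpy as np

# Hypothetical raw annual temperature anomalies (degrees C) -- made-up data
years = np.arange(1930, 2020)
raw_temps = np.random.default_rng(0).normal(0.0, 0.3, size=years.size)

# Assumed measurement uncertainty per reading (one standard deviation).
# This figure is an assumption for illustration, not a published value.
sigma = 0.2

# Monte Carlo: perturb the unmodified raw data with the assumed error
# distribution many times, refit the trend each time, report the spread.
n_trials = 10_000
trends = np.empty(n_trials)
rng = np.random.default_rng(1)
for i in range(n_trials):
    perturbed = raw_temps + rng.normal(0.0, sigma, size=raw_temps.size)
    slope, _ = np.polyfit(years, perturbed, 1)
    trends[i] = slope * 10  # degrees C per decade

print(f"trend = {trends.mean():.3f} +/- {trends.std():.3f} C/decade")
```

The point of the sketch is that the uncertainty ends up attached to the conclusion (the trend), while the measured data stay exactly as recorded.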
The standard needs to be that the scientists who made the decision to tamper with the data have to prove they are correct before they start tampering, not that skeptics have to prove them wrong after the tampering is discovered. Furthermore, they need to prove that their software correctly implements their adjustment specification. There is normally zero quality control when scientists start programming.
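For illustration, here is the kind of check that is trivial to write and rarely exists: a unit test pinning a hypothetical station-move adjustment routine (adjust_for_station_move, invented for this sketch) to its written specification, using the standard atmospheric lapse rate of roughly 6.5 °C per km. The names and numbers are illustrative only, not anyone’s actual code.

```python
# Hypothetical example of software quality control for a data adjustment:
# the adjustment routine is tested against its own written specification
# before it is ever applied to real data.

def adjust_for_station_move(temps_c, elevation_gain_m, lapse_rate_c_per_km=6.5):
    """Spec: after a move UP by elevation_gain_m metres, add back the
    expected cooling (lapse rate x elevation gain) so the series stays
    comparable with the pre-move record."""
    correction = lapse_rate_c_per_km * (elevation_gain_m / 1000.0)
    return [t + correction for t in temps_c]

def test_adjustment_matches_specification():
    # A 300 m rise with a 6.5 C/km lapse rate should add 1.95 C.
    adjusted = adjust_for_station_move([10.0, 12.0], elevation_gain_m=300)
    assert all(abs(a - (t + 1.95)) < 1e-9 for a, t in zip(adjusted, [10.0, 12.0]))

def test_zero_move_changes_nothing():
    assert adjust_for_station_move([10.0], elevation_gain_m=0) == [10.0]
```

Run with pytest, these tests document what the adjustment is supposed to do and fail loudly if the code drifts from the specification.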
When scientists turn a negative trend into a positive trend (like in the US data set since 1930), the adjustments are larger than the trend being claimed, a signal-to-noise ratio below one, meaning the data is worthless for establishing that trend.
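To put a number on it, with purely illustrative figures: if the raw data show -0.1 °C/decade and the adjusted data show +0.1 °C/decade, the adjustments themselves amount to 0.2 °C/decade, twice the size of the reported signal.

```python
# Illustrative arithmetic only -- these are not the actual US trend values.
raw_trend = -0.1       # C/decade, raw data
adjusted_trend = +0.1  # C/decade, after adjustments

total_adjustment = adjusted_trend - raw_trend        # 0.2 C/decade
ratio = abs(adjusted_trend) / abs(total_adjustment)  # reported signal vs. adjustment
print(f"adjustment = {total_adjustment:.2f} C/decade, signal/adjustment = {ratio:.2f}")
```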
Skeptics need to take control of the debate and stop pandering to the plausible-deniability front that the tamperers are hiding behind.