Every method has its advantages and disadvantages.
The USHCN database has more than 81,000,000 daily temperature readings going back to 1895. It is a scientific obscenity to attempt to “adjust” that many records. Adjustments open the door to confirmation bias or outright fraud, and they invariably make the data less meaningful. I call this “tampering.”
There is no need to “adjust” the data. With a database that large, random errors will fall about as often on the “too low” side as on the “too high” side and average out to nearly zero. Many fields of science and engineering depend on this principle.
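Here is a minimal Python sketch of that cancellation principle (the error model and record counts are invented for illustration): simulate unbiased random measurement errors and watch their average shrink as the number of readings grows. Note the cancellation only applies to random error; a systematic bias would not average out.

```python
import random

# Illustration of the cancellation argument: if measurement errors are
# random and unbiased (as likely "too low" as "too high"), their average
# shrinks toward zero as the record count grows. Error model is invented.
random.seed(42)

for n in (1_000, 100_000, 10_000_000):
    # Simulated per-reading errors, uniform on [-1, +1] degrees F.
    mean_error = sum(random.uniform(-1.0, 1.0) for _ in range(n)) / n
    print(f"{n:>12,} readings: mean error = {mean_error:+.5f} F")
```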
Using anomalies, as NOAA does, completely hides baseline shifts.
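Here is a toy two-station example (temperatures invented for illustration): when the cold station drops out of the network, the average of absolute temperatures jumps, while per-station anomalies sail smoothly past the change.

```python
# Toy two-station network; temperatures (deg C) are invented.
warm_station = [20.0, 20.1, 20.2, 20.3]   # years 1-4
cold_station = [ 5.0,  5.1, None, None]   # drops out after year 2

def network_absolute(year):
    # Plain average of whatever stations reported that year.
    vals = [s[year] for s in (warm_station, cold_station) if s[year] is not None]
    return sum(vals) / len(vals)

def network_anomaly(year):
    # Each station is compared to its own year-1 reading (its baseline),
    # so a change in which stations report barely moves the average.
    anoms = [s[year] - s[0] for s in (warm_station, cold_station) if s[year] is not None]
    return sum(anoms) / len(anoms)

for year in range(4):
    print(f"year {year + 1}: absolute = {network_absolute(year):6.2f}, "
          f"anomaly = {network_anomaly(year):+5.2f}")
```

The absolute average jumps from about 12.6 to about 20.2 when the cold station disappears; the anomaly series shows nothing but a smooth 0.1 per year.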
Infilling is exactly the wrong thing to do when station loss is biased toward the loss of colder rural stations. It simply corrupts the temperature record further.
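To see the failure mode, here is a deliberately naive sketch (all values invented, and much simpler than NCDC's actual pairwise method): estimating a lost cold rural station from its surviving warmer neighbors pulls the network mean up relative to what the real station would have reported.

```python
# Deliberately naive infill (neighbor mean of absolute temperatures);
# operational methods are more sophisticated, but the failure mode is
# the same if station loss is biased. All values are invented.
neighbors = [15.2, 14.8, 15.5]   # surviving (warmer) stations, deg C
true_missing = 9.0               # what the lost rural station would have read

infilled = sum(neighbors) / len(neighbors)          # estimate for the gap
with_infill = (sum(neighbors) + infilled) / 4       # network mean using infill
with_truth = (sum(neighbors) + true_missing) / 4    # network mean using truth

print(f"infilled estimate:            {infilled:.2f}")
print(f"network mean with infill:     {with_infill:.2f}")
print(f"network mean with true value: {with_truth:.2f}")
```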
The effect of gridding is just barely above the noise level in the US, because USHCN stations are relatively evenly spaced geographically. It isn't worth worrying about, and like the other adjustments it obfuscates and smears the data.
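For reference, gridding bins stations into lat/lon cells, averages each cell, then averages the cells. A minimal sketch (coordinates and temperatures invented): with roughly one station per cell, as in the evenly spaced US network, the gridded mean matches the plain station mean, which is why the step changes so little here.

```python
# Minimal gridding sketch: bin stations into lat/lon cells, average each
# cell, then average the cells. Station tuples (lat, lon, temp) invented.
stations = [(40.1, -90.2, 11.0), (40.7, -89.8, 11.4),
            (41.2, -88.3, 10.6), (39.5, -89.1, 12.1)]

CELL = 1.0  # cell size in degrees (assumption)
cells = {}
for lat, lon, temp in stations:
    key = (int(lat // CELL), int(lon // CELL))
    cells.setdefault(key, []).append(temp)

# One station per cell here, so gridded == plain; clustered stations
# would get down-weighted by the cell averaging.
cell_means = [sum(v) / len(v) for v in cells.values()]
gridded = sum(cell_means) / len(cell_means)
plain = sum(t for _, _, t in stations) / len(stations)
print(f"plain mean = {plain:.2f}, gridded mean = {gridded:.2f}")
```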
The best approach is the one I take (raw data only), and it is why I found the problems that other techniques mask.
No method is perfect. We know there is a systematic urban heat island (UHI) effect producing spurious warming, and possibly a systematic time-of-observation bias (TOBS) producing spurious cooling. To some extent they negate each other. But USHCN weights TOBS heavily and UHI very lightly, because their bias is to adjust the data to create warming.
Turning a long-term cooling trend into a warming trend indicates huge error bars and a signal-to-noise ratio of zero. The only acceptable approach is to leave the data alone.
I have been astonished by skeptics who agree with USHCN on this subject. How could they possibly make public statements that the USHCN software is making legitimate adjustments? They weren't even aware of the infilling issue which NCDC just discovered. It defies explanation.