Why My Approach Is (By Far) The Best Approach

Every method has its advantages and disadvantages.

The USHCN database has more than 81,000,000 daily temperature readings going back to 1895. It is a scientific obscenity to attempt to “adjust” that many records. Adjustments open the door to confirmation bias or outright fraud, and will invariably make the data less meaningful. I call this “tampering.”

There is no need to “adjust” the data. With a database that large, random errors will be distributed between “too low” and “too high” and will average out to near zero. Many fields of science and engineering depend on this principle.
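That averaging-out principle is the law of large numbers, and it is easy to demonstrate. Here is a minimal sketch with simulated readings (not actual USHCN data); note that it only holds for random, unbiased errors, not for systematic biases:

```python
import random

random.seed(42)  # reproducible simulated errors

def mean_error(n):
    # n unbiased reading errors, uniform between -1.0 and +1.0 degrees
    errors = [random.uniform(-1.0, 1.0) for _ in range(n)]
    return sum(errors) / n

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} readings: mean error {mean_error(n):+.4f}")
```

With a million readings the mean error is a few thousandths of a degree, which is the sense in which random errors wash out of a large database.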

Use of anomalies completely hides baseline shifts, as NOAA has done.

Infilling is exactly the wrong thing to do when station loss is biased towards the loss of colder rural stations. It simply corrupts the temperature record further.

Gridding is just barely above the noise level in the US, because the USHCN stations are relatively evenly spaced geographically. It isn’t worth worrying about, and, like the other adjustments, obfuscates and smears the data.

The best approach is the one I take (raw data only), which is why I found the problems that other techniques mask and make invisible.

No method is perfect. We know there is systematic UHI producing spurious warming and possibly systematic TOBS producing spurious cooling. To some extent they negate each other. But USHCN heavily weights TOBS and very lightly weights UHI, because their bias is to adjust the data to create warming.

Turning a long-term cooling trend into a warming trend indicates a huge error bar and a signal-to-noise ratio of zero. The only acceptable approach is to leave the data alone.

I have been astonished by skeptics who agree with USHCN on this subject. How could they possibly make public statements that USHCN software is doing legitimate adjustments? They weren’t even aware of the infilling issue which NCDC just discovered. It defies explanation.


99 Responses to Why My Approach Is (By Far) The Best Approach

  1. John Goetz says:

    If NOAA kept a static version of their adjustments they would not have as many problems. For example, when data comes in each month and fails their quality control algorithms, they could manually inspect it and the original station reports, and save it to a static file for use by the GHCN crunchers. Likewise, if they make an adjustment or perform an infill, it is done that one time and saved to the static file.

    The problem that they run into is that their algorithms look at the entire record from beginning to present. “Present” changes every month, which means the input to the algorithm changes every month, and hence the output changes every month.
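    A toy illustration of that feedback (this is not NOAA’s actual algorithm, just a sketch of why any algorithm that reprocesses the entire record each run will rewrite the past every time the present grows):

```python
def adjusted(record):
    """Toy homogenization: express each value as an anomaly from the
    mean of the ENTIRE record to date. The baseline moves whenever the
    record grows, so every past output changes too."""
    baseline = sum(record) / len(record)
    return [round(t - baseline, 2) for t in record]

history = [10.0, 12.0, 11.0]
print(adjusted(history))   # [-1.0, 1.0, 0.0]

history.append(20.0)       # one new month arrives...
print(adjusted(history))   # [-3.25, -1.25, -2.25, 6.75] -> the past changed
```

Freezing the processed values to a static file, as John suggests, breaks exactly this dependence of old outputs on new inputs.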

  2. Brian G Valentine says:

    Don’t tell me, tell Tom Karl. See where that gets you.

    The only thing that will stop corrupt data sets is the introduction of non corrupt political Executive office appointments. If you think these are bad, contemplate what Hillary would introduce.

    • Chewer says:

      Hildebeast didn’t move to New York because she wanted to be the fine New Yorkers’ Senator; she did it because it is the home of the U.N., which has been her ultimate goal!

  3. Ed Barbar says:

    If you think there is an urban heat island effect, and cities grow up around where people (thermometers) are, then this doesn’t make a lot of sense. Not adjusting will increase temps artificially because of the effect.

    Meanwhile, it would be interesting to take a different temperature record, such as UAH, compare a baseline of years, say 1980–2000, to NOAA US temperatures in the same period as they were reported in 2000, and determine the delta to NOAA US temperatures for the same period as NOAA reported them in 2014.

    This will show any trend in adjustments.

    I was going to do it, if only I knew where the NOAA reported results were for 2000.
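    The comparison Ed describes is simple arithmetic once both vintages are in hand. A sketch with entirely made-up illustrative numbers (the values below are hypothetical, not real NOAA or UAH figures):

```python
# Hypothetical annual US mean anomalies (deg C) for the same years,
# as they might appear in two different report vintages
reported_in_2000 = {1998: 1.24, 1999: 0.61, 2000: 0.40}
reported_in_2014 = {1998: 1.10, 1999: 0.55, 2000: 0.42}

# Per-year delta: how far each year's value moved between vintages
deltas = {y: round(reported_in_2014[y] - reported_in_2000[y], 2)
          for y in reported_in_2000}
print(deltas)

# A consistent sign or slope in the deltas is the "trend in adjustments"
mean_delta = sum(deltas.values()) / len(deltas)
print(f"mean adjustment: {mean_delta:+.3f} deg C")
```

The hard part, as Ed notes, is locating an archived copy of the figures as NOAA reported them in 2000.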

  4. baconman says:

    I’m glad you brought up the subject of error. Where are the error bars listed with all the temp adjustments? Is it at all possible that making TOBS adjustments as well as in-fill adjustments leaves the temperature just perfect, kind of like adding a couple of shims to a door to level it? Perhaps if they added the error bars we might all see that there hasn’t been much movement up or down for the last 100 years.

  5. Chewer says:

    Raw is right, and that’s what fellow cellmate Gyro Jerry will tell Mann and his fellow travelers. ;-)

  6. Anto says:

    “Gridding is just barely above the noise level in the US, because the USHCN stations are relatively evenly spaced geographically.”

    Hu McCullough once explained nicely:
    “Hansen/GISS gridding, as I understand it, is equivalent to assuming a constant covariance throughout each 5° “square”. Missing squares are then infilled if they are within 1200 km (about 10.8 deg) of at least one gridsquare with at least one reading, with the equal-weighted average of those gridsquares. This is equivalent to assuming a constant, positive covariance between gridsquares that are within 1200 km of one another.

    However, since occupied gridsquares are not influenced by nearby occupied gridsquares, this constant inter-gridsquare correlation must be essentially zero (say .01). Also, since the influence of a gridsquare on its neighborhood does not increase with the number of stations it has, there must not be a nugget effect, so that the effective intra-gridsquare correlations are effectively assumed to be 1.00.

    Furthermore, since the 5° “gridsquares” in fact become narrow wedges in northern and southern latitudes, the distortions become more extreme near the poles — stations in Scandinavia that are relatively clustered get disproportionately little weight, while an isolated station in Siberia or Greenland gets inordinately high weight.”
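    A minimal sketch of the equal-weighted, within-1200-km infill described above (toy coordinates and made-up anomaly values, not the actual GISS code), which also shows how a single isolated station ends up carrying a whole region:

```python
from math import radians, sin, cos, asin, sqrt

def km(lat1, lon1, lat2, lon2):
    """Great-circle distance via the haversine formula (Earth radius 6371 km)."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def infill(cell, occupied, radius_km=1200):
    """Equal-weighted average of occupied cells within radius_km;
    None means the cell is left empty."""
    lat, lon = cell
    vals = [v for (clat, clon), v in occupied.items()
            if km(lat, lon, clat, clon) <= radius_km]
    return sum(vals) / len(vals) if vals else None

# Made-up anomalies (deg C): two clustered cells in Scandinavia,
# one isolated cell in Siberia
occupied = {(69.0, 20.0): 0.4, (68.0, 24.0): 0.6, (68.0, 90.0): 1.5}
print(infill((70.0, 22.0), occupied))    # near the cluster -> 0.5
print(infill((68.0, 80.0), occupied))    # lone Siberian cell in range -> 1.5
print(infill((40.0, -100.0), occupied))  # nothing within 1200 km -> None
```

The second call is the Siberia case: one station’s value is copied over an empty cell hundreds of kilometres away, at full weight.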

    • Dave N says:

      ..and they call skeptics “flat-earthers” ;-)

    • Bob Greene says:

      I’ve lived in SW Michigan and Central VA, about 1200 km separation. I really don’t think one would be a legitimate proxy for the other.

    • JP says:

      The problem I see with gridding is that there are areas where the grid crosses 2 or more distinct air mass types. For instance, if a grid contained the reporting stations of San Francisco, Travis AFB, and Palmdale, California, how does one extrapolate a representative temperature? San Francisco could be 49 deg F with a dewpoint of 45, Travis AFB 90 with a dewpoint of 55, and Palmdale 118 with a dewpoint of 5.

  7. Anto says:

    Just a bit more on the infilling issue for a moment. This from Phil Jones in the Climategate emails:
    “> Another option is to use the infilled 5 by 5 dataset that Tom Smith
    > has put together at NCDC. All infilling has the problem that when there
    > is little data it tends to revert to the 1961-90 average of zero. All
    > infilling techniques do this – alluded to countless times by Kevin
    > Trenberth and this is in Ch 3 of AR4. This infilling is in the current
    > monitoring version of NCDC’s product. The infilling is partly the reason
    > they got 2005 so warm, by extrapolating across the Arctic from the
    > coastal stations. I think NCDC and the HC regard the permanent
    > sea ice as ‘land’, as it effectively is.”

    And, strangely, in a later post Jones makes a point with which I entirely agree:
    “Tom P told me that they don’t infill certain areas in early decades, so there is missing data. Tom P isn’t that keen on the method. He rightly thinks that it discourages them from looking for early data or including any new stuff they get – as they have infilled it, so it won’t make a difference. It won’t make a difference, but that isn’t the point.”

    Shock! And again:
    “Infilling gives you more boxes, which generally have higher T anomalies than the average of the rest of the hemisphere, so leads to the NCDC/NOAA analysis appearing warmer.”

    • Tom In Indy says:

      Those are powerful statements, confirming that they know the adjustments they make will bias the results to the warm side.

      Our tax dollars at work.

  8. Saying your approach is the best approach doesn’t make it the best approach. If there are systematic biases in the data set they might need to be adjusted for. For example, UHI would introduce a systematic bias. To declare that there aren’t any systematic biases because… because you say so, isn’t much of an argument. That doesn’t mean I don’t appreciate the point you’re making, but such a claim is still going to get shredded on the more serious technical blogs.

    • I explained the reasons why the “more serious technical blogs” have no clue what they are doing. I’m sorry you didn’t understand that part, or read the word “Why” in the title. I can’t help you there.

      Applying precision to a methodology with no accuracy is one of the worst mistakes a technical person can make.

      • Perhaps you should reread what I wrote and try again.

        • Perhaps you should reread what I wrote and try again

        • Tom In Indy says:

          Will, I appreciate what you are saying, but if the “experts” cannot quantify the true UHI effect, then the application of a systematic adjustment makes no sense. They give lip service to UHI, point to their tiny and simple adjustment, then claim “problem solved!” Every adjustment they make provides them with an opportunity to fiddle with the numbers until they get output that “looks right” according to their bias.

        • Let me phrase this another way. Satellites take millions of measurements. According to Steve’s claim there is no need to adjust the data, e.g. for satellite drift, because everything is just gonna “average out”. Obviously, if you didn’t adjust the satellite data for drift, it would be futile to even attempt to interpret the data.

          The take home message here is that Steve’s argument is stupid. Which is disappointing, because Steve is a very smart guy.

          A side observation: In several posts you’ve presented open challenges to debate critics. But if your response right off the bat is to go off topic, or to declare that people should just re-read what you wrote (presumably because if someone doesn’t agree with you it can only be because they don’t understand you), then these challenges aren’t going to work for you.

          Anyway, please interpret my criticisms not as a personal attack, but coming from someone who is interested in helping you improve the quality of your arguments. You’re obviously doing some very interesting research and if it was up to me, I’d like you to do more of that sort of thing.

        • Will,

          I’m not talking about satellites, but I do appreciate that your mind is drifting.

        • Wow, no wonder your critics are scared to debate you. ;-)

        • I discussed systematic biases like UHI and potentially (but bizarrely) TOBS

          Did you not bother to read that, or do you normally like to repeat straw man arguments?

        • Do you read your own posts? Here is your claim:

          “The USHCN database has more than 81,000,000 daily temperature readings going to 1895. It is a scientific obscenity to attempt to “adjust” that many records.”

        • read the article.

        • Address this claim. Stop trying to defend yourself by pointing to somewhere where you’ve made a different or opposite claim. That doesn’t make the claim we’re debating correct. Anyway I don’t want to be as pig headed as you, so this is my last comment on this particular post.

        • Another reading-challenged WUWT regular

        • Latitude says:

          ..that was an amazing conversation

          Will, are you alright?

  9. Wyguy says:

    The temperature data is so corrupt that it is impossible to ever know what the true temperature really was or is. Steven has really gotten the ball rolling on this farce; maybe people will begin to realize that the temperature records are mostly worthless. Nature will give the temperature she wants, when she wants, and there is nothing we puny humans can do to ever change it more than a tiny amount.

    • On the other hand, the Standard Atmosphere (such as it is, more or less a mean for mid-latitudinal vertical temperature distribution) has been essentially unchanged for almost a century (the idea for that stable vertical atmosphere goes back to the mid-19th century), gives 288K as the global mean surface temperature (higher than what climate scientists say it is today, despite a century of supposed warming), and is precisely confirmed by my famously neglected Venus/Earth temperatures comparison. So (of course) I agree with you that the temperature records are basically worthless, but I would disagree that it’s impossible to know what the true temperature was, and still is. And I would say Nature cannot give the temperature “she” wants, “she” has to provide the temperature(s) “she” was designed to give (oh yes).

    • Scott Scarborough says:

      Addressed to Will Nitschke,
      Satellite records do not go back to 1895. If they did, and were in as poor a shape as the surface temperature records, they would be as worthless as the surface temperature records and any adjustments would be as meaningless as the ones performed by NOAA. I have heard it said on this site before: you can’t make a silk purse out of a sow’s ear. You can adjust anything if you know what you are doing. You can’t if you don’t. Changing temperatures that were recorded before your grandfather was born, pretending you have such intimate knowledge of what went on then, is foolish.

  10. tallbloke says:

    Reblogged this on Tallbloke's Talkshop and commented:
    ‘Steve Goddard’ defends his approach to checking the temperature data.

    • omnologos says:

      tallbloke – imagine when we will be old and we will wonder why, when still young, we were wondering if one should trust raw or differently adjusted data.

      I hope I will think of my younger self as raving mad.

      • Anto says:

        Better yet, we will wonder what possessed an entire organisation to assume that a paper written by a man in 1986 was a good and sound reason to alter another man’s observations made in 1920.

        • Better yet, we will wonder what possessed an entire civilization to attempt a suicide.

        • _Jim says:

          “… what possessed an entire civilization to attempt a suicide.”

          Something quite beyond garden-variety neurosis * but not quite all the way into a psychotic condition either: (to wit) White Guilt. Usually induced by leftist politicians, greenies and eco-loons as ‘leverage’ to sway public opinion (and votes) in lieu of cogent, rational argument.


          * neurosis – a relatively mild mental illness that is not caused by organic disease, involving symptoms of stress (depression, anxiety, obsessive behavior, hypochondria) but not a radical loss of touch with reality.


  11. A C Osborn says:

    Will Nitschke says:
    July 2, 2014 at 7:03 am

    Saying your approach is the best approach doesn’t make it the best approach. If there are systematic biases in the data set they might need to be adjusted for. For example, UHI would introduce a systematic bias.

    But all you have to do is calculate the UHI Bias and make the statement that it is there, job done.

    • Anto says:

      Correct. It is far easier to realise that a bias exists in the present day, design an experiment to determine that bias, and then make the adjustments to correct for it.

      However, today’s keepers of the record seem far more concerned with adjusting presumed past biases, which cannot be measured, and which adjustments are based on assumptions which can never be tested against reality.

      • Brian H says:

        Yes, and the more you read contemporaneous documentation and commentary, the less justification there is for the “bias” assumption. I.e., our ancestors were far more on the ball than the adjusters assume.

  12. A C Osborn says:

    Steve, what makes you think they were Sceptics in the first place?

  13. Chip Bennett says:

    There are two different issues at play here:

    1. Adulterating the raw data (i.e. “adjusting” without retaining the original data)
    2. Incorrect analysis methodologies

    The first issue is fundamentally more egregious, because it prevents alternative analysis of the raw data, and in most industries not named “climate science” would be considered to be fraud (and in my industry, pharmaceutical, would be considered criminal fraud).

    So, no matter how anyone wants to slice, dice, or smear the data: go right ahead. Just state that what you’re presenting is a modification of the raw data, and present it side-by-side with the raw data, with a thorough justification for the data manipulation.

    But this post deals more with the second issue: a comparison of analysis methodologies – and Steve is absolutely correct. The UHI effect is observable and reproducible. TOBS error from a century ago is purely theoretical: not observable, and not reproducible. I would argue that it is, however, falsifiable – unless one would have us believe that the dust bowl was created by poor temperature-measurement practices.

    UHI is something that could be studied and understood quite easily. Instead of spending half a billion dollars on a CO2 satellite, spend that money instead on a tighter measurement grid around various urban locations, and see how the temperatures change across the grid (i.e. grid/measure the “a few degrees colder in outlying areas” anecdotal observations).

    But instead, what we see is actual UHI warming correction being applied to correct for theoretical TOBS cooling, and theoretical TOBS cooling correction being applied to correct for actual UHI warming.

  14. Ron C. says:

    What Goddard and Homewood and others are doing is a well-respected procedure in financial accounting. The Auditors must determine if the aggregate corporation reports are truly representative of the company’s financial condition. In order to test that, a sample of component operations are selected and examined to see if the reported results are accurate compared to the facts on the ground.

    Discrepancies such as those we’ve seen from NCDC call into question the validity of the entire situation as reported. The stakeholders must be informed that the numbers presented are misrepresenting the reality. The Auditor must say of NCDC something like: “We are of the opinion that NCDC statements regarding USHCN temperatures do not give a true and fair view of the actual climate reported in all of the sites measured.”

    • _Jim says:

      … then comes the “restatement” …

      Restatement – definition: The revision and publication of one or more of a company’s previous financial statements. A restatement is necessary when it is determined that a previous statement contains a material inaccuracy. The need to restate financial figures can result from accounting errors, noncompliance with generally accepted accounting principles, fraud, misrepresentation or a simple clerical error. A negative restatement often shakes investors’ confidence and causes the stock’s price to decline.


      from investopedia

  15. Bob Greene says:

    You might like the reason for infilling given in the link. Take 5 numbers 1,2,3,4,5 and average them. Remove 2 and you get a big difference. So, hence, ergo and therefore, you must infill and grid.

    • Robert Austin says:

      Bob Greene,
      You really think that it is as simple as that or that izuru knows what he is talking about?

    • _Jim says:

      Simplistic approach, seemingly overlooking the fact that the 2-D spatial relationship between actual temperature measurement stations SHOULD make a difference, TO WIT, further north colder, except as modified by warm coastal currents or cold coastal currents even …

      Are any allowances made for the 2-D relationship stations bear in relationship to each other?


    • Beale says:

      Even the example is nonsense. Removing 2 from the set 1 to 5 changes the average from 3 to 3.25. This isn’t a big difference, considering that you started with only five data items.

    • Scott Scarborough says:

      Temperatures are not usually 5 times as high in one place as another, like the sequence 1,2,3,4,5. Try reasonable temperatures like 59.3, 60.1, 58.8, 61.0,60.5. Take one out of the average and it won’t make a big difference.
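      Scott’s point can be checked directly; a quick leave-one-out sketch using his example temperatures:

```python
temps = [59.3, 60.1, 58.8, 61.0, 60.5]

def mean(xs):
    return sum(xs) / len(xs)

full = mean(temps)  # about 59.94
for i, t in enumerate(temps):
    # Recompute the mean with one reading removed
    shift = mean(temps[:i] + temps[i + 1:]) - full
    print(f"drop {t}: mean shifts {shift:+.3f}")
```

Every leave-one-out shift is under 0.3 degrees (under half a percent of the mean), versus the roughly 8% jump from dropping the 2 in {1, 2, 3, 4, 5}.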

    • Latitude says:

      guys, you’re not trying to win the lottery…
      ….you’re just getting a trend
      remember, it’s about a trend

      • Bob Greene says:

        Right, this is not the lottery. This is the settled science that is so good it is beyond debate and we are using it to redo the whole global economy. I expect a much higher standard of science than I’m seeing.

  16. Jerry Lundry says:

    I was surprised to read infilling has only just been discovered. I first downloaded USHCN data sets in 2010. Infilling was described at that time. Also at that time, my impression was that infilling had been used for years, and that great care had been taken to get it right, to document (in downloadable form) precisely what had been done, and to provide both raw data and corrected data in steps v1 and v2, so that anyone who didn’t like the corrections could either use the raw data or corrections of their own choosing. I was impressed (and still am) by the rigor of the documentation — in short, good science in the sense one can replicate the correction processes, if desired, in order to check what was done.

  17. Doug Proctor says:

    You have to do something when you do not have an equal distribution of data, or of the microclimates that control the data you are gathering, or a stable data distribution network, or a stable microclimatic distribution. Adjustments and the creation of a stable data distribution “grid” are what you have to do, even if the result is not truly representational of what is going on in the World.

    Is the government analytical style appropriate and stable? Will it tend to adjust up or down (duh!) when faced with an equally valid decision? I’d say not, but I’d also say it is not important IF THE RESULT YOU ARE LOOKING FOR IS LARGE ENOUGH!

    That is the problem right there: the unassailable warming is within the error bars and adjustment levels. The anomaly is smaller than the uncertainty, so everything we do to “help” get a clearer picture risks distorting the picture instead.

  18. gofer says:

    It is impossible that temperatures can be accurately determined down to tenths of a degree spanning decades. People only care about the temps where they live. That is why regional temps are more accurate and relevant than global. Tenths of degrees of difference would actually show a remarkably stable climate. You can keep your home that stable; imagine a house built in 1895 and the changes in homes since then.

  19. Ron C. says:

    The several USHCN samples analyzed so far show that older temperatures have been altered so that the figures are lower than the originals. For the same sites, more recent temperatures have been altered to become higher than the originals. The result is a spurious warming trend of 1-2F, the same magnitude as the claimed warming from rising CO2. How is this acceptable public accountability? More like “creative accounting.”

  20. Broadlands says:

    Steve has never contacted NCDC about their “tampering”, but if he had he might have found at least a rudimentary understanding of the changes. Yes, there are many problems, but his use of the word “tampering” with its implication of conspiracy and fraud seems uncalled for. One can still criticize their methods and their conclusions without such language.

    Herewith their rationale and their attempts to make the latest changes clear.

    We switched to a new dataset in March 2014. It uses a much larger base of stations than the previous dataset, especially at the state level. It incorporates the topography of each state, rather than a simple average of the stations within the state (the whole record for high-terrain states will be cooler, because those high-terrain places are now represented).

    We announced this change at the July 2011 American Meteorological Society’s Applied Climatology Conference: https://ams.confex.com/ams/19Applied/webprogram/Paper190591.html
    We published its methodologies in the Journal of Applied Meteorology and Climatology: http://dx.doi.org/10.1175/JAMC-D-13-0248.1
    We provided an explanation of the change on line months in advance at: http://www.ncdc.noaa.gov/monitoring-references/maps/us-climate-divisions.php
    We provided a tool, months in advance of adoption, to allow users to compare and understand differences at the state, regional and national level: http://www.ncdc.noaa.gov/temp-and-precip/divisional-comparison/
    We provided a webinar in early 2014: http://www1.ncdc.noaa.gov/pub/data/cirs/climdiv/nCLIMDIV-briefing-cirs.pdf
    We notified several prominent applied climatology groups, including the American Association of State Climatologists
    We provided a notification of the change on its monthly climate reports in the months before and after the transition. See: http://www.ncdc.noaa.gov/sotc/national/2014/2
    We wrote a comprehensive “Frequently Asked Questions” about the transition with the February 2014 monthly climate report: http://www.ncdc.noaa.gov/sotc/national/2014/2/supplemental/page-5/
    We included a notification of the change in the press highlights document accompanying the February 2014 report
    and we provided technical updates in the weeks leading to the transition: http://www1.ncdc.noaa.gov/pub/data/cirs/div-dataset-transition-readme.txt (read from bottom up for chronology)

    • I’m quite familiar with those, and I contact NCDC almost every day on twitter. But thanks!

      I don’t find turning an 80 year cooling trend into a warming trend to be credible. If the error bar is that large, the trend and any corrections are meaningless.

    • Dougmanxx says:

      They change the “data” every day. They’re blowing a lot of smoke:

      06_26_2014 USH00336196 1936 -678E -768E 379E 560E 1626 1922 2287 2253a 1948 1135 173 88a 0
      06_27_2014 USH00336196 1936 -678E -768E 379E 560E 1626 1922 2287 2253a 1948 1135 173 88a 0
      06_29_2014 USH00336196 1936 -700E -793E 357E 539E 1605 1901 2266 2232a 1927 1114 152 67a 0
      07_01_2014 USH00336196 1936 -702E -795E 355E 537E 1604 1900 2265 2231a 1925 1112 150 66a 0
      07_02_2014 USH00336196 1936 -698E -792E 358E 539E 1606 1902 2267 2233a 1928 1115 153 68a 0

      RAW USH00336196 1936 -555a -646 518 680 1695 1983 2333 2317a 2010 1205 240 144a 0

      They literally have a different temperature EVERY DAY for past “measurements”. The station I show is Oberlin, Ohio in 1936. They have cooled the summer and warmed the winter months. If you look at the handwritten records, the station keeper noted times of readings and various other notes. He was very detailed and conscientious. And yet… they “estimate” months for which entire monthly records exist. Why? The software is hopelessly broken.
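      For what it’s worth, the vintage-to-vintage drift in those rows is easy to quantify. This sketch assumes the columns are monthly means in hundredths of a degree (the usual USHCN monthly convention) and strips the E/a flags:

```python
# Monthly values for USH00336196, 1936, copied from two of the
# vintages quoted above (06_26_2014 and 07_02_2014), flags stripped
v_0626 = [-678, -768, 379, 560, 1626, 1922, 2287, 2253, 1948, 1135, 173, 88]
v_0702 = [-698, -792, 358, 539, 1606, 1902, 2267, 2233, 1928, 1115, 153, 68]

# How far each month's "historical" value moved in six days, in degrees
drift = [(b - a) / 100 for a, b in zip(v_0626, v_0702)]
print(drift)
```

Every one of the twelve months cooled by roughly 0.2 degrees between those two vintages, for a year that ended 78 years earlier.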

      • Dougmanxx says:

        Furthermore: the “a” tag is reserved for months that are missing at least one day’s temperature readings, but in the case of this station… the months tagged have NO missing data. I checked the scanned handwritten records because I was curious. And yet… every month tagged “a” has complete data. What’s up with that?

      • Broadlands says:

        Ok Doug… how would YOU recommend that it be done? Where will you go to tell us what the untampered data are, and how they should be adjusted, or not? Where are the sources of the REAL data? What hours and temperatures should be used?

    • Anto says:

      Actually, “tampering” is a perfect description of what is being done.
      “To make changes in something, especially in order to falsify; eg. to tamper with official records.”

  21. Dougmanxx says:

    Broadlands says:
    July 2, 2014 at 8:45 pm

    A good start would be if you had “an” adjustment. It’s just that. Right now there is no such thing, because EVERY DAY past data is changed. Change it once? I might buy it. But continually changing it, to different numbers EVERY DAY of the year? LOL. Get real. That’s not an “adjustment”, that’s a good ole “fudge factor”. We used to call it a “bug”, and then a “feature” once the users discovered it. The data is right there. RAW. USE IT. Stop screwing around with an “algorithm” that makes you look dishonest.

    I would also say: the idea that we “know” the average temperature of the World, or even the US is a fairy tale. The only data that MIGHT allow us to say that is the satellite record. The land record? LOL. Get real, you can’t fix something that badly broken. So, be a scientist and admit: “we don’t know”.

    • Broadlands says:

      A “real” average temperature is a fairy tale, so why bitch about “tampering”? Then you say the RAW data is right there so USE IT. Where are these raw data Doug? Who has collated them? Who will remove obvious outliers? Who will decide which times of the day to use and what temperatures to use? You act like this is something that has only recently been discussed and considered.

      H.H. Clayton, errata

      If you look at the World Weather Records (two decades worth) you will find numerous places, worldwide with missing dates and missing values. You will find many different ways to calculate a daily mean or monthly mean.

      Where are the RAW data? What is your definition of RAW?

      • I remove the obvious outliers.

        • Broadlands says:

          Same data set Steve? Where is this RAW data set? NCDC didn’t even exist so you must be using their “tampered” RAW data? Where are the RAW data?

      • Anto says:


        They went way beyond reasonable adjustments long ago. Now, everytime a bright spark comes up with some perceived bias which might have been present 80 years ago, they write a piece of code to change the recorded data to take it into account.

        They are now adjusting up, down, sideways for so many different things and in fractions of a degree with only theoretical (assumed) knowledge of the biases that the entire process and the output becomes garbage.

        When your process is garbage, you are much better off going back to square one (in this case, the RAW – ie. reported) data and starting again.

        As Steve has pointed out many times, the climate establishment starts from the assumption that all previous observers were morons, using substandard instruments.

        • Broadlands says:

          Anto, Doug, Steve… What is the source of the RAW, reported data? Where does Steve go to get his RAW reported data to “remove the obvious outliers”? Simple question. I would like to look at the “square one” RAW data… where is this RAW data (untampered, of course) to be found? From 1895–2013, since Steve denigrates NCDC in the US 48 states in particular.

        • I compare HCN raw daily and USHCN raw monthly vs. USHCN final monthly and NCDC reported temps.

        • Broadlands says:

          Apparently, Steve’s definition of RAW data is NCDC’s current data, daily and monthly measurements… all derived from each US state… 48 different sources. From these 48 sources he can remove outliers… the ones they missed? Ok… But that’s not the question. Where are the daily, monthly RAW data for PREVIOUS periods, for the 48 states in years before about 1983? For 1921 or 1934, two of the warmest years on record, or 1917, the coldest year on record? Where are these RAW data?

        • It is the same data set going back to 1895 and earlier.

      • _Jim says:

        Broadlands July 2, 2014 at 9:27 pm
        A “real” average temperature is a fairy tale, so why bitch about “tampering”? Then you say the RAW data is right there so USE IT. Where are these raw data Doug? Who has collated them? Who will remove obvious outliers? Who will decide which times of the day to use and what temperatures to use? You act like this is something that has only recently been discussed and considered.

        This looks like a highly disingenuous post; I am highly offended that you would construct such obvious ‘strawmen’ and do it so boldly. Geesh. I hope I am not paying your salary …

        • Latitude says:

          Broadlands is paid by the Koch brothers…..to make believers look bad

        • Broadlands says:

          Ah yes, the usual responses when losing an argument… be highly offended and try to denigrate the messenger while avoiding addressing the message. Where are the RAW data… back to square one? Where can you go to find RAW data? What is your definition of RAW data?

        • _Jim says:

          Are you serious?

        • Latitude says:

          Is there a course on this in some school? They all sound alike and juvenile.

      • kuhnkat says:

        Broadlands, Thanks for supporting Steve by showing that there may no longer BE real raw data beyond what we find on the scanned temp sheets!!

        Since you don’t seem to know where the raw data is either we can tell NOAA and NCDC to quit making it all up right?!?!

      • Dougmanxx says:

        So you prove my point: the surface record is hopelessly flawed. Why can’t you admit that? Instead you feel the need to torture it into something it isn’t. Be a scientist. Admit the flaws. Admit you don’t know. Admit the data available in the temperature record does not allow you to do what you wish it could. STOP torturing, twisting, manipulating, folding, spindling and mutilating what data there is. If you feel you MUST use the surface data: stop using processes which make you look like LIARS.

        I don’t know about you, but for me, yesterday’s weather doesn’t change anymore. Yesterday is…. done. Unless there really is a secret time machine in a NOAA warehouse somewhere, all these changes just make you look dishonest. So stop “changing the past” EVERY DAY, because all that does is convince normal people like me that you are just making this sh*t up as you go along, in as self serving a manner as possible. /rant

        • Broadlands says:

          What is RAW temperature data? Where is it to be found? After all, you can’t say if something has been tampered with if you don’t even know what it is or where it is. But, apparently you do it anyhow. Very scientific.

          Steve tells us that he is using the same data set going back to years before NOAA/NCDC even existed. Who constructed those “pure” RAW untampered values? The US Weather Bureau maybe? Did they, too, tamper? And are those the sources of Steve’s incorrect prediction that 2013 would be one of the top-10 coldest years on record?

        • _Jim says:

          Are you familiar with the term “B91 form”?

        • Dougmanxx says:

          Broadlands says:
          July 3, 2014 at 1:32 pm

          You really are an idiot, aren’t you? There actually is a database of EVERY scanned handwritten report from all of the USHCN stations. If you want completely “untampered” data, you’d go there. I’ve done some comparisons between the handwritten records and the USHCN RAW file, which you can find here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2.5 The checks I’ve done seem to indicate they are the same information. Might there be “tampering” I’m unaware of? Sure. But you don’t really care about that; you keep trying to distract from the point: NOAA publishes a brand new temperature record EVERY DAY. That is, past temperatures are different EVERY DAY. So I say once again: stop doing something that makes you look so dishonest. Admit that it does, and move on.

        • Broadlands says:

          Amusing, Jim! Were there B91 forms prior to the ‘tampering’ by NOAA? Have you studied the temperatures published by the US Weather Bureau and reported each month for every state? Have you even read the B91 instructions? Of course you haven’t.

          And Doug follows up with another of the required ad hominems. One sees those at this blog on almost every post when someone dares to challenge the facts. If there’s a distraction, it comes from Mr. Goddard. He failed to bring any of the numerous NOAA explanations to the attention of his loyal “scientists”. But never mind… they wouldn’t be looked at by anyone here anyhow. Have fun patting each other on the back.

        • Dougmanxx says:

          Broadlands says:
          July 3, 2014 at 3:35 pm

          Telling the truth isn’t an ad hominem attack. You must be an idiot if you can post all those links from various NOAA websites, then claim not to know where the “raw” data is. I simply state the truth. And as expected, you cannot reply to the whole point of this: the temperature of the past changes EVERY DAY. So that “data set” you can download from the NOAA FTP site is a NEW ONE every day. The past ALWAYS changes. That makes the whole process look dishonest. But then the reason you’re here isn’t to address a point, it’s to confuse the issue. Sorry, I won’t let it go. Stop looking like a bunch of liars.

          I just downloaded today’s “data” set. Guess what? Oberlin, Ohio’s 1936 weather changed again!
          06_26_2014 USH00336196 1936 -678E -768E 379E 560E 1626 1922 2287 2253a 1948 1135 173 88a 0
          06_27_2014 USH00336196 1936 -678E -768E 379E 560E 1626 1922 2287 2253a 1948 1135 173 88a 0
          06_28_2014 USH00336196 1936 -700E -793E 358E 539E 1606 1902 2267 2233a 1927 1114 152 68a 0
          06_29_2014 USH00336196 1936 -700E -793E 357E 539E 1605 1901 2266 2232a 1927 1114 152 67a 0
          07_01_2014 USH00336196 1936 -702E -795E 355E 537E 1604 1900 2265 2231a 1925 1112 150 66a 0
          07_02_2014 USH00336196 1936 -698E -792E 358E 539E 1606 1902 2267 2233a 1928 1115 153 68a 0
          07_03_2014 USH00336196 1936 -669E -765E 382E 565E 1629 1925 2290 2256a 1951 1138 176 91a 0

          RAW USH00336196 1936 -555a -646 518 680 1695 1983 2333 2317a 2010 1205 240 144a 0

          Amazingly, after an entire week the “historical record” only has 2 days with the same “data”. What’s up with that?
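[The snapshot lines pasted above can be diffed mechanically. A rough sketch, assuming the column layout visible in those lines (station id, year, twelve monthly values with optional trailing flag letters, then a count field); this is an illustration, not the commenter's actual script, and uses three of the quoted snapshots:]

```python
import re

def monthly(line):
    """Return the twelve monthly values of a snapshot line as integers."""
    fields = line.split()
    # Strip trailing flag letters ('E', 'a', ...) before converting.
    return [int(re.sub(r"[A-Za-z]+$", "", f)) for f in fields[2:14]]

# Three of the Oberlin OH 1936 snapshots quoted above (date -> line).
snapshots = {
    "06_26_2014": "USH00336196 1936 -678E -768E 379E 560E 1626 1922 2287 2253a 1948 1135 173 88a 0",
    "06_27_2014": "USH00336196 1936 -678E -768E 379E 560E 1626 1922 2287 2253a 1948 1135 173 88a 0",
    "06_28_2014": "USH00336196 1936 -700E -793E 358E 539E 1606 1902 2267 2233a 1927 1114 152 68a 0",
}

baseline = monthly(snapshots["06_26_2014"])
for day, line in sorted(snapshots.items()):
    # List the month numbers whose values differ from the 06_26 snapshot.
    diffs = [m + 1 for m, (a, b) in enumerate(zip(baseline, monthly(line))) if a != b]
    print(day, "months changed vs 06_26:", diffs if diffs else "none")
```

[Run on the three sample lines, the 06_27 snapshot matches 06_26 exactly, while the 06_28 snapshot differs in all twelve months, which is the day-to-day churn the comment is describing.]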

        • _Jim says:

          re: Broadlands July 3, 2014 at 3:35 pm
          Amusing Jim! Were there B91 forms prior to the ‘tampering’ by NOAA?

          Non sequitur. You do appear to be an idiot. Note that one has to ‘earn’ that title, and you have done so. Bye now and thanks for playing.


        • _Jim says:

          re:Broadlands July 3, 2014 at 9:58 pm

          Bullshit charts with bullshit data; you are really steeped in this global warming bullshit aren’t you? The ‘data’ has no provenance, has seen a myriad of adjustments with little to no basis in reality.

