A $100,000 climate prize
Climatologists often claim that they are able to detect the global warming signal in the temperature records. If they are right then they are going to be having a very happy Christmas indeed, because Doug Keenan is offering them the chance to win a very large cash prize at his expense. Here are the details.
There have been many claims of observational evidence for global-warming alarmism. I have argued that all such claims rely on invalid statistical analyses. Some people, though, have asserted that the analyses are valid. Those people assert, in particular, that they can determine, via statistical analysis, whether global temperatures are increasing more than would be reasonably expected by random natural variation. Those people do not present any counter to my argument, but they make their assertions anyway.
In response to that, I am sponsoring a contest: the prize is $100 000. In essence, the prize will be awarded to anyone who can demonstrate, via statistical analysis, that the increase in global temperatures is probably not due to random natural variation.
The file Series1000.txt contains 1000 time series. Each series has length 135 (about the same as that of the most commonly studied series of global temperatures). The series were generated via trendless statistical models fit for global temperatures. Some series then had a trend added to them. Each trend averaged 1°C/century—which is greater than the trend claimed for global temperatures. Some trends were positive; others were negative.
A prize of $100 000 (one hundred thousand U.S. dollars) will be awarded to the first person, or group of people, who correctly identifies at least 900 series: i.e. which series were generated by a trendless process and which were generated by a trending process.
Each entry in the contest must be accompanied by a payment of $10; this is being done to inhibit non-serious entries. The contest closes at the end of 30 November 2016.
The file Answers1000.txt identifies which series were generated by a trendless process and which by a trending process. The file is encrypted. The encryption key and method will be made available when someone submits a prize-winning answer or, if no prize-winning answers are submitted, when the contest closes.
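For concreteness, the most naive line of attack an entrant might try is to fit a least-squares slope to each series and classify it as "trending" when the slope is large. The Python sketch below uses synthetic white-noise stand-ins, not the actual Series1000.txt; the contest's trendless models are fit to global temperatures and presumably much more autocorrelated, which is exactly what defeats this approach:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_slope(series):
    """Ordinary least-squares slope of a series against its time index."""
    t = np.arange(len(series))
    return np.polyfit(t, series, 1)[0]  # polyfit(deg=1) -> [slope, intercept]

def classify(series, threshold):
    """Label a series 'trending' when |OLS slope| exceeds the threshold."""
    return abs(fit_slope(series)) > threshold

# Synthetic stand-ins for the contest: 135-point white-noise series,
# roughly half with a 1 degC/century trend (0.01 degC per step) added.
n, length, trend_per_step = 200, 135, 0.01
has_trend = rng.random(n) < 0.5
series = rng.normal(0.0, 0.1, size=(n, length))
series[has_trend] += trend_per_step * np.arange(length)

labels = np.array([classify(s, threshold=0.005) for s in series])
accuracy = float(np.mean(labels == has_trend))
```

Against white noise this trivially clears the 900-of-1000 bar; against heavily autocorrelated series the slope estimator's variance balloons and the two classes overlap.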
More here.
Reader Comments (129)
Indeed, which is why I would quite like to know if what Doug is doing is legal or not. I'm no expert at the rules about lotteries, but my understanding is that there are rules. Does Doug's challenge satisfy all the legal requirements?
To stop the tedious posts by ATTP, trying to quell debate by law rather than by scientific expertise (as befits the AGW religion), here is what I believe the rules to be:
<BLOCKQUOTE CITE="">"LOTTERIES AND AMUSEMENTS ACT 1976
EXPLANATORY NOTES FOR THE GUIDANCE OF LOCAL SOCIETIES
1. Section 5 of the Lotteries and Amusements Act 1976 (as amended by the National Lottery etc. Act 1993) authorises the conduct of small lotteries (eg a sweepstake or draw, etc.) by societies for raising money for charitable, sports and other similar purposes, otherwise than for private gain. The society on whose behalf the lottery is promoted must first be registered for the purposes of the said section 5 with the appropriate authority, and the lottery must be conducted in a manner complying with the Act.
2. Registration of Society. In order to obtain the benefits of the Act, the society (an expression which includes a local branch or section of a society organised on a national or area basis) must be one which is established and conducted wholly or mainly for one or more of the following purposes, that is to say:-
(a) charitable purposes;
(b) participation in or support of athletic sports or games or cultural activities;
(c) purposes which, not being described in paragraph (a) or (b) above, are neither purposes of private gain nor purposes of any commercial undertaking.
If a society does not put on sale tickets or chances valued at more than £20,000 for any lottery and if it does not put on sale tickets or chances the value of which, when added to those sold or to be sold in all earlier lotteries in the same calendar year, amounts to over £250,000, application for registration should be made to the local authority within whose area the office or the head office of the particular society is situated; forms of application are available from the offices of the local authority, and when completed should be returned to the authority, together with the statutory fee of £35. This fee is amended from time to time and it would be advisable to check with the authority as to the amount currently required to be paid on application for registration.
If a society wishes to run lotteries which will exceed the amounts mentioned above, then application for registration must be made to the Gaming Board for Great Britain. Once registered with the Gaming Board, the society must promote all further lotteries (of whatever size) held in that or the following three calendar years under the Board's registration, and will not be able to change to local authority registration during that time"</BLOCKQUOTE>
Shall I assume that it is purely a coincidence that Doug has changed his challenge after someone posted this on my blog?
Diogenes
There is no mention of registration under Section 5 of the Lotteries and Amusements Act 1976 in Mr Keenan's information.
Okay, so after the update Douglas Keenan mentioned in a comment above, I can say with certainty this contest is a scam. It is completely dishonest: Keenan changed the contest after people had submitted entries in order to make it harder. Not only is that horribly wrong, it is illegal. Changing the terms of a contest in a significant way without offering a refund to people who have paid to participate constitutes a breach of contract, which means keeping the entry fees people have submitted is theft.
Andrew Montford should be embarrassed to have this challenge promoted on his site. We can leave aside issues of legality. We can leave aside the fact that, for all we know, someone may have submitted a winning entry before Keenan made undisclosed changes to make the challenge more difficult, meaning he could potentially owe someone $100,000 and be hiding that fact. We can leave all that aside. The simple reality is Keenan made a challenge, then when people began working on it, he turned around and made undisclosed changes to the challenge to make it more difficult.
That's completely dishonest. Anyone and everyone should speak out against that sort of behavior. The idea anyone would give him money at this point is just disgusting.
There might be a problem here. If Mr Keenan keeps fiddling with his lottery he risks crossing the line from a legal, if as yet unregistered, lottery into illegality.
I am not a lawyer, but IIRC taking entries for a lottery while trying to avoid paying out the prize is known as <a href="http://definitions.uslegal.com/i/intent-to-defraud/">"intent to defraud"</a>.
Personally I think Mr Keenan is honest, if politically motivated, but he stands on the edge of a very deep hole.
The problem is, ATTP doesn't understand the basis of statistical inference but that doesn't hold him back from venting and polluting.
Nice try Brandon.
At least you have your excuse.
To have solved the problem by discovering the PRNG algorithm would have been akin to "solving the climate system" - which we surely all agree is:
a) Impossible and
b) Not germane to the issue of correctly attributing a trend for an unknown data generation process (i.e. the climate).
Of course the result of discovering the PRNG process would most likely allow the winner to completely replicate all 1000 stochastic series.
That would not constitute proof that the same person can replicate the stochastic temperature series (and hence correctly attribute the observed variation).
Oh please!
Except that's not what Keenan claims to have done. Rather, he claims to have fixed a flaw in how the series were set up. In the challenge, he stated that part of the recipe used to produce the time series was random numbers. He then realized that the random number generator was not producing numbers of sufficient unpredictability - i.e. the actual recipe wasn't in accordance with the stated recipe in the challenge. So he altered the time series so that they better matched what he claimed they were in the original challenge.
It's the first time I've heard of someone being accused of scamming or fraud for fixing something so that it matched what they had publicly represented it to be.
Browsing through his random sets, in the first 300 sets I found more than a dozen sets which warmed by more than 2C.
I can think of no statistical technique which will detect a 1.0C trend when the standard deviation of the data is larger than 1.0. The trend gets lost in the noise. Anybody with Statistics 101 would know this, and not waste their money.
This is the real dishonesty. Mr Keenan has put forward a challenge which he claims can be won by statistical analysis, when it cannot.
And before anyone else starts making silly remarks, the trend in the actual data is 1C in 135 years, with a standard deviation of 0.05C.
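The claim that no statistical technique can detect a 1.0C trend when the standard deviation exceeds 1.0 is worth checking: for white noise over 135 points, an ordinary t-test on the OLS slope actually retains substantial power even at unit standard deviation; it is autocorrelated noise, not noise amplitude alone, that defeats detection. A simulation sketch in Python, under a white-noise assumption rather than the contest's actual generating models:

```python
import numpy as np

rng = np.random.default_rng(1)

def detection_rate(noise_sd, n_trials=400, length=135, total_trend=1.0):
    """Fraction of trials in which an OLS slope t-test (|t| > 2) flags a
    linear trend of `total_trend` buried in iid Gaussian noise."""
    t = np.arange(length)
    slope_true = total_trend / (length - 1)
    # Standard error of the OLS slope for white noise of known sd.
    se = noise_sd / np.sqrt(np.sum((t - t.mean()) ** 2))
    hits = 0
    for _ in range(n_trials):
        y = slope_true * t + rng.normal(0.0, noise_sd, length)
        slope_hat = np.polyfit(t, y, 1)[0]
        hits += abs(slope_hat / se) > 2
    return hits / n_trials

low_noise = detection_rate(0.05)   # sd well below the 1.0C trend
unit_noise = detection_rate(1.0)   # sd equal to the whole trend
```

With sd = 0.05 detection is essentially certain; even with sd = 1.0 the test still flags the trend in most trials, because 135 points average the noise down considerably.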
Frank wrote: "The Met Office's Chief Scientist said that the IPCC's statement about global warming - derived from a purely statistical model - has NO BEARING on our understanding of the climate system."
ATTP replied: "Yes, everyone else has known this for a very long time. No one has claimed that they did have some bearing."
Does that mean that all of the other statements in the SPM for AR5 WG1 have no bearing on the climate system? This is arguably the first and most important statement WG1 makes in their report: There is unambiguous evidence that the globe is warming. Now we learn from the Met Office's Chief Scientist that statements about warming based on a linear statistical model (like this statement) are meaningless.
You are correct; I should have been smart enough to realize that WG1 was misleading me. I knew that the behavior of chaotic systems is difficult to interpret and that unforced variability (especially from changes in the rate of mixing/heat transfer between the deep ocean and the surface) can create deviations potentially as large as the LIA and MWP. (We don't know for sure if these events were mostly forced or unforced.)
The problem with the MET Office and IPCC relying on AOGCMs to detect warming is that the parameters in these models have been tuned by people with knowledge of 20th-century warming. Government funding for a model that didn't reproduce 20th-century warming would have dried up. Using their expert judgment, IPCC authors would likely exclude the results from any AOGCM that "failed" to reproduce the historical record of 20th-century warming - even though we all know that failure could have been caused by random chance. ("Failed" might be defined as attributing less than 50% or more than 150% of observed warming to forcing and attributing the remainder to unforced variability.)
Lorenz wrote a very prophetic paper on this subject back in 1991, before the rapid warming of the 1990's and "hiatus" since: "This somewhat unorthodox procedure [of using AOGCMs rather than statistical analysis to detect significant warming] would be quite unacceptable if the new null hypothesis had been formulated after the fact ... This would be the case, for example, if models had been tuned to fit the observed course of the climate."
http://eaps4.mit.edu/research/Lorenz/Chaos_spontaneous_greenhouse_1991.pdf
If models have been tuned to agree with the observed course of warming, then they can't be used for detection and attribution of warming!
There's a more recent post on this site about this challenge, but for the record, I want to respond to the idea being thrown around here that my accusations are wrong because:
Except that's not what Keenan claims to have done. Rather he claims to have fixed a flaw in how the series were set up.
Here's the thing, what Douglas Keenan claims to have done isn't the determining factor here. The determining factor is what Keenan actually did. And what he did is exactly what I said he did - changed the method he created the data set with in a way which made the contest significantly more difficult to win.
Anyone who compares the data sets will see that's true. Just looking at histograms of the trends of the series in the two data sets makes it clear that's true. The evidence is not unclear or vague. It is obvious and indisputable. Dismissing it because... Keenan didn't come out and tell everyone what he was doing is ludicrous.
Frank,
Wow. Shall I explain this again? I'll try to do it slowly, but that's hard when writing rather than speaking. The data tells us that we're warming. We can certainly use a statistical model to determine if the data values are higher now than they were in the past and whether, or not, this is statistically significant. This is fairly straightforward. If it's too complicated for you, just think "warmer now than in the past". Now here comes the complicated bit. That we've used a statistical model to establish if it is warmer now than it was in the past does not tell us why it's warmer, or what has caused this warming.
Okay, are you with me now? There is a difference between using statistical models to extract information from a dataset (are we warming?) and trying to understand the process associated with the observations. To interpret the observations requires actual, physically-motivated models. Statistical models alone cannot tell us why the data has the properties that it has, but they can be used to extract information about the data.
So, to go back to what was said
Yes. Statistical models cannot tell us why we're warming, but they can tell us that we're warming. Understanding the climate system therefore requires more than statistical models.
I hope this is clear now.
ATTP wrote: "That we've used a statistical model to establish if it is warmer now than it was in the past does not tell us why it's warmer, or what has caused this warming."
The linear AR1 statistical model has NOT established that it is warming. The head of the Met Office said that statistical model has NO BEARING on our understanding of the climate system. It does not tell us that statistically significant warming has occurred, because it is an inappropriate model. According to the Met Office, significant warming can only be demonstrated with the aid of climate models, which could have been tuned to agree with the temperature record.
If you can't prove that the warming that has been observed is "statistically significant warming", don't expect society to spend trillions to prevent more.
Inferential statistics is used to abstract "meaning from data". The first step is to detect significant warming. Without detection, there is nothing that requires attribution. The Met Office ruled out using a linear AR1 model for both detection and attribution.
1) Is the warming we have observed in the 20th century meaningful/significant? It is warmer than it was a century ago, but we don't know if that change is meaningfully different from normal variability without inferential statistics. If you look only at the 20th century, the answer appears obvious - the change is unprecedented and mostly upward. If you look at the whole Holocene - especially warm and cold periods like the LIA and MWP - the change appears less significant - unless you can prove that those periods were forced. The current rate of warming could be unprecedented in the Holocene, but the time resolution and reliability of many temperature proxies is limited.
2) Is the hiatus meaningful? Temperature seems to be rising more slowly (or not at all) since about 2000. Should we conclude that the GHE doesn't exist because GHGs have been rising faster than ever and surface temperature is not changing rapidly? There was a similar hiatus in warming around 1960. Climate models suggest that the 1960 hiatus was forced by rising aerosols, suggesting that the current hiatus, occurring while forcing is rising, is unprecedented.
We must use a consistent and valid method for deriving meaning (statistical significance) from both periods. Since no physics suggests that GHG-mediated warming in the 20th century should have been linear, the IPCC's choice of that statistical model was absurd and it was possible for Doug to find statistical models that fit the observations better than the one the IPCC chose. Doug's models say the warming is not statistically significant. However, I agree with the Met Office: no statistical model is appropriate for abstracting meaning (significance) from the 20th century record. Only after climate models tell us that a 0.5-1 degC change in GMST occurs rarely or never without being forced can we conclude that meaningful warming has occurred. Unfortunately for the consensus, those same climate models also tell us that the current hiatus was also an improbable event (p about 0.03). Either the models predict too much warming or too little unforced variability, or both.
The last 2 pages from this prophetic Lorenz paper may explain things better. Title: Chaos, Spontaneous Climatic Variations and Detection of the Greenhouse Effect.
http://eaps4.mit.edu/research/Lorenz/Chaos_spontaneous_greenhouse_1991.pdf
aTTP:
"Yes. Statistical models cannot tell us why we're warming, but they can tell us that we're warming"
How please? Simple numbered steps please using recognised and referenceable terminology and techniques, along with any necessary caveats to distinguish the 'statistically identified warming' from random effects. Please start with definitions, derivations and limitations of your chosen metrics. Thanks.
@ Frank, Nov 24 at 11:02 PM
Just to clarify ... I do not prefer or advocate using any particular statistical model. The statistical models that I have used were just for comparison: to show that there are some models better than a straight line with AR(1) noise.
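The kind of comparison being described here - showing that some model fits a series better than a straight line with noise - can be sketched with an AIC comparison. The example below is illustrative only: it pits a linear-trend-plus-white-noise model against a driftless random walk on simulated random walks, not on the actual temperature data or on the specific models used in the comparison above:

```python
import numpy as np

rng = np.random.default_rng(2)

def aic_linear_trend(y):
    """AIC of y_t = a + b*t + N(0, s^2): three fitted parameters."""
    t = np.arange(len(y))
    b, a = np.polyfit(t, y, 1)
    s2 = np.mean((y - (a + b * t)) ** 2)
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * s2) + 1)
    return 2 * 3 - 2 * loglik

def aic_random_walk(y):
    """AIC of a driftless random walk: diff(y) ~ N(0, s^2), one parameter."""
    s2 = np.mean(np.diff(y) ** 2)
    loglik = -0.5 * (len(y) - 1) * (np.log(2 * np.pi * s2) + 1)
    return 2 * 1 - 2 * loglik

# On series that really are random walks, AIC should usually prefer the
# walk model even when the realization looks like it has a trend.
wins = 0
for _ in range(200):
    walk = np.cumsum(rng.normal(0, 0.1, 135))
    wins += aic_random_walk(walk) < aic_linear_trend(walk)
walk_win_rate = wins / 200
```

The point of such a comparison is model selection, not attribution: a lower AIC says one description of the data is more parsimonious, nothing more.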
not banned yet,
This is obvious, right? If I have a set of temperature measurements over some period of time, then you can establish if it is probably warmer now than it was in the past, or not. That is all. Simple.
What do you mean by distinguish the 'statistically identified warming' from random effects? We're talking about a real system. A system in which the temperature is a measure of the energy. If the temperature goes up, it's because we're gaining more energy than we're losing. If it goes down it means we're losing more energy than we're gaining. This is physics 101; there is no "random effect" that can cause the temperature to change without it being associated with some change in energy flux - it's impossible.
So, if I have a dataset of temperatures, I can use statistical methods to establish if that dataset is consistent with the system getting warmer, getting cooler, or having a roughly constant temperature. Statistical methods alone, however, cannot tell me why it's doing that, but they can be used to identify properties of the dataset.
Frank,
This explains your conclusion. The IPCC did not use a linear model under the assumption that the underlying GHG-mediated warming was linear and to extract the GHG-mediated trend. They used it simply because if you have a long time series that appears to have a trend, it is a good deal more informative to report a linear trend (with uncertainties) than it is to report some kind of random number seed that happens to produce a time series that matches the observations. The linear trend model is intended to be descriptive, not inferential.
Douglas,
Okay, here's your chance; what do those better models tell you about the global temperature data?
aTTP - the only thing I have learned from your post is that you are incapable of (or unwilling to?) provide the foundation and methodology of the statistical models which you are claiming can "statistically identify warming":
"Statistical models cannot tell us why we're warming, but they can tell us that we're warming"
Or you're not thinking, have you considered that? I'll try one more time, but I'm almost certainly wasting it. First step. Look at the actual data. For example, HadCRUT4. The mean anomaly in 1880 was about -0.23C, the variability is about +- 0.1C. In 2015, the mean is about +0.5, with a similar variability. It is clearly warmer now than it was in 1880. If you think otherwise, stop reading because you're clearly as clueless as Keenan. You could then do what the IPCC does and determine some kind of best-fit linear trend and a confidence interval (assume autocorrelation if you wish since temperatures are probably correlated). That can allow you to estimate (with 95% confidence) how much warmer we are now than we were in the past. If somehow you think this doesn't tell us this, then you're living in some kind of fantasy land where temperatures can go up without things getting warmer. I don't know how that is meant to work, but I have no great interest in finding out.
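The procedure just described - fit a linear trend to the anomalies and report a confidence interval - can be sketched in a few lines of Python. The data here are synthetic stand-ins with the rough magnitudes quoted above (about 0.85C total rise, about 0.1C variability); real anomalies are autocorrelated, which widens the interval:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for an annual anomaly record, 1880-2014:
# a 0.85 degC total rise plus 0.1 degC iid noise (assumed values).
years = np.arange(1880, 2015)
n = len(years)
anoms = -0.23 + 0.85 * (years - 1880) / (n - 1) + rng.normal(0, 0.1, n)

# OLS slope, its standard error, and a ~95% confidence interval
# for the total warming over the record.
t = years - years.mean()
slope = np.sum(t * (anoms - anoms.mean())) / np.sum(t ** 2)
resid = anoms - (anoms.mean() + slope * t)
se = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum(t ** 2))

span = years[-1] - years[0]
total_warming = slope * span
ci_low, ci_high = (slope - 1.96 * se) * span, (slope + 1.96 * se) * span
```

With autocorrelated residuals the effective sample size shrinks and `se` should be inflated (for example with an AR(1) correction), which is roughly the kind of adjustment behind the wider intervals quoted in assessment reports.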
ATTP wrote: "The IPCC did not use a linear model under the assumption that the underlying GHG-mediated warming was linear and to extract the GHG-mediated trend. They used it simply because if you have a long time series that appears to have a trend, it is a good deal more informative to report a linear trend (with uncertainties) than it is to report some kind of random number seed that happens to produce a time series that matches the observations. The linear trend model is intended to be descriptive, not inferential".
The SPM says: "The globally averaged combined land and ocean surface temperature data as calculated by a linear trend, show a warming of 0.85 [0.65 to 1.06] °C"
By deriving a 90% confidence interval from the linear AR1 model, the IPCC is deriving an INFERENCE from this statistical model: they assert DETECTION, i.e. that at least 0.65 degC of warming is part of a significant trend that cannot be explained by random drift in temperature.
Speaking technically, random drift in temperature is unforced variability. 0.65-1.06 degC of warming requires some explanation - it is forced.
Yes, so what? This is what a linear trend analysis of the surface temperature dataset produces. Why are you presenting this as if it means anything other than what it explicitly says?
No, they are not. There is no such thing as a "random drift in temperature". It is nonsensical/unphysical. Temperatures change when energy is added, or taken away. The data tells us how the temperature has changed. It does NOT - by itself - tell us why it has changed. Even if there were a process called "random drift in temperature" the data would still be indicating that we are warmer now than we were in the past. It's almost as if you're suggesting that if the process that produced the temperature changes was something called "a random drift in temperature" that an increase in temperature would not imply that we were warmer. I hope you can see that this does not make sense.
I think you have this the wrong way around. The measurements indicate that we have warmed - we have probably warmed by between 0.65 and 1.06C, irrespective of the process that produced that warming. The measurements, by themselves, do not tell us why we've warmed. What we would like to understand is why it has warmed. Even if it were simply some kind of unforced variability it would NOT mean that we aren't warmer. This is self-evident, right?
ATTP wrote: "There is no such thing as a "random drift in temperature". It is nonsensical/unphysical. Temperatures change when energy is added, or taken away".
Wrong again. Heat content rises or falls when energy is added or taken away (forcing). Surface temperature can drift randomly due to chaotic variation in exchange of warm surface water with cold deeper water.
The El Nino event we are currently experiencing is the best known example of such a chaotic change in exchange of heat between the surface and the deep ocean. The eastward movement of warm, less dense water from the Pacific Warm Pool is suppressing the upwelling of cold water off the coast of Peru. This is causing GMST to rise without energy being added (forcing). Such examples of unforced variability are commonly found in the behavior of chaotic systems. Irregular change (or random drift) is not required to have an obvious cause in deterministic chaotic behavior. El Nino is not forced.
The existence of other patterns of unforced variability with longer durations is now widely accepted. The PDO seems to represent the integration of more or less frequent El Nino events which are unforced. The AMO is widely believed to be due to fluctuations in the meridional overturning current in the North Atlantic. We don't know if the LIA, MWP, RWP, etc represent forced or unforced variation.
The Met Office admitted that the IPCC's linear AR1 statistical model is not suitable for drawing inferences (confidence intervals) about observed warming.
Frank,
I shall just highlight this. I said
You said
I think that says all that needs to be said about the quality of this discussion.
It appears that the quality of the discussion is suffering from the ignorance of one participant who apparently hasn't heard that: 1) 95% of the heat from radiative forcing ends up in the ocean as Ocean Heat Content. 2) Therefore changes in OHC provide our best estimate of the current radiative imbalance driving global warming. 3) The ARGO buoys were created to accurately measure OHC. 4) ARGO would not be necessary if changes in GMST accurately reported on radiative imbalance.
For the record, I never implied that energy was not conserved. Chaotic variation in heat transfer between the surface and the deep ocean can cause GMST to rise or fall independent of radiative imbalance.
Above, I recommended a short paper from Lorenz entitled "Chaos, Spontaneous Climatic Variation, and Detection of the [Enhanced] Greenhouse Effect". The key section is the last two pages. Perhaps you will learn enough to engage in a quality discussion on this subject.
Frank,
Wow, you really are strawmanning me now. Not a surprise mind you. Let's be clear about something. The surface temperature data is data from a real system and indicates how the temperature has changed over some time interval. If that data indicates that it is warmer now than it was in the past, this is not going to change if we determine that the reason it has warmed is because of some kind of internal process, rather than some kind of external forcing. This is obvious right?
Indeed, and I never said it couldn't. My basic point is that if the observations suggest that the temperature has risen, then the temperature has risen. Discovering that the reason for this rise is an internal, chaotic process is not going to change this conclusion. Again, isn't this obvious?
It is clear from the surface temperature data that temperatures have increased relative to the late 1800s. Right? It is also clear that this rise is statistically significant (i.e., we can reject the null that temperatures have not risen). Right? Suggesting otherwise is utterly bizarre. Okay? This is utterly trivial stuff. Right?
You really seem to be suggesting that the process that drives the warming might influence whether or not we conclude that it has warmed. I don't know how else to describe this other than "wrong".
ATTP wrote: "It is clear from the surface temperature data that temperatures have increased relative to the late 1800s. Right? It is also clear that this rise is statistically significant (i.e., we can reject the null that temperatures have not risen). Right? Suggesting otherwise is utterly bizarre. Okay? You really seem to be suggesting that the process that drives the warming might influence whether or not we conclude that it has warmed. I don't know how else to describe this other than "wrong"."
Why are you asking me these questions? (You won't believe anything I say.) They are answered very clearly in the last two pages of the Lorenz paper I have linked several times: "Chaos, Spontaneous Climatic Variation [unforced variability], and DETECTION of Greenhouse Warming". What paper could be more relevant to the questions you posed above? (Note the difference between detection and attribution. The paper is about Detection.)
http://eaps4.mit.edu/research/Lorenz/Chaos_spontaneous_greenhouse_1991.pdf
"Imagine for the moment a scenario in which we have traveled to a new location, with whose weather we are unfamiliar. For the first ten days or so the maximum temperature varies between 5 and 15 degC. Suddenly, on two successive days, it exceeds 25 degC. Do we on the second warm day, or perhaps the first, conclude that someone or something is tampering with the weather? ...
Consider now a second scenario where a succession of ten or more decades without extreme global-average temperatures is followed by two decades with decidedly higher averages; possibly we shall face such a situation before the 20th century ends. [Lorenz is writing in 1991, before the second decade of rapid warming - which it turns out has been followed by the "hiatus".]
"Certainly no observations have told us that decadal-mean temperatures are nearly constant under constant external influences."
Then Lorenz discusses whether statistics can help. (Doug has proven that other models fit warming better than the IPCC's linear AR1 model and the Met Office has told us no statistical model is appropriate.)
Yes, it certainly looks like there has been significant warming - until you stop and think about how little we know about "Spontaneous Climatic Variability" (unforced variability). Chaotic systems exhibit change without any obvious cause; the infamous butterfly flapping its wings. How much could the AMO and PDO have contributed to the observed trend? What if phenomena like the LIA and MWP are produced mostly by spontaneous variability? (Estimates of the change in TSI during the Maunder Minimum are too small for solar forcing to have produced the LIA.) Any change that occurs spontaneously is noise in the data and interferes with DETECTING a SIGNIFICANT change that allows us to conclude that "man is tampering with the climate".
Read the paper and ask how Lorenz (and the Met Office) answered your questions.
Frank,
Why would you think that? I think the questions have obvious answers.
I also think you're somewhat misrepresenting Chaos. Chaos - in this context - is still deterministic. It's not - strictly speaking - random. The system still obeys the laws of physics and can be described by well-understood equations. The problem is that in such a non-linear system the final state may depend very sensitively on the initial conditions, to the extent that it is essentially impossible to specify them with sufficient accuracy.
Even so, however, this does not mean that we cannot determine if the system has undergone warming. This can be established from the measurements, without necessarily knowing the cause.
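The sensitivity to initial conditions being described can be illustrated with the classic Lorenz-63 system: two trajectories started 1e-8 apart eventually diverge to the full width of the attractor, yet the equations are entirely deterministic. A rough sketch using a simple forward-Euler integrator (adequate for illustration, not for precision work):

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    return np.array([
        x + dt * sigma * (y - x),
        y + dt * (x * (rho - z) - y),
        z + dt * (x * y - beta * z),
    ])

def max_separation(start_a, start_b, steps):
    """Largest distance reached between two trajectories."""
    a, b = np.array(start_a, float), np.array(start_b, float)
    worst = 0.0
    for _ in range(steps):
        a, b = lorenz_step(a), lorenz_step(b)
        worst = max(worst, float(np.linalg.norm(a - b)))
    return worst

# Identical starts except for a 1e-8 nudge in z: nearby at first,
# macroscopically separated after enough steps.
early = max_separation([1.0, 1.0, 1.0], [1.0, 1.0, 1.0 + 1e-8], 200)
late = max_separation([1.0, 1.0, 1.0], [1.0, 1.0, 1.0 + 1e-8], 3000)
```

The point parallels the one above: unpredictability of the trajectory does not prevent us from measuring that the state has changed; it only prevents forecasting which state comes next.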
ATTP wrote: "It is clear from the surface temperature data that temperatures have increased relative to the late 1800s. Right? It is also clear that this rise is statistically significant (i.e., we can reject the null that temperatures have not risen). Right? Suggesting otherwise is utterly bizarre. Okay?"
Frank suggested: "Read the paper and ask how Lorenz (and the Met Office) answered your questions."
ATTP replies with a bunch of accurate statements about chaos that aren't relevant to the question of detection of significant warming and continues: "Even so, however, this does not mean that we cannot determine if the system has undergone warming. This can be established from the measurements, without necessarily knowing the cause."
Frank responds: What does Lorenz say about your questions? Does he say that the instrumental temperature record provides conclusive evidence that something (not necessarily man) has been tampering with the climate? Or could these changes be spontaneous (unforced) climatic variability - which is noise in the data?
Your and my OPINIONS don't mean much. Lorenz is an authority on this subject. What did he say in 1991 (a time when disagreeing with the consensus wouldn't get one branded as a Heretic or a Den1er)? If you don't like his answer, find me a more authoritative source.