User:LArron/PNAS Response to Letter

PNAS Talk

This page gives an overview of the history of the well-known talk page about the Response to A. Schlafly's letter from PNAS (status as of Sep 25th, 2008). The presentation is not yet perfect, and some comments that were overwritten are missing. Legend:

semi-visible comment: archived
invisible comment: removed
visible comment
editor with privileges
editor is blocked*

*: block status as of Sep 18th, 2008


In this day and age, scientists have their own agenda and have corrupted science. Just look at global warming or cloning or stem cells as proof. With that said, the only way to get the real truth is by suing in court. Unfortunately, scientists are bound to vast wealth and have the power to defend themselves vigorously. If ever a fund was set up to pay for a suit, I would contribute. It is a classic case whereby the truth be known, the truth will prevail. -- jp 22:14, 12 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
: Thanks, Jpatt. One additional beauty of the truth is that it remains the truth no matter how much some deny it. PNAS can deny its errors all it likes, but that doesn't change the fact they are errors.--Aschlafly 22:21, 12 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
Philip J. Rayment | 23:01, 16 September 2008 | Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM | 22:57, 16 September 2008 | removed unsubstansive comments
DRamon | 22:15, 15 September 2008 | reverting more vandalism from JonoP
JonoP | 22:14, 15 September 2008 | rmv unsustansive comments
Aschlafly | 22:21, 12 September 2008 | reply
::"...the only way to get the real truth is by suing in court." Which is why the Kitzmiller et al. vs. Dover Area School District et al. decision was so important. Science was able to vindicate itself against the claims of intelligent design creationism. As for PNAS: It's not like the reviewer is saying anything that commenters here hadn't told ASchlafly already. -Drek
Aschlafly | 01:12, 13 September 2008 | Reverted edits by Drek (Talk); changed back to last version by Aschlafly
Drek | 23:25, 12 September 2008 | ()
Drek | 23:09, 12 September 2008 | ()
::Well said, Andy and Jpatt. It is perhaps worth pointing out that the President of the NAS is a "climate scientist". If the Academy is dominated by pseudoscience of that kind, it's hardly a surprise that their response was to cover up and deny the truth. Nevertheless, they had to be given their chance to make good before further steps are taken. I suggest now that the issue be put to potentially supportive congressmen/women and senators, given the public funding for Lenski's activities. Bugler 05:46, 13 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
Bugler | 06:17, 13 September 2008 | ()
Bugler | 05:46, 13 September 2008 | ()
:::To JPatt I would ask, how do we distinguish between a reply from an expert correcting an error and a reply from “Big Science” covering up the truth?
Aschlafly | 10:11, 13 September 2008 | Reverted edits by Toffeeman (Talk); changed back to last version by Bugler
Toffeeman | 07:49, 13 September 2008 | ()
Toffeeman | 07:48, 13 September 2008 | Reply to Jpatt
:::If you send a good criticism to “Big Science” which they do not like and are willing to cover up, you would expect a dismissive reply, as has been received. However, if you sent a bad criticism to “Big Science” you would also expect a dismissive reply, as has been received. That a dismissive reply has been received does not distinguish between the two possibilities. Solely from the dismissive nature of the reply, therefore, we cannot conclude evidence of a conspiracy on the part of “Big Science”, nor can we conclude that the letter was unsound. We would expect “he is wrong” whether or not Mr. Schlafly was wrong.
Aschlafly | 10:11, 13 September 2008 | Reverted edits by Toffeeman (Talk); changed back to last version by Bugler
Toffeeman | 07:50, 13 September 2008 | ()
Toffeeman | 07:49, 13 September 2008 | ()
Toffeeman | 07:48, 13 September 2008 | Reply to Jpatt
:::How can we distinguish? Can we distinguish? Well, yes. Firstly, the detail of the reply will give clues. The reply was not “un-responsive”; it gave clear reasons why the letter was not to be published: the objections ignore elementary statistics and the final comment is “uncalled for”. These are reasons to reject the criticisms.
Aschlafly | 10:11, 13 September 2008 | Reverted edits by Toffeeman (Talk); changed back to last version by Bugler
Toffeeman | 07:49, 13 September 2008 | ()
Toffeeman | 07:48, 13 September 2008 | Reply to Jpatt
:::Secondly, these are reasons that are independently corroborated by the discussion of the various drafts of the letter. Objections were raised against all the substantive points by editors, many with expertise in the relevant areas: Stitch75, Brossa, Kallium, Argon, Wisdom89, Murray, Gerlach etc. The majority of the discussion centred around the statistics, as with the majority of the reply from PNAS. However, the author was also specifically warned that the inclusion of accusations of withholding data was inappropriate [1]. Most of these objections were addressed not by reformulating the criticisms of the paper but by dismissal, deletion and blocking. Those few editors writing in support of the criticisms declared no expertise, showed little, and in at least one case included a parodist [2]!
Aschlafly | 10:11, 13 September 2008 | Reverted edits by Toffeeman (Talk); changed back to last version by Bugler
Toffeeman | 07:54, 13 September 2008 | ()
Toffeeman | 07:49, 13 September 2008 | ()
Toffeeman | 07:48, 13 September 2008 | Reply to Jpatt
:::On the other hand, what do we have to support the contention that the letter has good criticisms of the Lenski study? We have little that goes beyond assertion.
Aschlafly | 10:11, 13 September 2008 | Reverted edits by Toffeeman (Talk); changed back to last version by Bugler
Toffeeman | 07:49, 13 September 2008 | ()
Toffeeman | 07:48, 13 September 2008 | Reply to Jpatt
:::It was pretty much known before sending the letter that, although the author had been given every help in formulating the best letter possible, the letter had little merit. It should not come as a surprise that it was dismissed, and it should certainly not be taken as evidence that “Big Science” is corrupt.--Toffeeman 07:48, 13 September 2008 (EDT)
Aschlafly | 10:11, 13 September 2008 | Reverted edits by Toffeeman (Talk); changed back to last version by Bugler
Toffeeman | 07:49, 13 September 2008 | ()
Toffeeman | 07:48, 13 September 2008 | Reply to Jpatt
::: Right. The next step is to criticize the taxpayer funding of this junk science. When the authors and the publishing organization will not even address statistical errors in the work, then it's time to pull the public funding.--Aschlafly 10:13, 13 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
Philip J. Rayment | 23:01, 16 September 2008 | Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM | 22:57, 16 September 2008 | removed unsubstansive comments
DRamon | 22:15, 15 September 2008 | reverting more vandalism from JonoP
JonoP | 22:14, 15 September 2008 | rmv unsustansive comments
Aschlafly | 14:23, 14 September 2008 | Reverted edits by Raul (Talk); changed back to last version by DinsdaleP
Raul | 14:06, 14 September 2008 | ()
Aschlafly | 10:13, 13 September 2008 | reply
:: If you REALLY believe that Lenski has committed academic FRAUD, then lodge a formal complaint with his University. Such complaints are taken very seriously and can lead to loss of tenure and dismissal from the university, and with that on his record no other institution would hire him on any basis. Markr 19:40, 13 September 2008 (EDT)
Markr | 19:43, 13 September 2008 | put my comment on end (sorry) and changed jesus to Jesus
Markr | 19:40, 13 September 2008 | option: complaint of fraud to University
(deleted non-substantive comments). Again, the heading on this page will be enforced: "Substantive comments only, please." If you have a substantive comment about the identified errors and the PNAS's failure to address them, then please comment. Non-substantive comments will be removed. This is an encyclopedic-based search for the truth, not a blog or a place to refuse to contribute in a substantive manner.--Aschlafly 20:29, 14 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
Aschlafly | 16:55, 20 September 2008 | archive
Philip J. Rayment | 23:01, 16 September 2008 | Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM | 22:57, 16 September 2008 | removed unsubstansive comments
DRamon | 22:15, 15 September 2008 | reverting more vandalism from JonoP
JonoP | 22:14, 15 September 2008 | rmv unsustansive comments
Aschlafly | 20:29, 14 September 2008 | reason for removal of non-substantive comments
SamuelHTD | 10:16, 14 September 2008 | Why did you remove a big portion in place of your question?
Sam99foster | 10:13, 14 September 2008 | ()
Sam99foster | 10:13, 14 September 2008 | deletion of Toffeeman's comments seemed unfair and unnecessary
Brossa | 11:00, 13 September 2008 | ()
(unindent)
Aschlafly | 20:29, 14 September 2008 | reason for removal of non-substantive comments
DinsdaleP | 11:39, 14 September 2008 | ()
Okay, let's get back to basic, substantive comments. When I read the PNAS response, I don't see it being a "non-responsive explanation" or a "failure to address the five errors identified in the Letter to PNAS". What I see is a refutation of the objections raised by Aschlafly in his letter with regard to the statistics-based issues he raised. The PNAS response devotes seven sentences to explaining why the underlying statistics in the published experiment were sound and followed accepted practices. A follow-up point in the second paragraph states that the statistical misunderstanding by Aschlafly is of such a basic level that the PNAS journal is not the appropriate forum to respond with a lesson in experimental statistics to explain his misunderstanding in detail. In short, they are stating, professionally, that Aschlafly needs to learn more about experimental statistics and then re-examine his issues to see if they are, in fact, issues (which PNAS says they are not).
Aschlafly | 20:29, 14 September 2008 | reason for removal of non-substantive comments
DinsdaleP | 11:39, 14 September 2008 | ()
The PNAS response also states that their letters forum is not the appropriate place to raise objections about whether Lenski has disclosed his data or not to the public. That is a professional response as well.
Aschlafly | 20:29, 14 September 2008 | reason for removal of non-substantive comments
DinsdaleP | 11:39, 14 September 2008 | ()
What I'd suggest, then, is that the appropriate response is for Aschlafly to defend the statistical analysis he used in identifying his issues, to respond to the PNAS assertion that it was erroneous. To summarize, PNAS didn't dodge a response - their response is that the analysis was flawed. Any response by Aschlafly to that assertion is up to him. --DinsdaleP 11:39, 14 September 2008 (EDT)
Aschlafly | 20:29, 14 September 2008 | reason for removal of non-substantive comments
DinsdaleP | 11:39, 14 September 2008 | ()
: I concur with DinsdaleP. The PNAS response is pretty much what I expected given what I perceived to be the general lack of receptiveness to information provided by others with more experience in biology in the discussions leading up to the letter's creation (clearly, others have differing opinions). Thus it would appear that either biologists as a class are terribly poor at recognizing "the truth" or maybe it's not the biologists who are having the problem. I'd like to suggest another route, in addition to DinsdaleP's suggestion to explicitly defend Andy's statistical claims. Recall that the strains and cultures described in the paper remain available to other scientists if they choose to reproduce the experiments. In the several months since the "Lenski affair" began, what success have Conservapedia members (at least those interested in the topic) had in locating bacteriology labs sympathetic to Conservapedia's mission to investigate Blount et al.'s work and rerun the experiments? As numerous others have noted, reproduction of experimental results is a gold standard in science. The strains provide the best evidence for or against the conclusions in the paper. I'd humbly suggest that obtaining the help of a friendly lab would provide the most sure and direct means of resolving the issues.--Argon 18:26, 14 September 2008 (EDT)
Aschlafly | 20:29, 14 September 2008 | reason for removal of non-substantive comments
Argon | 18:29, 14 September 2008 | ()
Argon | 18:26, 14 September 2008 | ()
:Since you've taken the liberty of deciding what is substantive or not in deleting posts like my last one, I have a serious, respectful question to ask: what exactly do you mean by "substantive"? I didn't attack you or your letter; I was attempting to state that the PNAS response did, in fact, address the points of your letter. Whether one considers the PNAS response to be correct or not is a separate matter - they read your objections and responded to them instead of ignoring them, that's all.
Aschlafly | 16:55, 20 September 2008 | archive
DinsdaleP | 21:00, 14 September 2008 | Can a comment on this page fail to endorse your view and still be considered "substantive"?
:My last post would therefore seem to have met Webster's definition of substantive - "having or expressing substance" - but the measure of "substantive" for a comment on this page appears to be whether it agrees with your view or not. That's your prerogative, but if you intended to allow comments on this page other than endorsements of your view, then please let me know what I did wrong. --DinsdaleP 21:00, 14 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
DinsdaleP | 21:01, 14 September 2008 | ()
DinsdaleP | 21:00, 14 September 2008 | Can a comment on this page fail to endorse your view and still be considered "substantive"?
:: Dinsdale, we're here to think and learn. You can look at my letter, look at the PNAS's response, and provide some substantive insights. We're not here to say something like, uh, go ask someone else if a (9th grade-level) graph is correct or not. If you think the substantive issues are beyond your depth, and I don't, then comment on them in a substantive and intellectual and specific way. This is not another waste-of-time blog, and it's not going to become one.--Aschlafly 21:19, 14 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
Philip J. Rayment | 23:01, 16 September 2008 | Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM | 22:57, 16 September 2008 | removed unsubstansive comments
DRamon | 22:15, 15 September 2008 | reverting more vandalism from JonoP
JonoP | 22:14, 15 September 2008 | rmv unsustansive comments
Aschlafly | 21:19, 14 September 2008 | reply to Dinsdale
:::DinsdaleP, you did attack ASchlafly at least indirectly. Suggesting that the PNAS response has merit might also be interpreted by some to mean that the letter ASchlafly sent wasn't the very best it could be. Now, contrast that to my deleted comment suggesting that a time-tested response would be to actually try reproducing the experiment. Many bad experiments are exposed when others fail to get the same results as the original authors. I think this would be an excellent, substantive avenue to pursue.--Argon 21:23, 14 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
Argon | 21:23, 14 September 2008 | ()
ASchlafly - you said above, "PNAS can deny its errors all it likes, but that doesn't change the fact they are errors". As you say, in a fair discussion of the merits of two sides of an argument, it's important that both sides take a good, hard look at their own propositions. Since your position is that PNAS has errors on its own side, I'm just curious to know if you are in any way prepared to accept that there might be errors in your own argument, or are you absolutely 100% certain that your position is error-free? I'm wondering if perhaps, before submitting this issue to funding authorities, you would be prepared to have an independent statistical expert take a look at your proposal? BenHur 22:17, 14 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
BenHur | 22:17, 14 September 2008 | Suggestion
:The only thing I was criticizing in my original comment today was the Main Page headline statement that "PNAS refuses to address the 5 errors in the Lenski study identified by the Letter to PNAS". What I pointed out is that they did in fact respond, by criticizing the statistical analysis used by Aschlafly. I'm not supporting or attacking Mr. Schlafly's analysis, because I'm the first one to admit that I have no expertise in this area. My conclusion was a constructive suggestion that Mr. Schlafly present a rebuttal to the PNAS decision by showing how his analysis and conclusions were not erroneous in the manner they claimed. A public, statistical defense of Mr. Schlafly's work, perhaps accompanied by the endorsement of some regarded experts in the field, would be the best response to PNAS choosing to respond by email instead of through the journal.
Aschlafly | 16:55, 20 September 2008 | archive
DinsdaleP | 22:24, 14 September 2008 | ()
:I wrote both the original draft letter to PNAS from Mr. Schlafly's notes and my earlier comments today with the intent of contributing constructively. I hope this clarification of my view is substantive enough to remain. --DinsdaleP 22:24, 14 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
DinsdaleP | 22:24, 14 September 2008 | ()
Folks, I've pointed out five very specific statistical (logical) errors. The torrent of nonsense above even includes an absurd demand for me to try to repeat the experiments, as though that would somehow correct a flawed paper.
Aschlafly | 16:55, 20 September 2008 | archive
Philip J. Rayment | 23:01, 16 September 2008 | Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM | 22:57, 16 September 2008 | removed unsubstansive comments
DRamon | 22:15, 15 September 2008 | reverting more vandalism from JonoP
JonoP | 22:14, 15 September 2008 | rmv unsustansive comments
Aschlafly | 22:49, 14 September 2008 | reply
The math is wrong in the PNAS paper. No one at PNAS is even willing to put his name on a response claiming that the math is correct, because it isn't. I'm not going to allow further nonsensical postings here. If you want to address the statistical (logical) errors in a specific way, fine. If you feel it is beyond your depth to do so, then move on. Thanks and Godspeed.--Aschlafly 22:49, 14 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
Philip J. Rayment | 23:01, 16 September 2008 | Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM | 22:57, 16 September 2008 | removed unsubstansive comments
DRamon | 22:15, 15 September 2008 | reverting more vandalism from JonoP
JonoP | 22:14, 15 September 2008 | rmv unsustansive comments
Aschlafly | 22:49, 14 September 2008 | reply
:"The paper incorrectly applied a Monte Carlo resampling test to exclude the null hypothesis for rarely occurring events." Specifically, why is it incorrect to apply a Monte Carlo test in this circumstance, or why was their application incorrect? Do your own calculations produce a p-value that differs from the published p-value of 0.08?Aschlafly16:55, 20 September 2008archive
Brossa23:31, 14 September 2008()
:"The Third Experiment was erroneously combined with the other two experiments based on outcome rather than sample size, thereby yielding a false claim of overall statistical significance." This sounds as though you disagree with the use of the Z-transform technique used to combine the data from the three replay experiments, or believe that the Z-transform analysis was performed incorrectly. Which do you disagree with - the technique, the application, or both, and why? --Brossa 23:31, 14 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
Brossa23:31, 14 September 2008()
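For readers trying to follow this exchange, here is a minimal sketch in Python of a Monte Carlo resampling test of the general kind being discussed: an observed statistic is compared against its distribution under the null hypothesis, estimated by repeatedly reassigning the outcomes at random. The clone counts, generations and test statistic below are invented for illustration only; this is not a reconstruction of the procedure or data in Blount et al., and it makes no attempt to reproduce the published p-value of 0.08.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical data: the generation each replayed clone was sampled from, and
# whether that clone yielded a Cit+ mutant (1) or not (0).  These numbers are
# made up for illustration; they are NOT the data from Blount et al.
generations = [2000] * 10 + [10000] * 10 + [20000] * 10 + [30000] * 10
cit_plus = [0] * 20 + [1, 0, 0, 0, 1, 0, 0, 0, 0, 0] + [0, 1, 0, 0, 1, 0, 0, 0, 0, 0]

def mean_generation_of_hits(gens, hits):
    """Test statistic: mean generation of the clones that produced Cit+."""
    picked = [g for g, h in zip(gens, hits) if h]
    return sum(picked) / len(picked)

observed = mean_generation_of_hits(generations, cit_plus)
n_hits = sum(cit_plus)

# Null hypothesis: every clone is equally likely to produce Cit+, regardless of
# the generation it came from.  Resample which clones are "hits" many times and
# count how often the statistic is at least as extreme as the observed value.
n_resamples = 20000
extreme = 0
for _ in range(n_resamples):
    resample = [0] * len(generations)
    for idx in random.sample(range(len(generations)), n_hits):
        resample[idx] = 1
    if mean_generation_of_hits(generations, resample) >= observed:
        extreme += 1

p_value = extreme / n_resamples
print(f"observed mean generation of Cit+ producers: {observed:.0f}")
print(f"one-sided Monte Carlo p-value: {p_value:.3f}")
```

The fraction of resampled statistics at least as extreme as the observed one serves as the estimated p-value; with only a handful of Cit+ events, the precision of that estimate depends on the number of resamplings.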
:: (removed personal and non-substantive attack that violated clear rules for this discussion; also, unsubstantiated claims of expertise are not allowed --Aschlafly 19:16, 15 September 2008 (EDT))
Aschlafly | 16:55, 20 September 2008 | archive
Philip J. Rayment | 23:01, 16 September 2008 | Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM | 22:57, 16 September 2008 | removed unsubstansive comments
DRamon | 22:15, 15 September 2008 | reverting more vandalism from JonoP
JonoP | 22:14, 15 September 2008 | rmv unsustansive comments
Aschlafly | 19:17, 15 September 2008 | same
Argon | 11:50, 15 September 2008 | Suggesting that Conservapedia recruit outside expertise and acknowledge constructive criticism *is* a substantive suggestion.
Aschlafly | 09:29, 15 September 2008 | Reverted edits by MikeR (Talk); changed back to last version by Brossa
MikeR | 09:08, 15 September 2008 | ()
::I do think the PNAS letter was very disrespectful in its attitude, and perhaps too brief and dismissive, but I think it did make a valid point (in a bad way). Ultimately, Andy, as great as you are at other things, from what you've written here you don't appear to have any real expertise in statistics beyond that of a reasonably gifted layman. I don't think you're going to make any headway against these people until you acknowledge these limitations, and seek out somebody like Behe who has the relevant expertise and can add some academic weight to your arguments.
Aschlafly | 19:17, 15 September 2008 | same
Argon | 11:50, 15 September 2008 | Suggesting that Conservapedia recruit outside expertise and acknowledge constructive criticism *is* a substantive suggestion.
Aschlafly | 09:29, 15 September 2008 | Reverted edits by MikeR (Talk); changed back to last version by Brossa
MikeR | 09:08, 15 September 2008 | ()
:: Where do we go from here? I think we should stop, take a step back, reevaluate some of the claims made here (some of which, I'm afraid, were wrong imho) and seek out heavyweight help. I hope you don't mind this constructive criticism and take it at face value. MikeR 09:08, 15 September 2008 (EDT)
Aschlafly | 19:17, 15 September 2008 | same
Argon | 11:50, 15 September 2008 | Suggesting that Conservapedia recruit outside expertise and acknowledge constructive criticism *is* a substantive suggestion.
Aschlafly | 09:29, 15 September 2008 | Reverted edits by MikeR (Talk); changed back to last version by Brossa
MikeR | 09:08, 15 September 2008 | ()
:: (removed personal and non-substantive attack that violated clear rules for this discussion --Aschlafly 19:16, 15 September 2008 (EDT))
Aschlafly | 19:17, 15 September 2008 | same
Aschlafly | 19:16, 15 September 2008 | removed attack that violated rules of discussion
::: I'm still a little unclear on your position ASchlafly - are you absolutely 100% certain your own statistical analysis is correct on this? Before you proceed further it's important to know that the technical analysis you are presenting is indeed indisputable. BenHur 12:01, 15 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
BenHur | 12:01, 15 September 2008 | Rock-solid statistical analysis?
== Torrent of nonsense ==
Aschlafly | 17:08, 15 September 2008 | Reverted edits by Argon (Talk); changed back to last version by BenHur
Argon | 12:04, 15 September 2008 | →Torrent of nonsense: new section
Aschlafly suggests: "The torrent of nonsense above even includes an absurd demand for me to try to repeat the experiments, as though that would somehow correct a flawed paper."
Aschlafly | 17:08, 15 September 2008 | Reverted edits by Argon (Talk); changed back to last version by BenHur
Argon | 12:11, 15 September 2008 | →Torrent of nonsense
Argon | 12:04, 15 September 2008 | →Torrent of nonsense: new section
Faulty logic on four counts: 1) It is not a "torrent of nonsense" to independently repeat experiments. That is the basis by which experimental results may be readily confirmed or disproved, and reproducibility is one of the main principles of the scientific method. 2) Nobody suggested that Aschlafly repeat the experiments by himself. It is understood that he doesn't have the facilities to do that. What I suggested was that Conservapedia editors try to locate sympathetic scientists who have the resources to run the experiments. These experiments are neither expensive nor difficult to reproduce. It just takes time (a few minutes a day for most of the work) and a lab. 3) If another lab group found that they were unable to reproduce the results despite using the strains provided by Lenski and following the paper's techniques, that would force a serious re-evaluation and, likely, a retraction of the conclusions made in the paper. 4) I made no "demand" that the experiments be repeated. I *suggested* that it would be a good idea. I wrote: "Now, contrast that to my deleted comment suggesting that a time-tested response would be to actually try reproducing the experiment."
Aschlafly | 17:08, 15 September 2008 | Reverted edits by Argon (Talk); changed back to last version by BenHur
Argon | 14:33, 15 September 2008 | →Torrent of nonsense
Argon | 14:33, 15 September 2008 | →Torrent of nonsense
Argon | 12:55, 15 September 2008 | →Torrent of nonsense
Argon | 12:11, 15 September 2008 | →Torrent of nonsense
Argon | 12:11, 15 September 2008 | →Torrent of nonsense
Argon | 12:04, 15 September 2008 | →Torrent of nonsense: new section
Given the substantive remarks provided above, will Aschlafly or anyone else acknowledge that rerunning the experiment in an independent lab has legitimate merit?--Argon 12:19, 15 September 2008 (EDT)
Aschlafly | 17:08, 15 September 2008 | Reverted edits by Argon (Talk); changed back to last version by BenHur
Argon | 14:33, 15 September 2008 | →Torrent of nonsense
Argon | 12:55, 15 September 2008 | →Torrent of nonsense
Argon | 12:47, 15 September 2008 | →Torrent of nonsense: Typo
Argon | 12:19, 15 September 2008 | →Torrent of nonsense
Argon | 12:11, 15 September 2008 | →Torrent of nonsense
Argon | 12:04, 15 September 2008 | →Torrent of nonsense: new section
:::: Ben, if you have at least a 9th grade-level education, then you can look at the 5 errors and decide for yourself, and comment in a substantive manner. Yes, they are obvious and basic errors, and the fact that the reviewer of my letter at PNAS would not attach his name to a specific denial speaks volumes.--Aschlafly 17:11, 15 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
Aschlafly | 16:55, 20 September 2008 | archive
Philip J. Rayment | 23:01, 16 September 2008 | Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM | 22:57, 16 September 2008 | removed unsubstansive comments
DRamon | 22:31, 15 September 2008 | ()
DRamon | 22:15, 15 September 2008 | reverting more vandalism from JonoP
JonoP | 22:14, 15 September 2008 | rmv unsustansive comments
Aschlafly | 17:11, 15 September 2008 | reply to Ben
:::::: Thank you for your clarification. Actually, yes, I have a University education in statistics to Master's level, and I have looked at what you suggest are "five errors", and I have indeed made my own decision. I'll reserve comment for now. BenHur 18:40, 15 September 2008 (EDT)
Aschlafly | 19:12, 15 September 2008 | Reverted edits by BenHur (Talk); changed back to last version by Aschlafly
BenHur | 18:40, 15 September 2008 | ()
(unindent) No, I do not concede points one and two; I've commented on them on the Talk:Letter to PNAS page. I just don't think that there is any chance that we will agree on those points, so there's little value in rehashing them unless you want to. Similarly, your point four is not something that can be resolved by argument. The only way to prove that the results were due to contamination would be to repeat the experiment with different controls in place and demonstrate different results - which might spur some other labs to run a third or fourth trial to see who was right. It seemed to me that the only points that could be solved by discussion were three and five, since they made statements about the statistical methods that can potentially be resolved through debate. So, once again, what is your response to my questions regarding points three and five?--Brossa 22:34, 15 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
Brossa | 22:34, 15 September 2008 | repost last
: Brossa, I could not find a meaningful rebuttal by you of point 1, and I found no comments by you on point 2, at Talk:Letter to PNAS. Are you trying the classic trick of "the answer is over there," when it isn't? Point 2 alone completely disproves the PNAS paper's thesis, and yet you avoid it and skip towards less obvious errors. I'm happy to address the subtle errors once you address the obvious ones.--Aschlafly 16:10, 16 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
Philip J. Rayment | 23:01, 16 September 2008 | Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM | 22:57, 16 September 2008 | removed unsubstansive comments
Aschlafly | 16:10, 16 September 2008 | reply to Brossa
::Aschlafly, you are avoiding Brossa's question. Why can't you just answer it? Brossa said that he wanted to limit the discussion to these points. The reason that he wants to do that is that the PNAS response targets your approach to the statistical analysis. MickA 16:24, 16 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
MickA | 16:24, 16 September 2008 | ()
:I don't mind addressing points one and two; I just don't think that we'll come to an agreement about them. Point one represents a misunderstanding of figure 3, which is labeled "Alternative hypotheses for the origin of the Cit+ function..." The figure does not represent the results of the experiments and does not conflict with them. It is a cartoon of the a priori hypothesis that was generated before the experiments were performed; it is not itself the hypothesis (the map is not the country). Note that the vertical axis lacks a scale; there is no way of knowing what the actual mutation rates are ahead of time. The location of the vertical jump on the graph is arbitrary; it has to lie somewhere between 0 and 31,500 generations, but that point could be anywhere. Quoting from the paper: "The historical contingency hypothesis predicts that the mutation rate to Cit+ should increase after some potentiating genetic background has evolved. Thus, Cit+ variants should re-evolve more often in the replays using clones sampled from later generations of the Ara-3 population." The hypothesis as stated does not specify a generation at which the potentiating mutation occurred. The hypothesis is not that potentiation took place at generation 31,000 rather than some other generation; it is that there was a potentiating mutation rather than a rare-mutation event. The results of the experiment do not disprove the contingency hypothesis; they confirm it and suggest that the potentiating mutation took place at generation 20,000. You think that the figure is the hypothesis; I think that the hypothesis is what is stated explicitly in the text of the paper; I doubt that we'll agree.
Aschlafly | 16:55, 20 September 2008 | archive
Brossa | 17:53, 16 September 2008 | ()
:Point two states: "Both hypotheses propose fixed mutation rates, but the failure of mutations to increase with sample size disproves this." I disagree with this statement. The problem with comparing the 'sample sizes' in replays two and three is that the experimental conditions were similar, but not the same. One could imagine a hypothesis that men commit murder most often between the ages of 25 and 35, with samples taken from the male populations of L.A., Singapore, and London. One would find different murder rates among the men of those three cities, but still might find (or not) that murderers in those cities tend to be between 25 and 35 years old. The problem with comparing the 'sample sizes' in replays two and three is that the experimental conditions were not the same, just as Singapore is not the same as Los Angeles. The rare-mutation hypothesis does not mean that the mutation rate to Cit+ is the same for all experimental conditions anywhere; just that the mutation rate is constant given the conditions of a particular replay. It is possible to have different baseline mutation rates among the three replays, all of which follow the historical contingency pattern. Or the mutation rate could actually be the same across all three replays, and the results seen here are just a statistical fluke that would vanish if the replays could be run thousands of times. Either way, it's not fatal to the paper's conclusions.
Aschlafly | 16:55, 20 September 2008 | archive
Brossa | 17:53, 16 September 2008 | ()
:Point two also states "If the authors claim that it is inappropriate to compare for scale the Second and Third Experiments to each other and to the First Experiment, then it was also an error to treat them similarly statistically." This is also incorrect in my view. One can state that the murder rate is different between two cities, and yet the murderers have some characteristic in common. Combining results from different samples is the bread and butter of statistical analysis. Meta-analysis, for example, is used to combine the results of studies that are much different from replay experiments one, two, and three. If you wish to make a more specific argument about the techniques used to combine the three replay experiments into a single result, I'll address it. I think that the second part of point two is essentially the same as point five in that it criticizes the statistical method used to combine the results of the three replays (the Z-transform), as opposed to point three which mentions the Monte Carlo technique separately.
Aschlafly | 16:55, 20 September 2008 | archive
Brossa | 17:53, 16 September 2008 | ()
:I therefore repeat my questions about points three and five, as stated previously. If you have other questions that I must answer first, please list them all at once, as I'm eager to move on to that discussion.--Brossa 17:53, 16 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
Brossa | 17:55, 16 September 2008 | ()
Brossa | 17:53, 16 September 2008 | ()
:: It's foolish to debate someone who has already closed his mind. Point 1 is plainly correct: the PNAS article should admit and disclose that the false hypothesis was indeed proven to be false. If you stand behind the falsehood, then you'll refuse to admit other errors also.
Aschlafly | 16:55, 20 September 2008 | archive
Aschlafly | 18:31, 16 September 2008 | reply to Brossa
Erm… why was the Monte Carlo resampling incorrect? I cannot make head nor tail of what you were thinking when you wrote this. Also, the result was inconsistent with the null hypothesis by a margin of 92 percent.
Aschlafly | 16:55, 20 September 2008 | archive
Aschlafly | 19:24, 19 September 2008 | reply
Aschlafly | 19:24, 19 September 2008 | reply
AlexaJ | 10:35, 19 September 2008 | Respond this time, please.
AlexaJ | 10:35, 19 September 2008 | Respond this time, please.
Philip J. Rayment | 23:01, 16 September 2008 | Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM | 22:57, 16 September 2008 | removed unsubstansive comments
Aschlafly | 18:31, 16 September 2008 | reply to Brossa
:::I already said that I didn't think that we would reach common ground on points one, two, and four, and that it would be a waste of time to rehash them. You asked me to, so I did. It seems foolish to then treat those points as a shibboleth and refuse to discuss the other points, which I am fully prepared to concede provided you elaborate on why the Monte Carlo technique and the Z-transform were either the wrong tests or performed incorrectly. I am capable of accepting two items from a list of five even if I reject the other three. Even if I were completely incapable of agreeing with you, I still don't see why you won't put your best mathematical argument forward about the paper's statistics. Were there too few Monte Carlo resamplings? Do you feel that the Z-transformation was performed incorrectly? Is the Z-transform itself suspect? Do you propose some other statistical analysis? If you do, there are any number of us here who can crunch the numbers again. What a coup it would be for you if we could use a technique that you suggested to obtain results that smashed Lenski's smug complacency!
Aschlafly | 16:55, 20 September 2008 | archive
Brossa | 19:39, 16 September 2008 | response to Aschlafly
:::I don't claim that the combination was simultaneously correct and incorrect, by the way - I claim that it was correct. Where do I imply otherwise? Also, it's not only improper to compare replay one with replays two and three for scale: it's impossible. Replay one involved constantly changing numbers of cells whereas replays two and three started with fixed numbers. How do you count the number of cells in the first replay to compare it to the other two? Is it the number of cells transferred each time? Is it the maximum population achieved in each flask prior to transfer? 750 generations passed in one case and 3700 generations in another before the Cit+ trait was seen - how do you factor that into the 'scale' equation? It is only the superficial resemblance of replays two and three that brings up the concept of 'scale'. The "underlying thesis" of the paper is not that there is a unique rate of mutation to Cit+ that applies across any and all experimental conditions.--Brossa 19:39, 16 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
Brossa | 19:39, 16 September 2008 | response to Aschlafly
:::: Brossa, you don't address how Lenski did combine the three experiments, and how Experiment 3 does not scale with Experiment 2. Given that you don't address the main errors, it's foolish to waste time discussing more subtle points with you. Put another way, there are plenty of open-minded contributors on this site. Why would one waste time discussing with a close-minded person instead?
Aschlafly | 16:55, 20 September 2008 | archive
Philip J. Rayment | 23:01, 16 September 2008 | Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM | 22:57, 16 September 2008 | removed unsubstansive comments
Aschlafly | 20:06, 16 September 2008 | reply to Brossa
:::: Your account will be blocked for your 90/10 rule violation unless you improve soon. Thanks and Godspeed.--Aschlafly 20:06, 16 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
Philip J. Rayment | 23:01, 16 September 2008 | Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM | 22:57, 16 September 2008 | removed unsubstansive comments
Aschlafly | 20:06, 16 September 2008 | reply to Brossa
::::: But Aschlafly, you still have not explained your own mathematical analysis. It seems you will do anything to debate with others, but nothing whatsoever to actually explain your own thesis - constantly repeating that you are correct is not a reasonable argument. A very large number of editors have asked some very simple questions which you must surely have the answers to, since they are fundamental to your thesis, yet you utterly refuse to provide an answer and instead debate, debate, and block. Surely this discussion can be kept on a reasonable, open-minded keel AND, at the same time, include extensive statistical and mathematical explanations from BOTH sides of the issue? None of us - yourself included - are enjoying this 'debate about a debate'. Please, try to back up your argument with some actual evidence. BenHur 20:48, 16 September 2008 (EDT)
Aschlafly | 21:32, 16 September 2008 | removed "BenHur's" non-substantive edit; one more non-substantive posting, BenHur, and your account will be blocked
BenHur | 20:48, 16 September 2008 | ()
:::::ASchlafly:"Brossa, you don't address how Lenski did combine the three experiments"Aschlafly16:55, 20 September 2008archive
Brossa21:20, 16 September 2008()
::::::Me, earlier on this page:"This sounds as though you disagree with the use of the Z-transform technique used to combine the data from the three replay experiments." "Second, why was it incorrect to apply the Z-transform to the data from the three replays, or in what way was the Z-transform performed incorrectly?" "...and the erroneous combination of the three replay experiments (the Z-transformation)." "If you wish to make a more specific argument about the techniques used to combine the three replay experiments into a single result, I'll address it. I think that the second part of point two is essentially the same as point five in that it criticizes the statistical method used to combine the results of the three replays (the Z-transform)..." Aschlafly16:55, 20 September 2008archive
Brossa21:23, 16 September 2008added spaces
Brossa21:20, 16 September 2008()
::::::Blount and Lenski used Monte Carlo to derive a p-value for each of the three replay experiments; then used the Z-transformation to derive a final p-value from the three Monte Carlo p-values. I've mentioned the Z-transformation multiple times in connection to combining the results of the three experiments.Aschlafly16:55, 20 September 2008archive
Brossa21:20, 16 September 2008()
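As general background for this exchange, here is a minimal sketch of the unweighted Z-transform (Stouffer's method) for combining independent one-sided p-values into a single p-value, which is the kind of combination Brossa describes. It assumes the standard textbook form of the method; the paper may have used a weighted variant, and the three p-values below are made up for illustration rather than taken from Blount et al.

```python
from math import sqrt
from statistics import NormalDist

def stouffer_combined_p(p_values):
    """Unweighted Z-transform (Stouffer's method): convert each one-sided
    p-value to a standard-normal Z score, sum the scores, rescale by sqrt(k),
    and convert the combined Z back into a single one-sided p-value."""
    nd = NormalDist()  # standard normal, mean 0 and sigma 1
    z_scores = [nd.inv_cdf(1.0 - p) for p in p_values]
    z_combined = sum(z_scores) / sqrt(len(z_scores))
    return 1.0 - nd.cdf(z_combined)

# Illustrative numbers only; these are not the replay p-values from the paper.
replay_p_values = [0.08, 0.04, 0.02]
print(f"combined one-sided p-value: {stouffer_combined_p(replay_p_values):.4f}")
```

Each individual p-value could itself come from a Monte Carlo resampling test like the sketch earlier on this page; the Z-transform only assumes that the component tests are independent and one-sided in the same direction.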
:::::ASchlafly: "Brossa, you don't address...how Experiment Three does not scale with Experiment 2."Aschlafly16:55, 20 September 2008archive
Brossa21:20, 16 September 2008()
::::::Me, earlier on this page: "The problem with comparing the 'sample sizes' in replays two and three is that the experimental conditions were similar, but not the same." "It is possible to have different baseline mutation rates among the three replays, all of which follow the historical contingency pattern. Or the mutation rate could actually be the same across all three replays, and the results seen here are just a statistical fluke that would vanish if the replays could be run thousands of times. Either way, it's not fatal to the paper's conculsions." "It is only the superficial resemblance of replays two and three that brings up the concept of 'scale'. The "underlying thesis" of the paper is not that there is a unique rate of mutation to Cit+ that applies across any and all experimental conditions"Aschlafly16:55, 20 September 2008archive
Brossa21:23, 16 September 2008added spaces
Brossa21:20, 16 September 2008()
:::::ASchlafly: "Your account will be blocked for your 90/10 rule violation unless you improve soon."Aschlafly16:55, 20 September 2008archive
Brossa21:20, 16 September 2008()
::::::Me, earlier on this page:"If you would like me to through the original letter point by point, and respond to all of the claims in detail, I'd be happy to - but only after I have your explicit permission to do so on this talk page (or some other page of your choosing). Until I'm given that permission, I'll await your response..." "I just don't think that there is any chance that we will agree on those points, so there's little value in rehashing them unless you want to." "I already said that I didn't think that we would reach common ground on points one, two, and four, and that it would be a waste of time to rehash them. You asked me to, so I did."Aschlafly16:55, 20 September 2008archive
Brossa21:23, 16 September 2008added spaces
Brossa21:20, 16 September 2008()
::::::I came into this discussion wanting to discuss specific issues: the basis of your objections to the Monte Carlo technique and the Z-transformation used to combine the results of the three replays into one final p-value. This could have been resolved quite quickly, but you said that I had to address all the points of your letter before you would answer my questions. I said repeatedly that I did not wish to do so without your explicit permission, which it seemed that you gave. So I jumped through those hoops, only to be accused of a 90/10 violation. Have I simply walked into an cunning trap?--Brossa 21:20, 16 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
Brossa21:23, 16 September 2008added spaces
Brossa21:20, 16 September 2008()
:::::Aschlafly, I'm a little confused as to why you removed this comment of mine. I did not cast any aspersions on your argument, and was merely answering your question as you posed it to me. Is "declining to comment" an indictable offense? BenHur 19:20, 15 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
BenHur | 19:20, 15 September 2008 | Why was my comment removed?
:::::: Your comment was not an "indictable offense," but it violated the rules of this page: "Substantive comments only, please." Got it? Either say something substantive, or edit somewhere else. Thanks.--Aschlafly 19:33, 15 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
Philip J. Rayment | 23:01, 16 September 2008 | Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM | 22:57, 16 September 2008 | removed unsubstansive comments
DRamon | 22:15, 15 September 2008 | reverting more vandalism from JonoP
JonoP | 22:14, 15 September 2008 | rmv unsustansive comments
Aschlafly | 19:33, 15 September 2008 | reply to Ben
::::::: This is confusing. Is "I agree with your thesis" or "your methods are 100% correct" a substantive comment? It can be very hard to infer your intent, Mr. Schlafly, I'm sorry to say. I have no quarrel with you, but I'm becoming confused as to what is and isn't appropriate comment on what is labelled a "Talk Page". Are there special rules for this Talk Page? If so, perhaps the title on the page might be changed? BenHur 19:42, 15 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
BenHur | 19:42, 15 September 2008 | answer to Aschlafly
:::::::: No, your quoted phrases are obviously not substantive comments. Your statement of agreement means nothing. I doubt you are even using your real name, for starters, which renders your agreement even sillier. I repeat for the nth time, say something substantive or edit somewhere else.--Aschlafly 19:52, 15 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
Philip J. Rayment | 23:01, 16 September 2008 | Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM | 22:57, 16 September 2008 | removed unsubstansive comments
DRamon | 22:15, 15 September 2008 | reverting more vandalism from JonoP
JonoP | 22:14, 15 September 2008 | rmv unsustansive comments
Aschlafly | 19:52, 15 September 2008 | reply to Ben
::::::::: (removed another non-substantive posting)--Aschlafly 20:28, 15 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
Philip J. Rayment | 23:01, 16 September 2008 | Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM | 22:57, 16 September 2008 | removed unsubstansive comments
DRamon | 22:15, 15 September 2008 | reverting more vandalism from JonoP
JonoP | 22:14, 15 September 2008 | rmv unsustansive comments
Aschlafly | 20:28, 15 September 2008 | removed another non-substantive posting
BenHur | 20:09, 15 September 2008 | response
:::::::::Andy - I too, like BenHur, am confused by your definition of "substantive." For example, jp writes only about his opinions of the state of modern science and the current scientific community, nothing at all specific to the PNAS reply letter. BenHur, however, has asked questions directly regarding your original letter (regarding the use of the Monte Carlo technique and Z-transform) in an attempt to further clarify your position. As far as I can tell, he is not denying the validity of any statement you made, but rather asks for the actual math you performed and further elucidation on the reasoning behind your points. I can hardly see how that is not substantive; it is directly related to the subject at hand: your letter to PNAS and their response. To quote yourself, "you doth protesteth too much." It is indeed your prerogative to question scientific studies and to raise issues, but to be taken seriously one should have a complete set of supporting mathematics and evidence for one's questions. So far, I have seen you produce no specifics on why Lenski performed an erroneous statistical analysis, nor why PNAS' claim that your challenges are incorrect is incorrect itself. --AndrasK 20:20, 15 September 2008 (EDT)
Aschlafly | 20:25, 15 September 2008 | Reverted edits by AndrasK (Talk); changed back to last version by BenHur
AndrasK | 20:20, 15 September 2008 | ()
:::::Aschlafly, perhaps you did not realize that my earlier questions were meant for you. I wish to address the statistical errors in a specific way, which requires a better understanding of your position. I will repeat my main questions: why was it incorrect to apply Monte Carlo techniques to the data in the paper, or in what way was the Monte Carlo technique performed incorrectly? Second, why was it incorrect to apply the Z-transform to the data from the three replays, or in what way was the Z-transform performed incorrectly? In lieu of the Monte Carlo/Z-transform techniques, what statistical calculations should have been performed? Feel free to be technical; I have more than a ninth grade education.--Brossa 17:38, 15 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
Brossa | 17:38, 15 September 2008 | ()
:::::It takes much more than a 9th-grade education to thoroughly evaluate professional microbiology and experimental statistics. The letter was rejected because the objections raised demonstrated fundamental misunderstandings of the techniques used, and thus were not up to PNAS standards. That's all there is to it. Period. The objections were vague (far from "very specific") and offered no corrections (and none have yet appeared on this site either). They only said "that's wrong" without support, which accomplishes nothing. That letter, like all such correspondence, was evaluated purely on its own merit, and not because PNAS is afraid of being proven wrong (in fact, that's when science is most exciting!). Furthermore, anonymous review is common practice - you also wouldn't have gotten a name if he/she had agreed with the letter. Kallium 18:23, 15 September 2008 (EDT)
Aschlafly | 18:38, 15 September 2008 | Reverted edits by Kallium (Talk); changed back to last version by MickA
Kallium | 18:23, 15 September 2008 | Was the reviewer a coward? No, it's "anonymous" review.
:::::: If you're skipping over the main points, then concede their validity or explain why you've skipped over them.--Aschlafly 19:14, 15 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
Philip J. Rayment | 23:01, 16 September 2008 | Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM | 22:57, 16 September 2008 | removed unsubstansive comments
DRamon | 22:15, 15 September 2008 | reverting more vandalism from JonoP
JonoP | 22:14, 15 September 2008 | rmv unsustansive comments
Aschlafly | 19:14, 15 September 2008 | conceding validity of main points?
::::::: If you would like me to go through the original letter point by point, and respond to all of the claims in detail, I'd be happy to - but only after I have your explicit permission to do so on this talk page (or some other page of your choosing). Until I'm given that permission, I'll await your response to my previously-stated questions about points three and five: your statements about the incorrect application of Monte Carlo resampling and the erroneous combination of the three replay experiments (the Z-transformation). --Brossa 22:00, 15 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
Brossa | 22:00, 15 September 2008 | ()
:::::::: Substantive postings are welcome, but I still don't have an explanation for why you skipped over the main points 1 and 2. Do you concede them?--Aschlafly 22:04, 15 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
Philip J. Rayment | 23:01, 16 September 2008 | Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM | 22:57, 16 September 2008 | removed unsubstansive comments
DRamon | 22:15, 15 September 2008 | reverting more vandalism from JonoP
JonoP | 22:14, 15 September 2008 | rmv unsustansive comments
Aschlafly | 22:04, 15 September 2008 | reply
:::::::::: With all due respect Aschlafly, and excuse me if I feel the need to tiptoe as gently as possible here, but I would cautiously disagree with your statement that "Substantive postings are welcome". It seems to me that you have explicitly removed only the substantive comments from this page. What is left are weak and non-substantive comments, to be sure. It seems very difficult to comment in any way here on this matter, as you seem to be of the opinion that you and you alone are correct. With the best will in the world, I certainly don't feel able to speak freely here, despite being well qualified to do so. Perhaps a slightly more lenient approach might help your own cause? BenHur 22:23, 15 September 2008 (EDT)
Aschlafly | 16:55, 20 September 2008 | archive
BenHur | 22:23, 15 September 2008 | with due respect
(unindent) No, I do not concede points one and two; I've commented on them on the Talk:Letter to PNAS page. I just don't think that there is any chance that we will agree on those points, so there's little value in rehashing them unless you want to. Similarly, your point four is not something that can be resolved by argument. The only way to prove that the results were due to contamination would be to repeat the experiment with different controls in place and demonstrate different results - which might spur some other labs to run a third or fourth trial to see who was right. It seemed to me that the only points that could be solved by discussion were three and five, since they made statements about the statistical methods that can potentially be resolved through debate. So, once again, what is your response to my questions regarding points three and five?
DRamon | 22:31, 15 September 2008 | ()
Brossa | 22:29, 15 September 2008 | ()
== Please provide your statistical analysis ==
Aschlafly | 09:36, 20 September 2008 | cleaning out junk that violated rules of page
MickA | 17:38, 15 September 2008 | →Please provide your statistical analysis: new section
Andrew, if you are so sure that your statistical analysis of the Lenski paper is correct, you should publish it on Conservapedia. MickA 17:38, 15 September 2008 (EDT)
Aschlafly | 09:36, 20 September 2008 | cleaning out junk that violated rules of page
MickA | 17:58, 15 September 2008 | →Please provide your statistical analysis
MickA | 17:38, 15 September 2008 | →Please provide your statistical analysis: new section
: I did. Which point don't you understand?--Aschlafly 19:13, 15 September 2008 (EDT)
Aschlafly | 09:36, 20 September 2008 | cleaning out junk that violated rules of page
Philip J. Rayment | 23:01, 16 September 2008 | Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM | 22:57, 16 September 2008 | removed unsubstansive comments
DRamon | 22:15, 15 September 2008 | reverting more vandalism from JonoP
JonoP | 22:14, 15 September 2008 | rmv unsustansive comments
Aschlafly | 19:13, 15 September 2008 | →Please provide your statistical analysis: I did
::Andy, this is the Hollywood breast cancer argument all over again. You have made a series of vague statements without any actual numbers to back them up. DanB 21:11, 15 September 2008 (EDT)
Aschlafly | 21:30, 15 September 2008 | Reverted edits by DanB (Talk); changed back to last version by DRamon
DanB | 21:11, 15 September 2008 | →Please provide your statistical analysis
::Could you direct me to the page showing your calculations? MickA 08:50, 16 September 2008 (EDT)
Aschlafly | 09:36, 20 September 2008 | cleaning out junk that violated rules of page
MickA | 08:50, 16 September 2008 | →Please provide your statistical analysis
I think regardless of what the details behind the statistical analysis are, the shame here is that PNAS refused to address anything specific in their response. They simply glossed over everything that was said in the letter sent to them and gave a generic, unsubstantive response. All the people here that are trying to argue with ASchlafly about his position should instead focus on why it is that PNAS refuses to directly address our concerns. --DRamon 21:01, 15 September 2008 (EDT)
Aschlafly | 09:36, 20 September 2008 | cleaning out junk that violated rules of page
DRamon | 21:01, 15 September 2008 | →Please provide your statistical analysis
:PNAS have directly addressed his concerns. They made it quite clear that there are no obvious ("subtle or obscure") statistical errors. If Andy's letter had raised any serious concerns then they would have asked the authors to respond. The only reason this is continuing is that Andy "twice as many statistic courses as you" won't accept that he is wrong. DanB 21:11, 15 September 2008 (EDT)
Aschlafly | 21:30, 15 September 2008 | Reverted edits by DanB (Talk); changed back to last version by DRamon
DanB | 21:11, 15 September 2008 | →Please provide your statistical analysis
:: It is not the function of the editorial board to "defend" a paper. They mainly said that, from what Mr. Schlafly has written, they could not see the mistake in Dr. Lenski's statistics that Mr. Schlafly suggests. Nor is it the function of a reviewer to "put his name" on something. A reviewer should check whether the average reader of a journal will be able to learn something from a letter. If he thinks there is nothing to learn, either because the letter fails to bring new insight or because the readers may not understand why it is favourable to use only conservative statistical methods and nothing that people with average mathematical knowledge would consider fragile, then he should reject the letter. That said, I had the impression from the wording of the response that the reviewer did have at least a brief look at the original article. And PNAS refused the letter for the very same reason people here are asking about, namely the complete absence of alternative estimates for the numbers. I don't want to check the calculations; if Mr. Schlafly states that, by using a well-defined method, he finds another result, and states this result clearly, I am a priori very willing to believe that, because I see no reason not to take his word on it (since he has, to my best knowledge, no record of scientific fraud). After he states his result, I may find the time to try to reproduce it (although I am very busy). --Stitch75 13:00, 16 September 2008 (EDT)
Aschlafly | 09:36, 20 September 2008 | cleaning out junk that violated rules of page
Stitch75 | 13:00, 16 September 2008 | →Please provide your statistical analysis
:::Could you direct me to the page showing your calculations? MickA 15:41, 16 September 2008 (EDT)
Aschlafly | 09:36, 20 September 2008 | cleaning out junk that violated rules of page
MickA | 15:41, 16 September 2008 | →Please provide your statistical analysis
Could you direct me to the page showing your calculations? MickA 11:05, 17 September 2008 (EDT)
Aschlafly | 11:59, 17 September 2008 | Reverted edits by MickA (Talk); changed back to last version by Aschlafly
MickA | 11:05, 17 September 2008 | →Please provide your statistical analysis
==Improper==Aschlafly16:55, 20 September 2008archive
Fred177614:30, 16 September 2008()
I have no knowledge of this branch of science, so I won't comment on whether the results were correct, or whether PNAS's letter was unbiased. Then again, neither does Schlafly, who, from what is publicly known about him, is not a biologist. I find it somewhat inappropriate for someone with no training in a specific field to come and tell someone who has actually learned something about that field that he hasn't done his job right. It's common sense. Fred1776 14:30, 16 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
Fred177614:40, 16 September 2008→Improper
Fred177614:39, 16 September 2008→Improper
Fred177614:30, 16 September 2008→Improper
Fred177614:30, 16 September 2008()
:That's an extraordinary and offensive remark from someone who admits complete ignorance. How do you - Fred1776 - have exact knowledge of how expert Andrew Schlafly is on this issue? He may not be a mathematician, but he is an experienced educator, amongst other things, with a legal training that enables his mind to get to the crux of a problem, as has been demonstrated very many times in this project. Biologists are properly open to the scrutiny of others. Bugler 15:34, 16 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
Bugler15:34, 16 September 2008→Improper
::I simply can't turn down the opportunity of agreeing with Bugler! Biologists, as with all experts, should be open to scrutiny from everyone. It's good to use people's expertise, but not at the expense of raising their pronouncements to the status of dogma. True, the lay person may not be able to formulate much criticism but sometimes, and especially when they stray outside the purely technical, all experts make mistakes that are evident to others. We should respect expertise, but stopping all questioning is giving respect beyond what is due. --Toffeeman 15:45, 16 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
Toffeeman15:45, 16 September 2008()
::(EC) Well, the PNAS seems to think his understanding of statistical analysis is less than "elementary". The question is, given that virtually none of us are experts in statistics, why wouldn't we believe the PNAS? I mean, they are the trained scientists, right? If they told me my misunderstanding of statistical analysis was too fundamental to warrant a response in their journal, I'd show some humility and accept that as valid criticism. But hey, that's just me. KimEide 15:48, 16 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
KimEide15:55, 16 September 2008→Improper: third time is the charm
KimEide15:49, 16 September 2008→Improper
KimEide15:48, 16 September 2008→Improper
:::Kim, if you're naive enough to believe everything that the Liberal establishment tells you, well, hey, don't let us stop you. But don't think that you will be allowed to infect this site with your credulity. Bugler 15:54, 16 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
Bugler15:54, 16 September 2008→Improper
::::Well, if I'm credulous because I'd accept the opinion of an expert in a very difficult and technical field I have no formal training in, then so be it. KimEide 15:57, 16 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
KimEide15:57, 16 September 2008→Improper
:::::Bugler, is it possible for you to talk without threatening someone? Try some civility. --IanG 16:08, 16 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
IanG16:08, 16 September 2008→Improper
:::::::::::(At the risk of a block) Not "believe everything on the basis that they are experts", but neither "disbelieve everything on the basis that they are assumed to be Liberal". If one is to make use of expertise then one needs to accept something told on the basis of that expertise: you will have to revise at least one belief because of what the expert says. --Toffeeman 16:09, 16 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
Toffeeman16:09, 16 September 2008()
KimEide, the experts on Christianity are overwhelmingly in agreement: Jesus rose from the dead. Yet I expect that you don't accept that expert view. Meanwhile, you seem to accept the "expert" view of Lenski about statistics despite his having, as far as I can tell, no expertise in that subject.Aschlafly16:55, 20 September 2008archive
Philip J. Rayment23:01, 16 September 2008Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM22:57, 16 September 2008removed unsubstansive comments
Aschlafly16:16, 16 September 2008→Improper: reply
Those who don't want to think for themselves can return to Wikipedia and other playpens. Those who have substantive insights about the logical and statistical issues here, please do comment.--Aschlafly 16:16, 16 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
Philip J. Rayment23:01, 16 September 2008Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM22:57, 16 September 2008removed unsubstansive comments
Aschlafly16:16, 16 September 2008→Improper: reply
:I happen to know for a fact that Lenski is thoroughly trained in the statistical analysis of experiments like the one in question. In fact, he's published extensively in that area. And I'm afraid the historicity of the Resurrection is a minority view among New Testament scholars. KimEide 16:22, 16 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
KimEide16:22, 16 September 2008→Improper
::What might wash with Liberal agnostic theology professors won't wash with true Christians. Bugler 16:27, 16 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
Bugler16:27, 16 September 2008→Improper
:::I agree completely. That doesn't change the empirical fact that it's a minority view among scholars. KimEide 16:28, 16 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
KimEide16:28, 16 September 2008→Improper
::::I disagree. I have found that when people use the word "scholars", it is very specific to their side. So what is your definition of a scholar? In my life I have had the privilege of attending a number of different churches, three of which were run by men with PhDs. All three of them believed in the bodily resurrection of Christ. Taken across America, that number would be much higher. Learn together 17:39, 16 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
Learn together17:39, 16 September 2008→Improper
:::::If those men had PhDs in New Testament studies (or something related) from a serious university, I would certainly count them as scholars. But they would be in the minority in the field of New Testament studies. That's just an empirical fact I happen to know from being employed by a Liberal Arts University. In general, though, I think of a scholar as someone who has 1) studied the literature extensively, 2) defended his knowledge before a panel of other people who have studied the literature extensively (i.e. been awarded an advanced degree), 3) published in peer-reviewed journals, and 4) is or was actively engaged in the professional field, either through writing, teaching (or preaching), researching, giving papers at conferences, etc. That's just off the top of my head though. Most scholars will have all four of those. Some will only have three. Maybe occasionally one might just have two. KimEide 17:49, 16 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
KimEide18:03, 16 September 2008→Improper
KimEide17:49, 16 September 2008()
::::::This is a ridiculous argument. The resurrection of Jesus is a matter of faith and cannot be verified except by referring to the Bible. Statistical analysis is a branch of mathematics. The truths of mathematics are not subject to debate and are not matters of faith. Appealing to authorities in mathematics is completely different from appealing to a religious authority. MickA 17:52, 16 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
KimEide18:03, 16 September 2008→Improper
MickA17:52, 16 September 2008()
MickA17:51, 16 September 2008()
(undent)We're not arguing whether the opinion of a majority of biblical scholars is a reliable indicator of whether Jesus rose from the dead or not. We're just arguing what that majority opinion is. It may seem irrelevant, but it is crucially important to the issue at hand...ummm...somehow. KimEide 18:02, 16 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
KimEide18:02, 16 September 2008→Improper
: Christian theologians hold the equivalent of PhDs, and yet you reject the conclusion of those "experts". I have found nothing in Lenski's published background to indicate any expertise by him in statistics. Yet you accept Lenski's view on statistics without even thinking through the issues on your own, while rejecting the consensus of Christian experts. Why? The answer is obvious: bias and lack of open-mindedness.--Aschlafly 18:25, 16 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
Philip J. Rayment23:01, 16 September 2008Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM22:57, 16 September 2008removed unsubstansive comments
Aschlafly18:25, 16 September 2008→Improper: reply
:: Biological studies routinely require at least some application of statistical theory, especially when studying populations of organisms or molecules which molecular (and evolutionary) biology usually involves. Biostatistics (the statistical theories commonly used in biomedicine) is often a required course for PhD students for this reason. Established researchers in the life sciences do not need to have a degree in statistics to indicate expertise - Lenski's publication record itself indicates that he has successfully applied statistical theory in his analyses many times. At any rate, if someone can simply claim without proof that he has taken (and understood) statistics courses on the level needed to evaluate Lenski's paper, then he has no basis on which to accuse others of lacking expertise.Aschlafly16:55, 20 September 2008archive
Shwu19:18, 16 September 2008()
::: Malarkey. I've taken and excelled in upperclass statistics courses, and there were no biology students, college or graduate, in them. If Lenski has expertise in statistics then let's see it. His own "biographical sketch" doesn't even disclose what his undergraduate major was at Oberlin or what his PhD concentration at the University of North Carolina was Aschlafly16:55, 20 September 2008archive
MikeR09:55, 17 September 2008→Improper
Philip J. Rayment23:01, 16 September 2008Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM22:57, 16 September 2008removed unsubstansive comments
Aschlafly19:26, 16 September 2008→Improper: sig
Aschlafly19:25, 16 September 2008→Improper: reply
::: The absence of biology students in your statistics courses does not prove that no biology students take statistics (and I am surprised that you knew what the major was of every student in your classes); as a life sciences graduate student I see biology students take statistics (usually biostatistics) all the time. Also, you are quite quick to jump to conclusions - just because Lenski does not list his fields of study for BA and PhD does not mean he is hiding something, and a biographical sketch is not a detailed CV. His honors and appointments in microbiology are all quite apparent. Aschlafly21:54, 16 September 2008Reverted edits by Shwu (Talk); changed back to last version by Aschlafly
Shwu21:37, 16 September 2008→Improper
::::: At the risk of making one more non-substantive edit (how can I avoid it? - I don't know what one is!), I would warrant that your own CV probably doesn't list every course you took at Harvard when you studied Law. Perhaps you studied shipping law - perhaps not? We would never know. A crucial and absolutely standard part of a life scientist's university education is statistics - just as it is normal for geographers, geologists, economists, geneticists, etc. Simply because he doesn't list statistical qualifications is no proof that he doesn't have them. It is essentially not possible for him to hold his degree without statistical training. Might I ask where you took your own "upperclass statistics course"? BenHur 22:17, 16 September 2008 (EDT)Aschlafly22:43, 16 September 2008Reverted edits by BenHur (Talk); changed back to last version by Aschlafly
BenHur22:19, 16 September 2008()
BenHur22:18, 16 September 2008()
BenHur22:17, 16 September 2008()
in.[3]--Aschlafly 19:26, 16 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
MikeR09:55, 17 September 2008→Improper
::::Andy, the bottom line is that PNAS publishes letters online criticizing their publications all the time. Every single issue. It's the whole function of "Letters to the PNAS". It's the only reason the letters section of the PNAS exists. They would have published your letter if it had shown the requisite understanding of the issues at hand. How do we know this? Because they do it all the time. Why didn't they publish yours? Because your understanding of how to statistically analyze experiments involving bacteria populations isn't up to snuff. How could it be? A man who was truly interested in learning would show humility, accept criticism, and dive into the relevant literature about Monte Carlo, z-transformations, etc. A man who was only interested in protecting his ego would carry on the lie that Lenski and the PNAS don't know what they are talking about. KimEide 09:27, 17 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
KimEide09:27, 17 September 2008→Improper: My last word on the subject
::::: PNAS won't publish any letter critical of evolution in any way, even when the letter points out 5 obvious statistical flaws.--Aschlafly 09:58, 17 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
Aschlafly09:58, 17 September 2008→Improper: reply to KimEide
:::::I still, having read your previous comments on the original letter, don't feel that you've explained a) why the use of Monte Carlo was wrong, b) why the use of the Z-transformation was wrong, or c) critically, what alternative analysis you would have used, and what p-values you would have obtained with it. That last point is what I feel you really need to address if you want PNAS to really take notice. What would you use, and what p-values would you obtain?Aschlafly16:55, 20 September 2008archive
MikeR09:57, 17 September 2008→Improper
MikeR09:56, 17 September 2008→Improper
MikeR09:55, 17 September 2008→Improper
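(For readers trying to follow the terminology being argued over above: "Monte Carlo" here just means estimating a p-value by simulating the null hypothesis many times and counting how often the simulation looks at least as extreme as the observed data. The sketch below is a generic illustration of that idea in Python, not Lenski's actual analysis; the four replay-1 generation values are the ones quoted later on this page, while the grid of tested generations and the uniform null model are assumptions made purely for illustration.)
<pre>
# A generic Monte Carlo significance test (illustration only, not Lenski's code).
# Assumed null model: a Cit+ mutant is equally likely to derive from a clone of
# any tested generation; the generation grid below is a hypothetical placeholder.
import random

random.seed(1)

tested_generations = list(range(20000, 33000, 500))   # hypothetical sampling grid
observed = [30500, 31500, 32500, 32500]               # replay-1 data points quoted on this page
obs_stat = sum(observed) / len(observed)              # observed mean generation

n_sims = 100_000
extreme = 0
for _ in range(n_sims):
    sim = [random.choice(tested_generations) for _ in observed]
    if sum(sim) / len(sim) >= obs_stat:               # one-sided: simulated mean at least as late
        extreme += 1

print(f"Monte Carlo p-value: {extreme / n_sims:.4f}")
</pre>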
:::::Additionally, graduate level statistics is really quite basic by scientific standards. The gap between a Ph.D. and degree level knowledge is like the gap between a degree and junior high school. Biology as a field is hugely intertwined with complex statistics, and many biologists will have gained an understanding well beyond graduate level. Aschlafly16:55, 20 September 2008archive
MikeR09:56, 17 September 2008→Improper
MikeR09:55, 17 September 2008→Improper
:::::I'm not "anti" this letter, I'm not a liberal agitator, I've made decent contributions in my time here, but I really feel that this letter needs somebody to just take a step back, and consider that maybe the first attempt can be improved. As I mentioned previously, I think some good, proactive steps would be to concentrate more on the missing data claim, and to bring on board somebody with an expert knowledge of statistics or biology to help refine the claims, and double check whether you're right on some of these points or not. MikeR 09:55, 17 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
MikeR09:56, 17 September 2008→Improper
MikeR09:55, 17 September 2008→Improper
::::I'm not "anti" this letter, I'm not a liberal agitator, I've made decent contributions in my time here, but I really feel that this letter needs somebody to just take a step back, and consider that maybe the first attempt can be improved. As I mentioned previously, I think some good, proactive steps would be to concentrate more on the missing data claim, and to bring on board somebody with an expert knowledge of statistics or biology to help refine the claims, and double check whether you're right on some of these points or not. MikeR 09:55, 17 September 2008 (EDT)MikeR09:56, 17 September 2008→Improper
MikeR09:55, 17 September 2008→Improper
:::::: MikeR, PNAS never publishes a letter containing a meaningful criticism of a pro-evolution article, no matter how obvious and egregious the flaws. If the Lenski evolution article had incorporated a flaw tantamount to claiming that 2+2=5, the PNAS would still not admit the error. Check out evolution syndrome.--Aschlafly 10:14, 17 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
Aschlafly10:14, 17 September 2008→Improper: reply
:::::: Perhaps, but Andy, with respect, I'm not convinced you were correct in your belief that all of those 5 points were genuine flaws. That's why I think it's worth getting a second opinion from an unbiased outsider with post-graduate expertise in statistics. You've not really given an explanation of why the Monte Carlo or Z-transformation approaches were wrong, nor have you stated what you believe the correct p-value should have been, and until you do I'm going to continue to have doubts about your analysis, based on my own knowledge. MikeR 12:27, 17 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
MikeR12:27, 17 September 2008→Improper
::::::: MikeR, I obviously welcome another unbiased outside opinion. Let us know when you get one. But beware, the evolution syndrome you've seen here will make anyone think twice before they criticize any aspect of any paper that promotes evolution. Anyone seeking funding or tenure will think twice before daring to question any aspect of an evolution paper, lest they be subjected to the hysteria you've seen here. So I'm not optimistic that you will be able to find anyone willing to attach his name to this dispute. You may have to use your own mind on this one.--Aschlafly 12:32, 17 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
Aschlafly12:32, 17 September 2008→Improper: reply
:::::::: I've not really seen any "hysteria" here; it's just that a number of us are not entirely convinced that you're right. You could persuade us by explaining, specifically, why you believe that the Monte Carlo or Z-transformation approaches were wrong, and what p-values you believe should have been obtained. I think the PNAS letter is somewhat rude, but there's nothing much in it that I can really disagree with regarding the statistics (although they don't exactly say a lot either). I don't see a problem with combining samples of different sizes using a Z-transformation, for example, as it's a technique I use on a regular basis and, I believe, a pretty standard statistical approach at this level. Indeed, that's why Fisher's Z-transformation technique was invented, to combine samples of different sizes. I'm genuinely interested to know what the issue is with it in this case.Aschlafly16:55, 20 September 2008archive
MikeR15:15, 17 September 2008→Improper
MikeR15:13, 17 September 2008→Improper
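(As an aside for readers who want to see what combining results from samples of different sizes looks like in practice: a weighted Z combination of p-values is available in SciPy via its p-value combination routine with the "stouffer" method. The p-values and weights below are invented placeholders, not Lenski's numbers; only the mechanics are the point.)
<pre>
# Combining p-values from studies of different sizes with the weighted Z method.
# All numbers here are made-up placeholders.
from scipy.stats import combine_pvalues

pvalues = [0.03, 0.20, 0.04]   # hypothetical one-sided p-values from three studies
weights = [10, 40, 25]         # hypothetical weights, e.g. proportional to sample size

z_u, p_u = combine_pvalues(pvalues, method='stouffer')
z_w, p_w = combine_pvalues(pvalues, method='stouffer', weights=weights)

print(f"unweighted: Z = {z_u:.3f}, p = {p_u:.4f}")
print(f"weighted:   Z = {z_w:.3f}, p = {p_w:.4f}")
</pre>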
::::::::As a side note, some of these statistical terms might be worth elaborating on in articles here on Conservapedia. I might make that my next project actually, if anyone wants to help. MikeR 15:13, 17 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
MikeR15:13, 17 September 2008→Improper
:::::::::Aschlafly, would you accept that your description of PNAS is directly analogous to your own resistance to criticism of an anti-evolution stance? I ask you again - i) are you 100% certain that your analysis is correct, ii) do you believe you are the best person to make that judgement, and iii) would you consider bringing in an unbiased third-party statistician to review your methodology? What could be wrong with that if you are certain you are correct? It could only strengthen your argument. BenHur 13:15, 17 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
BenHur13:15, 17 September 2008→Improper
::::::Aschlafly - here are three links to letters published in 2008 by PNAS which do, in fact, contain meaningful criticism of pro-evolution articles.:Aschlafly16:55, 20 September 2008archive
BillK12:41, 17 September 2008()
* Ancient lineages in the genome: A response to Fagundes et alAschlafly16:55, 20 September 2008archive
BillK12:41, 17 September 2008()
* Natural selection does not explain cultural rates of changeAschlafly16:55, 20 September 2008archive
BillK12:41, 17 September 2008()
* When sleep or hide makes the difference in macroevolutionAschlafly16:55, 20 September 2008archive
BillK12:41, 17 September 2008()
Perhaps you might care to modify your claims in the light of this....BillK 12:41, 17 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
BillK12:41, 17 September 2008()
: Your first example is a defense of a theory of evolution, and thus tends to support my point. Given the failure of your first citation to support your point, I did not bother to look at your other two.--Aschlafly 14:48, 17 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
Aschlafly14:48, 17 September 2008→Improper: your cite does not support your point
::The first article clearly is criticizing a pro-evolution article. You said "MikeR, PNAS never publishes a letter containing a meaningful criticism of a pro-evolution article, no matter how obvious and egregious the flaws." Above are the links to three articles that "contain meaningful criticism of a pro-evolution article." I don't know how to be more clear. There are pro-evolution articles published in the PNAS. Three of them are criticized in the letters above. What more do you want? JohnDee 21:38, 17 September 2008 (EDT) Aschlafly16:55, 20 September 2008archive
JohnDee21:38, 17 September 2008→Improper
What if the other two were indeed the type of articles you were asking for? Fred1776 16:02, 17 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
Fred177616:02, 17 September 2008→Improper
==Clarification on peer review and anonymity==Aschlafly09:37, 20 September 2008cleaned out unsigned, unsubstantive comment
Shwu17:59, 16 September 2008()
There has been, I think, undue outrage over the fact that the reviewer's name was not included in the response to Schlafly's letter. To clarify, this is standard procedure when reviewing manuscripts for scientific journals and does NOT in any way indicate cowardice or uncertainty. Anonymous peer review allows for reviews that are rigorous, honest, and, most importantly, objective, since if no one knows your identity, no one can threaten you, bribe you, or otherwise influence your evaluation of the material. Thus, you can carry out your professional duties without concern over how your review of Person A's paper will affect Person A's opinion of you or his/her review of your paper in the future, for example.Aschlafly09:37, 20 September 2008cleaned out unsigned, unsubstantive comment
Shwu17:59, 16 September 2008()
The anonymity of the response is standard procedure in scientific publishing and necessary to ensure objectivity of the evaluation process, and should not be construed negatively as has been done here.Aschlafly09:37, 20 September 2008cleaned out unsigned, unsubstantive comment
Shwu17:59, 16 September 2008()
:I hate to say it so bluntly, Andy, but you more than somewhat had this coming. You displayed, quite simply, an elementary knowledge of statistics, and an ignorance of a vast body of general knowledge. I believe it is entirely possible you suffer from Creation Syndrome. IssacP 10:24, 17 September 2008 (EDT)Aschlafly10:29, 17 September 2008Reverted edits by IssacP (Talk); changed back to last version by Aschlafly
IssacP10:24, 17 September 2008()
== Statistics examples ==Aschlafly16:55, 20 September 2008archive
DRamon14:33, 17 September 2008→Statistics examples: new section
I see lots of people are clamoring for some specific statistical data on here, and although I am no expert on statistics, I can give a simple example that demonstrates flaw #5 in the letter to PNAS (that combining different samples is invalid). So let's say I am testing some hypothesis, and I have 100 samples, of which 48 are "pro" (support) and 52 are "con" (against) the hypothesis. So this doesn't support my hypothesis at all. But just for fun let's do another, smaller experiment with just 10 samples. And suppose in this smaller (statistically insignificant by itself) experiment 8 samples were "pro" and 2 were "con." If I combine the 2 experiments, I get a total of 56 "pro" and 54 "con", so more than 50% "pro" (in a large total sample size!), appearing to support the hypothesis, even though that's obviously not the case! --DRamon 14:33, 17 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
DRamon14:33, 17 September 2008→Statistics examples: new section
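(DRamon's arithmetic can be checked in a few lines. The sketch below simply reproduces the numbers in his example; whether pooling raw counts across experiments of very different sizes is the right procedure is exactly what is in dispute here, so this is an illustration of the example, not an endorsement of either side.)
<pre>
# Reproducing DRamon's example: per-experiment vs. naively pooled proportions.
experiments = [
    {"pro": 48, "con": 52},   # large experiment: 48% "pro"
    {"pro": 8,  "con": 2},    # small experiment: 80% "pro"
]

for i, e in enumerate(experiments, start=1):
    total = e["pro"] + e["con"]
    print(f"experiment {i}: {e['pro']}/{total} pro = {e['pro'] / total:.1%}")

pooled_pro = sum(e["pro"] for e in experiments)   # 56
pooled_con = sum(e["con"] for e in experiments)   # 54
pooled_total = pooled_pro + pooled_con            # 110
print(f"pooled: {pooled_pro}/{pooled_total} pro = {pooled_pro / pooled_total:.1%}")
</pre>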
== Data from several experiments ==Aschlafly16:55, 20 September 2008archive
KennyMac08:20, 18 September 2008→Data from several experiments: new section
I would like to contribute to this discussion because I have taught statistics to graduate biology students for 16 years.KennyMac08:20, 18 September 2008→Data from several experiments: new section
The combination of data from several experiments is a specialist and sometimes difficult area of statistical theory, but a simple example shows why Aschlafly's concern about combining the results of three different experiments is not justified and why this aspect of his criticism of Lenski's recent paper in PNAS is not valid.KennyMac11:30, 18 September 2008→Data from several experiments
KennyMac08:20, 18 September 2008→Data from several experiments: new section
Suppose we want to conduct a test of whether or not men are taller than women on average. For the sake of the example, I generated random heights of people from a population in which men had an average height of 175cm (5'10") and women of 165cm (5'6"). The standard deviations of height in both sexes were 7cm. I think these numbers are approximately correct for people in the UK but the details aren't important.KennyMac08:20, 18 September 2008→Data from several experiments: new section
Suppose we take 5 samples of 2 men and 2 women. Here are the numbers I generated:KennyMac08:20, 18 September 2008→Data from several experiments: new section
{| class="wikitable"Toffeeman15:18, 20 September 2008→Data from several experiments
Toffeeman15:18, 20 September 2008→Data from several experiments
KennyMac08:20, 18 September 2008→Data from several experiments: new section
KennyMac08:20, 18 September 2008→Data from several experiments: new section
! Weighting !! Z transformed !! p Toffeeman15:18, 20 September 2008→Data from several experiments
|-
| Equal || 4.025 || <0.001Toffeeman15:18, 20 September 2008→Data from several experiments
|-
| By total Cit+ || 3.576 || <0.001Toffeeman15:18, 20 September 2008→Data from several experiments
|-
| By total replicates || 1.825 || 0.034Toffeeman15:18, 20 September 2008→Data from several experiments
|}
So weighting on the basis of the number of replicates considerably increases the P-value. It remains, however, well within the range of statistical significance (P < 0.05). If we hold that Lenski should have weighted on the basis of replicates, then he should still have rejected the null hypothesis and reached exactly the same conclusions that he did. The entire paper would have been exactly the same except that the sentence "the result is extremely significant (P<0.0001) whether or not…" would read "the result is significant (P<0.04) whether or not". Point 5 establishes one number and an "extremely". Point 5, therefore, has no weight (excuse the pun). Aschlafly17:56, 24 September 2008Reverted edits by Toffeeman (Talk); changed back to last version by Brossa
Toffeeman09:36, 24 September 2008Pointed out by certain gentlemen on a certain site
Toffeeman15:18, 20 September 2008→Data from several experiments
--Toffeeman 15:18, 20 September 2008 (EDT)Toffeeman15:18, 20 September 2008→Data from several experiments
: Sorry, Toffeeman, a falsehood is still a falsehood. Based on your own posting, if Lenski had applied the Whitlock Z-transform paper in a logical manner, the results would not have been nearly as striking as Lenski claimed (his paper said the results were "extremely significant"). Moreover, I found Lenski's description of his application of the Whitlock paper to be particularly misleading. Lenski's use of "whether or not"<ref>"We also used the Z-transformation method (49) to combine the probabilities from our three experiments, and the result is extremely significant (P < 0.0001) whether or not the experiments are weighted by the number of independent Cit+ mutants observed in each one." (Lenski paper at 7902).</ref> obscures the basic error that he did not apply Whitlock's paper in the straightforward, correct manner. I think the wording in the Lenski paper deliberately obscures this falsehood from the reader.Aschlafly16:47, 20 September 2008reply re: Point 5
: People have free will to embrace and defend falsehoods. I don't expect them to change quickly or admit they were wrong. But you'll find me defending and promoting the truth.Aschlafly16:47, 20 September 2008reply re: Point 5
: Point 5 remains valid and the falsehood remains uncorrected by PNAS or Lenski. Four other points in higher priority remain uncorrected by them also.--Aschlafly 16:47, 20 September 2008 (EDT)Aschlafly16:47, 20 September 2008reply re: Point 5
::"a falsehood is still a falsehood". Precisely, the null hypothesis should be considered false and the conclusions of the paper stand.Toffeeman17:13, 20 September 2008→Point 5 Confirmed
::Oh? Do you mean Lenski's falsehood? And what falsehood is that? Lenski said that he had calculated the P-value without weighting and that had come out at <0.0001. That is true, not false. Lenski said he had calculated the P-value weighting on the basis of the Cit+ replicates and that had come out at <0.0001. That is true, not false. There is no falsehood. Lenski did not mention the results of weighting on the basis of replicate numbers. He thus made no claim about weighting on the basis of replicate numbers. If he made no claim he cannot have made a false claim. Toffeeman17:13, 20 September 2008→Point 5 Confirmed
::How do the words Lenski uses "mislead"? If you read them as written, what conclusion do you come to? You are led to the conclusion that the mutation was not "rare-but-equal"; instead it was contingent. That is the right conclusion. If Lenski had presented the data in a different manner (perhaps by including the results of weighting on the basis of replicate numbers), what conclusion do you come to? You are again led to the conclusion that the mutation was not "rare-but-equal"; instead it was contingent. That is the right conclusion. To mislead, you must be led to a conclusion that is incorrect. By Lenski's paper you are not led to a conclusion that is incorrect. Thus it cannot be said to be "misleading". Toffeeman17:13, 20 September 2008→Point 5 Confirmed
::I shall not comment on your second paragraph; the temptation to "Tu Quoque" would be too great. Toffeeman17:14, 20 September 2008→Point 5 Confirmed
Toffeeman17:13, 20 September 2008→Point 5 Confirmed
--Toffeeman 17:13, 20 September 2008 (EDT)Toffeeman17:13, 20 September 2008→Point 5 Confirmed
::: The falsehood consists of pretending to apply the Whitlock Z-transform in a straightforward, logical and correct manner. I think the Lenski paper is intentionally misleading by using the "whether or not" wording, when both alternatives are nonsensical. Point 5 has been proven above to be correct in identifying an error in the Lenski paper.Aschlafly17:54, 20 September 2008→Point 5 Confirmed: reply
::: "Toffeeman", your blocking history suggests you have been less than straightforward yourself. Go elsewhere if you seek to be deceitful. You're not fooling anyone here.--Aschlafly 17:54, 20 September 2008 (EDT)Aschlafly17:54, 20 September 2008→Point 5 Confirmed: reply
::::Lenski did apply the Z-transformation correctly, both weighted and unweighted. The data points extracted from each replay are the generation numbers of those replicates that gave rise to Cit+ mutants. Thus Replay 1 produced four data points: 30,500 31,500 32,500 32,500. Replay 2 produced five data points: 32,000 32,000 32,000 32,000 32,500. Replay 3 produced eight data points: 20,000 20,000 27,000 27,000 31,000 31,500 32,000 32,000. Thus the N for replay 1 is 4; replay 2 is 5 and replay 3 is 8. The fact that replay 3 used 38 times as many replicates as replay 1 does not mean that it should be weighted 38 times as much; it only produced twice as much data, not 38 times as much. Brossa00:06, 21 September 2008→Point 5 Confirmed
::::Suppose I want to find out what the average age of a murderer is in three cities. In L.A. I interview 72 random people and find that 4 of them were convicted of murder; I record the ages of the four. In Seattle I interview 340 people and find that 5 of them are convicted murderers; likewise in Singapore I interview 2800 and find 8 murderers. In the end, I have 4,5, and 8 data points from the three cities; the number of people I had to interview to obtain those data points doesn't factor into the analysis of what the average age of the murderers is.--Brossa 00:06, 21 September 2008 (EDT)Brossa00:06, 21 September 2008→Point 5 Confirmed
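(Brossa's description of what the data points are can be made concrete. The sketch below lists the generation values he quotes for the three replay experiments and computes n and the mean generation for each; the replay-2 mean of 32,100 matches the "mean generation of clones yielding Cit+" he cites further down the page. Nothing here goes beyond the numbers already quoted in his comments.)
<pre>
# The data points Brossa describes: generation of origin for each replicate
# that yielded a Cit+ mutant in the three replay experiments.
replays = {
    1: [30500, 31500, 32500, 32500],
    2: [32000, 32000, 32000, 32000, 32500],
    3: [20000, 20000, 27000, 27000, 31000, 31500, 32000, 32000],
}

for replay, generations in replays.items():
    n = len(generations)
    mean_gen = sum(generations) / n
    print(f"replay {replay}: n = {n}, mean generation = {mean_gen:.1f}")
</pre>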
::::: In your first paragraph you simply repeat the error underlying Lenski's paper. You, like the paper, incorrectly apply Whitlock's Z-transform.Aschlafly11:30, 21 September 2008repy to Brossa
::::: The quality and reliability of data is proportional to sample size, and when different studies are combined they need to be weighted accordingly. The results from a very large sample size would not be weighted equally with the results from a small sample size, as you and Lenski have done. That's basic logic, though I'm not optimistic that you or Lenski will admit it. Open-minded people who respect logic have no difficulty elevating logic over personal whim.-Aschlafly 11:30, 21 September 2008 (EDT)Aschlafly11:30, 21 September 2008repy to Brossa
:::::: So, what is the p-value you find when performing the calculations according to your way? --Stitch75 12:03, 21 September 2008 (EDT)Aschlafly12:13, 21 September 2008Reverted edits by Stitch75 (Talk); changed back to last version by Aschlafly
Stitch7512:03, 21 September 2008→Point 5 Confirmed
::::::Andy, if you read Whitlock's paper you would see it says, and I quote, "Ideally each study is weighted proportional to the inverse of its error variance, that is, by the reciprocal of its squared standard error." It says nothing about weighting according to sample size, which is what you seem to be insisting should be done.DanB20:40, 21 September 2008→Point 5 Confirmed
DanB20:39, 21 September 2008→Point 5 Confirmed
::::::Also Whitlock acknowledges in the paper that there is no preference for weighted versus equal weighting, so the fact that both equal weighting and weighting by the standard error give a statistically significant result shows that the 3 experiments combined support rejection of the null hypothesis. DanB 20:39, 21 September 2008 (EDT)DanB20:39, 21 September 2008→Point 5 Confirmed
:::::::ASchlafly, you state that I incorrectly apply "Whitlock's Z-transform" (actually the test belongs to Mosteller & Bush<ref>Mosteller, F. & Bush, R.R. 1954. Selected quantitative techniques. In: Handbook of Social Psychology, Vol. 1 (G. Lindzey, ed.)</ref> and/or Liptak<ref>Liptak, T. 1958. On the combination of tests. Magyar Tud. Akad. Mat. Kutato Int. Kozl. 3: 171-197</ref>). Whitlock describes weighting by the reciprocal of the squared standard error. The standard error of the mean is proportional to 1/sqrt(N), so the reciprocal of the squared standard error is proportional to N. Thus larger studies are given more weight. I maintain that the sample sizes N of the three replays are 4, 5, and 8 respectively. Weighting based on those three N does not weight all three replays equally as you claim: it gives replay 2 25% more weight and replay 3 100% more weight than replay 1. Rather than simply repeating that I am wrong, will you please state what you think the sample sizes of the three replay experiments are, and, in your opinion, what the correct application of the Z-transformation would be?--Brossa 18:00, 22 September 2008 (EDT) Brossa19:21, 22 September 2008response to ASchlafly and DanB
Brossa18:00, 22 September 2008()
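(Brossa's algebra is easy to spell out: if the standard error of a mean scales as 1/sqrt(n), then the 1/SE² weighting he quotes from Whitlock reduces to weighting by n itself. A short sketch using the n of 4, 5 and 8 that Brossa argues for; those n, not the weighting rule, are the quantity actually in dispute.)
<pre>
# Weighting by the reciprocal of the squared standard error, with SE proportional
# to 1/sqrt(n), is the same (up to a constant) as weighting by n itself.
import math

ns = [4, 5, 8]                            # Brossa's sample sizes for the three replays
ses = [1 / math.sqrt(n) for n in ns]      # SE of a mean is proportional to 1/sqrt(n)
weights = [1 / se**2 for se in ses]       # Whitlock's weighting: 1 / SE^2

print(weights)                            # ~[4, 5, 8] up to floating-point noise
print([w / weights[0] for w in weights])  # replay 3 gets twice replay 1's weight, 8/5 of replay 2's
</pre>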
:::::::: The Lenski paper states how it weighted the experiments, and that weighting is incorrect. Admit it. Moreover, the incorrect weighting in the Lenski paper was not likely an inadvertent error, as it inflated the significance of the results. I found the wording used by the Lenski paper to describe its (incorrect) weighting to be artfully misleading.Aschlafly08:35, 23 September 2008reply Brossa
:::::::: Provide me with federal funding as Lenski received, and I'll write a paper for you. But I don't have to write an alternative paper to point out glaring errors in Lenski's paper.--Aschlafly 08:35, 23 September 2008 (EDT)Aschlafly08:35, 23 September 2008reply Brossa
::::::::: I've been following this discussion for a while and I have to agree with ASchlafly. It hardly seems fair that he should have to, in his spare time, replicate an experiment done by a professional just to "earn" the right to criticize it. I am unfamiliar with statistics, but if some complicated transform goes against common sense, common sense should prevail. After all, there are lies, damned lies, and statistics... AndyM 10:57, 23 September 2008 (EDT)AndyM10:57, 23 September 2008()
(unindent)I'm not asking anyone to write a paper or replicate an experiment. I'm asking ASchlafly to support his statement "The results from a very large sample size would not be weighted equally with the results from a small sample size, as you and Lenski have done"(bolding mine). I have stated publicly, subject to challenge by others, that the sample sizes (n) of the three replays are four, five, and eight respectively. Furthermore, using n of 4, 5, and 8 in the weighted Z-method DOES NOT weight all the replay experiments equally - it weights replay 3 twice as much as replay 1 and 8/5 as much as replay 2. Tell you what: I'll drop all my questions about Monte Carlo and the Z-transform, and simply ask ASchlafly one question: what is the sample size, n, of the second replay experiment? He need not even do any calculations - a statement in words that will allow someone else to do the calculation will suffice. This is not a complicated question to answer; the paper states how many replicate cultures there were (340), how many cells there were in each replicate (3.9x10^8), how many replicates gave rise to Cit+ cells (5), and which generations those Cit+ replicates came from (4 from 32,000 and one from 32,500). I will even give my answer: five. Furthermore, I will say why I believe that, using the murderer/age analogy: performing the 340 replicates is the same as interviewing 340 people in order to find out if any of them are convicted murderers. Finding that five replicates gave rise to Cit+ mutants is the same as the survey finding that 5 of those 340 people were convicted murderers. Finding that the Cit+ mutants arose from 4 replicates from generation 32,000 and 1 from generation 32,500 is the same as finding the ages of the murderers. The five data points in the Lenski study allow one to calculate the 'mean generation of clones yielding Cit+': 32,100. This is the same as finding the mean age of the five murderers. If I want to compare this hypothetical murderer age study to some other study of the mean age of murderers, I would weight the studies based on how many murderers were in each study, not on how many non-murderers were included in the initial survey. Brossa15:54, 23 September 2008()
Surely ASchlafly can say what he thinks the n of the second replay is, even if he won't say why he thinks it. Is it five? 340? The number of replicates times the number of cells per replicate? Something else? No analysis need be performed on the resulting number.--Brossa 15:54, 23 September 2008 (EDT)Brossa15:54, 23 September 2008()
:OK - would you care to put in writing that after ASchlafly gives you his response, you won't start obfuscating the issue with Monte Carlo and Z-transform issues? You understand: it is typical of liberals, after being proven wrong, to start pretending that they were talking about an entirely different issue altogether. After ASchlafly states the sample size of the second replay experiment you will consider yourself answered. Correct?AndyM16:57, 23 September 2008()
:Brossa, your rant is misplaced. One cannot salvage an error in logic by questioning which of superior alternatives should be used instead. The sample size of an experiment is the number that comprises the underlying sample used in the experiment, not the number of a certain outcome from the experiment. Maybe you can debate yourself over what the correct underlying sample size is, but it is plainly not the number of a certain outcome from the experiment.--Aschlafly 19:28, 23 September 2008 (EDT)Aschlafly19:28, 23 September 2008reply to Brossa
(non-substantive postings deleted; the rule for this page is enforced.--Aschlafly 17:58, 24 September 2008 (EDT))Aschlafly17:58, 24 September 2008non-substantive postings deleted
Brossa07:34, 24 September 2008()
:::Aschlafly, are we not getting back to the distinction I made here? A suggestion that you rejected?Aschlafly17:56, 24 September 2008Reverted edits by Toffeeman (Talk); changed back to last version by Brossa
Toffeeman07:37, 24 September 2008()
:::If we are agreed on what constitutes the sample then it is clear that Lenski did combine the sample in a logical manner with proper weighting. He followed accepted practice and used established statistical techniques. Your letter did not dispute the soundness of accepted practice and established techniques, it alleged that Lenski had diverted from them. If the sample is agreed then this allegation is untrue.Aschlafly17:56, 24 September 2008Reverted edits by Toffeeman (Talk); changed back to last version by Brossa
Toffeeman07:37, 24 September 2008()
:::If, contrary to your earlier statement, there is a dispute about the sample, then it is Lenski who is correct on the sample size. Lenski is testing a hypothesis about Cit+ replicates. If you were testing (to use Brossa's example) "most murders are committed by 20 to 35 year olds", you would include murderers; you would not run statistics on the age of fraudsters. You may get your data from a source that includes fraudsters, say prison records that include details of all types of criminals. However you would exclude non-murderers, and correctly exclude non-murderers, when reviewing the age of murderers. Aschlafly17:56, 24 September 2008Reverted edits by Toffeeman (Talk); changed back to last version by Brossa
Toffeeman09:35, 24 September 2008()
Toffeeman07:37, 24 September 2008()
:::That is the situation we have here. Lenski’s null hypothesis is that Cit+ replicates arose randomly. This says no more about Cit- replicates than it does about what Lenski had for lunch the day he wrote the paper. The presence of Cit- replicates is a result of the method of generating the sample, a side product. To give another example, if you were doing an experiment breeding chickens you would generate a lot of manure: you wouldn’t include it in the analysis!Aschlafly17:56, 24 September 2008Reverted edits by Toffeeman (Talk); changed back to last version by Brossa
Toffeeman09:35, 24 September 2008()
Toffeeman07:37, 24 September 2008()
:::Finally, the argument has no end product: if we erroneously consider the Cit- replicates as part of the sample, the null hypothesis is still refuted. Your objections to Lenski's wording do not establish deceit on his part, though they would if they were themselves established. Your objections to Lenski's calculations, however, would not establish the falsity of his conclusions even if the objections were correct. The calculations of the three differing weightings above are not intended:Aschlafly17:56, 24 September 2008Reverted edits by Toffeeman (Talk); changed back to last version by Brossa
Toffeeman09:35, 24 September 2008()
Toffeeman07:37, 24 September 2008()
::::1. to show that Lenski was honest (he was, but that is established by other evidence) Aschlafly17:56, 24 September 2008Reverted edits by Toffeeman (Talk); changed back to last version by Brossa
Toffeeman07:37, 24 September 2008()
::::2. to show that Lenski correctly applied accepted statistical techniques (he did, but that is established by other evidence) orAschlafly17:56, 24 September 2008Reverted edits by Toffeeman (Talk); changed back to last version by Brossa
Toffeeman07:37, 24 September 2008()
::::3. to show that only Cit+ replicates are part of “the sample” (they are, but that is established elsewhere)Aschlafly17:56, 24 September 2008Reverted edits by Toffeeman (Talk); changed back to last version by Brossa
Toffeeman07:37, 24 September 2008()
:::The calculations of the three differing weightings above show that your objections are moot. --Toffeeman 07:37, 24 September 2008 (EDT)Aschlafly17:56, 24 September 2008Reverted edits by Toffeeman (Talk); changed back to last version by Brossa
Toffeeman07:37, 24 September 2008()
:After reading the deleted postings in the history I'm not sure you did the right thing ASchlafly. They should stay here, or be moved to the liberal style page that they perfectly illustrate. On the other hand, deleting them keeps readers from having to sort through that trash: three different weighting styles?? Surely Brossa and Toffeeman can at least attempt an honest argument without having to conjure up new mathematics. AndyM 19:15, 24 September 2008 (EDT)AndyM19:15, 24 September 2008()
Maybe the Journal of Nature can be your next letter submission source. International Weekly Journal of Science -- 14px jp 21:17, 22 September 2008 (EDT)Aschlafly19:57, 24 September 2008moved non-substantive comments to "random talk"
Aschlafly19:57, 24 September 2008moved non-substantive comments to "random talk"
Jpatt21:17, 22 September 2008()
== Random Talk ==Aschlafly19:57, 24 September 2008moved non-substantive comments to "random talk"
 Aschlafly11:31, 23 September 2008Reverted edits by Jirby (Talk); changed back to last version by AndyM
Jirby11:27, 23 September 2008→Point 5 Confirmed
Aschlafly16:56, 20 September 2008→Lack of statistical expertise: removed rant in violation of rule
TomG16:53, 20 September 2008→Data from several experiments
Sorry, I really can't let this slide. Andy, by claiming that you've taken and excelled in upperclass statistics courses you are in effect claiming expertise in the subject. Anyone with expertise, or even basic knowledge of statistics, would not have written Mystery:Young Hollywood Breast Cancer Victims, wherein you argue the following: There are currently 500 "Hollywood" women under 30 who would disclose a diagnosis of breast cancer (a made-up number, but we don't need to dwell on that). We have an acknowledgement of 3 "Hollywood" under-30 women ever who have been diagnosed with breast cancer. Therefore the rate of breast cancer in Hollywood women under 30 is at least 3 in 500 (more than 10 times the national average). If you can't figure out the blatant error in that statement then there is no way on earth you can expect any of us to believe that you are capable of finding flaws in others' statistical analysis. While it is possible that you did once "excel" in high-level statistics courses, it is pretty obvious you have not retained what you learned. Nothing wrong with that, but you shouldn't pretend to have expertise you don't have. TomG 16:53, 20 September 2008 (EDT)Aschlafly16:56, 20 September 2008→Lack of statistical expertise: removed rant in violation of rule
TomG16:53, 20 September 2008→Data from several experiments
== References ==Aschlafly16:47, 20 September 2008added reference
<references/>Aschlafly16:47, 20 September 2008added reference
{| class="wikitable"
! Expt# !! p-value !! z-score !! #Cit+ muts !! #replicatesToffeeman15:18, 20 September 2008→Data from several experiments
|-
| 1 || 0.0085 || 2.387 || 4 || 72Toffeeman15:18, 20 September 2008→Data from several experiments
|-
| 2 || 0.0007 || 3.195 || 5 || 340Toffeeman15:18, 20 September 2008→Data from several experiments
|-
| 3 || 0.0823 || 1.390 || 8 || 2800Toffeeman15:18, 20 September 2008→Data from several experiments
|}
Applying the formula described above.. Toffeeman15:18, 20 September 2008→Data from several experiments
{| class="wikitable"
! Men mean !! Women mean !! Mean difference !! t !! PKennyMac11:30, 18 September 2008→Data from several experiments
KennyMac08:20, 18 September 2008→Data from several experiments: new section
|-
| 175.6 || 164.6 || 11 || 3.85 || 0.00058KennyMac08:20, 18 September 2008→Data from several experiments: new section
|}
Clearly, combining the data from several similar experiments strengthens the conclusions considerably, as shown by the fact that P is much smaller for the combined data than for any individual sample.KennyMac11:30, 18 September 2008→Data from several experiments
KennyMac08:20, 18 September 2008→Data from several experiments: new section
Although the combination of data from several experiments is a specialised area of statistics, I see nothing particularly incorrect about the approach used by Lenski and his colleagues. The general point is that it is valid to combine the results of different experiments if it is scientifically meaningful to do so. (For example: A. Combining the results of five samples of the heights of men and women is clearly valid. B. Combining three samples of heights of men and women with two samples of lengths of male and female squid clearly isn’t.) Generally speaking, the outcome of a combined analysis of several small experiments which all point in the same direction (or at least in a similar direction) will be more significant than that of any one of those experiments, as is shown in the larger table above.KennyMac11:45, 18 September 2008→Data from several experiments: A little clarification
KennyMac08:20, 18 September 2008→Data from several experiments: new section
I hope this clarifies the extensive discussion on this point and puts Aschlafly's mind at rest on this subject. KennyMac 08:20, 18 September 2008 (EDT)KennyMac08:20, 18 September 2008→Data from several experiments: new section
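(KennyMac's worked example can be re-run by anyone. His exact random draws are not reproduced on this page, so the sketch below regenerates data under the population parameters he states (men 175 cm, women 165 cm, SD 7 cm; five samples of two men and two women), combines the five small one-sided t-tests with an unweighted Z, and, for comparison, runs a single t-test on the pooled 10-vs-10 data, as in the summary row above. The particular p-values will differ from his because the random numbers differ.)
<pre>
# Re-running the spirit of KennyMac's example with fresh random draws.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
men_mean, women_mean, sd = 175.0, 165.0, 7.0       # population parameters stated above

one_sided_ps, all_men, all_women = [], [], []
for _ in range(5):                                 # five samples of 2 men and 2 women
    men = rng.normal(men_mean, sd, size=2)
    women = rng.normal(women_mean, sd, size=2)
    all_men.extend(men)
    all_women.extend(women)
    t_stat, _ = stats.ttest_ind(men, women)
    one_sided_ps.append(stats.t.sf(t_stat, df=2))  # one-sided: men taller; df = 2 + 2 - 2

# Unweighted Z (Stouffer) combination of the five small tests.
z = stats.norm.isf(one_sided_ps)
combined_z = z.sum() / np.sqrt(len(z))
print(f"five small tests combined: Z = {combined_z:.2f}, p = {stats.norm.sf(combined_z):.5f}")

# One t-test on the pooled 10-vs-10 data, for comparison with the summary table above.
t_all, _ = stats.ttest_ind(all_men, all_women)
print(f"pooled data: t = {t_all:.2f}, one-sided p = {stats.t.sf(t_all, df=18):.5f}")
</pre>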
:That's very nicely put, thanks. You should work on some of the stats pages here. Of course, technically any sample is ultimately just a combination of n samples of size 1. MikeR 13:28, 18 September 2008 (EDT)MikeR13:28, 18 September 2008→Data from several experiments
:I'll take a look at this Friday. It's not immediately obvious what the point is to your analysis above.--Aschlafly 23:46, 18 September 2008 (EDT)Aschlafly23:46, 18 September 2008→Data from several experiments: not immediately obvious
::"The general point is that it is valid to combine the results of different experiments if it is scientifically meaningful to do so"--KingOfNothing 00:57, 19 September 2008 (EDT)KingOfNothing00:57, 19 September 2008→Data from several experiments
:::This makes no sense as an argument. It may be true in this simple case that you can do one large or several small samples and get similar results - which is quite obvious and wouldn't need such a detailed rant. However, you provide no mathematical proof, just one example. Etc 01:08, 19 September 2008 (EDT)Etc01:08, 19 September 2008→Data from several experiments
::KingOfNothing is correct. The point is that, if a series of experiments all indicate (or tend to indicate) the same conclusion, the result of analysing the complete series of tests may be much stronger than that of any one test (i.e. it may have a much lower significance test probability). This is as true of Lenski's experiments as of the simple example above. Hence Aschlafly's criticism of Lenski's paper in this respect is invalid.DeanS10:11, 19 September 2008Undo revision 520261 by KennyMac (Talk) Sock of Blocked users
KennyMac09:59, 19 September 2008→Data from several experiments: Answering Aschafly's query
:: Regarding Etc's point: If you paraphrase Aschlafly's argument in his Letter to PNAS as the hypothesis that one weak result should cast doubt on several stronger results, even though they all tend to lead to the same conclusion, the example above disproves that hypothesis. No further mathematical analysis is necessary. (By the way, a carefully argued text is not a rant.) KennyMac 09:59, 19 September 2008 (EDT)DeanS10:11, 19 September 2008Undo revision 520261 by KennyMac (Talk) Sock of Blocked users
KennyMac09:59, 19 September 2008→Data from several experiments: Answering Aschafly's query
:"It's not immediately obvious what the point is to your analysis above". No surprise, Aschlafly, really. Maybe you should take your own advice: "I suggest you try harder with an open mind". --CrossC 02:46, 19 September 2008 (EDT)CrossC02:46, 19 September 2008→Data from several experiments: sig
CrossC02:45, 19 September 2008→Data from several experiments
It is with great sadness that I note that the author of this - the only significant statistical explanation and discussion in this entire fiasco - has just been blocked for five years. Even his email is blocked, so he can't even appeal the action. I don't see such manoeuvres as having contributed to the much vaunted "open mind" of which various people here speak. BenHur 10:27, 19 September 2008 (EDT)BenHur10:27, 19 September 2008→Data from several experiments
REPLY: I have now reviewed the above analysis, and it supports Point 5 rather than the PNAS paper. Point 5 stated, "The Third Experiment was erroneously combined with the other two experiments based on outcome rather than sample size, thereby yielding a false claim of overall statistical significance." The analysis above does nothing more than reinforce Point 5 by combining experiments based on sample size.Aschlafly19:24, 19 September 2008reply
TomG10:30, 19 September 2008→Data from several experiments
In Pavlovian manner, some Lenski types nod their head here in agreement at the above analysis, apparently unaware that it reinforces Point 5.Aschlafly19:24, 19 September 2008reply
AlexaJ22:42, 18 September 2008→Dear Aschlafly,: new section
When combining results from samples that are vastly different in sample size, it is necessary to factor in the different sample sizes. Apparently the PNAS paper failed to do that, which helps explain why it refuses to provide a meaningful response to Point 5.--Aschlafly 19:24, 19 September 2008 (EDT)Aschlafly19:24, 19 September 2008reply
(rants below were deleted for being non-substantive in violation of this page's rules.)--Aschlafly 19:24, 19 September 2008 (EDT)Aschlafly19:24, 19 September 2008reply
AlexaJ10:35, 19 September 2008Respond this time, please.
AlexaJ22:42, 18 September 2008→Dear Aschlafly,: new section
1. Figure 3 depicts an "historical contingency" hypothesis around the 31,000th generation, but the abstract states that mutations "arose by 20,000 generations." The paper fails to admit that the Third Experiment disproved the contingency hypothesis depicted in Figure 3. Aschlafly19:24, 19 September 2008reply
AlexaJ10:35, 19 September 2008Respond this time, please.
AlexaJ22:42, 18 September 2008→Dear Aschlafly,: new section
Reading over the paper, I didn’t find anything about a “historical contingency” around the 31,000th generation. What I believe I am seeing is a misunderstanding. Lenski was considering two scenarios: first, he indicates that if the ability of the E. coli to consume citrate was due to a single rare mutation, then the rate of that mutation would remain constant; but if something in the population's history made the mutation more likely, there would be a point after which the mutation rate would increase. The graph simply showed both options, so I believe there was a misunderstanding of the graph. Aschlafly19:24, 19 September 2008reply
AlexaJ10:35, 19 September 2008Respond this time, please.
:::I understand point 5 now, or at least I think I do. What we have as a "sample" is either:Toffeeman19:57, 19 September 2008()
::::1. Individual cultures (Schlafly)Toffeeman19:57, 19 September 2008()
::::2. Cultures that developed cit+. (Lenski)Toffeeman19:57, 19 September 2008()
:::Schlafly contends that the sample should be all the cultures and that Lenski has, improperly, filtered the sample by excluding the vast majority of it (i.e. all those cultures that did not become cit+). Am I right in thinking this is the argument? --Toffeeman 19:57, 19 September 2008 (EDT)Toffeeman19:57, 19 September 2008()
:::: No, we're talking about how Lenski combined a large study (which did not really support Lenski's hypothesis) with small studies (which Lenski claims do support his hypothesis). The studies were not combined in a logical manner with proper weighting given to the much bigger size of the large study.--Aschlafly 23:15, 19 September 2008 (EDT)Aschlafly23:15, 19 September 2008reply to Toffeeman
:::::Aschlafly, have you read the paper on z-transforms which explains the statistical technique used? You can download a .pdf copy of the paper for free here. --BillA 06:30, 20 September 2008 (EDT)BillA06:30, 20 September 2008→Data from several experiments
:::::: You say "statistical technique used," but you should have said "statistical technique cited." In fact, a close reading of the Z-transform paper provides more support for Point 5: combined studies must be weighted based on sample size: Aschlafly09:34, 20 September 2008reply re: Point 5
::::::: "When there is variation in the sample size across studies, there can be a noticeable difference in the power of the two methods, with the weighted Z-approach being superior in all cases. As such, we should always prefer the weighted Z to the unweighted Z-approach when the independent studies test the same hypothesis."see p. 1371.Aschlafly09:34, 20 September 2008reply re: Point 5
:::::: In other words, the cited paper actually supports Point 5.--Aschlafly 09:34, 20 September 2008 (EDT)Aschlafly09:34, 20 September 2008reply re: Point 5
(unindent)Lenski used the weighted method. See note 49 to the paper and the text around the combination. Of course there is the question of on what basis Lenski weighted the results. Lenski weighted the results on the basis of the Cit+ numbers, and we may think it would have been better to weight on the basis of the number of replicates. I have below the calculations (not mine) of combined P-values based on 1) no weighting, 2) weighting on the basis of Cit+ and 3) weighting on the basis of replicates. The weighted Z-transform = SUM(Weight x Z-score for each run) / SQRT(SUM(Weight^2 for each run))Toffeeman15:19, 20 September 2008→Data from several experiments
Toffeeman15:18, 20 September 2008→Data from several experiments
Toffeeman14:33, 20 September 2008→Data from several experiments
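To make the weighting question concrete, here is a minimal Python sketch of the unweighted and weighted Z-transform (Stouffer) combinations described above. The p-values and weights are placeholders chosen only for illustration; they are not figures from Lenski's replay experiments.
<pre>
# Minimal sketch of unweighted vs. weighted Z-transform (Stouffer) combination.
# All numbers below are hypothetical and purely illustrative.
from math import sqrt
from scipy.stats import norm

def combine_p(p_values, weights=None):
    """Combine one-sided p-values; weights=None gives the unweighted method."""
    if weights is None:
        weights = [1.0] * len(p_values)
    z_scores = [norm.isf(p) for p in p_values]        # p-value -> Z-score
    z = sum(w * s for w, s in zip(weights, z_scores)) / sqrt(sum(w * w for w in weights))
    return norm.sf(z)                                  # combined Z -> combined p-value

p_vals = [0.08, 0.20, 0.04]           # hypothetical per-experiment p-values
replicates = [72, 340, 2800]          # hypothetical sample sizes used as weights

print(combine_p(p_vals))              # unweighted combination
print(combine_p(p_vals, replicates))  # weighted by sample size
</pre>
With weights proportional to sample size, the largest experiment dominates the combined value, which is the substance of the disagreement above about what the weights should be based on.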
2.Both hypotheses propose fixed mutation rates, but the failure of mutations to increase with sample size disproves this. If the authors claim that it is inappropriate to compare for scale the Second and Third Experiments to each other and to the First Experiment, then it was also an error to treat them similarly statistically. Aschlafly19:24, 19 September 2008reply
AlexaJ10:35, 19 September 2008Respond this time, please.
AlexaJ22:42, 18 September 2008→Dear Aschlafly,: new section
2.Both hypotheses propose fixed mutation rates, but the failure of mutations to increase with sample size disproves this. If the authors claim that it is inappropriate to compare for scale the Second and Third Experiments to each other and to the First Experiment, then it was also an error to treat them similarly statistically.AlexaJ10:35, 19 September 2008Respond this time, please.
AlexaJ22:42, 18 September 2008→Dear Aschlafly,: new section
I am not sure what you are trying to point out here. As PNAS pointed out, mutations are random. Another note is that Lenski addressed this in his paper, and highlighted it in the article, even pointing out changes in the third experiment. Just a question: what do you mean by treating “them similarly statistically”?AlexaJ10:35, 19 September 2008Respond this time, please.
AlexaJ22:42, 18 September 2008→Dear Aschlafly,: new section
3. The paper incorrectly applied a Monte Carlo resampling test to exclude the null hypothesis for rarely occurring events. The Third Experiment results are consistent with the null hypothesis, contrary to the paper's claim. Aschlafly19:24, 19 September 2008reply
AlexaJ22:42, 18 September 2008→Dear Aschlafly,: new section
Erm… why was the Monte Carlo resampling incorrect? I cannot make head nor tail of what you were thinking when you wrote this. Also, the result was inconsistent with the null hypothesis by a margin of 92 percent.AlexaJ10:35, 19 September 2008Respond this time, please.
AlexaJ22:42, 18 September 2008→Dear Aschlafly,: new section
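For readers unfamiliar with the method being disputed in point 3, the following is a generic Python sketch of a Monte Carlo resampling test for a rare event. It illustrates the general idea only; the generation labels, replicate counts and hits are hypothetical, and this is not the procedure actually used in the Blount et al. paper.
<pre>
# Generic Monte Carlo resampling test: is the observed concentration of "hits"
# (e.g. Cit+ re-evolutions) in later generations larger than chance would give?
# All numbers below are hypothetical and purely illustrative.
import random

generations       = [10000, 15000, 20000, 25000, 30000, 32000]  # hypothetical source generations
replicates_tested = [20, 20, 20, 20, 20, 20]                    # hypothetical replicates per generation
observed_hits     = [0, 0, 0, 1, 2, 3]                          # hypothetical hits per generation

n_hits = sum(observed_hits)
observed_stat = sum(g * h for g, h in zip(generations, observed_hits)) / n_hits  # mean hit generation

# Null hypothesis: every tested replicate is equally likely to yield a hit,
# regardless of its generation.  Resample hits under that assumption many times.
pool = [g for g, n in zip(generations, replicates_tested) for _ in range(n)]
n_sims, at_least_as_extreme = 20000, 0
for _ in range(n_sims):
    resample = random.sample(pool, n_hits)
    if sum(resample) / n_hits >= observed_stat:
        at_least_as_extreme += 1

print("Monte Carlo p-value:", at_least_as_extreme / n_sims)
</pre>
The reported p-value is simply the fraction of resampled data sets at least as extreme as the observed one; a small value means the observed pattern is unlikely under the null hypothesis.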
4. It was error to include generations of the E. coli already known to contain trace Cit+ variants. The highly improbable occurrence of four Cit+ variants from the 32,000th generation in the Second Experiment suggests an origin from undetected, pre-existing Cit+ variants. Aschlafly19:24, 19 September 2008reply
AlexaJ22:42, 18 September 2008→Dear Aschlafly,: new section
First off, modern technology grants quite a decent ability to detect the amount of bacteria in the sample, and there were NO Cit+ variants in it, nor were they mentioned in the paper. Honestly, do you believe that Lenski and his assistants were playing “toss the bacteria around”? Your assumption that there *had* to be Cit+ bacteria in the sample is incorrect, and seems to be based on your misunderstanding of the field. Aschlafly19:24, 19 September 2008reply
AlexaJ10:35, 19 September 2008Respond this time, please.
AlexaJ22:42, 18 September 2008→Dear Aschlafly,: new section
5. The Third Experiment was erroneously combined with the other two experiments based on outcome rather than sample size, thereby yielding a false claim of overall statistical significance. Aschlafly19:24, 19 September 2008reply
AlexaJ10:35, 19 September 2008Respond this time, please.
AlexaJ22:42, 18 September 2008→Dear Aschlafly,: new section
Again… what? There is simply nothing “wrong” or “incorrect” about combining different sized samples to test the results. There are correct techniques for testing this kind of thing (the “Z-transformation” method, to be specific!), and that is the method that was used. Aschlafly19:24, 19 September 2008reply
AlexaJ10:35, 19 September 2008Respond this time, please.
AlexaJ22:42, 18 September 2008→Dear Aschlafly,: new section
Finally, your claim about the paper not being publicly available is false, as you can easily find it by going to Lenski’s site. All the information is right there, 3 clicks away from a Google search. Aschlafly19:24, 19 September 2008reply
AlexaJ11:12, 19 September 2008→Dear Aschlafly,: The link was somehow removed...
AlexaJ10:35, 19 September 2008Respond this time, please.
Might I ask you to clarify the questions and insight I have provided above? AlexaJAschlafly19:24, 19 September 2008reply
AlexaJ10:35, 19 September 2008Respond this time, please.
AlexaJ22:42, 18 September 2008→Dear Aschlafly,: new section
Might I ask you to clarify the questions and insight I have provided above?AlexaJ10:35, 19 September 2008Respond this time, please.
AlexaJ22:42, 18 September 2008→Dear Aschlafly,: new section
Waiting for your response, AlexaJAlexaJ10:35, 19 September 2008Respond this time, please.
AlexaJ22:42, 18 September 2008→Dear Aschlafly,: new section
: Your posting is filled with gross misspellings, "I don't get it" type of statements, and a general link that shows nothing. I suggest you try harder with an open mind. Thanks and Godspeed.--Aschlafly 23:47, 18 September 2008 (EDT)Aschlafly19:24, 19 September 2008reply
Aschlafly23:47, 18 September 2008→Dear Aschlafly,: reply ... try harder
::... 7 slightly misspelled words don't give an apt enough excuse not to respond, Andy. Jirby 01:03, 19 September 2008 (EDT)Aschlafly19:24, 19 September 2008reply
Jirby01:03, 19 September 2008→Dear Aschlafly,
::Can we assume from your lack of response to these points (and others above) that you don't have the expertise to answer these questions? It's okay if you don't, you should just be humble and admit it. JohnDee 09:37, 19 September 2008 (EDT)Aschlafly19:24, 19 September 2008reply
JohnDee09:37, 19 September 2008→Dear Aschlafly,
:::Are you not making an assumption here, JohnDee? If you were in his position, would you have answered a letter with all these mistakes and a condescending tone?--Carolyn 09:54, 19 September 2008 (EDT)Aschlafly19:24, 19 September 2008reply
Carolyn09:54, 19 September 2008→Dear Aschlafly,
::::Read over this talk page. Andy has never directly answered any criticism of his PNAS letter. He either removes the comment from the talk page, bans the user, or just refuses to answer the question. Sometimes all three. Given this and the criticism I've seen on this page from people who seem to know what they are talking about, the only conclusion I can come to is Andy doesn't actually understand why z-transformations are applied, what a Monte Carlo re-sampling test is, etc. If he understood any of this, I would assume he would actually answer their questions. JohnDee 10:20, 19 September 2008 (EDT)Aschlafly19:24, 19 September 2008reply
JohnDee10:23, 19 September 2008→Dear Aschlafly,
JohnDee10:20, 19 September 2008→Dear Aschlafly,
::::"Are you not making an assumption here". Spelling mistakes signify little beyond spelling. If you were to consider swimming to the bottom of the Pacific and I said "you won't be able to, you can't breathe underwater" then I would be right. If I were to write "U wont bee able to, yu can't breev undewatta" I would still be right about your ability to swim to the bottom of the Pacific. Woefully wrong about spelling, but spot on about the status of your gills. On the "tone" of the edit I might remind you that PNAS took exception to the "tone" of the letter. "Mr. Schlafly's final comment about release of data is uncalled for". Professor Lenski certainly took exception to the tone of both emails to him: "(g)iven the continued rudeness of your second email" (my emphasis). And, naturally, it makes little difference to the validity of the criticisms: if I were to say "you *******************, you can't breathe underwater" I would still be right. So "your spelling is bad" or "you are being rude" simply fail to negate the statement "the letter was wrong". In the abscence of a negation coupled with plenty of corroboration, we a forced to the conclusion that the letter had little merit. --Toffeeman 10:32, 19 September 2008 (EDT)Aschlafly19:24, 19 September 2008reply
Toffeeman10:32, 19 September 2008()
:::::I am resting under a similar assumption. My next couple of questions, Aschlafly: first off, why did you refuse to give a direct answer to a single question, instead opting to say "Your spelling was bad, hence, you are closed minded." I think you mean to say "I have no idea how to give a correct response, hence I will just refuse to answer you." Until you actually answer my questions, I will assume you do not understand how to answer. Second, yes, there are "I don't understand" parts in my questions, parts I wanted you to clarify so I could understand them. Refusing to answer those makes it look, again, like you simply don't understand; in this case, you don't understand what YOU wrote. Finally, the "general link" was a link to the cumulative research of Richard Lenski, so that point is completely invalid. If you could respond to my post WITHOUT brushing it off, it would be deeply appreciated. (by the way, I re-checked the spelling just for you.) AlexaJAschlafly19:24, 19 September 2008reply
AlexaJ10:35, 19 September 2008→Dear Aschlafly,: Sig and spacing.
AlexaJ10:35, 19 September 2008Respond this time, please.
:::::::Just for our own awareness, can I please ask you AlexaJ what your expertise in this subject matter is? Thanks--Carolyn 10:45, 19 September 2008 (EDT)DeanS10:50, 19 September 2008Reverted edits by Carolyn (Talk); changed back to last version by AlexaJ
Carolyn10:45, 19 September 2008→Dear Aschlafly,
Schlafly, admitting you were wrong does not in any way reflect weakness, nor does it have to affect your belief in any way. On the contrary, admitting you are wrong proves you are sure of your capacities to better yourself, and that you are willing to learn. If you say it right, you will also have a good chance to earn the respect of some of the people who criticized you. DeanS16:10, 19 September 2008Reverted edits by Fred1776 (Talk); changed back to last version by AlexaJ
Fred177615:36, 19 September 2008→Dear Aschlafly,
Continuous denial and refusal to answer because of a few spelling mistakes, on the other hand, reflect fear of making mistakes and have the effect of worsening your image. You choose. Fred1776 15:36, 19 September 2008 (EDT)DeanS16:10, 19 September 2008Reverted edits by Fred1776 (Talk); changed back to last version by AlexaJ
Fred177615:36, 19 September 2008→Dear Aschlafly,
{|
! Man1 !! Man2 !! Woman1 !! Woman2 !! Men mean !! Women mean !! Mean difference !! t !! P
|-
| 176 || 179 || 157 || 148 || 177.5 || 152.5 || 25 || 5.27 || 0.017
|-
| 180 || 176 || 160 || 164 || 178 || 162 || 16 || 5.66 || 0.015
|-
| 176 || 175 || 167 || 165 || 175.5 || 166 || 9.5 || 8.50 || 0.0068
|-
| 169 || 171 || 168 || 173 || 170 || 170.5 || -0.5 || -0.19 || 0.57
|-
| 179 || 175 || 166 || 178 || 177 || 172 || 5 || 0.79 || 0.26
|}KennyMac08:20, 18 September 2008→Data from several experiments: new section
P in the last column is the t-test probability for a one-sided test of women being shorter than men. (Formally, it’s the probability of getting a value of t greater than that calculated from the data if women are in fact taller than men on average.)KennyMac08:20, 18 September 2008→Data from several experiments: new section
Should the fact that, in the fourth sample, the average height of the women is greater than that of the men make us doubt that men are in fact taller on average? Should we be concerned about the last sample, in which the difference in height of the two sexes is rather small, though in the expected direction? No, in both cases. When we combine the data on all 10 men and all 10 women, we get this:KennyMac11:45, 18 September 2008→Data from several experiments: A little clarification
KennyMac08:20, 18 September 2008→Data from several experiments: new section
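As an arithmetic check on the example above, here is a short Python sketch that reproduces the per-sample one-sided t-tests in the table and then runs the same test on the pooled data for all 10 men and 10 women, which is presumably what the combined analysis referred to above amounts to.
<pre>
# Reproduce the per-sample one-sided t-tests from the table above, then pool
# all 10 men and all 10 women into a single comparison.
from scipy.stats import ttest_ind   # 'alternative' argument requires scipy >= 1.6

men_samples   = [[176, 179], [180, 176], [176, 175], [169, 171], [179, 175]]
women_samples = [[157, 148], [160, 164], [167, 165], [168, 173], [166, 178]]

for men, women in zip(men_samples, women_samples):
    t, p = ttest_ind(men, women, alternative='greater')   # one-sided: men taller than women
    print(f"t = {t:5.2f}   one-sided P = {p:.4f}")

all_men   = [h for sample in men_samples for h in sample]
all_women = [h for sample in women_samples for h in sample]
t, p = ttest_ind(all_men, all_women, alternative='greater')
print(f"combined: t = {t:5.2f}   one-sided P = {p:.6f}")
</pre>
The combined test has far more degrees of freedom than any single small sample, which is why a series of individually weak results can still yield a strong overall conclusion.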
::::: Which of the 5 specific errors do you think they addressed? None, as far as I can tell.--Aschlafly 11:11, 13 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
Philip J. Rayment23:01, 16 September 2008Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM22:57, 16 September 2008removed unsubstansive comments
DRamon22:15, 15 September 2008reverting more vandalism from JonoP
JonoP22:14, 15 September 2008rmv unsustansive comments
Aschlafly14:23, 14 September 2008Reverted edits by Raul (Talk); changed back to last version by DinsdaleP
Raul14:06, 14 September 2008()
SamuelHTD10:16, 14 September 2008Why did you remove a big portion in place of your question?
Sam99foster10:13, 14 September 2008deletion of Toffeeman's comments seemed unfair and unnecessary
Aschlafly11:11, 13 September 2008replies
:::::The response addresses your qualitative claims about the paper's statistical methods raised in points two, three, and five by the following: "Nevertheless, from a statistical point of view, it is proper to combine the results of independent experiments, as Blount et al. did correctly in their original paper" (emphasis added); in fact the longest paragraph in the response deals entirely with the statistical claims of the letter and dismisses them.--Brossa 11:32, 13 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
SamuelHTD10:16, 14 September 2008Why did you remove a big portion in place of your question?
Sam99foster10:13, 14 September 2008deletion of Toffeeman's comments seemed unfair and unnecessary
Brossa11:32, 13 September 2008()
::::But that's never going to happen because the data availability requirements for public funding have already been met. Jirby 11:03, 13 September 2008 (EDT)10:56, 13 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
SamuelHTD10:16, 14 September 2008Why did you remove a big portion in place of your question?
Sam99foster10:13, 14 September 2008deletion of Toffeeman's comments seemed unfair and unnecessary
Jirby11:03, 13 September 2008()
Jirby10:56, 13 September 2008()
::::: No, I don't think the researchers have met NSF guidelines as referenced in the letter.--Aschlafly 11:11, 13 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
Philip J. Rayment23:01, 16 September 2008Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM22:57, 16 September 2008removed unsubstansive comments
DRamon22:15, 15 September 2008reverting more vandalism from JonoP
JonoP22:14, 15 September 2008rmv unsustansive comments
SamuelHTD10:16, 14 September 2008Why did you remove a big portion in place of your question?
Sam99foster10:13, 14 September 2008deletion of Toffeeman's comments seemed unfair and unnecessary
Aschlafly11:11, 13 September 2008replies
Proof? Jirby 11:26, 13 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
SamuelHTD10:16, 14 September 2008Why did you remove a big portion in place of your question?
Sam99foster10:13, 14 September 2008deletion of Toffeeman's comments seemed unfair and unnecessary
Jirby11:26, 13 September 2008()
: As I said, the NSF guidelines are referenced in the letter.--Aschlafly 11:29, 13 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
Philip J. Rayment23:01, 16 September 2008Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM22:57, 16 September 2008removed unsubstansive comments
DRamon22:15, 15 September 2008reverting more vandalism from JonoP
JonoP22:14, 15 September 2008rmv unsustansive comments
SamuelHTD10:16, 14 September 2008Why did you remove a big portion in place of your question?
Sam99foster10:13, 14 September 2008deletion of Toffeeman's comments seemed unfair and unnecessary
Aschlafly11:29, 13 September 2008reply
You mean the notebooks, etc.? Jirby 11:32, 13 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
SamuelHTD10:16, 14 September 2008Why did you remove a big portion in place of your question?
Sam99foster10:13, 14 September 2008deletion of Toffeeman's comments seemed unfair and unnecessary
Jirby11:32, 13 September 2008()
:Oh my dear God, I can't believe this!! Where has this beautiful country gone to if even science is not reliable anymore nowadays. Hope things will change in the future. Good thing there still are people like Mr. Schlafly, who have the brains and power to stand up, and turn the people of America in the right direction again. Raul 12:24, 13 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
SamuelHTD10:16, 14 September 2008Why did you remove a big portion in place of your question?
Sam99foster10:13, 14 September 2008deletion of Toffeeman's comments seemed unfair and unnecessary
Raul12:24, 13 September 2008()
Raul12:24, 13 September 2008()
Mr. Schlafly, I have a question BTW. Was this letter received on paper, or electronically? Because if it was on paper, perhaps it would be a good idea to scan it and post it. It would add a lot to the encyclopedic value of the article. Raul 12:26, 13 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
SamuelHTD10:16, 14 September 2008Why did you remove a big portion in place of your question?
Sam99foster10:13, 14 September 2008deletion of Toffeeman's comments seemed unfair and unnecessary
Raul12:26, 13 September 2008()
: PNAS procedures required me to submit the letter electronically using its own electronic submission software. When the PNAS acknowledged that my submission complied with all its requirements, it also said that the authors of the original paper had been notified of my letter.--Aschlafly 12:39, 13 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
Philip J. Rayment23:01, 16 September 2008Reverted edits by AlexM (Talk); changed back to last version by Aschlafly
AlexM22:57, 16 September 2008removed unsubstansive comments
DRamon22:15, 15 September 2008reverting more vandalism from JonoP
JonoP22:14, 15 September 2008rmv unsustansive comments
SamuelHTD10:16, 14 September 2008Why did you remove a big portion in place of your question?
Sam99foster10:13, 14 September 2008deletion of Toffeeman's comments seemed unfair and unnecessary
Aschlafly12:39, 13 September 2008reply re: procedures
Too bad. It doesn't make much sense though, guess that tells a lot about PNAS. What they should care about is the actual text, not the medium it is in. That's not the case IMHO for encyclopedias however. If not for anything else, a scan would have been useful as a reference for the digital text. Oh well... Raul 12:53, 13 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
SamuelHTD10:16, 14 September 2008Why did you remove a big portion in place of your question?
Sam99foster10:13, 14 September 2008deletion of Toffeeman's comments seemed unfair and unnecessary
Raul12:53, 13 September 2008()
Honest question, is it against the rules to disagree with Andrew Schlafly or criticize that letter? I just want to know so I don't end up in the same situation as other people who have been censored here.--IanG 17:03, 13 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
SamuelHTD10:16, 14 September 2008Why did you remove a big portion in place of your question?
Sam99foster10:13, 14 September 2008deletion of Toffeeman's comments seemed unfair and unnecessary
IanG17:03, 13 September 2008()
IanG17:02, 13 September 2008()
:I believe this page is only for discussion of the response, which is quite straightforward. Criticism of the letter should have gone on its talk page, but it's too late now. Oh well. There is no censorship on Conservapedia. Your comment is not substantive - please refactor it. Praise Jesus, Pila 17:26, 13 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
SamuelHTD10:16, 14 September 2008Why did you remove a big portion in place of your question?
Sam99foster10:13, 14 September 2008deletion of Toffeeman's comments seemed unfair and unnecessary
Markr19:43, 13 September 2008put my comment on end (sorry) and changed jesus to Jesus
Pila17:26, 13 September 2008()
::Fine, then I'll just say I agree with the response.--IanG 17:44, 13 September 2008 (EDT)Aschlafly19:31, 13 September 2008removed non-substantive postings; only substantive postings are welcome here per notice
IanG17:44, 13 September 2008()
== Next step ==Aschlafly19:31, 13 September 2008Reverted edits by JohnZ (Talk); changed back to last version by IanG
JohnZ19:21, 13 September 2008→Next step: new section
Andy, the PNAS Editorial Board seems to have taken point 5 as your substantive objection - and has rejected it out of hand. Perhaps you should submit a new letter and use your 250 words to argue specifically for that point. --JohnZ 19:21, 13 September 2008 (EDT)Aschlafly19:31, 13 September 2008Reverted edits by JohnZ (Talk); changed back to last version by IanG
JohnZ19:21, 13 September 2008→Next step: new section
:: If you REALLY believe that Lenski has committed academic FRAUD, then lodge a formal complaint with his university. Such complaints are taken very seriously and can lead to loss of tenure and dismissal from the university, and with that on his record no other institution would hire him on any basis. Markr 19:40, 13 September 2008 (EDT)Aschlafly16:55, 20 September 2008archive
SamuelHTD10:16, 14 September 2008Why did you remove a big portion in place of your question?
Sam99foster10:13, 14 September 2008deletion of Toffeeman's comments seemed unfair and unnecessary
Markr19:43, 13 September 2008put my comment on end (sorry) and changed jesus to Jesus
== Please ==Bugler14:06, 13 September 2008()
NewUser12:52, 13 September 2008→Please: new section
Please, stop deleting entries in this talk page. It puts conservatives in bad light. Let's answer with reason and debating skills instead. This is particularly aimed at user:Aschlafly.Bugler14:06, 13 September 2008()
NewUser12:52, 13 September 2008→Please: new section
Notice: misrepresentations are not going to be allowed on this page. Substantive comments only, please.Aschlafly10:13, 13 September 2008reply
Note: earlier posts are archived here. --BillA 17:05, 20 September 2008 (EDT)BillA17:05, 20 September 2008()