Earthquake Prediction Evaluation – America's Junk Science

Petra Nova Challus – Investigative Journalist

October 4, 2010

 

The process of science is to establish whether something is true through facts that support a theory; however, it falls short in the earthquake prediction evaluation process. This report challenges the evaluation programs, their methodology, and their scoring formulations, and asserts that the practice falls into the category of junk science.

Prediction evaluation is designed to test a predictor's skills to ascertain whether they are predicting earthquakes beyond the level of guessing. If one were to pass such a test it would be termed “beating random chance.” As most are aware, a prediction must include a defined location where an earthquake is anticipated, the time period of the prediction, and a set earthquake magnitude. During the evaluation process, it is required that the evaluator receive and/or view the prediction(s) before the prediction period commences.

The Evaluators Addressed in This Report

Dr. Alan Jones of the Department of Geological Sciences and Environmental Studies, Binghamton University, State University of New York, who earned his PhD in Engineering Sciences at Purdue University.

Mr. Roger Hunter, retired from the USGS, who received his degree in Psychology.

Jones has issued two reports: one on subject Dennis Gentry, via his paper with Richard H. Jones, Testing Skill in Earthquake Predictions (2003), and a separate evaluation of Dr. Zhonghao Shou for the Seismological Society of America as part of a scientific publishing process.

Hunter has undertaken numerous evaluations; however, he has written only one evaluation report, on geologist James O. Berkland, published in Berkland's book “The Man Who Predicts Earthquakes” and in Skeptical Inquirer magazine.

Underlying Evaluator Requirements:

There are no requirements to become an evaluator in terms of education, training, or scientific guidelines; evaluators are not licensed, and they are not regulated by any scientific body.

The same applies to their computer programs: they are never tested to confirm that the outcomes they deliver are consistent and can be duplicated by anyone else.

They develop their rules on their own, or by agreement with the predictor under review; these rules are intended to be delivered to the predictor before a review commences, and they are not set by any scientific body.

They are not required to offer a written report with a noted score upon completion of the evaluation process.

No predictor can request a review by an outside agency, as evaluators are unregulated and no governing scientific body oversees them.

Earthquake Catalogs (The Earthquake Library)

After a prediction expires, earthquake catalogs are checked to ascertain whether a qualifying earthquake has been cataloged: the evaluator confirms whether the earthquake in question is listed within the time period the prediction was in force, whether it fell within the predicted radius, and whether its magnitude meets what was offered in the prediction.
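
For readers unfamiliar with the mechanics, the check amounts to three questions per prediction. Below is a minimal sketch in Python of those three questions; the field names and the distance helper are my own illustration only, not the evaluators' actual programs.

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometers.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def is_hit(prediction, quake):
    # prediction: dict with start/end (datetime), lat, lon, radius_km, mag_min, mag_max
    # quake: dict with time (datetime), lat, lon, magnitude (whatever scale the catalog used)
    in_window = prediction["start"] <= quake["time"] <= prediction["end"]
    in_radius = haversine_km(prediction["lat"], prediction["lon"],
                             quake["lat"], quake["lon"]) <= prediction["radius_km"]
    in_mag = prediction["mag_min"] <= quake["magnitude"] <= prediction["mag_max"]
    return in_window and in_radius and in_mag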

While it is easy to confirm dates and the prediction radius, the magnitudes afforded in the earthquake catalogs are not uniform. Scales such as Mw, ML, mb, and Ms, among others, are entered either according to the scientific organization's requirements for its scientists or at the discretion of the scientist on duty. Each magnitude scale yields a different value, and a predictor does not know in advance which magnitude scale will appear in the catalog entry at the time of review.

Many have heard the saying about comparing apples to apples and oranges to oranges, but in evaluation this is not done. Whatever magnitude scale the quake happens to be entered under in the catalog is what the evaluator selects.

Flaws in the System

To demonstrate the flaws in the system caused by the lack of uniformity in earthquake catalog data, we need to examine the impact of that data on the outcome for the predictor.

Example – Haiti Earthquake

Let us imagine the predictor issued a prediction for an M 6.5–6.9 event, and upon review the NEIC catalog indicated the result was Mw 7.0. This would be considered a miss; however, the same earthquake measured mb 6.8, which falls within the predicted range, and thus the predictor is entitled to receive credit for the prediction. Yet at present, only what appears in the catalog determines the credit the predictor is afforded.

NEIC Catalog Entry: Haiti Earthquake

PDE-W 2010 01 12 215310.06 18.44 -72.57 13 7.0 MwUCMT .CM 3T..... 0

From the USGS Significant Earthquake Lists the magnitude breakdown by scale:

JAN 12 21 53 10.0 18.443 N 72.571 W 13 G 7.0 1.0 500 HAITI REGION. MW 7.0 (GS), 7.0 (UCMT), 7.0 (GCMT), 7.0 (WCMT). mb 6.8 (GS). MS 7.3 (GS). ME 7.6 (GS).

http://earthquake.usgs.gov/earthquakes/eqarchives/significant/sig_2010.php
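
If evaluation compared apples to apples, the predicted range would be checked against every magnitude scale reported for the event, not just the single value in the catalog entry. A small sketch using the Haiti figures quoted above makes the point; the variable names and the check itself are mine, purely for illustration.

# Hypothetical illustration using the Haiti magnitudes listed above.
haiti_magnitudes = {"Mw": 7.0, "mb": 6.8, "MS": 7.3, "ME": 7.6}
predicted_range = (6.5, 6.9)

qualifying = [scale for scale, mag in haiti_magnitudes.items()
              if predicted_range[0] <= mag <= predicted_range[1]]
print(qualifying)  # ['mb'] -- credit would be due under the mb scale,
                   # yet a catalog carrying only the Mw 7.0 value scores a miss.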

Evaluation Argument on Magnitude Equivalency Scales

In 2007 I issued a prediction for an M 6.0 earthquake in Nicaragua. The USGS reported the resulting earthquake as an Mw 5.2, while INETER in Nicaragua reported and cataloged it as an ML 6.0. Credit for this prediction was initially not given, as it was cataloged at Mw 5.2. I contacted a geophysicist who performs magnitude determinations for the USGS daily, who confirmed that an Mw 5.2 can correspond to an ML 6.0 and commented that local service providers are generally more accurate than outside networks; yet still no credit was afforded. One year later, upon discovering the Significant Earthquake Lists on the USGS web site, I found such an example, forwarded the link to Hunter, and was finally afforded credit for the prediction. While Hunter has asserted he is a scientist, he appeared to be unaware of the various magnitude equivalency scales, did not contact anyone he knew within any body of science to confirm this information, nor did he seek the answer through the medium of the Internet.

Evaluation Argument on Overlapping Radius Rule

In November 2007 I issued a prediction for Tel Aviv, Israel, and another for Alexandria, Egypt, and both were thrown out by Hunter, yet both locations experienced earthquakes. The earthquake in Tel Aviv was generated by far-field triggering and was the first event to occur there in 6 months, only one of 9 in the past 5 years and only one of 16 in the past 20 years, making it a very rare event, and it was felt by over one million persons. These predictions were initially discarded under his overlapping-radius rule, though upon my pointing out that each area had its own earthquake, the overlapping-radius issue was set aside and I was given credit for Israel, as it fell within my guidelines, and a miss for Egypt, as it fell below my specified predicted magnitude.

Example – Papua, Indonesia (Aru Islands): Worldwide Reporting of This Event

This example demonstrates how a single earthquake is reported on different magnitude scales by global service agencies. Australia is the closest agency, yet the information developed and entered into the NEIC catalog may not use Australia's computation but the NEIC's own method, if the agencies are not signatories to a shared-information agreement. And it is noted by scientists worldwide that the seismic service provider closest to the location is normally the most accurate in reporting the magnitude of the event.

7.2 2010/09/29 17:11:24 -4.920 133.783 12.3 NEAR THE SOUTH COAST OF PAPUA, INDONESIA

USGS (mw scale entry, first report)

http://earthquake.usgs.gov/earthquakes/recenteqsww/Quakes/quakes_big.php

2010-09-29 17:10:52.0 -5.01 133.80 33 50 6.7ms, 6.6mb, 7.5mw Aru Islands region, Indonesia

GSR – Russian Academy of Sciences

http://www.ceme.gsras.ru/cgi-bin/ccd_quakee.pl?num=100&x=17&y=8#

7.2 29-Sep-2010 17:11:32 -4.957 134.033 30 P Irian Jaya Region, Indonesia, New Guinea. 

Geoscience Australia – Preliminary Report – ML scale used throughout

http://www.ga.gov.au/earthquakes/

Margins For Error

To be .01 off in magnitude, or 1 mile or kilometer outside one's radius, is cause for exclusion from credit on any prediction issued. Yet the key source of error in the evaluation process is the data itself, as it is neither uniform nor guaranteed to be correct. The coverage and saturation of recording equipment determine how accurate earthquake locations may be, and if events are offshore, the lack of triangulation makes the margin of error far greater than on land. Therefore, to arrive at a correct score in evaluation, each and every prediction must be rated to account for known errors and on “all magnitude scales,” affording the predictor credit if any of the scales used qualify the prediction.
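
One way to account for such errors, sketched below in the same illustrative style, is to widen each test by the known uncertainty and to accept any reported magnitude scale. The tolerance values here are placeholders of my own, precisely because no established figures exist, and the sketch reuses the haversine_km helper shown earlier.

# Sketch only: the tolerance values below are placeholders, not established figures.
MAG_TOLERANCE = 0.3          # allowance for differences between magnitude scales
LOCATION_TOLERANCE_KM = 25   # allowance for catalog location error, larger offshore

def is_hit_with_tolerance(prediction, quake):
    # quake["magnitudes"] maps each reported scale (e.g. "Mw", "mb", "MS") to its value.
    in_window = prediction["start"] <= quake["time"] <= prediction["end"]
    distance = haversine_km(prediction["lat"], prediction["lon"], quake["lat"], quake["lon"])
    in_radius = distance <= prediction["radius_km"] + LOCATION_TOLERANCE_KM
    # Credit the prediction if ANY reported magnitude scale falls inside the
    # predicted range, widened by the tolerance.
    in_mag = any(prediction["mag_min"] - MAG_TOLERANCE <= m <= prediction["mag_max"] + MAG_TOLERANCE
                 for m in quake["magnitudes"].values())
    return in_window and in_radius and in_mag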

This matter is not an issue for scientists who perform magnitude determinations daily; they understand it perfectly. And with the locations themselves in question, as they are changed as often as the magnitudes in the earthquake catalogs, we should question why Jones and Hunter have never incorporated this knowledge into their evaluation procedures. The only entries guaranteed to be correct in earthquake catalogs are the date and time the earthquake occurred. And what do we expect from the realm of science? An understanding of the data and its proper application.

Catalog Consistency

Earthquake catalog data is not as accurate upon early entry as it is at a later date, because it is reviewed, and reviewed often, during the first year: corrections are made to location, magnitude, and depth, and some entries are literally pulled out of the catalogs and placed back in. As further research is undertaken in specific areas and it becomes clear the data was not correct, corrections continue to be made, but at no time are the catalogs guaranteed to be correct or final. Yet in attempting to ascertain the error rate of the catalogs, no one seems to know exactly what it might be, and this too has to be factored into the evaluation scoring.

Thus, at final scoring, the evaluation score should be listed as, for example, 95% plus or minus X%, taking into consideration that the data being used is known not to be guaranteed correct. Yet both Jones and Hunter are aware the catalogs are not guaranteed to be correct, and they have not delivered scores reflecting the uncertainty of this well-known, non-guaranteed data.
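
Expressed as a small calculation, the idea is simply to carry an assumed catalog error rate through to the final score; the numbers below are examples of mine, not results from any actual evaluation.

# Sketch: carry an assumed catalog error rate through to the reported score.
hits, total = 190, 200          # example counts only
catalog_error_rate = 0.05       # assumed; the true figure for the catalogs is not known

score = hits / total
low = max(0.0, score - catalog_error_rate)
high = min(1.0, score + catalog_error_rate)
print(f"{score:.0%}, plausible range {low:.0%} to {high:.0%} given catalog uncertainty")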

The Passing Grade in Evaluation

The standard for passing an evaluation has been set by the evaluators, not by a scientific body; thus it is at their discretion what they believe proves a predictor is not guessing in the course of their prediction program. Jones and Hunter assert that a score of 95% (Jones) or 98–99% (Hunter) provides proof that the predictor under review is offering predictions beyond guessing. As to whether 98–99% should be expected from a predictor, there are numerous opinions, though we know that such a score, over a substantial number of predictions, is enormously difficult to achieve. Given that weather forecasters at their very best, after forty years of forecasting, can achieve only around 85%, one is led to ask why an earthquake predictor with less than forty years of daily experience should be expected to outperform the enormously experienced weather forecaster.
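
For context on what “beating random chance” can involve, one common framing is a binomial test: how likely is it that guessing alone would produce at least the observed number of hits? The sketch below is my own illustration under assumed numbers, not either evaluator's actual program; the chance probability per prediction must itself be estimated from background seismicity, which is a difficult estimate in its own right.

from math import comb

def prob_at_least(hits, n, p_chance):
    # Probability of getting `hits` or more qualifying predictions out of `n`
    # by pure guessing, where p_chance is the chance of any one prediction
    # qualifying at random.
    return sum(comb(n, k) * p_chance ** k * (1 - p_chance) ** (n - k)
               for k in range(hits, n + 1))

# Hypothetical numbers only: 12 predictions, 7 hits, a 1-in-4 chance per prediction.
print(prob_at_least(7, 12, 0.25))  # about 0.014; a small value is read as "beating chance"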

Record Setter:

Only one person under review has ever beaten random chance, and that notable outcome belongs to amateur predictor Dennis Gentry, reviewed by Alan Jones with only 12 predictions issued. However, we should ask whether only 12 predictions are adequate to confirm that a predictor performs well. Actually, they are not, and thus another review was undertaken and outlined in Jones's paper, Testing Skill in Earthquake Predictions, in which 18 predictions were issued before Gentry suffered equipment failure.

However, in researching Gentry's prediction program, I found he was not issuing site-specific predictions but using a bull's-eye arrangement of concentric circles; thus any qualifying event that arrived along the edge of a circle would afford him credit for his predictions. He and his equipment, which detected geomagnetic signals, were located in Southern California, and the equipment had never been tested by anyone prior to this arrangement to ascertain its maximum detection distance. During Gentry's second review he received credit for an earthquake occurring in Egypt; however, neither Jones nor anyone who sat as a witness to this process at the Google Groups site sci.geo.earthquakes questioned whether his equipment could detect signals at that distance. I have researched this and was given information indicating that such equipment with a 100-foot tower can detect signals only to a maximum distance of 25 straight-line miles. Thus we should question the entire process and discard it for cause, as it seems the predictions were not in fact issued solely from the use of that equipment.

Credit should nonetheless be given to Jones's review and his use of his computer program, because he demonstrated the process of evaluation and, to my knowledge, has always provided rules for evaluation and gone over the process with everyone he has reviewed.

After the Evaluation - Debunking Predictors

Approximately eight persons have received reviews in their totality from Jones and Hunter, and only two of them were treated favorably following their reviews, in that they escaped public malignment of their character. Dennis Gentry was on very good terms with Alan Jones, who urged him to continue his work and never offered an unkind word about his performance at any time. As for Hunter, of the long list of those reviewed, only one person has received the same courtesy: “Skywise,” as he is known throughout the Internet, who is Hunter's personal friend. Yet on 10/13/09 Hunter made a public admission that the prediction program was a hoax, and the evaluation was nonetheless assigned a score of 94%. As most are aware, a hoax is not a valid method of prediction, and thus the consensus of opinion is that it should not be scored, as it is not comparable to valid prediction issuance.

One of the most bizarre events to transpire through the evaluation process involved Dr. Zhonghao Shou, a chemist, who submitted an article to the Seismological Society of America (SSA) for publication; as part of that process, Jones was enlisted to offer his evaluation and did so. Upon its conclusion, Dr. Shou did not agree with the outcome on various issues and has outlined them on his web site. However, his evaluation report was released to Hunter (a person not connected with the publishing effort) without Dr. Shou's knowledge or authorization; Hunter has commented unfavorably upon it more than once on the Internet, and it is Dr. Shou's opinion that it was released to Hunter in order to attack him. In this, we should question the integrity of scientific publishing in this matter, and professional conduct when one is trusted with the task of evaluation. If this is to become common practice, then the subjects of evaluation should be required to sign an authorization statement for the release of their reports to outside parties, or evaluators should agree that they will not release privileged information to the public.

Aside from the aforementioned, one of America's most beloved geologists, James O. Berkland, has stood the test of time through the malignment of his character following his evaluation by Hunter, as have I. My test prediction program commenced in February 2007 and ran through November 2007, and during that period I issued 216 site-specific global predictions based upon a computer-based forecasting program, far-field triggering, seismicity pattern recognition, and human bio-sensitivity to earthquakes. I received no less than six scores over the course of three years, with today's final outcome of 94% at 1.47 standard deviations. Hunter approached me mid-program in 2007, suggested I was doing well, and said he wanted to offer his evaluation. He did not present me with the rules for evaluation scoring at that time, nor did he convey that they existed, until my program ended and the scoring process was underway. I then found my predictions were being discarded from credit based upon rules I had no knowledge of. This afforded Hunter the ability to note errors I had made prior to his contact with me, errors which only he knew would downgrade my score, since I had no knowledge of those evaluation rules before he ever approached me to offer his services. Yet even at that point in time he did not have any written rules for evaluation to offer.

To date I have not received a formal written report of this evaluation with a score affixed to it. I was told that since I did not pass, no report would be issued and he would not publish the results. Yet the results, in a loose format, were posted on the Internet at the Google Groups site sci.geo.earthquakes before I had the opportunity to review the predictions included in or excluded from my evaluation. We must realize that if no report is offered, the predictor has no ability to seek an outside opinion from anyone to review it for errors or to offer comment upon it. In addition, after three years I am still not able to discuss this evaluation in public via the Internet without direct interference from Hunter, who has generally asserted that because I did not pass his evaluation, I am a failure in life itself.

In 2009 I wrote my own evaluation rules based upon two sources: Hunter's list of reasons to discard predictions, and notes on program parameters predictors could create themselves, as communicated by Dr. Thomas Jones of SCEC at a public conference in 2005. Following the publication of my rules on my web site, Hunter then posted the rules that applied to his methods on his web site, at my suggestion, without affording me credit for writing them; but again I reiterate, they were not in existence prior to my evaluation.

Thoughts on Reformation

When persons who present themselves as experts, though unqualified and unregulated, offer what are presented as professional prediction evaluations, it is clear much needs to be reformed if correct calculations are to be afforded to predictors. It is common knowledge throughout the scientific community that earthquake catalogs change often, with the first year seeing the most frequent changes, and it is during this time period that most evaluation scoring is undertaken. There has been discussion for years about entering earthquake magnitude determinations worldwide using only one scale; if that could be achieved, it would make the evaluation process considerably easier than it is at present. But in light of the varied magnitude scale entries and the lack of any guarantee that catalog data is correct, the present system of evaluation offers no acknowledgment that the reports themselves are based upon flawed data and thus produce incorrect outcomes. And those outcomes favor the evaluators, not the predictors. We need a figure for how flawed the catalogs are, be it 5% or possibly as high as 25%, and to include that in the final evaluation summation. That which cannot be proven is, no doubt, junk science.

Evaluation Impact Upon Predictors

Anyone who might wish to enter the realm of prediction and is not sanctioned by some umbrella of science offering protection from public malignment of their character should reconsider, because the negative commentary that follows an evaluation continues for an undetermined period of time. The process of evaluation and its scoring is very detailed, and anyone who has not engaged in it cannot understand what is correct, what is incorrect, and what latitude they have or do not have in the final scoring. Thus the process itself appears to be designed to remove credibility from anyone who predicts earthquakes, in hopes of removing public confidence in their work, despite it being known to be experimental, just as it is inside scientific programs.

The methods of evaluation currently practiced by Jones and Hunter, with no regulatory safeguard against impropriety and no review by an outside body, leave much to be desired, as do their computer programs and outcomes that cannot be verified by any outside source. It is much akin to leaving the fox in charge of the henhouse, and it is well known that a lack of regulation permits abuse of power.

Closing Commentary

Remember, above all else, science is about facts. Prediction evaluation at present is in fact junk science, because it cannot be proven correct: the data itself is flawed, and those flaws are not accounted for in the evaluation process. Hopefully, the evaluators will review what they have been doing in light of this report and consider reform, regardless of enforceable regulation.

Also, prediction programs under any suspicion of not being performed honestly should not be evaluated, because doing so in essence removes the credibility of the evaluator who undertakes such a review.

The research which led me to my conclusions came from scientists all over the world, and I would like to extend my most sincere thanks to the members of the scientific community for their generosity in lending me a hand in understanding how science performs at its best and what is known and not known, and to offer my compliments on their professionalism at all times. Outside the realm of evaluation, I have been treated respectfully by every member of the scientific community I have had contact with, no matter the issue at hand, and to them I cannot say thank you often enough.

And remember, no scientist to date can predict earthquakes and guarantee they will occur, nor are there training programs to teach anyone how to predict them.

Dr. Zhonghao Shou Web Site - Review of his Evaluation:

http://www.earthquakesignals.com/zhonghao296/A070821.html




 

 