If you’re ever curious about earthquake history and patterns, it’s not that difficult to do your own original research. Thanks to openly available data on the Internet, it doesn’t cost anything and only takes a little time and some basic programming knowledge.
To do any analysis, you will definitely need a database. I prefer using MySQL on my Mac, and that’s what this tutorial will be based on. I installed the community server from the MySQL website. To access it, I pull up Terminal and type in the following:
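The exact commands aren’t reproduced above, but a minimal sketch, assuming the community server’s default install location (the `-u root -p` login flags are my assumption; use whatever account you set up), looks like this:

```shell
# Create an alias pointing at the installed client binary
# (this is the default install path on Mac OS X; yours may differ)
alias mysql=/usr/local/mysql/bin/mysql

# Then call the alias to open the MySQL prompt
mysql -u root -p
```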
For some reason mysql won’t run unless I create an alias for it and then call the alias. There may be an easier way, but that’s what I found online once, and it works. That should pull up something like this:
I’ve created a table called quake with the following structure:
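The exact column definitions aren’t reproduced here, but judging from the INSERT statements the C++ program generates later, a minimal sketch of the table (the types are my assumption) would be:

```sql
CREATE TABLE quake (
  time      DATETIME,       -- date and time of the quake
  lat       DECIMAL(8,4),   -- latitude in decimal degrees
  lng       DECIMAL(9,4),   -- longitude in decimal degrees
  depth     DECIMAL(6,2),   -- depth in km
  magnitude DECIMAL(3,1)
);
```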
I’m going to be filling this table with data from the Advanced National Seismic System catalog. Under “Select earthquake parameters,” I’m going to change Start date/time to “1990/01/01,00:00:00”. I’m going to leave End date/time blank, which means it will run through the latest data. I’m going to set Min. magnitude to 5.0 and not fill in any other fields or change any options. You can set things however you want. Sometimes I’ll filter by latitude or longitude to measure earthquake patterns in particular areas of the globe – such as the New Madrid fault line a couple hundred miles from where I live!
Just know that if you don’t at least filter by a reasonable date range or minimum magnitude, you’re going to get way too much data, and you may need to change the “Line limit on output” at the bottom to something much greater than the default of 10000. My current quake table has about 32000 rows in it, but I haven’t updated it in a few months. I’m going to fill it anew for this tutorial, and I know I’ll need more lines than that under these parameters, so I changed it to 50000.
Click “Submit request” and you’re off. A new tab or page should open and after a couple seconds start filling with line after line of earthquake records. (If you didn’t specify an end date, scroll to the bottom and make sure you have some recent data. If it cuts off early you may need to increase your line output.)
This looks good to me, so I’m just going to hit Cmd-A (select all), Cmd-C (copy), open up TextEdit, and hit Cmd-V (paste). Now I have all these lines of data on my computer, and I need a way to generate INSERT statements to dump into my MySQL table. I’m going to use C++ to do that.
First I need to prep my lines for parsing in C++. I erase the top introductory lines so that the file starts with my first line of earthquake data: “1990/01/01 7:49…..” I save the file as “log” in my quake folder.
Now unfortunately my Mac doesn’t want to save boring old .txt files, so I have to go into Finder, change the extension, and confirm that I really do want to “Use .txt”.
Now I open up my quakes.cpp file (another text file whose extension I had to change), which has the following code:
/* Standard C++ headers */
#include <iostream>
#include <fstream>
#include <string>
using namespace std;

int main() {
    ifstream infile;
    ofstream outfile;
    infile.open("log.txt");      // the catalog data saved earlier
    outfile.open("queries.txt"); // created automatically if it doesn't exist
    string rest, date, time, lat, lng, depth, magnitude;
    infile >> date; // prime the loop with the first line's date
    while (infile) {
        infile >> time;
        time = date + " " + time;
        infile >> lat;
        infile >> lng;
        infile >> depth;
        infile >> magnitude;
        getline(infile, rest); // skip the remaining fields on the line
        string query = "insert into quake (time, lat, lng, depth, magnitude) values ('";
        query += time + "', " + lat + "," + lng + "," + depth + ",";
        query += magnitude + ");";
        outfile << query << endl;
        infile >> date; // grab the next line's date; fails cleanly at end of file
    }
    return 0;
}
I make sure that the infile.open line references the name of the .txt file I just made. The outfile name doesn’t matter; the file will be created if it does not exist.
Now I need to run my C++ file. For the longest time I did not know Macs could do this, but they can. I open up a new Terminal window, move to the folder containing my cpp file (via the cd command) and run the following statement:
make quakes
This compiles the file into an executable called quakes. Then I just reference the executable to run it:
./quakes
Voila! I have a whole new text file full of usable queries.
Now I just do another Cmd-A Cmd-C, move over to my first Terminal window, and I’m ready to paste the statements into my database. Ah, but not yet. Since I’m rebuilding the whole table in this example, I need to clear it out first with a quick “DELETE FROM quake;”. Then I paste.
The program interprets each line as its own statement and it runs them one at a time as the little window fills with data rushing past. I can tell by the way the date fields are incrementing that this is going to take a few minutes. This is also a good time to remind you that the catalog can be a couple days behind as they verify the newest logs, clean up duplicates, and the like. (As of the data I pulled around 3/11/11, 7:00 PM Central Time, the big one that just hit Japan not quite 24 hours ago isn’t in there.)
But once it’s all in there and the screen stops moving, the hard part is done and the fun begins! I’ve got all my data entered with time, latitude, longitude, magnitude, and depth, and I can plop whatever query I want in there to slice and dice my new data. How did 2010 stack up against other years for, say, recorded quakes of 6.0 magnitude or greater?
SELECT COUNT(*), YEAR(time) FROM quake WHERE magnitude >= 6.0 GROUP BY YEAR(time) ORDER BY COUNT(*) DESC;
There we go – third place. This is exactly the process I used when I analyzed recent global history after last year’s Haiti and Chile quakes, and it’s what I’ve been using to analyze the activity around Arkansas and the New Madrid in early 2011. Once you get your initial files set up, it’s just a matter of hitting the up arrow in Terminal to reuse your previous commands, and a bunch of copying and pasting. Often I’ll put some numbers into Numbers to make some pretty charts. And that’s all there is to it!
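As one more example, the regional filtering mentioned above is just a matter of adding latitude and longitude bounds to the WHERE clause. A hypothetical New Madrid–area query (the bounding box values here are illustrative, not the exact ones I use) might look like:

```sql
SELECT COUNT(*), YEAR(time)
FROM quake
WHERE lat BETWEEN 35.0 AND 38.0   -- rough latitude band around the fault zone
  AND lng BETWEEN -92.0 AND -88.0 -- rough longitude band
GROUP BY YEAR(time)
ORDER BY YEAR(time);
```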