Why I Wish Google Circles Were Reversed

For years social networks have been trying to digitally duplicate the complex analog properties of relationships. MySpace had a “Top 8” to signal your closest friends. It was fun, but it was signaling only. Facebook finally allowed grouping to mimic your analog life’s partitions, but it was cumbersome to use. Meanwhile, Twitter stumbled on an important aspect that Facebook had ignored: asymmetric relationships (Facebook has since created a “subscription” feature). But tweets are still either all private or all public.

Posted in Uncategorized | 9 Comments

Speak, Friend, And Comment: Block WordPress Comment Spammers That Never Visit The Page

Like a lot of other self-hosted WordPress bloggers, I had a problem with spam comments. The system is pretty good at marking comments as “spam,” but there are still occasional false positives and I like to look through the spam box to make sure nothing legitimate gets missed (I especially like to do this on a blog that’s still small, where every real comment is more important). But once I started getting upwards of 40 spam comments a day, trolling through the comments was simply becoming unmanageable. After trying various mods and hacks to the WordPress code, I finally developed a custom solution that dropped me to 0 spam comments per day. (If you don’t care about the discovery process, skip down to “How to Block The Spammers”)

[Chart: spam comments per day]

Posted in Tutorials | Leave a comment

Link Google Analytics and Webmaster Tools For More Data!

I was browsing Google Analytics today when I noticed a new option on the left-hand side called “Search Engine Optimization.” I clicked on one of the sub-options and was invited to link my GA account with my Google Webmaster Tools account.

[Screenshot: enabling the Analytics and Webmaster Tools link]

This took me to a different settings page within Analytics, giving me an option to link my Account:

[Screenshot: associating sites between Analytics and Webmaster Tools]

I clicked “Edit” and followed the instructions. I got sent over to Webmaster Tools to select my site and then got sent back to Google Analytics, where my settings option now looked like this:

[Screenshot: Webmaster Tools profile selected in Analytics]

I’ve only just started playing around with the connection, but it looks pretty cool. Essentially it lets you use Google Analytics to look at some data from Webmaster Tools, such as the info in the Search queries tab, but with far more access to that data than Webmaster Tools itself gives you. It seems to go back farther, and it lets you apply GA-style data filtering and manipulation.

[Screenshot: Average Position report in Analytics]

For instance, I can look at the Average Position of my site in SERPs for the given keywords and how it has changed over time. I applied a filter for Average Position > 0, which allowed me to filter out all the position-less keywords that cloud up the results in Google Webmaster Tools.

A few things to note:

1. Data for the most recent two days is not available, displaying the same lag that we are used to seeing in GWT.

2. Data only goes back to June 29. This is much farther back than the GWT data goes, but I will have to wait and see if it rolls forward like the GWT data does, or if these are permanent numbers and that’s just as far back as the integrated data goes (I’m hoping for the latter).

3. There are still some bugs – note the absurd combined CTR.

Overall, this looks like a promising new feature from the teams at Google, and I’m excited to see what else is there already and what else will be coming as time goes on.

Posted in Tutorials | 1 Comment

MPG Statistics For 1998 Chevrolet Cavalier

When we replaced my wife’s van with a car, I started keeping track of the Toyota Camry’s MPG. Surprisingly I had never done this with my existing car, and I thought it was a good chance to gather some scientific data and compare the two. Since I am much nerdier than my wife, in addition to recording multiple trials I will be running some of them under varying conditions. My drive to work is 95% highway and we are currently using my wife’s car for all of our mutual non-highway driving, so I will drive on the highway at some different speeds to see what difference that makes.

Fillup    Mileage   Miles   Gal    MPG    Notes
8/22/11   125883
8/26/11   126038    151     4.99   30.3   65mph avg
8/31/11   126178    140     4.64   30.2   65mph avg
9/8/11    126370    192     5.94   32.3   60mph avg
9/16/11   126623    253     7.76   32.6   60mph avg
9/21/11   126784    161     5.89   27.3   varied
9/27/11   127018    234     7.87   29.7   varied
Total     127018    1135    37.1   30.6
Posted in Experiments | Leave a comment

MPG Statistics For 1996 Toyota Camry

We recently replaced my wife’s van with a Toyota Camry, and being the nerdy statistician that I am, I started keeping track of the mileage when we filled up to try to gather some personal data about its miles per gallon. (I’m also tracking my 1998 Chevy Cavalier.) I will update this post with new data as time goes on.

Fillup    Mileage   Miles   Gallons   MPG
8/21/11   148184
8/25/11   148365    181     6.54      27.7
8/30/11   148688    323     11.6      27.8
9/7/11    149042    354     11.92     29.7
9/9/11    149140    98      3.78      25.9
9/12/11   149419    279     8.51      32.8
9/16/11   149651    232     7.98      29.1
9/21/11   149864    213     7.22      29.5
9/26/11   150119    255     8.27      30.8
10/3/11   150383    264     8.92      29.6
Total     150383    2199    74.7      29.4

Blue indicates a span that contained much more highway driving than usual.

Posted in Experiments | Leave a comment

Social Media Buttons For Blogs: Twitter, Facebook, Google Plus

There are a lot of options out there for buttons that let visitors share content a number of different ways, but what I mostly care about are the big ones. Right now I want share buttons with counters for Twitter, Facebook, and Google +1. So I put together my own set of links, which I thought I would share with the world so you can customize them as you wish.

Posted in Computer Use | 5 Comments

Customizing phpBB Header and Footer

I started work on a new website this week, and I wanted it to have a forum. Right now phpBB is a pretty popular piece of open-source forum software, and my host (the wonderful FatCow) offered an easy installation service for it, so I clicked the buttons and set it up. I have full access to the files and database it created, and I intended to figure out how to stick the whole thing in a box (or a “div,” if you prefer) so I could customize the look of the overall body, header, and footer to match the rest of my site. I’ve done the same thing with three or four WordPress installations, and didn’t think it’d be too hard.

Well, I was wrong.

But after a few days of stumbling around the Internet, I figured everything out and thought I’d write up a tutorial for all to learn.

Posted in Tutorials | 2 Comments

Creating Amazing Smart Playlists in iTunes

Playlists have been a cool feature of music playing programs ever since the fledgling days of Winamp and MusicMatch Jukebox. You put all your songs on your computer, but sometimes you want to listen to a specific grouping of songs that doesn’t fit in the neat little boxes of a particular artist, album, or genre, so you take the time to make an arbitrary custom list.

Of course, we’ve come a long way since then. Many programs today also allow you to use dynamic playlists that automatically fill with songs based on criteria you select, and this means that you can customize your own shuffle and create an easy and seamless – and constantly fresh – listening experience.

In iTunes, these dynamic playlists are called “Smart Playlists,” and iTunes actually comes with many of these by default. If you scroll to the bottom of your left-hand sidebar, you will see a list with entries like Recently Added or Recently Played. Clicking on one of these lists will show you what songs are in it. If you right-click on the name for the context menu and select Edit Smart Playlist, you will see the qualifications for the songs in that playlist. Here are my current settings for my Recently Played list (click to enlarge):

I have two rules: “Last Played > is in the last > 2 weeks” and “Media Kind > is not > Podcast,” which basically means this list contains any song I have played in the last two weeks besides podcasts. I have an incredible amount of customizability here. I can add or subtract rules and base the rules on almost any piece of data my tracks contain, from Genre to Play Count to Time, and even Bit Rate or whether or not a track has Artwork associated with it. I can fill the list with songs that match all of my rules, or any one of them. I can limit it to any number of tracks, time length, or amount of hard drive space, and define how that limiting is determined. I can set it to Live updating (my usual preference) or uncheck that to make a one-time list that will not update over time. And the final nice thing about these playlists is that they copy over to my iPhone and update every time I sync.

I have edited these default playlists to make them more useful to me. You can see from the screenshot that I’ve increased the Top 100 Most Played to a Top 200, and I’ve deleted the silly “90’s Playlist” altogether, as I don’t have a need for it. But the smart playlists I use the most are the ones I’ve created myself. (All you have to do is go to File > New Smart Playlist…)

I am a heavy user of the Rating feature on iTunes. I have the I Love Stars app on my menu bar for setting Ratings when I’m on another program, and I constantly check and update the ratings when the songs play on my phone. A long time ago I set up a classic 5 Star playlist.

It’s great to shuffle my favorite jams in the world, but I’m very selective about my 5-star list. It only has 89 songs. I need more than that to get through a day sometimes, and even though I want to hear those songs much more often than my other songs, I don’t usually want to hear only them. So I also made a 4-5 Star playlist that includes 4-star songs as well as 5-star songs. (The default Top Rated, which is set to “greater than 3,” does the same thing.)

This is a pretty good playlist. With 752 songs, it’s big enough to stay fresh, but sometimes even that gets old, so I also made a 3-5 Star playlist, which at this moment has 2227 songs in it. So depending on what kind of mood I’m in, I can listen to my most favorite songs or a broader list that includes songs I haven’t heard as much.

This was a pretty good system, and it satisfied me for a long time. But even it had its drawbacks. Two-thirds of the songs in my 3-5 Star list are only rated 3 stars, which means I mostly hear songs I don’t like as much. But if I only listen to 4-5, I never hear the 3-star songs at all, which is a problem in the other direction. And all of the playlists would sometimes succumb to that curious feature of computerized shuffling that seems to pick certain songs over and over early on in shuffles while completely ignoring other ones. I wished that I had a way to keep my lists but ignore songs I had heard recently, with “recently” being subjective depending on how much I liked the song.

Then I discovered the best part of the Smart Playlist feature. One of the pieces of data a rule can be based on is another playlist! This allows you to recursively fine-tune your lists in even more ways than I previously realized. So here’s what I did.

First, I set up a folder (another cool feature for organizing your massive numbers of playlists – File > New Playlist folder) and called it SMART, because this was going to be smarter than any smart playlist I had already made.

Then I made a Smart Playlist inside that I called 3ST-3MO, which basically contains all the 3 Star songs that I haven’t heard for 3 months.

I also added a 4ST-3WK for all the 4 Star songs I haven’t heard in 3 weeks, and a 5ST-3DAY for all the 5 Star songs I haven’t heard in 3 days. Then I brought them all together in a super duper Smart Playlist that matches ANY rule and includes any song that is in any of the 3 playlists. The icing on top is that I also include the contents of a 4th playlist, Recently Added, so that new songs that aren’t rated yet will get to audition for one of the 3 spots before I forget about them completely.

I called it SMART+RECENT, and it’s a thing of unparalleled genius and beauty. I don’t hear the 3 Star songs too frequently and I don’t hear the 5 Star songs too infrequently. And because it regularly updates away the songs that I’ve just listened to, the shuffle doesn’t get stuck picking some of the same ones ahead of others. I’m still thinking about tweaking some of the details – 3 months might not be long enough for the 3 stars, for instance – and you can do the same for yours. It’s very flexible.

My tiered system is a lot like the radio station concept of heavy rotation, medium rotation, and so forth, and it basically means that all I have to do is start my shuffle and get a satisfying dose of music. More listening, less skipping, and thus more focusing on whatever the reason I’m listening to the music in the first place, whether it’s work, driving, or just relaxing and enjoying the sounds of my favorite musicians.

Posted in Computer Use | Leave a comment

How To Analyze Your Own Earthquake Data

If you’re ever curious about earthquake history and patterns, it’s not that difficult to do your own original research. Thanks to openly available data on the Internet, it doesn’t cost anything and only takes a little time and some basic programming knowledge.

To do any analyzing, you will definitely need a database. I prefer using MySQL on my Mac, and that’s what this tutorial will be based on. I installed the community server available here. To access it, I pull up Terminal and type in the following:

alias mysql=/usr/local/mysql/bin/mysql
mysql

For some reason mysql won’t run unless I create an alias for it and then call the alias (adding /usr/local/mysql/bin to your PATH would be a more permanent fix, but the alias works). That should pull up something like this:

[Screenshot: the MySQL prompt in Terminal on a Mac]

I’ve created a table called quake with the following structure:
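A schema consistent with the columns used later in this post (time, lat, lng, depth, magnitude) would look something like this; the exact column types shown are an assumption:

```sql
CREATE TABLE quake (
    time      DATETIME,  -- date and time of the event
    lat       FLOAT,     -- latitude in degrees
    lng       FLOAT,     -- longitude in degrees
    depth     FLOAT,     -- depth in kilometers
    magnitude FLOAT
);
```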

I’m going to be filling this table with data from the Advanced National Seismic System catalog. Under “Select earthquake parameters,” I’m going to change Start date/time to “1990/01/01,00:00:00”. I’m going to leave End date/time blank, which means it will go to the latest data. I’m going to set Min. magnitude to 5.0 and not fill in any other fields or change any options. You can set things however you want. Sometimes I will filter by latitude or longitude to measure earthquake patterns in particular areas of the globe – such as the New Madrid fault line a couple hundred miles from where I live!

Just know that if you don’t at least filter by a reasonable date length or magnitude size, you’re going to get way too much data, and you may need to change the “Line limit on output” at the bottom to be much greater than the default of 10000. My current quake table has about 32000 lines in it, but I haven’t updated it in a few months. I’m going to fill it anew for this tutorial and I know I’m going to need more lines than that under these parameters, so I changed it to 50000.

Click “Submit request” and you’re off. A new tab or page should open and after a couple seconds start filling with line after line of earthquake records. (If you didn’t specify an end date, scroll to the bottom and make sure you have some recent data. If it cuts off early you may need to increase your line output.)

This looks good to me so I’m just going to hit Cmd-A (select all), Cmd-C (copy), open up TextEdit and hit Cmd-V (paste). Now I have all these lines of data on my computer, and I need a way to develop INSERT statements to dump into my MySQL table. I’m going to use C++ to do that.

First I need to prep my lines for parsing from C++. I erase the top introductory lines so that my file starts with my first line of earthquake data: “1990/01/01 7:49…..” I save the file as “log” in my quake folder.

Now, unfortunately, my Mac doesn’t want to save boring old .txt files, so I have to go into Finder, change the extension, and confirm that I really do want to “Use .txt.”

Now I open up my quakes.cpp file (another text file whose extension I had to change), which has the following code:

/* Standard C++ headers */
#include <cstdlib>
#include <iostream>
#include <fstream>
#include <string>

using namespace std;

int main(void)
{
    ifstream infile;
    ofstream outfile;

    infile.open("log.txt");
    outfile.open("query.txt");

    string test, date, time, lat, lng, depth, magnitude;

    // Prime the loop; the stream tests false once we hit end of file.
    infile >> date;

    while (infile)
    {
        infile >> time;

        // Combine the date and time columns into one datetime string.
        time = date + " " + time;

        infile >> lat;
        infile >> lng;
        infile >> depth;
        infile >> magnitude;

        // Throw away the rest of the line (location name, etc.).
        getline(infile, test);

        string query = "insert into quake (time, lat, lng, depth, magnitude) values ('";

        query += time + "', " + lat + "," + lng + "," + depth + ",";
        query += magnitude + ");";

        outfile << query << endl;

        infile >> date;
    }

    return 0;
}

I make sure that the infile.open line is referencing the name of the .txt file I just made. The outfile line doesn’t matter, because it will be created if it does not exist.

Now I need to run my C++ file. For the longest time I did not know Macs could do this, but they can. I open up a new Terminal window, move to the folder containing my cpp file (via the cd command) and run the following statement:

make quakes

Make’s built-in rules find quakes.cpp and compile it into an executable called quakes. Then I just reference the executable to run it:

./quakes

Voila! I have a whole new text file full of usable queries:
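Each line of query.txt is a self-contained INSERT statement. With made-up values, one looks something like:

```sql
-- Illustrative only: the values below are invented, not real catalog data.
insert into quake (time, lat, lng, depth, magnitude) values ('1990/01/05 07:49:12', 37.04, -121.88, 10.00, 5.40);
```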

Now I just do another Cmd-A Cmd-C, move over to my first Terminal window, and I’m ready to paste the statements into my database. Ah, but not yet. Since I’m updating the whole database in this example, I need to clear it out first with a quick “DELETE FROM quake”. Then I paste.

The program interprets each line as its own statement and it runs them one at a time as the little window fills with data rushing past. I can tell by the way the date fields are incrementing that this is going to take a few minutes. This is also a good time to remind you that the catalog can be a couple days behind as they verify the newest logs, clean up duplicates, and the like. (As of the data I pulled around 3/11/11, 7:00 PM Central Time, the big one that just hit Japan not quite 24 hours ago isn’t in there.)

But once it’s all in there and the screen stops moving, the hard part is done and the fun begins! I’ve got all my data entered with time, latitude, longitude, magnitude, and depth, and I can plop whatever query I want in there to slice and dice my new data. How did 2010 stack up against other years for, say, recorded quakes of 6.0 magnitude or greater?

SELECT count(*), YEAR(time) FROM quake WHERE magnitude >= 6.0 GROUP BY YEAR(time) ORDER BY COUNT(*) DESC;

There we go – third place. This is exactly the process I used when I analyzed recent global history after last year’s Haiti and Chile quakes, and it’s what I’ve been using to analyze the activity around Arkansas and the New Madrid in early 2011. Once you get your initial files set up, it’s just a matter of hitting the up arrow in Terminal to reuse your previous commands, and a bunch of copying and pasting. Often I’ll put some numbers into Numbers to make some pretty charts. And that’s all there is to it!
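The regional filtering mentioned above just means adding latitude and longitude conditions to the WHERE clause. A sketch for a rough box around the New Madrid zone (the bounds here are loose approximations of mine, not precise fault coordinates) could be:

```sql
-- Quakes per year in a rough bounding box around the New Madrid
-- seismic zone; the latitude/longitude bounds are approximations.
SELECT COUNT(*), YEAR(time)
FROM quake
WHERE lat BETWEEN 35.0 AND 37.5
  AND lng BETWEEN -91.5 AND -89.0
GROUP BY YEAR(time)
ORDER BY YEAR(time);
```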

Posted in Tutorials | Leave a comment

How to Pick Good, Safe Passwords

Here is a post I wrote for my company’s blog on some great tips for remembering unique passwords. The basic idea is to pick a pattern that is loosely based on the site’s URL so you can use that pattern to generate a different password for every single site you use. Then if any single site’s information is exposed, all of your other passwords remain safe.

Posted in Computer Use | Leave a comment