I want to keep the comments open on this blog, but I keep getting hit with tons of comment spam, particularly from China.  Okean.com has an excellent list of Chinese IP blocks at http://www.okean.com/antispam/china.html.  The following script reads the data file and formats the CIDRs into .htaccess "deny from" format.  The output can be pasted into your .htaccess file.  You do need to have an "order allow,deny" at the start and an "allow from all" at the end!
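Here is a minimal sketch of such a script (the exact column layout of Okean's data file is an assumption; this version simply takes the first CIDR-shaped field on each line and ignores everything else):

```perl
#!/usr/bin/perl
# cidr2deny.pl - turn a list of CIDR blocks into .htaccess "deny from" lines
use strict;
use warnings;

# Extract a CIDR block from one line of the data file and format it as a
# "deny from" directive; returns nothing for lines with no CIDR on them.
sub cidr_to_deny {
    my ($line) = @_;
    my ($cidr) = $line =~ m{(\d{1,3}(?:\.\d{1,3}){3}/\d{1,2})};
    return unless defined $cidr;
    return "deny from $cidr";
}

# Read the data file named on the command line and print the directives
if (@ARGV) {
    while (my $line = <>) {
        my $deny = cidr_to_deny($line);
        print "$deny\n" if defined $deny;
    }
}
```

Run it as `perl cidr2deny.pl china.txt >> .htaccess`, remembering the "order allow,deny" and "allow from all" wrapper noted above.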

Since I wrote the post Net::Google::Analytics Extended Example, Google updated the API and the Perl module was modified to accommodate the change. That example is broken, so I am providing an updated version here.

There are two major changes. First, Google started using OAuth2 for authentication so that code is different. There are plenty of details in the API docs.

Second, the Perl module has a somewhat smoother interface. I'm not sure if this was an API change or a module enhancement; I think the latter. In any case, the new code is a lot cleaner.
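A rough sketch of the updated flow follows (the client ID, secret, and refresh token are placeholders you obtain from the Google API console, and the exact accessor names are assumptions based on the module's documented interface, not code verified against this site):

```perl
use strict;
use warnings;
use Net::Google::Analytics;
use Net::Google::Analytics::OAuth2;

# OAuth2 replaces the old username/password login; these are placeholders
my $oauth = Net::Google::Analytics::OAuth2->new(
    client_id     => 'YOUR_CLIENT_ID',
    client_secret => 'YOUR_CLIENT_SECRET',
);
my $token = $oauth->refresh_access_token('YOUR_REFRESH_TOKEN');

my $analytics = Net::Google::Analytics->new;
$analytics->token($token);

# The request is now built in one call instead of accessor-by-accessor
my $res = $analytics->retrieve($analytics->new_request(
    ids        => 'ga:14883391',          # your profile ID
    dimensions => 'ga:year,ga:month',
    metrics    => 'ga:visits,ga:pageviews',
    start_date => '2010-10-01',
    end_date   => '2011-09-30',
));
die 'GA error: ' . $res->error_message unless $res->is_success;

# Rows get accessors named after the requested dimensions and metrics
for my $row (@{ $res->rows }) {
    print join("\t", $row->get_year, $row->get_month,
                     $row->get_visits, $row->get_pageviews), "\n";
}
```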

I recently put up a copy of our main site, www.cindyruppert.com, on a new server for test purposes and pointed www.cindyruppert.net at it.  I didn't want Google (or anyone else) to see all of the duplicate content, so I put some basic HTTP Authentication on it, using the .htaccess file:
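A minimal sketch of that kind of .htaccess block (the realm name and password file path are placeholders; the .htpasswd file itself is created with Apache's htpasswd utility):

```apache
AuthType Basic
AuthName "Test Server"
AuthUserFile /home/example/.htpasswd
Require valid-user
```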
Google just deprecated their image chart API that I use to create QR codes.  When I first looked into it a while ago, there wasn't a good solution that worked on the old Perl I was stuck with.  Now I'm all upgraded, looked again, and found GD::Barcode::QRcode.  It works great.

This is pretty straight from the module's synopsis, but it took a second for me to get what I wanted, so I thought I'd post it.

# Create QR code with Perl

use strict;
use warnings;
use GD::Barcode::QRcode;

open my $OUT, '>', 'test.png' or die "Cannot open test.png: $!";
binmode $OUT;    # PNG is binary data

my $gd = GD::Barcode::QRcode->new(
	{ Ecc => 'L', Version => 2, ModuleSize => 16 }
);

print $OUT $gd->plot->png;
close $OUT;

Load an Access database

Here is a stripped down script to load a Microsoft Access database table using Win32::ODBC. This loaded about 100k rows per minute on my machine: WinXP, Access 2003, ActiveState Perl 5.8.8, and a Core 2 6600 processor at 2.4GHz.

use strict;
use warnings;

use Win32::ODBC;

$| = 1;

my $dsn = "LinkManagerTest";
my $db = Win32::ODBC->new($dsn)
    or die "Connect to database $dsn failed: " . Win32::ODBC::Error();

my $rows_added = 0;
my $error_code;

while (<>) {

    print STDERR "."     unless $. % 100;
    print STDERR " $.\n" unless $. % 5000;

    chomp;
    my ($source, $source_link, $url, $site_name) = split /\t/;

    # Column names below are assumed; adjust them to match your Links table
    my $insert = qq{
        insert into Links (Source, SourceLink, URL, SiteName)
        values ('$source', '$source_link', '$url', '$site_name')
    };

    $error_code = $db->Sql($insert);

    if ($error_code) {
        print "\nSQL update failed on line $. with error code $error_code\n";
        print "SQL statement:\n$insert\n\n";
        print "Error:\n" . $db->Error() . "\n\n";
    }
    else {
        $rows_added++;
    }

    $db->Transact('SQL_COMMIT') unless $. % 1000;
}

$db->Transact('SQL_COMMIT');    # commit any rows since the last batch commit
$db->Close();

print "\n";
print "Lines Read: $.\n";
print "Rows Added: $rows_added\n";

exit 0;

Resources for Learning Perl

Gabor Szabo just wrote a Learning Perl post on the issue of how beginners find good Perl learning materials.  He suggested linking to some good material, so here we go.

First up is Perl.org's own Learn Perl page.  Lots of good starting info there.

Next we have chromatic's Modern Perl book, an excellent resource.  It is available in print or as a free PDF download.  I sure wish this was available when I first started learning Perl!

Finally we have Gabor's own Perl Tutorial.

Perl is an awesome language, and great fun to program in.  Dig in and have fun!
There is an update to this post; things have changed.

I wanted to fetch visitors and page views by month for the past year for our website, and quickly found the Net::Google::Analytics module. It and Net::Google::AuthSub installed easily. However, the snippet in the synopsis did not compile ($i was undefined) and was obviously missing a loop over the retrieved data. Anyway, here is a working snippet that retrieves and formats some data for easy loading into a spreadsheet. 

The hardest thing for me was getting the correct profile number.  I thought it was the number in the web page code that looks like UA-191234-1, but it's not.  You have to go to account settings and then edit the profile of the web site to see the magic number - it's the "Profile ID:" at the top of the page.

# Fetch some Google Analytics data

use strict;
use warnings;

use Net::Google::Analytics;
use Net::Google::AuthSub;

my $user    = 'you@gmail.com'; # your account user id here
my $pass    = 'xxxxxx';        # your password here!

my $profile    = '14883391';
my $start_date = '2010-10-01';
my $end_date   = '2011-09-30';

# Login

my $auth = Net::Google::AuthSub->new(service => 'analytics');
my $response = $auth->login($user, $pass);
if (!$response->is_success) {
    die 'Login failed: ' . $response->error . "\n";
}
# Datafeed request

my $analytics = Net::Google::Analytics->new();

my $data_feed = $analytics->data_feed;
$data_feed->auth_params($auth->auth_params);

my $req = $data_feed->new_request();
$req->ids("ga:$profile");
$req->dimensions('ga:year,ga:month');
$req->metrics('ga:visitors,ga:pageviews');
$req->start_date($start_date);
$req->end_date($end_date);

my $res = $data_feed->retrieve($req);

die "Lookup failed\n" unless $res->is_success;
# Print tab separated header line

my $entry = $res->entries->[0];

for my $dimension (@{$entry->dimensions}) {
    my $name = $dimension->name;
    $name =~ s/^ga://;
    print "$name\t";
}

for my $metric (@{$entry->metrics}) {
    my $name = $metric->name;
    $name =~ s/^ga://;
    print "$name\t";
}

print "\n";

# Print tab separated values

for my $entry (@{$res->entries}) {
    for my $dimension (@{$entry->dimensions}) {
        my $value = $dimension->value;
        print "$value\t";
    }
    for my $metric (@{$entry->metrics}) {
        my $value = $metric->value;
        print "$value\t";
    }
    print "\n";
}

exit 0;

I need to download a zip file and extract the contents as part of a batch process, with some fine-grained control over the name and location of the extracted file.

Never messed with zip files using Perl before, so here is the first step - a very simple example to extract the contents of a zip file to the current directory using the names in the zip file.

There is an excellent FAQ and lots of good examples in the distribution.

# extract_zip.pl - very simple zip extract example

use strict;
use warnings;
use Archive::Zip qw( :ERROR_CODES :CONSTANTS );

my $zip_name = "example.zip";

my $zip = Archive::Zip->new($zip_name);
unless (defined $zip) {
	die "Unable to open $zip_name\n";
}

print "The zip file contains ", $zip->numberOfMembers(), " members:\n";

for my $member_name ($zip->memberNames()) {
	print "  Extracting $member_name\n";
	my $status = $zip->extractMemberWithoutPaths($member_name);
	die "\nExtracting $member_name from $zip_name failed\n" if $status != AZ_OK;
}

exit 0;

Cache::FileCache Example

There is a spot in our real estate system that's always needed a cache to hold some data obtained from the internet. I finally got around to it using Cache::FileCache. Not the latest and greatest, but available on my increasingly antiquated system. It was hard to find some decent example code, so here is some I wrote.


# tCacheFile.pl - try out Cache::FileCache
# 04/29/2011  Bill Ruppert

use strict;
use warnings;
use Cache::FileCache;

# Setup cache
my $cache = Cache::FileCache->new({
	namespace           => 'FruitCache',
	default_expires_in  => '100 days',
	cache_root          => 'C:/tools/cache/',
	auto_purge_interval => '1 day',
});
# Cache some items
$cache->set('Orange', 'A round citrus fruit');
$cache->set('Lemon',  'A yellow pointed sour citrus fruit');
$cache->set('Apple',  'A red roundish fruit good for pies');

# Get all items in cache
print "Get all items in cache:\n";
for ($cache->get_keys()) {
	my $data = $cache->get($_);
	printf "  %-10s: %s\n", $_, $data;
}

# Mix cache hits and misses
print "\nTry some hits and misses:\n";
for (qw( Lemon Kiwi Orange Melon Apple )) {
	my $data = $cache->get($_);
	$data = "Not cached!" unless defined $data;
	printf "  %-10s: %s\n", $_, $data;
}

exit 0;

Comparing DNS Servers in Perl

| No Comments
I've been reading about Google's new DNS service with great interest.  I had to switch to OpenDNS some time ago when my ISP began redirecting 404s to a search page, which wreaked havoc on my link verification tools.

While reading, I came upon a shell script at http://www.manu-j.com/blog/opendns-alternative-google-dns-rocks/ that I wanted to try.  Since I run WinXP, I translated it to Perl and spruced it up a bit.  I installed the dig utility from http://members.shaw.ca/nicholas.fong/dig/ and away we went...

# dnstimes.pl - test dns server times

use strict;
use warnings;

# A few sample sites to time; substitute your own favorites
my @urls = qw(
	www.google.com
	www.cnn.com
	www.perl.org
);

# Well-known public resolver addresses
my %dns_servers = (
	Level_3	=> '4.2.2.1',
	Google	=> '8.8.8.8',
	OpenDNS	=> '208.67.222.222',
);

for my $dns_firm (sort keys %dns_servers) {
	my $dns_ip = $dns_servers{$dns_firm};
	for my $url (@urls) {
		my $result = `dig \@$dns_ip $url`;
		my ($time) = $result =~ /Query time: (\d+)/s;
		$time = '?' unless defined $time;
		print "$dns_firm\t$url\t$time\n";
	}
}
