Rebuild missing SSL certificates from Plesk database

January 19th, 2013

I’ve had to deal with errors similar to this occasionally on Plesk servers:

root@cent:# apachectl -t
Syntax error on line 55 of /var/www/vhosts/
SSLCertificateFile: file '/usr/local/psa/var/certificates/cert-sFD3Ys' does not exist or is empty

The most common cause I see is migrating from one Plesk machine to another, though restoring Plesk-created backups can trigger it as well.

Regardless of the cause, as long as the certificates still exist in the psa database, they can easily be re-created over SSH. I got tired of doing this manually and ended up writing a quick bash one-liner to take care of it for me.

Important: back up your certificates directory first in case anything gets overwritten!

root@cent:# tar cvjf /root/psa_certificates.tar.bz2 /usr/local/psa/var/certificates
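
If a run ever clobbers a good certificate, restoring from that tarball is just the reverse operation. A small sketch (the function name is mine; GNU tar strips the leading slash when archiving, so extracting relative to / puts the files back where they came from):

```shell
# restore_certs TARBALL: put an archived certificates directory back
# in place. tar stored the paths without their leading "/", so we
# extract relative to the filesystem root.
restore_certs() {
  tar xvjf "$1" -C /
}

# e.g. restore_certs /root/psa_certificates.tar.bz2
```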

Re-create all plesk SSL certs from psa db:

root@cent:# cd /usr/local/psa/var/certificates
root@cent:# mysql -uadmin -p`cat /etc/psa/.psa.shadow` psa -Ne'select id,cert_file,name from certificates;' \
| while read id cert_file name; do echo "$cert_file : $name"; \
mysql -uadmin -p`cat /etc/psa/.psa.shadow` psa -Ne "select pvt_key from certificates where id=$id;" \
| php -r 'echo urldecode(file_get_contents("php://stdin"));' > "$cert_file"; \
mysql -uadmin -p`cat /etc/psa/.psa.shadow` psa -Ne "select cert from certificates where id=$id;" \
| php -r 'echo urldecode(file_get_contents("php://stdin"));' >> "$cert_file"; done

Re-create all plesk ca certs from psa db:

root@cent:# mysql -uadmin -p`cat /etc/psa/.psa.shadow` psa -Ne'select id,ca_file,name from certificates;' \
| while read id cert_file name; do echo "$cert_file : $name"; \
mysql -uadmin -p`cat /etc/psa/.psa.shadow` psa -Ne "select ca_cert from certificates where id=$id;" \
| php -r 'echo urldecode(file_get_contents("php://stdin"));' > "$cert_file"; done

This pulls the private key, certificate, and CA certificate for each entry from the database, urldecodes them with PHP, and saves them under the filenames Apache expects. Afterwards, run either websrvmng or httpdmng to rebuild the Apache config and make sure it points at the correct files.
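
As an aside, the php -r urldecode step is the only non-standard dependency in the one-liner. If PHP isn't handy for some reason, the decoding can be approximated in pure bash; a hypothetical sketch (the function name is mine, and it assumes the input contains no literal backslashes, which is true of PEM data):

```shell
# urldecode: expand %XX escapes (and '+' as space) the way PHP's
# urldecode() does. Rewrites %XX to \xXX and lets printf expand it.
urldecode() {
  local data=${1//+/ }
  printf '%b' "${data//%/\\x}"
}
```

For example, `urldecode 'hello%20world'` prints `hello world`.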

Automated off-site Linux Backups using Duply and Duplicity

August 8th, 2011

Off-site backups are important, and even though I know this, I rarely implement them on my own servers. Lately, I’ve been setting up rsnapshot to do hourly and daily backups locally (to the same server), and I only make manual backups to remote servers occasionally. I decided to install duply on all of the servers and virtual machines I care about and have them back up to a single backup server. That backup server will also make daily encrypted backups to Amazon S3, effectively giving me three redundant layers of backups.

If you haven’t heard of Duplicity or Duply before: Duply is basically a wrapper around Duplicity that makes it easier to manage. Duplicity itself is similar to rsnapshot, except that it uses tar-format archives to efficiently store the differences between backups instead of hardlinks. Here’s the description from the man page:

Duplicity incrementally backs up files and directories by 
encrypting tar-format volumes with GnuPG and uploading 
them to a remote (or local) file server. Currently local, 
ftp, ssh/scp, rsync, WebDAV, WebDAVs, HSI and Amazon S3 backends 
are available. Because duplicity uses librsync, the incremental 
archives are space efficient and only record the parts of files 
that have changed since the last backup. Currently duplicity 
supports deleted files, full Unix permissions, directories, 
symbolic links, fifos, etc., but not hard links.

I wrote this mainly as a reference for myself when I need to set duply up on another server, but it might be useful for others as well.


Xen file-based vs LVM-based disk images (benchmarks)

May 8th, 2011

I’ve been messing around a lot with Xen lately and have seen several articles and forum posts debating the advantages of file-based disk images, like /mnt/xen/VM01-disk.img, versus giving the VM direct access to an LVM volume. So, I ran a few simple tests of my own to determine what would work best for my machine.

Dom0 is using four 1.5TB 7200RPM Seagate drives in a software RAID-10, /dev/md2. Both Dom0 and DomU have 1GB of RAM and are using ext3. Xen is running mostly default settings with the default scheduler.

From DomU, with the DomU image stored as a file on the Dom0 ext3 filesystem (DomU has 1GB RAM):

[root@DomU]# dd if=/dev/zero of=tmpfile.bin bs=1024k count=10k
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 192.556 seconds, 55.8 MB/s

From Dom0, on the same filesystem where the above DomU’s image is stored (Dom0 has 1GB RAM):

[root@Dom0]# dd if=/dev/zero of=tmpfile.bin bs=1024k count=10k
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 72.0743 seconds, 149 MB/s

I was really surprised to see this much of a difference between Dom0 and DomU disk throughput. I ran the tests several times, and the results barely varied.
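
One caveat with dd benchmarks like these is the page cache: dd can report a write as finished while much of the data is still sitting in RAM. Adding conv=fsync makes dd flush everything to disk before reporting its timing, which keeps runs comparable; a sketch with a smaller write:

```shell
# Flush to disk before dd reports throughput, so the number reflects
# the disks rather than the page cache. (oflag=direct is the other
# option, but not all filesystems support O_DIRECT.)
dd if=/dev/zero of=/tmp/ddtest.bin bs=1024k count=64 conv=fsync
```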


Create Your Own URL Shortener

February 13th, 2011

I don’t really pay attention to Twitter that often, but I did notice more and more people starting to use personalized URL shorteners. There are plenty of free services out there, but if you have somewhere to host a simple PHP script, why not make your own?

I ended up buying, and that’s what I’m going to set this up on. If I wanted to make things shorter, I could take off the /tyn, but then I don’t think it’d make as much sense. redirects back to this page, for example. If you need help picking out a short domain name, try out

To create my own shortener, I decided to just use PHP’s base_convert function, which converts between bases 2 through 36. For a personal URL shortener, you shouldn’t need more than base 36. I did end up having to write a base-62 converter class for sh0tz to keep its URLs short, but that’s another post for another time.
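
To make the encoding concrete: the shortener just renders a numeric row ID in base 36, which is exactly what base_convert($id, 10, 36) returns in PHP. Here is the same digit-by-digit loop sketched in bash (the function name is mine):

```shell
# to_base36 N: print the non-negative integer N using digits 0-9a-z,
# matching PHP's base_convert("$N", 10, 36).
to_base36() {
  local n=$1 digits=0123456789abcdefghijklmnopqrstuvwxyz out= r
  (( n == 0 )) && { echo 0; return; }
  while (( n > 0 )); do
    r=$(( n % 36 ))
    out=${digits:r:1}$out   # prepend the next least-significant digit
    n=$(( n / 36 ))
  done
  echo "$out"
}

# Decoding comes free with shell arithmetic: $((36#kp)) gives 745 back.
```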


Installing and Updating WordPress via SVN

September 29th, 2010

I have quite a few WordPress installations that I semi-manage on my server, and just recently realized how time consuming it is every time a new version comes out. WordPress does let you update to the newest version directly from the admin interface, which is definitely nice, but if you have several installations you aren’t going to want to log in to every single one of them and click that update button.

One solution, and the one I decided on for now, is to use Subversion. Ever since 1.5, WordPress has used Subversion for its version control, and they allow public read-only access to the repository. Whether you are already familiar with Subversion or not, it is relatively easy to install WordPress using svn and then keep it up to date as well.

Here’s an example on my server:

$ cd /var/www/vhosts/
$ svn co .
..... svn will download the 3.0 branch of WordPress to the current directory ...

Be sure to include the period at the end of that svn checkout command so that it downloads WordPress into the directory you’re in; otherwise it will create a new directory named 3.0. Once Subversion finishes checking out the branch, and assuming your permissions are all good, you can load the site in your browser and run the WordPress install script to put in your database info and whatnot. You could also modify wp-config.php manually if you’re into that.

And to upgrade WordPress? Easy! You just need to switch the branch Subversion currently has checked out.

$ cd /var/www/vhosts/
$ svn sw .

It will update all the files that need to be updated. Depending on the release, you’ll probably need to go to http://<wordpress install>/wp-admin/upgrade.php once the files are updated.

With this, you could write a bash script that loops through all of your WordPress installation directories and runs the svn switch command to update each one to the latest stable branch. You can find a list of the currently available branches here:
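
A minimal sketch of that script, assuming a Plesk-style /var/www/vhosts/*/httpdocs layout and the public WordPress branch URL (both are assumptions; adjust them for your setup). It skips anything that isn’t an svn working copy:

```shell
#!/bin/bash
# Switch every WordPress svn checkout under a vhosts root to one
# branch. The layout glob and branch URL below are assumptions.
branch="http://core.svn.wordpress.org/branches/3.0"

update_wp_checkouts() {
  local root=$1 dir
  for dir in "$root"/*/httpdocs; do
    [ -d "$dir/.svn" ] || continue    # not an svn working copy
    echo "Switching $dir"
    ( cd "$dir" && svn sw "$branch" . )
  done
}

# update_wp_checkouts /var/www/vhosts
```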


August 18th, 2010

This is based on du.php, and is basically the same thing except that it lets you click on folders to navigate through the directory structure and see how much space each directory is taking up.

It uses PHP’s shell_exec function to call the du utility on the directory it’s in, so if your host doesn’t allow shell_exec, this isn’t going to work for you. If they allow shell_exec but don’t allow SSH access, then this is perfect.
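
For reference, the shell command such a script runs boils down to something like this (the exact flags are my guess, not taken from du.php):

```shell
# Disk usage of each subdirectory of the current directory, in KiB,
# largest first -- the raw data a browser script like this would render.
du -sk -- */ 2>/dev/null | sort -rn
```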
