networked - day to day technical issues (http://54.211.228.60)

A Tool to Backup Files to Amazon S3
http://54.211.228.60/2017/01/a-tool-to-backup-files-to-amazon-s3/
Fri, 06 Jan 2017 23:40:41 +0000

For the past year I've been working on and off on a little project to create a tool which:

  • runs on at least Linux, MacOS X and FreeBSD
  • allows you to back up your files to Amazon S3 while providing optional server side encryption (AES-256)
  • is cost effective for large numbers of files (the problem with things like s3cmd or aws s3 sync is that they need to compare local files with metadata retrieved on the fly from AWS, and this can get expensive)
  • is easy to install
  • provides meaningful error messages and the possibility to debug

I ended up creating a tool called S3backuptool (yeah, not that original) which does the above; it requires Python 2.7, PyCrypto and the Boto library to run.

Details are available on the project's page and it can be installed from prebuilt packages (deb or rpm) for several Linux distributions or from Python's PyPi for far more Linux distributions and OSes.

So far it's been quite an educational endeavour while also catering to my needs.

Metadata about all backed up files is stored locally in SQLite database(s) and in S3 as metadata attached to each uploaded file. When a backup job runs it compares the current state of the files with the state stored in the local SQLite database(s), and only when action is needed on S3 are actual S3 API calls performed (those cost money). If the local SQLite databases are lost, they can be reconstructed from the S3 stored metadata.
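
The idea behind the local cache can be illustrated with a minimal shell sketch of my own (this is not S3backuptool's actual code; the paths, table layout and missing SQL escaping are purely illustrative):

#!/bin/sh
# Illustration only: cache each file's size and mtime in SQLite, act only when they change.
DB=/var/lib/backup-state.db
SRC=/data/to/backup
sqlite3 "$DB" 'CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, size INTEGER, mtime INTEGER);'
find "$SRC" -type f | while read -r f; do
    size=$(stat -c %s "$f")
    mtime=$(stat -c %Y "$f")
    known=$(sqlite3 "$DB" "SELECT 1 FROM files WHERE path='$f' AND size=$size AND mtime=$mtime;")
    if [ -z "$known" ]; then
        echo "changed, would upload: $f"   # the real tool performs the S3 PUT here and also stores metadata in S3
        sqlite3 "$DB" "INSERT OR REPLACE INTO files VALUES ('$f', $size, $mtime);"
    fi
done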

Secure and Scalable WordPress In the Cloud (Amazon S3 for content delivery and EC2 for authoring)
http://54.211.228.60/2017/01/secure-and-scalable-wordpress-in-the-cloud-amazon-s3-for-content-delivery-and-ec2-for-authoring/
Fri, 06 Jan 2017 16:59:49 +0000

Several months ago I decided to move all of the stuff running on my server (a Droplet on Digital Ocean) to various cloud providers. My main motivation was that I no longer had time to manage my email server, which was made up of Postfix + Zarafa + MailScanner + SpamAssassin + ClamAV + Pyzor/Razor/DCC + Apache2 + MySQL. On top of that I was also dealing with monitoring + backups.
Anyway, moving the mail was easy as there are plenty of mature cloud solutions.

With my blog (which I had not posted to in a long time) I decided to try something a bit more interesting, so I moved it to Amazon S3 as a static website.
In order to achieve this I had to solve the following:

  • convert WordPress from dynamically generated pages to static ones. This was easy using the plugin "WP Static HTML Output", which does what it says
  • find a solution for the comments, as with a static page you won't be able to add comments. The solution was to start using Disqus. I've installed the plugin "Disqus Comment System", created a Disqus account and then used the plugin to import all of the comments which were stored in WordPress' database
  • find a solution for search. Again this was not hard and I've moved to using Google Search (plugin "WP Google Search")
  • once I had the above I generated a static release, which was a .zip file
  • I've created an S3 bucket called aionica.computerlink.ro . The bucket must be named after your site/blog, and bucket names are unique across all of AWS S3, which means that if someone else already has a bucket with that name then you're out of luck and your remaining option is to use CloudFront together with a differently named S3 bucket
  • created a DNS CNAME entry for aionica.computerlink.ro pointing at aionica.computerlink.ro.s3-website-us-east-1.amazonaws.com.
  • set up an S3 bucket policy allowing anyone to read any bucket content (see the example right after this list)
  • uploaded the contents of the .zip file to the S3 bucket root
  • With all of the above in place I could now browse my blog and it was hosted on S3.
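
For reference, the public-read bucket policy and the static website settings mentioned in the list above can also be applied from the AWS CLI. A minimal sketch under my own assumptions (the policy file path and the index/error document names are illustrative, not taken from the original setup):

# public-read policy for every object in the bucket
cat > /tmp/public-read-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::aionica.computerlink.ro/*"
    }
  ]
}
EOF
aws s3api put-bucket-policy --bucket aionica.computerlink.ro --policy file:///tmp/public-read-policy.json

# enable static website hosting on the bucket
aws s3 website s3://aionica.computerlink.ro/ --index-document index.html --error-document 404.html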

The next challenge was to create an authoring system which is cost effective and which I can easily use to create and publish new content.
I've decided to go with AWS EC2 so what I did was:

  • create an IAM policy (attached to the EC2 instance's IAM role) allowing read + write access to the S3 bucket hosting aionica.computerlink.ro:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "Stmt1483709545000",
                "Effect": "Allow",
                "Action": [
                    "s3:ListAllMyBuckets"
                ],
                "Resource": [
                    "*"
                ]
            },
            {
                "Sid": "Stmt1483709567000",
                "Effect": "Allow",
                "Action": [
                    "s3:AbortMultipartUpload",
                    "s3:DeleteObject",
                    "s3:GetObject",
                    "s3:ListBucket",
                    "s3:ListBucketMultipartUploads",
                    "s3:ListMultipartUploadParts",
                    "s3:PutObject"
                ],
                "Resource": [
                    "arn:aws:s3:::aionica.computerlink.ro",
                    "arn:aws:s3:::aionica.computerlink.ro/*"
                ]
            }
        ]
    }
  • create an EC2 Security Group having the following rules:
    -- allow HTTP, HTTPS and SSH traffic only from my home's public IP address
    -- allow ICMP from anywhere
    -- allow all traffic from any interface having the same Security Group applied
  • spin up an Ubuntu 16.04 server running on an EC2 instance of type t2.micro and ensure the instance has a public IP address allocated upon boot (so no Elastic IP permanently allocated). Also at creation time I associated the above mentioned IAM EC2 role and Security Group with the instance
  • install WordPress, Apache2 and Mysql and make them work as on the old server. WWW root was set as /var/www/html/
  • edit /var/www/html/wp-config.php and around the top add:
    define('WP_HOME','http://1.2.3.4');
    define('WP_SITEURL','http://1.2.3.4');

    where 1.2.3.4 was the public IP address of the instance at that point in time.
    This is needed because otherwise WordPress would redirect you away to your original site url (in my case aionica.computerlink.ro)

  • install the AWS CLI (apt-get install awscli); it is built on top of boto/botocore and will be used to sync content to S3. I tried s3cmd first, but after several hours of debugging I figured out it can't correctly guess the MIME type of files, so it was setting Content-Type text/plain for CSS files and the site was rendering incorrectly.
  • create a script for publishing new releases and placing it at /usr/local/bin/publish_blog_to_s3 :
    #!/bin/sh
    S3_BUCKET='aionica.computerlink.ro'
    BLOG_WWW_ROOT='/var/www/html'
    WORKDIR='/tmp/uncompress_archive'
    # pick the newest static-export archive generated this month by the plugin
    ARCHIVE_FILE=$(find ${BLOG_WWW_ROOT}/wp-content/uploads/`date +%Y/%m` -type f -name "wp-static-html*.zip" -printf "%T+\t%p\n" | sort | tail -n1 | awk '{print $2}')
    rm -rf $WORKDIR
    mkdir -p $WORKDIR
    unzip $ARCHIVE_FILE -d $WORKDIR
    # sync the unpacked site to the bucket, deleting remote files that no longer exist locally
    aws s3 sync ${WORKDIR}/ s3://${S3_BUCKET}/ --delete
  • create a script which adds iptables rules containing the public and private IP of the EC2 instance. This is needed because the plugin which generates the static WordPress pages tries to connect back to the server using the server's public IP address. That causes problems because the public IP is provided by AWS using NAT (Network Address Translation), and AWS networking doesn't appear to provide NAT reflection / NAT loopback, so I needed an iptables rule which sends traffic destined for the instance's public IP back to the instance (it never leaves the server).
    I also took the chance to have this script edit WP_HOME and WP_SITEURL in /var/www/html/wp-config.php, so upon boot they are set to whatever public IP the instance gets. The resulting script was placed in /usr/local/bin/configure_environment.sh and has the content:

    #!/bin/sh
    PUBLIC_IP=`wget -qO- http://169.254.169.254/latest/meta-data/public-ipv4`
    PRIVATE_IP=`wget -qO- http://169.254.169.254/latest/meta-data/local-ipv4`
    # put in NAT rule to deal with lack of AWS NAT reflection handling
    /sbin/iptables -t nat -I OUTPUT -d $PUBLIC_IP -j DNAT --to-destination $PRIVATE_IP
    # Adjust WordPress base-url based on whatever public IP we have
    /bin/sed -i -e "s#'WP_HOME','http://.*'#'WP_HOME','http://${PUBLIC_IP}'#g" -e "s#'WP_SITEURL','http://.*'#'WP_SITEURL','http://${PUBLIC_IP}'#g" /var/www/html/wp-config.php

    And I've configured /etc/rc.local to call it upon boot (a minimal example follows right after this list).
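
For completeness, the /etc/rc.local hook mentioned in the last step is just a one-line call before the final exit 0; a minimal sketch (illustrative, not a copy of my actual file):

#!/bin/sh -e
# /etc/rc.local - executed at the end of boot; the file must stay executable for this to run
/usr/local/bin/configure_environment.sh
exit 0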

Now when I want to author and publish new content the workflow is:

  • power on the VM using the EC2 console
  • adjust my IP address in the security group - only needed if my home's public IP address has changed
  • check what public IP was allocated to the EC2 VM and point my browser at http://public-ip/wp-login.php
  • write content, publish and then go to "Tools > WP Static HTML Output" and click Generate Static Site
  • log in to the server via SSH and run the script /usr/local/bin/publish_blog_to_s3 via sudo
  • shut down the EC2 VM in order to save costs

The publishing part could be automated further: instead of running a script one could write a WordPress plugin which somehow triggers a run of /usr/local/bin/publish_blog_to_s3, but given that I'm the sole user of this setup there is no point in doing so.

Advantages of the above setup:

  • low running costs - it all depends on the traffic and storage you use (see S3 pricing); for example in US East 1 storage costs 2.3 cents per gigabyte per month, plus request and data transfer charges
  • scalability - it's all static HTML so it's really fast to serve. Even so, at any time you can enable CloudFront and scale massively (also cheap)
  • security - there is no active component on your side which is reachable by the general public. The only weak point could be your authoring system, but if you keep it locked down via the firewall then you're fine
  • low management costs

P.S. This blog post was produced and is delivered by exactly such a setup.

Fast MySQL database restore / import from full dump files
http://54.211.228.60/2012/04/fast-mysql-database-restore-import-from-full-dump-files/
Sun, 08 Apr 2012 18:25:41 +0000

With MySQL Community Edition, in most cases you have two ways of creating a full database backup:

  • using the command line utility mysqldump, which works with both MyISAM and InnoDB tables while the database server is running
  • shutting down the MySQL server and copying the full data dir in the case of InnoDB databases, or just the database folder inside the data dir in the case of MyISAM based databases

The full list of methods to do backups is available on Mysql's site.

While a binary backup is the fastest to "restore", it has limitations: mainly that if you use the InnoDB storage engine you have to restore the whole MySQL instance and not just a specific database, and that you can only safely restore on the same MySQL version (though it may work on newer ones too).
On the other hand a db dump created with mysqldump will allow you to restore only the needed database (or all of them, if you have a full dump of all databases), it will allow you to restore on different MySQL versions as long as the required features are supported (if restoring on an older MySQL version), and it is also the most disk space efficient way to restore (see how MySQL manages disk space for InnoDB tables).

The problem lies in the details: when restoring a large dump created with mysqldump you can discover it takes even days if the dump file is large (I've seen it for a 30GB dump file, which isn't that large). The cause is that the dump file is a series of SQL statements and each INSERT triggers an index update.

To speed up a dump file import as much as possible, apply as many of the following as you can:

  1. using mysqldump create the dump files using the --opt option.

    Use of --opt is the same as specifying --add-drop-table, --add-locks, --create-options, --disable-keys, --extended-insert, --lock-tables, --quick, and --set-charset. All of the options that --opt stands for also are on by default because --opt is on by default.

    The most important one of the list is --disable-keys

    For each table, surround the INSERT statements with /*!40000 ALTER TABLE tbl_name DISABLE KEYS */; and /*!40000 ALTER TABLE tbl_name ENABLE KEYS */; statements. This makes loading the dump file faster because the indexes are created after all rows are inserted.

  2. Disable unique checks, foreign key checks and autocommit (the explanation is on MySQL's site). You can do this either by editing the SQL dump file and adding at the top (or by wrapping the import in a small pipeline, as sketched at the end of this post)
    SET autocommit=0;
    SET unique_checks=0;
    SET foreign_key_checks=0;
    

    and at the end of the file append

    COMMIT;

    or you can edit /etc/mysql/my.cnf (adjust path according to your case) and in the [mysqld] section add

    init_connect='SET autocommit=0,unique_checks=0,foreign_key_checks=0'
    

    Save and restart mysqld in order for the new configuration changes to take effect.
    Be sure to read the explanation from Mysql in order to understand the impact of those three commands. Generally it is safe to run them if no other database is running on the same mysqld instance, but don't take my word on it.

  3. In mysqld config file (my.cnf) in the [mysqld] section set:
    innodb_flush_log_at_trx_commit = 2

    If you have a power loss during the import then you will have data loss when running with this option set to a value different from 1. Again, read the explanation from MySQL's site in order to understand the impact. If no other databases are running inside the MySQL instance this should be safe.

Depending on your constraints you can choose to implement only part of the above advice.
Once the import is completed, be sure to revert the changes in my.cnf and restart mysqld.
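
For example, the session settings from point 2 can be applied without touching the dump file or my.cnf by wrapping the import in a small pipeline; a minimal sketch of mine (the database and file names are placeholders):

( echo 'SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;'
  cat dump_file.sql
  echo 'COMMIT;' ) | mysql -u root -p target_db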

mysql backups using mysqldump
http://54.211.228.60/2012/02/mysql-backups-db-dumps/
Sat, 25 Feb 2012 21:50:27 +0000

I keep encountering all sorts of bad, or at least suboptimal, attempts at doing MySQL full db backups. While it looks like a trivial task using the mysqldump tool, there are several things one needs to take into account:

  • if you back up all databases into a single file (--all-databases), then when you need to restore only one database from the backup you will be in trouble: you either restore all databases on a staging server and afterwards dump just the needed one, or you remove all of the other databases (but with that approach the "mysql" database is still changed), or you use some tool which can extract just the needed database from a full db dump (it's basically a text file so you could script around it). Update: you can use mysql (mysql -D db_name -o < dump_file.sql) to restore a particular db from a dump done with --all-databases, just take care to have db_name created before attempting the restore
  • if you back up each database separately to its own dump file then you will quickly learn that you should have also backed up the usernames and passwords which are allowed to access/modify the database
  • if you run a cron script each night which creates the dump(s) and overwrites the previous night's backup file(s), then you may learn the hard way that a 0 byte or incomplete dump leaves you not only without today's backup but possibly also without yesterday's valid backup (assuming yesterday's wasn't bad as well). So the advice here is to keep more than just tonight's and/or yesterday's backup. I generally keep at least 7 of them if done nightly, as by the time someone realizes they need a backup a day might have passed. Also it is recommended to have a real backup infrastructure in place (with the associated retention policy)


Below is a bash script which I run nightly; it keeps two weeks of db dumps, creates a dump file for each database and also keeps, in a separate file (backup_db_PRIVILEGES_and_DB-CREATE_statements.sql), the statements needed to recreate db access privileges and credentials. PS: the "grant privileges" code was taken a long time ago from somewhere I really don't remember, so if someone feels they need the credit let me know and I'll state it or put a link in.

You may notice that the mysql user password (the mysql root user in this case) is nowhere mentioned in the script. It is really bad practice to put it in the script as it can be seen by anyone running "ps" while the backup is in progress. The best practice is to create a .my.cnf file in the home directory of the user running the backup and put the username and password in that file. Example:

# cat /root/.my.cnf
[client]
user=root
password=MY-REALLY-SECRET-PASSWORD
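
Since this file contains a password in clear text, it's worth making it readable only by its owner (my addition, not part of the original workflow):

chmod 600 /root/.my.cnf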

So here is the backup script; edit the BACKUP_PREFIX variable and adjust as needed the path where the backups should be stored.

#!/bin/sh
BACKUP_PREFIX="/data/backups"
BACKUP_DIR="${BACKUP_PREFIX}/db-backup-`date +%d-%m-%Y`"
DBUSERNAME='root'
TODAY=`date +%d-%m-%Y`
TWOWEEKSAGO=`date --date="2 weeks ago" +%d-%m-%Y`
mkdir -p $BACKUP_DIR
for i in `mysql --skip-column-names --batch -u root -e 'show databases' | egrep -v '^information_schema$|^mysql$'`; do echo "CREATE DATABASE \`$i\`;";done \
        > ${BACKUP_DIR}/backup_db_PRIVILEGES_and_DB-CREATE_statements.sql
echo >> ${BACKUP_DIR}/backup_db_PRIVILEGES_and_DB-CREATE_statements.sql
  mysql --batch --skip-column-names -u root -e "SELECT DISTINCT CONCAT(
    'SHOW GRANTS FOR ''', user, '''@''', host, ''';'
    ) AS query FROM mysql.user" | \
  mysql -u root | \
  sed 's/\(GRANT .*\)/\1;/;s/^\(Grants for .*\)//;/##/{x;p;x;}' >> ${BACKUP_DIR}/backup_db_PRIVILEGES_and_DB-CREATE_statements.sql

for i in `mysql -u root --skip-column-names -B -e 'SHOW DATABASES'| egrep -v 'information_schema'`; do mysqldump -u $DBUSERNAME --opt $i > ${BACKUP_DIR}/backup_db_${TODAY}_${i}.sql; done
mysqldump -u $DBUSERNAME --single-transaction information_schema > ${BACKUP_DIR}/backup_db_${TODAY}_information_schema.sql
tar -cjpf ${BACKUP_DIR}.tar.bz2 $BACKUP_DIR

#remove the backup tarball older than 2 weeks
rm -f ${BACKUP_PREFIX}/db-backup-${TWOWEEKSAGO}.tar.bz2
#remove backup dir
rm -rf $BACKUP_DIR
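
For reference, "run nightly" in my case just means a cron entry along these lines (the script path, time and log file below are illustrative, not the exact values I use):

# /etc/cron.d/mysql-backup
30 2 * * * root /usr/local/bin/mysql_db_backup.sh >> /var/log/mysql-backup.log 2>&1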

What to monitor on a (Linux) server
http://54.211.228.60/2011/12/what-to-monitor-on-a-linux-server/
Sat, 03 Dec 2011 14:54:38 +0000

It is surprising how many articles out there about server monitoring focus on how to use a specific tool, and how few sources document what you actually need to monitor from a best practices point of view.
A well monitored server allows you to fix potential issues proactively and to resolve service interruptions a lot faster, because the problem can be located and solved more quickly.

So here goes my list of things I always monitor, independent of what the server's specific purpose is.

  • hardware status - whether fans are spinning, cpu temperature, mainboard temperature, environment temperature, physical memory status, power supply status, CPUs online. Most of the well known vendors (Dell, HP, IBM) provide tools to check the hardware for the above list of items
  • disk drive S.M.A.R.T. status - you can find out things like whether the hdd is starting to count bad blocks or whether the bad blocks are increasing fast, which gives you a heads up that you need to prepare to replace the disk. Also most of the time you can monitor the HDD's temperature
  • hardware raid array status / software raid status - you really want to know when an array is degraded. Unfortunately most organizations don't actually monitor this
  • file system space available - I start with a warning when usage is at 80% and a critical alarm if usage is above 90%. For big filesystems ( >= 100G) this of course needs to be customized, as 20% means at least 20G (see the example check after this list)
  • inodes available on the file system - again I use the 80% warning, 90% critical. Running out of inodes isn't always obvious and can create a whole lot of other problems. Of course it applies only to file systems which have a finite number of inodes like ext2/3/4
  • system load average - as a rule of thumb I put a warning alarm at 1.5 x the cpu threads on the system and a critical alarm at 2 x the cpu threads. Of course depending on the server's purpose this may get customized
  • swap usage - warning at 50% usage, critical at 70%
  • memory usage - I don't actually monitor this by default as it is highly dependent on the server's purpose. If you monitor this, be sure not to count memory allocated for disk caching (it will be automatically freed by the kernel if memory is needed)
  • uptime lower than a day - this is a great indication that the system rebooted; otherwise you risk not noticing an unscheduled system restart, especially with VMs which boot really fast as there is no actual POST to do
  • network interface resets, errors, packet collisions, up/down changes, interface speed and duplex - any change in this list may be a good signal of trouble ahead. For example servers mounting NFS exported file systems have a hard time when interfaces flap
  • total number of processes and threads - this is dependent on your system (application, amount of cpu cores, etc) but definitely worthwhile monitoring, as you want to know when processes rise above a limit. Generally I start with a warning at 150 and a critical alarm at 180 processes for systems with up to 4 cpu threads
  • number of zombie processes - warning at 1, critical alarm at 5. Something is always wrong if you end up with zombie processes
  • check if syslog is running - just a simple check to see the process is there, as it is really bad not to have it running
  • check if crond is running - again things will slowly but surely start to go wrong if cron is stopped and regular maintenance tasks like logrotate and tmpwatch/tmpreaper are not running when scheduled
  • the number of running cron processes - warning if more than 5 are running at the same time and a critical alarm if more than 10. This generally signals cron jobs which never finish running due to badly written scripts or system issues
  • check if the ntp client is running - while this is not mandatory, it is generally a best practice to have a synchronized/accurate clock
  • out of band management running and reachable - this refers to things like HP's iLO, Dell's DRAC, Sun's LOM/ALOM/ILOM, IBM's RSA. It is really bad to discover during a server outage that you can no longer reach the server's out of band management because it is frozen, it is unreachable (network issues), or you don't even know how to reach it
  • smtp daemon running - in case you have one (I always recommend one bound to the loopback interface on all servers, even if they don't provide email services) you should have a check that it is running and accepting connections on the loopback interface
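
As an illustration of the file system space thresholds above, here is a minimal sketch of mine (in practice this logic would live in a monitoring system check such as Nagios or Zabbix; the file system types are just an example):

#!/bin/sh
# Rough sketch of the 80%/90% file system space check from the list above.
WARN=80
CRIT=90
df -P -t ext2 -t ext3 -t ext4 | tail -n +2 | while read fs blocks used avail pct mount; do
    usage=${pct%\%}
    if [ "$usage" -ge "$CRIT" ]; then
        echo "CRITICAL: $mount is ${usage}% full"
    elif [ "$usage" -ge "$WARN" ]; then
        echo "WARNING: $mount is ${usage}% full"
    fi
done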

Once you monitor the basics then you need to see if the applications related to the server's specific purpose are running. So:

  1. check that the application's processes are running - for example Apache, MySQL, Memcached, Postfix, etc
  2. check that you can connect to the service - for example on an smtp server check with an smtp client that you can connect
  3. check that network based resources are reachable - if for example your webserver needs to connect to mysql on another system, then check that you can connect to mysql on that system from the server running Apache. Even if you check mysql locally on the other system, that doesn't guarantee Apache can reach it, as there can be configuration or firewall issues

The last part is the "advanced" section, which is always hard to achieve. Here you need to check, using customized tools, that the application logic is working as expected. It is also worth monitoring things like I/O wait time, network latency, I/O throughput and network throughput. The latter are hard to monitor as you need working knowledge of the specific system and how it behaves under heavy load.

nato phonetic alphabet translator
http://54.211.228.60/2011/09/nato-phonetic-alphabet-translator/
Sun, 18 Sep 2011 19:36:09 +0000

Tired of looking at a table with the NATO phonetic alphabet while spelling different words over the phone, I decided to build this: http://spellme.info , in order to simplify and speed up the whole thing. Godaddy sells .info domains for $2 so I got one for this thing.

Update: after a year the domain expired as I didn't see a purpose in renewing it. If anyone is interested, the code behind it is attached.

Multiple domain selfsigned ssl/tls certificates for Apache (namebased ssl/tls vhosts)
http://54.211.228.60/2011/08/multiple-domain-selfsigned-ssltls-certificates-for-apache-namebased-ssltls-vhosts/
Sat, 13 Aug 2011 20:53:10 +0000

This is an old problem: how to have SSL/TLS name based virtual hosts with Apache.
The issue is that the SSL/TLS connection is established before Apache even receives an HTTP request. By the time Apache receives the request, the SSL connection is already established for a particular hostname - IP & SSL certificate combination, which means Apache is capable of serving name based virtual hosts only for that particular SSL/TLS certificate.

There are two possible solutions here:

  • Multi domain or wildcard SSL/TLS certificates. These are certificates which are configured with more than one name, so you can create virtual hosts (in the case of Apache) for those domains. This is fairly easy to set up and at least for me it has worked fine in the past.
  • Server Name Indication (SNI), which is an extension to the SSL/TLS protocol that allows the client to specify the desired domain early, so the server can supply the correct SSL/TLS certificate depending on the requested hostname. The problem is that SNI is fairly new, little server side software supports it, and the client side software also needs to be fairly new. In the long run this is going to be the best solution as it has been designed to overcome this specific problem


1. Multi domain and wildcard certificates can be bought/signed from most of the certificate authorities, or you can generate your own. The people at CAcert.org have done a lot of testing and documentation on how to overcome this issue and generate your own self signed multi domain certificates, and there are practical blog posts on it as well. The best way is to generate a certificate with one CommonName and multiple subjectAltName (Alternative Name) values.

For example if you have www.domain1.com, www.domain2.org and www.domain3.edu you need to generate the key and certificate pair. First generate the private key:

openssl genrsa -out multidomain-server.key 1024

or, if you want to password protect the private key (and supply the passphrase each time the server software is started), then:

openssl genrsa -des3 -out multidomain-server.key 1024

Generate the certificate request:

openssl req -new -key multidomain-server.key -out multidomain-server.csr

When asked for the CommonName enter the first name, e.g. www.domain1.com
Then specify all of the names in a text file which will be used as the certificate extensions source:

echo "subjectAltName=DNS:www.domain1.com,DNS:www.domain2.org,DNS:www.domain3.edu">cert_extensions

Now you can self sign the public certificate file, for let's say three years, using:

openssl x509 -req -in multidomain-server.csr -signkey multidomain-server.key -extfile cert_extensions -out multidomain-server.crt -days 1095

and clean up

rm cert_extensions multidomain-server.csr
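
To double check that all of the alternative names actually ended up in the certificate, you can inspect it (optional, my addition):

openssl x509 -in multidomain-server.crt -noout -text | grep -A1 'Subject Alternative Name'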

Now for Apache you need to have mod_ssl enabled and working and in the config file have something like:

NameVirtualHost 1.2.3.4:443

<VirtualHost 1.2.3.4:443>
        ServerName www.domain1.com
        SSLEngine on
        SSLOptions +StrictRequire
        SSLProtocol -all +TLSv1 +SSLv3
        SSLCipherSuite HIGH
        SSLCertificateFile /path/to/multidomain-server.crt
        SSLCertificateKeyFile /path/to/multidomain-server.key

        DocumentRoot /srv/www/www.domain1.com/
        <Directory /srv/www/www.domain1.com/>
                Options FollowSymLinks
                AllowOverride All
                SSLRequireSSL
        </Directory>
        ErrorLog /var/log/apache2/www.domain1.com-ssl-error.log
        LogLevel warn
        CustomLog /var/log/apache2/www.domain1.com-ssl-access.log combined
        ServerSignature Off
</VirtualHost>

<VirtualHost 1.2.3.4:443>
        ServerName www.domain2.org
        SSLEngine on
        SSLOptions +StrictRequire
        SSLProtocol -all +TLSv1 +SSLv3
        SSLCipherSuite HIGH
        SSLCertificateFile /path/to/multidomain-server.crt
        SSLCertificateKeyFile /path/to/multidomain-server.key

        DocumentRoot /srv/www/www.domain2.org/
        <Directory /srv/www/www.domain2.org/>
                Options FollowSymLinks
                AllowOverride All
                SSLRequireSSL
        </Directory>
        ErrorLog /var/log/apache2/www.domain2.org-ssl-error.log
        LogLevel warn
        CustomLog /var/log/apache2/www.domain2.org-ssl-access.log combined
        ServerSignature Off
</VirtualHost>

<VirtualHost 1.2.3.4:443>
        ServerName www.domain3.edu
        SSLEngine on
        SSLOptions +StrictRequire
        SSLProtocol -all +TLSv1 +SSLv3
        SSLCipherSuite HIGH
        SSLCertificateFile /path/to/multidomain-server.crt
        SSLCertificateKeyFile /path/to/multidomain-server.key

        DocumentRoot /srv/www/www.domain3.edu/
        <Directory /srv/www/www.domain3.edu/>
                Options FollowSymLinks
                AllowOverride All
                SSLRequireSSL
        </Directory>
        ErrorLog /var/log/apache2/www.domain3.edu-ssl-error.log
        LogLevel warn
        CustomLog /var/log/apache2/www.domain3.edu-ssl-access.log combined
        ServerSignature Off
</VirtualHost>

Of course replace 1.2.3.4 with your IP for those hostnames, replace whatever else is needed and adjust the Apache config according to your needs.

You can vary the setup as needed, for example having two of the names share the same document root, using a ServerAlias directive and so on. You can also create another multi domain certificate bound to another IP and so on.


2. Server Name Indication has been purposely developed to overcome all of these issues, and its major advantage is that it allows true name based virtual hosts, each vhost having its own unique SSL certificate (and all hosts can share the same IP address). The requirements are: Apache 2.2.12 (according to Wikipedia), OpenSSL 0.9.8f or later (0.9.8k has SNI support enabled by default) and a capable browser.
Once you have them all, you define SSL/TLS virtual hosts the same way you define non SSL/TLS ones, except that you add the SSL related statements (path to keys, enable the SSL engine, etc) and also disable SSL version 2 as it doesn't support SNI and has a number of security flaws.
The only new statement in the Apache config is SSLStrictSNIVHostCheck; if it is on, connections from non SNI capable browsers are rejected with error 403. If it is off, such browsers are served the first configured SSL/TLS vhost, so if you leave it off it would be a good idea to put a message in the first vhost notifying the user to upgrade their browser.

Example config files are available on the Gentoo Wiki and on the Apache wiki

KSM (Kernel Samepage Merging) status
http://54.211.228.60/2011/08/ksm-kernel-samepage-merging-status/
Sat, 13 Aug 2011 19:20:40 +0000

KSM allows physical memory de-duplication in Linux, so basically you can get a lot more out of your memory at the expense of some cpu usage (because there is a thread which scans memory for duplicate pages). Typical usage is for servers running virtual machines on top of KVM, but applications aware of this capability can also use it, even on OS instances which aren't VMs running on KVM.
The requirements are a kernel version of at least 2.6.32 and CONFIG_KSM=y. For more details you can check the official documentation and a tutorial on how to enable it.
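
In short, enabling KSM and controlling how aggressively it scans comes down to a few sysfs knobs; a minimal example (the numeric values are illustrative, tune them for your workload):

# start the KSM scanner thread
echo 1 > /sys/kernel/mm/ksm/run
# how many pages to scan on each wake-up and how long to sleep between scans
echo 200 > /sys/kernel/mm/ksm/pages_to_scan
echo 100 > /sys/kernel/mm/ksm/sleep_millisecs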

Below is a small script (called ksm_stat) which I wrote in order to see how much memory is "shared" and how much memory is actually being saved by using this feature.

#!/bin/bash
if [ "`cat /sys/kernel/mm/ksm/run`" -ne 1 ] ; then
       echo 'KSM is not enabled. Run "echo 1 > /sys/kernel/mm/ksm/run" to enable it.'
       exit 1
fi
echo Shared memory is $((`cat /sys/kernel/mm/ksm/pages_shared`*`getconf PAGE_SIZE`/1024/1024)) MB
echo Saved memory is $((`cat /sys/kernel/mm/ksm/pages_sharing`*`getconf PAGE_SIZE`/1024/1024)) MB
if ! type bc &>/dev/null ; then
        echo "bc is missing or not in path, skipping ratio calculation"
        exit 1
fi
if [ "`cat /sys/kernel/mm/ksm/pages_sharing`" -ne 0 ] ; then
        echo -n "Shared pages usage ratio is ";echo "scale=2;`cat /sys/kernel/mm/ksm/pages_sharing`/`cat /sys/kernel/mm/ksm/pages_shared`"|bc -q
        echo -n "Unshared pages usage ratio is ";echo "scale=2;`cat /sys/kernel/mm/ksm/pages_unshared`/`cat /sys/kernel/mm/ksm/pages_sharing`"|bc -q
fi

Example from a machine where it has just been enabled (it takes a while until all pages are scanned):

# ksm_stat
Shared memory is 67 MB
Saved memory is 328 MB
Shared pages usage ratio is 4.87
Unshared pages usage ratio is 17.04
#

Zarafa templates for Zabbix
http://54.211.228.60/2011/08/zarafa-templates-for-zabbix/
Thu, 04 Aug 2011 23:06:10 +0000

Recently I had to create Zabbix templates in order to monitor Zarafa Collaboration Platform installations. My employer was kind enough to make them available.

Some screenshots follow below; you can get the templates from Accelcloud's site.

upstart (System-V init replacement on Ubuntu) tips
http://54.211.228.60/2011/05/upstart-system-v-init-replacement-on-ubuntu-tips/
Sun, 01 May 2011 11:01:35 +0000

Since Ubuntu Server 10.04 LTS (lucid), Canonical's System-V init replacement, Upstart, has most of the init scripts converted to Upstart jobs. Upstart is event based and quite different from SysV init, so one needs to adjust to its config file structure and terminology; it has been present in the server release since 8.04 LTS, but back then the init scripts were not converted to its format, so on the server release it didn't really matter that it had taken over Sys-V init.

Reading the documentation is mandatory, but here are some quick tips for things at least I found difficult to discover on the project's website or in the man pages:

The default runlevel is defined in /etc/init/rc-sysinit.conf and of course it can be overridden on the kernel command line. /etc/inittab is gone and everything moved to /etc/init/ , while legacy init scripts (= not yet converted to the upstart format) can still be found in /etc/init.d/ together with symlinks to converted init jobs.

Managing jobs:  initctl start <job> / initctl stop <job> / initctl restart <job> / initctl reload <job>  ; Listing all jobs and their status: initctl list

Now here comes the horror story: it seems there is no CLI tool which lists what Upstart jobs will start in a particular runlevel, or better, what Upstart and /etc/rc*.d jobs will start in a runlevel. There are two GUI based tools (jobs-admin and Boot-Up Manager) but no CLI tools, so you are left using things like sysv-rc-conf / chkconfig / update-rc.d for the legacy System-V style /etc/rc*.d folders, while for Upstart jobs you need to manually look at the files in /etc/init/ , which is cumbersome because besides the runlevel entry you also need to take into account events/dependencies like net-device-up.
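
A rough approximation of that manual look, for the record (my own one-liners, not a replacement for a proper tool; multi-line "start on" stanzas won't be fully captured):

# list each upstart job with its start condition (runlevels and/or events)
grep -H '^start on' /etc/init/*.conf | sed 's#/etc/init/##; s#\.conf:start on#:#'
# and the legacy System-V style services enabled for runlevel 2
ls /etc/rc2.d/S*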

It seems like Canonical thinks that nowadays a server sysadmin must also install GUI tools in order to manage basic things like which services start with the server.
