Sync one directory to another

I've had to sync one local fileserver directory (and all its subdirs) to a remote server on the fly, so that whatever gets written to the local server appears on the remote one.
I tried iocron first, but it's not recursive.
I tested several other solutions, but they all had issues.
I ended up using watcher.py: https://github.com/greggoryhz/Watcher
It has worked flawlessly for 2 months now. (local copy: http://www.valqk.com/assets/user/watcher.py )
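If you want to roll something similar by hand, here is a minimal sketch of the same idea using inotifywait (from the inotify-tools package) plus rsync. The paths and the remote host are made-up examples, and running a full rsync per event is crude, but it works for small trees:

#!/bin/sh
# watch the tree recursively and push every change to the remote
SRC=/srv/share/           # example local directory
DST=backup:/srv/share/    # example rsync-over-ssh target
inotifywait -m -r -e close_write,create,delete,move --format '%w%f' "$SRC" | \
while read changed; do
    rsync -az --delete "$SRC" "$DST"
done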

NAT through non-default gateways for more than one internal network

One big office space (with one BIG net) shared by more than one company - each with different policies for its IT infrastructure.
How do we NAT the different local networks (connected to eth2, eth3, eth4, etc.) through different gateways (each connected via OpenVPN to its company's VPN server)?

Here is how:

#!/bin/sh

exc() {
    cmd="$1";
    [ -n "$2" ] && exitt="$2";
    echo "Exec $cmd ...";
    $cmd;
    [ $? -gt 0 ] && echo "Error executing $cmd..." && [ "$exitt" != "0" ] && exit 1;
}

[ `which realpath|wc -l` -lt 1 ] && echo "This script requires the realpath command" && exit 1;

[ -z "$1" ] && echo "Param1: net config" && exit 1;
[ -n "$1" ] && cfg=`realpath $1`;
[ -n "$1" ] && ! [ -f "$cfg" ] && echo "Config $1 con't be found!" && exit 1;
[ -n "$1" ] && [ -f "$cfg" ] && . $cfg;

[ -z "$defgw" ] || [ -z "$vpnremoteip" ] || [ -z "$local1net" ] || [ -z "$local1ip" ] || [ -z "$local1netdev" ] || [ -z "$tundev1" ] || [ -z "$vpn1cfgdir" ] || [ -z "$vpn1cfg" ] || [ -z "$vpn1rtbl" ] && echo "Some variables that are required are empty! We need all: defgw : $defgw , vpnremoteip : $vpnremoteip , local1net : $local1net , local1ip : $local1ip , local1netdev : $local1netdev , tundev1 : $tundev1 , vpn1cfgdir : $vpn1cfgdir , vpn1cfg : $vpn1cfg , vpn1rtbl : $vpn1rtbl" && exit 1;


[ -n "`ps ax|grep openvpn|grep $vpn1cfg|grep -v grep`" ] && echo "Openvpn with cfg $vpn1cfg already runs PID: `ps ax|grep openvpn|grep $vpn1cfg|grep -v grep|cut -f1 -d ' '`" && exit 1;
local1ifacecheck=`ifconfig $local1netdev|grep inet|cut -f2 -d:|cut -f1 -d' '`;

[ -n "$local1ifacecheck" ] && [ "x$local1ifacecheck" != "x$local1ip" ] && echo "$local1netdev is UP but ip doesn't match ($local1ip != $local1ifacecheck)!" && exit 1;
[ -z "$local1ifacecheck" ] && exc "ifconfig $local1netdev $local1ip up" && exc "ip r del $local1net" 0;

[ `ip r s|grep $local1net|grep -v grep|wc -l` -gt 0 ] && exc "ip r del $local1net" 0;

[ `ip r s|grep $vpnremoteip|grep -v grep|wc -l` -lt 1 ] && exc "ip r add $vpnremoteip via $defgw dev eth0";

# start vpn and get local/remote ppp ip
exc "cd $vpn1cfgdir";
exc "openvpn --daemon --config $vpn1cfg";
sleep 10;

vpn1local=`ifconfig $tundev1|grep inet|awk '{print $2}'|cut -f 2 -d:`;
vpn1remote=`ifconfig $tundev1|grep inet|awk '{print $3}'|cut -f 2 -d:`;

[ -z "$vpn1local" ] || [ -z "$vpn1remote" ] && echo "Can't find local/remote vpn ips" && exit 1;

#clean up vpn routes from default routing table
vpn1net=`ip r |grep "via $vpn1remote"|grep -v grep|cut -f1 -d' '`;
[ -n "$vpn1net" ] && exc "ip r del $vpn1net" 0;
[ -n "$vpn1remote" ] && exc "ip r del $vpn1remote" 0;


echo "Add routing for: vpn1remote: $vpn1remote ; vpn1net: $vpn1net ; local1net : $local1net ; default";
#add routes in new routing table vpnr1
[ -z "`ip r s t $vpn1rtbl|grep $vpn1remote|grep -v grep`" ] && exc "ip r add $vpn1remote dev $tundev1 src $vpn1local table $vpn1rtbl";
[ -z "`ip r s t $vpn1rtbl|grep $vpn1net|grep -v grep`" ] && exc "ip r add $vpn1net dev $tundev1 via $vpn1local table $vpn1rtbl";
[ -z "`ip r s t $vpn1rtbl|grep $local1net|grep -v grep`" ] && exc "ip r add $local1net dev $local1netdev src $local1ip table $vpn1rtbl";
[ -z "`ip r s t $vpn1rtbl|grep 'default'|grep -v grep`" ] && exc "ip r add default via $vpn1local dev $tundev1 table $vpn1rtbl";
#add rules for vpn/vpn1-local nets to lookup vpnr1;
[ -z "`ip ru s|grep "from $vpn1net"|grep -v grep`" ] && exc "ip rule add from $vpn1net lookup $vpn1rtbl prio 1000";
[ -z "`ip ru s|grep "to $vpn1net"|grep -v grep`" ] && exc "ip rule add to $vpn1net lookup $vpn1rtbl prio 1000";
[ -z "`ip ru s|grep "from $vpn1local"|grep -v grep`" ] && exc "ip rule add from $vpn1local lookup $vpn1rtbl prio 1100";
[ -z "`ip ru s|grep "from $local1net"|grep -v grep`" ] && exc "ip rule add from $local1net lookup $vpn1rtbl prio 998";
[ -z "`ip ru s|grep "to $local1net"|grep -v grep`" ] && exc "ip rule add to $local1net lookup $vpn1rtbl prio 998";


[ `iptables -t nat -nvL|grep SNAT|grep "$local1net"|wc -l` -lt 1 ] && exc "iptables -t nat -A POSTROUTING -s $local1net -o $tundev1 -j SNAT --to-source $vpn1local";
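For reference, here is a sample net config the script could be fed. The variable names are the ones the script requires; all values are made-up examples, so adjust them to your networks:

# company1.net - sourced by the script
defgw=192.168.0.1                 # default gateway on eth0
vpnremoteip=198.51.100.10         # Company1 VPN server public IP
local1net=10.1.1.0/24             # Company1 local network on eth2
local1ip=10.1.1.1                 # our address on that network
local1netdev=eth2
tundev1=tun0
vpn1cfgdir=/etc/openvpn/company1
vpn1cfg=company1.conf
vpn1rtbl=vpnr1                    # routing table name, must exist in /etc/iproute2/rt_tables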

RapidSSL + Intermediate Certificates + Nginx - the RapidSSL unrecognized issuer problem.

If you buy a RapidSSL GeoTrust SSL certificate and simply install it, you will get an "Invalid issuer" or similar message, and browsers won't let users through without a confirmation.
To install the certificate correctly you have to install the RapidSSL intermediate certificate chain.
How? It's very easy.
In the file where you keep the certificate itself, simply append this certificate chain (https://knowledge.rapidssl.com/library/VERISIGN/ALL_OTHER/RapidSSL%20Intermediate/RapidSSL_CA_bundle.pem)
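For example (the file names here are hypothetical - use your own paths), append the bundle after your own certificate, then point nginx at the combined file. The order matters: your certificate must come first, then the intermediates:

cat RapidSSL_CA_bundle.pem >> /etc/nginx/ssl/yourdomain.crt

# and in the nginx server block:
ssl_certificate /etc/nginx/ssl/yourdomain.crt;
ssl_certificate_key /etc/nginx/ssl/yourdomain.key;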

After concatenating it to your cert and restarting the server, you can test it here:
geotrust java ssl tester
or
sslshopper php tester

You can also check these guides/links:
SSL Certificate Installation for Nginx Server
RapidSSL - Install SSL Certificate
Geotrust - Install SSL Certificate

and RapidSSL Technical Support

I copy/paste them here in case they get lost.
------------------------------------------
-----BEGIN CERTIFICATE-----
MIID1TCCAr2gAwIBAgIDAjbRMA0GCSqGSIb3DQEBBQUAMEIxCzAJBgNVBAYTAlVT
MRYwFAYDVQQKEw1HZW9UcnVzdCBJbmMuMRswGQYDVQQDExJHZW9UcnVzdCBHbG9i
YWwgQ0EwHhcNMTAwMjE5MjI0NTA1WhcNMjAwMjE4MjI0NTA1WjA8MQswCQYDVQQG
EwJVUzEXMBUGA1UEChMOR2VvVHJ1c3QsIEluYy4xFDASBgNVBAMTC1JhcGlkU1NM
IENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAx3H4Vsce2cy1rfa0
l6P7oeYLUF9QqjraD/w9KSRDxhApwfxVQHLuverfn7ZB9EhLyG7+T1cSi1v6kt1e
6K3z8Buxe037z/3R5fjj3Of1c3/fAUnPjFbBvTfjW761T4uL8NpPx+PdVUdp3/Jb
ewdPPeWsIcHIHXro5/YPoar1b96oZU8QiZwD84l6pV4BcjPtqelaHnnzh8jfyMX8
N8iamte4dsywPuf95lTq319SQXhZV63xEtZ/vNWfcNMFbPqjfWdY3SZiHTGSDHl5
HI7PynvBZq+odEj7joLCniyZXHstXZu8W1eefDp6E63yoxhbK1kPzVw662gzxigd
gtFQiwIDAQABo4HZMIHWMA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUa2k9ahhC
St2PAmU5/TUkhniRFjAwHwYDVR0jBBgwFoAUwHqYaI2J+6sFZAwRfap9ZbjKzE4w
EgYDVR0TAQH/BAgwBgEB/wIBADA6BgNVHR8EMzAxMC+gLaArhilodHRwOi8vY3Js
Lmdlb3RydXN0LmNvbS9jcmxzL2d0Z2xvYmFsLmNybDA0BggrBgEFBQcBAQQoMCYw
JAYIKwYBBQUHMAGGGGh0dHA6Ly9vY3NwLmdlb3RydXN0LmNvbTANBgkqhkiG9w0B
AQUFAAOCAQEAq7y8Cl0YlOPBscOoTFXWvrSY8e48HM3P8yQkXJYDJ1j8Nq6iL4/x
/torAsMzvcjdSCIrYA+lAxD9d/jQ7ZZnT/3qRyBwVNypDFV+4ZYlitm12ldKvo2O
SUNjpWxOJ4cl61tt/qJ/OCjgNqutOaWlYsS3XFgsql0BYKZiZ6PAx2Ij9OdsRu61
04BqIhPSLT90T+qvjF+0OJzbrs6vhB6m9jRRWXnT43XcvNfzc9+S7NIgWW+c+5X4
knYYCnwPLKbK3opie9jzzl9ovY8+wXS7FXI6FoOpC+ZNmZzYV+yoAVHHb1c0XqtK
LEL2TxyJeN4mTvVvk0wVaydWTQBUbHq3tw==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIDfTCCAuagAwIBAgIDErvmMA0GCSqGSIb3DQEBBQUAME4xCzAJBgNVBAYTAlVT
MRAwDgYDVQQKEwdFcXVpZmF4MS0wKwYDVQQLEyRFcXVpZmF4IFNlY3VyZSBDZXJ0
aWZpY2F0ZSBBdXRob3JpdHkwHhcNMDIwNTIxMDQwMDAwWhcNMTgwODIxMDQwMDAw
WjBCMQswCQYDVQQGEwJVUzEWMBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEbMBkGA1UE
AxMSR2VvVHJ1c3QgR2xvYmFsIENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
CgKCAQEA2swYYzD99BcjGlZ+W988bDjkcbd4kdS8odhM+KhDtgPpTSEHCIjaWC9m
OSm9BXiLnTjoBbdqfnGk5sRgprDvgOSJKA+eJdbtg/OtppHHmMlCGDUUna2YRpIu
T8rxh0PBFpVXLVDviS2Aelet8u5fa9IAjbkU+BQVNdnARqN7csiRv8lVK83Qlz6c
JmTM386DGXHKTubU1XupGc1V3sjs0l44U+VcT4wt/lAjNvxm5suOpDkZALeVAjmR
Cw7+OC7RHQWa9k0+bw8HHa8sHo9gOeL6NlMTOdReJivbPagUvTLrGAMoUgRx5asz
PeE4uwc2hGKceeoWMPRfwCvocWvk+QIDAQABo4HwMIHtMB8GA1UdIwQYMBaAFEjm
aPkr0rKV10fYIyAQTzOYkJ/UMB0GA1UdDgQWBBTAephojYn7qwVkDBF9qn1luMrM
TjAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjA6BgNVHR8EMzAxMC+g
LaArhilodHRwOi8vY3JsLmdlb3RydXN0LmNvbS9jcmxzL3NlY3VyZWNhLmNybDBO
BgNVHSAERzBFMEMGBFUdIAAwOzA5BggrBgEFBQcCARYtaHR0cHM6Ly93d3cuZ2Vv
dHJ1c3QuY29tL3Jlc291cmNlcy9yZXBvc2l0b3J5MA0GCSqGSIb3DQEBBQUAA4GB
AHbhEm5OSxYShjAGsoEIz/AIx8dxfmbuwu3UOx//8PDITtZDOLC5MH0Y0FWDomrL
NhGc6Ehmo21/uBPUR/6LWlxz/K7ZGzIZOKuXNBSqltLroxwUCEm2u+WR74M26x1W
b8ravHNjkOR/ez4iyz0H7V84dJzjA1BOoa+Y7mHyhD8S
-----END CERTIFICATE-----
------------------------------------------

HP Smart Array tool - HPAcuCLI Usage

Linux - hpacucli

This document is a quick cheat sheet on how to use the hpacucli utility to add, delete, identify and repair logical and physical disks on the Smart Array 5i Plus controller. The commands were tested on an HP DL380 G3 server with a Smart Array 5i Plus controller and 6 x 72GB hot-swappable disks; the server had Oracle Enterprise Linux (OEL) installed.

After a fresh install of Linux I downloaded the file hpacucli-8.50-6.0.noarch.rpm (5MB); you may want to download the latest version from HP. Then install it using the standard rpm command.

I am not going to list all the commands, but here are the most common ones I have used so far; this document may be updated as I use the utility more.

Utility keyword abbreviations:
chassisname = ch
controller = ctrl
logicaldrive = ld
physicaldrive = pd
drivewritecache = dwc

Start the hpacucli utility:
# hpacucli

# hpacucli help

Note: you can use the hpacucli command in a script.

Controller Commands

Display (detailed):
hpacucli> ctrl all show config
hpacucli> ctrl all show config detail

Status:
hpacucli> ctrl all show status

Cache:
hpacucli> ctrl slot=0 modify dwc=disable
hpacucli> ctrl slot=0 modify dwc=enable

Rescan:
hpacucli> rescan

Note: detects newly added devices since the last rescan.
Physical Drive Commands

Display (detailed):
hpacucli> ctrl slot=0 pd all show
hpacucli> ctrl slot=0 pd 2:3 show detail

Note: you can obtain the slot number by displaying the controller configuration (see above).

Status:
hpacucli> ctrl slot=0 pd all show status
hpacucli> ctrl slot=0 pd 2:3 show status

Erase:
hpacucli> ctrl slot=0 pd 2:3 modify erase

Blink disk LED:
hpacucli> ctrl slot=0 pd 2:3 modify led=on
hpacucli> ctrl slot=0 pd 2:3 modify led=off
Logical Drive Commands

Display (detailed):
hpacucli> ctrl slot=0 ld all show [detail]
hpacucli> ctrl slot=0 ld 4 show [detail]

Status:
hpacucli> ctrl slot=0 ld all show status
hpacucli> ctrl slot=0 ld 4 show status

Blink disk LED:
hpacucli> ctrl slot=0 ld 4 modify led=on
hpacucli> ctrl slot=0 ld 4 modify led=off

Re-enabling a failed drive:
hpacucli> ctrl slot=0 ld 4 modify reenable forced

Create:
# logical drive - one disk
hpacucli> ctrl slot=0 create type=ld drives=1:12 raid=0

# logical drive - mirrored
hpacucli> ctrl slot=0 create type=ld drives=1:13,1:14 size=300 raid=1

# logical drive - raid 5
hpacucli> ctrl slot=0 create type=ld drives=1:13,1:14,1:15,1:16,1:17 raid=5

Note:
drives - specific drives, all drives or unassigned drives
size - size of the logical drive in MB
raid - type of RAID: 0, 1, 1+0 or 5

Remove:
hpacucli> ctrl slot=0 ld 4 delete

Expanding:
hpacucli> ctrl slot=0 ld 4 add drives=2:3

Extending:
hpacucli> ctrl slot=0 ld 4 modify size=500 forced

Spare:
hpacucli> ctrl slot=0 array all add spares=1:5,1:7
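Putting a few of these together, here is a minimal example session that creates a mirrored volume from two of the drives above and then checks it (the slot and drive IDs are examples):

# hpacucli
hpacucli> ctrl all show config
hpacucli> ctrl slot=0 create type=ld drives=1:13,1:14 raid=1
hpacucli> ctrl slot=0 ld all show status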

LSI SAS status tool

If you have LSI SAS attached drives with Fusion-MPT, you can monitor them with this: http://hwraid.le-vert.net/wiki/LSIFusionMPTSAS2#a2.Linuxkerneldrivers
There is a repo: http://hwraid.le-vert.net/wiki/DebianPackages

#> apt-get install sas2ircu-status

then:

#>sas2ircu-status
-- Controller informations --
-- ID | Model
c0 | SAS2008

-- Arrays informations --
-- ID | Type | Size | Status
c0u0 | RAID1 | 1907G | Okay (OKY)

-- Disks informations
-- ID | Model | Status
c0u0p0 | ST32000644NS (9WM3BMY3) | Optimal (OPT)
c0u0p1 | ST32000644NS (9WM3F3XK) | Optimal (OPT)

or

#> sas2ircu-status --nagios
RAID OK - Arrays: OK:1 Bad:0 - Disks: OK:2 Bad:0

Migrating contacts from iPhone to Android

If you want to use GMail sync, it's easy.
Simply sync from the iPhone to GMail, then sync back on the Android.

If you don't want to use the Gmail option, it turns out to be pretty tough to transfer them.
I used the Export Contacts 1.6 app on the iPhone - it starts a service, and then from any browser you can export contacts as vCard, CSV or PDF. vCard comes in two formats: a single vCard, or a ZIP with many vCards (the Outlook option).
After I downloaded a single vCard file with all my contacts, I uploaded the file to another webserver, opened the direct URL on the Android phone (with Firefox, if that matters) and it asked me whether to open or import the vCard.
I told it to import the vCard file and voila, all my contacts are now there with all fields. Birthdays are kind of crappy and pics are missing (Export Contacts didn't export the pics)...

ps. If your iPhone has a broken display and a lock passcode, and you can't unlock it to sync with iTunes, DFU mode will do the trick. Hold the home button and the sleep button for 10 seconds, then release the sleep button while continuing to hold the home button. iTunes should now show the message that a phone has been detected in recovery mode.

P.S. I imported the contacts like this but noticed some of them (about 50% of ~600) were missing. Well... I ended up installing a http://funambol.com/ server + Outlook plugin + iPhone app + Android app - now I have my contacts transferred as expected, and I also have a 'backup' place (my custom Funambol server).

Screen automatic startup

Have you ever wondered how to start your scripts in screen upon boot?
I wondered for a while, googled a few times, and when I found nothing nice I wrote this simple script.

It has a few nice features:

- can run screen as a given user
- checks whether the screen/session is already started
- cleans up stale pid files
- it's a Debian startup script
- reads the command and the user to run as from config files in the $CFG dir
- sets the session name as defined in the config. !new!

Comments and bugs are welcome to valqk at lozenetz dot net

Sample config /etc/screen-startup/run_site.cfg:

SCRIPT=/path/to/cron/script.sh
USER=siteuser
SCREEN_NAME=site_cronjob


Script name: screen-startup

#!/bin/sh
# /etc/init.d/screen-startup
#
### BEGIN INIT INFO
# Provides: screen-startup
# Required-Start: screen-cleanup
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Start daemon at boot time
# Description: Enable service provided by daemon.
### END INIT INFO
[ -z "$CFG" ] || ! [ -d "$CFG" ] && CFG='/etc/screen-startup/';
# Carry out specific functions when asked to by the system
startScreen() {
    echo "Starting screens..."
    for script in $CFG/*.cfg; do
        SCRIPT=`grep SCRIPT= $script|cut -f2 -d=`;
        USER=`grep USER= $script|cut -f2 -d=`;
        SCREEN_NAME=`grep SCREEN_NAME= $script|cut -f2 -d=`;
        if [ -n "$SCRIPT" ] && [ -n "$USER" ]; then
            if [ "x${SCREEN_NAME}" = "x" ]; then
                sessName="`echo $SCRIPT|sed -e 's%/%_%g'`-$USER-AS"
            else
                sessName="${SCREEN_NAME}";
            fi
            if [ -f /var/run/screen/$sessName.pid ]; then
                sessPid=`cat /var/run/screen/$sessName.pid`;
                [ "x$sessPid" != "x" ] && [ `ps -p $sessPid|wc -l` -gt 1 ] && echo "$sessName already started ($sessPid)!!!" && continue;
                echo "cleaning stale pid file: $sessName.pid"
                rm /var/run/screen/$sessName.pid
            fi
            echo -n "Screen $SCRIPT for user $USER..."
            /bin/su -c "/usr/bin/screen -dmS $sessName $SCRIPT" $USER
            screenPid=`ps ax|grep "$sessName"|grep "$SCRIPT"|grep -v grep|awk '{print $1}'`
            echo $screenPid > /var/run/screen/$sessName.pid
            echo "done.";
        fi
    done
}
stopScreen() {
    echo "Stopping screens..."
    for script in $CFG/*.cfg; do
        SCRIPT=`grep SCRIPT= $script|cut -f2 -d=`;
        USER=`grep USER= $script|cut -f2 -d=`;
        SCREEN_NAME=`grep SCREEN_NAME= $script|cut -f2 -d=`;
        if [ "x${SCREEN_NAME}" = "x" ]; then
            sessName="`echo $SCRIPT|sed -e 's%/%_%g'`-$USER-AS"
        else
            sessName="${SCREEN_NAME}";
        fi
        if [ -f /var/run/screen/$sessName.pid ]; then
            pidOfScreen=`cat /var/run/screen/$sessName.pid|cut -f 1 -d' '`;
            pidOfBash=`cat /var/run/screen/$sessName.pid|cut -f 2 -d' '`;
            if [ "x$pidOfBash" != "x" ] && [ `ps -p $pidOfBash|wc -l` -lt 2 ]; then
                echo "Missing process $pidOfBash for screen $pidOfScreen. Cleaning up stale run file."
                rm /var/run/screen/$sessName.pid;
                continue;
            else
                echo -n "Screen: $SCRIPT for user $USER..."
                kill $pidOfBash $pidOfScreen;
                echo "done."
                rm /var/run/screen/$sessName.pid;
            fi
        fi
    done
}
case "$1" in
start)
startScreen;
;;
stop)
stopScreen;
;;
restart)
stopScreen;
startScreen;
;;
*)
echo "Usage: $0 {start|stop}"
exit 1
;;
esac
exit 0


p.s. Edit: rev.1 of the script now supports SCREEN_NAME in the config. When set, you can resume the screen with screen -r SCREEN_NAME (or part of it).
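For example, to attach to the session from the sample config above, become the configured user and resume it by name:

# su - siteuser -c "screen -r site_cronjob"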

DRBD 3 machines stacked setup

This is a copy/paste from http://www.howtoforge.com/drbd-8.3-third-node-replication-with-debian-etch plus split-brain fixes.

WARNING: DO NOT do this setup unless you're OK with the speed to the remote node. The max speed you will get from the drbd device is the speed at which you can push data to the 3rd node.
--------------




DRBD 8.3 Third Node Replication With Debian Etch


Installation and Set Up Guide for DRBD 8.3 + Debian Etch


The Third Node Setup


by Brian Hellman


The recent release of DRBD 8.3 now includes The Third Node feature as a freely available component. This document will cover the basics of setting up a third node on a standard Debian Etch installation. At the end of this tutorial you will have a DRBD device that can be utilized as a SAN, an iSCSI target, a file server, or a database server.



Note: LINBIT support customers can skip Section 1 and utilize the package repositories.


LINBIT has hosted third node solutions available, please contact them at sales_us at linbit.com for more information.


 


Preface:



The setup is as follows:



  • Three servers: alpha, bravo, foxtrot

  • alpha and bravo are the primary and secondary local nodes

  • foxtrot is the third node which is on a remote network

  • Both alpha and bravo have interfaces on the 192.168.1.x network (eth0) for external connectivity.

  • A crossover link exists on alpha and bravo (eth1) for replication using 172.16.6.10 and .20

  • Heartbeat provides a virtual IP of 192.168.5.2 to communicate with the disaster recovery node located in a geographically diverse location


 


Section 1: Installing The Source


These steps need to be done on each of the 3 nodes.



Prerequisites:



  • make

  • gcc

  • glibc development libraries

  • flex scanner generator

  • headers for the current kernel


Enter the following at the command line as a privileged user to satisfy these dependencies:


apt-get install make gcc libc6 flex linux-headers-`uname -r` libc6-dev linux-kernel-headers


Once the dependencies are installed, download DRBD. The latest version can always be obtained at http://oss.linbit.com/drbd/. Currently, it is 8.3.



cd /usr/src/

wget http://oss.linbit.com/drbd/8.3/drbd-8.3.0.tar.gz


After the download is complete:



  • Uncompress DRBD

  • Enter the source directory

  • Compile the source

  • Install DRBD



tar -xzvf drbd-8.3.0.tar.gz

cd /usr/src/drbd-8.3.0/

make clean all

make install


Now load and verify the module:



modprobe drbd

cat /proc/drbd


version: 8.3.0 (api:88/proto:86-89)

GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@alpha, 2009-02-05 10:36:11


Once this has been completed on each of the three nodes, continue to next section.



 


Section 2: Heartbeat Configuration


Setting up a third node entails stacking DRBD on top of DRBD. A virtual IP is needed for the third node to connect to, for this we will set up a simple Heartbeat v1 configuration. This section will only be done on alpha and bravo.


Install Heartbeat:



apt-get install heartbeat


Edit the authkeys file:


vi /etc/ha.d/authkeys


auth 1
1 sha1 yoursupersecretpasswordhere

Once the file has been created, change the permissions on the file. Heartbeat will not start if this step is not followed.


chmod 600 /etc/ha.d/authkeys


Copy the authkeys file to bravo:


scp /etc/ha.d/authkeys bravo:/etc/ha.d/


Edit the ha.cf file:


vi /etc/ha.d/ha.cf


debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 1
deadtime 10
warntime 5
initdead 60
udpport 694
ucast eth0 192.168.1.10
ucast eth0 192.168.1.20
auto_failback off
node alpha
node bravo

Copy the ha.cf file to bravo:


scp /etc/ha.d/ha.cf bravo:/etc/ha.d/


Edit the haresources file; the IP created here will be the IP that our third node refers to.


vi /etc/ha.d/haresources


alpha IPaddr::192.168.5.2/24/eth0

Copy the haresources file to bravo:


scp /etc/ha.d/haresources bravo:/etc/ha.d/


Start the heartbeat service on both servers to bring up the virtual IP:


alpha:/# /etc/init.d/heartbeat start


bravo:/# /etc/init.d/heartbeat start


Heartbeat will bring up the new interface (eth0:0).


Note: It may take heartbeat up to one minute to bring the interface up.



alpha:/# ifconfig eth0:0


eth0:0 Link encap:Ethernet HWaddr 00:08:C7:DB:01:CC

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1


 


Section 3: DRBD Configuration


Configuration for DRBD is done via the drbd.conf file. This needs to be the same on all nodes (alpha, bravo, foxtrot). Please note that usage-count is set to yes, which means it will notify LINBIT that you have installed DRBD. No personal information is collected. Please see the LINBIT site for more information.


global { usage-count yes; }

resource data-lower {
protocol C;
net {
shared-secret "LINBIT";
}
syncer {
rate 12M;
}

on alpha {
device /dev/drbd1;
disk /dev/hdb1;
address 172.16.6.10:7788;
meta-disk internal;
}

on bravo {
device /dev/drbd1;
disk /dev/hdd1;
address 172.16.6.20:7788;
meta-disk internal;
}
}

resource data-upper {
protocol A;
syncer {
after data-lower;
rate 12M;
al-extents 513;
}
net {
shared-secret "LINBIT";
}
stacked-on-top-of data-lower {
device /dev/drbd3;
address 192.168.5.2:7788; # IP provided by Heartbeat
}

on foxtrot {
device /dev/drbd3;
disk /dev/sdb1;
address 192.168.5.3:7788; # Public IP of the backup node
meta-disk internal;
}
}

 


Section 4: Preparing The DRBD Devices


Now that the configuration is in place, create the metadata on alpha and bravo.



alpha:/usr/src/drbd-8.3.0# drbdadm create-md data-lower


Writing meta data...

initializing activity log

NOT initialized bitmap

New drbd meta data block successfully created.



bravo:/usr/src/drbd-8.3.0# drbdadm create-md data-lower


Writing meta data...

initialising activity log

NOT initialized bitmap

New drbd meta data block successfully created.


Now start DRBD on alpha and bravo:


alpha:/usr/src/drbd-8.3.0# /etc/init.d/drbd start


bravo:/usr/src/drbd-8.3.0# /etc/init.d/drbd start


Verify that the lower level DRBD devices are connected:



cat /proc/drbd


version: 8.3.0 (api:88/proto:86-89)

GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@alpha, 2009-02-05 10:36:11

0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r---

ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:19530844


Tell alpha to become the primary node:


NOTE: As the command states, this is going to overwrite any data on bravo: Now is a good time to go and grab your favorite drink.


alpha:/# drbdadm -- --overwrite-data-of-peer primary data-lower

alpha:/# cat /proc/drbd


version: 8.3.0 (api:88/proto:86-89)

GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@alpha, 2009-02-05 10:36:11

0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---

ns:3088464 nr:0 dw:0 dr:3089408 al:0 bm:188 lo:23 pe:6 ua:53 ap:0 ep:1 wo:b oos:16442556

[==>.................] sync'ed: 15.9% (16057/19073)M

finish: 0:16:30 speed: 16,512 (8,276) K/sec


After the data sync has finished, create the meta-data on data-upper on alpha, followed by foxtrot.


Note the resource is data-upper and the --stacked option is on alpha only.



alpha:~# drbdadm --stacked create-md data-upper


Writing meta data...

initialising activity log

NOT initialized bitmap

New drbd meta data block successfully created.

success



foxtrot:/usr/src/drbd-8.3.0# drbdadm create-md data-upper


Writing meta data...

initialising activity log

NOT initialized bitmap

New drbd meta data block sucessfully created.


Bring up the stacked resource, then make alpha the primary of data-upper:


alpha:/# drbdadm --stacked adjust data-upper


foxtrot:~# drbdadm adjust data-upper

foxtrot:~# cat /proc/drbd


version: 8.3.0 (api:88/proto:86-89)

GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@foxtrot, 2009-02-02 10:28:37

1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent A r---

ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:19530208


alpha:~# drbdadm --stacked -- --overwrite-data-of-peer primary data-upper

alpha:~# cat /proc/drbd


version: 8.3.0 (api:88/proto:86-89)

GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@alpha, 2009-02-05 10:36:11

0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r---

ns:19532532 nr:0 dw:1688 dr:34046020 al:1 bm:1196 lo:156 pe:0 ua:0 ap:156 ep:1 wo:b oos:0

1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent A r---

ns:14512132 nr:0 dw:0 dr:14512676 al:0 bm:885 lo:156 pe:32 ua:292 ap:0 ep:1 wo:b oos:5018200

[=============>......] sync'ed: 74.4% (4900/19072)M

finish: 0:07:06 speed: 11,776 (10,992) K/sec


Drink time again!


After the sync is complete, access your DRBD block device via /dev/drbd3. This will write to both local nodes and the remote third node. In your Heartbeat configuration you will use the "drbdupper" script to bring up your /dev/drbd3 device. Have fun!



DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.






If you ever get a split-brain (two nodes are in StandAlone and won't connect, or one is in WFConnection and the other in StandAlone - it's a split-brain!), do this.
On the node that has the outdated data (substitute your resource name, e.g. data-upper):

drbdadm secondary <resource>
drbdadm -- --discard-my-data connect <resource>

On the node that has the fresh data:

drbdadm --stacked connect <resource>

PKGSRC NetBSD update/upgrade Howto

1. Fetch the pkgsrc:

1.1. SUP way:
sup -v /path/to/your/supfile

and this is a short sample supfile:
nbsd# cat /root/sup-current
current release=pkgsrc host=sup2.fr.NetBSD.org hostbase=/home/sup/supserver \
base=/usr prefix=/usr backup use-rel-suffix compress delete

1.2. CVS way:
$ export CVSROOT="anoncvs@anoncvs.NetBSD.org:/cvsroot"
$ export CVS_RSH="ssh"
To fetch a specific pkgsrc stable branch from scratch, run:

$ cd /usr
$ cvs checkout -r pkgsrc-20xxQy -P pkgsrc
Where pkgsrc-20xxQy is the stable branch to be checked out, for example, “pkgsrc-2009Q1”

This will create the directory pkgsrc/ in your /usr/ directory and all the package source will be stored under /usr/pkgsrc/.

To fetch the pkgsrc current branch, run:

$ cd /usr
$ cvs checkout -P pkgsrc


2. Update the pkgsrc repository:

2.1. SUP way

sup -v /root/sup-current

2.2. CVS way:

$ export CVSROOT="anoncvs@anoncvs.NetBSD.org:/cvsroot"
$ export CVS_RSH="ssh"
$ cd /usr/pkgsrc
$ cvs update -dP

When updating pkgsrc, the CVS program keeps track of the branch you selected. But if you, for whatever reason, want to switch from the stable branch to the current one, you can do it by adding the option “-A” after the “update” keyword. To switch from the current branch back to the stable branch, add the “-rpkgsrc-2009Q3” option.



3. Updating a package:

cd /usr/pkgsrc/package/
make update

4. Update packages on a remote server. If you have them already installed, check which ones need updating.
Security checks:
/usr/sbin/pkg_admin -K /var/db/pkg fetch-pkg-vulnerabilities

then do:
pkg_add -uu http://pkgserver/path/to/Pkg.tgz

this will update the package from remote with all dependent packages!

some links:
http://imil.net/pkgin/

http://pkgsrc.se/pkgtools/pkg_rolling-replace

http://wiki.netbsd.org/tutorials/pkgsrc/pkg_comp_pkg_chk/


To install packages directly from an FTP or HTTP server, run the following commands in a Bourne-compatible shell (be sure to su to root first):

# PATH="/usr/pkg/sbin:$PATH"
# PKG_PATH="ftp://ftp.NetBSD.org/pub/pkgsrc/packages/OPSYS/ARCH/VERSIONS/All"
# export PATH PKG_PATH
# pkg_add package

OR directly:

# pkg_add http://...../
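For example, on a NetBSD 5.1 amd64 box this could look like the following (the exact path and the package name are examples - check the FTP server for what's really there):

# PKG_PATH="ftp://ftp.NetBSD.org/pub/pkgsrc/packages/NetBSD/amd64/5.1/All"
# export PKG_PATH
# pkg_add bash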

DomPDF with UNICODE UTF-8 Support! At last!

A colleague of mine spent some time and managed to make the DomPDF library run with almost ALL UTF-8 alphabets displayed.
Until now I was using TCPDF. It has supported UTF-8 for a long time, but it has a crappy way of generating documents - VERY simple HTML support and A LOT of calls to internal methods just to make the document look like the HTML page.

As far as he explained it to me, the problem was generating proper fonts.

DomPDF with UTF-8 Support

UPDATE: Because DomPDF is "the memory MONSTER" (a 30-page table eats up about 1.5 gigs! GEE!!!) we are now using wkhtmltopdf. It's AMAZINGLY fast and keeps the memory footprint low (the same page that took about 2-3 min and 1.5 gigs of RAM with dompdf takes wkhtmltopdf about 100-200 MB and 20-40 sec).
The funny thing is that it's WebKit based and renders PERFECTLY everything on each page I've tested it with.
It's simply SWEET!

Debian Squeeze XEN basic setup

Install Xen:

#> aptitude install xen-hypervisor-4.0-amd64 linux-image-xen-amd64 xen-tools

Squeeze uses GRUB 2 - the defaults are wrong for Xen.
The Xen hypervisor should be the first entry, so you should do this:

#> mv /etc/grub.d/10_linux /etc/grub.d/100_linux

After that disable the OS prober, so that you don't have entries for virtual machines installed on an LVM partition.

#> echo "GRUB_DISABLE_OS_PROBER=true" >> /etc/default/grub
#> update-grub2

Xen tries to save the state of the VMs when doing a Dom0 shutdown.
This save/restore has never been successful for me, so I disable it in /etc/default/xendomains to make sure machines get shut down too:

XENDOMAINS_RESTORE=false
XENDOMAINS_SAVE=""

Enable the network bridge in /etc/xen/xend-config.sxp (uncomment existing line).
I also set some other useful params (for me):

(network-script network-bridge)
(dom0-min-mem 128)
(dom0-cpus 1)
(vnc-listen '127.0.0.1')
(vncpasswd '')


Add an independent wallclock in sysctl on dom0:

#> echo xen.independent_wallclock=1 >> /etc/sysctl.conf

and also in the domUs. Set up an ntpdate update every hour, for example, in the domUs.
This will save you a lot of clock-sync headaches.
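For example, a root crontab entry in each domU could look like this (the NTP server is an example):

0 * * * * /usr/sbin/ntpdate -s pool.ntp.org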

Config /etc/xen-tools/xen-tools.conf contains default values the xen-create-image script will use. Most important are:

# Virtual machine disks are created as logical volumes in volume group universe (LVM storage is much faster than file)
lvm = vg001

install-method = debootstrap

size = 20Gb # Disk image size.
memory = 256Mb # Memory size
swap = 4Gb # Swap size
fs = ext3 # use the EXT3 filesystem for the disk image.
dist = `xt-guess-suite-and-mirror --suite` # Default distribution to install.

gateway = 1.2.3.4
netmask = 255.255.255.0

# When creating an image, interactively setup root password
passwd = 1

# I think this option was this per default, but it doesn't hurt to mention.
mirror = `xt-guess-suite-and-mirror --mirror`

mirror_squeeze = http://ftp.bg.debian.org/debian/

# let xen-create-image use pygrub, so that the grub from the VM is used, which means you no longer need to store kernels outside the VM's. Keeps this very flexible.
pygrub=1

scsi=1

Script to create vms (copied from http://blog.bigsmoke.us/):

#!/bin/bash

dist=$1
hostname=$2
ip=$3

if [ -z "$hostname" -o -z "$ip" -o -z "$dist" ]; then
echo "No dist, hostname or ip specified"
echo "Usage: $0 dist hostname ip"
exit 1
fi

# --scsi is specified because when creating maverick for instance, the xvda disk that is used can't be accessed.
# The --scsi flag causes names like sda to be used.
xen-create-image --hostname $hostname --ip $ip --vcpus 2 --pygrub --scsi --dist $dist


Usage of the script should be simple. When creating a VM named ‘host’, start it and attach console:

xm create -c /etc/xen/host.cfg

You can go back to Dom0 console with ctrl-].
Place a symlink in /etc/xen/auto to start the VM on boot.
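For example:

ln -s /etc/xen/host.cfg /etc/xen/auto/host.cfg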

As a sidenote: when creating a lenny VM, the script installs a Xen kernel in the VM.
When installing maverick, it installs a normal kernel.
Normal kernels since version 2.6.32 (I believe) support pv_ops, meaning they can run on hypervisors like Xen's.

Ubuntu encrypted home - lvm way

1. Create the LVM partition (sdaXX):
# fdisk /dev/sda
and create one partition for root, one for swap and the rest for home.

2. Create the physical volume and a volume group on it (the vgcreate step is needed for the vg0 used below):

# pvcreate /dev/sda3
# vgcreate vg0 /dev/sda3

3. Create logical volume
# lvcreate -n crypted-home -L 200G vg0
(you can leave free space if you want to be able to add additional partitions later)

4. Install needed tools
# aptitude -y install cryptsetup initramfs-tools hashalot lvm2
# modprobe dm-crypt
# modprobe dm-mod

5. Check for bad blocks (optional)
# /sbin/badblocks -c 10240 -s -w -t random -v /dev/vg0/crypted-home

6. Setup crytped home partition with luks
# cryptsetup -y --cipher serpent-xts-essiv:sha256 --hash sha512 --key-size 512 -i 50000 luksFormat /dev/vg0/crypted-home
enter uppercase YES!!

7. Open the created crypted partition
# cryptsetup luksOpen /dev/vg0/crypted-home home

8. Create filesystem on the crypted home device
# mke2fs -j -O dir_index,filetype,sparse_super /dev/mapper/home

9. Mount and copy home files.
# mount -t ext3 /dev/mapper/home /mnt
# cp -axv /home/* /mnt/
# umount /mnt

10. Setup the system to open/mount crypted home.
Insert in /etc/fstab :
#
/dev/mapper/home /home ext3 defaults 1 2

After that, add an entry in /etc/crypttab:

#
home /dev/vg0/crypted-home none luks
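After a reboot you can verify that the crypted device got opened and mounted (the names match the setup above):

# cryptsetup status home
# mount | grep /home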

NetBSD OS update/upgrade quick howto.

1. Fetch/Update the OS sources.
refs: NetBSD Docs (and NetBSD guide ; Fetching sources)

Fetch the source if you don't have it:
$ cd /usr
$ export CVS_RSH=ssh 
$ cvs -d anoncvs@anoncvs.NetBSD.org:/cvsroot co -r netbsd-5-0-2 -P src

Update the source if you already have it:
$ cd /usr/src
$ export CVS_RSH=ssh 
$ cvs update -dP

If you are fetching the sources from scratch use:
$ cd /usr
$ export CVS_RSH=ssh 
$ cvs -d anoncvs@anoncvs.NetBSD.org:/cvsroot co -r netbsd-5-1 -P src

Hint: If you are using 5-0 and want to update to 5-1, use
$ cvs update -r netbsd-5-1 -dP

2. Create obj dir and build the tools:
$ mkdir /usr/obj /usr/tools
$ cd /usr/src
$ ./build.sh -O /usr/obj -T /usr/tools -U -u tools

3. Compile brand new userland:
NetBSD page says: Please always refer to build.sh -h and the files UPDATING and BUILDING for details - it's worth it, there are many options that can be set on the command line or in /etc/mk.conf.
$ cd /usr/src
$ ./build.sh -O ../obj -T ../tools -U distribution

4. Compile brand New Kernel:
$ cd /usr/src
$ ./build.sh -O ../obj -T ../tools kernel=<KERNEL>

<KERNEL> is a kernel options (config) file located in /usr/src/sys/arch/amd64/conf/

I have XEN3_DOMU there that holds all my Xen kernels' compile options.
You can also find GENERIC and others there.

5. Install Kernel

Installing the new kernel (copy it in Dom0), rebooting (to ensure that the new kernel works) and installing the new userland are the final steps of the updating procedure:
$ cd /usr/obj/sys/arch/`uname -m`/compile/XEN3_DOMU/
$ scp netbsd Dom0 machine...

Go and change the kernel in the Dom0 to load the new one.
reboot the machine.

Or on native machines:
$ cd /usr/src
$ su
# mv /netbsd /netbsd.old
# mv /usr/obj/sys/arch/`uname -m`/compile/KERNEL/netbsd /
# shutdown -r now


6. Install the new userland and reboot again to be sure it'll work. ;-)
After we've rebooted, we are sure all new calls in the new userland will be handled by the new kernel.
Now we'll install the new userland.
$ cd /usr/src
$ su
# ./build.sh -O ../obj -T ../tools -U install=/ 
#reboot

7. Build a complete release so we can copy it on all other machines and upgrade with sysinst.
$ ./build.sh -O ../obj -T ../tools -U -u -x release
The resulting install sets will be in the /usr/obj/releasedir/ directory.



When you've tested on the package server, install/update on all other machines.


1. Make a backup
2. Fetch a new kernel and the binary sets from the release dir and store them /some/where/
3. Install the kernel (in XEN dom0)!
4. Install the sets except etc.tgz and xetc.tgz!!
   # cd /
   # pax -zrpef /some/where/set.tgz
   # ...
   # ...
5. Run etcupdate to merge important changes:
   # cd /
   # etcupdate -s /some/where/etc.tgz -s /some/where/xetc.tgz
6. Upgrade finished, time to reboot.

Backup xen lvm/image disks. xenBackup script.

Long time, no write.

I'm trying to migrate all of my FreeBSDs to Xen+NetBSD. (I gave up on this OS. You can't release a STABLE that's not that stable. It's a long story but, in short, I had a sleepless night after deploying it to production. The problem: when it gets real-world load it hangs with a kernel panic and no auto-reset about every 5-15 mins. WTF? The devs asked me for a dump and told me that maybe they would find the problem. Sorry, that sucks and is not an option for a production system used by thousands of people. Goodbye FreeBSD (for at least 5 years).)

After successfully running Xen for some time, it's time to think about an automated backup that takes care of everything, instead of writing short shell scripts for each Xen backup.
I made a quick search and found this xenBackup script that almost suits my needs.
I didn't like that it mounted the LVM read-only and didn't use snapshots.
The second thing I disliked was that it worked only with LVMs, and I do have sparse Xen images (for small machines that don't need quick disk access and have only 1-2 services running in memory).

I've modified the script and now the xenBackup script supports:
- creating backups from LVM snapshots
- creating backups from a disk.img file
- dynamic determination of the disk type and path ($hostname-disk for LVMs and disk.img for sparse images) (BE WARNED: only -disk and disk.img will be backed up!)

I'm using tar, so I haven't tested with rsync or rdiff-backup.
I'm using snapshots; I never tested with a read-only mounted LVM.

so, here is the code:

#!/bin/sh
#
#   Copyright John Quinn, 2008
#   Copyright Anton Valqkoff, 2010
#
#   This program is free software: you can redistribute it and/or modify
#   it under the terms of the GNU General Public License as published by
#   the Free Software Foundation, either version 3 of the License, or
#   (at your option) any later version.
#
#   This program is distributed in the hope that it will be useful,
#   but WITHOUT ANY WARRANTY; without even the implied warranty of
#   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#   GNU General Public License for more details.
#
#   You should have received a copy of the GNU General Public License
#   along with this program.  If not, see <http://www.gnu.org/licenses/>.

#
# xenBackup - Backup Xen Domains
#
#             Version:    1.0:     Created:  John D Quinn, http://www.johnandcailin.com/john
#             Version:    1.1:     Added file/lvm recognition. lvm snapshot:  Anton Valqkoff, http://blog.valqk.com/
#

# initialize our variables
domains="null"                           # the list of domains to backup
allDomains="null"                        # backup all domains?
targetLocation="/root/backup/"                    # the default backup target directory
mountPoint="/mnt/xen"                    # the mount point to use to mount disk areas
shutdownDomains=false                    # don't shutdown domains by default
quiet=false                              # keep the chatter down
backupEngine=tar                         # the default backup engine
useSnapshot=true                        # create a snapshot of the lvm and use it as the backup mount.
rsyncExe=/usr/bin/rsync                  # rsync executable
rdiffbackupExe=/usr/bin/rdiff-backup     # rdiff-backup executable
tarExe=/bin/tar                      # tar executable
xmExe=/usr/sbin/xm                       # xm executable
lvmExe=/sbin/lvm
mountExe=/bin/mount
grepExe=/bin/grep
awkExe=/usr/bin/awk
umountExe=/bin/umount
cutExe=/usr/bin/cut
egrepExe=/bin/egrep
purgeAge="null"                          # age at which to purge increments
globalBackupResult=0                     # success status of overall job
#valqk: xm list --long ns.hostit.biz|grep -A 3 device|grep vbd -A 2|grep uname|grep -v swap|awk '{print $2}'

# settings for logging (syslog)
loggerArgs=""                            # what extra arguments to the logger to use
loggerTag="xenBackup"                    # the tag for our log statements
loggerFacility="local3"                  # the syslog facility to log to

# trap user exit and cleanup
trap 'cleanup;exit 1' 1 2

cleanup()
{
   ${logDebug} "Cleaning up"
   #check if file or lvm.if lvm and -snap remove it.
   mountType=`${mountExe}|${grepExe} ${mountPoint}|${awkExe} '{print $1}'`;
   [ -f ${mountType} ] && mountType="file";
   cd / ; ${umountExe} ${mountPoint}
   if [ "${mountType}" != "file" ] && [ "${useSnapshot}" = "true" ]; then
      #let's make sure we are removing snapshot!
      if [ `${mountExe}|${grepExe} -snap|wc -l` -gt 0 ]; then
         ${lvmExe} lvremove -f ${mountType}
      fi
   fi


   # restart the domain
   if test ${shutdownDomains} = "true"
   then
      ${logDebug} "Restarting domain"
      ${xmExe} create ${domain}.cfg > /dev/null
   fi
}

# function to print a usage message and bail
usageAndBail() {
   cat << EOT
Usage: xenBackup [OPTION]...
Backup xen domains to a target area. different backup engines may be specified to
produce a tarfile, an exact mirror of the disk area or a mirror with incremental backup.

   -d      backup only the specified DOMAINs (comma seperated list)
   -t      target LOCATION for the backup e.g. /tmp or root@www.example.com:/tmp
           (not used for tar engine)
   -a      backup all domains
   -s      shutdown domains before backup (and restart them afterwards)
   -q      run in quiet mode, output still goes to syslog
   -e      backup ENGINE to use, either tar, rsync or rdiff
   -p      purge increments older than TIME_SPEC. this option only applies
           to rdiff, e.g. 3W for 3 weeks. see "man rdiff-backup" for
           more information

Example 1
   Backup all domains to the /tmp directgory
   $ xenBackup -a -t /tmp

Example 2
   Backup domain: "wiki" using rsync to directory /var/xenImages on machine backupServer,
   $ xenBackup -e rsync -d wiki -t root@backupServer:/var/xenImages

Example 3
   Backup domains "domainOne" and "domainTwo" using rdiff purging old increments older than 5 days
   $ xenBackup -e rdiff -d "domainOne, domainTwo" -p 5D

EOT

   exit 1;
}

# parse the command line arguments
while getopts p:e:qsad:t:h o
do     case "$o" in
        q)     quiet="true";;
        s)     shutdownDomains="true";;
        a)     allDomains="true";;
        d)     domains="$OPTARG";;
        t)     targetLocation="$OPTARG";;
        e)     backupEngine="$OPTARG";;
        p)     purgeAge="$OPTARG";;
        h)     usageAndBail;;
        [?])   usageAndBail
       esac
done

# if quiet don't output logging to standard error
if test ${quiet} = "false"
then
   loggerArgs="-s"
fi

# setup logging subsystem. using syslog via logger
logCritical="logger -t ${loggerTag} ${loggerArgs} -p ${loggerFacility}.crit"
logWarning="logger -t ${loggerTag} ${loggerArgs} -p ${loggerFacility}.warning"
logDebug="logger -t ${loggerTag} ${loggerArgs} -p ${loggerFacility}.debug"

# make sure only root can run our script
test $(id -u) = 0 || { ${logCritical} "This script must be run as root"; exit 1; }

# make sure that the guest manager is available
test -x ${xmExe} || { ${logCritical} "xen guest manager (${xmExe}) not found"; exit 1; }

# assemble the list of domains to backup
if test ${allDomains} = "true"
then
   domainList=`${xmExe} list | cut -f1 -d" " | egrep -v "Name|Domain-0"`
else
   # make sure we've got some domains specified
   if test "${domains}" = "null"
   then
      usageAndBail
   fi

   # create the domain list by mapping commas to spaces
   domainList=`echo ${domains} | tr -d " " | tr , " "`
fi

# function to do a "rdiff-backup" of domain
backupDomainUsingrdiff() {
   domain=$1
   test -x ${rdiffbackupExe} || { ${logCritical} "rdiff-backup executable (${rdiffbackupExe}) not found"; exit 1; }

   if test ${quiet} = "false"
   then
      verbosity="3"
   else
      verbosity="0"
   fi

   targetSubDir=${targetLocation}/${domain}.rdiff-backup.mirror

   # make the targetSubDir if it doesn't already exist
   mkdir ${targetSubDir} > /dev/null 2>&1
   ${logDebug} "backing up domain ${domain} to ${targetSubDir} using rdiff-backup"

   # rdiff-backup to the target directory
   ${rdiffbackupExe} --verbosity ${verbosity} ${mountPoint}/ ${targetSubDir}
   backupResult=$?

   # purge old increments
   if test ${purgeAge} != "null"
   then
      # purge old increments
      ${logDebug} "purging increments older than ${purgeAge} from ${targetSubDir}"
      ${rdiffbackupExe} --verbosity ${verbosity} --force --remove-older-than ${purgeAge} ${targetSubDir}
   fi

   return ${backupResult}
}

# function to do a "rsync" backup of domain
backupDomainUsingrsync() {
   domain=$1
   test -x ${rsyncExe} || { ${logCritical} "rsync executable (${rsyncExe}) not found"; exit 1; }

   targetSubDir=${targetLocation}/${domain}.rsync.mirror

   # make the targetSubDir if it doesn't already exist
   mkdir ${targetSubDir} > /dev/null 2>&1
   ${logDebug} "backing up domain ${domain} to ${targetSubDir} using rsync"

   # rsync to the target directory
   ${rsyncExe} -essh -avz --delete ${mountPoint}/ ${targetSubDir}
   backupResult=$?

   return ${backupResult}
}

# function to do a "tar" backup of domain
backupDomainUsingtar ()
{
   domain=$1

   # make sure we can write to the target directory
   test -w ${targetLocation} || { ${logCritical} "target directory (${targetLocation}) is not writeable"; exit 1; }

   targetFile=${targetLocation}/${domain}.`date '+%d.%m.%Y'`.$$.tar.gz
   ${logDebug} "backing up domain ${domain} to ${targetFile} using tar"

   # tar to the target directory
   cd ${mountPoint}

   ${tarExe} pcfz ${targetFile} * > /dev/null
   backupResult=$?

   return ${backupResult}
}

# backup the specified domains
for domain in ${domainList}
do
   ${logDebug} "backing up domain: ${domain}"
   [ `${xmExe} list ${domain}|wc -l` -lt 1 ] && { echo "Fatal ERROR!!! ${domain} does not exist or is not running! Exiting."; exit 1; }

   # make sure that the domain is shutdown if required
   if test ${shutdownDomains} = "true"
   then
      ${logDebug} "shutting down domain ${domain}"
      ${xmExe} shutdown -w ${domain} > /dev/null
   fi

   # unmount mount point if already mounted
   umount ${mountPoint} > /dev/null 2>&1

   #inspect domain disks per domain. get only -disk or disk.img.
   #if file:// mount the xen disk read-only,umount sfter.
   #if lvm create a snapshot mount/umount/erase it.
   xenDiskStr=`${xmExe} list --long ${domain}|${grepExe} -A 3 device|${grepExe} vbd -A 2|${grepExe} uname|${grepExe} -v swap|${awkExe} '{print $2}'|${egrepExe} 'disk.img|-disk'`
   xenDiskType=`echo ${xenDiskStr}|${cutExe} -f1 -d:`;
   xenDiskDev=`echo ${xenDiskStr}|${cutExe} -f2 -d:|${cutExe} -f1 -d')'`;
   test -r ${xenDiskDev} || { ${logCritical} "xen disk area not readable. are you sure that the domain \"${domain}\" exists?"; exit 1; }
   #valqk: if the domain uses a file.img - mount ro (loop allows mount the file twice. wtf!?)
   if [ "${xenDiskType}" = "file" ]; then
      ${logDebug} "Mounting file://${xenDiskDev} read-only to ${mountPoint}"
      ${mountExe} -oloop ${xenDiskDev} ${mountPoint} || { ${logCritical} "mount failed, does mount point (${mountPoint}) exist?"; exit 1; }
      ${mountExe} -oremount,ro ${mountPoint} || { ${logCritical} "mount failed, does mount point (${mountPoint}) exist?"; exit 1; }
   fi
   if [ "${xenDiskType}" = "phy" ] ; then
      if [ "${useSnapshot}" = "true" ]; then
         vgName=`${lvmExe} lvdisplay -c |${grepExe} ${domain}-disk|${grepExe} disk|${cutExe} -f 2 -d:`;
         lvSize=`${lvmExe} lvdisplay ${xenDiskDev} -c|${cutExe} -f7 -d:`;
         lvSize=$((${lvSize}/2*15/100)); # 15% of the LV size in KB (lvdisplay -c reports sectors; /2 converts to KB)
         ${lvmExe} lvcreate -s -n ${vgName}/${domain}-snap -L ${lvSize}k ${xenDiskDev} || { ${logCritical} "creation of snapshot for ${xenDiskDev} failed. exiting."; exit 1; }
         ${mountExe} -r /dev/${vgName}/${domain}-snap ${mountPoint} || { ${logCritical} "mount failed, does mount point (${mountPoint}) exist?"; exit 1; }
      else
         ${mountExe} -r ${xenDiskDev} ${mountPoint}
      fi
   fi

   # do the backup according to the chosen backup engine
   backupDomainUsing${backupEngine} ${domain}

   # make sure that the backup was successful
   if test $? -ne 0
   then
      ${logCritical} "FAILURE: error backing up domain ${domain}"
      globalBackupResult=1
   else
      ${logDebug} "SUCCESS: domain ${domain} backed up"
   fi
     
   # clean up
   cleanup;
done
if test ${globalBackupResult} -eq 0
then
   ${logDebug} "SUCCESS: backup of all domains completed successfully"
else
   ${logCritical} "FAILURE: backup completed with some failures"
fi

exit ${globalBackupResult}

Setup SVN repositories for specified users only, over ssh. OpenSSH limits them to a single command execution.

Just to blog this. I'll need it in future.
If you have an svn repository server and you are using svn+ssh for checkouts and all other svn actions, you will want users to have access only to predefined repos - and not to any shell or anything else.
I've done this by making symlinks in their homes and using an ssh authorized_keys file that looks like this:

command="svnserve -t --tunnel-user=user -r /home/user",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAAB3Nz1...KEY HERE....

this way, you can lock them to use only svnserve, and it will lock them to check out only what's in their home dirs - see the symlink sketch below.
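The symlink part is just exposing the repo inside each user's home (the paths here are made up):

# svnserve -r /home/user makes the home the URL root, so expose the repo there
ln -s /srv/svn/project1 /home/user/project1
# the user can now only do: svn co svn+ssh://user@server/project1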

If you're not familiar with details - eg. how to generate keys, what is authorized_keys etc, I stole this from here: http://ingomueller.net/node/331 - read more there.

Of course you have to keep your svnserve up to date and pray there are no vulns in it, otherwise users can hack you :-)
But hey, you know the owners of the keys, don't you? :-)
Got my point? ;-)

Roundcube with plugins support!!! WOW! Writing a plugin - displaying a custom template has bogus docs.

Today I've noticed Roundcube has released a new version that finally has plugins support!
Grrrreaaat!

As expected, there is a change password plugin (with driver support) and some others that are pretty cool!
A list of plugins here: http://trac.roundcube.net/wiki/Plugin_Repository

Of course, I've had some custom patching for my hosting users, and now it's not working.
I've configured the change password plugin (which was the main showstopper for not upgrading to the new Roundcube), and only the little tiny hack for domain notification is left.
I've decided to write a plugin that will do that job for me, so I can upgrade easily after that.

Writing a plugin isn't that hard at all. Here you can read more:
http://trac.roundcube.net/wiki/Doc_Plugins

you can also read the plugins directory for more examples.

While creating my plugin I hit a problem and lost about 40 minutes searching for a description and resolution.
The resolution was 5 minutes of reading the template class - at first I thought I was wrong, but no, this is a mis-explanation in the docs.
When you want to create a custom template, you mkdir skins/default/templates in your plugin dir and create/copy-modify the html in it (I've copied the login.html template).
All was fine until I tried to show it.
The documentation is wrong.
When you call:

$rcmail->output->send('mytemplate');

you must actually call:

$rcmail->output->send('myplugin.mytemplate');
so the template class can understand this is a plugin template and show it, instead of searching for a default tpl.

Hope that helps someone.
Going to change/report this in docs now.
Oh. Symptoms are:

[12.Nov.2009 17:57:27 +0200]: PHP Error: Error loading template for logininfo in /var/www/roundcube/program/include/rcube_template.php on line 372 (GET /)

in your error log.

Dojo: breaking in IE*

If your dojo-based website breaks in IE browsers and not in others, with strange errors in dojo.js, then you have to check VERY CAREFULLY for unclosed tags.
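One sanity check that would have saved me time: run the page through HTML Tidy and let it complain (assuming you have tidy installed):

# tidy flags: -e show errors/warnings only, -q quiet; the filename is an example
tidy -e -q broken-page.html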

I've had this problem - didn't close one (only one!) div inside an HTML node that used dojoType, and voila - dojo threw a "NICE" js error in IE (you know how js is debugged in IE, don't ya?) :-)


So be very very careful when closing tags and using IE+dojo :-)

IE8 and Opera 10 absolute positioning problems

IE8 and Opera 10 differ from ALL other browsers (FF3, Safari, Chrome, IE6, IE7) in positioning an absolute element inside a div.
If you have something like this:


<div style="position: relative;">
    ....
    <a style="position: absolute; right: 0px;">x</a>
</div>

If you don't put the right: 0px, the element won't keep its original position and will go to the left side of the div, because IE8 and Opera will apply a default left: 0px when nothing is set.
All other browsers will keep the a's original position (no left: 0px;).
hope that helps someone.
Keywords: IE8 Opera absolute positioning problem

Q&A for apache in debian

Q: Why does the Apache web server in Debian have the 'It works!' page as its default host?
A: Because after you have set up a complex VirtualHost configuration for half an hour or more (yesh, there can be such), it's nice to see that 'It worked!'
--answered by valqk. :-D

Debian HP SmartArray RAID monitoring.

You need to install 2 utils to monitor and query your smart array:

apt-get install arrayprobe cpqarrayd
cpqarrayd is a daemon that logs events from the controller (thanks velin);
arrayprobe is the cli tool.

More links on the topic:
source I've got this from.
driver and utils page.

if you have faulty drive

hope that helps.

UPDATE:

In squeeze there is no cpqarrayd, and arrayprobe is not that good.
You can use the hp tools provided in debian packages.
Simply add this source:


deb http://downloads.linux.hp.com/SDR/downloads/ProLiantSupportPack/Debian/ squeeze/current non-free

then
#> apt-get update && apt-get install hpacucli

This is the way this CLI is used: hpacucli usage
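For a quick look at the array status, something like this should do (the slot number is just an example):

# list controllers, arrays and logical/physical drives
hpacucli ctrl all show config
# physical disk details for the controller in slot 0
hpacucli ctrl slot=0 pd all show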
p.s. not yet figured out the monitoring part. hp-health is something I've read about but haven't tested yet.

Fun with JavaScript... I don't recommend this in your code! :-)

Facebook, FB.Connect - write nice js code and reuse code call...

I'm writing a poc code that calls some FB.Connect methods.
As a quick and nasty code reuse I've come up with this code:

A method that inits FB and runs the actual code:
function fbCall(code) {
    FB_RequireFeatures(["XFBML"], function(){
        FB.Facebook.init('ApiKey', '/xd_receiver.htm', null);
        FB.ensureInit(function () {
            eval(code);
        });
    });
}
so far so good - it all seems ok.
Here comes the tricky part. I wanted to be able to pass multiline code with comments in it - normal js code, but encapsulated in something...
In case you don't know: in JS you can't have a multiline string, so if you have something like:
    var mycall = 'FB.Connect.showFeedDialog(
\'249955020144\', 
//here we put some data...
comment_data, '', "Awesome", null, 
FB.RequireConnect.promptConnect, function(){alert("Callback");}, fortune, user_message);';
you'll get a parse error because of the new lines.
And if you replace the new lines with ' ', everything after the first // comment gets commented out - unless you avoid // comments altogether.

The solution is this:
function fbCall(code) {
    FB_RequireFeatures(["XFBML"], function(){
        FB.Facebook.init('ApiKey', '/xd_receiver.htm', null);
        FB.ensureInit(function () {
            code();
        });
    });
}
function askPerms() {
    var c = function() {
        "FB.Connect.showPermissionDialog('perms');";
    }
    fbCall(c);
}
Notice the difference between the two fbCall functions - the second one calls code as a function; it does not eval it.
This way you can write your code inside the c 'function' variable and call it after that.
It's a bit tricky until you get how it works, but the code looks more readable after that.

Protect yourself from accidentally halting a server.

In Short: use molly-guard (debian name)
While reading 'my 10 unix command line mistakes' I saw the halting-the-wrong-machine mistake.
It's nasty to halt some server instead of your local desktop.
I use molly-guard (on Debian servers - not available in FreeBSD; dunno about other linuxes? any comments?) to protect myself from this kind of mistake.
It modifies the halt/shutdown scripts and asks you for the hostname of the server before shutting down, if you're in an ssh session.
#>apt-get install molly-guard

when installed if you try to shutdown or reboot:
storm:/home/valqk# halt
W: molly-guard: SSH session detected!
Please type in hostname of the machine to reboot: ^C
Good thing I asked; I won't halt storm ...
W: aborting reboot due to 30-query-hostname exiting with code 1.

phew!
molly-guard saved the world for me again! :-)
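By default it only steps in for ssh sessions; if I remember correctly you can make it ask every time by flipping a flag in its rc file:

# /etc/molly-guard/rc - ask for the hostname even on local consoles
ALWAYS_QUERY_HOSTNAME=true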
have a nice Friday evening!
cheers.

How to enable new NetBSD ffs WAPBL feature? How to extend ffs size?

How to enable/use WAPBL in netbsd 5.0?


  1. you MUST have options WAPBL in your kernel (it's there in most archs)

  2. mount the desired filesystem with -o log (or add rw,log in /etc/fstab) - that's all. The log will be created automatically when this option is in effect. See the example below.
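A minimal example (device and mount point are placeholders):

# one-off mount with the WAPBL journal enabled
mount -o log /dev/wd0a /mnt
# or persistently, via /etc/fstab:
# /dev/wd0a  /usr  ffs  rw,log  1  2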



(Source http://broadcast.oreilly.com/2009/05/netbsd-wapbl.html )

How to extend ffs size?

According to my research you can't do this at the moment.
Can anyone correct me and make me happy?

FreeBSD jails: how to login /jexec JID SHELL/ quickly in a jail by name (jlog command)?

Have you ever wondered why the heck you have to type jls, then jexec JID /bin/csh?
I got sick of this a few years ago and wrote a tiny little script that makes my life easier every day.
Be warned, there are a few cases when you'll still have to look at your jids, but the script works with a jid too.
(for example when you have a hung jail that won't shut down /stop/ - happens to me pretty often, and this has been reported as a non-critical bug for years...)

How it works?
Let's pretend that we have a jail named: 'mailserver.valqk.com'. Then you simply type this to get in the mailserver:
#>jlog mail
Logging in to mailserver.valqk.com
mailserver#                           

It's that easy. You can also add a preferred custom shell for the session after the jail (or partial) name.

What does the script itself look like?
Here it goes:
#!/bin/sh
# jlog - log in to a jail by (partial) name or by jid
[ -z "$1" ] && echo "No jail specified." && exit 1;
[ -z "$2" ] && loginSHELL="/bin/tcsh" || loginSHELL="$2";
jName="$1";
# the first match wins, so a partial name like 'mail' is enough
jID=`jls | grep "$jName" | head -1 | awk '{print $1}'`;
jRealName=`jls | grep "$jName" | head -1 | awk '{print $3}'`;
[ -z "$jID" ] && echo "No such jail name $jName!" && exit 1;
echo "Logging in to $jRealName";
jexec $jID $loginSHELL
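Drop it somewhere in your PATH and you're done (the location is up to you):

# assumed location; any dir in $PATH will do
install -m 755 jlog.sh /usr/local/bin/jlog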
please feel free to use, comment, improve this script! If you make any improvements, pls tell me!
I'll definitely add changes if I like them!!!

Serendipity and dpSyntaxHighlighter plugin with bash support.

I wanted a syntax highlighter for the previous post.
I've installed dpsyntaxhighlighter from serendipity plugins list. I wanted a JS code highlighter - not a php one (like GeSHi), so I've chosen this one. (it uses google syntaxhighlighter)
I've noticed that this nice lib doesn't have bash syntax support, which I needed now.
In the wiki of the project I've found this link to a script a guy wrote for bash syntax.
Great!
I've placed the js file in ROOT/plugins/serendipity_event_dpsyntaxhighlighter/dp.SyntaxHighlighter/Scripts/ and expected it to work.
No, it didn't.

It turned out that you have to add each language highlighter js script file in ROOT/serendipity_event_dpsyntaxhighlighter.php

Around:
switch($event) {
                case 'frontend_header':
                    echo '    <link rel="stylesheet" type="text/css" href="' . $pluginDir . '/SyntaxHighlighter.css" />' . "\n";
                    return true;
                    break;
                case 'frontend_footer':


there is a list with all js files.
Simply add the new language (on both frontend_footer and backend_preview cases) and there you go!

How to highlight your text?
It's very simple to use.
You add code like this in your post (the highlighter picks up pre tags with name="code" and the language as the class - bash in my case):

    <pre name="code" class="bash"> ... some code here ... </pre>

and you get nicely formatted code.
More usage tips here.

That's all folks.
Hope that helps you.

Xen: firewall DomU from Dom0

Have you ever wondered how to force some firewall rules on a xen DomU, so that the DomU root won't be able to use some ports etc.?
Well, the only proper way is to firewall DomU from the Dom0 machine.
Here is a way to do it.
This script is just an example. It could be made more universal and applied to ALL of your DomUs for their protection :-) or for logging specific traffic. A sample call is shown after the script.
#!/bin/bash
vifname=$1;
/sbin/iptables -N vps
#outbound traffic redirect to vps - a per DomU chain.
/sbin/iptables -I FORWARD -m physdev  --physdev-out peth0 --physdev-in $vifname -j vps
#log some of the traffic
/sbin/iptables -A "vps" -p tcp -m multiport --dports 80,110,113 -j LOG --log-level 4 --log-prefix '*DomUNameHere-shows-in-logs*'
#allow some ports
/sbin/iptables -A "vps" -p tcp -m tcp --dport 20 -j RETURN
/sbin/iptables -A "vps" -p tcp -m tcp --dport 21 -j RETURN
/sbin/iptables -A "vps" -p tcp -m tcp --dport 22 -j RETURN
/sbin/iptables -A "vps" -p tcp -m tcp --dport 80 -j RETURN
/sbin/iptables -A "vps" -p tcp -m tcp --dport 443 -j RETURN
/sbin/iptables -A "vps" -p tcp -m tcp --dport 6666 -j RETURN
/sbin/iptables -A "vps" -p tcp -m tcp --dport 6667 -j RETURN
/sbin/iptables -A "vps" -p tcp -m tcp --dport 6668 -j RETURN
/sbin/iptables -A "vps" -p tcp -m tcp --dport 6669 -j RETURN
/sbin/iptables -A "vps" -p udp -m udp --dport 53 -j RETURN
#allow established connections from inside the DomU to go back in
/sbin/iptables -A "vps" -p tcp -m state --state RELATED,ESTABLISHED -j RETURN
#drop all other tcp traffic.
/sbin/iptables -A "vps" -p tcp -j DROP

Setting up GRUB to boot from both disks of mirrored RAID

copy/paste from: http://grub.enbug.org/MirroringRAID

Many people use mirrored RAID (also known as 'RAID 1') to protect themselves against data loss caused by hard disk failure. Sometimes, you even want GRUB to boot from the secondary hard disk in case the primary fails to keep the system up and running. This is however not as easy as one might think...

GRUB keeps track of the hard disks currently available on your system, on most distributions you can find this information in /boot/grub/device.map. You might have a file like this:

hopper:~# cat /boot/grub/device.map
(hd0) /dev/sda
(hd1) /dev/sdb

Of course you can install GRUB to /dev/sdb (which is hd1), but obviously GRUB will be confused if /dev/sda fails and hd1 becomes hd0. Most likely, it will complain about a failing hard disk at boot time:

GRUB Hard Disk Error

In this case, you want to install GRUB to /dev/sdb and have sdb also mapped to hd0:

hopper:~# cat /boot/grub/device.map
(hd0) /dev/sda
(hd0) /dev/sdb
hopper:~# grub-install /dev/sdb
The drive (hd0) is defined multiple times in the device map /boot/grub/device.map

GRUB doesn't accept this duplicate definition (which is indeed incorrect), so you need to configure things by hand:

hopper:~# grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... yes
Checking if "/boot/grub/stage2" exists... yes
Checking if "/boot/grub/e2fs_stage1_5" exists... yes
Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded.
succeeded
Running "install /boot/grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/boot/grub/stage2 /boot/grub/menu.lst"... succeeded
Done.

grub> quit

Now, /dev/sda and /dev/sdb are configured as hd0 and the system remains bootable if /dev/sda fails.

Assumptions about partitions

The above information only works if your boot filesystem can be found on both /dev/sda1 and /dev/sdb1. If you have /boot on e.g. /dev/sda5 and /dev/sdb5, you'll have to replace root (hd0,0) with something more applicable for your specific configuration.

Howto: Migrate linux (debian lenny) from one single disk to two mirrored/lvm-ed disks?

Alright.
I've got a server (actually my desktop testing machine) with two brand new installed 2x1T disks.
I'm going to setup the disks like this:
3 partitions:
1 swap (we really don't need abstractions for just keeping swap)
2 boot partition in md raid1 (grub2 really sux, and there's no boot-from-lvm support in the old one...)
3 all other space for md raid1 with lvm over it.
It's a good idea to use lvm because you can always add another disk, and you can also make snapshots... in short - have more fun with space allocation.


1. partition the two disks identically:
#> fdisk -l /dev/sdb
Device Boot Start End Blocks Id System
/dev/sdb1 1 974 7823623+ 82 Linux swap / Solaris
/dev/sdb2 975 1461 3911827+ fd Linux raid autodetect
/dev/sdb3 1462 121601 965024550 fd Linux raid autodetect

(yes, I know the boot partition is quite big, but there is a lot of space and I prefer having more space to wondering wtf I've done... happened a few times of course :-D)


2. create raids.
#> mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
and activate
#> mdadm --readwrite /dev/md0
be sure it's sync-ing
#> cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc2[1] sdb2[0]
3911744 blocks [2/2] [UU]
[=>...................] resync = 5.2% (206848/3911744) finish=0.5min speed=103424K/sec

do the same for the lvm raid partitions (example below)...
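Under the same device naming as step 1, that would be (sdc3 assumed to mirror sdb3):

# the big third partitions become md1, which will hold the LVM physical volume
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb3 /dev/sdc3
mdadm --readwrite /dev/md1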


3. format the boot partition (I'll use ext3) and copy the boot files in there.
#> mkfs.ext3 /dev/md0
#> mount /dev/md0 /mnt/
#> cd /mnt/
#> cp -a /boot/* .
#> cd grub
WARNING: sda2 is supposed to be your current root partition.
#> sed -i'' -e 's/\/boot\//\//g' -e 's/sda2/mapper\/1tb-root/g' menu.lst
#> cd /; umount /mnt


4. create physical volume on md1.
#> pvcreate /dev/md1
Physical volume "/dev/md1" successfully created


5. create volume groups - my vg name 1tb
#> vgcreate -A y 1tb /dev/md1
Volume group "1tb" successfully created


6. then add the root logical volume, mkfs.ext3, mount it and copy the currently running root system into the new root partition...
#> lvcreate -A y -L 30G -Z y -n root 1tb
#> mkfs.ext3 /dev/1tb/root
#> mount /dev/1tb/root /mnt
#> cd /mnt
#> cp -a {/bin,/cdrom,/emul,/etc,/home,/initrd*,/lib,/lib32,/lib64,/media,/opt,/root,/sbin,/selinux,/srv,/tmp,/usr,/var,/vmlinuz*} .
#> mkdir dev proc sys mnt misc boot
#> cd etc
#> sed -i'' -e 's/sda2/mapper\/1tb-root/g' fstab
WARNING: There is a nasty bug with initramfs tools described here: http://www.mail-archive.com/debian-kernel@lists.debian.org/msg32272.html
You MUST set root to /dev/mapper/VGNAME-LVNAME otherwise you won't get lvm support in your kernel.
#> echo "/dev/md0 /boot ext3 defaults 0 0" >> fstab
#> mount -o bind /dev /mnt/dev
#> mount -o bind /proc /mnt/proc
#> cp -a /boot/ /mnt/boot/
#> chroot /mnt
#> update-initramfs -u -t -k `uname -r`
#> exit
reboot the machine, edit the grub menu by hand to boot from (hd0,0) as boot and load /dev/mapper/1tb-root as root.
login as root and make:
#> cd /boot; mkdir oldrootfs; mv * oldrootfs; mv -f oldrootfs/boot/* .;
edit grub/menu.lst to have / instead of /boot/ dirs.
run:
#> grub-install --root-directory=/ "(hd0)" (described in /usr/share/doc/grub/README.Debian.gz)
(root-directory is where the boot partition is on hd0; if you put /boot/ then in the live system you'll end up with /boot(md0)/boot/)...
reboot again and it must be ok now.

7. Install grub on both disks as described in the previous post (Setting up GRUB to boot from both disks of mirrored RAID)...

8. reboot and disable the first hdd in the BIOS. The system should boot normally and you should see that you are using the lvm root partition.

Linux LVM - MD Raid vs LVM mirror. Snapshots.

I'm migrating a xen machine (my gallery - g.pechurka.com ) from one machine to another.
For quite a long time I've wanted to research what I should use in Linux.
I've got a nice working setup, but when you research and test, you always make things better.

So, tonight I've researched whether I should use MD raid1 or LVM2 mirroring.
Short answer: MD raid1 with a Physical Volume over it (as I've used until now). LVM mirroring is sometimes faster, but only with a single file, and you can get serious problems on power failure or disk failure/disconnection, because LVM relies on the layer underneath to write its caches. For further reading open the link below.
Long answer here: lvm2-mirrors-vs-md-raid-1

PureEdit (rocking ultra light CMS system) and utf8 / unicode support. Pagination.

I've found PureEdit today and decided to use it as my CMS backend for some of my simpler projects!
This is a GREAT and FAST way to setup a backend content editing.
Take a look at the videos on their site! I was fascinated!

I had an old database filled with correct Unicode (UTF-8) data (in Cyrillic).
I've installed and loaded pe-admin, updated my db according to the PureEdit specs, and added some field types like (status 1||0).
When I opened pe-admin I ran into a few Unicode problems.
(as described here: http://www.pureedit.com/community/comments.php?DiscussionID=149&page=1 )
I've solved my problem exactly by putting: mysql_query('SET NAMES UTF8');

I've had to modify the connect function in pe-admin/databases/mysql.db.php like this:
function connect($host, $username, $password, $database)
{
    $dbh = mysql_connect($host, $username, $password);
    mysql_query('SET NAMES UTF8');
    mysql_select_db($database, $dbh);
}
(it used to connect/select db in one line, but you MUST make SET NAMES BEFORE selecting db!)

The second problem I've had was with utf8 again.
The Utils class is not UTF-8 ready.
I've had to set the utf8 encoding in its constructor:
public function __construct() {
    mb_internal_encoding('UTF-8');
}
and replace all str* functions with their mb_* counterparts.

After that I wanted pagination.
Here is where it's described how to set it up:

http://www.pureedit.com/community/comments.php?DiscussionID=221&page=1#Item_0

Now I have to set up a host type field (that will be automatically filled with $_SERVER['REMOTE_ADDR']) and some other fancy stuff, but in about 40 minutes I've set up a GREAT and easy-to-use backend posting solution for my website!

Thanks PureEdit :-)

The PureEdit code is not the greatest, but when it works, and if it's .htaccess protected - it simply works! :-)

FUN: WTFs per minute ... wtf/min :)

How do we understand when a project is well coded or is a piece of..... fish?
Well, have you heard of WTFs per minute?
See this :-)

thanks to OSnews.com/comics for this one!

Filesystems speed comparison: ext2, ext3, ext4dev, reiserfs and xfs.

Just a simple test for speed of ext2, ext3, ext4dev, reiserfs and xfs.
I've created a 5G lvm volume and did mkfs.* on it with each of these filesystems.
lvm> lvcreate -L 5G -n speedtest tvg
mkfs (all default opts)
mount (no opts)
Then ran dd if=/dev/zero of=a bs=1G count=3
Then ran dd if=a of=b
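Put together, each run looked roughly like this (volume group name from above, the mount point is assumed):

# per-filesystem run; the mkfs flavor changes each round
lvcreate -L 5G -n speedtest tvg
mkfs.ext3 /dev/tvg/speedtest          # or mkfs.ext2 / mkfs.xfs / mkfs.reiserfs ...
mount /dev/tvg/speedtest /mnt/speedtest
cd /mnt/speedtest
dd if=/dev/zero of=a bs=1G count=3    # sequential write
dd if=a of=b                          # copy (read + write)
cd /; umount /mnt/speedtest
lvremove -f /dev/tvg/speedtest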


Here are the results:

FS type  | dd if=/dev/zero of=a (write)                           | dd if=a of=b (copy)
ext2     | 3221225472 bytes (3.2 GB) copied, 225.283 s, 14.3 MB/s | 2047094784 bytes (2.0 GB) copied, 38.4914 s, 53.2 MB/s
ext3     | 3221225472 bytes (3.2 GB) copied, 248.433 s, 13.0 MB/s | 1912479744 bytes (1.9 GB) copied, 52.1087 s, 36.7 MB/s
ext4dev  | 3221225472 bytes (3.2 GB) copied, 281.699 s, 11.4 MB/s | 1918500864 bytes (1.9 GB) copied, 49.1343 s, 39.0 MB/s
reiserfs | 3221225472 bytes (3.2 GB) copied, 248.827 s, 12.9 MB/s | 2108379136 bytes (2.1 GB) copied, 62.5061 s, 33.7 MB/s
xfs      | 3221225472 bytes (3.2 GB) copied, 426.823 s, 7.5 MB/s  | 2132619264 bytes (2.1 GB) copied, 55.3356 s, 38.5 MB/s


It seems like ext3 currently works best with big continuous files.
Of course this is not a real-life situation.
Maybe xfs and reiserfs would be better with a lot of small files? (reiser has its glory about that)

Please let me know what you think!

Copy permissions of files from one dir to another or how to change permissions and ownership of identical dirs.

Have you ever been in the middle of a deployment and noticed that the owner and permissions of the dev package and the production one are different?
Very often when deploying some sites I have two identical dirs with different owners and permissions - one from a dev web server and another from production (where everything is tuned and working).
Here is a quick and dirty way to copy all ownership and permissions from the production dir to the development one.
BE VERY CAREFUL. The sed rules below do not cover all permission masks. These are just the ones I needed!
Don't blindly apply this script and run chm.sh right after!!!
FIRST check that your tree doesn't contain permission masks missing from the sed rules!!!
If a mask slips through unmatched, chm.sh ends up with lines like 'chmod -rw-r----- FILENAME', and you can finish with broken permissions!!!
You were warned!
mysite.production is a copy of the production dir somewhere else in the tree (your home for example).


$> find mysite.production/ ! -type l -ls | awk '{print "chmod "$3" "$11 " && chown "$5":"$6" "$11}' | sed -e 's/\.production//g' -e 's/-r--r--r--/0444/' -e 's/-rw-rw-rw-/0666/' -e 's/-rwxrwxrwx/0777/' -e 's/-rwx------/0700/' -e 's/-rwxr-xr-x/0755/' -e 's/-rwxr-x---/0750/' -e 's/-rw-r--r--/0644/' -e 's/-r-x------/0500/' -e 's/drwxr-xr-x/755/' -e 's/drwxrwxrwx/777/' > chm.sh
$>sh chm.sh


and try to compare some files that have specific perms.
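A quick spot-check could look like this (same paths as above):

# dump mode/owner/path for both trees and diff them
find mysite.production/ ! -type l -ls | awk '{print $3, $5, $6, $11}' | sed 's/\.production//' > prod.perms
find mysite/ ! -type l -ls | awk '{print $3, $5, $6, $11}' > dev.perms
diff prod.perms dev.perms | head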

that's all.
I'm SURE there is an easier way - if you know it, I'll be glad if you share!!!