Clone/Replace HDD in Software RAID

When replacing an hdd in a software raid, the easiest way to do it is to clone the partition table from the healthy disk onto the new one, regenerate the UUIDs and add the new disk to the array.

Here's how we do it (/dev/sda is the healthy disk, /dev/sdb is the new one):

For MBR Disks:

# sfdisk -d /dev/sda | sfdisk /dev/sdb


You can also do a partition rescan:
# sfdisk -R /dev/sdb



For GPT Disks:
WARNING: /dev/sdb is the DESTINATION! /dev/sda is SOURCE.

# sgdisk -R /dev/sdb /dev/sda

then randomize the disk and partition GUIDs so they don't clash with the source disk:

# sgdisk -G /dev/sdb



then check that the partitions have appeared:

# fdisk -l /dev/sdb


after that simply add your partitions back to their arrays:

# mdadm /dev/md0 -a /dev/sdb1
... repeat for every md device / partition pair (e.g. /dev/md1 with /dev/sdb2, and so on) ...


then you can watch it syncing:

# watch cat /proc/mdstat


afterwards don't forget to install grub on the new disk:


# grub-install /dev/sdb
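
As a final sanity check (my addition - assuming /dev/md0 holds your boot filesystem and you want both disks bootable), confirm the rebuild finished and make sure GRUB is present on the healthy disk as well:

# mdadm --detail /dev/md0
# grub-install /dev/sda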

Postfix - relay through another smtp server with authentication

To relay our mail from a home/office server through a real mail server that has all the needed stuff like PTR, SPF, DKIM etc. set up, the easiest way is to 'login' to a free, public service like gmail, yahoo etc. and send from our account there.
So, let's do this with postfix.
The EASY way works if the provider supports the submission protocol (port 587).
Then in main.cf we set a relayhost and the SASL options:

relayhost = [smtp.gmail.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_CAfile = /etc/postfix/cacert.pem
smtp_use_tls = yes


then we create /etc/postfix/sasl_passwd:

[smtp.gmail.com]:587 USERNAME@gmail.com:PASSWORD


we create hash file:
#> postmap /etc/postfix/sasl_passwd

then we make:
#> chmod 400 /etc/postfix/sasl_passwd*
and then
#> /etc/init.d/postfix restart
OR
#> systemctl restart postfix (Debian 8 and above)
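
A quick way to test the relay (assuming bsd-mailx or similar provides the mail command) is to send yourself a message and watch the log:

#> echo "relay test" | mail -s "relay test" you@example.com
#> tail -f /var/log/mail.log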

The HARD way is to do it with the stunnel program.
We should do this ONLY if the provider supports nothing but SMTP over SSL (port 465, no STARTTLS).

Make sure stunnel is installed:
#> apt-get install stunnel4

First, perform the same steps as above, BUT relayhost gets a different value:
in /etc/postfix/main.cf set

relayhost = 127.0.0.1:10465


After pointing the relay to localhost on a spare local port, we set up stunnel.
We create /etc/stunnel/mail.server.com.conf
It holds this:

client = yes
chroot = /var/lib/stunnel4/
setuid = stunnel4
setgid = stunnel4
pid = /stunnel4.pid
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1

[ssmtp]
accept = 127.0.0.1:10465
connect = mail.hostit.bg:465


Then we restart stunnel and postfix and we are ready.
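
To confirm the tunnel is actually listening before blaming postfix (ss comes with iproute2; use netstat -ltnp on older boxes):

#> ss -ltnp | grep 10465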

Remove failed disk from LVM group

Removing a Disk from a Logical Volume
Moving Extents to Existing Physical Volumes
#> pvs -o+pv_used

We want to move the extents off of /dev/md4 so that we can remove it from the volume group.
If there are enough free extents on the other physical volumes in the volume group, you can execute the pvmove command on the device you want to remove with no other options and the extents will be distributed to the other devices.
#> pvmove /dev/md4

After the pvmove command has finished executing, check the distribution of extents:
#> pvs -o+pv_used

Use the vgreduce command to remove the physical volume /dev/md4 from the volume group.
# vgreduce vg_name /dev/md4
Removed "/dev/md4" from volume group "vg_name"
# pvs
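
If the disk is being retired for good, you can also wipe the LVM label off it (my addition - this destroys the PV metadata on /dev/md4):

# pvremove /dev/md4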

Debian systemd run process at startup

This is a copy of: https://mkaz.github.io/2013/07/03/run-script-at-start-on-debian/ (because it disappeared).


Run Script at Start on Debian/Ubuntu

July 3, 2013 (Updated: Dec 12, 2015)

Determine Init System



Newer installs use systemd as the init system, which has a newer and more
consistent way to manage services. There was a lot of Linux drama around this
change, my guess mostly due to "cheese moving" and having to learn something
new.



You can determine if your system is running systemd using: $ ps -p1



Start Daemon on System boot



The main reason I need this script is because BitTorrent Sync is not distributed as an Ubuntu package with the necessary start scripts. Also, btsync needs to run as my user, which I often forget. First create the service file, which describes how to start the service and gives some information about it.



Saved to: /etc/systemd/user/btsync.service


[Unit]
Description=BitTorrent Sync Service

[Service]
Type=forking
User=mkaz
ExecStart=/home/mkaz/bin/btsync --config /home/mkaz/.btsync.conf

[Install]
WantedBy=multi-user.target

Enable

sudo systemctl enable /etc/systemd/user/btsync.service

Start

sudo systemctl start btsync.service



That's it, it should be running now and each time you boot. You can test by rebooting and confirming it works. The above is not ideal for multi-user servers, since I'm hardcoding everything to my user; adjust to your needs.
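
To check on it without rebooting (plain systemctl/journalctl usage):

sudo systemctl status btsync.service
sudo journalctl -u btsync.service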



Extra



There is a lot more information available around systemd, and a lot more
configuration available, for example to execute a command before start or to
declare other dependencies.







systemd throttling too fast - debian jessie bug

Systemd looks nice, but in my opinion it causes more trouble than it helps.
There is a nasty bug in the current jessie systemd (215) which makes it say, from time to time:
systemd[1]: Looping too fast. Throttling execution a little.
and eat up cpu.

The only way I've found to fix it temporarily, without a reboot, is

#> systemctl daemon-reexec

Sniffing Unix Socket - debugging communication between nginx and php-fpm

Ever wondered how to sniff the communication over a unix socket?
Here's how:

#> socat -t100 -x -v UNIX-LISTEN:/var/run/php5-fpm.sock.socat,mode=777,reuseaddr,fork UNIX-CONNECT:/var/run/php5-fpm.sock
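
Note that socat listens on a new socket path here, so for any bytes to flow through it, nginx has to be pointed at the interposed socket and reloaded (a sketch of the relevant vhost line; adjust to your config):

fastcgi_pass unix:/var/run/php5-fpm.sock.socat;

#> nginx -s reload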

You can remove -x and just leave -v for ASCII-only output.
Hope that helps someone.

xl block-attach - cool!

Just wanted to copy something from a usb flash drive into my vm.
Wondered what the fastest and easiest way to do it is, and voilà:

#> xl block-attach 10 phy:/dev/sdb1 xvdf1 w

then, inside the vm, mount /dev/xvdf1

afterwards detach it (after unmounting / freeing resources):

#> xl block-detach 10 xvdf1
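
You can list what is currently attached to the domain (domid 10 here) with:

#> xl block-list 10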

The interesting thing is that I got about 20MB/s from the usb drive (it is usually that fast - a USB 3.0 drive plugged into a 2.0 port).

Starting x11vnc from init.d in jessie

I've spent the last hour trying to make a decent 'new-style' init.d script that works properly, so I can have x11vnc started at boot time.
Here it is:
#!/bin/sh
### BEGIN INIT INFO
# Provides:          x11vnc
# Should-Start:
# Required-Start:    gdm3
# Required-Stop:
# Default-Start:     5
# Default-Stop:      0 1 2 6
# Short-Description: x11vnc server
# Description:       Debian init script for the x11vnc server
### END INIT INFO
#
# Author:       Anton Valqkoff < anton  valqk  com >
#
set -e
PATH=/sbin:/bin:/usr/sbin:/usr/bin
SERVICE=$(basename $0)
PIDFILE="/var/run/$SERVICE.pid"
BIN=/usr/bin/x11vnc
OPT=" -display :0 -auth guess -rfbauth /etc/x11vncpassword -oa /var/log/vnc.log -xkb -forever"

test -x $BIN || exit 0

if [ -r /etc/default/locale ]; then
  . /etc/default/locale
  export LANG LANGUAGE
fi

. /lib/lsb/init-functions

case "$1" in
  start)
        # NB: the init script itself shows up in ps, so a single match means x11vnc is not yet running
        if [ `ps ax|grep $SERVICE|grep -v grep|wc -l` -le 1 ]; then
                log_daemon_msg "Starting $SERVICE server" "$SERVICE"
                set +e
                start-stop-daemon --start --pidfile $PIDFILE -m --background --exec $BIN -- $OPT || log_end_msg 1
                log_end_msg 0
                set -e
        else
                log_daemon_msg "$SERVICE already started..." "$SERVICE"
        fi
  ;;
  stop)
        log_daemon_msg "Stopping $SERVICE" "$SERVICE"
        set +e
        start-stop-daemon --stop --quiet --pidfile $PIDFILE \
                --name $SERVICE --retry 5
        set -e
        log_end_msg $?
  ;;
  reload)
        log_daemon_msg "Scheduling reload of $SERVICE" "$SERVICE"
        set +e
        start-stop-daemon --stop --signal HUP --quiet --pidfile $PIDFILE \
                --name $SERVICE
        set -e
        log_end_msg $?
  ;;
  status)
        status_of_proc -p "$PIDFILE" "$BIN" $SERVICE && exit 0 || exit $?
  ;;
  restart|force-reload)
        $0 stop
        $0 start
  ;;
  *)
        echo "Usage: $0 {start|stop|restart|reload|force-reload|status}"
        exit 1
  ;;
esac

exit 0
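
After dropping the script into /etc/init.d/x11vnc, make it executable and register it with the boot sequence (standard Debian sysvinit tooling):

#> chmod +x /etc/init.d/x11vnc
#> update-rc.d x11vnc defaults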

Bash loop delimiter

Setting

IFS=$'\n'

at your bash prompt will set the delimiter a for loop uses to a newline only. (Note that a plain IFS=\n would set it to the literal character "n", and that IFS is a set of single-character delimiters, not a multi-character string: IFS="--a--" makes '-' and 'a' the delimiters, it does not split on the string "--a--".)

You can do:

OLDIFS=$IFS;
IFS=$'\n';

for i in `cat $somefile`;
do
echo $i;
done
IFS=$OLDIFS;
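
A more robust pattern for reading a file line by line, which avoids the IFS juggling and globbing pitfalls of for-over-cat entirely (plain bash, no assumptions beyond $somefile):

while IFS= read -r i; do
    echo "$i";
done < "$somefile"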

Xen Block Devices Usage tap2:aio

In case you hit the maximum number of loop devices, already have a built VM image, and CAN'T restart the machine to raise max_loop=XX when loading the loop module,
use tap2 (tapdisk:aio).


#> apt-get install blktap-dkms blktap-utils
#> modprobe blktap

Then change the config of vm from file:/ to tap2:tapdisk:aio:/
Something like this:

bootloader = '/usr/lib/xen-4.1/bin/pygrub'
vcpus = '1'
memory = '512'
root = '/dev/xvda2 ro'
disk = [
'tap2:tapdisk:aio:/home/xen//domains/virtual.com/disk.img,xvda2,w',
'tap2:tapdisk:aio:/home/xen/domains/virtual.com/swap.img,xvda1,w',
]
name = 'virtual.com'
vif = [ 'ip=192.168.10.100 ,mac=01:06:1A:12:FE:C5' ]
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
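
Then start the domain from the updated config (path is just an example; use xm create on older toolstacks):

#> xl create /etc/xen/virtual.com.cfg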


Screencast with ffmpeg

Source: http://www.upubuntu.com/2012/10/some-useful-ffmpeg-commands.html

Some Useful FFMPEG Commands (Screencasting, Rotate Video, Add Logo, etc.)

In this tutorial we will see some useful FFMPEG commands that you can use on Ubuntu/Linux Mint to make screencasting videos, rotate videos, add logo/text watermarks to a video, insert shapes, and so on.

To install ffmpeg and some other packages on Ubuntu/Linux Mint, open the terminal and run these commands:

sudo apt-get install ubuntu-restricted-extras

sudo apt-get install ffmpeg x264

sudo apt-get install frei0r-plugins mjpegtools

Note: The file formats used in this tutorial are selected randomly and you can set any other extension of your choice.

1. Screencasting

To record your screen with FFMPEG, you can use this command:

ffmpeg -f x11grab -follow_mouse 100 -r 25 -s vga -i :0.0 filename.avi

Now the command will record every spot on your screen you hover your mouse cursor over. Press Ctrl+C to stop recording. If you want to set a screen resolution for the video to be recorded, you can use this ffmpeg command:

ffmpeg -f x11grab -s 800x600 -r 25 -i :0.0 -qscale 5 filename.avi

To show the region that will be recorded while moving your mouse pointer, use this command:

ffmpeg -f x11grab -follow_mouse centered -show_region 1 -r 25 -s vga -i :0.0 filename.avi

If you want to record in fullscreen with better video quality (HD), you can use this command:

ffmpeg -f x11grab -s wxga -r 25 -i :0.0 -sameq video.mp4
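
Note: -sameq was removed from later FFmpeg releases (it never actually meant "same quality"). On a modern build, something along these lines is a reasonable equivalent, assuming libx264 is available (adjust -video_size to your screen):

ffmpeg -f x11grab -video_size 1366x768 -framerate 25 -i :0.0 -c:v libx264 -crf 18 video.mp4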


2. Add Audio To A Static Picture

If you want to add music to a static picture with ffmpeg, run this command from the terminal:

ffmpeg -i audio.mp3 -loop_input -f image2 -i file.jpg -t 188 output.mp4

3. Add Image Watermarks to A Video

To add an image to a video using ffmpeg, you can use one of these commands:

Picture Location: Top Left Corner

ffmpeg -i input.avi -vf "movie=file.png [watermark]; [in][watermark] overlay=10:10 [out]" output.flv


Picture Location: Top Right Corner

ffmpeg -i input.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=main_w-overlay_w-10:10 [out]" output.flv

Picture Location: Bottom Left Corner

ffmpeg -i input.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=10:main_h-overlay_h-10 [out]" output.flv

Picture Location: Bottom Right Corner

ffmpeg -i input.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=main_w-overlay_w-10:main_h-overlay_h-10 [out]" output.flv

4. Add Text Watermarks To Videos

To add text to a video, use this command:

ffmpeg -i input.mp4 -vf drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSans.ttf: text='YOUR TEXT HERE':fontcolor=red@1.0:fontsize=70:x=00: y=40" -y output.mp4


To use another text font, you can list them from the terminal with this command:

ls /usr/share/fonts/truetype/freefont/

5. Rotate Videos

To rotate a video 90 degrees with ffmpeg, run this command:

ffmpeg -i input.avi -vf transpose=1 output.avi


Here are all the transpose parameters:

0 = 90 degrees CounterClockwise and Vertical Flip (default)
1 = 90 degrees Clockwise
2 = 90 degrees CounterClockwise
3 = 90 degrees Clockwise and Vertical Flip

6. Adjust Audio/Video Volume
You can use ffmpeg to change the volume of a video file with this command (-vol is in units of 1/256: 256 means unchanged, 512 doubles the volume):

ffmpeg -i input.avi -vol 100 output.avi

To change volume of an audio file, run this command:

ffmpeg -i input.mp3 -vol 100 -ab 128 output.mp3

7. Insert A Video Inside Another Video

To do this, run this command:

ffmpeg -i video1.mp4 -vf "movie=video2.mp4:seek_point=5, scale=200:-1, setpts=PTS-STARTPTS [movie]; [in] setpts=PTS-STARTPTS, [movie] overlay=270:240 [out]" output.mp4


8. Add a Rectangle To A Video

To draw for example an orange rectangle in a video, you can use this command:

ffmpeg -i input.avi -vf "drawbox=500:150:600:400:orange@0.9" -sameq -y output.avi

How to create and start VirtualBox VM without GUI

Source: http://xmodulo.com/how-to-create-and-start-virtualbox-vm-without-gui.html

Suppose you want to create and run virtual machines (VMs) on VirtualBox. However, the host machine does not support an X11 environment, or you only have access to a terminal on a remote host machine. Then how can you create and run VMs on such a host machine without the VirtualBox GUI? This is a common situation for servers, where VMs are managed remotely.

In fact, VirtualBox comes with a suite of command line utilities, and you can use the VirtualBox command line interfaces (CLIs) to manage VMs on a remote headless server. In this tutorial, I will show you how to create and start a VM without VirtualBox GUI.

Prerequisite for starting VirtualBox VM without GUI

First, you need to install VirtualBox Extension Pack. The Extension Pack is needed to run a VRDE remote desktop server used to access headless VMs. Its binary is available for free. To download and install VirtualBox Extension Pack:

$ wget http://download.virtualbox.org/virtualbox/4.2.12/Oracle_VM_VirtualBox_Extension_Pack-4.2.12-84980.vbox-extpack
$ sudo VBoxManage extpack install ./Oracle_VM_VirtualBox_Extension_Pack-4.2.12-84980.vbox-extpack
Verify that the Extension Pack is successfully installed, by using the following command.

$ VBoxManage list extpacks
Extension Packs: 1
Pack no. 0: Oracle VM VirtualBox Extension Pack
Version: 4.2.12
Revision: 84980
Edition:
Description: USB 2.0 Host Controller, VirtualBox RDP, PXE ROM with E1000 support.
VRDE Module: VBoxVRDP
Usable: true
Why unusable:
Create a VirtualBox VM from the command line

I assume that the VirtualBox VM directory is located in "~/VirtualBox\ VMs".

First create a VM. The name of the VM is "testvm" in this example.

$ VBoxManage createvm --name "testvm" --register
Specify the hardware configurations of the VM (e.g., Ubuntu OS type, 1024MB memory, bridged networking, DVD booting).

$ VBoxManage modifyvm "testvm" --memory 1024 --acpi on --boot1 dvd --nic1 bridged --bridgeadapter1 eth0 --ostype Ubuntu
Create a disk image (with size of 10000 MB). Optionally, you can specify disk image format by using "--format [VDI|VMDK|VHD]" option. Without this option, VDI image format will be used by default.

$ VBoxManage createvdi --filename ~/VirtualBox\ VMs/testvm/testvm-disk01.vdi --size 10000
Add an IDE controller to the VM.

$ VBoxManage storagectl "testvm" --name "IDE Controller" --add ide
Attach the previously created disk image as well as CD/DVD drive to the IDE controller. Ubuntu installation ISO image (found in /iso/ubuntu-12.04.1-server-i386.iso) is then inserted to the CD/DVD drive.

$ VBoxManage storageattach "testvm" --storagectl "IDE Controller" --port 0 --device 0 --type hdd --medium ~/VirtualBox\ VMs/testvm/testvm-disk01.vdi
$ VBoxManage storageattach "testvm" --storagectl "IDE Controller" --port 1 --device 0 --type dvddrive --medium /iso/ubuntu-12.04.1-server-i386.iso
OR Detach ISO:
$ VBoxManage storageattach "testvm" --storagectl "IDE Controller" --port 1 --device 0 --type dvddrive --medium none
Start VirtualBox VM from the command line

Once a new VM is created, you can start the VM headless (i.e., without VirtualBox console GUI) as follows.

$ VBoxHeadless --startvm "testvm" &
The above command will launch the VM, as well as VRDE remote desktop server. The remote desktop server is needed to access the headless VM's console.

By default, the VRDE server is listening on TCP port 3389. If you want to change the default port number, use "-e" option as follows.

$ VBoxHeadless --startvm "testvm" -e "TCP/Ports=4444" &
If you don't need remote desktop support, launch a VM with "--vrde off" option.

$ VBoxHeadless --startvm "testvm" --vrde off &
Connect to headless VirtualBox VM via remote desktop

Once a VM is launched with remote desktop support, you can access the VM's console via any remote desktop client (e.g., rdesktop).

To install rdesktop on Ubuntu or Debian:

$ sudo apt-get install rdesktop
To install rdesktop on CentOS, RHEL or Fedora, configure Repoforge on your system, and then run the following.

$ sudo yum install rdesktop
To access a headless VM on a remote host machine, run the following.

$ rdesktop -a 16 IP_address_host_machine
If you use a custom port number for a remote desktop server, run the following instead.

$ rdesktop -a 16 IP_address_host_machine:port_number
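
Useful follow-ups once the VM is running (standard VBoxManage subcommands):

$ VBoxManage list runningvms
$ VBoxManage controlvm "testvm" acpipowerbutton
$ VBoxManage controlvm "testvm" poweroff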

Mysql Master-Master with many slaves replication

Sources:
https://www.packtpub.com/books/content/setting-mysql-replication-high-availability
https://www.packtpub.com/books/content/installing-and-managing-multi-master-replication-managermmm-mysql-high-availability
https://capttofu.livejournal.com/1752.html

Using Master<->Master replication is a good backup solution, but it is not good enough if we want to offload queries from the master.

Thus we can create:

      Master <-------> Master
       /   \            /   \
   Slave   Slave    Slave   Slave

1. Setup both masters.
Tweak some options in my.cnf (on both masters!). Note that server-id must be unique on every node in the topology:
server-id = 1
log-slave-updates
log-bin = /var/log/mysql/bin.log
log-bin-index = /usr/local/mysql/var/log-bin.index
log-error = /usr/local/mysql/var/error.log
expire_logs_days = 10
max_binlog_size = 200M

WARNING: log-slave-updates is crucial!!! Without it, slaves hanging off the second master won't receive updates pushed through the first master, and vice versa.

2. Add MySQL Users:
mysql> grant replication slave on *.* to 'replication'@'10.0.0.%' identified by 'pass';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

3. Dump all DBs from the master, scp the dump to the slave and import it. This way we will have 1:1 DBs on both nodes. Note that you may need to set the password for the debian-sys-maint user in /etc/mysql/debian.cnf.

On master:
$> mysqldump --delete-master-logs --master-data --lock-all-tables --all-databases --hex-blob -u root -p > dumpall.sql
$> bzip2 dumpall.sql
$> scp dumpall.sql.bz2 root@slave:

NOTICE: --delete-master-logs purges all master binlogs from BEFORE this dump. If you have other slaves syncing or need the earlier binlogs, remove this option!

On slave:
$> bunzip2 dumpall.sql.bz2
$> mysql -uroot -p mysql < dumpall.sql

check the binlog file and position recorded in the dump (--master-data writes a CHANGE MASTER line into it):

$> grep "CHANGE MASTER" dumpall.sql

now log in to mysql and point the slave at the first master:

mysql> change master to master_host = '10.0.0.1', master_user='replication', master_password='pass', master_log_file='node1-binary.000001', master_log_pos=1;
mysql> start slave;

Check that the slave thread on the 2nd master is running; Seconds_Behind_Master should be 0 and the Error_* fields empty. Usually this means everything is OK.
mysql> show slave status\G
mysql> show master status;

Now do the same thing on the 1st master, just using the second master's binlog file and position (e.g. node2-binary.000001):

mysql> change master to master_host = '10.0.0.2', master_user='replication', master_password='pass', master_log_file='node2-binary.000001', master_log_pos=1;
mysql> start slave;

Check that the slave thread on the 1st master is running; Seconds_Behind_Master should be 0 and the Error_* fields empty. Usually this means everything is OK.
mysql> show slave status\G

Now test create/insert/update/delete.
First, on the 1st master, create a table and insert a record. Check on the 2nd master that the table is there and has the record.
On the second master insert a second record. Check on the 1st that there are 2 records.

4. Create Read-Only Slaves connected to the 1st and to the 2nd master:

Simply do the same setup as above: dump the DB, import it, then CHANGE MASTER TO - BUT WATCH OUT for the binlog file/position! (See the slave settings suggested below.)
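
On the read-only slaves it's also worth setting (my suggestion - standard MySQL options; note read_only does not restrict users with SUPER):

server-id = 11   # must be unique per node
read_only = 1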

When done setting up, and slave status shows 0 seconds behind, TEST!

First create a table on the 1st master and insert 1 record.
Then check all slaves connected to the 1st master.
Then check all slaves connected to the 2nd master!
All MUST have the table and the record.
After that insert a second row on the 2nd master.
Then check all slaves connected to the 1st master.
Then check all slaves connected to the 2nd master!


I think that's all!
Happy replicating.

OpenSSL mostly used commands

Here's a list of the most-used openssl commands:

1. Create key + csr:

$> openssl req -new -nodes -keyout server.key -out server.csr -newkey rsa:4096

2. Create key only:

$> openssl genrsa -des3 -out server.key.crypted 4096

3. Remove password from key:

$> openssl rsa -in server.key.crypted -out server.key

4. Generate CSR

$> openssl req -new -key server.key -out server.csr

5. Self-signed certificate

$> openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

6. View the details of CSR

$> openssl req -noout -text -in server.csr

7. Check a Certificate Signing Request (CSR)

$> openssl req -text -noout -verify -in CSR.csr

8. Check a private key

$> openssl rsa -in privateKey.key -check

9. Check a certificate

$> openssl x509 -in certificate.crt -text -noout

10. Check a PKCS#12 file (.pfx or .p12)

$> openssl pkcs12 -info -in keyStore.p12

11. Convert .crt to .pfx for IIS server

$> openssl pkcs12 -export -out server.pfx -inkey server.key -in server.crt
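
12. Check that a private key matches a certificate (my addition - the two digests must be identical):

$> openssl x509 -noout -modulus -in server.crt | openssl md5
$> openssl rsa -noout -modulus -in server.key | openssl md5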



How do I extract information from a certificate? (from: https://www.madboa.com/geek/openssl/ )

An SSL certificate contains a wide range of information: issuer, valid dates, subject, and some hardcore crypto stuff. The x509 subcommand is the entry point for retrieving this information. The examples below all assume that the certificate you want to examine is stored in a file named cert.pem.

Using the -text option will give you the full breadth of information.

$> openssl x509 -text -in cert.pem
Other options will provide more targeted sets of data.

# who issued the cert?
$> openssl x509 -noout -in cert.pem -issuer

# to whom was it issued?
$> openssl x509 -noout -in cert.pem -subject

# for what dates is it valid?
$> openssl x509 -noout -in cert.pem -dates

# the above, all at once
$> openssl x509 -noout -in cert.pem -issuer -subject -dates

# what is its hash value?
$> openssl x509 -noout -in cert.pem -hash

#serial
$> openssl x509 -noout -in cert.pem -serial

# what is its MD5 fingerprint?
$> openssl x509 -noout -in cert.pem -fingerprint -md5

# what is its SHA1 fingerprint?
$> openssl x509 -noout -in cert.pem -fingerprint -sha1


Postfix/Dovecot fail2ban

Sources:
http://workaround.org/ispmail/squeeze/sysadmin-niceties
http://www.fail2ban.org/wiki/index.php/Postfix

Copy of my post http://superuser.com/questions/576751/example-of-fail2ban-configuration-to-ban-servers-spamming-my-postfix-server/600365



I've just got sick of all the RBL-listed spammers filling my logs, so I've set up fail2ban on my postfix to ban them.

After doing so, the load dropped, because there were a lot of them!

Be aware that you have to implement some way of cleaning the banned list.

I'm planning to restart fail2ban on a weekly basis.

Check out these rules: http://www.fail2ban.org/wiki/index.php/Postfix

Add them in: /etc/fail2ban/filter.d/postfix.conf (that's in Debian System!)

Also good to read this (search for fail2ban): http://workaround.org/ispmail/squeeze/sysadmin-niceties (some snippets from there).

In short:

In jail.conf set:

[postfix]
enabled = true

Good to do if you're using dovecot (from the link above): create /etc/fail2ban/filter.d/dovecot-pop3imap.conf and add in it:

[Definition]
failregex = (?: pop3-login|imap-login): .*(?:Authentication failure|Aborted login \(auth failed|Aborted login \(tried to use disabled|Disconnected \(auth failed).*rip=(?P<host>\S*),.*
ignoreregex =
Add section in jail.conf:

[dovecot-pop3imap]
enabled = true
port = pop3,pop3s,imap,imaps
filter = dovecot-pop3imap
logpath = /var/log/mail.log
Restart fail2ban and check iptables -nvL to confirm that the chains for postfix and dovecot have been added. BE AWARE! This is for Debian based systems. Check the file paths for RH and others.
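
You can dry-run a filter against your log before enabling the jail (fail2ban ships a tester for exactly this):

#> fail2ban-regex /var/log/mail.log /etc/fail2ban/filter.d/dovecot-pop3imap.conf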

Building postfix with vda patch in debian.

While reading howtos for postfix quota support, nobody ever mentioned that the VDA patch must be applied for quota to work.
After finding this out, I wanted to build it the Debian way, and this is how it's done:


# cd /usr/src
# apt-get source postfix
# wget http://vda.sourceforge.net/VDA/postfix-vda-2.7.1.patch
# cd postfix-2.7.1
# patch -p1 < ../postfix-vda-2.7.1.patch
# dpkg-buildpackage
# cd ..
# dpkg -i postfix_2.7.1-1+squeeze1_amd64.deb
# dpkg -i postfix-mysql_2.7.1-1+squeeze1_amd64.deb
# dpkg -i postfix-pcre_2.7.1-1+squeeze1_amd64.deb
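
Since these are locally patched builds, it's a good idea to put them on hold so a routine apt upgrade doesn't silently replace them (standard dpkg mechanism):

# echo "postfix hold" | dpkg --set-selections
# echo "postfix-mysql hold" | dpkg --set-selections
# echo "postfix-pcre hold" | dpkg --set-selections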

Sync one directory to another

I had to sync one local fileserver directory (and all its subdirs) to a remote server on the fly, so that whatever gets written to the local server appears on the remote one.
I tried incron first, but it's not recursive.
I tested some other solutions, but they all had issues.
I ended up using watcher.py: https://github.com/greggoryhz/Watcher
It has worked flawlessly for 2 months now. (local copy: http://www.valqk.com/assets/user/watcher.py )
Install the dependency libs:

#> sudo apt-get install python python-pyinotify python-yaml

Another example - if you want to sync two local dirs - you do it like this:

jobs.yml file:


job1:
  label: Watch user/dir for added and changed files and cp to user1/dir/
  watch: /home/user/dir/
  events: ['attribute_change', 'modify', 'create', 'move']
  recursive: true
  command: /home/user/cpfile.sh /home/user/dir/ $filename /home/user1/dir/
job2:
  label: Watch user/dir for removed files and rm from user1/dir
  watch: /home/user/dir/
  events: ['delete', 'self_delete']
  recursive: true
  command: /home/user/dir/delfile.sh /home/user/dir/ $filename /home/user1/dir/


and the .sh scripts:

cpfile.sh
#!/bin/bash

prefix="$1";
file="$2";
dst="$3";
plen=${#prefix};
echo "RUN $0 $1 $2 $3" >> /tmp/a
echo cp -a $file $dst/${file:$plen} >> /tmp/a;
cp -a "$file" "$dst/${file:$plen}";
exit $?;
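
For clarity, this is how watcher.py ends up invoking the helper for a new file (illustrative paths); the ${file:$plen} substring strips the watched prefix, so the relative path is recreated under the destination:

$> /home/user/cpfile.sh /home/user/dir/ /home/user/dir/sub/new.txt /home/user1/dir/
(which runs: cp -a /home/user/dir/sub/new.txt /home/user1/dir/sub/new.txt)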


delfile.sh
#!/bin/bash

prefix="$1";
file="$2";
dst="$3";
plen=${#prefix};
rm "$dst/${file:$plen}";
exit $?;

Nat through non-default gateways more than one internal network.

One big office space (with one BIG network) shared by more than one company, each having different policies for its IT infrastructure.
How do we NAT the different local networks (connected to eth2, eth3, eth4, etc.) through different gateways (each connected via openvpn to its company's VPN server)?

Here it is how:

#!/bin/sh

exc() {
cmd="$1";
[ -n "$2" ] && exitt="$2";
echo "Exec $cmd ...";
$cmd;
[ $? -gt 0 ] && echo "Error executing $cmd..." && [ "$exitt" != "0" ] && exit 1;
}

[ `which realpath|wc -l` -lt 1 ] && echo "This script requires the realpath command" && exit 1;

[ -z "$1" ] && echo "Param1: net config" && exit 1;
[ -n "$1" ] && cfg=`realpath $1`;
[ -n "$1" ] && ! [ -f "$cfg" ] && echo "Config $1 con't be found!" && exit 1;
[ -n "$1" ] && [ -f "$cfg" ] && . $cfg;

[ -z "$defgw" ] || [ -z "$vpnremoteip" ] || [ -z "$local1net" ] || [ -z "$local1ip" ] || [ -z "$local1netdev" ] || [ -z "$tundev1" ] || [ -z "$vpn1cfgdir" ] || [ -z "$vpn1cfg" ] || [ -z "$vpn1rtbl" ] && echo "Some variables that are required are empty! We need all: defgw : $defgw , vpnremoteip : $vpnremoteip , local1net : $local1net , local1ip : $local1ip , local1netdev : $local1netdev , tundev1 : $tundev1 , vpn1cfgdir : $vpn1cfgdir , vpn1cfg : $vpn1cfg , vpn1rtbl : $vpn1rtbl" && exit 1;


[ -n "`ps ax|grep openvpn|grep $vpn1cfg|grep -v grep`" ] && echo "Openvpn with cfg $vpn1cfg already runs PID: `ps ax|grep openvpn|grep $vpn1cfg|grep -v grep|cut -f1 -d ' '`" && exit 1;
local1ifacecheck=`ifconfig $local1netdev|grep inet|cut -f2 -d:|cut -f1 -d' '`;

[ -n "$local1ifacecheck" ] && [ "x$local1ifacecheck" != "x$local1ip" ] && echo "$local1netdev is UP but ip doesn't match ($local1ip != $local1ifacecheck)!" && exit 1;
[ -z "$local1ifacecheck" ] && exc "ifconfig $local1netdev $local1ip up" && exc "ip r del $local1net" 0;

[ `ip r s|grep $local1net|grep -v grep|wc -l` -gt 0 ] && exc "ip r del $local1net" 0;

[ `ip r s|grep $vpnremoteip|grep -v grep|wc -l` -lt 1 ] && exc "ip r add $vpnremoteip via $defgw dev eth0";

# start vpn and get local/remote ppp ip
exc "cd $vpn1cfgdir";
exc "openvpn --daemon --config $vpn1cfg";
sleep 10;

vpn1local=`ifconfig $tundev1|grep inet|awk '{print $2}'|cut -f 2 -d:`;
vpn1remote=`ifconfig $tundev1|grep inet|awk '{print $3}'|cut -f 2 -d:`;

[ -z "$vpn1local" ] || [ -z "$vpn1remote" ] && echo "Can't find local/remote vpn ips" && exit 1;

#clean up vpn routes from default routing table
vpn1net=`ip r |grep "via $vpn1remote"|grep -v grep|cut -f1 -d' '`;
[ -n "$vpn1net" ] && exc "ip r del $vpn1net" 0;
[ -n "$vpn1remote" ] && exc "ip r del $vpn1remote" 0;


echo "Add routing for: vpn1remote: $vpn1remote ; vpn1net: $vpn1net ; local1net : $local1net ; default";
#add routes in new routing table vpnr1
[ -z "`ip r s t $vpn1rtbl|grep $vpn1remote|grep -v grep`" ] && exc "ip r add $vpn1remote dev $tundev1 src $vpn1local table $vpn1rtbl";
[ -z "`ip r s t $vpn1rtbl|grep $vpn1net|grep -v grep`" ] && exc "ip r add $vpn1net dev $tundev1 via $vpn1local table $vpn1rtbl";
[ -z "`ip r s t $vpn1rtbl|grep $local1net|grep -v grep`" ] && exc "ip r add $local1net dev $local1netdev src $local1ip table $vpn1rtbl";
[ -z "`ip r s t $vpn1rtbl|grep 'default'|grep -v grep`" ] && exc "ip r add default via $vpn1local dev $tundev1 table $vpn1rtbl";
#add rules for vpn/vpn1-local nets to lookup vpnr1;
[ -z "`ip ru s|grep "from $vpn1net"|grep -v grep`" ] && exc "ip rule add from $vpn1net lookup $vpn1rtbl prio 1000";
[ -z "`ip ru s|grep "to $vpn1net"|grep -v grep`" ] && exc "ip rule add to $vpn1net lookup $vpn1rtbl prio 1000";
[ -z "`ip ru s|grep "from $vpn1local"|grep -v grep`" ] && exc "ip rule add from $vpn1local lookup $vpn1rtbl prio 1100";
[ -z "`ip ru s|grep "from $local1net"|grep -v grep`" ] && exc "ip rule add from $local1net lookup $vpn1rtbl prio 998";
[ -z "`ip ru s|grep "to $local1net"|grep -v grep`" ] && exc "ip rule add to $local1net lookup $vpn1rtbl prio 998";


[ `iptables -t nat -nvL|grep SNAT|grep "$local1net"|wc -l` -lt 1 ] && exc "iptables -t nat -A POSTROUTING -s $local1net -o $tundev1 -j SNAT --to-source $vpn1local";
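
The script sources its settings from the config file passed as the first argument. A hypothetical example (the variable names are the ones the script checks for; all values are illustrative), plus the routing table, which must be declared in /etc/iproute2/rt_tables before use:

#> echo "100 vpnr1" >> /etc/iproute2/rt_tables

# sample net config, e.g. /etc/nat-vpn/company1.cfg
defgw=192.168.0.1            # office default gateway
vpnremoteip=203.0.113.10     # company VPN server public IP
local1net=10.10.1.0/24       # company LAN behind eth2
local1ip=10.10.1.1
local1netdev=eth2
tundev1=tun1
vpn1cfgdir=/etc/openvpn/company1
vpn1cfg=company1.conf
vpn1rtbl=vpnr1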

HP Smart Array tool - HPAcuCLI Usage

Linux - hpacucli

This document is a quick cheat sheet on how to use the hpacucli utility to add, delete, identify and repair logical and physical disks on the Smart Array 5i Plus controller. The commands below were tested on an HP DL380 G3 server with a Smart Array 5i Plus controller and 6 x 72GB hot-swappable disks, running Oracle Enterprise Linux (OEL).

After a fresh install of Linux I downloaded the file hpacucli-8.50-6.0.noarch.rpm (5MB); you may want to download the latest version from HP. Then install it using the standard rpm command.

I am not going to list all the commands, but here are the most common ones I have used thus far; this document may be updated as I use the utility more.

Utility keyword abbreviations:
  chassisname     = ch
  controller      = ctrl
  logicaldrive    = ld
  physicaldrive   = pd
  drivewritecache = dwc

Starting the hpacucli utility:

# hpacucli
# hpacucli help

Note: you can use the hpacucli command in a script.

Controller Commands

Display (detailed):
  hpacucli> ctrl all show config
  hpacucli> ctrl all show config detail
Status:
  hpacucli> ctrl all show status
Cache:
  hpacucli> ctrl slot=0 modify dwc=disable
  hpacucli> ctrl slot=0 modify dwc=enable
Rescan:
  hpacucli> rescan
  Note: detects newly added devices since the last rescan

Physical Drive Commands

Display (detailed):
  hpacucli> ctrl slot=0 pd all show
  hpacucli> ctrl slot=0 pd 2:3 show detail
  Note: you can obtain the slot number by displaying the controller configuration (see above)
Status:
  hpacucli> ctrl slot=0 pd all show status
  hpacucli> ctrl slot=0 pd 2:3 show status
Erase:
  hpacucli> ctrl slot=0 pd 2:3 modify erase
Blink disk LED:
  hpacucli> ctrl slot=0 pd 2:3 modify led=on
  hpacucli> ctrl slot=0 pd 2:3 modify led=off

Logical Drive Commands

Display (detailed):
  hpacucli> ctrl slot=0 ld all show [detail]
  hpacucli> ctrl slot=0 ld 4 show [detail]
Status:
  hpacucli> ctrl slot=0 ld all show status
  hpacucli> ctrl slot=0 ld 4 show status
Blink disk LED:
  hpacucli> ctrl slot=0 ld 4 modify led=on
  hpacucli> ctrl slot=0 ld 4 modify led=off
Re-enable failed drive:
  hpacucli> ctrl slot=0 ld 4 modify reenable forced
Create:
  # logical drive - one disk
  hpacucli> ctrl slot=0 create type=ld drives=1:12 raid=0
  # logical drive - mirrored
  hpacucli> ctrl slot=0 create type=ld drives=1:13,1:14 size=300 raid=1
  # logical drive - raid 5
  hpacucli> ctrl slot=0 create type=ld drives=1:13,1:14,1:15,1:16,1:17 raid=5
  Note:
    drives - specific drives, all drives or unassigned drives
    size - size of the logical drive in MB
    raid - type of raid: 0, 1, 1+0 and 5
Remove:
  hpacucli> ctrl slot=0 ld 4 delete
Expanding:
  hpacucli> ctrl slot=0 ld 4 add drives=2:3
Extending:
  hpacucli> ctrl slot=0 ld 4 modify size=500 forced
Spare:
  hpacucli> ctrl slot=0 array all add spares=1:5,1:7

LSI SAS status tool

If you have LSI SAS attached drives with FusionMPT then you can monitor it with this: http://hwraid.le-vert.net/wiki/LSIFusionMPTSAS2#a2.Linuxkerneldrivers
There is a repo: http://hwraid.le-vert.net/wiki/DebianPackages

#> apt-get install sas2ircu-status

then:

#>sas2ircu-status
-- Controller informations --
-- ID | Model
c0 | SAS2008

-- Arrays informations --
-- ID | Type | Size | Status
c0u0 | RAID1 | 1907G | Okay (OKY)

-- Disks informations
-- ID | Model | Status
c0u0p0 | ST32000644NS (9WM3BMY3) | Optimal (OPT)
c0u0p1 | ST32000644NS (9WM3F3XK) | Optimal (OPT)

or

#> sas2ircu-status --nagios
RAID OK - Arrays: OK:1 Bad:0 - Disks: OK:2 Bad:0

Screen automatic startup

UPDATE: To use it with systemd, create this file (on Debian):
/lib/systemd/system/screen-startup.service
containing:
[Unit]
Description=Screen startup service
After=network.target

[Service]
Type=oneshot
PIDFile=/run/screen-startup.pid
ExecStart=/usr/sbin/screen-startup start
ExecStop=/usr/sbin/screen-startup stop
#ExecReload=/usr/sbin/screen-startup restart
RemainAfterExit=yes


[Install]
WantedBy=multi-user.target


Then enable it (systemctl enable creates the symlink in /etc/systemd/system/multi-user.target.wants/ for you):
#> systemctl enable screen-startup
or link it by hand, which is the equivalent:
#> ln -s /lib/systemd/system/screen-startup.service /etc/systemd/system/multi-user.target.wants/screen-startup.service

Have you ever wondered how to start your scripts in screen at boot?
I wondered for a while, googled a few times, and when I found nothing nice I wrote this simple script.

It has few nice features:

- can run screen as a given user
- checks that the screen/session is not already started
- cleans up stale pid files
- it's a Debian startup script
- reads the command and the user to run as from config files in the $CFG dir
- sets the session name as defined in the config. !new!

Comments and bugs are welcome to valqk to lozenetz dt net

Sample config /etc/screen-startup/run_site.cfg:

SCRIPT=/path/to/cron/script.sh
USER=siteuser
SCREEN_NAME=site_cronjob


Script name: screen-startup

#!/bin/bash
# /etc/init.d/screen-startup
#
### BEGIN INIT INFO
# Provides: screen-startup
# Required-Start: screen-cleanup
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Start daemon at boot time
# Description: Enable service provided by daemon.
### END INIT INFO
[ -z "$CFG" ] || ! [ -d "$CFG" ] && CFG='/etc/screen-startup/';
! [ -d "$CFG" ] && echo "No config dir!" && exit 1;
# Carry out specific functions when asked to by the system
startScreen() {
echo "Starting screens..."
for script in $CFG/*.cfg;
do
! [ -f "$script" ] && continue;
SCRIPT=`grep SCRIPT= $script|cut -f2 -d=`;
USER=`grep USER= $script|cut -f2 -d=`;
SCREEN_NAME=`grep SCREEN_NAME= $script|cut -f2 -d=`;
if [ -n "$SCRIPT" ] && [ -n "$USER" ]; then
if [ "x${SCREEN_NAME}" = "x" ]; then
sessName="`echo $SCRIPT|sed -e 's%/%_%g'`-$USER-AS"
else
sessName="${SCREEN_NAME}";
fi
if [ -f /var/run/screen/$sessName.pid ]; then
sessPid=`cat /var/run/screen/$sessName.pid`;
[ "x$sessPid" != "x" ] && [ `ps -p $sessPid|wc -l` -gt 1 ] && echo "$sessName alredy started ($sessPid)!!!" && continue;
echo "cleaning stale pid file: $sessName.pid"
rm /var/run/screen/$sessName.pid
fi
echo -n "Screen $SCRIPT for user $USER..."
/bin/su -c "/usr/bin/screen -dmS $sessName $SCRIPT" $USER
screenPid=`ps ax|grep "$sessName"|grep "$SCRIPT"|grep -v grep|awk '{print $1}'`
echo $screenPid > /var/run/screen/$sessName.pid
echo "done.";
fi
done
}
stopScreen() {
echo "Stopping screens..."
for script in $CFG/*.cfg;
do
! [ -f "$script" ] && continue;
SCRIPT=`grep SCRIPT= $script|cut -f2 -d=`;
USER=`grep USER= $script|cut -f2 -d=`;
SCREEN_NAME=`grep SCREEN_NAME= $script|cut -f2 -d=`;
sessName="`echo $SCRIPT|sed -e 's%/%_%g'`-$USER-AS"
if [ "x${SCREEN_NAME}" = "x" ]; then
sessName="`echo $SCRIPT|sed -e 's%/%_%g'`-$USER-AS"
else
sessName="${SCREEN_NAME}";
fi
if [ -f /var/run/screen/$sessName.pid ]; then
pidOfScreen=`cat /var/run/screen/$sessName.pid|cut -f 1 -d' '`;
pidOfBash=`cat /var/run/screen/$sessName.pid|cut -f 2 -d' '`;
if [ "x$pidOfBash" != "x" ] && [ `ps -p $pidOfBash|wc -l` -lt 2 ]; then
echo "Missing process $pidOfBash for screen $pidOfScreen. Cleaning up stale run file."
rm /var/run/screen/$sessName.pid;
continue;
else
echo -n "Screen: $SCRIPT for user $USER..."
kill $pidOfBash $pidOfScreen;
echo "done."
rm /var/run/screen/$sessName.pid;
fi
fi
done

}
case "$1" in
start)
startScreen;
;;
stop)
stopScreen;
;;
restart)
stopScreen;
startScreen;
;;
*)
echo "Usage: $0 {start|stop|restart}"
exit 1
;;
esac
exit 0


p.s. Edit: rev.1 of the script now supports SCREEN_NAME in the config. When set, you can resume the screen with screen -r SCREEN_NAME (or part of it).

DRBD 3 machines stacked setup

This is copy/paste from http://www.howtoforge.com/drbd-8.3-third-node-replication-with-debian-etch plus a split-brain fixes.

WARNING: DO NOT do this setup unless you're OK with the speed to the remote node. The maximum speed you will get from the drbd device is the speed at which you can push data to the 3rd node.
--------------




DRBD 8.3 Third Node Replication With Debian Etch


Installation and Set Up Guide for DRBD 8.3 + Debian Etch


The Third Node Setup


by Brian Hellman


The recent release of DRBD 8.3 now includes The Third Node feature as a freely available component. This document will cover the basics of setting up a third node on a standard Debian Etch installation. At the end of this tutorial you will have a DRBD device that can be utilized as a SAN, an iSCSI target, a file server, or a database server.



Note: LINBIT support customers can skip Section 1 and utilize the package repositories.


LINBIT has hosted third node solutions available, please contact them at sales_us at linbit.com for more information.


 


Preface:



The setup is as follows:



  • Three servers: alpha, bravo, foxtrot

  • alpha and bravo are the primary and secondary local nodes

  • foxtrot is the third node which is on a remote network

  • Both alpha and bravo have interfaces on the 192.168.1.x network (eth0) for external connectivity.

  • A crossover link exists on alpha and bravo (eth1) for replication using 172.16.6.10 and .20

  • Heartbeat provides a virtual IP of 192.168.5.2 to communicate with the disaster recovery node located in a geographically diverse location


 


Section 1: Installing The Source


These steps need to be done on each of the 3 nodes.



Prerequisites:



  • make

  • gcc

  • glibc development libraries

  • flex scanner generator

  • headers for the current kernel


Enter the following at the command line as a privileged user to satisfy these dependencies:


apt-get install make gcc libc6 flex linux-headers-`uname -r` libc6-dev linux-kernel-headers


Once the dependencies are installed, download DRBD. The latest version can always be obtained at http://oss.linbit.com/drbd/. Currently, it is 8.3.



cd /usr/src/

wget http://oss.linbit.com/drbd/8.3/drbd-8.3.0.tar.gz


After the download is complete:



  • Uncompress DRBD

  • Enter the source directory

  • Compile the source

  • Install DRBD



tar -xzvf drbd-8.3.0.tar.gz

cd /usr/src/drbd-8.3.0/

make clean all

make install


Now load and verify the module:



modprobe drbd

cat /proc/drbd


version: 8.3.0 (api:88/proto:86-89)

GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@alpha, 2009-02-05 10:36:11


Once this has been completed on each of the three nodes, continue to next section.



 


Section 2: Heartbeat Configuration


Setting up a third node entails stacking DRBD on top of DRBD. A virtual IP is needed for the third node to connect to, for this we will set up a simple Heartbeat v1 configuration. This section will only be done on alpha and bravo.


Install Heartbeat:



apt-get install heartbeat


Edit the authkeys file:


vi /etc/ha.d/authkeys


auth 1
1 sha1 yoursupersecretpasswordhere

Once the file has been created, change the permissions on the file. Heartbeat will not start if this step is not followed.


chmod 600 /etc/ha.d/authkeys


Copy the authkeys file to bravo:


scp /etc/ha.d/authkeys bravo:/etc/ha.d/


Edit the ha.cf file:


vi /etc/ha.d/ha.cf


debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 1
deadtime 10
warntime 5
initdead 60
udpport 694
ucast eth0 192.168.1.10
ucast eth0 192.168.1.20
auto_failback off
node alpha
node bravo

Copy the ha.cf file to bravo:


scp /etc/ha.d/ha.cf bravo:/etc/ha.d/


Edit the haresources file, the IP created here will be the IP that our third node refers to.


vi /etc/ha.d/haresources


alpha IPaddr::192.168.5.2/24/eth0

Copy the haresources file to bravo:


scp /etc/ha.d/haresources bravo:/etc/ha.d/


Start the heartbeat service on both servers to bring up the virtual IP:


alpha:/# /etc/init.d/heartbeat start


bravo:/# /etc/init.d/heartbeat start


Heartbeat will bring up the new interface (eth0:0).


Note: It may take heartbeat up to one minute to bring the interface up.



alpha:/# ifconfig eth0:0


eth0:0 Link encap:Ethernet HWaddr 00:08:C7:DB:01:CC

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1


 


Section 3: DRBD Configuration


Configuration for DRBD is done via the drbd.conf file. This needs to be the same on all nodes (alpha, bravo, foxtrot). Please note that usage-count is set to yes, which means it will notify LINBIT that you have installed DRBD. No personal information is collected; see the DRBD documentation on usage-count for more information.


global { usage-count yes; }

resource data-lower {
  protocol C;
  net {
    shared-secret "LINBIT";
  }
  syncer {
    rate 12M;
  }

  on alpha {
    device /dev/drbd1;
    disk /dev/hdb1;
    address 172.16.6.10:7788;
    meta-disk internal;
  }

  on bravo {
    device /dev/drbd1;
    disk /dev/hdd1;
    address 172.16.6.20:7788;
    meta-disk internal;
  }
}

resource data-upper {
  protocol A;
  syncer {
    after data-lower;
    rate 12M;
    al-extents 513;
  }
  net {
    shared-secret "LINBIT";
  }
  stacked-on-top-of data-lower {
    device /dev/drbd3;
    address 192.168.5.2:7788; # IP provided by Heartbeat
  }

  on foxtrot {
    device /dev/drbd3;
    disk /dev/sdb1;
    address 192.168.5.3:7788; # Public IP of the backup node
    meta-disk internal;
  }
}

 


Section 4: Preparing The DRBD Devices


Now that the configuration is in place, create the metadata on alpha and bravo.



alpha:/usr/src/drbd-8.3.0# drbdadm create-md data-lower


Writing meta data...

initializing activity log

NOT initialized bitmap

New drbd meta data block successfully created.



bravo:/usr/src/drbd-8.3.0# drbdadm create-md data-lower


Writing meta data...

initialising activity log

NOT initialized bitmap

New drbd meta data block successfully created.


Now start DRBD on alpha and bravo:


alpha:/usr/src/drbd-8.3.0# /etc/init.d/drbd start


bravo:/usr/src/drbd-8.3.0# /etc/init.d/drbd start


Verify that the lower level DRBD devices are connected:



cat /proc/drbd


version: 8.3.0 (api:88/proto:86-89)

GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@alpha, 2009-02-05 10:36:11

0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r---

ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:19530844


Tell alpha to become the primary node:


NOTE: As the command states, this is going to overwrite any data on bravo: Now is a good time to go and grab your favorite drink.


alpha:/# drbdadm -- --overwrite-data-of-peer primary data-lower

alpha:/# cat /proc/drbd


version: 8.3.0 (api:88/proto:86-89)

GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@alpha, 2009-02-05 10:36:11

0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---

ns:3088464 nr:0 dw:0 dr:3089408 al:0 bm:188 lo:23 pe:6 ua:53 ap:0 ep:1 wo:b oos:16442556

[==>.................] sync'ed: 15.9% (16057/19073)M

finish: 0:16:30 speed: 16,512 (8,276) K/sec


After the data sync has finished, create the meta-data on data-upper on alpha, followed by foxtrot.


Note the resource is data-upper and the --stacked option is on alpha only.



alpha:~# drbdadm --stacked create-md data-upper


Writing meta data...

initialising activity log

NOT initialized bitmap

New drbd meta data block successfully created.

success



foxtrot:/usr/src/drbd-8.3.0# drbdadm create-md data-upper


Writing meta data...

initialising activity log

NOT initialized bitmap

New drbd meta data block successfully created.


Bring up the stacked resource, then make alpha the primary of data-upper:


alpha:/# drbdadm --stacked adjust data-upper


foxtrot:~# drbdadm adjust data-upper

foxtrot:~# cat /proc/drbd


version: 8.3.0 (api:88/proto:86-89)

GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@foxtrot, 2009-02-02 10:28:37

1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent A r---

ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:19530208


alpha:~# drbdadm --stacked -- --overwrite-data-of-peer primary data-upper

alpha:~# cat /proc/drbd


version: 8.3.0 (api:88/proto:86-89)

GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@alpha, 2009-02-05 10:36:11

0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r---

ns:19532532 nr:0 dw:1688 dr:34046020 al:1 bm:1196 lo:156 pe:0 ua:0 ap:156 ep:1 wo:b oos:0

1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent A r---

ns:14512132 nr:0 dw:0 dr:14512676 al:0 bm:885 lo:156 pe:32 ua:292 ap:0 ep:1 wo:b oos:5018200

[=============>......] sync'ed: 74.4% (4900/19072)M

finish: 0:07:06 speed: 11,776 (10,992) K/sec


Drink time again!


After the sync is complete, access your DRBD block device via /dev/drbd3. This will write to both local nodes and the remote third node. In your Heartbeat configuration you will use the "drbdupper" script to bring up your /dev/drbd3 device. Have fun!
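
For example, to put a filesystem on it and mount it (my addition; ext3 chosen only because it is used elsewhere on this page):

alpha:/# mkfs.ext3 /dev/drbd3
alpha:/# mount /dev/drbd3 /mnt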



DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.






If you ever get a split-brain (two nodes are in StandAlone and won't connect, or one is WFConnection and the other StandAlone - it's a split-brain!), do this on the node that has the outdated data:

drbdadm secondary <resource>
drbdadm -- --discard-my-data connect <resource>

and on the node that has the fresh data:

drbdadm --stacked connect <resource>

DRBD 3 machines stacked setup

This is copy/paste from http://www.howtoforge.com/drbd-8.3-third-node-replication-with-debian-etch plus a split-brain fixes.

WARNING: DO NOT do this setup, unless you'r OK with the speed to remote node. The max. speed you will get from drbd device is the speed you can push data to 3rd node.
--------------




DRBD 8.3 Third Node Replication With Debian Etch


Installation and Set Up Guide for DRBD 8.3 + Debian Etch


The Third Node Setup


by Brian Hellman


The recent release of DRBD 8.3 now includes The Third Node feature as a freely available component. This document will cover the basics of setting up a third node on a standard Debian Etch installation. At the end of this tutorial you will have a DRBD device that can be utilized as a SAN, an iSCSI target, a file server, or a database server.



Note: LINBIT support customers can skip Section 1 and utilize the package repositories.


LINBIT has hosted third node solutions available, please contact them at sales_us at linbit.com for more information.


 


Preface:



The setup is as follows:



  • Three servers: alpha, bravo, foxtrot

  • alpha and bravo are the primary and secondary local nodes

  • foxtrot is the third node which is on a remote network

  • Both alpha and bravo have interfaces on the 192.168.1.x network (eth0) for external connectivity.

  • A crossover link exists on alpha and bravo (eth1) for replication using 172.16.6.10 and .20

  • Heartbeat provides a virtual IP of 192.168.5.2 to communicate with the disaster recovery node located in a geographically diverse location


 


Section 1: Installing The Source


These steps need to be done on each of the 3 nodes.



Prerequisites:



  • make

  • gcc

  • glibc development libraries

  • flex scanner generator

  • headers for the current kernel


Enter the following at the command line as a privileged user to satisfy these dependencies:


apt-get install make gcc libc6 flex linux-headers-`uname -r` libc6-dev linux-kernel-headers


Once the dependencies are installed, download DRBD. The latest version can always be obtained at http://oss.linbit.com/drbd/. Currently, it is 8.3.



cd /usr/src/

wget http://oss.linbit.com/drbd/8.3/drbd-8.3.0.tar.gz


After the download is complete:



  • Uncompress DRBD

  • Enter the source directory

  • Compile the source

  • Install DRBD



tar -xzvf drbd-8.3.0.tar.gz

cd /usr/src/drbd-8.3.0/

make clean all

make install


Now load and verify the module:



modprobe drbd

cat /proc/drbd


version: 8.3.0 (api:88/proto:86-89)

GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@alpha, 2009-02-05 10:36:11


Once this has been completed on each of the three nodes, continue to next section.



 


Section 2: Heartbeat Configuration


Setting up a third node entails stacking DRBD on top of DRBD. A virtual IP is needed for the third node to connect to, for this we will set up a simple Heartbeat v1 configuration. This section will only be done on alpha and bravo.


Install Heartbeat:



apt-get install heartbeat


Edit the authkeys file:


vi /etc/ha.d/authkeys


auth 1
1 sha1 yoursupersecretpasswordhere

Once the file has been created, change the permissions on the file. Heartbeat will not start if this step is not followed.


chmod 600 /etc/ha.d/authkeys


Copy the authkeys file to bravo:


scp /etc/ha.d/authkeys bravo:/etc/ha.d/


Edit the ha.cf file:


vi /etc/ha.d/ha.cf


debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 1
deadtime 10
warntime 5
initdead 60
udpport 694
ucast eth0 192.168.1.10
ucast eth0 192.168.1.20
auto_failback off
node alpha
node bravo

Copy the ha.cf file to bravo:


scp /etc/ha.d/ha.cf bravo:/etc/ha.d/


Edit the haresources file, the IP created here will be the IP that our third node refers to.


vi /etc/ha.d/haresources


alpha IPaddr::192.168.5.2/24/eth0

Copy the haresources file to bravo:


scp /etc/ha.d/haresources bravo:/etc/ha.d/


Start the heartbeat service on both servers to bring up the virtual IP:


alpha:/# /etc/init.d/heartbeat start


bravo:/# /etc/init.d/heartbeat start


Heartbeat will bring up the new interface (eth0:0).


Note: It may take heartbeat up to one minute to bring the interface up.



alpha:/# ifconfig eth0:0


eth0:0 Link encap:Ethernet HWaddr 00:08:C7:DB:01:CC

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1


 


Section 3: DRBD Configuration


Configuration for DRBD is done via the drbd.conf file. This needs to be the same on all nodes (alpha, bravo, foxtrot). Please note that the usage-count is set to yes, which means it will notify Linbit that you have installed DRBD. No personal information is collected. Please see this page for more information :


global { usage-count yes; }

resource data-lower {
protocol C;
net {
shared-secret "LINBIT";
}
syncer {
rate 12M;
}

on alpha {
device /dev/drbd1;
disk /dev/hdb1;
address 172.16.6.10:7788;
meta-disk internal;
}

on bravo {
device /dev/drbd1;
disk /dev/hdd1;
address 172.16.6.20:7788;
meta-disk internal;
}
}

resource data-upper {
protocol A;
syncer {
after data-lower;
rate 12M;
al-extents 513;
}
net {
shared-secret "LINBIT";
}
stacked-on-top-of data-lower {
device /dev/drbd3;
address 192.168.5.2:7788; # IP provided by Heartbeat
}

on foxtrot {
device /dev/drbd3;
disk /dev/sdb1;
address 192.168.5.3:7788; # Public IP of the backup node
meta-disk internal;
}
}

 


Section 4: Preparing The DRBD Devices


Now that the configuration is in place, create the metadata on alpha and bravo.



alpha:/usr/src/drbd-8.3.0# drbdadm create-md data-lower


Writing meta data...

initializing activity log

NOT initialized bitmap

New drbd meta data block successfully created.



bravo:/usr/src/drbd-8.3.0# drbdadm create-md data-lower


Writing meta data...

initialising activity log

NOT initialized bitmap

New drbd meta data block successfully created.


Now start DRBD on alpha and bravo:


alpha:/usr/src/drbd-8.3.0# /etc/init.d/drbd start


bravo:/usr/src/drbd-8.3.0# /etc/init.d/drbd start


Verify that the lower level DRBD devices are connected:



cat /proc/drbd


version: 8.3.0 (api:88/proto:86-89)

GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@alpha, 2009-02-05 10:36:11

0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r---

ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:19530844


Tell alpha to become the primary node:


NOTE: As the command states, this is going to overwrite any data on bravo: Now is a good time to go and grab your favorite drink.


alpha:/# drbdadm -- --overwrite-data-of-peer primary data-lower

alpha:/# cat /proc/drbd


version: 8.3.0 (api:88/proto:86-89)

GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@alpha, 2009-02-05 10:36:11

0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---

ns:3088464 nr:0 dw:0 dr:3089408 al:0 bm:188 lo:23 pe:6 ua:53 ap:0 ep:1 wo:b oos:16442556

[==>.................] sync'ed: 15.9% (16057/19073)M

finish: 0:16:30 speed: 16,512 (8,276) K/sec


After the data sync has finished, create the meta-data on data-upper on alpha, followed by foxtrot.


Note the resource is data-upper and the --stacked option is on alpha only.



alpha:~# drbdadm --stacked create-md data-upper


Writing meta data...

initialising activity log

NOT initialized bitmap

New drbd meta data block successfully created.

success



foxtrot:/usr/src/drbd-8.3.0# drbdadm create-md data-upper


Writing meta data...

initialising activity log

NOT initialized bitmap

New drbd meta data block sucessfully created.


Bring up the stacked resource, then make alpha the primary of data-upper:


alpha:/# drbdadm --stacked adjust data-upper


foxtrot:~# drbdadm adjust data-upper

foxtrot:~# cat /proc/drbd


version: 8.3.0 (api:88/proto:86-89)

GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@foxtrot, 2009-02-02 10:28:37

1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent A r---

ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:19530208


alpha:~# drbdadm --stacked -- --overwrite-data-of-peer primary data-upper

alpha:~# cat /proc/drbd


version: 8.3.0 (api:88/proto:86-89)

GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@alpha, 2009-02-05 10:36:11

0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r---

ns:19532532 nr:0 dw:1688 dr:34046020 al:1 bm:1196 lo:156 pe:0 ua:0 ap:156 ep:1 wo:b oos:0

1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent A r---

ns:14512132 nr:0 dw:0 dr:14512676 al:0 bm:885 lo:156 pe:32 ua:292 ap:0 ep:1 wo:b oos:5018200

[=============>......] sync'ed: 74.4% (4900/19072)M

finish: 0:07:06 speed: 11,776 (10,992) K/sec


Drink time again!


After the sync is complete, access your DRBD block device via /dev/drbd3. This will write to both local nodes and the remote third node. In your Heartbeat configuration you will use the "drbdupper" script to bring up your /dev/drbd3 device. Have fun!



DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.






If you ever get a split-brain (two nodes are in StandAlone and won't connect, or one is WFConnection and the other StandAlone - it's a split-brain!), here is how to recover.
On the node with the outdated data do:

drbdadm secondary <resource>
drbdadm -- --discard-my-data connect <resource>

on the node that has the fresh data:
drbdadm --stacked connect <resource>
(drop --stacked if the split resource is not the stacked one)
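After both nodes reconnect, verify the recovery:

cat /proc/drbd

The node that discarded its data should pass through SyncTarget and end up Connected/UpToDate.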

PKGSRC NetBSD update/upgrade Howto

1. Fetch the pkgsrc:

1.1. SUP way:
sup -v /path/to/your/supfile

and this is short sample supfile:
nbsd# cat /root/sup-current
current release=pkgsrc host=sup2.fr.NetBSD.org hostbase=/home/sup/supserver \
base=/usr prefix=/usr backup use-rel-suffix compress delete

1.2. CVS way:
$ export CVSROOT="anoncvs@anoncvs.NetBSD.org:/cvsroot"
$ export CVS_RSH="ssh"
To fetch a specific pkgsrc stable branch from scratch, run:

$ cd /usr
$ cvs checkout -r pkgsrc-20xxQy -P pkgsrc
Where pkgsrc-20xxQy is the stable branch to be checked out, for example, “pkgsrc-2009Q1”

This will create the directory pkgsrc/ in your /usr/ directory and all the package source will be stored under /usr/pkgsrc/.

To fetch the pkgsrc current branch, run:

$ cd /usr
$ cvs checkout -P pkgsrc


2. Update the pkgsrc repository:

2.1. SUP way

sup -v /root/sup-current

2.2. CVS way:

$ export CVSROOT="anoncvs@anoncvs.NetBSD.org:/cvsroot"
$ export CVS_RSH="ssh"
$ cd /usr/pkgsrc
$ cvs update -dP

When updating pkgsrc, the CVS program keeps track of the branch you selected. But if you, for whatever reason, want to switch from the stable branch to the current one, you can do it by adding the option “-A” after the “update” keyword. To switch from the current branch back to the stable branch, add the “-rpkgsrc-2009Q3” option.
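For example, from /usr/pkgsrc:

$ cvs update -A -dP                # switch from the stable branch to current
$ cvs update -rpkgsrc-2009Q3 -dP   # switch back to the stable branch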



3. Updating a package:

cd /usr/pkgsrc/package/
make update

4. Update packages on a remote server. If you already have them installed, check which ones need an update.
Security checks first:
/usr/sbin/pkg_admin -K /var/db/pkg fetch-pkg-vulnerabilities

then do:
pkg_add -uu http://pkgserver/path/to/Pkg.tgz

this will update the package from the remote server together with all dependent packages!

some links:
http://imil.net/pkgin/

http://pkgsrc.se/pkgtools/pkg_rolling-replace

http://wiki.netbsd.org/tutorials/pkgsrc/pkg_comp_pkg_chk/


To install packages directly from an FTP or HTTP server, run the following commands in a Bourne-compatible shell (be sure to su to root first):

# PATH="/usr/pkg/sbin:$PATH"
# PKG_PATH="ftp://ftp.NetBSD.org/pub/pkgsrc/packages/OPSYS/ARCH/VERSIONS/All"
# export PATH PKG_PATH
# pkg_add package

OR directly:

# pkg_add http://...../

Ubuntu encrypted home - lvm way

1. Create the LVM partition (sdaXX):
# fdisk /dev/sda
Create one partition for root, one for swap, and leave the rest for home.

2. Create the physical volume.

# pvcreate /dev/sda3

3. Create the volume group and the logical volume (vgcreate is needed before lvcreate can work):
# vgcreate vg0 /dev/sda3
# lvcreate -n crypted-home -L 200G vg0
(you can leave free space if you want to be able to add additional partitions later)

4. Install needed tools
# aptitude -y install cryptsetup initramfs-tools hashalot lvm2
# modprobe dm-crypt
# modprobe dm-mod

5. Check for bad blocks (optional)
# /sbin/badblocks -c 10240 -s -w -t random -v /dev/vg0/crypted-home

6. Set up the crypted home partition with LUKS
# cryptsetup -y --cipher serpent-xts-essiv:sha256 --hash sha512 --key-size 512 -i 50000 luksFormat /dev/vg0/crypted-home
type YES in uppercase when asked to confirm!!

7. Open the created crypted partition
# cryptsetup luksOpen /dev/vg0/crypted-home home

8. Create filesystem on the crypted home device
# mke2fs -j -O dir_index,filetype,sparse_super /dev/mapper/home

9. Mount and copy home files.
# mount -t ext3 /dev/mapper/home /mnt
# cp -axv /home/* /mnt/
# umount /mnt

10. Set up the system to open/mount the crypted home.
Insert in /etc/fstab:

/dev/mapper/home /home ext3 defaults 1 2

After that, add an entry in /etc/crypttab:

home /dev/vg0/crypted-home none luks
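To check both entries without a full reboot (cryptdisks_start ships with Debian/Ubuntu's cryptsetup package):

# cryptdisks_start home
# mount /home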

NetBSD OS update/upgrade quick howto.

1. Fetch/Update the OS sources.
refs: NetBSD Docs (and NetBSD guide ; Fetching sources)

Fetch the source if you don't have it:
$ cd /usr
$ export CVS_RSH=ssh 
$ cvs -d anoncvs@anoncvs.NetBSD.org:/cvsroot co -r netbsd-5-0-2 -P src

Update the source if you already have it:
$ cd /usr/src
$ export CVS_RSH=ssh 
$ cvs update -dP

If you are fetching the sources from scratch use:
$ cd /usr
$ export CVS_RSH=ssh 
$ cvs -d anoncvs@anoncvs.NetBSD.org:/cvsroot co -r netbsd-5-1 -P src

Hint: If you are using 5-0 and want to update to 5-1, use
$ cvs update -r netbsd-5-1 -dP

2. Create obj dir and build the tools:
$ mkdir /usr/obj /usr/tools
$ cd /usr/src
$ ./build.sh -O /usr/obj -T /usr/tools -U -u tools

3. Compile brand new userland:
NetBSD page says: Please always refer to build.sh -h and the files UPDATING and BUILDING for details - it's worth it, there are many options that can be set on the command line or in /etc/mk.conf.
$ cd /usr/src
$ ./build.sh -O ../obj -T ../tools -U distribution

4. Compile brand New Kernel:
$ cd /usr/src
$ ./build.sh -O ../obj -T ../tools kernel=<KERNEL>

<KERNEL> is a kernel options file located in: /usr/src/sys/arch/amd64/conf/

I have XEN3_DOMU there that holds all my xen kernels compile options.
You can also find GENERIC and others there.
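For example, to build the xen DomU kernel mentioned above:

$ ./build.sh -O ../obj -T ../tools kernel=XEN3_DOMU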

5. Install Kernel

Installing the new kernel (copying it to the Dom0), rebooting (to ensure that the new kernel works) and installing the new userland are the final steps of the updating procedure:
$ cd /usr/obj/sys/arch/`uname -m`/compile/XEN3_DOMU/
$ scp netbsd Dom0 machine...

Go and change the kernel in the Dom0 to load the new one.
reboot the machine.

Or on native machines:
$ cd /usr/src
$ su
# mv /netbsd /netbsd.old
# mv /usr/obj/sys/arch/`uname -m`/compile/KERNEL/netbsd /
# shutdown -r now


6. Install new userland and reboot again to be sure it'll work. ;-)
After we've rebooted we are sure all new calls in the new userland will be handled by the new kernel.
Now we'll install the new userland.
$ cd /usr/src
$ su
# ./build.sh -O ../obj -T ../tools -U install=/ 
#reboot

7. Build a complete release so we can copy it on all other machines and upgrade with sysinst.
$ ./build.sh -O ../obj -T ../tools -U -u -x release
The resulting install sets will be in the /usr/obj/releasedir/ directory.



When you've tested on the package server, install/update on all other machines.


1. Make a backup
2. Fetch a new kernel and the binary sets from the release dir and store them /some/where/
3. Install the kernel (in XEN dom0)!
4. Install the sets except etc.tgz and xetc.tgz!!
   # cd /
   # pax -zrpe -f /some/where/set.tgz
   # ...
   # ...
5. Run etcupdate to merge important changes:
   # cd /
   # etcupdate -s /some/where/etc.tgz -s /some/where/xetc.tgz
6. Upgrade finished, time to reboot.

Backup xen lvm/image disks. xenBackup script.

Long time no write.

I'm trying to migrate all of my FreeBSDs to xen+netbsd. (I gave up on this OS. You can't release a STABLE that's not that stable. It's a long story but in short, I've had a sleepless night after deploying to production. The problem - when it gets real world load it hangs with kernel panic and no auto reset about every 5-15 mins. WTF? Devs asked me for a dump and told me that maybe they will find the problem. Sorry. That sucks and is not an option for a production system used by thousands of people. Goodbye FreeBSD, for at least 5 years.)

After successfully running xen for some time, it's time to think of an automated backup that takes care of everything, instead of writing short shell scripts for each xen backup.
I've made a quick search and found this xenBackup script that almost suits my needs.
I didn't like that it mounted the lvm read-only and didn't use snapshots.
The second thing I disliked was that it worked only with lvms, and I do have sparse xen images (for small machines that don't need quick disk access and have only 1-2 services running in memory).

I've modified the script and now the xenBackup script supports:
- creating a backup from lvm snapshots
- creating a backup from a disk.img file
- dynamic determination of the disk type and path ($hostname-disk for lvms and disk.img for sparse images) (BE WARNED: only -disk and disk.img disks will be backed up!)

I'm using tar, so I haven't tested it with rsync and rdiff-backup.
I'm using snapshots; I've never tested it with a read-only mounted lvm.

so, here is the code:

#!/bin/sh
#
#   Copyright John Quinn, 2008
#   Copyright Anton Valqkoff, 2010
#
#   This program is free software: you can redistribute it and/or modify
#   it under the terms of the GNU General Public License as published by
#   the Free Software Foundation, either version 3 of the License, or
#   (at your option) any later version.
#
#   This program is distributed in the hope that it will be useful,
#   but WITHOUT ANY WARRANTY; without even the implied warranty of
#   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#   GNU General Public License for more details.
#
#   You should have received a copy of the GNU General Public License
#   along with this program.  If not, see <http://www.gnu.org/licenses/>.

#
# xenBackup - Backup Xen Domains
#
#             Version:    1.0:     Created:  John D Quinn, http://www.johnandcailin.com/john
#             Version:    1.1:     Added file/lvm recognition. lvm snapshot:  Anton Valqkoff, http://blog.valqk.com/
#

# initialize our variables
domains="null"                           # the list of domains to backup
allDomains="null"                        # backup all domains?
targetLocation="/root/backup/"                    # the default backup target directory
mountPoint="/mnt/xen"                    # the mount point to use to mount disk areas
shutdownDomains=false                    # don't shutdown domains by default
quiet=false                              # keep the chatter down
backupEngine=tar                         # the default backup engine
useSnapshot=true                        # create a snapshot of the lvm and use it as the backup mount.
rsyncExe=/usr/bin/rsync                  # rsync executable
rdiffbackupExe=/usr/bin/rdiff-backup     # rdiff-backup executable
tarExe=/bin/tar                      # tar executable
xmExe=/usr/sbin/xm                       # xm executable
lvmExe=/sbin/lvm
mountExe=/bin/mount
grepExe=/bin/grep
awkExe=/usr/bin/awk
umountExe=/bin/umount
cutExe=/usr/bin/cut
egrepExe=/bin/egrep
purgeAge="null"                          # age at which to purge increments
globalBackupResult=0                     # success status of overall job
#valqk: xm list --long ns.hostit.biz|grep -A 3 device|grep vbd -A 2|grep uname|grep -v swap|awk '{print $2}'

# settings for logging (syslog)
loggerArgs=""                            # what extra arguments to the logger to use
loggerTag="xenBackup"                    # the tag for our log statements
loggerFacility="local3"                  # the syslog facility to log to

# trap user exit and cleanup
trap 'cleanup;exit 1' 1 2

cleanup()
{
   ${logDebug} "Cleaning up"
   #check if file or lvm.if lvm and -snap remove it.
   mountType=`${mountExe}|${grepExe} ${mountPoint}|${awkExe} '{print $1}'`;
   [ -f ${mountType} ] && mountType="file";
   cd / ; ${umountExe} ${mountPoint}
   if [ "${mountType}" != "file" ] && [ "${useSnapshot}" = "true" ]; then
      #let's make sure we are removing snapshot!
      if [ `${mountExe}|${grepExe} -snap|wc -l` -gt 0 ]; then
         ${lvmExe} lvremove -f ${mountType}
      fi
   fi


   # restart the domain
   if test ${shutdownDomains} = "true"
   then
      ${logDebug} "Restarting domain"
      ${xmExe} create ${domain}.cfg > /dev/null
   fi
}

# function to print a usage message and bail
usageAndBail() {
   cat << EOT
Usage: xenBackup [OPTION]...
Backup xen domains to a target area. Different backup engines may be specified to
produce a tarfile, an exact mirror of the disk area or a mirror with incremental backup.

   -d      backup only the specified DOMAINs (comma separated list)
   -t      target LOCATION for the backup e.g. /tmp or root@www.example.com:/tmp
           (not used for tar engine)
   -a      backup all domains
   -s      shutdown domains before backup (and restart them afterwards)
   -q      run in quiet mode, output still goes to syslog
   -e      backup ENGINE to use, either tar, rsync or rdiff
   -p      purge increments older than TIME_SPEC. this option only applies
           to rdiff, e.g. 3W for 3 weeks. see "man rdiff-backup" for
           more information

Example 1
   Backup all domains to the /tmp directory
   $ xenBackup -a -t /tmp

Example 2
   Backup domain: "wiki" using rsync to directory /var/xenImages on machine backupServer
   $ xenBackup -e rsync -d wiki -t root@backupServer:/var/xenImages

Example 3
   Backup domains "domainOne" and "domainTwo" using rdiff, purging increments older than 5 days
   $ xenBackup -e rdiff -d "domainOne, domainTwo" -p 5D

EOT

   exit 1;
}

# parse the command line arguments
while getopts p:e:qsad:t:h o
do     case "$o" in
        q)     quiet="true";;
        s)     shutdownDomains="true";;
        a)     allDomains="true";;
        d)     domains="$OPTARG";;
        t)     targetLocation="$OPTARG";;
        e)     backupEngine="$OPTARG";;
        p)     purgeAge="$OPTARG";;
        h)     usageAndBail;;
        [?])   usageAndBail
       esac
done

# if quiet don't output logging to standard error
if test ${quiet} = "false"
then
   loggerArgs="-s"
fi

# setup logging subsystem. using syslog via logger
logCritical="logger -t ${loggerTag} ${loggerArgs} -p ${loggerFacility}.crit"
logWarning="logger -t ${loggerTag} ${loggerArgs} -p ${loggerFacility}.warning"
logDebug="logger -t ${loggerTag} ${loggerArgs} -p ${loggerFacility}.debug"

# make sure only root can run our script
test $(id -u) = 0 || { ${logCritical} "This script must be run as root"; exit 1; }

# make sure that the guest manager is available
test -x ${xmExe} || { ${logCritical} "xen guest manager (${xmExe}) not found"; exit 1; }

# assemble the list of domains to backup
if test ${allDomains} = "true"
then
   domainList=`${xmExe} list | cut -f1 -d" " | egrep -v "Name|Domain-0"`
else
   # make sure we've got some domains specified
   if test "${domains}" = "null"
   then
      usageAndBail
   fi

   # create the domain list by mapping commas to spaces
   domainList=`echo ${domains} | tr -d " " | tr , " "`
fi

# function to do a "rdiff-backup" of domain
backupDomainUsingrdiff() {
   domain=$1
   test -x ${rdiffbackupExe} || { ${logCritical} "rdiff-backup executable (${rdiffbackupExe}) not found"; exit 1; }

   if test ${quiet} = "false"
   then
      verbosity="3"
   else
      verbosity="0"
   fi

   targetSubDir=${targetLocation}/${domain}.rdiff-backup.mirror

   # make the targetSubDir if it doesn't already exist
   mkdir ${targetSubDir} > /dev/null 2>&1
   ${logDebug} "backing up domain ${domain} to ${targetSubDir} using rdiff-backup"

   # rdiff-backup to the target directory
   ${rdiffbackupExe} --verbosity ${verbosity} ${mountPoint}/ ${targetSubDir}
   backupResult=$?

   # purge old increments
   if test ${purgeAge} != "null"
   then
      # purge old increments
      ${logDebug} "purging increments older than ${purgeAge} from ${targetSubDir}"
      ${rdiffbackupExe} --verbosity ${verbosity} --force --remove-older-than ${purgeAge} ${targetSubDir}
   fi

   return ${backupResult}
}

# function to do a "rsync" backup of domain
backupDomainUsingrsync() {
   domain=$1
   test -x ${rsyncExe} || { ${logCritical} "rsync executable (${rsyncExe}) not found"; exit 1; }

   targetSubDir=${targetLocation}/${domain}.rsync.mirror

   # make the targetSubDir if it doesn't already exist
   mkdir ${targetSubDir} > /dev/null 2>&1
   ${logDebug} "backing up domain ${domain} to ${targetSubDir} using rsync"

   # rsync to the target directory
   ${rsyncExe} -essh -avz --delete ${mountPoint}/ ${targetSubDir}
   backupResult=$?

   return ${backupResult}
}

# function to do a "tar" backup of domain
backupDomainUsingtar ()
{
   domain=$1

   # make sure we can write to the target directory
   test -w ${targetLocation} || { ${logCritical} "target directory (${targetLocation}) is not writeable"; exit 1; }

   targetFile=${targetLocation}/${domain}.`date '+%d.%m.%Y'`.$$.tar.gz
   ${logDebug} "backing up domain ${domain} to ${targetFile} using tar"

   # tar to the target directory
   cd ${mountPoint}

   ${tarExe} pcfz ${targetFile} * > /dev/null
   backupResult=$?

   return ${backupResult}
}

# backup the specified domains
for domain in ${domainList}
do
   ${logDebug} "backing up domain: ${domain}"
   [ `${xmExe} list ${domain}|wc -l` -lt 1 ] && { echo "Fatal ERROR!!! ${domain} does not exist or is not running! Exiting."; exit 1; }

   # make sure that the domain is shutdown if required
   if test ${shutdownDomains} = "true"
   then
      ${logDebug} "shutting down domain ${domain}"
      ${xmExe} shutdown -w ${domain} > /dev/null
   fi

   # unmount mount point if already mounted
   umount ${mountPoint} > /dev/null 2>&1

   #inspect domain disks per domain. get only -disk or disk.img.
   #if file:// mount the xen disk read-only, umount after.
   #if lvm, create a snapshot, mount/umount/erase it.
   xenDiskStr=`${xmExe} list --long ${domain}|${grepExe} -A 3 device|${grepExe} vbd -A 2|${grepExe} uname|${grepExe} -v swap|${awkExe} '{print $2}'|${egrepExe} 'disk.img|-disk'`
   xenDiskType=`echo ${xenDiskStr}|${cutExe} -f1 -d:`;
   xenDiskDev=`echo ${xenDiskStr}|${cutExe} -f2 -d:|${cutExe} -f1 -d')'`;
   test -r ${xenDiskDev} || { ${logCritical} "xen disk area not readable. are you sure that the domain \"${domain}\" exists?"; exit 1; }
   #valqk: if the domain uses a file.img - mount it ro (loop allows mounting the file twice. wtf!?)
   if [ "${xenDiskType}" = "file" ]; then
      ${logDebug} "Mounting file://${xenDiskDev} read-only to ${mountPoint}"
      ${mountExe} -oloop ${xenDiskDev} ${mountPoint} || { ${logCritical} "mount failed, does mount point (${mountPoint}) exist?"; exit 1; }
      ${mountExe} -oremount,ro ${mountPoint} || { ${logCritical} "mount failed, does mount point (${mountPoint}) exist?"; exit 1; }
   fi
   if [ "${xenDiskType}" = "phy" ] ; then
      if [ "${useSnapshot}" = "true" ]; then
         vgName=`${lvmExe} lvdisplay -c |${grepExe} ${domain}-disk|${grepExe} disk|${cutExe} -f 2 -d:`;
         lvSize=`${lvmExe} lvdisplay ${xenDiskDev} -c|${cutExe} -f7 -d:`;
         lvSize=$((${lvSize}/2/100*15)); # 15% size of lvm in kilobytes
         ${lvmExe} lvcreate -s -n ${vgName}/${domain}-snap -L ${lvSize}k ${xenDiskDev} || { ${logCritical} "creation of snapshot for ${xenDiskDev} failed. exiting."; exit 1; }
         ${mountExe} -r /dev/${vgName}/${domain}-snap ${mountPoint} || { ${logCritical} "mount failed, does mount point (${mountPoint}) exist?"; exit 1; }
      else
         ${mountExe} -r ${xenDiskDev} ${mountPoint}
      fi
   fi

   # do the backup according to the chosen backup engine
   backupDomainUsing${backupEngine} ${domain}

   # make sure that the backup was successful
   if test $? -ne 0
   then
      ${logCritical} "FAILURE: error backing up domain ${domain}"
      globalBackupResult=1
   else
      ${logDebug} "SUCCESS: domain ${domain} backed up"
   fi
     
   # clean up
   cleanup;
done
if test ${globalBackupResult} -eq 0
then
   ${logDebug} "SUCCESS: backup of all domains completed successfully"
else
   ${logCritical} "FAILURE: backup completed with some failures"
fi

exit ${globalBackupResult}

Setup SVN repositories only for specified users over ssh. OpenSSH limit only one command execution.

Just to blog this. I'll need it in future.
If you have an svn repositories server and you are using svn+ssh for checkouts and all svn actions, you will want users to have access only to predefined repos and not to any shell or anything else.
I've done this by making symlinks in their homes and using an authorized_keys file that looks like this:

command="svnserve -t --tunnel-user=user -r /home/user",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAAB3Nz1...KEY HERE....

This way you can lock them to running only svnserve, and they'll be able to check out only what's in their home dirs.

If you're not familiar with details - eg. how to generate keys, what is authorized_keys etc, I stole this from here: http://ingomueller.net/node/331 - read more there.

Of course you have to keep your svnserve up to date and pray there are no vulns in it, otherwise users can hack you :-)
But hey, you know the owners of the keys, don't you? :-)
Got my point? ;-)

Q&A for apache in debian

Q: Why does the Apache web server in Debian have the 'It works!' page as its default host?
A: Because after you have set up a complex VirtualHost configuration for half an hour or more (yes, there can be such), it's nice to see that 'It worked!'
--answered by valqk. :-D

Debian HP SmartArray RAID monitoring.

You need to install 2 utils to monitor and query your smart array:

apt-get install arrayprobe cpqarrayd

cpqarrayd is a daemon that logs events from the controller (thanks velin); arrayprobe is the cli tool.

More links on the topic:
source I've got this from.
driver and utils page.

if you have faulty drive

hope that helps.

UPDATE:

In squeeze there is no cpqarrayd and arrayprobe is not that good.
You can use the hp tools provided in debian packages.
Simply add this source:


deb http://downloads.linux.hp.com/SDR/downloads/ProLiantSupportPack/Debian/ squeeze/current non-free

then
#> apt-get update && apt-get install hpacucli

This is the way this CLI is being used: hpacucli usage
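Two commands I find handy for a quick look at the controller and the arrays (see hpacucli's built-in help for the full syntax):

#> hpacucli ctrl all show config
#> hpacucli ctrl all show status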
p.s. I haven't figured out the monitoring part yet. hp-health is something I've read about but haven't tested yet.

Protect yourself from accidentally halting a server.

In Short: use molly-guard (debian name)
While reading the 'My 10 UNIX Command Line Mistakes' article I saw the 'halting the wrong machine' mistake.
It's nasty to halt some server instead of your local desktop.
I use molly-guard (on Debian servers - not available in FreeBSD; don't know about other linuxes - any comments?) to protect myself from this kind of mistake.
It modifies the halt/shutdown scripts so that they ask you for the hostname of the server before shutting down, if run from an SSH session.
#>apt-get install molly-guard

when installed if you try to shutdown or reboot:
storm:/home/valqk# halt
W: molly-guard: SSH session detected!
Please type in hostname of the machine to reboot: ^C
Good thing I asked; I won't halt storm ...
W: aborting reboot due to 30-query-hostname exiting with code 1.

phew!
molly-guard saved the world for me again! :-)
have a nice Friday evening!
cheers.

How to enable new NetBSD ffs WAPBL feature? How to extend ffs size?

How to enable/use WAPBL in netbsd 5.0?


  1. you MUST have options WAPBL in your kernel (it's there in most archs)

  2. mount the desired filesystem with -o log (or add rw,log to its options in /etc/fstab) - that's all. The log will be created automatically when this option is in effect. See the example below.
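For example (device names and mount point are illustrative):

# mount -o log /dev/wd0e /data

or in /etc/fstab:

/dev/wd0a / ffs rw,log 1 1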



(Source http://broadcast.oreilly.com/2009/05/netbsd-wapbl.html )

How to extend ffs size?

According to my research you can't do this at the moment.
Can anyone correct me and make me happy?

FreeBSD jails: how to login /jexec JID SHELL/ quickly in a jail by name (jlog command)?

Have you ever wondered why the heck you write jls then jexec JID /bin/csh?
I got sick of this a few years ago and I wrote a tiny little script that makes my life easier every day.
Be warned, there are a few cases when you'll still have to look at your jids, but the script works with a jid too.
(for example when you have a hung jail that won't shut down /stop/ - happens to me pretty often and this has been reported as a non-critical bug for years...)

How does it work?
Let's pretend that we have a jail named: 'mailserver.valqk.com'. Then you simply type this to get in the mailserver:
#>jlog mail
Logging in to mailserver.valqk.com
mailserver#                           

It's that easy. Also you can add a preferred custom shell for the session after the jail (or partial) name.

What does the script itself look like?
There it goes:
#!/bin/sh
[ -z "$1" ] && echo "No jail specified." && exit 1;
[ -z "$2" ] && loginSHELL="/bin/tcsh" || loginSHELL="$2"
jName=$1;
# head -n 1 guards against more than one jail matching the (partial) name
jID=`jls | grep "$jName" | head -n 1 | awk '{print $1}'`
jRealName=`jls | grep "$jName" | head -n 1 | awk '{print $3}'`
[ -z "$jID" ] && echo "No such jail name $jName!" && exit 1;
echo "Logging in to $jRealName"
jexec $jID $loginSHELL
please feel free to use, comment, improve this script! If you make any improvements, pls tell me!
I'll definitely add changes if I like them!!!

Xen: firewall DomU from Dom0

Have you ever wondered how to force some firewall rules on a xen DomU, so that the DomU root won't be able to use some ports etc.?
Well, the only proper way is to firewall DomU from the Dom0 machine.
Here is a way to do it.
This script is just an example. It should be made more universal so it can be applied to ALL of your DomUs for their protection :-) or for logging specific traffic.
#!/bin/bash
vifname=$1;
/sbin/iptables -N vps
#outbound traffic redirect to vps - a per DomU chain.
/sbin/iptables -I FORWARD -m physdev  --physdev-out peth0 --physdev-in $vifname -j vps
#log some of the traffic (multiport is needed to match a list of ports)
/sbin/iptables -A "vps" -p tcp -m multiport --dports 80,110,113 -j LOG --log-level 4 --log-prefix '*DomUNameHere-shows-in-logs*'
#allow some ports
/sbin/iptables -A "vps" -p tcp -m tcp --dport 20 -j RETURN
/sbin/iptables -A "vps" -p tcp -m tcp --dport 21 -j RETURN
/sbin/iptables -A "vps" -p tcp -m tcp --dport 22 -j RETURN
/sbin/iptables -A "vps" -p tcp -m tcp --dport 80 -j RETURN
/sbin/iptables -A "vps" -p tcp -m tcp --dport 443 -j RETURN
/sbin/iptables -A "vps" -p tcp -m tcp --dport 6666 -j RETURN
/sbin/iptables -A "vps" -p tcp -m tcp --dport 6667 -j RETURN
/sbin/iptables -A "vps" -p tcp -m tcp --dport 6668 -j RETURN
/sbin/iptables -A "vps" -p tcp -m tcp --dport 6669 -j RETURN
/sbin/iptables -A "vps" -p udp -m udp --dport 53 -j RETURN
#allow established connections initiated from inside the DomU to come back in
/sbin/iptables -A "vps" -p tcp -m state --state RELATED,ESTABLISHED -j RETURN
#drop all other traffic.
/sbin/iptables -A "vps" -p tcp -j DROP
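To apply the rules, run the script on the Dom0 with the DomU's vif interface name as its only argument (the script name fw-domu.sh and the vif name are just examples; find the real one with ip link or brctl show on the Dom0):

#> ./fw-domu.sh vif5.0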