Protect yourself from accidentally halting a server.

In short: use molly-guard (the Debian package name).
While reading my "10 Unix command line mistakes" I saw the halting-the-wrong-machine mistake.
It's nasty to halt a server instead of your local desktop.
I use molly-guard (on Debian servers - not available in FreeBSD; I don't know about other Linux distributions - any comments?) to protect myself from this kind of mistake.
It wraps the halt/shutdown commands and asks you for the hostname of the server before shutting down, if you're in an SSH session.
#>apt-get install molly-guard

Once installed, when you try to shut down or reboot:
storm:/home/valqk# halt
W: molly-guard: SSH session detected!
Please type in hostname of the machine to reboot: ^C
Good thing I asked; I won't halt storm ...
W: aborting reboot due to 30-query-hostname exiting with code 1.

Phew!
molly-guard saved the world for me again! :-)
Have a nice Friday evening!
Cheers.

How to enable the new NetBSD ffs WAPBL feature? How to extend an ffs size?

How to enable/use WAPBL in NetBSD 5.0?


  1. you MUST have 'options WAPBL' in your kernel (it's there in the GENERIC kernel of most archs)

  2. mount the desired filesystem with -o log (or add rw,log in /etc/fstab) - that's all. The log will be created automatically when this option is in effect (see the sketch right below).
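
A minimal sketch of step 2 (the device and mount point are examples; use your own):
#> mount -o log /dev/wd0e /data
or make it permanent with an /etc/fstab line:
/dev/wd0e /data ffs rw,log 1 2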



(Source: http://broadcast.oreilly.com/2009/05/netbsd-wapbl.html)

How to extend ffs size?

According to my research you can't do this at the moment.
Can anyone correct me and make me happy?

FreeBSD jails: how to quickly log in /jexec JID SHELL/ to a jail by name (jlog command)?

Have you ever wondered why the heck you have to type jls and then jexec JID /bin/csh?
I got sick of this a few years ago and wrote a tiny script that makes my life easier every day.
Be warned, there are a few cases when you'll still have to look at your jids, but the script works with a jid too.
(for example when you have a hung jail that won't shut down /stop/ - this happens to me pretty often and has been reported as a non-critical bug for years...)

How does it work?
Let's pretend we have a jail named 'mailserver.valqk.com'. Then you simply type this to get into the mail server:
#>jlog mail
Logging in to mailserver.valqk.com
mailserver#                           

It's that easy. You can also add a preferred shell for the session after the jail name (or part of it), as shown below.
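
For example, to get a Bourne shell instead of the script's default tcsh:
#> jlog mail /bin/sh
Logging in to mailserver.valqk.com
mailserver#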

What does the script look like?
Here it goes:
#!/bin/sh
# jlog: log in to a jail by (partial) name or by jid.
[ -z "$1" ] && echo "No jail specified." && exit 1
# Optional second argument overrides the default shell.
[ -z "$2" ] && loginSHELL="/bin/tcsh" || loginSHELL="$2"
jName="$1"
# Take the first jls line matching the given name (or jid).
jID=`jls | grep "$jName" | head -n 1 | awk '{print $1}'`
jRealName=`jls | grep "$jName" | head -n 1 | awk '{print $3}'`
[ -z "$jID" ] && echo "No such jail name $jName!" && exit 1
echo "Logging in to $jRealName"
jexec "$jID" "$loginSHELL"
Please feel free to use, comment on, and improve this script! If you make any improvements, please tell me!
I'll definitely add the changes if I like them!!!

Xen: firewall DomU from Dom0

Have you ever wondered how to force firewall rules on a Xen DomU, so that the DomU's root can't use certain ports, etc.?
Well, the only proper way is to firewall the DomU from the Dom0 machine.
Here is a way to do it.
This script is just an example. It should be made more universal, and it can be applied to ALL of your DomUs for their protection :-) or for logging specific traffic.
#!/bin/bash
vifname="$1"
# create a per-DomU chain
/sbin/iptables -N vps
# send the DomU's outbound traffic through the vps chain
/sbin/iptables -I FORWARD -m physdev --physdev-out peth0 --physdev-in "$vifname" -j vps
# log some of the traffic
/sbin/iptables -A vps -p tcp -m multiport --dports 80,110,113 -j LOG --log-level 4 --log-prefix '*DomUNameHere-shows-in-logs*'
# allow some ports
/sbin/iptables -A vps -p tcp -m tcp --dport 20 -j RETURN
/sbin/iptables -A vps -p tcp -m tcp --dport 21 -j RETURN
/sbin/iptables -A vps -p tcp -m tcp --dport 22 -j RETURN
/sbin/iptables -A vps -p tcp -m tcp --dport 80 -j RETURN
/sbin/iptables -A vps -p tcp -m tcp --dport 443 -j RETURN
/sbin/iptables -A vps -p tcp -m tcp --dport 6666 -j RETURN
/sbin/iptables -A vps -p tcp -m tcp --dport 6667 -j RETURN
/sbin/iptables -A vps -p tcp -m tcp --dport 6668 -j RETURN
/sbin/iptables -A vps -p tcp -m tcp --dport 6669 -j RETURN
/sbin/iptables -A vps -p udp -m udp --dport 53 -j RETURN
# let established/related connections from inside the DomU go back through
/sbin/iptables -A vps -m state --state RELATED,ESTABLISHED -j RETURN
# drop all other traffic
/sbin/iptables -A vps -j DROP
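
A hypothetical invocation, assuming the script is saved as fw-domu.sh and the DomU's backend interface in Dom0 is vif1.0 (check with xm network-list or ifconfig):
#> ./fw-domu.sh vif1.0
#> /sbin/iptables -L vps -n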

Setting up GRUB to boot from both disks of mirrored RAID

copy/paste from: http://grub.enbug.org/MirroringRAID

Many people use mirrored RAID (also known as 'RAID 1') to protect themselves against data loss caused by hard disk failure. Sometimes, you even want GRUB to boot from the secondary hard disk in case the primary fails to keep the system up and running. This is however not as easy as one might think...

GRUB keeps track of the hard disks currently available on your system; on most distributions you can find this information in /boot/grub/device.map. You might have a file like this:

hopper:~# cat /boot/grub/device.map
(hd0) /dev/sda
(hd1) /dev/sdb

Of course you can install GRUB to /dev/sdb (which is hd1), but obviously GRUB will be confused if /dev/sda fails and hd1 becomes hd0. Most likely, it will complain about a failing hard disk at boot time:

GRUB Hard Disk Error

In this case, you want to install GRUB to /dev/sdb and have sdb also mapped to hd0:

hopper:~# cat /boot/grub/device.map
(hd0) /dev/sda
(hd0) /dev/sdb
hopper:~# grub-install /dev/sdb
The drive (hd0) is defined multiple times in the device map /boot/grub/device.map

GRUB doesn't accept this duplicate definition (which is indeed incorrect), so you need to configure things by hand:

hopper:~# grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... yes
Checking if "/boot/grub/stage2" exists... yes
Checking if "/boot/grub/e2fs_stage1_5" exists... yes
Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded.
succeeded
Running "install /boot/grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/boot/grub/stage2 /boot/grub/menu.lst"... succeeded
Done.

grub> quit

Now, /dev/sda and /dev/sdb are configured as hd0 and the system remains bootable if /dev/sda fails.

Assumptions about partitions

The above information only works if your boot filesystem can be found on both /dev/sda1 and /dev/sdb1. If you have /boot on e.g. /dev/sda5 and /dev/sdb5, you'll have to replace root (hd0,0) with something more applicable for your specific configuration.
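
For example, if /boot is on /dev/sdb5: GRUB legacy counts partitions from zero, so sdb5 becomes (hd0,4) (a sketch):

grub> device (hd0) /dev/sdb
grub> root (hd0,4)
grub> setup (hd0)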

Howto: migrate Linux (Debian lenny) from a single disk to two mirrored/LVM-ed disks?

Alright.
I've got a server (actually my desktop testing machine) with two brand new 1T disks installed.
I'm going to set up the disks with 3 partitions, like this:
1. swap (we really don't need abstractions just for keeping swap)
2. a boot partition in md raid1 (grub2 really sux, so I'm staying with GRUB legacy, which can't boot from LVM...)
3. all other space in md raid1 with LVM on top.
It's a good idea to use LVM because you can always add another disk and you can also make snapshots... in short, you have more fun allocating space.


1. partition the two disks identically:
#> fdisk -l /dev/sdb
Device Boot Start End Blocks Id System
/dev/sdb1 1 974 7823623+ 82 Linux swap / Solaris
/dev/sdb2 975 1461 3911827+ fd Linux raid autodetect
/dev/sdb3 1462 121601 965024550 fd Linux raid autodetect

(yes, I know the boot partition is quite big, but there is a lot of space and I prefer having more room to wondering wtf I've done... that has happened a few times, of course :-D)
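
To make the second disk match, you can clone the partition table instead of re-typing it in fdisk (a sketch, assuming /dev/sdc is the second, still-empty disk):
#> sfdisk -d /dev/sdb | sfdisk /dev/sdc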


2. create raids.
#> mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
and activate it:
#> mdadm --readwrite /dev/md0
make sure it's syncing:
#> cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc2[1] sdb2[0]
3911744 blocks [2/2] [UU]
[=>...................] resync = 5.2% (206848/3911744) finish=0.5min speed=103424K/sec

do the same for the LVM raid partitions....
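
Following the layout above, that would be (a sketch):
#> mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb3 /dev/sdc3
#> mdadm --readwrite /dev/md1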


3. format the boot partition (I'll use ext3) and copy the boot files in there.
#> mkfs.ext3 /dev/md0
#> mount /dev/md0 /mnt/
#> cd /mnt/
#> cp -a /boot/* .
#> cd grub
WARNING: sda2 is assumed to be your current root partition.
#> sed -i'' -e 's/\/boot\//\//g' -e 's/sda2/mapper\/1tb-root/g' menu.lst
#> cd /; umount /mnt


4. create a physical volume on md1.
#> pvcreate /dev/md1
Physical volume "/dev/md1" successfully created


5. create the volume group - my vg name is 1tb
#> vgcreate -A y 1tb /dev/md1
Volume group "1tb" successfully created


6. then add a root logical volume, mkfs.ext3 it, mount it, and copy the currently running root system into the new root partition...
#> lvcreate -A y -L 30G -Z y -n root 1tb
#> mkfs.ext3 /dev/1tb/root
#> mount /dev/1tb/root /mnt
#> cd /mnt
#> cp -a {/bin,/cdrom,/emul,/etc,/home,/initrd*,/lib,/lib32,/lib64,/media,/opt,/root,/sbin,/selinux,/srv,/tmp,/usr,/var,/vmlinuz*} .
#> mkdir dev proc sys mnt misc boot
#> cd etc
#> sed -i'' -e 's/sda2/mapper\/1tb-root/g' fstab
WARNING: There is a nasty bug in the initramfs tools, described here: http://www.mail-archive.com/debian-kernel@lists.debian.org/msg32272.html
You MUST set root to /dev/mapper/VGNAME-LVNAME, otherwise you won't get LVM support in your initramfs.
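The kernel line in menu.lst should therefore look something like this (the kernel version is only an example - use whatever you run):
kernel /vmlinuz-2.6.26-2-amd64 root=/dev/mapper/1tb-root ro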
#> echo "/dev/md0 /boot ext3 defaults 0 0" >> fstab
#> mount -o bind /dev /mnt/dev
#> mount -o bind /proc /mnt/proc
#> cp -a /boot/ /mnt/boot/
#> chroot /mnt
#> update-initramfs -u -t -k `uname -r`
#> exit
Reboot the machine and edit the GRUB menu by hand so it boots from (hd0,0) as /boot and loads /dev/mapper/1tb-root as root.
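From the GRUB command line the equivalent is roughly this (a sketch; the kernel and initrd file names are examples):
grub> root (hd0,0)
grub> kernel /vmlinuz-2.6.26-2-amd64 root=/dev/mapper/1tb-root ro
grub> initrd /initrd.img-2.6.26-2-amd64
grub> boot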
Log in as root and run:
#> cd /boot; mkdir oldrootfs; mv * oldrootfs; mv -f oldrootfs/boot/* .
then edit grub/menu.lst to have / instead of /boot/ in the paths.
run:
#> grub-install --root-directory=/ "(hd0)"
(as described in /usr/share/doc/grub/README.Debian.gz; --root-directory is where the boot partition lives on hd0 - if you pass /boot/, then on the live system you'll end up with /boot(md0)/boot/)...
Reboot again and it should all be OK now.

7. Install grub on both disks as described in the mirrored-RAID section above.
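
For this layout (/boot is the second partition, i.e. (hd0,1)) that boils down to (a sketch):
grub> device (hd0) /dev/sdb
grub> root (hd0,1)
grub> setup (hd0)
grub> device (hd0) /dev/sdc
grub> root (hd0,1)
grub> setup (hd0)
grub> quit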

8. Reboot and disable the first disk in the BIOS. The system should boot normally, and you should see that you are using the LVM root partition.