piece of a steel mind - valqk /steamroller/ :-)

tin stuff

Tuesday, November 25. 2014

Screencast with ffmpeg

*nix Source: http://www.upubuntu.com/2012/10/some-useful-ffmpeg-commands.html

Some Useful FFMPEG Commands (Screencasting, Rotate Video, Add Logo, etc.)

In this tutorial we will see some useful FFMPEG commands that you can use on Ubuntu/Linux Mint to make screencasting videos, rotate videos, add logo/text watermarks to a video, insert shapes, and so on.

To install ffmpeg and some other packages on Ubuntu/Linux Mint, open the terminal and run these commands:

sudo apt-get install ubuntu-restricted-extras

sudo apt-get install ffmpeg x264

sudo apt-get install frei0r-plugins mjpegtools

Note: The file formats used in this tutorial are selected randomly and you can set any other extension of your choice.

1. Screencasting

To record your screen with FFMPEG, you can use this command:

ffmpeg -f x11grab -follow_mouse 100 -r 25 -s vga -i :0.0 filename.avi

This command will record the region around your mouse cursor as you move it over the screen. Press Ctrl+C to stop recording. If you want to set a fixed screen resolution for the recorded video, you can use this ffmpeg command:

ffmpeg -f x11grab -s 800x600 -r 25 -i :0.0 -qscale 5 filename.avi
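For repeated recordings, the invocation above can be wrapped in a small helper. A sketch (hypothetical function name and defaults; it only echoes the ffmpeg command line so it can be sanity-checked on a machine without X11 or ffmpeg — drop the echo to actually record):

```shell
# Sketch: build the screen-recording command from size/fps/output arguments.
# The echo makes this a dry run; remove it to start recording for real.
record_screen () {
    size=${1:-800x600}       # capture resolution
    fps=${2:-25}             # frame rate
    out=${3:-screencast.avi} # output file
    echo ffmpeg -f x11grab -s "$size" -r "$fps" -i :0.0 -qscale 5 "$out"
}

record_screen 1280x720 30 demo.avi
```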

To show the region that will be recorded while moving your mouse pointer, use this command:

ffmpeg -f x11grab -follow_mouse centered -show_region 1 -r 25 -s vga -i :0.0 filename.avi

If you want to record in fullscreen with better video quality (HD), you can use this command:

ffmpeg -f x11grab -s wxga -r 25 -i :0.0 -sameq video.mp4

Here is a video example created with the latter command:



2. Add Audio To A Static Picture

If you want to add music to a static picture with ffmpeg, run this command from the terminal:

ffmpeg -i audio.mp3 -loop_input -f image2 -i file.jpg -t 188 output.mp4

3. Add Image Watermarks to A Video

To add an image to a video using ffmpeg, you can use one of these commands:

Picture Location: Top Left Corner

ffmpeg -i input.avi -vf "movie=file.png [watermark]; [in][watermark] overlay=10:10 [out]" output.flv

Here is an example:



Picture Location: Top Right Corner

ffmpeg -i input.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=main_w-overlay_w-10:10 [out]" output.flv

Picture Location: Bottom Left Corner

ffmpeg -i input.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=10:main_h-overlay_h-10 [out]" output.flv

Picture Location: Bottom Right Corner

ffmpeg -i input.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=main_w-overlay_w-10:main_h-overlay_h-10 [out]" output.flv
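The four commands differ only in the x:y part of the overlay expression, so a helper can map a corner keyword to the right expression. A sketch (hypothetical function; the final ffmpeg line is echoed rather than executed so the sketch runs anywhere):

```shell
# Sketch: map a corner keyword (tl/tr/bl/br) to the ffmpeg overlay expression.
overlay_expr () {
    case $1 in
        tl) echo "overlay=10:10" ;;
        tr) echo "overlay=main_w-overlay_w-10:10" ;;
        bl) echo "overlay=10:main_h-overlay_h-10" ;;
        br) echo "overlay=main_w-overlay_w-10:main_h-overlay_h-10" ;;
    esac
}

# Dry run: print the full watermark command for the bottom-right corner.
echo ffmpeg -i input.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] $(overlay_expr br) [out]" output.flv
```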

4. Add Text Watermarks To Videos

To add text to a video, use this command:

ffmpeg -i input.mp4 -vf drawtext="fontfile=/usr/share/fonts/truetype/freefont/FreeSans.ttf: text='YOUR TEXT HERE':fontcolor=red@1.0:fontsize=70:x=00: y=40" -y output.mp4

An example:



To use another font, you can list the available fonts from the terminal with this command:

ls /usr/share/fonts/truetype/freefont/

5. Rotate Videos

To rotate a video 90 degrees with ffmpeg, run this command:

ffmpeg -i input.avi -vf transpose=1 output.avi

Here is an example for a video rotated with ffmpeg:



Here are all the transpose parameters:

0 = 90 degrees counterclockwise and vertical flip (default)
1 = 90 degrees clockwise
2 = 90 degrees counterclockwise
3 = 90 degrees clockwise and vertical flip

6. Adjust Audio/Video Volume
You can use ffmpeg to change the volume of a video file with this command:

ffmpeg -i input.avi -vol 100 output.avi

To change volume of an audio file, run this command:

ffmpeg -i input.mp3 -vol 100 -ab 128 output.mp3

7. Insert A Video Inside Another Video

To do this, run this command:

ffmpeg -i video1.mp4 -vf "movie=video2.mp4:seek_point=5, scale=200:-1, setpts=PTS-STARTPTS [movie]; [in] setpts=PTS-STARTPTS, [movie] overlay=270:240 [out]" output.mp4

Here is an example:



8. Add A Rectangle To A Video

To draw, for example, an orange rectangle in a video, you can use this command:

ffmpeg -i input.avi -vf "drawbox=500:150:600:400:orange@0.9" -sameq -y output.avi

Tuesday, October 21. 2014

How to create and start VirtualBox VM without GUI

*nix Source: http://xmodulo.com/how-to-create-and-start-virtualbox-vm-without-gui.html

Suppose you want to create and run virtual machines (VMs) on VirtualBox, but the host machine does not support an X11 environment, or you only have access to a terminal on a remote host machine. How can you create and run VMs on such a host machine without the VirtualBox GUI? This is a common situation for servers where VMs are managed remotely.

In fact, VirtualBox comes with a suite of command line utilities, and you can use the VirtualBox command line interfaces (CLIs) to manage VMs on a remote headless server. In this tutorial, I will show you how to create and start a VM without VirtualBox GUI.

Prerequisite for starting VirtualBox VM without GUI

First, you need to install VirtualBox Extension Pack. The Extension Pack is needed to run a VRDE remote desktop server used to access headless VMs. Its binary is available for free. To download and install VirtualBox Extension Pack:

$ wget http://download.virtualbox.org/virtualbox/4.2.12/Oracle_VM_VirtualBox_Extension_Pack-4.2.12-84980.vbox-extpack
$ sudo VBoxManage extpack install ./Oracle_VM_VirtualBox_Extension_Pack-4.2.12-84980.vbox-extpack
Verify that the Extension Pack is successfully installed by using the following command.

$ VBoxManage list extpacks
Extension Packs: 1
Pack no. 0: Oracle VM VirtualBox Extension Pack
Version: 4.2.12
Revision: 84980
Edition:
Description: USB 2.0 Host Controller, VirtualBox RDP, PXE ROM with E1000 support.
VRDE Module: VBoxVRDP
Usable: true
Why unusable:
Create a VirtualBox VM from the command line

I assume that the VirtualBox VM directory is located in "~/VirtualBox\ VMs".

First create a VM. The name of the VM is "testvm" in this example.

$ VBoxManage createvm --name "testvm" --register
Specify the hardware configurations of the VM (e.g., Ubuntu OS type, 1024MB memory, bridged networking, DVD booting).

$ VBoxManage modifyvm "testvm" --memory 1024 --acpi on --boot1 dvd --nic1 bridged --bridgeadapter1 eth0 --ostype Ubuntu
Create a disk image (with size of 10000 MB). Optionally, you can specify disk image format by using "--format [VDI|VMDK|VHD]" option. Without this option, VDI image format will be used by default.

$ VBoxManage createvdi --filename ~/VirtualBox\ VMs/testvm/testvm-disk01.vdi --size 10000
Add an IDE controller to the VM.

$ VBoxManage storagectl "testvm" --name "IDE Controller" --add ide
Attach the previously created disk image as well as CD/DVD drive to the IDE controller. Ubuntu installation ISO image (found in /iso/ubuntu-12.04.1-server-i386.iso) is then inserted to the CD/DVD drive.

$ VBoxManage storageattach "testvm" --storagectl "IDE Controller" --port 0 --device 0 --type hdd --medium ~/VirtualBox\ VMs/testvm/testvm-disk01.vdi
$ VBoxManage storageattach "testvm" --storagectl "IDE Controller" --port 1 --device 0 --type dvddrive --medium /iso/ubuntu-12.04.1-server-i386.iso
OR Detach ISO:
$ VBoxManage storageattach "testvm" --storagectl "IDE Controller" --port 1 --device 0 --type dvddrive --medium none
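The whole creation sequence above can be put in one script. A sketch: setting VBOX="echo VBoxManage" turns every step into a dry run that just prints the command, so the flow can be checked on a machine without VirtualBox installed (set VBOX=VBoxManage to really execute):

```shell
# Sketch of the VM-creation steps as one script.
# VBOX="echo VBoxManage" makes each step a dry run that prints the command.
VBOX="echo VBoxManage"
VM=testvm

$VBOX createvm --name "$VM" --register
$VBOX modifyvm "$VM" --memory 1024 --acpi on --boot1 dvd \
    --nic1 bridged --bridgeadapter1 eth0 --ostype Ubuntu
$VBOX createvdi --filename "$HOME/VirtualBox VMs/$VM/$VM-disk01.vdi" --size 10000
$VBOX storagectl "$VM" --name "IDE Controller" --add ide
$VBOX storageattach "$VM" --storagectl "IDE Controller" --port 0 --device 0 \
    --type hdd --medium "$HOME/VirtualBox VMs/$VM/$VM-disk01.vdi"
```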
Start VirtualBox VM from the command line

Once a new VM is created, you can start the VM headless (i.e., without VirtualBox console GUI) as follows.

$ VBoxHeadless --startvm "testvm" &
The above command will launch the VM, as well as VRDE remote desktop server. The remote desktop server is needed to access the headless VM's console.

By default, the VRDE server is listening on TCP port 3389. If you want to change the default port number, use "-e" option as follows.

$ VBoxHeadless --startvm "testvm" -e "TCP/Ports=4444" &
If you don't need remote desktop support, launch a VM with "--vrde off" option.

$ VBoxHeadless --startvm "testvm" --vrde off &
Connect to headless VirtualBox VM via remote desktop

Once a VM is launched with remote desktop support, you can access the VM's console via any remote desktop client (e.g., rdesktop).

To install rdesktop on Ubuntu or Debian:

$ sudo apt-get install rdesktop
To install rdesktop on CentOS, RHEL or Fedora, configure Repoforge on your system, and then run the following.

$ sudo yum install rdesktop
To access a headless VM on a remote host machine, run the following.

$ rdesktop -a 16 IP_address_host_machine
If you use a custom port number for a remote desktop server, run the following instead.

$ rdesktop -a 16 IP_address_host_machine:port_number

Sunday, October 12. 2014

Create Own CA

WildWildWeb SRC: http://www.debiantutorials.com/create-your-private-certificate-authority-ca/

Create your private certificate authority (CA)
AUGUST 29, 2008 BY ADMIN·3 COMMENTS
Creating a private CA can be useful if you have a lot of services encrypting data for internal use but don't need the domain to be verified by a public CA like Verisign, Thawte etc. By importing the CA to all computers that will use these services, users won't get a popup in IE and Firefox saying that the certificate is invalid.

1. Create a CA certificate

Create a private key for your CA:

openssl genrsa -des3 -out ca.key 4096

You will need to enter a pass phrase; this password will be used every time you sign a certificate with this CA.

Make sure unauthorized users don’t get access to your private key:

chmod 700 ca.key

Create the certificate. This will be shown as the top-level certificate when you have signed other certificates, so choose the expiration day and the certificate contents carefully. All signed certificates will expire when the top-level certificate expires, so you may want to choose a few years here:

openssl req -new -x509 -days 3650 -key ca.key -out ca.crt

Here is a sample of input values:

Enter pass phrase for ca.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Debian Tutorials
Organizational Unit Name (eg, section) []:
Common Name (eg, YOUR name) []:Debian Tutorials CA
Email Address []:

The common name will be shown when users display details about the certificate.

2. Create a certificate request

Create a private key:

openssl genrsa -des3 -out secure.debiantutorials.net.key 4096

Replace secure.debiantutorials.net with your domain name.

Create the certificate request

openssl req -new -key secure.debiantutorials.net.key -out secure.debiantutorials.net.csr

Make sure you put your domain name in the “Common Name” field

3. Sign the certificate with your CA certificate

You will need to provide the certificate request here and the CA key

openssl x509 -req -days 365 -in secure.debiantutorials.net.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out secure.debiantutorials.net.crt
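Steps 1-3 can be run end to end without prompts by adding -subj and -passout/-passin flags. A sketch with a throwaway pass phrase and 2048-bit keys to keep it fast (use 4096 and real values in production):

```shell
# Sketch: CA key + CA cert, then a server key/CSR signed by that CA.
# "pass:secret" and the subject names are placeholders for this sketch.
openssl genrsa -des3 -passout pass:secret -out ca.key 2048
chmod 700 ca.key
openssl req -new -x509 -days 3650 -key ca.key -passin pass:secret \
    -subj "/C=US/O=Debian Tutorials/CN=Debian Tutorials CA" -out ca.crt

openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=secure.debiantutorials.net" -out server.csr
openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key \
    -passin pass:secret -set_serial 01 -out server.crt

# Verify the signed certificate against the CA.
openssl verify -CAfile ca.crt server.crt
```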

4. Remove password from key (optional)

If using the certificate with Apache, Postfix or other services, you may need to remove the password from your private key so that the service can start without user interaction:

openssl rsa -in secure.debiantutorials.net.key -out secure.debiantutorials.net.key.insecure
mv secure.debiantutorials.net.key secure.debiantutorials.net.key.secure
mv secure.debiantutorials.net.key.insecure secure.debiantutorials.net.key

Set permissions on the keys

chmod 700 secure.debiantutorials.net.key
chmod 700 secure.debiantutorials.net.key.secure

Tuesday, August 19. 2014

Mysql Master-Master with many slaves replication

*nix Technical WildWildWeb Sources:
https://www.packtpub.com/books/content/setting-mysql-replication-high-availability
https://www.packtpub.com/books/content/installing-and-managing-multi-master-replication-managermmm-mysql-high-availability
https://capttofu.livejournal.com/1752.html

Using Master<->Master replication is a good backup solution, but it is not good enough if we want to offload queries from the masters.

Thus we can create:
        Master <---> Master
        /    \       /    \
    Slave   Slave Slave   Slave

1. Setup both masters.
Tweak some options in my.cnf (on all masters!):
server-id = 1
log-slave-updates
log-bin = /var/log/mysql/bin.log
log-bin-index = /usr/local/mysql/var/log-bin.index
log-error = /usr/local/mysql/var/error.log
expire_logs_days = 10
max_binlog_size = 200M

WARNING: log-slave-updates is crucial!!! If it is not set, the slaves on the second node won't get updates pushed through the first master, and vice versa.

2. Add MySQL Users:
mysql> grant replication slave on *.* to 'replication'@'10.0.0.%' identified by 'pass';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

3. Dump all DBs from the master, scp the dump to the slave and import it. This way we will have 1:1 DBs on both nodes. Note that you may have to set the password for the debian-sys-maint user in /etc/mysql/debian.cnf.

On master:
$> mysqldump --delete-master-logs --master-data --lock-all-tables --all-databases --hex-blob -u root -p > dumpall.sql
$> bzip2 dumpall.sql
$> scp dumpall.sql.bz2 root@slave:

NOTICE: --delete-master-logs clears all master logs BEFORE this dump. If you have other slaves syncing or need earlier binlogs, remove this option!

On slave:
$> bunzip2 dumpall.sql.bz2
$> mysql -uroot -p mysql < dumpall.sql

Check the binlog file and position recorded in the dump:

$> grep MASTER_LOG_FILE dumpall.sql

now login in mysql and change master to:

mysql> change master to master_host = '10.0.0.1', master_user='replication', master_password='pass', master_log_file='node1-binary.000001', master_log_pos=1;
mysql> start slave;
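The grep-and-change-master step can be scripted so the binlog coordinates are pulled straight out of the dump. A sketch against a fabricated one-line dump file (a real --master-data dump carries this CHANGE MASTER line near the top):

```shell
# Sketch: extract binlog file/position from a --master-data dump.
# The dump content below is fabricated for illustration.
cat > dumpall.sql <<'EOF'
CHANGE MASTER TO MASTER_LOG_FILE='node1-binary.000001', MASTER_LOG_POS=107;
EOF

file=$(sed -n "s/.*MASTER_LOG_FILE='\([^']*\)'.*/\1/p" dumpall.sql)
pos=$(sed -n 's/.*MASTER_LOG_POS=\([0-9]*\).*/\1/p' dumpall.sql)

# Print the statement to paste into the mysql shell on the slave.
echo "change master to master_host='10.0.0.1', master_user='replication', master_password='pass', master_log_file='$file', master_log_pos=$pos;"
```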

Check that the slave on the 2nd master is running. Seconds_Behind_Master should be 0 and the Error_* fields empty. Usually this means everything is OK.
mysql> show slave status\G
mysq> show master status;

Now do the same thing on the 1st master. Just use the second master's binlog file and position.

mysql> change master to master_host = '10.0.0.2', master_user='replication', master_password='pass', master_log_file='node2-binary.000001', master_log_pos=1;
mysql> start slave;

Check that the slave on the 1st master is running. Seconds_Behind_Master should be 0 and the Error_* fields empty. Usually this means everything is OK.
mysql> show slave status\G

Now test create/insert/update/delete.
First, create a table on the 1st master and insert a record. Check on the 2nd master that the table is there and has the record.
On the second master insert a second record. Check on the 1st that there are 2 records.

4. Create read-only slaves connected to the 1st master and to the 2nd:

Simply do the same setup as above: dump the DB, populate it, then change master to, BUT WATCH OUT for the binlog/position!

When you are done setting up and the slave status shows 0, TEST!

First create a table on the 1st master and insert 1 record.
Then check all slaves connected to the 1st master.
Then check all slaves connected to the 2nd master!
All MUST have the table + record.
After that, insert a second row on the 2nd master.
Then check all slaves connected to the 1st master.
Then check all slaves connected to the 2nd master!


I think that's all!
Happy replicating.

Friday, July 18. 2014

Symfony2 Sonata Admin sonata_type_collection inline editing hidden field

PHP Here I've found the answer to my simple question: how to inline-edit a dependent one-to-many field without showing the parent as a select box.

http://www.obverse.com/2013/05/working-with-the-sonata-admin-bundle-and-sonata_type_collection/


Here is a brief answer: Use prePersist / preUpdate methods in ParentAdmin.php class.
// in the ParentAdmin class


public function prePersist($promotion)
{
    foreach ($promotion->getRules() as $rule) {
        $rule->setPromotion($promotion);
    }
}

public function preUpdate($promotion)
{
    foreach ($promotion->getRules() as $rule) {
        $rule->setPromotion($promotion);
    }
    $promotion->setRules($promotion->getRules());
}

Yes indeed, the Sonata admin class allows for prePersist and preUpdate calls, which lets us set the promotion on the rule before persisting. Of course, don't forget to declare your admin classes as services in your configs. Other than that, I hope this helps somebody out there.
Posted by valqk in PHP at 03:59 | Comments (0) | Trackbacks (0)

Tuesday, June 24. 2014

OpenDKIM+Postfix on Debian

Technical Original link: http://linuxaria.com/howto/using-opendkim-to-sign-postfix-mails-on-debian
Article By Chris Pentago


Goal of this how-to: a step-by-step guide on how to set up OpenDKIM with Postfix on Debian GNU/Linux to send signed email from your VPS.

There are numerous methods or techniques that you can use to achieve email message signing. Good examples are DomainKey as well as DKIM which is an abbreviation for DomainKeys Identified Mail.


DomainKeys Identified Mail (DKIM) lets an organization take responsibility for a message that is in transit. The organization is a handler of the message, either as its originator or as an intermediary. Their reputation is the basis for evaluating whether to trust the message for further handling, such as delivery. Technically DKIM provides a method for validating a domain name identity that is associated with a message through cryptographic authentication.



These two techniques will not use symmetric encryption but rather will employ asymmetric encryption. (more info: http://support.microsoft.com/kb/246071) In both methods, the common algorithm used is RSA. This algorithm is also the default for these methods of achieving email message signing.

For those wondering about what asymmetric means, the following is a detailed explanation. It is a technique that utilizes a key to sign the email message. Other methods will not require a key. One can have two types of keys: a private key and a public key. These keys will come into play to verify the message as well. The two methods of creating email message signing as highlighted above are filters for SMTP server. DomainKey works with a dk-filter although this filter has been discontinued in the market. OpenDKIM has become the preferred replacement where filters are concerned.

A mail server must be enabled with a filter to set up the server properly. In light of this, Postfix can be used because it is enabled accordingly. Another requirement is the freedom to add or change the DNS records as you desire. With the above in mind, the following is a step-by-step guide on how to set up a Postfix email server with DomainKeys Identified Mail on Debian.





1. The first thing is to update your software if you do not have Postfix installed already. Look at the manual provided to know exactly how to install the software. Once you have it running, move on to the next step.

On Debian, issue these commands:

aptitude update
aptitude safe-upgrade

2. At this point, it is important to install the DKIM filter. As hinted above, the most common and available filter is OpenDKIM. Installing this filter is not complicated at all and should not take much time.

aptitude install opendkim opendkim-tools

3. The next step involves setting up a directory for the storage of private keys. You can have as many domains as you wish but make sure that the permission settings are in order because they are the most critical.

mkdir -pv /etc/opendkim/example.com/
chown -Rv opendkim:opendkim /etc/opendkim
chmod go-rwx /etc/opendkim/

4. Here, security is pivotal and it will warrant you to create a key pair for each domain you have. In other words, every single domain should have a key pair and this is the way to go.

cd /etc/opendkim/example.com/
opendkim-genkey -r -h rsa-sha256 -d example.com -s mail
mv -v mail.private mail
chown opendkim:opendkim mail

chmod u=rw,go-rwx mail

5. The next thing to do is to publish the public key in a DNS record. You will be required to insert a new TXT DNS record with the key generated previously. You'll be presented with the key in BIND (DNS server) format, but it's easy to paste the necessary parts to your domain's DNS provider:

mail._domainkey.example.com IN TXT "v=DKIM1; h=rsa-sha256; k=rsa;p=AySFjB......xorQAB"

Example of how it looks in CloudFlare's DNS manager:


6. At this juncture, it is vital to set up the key table. You will do this by using a specified format:

KeyID Domain:Selector:PathToPrivateKey

So fire up your text editor of choice and open/create the /etc/opendkim/KeyTable file. Our example looks like this:

example.com example.com:mail:/etc/opendkim/example.com/mail

7. The next step involves setting up the signing table. The filter used is programmed to read the table by looking for matched domain. Again, open/create /etc/opendkim/SigningTable in your favorite text editor and put this into it:

@example.com example.com

8. You will then have to create a /etc/opendkim/TrustedHosts file at this point. It will list the top trusted hosts as you desire. Again, the format used can be as given earlier when creating a signing table as well as a key table.

127.0.0.1
lola.ns.cloudflare.com (this is the DNS server you'll get from your provider)
example.com

9. Next, set up the ownership of files we just created:

chown opendkim:opendkim /etc/opendkim/KeyTable
chown opendkim:opendkim /etc/opendkim/SigningTable
chown opendkim:opendkim /etc/opendkim/TrustedHosts
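Steps 6-9 can be generated in one go. A sketch that writes the three tables into a local stand-in directory (etc-opendkim here instead of /etc/opendkim, so it runs as a normal user; the chown line is commented out since it needs root):

```shell
# Sketch: generate KeyTable, SigningTable and TrustedHosts for one domain.
DOMAIN=example.com
DIR=etc-opendkim   # stand-in for /etc/opendkim in this sketch
mkdir -p "$DIR"

printf '%s %s:mail:/etc/opendkim/%s/mail\n' "$DOMAIN" "$DOMAIN" "$DOMAIN" > "$DIR/KeyTable"
printf '@%s %s\n' "$DOMAIN" "$DOMAIN" > "$DIR/SigningTable"
printf '127.0.0.1\n%s\n' "$DOMAIN" > "$DIR/TrustedHosts"

# chown opendkim:opendkim "$DIR"/KeyTable "$DIR"/SigningTable "$DIR"/TrustedHosts

cat "$DIR/KeyTable"
```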

10. This step is critical because it involves configuring the OpenDKIM filter to read the files that you have created above. Do this by opening /etc/opendkim.conf using your chosen editor. Consequently, it might be good to delete the Debian configuration so that you can replace it with the new and edited information.

# Enable Logging
Syslog yes
SyslogSuccess yes
LogWhy yes

# User mask
UMask 002

# Always oversign From (sign using the actual From plus a null From to prevent malicious insertion of header fields (From and/or others) between the signer and the verifier)

OversignHeaders From

# Our KeyTable and SigningTable
KeyTable refile:/etc/opendkim/KeyTable
SigningTable refile:/etc/opendkim/SigningTable

# Trusted Hosts
ExternalIgnoreList /etc/opendkim/TrustedHosts
InternalHosts /etc/opendkim/TrustedHosts

# Hashing Algorithm
SignatureAlgorithm rsa-sha256

# Auto-restart when a failure occurs. CAUTION: this may cause a tight fork loop
AutoRestart Yes

# Set the user and group to opendkim user
UserID opendkim:opendkim

# Specify the working socket
Socket inet:8891@localhost

11. It is now time to change or configure the OpenDKIM filter on Postfix. This can be done by simply altering some parameters to achieve what you require. It is very important to do this carefully so that you avoid any errors that may come up later. Open /etc/postfix/main.cf and add/uncomment these lines:

# OpenDKIM
milter_default_action = accept
milter_protocol = 2
smtpd_milters = inet:localhost:8891
non_smtpd_milters = $smtpd_milters

12. When you reach this step, you are almost there. You will however be required to restart the OpenDKIM service as well as Postfix. After doing this, make sure that everything is fine before you move on to the final step.

service opendkim restart
service postfix restart

13. The final step is to check whether the changes you made have turned out well. Check that the OpenDKIM service is online and listens on the port we defined above.

ps aux | grep dkim
netstat -tanp | grep dkim

This method works like a charm and is a sure way to attain email message signing. It is really not a complicated process, but you will need to follow the method to the letter. Beyond the technical jargon, any person willing to follow this guide diligently can achieve success. Keep in mind that the top benefits of DKIM are to curb abuse and to reduce the spam reaching recipients.

It is a method to verify how genuine an organization or a domain is. There are many other elements that will play a good role in helping your business or domain establish a credible name in the market through this process.

This is especially important if you plan to send email from your server outside to GMail or Hotmail servers with increased security/spam filters or your mail may end up sent to SPAM folder or even rejected.

If you followed the procedure but are still unable to send mail to GMail, for instance, ask your web host to set up ReverseDNS for you so those mail-receiving servers can match the message header IP address with your domain.

If your VPS or dedicated server is located in a highly available and secure government data centre such as Macquarie's or TheBunker's, these identifying features might already be set by default for new customers.

Tuesday, April 22. 2014

OpenSSL most-used commands

*nix Here's a list of the most-used openssl commands:

1. Create key + csr:

$> openssl req -new -nodes -keyout server.key -out server.csr -newkey rsa:4096

2. Create key only:

$> openssl genrsa -des3 -out server.key.crypted 4096

3. Remove password from key:

$> openssl rsa -in server.key.crypted -out server.key

4. Generate CSR

$> openssl req -new -key server.key -out server.csr

5. Self generated certificate

$> openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

6. View the details of CSR

$> openssl req -noout -text -in server.csr

7. Check a Certificate Signing Request (CSR)

$> openssl req -text -noout -verify -in CSR.csr

8. Check a private key

$> openssl rsa -in privateKey.key -check

9. Check a certificate

$> openssl x509 -in certificate.crt -text -noout

10. Check a PKCS#12 file (.pfx or .p12)

$> openssl pkcs12 -info -in keyStore.p12
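Several of the commands above chain together into a quick self-signed certificate. A sketch, made non-interactive via -subj (example CN and 2048-bit key; swap in your own values):

```shell
# Sketch: key + CSR in one step, then self-sign and inspect the result.
openssl req -new -nodes -keyout server.key -out server.csr \
    -newkey rsa:2048 -subj "/CN=www.example.com"
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

# Show who the certificate was issued to.
openssl x509 -noout -subject -in server.crt
```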


How do I extract information from a certificate? (from: https://www.madboa.com/geek/openssl/ )

An SSL certificate contains a wide range of information: issuer, valid dates, subject, and some hardcore crypto stuff. The x509 subcommand is the entry point for retrieving this information. The examples below all assume that the certificate you want to examine is stored in a file named cert.pem.

Using the -text option will give you the full breadth of information.

$> openssl x509 -text -in cert.pem
Other options will provide more targeted sets of data.

# who issued the cert?
$> openssl x509 -noout -in cert.pem -issuer

# to whom was it issued?
$> openssl x509 -noout -in cert.pem -subject

# for what dates is it valid?
$> openssl x509 -noout -in cert.pem -dates

# the above, all at once
$> openssl x509 -noout -in cert.pem -issuer -subject -dates

# what is its hash value?
$> openssl x509 -noout -in cert.pem -hash

#serial
$> openssl x509 -noout -in cert.pem -serial

# what is its MD5 fingerprint?
#> openssl x509 -noout -in cert.pem -fingerprint -md5

# what is its SHA1 fingerprint?
$> openssl x509 -noout -in cert.pem -fingerprint -sha1

Tuesday, February 25. 2014

Monit example configurations.

*nix http://mmonit.com/wiki/Monit/ConfigurationExamples

Thursday, October 3. 2013

jqGrid - update row and blink / highlight it

Ever wondered how to update a row in jqGrid and make it blink so the user sees that it was updated?
Here's how I did it.
It's quite simple: extend jqGrid and call the method afterwards.
Color and time are set inside the method, but they can easily be passed as params.
After loading the jqGrid add this code:

$.jgrid.extend({
    updateRowData: function (rowId, data) {
        var oGrid = $(this);
        oGrid.setRowData(rowId, data);

        var blinks = 5;
        var delay = 500;
        var blinkCnt = 0;
        var changeColor = 'red';
        var curr = false;
        var rr = setInterval(function () {
            var color;
            if (curr === false) {
                color = changeColor;
                curr = color;
            } else {
                color = '';
                curr = false;
            }
            oGrid.setRowData(rowId, false, {background: color});
            if (blinkCnt >= blinks * 2) {
                blinkCnt = 0;
                clearInterval(rr);
                oGrid.setRowData(rowId, false, {background: ''});
            } else {
                blinkCnt++;
            }
        }, delay);
    }
});

then you simply call:

grid.updateRowData(41, { col1: 'do', col2: 'good' });

Where 41 is the row id and grid is my grid variable:
grid = $("#list");

Monday, September 2. 2013

Handle cookies without jQuery. jQuery.cookie without jQuery dependency.

WildWildWeb I've just had to use a cookie in a banner, but the owner of the site placed the jQuery include after my include.
That's why jQuery got redefined and my .cookie() method disappeared.
So I simply added a jQuery.extend implementation to the jQuery.cookie method and assigned it to a separate var.
This is a simple solution to get your code working without jQuery if it only depends on the .cookie method.

jQcookie = function (key, value, options) {
    if (arguments.length > 1 && String(value) !== "[object Object]") {
        var extendObject = function extend() {
            for (var i = 1; i < arguments.length; i++)
                for (var key in arguments[i])
                    if (arguments[i].hasOwnProperty(key))
                        arguments[0][key] = arguments[i][key];
            return arguments[0];
        };
        options = extendObject({}, options);
        if (value === null || value === undefined) {
            options.expires = -1;
        }
        if (typeof options.expires === 'number') {
            var days = options.expires, t = options.expires = new Date();
            t.setDate(t.getDate() + days);
        }
        value = String(value);
        return (document.cookie = [
            encodeURIComponent(key), '=',
            options.raw ? value : encodeURIComponent(value),
            options.expires ? '; expires=' + options.expires.toUTCString() : '',
            options.path ? '; path=' + options.path : '',
            options.domain ? '; domain=' + options.domain : '',
            options.secure ? '; secure' : ''
        ].join(''));
    }
    options = value || {};
    var result, decode = options.raw ? function (s) {
        return s;
    } : decodeURIComponent;
    return (result = new RegExp('(?:^|; )' + encodeURIComponent(key) + '=([^;]*)').exec(document.cookie)) ? decode(result[1]) : null;
};

Monday, May 27. 2013

Postfix/Dovecot fail2ban

*nix Sources:
http://workaround.org/ispmail/squeeze/sysadmin-niceties
http://www.fail2ban.org/wiki/index.php/Postfix

Copy of my post http://superuser.com/questions/576751/example-of-fail2ban-configuration-to-ban-servers-spamming-my-postfix-server/600365



I've just got sick of all the RBL spammers filling my logs, so I've setup my postfix to ban them.

After doing so, load dropped because they were a lot!

Be aware that you have to implement some way of cleaning the banned list.

I'm planning to restart fail2ban on a weekly basis.

Check out these rules: http://www.fail2ban.org/wiki/index.php/Postfix

Add them in: /etc/fail2ban/filter.d/postfix.conf (that's in Debian System!)

Also good to read this (search for fail2ban): http://workaround.org/ispmail/squeeze/sysadmin-niceties (some snippets from there).

In short:

In jail.conf set:

[postfix]
enabled = true
Good to do if you're using dovecot (from the link above): create /etc/fail2ban/filter.d/dovecot-pop3imap.conf and add in it:

[Definition]
failregex = (?: pop3-login|imap-login): .*(?:Authentication failure|Aborted login \(auth failed|Aborted login \(tried to use disabled|Disconnected \(auth failed).*rip=(?P<host>\S*),.*
ignoreregex =
Add section in jail.conf:

[dovecot-pop3imap]
enabled = true
port = pop3,pop3s,imap,imaps
filter = dovecot-pop3imap
logpath = /var/log/mail.log
Restart fail2ban and check iptables -nvL to see that the chains for postfix and dovecot-pop3imap were added. BE AWARE! This is for Debian-based systems. Check the file paths for RH or others.

Building postfix with vda patch in debian.

*nix While reading howtos for the postfix quota, nobody ever mentioned that the VDA patch has to be applied for the quota to work.
After finding this out, I wanted to build it the Debian way, and that's how it's done:


# cd /usr/src
# apt-get source postfix
# wget http://vda.sourceforge.net/VDA/postfix-vda-2.7.1.patch
# cd postfix-2.7.1
# patch -p1 < ../postfix-vda-2.7.1.patch
# dpkg-buildpackage
# cd ..
# dpkg -i postfix_2.7.1-1+squeeze1_amd64.deb
# dpkg -i postfix-mysql_2.7.1-1+squeeze1_amd64.deb
# dpkg -i postfix-pcre_2.7.1-1+squeeze1_amd64.deb

Sunday, February 24. 2013

Symfony2: 3d to 2d. Display tree navigation menu in a select dropdown in SonataAdmin

PHP WildWildWeb Keys: SonataAdmin, Gedmo Tree nested type, Select Dropdown, use EntityManager in configureFormFields Admin page

I've lost 2 days bumping my head on a simple task - I needed to display simple dropdown combo box that will display Nested Gedmo Tree in a Sonata Admin form.

I've installed and got working http://knpbundles.com/roomthirteen/Room13NavigationBundle - it's a simple implementation of Gedmo\Tree type="nested" - exactly what I needed for simple menu navigation, plus it has a ready SonataAdmin page to edit nodes.
The nice thing about this exact bundle is that it uses @Translatable, @Blameable and @Timestampable, and that's all I need - to be able to translate my menu and to see when and by whom the records were updated.

After installing it I noticed that 'path' was missing/empty - I was even getting an 'undefined' notice.
I dug around and found that I had to implement getPath() myself.
I did, and created a custom repository, since I wasn't able to use childrenHierarchy directly in the Entity.


namespace Room13\NavigationBundle\Entity\Repository;

class NavigationNodeRepository extends \Gedmo\Tree\Entity\Repository\NestedTreeRepository
{
    function getFlatNodes($startNode = null, $options = null) {
        if (is_null($options)) {
            $options = array(
                'decorate' => false,
                'rootOpen' => '<ul>',
                'rootClose' => '</ul>',
                'childOpen' => '<li>',
                'childClose' => '</li>',
                'nodeDecorator' => function($node) {
                    return $node['title'];
                }
            );
        }
        $htmlTree = $this->childrenHierarchy(
            $startNode, // starting from root nodes
            false,      // load all children, not only direct
            $options
        );
        return $this->ToFlat($htmlTree, ' » ');
    }

    function ToFlat($node, $sep = ' > ', $path = '') {
        $els = array();
        foreach ($node as $id => $opts) {
            $els[$opts['id']] = $path . $opts['title'];
            if (isset($opts['__children']) && is_array($opts['__children']) && sizeof($opts['__children'])) {
                $r = $this->ToFlat($opts['__children'], $sep, ($path . $opts['title'] . $sep));
                foreach ($r as $id => $title) {
                    $els[$id] = $title;
                }
            }
        }
        return $els;
    }
}


    After implementing it, I had to find a way to display the result of this for the root node in a flat select box in the SonataAdmin page, so the user can pick from a dropdown where the content should show.
    Well... it turned out that the 'entity' field type was impossible to use, because it can't call methods from the custom repo - just the native Entity methods.
    I ended up using the simple 'choice' type like this:


    $em = $this->modelManager->getEntityManager('Room13NavigationBundle:NavigationNode');
    $tree = $em->getRepository('Room13NavigationBundle:NavigationNode')->getFlatNodes();
    $formMapper
        ->add('name')
        ->add('menu', 'choice', array(
            'label' => 'Place in menu',
            'empty_value' => 'Select menu',
            'choices' => $tree,
        ))
        // ...
    ;

    Thursday, January 31. 2013

    Sync one directory to another

    *nix I had to sync one local fileserver directory (and all its subdirs) to a remote server on the fly, so whatever gets written to the local server appears on the remote.
    I tried incron, but it's not recursive.
    I tested some other solutions, but they all had issues.
    I ended up using watcher.py: https://github.com/greggoryhz/Watcher
    It has worked flawlessly for 2 months now. (local copy: http://www.valqk.com/assets/user/watcher.py )
    install dependent libs:

    #> sudo apt-get install python python-pyinotify python-yaml

    Another example - if you want to sync two local dirs - you do it like this:

    jobs.yml file:


    job1:
    label: Watch user/dir for added and changed files and cp to user1/dir/
    watch: /home/user/dir/
    events: ['atrribute_change', 'modify', 'create', 'move']
    recursive: true
    command: /home/user/cpfile.sh /home/user/dir/ $filename /home/user1/dir/
    job2:
    label: Watch user/dir for remove files and cp to user1/dir
    watch: /home/user/dir/
    events: ['delete','self_delete']
    recursive: true
    command: /home/user/dir/delfile.sh /home/user/dir/ $filename /home/user1/dir/


    and the .sh scripts:

    cpfile.sh
    #!/bin/bash

    prefix="$1";
    file="$2";
    dst="$3";
    plen=${#prefix};
    echo "RUN $0 $1 $2 $3" >> /tmp/a
    echo cp -a $file $dst/${file:$plen} >> /tmp/a;
    cp -a "$file" "$dst/${file:$plen}";
    exit $?;


    delfile.sh
    #!/bin/bash

    prefix="$1";
    file="$2";
    dst="$3";
    plen=${#prefix};
    rm "$dst/${file:$plen}";
    exit $?;
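Both scripts map the source path onto the destination with bash's substring expansion - ${file:$plen} strips the watched prefix, leaving the path relative to it. A standalone illustration (the paths are made up):

```shell
#!/bin/bash
prefix="/home/user/dir/"
file="/home/user/dir/sub/notes.txt"   # what watcher.py passes as $filename
plen=${#prefix}
# everything after the prefix, i.e. the path relative to the watched dir
rel=${file:$plen}
echo "$rel"                  # prints sub/notes.txt
echo "/home/user1/dir/$rel"  # prints /home/user1/dir/sub/notes.txt
```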


    Wednesday, November 21. 2012

    Nat through non-default gateways more than one internal network.

    *nix One big office space (with one BIG net) shared by more than one company - each having different policies for its IT infrastructure. How do we NAT the different local networks (connected to eth2, 3, 4, etc.) through different gateways (each connected via openvpn to its company's VPN server)? Here's how:
    #!/bin/sh
    
    exc() { 
     cmd="$1";
     [ -n "$2" ] && exitt="$2";
     echo "Exec $cmd ...";
     $cmd;
     [ $? -gt 0 ] && echo "Error executing $cmd..." && [ "$exitt" != "0" ] && exit 1;
    }
    
    [ `which realpath|wc -l` -lt 1 ] && echo "This script requires the realpath command" && exit 1;
    
    [ -z "$1" ] && echo "Param1: net config" && exit 1;
    [ -n "$1" ] && cfg=`realpath $1`;
    [ -n "$1" ] && ! [ -f "$cfg" ] && echo "Config $1 can't be found!" && exit 1;
    [ -n "$1" ] && [ -f "$cfg" ] && . $cfg;
    
    [ -z "$defgw" ] || [ -z "$vpnremoteip" ] || [ -z "$local1net" ] || [ -z "$local1ip" ] || [ -z "$local1netdev" ] || [ -z "$tundev1" ] || [ -z "$vpn1cfgdir" ] || [ -z "$vpn1cfg" ] || [ -z "$vpn1rtbl" ] && echo "Some variables that are required are empty! We need all: defgw : $defgw , vpnremoteip : $vpnremoteip , local1net : $local1net , local1ip : $local1ip , local1netdev : $local1netdev ,  tundev1 : $tundev1 , vpn1cfgdir : $vpn1cfgdir , vpn1cfg : $vpn1cfg , vpn1rtbl : $vpn1rtbl" && exit 1;
    
    
    [ -n "`ps ax|grep openvpn|grep $vpn1cfg|grep -v grep`" ] && echo "Openvpn with cfg $vpn1cfg already runs PID: `ps ax|grep openvpn|grep $vpn1cfg|grep -v grep|cut -f1 -d ' '`" && exit 1;
    local1ifacecheck=`ifconfig $local1netdev|grep inet|cut -f2 -d:|cut -f1 -d' '`;
    
    [ -n "$local1ifacecheck" ] && [ "x$local1ifacecheck" != "x$local1ip" ] && echo "$local1netdev is UP but ip doesn't match ($local1ip != $local1ifacecheck)!" && exit 1;
    [ -z "$local1ifacecheck" ] && exc "ifconfig $local1netdev $local1ip up" && exc "ip r del $local1net" 0;
    
    [ `ip r s|grep $local1net|grep -v grep|wc -l` -gt 0 ] && exc "ip r del $local1net" 0;
    
    [ `ip r s|grep $vpnremoteip|grep -v grep|wc -l` -lt 1 ] && exc "ip r add $vpnremoteip via $defgw dev eth0";
    
    # start vpn and get local/remote ppp ip
    exc "cd $vpn1cfgdir";
    exc "openvpn --daemon --config $vpn1cfg";
    sleep 10;
    
    vpn1local=`ifconfig $tundev1|grep inet|awk '{print $2}'|cut -f 2 -d:`;
    vpn1remote=`ifconfig $tundev1|grep inet|awk '{print $3}'|cut -f 2 -d:`;
    
    [ -z "$vpn1local" ] || [ -z "$vpn1remote" ] && echo "Can't find local/remote vpn ips" && exit 1;
    
    #clean up vpn routes from default routing table
    vpn1net=`ip r |grep "via $vpn1remote"|grep -v grep|cut -f1 -d' '`;
    [ -n "$vpn1net" ] && exc "ip r del $vpn1net" 0;
    [ -n "$vpn1remote" ] && exc "ip r del $vpn1remote" 0;
    
    
    echo "Add routing for: vpn1remote: $vpn1remote ; vpn1net: $vpn1net ; local1net : $local1net ; default";
    #add routes in new routing table vpnr1
    [ -z "`ip r s t $vpn1rtbl|grep $vpn1remote|grep -v grep`" ] && exc "ip r add $vpn1remote dev $tundev1 src $vpn1local table $vpn1rtbl";
    [ -z "`ip r s t $vpn1rtbl|grep $vpn1net|grep -v grep`" ] && exc "ip r add $vpn1net dev $tundev1 via $vpn1local table $vpn1rtbl";
    [ -z "`ip r s t $vpn1rtbl|grep $local1net|grep -v grep`" ] && exc "ip r add $local1net dev $local1netdev src $local1ip table $vpn1rtbl";
    [ -z "`ip r s t $vpn1rtbl|grep 'default'|grep -v grep`" ] && exc "ip r add default via $vpn1local dev $tundev1 table $vpn1rtbl";
    #add rules for vpn/vpn1-local nets to lookup vpnr1;
    [ -z "`ip ru s|grep "from $vpn1net"|grep -v grep`" ] && exc "ip rule add from $vpn1net lookup $vpn1rtbl prio 1000";
    [ -z "`ip ru s|grep "to $vpn1net"|grep -v grep`" ] && exc "ip rule add to $vpn1net lookup $vpn1rtbl prio 1000";
    [ -z "`ip ru s|grep "from $vpn1local"|grep -v grep`" ] && exc "ip rule add from $vpn1local lookup $vpn1rtbl prio 1100";
    [ -z "`ip ru s|grep "from $local1net"|grep -v grep`" ] && exc "ip rule add from $local1net lookup $vpn1rtbl prio 998";
    [ -z "`ip ru s|grep "to $local1net"|grep -v grep`" ] && exc "ip rule add to $local1net lookup $vpn1rtbl prio 998";
    
    
    [ `iptables -t nat -nvL|grep SNAT|grep "$local1net"|wc -l` -lt 1 ] && exc "iptables -t nat -A POSTROUTING -s $local1net -o $tundev1 -j SNAT --to-source $vpn1local";
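The script sources its settings from the config file given as the first parameter. A hypothetical example defining all the variables it checks for (every value below is illustrative):

```shell
# company1.net - example config for the NAT/VPN script (illustrative values)
defgw=192.168.0.1            # default gateway reachable on eth0
vpnremoteip=203.0.113.5      # public IP of this company's VPN server
local1net=10.10.1.0/24       # this company's internal network
local1ip=10.10.1.1           # our address on that network
local1netdev=eth2            # NIC that network is plugged into
tundev1=tun0                 # tunnel device openvpn creates
vpn1cfgdir=/etc/openvpn/company1
vpn1cfg=company1.conf
vpn1rtbl=vpnr1               # routing table (add it to /etc/iproute2/rt_tables first)
```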
    

    Friday, October 5. 2012

    RapidSSL + Intermediate Certificates + Nginx - RapidSSL unrecognized issuer problem.

    If you buy a RapidSSL GeoTrust SSL certificate and simply install it, you will get an "Invalid issuer" or similar message, and browsers won't let users in without confirmation.
    To install the certificate correctly you have to install the RapidSSL intermediate certificate chain.
    How? It's very easy.
    In the file where you keep the certificate itself, simply append this certificate chain (https://knowledge.rapidssl.com/library/VERISIGN/ALL_OTHER/RapidSSL%20Intermediate/RapidSSL_CA_bundle.pem)
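The concatenation itself is a single cat - your certificate must come first, then the bundle. A stand-in demonstration (real paths will be wherever your web server keeps its certs; the /tmp files here are placeholders so the example is self-contained):

```shell
# stand-in files in place of the real cert and bundle
printf 'YOUR-SERVER-CERT\n' > /tmp/www.example.com.crt
printf 'RAPIDSSL-BUNDLE\n'  > /tmp/RapidSSL_CA_bundle.pem
# server cert first, intermediates after - order matters
cat /tmp/www.example.com.crt /tmp/RapidSSL_CA_bundle.pem > /tmp/www.example.com.chained.crt
head -1 /tmp/www.example.com.chained.crt   # prints YOUR-SERVER-CERT
```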

    After concatenating it to your cert, restart the server. You can test it here:
    geotrust java ssl tester
    or
    sslshopper php tester

    You can also check these guides/links:
    SSL Certificate Installation for Nginx Server
    RapidSSL - Install SSL Certificate
    Geotrust - Install SSL Certificate

    and RapidSSL Technical Support

    I've copied them here in case they get lost.
    ------------------------------------------
    -----BEGIN CERTIFICATE-----
    MIID1TCCAr2gAwIBAgIDAjbRMA0GCSqGSIb3DQEBBQUAMEIxCzAJBgNVBAYTAlVT
    MRYwFAYDVQQKEw1HZW9UcnVzdCBJbmMuMRswGQYDVQQDExJHZW9UcnVzdCBHbG9i
    YWwgQ0EwHhcNMTAwMjE5MjI0NTA1WhcNMjAwMjE4MjI0NTA1WjA8MQswCQYDVQQG
    EwJVUzEXMBUGA1UEChMOR2VvVHJ1c3QsIEluYy4xFDASBgNVBAMTC1JhcGlkU1NM
    IENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAx3H4Vsce2cy1rfa0
    l6P7oeYLUF9QqjraD/w9KSRDxhApwfxVQHLuverfn7ZB9EhLyG7+T1cSi1v6kt1e
    6K3z8Buxe037z/3R5fjj3Of1c3/fAUnPjFbBvTfjW761T4uL8NpPx+PdVUdp3/Jb
    ewdPPeWsIcHIHXro5/YPoar1b96oZU8QiZwD84l6pV4BcjPtqelaHnnzh8jfyMX8
    N8iamte4dsywPuf95lTq319SQXhZV63xEtZ/vNWfcNMFbPqjfWdY3SZiHTGSDHl5
    HI7PynvBZq+odEj7joLCniyZXHstXZu8W1eefDp6E63yoxhbK1kPzVw662gzxigd
    gtFQiwIDAQABo4HZMIHWMA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUa2k9ahhC
    St2PAmU5/TUkhniRFjAwHwYDVR0jBBgwFoAUwHqYaI2J+6sFZAwRfap9ZbjKzE4w
    EgYDVR0TAQH/BAgwBgEB/wIBADA6BgNVHR8EMzAxMC+gLaArhilodHRwOi8vY3Js
    Lmdlb3RydXN0LmNvbS9jcmxzL2d0Z2xvYmFsLmNybDA0BggrBgEFBQcBAQQoMCYw
    JAYIKwYBBQUHMAGGGGh0dHA6Ly9vY3NwLmdlb3RydXN0LmNvbTANBgkqhkiG9w0B
    AQUFAAOCAQEAq7y8Cl0YlOPBscOoTFXWvrSY8e48HM3P8yQkXJYDJ1j8Nq6iL4/x
    /torAsMzvcjdSCIrYA+lAxD9d/jQ7ZZnT/3qRyBwVNypDFV+4ZYlitm12ldKvo2O
    SUNjpWxOJ4cl61tt/qJ/OCjgNqutOaWlYsS3XFgsql0BYKZiZ6PAx2Ij9OdsRu61
    04BqIhPSLT90T+qvjF+0OJzbrs6vhB6m9jRRWXnT43XcvNfzc9+S7NIgWW+c+5X4
    knYYCnwPLKbK3opie9jzzl9ovY8+wXS7FXI6FoOpC+ZNmZzYV+yoAVHHb1c0XqtK
    LEL2TxyJeN4mTvVvk0wVaydWTQBUbHq3tw==
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    MIIDfTCCAuagAwIBAgIDErvmMA0GCSqGSIb3DQEBBQUAME4xCzAJBgNVBAYTAlVT
    MRAwDgYDVQQKEwdFcXVpZmF4MS0wKwYDVQQLEyRFcXVpZmF4IFNlY3VyZSBDZXJ0
    aWZpY2F0ZSBBdXRob3JpdHkwHhcNMDIwNTIxMDQwMDAwWhcNMTgwODIxMDQwMDAw
    WjBCMQswCQYDVQQGEwJVUzEWMBQGA1UEChMNR2VvVHJ1c3QgSW5jLjEbMBkGA1UE
    AxMSR2VvVHJ1c3QgR2xvYmFsIENBMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
    CgKCAQEA2swYYzD99BcjGlZ+W988bDjkcbd4kdS8odhM+KhDtgPpTSEHCIjaWC9m
    OSm9BXiLnTjoBbdqfnGk5sRgprDvgOSJKA+eJdbtg/OtppHHmMlCGDUUna2YRpIu
    T8rxh0PBFpVXLVDviS2Aelet8u5fa9IAjbkU+BQVNdnARqN7csiRv8lVK83Qlz6c
    JmTM386DGXHKTubU1XupGc1V3sjs0l44U+VcT4wt/lAjNvxm5suOpDkZALeVAjmR
    Cw7+OC7RHQWa9k0+bw8HHa8sHo9gOeL6NlMTOdReJivbPagUvTLrGAMoUgRx5asz
    PeE4uwc2hGKceeoWMPRfwCvocWvk+QIDAQABo4HwMIHtMB8GA1UdIwQYMBaAFEjm
    aPkr0rKV10fYIyAQTzOYkJ/UMB0GA1UdDgQWBBTAephojYn7qwVkDBF9qn1luMrM
    TjAPBgNVHRMBAf8EBTADAQH/MA4GA1UdDwEB/wQEAwIBBjA6BgNVHR8EMzAxMC+g
    LaArhilodHRwOi8vY3JsLmdlb3RydXN0LmNvbS9jcmxzL3NlY3VyZWNhLmNybDBO
    BgNVHSAERzBFMEMGBFUdIAAwOzA5BggrBgEFBQcCARYtaHR0cHM6Ly93d3cuZ2Vv
    dHJ1c3QuY29tL3Jlc291cmNlcy9yZXBvc2l0b3J5MA0GCSqGSIb3DQEBBQUAA4GB
    AHbhEm5OSxYShjAGsoEIz/AIx8dxfmbuwu3UOx//8PDITtZDOLC5MH0Y0FWDomrL
    NhGc6Ehmo21/uBPUR/6LWlxz/K7ZGzIZOKuXNBSqltLroxwUCEm2u+WR74M26x1W
    b8ravHNjkOR/ez4iyz0H7V84dJzjA1BOoa+Y7mHyhD8S
    -----END CERTIFICATE-----
    ------------------------------------------

    HP Smart Array tool - HPAcuCLI Usage

    *nix

    Linux - hpacucli

    This document is a quick cheat sheet on how to use the hpacucli utility to add, delete, identify and repair logical and physical disks on the Smart Array 5i Plus controller. The server these commands were tested on was an HP DL380 G3 with a Smart Array 5i Plus controller and 6 x 72GB hot-swappable disks, running Oracle Enterprise Linux (OEL).

    After a fresh install of Linux I downloaded the file hpacucli-8.50-6.0.noarch.rpm (5MB), you may want to download the latest version from HP. Then install using the standard rpm command.

    I am not going to list all the commands but here are the most common ones I have used thus far, this document may be updated as I use the utility more.

    Utility keyword abbreviations:
    chassisname = ch
    controller = ctrl
    logicaldrive = ld
    physicaldrive = pd
    drivewritecache = dwc

    Starting the hpacucli utility:
    # hpacucli

    # hpacucli help

    Note: you can use the hpacucli command in a script
    Controller Commands
    Display (detailed) hpacucli> ctrl all show config
    hpacucli> ctrl all show config detail
    Status hpacucli> ctrl all show status
    Cache hpacucli> ctrl slot=0 modify dwc=disable
    hpacucli> ctrl slot=0 modify dwc=enable
    Rescan hpacucli> rescan

    Note: detects newly added devices since the last rescan
    Physical Drive Commands
    Display (detailed) hpacucli> ctrl slot=0 pd all show
    hpacucli> ctrl slot=0 pd 2:3 show detail

    Note: you can obtain the slot number by displaying the controller configuration (see above)
    Status

    hpacucli> ctrl slot=0 pd all show status
    hpacucli> ctrl slot=0 pd 2:3 show status

    Erase hpacucli> ctrl slot=0 pd 2:3 modify erase
    Blink disk LED hpacucli> ctrl slot=0 pd 2:3 modify led=on
    hpacucli> ctrl slot=0 pd 2:3 modify led=off
    Logical Drive Commands
    Display (detailed) hpacucli> ctrl slot=0 ld all show [detail]
    hpacucli> ctrl slot=0 ld 4 show [detail]
    Status hpacucli> ctrl slot=0 ld all show status
    hpacucli> ctrl slot=0 ld 4 show status
    Blink disk LED hpacucli> ctrl slot=0 ld 4 modify led=on
    hpacucli> ctrl slot=0 ld 4 modify led=off
    re-enabling failed drive hpacucli> ctrl slot=0 ld 4 modify reenable forced
    Create # logical drive - one disk
    hpacucli> ctrl slot=0 create type=ld drives=1:12 raid=0

    # logical drive - mirrored
    hpacucli> ctrl slot=0 create type=ld drives=1:13,1:14 size=300 raid=1

    # logical drive - raid 5
    hpacucli> ctrl slot=0 create type=ld drives=1:13,1:14,1:15,1:16,1:17 raid=5

    Note:
    drives - specific drives, all drives or unassigned drives
    size - size of the logical drive in MB
    raid - type of raid 0, 1 , 1+0 and 5
    Remove hpacucli> ctrl slot=0 ld 4 delete
    Expanding hpacucli> ctrl slot=0 ld 4 add drives=2:3
    Extending hpacucli> ctrl slot=0 ld 4 modify size=500 forced
    Spare hpacucli> ctrl slot=0 array all add spares=1:5,1:7

    Wednesday, August 22. 2012

    LSI SAS status tool

    *nix If you have LSI SAS attached drives with FusionMPT then you can monitor it with this: http://hwraid.le-vert.net/wiki/LSIFusionMPTSAS2#a2.Linuxkerneldrivers
    There is a repo: http://hwraid.le-vert.net/wiki/DebianPackages

    #> apt-get install sas2ircu-status

    then:

    #>sas2ircu-status
    -- Controller informations --
    -- ID | Model
    c0 | SAS2008

    -- Arrays informations --
    -- ID | Type | Size | Status
    c0u0 | RAID1 | 1907G | Okay (OKY)

    -- Disks informations
    -- ID | Model | Status
    c0u0p0 | ST32000644NS (9WM3BMY3) | Optimal (OPT)
    c0u0p1 | ST32000644NS (9WM3F3XK) | Optimal (OPT)

    or

    #> sas2ircu-status --nagios
    RAID OK - Arrays: OK:1 Bad:0 - Disks: OK:2 Bad:0
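Since the tool has a --nagios mode, it's trivial to poll it from cron even without Nagios; a sketch (the schedule, cron file name and mail command are assumptions):

```
# /etc/cron.d/raid-check - mail root if the LSI array is not OK
0 * * * * root sas2ircu-status --nagios | grep -q '^RAID OK' || sas2ircu-status 2>&1 | mail -s "RAID problem on `hostname`" root
```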

    Wednesday, July 25. 2012

    Migrating contacts from IPhone to Android

    If you want to use GMail sync - then it's easy.
    Simply sync from the IPhone to GMail then on the android sync back.

    If you don't want to use the Gmail option - it turned out to be pretty tough to transfer them.
    I've used the Export Contacts 1.6 app on the iPhone - it starts a service, and then from any browser you can export contacts as vCard, CSV or PDF. vCard has two formats: a single vCard, and a ZIP with many vCards (the Outlook option).
    After I downloaded a single vCard file with all my contacts, I uploaded the file to another webserver and opened the direct URL on the Android phone (with Firefox, if that matters), and it asked me whether to open or import the vCard.
    I told it to import the vCard file, and voila - all my contacts are now there with all fields. Birthdays are kind of crappy and pics are missing (Export Contacts didn't export the pics)...

    ps. If your iPhone has a broken display and a lock passcode, and you can't unlock it to sync with iTunes, then DFU mode will do the trick. Hold the home button and sleep button for 10 seconds, then release the sleep button while continuing to hold the home button. iTunes should now show the message that a phone has been detected in recovery mode.

    P.S. I imported the contacts like this, but noticed some of them (about 50% of ~600) were missing. Well... I ended up installing a http://funambol.com/ server + Outlook plugin + iPhone app + Android app - now I have my contacts transferred as expected, and I also have a 'backup' place (my custom funambol server).

    Friday, April 6. 2012

    Screen automatic startup

    *nix Have you ever wondered how to start up your scripts in screen upon boot?
    I wondered for a while, googled a few times, and when I found nothing nice I wrote this simple script.

    It has few nice features:

    - can run screen as a given user
    - checks that the screen/session is not already started
    - cleans up stale pid files
    - it's a Debian startup script
    - reads the command and user to run as from config files in the $CFG dir
    - sets the session name as defined in config. !new!

    Comments and bugs are welcome to valqk to lozenetz dt net

    Sample config /etc/screen-startup/run_site.cfg:

    SCRIPT=/path/to/cron/script.sh
    USER=siteuser
    SCREEN_NAME=site_cronjob


    Script name: screen-startup

    #!/bin/sh
    # /etc/init.d/screen-startup
    #
    ### BEGIN INIT INFO
    # Provides: screen-startup
    # Required-Start: screen-cleanup
    # Required-Stop:
    # Default-Start: 2 3 4 5
    # Default-Stop: 0 1 6
    # Short-Description: Start daemon at boot time
    # Description: Enable service provided by daemon.
    ### END INIT INFO
    [ -z "$CFG" ] || ! [ -d "$CFG" ] && CFG='/etc/screen-startup/';
    # Carry out specific functions when asked to by the system
    startScreen() {
    echo "Starting screens..."
    for script in $CFG/*.cfg;
    do
    SCRIPT=`grep SCRIPT= $script|cut -f2 -d=`;
    USER=`grep USER= $script|cut -f2 -d=`;
    SCREEN_NAME=`grep SCREEN_NAME= $script|cut -f2 -d=`;
    if [ -n "$SCRIPT" ] && [ -n "$USER" ]; then
    if [ "x${SCREEN_NAME}" = "x" ]; then
    sessName="`echo $SCRIPT|sed -e 's%/%_%g'`-$USER-AS"
    else
    sessName="${SCREEN_NAME}";
    fi
    if [ -f /var/run/screen/$sessName.pid ]; then
    sessPid=`cat /var/run/screen/$sessName.pid`;
    [ "x$sessPid" != "x" ] && [ `ps -p $sessPid|wc -l` -gt 1 ] && echo "$sessName already started ($sessPid)!!!" && continue;
    echo "cleaning stale pid file: $sessName.pid"
    rm /var/run/screen/$sessName.pid
    fi
    echo -n "Screen $SCRIPT for user $USER..."
    /bin/su -c "/usr/bin/screen -dmS $sessName $SCRIPT" $USER
    screenPid=`ps ax|grep "$sessName"|grep "$SCRIPT"|grep -v grep|awk '{print $1}'`
    echo $screenPid > /var/run/screen/$sessName.pid
    echo "done.";
    fi
    done
    }
    stopScreen() {
    echo "Stopping screens..."
    for script in $CFG/*.cfg;
    do
    SCRIPT=`grep SCRIPT= $script|cut -f2 -d=`;
    USER=`grep USER= $script|cut -f2 -d=`;
    SCREEN_NAME=`grep SCREEN_NAME= $script|cut -f2 -d=`;
    sessName="`echo $SCRIPT|sed -e 's%/%_%g'`-$USER-AS"
    if [ "x${SCREEN_NAME}" = "x" ]; then
    sessName="`echo $SCRIPT|sed -e 's%/%_%g'`-$USER-AS"
    else
    sessName="${SCREEN_NAME}";
    fi
    if [ -f /var/run/screen/$sessName.pid ]; then
    pidOfScreen=`cat /var/run/screen/$sessName.pid|cut -f 1 -d' '`;
    pidOfBash=`cat /var/run/screen/$sessName.pid|cut -f 2 -d' '`;
    if [ "x$pidOfBash" != "x" ] && [ `ps -p $pidOfBash|wc -l` -lt 2 ]; then
    echo "Missing process $pidOfBash for screen $pidOfScreen. Cleaning up stale run file."
    rm /var/run/screen/$sessName.pid;
    continue;
    else
    echo -n "Screen: $SCRIPT for user $USER..."
    kill $pidOfBash $pidOfScreen;
    echo "done."
    rm /var/run/screen/$sessName.pid;
    fi
    fi
    done

    }
    case "$1" in
    start)
    startScreen;
    ;;
    stop)
    stopScreen;
    ;;
    restart)
    stopScreen;
    startScreen;
    ;;
    *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
    esac
    exit 0


    p.s. Edit: rev.1 of the script now supports SCREEN_NAME in config. When set, you can resume the screen with screen -r SCREEN_NAME (or part of it).

    Sunday, October 30. 2011

    DRBD 3 machines stacked setup

    *nix This is a copy/paste from http://www.howtoforge.com/drbd-8.3-third-node-replication-with-debian-etch plus split-brain fixes.

    WARNING: DO NOT do this setup unless you're OK with the speed to the remote node. The max speed you will get from the drbd device is the speed at which you can push data to the 3rd node.
    --------------




    DRBD 8.3 Third Node Replication With Debian Etch


    Installation and Set Up Guide for DRBD 8.3 + Debian Etch


    The Third Node Setup


    by Brian Hellman


    The recent release of DRBD 8.3 now includes The Third Node feature as a freely available component. This document will cover the basics of setting up a third node on a standard Debian Etch installation. At the end of this tutorial you will have a DRBD device that can be utilized as a SAN, an iSCSI target, a file server, or a database server.



    Note: LINBIT support customers can skip Section 1 and utilize the package repositories.


    LINBIT has hosted third node solutions available, please contact them at sales_us at linbit.com for more information.


     


    Preface:



    The setup is as follows:



    • Three servers: alpha, bravo, foxtrot

    • alpha and bravo are the primary and secondary local nodes

    • foxtrot is the third node which is on a remote network

    • Both alpha and bravo have interfaces on the 192.168.1.x network (eth0) for external connectivity.

    • A crossover link exists on alpha and bravo (eth1) for replication using 172.16.6.10 and .20

    • Heartbeat provides a virtual IP of 192.168.5.2 to communicate with the disaster recovery node located in a geographically diverse location


     


    Section 1: Installing The Source


    These steps need to be done on each of the 3 nodes.



    Prerequisites:



    • make

    • gcc

    • glibc development libraries

    • flex scanner generator

    • headers for the current kernel


    Enter the following at the command line as a privileged user to satisfy these dependencies:


    apt-get install make gcc libc6 flex linux-headers-`uname -r` libc6-dev linux-kernel-headers


    Once the dependencies are installed, download DRBD. The latest version can always be obtained at http://oss.linbit.com/drbd/. Currently, it is 8.3.



    cd /usr/src/

    wget http://oss.linbit.com/drbd/8.3/drbd-8.3.0.tar.gz


    After the download is complete:



    • Uncompress DRBD

    • Enter the source directory

    • Compile the source

    • Install DRBD



    tar -xzvf drbd-8.3.0.tar.gz

    cd /usr/src/drbd-8.3.0/

    make clean all

    make install


    Now load and verify the module:



    modprobe drbd

    cat /proc/drbd


    version: 8.3.0 (api:88/proto:86-89)

    GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@alpha, 2009-02-05 10:36:11


    Once this has been completed on each of the three nodes, continue to next section.



     


    Section 2: Heartbeat Configuration


    Setting up a third node entails stacking DRBD on top of DRBD. A virtual IP is needed for the third node to connect to, for this we will set up a simple Heartbeat v1 configuration. This section will only be done on alpha and bravo.


    Install Heartbeat:



    apt-get install heartbeat


    Edit the authkeys file:


    vi /etc/ha.d/authkeys


    auth 1
    1 sha1 yoursupersecretpasswordhere

    Once the file has been created, change the permissions on the file. Heartbeat will not start if this step is not followed.


    chmod 600 /etc/ha.d/authkeys


    Copy the authkeys file to bravo:


    scp /etc/ha.d/authkeys bravo:/etc/ha.d/


    Edit the ha.cf file:


    vi /etc/ha.d/ha.cf


    debugfile /var/log/ha-debug
    logfile /var/log/ha-log
    logfacility local0
    keepalive 1
    deadtime 10
    warntime 5
    initdead 60
    udpport 694
    ucast eth0 192.168.1.10
    ucast eth0 192.168.1.20
    auto_failback off
    node alpha
    node bravo

    Copy the ha.cf file to bravo:


    scp /etc/ha.d/ha.cf bravo:/etc/ha.d/


    Edit the haresources file, the IP created here will be the IP that our third node refers to.


    vi /etc/ha.d/haresources


    alpha IPaddr::192.168.5.2/24/eth0

    Copy the haresources file to bravo:


    scp /etc/ha.d/haresources bravo:/etc/ha.d/


    Start the heartbeat service on both servers to bring up the virtual IP:


    alpha:/# /etc/init.d/heartbeat start


    bravo:/# /etc/init.d/heartbeat start


    Heartbeat will bring up the new interface (eth0:0).


    Note: It may take heartbeat up to one minute to bring the interface up.



    alpha:/# ifconfig eth0:0


    eth0:0 Link encap:Ethernet HWaddr 00:08:C7:DB:01:CC

    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1


     


    Section 3: DRBD Configuration


    Configuration for DRBD is done via the drbd.conf file. This needs to be the same on all nodes (alpha, bravo, foxtrot). Please note that the usage-count is set to yes, which means it will notify Linbit that you have installed DRBD. No personal information is collected. Please see this page for more information :


    global { usage-count yes; }

    resource data-lower {
    protocol C;
    net {
    shared-secret "LINBIT";
    }
    syncer {
    rate 12M;
    }

    on alpha {
    device /dev/drbd1;
    disk /dev/hdb1;
    address 172.16.6.10:7788;
    meta-disk internal;
    }

    on bravo {
    device /dev/drbd1;
    disk /dev/hdd1;
    address 172.16.6.20:7788;
    meta-disk internal;
    }
    }

    resource data-upper {
    protocol A;
    syncer {
    after data-lower;
    rate 12M;
    al-extents 513;
    }
    net {
    shared-secret "LINBIT";
    }
    stacked-on-top-of data-lower {
    device /dev/drbd3;
    address 192.168.5.2:7788; # IP provided by Heartbeat
    }

    on foxtrot {
    device /dev/drbd3;
    disk /dev/sdb1;
    address 192.168.5.3:7788; # Public IP of the backup node
    meta-disk internal;
    }
    }

     


    Section 4: Preparing The DRBD Devices


    Now that the configuration is in place, create the metadata on alpha and bravo.



    alpha:/usr/src/drbd-8.3.0# drbdadm create-md data-lower


    Writing meta data...

    initializing activity log

    NOT initialized bitmap

    New drbd meta data block successfully created.



    bravo:/usr/src/drbd-8.3.0# drbdadm create-md data-lower


    Writing meta data...

    initialising activity log

    NOT initialized bitmap

    New drbd meta data block successfully created.


    Now start DRBD on alpha and bravo:


    alpha:/usr/src/drbd-8.3.0# /etc/init.d/drbd start


    bravo:/usr/src/drbd-8.3.0# /etc/init.d/drbd start


    Verify that the lower level DRBD devices are connected:



    cat /proc/drbd


    version: 8.3.0 (api:88/proto:86-89)

    GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@alpha, 2009-02-05 10:36:11

    0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r---

    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:19530844


    Tell alpha to become the primary node:


    NOTE: As the command states, this is going to overwrite any data on bravo: Now is a good time to go and grab your favorite drink.


    alpha:/# drbdadm -- --overwrite-data-of-peer primary data-lower

    alpha:/# cat /proc/drbd


    version: 8.3.0 (api:88/proto:86-89)

    GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@alpha, 2009-02-05 10:36:11

    0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---

    ns:3088464 nr:0 dw:0 dr:3089408 al:0 bm:188 lo:23 pe:6 ua:53 ap:0 ep:1 wo:b oos:16442556

    [==>.................] sync'ed: 15.9% (16057/19073)M

    finish: 0:16:30 speed: 16,512 (8,276) K/sec


    After the data sync has finished, create the meta-data on data-upper on alpha, followed by foxtrot.


    Note the resource is data-upper and the --stacked option is on alpha only.



    alpha:~# drbdadm --stacked create-md data-upper


    Writing meta data...

    initialising activity log

    NOT initialized bitmap

    New drbd meta data block successfully created.

    success



    foxtrot:/usr/src/drbd-8.3.0# drbdadm create-md data-upper


    Writing meta data...

    initialising activity log

    NOT initialized bitmap

    New drbd meta data block successfully created.


    Bring up the stacked resource, then make alpha the primary of data-upper:


    alpha:/# drbdadm --stacked adjust data-upper


    foxtrot:~# drbdadm adjust data-upper

    foxtrot:~# cat /proc/drbd


    version: 8.3.0 (api:88/proto:86-89)

    GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@foxtrot, 2009-02-02 10:28:37

    1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent A r---

    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:19530208


    alpha:~# drbdadm --stacked -- --overwrite-data-of-peer primary data-upper

    alpha:~# cat /proc/drbd


    version: 8.3.0 (api:88/proto:86-89)

    GIT-hash: 9ba8b93e24d842f0dd3fb1f9b90e8348ddb95829 build by root@alpha, 2009-02-05 10:36:11

    0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r---

    ns:19532532 nr:0 dw:1688 dr:34046020 al:1 bm:1196 lo:156 pe:0 ua:0 ap:156 ep:1 wo:b oos:0

    1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent A r---

    ns:14512132 nr:0 dw:0 dr:14512676 al:0 bm:885 lo:156 pe:32 ua:292 ap:0 ep:1 wo:b oos:5018200

    [=============>......] sync'ed: 74.4% (4900/19072)M

    finish: 0:07:06 speed: 11,776 (10,992) K/sec


    Drink time again!


    After the sync is complete, access your DRBD block device via /dev/drbd3. This will write to both local nodes and the remote third node. In your Heartbeat configuration you will use the "drbdupper" script to bring up your /dev/drbd3 device. Have fun!



    DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.






    If you ever get a split-brain (two nodes are in StandAlone and refuse to connect, or one is WFConnection and the other StandAlone - that's split-brain!), here is how to recover.
    On the node that has the outdated data do:

    drbdadm secondary <resource>
    drbdadm -- --discard-my-data connect <resource>

    on the node that has the fresh data:
    drbdadm --stacked connect <resource>

    Wednesday, August 10. 2011

    PKGSRC NetBSD update/upgrade Howto

    NetBSD 1. Fetch the pkgsrc:

    1.1. SUP way:
    sup -v /path/to/your/supfile.

    and this is a short sample supfile:
    nbsd# cat /root/sup-current
    current release=pkgsrc host=sup2.fr.NetBSD.org hostbase=/home/sup/supserver \
    base=/usr prefix=/usr backup use-rel-suffix compress delete

    1.2. CVS way:
    $ export CVSROOT="anoncvs@anoncvs.NetBSD.org:/cvsroot"
    $ export CVS_RSH="ssh"
    To fetch a specific pkgsrc stable branch from scratch, run:

    $ cd /usr
    $ cvs checkout -r pkgsrc-20xxQy -P pkgsrc
    Where pkgsrc-20xxQy is the stable branch to be checked out, for example, “pkgsrc-2009Q1”

    This will create the directory pkgsrc/ in your /usr/ directory and all the package source will be stored under /usr/pkgsrc/.

    To fetch the pkgsrc current branch, run:

    $ cd /usr
    $ cvs checkout -P pkgsrc


    2. Update the pkgsrc repository:

    2.1. SUP way

    sup -v /root/sup-current

    2.2. CVS way:

    $ export CVSROOT="anoncvs@anoncvs.NetBSD.org:/cvsroot"
    $ export CVS_RSH="ssh"
    $ cd /usr/pkgsrc
    $ cvs update -dP

    When updating pkgsrc, the CVS program keeps track of the branch you selected. But if you, for whatever reason, want to switch from the stable branch to the current one, you can do it by adding the option “-A” after the “update” keyword. To switch from the current branch back to the stable branch, add the “-rpkgsrc-2009Q3” option.
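    The branch switches described above look like this in practice (the stable branch name is just an example):

```shell
cd /usr/pkgsrc
# switch from a stable branch to pkgsrc current
cvs update -A -dP
# switch back to a stable branch
cvs update -rpkgsrc-2009Q3 -dP
```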



    3. Updating a package:

    cd /usr/pkgsrc/package/
    make update

    4. Update packages on a remote server. If you already have them installed, check which ones need updating:
    security checks:
    /usr/sbin/pkg_admin -K /var/db/pkg fetch-pkg-vulnerabilities
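    Once the vulnerability list is fetched, pkg_admin's audit command will show which installed packages are affected (a sketch; audit is the standard companion to fetch-pkg-vulnerabilities):

```shell
# list installed packages with known vulnerabilities
/usr/sbin/pkg_admin -K /var/db/pkg audit
```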

    then do:
    pkg_add -uu http://pkgserver/path/to/Pkg.tgz

    this will update the package from the remote server together with all its dependent packages!

    some links:
    http://imil.net/pkgin/

    http://pkgsrc.se/pkgtools/pkg_rolling-replace

    http://wiki.netbsd.org/tutorials/pkgsrc/pkg_comp_pkg_chk/


    To install packages directly from an FTP or HTTP server, run the following commands in a Bourne-compatible shell (be sure to su to root first):

    # PATH="/usr/pkg/sbin:$PATH"
    # PKG_PATH="ftp://ftp.NetBSD.org/pub/pkgsrc/packages/OPSYS/ARCH/VERSIONS/All"
    # export PATH PKG_PATH
    # pkg_add package.

    OR directly:

    # pkg_add http://...../

    Monday, April 11. 2011

    CheatSheets gathered

    Technical Here I'll post all the 'cheatsheets' (cheat sheets) - documents with VERY useful information.
    These are sort of quick howtos.

    Regular Expressions Cheat Sheet (V2) (source)
    PHP Cheat Sheet (V2) (source)
    Mdadm Cheat Sheet

    Wednesday, March 2. 2011

    DomPDF with UNICODE UTF-8 Support! At last!

    PHP WildWildWeb A colleague of mine spent some time and was able to make the DomPDF library render almost ALL UTF-8 alphabets.
    Until now I was using TCPDF. It has supported UTF-8 for a long time, but has a crappy way of generating documents - VERY simple HTML support and A LOT of calls to internal methods just to make the document look like the HTML page.

    As far as he explained it to me, the problem was generating proper fonts.

    DomPDF with UTF-8 Support

    UPDATE: Because DomPDF is "the memory MONSTER" (a 30-page table eats up about 1.5 GB! GEE!!!) we are now using wkhtmltopdf. It's AMAZINGLY fast and keeps the memory footprint low (the same page that took about 2-3 min and 1.5 GB of RAM with dompdf takes wkhtmltopdf about 100-200 MB and 20-40 sec).
    The funny thing is that it's WebKit based and renders PERFECTLY everything on each page I've tested it with.
    It's simply SWEET!

    Friday, February 11. 2011

    Debian Squeeze XEN basic setup

    Install Xen:

    #> aptitude install xen-hypervisor-4.0-amd64 linux-image-xen-amd64 xen-tools

    Squeeze uses GRUB 2 - the defaults are wrong for Xen.
    The Xen hypervisor should be the first entry, so you should do this:

    #> mv /etc/grub.d/10_linux /etc/grub.d/100_linux

    After that, disable the OS prober so that you don't get entries for virtual machines installed on an LVM partition.

    #> echo "GRUB_DISABLE_OS_PROBER=true" >> /etc/default/grub
    #> update-grub2

    Xen tries to save the state of the VMs when doing a Dom0 shutdown.
    This save/restore has never been successful for me, so I disable it in /etc/default/xendomains to make sure the machines get shut down too:

    XENDOMAINS_RESTORE=false
    XENDOMAINS_SAVE=""

    Enable the network bridge in /etc/xen/xend-config.sxp (uncomment existing line).
    I also set some other useful params (for me):

    (network-script network-bridge)
    (dom0-min-mem 128)
    (dom0-cpus 1)
    (vnc-listen '127.0.0.1')
    (vncpasswd '')


    Add an independent wallclock sysctl in dom0

    #> echo xen.independent_wallclock=1 >> /etc/sysctl.conf

    and also in the domUs. Set up an hourly ntpdate run (for example) in the domUs.
    This will save you a lot of clock-sync headaches.
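    For example, a crontab entry in each domU could look like this (the NTP server is a placeholder):

```shell
# m h dom mon dow command - sync the clock once an hour
0 * * * * /usr/sbin/ntpdate -s pool.ntp.org
```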

    Config /etc/xen-tools/xen-tools.conf contains default values the xen-create-image script will use. Most important are:

    # Virtual machine disks are created as logical volumes in volume group universe (LVM storage is much faster than file)
    lvm = vg001

    install-method = debootstrap

    size = 20Gb # Disk image size.
    memory = 256Mb # Memory size
    swap = 4Gb # Swap size
    fs = ext3 # use the EXT3 filesystem for the disk image.
    dist = `xt-guess-suite-and-mirror --suite` # Default distribution to install.

    gateway = 1.2.3.4
    netmask = 255.255.255.0

    # When creating an image, interactively setup root password
    passwd = 1

    # I think this option was on by default, but it doesn't hurt to mention it.
    mirror = `xt-guess-suite-and-mirror --mirror`

    mirror_squeeze = http://ftp.bg.debian.org/debian/

    # let xen-create-image use pygrub, so that the grub from the VM is used, which means you no longer need to store kernels outside the VM's. Keeps this very flexible.
    pygrub=1

    scsi=1

    Script to create vms (copied from http://blog.bigsmoke.us/):

    #!/bin/bash

    dist=$1
    hostname=$2
    ip=$3

    if [ -z "$hostname" -o -z "$ip" -o -z "$dist" ]; then
       echo "No dist, hostname or ip specified"
       echo "Usage: $0 dist hostname ip"
       exit 1
    fi

    # --scsi is specified because when creating maverick for instance, the xvda disk that is used can't be accessed.
    # The --scsi flag causes names like sda to be used.
    xen-create-image --hostname $hostname --ip $ip --vcpus 2 --pygrub --scsi --dist $dist


    Usage of the script should be simple. When creating a VM named ‘host’, start it and attach console:

    xm create -c /etc/xen/host.cfg

    You can go back to Dom0 console with ctrl-].
    Place a symlink in /etc/xen/auto to start the VM on boot.
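    For the example VM named 'host' that would be:

```shell
ln -s /etc/xen/host.cfg /etc/xen/auto/host.cfg
```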

    As a side note: when creating a lenny VM, the script installs a Xen kernel in the VM.
    When installing maverick, it installs a normal kernel.
    Normal kernels since version 2.6.32 (I believe) support pv_ops, meaning they can run on hypervisors like Xen's.

    Friday, February 4. 2011

    Ubuntu encrypted home - lvm way

    *nix 1. Create lvm partition. (sdaXX)
    # fdisk /dev/sda
    and then create one partition for root, one for swap and the rest for home.
    2. Create the physical volume.
    # pvcreate /dev/sda3
    3. Create logical volume
    # lvcreate -n crypted-home -L 200G vg0
    (you can leave free space if you want to be able to add additional partitions later)
    4. Install needed tools
    # aptitude -y install cryptsetup initramfs-tools hashalot lvm2
    # modprobe dm-crypt
    # modprobe dm-mod
    5. Check for bad blocks (optional)
    # /sbin/badblocks -c 10240 -s -w -t random -v /dev/vg0/crypted-home
    6. Set up the crypted home partition with LUKS
    # cryptsetup -y --cipher aes-cbc-essiv:sha256 --key-size 256 luksFormat /dev/vg0/crypted-home
    enter uppercase YES!!

    7. Open the created crypted partition
    # cryptsetup luksOpen /dev/vg0/crypted-home home
    8. Create filesystem on the crypted home device
    # mke2fs -j -O dir_index,filetype,sparse_super /dev/mapper/home
    9. Mount and copy home files.
    # mount -t ext3 /dev/mapper/home /mnt
    # cp -axv /home/* /mnt/
    # umount /mnt
    10. Set up the system to open/mount the crypted home.
    Insert in /etc/fstab :
    #
    /dev/mapper/home /home ext3 defaults 1 2

    After that, add an entry in /etc/crypttab:

    #
    home /dev/vg0/crypted-home none luks
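    Before relying on it, a quick sanity check after a reboot doesn't hurt (a sketch; names as in the steps above):

```shell
# the LUKS mapping should show up as active
cryptsetup status home
# and /home should be mounted from the mapper device
mount | grep /dev/mapper/home
```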

    Tuesday, November 2. 2010

    NetBSD OS update/upgrade quick howto.

    NetBSD 1. Fetch/Update the OS sources.
    refs: NetBSD Docs (and NetBSD guide ; Fetching sources)

    Fetch the source if you don't have it:
    $ cd /usr
    $ export CVS_RSH=ssh 
    $ cvs -d anoncvs@anoncvs.NetBSD.org:/cvsroot co -r netbsd-5-0-2 -P src
    

    Update the source if you already have it:
    $ cd /usr/src
    $ export CVS_RSH=ssh 
    $ cvs update -dP
    

    If you are fetching the sources from scratch use:
    $ cd /usr
    $ export CVS_RSH=ssh 
    $ cvs -d anoncvs@anoncvs.NetBSD.org:/cvsroot co -r netbsd-5-1 -P src
    

    Hint: If you are using 5-0 and want to update to 5-1, use
    $ cvs update -r netbsd-5-1 -dP
    

    2. Create obj dir and build the tools:
    $ mkdir /usr/obj /usr/tools
    $ cd /usr/src
    $ ./build.sh -O /usr/obj -T /usr/tools -U -u tools
    

    3. Compile brand new userland:
    NetBSD page says: Please always refer to build.sh -h and the files UPDATING and BUILDING for details - it's worth it, there are many options that can be set on the command line or in /etc/mk.conf.
    $ cd /usr/src
    $ ./build.sh -O ../obj -T ../tools -U distribution
    

    4. Compile brand New Kernel:
    $ cd /usr/src
    $ ./build.sh -O ../obj -T ../tools kernel=KERNEL
    

    KERNEL is a kernel options file located in: /usr/src/sys/arch/amd64/conf/

    I have XEN3_DOMU there that holds all my xen kernels compile options.
    You can also find GENERIC and others there.

    5. Install Kernel

    Installing the new kernel (copy it in Dom0), rebooting (to ensure that the new kernel works) and installing the new userland are the final steps of the updating procedure:
    $ cd /usr/obj/sys/arch/`uname -m`/compile/XEN3_DOMU/
    $ scp netbsd Dom0 machine...
    

    Go and change the kernel in the Dom0 to load the new one.
    reboot the machine.

    Or on native machines:
    $ cd /usr/src
    $ su
    # mv /netbsd /netbsd.old
    # mv /usr/obj/sys/arch/`uname -m`/compile/KERNEL/netbsd /
    # shutdown -r now
    


    6. Install the new userland and reboot again to be sure it'll work. ;-)
    After we've rebooted, we are sure all calls into the new userland will be handled by the new kernel.
    Now we'll install the new userland.
    $ cd /usr/src
    $ su
    # ./build.sh -O ../obj -T ../tools -U install=/ 
    #reboot
    

    7. Build a complete release so we can copy it to all other machines and upgrade with sysinst.
    $ ./build.sh -O ../obj -T ../tools -U -u -x release
    
    The resulting install sets will be in the /usr/obj/releasedir/ directory.



    When you've tested on the package server, install/update on all other machines.


    1. Make a backup
    2. Fetch a new kernel and the binary sets from the release dir and store them /some/where/
    3. Install the kernel (in XEN dom0)!
    4. Install the sets except etc.tgz and xetc.tgz!!
       # cd /
       # pax -zrpe -f /some/where/set.tgz
       # ...
       # ...
    
    5. Run etcupdate to merge important changes:
       # cd /
       # etcupdate -s /some/where/etc.tgz -s /some/where/xetc.tgz
    
    6. Upgrade finished, time to reboot.

    Friday, May 7. 2010

    Backup xen lvm/image disks. xenBackup script.

    *nix Long time no write.

    I'm trying to migrate all of my FreeBSDs to Xen+NetBSD. (I gave up on this OS. You can't release a STABLE that's not that stable. It's a long story but in short, I had a sleepless night after deploying it to production. The problem - when it got real-world load it hung with a kernel panic and no auto-reset about every 5-15 mins. WTF? The devs asked me for a dump and told me that maybe they would find the problem. Sorry. That sucks and is not an option for a production system used by thousands of people. Goodbye FreeBSD, for at least 5 years.)

    After successfully running Xen for some time, it's time to think about an automated backup that takes care of everything, instead of writing a short shell script for each Xen backup.
    A quick search found this xenBackup script, which almost suits my needs.
    I didn't like that it mounted the LVM read-only and didn't use snapshots.
    The second thing I disliked was that it worked only with LVMs, while I also have sparse Xen images (for small machines that don't need fast disk access and have only 1-2 services running in memory).

    I've modified the script and now the xenBackup script supports:
    - creating backup from lvm snapshots
    - creating backup from disk.img file
    - dynamic determination of the disk type and path ($hostname-disk for LVMs and disk.img for sparse images) (BE WARNED: only *-disk and disk.img will be backed up!)

    I'm using tar, so I haven't tested with rsync or rdiff-backup.
    I'm using snapshots. I never tested with a read-only mounted LVM.

    so, here is the code:

    #!/bin/sh
    #
    #   Copyright John Quinn, 2008
    #   Copyright Anton Valqkoff, 2010
    #
    #   This program is free software: you can redistribute it and/or modify
    #   it under the terms of the GNU General Public License as published by
    #   the Free Software Foundation, either version 3 of the License, or
    #   (at your option) any later version.
    #
    #   This program is distributed in the hope that it will be useful,
    #   but WITHOUT ANY WARRANTY; without even the implied warranty of
    #   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    #   GNU General Public License for more details.
    #
    #   You should have received a copy of the GNU General Public License
    #   along with this program.  If not, see <http://www.gnu.org/licenses/>.
    
    #
    # xenBackup - Backup Xen Domains
    #
    #             Version:    1.0:     Created:  John D Quinn, http://www.johnandcailin.com/john
    #             Version:    1.1:     Added file/lvm recognition. lvm snapshot:  Anton Valqkoff, http://blog.valqk.com/
    #
    
    # initialize our variables
    domains="null"                           # the list of domains to backup
    allDomains="null"                        # backup all domains?
    targetLocation="/root/backup/"                    # the default backup target directory
    mountPoint="/mnt/xen"                    # the mount point to use to mount disk areas
    shutdownDomains=false                    # don't shutdown domains by default
    quiet=false                              # keep the chatter down
    backupEngine=tar                         # the default backup engine
    useSnapshot=true                        # create snapshot of the lvm and use it as backup mount.
    rsyncExe=/usr/bin/rsync                  # rsync executable
    rdiffbackupExe=/usr/bin/rdiff-backup     # rdiff-backup executable
    tarExe=/bin/tar                      # tar executable
    xmExe=/usr/sbin/xm                       # xm executable
    lvmExe=/sbin/lvm
    mountExe=/bin/mount
    grepExe=/bin/grep
    awkExe=/usr/bin/awk
    umountExe=/bin/umount
    cutExe=/usr/bin/cut
    egrepExe=/bin/egrep
    purgeAge="null"                          # age at which to purge increments
    globalBackupResult=0                     # success status of overall job
    #valqk: xm list --long ns.hostit.biz|grep -A 3 device|grep vbd -A 2|grep uname|grep -v swap|awk '{print $2}'
    
    # settings for logging (syslog)
    loggerArgs=""                            # what extra arguments to the logger to use
    loggerTag="xenBackup"                    # the tag for our log statements
    loggerFacility="local3"                  # the syslog facility to log to
    
    # trap user exit and cleanup
    trap 'cleanup;exit 1' 1 2
    
    cleanup()
    {
       ${logDebug} "Cleaning up"
       #check if file or lvm.if lvm and -snap remove it.
       mountType=`${mountExe}|${grepExe} ${mountPoint}|${awkExe} '{print $1}'`;
       [ -f ${mountType} ] && mountType="file";
       cd / ; ${umountExe} ${mountPoint}
       if [ "${mountType}" != "file" ] && [ "${useSnapshot}" = "true" ]; then
          #let's make sure we are removing snapshot!
          if [ `${mountExe}|${grepExe} -snap|wc -l` -gt 0 ]; then
             ${lvmExe} lvremove -f ${mountType}
          fi
       fi
    
    
       # restart the domain
       if test ${shutdownDomains} = "true"
       then
          ${logDebug} "Restarting domain"
          ${xmExe} create ${domain}.cfg > /dev/null
       fi
    }
    
    # function to print a usage message and bail
    usageAndBail() {
       cat << EOT
    Usage: xenBackup [OPTION]...
    Backup xen domains to a target area. different backup engines may be specified to
    produce a tarfile, an exact mirror of the disk area or a mirror with incremental backup.
    
       -d      backup only the specified DOMAINs (comma separated list)
       -t      target LOCATION for the backup e.g. /tmp or root@www.example.com:/tmp
               (not used for tar engine)
       -a      backup all domains
       -s      shutdown domains before backup (and restart them afterwards)
       -q      run in quiet mode, output still goes to syslog
       -e      backup ENGINE to use, either tar, rsync or rdiff
       -p      purge increments older than TIME_SPEC. this option only applies
               to rdiff, e.g. 3W for 3 weeks. see "man rdiff-backup" for
               more information
    
    Example 1
       Backup all domains to the /tmp directory
       $ xenBackup -a -t /tmp
    
    Example 2
       Backup domain: "wiki" using rsync to directory /var/xenImages on machine backupServer,
       $ xenBackup -e rsync -d wiki -t root@backupServer:/var/xenImages
    
    Example 3
       Backup domains "domainOne" and "domainTwo" using rdiff purging old increments older than 5 days
       $ xenBackup -e rdiff -d "domainOne, domainTwo" -p 5D
    
    EOT
    
       exit 1;
    }
    
    # parse the command line arguments
    while getopts p:e:qsad:t:h o
    do     case "$o" in
            q)     quiet="true";;
            s)     shutdownDomains="true";;
            a)     allDomains="true";;
            d)     domains="$OPTARG";;
            t)     targetLocation="$OPTARG";;
            e)     backupEngine="$OPTARG";;
            p)     purgeAge="$OPTARG";;
            h)     usageAndBail;;
            [?])   usageAndBail
           esac
    done
    
    # if quiet don't output logging to standard error
    if test ${quiet} = "false"
    then
       loggerArgs="-s"
    fi
    
    # setup logging subsystem. using syslog via logger
    logCritical="logger -t ${loggerTag} ${loggerArgs} -p ${loggerFacility}.crit"
    logWarning="logger -t ${loggerTag} ${loggerArgs} -p ${loggerFacility}.warning"
    logDebug="logger -t ${loggerTag} ${loggerArgs} -p ${loggerFacility}.debug"
    
    # make sure only root can run our script
    test $(id -u) = 0 || { ${logCritical} "This script must be run as root"; exit 1; }
    
    # make sure that the guest manager is available
    test -x ${xmExe} || { ${logCritical} "xen guest manager (${xmExe}) not found"; exit 1; }
    
    # assemble the list of domains to backup
    if test ${allDomains} = "true"
    then
       domainList=`${xmExe} list | cut -f1 -d" " | egrep -v "Name|Domain-0"`
    else
       # make sure we've got some domains specified
       if test "${domains}" = "null"
       then
          usageAndBail
       fi
    
       # create the domain list by mapping commas to spaces
       domainList=`echo ${domains} | tr -d " " | tr , " "`
    fi
    
    # function to do a "rdiff-backup" of domain
    backupDomainUsingrdiff() {
       domain=$1
       test -x ${rdiffbackupExe} || { ${logCritical} "rdiff-backup executable (${rdiffbackupExe}) not found"; exit 1; }
    
       if test ${quiet} = "false"
       then
          verbosity="3"
       else
          verbosity="0"
       fi
    
       targetSubDir=${targetLocation}/${domain}.rdiff-backup.mirror
    
       # make the targetSubDir if it doesn't already exist
       mkdir ${targetSubDir} > /dev/null 2>&1
       ${logDebug} "backing up domain ${domain} to ${targetSubDir} using rdiff-backup"
    
       # rdiff-backup to the target directory
       ${rdiffbackupExe} --verbosity ${verbosity} ${mountPoint}/ ${targetSubDir}
       backupResult=$?
    
       # purge old increments
       if test ${purgeAge} != "null"
       then
          # purge old increments
          ${logDebug} "purging increments older than ${purgeAge} from ${targetSubDir}"
          ${rdiffbackupExe} --verbosity ${verbosity} --force --remove-older-than ${purgeAge} ${targetSubDir}
       fi
    
       return ${backupResult}
    }
    
    # function to do a "rsync" backup of domain
    backupDomainUsingrsync() {
       domain=$1
       test -x ${rsyncExe} || { ${logCritical} "rsync executable (${rsyncExe}) not found"; exit 1; }
    
       targetSubDir=${targetLocation}/${domain}.rsync.mirror
    
       # make the targetSubDir if it doesn't already exist
       mkdir ${targetSubDir} > /dev/null 2>&1
       ${logDebug} "backing up domain ${domain} to ${targetSubDir} using rsync"
    
       # rsync to the target directory
       ${rsyncExe} -essh -avz --delete ${mountPoint}/ ${targetSubDir}
       backupResult=$?
    
       return ${backupResult}
    }
    
    # function to a "tar" backup of domain
    backupDomainUsingtar ()
    {
       domain=$1
    
       # make sure we can write to the target directory
       test -w ${targetLocation} || { ${logCritical} "target directory (${targetLocation}) is not writeable"; exit 1; }
    
       targetFile=${targetLocation}/${domain}.`date '+%d.%m.%Y'`.$$.tar.gz
       ${logDebug} "backing up domain ${domain} to ${targetFile} using tar"
    
       # tar to the target directory
       cd ${mountPoint}
    
       ${tarExe} pcfz ${targetFile} * > /dev/null
       backupResult=$?
    
       return ${backupResult}
    }
    
    # backup the specified domains
    for domain in ${domainList}
    do
       ${logDebug} "backing up domain: ${domain}"
       [ `${xmExe} list ${domain}|wc -l` -lt 1 ] && { echo "Fatal ERROR!!! ${domain} does not exist or is not running! Exiting."; exit 1; }
    
       # make sure that the domain is shutdown if required
       if test ${shutdownDomains} = "true"
       then
          ${logDebug} "shutting down domain ${domain}"
          ${xmExe} shutdown -w ${domain} > /dev/null
       fi
    
       # unmount mount point if already mounted
       umount ${mountPoint} > /dev/null 2>&1
    
       #inspect domain disks per domain. get only -disk or disk.img.
       #if file:// mount the xen disk read-only, umount after.
       #if lvm create a snapshot mount/umount/erase it.
       xenDiskStr=`${xmExe} list --long ${domain}|${grepExe} -A 3 device|${grepExe} vbd -A 2|${grepExe} uname|${grepExe} -v swap|${awkExe} '{print $2}'|${egrepExe} 'disk.img|-disk'`
       xenDiskType=`echo ${xenDiskStr}|${cutExe} -f1 -d:`;
       xenDiskDev=`echo ${xenDiskStr}|${cutExe} -f2 -d:|${cutExe} -f1 -d')'`;
       test -r ${xenDiskDev} || { ${logCritical} "xen disk area not readable. are you sure that the domain \"${domain}\" exists?"; exit 1; }
       #valqk: if the domain uses a file.img - mount ro (loop allows mount the file twice. wtf!?)
       if [ "${xenDiskType}" = "file" ]; then
          ${logDebug} "Mounting file://${xenDiskDev} read-only to ${mountPoint}"
          ${mountExe} -oloop ${xenDiskDev} ${mountPoint} || { ${logCritical} "mount failed, does mount point (${mountPoint}) exist?"; exit 1; }
          ${mountExe} -oremount,ro ${mountPoint} || { ${logCritical} "mount failed, does mount point (${mountPoint}) exist?"; exit 1; }
       fi
       if [ "${xenDiskType}" = "phy" ] ; then
          if [ "${useSnapshot}" = "true" ]; then
             vgName=`${lvmExe} lvdisplay -c |${grepExe} ${domain}-disk|${grepExe} disk|${cutExe} -f 2 -d:`;
             lvSize=`${lvmExe} lvdisplay ${xenDiskDev} -c|${cutExe} -f7 -d:`;
             lvSize=$((${lvSize}/2/100*15)); # 15% size of lvm in kilobytes
             ${lvmExe} lvcreate -s -n ${vgName}/${domain}-snap -L ${lvSize}k ${xenDiskDev} || { ${logCritical} "creation of snapshot for ${xenDiskDev} failed. exiting." exit 1; }
             ${mountExe} -r /dev/${vgName}/${domain}-snap ${mountPoint} || { ${logCritical} "mount failed, does mount point (${mountPoint}) exist?"; exit 1; }
          else
             ${mountExe} -r ${xenDiskDev} ${mountPoint}
          fi
       fi
    
       # do the backup according to the chosen backup engine
       backupDomainUsing${backupEngine} ${domain}
    
       # make sure that the backup was successful
       if test $? -ne 0
       then
          ${logCritical} "FAILURE: error backing up domain ${domain}"
          globalBackupResult=1
       else
          ${logDebug} "SUCCESS: domain ${domain} backed up"
       fi
         
       # clean up
       cleanup;
    done
    if test ${globalBackupResult} -eq 0
    then
       ${logDebug} "SUCCESS: backup of all domains completed successfully"
    else
       ${logCritical} "FAILURE: backup completed with some failures"
    fi
    
    exit ${globalBackupResult}
    
    

    Thursday, November 26. 2009

    Setup SVN repositories only for specified users over ssh. OpenSSH limit only one command execution.

    *nix Just to blog this. I'll need it in future.
    If you have an svn repository server and you are using svn+ssh for checkouts and all svn actions, you will want users to have access only to predefined repos and not to any shell or anything else.
    I've done this by making symlinks in their homes and using an authorized_keys file that looks like this:

    command="svnserve -t --tunnel-user=user -r /home/user",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAAB3Nz1...KEY HERE....

    this way you can lock them to use only svnserve, and it will lock them to check out only what's in their home dirs.

    If you're not familiar with details - eg. how to generate keys, what is authorized_keys etc, I stole this from here: http://ingomueller.net/node/331 - read more there.
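    From the user's point of view a checkout then works as usual over svn+ssh - a sketch, with hypothetical host and repository names, and the repo path resolved relative to the -r root given in authorized_keys:

```shell
# generate a key pair; the public key goes into the server's authorized_keys
ssh-keygen -t rsa -f ~/.ssh/svn_key
# normal checkout afterwards
svn checkout svn+ssh://user@svnserver/myrepo/trunk myrepo
```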

    Of course you have to keep your svnserve up to date and pray there are no vulns in it, otherwise users can hack you :-)
    But hey, you know the owners of the keys, don't you? :-)
    Got my point? ;-)

    Thursday, November 12. 2009

    Roundcube with plugins support!!! WOW! Writing a plugin - display custom template has bogus docs.

    PHP WildWildWeb Today I noticed Roundcube has released a new version that finally has plugin support!
    Grrrreaaat!

    As expected, there is a change password plugin (with driver support) and some others that are pretty cool!
    A list of plugins here: http://trac.roundcube.net/wiki/Plugin_Repository

    Of course I had some custom patches for my hosting users, and now they don't work.
    I've configured my change password plugin (which was the main showstopper for not upgrading to the new Roundcube), and only the little tiny hack for domain notification is left.
    I've decided to write a plugin that will do the job for me, so I can easily upgrade after that.

    Writing a plugin isn't that hard at all. Here you can read more:
    http://trac.roundcube.net/wiki/Doc_Plugins

    You can also read the plugins directory for more examples.

    While creating my plugin I hit a problem and lost about 40 minutes searching for a description and a resolution.
    The resolution took 5 minutes of reading the template class, but I thought I was wrong - no, this is a mis-explanation in the docs.
    When you want to create a custom template, you mkdir skins/default/templates in your plugin dir and create/copy-modify the html in it (I copied the login.html template).
    Well, all was fine until I tried to show it.
    The documentation is wrong.
    When you call:

    $rcmail->output->send('mytemplate');

    you must actually call:

    $rcmail->output->send('myplugin.mytemplate');
    so the template class can understand this is a plugin template and show it, instead of searching for a default tpl.

    Hope that helps someone.
    Going to change/report this in docs now.
    Oh. Symptoms are:

    [12.Nov.2009 17:57:27 +0200]: PHP Error: Error loading template for logininfo in /var/www/roundcube/program/include/rcube_template.php on line 372 (GET /)

    in your error log.

    Tuesday, October 13. 2009

    Dojo: breaking in IE*

    WildWildWeb If your dojo-based website breaks in IE browsers and not in others, with strange errors in dojo.js, then you have to check VERY CAREFULLY for unclosed tags.

    I've had this problem - I didn't close one (only one!) div inside an HTML markup node that used dojoType and voilà - dojo threw a "NICE" js error in IE (you know how js is debugged in IE, don't you?) :-)


    So be very very careful when closing tags and using IE+dojo :-)

    IE8 and Opera 10 absolute positioning problems

    WildWildWeb IE8 and Opera 10 differ from ALL other browsers (FF3, Safari, Chrome, IE6, IE7) in positioning an absolute element inside a div.
    If you have something like this:

    
    <div style="position: relative;">
       ...
       <a style="position: absolute; top: 0px; right: 0px;">link</a>
    </div>

    If you don't put the right: 0px, the element won't keep its original position and will go to the left side of the div, because IE8 and Opera will apply a default left: 0px if nothing is set.
    All other browsers will keep the a's original position (no left: 0px;).
    Hope that helps someone.
    Keywords: IE8 Opera absolute positioning problem

    Tuesday, September 29. 2009

    Q&A for apache in debian

    *nix Q: Why does the Apache web server in Debian have an 'It works!' page as its default host?
    A: Because after you have set up a complex VirtualHost configuration for half an hour or more (yes, there can be such a thing), it's nice to see that 'It worked!'
    --answered by valqk. :-D

    Tuesday, September 1. 2009

    Debian HP SmartArray RAID monitoring.

    *nix You need to install two utils to monitor and query your Smart Array:

    apt-get install arrayprobe cpqarrayd

    cpqarrayd is a daemon that logs events from the controller (thanks velin);
    arrayprobe is the CLI tool.

    More links on the topic:
    source I've got this from.
    driver and utils page.

    if you have faulty drive

    hope that helps.

    UPDATE:

    In squeeze there is no cpqarrayd, and arrayprobe is not that good.
    You can use the HP tools provided as Debian packages.
    Simply add this source:


    deb http://downloads.linux.hp.com/SDR/downloads/ProLiantSupportPack/Debian/ squeeze/current non-free

    then
    #> apt-get update && apt-get install hpacucli

    This is the way this CLI is used: hpacucli usage
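    A couple of typical hpacucli invocations, for example (the slot number is an assumption - check yours):

```shell
# overall controller status
hpacucli ctrl all show status
# physical drives on the controller in slot 0
hpacucli ctrl slot=0 pd all show
```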
    p.s. I haven't figured out monitoring yet. hp-health is something I've read about but haven't tested yet.