Wednesday, November 19, 2014

Customize the X-SMTPAPI header of outgoing SendGrid emails with Postfix

SendGrid’s SMTP API allows developers to specify custom handling instructions for e-mail. This is accomplished through a header, X-SMTPAPI, that is inserted into the message.

We can do it easily, for example in PHP:
 $email = new SendGrid\Email();  
 $email->addTo('to@abc.com')->  
     setFrom('from@abc.com')->  
     setSubject('Subject goes here')->  
     setText('Hello World!')->  
     addFilter("subscriptiontrack", "enable", 0)->  
     addFilter("clicktrack", "enable", 0)->  
     addFilter("opentrack", "enable", 0)->  
     addCategory("www")->  
     setHtml('<strong>Hello World!</strong>');  

But the client would like to relay emails to SendGrid through a local Postfix, using Drupal's built-in mail function without custom code. In that case we can use Postfix's smtp_header_checks to add the X-SMTPAPI header to outgoing emails.

Below are steps to do that:

1. Set up Postfix and configure it to send emails via SendGrid as described here.
(It's better to use a hashed password file like smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd instead of smtp_sasl_password_maps = static:yourSendGridUsername:yourSendGridPassword; a sketch of the relevant settings follows below.)
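For reference, below is a minimal sketch of the relay settings that end up in /etc/postfix/main.cf (standard SendGrid relay values at the time of writing; check their docs for the current ones):

 relayhost = [smtp.sendgrid.net]:587  
 smtp_sasl_auth_enable = yes  
 smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd  
 smtp_sasl_security_options = noanonymous  
 smtp_tls_security_level = may  

(remember to run postmap /etc/postfix/sasl_passwd after editing the password file)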

2. Add smtp_header_checks to /etc/postfix/main.cf:
smtp_header_checks = regexp:/etc/postfix/smtp_header_checks

3. Create the /etc/postfix/smtp_header_checks file with content like:
/^From:/ PREPEND X-SMTPAPI: {"category":["www"],"filters":{"subscriptiontrack":{"settings":{"enable":0}},"clicktrack":{"settings":{"enable":0}},"opentrack":{"settings":{"enable":0}}}}

Hint: we can easily get the X-SMTPAPI content by printing it out from the PHP code above, like:
 $arr = $email->toWebFormat();  
 echo $arr['x-smtpapi'];  

4. Reload Postfix
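For example:

sudo postfix reload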

Friday, November 7, 2014

Move Mail-in-a-box to another server

I was asked to move a Mail-in-a-box to another server last night. It was quite new to me, although I've migrated Zimbra servers several times. On their web site there's just a very simple setup guide, so I searched the forum and found instructions from Josh. I don't have much experience with Mail-in-a-box, so I had to find out what the STORAGE_ROOT and STORAGE_USER variables in the code are. It turned out they are set to:
STORAGE_USER=user-data
STORAGE_ROOT=/home/$STORAGE_USER

Then I learned how to decrypt files via openssl as instructed, and how to restore them via duplicity. I had to look into the backup.py code to see how the files are encrypted and then how to decrypt them.
But it turned out I didn't need to decrypt those files at all: we can use the normal files in /home/user-data/backup/duplicity/ instead of decrypted files from /home/user-data/backup/encrypted/.

To summarize, below are the steps to move Mail-in-a-box to another server:
1. Set up a new Ubuntu 14.04 x64 server and install Mail-in-a-box following the setup guide.
Copy /etc/postfix/main.cf from the old server over to the new server.
2. Stop the mailinabox service on the old server: sudo service mailinabox stop
3. Run a backup manually: sudo /home/my_account/mailinabox/management/backup.py
(this is usually an incremental backup, since Mail-in-a-box schedules a backup daily)
4. Then stop mailinabox again (it is started by the backup tool) to make sure no new mail comes in.
5. rsync the entire /home/user-data/backup/ directory to a folder on the new server:
rsync -avr -e ssh /home/user-data/backup/ my_account@new_server:backup/
6. Do restore on the new server:
- Stop the mailinabox service: sudo service mailinabox stop
- In case we restore from encrypted files, we decrypt them first:
mkdir /home/my_account/backup/decrypted && cd /home/my_account/backup/encrypted
for FILE in *.enc; do openssl enc -d -aes-256-cbc -a -in $FILE -out ../decrypted/${FILE%.*} -pass file:../secret_key.txt; done
- Restore with duplicity:
sudo duplicity --no-encryption restore file:///home/my_account/backup/duplicity /home/user-data
or
sudo duplicity --no-encryption restore file:///home/my_account/backup/decrypted /home/user-data
- Re-configure/update: cd /home/my_account/mailinabox && sudo setup/start.sh
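As an optional sanity check before running the restore, duplicity can list the backup chain it sees in the synced folder:

sudo duplicity --no-encryption collection-status file:///home/my_account/backup/duplicity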

In case we don't switch over immediately, we can start the mailinabox service on the old server again, and just before the transition repeat from step 2 onward. It will be faster because we will just sync the new incremental backups.

Note that Josh said in another thread that we can sync /home/user-data directly, but I didn't test that.

Thursday, October 16, 2014

Phusion Passenger and missing passenger_native_support.so issue

When upgrading a Rails application from Ruby 1.9.3 to 2.0.0 using rvm, Phusion Passenger threw this error:

Raw process output:

 --> Compiling passenger_native_support.so for the current Ruby interpreter...
     (set PASSENGER_COMPILE_NATIVE_SUPPORT_BINARY=0 to disable)
 --> Downloading precompiled passenger_native_support.so for the current Ruby interpreter...
     (set PASSENGER_DOWNLOAD_NATIVE_SUPPORT_BINARY=0 to disable)
     Could not download https://oss-binaries.phusionpassenger.com/binaries/passenger/by_release/4.0.48/rubyext-ruby-2.0.0-x86_64-linux.tar.gz: Resolving timed out after 4516 milliseconds
     Trying next mirror...
     Could not download https://s3.amazonaws.com/phusion-passenger/binaries/passenger/by_release/4.0.48/rubyext-ruby-2.0.0-x86_64-linux.tar.gz: Resolving timed out after 4516 milliseconds
 --> Continuing without passenger_native_support.so.
/usr/local/rvm/gems/ruby-2.0.0-p451/gems/json-1.8.1/lib/json/common.rb:67: [BUG] Segmentation fault
ruby 2.0.0p451 (2014-02-24 revision 45167) [x86_64-linux]

It turned out that passenger_native_support.so was missing from rvm's /usr/local/rvm/rubies/ruby-2.0.0-p451/lib/ruby/2.0.0/x86_64-linux/ folder.

So I searched, found it in /usr/lib/ruby/2.0.0/x86_64-linux-gnu/, and copied it to /usr/local/rvm/rubies/ruby-2.0.0-p451/lib/ruby/2.0.0/x86_64-linux/
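Something like this, using the paths above:

sudo cp /usr/lib/ruby/2.0.0/x86_64-linux-gnu/passenger_native_support.so /usr/local/rvm/rubies/ruby-2.0.0-p451/lib/ruby/2.0.0/x86_64-linux/

then restart the application (e.g. touch tmp/restart.txt for Passenger).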

Saturday, October 11, 2014

Drupal 7.3.1 crashes with eAccelerator 0.9.6.1

When deploying a site built on Drupal 7.3.1 on CentOS 6.5, it crashed with an error:

"Fatal error: Cannot create references to/from string offsets nor overloaded objects in /var/www/drupal/includes/errors.inc on line 184

That's interesting. So I commented out that line ($test_info = &$GLOBALS['drupal_test_info'];) and the app kept working, but with some other strange behaviors: URLs were rendered with query-string parameters such as http://drupal-site.com/?q=careers instead of clean URLs like http://drupal-site.com/careers,
or
Fatal error: Cannot use object of type ctools_context as array in /var/www/drupal/sites/all/modules/contrib/panels_everywhere/plugins/tasks/site_template.inc on line 103...

Then I tried to set up a new Drupal 7.3.1 site from scratch and it worked pretty well. It looked like some of the extra contrib modules caused the problem.

Finally I noticed a bunch of "child pid xyz exit signal Segmentation fault (11)" errors in /var/log/httpd/error_log, so I dug into why, tried disabling eAccelerator (the latest 0.9.6.1, which was required by the dev team), and the app worked again.
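Disabling it is just a matter of switching the extension off and restarting Apache; a sketch, assuming the stock RPM layout (the ini path may differ on your box):

 ; /etc/php.d/eaccelerator.ini  
 eaccelerator.enable = 0  

then sudo service httpd restart.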

Then I enabled it again and the app was still working... until some time later it crashed again - it looks like the cache had expired?

So finally I asked the dev team to switch to APC as an alternative.

Sunday, September 21, 2014

Auto synchronize Salesforce data to MySQL or Postgresql with Talend Open Studio

Today I'm writing about using Talend Open Studio to sync Salesforce data to MySQL or Postgresql databases.

First of all, let's download and install Talend Open Studio for Data Integration; make sure you already have Java installed.

Next, unzip it and run TOS_DI-macosx-cocoa (I'm using Mac OS X), or TOS_DI-win-x86_64.exe if you're using Windows.
It's based on the Eclipse IDE, so it's pretty easy if you're used to working with Eclipse.

Here is the main screen.


Then we will need to create a database connection: expand the Metadata node -> Db connections -> right-click -> Create connection -> enter a Name (without spaces) -> click the Next button

Then select the Db Type (MySQL, Postgresql, or whatever else in the long list). I'm choosing Postgresql because MySQL limits the row size to 65535 bytes, so if the table contains too many fields it throws an error: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Row size too large. The maximum row size for the used table type, not counting BLOBs, is 65535. You have to change some columns to TEXT or BLOBs

After entering all the connection information, click the Check button to make sure it is correct. Then Finish.


Next step is creating the Salesforce connection: right-click the Salesforce node and select Create Salesforce connection. Enter a name, then Next.


Enter the User name and Password. The Password is a combination of the Salesforce password and the security token; e.g. if the Salesforce password is xyz and the token is 1234, we enter xyz1234 in the Password field. Then Check Login to make sure the connection is valid. Then Finish.

Next step is retrieving the Salesforce modules: right-click the created Salesforce connection and select Retrieve Salesforce modules. Let's check Account.


Then we will need to create a job to import the Salesforce data into the database. Right-click on the Job Design node and select Create Jobs.

Then the job design screen opens, and we expand the Salesforce Account module to drag/drop the Account table onto the job design screen as a tSalesforceInput component.

Click OK and we have it on the screen.

Next step is searching for the database output component in the Palette box, then drag/drop a tPostgresqlOutput component onto the screen.


Then we might need to rename it to "account" and click the icon inside it to set its properties.
Select Repository for Property Type, then select the created Postgresql connection - DB (POSTGRESQL):PosgressqlSalesforce
All the connection information is filled out automatically, except Table - enter "account" here; then select Create table if does not exist for Action on table, and Insert for Action on data.

Next step is linking the Salesforce input to the Postgresql output: right-click the Salesforce input's icon, select Row -> Main, then wire the connection to the Postgresql output's icon,
and Run the job.

Then let's check Postgresql to see that the new account table was created with data from Salesforce.
Then double-click the Postgresql account output to edit its properties, select Insert or Update for Action on data, click the Edit schema button, and select the Key checkbox on the Id column of the account output.

Then we can run the job again,
and repeat those steps to sync the other tables.

Now if we want to execute the job automatically on a schedule / cron, we need to build the job.
Right-click the job node and select Build Job.


Then unzip the file and explore to the sub-folder sync; we will see sync_run.sh and sync_run.bat (or jobName_run.sh/.bat), which can be called from the schedule / cron.
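For example, a crontab entry to run the sync every hour (the install path is just a placeholder):

0 * * * * /opt/talend/jobs/sync/sync_run.sh >> /var/log/salesforce_sync.log 2>&1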

So we are done!

FYI, here is a videocast showing how to sync from a database to Salesforce.



Saturday, September 13, 2014

Automate Rackspace Cloud Block Storage volume creation with Ansible

Today I want to automate Rackspace Cloud Block Storage volume creation with Ansible for Rackspace instances.

To do that, we need Ansible's rax modules, which require pyrax to be installed.

After pyrax was installed, I tried to create a block storage volume with the rax_cbs module,
and it threw an error:
msg: pyrax is required for this module

That's interesting. I checked the python path and tested pyrax:

$ which python
/usr/local/bin/python

 $ python
Python 2.7.7 (default, Jun 18 2014, 16:33:32)
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.9)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyrax
>>>

Checked the ansible python path
$ head /usr/local/bin/ansible
#!/usr/local/opt/python/bin/python2.7

Both python and /usr/local/opt/python/bin/python2.7 turned out to be symbolic links to /usr/local/Cellar/python/2.7.7_2/Frameworks/Python.framework/Versions/2.7/bin/python2.7.

Googled around but couldn't find the answer.

Finally I inspected the rax_cbs module itself:
$ head /usr/share/ansible/cloud/rax_cbs
#!/usr/bin/python

Then:
$ /usr/bin/python
Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyrax
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named pyrax

That's why it didn't work! /usr/bin/python is the built-in Python on Mac OS X, while 2.7.7 was installed via Homebrew to a different path.

So let's set ansible_python_interpreter for localhost (which is used by the rax modules) in the ansible/hosts file:
[local]
localhost ansible_connection=local ansible_python_interpreter=/usr/local/opt/python/bin/python2.7

But then I got another error:
msg: No CloudBlockStorageVolume matching

Found the answer - patching /usr/share/ansible/cloud/rax_cbs.

Voila!

I also want to attach the volume to the instance, then fdisk / create a partition / format / mount it automatically. So we first check whether the device already exists; if not, we create and attach a Cloud Block Storage volume, then fdisk / create the partition / format it.

Below is the playbook.

Note that this playbook is executed against remote Rackspace instances, so we don't need to set hosts/connection for the build/attach block storage volume tasks as in the example: the local_action module automatically delegates them to the localhost defined in ansible/hosts.

 # file: Rackspace_disk/tasks/main.yml  
 - name: Check if device /dev/xvd_ present  
   shell: fdisk -l | grep 'Disk {{device}}' | wc -l  
   changed_when: False  
   register: device_present  

 - name: Build a Block Storage Volume  
   local_action:  
     module: rax_cbs  
     credentials: "{{credentials}}"  
     name: "{{volum_name}}"  
     volume_type: "{{volum_type}}"  
     size: "{{volum_size}}"  
     region: "{{region}}"  
     wait: yes  
     state: present  
     meta:  
       app: "{{volum_name}}"  
   sudo: no  
   when: device_present.stdout is defined and device_present.stdout|int == 0  

 - name: Attach a Block Storage Volume  
   local_action:  
     module: rax_cbs_attachments  
     credentials: "{{credentials}}"  
     volume: "{{volum_name}}"  
     server: "{{server}}"  
     device: "{{device}}" # /dev/xvd_  
     region: "{{region}}"  
     wait: yes  
     state: present  
   sudo: no  
   when: device_present.stdout is defined and device_present.stdout|int == 0  

 - name: Check if partition /dev/xvd_1 present  
   shell: fdisk -l | grep {{device}}1 | wc -l  
   changed_when: False  
   register: partition_present  

 - name: Fdisk / create partition / format  
   shell: "echo -e 'n\np\n1\n\n\nw\n' | fdisk {{device}} && mkfs -t {{fstype}} {{device}}1 && tune2fs -m 0 {{device}}1"  
   when: partition_present.stdout is defined and partition_present.stdout|int == 0  

 - name: Create the mount point  
   file: path={{mount_dir}} state=directory  

 - name: Mount device  
   mount: name={{mount_dir}} src={{device}}1 fstype={{fstype}} opts='defaults,noatime,nofail' state=mounted  
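For reference, here is the kind of variable set the playbook expects; the names come from the tasks above, while all the values are placeholders to adapt:

 # file: Rackspace_disk/vars/main.yml (example values only)  
 device: /dev/xvdg  
 volum_name: app-data  
 volum_type: SATA  
 volum_size: 100  
 region: DFW  
 server: my-server-name  
 fstype: ext4  
 mount_dir: /data  
 credentials: /home/me/.rax_credentials  
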
Saturday, August 23, 2014

Resize ext4 partition on CentOS 5

When trying to resize an ext4 partition on CentOS 5, I got an error:

$ resize2fs /dev/sdf
resize2fs 1.39 (29-May-2006)
resize2fs: Filesystem has unsupported feature(s) while trying to open /dev/sdf
Couldn't find valid filesystem superblock.

Googled around and found out we must use resize4fs instead:

$ resize4fs /dev/sdf
resize4fs 1.41.12 (17-May-2010)
Filesystem at /dev/sdf is mounted on /opt; on-line resizing required
old desc_blocks = 5, new_desc_blocks = 7
Performing an on-line resize of /dev/sdf to 26214400 (4k) blocks.

Similarly, for ext4 on CentOS 5 we use fsck.ext4 and tune4fs.


Saturday, August 9, 2014

Cisco SPA525G2 SIP configuration over wireless connection

We bought a new Cisco SPA525G2 phone to use wirelessly and needed to set it up with RingCentral.

It's pretty simple, so we don't need provisioning via a private TFTP server like the old Cisco 7940/7960 phones.

Below is how I made it work.

1. Configure the phone to connect to the wireless router (since I work from home, I needed someone in the office to help with this).

2. Remote into a PC on the same network as the phone, open the browser, and enter the phone's IP address.
If it asks for a user/password to log in, enter: admin / (blank password)

3. Configure the phone:
Suppose RingCentral provides this provisioning information:
SIP Domain sip.ringcentral.com:5060
Outbound Proxy sip10.ringcentral.com:5090
User Name 14151234567
Password aPassword
Authorization ID 0123456789

On the Cisco admin UI, click Advanced at the bottom of the page.

Menu Provisioning
Provision Enable: No

Menu Voice > Ext 1
SIP Transport: TCP
Proxy: sip.ringcentral.com:5060
Outbound Proxy: sip10.ringcentral.com:5090
Use Outbound Proxy: Yes
Display Name: a name
User ID: 14151234567
Password: aPassword
Use Auth ID: Yes
Auth ID: 0123456789

Menu System
Primary NTP Server: 0.north-america.pool.ntp.org
Secondary NTP Server: 1.north-america.pool.ntp.org

Click "Submit All Changes"

Then check the phone status to see whether Line 1 is Registered.

Wednesday, July 9, 2014

Windows update automatic e-mail notification

When managing Windows servers, I looked for a way to check for updates and send automatic email notifications, like yum-cron on RedHat/CentOS, and found one here. Thank you Paulie.

Though it was written in 2007, it still works well on Windows 2012 servers.

Below is my modification to categorize MsrcSeverity as Important when it's blank, and to use Gmail SMTP with authentication.

 '==========================================================================  
 ' NAME:      Windows update automatic e-mail notification  
 ' AUTHOR:      Paul Murana, Accendo Solutions  
 ' DATE :      26/08/2007  
 ' MODIFIED: Luan Nguyen - 7/8/2014  
 '==========================================================================  
 'Change these variables to control which updates trigger an e-mail alert  
 'and to configure e-mail from/to addresses  
 AlertCritical     = 1   
 AlertImportant     = 1  
 AlertModerate     = 0  
 AlertLow     = 0  
 EmailFrom      = "from.email@gmail.com"  
 EmailTo      = "to.email@gmail.com"  
 smtpserver = "smtp.gmail.com"   
 smtpserverport = 465  
 smtpauthenticate = 1  
 sendusername = "from.email@gmail.com"  
 sendpassword = "password"  
 smtpusessl = 1  
 '==========================================================================  
 Set fso           = CreateObject("Scripting.FileSystemObject")  
 Set updateSession      = CreateObject("Microsoft.Update.Session")  
 Set updateSearcher      = updateSession.CreateupdateSearcher()  
 Set oShell           = CreateObject( "WScript.Shell" )  
 computername          = oShell.ExpandEnvironmentStrings("%ComputerName%")  
 DomainName          = oShell.ExpandEnvironmentStrings("%userdomain%")  
 EMailSubject           = "Windows Update Notification - " & DomainName & "\" & computername  
 Set oshell           = Nothing  
 Set searchResult      = updateSearcher.Search("IsInstalled=0 and Type='Software'")  
 If searchResult.Updates.count > 0 Then  
      For I = 0 To searchResult.Updates.Count-1  
        Set update = searchResult.Updates.Item(I)  
           Select Case update.MsrcSeverity  
                Case "Critical"   
                     CriticalCount = Criticalcount+1  
                     CriticalHTML = CriticalHTML & MakeHTMLLine(update)  
                     Wscript.Echo update.MsrcSeverity & " : " & update, vbCRLF  
                Case "Moderate"  
                     ModerateCount = Moderatecount + 1  
                     ModerateHTML = ModerateHTML & MakeHTMLLine(update)  
                     Wscript.Echo update.MsrcSeverity & " : " & update, vbCRLF  
                Case "Low"  
                     Lowcount = Lowcount + 1  
                     LowHTML = LowHTML & MakeHTMLLine(update)  
                     Wscript.Echo update.MsrcSeverity & " : " & update, vbCRLF  
                Case Else '"Important" or blank  
                     ImportantCount = Importantcount + 1  
                     ImportantHTML = ImportantHTML & MakeHTMLLine(update)  
                     Wscript.Echo "Important : " & update, vbCRLF  
           end select                 
      Next  
           If (AlertCritical=1 and CriticalCount > 0) then SendEmail=1 end if  
           If (AlertImportant=1 and ImportantCount > 0) then SendEmail=1 end if  
           If (AlertModerate=1 and ModerateCount > 0) then SendEmail=1 end if  
           If (AlertLow=1 and LowCount > 0) then SendEmail=1 end If  
           if SendEmail=1 and smtpserver <> "" Then  
                Set objMessage           = CreateObject("CDO.Message")   
                objMessage.Subject      = EMailSubject  
                objMessage.From      = EmailFrom  
                objMessage.To           = EmailTo  
                objMessage.HTMLBody      = ReplaceHTMLTemplate()  
                Set iConf = CreateObject("CDO.Configuration")  
                Set Flds = iConf.Fields           
                schema = "http://schemas.microsoft.com/cdo/configuration/"  
                Flds.Item(schema & "sendusing") = 2  
                Flds.Item(schema & "smtpserver") = smtpserver  
                Flds.Item(schema & "smtpserverport") = smtpserverport  
                Flds.Item(schema & "smtpauthenticate") = smtpauthenticate  
                if smtpauthenticate = 1 and sendusername <> "" and sendpassword <> "" then  
                     Flds.Item(schema & "sendusername") = sendusername  
                     Flds.Item(schema & "sendpassword") = sendpassword  
                end if  
                Flds.Item(schema & "smtpusessl") = smtpusessl  
                Flds.Update  
                Set objMessage.Configuration = iConf  
                objMessage.Send  
                set objMessage = nothing  
                set iConf = nothing  
                set Flds = nothing  
                Wscript.Echo "Email sent to " & EmailTo, vbCRLF  
           end if  
      Else  
           Wscript.Echo "No updates :)"  
 End If  
 Function MakeHTMLLine(update)  
      HTMLLine="<tr><td>" & update.Title & "</td><td>" & update.description & "</td><td>"  
      counter     =0       
      For Each Article in Update.KBArticleIDs   
            if counter > 0 then HTMLLine=HTMLLine & "<BR>"  
           HTMLLine=HTMLLine & "<a href=" & chr(34) & "http://support.microsoft.com/kb/" & article & "/en-us" & chr(34) & ">KB" & article & "</a>"  
            counter = counter +1  
        Next   
      For Each Info in Update.moreinfourls   
           if counter > 0 then HTMLLine=HTMLLine & "<BR>"  
           HTMLLine=HTMLLine & "<a href=" & chr(34) & info & chr(34) & ">" & "More information...</a>"  
           counter = counter +1  
        Next        
      HTMLLine = HTMLLine & "</td></tr>"  
      MakeHTMLLine = HTMLLine  
 End function  
 Function ReplaceHTMLTemplate()  
      Set HTMLFile = fso.opentextfile((fso.GetParentFolderName(WScript.ScriptFullName) & "\updatetemplate.htm"),1,false)  
      MasterHTML = HTMLFile.Readall  
      HTMLFile.close  
      MasterHTML = Replace(MasterHTML, "[criticalupdatecontents]", CriticalHTML)  
      MasterHTML = Replace(MasterHTML, "[importantupdatecontents]", ImportantHTML)  
      MasterHTML = Replace(MasterHTML, "[moderateupdatecontents]", ModerateHTML)  
      MasterHTML = Replace(MasterHTML, "[lowupdatecontents]", LowHTML)  
      MasterHTML = Replace(MasterHTML, "[computername]", Computername)  
      MasterHTML = Replace(MasterHTML, "[domainname]", domainname)  
      MasterHTML = Replace(MasterHTML, "[timenow]", now())  
      If (CriticalCount = 0) then  
          MasterHTML = TrimSection(MasterHTML, "<!--CriticalStart-->", "<!--CriticalEnd-->")  
      end if  
      If (ImportantCount = 0) then  
          MasterHTML = TrimSection(MasterHTML, "<!--ImportantStart-->", "<!--ImportantEnd-->")  
   end if  
   If (moderateCount = 0) then  
          MasterHTML = TrimSection(MasterHTML, "<!--ModerateStart-->", "<!--ModerateEnd-->")  
   end if  
      If (LowCount = 0) then       
          MasterHTML = TrimSection(MasterHTML, "<!--LowStart-->", "<!--LowEnd-->")  
   end if  
   ReplaceHTMLTemplate = MasterHTML         
 End Function  
 Function TrimSection(CompleteString,LeftString,RightString)  
      LeftChunkPos=inStr(CompleteString, LeftString)  
      RightChunkPos=inStrRev(CompleteString, Rightstring)  
      LeftChunk=Left(CompleteString, LeftChunkPos-1)  
      RightChunk=mid(CompleteString, RightChunkPos)  
      TrimSection=LeftChunk & RightChunk  
 End Function  
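To run the check automatically, the script can be scheduled with the built-in task scheduler; a sketch (task name, path and time are placeholders; the script expects Paulie's updatetemplate.htm in the same folder):

schtasks /create /tn "UpdateNotify" /tr "cscript //nologo C:\scripts\updatenotify.vbs" /sc daily /st 07:00 /ru SYSTEM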

Wednesday, June 4, 2014

Locking CVS repository branches

Sometimes we need to lock CVS repository branches before a release, to prevent accidental commits by developers.

In a CVS repository, the commitinfo file under CVSROOT defines programs to execute whenever `cvs commit' is about to run.

So let's create a trigger bash script called validateCommit.sh that takes the branches to validate as parameters.

 #!/bin/bash  
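 # CVS/Tag holds "T<branch>" for a branch checkout; if the file is absent  
 # the working copy is on trunk, represented here as THEAD  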
 if [ -f CVS/Tag ]; then  
  tag=`cat CVS/Tag`  
 else  
  tag=THEAD  
 fi  
 for branch in "$@"  
 do  
   if [ "$tag" == "T$branch" ]; then  
      echo Cannot commit to $branch  
      exit 1  
   fi  
 done  
 echo Commit OK  
 exit 0  

We can place this script in any folder, for example /cvs/scripts/

Then just append a line at the bottom of the repository's CVSROOT/commitinfo:
ALL /cvs/scripts/validateCommit.sh branch1 branch2

We could make a script to append/remove branches from that list across multiple repositories at the same time.

Git and SVN also provide similar pre-commit hook features, so we can do the same thing there; a Git sketch follows below.
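For Git, the server-side equivalent is an update hook on the central repository; a minimal sketch (branch names are examples):

 #!/bin/bash  
 # hooks/update is called once per pushed ref with: <refname> <oldrev> <newrev>  
 refname="$1"  
 for branch in branch1 branch2  
 do  
   if [ "$refname" = "refs/heads/$branch" ]; then  
      echo "Cannot commit to $branch" >&2  
      exit 1  
   fi  
 done  
 exit 0  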

Saturday, May 10, 2014

Tunnelblick - OpenVPN GUI for Mac OS X

To connect to an OpenVPN server, I tried Tunnelblick 3.3.2, but it was unable to connect to the server, with this console log:

openvpn[1304] Options error: Unrecognized option or missing parameter(s) in /Library/Application Support/Tunnelblick/Users/username/Tunnelblick.tblk/Contents/Resources/config.ovpn:18: verify-x509-name (2.2.1)

It turns out Tunnelblick 3.3.2 uses OpenVPN 2.2.1, which doesn't support verify-x509-name.

So I tried Tunnelblick 3.4beta26 with the OpenVPN 2.3.4 selection, and it works like a charm.

Monday, April 28, 2014

AWS : Starting udev: Unable to handle kernel paging request at virtual address..

I "baked" custom CentOS AMIs to use on the AWS 4 years ago with their module modules-2.6.16-ec2.tgz without any problems.

Last week we wanted to expand a volume, which is a simple process:
- detach the volume from the instance
- create a snapshot of it
- create new volume from that snapshot
- attach that new volume to the instance

But when the instance started, it threw errors:
Starting udev: Unable to handle kernel paging request at virtual address..

I wondered whether there was a hardware problem, so I repeated the process several times and even ran fsck.ext3 to repair any failure, but it didn't work - although I could still access the volume's data by attaching it to another running instance.

So I decided to launch a new instance from a known-good AMI and transfer the old instance's data/configuration to it, but unluckily it also failed to start, with the same old errors.

Looking into the AWS forum, technicians recommended switching to a pvgrub kernel, because the 2.6.16-xenU kernel is extremely old and hasn't received any updates since 2011. That's odd, because I could still detach/attach volumes a couple of months ago. Their suggestion amounts to terminating the old non-working instance and launching a new one with a pvgrub kernel. That sounds like baking the AMI again and transferring data to it (although Puppet helps provision the software and configuration), and doing the same for a bunch of currently running instances with the old kernel module would take time.

So finally I found a way to upgrade an existing non-booting volume to a pvgrub kernel with minimal impact.

Here is what I did (for anyone with the same problem):

1. Detach the volume from the non-booting instance.
2. Attach the volume to a running instance, as /dev/sdg for example.
3. Log into that running instance and mount the new volume:
# mount /dev/sdg /mnt/sdg

4. install grub and kernel-xen for that device
# mkdir /mnt/sdg/sys/block
# yum -c /mnt/sdg/etc/yum.conf --installroot=/mnt/sdg -y install grub kernel-xen

5. Check which vmlinuz and initrd versions were installed:
# ls /mnt/sdg/boot

in my case they are vmlinuz-2.6.18-371.8.1.el5xen and initrd-2.6.18-371.8.1.el5xen.img

6. Recreate the initial ramdisk image
# mv /mnt/sdg/boot/initrd-2.6.18-371.8.1.el5xen.img /mnt/sdg/boot/initrd-2.6.18-371.8.1.el5xen.img.org
# chroot /mnt/sdg mkinitrd /boot/initrd-2.6.18-371.8.1.el5xen.img 2.6.18-371.8.1.el5xen --preload=xenblk --preload=xennet --fstab=/etc/fstab

7. Install grub
# chroot /mnt/sdg grub-install /dev/sdg

8. Create the /mnt/sdg/boot/grub/menu.lst file with below content

default 0
timeout 5
title CentOS
root (hd0)
kernel /boot/vmlinuz-2.6.18-371.8.1.el5xen ro root=/dev/sda1
initrd /boot/initrd-2.6.18-371.8.1.el5xen.img

9. Unmount it
# umount /mnt/sdg

10. Detach that volume.

11. Create a snapshot from that volume.

12. Create an AMI from that snapshot with a suitable kernel/Image ID from this
(in my case it's aki-f08f11c0)

13. Launch a new instance from that AMI

This way we recovered the existing installed software/configuration/data.


Friday, April 11, 2014

mongoose-paginate

When using mongoose-paginate 1.2.0 for pagination, I ran into the two following issues:


1. No paginate method on my Model class error


Regarding the closed "No paginate method on my Model class" issue (#9), I found the problem: the mongoose module initializes a new object:

module.exports = exports = new Mongoose; var mongoose = module.exports;

so if we use:

var mongoose = require('mongoose'), paginate = require('mongoose-paginate');

then the first mongoose variable is different from the mongoose variable that the paginate method is attached to inside the mongoose-paginate module.
That's why the "No paginate method" error was thrown.
So I suggest exporting the mongoose variable at the bottom of mongoose-paginate.js:

module.exports = mongoose;

Then we use it as:

var mongoose = require('mongoose-paginate');
2. Mongoose also supports Population, where we can get related documents within a query - like a join in an RDBMS - so I added it to mongoose-paginate. But I got an error:

MissingSchemaError: Schema hasn't been registered for model "undefined"

I checked everything - the Schema, Model, etc. were correct and tested fine with the main Mongoose module (3.8.8) - and it finally turned out that mongoose-paginate 1.2.0 uses Mongoose 3.5.1, so I upgraded it to 3.8.8 and got it to work.

Tuesday, February 11, 2014

Using a Python script to deploy Java J2EE apps

When deploying a J2EE app, we usually want to update configuration files such as web.xml, hibernate.cfg.xml, ehcache.xml, etc. We can use a bash script for that, but Python does it better since it provides an XML parser.

To make the script work for many environments, we need a configuration file containing each environment's app information.
I prefer the MS .INI file format. We need sections like environment, hibernate.cfg.xml, web.xml, ehcache.xml... such as:

 [environment]  
 tomcat-folder = /var/lib/tomcat6  
 tomcat-user = tomcat6  
 [hibernate.cfg.xml]  
 hibernate.connection.url = jdbc:postgresql://dbserver:5432/app  
 hibernate.connection.username = username
 hibernate.connection.password = password  
 [web.xml]  
 data1 = abc  
 Data2 = xyz  
To parse that file, we use ConfigParser:
 import sys, os, datetime, shutil, copy, ConfigParser
 from xml.etree import ElementTree as ET 
 app = sys.argv[1]  
 deployFolder = os.path.dirname(os.path.realpath(__file__)) + '/'  
 config = ConfigParser.ConfigParser()  
 config.optionxform = str #preserve case-sensitive  
 config_file = app + '.cfg'  
 config.read(deployFolder + config_file)  
 tomcatFolder = config.get('environment', 'tomcat-folder')  
 appFolder = tomcatFolder + '/webapps/' + app  

Then parse the .xml configuration files and update them:
 #parse web.xml section into dictionary  
 webXMLdic = dict(config.items('web.xml'))  
 #parse the app web.xml  
 webXMLfile = appFolder + '/WEB-INF/web.xml'  
 webXML = ET.parse(webXMLfile)  
 for param in webXML.getroot().getiterator('context-param'):  
      name = param.find('param-name')  
      value = param.find('param-value')  
      if webXMLdic.has_key(name.text):  
           new_value = webXMLdic.get(name.text)  
           value.text = new_value  
      else:  
           print 'not found value for param ' + name.text + ' in ' + config_file  
 #update the app web.xml  
 with open(webXMLfile, 'w') as f:  
   f.write('<?xml version="1.0" encoding="UTF-8"?>\n' \  
           '<!DOCTYPE web-app \nPUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" \n' \  
     '"http://java.sun.com/dtd/web-app_2_3.dtd">\n')  
   ET.ElementTree(webXML.getroot()).write(f,'utf-8')  
 #parse hibernate.cfg.xml section into dictionary  
 hibernateXMLdic = dict(config.items('hibernate.cfg.xml'))  
 #parse the app hibernate.cfg.xml  
 hibernateXMLfile = appFolder + '/WEB-INF/classes/hibernate.cfg.xml'  
 hibernateXML = ET.parse(hibernateXMLfile)  
 #the XPath selector functionality was not implemented in ElementTree until version 1.3, which ships with Python 2.7, so I use iterator loop to work for lower Python versions  
 for property in hibernateXML.getroot().getiterator('property'):  
      name = property.get('name')  
      if hibernateXMLdic.has_key(name):  
           new_value = hibernateXMLdic.get(name)  
           property.text = new_value  
           print name, new_value  
      if (name == 'hbm2ddl.auto'): #remove update/create db if it's declared  
           hibernateXML.getroot().find('session-factory').remove(property)  
 #update the app hibernate.cfg.xml            
 with open(hibernateXMLfile, 'w') as f:  
   f.write('<!DOCTYPE hibernate-configuration PUBLIC \n' \  
      '"-//Hibernate/Hibernate Configuration DTD 3.0//EN" \n' \  
      '"http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">\n')  
   ET.ElementTree(hibernateXML.getroot()).write(f)  

We also need other functions, such as extracting the war file and copying/backing up files, but those are simple things to do with Python.
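The script is then invoked with the app name as its argument, e.g. (the script name is hypothetical):

python deploy.py myapp

which reads myapp.cfg for that environment's settings.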

Saturday, January 4, 2014

rsync for a non-root user over SSH, preserving ownership/permissions/times

If we need to archive files with rsync over ssh and preserve ownership/permissions/times etc. with a non-root account, we can follow these steps:

1. Edit the /etc/sudoers file (or run visudo):
- Set NOPASSWD for sync_user to execute the rsync command:
sync_user ALL= NOPASSWD:/usr/bin/rsync
- Skip the tty requirement for sync_user (if we don't, we'll get the error: sudo: sorry, you must have a tty to run sudo):
Defaults:sync_user !requiretty

2. Then rsync from local to remote with the private key and the rsync-path option:
rsync -avzH -e "ssh -i sync_user.pem" --rsync-path="sudo rsync" --delete  /local_dir/ sync_user@remote_host:/remote_dir
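Adding -n (--dry-run) to the same command first is a safe way to see what would be transferred or deleted without changing anything:

rsync -avzHn -e "ssh -i sync_user.pem" --rsync-path="sudo rsync" --delete /local_dir/ sync_user@remote_host:/remote_dir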