When deploying a site built on Drupal 7.31 to CentOS 6.5, it crashed with an error:
"Fatal error: Cannot create references to/from string offsets nor overloaded objects in /var/www/drupal/includes/errors.inc on line 184"
That's interesting. So I commented out that line ($test_info = &$GLOBALS['drupal_test_info'];) and the app kept working, but with some other strange behaviors: URLs were generated with query-string parameters, such as http://drupal-site.com/?q=careers instead of the clean URL http://drupal-site.com/careers,
or
"Fatal error: Cannot use object of type ctools_context as array in /var/www/drupal/sites/all/modules/contrib/panels_everywhere/plugins/tasks/site_template.inc on line 103..."
Then I tried to set up a new Drupal 7.31 site from scratch, and it worked fine. It looked like some of the contrib modules were triggering the problem.
Finally I noticed a bunch of "child pid xyz exit signal Segmentation fault (11)" errors in /var/log/httpd/error_log, which pointed at the real cause: after disabling eAccelerator (the latest 0.9.6.1, which is required by the Dev team), the app worked again.
Then I re-enabled it and the app kept working... until some time later it crashed again; it looks like the crash comes back once the cache expires.
So finally I asked the Dev team to switch to APC as an alternative.
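A quick way to keep an eye on these crashes in the Apache log (a sample log is inlined here so the snippet is self-contained; on a real box point the grep at /var/log/httpd/error_log instead):

```shell
# Count segfaulting Apache children in the error log.
log=$(mktemp)
cat > "$log" <<'EOF'
[notice] child pid 9801 exit signal Segmentation fault (11)
[notice] caught SIGTERM, shutting down
[notice] child pid 9907 exit signal Segmentation fault (11)
EOF
grep -c 'exit signal Segmentation fault' "$log"   # -> 2
rm -f "$log"
```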
Saturday, October 11, 2014
Sunday, September 21, 2014
Auto-synchronize Salesforce data to MySQL or PostgreSQL with Talend Open Studio
Today I'm writing about using Talend Open Studio to sync Salesforce data to a MySQL or PostgreSQL database.
First, download and install Talend Open Studio for Data Integration; make sure you already have Java installed.
Next, unzip it and run TOS_DI-macosx-cocoa (I'm using Mac OS X), or TOS_DI-win-x86_64.exe if you're on Windows.
It's based on the Eclipse IDE, so it will feel familiar if you're used to working with Eclipse.
Here is the main screen.
Then create a database connection: expand the Metadata node -> Db connections -> right-click -> Create connection -> enter a Name (without spaces) -> click the Next button.
Then select the Db Type (MySQL, PostgreSQL, or anything else in the long list). I'm choosing PostgreSQL because MySQL limits the row size to 65535 bytes, so if the table contains too many fields it throws an error: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Row size too large. The maximum row size for the used table type, not counting BLOBs, is 65535. You have to change some columns to TEXT or BLOBs
After entering all the connection information, click the Check button to make sure it is correct. Then Finish.
Next, create the Salesforce connection: right-click the Salesforce node and select Create Salesforce connection. Enter a name, then Next.
Enter the User name and Password. The Password is the Salesforce password concatenated with the security token; i.e. if the Salesforce password is xyz and the token is 1234, enter xyz1234 in the Password field. Then Check Login to make sure the connection is valid. Then Finish.
Next, retrieve the Salesforce modules: right-click the newly created Salesforce connection and select Retrieve Salesforce modules. Let's check Account.
Then we need to create a job to import the Salesforce data into the database. Right-click the Job Design node and select Create Jobs.
The job design screen opens; expand the Salesforce Account module and drag the Account table onto the job design screen as a tSalesforceInput component.
Click OK and we have it on the screen.
Next, search for a database output component in the Palette box, then drag a tPostgresqlOutput component onto the screen.
We might rename it to "account"; click the icon inside it to set its properties.
Select Repository for Property Type, then select the PostgreSQL connection created earlier - DB (POSTGRESQL):PosgressqlSalesforce.
All the connection information is filled out automatically, except Table - enter "account" here. Then select Create table if does not exist for Action on table, and Insert for Action on data.
Next, link the Salesforce input to the PostgreSQL output: right-click the Salesforce input's icon, select Row -> Main, then wire the connection to the PostgreSQL output's icon,
and Run the job.
Then check PostgreSQL to see that the new account table has been created with the data from Salesforce.
Then double-click the PostgreSQL account output to edit its properties, select Insert or Update for Action on data, click the Edit schema button, and tick the Key checkbox on the Id column of the account output.
Then run the job again.
Repeat those steps to sync other tables.
Now, to execute the job automatically on a schedule / cron, we need to build the job.
Right-click the job node and select Build Job.
Then unzip the file and browse to the sub-folder sync; there we'll find sync_run.sh or sync_run.bat (or jobName_run.sh/.bat) to be called from the schedule / cron.
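For example, a crontab entry (the install path is a hypothetical assumption) that runs the exported job hourly and keeps a log:

```shell
# m  h  dom mon dow  command      -- paths are illustrative assumptions
0    *  *   *   *    /opt/talend/jobs/sync/sync_run.sh >> /var/log/talend-sync.log 2>&1
```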
So we are done!
FYI, here is the videocast showing how to sync from a database to Salesforce.
Saturday, September 13, 2014
Automate Rackspace Cloud Block Storage volume creation with Ansible
Today I want to automate Rackspace Cloud Block Storage volume creation with Ansible for Rackspace instances.
To do that, we need the rax modules, which require pyrax to be installed.
After installing pyrax, I tested creating a block storage volume with the rax_cbs module,
and it threw an error:
msg: pyrax is required for this module
That's interesting. I checked the Python path and tested pyrax:
$ which python
/usr/local/bin/python
$ python
Python 2.7.7 (default, Jun 18 2014, 16:33:32)
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.9)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyrax
>>>
I checked ansible's Python path:
$ head /usr/local/bin/ansible
#!/usr/local/opt/python/bin/python2.7
Both python and /usr/local/opt/python/bin/python2.7 turned out to be symbolic links to /usr/local/Cellar/python/2.7.7_2/Frameworks/Python.framework/Versions/2.7/bin/python2.7.
I googled around but couldn't find the answer.
Finally I inspected the rax_cbs module itself:
$ head /usr/share/ansible/cloud/rax_cbs
#!/usr/bin/python
Then:
$ /usr/bin/python
Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyrax
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named pyrax
That's why it didn't work! /usr/bin/python is the built-in Python on Mac OS X, while 2.7.7 was installed via Homebrew to a different path.
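The general lesson: a module runs under the interpreter named in its own shebang (unless ansible_python_interpreter overrides it), not whatever python resolves to first on your PATH, so line 1 of the module file is the line to inspect. A tiny self-contained stand-in (the real file to check is /usr/share/ansible/cloud/rax_cbs):

```shell
# Checking which interpreter a module will use is just reading its shebang.
module=$(mktemp)
printf '#!/usr/bin/python\n# module body...\n' > "$module"
head -1 "$module"   # -> #!/usr/bin/python
rm -f "$module"
```

Then confirm the import works under that interpreter: /usr/bin/python -c 'import pyrax'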
So let's set ansible_python_interpreter for localhost (which is what the rax modules run against) in the ansible/hosts file:
[local]
localhost ansible_connection=local ansible_python_interpreter=/usr/local/opt/python/bin/python2.7
But then I got another error:
msg: No CloudBlockStorageVolume matching
I found an answer suggesting a patch to /usr/share/ansible/cloud/rax_cbs.
Voila!
I also want to attach the volume to the instance and then fdisk / create a partition / format / mount it automatically, so we check whether the device already exists; if not, we create and attach a Cloud Block Storage volume, then fdisk / create the partition / format it.
Below is the playbook.
Note that this playbook is executed against remote Rackspace instances, so we don't need to set hosts/connection for the build/attach block storage volume tasks: local_action automatically delegates them to the localhost defined in ansible/hosts.
# file: Rackspace_disk/tasks/main.yml
- name: Check if device /dev/xvd_ present
  shell: fdisk -l | grep 'Disk {{device}}' | wc -l
  changed_when: False
  register: device_present

- name: Build a Block Storage Volume
  local_action:
    module: rax_cbs
    credentials: "{{credentials}}"
    name: "{{volum_name}}"
    volume_type: "{{volum_type}}"
    size: "{{volum_size}}"
    region: "{{region}}"
    wait: yes
    state: present
    meta:
      app: "{{volum_name}}"
  sudo: no
  when: device_present.stdout is defined and device_present.stdout|int == 0

- name: Attach a Block Storage Volume
  local_action:
    module: rax_cbs_attachments
    credentials: "{{credentials}}"
    volume: "{{volum_name}}"
    server: "{{server}}"
    device: "{{device}}" # /dev/xvd_
    region: "{{region}}"
    wait: yes
    state: present
  sudo: no
  when: device_present.stdout is defined and device_present.stdout|int == 0

- name: Check if partition /dev/xvd_1 present
  shell: fdisk -l | grep {{device}}1 | wc -l
  changed_when: False
  register: partition_present

- name: Fdisk / create partition / format
  shell: "echo -e 'n\np\n1\n\n\nw\n' | fdisk {{device}} && mkfs -t {{fstype}} {{device}}1 && tune2fs -m 0 {{device}}1"
  when: partition_present.stdout is defined and partition_present.stdout|int == 0

- file: path={{mount_dir}} state=directory

- name: Mount device
  mount: name={{mount_dir}} src={{device}}1 fstype={{fstype}} opts='defaults,noatime,nofail' state=mounted
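For reference, the variables the playbook expects could live in a group_vars file; the names come from the tasks above, while every value here is an illustrative assumption:

```yaml
# file: group_vars/all.yml -- hypothetical values
credentials: /Users/me/.rackspace_cloud_credentials   # pyrax credentials file
volum_name: app-data
volum_type: SATA            # Rackspace CBS volume types: SATA or SSD
volum_size: 100             # GB
region: DFW
server: "{{ inventory_hostname }}"
device: /dev/xvdb
fstype: ext4
mount_dir: /data
```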
Saturday, August 23, 2014
Resize ext4 partition on CentOS 5
When trying to resize an ext4 partition on CentOS 5, I got an error:
$ resize2fs /dev/sdf
resize2fs 1.39 (29-May-2006)
resize2fs: Filesystem has unsupported feature(s) while trying to open /dev/sdf
Couldn't find valid filesystem superblock.
I googled around and found out we must use resize4fs instead (on CentOS 5 the ext4-aware tools ship in the separate e4fsprogs package):
$ resize4fs /dev/sdf
resize4fs 1.41.12 (17-May-2010)
Filesystem at /dev/sdf is mounted on /opt; on-line resizing required
old desc_blocks = 5, new_desc_blocks = 7
Performing an on-line resize of /dev/sdf to 26214400 (4k) blocks.
Similarly, use fsck.ext4 and tune4fs for ext4 filesystems.
Saturday, August 9, 2014
Cisco SPA525G2 SIP configuration over wireless connection
We bought a new Cisco SPA525G2 phone to use over wireless and needed to set it up with RingCentral.
It's pretty simple: we don't need provisioning via a private TFTP server like the old Cisco 7940/7960 phones.
Below is how I made it work.
1. Configure the phone to connect to the wireless router (since I work from home, I needed someone in the office to help with this).
2. Remote to a PC on the same network as the phone, open a browser, and enter the phone's IP address.
If it asks for a user/password to log in, enter: admin / (blank password)
3. Configure the phone:
Suppose RingCentral provides this provisioning information:
SIP Domain sip.ringcentral.com:5060
Outbound Proxy sip10.ringcentral.com:5090
User Name 14151234567
Password aPassword
Authorization ID 0123456789
On the Cisco admin UI, click Advanced at the bottom of the page.
Menu Provisioning
Provision Enable: No
Menu Voice > Ext 1
SIP Transport: TCP
Proxy: sip.ringcentral.com:5060
Outbound Proxy: sip10.ringcentral.com:5090
Use Outbound Proxy: Yes
Display Name: a name
User ID: 14151234567
Password: aPassword
Use Auth ID: Yes
Auth ID: 0123456789
Menu System
Primary NTP Server: 0.north-america.pool.ntp.org
Secondary NTP Server: 1.north-america.pool.ntp.org
Click "Submit All Changes"
Then check the phone status to see whether Line 1 is Registered.
Wednesday, July 9, 2014
Windows update automatic e-mail notification
When managing Windows servers, I looked for a way to check for updates and send email notifications automatically, like yum-cron on Red Hat/CentOS, and found one here. Thank you Paulie.
Though it was written in 2007, it still works well on Windows 2012 servers.
Below is my modification, which categorizes MsrcSeverity as Important when it is blank and uses Gmail SMTP with authentication.
'==========================================================================
' NAME: Windows update automatic e-mail notification
' AUTHOR: Paul Murana, Accendo Solutions
' DATE : 26/08/2007
' MODIFIED: Luan Nguyen - 7/8/2014
'==========================================================================
'Change these variables to control which updates trigger an e-mail alert
'and to configure e-mail from/to addresses
AlertCritical = 1
AlertImportant = 1
AlertModerate = 0
AlertLow = 0
EmailFrom = "from.email@gmail.com"
EmailTo = "to.email@gmail.com"
smtpserver = "smtp.gmail.com"
smtpserverport = 465
smtpauthenticate = 1
sendusername = "from.email@gmail.com"
sendpassword = "password"
smtpusessl = 1
'==========================================================================
Set fso = CreateObject("Scripting.FileSystemObject")
Set updateSession = CreateObject("Microsoft.Update.Session")
Set updateSearcher = updateSession.CreateupdateSearcher()
Set oShell = CreateObject( "WScript.Shell" )
computername = oShell.ExpandEnvironmentStrings("%ComputerName%")
DomainName = oShell.ExpandEnvironmentStrings("%userdomain%")
EMailSubject = "Windows Update Notification - " & DomainName & "\" & computername
Set oshell = Nothing
Set searchResult = updateSearcher.Search("IsInstalled=0 and Type='Software'")

If searchResult.Updates.Count = 0 Then
    Wscript.Echo "No updates :)"
    WScript.Quit
End If

For I = 0 To searchResult.Updates.Count-1
    Set update = searchResult.Updates.Item(I)
    Select Case update.MsrcSeverity
        Case "Critical"
            CriticalCount = CriticalCount + 1
            CriticalHTML = CriticalHTML & MakeHTMLLine(update)
            Wscript.Echo update.MsrcSeverity & " : " & update, vbCRLF
        Case "Moderate"
            ModerateCount = ModerateCount + 1
            ModerateHTML = ModerateHTML & MakeHTMLLine(update)
            Wscript.Echo update.MsrcSeverity & " : " & update, vbCRLF
        Case "Low"
            LowCount = LowCount + 1
            LowHTML = LowHTML & MakeHTMLLine(update)
            Wscript.Echo update.MsrcSeverity & " : " & update, vbCRLF
        Case Else '"Important" or blank
            ImportantCount = ImportantCount + 1
            ImportantHTML = ImportantHTML & MakeHTMLLine(update)
            Wscript.Echo "Important : " & update, vbCRLF
    End Select
Next

If (AlertCritical=1 and CriticalCount > 0) Then SendEmail=1 end if
If (AlertImportant=1 and ImportantCount > 0) Then SendEmail=1 end if
If (AlertModerate=1 and ModerateCount > 0) Then SendEmail=1 end if
If (AlertLow=1 and LowCount > 0) Then SendEmail=1 end if

If SendEmail=1 and smtpserver <> "" Then
    Set objMessage = CreateObject("CDO.Message")
    objMessage.Subject = EMailSubject
    objMessage.From = EmailFrom
    objMessage.To = EmailTo
    objMessage.HTMLBody = ReplaceHTMLTemplate()
    Set iConf = CreateObject("CDO.Configuration")
    Set Flds = iConf.Fields
    schema = "http://schemas.microsoft.com/cdo/configuration/"
    Flds.Item(schema & "sendusing") = 2
    Flds.Item(schema & "smtpserver") = smtpserver
    Flds.Item(schema & "smtpserverport") = smtpserverport
    Flds.Item(schema & "smtpauthenticate") = smtpauthenticate
    If smtpauthenticate = 1 and sendusername <> "" and sendpassword <> "" Then
        Flds.Item(schema & "sendusername") = sendusername
        Flds.Item(schema & "sendpassword") = sendpassword
    End If
    Flds.Item(schema & "smtpusessl") = smtpusessl
    Flds.Update
    Set objMessage.Configuration = iConf
    objMessage.Send
    Set objMessage = Nothing
    Set iConf = Nothing
    Set Flds = Nothing
    Wscript.Echo "Email sent to " & EmailTo, vbCRLF
End If
Function MakeHTMLLine(update)
    HTMLLine = "<tr><td>" & update.Title & "</td><td>" & update.description & "</td><td>"
    counter = 0
    For Each Article in Update.KBArticleIDs
        If counter > 0 Then HTMLLine = HTMLLine & "<BR>"
        HTMLLine = HTMLLine & "<a href=" & chr(34) & "http://support.microsoft.com/kb/" & article & "/en-us" & chr(34) & ">KB" & article & "</a>"
        counter = counter + 1
    Next
    For Each Info in Update.moreinfourls
        If counter > 0 Then HTMLLine = HTMLLine & "<BR>"
        HTMLLine = HTMLLine & "<a href=" & chr(34) & info & chr(34) & ">" & "More information...</a>"
        counter = counter + 1
    Next
    HTMLLine = HTMLLine & "</td></tr>"
    MakeHTMLLine = HTMLLine
End Function

Function ReplaceHTMLTemplate()
    Set HTMLFile = fso.OpenTextFile((fso.GetParentFolderName(WScript.ScriptFullName) & "\updatetemplate.htm"),1,false)
    MasterHTML = HTMLFile.ReadAll
    HTMLFile.Close
    MasterHTML = Replace(MasterHTML, "[criticalupdatecontents]", CriticalHTML)
    MasterHTML = Replace(MasterHTML, "[importantupdatecontents]", ImportantHTML)
    MasterHTML = Replace(MasterHTML, "[moderateupdatecontents]", ModerateHTML)
    MasterHTML = Replace(MasterHTML, "[lowupdatecontents]", LowHTML)
    MasterHTML = Replace(MasterHTML, "[computername]", Computername)
    MasterHTML = Replace(MasterHTML, "[domainname]", domainname)
    MasterHTML = Replace(MasterHTML, "[timenow]", Now())
    If (CriticalCount = 0) Then
        MasterHTML = TrimSection(MasterHTML, "<!--CriticalStart-->", "<!--CriticalEnd-->")
    End If
    If (ImportantCount = 0) Then
        MasterHTML = TrimSection(MasterHTML, "<!--ImportantStart-->", "<!--ImportantEnd-->")
    End If
    If (ModerateCount = 0) Then
        MasterHTML = TrimSection(MasterHTML, "<!--ModerateStart-->", "<!--ModerateEnd-->")
    End If
    If (LowCount = 0) Then
        MasterHTML = TrimSection(MasterHTML, "<!--LowStart-->", "<!--LowEnd-->")
    End If
    ReplaceHTMLTemplate = MasterHTML
End Function

Function TrimSection(CompleteString, LeftString, RightString)
    LeftChunkPos = InStr(CompleteString, LeftString)
    RightChunkPos = InStrRev(CompleteString, RightString)
    LeftChunk = Left(CompleteString, LeftChunkPos-1)
    RightChunk = Mid(CompleteString, RightChunkPos)
    TrimSection = LeftChunk & RightChunk
End Function
Wednesday, June 4, 2014
Locking CVS repository branches
Sometimes we need to lock CVS repository branches before a release, to prevent accidental commits by developers.
In a CVS repository, the commitinfo file under CVSROOT defines programs to execute whenever `cvs commit` is about to run.
So let's create a trigger bash script called validateCommit.sh that takes the branches to be validated as parameters.
We can place this script in any folder, for example /cvs/scripts/.
Then just append a line at the bottom of the CVSROOT/commitinfo of a repository:
ALL /cvs/scripts/validateCommit.sh branch1 branch2
We could make a script to append/remove branches from that list on multiple repositories at the same time.
Git and SVN provide similar pre-commit hook features, so we can do the same thing there.
#!/bin/bash
if [ -f CVS/Tag ]; then
    tag=`cat CVS/Tag`
else
    tag=THEAD
fi
for branch in "$@"
do
    if [ "$tag" == "T$branch" ]; then
        echo Cannot commit to $branch
        exit 1
    fi
done
echo Commit OK
exit 0
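A self-contained smoke test of the hook's logic (the script body is inlined into a temp dir; CVS records the sticky branch as T&lt;branch&gt; in the working copy's CVS/Tag file):

```shell
# Exercise validateCommit.sh locally: commits on a listed branch are
# rejected, anything else passes.
workdir=$(mktemp -d)
mkdir -p "$workdir/CVS"
cat > "$workdir/validateCommit.sh" <<'EOF'
#!/bin/bash
if [ -f CVS/Tag ]; then
    tag=`cat CVS/Tag`
else
    tag=THEAD
fi
for branch in "$@"
do
    if [ "$tag" == "T$branch" ]; then
        echo Cannot commit to $branch
        exit 1
    fi
done
echo Commit OK
exit 0
EOF
chmod +x "$workdir/validateCommit.sh"
cd "$workdir"
echo 'Tbranch1' > CVS/Tag
./validateCommit.sh branch1 branch2    # -> Cannot commit to branch1
echo 'Tfeature' > CVS/Tag
./validateCommit.sh branch1 branch2    # -> Commit OK
```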