Set Up a Chef Cluster on CentOS 7

1.   Environment: four hosts: HD8, HD7, HD6, HD5

On each of these hosts there is a chefadmin account with sudo privileges.

HD8: chefserver

HD7: chefdk (workstation)

HD6: chefclient1

HD5: chefclient2

Every host carries the same /etc/hosts entries:

HD8   chefserver
HD7   chefdk
HD6   chefclient1
HD5   chefclient2
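On a real cluster, each /etc/hosts line also needs an IP address in front of the names; a minimal sketch, with placeholder addresses:

```
192.168.1.8   HD8   chefserver
192.168.1.7   HD7   chefdk
192.168.1.6   HD6   chefclient1
192.168.1.5   HD5   chefclient2
```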


2.  On HD8 (chefserver)

Use root account:

#  cd /usr/local/src

#  wget

#  rpm -ivh chef-server-core-12.17.33-1.el7.x86_64.rpm

#  chef-server-ctl reconfigure

#   chef-server-ctl status

#   chef-server-ctl user-create chefadmin FirstName LastName chefadmin@example.com chefadminpassword -f /etc/chef/chefadmin.pem

(user-create requires an email address argument; chefadmin@example.com is a placeholder)

#  chef-server-ctl service-list

#   chef-server-ctl user-list

#  chef-server-ctl org-create datajaguar "DataJaguar, Inc" --association_user chefadmin -f /etc/chef/datajaguar-validator.pem

#  firewall-cmd --permanent --zone=public --add-service=http

#  firewall-cmd --permanent --zone=public --add-service=https

#  firewall-cmd --reload

(the --permanent rules take effect only after a reload)


3.  On HD7 (chefdk)

#  yum install ruby

# yum install git

# cd /usr/local/src

#  wget

#  rpm -ivh chefdk-1.5.0-1.el7.x86_64.rpm

#   chef verify

#  useradd chefadmin

# passwd chefadmin

# su - chefadmin

In user chefadmin account:

$ echo 'eval "$(chef shell-init bash)"' >> ~/.bash_profile

$  .  ~/.bash_profile

$  cd ~

$  chef generate repo chef-repo

$  cd chef-repo

$  git init

$ git config --global user.name "chefadmin"

$  git config --global user.email ""

$  mkdir  .chef

$  echo ‘.chef’ >> ~/chef-repo/.gitignore

$ cd  ~/chef-repo

$ git add .

$ git commit

$  scp -pr root@chefserver:/etc/chef/chefadmin.pem ~/chef-repo/.chef/

$  scp -pr root@chefserver:/etc/chef/datajaguar-validator.pem ~/chef-repo/.chef/

$   vi ~/chef-repo/.chef/knife.rb

current_dir = File.dirname(__FILE__)
log_level :info
log_location STDOUT
node_name "chefadmin"
client_key "#{current_dir}/chefadmin.pem"
validation_client_name "datajaguar-validator"
validation_key "#{current_dir}/datajaguar-validator.pem"
chef_server_url "https://HD8/organizations/datajaguar"
syntax_check_cache_path "#{ENV['HOME']}/.chef/syntaxcache"
cookbook_path ["#{current_dir}/../cookbooks"]

$ knife ssl fetch

$ knife bootstrap chefclient1 -x chefadmin --sudo

(chefadmin is a user account on host chefclient1 and must have sudo privileges)

$   knife bootstrap chefclient2 -x chefadmin --sudo

(chefadmin is a user account on host chefclient2 and must have sudo privileges)
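With more clients, the two bootstrap commands above can be wrapped in a small loop. This is a sketch: the knife command itself is commented out, so it runs for real only when you uncomment it on the workstation.

```shell
# Bootstrap every client node in $nodes (host names from this setup).
nodes="chefclient1 chefclient2"
for node in $nodes; do
  echo "bootstrapping $node"
  # knife bootstrap "$node" -x chefadmin --sudo   # uncomment on the workstation
done
```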





Install Boost on Linux

  1. download boost_1_68_0.tar.gz
  2. # cp boost_1_68_0.tar.gz /usr/local/src
  3. # tar zxf boost_1_68_0.tar.gz
  4. # cd boost_1_68_0
  5. # ./bootstrap.sh --prefix=/usr/local/boost_168_0
  6. # ./b2
  7. # ./b2 install

/usr/local/boost_168_0/include/ will contain header files

/usr/local/boost_168_0/lib/  will contain individual library files
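A quick way to confirm the install landed where expected; a sketch, with the prefix matching the --prefix used above:

```shell
# Check that the Boost install prefix contains headers and libraries.
BOOST_PREFIX=/usr/local/boost_168_0
for d in include lib; do
  if [ -d "$BOOST_PREFIX/$d" ]; then
    echo "$BOOST_PREFIX/$d: present"
  else
    echo "$BOOST_PREFIX/$d: missing (re-run ./b2 install)"
  fi
done
```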

Install CGAL Library on Linux

    1. download the source tar ball CGAL-4.12.tar.gz
    2. Run the following commands as sudo or root
    3. # cp CGAL-4.12.tar.gz  /usr/local/src
    4. # tar zxf CGAL-4.12.tar.gz
    5. # cd cgal-releases-CGAL-4.12
    6. # mkdir -p build/release
    7. # cd build/release
    8. Make sure you have an up-to-date cmake (old versions will not work)
    9. # cmake -DCMAKE_BUILD_TYPE=Release -DBoost_INCLUDE_DIR=/usr/local/boost_168_0/include ../..
    10. # make
    11. # make install

The CGAL header files will be in /usr/local/include/CGAL/

The library .so files will be in /usr/local/lib64/

Replicating database using triggers

Suppose you have a table on any RDBMS database:

table123, with two columns: uid and addr

You can create another table to capture insert, update, and delete operations on table123:

create table table123_trigger_table
(
ts datetime primary key,
uid int,
addr varchar(64),
action char(1)
);

Then you can create three triggers to capture the changes in table123:

CREATE TRIGGER after_table123_insert AFTER INSERT ON table123 FOR EACH ROW
INSERT INTO table123_trigger_table
SET action = 'I',
uid = NEW.uid,
addr = NEW.addr,
ts = NOW();


CREATE TRIGGER after_table123_update AFTER UPDATE ON table123 FOR EACH ROW
INSERT INTO table123_trigger_table
SET action = 'U',
uid = NEW.uid,
addr = NEW.addr,
ts = NOW();


CREATE TRIGGER after_table123_delete AFTER DELETE ON table123 FOR EACH ROW
INSERT INTO table123_trigger_table
SET action = 'D',
uid = OLD.uid,
addr = OLD.addr,
ts = NOW();


After the three triggers are created, you can write a Java program that uses JDBC to pull the records into the target database and table. The ts column in the trigger table is a timestamp and the primary key, so it can be used to track change times. The trigger table can be cleaned up periodically.
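The same capture pattern can be demonstrated end to end with SQLite from the shell. This is a sketch, not the MySQL syntax above: SQLite triggers use a BEGIN...END body instead of INSERT...SET, and the sqlite3 CLI is assumed to be installed.

```shell
# Create source and trigger tables, add an insert trigger, then verify
# that an insert into table123 is captured in table123_trigger_table.
db=$(mktemp)
sqlite3 "$db" <<'EOF'
CREATE TABLE table123 (uid INT, addr VARCHAR(64));
CREATE TABLE table123_trigger_table (
  ts DATETIME, uid INT, addr VARCHAR(64), action CHAR(1));
CREATE TRIGGER after_table123_insert AFTER INSERT ON table123
BEGIN
  INSERT INTO table123_trigger_table
  VALUES (datetime('now'), NEW.uid, NEW.addr, 'I');
END;
INSERT INTO table123 VALUES (1, 'main st');
EOF
captured=$(sqlite3 "$db" "SELECT action FROM table123_trigger_table;")
rm -f "$db"
echo "captured action: $captured"
```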




Suppose you want to install the Java 1.8 files into the /opt directory:

(# means root prompt, $ means your regular user account)

# mkdir -p /opt

# cd /opt/
# wget --no-cookies --no-check-certificate --header "Cookie:; oraclelicense=accept-securebackup-cookie" ""

# tar xzf jdk-8u111-linux-x64.tar.gz
# ln -sf /opt/jdk1.8.0_111 /opt/java

$ export JAVA_HOME=/opt/java
$ export JRE_HOME=/opt/java/jre

$ export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

There you go, check "java -version" for the new 1.8 version!
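To make those exports survive new shells, they can go into a profile script. A sketch: /etc/profile.d/java.sh is the conventional location on CentOS, and the file is staged in /tmp here so the sketch runs without root.

```shell
# Write the Java environment to a profile script, then source it.
cat > /tmp/java.sh <<'EOF'
export JAVA_HOME=/opt/java
export JRE_HOME=/opt/java/jre
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
EOF
. /tmp/java.sh
echo "JAVA_HOME=$JAVA_HOME"
# as root, install it for all users:
# cp /tmp/java.sh /etc/profile.d/java.sh
```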


How to replace a failed hard disk from Linux RAID

Suppose /dev/raid1 is a RAID 1 array of two bootable partitions, /dev/hde1 and /dev/hdf1, and /dev/hdf1 has failed. Here are the steps to replace it.

Step One: confirm that /dev/hdf1 failed

# cat /proc/mdstat

You should see [U_] instead of [UU] for the degraded RAID 1 array.


Step Two: remove the failed disk

# mdadm --manage /dev/raid1 --fail /dev/hdf1

# mdadm --manage /dev/raid1 --remove /dev/hdf1


Step Three:  shut down the system and install a new disk

# shutdown -h now

Physically install the new disk, then boot the system back up.


Step Four:  add the new disk  /dev/hdf

#  sfdisk -d /dev/hde | sfdisk /dev/hdf

# mdadm --manage /dev/raid1 --add /dev/hdf1


Step Five: wait for /dev/hde and /dev/hdf  to become fully synchronized

#  cat /proc/mdstat
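The wait in Step Five can be scripted. A sketch: while the mirror is rebuilding, /proc/mdstat shows a resync/recovery progress line, and it disappears when the array is in sync.

```shell
# Return success once no resync/recovery is in progress.
raid_synced() {
  ! grep -Eq 'recovery|resync' /proc/mdstat 2>/dev/null
}
# Poll every 30 seconds until the mirror is rebuilt:
# while ! raid_synced; do sleep 30; done
```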



How to make software RAID on Linux

RAID stands for Redundant Array of Inexpensive Disks. In this article, we show how to set up RAID 1 (disk mirroring), where data is written to two disks (either HDD or SSD) simultaneously.

Step One: Use two disk partitions that are of approximately the same size. For example, /dev/hde1, /dev/hdf1

Step Two:  Set the type of each disk partition to "Linux raid autodetect"

# fdisk   /dev/hde
Command (m for help): m

p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id

Command (m for help): t
Partition number (1-5): 1
Hex code (type L to list codes): L

16 Hidden FAT16 61 SpeedStor f2 DOS secondary
17 Hidden HPFS/NTF 63 GNU HURD or Sys fd Linux raid auto
18 AST SmartSleep 64 Novell Netware fe LANstep
1b Hidden Win95 FA 65 Novell Netware ff BBT
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Repeat this step for /dev/hdf1:
# fdisk  /dev/hdf
… (similar to /dev/hde1) …

Step Three: create the RAID set of type 1

# mdadm  --create  --verbose  /dev/raid1  --level=1 \
--raid-devices=2  /dev/hde1  /dev/hdf1

# cat /proc/mdstat   (to confirm it is created)

Step Four: format the new RAID set

# mkfs.xfs   /dev/raid1

Step Five: create config file

On CentOS, Red Hat, Fedora:
# mdadm --detail --scan > /etc/mdadm.conf

On Debian, Ubuntu:
# mdadm --detail --scan > /etc/mdadm/mdadm.conf

Step Six: mount the RAID set
# mkdir  /mnt/raid1
# vi  /etc/fstab:
/dev/raid1    /mnt/raid1     xfs      defaults     1 2
# mount  -a

You can check the status of all devices:
# cat  /proc/mdstat

Linux software RAID provides redundancy across hard disks, but it is slower than a hardware RAID controller, which is usually configured in the system BIOS and is transparent to Linux.