Using the batch configuration management tool Ansible to write a playbook that automatically installs a Ceph cluster on Huawei Cloud

Time: 2019-11-13

Ansible, playbook, Huawei Cloud, Ceph

First, purchase the virtual machines needed to build the Ceph cluster on Huawei Cloud.

Then purchase the storage disks required by Ceph.

Attach the purchased disks to the virtual machines used to build Ceph.

Install Ansible on the springboard (jump) machine.
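
For example, on a CentOS 7 springboard machine Ansible is typically installed from the EPEL repository (the exact repository depends on your environment):

 # Install Ansible from EPEL (assumes a CentOS/RHEL 7 springboard machine)
 yum install -y epel-release
 yum install -y ansible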

Check the Ansible version to verify that Ansible was installed successfully.
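
If the installation succeeded, the following command prints the installed version:

 ansible --version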

Configure the host groups.
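
The playbook below targets a group named ceph as well as the individual hosts ceph-0001 through ceph-0006, so the inventory (by default /etc/ansible/hosts) needs entries along these lines; the hostnames must be resolvable, e.g. via /etc/hosts:

 # /etc/ansible/hosts -- host group used by the playbook
 [ceph]
 ceph-0001
 ceph-0002
 ceph-0003
 ceph-0004
 ceph-0005
 ceph-0006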

Test the result.
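
Connectivity to the whole group can be checked with Ansible's ping module; every host should answer "pong":

 ansible ceph -m ping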

Write the playbook file as follows:

---
 #Synchronize the yum repo file to each node
 - hosts: ceph
   remote_user: root
   tasks: 
     - copy:
         src: /etc/yum.repos.d/ceph.repo
         dest: /etc/yum.repos.d/ceph.repo
     - shell: yum clean all
 #Install ceph-deploy on the ceph-0001 host and create the working directory
 - hosts: ceph-0001
   remote_user: root
   tasks:
     - yum: 
         name: ceph-deploy
         state: installed
     - file: 
         path: /root/ceph-cluster
         state: directory
         mode: '0755'
 #Install the Ceph software packages on all Ceph nodes
 - hosts: ceph
   remote_user: root
   tasks:
     - yum:
         name: ceph-osd,ceph-mds
         state: installed
 #Install ceph-mon on ceph-0001, ceph-0002 and ceph-0003
 - hosts: ceph-0001,ceph-0002,ceph-0003
   remote_user: root
   tasks:
     - yum:
         name: ceph-mon
         state: installed
 #Initialize the mon service
 - hosts: ceph-0001
   remote_user: root
   tasks:
     - shell: 'chdir=/root/ceph-cluster ceph-deploy new ceph-0001 ceph-0002 ceph-0003'
     - shell: 'chdir=/root/ceph-cluster ceph-deploy mon create-initial'
 #Prepare the disk partitions, create the journal partition, and permanently set the device permissions via a udev rule
 - hosts: ceph
   remote_user: root
   tasks:
     - shell: parted /dev/vdb mklabel gpt
     - shell: parted /dev/vdb mkpart primary 1 100%
     - shell: chown ceph.ceph /dev/vdb1
     - copy: 
         src: /etc/udev/rules.d/70-vdb.rules
         dest: /etc/udev/rules.d/70-vdb.rules
 #Use the ceph-deploy tool to initialize the data disks, create the OSD cluster, and deploy the Ceph file system
 - hosts: ceph-0001
   remote_user: root
   tasks:
     - shell: 'chdir=/root/ceph-cluster ceph-deploy disk zap ceph-0001:vdc'
     - shell: 'chdir=/root/ceph-cluster ceph-deploy disk zap ceph-0002:vdc'
     - shell: 'chdir=/root/ceph-cluster ceph-deploy disk zap ceph-0003:vdc'
     - shell: 'chdir=/root/ceph-cluster ceph-deploy disk zap ceph-0004:vdc'
     - shell: 'chdir=/root/ceph-cluster ceph-deploy disk zap ceph-0005:vdc'
     - shell: 'chdir=/root/ceph-cluster ceph-deploy disk zap ceph-0006:vdc'
     - shell: 'chdir=/root/ceph-cluster ceph-deploy osd create ceph-0001:vdc:/dev/vdb1'
     - shell: 'chdir=/root/ceph-cluster ceph-deploy osd create ceph-0002:vdc:/dev/vdb1'
     - shell: 'chdir=/root/ceph-cluster ceph-deploy osd create ceph-0003:vdc:/dev/vdb1'
     - shell: 'chdir=/root/ceph-cluster ceph-deploy osd create ceph-0004:vdc:/dev/vdb1'
     - shell: 'chdir=/root/ceph-cluster ceph-deploy osd create ceph-0005:vdc:/dev/vdb1'
     - shell: 'chdir=/root/ceph-cluster ceph-deploy osd create ceph-0006:vdc:/dev/vdb1'
     - shell: 'chdir=/root/ceph-cluster ceph-deploy mds create ceph-0006'
     - shell: 'chdir=/root/ceph-cluster ceph osd pool create cephfs_data 128'
     - shell: 'chdir=/root/ceph-cluster ceph osd pool create cephfs_metadata 128'
     - shell: 'chdir=/root/ceph-cluster ceph fs new myfs1 cephfs_metadata cephfs_data'
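
Note that the first play copies /etc/yum.repos.d/ceph.repo from the springboard machine to every node, so that file must already exist on the springboard machine. A minimal sketch follows; the baseurl is a placeholder assumption, so point it at the Ceph repository your environment actually uses:

 # /etc/yum.repos.d/ceph.repo -- example only; baseurl is an assumed placeholder
 [ceph]
 name=Ceph packages
 baseurl=http://mirror.example.com/ceph/el7/x86_64/
 gpgcheck=0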

Run the playbook and watch each play execute:
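
Assuming the playbook above was saved as ceph.yml (this filename is an assumption):

 ansible-playbook ceph.yml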

Finally, log in to the ceph-0001 management host and verify that the cluster was built successfully.
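
The overall cluster state can be checked with the standard status command:

 # Run on ceph-0001; a healthy cluster reports HEALTH_OK
 ceph -s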