Hadoop framework: Building a distributed environment in cluster mode

Time:2020-11-17

Source code: GitHub || Gitee (addresses at the end of this article)

1、 Basic environment configuration

1. Three services

Prepare three CentOS 7 servers, cloned from the basic pseudo-distributed environment.

192.168.37.133 hop01, 192.168.37.134 hop02, 192.168.37.136 hop03

2. Set host name

## Set the host name
hostnamectl set-hostname hop01
## Restart to apply
reboot -f
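
A quick check after the server comes back up (a minimal sketch; run on each of the three servers) confirms that the new name took effect:

# Verify the host name after the reboot
hostname
hostnamectl status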

3. Host name resolution

vim /etc/hosts
# Add the cluster nodes
192.168.37.133 hop01
192.168.37.134 hop02
192.168.37.136 hop03
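
A simple connectivity check (a minimal sketch; repeat from each of the three servers) verifies that the host names resolve:

# Verify name resolution and connectivity between the nodes
ping -c 1 hop01
ping -c 1 hop02
ping -c 1 hop03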

4. Passwordless SSH login

Configure passwordless SSH login between the three servers.

[root@hop01 ~]# ssh-keygen -t rsa
... press Enter through all prompts to accept the defaults
[root@hop01 ~]# cd .ssh
... copy the public key to each cluster server
[root@hop01 .ssh]# ssh-copy-id hop01
[root@hop01 .ssh]# ssh-copy-id hop02
[root@hop01 .ssh]# ssh-copy-id hop03
... log in to hop02 from hop01 to verify
[root@hop01 ~]# ssh hop02

This shows the operation on hop01; perform the same steps on hop02 and hop03 as well.

5. Time synchronization

NTP component installation

# Install
yum install ntpdate ntp -y
# Verify the installed packages
rpm -qa | grep ntp

Basic management commands

# View the status
service ntpd status
# Start the service
service ntpd start
# Enable start on boot
chkconfig ntpd on

Configure hop01 as the time server

# Modify the NTP configuration
vim /etc/ntp.conf
# Add the following content
restrict 192.168.0.0 mask 255.255.255.0 nomodify notrap
server 127.0.0.1
fudge 127.0.0.1 stratum 10

On hop02 and hop03, change the time source to synchronize from hop01, and comment out the default servers that fetch time from the public network.

server 192.168.37.133
# server 0.centos.pool.ntp.org iburst
# server 1.centos.pool.ntp.org iburst
# server 2.centos.pool.ntp.org iburst
# server 3.centos.pool.ntp.org iburst
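
After editing the configuration, restart ntpd and check the peer list (a minimal sketch, assuming the ntp package installed above; run on hop02 and hop03):

# Restart ntpd so the new configuration takes effect
service ntpd restart
# hop01 (192.168.37.133) should appear in the peer list
ntpq -p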

Add a scheduled task on hop02 and hop03 to pull the time from hop01 every ten minutes.

[root@hop02 ~]# crontab -e
*/10 * * * * /usr/sbin/ntpdate hop01
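
The entry can be confirmed afterwards (a minimal sketch):

# List the scheduled tasks for the current user
crontab -l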

Manually change the time on hop02 and hop03 to test the synchronization

# Set an arbitrary time
date -s "2018-05-20 13:14:55"
# View the current time
date

Within ten minutes, the time on hop02 and hop03 will be corrected back to the time of the hop01 server.
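
To verify without waiting for the cron job, the synchronization can be triggered manually (a minimal sketch; run on hop02 or hop03):

# ntpdate cannot run while ntpd holds the NTP port, so stop ntpd first if it is running
service ntpd stop
# Pull the time from hop01 immediately, then confirm
/usr/sbin/ntpdate hop01
date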

6. Clean up the environment

Because the three CentOS 7 servers were cloned from the pseudo-distributed virtual machine, delete the data and log folders left over from the original Hadoop configuration on each of them.

[root@hop01 hadoop2.7]# rm -rf data/ logs/

2、 Cluster environment setup

1. Overview of cluster configuration

Host     HDFS        YARN            Standalone service
hop01    DataNode    NodeManager     NameNode
hop02    DataNode    NodeManager     ResourceManager
hop03    DataNode    NodeManager     SecondaryNameNode

2. Modify configuration

vim core-site.xml

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://hop01:9000</value>
</property>

All three servers share this configuration; fs.defaultFS points to the NameNode address on hop01.

vim hdfs-site.xml

<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>

<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hop03:50090</value>
</property>

Here the replication factor is set to 3 and the SecondaryNameNode is placed on hop03. Apply the same configuration on all three servers.

vim yarn-site.xml

<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hop02</value>
</property>

Specify the ResourceManager service on hop02.

vim mapred-site.xml

<!-- JobHistory server address -->
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>hop01:10020</value>
</property>

<!-- JobHistory web UI address -->
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hop01:19888</value>
</property>

The JobHistory server and its web UI are placed on hop01.
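
Note that the JobHistory server is not started by the HDFS or YARN start scripts; if it is needed, it can be started separately (a minimal sketch, assuming the Hadoop 2.7 layout used in this article; run on hop01):

# Start the JobHistory server; its web UI is then available at http://hop01:19888
[root@hop01 hadoop2.7]# sbin/mr-jobhistory-daemon.sh start historyserver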

3. Cluster service configuration

Path: /opt/hadoop2.7/etc/hadoop

File: vim slaves

hop01
hop02
hop03

This lists the three nodes of the cluster. Synchronize the same configuration files to the other servers, as shown in the sketch below.
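
One way to synchronize the configuration (a minimal sketch, assuming the install path /opt/hadoop2.7 from this article and the passwordless SSH configured earlier):

# Distribute the configuration files from hop01 to the other nodes
[root@hop01 hadoop2.7]# scp etc/hadoop/* root@hop02:/opt/hadoop2.7/etc/hadoop/
[root@hop01 hadoop2.7]# scp etc/hadoop/* root@hop03:/opt/hadoop2.7/etc/hadoop/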

4. Format namenode

Note that the NameNode is configured on hop01, so format it there.

[root@hop01 hadoop2.7]# bin/hdfs namenode -format

5. Start HDFS

[root@hop01 hadoop2.7]# sbin/start-dfs.sh
Starting namenodes on [hop01]
hop01: starting namenode
hop03: starting datanode
hop02: starting datanode
hop01: starting datanode
Starting secondary namenodes [hop03]
hop03: starting secondarynamenode

Note that the startup log matches the configuration: the NameNode starts on hop01 and the SecondaryNameNode starts on hop03. Each process can be checked with the jps command.
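
As an additional check (a minimal sketch; run on hop01), the DataNode registration can be confirmed with an HDFS admin report:

# List the live DataNodes registered with the NameNode
[root@hop01 hadoop2.7]# bin/hdfs dfsadmin -report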

6. Start yarn

Note that the ResourceManager is configured on hop02, so execute the start command on hop02.

[root@hop02 hadoop2.7]# sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager
hop03: starting nodemanager
hop01: starting nodemanager
hop02: starting nodemanager

Note the startup log here. At this point, all of the planned cluster services are started.

[root@hop01 hadoop2.7]# jps
4306 NodeManager
4043 DataNode
3949 NameNode
[root@hop02 hadoop2.7]# jps
3733 ResourceManager
3829 NodeManager
3613 DataNode
[root@hop03 hadoop2.7]# jps
3748 DataNode
3928 NodeManager
3803 SecondaryNameNode

The processes running on each server match the planned configuration.

7. Web interface

NameNode: http://hop01:50070
SecondaryNameNode: http://hop03:50090
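
Finally, a simple smoke test (a minimal sketch, assuming the examples jar bundled with the Hadoop 2.7 distribution; the exact jar name may differ) confirms that HDFS and YARN work end to end:

# Create a directory in HDFS and upload a file
[root@hop01 hadoop2.7]# bin/hdfs dfs -mkdir -p /tmp/test
[root@hop01 hadoop2.7]# bin/hdfs dfs -put etc/hadoop/core-site.xml /tmp/test
[root@hop01 hadoop2.7]# bin/hdfs dfs -ls /tmp/test
# Submit the bundled example job to YARN
[root@hop01 hadoop2.7]# bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10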

3、 Source code address

GitHub · address
https://github.com/cicadasmile/big-data-parent
Gitee · address
https://gitee.com/cicadasmile/big-data-parent
