Cluster/Node Planning

[Figure: cluster/node planning table (1.png)]

Initial Configuration

The following needs to be configured on all nodes.

Disable the firewall/SELinux

[root@xnode1 ~]# systemctl disable --now firewalld

# re-apply setenforce 0 at every boot via rc.local
[root@xnode1 ~]# echo "setenforce 0" >> /etc/rc.local
[root@xnode1 ~]# chmod +x /etc/rc.local
[root@xnode1 ~]# setenforce 0
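
The rc.local line above only re-applies setenforce 0 at boot. As an optional alternative (a sketch, not part of the original steps), the mode can also be made persistent through the SELinux config file:

# optional: persist the SELinux mode across reboots instead of relying on rc.local (run on every node)
[root@xnode1 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config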

Hosts/YUM Configuration

The *.mall hostnames must resolve correctly (they are hard-coded in the jar packages), and the database password must be 123456.

Use xnode1 as the FTP server and configure an FTP YUM source (a client-side repo sketch follows the commands below).

[root@xnode1 ~]# cat /etc/hosts

192.168.1.11 xnode1 zk1.mall kafka1.mall mall redis.mall mysql.mall nginx.mall
192.168.1.12 xnode2 zk2.mall kafka2.mall
192.168.1.13 xnode3 zk3.mall kafka3.mall



[root@xnode1 ~]# yum install vsftpd -y
[root@xnode1 ~]# echo "anon_root=/opt" >> /etc/vsftpd/vsftpd.conf
[root@xnode1 ~]# systemctl enable --now vsftpd
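
The commands above only set up the FTP server side. For the nodes to actually use it as a YUM source, each node also needs a repo file pointing at the FTP share; a minimal sketch, assuming the packages sit in a directory named gpmall-repo under /opt on xnode1 (the directory name is an assumption; adjust it to the real path):

# hypothetical repo file; gpmall-repo is a placeholder for the actual directory under /opt
[root@xnode2 ~]# cat > /etc/yum.repos.d/ftp.repo << EOF
[gpmall]
name=gpmall
baseurl=ftp://192.168.1.11/gpmall-repo
gpgcheck=0
enabled=1
EOF
# create the same file on xnode1 and xnode3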

Configure the Read/Write-Splitting Database

xnode2 and xnode3 form the master/slave database pair.

xnode1 runs Mycat to provide read/write splitting.

Master/Slave Databases

Install the software

These steps are identical on xnode2 and xnode3 (only xnode2 is shown).

[root@xnode2 ~]# yum install mariadb mariadb-server -y
[root@xnode2 ~]# systemctl start mariadb
[root@xnode2 ~]# systemctl enable mariadb
[root@xnode2 ~]# mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] Y
New password: 123456
Re-enter new password: 123456
Password updated successfully!
Reloading privilege tables..
... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
... Success!

Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] n
... skipping.

By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
... Success!

Cleaning up...

All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
  • Modify the configuration file

Here the two nodes differ slightly (server_id must be unique).

[root@xnode2 ~]# vim /etc/my.cnf
[mysqld]
log_bin=mysql-bin    # enable binary logging on the master
server_id=12         # must be unique across the replication pair

[root@xnode2 ~]# systemctl restart mariadb

************************************************************

[root@xnode3 ~]# vim /etc/my.cnf
[mysqld]
log_bin=mysql-bin
server_id=13

[root@xnode3 ~]# systemctl restart mariadb
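
Before configuring replication, it is worth confirming that binary logging actually took effect on the master; a quick check (the file name and position will vary):

[root@xnode2 ~]# mysql -uroot -p123456 -e 'show master status;'
# should report a log file such as mysql-bin.000001 and a position; empty output means log_bin was not picked up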
  • Configure master/slave replication
[root@xnode2 ~]# mysql -uroot -p123456

MariaDB [(none)]> grant all on *.* to root@'%' identified by '123456';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> grant all on *.* to root@localhost identified by '123456';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> grant replication slave on *.* to root@'%' identified by '123456';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> grant replication slave on *.* to root@localhost identified by '123456';
Query OK, 0 rows affected (0.000 sec)

************************************************************

[root@xnode3 ~]# mysql -uroot -p123456

MariaDB [(none)]> change master to master_host='192.168.1.12',master_user='root',master_password='123456';
Query OK, 0 rows affected (0.007 sec)

MariaDB [(none)]> start slave;
Query OK, 0 rows affected (0.002 sec)

MariaDB [(none)]> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.12
Master_User: root
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 1056
Relay_Log_File: xnode3-relay-bin.000002
Relay_Log_Pos: 1355
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 1056
Relay_Log_Space: 1665
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 12
Master_SSL_Crl:
Master_SSL_Crlpath:
Using_Gtid: No
Gtid_IO_Pos:
Replicate_Do_Domain_Ids:
Replicate_Ignore_Domain_Ids:
Parallel_Mode: conservative
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
Slave_DDL_Groups: 4
Slave_Non_Transactional_Groups: 0
Slave_Transactional_Groups: 0
1 row in set (0.000 sec)
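
With both threads showing Yes, a quick end-to-end smoke test confirms that changes flow from master to slave (the repl_test database name here is arbitrary):

[root@xnode2 ~]# mysql -uroot -p123456 -e 'create database repl_test;'
[root@xnode3 ~]# mysql -uroot -p123456 -e "show databases like 'repl_test';"
[root@xnode2 ~]# mysql -uroot -p123456 -e 'drop database repl_test;'   # clean up; the drop replicates too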

  • Create the database the mall needs
[root@xnode2 ~]# mysql -uroot -p123456

MariaDB [(none)]> create database gpmall;
Query OK, 1 row affected (0.001 sec)

MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| gpmall |
| information_schema |
| mysql |
| performance_schema |
+--------------------+
4 rows in set (0.000 sec)

************************************************************

# check on xnode3
[root@xnode3 ~]# mysql -uroot -p123456

MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| gpmall |
| information_schema |
| mysql |
| performance_schema |
+--------------------+
4 rows in set (0.001 sec)

Read/Write Splitting

  • Install the Java environment on xnode1
[root@xnode1 ~]# yum install java-1.8.0-openjdk -y
[root@xnode1 ~]# java -version
openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)
  • Extract Mycat and grant permissions
[root@xnode1 ~]# tar xf Mycat-server-1.6-RELEASE-20161028204710-linux.tar.gz 
[root@xnode1 ~]# mv mycat/ /usr/local/mycat
[root@xnode1 ~]# chmod -R 777 /usr/local/mycat

# add environment variables
[root@xnode1 ~]# vim /etc/profile
export MYCAT_HOME=/usr/local/mycat/
export PATH=$PATH:$MYCAT_HOME/bin
[root@xnode1 ~]# source /etc/profile
  • Modify the configuration files
[root@xnode1 ~]# vim /usr/local/mycat/conf/schema.xml

<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
<schema name="gpmall" checkSQLschema="true" sqlMaxLimit="100" dataNode="dn1"> <!-- pay particular attention here: the schema name must be gpmall -->
</schema>
<dataNode name="dn1" dataHost="localhost1" database="gpmall" />
<dataHost name="localhost1" maxCon="1000" minCon="10" balance="3" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
<heartbeat>select user()</heartbeat>
<writeHost host="hostM1" url="192.168.1.12:3306" user="root" password="123456">
<readHost host="hostS2" url="192.168.1.13:3306" user="root" password="123456" />
</writeHost>
</dataHost>
</mycat:schema>


************************************************************

[root@xnode1 ~]# vim /usr/local/mycat/conf/server.xml
<user name="root">
<property name="password">123456</property>
<property name="schemas">gpmall</property>
</user>
<!-- delete the trailing <user name="user"></user> tag and its contents from server.xml -->

  • Start the service
[root@xnode1 ~]# mycat restart
[root@xnode1 ~]# ss -luntp | grep -E "8066|9066"
tcp LISTEN 0 100 :::8066 :::* users:(("java",pid=5069,fd=81))
tcp LISTEN 0 100 :::9066 :::* users:(("java",pid=5069,fd=77))
  • Test
[root@xnode1 ~]# yum install mariadb -y
[root@xnode1 ~]# mysql -h 127.0.0.1 -P9066 -uroot -p123456

MySQL [(none)]> show @@datasource;
+----------+--------+-------+--------------+------+------+--------+------+------+---------+-----------+------------+
| DATANODE | NAME | TYPE | HOST | PORT | W/R | ACTIVE | IDLE | SIZE | EXECUTE | READ_LOAD | WRITE_LOAD |
+----------+--------+-------+--------------+------+------+--------+------+------+---------+-----------+------------+
| dn1 | hostM1 | mysql | 192.168.1.12 | 3306 | W | 0 | 10 | 1000 | 37 | 0 | 0 |
| dn1 | hostS2 | mysql | 192.168.1.13 | 3306 | R | 0 | 4 | 1000 | 30 | 0 | 0 |
+----------+--------+-------+--------------+------+------+--------+------+------+---------+-----------+------------+
2 rows in set (0.003 sec)

MySQL [(none)]> show databases;
+----------+
| DATABASE |
+----------+
| gpmall |
+----------+
1 row in set (0.001 sec)
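
Port 9066 is Mycat's management port; applications connect through the data port 8066. A quick read-routing check through the data port (a sketch: with balance="3" reads should go to the read host, so @@server_id is expected to come back as 13, assuming Mycat forwards the variable query to a backend rather than answering it itself):

[root@xnode1 ~]# mysql -h 127.0.0.1 -P8066 -uroot -p123456
MySQL [(none)]> use gpmall;
MySQL [gpmall]> select @@server_id;   # expected 13 (the read host xnode3)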

Import the Mall Data

Copy the gpmall.sql file to xnode2 and import it there; replication will carry the data to xnode3.

[root@xnode1 ~]# scp gpmall-cluster/gpmall.sql xnode2:/root/
gpmall.sql

[root@xnode2 ~]# mysql -uroot -p123456
MariaDB [(none)]> use gpmall;
Database changed
MariaDB [gpmall]> source /root/gpmall.sql;
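
Because the import runs on the master, replication should carry the tables over to xnode3; a quick sanity check:

[root@xnode3 ~]# mysql -uroot -p123456 gpmall -e 'show tables;'
# the table list should match what gpmall.sql created on xnode2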

Build the ZooKeeper Cluster

All nodes need the Java environment installed.

Copy the ZooKeeper tarball from xnode1 to xnode2 and xnode3.

[root@xnode1 ~]# for i in xnode2 xnode3;do scp zookeeper-3.4.14.tar.gz $i:/root/;done
zookeeper-3.4.14.tar.gz 100% 36MB 116.2MB/s 00:00
zookeeper-3.4.14.tar.gz 100% 36MB 113.6MB/s 00:00

[root@xnode1 ~]# yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel -y

[root@xnode2 ~]# yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel -y

[root@xnode3 ~]# yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel -y

The configuration below is identical on all nodes.

[root@xnode1 ~]# tar xf zookeeper-3.4.14.tar.gz
[root@xnode1 ~]# cd zookeeper-3.4.14/conf/
[root@xnode1 conf]# cp zoo_sample.cfg zoo.cfg
[root@xnode1 conf]# echo "server.1=192.168.1.11:2888:3888
server.2=192.168.1.12:2888:3888
server.3=192.168.1.13:2888:3888" >> zoo.cfg

Now comes the key step. zoo_sample.cfg sets dataDir=/tmp/zookeeper, so each node needs its own unique myid file in that directory:

[root@xnode1 ~]# mkdir /tmp/zookeeper
[root@xnode1 ~]# echo 1 > /tmp/zookeeper/myid
[root@xnode1 ~]# cat /tmp/zookeeper/myid
1

[root@xnode2 ~]# mkdir /tmp/zookeeper
[root@xnode2 ~]# echo 2 > /tmp/zookeeper/myid
[root@xnode2 ~]# cat /tmp/zookeeper/myid
2

[root@xnode3 ~]# mkdir /tmp/zookeeper
[root@xnode3 ~]# echo 3 > /tmp/zookeeper/myid
[root@xnode3 ~]# cat /tmp/zookeeper/myid
3

Then start the service.

[root@xnode1 ~]# zookeeper-3.4.14/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

# it looks as if it started, however...
[root@xnode1 ~]# zookeeper-3.4.14/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.

# no need to panic: the error just means there is no quorum yet; starting the other two nodes fixes it

[root@xnode2 ~]# zookeeper-3.4.14/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

[root@xnode3 ~]# zookeeper-3.4.14/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

[root@xnode1 ~]# zookeeper-3.4.14/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
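
With all three nodes running, exactly one should report Mode: leader; which node wins depends on election timing (xnode1 happened to come up as a follower above). Checking another node, for example:

[root@xnode2 ~]# zookeeper-3.4.14/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: leader   # or follower; one of the three nodes will be the leader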

Build the Kafka Cluster

Distribute and extract

[root@xnode1 ~]# for i in xnode2 xnode3;do scp kafka_2.11-1.1.1.tgz $i:/root/;done
kafka_2.11-1.1.1.tgz 100% 55MB 81.4MB/s 00:00
kafka_2.11-1.1.1.tgz 100% 55MB 78.8MB/s 00:00

[root@xnode1 ~]# tar xf kafka_2.11-1.1.1.tgz   # extract on xnode2 and xnode3 as well

Modify the configuration

[root@xnode1 ~]# cd kafka_2.11-1.1.1/config/
[root@xnode1 config]# vim server.properties
broker.id=1
listeners=PLAINTEXT://192.168.1.11:9092
zookeeper.connect=192.168.1.11:2181,192.168.1.12:2181,192.168.1.13:2181

[root@xnode2 ~]# cd kafka_2.11-1.1.1/config/
[root@xnode2 config]# vim server.properties
broker.id=2
listeners=PLAINTEXT://192.168.1.12:9092
zookeeper.connect=192.168.1.11:2181,192.168.1.12:2181,192.168.1.13:2181

[root@xnode3 ~]# cd kafka_2.11-1.1.1/config/
[root@xnode3 config]# vim server.properties
broker.id=3
listeners=PLAINTEXT://192.168.1.13:9092
zookeeper.connect=192.168.1.11:2181,192.168.1.12:2181,192.168.1.13:2181

Start the service

[root@xnode1 config]# cd ../bin
[root@xnode1 bin]# ./kafka-server-start.sh -daemon ../config/server.properties   # repeat on xnode2 and xnode3
[root@xnode1 bin]# jps
5069 WrapperSimpleApp    # the Mycat wrapper process
15709 Kafka
15294 QuorumPeerMain     # the ZooKeeper process
15775 Jps
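
A quick way to verify that the three brokers formed a cluster is to create a topic with replication factor 3 and list it (a sketch using the ZooKeeper-based tooling of Kafka 1.1.1; the topic name test is arbitrary):

[root@xnode1 bin]# ./kafka-topics.sh --create --zookeeper 192.168.1.11:2181 --replication-factor 3 --partitions 1 --topic test
Created topic "test".
[root@xnode1 bin]# ./kafka-topics.sh --list --zookeeper 192.168.1.11:2181
test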

Install the Redis Service

Install it on xnode1: comment out the bind line and disable protected mode so the other nodes can reach Redis, then start and enable it.

[root@xnode1 ~]# yum install redis -y

[root@xnode1 ~]# vim /etc/redis.conf
# bind 127.0.0.1
protected-mode no

[root@xnode1 ~]# systemctl start redis
[root@xnode1 ~]# systemctl enable redis

[root@xnode1 ~]# ss -luntp | grep 6379
tcp LISTEN 0 128 *:6379 *:* users:(("redis-server",pid=15944,fd=5))
tcp LISTEN 0 128 :::6379 :::* users:(("redis-server",pid=15944,fd=4))
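
With the bind line commented out and protected mode off, Redis should accept connections from the other nodes; a quick check:

[root@xnode1 ~]# redis-cli -h 192.168.1.11 ping
PONG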

Install the Nginx Service

Nginx only needs to be installed and configured on xnode1.

[root@xnode1 ~]# yum install nginx -y

[root@xnode1 ~]# vim /etc/nginx/conf.d/default.conf
upstream gpuser {
    server 192.168.1.11:8082;
    server 192.168.1.12:8082;
    server 192.168.1.13:8082;
    ip_hash;
}

upstream gpshopping {
    server 192.168.1.11:8081;
    server 192.168.1.12:8081;
    server 192.168.1.13:8081;
    ip_hash;
}

upstream gpcashier {
    server 192.168.1.11:8083;
    server 192.168.1.12:8083;
    server 192.168.1.13:8083;
    ip_hash;
}

server {
    listen       80;
    server_name  localhost;

    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
    location /user {
        proxy_pass http://gpuser;
    }
    location /shopping {
        proxy_pass http://gpshopping;
    }
    location /cashier {
        proxy_pass http://gpcashier;
    }
    ......
}
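
Before starting Nginx it is worth validating the configuration syntax:

[root@xnode1 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful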

Copy the front-end files

[root@xnode1 ~]# rm -rf /usr/share/nginx/html/*
[root@xnode1 ~]# cp -r gpmall-cluster/dist/* /usr/share/nginx/html/
[root@xnode1 ~]# systemctl start nginx
[root@xnode1 ~]# systemctl enable nginx

Deploy the Jar Packages

  • Copy
[root@xnode1 ~]# for i in xnode2 xnode3;do scp gpmall-cluster/*.jar $i:/root/;done
  • Run in the background

    The commands below are run on xnode1; start the same four jars on xnode2 and xnode3 as well, since the Nginx upstreams point at all three nodes.

[root@xnode1 ~]# cd gpmall-cluster/
[root@xnode1 gpmall-cluster]# nohup java -jar shopping-provider-0.0.1-SNAPSHOT.jar &
[1] 16188
[root@xnode1 gpmall-cluster]# nohup: ignoring input and appending output to 'nohup.out'

[root@xnode1 gpmall-cluster]# nohup java -jar user-provider-0.0.1-SNAPSHOT.jar &
[2] 16233
[root@xnode1 gpmall-cluster]# nohup: ignoring input and appending output to 'nohup.out'

[root@xnode1 gpmall-cluster]# nohup java -jar gpmall-shopping-0.0.1-SNAPSHOT.jar &
[3] 16287
[root@xnode1 gpmall-cluster]# nohup: ignoring input and appending output to 'nohup.out'

[root@xnode1 gpmall-cluster]# nohup java -jar gpmall-user-0.0.1-SNAPSHOT.jar &
[4] 16347
[root@xnode1 gpmall-cluster]# nohup: ignoring input and appending output to 'nohup.out'

[root@xnode1 gpmall-cluster]# jobs
[1]  Running    nohup java -jar shopping-provider-0.0.1-SNAPSHOT.jar &
[2]  Running    nohup java -jar user-provider-0.0.1-SNAPSHOT.jar &
[3]- Running    nohup java -jar gpmall-shopping-0.0.1-SNAPSHOT.jar &
[4]+ Running    nohup java -jar gpmall-user-0.0.1-SNAPSHOT.jar &
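
Once the jars finish starting (Spring Boot applications can take a little while), the service ports behind the Nginx upstreams should be listening; a quick check (PIDs will vary by node):

[root@xnode1 gpmall-cluster]# ss -luntp | grep -E "8081|8082|8083"
# each of the three ports should show a listening java process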

Finally, open 192.168.1.11 in a browser and the mall should appear.

[Figure: mall home page (2.png)]