The previous few lessons provide great examples of the key functionality of Vitess. We got to see examples of spinning up the many components, creating new keyspaces, sharding, and running key workflows such as `MoveTables` and `Reshard`. However, these were all running on a single EC2 instance for demonstration purposes, which is not a realistic picture of a cluster spread across multiple separate machines. Though we won't get to building out a full, production-ready cluster in this series, we will take a look at how we can re-run the same cluster configuration from `101_initial_cluster.sh`, but spread the load out across multiple server instances. We're going to accomplish this by making some modifications to `101_initial_cluster.sh`, breaking it up into separate steps and running different components on different servers.
In total, we're going to spin up four EC2 instances for this. I'm going to place each of the three VTTablet / MySQL instances on a separate server. Everything else, including `etcd`, `vtctld`, `vtorc`, and the lone `vtgate`, will all be housed on the same server we were using previously. We could further split these out onto separate servers, but this will be left as an exercise for the reader.
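To keep the layout straight, here's a quick sketch of the target topology (the tablet server names and UIDs are the ones introduced later in this lesson):

```
vitess-main:      etcd, vtctld, vtorc, vtgate, vtadmin
vitess-tablet-1:  mysqld + vttablet (TABLET_UID 100)
vitess-tablet-2:  mysqld + vttablet (TABLET_UID 101)
vitess-tablet-3:  mysqld + vttablet (TABLET_UID 102)
```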
If you still have the cluster from the previous lessons running on your instance, go ahead and run the following to shut it down:

```sh
pkill -9 -f '(vtdataroot|VTDATAROOT|vitess|vtadmin)' ; ./401_teardown.sh ; rm -r vtdataroot
```
First, let's log in to the main server that will host etcd, vtgate, etc. I'll refer to this server as `vitess-main`. This is the same server that has been used in the previous lessons. On it, we'll need to split `101_initial_cluster.sh` into two parts: the setup that happens before the VTTablets are created, and the remaining steps that happen after. We can copy/paste the code before and after the two loops that started up MySQL and the VTTablets into a `101_before.sh` and a `101_after.sh` script, respectively. Also ensure that you have `source ../common/env.sh` at the beginning of each of these scripts.
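For reference, here's roughly what the split looks like. The exact contents depend on your Vitess version (the primary-election step in particular varies), so treat this as a sketch of the shape rather than a copy/paste target:

```sh
# 101_before.sh -- everything above the tablet loops
source ../common/env.sh

# start the topology server and vtctld
CELL=zone1 ../common/scripts/etcd-up.sh
CELL=zone1 ../common/scripts/vtctld-up.sh
# (plus any keyspace-creation / durability-policy lines your version
# of 101_initial_cluster.sh runs before the tablet loops)
```

```sh
# 101_after.sh -- everything below the tablet loops
source ../common/env.sh

# start vtorc, wait for / elect a shard primary (version-dependent),
# apply the schema and vschema, then start vtgate and vtadmin
../common/scripts/vtorc-up.sh
vtctldclient ApplySchema --sql-file create_commerce_schema.sql commerce
vtctldclient ApplyVSchema --vschema-file vschema_commerce_initial.json commerce
CELL=zone1 ../common/scripts/vtgate-up.sh
../common/scripts/vtadmin-up.sh
```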
In addition to this, a few other small adjustments will need to be made to some of the helper scripts. First, a few changes to `env.sh`:
```sh
sed -i 's/ETCD_SERVER="localhost:2379"/ETCD_SERVER="${hostname}:2379"/' ../common/env.sh
sed -i 's/localhost:15999/${hostname}:15999/' ../common/env.sh
```
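After these run, the affected lines in `env.sh` should read as follows. Because the `sed` replacements are single-quoted, the literal text `${hostname}` is written into the file and expands at runtime using the `hostname` variable that `env.sh` assigns near the top. The second line below assumes `localhost:15999` appears in the `vtctldclient` alias, as it does in the version of the examples used here:

```sh
# in ../common/env.sh, after the edits
ETCD_SERVER="${hostname}:2379"
alias vtctldclient="command vtctldclient --server ${hostname}:15999"
```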
These changes replace `localhost` with `hostname` in a few key places, which is important for allowing this server to communicate with the other servers that will be set up. A few changes will also be needed in the script that starts up `etcd`:
```sh
sed -i 's/--listen-client-urls "http:\/\/${ETCD_SERVER}"/--listen-client-urls "http:\/\/0.0.0.0:2379"/' ../common/scripts/etcd-up.sh
sed -i 's/"${cell}"/--topo-global-server-address "${ETCD_SERVER}" "${cell}"/' ../common/scripts/etcd-up.sh
```
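After those two substitutions, the relevant fragments of `etcd-up.sh` should look roughly like this. The elided flags and the exact form of the cell-registration command differ between Vitess versions, so check your own copy:

```sh
# etcd now listens on all interfaces rather than only ${ETCD_SERVER}:
etcd --data-dir "${VTDATAROOT}/etcd/" \
  --listen-client-urls "http://0.0.0.0:2379" \
  --advertise-client-urls "http://${ETCD_SERVER}" &

# ...and the AddCellInfo call that registers the cell now ends with:
#   ... --topo-global-server-address "${ETCD_SERVER}" "${cell}"
```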
These changes tell `etcd` to listen for connections from any IP address, and also add the `--topo-global-server-address` argument to the command that registers the cell. Finally, we need to add the server's public IPv4 DNS address to the script that sets up the VTAdmin infrastructure. Go ahead and add `hostname="YOUR_SERVER_PUBLIC_IPv4_DNS"` right after the `source ../common/env.sh` line in `../common/scripts/vtadmin-up.sh`.
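The top of `../common/scripts/vtadmin-up.sh` should then start like this (substitute your instance's actual public IPv4 DNS name from the EC2 console):

```sh
source ../common/env.sh
hostname="YOUR_SERVER_PUBLIC_IPv4_DNS"   # override env.sh's hostname for vtadmin
```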
We next need to spin up three additional EC2 instances for the VTTablets and MySQL instances to live on. These should all be configured in the same way as `vitess-main`, and will need to run the same steps to download and install Vitess. You can find those steps in this article. We will refer to these servers as `vitess-tablet-1`, `vitess-tablet-2`, and `vitess-tablet-3`. After starting up, connecting to, and configuring these servers, there are a few customizations that need to be made on all three.
First, add the following `tablet.sh` script to each server:
```sh
source ../common/env.sh

# The tablet UID is passed as the first argument (100, 101, or 102).
T_UID=${1}
echo "TABLET IDENTIFIER: ${T_UID}"

# Bring up mysqld for this tablet...
CELL=zone1 TABLET_UID=${T_UID} ../common/scripts/mysqlctl-up.sh
sleep 2 ; echo "Waiting for mysqlctls to start..."
wait ; echo "mysqlctls are running!"

# ...then start the vttablet that fronts it.
CELL=zone1 KEYSPACE=commerce TABLET_UID=${T_UID} ../common/scripts/vttablet-up.sh
```
These will be used later to get the tablets up and running. On each server, open `../common/env.sh` and replace the line `hostname=$(hostname -f)` with `hostname="PRIVATE_IP_ADDRESS_OF_ETCD_SERVER"`, where `PRIVATE_IP_ADDRESS_OF_ETCD_SERVER` is the private IP that EC2 assigned to `vitess-main`. Also run the following updates:
```sh
sed -i 's/ETCD_SERVER="localhost:2379"/ETCD_SERVER="${hostname}:2379"/' ../common/env.sh
sed -i 's/localhost:15999/${hostname}:15999/' ../common/env.sh
sed -i '18i hostname="$(hostname -f)"' ../common/scripts/vttablet-up.sh
sed -i "s/tablet_hostname=''/tablet_hostname=\$hostname/" ../common/scripts/vttablet-up.sh
```
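The third command inserts `hostname="$(hostname -f)"` at line 18 of `vttablet-up.sh` (after the script has sourced `env.sh`), and the fourth points `tablet_hostname` at it. The effect is that each tablet registers its own address in the topology, while `env.sh`'s `hostname` keeps pointing at `vitess-main` for reaching etcd and vtctld. The exact line number may differ in your version, but afterwards the relevant lines should read roughly:

```sh
# near the top of ../common/scripts/vttablet-up.sh
hostname="$(hostname -f)"       # this tablet server's own address

# ... (unchanged lines) ...

tablet_hostname=$hostname       # previously: tablet_hostname=''
```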
With all of that in place, the next step is to start the cluster. To get it up and running, execute the following sequence of scripts:

1. On `vitess-main`, execute: `bash 101_before.sh`
2. On `vitess-tablet-1`, execute: `bash ./tablet.sh 100`
3. On `vitess-tablet-2`, execute: `bash ./tablet.sh 101`
4. On `vitess-tablet-3`, execute: `bash ./tablet.sh 102`
5. On `vitess-main`, execute: `bash 101_after.sh`
You can now visit the URL that the `101_after.sh` script displays at the end to view the admin panel of your cluster. You can also connect to the cluster by executing the following from your `vitess-main` instance:

```sh
mysql -P 15306 -u root --protocol tcp
```
Alternatively, you can connect from your local machine using:

```sh
mysql -P 15306 -u root --protocol tcp -h VTGATE_PUBLIC_IP_ADDRESS
```
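As a quick sanity check once the cluster is up, you can ask vtgate to list the tablets it sees; all three `vitess-tablet-*` servers should appear, one of them as the primary. This uses the standard `SHOW vitess_tablets` command that vtgate supports:

```sh
mysql -P 15306 -u root --protocol tcp -h VTGATE_PUBLIC_IP_ADDRESS \
  -e "show vitess_tablets"
```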