Spring Boot MySQL Replication


Video on the topic spring boot mysql replication

The video "MySQL replication" (linked below) shows how to deploy the web server and the database server independently of each other, and how to build a cluster for the database server. This provides a data backup: if the primary database server fails, a standby database server takes over, so the system's data is not interrupted.

For more details on the topic spring boot mysql replication, see the sources below.

  • GitHub - dipanjal/mysql-replication-poc (github.com): Sending requests from our Spring Boot application and observing the query execution in the master's and slaves' general logs.
  • Setup Spring transactions for MySQL Replication (raymondhlee.wordpress.com): How to set up Spring transactions to connect to a MySQL database with replication, directing all write operations to the master.
  • A complete guide to setting up Master and Slave (or multiple) data sources (medium.com): A step-by-step guide to consuming data from two data sources, one master and one slave.
  • mysql connector/j replication option - SQLException: No database selected (stackoverflow.com): Connecting to a database cluster using the Spring Boot "mysql-connector-java" dependency (version 8.0.15).
  • Learn ProxySql & MySQL Replication and Spring Boot (morioh.com): MySQL DB replication (1 master and 2 slaves) on Docker, load-balanced with ProxySQL on Docker.
  • 9.4 Configuring Source/Replica Replication with Connector/J (dev.mysql.com): A normal-looking MySQL JDBC URL with a comma-separated list of hosts, the first being the source and the rest being replicas.
  • MySQL 5.7 master-slave replication setup, deployment and integrated Spring Boot use (javamana.com): Setting up a local MySQL master-slave pair and combining it with Spring Boot, MyBatis, and dynamic data sources.
  • Introduction to MySQL Replication (kipalog.com): An introduction to MySQL replication, with a Kotlin Spring Boot CRUD example link.
  • Routing Read/Write Datasource in Spring, by Thanh Tran (programmingsharing.com): Database replication is the process of copying data from a database; uses spring-boot-starter-web, spring-boot-starter-thymeleaf, and a MySQL database.
  • MySQL Master/Slave Load Balancing with JPA and Spring (www.dragishak.com): Use the special JDBC driver com.mysql.jdbc.ReplicationDriver and set replication: in the URL: jdbc:mysql:replication://master,slave1,slave2…


About the video

  • Title: MySQL replication
  • Author: Nguyễn Cương
  • Date Published: 2011-11-23
  • Video URL: https://www.youtube.com/watch?v=JPzfC6i1Rq8

dipanjal/mysql-replication-poc: ProxySql, MySQL Replication, Spring Boot

MySQL Replication and ProxySQL

In this documentation, we will cover:

  • Problem discussion: common database problems and solutions
  • What is database replication, and when do you need it?
  • MySQL DB replication (1 master and 2 slaves) on Docker
  • Load balancing with ProxySQL on Docker
  • Sending requests from our Spring Boot application
  • Observing the query execution in the master's and slaves' general logs

Problem Discussion

Let's say you have a single relational database instance and you are building a web application. You need a web server and a web development framework such as Spring Boot, Django, or Node.js. Your web application speaks HTTP to the browser or to other HTTP clients, and the API endpoints it exposes eventually execute read/write operations against the database.

Assume your database is getting read/write requests from clients, for example the read request SELECT * FROM products WHERE price < 5000. Your application was working smoothly, but as time passed you noticed the query getting slow; it is not as fast as it used to be, and the extra latency hurts the user experience. What is the problem here? Your products table is growing larger over time, and the query above performs a Full Table Scan, which is basically a linear search (time complexity O(n)).

So you say: fine, I will add an index to the price column, and my beautiful database will arrange it in a balanced binary tree, a B-Tree, which rebalances itself every time a record is inserted, deleted, or updated, making read operations faster with O(log n) time complexity, where "n" is the total number of elements in the B-Tree. But balancing the tree has a cost, and a range query (SELECT job FROM products WHERE price BETWEEN 5000 AND 10000) is not efficient in a B-Tree, which is why the B+ Tree comes into the picture. Still, what if you have one million records in the products table and you have just inserted a new record? Bam! Your DB is doing a lot of work to rebalance the large tree.

So what can you do now? You can partition the table, say by id, because the id fits this use case best. Partitioning is a technique to subdivide objects into smaller pieces: it breaks the huge table into many partition tables by range, depending on the partition key, and these partition tables are mapped as pairs with the partition_key as the KEY and the partition_table_reference as the VALUE of the map. For example, with a page size of 20,000, ids 1 to 20,000 fall into partition_1, ids 20,001 to 40,000 go into partition_2, and so on and so forth. But don't worry: these partitions are managed implicitly by the database itself. It knows which partition to look in to answer a specific read query (SELECT * FROM products WHERE id = 18). So partitioning can reduce the tree-balancing cost: the search space is much smaller than before, so the cost of keeping the B+ Tree balanced is optimized. Great, problem solved.

But as your business grew, your user base also grew. Now you have thousands of concurrent users reading millions of records from your (read-heavy) database, and the single database server is dealing with a huge number of concurrent TCP connections. Your single database instance is jammed with this enormous number of concurrent requests, and it might run out of its QPS (Queries Per Second) limit too. Here is where replication comes into the solution space.
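
To make the indexing step above concrete, here is a minimal plain-JDBC sketch; the connection URL, credentials, and index name are illustrative assumptions, not details from the original article:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class PriceIndexDemo {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/sbtest", "root", "password");
             Statement st = conn.createStatement()) {
            // Add a B+Tree index on price so the scan below becomes an O(log n) range lookup
            st.execute("CREATE INDEX idx_products_price ON products (price)");
            // The range query discussed above now walks the index instead of the whole table
            try (ResultSet rs = st.executeQuery(
                    "SELECT * FROM products WHERE price BETWEEN 5000 AND 10000")) {
                while (rs.next()) {
                    System.out.println(rs.getString("product_name"));
                }
            }
        }
    }
}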

What is DB Replication?

DB replication is a kind of horizontal scaling: you make identical copies of the full database and distribute them in a master/slave architecture, where the master handles all write operations and periodically updates its slave replicas, which handle only read queries. Your database load is now distributed. But remember, the slaves must stay consistent with the master, so there must be a replication strategy.

How the Replication works from Master to Slaves?

After any write query (insert, update, delete) executes on the master, the DB must somehow replicate the change to the slaves. The master triggers a change event, and the slaves pull the changes from that event and update themselves. Let's generate some ideas on how this could work.

Idea 1: Can we Stream the SQL Statements?

So basically, we would stream the SQL query statements, and the slaves would pull them from the channel and execute those SQL statements on themselves. Well, this can make the replication inconsistent. Let's see how. Assume you are creating a new product:

INSERT INTO products (product_name, product_status, price, created_at)
VALUES ('TP-Link Archer C60', 'AVAILABLE', 3500, sysdate(3))

Firstly, The Query will be executed at Master when the value of sysdate(3) = 2022-01-07 12:04:59.114

Secondly, The Query will be executed at Slave 1 when the value of sysdate(3) = 2022-01-07 12:05:00.100

Thirdly, The Query will be executed at Slave 2 when the value of sysdate(3) = 2022-01-07 12:05:00.405

Epic fail, right? Each replica stores a different created_at value, which certainly creates an inconsistency problem. So we need to drop this idea.

Idea 2: How about transferring bin log files?

The binary log is a set of log files that contain information about data modifications made to a MySQL server instance. Simply put, it records the database's state changes.

So, when any write query executes on the master replica, the change is saved into the bin log. The master then transfers these log files to the slave databases asynchronously, and the slaves pull the changes and update their state according to the bin logs. There are also other replication strategies (synchronous, asynchronous, and semi-synchronous), but MySQL does asynchronous replication by default, so that is what we will use for now.

But another problem is knocking at the door: how will you distribute the traffic to the appropriate DB replicas? Since you have multiple instances of the same database, split by read/write purpose, how can your application tell when to go to a read replica and when to go to the master replica?

DB connection information can change along the way.

It is troublesome (and complicated) to pick the right DB in the application's read/write logic.

So we need a reverse proxy to solve this problem. The proxy sits between the application and the databases and load-balances each request across the DB instances based on the operation type (read/write). But how will the proxy distinguish the request type? The answer: the proxy must be SQL-aware. We are going to use ProxySQL here.

What is ProxySql?

ProxySQL is an open-source, SQL-aware reverse proxy. Unlike HTTP or TCP proxies (Nginx, HAProxy, etc.), it can distinguish read operations from write operations and deliver each packet to the appropriate replica, whether that is the master or a slave. ProxySQL goes between the application and the DB and does the following:

  • Automatically proxies to the master or a slave depending on the query
  • Distributes load across the replicas
  • Changes connection settings seamlessly

By the way, ProxySQL can be used with other DBs, such as Postgres, as well.

MySQL Replica Configuration

Here is a good read to get insight into MySQL replica configuration. I used MySQL 5.7 to prepare one master and two slaves and set up the replication settings. Here is the docker-compose.yml file I'm using:

version: '3'
services:
  mysql-master:
    image: mysql:5.7
    container_name: proxysql-mysql-replication-master
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: sbtest
    volumes:
      - ./master/my.cnf:/etc/mysql/my.cnf
      - ./master/data:/var/lib/mysql
      - ./master/init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - 3306:3306
    networks:
      - mysql_cluster_net
  mysql-slave1:
    image: mysql:5.7
    container_name: proxysql-mysql-replication-slave1
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: sbtest
    volumes:
      - ./slave/my-slave1.cnf:/etc/mysql/my.cnf
      - ./slave/data/slave1:/var/lib/mysql
      - ./slave/init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - 3307:3306
    depends_on:
      - mysql-master
    networks:
      - mysql_cluster_net
  mysql-slave2:
    image: mysql:5.7
    container_name: proxysql-mysql-replication-slave2
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: sbtest
    volumes:
      - ./slave/my-slave2.cnf:/etc/mysql/my.cnf
      - ./slave/data/slave2:/var/lib/mysql
      - ./slave/init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - 3308:3306
    depends_on:
      - mysql-master
    networks:
      - mysql_cluster_net
networks:
  mysql_cluster_net:
    driver: bridge
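
The compose file mounts my.cnf files that carry the actual replication settings. The repository's exact contents are not reproduced here, so the following is a minimal sketch of what such files typically contain for GTID-based MySQL 5.7 replication; every value below is an assumption:

# master/my.cnf (sketch; values assumed)
[mysqld]
server-id = 1
log-bin = mysql-bin
binlog_do_db = sbtest
gtid_mode = ON
enforce_gtid_consistency = ON
general_log = 1
general_log_file = /var/log/mysql/general.log

# slave/my-slave1.cnf (sketch; slave2 would use server-id = 3)
[mysqld]
server-id = 2
relay-log = mysql-relay-bin
read_only = 1
gtid_mode = ON
enforce_gtid_consistency = ON
general_log = 1
general_log_file = /var/log/mysql/general.log

The general_log settings matter later: the walkthrough tails /var/log/mysql/*.log to watch queries land on each node, and ProxySQL's monitor checks @@global.read_only to tell writers from readers.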

Let's bring up the Docker containers and check the master's and slaves' status:

docker-compose up -d

Check Master’s Status

docker-compose exec mysql-master sh -c "export MYSQL_PWD=password; mysql -u root sbtest -e 'show master status\G'"

Expected Output:

*************************** 1. row ***************************
             File: mysql-bin.000003
         Position: 194
     Binlog_Do_DB: sbtest
 Binlog_Ignore_DB:
Executed_Gtid_Set: 9618dc00-6f2a-11ec-a895-0242ac120002:1-9

Check Slave 1 Status

docker-compose exec mysql-slave1 sh -c "export MYSQL_PWD=password; mysql -u root sbtest -e 'show slave status\G'"

Expected Output:

*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: mysql-master
                  Master_User: slave_user
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000003
          Read_Master_Log_Pos: 194
               Relay_Log_File: mysql-relay-bin.000004
                Relay_Log_Pos: 407
        Relay_Master_Log_File: mysql-bin.000003
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
...
             Master_Server_Id: 1
                  Master_UUID: 9618dc00-6f2a-11ec-a895-0242ac120002
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set: 9618dc00-6f2a-11ec-a895-0242ac120002:1-9
            Executed_Gtid_Set: 9618dc00-6f2a-11ec-a895-0242ac120002:1-9,
962ec4d2-6f2a-11ec-8a4d-0242ac120004:1-5
...

As you can see, Slave_IO_Running: Yes and Slave_SQL_Running: Yes mean the slave has started properly.

Also, Master_UUID: 9618dc00-6f2a-11ec-a895-0242ac120002 means it has connected successfully to the master. If a slave fails to connect to the master, run ./clean-up.sh: it gracefully shuts down the containers, cleans the master and slave data directories, and starts the containers again in -d (detached) mode.

Check Slave 2 Status

docker-compose exec mysql-slave2 sh -c "export MYSQL_PWD=password; mysql -u root sbtest -e 'show slave status\G'"

Expected Output:

*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: mysql-master
                  Master_User: slave_user
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000003
          Read_Master_Log_Pos: 194
               Relay_Log_File: mysql-relay-bin.000004
                Relay_Log_Pos: 407
        Relay_Master_Log_File: mysql-bin.000003
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
...
             Master_Server_Id: 1
                  Master_UUID: 9618dc00-6f2a-11ec-a895-0242ac120002
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set: 9618dc00-6f2a-11ec-a895-0242ac120002:1-9
            Executed_Gtid_Set: 9618dc00-6f2a-11ec-a895-0242ac120002:1-9,
9633aafa-6f2a-11ec-ba14-0242ac120003:1-5
...

Looks good. Now it's time to configure ProxySQL.

ProxySQL Configuration

The configuration file is as follows.

datadir = "/var/lib/proxysql"

# ProxySQL admin configuration
admin_variables =
{
    admin_credentials = "admin:admin;admin2:pass2"
    mysql_ifaces = "0.0.0.0:6032"
    refresh_interval = 2000
    stats_credentials = "stats:admin"
}

# ProxySQL configuration for the MySQL cluster
mysql_variables =
{
    threads = 4
    max_connections = 2048
    default_query_delay = 0
    default_query_timeout = 36000000
    have_compress = true
    poll_timeout = 2000
    # where the client application will connect
    interfaces = "0.0.0.0:6033;/tmp/proxysql.sock"
    default_schema = "information_schema"
    stacksize = 1048576
    server_version = "5.7"
    connect_timeout_server = 10000
    monitor_history = 60000
    monitor_connect_interval = 200000
    monitor_ping_interval = 200000
    ping_interval_server_msec = 10000
    ping_timeout_server = 200
    commands_stats = true
    sessions_sort = true
    # MySQL cluster monitoring credentials
    monitor_username = "monitor"
    monitor_password = "monitor"
}

# Host group 10 = writer group (master)
# Host group 20 = reader group (slaves)
mysql_replication_hostgroups =
(
    { writer_hostgroup = 10, reader_hostgroup = 20, comment = "host groups" }
)

# max_replication_lag checks whether the replicas are keeping up:
# max_replication_lag = 5 means that if a slave cannot catch up with the master's
# change events within 5 seconds, ProxySQL marks it as SHUNNED (a kind of ban).
mysql_servers =
(
    { address = "mysql-master", port = 3306, hostgroup = 10, max_connections = 100, max_replication_lag = 5 },
    { address = "mysql-slave1", port = 3306, hostgroup = 20, max_connections = 100, max_replication_lag = 5 },
    { address = "mysql-slave2", port = 3306, hostgroup = 20, max_connections = 100, max_replication_lag = 5 }
)

# The SQL-awareness rules
mysql_query_rules =
(
    {
        rule_id = 100
        active = 1
        match_pattern = "^SELECT .* FOR UPDATE"
        destination_hostgroup = 10
        apply = 1
    },
    {
        rule_id = 200
        active = 1
        match_pattern = "^SELECT .*"
        destination_hostgroup = 20
        apply = 1
    },
    {
        rule_id = 300
        active = 1
        match_pattern = ".*"
        destination_hostgroup = 10
        apply = 1
    }
)

# ProxySQL-to-MySQL connection credentials. These will be used by our Spring Boot
# application (or any application you want to develop).
mysql_users =
(
    { username = "root", password = "password", default_hostgroup = 10, active = 1 }
)

Now let's add ProxySQL to our existing docker-compose file:

...
  proxysql:
    image: proxysql/proxysql:2.0.12
    container_name: proxysql-mysql-replication-proxysql
    ports:
      - 6032:6032
      - 6033:6033
    volumes:
      - ./proxysql/proxysql.cnf:/etc/proxysql.cnf
      - ./proxysql/data:/var/lib/proxysql
    networks:
      - mysql_cluster_net
    depends_on:
      - mysql-master
      - mysql-slave1
      - mysql-slave2
networks:
  mysql_cluster_net:
    driver: bridge

Credentials

ProxySQL (SQL-aware LB):
  :6032 (admin interface; user: admin2, pass: pass2)
  :6033 (MySQL endpoint; user: root, pass: password)

MySQL replication (1 master, 2 slaves):
  user: root, pass: password
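
The Spring Boot application connects to ProxySQL's MySQL endpoint (port 6033) rather than to any MySQL instance directly. The excerpt above does not show the application's datasource settings, so here is a minimal sketch, assuming the app runs on the Docker host:

# application.properties (sketch)
spring.datasource.url=jdbc:mysql://localhost:6033/sbtest
spring.datasource.username=root
spring.datasource.password=password
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver

Every query the application issues then passes through ProxySQL, which applies the query rules above to route it to the master or to a slave.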

Getting Started

Bring up all the Docker containers with docker-compose up -d and check the Docker processes:

docker ps

Output:

CONTAINER ID   IMAGE                      COMMAND                  CREATED          STATUS          PORTS                                                            NAMES
5e5e850339d3   proxysql/proxysql:2.0.12   "proxysql -f -D /var…"   30 seconds ago   Up 28 seconds   0.0.0.0:6032-6033->6032-6033/tcp, :::6032-6033->6032-6033/tcp   proxysql-mysql-replication-proxysql
b6a5296c7c27   mysql:5.7                  "docker-entrypoint.s…"   31 seconds ago   Up 30 seconds   33060/tcp, 0.0.0.0:3307->3306/tcp, :::3307->3306/tcp            proxysql-mysql-replication-slave1
ef6d0cb4249b   mysql:5.7                  "docker-entrypoint.s…"   31 seconds ago   Up 30 seconds   33060/tcp, 0.0.0.0:3308->3306/tcp, :::3308->3306/tcp            proxysql-mysql-replication-slave2
d64d42b53e41   mysql:5.7                  "docker-entrypoint.s…"   32 seconds ago   Up 31 seconds   0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp            proxysql-mysql-replication-master

All containers are up and running.

Check Replication States now

In case you don't have the mysql client installed on your machine, install it first, then execute the commands below.

Mysql Installation (Ubuntu 20.04) (Optional)

$ wget https://dev.mysql.com/get/mysql-apt-config_0.8.20-1_all.deb
$ dpkg -i mysql-apt-config_0.8.20-1_all.deb   # and select mysql-8.0
$ sudo apt install mysql-server-8.0

ProxySQL

$ mysql -h 0.0.0.0 -P 6032 -u admin2 -p -e 'select * from mysql_servers'

Enter password: pass2

+--------------+--------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname     | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+--------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 10           | mysql-master | 3306 | 0         | ONLINE | 1      | 0           | 100             | 5                   | 0       | 0              |         |
| 20           | mysql-slave2 | 3306 | 0         | ONLINE | 1      | 0           | 100             | 5                   | 0       | 0              |         |
| 20           | mysql-slave1 | 3306 | 0         | ONLINE | 1      | 0           | 100             | 5                   | 0       | 0              |         |
+--------------+--------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+

Looks good: the master and both slaves are ONLINE and synced up.

Now open a new terminal, run some queries, and monitor the general logs of the master and slaves.

Showtime

All our tedious configuration is done. Now let's open three terminals, one for the master and two for the slaves, and place them side by side so you can monitor all of them together. Run some read/write queries from your Spring Boot application, or from any database client such as MySQL Workbench, and watch the general logs of the master and slaves.
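
As a reference point, here is a minimal sketch of the kind of JPA entity and Spring Data repository behind the calls below. The names are inferred from the general-log output (table users, column name, Hibernate alias userentity0_), so treat the details as assumptions:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
import org.springframework.data.jpa.repository.JpaRepository;

@Entity
@Table(name = "users")
public class UserEntity {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// findAll() serves the read endpoint; save(...) serves the create endpoint
interface UserRepository extends JpaRepository<UserEntity, Long> { }

With the datasource pointed at ProxySQL, findAll() should show up in a slave's general log, while save(...) should land on the master.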

Read ALL Users

Output from SLAVE 1 Console

docker-compose exec mysql-slave1 sh -c 'tail -f /var/log/mysql/*.log'

Output:

==> /var/log/mysql/general.log <==
2022-01-07T10:26:42.237593Z  4025 Query   SHOW SLAVE STATUS
2022-01-07T10:26:42.237716Z  4025 Quit
2022-01-07T10:26:43.007045Z  3194 Query   select userentity0_.id as id1_0_, userentity0_.name as name2_0_ from users userentity0_
2022-01-07T10:26:43.074505Z  4026 Connect [email protected] on using TCP/IP
2022-01-07T10:26:43.074741Z  4026 Query   SELECT @@global.read_only read_only
2022-01-07T10:26:43.075023Z  4026 Query   SET wait_timeout=2000

Execute the same API again and check the output in the SLAVE 2 console:

docker-compose exec mysql-slave2 sh -c 'tail -f /var/log/mysql/*.log'

Output:

2022-01-07T10:42:42.119645Z  20 Query   SELECT @@global.read_only read_only
2022-01-07T10:42:42.120000Z  20 Query   SET wait_timeout=2000
2022-01-07T10:42:43.128917Z  20 Query   SELECT @@global.read_only read_only
2022-01-07T10:42:44.143227Z  20 Query   SELECT @@global.read_only read_only
2022-01-07T10:42:44.252141Z  21 Connect [email protected] on sbtest using TCP/IP
2022-01-07T10:42:44.252377Z  21 Query   SET character_set_results=NULL
2022-01-07T10:42:44.252534Z  21 Query   select userentity0_.id as id1_0_, userentity0_.name as name2_0_ from users userentity0_
2022-01-07T10:42:45.128786Z  20 Query   SELECT @@global.read_only read_only

Create a new user:

{ "name": "Jhon Doe" }

Check the MASTER status:

docker-compose exec mysql-master sh -c 'tail -f /var/log/mysql/*.log'

Output from the master console:

2022-01-07T11:05:11.305574Z  312 Query  SELECT @@global.read_only read_only
2022-01-07T11:05:11.582792Z  35 Query   SET autocommit=0
2022-01-07T11:05:11.583025Z  35 Query   SET character_set_results=NULL
2022-01-07T11:05:11.583181Z  35 Query   insert into users (name) values ('Jhon Doe')
2022-01-07T11:05:11.695636Z  35 Query   commit
2022-01-07T11:05:12.117764Z  312 Query  SHOW SLAVE STATUS

Output from the SLAVE 1 console:

2022-01-07T11:05:11.326982Z  163 Query  SELECT @@global.read_only read_only
2022-01-07T11:05:11.702399Z  2 Query    BEGIN
2022-01-07T11:05:11.702534Z  2 Query    COMMIT /* implicit, from Xid_log_event */
2022-01-07T11:05:12.122072Z  163 Query  SHOW SLAVE STATUS
2022-01-07T11:05:12.122218Z  163 Quit

Output from the SLAVE 2 console:

2022-01-07T11:05:10.331996Z  162 Query  SELECT @@global.read_only read_only
2022-01-07T11:05:11.316285Z  162 Query  SELECT @@global.read_only read_only
2022-01-07T11:05:11.702399Z  2 Query    BEGIN
2022-01-07T11:05:11.702534Z  2 Query    COMMIT /* implicit, from Xid_log_event */
2022-01-07T11:05:12.120590Z  162 Query  SHOW SLAVE STATUS
2022-01-07T11:05:12.120696Z  162 Quit

As you can see, the write query executed on the master, and the bin log was replicated from the master to the slave replicas.

Try it yourself:

  • Download the Postman collection
  • Run the Spring Boot application
  • Try executing the API endpoints

Setup Spring transactions for MySQL Replication

This post describes how to set up Spring transactions to connect to a MySQL database with replication, directing all write operations to the master and read operations to both the master and the slaves.

Database setup

The easiest way to set up a MySQL database with replication for testing is via Amazon AWS. Create an RDS instance with MySQL as the master and then create a Read Replica from the master. Note that AWS uses native MySQL replication to propagate database changes from the master to the slaves.

JDBC connection

To connect to MySQL replication using JDBC, make the following two changes to your Spring config:

Replace the JDBC driver class, e.g. com.mysql.jdbc.Driver, with com.mysql.jdbc.ReplicationDriver, and modify the JDBC connection string to the following format (the placeholders stand for your own hosts and schema):

jdbc:mysql:replication://<master-host>,<slave-host-1>,...,<slave-host-N>/<database>

See the MySQL documentation for the list of configuration properties that can be appended to the JDBC URL.

Note that the ReplicationDriver wraps one read and one write JDBC connection and can be used transparently with JDBC connection pools such as c3p0.
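
A minimal sketch of wiring this up in Java config (host names, schema, and credentials are illustrative assumptions; the driver class shown is the Connector/J 5.x one used throughout this post):

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.SimpleDriverDataSource;

@Configuration
public class ReplicationDataSourceConfig {

    @Bean
    public DataSource dataSource() {
        SimpleDriverDataSource ds = new SimpleDriverDataSource();
        ds.setDriverClass(com.mysql.jdbc.ReplicationDriver.class);
        // First host is the master; the remaining hosts are slaves.
        ds.setUrl("jdbc:mysql:replication://master-host,slave-host-1,slave-host-2/mydb");
        ds.setUsername("user");
        ds.setPassword("password");
        return ds;
    }
}

SimpleDriverDataSource does no pooling; in production you would put a pool such as c3p0 in front, as the post notes.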

Spring transaction

Use the readOnly attribute of Spring's @Transactional annotation to direct a transaction to either the master or a slave.

For write operations, use @Transactional(readOnly = false) and the database operations will go to the master only

For read only operations, use @Transactional(readOnly = true) and the database operations can go to the slave.
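
A hedged sketch of a service applying this convention; the repository interface mirrors the IProductRepository used in the test class below, but the service wrapper itself is an illustration, not code from the post:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ProductService {

    private final IProductRepository repository;

    @Autowired
    public ProductService(IProductRepository repository) {
        this.repository = repository;
    }

    // readOnly = false: the ReplicationDriver hands this transaction to the master
    @Transactional(readOnly = false)
    public void create(Product product) {
        repository.create(product);
    }

    // readOnly = true: the ReplicationDriver may serve this from a slave
    @Transactional(readOnly = true)
    public Product findByName(String name) {
        return repository.findProductByName(name);
    }
}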

Note:

There are a few old articles on the web indicating the readOnly attribute is ignored. This seems to be outdated and the attribute is working as expected in Spring 3. I am using Spring 3.2.

To verify the setup, I have the following Spring test class:

@RunWith(SpringJUnit4ClassRunner.class)
@TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = true)
@ContextConfiguration("classpath:spring/app-config-test.xml")
@Transactional(readOnly = true)
@ActiveProfiles(profiles = {"aws"})
public class ProductRepositoryImplTest {

    @Autowired
    private SessionFactory sessionFactory;

    @Autowired
    @Qualifier("productRepository")
    private IProductRepository repository;

    // ... setup details omitted here

    @Transactional(readOnly = false)
    @Test
    @Repeat(value = 100)
    public void testReplicationWrite() {
        repository.create(createEntity());
    }

    @Transactional(readOnly = true)
    @Test(expected = GenericJDBCException.class)
    public void testReplicationWriteFail() {
        repository.create(createEntity());
    }

    @Transactional(readOnly = true)
    @Test
    @Repeat(value = 100)
    public void testReplicationRead() {
        repository.findProductByName(RandomStringUtils.randomAlphabetic(10));
    }
}

Note:

The test class above tests the productRepository bean, which implements standard CRUD operations (details omitted here) for the entity Product. The first test method, testReplicationWrite(), will pass, as the readOnly attribute of the @Transactional annotation is set to false. The second test method, testReplicationWriteFail(), will throw a GenericJDBCException, as expected by the test method. This confirms that the readOnly attribute works by sending the database operation to the slave, hence the exception. If you remove "expected = GenericJDBCException.class", the test will fail with the following error: "org.hibernate.exception.GenericJDBCException: Connection is read-only. Queries leading to data modification are not allowed".

You can also verify on the MySQL master and slave by running the "show processlist" command while the tests are running. The @Repeat annotation runs the tests multiple (100) times to keep the connection process busy. Of course, you can also enable the query log for this.

mysql connector/j replication option – SQLException: No database selected

I have the following MySQL database cluster (1 master, 2 slaves); the slaves can be promoted to master and the master can be demoted to a slave. I am trying to connect to this cluster using the Spring Boot "mysql-connector-java" dependency (version 8.0.15) so that the application can dynamically find the master (read-write) and make read-write connections, but I am getting a "No database selected" SQLException. What am I doing wrong? Is there a better way to connect to a master/slave database cluster in Spring Boot?

I am using jdbc:mysql:replication with the options loadBalanceConnectionGroup=first&allowMasterDownConnections=false in the Spring datasource URL, with all the database hostnames. The application starts up, but when a REST API call runs a query or updates a database table, I get the SQLException "No database selected".

Here is my Spring Data Source Configuration –

spring.datasource.url=jdbc:mysql:replication://host1:3306,host2:3306/testdb?loadBalanceConnectionGroup=first&allowMasterDownConnections=false
spring.datasource.username=myuser
spring.datasource.password=mypassword
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver


Learn ProxySql & MySQL Replication and Spring Boot

1. Introduction

Amazon Aurora is a relational database management system (RDBMS) developed by AWS (Amazon Web Services). Aurora gives you the performance and availability of commercial-grade databases with full MySQL and PostgreSQL compatibility. In terms of performance, Aurora MySQL and Aurora PostgreSQL have shown an increase in throughput of up to 5x over stock MySQL and 3x over stock PostgreSQL, respectively, on similar hardware. In terms of scalability, Aurora brings enhancements and innovations in storage and compute, both horizontal and vertical.

Aurora supports up to 128TB of storage capacity and supports dynamic scaling of storage layer in units of 10GB. In terms of computing, Aurora supports scalable configurations for multiple read replicas. Each region can have an additional 15 Aurora replicas. In addition, Aurora provides multi-primary architecture to support four read/write nodes. Its Serverless architecture allows vertical scaling and reduces typical latency to under a second, while the Global Database enables a single database cluster to span multiple AWS Regions in low latency.

Aurora already provides great scalability with the growth of user data volume. Can it handle more data and support more concurrent access? You may consider using sharding to support the configuration of multiple underlying Aurora clusters. To this end, a series of blogs, including this one, provides you with a reference in choosing between Proxy and JDBC for sharding.

1.1 Why sharding is needed

AWS Aurora offers a single relational database. Primary-secondary, multi-primary, global database, and other hosting architectures can satisfy the various scenarios above. However, Aurora doesn't provide direct support for sharding, and sharding comes in a variety of forms, such as vertical and horizontal. If we want to further increase data capacity, some problems have to be solved, such as cross-node database joins, associated queries, distributed transactions, SQL sorting, page turning, function calculation, a global database primary key, capacity planning, and secondary capacity expansion after sharding.

1.2 Sharding methods

It is generally accepted that when a MySQL table holds fewer than 10 million rows, query times are optimal, because at that size the height of its B-tree index stays between 3 and 5. Data sharding reduces the amount of data in a single table and, at the same time, distributes the read and write load across different data nodes. Data sharding can be divided into vertical sharding and horizontal sharding.

1. Advantages of vertical sharding

It addresses the coupling of the business system and makes responsibilities clearer.

Implement hierarchical management, maintenance, monitoring, and expansion to data of different businesses, like micro-service governance.

In high concurrency scenarios, vertical sharding removes the bottleneck of IO, database connections, and hardware resources on a single machine to some extent.

2. Disadvantages of vertical sharding

After splitting the library, Join can only be implemented by interface aggregation, which will increase the complexity of development.

After splitting the library, it is complex to process distributed transactions.

When a single table still holds a large amount of data, horizontal sharding is required as well.

3. Advantages of horizontal sharding

It removes the performance bottleneck of a large amount of data and high concurrency on a single database, and it increases system stability and load capacity.

Business modules do not need to be split, since only minor modifications are required on the application client.

4. Disadvantages of horizontal sharding

Transaction consistency across shards is hard to guarantee.

The performance of associated queries with cross-library Join is poor.

It is difficult to re-scale the data later, and maintenance is a big workload.

Based on the analysis above, and on the available studies of popular sharding middleware, we selected ShardingSphere, an open-source product, combined with Amazon Aurora, to introduce how the combination of these two products meets the various forms of sharding and how it solves the problems brought by sharding.

ShardingSphere is an open-source ecosystem consisting of a set of distributed database middleware solutions, with three independent products: Sharding-JDBC, Sharding-Proxy, and Sharding-Sidecar.

2. ShardingSphere introduction:

The characteristics of Sharding-JDBC are:

  • With the client connecting directly to the database, it provides services in the form of a jar and requires no extra deployment or dependencies. It can be considered an enhanced JDBC driver, fully compatible with JDBC and all kinds of ORM frameworks.
  • Applicable to any ORM framework based on JDBC, such as JPA, Hibernate, MyBatis, Spring JDBC Template, or direct use of JDBC.
  • Supports any third-party database connection pool, such as DBCP, C3P0, BoneCP, Druid, or HikariCP.
  • Supports any JDBC-standard database: MySQL, Oracle, SQLServer, PostgreSQL, and any database accessible through JDBC.
  • Sharding-JDBC adopts a decentralized architecture, applicable to high-performance, light-weight OLTP applications developed in Java.

Hybrid Structure Integrating Sharding-JDBC and Applications

Sharding-JDBC’s core concepts

Data node: The smallest unit of a data slice, consisting of a data source name and a data table, such as ds_0.product_order_0.

Actual table: The physical table that really exists in the horizontal sharding database, such as product order tables: product_order_0, product_order_1, and product_order_2.

Logic table: The logical name of the horizontal sharding databases (tables) with the same schema. For instance, the logic table of the order product_order_0, product_order_1, and product_order_2 is product_order.

Binding table: It refers to the primary table and the joiner table with the same sharding rules. For example, product_order table and product_order_item are sharded by order_id, so they are binding tables with each other. Cartesian product correlation will not appear in the multi-tables correlating query, so the query efficiency will increase greatly.

Broadcast table: It refers to tables that exist in all sharding database sources. The schema and data must be consistent in each database. It is suited to small tables that need to be joined with big data tables in queries, for example dictionary tables and configuration tables.
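
To tie these concepts together, here is a hedged properties sketch reusing the product_order example above; the data-source names, shard counts, and the t_dict table are illustrative assumptions, not from the example project:

# product_order is the logic table; the actual tables live on two assumed data sources
spring.shardingsphere.sharding.tables.product_order.actual-data-nodes=ds_$->{0..1}.product_order_$->{0..2}
spring.shardingsphere.sharding.tables.product_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.product_order.table-strategy.inline.algorithm-expression=product_order_$->{order_id % 3}
# product_order and product_order_item share the sharding rules, so bind them
spring.shardingsphere.sharding.binding-tables[0]=product_order,product_order_item
# a small dictionary table copied to every shard
spring.shardingsphere.sharding.broadcast-tables=t_dict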

3.1 Example project

Download the example project code locally. To ensure the stability of the test code, we chose the shardingsphere-example-4.0.0 version.

git clone https://github.com/apache/shardingsphere-example.git

Project description:

shardingsphere-example
├── example-core
│   ├── config-utility
│   ├── example-api
│   ├── example-raw-jdbc
│   ├── example-spring-jpa              # spring+jpa integration-based entity, repository
│   └── example-spring-mybatis
├── sharding-jdbc-example
│   ├── sharding-example
│   │   ├── sharding-raw-jdbc-example
│   │   ├── sharding-spring-boot-jpa-example       # integration-based sharding-jdbc functions
│   │   ├── sharding-spring-boot-mybatis-example
│   │   ├── sharding-spring-namespace-jpa-example
│   │   └── sharding-spring-namespace-mybatis-example
│   ├── orchestration-example
│   │   ├── orchestration-raw-jdbc-example
│   │   ├── orchestration-spring-boot-example      # integration-based sharding-jdbc governance function
│   │   └── orchestration-spring-namespace-example
│   ├── transaction-example
│   │   ├── transaction-2pc-xa-example             # sharding-jdbc sample of two-phase commit for a distributed transaction
│   │   └── transaction-base-seata-example         # sharding-jdbc distributed transaction seata sample
│   ├── other-feature-example
│   │   ├── hint-example
│   │   └── encrypt-example
├── sharding-proxy-example
│   └── sharding-proxy-boot-mybatis-example
└── src/resources
    └── manual_schema.sql

Configuration file description:

application-master-slave.properties              # read/write splitting profile
application-sharding-databases-tables.properties # sharding profile
application-sharding-databases.properties        # library split profile only
application-sharding-master-slave.properties     # sharding and read/write splitting profile
application-sharding-tables.properties           # table split profile
application.properties                           # spring boot profile

Code logic description:

The following is the entry class of the Spring Boot application below. Execute it to run the project.

The execution logic of demo is as follows:

3.2 Verifying read/write splitting

As business grows, write and read requests can be split across different database nodes to effectively increase the processing capability of the entire database cluster. Aurora uses a reader/writer endpoint to meet users' requirements to write and to read with strong consistency, and a read-only endpoint to serve reads that do not require strong consistency. Aurora's read and write latency is within single-digit milliseconds, much lower than MySQL's binlog-based logical replication, so a large share of the read load can be directed to the read-only endpoint.

Through the one primary and multiple secondary configuration, query requests can be evenly distributed to multiple data replicas, which further improves the processing capability of the system. Read/write splitting can improve the throughput and availability of system, but it can also lead to data inconsistency. Aurora provides a primary/secondary architecture in a fully managed form, but applications on the upper-layer still need to manage multiple data sources when interacting with Aurora, routing SQL requests to different nodes based on the read/write type of SQL statements and certain routing policies.

ShardingSphere-JDBC provides read/write splitting features and it is integrated with application programs so that the complex configuration between application programs and database clusters can be separated from application programs. Developers can manage the Shard through configuration files and combine it with ORM frameworks such as Spring JPA and Mybatis to completely separate the duplicated logic from the code, which greatly improves the ability to maintain code and reduces the coupling between code and database.
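
Read/write splitting makes reads eventually consistent. When a particular read must see the latest write, ShardingSphere's hint API can force it to the primary; the sketch below targets the 4.x line used by the example project, and the exact package and method names should be treated as assumptions to verify against the 4.x docs:

import org.apache.shardingsphere.api.hint.HintManager;

public final class MasterRouteExample {

    // Runs the given query with all routing forced to the master data source.
    public static void runOnMaster(Runnable query) {
        try (HintManager hint = HintManager.getInstance()) {
            hint.setMasterRouteOnly(); // reads inside this scope go to ds_master
            query.run();
        }
    }
}

For example, runOnMaster(() -> repository.findAll()) would read through ds_master instead of a slave.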

3.2.1 Setting up the database environment

Create a set of Aurora MySQL read/write splitting clusters. The model is db.r5.2xlarge. Each set of clusters has one write node and two read nodes.

3.2.2 Configuring Sharding-JDBC

application.properties spring boot Master profile description:

You need to replace the highlighted values with your own environment configuration.

# JPA automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
# Activate the master-slave configuration item so that sharding-jdbc uses the master-slave profile
spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-master-slave.properties sharding-jdbc profile description:

spring.shardingsphere.datasource.names=ds_master,ds_slave_0,ds_slave_1

# data source - master
spring.shardingsphere.datasource.ds_master.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master.password=Your master DB password
spring.shardingsphere.datasource.ds_master.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master.jdbc-url=Your primary DB data source url
spring.shardingsphere.datasource.ds_master.username=Your primary DB username

# data source - slave
spring.shardingsphere.datasource.ds_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_0.password=Your slave DB password
spring.shardingsphere.datasource.ds_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_0.jdbc-url=Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_0.username=Your slave DB username

# data source - slave
spring.shardingsphere.datasource.ds_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_slave_1.password=Your slave DB password
spring.shardingsphere.datasource.ds_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_slave_1.jdbc-url=Your slave DB data source url
spring.shardingsphere.datasource.ds_slave_1.username=Your slave DB username

# routing policy configuration
spring.shardingsphere.masterslave.load-balance-algorithm-type=round_robin
spring.shardingsphere.masterslave.name=ds_ms
spring.shardingsphere.masterslave.master-data-source-name=ds_master
spring.shardingsphere.masterslave.slave-data-source-names=ds_slave_0,ds_slave_1

# sharding-jdbc information storage mode
spring.shardingsphere.mode.type=Memory

# enable the shardingsphere log so you can see the conversion from logical SQL to actual SQL
spring.shardingsphere.props.sql.show=true

3.2.3 Test and verification process description

Test environment data initialization: Spring JPA initialization automatically creates tables for testing.

Write data to the master instance

As shown in the ShardingSphere-SQL log figure below, the write SQL is executed on the ds_master data source.

Data query operations are performed on the slave library.

As shown in the ShardingSphere-SQL log figure below, the read SQL is executed on the ds_slave data source in the form of polling.

[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] Rule Type: master-slave
[INFO ] 2022-04-02 19:43:39,376 --main-- [ShardingSphere-SQL] SQL: select orderentit0_.order_id as order_id1_1_, orderentit0_.address_id as address_2_1_, orderentit0_.status as status3_1_, orderentit0_.user_id as user_id4_1_ from t_order orderentit0_ ::: DataSources: ds_slave_0

---------------------------- Print OrderItem Data -------------------
Hibernate: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, orderiteme1_.user_id as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] Rule Type: master-slave
[INFO ] 2022-04-02 19:43:40,898 --main-- [ShardingSphere-SQL] SQL: select orderiteme1_.order_item_id as order_it1_2_, orderiteme1_.order_id as order_id2_2_, orderiteme1_.status as status3_2_, orderiteme1_.user_id as user_id4_2_ from t_order orderentit0_ cross join t_order_item orderiteme1_ where orderentit0_.order_id=orderiteme1_.order_id ::: DataSources: ds_slave_1

Note: As shown in the figure below, if there are both reads and writes in a transaction, Sharding-JDBC routes both read and write operations to the master library. If the read/write requests are not in the same transaction, the corresponding read requests are distributed to different read nodes according to the routing policy.

@Override
@Transactional
// When a transaction is started, both reads and writes in the transaction go through
// the master library. When no transaction is open, reads go through the slave library
// and writes go through the master library.
public void processSuccess() throws SQLException {
    System.out.println("-------------- Process Success Begin ---------------");
    List<Long> orderIds = insertData();
    printData();
    deleteData(orderIds);
    printData();
    System.out.println("-------------- Process Success Finish --------------");
}

3.2.4 Verifying Aurora failover scenario

The Aurora database environment adopts the configuration described in Section 3.2.1.

3.2.4.1 Verification process description

1. Start the Spring Boot project.

2. Perform a failover on Aurora's console.

3. Execute the REST API request.

4. Repeatedly execute POST (http://localhost:8088/save-user) until the call to the API fails to write to Aurora and eventually recovers successfully.

5. The following figure shows the failover process as the code executes. It takes about 37 seconds from the last successful SQL write before the failover to the next successful SQL write. That is, the application recovers from an Aurora failover automatically, and the recovery time is about 37 seconds.

3.3 Testing table sharding-only function

3.3.1 Configuring Sharding-JDBC

application.properties spring boot master profile description

# JPA automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create-drop
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

#spring.profiles.active=sharding-databases
# Activate the sharding-tables configuration items
spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-sharding-tables.properties sharding-jdbc profile description

# configure primary-key policy
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123

spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123

# configure the binding relation of t_order and t_order_item
spring.shardingsphere.sharding.binding-tables[0]=t_order,t_order_item

# configure broadcast tables
spring.shardingsphere.sharding.broadcast-tables=t_address

# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory

# enable the shardingsphere log
spring.shardingsphere.props.sql.show=true

3.3.2 Test and verification process description

1. DDL operation

JPA automatically creates tables for testing. With the Sharding-JDBC routing rules configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding tables according to the table-splitting rules. Since t_address is a broadcast table, a single t_address is created, because there is only one instance here. For t_order, two physical tables, t_order_0 and t_order_1, are created.

2. Write operation

As shown in the figure below, the logical SQL inserts a record into t_order. When Sharding-JDBC executes it, the data is distributed across t_order_0 and t_order_1 according to the table-splitting rules.

When t_order and t_order_item are bound, the records associated with order_item and order are placed on the same physical table.

3. Read operation

As shown in the figure below, perform the join query operations to order and order_item under the binding table, and the physical shard is precisely located based on the binding relationship.

The join query operations on order and order_item under the unbound table will traverse all shards.

3.4 Testing database sharding-only function

3.4.1 Setting up the database environment

Create two instances on Aurora: ds_0 and ds_1

When the sharding-spring-boot-jpa-example project is started, tables t_order , t_order_item , t_address will be created on two Aurora instances.

3.4.2 Configuring Sharding-JDBC

application.properties springboot master profile description

# JPA automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

# Activate the sharding-databases configuration items
spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-sharding-databases.properties sharding-jdbc profile description

spring.shardingsphere.datasource.names=ds_0,ds_1

# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url=
spring.shardingsphere.datasource.ds_0.username=
spring.shardingsphere.datasource.ds_0.password=

# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url=
spring.shardingsphere.datasource.ds_1.username=
spring.shardingsphere.datasource.ds_1.password=

spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_0

spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123

spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123

# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# enable the shardingsphere log
spring.shardingsphere.props.sql.show=true

3.4.3 Test and verification process description

1. DDL operation

JPA automatically creates tables for testing. With Sharding-JDBC's library-splitting and routing rules configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding tables according to the splitting rules. Since t_address is a broadcast table, its physical table is created on both ds_0 and ds_1. The three tables t_address, t_order, and t_order_item are created on both ds_0 and ds_1.

2. Write operation

For the broadcast table t_address , each record written will also be written to the t_address tables of ds_0 and ds_1 .

Records for the sharded tables t_order and t_order_item are written to the table on the corresponding instance according to the database-sharding column and the routing policy.

3. Read operation

Querying order is routed to the corresponding Aurora instance according to the database-sharding routing rules.

Querying address: since address is a broadcast table, one instance of address is randomly selected from the available nodes and queried.

As shown in the figure below, perform the join query operations to order and order_item under the binding table, and the physical shard is precisely located based on the binding relationship.

3.5 Verifying sharding function

3.5.1 Setting up the database environment

As shown in the figure below, create two instances on Aurora: ds_0 and ds_1

When the sharding-spring-boot-jpa-example project is started, the physical tables t_order_0, t_order_1, t_order_item_0, and t_order_item_1 and the global table t_address will be created on the two Aurora instances.

3.5.2 Configuring Sharding-JDBC

application.properties springboot master profile description

# JPA automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true

# Activate the sharding-databases-tables configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
#spring.profiles.active=sharding-master-slave

application-sharding-databases-tables.properties sharding-jdbc profile description

spring.shardingsphere.datasource.names=ds_0,ds_1

# ds_0
spring.shardingsphere.datasource.ds_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_0.jdbc-url=<your ds_0 endpoint>:3306/dev?useSSL=false&characterEncoding=utf-8
spring.shardingsphere.datasource.ds_0.username=
spring.shardingsphere.datasource.ds_0.password=
spring.shardingsphere.datasource.ds_0.max-active=16

# ds_1
spring.shardingsphere.datasource.ds_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_1.jdbc-url=
spring.shardingsphere.datasource.ds_1.username=
spring.shardingsphere.datasource.ds_1.password=
spring.shardingsphere.datasource.ds_1.max-active=16

# default library splitting policy
spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
# tables that do not meet the library splitting policy are placed on ds_0
spring.shardingsphere.sharding.default-data-source-name=ds_0

# t_order table splitting policy
spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123

# t_order_item table splitting policy
spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123

# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# enable the shardingsphere log
spring.shardingsphere.props.sql.show=true

3.5.3 Test and verification process description

1. DDL operation

JPA automatically creates tables for testing. With Sharding-JDBC's sharding and routing rules configured, the client executes the DDL and Sharding-JDBC automatically creates the corresponding tables according to the splitting rules. Since t_address is a broadcast table, t_address is created on both ds_0 and ds_1, and the t_order and t_order_item shard tables are created on ds_0 and ds_1 respectively.

2. Write operation

For the broadcast table t_address , each record written will also be written to the t_address tables of ds_0 and ds_1 .

Records for the sub-library tables t_order and t_order_item are written to the shard table on the corresponding instance according to the sharding column and the routing policy.

3. Read operation

The read operation is similar to the database sharding verification described in section 2.4.3.

3.6 Testing database sharding, table sharding and read/write splitting function

3.6.1 Setting up the database environment

The following figure shows the physical tables of the created database instances.

3.6.2 Configuring Sharding-JDBC

application.properties spring boot master profile description

# Jpa automatically creates and drops data tables based on entities
spring.jpa.properties.hibernate.hbm2ddl.auto=create
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.properties.hibernate.show_sql=true
# activate the sharding-master-slave configuration items
#spring.profiles.active=sharding-databases
#spring.profiles.active=sharding-tables
#spring.profiles.active=sharding-databases-tables
#spring.profiles.active=master-slave
spring.profiles.active=sharding-master-slave

application-sharding-master-slave.properties sharding-jdbc profile description

The URL, username and password of the databases need to be changed to your own database parameters.

spring.shardingsphere.datasource.names=ds_master_0,ds_master_1,ds_master_0_slave_0,ds_master_0_slave_1,ds_master_1_slave_0,ds_master_1_slave_1

spring.shardingsphere.datasource.ds_master_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0.username=
spring.shardingsphere.datasource.ds_master_0.password=
spring.shardingsphere.datasource.ds_master_0.max-active=16

spring.shardingsphere.datasource.ds_master_0_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_0.username=
spring.shardingsphere.datasource.ds_master_0_slave_0.password=
spring.shardingsphere.datasource.ds_master_0_slave_0.max-active=16

spring.shardingsphere.datasource.ds_master_0_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_0_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_0_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_0_slave_1.username=
spring.shardingsphere.datasource.ds_master_0_slave_1.password=
spring.shardingsphere.datasource.ds_master_0_slave_1.max-active=16

spring.shardingsphere.datasource.ds_master_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_1.username=
spring.shardingsphere.datasource.ds_master_1.password=
spring.shardingsphere.datasource.ds_master_1.max-active=16

spring.shardingsphere.datasource.ds_master_1_slave_0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_0.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_0.username=
spring.shardingsphere.datasource.ds_master_1_slave_0.password=
spring.shardingsphere.datasource.ds_master_1_slave_0.max-active=16

spring.shardingsphere.datasource.ds_master_1_slave_1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.ds_master_1_slave_1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds_master_1_slave_1.jdbc-url=
spring.shardingsphere.datasource.ds_master_1_slave_1.username=admin
spring.shardingsphere.datasource.ds_master_1_slave_1.password=
spring.shardingsphere.datasource.ds_master_1_slave_1.max-active=16

spring.shardingsphere.sharding.default-database-strategy.inline.sharding-column=user_id
spring.shardingsphere.sharding.default-database-strategy.inline.algorithm-expression=ds_$->{user_id % 2}
spring.shardingsphere.sharding.binding-tables=t_order,t_order_item
spring.shardingsphere.sharding.broadcast-tables=t_address
spring.shardingsphere.sharding.default-data-source-name=ds_master_0

spring.shardingsphere.sharding.tables.t_order.actual-data-nodes=ds_$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order.table-strategy.inline.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order.key-generator.column=order_id
spring.shardingsphere.sharding.tables.t_order.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order.key-generator.props.worker.id=123

spring.shardingsphere.sharding.tables.t_order_item.actual-data-nodes=ds_$->{0..1}.t_order_item_$->{0..1}
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.sharding-column=order_id
spring.shardingsphere.sharding.tables.t_order_item.table-strategy.inline.algorithm-expression=t_order_item_$->{order_id % 2}
spring.shardingsphere.sharding.tables.t_order_item.key-generator.column=order_item_id
spring.shardingsphere.sharding.tables.t_order_item.key-generator.type=SNOWFLAKE
spring.shardingsphere.sharding.tables.t_order_item.key-generator.props.worker.id=123

# master/slave rules: map each logical data source to its master and slaves
spring.shardingsphere.sharding.master-slave-rules.ds_0.master-data-source-name=ds_master_0
spring.shardingsphere.sharding.master-slave-rules.ds_0.slave-data-source-names=ds_master_0_slave_0,ds_master_0_slave_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.master-data-source-name=ds_master_1
spring.shardingsphere.sharding.master-slave-rules.ds_1.slave-data-source-names=ds_master_1_slave_0,ds_master_1_slave_1

# sharding-jdbc mode
spring.shardingsphere.mode.type=Memory
# enable the ShardingSphere SQL log
spring.shardingsphere.props.sql.show=true
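One quick way to verify the splitting is a sketch like the following, reusing the hypothetical Order entity and repository from section 3.5 (findByUserId is an assumed query method). With spring.shardingsphere.props.sql.show=true, the "Actual SQL" log lines reveal where each statement is routed:

import java.util.List;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

// Hypothetical verification test, not part of the original walkthrough.
@SpringBootTest
class ReadWriteSplitTest {

    @Autowired
    private OrderRepository orderRepository;

    @Test
    void insertHitsMasterSelectHitsSlave() {
        Order order = new Order();
        order.setUserId(7L);
        order.setOrderId(1001L);
        order.setStatus("INIT");
        orderRepository.save(order);   // logged against ds_master_1

        // A read outside the write transaction is load-balanced across
        // ds_master_1_slave_0 / ds_master_1_slave_1.
        List<Order> orders = orderRepository.findByUserId(7L);
    }
}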

3.6.3 Test and verification process description

1. DDL operation

JPA automatically creates tables for testing. With Sharding-JDBC's sharding and routing rules configured, the client executes DDL and Sharding-JDBC automatically creates the corresponding physical tables according to the splitting rules. Since t_address is a broadcast table, t_address is created on both ds_0 and ds_1. The sharded tables t_order_0/t_order_1 and t_order_item_0/t_order_item_1 are likewise created on both ds_0 and ds_1, following their actual-data-nodes expressions.

2. Write operation

For the broadcast table t_address , each record written will also be written to the t_address tables of ds_0 and ds_1 .

The sharded tables t_order and t_order_item are written to the physical table on the corresponding master instance according to the sharding column and routing policy; the rows then replicate to the slaves.

3. Read operation

The join query operations on order and order_item under the binding table are shown below.
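As an illustration of what those joins look like (a sketch, not the original screenshot): because t_order and t_order_item are declared as binding tables, Sharding-JDBC only joins physical tables with matching suffixes instead of producing a Cartesian product of routes.

-- Logical SQL issued by the application
SELECT o.order_id, o.user_id, i.order_item_id
FROM t_order o JOIN t_order_item i ON o.order_id = i.order_id
WHERE o.user_id = 7;

-- Actual SQL routed by Sharding-JDBC (user_id = 7 selects ds_1; order_id is
-- unknown, so both table pairs are queried, but only with matching suffixes):
-- SELECT ... FROM t_order_0 o JOIN t_order_item_0 i ON o.order_id = i.order_id WHERE o.user_id = 7;
-- SELECT ... FROM t_order_1 o JOIN t_order_item_1 i ON o.order_id = i.order_id WHERE o.user_id = 7;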

4. Conclusion

As an open source product focusing on database enhancement, ShardingSphere is pretty good in terms of its community activity, product maturity and documentation richness.

Among its products, ShardingSphere-JDBC is a client-side sharding solution that supports all sharding scenarios. Since there is no need to introduce an intermediate layer such as a proxy, the complexity of operation and maintenance is reduced, and its latency is theoretically lower than a proxy's. In addition, ShardingSphere-JDBC can support a variety of SQL-standard relational databases such as MySQL, PostgreSQL, Oracle and SQL Server.

However, because Sharding-JDBC is integrated into the application, it currently supports only Java and is strongly coupled to the application. Nevertheless, Sharding-JDBC keeps all sharding configuration separate from the application code, so switching to other middleware requires relatively small changes.

In conclusion, Sharding-JDBC is a good choice if you run a Java-based system, have to interconnect with different relational databases, and don't want the burden of introducing an intermediate layer.

Author

Sun Jinhua

A senior solution architect at AWS, Sun is responsible for providing customers with cloud architecture design and consulting services. Before joining AWS, he ran his own business, specializing in building e-commerce platforms and designing the overall architecture of e-commerce platforms for automotive companies. He also worked as a senior engineer at a leading global communication equipment company, responsible for the development and architecture design of multiple subsystems of an LTE equipment system. He has rich experience in architecture design for high-concurrency, high-availability systems, microservice architecture design, databases, middleware, IoT, etc.

MySQL Connector/J 8.0 Developer Guide :: 9.4 Configuring Source/Replica Replication with Connector/J

9.4 Configuring Source/Replica Replication with Connector/J

This section describes a number of features of Connector/J's support for replication-aware deployments.

Replication is configured at the initial setup stage of the server connection, via a connection URL that has a format similar to the general JDBC URL for MySQL connections but with a specialized scheme:

jdbc:mysql:replication://[source host][:port],[replica host 1][:port][,[replica host 2][:port]]...[/[database]][?propertyName1=propertyValue1[&propertyName2=propertyValue2]...]

Users may specify the property allowSourceDownConnections=true to allow Connection objects to be created even though no source hosts are reachable. Such Connection objects report they are read-only, and isSourceConnection() returns false for them. The Connection tests for available source hosts when Connection.setReadOnly(false) is called, throwing an SQLException if it cannot establish a connection to a source, or switching to a source connection if the host is available.

Users may specify the property allowReplicasDownConnections=true to allow Connection objects to be created even though no replica hosts are reachable. A Connection then, at runtime, tests for available replica hosts when Connection.setReadOnly(true) is called (see explanation for the method below), throwing an SQLException if it cannot establish a connection to a replica, unless the property readFromSourceWhenNoReplicas is set to be “true” (see below for a description of the property).
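For illustration, both properties can be supplied directly in the connection URL; host names and credentials below are placeholders, and this is a sketch rather than a canonical usage:

import java.sql.Connection;
import java.sql.DriverManager;

public class DownConnectionsDemo {
    public static void main(String[] args) throws Exception {
        // These properties let the Connection be created even if the source
        // (or all replicas) are unreachable at connect time.
        String url = "jdbc:mysql:replication://source,replica1,replica2/test"
                + "?allowSourceDownConnections=true"
                + "&allowReplicasDownConnections=true"
                + "&readFromSourceWhenNoReplicas=true";
        Connection conn = DriverManager.getConnection(url, "user", "password");

        // Starts read-only if no source was reachable; setReadOnly(false)
        // retries the source hosts and throws SQLException if still down.
        conn.setReadOnly(false);
    }
}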

Scaling out Read Load by Distributing Read Traffic to Replicas

Connector/J supports replication-aware connections. It can automatically send queries to a read/write source host, or to a failover or round-robin load-balanced set of replicas, based on the state of Connection.getReadOnly() .

An application signals that it wants a transaction to be read-only by calling Connection.setReadOnly(true) . The replication-aware connection will use one of the replica connections, which are load-balanced per replica host using a round-robin scheme. A given connection is sticky to a replica until a transaction boundary command (a commit or rollback) is issued, or until the replica is removed from service. After calling Connection.setReadOnly(true) , if you want to allow connection to a source when no replicas are available, set the property readFromSourceWhenNoReplicas to “true.” Notice that the source host will be used in read-only state in those cases, as if it is a replica host. Also notice that setting readFromSourceWhenNoReplicas=true might result in an extra load for the source host in a transparent manner.

If you have a write transaction, or if you have a read that is time-sensitive (remember, replication in MySQL is asynchronous), set the connection to be not read-only, by calling Connection.setReadOnly(false) and the driver will ensure that further calls are sent to the source MySQL server. The driver takes care of propagating the current state of autocommit, isolation level, and catalog between all of the connections that it uses to accomplish this load balancing functionality.

To enable this functionality, use the specialized replication scheme ( jdbc:mysql:replication:// ) when connecting to the server.

Here is a short example of how a replication-aware connection might be used in a standalone application:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.util.Properties;

public class ReplicationDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();

        // We want this for failover on the replicas
        props.put("autoReconnect", "true");

        // We want to load balance between the replicas
        props.put("roundRobinLoadBalance", "true");

        props.put("user", "foo");
        props.put("password", "password");

        //
        // Looks like a normal MySQL JDBC url, with a
        // comma-separated list of hosts, the first
        // being the 'source', the rest being any number
        // of replicas that the driver will load balance against
        //
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql:replication://source,replica1,replica2,replica3/test",
                props);

        //
        // Perform read/write work on the source
        // by setting the read-only flag to "false"
        //
        conn.setReadOnly(false);
        conn.setAutoCommit(false);
        conn.createStatement().executeUpdate("UPDATE some_table ....");
        conn.commit();

        //
        // Now, do a query from a replica, the driver automatically picks one
        // from the list
        //
        conn.setReadOnly(true);
        ResultSet rs = conn.createStatement()
                .executeQuery("SELECT a,b FROM alt_table");
        .......
    }
}

Consider using the Load Balancing JDBC Pool (lbpool) tool, which provides a wrapper around the standard JDBC driver and enables you to use DB connection pools that includes checks for system failures and uneven load distribution. For more information, see Load Balancing JDBC Driver for MySQL (mysql-lbpool).

Support for Multiple-Source Replication Topographies

Connector/J supports multi-source replication topographies.

The connection URL for replication discussed earlier (i.e., in the format of jdbc:mysql:replication://source,replica1,replica2,replica3/test ) assumes that the first (and only the first) host is the source host. Supporting deployments with an arbitrary number of sources and replicas requires the “address-equals” URL syntax for multiple host connection discussed in Section 6.2, “Connection URL Syntax”, with the property type=[source|replica] ; for example:

jdbc:mysql:replication://address=(type=source)(host=source1host),address=(type=source)(host=source2host),address=(type=replica)(host=replica1host)/database

Connector/J uses a load-balanced connection internally for management of the source connections, which means that ReplicationConnection , when configured to use multiple sources, exposes the same options to balance load across source hosts as described in Section 9.3, “Configuring Load Balancing with Connector/J”.

Live Reconfiguration of Replication Topography

Connector/J also supports live management of replication host (single or multi-source) topographies. This enables users to promote replicas for Java applications without requiring an application restart.

The replication hosts are most effectively managed in the context of a replication connection group. A ReplicationConnectionGroup class represents a logical grouping of connections which can be managed together. There may be one or more such replication connection groups in a given Java class loader (there can be an application with two different JDBC resources needing to be managed independently). This key class exposes host management methods for replication connections, and ReplicationConnection objects register themselves with the appropriate ReplicationConnectionGroup if a value for the new replicationConnectionGroup property is specified. The ReplicationConnectionGroup object tracks these connections until they are closed, and it is used to manipulate the hosts associated with these connections.

Some important methods related to host management include:

getSourceHosts() : Returns a collection of strings representing the hosts configured as source hosts

getReplicaHosts() : Returns a collection of strings representing the hosts configured as replica hosts

addReplicaHost(String host) : Adds new host to pool of possible replica hosts for selection at start of new read-only workload

promoteReplicaToSource(String host) : Removes the host from the pool of potential replica hosts for future read-only processes (existing read-only process is allowed to continue to completion) and adds the host to the pool of potential source hosts

removeReplicaHost(String host, boolean closeGently) : Removes the host (the host name match must be exact) from the list of configured replica hosts; if closeGently is false, existing connections which have this host as currently active will be closed abruptly (the application should expect exceptions)

removeSourceHost(String host, boolean closeGently) : Same as removeReplicaHost() , but removes the host from the list of configured source hosts

Some useful management metrics include:

getConnectionCountWithHostAsReplica(String host) : Returns the number of ReplicationConnection objects that have the given host configured as a possible replica host

getConnectionCountWithHostAsSource(String host) : Returns the number of ReplicationConnection objects that have the given host configured as a possible source host

getNumberOfReplicasAdded() : Returns the number of times a replica host has been dynamically added to the group pool

getNumberOfReplicasRemoved() : Returns the number of times a replica host has been dynamically removed from the group pool

getNumberOfReplicaPromotions() : Returns the number of times a replica host has been promoted to be a source host

getTotalConnectionCount() : Returns the number of ReplicationConnection objects which have been registered with this group

getActiveConnectionCount() : Returns the number of ReplicationConnection objects currently being managed by this group

ReplicationConnectionGroupManager

com.mysql.cj.jdbc.ha.ReplicationConnectionGroupManager provides access to the replication connection groups, together with some utility methods.

getConnectionGroup(String groupName) : Returns the ReplicationConnectionGroup object matching the groupName provided

The other methods in ReplicationConnectionGroupManager mirror those of ReplicationConnectionGroup , except that the first argument is a String group name. These methods operate on all matching ReplicationConnectionGroups, which is helpful for removing a server from service and having it decommissioned across all possible ReplicationConnectionGroups .

These methods might be useful for in-JVM management of replication hosts if an application triggers topography changes. For managing host configurations from outside the JVM, JMX can be used.
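For instance, an application could react to a topology change as in this sketch (group and host names are hypothetical, and the mirrored signatures described above are assumed; connections must have been created with replicationConnectionGroup=mygroup for the group to exist):

import java.sql.SQLException;
import com.mysql.cj.jdbc.ha.ReplicationConnectionGroupManager;

public class TopologyChangeDemo {
    public static void onReplicaPromoted() throws SQLException {
        // Make a new replica available for future read-only workloads
        ReplicationConnectionGroupManager.addReplicaHost("mygroup", "replica3");
        // Promote an existing replica to serve as a source host
        ReplicationConnectionGroupManager.promoteReplicaToSource("mygroup", "replica2");
    }
}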

Using JMX for Managing Replication Hosts

When Connector/J is started with ha.enableJMX=true and a value set for the property replicationConnectionGroup , a JMX MBean will be registered, allowing manipulation of replication hosts by a JMX client. The MBean interface is defined in com.mysql.cj.jdbc.jmx.ReplicationGroupManagerMBean , and leverages the ReplicationConnectionGroupManager static methods:

public abstract void addReplicaHost(String groupFilter, String host) throws SQLException;
public abstract void removeReplicaHost(String groupFilter, String host) throws SQLException;
public abstract void promoteReplicaToSource(String groupFilter, String host) throws SQLException;
public abstract void removeSourceHost(String groupFilter, String host) throws SQLException;
public abstract String getSourceHostsList(String group);
public abstract String getReplicaHostsList(String group);
public abstract String getRegisteredConnectionGroups();
public abstract int getActiveSourceHostCount(String group);
public abstract int getActiveReplicaHostCount(String group);
public abstract int getReplicaPromotionCount(String group);
public abstract long getTotalLogicalConnectionCount(String group);
public abstract long getActiveLogicalConnectionCount(String group);
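As a sketch of how the MBean becomes available (host and group names are placeholders), create the connections with both properties set:

import java.sql.Connection;
import java.sql.DriverManager;

public class JmxReplicationDemo {
    public static void main(String[] args) throws Exception {
        // ha.enableJMX registers the ReplicationGroupManager MBean;
        // replicationConnectionGroup names the group it manages.
        String url = "jdbc:mysql:replication://source,replica1,replica2/test"
                + "?ha.enableJMX=true"
                + "&replicationConnectionGroup=mygroup";
        Connection conn = DriverManager.getConnection(url, "user", "password");
        // A JMX client (e.g. jconsole) can now invoke promoteReplicaToSource
        // and the other operations listed above on the registered MBean.
    }
}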

Configuring Source/Replica Replication with DNS SRV

See Section 6.14, “Support for DNS SRV Records” for details.

An Introduction to MySQL Replication

1. Introduction

Right tool for the right job. First, understand that MySQL Replication is not a solution to every database overload problem. To expand a system there are two approaches: scale up and scale out. Starting from a single server, the two approaches can be described as follows:

Scale up means making a single server serve a larger number of connections and queries; in other words, the smaller the value of 1/(connections served), the better. There are two ways to achieve this:

Upgrade the server hardware. For example, if a server with a 4-core CPU and 8 GB of RAM can serve 500 queries, raising it to 24 cores and 32 GB of RAM lets it serve a larger number of concurrent connections and queries.

Optimize the application and its queries. For example, suppose a query takes 5s to fetch its data before releasing resources back to the system, and the server can serve 500 such queries concurrently. If we optimize the query down to 1s, the server can serve many more queries concurrently.

Scale out means adding more servers and using a load balancer to distribute queries across them. For example, if one server can serve 500 queries, adding 5 more servers with a similar configuration behind a load balancer gives the system the capacity to serve roughly 5x500 concurrent queries.

MySQL Replication is a scale-out solution (it increases the number of MySQL instances), but it does not fit every problem. The problems MySQL Replication solves well are:

Scale Read

Data Report

Real time backup

1.1 Scale Read

Scale Read is common in applications where reads far outnumber writes; the read/write ratio may be 80/20 or higher. Typical examples are newspapers and news sites.

With scale read there is a single Master instance serving both reads and writes, and one or more Slave instances serving reads only.

Some write-heavy applications (e-commerce) also use MySQL Replication to scale out their systems.

1.2 Data Report

Some systems allow certain people (leaders, managers, people doing reports, statistics, data work) to access production data for their jobs. Querying production data directly is very risky because they may:

Accidentally modify and corrupt data (if they have insert/update privileges)

Accidentally run queries that consume a lot of resources or take a long time, hanging the system

Setting up a dedicated data-report server (which the application does not connect to) mitigates both of these risks

1.3 Real time backup

For large databases, backups cannot be taken frequently (hourly, by the minute). For financial, payment and e-commerce applications, losing one hour or one day of data is very costly (for example, when the primary server suddenly fails). Real-time backup is a complement to offline backup; running both in parallel keeps the data safe.

2. How does it work?

2.1 Some topologies

In both topologies there is always exactly one Master database serving writes, and one or more Slave databases. Depending on the topology, each web node can be configured to connect to a corresponding Slave DB, or a load balancer can be placed in front of the Slave pool so that the LB distributes connections across the Slave DBs according to its algorithm.

2.2 How it works

On the Master:

Connections from the web app to the Master DB open a Session_Thread when there is data to write. The Session_Thread writes the SQL statements to a binlog file (for example, with the statement-based or mixed binlog format). The binlog is stored in data_dir (configured in my.cnf), and parameters such as its maximum size and how many days it is kept on the server can be configured.

The Master DB opens a Dump_Thread and sends the binlog to the I/O_Thread whenever the I/O_Thread of a Slave DB requests data.

On the Slave:

Each Slave DB opens an I/O_Thread that connects to the Master DB over the network via TCP to request the binlog (MySQL 5.5 replication only supports a single thread, so each Slave DB opens exactly one connection to the Master DB; later versions 5.6 and 5.7 support more concurrent connections).

After the Dump_Thread sends the binlog to the I/O_Thread, the I/O_Thread reads it and writes it to the relay log.

At the same time, the Slave opens a SQL_Thread, which reads the events from the relay log and applies them to the Slave => the replication process is complete.

Logically, each Slave DB only receives data from the Master DB; every data update MUST be performed on the Master. In principle, writing data directly to a Slave DB breaks replication. In practice, however, you can write data on the Slave as long as nothing the Slave reads from the binlog and applies touches the data you just wrote; then no error occurs (more on this in later sections).

With MySQL 5.5 each slave has a single slave thread connecting to the Master; from version 5.6 onward, multiple slave threads can be configured so that applying the binlog on the slaves is faster.
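A minimal my.cnf sketch of that option (MySQL 5.6+; the worker count 4 is just an example, not part of the original guide):

# On the slave: apply relay-log events with several worker threads (5.6+)
slave_parallel_workers = 4
# On 5.7+, LOGICAL_CLOCK parallelism is usually more effective than
# the default per-database scheme:
# slave_parallel_type = LOGICAL_CLOCK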

3. Installation and configuration guide

Topology:

Master DB: 172.17.0.1

Slave DB: 172.17.0.2

On the Master DB

Configure my.cnf

event-scheduler = on
bind-address = 172.17.0.1
server-id = 1
log-bin
binlog-format=row
binlog-do-db=dwh_prod
binlog-ignore-db=mysql
binlog-ignore-db=test
sync_binlog=0
expire_logs_days=2

Create the replication user

GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'172.17.0.2' IDENTIFIED BY '[email protected]';
FLUSH PRIVILEGES;

Create a schema and some test data

CREATE SCHEMA dwh_prod CHARACTER SET utf8 COLLATE utf8_general_ci;

CREATE TABLE tb1 (
  id INT,
  data VARCHAR(100)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE tb2 (
  id INT,
  data VARCHAR(100)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

SHOW TABLES;

On the Slave DB

Configure my.cnf

event_scheduler=off
bind-address = 172.17.0.2
server-id=2
log-bin
binlog-format=row
binlog-do-db=dwh_prod
binlog-ignore-db=mysql
binlog-ignore-db=test
transaction-isolation=read-committed
sync_binlog=0
expire_logs_days=2

Create the replication and verify it

The rule when creating the replication is to LOCK all tables on the Master DB so the data does not change, then determine the binlog file and position, the two parameters configured on the Slave to mark where synchronization starts.

On the Master DB

FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
+----------------+----------+--------------+------------------+
| File           | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+----------------+----------+--------------+------------------+
| m01-bin.000001 |      827 | dwh_prod     | mysql,test       |
+----------------+----------+--------------+------------------+

The values of interest are:

m01-bin.000001

827

Next, dump the data from the Master DB and push it to the Slave DB (once the dump has finished, you can run UNLOCK TABLES; so the Master DB can resume normal operation).

mysqldump -uroot -p dwh_prod > dwh_prod_03072015.sql
rsync -avz -P -e'ssh' dwh_prod_03072015.sql [email protected]:/root/

On the Slave

mysql -uroot -p dwh_prod < /root/dwh_prod_03072015.sql

mysql> CHANGE MASTER TO MASTER_HOST='172.17.0.1', MASTER_USER='slave_user',
       MASTER_PASSWORD='[email protected]', MASTER_LOG_FILE='m01-bin.000001', MASTER_LOG_POS=827;
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G

Some fields have been omitted for readability:

*************************** 1. row ***************************
         Slave_IO_State: Waiting for master to send event
            Master_Host: 172.17.0.1
            Master_User: slave_user
        Master_Log_File: m01-bin.000001
    Read_Master_Log_Pos: 827
         Relay_Log_File: m02-relay-bin.000002
          Relay_Log_Pos: 251
  Relay_Master_Log_File: m01-bin.000001
             Last_Errno: 0
             Last_Error:
           Skip_Counter: 0
    Exec_Master_Log_Pos: 827
        Relay_Log_Space: 405
  Seconds_Behind_Master: 0
          Last_IO_Errno: 0
          Last_IO_Error:
         Last_SQL_Errno: 0
         Last_SQL_Error:
       Master_Server_Id: 1

The parameters of interest are:

Last_Error

Last_SQL_Error

Seconds_Behind_Master: 0

The first two parameters report errors when the Slave DB executes the events read from the relay log. Seconds_Behind_Master tells us how many seconds the Slave DB's data lags (delay, lag) behind the Master DB. Later sections discuss this replication lag in more detail.

4. Operating a MySQL Replication system

4.1 Testing the replication logic

In the normal state, the data on the Slave DB is in sync with the Master DB. Verify:

On the Master

mysql> USE dwh_prod
mysql> SHOW TABLES;
+--------------------+
| Tables_in_dwh_prod |
+--------------------+
| tb1                |
| tb2                |
| tb3                |
| tb4                |
+--------------------+

On the Slave

mysql> USE dwh_prod
mysql> SHOW TABLES;
+--------------------+
| Tables_in_dwh_prod |
+--------------------+
| tb1                |
| tb2                |
| tb3                |
| tb4                |
+--------------------+

mysql -p -e 'SHOW SLAVE STATUS\G' | grep -i 'error\|seconds'
           Last_Error:
Seconds_Behind_Master: 0
        Last_IO_Error:
       Last_SQL_Error:

Everything is fine: no errors and no lag.

Now suppose we create a table named tb00 on the Slave and check whether writing data to the Slave DB really breaks replication.

mysql> CREATE TABLE tb00 ( id INT, data VARCHAR(100) ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
mysql> SHOW TABLES;
+--------------------+
| Tables_in_dwh_prod |
+--------------------+
| tb00               |
| tb1                |
| tb2                |
| tb3                |
| tb4                |
+--------------------+
5 rows in set (0.00 sec)

Check the tables on the Master DB

mysql> SHOW TABLES;
+--------------------+
| Tables_in_dwh_prod |
+--------------------+
| tb1                |
| tb2                |
| tb3                |
| tb4                |
+--------------------+

And re-check the replication status

mysql -e ‘SHOW SLAVE STATUS\G’ | grep -i ‘error\|seconds’ Last_Error: Seconds_Behind_Master: 0 Last_IO_Error: Last_SQL_Error:

=> As we can see, the data on the Slave and the Master now clearly differ (the Slave has tb00 but the Master does not), yet the replication status is still perfectly fine.

Now let's try one more case: on the Master, create a table named tb6 and see what happens

mysql> CREATE TABLE tb6 ( id INT, data VARCHAR(100) ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
mysql> SHOW TABLES;
+--------------------+
| Tables_in_dwh_prod |
+--------------------+
| tb1                |
| tb2                |
| tb3                |
| tb4                |
| tb6                |
+--------------------+

Check the tables on the Slave DB

mysql> SHOW TABLES;
+--------------------+
| Tables_in_dwh_prod |
+--------------------+
| tb00               |
| tb1                |
| tb2                |
| tb3                |
| tb4                |
| tb6                |
+--------------------+

=> table tb6 has been replicated over from the Master; check the replication status

mysql -e ‘SHOW SLAVE STATUS\G’ | grep -i ‘error\|seconds’ Last_Error: Seconds_Behind_Master: 0 Last_IO_Error: Last_SQL_Error:

=> Everything is still fine. In other words, even though we wrote data to the Slave, as long as the queries executed on the Master do not touch the data newly written on the Slave, the replication status remains healthy.

Now one more experiment: on the Master, create a table named tb00, the same name as the table created earlier on the Slave, and re-check the replication status

Check the replication status on the Slave

mysql -e ‘SHOW SLAVE STATUS\G’ | grep -i ‘error\|seconds’ Last_Error: Error ‘Table ‘tb00′ already exists’ on query. Default database: ‘dwh_prod’. Query: ‘CREATE TABLE tb00 ( Seconds_Behind_Master: NULL Last_IO_Error: Last_SQL_Error: Error ‘Table ‘tb00′ already exists’ on query. Default database: ‘dwh_prod’. Query: ‘CREATE TABLE tb00 (

=> as we can see, the system reports an error because the Slave could not execute the CREATE TABLE tb00 statement pushed down from the Master (the table already existed)

Conclusion: Writing data to the Slave is possible, but sooner or later it risks breaking replication, especially with queries of the SELECT ... UPDATE kind. It is best to avoid writing data to the Slave.
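A common safeguard (a sketch; not part of the original walkthrough) is to make the Slave reject writes from ordinary clients altogether:

-- On the slave: reject writes from clients without the SUPER privilege
SET GLOBAL read_only = 1;
-- MySQL 5.7.8+ can also block SUPER users; replication threads are unaffected:
-- SET GLOBAL super_read_only = 1;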

4.2 Replication Lag

Replication lag is the delay of the Slave's data relative to the Master. When you deploy a MySQL Replication system, lag is a problem you will certainly face. You can only reduce the lag to an acceptable level; you cannot eliminate it. The reason is that synchronization is asynchronous: the Slave servers do not need to notify the Master when a transaction succeeds on the Slave, which preserves performance. (This differs from the synchronous mechanism, where a transaction only counts as successful once it is committed on the master server and the master has received confirmation from the slave server that the transaction was written and committed. That process guarantees consistency between the master and slave servers, but it can cut performance roughly in half due to network, bandwidth and location issues.)

Replication lag affects queries that read data immediately after writing it. For example:

An e-commerce site has an add-to-cart feature. After a product is added to the cart, its stock count is decremented. Two users try to buy the same product, which has a stock of 1. Both see the product displayed as IN STOCK on the website. One user buys the product and pays successfully. Because of replication lag (say 5s), the stock update marking the product as sold out has not yet reached the Slave. When the second user adds it to the cart and checks out, the data has by then caught up and the system returns an error saying payment failed because there is not enough stock => a poor user experience.

In such write-then-read-immediately cases, the query should be directed to the Master (this is why the Master can serve both writes and reads, not only writes).

4.3 MySQL load balancing and health checks

Of the two topologies above, with the second one we can use HAProxy as a load balancer in front of the MySQL instances.

The drawback of topology 1 is that if a MySQL instance lags too long, the server is overloaded, or, worst of all, the instance goes down, there is no way to detect the problem or remove that instance.

The drawback of topology 2 is an extra layer (HAProxy) in front of MySQL (more time, more layers to process), but the advantage is that you can configure health checks or remove an instance based on certain conditions.

5. Troubleshooting

6. Some notes

6.1 Server and hardware issues

Issues around CPU, RAM and disks (capacity, disk type, SSD vs HDD, disk read/write speed)

For a DB system, the hardware parameters that SHOULD get attention are:

CPU: the more cores the better; the faster the better

RAM: the more RAM the better

For disks:

Prefer RAID 5, 6 or 10

Prefer SSDs (enterprise-grade if possible); the higher the IOPS the better

Disk capacity should be at least 2x the size of the database (this is needed when dumping or backing up data to fix replication)

Unlike other workloads such as web or static content (which typically need neither many CPU cores nor large, fast disks), a database server needs all of the above

On AWS, keep these points in mind when choosing an instance type

6.2 Data size issues

Data size considerably affects the operation of a MySQL Replication system. With large data, the initial replication, or rebuilding after replication breaks, takes a very long time => the Slave cannot be used while replicating; it only becomes usable once Seconds_Behind_Master = 0.

Disk factors (SSD, read/write speed) also strongly affect importing data or applying the binlogs from the Master

Below is a description from a real deployment:

Raw data in /var/lib/mysql is 80-100 GB

The uncompressed dump is 18-30 GB

Compressed as tgz ~ 2-3 GB

Server: 24 cores, 32 GB RAM, Plextor M6 PRO SSDs (4x256 GB, RAID 10)

Dumping the data takes 1h-1h30

Time to sync the dump across servers (local, 1 Gb port) -> not recorded

Time to import the data -> not recorded

Seconds_Behind_Master after the import finishes -> not recorded

7. Failover

Beyond scaling there is another question: what happens if the master DB dies? There are some points you must understand when choosing master-slave replication:

Promoting a Slave to replace the Master is a manual process; there is no automatic switch to a slave without any impact on the system.

There will still be downtime if the master DB dies, but having slaves keeps that downtime as short as possible.

Back to the topology with 1 master and 2 slaves (call them S1 and S2): we need to answer what happens to the system if the master dies, and how to promote a slave to replace the master.

By default a Slave still has its own binlog, and that binlog belongs to the slave itself; it is not the same as the master's binlog (the master's binlog, once shipped to the slave, becomes the relay log). This means that if S1 is promoted to master, S2 is no longer in sync with S1 and we would have to rebuild S2.

To solve this, MySQL recommends enabling --skip-log-slave-updates on the slaves, which ensures:

The slave still has a binlog, but when it applies relay-log events (data updates coming from the master) it does not write them to its own binlog.

When the master dies and we want to promote S1 to master, we need to repoint S2's master to S1. As above, we must specify the binlog file and position; since S1 only starts generating a binlog after being switched to master, on S2 it is enough to point at S1's first binlog file and position. => this guarantees that S2 stays in sync with S1.
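A minimal sketch of those promotion steps (host names and the binlog file name are placeholders; as noted below, this assumes the slaves had fully caught up with the dead master):

-- On S1 (the slave being promoted):
STOP SLAVE;
RESET MASTER;       -- start a fresh binlog history as the new master
RESET SLAVE ALL;    -- discard the old replication coordinates (5.6+)

-- On S2: repoint replication to S1, starting at S1's first binlog position
STOP SLAVE;
CHANGE MASTER TO MASTER_HOST='s1.example.com', MASTER_USER='slave_user',
  MASTER_PASSWORD='...', MASTER_LOG_FILE='s1-bin.000001', MASTER_LOG_POS=4;
START SLAVE;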

Once the promotion is complete, we can update the client side with S1's address and finish the system maintenance. Note, however, that the whole process is manual and there is still downtime during the promotion.

However, the above only holds if the slaves were in sync with the master before it died, with second_behind_master = 0 .

https://dev.mysql.com/doc/refman/8.0/en/replication-solutions-switch.html

8. Semi-synchronous

One problem with asynchronous replication is that if you need to read data right after writing it, the data you read may be wrong because the slave has not yet applied the changes from the master (replication lag). There are two partial workarounds:

For write-then-read-immediately cases, read from the master.

Use the semi-synchronous mechanism to reduce replication lag.

Semi-synchronous is a hybrid between asynchronous and synchronous. With plain synchronous replication, the more slaves you add, the slower writes become, because every slave must commit and reply to the master. With semi-synchronous, the master considers a write successful once at least one slave has received the event the master sent and written it to its relay log. The difference is that not all slaves have to reply to the master, and the event does not have to be executed and committed on the slave; it only has to be received and written to the relay log.

As described above, a slave can still lack the data if its relay log is tampered with by a person, or if the server crashes before it manages to apply the relay log. Still, guaranteeing that the binlog event was received by a slave and written to disk reduces the delay, and the data race condition problem can be partly mitigated.
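A minimal sketch of enabling semi-synchronous replication with the standard MySQL plugins (plugin file names vary by platform; .so is shown here):

-- On the master
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL rpl_semi_sync_master_timeout = 1000;  -- ms; falls back to async after this

-- On each slave
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
STOP SLAVE IO_THREAD;
START SLAVE IO_THREAD;   -- reconnect so the slave registers as semi-sync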

The original article was published at https://xluffy.github.io/post/intro-mysql-replication

MySQL Master/Slave Load Balancing with JPA and Spring — Dragisa Krsmanovic

The MySQL Connector/J driver has a built-in feature for load balancing.

If you have a cluster of read/write MySQL servers, putting loadbalance: in the JDBC URL ensures both read and write operations are distributed across the servers:

jdbc:mysql:loadbalance://master1,master2,master3…/database?loadBalanceBlacklistTimeout=5000&loadBalanceConnectionGroup=cgroup&loadBalanceEnableJMX=true&autoReconnect=true&autoReconnectForPools=true

What we needed instead was for all write operations to go to the master server and for read-only operations to be distributed equally among multiple read-only slaves.

For that you need to:

1. Use the special JDBC driver: com.mysql.jdbc.ReplicationDriver
2. Set replication: in the URL:

jdbc:mysql:replication://master,slave1,slave2…/database?loadBalanceBlacklistTimeout=5000&loadBalanceConnectionGroup=ugc&loadBalanceEnableJMX=true&autoReconnect=true&autoReconnectForPools=true

After setting up our connection pool like this, all load still ended up going to our single read/write master server.

The reason is that, for the ReplicationDriver to know that queries can go to the read-only slaves, two conditions need to be met:

1. Auto commit needs to be turned off. (*)
2. The connection needs to be set to read-only.

(*) There is a workaround to allow auto commit: Connector/J load-balancing for auto-commit-enabled deployments

It turns out that even if the transaction is set to read-only, neither Spring nor JPA providers like Hibernate or EclipseLink will set the JDBC connection to readOnly.

To ensure the JDBC Connection is set to read-only, I created an annotation and a simple AOP interceptor.

Here is an example of the approach:
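The original listing is reconstructed here as a minimal sketch, assuming Spring AOP, a Hibernate-backed JPA EntityManager, and a hypothetical @ReadOnlyConnection marker annotation (the two types below would normally live in separate files):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.hibernate.Session;
import org.springframework.stereotype.Component;

// --- ReadOnlyConnection.java ---
// Hypothetical marker for service methods that should read from a slave.
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@interface ReadOnlyConnection {
}

// --- ReadOnlyConnectionInterceptor.java ---
// Flips the underlying JDBC connection to read-only around the method call,
// so the ReplicationDriver routes its statements to a slave.
@Aspect
@Component
class ReadOnlyConnectionInterceptor {

    @PersistenceContext
    private EntityManager entityManager;

    @Around("@annotation(ReadOnlyConnection)")
    public Object proceedReadOnly(ProceedingJoinPoint pjp) throws Throwable {
        Session session = entityManager.unwrap(Session.class);
        session.doWork(connection -> connection.setReadOnly(true));
        try {
            return pjp.proceed();
        } finally {
            session.doWork(connection -> connection.setReadOnly(false));
        }
    }
}

A read-only service method is then annotated with both @Transactional(readOnly = true) and @ReadOnlyConnection, satisfying the two conditions above so its queries land on one of the slaves.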
