Motto/theme: Maintenance and initial MySQL Fabric support
Note:
This is the current development series. All
features are at an early stage. Changes may happen at any time without
prior notice. Please, do not use this version in production environments.
The documentation may not reflect all changes yet.
Bug fixes
Won't fix: #66616 R/W split fails: QOS with mysqlnd_get_last_gtid with built-in MySQL GTID
This is not a bug in the plugin's implementation but a server-side feature
limitation that had not been considered and documented before. MySQL 5.6 built-in GTIDs cannot
be used to ensure session consistency when reading from slaves in all cases.
In the worst case, the plugin will not consider using the slaves and
will fall back to using the master. There will be no wrong results, but no benefit
from doing GTID checks either.
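For illustration, a minimal sketch of the affected usage pattern, assuming a plugin config section named "myapp" with GTID-based quality-of-service checks configured; credentials and table names are placeholders:
<?php
/* Write on the master, then demand session consistency for the read.
   "myapp" must name a mysqlnd_ms config section with GTID checks
   configured; credentials and table are placeholders. */
$link = new mysqli("myapp", "user", "password", "database");
$link->query("INSERT INTO orders(item) VALUES ('book')");

/* GTID associated with the write, as tracked by the plugin */
$gtid = mysqlnd_ms_get_last_gtid($link);

/* Only nodes that have replicated this GTID qualify for the read. With
   MySQL 5.6 built-in GTIDs the check cannot identify a suitable slave
   in all cases; the plugin then falls back to the master. */
if (!mysqlnd_ms_set_qos($link, MYSQLND_MS_QOS_CONSISTENCY_SESSION,
                        MYSQLND_MS_QOS_OPTION_GTID, $gtid)) {
    die("Failed to set quality of service");
}
$res = $link->query("SELECT item FROM orders");
?>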
Fixed #66064 - Random once load balancer ignoring weights
Due to a config parsing bug, random load balancing ignored node
weights if, and only if, the sticky flag was set (random once).
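A hypothetical config sketch of the affected setup; host names and the section name are placeholders:
<?php
/* Hypothetical config file, e.g. myapp.json; host names are placeholders:

   {
     "myapp": {
       "master": { "master_0": { "host": "master.example.com" } },
       "slave": {
         "slave_0": { "host": "slave0.example.com" },
         "slave_1": { "host": "slave1.example.com" }
       },
       "filters": {
         "random": {
           "sticky": "1",
           "weights": { "slave_0": 8, "slave_1": 2 }
         }
       }
     }
   }

   Before the fix, the weights were silently ignored whenever sticky
   (random once) was set, as above. */
$link = new mysqli("myapp", "user", "password", "database");
$res  = $link->query("SELECT 1 FROM DUAL"); /* weighted, sticky slave pick */
?>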
Fixed #65496 - Wrong check for slave delay
The quality of service filter erroneously ignored slaves
that lag zero (0) seconds if any maximum lag had been set.
Although such a slave was not lagging behind at all, it was excluded from the
load balancing list whenever a maximum age was set through the QoS filter.
This was caused by a wrong comparison operator in the filter's
source.
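A minimal sketch of the affected QoS setting, assuming a config section "myapp"; connection parameters are placeholders:
<?php
/* Accept reads from slaves lagging no more than 5 seconds; connection
   parameters are placeholders. Before the fix, a slave reporting
   exactly 0 seconds of lag was wrongly excluded as well. */
$link = new mysqli("myapp", "user", "password", "database");
if (!mysqlnd_ms_set_qos($link, MYSQLND_MS_QOS_CONSISTENCY_EVENTUAL,
                        MYSQLND_MS_QOS_OPTION_AGE, 5)) {
    die("Failed to set quality of service");
}
$res = $link->query("SELECT item FROM orders");
?>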
Fixed #65408 - Compile failure with -Werror=format-security
Feature changes
Introduced an internal connection pool. When using Fabric and switching
from shard group A to shard group B, we are replacing the entire list of
masters and slaves. This troubles the connection state alignment logic and
some of the filters, which cache information on the master and slave lists.
The new internal connection pool abstraction allows us to inform the filters
of changes, hence they can update their caches.
Later on, the pool can also be used to reduce connection overhead. Assume
you are switching from one shard group to another and back again. Whenever
the switch is done, the pool's active server (and connection) lists are
replaced. However, connections that are no longer used are not necessarily
closed immediately; they can be kept in the pool for later reuse.
Please note, the connection pool is internal at this point. There are some
new statistics to monitor it. However, you cannot yet configure pool size
or behaviour.
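A minimal sketch for inspecting the new counters through the existing statistics call; the pool_ name prefix is an assumption:
<?php
/* Requires mysqlnd_ms.collect_statistics=1. The pool_ name prefix of
   the new counters is an assumption. */
$stats = mysqlnd_ms_get_stats();
foreach ($stats as $name => $value) {
    if (0 === strpos($name, "pool_")) {
        printf("%s = %s\n", $name, $value);
    }
}
?>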
Added a basic distributed transaction abstraction.
XA transactions have always been supported through standard SQL calls.
This is inconvenient, as the XA participants must be managed manually.
PECL/mysqlnd_ms introduces API calls to control XA transactions among MySQL servers.
When using the new functions, PECL/mysqlnd_ms acts as a transaction coordinator.
After starting a distributed transaction, the plugin tracks all servers
involved until the transaction is ended and issues appropriate SQL statements
on the XA participants.
This is useful, for example, when using Fabric and sharding. When using
Fabric, the actual shard servers involved in a business transaction may not be known
in advance. Thus, manually controlling a transaction that spans multiple shards becomes
difficult.
Please, be warned about current limitations.
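A minimal sketch of the new coordination calls; "myapp", credentials, tables, and the global transaction id (gtrid) are placeholders:
<?php
/* "myapp", credentials, tables, and the global transaction id are
   placeholders; error handling is reduced to a minimum. */
$link  = new mysqli("myapp", "user", "password", "database");
$gtrid = 1234;

if (!mysqlnd_ms_xa_begin($link, $gtrid)) {
    die("Failed to begin distributed transaction");
}

/* The plugin now tracks every server that becomes a participant and
   issues the appropriate XA statements on each of them. */
$link->query("UPDATE accounts SET balance = balance - 10 WHERE id = 1");
$link->query("UPDATE accounts SET balance = balance + 10 WHERE id = 2");

if (!mysqlnd_ms_xa_commit($link, $gtrid)) {
    mysqlnd_ms_xa_rollback($link, $gtrid);
}
?>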
Introduced an automatic retry loop for transient errors and a
corresponding statistic to count the number of implicit retries.
Some distributed database clusters use transient errors to hint that a
client should retry its operation in a bit. Most often, the client is
then supposed to halt execution (sleep) for a short moment before
retrying the desired operation. Immediately failing over to another node
is not necessary in response to the error. Instead, a retry loop can be
performed. This is a common situation when using MySQL Cluster.
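A hypothetical config sketch enabling the retry loop; section and host names are placeholders, the key names are assumptions, and error code 1297 (ER_GET_TEMPORARY_ERRMSG) is a typical MySQL Cluster transient error:
<?php
/* Hypothetical config file, e.g. myapp.json:

   {
     "myapp": {
       "master": { "master_0": { "host": "master.example.com" } },
       "slave":  { "slave_0":  { "host": "slave0.example.com" } },
       "transient_error": {
         "mysql_error_codes": [ 1297 ],
         "max_retries": 2,
         "usleep_retry": 100
       }
     }
   }

   A statement failing with error 1297 is then silently retried up to
   two times before the error is forwarded to the application. */
$link = new mysqli("myapp", "user", "password", "database");
$link->query("SELECT id FROM test");

/* Counting the implicit retries; the stat name is an assumption. */
$stats = mysqlnd_ms_get_stats();
printf("Implicit retries: %s\n", $stats['transient_error_retries']);
?>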
Introduced most basic support
for the MySQL Fabric High Availability and sharding framework.
Please, consider this pre-alpha quality. Both the
server-side framework and the client-side code are supposed to work flawlessly
for the MySQL Fabric quickstart examples only. However, testing has not been
performed to the level of prior plugin alpha releases.
Both sides are moving targets; API changes may happen at any
time without prior warning.
As this is work in progress, the manual may not yet reflect all
feature limitations and known bugs.
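A minimal sketch of shard group switching, assuming a config section "myapp" whose fabric entry points at a Fabric host; table and shard key are placeholders:
<?php
/* "myapp" is assumed to contain a fabric entry such as
   { "fabric": { "hosts": [ { "host": "127.0.0.1", "port": 8080 } ] } };
   table and shard key are placeholders. */
$link = new mysqli("myapp", "user", "password", "database");

/* Ask Fabric which shard group holds key 10 of test.fabrictest and
   replace the active server lists with that group's servers. */
mysqlnd_ms_fabric_select_shard($link, "test.fabrictest", 10);
$res = $link->query("SELECT * FROM fabrictest WHERE id = 10");
?>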
New statistics to monitor the Fabric XML RPC call sharding.lookup_servers:
fabric_sharding_lookup_servers_success,
fabric_sharding_lookup_servers_failure,
fabric_sharding_lookup_servers_time_total,
fabric_sharding_lookup_servers_bytes_total,
fabric_sharding_lookup_servers_xml_failure.
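A minimal sketch for reading some of these counters; mysqlnd_ms.collect_statistics must be enabled:
<?php
/* Requires mysqlnd_ms.collect_statistics=1. */
$stats = mysqlnd_ms_get_stats();
printf("sharding.lookup_servers: %s succeeded, %s failed, %s XML errors\n",
    $stats['fabric_sharding_lookup_servers_success'],
    $stats['fabric_sharding_lookup_servers_failure'],
    $stats['fabric_sharding_lookup_servers_xml_failure']);
?>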