MySQL

Webinar Thursday June 22, 2017: Deploying MySQL in Production

MySQL Performance Blog - Tue, 2017-06-20 22:42

Join Percona’s Senior Operations Engineer, Daniel Kowalewski as he presents Deploying MySQL in Production on Thursday, June 22, 2017 at 11:00 am PDT / 2:00 pm EDT (UTC-7).

Register Now

MySQL is famous for being a database you can install and get running in less than five minutes for development. But normally you want to run MySQL in production, and at scale, and that requires some planning and knowledge. So why not learn the best practices around installation, configuration, deployment and backup?

This webinar is a soup-to-nuts talk that will have you going from zero to hero in no time. It includes discussion of the best practices for installation, configuration, taking backups, monitoring, etc.

Register for the webinar here.

Daniel Kowalewski, Senior Technical Operations Engineer

Daniel has been designing and deploying solutions around MySQL for over ten years. He lives for those magic moments where response time drops by 90%, and loves adding more “nines” to everything.

Categories: MySQL

The MySQL High Availability Landscape in 2017 (The Elders)

MySQL Performance Blog - Tue, 2017-06-20 22:36

In this blog, we’ll look at different MySQL high availability options.

The dynamic MySQL ecosystem is rapidly evolving, with many technologies built around MySQL. This is especially true for the technologies involved with the high availability (HA) aspects of MySQL. When I joined Percona back in 2009, some of these HA technologies were very popular – but they have since been almost forgotten. During the same interval, new technologies have emerged. In order to give some perspective to the reader, and hopefully help them make better choices, I’ll review the MySQL HA landscape as it is in 2017. This review will be in three parts. The first part (this post) will cover the technologies that have been around for a long time: the elders. The second part will focus on the technologies that are very popular today: the adults. Finally, the last part will try to extrapolate which technologies could become popular in the upcoming years: the babies.

A quick disclaimer: I am reporting on the technologies I see the most. There are likely many other solutions not covered here, but I can’t talk about technologies I have barely or never used. Apart from the RDS-related technologies, all the technologies covered are open-source. The target audience for this post is people relatively new to MySQL.

The Elders

Let’s define the technologies in the elders group. These are technologies that anyone involved with MySQL for the last ten years is sure to be aware of. I could have called this group the “classics”. I include the following technologies in this group:

  • Replication
  • Shared storage
  • NDB cluster

Let’s review these technologies in the following sections.

Replication

[Figure: simple replication topology]

MySQL replication is very well known. It is one of the main features behind the wide adoption of MySQL. Replication gets used almost everywhere. The reasons for that are numerous:

  • Replication is simple to set up (see the sketch after this list). There are tons of how-to guides and scripts available to add a slave to a MySQL server. With Amazon RDS, adding a slave is just a few clicks.
  • Slaves allow you to easily scale reads. The slaves are accessible and can be used for reads. This is the most common way of scaling up a MySQL database.
  • Slaves have little impact on the master. Apart from the added network traffic, the presence of slaves does not impact the master performance significantly.
  • It is well known. No surprises here.
  • Used for failover. If your master dies, promote a slave and use it as your new master.
  • Used for backups. You don’t want to overload your master with backups, so run them off a slave.
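
As a small illustration of how simple this is, here is a minimal sketch of the classic way to attach a slave, assuming a replication user already exists on the master (the host, credentials and binlog coordinates below are placeholders, not values from any particular setup):

-- On the slave, point it at the master using coordinates taken
-- from SHOW MASTER STATUS on the master (all values here are placeholders).
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'repl_password',
  MASTER_LOG_FILE = 'mysql-bin.000042',
  MASTER_LOG_POS = 154;
START SLAVE;

-- Check that both replication threads are running:
SHOW SLAVE STATUS\G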

Of course, replication also has some issues:

  • Replication can lag. Replication used to be single-threaded, which means a master with a concurrent load could easily outpace a slave. MySQL 5.6 and MariaDB 10.0 introduced some parallelism to the slave, and newer versions have improved it further, to the point where today’s slaves are many times faster than they were.
  • Slaves can diverge. When you modify data on the master, the slave must perform the exact same update. That seems easy, but there are many ways an update can be non-deterministic with statement-based replication. Many of these issues have been fixed, and the introduction of row-based replication was another big step forward. Still, if you write directly to a slave you are asking for trouble. There is a read_only setting, but it is ignored for MySQL users with the “SUPER” privilege. That’s why there is now the “super_read_only” setting (see the sketch after this list). Tools like pt-table-checksum and pt-table-sync from the Percona Toolkit exist to detect and repair divergence.
  • Replication can impact the master. I wrote above that the presence of slaves does not affect the master, but the logging of changes is more problematic. The most common issue is the InnoDB table-level locking for auto_increment values with statement-based replication: only one thread can insert new rows at a time. You can avoid this issue with row-based replication and properly configured settings.
  • Data gets lost. Replication is asynchronous, which means the master will reply “done” after a commit statement even though the slaves have not yet received the update. Some transactions can get lost if the master crashes.
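
As a minimal sketch of the read_only/super_read_only safeguard mentioned above, assuming MySQL 5.7 or Percona Server (where super_read_only is available), this is how you could lock a slave against accidental writes:

-- Block writes from regular users on the slave...
SET GLOBAL read_only = ON;

-- ...and also from users holding the SUPER privilege.
SET GLOBAL super_read_only = ON;

-- Verify both settings:
SELECT @@global.read_only, @@global.super_read_only;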

Although replication is an old technology, a lot of work has been done on it. It is miles away from the replication implementation of 5.0.x. Here’s a list, likely incomplete, of the evolution of replication:

  • Row-based replication (since 5.1). The binary internal representation of the rows is sent instead of the SQL statements. This makes replication more robust against slave divergence.
  • Global transaction ID (since 5.6). Transactions are uniquely identified. Replication can be set up without knowing the binlog file and offset (see the sketch after this list).
  • Checksum (since 5.6). Binlog events have checksum values to validate their integrity.
  • Semi-sync replication (since 5.5). An addition to the replication protocol to make the master aware of the reception of events by the slaves. This helps to avoid losing data when a master crashes.
  • Multi-source replication (since 5.7). Allows a slave to have more than one master.
  • Multi-threaded replication (since 5.6). Allows a slave to use multiple threads. This helps to limit the slave lag.
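
For example, with GTIDs enabled on both servers, a slave can be pointed at its master without knowing any binlog coordinates. A minimal sketch (the host and credentials are placeholders):

-- Requires gtid_mode = ON and enforce_gtid_consistency = ON on master and slave.
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'repl_password',
  MASTER_AUTO_POSITION = 1;
START SLAVE;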

Managing replication is a tedious job. The community has written many tools to manage replication:

  • MMM. An old Perl tool that used to be quite popular, but had many issues. Now rarely used.
  • MHA. The most popular tool to manage replication. It excels at reconfiguring replication without losing data, and does a decent job at handling failover. It is also simple. No wonder it is popular.
  • PRM. A Pacemaker-based solution developed to replace MMM. It’s quite good at failover, but not as good as MHA at reconfiguring replication. It’s also quite complex, thanks to Pacemaker. Not used much.
  • Orchestrator. The new cool tool. It can manage complex topologies and has a nice web-based interface to monitor and control the topology.

Shared Storage

[Figure: simple shared storage topology]

Back when I was working for MySQL ten years ago, shared storage HA setups were very common. A shared storage HA cluster shares one copy of the database files between two servers: one server is active, the other one is passive. In order to be shared, the database files reside on a device that can be mounted by both servers. The device can be physical (like a SAN) or logical (like a Linux DRBD device). On top of that, you need a cluster manager (like Pacemaker) to handle the resources and failovers. This solution is very popular because it allows for failover without losing any transactions.

The main drawback of this setup is the need for an idle standby server. The standby server cannot have any other assigned duties since it must always be ready to take over the MySQL server. A shared storage solution is also obviously not resilient to file-level corruption (but that situation is exceptional). Finally, it doesn’t play well with a cloud-based environment.

Today, newly-deployed shared storage HA setups are rare. The only ones I encountered over the last year were either old implementations needing support, or new setups that were deployed because of existing corporate technology stacks. That should tell you something about the technology’s loss of popularity.

NDB Cluster

[Figure: a simple NDB Cluster topology]

An NDB Cluster is a distributed clustering solution that has been around for a long time. I personally started working with this technology back in 2008. An NDB Cluster has three types of nodes: SQL, management and data. A full HA cluster requires a minimum of four nodes.

An NDB Cluster is not a general purpose database due to its distributed nature. For suitable workloads, it is extraordinarily good. For unsuitable workloads, it is miserable. A suitable workload for an NDB Cluster has high concurrency and a high rate of small, primary key-oriented transactions. Reaching one million trx/s on an NDB Cluster is nothing exceptional.

At the other end of the spectrum, a poor workload for an NDB Cluster is a single-threaded report query on a star-like schema. I have seen some extreme cases where just the network time of a reporting query amounted to more than 20 minutes.

Although NDB Clusters have improved, and are still improving, their usage has been pushed toward niche-type applications. Overall, the technology is losing ground and is now mostly used by telcos and for online gaming applications.

Categories: MySQL

Upcoming HA Webinar Wed 6/21: Percona XtraDB Cluster, Galera Cluster, MySQL Group Replication

MySQL Performance Blog - Mon, 2017-06-19 18:53

Join Percona’s MySQL Practice Manager Kenny Gryp and QA Engineer Ramesh Sivaraman as they present a high availability webinar on Percona XtraDB Cluster, Galera Cluster and MySQL Group Replication on Wednesday, June 21, 2017 at 10:00 am PDT / 1:00 pm EDT (UTC-7).

Register Now

What are the implementation differences between Percona XtraDB Cluster 5.7, Galera Cluster 5.7 and MySQL Group Replication?

  • How do they work?
  • How do they behave differently?
  • Do these methods have any major issues?

This webinar will describe the differences and shed some light on how QA is done for each of the different technologies.

Register for the webinar here.

Ramesh Sivaraman, QA Engineer

Ramesh joined the Percona QA Team in March 2014. He has almost six years of experience in database administration and, before joining Percona, provided MySQL database support to various service- and product-based internet companies. Ramesh’s professional interests include writing shell/Perl scripts to automate routine tasks, and new technology. Ramesh lives in Kerala, the southern part of India, close to his family.

Kenny Gryp, MySQL Practice Manager

Kenny is currently MySQL Practice Manager at Percona.

Categories: MySQL

MariaDB Server 10.2 GA Release Overview

MySQL Performance Blog - Mon, 2017-06-19 18:36

This blog post looks at the recent MariaDB Server 10.2 GA release.

Congratulations to the MariaDB Foundation for releasing a generally available (GA) stable version of MariaDB Server 10.2! We’ll definitely spend the next few weeks talking about MariaDB Server 10.2, but here’s a quick overview in the meantime. Keep in mind that when thinking about compatibility, this is meant to be the equivalent of MySQL 5.7 (GA: October 21, 2015, with Percona Server for MySQL 5.7 GA available February 23, 2016).

Some of the highlights include:

  • Window functions – this is the first release in the MySQL ecosystem that includes Window functions and Recursive Common Table Expressions (a short sketch of some of these features follows this list). At the time of this writing, MariaDB hasn’t completed the documentation. It is worth noting that the implementation of Window functions in MariaDB Server 10.2 differs from what you see in MariaDB ColumnStore.
  • JSON functions – Many JSON functions to query, update, index and validate JSON. It’s worth noting that MariaDB Server 10.2 does not include a JSON data type (as MySQL 5.7 does). This means you can’t do CREATE TABLE t1 (jdoc JSON) – instead you need to use a VARCHAR or TEXT column. There are also other differences that produce different result sets, and seemingly no column path operator.
  • There is also support for GeoJSON functionality, but when we tried ST_AsGeoJSON (yes, the documentation needs work), we noticed that the output can differ from what MySQL 5.7 produces.
  • MyRocks – MariaDB added the hot new storage engine MyRocks as an alpha. You will have to install the MyRocks engine package separately. It isn’t fully merged yet. Watch the umbrella task MDEV-9658.
  • SHOW CREATE USER – A new SHOW CREATE USER statement allows you to look at user limitations, as you can now limit users to a maximum number of queries, updates and connections per hour, as well as a maximum number of connections permitted by the user (see setting account resource limits for the MySQL 5.7 equivalent). You’ll want to read the updated documentation around CREATE USER. Don’t be surprised when you see something like “ERROR 1226 (42000): User ‘foo’ has exceeded the ‘max_user_connections’ resource (current value: 1)”. This is also an extension to ALTER USER.
  • Flashback – binary log based rollback, aka flashback, can roll back tables and databases to an older snapshot. This should help when the DBA or a user makes an error. This tool works well as long as the change was made by a DML statement. This feature came from Alibaba’s AliSQL tree.
  • Time delayed replication – new in MariaDB Server 10.2.
  • OpenSSL 1.1 – there is now support for OpenSSL 1.1 and LibreSSL.
  • MariaDB Connector/C – most importantly, MariaDB Connector/C replaces libmysql (see: MDEV-9055). This should be API and ABI compatible, but naturally there are some teething problems (see: MDEV-12950).
  • Amazon Key Management plugin – from a key management standpoint, the Amazon Key Management plugin is now available to use for encryption. It’s compiled and available as a package. Previously, you had to compile it yourself.

Some of the important things to take note of are:

  • As of this release, MariaDB now ships with InnoDB as the default storage engine (as opposed to Percona XtraDB). This means that from here on out, the improvements and fixes to Percona XtraDB won’t necessarily be available in MariaDB. This also means that Percona XtraDB parameters might get ignored (as reported in MDEV-12472).
  • In MySQL 5.6+, you can use SHA-256 pluggable authentication. However, this feature is still not implemented in MariaDB Server 10.2 (see: MDEV-9804). You can, however, use the ed25519 authentication plugin as a replacement.
  • When it comes to replication, MySQL 5.7 defaults to row-based replication. MariaDB Server 10.2 defaults to mixed-mode replication (see the discussion around this at MDEV-7635).
  • It is worth noting that in order to make MariaDB Server more “Oracle compatible,” DECIMAL now goes up to 38 decimals instead of 30 decimals. MDEV-10138 tells you what happens when you migrate from a long decimal to a default decimal type install (i.e., if you’re moving to another variant in the MySQL ecosystem).
  • If you’re familiar with how MySQL 5.7 manages passwords and a new install, the MariaDB Server 10.2 method hasn’t changed.

All in all, this release took a little over a year to make (Alpha was 18 April 2016, GA was 23 May 2017). It is extremely important to read the release notes and the changelogs of each and every release, as MariaDB Server diverges from MySQL quite a bit. At Percona, we will monitor Jira closely to ensure that you always stay informed of the latest changes.

Categories: MySQL

Peter Zaitsev’s Speaking Schedule: Percona University Belgium / PG Day / Meetups

MySQL Performance Blog - Fri, 2017-06-16 15:05

This blog shows Peter Zaitsev’s speaking schedule for this summer.

Summer 2017 Speaking Engagements

This week I spoke at the DB Tech Showcase OSS conference in Japan and am now heading to Europe. I have a busy schedule in June and early July, but there are events and places where we can cross paths and have a quick conversation. Let’s meet at these events if you need anything from Percona (or me personally). 

Below is a full list of places I’ll be at this summer:

Amsterdam, Netherlands

On June 20 I am speaking at the In-Memory Computing Summit 2017 with Denis Magda (Product Manager, Gridgain Systems). Our talk “Accelerate MySQL® for Demanding OLAP and OLTP Use Cases with Apache® Ignite™” starts at 2:35 pm.

On the same day in Amsterdam, Denis and I will speak at the local MySQL User Group meetup. I will share some how-tos for MySQL monitoring with Percona Monitoring and Management (PMM), along with a PMM demo.

Ghent, Belgium

On June 22 we are organizing a Percona University event in Ghent, Belgium, which is a widely known tech hub in the region. I will give several talks there on MySQL, MongoDB and PMM monitoring. Dimitri Vanoverbeke from Percona will discuss MySQL in the Cloud. We have also invited guest speakers: Frederic Descamps from Oracle, and Julien Pivotto from Inuits.

Percona University technical events are 100% free to attend, and so far we are getting very positive attendee feedback on them. To check the full agenda for the Belgium edition, and to register, please use this link.

St. Petersburg, Russia

Percona is sponsoring PG Day’17 Russia, the PostgreSQL conference. This year they added a track on open source databases (and I was happy to be their Committee member for the OSDB track). The conference starts on July 5, and on that day I will give a tutorial on InnoDB Architecture and Performance Optimization. Sveta Smirnova will also present a tutorial on MySQL Performance Troubleshooting.

On July 6-7, you can expect four more talks given by Perconians at PG Day. We invite you to stop by our booth (“Percona”) and ask us any tough questions you might have.

Moscow, Russia

On July 11 I will speak at a Moscow MySQL User Group meetup at the Mail.Ru Group office. While we’re still locking down the agenda, we always have a great selection of speakers at the MMUG meetups. Make sure you don’t miss this gathering!

Thank you, and I hope to see many of you at these events.

Categories: MySQL

Three Methods of Installing Percona Monitoring and Management

MySQL Performance Blog - Thu, 2017-06-15 18:54

In this blog post, we’ll look at three different methods for installing Percona Monitoring and Management (PMM).

Percona offers multiple methods of installing Percona Monitoring and Management, depending on your environment and scale. I’ll also share comments on which installation methods we’ve decided to forego for now. Let’s begin by reviewing the three supported methods:

  1. Virtual Appliance
  2. Amazon Machine Image
  3. Docker
Virtual Appliance

We ship an OVF/OVA method to make installation as simple as possible, with the least amount of effort required and at the lowest cost to you. You can leverage the investment in your virtualization deployment platform. OVF is an open standard for packaging and distributing virtual appliances, designed to be run in virtual machines.

Using OVA with VirtualBox as a first step is common in order to quickly play with a working PMM system, and get right to adding clients and observing activity within your own environment against your MySQL and MongoDB instances. But you can also use the OVA file for enterprise deployments. It is a flexible file format that can be imported into other popular hypervisor systems such as VMware, Red Hat Virtualization, XenServer, Microsoft System Centre Virtual Machine Manager and others.

We’d love to hear your feedback on this installation method!

AWS AMI

We also have an AWS AMI in order to provide easy scaling of PMM Server in AWS, so that you can deploy onto any instance size required for your monitoring instance. Depending on the AWS region you’re in, you’ll need to choose from the appropriate AMI Instance ID. Soon we’ll be moving to the AWS Marketplace for even easier deployment. When this is implemented, you will no longer need to clone an existing AMI ID.

Docker

Docker is our most common production deployment method. It is easy (three commands) and scalable (tuning parameters are passed to docker run on the command line). While we recognize that Docker is still a relatively new deployment system for many users, it is dramatically gaining adoption. It is also where Percona is investing the bulk of our development efforts. We deploy PMM Server as two Docker containers: one for storing the data that persists across restarts/upgrades, and the other for running the actual PMM Server binaries (Grafana, Prometheus, consul, Orchestrator, QAN, etc.).

Where are the RPM/DEB/tar.gz packages?!

A common question I hear is why doesn’t Percona support binary-based installation?

We hear you: RPM/DEB/tar.gz methods are commonly used today for many of your own applications. Percona is striving for simplicity in our deployment of PMM Server, and we spend considerable development and QA effort validating the specific versions of Grafana/Prometheus/QAN/consul/Orchestrator all work seamlessly together.

Percona wants to ensure OS compatibility and long-term support of PMM, and doing binary distribution “right” can quickly get expensive to build and QA across all the popular Linux distributions available today. We’re in no way against binary distributions. For example, see our list of the nine supported platforms for which we provide bug fix support.

Percona decided to focus our development efforts on stability and features, and less on the number of supported platforms. Hence the hyper-focus on Docker. We don’t have any current plans to move to a binary deployment method for PMM, but we are always open to hearing your feedback. If there is considerable interest, then please let me know via the comments below. We’ll take these thoughts into consideration for PMM planning in the second half of 2017.

Which other methods of installing Percona Monitoring and Management would you like to see?

Categories: MySQL

MySQL Triggers and Updatable Views

MySQL Performance Blog - Wed, 2017-06-14 18:54

In this post we’ll review how MySQL triggers can affect queries.

Contrary to what the documentation states, we can activate triggers even while operating on views:

https://dev.mysql.com/doc/refman/5.7/en/triggers.html

Important: MySQL triggers activate only for changes made to tables by SQL statements. They do not activate for changes in views, nor by changes to tables made by APIs that do not transmit SQL statements to the MySQL server.

Be on the lookout if you use and depend on triggers, since this is not the case for updatable views: changes made through an updatable view do activate the triggers defined on the underlying base table (see the sketch below). We have reported a documentation bug for this, but figured it wouldn’t hurt to mention it in a short blog post, too.
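
Here is a minimal sketch of the behavior we observed (the table, view and trigger names are made up):

CREATE TABLE t (id INT PRIMARY KEY, val INT);
CREATE TABLE audit_log (id INT, changed_at DATETIME);

-- An updatable view over the base table:
CREATE VIEW v AS SELECT id, val FROM t;

-- A trigger defined on the base table:
CREATE TRIGGER t_ai AFTER INSERT ON t
FOR EACH ROW INSERT INTO audit_log VALUES (NEW.id, NOW());

-- Inserting through the view still fires the trigger on the base table:
INSERT INTO v VALUES (1, 100);
SELECT * FROM audit_log;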

Categories: MySQL

Webinar Thursday, June 15, 2017: Demystifying Postgres Logical Replication

MySQL Performance Blog - Tue, 2017-06-13 19:21

Join Percona’s Senior Technical Services Engineer Emanuel Calvo as he presents Demystifying Postgres Logical Replication on Thursday, June 15, 2017 at 7 am PDT / 10 am EDT (UTC-7).

Register Now

The Postgres logical decoding feature was added in version 9.4, and thankfully it is continuously improving due to the vibrant open source community. In this webinar, we are going to walk through its concepts, usage and some of the new things coming up in future releases.

Logical decoding is one of the features under the BDR implementation, allowing bidirectional streams of data between Postgres instances. It also allows you to stream data outside Postgres into many other data systems.

Register for the webinar here.

Emanuel Calvo, Percona Sr. Technical Services Engineer

Emanuel has worked with MySQL for more than eight years. He is originally from Argentina, but has also lived and worked in Spain and other Latin American countries. He lectures and presents at universities and local events. Emanuel currently works as a Sr. Technical Services Engineer at Percona, focusing primarily on MySQL. His professional background includes experience at telecommunication companies, educational institutions and data warehousing solutions. In his career, he has worked as a developer, SysAdmin and DBA in companies like Pythian, Blackbird.io/PalominoDB, Siemens IT Solutions, Correo Argentino (Argentinian Postal Services), Globant-EA, SIU – Government Educational Institution and Aedgency among others. As a community member he has lectured and given talks in Argentina, Brazil, United States, Paraguay, Spain and Belgium as well as written several technical papers.

Categories: MySQL

Q & A: MySQL In the Cloud – Migration, Best Practices, High Availability, Scaling

MySQL Performance Blog - Fri, 2017-06-09 20:36

In this blog, we will provide answers to the Q & A for the MySQL In the Cloud: Migration, Best Practices, High Availability, Scaling webinar.

First, we want to thank everybody for attending the June 7, 2017 webinar. The recording and slides for the webinar are available here. Below is the list of your questions that we were unable to answer during the webinar:

How does Percona XtraDB cluster work with AWS for MySQL clustering?

Percona XtraDB Cluster works especially well in cloud environments, including Amazon EC2. Since Percona XtraDB Cluster only requires one network round trip per write transaction commit, and keeps all reads local, it can be deployed as a high-performance multi-AZ and even multi-region cluster. The fact that each Percona XtraDB Cluster node contains all the data allows it to avoid reliance on EBS storage. You can run Percona XtraDB Cluster on NVMe-based i3 EC2 instances to achieve high performance even with very IO-intensive workloads. Automatic provisioning and cluster self-healing allow you to easily scale the cluster. We have a simple tutorial on how to deploy Percona XtraDB Cluster on AWS – check it out here.

How do you approach master-master model? Are there enough reasons to use the model to implement multi-site scaling?

There are two distinct multi-master modes in existence. A synchronous Master-Master solution, like the one offered by Percona XtraDB Cluster (virtually synchronous to be exact), guarantees there are no data conflicts as you connect to the nodes located at different sites. The downside of this model is that writes can be expensive. As such, it works well in environments with low latency between the different sites, or when high latency for updates can be tolerated. Percona XtraDB Cluster is greatly optimized in that it requires only one network roundtrip to complete a commit transaction. This significantly reduces the added latency compared to many other solutions.

In contrast, asynchronous Master-Master means you can perform writes locally, without waiting on a network round trip. It comes with the downside of possible data conflicts. In MySQL, it can be implemented using MySQL Replication. MySQL Replication can only detect conflicts, however: it stops when it encounters one, and has no good built-in conflict resolution. Ensuring conflicts do not happen on the application level is hard and error prone, and is only recommended in rare cases. Most applications out there do not use Active Master-Master, but rather design an architecture where each database replication set operates with only a single writable node.

Do the Percona tools work in the cloud, like in Amazon Aurora?

We try to make Percona software work in the cloud when it makes sense. For example, Percona Toolkit and Percona Monitoring and Management support Amazon RDS and Amazon Aurora. Percona XtraBackup does not, as it requires physical access to the database files (Amazon RDS and Aurora don’t provide that). Having said that, Amazon recently updated its Aurora migration documentation to include the use of XtraBackup: Amazon Aurora supports backups taken by Percona XtraBackup as a way to import data.

What is the fastest way to verify and validate backups created by XtraBackup for databases around 2-3TB?

In the big picture, you test backups by doing some sort of restore and validation. This can be done manually, but is much better if automated. There are three levels of such validation:

  • Basic Validation. Run --apply-log and ensure it completes successfully. Start the MySQL instance and run some basic queries to ensure it works. Running some queries to check that recent data is present is often a good idea.
  • Consistency Validation. Additionally, run CHECK TABLE on all tables to ensure there is no corruption (see the sketch after this list). This way, table and index data structures are validated.
  • Full Validation. Restore the backup and connect the restored backup as a MySQL slave (possibly to one of the existing slaves). Let it catch up and then run pt-table-checksum to validate consistency and ensure that the data in the backup matches what is in the source.
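
For the consistency validation step, a sketch of what this might look like on the restored instance (the database and table names are placeholders):

-- Validate table and index structures on the restored copy:
CHECK TABLE mydb.orders, mydb.customers;

-- A quick sanity check that recent data made it into the backup:
SELECT MAX(created_at) FROM mydb.orders;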

Running a CHECK TABLE on databases on AWS IO-optimized instances takes up to eight hours. Any other suggestions on how to replace CHECK TABLE in validation?

Without knowing the table size, it is hard for me to assess whether eight hours is reasonable for your environment. However, generally speaking you should not run a Full Validation on every backup. Full Validation first and foremost validates the backup and restore pipeline. If you’re not seeing issues, doing it once per month is plenty. You want to do lighter checks on a daily and weekly basis. 

What approach would you recommend for a data warehouse needing about 80,000 IOPS, currently on FusionIO bare metal? Which cloud solution would be my best bet?

This is a complicated question. To answer it properly requires more information: we need to know what type of operations your database performs. Working with a Percona Consultant to do an A&D for your environment would give you the best answer. In general though, EBS (even with a large number of provisioned IOPS) would not match FusionIO in IO request latency. I3 high-IO instances with NVMe storage are a closer match. If budget is not a concern, you can look into X1 instances. These can have up to 2TB of memory and often allow getting all (or a large portion) of the database in memory for even higher performance.

Thanks for attending the MySQL In the Cloud: Migration, Best Practices, High Availability, Scaling webinar! Post any more MySQL in the cloud comments below.

Categories: MySQL

Blog Poll: What Operating System Do You Run Your Production Database On?

MySQL Performance Blog - Thu, 2017-06-08 20:24

In this post, we’ll use a blog poll to find out what operating system you use to run your production database servers.

As databases grow to meet more challenges and expanding application demands, they must try to get the maximum amount of performance out of the available resources. How they work with an operating system can affect many variables, and help or hinder performance. The operating system you use for your database can impact consumable choices (such as hardware and memory). The operating system you use can also impact your choice of database engine (or vice versa).

Please let us know what operating system you use to run your database. For this poll, we’re asking which operating system you use to actually run your production database server (not the base operating system).

If you’re running virtualized Linux on Windows, please select Linux as the operating system used for your database. Pick up to three that apply. Add any thoughts or other options in the comments section:

Note: There is a poll embedded within this post. Please visit the site to participate in the poll.

Thanks in advance for your responses – they will help the open source community determine how database environments are being deployed.

Categories: MySQL

ProxySQL Admin Interface Is Not Your Typical MySQL Server!

MySQL Performance Blog - Wed, 2017-06-07 17:20

In this blog post, I’ll look at how ProxySQL Admin behaves in some unusual and unexpected ways from a MySQL perspective.

ProxySQL allows you to connect to its admin interface using the MySQL protocol and use familiar tools, like the MySQL command line client, to manage its configuration as a set of configuration tables. This ability may trick you into thinking that you’re working with a stripped-down MySQL server – and expect it to behave like MySQL. 

It would be a mistake to think this! In fact, ProxySQL embeds the SQLite database to store its configuration. As such, it behaves much closer to SQLite!

Below, I’ll show you a few things that confused me at first. All of these are as of ProxySQL 1.3.6 (in case behavior changes in the future).

Fake support for Use command

mysql> show databases;
+-----+---------+-------------------------------+
| seq | name    | file                          |
+-----+---------+-------------------------------+
| 0   | main    |                               |
| 2   | disk    | /var/lib/proxysql/proxysql.db |
| 3   | stats   |                               |
| 4   | monitor |                               |
+-----+---------+-------------------------------+
4 rows in set (0.00 sec)

mysql> select database();
+------------+
| DATABASE() |
+------------+
| admin      |
+------------+
1 row in set (0.00 sec)

mysql> use stats;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select database();
+------------+
| DATABASE() |
+------------+
| admin      |
+------------+
1 row in set (0.00 sec)

mysql> use funkydatabase;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed

So here we can see that:

  • There is a concept of multiple databases in the ProxySQL admin interface
  • The ProxySQL admin interface supports the select database(); function, which always returns the same value regardless of the database you tried to set. Typically it will be “admin” or “stats”, depending on which user you use to connect to the database.
  • You can use the “use” command to change the database – but it does not really change the database. This command has to be supported, because many MySQL clients will not connect without it.

Invisible tables

mysql> show tables;
+--------------------------------------+
| tables                               |
+--------------------------------------+
| global_variables                     |
| mysql_collations                     |
| mysql_query_rules                    |
| mysql_replication_hostgroups         |
| mysql_servers                        |
| mysql_users                          |
| runtime_global_variables             |
| runtime_mysql_query_rules            |
| runtime_mysql_replication_hostgroups |
| runtime_mysql_servers                |
| runtime_mysql_users                  |
| runtime_scheduler                    |
| scheduler                            |
+--------------------------------------+
13 rows in set (0.00 sec)

mysql> show tables from stats;
+--------------------------------+
| tables                         |
+--------------------------------+
| global_variables               |
| stats_mysql_commands_counters  |
| stats_mysql_connection_pool    |
| stats_mysql_global             |
| stats_mysql_processlist        |
| stats_mysql_query_digest       |
| stats_mysql_query_digest_reset |
| stats_mysql_query_rules        |
+--------------------------------+
8 rows in set (0.00 sec)

mysql> select count(*) from stats_mysql_commands_counters;
+----------+
| count(*) |
+----------+
| 52       |
+----------+
1 row in set (0.00 sec)

We can query the list of tables in our default database (which can’t change), and we also get the list of tables in the “stats” database with very familiar MySQL syntax. But we can also query a “stats” table directly without specifying the “stats” database, even though it is not shown in “show tables” for our current database.

Again this is SQLite behavior!

Categories: MySQL

MySQL Encryption at Rest – Part 1 (LUKS)

MySQL Performance Blog - Tue, 2017-06-06 19:00

In this first of a series of blog posts, we’ll look at MySQL encryption at rest.

At Percona, we work with a number of clients that require strong security measures for PCI, HIPAA and PHI compliance, where data managed by MySQL needs to be encrypted “at rest.” As with all things open source, there are several options for meeting the MySQL encryption at rest requirement. In this three-part series, we cover several popular options for encrypting data and present the pros and cons of each solution. You may want to evaluate which parts of these tutorials work best for your situation before using them in production.

Part one of this series is implementing disk-level encryption using crypt+LUKS.

In MySQL 5.7, InnoDB has built-in encryption features. This solution has some cons, however. Specifically, InnoDB tablespace encryption doesn’t cover undo logs, redo logs or the main ibdata1 tablespace. Additionally, binary logs and slow query logs are not covered by InnoDB encryption.

Using crypt+LUKS, we can encrypt everything (data + logs) under one umbrella – provided that all files reside on the same disk. If you separate the various logs on to different partitions, you will have to repeat the tutorial below for each partition.

LUKS Tutorial

The Linux Unified Key Setup (LUKS) is the current standard for disk encryption. In the examples below, the block device /dev/sda4 on CentOS 7 is encrypted using a generated key, and then mounted as the default MySQL data directory at /var/lib/mysql.

WARNING! Loss of the key means complete loss of data! Be sure to have a backup of the key.

Install the necessary utilities:

# yum install cryptsetup

Creating, Formatting and Mounting an Encrypted Disk

The cryptsetup command initializes the volume and sets an initial key/passphrase. Please note that the key is not recoverable, so do not forget it. Take the time now to decide where you will securely store a copy of this key. LastPass Secure Notes are a good option, as they allow file attachments; this will come in handy for the header backup we create later on.

Create a passphrase for encryption. Choose something with high entropy (i.e., lots of randomness). Here are two options (pick one):

# openssl rand -base64 32
# date | md5 | rev | head -c 24 | md5 | tail -c 32

Next, we need to initialize and format our partition for use with LUKS. Any mounted points using this block device must be unmounted beforehand.

WARNING! This command will delete ALL DATA ON THE DEVICE! BE SURE TO COMPLETE ANY BACKUPS BEFORE YOU RUN THIS!

# cryptsetup -c aes-xts-plain -v luksFormat /dev/sda4

You will be prompted for a passphrase. Provide the phrase you generated above. After you provide a passphrase, you now need to “open” the encrypted disk and provide a device mapper name (i.e., an alias). It can be anything, but for our purposes, we will call it “mysqldata”:

# cryptsetup luksOpen /dev/sda4 mysqldata

You will be prompted for the passphrase you used above. On success, you should see the device show up:

# ls /dev/mapper/
lrwxrwxrwx 1 root root 7 Jun 2 11:50 mysqldata -> ../dm-0

You can now format this encrypted block device and create a filesystem:

# mkfs.ext4 /dev/mapper/mysqldata

Now you can mount the encrypted block device you just formatted:

# mount /dev/mapper/mysqldata /var/lib/mysql

Unfortunately, you cannot add this to /etc/fstab to automount on a server reboot, since the key is needed to “open” the device. Please keep in mind that if your server ever reboots, MySQL will not start, because the data directory is unavailable until the device is opened and mounted (we will look at how to make this work using scripts in Part Two of this series).

Creating a Backup of Encryption Information

The header of a LUKS block device contains information regarding the current encryption key(s). Should this ever get damaged, or if you need to recover because you forgot the new passphrase, you can restore this header information:

# cryptsetup luksHeaderBackup --header-backup-file ${HOSTNAME}_`date +%Y%m%d`_header.dat /dev/sda4

Go ahead and make a SHA1 of this file now to verify that it doesn’t get corrupted later on in storage:

# sha1sum ${HOSTNAME}_`date +%Y%m%d`_header.dat

GZip the header file. Store the SHA1 and the .gz file in a secure location (for example, attach it to the secure note created above). Now you have a backup of the key you used and a backup of the header which uses that key.

Unmounting and Closing a Disk

If you know you will be storing a disk, or just want to make sure the contents are not visible (i.e., mounted), you can unmount and “close” the encrypted device:

# umount /var/lib/mysql/
# cryptsetup luksClose mysqldata

In order to mount this device again, you must “open” it and provide one of the keys.

Rotating Keys (Adding / Removing Keys)

Various compliance and enforcement rules dictate how often you need to rotate keys. You cannot rotate or change a key directly. LUKS supports up to eight keys per device. You must first add a new key to any slot (other than the slot currently occupied by the key you are trying to remove), and then remove the older key.

Take a look at the existing header information:

# cryptsetup luksDump /dev/sda4
LUKS header information for /dev/sda4

Version:        1
Cipher name:    aes
Cipher mode:    cbc-essiv:sha256
Hash spec:      sha1
Payload offset: 4096
MK bits:        256
MK digest:      81 37 51 6c d5 c8 32 f1 7a 2d 47 7c 83 62 70 d9 f7 ce 5a 6e
MK salt:        ae 4b e8 09 c8 7a 5d 89 b0 f0 da 85 7e ce 7b 7f
                47 c7 ed 51 c1 71 bb b5 77 18 0d 9d e2 95 98 bf
MK iterations:  44500
UUID:           92ed3e8e-a9ac-4e59-afc3-39cc7c63e7f6

Key Slot 0: ENABLED
    Iterations:          181059
    Salt:                9c a9 f6 12 d2 a4 2a 3d a4 08 b2 32 b0 b4 20 3b
                         69 13 8d 36 99 47 42 9c d5 41 35 8c b3 d0 ff 0e
    Key material offset: 8
    AF stripes:          4000
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED

Here we can see a key is currently occupying “Key Slot 0”. We can add a key to any DISABLED key slot. Let’s use slot #1:

# cryptsetup luksAddKey --key-slot 1 -v /dev/sda4
Enter any passphrase:
Key slot 0 unlocked.
Enter new passphrase for key slot:
Verify passphrase:
Command successful.

LUKS asks for “any” passphrase to authenticate us. Had there been keys in other slots, we could have used any one of them. As only one is currently saved, we have to use it. We can then add a new passphrase for slot 1.

Now that we have saved the new key in slot 1, we can remove the key in slot 0.

# cryptsetup luksKillSlot /dev/sda4 0
Enter any remaining LUKS passphrase:
No key available with this passphrase.

In the example above, the existing passphrase stored in slot 0 was used. This is not allowed. You cannot provide the passphrase for the same slot you are attempting to remove.

Repeat this command and provide the passphrase for slot 1, which was added above. We are now able to remove the passphrase stored in slot 0:

# cryptsetup luksKillSlot /dev/sda4 0
Enter any remaining LUKS passphrase:
# cryptsetup luksDump /dev/sda4
LUKS header information for /dev/sda4

Version:        1
Cipher name:    aes
Cipher mode:    cbc-essiv:sha256
Hash spec:      sha1
Payload offset: 4096
MK bits:        256
MK digest:      81 37 51 6c d5 c8 32 f1 7a 2d 47 7c 83 62 70 d9 f7 ce 5a 6e
MK salt:        ae 4b e8 09 c8 7a 5d 89 b0 f0 da 85 7e ce 7b 7f
                47 c7 ed 51 c1 71 bb b5 77 18 0d 9d e2 95 98 bf
MK iterations:  44500
UUID:           92ed3e8e-a9ac-4e59-afc3-39cc7c63e7f6

Key Slot 0: DISABLED
Key Slot 1: ENABLED
    Iterations:          229712
    Salt:                5d 71 b2 3a 58 d7 f8 6a 36 4f 32 d1 23 1a df df
                         cd 2b 68 ee 18 f7 90 cf 58 32 37 b9 02 e1 42 d6
    Key material offset: 264
    AF stripes:          4000
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED

After you change the passphrase, it’s a good idea to repeat the header dump steps we performed above and store the new passphrase in your vault.

Conclusion

Congratulations, you have now learned how to encrypt and mount a partition using LUKS! You can now use this mounted device just like any other. You can also restore a backup and start MySQL.

In Part Two, we will cover using InnoDB tablespace encryption.

Categories: MySQL

Upcoming Webinar Thursday June 8, 2017: MongoDB Shell – A Primer

MySQL Performance Blog - Tue, 2017-06-06 18:35

Join Percona’s Solutions Engineer, Rick Golba as he presents MongoDB Shell: A Primer on Thursday, June 8, 2017, at 11 am PDT / 2 pm EDT (UTC-7).

Register Now

Every good DBA should be a master of the database shell. In this webinar, we will help you understand how to structure shell commands and discuss all the advanced functions and ways to chain commands in the mongo shell.

This webinar will teach you how to:

  • Limit the number of documents, or skip documents, when running a query
  • Work with the MongoDB aggregation pipeline
  • View an explain plan for a MongoDB query
  • Understand the MongoDB write concerns
  • Validate the contents of a database on various nodes in a replica set
  • Understand the MongoDB read preference

We will touch on CRUD functions, but a great deal more time will be spent on the areas above. We will have a dedicated webinar for mastering CRUD operations in MongoDB in the future.

Register for the webinar here.

Rick Golba, Solutions Engineer

Rick Golba is a Solutions Engineer at Percona. Rick has over 20 years of experience working with databases. Prior to Percona, he worked as a Technical Trainer for HP/Vertica.

Categories: MySQL

Percona Live Open Source Database Conference Europe 2017 in Dublin, Ireland Call for Papers is Open!

MySQL Performance Blog - Tue, 2017-06-06 01:08

Announcing the opening of the Percona Live Open Source Database Conference Europe 2017 in Dublin, Ireland call for papers. It will be open from now until July 17, 2017.*

Do you have a big idea to explain, use case to share or skill to teach? Submit your speaking proposal for either breakout or tutorial sessions. This is your chance to put your developer ideas, business and case studies, and operational expertise in front of an intelligent, engaged audience of open source technology users.

The theme of Percona Live Europe 2017 is “Championing Open Source Databases,” with sessions in MySQL, MariaDB, MongoDB and other open source databases, including time series databases, PostgreSQL and RocksDB. Are you:

  • Working with MongoDB as a developer?
  • Creating a new MySQL-variant time series database?
  • Deploying MariaDB in a novel way?
  • Using open source database technology to solve a particular business issue?

We are looking for topics that address a variety of open source issues. The tracks at this year’s conference are:

  • Developers
  • Business / Case Studies
  • Operations

We invite you to submit your speaking proposal for breakout, tutorial or lightning talk sessions. Share your open source database experiences with peers and professionals in the open source community by presenting a:

  • Breakout Session. Broadly cover a technology area using specific examples. Sessions should be either 25 minutes or 50 minutes in length (including Q&A).
  • Tutorial Session. Present a technical session that aims for a level between a training class and a conference breakout session. Encourage attendees to bring and use laptops for working on detailed and hands-on presentations. Tutorials will be three or six hours in length (including Q&A).
  • Lightning Talk. Give a five-minute presentation focusing on one key point that interests the open source community: technical, lighthearted or entertaining talks on new ideas, a successful project, a cautionary story, a quick tip or demonstration.

Speaking at Percona Live Europe is a great way to build your personal and company brands. If selected, you will receive a complimentary full conference pass!

Submit your talks now.

*NOTE: We have changed our registration platform this year, so you will need to register before submitting a talk idea (even if you have previously registered).

Tips for Submitting

Include presentation details, but be concise. Clearly state:

  • Purpose of the talk (problem, solution, action format, etc.)
  • Covered technologies
  • Target audience
  • Audience takeaway

Keep proposals free of sales pitches. The Committee is looking for in-depth technical talks, not ones that sound like a commercial.

Be original! Make your presentation stand out by submitting a proposal that focuses on real-world scenarios, relevant examples, and knowledge transfer.

Submit your proposals as soon as you can – the call for papers closes July 17, 2017!

Categories: MySQL

Webinar June 7, 2017: MySQL In the Cloud – Migration, Best Practices, High Availability, Scaling

MySQL Performance Blog - Mon, 2017-06-05 18:09

Join Percona’s CEO and Founder Peter Zaitsev as he presents MySQL In the Cloud: Migration, Best Practices, High Availability, Scaling on Wednesday, June 7, 2017, at 10 am PDT / 1:00 pm EDT (UTC-7).

Register Now

Businesses are moving many of the systems and processes they once owned to offsite “service” models: Platform as a Service (PaaS), Software as a Service (SaaS), Infrastructure as a Service (IaaS), etc. These services are usually referred to as being “in the cloud” – meaning that the infrastructure and management of the service in question are not maintained by the enterprise using the service.

When it comes to database environment and infrastructure, more and more enterprises are moving to MySQL in the cloud to manage this vital part of their business organization. We often refer to database services provided in the cloud as Database as a Service (DBaaS). The next question after deciding to move your database to the cloud is “How do I plan properly so as to avoid a disaster?”

Before moving to the cloud, it is important to carefully define your database needs, plan for the migration and understand what putting a solution into production entails. This webinar discusses the following subjects on moving to the cloud:

  • Public and private cloud
  • Migration to the cloud
  • Best practices
  • High availability
  • Scaling

Register for the webinar here.

Peter Zaitsev, Percona CEO and Founder

Peter Zaitsev co-founded Percona and assumed the role of CEO in 2006. As one of the foremost experts on MySQL strategy and optimization, Peter leveraged both his technical vision and entrepreneurial skills to grow Percona from a two-person shop to one of the most respected open source companies in the business. With over 150 professionals in 20+ countries, Peter’s venture now serves over 3000 customers – including the “who’s who” of internet giants, large enterprises and many exciting startups. Percona was named to the Inc. 5000 in 2013, 2014 and 2015.

Peter was an early employee at MySQL AB, eventually leading the company’s High Performance Group. A serial entrepreneur, Peter co-founded his first startup while attending Moscow State University where he majored in Computer Science. Peter is a co-author of High Performance MySQL: Optimization, Backups, and Replication, one of the most popular books on MySQL performance. Peter frequently speaks as an expert lecturer at MySQL and related conferences, and regularly posts on the Percona Database Performance Blog. Fortune and DZone often tap Peter as a contributor, and his recent ebook Practical MySQL Performance Optimization is one of percona.com’s most popular downloads.

Categories: MySQL

Heavier-Than-Air Flight Is Impossible

Xaprb, home of innotop - Sun, 2017-06-04 19:57

When I was a child, my parents put a page from a newspaper on our refrigerator door. I remember it as a yellowed, faded piece of paper that seemed like it had always been there. It was filled with little oval portraits of famous people proclaiming that heavier-than-air flight was impossible. My memory is that there were perhaps 40 of them, each with a quote and a date within a few years of the Wright Brothers’ flight at Kitty Hawk.

This might be one of the childhood influences that resonates most strongly in me today. There are dozens of examples of people disdainfully saying “you can’t do that” during my career.

I tried to find an image of the page I remember, but didn’t (maybe it’s impossible?), so I looked for quotes and am producing my own version of it below.

Is it not demonstrated that a true flying machine, self-raising, self-sustaining, self-propelling, is physically impossible?
— Joseph LeConte, November 1888

It is apparent to me that the possibilities of the aeroplane, which two or three years ago were thought to hold the solution to the [flying machine] problem, have been exhausted, and that we must turn elsewhere.
— Thomas Edison, November 1895

I can state flatly that heavier than air flying machines are impossible.
— Lord Kelvin, 1895

I have not the smallest molecule of faith in aerial navigation other than ballooning, or of the expectation of good results from any of the trials we heard of. So you will understand that I would not care to be a member of the Aeronautical Society.
— Lord Kelvin, 1896

The present generation will not [fly in the next century], and no practical engineer would devote himself to the problem now.
— Worby Beaumont, January 1900

There is no basis for the ardent hopes and positive statements made as to the safe and successful use of the dirigible balloon or flying machine, or both, for commercial transportation or as weapons of war.
— George Melville, December 1901

The demonstration that no possible combination of known substances, known forms of machinery and known forms of force, can be united in a practical machine by which men shall fly along distances through the air, seems to the writer as complete as it is possible for the demonstration to be.
— Simon Newcomb, 1900

Flight by machines heavier than air is unpractical and insignificant, if not utterly impossible.
— Simon Newcomb, 1902

It is complete nonsense to believe flying machines will ever work.
— Stanley Mosley, 1905

The aeroplane will never fly.
— Lord Haldane, 1907

Note that the quote from Haldane, a high-ranking British official, came after successful flight. Many people refused to believe it, though, citing “proof” such as the widespread skepticism about it, and the lack of reporting by newspapers.

Of course, today we know that heavier-than-air flight is possible, and many similar things are too, but this pattern will continue for as long as people dare to dream of achieving things whose success is unclear. Just a few years later, the Boston Evening Transcript reported that there simply weren’t enough pilots.

I’ll leave you with a newspaper clipping that may prove inspiring, from The Nation, Oct 19, 1859:

On this subject of the hygienic effects of ballooning, Mr. Wise is enthusiastic. The blood we are told, begins to course more freely when up a mile or two with a balloon—the exretory vessels are more freely opened—the gastric juice pours into the stomach more rapidly—the liver, kidneys, and heart work under expanded action in a highly calorified atmosphere—the brain receives and gives more exalted inspirations—the whole animal and mental system become intensely quickened and more of the chronic morbid matter is exhaled and thrown off in an hour or two while two miles up of a fine summer’s day, than the invalid can get rid of in a voyage from New York to Madeira. The appetite is immensely increased, and the spirits elevated.

In reading Mr. Wise, one feels aerial inclined, and predisposed to believe with him that great problems, commercial, international, and social, are yet to come of navigating the air; and that the art will be redeemed from its present very magnificent and sublime but unproductive state.

Categories: MySQL

Percona XtraDB Cluster 5.7.18-29.20 is now available

MySQL Performance Blog - Fri, 2017-06-02 19:41

Percona announces the release of Percona XtraDB Cluster 5.7.18-29.20 on June 2, 2017. Binaries are available from the downloads section or our software repositories.

NOTE: You can also run Docker containers from the images in the Docker Hub repository.

NOTE: Due to a new package dependency, Ubuntu/Debian users should use apt-get dist-upgrade or apt-get install percona-xtradb-cluster-57 to upgrade.

Percona XtraDB Cluster 5.7.18-29.20 is now the current release, based on the following:

All Percona software is open-source and free.

Fixed Bugs

  • PXC-749: Fixed memory leak when running INSERT on a table without primary key defined and wsrep_certify_nonPK disabled (set to 0).

    NOTE: We recommend you define primary keys on all tables for correct write-set replication. (A query for finding tables without a primary key follows this list of fixed bugs.)

  • PXC-812: Fixed SST script to leave the DONOR keyring when JOINER clears the datadir.

  • PXC-813: Fixed SST script to use UTC time format.

  • PXC-816: Fixed hook for caching GTID events in asynchronous replication. For more information, see #1681831.

  • PXC-820: Enabled querying of pxc_maint_mode by another client during the transition period.

  • PXC-823: Fixed SST flow to gracefully shut down the JOINER node if SST fails because the DONOR leaves the cluster due to a network failure. This ensures that the DONOR is then able to recover to the synced state when network connectivity is restored. For more information, see #1684810.

  • PXC-824: Fixed graceful shutdown of Percona XtraDB Cluster node to wait until applier thread finishes.
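
One way to find tables affected by the primary-key recommendation above is the following information_schema sketch (the excluded system schemas are an assumption; adjust them to your environment):

-- List base tables that have no PRIMARY KEY constraint defined.
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints tc
       ON tc.table_schema = t.table_schema
      AND tc.table_name = t.table_name
      AND tc.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
  AND tc.constraint_type IS NULL;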

Other Improvements

  • PXC-819: Added five new status variables to expose required values from wsrep_ist_receive_status and wsrep_flow_control_interval as numbers, rather than strings that need to be parsed (see the sample query after this list):

    • wsrep_flow_control_interval_low
    • wsrep_flow_control_interval_high
    • wsrep_ist_receive_seqno_start
    • wsrep_ist_receive_seqno_current
    • wsrep_ist_receive_seqno_end
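
A quick way to read these values (a minimal sketch; both the original string variables and the new numeric counterparts should be visible through the usual status interface):

-- Original string variables plus the new numeric counterparts.
SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_interval%';
SHOW GLOBAL STATUS LIKE 'wsrep_ist_receive%';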

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Categories: MySQL

Percona XtraDB Cluster 5.6.36-26.20 is Now Available

MySQL Performance Blog - Fri, 2017-06-02 19:40

Percona announces the release of Percona XtraDB Cluster 5.6.36-26.20 on June 2, 2017. Binaries are available from the downloads section or our software repositories.

Percona XtraDB Cluster 5.6.36-26.20 is now the current release, based on the following:

All Percona software is open-source and free.

NOTE: Due to end of life, Percona will stop producing packages for the following distributions after July 31, 2017:

  • Red Hat Enterprise Linux 5 (Tikanga)
  • Ubuntu 12.04 LTS (Precise Pangolin)

You are strongly advised to upgrade to the latest stable versions if you want to continue using Percona software.

Fixed Bugs

  • PXC-749: Fixed a memory leak when running INSERT on a table without a primary key defined and wsrep_certify_nonPK disabled (set to 0).

    NOTE: We recommend you define primary keys on all tables for correct write-set replication.

  • PXC-813: Fixed SST script to use UTC time format.

  • PXC-823: Fixed SST flow to gracefully shut down the JOINER node if SST fails because the DONOR leaves the cluster due to a network failure. This ensures that the DONOR is then able to recover to the synced state when network connectivity is restored. For more information, see #1684810.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Categories: MySQL

ProxySQL-Assisted Percona XtraDB Cluster Maintenance Mode

MySQL Performance Blog - Wed, 2017-05-31 18:34

In this blog post, we’ll look at how Percona XtraDB Cluster maintenance mode uses ProxySQL to take cluster nodes offline without impacting workloads.

Percona XtraDB Cluster Maintenance Mode

Since Percona XtraDB Cluster is a high availability solution, it must account for scenarios where a cluster node is taken down for maintenance (either by isolating it from the cluster or shutting it down completely).

Percona XtraDB Cluster facilitates this through its maintenance mode. Percona XtraDB Cluster maintenance mode reduces abrupt workload failures when a node is taken down while ProxySQL is used as the load balancer.

The central idea is delaying the core node action and allowing ProxySQL to divert the workload.

How ProxySQL Manages Percona XtraDB Cluster Maintenance Mode

With Percona XtraDB Cluster maintenance mode, ProxySQL marks the node as OFFLINE when a user triggers a shutdown signal (or wants to put a specific node into maintenance mode):

  • When a user triggers a shutdown, the Percona XtraDB Cluster node sets pxc_maint_mode to SHUTDOWN (from the DISABLED default) and sleeps for x seconds (dictated by pxc_maint_transition_period, 10 seconds by default). ProxySQL auto-detects this change and marks the node as OFFLINE. With this change, ProxySQL avoids opening new connections for any DML transactions, but continues to service existing queries until pxc_maint_transition_period expires. Once the sleep period is complete, Percona XtraDB Cluster delivers the real shutdown signal, giving ProxySQL enough time to transition the workload.
  • If the user needs to take a node into maintenance mode, the user can simply set pxc_maint_mode to MAINTENANCE (see the example after this list). With that, pxc_maint_mode is updated and the client connection updating it sleeps for x seconds (as dictated by pxc_maint_transition_period) before control is given back to the user. ProxySQL auto-detects this change and marks the node as OFFLINE. With this change, ProxySQL avoids opening new connections for any DML transactions but continues to service existing queries.
  • ProxySQL detects this change in maintenance state and then automatically re-routes traffic, reducing abrupt workload failures.
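
For reference, here is a minimal sketch of driving maintenance mode by hand from a client connected to the Percona XtraDB Cluster node (variable and value names as described above):

-- Put the node into maintenance mode; the statement returns only after
-- pxc_maint_transition_period seconds, giving ProxySQL time to drain traffic.
SET GLOBAL pxc_maint_mode = 'MAINTENANCE';

-- Inspect the current mode and the transition period.
SHOW GLOBAL VARIABLES LIKE 'pxc_maint%';

-- Once maintenance is finished, put the node back into rotation.
SET GLOBAL pxc_maint_mode = 'DISABLED';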

Technical Details:

  • The ProxySQL Galera checker script (proxysql_galera_checker) continuously monitors the state of individual nodes by checking the pxc_maint_mode variable status (in addition to the existing wsrep_local_state) using the ProxySQL scheduler feature (see the registration sketch below).
  • The scheduler is a cron-like implementation integrated inside ProxySQL, with millisecond granularity.
  • If proxysql_galera_checker detects pxc_maint_mode = SHUTDOWN | MAINTENANCE, then it marks the node as OFFLINE_SOFT. This avoids the creation of new connections (or workloads) on the node.
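
For context, proxysql_galera_checker is typically registered through the scheduler table on the ProxySQL admin interface. The interval, script path, and arguments below are illustrative assumptions only; they vary between script versions and deployments:

-- Register the checker to run every 3 seconds (example values only).
INSERT INTO scheduler (id, active, interval_ms, filename, arg1, arg2, arg3, arg4, arg5)
VALUES (1, 1, 3000, '/usr/bin/proxysql_galera_checker',
        '10', '10', '2', '1', '/var/lib/proxysql/proxysql_galera_checker.log');
LOAD SCHEDULER TO RUNTIME;
SAVE SCHEDULER TO DISK;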

Sample proxysql_galera_checker log:

Thu Dec  8 11:21:11 GMT 2016 Enabling config
Thu Dec  8 11:21:17 GMT 2016 Check server 10:127.0.0.1:25000 , status ONLINE , wsrep_local_state 4
Thu Dec  8 11:21:17 GMT 2016 Check server 10:127.0.0.1:25100 , status ONLINE , wsrep_local_state 4
Thu Dec  8 11:21:17 GMT 2016 Check server 10:127.0.0.1:25200 , status ONLINE , wsrep_local_state 4
Thu Dec  8 11:21:17 GMT 2016 Changing server 10:127.0.0.1:25200 to status OFFLINE_SOFT due to SHUTDOWN
Thu Dec  8 11:21:17 GMT 2016 Number of writers online: 2 : hostgroup: 10
Thu Dec  8 11:21:17 GMT 2016 Enabling config
Thu Dec  8 11:21:22 GMT 2016 Check server 10:127.0.0.1:25000 , status ONLINE , wsrep_local_state 4
Thu Dec  8 11:21:22 GMT 2016 Check server 10:127.0.0.1:25100 , status ONLINE , wsrep_local_state 4
Thu Dec  8 11:21:22 GMT 2016 Check server 10:127.0.0.1:25200 , status OFFLINE_SOFT , wsrep_local_state 4
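
The same transition can also be observed from the ProxySQL admin interface (by default on port 6032); a node in maintenance shows up as OFFLINE_SOFT until pxc_maint_mode returns to DISABLED:

-- Runtime view of the backend servers as ProxySQL currently sees them.
SELECT hostgroup_id, hostname, port, status
FROM runtime_mysql_servers;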

Ping us below with any questions or comments.

Categories: MySQL

Percona XtraBackup 2.4.7-2 is Now Available

MySQL Performance Blog - Tue, 2017-05-30 16:53

Percona announces the GA release of Percona XtraBackup 2.4.7-2 on May 29, 2017. You can download it from our download site and apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

Bug Fixed:
  • Fixed build failure on Debian 9.0 (Stretch). Bug fixed #1678947.

Release notes with all the bugfixes for Percona XtraBackup 2.4.7-2 are available in our online documentation. Please report any bugs to the launchpad bug tracker.

Categories: MySQL