MySQL

This Week in Data with Colin Charles: Percona Live Europe!

MySQL Performance Blog - Fri, 2017-08-18 16:36

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

Has a week passed already? Welcome back to the second column. A lot of time has been spent neck deep in getting speakers accepted and scheduled for Percona Live Open Source Database Conference Europe 2017 in Dublin, as well as organizing the conference sponsors.

Percona Live Europe Dublin

At the time of writing, we are six weeks away from the conference, so a little over a month! Have you registered yet?

We have 12 tutorials that cover a wide range of topics: ProxySQL (from the author Rene Cannao), Orchestrator (from the author Shlomi Noach) and practical Couchbase, to name a few. If we did a technology word cloud, the coverage would include MongoDB, Docker, Elastic, Percona Monitoring and Management (PMM), Percona XtraDB Cluster 5.7, MySQL InnoDB Cluster and Group Replication.

In addition to that, if you’re a MySQL beginner (or thinking of a career change) there is a six-hour boot camp titled MySQL in a Nutshell (Part 1 and Part 2). Come prepared with your laptop, and leave a MySQL DBA!

Sessions are scheduled, and most of the content is already online: check out day 1, and day 2. We have 104 sessions scheduled, so there’s plenty to choose from.

Remember that you have till 7:00 a.m. UTC-1, August 16th, 2017 to book the group rate at the event venue for €250/night. Use code PERCON.

Releases
  • orchestrator/raft: Pre-release 3.0 is available. I’m a huge fan of Orchestrator, and now you can set up high availability for Orchestrator via the Raft consensus protocol.
  • MariaDB 10.0.32 is out, and it comes with a new Percona XtraDB, Percona TokuDB and a new InnoDB. You’ll want this release if you’re using TokuDB, as it merges fixes from TokuDB 5.6.36-82.1 (which addresses two known issues).
  • If you encountered the TokuDB problems above, you’ll want to look at MariaDB 10.1.26. One surprise hidden in the release notes: MariaDB Backup is now a stable/GA release. Have you used it yet?
Link List

I look forward to feedback/tips via e-mail at colin.charles@percona.com or I’m @bytebot on Twitter.

Categories: MySQL

IMDb Data in a Graph Database

MySQL Performance Blog - Thu, 2017-08-17 19:36

In this first-of-its-kind post, Percona welcomes Dehowe Feng, Software Developer from Bitnine, as a guest blogger. In his blog post, Dehowe discusses how importing IMDb data into a graph database (AgensGraph) lets you quickly see how data nodes relate to each other. This blog echoes a talk given by Bitnine at the Percona Live Open Source Database Conference 2017.

Graphs help illustrate the relationships between entities through nodes, drawing connections between people and objects. Relationships in IMDb are inherently visual. Seeing how things are connected grants us a better understanding of the context underneath. By importing IMDb data as graph data, you simplify the schema and can obtain key insights.

In this post, we will examine how importing IMDb into a graph database (in this case, AgensGraph) allows us to look at data relationships in a much more visual way, providing more intuitive insights into the nature of related data.

For installation instructions for the importing scripts, go here.

The Internet Movie Database (IMDb), owned by Amazon.com, is one of the largest movie databases. It contains 4.1 million titles and 7.7 million personalities (https://en.wikipedia.org/wiki/IMDb).

Relational Schema for IMDb

Relational Schema of IMDb Info

Picture courtesy of user ofthelit on StackOverflow, https://goo.gl/SpS6Ca

Because IMDb’s file format is not easy to read and parse, rather than importing the files directly we use an additional step to load them into relational tables. For this project, we used IMDbpy to load the data into AgensGraph in relational form. The above figure is the relational schema which IMDbpy created. This schema is somewhat complicated, but essentially there are four basic entities: Production, Person, Company and Keyword. Because there are many N-to-N relationships between these entities, the relational schema has more tables than the number of entities. This makes the schema harder to understand. For example, a person can be related to many movies and a movie can have many characters.
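To make the N-to-N pattern concrete, here is a rough sketch of the join needed just to list the people credited on a single title in the relational schema. The table and column names (name, cast_info, title) follow the IMDbpy-style layout in the figure above, but treat them as illustrative assumptions rather than the exact schema.

-- Illustrative only: resolving an N-to-N relationship between people and titles
-- through the cast_info link table (IMDbpy-style table names assumed)
SELECT n.name, t.title
FROM name n
JOIN cast_info ci ON ci.person_id = n.id
JOIN title t ON t.id = ci.movie_id
WHERE t.title = 'Night at the Museum';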

Concise Graph Modeling

From there, we developed our own graph schema using Production, Person, Company and Keyword as our nodes (or end data points).

Productions lie at the “center” of the graph, with everything leading to them. Keywords describe Productions, while Persons and Companies are credited for their contributions to Productions. Productions are linked to other Productions as well.

Simplified Graph Database Schema

With the data in graph form, one can easily see the connections between all the nodes. The data can be visualized as a network and querying the data with Cypher allows users to explore the connections between entities.

Compared to the relational schema of IMDb, the graph schema is much simpler to understand. By merging related information for the main entities into nodes, we can access all information relevant to a node through the node itself, rather than having to match IDs across tables to get the information that we need. If we want to examine how a node relates to another node, we can query its edges to see the connections it forms. Being able to visually “draw a connection” from one node to another helps to illustrate how they are connected.

Furthermore, the labels of the edges describe how the nodes are connected. Edge labels in the IMDb Graph describe what kind of connection is formed, and pertinent information may be stored in attributes in the edges. For example, for the connections ACTOR_IN and ACTRESS_IN, we store role data, such as character name and character id.
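As a rough illustration, a query along the following lines could pull the role information stored on the edge itself. The property name role is a hypothetical example for this sketch; the actual attribute names depend on the import scripts.

-- "role" is a hypothetical edge property name used only for illustration
MATCH (a:Person)-[r:ACTOR_IN]->(b:Production)
WHERE b.title = 'Night at the Museum'
RETURN a.name, r.role;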

Data Migration

To build the properties of vertexes and edges, we use “views” that join the related tables. The data is migrated into a graph format by querying the relational data using selects and joins into a single table with the necessary information for creating each node.

For example, here is the SQL query used to create the jsonb_keyword view:

CREATE VIEW jsonb_keyword AS SELECT row_to_json(row(keyword)) AS data FROM keyword;

We use a view to make importing queries simpler. Once this view is created, its content can be migrated into the graph. After the graph is created, the graph_path is set, and the VLABEL is created, we can use the convenient LOAD keyword to load the JSON values from the relational table into the graph:

LOAD FROM jsonb_keyword AS keywords CREATE (a:Keyword = data(keywords) );

Note that here LOAD is used to load data in from a relational table, but LOAD can also be used to load data from external sources as well.
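For reference, the setup steps mentioned above (creating the graph, setting the graph_path and creating the VLABEL) could look roughly like the following sketch; the graph name imdb is just an example.

-- Sketch of the preliminary steps; the graph name "imdb" is an example
CREATE GRAPH imdb;
SET graph_path = imdb;
CREATE VLABEL Keyword;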

Creating edges is a similar process. We load edges from the tables that store the id tuples linking the entities, after creating their ELABELs:

LOAD FROM movie_keyword AS rel_key_movie MATCH (a:Keyword), (b:Production) WHERE a.id::int = (rel_key_movie).keyword_id AND b.id::int = (rel_key_movie).movie_id CREATE (a)-[:KEYWORD_OF]->(b);

As you can see, AgensGraph is not restricted to the CSV format when importing data. We can import relational data into its graph portion using the LOAD feature and SQL statements to refine our data sets.

How is information stored?

Most of the pertinent information is held in the nodes (vertexes). Nodes are labeled either as Productions, Persons, Companies or Keywords, and their relevant information is stored as JSON documents. Since IMDb information is constantly updated, many fields for certain entities are left incomplete. Since JSON is semi-structured, if an entity does not have a certain piece of information the field will not exist at all – rather than having a field and marking it as NULL.

We also use nested JSON arrays to store data that may have multiple fields, such as quotes that persons might have said or alternate titles to productions. This makes it possible to store “duplicate” fields in each node.

How can this information be used?

In the IMDb graph database, querying between entities is easy to learn. Using the Cypher Query Language, a user can find things such as all actors that acted in a certain production, all productions that a person has worked on or all other companies that have worked with a certain company on any production. A graph database’s strength is the simplicity of visualizing the data. There are many ways you can query a graph database to find what you need!

Find the name of all actors that acted in Night at the Museum:

MATCH (a:Person)-[:ACTOR_IN]->(b:Production) WHERE b.title = 'Night at the Museum' RETURN a.name,b.title;

Result:

         name          |        title
-----------------------+---------------------
 Asprinio, Stephen     | Night at the Museum
 Blais, Richard        | Night at the Museum
 Bougere, Teagle F.    | Night at the Museum
 Bourdain, Anthony     | Night at the Museum
 Cherry, Jake          | Night at the Museum
 Cheng, Paul Chih-Ping | Night at the Museum
 ...
(56 rows)

Find all productions that Ben Stiller worked on:

MATCH (a:Person)-[b]->(c:Production) WHERE a.name = 'Stiller, Ben' RETURN a.name,label(b),c.title;

Result:

     name     |    label    |                     title
--------------+-------------+-----------------------------------------------
 ...
 Stiller, Ben | actor_in    | The Heartbreak Kid: The Egg Toss
 Stiller, Ben | producer_of | The Hardy Men
 Stiller, Ben | actor_in    | The Heartbreak Kid: Ben & Jerry
 Stiller, Ben | producer_of | The Polka King
 Stiller, Ben | actor_in    | The Heartbreak Kid
 Stiller, Ben | actor_in    | The Watch
 Stiller, Ben | actor_in    | The History of 'Walter Mitty'
 Stiller, Ben | producer_of | The Making of 'The Pick of Destiny'
 Stiller, Ben | actor_in    | The Making of 'The Pick of Destiny'
 ...
(901 rows)

Find all actresses that worked with Sarah Jessica Parker:

MATCH (a:Person)-[b:ACTRESS_IN]->(c:Production)<-[d:ACTRESS_IN]-(e:Person) WHERE a.name = 'Parker, Sarah Jessica' RETURN DISTINCT e.name;

Result:

              name
---------------------------------
 Aaliyah
 Aaron, Caroline
 Aaron, Kelly
 Abascal, Nati
 Abbott, Diane
 Abdul, Paula
 ...
(3524 rows)

Summary

The most powerful aspects of a graph database are flexibility and visualization capabilities.

In the future, we plan to implement a one-step importing script. Currently, the importing script is two-phased: the first step is to load into relational tables and the second step is to load into the graph. Additionally, AgensGraph has worked with Gephi to release a data import plugin. The Gephi Connector allows for graph visualization and analysis. For more information, please visit www.bitnine.net and www.agensgraph.com.

Categories: MySQL

Percona Monitoring and Management 1.2.1 is Now Available

MySQL Performance Blog - Wed, 2017-08-16 17:31

Percona announces the release of Percona Monitoring and Management 1.2.1 on August 16, 2017.

For install and upgrade instructions, see Deploying Percona Monitoring and Management.

This hotfix release improves memory consumption.

Changes in PMM Server

We’ve introduced the following changes in PMM Server 1.2.1:

Bug fixes
  • PMM-1280: PMM Server affected by NGINX CVE-2017-7529. An integer overflow exploit could result in a DoS (denial of service) for the affected NGINX service if the max_ranges directive is not set. This problem is solved by setting the max_ranges directive to 1 in the NGINX configuration (see the sketch below).
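For illustration, the mitigation amounts to a one-line NGINX directive similar to the following. This is a generic sketch, not the exact PMM Server configuration file, and the listener shown is an assumption.

# Sketch only: cap byte-range requests to mitigate CVE-2017-7529
server {
    listen 80;      # example listener, not the actual PMM Server setup
    max_ranges 1;   # allow at most one range per request
}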

Improvements
  • PMM-1232: Updated the default value of the METRICS_MEMORY configuration setting. Previous versions of PMM Server used a value for the METRICS_MEMORY configuration setting that allowed Prometheus to use up to 768MB of memory. PMM Server 1.2.0 switched to the storage.local.target-heap-size setting, whose default value is 256MB. Unintentionally, this reduced the amount of memory available to Prometheus, and its performance was affected as a result. To restore the performance of Prometheus, the default for storage.local.target-heap-size has been set to 768MB (see the sketch below for overriding it yourself).
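If you want to set the memory limit explicitly rather than rely on the default, the METRICS_MEMORY value (in kilobytes) can be passed when creating the PMM Server container. This is a sketch based on the standard Docker deployment; the container names, port and data container are assumptions to adapt to your environment.

# Sketch: create PMM Server with an explicit 768MB metrics memory limit (value in KB)
docker run -d -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  -e METRICS_MEMORY=786432 \
  --restart always \
  percona/pmm-server:1.2.1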

About Percona Monitoring and Management

Percona Monitoring and Management (PMM) is an open-source platform for managing and monitoring MySQL and MongoDB performance. Percona developed it in collaboration with experts in the field of managed database services, support and consulting.

PMM is a free and open-source solution that you can run in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

A live demo of PMM is available at pmmdemo.percona.com.

We’re always happy to help! Please provide your feedback and questions on the PMM forum.

If you would like to report a bug or submit a feature request, please use the PMM project in JIRA.

Categories: MySQL

Upcoming Webinar Thursday, August 17: Efficient CRUD Queries in MongoDB

MySQL Performance Blog - Wed, 2017-08-16 16:02

Join Percona’s Senior Technical Operations Architect Tim Vaillancourt as he presents “Efficient CRUD Queries in MongoDB” on Thursday, August 17, 2017, at 10:00 am PDT / 1:00 pm EDT (UTC-7).

Reserve Your Spot

MongoDB has its own commands and function structures that ask the database to do work. In this talk, we will discuss how queries, updates, deletes and inserts work. However, we will go beyond these actions and also review what operators you should and shouldn’t use, and how they might actually drive your schema choices. Then we will talk about operationally sound ways for bulk deleting and inserting when you want to limit the impact on production (if other tools are too aggressive).

Register for the webinar here.

Timothy Vaillancourt, Sr. Technical Operations Architect for MongoDB

Tim joined Percona in 2016 as Sr. Technical Operations Architect for MongoDB, with the goal of making MongoDB operations as smooth as possible. With experience operating infrastructures in industries such as government, online marketing/publishing, SaaS and gaming, combined with experience tuning systems from the hard disk all the way up to the end-user, Tim has spent time in nearly every area of the modern IT stack (with many lessons learned). Tim is based in Amsterdam, NL and enjoys traveling, coding and music. Before Percona, Tim was the Lead MySQL DBA of Electronic Arts’ DICE studios, helping launch and operate some of the largest games in the world (“Battlefield” series, “Mirrors Edge” series, “Star Wars: Battlefront”) smoothly. At the same time, he also led the automation of MongoDB deployments for EA systems. Before the role of DBA at EA’s DICE studio, Tim served as a subject matter expert in NoSQL databases, queues and search on the Online Operations team at EA SPORTS. Prior to moving to the gaming industry, Tim served as a Database/Systems Admin operating a large MySQL-based SaaS infrastructure at AbeBooks/Amazon Inc.
Categories: MySQL

Upcoming Webinar Wednesday August 16: Lock, Stock and Backup – Data Guaranteed

MySQL Performance Blog - Tue, 2017-08-15 16:45

Join Percona’s Technical Services Manager Jervin Real as he presents Lock, Stock and Backup: Data Guaranteed on Wednesday, August 16, 2017, at 7:00 am PDT / 10:00 am EDT (UTC-7).

Reserve Your Spot

Backups are crucial in a world where data is digital and uptime is revenue. Environments are no longer bound to traditional data centers, and span multiple cloud providers and many heterogeneous environments. We need bulletproof backups and impeccable recovery processes. This talk aims to answer the question “How should I backup my MySQL databases?” by providing 3-2-1 backup designs, best practices and real-world solutions leveraging key technologies, automation techniques and major cloud provider services.

Register for the webinar here.

Jervin Real, Technical Services Manager

As Technical Services Manager, Jervin partners with Percona’s customers on building reliable and highly performant MySQL infrastructures while also doing other fun stuff like watching cat videos on the internet. Jervin joined Percona in April 2010. Starting as a PHP programmer, Jervin quickly learned the LAMP stack. He has worked on several high-traffic sites and a number of specialized web applications (such as mobile content distribution). Before joining Percona, Jervin also worked with several hosting companies, providing care for customer hosted services and data on both Linux and Windows.
Categories: MySQL

Learning MySQL 5.7: Q & A

MySQL Performance Blog - Fri, 2017-08-11 18:00

In this post I’ll answer questions I received in my Wednesday, July 19, 2017, webinar Learning MySQL 5.7!

First, thank you all who attended the webinar. The link to the slides and the webinar recording can be found here.

I received a number of interesting questions in the webinar that I’ve followed up with below.

Would there be a big difference on passing from 5.1 to 5.6 before going to 5.7 or, at this point, would it be roughly the same?

The biggest risk of jumping between versions, in this case 5.1 to 5.6, is reverting in case of problems. Rollbacks don’t happen often, but they do happen, and you have to make sure you have the infrastructure in place whenever you decide to execute one. These upgrade steps are not officially supported by Oracle nor even recommended here at Percona. Having said that, as long as your tests (checksums, pt-upgrade) and rollback plan work, this shouldn’t be a problem.
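As a rough example of that kind of testing, pt-upgrade can replay a query log against both server versions and report differences in results and errors. The hostnames and the slow log file below are placeholders.

# Sketch: compare query results between the old and new servers (hosts and log file are placeholders)
pt-upgrade slow.log h=old-server h=new-server --user=percona --ask-pass > upgrade_report.txt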

One unforgettable issue I have personally encountered is an upgrade from 5.1 to 5.6 via dump and reload. The 5.6 version ran with the ROW binlog format, preventing replication back to 5.1 because of a limitation with TIMESTAMP columns. Similarly, downgrading without replication means you have to deal with changes to the MySQL system schema, which obviously require some form of downtime.

Additionally, replication from 5.7 to 5.5 will not work because of the additional metadata information that 5.7 creates (i.e., GTID even when GTID is disabled).

After an in-place upgrade of a Percona XtraDB Cluster from 5.5 to 5.7 (through 5.6), innodb_file_per_table is enabled by default and the database is now almost twice the size. It was a 40GB database; now it’s 80GB because every table has its own file, but ibdata1 is still 40GB. Is there any solution for this (that doesn’t involve mysqldump and dropping tables), and how can this be avoided in future upgrades?

The reason this might be the case is that after upgrading, a number (or possibly all) of tables were [re]created. This would obviously create separate tablespaces for each. One way I can think of reclaiming that disk space is through a familiar upgrade path:

  1. Detach one of the nodes and make it an async replica of the remaining nodes in the cluster
  2. Dump and reload data from this node, then resume replication
  3. Join the other nodes from the cluster as additional nodes of a new cluster using the async replica
  4. Once there is only one node remaining in the original cluster, you can switch to the new cluster for production
  5. Rejoin the last node from the original cluster into the new cluster to complete the process

Depending on the semantics of your switch, it may or may not involve downtime. For example, if you use ProxySQL this should be a transparent operation.

One way to avoid this problem is by testing. Testing the upgrade process in a lab will expose this kind of information even before deploying the new version into production, allowing you to adjust your process accordingly.

What is a possible impact on upgrades going from the old table format to Barracuda?

So far I am not aware of any negative impact – except if you upgrade and need to downgrade but have since created indexes with prefixes larger than what was supported on the previous version (see large_index_prefix and Barracuda documentation).

Upgrading to Barracuda and one of its supported row formats specifically allows memory-constrained systems to save a little more. With BLOB/TEXT columns stored off-page, they will not fill the buffer pool unless they are needed.
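For illustration, moving a table to an off-page-capable row format might look like the following on 5.6/5.7. The table name is a placeholder, and setting innodb_file_format is only needed on versions where Barracuda is not already the default.

-- Sketch: enable Barracuda (if needed) and rebuild an example table with the DYNAMIC row format
SET GLOBAL innodb_file_format = 'Barracuda';
ALTER TABLE mydb.mytable ROW_FORMAT=DYNAMIC;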

How do you run mysql_upgrade in parallel?

Good question, I actually wrote about it here.

Can you elaborate on ALTER progress features, and is it also applicable to “Optimization ” query?

I was not able to get more details on the “Optimization” part of this question. I can only assume this too was meant to be a table rebuild via OPTIMIZE TABLE. First I would like to point out that OPTIMIZE has been an online DDL operation since 5.6 (with a few limitations). As such, there is almost no point in monitoring it. Also, for the cases where the online DDL does not apply to OPTIMIZE, under the hood this is ALTER TABLE .. FORCE – a full table rebuild.

Now, for the actual ALTER process doing a table copy/rebuild, MySQL 5.7 provides some form of progress indication as to how much work has been done. However, it does not necessarily provide an estimate of the actual time it would take to complete. Each ALTER process has different phases which can vary under different conditions. Alternatively, you can also employ other ways of monitoring progress as described in the post.
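A minimal sketch of reading that progress information from the Performance Schema in 5.7 is shown below; the stage instruments and consumers need to be enabled first if they are off.

-- Enable the InnoDB ALTER TABLE stage instruments and the stage consumers (if not already on)
UPDATE performance_schema.setup_instruments
   SET ENABLED = 'YES', TIMED = 'YES'
 WHERE NAME LIKE 'stage/innodb/alter%';
UPDATE performance_schema.setup_consumers
   SET ENABLED = 'YES'
 WHERE NAME LIKE 'events_stages_%';

-- While the ALTER is running, compare the work done against the estimate
SELECT EVENT_NAME, WORK_COMPLETED, WORK_ESTIMATED
  FROM performance_schema.events_stages_current;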

We migrated from Percona Server 5.7.11 to 5.7.17 and are facing “Column 1 of table 'x.x' cannot be converted from type 'varchar(100)' to type 'varchar(100)'”.

This is interesting: what we have seen so far are errors with different datatypes or sizes, which most likely means an inconsistency between the table structures if the error is coming from replication. We will need more information on what steps were taken during the upgrade to tell what happened here. Our forums would be the best place to continue this conversation. To begin with, perhaps slave_type_conversions might help if the table structures in replication are the same (see the sketch below).
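If the structures really do match and the error persists, relaxing attribute-promotion rules on the replica is the usual knob to try. This is only a sketch and should be tested before being used in production.

-- Sketch: allow non-lossy type conversions on the replica
SET GLOBAL slave_type_conversions = 'ALL_NON_LOSSY';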

Is the Boost Geometry almost on par with Postgres GIS functions?

I cannot answer this with authority or certainty. I’ve used GIS functions in MySQL, but not developed code for it. Although Boost::Geometry was chosen because of its well-designed API, rapid development and license compatibility, it does not necessarily mean it is more mature than PostGIS (which is widely adopted).

What is the best bulk insert method for MySQL 5.7?

The best option can be different in many situations, so we have to put context here. For this reason, let me give some example scenarios and what might work best:

  • On an upgrade process where you are doing a full dump and reload, parallelizing the process by using mydumper/myloader or mysqlpump will save a lot of time, depending on the hardware resources available.
  • For bulk INSERTs from your application that happen at regular intervals, multi-row inserts are always ideal to reduce disk writes per insert. LOAD DATA INFILE is also a popular option if you can use it (see the sketch below).
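Here is a sketch of both approaches; the table, columns and file path are examples only.

-- Multi-row INSERT: several rows per statement to reduce per-insert overhead (example table)
INSERT INTO mydb.events (id, payload)
VALUES (1, 'a'), (2, 'b'), (3, 'c');

-- LOAD DATA INFILE: bulk-load from a CSV file (path and options are examples)
LOAD DATA INFILE '/tmp/events.csv'
INTO TABLE mydb.events
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';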

Again, thank you for attending the webinar – if you have additional questions head on out to the Percona Forums!

Categories: MySQL

This Week in Data: Thoughts from Percona Chief Evangelist Colin Charles

MySQL Performance Blog - Thu, 2017-08-10 13:58

Welcome to a new weekly column on the Percona blog. My name is Colin Charles, Percona Chief Evangelist, and I have been involved in MySQL, MariaDB Server and the open source community for over a decade. Now I am at Percona, and this is my weekly column.

When you start a column, you have to ask yourself what you’ll be writing about. Keeping the focus on the reader is what’s crucial. With this in mind, I plan to cover happenings, pointers and maybe even musings in this column. It’s August, and while many are away on summer vacations, there’s still plenty happening in the database world. So maybe this will be a little like the now-defunct Weekly MySQL News. It will be broader than just MySQL, however, and focus on open source databases (after all, Percona’s mission is to champion unbiased open source database solutions).

So let’s get started! I look forward to feedback/tips via comments, or you can email me directly at colin.charles@percona.com. Feel free to socialize with me! I’m @bytebot on Twitter.

Percona Live Europe Dublin

Percona Live Open Source Database Conference Europe 2017 started in London, moved to Amsterdam (where last year it sold out at 400+ attendees) and is now docking itself in Dublin. Dublin, famous for Guinness, is also now famous as a European tech hub. With our expanded conference focus beyond just the MySQL ecosystem, Percona Live Europe also includes MongoDB, PostgreSQL and other open source databases.

Where are we at with the event? The sneak peek schedule is out, and we aim to have a more or less full conference schedule by the second week of August. The conference committee is at its most diverse, with two MongoDB Masters to ensure tighter content around MongoDB, and also two members who are prominent in the DevOps world.

Naturally, this evolution is good because you are now getting the “best of the best” talks, as there are fewer slots to compete for when it comes to topics! Registration is open, and you’ll want to sign up as soon as possible to lock in the best available rates.

Percona Live Europe in Dublin is also a great place to be a sponsor as a smaller, intimate event helps ensure that people pop by your expo hall booths. This is great for promoting your products, hiring new folks and so on. Find out more about sponsorship here.

Releases

Link List

In coming posts, I expect to cover upcoming events that I’m participating in, and also thoughts about ones that I’ve been to. See you soon!

Categories: MySQL

How to Configure Aurora RDS Parameters

MySQL Performance Blog - Wed, 2017-08-09 13:51

In this blog post, we’ll look at some tips on how to configure Aurora RDS parameters.

I was recently deploying a few Aurora RDS instances, a process very similar to configuring a regular RDS instance. I noticed a few minor differences in the way you configure Aurora RDS parameters, and very few articles on how the commands should be structured (for RDS as well as Aurora). The only real literature available is the official Amazon RDS documentation.

This blog provides a concise “how-to” guide to quickly change Aurora RDS parameters using the AWS CLI. Aurora retains the parameter group model introduced with RDS, with new instances having the default read-only parameter groups. For a new instance, you need to create and allocate a new parameter group (this requires a DB reboot). After that, you can apply changes to dynamic variables immediately. In other words, the first time you add the DB parameter group you’ll need to reboot even if the variable you are configuring is dynamic. It’s best to create a new DB parameter group when initializing your clusters. Nothing stops you from adding more than one host to the same DB Parameter Group rather than creating one per instance.

In addition to the DB Parameter Group, each instance is also allocated a DB Cluster Parameter Group. The DB Parameter Group is used for instance-level parameters, while the DB Cluster Parameter Group is used for cluster-level parameters (and applies to all instances in a cluster). You’ll find some of the MySQL engine variables can only be found in the DB Cluster Parameter Group. Here you will find a handy reference of all the DB cluster and DB instance parameters that are viewable or configurable for Aurora instances.

To run these commands, you’ll need to have the “aws” cli tool installed and configured. Note that the force-failover option used for RDS instances doesn’t apply to Aurora. You should perform either a controlled failover or let Aurora handle this. Also, the group family to use for Aurora is “oscar5.6”. The commands to set this up are as follows:

aws rds create-db-parameter-group --db-parameter-group-name percona-opt --db-parameter-group-family oscar5.6 --description "Percona Optimizations"
aws rds modify-db-parameter-group --db-parameter-group-name percona-opt --parameters "ParameterName=max_connections,ParameterValue=5000,ApplyMethod=immediate"
# For each instance-name:
aws rds modify-db-instance --db-instance-identifier <instance-name> --db-parameter-group-name=percona-opt
aws rds reboot-db-instance --db-instance-identifier <instance-name>

Once you create the initial DB parameter group, configure the variables as follows:

aws rds modify-db-parameter-group --db-parameter-group-name <instance-name> --parameters "ParameterName=max_connect_errors,ParameterValue=999999,ApplyMethod=immediate"
aws rds modify-db-parameter-group --db-parameter-group-name <instance-name> --parameters "ParameterName=max_connect_errors,ParameterValue=999999,ApplyMethod=immediate"
## Verifying change:
aws rds describe-db-parameters --db-parameter-group-name aurora-instance-1 | grep -B7 -A2 'max_connect_errors'

Please keep in mind, it can take a few seconds to propagate changes to nodes. Give it a moment before checking the values with “show global variables”. You can configure the DB Cluster Parameter group similarly, for example:

# Create a new db cluster parameter group
aws rds create-db-cluster-parameter-group --db-cluster-parameter-group-name percona-cluster --db-parameter-group-family oscar5.6 --description "new cluster group"
# Tune a variable on the db cluster parameter group
aws rds modify-db-cluster-parameter-group --db-cluster-parameter-group-name percona-cluster --parameters "ParameterName=innodb_flush_log_at_trx_commit,ParameterValue=2,ApplyMethod=immediate"
# Allocate the new db cluster parameter to your cluster
aws rds modify-db-cluster --db-cluster-identifier <cluster_identifier> --db-cluster-parameter-group-name=percona-cluster
# And of course, for viewing the cluster parameters
aws rds describe-db-cluster-parameters --db-cluster-parameter-group-name=percona-cluster

I hope you find this article useful, please make sure to share with the community!

Categories: MySQL

Avoiding the “An optimized (without redo logging) DDL operation has been performed” Error with Percona XtraBackup

MySQL Performance Blog - Tue, 2017-08-08 13:51

This blog discusses newly added options for Percona XtraBackup 2.4.8 and how they can impact your database backups.

To avoid issues with MySQL 5.7 skipping the redo log for DDL, Percona XtraBackup has implemented three new options (xtrabackup --lock-ddl, xtrabackup --lock-ddl-timeout, xtrabackup --lock-ddl-per-table) that can be used to place MDL locks on tables while they are copied.

So why do we need those options? Let’s discuss the process that got us there.

Originally, we found problems while running DDLs: Percona XtraBackup produced corrupted backups as described in two reports:

After experimenting, it was clear that the root cause of those failures was MySQL 5.7 skipping redo logging for some DDLs. This comes from a newly added MySQL feature named Sorted Index Builds. You can read more at the following links:

To prevent this, we introduced a solution: when Percona XtraBackup detects that redo logging has been skipped, it aborts the backup rather than creating a corrupted one.

The scary error message you get with this fix is:

[FATAL] InnoDB: An optimized (without redo logging) DDL operation has been performed. All modified pages may not have been flushed to the disk yet.
Percona XtraBackup will not be able to take a consistent backup. Retry the backup operation

We need to avoid aborting the backup with this message. So how do we do that? Let’s create a test case first and reproduce the issue.

Prepare two tables:

sysbench /usr/share/sysbench/oltp_insert.lua --db-driver=mysql --mysql-db=db1 --mysql-user=msandbox --mysql-password=msandbox --table-size=2000000 --mysql-socket=/tmp/mysql_sandbox20393.sock prepare
sysbench /usr/share/sysbench/oltp_insert.lua --db-driver=mysql --mysql-db=db2 --mysql-user=msandbox --mysql-password=msandbox --table-size=2000000 --mysql-socket=/tmp/mysql_sandbox20393.sock prepare

Create a test.sh file and place it in the sandbox:

#!/bin/bash
echo "drop table if exists db1.sb1"|./use
echo "create table sb1 as select id,c from sbtest1 where id < 150000;"|./use db1
echo "create unique index ix on sb1 (id)"|./use db1
sleep 1
echo "drop table if exists db2.sb1"|./use
echo "create table sb1 as select id,c from sbtest1 where id < 150000;"|./use db2
echo "create unique index ix on sb1 (id)"|./use db2

Run the script in a loop while the backup is taken:

$ while true; do bash test.sh; done

Try to take a backup:

xtrabackup --defaults-file=/home/shahriyar.rzaev/sandboxes/rsandbox_Percona-Server-5_7_18/master/my.sandbox.cnf --user=msandbox --password='msandbox' --target-dir=/home/shahriyar.rzaev/backup_dir/ps_5.7_master/full/2017-07-26_11-11-45 --backup --host=127.0.0.1 --port=20393 --binlog-info=AUTO --galera-info --parallel 4 --check-privileges --no-version-check

You will likely get something like:

InnoDB: An optimized (without redo logging) DDL operation has been performed. All modified pages may not have been flushed to the disk yet.
PXB will not be able to take a consistent backup. Retry the backup operation

Ok, now we have reproduced the error. To avoid this error, XtraBackup has the new options as mentioned above.

Using --lock-ddl:

xtrabackup --defaults-file=/home/shahriyar.rzaev/sandboxes/rsandbox_Percona-Server-5_7_18/master/my.sandbox.cnf --user=msandbox --password='msandbox' --target-dir=/home/shahriyar.rzaev/backup_dir/ps_5.7_master/full/2017-07-26_11-16-56 --backup --host=127.0.0.1 --port=20393 --binlog-info=AUTO --galera-info --parallel 4 --check-privileges --no-version-check --lock-ddl

The new thing you should notice is:

170726 11:16:56 Executing LOCK TABLES FOR BACKUP...

And the backup status:

xtrabackup: Transaction log of lsn (2808294311) to (2808304872) was copied.
170726 11:20:42 completed OK!

Another new option is --lock-ddl-per-table:

xtrabackup --defaults-file=/home/shahriyar.rzaev/sandboxes/rsandbox_Percona-Server-5_7_18/master/my.sandbox.cnf --user=msandbox --password='msandbox' --target-dir=/home/shahriyar.rzaev/backup_dir/ps_5.7_master/full/2017-07-26_11-31-56 --backup --host=127.0.0.1 --port=20393 --binlog-info=AUTO --galera-info --parallel 4 --check-privileges --no-version-check --lock-ddl-per-table

The new output will look like this:

170726 11:32:33 [01] Copying ./ibdata1 to /home/shahriyar.rzaev/backup_dir/ps_5.7_master/full/2017-07-26_11-31-56/ibdata1
170726 11:32:33 Locking MDL for db1.sb1
170726 11:32:33 [02] Copying ./db1/sb1.ibd to /home/shahriyar.rzaev/backup_dir/ps_5.7_master/full/2017-07-26_11-31-56/db1/sb1.ibd
170726 11:32:33 Locking MDL for db1.sbtest1
170726 11:32:33 Locking MDL for db2.sb1
170726 11:32:33 [03] Copying ./db1/sbtest1.ibd to /home/shahriyar.rzaev/backup_dir/ps_5.7_master/full/2017-07-26_11-31-56/db1/sbtest1.ibd
170726 11:32:33 [04] Copying ./db2/sb1.ibd to /home/shahriyar.rzaev/backup_dir/ps_5.7_master/full/2017-07-26_11-31-56/db2/sb1.ibd
170726 11:32:33 [04] ...done
170726 11:32:33 >> log scanned up to (2892754398)
170726 11:32:34 Locking MDL for db2.sbtest1

The result of the backup:

170726 11:35:45 Unlocking MDL for all tables
xtrabackup: Transaction log of lsn (2871333326) to (2892754764) was copied.
170726 11:35:45 completed OK!

Another thing I should add here is about using --lock-ddl with non-Percona Server for MySQL servers. For example, using it with MariaDB:

2017-07-26 12:08:32 ERROR FULL BACKUP FAILED!
2017-07-26 12:08:37 ERROR
170726 12:08:32 Connecting to MySQL server host: 127.0.0.1, user: msandbox, password: set, port: 10207, socket: /tmp/mysql_sandbox10207.sock
Using server version 10.2.7-MariaDB
170726 12:08:32 Error: LOCK TABLES FOR BACKUP is not supported.

But you can use --lock-ddl-per-table with any server. Use --lock-ddl-per-table with caution: it can block updates to tables on highly loaded servers under some circumstances. Let’s explore one:

Table:
CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY, b TEXT);
Cases:
connection 1:
- BEGIN; SELECT * FROM sb1 LIMIT 1; <--- MDL
connection 2:
- UPDATE sb1 SET c = '288' WHERE id = 34; <--- completes OK
connection 3:
- CREATE INDEX sb1_1 ON sb1 (c(10)); <--- WAITING for MDL
connection 2:
- UPDATE sb1 SET c = '288' WHERE id = 34; <--- WAITING for MDL
connection 1:
- COMMIT;
connection 2 and 3 are able to complete now

If one connection holds an MDL lock, and another connection does ALTER TABLE (CREATE INDEX is mapped to an ALTER TABLE statement to create indexes), then updates to that table are blocked.
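To see who is holding or waiting on those MDL locks in 5.7, the Performance Schema exposes a metadata_locks table. This is a general sketch, not tied to the backup example; the MDL instrument is off by default and must be enabled for the table to be populated.

-- Enable MDL instrumentation (off by default in 5.7) and inspect current metadata locks
UPDATE performance_schema.setup_instruments
   SET ENABLED = 'YES'
 WHERE NAME = 'wait/lock/metadata/sql/mdl';

SELECT OBJECT_SCHEMA, OBJECT_NAME, LOCK_TYPE, LOCK_STATUS, OWNER_THREAD_ID
  FROM performance_schema.metadata_locks;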

Testing this with the backup process is quite easy:

Sample table:

CREATE TABLE `sb1` (
  `id` int(11) NOT NULL DEFAULT '0',
  `c` char(120) NOT NULL DEFAULT '',
  UNIQUE KEY `ix` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

select count(*) from sb1;
+----------+
| count(*) |
+----------+
|   149999 |
+----------+

select * from sb1 limit 3;
+----+-------------------------------------------------------------------------------------------------------------------------+
| id | c                                                                                                                       |
+----+-------------------------------------------------------------------------------------------------------------------------+
|  1 | 83868641912-28773972837-60736120486-75162659906-27563526494-20381887404-41576422241-93426793964-56405065102-33518432330 |
|  2 | 38014276128-25250245652-62722561801-27818678124-24890218270-18312424692-92565570600-36243745486-21199862476-38576014630 |
|  3 | 33973744704-80540844748-72700647445-87330233173-87249600839-07301471459-22846777364-58808996678-64607045326-48799346817 |
+----+-------------------------------------------------------------------------------------------------------------------------+

So our “connection 1:” is an xtrabackup command:

xtrabackup --defaults-file=/home/shahriyar.rzaev/sandboxes/rsandbox_Percona-Server-5_7_18/master/my.sandbox.cnf --user=msandbox --password='msandbox' --target-dir=/home/shahriyar.rzaev/backup_dir/ps_5.7_master/full/2017-07-28_07-55-30 --backup --host=127.0.0.1 --port=20393 --binlog-info=AUTO --galera-info --parallel 4 --check-privileges --no-version-check --lock-ddl-per-table

So after running the backup command and doing the same steps for “connection 2” and “connection 3,” the result is something like this in processlist:

show processlist;
+----+----------+-----------------+------+---------+------+---------------------------------+----------------------------------------+-----------+---------------+
| Id | User     | Host            | db   | Command | Time | State                           | Info                                   | Rows_sent | Rows_examined |
+----+----------+-----------------+------+---------+------+---------------------------------+----------------------------------------+-----------+---------------+
|  4 | root     | localhost       | db1  | Sleep   |   28 |                                 | NULL                                   |         0 |             1 |
| 10 | root     | localhost       | db1  | Query   |   26 | Waiting for table metadata lock | CREATE INDEX sb1_1 ON sb1 (c(10))      |         0 |             0 |
| 11 | root     | localhost       | db1  | Query   |    6 | Waiting for table metadata lock | UPDATE sb1 SET c = '288' WHERE id = 34 |         0 |             0 |
| 12 | root     | localhost       | NULL | Query   |    0 | starting                        | show processlist                       |         0 |             0 |
| 13 | msandbox | localhost:36546 | NULL | Sleep   |   31 |                                 | NULL                                   |         1 |           116 |
| 14 | msandbox | localhost:36550 | NULL | Sleep   |   17 |                                 | NULL                                   |         1 |             1 |
+----+----------+-----------------+------+---------+------+---------------------------------+----------------------------------------+-----------+---------------+
6 rows in set (0.00 sec)

Updates only complete after the backup finishes, as described. It should be clear now why you should use this option with caution.

The last thing we should discuss is that if you do not want to use any “hacks” with xtrabackup, you can do things on the MySQL side, such as:

  1. Avoiding bad DDLs
Categories: MySQL

Webinar Wednesday August 9, 2017: MongoDB Security – Making Things Secure by Default

MySQL Performance Blog - Mon, 2017-08-07 19:01

Join Percona’s Senior Technical Services Engineer Adamo Tonete as he presents MongoDB Security: Making Things Secure by Default on Wednesday, August 9, 2017, at 11:00 am PDT / 2:00 pm EDT (UTC-7).

Register Now

MongoDB security breaches have regularly been in the news. What isn’t mentioned, however, is that it’s all been avoidable. Simple mistakes add up to giant headaches, and loss of management and customer confidence in you (the DBA). We will cut through the marketing hype and give you a real-world playbook on what to do and how to do it when securing your system. This includes enabling authentication, understanding and building custom roles and users, knowing how to encrypt your data, using LDAP/AD and more.

Register for the webinar here.

Adamo Tonete, Senior Technical Services Engineer

Adamo joined Percona in 2015, after working as a MongoDB/MySQL Database Administrator for three years. As the main database admin of a startup, he was responsible for suggesting the best architecture and data flows for a worldwide company in a 24/7 environment. Before that, he worked as a Microsoft SQL Server DBA for a large e-commerce company, mainly on performance tuning and automation. Adamo has almost eight years of experience working as a DBA, and in the past three years he has moved to NoSQL technologies without giving up relational databases. He plays videogames and studies everything related to engines. Adamo lives with his wife in São Paulo, Brazil.

 

Categories: MySQL

Meditation: Captive vs Directed Attention

Xaprb, home of innotop - Sat, 2017-08-05 14:07

I wrote previously about meditation, and in a subsequent post I wrote about the meditative qualities I found in writing cursive. As my meditation practice continues, I find myself seeing these experiences differently than I did before.

Daily Practice Helps

If I miss a day of practice, I notice that my mind wanders more.

It helps me to remember that there’s no “wrong” way to “do” meditation. The mind’s job is to think. In meditation, my goal is not to get “better at it” or to silence my mind. My goal is not to suppress thoughts. It is solely to notice and release thoughts, ideally without getting involved in them.

Noticing that my mind is getting distracted and thinking about things is actually the point of meditation. It’s working. Just release it and return to the breath.

Positive Thinking and Affirmations

I have not found positive thinking or affirmations to work for me. Meditation has helped me see that positive thinking is like trying to put good thoughts into a cup that’s already filled. I must empty my cup before I can fill it again.

In turning to meditation to allow my cup to empty (more through evaporation than pouring out), I discovered that when the cup empties, positive thoughts are not necessary. The underlying state of the mind is already good. Underneath the ripples, I am already the person I thought I wanted to be.

The Hardy Boys

I read a lot of Hardy Boys books when I was young. I remember a scene where Joe Hardy learned to withstand torture by focusing on the pain, leaving no room for fear of the pain.

I had a somewhat painful childhood. I tried my version of the Joe Hardy technique—detaching and taking my mind far away—and it worked. But eventually, like Dr. Jekyll, I became unable to remain present in the moment when I wanted to. Farm work is extremely dangerous and I had some close calls. Living on autopilot is not a good way to stay safe. Meditation helps me practice existing.

In a Trance

Other people have reacted in various ways to my meditation practice. Although I write about it publicly on this blog, I don’t really make it a part of my interactions with others. But some of those who noticed that I was meditating offered their opinions to me. One person told me that I was wrong to think that meditation would increase my intelligence. (I didn’t think it would; this was his interpretation.)

Another person referred to meditation as a trance. To be fair, when I began meditating I did think of it as a trance-like state. In my previous blog post on cursive writing, I mentioned this in the context of losing awareness of the world. Since then, I have come to see it very differently. When I am practicing skillfully, I am anything but absorbed or oblivious to the world. I’m hyper-alert, relaxed, focused, aware, poised, attentive. I’m directing my attention, and observing what arises in the mind. It’s an incredibly joyful experience, by the way.

In contrast, I now experience being absorbed in a task as very different. When I am in a state of “flow” my observer mind is gone, and my attention is captive, not directed; I have no awareness of myself, my thoughts, the passage of time, etc.

Meditation, for me, is hyper-awareness and focus on the present moment. The familiar trance state of focus is the opposite of this.

Books That Help

I’ve learned meditation by doing it, but I’ve also benefited from reading many books. If I were to start over from scratch, I think I would only read two:

  1. Wherever You Go, There You Are: Mindfulness Meditation in Everyday Life, by Jon Kabat-Zinn.
  2. Zen Mind, Beginner’s Mind: Informal Talks on Zen Meditation and Practice, by Shunryu Suzuki.

I hope these thoughts are helpful to you.

Pic Credit

Categories: MySQL

Saturation Metrics in PMM 1.2.0

MySQL Performance Blog - Fri, 2017-08-04 13:23

One of the new graphs added to Percona Monitoring and Management (PMM) is saturation metrics. This blog post explains how to use the information provided by these graphs.

You might have heard about Brendan Gregg’s USE Method  (Utilization-Saturation-Errors) as a way to analyze the performance of any system. Our goal in PMM is to support this method fully over time, and these graphs take us one step forward.

When it comes to utilization, there are many graphs available in PMM. There is the CPU Usage graph:

There is also Disk IO Utilization:

And there is Network Traffic:

If you would like to look at saturation-type metrics, there is the classical Load Average graph:

While Load Average is helpful for understanding system saturation in general, it does not really distinguish whether it is the CPU or Disk that is saturated. Load Average, as the name says, is already averaged — so we can’t really observe short saturation spikes with Load Average. It is averaged for at least one minute. Finally, the problem with Load Average is that it does not take the number of CPU cores/threads into account. Suppose I have a CPU-bound Load Average of 16, for example. That is quite a load and will cause high saturation and queueing if you have two CPU threads. But if you have 64 threads, then 16 becomes a trivial load with no saturation at all.

Let’s take a look at the Saturation Metrics graph:

It provides us two metrics: one showing the CPU load and the other showing the IO load. These values roughly correspond to the “r” and “b” columns in VMSTAT output:

These are sampled every second and then averaged over the reporting interval.

We also normalize the CPU load by dividing the raw number of runnable processes by the number of threads available. “Rocky” has 56 threads, which is why the normalized CPU load is about one even though the number of runnable processes shown by VMSTAT is around 50.
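A rough way to reproduce this normalization by hand from the shell is sketched below, sampling vmstat once and dividing the “r” column by the CPU thread count.

# Sketch: normalized CPU load = runnable tasks (vmstat "r" column) / available CPU threads
r=$(vmstat 1 2 | tail -1 | awk '{print $1}')   # take the second sample; the first column is "r"
threads=$(nproc)
echo "scale=2; $r / $threads" | bc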

We do not normalize the IO load, as systems can have multiple IO devices and the number of requests they can handle in parallel is largely unknown. If you want to understand specific IO device performance, you should check out the Disk Performance Dashboard.

Testing Saturation Metrics in Practice

Let’s see if saturation graphs indeed show us when CPU saturation is the issue. I will use a sysbench CPU test for illustration, run as:

sysbench cpu  --cpu-max-prime=100000 --threads=1 --time=60 run

This will use the said number of threads to execute compute jobs, each of which will compute the said number of prime numbers. If we have enough CPU resources available, with no saturation, the latency of executing such requests should be about the same. When we overload the system, so there are not enough CPU execution units to process everything in parallel, the average latency should increase.

root@ts140i:/mnt/data# sysbench cpu  --cpu-max-prime=100000 --threads=1 --time=300 run
sysbench 1.0.7 (using bundled LuaJIT 2.1.0-beta2)
Running the test with following options:
Number of threads: 1
Initializing random number generator from current time
Prime numbers limit: 100000
Initializing worker threads...
Threads started!
General statistics:
    total time:                          300.0234s
    total number of events:              12784
Latency (ms):
         min:                                 23.39
         avg:                                 23.47
         max:                                 28.07
         95th percentile:                     23.52
         sum:                             300018.06

As we can see with one thread working, the average time it takes to handle a single request is 23ms. Obviously, there is no saturation happening in this case:

“Ts140i” has four CPU cores, and as you can see the Normalized CPU Load stays below one. You may wonder why it isn’t closer to 0.25 in this case, with one active thread and four cores available. The reason is that at exactly the time the metrics are being captured, there often happen to be an additional two to three threads active to facilitate the process. They are only active for a few milliseconds at a time, so they do not produce much load, but they tend to skew the number a little bit.

Let’s now run with four threads. The number of threads matches the number of CPU cores available (and they are true cores in this case, no hyperthreading). In this case, we don’t expect much increase in the event processing time.

root@ts140i:/mnt/data# sysbench cpu  --cpu-max-prime=100000 --threads=4 --time=300 run
sysbench 1.0.7 (using bundled LuaJIT 2.1.0-beta2)
Running the test with following options:
Number of threads: 4
Initializing random number generator from current time
Prime numbers limit: 100000
Initializing worker threads...
Threads started!
General statistics:
    total time:                          300.0215s
    total number of events:              48285
Latency (ms):
         min:                                 24.19
         avg:                                 24.85
         max:                                 43.61
         95th percentile:                     24.83
         sum:                            1200033.93

As you can see, the test confirms the theory: average latency increased by only about 6%, with the Normalized CPU Load in the saturation metrics mostly hovering between 1 and 2:

Let’s now do the test with 16 threads, which is four times more than the available CPU cores. We should see the latency dramatically increase due to CPU overload (or saturation). The same will happen to your CPU-bound MySQL queries if you have more concurrency than CPUs available.

root@ts140i:/mnt/data# sysbench cpu  --cpu-max-prime=100000 --threads=16 --time=300 run
sysbench 1.0.7 (using bundled LuaJIT 2.1.0-beta2)
Running the test with following options:
Number of threads: 16
Initializing random number generator from current time
Prime numbers limit: 100000
Initializing worker threads...
Threads started!
General statistics:
    total time:                          300.0570s
    total number of events:              48269
Latency (ms):
         min:                                 27.83
         avg:                                 99.44
         max:                                189.05
         95th percentile:                    121.08
         sum:                            4799856.52

We can see it takes about four times longer to process each request due to CPU overload and queueing. Let’s see what saturation metrics tell us:

As you can see, Normalized CPU Load floats between four and five on the graph, consistent with the saturation we’re observing.

You may ask whether the CPU utilization graph helps us here. Not really. You will see 100% CPU usage for both the four-thread and 16-thread runs, while request latencies are completely different.

Summary

As we can see from our test, Normalized CPU Load is very helpful for understanding when the CPU is overloaded. An overloaded CPU causes response times to increase and performance to degrade. Furthermore, you can use it to (roughly) see how serious the overload is. As a rule of thumb, if you see Normalized CPU saturation over two, it indicates your CPUs are overloaded.

Categories: MySQL

Percona Live Europe 2017 Sneak Peek Schedule Up Now! See Available Sessions!

MySQL Performance Blog - Thu, 2017-08-03 14:42

We are excited to announce that the sneak peek schedule for the Percona Live Open Source Database Conference Europe 2017 is up! The Percona Live Open Source Database Conference Europe 2017 is September 25 – 27, at the Radisson Blu Royal Hotel.

The theme of Percona Live Europe 2017 is Championing Open Source Databases, with sessions on MySQL, MariaDB, MongoDB and other open source database technologies, including time series databases, PostgreSQL and RocksDB. This year’s conference will feature one day of tutorials and two days of keynote talks and breakout sessions related to open source databases and software. Tackling subjects such as analytics, architecture and design, security, operations, scalability and performance, Percona Live Europe provides in-depth discussions for your high-availability, IoT, cloud, big data and other changing business needs.

Below are some of our top picks for MySQL, MongoDB and open source database sessions:

Tutorials

Breakout Talks

MySQL:

MongoDB:

Other Open Source Database Topics:

Registration Prices Increase August 9, 2017 – Get Tickets Now for the Best Price!

Just a reminder to everyone out there that the Early Bird discount rate for the Percona Live Open Source Database Conference Europe 2017 ends August 8! The price increases as of August 9, so buy now. The Early Bird rate gets you all the excellent and amazing opportunities that Percona Live Europe offers, at a very reasonable price! Get your tickets as soon as possible for the best price.

Percona Live Europe 2017 Open Source Database Conference will be held at the Radisson Blu Royal Hotel, at Golden Lane 8, Dublin, Ireland.

The Radisson Blu Royal Hotel is a prime location in the heart of Dublin. Enjoy this spacious venue with complimentary WiFi, expert on-site staff and three great restaurants offering a wide variety of meals. Staying for a couple of extra days? Take time to enjoy the different tourist attractions, like traditional beer pubs and XII century castles, located minutes away.

A special hotel rate of EUR 250.00 is available for Percona Live Europe 2017 until August 14, 2017.

You can reserve a room by booking through the Radisson Blu’s reservation site.

  1. Click BOOK NOW at the top right.
  2. Enter your preferred check-in and check-out dates, and how many rooms.
  3. From the drop-down “Select Rate Type,” choose Promotional Code.
  4. Enter the code PERCON to get the discount.

This special deal includes breakfast each morning! The group rate only applies if used within the Percona Live Europe group block dates (September 25-27, 2017).

Sponsor Percona Live

Become a conference sponsor! We have sponsorship opportunities available for this annual MySQL, MongoDB and open source database event. Sponsors become a part of a dynamic and growing ecosystem and interact with hundreds of DBAs, sysadmins, developers, CTOs, CEOs, business managers, technology evangelists, solutions vendors, and entrepreneurs who attend the event.

Categories: MySQL

Percona Toolkit 3.0.4 is Now Available

MySQL Performance Blog - Wed, 2017-08-02 16:44

Percona announces the release of Percona Toolkit 3.0.4 on August 2, 2017.

Percona Toolkit is a collection of advanced command-line tools that perform a variety of MySQL and MongoDB server and system tasks too difficult or complex for DBAs to perform manually. Percona Toolkit, like all Percona software, is free and open source.

You can download Percona Toolkit packages from the web site or install them from the official repositories.

This release includes the following changes:

New Features

  • PT-90: Added collection of information about prepared statements by pt-stalk when Performance Schema is enabled. For more information, see #1642750.
  • PT-91: Added the --preserve-triggers option for pt-online-schema-change to support AFTER triggers (see the sketch after this list).
  • PT-138: Added --output-format option for pt-mongodb-summary to choose between JSON format and the default plain text.
  • PT-141: Added the --output-format=csv parameter for pt-archiver to archive rows in CSV format.
  • PT-142: Added the --only-same-schema-fks option for pt-online-schema-change to check foreign keys only on tables with the same schema as the original table. This should speed up the tool’s execution, but keep in mind that if you have foreign keys referencing tables in other schemas, they won’t be detected. For more information, see #1690122.
  • PT-153: Added the --check-unique-key-change option for pt-online-schema-change to abort if the specified statement for --alter is trying to add a unique index. This is supposed to avoid adding duplicate keys that might lead to silently losing data.
  • PT-173: Added the --truncate-replicate-table option for pt-table-checksum to ensure stale data is removed.
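
As a quick aside on PT-173 and the replicate table: after pt-table-checksum runs, differences are usually found by querying that table directly on a replica. A minimal sketch, assuming the default --replicate table percona.checksums:

-- Run on a replica: lists tables whose chunks differ from the master's counts or checksums.
SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks
FROM percona.checksums
WHERE (master_cnt <> this_cnt
    OR master_crc <> this_crc
    OR ISNULL(master_crc) <> ISNULL(this_crc))
GROUP BY db, tbl;

Stale rows left behind by earlier runs can make this report misleading, which is exactly what --truncate-replicate-table is meant to prevent.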

Bug fixes

  • PT-136: Fixed pt-table-checksum to support tables that have columns with different collations or charsets. For more information, see #1674266.
  • PT-143: Fixed primary key handling by pt-archiver. For more information, see #1691630.
  • PT-144: Limited constraint name in the new table when running pt-online-schema-change. For more information, see #1491674.
  • PT-146: Fixed the --no-check-binlog-format option for pt-table-checksum to work as expected.
  • PT-148: Fixed the use of uninitialized value in printf() for pt-online-schema-change. For more information, see #1693614.
  • PT-151: Fixed pt-table-sync to prevent the point field type from being treated as decimal.
  • PT-154: Reverted PT-116 to remove the --use-insert-ignore option from pt-online-schema-change.
  • PT-161: Fixed the --skip-check-slave-lag feature for pt-table-checksum to safely check for undefined values.
  • PT-178: Fixed regression in --check-slave-lag option for pt-online-schema-change.
  • PT-180: Fixed regression in --skip-check-slave-lag option for pt-online-schema-change.
  • PT-181: Fixed syntax error in pt-online-schema-change.

Other Improvements

  • PT-162: Updated list of tables ignored by pt-table-checksum.

You can find release details in the release notes. Report bugs in Toolkit’s launchpad bug tracker.

Categories: MySQL

Percona Server for MongoDB 3.4.6-1.7 is Now Available

MySQL Performance Blog - Wed, 2017-08-02 13:58

Percona announces the release of Percona Server for MongoDB 3.4.6-1.7 on August 2, 2017. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open source, fully compatible, highly-scalable, zero-maintenance downtime database supporting the MongoDB v3.4 protocol and drivers. It extends MongoDB with the Percona Memory Engine and MongoRocks storage engine, as well as several enterprise-grade features.

Percona Server for MongoDB requires no changes to MongoDB applications or code.

NOTE: Red Hat Enterprise Linux 5 (including CentOS 5 and other derivatives), Ubuntu 12.04 and older versions are no longer supported by Percona software.

This release is based on MongoDB 3.4.6 and includes the following additional bug fix:

  • #PSMDB-155: Fixed mongod startup on NUMA systems.
Categories: MySQL

Percona Server for MySQL 5.6.36-82.1 is Now Available

MySQL Performance Blog - Wed, 2017-08-02 00:41

Percona is glad to announce the release of Percona Server for MySQL 5.6.36-82.1 on August 1, 2017 (Downloads are available here and from the Percona Software Repositories).

Based on MySQL 5.6.36, including all the bug fixes in it, Percona Server for MySQL 5.6.36-82.1 is the current GA release in the Percona Server for MySQL 5.6 series. All of Percona's software is open-source and free; all the details of the release can be found in the 5.6.36-82.1 milestone at Launchpad.

Please note that RHEL 5, CentOS 5 and Ubuntu versions 12.04 and older are not supported in future releases of Percona Server for MySQL, and no further packages will be added for these distributions.

New Features

  • Percona Server for MySQL can now be built with support of OpenSSL 1.1.
  • Percona Server for MySQL is now available on Debian 9 (stretch). The support only covers the amd64 architecture.
  • TokuDB now allows killing a query that is waiting for an FT locktree lock.
Bugs Fixed
  • Row counts in TokuDB could be lost intermittently after restarts. Bug fixed #2.
  • In TokuDB, two races in the fractal tree lock manager could significantly affect transactional throughput for some applications that used a small number of concurrent transactions. These races manifested as transactions unnecessarily waiting for an available lock. Bug fixed #3.
  • TokuDB could assert when opening a dictionary, producing no useful information in the error log. Bug fixed #23.
  • TokuDB could assert for various reasons when deserializing nodes, with no useful error output. Bug fixed #24.
  • Percona Server could crash when running a query over a partitioned table that uses an index to read a range of rows if this range was not covered by any existing partition. Bug fixed #1657941 (upstream #76418).
  • With two client connections to a server (debug server build), the server could crash after one of the clients set the global option userstat and flushed the client statistics (FLUSH CLIENT_STATISTICS) and then both clients were closed. Bug fixed #1661488.
  • TokuDB did not pass cmake flags on to snappy cmake. Bug fixed #41.
  • The progress status for partitioned TokuDB table ALTERs was misleading. Bug fixed #42.
  • When a client application is connecting to the Aurora cluster end point using SSL (--ssl-verify-server-cert or --ssl-mode=VERIFY_IDENTITY option), wildcard and SAN enabled SSL certificates were ignored. See also Compatibility Matrix. Note that the --ssl-verify-server-cert option is deprecated in Percona Server 5.7. Bug fixed #1673656 (upstream #68052).
  • Killing a stored procedure execution could result in an assert failure on a debug server build. Bug fixed #1689736 (upstream #86260).
  • It was not possible to build Percona Server on Debian 9 (stretch) due to issues with OpenSSL 1.1. Bug fixed #1702903 (upstream #83814).
  • The SET STATEMENT .. FOR statement changed the global instead of the session value of a variable if the statement occurred immediately after the SET GLOBAL or SHOW GLOBAL STATUS command. Bug fixed #1385352. (See the sketch after this list.)
  • The synchronization between the LRU manager and page cleaner threads was not done at shutdown. Bug fixed #1689552.
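
To illustrate the per-statement variable feature involved in the SET STATEMENT fix above, here is a minimal sketch (the table and values are illustrative assumptions, not taken from the release notes):

-- The statement after FOR should see the larger value; the global value must stay unchanged.
SET GLOBAL sort_buffer_size = 262144;
SET STATEMENT sort_buffer_size = 8388608 FOR SELECT * FROM test.t1 ORDER BY c1;
SHOW GLOBAL VARIABLES LIKE 'sort_buffer_size';  -- expected to still report 262144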

Other bugs fixed: #6, #44, #65, #1160986, #1676740, #1689989, #1689998, #1690012, #1699788, and #1684601 (upstream #86016).

Compatibility Matrix

  Feature                   YaSSL   OpenSSL < 1.0.2   OpenSSL >= 1.0.2
  ‘commonName’ validation   Yes     Yes               Yes
  SAN validation            No      Yes               Yes
  Wildcards support         No      No                Yes
Categories: MySQL

Group Replication: The Sweet and the Sour

MySQL Performance Blog - Tue, 2017-08-01 15:54

In this blog, we’ll look at group replication and how it deals with flow control (FC) and replication lag. 

Overview

In the last few months, we had two main actors in the MySQL ecosystem: ProxySQL and Group-Replication (with the evolution to InnoDB Cluster). 

While I have extensively covered the first, my last serious work on Group Replication dates back to a lab version from years past.

Given that Oracle decided to declare it GA, and Percona’s decision to provide some level of Group Replication support, I decided it was time for me to take a look at it again.

We’ve already seen a lot of coverage of many Group Replication topics. There are articles about Group Replication and performance, Group Replication and basic functionality (or the lack of it, like automatic node provisioning), Group Replication and ProxySQL, and so on.

But one question kept coming up over and over in my mind. If Group Replication and InnoDB Cluster have to work as an alternative to other (virtually) synchronous replication mechanisms, what changes do our customers need to consider if they want to move from one to the other?

Solutions using Galera (like Percona XtraDB Cluster) must take into account a central concept: clusters are data-centric. What matters is the data and the data state. Both must be the same on each node at any given time (commit/apply). To guarantee this, Percona XtraDB Cluster (and other solutions) use a set of data validation and Flow Control processes that work to ensure a consistent cluster data set on each node.

The upshot of this principle is that an application can query ANY node in a Percona XtraDB Cluster and get the same data, or write to ANY node and know that the data is visible everywhere in the cluster at (virtually) the same time.

Last but not least, inconsistent nodes should be excluded and either rebuilt or fixed before rejoining the cluster.

If you think about it, this is very useful. Guaranteeing consistency across nodes allows you to transparently split write/read operations, failover from one node to another with very few issues, and more.

When I conceived of this blog on Group Replication (or InnoDB Cluster), I put myself in the customer’s shoes. I asked myself: “Aside from all the other things we know (see above), what is the real impact of moving from Percona XtraDB Cluster to Group Replication/InnoDB Cluster for my application? Since Group Replication still (basically) uses replication with binlogs and relay logs, is there also a Flow Control mechanism?” An alarm bell started to ring in my mind.

My answer is: “Let’s do a proof of concept (PoC), and see what is really going on.”

The POC

I setup a simple set of servers using Group Replication with a very basic application performing writes on a single writer node, and (eventually) reads on the other nodes. 

You can find the schema definition here. Mainly I used the four tables from my windmills test suite — nothing special or specifically designed for Group Replication. I’ve used this test a lot for Percona XtraDB Cluster in the past, so it was a perfect fit.

Test Definition

The application will do very simple work, and I wanted to test four main cases (a sketch of one batched transaction follows the list):

  1. One thread performing one insert at each transaction
  2. One thread performing 50 batched inserts at each transaction
  3. Eight threads performing one insert at each transaction
  4. Eight threads performing 50 batched inserts at each transaction
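
To make the workload concrete, one “50 batched inserts” transaction looks roughly like the sketch below. The table and column names are purely illustrative; the real schema is the windmills one linked above.

-- Hypothetical shape of a single batched transaction (the real test uses 50 value lists).
START TRANSACTION;
INSERT INTO windmills.windmills_test (uuid, millid, location, active)
VALUES (UUID(), 1, 'field-a', 1),
       (UUID(), 2, 'field-b', 1);
-- ... up to 50 rows per INSERT in the batched cases
COMMIT;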

As you can see, a pretty simple set of operations. Then I decided to test it using the following four conditions on the servers (a sketch of the related settings follows the list):

  1. One slave worker, FC at its default
  2. One slave worker, FC set to 25
  3. Eight slave workers, FC at its default
  4. Eight slave workers, FC set to 25
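
For reference, the sketch below shows the kind of per-node settings these conditions map to. The variable names are my assumption based on stock MySQL 5.7 Group Replication (the flow control thresholds default to 25000), and the applier-related settings can only be changed while the applier is stopped:

-- Multi-threaded applier (use 1 worker for the single-worker runs).
SET GLOBAL slave_parallel_type = 'LOGICAL_CLOCK';
SET GLOBAL slave_preserve_commit_order = ON;
SET GLOBAL slave_parallel_workers = 8;
-- Flow control thresholds: default is 25000; set to 25 for the "FC set to 25" runs.
SET GLOBAL group_replication_flow_control_applier_threshold = 25;
SET GLOBAL group_replication_flow_control_certifier_threshold = 25;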

Again nothing weird or strange from my point of view. I used four nodes:

  1. Gr1 Writer
  2. Gr2 Reader
  3. Gr3 Reader with minimal added network latency (~10ms)
  4. Gr4 Reader with minimal added network latency (~10ms)

Finally, I had to be sure I measured the lag in a way that allowed me to reference it consistently on all nodes. 

I think we can safely say that the incoming GTID (Received_transaction_set from performance_schema.replication_connection_status) is definitely the last change applied on the master that the slave node knows about. More recent changes could have occurred, but network delay can prevent them from being “received.” The other point of reference is GTID_EXECUTED, which refers to the latest GTID processed on the node itself.

The closest query that can track the distance will be:

select @last_exec:=SUBSTRING_INDEX(SUBSTRING_INDEX(SUBSTRING_INDEX(@@global.GTID_EXECUTED,':',-2),':',1),'-',-1) last_executed;

select @last_rec:=SUBSTRING_INDEX(SUBSTRING_INDEX(SUBSTRING_INDEX(Received_transaction_set,':',-2),':',1),'-',-1) last_received
  FROM performance_schema.replication_connection_status
  WHERE Channel_name = 'group_replication_applier';

select (@last_rec - @last_exec) as real_lag;

Or in the case of a single worker:

select @last_exec:=SUBSTRING_INDEX(SUBSTRING_INDEX(@@global.GTID_EXECUTED,':',-1),'-',-1) last_executed;

select @last_rec:=SUBSTRING_INDEX(SUBSTRING_INDEX(Received_transaction_set,':',-1),'-',-1) last_received
  FROM performance_schema.replication_connection_status
  WHERE Channel_name = 'group_replication_applier';

select (@last_rec - @last_exec) as real_lag;

The result will be something like this:

+---------------+
| last_executed |
+---------------+
| 23607         |
+---------------+

+---------------+
| last_received |
+---------------+
| 23607         |
+---------------+

+----------+
| real_lag |
+----------+
|        0 |
+----------+
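
As a complementary view (not something the test harness above relies on), the queue sizes that drive flow control decisions can be checked per member from performance_schema:

-- Certification queue depth per member; flow control throttles the writer when the
-- queue exceeds the configured threshold.
SELECT MEMBER_ID,
       COUNT_TRANSACTIONS_IN_QUEUE,
       COUNT_TRANSACTIONS_CHECKED,
       COUNT_CONFLICTS_DETECTED
FROM performance_schema.replication_group_member_stats;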

The whole set of tests can be found here, with all the commands you need to run the application (you can find it here) and replicate the tests. I will focus on the results (otherwise this blog post would be far too long), but I invite you to see the details.

The Results

Efficiency on Writer by Execution Time and Rows/Sec

Using the raw data from the tests (Excel spreadsheet available here), I was interested in identifying if and how the Writer is affected by the use of Group Replication and flow control.

Reviewing the graph, we can see that the Writer has a linear increase in the execution time (when using default flow control) that matches the increase in the load. Nothing there is concerning, and all-in-all we see what is expected if the load is light. The volume of rows at the end justifies the execution time.

It’s a different scenario if we use flow control. The execution time increases significantly in both cases (single worker/multiple workers). In the worst case (eight threads, 50 inserts batch) it becomes four times higher than the same load without flow control.

What happens to the inserted rows? In the application, I traced the rows inserted/sec. It is easy to see what is going on there:

We can see that the Writer with flow control activated inserts less than a third of the rows it processes without flow control. 

We can definitely say that flow control has a significant impact on the Writer performance. To clarify, let’s look at this graph:

Without flow control, the Writer processes a high volume of rows in a limited amount of time (results from the test of eight workers, eight threads, 50 insert batch). With flow control, the situation changes drastically. The Writer takes a long time processing a significantly smaller number of rows/sec. In short, performance drops significantly.

But hey, I’m OK with that if it means having a consistent data set across all nodes. In the end, Percona XtraDB Cluster and similar solutions pay a significant performance price to match the data-centric principle.

Let’s see what happens on the other nodes.

Entries Lag

Well, this scenario is not so good:

When NOT using flow control, the nodes lag behind the writer significantly. Remember that by default, flow control in Group Replication is set to 25000 entries (I mean 25K entries!).

What happens is that as soon as I put some salt (read: load) on the Writer, the slave nodes start to lag. When using the default single worker, the lag becomes significant. When using multiple workers, we see that the lag happens mainly on the node(s) with minimal (10ms) network latency. The sad thing is that the lag does not really go down compared to the single-worker case, indicating that a minimal latency of 10ms is enough to affect replication.

Time to activate the flow control and have no lag:

Unfortunately, this is not the case. As we can see, the lag of the single worker remains high for Gr2 (154 entries). When using multiple workers, the Gr3/4 nodes perform much better, with significantly less lag (but still high at ~1K entries).

It is important to remember that at this time the Writer is processing one-third or less of the rows it is normally able to. It is also important to note that I set the flow control entry limit to 25, and the Gr3 (and Gr4) nodes are still lagging more than 1K entries behind.

To clarify, let’s check the two graphs below:

Using the Writer (Master) as a baseline in entry #N, without flow control, the nodes (slaves) using Group Replication start to significantly lag behind the writer (even with a light load).

The distance in this PoC ranged from very small (58 entries) up to much larger under heavier load (3849 entries):

Using flow control, the Writer (Master) diverges less, as expected. But even with its significant drop in performance (to one-third or less), the nodes still lag. The worst case is up to 1363 entries.

I need to underline here that we have no further way (that I am aware of, anyway) to tune the lag and prevent it from happening.

This means an application cannot transparently split writes/reads and expect consistency. The gap is too high.

A Graph That Tells Us a Story

I used Percona Monitoring and Management (PMM) to keep an eye on the nodes while doing the tests. One of the graphs really showed me that Group Replication still has some “limits” as the replication mechanism for a cluster:

This graph shows the MySQL queries executed on all four nodes, in the test using eight threads with 50-insert batches and flow control.

As you can see, the Gr1 (Writer) is the first one to take off, followed by Gr2. Nodes Gr3 and Gr4 require a bit more, given the binlog transmission (and 10ms delay). Once the data is there, they match (inconsistently) the Gr2 node. This is an effect of flow control asking the Master to slow down. But as previously seen, the nodes will never match the Writer. When the load test is over, the nodes continue to process the queue for additional ~130 seconds. Considering that the whole load takes 420 seconds on the Writer, this means that one-third of the total time on the Writer is spent syncing the slave AFTERWARDS.

The above graph shows the same test without flow control. It is interesting to see how the Writer moved above 300 queries/sec, while Gr2 stayed around 200 and Gr3/4 were far below. The Writer was able to process the whole load in ~120 seconds instead of 420, while Gr3/4 continued to process the load for an additional ~360 seconds.

This means that without flow control set, the nodes lag around 360 seconds behind the Master. With flow control set to 25, they lag 130 seconds.

This is a significant gap.

Conclusions

Going back to the reason why I started this PoC, it looks like my application(s) are not a good fit for Group Replication, given that I rely on Percona XtraDB Cluster to scale out reads and efficiently move my writer to another node when I need to.

Group Replication is still based on asynchronous replication (as my colleague Kenny said). It makes sense in many other cases, but it doesn’t compare to solutions based on virtually synchronous replication. It still requires a lot of refinement.

On the other hand, for applications that can afford to have a significant gap between writers and readers it is probably fine. But … doesn’t standard replication already cover that? 

Reviewing the Oracle documentation (https://dev.mysql.com/doc/refman/5.7/en/group-replication-background.html), I can see why Group Replication as part of the InnoDB Cluster could help improve high availability when compared to standard replication.

But I also think it is important to understand that Group Replication (and derived solutions like InnoDB Cluster) is not comparable to, or a replacement for, data-centric solutions such as Percona XtraDB Cluster. At least not up to now.

Good MySQL to everyone.

Categories: MySQL

Platform End of Life (EOL) Announcement for RHEL 5 and Ubuntu 12.04 LTS

MySQL Performance Blog - Tue, 2017-08-01 00:28

Upstream platform vendors have announced the general end of life (EOL) for Red Hat Enterprise Linux 5 (RHEL 5) and its derivatives, as well as Ubuntu 12.04 LTS. With this announcement comes some implications to support for Percona software running on these operating systems.

RHEL 5 reached EOL on March 31st, 2017, and Ubuntu 12.04 LTS reached it on April 28th, 2017. Pursuant to our end of life policies, we are announcing that these EOLs will go into effect for Percona software on August 1st, 2017. As of this date, we will no longer be producing new packages, binary builds, hotfixes, or bug fixes for Percona software on these platforms.

We generally align our platform end of life dates with those of the upstream platform vendor. The platform end of life dates are published in advance on our website under the page Supported Linux Platforms and Versions.

Per our policies, Percona will continue to provide operational support for your databases on EOLed platforms. However, we will be unable to provide any bug fixes, builds or OS-level assistance if you encounter an issue outside the database itself.

Each platform vendor has a supported migration or upgrade path to their next major release.  Please reach out to us if you need assistance in migrating your database to your vendor’s supported platform – Percona will be happy to assist you.

Categories: MySQL

Webinar Wednesday August 2, 2017: MySQL Disk Encryption with LUKS

MySQL Performance Blog - Mon, 2017-07-31 22:12

Join Percona’s Senior Architect, Matthew Boehm, as he presents MySQL Disk Encryption with LUKS on Wednesday, August 2, 2017, at 1:00 pm PDT / 4:00 pm EDT (UTC-7).

Register Now

Clients require strong security measures for PCI, HIPAA or PHI. You must encrypt MySQL data “at rest” to satisfy the requirements of these standards. InnoDB’s built-in encryption features work, but there are some caveats to that solution.
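
For contrast with the LUKS approach, here is a minimal sketch of InnoDB’s table-level encryption in MySQL 5.7. It assumes a keyring plugin (such as keyring_file) is already loaded, the table name is illustrative, and note that this covers only tablespace data, not redo logs or binary logs:

-- Encrypt an individual InnoDB file-per-table tablespace.
CREATE TABLE app.secure_data (
  id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  payload VARBINARY(255) NOT NULL
) ENGINE=InnoDB ENCRYPTION='Y';

-- Rotate the master encryption key periodically.
ALTER INSTANCE ROTATE INNODB MASTER KEY;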

In this talk, you’ll see how to encrypt your entire disk to protect everything from data, redo logs and binary logs.

Register for the webinar here.

Matthew Boehm, Architect

Matthew joined Percona in the fall of 2012 as a MySQL consultant. His areas of knowledge include the traditional Linux/Apache/MySQL/PHP stack, memcached, MySQL Cluster, massive sharding topologies, PHP development and a bit of MySQL-C-API development. Previously, Matthew DBAed for the 5th largest MySQL installation at eBay/PayPal, and also hails from managed hosting environments. During his off-hours, Matthew is a nationally-ranked competitive West Coast Swing dancer, and travels to competitions around the US. He enjoys working out, camping, biking and playing MMOs with his son.

Categories: MySQL

Percona Server for MySQL 5.7.18-16 Is Now Available

MySQL Performance Blog - Fri, 2017-07-28 18:49

Percona is glad to announce the GA release of Percona Server for MySQL 5.7.18-16 on July 28, 2017 (Downloads are available here and from the Percona Software Repositories).

Based on MySQL 5.7.18, including all the bug fixes in it, Percona Server for MySQL 5.7.18-16 is the current GA release in the Percona Server for MySQL 5.7 series. All of Percona's software is open-source and free, and you can find all the release details in the 5.7.18-16 milestone at Launchpad.

Please note that RHEL 5, CentOS 5 and Ubuntu versions 12.04 and older are not supported in future releases of Percona Server, and no further packages will be added for these distributions.

New Features:

  • Percona Server for MySQL is now available on Debian 9 (stretch). The support only covers the amd64 architecture.
  • Percona Server for MySQL can now be built with the support of OpenSSL 1.1.
  • MyRocks storage engine has been merged into Percona Server.
  • TokuDB now allows killing a query that is waiting for an FT locktree lock.
  • TokuDB now enables use of the MySQL DEBUG_SYNC facility within Percona FT.

Bugs Fixed:

  • Row counts in TokuDB could be lost intermittently after restarts. Bug fixed #2.
  • In TokuDB, two races in the fractal tree lock manager could significantly affect transactional throughput for some applications that used a small number of concurrent transactions. These races manifested as transactions unnecessarily waiting for an available lock. Bug fixed #3.
  • Percona FT could assert when opening a dictionary, producing no useful information in the error log. Bug fixed #23.
  • Percona FT could assert for various reasons when deserializing nodes, with no useful error output. Bug fixed #24.
  • It was not possible to build Percona Server on Debian 9 (stretch) due to issues with OpenSSL 1.1. Bug fixed #1702903 (upstream #83814).
  • Packaging was using the dpkg --verify command which is not available on wheezy/precise. Bug fixed #1694907.
  • Enabling and disabling the slow query log rotation spuriously added the version suffix to the next slow query log file name. Bug fixed #1704056.
  • With two client connections to a server (debug server build), the server could crash after one of the clients set the global option userstat and flushed the client statistics (FLUSH CLIENT_STATISTICS) and then both clients were closed. Bug fixed #1661488.
  • Percona FT did not pass cmake flags on to snappy cmake. Bug fixed #41.
  • The progress status for partitioned TokuDB table ALTERs was misleading. Bug fixed #42.
  • When a client application is connecting to the Aurora cluster end point using SSL (--ssl-verify-server-cert or --ssl-mode=VERIFY_IDENTITY option), wildcard and SAN enabled SSL certificates were ignored. Note that the --ssl-verify-server-cert option is deprecated in Percona Server 5.7. Bug fixed #1673656 (upstream #68052).
  • Killing a stored procedure execution could result in an assert failure on a debug server build. Bug fixed #1689736 (upstream #86260).
  • The SET STATEMENT .. FOR statement changed the global instead of the session value of a variable if the statement occurred immediately after the SET GLOBAL or SHOW GLOBAL STATUS command. Bug fixed #1385352.
  • When running SHOW ENGINE INNODB STATUS, the Buffer pool size, bytes entry contained 0. Bug fixed #1586262.
  • The synchronization between the LRU manager and page cleaner threads was not done at shutdown. Bug fixed #1689552.
  • Spurious lock_wait_timeout_thread wakeup in lock_wait_suspend_thread() could occur. Bug fixed #1704267 (upstream #72123).

Other bugs fixed: #1686603, #6, #44, #65, #1160986, #1686934, #1688319, #1689989, #1690012, #1691682, #1697700, #1699788, #1121072, and #1684601 (upstream #86016).

The release notes for Percona Server for MySQL 5.7.18-16 are available in the online documentation. Please report any bugs on the launchpad bug tracker.

Note

Due to a new package dependency, Ubuntu/Debian users should use apt-get dist-upgrade or apt-get install percona-server-server-5.7 to upgrade.

Categories: MySQL