MySQL

Percona XtraDB Cluster 5.7.18-29.20 is now available

MySQL Performance Blog - Fri, 2017-06-02 19:41

Percona announces the release of Percona XtraDB Cluster 5.7.18-29.20 on June 2, 2017. Binaries are available from the downloads section or our software repositories.

NOTE: You can also run Docker containers from the images in the Docker Hub repository.

NOTE: Due to a new package dependency, Ubuntu/Debian users should use apt-get dist-upgrade or apt-get install percona-xtradb-cluster-57 to upgrade.

Percona XtraDB Cluster 5.7.18-29.20 is now the current release, based on the following:

All Percona software is open-source and free.

Fixed Bugs

  • PXC-749: Fixed memory leak when running INSERT on a table without primary key defined and wsrep_certify_nonPK disabled (set to 0).

    NOTE: We recommend you define primary keys on all tables for correct write-set replication.

  • PXC-812: Fixed SST script to leave the DONOR keyring when JOINER clears the datadir.

  • PXC-813: Fixed SST script to use UTC time format.

  • PXC-816: Fixed hook for caching GTID events in asynchronous replication. For more information, see #1681831.

  • PXC-820: Enabled querying of pxc_maint_mode by another client during the transition period.

  • PXC-823: Fixed SST flow to gracefully shut down JOINER node if SST fails because DONOR leaves the cluster due to network failure. This ensures that the DONOR is then able to recover to synced state when network connectivity is restored. For more information, see #1684810.

  • PXC-824: Fixed graceful shutdown of Percona XtraDB Cluster node to wait until applier thread finishes.

Other Improvements

  • PXC-819: Added five new status variables to expose required values from wsrep_ist_receive_status and wsrep_flow_control_interval as numbers, rather than strings that need to be parsed:

    • wsrep_flow_control_interval_low
    • wsrep_flow_control_interval_high
    • wsrep_ist_receive_seqno_start
    • wsrep_ist_receive_seqno_current
    • wsrep_ist_receive_seqno_end
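Because these are exposed as standard status variables, monitoring tools can now read the values numerically instead of parsing strings; for example:

```sql
-- Numeric values, no parsing of wsrep_ist_receive_status needed
SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_interval_%';
SHOW GLOBAL STATUS LIKE 'wsrep_ist_receive_seqno_%';
```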

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!

Categories: MySQL

Percona XtraDB Cluster 5.6.36-26.20 is Now Available

MySQL Performance Blog - Fri, 2017-06-02 19:40

Percona announces the release of Percona XtraDB Cluster 5.6.36-26.20 on June 2, 2017. Binaries are available from the downloads section or our software repositories.

Percona XtraDB Cluster 5.6.36-26.20 is now the current release, based on the following:

All Percona software is open-source and free.

NOTE: Due to end of life, Percona will stop producing packages for the following distributions after July 31, 2017:

  • Red Hat Enterprise Linux 5 (Tikanga)
  • Ubuntu 12.04 LTS (Precise Pangolin)

You are strongly advised to upgrade to the latest stable versions if you want to continue using Percona software.

Fixed Bugs

  • PXC-749: Fixed memory leak when running INSERT on a table without primary key defined and wsrep_certify_nonPK disabled (set to 0).

    NOTE: We recommend you define primary keys on all tables for correct write-set replication.

  • PXC-813: Fixed SST script to use UTC time format.

  • PXC-823: Fixed SST flow to gracefully shut down JOINER node if SST fails because DONOR leaves the cluster due to network failure. This ensures that the DONOR is then able to recover to synced state when network connectivity is restored. For more information, see #1684810.

Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!


ProxySQL-Assisted Percona XtraDB Cluster Maintenance Mode

MySQL Performance Blog - Wed, 2017-05-31 18:34

In this blog post, we’ll look at how Percona XtraDB Cluster maintenance mode uses ProxySQL to take cluster nodes offline without impacting workloads.

Percona XtraDB Cluster Maintenance Mode

Since Percona XtraDB Cluster offers a high availability solution, it must consider a data flow where a cluster node gets taken down for maintenance (through isolation from a cluster or complete shutdown).

Percona XtraDB Cluster facilitated this by introducing a maintenance mode. Percona XtraDB Cluster maintenance mode reduces the number of abrupt workload failures if a node is taken down using ProxySQL (as a load balancer).

The central idea is delaying the core node action and allowing ProxySQL to divert the workload.

How ProxySQL Manages Percona XtraDB Cluster Maintenance Mode

With Percona XtraDB Cluster maintenance mode, ProxySQL marks the node as OFFLINE when a user triggers a shutdown signal (or wants to put a specific node into maintenance mode):

  • When a user triggers a shutdown, the Percona XtraDB Cluster node sets pxc_maint_mode to SHUTDOWN (from the DISABLED default) and sleeps for x seconds (dictated by pxc_maint_transition_period, 10 seconds by default). ProxySQL auto-detects this change and marks the node as OFFLINE. With this change, ProxySQL avoids opening new connections for any DML transactions, but continues to service existing queries until pxc_maint_transition_period expires. Once the sleep period is complete, Percona XtraDB Cluster delivers the real shutdown signal, thereby giving ProxySQL enough time to transition the workload.
  • If the user needs to take a node into maintenance mode, the user can simply set pxc_maint_mode to MAINTENANCE. With that, pxc_maint_mode is updated and the client connection updating it sleeps for x seconds (as dictated by pxc_maint_transition_period) before giving control back to the user. ProxySQL auto-detects this change and marks the node as OFFLINE. With this change, ProxySQL avoids opening new connections for any DML transactions but continues to service existing queries.
  • ProxySQL auto-detects this change in maintenance state and then automatically re-routes traffic, thereby reducing abrupt workload failures.
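The mode switch itself is just a variable change on the node; a minimal sketch of a maintenance cycle (the values shown are the documented defaults):

```sql
-- On the Percona XtraDB Cluster node being serviced:
SET GLOBAL pxc_maint_transition_period = 10;   -- transition window (default: 10 seconds)
SET GLOBAL pxc_maint_mode = 'MAINTENANCE';     -- this call blocks for the transition period
-- ... ProxySQL marks the node OFFLINE_SOFT; perform maintenance ...
SET GLOBAL pxc_maint_mode = 'DISABLED';        -- node returns to rotation
```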

Technical Details:

  • The ProxySQL Galera checker script continuously monitors the state of individual nodes by checking the pxc_maint_mode variable status (in addition to the existing wsrep_local_state) using the ProxySQL scheduler feature.
  • The scheduler is a cron-like implementation integrated inside ProxySQL, with millisecond granularity.
  • If proxysql_galera_checker detects pxc_maint_mode = SHUTDOWN | MAINTENANCE, then it marks the node as OFFLINE_SOFT. This avoids the creation of new connections (or workloads) on the node.

Sample proxysql_galera_checker log:

Thu Dec  8 11:21:11 GMT 2016 Enabling config
Thu Dec  8 11:21:17 GMT 2016 Check server 10:127.0.0.1:25000 , status ONLINE , wsrep_local_state 4
Thu Dec  8 11:21:17 GMT 2016 Check server 10:127.0.0.1:25100 , status ONLINE , wsrep_local_state 4
Thu Dec  8 11:21:17 GMT 2016 Check server 10:127.0.0.1:25200 , status ONLINE , wsrep_local_state 4
Thu Dec  8 11:21:17 GMT 2016 Changing server 10:127.0.0.1:25200 to status OFFLINE_SOFT due to SHUTDOWN
Thu Dec  8 11:21:17 GMT 2016 Number of writers online: 2 : hostgroup: 10
Thu Dec  8 11:21:17 GMT 2016 Enabling config
Thu Dec  8 11:21:22 GMT 2016 Check server 10:127.0.0.1:25000 , status ONLINE , wsrep_local_state 4
Thu Dec  8 11:21:22 GMT 2016 Check server 10:127.0.0.1:25100 , status ONLINE , wsrep_local_state 4
Thu Dec  8 11:21:22 GMT 2016 Check server 10:127.0.0.1:25200 , status OFFLINE_SOFT , wsrep_local_state 4
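A quick way to summarize such checker output is to extract the last reported status per server; a minimal Python sketch (the log line format is assumed to match the sample above):

```python
import re

# Parse proxysql_galera_checker output into {server: last reported status}.
LINE = re.compile(r"(?:Check|Changing) server (\S+) (?:, status (\w+)|to status (\w+))")

def latest_status(log: str) -> dict:
    """Return the most recent status seen for each hostgroup:host:port entry."""
    status = {}
    for match in LINE.finditer(log):
        status[match.group(1)] = match.group(2) or match.group(3)
    return status

log = """\
Thu Dec  8 11:21:17 GMT 2016 Check server 10:127.0.0.1:25000 , status ONLINE , wsrep_local_state 4
Thu Dec  8 11:21:17 GMT 2016 Changing server 10:127.0.0.1:25200 to status OFFLINE_SOFT due to SHUTDOWN
Thu Dec  8 11:21:22 GMT 2016 Check server 10:127.0.0.1:25200 , status OFFLINE_SOFT , wsrep_local_state 4
"""
print(latest_status(log))
# -> {'10:127.0.0.1:25000': 'ONLINE', '10:127.0.0.1:25200': 'OFFLINE_SOFT'}
```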

Ping us below with any questions or comments.


Percona XtraBackup 2.4.7-2 is Now Available

MySQL Performance Blog - Tue, 2017-05-30 16:53

Percona announces the GA release of Percona XtraBackup 2.4.7-2 on May 29, 2017. You can download it from our download site and apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

Bug Fixed:
  • Fixed build failure on Debian 9.0 (Stretch). Bug fixed #1678947.

Release notes with all the bugfixes for Percona XtraBackup 2.4.7-2 are available in our online documentation. Please report any bugs to the launchpad bug tracker.


Webinar May 31, 2017: Online MySQL Backups with Percona XtraBackup

MySQL Performance Blog - Tue, 2017-05-30 16:40

Please join Percona’s solution engineer, Dimitri Vanoverbeke, as he presents Online MySQL Backups with Percona XtraBackup on Wednesday, May 31, 2017 at 7:00 am PDT / 10:00 am EDT (UTC-7).

Register Now

Percona XtraBackup is a free, open source, complete online backup solution for all versions of Percona Server, MySQL® and MariaDB®. Percona XtraBackup provides:

  • Fast and reliable backups
  • Uninterrupted transaction processing during backups
  • Savings on disk space and network bandwidth with better compression
  • Automatic backup verification
  • Higher uptime due to faster restore time

This webinar will discuss the different features of Percona XtraBackup, including:

  • Full and incremental backups
  • Compression, streaming, and encryption of backups
  • Backing up to the cloud (swift)
  • Percona XtraDB Cluster / Galera Cluster
  • Percona Server specific features
  • MySQL 5.7 support
  • Tools that use Percona XtraBackup
  • Limitations

Register for the webinar here.

Dimitri Vanoverbeke, Solution Engineer

At the age of seven, Dimitri received his first computer. Since then he has felt addicted to anything with a digital pulse. Dimitri has been active in IT professionally since 2003, when he took various roles from internal system engineering to consulting. Prior to joining Percona, Dimitri worked as a consultant for a leading open source software consulting firm in Belgium. During his career, Dimitri has become familiar with a broad range of open source solutions and with the devops philosophy. Whenever he’s not glued to his computer screen, he enjoys traveling, cultural activities, basketball and the great outdoors.

 


Percona Monitoring and Management 1.1.4 is Now Available

MySQL Performance Blog - Mon, 2017-05-29 19:28

Percona announces the release of Percona Monitoring and Management 1.1.4 on May 29, 2017.

For installation instructions, see the Deployment Guide.

This release includes experimental support for MongoDB in Query Analytics, including an updated QAN interface.

Query Analytics for MongoDB

To enable MongoDB query analytics, use the mongodb:queries alias when adding the service. As an experimental feature, it also requires the --dev-enable option:

sudo pmm-admin add --dev-enable mongodb:queries

NOTE: Currently, it monitors only collections that are present when you enable MongoDB query analytics. Query data for collections that you add later is not gathered. This is a known issue and it will be fixed in the future.

Query Analytics Redesign

The QAN web interface was updated for better usability and functionality (including the new MongoDB query analytics data). The new UI is experimental and available by specifying /qan2 after the URL of PMM Server.

NOTE: The button on the main landing page still points to the old QAN interface.

You can check out the new QAN web UI at https://pmmdemo.percona.com/qan2.

New in PMM Server
  • PMM-724: Added the Index Condition Pushdown (ICP) graph to the MySQL InnoDB Metrics dashboard.
  • PMM-734: Fixed the MySQL Active Threads graph in the MySQL Overview dashboard.
  • PMM-807: Fixed the MySQL Connections graph in the MySQL Overview dashboard.
  • PMM-850: Updated the MongoDB RocksDB and MongoDB WiredTiger dashboards.
  • Removed the InnoDB Deadlocks and Index Collection Pushdown graphs from the MariaDB dashboard.
  • Added tooltips with descriptions for graphs in the MySQL Query Response Time dashboard. Similar tooltips will be gradually added to all graphs.
New in PMM Client
  • PMM-801: Improved PMM Client upgrade process to preserve credentials that are used by services.
  • Added options for pmm-admin to enable MongoDB cluster connections.
About Percona Monitoring and Management

Percona Monitoring and Management is an open-source platform for managing and monitoring MySQL and MongoDB performance. Percona developed it in collaboration with experts in the field of managed database services, support and consulting.

PMM is a free and open-source solution that you can run in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

A live demo of PMM is available at pmmdemo.percona.com.

Please provide your feedback and questions on the PMM forum.

If you would like to report a bug or submit a feature request, use the PMM project in JIRA.


How To Write Exciting Conference Proposals

Xaprb, home of innotop - Sun, 2017-05-28 18:06

Most conference proposals are too boring, even when the speakers and topics are great. This is a pity. I think something about the process of submitting to a CfP sets a trap for most speakers. This post is my advice for avoiding that trap.

TL;DR: Your proposal should focus on your story about what you’ve done personally and what you’ve learned. Your story, not the topic. And, don’t tell us anything about the importance of the topic or how high the stakes are.

I’ve written twice before (1, 2) about how to write better conference proposals. But while recently reviewing another thousand or so proposals, I suddenly understood something simple that separates the boring ones from the exciting ones. And I realized that I make the same mistake myself when I submit proposals!

Because I only see this pattern myself when I’m reviewing, not when I’m writing proposals, this time I’ll try to help you experience what it’s like to review conference proposals. Remember, your first audience is the review committee, who are reading hundreds of proposals.

What follows are real conference proposal abstracts. I have lightly edited where necessary to avoid identifying people explicitly. If you see one of your proposals below, please don’t take offense or jump to any conclusions about implied quality of your talk or expertise. Your proposal is hopefully serving a valuable purpose as part of this blog post.

Read through the following, spending no more than 8-10 seconds on each:

The rate of business change is accelerating. Organisations must build the right things better and faster to survive. Agile, Continuous Delivery and Lean help, but we need practices to ensure the “right things” flow into these processes. This talk explores design sprints in the enterprise to: Bridge the business-IT divide; Build the right thing; Increase organisational agility.

Databases require capacity planning (and to those coming from traditional RDBMS solutions, this can be thought of as a sizing guide). Capacity planning prevents resource exhaustion. Capacity planning can be hard. This talk has a heavier leaning on MySQL, but the concepts and addendum will help with any other data store.

The popularity of containers has driven the need for distributed systems that can provide a secure substrate for container deployments. In this talk, we will go over how Acme has been working on multiple components to secure every part of the container platform - from the newly announced LinuxKit all the way up to SwarmKit - and how all of these blocks fit together!

The Continuous Delivery (CD) movement has been around for more than a decade, and its lessons are even more relevant when embracing container technology. The talk will provide guidance and specific technical solutions on how to implement CD best practices on a reference implementation based on Git, Docker, Kubernetes and Terraform.

With ever increasing demands for fast business change how can we ensure our digital channels have the increasingly exacting standards of performance our customers (and business owners) expect? What does this look like in an age of DevOps and Continuous Delivery? We’ll take you through our experiences as we build a strategy for shifting left and automating performance analysis.

End-to-end engineering ownership - or DevOps - is the only way to scale operations in a micro-services world. Achieving this in large engineering organizations requires the right mindset, architecture and processes. The Acme Small Business Group (SBG) is on the journey to turning it’s 1000 engineers into full lifecycle engineers who own their services from cradle to grave.

Creating a fast & reliable pipeline poses numerous hurdles that cause most dev teams to fall back to slow release cycles, low levels of test & platform coverage & straight up buggy apps in production. To keep re-work & technical debt down to a minimum, defects must be identified & addressed as early as possible & frequently across multiple environments & under realistic conditions.

Do you see any similarities in these proposals? I selected only 7, but already the pattern emerges, and most of them feel very alike in some ways:

  • They point to significant changes or important circumstances, to create context for the talk.
  • They mention specific challenges that arise in these circumstances.
  • They refer to the urgency of learning what the speaker will share.
  • They pose questions, but don’t answer them, as a way of showing what the audience will learn from the talk.

On reflection, all of these are very natural things to do, and seem like good things to do for that matter. And I think conferences usually ask speakers to make sure proposals include this information! Unfortunately, though, the result is mind-numbing when you’re reading through proposals and trying to decide which to vote for (or which to attend, if you’re the audience).

What’s going wrong? Although well-intentioned, the effort to illustrate the context and importance of the topic ends up having a very different effect:

  • The first several sentences of most proposals focus on obvious, well-established facts that don’t need to be stated. The speaker is trying to frame the topic and its importance, but it ends up being equivalent to humans need oxygen to live or something similarly banal.
  • The speaker tries to indicate that the subject matter will be widely applicable and everyone will find it relevant, but it ends up coming across as regurgitation of best practices.

As soon as I realized this, I was able to pinpoint two specific things about clear, brisk proposals that make them more fresh and exciting. To illustrate this, here are two proposals on a similar topic, hiring:

Today hiring the best people for the job is getting difficult and keeping them is another struggle. Not forgetting that people and plants are similar in that in order to become their best they need to have the right environment. This talk is about how to build a workplace (culture) that attracts talent and enables them to give their best.

Eric Schmidt’s quote, “Hiring is the most important thing you do,” is easy to say and very hard to implement. In this session, we’ll share the highs and lows of our journey to rebuild our recruiting program from scratch and create an approach that scales to the needs of our business and increase our annual hiring rate by 400% at the same time.

What’s different?

  • One is about the topic and the other is about the speaker’s experience.
  • One starts with context and the other starts with a challenge.

After I realized this, it became clear to me that a lively proposal is usually a personal story about learning from an experience, and a boring one is usually about a topic and best practices.

Here’s another lively one:

Our company did releases every 4 weeks by IT Ops at early hours 2 years ago. We transformed our way of working in less than 2 years to teams deploying at all hours, with the responsibility of monitoring their applications during business hours. How do you handle 400 IT-professionals with different skill sets? We created a guide for DevOps in our organization and want to share our transformation.

The general tone of this proposal is “we did something, learned something, and we want to share it with you.” I’m instantly excited to attend this session!

What’s my advice to people writing conference proposals? Tell a story about what you have actually done yourself, and don’t worry about setting the context, motivation, importance, or urgency, unless it’s non-obvious. Stories always win, and everyone knows that the pace of business change is accelerating. People want to hear your story, your mistakes, and your lessons learned.

Also, remember that busy reviewers will only give your proposals a few seconds of attention.

What am I not saying in this post? I’m not saying that writing needs to be high quality. When I realized that I didn’t really care a lot about passive voice and bad grammar while reviewing, I felt guilty that I’ve emphasized the need for “good” writing in the past. I’m not saying it doesn’t help. It does: clear, active, direct, compelling writing always helps. But it’s less important than the other points I’ve emphasized in this post. Some of the exciting proposals have pretty egregious writing mistakes, and I don’t care! A run-on sentence and passive voice are entirely forgivable as long as the speaker’s experience comes through clearly.

I know it’s hard. It’s so hard that I don’t follow this advice myself. But I’ll try harder in the future. I hope this helps.

PS: You’ll note that not all of the first seven “bad” proposals are wholly boring. Some of them do point to the speaker’s experience and story. I chose those seven almost sequentially from a conference, skipping only a couple that seemed exceptional. They’re not bad, they’re just ordinary, and many of them will be great talks even if the proposals are hard to read. My point is simply that when you read 350 ordinary proposals in a row, a non-ordinary one leaps out and screams for a high rating.

Pic Credit


Percona Server for MongoDB 3.0.15-1.10 is Now Available

MySQL Performance Blog - Fri, 2017-05-26 17:28

Percona announces the release of Percona Server for MongoDB 3.0.15-1.10 on May 26, 2017. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open source, fully compatible, highly-scalable, zero-maintenance downtime database supporting the MongoDB v3.0 protocol and drivers. It extends MongoDB with PerconaFT and MongoRocks storage engines, as well as several enterprise-grade features:

NOTE: PerconaFT was deprecated and is not available in later versions. TokuBackup was replaced with Hot Backup for WiredTiger and MongoRocks storage engines.

Percona Server for MongoDB requires no changes to MongoDB applications or code.

This release is based on MongoDB 3.0.15 and includes the following additional change:

Percona Server for MongoDB 3.0.15-1.10 release notes are available in the official documentation.


Percona Server for MySQL 5.7.18-15 is Now Available

MySQL Performance Blog - Fri, 2017-05-26 17:05

Percona announces the GA release of Percona Server for MySQL 5.7.18-15 on May 26, 2017. Download the latest version from the Percona web site or the Percona Software Repositories. You can also run Docker containers from the images in the Docker Hub repository.

Based on MySQL 5.7.18, including all the bug fixes in it, Percona Server for MySQL 5.7.18-15 is the current GA release in the Percona Server for MySQL 5.7 series. All Percona software is completely open-source and free. Find release details in the 5.7.18-15 milestone at Launchpad.

Bugs Fixed:
  • The server would crash when querying a partitioned table with a single partition. Bug fixed #1657941 (upstream #76418).
  • Running a query on InnoDB table with ngram full-text parser and a LIMIT clause could lead to a server crash. Bug fixed #1679025 (upstream #85835).

The release notes for Percona Server for MySQL 5.7.18-15 are available in the online documentation. Please report any bugs on the launchpad bug tracker.


Monitoring with Artificial Intelligence and Machine Learning

Xaprb, home of innotop - Fri, 2017-05-26 14:42

Artificial intelligence and machine learning (AI and ML) are so over-hyped today that I usually don’t talk about them. But there are real and valid uses for these technologies in monitoring and performance management. Some companies have already been employing ML and AI with good results for a long time. VividCortex’s own adaptive fault detection uses ML, a fact we don’t generally publicize.

AI and ML aren’t magic, and I think we need a broader understanding of this. And understanding that there are a few types of ML use cases, especially for monitoring, could be useful to a lot of people.

I generally think about AI and ML in terms of three high-level results they can produce, rather than classifying them in terms of how they achieve those results.

1. Predictive Machine Learning

Predictive machine learning is the most familiar use case in monitoring and performance management today. When used in this fashion, a data scientist creates algorithms that can learn how systems normally behave. The result is a model of normal behavior that can predict a range of outcomes for the next data point to be observed. If the next observation falls outside the bounds, it’s typically considered an anomaly. This is the basis of many types of anomaly detection.
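As a toy illustration of the idea (a static mean/stddev band, not any vendor's actual model), an observation falling outside the predicted range is flagged as an anomaly:

```python
import statistics

def is_anomaly(history, next_value, k=3.0):
    """Flag next_value when it falls outside mean +/- k * stddev of history."""
    mean = statistics.mean(history)
    stddev = statistics.pstdev(history)
    return abs(next_value - mean) > k * stddev

# Metric samples under normal behavior (e.g., queries per second):
baseline = [100, 102, 98, 101, 99, 100, 103, 97]
print(is_anomaly(baseline, 101))  # False: within the predicted range
print(is_anomaly(baseline, 180))  # True: far outside the range, an anomaly
```

Real systems replace the static band with a learned model (seasonality, trends, multiple metrics), but the shape of the decision is the same.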

Preetam Jinka and I wrote the book on using anomaly detection for monitoring. Although we didn’t write extensively about machine learning, machine learning is just a better way (in some cases) to do the same techniques. It isn’t a fundamentally different activity.

Who’s using machine learning to predict how our systems should behave? There’s a long list of vendors and monitoring projects. Netuitive, DataDog, Netflix, Facebook, Twitter, and many more. Anomaly detection through machine learning is par for the course these days.

2. Descriptive Machine Learning

Descriptive machine learning examines data and determines what it means, then describes that in ways that humans or other machines can use. Good examples of this are fairly widespread. Image recognition, for example, uses descriptive machine learning and AI to decide what’s in a picture and then express it in a sentence. You can look at captionbot.ai to see this in action.

What would descriptive ML and AI look like in monitoring? Imagine diagnosing a crash: “I think MySQL got OOM-killed because the InnoDB buffer pool grew larger than memory.” Are any vendors doing this today? I’m not aware of any. I think it’s a hard problem, perhaps not easier than captioning images.

3. Generative Machine Learning

Generative machine learning is descriptive in reverse. Google’s software famously performs this technique, the results of which you can see on their inceptionism gallery.

I can think of a very good use for generative machine learning: creating realistic load tests. Current best practices for evaluating system performance when we can’t observe the systems in production are to run artificial benchmarks and load tests. These clean-room, sterile tests leave a lot to be desired. Generating realistic load to test applications might be commercially useful. Even generating realistic performance data is hard and might be useful.

4. Prescriptive

Bonus: prescriptive. (I added this later, after a discussion with someone.) Prescriptive machine learning would produce instructions: what to do to a system.

Photo Credit


What About ProxySQL and Mirroring?

MySQL Performance Blog - Thu, 2017-05-25 22:45

In this blog post, we’ll look at how ProxySQL and mirroring go together.

Overview

Let me be clear: I love ProxySQL, and I think it is a great component for expanding architecture flexibility and high availability. But not all that shines is gold! In this post, I want to correctly set some expectations, and avoid selling carbon for gold (carbon has its own uses, while gold has others).

First of all, we need to cover the basics of how ProxySQL manages traffic dispatch (I don’t want to call it mirroring, and I’ll explain further below).

ProxySQL receives a connection from the application, through which we can have a simple SELECT or a more complex transaction. ProxySQL gets each query, passes it to the Query Processor, processes it, and identifies whether the query should be mirrored. If so, it duplicates the whole internal MySQL session object and associates it with a mirror queue (which refers to a mirror thread pool). If the pool is free (has an available slot in the set of concurrently active threads), the query is processed right away. If not, it stays in the queue. If the queue is full, the query is lost.

Whatever is returned from the query goes to /dev/null, and as such no result set is passed back to the client.
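The dispatch-or-drop flow can be modeled in a few lines (a deliberate simplification; the limits here are illustrative, not ProxySQL's defaults):

```python
from collections import deque

class MirrorSketch:
    """Toy model of the dispatch described above: a mirrored query runs if a
    thread slot is free, waits if the queue has room, and is dropped otherwise."""

    def __init__(self, max_concurrency, max_queue_length):
        self.max_concurrency = max_concurrency
        self.max_queue_length = max_queue_length
        self.queue = deque()
        self.active = 0
        self.lost = 0

    def submit(self, queries=1):
        for _ in range(queries):
            if self.active < self.max_concurrency:
                self.active += 1          # free slot: process right away
            elif len(self.queue) < self.max_queue_length:
                self.queue.append(1)      # wait in the mirror queue
            else:
                self.lost += 1            # queue full: query silently lost

q = MirrorSketch(max_concurrency=2, max_queue_length=3)
q.submit(10)      # burst of 10 mirrored queries
print(q.lost)     # 10 - 2 active - 3 queued = 5 dropped
```

This is why, under load, the mirrored side can end up with fewer rows than the primary, as the tests below show.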

The whole process is not free for a server. If you check the CPU utilization, you will see that the “mirroring” in ProxySQL actually doubles the CPU utilization. This means that the traffic on server A is impacted because of resource contention.

Summarizing, ProxySQL will:

  1. Send queries for execution in a different order
  2. Completely ignore any transaction isolation
  3. Execute a different number of queries on B than on A
  4. Add significant load on the server's resources

These points, coupled with the expectations I mention in the reasoning at the end of this article, make it quite clear to me that at the moment we cannot consider ProxySQL a valid mechanism for duplicating a consistent load from server A to server B.

Personally, I don’t think that the ProxySQL development team (Rene :D) should waste time on fixing this issue, as there are so many other things to cover and improve on in ProxySQL.

After working extensively with ProxySQL, and doing deep QA on mirroring, I think we should either keep it as a basic, blind traffic dispatcher or fully re-conceptualize it. With that clarified, ProxySQL “traffic dispatch” (I still don’t want to call it mirroring) remains a very interesting feature that can have useful applications, especially since it is easy to set up.

The following test results should help set the correct expectations.

The tests were simple: load data in a Percona XtraDB Cluster and use ProxySQL to replicate the load on a MySQL master-slave environment.

  • Machines for MySQL/Percona XtraDB Cluster: VM with CentOS 7, 4 CPU 3 GB RAM, attached storage
  • Machine for ProxySQL: VM CentOS 7, 8 CPU 8GB RAM

Why did I choose to give ProxySQL a higher volume of resources? I knew in advance that I might need to play a bit with a couple of settings that require more memory and CPU cycles, and I wanted to be sure ProxySQL wouldn't run into any CPU or memory problems.

The application I used to generate load is a Java application I developed for my tests. The app is at https://github.com/Tusamarco/blogs/blob/master/stresstool_base_app.tar.gz, and the whole set I used for the tests is here: https://github.com/Tusamarco/blogs/tree/master/proxymirror.

I used four different tables:

+------------------+
| Tables_in_mirror |
+------------------+
| mirtabAUTOINC    |
| mirtabMID        |
| mirtabMIDPart    |
| mirtabMIDUUID    |
+------------------+

Ok, so let's start. Note that the meaningful tests are the ones below; for the whole set, refer to the test package.

First, set up ProxySQL:

delete from mysql_servers where hostgroup_id in (500,501,700,701);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.5',500,3306,60000,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.5',501,3306,100,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.21',501,3306,20000,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.231',501,3306,20000,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.7',700,3306,1,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.7',701,3306,1,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.25',701,3306,1,400);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.0.43',701,3306,1,400);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
delete from mysql_users where username='load_RW';
insert into mysql_users (username,password,active,default_hostgroup,default_schema,transaction_persistent) values ('load_RW','test',1,500,'test',1);
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;
delete from mysql_query_rules where rule_id=202;
insert into mysql_query_rules (rule_id,username,destination_hostgroup,mirror_hostgroup,active,retries,apply) values(202,'load_RW',500,700,1,3,1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;

Test 1

The first test is mainly a simple functional test, during which I insert records using a single thread into Percona XtraDB Cluster and MySQL. No surprise: I run 3,000 loops, and at the end of the test I have 3,000 records on both platforms.

To have a baseline we can see that the ProxySQL CPU utilization is quite low:

At the same time, the number of “questions” against Percona XtraDB Cluster and MySQL is very similar:

Percona XtraDB Cluster

MySQL

The other two metrics we want to keep an eye on are Mirror_concurrency and Mirror_queue_length. These two refer respectively to mysql-mirror_max_concurrency and mysql-mirror_max_queue_length:

These two new variables and metrics were introduced in ProxySQL 1.4.0, with the intent to control and manage the load ProxySQL generates internally related to the mirroring feature. In this case, you can see we have a max of three concurrent connections and zero queue entries (all good).

Now that we have a baseline, and we know that at a functional level “it works,” let's see what happens when we increase the load.

Test 2

The scope of this test was to identify how ProxySQL behaves with a standard configuration and increasing load. It turns out that as soon as ProxySQL is under a little more load, it starts to lose some queries along the way.

Executing 3,000 loops with 40 threads results in 120,000 rows inserted into each of the four tables in Percona XtraDB Cluster. But the tables on the secondary (mirrored) platform only hold a variable number of inserted rows, between 101,359 and 104,072. This demonstrates a consistent loss of data.
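As a sanity check, the loss can be quantified with a couple of lines (figures copied from the run above):

```python
# Quantifying the data loss seen in Test 2 (figures taken from the run above).
expected = 3000 * 40                 # 3000 loops x 40 threads = 120,000 rows per table
low, high = 101_359, 104_072         # rows that actually reached the mirrored platform

best_case = (expected - high) / expected * 100
worst_case = (expected - low) / expected * 100
print(f"between {best_case:.1f}% and {worst_case:.1f}% of the writes were lost")
# → between 13.3% and 15.5% of the writes were lost
```

Losing 13-15% of the traffic is far from a faithful mirror.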

After reviewing and comparing the connections running in Percona XtraDB Cluster and the secondary, we can see that (as expected) Percona XtraDB Cluster’s number of connections is scaling and serving the number of incoming requests, while the connections on the secondary are limited by the default value of mysql-mirror_max_concurrency=16.

It is also interesting to note that ProxySQL's transaction process queue maintains its connections to the secondary longer than its connections to Percona XtraDB Cluster.

As we can see above, the queue follows an evident bell curve, peaking at 6K entries (well below the mysql-mirror_max_queue_length limit of 32K). Yet queries were dropped by ProxySQL, which indicates the queue alone is not enough to accommodate the pending work.

CPU-wise, ProxySQL (as expected) takes a few more cycles, but nothing crazy. The overhead for the simple mirroring queue processing can be seen when the main load stops, around 12:47.

Another interesting graph to keep an eye on is the one describing the executed commands inside Percona XtraDB Cluster and the secondary:

Percona XtraDB Cluster

Secondary

As you can see, the traffic on the secondary was significantly lower (669 on average, compared to Percona XtraDB Cluster’s 1.17K), then spikes when the main load on the Percona XtraDB Cluster node terminates. In short, it is quite clear that ProxySQL is not able to scale to follow the traffic on Percona XtraDB Cluster, and actually loses a significant amount of data on the secondary.

Doubling the load in Test 3 shows the same behavior, with ProxySQL reaching its limit for traffic duplication.

But can this be optimized?

The answer is, of course, yes! This is what mysql-mirror_max_concurrency is for, so let’s see what happens if we increase the value from 16 to 100 (just to make it crazy high).
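A back-of-the-envelope estimate (using the averages from the Test 2 graphs, and my own simplifying assumption that mirror throughput scales roughly linearly with the setting) suggests why 100 is comfortably high:

```python
# Back-of-the-envelope estimate. Assumption of mine: mirror throughput scales
# roughly linearly with mysql-mirror_max_concurrency -- a simplification.
incoming_rate = 1170        # avg questions/s on Percona XtraDB Cluster (from the graphs)
mirror_rate = 669           # avg questions/s the secondary sustained at the default of 16
default_concurrency = 16

per_connection = mirror_rate / default_concurrency
needed = incoming_rate / per_connection
print(f"roughly {needed:.0f} concurrent mirror connections needed to keep up")
# → roughly 28 concurrent mirror connections needed to keep up
```

So anything comfortably above ~30 should let the mirror keep pace with this workload; 100 gives ample headroom.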

Test 4 (two app node writing)

The first thing to note is that both Percona XtraDB Cluster and the secondary report the same number of rows in the tables (240,000). That is a good first win.

Second, note the number of running connections:

The graphs are now much closer, and the queue drops to just a few entries.

Commands executed in Percona XtraDB Cluster:

And commands executed in the secondary:

Average execution reports the same value, and very similar trends.

Finally, what was the CPU cost and effect?

Percona XtraDB Cluster and secondary CPU utilization:

     

As expected, some difference in the CPU usage distribution exists. But the trend is consistent between the two nodes, and the operations.

The ProxySQL CPU utilization is definitely higher than before:

But it’s absolutely manageable, and still reflects the initial distribution.

What about CRUD? So far I’ve only tested the insert operation, but what happens if we run a full CRUD set of tests?

Test 7 (CRUD)

First of all, let’s review the executed commands in Percona XtraDB Cluster:

And the secondary:

While the workloads appear very similar, selects aside, the behavior diverges significantly. This is because on the secondary the different operations are not encapsulated in a transaction: they are executed as they are received. We can see a significant difference in update and delete operations between the two.

Also, the threads in execution show a different picture on the two platforms:

Percona XtraDB Cluster

Secondary

It appears quite clear that Percona XtraDB Cluster is constantly running more threads and more connections. Nevertheless, both platforms process a similar total number of questions:

Percona XtraDB Cluster

Secondary

Both have an average of around 1.17K questions per second.

This is another indication of how much concurrent operation impacts behavior, irrespective of isolation or execution order. Below we can clearly see the different behavior by reviewing the CPU utilization:

Percona XtraDB Cluster

Secondary

Conclusions

To close this article, I want to go back to the start. We cannot consider the mirror function in ProxySQL as a real mirroring, but more as traffic redirection (check here for more reasoning on mirroring from my side).

Using ProxySQL with this approach is still partially effective in testing the load and the effect it has on a secondary platform. As we know, data consistency is not guaranteed in this scenario, and Selects, Updates and Deletes are affected (given the different data-set and result-set they manage).

The server behavior differs between the original and the mirror, if not in quantity then in quality.

I am convinced that when we need a tool able to test our production load on a different or new platform, we would do better to look at something else: possibly Query Playback, recently reviewed and significantly patched by DropBox (https://github.com/Percona-Lab/query-playback).

In the end, ProxySQL is already a cool tool. If it doesn’t cover mirroring well, I can live with that. I am interested in having it working as it should (and it does in many other functionalities).

Acknowledgments

As usual, to Rene, who worked on fixing and introducing new functionalities associated with mirroring, like queue and concurrency control.

To the Percona team who developed Percona Monitoring and Management (PMM): all the graphs here (except 3) come from PMM (some of them I customized).

Categories: MySQL

Percona Software and Roadmap Update with CEO Peter Zaitsev: Q2 2017

MySQL Performance Blog - Wed, 2017-05-24 22:24

This blog post is a summary of the Percona Software and Roadmap Update – Q2 2017 webinar given by Peter Zaitsev on May 4, 2017. This webinar reflects changes and updates since the last update (Q1 2017).

A full recording of this webinar, along with the presentation slide deck, can be found here.

Percona Software

Below are the latest and upcoming features in Percona’s software. All of Percona’s software is 100% free and open source, with no restricted “Enterprise” version. Percona doesn’t restrict users with open core or “open source, eventually” (BSL) licenses.

Percona Server for MySQL 5.7

Latest Improvements

Features About To Be Released 

  • Integration of TokuDB and Performance Schema
  • MyRocks integration in Percona Server
  • Starting to look towards MySQL 8
Percona XtraBackup 2.4

Latest Improvements

Percona Toolkit

Latest Improvements

Percona Server for MongoDB 3.4

Latest Improvements

Percona XtraDB Cluster 5.7

Latest Improvements

Performance Improvement Benchmarks

Below, you can see the benchmarks for improvements to Percona XtraDB Cluster 5.7 performance. You can read about the improvements and benchmark tests in more detail here and here.

Percona XtraDB Cluster 5.7 Integrated with ProxySQL 1.3

Percona Monitoring and Management

New in Percona Monitoring and Management

Advanced MariaDB Dashboards in PMM (Links go to PMM Demo)

Improved MongoDB Dashboards in PMM (Links go to PMM Demo)

Check out the PMM Demo

Thanks for tuning in for an update on Percona Software and Roadmap Update – Q2 2017.


How to Save and Load Docker Images to Offline Servers

MySQL Performance Blog - Tue, 2017-05-23 22:16

In this post, we’ll see how to make Docker images available to servers that don’t have access to the Internet (i.e., machines where docker pull <image_name> does not work).

As a specific example, we will do this with the latest Percona Monitoring and Management Docker images, since we had requests for this from users and customers. With the following steps, you’ll be able to deploy PMM within your secure network, without access to the Internet. Additionally, the same steps can be used when you need to upgrade the containers’ version in future releases.

There are two ways in which we can do this:

  • the easy way, by using docker save and docker load, or
  • the not-so-easy way, by setting up our own registry

We’ll focus on the first option, since the latter is a bit more convoluted. If you need your own registry, you are probably looking into something else rather than simply avoiding a firewall to pull one image to a server. Check out the Docker online docs in case option two fits your needs better.

As of this writing, 1.1.3 is the latest PMM version, so this is what we’ll use in the example. An image name is composed of three parts, namely:

  • user_account/ (note the ‘/’ at the end); or empty string (and no ‘/’) for the official Docker repo
  • image_name
  • :tag (note the ‘:’ at the beginning)

The PMM Docker images have the following syntax: percona/pmm-server:1.1.3, but you can change this in the following examples to whatever image name you want, and it will work just the same. Before moving on to the commands needed, let’s imagine that serverA is the machine that has access to the Internet and serverB is the machine behind the firewall.
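The three-part syntax can be sketched as a small helper (an illustration of my own; Docker itself parses names with more rules, e.g. registry hosts with ports):

```python
# A small helper mirroring the three-part image name syntax described above
# (illustration only; real Docker name parsing handles more cases).
def split_image_name(name):
    account, _, rest = name.rpartition("/")   # '' for official Docker repo images
    image, _, tag = rest.partition(":")       # '' if no tag was given
    return account, image, tag

print(split_image_name("percona/pmm-server:1.1.3"))  # ('percona', 'pmm-server', '1.1.3')
print(split_image_name("ubuntu:16.04"))              # ('', 'ubuntu', '16.04')
```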

The steps are simple enough. On serverA, get the image, and save it to a file:

serverA> docker pull percona/pmm-server:1.1.3
1.1.3: Pulling from percona/pmm-server
45a2e645736c: Pull complete
7a3c6f252004: Pull complete
2cc1d8878ff1: Pull complete
6c49ea4e9955: Pull complete
bc4630d3a194: Pull complete
75f0952c00bd: Pull complete
79d583a1689c: Pull complete
5a820193ac79: Pull complete
927a0614b164: Pull complete
Digest: sha256:5310b23066d00be418a7522c957b2da4155a63c3e7b08663327aef075674bc2e
Status: Downloaded newer image for percona/pmm-server:1.1.3

serverA> docker save percona/pmm-server:1.1.3 > ~/pmm-server_1.1.3.tar

Now, all you need to do is move the generated tar file to serverB (by using “scp” or any other means), and execute the following:

serverB> docker load < ~/pmm-server_1.1.3.tar
serverB> docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
percona/pmm-server   1.1.3               acc9af2459a4        3 weeks ago         1.146 GB

Now you’ll be able to use the image as if you had used docker pull percona/pmm-server:1.1.3:

serverB> docker create ... percona/pmm-server:1.1.3 /bin/true
301a9e89ee95886f497482038aa6601d6cb2e21c0532e1077fa44213ef597f38
serverB> docker run -d ... percona/pmm-server:1.1.3
dbaffa80f62bc0b80239b922bbc746d828fbbeb212a638cfafea92b827141abb
serverB> curl http://localhost | grep "Percona Monitoring and Management"
...
<p>Percona Monitoring and Management (PMM) is a free and open-source solution for managing and monitoring performance on MySQL and MongoDB, and provides time-based analysis of performance to ensure that your data works as efficiently as possible.</p>
...
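Since the tar file travels between hosts, it can be worth verifying the copy arrived intact before running docker load. A small stand-alone checksum helper (my own addition, not part of Docker):

```python
# Stand-alone helper (my own addition, not part of Docker): checksum the tar on
# serverA before copying, run the same on serverB, and compare the two digests.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large image tars don't need to fit in memory
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

# On both serverA and serverB:
#   print(sha256_of("pmm-server_1.1.3.tar"))
# The two digests must match before you trust the `docker load`.
```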

Lastly, let me add the relevant documentation links, so you have them at hand, if needed:

https://www.percona.com/doc/percona-monitoring-and-management/deploy/server/docker.html

https://docs.docker.com/engine/reference/commandline/save/

https://docs.docker.com/engine/reference/commandline/load/


ICP Counters in information_schema.INNODB_METRICS

MySQL Performance Blog - Mon, 2017-05-22 20:53

In this blog, we’ll look at ICP counters in the information_schema.INNODB_METRICS. This is part two of the Index Condition Pushdown (ICP) counters blog post series. 

As mentioned in the previous post, in this blog we will look at how to check on ICP counters on MySQL and Percona Server for MySQL. This also applies to MariaDB, since the INNODB_METRICS table is also available for MariaDB (as opposed to the Handler_icp_% counters being MariaDB-specific). We will use the same table and data set as in the previous post.

For simplicity we’ll show the examples on MySQL 5.7.18, but they also apply to the latest Percona Server for MySQL (5.7.18) and MariaDB Server (10.2.5):

mysql [localhost] {msandbox} (test) > SELECT @@version, @@version_comment;
+-----------+------------------------------+
| @@version | @@version_comment            |
+-----------+------------------------------+
| 5.7.18    | MySQL Community Server (GPL) |
+-----------+------------------------------+
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (test) > SHOW CREATE TABLE t1\G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `f1` int(11) DEFAULT NULL,
  `f2` int(11) DEFAULT NULL,
  `f3` int(11) DEFAULT NULL,
  KEY `idx_f1_f2` (`f1`,`f2`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (test) > SELECT COUNT(*) FROM t1;
+----------+
| COUNT(*) |
+----------+
|  3999996 |
+----------+
1 row in set (3.98 sec)

mysql [localhost] {msandbox} (test) > SELECT * FROM t1 LIMIT 12;
+------+------+------+
| f1   | f2   | f3   |
+------+------+------+
|    1 |    1 |    1 |
|    1 |    2 |    1 |
|    1 |    3 |    1 |
|    1 |    4 |    1 |
|    2 |    1 |    1 |
|    2 |    2 |    1 |
|    2 |    3 |    1 |
|    2 |    4 |    1 |
|    3 |    1 |    1 |
|    3 |    2 |    1 |
|    3 |    3 |    1 |
|    3 |    4 |    1 |
+------+------+------+
12 rows in set (0.00 sec)

Before proceeding with the examples, let’s see what counters we have available and how to enable and query them. The documentation page is at the following link: https://dev.mysql.com/doc/refman/5.7/en/innodb-information-schema-metrics-table.html.

The first thing to notice is that we are advised to check the validity of the counters for each version where we want to use them. The counters represented in the INNODB_METRICS table are subject to change, so for the most up-to-date list it’s best to query the running MySQL server:

mysql [localhost] {msandbox} (test) > SELECT NAME, SUBSYSTEM, STATUS FROM information_schema.INNODB_METRICS WHERE NAME LIKE '%icp%';
+------------------+-----------+----------+
| NAME             | SUBSYSTEM | STATUS   |
+------------------+-----------+----------+
| icp_attempts     | icp       | disabled |
| icp_no_match     | icp       | disabled |
| icp_out_of_range | icp       | disabled |
| icp_match        | icp       | disabled |
+------------------+-----------+----------+
4 rows in set (0.00 sec)

Looking good! We have all the counters we expected, which are:

  • icp_attempts: the number of rows where ICP was evaluated
  • icp_no_match: the number of rows that did not completely match the pushed WHERE conditions
  • icp_out_of_range: the number of rows that were checked that were not in a valid scanning range
  • icp_match: the number of rows that completely matched the pushed WHERE conditions

This link to the code can be used for reference: https://github.com/mysql/mysql-server/blob/5.7/include/my_icp.h.

After checking which counters we have at our disposal, you need to enable them (they are not enabled by default). For this, we can use the “modules” provided by MySQL to group similar counters for ease of use. This is also explained in detail in the documentation link above, under the “Counter Modules” section. INNODB_METRICS counters are quite inexpensive to maintain, as you can see in this post by Peter Z.

mysql [localhost] {msandbox} (test) > SET GLOBAL innodb_monitor_enable = module_icp;
Query OK, 0 rows affected (0.00 sec)

mysql [localhost] {msandbox} (test) > SELECT NAME, SUBSYSTEM, STATUS FROM information_schema.INNODB_METRICS WHERE NAME LIKE '%icp%';
+------------------+-----------+---------+
| NAME             | SUBSYSTEM | STATUS  |
+------------------+-----------+---------+
| icp_attempts     | icp       | enabled |
| icp_no_match     | icp       | enabled |
| icp_out_of_range | icp       | enabled |
| icp_match        | icp       | enabled |
+------------------+-----------+---------+
4 rows in set (0.00 sec)

Perfect, we now know what counters we need, and how to enable them. We just need to know how to query them, and we can move on to the examples. However, before rushing into saying that a simple SELECT against the INNODB_METRICS table will do, let’s step back a bit and see what columns we have available that can be of use:

mysql [localhost] {msandbox} (test) > DESCRIBE information_schema.INNODB_METRICS;
+-----------------+--------------+------+-----+---------+-------+
| Field           | Type         | Null | Key | Default | Extra |
+-----------------+--------------+------+-----+---------+-------+
| NAME            | varchar(193) | NO   |     |         |       |
| SUBSYSTEM       | varchar(193) | NO   |     |         |       |
| COUNT           | bigint(21)   | NO   |     | 0       |       |
| MAX_COUNT       | bigint(21)   | YES  |     | NULL    |       |
| MIN_COUNT       | bigint(21)   | YES  |     | NULL    |       |
| AVG_COUNT       | double       | YES  |     | NULL    |       |
| COUNT_RESET     | bigint(21)   | NO   |     | 0       |       |
| MAX_COUNT_RESET | bigint(21)   | YES  |     | NULL    |       |
| MIN_COUNT_RESET | bigint(21)   | YES  |     | NULL    |       |
| AVG_COUNT_RESET | double       | YES  |     | NULL    |       |
| TIME_ENABLED    | datetime     | YES  |     | NULL    |       |
| TIME_DISABLED   | datetime     | YES  |     | NULL    |       |
| TIME_ELAPSED    | bigint(21)   | YES  |     | NULL    |       |
| TIME_RESET      | datetime     | YES  |     | NULL    |       |
| STATUS          | varchar(193) | NO   |     |         |       |
| TYPE            | varchar(193) | NO   |     |         |       |
| COMMENT         | varchar(193) | NO   |     |         |       |
+-----------------+--------------+------+-----+---------+-------+
17 rows in set (0.00 sec)

There are two types: %COUNT and %COUNT_RESET. The former counts since the corresponding counters were enabled, and the latter since they were last reset (we have the TIME_% columns to check when any of these were done). This is why in our examples we are going to check the %COUNT_RESET counters, so we can reset them before running each query (as we did with FLUSH STATUS in the previous post).
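The bookkeeping behind the two column families can be sketched in a toy model (my own illustration, not InnoDB code): COUNT keeps accumulating from the moment the counter was enabled, while COUNT_RESET restarts from the last reset.

```python
# Toy model of the %COUNT vs. %COUNT_RESET semantics (illustration only,
# not InnoDB internals).
class Metric:
    def __init__(self):
        self.count = 0        # accumulates since the counter was enabled
        self.snapshot = 0     # value captured at the last reset

    def add(self, n):
        self.count += n

    def reset(self):          # conceptually, SET GLOBAL innodb_monitor_reset
        self.snapshot = self.count

    @property
    def count_reset(self):
        return self.count - self.snapshot

m = Metric()
m.add(9)                       # a first query attempts ICP on 9 rows
m.reset()                      # reset before the next measurement
m.add(23)                      # a second query attempts ICP on 23 rows
print(m.count, m.count_reset)  # 32 23
```

This is exactly why the examples below query %COUNT_RESET after resetting: it isolates the counters for a single query.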

Without further ado, let’s check how this all works together:

mysql [localhost] {msandbox} (test) > SET GLOBAL innodb_monitor_reset = module_icp;
Query OK, 0 rows affected (0.00 sec)

mysql [localhost] {msandbox} (test) > SELECT * FROM t1 WHERE f1 < 3 AND (f2 % 4) = 1;
+------+------+------+
| f1   | f2   | f3   |
+------+------+------+
|    1 |    1 |    1 |
|    2 |    1 |    1 |
+------+------+------+
2 rows in set (0.00 sec)

mysql [localhost] {msandbox} (test) > SELECT NAME, COUNT_RESET FROM information_schema.INNODB_METRICS WHERE NAME LIKE 'icp%';
+------------------+-------------+
| NAME             | COUNT_RESET |
+------------------+-------------+
| icp_attempts     |           9 |
| icp_no_match     |           6 |
| icp_out_of_range |           1 |
| icp_match        |           2 |
+------------------+-------------+
4 rows in set (0.00 sec)

mysql [localhost] {msandbox} (test) > EXPLAIN SELECT * FROM t1 WHERE f1 < 3 AND (f2 % 4) = 1;
+----+-------------+-------+------------+-------+---------------+-----------+---------+------+------+----------+-----------------------+
| id | select_type | table | partitions | type  | possible_keys | key       | key_len | ref  | rows | filtered | Extra                 |
+----+-------------+-------+------------+-------+---------------+-----------+---------+------+------+----------+-----------------------+
|  1 | SIMPLE      | t1    | NULL       | range | idx_f1_f2     | idx_f1_f2 | 5       | NULL |    8 |   100.00 | Using index condition |
+----+-------------+-------+------------+-------+---------------+-----------+---------+------+------+----------+-----------------------+
1 row in set, 1 warning (0.00 sec)

If you checked the GitHub link above, you might have noted that the header file only contains three of the counters. This is because icp_attempts is computed as the sum of the rest. As expected, icp_match equals the number of returned rows, which makes sense. icp_no_match should also make sense if we check the number of rows present without the WHERE condition on f2.

mysql [localhost] {msandbox} (test) > SELECT * FROM t1 WHERE f1 < 3;
+------+------+------+
| f1   | f2   | f3   |
+------+------+------+
|    1 |    1 |    1 |
|    1 |    2 |    1 |
|    1 |    3 |    1 |
|    1 |    4 |    1 |
|    2 |    1 |    1 |
|    2 |    2 |    1 |
|    2 |    3 |    1 |
|    2 |    4 |    1 |
+------+------+------+
8 rows in set (0.00 sec)

So, 8 – 2 = 6, which is exactly icp_no_match‘s value. Finally, we are left with icp_out_of_range. For each end of range the ICP scan detects, this counter is incremented by one. We only scanned one range in the previous query, so let’s try something more interesting (scanning three ranges):

mysql [localhost] {msandbox} (test) > SET GLOBAL innodb_monitor_reset = module_icp;
Query OK, 0 rows affected (0.00 sec)

mysql [localhost] {msandbox} (test) > SELECT * FROM t1 WHERE ((f1 < 2) OR (f1 > 4 AND f1 < 6) OR (f1 > 8 AND f1 < 12)) AND (f2 % 4) = 1;
+------+------+------+
| f1   | f2   | f3   |
+------+------+------+
|    1 |    1 |    1 |
|    5 |    1 |    1 |
|    9 |    1 |    1 |
|   10 |    1 |    1 |
|   11 |    1 |    1 |
+------+------+------+
5 rows in set (0.00 sec)

mysql [localhost] {msandbox} (test) > SELECT NAME, COUNT_RESET FROM information_schema.INNODB_METRICS WHERE NAME LIKE 'icp%';
+------------------+-------------+
| NAME             | COUNT_RESET |
+------------------+-------------+
| icp_attempts     |          23 |
| icp_no_match     |          15 |
| icp_out_of_range |           3 |
| icp_match        |           5 |
+------------------+-------------+
4 rows in set (0.01 sec)

We have now scanned three ranges on f1, namely: (f1 < 2), (4 < f1 < 6) and (8 < f1 < 12). This is correctly reflected in the corresponding counter. Remember that the MariaDB Handler_icp_attempts status counter we looked at in the previous post does not take into account the out-of-range counts. This means the two “attempts” counters will not be the same!

mysql [localhost] {msandbox} (test) > SET GLOBAL innodb_monitor_reset = module_icp; SET GLOBAL innodb_monitor_reset = dml_reads; FLUSH STATUS;
...
mysql [localhost] {msandbox} (test) > SELECT * FROM t1 WHERE ((f1 < 2) OR (f1 > 4 AND f1 < 6) OR (f1 > 8 AND f1 < 12)) AND (f2 % 4) = 1;
...
5 rows in set (0.00 sec)

mysql [localhost] {msandbox} (test) > SELECT NAME, COUNT_RESET FROM information_schema.INNODB_METRICS WHERE NAME LIKE 'icp_attempts';
+--------------+-------------+
| NAME         | COUNT_RESET |
+--------------+-------------+
| icp_attempts |          23 |
+--------------+-------------+
1 row in set (0.00 sec)

mysql [localhost] {msandbox} (test) > SHOW STATUS LIKE 'Handler_icp_attempts';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| Handler_icp_attempts | 20    |
+----------------------+-------+
1 row in set (0.00 sec)

It can be a bit confusing to have two counters that supposedly measure the same counts yielding different values, so watch this if you use MariaDB.
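The relationship between the two sets of counters can be verified with a few lines (values copied from the outputs above):

```python
# Sanity-checking the counter arithmetic from the two runs above.
first  = {"no_match": 6,  "out_of_range": 1, "match": 2, "attempts": 9}
second = {"no_match": 15, "out_of_range": 3, "match": 5, "attempts": 23}

for run in (first, second):
    # icp_attempts is derived: the sum of the other three counters
    assert run["attempts"] == run["no_match"] + run["out_of_range"] + run["match"]

# MariaDB's Handler_icp_attempts excludes the out-of-range count:
print(second["attempts"] - second["out_of_range"])  # 20
```

The result matches the Handler_icp_attempts value of 20 shown above.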

ICP Counters in PMM

Today you can find an ICP counters graph for MariaDB (Handler_icp_attempts) in PMM 1.1.3.

Additionally, in release 1.1.4 you’ll find graphs for ICP metrics from information_schema.INNODB_METRICS: just look for the INNODB_METRICS-based graph on the InnoDB Metrics dashboard!

I hope you found this blog post series useful! Let me know if you have any questions or comments below.


Webinar May 23, 2017: MongoDB Monitoring and Performance for the Savvy DBA

MySQL Performance Blog - Mon, 2017-05-22 16:54

Join Percona’s Senior Technical Services Engineer Bimal Kharel on Tuesday, May 23, 2017, as he presents a webinar on MongoDB monitoring called How to Help Your DBA’s Sleep Better at Night at 10:00 am PDT / 1:00 pm EDT (UTC-7).

Register Now

Are you trying to stay on top of your database before things turn ugly? Between metrics for throughput, database performance, resource utilization, resource saturation, errors (asserts) and many others, how do you know which one needs to be looked at NOW (and which can wait)?

Both DBAs and system admins must stay on top of the systems they manage. But filtering between metrics that need immediate attention and those that should be watched over time is challenging. In this webinar, Bimal narrows down the list of metrics that help you decide whether the on-call DBA gets their recommended eight hours of shuteye, or gets to run on caffeine with no sleep.

Bimal also discusses which graphs relate to each other, with examples from Percona’s Monitoring and Management (PMM) tool, to help you understand how things in MongoDB can impact other areas.

Please register for the webinar here.

Bimal Kharel, Senior Technical Services Engineer, Percona

Bimal is a MongoDB support engineer at Percona. Before Percona he worked as a MongoDB DBA at EA and Charles Schwab. He has been in various roles throughout his career, from graphics to web developer to systems administration. MongoDB was the first database Bimal got into (he used MySQL for some websites but never other relational databases).


Percona Toolkit 3.0.3 is Now Available

MySQL Performance Blog - Fri, 2017-05-19 16:23

Percona announces the release of Percona Toolkit 3.0.3 on May 19, 2017.

Percona Toolkit is a collection of advanced command-line tools that perform a variety of MySQL and MongoDB server and system tasks too difficult or complex for DBAs to perform manually. Percona Toolkit, like all Percona software, is free and open source.

You can download Percona Toolkit packages from the web site or install them from official repositories.

This release includes the following changes:

New Features

  • Added the --skip-check-slave-lag option for pt-table-checksum, pt-online-schema-change, and pt-archiver. This option can be used to specify a list of servers for which to skip checking slave lag.
  • 1642754: Added support for collecting replication slave information in pt-stalk.
  • PT-111: Added support for collecting information about variables from Performance Schema in pt-stalk. For more information, see 1642753.
  • PT-116: Added the --[no]use-insert-ignore option for pt-online-schema-change to force or prevent using IGNORE on INSERT statements. For more information, see 1545129.

Bug Fixes

  • PT-115: Fixed OptionParser to accept repeatable DSNs.
  • PT-126: Fixed pt-online-schema-change to correctly parse comments. For more information, see 1592072.
  • PT-128: Fixed pt-stalk to include memory usage information. For more information, see 1510809.
  • PT-130: Fixed pt-mext to work with non-empty RSA public key. For more information, see 1587404.
  • PT-132: Fixed pt-online-schema-change to enable --no-drop-new-table when --no-swap-tables and --no-drop-triggers are used.

You can find release details in the release notes. Report bugs in Toolkit’s launchpad bug tracker.


MongoDB Authentication and Roles: Creating Your First Personalized Role

MySQL Performance Blog - Wed, 2017-05-17 18:53

In this blog post, we’ll walk through the native MongoDB authentication and roles, and learn how to create personalized roles. It is a continuation of Securing MongoDB instances.

As said before, MongoDB features a few authentication methods and built-in roles that offer great control of both who is connecting to the database and what they are allowed to do. However, some companies have their own security policies that are often not covered by default roles. This blog post explains not only how to create personalized roles, but also how to grant minimum access to a user.

Authentication Methods

SCRAM-SHA-1 and MONGODB-CR are challenge-response protocols. All users and passwords are stored hashed in the MongoDB instance. Challenge-response authentication methods are widely used on the internet in many client-server applications. These methods do not send the password as plain text to the server when the client starts authenticating. Each new session uses a different hash/code, which stops people from recovering the password by sniffing the network.

The MONGODB-CR method was deprecated in version 3.0.
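The core idea of a challenge-response exchange can be sketched in a few lines (a deliberately simplified illustration of my own, NOT the actual SCRAM-SHA-1 handshake, which adds salts, iteration counts and mutual proofs):

```python
# Minimal challenge-response sketch (simplified illustration only).
import hashlib, hmac, os

stored_secret = hashlib.sha1(b"s3cret").digest()   # what the server keeps, not the password

# Server side: issue a fresh random challenge for this session
challenge = os.urandom(16)

# Client side: prove knowledge of the password without transmitting it
client_proof = hmac.new(stored_secret, challenge, hashlib.sha1).hexdigest()

# Server side: recompute and compare; a sniffed proof is useless for the next
# session, because every session gets a new challenge
server_proof = hmac.new(stored_secret, challenge, hashlib.sha1).hexdigest()
print(client_proof == server_proof)  # True
```

The password itself never crosses the wire, and each session's proof is bound to a one-time challenge.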

The x.509 authentication is an internal authentication mechanism that allows instances and clients to communicate with each other. All certificates are signed by the same Certificate Authority and must be valid. All the network traffic is encrypted with a given key, and it is only possible to read data with a valid certificate signed by that key.

MongoDB also offers external authentication such as LDAP and Kerberos. When using LDAP, users can log in to MongoDB using their centralized passwords. LDAP is commonly used to manage users and passwords in large networks. Kerberos is a service that allows users to log in only once and then generates access tickets, so that the users are allowed to access other services. Some configuration is necessary to use external authentication.

Built-in roles

  • read: collStats, dbHash, dbStats, find, killCursors, listCollections, listIndexes.
  • readWrite: all read privileges + convertToCapped, createCollection, createIndex, dbStats, dropCollection, dropIndex, emptycapped, insert, listIndexes, remove, renameCollectionSameDB, update.
  • readAnyDatabase: allows the user to perform read in any database except the local and the config databases.

And so on…

In this tutorial, we are going to give specific privileges to a user who is only allowed to read the database, except for one specific collection to which they can also write.

For this tutorial, we are using MongoDB 3.4 with previously configured authentication.

Steps:

  1. Create the database:
    mongo --authenticationDatabase admin -u superAdmin -p
    use percona
    db.foo.insert({x : 1})
    db.foo2.insert({x : 1})
  2. Create a new user:
    > db.createUser({user : 'client_read', pwd : '123', roles : ['read']})
    Successfully added user: { "user" : "client_read", "roles" : [ "read" ] }
  3. Log in with the user that has just been created and check the user access:
    ./mongo localhost/percona -u client_read -p
    MongoDB shell version v3.4.0-rc5
    Enter password:
    > db.foo.find()
    { "_id" : ObjectId("586bc2e9cac0bbb93f325d11"), "x" : 1 }
    > db.foo2.find().count()
    1
    // If the user tries to insert documents, they receive an error:
    > db.foo.insert({x : 2})
    WriteResult({
        "writeError" : {
            "code" : 13,
            "errmsg" : "not authorized on percona to execute command { insert: "foo", documents: [ { _id: ObjectId('586bc36e7b114fb2517462f3'), x: 2.0 } ], ordered: true }"
        }
    })
  4. Log out and log in again with administrator user to create a new role for this user:
    mongo --authenticationDatabase admin -u superAdmin -p
    use percona
    db.createRole({
        role : 'write_foo2_Collection',
        privileges : [
            {resource : {db : "percona", collection : "foo2"}, actions : ["insert","remove"]}
        ],
        roles : ["read"]
    })
    db.updateUser('client_read', {roles : ['write_foo2_Collection']})
  5. Check the new access:
    ./mongo localhost/percona
    > db.auth('client_read','123')
    1
    > show collections
    foo
    foo2
    > db.foo.find()
    { "_id" : ObjectId("586bc2e9cac0bbb93f325d11"), "x" : 1 }
    > db.foo2.insert({y : 2})
    WriteResult({ "nInserted" : 1 })
    > db.foo.insert({y : 2}) // does not have permission
    WriteResult({
        "writeError" : {
            "code" : 13,
            "errmsg" : "not authorized on percona to execute command { insert: \"foo\", documents: [ { _id: ObjectId('586bc5e26f05b3a5db849359'), y: 2.0 } ], ordered: true }"
        }
    })
  6. We can also add access to other database resources. Let’s suppose we would like to grant this newly created user permission to execute the getLog command. This command is available in the clusterAdmin role, but we do not want to give the user all of that role’s access. See https://docs.mongodb.com/v3.0/reference/privilege-actions/#authr.getLog.

    There is a caveat here. If we want to grant cluster privileges to a user, we must create the role in the admin database. Otherwise, the command fails:

    db.grantPrivilegesToRole(
        "write_foo2_Collection",
        [
            {resource : {cluster : true}, actions : ["getLog"]}
        ]
    )
    Roles on the 'percona' database cannot be granted privileges that target other databases or the cluster
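    The rule behind that error can be sketched as a small validity check (a hypothetical helper, not MongoDB's actual validation code): a role defined in a regular database may only hold privileges scoped to that same database, while roles in admin may also target the cluster or other databases.

```python
def privileges_allowed(role_db, privileges):
    """Hypothetical check mirroring the rule for role privileges."""
    if role_db == "admin":
        return True  # admin roles may target any database or the cluster
    for p in privileges:
        resource = p["resource"]
        if resource.get("cluster"):        # cluster-wide resource
            return False
        if resource.get("db") != role_db:  # cross-database target
            return False
    return True

cluster_priv = [{"resource": {"cluster": True}, "actions": ["getLog"]}]
assert not privileges_allowed("percona", cluster_priv)  # the failure above
assert privileges_allowed("admin", cluster_priv)        # works from admin
```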

  7. Create the same role in the admin database instead. Note that this user will only work properly if the admin database is included in any eventual restore; otherwise, the privileges are lost:

    use admin
    db.createRole({
        role : 'write_foo2_Collection_getLogs',
        privileges : [
            {resource : {db : "percona", collection : "foo2"}, actions : ["insert","remove"]},
            {resource : {cluster : true}, actions : ["getLog"]}
        ],
        roles : [ {role : "read", db : "percona"} ]
    })
    use percona
    db.updateUser( "client_read", {
        roles : [
            { role : "write_foo2_Collection_getLogs", db : "admin" }
        ]
    })

  8. Now the user has the same privileges as before, plus the getLog permission. We can test this user’s new access with:

    mongo --authenticationDatabase percona -u client_read -p
    > db.adminCommand({getLog : 'global'})
    {
        "totalLinesWritten" : 287,
        "log" : [ ....
        ....
    }

I hope you find this post useful. Please feel free to ping me on Twitter at @AdamoTonete or @percona and let us know your thoughts.

Categories: MySQL

Percona Live Open Source Database Conference 2017 Slides and Videos Available

MySQL Performance Blog - Tue, 2017-05-16 22:13

The slides and videos from the Percona Live Open Source Database Conference 2017 are available for viewing and download. The videos and slides cover the keynotes, breakout sessions and MySQL and MongoDB 101 sessions.

To view slides, go to the Percona Live agenda, select the talk you want slides for from the schedule, and click through to the talk’s web page. The slides are available below the talk description. There is also a page with all the slides, searchable by topic, talk title, speaker, company or keywords.

To view videos, go to the Percona Live 2017 video page. The available videos are searchable by topic, talk title, speaker, company or keywords.

A few slides and videos are still outstanding due to unforeseen circumstances; we will upload them as they become available.

Some examples of videos and slide decks from the Percona Live conference:

MongoDB 101: Efficient CRUD Queries in MongoDB
Adamo Tonete, Senior Technical Engineer, Percona
Video: https://www.percona.com/live/17/content/efficient-crud-queries-mongodb
Slides: https://www.percona.com/live/17/sessions/efficient-crud-queries-mongodb

MySQL 101: Choosing a MySQL High Availability Solution
Marcos Albe, Principal Technical Services Engineer, Percona
Video: https://www.percona.com/live/17/content/choosing-mysql-high-availability-solution
Slides: https://www.percona.com/live/17/sessions/choosing-mysql-high-availability-solution

Breakout Session: Using the MySQL Document Store
Mike Zinner, Sr. Software Development Director and Alfredo Kojima, Sr. Software Development Manager, Oracle
Video: https://www.percona.com/live/17/content/using-mysql-document-store
Slides: https://www.percona.com/live/17/sessions/using-mysql-document-store

Keynote: Continuent is Back! But What Does Continuent Do Anyway?
Eero Teerikorpi, Founder and CEO and MC Brown, VP Products, Continuent
Video: https://www.percona.com/live/17/content/continuent-back-what-does-continuent-do-anyway
Slides: https://www.percona.com/live/17/sessions/continuent-back-what-does-continuent-do-anyway

Please let us know if you have any issues. Enjoy the videos!

Percona Live Europe 2017: Dublin, Ireland!

This year’s Percona Live Europe will take place September 25th-27th, 2017, in Dublin, Ireland. Put it on your calendar now! Information on speakers, talks, sponsorship and registration will be available in the coming months.

We have developed multiple sponsorship options to allow participation at a level that best meets your partnering needs. Our goal is to create a significant opportunity for our partners to interact with Percona customers, other partners and community members. Sponsorship opportunities are available for Percona Live Europe 2017.

Download a prospectus here.

We look forward to seeing you there!


Percona Server for MongoDB 3.2.13-3.3 is Now Available

MySQL Performance Blog - Mon, 2017-05-15 17:55

Percona announces the release of Percona Server for MongoDB 3.2.13-3.3 on May 15, 2017. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open-source, fully compatible, highly scalable, zero-maintenance downtime database supporting the MongoDB v3.2 protocol and drivers. It extends MongoDB with MongoRocks, Percona Memory Engine, and PerconaFT storage engine, as well as enterprise-grade features like External Authentication, Audit Logging, Profiling Rate Limiting, and Hot Backup at no extra cost. Percona Server for MongoDB requires no changes to MongoDB applications or code.

NOTE: We deprecated the PerconaFT storage engine. It will not be available in future releases.

This release is based on MongoDB 3.2.13 and includes the following additional changes:

  • #PSMDB-127: Fixed cleanup of deleted documents and indexes for MongoRocks. When you upgrade to this release, deferred compaction may occur and cause database size to decrease significantly.
  • #PSMDB-133: Added the wiredTigerCheckpointSizeMB variable, set to 1000 in the configuration template for WiredTiger. Valid values are 32 to 2048 (2GB), with the latter being the default.
  • #PSMDB-138: Implemented SERVER-23418 for MongoRocks.
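The documented bounds for wiredTigerCheckpointSizeMB above can be sketched as a simple range check (the helper name is hypothetical, not part of the server):

```python
def validate_checkpoint_size_mb(value):
    """Range-check wiredTigerCheckpointSizeMB: valid values are 32..2048 MB."""
    if not 32 <= value <= 2048:
        raise ValueError(
            "wiredTigerCheckpointSizeMB must be between 32 and 2048, got %d" % value
        )
    return value

# The configuration template's setting of 1000 is within range:
assert validate_checkpoint_size_mb(1000) == 1000
# Out-of-range values are rejected:
try:
    validate_checkpoint_size_mb(4096)
except ValueError:
    print("rejected")
```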

Percona Server for MongoDB 3.2.13-3.3 release notes are available in the official documentation.


Percona Server for MySQL 5.7.18-14 is Now Available

MySQL Performance Blog - Fri, 2017-05-12 18:00

Percona announces the GA release of Percona Server for MySQL 5.7.18-14 on May 12, 2017. Download the latest version from the Percona web site or the Percona Software Repositories. You can also run Docker containers from the images in the Docker Hub repository.

Based on MySQL 5.7.18, including all the bug fixes in it, Percona Server for MySQL 5.7.18-14 is the current GA release in the Percona Server for MySQL 5.7 series. All Percona software is open-source and free. Find release details in the 5.7.18-14 milestone at Launchpad.

Bugs Fixed:
  • A deadlock could occur in I/O-bound workloads when the server was using several small buffer pool instances in combination with small redo log files and the innodb_empty_free_list_algorithm variable set to backoff. Bug fixed #1651657.
  • Fixed a memory leak in Percona TokuBackup. Bug fixed #1669005.
  • Compressed columns with dictionaries could not be added to a partitioned table by using ALTER TABLE. Bug fixed #1671492.
  • Fixed a memory leak that happened in case of failure to create a multi-threaded slave worker thread. Bug fixed #1675716.
  • In-place upgrade from Percona Server 5.6 to 5.7 by using standalone packages would fail if /var/lib/mysql wasn’t defined as the datadir. Bug fixed #1687276.
  • Combination of using any audit API-using plugin, like Audit Log Plugin and Response Time Distribution, with multi-byte collation connection and PREPARE statement with a parse error could lead to a server crash. Bug fixed #1688698 (upstream #86209).
  • Fix for a #1433432 bug caused a performance regression due to suboptimal LRU manager thread flushing heuristics. Bug fixed #1631309.
  • Creating compressed columns with dictionaries in MyISAM tables by specifying partition engines did not result in an error. Bug fixed #1631954.
  • It was not possible to configure basedir as a symlink. Bug fixed #1639735.
  • Replication slave did not report Seconds_Behind_Master correctly when running in multi-threaded slave mode. Bug fixed #1654091 (upstream #84415).
  • DROP TEMPORARY TABLE would create a transaction in binary log on a read-only server. Bug fixed #1668602 (upstream #85258).
  • Processing GTIDs in the relay log that had already been executed was causing write/fsync amplification. Bug fixed #1669928 (upstream #85141).
  • Text/BLOB fields were not handling sorting of the empty string consistently between InnoDB and filesort. Bug fixed #1674867 (upstream #81810) by porting a Facebook patch for MySQL.
  • InnoDB adaptive hash index was using a partitioning algorithm which would produce uneven distribution when the server contained many tables with an identical schema. Bug fixed #1679155 (upstream #81814).
  • For plugin variables that are signed numbers, doing a SHOW VARIABLES would always show an unsigned number. Fixed by porting a Facebook patch for MySQL.

Other bugs fixed: #1629250 (upstream #83245), #1660828 (upstream #84786), #1664519 (upstream #84940), #1674299, #1670588 (upstream #84173), #1672389, #1674507, #1675623, #1650294, #1659224, #1662908, #1669002, #1671473, #1673800, #1674284, #1676441, #1676705, #1676847 (upstream #85671), #1677130 (upstream #85678), #1677162, #1677943, #1678692, #1680510 (upstream #85838), #1683993, #1684012, #1684078, #1684264, #1687386, #1687432, #1687600, and #1674281.

The release notes for Percona Server for MySQL 5.7.18-14 are available in the online documentation. Please report any bugs on the launchpad bug tracker.
