MySQL

MySQL Sharding Models for SaaS Applications

MySQL Performance Blog - Mon, 2017-01-30 23:16

In this blog post, I’ll discuss MySQL sharding models, and how they apply to SaaS application environments.

MySQL is one of the most popular database technologies used to build many modern SaaS applications, ranging from simple productivity tools to business-critical applications for the financial and healthcare industries.

Pretty much any large scale SaaS application powered by MySQL uses sharding to scale. In this blog post, we will discuss sharding choices as they apply to these kinds of applications.

In MySQL, unlike in some more modern technologies such as MongoDB, there is no standard sharding implementation that the vast majority of applications use. In fact, if anything “no standard” is the standard. The common practice is to roll your own sharding framework, as famous MySQL deployments such as Facebook and Twitter have done. MySQL Cluster – the MySQL software that has built-in Automatic Sharding functionality – is rarely deployed (for a variety of reasons). MySQL Fabric, which has been the official sharding framework, has no traction either.

When sharding today, you have a choice of rolling your own system from scratch, using a comprehensive sharding platform such as Vitess, or using a proxy solution to assist you with sharding. For proxy solutions, MySQL Router is the official option, but in practice third-party solutions such as the open source ProxySQL, the commercial ScaleArc and the semi-commercial (BSL) MariaDB MaxScale are widely used. Keep in mind, however, that traffic routing is only one of the problems that exist in large-scale sharding implementations.

Beneath all these “front end” choices (sharding in the application’s database connection framework or in a database proxy), there are lower-level decisions you have to make: namely, how your data is going to be laid out and organized on the MySQL nodes.

When it comes to SaaS applications, at least one answer is simple. It typically makes sense to shard your data by “customer” or “organization” using some sort of mapping tables. In the vast majority of cases, a single node (or replicated cluster) should be powerful enough to handle all the data and load coming from each customer.
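
As a minimal sketch of such a mapping (all table and column names here are hypothetical, not from the original post), the application keeps a small directory table that resolves a customer to its shard, and every request starts with that lookup:

CREATE TABLE customer_shard_map (
  customer_id BIGINT UNSIGNED NOT NULL PRIMARY KEY,
  shard_id    INT UNSIGNED    NOT NULL,   -- which MySQL node (or replicated cluster) holds this customer
  schema_name VARCHAR(64)     NOT NULL,   -- schema on that shard, if using schema-per-customer
  KEY idx_shard (shard_id)
) ENGINE=InnoDB;

-- Route a request: find the shard for customer 42, then connect to that node
SELECT shard_id, schema_name FROM customer_shard_map WHERE customer_id = 42;

The mapping table itself usually lives on a small, highly available “directory” database that every application server can reach.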

What Should I Ask Myself Now?

The next set of questions you should ask yourself concerns your SaaS application:

  • How much revenue per customer are you generating?
  • Do your customers (or regulations) require data segregation?
  • Are all the customers about the same, or are there outliers?
  • Are all your customers running the same database schema?

I address the answers in the sections below.

How Much Revenue?

How much revenue per customer you’re generating is an important number. It defines how much infrastructure cost per customer you can afford. In the case of “freemium” models, where customers generate less than $1 a month on average, you might need to ensure low overhead per customer (even if you have to compromise on customer isolation).

Typically with low-revenue customers, you have to co-locate their data inside the same MySQL instance (potentially even in the same tables). In the case of high-revenue customers, isolation in separate MySQL instances (or even containers or virtualized OS instances) might be possible.

Data Segregation?

Isolation is another important area of consideration. Some enterprise customers might require that their data is physically separate from others. There could also be government regulations in play that require customer data to be stored in a specific physical location. If this is the case, you’re looking at completely dedicated customer environments. Or at the very least, separate database instances (which come with additional costs).

Customer Types?

Customer size and requirements are also important. A system designed to handle all customers of approximately the same scale (for example, personal accounting) is going to be different than if you are in the business of blog hosting. Some blogs might be 10,000 times more popular than the average.

Same Database Schema?

Finally, there is the big question of whether all your customers are running the same database schema and the same software version. If you want to support different software versions (for example, if your customers require a negotiated maintenance window for upgrades) or different database schemas (for example, if the schema depends on the custom functionality and modules each customer uses), keeping such customers in different MySQL schemas makes sense.

Sharding Models

This gets us to the following sharding isolation models, ranging from lowest to highest:

  • Customers Share Schemas. This is the best choice when you have very large numbers of low-revenue customers. In this case, you map multiple customers to the same set of tables, and include something like a customer_id field in them to filter customer data (see the sketch after this list). This approach minimizes per-customer overhead, but it also reduces customer isolation: it is harder to back up and restore data for individual customers, and it is easier to introduce coding mistakes that access other customers’ data. This method does not mean there is only one schema, but rather that there is a one-to-many relationship between schemas and customers. For example, you might have 100 schemas per MySQL instance, each handling 1,000 to 10,000 customers (depending on the application). Note that with a well-designed sharding implementation, you should be able to map customers individually to schemas. This allows you to have key customer data stored in dedicated schemas, or even on dedicated nodes.
  • Schema per Customer. This is probably the most common sharding approach in MySQL-powered SaaS applications, especially ones with substantial revenue ($10+ per month per customer). In this model, each customer’s data is stored in its own schema (database). This makes it very easy to back up and restore individual customers, allows customers to have different schemas (i.e., add custom tables), and lets them run different versions of the application if desired. This approach also allows the application server to use different MySQL users when connecting on behalf of different customers, which adds an extra level of protection against accidental (or intentional) access to data that belongs to other customers. The schema-per-customer approach also makes it easier to move shards around, and limits maintenance impact. The downside is higher overhead. It also results in a large number of tables per instance, and potentially a large number of files (which can be hard to manage).
  • Database Instance per Customer. You achieve even better isolation by having a MySQL instance per customer. This approach, however, increases overhead even further. The recent rise of light virtualization technologies and containers has reduced its usage.
  • OS Instance/Container per Customer. This approach improves isolation even further. It can be used for all customers, or applied to selected customers while the majority use the Schema per Customer model. A dedicated OS instance, with improved isolation and better performance SLAs, might be a feature of some premium customer tiers. This method not only provides better isolation, it also lets you handle outliers better: you might choose to run the majority of your customers on the hardware (or cloud instances) with the best price/performance numbers, and place some of the larger customers on the highest-performance nodes.
  • Environment per Customer. Finally, if you take this all the way, you can build completely separate environments for customers. This includes databases, application servers and other required components. It is especially useful if you need to deploy the application close to the customer, which includes the appliance model, or deployment in the customer’s data center or cloud provider. It also allows you to accommodate customers whose data must be stored in a specific location, often due to government regulations. It is worth noting that many SaaS applications, even if they do not quite have one environment per customer, have multiple independent environments, often hosted in different locations or availability zones. Such setups let you limit the impact of large-scale failures to only a portion of your customers, which avoids overloading your customer service group and allows the operations organization to focus on repairing the smaller affected environments.
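
As a minimal sketch of the shared-schema model mentioned in the first bullet (table and column names are illustrative only), every multi-tenant table carries the tenant key, and every query must filter on it:

CREATE TABLE invoices (
  customer_id BIGINT UNSIGNED NOT NULL,   -- tenant key present in every shared table
  invoice_id  BIGINT UNSIGNED NOT NULL,
  amount      DECIMAL(10,2)   NOT NULL,
  created_at  DATETIME        NOT NULL,
  PRIMARY KEY (customer_id, invoice_id)   -- leading customer_id keeps each tenant's rows clustered together
) ENGINE=InnoDB;

-- Every application query filters on the tenant key; forgetting this filter is the classic cross-tenant bug
SELECT invoice_id, amount
FROM invoices
WHERE customer_id = 42 AND created_at >= '2017-01-01';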

The farther you go down this route (from shared schemas to an environment per customer), the more important it is to have a high level of automation. With a shared schema, you can often get by with little automation (some environments set up manually, and all the schemas pre-created). If customer sign-up requires setting up a dedicated database instance or a whole environment, a manual process doesn’t scale. For this type of setup, you need state-of-the-art automation and orchestration.

Conclusion

I hope this helps you understand your options for MySQL sharding models. Each of the different sharding models for SaaS applications powered by MySQL has benefits and drawbacks. As you can see, many of these approaches require you to work with a large number of tables in MySQL, which will be the topic of one of my next posts!

Categories: MySQL

MariaDB ColumnStore

MySQL Performance Blog - Mon, 2017-01-30 17:29

Last month, MariaDB officially released MariaDB ColumnStore, their column store engine for MySQL. This post discusses what it is (and isn’t), why it matters and how you can approach a test of it.

What is ColumnStore?

ColumnStore is a storage engine that turns traditional MySQL storage concepts on their head. Instead of storing the data by row, a column store stores the data by column (obviously). This provides advantages for certain types of data, and certain types of queries run against that data. See my previous post for more details on column-based storage systems.

ColumnStore is a fork of InfiniDB and carries forward many of the concepts behind that product. InfiniDB ceased operations in 2014. With the front end managed through MariaDB, you get access to all of the expected security and audit options of MariaDB. MariaDB designed ColumnStore as a massively parallel database, working best in an environment with multiple servers. This is somewhat different than a traditional row store database.

ColumnStore stores columnar data in a concept called an “extent.” An extent contains a range of values for a single column. Each extent contains no more than 8 million values. It stores additional values in a new extent. The extents for a single column get distributed across the database nodes, known as “Performance Modules” in ColumnStore. It stores each unique extent on more than one node, thus providing data redundancy and removing the need for replication. If a node is down, and it contains an extent needed for a query, that same extent is found on another node in the environment. This data redundancy also provides a high availability environment.

The query engine determines which extents are needed to process a query. Since the data in an extent is often pre-ordered (time series data, for example), many queries can skip individual extents entirely, because those extents cannot contain any data needed for the query. If we are only looking for data from February 2017, for example, extents containing data outside of that range get ignored. However, if a query requires data from many or all extents of a single column, the query takes much longer to complete.

Unlike some traditional column store vendors, which take an all-or-nothing approach to storage, MariaDB decided to go with a mixed concept. In a MariaDB MySQL database, you can mix traditional InnoDB storage with the new ColumnStore storage, just like you used to mix InnoDB and MyISAM. This presents some nice options, not the least of which is that it provides a way to “dip your toe” into the world of column stores. On the other hand, it could lead to people using the ColumnStore engine in non-optimal ways. Also, the differences in what is considered optimal architecture between these two storage options make it hard to see how this plays out in the real world.
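
As a hedged illustration of this mixing (the table names are hypothetical, and ENGINE=ColumnStore assumes a MariaDB installation with the ColumnStore engine available), a transactional table and its analytical counterpart can live side by side on the same server:

-- Row store for transactional, point-lookup traffic
CREATE TABLE orders (
  order_id    BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  customer_id BIGINT UNSIGNED NOT NULL,
  order_date  DATE            NOT NULL,
  total       DECIMAL(10,2)   NOT NULL
) ENGINE=InnoDB;

-- Column store copy of the same data for analytical scans (ColumnStore does not use indexes)
CREATE TABLE orders_analytics (
  order_id    BIGINT UNSIGNED NOT NULL,
  customer_id BIGINT UNSIGNED NOT NULL,
  order_date  DATE            NOT NULL,
  total       DECIMAL(10,2)   NOT NULL
) ENGINE=ColumnStore;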

Data Definition

As discussed in the earlier post, column storage works great for specific types of data and queries. It is important that your data definitions are as tight as possible, and that your queries are appropriate for column-based data.

Many people set their field definition as VARCHAR(256) when setting up a new database, because they might not know what type of data will be stored in the new field. This broad definition allows you to store whatever you throw at the database. The negative effect for a row store is that it can cause over-allocation of storage, but it has only a minimal effect on queries.

In a column store, the field definition can drive decisions about the compression methods for storing the data, along with sorting implications. Columnar data can use storage more efficiently than a row store, since the data for a single column is well-defined. This leads to selecting the best compression algorithm for the data. If that data is poorly defined, the selected compression algorithm might not be the best for the data.

Sorting is also a problem in a column store when the data types are not well-defined. We’ve all seen integer or date data that is sorted alphabetically. While it can be annoying, we can still adjust to that sorting method to find what we need. Since a column store is often used to perform analytical queries over a range of data, this poorly sorted data can present a bigger problem. If you specify a column as VARCHAR and only store date information in it, that data is sorted alphabetically. The same column defined as DATE causes the data to be sorted by date. This chart shows the difference (date format is mm/dd/yy):

Alphabetic Sort    Date Sort
01/01/17           01/01/17
01/11/17           01/02/17
01/02/17           01/04/17
01/21/17           01/11/17
01/4/17            01/21/17
11/01/17           02/01/17
11/02/17           11/01/17
02/01/17           11/02/17

Imagine running a query over a range of dates (requesting all activity in the months of January and February 2017, for example). With the alphabetic sort, this requires working through the whole file, since the data for November shows up between the data for January and February. With the date sort, the query only reads until the end of February; we know there can be no more matching data after that. The alphabetic sort leads to more I/O, more query time and less happiness on the part of the user.
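
As a sketch of the point above (column names are illustrative; ENGINE=ColumnStore assumes the engine is installed), the difference comes down to the data type chosen for the date column:

-- Poorly defined: dates stored as strings sort and compare alphabetically
CREATE TABLE events_text (event_date VARCHAR(256), amount DECIMAL(10,2)) ENGINE=ColumnStore;

-- Well defined: a real DATE column sorts chronologically and compresses better
CREATE TABLE events_date (event_date DATE, amount DECIMAL(10,2)) ENGINE=ColumnStore;

-- A range scan over January and February 2017 can stop early only with the DATE definition
SELECT SUM(amount) FROM events_date WHERE event_date BETWEEN '2017-01-01' AND '2017-02-28';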

Why Should You Care About ColumnStore?

The first reason is that it allows you to try out column storage without doing a massive shift in technology and with minimal effort. By setting up some tables in a MariaDB database to use the ColumnStore engine, you can benefit from the storage efficiencies and faster query capabilities, provided that the data you’ve selected for this purpose is sound. This means that the data definitions should be tight (always a good plan anyway), and the queries should be more analytical than transactional. For a purely transactional workflow, a row store is the logical choice. For a purely analytical workflow, a column store is the logical choice. ColumnStore allows you to easily mix the two storage options so that you can have the best match possible. It is still important to know what type of workflow you’re dealing with, and match the storage engine to that need.

Another solid reason is that it is a great fit if you are already doing analysis over massive amounts of data. Any column store shines when asked to look at relatively few columns of data (ideally the column or two that are being aggregated and other columns to locate and group the data). If you are already running these types of queries in MySQL, ColumnStore would likely be a good fit.
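
For example (reusing the hypothetical orders_analytics table from the sketch above), the kind of query a column store handles well aggregates one or two columns over a huge number of rows while ignoring the rest of the table:

-- Touches only three columns, however many rows and columns the table has
SELECT customer_id, SUM(total) AS revenue
FROM orders_analytics
WHERE order_date BETWEEN '2017-01-01' AND '2017-02-28'
GROUP BY customer_id
ORDER BY revenue DESC
LIMIT 10;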

But There Be Dragons!

As with any new technology, ColumnStore might not be a good fit for everyone. Given that you can mix and match your storage engines, with ColumnStore for some tables and InnoDB for others, it can be tempting to just go ahead with a ColumnStore test doing things the same way you always did in the past. While this still yields results, those results might not be a true test of the technology. It’s like trying to drive your minivan the same way you used to drive your sports car. “Hey, my Alfa Romeo never flipped over taking these turns at high speed!”

To effectively use ColumnStore, it’s important to match it to a proper analytical workload. This means that you will likely do more bulk loading into these tables, since there is additional overhead in writing the data out into the column files. The overall workflow should be more read-intensive. The queries should only look for data from a small set of fields, but can span massive amounts of data within a single column. In my earlier post, there’s also a discussion about normalization of data and how denormalizing data is more common in columnar databases.

You should address these issues in your testing for a valid conclusion.
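
As a hedged example of the bulk-loading style described above (the file path and table name are hypothetical, and plain LOAD DATA is shown for familiarity even though ColumnStore also ships its own bulk loader), loading a batch of rows at once keeps the write overhead manageable:

-- Load a full day's export in one batch instead of trickling in single-row INSERTs
LOAD DATA LOCAL INFILE '/data/exports/orders_2017_01_30.csv'
INTO TABLE orders_analytics
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(order_id, customer_id, order_date, total);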

The minimum specifications for ColumnStore also point to a need for a more advanced infrastructure than is often seen for transactional data. This is to support batch loading, read intensive workloads and possibly different ETL processes for each type of data. In fact, MariaDB states in the installation documentation for ColumnStore that it must be completed as a new installation of MariaDB. You must remove any existing installations of MariaDB or MySQL before installing the ColumnStore-enabled RPM on a system.

Is It Right for Me?

ColumnStore might fit well into your organization. But like haggis, it’s not for everyone. If you need analytical queries, it is a great option. If your workload is more read-intensive, it could still work for you. As we move to a more Internet of Things (IoT) world, we’re likely to see a need for more of this type of query work. In order to accurately present each user with the best possible Internet experience, we might want to analyze their activities over spans of time and come up with the best match for future needs.

Seriously consider if making the move to ColumnStore is right for you. It is newer software (version 1.0.6, the first GA version, was released on December 14, 2016, and 1.0.7 was released on January 23, 2017), so it might go through changes as it matures. Though a new product, it is based on InfiniDB concepts (which are somewhat dated). MariaDB has lots of plans for additional integrations and support for ancillary products that are absent in the current release.

MariaDB took a huge step forward with ColumnStore. But do yourself a favor and consider whether it is right for you before testing it out. Also, make sure that you are not trying to force your current workflow into the column store box. Kids know that we cannot put a square peg in a round hole, but we adults often try to do just that. Finding the right peg saves you lots of time, hassle and annoyance.

Categories: MySQL

Leadership Behaviors

Xaprb, home of innotop - Sat, 2017-01-28 16:15

Leadership is, by definition, a process of transition. None of us is born to be a leader. Those who are leaders have become leaders through change, and it is a very difficult and unnatural process. A lot of this process involves learning, through repeatedly asking and answering the question, “what does a leader do?”

As a first-time CEO, I ask this question of other CEOs again and again, and write down their answers. I also find food for thought in blogs and books. As you might expect, answers from different CEOs differ a lot, but they have similarities.

Here are compilations of leadership advice from several CEOs.

Marten Mickos

Leadership is:

  1. Learn to lead yourself
  2. Treat others with respect
  3. Believe in the positive energy
  4. Align around a purpose and set concrete goals
  5. Reward and give feedback
  6. Bad news is good news
  7. Run great meetings
  8. Communicate

For more detail, please read Marten’s original article.

Ben Horowitz

Giving feedback turns out to be the unnatural atomic building block atop which the unnatural skill set of management gets built… Here are the keys to being effective:

  • Be authentic.
  • Come from the right place.
  • Don’t get personal.
  • Don’t clown people in front of their peers.
  • Feedback is not one size fits all.
  • Be direct, but not mean.

For details, please read Ben’s book The Hard Thing About Hard Things or this blog post.

Andy Grove

Andy Grove was the CEO of Intel and the author of High Output Management. In his view, the only tools a manager can use to influence people are training and motivation.

David Eherlich

David is the CEO of Aktana. In David’s view, as a CEO if you’re doing anything other than one of the following four activities, you’re likely getting in the way more than adding value:

  1. Espouse and reinforce the vision. Never miss a chance to repeat the vision and draw a direct line between whatever’s happening in the moment and how it aligns with the vision.
  2. Right people, right jobs. As CEO, you must build the absolute best team.
  3. Ensure adequate resources. You’ve now got the right people in the right jobs knowing what to accomplish (the vision). It’s your job to ensure they’re armed for success. They have to have the resources to do their jobs.
  4. Build the company culture. Cultures should be authentic and should reinforce the behaviors required for success (see Built to Last: Successful Habits of Visionary Companies by Jim Collins and Jerry Porras). Only the CEO, through her/his actions, can reinforce an authentic, powerful culture.

If you’re doing anything other than these things, then by definition you haven’t done these things well. For example, if you’re making a decision about a department or team’s work, then you haven’t hired the right executives or given them an adequate understanding of the vision. If you’re jumping in and helping with their work, then either you don’t have the right people in the right jobs or they don’t have the resources they need.

Anonymous

I haven’t gotten permission from this CEO yet so he will remain anonymous for now. Being a leader, according to this CEO, consists of:

  1. Creating a clear vision.
  2. Assembling a world-class team.
  3. Focusing maniacally on feedback loops.
  4. Acknowledging the phases of growth as you scale and synchronizing the organization with them.

When all of these things fall into place, there’s a sense of ease and lack of friction, like running downhill.

Brad Eure

Brad is my CEO coach. In his words:

There are a few, simple, underlying truths of leadership. But, just like a diet, you must develop the discipline to make sure that you follow them in order to be successful. Here are those simple, yet hard to stick to leadership tenets:

  • First, know thyself. Knowing and being transparent about your own strengths and weaknesses enables you to understand others.
  • If you are doing, you are not leading. Do not be the hub of the wheel.
  • Define your direct reports’ roles and hire the people that are the best suited to those roles.
  • Explain to your direct reports what you want done and how their work contributes to the success of the team and the organization.
  • Give them the tools, training, coaching and resources to do their job and then let them succeed or fail. Do not coddle them.
  • Give them timely, constant, caring and honest feedback about their performance.
  • Praise publicly and criticize privately.
  • Listen.

For more, read his blog post.

Tom Tunguz

Tom’s blog is among those that have been most valuable to me. This post is not so much about his own view as a leader as it is a commentary on how many different styles of leadership there are. Recognizing this, it becomes apparent that what’s important is to lead in a way that’s true to who you are:

The cornerstone of great leadership is authenticity. Leaders ask us to believe, to give more, to sacrifice. To trust a leader, his or her leadership must be genuine, a true reflection of their strengths, values and culture. Leaders can engender that trust in many different ways, leading from the front or the back, provided their authenticity is apparent.

Read the whole post.

Pic Credit

Categories: MySQL

Percona Software News and Roadmap Update with CEO Peter Zaitsev: Q1 2017

MySQL Performance Blog - Fri, 2017-01-27 22:06

This blog post is a summary of the webinar Percona Software News and Roadmap Update – Q1 2017 given by Peter Zaitsev on January 12, 2017.

Over the last few months, I’ve had the opportunity to meet and talk with many of Percona’s customers. I love these meetings, and I always get a bunch of questions about what we’re doing, what our plans are and what releases are coming.

I’m pleased to say there is a great deal going on at Percona, and I thought giving a quick talk about our current software and services, along with our plans, would provide a simple reference for many of these questions.

A full recording of this webinar, along with the presentation slide deck, can be found here.

Percona Solutions and Services

Let me start by laying out Percona’s company purpose:

To champion unbiased open source database solutions.

What does this mean? It means that we write software to offer you better solutions, and we use the best of what software and technology exist in the open source community.

Percona stands by a set of principles that we feel define us as a company, and are a promise to our customers:

  • 100% free and open source software
  • Focused on finding solutions to maximize your success
  • Open source database strategy consulting and implementation
  • No vendor lock-in required

We offer trusted and unbiased expert solutions, support and resources across a broad software ecosystem, including:

We also have specialization options for PaaS, IaaS, and SaaS solutions like Amazon Web Services, OpenStack, Google Cloud Platform, OpenShift, Ceph, Docker and Kubernetes.

Percona’s immediate business focus includes building long-term partnership relationships through support and managed services.

The next few sections detail our current service offerings, with some outlook on our plans.

98% Customer Satisfaction Rating

Over the last six months, Percona has consistently maintained a 98% Customer Satisfaction Rating!

Customer Success Team

Our expanded Customer Success Team is here to ensure you’re getting the most out of your Percona Services Subscription.

Managed Services Best Practices

  • Unification based on best practices
  • Organization changes to offer more personal service
  • Increased automation

Ongoing Services

Consulting and Training. Our consulting and training services are available to assist you with whatever project or staff needs you have.

  • Onsite and remote
  • 4 hours to full time (weeks or months)
  • Project and staff augmentation

Advanced HA Included with Enterprise and Premier Support. Starting this past spring, we included advanced high availability (HA) support as part of our Enterprise and Premier support tiers. This advanced support includes coverage for:

  • Percona XtraDB Cluster
  • MariaDB Galera Cluster
  • Galera Cluster for MySQL
  • Upcoming MySQL group replication
  • Upcoming MySQL InnoDB Cluster

Enterprise Wide Support Agreements. Our new Enterprise Wide Support option allows you to buy per-environment support coverage that covers all of the servers in your environment, rather than on a per-server basis. This method of support can save you money, because it:

  • Covers both “MySQL” and “MongoDB”
  • Means you don’t have to count servers
  • Provides highly customized coverage

Simplified Support Pricing. Get easy to understand support pricing quickly.

To discuss how Percona Support can help your business, please call us at +1-888-316-9775 (USA),
+44 203 608 6727 (Europe), or have us contact you.

New Percona Online Store – Easy to Buy, Pay Monthly

Percona Software

Below are the latest and upcoming features in Percona’s software. All of Percona’s software adheres to the following principles:

  • 100% free and open source
  • No restricted “Enterprise” version
  • No “open core”
  • No BS-licensing (BSL)
Percona Server for MySQL 5.7

Overview

  • 100% Compatible with MySQL 5.7 Community Edition
  • 100% Free and Open Source
  • Includes Alternatives to Many MySQL Enterprise Features
  • Includes TokuDB Storage Engine
  • Focus on Performance and Operational Visibility

Latest Improvements

Features about to be Released 

  • Integration of TokuDB and Performance Schema
  • MyRocks integration in Percona Server
  • MySQL Group Replication
  • Starting to look towards MySQL 8
Percona XtraBackup 2.4

Overview

  • #1 open source binary hot backup solution for MySQL
  • Alternative to MySQL Enterprise backup
  • Parallel backups, incremental backups, streaming, encryption
  • Supports MySQL, MariaDB, Percona Server

New Features

  • Support SHA256 passwords and secure connection to server
  • Improved Security (CVE-2016-6225)
  • Wrong passphrase detection
Percona Toolkit

Overview

  • “Swiss Army Knife” of tools
  • Helps DBAs be more efficient
  • Helps DBAs make fewer mistakes
  • Supports MySQL, MariaDB, Percona Server, Amazon RDS MySQL

New Features

  • Improved fingerprinting in pt-query-digest
  • Pause file for pt-online-schema-change
  • Provide information about transparent huge pages

Coming Soon

  • Working towards Percona Toolkit 3.0 release
  • Comprehensive support for MongoDB
  • New tools are now implemented in Go
Percona Server for MongoDB 3.2

Overview

  • 100% compatible with MongoDB 3.2 Community Edition
  • 100% open source
  • Alternatives for many MongoDB Enterprise features
  • MongoRocks (RocksDB) storage engine
  • Percona Memory Engine

New

  • Percona Server for MongoDB 3.2 – GA
  • Support for MongoRocks storage engine
  • PerconaFT storage engine deprecated
  • Implemented Percona Memory Engine

Coming Soon

  • Percona Server for MongoDB 3.4
  • Fully compatible with MongoDB 3.4 Community Edition
  • Updated RocksDB storage engine
  • Universal hot backup for WiredTiger and MongoRocks
  • Profiling rate limiting (query sampling)
Percona Memory Engine for MongoDB

Benchmarks

Percona Memory Engine for MongoDB® is a 100 percent open source in-memory storage engine for Percona Server for MongoDB.

Based on the in-memory storage engine used in MongoDB Enterprise Edition, WiredTiger, Percona Memory Engine for MongoDB delivers extremely high performance and reduced costs for a variety of use cases, including application cache, sophisticated data manipulation, session management and more.

Below are some benchmarks that we ran to demonstrate Percona Memory Engine’s performance.

Percona XtraDB Cluster 5.7

Overview

  • Based on Percona Server 5.7
  • Easiest way to bring HA in your MySQL environment
  • Designed to work well in the cloud
  • Multi-master replication with no conflicts
  • Automatic node provisioning for auto-scaling and self-healing

Goals

  • Brought PXC development in-house to serve our customers better
  • Provide a complete clustering solution, not a set of LEGO pieces
  • Improve usability and ease of use
  • Focus on quality

Highlights

  • Integrated cluster-aware load balancer with ProxySQL
  • Instrumentation with Performance Schema
  • Support for data at rest encryption (InnoDB tablespace encryption)
  • Your data is safe by default with “strict mode” – prevents using features that do not work correctly
  • Integration with Percona Monitoring and Management (PMM)

New in Percona XtraDB Cluster 5.7

  • One option to secure all network communication: pxc-encrypt-cluster-traffic
  • Zero downtime maintenance with ProxySQL and Maintenance Mode
Percona Monitoring and Management

Overview

  • Comprehensive database-focused monitoring
  • 100% open source, roll-your-own solution
  • Easy to install and use
  • Supports MySQL and MongoDB
  • Version 1.0 focuses on trending and query analyses
  • Management features to come

Examples of PMM Screens

What queries are causing the load?

Why are they causing this load?

How to fix them:

System information:

What happens on OS and hardware level:

As well as the database level:

New in Percona Monitoring and Management

  • Continuing to improve and expand dashboards with every release
  • Includes Grafana 4.0 (with basic Alerting)
  • SSL support for server-agent communications
  • Supports authentication for server-agent communication
  • Added support for Amazon RDS
  • Reduced space consumption by using advanced compression

Coming Soon 

  • PMM server available as AMI and Virtual Appliance image
  • Better MongoDB dashboards
  • Continuing work on dashboard improvements
  • Query analytics application refinements
  • Short term continuing focus on monitoring functionality

Check out the Demo

Percona Live Open Source Database Conference 2017 is right around the corner!

The Percona Live Open Source Database Conference is the premier event for the diverse and active open source database community, as well as businesses that develop and use open source database software. The conference has a technical focus with an emphasis on the core topics of MySQL, MongoDB, PostgreSQL and other open source databases. Tackling subjects such as analytics, architecture and design, security, operations, scalability and performance, Percona Live provides in-depth discussions for your high-availability, IoT, cloud, big data and other changing business needs. This conference is an opportunity to network with peers and technology professionals by bringing together accomplished DBAs, system architects and developers from around the world to share their knowledge and experience, all to help you learn how to tackle your open source database challenges in a whole new way.

This conference has something for everyone!

Register now and get the early bird rate, but hurry: prices go up January 31st.

Sponsorship opportunities are available as well. Become a Percona Live Sponsor and download the prospectus here.

Categories: MySQL

When MySQL Lies: Wrong seconds_behind_master with slave_parallel_workers > 0

MySQL Performance Blog - Fri, 2017-01-27 19:03

In today’s blog, I will show an issue with seconds_behind_master that one of our clients faced when running slave_parallel_workers > 0. We found out that the seconds_behind_master value reported by SHOW SLAVE STATUS was lying. To be more specific, I’m talking about bugs #84415 and #1654091.

The Issue

MySQL will not report the correct slave lag if you have slave_parallel_workers > 0. Let’s show it in practice.

I’ll use MySQL Sandbox to spin up one master and two slaves on MySQL version 5.7.17, and sysbench to populate the database:

# Create sandboxes
make_replication_sandbox /path/to/mysql/5.7.17
# Create table with 1.5M rows on it
sysbench --test=/usr/share/sysbench/tests/db/oltp.lua --mysql-host=localhost --mysql-user=msandbox --mysql-password=msandbox --mysql-socket=/tmp/mysql_sandbox20192.sock --mysql-db=test --oltp-table-size=1500000 prepare
# Add slave_parallel_workers=5 and slave_pending_jobs_size_max=1G on node1
echo "slave_parallel_workers=5" >> node1/my.sandbox.cnf
echo "slave_pending_jobs_size_max=1G" >> node1/my.sandbox.cnf
node1/restart

Monitor Replication lag via SHOW SLAVE STATUS:

for i in {1..1000}; do ( node1/use -e "SHOW SLAVE STATUS\G" | grep "Seconds_Behind_Master" | awk '{print "Node1: " $2}' & sleep 0.1 ; node2/use -e "SHOW SLAVE STATUS\G" | grep "Seconds_Behind_Master" | awk '{print "Node2: " $2}' & ); sleep 1; done

On a separate terminal, DELETE some rows in the test.sbtest1 table on the master, and monitor the above output once the master completes the delete command:

DELETE FROM test.sbtest1 WHERE id > 100;

Here is a sample output:

master [localhost] {msandbox} (test) > DELETE FROM test.sbtest1 WHERE id > 100;
Query OK, 1499900 rows affected (46.42 sec)
. . .
Node1: 0
Node2: 0
Node1: 0
Node2: 48
Node1: 0
Node2: 48
Node1: 0
Node2: 49
Node1: 0
Node2: 50
. . .
Node1: 0
Node2: 90
Node1: 0
Node2: 91
Node1: 0
Node2: 0
Node1: 0
Node2: 0

As you can see, node1 (which is running with slave_parallel_workers = 5) doesn’t report any lag.

The Workaround

We can workaround this issue by querying performance_schema.threads:

SELECT PROCESSLIST_TIME
  FROM performance_schema.threads
 WHERE NAME = 'thread/sql/slave_worker'
   AND (PROCESSLIST_STATE IS NULL OR PROCESSLIST_STATE != 'Waiting for an event from Coordinator')
 ORDER BY PROCESSLIST_TIME DESC
 LIMIT 1;

Let’s modify our monitoring script, and use the above query to monitor the lag on node1:

for i in {1..1000}; do ( node1/use -BNe "SELECT PROCESSLIST_TIME FROM performance_schema.threads WHERE NAME = 'thread/sql/slave_worker' AND (PROCESSLIST_STATE IS NULL or PROCESSLIST_STATE != 'Waiting for an event from Coordinator') ORDER BY PROCESSLIST_TIME DESC LIMIT 1 INTO @delay; SELECT IFNULL(@delay, 0) AS 'lag';" | awk '{print "Node1: " $1}' & sleep 0.1 ; node2/use -e "SHOW SLAVE STATUS\G" | grep "Seconds_Behind_Master" | awk '{print "Node2: " $2}' & ); sleep 1; done

master [localhost] {msandbox} (test) > DELETE FROM test.sbtest1 WHERE id > 100;
Query OK, 1499900 rows affected (45.21 sec)
Node1: 0
Node2: 0
Node1: 0
Node2: 0
Node1: 45
Node2: 45
Node1: 46
Node2: 46
. . .
Node1: 77
Node2: 77
Node1: 78
Node2: 79
Node1: 0
Node2: 80
Node1: 0
Node2: 81
Node1: 0
Node2: 0
Node1: 0
Node2: 0

Please note that in our query against performance_schema.threads, we filter on PROCESSLIST_STATE being NULL or not equal to “Waiting for an event from Coordinator”. The correct state to filter on would be “Executing Event”, but it seems that state is not reported correctly (#84655).

Summary

MySQL parallel replication is a nice feature, but we still need to make sure we are aware of any potential issues it might bring. Most monitoring systems use the output of SHOW SLAVE STATUS to verify whether or not the slave is lagging behind the master. As shown above, it has its caveats.

As always, we should test, test and test again before implementing any change like this in production!

Categories: MySQL

Percona Live Featured Tutorial with Frédéric Descamps — MySQL InnoDB Cluster & Group Replication in a Nutshell: Hands-On Tutorial

MySQL Performance Blog - Thu, 2017-01-26 23:58

Welcome to another post in the series of Percona Live featured tutorial speakers blogs! In these blogs, we’ll highlight some of the tutorial speakers that will be at this year’s Percona Live conference. We’ll also discuss how these tutorials can help you improve your database environment. Make sure to read to the end to get a special Percona Live 2017 registration bonus!

In this Percona Live featured tutorial, we’ll meet Frédéric Descamps, MySQL Community Manager at Oracle. Frédéric is probably better known in the community as “LeFred” (Twitter: @lefred)! His tutorial is MySQL InnoDB Cluster and Group Replication in a Nutshell: Hands-On Tutorial (along with  Part 2). Frédéric is delivering this talk with Percona’s MySQL Practice Manager Kenny Gryp. Attendees will get their hands on virtual machines and migrate standard Master/Slave architecture to the new MySQL InnoDB Cluster (native Group Replication) with minimal downtime. I had a chance to speak with Frédéric and learn a bit more about InnoDB Cluster and Group Replication:

Percona: How did you get into database technology? What do you love about it?

Frédéric: I started with SQL on VMS and DBASE during my IT courses. I wrote my thesis on SQLServer replication. To be honest, the implementation at the time wasn’t particularly good. At the same time, I enjoyed hacking a Perl module that allowed you to run SQL queries against plain text files. Then I worked as a Progress developer for MFG/Pro (a big customized ERP). When I decided to work exclusively for an open source company, I was managing more and more customers’ databases in MySQL (3.23 and 4.x), so my colleagues and I took MySQL training. It was still MySQL AB at that time. I passed my MySQL 4.1 Core and Professional Certification, and I ended up delivering MySQL Training for MySQL AB too. Some might also remember that I worked for a company called Percona for a while, before moving on to Oracle!

Percona: Your and Kenny’s tutorial is called “MySQL InnoDB Cluster & Group Replication in a nutshell: hands-on tutorial.” Why would somebody want to migrate to this technology?

Frédéric: The main reason to migrate to MySQL InnoDB is to achieve high availability (HA) easily for your MySQL database, while still maintaining performance.

Thanks to  MySQLShell’s adminAPI, you can create a cluster in less than five minutes. All the management can be done remotely, with just some shell commands. Additionally, for Group Replication our engineers have leveraged existing standard MySQL infrastructures (GTIDs, Multi-Source Replication, Multi-threaded Slave Applier, Binary Logs, etc.), which are already well known by the majority of MySQL DBAs.

Percona: How can moving to a high availability environment help businesses?

Frédéric: Time is money! Many businesses can’t afford downtime anymore. Having a reliable HA solution is crucial for businesses on the Internet. Expectations have changed: users want this to be natively supported inside the database technology versus externally. For example, when I started to work in IT, it was very common to have several maintenance windows during the night. But currently, and almost universally, the customer base is spread worldwide. Any time is during business hours somewhere in the world!

Percona: What do you want attendees to take away from your tutorial session? Why should they attend?

Frédéric: I think that current HA solutions are too complex to setup and manage. They use external tools. In our tutorial, Kenny and I will demonstrate how MySQL InnoDB Cluster, being an internal implementation, is extremely easy to use. We will also cover some scenarios where things go wrong, and how to deal with them. Performance and ease-of-use were two key considerations in the design of InnoDB Cluster.

Percona: What are you most looking forward to at Percona Live?

Frédéric: Like every time I’ve attended, my main goal is to bring to the audience all the new and amazing stuff we are implementing in MySQL. MySQL has evolved quickly these last few years, and we don’t really plan to slow down. I also really enjoy the feedback from users and other MySQL professionals. This helps focus us on what really matters for our users. And finally, it’s a great opportunity to re-connect with ex-colleagues and friends.

You can find out more about Frédéric Descamps and his work with InnoDB Cluster at his Twitter handle @lefred. Register for Percona Live Data Performance Conference 2017, and see Frédéric and Kenny present their MySQL InnoDB Cluster and Group Replication in a Nutshell: Hands-On Tutorial talk. Use the code FeaturedTutorial and receive $30 off the current registration price!

Percona Live Data Performance Conference 2017 is the premier open source event for the data performance ecosystem. It is the place to be for the open source community as well as businesses that thrive in the MySQL, NoSQL, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Data Performance Conference will be April 24-27, 2017 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.

Categories: MySQL

Performance Schema Benchmarks: OLTP RW

MySQL Performance Blog - Thu, 2017-01-26 19:19

In this blog post, we’ll look at Performance Schema benchmarks for OLTP Read/Write workloads.

I am in love with Performance Schema and talk a lot about it. Performance Schema is a revolutionary MySQL troubleshooting instrument, but earlier versions had performance issues. Many of these issues are fixed now, and the default options work quickly and reliably. However, there is no free lunch! It is expected that the more instruments you turn ON, the more overhead you’ll have.

The advice I give my customers is that when in doubt, only turn ON the instruments that are required to troubleshoot your issue. Many of them ask: what exactly are the overhead costs for one instrumentation or another? I believe the best answer is “test on your system!” No generic benchmark can exactly repeat a workload on your site. But while I was working on the “OpenSource Databases on Big Machines” project, I decided to test the performance of Performance Schema as well.
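
Before toggling anything, it helps to see what is currently switched on. A quick look at the setup tables (a generic sketch, not part of the original benchmark scripts) shows which consumers and how many instruments are enabled:

SELECT name, enabled FROM performance_schema.setup_consumers;

SELECT enabled, COUNT(*) AS instruments
FROM performance_schema.setup_instruments
GROUP BY enabled;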

I only tested a Read/Write workload. I used the same fast machine (144 CPU cores), the same MySQL options and the same SysBench commands that I described in this post. The option innodb_flush_method was changed to O_DIRECT, because it’s more reasonable for real-life workloads. I also upgraded the MySQL Server version to Oracle’s MySQL 5.7.17. The reason for the upgrade was to test whether the issue described in this post is repeatable with the latest Oracle MySQL server version. But since I tested Performance Schema, the effect on Percona Server for MySQL should be the same.

I tested nine different scenarios:

  1. “All disabled”: Performance Schema is ON, but all instruments and consumers are disabled.
    update setup_consumers set enabled='no'; update setup_instruments set enabled='no';
  2. “All enabled”: Performance Schema is ON, and all instruments and consumers are enabled.
    update setup_instruments set enabled='yes'; update setup_consumers set enabled='yes';
  3. “Default”: Performance Schema is ON, and only default instruments and consumers are enabled.
  4. “MDL only”: only Metadata Lock instrumentation is enabled.
    update setup_consumers set enabled='no';
    update setup_instruments set enabled='no';
    update setup_consumers set enabled='yes' where name= 'global_instrumentation';
    update setup_consumers set enabled='yes' where name= 'thread_instrumentation';
    update setup_instruments set enabled='yes' where name='wait/lock/metadata/sql/mdl';
  5. “Memory only”: only Memory instrumentation enabled.
    update setup_consumers set enabled='no';
    update setup_instruments set enabled='no';
    update setup_consumers set enabled='yes' where name= 'global_instrumentation';
    update setup_consumers set enabled='yes' where name= 'thread_instrumentation';
    update setup_instruments set enabled='yes' where name like 'memory%';
  6. “Stages and Statements”: only Stages and Statements instrumentation enabled.
    update setup_consumers set enabled='no';
    update setup_instruments set enabled='no';
    update setup_consumers set enabled='yes' where name= 'global_instrumentation';
    update setup_consumers set enabled='yes' where name= 'thread_instrumentation';
    update setup_instruments set enabled='yes' where name like 'statement%';
    update setup_consumers set enabled='yes' where name like 'events_statements%';
    update setup_instruments set enabled='yes' where name like 'stage%';
    update setup_consumers set enabled='yes' where name like 'events_stages%';
  7. “Stages only”: only Stages instrumentation enabled.
    update setup_consumers set enabled='no';
    update setup_instruments set enabled='no';
    update setup_consumers set enabled='yes' where name= 'global_instrumentation';
    update setup_consumers set enabled='yes' where name= 'thread_instrumentation';
    update setup_instruments set enabled='yes' where name like 'stage%';
    update setup_consumers set enabled='yes' where name like 'events_stages%';
  8. “Statements only”: only Statements instrumentation enabled.
    update setup_consumers set enabled='no';
    update setup_instruments set enabled='no';
    update setup_consumers set enabled='yes' where name= 'global_instrumentation';
    update setup_consumers set enabled='yes' where name= 'thread_instrumentation';
    update setup_instruments set enabled='yes' where name like 'statement%';
    update setup_consumers set enabled='yes' where name like 'events_statements%';
  9. “Waits only”: only Waits instrumentation enabled.
    update setup_consumers set enabled='no';
    update setup_instruments set enabled='no';
    update setup_consumers set enabled='yes' where name= 'global_instrumentation';
    update setup_consumers set enabled='yes' where name= 'thread_instrumentation';
    update setup_instruments set enabled='yes' where name like 'wait%';
    update setup_consumers set enabled='yes' where name like 'events_waits%';

Here are the overall results.

As you can see, some instrumentation only slightly affects performance, while others affect it a lot. I created separate graphs to make the picture clearer.

As expected, enabling all instrumentation makes performance lower:

Does this mean to use Performance Schema, you need to start the server with it ON and then disable all instruments? No! The default options have very little effect on performance:

The same is true for Metadata Locks, Memory and Statements instrumentation:

Regarding statements, I should note that I used prepared statements (which are instrumented in version 5.7). But it makes sense to repeat the tests without prepared statements.

The Stages instrumentation starts affecting performance:

However, the slowdown is reasonable and it happens only after we reach 32 concurrent threads. It still provides great insights on what is happening during query execution.

The real performance killer is Waits instrumentation:

It affects performance close to the same way as all instruments ON.

Conclusion

Using Performance Schema with the default options, Memory, Metadata Locks and Statements instrumentation doesn’t have a great impact on read-write workload performance. You might notice slowdowns with Stages instrumentation after reaching 32 actively running parallel connections. The real performance killer is Waits instrumentation. And even with it on, you will start to notice a performance drop only after 10,000 transactions per second.

Categories: MySQL

ProxySQL Admin Configuration

MySQL Performance Blog - Wed, 2017-01-25 23:15

ProxySQL Admin (proxysql-admin) is a powerful tool for configuring Percona XtraDB Cluster nodes into ProxySQL. The proxysql-admin tool comes with the ProxySQL package from Percona apt/yum repositories.

ProxySQL 1.3.2-1 is now available from the Percona repositories. This release is based on ProxySQL v1.3.2a and introduces a number of new changes (see proxysql-admin_v1.3.2a.md).

Installing on Red Hat or CentOS

If you are running an RPM-based Linux distribution, use the yum package manager to install ProxySQL and proxysql-admin from the official Percona software repository.

First, if your system does not already have the Percona yum repository configured, please run the following command:

$ sudo yum install http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm

Next, install the proxysql/proxysql-admin package:

$ sudo yum install proxysql

Installing on Debian or Ubuntu

If you are running a DEB-based Linux distribution, use the apt package manager to install ProxySQL and proxysql-admin from the official Percona software repository.

First, if your system does not already have the Percona apt repository configured, please fetch the repository package:

$ wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb

Next, install the repository package:

$ sudo dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb

Then update the local apt cache:

$ sudo apt-get update

Finally, install the proxysql/proxysql-admin package:

$ sudo apt-get install proxysql

Pre-Requisites

  • ProxySQL and Percona XtraDB Cluster should be up and running.
  • For security purposes, please make sure to change the default user settings in the ProxySQL configuration file. It is recommended that you use --config-file when running the proxysql-admin script.

This script accepts two different options for configuring Percona XtraDB Cluster nodes:

1) --enable

This option configures Percona XtraDB Cluster nodes into the ProxySQL database, and adds two cluster monitoring scripts to the ProxySQL scheduler table for checking the cluster status.

Scheduler script info:

  • proxysql_node_monitor: checks cluster node membership, and re-configures ProxySQL if cluster membership changes occur
  • proxysql_galera_checker: checks for desynced nodes, and temporarily deactivates them.

The --enable option will also add two new users to the Percona XtraDB Cluster with the USAGE privilege: one monitors cluster nodes through ProxySQL, and the other connects to the cluster node via the ProxySQL console. Please make sure to use super-user credentials from Percona XtraDB Cluster to set up the default users.
$ sudo proxysql-admin --config-file=/etc/proxysql-admin.cnf --enable
This script will assist with configuring ProxySQL (currently only Percona XtraDB cluster in combination with ProxySQL is supported)
ProxySQL read/write configuration mode is singlewrite
Configuring ProxySQL monitoring user..
ProxySQL monitor username as per command line/config-file is monitor
User 'monitor'@'127.%' has been added with USAGE privilege
Configuring the Percona XtraDB Cluster application user to connect through ProxySQL
Percona XtraDB Cluster application username as per command line/config-file is proxysql_user
Percona XtraDB Cluster application user 'proxysql_user'@'127.%' has been added with the USAGE privilege, please make sure to the grant appropriate privileges
Adding the Percona XtraDB Cluster server nodes to ProxySQL
You have not given the writer node info through the command line or in the config-file. Please enter the writer-node info (eg : 127.0.0.1:3306): 127.0.0.1:25000
ProxySQL configuration completed!
ProxySQL has been successfully configured to use with Percona XtraDB Cluster
You can use the following login credentials to connect your application through ProxySQL
mysql --user=proxysql_user --password=***** --host=127.0.0.1 --port=6033 --protocol=tcp
$
mysql> select hostgroup_id,hostname,port,status,comment from mysql_servers;
+--------------+-----------+-------+--------+---------+
| hostgroup_id | hostname  | port  | status | comment |
+--------------+-----------+-------+--------+---------+
| 11           | 127.0.0.1 | 25400 | ONLINE | READ    |
| 10           | 127.0.0.1 | 25000 | ONLINE | WRITE   |
| 11           | 127.0.0.1 | 25100 | ONLINE | READ    |
| 11           | 127.0.0.1 | 25200 | ONLINE | READ    |
| 11           | 127.0.0.1 | 25300 | ONLINE | READ    |
+--------------+-----------+-------+--------+---------+
5 rows in set (0.00 sec)

mysql>

2) --disable

This option removes Percona XtraDB Cluster nodes from ProxySQL and stops the ProxySQL monitoring daemon.

$ proxysql-admin --config-file=/etc/proxysql-admin.cnf --disable
ProxySQL configuration removed!
$

Extra options

i) --mode

This option sets up the read/write mode for Percona XtraDB Cluster nodes in the ProxySQL database, based on the hostgroup. For now, the only supported modes are loadbal and singlewrite. singlewrite is the default mode, and it accepts writes on only one single node (this node can be provided either interactively or with --write-node, which specifies the hostname and port number of the single write node). All other remaining nodes will be read-only and accept only read statements. The loadbal mode, on the other hand, creates a load-balanced set of evenly weighted read/write nodes.

singlewrite mode setup:

$ sudo grep "MODE" /etc/proxysql-admin.cnf
export MODE="singlewrite"
$
$ sudo proxysql-admin --config-file=/etc/proxysql-admin.cnf --write-node=127.0.0.1:25000 --enable
ProxySQL read/write configuration mode is singlewrite
[..]
ProxySQL configuration completed!
$
mysql> select hostgroup_id,hostname,port,status,comment from mysql_servers;
+--------------+-----------+-------+--------+---------+
| hostgroup_id | hostname  | port  | status | comment |
+--------------+-----------+-------+--------+---------+
| 11           | 127.0.0.1 | 25400 | ONLINE | READ    |
| 10           | 127.0.0.1 | 25000 | ONLINE | WRITE   |
| 11           | 127.0.0.1 | 25100 | ONLINE | READ    |
| 11           | 127.0.0.1 | 25200 | ONLINE | READ    |
| 11           | 127.0.0.1 | 25300 | ONLINE | READ    |
+--------------+-----------+-------+--------+---------+
5 rows in set (0.00 sec)

mysql>

loadbal mode setup:

$ sudo proxysql-admin --config-file=/etc/proxysql-admin.cnf --mode=loadbal --enable
This script will assist with configuring ProxySQL (currently only Percona XtraDB cluster in combination with ProxySQL is supported)
ProxySQL read/write configuration mode is loadbal
[..]
ProxySQL has been successfully configured to use with Percona XtraDB Cluster
You can use the following login credentials to connect your application through ProxySQL
mysql --user=proxysql_user --password=***** --host=127.0.0.1 --port=6033 --protocol=tcp
$
mysql> select hostgroup_id,hostname,port,status,comment from mysql_servers;
+--------------+-----------+-------+--------+-----------+
| hostgroup_id | hostname  | port  | status | comment   |
+--------------+-----------+-------+--------+-----------+
| 10           | 127.0.0.1 | 25400 | ONLINE | READWRITE |
| 10           | 127.0.0.1 | 25000 | ONLINE | READWRITE |
| 10           | 127.0.0.1 | 25100 | ONLINE | READWRITE |
| 10           | 127.0.0.1 | 25200 | ONLINE | READWRITE |
| 10           | 127.0.0.1 | 25300 | ONLINE | READWRITE |
+--------------+-----------+-------+--------+-----------+
5 rows in set (0.01 sec)

mysql>

ii) −−node-check-interval

This option configures the interval for monitoring via the proxysql_galera_checker script (in milliseconds):

$ proxysql-admin --config-file=/etc/proxysql-admin.cnf --node-check-interval=5000 --enable
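If you want to verify the interval that was actually applied, proxysql-admin registers the proxysql_galera_checker script in ProxySQL's scheduler table (assuming that behavior for your version; it may differ between releases), so you can check it from the Admin interface:

-- run against the ProxySQL Admin interface (port 6032)
SELECT id, active, interval_ms, filename, arg1 FROM scheduler;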

iii) −−adduser

This option aids with adding the Percona XtraDB Cluster application user to the ProxySQL database:

$ proxysql-admin --config-file=/etc/proxysql-admin.cnf --adduser
Adding Percona XtraDB Cluster application user to ProxySQL database
Enter Percona XtraDB Cluster application user name: root
Enter Percona XtraDB Cluster application user password:
Added Percona XtraDB Cluster application user to ProxySQL database!
$

iv) −−quick-demo

This option sets up a test/dummy proxysql configuration:

$ sudo proxysql-admin --enable --quick-demo
You have selected the dry test run mode. WARNING: This will create a test user (with all privileges) in the Percona XtraDB Cluster & ProxySQL installations.
You may want to delete this user after you complete your testing!
Would you like to proceed with '--quick-demo' [y/n] ? y
Setting up proxysql test configuration!
Do you want to use the default ProxySQL credentials (admin:admin:6032:127.0.0.1) [y/n] ? y
Do you want to use the default Percona XtraDB Cluster credentials (root::3306:127.0.0.1) [y/n] ? n
Enter the Percona XtraDB Cluster username (super user): root
Enter the Percona XtraDB Cluster user password:
Enter the Percona XtraDB Cluster port: 25100
Enter the Percona XtraDB Cluster hostname: localhost
ProxySQL read/write configuration mode is singlewrite
Configuring ProxySQL monitoring user..
User 'monitor'@'127.%' has been added with USAGE privilege
Configuring the Percona XtraDB Cluster application user to connect through ProxySQL
Percona XtraDB Cluster application user 'pxc_test_user'@'127.%' has been added with ALL privileges, this user is created for testing purposes
Adding the Percona XtraDB Cluster server nodes to ProxySQL
ProxySQL configuration completed!
ProxySQL has been successfully configured to use with Percona XtraDB Cluster
You can use the following login credentials to connect your application through ProxySQL
mysql --user=pxc_test_user --host=127.0.0.1 --port=6033 --protocol=tcp
$

mysql> select hostgroup_id,hostname,port,status,comment from mysql_servers;
+--------------+-----------+-------+--------+---------+
| hostgroup_id | hostname  | port  | status | comment |
+--------------+-----------+-------+--------+---------+
| 11           | 127.0.0.1 | 25300 | ONLINE | READ    |
| 10           | 127.0.0.1 | 25000 | ONLINE | WRITE   |
| 11           | 127.0.0.1 | 25100 | ONLINE | READ    |
| 11           | 127.0.0.1 | 25200 | ONLINE | READ    |
+--------------+-----------+-------+--------+---------+
4 rows in set (0.00 sec)

mysql>

We hope you enjoy ProxySQL Admin!

Categories: MySQL

Webinar Thursday January 26, 2017: Overcoming the Challenges of Databases in the Cloud

MySQL Performance Blog - Tue, 2017-01-24 20:02

Please join Percona’s Jon Tobin, Director of Solution Engineering at Percona, and Rob Masson, Solutions Architect Manager at ScaleArc, on Thursday, January 26, 2017, at 9:00 am PST / 12:00 pm EST (UTC-8) for their webinar “Overcoming the Challenges of Databases in the Cloud.”

Enterprises enjoy the flexibility and simplified operations of using the cloud, but applying those advantages to database workloads has proven challenging. Resource contention, cross-region failover and elasticity at the data tier all introduce limitations. In addition, cloud providers support different services within their database offerings.

Jon and Rob’s webinar is a discussion of how to overcome these challenges of running databases in the cloud. You’ll learn:

  • The differences between IaaS and PaaS
  • How AWS, Azure, and Google Cloud stack up
  • Tricks for managing smaller database instances
  • Tips for achieving cross-region failover
  • Best practices for hybrid deployments

Register for the webinar here.

Jon Tobin, Director of Solution Engineering, Percona

When not saving kittens from sequoias or helping the elderly across busy intersections, Jon Tobin is Percona’s Director of Solutions Engineering. He has spent over 15 years in the IT industry. For the last six years, Jon has been helping innovative IT companies assess and address customers’ business needs through well-designed solutions.

Robert Masson, Solutions Architect Manager, ScaleArc

Over the past ten years, Robert has been working with leading-edge product companies that are defining the future of computing. While at EMC, he led an Advanced Development Team that explored the capabilities of new technologies and helped to drive innovation within a world leader in information management technologies. Robert was also the Director of Research on the East Coast and interfaced with researchers at MIT, Harvard and Boston University (among others).

Currently, Robert works with the world leader in cloud management platforms, allowing customers to manage assets easily and securely across disparate platforms seamlessly. He gets to work hands-on with implementation teams and business drivers to help understand their needs, design their organizations and implement that vision into a game-changing infrastructure and toolset.

Categories: MySQL

Webinar Wednesday January 25, 2017: Percona Server for MySQL 5.7 Key Performance Algorithms

MySQL Performance Blog - Mon, 2017-01-23 21:22

Please join Laurynas Biveinis, Percona’s Technical Director – Engineering, on January 25, 2017, at 7 am EST (UTC-8) as he discusses “Percona Server for MySQL 5.7: Key Performance Algorithms.”

In this webinar, Laurynas will discuss selected areas of InnoDB and Percona Server for MySQL 5.7 internals as they relate to buffer pool management and flushing (from a performance and scalability point of view). He will describe the motivation behind the buffer pool mutex split, multi-threaded LRU flusher and parallel doublewrite features in Percona Server for MySQL 5.7, given that MySQL InnoDB 5.7 has re-implemented many of the same features found in Percona Server for MySQL – especially for disk-bound read/write workloads.

This webinar will be technical and assumes you have familiarity with InnoDB internals. The target audience for the webinar is InnoDB and performance engineers.

Register for this webinar here.

Laurynas joined Percona in 2011 as a member of the Percona Server, Percona XtraDB Cluster and Percona XtraBackup development group. Before coming to Percona, Laurynas was a Ph.D. student at Aalborg University, in Denmark, where he did research on physical database indexes.

 

Categories: MySQL

MySQL and MongoDB Alerting with PMM and Grafana

MySQL Performance Blog - Mon, 2017-01-23 19:33

This blog post contains step-by-step instructions for setting up alerting in Percona Monitoring and Management (PMM) using Grafana.

As a Solutions Engineer at Percona, I get to speak with a lot of the early adopters of Percona’s new PMM software. One of the frequent feature requests from them was the need for alerts at certain MySQL and MongoDB thresholds. Percona’s philosophy on alerting is that we don’t want to reinvent the wheel. We integrated Consul, Grafana and Prometheus into PMM because they are fantastic, free and open-source products. PMM packages them together in an easy to deploy and maintain format.

With the release of Grafana 4.0 came Alerting! The Grafana team excitedly shared this feature with its users (including Percona). Percona quickly followed suit by releasing a new PMM 1.0.7 package that contained Grafana 4.0.

Configuring Grafana for Alerting

The instructions below assume that you have a running instance of PMM. Percona’s website contains detailed installation instructions if you don’t.

There are several ways to receive alerts with Grafana – email, Slack, Webhook and PagerDuty. For this blog post, I demonstrate alerting through email. Please note that for this to work correctly, SMTP settings must be configured in the Grafana config file: /etc/grafana/grafana.ini. Documentation for the configuration file can be found on Grafana’s website. Below is a basic, default configuration setting for the SMTP/email section for testing purposes that works for Gmail:

[smtp]
enabled = true
host = smtp.gmail.com:465
user = your.gmail.account@gmail.com
password = yourpassword
cert_file =
key_file =
skip_verify = false
from_address = admin@grafana.localhost

There are also instructions for setting up other alerting methods on Grafana’s website. Once you have made changes to the config file, you can restart pmm-server to pick up the changes with a docker restart pmm-server command.

Once you restart the docker container, access the Alert Notification panel by selecting the Grafana icon dropdown in the upper left corner of any Grafana dashboard:

To set up alerting, you need to create an alert notification on this page. For this test, I am only entering one email address. You can enter more than one by separating emails with commas.

Testing the Configuration

You can test your server’s configuration by clicking the “Send Test” button.

You’ll know that you configured this correctly when you receive the following email:

Enabling Alerting for a Variable

Now that you’ve updated your Grafana config file and tested to ensure your SMTP settings are correct, it’s time to set up an alert for a specific variable. To do that, go to the dashboard for which you’re interested in setting up alerting:

  1. Click the title [1]
  2. Select “Edit” [2]
  3. Choose the Alert tab [3]
  4. Select the “Create Alert” button. In the Alert Config section, you can modify the frequency at which the alert checks for a trigger [4] and the conditions to trigger the alert [5].

Currently, a known limitation exists with Grafana. Alerting doesn’t support template variables yet. However, there is a feature request on Github to add this functionality in the future. Because of this, template variables must be removed from the Metrics tab [6] of the dashboard you are editing.

Grafana sends an alert email whenever the configured threshold gets passed:

Once an alert clears, you will get an all clear email.

Summary

Proactive monitoring is vital to a highly-available environment. Hopefully, this blog post gives you the confidence to move forward with integrating alerting into your PMM platform. I also wrote this blog post to highlight the fact that our PMM team continues to add functionality requested by users with regards to monitoring and maintaining their environment. If you’d like to ask questions or give feedback, the PMM forums on Percona’s website are a good place to go.

Categories: MySQL

How to Manually Build Percona Server for MySQL RPM Packages

MySQL Performance Blog - Fri, 2017-01-20 17:58

In this blog, we’ll look at how to manually build Percona Server for MySQL RPM packages.

Several customers and other people from the open source community have asked us how they could make their own Percona Server for MySQL RPM binaries from scratch.

This request is often made by companies that want to add custom patches to our release. To do this, you need to make some modifications to the percona-server.spec file in the source tree, and some preparation is necessary.

This post covers how you can make your own RPMs from GIT or source tarball so that you can build RPMs from your own modified branch, or by applying patches. In this example, we’ll build Percona Server 5.7.16-10.

Making your own RPMs is not a recommended practice, and should rarely be necessary.

Prepare the Source Using GIT Repository

We can fetch percona/percona-server from GitHub (or your own fork). As we build Percona Server 5.7.16-10, we create a new branch based on the tag of that version:

$ git clone https://github.com/percona/percona-server.git
Cloning into 'percona-server'...
remote: Counting objects: 1216597, done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 1216597 (delta 1), reused 1 (delta 1), pack-reused 1216592
Receiving objects: 100% (1216597/1216597), 510.50 MiB | 5.94 MiB/s, done.
Resolving deltas: 100% (997331/997331), done.
Checking out files: 100% (28925/28925), done.
$ cd percona-server/
$ git checkout -b build-Percona-Server-5.7.16-10 Percona-Server-5.7.16-10
Switched to a new branch 'build-Percona-Server-5.7.16-10'
$ git submodule init
Submodule 'Percona-TokuBackup' (https://github.com/percona/Percona-TokuBackup.git) registered for path 'plugin/tokudb-backup-plugin/Percona-TokuBackup'
Submodule 'storage/rocksdb/rocksdb' (https://github.com/facebook/rocksdb.git) registered for path 'storage/rocksdb/rocksdb'
Submodule 'PerconaFT' (https://github.com/percona/PerconaFT.git) registered for path 'storage/tokudb/PerconaFT'
$ git submodule update
Cloning into 'plugin/tokudb-backup-plugin/Percona-TokuBackup'...
...
Submodule path 'plugin/tokudb-backup-plugin/Percona-TokuBackup': checked out '1b9bb16ad74588601d8fefe46c74cc1dac1dd1d5'
Cloning into 'storage/rocksdb/rocksdb'...
...
Submodule path 'storage/rocksdb/rocksdb': checked out '6a17b07ca856e573fabd6345d70787d4e481f57b'
Cloning into 'storage/tokudb/PerconaFT'...
...
Submodule path 'storage/tokudb/PerconaFT': checked out '0c1b53909bc62a4d9b420116ec8833c78c0c6e8e'

Downloading Source Tarball

An alternative way is to download the source tarball, which you can find at https://www.percona.com/downloads/Percona-Server-5.7/Percona-Server-5.7.16-10/source/.

Extract the source tarball, as the RPM spec file is located there:

$ wget https://www.percona.com/downloads/Percona-Server-5.7/Percona-Server-5.7.16-10/source/tarball/percona-server-5.7.16-10.tar.gz
$ tar -xzvf percona-server-5.7.16-10.tar.gz
$ cd percona-server-5.7.16-10

Making Changes with Patch Files

If you need to make any changes to the source code, you can either use your own GitHub fork or you can apply patches. If you use the former, then you can skip this section.

Why Patches?

Why would we want to use patch files? Because you won’t need to maintain your own fork. You can just build the RPM with the Percona Server source tarball and the patch file.

Create Patch Files

If you do not want to use your own fork in GitHub, you can also create some patch files and modify the RPM spec file to include them.

  1. Create your changes to the source files
  2. Use diff to create the patch file:
    $ diff -ru FILE-orig FILE >| ~/custom.patch
Add Patch to RPM Spec File

In order for the patch to be applied during the building of the RPM, edit the ./build-ps/percona-server.spec file and add the two lines that are highlighted:

...
Source91: filter-requires.sh
Patch0: mysql-5.7-sharedlib-rename.patch
Patch1: custom.patch
BuildRequires: cmake >= 2.8.2
...
%prep
%setup -q -T -a 0 -a 10 -c -n %{src_dir}
pushd %{src_dir}
%patch0 -p1
%patch1 -p1
%build
...

Note that you have to number the patches, in this case I gave it the name patch1.

Creating New Source Tarball

If you use your own GitHub fork, or you made manual changes to the source (and you’re not using patch files), you should use that to create your own source tarball.

First, change the Percona Server version number. In this case, we are naming it 10custom to indicate this is not a standard build. You can adapt as you wish, just make sure the VERSION file looks something like this:

$ cat VERSION
MYSQL_VERSION_MAJOR=5
MYSQL_VERSION_MINOR=7
MYSQL_VERSION_PATCH=16
MYSQL_VERSION_EXTRA=-10custom

Then make the source tarball:

$ cmake . -DDOWNLOAD_BOOST=1 -DWITH_BOOST=build-ps/boost
$ make dist
...
-- Source package ~/percona-server/percona-server-5.7.16-10custom.tar.gz created
Built target dist

Now you have the tarball in your source directory, but we won’t use it yet. We need to add some TokuDB submodules to it first. The make dist also kept the uncompressed directory, which we will use to create the tarball again when the TokuDB parts are included:

$ rm percona-server-5.7.16-10custom.tar.gz
$ cp -R storage/tokudb/PerconaFT/* percona-server-5.7.16-10custom/storage/tokudb/PerconaFT/
$ cp -R plugin/tokudb-backup-plugin/Percona-TokuBackup/* percona-server-5.7.16-10custom/plugin/tokudb-backup-plugin/Percona-TokuBackup/
$ tar --owner=0 --group=0 --exclude=.git -czf percona-server-5.7.16-10custom.tar.gz percona-server-5.7.16-10custom

Preparing Build Environment

Environment Requirements

Make sure the build host has at least 10GB of free space and at least 4GB of RAM, or the build will fail at some point.

Install Dependencies

To build the RPM, we need to prepare our build environment and ensure the necessary build dependencies are installed:

$ sudo yum install epel-release
$ sudo yum install git gcc gcc-c++ openssl check cmake bison boost-devel asio-devel libaio-devel ncurses-devel readline-devel pam-devel wget perl-Env time numactl-devel rpmdevtools rpm-build

Prepare RPM Build Tree

Next we need to prepare our build directory structure, in this case we will install it in ~/rpmbuild:

$ cd ~/
$ rpmdev-setuptree

Download Boost Source

We also need to download the boost source (http://sourceforge.net/projects/boost/files/boost/1.59.0/boost_1_59_0.tar.gz) and put it in the ~/rpmbuild/SOURCES/ directory:

$ cd ~/rpmbuild/SOURCES
$ wget http://sourceforge.net/projects/boost/files/boost/1.59.0/boost_1_59_0.tar.gz

Move Files to the RPM Build Tree

Copy the source tarball:

$ cp percona-server-5.7.16-10custom.tar.gz ~/rpmbuild/SOURCES/

Also copy the ./build-ps/rpm/* files:

$ cp ~/percona-server/build-ps/rpm/* ~/rpmbuild/SOURCES

If you have any patch files, move them to the ~/rpmbuild/SOURCES/ directory as well:

$ cp ~/custom.patch ~/rpmbuild/SOURCES/

Then the percona-server.spec file goes into the ~/rpmbuild/SPECS directory:

$ cp ~/percona-server/build-ps/percona-server.spec ~/rpmbuild/SPECS/

Setting Correct Versions in the Spec File

$ cd ~/rpmbuild/SPECS/
$ sed -i s:@@MYSQL_VERSION@@:5.7.16:g percona-server.spec
$ sed -i s:@@PERCONA_VERSION@@:10custom:g percona-server.spec
$ sed -i s:@@REVISION@@:f31222d:g percona-server.spec
$ sed -i s:@@RPM_RELEASE@@:1:g percona-server.spec
$ sed -i s:@@BOOST_PACKAGE_NAME@@:boost_1_59_0:g percona-server.spec

Note the @@PERCONA_VERSION@@ contains our 10custom version number.

You can add your changelog information in the %changelog section.

Building RPM Packages

For Percona Server, making the RPMs is a two-step process. First, we need to make the SRPMS:

$ cd ~/
$ rpmbuild -bs --define 'dist .generic' rpmbuild/SPECS/percona-server.spec
Wrote: /home/vagrant/rpmbuild/SRPMS/Percona-Server-57-5.7.16-10custom.1.generic.src.rpm

And then we can build the binary RPMs from the SRPMS:

$ rpmbuild --define 'dist .el7' --rebuild rpmbuild/SRPMS/Percona-Server-57-5.7.16-10custom.1.generic.src.rpm
...
Wrote: /home/vagrant/rpmbuild/RPMS/x86_64/Percona-Server-server-57-5.7.16-10custom.1.el7.x86_64.rpm
Wrote: /home/vagrant/rpmbuild/RPMS/x86_64/Percona-Server-client-57-5.7.16-10custom.1.el7.x86_64.rpm
Wrote: /home/vagrant/rpmbuild/RPMS/x86_64/Percona-Server-test-57-5.7.16-10custom.1.el7.x86_64.rpm
Wrote: /home/vagrant/rpmbuild/RPMS/x86_64/Percona-Server-devel-57-5.7.16-10custom.1.el7.x86_64.rpm
Wrote: /home/vagrant/rpmbuild/RPMS/x86_64/Percona-Server-shared-57-5.7.16-10custom.1.el7.x86_64.rpm
Wrote: /home/vagrant/rpmbuild/RPMS/x86_64/Percona-Server-shared-compat-57-5.7.16-10custom.1.el7.x86_64.rpm
Wrote: /home/vagrant/rpmbuild/RPMS/x86_64/Percona-Server-tokudb-57-5.7.16-10custom.1.el7.x86_64.rpm
Wrote: /home/vagrant/rpmbuild/RPMS/x86_64/Percona-Server-57-debuginfo-5.7.16-10custom.1.el7.x86_64.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.xmcI8e
+ umask 022
+ cd /home/vagrant/rpmbuild/BUILD
+ cd percona-server-5.7.16-10custom
+ /usr/bin/rm -rf /home/vagrant/rpmbuild/BUILDROOT/Percona-Server-57-5.7.16-10custom.1.generic.x86_64
+ exit 0
Executing(--clean): /bin/sh -e /var/tmp/rpm-tmp.UXUTmh
+ umask 022
+ cd /home/vagrant/rpmbuild/BUILD
+ rm -rf percona-server-5.7.16-10custom
+ exit 0

Once the build is done, you can find the RPMs in the RPMS directory:

$ ls -1 ~/rpmbuild/RPMS/x86_64/
Percona-Server-57-debuginfo-5.7.16-10custom.1.el7.x86_64.rpm
Percona-Server-client-57-5.7.16-10custom.1.el7.x86_64.rpm
Percona-Server-devel-57-5.7.16-10custom.1.el7.x86_64.rpm
Percona-Server-server-57-5.7.16-10custom.1.el7.x86_64.rpm
Percona-Server-shared-57-5.7.16-10custom.1.el7.x86_64.rpm
Percona-Server-shared-compat-57-5.7.16-10custom.1.el7.x86_64.rpm
Percona-Server-test-57-5.7.16-10custom.1.el7.x86_64.rpm
Percona-Server-tokudb-57-5.7.16-10custom.1.el7.x86_64.rpm

Categories: MySQL

Setup ProxySQL for High Availability (not a Single Point of Failure)

MySQL Performance Blog - Thu, 2017-01-19 22:05

In this blog post, we’ll look at how to set up ProxySQL for high availability.

During the last few months, we’ve had a lot of opportunities to present and discuss a very powerful tool that will become more and more used in the architectures supporting MySQL: ProxySQL.

ProxySQL is becoming more flexible, solid and performant every day, and is more and more widely used (http://www.proxysql.com/ and the recent http://www.proxysql.com/compare). You can use ProxySQL for high availability.

The tool is a winner when compared to similar ones, and we should all have a clear(er) idea of how to integrate it in our architectures in order to achieve the best results.

The first thing to keep in mind is that ProxySQL doesn’t natively support any high availability solution. We can set up a cluster of MySQL servers and achieve four or even five nines of HA. But if we include ProxySQL as it is, and as a single block, our HA has a single point of failure (SPOF) that can drag us down in the case of a crash.

To solve this, the most common solution is setting up ProxySQL as part of a tile architecture, where Application/ProxySQL are deployed together.

This is a good solution for some cases, and it for sure reduces the network hops, but it might be less than practical when our architecture has a very high number of tiles (hundreds of application servers, which is not so unusual nowadays).
In that case, managing ProxySQL is challenging. But more problematically, ProxySQL must perform several checks on the destination servers (MySQL). If we have 400 instances of ProxySQL, we end up keeping our databases busy just performing the checks.

In short, this is not a smart move.

Another possible approach is to have two layers of ProxySQL, one close to the application and another in the middle to connect to the database.

I personally don’t like this approach for many reasons. The most relevant reasons are that this approach creates additional complexity in the management of the platform, and it adds network hops.

So what can be done?

I love the KISS principle because I am lazy and don’t want to reinvent a wheel someone else has already invented. I also like it when my customers don’t need to depend on me or any other colleagues after I am done and gone. They must be able to manage, understand and fix their environment by themselves.

To keep things simple, here is my checklist:

  • Exclude the cases where a tile (application/ProxySQL) makes sense
  • Exclude the “in the cloud” cases where tools like ELB (Elastic Load Balancer) exist
  • Exclude architecture that already includes a balancer

What can I use for the remaining cases?

The answer comes with combining existing solutions and existing blocks: Keepalived + ProxySQL + MySQL.

For an explanation of KeepAlived, visit http://www.keepalived.org/.

Short description
“Keepalived is a routing software written in C. The main goal of this project is to provide simple and robust facilities for load balancing and high-availability to Linux system and Linux-based infrastructures. The load balancing framework relies on well-known and widely used Linux Virtual Server (IPVS) kernel module providing Layer4 load balancing. Keepalived implements a set of checkers to dynamically and adaptively maintain and manage load-balanced server pool according to their health. On the other hand, high-availability is achieved by VRRP protocol. VRRP is a fundamental brick for router failover. Also, Keepalived implements a set of hooks to the VRRP finite state machine providing low-level and high-speed protocol interactions. Keepalived frameworks can be used independently or all together to provide resilient infrastructures.”

Bingo! This is exactly what we need for our ProxySQL setup.

Below, I will explain how to set up:

  • A simple solution based on a single VIP
  • A more complex solution using multiple VIPs
  • An even more complex solution using virtual VIPs and virtual servers

All we want to do is to prevent ProxySQL from becoming a SPOF. And while doing that, we need to reduce network hops as much as possible (keeping the solution SIMPLE).

Another important concept to keep in mind is that ProxySQL (re)start takes place in less than a second. This means that if it crashes, assuming it can be restarted by the angel process, doing that is much more efficient than any kind of failover mechanism. As such, your solution plan should keep in mind the ~1 second time of ProxySQL restart as a baseline.

Ready? Let’s go.

Setup

Choose three machines to host the combination of Keepalived and ProxySQL.

In the following example, I will use three machines for ProxySQL and Keepalived, and three hosting Percona XtraDB Cluster. You can place the Keepalived+ProxySQL pairs wherever you like (even on the same Percona XtraDB Cluster boxes).

For the following examples, we will have:

PXC
node1 192.168.0.5   galera1h1n5
node2 192.168.0.21  galera2h2n21
node3 192.168.0.231 galera1h3n31

ProxySQL-Keepalived
test1 192.168.0.11
test2 192.168.0.12
test3 192.168.0.235

VIP 192.168.0.88 /89/90

To check, I will use this table (please create it in your MySQL server):

DROP TABLE test.`testtable2`;
CREATE TABLE test.`testtable2` (
  `autoInc` bigint(11) NOT NULL AUTO_INCREMENT,
  `a` varchar(100) COLLATE utf8_bin NOT NULL,
  `b` varchar(100) COLLATE utf8_bin NOT NULL,
  `host` varchar(100) COLLATE utf8_bin NOT NULL,
  `userhost` varchar(100) COLLATE utf8_bin NOT NULL,
  PRIMARY KEY (`autoInc`)
) ENGINE=InnoDB ROW_FORMAT=DYNAMIC;

And this bash TEST command to use later:

while [ 1 ];do export mydate=$(date +'%Y-%m-%d %H:%M:%S.%6N'); mysql --defaults-extra-file=./my.cnf -h 192.168.0.88 -P 3311 --skip-column-names -b -e "BEGIN;set @userHost='a';select concat(user,'_', host) into @userHost from information_schema.processlist where user = 'load_RW' limit 1;insert into test.testtable2 values(NULL,'$mydate',SYSDATE(6),@@hostname,@userHost);commit;select * from test.testtable2 order by 1 DESC limit 1" ; sleep 1;done
  1. Install ProxySQL (refer to https://github.com/sysown/proxysql/wiki#installation)
  2. Install Keepalived (yum install keepalived; apt-get install keepalived)
  3. Setup ProxySQL users and servers

Once you have your ProxySQL up (run the same configuration on all ProxySQL nodes, it is much simpler), connect to the Admin interface and execute the following:

DELETE FROM mysql_replication_hostgroups WHERE writer_hostgroup=500;
DELETE FROM mysql_servers WHERE hostgroup_id IN (500,501);

INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.0.5',500,3306,1000000000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.0.5',501,3306,100);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.0.21',500,3306,1000000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.0.21',501,3306,1000000000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.0.231',500,3306,100);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.0.231',501,3306,1000000000);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;

DELETE FROM mysql_users WHERE username='load_RW';
INSERT INTO mysql_users (username,password,active,default_hostgroup,default_schema,transaction_persistent) VALUES ('load_RW','test',1,500,'test',1);
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;

DELETE FROM mysql_query_rules WHERE rule_id IN (200,201);
INSERT INTO mysql_query_rules (rule_id,username,destination_hostgroup,active,retries,match_digest,apply) VALUES(200,'load_RW',501,1,3,'^SELECT.*FOR UPDATE',1);
INSERT INTO mysql_query_rules (rule_id,username,destination_hostgroup,active,retries,match_digest,apply) VALUES(201,'load_RW',501,1,3,'^SELECT ',1);

LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
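Once the rules are loaded, a quick way to confirm that reads and writes are being split as intended (a sanity check added here, not part of the original walkthrough) is to look at ProxySQL's statistics from the Admin interface:

SELECT hostgroup, digest_text, count_star
FROM stats_mysql_query_digest
ORDER BY count_star DESC
LIMIT 5;
-- hostgroup 500 should show the writes, hostgroup 501 the SELECTs routed by the rules above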

Create a my.cnf file in your default dir with:

[mysql]
user=load_RW
password=test

Simple Setup using a single VIP, 3 ProxySQL and 3 Galera nodes

First, setup the Keepalived configuration file (/etc/keepalived/keepalived.conf):

global_defs {
  # Keepalived process identifier
  lvs_id proxy_HA
}
# Script used to check if Proxy is running
vrrp_script check_proxy {
  script "killall -0 proxysql"
  interval 2
  weight 2
}
# Virtual interface
# The priority specifies the order in which the assigned interface to take over in a failover
vrrp_instance VI_01 {
  state MASTER
  interface em1
  virtual_router_id 51
  priority <calculate on the WEIGHT for each node>

  # The virtual ip address shared between the two loadbalancers
  virtual_ipaddress {
    192.168.0.88 dev em1
  }
  track_script {
    check_proxy
  }
}

Given the above, and given I want to have TEST1 as the main priority, we will set as:

test1 = 101
test2 = 100
test3 = 99

Modify the config in each node with the above values and (re)start Keepalived.

If all is set correctly, you will see the following in the system log of the TEST1 machine:

Jan 10 17:56:56 mysqlt1 systemd: Started LVS and VRRP High Availability Monitor.
Jan 10 17:56:56 mysqlt1 Keepalived_healthcheckers[6183]: Configuration is using : 6436 Bytes
Jan 10 17:56:56 mysqlt1 Keepalived_healthcheckers[6183]: Using LinkWatch kernel netlink reflector...
Jan 10 17:56:56 mysqlt1 Keepalived_vrrp[6184]: Configuration is using : 63090 Bytes
Jan 10 17:56:56 mysqlt1 Keepalived_vrrp[6184]: Using LinkWatch kernel netlink reflector...
Jan 10 17:56:56 mysqlt1 Keepalived_vrrp[6184]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
Jan 10 17:56:56 mysqlt1 Keepalived_vrrp[6184]: VRRP_Script(check_proxy) succeeded
Jan 10 17:56:57 mysqlt1 Keepalived_vrrp[6184]: VRRP_Instance(VI_01) Transition to MASTER STATE
Jan 10 17:56:57 mysqlt1 Keepalived_vrrp[6184]: VRRP_Instance(VI_01) Received lower prio advert, forcing new election
Jan 10 17:56:57 mysqlt1 Keepalived_vrrp[6184]: VRRP_Instance(VI_01) Received higher prio advert
Jan 10 17:56:57 mysqlt1 Keepalived_vrrp[6184]: VRRP_Instance(VI_01) Entering BACKUP STATE
Jan 10 17:56:58 mysqlt1 Keepalived_vrrp[6184]: VRRP_Instance(VI_01) forcing a new MASTER election
...
Jan 10 17:57:00 mysqlt1 Keepalived_vrrp[6184]: VRRP_Instance(VI_01) Transition to MASTER STATE
Jan 10 17:57:01 mysqlt1 Keepalived_vrrp[6184]: VRRP_Instance(VI_01) Entering MASTER STATE <-- MASTER
Jan 10 17:57:01 mysqlt1 Keepalived_vrrp[6184]: VRRP_Instance(VI_01) setting protocol VIPs.
Jan 10 17:57:01 mysqlt1 Keepalived_healthcheckers[6183]: Netlink reflector reports IP 192.168.0.88 added
Jan 10 17:57:01 mysqlt1 avahi-daemon[937]: Registering new address record for 192.168.0.88 on em1.IPv4.
Jan 10 17:57:01 mysqlt1 Keepalived_vrrp[6184]: VRRP_Instance(VI_01) Sending gratuitous ARPs on em1 for 192.168.0.88

In the other two machines:

Jan 10 17:56:59 mysqlt2 Keepalived_vrrp[13107]: VRRP_Instance(VI_01) Entering BACKUP STATE <---

This means the node is standing by as a backup.

Categories: MySQL

Open Source Databases on Big Machines: Does READ COMMITTED Scale on 144 Cores?

MySQL Performance Blog - Thu, 2017-01-19 21:12

In the second post in my series on open source databases on big machines, we’ll look at whether READ COMMITTED scales with multiple cores.

The default transaction level for InnoDB is REPEATABLE READ. A more permissive level is READ COMMITTED, and is known to work well. While the REPEATABLE READ level maintains the transaction history up to the start of the transaction, READ COMMITTED maintains the transaction history up to the start of the current statement. Peter Zaitsev described the differences between how these modes are handled in this blog post. Both can theoretically cause performance slowdowns, but READ COMMITTED is usually seen as fast-working – at least on a typical MySQL machine (owned by Percona):

The default transaction isolation mode for PostgreSQL is also READ COMMITTED. Originally I wanted to use this mode for MySQL tests as well. But when I tested on a machine with 144 cores, I found that after 36 threads REPEATABLE READ continued to scale while READ COMMITTED slowed down. It then got stuck at about 3x slower results for standard OLTP RW tests.
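For reference, the isolation level compared in these tests can be switched with standard SQL; a minimal example looks like this (the actual benchmarks were driven through sysbench):

-- for the current session
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- or globally, affecting new connections (tx_isolation in MySQL 5.6/5.7)
SET GLOBAL tx_isolation = 'READ-COMMITTED';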

Dimitri Kravtchuk wrote about performance issues with READ COMMITTED in 2015, but he tested with 40 cores that time. My tests show that there is a huge difference after 40 cores.

I tested this originally with Percona Server 5.7.15 and recently re-tested with Oracle’s MySQL versions 5.6.35 and 5.7.17. I confirmed that the bug exists in these versions as well, and reported it. I hope the Oracle MySQL Team fixes it. The good news is that while 5.6 stopped scaling after 16 threads, 5.7 improves this to 36 threads.

Results for 5.6.35:

Results for 5.7.17:

Machine details:

PostgreSQL Professional and Freematiq machine (tests for MYSQL 5.6.35, Percona 5.7.15 and MySQL 5.7.17 servers):

Processors: physical = 4, cores = 72, virtual = 144, hyperthreading = yes
Memory: 3.0T
Disk speed: about 3K IOPS
OS: CentOS 7.1.1503
File system: XFS

Percona machine (test for Percona 5.7.15 server):

Processors: physical = 2, cores = 12, virtual = 24, hyperthreading = yes
Memory: 251.9G
Disk speed: about 33K IOPS
OS: Ubuntu 14.04.5 LTS
File system: EXT4

Test: SysBench OLTP RW test, converted to use prepared statements, as described in this post.

MySQL Options: described in this post.

Categories: MySQL

Elasticsearch Ransomware: Open Source Database Security Part 2

MySQL Performance Blog - Wed, 2017-01-18 18:15

In this blog post, we’ll look at a new Elasticsearch ransomware outbreak and what you can do to prevent it happening to you.

Mere weeks after reports of MongoDB servers getting hacked and infected with ransomware, Elasticsearch clusters are experiencing the same difficulties. David Murphy’s blog discussed the situation and the solution for MongoDB servers. In this blog post, we look at how you can prevent ransomware attacks on your Elasticsearch clusters.

First off, what is Elasticsearch? Elasticsearch is an open source distributed index based on Apache Lucene. It provides a full-text search with an HTTP API, using schemaless JSON documents. By its nature, it is also distributed and redundant. Companies use Elasticsearch with logging via the ELK stack and data-gathering software, to assist with data analytics and visualizations. It is also used to back search functionality in a number of popular apps and web services.

In this new scenario, the ransomware completely wiped away the cluster data, and replaced it with the following warning index:

“SEND 0.2 BTC TO THIS WALLET: 1DAsGY4Kt1a4LCTPMH5vm5PqX32eZmot4r IF YOU WANT RECOVER YOUR DATABASE! SEND TO THIS EMAIL YOUR SERVER IP AFTER SENDING THE BITCOINS.”

As with the MongoDB situation, this isn’t a flaw in the Elasticsearch software. This vulnerability stems from not correctly using the security settings provided by Elasticsearch. As the PCWorld article sums up:

According to experts, there is no reason to expose Elasticsearch clusters to the internet. In response to these recent attacks, search technologies and distributed systems architect Itamar Syn-Hershko has published a blog post with recommendations for securing Elasticsearch deployments.

The blog post they reference has excellent advice and examples of how to protect your Elasticsearch installations from exploitation. To summarize its advice (from the post itself):

Whatever you do, never expose your cluster nodes to the web directly.

So how do you prevent hackers from getting into your Elasticsearch cluster? Using the advice from Syn-Hershko’s blog, here are some bullet points for shoring up your Elasticsearch security:

  • HTTP-enabled nodes should only listen to private IPs. You can configure what IPs Elasticsearch listens to: localhost, private IPs, public IPs or several of these options. network.bind_host and network.host control the IP types (manual). Never set Elasticsearch to listen to a public IP or a publicly accessible DNS name.
  • Use proxies to communicate with clients. You should pass any application queries to Elasticsearch through some sort of software that can filter requests, perform audit-logging and password-protect the data. Your client-side javascript shouldn’t talk to Elastic directly, and should only communicate with your server-side software. That software can translate all client-side requests to Elasticsearch DSL, execute the query, and then send the response in a format the clients expect.
  • Don’t use default ports. Once again for clarity: DON’T USE DEFAULT PORTS. You can easily change Elasticsearch’s default ports by modifying the .YML file. The relevant parameters are http.port and transport.tcp.port (manual).
  • Disable HTTP if you don’t need it. Only Elasticsearch client nodes should enable HTTP, and your private network applications should be the only ones with access to them. You can completely disable the HTTP module by setting http.enabled to false (manual).
  • Secure publicly available client nodes. You should protect your Elasticsearch client and any UI it communicates with (such as Kibana and Kopf) behind a VPN. If you choose to allow some nodes access to the public network, use HTTPS and don’t transmit data and credentials as plain-text. You can use plugins like Elastic’s Shield or SearchGuard to secure your cluster.
  • Disable scripting (pre-5.x). Malicious scripts can hack clusters via the Search API. Earlier versions of Elasticsearch allowed unsecured scripts to access the software. If you are using an older version (pre-5.x), upgrade to a newer version or disable dynamic scripting completely.

Go to Syn-Hershko’s blog for more details.

This should get you started on correctly protecting yourself against Elasticsearch ransomware (and other security threats). If you want to have someone review your security, please contact us.

Categories: MySQL

Webinar Wednesday January 18, 2017: Lessons from Database Failures

MySQL Performance Blog - Tue, 2017-01-17 18:09

Join Percona’s Chief Evangelist Colin Charles on Wednesday, January 18, 2017, at 7:00 am (PST) / 10:00 am (EST) (UTC-8) as he presents “Lessons from Database Failures.”

MySQL failures at scale can teach a great deal. MySQL failures can lead to a discussion about such topics as high availability (HA), geographical redundancy and automatic failover. In this webinar, Colin will present case study material (how automatic failover caused Github to go offline, why Facebook uses assisted failover rather than fully automated failover, and other scenarios) to look at how the MySQL world is making things better. One way, for example, is using semi-synchronous replication to run fully scalable services.

The webinar will begin with an obvious example of how a business died due to incorrect MySQL backup procedures. The agenda includes backups (and verification), replication (and failover) and security (and encryption).

The webinar will cover a mix of big “fail whale” problems from the field, and how you should avoid them by properly architecting solutions.

Register for the webinar here.

Colin Charles is the Chief Evangelist at Percona. Previously, he was part of the founding team of MariaDB Server in 2009. Before that, he worked at MySQL since 2005. Colin has been a MySQL user since 2000. Before joining MySQL, he worked actively on the Fedora and OpenOffice.org projects. He’s well known within the APAC open source communities and has spoken at many conferences.

Categories: MySQL

Percona Live Featured Tutorial with Morgan Tocker — MySQL 8.0 Optimizer Guide

MySQL Performance Blog - Mon, 2017-01-16 23:06

Welcome to another post in the series of Percona Live featured tutorial speakers blogs! In these blogs, we’ll highlight some of the tutorial speakers that will be at this year’s Percona Live conference. We’ll also discuss how these tutorials can help you improve your database environment. Make sure to read to the end to get a special Percona Live 2017 registration bonus!

In this Percona Live featured tutorial, we’ll meet Morgan Tocker, MySQL Product Manager at Oracle. His tutorial is a MySQL 8.0 Optimizer Guide. Many users who follow MySQL development are aware that recent versions introduced a number of improvements to query execution (via the addition of new execution strategies and an improved cost model). But what we don’t talk enough about is that the diagnostic features are also significantly better. I had a chance to speak with Morgan and learn a bit more about the MySQL Optimizer:

Percona: How did you get into database technology? What do you love about it?

Morgan: I started my career as a web developer, mainly focusing on the front end area. As the team I worked on grew and required different skills, I tried my hand at the back end programming. This led me to databases.

I think what I enjoyed about databases at the time was that front end design was a little bit too subjective for my tastes. With databases, you could prove what was “correct” by writing a simple micro-benchmark.  I joined the MySQL team in January 2006, and rejoined it again in 2013 after a five-year hiatus.

I don’t quite subscribe to this same view on micro benchmarks today, since it is very easy to accidentally (or intentionally) write a naïve benchmark. But I am still enjoying myself.

Percona: Your tutorial is called “MySQL 8.0 Optimizer Guide.” What exactly is the MySQL optimizer, and what new things have been added in MySQL 8.0?

Morgan: Because SQL is declarative (i.e., you state “what you want” rather than “do this then that”), there is a process that has to happen internally to prepare a query for execution. I like to describe it as similar to what happens when you enter an address in a GPS navigator. Some software then spits out the best steps on how to get there. In a database server, the Optimizer is that software code area.

There are a number of new optimizer features in MySQL 8.0, both in terms of new syntax supported and performance improvements to existing queries. These will be covered in some talks at the main conference (and also my colleague Øystein’s tutorial). The goal of my tutorial is to focus more on diagnostics than the enhancements themselves.

Percona: How can you use diagnostics to improve queries?

Morgan: To put it in numbers: it is not uncommon to see a user obsess over a configuration change that may yield a 2x improvement, and not spot the 100x improvement available by adding an index!

I like to say that users do not get the performance that they are entitled to if and when they lack the visibility and diagnostics available to them:

  • In MySQL 5.6, an optimizer trace diagnostic was added so that you can now see not only why the optimizer arrived at a particular execution plan, but why other options were avoided.
  • In MySQL 5.7, the EXPLAIN FORMAT=JSON command now includes the cost information (the internal formula used for calculations). My experience has been that sharing this detail actually makes the optimizer a lot easier to teach.

Good diagnostics by themselves do not make the improvements, but they bring required changes to the surface. On most systems, there are opportunities for improvements (indexes, hints, slight changes to queries, etc.).
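As a concrete illustration of the diagnostics Morgan mentions (an example added here, not taken from the tutorial; the employees table and last_name column are hypothetical), the optimizer trace and the JSON EXPLAIN output can be pulled for any query like this:

SET optimizer_trace='enabled=on';
SELECT * FROM employees WHERE last_name = 'Smith';
SELECT trace FROM information_schema.OPTIMIZER_TRACE;
SET optimizer_trace='enabled=off';

EXPLAIN FORMAT=JSON SELECT * FROM employees WHERE last_name = 'Smith';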

Percona: What do you want attendees to take away from your tutorial session? Why should they attend?

Morgan: Visibility into running systems has been a huge priority for the MySQL Engineering team over the last few releases. I think in many cases, force-of-habit leaves users using an older generation of diagnostics (EXPLAIN versus EXPLAIN FORMAT=JSON). My goal is to show them the light using the state-of-the-art stack. This is why I called my talk an 8.0 guide, even though much of it is still relevant for 5.7 and 5.6.

I also have a companion website to my tutorial at www.unofficalmysqlguide.com.

Percona: What are you most looking forward to at Percona Live?

Morgan: I’m excited to talk to users about MySQL 8.0, and not just in an optimizer sense. The MySQL engineering team has made a large investment in improving the reliability of MySQL with the introduction of a native data dictionary. I expect it will be the subject of many discussions, and a great opportunity for feedback.

There is also the social aspect for me, too. It will be 11 years since I first attended the predecessor to Percona Live. I have a lot of fond memories, and enjoy catching up with new friends and old over a beer!

You can find out more about Morgan Tocker and his work with the Optimizer at his tutorial website. Want to find out more about Morgan and MySQL query optimization? Register for Percona Live Data Performance Conference 2017, and see his MySQL 8.0 Optimizer Guide tutorial. Use the code FeaturedTalk and receive $30 off the current registration price!

Percona Live Data Performance Conference 2017 is the premier open source event for the data performance ecosystem. It is the place to be for the open source community as well as businesses that thrive in the MySQL, NoSQL, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Data Performance Conference will be April 24-27, 2017 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.

Categories: MySQL

Ad-hoc Data Visualization and Machine Learning with mysqlshell

MySQL Performance Blog - Mon, 2017-01-16 22:00

In this blog post, I am going to show how we can use mysqlshell to run ad-hoc data visualizations and use machine learning to predict new outcomes from the data.

Some time ago Oracle released MySQL Shell, a command line client to connect to MySQL using the X protocol. It allows us to use Python or JavaScript scripting capabilities. This unties us from the limitations of SQL, and the possibilities are infinite. It means that MySQL can not only read data from the tables, but also learn from it and predict new values from features never seen before.

Some disclaimers:

  • This is not a post about to how to install mysqlshell or enable the X plugin. It should be already installed. Follow the first link if instructions are needed.
  • The idea is to show some of the things that can be done from the shell. Don’t expect the best visualizations or a perfectly tuned Supervised Learning algorithm.

It is possible to start mysqlshell with JavaScript or Python interpreter. Since we are going to use Pandas, NumPy and Scikit, Python will be our choice. There is an incompatibility between mysqlshell and Python > 2.7.10 that gives an error when loading some external libraries, so make sure you use 2.7.10.

We’ll work with the “employees” database, which can be downloaded here. In order to make everything easier and avoid several lines of data parsing, I have created a new table that summarizes the data we are going to work with, generated using the following structure and query:

mysql> show create table data\G
*************************** 1. row ***************************
Create Table: CREATE TABLE `data` (
  `emp_no` int(11) NOT NULL,
  `age` int(11) DEFAULT NULL,
  `hired` int(11) DEFAULT NULL,
  `gender` int(11) DEFAULT NULL,
  `salary` int(11) DEFAULT NULL,
  `department` int(11) DEFAULT NULL,
  PRIMARY KEY (`emp_no`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

mysql> INSERT INTO data
       SELECT employees.emp_no,
              YEAR(now()) - YEAR(birth_date) as age,
              YEAR(now()) - YEAR(hire_date) as hired,
              IF(gender='M',0,1) as gender,
              max(salary) as salary,
              RIGHT(dept_no,1) as department
       from employees, salaries, dept_emp
       WHERE employees.emp_no = salaries.emp_no
         and employees.emp_no = dept_emp.emp_no
         and dept_emp.to_date="9999-01-01"
       GROUP BY emp_no, dept_emp.dept_no;

mysql> select * from data limit 5;
+--------+------+-------+--------+--------+------------+
| emp_no | age  | hired | gender | salary | department |
+--------+------+-------+--------+--------+------------+
|  10001 |   64 |    31 |      0 |  88958 |          5 |
|  10002 |   53 |    32 |      1 |  72527 |          7 |
|  10003 |   58 |    31 |      0 |  43699 |          4 |
|  10004 |   63 |    31 |      0 |  74057 |          4 |
|  10005 |   62 |    28 |      0 |  94692 |          3 |
+--------+------+-------+--------+--------+------------+

The data is:

  • Age: the age of the employee
  • Hired: the number of years working in the company
  • Gender: 0 Male, 1 Female
  • Salary: the salary
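Before handing this table to Pandas and scikit-learn, a quick aggregate (an illustrative sanity check added here, not part of the original walkthrough) confirms the summary data looks reasonable:

mysql> SELECT department, gender, COUNT(*) AS employees, ROUND(AVG(salary)) AS avg_salary
       FROM data
       GROUP BY department, gender
       ORDER BY department, gender;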
Categories: MySQL

MongoDB 3.4 Views

MySQL Performance Blog - Fri, 2017-01-13 23:07

This blog post covers MongoDB 3.4 views, one of the more recent MongoDB features.

Views are often used in relational databases to achieve both data security and a high level of abstraction, making it easier to retrieve data. Unlike regular tables, views neither have a physical schema nor use disk space. They execute a pre-specified query. There are exceptions (such as materialized views and pre-executed views), but as a default the engine actually executes a query and then sends the result set as a single table when a view is used.

In MySQL, a simple view can be defined as:

create database percona;
use percona;
create view user_hosts as select user, host from mysql.user;
select * from user_hosts;
+------------------+-----------+
| user             | host      |
+------------------+-----------+
| myuser           | %         |
...

The query above shows only the users and host field, rather than all the table fields. Anyone who queries this view sees a table that only has the user and host fields.
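To complete the data-security picture on the MySQL side, you would typically grant a user SELECT on the view only, and not on the underlying table. A minimal sketch, using a hypothetical view_only account:

CREATE USER 'view_only'@'%' IDENTIFIED BY 'secret';
GRANT SELECT ON percona.user_hosts TO 'view_only'@'%';
-- view_only can now query the view, but has no access to mysql.user itself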

This feature was not available in previous MongoDB versions. All we could do was either deny reads in a collection (which would make it useless to the user) or allow reads to the entire collection (which  was pretty unsafe).

The views feature request was open for a while, and as we can see there was a considerable number of votes to make this feature available: https://jira.mongodb.org/browse/SERVER-142.

MongoDB 3.4 views are non-materialized views, and behind the scenes the engine runs an aggregation. Creating a view requires that we specify a collection or a previous existing view. When a view is the source collection from another view, it allows us to execute a chained aggregation.

To create a view, we should use the db.createView(‘view_name’,’source’,[pipeline]) command, specifying the view name, the view source collection and the aggregation pipeline. This aggregation pipeline, as well as the other parameters, is saved in the system.views collection. This is the only space that the view will use in the system. A new document is saved in the system.views collection for each view created.

Although views seem very easy to create, there are a few pitfalls when using them.

Since views always run an aggregation, an index is desired to cover the aggregation $match pipeline, or slow responses might be expected during the full collection scans.

Cascading aggregations (creating views of views) can be slow, as the view does not have any data and therefore cannot be indexed. MongoDB neither checks the collection fields nor the collection existence before creating the view. If there is no collection, the view returns an empty cursor.

Views appear as a collection when we are listing them. The show collections command shows us views as one collection, but such collections are read-only. To drop a view, we simply execute db.collection.drop(). The view definition is removed from system.views, but the data remains untouched, because dropping the view only removes the code that generates the view result.

How to create views:

In this step-by-step, we will create a view and restrict the user UserReadOnly to read privileges only:

1. Populate collection:

$ mongo --authenticationDatabase admin -u foo -p
use financial
switched to db financial
> db.employee.insert({FirstName : 'John', LastName: 'Test', position : 'CFO', wage : 180000.00 })
WriteResult({ "nInserted" : 1 })
> db.employee.insert({FirstName : 'John', LastName: 'Another Test', position : 'CTO', wage : 210000.00 })
WriteResult({ "nInserted" : 1 })
> db.employee.insert({FirstName : 'Johnny', LastName: 'Test', position : 'COO', wage : 180000.00 })
WriteResult({ "nInserted" : 1 })

2. Create view that only shows full names:

use financial
db.createView('employee_names','employee', [{ $project : { _id : 0, "fullname" : {$concat : ["$FirstName", " ", "$LastName"]}}}])
{ "ok" : 1 }
> show collections
employee
employee_names
system.views
db.system.views.find()
db.employee_names.find()
{ "fullname" : "John Test" }
{ "fullname" : "John Another Test" }
{ "fullname" : "Johnny Test" }

3. Create a user-defined role that only gives access to the views:

Create a file “createviewOnlyRole.js” with the following javascript, or copy and paste the following code: 

use financial
db_name = db.toString()
priv = []
db.system.views.find({},{"_id" : 1, "viewOn" : 1}).forEach(function (view) {
    database_collection = view['_id'].split('.')
    database = database_collection[0]
    database_collection.splice(0,1)
    coll = database_collection.join([separator = '.'])
    priv.push({"resource" : { "db" : database, "collection" : coll}, "actions" : ["find"]})
})
var viewrole = db.getRole(db_name + '_readAnyView')
if (viewrole == null) {
    db.runCommand({ createRole: db_name + "_readAnyView", "privileges": priv, roles : [] });
} else {
    db.runCommand({ updateRole: db_name + "_readAnyView", "privileges": priv, roles : [] });
}
print('access granted to:')
printjson(priv)

Then authenticate and use the desired database to create this role. In our case:

use financial
load('createviewOnlyRole.js')

4. Create a new user assigned to the readAnyView role. This new user is only able to query against views, and they must know the view name because no other privileges are granted:

use financial
db_name = db.toString()
db.createUser(
  {
    user: "userReadViews",
    pwd: "123",
    roles: [ db_name + "_readAnyView"]
  }
)

Notes: If you receive an error when trying to execute the .js file, please create a new role that grants find in the system.views collection:

use admin
db.runCommand({
  createRole: "readViewCollection",
  privileges: [
    { resource: { db: "", collection: "system.views" }, actions: [ "find"] }
  ],
  roles : []
})
db.grantRoleToUser('<your_user>',['readViewCollection'])

For more information about user-defined roles, please check the user-defined roles docs.

This should help explain MongoDB 3.4 views. Please feel free to contact me @AdamoTonete or @percona for any questions and suggestions.

Categories: MySQL

The Impact of Swapping on MySQL Performance

MySQL Performance Blog - Fri, 2017-01-13 17:27

In this blog, I’ll look at the impact of swapping on MySQL performance. 

It’s common sense that when you’re running MySQL (or really any other DBMS) you don’t want to see any I/O in your swap space. Scaling the cache size (using innodb_buffer_pool_size in MySQL’s case) is standard practice to make sure there is enough free memory so swapping isn’t needed.   
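A quick way to see how the buffer pool is currently sized, and whether it still has free pages, is the following (a simple check added for illustration, not part of the benchmark itself):

mysql> SELECT @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gb;
mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_free';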

But what if you make some mistake or miscalculation, and swapping happens? How much does it really impact performance? This is exactly what I set out to investigate.

My test system has the following:

  • 32GB of physical memory
  • OS (and swap space) on a (pretty old) Intel 520 SSD device
  • Database stored on Intel 750 NVMe storage

To simulate a worst case scenario, I’m using Uniform Sysbench Workload:

sysbench --test=/usr/share/doc/sysbench/tests/db/select.lua   --report-interval=1 --oltp-table-size=700000000 --max-time=0 --oltp-read-only=off --max-requests=0 --num-threads=64 --rand-type=uniform --db-driver=mysql --mysql-password=password --mysql-db=test_innodb  run

To better visualize the performance of the metrics that matter for this test, I have created the following custom graph in our Percona Monitoring and Management (PMM) tool. It shows performance disk IO and swapping activity on the same graph.

Here are the baseline results for innodb_buffer_pool=24GB. The results are a reasonable ballpark number for a system with 32GB of memory.

As you can see in the baseline scenario, there is almost no swapping, with around 600MB/sec read from the disk. This gives us about 44K QPS. The 95% query response time (reported by sysbench) is about 3.5ms.

Next, I changed the configuration to innodb_buffer_pool_size=32GB, which is the total amount of memory available. As memory is required for other purposes, it caused swapping activity:

We can see that performance stabilizes after a bit at around 20K QPS, with some 380MB/sec disk IO and 125MB/sec swap IO. The 95% query response time has grown to around 9ms.

Now let’s look at an even worse case. This time, we’ll set our configuration to innodb_buffer_pool_size=48GB (on a 32GB system).

Now we have around 6K QPS. Disk IO has dropped to 250MB/sec, and swap IO is up to 190MB/sec. The 95% query response time is around 35ms. As the graph shows, the performance becomes more variable, confirming the common assumption that intense swapping affects system stability.

Finally, let’s remember MySQL 5.7 has the Online Buffer Pool Resize feature, which was created to solve exactly this problem (among other reasons). It changes the buffer pool size if you accidentally set it too large. As we have tested innodb_buffer_pool_size=24GB, and demonstrated it worked well, let’s scale it back to that value:

mysql> set global innodb_buffer_pool_size=24*1024*1024*1024;
Query OK, 0 rows affected (0.00 sec)

Now the graph shows both good and bad news. The good news is that the feature works as intended, and after the resize completes we get close to the same results as before our swapping experiment. The bad news is that everything pretty much grinds to a halt for 15 minutes or so while the resize occurs. There is almost no IO activity or intensive swapping while the buffer pool resize is in progress.
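If you want to follow the resize from another session, MySQL 5.7 exposes its progress through a status variable (shown here as a quick check, not taken from the original test run):

mysql> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_resize_status';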

I also performed other sysbench runs for selects using Pareto random type rather than Uniform type, creating more realistic testing (skewed) data access patterns. I further performed update key benchmarks using both Uniform and Pareto access distribution.

You can see the results below:

As you can see, the results for selects are as expected. Accesses with Pareto distributions are better and are affected less – especially by minor swapping.  

If you look at the update key results, though, you find that minor swapping causes performance to improve for Pareto distribution. The results at 48GB of memory are pretty much the same.

Before you say that that is impossible, let me provide an explanation: I limited innodb_max_purge_lag on this system to avoid unbound InnoDB history length growth. These workloads tend to be bound by InnoDB purge performance. It looks like swapping has impacted the user threads more than it did the purge threads, causing such an unusual performance profile. This is something that might not be repeatable between systems.

Summary

When I started, I expected severe performance drop even with very minor swapping. I surprised myself by getting swap activity to more than 100MB/sec, with performance “only” halved.  

While you should continue to plan your capacity so that there is no constant swapping on the database system, these results show that a few MB/sec of swapping activity is not going to have a catastrophic impact.

This assumes your swap space is on an SSD, of course! SSDs handle random IO (which is what paging activity usually is) much better than HDDs.

Categories: MySQL