MySQL

Meeting The Challenges of Monitoring In The Cloud

Xaprb, home of innotop - Fri, 2016-09-09 17:30

I’ll be visiting MIT’s Tang Center on October 10 in Boston to talk about monitoring. Join me!

The infrastructure underneath modern apps is rapidly changing, with cloud and hybrid infrastructure now commonplace. The IaaS trend, however, is just the beginning. Today, a “cloud-hosted app” may mean renting EC2 instances and installing and running your platform as you always did, but in the future you won’t think about virtual servers. You’ll think about services instead, e.g. DBaaS, lambda computing, and so-called “serverless” computing (cf. horseless carriages). This is already the reality; the majority of the growth in database markets over the last few years has been in services such as Amazon RDS, rather than installing MySQL in EC2. And most major and emerging database companies are turning to DBaaS as a major part of their business model (see Azure, MongoDB Atlas, InfluxCloud, Elastic Cloud, Citus Cloud, etc).

There are some real visibility and governance challenges to solve, though. Without good visibility, you can’t find issues, you can’t diagnose and solve them, and you can’t be sure controls and security measures are actually operational. And “black-box” hosted DBaaS forces you to rely on the monitoring that the vendor provides, which is often an afterthought at best, designed by people who aren’t solving the same problems you are.

How can you prepare for the challenges of meeting your service level objectives with services over which you have increasingly less visibility and control? What benefits will you get in return? And how can you influence the future and ensure vendors create the solutions you need, rather than accepting what you’re given and making the best of it?

Join me October 10 at the Boston MySQL Meetup Group for a lively discussion of this topic!

7 p.m. on October 10, 2016

MIT, The Tang Center, Building E51
70 Memorial Drive
Cambridge, MA

Looking forward to seeing you there!

Categories: MySQL

MySQL Replication Troubleshooting: Q & A

MySQL Performance Blog - Thu, 2016-09-08 18:53

In this blog, I will provide answers to the Q & A for the MySQL Replication Troubleshooting webinar.

First, I want to thank everybody for attending the August 25 webinar. The recording and slides for the webinar are available here. Below is the list of your questions that I wasn’t able to answer during the webinar, with responses:

Q: Hi Sveta. One question: how is it possible to get N previous events using the SHOW BINLOG EVENTS command? For example, the position is 999 and I want to analyze the previous five events. Is it possible?

A: No, there is no such option. You cannot get the previous five events using SHOW BINLOG EVENTS. However, you can use mysqlbinlog with the --stop-position option and tail its output.
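To make the “tail its output” idea concrete, here is a minimal sketch. The helper and sample data are hypothetical stand-ins: assume you have already split the output of `mysqlbinlog --stop-position=999 binlog.000001` into one string per event, and you want the last five events before the target position.

```python
def last_n_events(events, n=5):
    """Return the last n decoded events -- the 'tail' of the
    mysqlbinlog output that ends at --stop-position."""
    return events[-n:]

# Toy event list standing in for real decoded binlog events.
events = ["event at pos %d" % p for p in (120, 245, 398, 512, 640, 781, 903)]

# The five events immediately preceding the stop position.
print(last_n_events(events, 5))
```

In practice you would pipe the real output through `tail` instead of parsing it in Python; the sketch only shows why tailing works, since mysqlbinlog prints events in position order.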

Q: We are having issues with inconsistencies over time. We also have a lot of “waiting for table lock” statuses during high volume usage. Would changing these tables to InnoDB help the replicated database remain consistent?

A: Do you use MyISAM? Switching to InnoDB might help, but it depends on what types of queries you use. For example, if you often use the LOCK TABLE command, you will still see "waiting for table lock" with InnoDB too. Regarding data consistency between the master and slave, you need to use row-based replication.

Q: For semi-sync replication, what’s the master’s behavior when the master never received ACK from any of the slaves?

A: It will time out after rpl_semi_sync_master_timeout milliseconds, and then switch to asynchronous replication.
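The fallback decision can be sketched as follows. The `wait_for_ack` helper is a made-up stand-in for the server's internal logic, not an actual MySQL API; it only illustrates the timeout-then-degrade behavior described above.

```python
import time

def wait_for_ack(ack_received, timeout_ms):
    """Wait up to timeout_ms (rpl_semi_sync_master_timeout) for a
    slave ACK; fall back to asynchronous replication if none arrives."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if ack_received():
            return "semi-sync"   # ACK arrived in time, commit acknowledged
        time.sleep(0.001)        # poll briefly, standing in for a real wait
    return "async"               # timed out: master degrades to async

# No slave ever ACKs, so the master falls back to asynchronous mode.
print(wait_for_ack(lambda: False, timeout_ms=10))
```

Once a slave catches up and ACKs again, the real server switches back to semi-synchronous mode; the sketch only covers the degrade path.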

Q: We’re using MySQL on r3.4xlarge EC2 instances (16 CPU). We use RBR. innodb_read_io_threads and innodb_write_io_threads =4. We often experience lags. Would increasing these to eight offer better IO for slaves? What other parameters could boost slave IO?

A: Yes, an increased number of IO threads would most likely improve performance. Other parameters that could help are similar to the ones discussed in the “InnoDB Troubleshooting” and “Introduction to Troubleshooting Performance: What Affects Query Execution?” webinars. You need to pay attention to InnoDB options that affect IO (innodb_thread_concurrency, innodb_flush_method, innodb_flush_log_at_trx_commit, innodb_flush_log_at_timeout) and general IO options, such as sync_binlog.

Q: How many masters can I have working together?

A: What do you mean by “how many masters can [you] have working together”? Do you mean circular replication or a multi-master setup? In either case, the only limitation is hardware. For a multi-master setup, ensure that the slave has enough resources to process all requests. For circular replication, ensure that each master in the chain can handle the growing number of writes as they replicate down the chain, so that slave lag does not increase without bound.

Q: What’s the best way to handle auto_increment?

A: Follow the advice in the user manual: set auto_increment_offset to a unique value on each server, set auto_increment_increment to the number of servers, and never update auto-incremented columns manually.
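A quick way to see why this works: with auto_increment_increment equal to the number of servers and a unique auto_increment_offset on each one, every server generates a disjoint ID sequence. A small illustrative sketch (the helper name is made up):

```python
def ids_for_server(offset, increment, count):
    """IDs a server generates under MySQL's interleaving scheme:
    auto_increment_offset=offset, auto_increment_increment=increment."""
    return [offset + increment * i for i in range(count)]

# Three masters, increment = number of servers = 3.
for offset in (1, 2, 3):
    print(offset, ids_for_server(offset, increment=3, count=4))
# server 1 gets 1, 4, 7, 10; server 2 gets 2, 5, 8, 11; server 3
# gets 3, 6, 9, 12 -- the sequences never collide.
```

Manually updating an auto-incremented column breaks this guarantee, which is why the manual warns against it.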

Q: I configured multi-threaded replication. Sometimes the replication lag keeps increasing while the slave is doing “invalidating query cache entries (table)”. How should I fine-tune this?

A: The status "invalidating query cache entries (table)" means that the query cache is invalidating entries that were changed by a command currently being executed by the slave SQL thread. To avoid this issue, keep the query cache small (not larger than 512 MB) and defragment it from time to time using the FLUSH QUERY CACHE command.

Q: Sometimes when IO is slow and during lag, we see the slave reading an event from the relay log with the state “Waiting for master to send event”. How do we troubleshoot this to get more details?

A: The "Waiting for master to send event" state shows that the slave IO thread sent a request for a new event, and is waiting for the event from the master. If you believe it hasn’t received the event in a timely fashion, check the error log files on both the master and slave for connection errors. If there is no error message, or if the message doesn’t provide enough information to solve the issue, use the network troubleshooting methods discussed in the “Troubleshooting hardware resource usage” webinar.

Categories: MySQL

Percona is Hiring: Director of Platform Engineering

MySQL Performance Blog - Thu, 2016-09-08 18:20

Percona is hiring a Director of Platform Engineering. Find out more!

At Percona, we recognize you need much more than just a database server to successfully run a database-powered infrastructure. You also need strong tools to deploy, manage and monitor the software. Percona’s Platform Engineering group is responsible for exactly that: they build next-generation open source solutions for the deployment, monitoring and management of open source databases.

This team is currently responsible for products such as Percona Toolkit, Percona Monitoring Plugins, and Percona Monitoring and Management.

Percona builds products that advance state-of-the-art open source software. Our products help our customers monitor and manage their databases. They help our services team serve customers faster, better and more effectively.

The leader of the Platform Engineering group needs a strong vision, as well as an understanding of market trends and best practices for automation, monitoring and management – in the cloud and on premises. This person must have some past technical operations background and experience building and leading engineering teams that have efficiently delivered high-quality software. The ideal candidate will also understand the nature of open source software development and have experience working with distributed teams.

This position is for a “player coach” – you will get your hands dirty writing code, performing quality assurance, writing great documentation and assisting customers with troubleshooting.

We are not looking for extensive experience with a particular programming language, but qualified candidates should be adept at learning new ones. Currently, our teams use a combination of Perl, Python, Go and JavaScript.

The Director of Platform Engineering reports to Vadim Tkachenko, CTO and VP of Engineering. They will also work closely with me, other senior managers and experts at Percona.

Interested? Please apply here on Percona’s website.

Categories: MySQL

Percona Live Europe featured talk with Igor Canadi — Everything you wanted to know about MongoRocks

MySQL Performance Blog - Wed, 2016-09-07 17:47

Welcome to another Percona Live Europe featured talk with Percona Live Europe 2016: Amsterdam speakers! In this series of blogs, we’ll highlight some of the speakers that will be at this year’s conference. We’ll also discuss the technologies and outlooks of the speakers themselves. Make sure to read to the end to get a special Percona Live Europe registration bonus!

In this Percona Live Europe featured talk, we’ll meet Igor Canadi, Software Engineer at Facebook, Inc. His talk will be on Everything you wanted to know about MongoRocks. MongoRocks is MongoDB with the RocksDB storage engine. It was developed at Facebook, where it’s used to power Parse, a mobile backend-as-a-service provider.

I had a chance to speak with Igor and learn a bit more about these questions:

Percona: Give me a brief history of yourself: how you got into database development, where you work, what you love about it?

Igor: After I finished my undergrad at the University of Zagreb in Croatia, I joined the University of Wisconsin-Madison’s Master’s program. Even though UW-M is famous for its work on databases, during my two years there I worked in a different area. However, when I joined Facebook after school, I heard of a cool new project called RocksDB. Everything about building a new storage engine sounded exciting to me, although I had zero idea how thrilling the ride would actually be. The best part was working with and getting to know amazing people from Facebook, Parse, MongoDB, Percona, and many other companies that are using or experimenting with RocksDB.

Percona: Your talk is called “Everything you wanted to know about MongoRocks.” Briefly, what is MongoRocks and why did it get developed?

Igor: Back in 2014, MongoDB announced that they were building a pluggable storage engine API, which would enable MongoDB users to seamlessly choose the storage engine that works best for their workload. Their first prototype actually used RocksDB as the storage engine, which was very exciting for us. However, they soon bought WiredTiger, another great storage engine, and decided to abandon the MongoDB+RocksDB project. At the same time, Parse was running into scaling challenges with their MongoDB deployment. We decided to help out and take over the development of MongoRocks. We started rolling it out at Parse in March 2015 and completed the rollout in October. Running MongoRocks instead of MongoDB with the MMap storage engine resulted in much greater efficiency and lower latencies in some scenarios. Some of the experiences are captured in Parse’s blog posts: http://blog.parse.com/announcements/mongodb-rocksdb-parse/ and http://blog.parse.com/learn/engineering/mongodb-rocksdb-writing-so-fast-it-makes-your-head-spin/

Percona: What are the workloads and database environments that are best suited for a MongoRocks deployment? Do you see an expansion of the solution to encompass other scenarios?

Igor: Generally speaking, MongoRocks should compress really well. Over the years of using LSM engines, we learned that their compression rates are hard to beat. The difference can sometimes be substantial. For example, many benchmarks of MyRocks, which is MySQL with the RocksDB storage engine, have shown that compressed InnoDB uses two times as much space as compressed RocksDB. With better compression, more of your data fits in memory, which could also improve read latencies and lower the stress on storage media. However, this is a tricky question to answer generally. It really depends on the metrics you care about. One great thing about Mongo and different storage engines is that the replication format is the same across all of them, so it’s simple to try it out and see how it performs under your workload. You can just add an additional node in your replica set that’s using RocksDB and monitor the metric you care about on that node.

Percona: What are the unique database requirements at Facebook that keep you awake at night? What would you most like to see feature-wise in MongoDB in the near future (or any database technology)?

Igor: One of the most exciting database projects that we’re working on at Facebook is MyRocks, which I mentioned previously. Currently, we use MySQL with InnoDB to store our Facebook graph, and we are experimenting with replacing that with MyRocks. The main motivation behind the project is 2x better compression rates, but we also see better performance in some areas. If you’re attending Percona Live Europe, I encourage you to attend either Mark Callaghan’s talk on MyRocks or Yoshinori’s 3-hour tutorial to learn more.

Percona: What are you looking forward to the most at Percona Live Europe this year?

Igor: The best part of attending conferences is the people. I am looking forward to seeing old friends and meeting new ones. If you like to talk storage engines, hit me up!

You can read more about Igor’s thoughts on MongoRocks at his twitter feed.

Want to find out more about Igor, Facebook and MongoRocks? Register for Percona Live Europe 2016, and come see his talk Everything you wanted to know about MongoRocks.

Use the code FeaturedTalk and receive €25 off the current registration price!

Percona Live Europe 2016: Amsterdam is the premier event for the diverse and active open source database community. The conferences have a technical focus with an emphasis on the core topics of MySQL, MongoDB, and other open source databases. Percona Live tackles subjects such as analytics, architecture and design, security, operations, scalability and performance. It also provides in-depth discussions for your high-availability, IoT, cloud, big data and other changing business needs. This conference is an opportunity to network with peers and technology professionals by bringing together accomplished DBAs, system architects and developers from around the world to share their knowledge and experience. All of these people help you learn how to tackle your open source database challenges in a whole new way.

This conference has something for everyone!

Percona Live Europe 2016: Amsterdam is October 3-5 at the Mövenpick Hotel Amsterdam City Centre.

Categories: MySQL

Get MySQL Passwords in Plain Text from .mylogin.cnf

MySQL Performance Blog - Wed, 2016-09-07 16:25

This post will tell you how to get MySQL passwords in plain text using the .mylogin.cnf file.

Since MySQL 5.6.6, it has been possible to store MySQL credentials in an encrypted login path file named .mylogin.cnf, using the mysql_config_editor tool. This is certainly better than storing them in plain text.

What if I need to read this password in plain text?

Perhaps because I didn’t save it? I might not need it for long (since I can reset it), but right now it’s important that I recover it.

Categories: MySQL

MyRocks Docker images

MySQL Performance Blog - Tue, 2016-09-06 20:28

In this post, I’ll point you to MyRocks Docker images with binaries, allowing you to install and play with the software.

During the @Scale conference, Facebook announced that MyRocks is mature enough that it has been installed on 5% of Facebook’s MySQL slaves. This has saved 50% of the space on these slaves, which allows them to decrease the number of servers by half. Check out the announcement here:  https://code.facebook.com/posts/190251048047090/myrocks-a-space-and-write-optimized-mysql-database/

Those are pretty impressive numbers, so I decided to take a serious look at MyRocks. The biggest showstopper is usually binary availability, since Facebook only provides the source code: https://github.com/facebook/mysql-5.6.

You can get the image from https://hub.docker.com/r/perconalab/myrocks/.

To start MyRocks:

docker run -d --name myr -P perconalab/myrocks

To access it, use a regular MySQL client:

mysql -h127.0.0.1

From there you should see RocksDB installed:

show engines;
+------------+---------+------------------------+--------------+------+------------+
| Engine     | Support | Comment                | Transactions | XA   | Savepoints |
+------------+---------+------------------------+--------------+------+------------+
| ROCKSDB    | DEFAULT | RocksDB storage engine | YES          | YES  | YES        |
+------------+---------+------------------------+--------------+------+------------+

I hope this makes it easier to start experimenting with MyRocks!

Categories: MySQL

MongoDB at Percona Live Europe

MySQL Performance Blog - Tue, 2016-09-06 15:28

This year, you will find a great deal about MongoDB at Percona Live Europe.

As we continue to work on growing the independent MongoDB ecosystem, this year’s Percona Live Europe in Amsterdam includes many talks about MongoDB. If your company uses MongoDB technologies, is focused exclusively on developing with MongoDB or MongoDB operations, or is just evaluating MongoDB, attending Percona Live Europe will prove a valuable experience.  

As always with Percona Live conferences, the focus is squarely on the technical content — not sales pitches. We encourage our speakers to tell the truth: the good, the bad and the ugly. There is never a “silver bullet” when it comes to technology — only tradeoffs between different solution options.

As someone who has worked in database operations for more than 15 years, I recognize and respect the value of “negative information.” I like knowing what does not work, what you should not do and where trouble lies. Negative information often proves more valuable than knowing how great the features of a specific technology work — especially since the product’s marketing team tends to highlight those very well (and they seldom require independent coverage).

For MongoDB at this year’s Percona Live Europe:
  • We have talks about MongoRocks, a RocksDB powered storage engine for MongoDB — the one you absolutely need to know about if you’re looking to run the most efficient MongoDB deployment at scale!  
  • We will cover MongoDB Backups best practices, as well as several talks about MongoDB monitoring and management  (1, 2, 3) — all of them with MongoDB Community Edition and Percona Server for MongoDB (so they don’t require a MongoDB Enterprise subscription).

There will also be a number of talks about how MongoDB interfaces with other technologies. We show how ToroDB can use the MongoDB protocol while storing data in a relational database (and why that might be a good idea), we contrast and compare MySQL and MongoDB geospatial features, and we examine MongoDB from a MySQL DBA’s point of view.

We also show how to use Apache Spark to unify data from MongoDB, MySQL, and Redis, and discuss general best practices for choosing databases for different application needs.

Finally, if you’re just starting with MongoDB and would like a jump start before attending more detailed MongoDB talks, we’ve got a full day MongoDB 101 tutorial for you.

Join us for the full conference, or register for just one day if that is all your schedule allows. But come to Percona Live Europe in Amsterdam on October 3-5 to get the best and latest MongoDB information.

Categories: MySQL

Gluten

Xaprb, home of innotop - Sun, 2016-09-04 20:55

I’m not saying I’m gluten-sensitive. I just know that when I eat things like pizza, bread, pasta, or the like, I suffer. And gluten-free alternatives are disgusting. But I’ve figured out how to make the breads I love, such as pancakes, waffles, and muffins, without pain. Here’s my recipe.

First, briefly, what I experience and my current thinking:

  • If I eat pizza or similar for supper, I feel like I’ve got a brick in my abdomen for 24-36 hours. I bloat and cramp, and I get a horrible dull ache in my pelvic bowl.
  • I also feel tired, irritable, and unfocused. It took me years to realize this always happened after eating gluteny things. I know other people sometimes have inexplicable fatigue and malaise too.
  • At the advice of an allergist, I tried various elimination diets, such as complete dairy avoidance for several months. (Very hard to do.) It didn’t help.
  • I know lots of people with actual celiac disease and I’m not one. I can eat small amounts of glutenish stuff (say, half a slice of bread, or some crackers) without noticing much.
  • I know lots of people who start right in with “scientifically speaking, the incidence of celiac disease is…” whenever anyone talks about gluten, ‘splaining the hell out of people like me in irrelevant, unhelpful ways without stopping to think.
  • I know a lot of people like me feel dismissed without being heard. I know some of us have been told our condition is psychosomatic (i.e. we can fix our problem by going to a shrink). I know we’re not insane.
  • I know the incidence of actual celiac disease is dramatically higher in the United States than it used to be.
  • I know the wheat we eat in the United States barely resembles what we grew 100 years ago. I know people who came from Europe to the United States and found that they digest bread here very differently than they’re used to.
  • I know there’s a lot of controversy about gluten. I know there are theories about things called FODMAPs and various types of proteins. I know recent research continues to uncover more ways that non-celiac people really do get legitimately sick from wheat in particular, including “amylase-trypsin inhibitors, or ATIs.”

Most gluten-free bread substitutes are inedible and should be taken off the market. Here’s a simple, gluten-free recipe I’ve developed that works for me. (Apologies to people who don’t use Imperial units).

  • 1 cup each of almond flour, rice flour, and quick-cooking rolled oats, the latter ground into flour in a spinning-blade grinder.
  • 1 tsp baking powder
  • ¼ tsp baking soda
  • ¼ tsp salt
  • 1-2 tbsp sugar
  • 3 eggs, separated
  • 1-2 cups whole milk
  • 2 tbsp canola oil
  • ¼ tsp vanilla

Directions:

  • Combine the dry ingredients in a mixing bowl and whisk together.
  • Add the egg yolks and oil to the bowl, then stir in milk gradually until the mixture is thick and doughy; the exact thickness depends on the intended use.
  • In a separate bowl, beat the egg whites until they hold stiff peaks. Using one of the beaters, stir them slowly into the batter by hand, until just blended.

Now you can bake or cook as you please. The keys are to get the thickness right for the desired purpose, and to avoid mixing all the air out of the egg whites when blending them into the batter. Unlike gluten-based breads, which gain their structure from the gluten, the air trapped into the egg whites provides the structure and loft. I typically want the batter to be a little stiffer than you’d expect. For pancakes, for example, stiff enough that I need to gently jiggle the pan or griddle to encourage them to spread out a little more.

If you have leftover batter, it’s better to cook it immediately and save the cooked food for later, rather than saving the batter to cook later. The egg whites will not hold the air well for longer than an hour or so.

I’ve found the recipe is very versatile. For example:

  • For pancakes, see above.
  • You can add ingredients freely, such as blueberries, chocolate chips, pumpkin-and-ginger, etc.
  • You can add spices freely: cinnamon, ginger, pumpkin pie spice blend, etc. If you add it to the dry mix, it’s easier to blend well.
  • For waffles, add about twice as much oil.
  • For banana bread, puree bananas with the milk in a blender before mixing into the batter, and make it about the same thickness. Be aware it’ll rise quite a bit. Bake at 350 degrees Fahrenheit for 45-60 minutes until a toothpick comes out clean.
  • For muffins, see notes for banana bread.
  • For pumpkin bread, you can mix pumpkin pie spices and cooked pumpkin right into the batter.

If you don’t have the types of flour I specified, the mix isn’t critical to get right; various other grains and materials seem to work okay too (coconut flour, amaranth flour) although they change the taste, texture, and appearance in ways I don’t always like.

Categories: MySQL


InnoDB Troubleshooting: Q & A

MySQL Performance Blog - Fri, 2016-09-02 21:12

In this blog, I will provide answers to the Q & A for the InnoDB Troubleshooting webinar.

First, I want to thank everybody for attending the August 11 webinar. The recording and slides for the webinar are available here. Below is the list of your questions that I wasn’t able to answer during the webinar, with responses:

Q: What’s a good speed for buffer pool speed/size for maximum query performance?

A: I am sorry, I don’t quite understand the question. The InnoDB buffer pool is an in-memory buffer. Ideally, your whole active dataset (the rows your application accesses regularly) should fit in the buffer pool. There is a good blog post by Peter Zaitsev describing how to find the best size for the buffer pool.

Q: Any maximum range for these InnoDB options?

A: I am again sorry, I only see questions after the webinar and don’t know which slide you were on when you asked about options. But generally speaking, the maximum ranges are limited by hardware: the size of the InnoDB buffer pool is limited by the amount of physical memory you have, innodb_io_capacity is limited by the number of IOPS your disk can handle, and the number of concurrent threads is limited by the number of CPU cores.

Q: On an AWS r3.4xlarge, 16 CPU, 119GB of RAM, EBS volumes, what innodb_thread_concurrency, innodb_read_io_threads, innodb_write_io_threads would you recommend? And innodb_read_io_capacity?

A: innodb_thread_concurrency = 16, innodb_read_io_threads = 8, innodb_write_io_threads = 8. As for innodb_io_capacity, it depends on the speed of your disks; as far as I know, AWS offers disks with different speeds. You should consult the IOPS your disks can handle when setting innodb_io_capacity, and their maximum IOPS when setting innodb_io_capacity_max.
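As a starting point only, the rule of thumb in this answer (thread concurrency near the core count, IO threads at roughly half of it, with MySQL's default of 4 as a floor) can be captured in a small sketch. The function and its heuristics are illustrative, not official MySQL tuning formulas:

```python
def suggested_innodb_threads(cpu_cores):
    """Rough starting values for the InnoDB thread options discussed
    above, derived only from the core count. Hypothetical heuristic:
    tune from here based on observed IO and lag."""
    io_threads = max(4, cpu_cores // 2)   # 4 is the MySQL default/floor
    return {
        "innodb_thread_concurrency": cpu_cores,
        "innodb_read_io_threads": io_threads,
        "innodb_write_io_threads": io_threads,
    }

# For the 16-core r3.4xlarge in the question:
print(suggested_innodb_threads(16))
```

For a 16-core box this reproduces the values given in the answer (16/8/8); innodb_io_capacity is deliberately excluded because it depends on disk IOPS, not cores.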

Q: About InnoDB structures and parallelism: Are there InnoDB settings that can prevent or reduce latching (causes semaphore locks and shutdown after 600s) that occur trying to add an index object to memory but only DML queries on the primary key are running?

A: Unfortunately, semaphore locks for the CREATE INDEX command are not avoidable. You can only affect other factors that speed up index creation: for example, how fast you write records to the disk or how many concurrent queries you run, and you can kill queries that wait too long for a lock. There is an old feature request asking to handle long semaphore waits gracefully. Consider clicking the “Affects Me” button to bring it to the developers’ attention.

Q: How can we check these threads?

A: I assume you are asking about InnoDB threads? You can find information about running threads in SHOW ENGINE INNODB STATUS:

--------
FILE I/O
--------
I/O thread 0 state: waiting for completed aio requests (insert buffer thread)
I/O thread 1 state: waiting for completed aio requests (log thread)
I/O thread 2 state: waiting for completed aio requests (read thread)
I/O thread 3 state: waiting for completed aio requests (read thread)
I/O thread 4 state: waiting for completed aio requests (read thread)
I/O thread 5 state: waiting for completed aio requests (read thread)
I/O thread 6 state: waiting for completed aio requests (write thread)
I/O thread 7 state: waiting for completed aio requests (write thread)
I/O thread 8 state: waiting for completed aio requests (write thread)
I/O thread 9 state: waiting for completed aio requests (write thread)
Pending normal aio reads: 0 [0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0] ,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 1; buffer pool: 0
529 OS file reads, 252 OS file writes, 251 OS fsyncs
0.74 reads/s, 16384 avg bytes/read, 7.97 writes/s, 7.94 fsyncs/s

And in the Performance Schema THREADS table:

mysql> select thread_id, name, type from performance_schema.threads where name like '%innodb%';
+-----------+----------------------------------------+------------+
| thread_id | name                                   | type       |
+-----------+----------------------------------------+------------+
|         2 | thread/innodb/io_handler_thread        | BACKGROUND |
|         3 | thread/innodb/io_handler_thread        | BACKGROUND |
|         4 | thread/innodb/io_handler_thread        | BACKGROUND |
|         5 | thread/innodb/io_handler_thread        | BACKGROUND |
|         6 | thread/innodb/io_handler_thread        | BACKGROUND |
|         7 | thread/innodb/io_handler_thread        | BACKGROUND |
|         8 | thread/innodb/io_handler_thread        | BACKGROUND |
|         9 | thread/innodb/io_handler_thread        | BACKGROUND |
|        10 | thread/innodb/io_handler_thread        | BACKGROUND |
|        11 | thread/innodb/io_handler_thread        | BACKGROUND |
|        13 | thread/innodb/srv_lock_timeout_thread  | BACKGROUND |
|        14 | thread/innodb/srv_monitor_thread       | BACKGROUND |
|        15 | thread/innodb/srv_error_monitor_thread | BACKGROUND |
|        16 | thread/innodb/srv_master_thread        | BACKGROUND |
|        17 | thread/innodb/srv_purge_thread         | BACKGROUND |
|        18 | thread/innodb/page_cleaner_thread      | BACKGROUND |
|        19 | thread/innodb/lru_manager_thread       | BACKGROUND |
+-----------+----------------------------------------+------------+
17 rows in set (0.00 sec)

Q: Can you briefly explain how an InnoDB thread differs from a connection thread?

A: MySQL creates a connection thread each time a client connects to the server. Generally, the lifetime of this thread is the same as the connection’s (I won’t discuss the thread cache and the thread pool plugin here, to avoid unnecessary complexity). This way, if you have 100 connections you have 100 connection threads. But not all of these threads are doing something: some are actively querying MySQL, while others are sleeping. You can find the number of threads actively doing something by examining the status variable Threads_running. InnoDB does not need as many threads as there are connections to perform its job effectively; it creates fewer threads (ideally, the same number as CPU cores). So, for example, just 16 InnoDB threads can effectively handle 100 or more connection threads.
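To see the distinction on your own server, compare the two status counters (total connection threads versus threads actively working):

```sql
SHOW GLOBAL STATUS LIKE 'Threads_%';
-- Threads_connected: number of open connection threads
-- Threads_running:   threads currently executing a statement
```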

Q: How can we bulk-delete data in Percona XtraDB Cluster without affecting production? We have nearly 6 million rows in a 40GB table.

A: You can use the utility pt-archiver. It deletes rows in chunks. While your database will still have to handle all these writes, the option --max-flow-ctl pauses the purge job if the cluster spends too much time pausing for flow control.
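A sketch of such a pt-archiver run; the host, schema, table and WHERE condition are placeholders for your own environment:

```shell
pt-archiver \
  --source h=localhost,D=mydb,t=big_table \
  --purge \
  --where "created_at < NOW() - INTERVAL 1 YEAR" \
  --limit 1000 --txn-size 1000 \
  --max-flow-ctl 0.5
```

--limit and --txn-size keep each delete chunk small, and --max-flow-ctl pauses the job when the cluster spends more than the given percentage of time paused for flow control.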

Q: Why do we sometimes get “–tc-heuristic-recover” message in error logs? Especially when we recover after a crash? What does this indicate? And should we commit or rollback?

A: This means you used two transactional engines that support XA in the same transaction, and mysqld crashed in the middle of the transaction. Now mysqld cannot determine which strategy to use when recovering transactions: COMMIT or ROLLBACK. Strangely, this option is documented as “not used”. It certainly is used, however: the test case for bug #70860 proves it. I reported documentation bug #82780.
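If you do hit this, the recovery itself is a one-off server start with the option set; whether COMMIT or ROLLBACK is the safer outcome depends on your data, so this is only the mechanical sketch:

```shell
# Start mysqld once with a heuristic decision, then restart it normally:
mysqld --tc-heuristic-recover=COMMIT    # or --tc-heuristic-recover=ROLLBACK
```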

Q: Which parameter controls the InnoDB thread count?

A: The main parameter is innodb_thread_concurrency. For fine tuning, use innodb_read_io_threads, innodb_write_io_threads, innodb_purge_threads and innodb_page_cleaners.

Q: At what frequency will the InnoDB status be dumped in a file by using innodb-status-file?

A: Approximately every 15 seconds, but it can vary slightly depending on the server load.

Q: I faced an issue where a disk got detached from a running server due to a problem on AWS EC2, and MySQL fell back to default mode. After MySQL was stopped and started, we observed that the slave had skipped around 15 minutes of data; we noticed it through a foreign key relationship issue. Can you please explain why the slave skipped data?

A: Amazon Aurora supports two kinds of replication: physical as implemented by Amazon (this is the default for replicas in the same region), and the regular asynchronous replication for cross-region replication. If you use the former, I cannot help you because this is a closed-source Amazon feature. You need to report a bug to Amazon. If you used the latter, this looks buggy too. According to my experience, it should not happen. With regular replication you need to check which transactions were applied (best if you use GTIDs, or at least the log-slave-updates option) and which were not. If you find a gap, report a bug at bugs.mysql.com.

Q: Can you explain more about adaptive hash index?

A: InnoDB stores its indexes on disk as B-Trees. While B-Tree indexes are effective in general, some queries can take advantage of much simpler hash indexes. While your server is in use, InnoDB analyzes the queries it is processing and builds an in-memory hash index inside the buffer pool (using the prefix of the B-Tree key). For many workloads the adaptive hash index works well: “with some workloads, the speedup from hash index lookups greatly outweighs the extra work to monitor index lookups and maintain the hash index structure.” Another issue with the adaptive hash index is that until version 5.7.8 it was protected by a single latch, which could be a contention point under heavy workloads. Since 5.7.8, the adaptive hash index can be partitioned; the number of parts is controlled by the option innodb_adaptive_hash_index_parts.
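To inspect or adjust this on your own server (disabling the adaptive hash index is dynamic, while the partition count is read-only and must be set at startup):

```sql
-- See whether the adaptive hash index is on and how many parts it has
SHOW GLOBAL VARIABLES LIKE 'innodb_adaptive_hash_index%';

-- Disable it at runtime if it turns out to be a contention point
SET GLOBAL innodb_adaptive_hash_index = OFF;
```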

Categories: MySQL

MHA Quick Start Guide

MySQL Performance Blog - Fri, 2016-09-02 19:22

MHA (Master High Availability Manager and tools for MySQL) is one of the most important pieces of our managed services. When properly set up, it can check replication health, move writer and reader virtual IPs, perform failovers, and have its output constantly monitored by Nagios. It is easy to deploy and follows the KISS (Keep It Simple, Stupid) philosophy that I love so much.

This blog post is a quick start guide to try it out and play with it in your own testing environment. I assume that you already know how to install software, deal with SSH keys and setup replication in MySQL. The post just covers MHA configuration.

Testing environment

Taken from /etc/hosts

192.168.1.116 mysql-server1
192.168.1.117 mysql-server2
192.168.1.118 mysql-server3
192.168.1.119 mha-manager

mysql-server1: Our master MySQL server with 5.6
mysql-server2: Slave server
mysql-server3: Slave server
mha-manager: The server that monitors replication and from which we manage MHA. The installation also requires some Perl dependencies to be met.

We just introduced some new concepts, the MHA Node and MHA Manager:

MHA Node

It is installed and runs on each MySQL server. This is the piece of software that is invoked by the manager every time we want to do something, such as a failover or a check.

MHA Manager

As explained before, this is our operations center. The manager monitors the servers and replication, and provides several administrative command-line tools.

Pre-requisites
  • Replication must already be running. MHA manages replication and monitors it, but it is not a tool to deploy it. So MySQL and replication need to be running already.
  • All hosts should be able to connect to each other using public SSH keys.
  • All nodes need to be able to connect to each other’s MySQL servers.
  • All nodes should have the same replication user and password.
  • In the case of multi-master setups, only one writable node is allowed. All others need to be configured with read_only.
  • MySQL version has to be 5.0 or later.
  • Candidates for master failover should have binary logging enabled. The replication user must exist there too.
  • Binary log filtering variables should be the same on all servers (replicate-wild%, binlog-do-db…).
  • Disable automatic relay-log purge and do it regularly from a cron task. You can use an MHA-included script called “purge_relay_logs”.

While that is a large list of requisites, I think that they are pretty standard and logical.
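For the relay-log prerequisite, a typical setup disables automatic purging on each slave (relay_log_purge=0) and runs the bundled script from cron. A sketch of such a cron entry; the user, password and paths are placeholders:

```
# /etc/cron.d entry, running every 4 hours on each slave:
0 */4 * * * root /usr/bin/purge_relay_logs --user=mha --password=secret \
    --disable_relay_log_purge --workdir=/var/tmp >> /var/log/masterha/purge_relay_logs.log 2>&1
```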

MHA installation

As explained before, the MHA Node needs to be installed on all the nodes. You can download it from this Google Drive link.

This post shows you how to install it using the source code, but there are RPM packages available. Deb too, but only for older versions. Use the installation method you prefer. This is how to compile it:

tar -xzf mha4mysql-node-0.57.tar.gz
perl Makefile.PL
make
make install

The node package includes the commands save_binary_logs, filter_mysqlbinlog, purge_relay_logs and apply_diff_relay_logs: mostly tools that the manager calls in order to perform a failover while trying to minimize or avoid any data loss.

On the manager server, you need to install MHA Node plus MHA Manager, because MHA Manager depends on a Perl library that ships with MHA Node. The installation process is just the same.

Configuration

We only need one configuration file on the Manager node. The example below is a good starting point:

# cat /etc/app1.cnf
[server default]
# mysql user and password
user=root
password=supersecure
ssh_user=root
# working directory on the manager
manager_workdir=/var/log/masterha/app1
# working directory on MySQL servers
remote_workdir=/var/log/masterha/app1
[server1]
hostname=mysql-server1
candidate_master=1
[server2]
hostname=mysql-server2
candidate_master=1
[server3]
hostname=mysql-server3
no_master=1

So pretty straightforward. It specifies that there are three servers, two that can be master and one that can’t be promoted to master.

Let’s check if we meet some of the pre-requisites. We are going to test if replication is working, can be monitored, and also if SSH connectivity works.

# masterha_check_ssh --conf=/etc/app1.cnf
[...]
[info] All SSH connection tests passed successfully.

It works. Now let’s check MySQL:

# masterha_check_repl --conf=/etc/app1.cnf
[...]
MySQL Replication Health is OK.

Start the manager and operations

Everything is setup, we meet the pre-requisites. We can start our manager:

# masterha_manager --remove_dead_master_conf --conf=/etc/app1.cnf
[...]
[info] Starting ping health check on mysql-server1(192.168.1.116:3306)..
[info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..

The manager found our master and is now actively monitoring it with a SELECT command. --remove_dead_master_conf tells the manager that if the master goes down, it must edit the config file and remove the master’s configuration from it after a successful failover. This avoids the “there is a dead slave” error when you restart the manager. All servers listed in the config should be part of the replication topology and in good health, or the manager will refuse to work.

Automatic and manual failover

Good, everything is running as expected. What happens if the MySQL master dies!?!

[...]
[warning] Got error on MySQL select ping: 2006 (MySQL server has gone away)
[info] Executing SSH check script: save_binary_logs --command=test --start_pos=4 --binlog_dir=/var/lib/mysql,/var/log/mysql --output_file=/var/log/masterha/app1/save_binary_logs_test --manager_version=0.57 --binlog_prefix=mysql-bin
Creating /var/log/masterha/app1 if not exists.. ok.
Checking output directory is accessible or not.. ok.
Binlog found at /var/log/mysql, up to mysql-bin.000002
[info] HealthCheck: SSH to mha-server1 is reachable.
[...]

First, it tries to connect by SSH to read the binary log and save it. MHA can apply the missing binary log events to the remaining slaves so they are up to date with all the before-failover info. Nice!

These different phases follow:

* Phase 1: Configuration Check Phase..
* Phase 2: Dead Master Shutdown Phase..
* Phase 3: Master Recovery Phase..
* Phase 3.1: Getting Latest Slaves Phase..
* Phase 3.2: Saving Dead Master's Binlog Phase..
* Phase 3.3: Determining New Master Phase..
[info] Finding the latest slave that has all relay logs for recovering other slaves..
[info] All slaves received relay logs to the same position. No need to resync each other.
[info] Starting master failover..
[info] From:
mysql-server1(192.168.1.116:3306) (current master)
 +--mysql-server2(192.168.1.117:3306)
 +--mysql-server3(192.168.1.118:3306)
To:
mysql-server2(192.168.1.117:3306) (new master)
 +--mysql-server3(192.168.1.118:3306)
* Phase 3.3: New Master Diff Log Generation Phase..
* Phase 3.4: Master Log Apply Phase..
* Phase 4: Slaves Recovery Phase..
* Phase 4.1: Starting Parallel Slave Diff Log Generation Phase..
* Phase 4.2: Starting Parallel Slave Log Apply Phase..
* Phase 5: New master cleanup phase..

The phases are pretty self-explanatory. MHA tries to get all the data possible from the master’s binary log and from the most advanced slave’s relay log, to avoid losing any data or promoting a slave that was far behind the master. In other words, it tries to promote the slave with data as current as possible. We see that server2 has been promoted to master, because our configuration specified that server3 can’t be promoted.

After the failover, the manager service stops itself. If we check the config file, the failed server is not there anymore. Now the recovery is up to you. You need to get the old master back in the replication chain, then add it again to the config file and start the manager.

It is also possible to perform a manual failover (if, for example, you need to do some maintenance on the master server). To do that you need to:

  • Stop masterha_manager.
  • Run masterha_master_switch --master_state=alive --conf=/etc/app1.cnf. This line says that you want to switch the master, but the current master is still alive, so there is no need to mark it as dead or remove it from the config file.

And that’s it. Here is part of the output. It shows the tool making the decision on the new topology and asking the user for confirmation:

[info] From:
mysql-server1(192.168.1.116:3306) (current master)
 +--mysql-server2(192.168.1.117:3306)
 +--mysql-server3(192.168.1.118:3306)
To:
mysql-server2(192.168.1.117:3306) (new master)
 +--mysql-server3(192.168.1.118:3306)
Starting master switch from mha-server1(192.168.1.116:3306) to mha-server2(192.168.1.117:3306)? (yes/NO): yes
[...]
[info] Switching master to mha-server2(192.168.1.117:3306) completed successfully.

You can also employ some extra parameters that are really useful in some cases:

--orig_master_is_new_slave: if you want to make the old master a slave of the new one.

--running_updates_limit: if the current master is executing write queries that take longer than this parameter’s value, or if any of the MySQL slaves behind the master lag by more than this value, the master switch aborts. By default it is 1 (1 second). All these checks exist for safety reasons.

--interactive=0: if you want to skip all the confirmation requests and questions masterha_master_switch would otherwise ask.
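Putting those flags together, a fully unattended planned switchover against the example configuration above could look like this sketch:

```shell
masterha_master_switch --master_state=alive --conf=/etc/app1.cnf \
    --orig_master_is_new_slave --running_updates_limit=1 --interactive=0
```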

Check this link in case you use GTID and want to avoid problems with errant transactions during the failover:

https://www.percona.com/blog/2015/12/02/gtid-failover-with-mysqlslavetrx-fix-errant-transactions/

Custom scripts

Since this is a quick guide to start playing around with MHA, I won’t cover advanced topics in detail. But I will mention a few:

    • Custom scripts. MHA can move IPs around, shut down a server and send you a report in case something happens, but it needs a custom script for each of these. MHA comes with some example scripts, but you will likely need to write one that fits your environment. The relevant directives are master_ip_failover_script, shutdown_script and report_script. With them configured, MHA will send you an email or a message to your mobile device in the case of a failover, shut down the server and move IPs between servers. Pretty nice!

Hope you found this quickstart guide useful for your own tests. Remember one of the most important things: don’t overdo automation!

Categories: MySQL

Percona Live Europe featured talk with Manyi Lu — MySQL 8.0: what’s new in Optimizer

MySQL Performance Blog - Thu, 2016-09-01 21:09

Welcome to a new Percona Live Europe featured talk with Percona Live Europe 2016: Amsterdam speakers! In this series of blogs, we’ll highlight some of the speakers that will be at this year’s conference. We’ll also discuss the technologies and outlooks of the speakers themselves. Make sure to read to the end to get a special Percona Live Europe registration bonus!

In this Percona Live Europe featured talk, we’ll meet Manyi Lu, Director Software Development at Oracle. Her talk will be on MySQL 8.0: what’s new in Optimizer. There are substantial improvements in the optimizer in MySQL 5.7 and MySQL 8.0. Most noticeably, users can now combine relational data with NoSQL using the new JSON features. I had a chance to speak with Manyi and learn a bit more about the MySQL 8.0:

Percona: Give me a brief history of yourself: how you got into database development, where you work, what you love about it.

Manyi: Oh, my interest in database development goes way back to university almost twenty years ago. After graduation, I joined local startup Clustra and worked on the development of a highly available distributed database system for the telecom sector. Since then, I have worked on various aspects of the database, kernel, and replication. Lately I am heading the MySQL optimizer and GIS team.

What I love most about my work are the talented and dedicated people I am surrounded by, both within the team and in the MySQL community. We are passionate about building a database used by millions.

Percona: Your talk is called “MySQL 8.0: what’s new in Optimizer.” So, obvious question, what is new in the MySQL 8.0 Optimizer?

Manyi: There are a number of interesting features in 8.0. CTE, or Common Table Expression, has been one of the most demanded SQL features. MySQL 8.0 will support both the WITH and WITH RECURSIVE clauses. A recursive CTE is quite useful for producing reports based on hierarchical data. For DBAs, Invisible Index should make life easier: they can mark an index invisible to the optimizer, check the performance and then decide to either drop it or keep it. On the performance side, we have improved the performance of table scans, range scans and similar queries by batching up records read from the storage engine into the server. We also have significant work happening in the cost model area. In order to produce more optimal query plans, we have started work on adding support for histograms, and for taking into account whether data is already in memory or needs to be read from disk.
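As a sketch of the two SQL-level features mentioned (the table and column names here are hypothetical):

```sql
-- Recursive CTE: walk an org chart stored as an adjacency list
WITH RECURSIVE chain (id, name, manager_id) AS (
    SELECT id, name, manager_id FROM employees WHERE manager_id IS NULL
    UNION ALL
    SELECT e.id, e.name, e.manager_id
    FROM employees e JOIN chain c ON e.manager_id = c.id
)
SELECT * FROM chain;

-- Invisible index: hide an index from the optimizer without dropping it
ALTER TABLE employees ALTER INDEX idx_name INVISIBLE;
-- ...measure the impact, then drop it for good or bring it back:
ALTER TABLE employees ALTER INDEX idx_name VISIBLE;
```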

Besides the optimizer, my team is also putting a major effort into utf8mb4 support. We have added a large set of utf8mb4 collations based on the latest Unicode standard. These collations have better support for emojis and for more languages. Utf8 is the dominant character encoding for the web, and this move will make life easier for the vast majority of MySQL users. We also plan to add support for accent- and case-sensitive collations.

Please keep in mind that 8.0.0 is the first milestone release. There are quite a few features in the pipeline down the road.

Percona: How are some of the bridges between relational and NoSQL environments (like JSON support) of benefit to database deployments?

Manyi: The JSON support that we introduced in 5.7 has been immensely popular because it solves some very basic day-to-day problems. A relational database forces you to have a fixed schema, while the JSON datatype gives you the flexibility to store data without one. In the past, people stored relational data in MySQL and had to install yet another datastore to handle unstructured or semi-structured data that is schema-less in nature. With JSON support, you can store both relational and non-relational data in the same database, which makes database deployment much simpler. And not only that: you can also perform queries across the boundaries of relational and non-relational data.
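A minimal sketch of mixing the two in one table (the table, columns and values are made up for illustration):

```sql
CREATE TABLE products (
    id    INT PRIMARY KEY,
    name  VARCHAR(100),   -- fixed, relational part
    attrs JSON            -- schema-less part
);

INSERT INTO products VALUES
    (1, 'phone', '{"color": "black", "storage_gb": 64}');

-- One query across relational and JSON data:
SELECT name, attrs->>'$.color' AS color
FROM products
WHERE attrs->'$.storage_gb' >= 64;
```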

Clients that communicate with a MySQL Server using the newly introduced X Protocol can use the X DevAPI to develop applications. Developers do not even need to understand SQL if they do not want to. There are a number of connectors that support the X protocol, so you can use X DevApi in your preferred programming language. We have made MySQL more appealing to a larger range of developers.

Percona: What is the latest on the refactoring of the MySQL Optimizer and Parser?

Manyi: The codebase of the optimizer and parser used to be quite messy. The parsing, optimizing and execution stages were intermingled, and the code was hard to maintain. We have had a long-running effort to clean up the codebase. In 5.7, the optimization stage was separated from the execution stage. In 8.0, the focus is on refactoring the prepare stage and a complete parser rewrite.

We have already seen the benefits of the refactoring work. Development time on new features has been reduced. CTE is a good example. Without refactoring done previously, it would have taken much longer to implement CTE. With a cleaner codebase, we also managed to reduce the bug count, which means more development resources can be allocated to new features instead of maintenance.

Percona: Where do you see MySQL heading in order to deal with some of the database trends that keep you awake at night?

Manyi: One industry trend is cloud computing and Database as a Service becoming viable options to in-house databases. In particular, it speeds up technology deployments and reduces initial investments for smaller organizations. MySQL, being the most popular open source database, fits well into the cloud data management trend.

What we can do is make MySQL even better in the cloud setting. E.g., better support for horizontal scaling, fail-over, sharding, cross-shard queries and the like.

Percona: What are you looking forward to most at Percona Live Europe this year?

Manyi: I like to speak and get feedback from MySQL users. Their input has a big impact on our roadmap. I also look forward to learning more about innovations by web-scale players like Facebook, Alibaba and others. I always feel more energized after talking to people who are passionate about MySQL and databases in general.

You can learn more about Manyi and her thoughts on MySQL 8.0 here: http://mysqlserverteam.com/

Want to find out more about Manyi, MySQL and Oracle? Register for Percona Live Europe 2016, and see her talk MySQL 8.0: what’s new in Optimizer.

Use the code FeaturedTalk and receive €25 off the current registration price!

Percona Live Europe 2016: Amsterdam is the premier event for the diverse and active open source database community. The conferences have a technical focus with an emphasis on the core topics of MySQL, MongoDB, and other open source databases. Percona Live tackles subjects such as analytics, architecture and design, security, operations, scalability and performance. It also provides in-depth discussions for your high-availability, IoT, cloud, big data and other changing business needs. This conference is an opportunity to network with peers and technology professionals by bringing together accomplished DBAs, system architects and developers from around the world to share their knowledge and experience. All of these people help you learn how to tackle your open source database challenges in a whole new way.

This conference has something for everyone!

Percona Live Europe 2016: Amsterdam is October 3-5 at the Mövenpick Hotel Amsterdam City Centre.

Categories: MySQL

Webinar Thursday, September 1 – MongoDB Security: A Practical Approach

MySQL Performance Blog - Tue, 2016-08-30 21:23

Please join David Murphy as he presents a webinar Thursday, September 1 at 10 am PDT (UTC-7) on MongoDB Security: A Practical Approach. (Date changed*)


This webinar will discuss the many features and options available in the MongoDB community to help secure your database environment. First, we will cover how these features work and how to fill in known gaps. Next, we will look at the enterprise-type features shared by Percona Server for MongoDB and MongoDB Enterprise. Finally, we will examine some disk and network designs that can ensure your security out of the gate – you don’t want to read about how your MongoDB database leaked hundreds of gigs of data on someone’s security blog!

We will cover the following topics:

  • Using SSL for all things
  • Limiting network access
  • Custom roles
  • The missing true SUPER user
  • Wildcarding databases and collections
  • Adding specific actions to a normal role
  • LDAP
  • Auditing
  • Disk encryption
  • Network designs

Register for the webinar here.

David Murphy, MongoDB Practice Manager David joined Percona in October 2015 as Practice Manager for MongoDB. Prior to that, David was on the ObjectRocket by Rackspace team as the Lead DBA. With the growth involved with any recently acquired startup, David’s role covered a wide range of duties: evangelism, research, run book development, knowledge base design, consulting, technical account management, mentoring and much more. Before the world of MongoDB, David was a MySQL and NoSQL architect at Electronic Arts, working with some of the largest titles in the world (FIFA, SimCity, and Battlefield) with tuning, design, and technology choice responsibilities. David maintains an active interest in database speaking and exploring new technologies.
Categories: MySQL

MySQL Sharding with ProxySQL

MySQL Performance Blog - Tue, 2016-08-30 19:25

This article demonstrates how MySQL sharding with ProxySQL works.

Recently a colleague of mine asked me to provide a simple example on how ProxySQL performs sharding.

In response, I’m writing this short tutorial in the hope it will illustrate ProxySQL’s sharding functionalities, and help people out there better understand how to use it.

ProxySQL is a very powerful platform that allows us to manipulate and manage our connections and queries in a simple but effective way. This article shows you how.

Before starting let’s clarify some basic concepts.

  • ProxySQL organizes its internal set of servers in Host Groups (HG), and each HG can be associated with users and Query Rules (QR)
  • Each QR can be final (apply = 1), or can let ProxySQL continue to parse the remaining QRs
  • A QR can be a rewrite action, be a simple match, have a specific target HG, or be generic
  • QRs are defined using regex

You can see QRs as a sequence of filters and transformations that you can arrange as you like.

These simple basic rules give us enormous flexibility. They allow us to create very simple actions like a single query rewrite, or very complex chains of dozens of concatenated QRs. Documentation can be found here.

The information related to HGs or QRs is easily accessible using the ProxySQL administrator interface, in the tables mysql_servers, mysql_query_rules and stats.stats_mysql_query_rules. The last one allows us to evaluate if and how the rule(s) is used.

With regards to sharding, what can ProxySQL do to help us achieve what we need (in a relatively easy way)? Some people/companies include the sharding logic in the application, use multiple connections to reach the different targets, or have some logic to split the load across several schemas/tables. ProxySQL allows us to simplify the way connectivity and query distribution work, either by reading data in the query itself or by accepting HINTs.

No matter what the requirements, the sharding exercise can be summarized in a few different categories.

  • By splitting the data inside the same container (like having a shard by State where each State is a schema)
  • By physical data location (this can have multiple MySQL servers in the same room, as well as having them geographically distributed)
  • A combination of the two, where I split by State using a dedicated server, and then split again by schema/table on some other criterion (say, by gender)

In the following examples, I show how to use ProxySQL to cover the three different scenarios defined above (and a bit more).

The example below will report text from the Admin ProxySQL interface and the MySQL console. I will mark each one as follows:

  • Mc for MySQL console
  • Pa for ProxySQL Admin

Please note that the MySQL console MUST use the -c flag to pass the comments in the query. This is because the default behavior in the MySQL console is to remove the comments.

I am going to illustrate procedures that you can replicate on your laptop, and when possible I will mention a real implementation. This is because I want you to test the ProxySQL functionalities directly.

For the examples described below, I used ProxySQL v1.2.2, which is going to become the master branch in a few days. You can download it from:

git clone https://github.com/sysown/proxysql.git
git checkout v1.2.2

Then to compile:

cd <path to proxy source code>
make
make install

If you need full instructions on how to install and configure ProxySQL, read here and here.

Finally, you need to have the WORLD test DB loaded. WORLD test DB can be found here.

Shard inside the same MySQL Server using three different schemas split by continent

Obviously, you can have any number of shards and related schemas. What is relevant here is demonstrating how traffic gets redirected to different targets (schemas) that maintain the same structure (tables), discriminating the target based on relevant information present in the data or passed by the application.

OK, let us roll the ball.

[Mc]
+---------------+-------------+
| Continent     | count(Code) |
+---------------+-------------+
| Asia          |          51 | <--
| Europe        |          46 | <--
| North America |          37 |
| Africa        |          58 | <--
| Oceania       |          28 |
| Antarctica    |           5 |
| South America |          14 |
+---------------+-------------+

For this exercise, I will use three hosts in replica.

To summarize, I will need:

  • Three hosts: 192.168.1.[5-6-7]
  • Three schemas: Continent X + world schema
  • One user : user_shardRW
  • Three hostgroups: 10, 20, 30 (for future use)

First, we will create the schemas Asia, Africa, Europe:

[Mc]
Create schema [Asia|Europe|North_America|Africa];
create table Asia.City as select a.* from world.City a join Country on a.CountryCode = Country.code where Continent='Asia';
create table Europe.City as select a.* from world.City a join Country on a.CountryCode = Country.code where Continent='Europe';
create table Africa.City as select a.* from world.City a join Country on a.CountryCode = Country.code where Continent='Africa';
create table North_America.City as select a.* from world.City a join Country on a.CountryCode = Country.code where Continent='North America';
create table Asia.Country as select * from world.Country where Continent='Asia';
create table Europe.Country as select * from world.Country where Continent='Europe';
create table Africa.Country as select * from world.Country where Continent='Africa';
create table North_America.Country as select * from world.Country where Continent='North America';

Now, create the user

grant all on *.* to user_shardRW@'%' identified by 'test';

Now let us start to configure ProxySQL:

[Pa]
insert into mysql_users (username,password,active,default_hostgroup,default_schema)
  values ('user_shardRW','test',1,10,'test_shard1');
LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK;

INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.5',10,3306,100);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.6',20,3306,100);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.7',30,3306,100);
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;

With this we have defined the user, the servers and the host groups.

Let us start to define the logic with the query rules:

[Pa]
delete from mysql_query_rules where rule_id > 30;

INSERT INTO mysql_query_rules (rule_id,active,username,match_pattern,replace_pattern,apply)
  VALUES (31,1,'user_shardRW',
    "^SELECT\s*(.*)\s*from\s*world.(\S*)\s(.*).*Continent='(\S*)'\s*(\s*.*)$",
    "SELECT \1 from \4.\2 WHERE 1=1 \5",
    1);

LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;

I am now going to query the master (or a single node), but I am expecting ProxySQL to redirect the query to the right shard, catching the value of the continent:

[Mc]
SELECT name,population from world.City WHERE Continent='Europe' and CountryCode='ITA' order by population desc limit 1;

+------+------------+
| name | population |
+------+------------+
| Roma |    2643581 |
+------+------------+

You can say: “Hey! You are querying the schema World, of course you get back the correct data.”

This is not what really happened. ProxySQL did not query the schema World, but the schema Europe.

Let’s look at the details:

[Pa]
select * from stats_mysql_query_digest;

Original    : SELECT name,population from world.City WHERE Continent='Europe' and CountryCode='ITA' order by population desc limit 1;
Transformed : SELECT name,population from Europe.City WHERE ?=? and CountryCode=? order by population desc limit ?

Let me explain what happened.

Rule 31 captures the list of fields we pass in the query, the table name, the value of Continent in the WHERE clause, and any remaining conditions. It then uses the regular expression's capture groups to reassemble the query against the schema matching that continent.

Does this work for any table in the sharded schemas? Of course it does.

A query like: SELECT name,population from world.Country WHERE Continent='Asia' ;
Will be transformed into: SELECT name,population from Asia.Country WHERE ?=?

[Mc]
+----------------------+------------+
| name                 | population |
+----------------------+------------+
| Afghanistan          |   22720000 |
| United Arab Emirates |    2441000 |
| Armenia              |    3520000 |
<snip ...>
| Vietnam              |   79832000 |
| Yemen                |   18112000 |
+----------------------+------------+

Another possible approach to instruct ProxySQL to shard is to pass a hint inside a comment. Let's see how.

First, let me disable the rule I just inserted. This is not strictly needed, but we'll do it so you can see how.
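In ProxySQL, query rules live in the mysql_query_rules table of the admin interface, so a minimal sketch of disabling the rule (assuming rule_id 31 from above) could look like this:

```sql
[Pa]
-- Deactivate the schema-rewrite rule without deleting it
UPDATE mysql_query_rules SET active=0 WHERE rule_id=31;

-- Changes only take effect once loaded to runtime
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```

Setting active=0 keeps the rule definition on disk, so it can be re-enabled later without re-entering the pattern.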

Categories: MySQL

Percona Live Europe Discounted Pricing and Community Dinner!

MySQL Performance Blog - Tue, 2016-08-30 17:47

Get your Percona Live Europe discounted tickets now, and sign up for the community dinner.

The countdown is on for the annual Percona Live Europe Open Source Database Conference! This year the conference will be taking place in the great city of Amsterdam October 3-5. This three-day conference will focus on the latest trends, news and best practices in the MySQL, MongoDB, PostgreSQL and other open source databases, while tackling subjects such as analytics, architecture and design, security, operations, scalability and performance. Percona Live provides in-depth discussions for your high-availability, IoT, cloud, big data and other changing business needs.

With breakout sessions, tutorial sessions and keynote speakers, there will certainly be no lack of content.
Advanced Rate Registration ENDS September 5, so make sure to register now to secure the best price possible.

As it is a Percona Live Europe conference, there will certainly be no lack of FUN either!!!!

As tradition holds, there will be a Community Dinner. Tuesday night, October 4, Percona Live Diamond Sponsor Booking.com hosts the Community Dinner at their very own headquarters located in historic Rembrandt Square in the heart of the city. After breakout sessions conclude, attendees are picked up right outside of the venue and taken to booking.com’s headquarters by canal boats! This gives all attendees the opportunity to play “tourist” while viewing the beauty of Amsterdam from the water. Attendees are dropped off right next to Booking.com’s office (return trip isn’t included)! The Lightning Talks for this year’s conference will be featured at the dinner.

Come and show your support for the community while enjoying dinner and drinks! The first 50 people registered for the dinner get in the doors for €10 (after that the price goes to €15). Space is limited so make sure to sign up ASAP!

So don’t forget, register for the conference and sign up for the community dinner before space is gone! See you in Amsterdam!


Percona Live Europe featured talk with Alexander Krasheninnikov — Processing 11 billion events a day with Spark in Badoo

MySQL Performance Blog - Mon, 2016-08-29 18:51

Welcome to a new Percona Live Europe featured talk with Percona Live Europe 2016: Amsterdam speakers! In this series of blogs, we’ll highlight some of the speakers that will be at this year’s conference. We’ll also discuss the technologies and outlooks of the speakers themselves. Make sure to read to the end to get a special Percona Live Europe registration bonus!

In this Percona Live Europe featured talk, we’ll meet Alexander Krasheninnikov, Head of Data Team at Badoo. His talk will be on Processing 11 billion events a day with Spark in Badoo. Badoo is one of the world’s largest and fastest growing social networks for meeting new people. I had a chance to speak with Alexander and learn a bit more about the database environment at Badoo:

Percona: Give me a brief history of yourself: how you got into database development, where you work, what you love about it?

Alexander: Currently, I work at Badoo as Head of Data Team. Our team is responsible for providing internal API’s for statistics data collecting and processing.

I started as a developer at Badoo, but the project I am going to cover in my talk led to the creation of a separate department.

Percona: Your talk is called “Processing 11 billion events a day with Spark in Badoo.” What were the issues with your environment that led you to Spark? How did Spark solve these needs?

Alexander: When we designed the Unified Data Stream system at Badoo, we extracted several requirements: scalability, fault tolerance and reliability. Together, these requirements moved us towards using Hadoop as our deep data storage and data processing framework. Our initial implementation was built on top of Scribe + WebHDFS + Hive, but we realized that its processing speed and data delivery lag were unacceptable (we need near-realtime data processing). Someone on our BI team mentioned Spark as being significantly faster than Hive in some cases, especially ones similar to ours. When we investigated Spark’s API, we found the Streaming submodule, which was ideal for our needs. Additionally, the framework allowed us to use third-party libraries and write our own code. We ended up creating an aggregation framework that follows the “divide and conquer” principle. Without Spark, we would definitely have gone down the road of re-inventing a lot of what it provides.

Percona: Why is tracking the event stream important for your business model? How are you using the data Spark is providing you to reach business goals?

Alexander: The event stream always represents some important business or technical metrics: votes, messages, likes and so on. All of this, brought together, forms the “health” of our product. The primary goal of our Spark-based system is to process a heterogeneous event stream in a uniform way, and draw charts automatically. We achieved this goal, and now we have hundreds of charts and dozens of developers, analysts and product team members using them. The system has also evolved: we now perform automatic anomaly detection over the event stream and report strange data behavior to all the interested people.

Percona: What is changing in data use in your businesses model that keeps you awake at night? What tools or features are you looking for to address these issues?

Alexander: As I mentioned before, we have an anomaly detection process for our metrics. If some of our metrics fall outside the expected bounds, it is treated as an anomaly and notifications are sent. We also have self-monitoring functionality for the whole system: a small stream of heartbeat events is generated and processed by two different systems. If those show a significant difference, that definitely keeps me awake at night!


Percona Live Europe featured talk with Krzysztof Książek — MySQL Load Balancers – MaxScale, ProxySQL, HAProxy, MySQL Router & nginx

MySQL Performance Blog - Thu, 2016-08-25 16:36

Welcome to the first Percona Live Europe featured talk with Percona Live Europe 2016: Amsterdam speakers! In this series of blogs, we’ll highlight some of the speakers that will be at this year’s conference. We’ll also discuss the technologies and outlooks of the speakers themselves. Make sure to read to the end to get a special Percona Live Europe registration bonus!

In this Percona Live Europe featured talk, we’ll meet Krzysztof Książek, Senior Support Engineer at Severalnines AB. His talk will be on MySQL Load Balancers – MaxScale, ProxySQL, HAProxy, MySQL Router & nginx: a close up look. Load balancing MySQL connections and queries using HAProxy has been popular in the past years. However, the recent arrival of MaxScale, MySQL Router, ProxySQL and now also Nginx as a reverse proxy have changed the game. Which use cases are best for which solution, and how well do they integrate into your environment?

I had a chance to speak with Krzysztof and learn a bit more about these questions:

Percona: Give me a brief history of yourself: how you got into database development, where you work, what you love about it?

Krzysztof: I was working as a system administrator in a hosting company in Poland. They had a need for a dedicated MySQL DBA. So I volunteered for the job. Later, I decided it was time to move on and joined Laine Campbell’s PalominoDB. I had a great time there, working with large MySQL deployments. At the beginning of 2015, I joined Severalnines as Senior Support Engineer. It was a no-brainer for me as I was always interested in building and managing scalable clusters based on MySQL — this is exactly what Severalnines helps its customers with.

Percona: Your talk is called “MySQL Load Balancers: MaxScale, ProxySQL, HAProxy, MySQL Router & nginx – a close up look.” Why are more load balancing solutions becoming available? What problems does load balancing solve for database environments?

Krzysztof: Load balancers are a must in highly scalable environments that are usually distributed across multiple servers or data centers. Large MySQL setups can quickly become very complex: many clusters, each containing numerous nodes and using different, interconnected technologies such as MySQL replication and Galera Cluster. Load balancers not only help maintain availability of the database tier by routing traffic to available nodes, they also hide the complexity of the database tier from the application.

Percona: You call out three general groups of load balancers: application connectors, TCP reverse proxies, and SQL-aware load balancers. What workloads do these three groups generally address best?

Krzysztof: I wouldn’t say “workloads” — I’d say more like “use cases.” Each of those groups will handle all types of workloads but they do it differently. TCP reverse proxies like HAProxy or nginx will just route packets: fast and robust. They won’t understand the state of MySQL backends, though. For that you need to use external scripts like Percona’s clustercheck or Severalnines’ clustercheck-iptables.

On the other hand, should you want to build your application to be more database-aware, you can use mysqlnd and manage complex HA topologies from your application. Finally, SQL-aware load balancers like ProxySQL or MaxScale can be used to move complexity away from the application and, for example, perform read-write splitting in the proxy layer. They detect the MySQL state and can make necessary changes in routing, such as moving writes to a newly promoted master. They can also empower the DBA by allowing them to, for example, rewrite queries as they pass through the proxy.
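As a rough sketch of the read-write split Krzysztof describes, ProxySQL rules along these lines are commonly used (the hostgroup numbers and rule IDs here are illustrative assumptions, not taken from the interview):

```sql
-- Assume hostgroup 10 = writers (the user's default_hostgroup), hostgroup 20 = readers.
-- SELECT ... FOR UPDATE must stay on the writer; other SELECTs go to the readers.
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (50, 1, '^SELECT.*FOR UPDATE$', 10, 1),
       (51, 1, '^SELECT', 20, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```

Rule order matters: the FOR UPDATE rule must have the lower rule_id so it is evaluated before the generic SELECT rule.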

Percona: Where do you see load balancing technologies heading in order to deal with some of the database trends that keep you awake at night?

Krzysztof: Personally, I love to see the “empowerment” of DBA’s. For example, ProxySQL not only routes packets and helps to maintain high availability (although this is still the main role of a proxy), it is also a flexible tool that can help a DBA tackle many day-to-day problems. An offending query? You can cache it in the proxy or you can rewrite it on the fly. Do you need to test your system before an upgrade, using real-world queries? You can configure ProxySQL to mirror the production traffic on a test system. You can use it to build a sharded environment. These things, in the past, typically weren’t possible for a DBA to do — the application had to be modified and new code had to be deployed. Activities like those take time, time that is very precious when the ops staff is dealing with databases on fire from a high load. Now I can do all that just through reconfiguring a proxy. Isn’t it great?
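For instance, caching an offending query in the proxy, as mentioned above, can be sketched like this (the digest pattern, rule ID and TTL are illustrative; in ProxySQL, cache_ttl is in milliseconds):

```sql
-- Cache result sets matching this digest in ProxySQL for 5 seconds,
-- shielding the backend from a hot, repetitive query
INSERT INTO mysql_query_rules (rule_id, active, match_digest, cache_ttl, apply)
VALUES (60, 1, '^SELECT name,population FROM world.City WHERE', 5000, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
```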

Percona: What are you looking forward to the most at Percona Live Europe this year?

Krzysztof: The Percona Live Europe agenda looks great and, as always, it’s a hard choice to decide which talks to attend. I’d love to learn more about the upcoming MySQL 8.0: there are quite a few talks covering both performance improvements and different features of 8.0. There’s also a new Galera version in the works with great features like non-blocking DDLs, so it would be great to see what’s happening there. We’re also excited to run the “Become a MySQL DBA” tutorial again (our blog series on the same topic has been very popular).

Additionally, I’ve been working within the MySQL community for a while and I have many friends who, unfortunately, I don’t see very often. Percona Live Europe is an event that brings us together and where we can catch up. I’m definitely looking forward to this.

You can read more of Krzysztof’s thoughts on load balancers at the Severalnines blog.

Want to find out more about Krzysztof, load balancers and Severalnines? Register for Percona Live Europe 2016, and come see his talk MySQL Load Balancers – MaxScale, ProxySQL, HAProxy, MySQL Router & nginx: a close up look.

Use the code FeaturedTalk and receive €25 off the current registration price!

Percona Live Europe 2016: Amsterdam is the premier event for the diverse and active open source database community. The conferences have a technical focus with an emphasis on the core topics of MySQL, MongoDB, and other open source databases. Percona Live tackles subjects such as analytics, architecture and design, security, operations, scalability and performance. It also provides in-depth discussions for your high-availability, IoT, cloud, big data and other changing business needs. This conference is an opportunity to network with peers and technology professionals by bringing together accomplished DBAs, system architects and developers from around the world to share their knowledge and experience. All of these people help you learn how to tackle your open source database challenges in a whole new way.

This conference has something for everyone!

Percona Live Europe 2016: Amsterdam is October 3-5 at the Mövenpick Hotel Amsterdam City Centre.


PostgreSQL Day at Percona Live Amsterdam 2016

MySQL Performance Blog - Thu, 2016-08-25 13:21

Introducing PostgreSQL Day at Percona Live Europe, Amsterdam 2016.

As modern open source database deployments change, often including more than just a single open source database, Percona Live has also changed. We changed our model from being a purely MySQL-focused conference (with variants) to include a significant amount of MongoDB content. We’ve also expanded our overview of the open source database landscape and included introductory talks on many other technologies. These included practices we commonly see used in the world, and new up and coming solutions we think show promise.

In getting Percona Live Europe 2016 ready, something unexpected happened: we noticed the PostgreSQL community come together and submit many interesting talks about this great open source database technology. This effort on their part pushed us to go further than we initially planned this year, and we’ve put together a full day of PostgreSQL talks. At Percona Live Europe this year, we will be running our first ever PostgreSQL Day on October 4th!

Some folks have been questioning this decision: do we really need so much PostgreSQL content? Isn’t there some tension between the MySQL and PostgreSQL communities?

While it might be true (and I think it is) that some contention exists between these groups, I don’t think isolation and indifference are the answers to improving cooperation. They certainly aren’t the best plan for the open source database community at large, because there is too much we can learn from each other — especially when it comes to promoting open source databases as a real alternative to commercial ones.

Every open source community has its own set of “zealots” (or maybe just “strict adherents”). But our dedication to one particular technology shouldn’t blind us to the value of others. The MySQL and PostgreSQL communities have both proven themselves through substantial large-scale deployments. More and more engineers are joining those communities, looking to find better solutions for the problems they face and to learn from other technologies.

Through the years I have held very productive discussions with people like Josh Berkus, Bruce Momjian, Oleg Bartunov,  Ilya Kosmodemiansky and Robert Treat (to name just a few) about how things are done in MySQL versus PostgreSQL — and what could be done better in both.

At PGDay this year, I was glad to see Alexey Kopytov speaking about what MySQL does better; it got some very constructive conversations going. I was also pleased that my keynote on Migration to the Open Source Databases at the same conference was well attended and also sparked some good conversations.

I want this trend to continue to grow. This is why I think running a PostgreSQL Day as part of Percona Live Europe, Amsterdam is an excellent development. It provides an outstanding opportunity for people interested in PostgreSQL to further their knowledge through exposure to  MySQL, MongoDB and other open source technologies. This holds true for folks attending the conference mainly as MySQL and MongoDB users: they get exposed to the state of PostgreSQL in 2016.

Even more, I hope that this new track will spark productive conversations in the hallways, at lunches and other events between the speakers themselves. It’s really the best way to see what we can learn from each other. In the end, it benefits all technologies.

I believe the whole conference is worth attending, but for people who only wish to attend our new  PostgreSQL Day on October 4th, you can register for a single day conference pass using the PostgreSQLRocks discount code (€200, plus VAT).  

I’m looking forward to meeting and speaking with members of the PostgreSQL community at Percona Live!


How to stop offending queries with ProxySQL

MySQL Performance Blog - Wed, 2016-08-24 00:46

This blog discusses how to find and address badly written queries using ProxySQL.

All of us are very good at writing good queries. We know this to always be true!


Percona Server 5.7.14-7 is now available

MySQL Performance Blog - Tue, 2016-08-23 17:57

Percona announces the GA release of Percona Server 5.7.14-7 on August 23, 2016. Download the latest version from the Percona web site or the Percona Software Repositories.

Based on MySQL 5.7.14, and including all the bug fixes in it, Percona Server 5.7.14-7 is the current GA release in the Percona Server 5.7 series. Percona provides completely open-source and free software. Find release details in the 5.7.14-7 milestone at Launchpad.

New Features and Bugs Fixed:
  • Fixed potential cardinality 0 issue for TokuDB tables if ANALYZE TABLE finds only deleted rows and no actual logical rows before it times out. Bug fixed #1607300 (#1006, #732).
  • TokuDB database.table.index names longer than 256 characters could cause a server crash if background analyze table status was checked while running. Bug fixed #1005.
  • PAM Authentication Plugin would abort authentication while checking UNIX user group membership if there were more than a thousand members. Bug fixed #1608902.
  • If DROP DATABASE failed to delete some of the tables in the database, the partially-executed command was logged in the binlog as DROP TABLE t1, t2, ... for the tables that were dropped successfully. A slave might fail to replicate such a DROP TABLE statement if foreign key relationships to any of the dropped tables exist and the slave has a different schema from the master. This was fixed by checking, on the master, whether any tables in the database to be dropped participate in a foreign key relationship, and failing the DROP DATABASE statement immediately if so. Bug fixed #1525407 (upstream #79610).
  • PAM Authentication Plugin didn’t support spaces in the UNIX user group names. Bug fixed #1544443.
  • For security reasons, ld_preload libraries can now only be loaded from the system directories (/usr/lib64, /usr/lib) and the MySQL installation base directory.
  • In the client library, any EINTR received during network I/O was not handled correctly. Bug fixed #1591202 (upstream #82019).
  • SHOW GLOBAL STATUS was locking more than the upstream implementation which made it less suitable to be called with high frequency. Bug fixed #1592290.
  • The included .gitignore in the percona-server source distribution had a line *.spec, which means someone trying to check in a copy of the percona-server source would be missing the spec file required to build the RPMs. Bug fixed #1600051.
  • Audit Log Plugin did not transcode queries. Bug fixed #1602986.
  • If the changed page bitmap redo log tracking thread stopped for any reason, shutdown would wait a long time for the log tracker thread to quit, which it never did. Bug fixed #1606821.
  • Changed page tracking was initialized too late by InnoDB. Bug fixed #1612574.
  • Fixed stack buffer overflow if --ssl-cipher had more than 4000 characters. Bug fixed #1596845 (upstream #82026).
  • Audit Log Plugin events did not report the default database. Bug fixed #1435099.
  • Canceling the TokuDB Background ANALYZE TABLE job twice or while it was in the queue could lead to server assertion. Bug fixed #1004.
  • Fixed various spelling errors in comments and function names. Bug fixed #728 (Otto Kekäläinen).
  • Implemented set of fixes to make PerconaFT build and run on the AArch64 (64-bit ARMv8) architecture. Bug fixed #726 (Alexey Kopytov).
Other bugs fixed:

#1542874 (upstream #80296), #1610242, #1604462 (upstream #82283), #1604774 (upstream #82307), #1606782, #1607359, #1607606, #1607607, #1607671, #1609422, #1610858, #1612551, #1613663, #1613986, #1455430, #1455432, #1581195, #998, #1003, and #730.

The release notes for Percona Server 5.7.14-7 are available in the online documentation. Please report any bugs on the Launchpad bug tracker.
