MySQL

Charlottesville Coffee Roasters

Xaprb, home of innotop - Sun, 2017-02-26 15:17

One of the things I appreciate about living in beautiful Charlottesville, Virginia is the abundance of artisanal products that are high-quality and produced locally. There’s a vibrant network of people making food, drink, and physical goods: wineries, chocolate, art, blacksmithing, and much more. Many of our local producers are recognized worldwide. As a newly minted coffee lover, I also appreciate the variety and quality of coffee roasters in town and nearby. Of course, we have to import the beans, but there’s much to the coffee story after the beans are harvested. Here are some of my favorite local coffee resources.

Mudhouse Coffee

Mudhouse is my favorite coffee in the entire world. It’s a small local chain that started as a coffeehouse and expanded into roasting because they were unhappy with the lack of control over that part of their supply chain. Their motto is “beautiful coffees thoughtfully sourced and carefully roasted for you,” and they now offer their coffees via online subscription.

I love Mudhouse coffee for all the things they say about themselves, as well as the fact that I prefer light- to medium-roasted coffee. They do light and medium roasts better than anyone I know. Their espresso, Are You Gold, is the gold standard for me. They’ve been recognized many times nationally and internationally as well, most recently winning perhaps the most prestigious award in the industry: Roast Magazine’s Roaster Of The Year.

Trager Brothers

If you prefer dark roasts, perhaps Trager Brothers will be your cup. They swing the pendulum all the way to the other side, even boasting a roast called Dark As Dark, which is exactly what it sounds like.

Red Rooster

Red Rooster Coffee roasts in Floyd, Virginia. My favorite is the Old Crow Cuppa Joe, which makes espresso that my taste buds say is great.

Black Hand

Black Hand roasts from RVA, the local abbreviation for Richmond, about an hour away. Their coffee is easy to find in many grocery stores here.

Others

There really are too many local and regional roasters to list in detail. Here is a summary of some of the selection available in grocery stores and coffee shops. Many of these are personal favorites for me.

In addition to what’s physically close by, we get a lot of coffee from places that have “trade routes” to Charlottesville, including the following:

That last one is in Maine, but the people behind it have local ties and I’ve gotten their coffee several times locally.

For those who are happy to pay a bit less and get coffee they can still feel better about, there’s always the Allegro bulk coffee in Whole Foods, which I’ve found is extremely fresh—one of the most important attributes of good coffee!

Finally, to wrap this up with a bit of fun, here’s some coffee entertainment. Artist Tammie Wales spends hours painting steampunk art with coffee.

And here’s a quick video showing how to make a latte art tulip. That looks easy; I’m sure I can do that too.



Categories: MySQL

Simple Guidelines For Maintainable Spreadsheets

Xaprb, home of innotop - Sat, 2017-02-25 19:43

The spreadsheet is one of the most powerful inventions in the history of computing. But with that power comes responsibility: just as with a programming language, the spreadsheet itself can become difficult to understand and maintain.

The fundamental issues that cause spreadsheets to be difficult to maintain over time tend to be similar to those in computer programs: intent is obscured and replaced with implementation; complexity grows nonlinearly; side effects cause problems you don’t foresee in advance; debugging is twice as hard as creating, so the cleverest spreadsheet you can create is two times more complex than you can debug. Here are some practices I follow to help make spreadsheets simpler and easier to use and maintain.

Create Visual Clarity

Visual clarity and simplicity are important for me to understand a spreadsheet. Complexity makes my brain and eyes tired and confused more quickly.

Disable gridlines. Gridlines add a lot of clutter, and once you disable them you’ll discover they are not necessary. The content itself is sufficient to establish rows and columns. Judicious use of borders, such as underlining the header row of a table, is easy to add later.

Begin the spreadsheet in row 2 and column B, leaving row 1 and column A blank. This gives visual margins that add clarity. It also makes it easier to add columns and rows without causing formatting issues or duplicating formulas in unexpected ways. For example, it makes it easier to add a column without including it in a named range or table.

Use “freeze panes” or drag dividers to create fixed rows that don’t scroll, so row and column headers are visible as you scroll through large amounts of content. This is especially important if you are working on a large screen and someone with a small screen might view the spreadsheet later.

Obvious Is Better Than Clever

In some cases, hiding rows or columns can be powerful by removing distraction, enabling the user to focus on an outcome or result instead of the process used to achieve it. However, hidden rows and columns usually are a “bad smell,” because their invisibility creates magical functionality that’s difficult to discover or understand. They’re hard to maintain, too, because you’ll often need to examine them to understand the spreadsheet, and you’ll have to unhide them and hide them again to do that. Therefore, generally avoid hidden columns or rows.

I wrote previously about hacks to ignore missing data, which has examples of techniques that need to be weighed carefully for benefits versus potential drawbacks.

Respect And Work With Data Flow

Cycles in spreadsheets are perfectly possible, but just like functions with many exit points or side effects in a program, they’re a mess. But there are less obvious ways to create messy tangles in your spreadsheets, too.

In general, it’s ideal if data flow through equations is

  1. Top to bottom
  2. Left to right
  3. Avoided between tabs/worksheets

This means that ideally, formulas only refer to cells above and to the left, so as you read naturally you’ll see data that’s used in subsequent calculations, not the result of things you haven’t yet read (assuming your native language is left-to-right).

This isn’t always possible, and you’ll immediately find reasons to disregard this suggestion: percentages that refer to summaries at the bottom as well as rows above, for example. But if you have a choice, following the flow will create simpler spreadsheets. If rows use later rows as part of their formula, you have to scroll to the bottom and then back up again to trace what goes into the cell. If you can rearrange the table so the rows appear in the order they’re computed, it might flow more clearly.

When you manipulate large amounts of data to produce intermediate results and then use them further, it’s often a good idea to place those in their own worksheets (tabs). If you do this, to the extent you can make source data flow in only one direction, it’ll also improve clarity. I usually do this by placing my source tabs to the right, so data flows right-to-left. You could do it either way, but the reason I do it “backwards” is so that as people read the tabs from left to right, they’re beginning with the summaries or outcomes, and can explore the source data if they want. This follows the McKinsey Pyramid Principle, which advocates beginning with the outcomes and saving the details for later.

Regardless, I try very hard to avoid making tabs depend on each other, i.e. some columns in Sheet1 refer to columns in Sheet2, which uses columns in Sheet1 again. That’s just a mess.

Avoid Mixing Parameters Throughout

Spreadsheets often use parameters (inputs) to help model different scenarios: cost per unit sold, sales rep quota, and so on. For maintainability, it’s important to separate out these parameters, ideally into a single place.

First, a rule I regard as a pretty strict one: do not hardcode literals into formulas. If you think sales rep quota is $750k, don’t write that into a formula! Put it in a separate cell and refer to it. Hidden “magic numbers” are the source of a lot of problems.
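To make that concrete, here is a tiny hypothetical example (the cell and tab names are mine): the first formula buries the quota inside the calculation, while the second points at a single labeled parameter cell.

=D4*750000          (magic number hidden inside the formula: avoid)
=D4*Params!$B$2     (the quota lives in one labeled cell on a Params tab)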

My other preference is to create a table of parameters, ideally in its own tab/sheet. This way the parameters are isolated, so there’s a single place to find all of them, and you don’t need to hunt around. You can also place remarks next to them, explaining the details (is the sales quota per-quarter or per-year? is it new business only, or does it include renewals?). If not a separate tab, then use a separate portion of the spreadsheet; for simple spreadsheets it is a lot less work to have everything on one tab.

Separate Charts From Data

I have found that keeping charts and data separate is helpful in many cases. Placing charts on their own tabs avoids problems such as chart sizes changing when new columns are created. And on most monitors, charts and data don’t both fit onto the screen unless they overlap, which causes problems.

A worksheet that has charts way at the bottom, where they’ll never be discovered unless you zoom way out or scroll way down, is very frustrating. In general, too, a worksheet or tab that serves multiple purposes is an invitation to ballooning complexity. The presence of several kinds of data in a single tab seems to imply that just one more table or chart wouldn’t hurt, and it takes on a life of its own because it’s not clear where it should stop.

Conclusions

All of the above suggestions are just general guidelines, not hard-and-fast rules. I break them all the time myself. But having worked on some fairly complex spreadsheets, which are shared by multiple people and edited over long periods of time, I’ve found that spreadsheets can become a lot more work in the long run than I thought they would initially. I’ve slowly adopted these practices to help with that problem. I’m sure there are spreadsheet experts with more extensive guidelines, but this is what’s worked for me so far.


Categories: MySQL

Installing Percona Monitoring and Management (PMM) for the First Time

MySQL Performance Blog - Fri, 2017-02-24 21:32

This post is another in the series on Percona’s MongoDB 3.4 bundle release. This post is meant to walk a prospective user through the benefits of Percona Monitoring and Management (PMM), how it’s architected and the simple install process. By the end of this post, you should have a good idea of what PMM is, where it can add value in your environment and how you can get PMM going quickly.

Percona Monitoring and Management (PMM) is Percona’s open-source tool for monitoring and alerting on database performance and the components that contribute to it. PMM monitors MySQL (Percona Server and MySQL CE), Amazon RDS/Aurora, MongoDB (Percona Server and MongoDB CE), Percona XtraDB/Galera Cluster, ProxySQL, and Linux.

What is it?

Percona Monitoring and Management is an amalgamation of exciting, best-in-class, open-source tools and Percona “engineering wizardry,” designed to make it easier to monitor and manage your environment. The real value to our users is the amount of time we’ve spent integrating the tools, plus the pre-built dashboards we’ve constructed that leverage our ten years of performance optimization experience. What you get is a tool that is ready to go out of the box, and installs in minutes. If you’re still not convinced: like ALL Percona software, it’s completely FREE!

Sound good? I can hear you nodding your head. Let’s take a quick look at the architecture.

What’s it made of?

PMM, at a high-level, is made up of two basic components: the client and the server. The PMM Client is installed on the database servers themselves and is used to collect metrics. The client contains technology specific exporters (which collect and export data), and an “admin interface” (which makes the management of the PMM platform very simple). The PMM server is a “pre-integrated unit” (Docker, VM or AWS AMI) that contains four components that gather the metrics from the exporters on the PMM client(s). The PMM server contains Consul, Grafana, Prometheus and a Query Analytics Engine that Percona has developed. Here is a graphic from the architecture section of our documentation. In order to keep this post to a manageable length, please refer to that page if you’d like a more “in-depth” explanation.

How do I use it?

PMM is very easy to access once it has been installed (more on the install process below). Simply open the web browser of your choice and type http://<ip_address_of_PMM_server> to reach the PMM landing page, where you can access all of PMM’s tools. If you’d like to get a feel for the user experience, we’ve set up a great demo site so you can easily test it out.

Where should I use it?

There’s a good chance that you already have a monitoring/alerting platform for your production workloads. If not, you should set one up immediately and start analyzing trends in your environment. If you’re confident in your production monitoring solution, there is still a use for PMM in an often overlooked area: development and testing.

When speaking with users, we often hear that their development and test environments run their most demanding workloads. This is often due to stress testing and benchmarking. The goal of these workloads is usually to break something. This allows you to set expectations for normal, and thus abnormal, behavior in your production environment. Once you have a good idea of what’s “normal” and the critical factors involved, you can alert around those parameters to identify “abnormal” patterns before they cause user issues in production. The reason that monitoring is critical in your dev/test environment(s) is that you want to easily spot inflection points in your workload, which signal impending disaster. Dashboards are the easiest way for humans to consume and analyze this data.

Are you sold? Let’s get to the easiest part: installation.

How do you install it?

PMM is very easy to install and configure for two main reasons. The first is that the components (mentioned above) take some time to install, so we spent the time to integrate everything and ship it as a unit: one server install and a client install per host. The second is that we’re targeting customers looking to monitor MySQL and MongoDB installations for high-availability and performance. The fact that it’s a targeted solution makes pre-configuring it to monitor for best practices much easier. I believe we’ve all seen a particular solution that tries to do a little of everything, and thus actually does no particular thing well. This is the type of tool that we DO NOT want PMM to be. Now, onto the installation procedure.

There are four basic steps to get PMM monitoring your infrastructure. I do not want to recreate the Deployment Guide here, so that this post stays relevant in the future; instead, I’ll link to the relevant sections of the documentation so you can cut to the chase. Underneath each step, I’ll list some key takeaways that will save you time now and in the future, and a condensed command-line sketch follows the list.

  1. Install the integrated PMM server in the flavor of your choice (Docker, VM or AWS AMI)
    1. Percona recommends Docker to deploy PMM server as of v1.1
      1. As of right now, using Docker will make the PMM server upgrade experience seamless.
      2. Using the default version of Docker from your package manager may cause unexpected behavior. We recommend using the latest stable version from Docker’s repositories (instructions from Docker).
    2. PMM server AMI and VM are “experimental” in PMM v1.1
    3. When you open the “Metrics Monitor” for the first time, it will ask for credentials (user: admin pwd: admin).
  2. Install the PMM client on every database instance that you want to monitor.
    1. Install with your package manager for easier upgrades when a new version of PMM is released.
  3. Connect the PMM client to the PMM Server.
    1. Think of this step as sending configuration information from the client to the server. This means you are telling the client the address of the PMM server, not the other way around.
  4. Start data collection services on the PMM client.
    1. Collection services are enabled per database technology (MySQL, MongoDB, ProxySQL, etc.) on each database host.
    2. Make sure to set permissions for the PMM client to monitor the database, or you will see errors such as: Cannot connect to MySQL: Error 1045: Access denied for user 'jon'@'localhost' (using password: NO)
      1. Setting proper credentials uses this syntax: sudo pmm-admin add <service_type> --user xxxx --password xxxx
    3. There’s good information about PMM client options in the “Managing PMM Client” section of the documentation for advanced configurations/troubleshooting.
What’s next?

That’s really up to you, and what makes sense for your needs. However, here are a few suggestions to get the most out of PMM.

  1. Set up alerting in Grafana on the PMM server. This is still an experimental function in Grafana, but it works. I’d start with Barrett Chambers’ post on setting up email alerting, and refine it with  Peter Zaitsev’s post.
  2. Set up more hosts to test the full functionality of PMM. We have completely free, high-performance versions of MySQL, MongoDB, Percona XtraDB Cluster (PXC) and ProxySQL (for MySQL proxy/load balancing).
  3. Start load testing the database with benchmarking tools to build your troubleshooting skills. Try to break something to learn what troubling trends look like. When you find them, set up alerts to give you enough time to fix them.
Categories: MySQL

Quest for Better Replication in MySQL: Galera vs. Group Replication

MySQL Performance Blog - Fri, 2017-02-24 20:46

UPDATE: Some of the language in the original post was considered overly critical of Oracle by some community members. This was not my intent, and I’ve modified the language to be less so. I’ve also changed the term “synchronous” (the use of which is inaccurate and misleading) to “virtually synchronous.” This term is more accurate, is already used by both technologies’ founders, and should be less misleading.

I also wanted to thank Jean-François Gagné for pointing out the incorrect sentence about multi-threaded slaves in Group Replication, which I also corrected accordingly.

In today’s blog post, I will briefly compare two major virtually synchronous replication technologies available today for MySQL.

More Than Asynchronous Replication

Thanks to the Galera plugin, developed by the Codership team, we’ve had the choice between asynchronous and virtually synchronous replication in the MySQL ecosystem for quite a few years already. Moreover, we can choose between at least three software providers: Codership, MariaDB and Percona, each with its own Galera implementation.

The situation recently became much more interesting when MySQL Group Replication went into GA (stable) stage in December 2016.

Oracle, the upstream MySQL provider, introduced its own replication implementation that is very similar in concept. Unlike the others mentioned above, it isn’t based on Galera: Group Replication was built from the ground up as a new solution, but it shares many very similar concepts with Galera. This post doesn’t cover MySQL Cluster, another, fully synchronous solution that existed much earlier than Galera; it is a very different solution for different use cases.

In this post, I will point out a couple of interesting differences between Group Replication and Galera, which hopefully will be helpful to those considering switching from one to another (or if they are planning to test them).

This is certainly not a full list of all the differences, but rather things I found interesting during my explorations.

It is also important to know that Group Replication evolved a lot before it went GA (its whole cluster layer was replaced). I won’t mention how things looked before the GA stage, and will just concentrate on the latest available 5.7.17 version. I will also not spend much time on how Galera implementations looked in the past, and will use Percona XtraDB Cluster 5.7 as a reference.

Multi-Master vs. Master-Slave

Galera has always been multi-master by default, so it does not matter to which node you write. Many users use a single writer due to workload specifics and multi-master limitations, but Galera has no single master mode per se.

Group Replication, on the other hand, promotes just one member as primary (master) by default, and other members are put into read-only mode automatically. This is what happens if we try to change data on non-master node:

mysql> truncate test.t1;
ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement

To change from single primary mode to multi-primary (multi-master), you have to start group replication with the group_replication_single_primary_mode variable disabled.
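As a minimal illustration (my own sketch, not from the original tests), multi-primary mode is usually enabled in my.cnf on every member before the group is bootstrapped, and the additional update-everywhere consistency checks are commonly enabled alongside it:

# my.cnf sketch: run Group Replication in multi-primary mode
[mysqld]
group_replication_single_primary_mode = OFF
group_replication_enforce_update_everywhere_checks = ON
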
Another interesting fact is you do not have any influence on which cluster member will be the master in single primary mode: the cluster auto-elects it. You can only check it with a query:

mysql> SELECT * FROM performance_schema.global_status WHERE VARIABLE_NAME like 'group_replication%';
+----------------------------------+--------------------------------------+
| VARIABLE_NAME                    | VARIABLE_VALUE                       |
+----------------------------------+--------------------------------------+
| group_replication_primary_member | 329333cd-d6d9-11e6-bdd2-0242ac130002 |
+----------------------------------+--------------------------------------+
1 row in set (0.00 sec)

Or just:

mysql> show status like 'group%';
+----------------------------------+--------------------------------------+
| Variable_name                    | Value                                |
+----------------------------------+--------------------------------------+
| group_replication_primary_member | 329333cd-d6d9-11e6-bdd2-0242ac130002 |
+----------------------------------+--------------------------------------+
1 row in set (0.01 sec)

To show the hostname instead of the UUID:

mysql> select member_host as "primary master" from performance_schema.global_status join performance_schema.replication_group_members where variable_name='group_replication_primary_member' and member_id=variable_value;
+----------------+
| primary master |
+----------------+
| f18ff539956d   |
+----------------+
1 row in set (0.00 sec)

Replication: Majority vs. All

Galera delivers write transactions synchronously to ALL nodes in the cluster. (Later, applying happens asynchronously in both technologies.) However, Group Replication needs just a majority of the nodes confirming the transaction. This means a transaction commit on the writer succeeds and returns to the client even if a minority of nodes still have not received it.

In the example of a three-node cluster, if one node crashes or loses the network connection, the two others continue to accept writes (or just the primary node in Single-Primary mode) even before a faulty node is removed from the cluster.

If the separated node is the primary one, it denies writes due to the lack of a quorum (it will report the error ERROR 3101 (HY000): Plugin instructed the server to rollback the current transaction.). If the remaining part of the cluster holds a quorum, one of its nodes is elected primary after the faulty node is removed from the cluster, and it then accepts writes.

With that said, the “majority” rule in Group Replication means that there isn’t a guarantee that you won’t lose any data if the majority of nodes is lost. There is a chance that the lost majority applied some transactions that had not yet been delivered to the minority at the moment of the crash.

In Galera, a single node’s network interruption makes the others wait for it, and pending writes can be committed once either the connection is restored or the faulty node is removed from the cluster after a timeout. So the chance of losing data in a similar scenario is lower, as transactions always reach all nodes. Data can be lost in Percona XtraDB Cluster only in a really bad-luck scenario: a network split happens, the remaining majority of nodes forms a quorum, the cluster reconfigures and allows new writes, and then shortly afterwards the majority part is damaged.

Schema Requirements

For both technologies, one of the requirements is that all tables must be InnoDB and have a primary key. This requirement is now enforced by default in both Group Replication and Percona XtraDB Cluster 5.7. Let’s look at the differences.

Percona XtraDB Cluster:

mysql> create table nopk (a char(10));
Query OK, 0 rows affected (0.08 sec)

mysql> insert into nopk values ("aaa");
ERROR 1105 (HY000): Percona-XtraDB-Cluster prohibits use of DML command on a table (test.nopk) without an explicit primary key with pxc_strict_mode = ENFORCING or MASTER

mysql> create table m1 (id int primary key) engine=myisam;
Query OK, 0 rows affected (0.02 sec)

mysql> insert into m1 values(1);
ERROR 1105 (HY000): Percona-XtraDB-Cluster prohibits use of DML command on a table (test.m1) that resides in non-transactional storage engine with pxc_strict_mode = ENFORCING or MASTER

mysql> set global pxc_strict_mode=0;
Query OK, 0 rows affected (0.00 sec)

mysql> insert into nopk values ("aaa");
Query OK, 1 row affected (0.00 sec)

mysql> insert into m1 values(1);
Query OK, 1 row affected (0.00 sec)

Before Percona XtraDB Cluster 5.7 (or in other Galera implementations), there were no such enforced restrictions. Users unaware of these requirements often ended up with problems.

Group Replication:

mysql> create table nopk (a char(10));
Query OK, 0 rows affected (0.04 sec)

mysql> insert into nopk values ("aaa");
ERROR 3098 (HY000): The table does not comply with the requirements by an external plugin.
2017-01-15T22:48:25.241119Z 139 [ERROR] Plugin group_replication reported: 'Table nopk does not have any PRIMARY KEY. This is not compatible with Group Replication'

mysql> create table m1 (id int primary key) engine=myisam;
ERROR 3161 (HY000): Storage engine MyISAM is disabled (Table creation is disallowed).

I am not aware of any way to disable these restrictions in Group Replication.

GTID

Galera has its own Global Transaction ID, which has existed since MySQL 5.5, and is independent from MySQL’s GTID feature introduced in MySQL 5.6. If MySQL’s GTID is enabled on a Galera-based cluster, both numbering schemes exist, each with its own sequences and UUIDs.

Group Replication is based on a native MySQL GTID feature, and relies on it. Interestingly, a separate sequence block range (initially 1M) is pre-assigned for each cluster member.

WAN Support

The MySQL Group Replication documentation isn’t very optimistic on WAN support, claiming that both “Low latency, high bandwidth network connections are a requirement” and “Group Replication is designed to be deployed in a cluster environment where server instances are very close to each other, and is impacted by both network latency as well as network bandwidth.” These statements are found here and here. However there is network traffic optimization: Message Compression.

I don’t see group communication level tunings available yet, as we find in the Galera evs.* series of wsrep_provider_options.

Galera founders actually encourage trying it in geo-distributed environments, and some WAN-dedicated settings are available (the most important being WAN segments).

But both technologies need a reliable network for good performance.

State Transfers

Galera has two types of state transfers that allow syncing data to nodes when needed: incremental (IST) and full (SST). Incremental is used when a node has been out of the cluster for some time and, when it rejoins, the other nodes still have the missing write sets in their Galera cache. Full SST is helpful if incremental is not possible, especially when a new node is added to the cluster. SST automatically provisions the node with fresh data, taken as a snapshot from one of the running nodes (the donor). The most common SST method is using Percona XtraBackup, which takes a fast and non-blocking binary data snapshot (hot backup).

In Group Replication, state transfers are fully based on binary logs with GTID positions. If there is no donor with all of the required binary logs (including the ones needed to provision a new node), a DBA has to first provision the new node with an initial data snapshot. Otherwise, the joiner will fail with a very familiar error:

2017-01-16T23:01:40.517372Z 50 [ERROR] Slave I/O for channel 'group_replication_recovery': Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.', Error_code: 1236

The official documentation mentions that provisioning the node before adding it to the cluster may speed up joining (the recovery stage). Another difference is the behavior on state transfer failure: a Galera joiner will abort after the first try and shut down its mysqld instance, whereas a Group Replication joiner will fall back to another donor in an attempt to succeed. Here I found something slightly annoying: if no donor can satisfy the joiner’s demands, it will still keep trying the same donors over and over, for a fixed number of attempts:

[root@cd81c1dadb18 /]# grep 'Attempt' /var/log/mysqld.log |tail
2017-01-16T22:57:38.329541Z 12 [Note] Plugin group_replication reported: 'Establishing group recovery connection with a possible donor. Attempt 1/10'
2017-01-16T22:57:38.539984Z 12 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 2/10'
2017-01-16T22:57:38.806862Z 12 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 3/10'
2017-01-16T22:58:39.024568Z 12 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 4/10'
2017-01-16T22:58:39.249039Z 12 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 5/10'
2017-01-16T22:59:39.503086Z 12 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 6/10'
2017-01-16T22:59:39.736605Z 12 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 7/10'
2017-01-16T23:00:39.981073Z 12 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 8/10'
2017-01-16T23:00:40.176729Z 12 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 9/10'
2017-01-16T23:01:40.404785Z 12 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 10/10'

After the last try, even though it fails, mysqld keeps running and allows client connections…

Auto Increment Settings

Galera adjusts the auto_increment_increment and auto_increment_offset values according to the number of members in a cluster. So, for a 3-node cluster, auto_increment_increment will be “3” and auto_increment_offset will range from “1” to “3” (depending on the node). If the number of nodes changes later, these are updated immediately. This feature can be disabled using the wsrep_auto_increment_control setting. If needed, these settings can be set manually.

Interestingly, in Group Replication the auto_increment_increment seems to be fixed at 7, and only auto_increment_offset is set differently on each node. This is the case even in the default Single-Primary mode! This seems like a waste of available IDs, so make sure that you adjust the group_replication_auto_increment_increment setting to a saner number before you start using Group Replication in production.
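As a minimal sketch (my own example), the setting has to be identical on every member and in place before Group Replication starts, so a config-file entry is the usual home for it:

# my.cnf sketch: match the increment to the actual group size (three nodes here)
[mysqld]
group_replication_auto_increment_increment = 3
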

Multi-Threaded Slave Side Applying

Galera developed its own multi-threaded slave feature as far back as the 5.5 versions, and it works even for workloads that modify tables in the same database. It is controlled with the wsrep_slave_threads variable. Group Replication uses the feature introduced in MySQL 5.7, where the number of applier threads is controlled with slave_parallel_workers. Galera parallelizes applying based on potential conflicts of changed/locked rows, while Group Replication parallelism is based on an improved LOGICAL_CLOCK scheduler, which uses information from write set dependencies. This can allow it to achieve much better results than normal asynchronous replication in MTS mode. More details can be found here: http://mysqlhighavailability.com/zooming-in-on-group-replication-performance/
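For reference, a minimal configuration sketch of the applier settings discussed above (the thread counts are only illustrative; tune them to your workload):

# my.cnf sketch: parallel applying on a Group Replication member
slave_parallel_type = LOGICAL_CLOCK
slave_preserve_commit_order = ON
slave_parallel_workers = 8

# my.cnf sketch: the Galera / Percona XtraDB Cluster equivalent
wsrep_slave_threads = 8
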

Flow Control

Both technologies use a technique to throttle writes when nodes are slow in applying them. Interestingly, the default size of the allowed applier queue is very different in the two: Galera’s gcs.fc_limit defaults to just 16 write sets, while Group Replication’s group_replication_flow_control_applier_threshold defaults to 25,000 transactions.

Moreover, Group Replication provides a separate certifier queue size that is also eligible to trigger Flow Control: group_replication_flow_control_certifier_threshold. One thing I found difficult is checking the actual applier queue size, as the only queue exposed via performance_schema.replication_group_member_stats is Count_Transactions_in_queue (which only shows the certifier queue).
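As a quick sketch of where these knobs live (the values are only illustrative; the threshold variables are dynamic in 5.7.17):

-- Group Replication: inspect the certification queue and adjust the flow-control thresholds
SELECT MEMBER_ID, COUNT_TRANSACTIONS_IN_QUEUE
  FROM performance_schema.replication_group_member_stats;
SET GLOBAL group_replication_flow_control_applier_threshold = 25000;
SET GLOBAL group_replication_flow_control_certifier_threshold = 25000;

-- Galera / Percona XtraDB Cluster: the applier queue limit is a provider option
SET GLOBAL wsrep_provider_options = 'gcs.fc_limit=64';
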

Network Hiccup/Partition Handling

In Galera, when the network connection between nodes is lost, those who still have a quorum will form a new cluster view. Those who lost a quorum keep trying to re-connect to the primary component. Once the connection is restored, separated nodes will sync back using IST and rejoin the cluster automatically.

This doesn’t seem to be the case for Group Replication. Separated nodes that lose the quorum will be expelled from the cluster, and won’t join back automatically once the network connection is restored. In its error log we can see:

2017-01-17T11:12:18.562305Z 0 [ERROR] Plugin group_replication reported: 'Member was expelled from the group due to network failures, changing member status to ERROR.'
2017-01-17T11:12:18.631225Z 0 [Note] Plugin group_replication reported: 'getstart group_id ce427319'
2017-01-17T11:12:21.735374Z 0 [Note] Plugin group_replication reported: 'state 4330 action xa_terminate'
2017-01-17T11:12:21.735519Z 0 [Note] Plugin group_replication reported: 'new state x_start'
2017-01-17T11:12:21.735527Z 0 [Note] Plugin group_replication reported: 'state 4257 action xa_exit'
2017-01-17T11:12:21.735553Z 0 [Note] Plugin group_replication reported: 'Exiting xcom thread'
2017-01-17T11:12:21.735558Z 0 [Note] Plugin group_replication reported: 'new state x_start'

Its status changes to:

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+--------------+-------------+--------------+
| group_replication_applier | 329333cd-d6d9-11e6-bdd2-0242ac130002 | f18ff539956d | 3306        | ERROR        |
+---------------------------+--------------------------------------+--------------+-------------+--------------+
1 row in set (0.00 sec)

It seems the only way to bring it back into the cluster is to manually restart Group Replication:

mysql> START GROUP_REPLICATION;
ERROR 3093 (HY000): The START GROUP_REPLICATION command failed since the group is already running.
mysql> STOP GROUP_REPLICATION;
Query OK, 0 rows affected (5.00 sec)
mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (1.96 sec)
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+--------------+-------------+--------------+
| group_replication_applier | 24d6ef6f-dc3f-11e6-abfa-0242ac130004 | cd81c1dadb18 | 3306        | ONLINE       |
| group_replication_applier | 329333cd-d6d9-11e6-bdd2-0242ac130002 | f18ff539956d | 3306        | ONLINE       |
| group_replication_applier | ae148d90-d6da-11e6-897e-0242ac130003 | 0af7a73f4d6b | 3306        | ONLINE       |
+---------------------------+--------------------------------------+--------------+-------------+--------------+
3 rows in set (0.00 sec)

Note that in the above output, after the network failure, Group Replication did not stop. It waits in an error state. Moreover, in Group Replication a partitioned node keeps serving dirty reads as if nothing happened (for non-super users):

cd81c1dadb18 {test} ((none)) > SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST  | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+--------------+-------------+--------------+
| group_replication_applier | 24d6ef6f-dc3f-11e6-abfa-0242ac130004 | cd81c1dadb18 | 3306        | ERROR        |
+---------------------------+--------------------------------------+--------------+-------------+--------------+
1 row in set (0.00 sec)

cd81c1dadb18 {test} ((none)) > select * from test1.t1;
+----+-------+
| id | a     |
+----+-------+
|  1 | dasda |
|  3 | dasda |
+----+-------+
2 rows in set (0.00 sec)

cd81c1dadb18 {test} ((none)) > show grants;
+--------------------------------------------------------------------------------+
| Grants for test@%                                                              |
+--------------------------------------------------------------------------------+
| GRANT SELECT, INSERT, UPDATE, DELETE, REPLICATION CLIENT ON *.* TO 'test'@'%'  |
+--------------------------------------------------------------------------------+
1 row in set (0.00 sec)

A privileged user can disable super_read_only, but then it won’t be able to write:

cd81c1dadb18 {root} ((none)) > insert into test1.t1 set a="split brain";
ERROR 3100 (HY000): Error on observer while running replication hook 'before_commit'.

cd81c1dadb18 {root} ((none)) > select * from test1.t1;
+----+-------+
| id | a     |
+----+-------+
|  1 | dasda |
|  3 | dasda |
+----+-------+
2 rows in set (0.00 sec)

I found an interesting thing here, which I consider to be a bug. In this case, a partitioned node can actually perform DDL, despite the error:

cd81c1dadb18 {root} ((none)) > show tables in test1;
+-----------------+
| Tables_in_test1 |
+-----------------+
| nopk            |
| t1              |
+-----------------+
2 rows in set (0.01 sec)

cd81c1dadb18 {root} ((none)) > create table test1.split_brain (id int primary key);
ERROR 3100 (HY000): Error on observer while running replication hook 'before_commit'.

cd81c1dadb18 {root} ((none)) > show tables in test1;
+-----------------+
| Tables_in_test1 |
+-----------------+
| nopk            |
| split_brain     |
| t1              |
+-----------------+
3 rows in set (0.00 sec)

In a Galera-based cluster, you are automatically protected from that, and a partitioned node refuses to allow both reads and writes. It throws an error: ERROR 1047 (08S01): WSREP has not yet prepared node for application use. You can force dirty reads using the wsrep_dirty_reads variable.

There are many more subtle (and less subtle) differences between these technologies, but this blog post is long enough already. Maybe next time!

Categories: MySQL

Percona MongoDB 3.4 Bundle Release: Percona Server for MongoDB 3.4 Features Explored

MySQL Performance Blog - Thu, 2017-02-23 21:36

This blog post continues the series on the Percona MongoDB 3.4 bundle release. This release includes Percona Server for MongoDB, Percona Monitoring and Management, and Percona Toolkit. In this post, we’ll look at the features included in Percona Server for MongoDB.

I apologize for the long blog, but there is a good deal of important information to cover: not just what new features exist, but also why they are so important. I have tried to break this down into clear areas to cover as much as possible, while also linking to further reading on these topics.

The first and biggest new feature for many people is the addition of collation in MongoDB. Wikipedia says about collation:

Collation is the assembly of written information into a standard order. Many systems of collation are based on numerical order or alphabetical order, or extensions and combinations thereof. Collation is a fundamental element of most office filing systems, library catalogs, and reference books.

In other words, a collation is an ordering of characters for a given character set. Different languages order the alphabet differently, or even use different base characters (as in Asian, Middle Eastern and other regions) that are not English-native. Collations are critical for multi-language support and for sorting non-English words in index ordering.
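As a small illustration (the collection and field names here are hypothetical, not from the release notes), a collation can be attached to a collection, an index, or an individual operation:

// mongo shell sketch: default French collation plus a case-insensitive index
db.createCollection("people", { collation: { locale: "fr" } })
db.people.createIndex(
    { lastName: 1 },
    { collation: { locale: "fr", strength: 2 } }
)
// A query must specify the same collation for that index to be used
db.people.find({ lastName: "Dupont" }).collation({ locale: "fr", strength: 2 })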

Sharding General

All members of a cluster are now aware of the sharding configuration (all members, the sharding set name, etc.). Due to this, sharding.clusterRole must be defined on all shard nodes, which is a new requirement.

Mongos processes MUST connect to 3.4 mongod instances (shard and config nodes); connecting to 3.2 and lower is not possible.

Config Servers

Balancer on Config Server PRIMARY

In MongoDB 3.4, the cluster balancer is moved from the mongos processes (any) to the config server PRIMARY member.

Moving to a config-server-based balancer has the following benefits:

Predictability: the balancer process is always the config server PRIMARY. Before 3.4, any mongos processes could become the balancer, often chosen at random. This made troubleshooting difficult.

Lighter “mongos” process: the mongos/shard router benefits from being as light and thin as possible. This removes some code and potential for breakage from “mongos.”

Efficiency: config servers have dedicated nodes with very low resource utilization and no direct client traffic, for the most part. Moving the balancer to the config server set moves usage away from critical “router” processes.

Reliability: balancing relies on fewer components. Now the balancer can operate on the “config” database metadata locally, without the chance of network interruptions breaking balancing.

Config servers are a more permanent member of a cluster, unlikely to scale up/down or often change, unlike “mongos” processes that may be located on app hosts, etc.

Config Server Replica Set Required

In MongoDB 3.4, the former “mirror” config server strategy (SCCC) is no longer supported. This means all sharded clusters must use a replica-set-based set of config servers.

Using a replica-set based config server set has the following benefits:

Adding and removing config servers is greatly simplified.

Config servers have oplogs (useful for investigations).

Simplicity/Consistency: removing mirrored/SCCC config servers simplifies the high-level and code-level architecture.

Chunk Migration / Balancing Example

(from docs.mongodb.com)

Parallel Migrations

Previous to MongoDB 3.4, the balancer could only perform a single chunk migration at any given time. When a chunk migrates, a “source” shard and a “destination” shard are chosen. The balancer coordinates moving the chunks from the source to the target. In a large cluster with many shards, this is inefficient because a migration only involves two shards and a cluster may contain 10s or 100s of shards.

In MongoDB 3.4, the balancer can now perform many chunk migrations at the same time in parallel, as long as they do not involve the same source and destination shards. This means that in clusters with more than two shards, many chunk migrations can now occur at the same time when they're mutually exclusive to one another. The effective outcome is (Number of Shards / 2) - 1 == maximum number of parallel migrations: in other words, an increase in the speed of the migration process.

For example, if you have ten shards, then 10/2 = 5 and  5-1 = 4. So you can have four concurrent moveChunks or balancing actions.

Tags and Zone

Sharding Zones supersede tag-aware sharding. There are essentially no changes to the functionality; this is mostly a naming change plus some new helper functions.

New commands/shell-methods added:

addShardToZone / sh.addShardToZone().

removeShardFromZone / sh.removeShardFromZone().

updateZoneKeyRange / sh.updateZoneKeyRange() + sh.removeRangeFromZone().

You might recall  MongoDB has for a long time supported the idea of shard and replication tags. They break into two main areas: hardware-aware tags and access pattern tags. The idea behind hardware-aware tags was that you could have one shard with slow disks, and as data ages, you have a process to move documents to a collection that lives on that shard (or tell specific ranges to live on that shard). Then your other shards could be faster (and multiples of them) to better handle the high-speed processing of current data.

The other is a case based more in replication, where you want to allow BI and other reporting systems access to your data without damaging your primary customer interactions. To do this, you could tag a node in a replica set to be {reporting: true}, and all reporting queries would use this tag to prevent affecting the same nodes the user-generated work would live on. Zones is this same idea, simplified into a better-understood term. For now, there is no major difference between these areas, but it could be something to look at more in the 3.6 and 3.8 MongoDB versions.

Replication

New “linearizable” Read Concern: reflects all successful writes issued with a “majority” and acknowledged before the start of the read operation.

Adjustable Catchup for Newly Elected Primary: the time limit for a newly elected primary to catch up with the other replica set members that might have more recent writes.

Write Concern Majority Journal Default replset-config option: determines the behavior of the { w: "majority" } write concern if the write concern does not explicitly specify the journal option j.

Initial-sync improvements:

Now the initial sync builds the indexes as the documents are copied.

Improvements to the retry logic make it more resilient to intermittent failures on the network.

Data Types

MongoDB 3.4 adds support for the decimal128 format with the new decimal data type. The decimal128 format supports numbers with up to 34 decimal digits (i.e., significant digits) and an exponent range of −6143 to +6144.

When performing comparisons among different numerical types, MongoDB conducts a comparison of the exact stored numerical values without first converting values to a common type.

Unlike the double data type, which only stores an approximation of the decimal values, the decimal data type stores the exact value. For example, a decimal NumberDecimal("9.99") has a precise value of 9.99, whereas a double 9.99 would have an approximate value of 9.9900000000000002131628….

To test for the decimal type, use the $type operator with the literal “decimal” or 19:

db.inventory.find( { price: { $type: "decimal" } } )

New Number Wrapper Object Type

db.inventory.insert( {_id: 1, item: "The Scream", price: NumberDecimal("9.99"), quantity: 4 } )

To use the new decimal data type with a MongoDB driver, an upgrade to a driver version that supports the feature is necessary.

Aggregation Changes

Stages

Recursive Search

MongoDB 3.4 introduces a stage to the aggregation pipeline that allows for recursive searches.

$graphLookup: Performs a recursive search on a collection. To each output document, adds a new array field that contains the traversal results of the recursive search for that document.

Faceted Search

Faceted search allows for the categorization of documents into classifications. For example, given a collection of inventory documents, you might want to classify items by a single category (such as by the price range), or by multiple groups (such as by price range as well as separately by the departments).

3.4 introduces stages to the aggregation pipeline that allow for faceted search.

$bucket: Categorizes or groups incoming documents into buckets that represent a range of values for a specified expression.

$bucketAuto: Categorizes or groups incoming documents into a specified number of buckets that constitute a range of values for a specified expression. MongoDB automatically determines the bucket boundaries.

$facet: Processes multiple pipelines on the input documents and outputs a document that contains the results of these pipelines. By specifying facet-related stages ($bucket, $bucketAuto, and $sortByCount) in these pipelines, $facet allows for multi-faceted search.

$sortByCount: Categorizes or groups incoming documents by a specified expression to compute the count for each group. Output documents are sorted in descending order by the count.
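As a brief, hypothetical example of the faceted-search stages above (the collection and field names are my own):

// mongo shell sketch: count inventory items per price range
db.inventory.aggregate([
  { $bucket: {
      groupBy: "$price",
      boundaries: [ 0, 50, 100, 200 ],
      default: "other",
      output: { count: { $sum: 1 } }
  } }
])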

Also read: https://www.percona.com/blog/2016/12/13/mongodb-3-4-facet-aggregation-features-and-server-27395-mongod-crash/

 

Reshaping Documents

MongoDB 3.4 introduces stages to the aggregation pipeline that facilitate replacing documents as well as adding new fields.

$addFields: Adds new fields to documents. The stage outputs documents that contain all existing fields from the input documents as well as the newly added fields.

$replaceRoot: Replaces a document with the specified document. You can specify a document embedded in the input document to promote the embedded document to the top level.

Count

MongoDB 3.4 introduces a new stage to the aggregation pipeline that facilitates counting documents.

$count: Returns a document that contains a count of the number of documents input to the stage.

Operators

Array Operators

$in: Returns a boolean that indicates if a specified value is in an array.

$indexOfArray: Searches an array for an occurrence of a specified value and returns the array index (zero-based) of the first occurrence.

$range: Returns an array whose elements are a generated sequence of numbers.

$reverseArray: Returns an output array whose elements are those of the input array but in reverse order.

$reduce: Takes an array as input and applies an expression to each item in the array to return the final result of the expression.

$zip: Returns an output array where each element is itself an array, consisting of elements of the corresponding array index position from the input arrays.

Date Operators

$isoDayOfWeek: Returns the ISO 8601 weekday number, ranging from 1 (for Monday) to 7 (for Sunday).

$isoWeek: Returns the ISO 8601 week number, which can range from 1 to 53. Week numbers start at 1 with the week (Monday through Sunday) that contains the year’s first Thursday.

$isoWeekYear: Returns the ISO 8601 year number, where the year starts on the Monday of week 1 (ISO 8601) and ends with the Sunday of the last week (ISO 8601).

String Operators

$indexOfBytes: Searches a string for an occurrence of a substring and returns the UTF-8 byte index (zero-based) of the first occurrence.

$indexOfCP: Searches a string for an occurrence of a substring and returns the UTF-8 code point index (zero-based) of the first occurrence.

$split: Splits a string by a specified delimiter into string components and returns an array of the string components.

$strLenBytes: Returns the number of UTF-8 bytes for a string.

$strLenCP: Returns the number of UTF-8 code points for a string.

$substrBytes: Returns the substring of a string. The substring starts with the character at the specified UTF-8 byte index (zero-based) in the string for the length specified.

$substrCP: Returns the substring of a string. The substring starts with the character at the specified UTF-8 code point index (zero-based) in the string for the length specified.

Others/Misc

Other new operators:

$switch: Evaluates, in sequential order, the case expressions of the specified branches to enter the first branch for which the case expression evaluates to “true”.

$collStats: Returns statistics regarding a collection or view.

$type: Returns a string which specifies the BSON Types of the argument.

$project: Adds support for field exclusion in the output document. Previously, you could only exclude the _id field in the stage.

Views

MongoDB 3.4 adds support for creating read-only views from existing collections or other views. To specify or define a view, MongoDB 3.4 introduces:

    • the viewOn and pipeline options to the existing create command:
      • db.runCommand( { create: <view>, viewOn: <source>, pipeline: <pipeline> } )
    • or if specifying a default collation for the view:
      • db.runCommand( { create: <view>, viewOn: <source>, pipeline: <pipeline>, collation: <collation> } )
    • and a corresponding  mongo shell helper db.createView():
      • db.createView(<view>, <source>, <pipeline>, <collation>)

For more information on creating views, see Views.
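As a quick, hypothetical usage sketch (the collection, view and field names are mine, not from the release notes):

// mongo shell sketch: a read-only view exposing only managers
db.createView("managers", "employees", [
  { $match: { role: "manager" } },
  { $project: { name: 1, department: 1, _id: 0 } }
])
// The view is then queried like a normal collection
db.managers.find({ department: "sales" })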

Categories: MySQL

Webinar Thursday, February 23, 2017: Troubleshooting MySQL Access Privileges Issues

MySQL Performance Blog - Wed, 2017-02-22 20:50

Please join Sveta Smirnova, Percona’s Principal Technical Services Engineer, as she presents Troubleshooting MySQL Access Privileges Issues on February 23, 2017 at 11:00 am PST / 2:00 pm EST (UTC-8).

Do you have registered users who can’t connect to the MySQL server? Strangers modifying data to which they shouldn’t have access?

MySQL supports a rich set of user privilege options and allows you to fine tune access to every object in the server. The latest versions support authentication plugins that help to create more access patterns.

However, finding errors in such a big set of options can be problematic. This is especially true for environments with hundreds of users, all with different privileges on multiple objects. In this webinar, I will show you how to decipher error messages and unravel the complicated setups that can lead to access errors. We will also cover network errors that mimic access privileges errors.

In this webinar, we will discuss:

  • Which privileges MySQL supports
  • What GRANT statements are
  • How privileges are stored
  • How to find out why a privilege does not work properly
  • How authentication plugins make a difference
  • What the best access control practices are

To register for this webinar please click here.

Sveta Smirnova, Principal Technical Services Engineer

Sveta joined Percona in 2015. Her main professional interests are problem-solving, working with tricky issues and bugs, finding patterns that can solve typical issues more quickly, and teaching others how to deal with MySQL issues, bugs and gotchas effectively. Before joining Percona, Sveta worked as a Support Engineer in the MySQL Bugs Analysis Support Group at MySQL AB-Sun-Oracle. She is the author of the book “MySQL Troubleshooting” and of the JSON UDF functions for MySQL.

Categories: MySQL

Percona Monitoring and Management (PMM) Graphs Explained: MongoDB with RocksDB

MySQL Performance Blog - Wed, 2017-02-22 20:36

This post is part of the series of Percona’s MongoDB 3.4 bundle release blogs. In mid-2016, Percona Monitoring and Management (PMM) added support for RocksDB with MongoDB, also known as “MongoRocks.” In this blog, we will go over the Percona Monitoring and Management (PMM) 1.1.0 version of the MongoDB RocksDB dashboard, how PMM is useful in the day-to-day monitoring of MongoDB and what we plan to add and extend.

Percona Monitoring and Management (PMM)

Percona Monitoring and Management (PMM) is an open-source platform for managing and monitoring MySQL and MongoDB, developed by Percona on top of open-source technology. Behind the scenes, the graphing features this article covers use Prometheus (a popular time-series data store), Grafana (a popular visualization tool), mongodb_exporter (our MongoDB database metric exporter) plus other technologies to provide database and operating system metric graphs for your database instances.

The mongodb_exporter tool, which provides our monitoring platform with MongoDB metrics, uses RocksDB status output and optional counters to provide detailed insight into RocksDB performance. Percona’s MongoDB 3.4 release enables RocksDB’s optional counters by default. On 3.2, however, you must set the following in /etc/mongod.conf to enable this: storage.rocksdb.counters: true .
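For reference, here is a minimal sketch of what that setting looks like in the YAML config file (the engine line is shown only for context; merge this with your existing configuration):

# /etc/mongod.conf sketch: enable RocksDB counters on Percona Server for MongoDB 3.2
storage:
  engine: rocksdb
  rocksdb:
    counters: true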

This article shows a live demo of our MongoDB RocksDB graphs: https://pmmdemo.percona.com/graph/dashboard/db/mongodb-rocksdb.

RocksDB/MongoRocks

RocksDB is a storage engine available since version 3.2 in Percona’s fork of MongoDB: Percona Server for MongoDB.

The first thing to know about monitoring RocksDB is compaction. RocksDB stores its data on disk using several tiered levels of immutable files. Changes are first written to the first RocksDB level (Level0). Later, internal compactions merge the changes down to the next RocksDB level as Level0 fills. Each level before the last is essentially a set of deltas to the resting data set that soon merges down toward the bottom.

We can see the effect of the tiered levels in our “RocksDB Compaction Level Size” graph, which reflects the size of each level in RocksDB on-disk:

Note that most of the database data is in the final level “L6” (Level 6). Levels L0, L4 and L5 hold relatively small amounts of data changes. These get merged down to L6 via compaction.

More about this design is explained in detail by the developers of MongoRocks, here: https://www.percona.com/live/plam16/sessions/everything-you-wanted-know-about-mongorocks.

RocksDB Compaction

Most importantly, RocksDB compactions try to happen in the background. They generally do not “block” the database. However, the additional resource usage of compactions can potentially cause some spikes in latency, making compaction important to watch. When compactions occur, between levels L4 and L5 for example, L4 and L5 are read and merged with the result being written out as a new L5.

The memtable in MongoRocks is a 64MB in-memory table. Changes initially get written to the memtable. Reads check the memtable to see if there are unwritten changes to consider. When the memtable has filled to 100%, RocksDB performs a compaction of the memtable data to Level0, the first on-disk level in RocksDB.

In PMM we have added a single-stat panel for the percentage of the memtable usage. This is very useful in indicating when you can expect a memtable-to-level0 compaction to occur:

Above we can see the memtable is 125% used, which means RocksDB is late to finish (or start) a compaction due to high activity. Shortly after this screenshot was taken, however, our test system began a compaction of the memtable, as can be seen by the drop in active memtable entries below:

Following this compaction further through PMM’s graphs, we can see from the (very useful) “RocksDB Compaction Time” graph that this compaction took 5 seconds.

In the graph above, I have singled out “L0” to show Level0’s compaction time. However, any level can be selected, either per-graph (by clicking on the legend item) or dashboard-wide (by using the RocksDB Level drop-down at the top of the page).

In terms of throughput, we can see from our “RocksDB Write Activity” graph (Read Activity is also graphed) that this compaction required about 33MBps of disk write activity:

On top of additional resource consumption such as the write activity above, compactions cause caches to get cleared. One example is the OS cache due to new level files being written. These factors can cause some increases to read latencies, demonstrated in this example below by the bump in L4 read latency (top graph) caused by the L4 compaction (bottom graph):

This pattern above is one area to check if you see latency spikes in RocksDB.

RocksDB Stalls

When RocksDB is unable to perform compaction promptly, it uses a feature called “stalls” to try and slow down the amount of data coming into the engine. In my experience, stalls almost always mean something below RocksDB is not up to the task (likely the storage system).

Here is the “RocksDB Stall Time” graph of a host experiencing frequent stalls:

PMM can graph the different types of RocksDB stalls in the “RocksDB Stalls” graph. In our case here, we have 0.3-0.5 stalls per second due to “level0_slowdown” and “level0_slowdown_with_compaction.” This happens when Level0 stalls the engine due to slow compaction performance below its level.

Another metric reflecting the poor compaction performance is the pending compactions in “RocksDB Pending Operations”:

As I mentioned earlier, this almost always means something below RocksDB itself cannot keep up. In the top-right of PMM, there is a drop-down with OS-level metric dashboards; I recommend looking at “Disk Performance” in these scenarios:

On the “Disk Performance” dashboard you can see the “sda” disk has an average write time of 212ms, and a max of 1100ms (1.1 seconds). This is fairly slow.

Further, on the same dashboard I can see the CPU is waiting on disk I/O 98.70% of the time on average. This explains why RocksDB needs to stall to hold back some of the load!
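If you want to cross-check these numbers outside of PMM, iostat from the sysstat package gives a comparable view on the database host (a sketch; column names vary slightly by sysstat version):

iostat -xm 5
# watch await/w_await (average I/O wait in milliseconds) and %util for the affected device

Write waits in the hundreds of milliseconds, as seen on this dashboard, point to the storage layer as the bottleneck.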

The disks seem too busy to keep up! The “Mongod – Document Activity” graph explains the cause of the high disk usage: 10,000-60,000 inserts per second:

Here we can draw the conclusion that this volume of inserts on this system configuration causes some stalling in RocksDB.

RocksDB Block Cache

The RocksDB Block Cache is the in-heap cache RocksDB uses to cache uncompressed pages. Generally, deployments benefit from dedicating most of their memory to the Linux file system cache vs. the RocksDB Block Cache. We recommend using only 20-30% of the host RAM for block cache.
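On a host with 64GB of RAM, for example, that guideline works out to a block cache of roughly 13-19GB. In Percona Server for MongoDB with MongoRocks this is typically set in /etc/mongod.conf (a sketch; confirm the exact option name in your version’s documentation):

storage:
  engine: rocksdb
  rocksdb:
    cacheSizeGB: 16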

PMM can take away some of the guesswork with the “RocksDB Block Cache Hit Ratio” graph, showing the efficiency of the block cache:

It is difficult to define a “good” and “bad” number for this metric, as the number varies for every deployment. However, one important thing to look for is significant changes in this graph. In this example, the Block Cache has a page in cache 3000 times for every 1 time it does not.

If you wanted to test increasing your block cache, this graph becomes very useful. If you increase your block cache and do not see an improvement in the hit ratio after a lengthy period of testing, this usually means more block cache memory is not necessary.

RocksDB Read Latency Graphs

PMM graphs Read Latency metrics for RocksDB in several different graphs, one dedicated to Level0:

And three other graphs display Average, 99th Percentile and Maximum latencies for each RocksDB level. Here is an example from the 99th Percentile latency metrics:

Coming Soon

Percona Monitoring and Management needs to add some more metrics that explain the performance of the engine. The rate of deletes/tombstones in the system affects RocksDB’s performance. Currently, this metric is not something our system can easily gather like other engine metrics. Percona Monitoring and Management can’t easily graph the efficiency of the Bloom filter yet, either. These are currently open feature requests to the MongoRocks (and likely RocksDB) team(s) to add in future versions.

Percona’s release of Percona Server for MongoDB 3.4 includes a new, improved version of MongoRocks and RocksDB. More is available in the release notes!

Categories: MySQL

Percona XtraBackup 2.4.6 is Now Available

MySQL Performance Blog - Wed, 2017-02-22 18:49

Percona announces the GA release of Percona XtraBackup 2.4.6 on February 22, 2017. You can download it from our download site and apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

New features:
  • Percona XtraBackup implemented a new --remove-original option that can be used to remove the encrypted and compressed files once they’ve been decrypted/decompressed.
Bugs Fixed:
  • XtraBackup was using the username set for the server in a configuration file even if a different user was defined in the user’s configuration file. Bug fixed #1551706.
  • Incremental backups did not include xtrabackup_binlog_info and xtrabackup_galera_info files. Bug fixed #1643803.
  • If a warning was written to stdout instead of stderr during the streaming backup, it could cause an assertion in xbstream. Bug fixed #1647340.
  • xtrabackup --move-back option did not always restore out-of-datadir tablespaces to their original directories. Bug fixed #1648322.
  • innobackupex and xtrabackup scripts were showing the password in the ps output when it was passed as a command line argument. Bug fixed #907280.
  • Incremental backup would fail with a path like ~/backup/inc_1 because xtrabackup didn’t properly expand tilde. Bug fixed #1642826.
  • Fixed missing dependency check for Perl Digest::MD5 in rpm packages. This will now require perl-MD5 package to be installed from EPEL repositories on CentOS 5 and CentOS 6 (along with libev). Bug fixed #1644018.
  • Percona XtraBackup now supports -H, -h, -u and -p shortcuts for --hostname, --datadir, --user and --password respectively. Bugs fixed #1655438 and #1652044.

Release notes with all the bugfixes for Percona XtraBackup 2.4.6 are available in our online documentation. Please report any bugs to the launchpad bug tracker.

Categories: MySQL

Percona XtraBackup 2.3.7 is Now Available

MySQL Performance Blog - Wed, 2017-02-22 18:48

Percona announces the release of Percona XtraBackup 2.3.7 on February 22, 2017. Downloads are available from our download site or Percona Software Repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

This release is the current GA (Generally Available) stable release in the 2.3 series.

New Features
  • Percona XtraBackup has implemented a new --remove-original option that can be used to remove the encrypted and compressed files once they’ve been decrypted/decompressed.
Bugs Fixed:
  • XtraBackup was using the username set for the server in a configuration file even if a different user was defined in the user’s configuration file. Bug fixed #1551706.
  • Incremental backups did not include xtrabackup_binlog_info and xtrabackup_galera_info files. Bug fixed #1643803.
  • Percona XtraBackup would fail to compile with -DWITH_DEBUG and -DWITH_SSL=system options. Bug fixed #1647551.
  • xtrabackup --move-back option did not always restore out-of-datadir tablespaces to their original directories. Bug fixed #1648322.
  • innobackupex and xtrabackup scripts were showing the password in the ps output when it was passed as a command line argument. Bug fixed #907280.
  • Incremental backup would fail with a path like ~/backup/inc_1 because xtrabackup didn’t properly expand tilde. Bug fixed #1642826.
  • Fixed missing dependency check for Perl Digest::MD5 in rpm packages. This will now require perl-MD5 package to be installed from EPEL repositories on CentOS 5 and CentOS 6 (along with libev). Bug fixed #1644018.
  • Percona XtraBackup now supports -H, -h, -u and -p shortcuts for --hostname, --datadir, --user and --password respectively. Bugs fixed #1655438 and #1652044.

Other bugs fixed: #1655278.

Release notes with all the bugfixes for Percona XtraBackup 2.3.7 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

Categories: MySQL

Percona Monitoring and Management (PMM) Upgrade Guide

MySQL Performance Blog - Tue, 2017-02-21 22:53

This post is part of a series of Percona’s MongoDB 3.4 bundle release blogs. The purpose of this blog post is to demonstrate current best-practices for an in-place Percona Monitoring and Management (PMM) upgrade. Following this method allows you to retain data previously collected by PMM in your MySQL or MongoDB environment, while upgrading to the latest version.

Step 1: Housekeeping

Before beginning this process, I recommend that you use a package manager that installs directly from Percona’s official software repository. The install instructions vary by distro, but for Ubuntu users the commands are:

wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb

sudo dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb

Step 2: PMM Server Upgrade

Now that we have ensured we’re using Percona’s official software repository, we can continue with the upgrade. To check which version of PMM server is running, execute the following command on your PMM server host:

docker ps

This command shows a list of all running Docker containers. The version of PMM server you are running is found in the image description.
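If you only want the image tag, a narrower invocation works as well (a sketch; it assumes the container uses the standard pmm-server name):

docker ps --filter name=pmm-server --format '{{.Image}}'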

Once you’ve verified you are on an older version, it’s time to upgrade!

The first step is to stop and remove your docker pmm-server container with the following command:

docker stop pmm-server && docker rm pmm-server

Please note that this command may take several seconds to complete.

The next step is to create and run the image with the new version tag. In this case, we are installing version 1.1.0. Please make sure to verify the correct image name in the install instructions.

Run the command below to create and run the new image.

docker run -d -p 80:80 --volumes-from pmm-data --name pmm-server --restart always percona/pmm-server:1.1.0

We can confirm our new image is running with the following command:

docker ps

As you can see, the latest version of PMM server is installed. The final step in the process is to update the PMM client on each host to be monitored.

Step 3: PMM Client Upgrade

The GA version of Percona Monitoring and Management supports in-place upgrades. Instructions can be found in our documentation. On the client side, update the local apt cache, and upgrade to the new version of pmm-client by running the following commands:

apt-get update

apt-get install pmm-client
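To confirm the client upgraded cleanly and the monitored services are still registered, pmm-admin is handy (a sketch; flags and output vary slightly by client version):

pmm-admin --version

pmm-admin list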

Congrats! We’ve successfully upgraded to the latest PMM version. As you can tell from the graph below, there is a slight gap in our polling data due to the downtime necessary to upgrade the version. However, we have verified that the data that existed prior to the upgrade is still available and new data is being gathered.

Conclusion

I hope this blog post has given you the confidence to do an in-place Percona Monitoring and Management upgrade. As always, please submit your feedback on our forums with regards to any PMM-related suggestions or questions. Our goal is to make PMM the best-available open-source MySQL and MongoDB monitoring tool.

Categories: MySQL

Webinar Wednesday February 22, 2017: Percona Server for MongoDB 3.4 Product Bundle Release

MySQL Performance Blog - Tue, 2017-02-21 21:06

Join Percona’s MongoDB Practice Manager David Murphy on Wednesday, February 22, 2017 at 10:00 am PST / 1:00 pm EST (UTC-8) as he reviews and discusses the Percona Server for MongoDB, Percona Monitoring and Management (PMM) and Percona Toolkit product bundle release.

The webinar covers how this new bundled release ensures a robust, secure database that can be adapted to changing business requirements. It demonstrates how MongoDB, PMM and Percona Toolkit are used together so that organizations benefit from the cost savings and agility provided by free and proven open source software.

Percona Server for MongoDB 3.4 delivers all the latest MongoDB 3.4 Community Edition features, additional Enterprise features and a greater choice of storage engines.

Along with improved insight into the database environment, the solution provides enhanced control options for optimizing a wider range of database workloads with greater reliability and security.

Some of the features that will be discussed are:

  • Percona Server for MongoDB 3.4
    • All the features of MongoDB Community Edition 3.4, which provides an open source, fully compatible, drop-in replacement:
      • Integrated, pluggable authentication with LDAP to provide a centralized enterprise authentication service
      • Open-source auditing for visibility into user and process actions in the database, with the ability to redact sensitive information (such as user names and IP addresses) from log files
      • Hot backups for the WiredTiger engine protect against data loss in the case of a crash or disaster, without impacting performance
      • Two storage engine options not supported by MongoDB Community Edition 3.4:
        • MongoRocks, the RocksDB-powered storage engine, designed for demanding, high-volume data workloads such as in IoT applications, on-premises or in the cloud.
        • Percona Memory Engine is ideal for in-memory computing and other applications demanding very low latency workloads.
  • Percona Monitoring and Management 1.1
    • Support for MongoDB and Percona Server for MongoDB
    • Graphical dashboard information for WiredTiger, MongoRocks and Percona Memory Engine
  • Percona Toolkit 3.0
    • Two new tools for MongoDB:
      • pt-mongodb-summary (the equivalent of pt-mysql-summary) provides a quick, at-a-glance overview of a MongoDB and Percona Server for MongoDB instance.
      • pt-mongodb-query-digest (the equivalent of pt-query-digest for MySQL) offers a query review for troubleshooting.

You can register for the webinar here.

David Murphy, MongoDB Practice Manager

David joined Percona in October 2015 as Practice Manager for MongoDB. Prior to that, David joined the ObjectRocket by Rackspace team as the Lead DBA in Sept 2013. With the growth involved in any recently acquired startup, David’s role covered a wide range: evangelism, research, run book development, knowledge base design, consulting, technical account management, mentoring and much more.

Prior to the world of MongoDB, David was a MySQL and NoSQL architect at Electronic Arts. There, he worked on some of the largest titles in the world, such as FIFA, SimCity, and Battlefield, with responsibility for tuning, design, and technology choices. David maintains an active interest in database speaking and exploring new technologies.

Categories: MySQL

The Best Activity Tracking Watch

Xaprb, home of innotop - Tue, 2017-02-21 20:31

After thinking about smart watches, activity trackers, and similar devices for a while, I bought a Withings Steel HR. My goal was to find a traditional stylish-looking watch with long battery life, heart rate tracking, sleep tracking, and activity tracking. Here’s my experience thus far.

In the last few years I’ve kept an interested eye on the explosion of health- and fitness-tracking devices, considering whether it was time for me to take the plunge and get one. Broadly speaking, there are a few categories of devices on the market, each appealing to a different user for various reasons. Here’s my summary:

  • If you want an iPhone on your wrist, get an Apple Watch. It has poor battery life but it’s seamlessly integrated with your Apple lifestyle.
  • If you want the best fitness/activity tracking, look into FitBit and Garmin. The hardware, apps, and tracking features are unmatched.
  • If you want a stylish analog watch with long battery life and full-featured activity tracking, check out Withings.
  • If you want a hybrid digital watch with pseudo-analog styling, take a look at the Garmin 235 or Ticwear.

There are many more choices than these, but those represent some of the leaders in the field. For more detail, read on.

Today’s smartwatches have many features that go far beyond watch functionality. You can make and receive calls and texts, respond to emails, dismiss calendar alarms, get navigation guidance, and much more. In addition to features you’re used to getting from a smartphone, many devices offer a broad range of features to measure your activity and vital signs: heart rate tracking, step counting, GPS tracking for runs and walks, sleep tracking, and so on.

I was looking for some of what I consider to be the most important features of these, without sacrificing the form factor and style of an analog watch. The Withings Steel HR, a new offering on the market, was my choice because as far as I could tell, it was the only analog watch that tracks heart rate, steps, and sleep, and has a good battery life. I pre-ordered it before it was available and have had it for a couple of months now. I’ll review it in detail below.

I also have some experience with friends who’ve had some of these devices. As with everything, these are intensely personal choices and it’s best to go to a store where you can see and try as many of the models as possible. For example, FitBit is easily one of the leaders in the field of wearables, and makes fantastic products. But are they right for me? One of my friends doesn’t like her FitBit because of the flickering lights she sees at night in bed, which I know would bother me. I was told the FitBit Charge doesn’t truly detect your sleep automatically, though I’m not sure of that, and it may not be waterproof. And several friends complained about the difficulty of using all its many features. As for me, I simply don’t care for the aesthetics of the FitBit, nor the way it feels on my wrist; it has a rigid band that I dislike.

Here’s a summary of some wearables and their features, to give a sense of the market. This is not exhaustive and may not be current or accurate. Note that I’m not an Android user and there are far too many options in the Android market to summarize here. I’m also not listing Jawbone; I couldn’t find anyone who was willing to recommend a Jawbone.

  • Apple Watch (Smart Watch): short battery life; heart rate tracking; activities, notifications, apps
  • Ticwear (Smart Watch): short battery life; heart rate tracking; activities, notifications, GPS
  • Garmin 235 (Watch-Like Tracker): medium battery life; heart rate and sleep tracking; activities, notifications, GPS
  • Garmin VivoMove (Augmented Watch): long battery life; sleep tracking
  • Withings Steel HR (Augmented Watch): long battery life; heart rate and sleep tracking; activities, notifications
  • Misfit Phase (Augmented Watch): long battery life; sleep tracking; activities
  • Fitbit Charge 2 (Fitness Tracker): medium battery life; heart rate and sleep tracking; activities, notifications
  • Garmin VivoSmart (Fitness Tracker): medium battery life; heart rate and sleep tracking; activities, notifications, GPS

Each of these makes and models typically has options or related products that have more or less functionality. For example, Withings offers an Activite Pop that is simpler than the Steel HR, and doesn’t track heart rate. But its battery lasts 8 months instead of 25 days. Misfit has a more fully-featured watch, but it’s not analog. And so on.

It’s also worth noting that the space is fast-moving. While I was biding my time, trying to decide what I wanted, at least three watch and wearable companies were acquired or went out of business, and several new options became available.

The Withings Steel HR

My watch of choice was the Withings Steel HR. I’m a traditional, simplistic watch guy. My analog watch of choice is a Timex Weekender. I wanted a minimalistic analog watch with long battery life and the following features if possible:

  • Sleep, heart rate, and step tracking
  • Activity tracking; I could take or leave this feature
  • I wasn’t really interested in text messages, notifications, and the like
  • Other bonuses: waterproof, quick charging

One of the reasons I wanted a minimalistic product, with fewer smartphone features and more activity-tracking features, is because my experience is a small device with a lot of features is hard to use. I’d rather have a few easy-to-use features than a lot. This biased me away from devices like the Garmin Forerunner 235 and Ticwatch.

The Withings Steel HR tracks steps continuously, and heart rate every few minutes instead of continuously, but has an exercise mode (just long-press the button on the side to activate) that tracks continuously. It tracks sleep automatically, detecting when you’re in bed and light/deep sleep. It’s able to vibrate on your wrist when you get a text message, call, or calendar notification, and displays the caller ID or similar. And it can act as an alarm clock, vibrating to wake you.

It also auto-detects activities and the companion app lets you set goals and review your health statistics.

It’s mostly an analog watch in appearance, although it has a notification area where caller ID and the like appear, and a goal tracker to show how much of your daily step tracking goal you’ve achieved.

I got the black 36mm model. I like the styling. I have found it functional, and I appreciate the long battery life. The band is very comfortable and flexible. I wear my Withings 24x7, even in the shower. Here’s a breakdown of how well things work:

  • The watch hands are slightly hard to see depending on the lighting, because they aren’t white; they are polished stainless steel or similar.
  • Sleep tracking is reasonably good, though it usually thinks I’ve gone to bed before I really do. Sometimes I sit on the couch and work in the evenings for a couple of hours, typing on my laptop or writing in my notebook, and it detects that I’m in “light sleep” during this time.
  • Heart rate tracking is only directionally accurate. Sometimes I look at it in the middle of an intense workout and it’s reporting a heart rate of 62 when I’m pretty sure I’m well above 120. I’ve found it to report high heart rates when I’m at rest, too. I’ve also found long gaps in the tracking when I review the statistics in the app, such as at night. It’s reasonably accurate, though, and over the long term it should be a good gauge of my resting heart rate trend, which is what I care about most.
  • Step tracking is quite accurate, to within 1% or so in my tests. I am unsure how the step measurements from my iPhone are reconciled with the step measurements from the watch. Maybe they are independent.
  • The battery life is about 15-20 days for me, depending on how often I activate the workout mode.
  • Waterproof enough that I wear it in the shower. I’ve found it to mist a bit in hot weather in direct sun once.
  • The setup was a bit finicky; syncing it to my phone with Bluetooth took a couple of tries initially. Since then it’s been fine.
  • The iPhone app is probably not as good as Garmin’s or FitBit’s, but it’s pretty good.
  • Text notifications don’t seem to work. (I have an iPhone). I don’t know about calendar notifications, because I don’t use the iPhone calendar app.
  • Call notifications work well, and the caller ID displays quickly and is surprisingly usable for such a small area.
  • The alarm doesn’t seem to work. I don’t think I’m sleeping through it. I turned it off after a while because it seemed inconsistent, as though it only worked if the phone and watch were connected by Bluetooth at the exact instant the alarm was supposed to ring. I could be wrong about this.

All in all, I’m happy with it. If I were to use something else instead, it might be the Fitbit Charge line of products. What are your thoughts and experiences using any of these devices?

Picture Credit

Categories: MySQL

MongoDB 3.4 Bundle Release: Percona Server for MongoDB 3.4, Percona Monitoring and Management 1.1, Percona Toolkit 3.0 with MongoDB

MySQL Performance Blog - Mon, 2017-02-20 21:51

This blog post is the first in a series on Percona’s MongoDB 3.4 bundle release. This release includes Percona Server for MongoDB, Percona Monitoring and Management, and Percona Toolkit. In this post, we’ll look at the features included in the release.

We have a lot of great MongoDB content coming your way in the next few weeks. However, I wanted first to give you a quick list of the major things to be on the look out for.

This new bundled release ensures a robust, secure database that you can adapt to changing business requirements. It helps demonstrate how organizations can use MongoDB (and Percona Server for MongoDB), PMM and Percona Toolkit together to benefit from the cost savings and agility provided by free and proven open source software.

Percona Server for MongoDB 3.4 delivers all the latest MongoDB 3.4 Community Edition features, additional Enterprise features and a greater choice of storage engines.

Some of these new features include:

  • Shard member types. All nodes now need to know what they do – this helps with reporting and architecture planning more than the underlying code, but it’s an important first step.
  • Sharding balancer moved to config server primary
  • Configuration servers must now be a replica set
  • Faster balancing (shard count/2) – concurrent balancing actions can now happen at the same time!
  • Sharding and replication tags renamed to “zones” – again, an important first step
  • Default write behavior moved to majority – this could majorly impact many workloads, but moving to a default safe write mode is important
  • New decimal data type
  • Graph aggregation functions – we will talk about these more in a later blog, but for now note that graph and faceted searches are added.
  • Collations added to most access patterns for collections and databases
  • …and much more (a quick mongo shell sketch of two of these features follows this list)
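As a quick taste of the new decimal type and graph aggregation mentioned above, here is roughly what they look like in the mongo shell (a sketch; the database, collection and field names are hypothetical):

> db.items.insert({ sku: "a1", price: NumberDecimal("9.99") })
> db.staff.aggregate([ { $graphLookup: { from: "staff", startWith: "$reportsTo", connectFromField: "reportsTo", connectToField: "name", as: "reportingChain" } } ])

The first statement stores an exact decimal value, and the second walks a reporting hierarchy with the new graph lookup stage.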

Percona Server for MongoDB includes all the features of MongoDB Community Edition 3.4, providing an open source, fully-compatible, drop-in replacement with many improvements, such as:

  • Integrated, pluggable authentication with LDAP that provides a centralized enterprise authentication service
  • Open-source auditing for visibility into user and process actions in the database, with the ability to redact sensitive information (such as user names and IP addresses) from log files
  • Hot backups for the WiredTiger engine to protect against data loss in the case of a crash or disaster, without impacting performance
  • Two storage engine options not supported by MongoDB Community Edition 3.4 (doubling the total engine count choices):
    • MongoRocks, the RocksDB-powered storage engine, designed for demanding, high-volume data workloads such as in IoT applications, on-premises or in the cloud.
    • Percona Memory Engine is ideal for in-memory computing and other applications demanding very low latency workloads.

Percona Monitoring and Management 1.1

  • Support for MongoDB and Percona Server for MongoDB
  • Graphical dashboard information for WiredTiger, MongoRocks and Percona Memory Engine
  • Cluster and replica set wide views
  • Many more graphable metrics available for both for the OS and the database layer than currently provided by other tools in the ecosystem

Percona Toolkit 3.0

  • Two new tools for MongoDB are now in Percona’s Toolkit:
    • pt-mongodb-summary (the equivalent of pt-mysql-summary) provides a quick, at-a-glance overview of a MongoDB and Percona Server for MongoDB instance
      • This is useful for any DBA who wants a general idea of what’s happening in the system, what the state of their cluster/replica set is, and more.
    • pt-mongodb-query-digest (the equivalent of pt-query-digest for MySQL) offers a query review for troubleshooting
      • Query digest is one of the most used Toolkit features ever, and MongoDB is no different. Typically you might only look at your best and worst query times and document scans. This tool also shows 90th percentiles, and reviewing the top 10 queries takes seconds rather than minutes.

For all of these topics, you will see more blogs in the next few weeks that cover them in detail. Some people have asked what Percona’s MongoDB commitment looks like. Hopefully, this series of blogs helps show how improving open source databases is central to the Percona vision. We are here to make the world better for developers, DBAs and other MongoDB users.

Categories: MySQL

Percona Toolkit 3.0.1 is now available

MySQL Performance Blog - Mon, 2017-02-20 21:50

Percona announces the availability of Percona Toolkit 3.0.1 on February 20, 2017. This is the first general availability (GA) release in the 3.0 series, with a focus on adding MongoDB tools:

Downloads are available from the Percona Software Repositories.

NOTE: If you are upgrading using Percona’s yum repositories, make sure that you enable the basearch repo, because Percona Toolkit 3.0 is not available in the noarch repo.

Percona Toolkit is a collection of advanced command-line tools that perform a variety of MySQL and MongoDB server and system tasks too difficult or complex for DBAs to perform manually. Percona Toolkit, like all Percona software, is free and open source.

This release includes changes from the previous 3.0.0 RC and the following additional changes:

  • Added a requirement to run pt-mongodb-summary as a user with the clusterAdmin or root built-in roles (a sketch of creating such a user follows below).
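If you do not already have such a user, creating one from the mongo shell looks roughly like this (a sketch; the user name and password are hypothetical):

> use admin
> db.createUser({ user: "ptuser", pwd: "secret", roles: [ "clusterAdmin" ] })

You would then pass those credentials to pt-mongodb-summary when running it.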

You can find release details in the release notes. Bugs can be reported on Toolkit’s launchpad bug tracker.

Categories: MySQL

Percona Monitoring and Management 1.1.1 is now available

MySQL Performance Blog - Mon, 2017-02-20 21:49

Percona announces the release of Percona Monitoring and Management 1.1.1 on February 20, 2017. This is the first general availability (GA) release in the PMM 1.1 series with a focus on providing alternative deployment options for PMM Server:

NOTE: The AMIs and VirtualBox images above are still experimental. For production, it is recommended to run Docker images.

The instructions for installing Percona Monitoring and Management 1.1.1 are available in the documentation. Detailed release notes are available here.

There are no changes compared to the previous 1.1.0 Beta release, except small fixes for the MongoDB metrics dashboards.

A live demo of PMM is available at pmmdemo.percona.com.

We welcome your feedback and questions on our PMM forum.

About Percona Monitoring and Management
Percona Monitoring and Management is an open-source platform for managing and monitoring MySQL and MongoDB performance. Percona developed it in collaboration with experts in the field of managed database services, support and consulting.

PMM is a free and open-source solution that you can run in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

Categories: MySQL

Percona Server for MongoDB 3.4.2-1.2 is now available

MySQL Performance Blog - Mon, 2017-02-20 21:49

Percona announces the release of Percona Server for MongoDB 3.4.2-1.2 on February 20, 2017. It is the first general availability (GA) release in the 3.4 series. Download the latest version from the Percona web site or the Percona Software Repositories.

Percona Server for MongoDB is an enhanced, open source, fully compatible, highly-scalable, zero-maintenance downtime database supporting the MongoDB v3.4 protocol and drivers. It extends MongoDB with Percona Memory Engine and MongoRocks storage engine, as well as several enterprise-grade features:

Percona Server for MongoDB requires no changes to MongoDB applications or code.

This release is based on MongoDB 3.4.2 and includes changes from PSMDB 3.4.0 Beta and 3.4.1 RC, as well as the following additional changes:

  • Fixed the audit log message format to comply with upstream MongoDB:
    • Changed params document to param
    • Added roles document
    • Fixed date and time format
    • Changed host field to ip in the local and remote documents

Percona Server for MongoDB 3.4.2-1.2 release notes are available in the official documentation.

Categories: MySQL

How Venture Capitalists Have Helped Me

Xaprb, home of innotop - Sun, 2017-02-19 20:20

Venture capital is a competitive industry. Investors compete to win the best companies, so they pitch founders on the value they bring to their portfolio companies. When I was a new founder, their pitches didn’t resonate with me. I found it difficult to understand how they could help. A few years later, I get it; they really can add value. This is what I’ve found so far.

When I first began speaking to potential venture capital investors, I felt as though their pitches to me were all variations on “you’ll get access to our network.” This fell on mostly deaf ears. At that point in my career, it was actually an anti-value for me. My experience of “networking” was associated with cliques and special privileges shared between people based on belonging to a club. It sounded like a fraternity’s pitch to a would-be pledge, to be honest.

At this point I’ve had a few years’ experience working with some fairly active and involved investors, and on reflection, I see more of the value than I did at the beginning. If I were to explain this to past-me, here’s what I might try to emphasize.

Recruiting

In the early years, I didn’t understand how important it would be to hire carefully, nor how difficult and time-consuming it would be. I thought I was good at hiring, but I was wrong. This is a book-length topic, but my venture capital investors have helped in several concrete ways between then and now.

  • Selecting a recruiter. Recruiters have turned out to be far more important than I’d expected. My executive recruiter, in particular, has become an extension of my team and a true partner to the business. But recruiting, as an industry, in many ways deserves the reputation it has. Exceptional recruiters are exceptional. My investors helped me find a recruiter who is good for me and for our business. Without my investors’ recruiting services, I’d probably have hired a bad recruiter, or one who was good at something I didn’t need (or good for a different company but not mine), wasted a lot of time and money, and potentially even failed to find good people for vital positions at crucial moments in the company’s growth. This is a life-and-death matter. The in-house recruiting specialists at my investors have made a huge difference here.
  • Introductions. Several of my most important hires have come through the wisdom and judgment of my investors combined with their extensive networks. Timing is so, so important. By knowing the right person at the right time, they’ve helped find the needle in the haystack.
  • Closing. High-performing people are rarely “on the job market” and are careful with their careers. Joining an early-stage startup with a first-time founder/CEO is a pretty risky move. Without investors, many people never would have even started a conversation with me, and the investors have been instrumental in getting to yes. The investors have helped explain what they saw in the company and its opportunity, lending an independent, third-party perspective that I would be unable to. “Why did you invest in this company?” is an answer only an investor can give.
  • Understanding The Market. My investors have a much broader view of what’s normal and expected in the industry, and can quickly give advice and guidance on what’s going to work and won’t in recruiting-related matters. They’re scouts reporting from the front lines. They can help vet for common mistakes in our processes, provide data on compensation norms, give strategic advice on closing a particular candidate, and so on.

Note that my investors’ recruiting services aren’t for doing recruiting, per se. They’re for helping my company succeed in our own recruiting efforts.

Planning And Cross-Checking

Investors, both actual and potential, have helped review and clarify my plans. They have found things I overlooked, pointed out errors in my logic, and made my models much more rigorous. They have helped me understand the common language of things such as operating plans, showing me what types of models will be quick to evaluate and provide good answers, as well as what’s conventional and therefore easy to pattern-match.

My board member at NEA, Arjun Aggarwal, has spent a great deal of time helping build models for many aspects of the business, helping turn thoughts into spreadsheets. This is not typical; board members aren’t usually this active and involved. Arjun adds a lot of value to the team by doing this.

Speaking to investors generally results in at least some type of challenge to my thinking, even if very diplomatic. Every question is an effort to go a bit deeper. When I speak to venture capitalists, I write down the questions they ask me. Common themes always emerge. I am not a venture capitalist and don’t think like one. Being able to review my notes and see where I need to focus, both for their sake and for mine, is invaluable to me.

Pitch Practice And Feedback

I’m not a pitcher by nature. But virtually everything I do involves summarizing the business’s value, current status, and opportunity to someone, whether that’s a potential recruit, an investor, a partner, a customer, the board of directors, the all-staff meetings, and so on. Venture capitalists provide feedback on how well I’m doing that.

My investors have also gone beyond the call of duty to help me understand how to pitch better, build a better deck, and helped me with pitch practice and rehearsal. As I’ve leaned into this process, I’ve found it useful all day, every day.

When I’ve pitched potential investors, I’ve found it very useful to note and decode their feedback. Some will not say no in a direct way, leaning on compliments followed by encouragement to stay in touch. Others will take time to be very specific about why they’ve decided not to invest. Their feedback is clear guidance as to what they think the business should focus on achieving. It has to be taken with a grain of salt, but collating this feedback often results in advice that’s less conflicting than some other sources I’ve gone to for help. It also points out where I’m just doing a bad job communicating our strengths; I’ve gotten feedback that we should do X when, in fact, we already do X and I just wasn’t saying it very well.

Press and Media Relations

Early startups generally can’t and shouldn’t spend money on an expensive PR firm. Both of my major investors have PR staff and services who have helped us with periodic work we otherwise wouldn’t have had resources to do well.

Similar to recruiters, PR firms are probably a trap for founders pretty often—not that they mean badly, but you need to know how to work with them or you’ll steer yourself astray. Working with our investors’ PR experts instead of with agencies has allowed us to get lots of help at particular times, without taking a big risk on a long-term commitment.

Introductions To Advisors

Various introductions to advisors, entrepreneurs-in-residence, and other helpful people have come through my investors. Many of these people have generously spent significant amounts of time with me and others on the team. We’ve dodged many serious mistakes as a result. We’ve also seized on opportunities we didn’t see ourselves, and found alternative ways to do things that produced surprising results at times. This is true both on the business and the technical sides.

Conclusions

If you’d asked me in 2013, I think I would have said that investors were perhaps exaggerating how much they could help us. I’d have said “all they do is say they’ll make introductions, and introductions are just going to use up precious time I need to conserve.” That’s not what I’ve found. I’ve received help I didn’t expect, didn’t know I needed, and that has made a big difference to the business.

PS: If I’ve omitted anything you’ve done for me, it’s forgetfulness, not passive aggressiveness.

Pic Credit

Categories: MySQL

MySQL Bug 72804 Workaround: “BINLOG statement can no longer be used to apply query events”

MySQL Performance Blog - Thu, 2017-02-16 23:39

In this blog post, we’ll look at a workaround for MySQL bug 72804.

Recently I worked on a ticket where a customer performed a point-in-time recovery (PITR) using a large set of binary logs. Normally we handle this by applying the last backup, then re-applying all binary logs created since the last backup. In the middle of the procedure, their new server crashed. We identified the binary log position and tried to restart the PITR from there. However, using the option --start-position, the restore failed with the error “The BINLOG statement of type Table_map was not preceded by a format description BINLOG statement.” This is a known bug and is reported as MySQL Bug #72804: “BINLOG statement can no longer be used to apply Query events.”

I created a small test to demonstrate a workaround that we implemented (and worked).

First, I ran a large import process that created several binary logs. I used a small value in max_binlog_size and tested using the database “employees” (a standard database used for testing). Then I dropped the database.

mysql> set sql_log_bin=0;
Query OK, 0 rows affected (0.33 sec)

mysql> drop database employees;
Query OK, 8 rows affected (1.25 sec)

To demonstrate the recovery process, I joined all the binary log files into one SQL file and started an import.

sveta@Thinkie:~/build/ps-5.7/mysql-test$ ../bin/mysqlbinlog var/mysqld.1/data/master.000001 var/mysqld.1/data/master.000002 var/mysqld.1/data/master.000003 var/mysqld.1/data/master.000004 var/mysqld.1/data/master.000005 > binlogs.sql
sveta@Thinkie:~/build/ps-5.7/mysql-test$ GENERATE_ERROR.sh binlogs.sql
sveta@Thinkie:~/build/ps-5.7/mysql-test$ mysql < binlogs.sql
ERROR 1064 (42000) at line 9020: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'inserting error

I intentionally generated a syntax error in the resulting file with the help of the GENERATE_ERROR.sh script (which just inserts a bogus SQL statement in a random row). The error message clearly showed where the import stopped: line 9020. I then created a file that cropped out the part that had already been imported (lines 1-9020), and tried to import this new file.

sveta@Thinkie:~/build/ps-5.7/mysql-test$ tail -n +9021 binlogs.sql > binlogs_rest.sql
sveta@Thinkie:~/build/ps-5.7/mysql-test$ mysql < binlogs_rest.sql
ERROR 1609 (HY000) at line 134: The BINLOG statement of type `Table_map` was not preceded by a format description BINLOG statement.

Again, the import failed with exactly the same error the customer saw. The reason is that the BINLOG statement – which applies changes from the binary log – expects the format description event to run in the same session as the binary log import, before the other events. The format description event was present at the start of the original import, which failed at line 9020. The later import (from line 9021 on) doesn’t contain this format statement.

Fortunately, this format description event is the same for a given server version! We can simply take it from the beginning of the SQL file (or the original binary log file) and put it at the top of the file created after the crash (the one without lines 1-9020).

With MySQL versions 5.6 and 5.7, this event is located in the first 11 rows:

sveta@Thinkie:~/build/ps-5.7/mysql-test$ head -n 11 binlogs.sql | cat -n
     1  /*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
     2  /*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
     3  DELIMITER /*!*/;
     4  # at 4
     5  #170128 17:58:11 server id 1 end_log_pos 123 CRC32 0xccda074a Start: binlog v 4, server v 5.7.16-9-debug-log created 170128 17:58:11 at startup
     6  ROLLBACK/*!*/;
     7  BINLOG '
     8  g7GMWA8BAAAAdwAAAHsAAAAAAAQANS43LjE2LTktZGVidWctbG9nAAAAAAAAAAAAAAAAAAAAAAAA
     9  AAAAAAAAAAAAAAAAAACDsYxYEzgNAAgAEgAEBAQEEgAAXwAEGggAAAAICAgCAAAACgoKKioAEjQA
    10  AUoH2sw=
    11  '/*!*/;

The first rows are session setup and meta information; the BINLOG statement on rows 7-11 is the format description event itself. The only thing we need to export into our resulting file is these 11 lines:

sveta@Thinkie:~/build/ps-5.7/mysql-test$ head -n 11 binlogs.sql > binlogs_rest_with_format.sql
sveta@Thinkie:~/build/ps-5.7/mysql-test$ cat binlogs_rest.sql >> binlogs_rest_with_format.sql
sveta@Thinkie:~/build/ps-5.7/mysql-test$ mysql < binlogs_rest_with_format.sql
sveta@Thinkie:~/build/ps-5.7/mysql-test$

After this, the import succeeded!
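For future reference, the whole fix can also be expressed as a single pipeline (a sketch that reuses the file name and line numbers from this example; adjust them to match where your own import stopped):

( head -n 11 binlogs.sql; tail -n +9021 binlogs.sql ) | mysql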

Categories: MySQL

Percona Blog Poll Results: What Programming Languages Are You Using for Backend Development?

MySQL Performance Blog - Thu, 2017-02-16 21:53

In this blog we’ll look at the results from Percona’s blog poll on what programming languages you’re using for backend development.

Late last year we started a poll on what backend programming languages are being used by the open source community. The three components of the backend – server, application, and database – are what make a website or application work. Below are the results of Percona’s poll on backend programming languages in use by the community:


One of the best-known and earliest web service stacks is the LAMP stack, which spelled out refers to Linux, Apache, MySQL and PHP/Perl/Python. We can see that this early model is still popular when it comes to the backend.

PHP still remains a very common choice for a backend programming language, with Python moving up the list as well. Perl seems to be fading in popularity, despite being used a lot in the MySQL world.

Java is also showing signs of strength, demonstrating the strides MySQL is making in enterprise applications. We can also see that JavaScript is increasingly being used not only as a front-end programming language, but also as a back-end language with the Node.JS framework.

Finally, Go is a language to look out for. Go is an open source programming language created by Google. It first appeared in 2009, and is already more popular than Perl or Ruby according to this poll.

Thanks to the community for participating in our poll. You can take our latest poll on what database engine you are using to store time series data here.

Categories: MySQL

MariaDB at Percona Live Open Source Database Conference 2017

MySQL Performance Blog - Thu, 2017-02-16 18:08

In this blog, we’ll look at how we plan to represent MariaDB at Percona Live.

The MariaDB Corporation is organizing a conference called M17 on the East Coast in April. Some Perconians (Peter Zaitsev, Vadim Tkachenko, Sveta Smirnova, Alex Rubin, Colin Charles) decided to submit some interesting talks for that conference. Percona also offered to sponsor the conference.

As of this post, the talks haven’t been accepted, and we were politely told that we couldn’t sponsor.

Some of the proposed talks were:

  • MariaDB Backup with Percona XtraBackup (Vadim Tkachenko)
  • Managing MariaDB Server operations with Percona Toolkit (Colin Charles)
  • MariaDB Server Monitoring with Percona Monitoring and Management (Peter Zaitsev)
  • Securing your MariaDB Server/MySQL data (Colin Charles, Ronald Bradford)
  • Data Analytics with MySQL, Apache Spark and Apache Drill (Alexander Rubin)
  • Performance Schema for MySQL and MariaDB Troubleshooting (Sveta Smirnova)

At Percona, we think MariaDB Server is an important part of the MySQL ecosystem. This is why the Percona Live Open Source Database Conference 2017 in Santa Clara has a MariaDB mini-track, consisting of talks from various Percona and MariaDB experts:

If any of these topics look enticing, come to the conference. We have MariaDB at Percona Live.

To make your decision easier, we’ve created a special promo code that gets you $75 off a full conference pass! Just use MariaDB@PL17 at checkout.

In the meantime, we will continue to write and discuss MariaDB, and any other open source database technologies. The power of the open source community is the free exchange of ideas, healthy competition and open dialog within the community.

Here are some more past presentations that are also relevant:

Categories: MySQL