In April and May, Percona will hold and participate in two OpenStack events: OpenStack Live and the OpenStack Summit. Join our talks at these events in Santa Clara and Vancouver for new insights into MySQL operations at the core of OpenStack, as well as the latest information on running MySQL guest instances.
Next week (April 13-14), Percona will host OpenStack Live at the Santa Clara Convention Center. Conveniently located for those in the Bay Area, this two-day, user-focused event will cover many core aspects of OpenStack including Nova, Swift, Neutron, and Trove as well as related technologies like Ceph and Docker. (OpenStack Live passes include access to keynotes and the expo hall at the Percona Live MySQL Conference.)
OpenStack Live will be a packed two days with 6 hands-on tutorials, 18 sessions, and keynotes and panel discussions featuring speakers from Facebook, EMC, ObjectRocket, Percona, Yahoo, VMware, Deep Information Sciences, and special guest, Steve Wozniak. Yes, the real Steve Wozniak.
Of course I’m looking forward to my session, “An introduction to Database as a Service with an emphasis on OpenStack using Trove,” with Amrith Kumar, Founder and CTO of Tesora, on Tuesday from 11:30am – 12:20pm in Room 209. Some other topics you shouldn’t miss include:
Tutorial: Deploying, Configuring and Operating OpenStack Trove
Monday, April 13: 9:30am – 12:30pm in Room 203
Amrith Kumar (Tesora) and Sriram Kalyanasundaram (Tesora) will lead a three-hour, hands-on tutorial on Trove DBaaS. It’s rare to get such a focused tutorial on this key component of OpenStack – and this one is led by Tesora, a major contributor to the Trove project. If you are even considering implementing Trove for your cloud, this is a must-attend class.
Tutorial: How to Get Your Groove on with OpenStack Swift Object Storage
Monday, April 13: 1:30pm – 4:30pm in Room 204
John Dickinson (OpenStack Swift), Manzoor Brar (SwiftStack), and Sergei Glushenko (Percona) will lead the audience on a journey into Swift, OpenStack’s object store project. John, the Swift “Project Team Lead” (PTL), and Manzoor will show you how to deploy a Swift cluster, use it with real applications, and, with the assistance of Sergei from Percona, how to properly back up MySQL databases to a Swift cluster.
Sessions I’d recommend attending include:
Session: MySQL and OpenStack deep dive
Tuesday, April 14: 11:30am – 12:20pm in Room 204
Peter Boros (Percona)
Session: Deploying an OpenStack Cloud at Scale at Time Warner Cable
Tuesday, April 14: 1:20pm – 2:10pm in Room 203
Matthew Fischer (Time Warner Cable), Clayton O’Neill (Time Warner Cable)
Session: Designing a highly resilient Network Infrastructure for OpenStack Clouds
Tuesday, April 14: 3:50pm – 4:40pm in Room 203
Pere Monclus (PLUMgrid)
This is only a small sample of the tutorials and talks which you will hear at OpenStack Live. If you’re in the Bay Area next week and are interested in OpenStack or just want to learn more about some of the core components, register today to join us in Santa Clara from April 13-14. It will be well worth the investment.
As a reminder, getting the most out of all tutorials at OpenStack Live requires that you bring your own laptop and are ready to dig into the technologies. Review the session descriptions for any materials that should be downloaded or installed prior to the event.

Talks at OpenStack Summit Vancouver
Next month, Percona will join thousands in Vancouver for the next OpenStack Summit. In addition to helping those developing and running OpenStack to operate, optimize, and achieve high availability of the MySQL core, Percona MySQL experts will speak in two sessions.
Deep Dive into MySQL replication with OpenStack Trove, and Kilo
George Lorch (Percona) and Amrith Kumar (Tesora) will present an in-depth exploration of MySQL replication with Trove and Kilo capabilities. This is a great way for anyone interested in OpenStack DBaaS to get the latest information.
Core Services MySQL Database Backup and Recovery to Swift
Most OpenStack services rely heavily on MySQL, but how do you mitigate data loss in the event of a failed upgrade or unplanned outage? Kenny Gryp (Percona) and Chris Nelson (SwiftStack), experts in MySQL and storage, will discuss this challenge and provide advice and strategies for a fast, complete recovery from highly available storage sources.
The team from Percona will again be found in the exhibit hall at booth T37. We’re looking forward to seeing old friends and meeting new ones that rely on Percona software, such as the Galera-based Percona XtraDB Cluster and Percona XtraBackup, in their OpenStack clouds.
Two years ago, Ovais Tariq explained in detail what kinds of problems existed before MySQL introduced metadata locks in 5.5.3 and how these locks help to prevent them. Still, some implications of metadata locking in MySQL remain unclear to users – DBAs and even software developers who target recent MySQL versions. So I’ve decided to include a slide or two on the subject in the presentation about InnoDB locks and deadlocks I plan to give (with my colleague Nilnandan Joshi) on April 16 at Percona Live 2015.
I decided to do this because I recently worked on an issue where it was claimed that the behavior of SELECT blocking TRUNCATE TABLE was wrong, simply because the transaction isolation level was set to READ COMMITTED – the assumption being that SELECT should then set no locks, and that no transaction should even start, no matter what the value of autocommit (it was explicitly set to 0 by the smart software involved).
The MySQL manual clearly says:
“To ensure transaction serializability, the server must not permit one session to perform a data definition language (DDL) statement on a table that is used in an uncompleted explicitly or implicitly started transaction in another session. The server achieves this by acquiring metadata locks on tables used within a transaction and deferring release of those locks until the transaction ends. A metadata lock on a table prevents changes to the table’s structure. This locking approach has the implication that a table that is being used by a transaction within one session cannot be used in DDL statements by other sessions until the transaction ends.”
So, the real challenge was to show that these metadata locks are still set in a transaction that started implicitly, by a SELECT immediately following SET autocommit=0 in a session. It was also a good chance to check how metadata locks are exposed in MySQL 5.7 via Performance Schema, so I set up a simple test.
First of all, I’ve enabled instrumentation for metadata locks:

[openxs@centos 5.7]$ bin/mysql --no-defaults -uroot -proot
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.7.6-m16 MySQL Community Server (GPL)

Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> UPDATE performance_schema.setup_consumers SET ENABLED = 'YES' WHERE NAME = 'global_instrumentation';
Query OK, 0 rows affected (0.00 sec)
Rows matched: 1  Changed: 0  Warnings: 0

mysql> UPDATE performance_schema.setup_instruments SET ENABLED = 'YES' WHERE NAME = 'wait/lock/metadata/sql/mdl';
Query OK, 0 rows affected (0.00 sec)
Rows matched: 1  Changed: 0  Warnings: 0
Then I set up a simple test based on the details from the issue (I had created the InnoDB table t and added a row to it before this):

mysql> set session autocommit=0;
Query OK, 0 rows affected (0.00 sec)

mysql> select @@autocommit, @@tx_isolation;
+--------------+----------------+
| @@autocommit | @@tx_isolation |
+--------------+----------------+
|            0 | READ-COMMITTED |
+--------------+----------------+
1 row in set (0.00 sec)

mysql> select * from t limit 1;
+----+------+
| id | val  |
+----+------+
|  1 |    1 |
+----+------+
1 row in set (0.00 sec)
Now, from another session I tried to TRUNCATE the table before the first session got a chance to do an explicit or implicit COMMIT. (The software in the issue I mentioned simply had not cared to do this, assuming no transaction had started – and with MySQL 5.1 it really did work well that way.)

mysql> truncate table t;
I was not surprised that TRUNCATE hung. The manual clearly says that metadata locks are not released until the transaction is committed. But let’s check them in Performance Schema (from the first session, where we executed the SELECT):

mysql> select * from performance_schema.metadata_locks\G
*************************** 1. row ***************************
          OBJECT_TYPE: TABLE
        OBJECT_SCHEMA: test
          OBJECT_NAME: t
OBJECT_INSTANCE_BEGIN: 140450128308592
            LOCK_TYPE: SHARED_READ
        LOCK_DURATION: TRANSACTION
          LOCK_STATUS: GRANTED
               SOURCE: sql_parse.cc:5585
      OWNER_THREAD_ID: 27
       OWNER_EVENT_ID: 17
*************************** 2. row ***************************
          OBJECT_TYPE: GLOBAL
        OBJECT_SCHEMA: NULL
          OBJECT_NAME: NULL
OBJECT_INSTANCE_BEGIN: 140450195436144
            LOCK_TYPE: INTENTION_EXCLUSIVE
        LOCK_DURATION: STATEMENT
          LOCK_STATUS: GRANTED
               SOURCE: sql_base.cc:5224
      OWNER_THREAD_ID: 30
       OWNER_EVENT_ID: 8
*************************** 3. row ***************************
          OBJECT_TYPE: SCHEMA
        OBJECT_SCHEMA: test
          OBJECT_NAME: NULL
OBJECT_INSTANCE_BEGIN: 140450195434272
            LOCK_TYPE: INTENTION_EXCLUSIVE
        LOCK_DURATION: TRANSACTION
          LOCK_STATUS: GRANTED
               SOURCE: sql_base.cc:5209
      OWNER_THREAD_ID: 30
       OWNER_EVENT_ID: 8
*************************** 4. row ***************************
          OBJECT_TYPE: TABLE
        OBJECT_SCHEMA: test
          OBJECT_NAME: t
OBJECT_INSTANCE_BEGIN: 140450195434368
            LOCK_TYPE: EXCLUSIVE
        LOCK_DURATION: TRANSACTION
          LOCK_STATUS: PENDING
               SOURCE: sql_parse.cc:5585
      OWNER_THREAD_ID: 30
       OWNER_EVENT_ID: 8
*************************** 5. row ***************************
          OBJECT_TYPE: TABLE
        OBJECT_SCHEMA: performance_schema
          OBJECT_NAME: metadata_locks
OBJECT_INSTANCE_BEGIN: 140450128262384
            LOCK_TYPE: SHARED_READ
        LOCK_DURATION: TRANSACTION
          LOCK_STATUS: GRANTED
               SOURCE: sql_parse.cc:5585
      OWNER_THREAD_ID: 27
       OWNER_EVENT_ID: 18
5 rows in set (0.00 sec)
Note the SHARED_READ lock granted on table t and the EXCLUSIVE lock pending on that same table above. TRUNCATE is DDL, so it is blocked.
Note also the locks related to our SELECT from the metadata_locks table in the output. Yes, access to Performance Schema is also protected with metadata locks!
We can get a nice view of all metadata locks from other sessions, excluding our current one – compare that to all we could get about them before MySQL 5.7: just a thread state in the SHOW PROCESSLIST output.
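The join query below does the trick (it is the same one used after the COMMIT further down). Run while the TRUNCATE is still blocked, it should list the other session’s granted intention locks and its pending EXCLUSIVE request on table t:

mysql> SELECT OBJECT_TYPE, OBJECT_SCHEMA, OBJECT_NAME, LOCK_TYPE, LOCK_STATUS, THREAD_ID, PROCESSLIST_ID, PROCESSLIST_INFO FROM performance_schema.metadata_locks INNER JOIN performance_schema.threads ON THREAD_ID = OWNER_THREAD_ID WHERE PROCESSLIST_ID <> CONNECTION_ID();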
As soon as I complete the transaction in which the SELECT was executed, TRUNCATE completes and we see no pending metadata locks:

mysql> commit;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT OBJECT_TYPE, OBJECT_SCHEMA, OBJECT_NAME, LOCK_TYPE, LOCK_STATUS, THREAD_ID, PROCESSLIST_ID, PROCESSLIST_INFO FROM performance_schema.metadata_locks INNER JOIN performance_schema.threads ON THREAD_ID = OWNER_THREAD_ID WHERE PROCESSLIST_ID <> CONNECTION_ID();
Empty set (0.01 sec)

mysql> select * from t;
Empty set (0.00 sec)
To summarize: MySQL 5.7 allows you to study all metadata locks in detail. They are set for both transactional and non-transactional tables, but remember that if you use autocommit=0 or start a transaction explicitly, they are released only when the commit happens, implicit or explicit. If you want a single-statement SELECT to not block any DDL after it has completed, make sure to COMMIT immediately or use autocommit=1.
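A minimal illustration of that last point, assuming the same table t as above:

mysql> SET SESSION autocommit = 1;
mysql> SELECT * FROM t LIMIT 1;
-- with autocommit=1 the SELECT is a transaction of its own, so its
-- metadata lock is released as soon as the statement completes, and a
-- TRUNCATE TABLE t issued from another session proceeds immediately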
We can surely call the behavior of metadata locks in this case a “bug” or file a “feature request” to change it, but for now any software that is supposed to work with MySQL 5.5.3+ should just take all the implications of metadata locks into account.
Every day hundreds of millions of dollars are wasted by allowing improperly tuned or misconfigured systems, misunderstood infrastructure, and inefficient IT operations to live and thrive in data centers around the globe. There are both direct and indirect costs associated with allowing these unhealthy systems to continue to exist. Let’s look at some.

The setup:
Let’s start with a small example: a single master-slave setup with a database size of, let’s say, 500GB. Traffic is steady and translates into 500 IOPS on the master. You have chosen to host this on Amazon’s AWS. A common way of ensuring backups occur in AWS is to set up EBS snapshots of the slave. In terms of usage, let us assume your CPU is about 50% used and you have about 20GB of hot data that needs to stay in memory for the database.
If we look at what this would take to support in EC2 you are looking roughly at this:
- 2 c3.4xlarge servers (16 vCPUs, 30GB of memory)
- Master-Slave Set
- with 1TB of Provisioned IOPS SSD, over 2 volumes
- with 500 IOPS on the master, 125 IOPS on the slave
- estimated 7TB of storage for snapshots
This calculator gives us an estimated cost of $3,144.28 per month, or roughly $38,000 a year in hosting fees. Note that you can choose other tiers of service, or reserved or spot servers, to get different pricing.

Regular, steady growth:
Now let’s assume your database is growing along with its traffic at about 5% per month (these are rough numbers, I know). After a year your database server would be running out of steam at 86% CPU, with 34GB of hot data (so relying more heavily on disk), and consuming just about 850GB of storage space. Moving up to the next tier of servers with additional IOPS, you will see your spend jump to around $4,771.32 per month ($57,000 per year).
When tuning and auditing an environment like the above, we have been able to give some customers up to 50% or more improvement in performance, and we often see a 20-25% reduction in space. Let’s be conservative and say we can get a 25% boost in your performance, reduce your 5% monthly growth to 4%, and shrink your database by 10%. Based on that you can stave off upgrading your servers for an additional 9 months, saving you almost $15,000 in that first year alone. Over 4 years this customer would end up saving an estimated $75,000 in total AWS spend just from smaller data and performance enhancements.
In this case performance enhancements are not the only place to save costs. By moving from EBS snapshots to regular MySQL backups using Percona XtraBackup – keeping one copy on disk and sending those backups to S3 – the cost of the environment drops to $2,043.87 per month (from $3,144.28). This means a simple switch of backup methodology can net you about $1,100 a month, or $13,200 a year, off your hosting bill.
Often we are not only reducing the resources needed; through tuning we can also reduce the number of servers needed to run your application. We had a recent client who saw a 90% reduction in their read-heavy workload and was able to turn off servers that had been used to serve their application. Over the next couple of years those savings added up: we helped cut this customer’s direct costs by two-thirds.

Handling spikes:
One thing to keep in mind is that this assumes linear growth in application and database usage, which would mean you can predict when you will need hardware. If your user base is growing steadily and feature additions are controlled, that is possible; however, in most environments you will not see linear growth – you will see long stretches of steady usage punctuated by sudden spikes.
Understanding this pattern and the spikes is vital to keeping your costs down. Take a giant spike up to 2,500 IOPS, for example. The first reaction for many is to upgrade their hardware, then tune. Inevitably, any tuning benefit is offset by the already-sunk cost of a hardware upgrade which, after the tuning, they may not have needed. Getting in front of that spike and preventing it could have saved tens or even hundreds of thousands of dollars.
Spikes kill performance and cost real dollars, and they may not be easy to find. A few years ago I was working with a Fortune 500 client who had one of these spikes. They had been running perfectly fine with steady but controllable growth for 7 or 8 months; then, in the 9th month, things went very wrong very quickly.
A critical part of the company’s business was certifying professionals through a testing application. During their peak season these certifications stopped completely – delaying certifications for thousands of employees and clients for 2 weeks. I was flown out to help control the bleeding and hopefully fix the problem.
The number of users using the application was the same, the number of page views on the web was steady, but the number of queries to the database skyrocketed. None of the queries had hit their threshold of 1 second to be flagged as problem queries. It turned out to be one query that took 250ms to execute that was causing this company to grind to a halt. That one query ended up being executed 25,000 times per page when certain conditions were met, and those conditions were not met until the 9th month after this application was re-released.
This query lay like a Trojan horse, waiting to destroy the company’s ability to deliver to its customers.
Two lessons can be learned from this. First, even a seemingly well-tuned system may not be. Second, small things matter. In this case fixing the code was the correct solution; however, proper indexing of the tables dropped the query time from 250ms down to 50ms. This was enough relief to allow the certification process to start up again until the code could be fixed. Even a query with a seemingly small impact should be optimized.
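To make that kind of fix concrete, here is a purely hypothetical sketch (the client’s real table and column names were different): a composite index covering the query’s filter columns is the sort of change that dropped the query from 250ms to 50ms.

-- hypothetical table and column names, for illustration only
ALTER TABLE cert_attempts ADD INDEX idx_exam_user (exam_id, user_id);

-- confirm the optimizer now uses the new index instead of scanning
EXPLAIN SELECT status FROM cert_attempts WHERE exam_id = 1042 AND user_id = 87;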
Another source of these performance spikes is a company’s application release cycle. Applications are living entities in today’s world: they grow, expand, and change on a regular basis. In order to stay ahead of any problems you need to have a process and resources in place that can proactively monitor and tune. Every release of new code should go through a rigorous performance review to prevent Trojan problems that may cause outages and extra costs down the road.

Indirect and hard to calculate costs:
All of this discussion so far has been around direct hosting costs. There is also a cost to your reputation and your ability to deliver services that meet a customer’s expectations. Customers who come to your site or are using your application can leave in droves due to poor performance. We have seen several customers who lost 50% or more of their user base due to performance problems with the application.
Lost revenue and profits are often much more difficult to quantify, and vary greatly from company to company. This cost, however, is very real. Silicon Valley is littered with the remnants of companies that did not plan to address scale or simply missed important problems in their IT infrastructure. Unfortunately I have worked directly with numerous companies that learned this lesson the hard way. These hidden costs can kill a customer quicker than any competitor or market shift.
One of the biggest hidden costs companies needlessly pay is the cost of downtime.

The cost of downtime:
I was reading a Gartner study that estimated the cost per minute of downtime at $5,600; other studies, like this one, have pegged the cost per minute of downtime at $7,900.
Any way you slice it, being down for even a minute costs you money. If we are conservative in our estimates, the cost of an hour of downtime can easily top $100,000. It’s amazing how many well-established companies don’t have a solid plan for dealing with downtime.
Let’s look at some common disaster recovery policies.

Restore from backup:
How quickly can your DBAs get alerted to an outage, log in to look at it, and make the call whether or not to restore? I submit that most people will take a few minutes to receive an alert (let’s say 2), a few more minutes to get to the computer and into the system (let’s say 5), and then at least 10 minutes to try to figure out what’s going on. Fast-forward: a minimum of 17 minutes has gone by with nothing to show for it.
Restoring the backup itself could take a few minutes or several hours; let’s say 40 minutes total. If we use that $7,900 number, you could have just lost $316,000. That’s a huge amount that could have easily been avoided. Maybe you know that you’re not losing $7,900 a minute – maybe it’s only $1,000. That’s still $40,000!

Manual failover to a slave:
The time for getting alerted, reacting, and taking action does not change in this equation. The original 17 minutes (minimum) to react and start fixing just potentially cost you $134,300.

Automated failover:
Not all automated failovers are created equal. Some solutions can take several minutes or even hours to restore proper service (warm-up time for passive, cold slaves). Just because you think you are protected does not mean you are. Having the right automated solution can mean limiting your downtime risk to $10K or less; having the wrong one can be worse than having none at all.
It’s important to understand the cost of downtime and pick the proper solution to mitigate it.

Cost of being wrong is high:
These are just a few of the costs that companies can incur by having an improper database and infrastructure setup. Mitigating these costs requires a solid process, a high level of expertise, and the right resources in place.
On a recent engagement I worked with a customer who makes extensive use of UUID() values for their Primary Key, stored as char(36), and the row count on this example table has grown to over 1 billion rows.
The table is INSERT-only (no UPDATEs or DELETEs), and the bulk of their retrievals are PK lookups. Lookups by PK were performing acceptably, but they were concerned with the table’s space usage as it approached 1TB (running with innodb_file_per_table=1 and Percona Server 5.5).
This schema model presents an increasing burden for backups since they use Percona XtraBackup, and so the question was asked: does their choice of an effectively random Primary Key based on UUID() impact their on-disk storage, and to what extent? And as a neat trick I show towards the end of this post how you can calculate the rate of fragmentation in your table on a regular basis if you’re so inclined. So read on!
For background, the more common approach for a Primary Key in InnoDB is one that uses an integer AUTO_INCREMENT value. One of the benefits of a PK AUTO_INCREMENT is that it allows InnoDB to add new entries at the end of the table, and keeps the BTREE index from having to be split at any point. More on this splitting reference in a moment. Note also that this blog post isn’t intended to promote one type of model over another, my goal is really to illustrate the impact your choice of PK will have on the data on disk.
A Primary Key serves multiple purposes in InnoDB:
- Ensures uniqueness between rows
- InnoDB saves row data on disk clustered by the Primary Key
- Depending on the type used and the INSERT/UPDATE pattern, provides for either an unfragmented or a severely fragmented Primary Key
I wanted to profile three different Primary Key types:
- integer AUTO_INCREMENT – this key will consume 4 bytes
- binary(16) using Ordered UUID() – as per Karthik’s UUID()-optimised blog post (a sketch of such a function appears after this list)
- char(36) using UUID() – what my customer used
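For reference, here is a sketch of such an ordered-UUID conversion function, modeled on the one from Karthik’s post (treat this exact definition as my assumption and see his post for the original). It moves the timestamp-derived portions of a version-1 UUID to the front so that successive values sort in roughly ascending order:

mysql> CREATE FUNCTION ordered_uuid(_uuid CHAR(36))
    -> RETURNS BINARY(16) DETERMINISTIC
    -> RETURN UNHEX(CONCAT(
    ->     SUBSTR(_uuid, 15, 4),  -- time_hi: slowest-changing bits first
    ->     SUBSTR(_uuid, 10, 4),  -- time_mid
    ->     SUBSTR(_uuid, 1, 8),   -- time_low: fastest-changing bits last
    ->     SUBSTR(_uuid, 20, 4),
    ->     SUBSTR(_uuid, 25)));

This is what lets the INSERT script shown at the end of this post call ordered_uuid(uuid()).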
I then used the powerful tool innodb_space’s function space-lsn-age-illustrate (from Jeremy Cole’s innodb_ruby project) to plot the LSN (InnoDB’s Log Sequence Number, an always-incrementing value) pages from each table that uses the different Primary Keys via ASCII colour (so hot, right? Thanks Jeremy!!). For reference, the legend indicates that the darker the colour, the “older” the page’s updated version is (the LSN), while as you move across the colour spectrum to pink you’re looking at the most recently written LSN values. What I’m trying to illustrate is that when you use AUTO_INCREMENT or UUID() that has been modified to insert in an ascending order, you get virtually no page splits, and thus consume the minimal amount of database pages. On the left side you’re looking at the page IDs for this table, and the lower the number of pages consumed, the more efficiently packed the table’s data is within those pages.
This is an example of INSERT-only based on a Primary Key of AUTO_INCREMENT. Notice how the darker colours are heavy at the earliest pages and lighter as we get to writing out the higher number pages. Further this table finishes writing somewhere around 700 pages consumed.
As we look at the optimised-UUID() INSERT pattern we see that it too has a very evenly distributed pattern with oldest pages at the start (lowest page IDs) and newest written pages at the end of the table. More pages are consumed however because the Primary Key is wider, so we end somewhere around 1,100 pages consumed.
Finally we arrive at the UUID() INSERT pattern, and as we expected, the fragmentation is extreme and has caused many page splits. This is the behaviour in InnoDB when a record needs to be written into an existing page (because it falls between two existing values): InnoDB realises that writing this additional value would overcommit the page’s capacity, so it “splits” the page into two pages and writes them both out. The rash of pink in the image below shows us that UUID() causes significant fragmentation because it is causing pages to be split all throughout the table. This is deemed “expensive” since the ibd file is now more than 2x greater than with the UUID()-optimised method, and about 3x greater than with an AUTO_INCREMENT Primary Key.
Based on this investigation we determined that the true size of the 1 billion row table was about half the size reported by Linux when examining the .ibd file. We happened to have an opportunity to dump and load the table (mysqldump | mysql) and found that on restore the table consumed 450GB of disk – so our estimate was pretty good!
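In practice that rebuild is a pipeline along these lines (the host name here is hypothetical). Because mysqldump emits rows in Primary Key order, the reload inserts them sequentially and leaves freshly packed pages; an in-place ALTER TABLE ... ENGINE=InnoDB rebuild achieves much the same effect:

# dump and reload to rebuild the clustered index in PK order
mysqldump --single-transaction test t3_uuid | mysql -h restore-host test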
I also wanted to highlight that you can determine the data/pages-split statistics for yourself. As you can see below, the first two PK distributions are very tight, with pages packed up to 90%; the UUID model, however, leaves you with just slightly more than 50%. You can run this against your prepared backups if you use Percona XtraBackup (since at least version 2.1) by using the --stats option.

[root@mysql]# xtrabackup --stats --datadir=/data/backups/mysql --target-dir=/data/backups/mysql | grep -A5 test | grep -A5 PRIMARY
  table: test/t1, index: PRIMARY, space id: 13, root page: 3, zip size: 0
  estimated statistics in dictionary:
    key vals: 8, leaf pages: 99, size pages: 161
  real statistics:
    level 1 pages: pages=1, data=1287 bytes, data/pages=7%
    leaf pages: recs=60881, pages=99, data=1461144 bytes, data/pages=90%
--
  table: test/t2_uuid_ordered, index: PRIMARY, space id: 14, root page: 3, zip size: 0
  estimated statistics in dictionary:
    key vals: 8, leaf pages: 147, size pages: 161
  real statistics:
    level 1 pages: pages=1, data=3675 bytes, data/pages=22%
    leaf pages: recs=60882, pages=147, data=2191752 bytes, data/pages=91%
--
  table: test/t3_uuid, index: PRIMARY, space id: 15, root page: 3, zip size: 0
  estimated statistics in dictionary:
    key vals: 8, leaf pages: 399, size pages: 483
  real statistics:
    level 2 pages: pages=1, data=92 bytes, data/pages=0%
    level 1 pages: pages=2, data=18354 bytes, data/pages=56%
Below are the table definitions along with the scripts I used to generate the data for this post.

mysql> show create table t1\G
*************************** 1. row ***************************
       Table: t1
Create Table: CREATE TABLE `t1` (
  `c1` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `c2` char(1) NOT NULL DEFAULT 'a',
  PRIMARY KEY (`c1`),
  KEY `c2` (`c2`)
) ENGINE=InnoDB AUTO_INCREMENT=363876 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

mysql> show create table t2_uuid_ordered\G
*************************** 1. row ***************************
       Table: t2_uuid_ordered
Create Table: CREATE TABLE `t2_uuid_ordered` (
  `pk` binary(16) NOT NULL,
  `c2` char(1) NOT NULL DEFAULT 'a',
  PRIMARY KEY (`pk`),
  KEY `c2` (`c2`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

mysql> show create table t3_uuid\G
*************************** 1. row ***************************
       Table: t3_uuid
Create Table: CREATE TABLE `t3_uuid` (
  `pk` char(36) NOT NULL,
  `c2` char(1) NOT NULL DEFAULT 'a',
  PRIMARY KEY (`pk`),
  KEY `c2` (`c2`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

[root@mysql]# cat make_rows.sh
#!/bin/bash
while [ 1 ] ; do
  mysql -D test -e "insert into t1 (c2) values ('d')" ;
  mysql -D test -e "insert into t2_uuid_ordered (pk, c2) values (ordered_uuid(uuid()), 'a')" ;
  mysql -D test -e "insert into t3_uuid (pk, c2) values (uuid(), 'a')" ;
done

[root@mysql]# cat space_lsn_age_illustrate.sh
#!/bin/bash
cd /var/lib/mysql
echo "T1 - AUTO_INCREMENT"
innodb_space -f test/t1.ibd space-lsn-age-illustrate
echo "T2 - UUID() ORDERED"
innodb_space -f test/t2_uuid_ordered.ibd space-lsn-age-illustrate
echo "T3 - UUID()"
innodb_space -f test/t3_uuid.ibd space-lsn-age-illustrate
I hope that this post helps you to better understand the impact of random vs ordered Primary Key selection! Please share with me your thoughts on this post in the comments, thanks for reading!
A note to those attentive readers seeking more information: I plan to write a follow-up post that deals with these same models from a performance perspective. In this post I tried to be as specific as possible about the disk consumption and fragmentation concerns; I feel it addresses that first part, but it only teases at what this mysterious “fragmentation” beast could mean for query response time. Just sit tight – I’m hopeful to get a tag-along to this post out after PLMCE!
By the way, come see me speak at the Percona Live MySQL Conference and Expo in Santa Clara, CA the week of April 13th – I’ll be delivering 5 talks and moderating one keynote panel. I hope to see you there! If you are at PLMCE, attend one of my talks or stop me in the hallway and say “Hi Michael, I read your post, now where’s my beer?” – and I’ll buy you a cold one of your choice.
- Keynote Panel: The next disruptive technology: “What comes after the Cloud and Big Data?”
- Main Conference – What I learned while migrating MySQL on-premises to Amazon RDS
- MySQL 101 & Main Conference – Running MySQL on AWS
- MySQL 101 – Percona Server Advantage (with Christopher Jeffus)
- MySQL 101 – MySQL & InnoDB Fundamentals & Configuration (with Justin Swanhart)
Years ago when I worked for the MySQL Support organization at the original MySQL AB, we spoke about MySQL Support as insurance and focused on a value proposition similar to that of car insurance. For your car to be fully covered, you must purchase car insurance before the incident happens – in fact most places around the world require automobile insurance. Similarly with many organizations, any production-use technology might be mandated to have its own insurance in the way of 24/7 support.
I think however this is a very one-sided view that does not capture the full value (and ROI) that a MySQL Support contract with Percona provides. Let’s look at the different dimensions of value it provides based on the different support cases we have received throughout the years.
Reduce and Prevent Downtime
If your database goes down, the time to recover will be significantly shorter with a support agreement than without it. Cost of downtime varies widely between organizations. Gartner estimates the average cost of downtime is $5,000 per minute.
With most of our clients, we have found that the cost of preventing or rapidly reducing even one significant downtime event a year more than pays for the cost of support. Even when the client’s in-house team is very experienced, our help is often invaluable as we are exposed to a great variety of incidents from hundreds of companies, so it is much more likely we have encountered the same incident before and have a solution ready. Helping to recover from downtime quickly is a reactive part of support – you can realize even more value by proactively working with support to get advice on your HA options as well as ensure that you’re following the best database backup and security practices.
Having a MySQL Support contract by itself is not enough to prevent all security incidents. MySQL will be only one of the components for a possible attack vector and it takes a lot of everyday work to stay secure. There is nothing that can guarantee complete security. MySQL Support, however, can be an invaluable resource for your security team to learn how to apply security and compliance practices to your MySQL environment and how to avoid typical mistakes.
The cost of data breaches can be phenomenal, and they can impact business reputations much more than downtime or performance issues. Depending on the company size and market, costs will vary; different studies estimate costs ranging on average from $640K in direct costs to $3.5M. What everyone seems to agree upon is that security risks and security costs are on the rise, and you can’t afford to leave these areas unchecked.
Fix Database Software Bugs
While you might have great DBAs on your team who are comfortable with best practices and downtime recovery, you most likely do not have a development team comfortable with fixing bugs in the database kernel or supporting tools. Being able to get software fixes contributes to downtime reduction as well as all kinds of other things, such as keeping development and operations teams efficient and avoiding complex workarounds.
A large number of the questions we get are performance-related; addressing them provides a better experience for users, saves costs, and minimizes environmental impact by using fewer resources.
Savings vary depending on your application scale and how much it is already optimized. In the best cases, our support team has helped customers make applications more than 10x more efficient. In most cases though, we are able to help make things at least 30% more efficient. If you’re spending $100K or more on your database environment, this benefit alone will make a support agreement well worth it.
This is an important one. Way too often customers do not even give their developers access to support, even though these developers are critical in realizing the full value of their application. Developers working with databases make many decisions about schema design, query writing, and the use of MySQL features such as stored procedures, triggers or foreign keys. Without a MySQL Support contract, developers often have to resort to “Google” to find an answer – and often end up with inapplicable, outdated or simply wrong information. On top of this, they often fall back on time-consuming trial and error.
With help of the Percona Support team, developers can learn the proven practices that apply to their specific situation, save a lot of time and get a better application to the market faster. Even with a single US-based developer intensively working with MySQL, a support agreement might be well worth the cost based on increased developer efficiency alone. Larger development teams simply cannot afford to not have support.
Your operations staff (DBAs, DevOps, Sysadmins) are in the same boat – if your database environment is significant, chances are you are always looking for ways to save time, make operations more efficient and reduce mistakes. Our Support team can provide you with specific actionable advice for the challenges you’re experiencing.
Chances are we have seen environments similar to yours and know which software, approaches and practices work well and which do not. This all of course contributes to downtime prevention and reduction, but it also helps with team efficiency. With the Percona Support team’s help, you will be able to handle operations with a smaller team or get things done with less experienced staff members.
Percona Support access helps developers not only be more productive, but also results in better application quality because best practices in application database interface design, schema, queries, etc. are followed. The Percona team has supported many applications for many years, so we often will think about problems before you might think about them, such as:
- “How will this design play with replication or sharding?”
- “Will it scale with large amounts of users or data?”
- “How flexible is such a design when the application will inevitably be evolving over years?”
While a better application is hard to quantify, it really is quite important.
Faster Time to Market
Yet another benefit that comes from developers having access to a MySQL Support team is faster time to market. For many agile applications, being able to launch new features faster is even more important than cost savings – this is how businesses can succeed against the competition. At Percona, we love helping businesses succeed.
As you see, there are a lot of ways Percona Support can contribute to the success of your business. Support is much more than “insurance” that you should consider purchasing for compliance reasons. If you’re using MySQL for your applications, Percona Support will provide a great return on investment, allowing you to minimize risks and costs while delivering the highest quality of application or service possible.
Recently Todd Farmer shared an interesting story about the mysql command line prompt in MySQL 5.7: how it was changed to provide more context and why the change was finally reverted. This made me think that, after using the MongoDB command line client for awhile, I would love to see a much more modern mysql shell prompt. Here are a few examples of what a modern command line client can do.

Add dynamic information to the prompt
If you use replication with MongoDB, you have probably noticed a nice feature of the prompt: it is replication-aware. What I mean is that for a standalone instance, the prompt is simply:

>
When you configure this instance to be the primary of a replica set named RS, the prompt automatically becomes:

RS:PRIMARY>
and for secondaries, you will see:

RS:SECONDARY>
And of course if an election is triggered and roles are switched, the prompt is updated accordingly to reflect the change.
Such a feature may not be easily transposed to MySQL, as the allowed replication topologies are more flexible: for instance, with master-master replication or chained replication, some servers may be both a master and a slave.
The MongoDB shell also lets you define a custom prompt function that displays whatever dynamic information you like – for instance, the current database and the time. Your prompt will then be changed to something like:

test/11:37:51>
And the change can be persisted by defining the prompt variable in a ~/.mongorc.js file.
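For what it’s worth, the stock mysql client already supports static prompt customization through documented escape sequences (user, host, database, time, and so on). A minimal example in my.cnf – note that backslashes must be doubled in option files:

[mysql]
prompt="\\u@\\h [\\d] \\R:\\m:\\s> "

What it cannot do is pull dynamic server-side state into the prompt.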
If I could do something similar with MySQL, I would for instance be able to see the replication lag of a slave or the uptime of MySQL, and so on and so forth. That would be incredibly useful in some situations.

Better help system
Let’s assume I want to manually fail over to a secondary in a MongoDB replica set. I can simply instruct the primary that it should no longer be the primary. But what is the exact command?
I know that it is rs.xxx because all replica set functions have this format, so let’s simply open the shell, type rs. and trigger auto-completion – all the available options show up:

> rs.
rs.add(         rs.constructor           rs.propertyIsEnumerable(  rs.syncFrom(
rs.addArb(      rs.debug                 rs.prototype              rs.toLocaleString(
rs.apply(       rs.freeze(               rs.reconfig(              rs.toString(
rs.bind(        rs.hasOwnProperty(       rs.remove(                rs.valueOf(
rs.call(        rs.help(                 rs.slaveOk(               rs.conf(
rs.initiate(    rs.status(               rs.config(                rs.isMaster(
rs.stepDown(
Here again, I’d love to see something similar with MySQL. For instance, each time I need to convert a timestamp to a human-readable date, I have to look at the documentation to find the exact function name.
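(For the record, the function in question is, presumably, FROM_UNIXTIME:)

mysql> SELECT FROM_UNIXTIME(1428486000);
-- prints the Unix timestamp as a human-readable DATETIME,
-- using the session's time zone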
Actually I could have used the built-in help system to find the name of the function, but it is much more convenient to look at the documentation.

Conclusion
What’s your opinion on which new features a modern mysql shell should provide? If many people agree on a set of features, it could be worth filing a feature request to get a better mysql command line client in a future version of MySQL.
Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.
- Decrypting a backup with the wrong key would make the backup unusable and unrecoverable. innobackupex no longer automatically deletes the *.qp and *.xbcrypt files after --decrypt and --decompress are used. Bug fixed #1413044.
- XtraDB Changed Page Tracking wasn’t working with innobackupex. Bug fixed #1436793.
- Fixed Percona XtraBackup assertion caused by dirty pages remaining in the buffer pool after the log was fully applied. Bug fixed #1368846.
- The backup will not be prepared, and innobackupex will stop with an error, if the transaction log file is corrupted and was not applied to the intended LSN. Previously this showed up only as a warning. Bug fixed #1414221.
- A new status, log-applied, is introduced for backups prepared with --redo-only, to avoid making a backup unusable by preparing a full or incremental backup without --redo-only and then applying the next incremental on top of it. An incremental backup can now be applied only to a backup in the log-applied state, not to a full-prepared one as was possible earlier. Bug fixed #1436790. (A sketch of the intended prepare sequence follows this list.)
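For context, the prepare sequence this protects looks like the following (directory paths are hypothetical):

# replay the log on the base backup but keep it in the log-applied state
innobackupex --apply-log --redo-only /backups/full
# apply every incremental except the last one, still with --redo-only
innobackupex --apply-log --redo-only /backups/full --incremental-dir=/backups/inc1
# the last incremental (and the final prepare) runs without --redo-only
innobackupex --apply-log /backups/full --incremental-dir=/backups/inc2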
Release notes with all the bugfixes for Percona XtraBackup 2.2.10 are available in our online documentation. Bugs can be reported on the launchpad bug tracker. Percona XtraBackup is an open source, free MySQL hot backup software that performs non-blocking backups for InnoDB and XtraDB databases.
Based on Percona Server 5.5.41-37.0 including all the bug fixes in it, Galera Replicator 2.11, and on Codership wsrep API 25.11, Percona XtraDB Cluster 5.5.41-25.11 is now the current 5.5 General Availability release. All of Percona‘s software is open-source and free, and all the details of the release can be found in the 5.5.41-25.11 milestone at Launchpad.
- XtraBackup SST wouldn’t stop when MySQL was SIGKILLed. This would prevent MySQL from initiating a new transfer as port 4444 was already utilized. Bug fixed #1380697.
- garbd was returning an incorrect return code; i.e., when garbd was already started, the return code was 0. Bugs fixed #1308103 and #1422863.
- wsrep_sst_xtrabackup-v2 script was causing innobackupex to print a false positive stack trace into the log. Bug fixed #1407599.
- MyISAM DDL (CREATE TABLE only) isn’t replicated anymore when wsrep_replicate_myisam is OFF. Note, for older nodes in the cluster, wsrep_replicate_myisam should work since the TOI decision (for MyISAM DDL) is done on origin node. Mixing of non-MyISAM and MyISAM tables in the same DDL statement is not recommended with wsrep_replicate_myisam OFF since if any table in list is MyISAM, the whole DDL statement is not put under TOI (total order isolation). This also doesn’t work if default_storage_engine variable is set to MyISAM (which is not recommended for Percona XtraDB Cluster) and a table is created without the ENGINE option. Bug fixed #1402338.
- Percona XtraDB Cluster now shows a warning in case additional utilities, like pv which may not affect critical path of SST, are not installed. Bug fixed #1248688.
- wsrep_causal_reads variable was not honored when declared as global. Bug fixed #1361859.
- garbd would not work when cluster address was specified without the port. Bug fixed #1365193.
- garbd was running as root user on Debian. Bug fixed #1392388.
- Errors in garbd init script stop/start functions have been fixed. Bug fixed #1367956.
- If mysqld got killed during the SST, it would leave an unclean data directory behind. This would cause Percona XtraDB Cluster to fail when the server was next started, because the data directory would be corrupted. This was fixed by resuming the startup in case wsrep-recover failed to recover due to a corrupted data directory. The old behavior is still achievable through the --exit-on-recover-fail command line parameter to mysqld_safe or exit-on-recover-fail under [mysqld_safe] in my.cnf. Bug fixed #1378578.
- gvwstate.dat file was removed on joiner when XtraBackup SST method was used. Bug fixed #1388059.
- xtrabackup-v2 SST did not clean the undo log directory. Bug fixed #1394836.
- stderr of SST/innobackupex is logged to syslog with appropriate tags if sst-syslog is set in [sst], or if [mysqld_safe] has syslog set in my.cnf. This can be overridden by setting sst-syslog to -1 in [sst] (see the configuration sketch after this list). Bug fixed #1399134.
- clustercheck can now check whether the node is PRIMARY or not, to allow synced nodes that have gone out of PRIMARY to take no writes/reads. Bug fixed #1403566.
- Race condition between donor and joiner in Xtrabackup SST Configuration has been fixed. This caused XtraBackup SST to fail when joiner took longer to spawn the second listener for SST. Bug fixed #1405668.
- SST will now fail early if the xtrabackup_checkpoints file is missing on the joiner side. Bug fixed #1405985.
- socat utility was not properly terminated after a timeout. Bug fixed #1409710.
- The 10-second timeout in the XtraBackup SST configuration script was not enough for the joiner to delete existing files before it started the socat receiver on systems with a big datadir. Bug fixed #1413879.
- Conflict between enforce_storage_engine and wsrep_replicate_myisam for CREATE TABLE has been fixed. Bug fixed #1435482.
- SST processes are now spawned with fork/exec instead of posix_spawn to allow for better cleanup of child processes in event of non-graceful termination (SIGKILL or a crash etc.). Bug fixed #1382797.
- Variable length arrays in WSREP code were causing debug builds to fail. Bug fixed #1409042.
- Signal handling in mysqld has been fixed for SST processes. Bug fixed #1399175.
- Inserts to a table with an autoincrement primary key could result in duplicate key errors if another node joined or dropped from the cluster during the insert processing. Bug fixed #1366997.
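A my.cnf sketch for the syslog-related fix above (option placement as described in the notes; the enabling value is my assumption):

[sst]
# log SST/innobackupex stderr to syslog with appropriate tags
sst-syslog=1
# or, to suppress it even when [mysqld_safe] has syslog set:
# sst-syslog=-1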
Help us improve our software quality by reporting any bugs you encounter using our bug tracking system. As always, thanks for your continued support of Percona!
Please also note that Percona XtraDB Cluster 5.6 series is the latest General Availability series and current GA release is 5.6.22-25.8.
Here’s your chance to get on stage with Woz! Sort of. Apple co-founder and Silicon Valley icon and philanthropist Steve Wozniak will participate in a moderated Q&A on creativity and innovation April 14 during the Percona Live MySQL Conference and Expo in Santa Clara, California.
Woz once said that he never intended to change the world. That was the other Steve, Steve Jobs.
“I didn’t want to start this company,” Woz told the Seattle Times of Apple’s beginnings in a 2006 interview. “My goal wasn’t to make a ton of money. It was to build good computers. I only started the company when I realized I could be an engineer forever.”
What would you ask Woz if given the opportunity?
“Woz, what first sparked your interest in engineering?”
“Hey Woz, how did you come up with the design for the first Apple?”
“Woz, what do you see as the next big thing in personal computers?”
“Hi Woz, what’s the deal with your giant vacuum tube watch?”
Now it’s your turn! Ask a question in the comments below and be sure to include your Twitter handle – or your Facebook page or LinkedIn profile. If we use your question, then your profile and question will be displayed on the giant screen behind Woz on stage as it’s being asked during his big keynote! How cool is that?
Want to be there in person? See Woz speak for just $5! That’s $70 off the regular admission price! Just use the promo code “KEY” at registration under the “Expo Hall and Keynote Pass” selection. Following Woz’s keynote, be sure to stop by the Percona booth, say “hello, Tom,” and I’ll give you a limited-edition Percona t-shirt.
In the meantime, help spread the word! Please share this tweet:
Do that, then follow @Percona and I’ll send a DM for your address and will ship a t-shirt right to your door. See you at the conference!
TechCrunch reported yesterday that Apple has acquired FoundationDB. And while I didn’t see any mention of this news on the FoundationDB website, they do have an announcement saying: “We have made the decision to evolve our company mission and, as of today, we will no longer offer downloads.”
This is an unfortunate development – I have been watching FoundationDB technology for years and was always impressed by its performance and features. I was particularly impressed by their demo at last year’s Percona Live MySQL Conference and Expo. Using their Intel NUC-based cluster, I remember Ori Herrnstadt showing me how FoundationDB handles single-node failure as well as recovery from a complete power-down – very quickly and seamlessly. We borrowed a lot of ideas from this setup for our Percona XtraDB Cluster demos.
I think it was a great design to build a distributed, shared-nothing, transaction-aware key-value store, and then have an SQL layer built on top of it. I did not have a chance to test it hands-on, though. Such a test would have revealed the capabilities of the SQL optimizer – the biggest challenge for distributed relational database systems.
My hope was to see this technology become available as open source (fully or partially) over time, which would have dramatically increased adoption by the masses. It will be interesting to see Apple’s long-term plans for this technology.
In any case, it looks like FoundationDB software is now off limits. If you are an existing FoundationDB customer looking for alternatives, we here at Percona would be happy to help you evaluate options and develop a migration strategy if necessary.
Founded in 2004 to help people find great local businesses, Yelp has some 135 million monthly unique visitors. With those traffic volumes Yelp’s 300+ engineers are constantly working to keep things moving smoothly – and when you move that fast you learn many things.
Fortunately for the global MySQL community, three Yelp DBAs will be sharing what they’ve learned at the annual Percona Live MySQL Conference and Expo this April 13-16 in Santa Clara, California.
Say “hello” to Susanne Lehmann, Jenni Snyder and Josh Snyder! I chatted with them over email about their presentations, on how MySQL is used at Yelp, and about the shortage of women in MySQL.
Tom: Jenni, you and Josh will be co-presenting “Next generation monitoring: moving beyond Nagios” on April 14.
You mentioned that Yelp’s databases scale dynamically, and so does your monitoring of those databases. And to minimize human intervention, you’ve created a Puppet and Sensu monitoring ensemble… because “if it’s not monitored, it’s not in production.” Talk to me more about Yelp’s philosophy of “opt-out monitoring.” What does that entail? How does that help Yelp?
Jenni: Before we moved to Sensu, our Nagios dashboards were a sea of red, muted, acknowledged, or disabled service checks. In fact, we even had a cluster check to make sure that we never accidentally put a host into use that was muted or marked for downtime. It was possible for a well-meaning operator to acknowledge checks on a host and forget about it, and I certainly perpetrated a couple of instances of disks filling up after acknowledging a 3am “warning” page that I’d rather forget about. With Sensu, hosts and services come out of the downtime/acknowledgement state automatically after a number of days, ensuring that we’re kept honest and stay on top of issues that need to be addressed.
Also, monitoring is deployed with a node, not separate monitoring configuration. Outside of a grace period we employ when a host is first provisioned or rebooted, if a host is up, it’s being monitored and alerting. Also, alerting doesn’t always mean paging. We also use IRC and file tickets directly into our tracking system when we don’t need eyes on a problem right away.
Tom: Susanne, in your presentation, titled “insert cassandra into prod where use_case=?;” you’ll discuss the situations you’ve encountered where MySQL just wasn’t the right tool for the job.
What led up to that discovery and how did you come up with finding the right tools (and what were they) to run alongside and support MySQL?
Susanne: Our main force behind exploring other datastores alongside MySQL was that Yelp is growing outside the US market a lot. Therefore we wanted the data to be nearer to the customer and needed multi-master writes.
Also, we saw use cases where our application data was organized very key-value like and not relational, which made them a better fit for a NoSQL solution.
We decided to use Cassandra as a datastore and I plan to go more into detail why during my talk. Now we offer developers more choices on how to store our application data, but we also believe in the “right tool for the job” philosophy and might add more solutions to the mix in the future.
Tom: Jenni, you’ll also be presenting “Schema changes multiple times a day? OK!” I know that you and your fellow MySQL DBAs are always improving and also finding better ways of supporting new and existing features for Yelp users like me. Delivering on such a scale must entail some unique processes and tools. Does this involve a particular mindset among your fellow DBAs? Also, what are some of those key tools – and processes and how are they used?
Jenni: Yelp prizes the productivity of our developers and our ability to iterate and develop new features quickly. In order to do that, we need to be able to not only create new database tables, but also modify existing ones, many of which are larger than MySQL can alter without causing considerable replication delay. The first step is to foster a culture of automated testing, monitoring, code reviews, and partnership between developers and DBAs to ensure that we can quickly & safely roll out schema changes. In my talk, I’ll be describing tools that we’ve talked about before, like our Gross Query Checker, as well as the way the DBA team works with developers while still getting the rest of our work done. The second, easy part is using a tool like pt-online-schema-change to run schema changes online without causing replication delay or degrading performance.
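For readers unfamiliar with pt-online-schema-change, a typical invocation looks something like this (the database, table, and column names are hypothetical, not Yelp’s):

pt-online-schema-change --alter "ADD COLUMN is_flagged TINYINT NOT NULL DEFAULT 0" \
  D=reviews_db,t=reviews --execute

The tool creates a copy of the table with the new definition, backfills it in chunks while triggers keep it in sync with ongoing writes, and then swaps it in – which is how large tables can be altered without long replication delay.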
Tom: Josh, you’ll also be speaking on “Bootstrapping databases in a single command: elastic provisioning for the win.” What is “elastic provisioning” and how are you using it for Yelp’s tooling?
Josh: When I say that we use elastic provisioning, I mean that we can reliably and consistently build a database server from scratch, with minimal human involvement. The goal is to encompass every aspect of the provisioning task, including configuration, monitoring, and even load balancing, in a single thoroughly automated process. With this process in place, we’ve found ourselves able to quickly allocate and reallocate resources, both in our datacenters and in the cloud. Our tools for implementing the above goals give us greater confidence in our infrastructure, while avoiding single-points of failure and achieving the maximum possible level of performance. We had a lot of fun building this system, and we think that many of the components involved are relevant to others in the field.
Tom: Susanne and Jenni, last year at Percona Live there was a BoF session titled “MySQL and Women (or where are all the women?).” The idea was to discuss why there are “just not enough women working on the technology side of tech.” In a nutshell, the conversation focused on why there are not more women in MySQL and why so relatively few attend MySQL conferences like Percona Live.
The relative scarcity of women in technical roles was also the subject of an article published in the August 2014 issue of Forbes, citing a recent industry report.
Why, in your (respective) views, do you (or don’t) think that there are so few women in MySQL? And how can this trend be reversed?
Susanne: I think there are few women in MySQL and the reasons are manifold. Of course there is the pipeline problem. Then there is the problem, widely discussed right now, that women who enter STEM jobs are less likely to stay in them. These reasons are not specific to MySQL jobs, but rather to STEM in general. What is more specific to database/MySQL jobs is, in my opinion, that oftentimes DBAs need to be on call; they need to stay in the office if things go sideways. Database problems tend to be problems that can’t wait till the next morning. That makes it more demanding when you have a family, for example (which is true for men as well of course, but it seems to still be more of a problem for women).
As for how to reverse the trend, I liked this Guardian article because it covers a lot of important points. There is no easy solution.
I like that more industry leaders and technology companies are discussing what they can do to improve diversity these days. In general, it really helps to have a great professional (female) support system. At Yelp, we have AWE, the Awesome Women in Engineering group, in which Jenni and I are both active. We participate in welcoming women to Yelp engineering, speaking at external events and workshops to help other women present their work, mentoring, and a book club.
Jenni: I’m sorry that I missed Percona Live and this BoF last year; I was out on maternity leave. I believe that tech/startup culture is a huge reason that fewer women are entering and staying these days, but a quick web search will lead you to any number of articles debating the subject. I run into quite a few women working with MySQL; its large, open community and generally collaborative, supportive nature are very welcoming. As the article you linked to suggests, MySQL has a broad audience. It’s easy to get started with and pull into any project, and as a result, most software professionals have worked with it at some time or another.
On another note, I’m happy to see that Percona Live has a Code of Conduct. I hope that Percona and/or MySQL will consider adopting a Community Code of Conduct like Python, Puppet, and Ubuntu. Doing so raises the bar for all participants, without hampering collaboration and creativity!
* * *
Thanks very much, Susanne, Jenni and Josh! I look forward to seeing you next month at the conference. And readers, if you’d like to attend Percona Live, use the promo code Yelp15 for 15% off! Just enter that during registration. If you’re already attending, be sure to tweet about your favorite sessions using the hashtag #PerconaLive. And if you need to find a great place to eat while attending Percona Live, click here for excellent Yelp recommendations.
The post Yelp IT! A talk with 3 Yelp MySQL DBAs on Percona Live & more appeared first on MySQL Performance Blog.
Percona Toolkit is one of our most mature open source applications. Derived from Maatkit and Aspersa, Percona Toolkit has evolved significantly over the years. The software now contains 32 tools, over 4,000 tests, and has been downloaded over 250,000 times. Anyone who manages a database – from DBAs to system administrators to even software developers – benefits from Percona Toolkit’s ability to perform a variety of MySQL server and system tasks that are too difficult or complex to perform manually.
We continue to make Percona Toolkit better each month. Over the last 9 months alone Percona has had 6 releases and resolved nearly 50 issues.
While Percona team members in Support, Consulting, and Managed Services are big drivers of identifying bugs and new features (driven mostly by Percona customer needs), the community of Percona Toolkit users plays a significant role in making the open source software what it is today.
We’d like to learn how we can make Percona Toolkit even better for your needs. Please take a brief survey so we can learn how you actually use the software. As a thank you for taking the survey, we are randomly giving away five $50 Amazon.com gift cards to participants. It’s a small token but one that we hope you’ll appreciate.
Recent additions to Percona Toolkit have included better Percona XtraDB Cluster support as well as multiple fixes and improvements to pt-online-schema-change, pt-kill, pt-query-digest, pt-stalk, and preparation for the MySQL 5.7 GA. Help us continue to improve Percona Toolkit by taking part in our survey. If you use Percona Toolkit and are attending Percona Live next month, please keep an eye out for me. I’d like to hear about your experiences.
The post Calling all MySQL DBAs: How do you use Percona Toolkit? appeared first on MySQL Performance Blog.
MongoDB 3.0 was recently released. Instead of focusing on what’s new (that is easy to find), let’s talk about something that has not changed much since the early MongoDB days: sharding, and more specifically, how to choose a good sharding key. Note that most of the discussion also applies to MySQL, so if you are more interested in sharding than in MongoDB, it could still be worth reading.
When do you want to shard?
In general sharding is recommended with MongoDB as soon as any of these conditions is met:
- #1: A single server can no longer handle the write workload.
- #2: The working set no longer fits in memory.
- #3: The dataset is too large to easily fit in a single server.
Note that #1 and #2 are by far the most common reasons why people need sharding. Also note that in the MySQL world, #2 does not imply that you need sharding.
What are the properties of a good sharding key?
The starting point is that a cross-shard query is very expensive in a sharded environment. It is easy to understand why: the query has to be executed independently on several shards, and then results have to be merged.
With MongoDB, mongos will transparently route queries to the right shards and will automatically merge the results: this is very handy but the hidden complexity can also make you forget that you have executed a very poorly optimized query.
This is where the choice of a sharding key is so critical: choose the right key and most queries will be simple and efficient, choose a wrong one and you’ll have ugly and slow queries.
Actually a good sharding key does not need to have tens of properties, but only two:
- Insertions should be as balanced as possible across all shards.
- Each query should be able to be executed by retrieving data from as few shards as possible (ideally one shard).
Sounds quite easy, right? However, depending on your use case, it may be quite difficult to find a good sharding key. Let’s look at a few examples.
Social Network
Say we have users who can be connected to other users, who can read or write posts and who have their own wall.
All 3 collections can become very large, so sharding all will be necessary over time.
For the user and wall collections, the user_id field is an obvious choice, and it actually meets both criteria for a good sharding key.
For the post collection, user_id also looks like an obvious choice, but if you think about how the collection is accessed for reads, you will realize that you will need to fetch a post by its post_id, not its user_id (simply because a user can have multiple posts). If you shard by user_id, any read of a single post will be broadcast to all shards: this is clearly not a good option.
So using post_id is a better choice. However, it only meets criterion #2: most posts are never updated, so all the writes are insertions that will go to a single shard. However, traffic on the post collection is strongly read-heavy, so being able to speed up reads while not slowing down writes is probably an acceptable tradeoff.
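As a rough sketch of what this looks like in practice (database, collection and field names are hypothetical, the commands must be run through a mongos, and each shard key also needs a supporting index):

mongo <<'EOF'
sh.enableSharding("social")
sh.shardCollection("social.users", { user_id: 1 })
sh.shardCollection("social.walls", { user_id: 1 })
sh.shardCollection("social.posts", { post_id: 1 })
EOF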
Access Logs
The workload here is very specific: write-intensive and append-only.
Sharding by ObjectId is definitely a bad idea: while data can easily be spread across all shards, all writes will go to only one shard, so you will have no benefit compared to a non-sharded setup when it comes to scaling writes.
A better solution is to use a hash of the ObjectId: that way data AND writes will be spread across all shards.
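A minimal sketch, assuming a hypothetical logs.access collection:

mongo --eval 'sh.shardCollection("logs.access", { _id: "hashed" })'

With a hashed shard key, MongoDB distributes documents by the hash of _id, so even monotonically increasing ObjectIds spread their writes across all shards.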
Another good option would be to use another field in your documents that you know is evenly distributed across the whole dataset. Such a field may not exist, though, which is why hashing the ObjectId is the more generic solution.
Ecommerce
MongoDB can be a good option to store a product catalog: being schemaless, it can easily store products with very different attributes.
To be usable, such a catalog must be searchable. This means that many indexes need to be added, and the working set will probably grow very quickly. In this case, your main concern is probably not scaling writes, but making reads as efficient as possible.
Sharding can be an option because, if done properly, each shard will act as a coarse-grained index. Now the issue is finding which field(s) will evenly distribute the dataset. Most likely a single field will not be enough; you will have to use a compound sharding key.
Here I would say that there is no generic solution, but if the products are, for instance, targeted at kids, women or men, and if you have several categories of products, a potential sharding key would be (target, category, sku), as sketched below.
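Declaring such a compound key would look roughly like this (all names hypothetical):

mongo --eval 'sh.shardCollection("shop.catalog", { target: 1, category: 1, sku: 1 })'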
Note that in this case, reading from secondaries may be enough to ensure good scalability.
Conclusion
As you can see, choosing a correct sharding key is not always easy: do not assume that because some apps are sharded by some fields, you should do the same. You may need a totally different strategy, or sharding may not even be the right solution for your issues.
If you are interested in learning more about MongoDB, I will be presenting a free webinar on March 25 at 10 a.m. Pacific time. It will be an introduction to MongoDB for MySQL DBAs and developers. Register here if you are interested.
The post Choosing a good sharding key in MongoDB (and MySQL) appeared first on MySQL Performance Blog.
For the last year or so I have been involved with OpenStack, and more specifically the Trove (DBaaS) project, as a sort of ambassador for Percona, contributing bits of knowledge, help and debugging wherever I could. I thought I would share some of my experience with others who want to get involved with OpenStack development, documentation, testing, etc. Getting started with OpenStack contributions is also the idea behind my talk next month at Percona OpenStack Live 2015. (Percona Live attendees have access to OpenStack Live.)
Back at the last OpenStack Conference and Design Summit in Paris last November, I had the amazing opportunity to attend the two-day OpenStack Upstream Training hosted by Stefano Maffulli, Loic Dachary and several other very kind and generous folks. If you ever find yourself in a position to attend one of these training sessions, I strongly suggest that you take advantage of the opportunity, you will not be disappointed.
Using some of the material from the OpenStack Foundation and a little personal experience, I’m going to go through some of the basics of what you’ll need to know if you want to contribute. There are several steps but they are mostly painless:
– It all starts with a little bit of legal work, such as signing either an individual or a corporate contributor agreement.
– You will need to decide on a project or projects that you want to contribute to. Chances are that you already have one in mind.
– Find the various places where other contributors to that project hang out; usually there is a mailing list and an IRC channel. Log on, introduce yourself, make some friends and sit and listen to what they are working on. Find the PTL (Project Team Lead) and remember his/her name. Let him/her know who you are, who you work for, what you are interested in, etc. Sit in on their meetings, ask questions, but don’t be a pest. Observe a little etiquette, be polite and humble, and you will reap many rewards later on.
– Eventually you will need to find and get the code and install whatever tools are necessary for that project, build it, stand up a test/working environment, play with it and understand what the various moving parts are. Ask more questions, etc.
– Do you think you are ready to do some coding and submit a patch? Talk to the PTL and get a lightweight bug or maybe a documentation task to work on.
– In order to submit a patch you will need to understand the workflow of the OpenStack Gerrit review system, which takes a little bit of time if you have never used Gerrit before. You’ll need to find and install git-review (a short sketch of the commands appears below). Here is where making friends above really helps out: in every project there are usually a few folks around with the time and patience to help you work through your first review.
– Find a bit of a mentor to help you with the mechanics in case you run into trouble, could just be the PTL if he/she has the time, make your patch, send it in and work through the review process.
– As with most peer-review situations, you’ll need to remember never to take things personally. A negative review comment is not an insult to you and your family! Eventually your patch will either be accepted and merged upstream (yay!) or rejected and possibly abandoned in favor of some alternative (boo!). If rejected, fret not! If the review comments were unclear, talk to the PTL and your new friends to understand the reason why, and simply try again.
It is that easy!
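To make the Gerrit part more concrete, here is a rough sketch of a first patch submission, using Trove as the example project. The bug number is a placeholder, and this assumes the accounts and agreements from the steps above are already in place:

git clone https://git.openstack.org/openstack/trove
cd trove
git review -s                  # one-time setup: configure the gerrit remote
git checkout -b bug/XXXXXX     # topic branch for your change
# ...edit, build, test...
git commit -a                  # mention "Closes-Bug: #XXXXXX" in the commit message
git review                     # push the change to Gerrit for review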
Come join me on Tuesday, April 14th in Santa Clara, California and we’ll chat about how you can begin contributing to OpenStack.
MySQL QA Episode #1: Bash/GNU Tools & Linux Upskill & Scripting Fun
This episode consists of an introduction and 13 parts. See the videos below.
In HD quality (set your player to 720p!)
Part 1: echo, ls, cp, rm, vi, cat, df, du, tee, cd, clear, uname, date, time, cat, mkdir
Part 2: find, wc, sort, shuf, tr, mkdir, man, more
Part 3: Redirection, tee, stdout, stderr, /dev/null, cat
Part 4: Vars, ‘ vs “, $0, $$, $!, screen, chmod, chown, export, set, whoami, sleep, kill, sh, grep, sudo, su, pwd
Part 5: grep, regex (regular expressions), tr
Part 6: sed, regex (regular expressions)
Part 7: awk
Part 8: xargs
Part 9: subshells, shells, sh
Part 10: if, for, while, seq, head, grep & grep -q, sleep, tee, read & more
Part 11: Arrays, lynx, grep, egrep, awk, redirection, variable, printf, while, wget, read
Part 12: Production scripting examples
Part 13: Gnuwin32, Gnuwin32 escaping & path name/binary selection gotchas, untar, unzip, gzip for Windows
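As a small taste of the constructs covered across these parts, here is a quick sketch of my own (not taken from the videos):

mkdir -p /tmp/qa && cd /tmp/qa
echo "hello world" | tee greeting.txt            # tee and redirection (Parts 1 and 3)
grep -q hello greeting.txt && echo "found it"    # grep -q in a conditional (Part 10)
for i in $(seq 1 3); do echo "run $i"; done      # for loops with seq (Part 10)
ls *.txt | xargs wc -l                           # xargs (Part 8)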
If you enjoyed these videos, leave us a comment below!
The post MySQL QA Episode 1: Bash/GNU Tools & Linux Upskill & Scripting Fun appeared first on MySQL Performance Blog.
Welcome to the MySQL QA Training Series!
If you have not read our introductory blog post on pquery yet, I’d recommend reading that one first to get a bit of background. The community is enthusiastic about pquery, and today I am happy to announce a full training series on pquery and more. Whether you are a Linux or MySQL newbie or a seasoned QA engineer, there is something here for you. From Bash scripting (see Episode 1 below) to every aspect of the new pquery framework, it is my hope that you enjoy this series. If you do, please leave us a comment!
Database quality assurance is not as straightforward as it may seem. It’s not a matter of point-and-click, but rather of many intertwined tools and scripts. Beyond that, due to the complexity of the underlying product, it’s about having an overall plan or vision on how to adequately test the product in every aspect.
Take for example the SELECT statement: it allows specifying about 30 different clauses or modifiers (GROUP BY, WHERE, ORDER BY, LIMIT, HAVING, …). Then think further about what one could do inside these clauses, or inside subselects, etc. The number of possible combinations (exhaustive testing) of all commands (and all formats and variations thereof) plus all mysqld options (nearly 500 of them) is for all intents and purposes infinite, and thus seemingly impossible to test.
In Episode 13, an approach is proposed which, in our view, adequately solves this infinite-possibility coverage problem through the use of random spread coverage testing and SQL interleaving.
Knowing your Bash/Linux/GNU scripting well is also an almost definite prerequisite to getting started with mysqld QA. Episode 1 in this series is over 3.5 hours of in-depth (from easy to advanced) training on many often-used Bash commands and topics: from ls to sed, from cp to xargs, and from variables to arrays. Enjoy!
Without further ado, here are the planned upcoming episodes:
MySQL QA Episode 1: Bash/GNU Tools & Linux Upskill & Scripting Fun
MySQL QA Episode 2: Build a MySQL Server: Git, Compiling, Build Tools
MySQL QA Episode 3: Debugging: GDB, Backtraces, Frames, Library Dependencies
MySQL QA Episode 4: QA Framework Setup Time! percona-qa, pquery, reducer & more
MySQL QA Episode 5: Preparing Your QA Run: mtr_to_sql.sh and pquery-run.sh
MySQL QA Episode 6: Analyzing & Filtering: pquery-prep-red.sh, -clean-known.sh & pquery-results.sh
MySQL QA Episode 7: Reducing Testcases for Beginners: single-threaded reducer.sh
MySQL QA Episode 8: Reducing Testcases for Engineers: tuning reducer.sh
MySQL QA Episode 9: Reducing Testcases for Experts: multi-threaded reducer.sh
MySQL QA Episode 10: Reproducing and Simplifying: How to get it Right
MySQL QA Episode 11: Valgrind Testing: Pro’s, Con’s, Why and How
MySQL QA Episode 12: Multi-node Cluster Testing Using Docker
MySQL QA Episode 13: A Better Approach to all MySQL Regression, Stress & Feature Testing: Random Coverage Testing & SQL Interleaving
As episodes are finished, the series titles above will be linked, each with a short introduction, so it’s easy to check this page for updates.
One of our Percona Support customers recently reported that Percona XtraBackup failed with a page corruption error on an InnoDB table. The customer thought it was a problem or bug in the Percona XtraBackup tool. After investigation we found that an InnoDB page was actually corrupted; the Percona XtraBackup tool caught the error as expected, and hence the backup job failed.
I thought this would be an interesting topic and worthy of a blog post. In this article I will describe the innochecksum tool, when and how to use it, and what the possible fixes are if an InnoDB table suffers from page corruption.
The innochecksum tool is an offline tool that prints checksums for InnoDB files. It reads an InnoDB tablespace file, calculates the checksum for each page and reports mismatches, if any. A checksum mismatch is an indication of corrupt pages. Being an offline tool, innochecksum can’t be used on a tablespace file that a MySQL server is currently using, so you need to shut down the server before running it. If you run innochecksum against a running MySQL server, there is a chance that it crashes or reports a bad checksum for a good page, resulting in false positives: pages in a tablespace opened by mysqld may be dirty and not yet checksummed by the InnoDB storage engine itself.
The point: don’t run innochecksum against a running server.
InnoDB corruption can be caused by many factors (e.g. power loss, faulty hardware, bugs). When reading pages from a tablespace on disk, the InnoDB storage engine validates the calculated checksum against the checksum stored in each page. If InnoDB finds a page checksum mismatch, it will force down the MySQL server.
Let me show you a page corruption error identified by Percona XtraBackup during a backup run in which the backup failed afterward:
xtrabackup: Database page corruption detected at page 25413, retrying...
xtrabackup: Database page corruption detected at page 25413, retrying...
xtrabackup: Database page corruption detected at page 25413, retrying...
First, we need to identify if the tablespace is really corrupted for that particular table. I do that with the help of the innochecksum utility as shown below. As I mentioned earlier, make sure to shut down MySQL before using the innochecksum tool.
$ innochecksum -p 25413 /path/to/datadir/database_name/table_name.ibd
I passed the -p (page) flag so that innochecksum only checks the specific page that Percona XtraBackup reported as corrupt. Without any option, the innochecksum tool will check the entire tablespace for corruption, which requires additional server downtime. The innochecksum tool also supports the -d (debug) option to print the checksum for each page and the -v (verbose) option to print a progress indicator. You can find more details in the manual. If the tool reports page corruption, then the table really is corrupted, as below:
page 25413 invalid (fails log sequence number check)
In order to fix this issue, the first thing you should try is to mysqldump the corrupted table. If mysqldump succeeds, the problem most likely exists in the secondary indexes of that tablespace: the mysqldump utility doesn’t touch secondary indexes, since indexes are recreated after all rows are inserted when the dump is reloaded. In that case, I would suggest the following options to fix the corruption.
— Execute OPTIMIZE TABLE on that table, which rebuilds the indexes. The table will be locked during the operation prior to MySQL 5.6.17; since MySQL 5.6.17, OPTIMIZE TABLE is an online operation.
— Rebuild the table with the pt-online-schema-change tool from Percona Toolkit. This gives the same result as OPTIMIZE TABLE in a non-blocking way, as pt-online-schema-change is an online schema change tool.
— Drop all secondary indexes and then recreate them. The table will be locked during that operation for writes only. Again, you can use the pt-online-schema-change tool for this purpose without sacrificing reads/writes on the table while dropping and recreating the indexes.
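As a sketch of those options (database and table names are hypothetical; pick the approach that fits your uptime requirements):

$ mysqldump database_name table_name > table_name.sql     # does the dump succeed?
$ mysql -e "OPTIMIZE TABLE database_name.table_name"      # rebuild the table and its indexes
$ pt-online-schema-change --alter "ENGINE=InnoDB" \
    --execute h=localhost,D=database_name,t=table_name    # online rebuild, non-blocking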
Finally, I would suggest re-running the innochecksum tool to verify the table’s integrity again, as this will make sure there is no more page corruption. In this case we found that the table was actually corrupted; fixing the corruption by backing up and reloading the table solved the problem, and Percona XtraBackup ran fine during the next run.
It is possible that mysqldump crashes a MySQL server on a corrupted table. Fortunately, Percona Server contains the innodb_corrupt_table_action variable, which you can enable. The variable is dynamic, which means enabling it doesn’t require a MySQL server restart. Prior to Percona Server 5.6, innodb_corrupt_table_action was known as innodb_pass_corrupt_table. Once you enable this option, you can try mysqldump again. If you are using Oracle MySQL, I would suggest trying innodb_force_recovery in case mysqldump fails to dump the table contents.
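A sketch of that workflow on Percona Server 5.6, with hypothetical names; here 'warn' is used as the value, telling the server to flag the corruption and continue rather than crash:

$ mysql -e "SET GLOBAL innodb_corrupt_table_action = 'warn'"
$ mysqldump database_name table_name > table_name.sql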
As a side note, if your backup completes without any errors, it means your InnoDB tables don’t have any page checksum mismatches or corruption. Percona XtraBackup validates page checksums and, in case of errors, it logs the error and exits, as I mentioned above.
There is also a modified version of innochecksum made available by Facebook’s Mark Callaghan, which can be found in this bug report and provides extra stats on tablespace undo blocks. There is another tool, made by Google’s Jeremy Cole, known as the InnoDB Ruby Tool, for examining the internals of InnoDB.
- innochecksum is an offline InnoDB checksum tool. This means you must stop the MySQL server; otherwise it produces an “Unable to lock file” error (since MySQL 5.7).
- Old versions of innochecksum only support files up to 2GB in size. However, since MySQL 5.7, innochecksum supports files greater than 2GB in size.
- The Percona Server variable innodb_corrupt_table_action is supported for tables existing in their own tablespace (i.e. innodb_file_per_table).
- If you are using compressed tables (ROW_FORMAT=COMPRESSED), then you must use innochecksum from MySQL 5.7.2 or greater, as earlier versions of innochecksum don’t support compressed tables. Check this bug for details.
New Features for the innochecksum tool from MySQL 5.7:
- As I mentioned above, since MySQL 5.7 innochecksum supports file sizes greater than 2GB.
- Since MySQL 5.7 you can log the output with the --log option.
- The --page-type-summary option was added for page type summaries.
- MySQL 5.7 also includes another nice option, --page-type-dump, which dumps the details of each page to standard output (STDOUT) or standard error (STDERR).
- Since MySQL 5.7 innochecksum can be executed on multiple tablespace files, including multiple user-defined tablespace files and multiple system tablespace files.
You can read more about this is in the MySQL 5.7 manual page of innochecksum.
In this post, we identified InnoDB page corruption using the logs generated by Percona XtraBackup and fixed it by using the mysqldump tool. Unfortunately, Percona XtraBackup will not always fail in the same way when it finds corruption, so in some cases it’s not easy to tell whether it has failed due to a bad checksum or due to a bug of its own. In most cases, though, page corruption is the culprit if Percona XtraBackup fails to complete.
To summarize, I would say that Percona XtraBackup is a good way of verifying whether or not InnoDB pages are corrupted – and you can also verify the same thing via the mysqldump utility.
My webinar “Multi-threaded Replication in MySQL 5.6 and 5.7” on February 25 generated several excellent questions following the presentation (available here for playback along with the slides). I didn’t have time to answer many of the questions during the session, so in this post I answer all of them. Thanks to everyone who attended!
Q: What do you expect from MTS with logical clock? Do you think performance will be as good as with per-database parallelization?
A: MTS in 5.6 is not usable if you have a single database, and while I do not have numbers, single-database setups are quite frequent. With 5.7, everyone should be able to benefit from multi-threaded replication.
Q: When MySQL 5.6 was released, the performance of MTS was lower than in 5.5, for example. Is this addressed now?
A: I am not sure which specific issue or bug you are referring to, but if your data is spread across several databases, MTS in 5.6 should give you a performance benefit.
Q: How does Percona XtraBackup work with MTS? What are the changes in mysqldump?
A: As long as you are using GTIDs, you can safely take a backup from a slave using multi-threaded replication. With XtraBackup, add the --slave-info option as usual when taking a backup from a slave; with mysqldump, use --master-data instead of --dump-slave.
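A rough sketch of both approaches when backing up such a slave (the backup path is hypothetical, and options beyond those mentioned above are illustrative):

$ innobackupex --slave-info /data/backups/
$ mysqldump --all-databases --single-transaction --master-data=2 > backup.sql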
Q: For the checkpoint position, what if an MTS thread applies an INSERT before the table it inserts into has been created? How will the MTS checkpoint keep track of these transactions being applied by different threads on the slave?
A: The worker threads track all execution gaps to make sure that out-of-order execution is safe and to be able to replay all events without forgetting any of them. So it is not possible that a worker thread will insert data in a table that has not been created yet.
Q: Can you use MTS with all binlog_format options?
Q: Is there any way to have the threads work so that no database contention happens?
A: The short answer is no: the goal of the worker threads is to execute the incoming transactions as fast as possible. If that results in database contention, you should probably decrease the number of worker threads.
Q: Why doesn’t multi-threaded replication perform well on a single DB?
A: With 5.6, parallelization is based on isolating the transactions going to each database. If you only have a single DB, no parallelization is possible. You should look at 5.7 and the logical clock algorithm.
Q: Are there any implications with regards to GTIDs and Multi-Threaded replication when running a Master-to-Master setup?
A: I cannot think of any; however, I am not sure master-master replication is still very relevant when using GTIDs.
Q: Is there any inconvenience with memory or cache when using more workers than the number of databases?
A: If the number of workers is just a bit higher than the number of databases (like 5 workers for 3 databases), there should not be any issue. However with ridiculously high numbers (500 workers for 2 databases), there might be performance degradation. I have not tested such cases, so I cannot give a good answer there. However, the idea is that the number of workers should be close to the number of databases and should not exceed the number of cores on the server.
Q: Is there multi-threaded replication in MySQL 5.7?
A: Yes, multi-threaded replication is available in MySQL 5.7 and offers improvements compared to MySQL 5.6 (mainly the parallelization with logical clock).
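As a sketch, enabling logical-clock parallelism on a MySQL 5.7 slave looks roughly like this (8 workers is just an example value):

$ mysql -e "STOP SLAVE SQL_THREAD"
$ mysql -e "SET GLOBAL slave_parallel_type = 'LOGICAL_CLOCK'"
$ mysql -e "SET GLOBAL slave_parallel_workers = 8"
$ mysql -e "START SLAVE SQL_THREAD"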
Q: Have you used DIM_STAT to create load and measure slave lag? Any interesting takeaways from that effort?
A: I used sysbench to generate load and Seconds_Behind_Master from SHOW SLAVE STATUS to measure slave lag. That mainly showed that if your workload is a good fit for MTS (multiple databases with load shared evenly across them), the performance benefits can be significant.
Q: Does multi-threaded replication also work with Percona XtraDB Cluster/Percona Server?
A: Percona Server 5.6 is based on MySQL 5.6, so you can use multi-threaded replication exactly as you would use it on MySQL 5.6.
On Percona XtraDB Cluster, it is a bit different: replication inside the cluster uses Galera replication, which has nothing to do with MySQL replication. Note that Galera has offered parallel replication from the beginning (parallel applying of the replicated writesets to be accurate). However if you are using asynchronous replicas, these replicas can benefit from multi-threaded replication if they are running on MySQL/Percona Server 5.6.
Q: What happens to cross db transactions? Do they not replicate?
A: These transactions will replicate, but they will have to wait until all preceding transactions have been executed. Stated differently, cross db transactions introduce serialization, so you should avoid them as much as possible if you want to benefit from parallel applying.
To be accurate, if you have db1, db2 and db3 and if you execute a transaction involving db1 and db2, transactions on db3 can still be applied in parallel. So if you have many databases, cross db transactions may not be that bad.
Q: When using MTS without GTIDs, is “Seconds_Behind_Master” from SHOW SLAVE STATUS valid?
A: Seconds_Behind_Master is based on Exec_Master_Log_Pos. And with MTS, Exec_Master_Log_Pos is not reliable as it indicates the position of the latest checkpoint and not the position of the latest executed transaction. However in practice, checkpoints will happen at least every 300ms by default, so Seconds_Behind_Master is still a good indication of the replication lag. Of course you should keep in mind the usual limitations, such as with multi-tiered replication (if the setup is A->B->C, C will report its lag against B, not against A) or when there is a replication error (then Seconds_Behind_Master is NULL).
Q: How can all the servers realistically be restarted at the same time? There could be intervals of a few seconds if you have multiple servers. [This question was asked when I explained how to enable GTID replication.]
A: With MySQL 5.6, the requirements are pretty strict when it comes to enabling GTIDs: all servers must be restarted at the same point in time in the replication stream. As you mention, it is difficult if you have several servers, so the only viable solution is: stop the writes on the master, wait until replication has caught up on all slaves, stop all servers, change the configuration, restart all servers.
What this means is that there is a window when all servers are down. This is a showstopper for many people, which is why Percona Server 5.6 now includes a patch from Facebook that allows an online migration to GTIDs, and why MySQL 5.7.6 also offers this option.
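For reference, a minimal sketch of the my.cnf settings each server needs in place before that simultaneous restart on stock MySQL 5.6:

[mysqld]
log_bin                  = mysql-bin
log_slave_updates        = ON
gtid_mode                = ON
enforce_gtid_consistency = ON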
* * *
Thanks for all of the great questions – and I hope to see you next month at OpenStack Live and the Percona Live MySQL Conference and Expo 2015 (April 13-16) – both at the Santa Clara conference center in sunny Silicon Valley. Get more info here.
The post Q&A: Multi-threaded Replication in MySQL 5.6 and MySQL 5.7 appeared first on MySQL Performance Blog.
Geographic information systems (GIS) are used by application developers to incorporate geographic information into their datasets to create apps with cool features, such as finding the address of the best steakhouse in town or the locations of local points of historical interest (the list is endless). In a nutshell, GIS captures, stores, checks and displays data related to positions on the Earth’s surface.
Next Wednesday I’m going to provide an introduction to GIS functionality in MySQL, along with an overview of recent changes. I invite you to register now for this free webinar titled “MySQL and Geospatial Programming.” It starts at 10 a.m. Pacific time on March 18.
In this talk I’ll discuss:
- The various tasks that deal with geocoding
- How MySQL can solve problems such as finding points within a radius (e.g., “Find the 10 closest coffee shops”), among others; a quick sketch follows this list
- Highlights from some of the new features bundled in the upcoming 5.7 release, and what benefits they can bring to your applications
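As a taste of the radius-style queries mentioned above, here is a minimal sketch. It assumes a hypothetical coffee_shops table with a POINT column named location, plus MySQL 5.7’s ST_Distance_Sphere() function, which returns distances in meters:

$ mysql -e "SELECT name
    FROM coffee_shops
    WHERE ST_Distance_Sphere(location, POINT(-121.97, 37.39)) <= 5000
    ORDER BY ST_Distance_Sphere(location, POINT(-121.97, 37.39))
    LIMIT 10"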
I’m happy to field questions in advance in the comments section below. This webinar, like all Percona webinars, will be recorded. In addition to the video, my slides will also be available for download.
Register now and I’ll talk to you next week!
The post MySQL and geospatial programming: An introduction to GIS appeared first on MySQL Performance Blog.
The MySQL Performance Schema exposes so much data that it’s not trivial to learn, configure, and use. With recently released Percona Agent 1.0.11 you can get query metrics – like min, max, and average query execution time – with a few clicks:
Click “Apply” and about two minutes later you’ll have query metrics from Performance Schema, collected and sent every minute.
Percona Cloud Tools (PCT) and Percona Agent handle all the details. You’ll need MySQL (or Percona Server) 5.6 and Percona Agent 1.0.11 or newer. One caveat at the moment: it only works for local MySQL instances (so not Amazon RDS). This limitation should be fixed soon; we’re already working on it.
Why use Performance Schema? We prefer Query Analytics with MySQL slow logs (especially Percona Server slow logs, which expose even more metrics) because the slow log exposes the most metrics of any source. But sometimes the slow log just isn’t an option, so Performance Schema is the next best choice, and that choice means tradeoffs. For example, Performance Schema does not expose actual query examples (just fingerprints), so EXPLAIN does not work.
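For the curious, the metrics the agent collects look roughly like the result of this query (only a sketch; note that Performance Schema timer columns are in picoseconds, hence the division):

$ mysql -e "SELECT DIGEST_TEXT, COUNT_STAR,
    MIN_TIMER_WAIT/1e12 AS min_sec,
    AVG_TIMER_WAIT/1e12 AS avg_sec,
    MAX_TIMER_WAIT/1e12 AS max_sec
    FROM performance_schema.events_statements_summary_by_digest
    ORDER BY SUM_TIMER_WAIT DESC LIMIT 10"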
For those who’ve been following PCT development, you know that Percona Agent 1.0.5 first introduced support for Performance Schema. What’s new in 1.0.11 is everything – we completely rewrote this part of the agent. It’s so much better that it’s now the required minimum version for using Query Analytics with Performance Schema. Upgrading is really easy: just run the single command line you used to install the agent and it will auto-update.
MySQL Performance Schema exposes a lot of data and insights into the server, but query metrics are perhaps the most important because the primary job of your MySQL database is to execute queries. That’s why Percona Cloud Tools makes Query Analytics with Performance Schema (and slow logs) so easy: to help you focus on the essential and leave the details to the tools.
Percona Cloud Tools is in beta, so it’s still free to sign up and free to use all the tools and features.