MySQL

Orchestrator-agent: How to recover a MySQL database

MySQL Performance Blog - Thu, 2016-04-14 00:42

In our previous post, we showed how Orchestrator can handle complex replication topologies. Today we will discuss how Orchestrator-agent complements Orchestrator by monitoring our servers and providing snapshot and recovery capabilities when there are problems.

Please be aware that the following scripts and settings in this post are not production ready (missing error handling, etc.) –  this post is just a proof of concept.

What is Orchestrator-agent?

Orchestrator-agent is a sub-project of Orchestrator. It is a service that runs on the MySQL servers, and it gives us the seeding/deploying capability.

In this context, “seeding” means copying MySQL data files from a donor server to the target machine. Afterwards, MySQL can start on the target machine and use the new data files.

Functionalities (list from Github):
  • Detection of the MySQL service, starting and stopping (start/stop/status commands provided via configuration)
  • Detection of MySQL port, data directory (assumes configuration is /etc/my.cnf)
  • Calculation of disk usage on data directory mount point
  • Tailing the error log file
  • Discovery (the mere existence of the orchestrator-agent service on a host may suggest the existence or need of existence of a MySQL service)
  • Detection of LVM snapshots on MySQL host (snapshots that are MySQL specific)
  • Creation of new snapshots
  • Mounting/umounting of LVM snapshots
  • Detection of DC-local and DC-agnostic snapshots available for a given cluster
  • Transmitting/receiving seed data

The following image shows an overview of a specific host:

How does it work?

The Orchestrator-agent runs on the MySQL server as a service, and it connects to Orchestrator through an HTTP API. Orchestrator-agent is controlled by Orchestrator. It is based on LVM and LVM snapshots: without them, it cannot work.

The agent relies on external scripts/commands, for example to:

  • Detect where in the local and remote DCs it can find an appropriate snapshot
  • Find said snapshot on server, mount it
  • Stop MySQL on target host, clear data on MySQL data directory
  • Initiate send/receive process
  • Cleanup data after copy

If these external commands are configured, a snapshot can be created through the Orchestrator web interface. The agent receives the task through the HTTP API and calls an external script, which creates a consistent snapshot.

Orchestrator-agent configuration settings

There are many configuration options, some of which we’ll list here:

  • SnapshotMountPoint – Where the agent should mount the snapshot.
  • AgentsServer – The address of the agents server, for example “http://192.168.56.111:3001”.
  • CreateSnapshotCommand – Command that creates a consistent snapshot.
  • AvailableLocalSnapshotHostsCommand – Shows us the available snapshots on localhost.
  • AvailableSnapshotHostsCommand – Shows us the available snapshots on remote hosts.
  • SnapshotVolumesFilter – Free text which identifies MySQL data snapshots.
  • ReceiveSeedDataCommand – Command that receives the data.
  • SendSeedDataCommand – Command that sends the data.
  • PostCopyCommand – Command to be executed after the seed is complete.
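
Put together, the relevant part of the agent’s configuration might look like the following sketch (the IP address, port, paths and filter string are placeholders taken from the examples in the next section; the PostCopyCommand value is only an illustration, not a tested configuration):

{
  "SnapshotMountPoint": "/tmp/MySQLSnapshot",
  "AgentsServer": "http://192.168.56.111:3001",
  "CreateSnapshotCommand": "/usr/local/orchestrator-agent/create-snapshot.sh",
  "AvailableLocalSnapshotHostsCommand": "lvdisplay | grep \"LV Path\" | awk '{print $3}' | grep my-snapshot",
  "AvailableSnapshotHostsCommand": "echo rep4",
  "SnapshotVolumesFilter": "-my-snapshot",
  "ReceiveSeedDataCommand": "/usr/local/orchestrator-agent/receive.sh",
  "SendSeedDataCommand": "/usr/local/orchestrator-agent/seed.sh",
  "PostCopyCommand": "chown -R mysql:mysql /var/lib/mysql"
}
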
Example external scripts

As we mentioned before, these scripts are not production ready.

"CreateSnapshotCommand": "/usr/local/orchestrator-agent/create-snapshot.sh",

#!/bin/bash
donorName='MySQL'
snapName='my-snapshot'
lvName=`lvdisplay | grep "LV Path" | awk '{print $3}' | grep $donorName`
size='500M'
dt=$(date '+%d_%m_%Y_%H_%M_%S')
mysql -e "STOP SLAVE; FLUSH TABLES WITH READ LOCK; SELECT SLEEP(10);" &>/dev/null &
lvcreate --size $size --snapshot --name orc-$snapName-$dt $lvName

This small script creates a consistent snapshot that the agent can use later, but it locks all the tables for 10 seconds. (Better solutions may exist, but this is a proof-of-concept script.)

"AvailableLocalSnapshotHostsCommand": "lvdisplay | grep \"LV Path\" | awk '{print $3}' | grep my-snapshot",

We can filter the available snapshots based on the “SnapshotVolumesFilter” string.

"AvailableSnapshotHostsCommand": "echo rep4",

You can define a command that shows where the available snapshots in your topology are, or you can use a dedicated slave. In our test, we simply used a dedicated server.

"SnapshotVolumesFilter": "-my-snapshot",

“-my-snapshot” is the filter here.

"ReceiveSeedDataCommand": "/usr/local/orchestrator-agent/receive.sh",

#!/bin/bash
directory=$1
SeedTransferPort=$2

echo "delete $directory"
rm -rf $directory/*
cd $directory/

echo "Start nc on port $SeedTransferPort"
/bin/nc -l -q -1 -p $SeedTransferPort | tar xz

# MySQL generates a new server UUID at startup if auto.cnf is missing
rm -f $directory/auto.cnf

echo "fix ownership on $directory"
chown -R mysql:mysql $directory

The agent passes two parameters when it calls the script, like this:

/usr/local/orchestrator-agent/receive.sh /var/lib/mysql/ 21234

The script cleans the data directory (the seed cannot start while mysqld is running; you first have to stop it from the web interface or the command line), then listens on the specified port and waits for the compressed input. After that, it removes “auto.cnf”, so MySQL creates a new server UUID at start time. Finally, it makes sure every file has the right owner.

"SendSeedDataCommand": "/usr/local/orchestrator-agent/seed.sh",

#!/bin/bash
directory=$1
targetHostname=$2
SeedTransferPort=$3
cd $directory
echo "start nc"
/bin/tar -czf - -C $directory . | /bin/nc $targetHostname $SeedTransferPort

The agent passes three parameters to the script:

/usr/local/orchestrator-agent/seed.sh /tmp/MySQLSnapshot rep5 21234

The first parameter is the mount point of the snapshot, and the second and third are the destination host and the port number. The script simply compresses the data and sends it through “nc”.

Job details

A detailed log is available for every seed. These logs can be really helpful in diagnosing any problems during the seed.

Why do we need Orchestrator-agent?

If you have a larger MySQL topology where you frequently have to provide new servers, or if you have a dev/staging replica set where you want to easily go back to a previous production stage, Orchestrator-agent can be really helpful and save you a lot of time. Finally, you’ll have the time for other fun and awesome stuff!

Feature requests

Orchestrator-agent does its job, but adding a few extra abilities could make it even better:

  • Adding XtraBackup support.
  • Adding mysqlpump/mysqldump/mydumper support.
  • Implementing some kind of scheduling.
  • Batch seeding (seeding to more than one server with one job).

Summary

Shlomi did a great job again, just like with Orchestrator.

Orchestrator and Orchestrator-agent together give us a useful platform to manage our topology and deploy MySQL data files to the new servers, or re-sync old ones.

Categories: MySQL

Evaluating Database Compression Methods: Update

MySQL Performance Blog - Wed, 2016-04-13 20:12

This blog post is an update to our last post discussing database compression methods, and how they stack up against each other. 

When Vadim and I wrote about Evaluating Database Compression Methods last month, we claimed that evaluating database compression algorithms was easy these days because there are ready-to-use benchmark suites such as lzbench.

As easy as it was to do an evaluation with this tool, it turned out it was also easy to make a mistake. Due to a bug in the benchmark we got incorrect results for the LZ4 compression algorithm, and as such made some incorrect claims and observations in the original article. A big thank you to Yann Collet for reporting the issue!

In this post, we will restate and correct the important observations and recommendations that were incorrect in the last post. You can view the fully updated results in this document.

As you can see above, there was little change in compression performance. LZ4 is still the fastest, though not as fast after correcting the issue.

The compression ratio is where our results changed substantially. We reported LZ4 achieving a compression ratio of only 1.89 — by far the lowest among the compression engines we compared. In fact, after our correction, the ratio is 3.89 — better than Snappy and on par with QuickLZ (while also having much better performance).

LZ4 is a superior engine in terms of the compression ratio achieved versus the CPU spent.

The compression versus decompression graph now shows LZ4 has the highest ratio between compression and decompression performance of the compression engines we looked at.

The compression speed was not significantly affected by the LZ4 block size, which makes it great for compressing both large and small objects. The highest compression speed was achieved with a block size of 64KB — not the largest, but not the smallest of the sizes tested, either.

We saw some positive impact on the compression ratio from increasing the block size. However, increasing the block size beyond 64KB did not substantially improve the compression ratio, making 64KB an excellent block size for LZ4: it gives the best compression speed and about as-good-as-it-gets compression. A 64KB block size works well for other data as well, though we can’t say how universal it is.

Scatterplot with compression speed vs compression ratio

 

Updated Recommendations

Most of our recommendations still stand after reviewing the updated results, with one important change. If you’re looking for a fast compression algorithm that has decent compression, consider LZ4.  It offers better performance as well as a better compression ratio, at least on the data sets we tested.

 

Categories: MySQL

Percona Live featured talk with Ying Qiang Zhang — What’s new in AliSQL: Alibaba’s branch of MySQL

MySQL Performance Blog - Tue, 2016-04-12 20:04

Welcome to the next Percona Live featured talk with Percona Live Data Performance Conference 2016 speakers! In this series of blogs, we’ll highlight some of the speakers that will be at this year’s conference, as well as discuss the technologies and outlooks of the speakers themselves. Make sure to read to the end to get a special Percona Live registration bonus!

In this Percona Live featured talk, we’ll meet Ying Qiang Zhang, Database Kernel Expert for the Alibaba Group. His talk will be What’s new in AliSQL — Alibaba’s branch of MySQL. The session introduces the Alibaba Group’s branch of Oracle MySQL — AliSQL — and explains how AliSQL can support 140,000 order creations per second. I had a chance to speak with Ying and learn a bit more about AliSQL:

Percona: Give me a brief history of yourself: how you got into database development, where you work, what you love about it.

Ying: My MySQL journey began when I was a graduate student. I participated in lots of projects that used MySQL for storage. At that time, I thought MySQL was a masterpiece of computer science theory and a well-demonstrated engineering implementation of a real RDBMS. By referencing the MySQL source code, I solved many problems in my projects.

Before joining the Alibaba group, I was a MySQL Kernel developer at Baidu Co., Ltd. I joined the Alibaba group in 2014 as the developer and maintainer of AliSQL, a MySQL fork of Alibaba.

The dramatic growth of Alibaba’s E-Commerce business puts extremely harsh demands on our database system. AliSQL faces bigger and bigger challenges. I like the challenges, and have tried my best to make AliSQL faster, safer and more scalable – which in turn makes our OLTP system more efficient and smooth.

Percona: Your talk is going to be on “What’s new in AliSQL – Alibaba’s branch of MySQL” So this was a version of MySQL that was put together specifically for the Alibaba group’s online commerce? What prompted that need and why a special MySQL implementation?

Ying: AliSQL is a fork of MySQL integrated with Alibaba’s business characteristics and requirements (based on a community version). The primary incentives for maintaining this fork are:

  1. As the largest E-Commerce platform in the world, the throughput of Alibaba’s online transaction processing system is huge, especially on days like the Alibaba Singles’ Day shopping festival (China’s version of “Cyber Monday” or “Black Friday”). The databases behind the OLTP system face the challenge of high throughput, high concurrency and low latency at the same time (requirements the community version of MySQL cannot meet).
  2. Under high-stress scenarios, we found some MySQL bugs that impact system stability. We couldn’t wait for new releases of the community version to fix these bugs. Usually we have to fix the bugs in a very limited time, and then we report the bugs, along with the patch, to the community.
  3. In Alibaba, the differences between the responsibilities of an application developer and a database administrator are significant. We have a very professional DBA team, and DBAs are well aware of the database system and need more features to manipulate MySQL: flow control, changing/controlling execution plans, controlling the watermark, setting blacklists without the involvement of application developers, etc. The community version of MySQL lacks these features, and private cloud users need them even more than public cloud users.

Percona: Are there differences in the online processing experience in China that are different than other countries? Especially for the Singles’ Day event?

Ying: China has a huge population and a huge netizen base. With the rapid growth of China’s economy, the purchasing power of Chinese netizens is growing stronger and stronger. According to published data, Alibaba’s sales during the Singles’ Day shopping festival in 2014 were 8.99 billion USD, which was almost five times more than Cyber Monday or Black Friday online sales in the United States for the same year. In 2015, the amount reached 14.3 billion USD. Alibaba’s E-Commerce platform sales reached one billion RMB in the first 1 minute and 12 seconds of Singles’ Day 2015. Millions of people try to buy the same commodity at the same time on that day. This is a huge challenge for Alibaba’s online transaction processing system, and of course, for the databases sitting in the backend.

Percona: What do you see as an issue that we the database community needs to be on top of with AliSQL? What keeps you up at night with regard to the future of MySQL?

Ying: In my opinion, as an open source project MySQL has been focused on single users or the public cloud users (the “M” of “LAMP”). But with the growth of MySQL, we need to pay more attention to enterprise and big private cloud users. Some features such as administration, scalable cluster solutions, and performance optimizations for extreme scenarios are essential to enterprise users.

Percona: What are you most looking forward to at Percona Live Data Performance Conference 2016?

Ying: I am looking forward to communicating with MySQL users from all over the world, to see if we can help the community to grow even bigger with my two little cents. I am also looking forward to making more friends in the MySQL world.

You can read more about AliSQL at Ying’s website: zhangyingqiang.com.

Want to find out more about Ying and AliSQL? Register for Percona Live Data Performance Conference 2016, and see his talk What’s new in AliSQL — Alibaba’s branch of MySQL. Use the code “FeaturedTalk” and receive $100 off the current registration price!

Percona Live Data Performance Conference 2016 is the premier open source event for the data performance ecosystem. It is the place to be for the open source community as well as businesses that thrive in the MySQL, NoSQL, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Data Performance Conference will be April 18-21 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.

Categories: MySQL

Is Adaptive Hash Index in InnoDB right for my workload?

MySQL Performance Blog - Tue, 2016-04-12 14:26

This blog post will discuss what the Adaptive Hash Index in InnoDB is used for, and whether it is a good fit for your workload.

Adaptive Hash Index (AHI) is one of the least understood features in InnoDB. In theory, it magically determines when it is worth supplementing InnoDB B-Tree-based indexes with fast hash lookup tables and then builds them automatically without a prompt from the user.

Since AHI is supposed to work “like magic,” it has very little configuration available. In the early versions there were no configuration options at all. Later versions added innodb_adaptive_hash_index to disable AHI if required (by setting it to “0” or “OFF”). MySQL 5.7 added the ability to partition AHI via the innodb_adaptive_hash_index_parts variable. (FYI, this feature has existed in Percona Server as innodb_adaptive_hash_index_partitions since version 5.5.)
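
For reference, here is a minimal sketch of both knobs (innodb_adaptive_hash_index is a dynamic variable; the parts variable, as far as I know, can only be set at server startup):

-- Disable AHI at runtime if it turns out to hurt your workload
SET GLOBAL innodb_adaptive_hash_index = OFF;

-- MySQL 5.7: split AHI into several partitions to reduce mutex contention
-- (startup option, e.g. in my.cnf under [mysqld]):
--   innodb_adaptive_hash_index_parts = 8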

To understand AHI’s impact on performance, think about it as if it were a cache. If an AHI “Hit” happens, we have much better lookup performance; if it is an AHI “Miss,” then performance gets slightly worse (as checking a hash table for matches is fast, but not free).

This is not the only part of the equation though. In addition to the cost of lookups, there is also the cost of AHI maintenance. We can compare maintenance costs – which can be seen in terms of rows added to and removed from AHI – to successful lookups. A high ratio means a lot of lookups are sped up at a low cost. A low ratio means the opposite: we’re probably paying too much in maintenance costs for little benefit.

Finally, there is also the cost of extra contention. If your workload consists of lookups to a large number of indexes or tables, you can probably reduce the impact by setting innodb_adaptive_hash_index_parts appropriately. If there is a hot index, however, AHI could become a bottleneck at high concurrency and might need to be disabled.

To determine if AHI is likely to help your workload, verify that the AHI hit ratio and the ratio of successful lookups to maintenance operations are as high as possible.
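
One way to eyeball these ratios is through the INNODB_METRICS table (a sketch using counter and module names as found in MySQL 5.7’s INFORMATION_SCHEMA.INNODB_METRICS; some of the adaptive_hash_* counters are disabled by default and have to be switched on first):

-- Enable the AHI-related counters if they are not already on
SET GLOBAL innodb_monitor_enable = 'module_adaptive_hash';

-- Raw numbers: hash lookups vs. B-tree fallbacks, plus rows added/removed for maintenance
SELECT NAME, COUNT
  FROM information_schema.INNODB_METRICS
 WHERE NAME LIKE 'adaptive_hash%';

-- Approximate AHI hit ratio: searches answered from the hash vs. all searches
SELECT ahi.COUNT / (ahi.COUNT + bt.COUNT) AS ahi_hit_ratio
  FROM information_schema.INNODB_METRICS ahi,
       information_schema.INNODB_METRICS bt
 WHERE ahi.NAME = 'adaptive_hash_searches'
   AND bt.NAME = 'adaptive_hash_searches_btree';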

Let’s investigate what really happens for some simple workloads. I will use a basic sysbench lookup by primary key – the simplest workload possible. Even in this simple case, we’ll find a number of different behaviors.

For this test, I am using MySQL 5.7.11 with a 16GB buffer pool. The base command line for sysbench is:

sysbench --test=/usr/share/doc/sysbench/tests/db/select.lua   --report-interval=1 --oltp-table-size=1 --max-time=0 --oltp-read-only=off --max-requests=0 --num-threads=1 --rand-type=uniform --db-driver=mysql --mysql-password=password --mysql-db=test_innodb  run

Looking up a single row

Notice oltp-table-size=1 from above; this is not a mistake, but a test of how AHI behaves in a very basic case:

And it works perfectly: there is a 100% hit ratio with no AHI maintenance operations to speak of.

10000 rows in the table

When we change the OLTP table setting to oltp-table-size=10000 , we get the following picture:

Again, we see almost no overhead. There is a rare incident of 16 rows or so being added to AHI (probably due to an AHI hash collision). Otherwise, it’s almost perfect.

10M rows in the table

If we change the setting to oltp-table-size=10000000, we now have more data (but still much less than buffer pool size):

In this case, there is clearly a warm-up period before we get close to the 100% hit ratio – and it never quite hits 100% (even after a longer run). Maintenance operations also appear to keep going without showing signs of asymptotically reaching zero. My take on this is that with 10M rows there is a higher chance of hash collisions – causing more AHI rebuilding.

500M rows in the table, uniform distribution

Let’s now set the OLTP table size as follows: oltp-table-size=500000000. This will push the data size beyond the InnoDB buffer pool size.

Here we see a lot of buffer pool misses, causing a very poor AHI hit ratio (never reaching 1%). We can also see a large overhead of tens of thousands of rows added/removed from AHI. Obviously, AHI is not adding any value in this case.

500M rows, Pareto distribution

Finally, let’s use the setting oltp-table-size=500000000, and add --rand-type=pareto. The --rand-type=pareto setting enables a skewed distribution, a more typical scenario for many real life data access patterns.

In this case we see the AHI hit ratio gradually improving, and reaching close to 50%. The  AHI maintenance overhead is going down, but never reaches anything that suggests it is worth it.

It is important to note in both this and the previous case that AHI has not reached a “steady state” yet. A steady state condition shows the number of rows added and removed becoming close to equal.

As you can see from the math in the workloads shown above, the Adaptive Hash Index in InnoDB “magic” doesn’t always happen! There are cases when AHI is indeed helpful, and then there are others when AHI adds a lot of data structure maintenance overhead and takes memory away from buffer pool – not to mention the contention overhead. In these cases, it’s better that AHI is disabled.

Unfortunately, AHI does not seem to have the logic built-in to detect if there is too much “churn” going on to make maintaining AHI worthwhile.

I suggest using these numbers as a general guide to decide whether AHI is likely to benefit your workload. Make sure to run a test/benchmark to be sure.

Interested in learning more about other InnoDB Internals? Please join me for the Innodb Architecture and Performance Optimization Tutorial at Percona Live!

Categories: MySQL

Dealing with Jumbo Chunks in MongoDB

MySQL Performance Blog - Mon, 2016-04-11 22:39

In this blog post, we will discuss how to deal with jumbo chunks in MongoDB.

You are a MongoDB DBA, and your first task of the day is to remove a shard from your cluster. It sounds scary at first, but you know it is pretty easy. You can do it with a simple command:

db.runCommand( { removeShard: "server1_set6" } )

MongoDB then does its magic. It finds the chunks and databases and balances them across all other servers. You can go to sleep without any worry.

The next morning when you wake up, you check the status of that particular shard and you find the process is stuck:

"msg" : "draining ongoing",
"state" : "ongoing",
"remaining" : {
        "chunks" : NumberLong(3),
        "dbs" : NumberLong(0)
}

There are three chunks that for some reason haven’t been migrated, so the removeShard command is stalled! Now what do you do?

Find chunks that cannot be moved

We need to connect to mongos and check the catalog:

mongos> use config
switched to db config
mongos> db.chunks.find({shard:"server1_set6"})

The output will show three chunks, with minimum and maximum _id keys, along with the namespace where they belong. But the last part of the output is what we really need to check:

{
    [...]
    "min" : { "_id" : "17zx3j9i60180" },
    "max" : { "_id" : "30td24p9sx9j0" },
    "shard" : "server1_set6",
    "jumbo" : true
}

So, the chunk is marked as “jumbo.” We have found the reason the balancer cannot move the chunk!

Jumbo chunks and how to deal with them

So, what is a “jumbo chunk”? It is a chunk whose size exceeds the maximum specified in the chunk size configuration parameter (which has a default value of 64 MB). When a chunk is larger than that limit, the balancer won’t move it.
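
As a side note, the configured chunk size lives in the config database and can be checked (or changed) from the mongos shell. A quick sketch, where 64 MB is the default value:

mongos> use config
mongos> db.settings.find({ _id: "chunksize" })

// Change it if needed -- this affects future splits, it does not shrink existing jumbo chunks
mongos> db.settings.save({ _id: "chunksize", value: 64 })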

The way to remove the flag from those chunks is to split them manually. There are two ways to do it:

  1. You can specify at what point to split the chunk, specifying the corresponding _id value. To do this, you really need to understand how your data is distributed and what the settings are for min and max in order to select a good splitting point.
  2. You can just tell MongoDB to split it by half, letting it decide which is the best possible _id. This is easier and less error prone.

To do it manually, you need to use sh.splitAt(). For example:

sh.splitAt("dbname", { _id: "19fr21z5sfg2j0" })

In this command, you are telling MongoDB to split the chunk in two using that _id as the cut point.

If you want MongoDB to find the best split point for you, use the sh.splitFind() command. In this particular case, you only need to specify a key (any key) that is part of the chunk you want to split. MongoDB will use that key to find that particular chunk, and then divide it into two parts using the _id that sits in the middle of the list.

sh.splitFind("dbname", { _id : "30td24p9sx9j0" })

Once the three chunks have been split, the jumbo flag is removed and the balancer can move them to a different server. removeShard will complete the process and you can drink a well-deserved coffee.

Categories: MySQL

Downloading MariaDB MaxScale binaries

MySQL Performance Blog - Mon, 2016-04-11 19:13

In this blog post we’ll discuss a caveat when downloading MariaDB MaxScale binaries.

Following the previous performance results in my last two posts on sysbench and primary keys (https://www.percona.com/blog/2016/04/07/mysql-5-7-sysbench-oltp-read-results-really-faster/ and https://www.percona.com/blog/2016/03/28/mysql-5-7-primary-key-lookup-results-is-it-really-faster/), I wanted to measure overhead from proxies servers like ProxySQL and MaxScale.

Unfortunately, I found that MaxScale binaries are not available without registering on the MariaDB.com portal. That in itself isn’t a bad thing, but to complete the registration you need to agree to an Evaluation Agreement. The agreement requests you comply with MariaDB Enterprise Terms and Conditions (you can find the text of the agreement here: MariaDB_Enterprise_Subscription_Agreement_US_v14_0).

Personally, I don’t agree with MariaDB’s “Evaluation Agreement” or the “MariaDB Enterprise Terms and Conditions,” so it left me without binaries!

In general, I strongly advise you to carefully read both documents – or, even better, ask your legal team if you can accept MariaDB’s “Evaluation Agreement.”

Fortunately, MaxScale’s source code is available from https://github.com/mariadb-corporation/MaxScale. I had to build binaries myself, which I will share with you in this post! You can get MaxScale 1.4.1 binaries here https://www.percona.com/downloads/TESTING/MaxScale/. No “Evaluation Agreement” needed!

I will follow up in a future post with my proxies testing results.

Categories: MySQL

MySQL Data at Rest Encryption

MySQL Performance Blog - Fri, 2016-04-08 19:29

This blog post will discuss the issues and solutions for MySQL Data at Rest encryption.

Data at Rest Encryption is not only a good-to-have feature, but it is also a requirement for HIPAA, PCI and other regulations.

There are three major ways to solve data encryption at rest:

  1. Full-disk encryption
  2. Database-level (table) encryption
  3. Application-level encryption, where data is encrypted before being inserted into the database

I consider full disk encryption to be the weakest method, as it only protects from someone physically removing the disks from the server. Application-level encryption, on the other hand, is the best: it is the most flexible method with almost no overhead, and it also solves data in-flight encryption. Unfortunately, it is not always possible to change the application code to support application-level encryption, so database-level encryption can be a valuable alternative.

Sergei Golubchik, Chief Architect at MariaDB, outlined the pluses and minuses of database level encryption during his session at Percona Live Amsterdam:

Pros

  • Full power of DBMS is available
  • Easy to implement
  • Only the database can see the data
  • Per-table encryption, per-table keys, performance

Cons

  • Cannot be done per-user
  • Does not protect against malicious root user

Data at Rest Encryption: Database-Level Options

Currently, there are two options for data at rest encryption at the database level: MariaDB’s implementation (available since MariaDB 10.1) and MySQL 5.7.11’s InnoDB tablespace encryption.

MariaDB’s implementation is different from MySQL 5.7.11. MySQL 5.7.11 only encrypts InnoDB tablespace(s), while MariaDB has an option to encrypt undo/redo logs, binary logs/relay logs, etc. However, there are some limitations (especially together with Galera Cluster):

  • No key rotation in the open source plugin version (MySQL 5.7.11 has a key rotation)
  • mysqlbinlog does not work with encrypted binlogs (bug reported)
  • Percona XtraBackup does not work, so we are limited to RSYNC as SST method for Galera Cluster, which is a blocking method (one node will not be available for writes during the SST). The latest Percona XtraBackup works with MySQL 5.7.11 tablespace encryption
  • The following data is not encrypted (bug reported)
    • Galera gcache + Galera replication data
    • General log / slow query log

Database-level encryption also has its weaknesses:

  1. Root and MySQL users can read the keyring file, which defeats the purpose. However, it is possible to place the key on a mounted drive and unmount that drive once MySQL has started (this can be scripted). The downside is that if MySQL crashes, it will not be restarted automatically without human intervention.
  2. Both the MariaDB and MySQL versions only encrypt data when writing to disk – data is not encrypted in RAM, so a root user can potentially attach to MySQL with gdb/strace or other tools and read the server memory. In addition, with gdb it is possible to change the root user password structure and then use mysqldump to copy data. Another potential method is to kill MySQL and start it with skip-grant-tables. However, if the key is unmounted (i.e., on a USB drive), MySQL will either not start or will not be able to read the encrypted tablespaces.

MariaDB Encryption Example

To enable full encryption, we can add the following options to my.cnf:

[mysqld]
plugin-load-add=file_key_management.so
file_key_management_filekey = FILE:/mount/keys/mysql.key
file-key-management-filename = /mount/keys/mysql.enc
innodb-encrypt-tables = ON
innodb-encrypt-log = 1
innodb-encryption-threads=1
encrypt-tmp-disk-tables=1
encrypt-tmp-files=1
encrypt-binlog=1
file_key_management_encryption_algorithm = AES_CTR

After starting MariaDB with those settings, it will start encrypting the database in the background. The file_key_management plugin is used; unfortunately, it does not support key rotation. The actual keys are encrypted with:

# openssl enc -aes-256-cbc -md sha1 -k <key> -in keys.txt -out mysql.enc

The encryption <key> is placed in /mount/keys/mysql.key.

After starting MySQL, we can unmount the “/mount/keys” partition. In this case, the key will not be available and a potential hacker will not be able to restart MySQL with the “--skip-grant-tables” option (without passwords). However, it also prevents normal restarts, especially SSTs (cluster full sync).
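
A wrapper for that start-and-unmount sequence could be as simple as the following sketch (the mount point matches the example above; the service name and the sleep are assumptions, and a real script needs error handling):

#!/bin/bash
# Make the key reachable only while MySQL starts up
KEYS_MOUNT=/mount/keys

mount $KEYS_MOUNT          # e.g. a USB stick or remote volume defined in /etc/fstab
service mysql start        # MySQL reads the key file during startup
sleep 30                   # give the server time to open the encrypted tablespaces
umount $KEYS_MOUNT         # after this, the key is no longer readable on the host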

Additional notes:

  1. Encryption will affect the compression ratio, especially for physical backups (logical backups, i.e. mysqldump, are not affected, as the data retrieved is not encrypted). If your original compressed backup size was only 10% of the database size, that will not be the case for encrypted tables.
  2. Data is not encrypted in flight and will not be encrypted on the replication slaves unless you enable the same options on the slaves. The encryption is also local to the server, so when encryption has just been enabled on a server, some tables may not be encrypted yet (but they will be eventually).
  3. To check which tables are encrypted, use the Information Schema INNODB_TABLESPACES_ENCRYPTION table, which contains encryption information. To find all tables that are encrypted, use this query:
    select * from information_schema.INNODB_TABLESPACES_ENCRYPTION where ENCRYPTION_SCHEME=1

MySQL 5.7 Encryption Example

To enable encryption, add the following option to my.cnf:

[mysqld]
early-plugin-load=keyring_file.so
keyring_file_data=/mount/mysql-keyring/keyring

Again, after starting MySQL we can unmount the “/mount/mysql-keyring/” partition.

To start encrypting the tables, we will need to run alter table table_name encryption='Y' , as MySQL will not encrypt tables by default.

The latest Percona XtraBackup also supports encryption, and can back up encrypted tables.

To find all encrypted tablespaces in MySQL/Percona Server 5.7.11, we can use information_schema.INNODB_SYS_TABLESPACES and the flag field. For example, to find normally encrypted tables, use the following query:

mysql> select * from information_schema.INNODB_SYS_TABLESPACES where flag = 8225\G
*************************** 1. row ***************************
         SPACE: 4688
          NAME: test/t1
          FLAG: 8225
   FILE_FORMAT: Barracuda
    ROW_FORMAT: Dynamic
     PAGE_SIZE: 16384
 ZIP_PAGE_SIZE: 0
    SPACE_TYPE: Single
 FS_BLOCK_SIZE: 4096
     FILE_SIZE: 98304
ALLOCATED_SIZE: 98304
*************************** 2. row ***************************
         SPACE: 4697
          NAME: sbtest/sbtest1_enc
          FLAG: 8225
   FILE_FORMAT: Barracuda
    ROW_FORMAT: Dynamic
     PAGE_SIZE: 16384
 ZIP_PAGE_SIZE: 0
    SPACE_TYPE: Single
 FS_BLOCK_SIZE: 4096
     FILE_SIZE: 255852544
ALLOCATED_SIZE: 255856640
2 rows in set (0.00 sec)

You can also use this query instead: select * from information_schema.tables where CREATE_OPTIONS like '%ENCRYPTION="Y"%';.

Performance overhead

This is a debatable topic, especially for the MariaDB implementation when everything is configured to be encrypted. During my tests I’ve seen ~10% overhead for the standalone MySQL instance, and ~20% with Galera Cluster.

The MySQL 5.7/Percona Server 5.7 tablespace-level encryption shows extremely low overhead; however, that needs to be tested under different conditions.

Conclusion

Even with all the above limitations, database-level encryption can be a better option than filesystem-level encryption if the application cannot be changed. However, it is a new feature (especially the MySQL 5.7.11 version) and I expect a number of bugs here.

Categories: MySQL

The first two Database Performance Team characters are . . .

MySQL Performance Blog - Fri, 2016-04-08 17:55

And the first two Database Performance Team characters are . . .

The last blog post introduced the Database Performance Team: our services experts, who work tirelessly every day to guarantee the performance of your database. They are the superheroes who make sure your database is running at peak performance. Now we’re ready to reveal the identity of the first two team members.

The first is funny, friendly, quick-witted, supporting, fast and courteous – but still able to get the job done with amazing competence. Who is this champion?

The Maven
Percona Support
“How can we save you?”

A helpful voice in a stressful situation, The Maven support team member is your person Friday, with an easy manner and a knowing wink. The Maven puts you at your ease while solving your immediate problems.

She can rescue your database, and do it with a smile.


The second has computer-like smarts, instant recall, and is a counselor that can understand a problem and the solution quickly. Who is this champion?

The Specialist

Percona Consulting
“No mission refused!”

An ultra-brain, with deep knowledge of all aspects of technology, The Specialist can access minute and arcane details at a moment’s notice. For any mission, she is the one who can sharp shoot the answer under the toughest of circumstances. Put the Specialist on the case, and your problems will disappear.


Follow @Percona on Twitter and use the hashtag #DatabasePerformanceTeam to cast your guess on our remaining three mystery characters. Correctly guess any of their names or roles, and the lucky winner gets their choice of our mystery T-shirt in either men’s or women’s style. Stay tuned, as we reveal the other identities of the Database Performance Team over the coming weeks! Respond with your guesses for the remaining team members.

Join the ranks of the Database Street Team! Fun games, cool prizes – more info is coming soon!

Percona Live Update!

See Bill Nye the Science Guy at Percona Live Data Performance Conference, and help an excellent cause! Bill Nye’s keynote speech at Percona Live is “Bill Nye’s Objective – Change the World!” Through his talks, books, and day job as the CEO of The Planetary Society (the world’s largest non-governmental space interest organization), Bill wants to get people involved in the power of science.

And now you can help change the world, just by attending the Percona Live Data Performance Conference! For a limited time, if you buy a Keynote or Expo pass to Percona Live using the promo code “NYE” you will get the pass for just $10, AND all the money from these registrations will be donated to The Planetary Society.

Come see Bill Nye at Percona Live, where you can also pick up these two awesome Database Performance Team t-shirts! Stay tuned as we reveal more team members!

Categories: MySQL

MySQL 5.7 sysbench OLTP read-only results: is MySQL 5.7 really faster?

MySQL Performance Blog - Thu, 2016-04-07 14:26

This blog will look at MySQL 5.7 sysbench OLTP read-only results to determine if they are faster than previous versions.

As promised in my previous post, I have checked MySQL 5.7 performance against previous versions in a different workload. This time, I will use sysbench OLTP read-only transactions (read-write transactions are part of future research, as there is more tuning required to get the best performance in write workloads).

One important thing to mention is that MySQL 5.6 and 5.7 have special optimizations for READ-ONLY transactions. In MySQL 5.6, however,  you need to start a transaction with "START TRANSACTION READ ONLY" to get the optimization benefit. MySQL 5.7 automatically detects read-only transactions.

I’ve modified the sysbench oltp.lua script to use "START TRANSACTION READ ONLY" for MySQL 5.6. This optimization is not available in MySQL 5.5.
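
For illustration, the change is a one-liner in oltp.lua (a sketch against the sysbench 0.5 Lua API, assuming the stock script opens its transaction with db_query("BEGIN")):

-- oltp.lua, inside event(), where the transaction is started:
-- db_query("BEGIN")                          -- original line
db_query("START TRANSACTION READ ONLY")       -- modified line used for the MySQL 5.6 runs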

I also tried two different setups:

  • Local connections: the client (sysbench) and the server (mysqld) are running on the same server
  • Network connection: the client and server are connected by a 10GB network

Other details

  • CPU: a server with 56 logical CPU threads, Intel® Xeon® CPU E5-2683 v3 @ 2.00GHz
  • sysbench 10 tables x 10 million rows, Pareto distribution
  • OS: Ubuntu 15.10 (Wily Werewolf)
  • Kernel 4.2.0-30-generic

More details with scripts and config files are available on our github.

Summary results can also be found here:

This post covers the most interesting highlights. First, the results on the local connections:

Looking at these results, I was as surprised as you probably are. On a high number of threads and by a significantly visible margin, MySQL 5.7 is actually slower than MySQL 5.6.

Let me show you the relative performance of MySQL 5.5 and MySQL 5.6 (having MySQL 5.7 as a baseline = 1.0):

With a lower number of threads, MySQL 5.5 outperforms MySQL 5.7 by 8-15%, and on a higher number of threads MySQL 5.6 is better by 6-7%.

To validate these findings, we can check the results on a remote connection. Here is a chart:

This gives us a similar picture, with a couple of key differences. MySQL 5.6 encounters scalability problems sooner, and the throughput declines. The fix for that is using innodb-thread-concurrency=64.
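
If you want to reproduce that setting, it is a dynamic variable (64 is the value used here; tune it for your own hardware):

-- Cap the number of threads allowed inside InnoDB at the same time
SET GLOBAL innodb_thread_concurrency = 64;
-- or in my.cnf under [mysqld]:
--   innodb-thread-concurrency = 64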

Here are the results:

In this round, I did not test scenarios over 1000 threads. But gauging from the results above it seems that MySQL 5.7 has problems. It is interesting to consider how it will affect replication performance – and I will test this after my read-write benchmarks.

 

Categories: MySQL

11 Days Until Percona Live: Justification, Julian Cash, Sponsor List

MySQL Performance Blog - Wed, 2016-04-06 17:59

Only 11 days until Percona Live! Are you registered?

It’s getting close to the Percona Live Data Performance Conference 2016! The conference starts Monday, April 18th. We have some quick updates and pieces of information to pass on to you, so keep reading to find out the details.

Need Help Justifying Your Attendance?

Haven’t been able to justify going to Percona Live to your boss? Here is a link that will help you with that.

Julian Cash X-RAY Light Painting Studio

Don’t forget that Julian Cash will be setting up an X-RAY Light Painting Studio in the Exhibition Hall for your amazement and amusement. Light Painting Portraits are a rare and incredible art form that Julian has pioneered. His interactive artwork at Percona Live is an example of Julian’s vision, which also was featured on America’s Got Talent.

He’s running a campaign to take Light Painting portraits in places where it would otherwise be impossible.  With your help, the studio will be equipped with the best technology imaginable, which will make for countless magical and fantastical images. Check it out!

 

This Year’s Sponsors

Our sponsors for Percona Live Data Performance Conference are set, and we want to thank them for helping us to put on this event. Below, you can see who sponsored Percona Live this year:

  • Diamond Plus Sponsor
    • Deep Information Science
    • RocksDB (Facebook)
  • Platinum
    • Rackspace
    • VividCortex
  • Gold
    • AWS
  • Silver
    • Yelp
    • Shopify
  • Exhibition Hall
    • Codership
    • Blackmesh
    • University of Michigan (DBSeer)
    • Vertabelo
    • Raintank (Grafana.net)
    • Red Hat
    • ScaleArc
    • SolarWinds
    • Pythian
    • AgilData
    • Box
    • Clustrix
    • MaxGauge
    • HGST
    • Severalnines
    • VMware
    • Eventbrite
    • MemSQL
  • Coffee Breaks
    • Mailchimp
  • Badge Lanyards and Conference Bags
    • Google
  • 50 Minute Breakout
    • Rackspace
    • Clustrix
  • Thirty-Minute Demo
    • Vertabelo
  • Data in the Cloud Track
    • Red Hat
    • Intel
  • Signage Sponsor
    • MONyog (Webyog)

Thanks again to all of our sponsors, and all of our attendees. If you haven’t registered yet, do it now! There are only 11 days left until the conference!

Categories: MySQL

Percona Live: Advanced Percona XtraDB Cluster in a Nutshell, La Suite

MySQL Performance Blog - Wed, 2016-04-06 16:30

This blog post will discuss what you need for the Percona Live Advanced Percona XtraDB Cluster tutorial.

Percona Live 2016 is happening in April! If you are attending, and you are registered for the Percona XtraDB Cluster (Galera) tutorial presented by Kenny and me, please make sure that you:

  • Bring your laptop; this is a hands-on tutorial
  • Have VirtualBox 5 installed
  • Bring a machine that supports 64-bit VMs
  • Have at least 5GB of free disk space

This advanced tutorial is a continuation of the beginners’ tutorial, so some basic experience with Percona XtraDB Cluster and Galera is required.

See you soon!

Description of the Percona Live Advanced Percona XtraDB Cluster Talk

Percona XtraDB Cluster is a high availability and high scalability solution for MySQL clustering. Percona XtraDB Cluster integrates Percona Server with the Galera synchronous replication library in a single product package, which enables you to create a cost-effective MySQL cluster. For three years at Percona Live, we’ve introduced people to this technology – but what’s next?

This tutorial continues your education and targets users that already have experience with Percona XtraDB Cluster and want to go further. This tutorial will cover the following topics:

  • Bootstrapping in details
  • Certification errors, understanding and preventing them
  • Replication failures, how to deal with them
  • Secrets of Galera Cache – Mastering flow control
  • Understanding and verifying replication throughput
  • How to use WAN replication
  • Implications of consistent reads
  • Backups
  • Load balancers and proxy protocol

Register for Percona Live now!

Categories: MySQL

EXPLAIN FORMAT=JSON wrap-up

MySQL Performance Blog - Wed, 2016-04-06 14:23

This blog is an EXPLAIN FORMAT=JSON wrap-up for the series of posts I’ve done in the last few months.

In this series, we’ve discussed everything unique to EXPLAIN FORMAT=JSON. I intentionally skipped a description of members such as table_name, access_type  or select_id, which are not unique.

I only mentioned in passing the members that replace information from the Extra column in the regular EXPLAIN output, such as using_join_buffer, partitions, using_temporary_table or simply message. You can see these in queries like the following:

mysql> explain format=json select rand() from dual
*************************** 1. row ***************************
EXPLAIN: {
  "query_block": {
    "select_id": 1,
    "message": "No tables used"
  }
}
1 row in set, 1 warning (0.00 sec)

Or

mysql> explain format=json select emp_no from titles where 'Senior Engineer' = 'Senior Cat'
*************************** 1. row ***************************
EXPLAIN: {
  "query_block": {
    "select_id": 1,
    "message": "Impossible WHERE"
  }
}
1 row in set, 1 warning (0.01 sec)

Their use is fairly intuitive and similar to regular EXPLAIN, and I don’t think one would gain much from reading a blog post about each of them.

The only thing left to list is a Table of Contents for the series:

attached_condition: How EXPLAIN FORMAT=JSON can spell-check your queries

rows_examined_per_scan, rows_produced_per_join: EXPLAIN FORMAT=JSON answers on question “What number of filtered rows mean?”

used_columns: EXPLAIN FORMAT=JSON tells when you should use covered indexes

used_key_parts: EXPLAIN FORMAT=JSON provides insight into which part of multiple-column key is used

EXPLAIN FORMAT=JSON: everything about attached_subqueries, optimized_away_subqueries, materialized_from_subquery

EXPLAIN FORMAT=JSON provides insights on optimizer_switch effectiveness

EXPLAIN FORMAT=JSON: order_by_subqueries, group_by_subqueries details on subqueries in ORDER BY and GROUP BY

grouping_operation, duplicates_removal: EXPLAIN FORMAT=JSON has all details about GROUP BY

EXPLAIN FORMAT=JSON has details for subqueries in HAVING, nested selects and subqueries that update values

ordering_operation: EXPLAIN FORMAT=JSON knows everything about ORDER BY processing

EXPLAIN FORMAT=JSON knows everything about UNIONs: union_result and query_specifications

EXPLAIN FORMAT=JSON: buffer_result is not hidden!

EXPLAIN FORMAT=JSON: cost_info knows why optimizer prefers one index to another

EXPLAIN FORMAT=JSON: nested_loop makes JOIN hierarchy transparent

Thanks for following the series!

Categories: MySQL

Webinar April 7, 10am PDT – Introduction to Troubleshooting Performance: What Affects Query Execution?

MySQL Performance Blog - Wed, 2016-04-06 00:49

Join us for our latest webinar on Thursday, April 7, at 10 am PDT (UTC-7) on Introduction to Troubleshooting Performance: What Affects Query Execution?

MySQL installations experience a multitude of issues: server hangs, wrong data stored in the database, slow running queries, stopped replications, poor user connections and many others. It’s often difficult not only to troubleshoot these issues, but to even know which tools to use.

Slow running queries, threads stacking for ages during peak times, application performance suddenly lagging: these are some of the things on a long list of possible database performance issues. How can you figure out why your MySQL installation isn’t running as fast as you’d like?

In this introductory webinar, we will concentrate on the three main reasons for performance slowdown:

  • Poorly optimized queries
  • Concurrency issues
  • Effects of hardware and other system factors

This webinar will teach you how to identify and fix these issues. Register now.

If you can’t attend this webinar live, register anyway and we’ll send you a link to the recording.

Sveta Smirnova, Principal Technical Services Engineer.

Sveta joined Percona in 2015. Her main professional interests are problem solving, working with tricky issues and bugs, finding patterns that can solve typical issues quicker, and teaching others how to deal with MySQL issues, bugs and gotchas effectively. Before joining Percona, Sveta worked as a Support Engineer in the MySQL Bugs Analysis Support Group at MySQL AB / Sun / Oracle. She is the author of the book “MySQL Troubleshooting” and the JSON UDF functions for MySQL.

Categories: MySQL

Percona Live featured talk with Anastasia Ailamaki — RAW: Fast queries on JIT databases

MySQL Performance Blog - Tue, 2016-04-05 16:07

Welcome to the next Percona Live featured talk with Percona Live Data Performance Conference 2016 speakers! In this series of blogs, we’ll highlight some of the speakers that will be at this year’s conference, as well as discuss the technologies and outlooks of the speakers themselves. Make sure to read to the end to get a special Percona Live registration bonus!

In this Percona Live featured talk, we’ll meet Anastasia Ailamaki, Professor and CEO, EPFL and RAW Labs. Her talk will be RAW: Fast queries on JIT databases. RAW is a query engine that reads data in its raw format and processes queries using adaptive, just-in-time operators. The key insight is its use of virtualization and dynamic generation of operators. I had a chance to speak with Anastasia and learn a bit more about RAW and JIT databases:

Percona: Give me a brief history of yourself: how you got into database development, where you work, what you love about it.

Anastasia: I am a computer engineer and initially trained in networks. I came across databases in the midst of the object-oriented hype — and was totally smitten by both the power of data models and the wealth of problems one has to solve to create a functioning and performant database system. In the following years, I built several systems as a student and (later) as a coder. At some point, however, I needed to learn more about the machine. I decided to do a Masters in computer architecture, which led to a Ph.D. in databases and microarchitecture. I became a professor at CMU, where for eight years I guided students as they built their ideas into real systems that assessed those ideas’ potential and value. During my sabbatical at EPFL, I was fascinated by the talent and opportunities in Switzerland — I decided to stay and, seven years later, co-founded RAW Labs.

Percona: Your talk is going to be on “RAW: Fast queries on JIT databases.” Would you say you’re an advocate of abandoning (or at least not relying on) the traditional “big structured database accessed by queries” model that have existed for most of computing? Why?

Anastasia: The classical usage paradigm for databases has been “create a database, then ask queries.” Traditionally, “creating a database” means creating a structured copy of the entire dataset. This is now passé for the simple reason that data is growing too fast, and loading overhead grows with data size. What’s more, we typically use only a small fraction of the data available, and investing in the mass of owned data is a waste of resources — people have to wait too long from the time they receive a dataset until they can ask a query. And it doesn’t stop there: the users are asked to pick a database engine based on the format and intended use of the data. We associate row stores to transactions, NoSQL to JSON, and column stores to analytics, but true insight comes from combining all of the data semantically as opposed to structurally. With each engine optimizing for specific kinds of queries and data formats, analysts subconsciously factor in limitations when piecing together their infrastructure. We only know the best way to structure data when we see the queries, so loading data and developing query processing operators before knowing the queries is premature.

Percona: What are the conditions that make JIT databases in general (and RAW specifically) the optimum solution?

Anastasia: JIT databases push functionality to the last minute, and execute it right when it’s actually needed. Several systems perform JIT compilation of queries, which offers great performance benefits (an example is Hyper, a system recently acquired by Tableau). RAW is JIT on steroids: it leaves data at its source and only reads it or asks for any system resources when they’re actually required. You may have 10000 files, and a file will only be read when you ask a query that needs the data in it. With RAW, when the user asks a query, RAW code-generates the raw source data adaptors and the entire query engine needed to run that query. It stores all useful information about the accessed data, as well as popular operators generated in the past, and uses them to accelerate future queries. It adapts to system resources on the fly and only asks for them when needed. RAW is an interface to raw data and operational databases. In addition, the RAW query language is incredibly rich; it is a superset of SQL which allows navigation on hierarchical data and tables at the same time, with support for variable assignments, regular expressions, and more for log processing — while staying in declarative land. Therefore, analysts only need to describe the desired result in SQL, without thinking about the data format.

Percona: What would you say in the next step for JIT and RAW? What keeps you up at night concerning the future of this approach?

Anastasia: The next step for RAW is to reach out to as many people as possible — especially users with complex operational data pipelines — and reduce cost and eliminate pipeline stages, unneeded data copies, and extensive scripting. RAW is a new approach that can work with existing infrastructures in a non-intrusive way. We are well on our way with several proof-of-concept projects that create verticals for RAW, and demonstrate its usefulness for different applications.

Percona: What are you most looking forward to at Percona Live Data Performance Conference 2016?

Anastasia: I am looking forward to meeting as many users and developers as possible, hearing their feedback on RAW and our ideas, and learning from their experiences.

You can read more about RAW and JIT databases at Anastasia’s academic group’s website: dias.epfl.ch.

Want to find out more about Anastasia and RAW? Register for Percona Live Data Performance Conference 2016, and see her talk RAW: Fast queries on JIT databases. Use the code “FeaturedTalk” and receive $100 off the current registration price!

Percona Live Data Performance Conference 2016 is the premier open source event for the data performance ecosystem. It is the place to be for the open source community as well as businesses that thrive in the MySQL, NoSQL, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Data Performance Conference will be April 18-21 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.

Categories: MySQL

Data in the Cloud track at Percona Live with Brent Compton and Ross Turk: The Data Performance Cloud

MySQL Performance Blog - Tue, 2016-04-05 00:03

In this blog, we’ll discuss the Data in the Cloud track at Percona Live with Red Hat’s Brent Compton and Ross Turk.

Welcome to another interview with the Percona Live Data Performance Conference speakers and presenters. This series of blog posts will highlight some of the talks and presentations available at Percona Live Data Performance Conference April 18-21 in Santa Clara. Read through to the end for a discounts for Percona Live registration.

(A webinar sneak preview of their “MySQL on Ceph” cloud storage talk is happening on Wednesday, April 6th at 2 pm EDT. You can register for it here – all attendees will receive a special $200 discount code for Percona Live registration after the webinar! See the end of this blog post for more details!)

First, we need to establish some context. Data storage has traditionally, and for most of its existence, pretty much followed a consistent model: stable and fairly static big box devices that were purpose-built to house data. Needing more storage space meant obtaining more (or bigger) boxes. Classic scale-up storage. Need more, go to the data storage vendor and order a bigger box.

The problem is that data is exploding, and has been growing exponentially for the last decade. Some estimates put the amount of data being generated worldwide as increasing at a rate of 40%-60% per year. That kind of increase, at that speed, doesn’t leave a lot of ramp-up time to make long-term big box hardware investments. Things are changing too fast.

The immediate trend – evident in the declining revenues of classic storage boxes – is placing data in a cloud of scale-out storage. What is the cloud? Since that question has whole books devoted to it, let’s try to simplify it a bit.

Cloud computing benefits include scalability, instantaneous configuration, virtualized consumables and the ability to quickly expand base specifications. Moving workloads to the cloud brings with it numerous business benefits, including agility, focus and cost:

  • Agility. The cloud enables businesses to react to changing needs. As the workload grows or spikes, just add compute cycles, storage, and bandwidth with the click of a mouse.
  • Focus. Deploying workloads to the cloud enables companies to focus more resources on business-critical activities, rather than system administration.
  • Cost. Businesses can pay as they go for the service level they need. Planning and sinking money into long-term plans that may or may not pan out is not as big a problem.

When it comes to moving workloads into the cloud, low-throughput applications were the obvious first choice: email, non-critical business functions, team collaboration assistance. These are generally neither mission critical, nor do they require high levels of security. As application-driven services became more and more prevalent (think Netflix, Facebook, Instagram), more throughput-intensive services were moved to the cloud – mainly for flexibility during service spikes and to accommodate increased users. But tried-and-true high-performance workloads like databases, and other corporate kingdoms with perceived higher security requirements, have traditionally remained stuck in the old infrastructures that have served them well – until now.

So what is this all leading to? Well, according to Brent and Ross, ALL data will eventually be going to the cloud, and the old models of storage infrastructure are falling by the wayside. Between the lack of elasticity and scalability of purpose-built hardware, and the oncoming storage crisis, database storage is headed for cloud services solutions.

I had some time to talk with Brent and Ross about data in the cloud, and what we can expect regarding a new data performance cloud model.

Percona: There is always a lot of talk about public versus private paradigms when it comes to cloud discussions. To you, this is fairly inconsequential. How do you see “the cloud”? How would you define it in terms of infrastructure for workloads?

RHT: Red Hat has long provided software for hybrid clouds, with the understanding that most companies will use a mix of public cloud and private cloud infrastructure for their workloads. This means that Red Hat software is supported both on popular public cloud platforms (such as AWS, Azure, and GCE) as well as on-premise platforms (such as OpenStack private clouds). Our work with Percona in providing a reference architecture for MySQL running on Ceph is all about giving app developers a comparable, deterministic experience when running their MySQL-based apps on a Ceph private storage cloud v. running them in the public cloud.

Percona: So, your contention is that ALL data is headed to the cloud. What are the factors that are going to ramp up this trend? What level of information storage will cement this as inevitable?

RHT:  We’d probably restate this to “most data is headed to A cloud.” There are two distinctions being made in this statement. The first is “most” versus “all” data. For years to come, there will be late adopters whose on-premise data is NOT served through a private cloud infrastructure. The second distinction is “a” cloud versus “the” cloud. “A” cloud means either a public cloud or a private cloud (or some hybrid of the two). Private clouds are being constructed by the world’s most advanced companies within their own data centers to provide a similar type of elastic infrastructure, with dynamic provisioning and lower CAPEX/OPEX costs, as is found in public clouds.

Percona: What are the concerns you see with moving all workloads to the cloud, and how would you address those concerns?

RHT:  The distinctions laid out in the previous answer address this. For myriad reasons, some data and workloads will reside on-premise within private clouds for a very long time. In fact, as the technology matures for building private clouds (as we’re seeing with OpenStack and Ceph), and can offer many of the same benefits as public clouds, we see the market reaching an equilibrium of sorts. In this equilibrium many of the agility, flexibility, and cost benefits once available only through public cloud services will be matched by private cloud installations. This will re-base the public versus private cloud discussion to fewer, simpler trade-offs – such as which data must reside on-premises to meet an enterprise’s data governance and control requirements.

Percona: So you mentioned the “Data Performance Cloud.” How would you describe what that is, and how it affects enterprises?

RHT:  For many enterprises, data performance workloads have been the last category of workloads to move to a cloud, whether public or private. Public cloud services, such as the AWS Relational Database Service with Provisioned-IOPS storage, have illustrated improved data performance for many workloads once relegated to the cloud sidelines. Now, with the guidelines in the reference architecture being produced by Percona and the Red Hat Ceph team, customers can achieve data performance on their private Ceph storage clouds comparable to what they get from high-performance public cloud services.

Percona: What can people expect to get out of the Data in the Cloud track at Percona Live this year?

RHT: Architecture guidelines for building and optimizing MySQL databases on a Ceph private storage cloud.   These architectures will include public cloud benefits along with private cloud control and governance.

Want to find out more about MySQL, Ceph, and Data in the Cloud? Register for Percona Live Data Performance Conference 2016, and see Red Hat’s sponsored Data in the Cloud Keynote Panel: Cloudy with a chance of running out of disk space? Or Sunny times ahead? Use the code “FeaturedTalk” and receive $100 off the current registration price!

The Percona Live Data Performance Conference is the premier open source event for the data performance ecosystem. It is the place to be for the open source community as well as businesses that thrive in the MySQL, NoSQL, cloud, big data and Internet of Things (IoT) marketplaces. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Data Performance Conference will be April 18-21 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.

MySQL and Ceph: Database-as-a-Service sneak preview

Businesses are familiar with running a Database-as-a-Service (DBaaS) in the public cloud. They enjoy the benefits of on-demand infrastructure for spinning up lots of MySQL instances with predictable performance, without the headaches of managing them on specific, bare-metal, highly available clusters.

This webinar lays the foundation for building a DBaaS on your own private cloud, enabled by Red Hat® Ceph Storage. Join senior architects from Red Hat and Percona for reference architecture tips and head-to-head performance results of MySQL on Ceph versus MySQL on AWS.

This is a sneak preview of the labs and talks to be given in April 2016 at the Percona Live Data Performance Conference. Attendees will receive a discount code for $200 off Percona Live registration!

Speakers:

  • Brent Compton, director, Storage Solution Architectures, Red Hat
  • Kyle Bader, senior solutions architect, Red Hat
  • Yves Trudeau, principal consultant, Percona

Join the live event:

Wednesday, April 6, 2016 | 2 p.m. ET | 11 a.m. PT

Time zone converter

Categories: MySQL

See Bill Nye the Science Guy at Percona Live and help change the world: a special offer!

MySQL Performance Blog - Mon, 2016-04-04 16:47

See Bill Nye the Science Guy at Percona Live Data Performance Conference, and help an excellent cause!

The best science is built on solid data. As a world-renowned icon in tech and geek circles everywhere, Bill Nye fights to raise awareness of the value of science, critical thinking, and reason. He hopes that the data he brings will help inspire people everywhere to change the world. And seeing as the open source community is full of science-minded individuals, he is excited to speak to everyone at Percona Live!

Since his early days as a comedian, to creating his well-known Science Guy character, to the present day, Bill Nye has always brought the impressive and illuminating power of science to people.

Bill Nye’s keynote speech at Percona Live is “Bill Nye’s Objective – Change the World!” Through his talks, books, and day job as the CEO of The Planetary Society (the world’s largest non-governmental space interest organization), Bill wants to get people involved in the power of science. Science can teach people about the world, and how they can influence and change it. Science helps us to understand what Bill likes to call “our place in space.”

And now you can help change the world, just by attending The Percona Live Data Performance Conference! For a limited time, if you buy a Keynote or Expo pass to Percona Live using the promo code “NYE” you will get the pass for just $10, AND all the money from these registrations will be donated to The Planetary Society. The Planetary Society sponsors projects that will seed innovative space technologies, nurtures creative young minds, and is a vital advocate for our future in space. Their mission is to empower the world’s citizens to advance space science and exploration.

A great deal, and a great cause! This offer is limited to the first 250 registrations, so hurry up and help change the world!

Categories: MySQL

Percona XtraBackup 2.4.2 is now available

MySQL Performance Blog - Fri, 2016-04-01 14:30

Percona is glad to announce the first GA release of Percona XtraBackup 2.4.2 on April 1st, 2016. Downloads are available from our download site and from apt and yum repositories.

Percona XtraBackup enables MySQL backups without blocking user queries, making it ideal for companies with large data sets and mission-critical applications that cannot tolerate long periods of downtime. Offered free as an open source solution, Percona XtraBackup drives down backup costs while providing unique features for MySQL backups.

New Features:

Bugs Fixed:

  • When backup was taken on MariaDB 10 with GTID enabled, Percona XtraBackup didn’t store gtid_slave_pos in xtrabackup_slave_info but logged it only to STDERR. Bug fixed #1404484.
  • Backup process would fail if --throttle option was used. Bug fixed #1554235.
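As a rough illustration of the option touched by the second fix, a throttled backup could look like the following sketch. The credentials, target directory, and throttle value below are made-up examples, not taken from the release notes:

# Take a backup without blocking user queries, limiting the I/O rate via --throttle
# (user, password, throttle value, and target directory are hypothetical)
xtrabackup --backup --user=root --password=secret \
    --throttle=40 --target-dir=/data/backups/full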

Release notes with all the bugfixes for Percona XtraBackup 2.4.2 are available in our online documentation. Bugs can be reported on the launchpad bug tracker.

Categories: MySQL

Fixing MySQL Bug#2: now MySQL makes toast!

MySQL Performance Blog - Fri, 2016-04-01 14:15

Historical MySQL Bug#2, opened 12 Sep 2002, states that MySQL Connector/J doesn’t make toast. It hasn’t been fixed for more than 13 years. I’ve finally created a patch for it.

First of all: why only fix this for MySQL Connector/J? We should make sure the server can do this for any implementation! With this fix, now MySQL server (starting with version 5.1) can make toast.

There are a few dependencies though (see the assembled setup picture):

  1. Raspberry Pi + PiFace shield
  2. Power switch relay (I’ve used an IoT Power Relay)
  3. Toaster oven (any cheap mechanical model will work)

Patch:

  1. The make_toast binary, which runs on the Raspberry Pi and drives the PiFace interface (you’ll need to install the PiFace library):
    #!/usr/bin/python
    import sys
    from time import sleep

    # Toast time (in seconds) comes from the command line, with a default of 10
    if len(sys.argv) == 2 and sys.argv[1].isdigit():
        toast_time = sys.argv[1]
    else:
        toast_time = 10
    print "Toasting for " + str(toast_time) + " seconds..."

    import pifacedigitalio as p
    try:
        p.init()
        p.digital_write(7,1)          # switch the relay (and the toaster) on
        sleep(float(toast_time))
        p.digital_write(7,0)          # switch it back off
    except (KeyboardInterrupt, SystemExit):
        print "Exiting and turning off heat..."
        p.digital_write(7,0)
        sys.exit(1)
    print "Your toast is ready! Enjoy! "
  2. MySQL UDF, based on lib_mysqludf_sys, which calls the make_toast binary:
    char* make_toast(UDF_INIT *initid, UDF_ARGS *args, char *result,
                     unsigned long *length, char *is_null, char *error)
    {
        FILE *pipe;
        char line[1024];
        unsigned long outlen, linelen;
        char buf[40];

        result = malloc(1);
        outlen = 0;

        /* Build and run the external command, e.g. "make_toast 300" */
        sprintf(buf, "make_toast %s", args->args[0]);
        pipe = popen(buf, "r");

        /* Collect the command's output into the UDF result buffer */
        while (fgets(line, sizeof(line), pipe) != NULL) {
            linelen = strlen(line);
            result = realloc(result, outlen + linelen + 1); /* +1 for the terminator */
            strncpy(result + outlen, line, linelen);
            outlen = outlen + linelen;
        }
        pclose(pipe);

        if (result == NULL || !(*result)) {
            *is_null = 1;
        } else {
            result[outlen] = 0x00;
            *length = strlen(result);
        }
        return result;
    }

Usage:

mysql> select make_toast(300);

Demo picture (thanks to my colleague Fernando Laudares Camargos); an actual video will follow:

Implementation details:

Hardware/wiring

The relay switch powers on the toaster oven, and no modifications are needed to the oven itself. Make sure the oven’s timer is set to 30 min initially; the Raspberry Pi/MySQL UDF will control how long the bread is toasted.

The setup wiring is super easy (but may be counterintuitive if you are used to working with Arduino): use the output pins (image) and connect 5v on the PiFace to the “+” sign on the relay switch, and one of the pins to the “-” sign on the relay switch.

Software install

  1. Install PiFace software and Python bindings
  2. Test the make_toast python script
  3. Add user “mysql” to the spi and gpio groups so it can manipulate pins:
    # gpasswd -a mysql gpio
    # gpasswd -a mysql spi
  4. Download the make toast UDF code and run install.sh.
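For the curious, here is a minimal sketch of what such an install step might amount to. The library name and plugin directory are assumptions for illustration, not the contents of the actual install.sh:

# Copy the compiled UDF library into the server's plugin directory
# (library name and plugin dir are hypothetical)
cp lib_mysqludf_make_toast.so /usr/lib64/mysql/plugin/

# Register the UDF so mysqld can resolve make_toast()
mysql -uroot -p -e "CREATE FUNCTION make_toast RETURNS STRING SONAME 'lib_mysqludf_make_toast.so';"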

mysql> select make_toast(300);

Enjoy your toast when it is hot!

Categories: MySQL

MongoDB at Percona Live: A Special Open Source Community Discount

MySQL Performance Blog - Thu, 2016-03-31 19:32

We want MongoDB at Percona Live!

One of the main goals of the Percona Live Data Performance Conference 2016 is celebrating and embracing the open source community. The community’s spirit of innovation, expertise and competition has produced incredible software, hardware, processes and products.

The open source community is a diverse and powerful collection of companies, organizations and individuals that have helped to literally change the world. Percona is proud to call itself a member of the open source community, and we strongly feel that upholding the principles of the community is a key to our success. These principles include an open dialog, an open mind, and a zeal for cooperative interaction. Together, we can create amazing things.

That’s why we were surprised when MongoDB declined to have us sponsor or speak at MongoDB World 2016, and even more taken aback when we were told our engineers are not welcome to attend the show. We make a special point of inviting competitors to participate in, speak at, and sponsor Percona Live – MongoDB included. We welcome our competitors to speak, sponsor and attend Percona Live because it is in the greater interest of the community at large to include all voices.

With that in mind, we’d like to extend a special offer to any MongoDB employees: sign up for the Percona Live Data Performance Conference 2016 using your company email, and receive a special VIP 25% discount off the registration price (use promo code “mongodb”).

In addition:

  • We invite all MongoDB attendees to a VIP cocktail reception with the Percona Team Tuesday, April 19th from 5-6pm
  • Percona is pleased to host all MongoDB attendees as special guests at the Tuesday, April 19th, Community Dinner Event at Pedro’s

It’s our way of showing our solidarity with the open source community, and expressing our belief that we work best when we work together.

See you all at Percona Live! Register here!

Categories: MySQL

Docker MySQL Replication 101

MySQL Performance Blog - Wed, 2016-03-30 20:21

In this blog post, we’ll discuss some of the basics regarding Docker MySQL replication. Docker has gained widespread popularity in recent years as a lightweight alternative to virtualization. It is ideal for building virtual development and testing environments. The solution is flexible and seamlessly integrates with popular CI tools.

 

This post walks through the setup of MySQL replication with Docker using Percona Server 5.6 images. To keep things simple we’ll configure a pair of instances and override only the most important variables for replication. You can add whatever other variables you want to override in the configuration files for each instance.

Note: the configuration described here is suitable for development or testing. We’ve also used the operating system repository packages; for the latest version, use the official Docker images. The steps described can be used to set up more slaves if required, as long as each slave has a different server-id.

First, install Docker and pull the Percona images (this will take some time and is only executed once):

# Docker install for Debian / Ubuntu
apt-get install docker.io

# Docker install for Red Hat / CentOS (requires EPEL repo)
yum install epel-release    # If not installed already
yum install docker-io

# Pull docker repos
docker pull percona

Now create locally persisted directories for the:

  1. Instance configuration
  2. Data files

# Create local data directories
mkdir -p /opt/Docker/masterdb/data /opt/Docker/slavedb/data

# Create local my.cnf directories
mkdir -p /opt/Docker/masterdb/cnf /opt/Docker/slavedb/cnf

### Create configuration files for master and slave
vi /opt/Docker/masterdb/cnf/config-file.cnf
# Config Settings:
[mysqld]
server-id=1
binlog_format=ROW
log-bin

vi /opt/Docker/slavedb/cnf/config-file.cnf
# Config Settings:
[mysqld]
server-id=2
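As noted earlier, adding more slaves only requires another set of directories and a configuration file with a unique server-id. A hypothetical third instance (the directory and instance names are illustrative) could be prepared like this:

# Hypothetical second slave: its own data/cnf directories and server-id=3
mkdir -p /opt/Docker/slavedb2/data /opt/Docker/slavedb2/cnf
cat > /opt/Docker/slavedb2/cnf/config-file.cnf <<EOF
[mysqld]
server-id=3
EOF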

Great, now we’re ready to start our instances and configure replication. Launch the master node, configure the replication user and get the initial replication co-ordinates:

# Launch master instance
docker run --name masterdb -v /opt/Docker/masterdb/cnf:/etc/mysql/conf.d -v /opt/Docker/masterdb/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=mysecretpass -d percona:5.6
00a0231fb689d27afad2753e4350192bebc19ab4ff733c07da9c20ca4169759e

# Create replication user
docker exec -ti masterdb 'mysql' -uroot -pmysecretpass -vvv -e"GRANT REPLICATION SLAVE ON *.* TO repl@'%' IDENTIFIED BY 'slavepass'\G"
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
GRANT REPLICATION SLAVE ON *.* TO repl@"%"
--------------

Query OK, 0 rows affected (0.02 sec)

Bye

### Get master status
docker exec -ti masterdb 'mysql' -uroot -pmysecretpass -e"SHOW MASTER STATUS\G"
mysql: [Warning] Using a password on the command line interface can be insecure.
*************************** 1. row ***************************
             File: mysqld-bin.000004
         Position: 310
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set:

If you look carefully at the “docker run” command for masterdb, you’ll notice we’ve defined two paths to share from local storage:

/opt/Docker/masterdb/data:/var/lib/mysql

  • This maps the local “/opt/Docker/masterdb/data” directory to the masterdb container’s “/var/lib/mysql” path
  • All files within the datadir “/var/lib/mysql” persist locally on the host running docker rather than in the container

/opt/Docker/masterdb/cnf:/etc/mysql/conf.d

  • This maps the local “/opt/Docker/masterdb/cnf” directory to the container’s “/etc/mysql/conf.d” path
  • The configuration files for the masterdb instance persist locally as well
  • Remember these files augment or override the file in “/etc/mysql/my.cnf” within the container (i.e., defaults will be used for all other variables)
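One quick way to confirm that the override was actually picked up (a sanity check, not part of the original walkthrough) is to read the variables back from the running container:

# Both values should reflect config-file.cnf rather than the image defaults
docker exec -ti masterdb 'mysql' -uroot -pmysecretpass -e"SHOW VARIABLES LIKE 'server_id'; SHOW VARIABLES LIKE 'binlog_format';"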

We’re done setting up the master, so let’s continue with the slave instance. For this instance, the “docker run” command also includes the “--link masterdb:mysql” option, which links the slave instance to the master instance for replication.

After starting the instance, set the replication co-ordinates captured in the previous step:

# Launch slave instance
docker run --name slavedb -d -v /opt/Docker/slavedb/cnf:/etc/mysql/conf.d -v /opt/Docker/slavedb/data:/var/lib/mysql --link masterdb:mysql -e MYSQL_ROOT_PASSWORD=mysecretpass -d percona:5.6
eb7141121300c104ccee0b2df018e33d4f7f10bf5d98445ed4a54e1316f41891

# Point the slave at the master using the co-ordinates captured earlier
docker exec -ti slavedb 'mysql' -uroot -pmysecretpass -e'change master to master_host="mysql",master_user="repl",master_password="slavepass",master_log_file="mysqld-bin.000004",master_log_pos=310;' -vvv
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
change master to master_host="mysql",master_user="repl",master_password="slavepass",master_log_file="mysqld-bin.000004",master_log_pos=310
--------------

Query OK, 0 rows affected, 2 warnings (0.23 sec)

Bye

Almost ready to go! The last step is to start replication and verify that replication is running:

# Start replication
docker exec -ti slavedb 'mysql' -uroot -pmysecretpass -e"START SLAVE;" -vvv
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
START SLAVE
--------------

Query OK, 0 rows affected, 1 warning (0.00 sec)

Bye

# Verify replication is running OK
docker exec -ti slavedb 'mysql' -uroot -pmysecretpass -e"SHOW SLAVE STATUS\G" -vvv
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
SHOW SLAVE STATUS
--------------

*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: mysql
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysqld-bin.000004
          Read_Master_Log_Pos: 310
               Relay_Log_File: mysqld-relay-bin.000002
                Relay_Log_Pos: 284
        Relay_Master_Log_File: mysqld-bin.000004
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 310
              Relay_Log_Space: 458
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
                  Master_UUID: 230d005a-f1a6-11e5-b546-0242ac110004
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
1 row in set (0.00 sec)

Bye

Finally, we have a pair of dockerized Percona Server 5.6 master-slave servers replicating!
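For an end-to-end check beyond SHOW SLAVE STATUS, a simple smoke test (the database name below is just an example) is to create something on the master and confirm it shows up on the slave:

# On the master: create a throwaway database
docker exec -ti masterdb 'mysql' -uroot -pmysecretpass -e"CREATE DATABASE repl_smoke_test;"

# On the slave: a moment later it should have been replicated
docker exec -ti slavedb 'mysql' -uroot -pmysecretpass -e"SHOW DATABASES LIKE 'repl_smoke_test';"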

As mentioned before, this is suitable for a development or testing environment. Before going into production with this configuration, think carefully about the tuning of the “my.cnf” variables and the choice of disks used for the data/binlog directories. It is important to remember that newer versions of Docker recommend using “networks” rather than “linking” for communication between containers.
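As a hedged sketch of that network-based approach (the network name is an example; with it, the CHANGE MASTER statement would use master_host="masterdb" instead of the alias created by --link):

# Create a user-defined bridge network
docker network create repl_net

# Attach both containers to it; they can then resolve each other by container name
docker run --name masterdb --net=repl_net -v /opt/Docker/masterdb/cnf:/etc/mysql/conf.d -v /opt/Docker/masterdb/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=mysecretpass -d percona:5.6
docker run --name slavedb --net=repl_net -v /opt/Docker/slavedb/cnf:/etc/mysql/conf.d -v /opt/Docker/slavedb/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=mysecretpass -d percona:5.6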

Categories: MySQL