Thursday, July 28, 2011

MySQL 5.6.3 Performance improvements

 Mark Callaghan at Facebook tested the preview release of MySQL 5.6.3 and found some performance improvements in the InnoDB features. Read below...

Mark tried two of the previews for MySQL 5.6.3 from http://labs.mysql.com/. His first attempt, with the multi-threaded slave preview, was not successful: parallel apply on the slave is serial when the master does not also run 5.6.3. He said, "I hope this changes, as a typical upgrade is first done on the slave."

He was more successful with the InnoDB features preview. A few more mutex contention bottlenecks were removed in it, and he wanted to compare its peak row update rate against MySQL 5.1.52. He configured InnoDB to use a buffer pool large enough to cache all data and ran a custom version of sysbench with 8 tables. On an 8-core server, the peak rate on the preview is about twice the peak rate of the unmodified InnoDB plugin in 5.1.52.

This is good news. The results below list the number of rows updated per second using 8 to 256 concurrent clients updating 1 row by primary key per UPDATE statement.

Configuration used:

  • The database had 8 tables with 2M rows each.
  • The binlog was disabled during the test.
  • innodb_flush_log_at_trx_commit=2
  • innodb_doublewrite=0

This is a configuration meant for benchmarks but it also allows maximum stress to be put on InnoDB. He only ran the test once for each level of concurrency and won't try to explain the results at 32 connections.
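Put together as a my.cnf fragment, the settings above would look roughly like this (the buffer pool size is an assumption, chosen only to be large enough to cache the 8 tables of 2M rows each):

[mysqld]
# cache the whole data set in memory (size is an assumption)
innodb_buffer_pool_size = 8G
# relax durability so that maximum stress lands on InnoDB itself
innodb_flush_log_at_trx_commit = 2
innodb_doublewrite = 0
# binary logging stays disabled simply by not setting log-bin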

Clients             8      16      32      64     128     256
MySQL 5.1.52    32423   60394  121480   77951   76189   74127
MySQL 5.6.3     30446   53512  107291  138408  149003  151705


Wednesday, July 27, 2011

Reduced contention during datafile extension

Another performance problem found by PoorMansProfiler

Inaam Rana said in his blog post on the InnoDB blog:




InnoDB has an internal file system management module that primarily manages the space in the data files. One of the pain points was the coarse level of locking used when a data file has to be extended. More about this issue can be found here. In the latest labs release we have fixed this problem.
When we need to extend a data file inside InnoDB we write zero-filled pages synchronously to the file. The user thread which is extending the data file holds fil_system::mutex during the whole operation. This mutex covers changes to all data structures related to file system management. What this means is that when we do a regular IO we do need to acquire fil_system::mutex, though only for a short time. Because the thread doing the data file extension holds the mutex during the whole IO operation, any other thread (user or background) trying to access a data file for a regular read or write ends up waiting. This brings the whole system to a virtual standstill as no read or write activity can happen. This is true even if a thread is trying to access a data file that is not the one being extended.
We fixed this issue by introducing an internal flag to the data structure indicating that a file is being extended. Now if a user thread needs to extend a data file it does acquire the fil_system::mutex, but releases it after setting the flag. Once it is done with the extension IO it resets the flag. This allows other threads to access data files while one of the files is being extended. It also allows multiple files to be extended in parallel. Our tests have shown that the issue of stalls due to file extension is indeed fixed by this approach.
A related feature, which can be considered future work, is to offload the file extension to a background thread.
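
As an aside, the "poor man's profiler" mentioned in the title boils down to sampling every mysqld thread's stack with gdb and counting how often each frame's function name (usually the 4th field of a backtrace line) appears. A minimal sketch in that spirit, not the exact published script, is:

#!/bin/bash
# Attach gdb to mysqld, dump all thread stacks once, then count how often
# each function name shows up. Frames where many threads are blocked
# (for example waiting on fil_system::mutex) float to the top.
pid=$(pidof mysqld)

gdb -ex "set pagination 0" -ex "thread apply all bt" -batch -p "$pid" \
  | awk '/^#/ { print $4 }' \
  | sort | uniq -c | sort -rn | head -20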



Friday, July 22, 2011

When does InnoDB compress and decompress pages?


There are two sections for rows in the page format for InnoDB compressed tables. The compressed section has one or more rows and must be decompressed to access individual rows. The modification log has uncompressed rows and rows can be accessed without decompressing. The modification log is used to avoid decompressing and then possibly recompressing the compressed section on every row change. The buffer pool also has separate uncompressed copies of some pages so that every row read does not require a page decompression.

I want to understand when a page must be decompressed or recompressed. This is definitely an incomplete list.
  • A page is decompressed when a row is read and the uncompressed version of the page is not in the buffer pool.
  • I think a row can be deleted from the compressed section without decompressing it in many cases, as marking it deleted appears to use fields that are not in the compressed section. 
  • Inserts are done to the modification log assuming it has room. When it is full the modification log and data from the compressed section are merged and the result is recompressed. When the result is too large to fit in a compressed page then the page is split and both post-split pages are recompressed.
  • I don't understand the code for UPDATE statements and need to read more source code. The docs state that updates can be done to the modification log but I don't know what that implies.
A compression failure occurs when a page is recompressed and the result is too big. InnoDB fixes this by splitting the page. This only works for index-organized tables, and InnoDB is index-organized. You can monitor the rate of compression failures using the information schema table INNODB_CMP. This reports the global rate of compression failures. When you have a server with many tables you need to know which tables have a high failure rate, and that information is only available in a yet-to-be-published Facebook patch.

But even the changes in the Facebook patch are not sufficient. In some cases it is important to understand which indexes in a table cause the compression failures; the alternative is to guess. Not all indexes on a table compress equally well, yet the same compression factor is used for every index on a table via the KEY_BLOCK_SIZE option to CREATE TABLE.
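For reference, the global counters are easy to pull with the mysql client; a quick sketch, where the failure count is simply compress_ops minus compress_ops_ok:

mysql -e "SELECT page_size, compress_ops, compress_ops_ok,
                 compress_ops - compress_ops_ok AS compress_failures
          FROM information_schema.INNODB_CMP;"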

------------------------------------------------------------------------------------------------------------ 
By Mark Callaghan
Tuesday, July 19, 2011

MySQL Cluster Architecture

Introduction:

MySQL Cluster is a write-scalable, real-time, ACID-compliant transactional database, combining 99.999% availability with the low TCO of open source. Designed around a distributed, multi-master architecture with no single point of failure, MySQL Cluster scales horizontally on commodity hardware to serve read and write intensive workloads, accessed via SQL and NoSQL interfaces.

MySQL Cluster's real-time design delivers predictable, millisecond response times with the ability to service millions of operations per second. Support for in-memory and disk-based data, automatic data partitioning (sharding) with load balancing and the ability to add nodes to a running cluster with zero downtime allows linear database scalability to handle the most unpredictable web-based workloads.





 MySQL Cluster comprises three types of node which collectively provide service to the application:
   
  • Data nodes manage the storage and access to data.  Tables are automatically sharded across the data nodes which also transparently handle load balancing, replication, failover and self-healing.

  • Application nodes provide connectivity from the application logic to the data nodes. Multiple APIs are presented to the application.  MySQL provides a standard SQL interface, including connectivity to all of the leading web development languages and frameworks. There is also a whole range of NoSQL interfaces including memcached, REST/JSON, C++ (NDB-API), Java, JPA and LDAP.

  • Management nodes are used to configure the cluster and provide arbitration in the event of a network partition.

 Source: http://www.mysql.com/products/cluster/architecture.html
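
To make the three node types more concrete, here is a minimal config.ini sketch for the management server of a hypothetical cluster; hostnames, directories and memory sizes are placeholders, not values from the article:

# one management node, two data nodes, one SQL (application) node
[ndbd default]
# each table fragment is stored on two data nodes
NoOfReplicas=2
DataMemory=2G
IndexMemory=256M

[ndb_mgmd]
HostName=mgmt.example.com
DataDir=/var/lib/mysql-cluster

[ndbd]
HostName=data1.example.com
DataDir=/var/lib/mysql-cluster

[ndbd]
HostName=data2.example.com
DataDir=/var/lib/mysql-cluster

[mysqld]
HostName=sql1.example.com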
Thursday, July 14, 2011

getopts in shell script

getopts
The getopts command simplifies the task of validating and parsing command line options and arguments for your shell scripts.

Syntax:


getopts optstring name [arg ...]

Example:


Step 1: First I define all my option-holding variables.

ListFiles=0
MoveFiles=0
email=""


Step 2: The while loop.

The following while statement loops through all the options and sets the corresponding variable for each one. getopts returns true while there are options to be processed. The option string, here "lme:h", specifies which options the script accepts. If the user specifies an option that is not in this string, it falls into the *) case, which displays help on how to use the script, with examples. If an option is followed by a colon in the option string, the value immediately following that option on the command line is placed in the variable $OPTARG.


while getopts "lme:h" option; do
case "$option" in
l) ListFiles=1;;
m) MoveFiles=1;;
e) email="$OPTARG";;
h|*) helpFunction;;
esac
done
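
The helpFunction called above is assumed to be defined earlier in the script; a minimal sketch (the usage text is only illustrative) could be:

helpFunction()
{
    # Print usage and exit; reached via -h or any unknown option.
    echo "Usage: $0 [-l] [-m] [-e email] [-h]"
    echo "  -l         list files"
    echo "  -m         move files"
    echo "  -e email   email address to use"
    echo "  -h         show this help"
    exit 1
}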

Script Call:
-----------------------------------------------------------------------------------------------------------------------------------------------
ExampleScript.sh -l

#It will go into case l) and set ListFiles=1.

ExampleScript.sh -m
#It will go into case m) and set MoveFiles=1.

ExampleScript.sh -m -e "example@gmail.com"
#It will go into cases m) and e), set MoveFiles=1, and the email address will be placed in $OPTARG and assigned to the "email" variable.

ExampleScript.sh -h
#It will go into case h|*) and call the function helpFunction to show help.

ExampleScript.sh -<any option not in the optstring we have provided>
#It will also go into case h|*), be treated as "*", and show the help.
------------------------------------------------------------------------------------------------------------------------------------------------


Comments/suggestions are welcome.. Happy scripting ... :)
 
Friday, July 1, 2011

GET SIZE OF A DIRECTORY (EXCLUDING SUBDIRECTORIES) IN UNIX

We all know the du command to get the size of a directory. But the problem is that when you use "du <directory name>", it gives you a list of all the subdirectories, including the directory you asked for, each with its size.

But what if I only want the size of the directory I have passed as an argument, and not all the subdirectories?

In that scenario we can use:

du -sh <directory name>                              

Example 1:

du -h /home/mysql/admin/                             
   1K   /home/mysql/admin/scripts/neel               
   8K   /home/mysql/admin/scripts                    
   1K   /home/mysql/admin/bin-logs/test_instance_4   
   1K   /home/mysql/admin/bin-logs/test_instance_3   
   1K   /home/mysql/admin/bin-logs/orphan            
   1K   /home/mysql/admin/bin-logs/test_instance_1   
   1K   /home/mysql/admin/bin-logs/test_instance_2   
   9K   /home/mysql/admin/bin-logs                   
  20K   /home/mysql/admin                            


In the above example I have passed "/home/mysql/admin/" as an argument to du and it lists every subdirectory with its size. (Please note the -h switch converts the sizes into a human-readable format, i.e. KB here.)

Example 2:

 du -sh /home/mysql/admin/
  20K   /home/mysql/admin

In this example I have used the "s" switch (to show the size of the given directory only, not each subdirectory) along with "h" (human-readable format), and it gave me the size of the "/home/mysql/admin" directory only, rather than all the subdirectories as well.

I hope this will help someone. :)
 