Planet MySQL - http://www.planetmysql.org/

  • Evaluating Database Compression Methods
    Vadim Tkachenko and I have been working with Fractal Tree® storage engines (Fractal Tree engines are available in Percona Server for MySQL and MongoDB as TokuDB and PerconaFT, respectively). While doing so, we’ve become interested in evaluating database compression methods, to see how to make compression algorithms work even better than they do currently. In this blog post, I will discuss what we found in our compression research.

Introduction

Before we get to evaluating database compression methods, let’s review which compression properties are most relevant to databases in general.

The first thing to consider is compression and decompression performance. Databases tend to be very sensitive to decompression performance, as it is often done in the “foreground” – adding to client response latency. Compression performance, on the other hand, is less critical because it can typically run in the background without adding client latency. It can, however, cause an issue if the database fills its data compression queue and “chokes.” The database workload also affects compression performance demands. If the data is loaded only once and essentially becomes read-only, it might make sense to spend extra time compressing it – as long as the better compression ratio is achieved without impact to decompression speed.

The next important thing to consider is the compression block size, which can significantly affect compression ratio and performance. In some cases, the compression block size is fixed. Most InnoDB installations, for example, use a 16KB block size. In MySQL 5.7 it is possible to change the block size from 4KB to 64KB, but since this setting applies to the whole MySQL instance it isn’t commonly used. TokuDB and PerconaFT allow a much more flexible compression block size configuration. Larger compression block sizes tend to give a better compression ratio and may suit sequential scan workloads better, but if you’re accessing data in a random fashion you may see significant overhead, as the complete block typically must be decompressed.

Of course, compression will also depend on the data you’re compressing, and different algorithms may be better at handling different types of data. Additionally, different data structures in databases may structure data more or less optimally for compression. For example, if a database already implements prefix compression for data in the indexes, the indexes are likely to be less compressible with block compression systems.

Let’s examine what choices we have when it comes to compression algorithm selection and configuration. Typically, for a given block size – which is essentially a database configuration setting – you will have a choice of compression algorithm (such as zlib), library version and compression level.

Comparing different algorithms was tough until lzbench was introduced. lzbench allows for a simple comparison of different compression libraries through a single interface. For our test, we loaded different kinds of data into an uncompressed InnoDB table and then used it as a source for lzbench:

./lzbench -equicklz,1/zstd,1/snappy/lzma,1/zlib,1/lz4,1/brotli,1 -o3 -b16 data.ibd

This method is a good way to represent database structures and is likely to be more realistic than testing compression on the source text files. All results shown here are for “OnTime Air Performance.” We tried a variety of data, and even though the numbers varied the main outcomes are the same.
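If you want to reproduce this kind of block-wise measurement without lzbench, the general idea is easy to sketch in a few lines of Python. The following is a simplified, single-threaded approximation (a sketch, not lzbench itself): it splits a data file into fixed-size blocks and reports the compression ratio and throughput for codecs available in the Python standard library (zlib and lzma). The file name and block size are placeholders, and bindings for lz4, zstd, snappy or brotli would need to be installed separately.

# A rough block-wise compression benchmark (a sketch, not lzbench).
# Only standard-library codecs are used; the input file is a placeholder.
import time
import zlib
import lzma

DATA_FILE = "data.ibd"        # placeholder: any uncompressed data file
BLOCK_SIZE = 16 * 1024        # 16KB, a typical InnoDB page size

algorithms = {
    "zlib level 1":  (lambda b: zlib.compress(b, 1), zlib.decompress),
    "zlib level 6":  (lambda b: zlib.compress(b, 6), zlib.decompress),
    "lzma preset 1": (lambda b: lzma.compress(b, preset=1), lzma.decompress),
}

with open(DATA_FILE, "rb") as f:
    data = f.read()
blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
mb = len(data) / (1024 * 1024)

for name, (compress, decompress) in algorithms.items():
    t0 = time.perf_counter()
    compressed = [compress(b) for b in blocks]
    t1 = time.perf_counter()
    for c in compressed:
        decompress(c)
    t2 = time.perf_counter()
    ratio = len(data) / sum(len(c) for c in compressed)
    print(f"{name}: ratio {ratio:.2f}, compress {mb / (t1 - t0):.0f} MB/s, "
          f"decompress {mb / (t2 - t1):.0f} MB/s")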
You can see results for our other data types in this document. The results for compression are heavily CPU dependent. All the data below is from single-thread compression benchmarks run on an Intel Xeon E5-2643 v2 @ 3.5GHz. Below are some of the most interesting results we found.

Comparing Compression Algorithms

Using a standard 16KB block size and a low level of compression, we can see that there is a huge variety of compression and decompression speed. The results ranged from 30MB per second for LZMA to more than 1GB per second for LZ4 for compression, and from 100MB per second to 3.5GB per second for decompression (for the same pair).

Now let’s look at the compression ratios achieved. You can see a large variety of outcomes for this data set as well, with ratios ranging from 1.89:1 (LZ4) to 6.57:1 (LZMA). Notice how the fastest and slowest compression libraries achieve the worst and best compression: better compression generally comes at disproportionately more CPU usage. Achieving 3.5 times more compression (LZMA) requires spending 37 times more CPU resources. This ratio, though, is not at all fixed: for example, Brotli provides 2.9 times better compression at 9 times higher CPU cost, while Snappy manages to provide 1.9 times better compression than LZ4 with only 1.7 times more CPU cost.

Another interesting compression algorithm property is how much faster decompression is than compression. It is interesting to see there is not as large a variance between compression algorithms, which implies that the default compression level is chosen in such a way that compression is 2 to 3.5 times slower than decompression.

Block Size Impact on Compression

Now let’s look at how the compression block size affects compression and decompression performance.

On-Time Performance Data: Compression Speed vs Block Size (MB/sec)

Compression Method           4KB      16KB     64KB     128KB    256KB    256KB/4KB
quicklz 1.5.0 level 1        128.62   299.42   467.9    518.97   550.8    4.28
zstd v0.4.1 level 1          177.77   304.16   357.38   396.65   396.02   2.23
snappy 1.1.3                 674.99   644.08   622.24   626.79   629.83   0.93
lzma 9.38 level 1            18.65    30.23    36.43    37.44    38.01    2.04
zlib 1.2.8 level 1           64.73    110.34   128.85   124.74   124.1    1.92
lz4 r131                     996.11   1114.35  1167.11  1067.69  1043.86  1.05
brotli 2015-10-29 level 1    64.92    123.92   170.52   177.1    179.51   2.77

If we look at compression and decompression speed versus block size, we can see that there is a difference both for compression and decompression, and that it depends a lot on the compression algorithm. QuickLZ, using these settings, compresses 4.3 times faster with 256KB blocks than with 4KB blocks. It is interesting that LZ4, which I would consider a “similar” fast compression algorithm, is not at all similar, demonstrating only minimal changes in compression and decompression performance with increased block size. Snappy is perhaps the most curious compression algorithm of them all: it has lower performance when compressing and decompressing larger blocks.
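Before looking at the measured ratios, here is how you could probe the same block-size effect yourself. This is a simplification using only Python’s standard zlib module on a placeholder input file: compress the same data in blocks of increasing size and compare the overall ratio.

# Sketch: effect of block size on compression ratio for a single codec.
import zlib

with open("data.ibd", "rb") as f:    # placeholder input file
    data = f.read()

for block_size in (4096, 16384, 65536, 131072, 262144):
    compressed_bytes = sum(
        len(zlib.compress(data[i:i + block_size], 1))
        for i in range(0, len(data), block_size))
    print(f"{block_size // 1024}KB blocks: ratio {len(data) / compressed_bytes:.2f}")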
Let’s examine how compression ratio varies with different block sizes.

On-Time Performance Data: Compression Ratio vs Block Size

Compression Method           4KB    16KB   64KB   128KB  256KB  256KB/4KB
quicklz 1.5.0 level 1        3.09   3.91   4.56   4.79   4.97   1.61
zstd v0.4.1 level 1          3.95   5.24   6.41   6.82   7.17   1.82
snappy 1.1.3                 2.98   3.65   4.21   4.21   4.21   1.41
lzma 9.38 level 1            4.86   6.57   7.96   8.43   8.71   1.79
zlib 1.2.8 level 1           3.79   4.73   5.33   5.44   5.50   1.45
lz4 r131                     1.75   1.89   1.99   2.00   2.01   1.15
brotli 2015-10-29 level 1    4.12   5.47   6.61   7.00   7.35   1.78

We can see all the compression libraries perform better with larger block sizes, though how much better varies. LZ4 only benefits a little from larger blocks, with only a 15% better compression ratio going from 4KB to 256KB, while Zstd, Brotli and LZMA all get about an 80% better compression ratio with large block sizes. This is another area where I would expect results to be data dependent: with highly repetitive data, gains are likely to be more significant with larger block sizes. Compression library gains from larger block sizes decrease as the base block size increases. For example, most compression libraries are able to get at least a 20% better compression ratio going from a 4KB to a 16KB block size, while going from 64KB to 256KB only allows for a 4-6% better compression ratio – at least for this data set.

Compression Level Impact

Now let’s review what the compression level does to compression performance and ratios.

Compression Speed vs Compression Level (MB/sec)

Compression Method     1       2       3       4       5       6       7       8       9       Max
zstd v0.4.1            404.25  415.92  235.32  217.69  207.01  146.96  124.08  94.93   82.43   21.87
lzma 9.38              39.1    37.96   36.52   35.07   30.85   3.69    3.69    3.69    3.69    3.69
zlib 1.2.8             120.25  114.52  84.14   76.91   53.97   33.06   25.94   14.77   6.92    6.92
brotli 2015-10-29      172.97  179.71  148.3   135.66  119.74  56.08   50.13   29.4    35.46   0.39

Note: not every compression algorithm provides level selection, so we’re only looking at the ZSTD, LZMA, ZLIB and BROTLI compression libraries. Also, not every library provides ten compression levels. If more than ten levels were available, the first nine and the maximum compression level were tested. If fewer than ten levels were available (as with LZMA), the result for the maximum compression level was used to fill the gaps.

As you might expect, higher compression levels generally mean slower compression. For most compression libraries, the difference between the fastest and slowest compression level is 10-20 times – with the exception of Brotli, where the highest compression level means really slow compression (more than 400 times slower than the fastest compression).

Decompression Speed vs Compression Level (MB/sec)

Compression Method     1       2       3       4       5       6       7       8       9       Max
zstd v0.4.1            827.61  848.71  729.93  809.72  796.61  904.85  906.55  843.01  894.91  893.31
lzma 9.38              128.91  142.28  148.57  148.72  148.75  157.67  157.67  157.67  157.67  157.67
zlib 1.2.8             386.59  404.28  434.77  415.5   418.28  438.07  441.02  448.56  453.64  453.64
brotli 2015-10-29      476.89  481.89  543.69  534.24  512.68  505.55  513.24  517.55  521.84  499.26

This is where things get really interesting. With a higher compression level, decompression speed doesn’t change much – if anything, it becomes higher. If you think about it, it makes sense: during the compression phase we’re searching for patterns in data and building some sort of dictionary, and extensive pattern searches can be very slow. Decompression, however, just restores the data using the same dictionary and doesn’t need much time finding data patterns. The smaller the compressed data size is, the better the performance should be.
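This pattern is easy to confirm with a quick sketch: compress the same buffer at each zlib level and time both directions. This is a simplified illustration using only Python’s standard zlib module on a whole file rather than on database-sized blocks; the input file is a placeholder and the absolute numbers will differ from the lzbench figures above.

# Sketch: the level mostly changes compression speed, not decompression speed.
import time
import zlib

with open("data.ibd", "rb") as f:    # placeholder input file
    data = f.read()
mb = len(data) / (1024 * 1024)

for level in range(1, 10):
    t0 = time.perf_counter()
    compressed = zlib.compress(data, level)
    t1 = time.perf_counter()
    zlib.decompress(compressed)
    t2 = time.perf_counter()
    print(f"zlib level {level}: ratio {len(data) / len(compressed):.2f}, "
          f"compress {mb / (t1 - t0):.0f} MB/s, decompress {mb / (t2 - t1):.0f} MB/s")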
Let’s examine the compression ratio.

Compression Ratio vs Compression Level

Compression Method     1      2      3      4      5      6      7      8      9      Max
zstd v0.4.1            7.17   7.20   6.98   7.05   7.11   7.62   7.76   7.89   7.89   8.16
lzma 9.38              8.20   8.71   8.95   8.96   8.96   10.45  10.45  10.45  10.45  10.45
zlib 1.2.8             5.50   5.71   5.97   6.27   6.50   6.80   6.88   7.01   7.09   7.09
brotli 2015-10-29      7.35   7.35   7.41   7.46   7.51   8.70   8.76   8.80   8.83   10.36

As we can see, higher compression levels indeed improve the compression ratio most of the time. The ZSTD library seems to be a strange exception, where a higher level of compression does not always mean a better ratio. We can also see that BROTLI’s extremely slow compression mode can really produce a significant boost to compression, getting it to the level of compression LZMA achieves – quite an accomplishment.

Different compression levels don’t have the same effect on compression ratios as different compression methods do. While we saw a 3.5 times compression rate difference between LZ4 and LZMA, the highest compression rate difference between the fastest and slowest mode is 1.4x for Brotli – with a 20-30% improvement in compression ratio more likely. An important point, however, is that the compression ratio improvement from higher compression levels comes at no decompression slowdown – in contrast to using a more complicated compression algorithm to achieve better compression.

In practice, this means having control over the compression level is very important, especially for workloads where data is written once and read frequently. In that case, you can afford a higher compression level, since the data rarely needs to be recompressed. Another factor is that the compression level is very easy to change dynamically, unlike the compression algorithm. In theory, a database engine could dynamically choose the compression level based on the workload – a higher compression level can be used if there are a lot of CPU resources available, and a lower compression level can be used if the system can’t keep up with compressing data.
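As a toy illustration of that idea (purely hypothetical, and not a feature of any engine discussed here), a background compressor could map the depth of its compression queue to a compression level, spending CPU on better compression only when it has the headroom to do so:

# Hypothetical heuristic: back off to cheaper compression as the queue grows.
def pick_zlib_level(queue_depth, high_watermark=64):
    """Map compression-queue pressure to a zlib level (9 = best ratio)."""
    if queue_depth >= high_watermark:
        return 1                                   # falling behind: fastest level
    headroom = 1 - queue_depth / high_watermark    # 1.0 when idle, 0.0 when full
    return max(1, min(9, 1 + round(headroom * 8)))

# e.g. pick_zlib_level(0) -> 9 (idle, best ratio); pick_zlib_level(80) -> 1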
Records

It is interesting to note a few records generated from all of these tests. Among all the methods tried, the lowest compression ratio was LZ4 with a 4KB block size, providing a 1.75 compression ratio. The highest ratio was LZMA with a 256KB block size, providing a maximum compression ratio of 10.45. LZ4 is the fastest both in compression and decompression, showing the best compression speed of 1167MB per second with a 64KB block size, and a decompression speed of 3666MB per second with a 16KB block size. LZMA is generally the slowest at both compression and decompression, compressing at 0.88MB per second with a 16KB block size and the maximum compression level, and decompressing at 82MB per second with a 4KB block size. Only Brotli at the highest compression level compressed data more slowly (at 0.39MB per second). Looking at these numbers, we see a three-orders-of-magnitude difference in compression performance and a 50-times difference in decompression performance. This demonstrates how the right compression algorithm choices and settings can make or break compression for your application.

Recommendations

Snappy looks like a great and fast compression algorithm, offering pretty decent compression with performance that is likely to be good enough for most workloads. Zstd is new and not yet 100% stable, but once it is, it will be a great replacement for zlib as a general purpose compression algorithm. It gives a better compression ratio and better compression and decompression speed. At higher levels of compression, it is able to get close to LZMA’s compression ratio, at least for some kinds of data, while having a much better decompression speed. LZMA remains the choice when you want the highest compression ratio at all costs. However, it will be all costs indeed! LZMA is slow both for compression and decompression. More often than not, LZMA is chosen without a clear understanding of how slow it is, leading to performance issues.

Generally, it’s better to get the compression ratio you’re looking for by adjusting the compression level rather than by changing the algorithm, as the compression level affects compression performance more – and may even positively impact decompression performance. If your system allows it, choosing larger block sizes can be best for your workload. Larger block sizes generally give a better compression ratio and better compression and decompression speed. However, if you have many “random” data lookups, constant decompression of large blocks of data is likely to negate all of those benefits. In our tests, block sizes up to 64KB provided the most benefit, with further increases showing minimal impact. It is possible, however, that these diminishing returns from increasing the block size depend significantly on the data type.

Data and Details

Raw results, data sources and additional details for our evaluation of database compression methods are available in Google Docs.

  • New MySQL Online Training
    Oracle University recently unveiled a new online training offering – the MySQL Learning Subscription. The combination of freely-accessible and compelling paid content makes this an exciting development to me, and it should prove valuable to the community and customer base alike. This post will briefly explore this new MySQL educational resource.

Organization

The subscription content is organized into topical “channels”. Current top-level channels are: Getting Started, Development, Administration and Security. These channels have sub-channels as well – for example, the Getting Started channel includes Getting Started With MySQL New Features and MySQL For Beginners, which have 13 videos and 1 video associated with them, respectively. Content is video, and ranges from 5 minutes to over an hour in duration. Unlike traditional multi-day courses, this new delivery mechanism allows users to choose targeted, topical content which can be consumed as part of a daily job. Training can be skipped, paused, fast-forwarded or rewound as needed to solve specific problems, develop specific skills, or answer specific questions.

Free content

There’s an extensive collection of free content made available to users who have created a free Oracle web account. Previewing the subscription gives access to both full-length videos as well as short previews of content reserved for paid subscribers. A number of the free videos are recordings of Oracle Open World 2015 sessions – great for community members unable to attend the live event. That alone provides community users with free access to hours of useful MySQL content. There’s some really great content here, including presenters such as Geir Hoydalsvik (MySQL Server Engineering Senior Director @ Oracle), Sunny Bains (lead for InnoDB development), Mark Leith (creator of the SYS schema and PERFORMANCE_SCHEMA guru), Luis Soares (MySQL replication development lead) and a number of other Oracle experts.

Paid content

There’s good reason to look beyond the free content – the paid subscription includes recordings of a number of MySQL training courses. This includes the MySQL For Database Administrators, MySQL Performance Tuning and MySQL For Developers courses. The cost of each of those courses is roughly equivalent to the cost of a year-long learning subscription. Subscribers will also find a number of additional videos aimed at helping MySQL users address specific needs, and the focused, topical content ensures users can consume it at a pace and focus that meets their needs.

Conclusion

Check out the new learning subscription – you’re likely to find valuable information in both the free and paid components of this offering.

  • Press Release: Severalnines serves up Turkish delight for iyzico’s payment platform
    iyzico uses ClusterControl to increase MySQL database uptime

Stockholm, Sweden and anywhere else in the world - 09 March 2016 - Severalnines, the provider of database automation and management software, today announced its latest customer, iyzico, a Turkish Payment Service Provider (PSP) that offers ecommerce merchants and marketplaces like sahibinden.com, Modanisa and Babil an efficient way to accept online payments in Turkey. It also provides other services such as analytics, fraud protection and settlement.

iyzico helps over 27,000 registered merchants navigate the difficult and complicated merchant registration process for vPOS in Turkey. The complicated process often results in rejection rates as high as 80% for some businesses. iyzico makes it easier for merchants to start selling in Turkey via a single integration of the iyzico module, and becomes the primary contact for online payment procedures.

Offering online payment services requires iyzico to be online around the clock. iyzico needed to provide high availability and a seamless service to its merchants in order to stay competitive. After being recommended by the IT team, Severalnines’ ClusterControl product was used by iyzico to help keep their MySQL databases highly available. They needed a database management tool to communicate between the priority data centre in Istanbul and the fail-safe in Ankara; this required an active/active database cluster that could assist in failovers. Severalnines was chosen as it could offer database replication at scale and the diagnostics required to manage iyzico’s databases. Including the trial period, it took only three weeks for iyzico to go live on ClusterControl, due to easy integration and the Severalnines support in coding fail-safes between nodes and databases. The collaboration between Severalnines and iyzico created a secure database management system offering high availability, even when a data centre was affected by a power cut. iyzico intends to move to the enterprise ClusterControl solution so it can manage encrypted data and work on developing the capabilities of a new data centre.

Tahsin Isin, co-founder and CTO of iyzico, stated: “Severalnines is the perfect solution to help us combat the problem of using erratic data centres; last year we experienced several outages. Severalnines has helped us optimise the process of database replication and supporting active/active database clusters, so we can continue offering our services to our clients even when our main data centre is down.”

Vinay Joosery, Severalnines CEO, stated: “We are delighted to have such a fast-growing FinTech company working with us. We are fully committed to helping iyzico solve problems like data centre outages and continue to stay online with maximum uptime. We have enjoyed working with the iyzico team, helping them to continue innovating in a very challenging Turkish Financial Services environment.”

About Severalnines

Severalnines provides automation and management software for database clusters. We help companies deploy their databases in any environment, and manage all operational aspects to achieve high-scale availability. Severalnines’ products are used by developers and administrators of all skill levels to provide the full ‘deploy, manage, monitor, scale’ database cycle, thus freeing them from the complexity and learning curves that are typically associated with highly available database clusters. The company has enabled over 7,000 deployments to date via its popular online database configurator.
Severalnines currently counts BT, Orange, Cisco, CNRS, Technicolour, AVG, Ping Identity and Paytrail among its customers. Severalnines is a private company headquartered in Stockholm, Sweden, with offices in Singapore and Tokyo, Japan. To see who is using Severalnines today, visit http://www.severalnines.com/company

About iyzico

iyzico is a payment service provider (PSP) for online businesses and enterprises, particularly e-commerce platforms. iyzico’s payment system provides fast onboarding and easy integration in less than 24 hours, and is PCI-DSS certified to ensure maximum security. It offers online businesses and enterprises the ability to collect payments in their local currency through installments. Founded in 2013, iyzico has over 27,000 registered merchant accounts and is one of the fastest growing financial technology companies in the region. http://www.iyzico.com

Tags: database management, clustercontrol, iyzico, payment processing, pci-dss, MySQL, high availability

  • Orchestrator: MySQL Replication Topology Manager
    This blog post discusses Orchestrator: MySQL Replication Topology Manager.

What is Orchestrator?

Orchestrator is a replication topology manager for MySQL. It has many great features:

- The topology and status of the replication tree is automatically detected and monitored
- Either a GUI, CLI or API can be used to check the status and perform operations
- Supports automatic failover of the master, and the replication tree can be fixed when servers in the tree fail – either manually or automatically
- It is not dependent on any specific version or flavor of MySQL (MySQL, Percona Server, MariaDB or even MaxScale binlog servers)
- Orchestrator supports many different types of topologies, from a single master -> slave setup to complex multi-layered replication trees consisting of hundreds of servers
- Orchestrator can make topology changes and will do so based on the state at that moment; it does not require a predefined configuration describing the database topology
- The GUI is not only there to report the status – one of the cooler things you can do is change replication just by doing a drag and drop in the web interface (of course you can do this and much more through the CLI and API as well)

Here’s a gif that demonstrates this (click on an image to see a larger version).

Orchestrator’s manual is quite extensive and detailed, so the goal of this blog post is not to go through every installation and configuration step. It will just give a global overview of how Orchestrator works, while mentioning some important and interesting settings.

How Does It Work?

Orchestrator is a Go application (binaries, including rpm and deb packages, are available for download). It requires its own MySQL database as a backend server to store all information related to the Orchestrator-managed database cluster topologies. There should be at least one Orchestrator daemon, but it is recommended to run many Orchestrator daemons on different servers at the same time – they will all use the same backend database, but only one Orchestrator is going to be “active” at any given moment in time. (You can check who is active under the Status menu on the web interface, or in the database in the active_node table.)

Using MySQL As Database Backend, Isn’t That A SPOF?

If the Orchestrator MySQL database is gone, it doesn’t mean the monitored MySQL clusters stop working. Orchestrator just won’t be able to control the replication topologies anymore. This is similar to how MHA works: everything will work, but you can not perform a failover until MHA is back up again. At this moment, a MySQL backend is required and there is no clear/tested support for making it highly available (HA) as well. This might change in the future.

Database Server Installation Requirements

Orchestrator only needs a MySQL user with limited privileges (SUPER, PROCESS, REPLICATION SLAVE, RELOAD) to connect to the database servers. With those permissions, it is able to check the replication status of the node and perform replication changes if necessary. It supports different ways of replication: binlog file positions, MySQL & MariaDB GTID, Pseudo GTID and binlog servers. There is no need to install any extra software on the database servers.

Automatic Master Failure Recovery

One example of what Orchestrator can do is promote a slave if a master is down. It will choose the most up-to-date slave to be promoted.
Let’s see what it looks like. In this test we lost rep1 (the master), and Orchestrator promoted rep4 to be the new master and started replicating the other servers from the new master. With the default settings, if rep1 comes back, rep4 is going to continue the replication from rep1. This behavior can be changed with the setting ApplyMySQLPromotionAfterMasterFailover:True in the configuration.

Command Line Interface

Orchestrator has a nice command line interface too. Here are some examples:

Print the topology:

> orchestrator -c topology -i rep1:3306 cli
rep1:3306     [OK,5.6.27-75.0-log,ROW,>>]
+ rep2:3306   [OK,5.6.27-75.0-log,ROW,>>,GTID]
+ rep3:3306   [OK,5.6.27-75.0-log,ROW,>>,GTID]
+ rep4:3306   [OK,5.6.27-75.0-log,ROW,>>,GTID]
+ rep5:3306   [OK,5.6.27-75.0-log,ROW,>>,GTID]

Move a slave:

orchestrator -c relocate -i rep2:3306 -d rep4:3306

Print the topology again:

> orchestrator -c topology -i rep1:3306 cli
rep1:3306     [OK,5.6.27-75.0-log,ROW,>>]
+ rep3:3306   [OK,5.6.27-75.0-log,ROW,>>,GTID]
+ rep4:3306   [OK,5.6.27-75.0-log,ROW,>>,GTID]
  + rep2:3306 [OK,5.6.27-75.0-log,ROW,>>,GTID]
+ rep5:3306   [OK,5.6.27-75.0-log,ROW,>>,GTID]

As we can see, rep2 is now replicating from rep4.

Long Queries

One nice addition to the GUI is how it displays slow queries on all servers inside the replication tree. You can even kill bad queries from within the GUI.

Orchestrator Configuration Settings

Orchestrator’s daemon configuration can be found in /etc/orchestrator.conf.json. There are many configuration options, some of which we elaborate on here:

- SlaveLagQuery – Custom queries can be defined to check slave lag.
- AgentAutoDiscover – If set to True, Orchestrator will auto-discover the topology.
- HTTPAuthPassword and HTTPAuthUser – Avoid everybody being able to access the Web GUI and change your topology.
- RecoveryPeriodBlockSeconds – Avoids flapping.
- RecoverMasterClusterFilters – Defines which clusters should auto failover/recover.
- PreFailoverProcesses – Orchestrator will execute this command before the failover.
- PostFailoverProcesses – Orchestrator will execute this command after the failover.
- ApplyMySQLPromotionAfterMasterFailover – Detaches the promoted slave after failover.
- DataCenterPattern – If there are multiple data centers, you can mark them using a pattern (they will get different colors in the GUI).

Limitations

While Orchestrator is a very feature-rich application, there are still some missing features and limitations of which we should be aware. One of the key missing features is that there is no easy way to promote a slave to be the new master. This could be useful in scenarios where the master server has to be upgraded, there is a planned failover, etc. (this is a known feature request). Some known limitations:

- Slaves can not be manually promoted to be a master
- Does not support multi-source replication
- Does not support all types of parallel replication
- At this moment, combining this with Percona XtraDB Cluster (Galera) is not supported

Is Orchestrator Your High Availability Solution?

In order to integrate this in your HA architecture or include it in your failover processes, you still need to manage many aspects manually, which can all be done by using the different hooks available in Orchestrator:

- Updating application connectivity: VIP handling, updating DNS
- Updating proxy server (MaxScale, HAProxy, ProxySQL…) connections
- Automatically setting slaves to read only, to avoid writes happening on non-masters and causing data inconsistencies
- Fencing (STONITH) of the dead master, to avoid split-brain in case a crashed master comes back online (and applications still try to connect to it)
- If semi-synchronous replication needs to be used to avoid data loss in case of master failure, this has to be manually added to the hooks as well

The work that needs to be done is comparable to having a setup with MHA or MySQLFailover. (A rough sketch of what such a hook script could look like is given at the end of this post.)

This post also doesn’t completely describe the decision process that Orchestrator takes to determine if a server is down or not. The way we understand it right now, one active Orchestrator node will make the decision if a node is down or not. It does check a broken node’s slaves’ replication state to determine whether Orchestrator isn’t the only one losing connectivity (in which case it should just do nothing with the production servers). This is already a big improvement compared to MySQLFailover, MHA or even MaxScale’s failover scripts, but it still might cause some problems in some cases (more information can be found on Shlomi Noach’s blog).

Summary

The amount of flexibility, power and fun that this tool gives you with a very simple installation process is yet to be matched. Shlomi Noach did a great job developing this at Outbrain, Booking.com and now at GitHub. If you are looking for a MySQL topology manager, Orchestrator is definitely worth looking at.
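To give an idea of what such hook glue can look like, here is a minimal, hypothetical sketch of a post-failover script. Everything in it is an assumption to adapt to your own environment: how Orchestrator passes the failed and promoted hosts to the command configured in PostFailoverProcesses should be checked against the manual (plain command-line arguments are assumed here), pymysql is just one possible client library, and update_dns_record() is a placeholder for whatever VIP, DNS or proxy update your setup needs.

#!/usr/bin/env python3
# Hypothetical post-failover hook sketch: make the promoted master writable
# and repoint application connectivity. Not an official Orchestrator example.
import sys

import pymysql  # assumption: any MySQL client library would do


def set_read_only(host, value):
    """Toggle read_only on one MySQL host (credentials are placeholders)."""
    conn = pymysql.connect(host=host, user="orch_hook", password="REPLACE_ME")
    try:
        with conn.cursor() as cur:
            cur.execute("SET GLOBAL read_only = %d" % (1 if value else 0))
    finally:
        conn.close()


def update_dns_record(name, new_host):
    """Placeholder: call your DNS / VIP / proxy API here."""
    print(f"would point {name} at {new_host}")


if __name__ == "__main__":
    # Assumption: the hook is invoked as `script <failed_host> <promoted_host>`.
    failed_master, promoted_master = sys.argv[1], sys.argv[2]
    set_read_only(promoted_master, False)   # the new master must accept writes
    update_dns_record("db-master.example.com", promoted_master)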

  • Your Database, Any Cloud - Introducing NinesControl (beta)
    Get an Early Look at the New NinesControl (beta)

We’re excited to announce NinesControl, a developer-friendly service to deploy and manage MySQL, MariaDB and MongoDB clusters using your preferred cloud provider. Building systems for the cloud today means designing an application and database architecture that is both resilient and scalable. However, setting up a database cluster can be time consuming and complex.

What is NinesControl?

NinesControl is a new online service for developers. With a couple of simple steps, you can deploy and manage MySQL, MariaDB and MongoDB clusters on your preferred public cloud. Sign up to stay informed and apply for early access.

Who is it for?

NinesControl is specifically designed with developers in mind. It is currently in beta for DigitalOcean users, before we expand the service to other public cloud providers.

How does NinesControl work?

NinesControl is an online service that is fully integrated with DigitalOcean. Once you register for the service and provide your DigitalOcean “access key”, the service will launch droplets in your region of choice and provision database nodes on them. Sign up to stay informed and apply for early access.

Tags: MySQL, MongoDB, PostgreSQL, cloud, Database, digitalocean
