How to scale MySQL to 200 million QPS

Updated 2024-04-14
2 answers
  1. Anonymous user, 2024-02-07

    MySQL Cluster reaches 200 million QPS by scaling out across data nodes.

    MySQL Cluster is a scalable, real-time, in-memory, ACID-compliant transactional database that combines high availability with the low total cost of ownership of open source software. In its design, MySQL Cluster adopts a distributed multi-master architecture that eliminates any single point of failure. It scales out across commodity hardware, automatically partitions data to carry read- and write-intensive workloads, and can be accessed through both SQL and NoSQL interfaces.
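To make the multi-node architecture concrete, here is a minimal sketch of an NDB cluster configuration file with one management node, two data nodes, and one SQL node. The hostnames, replica count, and memory size are illustrative assumptions, not values from the post.

```ini
# Minimal MySQL Cluster (NDB) config.ini sketch.
# Hostnames and sizes below are hypothetical examples.

[ndbd default]
NoOfReplicas=2            ; two copies of every partition: no single point of failure
DataMemory=2G             ; in-memory data storage per data node

[ndb_mgmd]                ; management node
HostName=mgmt.example.com

[ndbd]                    ; data node 1
HostName=data1.example.com

[ndbd]                    ; data node 2
HostName=data2.example.com

[mysqld]                  ; SQL API node
HostName=sql1.example.com
```

With `NoOfReplicas=2`, each automatically created partition has a replica on a second data node, which is what lets the cluster survive node failures without losing availability.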

    Originally designed as an embedded telecom database delivering carrier-grade availability and real-time performance for in-network applications, MySQL Cluster has evolved rapidly, with new feature sets extending its use cases to web, mobile, and enterprise applications deployed on-premises or in the cloud, including: large-scale OLTP, real-time analytics, e-commerce, inventory management, shopping carts and payment processing, order tracking, gaming, financial trading and fraud detection, mobile and micro-payments, session management and caching, data-feed streaming, analytics and recommendations, content management and delivery, communications and presence services, subscriber provisioning and entitlements, and more.

  2. Anonymous user, 2024-02-06

    MySQL Cluster is designed for two main workload types:

    OLTP (Online Transaction Processing): Memory-optimized tables provide sub-millisecond latency and extreme levels of concurrency for OLTP workloads while still maintaining durability. Disk-based tables are also supported for data sets larger than memory.
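As a sketch, the difference between memory-optimized and disk-based NDB tables shows up directly in the DDL. The table, tablespace, and file names below are hypothetical:

```sql
-- In-memory NDB table (the default): rows live in DataMemory on the data nodes.
CREATE TABLE session_cache (
  session_id BINARY(16) PRIMARY KEY,
  payload    VARBINARY(4096)
) ENGINE=NDBCLUSTER;

-- Disk-based storage: non-indexed columns are kept on disk in a tablespace.
CREATE LOGFILE GROUP lg1 ADD UNDOFILE 'undo1.log' ENGINE=NDBCLUSTER;
CREATE TABLESPACE ts1 ADD DATAFILE 'data1.dat'
  USE LOGFILE GROUP lg1 ENGINE=NDBCLUSTER;

CREATE TABLE order_history (
  order_id BIGINT PRIMARY KEY,   -- indexed columns always stay in memory
  details  VARBINARY(8192)       -- non-indexed columns are stored on disk
) TABLESPACE ts1 STORAGE DISK ENGINE=NDBCLUSTER;
```

Note that even for a `STORAGE DISK` table, indexed columns remain in memory, which is why disk-based tables still keep the latency penalty modest.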

    It is worth mentioning that MySQL Cluster excels at handling OLTP workloads, especially under a large number of concurrent query and transaction requests. The flexAsynch benchmark is typically used to measure how NoSQL performance actually scales as more data nodes are added to the cluster.

    Each data node in this benchmark ran on a dedicated 56-thread Intel E5-2697 v3 (Haswell) server. The results show the trend in throughput as the number of data nodes increases from 2 to 32 (note that MySQL Cluster currently supports up to 48 data nodes). Scaling remains almost linear throughout, and with 32 data nodes overall throughput reaches 200 million NoSQL queries per second.
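The near-linear scaling claim can be sanity-checked with simple arithmetic. Only the 32-node, 200 million QPS data point comes from the benchmark above; the per-node figure and the linear-scaling projection below are derived from it as an illustration:

```python
# Derive the implied per-node throughput from the benchmark's end point,
# then project other cluster sizes under an assumed linear-scaling model.

TOTAL_QPS_AT_32_NODES = 200_000_000   # flexAsynch result quoted above
NODES = 32

per_node_qps = TOTAL_QPS_AT_32_NODES // NODES   # implied reads/sec per data node

def projected_qps(data_nodes: int, efficiency: float = 1.0) -> float:
    """Project cluster throughput assuming (near-)linear scaling.

    efficiency < 1.0 models the small deviation from perfectly
    linear scaling observed in real benchmark runs.
    """
    return data_nodes * per_node_qps * efficiency

print(per_node_qps)        # 6250000 implied per node
print(projected_qps(2))    # projection at the smallest tested cluster size
print(projected_qps(48))   # projection at the current 48-data-node maximum
```

This implies roughly 6.25 million queries per second per data node, so even modest clusters deliver very high throughput if scaling stays close to linear.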

    The 200 million QPS benchmark was run on the latest general availability release of MySQL Cluster; more details are available in the release announcement and in the replay of the keynote webinar.
