If you’ve ever stared at a slow dashboard wondering why a query that should be instantaneous is chewing through seconds, you already know the truth: databases are fast when answers come from memory and slow when they come from disk.
Caching is how we stack the odds in our favor. It’s not about magic settings or silver bullets — it’s about using memory wisely, reducing redundant work, and keeping your users away from the dreaded spinning wheel.
In this post, let’s walk — not sprint — through MySQL caching. I’ll show you:
- What caching really means inside MySQL.
- The knobs that matter and the ones that don’t.
- How external caching (Redis, Memcached, proxies) fits into the bigger story.
- How to measure whether your “tuning” actually worked.
Internal Caching in MySQL: Where It All Starts
When people say “cache” in MySQL, they sometimes mean completely different things. Let’s straighten that out first.
The heavy lifters (data-focused):
- InnoDB Buffer Pool (the big one)
- Adaptive Hash Index (a fast lane on hot ranges)
The mechanics (connection/query overhead):
- Thread Cache
- Table Open / Definition Cache
- Host Cache
What’s gone:
- The old Query Cache — retired in MySQL 8.0, and for good reason.
If you remember nothing else: the InnoDB Buffer Pool is your performance budget. Everything else is smoothing the edges.
The Star of the Show: The InnoDB Buffer Pool
Imagine the InnoDB Buffer Pool as InnoDB’s working memory. It holds table and index pages, some undo logs, and when enabled, the Adaptive Hash Index. When a query runs, InnoDB tries to find the required data pages in the buffer pool first. If a page isn’t there, it has to go to disk, and that’s the difference between microseconds and milliseconds.
How big should it be?
On a dedicated database host, setting the buffer pool size to 60–75% of total RAM is a sensible starting point. That leaves room for the operating system and MySQL's other memory consumers (per-connection buffers, the smaller caches, and so on).
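The arithmetic is simple enough to script. A minimal sketch (the function name and the 64 GB example host are illustrative, not from any real deployment):

```python
def buffer_pool_suggestion(total_ram_gb, low=0.60, high=0.75):
    """Rule-of-thumb innodb_buffer_pool_size range for a dedicated DB host."""
    return total_ram_gb * low, total_ram_gb * high

lo, hi = buffer_pool_suggestion(64)
print(f"64 GB host -> roughly {lo:.0f}-{hi:.0f} GB for innodb_buffer_pool_size")
```

Treat the output as a starting point, then adjust based on the hit rate you measure below.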
How do you know it’s working?
Monitor the buffer pool hit rate. You can check the underlying counters by running SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
- Calculation: Hit Rate ≈ 1 - (Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests)
- A healthy hit rate is in the high 99s. If it's consistently lower, your buffer pool may be too small, or your working set simply doesn't fit in memory.
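That formula is easy to turn into a small helper. A sketch, with the counter names taken from SHOW GLOBAL STATUS and purely illustrative sample values:

```python
def buffer_pool_hit_rate(reads, read_requests):
    """Hit rate from the two InnoDB status counters.

    reads         -> Innodb_buffer_pool_reads (logical reads that missed
                     the buffer pool and had to go to disk)
    read_requests -> Innodb_buffer_pool_read_requests (all logical reads)
    """
    if read_requests == 0:
        return 1.0  # no traffic yet; nothing to report
    return 1 - reads / read_requests

# Illustrative counter values, not from a real server:
rate = buffer_pool_hit_rate(reads=12_000, read_requests=9_500_000)
print(f"hit rate: {rate:.4%}")  # high 99s -> healthy
```

In practice, compute this over deltas between two samples rather than the since-startup totals, so a long-forgotten cold start doesn't skew the number.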
Cold starts hurt
Teach MySQL what was hot before restart:
- innodb_buffer_pool_dump_at_shutdown=ON
- innodb_buffer_pool_load_at_startup=ON
Why your full table scan slowed everything down
The buffer pool uses an LRU list with a “young” and “old” area. New pages enter near the middle; frequently used pages bubble up. If you stream a massive scan, you don’t want it evicting your hot OLTP pages. That’s what this is for:
innodb_old_blocks_time=1000 # ms a newly read page must sit in the old sublist before an access can promote it to the young sublist
1000 is the default. Leave it there unless you've proved your workload is different.
Pro tips:
- Use buffer pool dump/load to “warm up” MySQL after restart.
- Protect OLTP workloads from full table scans with innodb_old_blocks_time.
- Avoid double caching by letting MySQL own the memory (innodb_flush_method=O_DIRECT).
The Shortcut Finder: Adaptive Hash Index
The Adaptive Hash Index (AHI) is an in-memory hash index that InnoDB automatically builds on frequently accessed B-tree ranges. For point lookups, it’s incredibly fast.
- Default: Enabled by default (innodb_adaptive_hash_index=ON).
- When to turn it off: AHI is often a performance boon, but under very high concurrency with mixed workloads it can become a source of latch contention. Only disable it after profiling shows AHI itself is the bottleneck.
Making Connections and Metadata Cheaper
Thread Cache:
Reuses worker threads instead of creating/destroying per connection. Watch Threads_created — if it climbs fast, bump thread_cache_size.
Table Open / Definition Cache:
Saves table metadata and file handles. Watch Opened_tables and Opened_table_definitions — if they rise during steady load, increase the cache sizes.
Host Cache:
Saves DNS lookups for clients. Handy in stable environments, but flush it when troubleshooting weird connection bursts (FLUSH HOSTS;, or TRUNCATE TABLE performance_schema.host_cache on 8.0, where FLUSH HOSTS is deprecated).
These don’t make queries “faster,” but they make spikes smoother.
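Watching these counters is just diffing two SHOW GLOBAL STATUS samples. A sketch, where the sample dicts and their values are hypothetical:

```python
def cache_pressure(sample_t0, sample_t1):
    """Growth of cache-miss counters between two status samples.

    Steadily climbing deltas under constant load suggest the matching
    cache (thread_cache_size, table_open_cache, table_definition_cache)
    is undersized. Counter names mirror SHOW GLOBAL STATUS output.
    """
    watched = ("Threads_created", "Opened_tables", "Opened_table_definitions")
    return {name: sample_t1[name] - sample_t0[name] for name in watched}

# Hypothetical samples taken a minute apart:
t0 = {"Threads_created": 480, "Opened_tables": 2100, "Opened_table_definitions": 900}
t1 = {"Threads_created": 495, "Opened_tables": 2100, "Opened_table_definitions": 900}
print(cache_pressure(t0, t1))  # Threads_created grew -> consider a larger thread_cache_size
```

A delta of zero under steady load means the cache is big enough; a delta that keeps climbing is your signal to grow it.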
About That Old Query Cache
The Query Cache (pre–MySQL 8.0) cached entire SELECT results. It sounded great, but in practice:
- Any write invalidated entries.
- On write-heavy systems, it became a global bottleneck.
That’s why it’s gone in MySQL 8.0. If you’re still on 5.7, turn it off and handle caching in the application or a proxy.
Application-Level Caching with Redis/Memcached
This is where the biggest performance gains often come from. By using a key-value store like Redis or Memcached, you can cache the results of frequently run queries or even entire API responses.
The Cache-Aside Pattern:
Your application first checks the cache for data. If it’s there (a “cache hit”), it serves the data directly. If not (a “cache miss”), it fetches the data from the database, serves it, and then stores it in the cache for next time.
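The pattern above fits in a few lines. A minimal sketch: a TTL'd dict stands in for Redis/Memcached, and the `loader` callback stands in for the database query (all names here are hypothetical, not a real client API):

```python
import time

class CacheAside:
    """Cache-aside sketch: check the cache, fall back to the loader on a miss."""

    def __init__(self, loader, ttl_seconds=60):
        self.loader = loader     # stand-in for the real DB query
        self.ttl = ttl_seconds
        self.store = {}          # key -> (value, expires_at)

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]                       # cache hit
        value = self.loader(key)                  # cache miss: ask the DB
        self.store[key] = (value, time.monotonic() + self.ttl)
        return value

db_calls = []
cache = CacheAside(loader=lambda k: db_calls.append(k) or f"row-for-{k}")
cache.get("user:42")   # miss -> hits the "database"
cache.get("user:42")   # hit  -> served from memory
print(len(db_calls))   # 1: the second read never touched the DB
```

With a real Redis client the shape is identical; only the dict operations become GET/SETEX calls over the network.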
Why it’s a game-changer:
This offloads your database, reduces latency for your users, and scales horizontally with your application tier.
The Proxy Layer: ProxySQL
Tools like ProxySQL act as a smart proxy between your application and MySQL. They can be configured to cache specific queries and direct traffic based on your rules, providing another layer of caching and load balancing without changing your application code.
Where Teams Usually Go Wrong (So You Don’t)
Assuming “more cache” fixes bad queries:
Caching is not a magic bullet. If your query is doing a full table scan, or crawling a wide, non-selective index, you're better off fixing the query and the indexing first. Caching a bad query is just a band-aid.
Ignoring replication realities:
If you read from replicas, your cache might serve data that’s “older than master” due to replication lag. Decide if your application requires read-your-writes semantics, and if so, handle it appropriately (e.g., read from the primary for recent writes).
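One common way to get read-your-writes is to pin a session's reads to the primary for a short window after it writes. A sketch under simplifying assumptions (a fixed lag window and in-process session tracking, both hypothetical):

```python
import time

class ReadRouter:
    """After a session writes, route its reads to the primary for
    `lag_window` seconds; otherwise let them go to a replica."""

    def __init__(self, lag_window=2.0):
        self.lag_window = lag_window
        self.last_write = {}          # session_id -> timestamp

    def record_write(self, session_id):
        self.last_write[session_id] = time.monotonic()

    def target_for_read(self, session_id):
        wrote_at = self.last_write.get(session_id)
        if wrote_at and time.monotonic() - wrote_at < self.lag_window:
            return "primary"          # replica may not have the write yet
        return "replica"

router = ReadRouter()
print(router.target_for_read("s1"))  # replica: no recent write
router.record_write("s1")
print(router.target_for_read("s1"))  # primary: inside the lag window
```

The fixed window is a blunt instrument; tracking actual replication positions (GTIDs) is more precise, but this captures the idea.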
No invalidation plan:
A cache without a strategy for invalidation is a ticking time bomb of stale data. Relying on TTLs alone is not enough for frequently changing data. Implement event-based invalidation or versioning keys to ensure data freshness.
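Versioned keys are a cheap way to implement that: instead of hunting down and deleting every cached entry for an entity, bump its version number so stale keys simply stop being looked up. A sketch, backed by a dict here (the same scheme works with Redis); class and key names are illustrative:

```python
class VersionedCache:
    """Invalidate by version bump: old keys become unreachable, not deleted."""

    def __init__(self):
        self.store = {}
        self.versions = {}            # entity -> current version

    def key(self, entity, suffix):
        v = self.versions.get(entity, 0)
        return f"{entity}:v{v}:{suffix}"

    def put(self, entity, suffix, value):
        self.store[self.key(entity, suffix)] = value

    def get(self, entity, suffix):
        return self.store.get(self.key(entity, suffix))

    def invalidate(self, entity):
        self.versions[entity] = self.versions.get(entity, 0) + 1

cache = VersionedCache()
cache.put("user:42", "profile", {"name": "Ada"})
print(cache.get("user:42", "profile"))   # {'name': 'Ada'}
cache.invalidate("user:42")              # one bump stales every user:42 key
print(cache.get("user:42", "profile"))   # None: next read repopulates
```

The orphaned old-version entries are left for the store's TTL or eviction policy to clean up, which is exactly why this pairs well with a modest TTL rather than replacing it.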
The Tuning Playbook: A Short, Honest Version
Profile First:
Before you change a single setting, get a baseline. Track the buffer pool hit rate, top slow queries, disk I/O, and app-level cache stats.
Right-size the Buffer Pool:
This is the most important lever. Fix this first before trying more exotic tweaks.
Stabilize the Overhead:
Tune your thread and table caches to smooth out the mechanics of new connections and table access.
Move Read Traffic Out:
Implement a cache-aside strategy with Redis or Memcached for your hottest read endpoints.
Prove It:
After making a change, compare your new metrics to the baseline. If it didn’t move the needle, revert and rethink your approach.
Wrapping Up
Caching in MySQL isn’t a single switch you flip. It’s layers working together:
- Inside MySQL: InnoDB Buffer Pool, Adaptive Hash Index, thread/table/host caches.
- In the OS: Page cache (but let MySQL lead).
- Outside MySQL: Redis, Memcached, ProxySQL, web layer caches.
When tuned thoughtfully, these layers transform MySQL from a disk-bound bottleneck into a memory-first workhorse.
And the best part? Once you’ve set it up right, your “slow query” war room becomes boring. No more firefighting. Just flat graphs and fast dashboards. That’s when you know caching is really working for you.