Some applications have an extensive query that takes several seconds to respond. Some people say a few seconds isn't a big problem. But say you have one heavy query that takes 2s — remember, that 2s is measured when your server is under low load. When the server gets a few hundred hits to that query, it no longer takes 2s; the waits pile up on top of each other. If this continues, your server will go down.
The first thing people think of is caching, using nginx for example. But remember that the nginx cache is based on the URL, which is good for static content. Modern applications use friendly URLs, so even though the URLs are different, in the backend a lot of the queries are the same.
My solution is to cache the query itself. This only works with SELECT queries. How does it work? We create a function mysql_select_query($query) and measure how long the query takes. If it takes over 1s, we cache the result to local disk, keyed by a hash of the query string (using md5). The next time the query runs, we check whether it is already in the cache; if it is, we get the result from the local cache file.
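The original function is PHP (mysql_select_query), but the idea can be sketched in Python. Everything here is an assumption for illustration: the cache directory name, the pickle serialization, the run_query placeholder standing in for the real database call, and the one-hour freshness check (which the author actually handles with a cronjob instead).

```python
import hashlib
import os
import pickle
import time

CACHE_DIR = "query_cache"   # assumed cache location on local disk
SLOW_THRESHOLD = 1.0        # seconds; only queries slower than this get cached
CACHE_TTL = 3600            # ignore cache entries older than an hour

def run_query(query):
    """Placeholder for the real database call (returns a list of rows)."""
    raise NotImplementedError

def cached_select(query, db_call=run_query):
    """Run a SELECT query, serving previously-slow queries from a disk cache."""
    # Key the cache file by an md5 hash of the query text.
    key = hashlib.md5(query.encode()).hexdigest()
    path = os.path.join(CACHE_DIR, key + ".cache")

    # If a fresh cache entry exists, return its contents instead of querying.
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < CACHE_TTL:
        with open(path, "rb") as f:
            return pickle.load(f)

    # Otherwise run the query and time it.
    start = time.monotonic()
    result = db_call(query)
    elapsed = time.monotonic() - start

    # Only slow queries are worth caching to disk.
    if elapsed > SLOW_THRESHOLD:
        os.makedirs(CACHE_DIR, exist_ok=True)
        with open(path, "wb") as f:
            pickle.dump(result, f)
    return result
```

Because the key is a hash of the query string, two different friendly URLs that issue the same SELECT behind the scenes will hit the same cache entry — which is exactly the case URL-based caching misses.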
We set up a cronjob to clear all cache files older than an hour.
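The hourly cleanup can be done with a standard find command in cron; the cache path and the .cache extension here are assumptions, not the author's actual paths.

```shell
# Crontab entry (assumed cache path): run at the top of every hour.
#   0 * * * * find /var/www/app/query_cache -name '*.cache' -mmin +60 -delete
# The command deletes cache files last modified more than 60 minutes ago:
mkdir -p query_cache    # ensure the (assumed) cache directory exists
find query_cache -name '*.cache' -mmin +60 -delete
```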
Our solution is pretty simple, but it works perfectly. We saw our server load drop significantly during peak hours.