mariadb cache

Date: 2022-09-04 08:30:25

Since MariaDB Galera Cluster versions 5.5.40 and 10.0.14 you can use the query cache. Earlier versions do NOT support the query cache.

See https://mariadb.com/kb/en/mariadb/documentation/optimization-and-tuning/buffers-caches-and-threads/query-cache/#limitations

http://www.fromdual.com/regularly-flushing-mysql-query-cache
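
Before looking at per-query statistics, here is a minimal sketch for checking that the query cache is available and switched on at all (standard server variables only; the 64 MB size is just an example, and on some versions query_cache_type has to be set in my.cnf before startup rather than at runtime):

-- Is the query cache compiled in, and what is it currently set to?
SHOW VARIABLES LIKE 'have_query_cache';
SHOW VARIABLES LIKE 'query_cache%';   -- query_cache_type, query_cache_size, ...

-- Give it some memory and enable it (example values)
SET GLOBAL query_cache_size = 67108864;   -- 64 MB
SET GLOBAL query_cache_type = ON;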

MariaDB 5.5.31 and the incredible new query cache information plugin

 

Hi guys, I was reading about the new query cache information plugin from Roland Bouman, now shipped by default in MariaDB 5.5.31.
This is a very old feature request at MySQL (27 Oct 2006 12:31):
http://bugs.mysql.com/bug.php?id=23714

And a more recent one at MariaDB (thanks, Sergei, for reading my MDEV =) ) (2012-05-04 01:22):
https://mariadb.atlassian.net/browse/MDEV-249

Well, this is a very nice piece of code...
Every time I google "mysql performance", "mysql cache", etc., I get something like this:
http://www.cyberciti.biz/tips/how-does-query-caching-in-mysql-works-and-how-to-find-find-out-my-mysql-query-cache-is-working-or-not.html
or this:
http://*.com/questions/4139936/query-cache-efficiency

But... what do global statistics tell you about your specific query? How
do you know whether your query is cached? You can check via status variables,
but they cannot tell you whether the cache has a good hit rate for a single
query, or for a single database.
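
To illustrate the point, here is a sketch of everything the server gives you out of the box: a handful of Qcache_* counters and the usual approximate server-wide hit ratio (the formula is the common rule of thumb, not something the server reports itself):

-- Server-wide counters only: hits, inserts, prunes, free memory...
SHOW GLOBAL STATUS LIKE 'Qcache%';

-- Rough global hit ratio: hits / (hits + SELECTs that actually hit the tables)
SELECT h.VARIABLE_VALUE / (h.VARIABLE_VALUE + c.VARIABLE_VALUE) AS global_hit_ratio
FROM information_schema.GLOBAL_STATUS h
JOIN information_schema.GLOBAL_STATUS c
  ON h.VARIABLE_NAME = 'QCACHE_HITS' AND c.VARIABLE_NAME = 'COM_SELECT';

One ratio for the whole server: it cannot tell you whether that one expensive report query is being served from the cache or not.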

Well, the MariaDB plugin tells you some of what you need... but less
information than would be useful for good per-query / per-table statistics.
You can read the query text and the size the query occupies in cache memory.
What does that tell you about "does my query have a high hit rate?"? Hmm...
it tells you the query is inside the query cache, and that's the only information...
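
For reference, this is roughly how you use the stock plugin; the library name query_cache_info and the QUERY_CACHE_INFO columns below are my reading of the plugin shipped with 5.5.31, so treat this as a sketch and check your own build:

-- Load the query cache information plugin
INSTALL SONAME 'query_cache_info';

-- The stock table: statement text, schema and the space the result occupies, not much more
SELECT STATEMENT_SCHEMA, STATEMENT_TEXT, RESULT_BLOCKS_SIZE_USED
FROM information_schema.QUERY_CACHE_INFO;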

After reading more and studying the MariaDB source code, I created a patch, attached at MDEV-4581 (https://mariadb.atlassian.net/browse/MDEV-4581). OK, MariaDB guys, I still don't know how to use Launchpad to contribute to MariaDB, but you can use the patch =)

Well, I will not explain every statistic; let's just show the results for 4 queries in the cache:

SELECT
  -- note: p_select_rows_read appears twice, matching the output below
  query_hits / (SELECT MAX(query_hits) FROM information_schema.query_cache_queries) * 100 AS p_query_hit,
  select_expend_time_ms / (SELECT MAX(select_expend_time_ms) FROM information_schema.query_cache_queries) * 100 AS p_select_expend_time_ms,
  select_rows_read / (SELECT MAX(select_rows_read) FROM information_schema.query_cache_queries) * 100 AS p_select_rows_read,
  result_found_rows / (SELECT MAX(result_found_rows) FROM information_schema.query_cache_queries) * 100 AS p_result_found_rows,
  select_rows_read / (SELECT MAX(select_rows_read) FROM information_schema.query_cache_queries) * 100 AS p_select_rows_read,
  `ENTRY_POSITION_IN_CACHE`,
  `STATEMENT_SCHEMA`,
  `STATEMENT_TEXT`,
  `RESULT_FOUND_ROWS`, `QUERY_HITS`, `SELECT_EXPEND_TIME_MS`,
  `SELECT_LOCK_TIME_MS`, `SELECT_ROWS_READ`, `TABLES`,
  FROM_UNIXTIME(`QUERY_INSERT_TIME`) AS time,
  `RESULT_LENGTH`, `RESULT_BLOCKS_COUNT`,
  `RESULT_BLOCKS_SIZE`, `RESULT_BLOCKS_SIZE_USED`, `RESULT_TABLES_TYPE`,
  `FLAGS_CLIENT_LONG_FLAG`, `FLAGS_CLIENT_PROTOCOL_41`,
  `FLAGS_PROTOCOL_TYPE`, `FLAGS_MORE_RESULTS_EXISTS`, `FLAGS_IN_TRANS`,
  `FLAGS_AUTOCOMMIT`, `FLAGS_PKT_NR`, `FLAGS_CHARACTER_SET_CLIENT_NUM`,
  `FLAGS_CHARACTER_SET_RESULTS_NUM`, `FLAGS_COLLATION_CONNECTION_NUM`,
  `FLAGS_LIMIT`, `FLAGS_SQL_MODE`, `FLAGS_MAX_SORT_LENGTH`,
  `FLAGS_GROUP_CONCAT_MAX_LEN`, `FLAGS_DIV_PRECISION_INCREMENT`,
  `FLAGS_DEFAULT_WEEK_FORMAT`
FROM `information_schema`.`QUERY_CACHE_QUERIES`
ORDER BY statement_schema, `QUERY_HITS`;

You can see the result if you have a very very big monitor =)

p_query_hit p_select_expend_time_ms p_select_rows_read p_result_found_rows p_select_rows_read ENTRY_POSITION_IN_CACHE STATEMENT_SCHEMA STATEMENT_TEXT RESULT_FOUND_ROWS QUERY_HITS SELECT_EXPEND_TIME_MS SELECT_LOCK_TIME_MS SELECT_ROWS_READ TABLES time RESULT_LENGTH RESULT_BLOCKS_COUNT RESULT_BLOCKS_SIZE RESULT_BLOCKS_SIZE_USED RESULT_TABLES_TYPE FLAGS_CLIENT_LONG_FLAG FLAGS_CLIENT_PROTOCOL_41 FLAGS_PROTOCOL_TYPE FLAGS_MORE_RESULTS_EXISTS FLAGS_IN_TRANS FLAGS_AUTOCOMMIT FLAGS_PKT_NR FLAGS_CHARACTER_SET_CLIENT_NUM FLAGS_CHARACTER_SET_RESULTS_NUM FLAGS_COLLATION_CONNECTION_NUM FLAGS_LIMIT FLAGS_SQL_MODE FLAGS_MAX_SORT_LENGTH FLAGS_GROUP_CONCAT_MAX_LEN FLAGS_DIV_PRECISION_INCREMENT FLAGS_DEFAULT_WEEK_FORMAT
null 100 1,7857 3,5714 1,7857 0 dev_cadastros SELECT SQL_CACHE SQL_SMALL_RESULT moeda FROM moedas 1 0 1 0 1 `dev_cadastros`.`moedas` 2013-05-25 22:35:52.000 91 1 512 155 0 1 1 0 0 0 1 1 8 8 8 -1 33554434 1024 1024 4 0
null 100 100 100 100 1 dev_cadastros SELECT indice,nome,grupo FROM analise_credito_indices ORDER BY grupo,ordem 28 0 1 1 56 `dev_cadastros`.`analise_credito_indices` 2013-05-25 22:35:52.000 1234 1 1304 1298 0 1 1 0 0 0 1 1 8 8 8 -1 33554434 1024 1024 4 0
null 0 1,7857 3,5714 1,7857 2 shared SELECT
SQL_CACHE SQL_SMALL_RESULT
inteiro,inteiros,centavo,centavos,decimais,precisao_fatores,cod_bcb,ultima_alteracao,nome
FROM moedas_atual WHERE moeda="R$"
1 0 0 0 1 `shared`.`moedas_atual` 2013-05-25 22:35:52.000 759 1 824 823 0 1 1 0 0 0 1 1 8 8 8 -1 33554434 1024 1024 4 0
null 0 1,7857 3,5714 1,7857 3 shared SELECT SQL_CACHE SQL_SMALL_RESULT fator_venda,fator_compra,ultima_alteracao,nome FROM moedas_atual WHERE moeda="R$" 1 0 0 0 1 `shared`.`moedas_atual` 2013-05-25 22:35:52.000 393 1 512 457 0 1 1 0 0 0 1 1 8 8 8 -1 33554434 1024 1024 4 0

------------
OK, you don't have a 180" monitor? Here are the columns:

p_query_hit
p_select_expend_time_ms
p_select_rows_read
p_result_found_rows
p_select_rows_read
ENTRY_POSITION_IN_CACHE
STATEMENT_SCHEMA
STATEMENT_TEXT
RESULT_FOUND_ROWS
QUERY_HITS
SELECT_EXPEND_TIME_MS
SELECT_LOCK_TIME_MS
SELECT_ROWS_READ
TABLES
time (QUERY_INSERT_TIME, stored as a Unix timestamp and converted with from_unixtime)
RESULT_LENGTH
RESULT_BLOCKS_COUNT
RESULT_BLOCKS_SIZE
RESULT_BLOCKS_SIZE_USED
RESULT_TABLES_TYPE
FLAGS_CLIENT_LONG_FLAG
FLAGS_CLIENT_PROTOCOL_41
FLAGS_PROTOCOL_TYPE
FLAGS_MORE_RESULTS_EXISTS
FLAGS_IN_TRANS
FLAGS_AUTOCOMMIT
FLAGS_PKT_NR
FLAGS_CHARACTER_SET_CLIENT_NUM
FLAGS_CHARACTER_SET_RESULTS_NUM
FLAGS_COLLATION_CONNECTION_NUM
FLAGS_LIMIT
FLAGS_SQL_MODE
FLAGS_MAX_SORT_LENGTH
FLAGS_GROUP_CONCAT_MAX_LEN
FLAGS_DIV_PRECISION_INCREMENT
FLAGS_DEFAULT_WEEK_FORMAT

------------
What more do you need now?

You can see: how many hits each query has
How much time it takes to execute when the query cache entry is "lost"
How many rows it reads to produce the result, and many, many other pieces of information
Which query cache entry is the oldest (a small example query follows below)
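
For example, a minimal sketch against the patched table (assuming the QUERY_CACHE_QUERIES columns listed above) that ranks the entries by hits and puts the oldest ones first within each hit count:

SELECT
  FROM_UNIXTIME(QUERY_INSERT_TIME) AS inserted_at,
  QUERY_HITS,
  SELECT_EXPEND_TIME_MS,
  SELECT_ROWS_READ,
  STATEMENT_SCHEMA,
  LEFT(STATEMENT_TEXT, 80) AS statement_preview
FROM information_schema.QUERY_CACHE_QUERIES
ORDER BY QUERY_HITS DESC, QUERY_INSERT_TIME ASC
LIMIT 20;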

Hmm, do you want to know which cached queries touch table X?
select * from information_schema.query_cache_queries where tables like '%`my_database`.`my_table`%';
And you get all cached queries for that table.
You can do many things now =)
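
Another sketch under the same assumptions: entries that were inserted but never hit again, which are the obvious candidates for SELECT SQL_NO_CACHE or for rethinking the cache size:

-- Cached result sets nobody ever reused: they cost memory and invalidation work for nothing
SELECT STATEMENT_SCHEMA,
       LEFT(STATEMENT_TEXT, 80) AS statement_preview,
       RESULT_BLOCKS_SIZE_USED,
       FROM_UNIXTIME(QUERY_INSERT_TIME) AS inserted_at
FROM information_schema.QUERY_CACHE_QUERIES
WHERE QUERY_HITS = 0
ORDER BY RESULT_BLOCKS_SIZE_USED DESC;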

Now, we have a nice (very nice) tool to improve query cache statistics =)

Thanks to Sergei from MariaDB for the help with the MariaDB source code, and many, many thanks to Roland Bouman for this nice piece of code.

New life to query cache!
