Channel: MySQL Forums - Performance

Cardinality Problem (3 replies)

Hi Community,
I have a large table with over 60,000,000 rows, about 30,000 inserts per day, and many updates.

One of my indexes sporadically stops being used.

I checked the cardinality with "show index from ..." every 5 minutes.
Normally the value is over 10,000,000, but 1-5 times a day it drops below 20.

When that happens, the index is ignored and execution time becomes very slow.



MySQL Version: 5.5.8, Engine: InnoDB
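
For reference, InnoDB statistics in 5.5 are estimated from random page sampling and can be refreshed by hand when they drift; a minimal sketch (table, index, and column names are placeholders):

-- Re-collect the index statistics (takes a brief lock in 5.5):
ANALYZE TABLE mytable;

-- Inspect the recomputed cardinality estimates:
SHOW INDEX FROM mytable;

-- SHOW INDEX itself re-samples statistics when this is ON (the 5.5 default),
-- which may explain estimates that jump between observations:
SET GLOBAL innodb_stats_on_metadata = OFF;

-- Stopgap for a single query: bypass the bad estimate entirely.
SELECT id FROM mytable FORCE INDEX (my_index) WHERE my_col = 42;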


Regards, Katja

Write operations very slow in InnoDB engine compared to MyISAM (1 reply)

Hi,
I have created two test tables, one with the InnoDB engine and the other with MyISAM:

// Uses java.sql.Connection, java.sql.Statement, java.sql.SQLException
public void createTable(Connection con, String tableName, String dbEngine) {
    Statement stmt = null;
    String query = "CREATE TABLE " + tableName + " (\n" +
            "`id` int(7) NOT NULL AUTO_INCREMENT,\n" +
            "`Model` varchar(100) DEFAULT NULL,\n" +
            "`Make` varchar(100) DEFAULT NULL,\n" +
            "`Blade` varchar(100) DEFAULT NULL,\n" +
            "PRIMARY KEY (`id`)\n" +
            ") ENGINE=" + dbEngine + ";";
    try {
        stmt = con.createStatement();
        stmt.execute(query);
        System.out.println("Table " + tableName + " created successfully with " + dbEngine + " engine");
    } catch (SQLException sql) {
        sql.printStackTrace();
    }
}
The table name and DB engine are passed as parameters. I then insert 10,000 rows:

public void insertRecords(Connection con, String tableName) {
    String query;
    Statement stmt = null;
    int max = 10000; // number of rows to insert
    long startTime = System.currentTimeMillis();
    try {
        stmt = con.createStatement();
        for (int i = 1; i < max; i++) {
            query = "insert into " + tableName + " values (" + i + ", 'Model" + i + "', 'Make" + i + "', 'Blade" + i + "')";
            stmt.executeUpdate(query); // one implicit commit per row under autocommit
        }
    } catch (SQLException sql) {
        sql.printStackTrace();
    } finally {
        closeResources(stmt); // helper that closes the Statement
    }
    long endTime = System.currentTimeMillis();
    long diffTime = endTime - startTime;
    System.out.println("Insert query time taken: " + diffTime / 1000 + " sec");
}

InnoDB engine returns: insert query time taken: 22 sec
Time taken for select query: 1 sec

MyISAM engine returns: insert query time taken: 0 sec

The test was done on 64-bit Windows with 8 GB of memory; max CPU usage was around 58%.
Please advise how to tune the InnoDB engine, as my tables need to be transactional so that I can commit or roll back based on the transaction results.
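
The likely explanation is that with autocommit on, InnoDB flushes its log to disk after every single-row INSERT, while MyISAM does not; committing the whole loop as one transaction (con.setAutoCommit(false) before the loop, con.commit() after it, in JDBC) usually closes most of the gap. The equivalent server-side knobs, as a sketch (values illustrative):

-- Batch many inserts into one transaction instead of ~10,000 implicit commits:
SET autocommit = 0;
-- ... run the INSERT loop here ...
COMMIT;

-- Or trade a little durability for speed: flush the redo log roughly once
-- per second instead of at every commit.
SET GLOBAL innodb_flush_log_at_trx_commit = 2;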

Thanks much in advance.

Muralidharan N

UPDATE .. WHERE or INSERT .. ON DUPLICATE KEY UPDATE? (no replies)

Hello all.

I was wondering about the potential gains of using UPDATE ... WHERE ... instead of IODKU (INSERT ... ON DUPLICATE KEY UPDATE) requests, so I performed some quick tests, and I am quite puzzled that the IODKU statements were (a little) faster than the UPDATE ... WHERE ones.

I am perfectly aware that my tests are not representative at all, but I was expecting the opposite.

So, my question: in general, when we know the row already exists, should we use an UPDATE ... WHERE statement or an INSERT ... ON DUPLICATE KEY UPDATE one? (The two forms are sketched below.)
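
For concreteness, the two forms being compared look like this (hypothetical counters table with primary key id):

-- Plain update, assuming the row for id = 1 already exists:
UPDATE counters SET hits = hits + 1 WHERE id = 1;

-- IODKU: inserts the row if it is missing, otherwise applies the UPDATE clause:
INSERT INTO counters (id, hits) VALUES (1, 1)
ON DUPLICATE KEY UPDATE hits = hits + 1;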

Thanks for your attention,
Daniel

innodb_checksum_algorithm strict_crc32 slow response (5 replies)

I have two servers, one Master and one Slave, executing the same query, and the response is slow on the Master server.

MySQL version: 5.6.21 on the Slave and 5.6.14 (enterprise edition) on the Master.

On the Master:

innodb_checksum_algorithm=strict_crc32

Slow log and binlog are enabled.

Query response time: 3.15 sec

On the Slave:

innodb_checksum_algorithm=innodb

Slow log and binlog are disabled.

Query response time: 4.75 sec

Is innodb_checksum_algorithm the cause of the difference?
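
For reference, the setting is dynamic in 5.6, so the hypothesis can be tested directly by switching it and re-running the query (a sketch):

SHOW GLOBAL VARIABLES LIKE 'innodb_checksum_algorithm';
SET GLOBAL innodb_checksum_algorithm = 'innodb';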

25,000+ rows - MySQL SELECT consecutive numbers performance issue (2 replies)

I have tried to be as explicit as I can. Below, the query is explained in detail.

------

What do I want to obtain?
- I want to SELECT start_number and end_number for each set of consecutive numbers

Example:

I have: 1,2,3,5,7,8,9,10
This will return: (1,3), (5), (7), (8,10)

So my result will look something like :

[
'start_number' => 1,
'end_number' => 3
]
.............
.........

The problem is that the table contains a large number of rows (25,000+ ... for now) and it takes ages to execute. (I have limited the number of results returned, but that still didn't fix the execution time.)


SELECT
l.control_series AS start_control_series,
l.card_series AS start_card_series,
(SELECT
MIN(a.control_series) AS id
FROM
cards AS a
LEFT OUTER JOIN
cards AS b ON a.control_series = b.control_series - 1
WHERE
b.control_series IS NULL
AND a.control_series >= l.control_series) AS end_control_series,
(SELECT
c.card_series
FROM
cards AS c
WHERE
end_control_series = c.control_series) AS end_card_series
FROM
cards AS l
LEFT OUTER JOIN
cards AS r ON r.control_series = l.control_series - 1
WHERE
r.control_series IS NULL
ORDER BY l.product_id ASC
LIMIT 0, 10;


It will return the following result

[
'start_control_series' => "110",
'start_card_series' => '440',
'end_control_series' => '114',
'end_card_series' => '444'
]
.................................
.....................


The above SELECT, executed on a table of 25,000 rows, makes MySQL Workbench stop working... even if I set LIMIT 1.


Below is a simplified version of the above SELECT, which does return an answer if I use LIMIT 1 and remove 2 columns:



SELECT
l.control_series AS start_control_series,
(SELECT
MIN(a.control_series) AS id
FROM
cards AS a
LEFT OUTER JOIN
cards AS b ON a.control_series = b.control_series - 1
WHERE
b.control_series IS NULL
AND a.control_series >= l.control_series) AS end_control_series
FROM
cards AS l
LEFT OUTER JOIN
cards AS r ON r.control_series = l.control_series - 1
WHERE
r.control_series IS NULL
LIMIT 1;


Answer returned:

[
'start_control_series' => "110",
'end_control_series' => '114',
]


MySQL Workbench response: 1 row(s) returned, 304.829 sec / 0.000 sec (as you can see, it takes ages).


In the "WHERE r.control_series IS NULL" clause I will add some filters: "AND product_id = 18" and "HAVING 110 BETWEEN start_control_series AND end_control_series".

- I'm interested in returning 10 results per page (LIMIT 1,10) in a reasonable amount of time and displaying them in a table (jQuery DataTables).
- Will the HAVING clause affect the SELECT statement's performance?
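
As an aside, this gaps-and-islands problem can also be solved in a single pass with user variables (pre-8.0), avoiding the correlated self-joins entirely; a minimal sketch, assuming control_series is the column that must be consecutive:

SELECT MIN(control_series) AS start_control_series,
       MAX(control_series) AS end_control_series
FROM (
    SELECT control_series,
           -- Rows whose value minus a running row number is the same
           -- belong to the same consecutive run.
           control_series - (@rn := @rn + 1) AS grp
    FROM cards, (SELECT @rn := 0) init
    ORDER BY control_series
) runs
GROUP BY grp
ORDER BY start_control_series;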


Thanks in advance; hopefully someone can help me.

COUNT_STAR, SUM_TIMER_WAIT (no replies)

Hi,

SELECT THREAD_ID, threads.NAME, SUM(COUNT_STAR) AS Tcount,
       SUM(SUM_TIMER_WAIT) AS Ttime
FROM performance_schema.events_waits_summary_by_thread_by_event_name
INNER JOIN performance_schema.threads USING (THREAD_ID)
WHERE threads.NAME LIKE 'thread/sql/slave%'
GROUP BY THREAD_ID, threads.NAME;

If Tcount is 0 and Ttime is 0, what does it mean?
Do I need more cores to use all the slave threads?

How can I analyze the use of the replication slave threads?
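
For what it's worth, all-zero counts often just mean the wait instruments or their consumers are disabled; they can be checked and switched on at runtime (a sketch):

-- Which wait instruments are currently off?
SELECT NAME, ENABLED, TIMED
FROM performance_schema.setup_instruments
WHERE NAME LIKE 'wait/%' AND ENABLED = 'NO'
LIMIT 10;

-- Enable the wait instruments and the wait consumers:
UPDATE performance_schema.setup_instruments
SET ENABLED = 'YES', TIMED = 'YES'
WHERE NAME LIKE 'wait/%';

UPDATE performance_schema.setup_consumers
SET ENABLED = 'YES'
WHERE NAME LIKE '%waits%';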

What does Table_open_cache_overflows=0 mean? (no replies)

Hi,

When the Table_open_cache_overflows status variable is 0, what should I tune to remove a performance bottleneck?
Should I decrease the value of table_open_cache, or which of these variables do I have to change:
table_definition_cache, table_open_cache, table_open_cache_instances?
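
For reference, the related status counters can be read together to judge pressure on the table cache (a sketch):

SHOW GLOBAL STATUS LIKE 'Table_open_cache%';
-- Opened_tables growing quickly relative to uptime also signals a cache
-- that is too small:
SHOW GLOBAL STATUS LIKE 'Opened_tables';
SHOW GLOBAL VARIABLES LIKE 'table_open_cache%';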

lockfree approach to creating a single unique row without unique index (2 replies)

Hi,

I want to maintain a table of all unique URLs that I have ever encountered and associate each with a surrogate id. I don't want to use pessimistic locks. The URL is up to 5000 chars, too big to put a unique index on, so instead, aside from the primary key (id), I put a NON-unique index on a "uniquifier" column, which is a SHA-256 checksum of the URL (64 chars).

My game plan is to have a getCreate function that creates a candidate row initially marked isPermanent=false. Since it is remotely possible that several competing threads are trying to get the new surrogate id for the same URL, those threads could insert duplicate candidates for the same url/checksum. The goal is, with an update statement free of pessimistic locks, to have only one thread succeed in marking exactly one of those new rows "permanent", and then each of the threads cleans up the garbage rows.

I tried this update, and it failed because MySQL does not allow it, with "You can't specify target table 'LyUrl' for update in FROM clause":

update LyUrl u
set u.isPermanent = true
where u.uniquifier = :uniquifier and u.isPermanent = false and u.url = :url
and not exists (
from LyUrl uu
where uu.uniquifier = :uniquifier and uu.isPermanent = true and uu.url = :url
)
and u.id = (
select min(uuu.id)
from LyUrl uuu
where uuu.uniquifier = :uniquifier and uuu.url = :url
)

(NOTE: this is actually Hibernate syntax in the not exists.)

Can anyone advise whether there is a valid MySQL update statement that would do the intended job? I was thinking maybe a mirror table might help. Again, the idea is that exactly one of the competing threads will succeed in selecting just one candidate row to mark permanent, and the rest of the threads will update no rows at all.
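
One commonly used workaround for that error is to push every read of the target table into a derived table, which MySQL materializes before running the UPDATE. A sketch under that assumption (bind parameters shown as placeholder literals; note that materialization alone does not make this race-free against rows committed mid-statement):

UPDATE LyUrl u
JOIN (
    SELECT MIN(c.id) AS keep_id
    FROM LyUrl c
    WHERE c.uniquifier = '0f3ab...'          -- :uniquifier
      AND c.url = 'http://example.com/page'  -- :url
      AND NOT EXISTS (
          SELECT 1 FROM LyUrl p
          WHERE p.uniquifier = c.uniquifier
            AND p.url = c.url
            AND p.isPermanent = TRUE
      )
) pick ON u.id = pick.keep_id
SET u.isPermanent = TRUE
WHERE u.isPermanent = FALSE;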

Thanks,
Andy

How to use performance_schema.tables (1 reply)

Hi,

There are many tables in performance_schema.
I want to know how to use those tables to analyze bottlenecks and the performance of the MySQL server, i.e., to find the problem points.

Are there any example queries, for instance? The tables are difficult to use, and there are too many of them.
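
As a starting point, the global summary tables give a quick "top offenders" view (a sketch; the sys schema, where available, wraps these more readably):

-- Which wait events cost the most time overall?
SELECT EVENT_NAME, COUNT_STAR, SUM_TIMER_WAIT
FROM performance_schema.events_waits_summary_global_by_event_name
WHERE COUNT_STAR > 0
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;

-- Which normalized statements are slowest in aggregate (5.6+)?
SELECT DIGEST_TEXT, COUNT_STAR, SUM_TIMER_WAIT
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;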

How To Use More Resources (4 replies)

Hi,
I have MySQL 5.6 (64-bit) running on a 16-core server with 32 GB of RAM, on Windows Server 2012.

It is working fine, but I have some tables with around 180 million records, and sometimes I need to update a bunch of records, say 50 million. Overall, the DB response to the rest of the clients remains fast; there is almost no difference.

But I can see in the Resource Monitor that while doing these updates it only uses about 20% of CPU and 10 MB/s of disk I/O. I know the I/O capability of the disk is about 160 MB/s.

How can I make mysqld use, say, 75% of CPU and 120 MB/s of disk I/O? This server is mostly for the database.
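
For what it's worth, a single UPDATE statement executes on one thread, so its CPU use is capped near one core no matter what is configured; InnoDB's background flushing rate, however, is tunable (a sketch, values illustrative):

-- Let background flushing use more of the disk's throughput (5.6):
SET GLOBAL innodb_io_capacity = 2000;
SET GLOBAL innodb_io_capacity_max = 4000;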

Thanks in Advance.

Unusually high cpu usage (4 replies)

I am experiencing unusually high CPU usage for the mysql process on CentOS, usually reaching more than 300% just after a restart, when few users can have connected yet. I even stopped Apache and my scripts to check whether they were the culprits, but the CPU footprint remained the same. Memory usage, by contrast, is quite low. I use a very basic my.cnf, but I also tried the one at http://www.narga.net/optimizing-apachephpmysql-low-memory-server/ with no change.

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
max_connections = 200

and this is a typical top report:

13299 mysql 20 0 142m 19m 5788 S 281.2 0.9 14:14.44 mysqld
13831 root 20 0 10932 3648 2796 S 1.3 0.2 0:00.12 sshd
11103 fabrizio 20 0 2564 956 812 R 0.3 0.0 0:00.58 top
1 root 20 0 2896 360 260 S 0.0 0.0 0:00.15 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd/4893
3 root 20 0 0 0 0 S 0.0 0.0 0:00.00 khelper/4893
140 root 16 -4 2460 12 8 S 0.0 0.0 0:00.00 udevd
656 root 20 0 36988 724 320 S 0.0 0.0 0:00.99 rsyslogd
687 root 20 0 9004 332 236 S 0.0 0.0 0:00.50 sshd
695 root 20 0 3272 12 8 S 0.0 0.0 0:00.00 xinetd
890 root 20 0 9204 12 8 S 0.0 0.0 0:00.00 saslauthd
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
13299 mysql 20 0 142m 19m 5788 S 298.8 0.9 15:16.08 mysqld
13870 root 20 0 10348 3020 2448 S 2.0 0.1 0:00.06 sshd
1 root 20 0 2896 360 260 S 0.0 0.0 0:00.15 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd/4893
3 root 20 0 0 0 0 S 0.0 0.0 0:00.00 khelper/4893
140 root 16 -4 2460 12 8 S 0.0 0.0 0:00.00 udevd
656 root 20 0 36988 732 320 S 0.0 0.0 0:00.99 rsyslogd
687 root 20 0 9004 332 236 S 0.0 0.0 0:00.50 sshd

How may I improve my situation?
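
A first diagnostic step is to ask the server what those cycles are being spent on (a sketch):

SHOW FULL PROCESSLIST;

-- On 5.6+, performance_schema can show what each server thread is doing:
SELECT THREAD_ID, PROCESSLIST_STATE, PROCESSLIST_INFO
FROM performance_schema.threads
WHERE PROCESSLIST_STATE IS NOT NULL;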

RAM consideration/ IO usage (1 reply)

I would like to know how MySQL stores data. I assume indexes and some data are cached in memory, and the majority of the data itself is on disk. Is my assumption correct?

I would like to estimate the RAM size for the MySQL instance based on the amount of data forecast over the next couple of years.

I have a primary key and also a unique index. If I add more indexes, I assume my memory requirement would increase.

Table type: InnoDB
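
For InnoDB specifically, memory use is dominated by the buffer pool, which caches both data and index pages on demand; whether the working set fits can be gauged like this (a sketch):

-- Total data + index size per schema, to compare against the pool size:
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024) AS total_mb
FROM information_schema.tables
GROUP BY table_schema;

SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';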

Mysql performance between installation (no replies)

Hello,

I have a question about a MySQL installation. I use it as a web developer, and I have a performance problem with it.
I used the Windows installer with the "Developer Default" option to install it on my computer.
So in the end I have the same configuration (MySQL 5.6.21) as a workmate, yet my computer runs the same database creation script in 40 minutes versus 5 minutes on my workmate's computer.

I've tried tuning the config file without any change, and I've looked at the hard drive without finding any issue.

Who can help me with this? I don't even know where to look.
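
One common culprit for this exact symptom is per-commit log flushing interacting with a slow disk; comparing these settings (and the disks' write-cache behavior) on both machines is a cheap first check (a sketch):

SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
SHOW GLOBAL VARIABLES LIKE 'innodb_flush_method';
SHOW GLOBAL VARIABLES LIKE 'innodb_doublewrite';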

5.6 -> 5.7 - huge jump in memory usage, CentOS 7. (1 reply)

Hi all,

I have two (idle) instances of MySQL running, 5.6 and 5.7.

Machine.
Sony Vaio, 2GB RAM, Centrino Duo.
OS CentOS 7.

MySQL Ver 14.14 Distrib 5.7.5-m15 and
MySQL Ver 14.14 Distrib 5.6.22 (what does 14.14 stand for?).

Both compiled from source - no special options, just cmake ../my_code_directory.

Now, my 5.7 instance is using 4-5 times the RAM of the 5.6 instance, and this holds all of the time; memory usage of both instances goes down after a period of no use. Is this a known phenomenon?
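
For what it's worth, 5.7 added memory instrumentation that can attribute the usage directly (a sketch; these tables exist only on the 5.7 instance):

SELECT EVENT_NAME,
       CURRENT_NUMBER_OF_BYTES_USED / 1024 / 1024 AS current_mb
FROM performance_schema.memory_summary_global_by_event_name
ORDER BY CURRENT_NUMBER_OF_BYTES_USED DESC
LIMIT 10;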

Query Optimization Needed (8 replies)

Hi Friends,

I need help optimizing this query:

SELECT DISTINCT `userdeviceinfo`.`deviceid` FROM `userdeviceinfo`
WHERE (`userdeviceinfo`.`created_on` < '2015-02-10 18:30:00' AND NOT (`userdeviceinfo`.`user_id` IS NULL))

--------------------------------------------------------------------------
This query takes 0.80-0.90 seconds and returns a result set of approx. 165,000 rows. I want to reduce the execution time further.

Here is what I have already tried:

1. The query was not using the existing index on the user_id field, so I used the FORCE INDEX option. Now the query uses the index, and the rows examined dropped to roughly 205,000 from approx. 410,000 before. But I gained no benefit in execution time; it still takes 0.8-0.9 seconds. Why?
2. I have tried a composite/covering index on every field searched in this query, but that hasn't helped either. (See the covering-index sketch after the table definition below.)
3. Why doesn't the query use the index by default? I am using MySQL Server 5.5.38.

-------------------------------------------------------------------------------

CREATE TABLE `userdeviceinfo` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_id` int(11) DEFAULT NULL,
`imei` varchar(100) DEFAULT NULL,
`deviceid` varchar(100) DEFAULT NULL,
`osversion` varchar(100) DEFAULT NULL,
`operator` varchar(100) DEFAULT NULL,
`msisdn` varchar(100) DEFAULT NULL,
`circle` varchar(50) DEFAULT NULL,
`modal` varchar(100) DEFAULT NULL,
`simoperator` varchar(100) DEFAULT NULL,
`simcountrycode` varchar(100) DEFAULT NULL,
`phonetype` varchar(100) DEFAULT NULL,
`manufacturer` varchar(100) DEFAULT NULL,
`networktype` varchar(100) DEFAULT NULL,
`gcm_reg_id` varchar(4096) DEFAULT NULL,
`oem` varchar(100) DEFAULT NULL,
`device_type` varchar(16) DEFAULT NULL,
`app_version` varchar(100) DEFAULT NULL,
`screen_width` varchar(10) DEFAULT NULL,
`screen_height` varchar(10) DEFAULT NULL,
`download_source` int(11) NOT NULL DEFAULT '0',
`is_updated` tinyint(1) NOT NULL,
`created_on` datetime NOT NULL,
`updated_on` datetime NOT NULL,
`active` tinyint(1) NOT NULL DEFAULT '1',
PRIMARY KEY (`id`),
KEY `userdeviceinfo_fbfc09f1` (`user_id`),
KEY `ind_userdeviceinfo_imei` (`imei`)
) ENGINE=InnoDB AUTO_INCREMENT=427210 DEFAULT CHARSET=latin1
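
A covering index for this exact query would have to contain every column it touches, so the optimizer can answer it from the index alone; a sketch (index name is illustrative):

ALTER TABLE userdeviceinfo
  ADD INDEX ix_user_created_device (user_id, created_on, deviceid);

Even when the range conditions prevent a tight seek, scanning this much narrower index ("Using index" in EXPLAIN) is typically cheaper than reading full rows; note also that returning ~165,000 rows carries an irreducible transfer cost that no index can remove.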

-------------------------------------------------------------------------------

Regards,
Devrishi

Temp tables on Disk issue (1 reply)

Hi Friends,

We are facing an issue with temporary tables on disk: 80% of temp tables are created on disk, which can be a performance issue. We have done the following:

We already increased tmp_table_size and max_heap_table_size to 32M (they were 16M earlier), but it made no difference in the stats.

One thing I want to mention is that we use LONGTEXT fields in many of the tables.

How can I start troubleshooting, given that increasing tmp_table_size and max_heap_table_size has shown no benefit and the stats remain the same?

Is there any way to resolve this without converting LONGTEXT to VARCHAR?
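
Note that this behavior is expected with TEXT/BLOB columns: an internal temporary table containing one can never use the in-memory engine, so it goes to disk regardless of tmp_table_size. The ratio can be watched, and the usual mitigation sketched, like this (table and column names hypothetical):

SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';

-- Mitigation sketch: keep the LONGTEXT out of the GROUP BY/ORDER BY work
-- and fetch it in an outer query by primary key:
SELECT d.id, d.big_text
FROM docs d
JOIN (SELECT id FROM docs ORDER BY created_at DESC LIMIT 50) pick
  ON pick.id = d.id;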

Regards,
Devrishi

Analyse-Optimize and tablecache (no replies)

Hello,

I have scheduled on my server:
- ANALYZE on all tables every day
- OPTIMIZE on all tables every Sunday

I graph tablecache_fillrate and tablecache_hitrate with a monitoring tool.
After each ANALYZE or OPTIMIZE, the table cache indicator drops quickly from 100% to 10%, then climbs back slowly over the 2-3 hours following the operation.

Is this normal?

Regards,
Samuel Mutel.

Slow performance when using LIMIT 1000 (2 replies)

It looks like MySQL 5.6 isn't using the table indexes in certain circumstances, leading to extremely slow performance.

If I use MySQL's 'EXPLAIN' command in conjunction with this query, we can see that MySQL doesn't intend to use any indexes to answer the query, and will scan all 7,734,719 rows:

mysql> EXPLAIN SELECT * FROM GroupCall WHERE ((CallInitiated >= '2015-02-16 20:10:00' AND CallInitiated <= '2015-02-16 20:50:00') OR (CallBegin >= '2015-02-16 20:10:00' AND CallBegin <= '2015-02-16 20:50:00')) ORDER BY CallInitiated DESC LIMIT 1000;
+----+-------------+-----------+------+-------------------------+------+---------+------+---------+-----------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-----------+------+-------------------------+------+---------+------+---------+-----------------------------+
| 1 | SIMPLE | GroupCall | ALL | CallInitiated,CallBegin | NULL | NULL | NULL | 7734719 | Using where; Using filesort |
+----+-------------+-----------+------+-------------------------+------+---------+------+---------+-----------------------------+
1 row in set (0.00 sec)

Strangely, if we increase or remove the 'LIMIT' value, the query performs correctly, using the available indexes and examining only 2724 rows:

mysql> EXPLAIN SELECT * FROM GroupCall WHERE ((CallInitiated >= '2015-02-16 20:10:00' AND CallInitiated <= '2015-02-16 20:50:00') OR (CallBegin >= '2015-02-16 20:10:00' AND CallBegin <= '2015-02-16 20:50:00')) ORDER BY CallInitiated DESC LIMIT 3000;
+----+-------------+-----------+-------------+-------------------------+-------------------------+---------+------+------+------------------------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-----------+-------------+-------------------------+-------------------------+---------+------+------+------------------------------------------------------------------------+
| 1 | SIMPLE | GroupCall | index_merge | CallInitiated,CallBegin | CallInitiated,CallBegin | 6,6 | NULL | 2724 | Using sort_union(CallInitiated,CallBegin); Using where; Using filesort |
+----+-------------+-----------+-------------+-------------------------+-------------------------+---------+------+------+------------------------------------------------------------------------+
1 row in set (0.00 sec)

The database server with this issue is running MySQL 5.6.17. As a further test, if I try the original query against a similarly sized Log Server database on MySQL 5.5.37, then it behaves correctly:

mysql> EXPLAIN SELECT * FROM GroupCall WHERE ((CallInitiated >= '2015-02-16 20:10:00' AND CallInitiated <= '2015-02-16 20:50:00') OR (CallBegin >= '2015-02-16 20:10:00' AND CallBegin <= '2015-02-16 20:50:00')) ORDER BY CallInitiated DESC LIMIT 1000;
+----+-------------+-----------+-------------+-------------------------+-------------------------+---------+------+-------+------------------------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-----------+-------------+-------------------------+-------------------------+---------+------+-------+------------------------------------------------------------------------+
| 1 | SIMPLE | GroupCall | index_merge | CallInitiated,CallBegin | CallInitiated,CallBegin | 9,9 | NULL | 13313 | Using sort_union(CallInitiated,CallBegin); Using where; Using filesort |
+----+-------------+-----------+-------------+-------------------------+-------------------------+---------+------+-------+------------------------------------------------------------------------+
1 row in set (0.00 sec)

Does anyone know if this performance issue has been solved in MySQL 5.7?
Or is there a workaround for this issue?
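
One commonly suggested workaround for OR-of-ranges plans is to rewrite the OR as a UNION, so that each branch can use its own index; a sketch:

(SELECT * FROM GroupCall
 WHERE CallInitiated BETWEEN '2015-02-16 20:10:00' AND '2015-02-16 20:50:00')
UNION
(SELECT * FROM GroupCall
 WHERE CallBegin BETWEEN '2015-02-16 20:10:00' AND '2015-02-16 20:50:00')
ORDER BY CallInitiated DESC
LIMIT 1000;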

MySQL Performance Tuning: Deep Dive (no replies)

COUNT with GROUP BY, JOIN and SUBQUERY very slow (7 replies)

This count query is very slow:

SELECT Count(*)
FROM (SELECT `t`.`id` AS `t0_c0`,
`t`.`regiao_id` AS `t0_c1`,
`t`.`nome` AS `t0_c2`,
`t`.`razao_social` AS `t0_c3`,
`t`.`cpf_cnpj` AS `t0_c4`,
`t`.`rg` AS `t0_c5`,
`t`.`inscricao_estadual` AS `t0_c6`,
`t`.`orgao_emissor` AS `t0_c7`,
`t`.`passaporte` AS `t0_c8`,
`t`.`data_nascimento` AS `t0_c9`,
`t`.`nome_mae` AS `t0_c10`,
`t`.`nome_pai` AS `t0_c11`,
`t`.`estado_id` AS `t0_c12`,
`t`.`cidade_id` AS `t0_c13`,
`t`.`ssid_id` AS `t0_c14`,
`t`.`mac` AS `t0_c15`,
`t`.`mac_vinculado` AS `t0_c16`,
`t`.`ccq` AS `t0_c17`,
`t`.`sinal` AS `t0_c18`,
`t`.`cep` AS `t0_c19`,
`t`.`endereco_bairro` AS `t0_c20`,
`t`.`endereco_rua` AS `t0_c21`,
`t`.`endereco_numero` AS `t0_c22`,
`t`.`endereco_complemento` AS `t0_c23`,
`t`.`endereco_latitude` AS `t0_c24`,
`t`.`endereco_longitude` AS `t0_c25`,
`t`.`endereco_usar_caixa_postal` AS `t0_c26`,
`t`.`endereco_numero_caixa_postal` AS `t0_c27`,
`t`.`coordenadas_verificadas` AS `t0_c28`,
`t`.`modo_pagamento_id` AS `t0_c29`,
`t`.`consta_spc_serasa` AS `t0_c30`,
`t`.`tipo_conexao` AS `t0_c31`,
`t`.`login` AS `t0_c32`,
`t`.`situacao_id` AS `t0_c33`,
`t`.`numero_bloqueio` AS `t0_c34`,
`t`.`plano_id` AS `t0_c35`,
`t`.`plano_valor_especial` AS `t0_c36`,
`t`.`cobranca_dia_id` AS `t0_c37`,
`t`.`senha` AS `t0_c38`,
`t`.`equipamento_comodato` AS `t0_c39`,
`t`.`ponto_cliente_id` AS `t0_c40`,
`t`.`ponto_numero` AS `t0_c41`,
`t`.`ip_fixo` AS `t0_c42`,
`t`.`modo_envio_cobranca` AS `t0_c43`,
`t`.`modo_envio_cobranca_outros_descricao` AS `t0_c44`,
`t`.`telefonia_ativa` AS `t0_c45`,
`t`.`isento` AS `t0_c46`,
`t`.`sistema_externo_id` AS `t0_c47`,
`t`.`revenda_id` AS `t0_c48`,
`t`.`usuario_id` AS `t0_c49`,
`t`.`data_tempo_ativacao` AS `t0_c50`,
`t`.`data_tempo` AS `t0_c51`,
`t`.`bemtevi_codcliente` AS `t0_c52`,
`t`.`bemtevi_endereco_rua` AS `t0_c53`,
`t`.`usar_endereco_bemtevi` AS `t0_c54`,
`t`.`bloquear_automaticamente` AS `t0_c55`,
`t`.`spc_serasa` AS `t0_c56`,
`endereco_instalacao`.`id` AS `t1_c0`,
`telefones`.`id` AS `t2_c0`,
`telefones`.`telefone` AS `t2_c3`,
`emails`.`id` AS `t3_c0`,
`emails`.`email` AS `t3_c3`,
`metodo_cobranca`.`id` AS `t4_c0`,
`acct`.`radacctid` AS `t5_c0`,
`acct`.`framedipaddress` AS `t5_c22`
FROM `radcliente` `t`
LEFT OUTER JOIN `radcliente_endereco_instalacao` `endereco_instalacao`
       ON ( endereco_instalacao.id = (SELECT id
                                      FROM `radcliente_endereco_instalacao` `endereco_instalacao`
                                      WHERE ( endereco_instalacao.cliente_id = t.id )
                                      LIMIT 1) )
LEFT OUTER JOIN `radcliente_telefone` `telefones`
       ON ( `telefones`.`cliente_id` = `t`.`id` )
LEFT OUTER JOIN `radcliente_email` `emails`
       ON ( `emails`.`cliente_id` = `t`.`id` )
LEFT OUTER JOIN `radmetodo_cobranca` `metodo_cobranca`
       ON ( metodo_cobranca.id = (SELECT id
                                  FROM `radmetodo_cobranca` `metodo_cobranca`
                                  WHERE ( metodo_cobranca.cliente_id = t.id )
                                    AND ( metodo_cobranca.arquivo = 'nao' )
                                  ORDER BY metodo_cobranca.id DESC
                                  LIMIT 1) )
LEFT OUTER JOIN `radacct` `acct`
       ON ( acct.radacctid = (SELECT radacctid
                              FROM `radacct` `acct`
                              WHERE ( acct.cliente_id = t.id )
                              ORDER BY radacctid DESC
                              LIMIT 1) )
GROUP BY t.id) sq





The relevant columns are indexed; I also tried composite keys, and nothing changed. Most of the delay is in the "sending data" phase. I cannot remove the joins, as the real query uses them in its WHERE clause; I removed that here for better understanding.

This is the EXPLAIN result:


id select_type table type possible_keys key key_len ref rows Extra
1 PRIMARY <derived2> ALL NULL NULL NULL NULL 7227 NULL
2 DERIVED t index PRIMARY,radius_fk_cliente_x_estado,radius_fk_cliente_x_cidade,radius_fk_cliente_x_cobranca_dia,radius_fk_cliente_x_plano,situacao_id,regiao_id,usuario_id,cpf_cnpj,login,situacao_id_2,nome PRIMARY 4 NULL 7227 NULL
2 DERIVED endereco_instalacao eq_ref PRIMARY PRIMARY 4 func 1 Using where; Using index
2 DERIVED telefones ref cliente_id cliente_id 4 radius.t.id 1 NULL
2 DERIVED emails ref cliente_id cliente_id 4 radius.t.id 1 NULL
2 DERIVED metodo_cobranca eq_ref PRIMARY PRIMARY 4 func 1 Using where; Using index
2 DERIVED acct eq_ref PRIMARY PRIMARY 8 func 1 Using where
5 DEPENDENT SUBQUERY acct ref cliente_id cliente_id 5 radius.t.id 1 Using where; Using index; Using filesort
4 DEPENDENT SUBQUERY metodo_cobranca ref cliente_id,arquivo cliente_id 5 radius.t.id 1 Using where; Using filesort
3 DEPENDENT SUBQUERY endereco_instalacao ref radius_fk_cliente_endereco_instalacao_x_cliente radius_fk_cliente_endereco_instalacao_x_cliente 5 radius.t.id 1 Using index


Notes:
- the table 'radacct' has 2 million records; 'radcliente' has a few hundred.
- The subquery in each join is there to limit the join to one row, because I only need the latest.
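
If the count itself is all that is needed, the derived query can likely be skipped entirely: LEFT OUTER JOINs cannot eliminate rows of `radcliente`, and GROUP BY t.id (its primary key) collapses any duplicates the joins introduce, so the result should equal a plain count. A sketch, worth verifying once the real query's WHERE conditions are added back:

SELECT COUNT(*) FROM radcliente;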