Channel: MySQL Forums - Performance
Viewing all 1203 articles

Need help to optimize my query (1 reply)

I am using the following query, which is causing an issue. Note the following points:

I am using 4 tables in this query:

message_share, messages, myusertable, friends

What I am trying to do in this query:

I am trying to display messages and shared messages of users who are friends with each other.

This is similar to the Facebook wall, where friends' wall data is displayed.

The problem is that only myusertable is causing the issue: it adds so much load that the query can even time out.

When I remove myusertable from the query, everything works fine. The only purpose of myusertable here is to fetch the user's full name; the rest is done with user ids and does not need myusertable at all.

The query is

(SELECT DISTINCT M.msg_id, M.uid_fk, M.message, S.created, M.like_count,M.comment_count,M.share_count, U.username,M.uploads, S.uid_fk AS share_uid,S.ouid_fk AS share_ouid
FROM friends F
LEFT JOIN message_share S ON S.ouid_fk <> F.friend_two
LEFT JOIN messages M ON M.msg_id = S.msg_id_fk AND M.uid_fk = S.ouid_fk
LEFT JOIN myusertable U ON U.uid = M.uid_fk AND U.status1='1'
WHERE F.friend_one='199095' AND F.role='fri'
GROUP BY msg_id
ORDER BY created DESC LIMIT 10)
UNION
(SELECT DISTINCT M.msg_id, M.uid_fk, M.message, M.created, M.like_count,M.comment_count,M.share_count, U.username,M.uploads, '0' AS share_uid, '0' AS share_ouid
FROM friends F
LEFT JOIN messages M ON M.uid_fk = F.friend_two
LEFT JOIN myusertable U ON U.uid = M.uid_fk AND U.status1='1'
WHERE F.friend_one='199095'
GROUP BY msg_id
ORDER BY created DESC LIMIT 10)
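One direction worth testing (a sketch against the schema as posted, not a drop-in replacement): keep the heavy UNION free of myusertable and join it only against the 10 final rows, so the username lookup runs at most 10 times instead of once per candidate row. Note also that `S.ouid_fk <> F.friend_two` in the first branch matches nearly every share row against every friend row; if that was meant to be `=`, fixing it alone may remove most of the load.

```sql
-- Sketch (assumes the posted schema; the share join condition is changed
-- from <> to =, which is presumed to be the intent):
SELECT t.*, u.username
FROM (
    (SELECT M.msg_id, M.uid_fk, M.message, S.created,
            M.like_count, M.comment_count, M.share_count, M.uploads,
            S.uid_fk AS share_uid, S.ouid_fk AS share_ouid
     FROM friends F
     JOIN message_share S ON S.ouid_fk = F.friend_two
     JOIN messages M ON M.msg_id = S.msg_id_fk AND M.uid_fk = S.ouid_fk
     WHERE F.friend_one = '199095' AND F.role = 'fri'
     ORDER BY S.created DESC LIMIT 10)
    UNION
    (SELECT M.msg_id, M.uid_fk, M.message, M.created,
            M.like_count, M.comment_count, M.share_count, M.uploads,
            '0' AS share_uid, '0' AS share_ouid
     FROM friends F
     JOIN messages M ON M.uid_fk = F.friend_two
     WHERE F.friend_one = '199095'
     ORDER BY M.created DESC LIMIT 10)
) AS t
JOIN myusertable u ON u.uid = t.uid_fk AND u.status1 = '1'
ORDER BY t.created DESC
LIMIT 10;
```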

Understanding Explain (4 replies)

I've got the following query:
SELECT media.jeid FROM j_image AS media
LEFT JOIN file_girls_names AS name
ON name.jeid = media.jeid AND name.media_type = 'image'
ORDER BY name.first_name, name.last_name

When I use explain on this query, I get the following:
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE media index NULL jeid 4 NULL 1550077 Using index; Using temporary; Using filesort
1 SIMPLE name eq_ref PRIMARY PRIMARY 21 fashion.media.jeid,const 1

The primary table, 'media', has about 1.6 million rows in it. I'm trying to optimize the query. The problem is, I don't understand the 'Extra' column with:

"Using index; Using temporary; Using filesort"

Can someone explain what these mean, and maybe give me some direction on how I can fix them (if they are bad; from what I have been reading, they sound bad)?
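For reference: "Using index" means the media pass is satisfied from the index alone (good); "Using temporary" and "Using filesort" mean the joined rows are copied to a temporary table and then sorted, because no index can deliver them already in `first_name, last_name` order while media drives the join. A sketch of one way around it, assuming every image row is guaranteed a matching name row (the LEFT JOIN suggests that may not hold, so verify first):

```sql
-- Hypothetical index: lets MySQL read name rows already in sorted order,
-- so the temporary table and filesort disappear for the matched case.
ALTER TABLE file_girls_names
    ADD INDEX ix_sort (media_type, first_name, last_name, jeid);

-- Driven from the name table, the ORDER BY follows the index.
SELECT name.jeid
FROM file_girls_names AS name
WHERE name.media_type = 'image'
ORDER BY name.first_name, name.last_name;
```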

Thank you.

MySQL using too much CPU (2 replies)

Hello,

What changes should I make to MySQL or CentOS to greatly reduce my CPU usage?

Centos 6.5

32gb Ram

8 Core CPU

MySQL 5.1.73


top - 18:38:44 up 2:16, 3 users, load average: 0.05, 0.03, 0.05
Tasks: 199 total, 1 running, 198 sleeping, 0 stopped, 0 zombie
Cpu(s): 66.2%us, 18.9%sy, 0.0%ni, 14.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 32811000k total, 3072920k used, 29738080k free, 69784k buffers
Swap: 40959992k total, 0k used, 40959992k free, 2013240k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4399 mysql 20 0 3151m 430m 4968 S 683.9 1.3 351:43.77 mysqld
1 root 20 0 19356 1544 1236 S 0.0 0.0 0:01.24 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 root RT 0 0 0 0 S 0.0 0.0 0:00.26 migration/0




[mysqld]
datadir=/data/mysql_data
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
bulk_insert_buffer_size=512M
myisam_sort_buffer_size=4G
max_binlog_size=128M

key_buffer = 32M
tmp_table_size = 32M
long_query_time=5
log-queries-not-using-indexes=1


thread_cache=32
wait_timeout=25
connect_timeout=10
max_connections=1024

query_cache_size = 128M
join_buffer_size = 2M
thread_cache_size = 4
table_cache = 2500
innodb_buffer_pool_size = 350M
slow_query_log=/data/mysql_log/slow.log


log-bin = /data/mysql_log/mysql-bin.log
binlog-do-db=centreon_storage
binlog-do-db=centreon_syslog
binlog-do-db=centreon_status
binlog-do-db=centreon
server-id=1


[mysqld_safe]
log-error=/data/mysql_log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
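For a box like this, a few settings in the posted my.cnf stand out as likely contributors; a sketch of adjustments to test one at a time (verify against your workload before applying):

```ini
[mysqld]
# 350M is tiny for a 32 GB machine; if the working set is InnoDB, this is
# usually the single biggest win (size to fit the data, leave room for the OS).
innodb_buffer_pool_size = 16G
# The query cache serializes all queries on one mutex and is a classic
# cause of high CPU under concurrency.
query_cache_size = 0
# 4G per sort is only needed for very large REPAIR TABLE / ALTER operations.
myisam_sort_buffer_size = 256M
```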

Optimising query with composite index, which says using where (1 reply)

Hi,

I have the following table "endgames" as follows:

DROP TABLE IF EXISTS `DB`.`endgames`;
CREATE TABLE `DB`.`endgames` (
`ID` int(11) NOT NULL AUTO_INCREMENT,
`GamePosition` char(64) NOT NULL DEFAULT '0123421055555555CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCBBBBBBBB6789A876',
`GameIntroduction` text,
`Event` varchar(40) DEFAULT NULL,
`Site` varchar(40) DEFAULT NULL,
`GameDate` varchar(25) DEFAULT NULL,
`Round` varchar(10) DEFAULT NULL,
`WhitePlayer` varchar(80) NOT NULL,
`BlackPlayer` varchar(80) DEFAULT NULL,
`Result` tinyint(3) unsigned DEFAULT NULL,
`ECOCode` smallint(6) unsigned DEFAULT NULL,
`Annotator` varchar(100) DEFAULT NULL,
`EventCountry` varchar(40) DEFAULT NULL,
`GameText` text,
`WhiteMove` bit(1) DEFAULT b'1',
`MoveOffset` smallint(6) DEFAULT '0',
`TextComments` int(10) unsigned DEFAULT NULL,
`VariationCount` int(10) unsigned DEFAULT NULL,
`WhiteQueenCount` tinyint(3) unsigned NOT NULL,
`WhiteRookCount` tinyint(3) unsigned NOT NULL,
`WhiteBishopCount` tinyint(3) unsigned NOT NULL,
`WhiteKnightCount` tinyint(3) unsigned NOT NULL,
`WhitePawnCount` tinyint(3) unsigned NOT NULL,
`BlackQueenCount` tinyint(3) unsigned NOT NULL,
`BlackRookCount` tinyint(3) unsigned NOT NULL,
`BlackBishopCount` tinyint(3) unsigned NOT NULL,
`BlackKnightCount` tinyint(3) unsigned NOT NULL,
`BlackPawnCount` tinyint(3) unsigned NOT NULL,
PRIMARY KEY (`ID`),
KEY `IX_PieceCounts` (`WhiteQueenCount`,`WhiteRookCount`,`WhiteBishopCount`,`WhiteKnightCount`,`WhitePawnCount`,`BlackQueenCount`,`BlackRookCount`,`BlackBishopCount`,`BlackKnightCount`,`BlackPawnCount`)
) ENGINE=MyISAM AUTO_INCREMENT=58796 DEFAULT CHARSET=latin1;

As can be seen, there is a composite index on the Count fields (the fields I search on).

I am using the query below:

EXPLAIN

SELECT * FROM endgames WHERE WhiteQueenCount >=0 AND WhiteRookCount >=3 AND WhiteBishopCount >=0 AND WhiteKnightCount >=0 AND WhitePawnCount >=0 AND BlackQueenCount >=0 AND BlackRookCount >=0 AND BlackBishopCount >=0 AND BlackKnightCount >=0 AND BlackPawnCount >=0

UNION

SELECT * FROM endgames WHERE WhiteQueenCount >=0 AND WhiteRookCount >=0 AND WhiteBishopCount >=0 AND WhiteKnightCount >=0 AND WhitePawnCount >=0 AND BlackQueenCount >=0 AND BlackRookCount >=3 AND BlackBishopCount >=0 AND BlackKnightCount >=0 AND BlackPawnCount >=0

The output of that explain is the following:

1, 'PRIMARY', 'endgames', 'range', 'IX_PieceCounts', 'IX_PieceCounts', '10', '', 8421, 'Using where'
2, 'UNION', 'endgames', 'ALL', 'IX_PieceCounts', '', '', '', 58795, 'Using where'
, 'UNION RESULT', '<union1,2>', 'ALL', '', '', '', '', , ''

This may take up to 3 seconds in the live environment on a table with 60,000 rows. Is there any way I can optimise the query? In particular I am noticing that the IX_PieceCounts index is not being used in the second part after the UNION and I see "using where" and a large number of rows having to be examined.
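For what it's worth, a condition like `WhiteQueenCount >= 0` on an unsigned column is always true, so it adds nothing to selectivity; and the selective condition of the second branch, `BlackRookCount >= 3`, sits on a non-leading column of IX_PieceCounts, which is why that branch falls back to a full scan. A sketch worth testing (assumes the table as posted):

```sql
-- Drop the always-true >= 0 predicates and give each branch its own
-- single-column index so both sides can do a real range scan.
ALTER TABLE endgames
    ADD INDEX ix_white_rooks (WhiteRookCount),
    ADD INDEX ix_black_rooks (BlackRookCount);

SELECT * FROM endgames WHERE WhiteRookCount >= 3
UNION
SELECT * FROM endgames WHERE BlackRookCount >= 3;
```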

Thanks in advance,
Tim

InnoDB and locking, UPDATE and SELECT (no replies)

Hi all,

I have a question about how to perform a special task optimally; I'm using InnoDB (libmysql 5.5.38) and Perl to access it. One big tasks table should be processed by several parallel processes, each with a unique ID. In a loop, each process tries to lock and select a set of 100 entries (using additional criteria). Currently I introduced an additional column 'isLocked' that is usually 0 (available) and is set to the ID of the process dealing with the row. My current sequence of statements is (? stands for the unique process ID):

LOCK TABLES tasks WRITE
UPDATE tasks SET isLocked=? WHERE isLocked=0 AND ... ORDER BY lastChecked LIMIT 100
SELECT ... FROM tasks WHERE isLocked=?
UNLOCK TABLES

Reading the documentation, I think the two locking statements can safely be left out, as InnoDB locks the rows itself, so the UPDATE and SELECT statements alone should suffice. It would probably also be more efficient, as the lock would be at row rather than table level. As this is hard to test, I'd like to ask whether this is really so.

A second performance question: can these two statements, an UPDATE followed by a SELECT that reads exactly the rows the UPDATE modified, somehow be made more efficient? Or does caching already solve this? Optimal would be a combined UPDATE-and-SELECT statement. I read about "SELECT ... FOR UPDATE", but I'm not sure how it would apply to this situation and whether it is really useful here.

I think this is a pretty common problem, and there should be a known, performance-wise optimal way of doing this?
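A sketch of the SELECT ... FOR UPDATE pattern applied here (assumes the tasks table as described; the "..." stands for the poster's additional criteria). The SELECT both locks and returns the rows, so the separate re-read disappears and InnoDB takes only row locks, no LOCK TABLES needed:

```sql
START TRANSACTION;
-- Locks the 100 chosen rows until COMMIT; other workers block on these
-- rows instead of on the whole table.
SELECT * FROM tasks
 WHERE isLocked = 0 AND ...
 ORDER BY lastChecked
 LIMIT 100
 FOR UPDATE;
-- Mark them with this worker's ID, using the ids from the result set.
UPDATE tasks SET isLocked = ? WHERE id IN (/* ids just selected */);
COMMIT;
```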

Thanks in advance,

Andy

Slow Performance with Multi-threaded app (1 reply)

My application is a java 1.7, multi-threaded application with MySQL 5.6 as the backend.

The application reads data from files and writes it to corresponding tables in a multithreaded fashion. Performance is great while all threads insert into different tables, but the application, or rather the inserts, slow down as soon as all threads start inserting into the same table.

Each thread has its own prepared statement in batch mode with commit interval of 100 records. "rewriteBatchedStatements=true" property is already added during establishing the connection.

MySQL Driver: com.mysql.jdbc.Driver
MySQL Storage engine: InnoDB

Any information / suggestions that can help improve performance will be highly appreciated.

Thanks...

Mass Update optimization (1 reply)

I have a table of 2 columns and 60 million rows. I want to update the table by incrementing one field by 1 on various rows; this is the only type of operation that will ever be done on the table. The updates all happen at once, so I am usually updating 30-40 million of the 60 million rows once a day. The table is only used by one user, so I am not worried about multiple users trying to connect to it at the same time or anything like that. Which engine (and parameters) would be fastest for doing just updates and nothing else? (There will be an occasional SELECT, but I am not concerned about its performance.) I am currently using MyISAM, as I am not worried about ACID, but it is quite slow for mass updates. Essentially I am trying to find the fastest way to do mass updates.
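One pattern worth benchmarking for this shape of workload (hypothetical table and column names, since the schema isn't posted): load the day's keys to increment into a temporary table, then apply them in a single multi-table UPDATE rather than tens of millions of individual statements:

```sql
-- counters(k, cnt) stands in for the 2-column, 60M-row table.
CREATE TEMPORARY TABLE deltas (k INT PRIMARY KEY);
LOAD DATA LOCAL INFILE '/tmp/todays_keys.txt' INTO TABLE deltas;

-- One pass over the join instead of one statement per row.
UPDATE counters c
  JOIN deltas d ON d.k = c.k
   SET c.cnt = c.cnt + 1;
```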

PHP/MySQL to use all CPU/memory to increase performance (3 replies)

Hello,

This may be a PHP issue or a MySQL one. I made an example for illustration only; why does this example not use all of the server's CPU/memory to complete faster? Is there a way to do this? The processes handling a similar script use less than 1% of the server's resources. It is a dedicated server.

[php]
$query = mysql_query("SELECT * FROM tableExample WHERE criteria5 = 'example'");
while ($results = mysql_fetch_assoc($query)) {
    // Note: the original read $result[stuff], an undefined variable;
    // it must be $results, and the array keys should be quoted.
    mysql_query("UPDATE tableExample_2x SET val1 = '{$results['stuff']}'
                 WHERE associativeID = {$results['id']}");
}
[/php]
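As a side note, this loop spends its time on per-row client/server round trips, not CPU, which is why the server looks idle. A sketch of doing the same work server-side in one statement (assumes the two tables as used in the example):

```sql
-- One joined UPDATE replaces the whole PHP fetch/update loop.
UPDATE tableExample_2x t2
  JOIN tableExample t1 ON t1.id = t2.associativeID
   SET t2.val1 = t1.stuff
 WHERE t1.criteria5 = 'example';
```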


Any ideas or suggestions?



PHP config:
memory_limit: 4GB




my.conf
**********************
[mysql]

# CLIENT #
port=3306
socket="/var/lib/mysql/mysql.sock"

[mysqld]
slow-query-log=1
long_query_time=1

# GENERAL #
user=mysql
default-storage-engine=InnoDB
socket="/var/lib/mysql/mysql.sock"
pid-file="/var/lib/mysql/mysql.pid"

# MyISAM #
key-buffer-size=32M
myisam-recover="FORCE,BACKUP"

# SAFETY #
max-allowed-packet=1G
max-connect-errors=1000000

# DATA STORAGE #
datadir="/var/lib/mysql/"

# BINARY LOGGING #
log-bin="/var/lib/mysql/mysql-bin"
expire-logs-days=14
sync-binlog=1

# CACHES AND LIMITS #
tmp-table-size=32M
max-heap-table-size=32M
query-cache-type=0
query-cache-size=0
max-connections=5000
thread-cache-size=50
open-files-limit=65535
table-definition-cache=1024
table-open-cache=2048

# INNODB #
innodb-flush-method=O_DIRECT
innodb-log-files-in-group=2
innodb-flush-log-at-trx-commit=2
innodb-file-per-table=1
innodb-buffer-pool-size=6G

# LOGGING #
log-error="/var/lib/mysql/mysql-error.log"
log-queries-not-using-indexes=1
innodb_buffer_pool_size=134217728
max_allowed_packet=268435456
port=5123

How to set mysql CPU affinity? (3 replies)

Good evening!
I have 5.5.38-0+wheezy1-log installed on Debian with nginx+apache and a game server. Sometimes MySQL processes long queries which cause high CPU load and lag on the game server (an unstable-FPS problem). My idea is to put the game server on separate core #0 (done with the taskset command), and all other processes on cores #1 - #3.
I already set worker_cpu_affinity for nginx, but I still haven't found any solution for mysql and apache. I use MyISAM, so innodb_* parameters are not useful for me.
Is it possible to set which cores the mysql server will use?

Maybe I need some custom modification to /etc/init.d/mysql, using taskset, to allow it to work even after a system restart. Please help me.
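mysqld itself has no CPU-affinity setting on Linux; the usual approach is exactly taskset applied from outside. A sketch of one way to do it, assuming the stock Debian init script and a single mysqld process (adjust the core list to taste):

```shell
# One-off, while mysqld is running: restrict it to cores 1-3.
taskset -pc 1-3 "$(pidof mysqld)"

# To survive restarts, the same call can be appended to the 'start'
# branch of /etc/init.d/mysql, after the daemon has been launched.
```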

Query optimization to reduce execution time. (3 replies)

Problem: the query below is slow; minimum execution time is 3.x seconds, and as more category ids are added to the "IN" clause, execution takes more than a minute.

QUERY:

explain SELECT a.* FROM
label a
INNER JOIN category_label c ON a.id = c.label_id
INNER JOIN product_label p ON a.id = p.label_id
INNER JOIN product p2 ON p.product_id = p2.id
INNER JOIN category c2 ON p2.category_id = c2.id
INNER JOIN category c3 ON (c2.lft BETWEEN c3.lft AND c3.rgt)
INNER JOIN user u ON ((u.id = p2.user_id AND u.is_active = 1))
INNER JOIN country c4 ON (p2.country_id = c4.id)
WHERE (c.category_id IN ('843', '848', '849', '853', '856', '858') AND a.is_filterable = 1 AND a.type <> "textarea" AND c2.rgt = (c2.lft + 1) AND c3.id IN ('843', '848', '849', '853', '856', '858') AND c4.id IN ('190') AND p2.status = 1)
GROUP BY a.id
ORDER BY a.sort_order


query explanation -
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: c4
type: const
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: const
rows: 1
Extra: Using index; Using temporary; Using filesort
*************************** 2. row ***************************
id: 1
select_type: SIMPLE
table: c3
type: range
possible_keys: PRIMARY,lft_rgt_inx
key: PRIMARY
key_len: 4
ref: NULL
rows: 6
Extra: Using where
*************************** 3. row ***************************
id: 1
select_type: SIMPLE
table: c
type: range
possible_keys: PRIMARY,label_id,category_id
key: PRIMARY
key_len: 4
ref: NULL
rows: 197
Extra: Using where; Using index; Using join buffer
*************************** 4. row ***************************
id: 1
select_type: SIMPLE
table: a
type: eq_ref
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: c.label_id
rows: 1
Extra: Using where

*************************** 5. row ***************************
id: 1
select_type: SIMPLE
table: p
type: ref
possible_keys: product_id,label_id
key: label_id
key_len: 4
ref: c.label_id
rows: 3827
Extra:
*************************** 6. row ***************************
id: 1
select_type: SIMPLE
table: p2
type: eq_ref
possible_keys: PRIMARY,category_id,user_id,country_id
key: PRIMARY
key_len: 8
ref: p.product_id
rows: 1
Extra: Using where
*************************** 7. row ***************************
id: 1
select_type: SIMPLE
table: c2
type: eq_ref
possible_keys: PRIMARY,lft_rgt_inx
key: PRIMARY
key_len: 4
ref: p2.category_id
rows: 1
Extra: Using where
*************************** 8. row ***************************
id: 1
select_type: SIMPLE
table: u
type: eq_ref
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: p2.user_id
rows: 1
Extra: Using where


Show create table -

CREATE TABLE `labelMaster` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) NOT NULL,
`type` varchar(50) NOT NULL COMMENT 'textbox, checkbox, selectbox, textarea',
`show_filter` tinyint(1) NOT NULL DEFAULT '1',
`select_all_level` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB


CREATE TABLE `categoryMaster` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`parent_id` int(11) NOT NULL DEFAULT '0',
`lft` int(11) NOT NULL DEFAULT '0',
`rgt` int(11) NOT NULL DEFAULT '0',
`level` tinyint(4) NOT NULL DEFAULT '0',
`product_count` int(11) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `lft_rgt_inx` (`lft`,`rgt`),
KEY `parent_id` (`parent_id`)
) ENGINE=InnoDB


CREATE TABLE `productMaster` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`name` varchar(255) NOT NULL,
`category_id` int(11) NOT NULL,
`status` tinyint(1) NOT NULL DEFAULT '-1' ,
`user_id` int(11) NOT NULL,
`label_value_ids` varchar (255),
`product_source_url` varchar(255) NOT NULL,
`country_id` int(4) NOT NULL,
`state_id` int(5) NOT NULL,
PRIMARY KEY (`id`),
KEY `category_id` (`category_id`),
KEY `user_id` (`user_id`),
KEY `x_area_id` (`x_area_id`),
KEY `state_id` (`state_id`),
KEY `country_id` (`country_id`),
FULLTEXT KEY `name` (`name`),
FULLTEXT KEY `label_value_ids` (`label_value_ids`)
) ENGINE=MyISAM


CREATE TABLE `product_label` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`product_id` bigint(20) NOT NULL,
`label_id` int(11) NOT NULL,
`label_value` varchar(1200) DEFAULT NULL,
`category_id` int(11) NOT NULL,
PRIMARY KEY (`id`),
KEY `product_id` (`product_id`),
KEY `category_id` (`category_id`),
KEY `label_id` (`label_id`)
) ENGINE=InnoDB

CREATE TABLE `label_values` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`label_id` int(11) NOT NULL,
`value` varchar(255) NOT NULL,
`sort_order` smallint(6) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
KEY `label_id` (`label_id`),
CONSTRAINT `label_values_ibfk_1` FOREIGN KEY (`label_id`) REFERENCES `label` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB


Please suggest how I can rewrite the query to reduce execution time and make the website fast.
I have also tried changing the database engine.
Please note: the query is already using the cache.
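The EXPLAIN shows the expensive step is table p (product_label), probing about 3,827 rows per label via the single-column label_id key. A sketch worth testing (assumes the tables as posted; measure before and after):

```sql
-- A composite key lets the p -> p2 join be resolved inside the index,
-- and (category_id, status) helps filter leaf-category products without
-- touching the row.
ALTER TABLE product_label ADD INDEX ix_label_product (label_id, product_id);
ALTER TABLE productMaster ADD INDEX ix_cat_status (category_id, status);
```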

Slow response time due to queries (no replies)

Hello community,

I've got a problem with a MySQL database: the response time of the queries is slow, although the queries are optimized to use the existing indexes.
On my test PC the MySQL server runs in a Windows 7 VMware VM, and the VM runs from a USB 3.0 hard disk. Queries against this database are fast.

The problematic database runs on a Windows 2012 R2 server (virtualized with Hyper-V). The host system has 32 GB RAM, 15k SAS hard disks and a quad-core Xeon CPU. There are no other virtualized machines on the host system.

I took some time to google and tried some adjustments, but without success. Most queries respond 1 - 3 seconds slower on the virtualized server than on my little test system.


MySQL: 5.6 CE
Connection via .NET-Connector (localhost).

Could you please take a look at my configuration? It would be awesome if you could give me some advice.
The database will be used more for reading than writing.


SHOW VARIABLES LIKE '%buffer%':
---------------------------------------------
"Variable_name" "Value"
"bulk_insert_buffer_size" "8388608"
"innodb_buffer_pool_dump_at_shutdown" "OFF"
"innodb_buffer_pool_dump_now" "OFF"
"innodb_buffer_pool_filename" "ib_buffer_pool"
"innodb_buffer_pool_instances" "8"
"innodb_buffer_pool_load_abort" "OFF"
"innodb_buffer_pool_load_at_startup" "OFF"
"innodb_buffer_pool_load_now" "OFF"
"innodb_buffer_pool_size" "878706688"
"innodb_change_buffer_max_size" "25"
"innodb_change_buffering" "all"
"innodb_log_buffer_size" "9437184"
"innodb_sort_buffer_size" "1048576"
"join_buffer_size" "262144"
"key_buffer_size" "8388608"
"myisam_sort_buffer_size" "262144000"
"net_buffer_length" "16384"
"preload_buffer_size" "32768"
"read_buffer_size" "65536"
"read_rnd_buffer_size" "262144"
"sort_buffer_size" "262144"
"sql_buffer_result" "OFF"



my.ini:
---------------------------------------------
[client]
no-beep
port=3306

[mysql]
default-character-set=utf8

[mysqld]
port=3306
datadir="D:/MySQL/Data\"
character-set-server=utf8
default-storage-engine=INNODB
sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
log-output=FILE
general-log=1
general_log_file="TERM1.log"
slow-query-log=1
slow_query_log_file="TERM1-slow.log"
long_query_time=10
log-error="TERM1.err"
max_connections=1500
query_cache_size=200M
table_open_cache=2000
tmp_table_size=129M
thread_cache_size=10
myisam_max_sort_file_size=100G
myisam_sort_buffer_size=248M
key_buffer_size=10M
read_buffer_size=64K
read_rnd_buffer_size=256K
sort_buffer_size=256K
innodb_data_home_dir="D:/MySQL/Data/"
innodb_additional_mem_pool_size=48M
innodb_flush_log_at_trx_commit=2
innodb_log_buffer_size=16M
innodb_buffer_pool_size=4G
innodb_log_file_size=128M
innodb_thread_concurrency=8
innodb_autoextend_increment=64
innodb_buffer_pool_instances=8
innodb_concurrency_tickets=5000
innodb_old_blocks_time=1000
innodb_open_files=300
innodb_stats_on_metadata=0
innodb_file_per_table=1
innodb_checksum_algorithm=0
back_log=80
flush_time=0
join_buffer_size=256K
max_allowed_packet=16M
max_connect_errors=100
open_files_limit=4161
query_cache_type=2
sort_buffer_size=256K
table_definition_cache=1400
binlog_row_event_max_size=8K
sync_master_info=10000
sync_relay_log=10000
sync_relay_log_info=10000
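Two entries in the posted my.ini are worth testing first; a sketch (change one at a time and re-measure):

```ini
# The general log writes every single statement to disk and is a common
# cause of a blanket 1-3 s slowdown; it is a debugging tool only.
general-log=0
# query_cache_type=2 (DEMAND) with a 200M cache still involves the cache
# on cacheable statements; disabling it entirely removes that contention.
query_cache_type=0
query_cache_size=0
```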


Thank you in advance.

Best regards from Germany
Jens

Eliminate filesort in update query (no replies)

I have such table which I use to implement queue in mysql:

CREATE TABLE `queue` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `queue_name` varchar(255) NOT NULL,
  `inserted` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `inserted_by` varchar(255) NOT NULL,
  `acquired` timestamp NULL DEFAULT NULL,
  `acquired_by` varchar(255) DEFAULT NULL,
  `delayed_to` timestamp NULL DEFAULT NULL,
  `priority` int(11) NOT NULL DEFAULT '0',
  `value` text NOT NULL,
  `status` varchar(255) NOT NULL DEFAULT 'new',
  PRIMARY KEY (`id`),
  KEY `queue_index` (`acquired`,`queue_name`,`priority`,`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8

My problem is that MySQL uses a filesort when I run the update, and execution is very slow (5 s with 800k rows in the table).

DESCRIBE UPDATE queue SET acquired = "test" WHERE acquired IS NULL AND queue_name = "q1" ORDER BY priority, id LIMIT 1;

+----+-------------+-------+-------+---------------+-------------+---------+-------------+--------+-----------------------------+
| id | select_type | table | type  | possible_keys | key         | key_len | ref         | rows   | Extra                       |
+----+-------------+-------+-------+---------------+-------------+---------+-------------+--------+-----------------------------+
|  1 | SIMPLE      | queue | range | queue_index   | queue_index | 772     | const,const | 409367 | Using where; Using filesort |
+----+-------------+-------+-------+---------------+-------------+---------+-------------+--------+-----------------------------+

What is strange is that when I run a SELECT query with the same WHERE conditions and ORDER BY columns, filesort is not used:

DESCRIBE SELECT id FROM queue WHERE acquired IS NULL AND queue_name = "q1" ORDER BY priority, id LIMIT 1;
+----+-------------+-------+------+---------------+-------------+---------+-------------+--------+--------------------------+
| id | select_type | table | type | possible_keys | key         | key_len | ref         | rows   | Extra                    |
+----+-------------+-------+------+---------------+-------------+---------+-------------+--------+--------------------------+
|  1 | SIMPLE      | queue | ref  | queue_index   | queue_index | 772     | const,const | 409367 | Using where; Using index |
+----+-------------+-------+------+---------------+-------------+---------+-------------+--------+--------------------------+
(Query time 0s)

Does anybody know how to avoid the filesort in the update query, or how to increase its performance?
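A common workaround (a sketch; assumes the queue table as posted, and that acquired is meant to receive a timestamp rather than the literal "test"): let the filesort-free SELECT pick the row, then update it by primary key:

```sql
START TRANSACTION;
-- Uses queue_index the same way as the fast SELECT, and locks the row.
SELECT id INTO @next
  FROM queue
 WHERE acquired IS NULL AND queue_name = 'q1'
 ORDER BY priority, id
 LIMIT 1
 FOR UPDATE;
-- Point update by primary key: no sort involved.
UPDATE queue SET acquired = NOW() WHERE id = @next;
COMMIT;
```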

Regards,
Matzz

How to get fast performance when inserting a large volume of records into a table (no replies)

We have around 15 crore (150 million) records in a table, and every 10 seconds around 400 new records are inserted. The problem is that insert performance slows down as the volume of data grows, and a large number of rows queue up waiting for insertion.

How can we overcome this problem and make data insertion fast?
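One low-risk thing to try (hypothetical table and column names): collect the ~400 rows that arrive in each 10-second window and insert them as one multi-row statement, which amortizes parsing, locking and (for InnoDB) the commit flush across the batch:

```sql
-- big_table(id, payload) stands in for the real 150M-row table.
INSERT INTO big_table (id, payload) VALUES
    (NULL, 'row 1'),
    (NULL, 'row 2'),
    (NULL, 'row 3');   -- one VALUES tuple per queued row
```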

Performance of insert on duplicate key update (4 replies)

Hi, I am on MySQL 5.5. I have a large InnoDB table (~30M rows) and about 4-5K INSERT ... ON DUPLICATE KEY UPDATE queries every 5 minutes. As the table grows, these inserts become slower and slower. I added partitioning to the table by a timestamp field. This version doesn't support explain plans for inserts and updates. Is there a way to limit the number of partitions the query scans for updates? In other words, is there a way to include the timestamp field in my INSERT ... ON DUPLICATE KEY UPDATE query?
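For reference, explicit partition selection in DML arrived in MySQL 5.6, so on 5.5 there is no way to name the target partition directly; after an upgrade, a statement of this shape becomes possible (hypothetical table, partition and column names):

```sql
-- 5.6+ only: restrict the statement to one named partition.
INSERT INTO metrics PARTITION (p2014_10) (id, ts, val)
VALUES (42, NOW(), 10)
ON DUPLICATE KEY UPDATE val = VALUES(val);
```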

Thanks
Ravi

mysql 5.0.22 and multi-core server support (3 replies)

I have a Sparc Solaris server running mysql v5.0.22. It is running on a Sunfire v440, with 4 CPUs. We are having performance issues and want to get a server with multi-core CPUs.

Will MySQL 5.0.22 take advantage of the multiple cores in a server that has 2 multi-core CPUs, please?

If not, which versions of MySQL support it, please? I would upgrade, but I don't know if I should do such a big jump as 5.0.x to 5.5. I have successfully done an upgrade from 5.0.22 to 5.1.57 and then to 5.5 before.

Thanks
Christine

Why has INSERT performance not increased with sub-tables? (5 replies)

Hi~

Recently I did an insertion test.

MySQL Version 5.6.20 CentOS 5.4 InnoDB
X3430 (4 core) 8G RAM WD blue disk NO RAID
Default configuration ( my.cnf )

First, I started 50 connections and inserted 5 million records into one table; the elapsed times were as follows:

Records (×10,000): 0-100 100-200 200-300 300-400 400-500
Seconds : 108 164 280 575 670

Then I divided the table into 10 sub-tables (50 connections as before) and inserted 500,000 records into each sub-table, 5 million records in total. The test results are as follows:

Records (×10,000): 0-100 100-200 200-300 300-400 400-500
Seconds : 114 187 499 775 587


So, judging from the first test's results, insertion is very fast while the table holds fewer than 1 million records; but why did using 10 sub-tables not increase performance, and even come out slower than the first test?



Table structure (an auto_increment primary key and a varchar index) and the test procedures are as follows:

CREATE TABLE `testa` (
`Id` int(11) NOT NULL AUTO_INCREMENT,
`var1` varchar(64) COLLATE utf8_bin NOT NULL DEFAULT '',
`int1` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`Id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;


DROP PROCEDURE IF EXISTS TestA;
CREATE PROCEDURE TestA(
IN paramVar1 VARCHAR(64),
IN paramInt1 INT
)
label_start : BEGIN

START TRANSACTION;

SET @paramVar1 = paramVar1;
SET @paramInt1 = paramInt1;
SET @statement = CONCAT("INSERT INTO `testa`(`Var1`, `Int1` ) VALUES ( ?, ? );" );
PREPARE stmt FROM @statement;
EXECUTE stmt USING @paramVar1, @paramInt1;
DEALLOCATE PREPARE stmt;

COMMIT;

END;



DROP PROCEDURE IF EXISTS CreateTestBTable;
CREATE PROCEDURE CreateTestBTable( )
BEGIN

DECLARE TABLE_COUNT INT DEFAULT 10;
DECLARE m_nLoopCount INT DEFAULT 0;

SET m_nLoopCount = 0;
REPEAT

SET @statement = CONCAT(
"CREATE TABLE `testb_", m_nLoopCount, "` (",
"`Id` int(11) NOT NULL AUTO_INCREMENT,",
"`var1` varchar(64) COLLATE utf8_bin NOT NULL DEFAULT '',",
"`int1` int(11) NOT NULL DEFAULT '0',",
"PRIMARY KEY (`Id`),",
"KEY `VarIndex` (`var1`)",
") ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;" );
PREPARE stmt FROM @statement;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

SET m_nLoopCount = m_nLoopCount + 1;

UNTIL m_nLoopCount >= TABLE_COUNT
END REPEAT;

END;


DROP PROCEDURE IF EXISTS DropTestBTable;
CREATE PROCEDURE DropTestBTable( )
BEGIN

DECLARE TABLE_COUNT INT DEFAULT 10;
DECLARE m_nLoopCount INT DEFAULT 0;

SET m_nLoopCount = 0;
REPEAT

SET @statement = CONCAT( "DROP TABLE `testb_", m_nLoopCount, "`;" );
PREPARE stmt FROM @statement;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

SET m_nLoopCount = m_nLoopCount + 1;

UNTIL m_nLoopCount >= TABLE_COUNT
END REPEAT;

END;

DROP PROCEDURE IF EXISTS TestB;
CREATE PROCEDURE TestB(
IN paramVar1 VARCHAR(64),
IN paramInt1 INT
)
label_start : BEGIN


DECLARE TABLE_COUNT INT DEFAULT 10;

START TRANSACTION;

SET @TableIndex = CRC32(paramVar1) % TABLE_COUNT;
SET @paramVar1 = paramVar1;
SET @paramInt1 = paramInt1;
SET @statement = CONCAT("INSERT INTO `testb_", @TableIndex, "`(`Var1`, `Int1` ) VALUES ( ?, ? );" );
PREPARE stmt FROM @statement;
EXECUTE stmt USING @paramVar1, @paramInt1;
DEALLOCATE PREPARE stmt;

COMMIT;

END;

Reduce the Query Execution Time Between MYSQL & VC++ (8 replies)

Hi,

I'm using MySQL Server 5.6, ODBC 5.2 and Visual Studio 2012.

I wrote a small program using VC++ and MySQL that performs a read operation. The code works, but the query execution time is 5 seconds. How can I reduce this? Please help me.

Sample code as follows:

void MainScreen::OnreadProfileName()
{
    CDatabase database;
    CString SqlString;
    CString sDsn;
    CString pname;

    sDsn.Format("Driver={MySQL ODBC 5.2 ANSI Driver};Server=localhost;Database=test;User=root;Password=client;Option=4;");
    TRY
    {
        // Open the database connection via ODBC
        database.Open(NULL, false, false, sDsn);
        CRecordset recset(&database);
        SqlString = "SELECT PNAME FROM PROFILEMASTER";
        recset.Open(CRecordset::forwardOnly, SqlString, CRecordset::readOnly);

        // Walk the result set and fill the combo box one row at a time
        while (!recset.IsEOF())
        {
            recset.GetFieldValue("PNAME", pname);
            m_proname.InsertString(0, pname);   // ComboBox
            recset.MoveNext();
        }
        database.Close();
    }
    CATCH(CDBException, e)
    {
        // If a database exception occurred, show an error message
        AfxMessageBox("Database error: " + e->m_strError);
    }
    END_CATCH;
}

Which server hardware would you pick? (4 replies)

I have two servers. Which should perform better with a 90% write load and a 10% read load? The normal number of user connections is about 70.

Server #1 - MySQL 5.5.40
OS: Windows 2008 Server Standard
RAM: 16 gigs (mysql can have 7 gigs)
Processor: Intel i7-3555LE @ 2.5GHz (CPU Benchmark score 4080)
Disk 1: Intel SSD 330 Series 120GB (for OS)
Disk 2: WD Green 2TB – 64MB Cache (for databases – no RAID)
• MySQL is not the primary application on this server

Server #2 – MySQL 5.5.40
OS: Debian 3.2.63-2 x86_64
RAM: 32 gigs (mysql can have all the ram)
Processor: Intel Xeon L5420 @ 2.5GHz (dual processor, CPU Benchmark score 6605)
Disk 1: WD Red 1 TB – 16 MB Cache (4 drives in RAID 1+0 – RAID 10)
• MySQL is the only application on this server

So, looking at these two servers which one would perform better?

MySQL query execution time differs between executions (5 replies)

I'm using MySQL 5.6.19 community edition and I have a large table (about 20,000,000 rows, about 250,000 inserts per day). The engine I'm using is InnoDB.

I'm running a query that joins to another table, filtering and sorting the data according to specific criteria. I've created an index for this query because it can run many times.

In addition, I created a routine that runs every night and "shrinks" the table. It finds which rows can be deleted and which should be updated according to the business logic; it deletes about 150,000 rows every night and updates a few thousand.

Now here's the question:

After the stored routine finishes, the query execution time is lightning fast (milliseconds), but if the routine hasn't run in a long period of time, or right after a DB restart, the same query takes 20 seconds or more. The execution plan looks exactly the same whether it is fast or slow; here is the execution plan:

id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE stationimp1_ ref_or_null PRIMARY,idx_station_complex_1 idx_station_complex_1 2 const 55 Using where; Using index; Using temporary; Using filesort
1 SIMPLE stationeve0_ ref idx_station_event_log_1,idx_station_event_log_complex_1 idx_station_event_log_complex_1 9 DMS.stationimp1_.id 575 Using index condition

The question is why there is such a huge difference between the execution times, and how I could make the query fast even when the routine hasn't run.

A little more on the routine: it inserts the latest rows (since its last run) into a temporary table, does all the business logic on the temporary table, inserts the ids to delete into one table and the ids to update into another, then deletes and updates the original big table row by row accordingly.
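The symptom (fast right after the routine has touched the data, slow after a restart or a long idle period) is consistent with a cold InnoDB buffer pool rather than a plan difference, which would also explain the identical EXPLAIN. MySQL 5.6 can persist and reload the pool across restarts; a sketch (my.cnf):

```ini
[mysqld]
# Save the list of cached pages at shutdown and re-warm them at startup,
# so the first queries after a restart do not start from a cold cache.
innodb_buffer_pool_dump_at_shutdown = ON
innodb_buffer_pool_load_at_startup  = ON
```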

innodb_buffer_pool_read_requests global status wraparound (1 reply)

Using MySQL 5.6.17: is there a limit on the innodb_buffer_pool_read_requests global status counter somewhere between 4,000,000,000 and 5,000,000,000?

Where can I find the list of 'deprecated' global status counters?

Thanks