
The total number of locks exceeds the lock table size

I'm running a report in MySQL. One of the queries involves inserting a large number of rows into a temp table. When I try to run it, I get this error:

Error code 1206: The number of locks exceeds the lock table size.

The queries in question are:

create temporary table SkusBought(
customerNum int(11),
sku int(11),
typedesc char(25),
key `customerNum` (customerNum)
)ENGINE=InnoDB DEFAULT CHARSET=latin1;
insert into skusBought
select t1.* from
    (select customer, sku, typedesc from transactiondatatransit
    where (cat = 150 or cat = 151)
    AND daysfrom07jan1 > 731
group by customer, sku
union
select customer, sku, typedesc from transactiondatadelaware
    where (cat = 150 or cat = 151)
    AND daysfrom07jan1 > 731
group by customer, sku
union
select customer, sku, typedesc from transactiondataprestige
    where (cat = 150 or cat = 151)
    AND daysfrom07jan1 > 731
group by customer, sku) t1
join
(select customernum from topThreetransit group by customernum) t2
on t1.customer = t2.customernum;

I've read that changing the configuration file to increase the buffer pool size will help, but that does nothing. What would be the way to fix this, either as a temporary workaround or a permanent fix?

EDIT: changed part of the query. Shouldn't affect it, but I did a find-replace all and didn't realize it screwed that up. Doesn't affect the question.

EDIT 2: Added typedesc to t1. I changed it in the query but not here.


This issue can be resolved by setting a higher value for the MySQL variable innodb_buffer_pool_size. The default value of innodb_buffer_pool_size is 8,388,608 bytes (8 MB).

To change the setting for innodb_buffer_pool_size, follow the steps below.

  1. Locate the my.cnf file on the server. On Linux servers this is usually /etc/my.cnf
  2. Add the line innodb_buffer_pool_size=64M under the [mysqld] section of this file
  3. Restart the MySQL server

To restart the MySQL server, you can use either of the following two commands:

  1. service mysqld restart
  2. /etc/init.d/mysqld restart
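As a side note, on MySQL 5.7.5 and later innodb_buffer_pool_size is a dynamic variable, so (assuming you have the required privileges) a rough sketch of raising it at runtime, without a restart, is:

# Check the current value (reported in bytes)
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';

# Raise it to 256 MB for the running server (MySQL 5.7.5+; the value is in bytes)
SET GLOBAL innodb_buffer_pool_size = 268435456;

A change made this way is lost on the next restart, so you should still persist it in my.cnf as described above.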

Reference: The total number of locks exceeds the lock table size


I found another way to solve it: use a table lock. Of course, this can be inappropriate for your application if you need to update the table at the same time.

See: Try using LOCK TABLES to lock the entire table, instead of InnoDB's default MVCC row-level locking. If I'm not mistaken, the "lock table" refers to the InnoDB internal structure that stores row and version identifiers for the MVCC implementation, with a bit identifying that a row is being modified in a statement; with a table of 60 million rows, it probably exceeds the memory allocated to it. The LOCK TABLES command should alleviate this problem by setting a table-level lock instead of row-level locks:

SET @@AUTOCOMMIT=0;
LOCK TABLES avgvol WRITE, volume READ;
INSERT INTO avgvol(date,vol)
SELECT date,avg(vol) FROM volume
GROUP BY date;
UNLOCK TABLES;

Jay Pipes, Community Relations Manager, North America, MySQL Inc.


From the MySQL documentation (which, as I can see, you have already read):

1206 (ER_LOCK_TABLE_FULL)

The total number of locks exceeds the lock table size. To avoid this error, increase the value of innodb_buffer_pool_size. Within an individual application, a workaround may be to break a large operation into smaller pieces. For example, if the error occurs for a large INSERT, perform several smaller INSERT operations.

If increasing innodb_buffer_pool_size doesn't help, then just follow the documentation's suggestion and split up your INSERT into three. Skip the UNIONs and make three INSERTs, each with a JOIN to the topThreetransit table, as sketched below.
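A minimal sketch of the first of those three inserts, based on the tables in the question (the other two would just swap in transactiondatadelaware and transactiondataprestige):

INSERT INTO SkusBought (customerNum, sku, typedesc)
SELECT t1.customer, t1.sku, t1.typedesc
FROM (
    SELECT customer, sku, typedesc
    FROM transactiondatatransit
    WHERE (cat = 150 OR cat = 151)
      AND daysfrom07jan1 > 731
    GROUP BY customer, sku
) t1
JOIN (
    SELECT customernum FROM topThreetransit GROUP BY customernum
) t2 ON t1.customer = t2.customernum;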


First, you can check the current buffer size with the SQL command show global variables like 'innodb_buffer%';

The solution is to find your my.cnf file and add:

[mysqld]
innodb_buffer_pool_size=1G # depends on your data and machine

DO NOT forget to add the [mysqld] section header; otherwise, it won't work.

In my case (Ubuntu 16.04), my.cnf is located in /etc/mysql/.
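After the restart, you can confirm that the new value was picked up (the value is reported in bytes); for example:

SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';

# Or, to see it in megabytes:
SELECT @@innodb_buffer_pool_size / 1024 / 1024 AS buffer_pool_mb;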


I am running MySQL on Windows with MySQL Workbench. Go to Server > Server Status. At the top it shows the configuration file path (C:\ProgramData\MySQL\...\my.ini).

Then open the my.ini file, press Ctrl+F, and find buffer_pool_size. Set the value higher; I would recommend 64 MB (the default is 8 MB).

Restart the server by going to Instance > Startup/Shutdown > Stop Server (and then start the server again).

In my case I could not delete entries from my table.


Fixing Error code 1206: The number of locks exceeds the lock table size.

In my case, I work with MySQL Workbench (5.6.17) running on Windows with WampServer 2.5.

For Windows/WampServer you have to edit the my.ini file (not the my.cnf file).

To locate this file, go to the menu Server > Server Status (in MySQL Workbench) and look under Server Directories / Base Directory.


In the my.ini file there are sections for different settings. Look for the [mysqld] section (create it if it does not exist) and add the setting innodb_buffer_pool_size=4G:

[mysqld]
innodb_buffer_pool_size=4G

The appropriate buffer pool size will depend on your specific machine; in most cases, 2G or 4G will fix the problem.

Remember to restart the server so it picks up the new configuration. This corrected the problem for me.

Hope it helps!


I was getting the same issue in MySQL while running a SQL script: Error code 1206: The number of locks exceeds the lock table size.

This is a MySQL configuration issue, so I made some changes in my.ini, and the issue is now resolved on my system.

You need to make some changes in my.ini, which is available at the following path: C:\ProgramData\MySQL\MySQL Server 5.7\my.ini. Update the following fields in the my.ini config file:

key_buffer_size=64M
read_buffer_size=64M
read_rnd_buffer_size=128M
innodb_log_buffer_size=10M
innodb_buffer_pool_size=256M
query_cache_type=2
max_allowed_packet=16M

After making all of the above changes, restart the MySQL service.


If you have properly structured your tables so that each contains relatively unique values, then a less intensive way to do this would be to do three separate INSERT INTO statements, one for each table, with the join filter in place for each insert:

INSERT INTO SkusBought...

SELECT t1.customer, t1.SKU, t1.TypeDesc
FROM transactiondatatransit AS T1
LEFT OUTER JOIN topThreetransit AS T2
ON t1.customer = t2.customernum
WHERE T2.customernum IS NOT NULL

Repeat this for the other two tables; copy/paste is a fine method, simply change the FROM table name. If you are trying to prevent duplicate entries in your SkusBought table, you can add the following join in each section, before the WHERE clause:

LEFT OUTER JOIN SkusBought AS T3
ON  t1.customer = t3.customer
AND t1.sku = t3.sku

and then add this as the last line of the WHERE clause:

AND t3.customer IS NULL

Your initial code uses a number of sub-queries, and the UNION statement can be expensive because it first creates its own temporary table to populate the data from the three separate sources before inserting into the table you want, along with running another sub-query to filter the results.


On Windows, if you have MySQL Workbench: go to Server Status and find the location of the running server's configuration file. In my case it was:

C:\ProgramData\MySQL\MySQL Server 5.7

Open the my.ini file and find buffer_pool_size. Set the value higher (the default is 8M). This is how I fixed the problem.


It is worth saying that the figure used for this setting is in BYTES - found that out the hard way!


The answer below does not directly answer the OP's question. However, I'm adding it here because this page is the first result when you Google "The total number of locks exceeds the lock table size".


If the query you are running parses an entire table that spans millions of rows, you can try a WHILE loop instead of changing limits in the configuration.

The WHILE loop will break it into pieces. Below is an example that loops over an indexed DATETIME column.

# Drop
DROP TABLE IF EXISTS
new_table;

# Create (we will add keys later)
CREATE TABLE
new_table
(
    num INT(11),
    row_id VARCHAR(255),
    row_value VARCHAR(255),
    row_date DATETIME
);

# Change the delimiter
DELIMITER //

# Create procedure
CREATE PROCEDURE do_repeat(IN current_loop_date DATETIME)
BEGIN

    # Loops WEEK by WEEK until NOW(). Change WEEK to something shorter, like DAY, if you still get the lock errors.
    WHILE current_loop_date <= NOW() DO

        # Do something
        INSERT INTO
            new_table
            (
                num,
                row_id,
                row_value,
                row_date
            )
        SELECT
            # Do something interesting here
            num,
            row_id,
            row_value,
            row_date
        FROM
            old_table
        WHERE
            row_date >= current_loop_date AND
            row_date < current_loop_date + INTERVAL 1 WEEK;

        # Increment
        SET current_loop_date = current_loop_date + INTERVAL 1 WEEK;

    END WHILE;

END//

# Change the delimiter back
DELIMITER ;

# Run
CALL do_repeat('2017-01-01');

# Cleanup
DROP PROCEDURE IF EXISTS do_repeat;

# Add keys
ALTER TABLE
    new_table
MODIFY COLUMN
    num int(11) NOT NULL,
ADD PRIMARY KEY
    (num),
ADD KEY
    row_id (row_id) USING BTREE,
ADD KEY
    row_date (row_date) USING BTREE;

You can also adapt it to loop over the "num" column if your table doesn't use a date.
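A minimal sketch of that numeric variant (assuming num is an indexed INT column in old_table and an illustrative batch size of 100,000 rows per iteration):

DELIMITER //

CREATE PROCEDURE do_repeat_numeric(IN current_num INT)
BEGIN
    DECLARE max_num INT;
    SELECT MAX(num) INTO max_num FROM old_table;

    # Copy one numeric range per iteration instead of one week per iteration
    WHILE current_num <= max_num DO
        INSERT INTO new_table (num, row_id, row_value, row_date)
        SELECT num, row_id, row_value, row_date
        FROM old_table
        WHERE num >= current_num
          AND num < current_num + 100000;

        SET current_num = current_num + 100000;
    END WHILE;
END//

DELIMITER ;

CALL do_repeat_numeric(0);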

Hope this helps someone!


Good Day,

I have had the same error when trying to remove millions of rows from a MySQL table.

My resolution had nothing to do with changing the MySQL configuration file; I just reduced the number of rows targeted by specifying a maximum id per transaction. Instead of targeting all the rows in one transaction, I would suggest targeting portions per transaction. It might take more transactions to get the job done, but at least you will get somewhere, rather than fiddling around with MySQL configuration.

Example:

delete from myTable where groupKey in ('any1', 'any2') and id < 400000;

and not

delete from myTable where groupKey in ('any1', 'any2');

The query might still be optimized further by using GROUP BY and ORDER BY clauses.
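If you don't know the id boundaries up front, a sketch of the same idea (assuming an indexed id column, and a batch size of 10,000 chosen just for illustration) is to delete in fixed-size batches and simply re-run the statement until it affects zero rows:

# Repeat until the statement reports 0 rows affected
DELETE FROM myTable
WHERE groupKey IN ('any1', 'any2')
ORDER BY id
LIMIT 10000;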


I was facing a similar problem when trying to insert a few million rows into a database using Python. The solution is to group the inserts into smaller chunks to reduce memory usage, use executemany to insert each chunk, and commit after each chunk instead of executing a single commit at the end.

def insert(query, items, conn):
    # items is a pandas DataFrame; conn is a DB-API connection.
    # Split the rows into chunks, insert each chunk with executemany,
    # and commit after every chunk.
    GROUPS = 10
    total = items.shape[0]
    group = total // GROUPS + 1  # round up so the last partial chunk is included
    items = list(items.itertuples(name=None, index=None))
    cursor = conn.cursor()
    for i in range(GROUPS):
        cursor.executemany(query, items[group * i : group * (i + 1)])
        conn.commit()
        print('#', end='')  # simple progress bar
    print()

There's also a neat progress bar in the implementation above.
