Use Percona's tools for large GLPI databases
Percona is a leading provider of unbiased open source database solutions. They offer a wide range of products, from database server / clustering solutions to monitoring and administration tools.
For changes (column modifications or data migrations) in large GLPI databases based on MySQL / MariaDB, you can use the pt-online-schema-change tool, included in the percona-toolkit developed by Percona, for example to:
Convert all data from the MyISAM engine to the InnoDB engine
Migrate all DATETIME columns to the TIMESTAMP format (for timezone support)
Add / remove / rebuild an index on the glpi_logs table
By "large database", we mean one whose glpi_logs table exceeds 5 GB of data, but these tools can of course be used on databases of any size.
pt-online-schema-change is more efficient and safer than the command-line tools (bin/console) provided with the GLPI core release, which remain suitable for smaller databases.
Installing percona-toolkit:
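A minimal sketch, assuming a Debian/Ubuntu system where percona-toolkit is available in the distribution repositories:

```bash
# Install percona-toolkit from the distribution repositories
apt-get install percona-toolkit
```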
OR
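Or install it from Percona's own APT repository to get the latest version (the percona-release bootstrap package is the one documented by Percona; adapt for your distribution):

```bash
# Register the Percona APT repository, then install the toolkit
wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release -sc)_all.deb
dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb
apt-get update
apt-get install percona-toolkit
```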
Using pt-online-schema-change:
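As a general invocation sketch (the DSN fields u, h, D and t are placeholders; concrete values are shown in the sections below):

```bash
# General form: test first with --dry-run, then run with --execute
pt-online-schema-change --alter "<ALTER clause>" \
    u=<user>,h=<host>,D=<database>,t=<table> --ask-pass --dry-run
```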
As indicated in the official documentation:
Percona Toolkit is mature, proven in the real world, and well tested, but all database tools can pose a risk to the system and the database server (like any other sysadmin/DBA tools).
Danger
Before using this tool, please:
Read the tool’s documentation
Review the tool’s known "BUGS"
Test the tool on a non-production server
Back up your production server and verify the backups
TECLIB cannot be held responsible for improper usage leading to data loss.
The main principle of pt-online-schema-change is to create a temporary table corresponding to the ALTER you want to perform, copy the data from the old table to the new one, and drop the old table if everything is OK.
Therefore, pay attention to disk space usage: it will temporarily double (or triple), then return to normal (or decrease, in the case of an "OPTIMIZE").
Disk size
If your glpi_logs table is 25 GB before the intervention, we recommend having at least 50 to 75 GB of free disk space to perform the operation.
Warning about execution time
Although data migrations are much faster with this tool than with GLPI's bin/console commands, we always advise (like any self-respecting Linux system administrator) running your commands in a virtual terminal / terminal multiplexer such as tmux or screen.
On Ubuntu 20.04 LTS: apt-get install tmux
Create a new tmux session: tmux new -s innodb_migration
Run your commands ...
Detach without closing the session: press CTRL+b, then the d key
Connect to the existing session: tmux attach -t innodb_migration
Quit a session: exit
Migrate data from MyISAM to InnoDB
GLPI command:
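A sketch, assuming a GLPI version recent enough to ship this console migration command:

```bash
# GLPI console migration (run from the GLPI root directory)
php bin/console glpi:migration:myisam_to_innodb
```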
pt-online-schema-change command:
--alter "ENGINE=InnoDB"
Here is the complete CLI usage; you can test with --dry-run instead of --execute:
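For example (the values glpi, localhost and glpi_logs are placeholders for your own user, host, database and table):

```bash
# Convert glpi_logs to InnoDB; swap --execute for --dry-run to test
pt-online-schema-change --alter "ENGINE=InnoDB" \
    u=glpi,h=localhost,D=glpi,t=glpi_logs --ask-pass --execute
```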
You can change the variables:
u: the database user
h: the database host
D: the database name
t: the GLPI table to alter
Migrate DATETIME columns to TIMESTAMP (timezone support)
GLPI command:
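A sketch, assuming a GLPI version (9.5 or later) that ships the timestamps migration command:

```bash
# GLPI console migration for timezone support
php bin/console glpi:migration:timestamps
```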
pt-online-schema-change command:
--alter "MODIFY COLUMN date_mod TIMESTAMP NULL DEFAULT NULL"
Here is the complete CLI usage; you can test with --dry-run instead of --execute:
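For example (the values glpi, localhost and glpi_computers are placeholders; adapt the column and table names to your case):

```bash
# Convert the date_mod column to TIMESTAMP; swap --execute for --dry-run to test
pt-online-schema-change --alter "MODIFY COLUMN date_mod TIMESTAMP NULL DEFAULT NULL" \
    u=glpi,h=localhost,D=glpi,t=glpi_computers --ask-pass --execute
```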
You can change the variables:
u: the database user
h: the database host
D: the database name
t: the GLPI table to alter
Rebuild corrupted indexes
If the indexes of one of your tables are corrupted by accident or mistake (discovered, for example, with the MySQL CHECK TABLE command), we recommend re-running the --alter "ENGINE=InnoDB" command: your table indexes will be rebuilt while keeping your data safe.
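As an illustration (glpi_logs and the credentials are placeholders):

```bash
# 1. Check the table for corruption from the MySQL client
mysql -u glpi -p -e "CHECK TABLE glpi_logs" glpi

# 2. Rebuild the table and its indexes in place
pt-online-schema-change --alter "ENGINE=InnoDB" \
    u=glpi,h=localhost,D=glpi,t=glpi_logs --ask-pass --execute
```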
Recover disk space
After a lot of cleaning in your glpi_logs table (deleting millions of rows, for example), you may notice that the disk space used has not changed on the filesystem side.
Without going into the details of how the InnoDB storage engine works, this is normal: you need to "OPTIMIZE" your table.
To do this, we once again recommend re-running the --alter "ENGINE=InnoDB" command: your table indexes will be rebuilt, your data kept safe, and the unused disk space reclaimed.
Alter all tables of a database
Here is a script to alter all the tables of a specific database.
Be sure to adapt the ALTER_COMMAND variable with the right action to perform (see the previous chapters); you can test with --dry-run instead of --execute.
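A minimal sketch of such a script, assuming Bash and the placeholder values below (adapt DB_USER, DB_HOST, DB_NAME and ALTER_COMMAND to your environment):

```bash
#!/bin/bash
# Alter every table of a database with pt-online-schema-change.
# All values below are examples; adapt them to your environment.
DB_USER="glpi"
DB_HOST="localhost"
DB_NAME="glpi"
ALTER_COMMAND="ENGINE=InnoDB"

# List all the tables of the database (prompts once for the password).
TABLES=$(mysql -u "$DB_USER" -p -h "$DB_HOST" -N -B -e "SHOW TABLES" "$DB_NAME")

for TABLE in $TABLES; do
    echo "Altering table: $TABLE"
    # --ask-pass prompts for the password on each table.
    # Swap --execute for --dry-run to test without touching the data.
    pt-online-schema-change \
        --alter "$ALTER_COMMAND" \
        --ask-pass \
        u="$DB_USER",h="$DB_HOST",D="$DB_NAME",t="$TABLE" \
        --execute
done
```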
If for any reason you lose your SSH connection/session, the command will keep running inside the virtual terminal without you!