posted by qubix on March 8, 2020

TOOLS USED:

* centos 6 with EPEL / mysql 5.1 64bit
* undrop-for-innodb (https://github.com/twindb/undrop-for-innodb)
* mysql-utilities 1.6 (https://github.com/mysql/mysql-utilities)
* ...and luck...

0) Prepare recovery environment

install centos 6 final version 64bit in a vm or spare pc (preferably a VM)

 yum update

install epel repo

 yum install epel-release

install some stuff

 yum install nano mc zip flex make gcc bison 

install mysql server

 yum install mysql-server 


change mysql config /etc/my.cnf with the following:


[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
symbolic-links=0
max_connections = 2500
query_cache_limit = 2M
tmp_table_size=200M
query_cache_size=150M
key_buffer_size=300M
max_heap_table_size=300M
max_allowed_packet=500M
net_read_timeout=600
net_write_timeout=180
interactive_timeout=86400
log_error=/var/log/mysql_error.log
innodb_file_per_table=1
innodb_force_recovery=1

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

execute the following to create the mysql error log file and restart mysql

 touch /var/log/mysql_error.log && chmod 660 /var/log/mysql_error.log
 chown mysql:mysql /var/log/mysql_error.log
 /etc/init.d/mysqld restart

read about innodb recovery levels, educate yourself!
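As a quick reference, this is roughly what the levels mean (the symbolic names come from the MySQL source); start at 1 and raise it only if mysqld still refuses to start, since levels 4 and above can permanently damage data:

```ini
# innodb_force_recovery levels, set in the [mysqld] section:
#   1  SRV_FORCE_IGNORE_CORRUPT   start the server even if it detects a corrupt page
#   2  SRV_FORCE_NO_BACKGROUND    do not run background threads (e.g. the master thread)
#   3  SRV_FORCE_NO_TRX_UNDO      do not roll back transactions after crash recovery
#   4  SRV_FORCE_NO_IBUF_MERGE    do not do insert buffer merge operations
#   5  SRV_FORCE_NO_UNDO_LOG_SCAN treat incomplete transactions as committed
#   6  SRV_FORCE_NO_LOG_REDO      do not apply the redo log
# 4 and above are dangerous; the higher the level, the more read-only InnoDB becomes
innodb_force_recovery=1
```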

I recommend installing a minimum desktop environment like lxde or xfce, and if in a VM guest additions to enable shared clipboard and seamless mouse integration

1) install software

git clone the above mentioned tools

1st problem: mysql-utilities requires the mysql python connector.

 yum install mysql-connector-python.noarch

ok so now to install mysql-utilities, run python setup.py install from its source directory

undrop-for-innodb has a makefile, so run make to compile it (it's written in C, with a flex/bison-generated parser)

Our database was made with mysql 5.7, not the 5.1 we have, so obviously we will face trouble along the way.

2) grab the db table structures using the mysqlfrm util

We'll use the diagnostic mode of mysqlfrm because we run mysql 5.1 instead of 5.7

mysqlfrm --diagnostic /where/thedata/reside/*.frm > ~/db_structure.sql

Do not try to do it the spawned-server way unless the .frm files you have were generated by the same mysql version as the one running in the recovery environment.
This produces CREATE TABLE statements for all the .frm files we have.

3) ok now we have our table structure

First open this db_structure.sql file and replace every line of the form CREATE TABLE mydb.mytable ( with CREATE TABLE mytable (

or else the yyparser will fail in the next step!
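The replacement can be scripted instead of done by hand. A sketch with GNU sed, assuming the schema is literally named mydb (swap in your own name); it handles both the plain and the backquoted prefix forms:

```shell
# demo input; run the same sed with -i against your real ~/db_structure.sql
echo 'CREATE TABLE `mydb`.`Atom` (' |
  sed 's/CREATE TABLE `\?mydb`\?\./CREATE TABLE /'
# prints: CREATE TABLE `Atom` (
```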

We now have to copy each CREATE TABLE statement to its own file. E.g. if we have a table Atom.frm and another one Objects.frm, we copy each CREATE TABLE statement into its own separate .sql file, so we end up with 2 files, Atom.sql and Objects.sql (you'll see why).
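The splitting can also be automated with awk. A self-contained sketch, assuming the schema prefix has already been stripped as described above; the here-doc is just a stand-in for your real db_structure.sql:

```shell
# stand-in for the real db_structure.sql produced by mysqlfrm
cat > db_structure.sql <<'EOF'
CREATE TABLE Atom (
  id int(11) NOT NULL
) ENGINE=InnoDB;
CREATE TABLE Objects (
  id int(11) NOT NULL
) ENGINE=InnoDB;
EOF

# write each CREATE TABLE statement into its own <tablename>.sql file
awk '/^CREATE TABLE / {
       name = $3
       gsub(/`/, "", name)      # drop backquotes around the table name
       sub(/\(.*/, "", name)    # drop a "(" glued to the name
       out = name ".sql"
     }
     out { print > out }' db_structure.sql

ls Atom.sql Objects.sql        # one file per table, ready for the c_parser step
```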

4a) now let's move to the TWINDB recovery tool

We'll use the stream_parser to extract data from our .ibd files. Because we may have many, many tables, let's make our life a little easier:


 cd ourdbdirectory/
 echo '#!/bin/sh' > ~/table_data_ibd.sh
 ls -1 *.ibd >> ~/table_data_ibd.sh
 sed -i '/\.ibd$/ s/^/\.\/stream_parser -f ~\/ourdbdirectory\//' ~/table_data_ibd.sh
 chmod +x ~/table_data_ibd.sh
 cp ~/table_data_ibd.sh where_undrop-for-innodb_is/
 cd where_undrop-for-innodb_is/
 ./table_data_ibd.sh

this will generate all the files needed for the next step. It essentially dumps the data pages from the .ibd files, so we can then construct mysql LOAD DATA files which we can import into our db again.

4b) now moving to the c_parser

the general command is ./c_parser -6f pages-table.ibd/FIL_PAGE_INDEX/ -t table-create.sql > dumps/default/table 2>dumps/default/table.sql

- -6f: 6 means the ibd file was generated by a version of MySQL 5.6+ (in this case it was 5.7); f specifies the .page file we are going to parse
- -t table-create.sql: this file contains the CREATE TABLE statement we generated previously
- > dumps/default/table: the dumped data will be in this file. This is actually a text file compatible with the LOAD DATA LOCAL INFILE command. dumps/default is simply the folder I used for storing the exported data.
- 2>dumps/default/table.sql: this is the .sql file which will contain the LOAD DATA LOCAL INFILE statement. So in the end we can simply run this file to import the data.

Again, because we may have many, many files, let's make our life easier using good ole linux cli utils:

 echo '#!/bin/sh' > ~/table_parser_data.sh
 find . -maxdepth 1 -type f -exec echo './c_parser -6f pages-{}/FIL_PAGE_INDEX/ -t ~/ourdatabasedir/{}_create.sql > dumps/default/{} 2>dumps/default/{}.sql' \; | grep ibd | sed 's/.\///2g' | sed 's/.ibd//2g' >> ~/table_parser_data.sh
 chmod +x ~/table_parser_data.sh && cp ~/table_parser_data.sh where_undrop-for-innodb_is/

now let's run it


 cd where_undrop-for-innodb_is/
 ./table_parser_data.sh



When it is finished, you'll see that a lot of files have been created in the dumps folder, with the LOAD statement and the data for each table.

We can import them now in our db and see what happens!

Just copy the sql from the .sql files and run it in phpmyadmin, or import from the cli. You can concatenate all of them so you'll have to import only one file.
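Concatenating is a one-liner. A self-contained sketch, assuming the dumps/default layout from the c_parser step; the two echo lines just stand in for the real per-table .sql files, skip them on an actual recovery:

```shell
mkdir -p dumps/default
# stand-ins for the real c_parser output
echo '-- LOAD statement for Atom'    > dumps/default/Atom.sql
echo '-- LOAD statement for Objects' > dumps/default/Objects.sql

# merge every per-table .sql into a single import file
cat dumps/default/*.sql > all_tables.sql
```

A single `mysql --local-infile=1 -u root -p yourdb < all_tables.sql` then imports everything in one go.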

Beware: mysql may deny LOAD DATA LOCAL by default, and if you use phpmyadmin it might be disabled by the php settings.
In any case, to enable it go to
- my.cnf and add the line local_infile=ON, or if it is already present change its value to ON
- php.ini and add mysqli.allow_local_infile=On, or uncomment it if it is already there
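For reference, minimal sketches of both settings (file paths vary by distro):

```ini
# /etc/my.cnf -- server side
[mysqld]
local_infile=ON

; php.ini -- only needed when importing through phpmyadmin
mysqli.allow_local_infile=On
```

If you import from the cli instead, the client may also need the `--local-infile=1` flag, since the mysql client can refuse LOAD DATA LOCAL on its own.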


Check the .sql files for possible errors thrown by the twindb program. Its parser is not error-free and will complain about otherwise valid HTML.

If during import you face an illegal utf8 character error, you can either
- change the sql in the .sql files, putting latin1 instead of utf8, or
- convert the data files (not the .sql files) with iconv. You can do something like


find . -type f -print -exec iconv -f latin1 -t utf-8 {} -o {}.utf8 \;

