posted by qubix on March 8, 2020

TOOLS USED:

* centos 6 with EPEL / mysql 5.1 64bit
* undrop-for-innodb (https://github.com/twindb/undrop-for-innodb)
* mysql-utilities 1.6 (https://github.com/mysql/mysql-utilities)
* ...and luck...

0) Prepare recovery environment

install centos 6 final version 64bit in a VM or on a spare pc (preferably a VM)

 yum update

install epel repo

 yum install epel-release

install some stuff

 yum install nano mc zip flex make gcc bison 

install mysql server

 yum install mysql-server 


change mysql config /etc/my.cnf with the following:


[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
symbolic-links=0
max_connections = 2500
query_cache_limit = 2M
tmp_table_size=200M
query_cache_size=150M
key_buffer_size=300M
max_heap_table_size=300M
max_allowed_packet=500M
net_read_timeout=600
net_write_timeout=180
interactive_timeout=86400
log_error=/var/log/mysql_error.log
innodb_file_per_table=1
innodb_force_recovery=1

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

execute the following to create the mysql error log file

 touch /var/log/mysql_error.log && chmod 660 /var/log/mysql_error.log
 chown mysql:mysql /var/log/mysql_error.log
 /etc/init.d/mysqld restart

read about innodb recovery levels, educate yourself!

I recommend installing a minimal desktop environment like lxde or xfce, and, if in a VM, guest additions to enable shared clipboard and seamless mouse integration

1) install software

git clone the above mentioned tools

1st problem: mysql-utilities requires the python connector.

yum install mysql-connector-python.noarch

ok so now install mysql-utilities by running its setup.py

undrop-for-innodb has a makefile, so run $ make to compile it (it's written in C and uses a flex/bison-generated parser, hence yyparse)

Our database was made with mysql 5.7, not the 5.1 we have, so obviously we will face trouble along the way.

2) grab the db table structure using the mysqlfrm util

We'll use the diagnostic mode of mysqlfrm because we run mysql 5.1 instead of 5.7

mysqlfrm --diagnostic /where/thedata/reside/*.frm > ~/db_structure.sql

Do not try to do it the spawned-server way unless the files you have were generated by the same mysql version as the one running in the recovery environment.
This produces CREATE TABLE statements for all the frms we have.

3) ok now we have our table structure

First open this db_structure.sql file and replace all lines having CREATE TABLE mydb.mytable ( with CREATE TABLE mytable (

or else the yyparser will fail in the next step!
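A minimal sketch of that replacement with sed; the schema name mydb and the /tmp path below are stand-ins for a demo file:

 # Demo input standing in for db_structure.sql ("mydb" is a stand-in schema name)
 printf 'CREATE TABLE mydb.Atom (\n  id int\n);\n' > /tmp/db_structure.sql

 # Drop the "mydb." prefix from every CREATE TABLE line, in place
 sed -i 's/^CREATE TABLE [^.(]*\./CREATE TABLE /' /tmp/db_structure.sql

 head -1 /tmp/db_structure.sql   # -> CREATE TABLE Atom (

The same sed line works on the real db_structure.sql, backticked names included.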

We now have to copy each CREATE TABLE statement to its own file. E.g. if we have a table Atom.frm and another one Objects.frm, we copy each CREATE TABLE statement into its own separate sql file, so we'll have 2 files, Atom.sql and Objects.sql (you'll see why).
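One way to automate that split, assuming (as mysqlfrm's output has it) each CREATE TABLE starts at column 0 and the closing paren is also at column 0; the input below is a demo stand-in:

 tmp=$(mktemp -d) && cd "$tmp"

 # Demo stand-in for db_structure.sql
 printf 'CREATE TABLE Atom (\n  id int\n);\nCREATE TABLE Objects (\n  id int\n);\n' > db_structure.sql

 # Route each statement into <tablename>.sql
 awk '/^CREATE TABLE/ { out = $3; gsub(/[`(]/, "", out); out = out ".sql" }
      out             { print > out }
      /^\)/           { out = "" }' db_structure.sql

 ls *.sql   # Atom.sql  Objects.sql  db_structure.sql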

4a) now let's move to the TwinDB recovery tool

We'll use the stream_parser to extract data from our ibd files. Because we may have many tables, let's make our life a little easier:


 cd ourdbdirectory/
 echo '#!/bin/sh' > ~/table_data_ibd.sh
 ls -1 *.ibd >> ~/table_data_ibd.sh
 sed -i 's/^/\.\/stream_parser -f ~\/ourdbdirectory\//' ~/table_data_ibd.sh
 chmod +x ~/table_data_ibd.sh
 cp ~/table_data_ibd.sh where_undrop-for-innodb_is/
 cd where_undrop-for-innodb_is/
 ./table_data_ibd.sh

this will generate all the files needed for the next step. It essentially dumps the data pages from the ibd files, so we can then construct mysql load data files which we can import into our db again.

4b) now moving to the c_parser

the general command is ./c_parser -6f pages-table.ibd/FIL_PAGE_INDEX/ -t table-create.sql > dumps/default/table 2>dumps/default/table.sql

-6f: 6 means the ibd file was generated by MySQL 5.6+ (in this case 5.7); f is for specifying the .page file we are going to parse
-t table-create.sql: the file containing the CREATE TABLE statement we generated previously
> dumps/default/table: the dumped data ends up in this file. It is actually a text file compatible with the LOAD DATA LOCAL INFILE command. dumps/default is simply the folder I used for storing the exported data.
2>dumps/default/table.sql: this is the .sql file which will contain the LOAD DATA LOCAL INFILE statement, so in the end we can simply run this file to import the data.

Again, because we may have many files, let's make our life easier using good ole linux cli utils:

 echo '#!/bin/sh' > ~/table_parser_data.sh
 find . -maxdepth 1 -type f -exec echo './c_parser -6f pages-{}/FIL_PAGE_INDEX/ -t ~/ourdatabasedir/{}_create.sql > dumps/default/{} 2>dumps/default/{}.sql' \; | grep ibd | sed 's/.\///2g' | sed 's/.ibd//2g' >> ~/table_parser_data.sh
 chmod +x ~/table_parser_data.sh && cp ~/table_parser_data.sh where_undrop-for-innodb_is/

(note: the generated commands expect the per-table files from step 3 to be named like Atom_create.sql; either name them that way or adjust the _create suffix to match your files)

now let's run it


 cd where_undrop-for-innodb_is/
 ./table_parser_data.sh



When it is finished you'll see that a lot of files have been created in the dump folder, with the load local statement and the data for each table.

We can import them now in our db and see what happens!

Just copy the sql from the .sql files and run it in phpmyadmin, or import from the cli. You can concatenate all of them so you'll have to import only one file.

Beware: mysql may deny LOAD DATA LOCAL by default, or if you use phpmyadmin it might be disabled by the php settings.
In either case, to enable it go to
- my.cnf and add the line local_infile=ON (or, if it is already present, change its value to ON)
- php.ini and add mysqli.allow_local_infile=On (or uncomment it, if it is already there)
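With local_infile enabled, the concatenate-and-import step from the cli can be sketched like this; the demo .sql files below and the database name mydb are stand-ins (the real statements come from c_parser):

 tmp=$(mktemp -d) && cd "$tmp"
 mkdir -p dumps/default

 # Stand-ins for the .sql files c_parser generated
 echo "LOAD DATA LOCAL INFILE 'dumps/default/Atom' INTO TABLE \`Atom\`;"       > dumps/default/Atom.sql
 echo "LOAD DATA LOCAL INFILE 'dumps/default/Objects' INTO TABLE \`Objects\`;" > dumps/default/Objects.sql

 # One combined import file
 cat dumps/default/*.sql > all_tables.sql

 # Then import it ("mydb" is a placeholder; --local-infile=1 allows LOAD DATA LOCAL):
 #   mysql --local-infile=1 mydb < all_tables.sql
 grep -c 'LOAD DATA' all_tables.sql   # -> 2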


Check the sql files for possible errors thrown by the twindb program. Its parser is not error-free and will complain about otherwise valid HTML in the data.

If during import you face an illegal utf8 character error, you can either
- edit the .sql files and put latin1 instead of utf8
- convert the data files (not the .sql files) with iconv. You can do something like


find . -type f -print -exec iconv -f us-ascii -t utf-8 {} -o {}.utf8 \;


posted by qubix on November 13, 2019

The other day I was asked to check a hacked trunk on a pbx box. While I was digging through the logs, the owner complained that he couldn't use a tool from the pbx interface, the "Call Event Logging" tool, because of some error.

The error was: "General error: 1194 Table 'cel' is marked as crashed and should be repairedFile:/var/www/html/admin/libraries/BMO/Database/PDOStatement.class.php:17"

That's a simple one to fix, as the cel table is in the asteriskcdrdb database:


mysql asteriskcdrdb
repair table cel; 

That's all, the table was repaired

+-------------------+--------+----------+-----------------------------------------------+
| Table             | Op     | Msg_type | Msg_text                                      |
+-------------------+--------+----------+-----------------------------------------------+
| asteriskcdrdb.cel | repair | info     | Key 1 - Found wrong stored record at 27271156 |
| asteriskcdrdb.cel | repair | warning  | Number of rows changed from 180284 to 180281  |
| asteriskcdrdb.cel | repair | status   | OK                                            |
+-------------------+--------+----------+-----------------------------------------------+
3 rows in set (4.98 sec)



posted by qubix on October 18, 2018

Well I had created my new repo, made the files I wanted and when I tried to push them:

error: src refspec master does not match any.

BUT...I had not committed anything yet, oops!

So the correct order is:

Create my new shiny git repo

mkdir repo && cd repo
git init
git remote add origin /path/to/origin.git

Add the files I want

git add . 

Commit the changes

git commit -m "initial commit"

Then push them!

git push origin master

And..success!

posted by qubix on April 1, 2018

install drush on cpanel based servers with EA4 per account

1) cpanel already ships composer if you run EA4, at /opt/cpanel/composer/bin

2) enable jailed shell access to account

3) ssh to server with account creds or keys or root and then su to user

4) composer require drush/drush:7.*

5) edit .bashrc and make an alias

alias drush-master=~/.config/composer/vendor/drush/drush/drush

6) source .bashrc to make the changes immediately available

7) go to drush folder and do composer install to fetch dependencies

8) drush-master status to check it is working

9) change tmp to be inside the user's home, just in case

edit .bashrc and add the line

TEMP=~/tmp

if you cannot edit .bashrc, run this on the cli: export TEMP=~/tmp

you can see the change in drush-master status
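The two .bashrc additions from steps 5 and 9 together, as one fragment (paths as described above):

 # ~/.bashrc fragment: the drush alias (step 5) and the tmp override (step 9)
 alias drush-master=~/.config/composer/vendor/drush/drush/drush
 export TEMP=~/tmp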

10) now go to ~/public_html/sites/default/

execute drush status and see that drush sees your website

posted by qubix on February 9, 2018

You want to flush your DNS cache in linux? Well it is very very easy :]

Let's say you use systemd-resolve:

easy!


$ systemd-resolve --flush-caches

Let's say you use anything else (nscd, dnsmasq, named etc)

easier!

just restart the service


$ systemctl restart nscd (or dnsmasq, named etc)

or if you do not use systemd, you could try


service nscd restart

cheers!

PS: you probably need to have root privileges to execute the above commands

posted by qubix on November 1, 2017

Wanting to add a client to a nagios monitoring server, I followed the standard procedure:

- install nagios plugins
- install nrpe
- check nrpe config
- open firewall ports etc

So everything works fine then, NOT!!
After a few minutes the nagios alarm bells on the monitoring server started going off about unknown problems and other such stress-inducing stuff.

I looked at the packages again, nothing, everything in order..
I set nrpe to keep a debug log and saw nothing interesting in it:

[1509568185] Running command: /usr/lib64/nagios/plugins/check_disk -w 20% -c 10% -p /dev/mapper/home
[1509568185] Command completed with return code 3 and output:
[1509568185] Return Code: 3, Output: NRPE: Unable to read output

So I figured I'd try running the command myself to see what it outputs. After all, the log shows the remote server making the call, but it gets stuck here.

$ /usr/lib64/nagios/plugins/check_disk -w 20% -c 10% -p /dev/mapper/home

-bash: /usr/lib64/nagios/plugins/check_procs: No such file or directory

What?? What is this thing? I had installed the nagios-plugins and nagios-plugins-nrpe packages...
and of course those do NOT install the actual nagios plugins...
Apparently on centos 7 there are separate packages, one for each plugin. 2 hours of my life wasted...

Well, I installed them and got some peace :]

posted by qubix on May 15, 2017

Suddenly, for no rhyme or reason, webmail shows no emails, messages get stuck in the queue, and when you look at the log you see a heap of nearly incomprehensible debug info.

We're talking, of course, about the dovecot server that handles the pop/imap connections.
The key here is to find, inside the chaos of the log, the point where the problem starts. If we spot the line:

Panic: file mail-index-sync-keywords.c: assertion failed

things clear up quickly. Dovecot is telling us it cannot sync the mailbox index.
The reason is that the dovecot.index file is corrupted, and the solution is simple:
delete the dovecot.index file and log in to the webmail again so it gets recreated!
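The fix as commands: the real Maildir path depends on your setup (hypothetical here), so this sketch demos on a throwaway directory:

 MAILDIR=$(mktemp -d)             # stand-in for e.g. /home/user/Maildir
 touch "$MAILDIR/dovecot.index"   # pretend this is the corrupted index

 # Delete the corrupted index; dovecot recreates it on the next login
 rm -f "$MAILDIR/dovecot.index"

 ls -A "$MAILDIR"   # empty: index gone until the next webmail login rebuilds it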

posted by qubix on October 10, 2016

If you have just installed your new template and, clicking on any article, instead of seeing it you get a weird error

500 - JHtml: :icon not supported. File not found

don't despair, there is a solution.

Doing a little debuggin': because the template requests it, joomla looks for the file icon.php in libraries/joomla/html/ but doesn't find it. So the solution is to copy it from components/com_content/helpers/ to libraries/joomla/html/ and et voila, everything works!

njoy

posted by qubix on June 22, 2016

If you're wondering why you can't send email from your new webmin vps, don't look too far. If you see the following line in /var/log/mail.warn:

warning: SASL authentication failure: cannot connect to saslauthd server: No such file or directory

your server is telling you it cannot find the sasl daemon, and therefore cannot authenticate. The reason is that the file

/var/run/saslauthd

which it tries to access to check whether the daemon is running, is missing.

In reality the file lives at

/var/spool/postfix/var/run/saslauthd

so a symlink will fix the problem:

ln -s /var/spool/postfix/var/run/saslauthd /var/run/saslauthd

posted by qubix on May 6, 2016

Well...for some reason sshd failed to bind to any port other than 22

Let's first check what port we are trying to bind to


$ cat /etc/ssh/sshd_config | grep -i port

Port 5678

Ok now let's check what the logs say (yes, it's systemd...)


$ journalctl -u sshd.service

systemd[1]: Starting OpenSSH server daemon...
 sshd[10158]: error: Bind to port 5678 on 0.0.0.0 failed: Permission denied.
 sshd[10158]: error: Bind to port 5678 on :: failed: Permission denied.
 sshd[10158]: fatal: Cannot bind any address.

Permission denied? I bet it is because selinux is in enforcing mode! So the solution is to add the port we want to the selinux policy for ssh.

First let's install the policy utils


$ yum install policycoreutils-python

Check current policy for ssh


$ semanage port -l | grep ssh

ssh_port_t        tcp      22

And add our desired port to it


$ semanage port -a -t ssh_port_t -p tcp 5678

ready!
