posted by qubix on December 25, 2022

There is this neat panel, CWP (CentOS Web Panel), which has all the necessary tools to run a webhost and a nice interface for both the central root panel and the user one.

One thing it has been missing though is this scenario: a website is hosted on a CWP-powered webhost and the owner decides to change its domain name from olddomain to newdomain.

If you are coming from other panels (e.g. cPanel) you may be accustomed to this being supported. But this is not the case with CWP (Pro edition or not).

A quick search in the forums revealed some questions posted, but the usual answer was "it is not supported".

Well, enough talking, let's see how we can do it on our own!

Important: in this scenario there were no email accounts and the webhost was running just Apache (CWP offers support for Apache, NGINX, LiteSpeed, and Apache+NGINX+Varnish).

Also important: I assume that the newdomain is already pointed to your webhost!

First things first: how does CWP know which user has which domain?

Apparently there is a database called "root_cwp", and in this db there is a table "user" where you can find the user account you want to change and then update its domain column.

So:


mysql
MariaDB [(none)]> USE root_cwp;
MariaDB [root_cwp]> UPDATE user SET `domain` = 'newdomain' WHERE `username` = 'useraccount';
MariaDB [root_cwp]> quit;
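A quick sanity check before moving on (read-only, same table as above):

mysql -e "SELECT username, domain FROM root_cwp.user WHERE username = 'useraccount';"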

Next step: the Apache vhost. We have to copy the existing vhost and change the references in it from olddomain to newdomain:


cd /usr/local/apache/conf.d/vhosts/
cp olddomain.conf newdomain.conf
sed -i 's/olddomain/newdomain/g' newdomain.conf


Important: do not copy the ssl.conf, because there is no SSL certificate for the new domain yet. We will generate it later, after we have finished.

Next step: the BIND DNS zone. Copy the zone file and change the references again:

cd /var/named/
cp olddomain.db newdomain.db
sed -i 's/olddomain/newdomain/g' newdomain.db



Next substep: inform BIND that there is a new zone file. Edit /etc/named.conf and find the lines regarding the old domain, something like this:

// zone olddomain
zone "olddomain" {type master; file "/var/named/olddomain.db";};
// zone_end olddomain

and change olddomain to newdomain.
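After the substitution the stanza should read:

// zone newdomain
zone "newdomain" {type master; file "/var/named/newdomain.db";};
// zone_end newdomain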

Next step: restart Apache and BIND

systemctl restart named
systemctl restart httpd
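Before (or right after) restarting, it does not hurt to validate what we changed; these are standard BIND and Apache checks, nothing CWP-specific:

named-checkconf /etc/named.conf
named-checkzone newdomain /var/named/newdomain.db
apachectl configtest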


Hopefully all went well!
Now we can go and generate our brand new ssl from the root web interface, so the necessary changes will be handled by CWP for the new ssl to work.

That's all folks!

posted by qubix on October 8, 2022

As we all know, cPanel supports and advertises PowerDNS for DNSSEC, not BIND. While this is weird enough, I will not go into it.

Recently I had a situation where a client transferred a .com domain to a different registrar and moved the website it pointed to onto a new server. The domain was signed and, since no one knew that, I was suddenly facing a serious DNS propagation problem.

Since Google Public DNS had no record of this domain, I checked what the heck was happening, and yes, that is when I found out it was a signed domain.

The problem had two solutions:
1) Remove the signing
2) Implement DNSSEC for this domain at the server so the chain of trust would be valid again.

Unfortunately, the registrar's support was terrible: I kept talking to people who clearly had no expertise on the matter, probably some call center with scripted questions/answers. If you had the time and patience, I suppose they would eventually forward your case to some technical person.

But I didn't have the time. Every day that website was down, the client lost many, many euros in revenue, and the complaints were escalating by the hour.

So I thought to myself, there must be someone who has done this on a cPanel server... NOT! The cPanel forums had this question, and the answer was always "we do not support BIND for DNSSEC". Feature requests were left unanswered.

Well no worries, I could do it on my own!

The problem now was that I didn't want all of my zones automatically signed by BIND; I wanted to do it manually for only one domain.

I will not go into the pitfalls I hit, but, thank the eGods, BIND in cPanel had EDNS support and CloudNS could transfer the zone along with the DNSSEC records, among other things.

So, here comes the actual fun!

===== linux cli steps =====
cd /var/named

#generate the two keys we will use to sign and validate our zone
#(it will take a loooong time without something like haveged; run
#  cat /proc/sys/kernel/random/entropy_avail
#to watch what happens to entropy availability while generating keys...)

#generate ZSK
dnssec-keygen -a RSASHA256 -b 1280 -n ZONE example.com 

#generate KSK
dnssec-keygen -a RSASHA256 -b 2048 -f KSK -n ZONE example.com
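#each dnssec-keygen run drops two files named K<zone>.+<alg>+<keytag>;
#008 is RSASHA256 and the key tag will differ per run (35056 is the tag
#used further below), e.g.:
#Kexample.com.+008+35056.key      (public key, gets included in the zone)
#Kexample.com.+008+35056.private  (private key, used for signing)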

#adjust ownership and rights to the two key files we generated
chgrp named Kexample.com.+*
chmod g=r,o= Kexample.com.+*

# copy them to a safe location just in case
cp Kexample.com.+008+* /root/

#change example.com section in /etc/named.conf
zone "example.com" {
        type master;

        file  "/var/named/example.com.db.signed";

    allow-query { any; };

        # DNSSEC keys Location (we could use a separate folder here)
        key-directory "/var/named/";

        # Publish and Activate DNSSEC keys
        auto-dnssec maintain;

        # Use Inline Signing
        inline-signing yes;
};

#add to /etc/named.conf; these go inside the options { } block...
        dnssec-enable yes;
        dnssec-validation auto;
        //dnssec-lookaside auto; //this is not valid for newer versions of BIND

#...and this goes inside the logging { } block, to get dnssec-only logging
        channel dnssec_log {
                file "/var/log/named/dnssec.log";
                severity debug 3;
        };
        category dnssec { dnssec_log; };

#now, before signing the zone, we must put the public keys into our zone file
#so the signing tool knows which keys to sign the zone with
for key in Kexample.com*.key
do
echo "\$INCLUDE $key">> example.com.db
done
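#the tail of example.com.db should now have one $INCLUDE per key, e.g.
#(key tags are illustrative, yours will differ):
#$INCLUDE Kexample.com.+008+35056.key
#$INCLUDE Kexample.com.+008+60423.key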

#sign the zone (-3 enables NSEC3, with a random 16-hex-char salt generated on the fly)
dnssec-signzone -A -3 $(head -c 1000 /dev/random | sha1sum | cut -b 1-16) -N INCREMENT -o example.com -t example.com.db

#you should get something like
#Verifying the zone using the following algorithms: RSASHA256.
#Zone fully signed:
#Algorithm: RSASHA256: KSKs: 1 active, 0 stand-by, 0 revoked
#                      ZSKs: 1 active, 0 stand-by, 0 revoked
#example.com.db.signed
#Signatures generated:                       65
#Signatures retained:                         0
#Signatures dropped:                          0
#Signatures successfully verified:            0
#Signatures unsuccessfully verified:          0
#Signing time in seconds:                 0.012
#Signatures per second:                5054.039
#Runtime in seconds:                      0.019


#fix ownership of the signed zone file
chown named:named example.com.db.signed

#reload bind
systemctl reload named
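#quick sanity check: the zone should now serve DNSKEY and RRSIG records
dig @127.0.0.1 example.com DNSKEY +dnssec +multiline
dig @127.0.0.1 example.com SOA +dnssec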

#DS records to put into the registrar's interface (taken from the dsset-example.com. file created during signing; they can also be obtained by running: dnssec-dsfromkey Kexample.com.+008+35056.key)
#example.com.        IN DS 35056 8 1 B10CCE8B8C94F46E22451F66E860B7F804D2AC69
#example.com.        IN DS 35056 8 2 296446D4769D4B38175B11ED71767483AD5BD9697AE9C1DD21A3BE9E 670D54EE


#check validation
(d=example.com; k=$(printf '%05d' "$(dig @127.0.0.1 +norecurse "$d". DNSKEY | dnssec-dsfromkey -f - "$d" | awk '{print $4;}' | sort -u)"); delv @127.0.0.1 -a <(sed -e '/^;/d;s/[ \t]\{1,\}/ /g;s/ [0-9]\{1,\} IN DNSKEY / IN DNSKEY /;s/ IN DNSKEY / /;s/^[^ ]* [^ ]* [^ ]* [^ ]* /&"/;:s;/"[^ ]*$/b t;s/\("[^ ]*\) /\1/;b s;:t;s/$/";/;H;$!d;x;s/^\n//;s/.*/trusted-keys {\n    &\n};/' /var/named/Kexample.com.+008+"$k".key) +root="$d" "$d". SOA +multiline)
; fully validated
example.com.        86400 IN SOA ns1.mainserver.com. server.mainserver.com. (
                                2022100118 ; serial
                                3600       ; refresh (1 hour)
                                1800       ; retry (30 minutes)
                                1209600    ; expire (2 weeks)
                                86400      ; minimum (1 day)
                                )
example.com.        86400 IN RRSIG SOA 8 2 86400 (
                                20221101074125 20221002064125 60423 example.com.
                                BUoM4IHVFuL7JhkLkRQeR7xgBHmqo1D+GJStYvfumCrZ
                                km+qAm2HtysnrW+Ug+orWA6fURF2tgY9UkTrPPuLpUlX
                                ExPanItTqrDqWghIA1lFHs28e9DiBNQgv3WByRinfYvF
                                C7o0UpzaXCMppsWisbD50xXlGvcrsCxiXoDxgpiJ+O3p
                                WlIc4hYdolcN2z4o+UoPsSTVOZTj9fBSzRB63w== )



Two excellent tools to use for checking DNS status and the chain of trust are:

https://dnsviz.net/

and

https://dnssec-analyzer.verisignlabs.com/

posted by qubix on February 7, 2022

Recently I faced a problem with a VPS on which AutoSSL did not want to generate a certificate request for the VPS's hostname.

After cleaning crap off the appropriate DNS zone and fixing an NS record that pointed to a non-functioning DNS server, I thought yeah, we're good to go!

But cPanel had other plans... no certificate AGAIN, and /usr/local/cpanel/bin/checkallsslcerts gave me a weird error: [WARN] The system failed to acquire a signed certificate from the cPanel Store. at bin/checkallsslcerts.pl line 653.

Putting aside that there is NO such Perl file, it occurred to me that maybe, because of the wrong NS record along with the other crap I found, a certificate CSR for the hostname had been left over and cPanel did not erase it for some reason.

And yes, there is one in /var/cpanel/hostname_cert_csrs.

I removed it, and now checkallsslcerts runs without an error.

Or does it?? Now it throws a subtle [Note] (why??) that the hostname isn't covered by any of the subdomains.

Finally, the solution was:

mv /var/cpanel/hostname_cert_csrs{,.cpbkp} -v

(the brace expansion simply runs mv -v /var/cpanel/hostname_cert_csrs /var/cpanel/hostname_cert_csrs.cpbkp). For some reason the whole directory has to be backed up and moved aside; after that, checkallsslcerts runs correctly and issues a certificate for the hostname!

posted by qubix on December 26, 2021

Yesterday, I tried to install drush through composer to manage an old Drupal installation:

composer require
search package drush
version constraint 7.4.0

I faced the following error:

                       
  [ErrorException]
  Undefined index: process

It seems that composer is trying to use a PHP function which I have disabled for obvious security reasons: proc_open.

Since I could not enable it again, I had to devise a different approach.

1) make a new composer.php.ini

[php]
memory_limit = 4096M
max_execution_time = 0
date.timezone = "Europe/Athens"
realpath_cache_size = "4096K"

2) tell php to use it and add the necessary extensions in a one-liner

php -n -d extension=phar.so -d extension=json.so -d extension=mbstring.so -c /home/user/composer.php.ini /opt/composer/bin/composer require

Problem solved!
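To double-check which ini actually gets loaded with those flags, and that proc_open is no longer in the disabled list, you can introspect PHP with the same options (paths as above):

php -n -c /home/user/composer.php.ini -i | grep -E 'Loaded Configuration File|disable_functions'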

posted by qubix on August 5, 2021

For some reason, Dell has abandoned Debian since version 8 (jessie).

No one knows why, but don't despair, there is a solution: install it for ubuntu!

Yup, since Ubuntu is essentially a frozen Debian, it should be possible to install OMSA from there.

Let's see how:

1) Install dell ubuntu based repository

$ echo 'deb http://linux.dell.com/repo/community/openmanage/930/bionic bionic main' | sudo tee -a /etc/apt/sources.list.d/linux.dell.com.sources.list

2) Get and install the GPG key for it

$ wget https://linux.dell.com/repo/pgp_pubkeys/0x1285491434D8786F.asc
$ apt-key add 0x1285491434D8786F.asc

3) $ apt update

4) Install missing dependencies:

wget http://archive.ubuntu.com/ubuntu/pool/universe/o/openwsman/libwsman-curl-client-transport1_2.6.5-0ubuntu3_amd64.deb
wget http://archive.ubuntu.com/ubuntu/pool/universe/o/openwsman/libwsman-client4_2.6.5-0ubuntu3_amd64.deb
wget http://archive.ubuntu.com/ubuntu/pool/universe/o/openwsman/libwsman1_2.6.5-0ubuntu3_amd64.deb
wget http://archive.ubuntu.com/ubuntu/pool/universe/o/openwsman/libwsman-server1_2.6.5-0ubuntu3_amd64.deb
wget http://archive.ubuntu.com/ubuntu/pool/universe/s/sblim-sfcc/libcimcclient0_2.2.8-0ubuntu2_amd64.deb
wget http://archive.ubuntu.com/ubuntu/pool/universe/o/openwsman/openwsman_2.6.5-0ubuntu3_amd64.deb
wget http://archive.ubuntu.com/ubuntu/pool/multiverse/c/cim-schema/cim-schema_2.48.0-0ubuntu1_all.deb
wget http://archive.ubuntu.com/ubuntu/pool/universe/s/sblim-sfc-common/libsfcutil0_1.0.1-0ubuntu4_amd64.deb
wget http://archive.ubuntu.com/ubuntu/pool/multiverse/s/sblim-sfcb/sfcb_1.4.9-0ubuntu5_amd64.deb
wget http://archive.ubuntu.com/ubuntu/pool/universe/s/sblim-cmpi-devel/libcmpicppimpl0_2.0.3-0ubuntu2_amd64.deb

dpkg -i libwsman-curl-client-transport1_2.6.5-0ubuntu3_amd64.deb
dpkg -i libwsman-client4_2.6.5-0ubuntu3_amd64.deb
dpkg -i libwsman1_2.6.5-0ubuntu3_amd64.deb
dpkg -i libwsman-server1_2.6.5-0ubuntu3_amd64.deb
dpkg -i libcimcclient0_2.2.8-0ubuntu2_amd64.deb
dpkg -i openwsman_2.6.5-0ubuntu3_amd64.deb
dpkg -i cim-schema_2.48.0-0ubuntu1_all.deb
dpkg -i libsfcutil0_1.0.1-0ubuntu4_amd64.deb
dpkg -i sfcb_1.4.9-0ubuntu5_amd64.deb
dpkg -i libcmpicppimpl0_2.0.3-0ubuntu2_amd64.deb
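Alternatively, if the download directory contains only these .deb files, dpkg can take them all in one go and sort out the ordering among them itself:

dpkg -i ./*.deb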

5) Install these packages to avoid errors and for omreport

$ apt install libncurses5 libcurl4-openssl-dev

6) Install omsa

$ apt install srvadmin-all

7) Tell OMSA not to check our server's generation

$ touch /opt/dell/srvadmin/lib64/openmanage/IGNORE_GENERATION

8) Test command to retrieve your physical disk info

$ /opt/dell/srvadmin/bin/omreport storage pdisk controller=0

If all went smoothly, you will see a list of the physical disks attached to your controller!

-- Info from various posts on reddit, proxmox forum, dell website and experimentation!

posted by qubix on December 26, 2020

Well, I had this old monitoring VM that needed to go from Nagios-based monitoring to a Zabbix-based one.

1) OS UPDATE

The first culprit was that CentOS 6 has been EOL forever, and many things didn't work or needed some kind of fix.

Well, this thing was stuck at CentOS 6.6. Since version 6 is EOL, we have to use the vault repos to update to the latest 6.x sources.

So let's change the base repo contents with these:

[C6.10-base]
name=CentOS-6.10 - Base
baseurl=http://vault.centos.org/6.10/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=1
metadata_expire=never

[C6.10-updates]
name=CentOS-6.10 - Updates
baseurl=http://vault.centos.org/6.10/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=1
metadata_expire=never

[C6.10-extras]
name=CentOS-6.10 - Extras
baseurl=http://vault.centos.org/6.10/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=1
metadata_expire=never

[C6.10-contrib]
name=CentOS-6.10 - Contrib
baseurl=http://vault.centos.org/6.10/contrib/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=0
metadata_expire=never

[C6.10-centosplus]
name=CentOS-6.10 - CentOSPlus
baseurl=http://vault.centos.org/6.10/centosplus/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=0
metadata_expire=never

Before doing anything else, let's grab the ELRepo and EPEL repos for this CentOS:

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh https://www.elrepo.org/elrepo-release-6-10.el6.elrepo.noarch.rpm
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm

now lets update stuff

yum clean all && yum update

ok, now we have updated our system to the latest possible software

2) PHP VERSION

The second culprit was that the Zabbix frontend requires PHP 5.4 and up, but CentOS 6 has 5.3.3. We will not use the Remi repos here; they are not complete and no one knows how long they will support this CentOS version. Instead... let's install PHP from source! yeaaahh :P

  • install required packages for compilation:

yum install autoconf libtool re2c bison libxml2-devel bzip2-devel libcurl-devel libpng-devel libicu-devel gcc-c++ libmcrypt-devel libwebp-devel libjpeg-devel openssl-devel libxslt-devel -y

  • grab php 5.6.40 (the latest php 5.6) and untar the contents:

curl -O -L https://github.com/php/php-src/archive/php-5.6.40.tar.gz
tar -xvf php-5.6.40.tar.gz
cd php-src-php-5.6.40/

now let's compile php

./buildconf --force

after we run buildconf, we can customize what we want compiled into php; see ./configure --help to satisfy your needs.

let's continue with our config:

./configure --prefix=/usr/local/php56 --with-apxs2=/usr/sbin/apxs --with-freetype-dir=/usr/include/freetype2 --disable-short-tags --enable-xml --enable-cli --with-openssl --with-pcre-regex --with-pcre-jit --with-zlib --enable-bcmath --with-bz2 --with-curl --enable-exif --with-gd --enable-intl --with-mysqli --enable-pcntl --with-pdo-mysql --enable-soap --enable-sockets --with-xmlrpc --enable-zip --with-webp-dir --with-jpeg-dir --with-png-dir --enable-json --enable-hash --enable-mbstring --with-mcrypt --enable-libxml --with-libxml-dir --enable-ctype --enable-calendar --enable-dom --enable-fileinfo --with-mhash --with-iconv --enable-opcache --enable-phar --enable-simplexml --with-xsl --with-pear

oops, error: apxs what? (apxs is used to build the Apache PHP module) Install httpd-devel:

yum install httpd-devel

ok, let's run the above again aaand... oops, error: freetype something... ok, install freetype-devel:

yum install freetype-devel

finally we move on:

make clean
make
make test
make install

(make test will execute maaany tests. It will probably fail some and ask you to submit a report to the PHP devs.)

copy the development php.ini to our shiny new php 5.6 install:

cp php.ini-development /usr/local/php56/lib/php.ini

edit it and change max_execution_time, post_max_size, upload_max_filesize, etc. to what Zabbix expects (16MB for post and upload, 300 seconds for execution time); also change date.timezone to your timezone.
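The relevant php.ini lines would look roughly like this (the values are what Zabbix expects per the above; the timezone is just an example):

max_execution_time = 300
post_max_size = 16M
upload_max_filesize = 16M
date.timezone = Europe/Athens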

3) A NEWER MYSQL

Fortunately we can install a recent MySQL, not that archaic 5.1 that comes with CentOS 6.

Let's install mysql community edition 8 !

rpm -ivh https://dev.mysql.com/get/mysql80-community-release-el6-1.noarch.rpm
yum update
yum install mysql-community-server
service mysqld start

Well..that was it...

4) ZABBIX INSTALLATION

By now I think I have set up my weird environment, so it is time to install Zabbix!

rpm -Uvh https://repo.zabbix.com/zabbix/4.2/rhel/6/x86_64/zabbix-release-4.2-2.el6.noarch.rpm
yum install zabbix-server-mysql zabbix-web-mysql zabbix-agent

Now let's copy the apache configuration file from the zabbix docs:

cp /usr/share/doc/zabbix-web-*/httpd22-example.conf /etc/httpd/conf.d/zabbix.conf

Edit the configuration file to update the timezone to something like: php_value date.timezone Europe/Athens

vi /etc/httpd/conf.d/zabbix.conf

Ok, now we can create the zabbix database:

mysql -u root -p

oops... the mysql root user already has a password set? Hmmm, it seems that although I didn't run the mysql_secure_installation utility, the mysql installation has set some root pass. Where can it be... maybe in /var/log/mysqld.log:

grep 'temporary password' /var/log/mysqld.log

and yes, there was a temporary password set. I think it is a good time to run the mysql secure utility, set a new pass (the temp one was expired anyway) and answer "Y" to the security options.

After that I can login to mysql to create zabbix database.

create database zabbix_db character set utf8 collate utf8_bin;
GRANT ALL ON zabbix_db.* TO zabbix_dbuser@localhost IDENTIFIED BY 'some_decent_password_folks';
quit;

We can import the db schema like this:

cd /usr/share/doc/zabbix-server-mysql*/
zcat create.sql.gz | mysql -u zabbix_dbuser -p zabbix_db

After creating the db, update the zabbix_server.conf file with our new database, user and creds.

Ok, I am ready to start everything and also make it start on boot, so let's do it:

service zabbix-server start
service zabbix-agent start
service httpd start
chkconfig zabbix-server on
chkconfig zabbix-agent on
chkconfig httpd on
chkconfig mysqld on

Now that everything is up, I'll visit our new and polished web interface to finalize the setup: http://mydomain.tld/zabbix/

and of course... another error at the database step: Error connecting to database: No such file or directory

Wait ... zabbix web interface cannot find the mysql socket file? Let's try 127.0.0.1 instead of localhost...

...aaaand yet another error, but a predictable one this time: Error connecting to database: Server sent charset unknown to the client. Please, report to the developers

It seems that MySQL 8 defaults to the "utf8mb4" charset and that old Zabbix doesn't know it. This is easily fixable though; just put these in /etc/my.cnf (if there isn't one, make it):

[client]
default-character-set=utf8

[mysql]
default-character-set=utf8

[mysqld]
collation-server = utf8_unicode_ci
character-set-server = utf8
default_authentication_plugin = mysql_native_password

Restart mysqld service and this error is gone.
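To confirm the server actually picked up the new charset settings, a quick check:

mysql -u root -p -e "SHOW VARIABLES LIKE 'character_set_server'; SHOW VARIABLES LIKE 'collation_server';"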

So, this took forever, but it is over. Or is it? Wait... there is a newer version that I can use... Zabbix 4.4, which has the newer agent2. I SHOULD DO AN UPGRADE!! Oh well, that is easy: install the newer rpm release and upgrade. Right? RIGHT?

rpm -Uvh https://repo.zabbix.com/zabbix/4.4/rhel/6/i686/zabbix-release-4.4-1.el6.noarch.rpm
yum clean all
yum upgrade

aaand... no offer of zabbix updates other than the agent. This sucks, but let's see, where are the packages I want? Yum says nowhere, but looking at the online repository I found that they were moved to the "deprecated" sub-repo.

https://repo.zabbix.com/zabbix/4.4/rhel/6/i386/deprecated/

Ok, but why don't I see them? That's because this subrepo is disabled in the zabbix yum repo file:

[zabbix-deprecated]
name=Zabbix Official Repository deprecated - $basearch
baseurl=http://repo.zabbix.com/zabbix/4.4/rhel/6/$basearch/deprecated
enabled=0

I enabled this (enabled=1), and now I got the binaries I needed for the upgrade to go smoothly.

Let's visit the web interface to confirm everything works ok (the version at the bottom should now say 4.4.10 instead of 4.2.8)... and yes, ANOTHER error, because the database is older than the just-upgraded zabbix.

It seems that I forgot to start the services I had stopped :P After the upgrade, when the zabbix-server process starts it checks the db, and if it finds it outdated an update begins; after a couple of minutes the interface was back online!

Now that everything is in order, just two more things:

  • If you use a firewall, you should open some necessary ports for zabbix and apache. I don't know what you use, so I'll just throw in some generic iptables:

iptables -I INPUT -p tcp -m tcp --dport 10051 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 10050 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 80 -j ACCEPT
/etc/init.d/iptables save

  • If you have another php installed (maybe the default 5.3.3) you can select the 5.6.40 build from a .htaccess file at /usr/share/zabbix/ (make it if it is not there) like this:

AddHandler application/x-httpd-php56 .php .php5 .phtml

Wish me happy monitoring!

posted by qubix on November 23, 2020

A client on cPanel shared hosting notifies you that, every so often, they receive emails saying that a message could not be delivered to some unknown address.

The reason this may happen is that, somehow, somewhere, an unknown forwarder that nobody set up has ended up on the account. How it got there could be a hacked server, a hacked cPanel account, or the client's hacked PC being used to access webmail.

The following steps are for checking whether this is indeed such a case:

1) check the email forwarders from the cPanel account

2) check the email filters

Aha! A mail filter named "." (so that nobody would notice it) had added a forwarder to the client's email.

Mystery Solved!

The physical location of the filter was: /home/user/etc/domain.tld/emailuser/filter.yaml

Contents of the filter:

filter:
  -
    actions:
      -
        action: deliver
        dest: bogus@host.tld
      -
        action: save
        dest: $home/mail/user/info/INBOX
    filtername: .
    rules:
      -
        match: contains
        opt: or
        part: "$header_from:"
        val: "@"
    unescaped: 1
version: '2.2'


posted by qubix on June 15, 2020

After migrating a virtual machine running CentOS Linux from a failing XenServer cluster to a Hyper-V based cluster, it hung on boot with a blinking cursor.

Although there can be different reasons for this, in my case the problem was with the rhgb quiet kernel boot parameters. I changed them to console=tty0 and the boot process continued normally.
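For reference, on a CentOS 6 guest this is a one-line edit on the kernel line of /boot/grub/grub.conf (kernel version and root device below are illustrative):

# before
kernel /vmlinuz-2.6.32-754.el6.x86_64 ro root=/dev/sda1 rhgb quiet
# after
kernel /vmlinuz-2.6.32-754.el6.x86_64 ro root=/dev/sda1 console=tty0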

Other obstacles you could face are:
- different disk device naming, from hdX to sdX
- the eth0 network interface not working; add a new one, eth1

posted by qubix on April 10, 2020

If you have installed Virtualmin and the CSF SPI firewall, and you see the warning

"Check for DNS recursion restrictions in Virtualmin"

after you hit the "Check server security" button, here is what you have to do to avoid your DNS server being used for random queries by random IPs:


1) Go to Webmin -> Servers -> Bind DNS server
2) Hit "Edit config file"
3) place the following before "options {":

acl "trusted" { 127.0.0.1; };

4) now place the following inside the options block:


    recursion yes;
    allow-recursion { trusted;};
    allow-notify { trusted;};
    allow-transfer { trusted;};
    forwarders {127.0.0.1;};


5) save and restart dns server
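To verify the restriction works, compare a recursive lookup from the server itself (127.0.0.1 is in the trusted ACL) with one from an outside host (your.server.ip is a placeholder):

# from the server: recursion allowed
dig @127.0.0.1 example.org A

# from any external host: should be refused
dig @your.server.ip example.org A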

posted by qubix on March 8, 2020

TOOLS USED:

* centos 6 with EPEL / mysql 5.1 64bit
* undrop-for-innodb (https://github.com/twindb/undrop-for-innodb)
* mysql-utilities 1.6 (https://github.com/mysql/mysql-utilities)
* ...and luck...

0) Prepare recovery environment

install centos 6 final version 64bit in a vm or spare pc (preferably a VM)

 yum update

install epel repo

 yum install epel-release

install some stuff

 yum install nano mc zip flex make gcc bison 

install mysql server

 yum install mysql-server 


change mysql config /etc/my.cnf with the following:


[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
symbolic-links=0
max_connections = 2500
query_cache_limit = 2M
tmp_table_size=200M
query_cache_size=150M
key_buffer_size=300M
max_heap_table_size=300M
max_allowed_packet=500M
net_read_timeout=600
net_write_timeout=180
interactive_timeout=86400
log_error=/var/log/mysql_error.log
innodb_file_per_table=1
innodb_force_recovery=1

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

execute the following to make the mysql error log file

touch /var/log/mysql_error.log && chmod 660 /var/log/mysql_error.log
chown mysql:mysql /var/log/mysql_error.log
/etc/init.d/mysqld restart

read about innodb recovery levels, educate yourself!

I recommend installing a minimum desktop environment like lxde or xfce, and if in a VM guest additions to enable shared clipboard and seamless mouse integration

1) install software

git clone the above-mentioned tools

The first problem: mysql-utilities requires the Python connector.

yum install mysql-connector-python.noarch

ok, so now to install mysql-utilities, run python setup.py install

undrop-for-innodb has a Makefile, so run make to compile it (it's written in C, using a bison/yacc-generated parser)

Our database was made with MySQL 5.7, not the 5.1 we have, so obviously we will face trouble along the way.

2) grab the db table structures using the mysqlfrm util

We'll use the diagnostic mode of mysqlfrm because we run mysql 5.1 instead of 5.7

mysqlfrm --diagnostic /where/thedata/reside/*.frm > ~/db_structure.sql

Do not try to do it the spawned-server way unless the files you have were generated by the same mysql version as the one running in the recovery environment.
This produces CREATE TABLE statements for all the frms we have.

3) ok now we have our table structure

First open this db_structure.sql file and replace all lines having CREATE TABLE mydb.mytable ( with CREATE TABLE mytable (

or else the yyparser will fail in the next step!
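That replacement can be done in one go with sed (a sketch, assuming the statements look exactly like CREATE TABLE mydb.mytable ( as above; adjust if your dump uses backquoted names):

sed -i 's/^CREATE TABLE [^.]*\./CREATE TABLE /' ~/db_structure.sql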

We now have to copy each CREATE TABLE statement to its own file. E.g. if we have a table Atom.frm and another one Objects.frm, we should copy each CREATE TABLE statement to its own separate .sql file, so we'll have two files, Atom_create.sql and Objects_create.sql (you'll see why). A scripted sketch follows below.
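The splitting can be scripted too; a rough awk sketch that writes each statement into <table>_create.sql, assuming every CREATE TABLE block ends at a line with a trailing semicolon:

awk '/^CREATE TABLE/ { tbl = $3; gsub(/[`(]/, "", tbl); out = tbl "_create.sql" }
     out { print > out }
     /;[[:space:]]*$/ { out = "" }' ~/db_structure.sql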

4a) now let's move to the TwinDB recovery tool

We'll use the stream_parser to extract data from our ibd files. Because we may have many, many tables, let's make our life a little easier:


 cd ourdbdirectory/
 echo '#!/bin/sh' > ~/table_data_ibd.sh
 ls -1 *.ibd >> ~/table_data_ibd.sh
 sed -i 's/^/\.\/stream_parser -f ~\/ourdbdirectory\//'  ~/table_data_ibd.sh
 chmod +x ~/table_data_ibd.sh
 cp ~/table_data_ibd.sh where_undrop-for-innodb_is/
 cd where_undrop-for-innodb_is/
 ./table_data_ibd.sh

this will generate all the files needed for the next step. It essentially dumps the data pages from the .ibd files so we can then construct MySQL LOAD DATA files which we can import into our db again.

4b) now moving to the c_parser

the general command is:

./c_parser -6f pages-table.ibd/FIL_PAGE_INDEX/ -t table-create.sql > dumps/default/table 2>dumps/default/table.sql

-6f: 6 means the ibd file was generated by MySQL 5.6+ (in this case it was 5.7), f specifies the .page file we are going to parse
-t table-create.sql: the file that contains the CREATE TABLE statement we generated previously
> dumps/default/table: the dumped data will be in this file. This is actually a text file compatible with the LOAD DATA LOCAL INFILE command. dumps/default is simply the folder I used for storing the exported data.
2>dumps/default/table.sql: this is the .sql file which will contain the LOAD DATA LOCAL INFILE statement. So in the end we can simply run this file to import the data.

Again, because we may have many, many files, let's make our life easier using good ole linux cli utils:

 echo '#!/bin/sh' > ~/table_parser_data.sh
 find . -maxdepth 1 -type f -exec echo './c_parser -6f pages-{}/FIL_PAGE_INDEX/ -t ~/ourdatabasedir/{}_create.sql > dumps/default/{} 2>dumps/default/{}.sql' \; | grep ibd | sed 's/.\///2g' | sed 's/.ibd//2g' >> ~/table_parser_data.sh
 chmod +x ~/table_parser_data.sh && cp ~/table_parser_data.sh where_undrop-for-innodb_is/

now let's run it


 cd where_undrop-for-innodb_is/
 ./table_parser_data.sh



When it is finished, you'll see that a lot of files have been created in the dumps folder, with the LOAD DATA statement and the data for each table.

We can import them now in our db and see what happens!

Just copy the sql from the .sql files and run it in phpmyadmin, or import from the cli. You can concatenate all of them so you'll have to import only one file.

Beware: mysql may deny LOAD DATA LOCAL by default, or if you use phpmyadmin it might be disabled by the php settings.
In any case, to enable it go to (a per-session client-side alternative is shown below):
- my.cnf and add the line local_infile=ON, or if it is already present change its value to ON
- php.ini and add mysqli.allow_local_infile=On, or uncomment it if it is already there
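From the CLI, local infile can also be enabled per session on the client side; e.g. to import one of the dumps generated above (ourdatabase is a placeholder):

mysql --local-infile=1 -u root -p ourdatabase < dumps/default/table.sql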


Check the .sql files for possible errors thrown by the twindb program. Its parser is not error-free and will complain about otherwise valid HTML.

If during import you face an illegal utf8 character error, you can either
- change the sql in .sql files and instead of utf8 put latin1
- convert the data files (not the .sql files) with iconv. You can do something like


find . -type f -print -exec iconv -f us-ascii -t utf-8 {} -o {}.utf8 \;

