posted by qubix on September 27, 2023

Recently I was assigned the task of installing a more recent version of Node.js on an old laptop running Windows 7 x64.

The problem was that many packages/plugins/tools in the Node.js ecosystem require at least v14 of Node.js, and this system had v12 installed.

Now, because Windows 7 was EOL'ed by MS on January 14, 2020, the latest version I could find supporting Windows 7 was v13.14.0: more recent than the one installed, but still not v14.

After some searching I found a GitHub thread on the commit that made the installer refuse to proceed if Windows < 8.1, which mentioned an environment variable I could use: NODE_SKIP_PLATFORM_CHECK.

Unfortunately, this variable had no effect on the installer for some reason.

So I tried the old-school method: install the supported v13.14.0 first, then just copy-paste a more recent one, v14.17.6, over it.

And...It worked!

So, the steps to do it are:

1) install v13.14.0 if not installed already

2) download the zipped v14.17.6 version

3) set the NODE_SKIP_PLATFORM_CHECK variable globally and permanently to 1 (Control Panel -> System -> Advanced system settings -> Environment Variables button, or run `setx NODE_SKIP_PLATFORM_CHECK 1 /M` from an elevated command prompt)

4) close all apps/cmd windows that may be running the node.exe executable (double-check using the Windows Task Manager)

5) unzip v14.17.6 and copy its contents over the contents of Program Files\nodejs

6) test the new version in a cmd window: node --version

it should say v14.17.6

if you see "Node.js is only supported on Windows 8.1, Windows Server 2012 R2, or higher. Setting the NODE_SKIP_PLATFORM_CHECK environment variable to 1 skips this check, but Node.js might not execute correctly. Any issues encountered on unsupported platforms will not be fixed."

it means that you didn't set the environment variable, or it has not been applied yet.

So that's it, go on enjoy v14 on your windows 7 retro system!

posted by qubix on December 25, 2022

There is this neat panel, CWP (CentOS Web Panel) which has all the necessary tools to run a webhost and a nice interface both for the central root panel and the user one.

One thing it's been missing, though, is this scenario: a website is hosted on a CWP-powered webhost and the owner decides to change the domain name from olddomain to newdomain.

If you are coming from other panels (e.g. cPanel) you may be accustomed to this being supported. But that is not the case with CWP (Pro edition or not).

A quick search in the forums revealed some questions posted, but the usual answer was "it is not supported".

Well, enough talking, let's see how we can do it on our own!

Important: in this scenario there were no email accounts and the webhost was running just Apache (CWP offers support for Apache, NGINX, litespeed, Apache+NGINX+Varnish).

Also important: I assume that the newdomain is already pointed to your webhost!

First things first: how does CWP know which user has which domain?

Apparently there is a database called "root_cwp", and in this db a table "user" where you can find the user account you want to change, and then change its domain cell.


MariaDB [(none)]> USE root_cwp;
MariaDB [root_cwp]> UPDATE user SET `domain` = 'newdomain' WHERE `username` = 'useraccount';
MariaDB [root_cwp]> quit;

Next step: the Apache vhost. We have to copy the existing vhost and change the references in it from olddomain to newdomain:

cd /usr/local/apache/conf.d/vhosts/
cp olddomain.conf newdomain.conf
sed -i 's/olddomain/newdomain/g' newdomain.conf

Important: do not copy the ssl.conf because there is no ssl yet for this domain. We will generate it later after we have finished.

Next step: BIND dns zone. Copy the zone and change references again.

cd /var/named/
cp olddomain.db newdomain.db
sed -i 's/olddomain/newdomain/g' newdomain.db

Next substep: inform BIND that there is a new zone file. Edit /etc/named.conf and find the lines regarding the old domain (something like this):

// zone olddomain
zone "olddomain" {type master; file "/var/named/olddomain.db";};
// zone_end olddomain


and change the olddomain to newdomain

Next step: restart Apache and Bind

systemctl restart named
systemctl restart httpd
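The copy-and-sed steps for the vhost and the zone file can be wrapped in a small helper; a sketch, assuming GNU sed (as on CentOS) and the paths used above:

```shell
# hypothetical helper: copy a config file and rewrite every domain reference
copy_and_rename() {
    src="$1"; dst="$2"; old="$3"; new="$4"
    cp "$src" "$dst"
    sed -i "s/$old/$new/g" "$dst"
}

# usage, mirroring the steps above:
# copy_and_rename /usr/local/apache/conf.d/vhosts/olddomain.conf \
#                 /usr/local/apache/conf.d/vhosts/newdomain.conf olddomain newdomain
# copy_and_rename /var/named/olddomain.db /var/named/newdomain.db olddomain newdomain
```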

Hopefully all went well!
Now we can go and generate our brand new ssl from the root web interface, so the necessary changes will be handled by CWP for the new ssl to work.

That's all folks!

posted by qubix on October 8, 2022

As we all know, cPanel supports and advertises PowerDNS for DNSSEC, not BIND. While this is weird enough, I will not go into it.

Recently I had a situation where a client transferred a .com domain to a different registrar, and the website it pointed to, to a new server. The domain was signed, and since no one knew it, I was suddenly facing a serious DNS propagation problem.

Since Google Public DNS had no record of this domain, I checked what the heck was happening, and yes, that's when I found out it was a signed domain.

The problem had two solutions:
1) Remove the signing
2) Implement DNSSEC for this domain at the server so the chain of trust would be valid again.

Unfortunately, the registrar's support was terrible: I kept talking to people who clearly had no expertise on the matter, probably some call support center with scripted questions/answers. If you had the time and patience, I suppose they would eventually forward your case to some technical person.

But I didn't have the time. Every day that website was down, the client lost many, many euros in revenue, and the complaints were escalating by the hour.

So I thought to myself, there must be someone who has done this on a cPanel server... NOT! The cPanel forums had this question, and the answer was always "we do not support BIND for DNSSEC". Feature requests were left unanswered.

Well no worries, I could do it on my own!

The problem now was that I didn't want all of my zones automatically signed by BIND; I wanted to do it manually for only one domain.

I will not go into the pitfalls I fell into but, thank the eGods, BIND had EDNS support in cPanel, and CloudNS could transfer the zone along with the DNSSEC records, among other things.

So, here comes the actual fun!

===== linux cli steps =====
cd /var/named

#generate the two keys we will use to sign and validate our zone
#(it will take a loooong time without something like haveged; check
#cat /proc/sys/kernel/random/entropy_avail to see what happens to entropy
#availability while generating keys...)

#generate ZSK
dnssec-keygen -a RSASHA256 -b 1280 -n ZONE example.com

#generate KSK
dnssec-keygen -a RSASHA256 -b 2048 -f KSK -n ZONE example.com

#adjust ownership and rights on the two key files we generated
#(their filenames start with K)
chgrp named K*
chmod g=r,o= K*

# copy them to a safe location just in case
cp K* /root/

#change this zone's section in /etc/named.conf
zone "example.com" {
        type master;

        file  "/var/named/example.db";

        allow-query { any; };

        # DNSSEC keys Location (we could use a separate folder here)
        key-directory "/var/named/";

        # Publish and Activate DNSSEC keys
        auto-dnssec maintain;

        # Use Inline Signing
        inline-signing yes;
};

#add to the options section of /etc/named.conf
        dnssec-enable yes;
        dnssec-validation auto;
        //dnssec-lookaside auto; //this is not valid for newer versions of BIND

//let's set up logging for dnssec only (inside the logging section)
        channel dnssec_log {
                file "/var/log/named/dnssec.log";
                severity debug 3;
        };
        category dnssec { dnssec_log; };

#now, before signing the zone, we must put the public keys into our zone file
#so the signing tool knows which keys to sign the zone with
for key in `ls K*.key`; do
    echo "\$INCLUDE $key" >> example.db
done

#sign the zone
dnssec-signzone -A -3 $(head -c 1000 /dev/random | sha1sum | cut -b 1-16) -N INCREMENT -o example.com -t example.db

#you should get something like
#Verifying the zone using the following algorithms: RSASHA256.
#Zone fully signed:
#Algorithm: RSASHA256: KSKs: 1 active, 0 stand-by, 0 revoked
#                      ZSKs: 1 active, 0 stand-by, 0 revoked
#Signatures generated:                       65
#Signatures retained:                         0
#Signatures dropped:                          0
#Signatures successfully verified:            0
#Signatures unsuccessfully verified:          0
#Signing time in seconds:                 0.012
#Signatures per second:                5054.039
#Runtime in seconds:                      0.019

#fix ownership of the signed zone file
chown named:named example.db.signed

#reload bind
systemctl reload named

#DS records to put into the registrar's interface (from the dsset- file created
#during the signing; they can also be obtained by running dnssec-dsfromkey)
example.com.   IN DS 35056 8 1 B10CCE8B8C94F46E22451F66E860B7F804D2AC69
example.com.   IN DS 35056 8 2 296446D4769D4B38175B11ED71767483AD5BD9697AE9C1DD21A3BE9E670D54EE

#check validation (127.0.0.1 stands in for the BIND server's address)
(d=example.com; k=$(printf '%05d' "$(dig @127.0.0.1 +norecurse "$d". DNSKEY | dnssec-dsfromkey -f - "$d" | awk '{print $4;}' | sort -u)"); delv @127.0.0.1 -a <(sed -e '/^;/d;s/[ \t]\{1,\}/ /g;s/ [0-9]\{1,\} IN DNSKEY / IN DNSKEY /;s/ IN DNSKEY / /;s/^[^ ]* [^ ]* [^ ]* [^ ]* /&"/;:s;/"[^ ]*$/b t;s/\("[^ ]*\) /\1/;b s;:t;s/$/";/;H;$!d;x;s/^\n//;s/.*/trusted-keys {\n    &\n};/' /var/named/K"$d".+008+"$k".key) +root="$d" "$d". SOA +multiline)

#expected output:
; fully validated
example.com.            86400 IN SOA (
                                2022100118 ; serial
                                3600       ; refresh (1 hour)
                                1800       ; retry (30 minutes)
                                1209600    ; expire (2 weeks)
                                86400      ; minimum (1 day)
                                )
example.com.            86400 IN RRSIG SOA 8 2 86400 (
                                20221101074125 20221002064125 60423
                                WlIc4hYdolcN2z4o+UoPsSTVOZTj9fBSzRB63w== )
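As an aside, the cryptic key-tag extraction inside the long validation one-liner is just field 4 of the dnssec-dsfromkey output, zero-padded to five digits. A standalone sketch:

```shell
# a DS line in the form dnssec-dsfromkey prints; the key tag is field 4
ds="example.com. IN DS 35056 8 1 B10CCE8B8C94F46E22451F66E860B7F804D2AC69"
tag=$(printf '%s\n' "$ds" | awk '{print $4}')
printf '%05d\n' "$tag"    # prints 35056
```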

Two excellent tools to use for checking DNS status and the chain of trust are:


posted by qubix on February 7, 2022

Recently I faced a problem with a VPS in which AutoSSL did not want to generate a request for the VPS's hostname.

After cleaning crap off the appropriate DNS zone and fixing an NS record pointing to a non-functioning DNS server, I thought: yeah, we're good to go!

But cPanel had the other certificate AGAIN, and /usr/local/cpanel/bin/checkallsslcerts gave me a weird error: [WARN] The system failed to acquire a signed certificate from the cPanel Store. at bin/ line 653.

Putting aside that there is NO such Perl file, it occurred to me that maybe, because of the wrong NS record along with the other crap I found, there was a leftover certificate CSR for the hostname that cPanel did not erase for some reason.

And yes, there was one in /var/cpanel/hostname_cert_csrs.

I removed it, and now it runs without an error.

Or does it?? Now it throws a subtle [Note] (why??) that the hostname isn't covered by any of the subdomains.

Finally, the solution was mv /var/cpanel/hostname_cert_csrs{,.cpbkp} -v

For some reason the whole directory has to be backed up (moved aside), and then checkallsslcerts will run correctly, issuing a certificate for the hostname!

posted by qubix on December 26, 2021

Yesterday, I tried to install drush through composer to manage an old Drupal installation:

composer require
search package drush
version constraint 7.4.0

I faced the following error:

  Undefined index: process  

It seems that composer is trying to use a PHP function which I have disabled for obvious security reasons: proc_open.
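To confirm the diagnosis, you can grep the active ini for the disable_functions directive (the helper name and the path below are just examples):

```shell
# hypothetical helper: show the disable_functions line of a given php.ini,
# to see whether proc_open is blocked there
check_disabled() {
    grep -E '^ *disable_functions' "$1"
}

# usage: check_disabled /usr/local/lib/php.ini
```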

Since I could not enable it again, I had to devise a different approach.

1) make a new composer.php.ini

memory_limit = 4096M
max_execution_time = 0
date.timezone = "Europe/Athens"
realpath_cache_size = "4096K"

2) tell PHP to use it, adding the necessary extensions, in a one-liner

php -n -d -d -d -c /home/user/composer.php.ini /opt/composer/bin/composer require
Problem solved!

posted by qubix on August 5, 2021

For some reason, Dell has abandoned Debian since version 8 (jessie).

No one knows why, but don't despair, there is a solution: install the Ubuntu packages!

Yup, since Ubuntu is (more or less) a frozen Debian, it should be possible to install OMSA from there.

Let's see how:

1) Install dell ubuntu based repository

$ echo 'deb bionic main' | sudo tee -a /etc/apt/sources.list.d/

2) Get and install gpg keys for it $ wget

$ apt-key add 0x1285491434D8786F.asc

3) $ apt update

4) Download (wget) the following packages and install the missing dependencies:

dpkg -i libwsman-curl-client-transport1_2.6.5-0ubuntu3_amd64.deb
dpkg -i libwsman-client4_2.6.5-0ubuntu3_amd64.deb
dpkg -i libwsman1_2.6.5-0ubuntu3_amd64.deb
dpkg -i libwsman-server1_2.6.5-0ubuntu3_amd64.deb
dpkg -i libcimcclient0_2.2.8-0ubuntu2_amd64.deb
dpkg -i openwsman_2.6.5-0ubuntu3_amd64.deb
dpkg -i cim-schema_2.48.0-0ubuntu1_all.deb
dpkg -i libsfcutil0_1.0.1-0ubuntu4_amd64.deb
dpkg -i sfcb_1.4.9-0ubuntu5_amd64.deb
dpkg -i libcmpicppimpl0_2.0.3-0ubuntu2_amd64.deb

5) Install these packages to avoid errors and for omreport

$ apt install libncurses5 libcurl4-openssl-dev

6) Install omsa

$ apt install srvadmin-all

7) Tell OMSA not to check our server's generation:

$ touch /opt/dell/srvadmin/lib64/openmanage/IGNORE_GENERATION

8) Test command to retrieve your physical disk info:

$ /opt/dell/srvadmin/bin/omreport storage pdisk controller=0

If all went smoothly you will see a list of the physical disks attached to your controller!

-- Info from various posts on reddit, proxmox forum, dell website and experimentation!

posted by qubix on December 26, 2020

Well, I had this old monitoring VM that needed to go from Nagios-based monitoring to a Zabbix-based one.


The first culprit was that CentOS 6 has been EOL forever, so many things didn't work or needed some kind of fix.

Well, this thing was stuck at CentOS 6.6. Since version 6 is EOL, we have to use the vault repos to update to the latest 6.x sources.

So let's change the base repo contents with these:

[C6.10-base]
name=CentOS-6.10 - Base
baseurl=$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=1
metadata_expire=never

[C6.10-updates]
name=CentOS-6.10 - Updates
baseurl=$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=1
metadata_expire=never

[C6.10-extras]
name=CentOS-6.10 - Extras
baseurl=$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=1
metadata_expire=never

[C6.10-contrib]
name=CentOS-6.10 - Contrib
baseurl=$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=0
metadata_expire=never

[C6.10-centosplus]
name=CentOS-6.10 - CentOSPlus
baseurl=$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=0
metadata_expire=never

Before doing anything else, let's grab the EPEL repo for this CentOS:

rpm --import
rpm -Uvh
yum install -y

now lets update stuff

yum clean all && yum update

ok, now we have updated our system to the latest possible software


The second culprit was that the Zabbix frontend requires PHP 5.4 and up, but CentOS 6 has 5.3.3. We will not use the Remi repos here; they are not complete, and no one knows how long they will support this CentOS version. Instead... let's install PHP from source! yeaaahh :P

  • install required packages for compilation:

yum install autoconf libtool re2c bison libxml2-devel bzip2-devel libcurl-devel libpng-devel libicu-devel gcc-c++ libmcrypt-devel libwebp-devel libjpeg-devel openssl-devel libxslt-devel -y

  • grab php 5.6.40 (the latest PHP 5.6) and untar/unzip the contents:

curl -O -L

tar -xvf php-5.6.40.tar.gz
cd php-src-php-5.6.40/

now lets compile php

./buildconf --force

after we buildconf, we can customize what we compile PHP with. See ./configure --help about that to satisfy your needs

let's continue with our config:

./configure --prefix=/usr/local/php56 --with-apxs2=/usr/sbin/apxs --with-freetype-dir=/usr/include/freetype2 --disable-short-tags --enable-xml --enable-cli --with-openssl --with-pcre-regex --with-pcre-jit --with-zlib --enable-bcmath --with-bz2 --with-curl --enable-exif --with-gd --enable-intl --with-mysqli --enable-pcntl --with-pdo-mysql --enable-soap --enable-sockets --with-xmlrpc --enable-zip --with-webp-dir --with-jpeg-dir --with-png-dir --enable-json --enable-hash --enable-mbstring --with-mcrypt --enable-libxml --with-libxml-dir --enable-ctype --enable-calendar --enable-dom --enable-fileinfo --with-mhash --with-iconv --enable-opcache --enable-phar --enable-simplexml --with-xsl --with-pear

oops, an error: apxs what? (apxs is used to build the Apache PHP module). Install httpd-devel:

yum install httpd-devel

ok, let's run the above again aaand... oops, another error, freetype something... ok, install freetype-devel:

yum install freetype-devel

finally we move on:

make clean
make
make test
make install

(make test will execute maaany tests. It will probably fail some and ask you to submit a report to the PHP devs)

copy the development php.ini to our shiny new PHP 5.6 install:

cp php.ini-development /usr/local/php56/lib/php.ini

edit it and change max_execution_time, post_max_size, upload_max_filesize, etc. to what Zabbix expects (16M for post and upload, 300 for execution time); also change date.timezone to your timezone
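Those php.ini edits can be scripted; a sketch, assuming the stock directive names (post_max_size, upload_max_filesize) and the install prefix used above:

```shell
tune_php_ini() {
    # rewrite the limits Zabbix expects (16M post/upload, 300s execution time)
    # in the php.ini given as $1
    ini="$1"
    sed -i \
        -e 's/^max_execution_time = .*/max_execution_time = 300/' \
        -e 's/^post_max_size = .*/post_max_size = 16M/' \
        -e 's/^upload_max_filesize = .*/upload_max_filesize = 16M/' \
        -e 's|^;*date.timezone =.*|date.timezone = "Europe/Athens"|' \
        "$ini"
}

# usage: tune_php_ini /usr/local/php56/lib/php.ini
```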


Fortunately we can install a recent MySQL, not that archaic 5.1 that comes with CentOS 6.

Let's install mysql community edition 8 !

rpm -ivh
yum update
yum install mysql-community-server
service mysqld start

Well..that was it...


By now I think I have setup my weird environment, so it is time to install zabbix!

rpm -Uvh
yum install zabbix-server-mysql zabbix-web-mysql zabbix-agent

Now let's copy the Apache configuration file from the Zabbix docs:

cp /usr/share/doc/zabbix-web-*/httpd22-example.conf /etc/httpd/conf.d/zabbix.conf

Edit the configuration file to update the timezone to something like php_value date.timezone Europe/Athens

vi /etc/httpd/conf.d/zabbix.conf

Ok, now we can create the Zabbix database:

mysql -u root -p

oops... the MySQL root user already has a password set? Hmm, it seems that although I didn't run the mysql_secure_installation utility, the MySQL installation set some root pass. Where can it be... maybe in /var/log/mysqld.log:

grep 'temporary password' /var/log/mysqld.log

and yes, there was a temporary password set. I think it is a good time to run the mysql_secure_installation utility, set a new pass (the temp one was expired anyway) and answer "Y" to the security options.

After that I can login to mysql to create zabbix database.

create database zabbix_db character set utf8 collate utf8_bin;
-- MySQL 8 dropped IDENTIFIED BY inside GRANT, so create the user first:
create user zabbix_dbuser@localhost identified by 'some_decent_password_folks';
grant all on zabbix_db.* to zabbix_dbuser@localhost;
quit;

We can import the db schema like this:

cd /usr/share/doc/zabbix-server-mysql*/
zcat create.sql.gz | mysql -u zabbix_dbuser -p zabbix_db

After creating the db, update the zabbix_server.conf file with our new database, user and creds.

Ok, I am ready to start everything and make it all start on boot too, so let's do it:

service zabbix-server start
service zabbix-agent start
service httpd start
chkconfig zabbix-server on
chkconfig zabbix-agent on
chkconfig httpd on
chkconfig mysqld on

Now that everything is up, I'll visit our new and polished web interface to finalize the setup: http://mydomain.tld/zabbix/

and of course... another error at the database step: Error connecting to database: No such file or directory

Wait... the Zabbix web interface cannot find the MySQL socket file? Let's try 127.0.0.1 instead of localhost...

...aaaand yet another error, but a predictable one this time: Error connecting to database: Server sent charset unknown to the client. Please, report to the developers

It seems that MySQL 8 has the default charset "utf8mb4" and that old Zabbix doesn't know it. This is easily fixable though; just put these in /etc/my.cnf (if there isn't one, make it):

[client]
default-character-set=utf8

[mysql]
default-character-set=utf8

[mysqld]
collation-server = utf8_unicode_ci
character-set-server = utf8
default_authentication_plugin = mysql_native_password

Restart mysqld service and this error is gone.

So, this took forever, but it is over. Or is it? Wait... there is a newer version that I can use: Zabbix 4.4, which has the newer agent2. I SHOULD DO AN UPGRADE!! Oh well, that is easy: install the newer rpm release and upgrade. Right? RIGHT?

rpm -Uvh
yum clean all
yum upgrade

...but yum didn't offer any Zabbix updates other than the agent. This sucks, but let's see: where are the packages I want? Yum says nowhere, but looking at the online repository I found that they were moved to the "deprecated" sub-repo.

Ok, but why don't I see them? That's because this sub-repo is disabled in the Zabbix yum repo file:

[zabbix-deprecated]
name=Zabbix Official Repository deprecated - $basearch
baseurl=$basearch/deprecated
enabled=0

I enabled this, and now I got the binaries I needed for the upgrade to go smoothly.

Let's visit the web interface to confirm everything works ok (the version at the bottom should say 4.4.10 instead of 4.2.8) and... yes, ANOTHER error, because the database is older than the just-upgraded Zabbix.

It seems that I forgot to start the services I had stopped :P After the upgrade, when the zabbix-server process starts, it checks the db; if it finds it outdated, an update begins, and after a couple of minutes the interface was back online!

Now that everything is in order, just two more things:

  • If you use a firewall, you should open some necessary ports for Zabbix and Apache. I don't know what you use, so I'll just throw out some generic iptables:

iptables -I INPUT -p tcp -m tcp --dport 10051 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 10050 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 80 -j ACCEPT
/etc/init.d/iptables save

  • If you have another PHP installed (maybe the default 5.3.3) you can use the 5.6.40 from a .htaccess file at /usr/share/zabbix/ (create it if it is not there) like this:

AddHandler application/x-httpd-php56 .php .php5 .phtml

Wish me happy monitoring!

posted by qubix on November 23, 2020

A client on cPanel shared hosting notifies you that they frequently receive emails saying that a mail could not be delivered to some unknown address.

The reason this can happen is that somehow, somewhere, an unknown forwarder that nobody set up has been added to the account. How it got there could be a hacked server, a hacked cPanel account, or a hacked client PC that logs in through webmail.

The following steps are for checking whether this is indeed such a case:

1) check the email forwarders from the cPanel account

2) check the email filters

Aha! A mail filter named "." (so that nobody would notice it) had added a forwarder to the client's email.

Mystery Solved!

The physical location of the filter was: /home/user/etc/domain.tld/emailuser/filter.yaml

Contents of the filter:

filter:
  -
    actions:
      -
        action: deliver
        dest: bogus@host.tld
      -
        action: save
        dest: $home/mail/user/info/INBOX
    filtername: .
    rules:
      -
        match: contains
        opt: or
        part: "$header_from:"
        val: "@"
    unescaped: 1
version: '2.2'
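To sweep a whole server for similar rogue filters, something like this helps (a sketch; cPanel keeps user filters under /home, and the helper name is mine):

```shell
# list every 'deliver' action in cPanel filter.yaml files under a base path
find_mail_filters() {
    grep -rn --include=filter.yaml 'action: deliver' "$1" 2>/dev/null
}

# usage: find_mail_filters /home
```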

posted by qubix on June 15, 2020

After migrating a virtual machine running CentOS Linux from a failing XenServer cluster to a Hyper-V based cluster, it hung on boot with a blinking cursor.

Although there can be different reasons for this, in my case the problem was the rhgb quiet kernel boot parameters. I changed them to console=tty0 and the boot process continued normally.
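The parameter swap can be scripted against the grub config; a sketch, assuming the parameters appear as "rhgb quiet" (or "rhgb=quiet") and the CentOS 6 grub.conf path:

```shell
# swap the graphical-boot parameters for a console setting in a grub config
fix_boot_params() {
    sed -i 's/rhgb[= ]quiet/console=tty0/' "$1"
}

# usage: fix_boot_params /boot/grub/grub.conf
```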

Other obstacles you could face are:
- different disk device naming, from hdX to sdX
- the eth0 network interface not working; add a new one, eth1

posted by qubix on April 10, 2020

If you have installed Virtualmin and the CSF SPI firewall, and you see the warning

"Check for DNS recursion restrictions in Virtualmin"

after you hit the "Check server security" button, here is what you have to do to avoid your DNS server being used for random queries from random IPs:

1) Go to Webmin -> Servers -> Bind DNS server
2) Hit "Edit config file"
3) place before "options {" the following

acl "trusted"{;};
4) inside options block now place the following

    recursion yes;
    allow-recursion { trusted;};
    allow-notify { trusted;};
    allow-transfer { trusted;};
    forwarders {;};

5) save and restart dns server
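For reference, a filled-in version might look like the following; the addresses are placeholders (use your own trusted networks and forwarders):

```
acl "trusted" { 127.0.0.1; 192.0.2.0/24; };

options {
    recursion yes;
    allow-recursion { trusted; };
    allow-notify { trusted; };
    allow-transfer { trusted; };
    forwarders {; };
    ...
};
```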