posted by qubix on August 5, 2021

For some reason, Dell has not provided OMSA packages for Debian since version 8 (jessie).

No one knows why, but don't despair, there is a solution: install the Ubuntu packages instead!

Yup, since Ubuntu is derived from Debian, it should be possible to install OMSA from there.

Let's see how:

1) Install dell ubuntu based repository

$ echo 'deb bionic main' | sudo tee -a /etc/apt/sources.list.d/

2) Get and install the gpg key for it:

$ wget

$ apt-key add 0x1285491434D8786F.asc

3) $ apt update

4) Install the missing dependencies. Download each package with wget (one wget per package), then install them:

dpkg -i libwsman-curl-client-transport1_2.6.5-0ubuntu3_amd64.deb
dpkg -i libwsman-client4_2.6.5-0ubuntu3_amd64.deb
dpkg -i libwsman1_2.6.5-0ubuntu3_amd64.deb
dpkg -i libwsman-server1_2.6.5-0ubuntu3_amd64.deb
dpkg -i libcimcclient0_2.2.8-0ubuntu2_amd64.deb
dpkg -i openwsman_2.6.5-0ubuntu3_amd64.deb
dpkg -i cim-schema_2.48.0-0ubuntu1_all.deb
dpkg -i libsfcutil0_1.0.1-0ubuntu4_amd64.deb
dpkg -i sfcb_1.4.9-0ubuntu5_amd64.deb
dpkg -i libcmpicppimpl0_2.0.3-0ubuntu2_amd64.deb

5) Install these packages to avoid errors and for omreport

$ apt install libncurses5 libcurl4-openssl-dev

6) Install omsa

$ apt install srvadmin-all

7) Tell omsa not to check our server's generation:

$ touch /opt/dell/srvadmin/lib64/openmanage/IGNORE_GENERATION

8) Test command to retrieve your physical disk info:

$ /opt/dell/srvadmin/bin/omreport storage pdisk controller=0

If all went smoothly you will see a list of the physical disks attached to your controller!
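If you want to keep an eye on the disks from a script or cron job, the report is easy to grep. Here is a minimal sketch; the "ID"/"State" field labels follow omreport's usual "Label : value" text layout, but double-check against your controller's actual output:

```shell
# Print the IDs of physical disks whose State is not "Online".
# Assumes omreport's usual "Label : value" text layout.
check_pdisks() {
    awk '
        /^ID[[:space:]]*:/    { id = $NF }
        /^State[[:space:]]*:/ { if ($NF != "Online") print id }
    '
}

# Try it on canned output instead of a live controller:
printf 'ID : 0:0:0\nState : Online\nID : 0:0:1\nState : Failed\n' | check_pdisks
# prints 0:0:1
```

Against a live box you would pipe the real report through it: /opt/dell/srvadmin/bin/omreport storage pdisk controller=0 | check_pdisks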

-- Info from various posts on reddit, proxmox forum, dell website and experimentation!

posted by qubix on December 26, 2020

Well, I had this old monitoring VM that needed to go from Nagios-based monitoring to a Zabbix-based one.


The first culprit was that CentOS 6 had been EOL for ages, so many things didn't work or needed some kind of fix.

This box was stuck at CentOS 6.6. Since version 6 is EOL, we have to use the vault repos to update to the latest 6 sources.

So let's change the base repo contents with these:

[C6.10-base]
name=CentOS-6.10 - Base
baseurl=$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=1
metadata_expire=never

[C6.10-updates]
name=CentOS-6.10 - Updates
baseurl=$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=1
metadata_expire=never

[C6.10-extras]
name=CentOS-6.10 - Extras
baseurl=$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=1
metadata_expire=never

[C6.10-contrib]
name=CentOS-6.10 - Contrib
baseurl=$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=0
metadata_expire=never

[C6.10-centosplus]
name=CentOS-6.10 - CentOSPlus
baseurl=$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
enabled=0
metadata_expire=never

Before doing anything else, let's grab the EPEL repo for this CentOS:

rpm --import
rpm -Uvh
yum install -y

now lets update stuff

yum clean all && yum update

ok, now we have updated our system to the latest possible software


The second culprit was that the zabbix frontend requires PHP 5.4 and up, but CentOS 6 ships 5.3.3. We will not use the remi repos here; they are not complete and no one knows how long they will support this CentOS version. Instead... let's install PHP from source! yeaaahh :P

  • install the required packages for compilation:

yum install autoconf libtool re2c bison libxml2-devel bzip2-devel libcurl-devel libpng-devel libicu-devel gcc-c++ libmcrypt-devel libwebp-devel libjpeg-devel openssl-devel libxslt-devel -y

  • grab php 5.6.40 (the latest php 5.6) and untar/unzip the contents:

curl -O -L

tar -xvf php-5.6.40.tar.gz
cd php-src-php-5.6.40/

now lets compile php

./buildconf --force

after we buildconf, we can customize what we want to compile PHP with. See ./configure --help to satisfy your needs

let's continue with our config:

./configure --prefix=/usr/local/php56 --with-apxs2=/usr/sbin/apxs \
  --with-freetype-dir=/usr/include/freetype2 --disable-short-tags \
  --enable-xml --enable-cli --with-openssl --with-pcre-regex --with-pcre-jit \
  --with-zlib --enable-bcmath --with-bz2 --with-curl --enable-exif --with-gd \
  --enable-intl --with-mysqli --enable-pcntl --with-pdo-mysql --enable-soap \
  --enable-sockets --with-xmlrpc --enable-zip --with-webp-dir --with-jpeg-dir \
  --with-png-dir --enable-json --enable-hash --enable-mbstring --with-mcrypt \
  --enable-libxml --with-libxml-dir --enable-ctype --enable-calendar \
  --enable-dom --enable-fileinfo --with-mhash --with-iconv --enable-opcache \
  --enable-phar --enable-simplexml --with-xsl --with-pear

oops, error: apxs what? (apxs is used to build the apache php module) Install httpd-devel:

yum install httpd-devel

ok, let's run the above again aaand... oops, error: freetype something... ok, install freetype-devel:

yum install freetype-devel

finally we move on:

make clean
make
make test
make install

(make test will execute maaany tests. It will probably fail some and ask you to submit a report to the php devs)

copy the development php.ini to our shiny new php 5.6 install:

cp php.ini-development /usr/local/php56/lib/php.ini

edit it and change max_execution_time, post_max_size, upload_max_filesize etc. to what zabbix expects (16M for post_max_size, 2M for upload_max_filesize, 300 for execution time), and also change date.timezone to your timezone
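For reference, the relevant php.ini lines end up looking something like this (the values are the usual zabbix frontend prerequisites; the timezone is just an example):

```ini
; /usr/local/php56/lib/php.ini
max_execution_time = 300
max_input_time = 300
post_max_size = 16M
upload_max_filesize = 2M
date.timezone = Europe/Athens
```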


Fortunately we can install a recent mysql, not that archaic 5.1 that comes with centos 6.

Let's install mysql community edition 8 !

rpm -ivh
yum update
yum install mysql-community-server
service mysqld start

Well..that was it...


By now I think I have setup my weird environment, so it is time to install zabbix!

rpm -Uvh
yum install zabbix-server-mysql zabbix-web-mysql zabbix-agent

Now lets copy the apache configuration file from the docs of zabbix

cp /usr/share/doc/zabbix-web-*/httpd22-example.conf /etc/httpd/conf.d/zabbix.conf

Edit the configuration file to update the timezone to something like php_value date.timezone Europe/Athens

vi /etc/httpd/conf.d/zabbix.conf

Ok, now we can create the zabbix database:

mysql -u root -p

oops... the mysql root user already has a password set? Hmmm, it seems that although I didn't run the mysql_secure_installation utility, the mysql installation set some root password. Where can it be... maybe in /var/log/mysqld.log

grep 'temporary password' /var/log/mysqld.log

and yes, there was a temporary password set. I think it is a good time to run the mysql secure utility, set a new pass (the temporary one had expired anyway) and answer "Y" to the security options.

After that I can login to mysql to create zabbix database.

create database zabbix_db character set utf8 collate utf8_bin;
create user zabbix_dbuser@localhost identified by 'some_decent_password_folks';
grant all on zabbix_db.* to zabbix_dbuser@localhost;
quit;

(MySQL 8 no longer accepts GRANT ... IDENTIFIED BY, so the user has to be created first.)

We can import the db schema like this:

cd /usr/share/doc/zabbix-server-mysql*/
zcat create.sql.gz | mysql -u zabbix_dbuser -p zabbix_db

After creating the db, update the zabbix_server.conf file with our new database, user and creds.

Ok, I am ready to start everything and also make it start on boot, so let's do it:

service zabbix-server start
service zabbix-agent start
service httpd start
chkconfig zabbix-server on
chkconfig zabbix-agent on
chkconfig httpd on
chkconfig mysqld on

Now that everything is up, I'll visit our new and polished web interface to finalize setup http://mydomain.tld/zabbix/

and of course... another error at the database step: Error connecting to database: No such file or directory

Wait ... zabbix web interface cannot find the mysql socket file? Let's try instead of localhost...

...aaaand yet another error but predictable this time: Error connecting to database: Server sent charset unknown to the client. Please, report to the developers

It seems that MySQL 8's default charset is "utf8mb4" and that old zabbix doesn't know it. This is easily fixable though; just put these in /etc/my.cnf (if there isn't one, make it):

[client]
default-character-set=utf8

[mysql]
default-character-set=utf8

[mysqld]
collation-server = utf8_unicode_ci
character-set-server = utf8
default_authentication_plugin = mysql_native_password

Restart mysqld service and this error is gone.

So, this took forever, but it is over. Or is it? Wait... there is a newer version that I can use: zabbix 4.4, which has the newer agent2. I SHOULD DO AN UPGRADE!! Oh well, that is easy: install the newer rpm release and upgrade. Right? RIGHT?

rpm -Uvh
yum clean all
yum upgrade

...but yum didn't offer any zabbix updates other than the agent. This sucks, but let's see: where are the packages I want? Yum says nowhere, but looking at the online repository I found that they have been moved to the "deprecated" sub-repo.

Ok, but why don't I see them? That's because this sub-repo is disabled in the zabbix yum repo file:

[zabbix-deprecated]
name=Zabbix Official Repository deprecated - $basearch
baseurl=$basearch/deprecated
enabled=0

I enabled this, and now I get the binaries I need for the upgrade to go smoothly.

Let's visit the web interface to confirm everything works ok; the version at the bottom should now say 4.4.10 instead of 4.2.8. Aaand yes, ANOTHER error, because the database is older than the freshly upgraded zabbix.

It seems that I forgot to start the services I had stopped :P After the upgrade, when the zabbix-server process starts, it checks the db; if it finds it outdated, an update begins, and after a couple of minutes the interface was back online!

Now that everything is in order, just two more things:

  • If you use a firewall, you should open the necessary ports for zabbix and apache. I don't know what you use, so I'll just throw out some generic iptables:

iptables -I INPUT -p tcp -m tcp --dport 10051 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 10050 -j ACCEPT
iptables -I INPUT -p tcp -m tcp --dport 80 -j ACCEPT
/etc/init.d/iptables save

  • If you have another php installed (maybe the default 5.3.3), you can use the 5.6.40 build from a .htaccess file at /usr/share/zabbix/ (create it if it is not there) like this:

AddHandler application/x-httpd-php56 .php .php5 .phtml

Wish me happy monitoring!

posted by qubix on November 23, 2020

A customer on cpanel shared hosting lets you know that every so often he gets emails telling him that a mail could not be delivered to some unknown address.

The reason this may happen is that, somehow, an unknown forwarder that nobody set up has ended up in the account. How it got there could be a hacked server, a hacked cpanel account, or a hacked customer PC that logs in through webmail.

The following steps are to check whether this is indeed such a case:

1) check the email forwarders in the cpanel account

2) check the email filters

Aha! A mail filter named "." (so that nobody would notice it) had added a forwarder to the customer's email.

Mystery Solved!

The physical location of the filter was: /home/user/etc/domain.tld/emailuser/filter.yaml

Contents of the filter:

        action: deliver
        dest: bogus@host.tld
        action: save
        dest: $home/mail/user/info/INBOX
    filtername: .
        match: contains
        opt: or
        part: "$header_from:"
        val: "@"
    unescaped: 1
version: '2.2'

posted by qubix on June 15, 2020

After migrating a virtual machine running CentOS Linux from a failing XenServer cluster to a Hyper-V based cluster, it hung on boot with a blinking cursor.

Although there can be different reasons for this, in my case the problem was with the rhgb quiet kernel boot parameters. I changed them to console=tty0 and the boot process continued normally.

Other obstacles you could face are:
- different disk device naming, from hdX to sdX
- the eth0 network interface not working; add a new one as eth1
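The kernel parameter fix can be scripted if you have many guests to migrate. A hedged sketch (the sample kernel line is made up; on CentOS 6 the grub config is usually /boot/grub/grub.conf, and the stock parameters are "rhgb quiet"):

```shell
# Replace the "rhgb quiet" parameters with "console=tty0" on a kernel line.
fix_kernel_line() {
    sed 's/rhgb quiet/console=tty0/'
}

echo 'kernel /vmlinuz-2.6.32 ro root=/dev/sda1 rhgb quiet' | fix_kernel_line
# -> kernel /vmlinuz-2.6.32 ro root=/dev/sda1 console=tty0
```

Run it against the grub config (fix_kernel_line < /boot/grub/grub.conf), check the output, then write it back.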

posted by qubix on April 10, 2020

If you have installed virtualmin and csf spi firewall and you see the warning

"Check for DNS recursion restrictions in Virtualmin"

after you hit the "Check server security" button,
here is what you have to do to avoid your DNS server being used for random queries by random IPs:

1) Go to Webmin -> Servers -> Bind DNS server
2) Hit "Edit config file"
3) place before "options {" the following

acl "trusted"{;};
4) inside options block now place the following

    recursion yes;
    allow-recursion { trusted;};
    allow-notify { trusted;};
    allow-transfer { trusted;};
    forwarders {;};

5) save and restart dns server
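For illustration, a populated version of the "trusted" acl from step 3 might look like this (the addresses are placeholders; list your own networks):

```
acl "trusted" {
        127.0.0.1;
        192.0.2.0/24;   // replace with your networks
};
```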

posted by qubix on March 8, 2020


* centos 6 with EPEL / mysql 5.1 64bit
* undrop-for-innodb
* mysql-utilities 1.6
* ...and luck...

0) Prepare recovery environment

install centos 6 final version 64bit in a vm or spare pc (preferable a VM)

 yum update

install epel repo

 yum install epel-release

install some stuff

 yum install nano mc zip flex make gcc bison 

install mysql server

 yum install mysql-server 

change mysql config /etc/my.cnf with the following:

max_connections = 2500
query_cache_limit = 2M


execute the following to create the mysql error log file

 touch /var/log/mysql_error.log && chmod 660 /var/log/mysql_error.log
 chown mysql:mysql /var/log/mysql_error.log
 /etc/init.d/mysqld restart

read about innodb recovery levels, educate yourself!

I recommend installing a minimum desktop environment like lxde or xfce, and if in a VM guest additions to enable shared clipboard and seamless mouse integration

1) install software

git clone the above mentioned tools

1st problem: mysql-utilities requires the python connector.

yum install mysql-connector-python.noarch

ok so now to install mysql-utilities, run

undrop-for-innodb has a makefile, so run $ make to compile it (it's written in C, using a bison-generated parser via yyparse)

Our database was made with mysql 5.7, not the 5.1 we have, so obviously we will face trouble along the way.

2) grab the db table structure using the mysqlfrm util

We'll use the diagnostic mode of mysqlfrm because we run mysql 5.1 instead of 5.7

mysqlfrm --diagnostic /where/thedata/reside/*.frm > ~/db_structure.sql

Do not try to do it with the spawned-server method unless the .frm files you have were generated by the same mysql version as the one running in the recovery environment.
This produces CREATE TABLE statements for all the frms we have.

3) ok now we have our table structure

First open this db_structure.sql file and replace all lines having CREATE TABLE mydb.mytable ( with CREATE TABLE mytable (

or else the yyparser will fail in the next step!
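The replace doesn't have to be done by hand; a sed one-liner can do it. A sketch, assuming the statements look exactly like the output above (no backticks, schema prefix before the first dot):

```shell
# Strip the "mydb." schema prefix from CREATE TABLE lines.
strip_schema() {
    sed 's/^CREATE TABLE [^.]*\./CREATE TABLE /'
}

echo 'CREATE TABLE mydb.mytable (' | strip_schema
# -> CREATE TABLE mytable (
```

Run it over the whole dump: strip_schema < ~/db_structure.sql > ~/db_structure_clean.sql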

We now have to copy each CREATE TABLE to its own file. Eg we have a table Atom.frm and another one Objects.frm. We should copy each CREATE TABLE statement accordingly to each own separate table sql file so we'll have 2 files, Atom.sql and Objects.sql (you'll see why).
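The copying can also be scripted. A sketch under the assumption that each statement starts with "CREATE TABLE <name>" and ends with a line closing in ";" (per-table files are written to the current directory):

```shell
# Write each CREATE TABLE statement from a dump into <table>.sql.
split_tables() {
    awk '
        /^CREATE TABLE / { name = $3; sub(/\(.*/, "", name); out = name ".sql" }
        out != ""        { print > out }
        /;[[:space:]]*$/ { out = "" }
    ' "$1"
}
```

Run it as split_tables ~/db_structure.sql from the directory where the per-table .sql files should land.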

4a) now lets move to TWINDB recovery tool

We'll use the stream_parser to extract data from our ibd files. Because we may have many many tables, let's make our life a little easier:

 cd ourdbdirectory/
 echo '#!/bin/sh' > ~/
 ls -1 *.frm >> ~/
 sed -i 's/^/\.\/stream_parser -f ~\/ourdbdirectory\//'  ~/
 chmod +x ~/
 cp where_undrop-for-innodb_is/
 cd where_undrop-for-innodb_is/

this will generate all the needed files for the next step. It essentially dumps the data pages from the ibd files so we can then construct mysql load data statements which we can import into our db again.

4b) now moving to the c_parser

the general command is:

./c_parser -6f table.ibd/FIL_PAGE_INDEX/ -t table-create.sql > dumps/default/table 2>dumps/default/table.sql

-6f: 6 means the ibd file was generated by MySQL 5.6+ (in this case it was 5.7); f specifies the .page file we are going to parse
-t table-create.sql: the file containing the CREATE TABLE statement we generated previously
> dumps/default/table: the dumped data will go in this file. This is actually a text file compatible with the LOAD DATA LOCAL INFILE command. dumps/default is simply the folder I used for storing the exported data.
2>dumps/default/table.sql: this is the .sql file which will contain the LOAD DATA LOCAL INFILE statement, so in the end we can simply run this file to import the data.

Again, because we may have many many files, let's make our life easier using good ole linux cli utils:

 echo '#!/bin/sh' > ~/
 find . -maxdepth 1 -type f -exec echo './c_parser -6f pages-{}/FIL_PAGE_INDEX/ -t ~/ourdatabasedir/{}_create.sql > dumps/default/{} 2>dumps/default/{}.sql' \; | grep ibd | sed 's/.\///2g' | sed 's/.ibd//2g' >> ~/
 chmod +x ~/ && cp ~/ where_undrop-for-innodb_is/

now let's run it

 cd where_undrop-for-innodb_is/
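If the find/sed pipeline feels fragile, the same command list can be generated with a plain loop. A sketch (the paths mirror the placeholders above):

```shell
# Emit one c_parser command per .ibd file found in the given directory.
gen_parser_cmds() {
    dir=$1
    for ibd in "$dir"/*.ibd; do
        t=$(basename "$ibd" .ibd)
        echo "./c_parser -6f pages-$t.ibd/FIL_PAGE_INDEX/ -t $dir/${t}_create.sql > dumps/default/$t 2>dumps/default/$t.sql"
    done
}
```

Redirect its output into a script file and run that from the undrop-for-innodb directory.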

When it is finished, you'll see that a lot of files have been created in the dump folder, with the load local statement and the data for each table.

We can import them now in our db and see what happens!

Just copy the sql from the .sql files and run them in phpmyadmin or import from cli. You can concat all of them so you'll have to import only one file.

Beware: mysql may deny LOAD DATA LOCAL by default, and if you use phpmyadmin it might be disabled by the php settings.
In any case, to enable it:
- in my.cnf, add the line local_infile=ON (or change its value to ON if it is already present)
- in php.ini, add mysqli.allow_local_infile=On (or uncomment it, if it is already there)

Check the sql files for possible errors thrown by the twindb program. Its parser is not error-free and will complain about otherwise valid html.

If during the import you face an illegal utf8 character error, you can either
- change the sql in the .sql files and put latin1 instead of utf8
- convert the data files (not the .sql files) with iconv. You can do something like

find . -type f -print -exec iconv -f us-ascii -t utf-8 {} -o {}.utf8 \;

posted by qubix on November 13, 2019

The other day I was asked to check a hacked trunk in a pbx box. While I was digging through the logs, the owner complained that he couldn't use a tool from the pbx interface, the "Call Event Logging" tool, because of some error.

The error was: "General error: 1194 Table 'cel' is marked as crashed and should be repairedFile:/var/www/html/admin/libraries/BMO/Database/PDOStatement.class.php:17"

That is a simple error to fix; the cel table is in the asteriskcdrdb database:

mysql asteriskcdrdb
repair table cel; 

That's all, the table was repaired:

| Table             | Op     | Msg_type | Msg_text                                      |
| asteriskcdrdb.cel | repair | info     | Key 1 - Found wrong stored record at 27271156 |
| asteriskcdrdb.cel | repair | warning  | Number of rows changed from 180284 to 180281  |
| asteriskcdrdb.cel | repair | status   | OK                                            |
3 rows in set (4.98 sec)

posted by qubix on October 18, 2018

Well I had created my new repo, made the files I wanted, and when I tried to push them:

error: src refspec master does not match any.

BUT... I had not committed anything yet, oops!

So the correct order is:

Create my new shiny git repo:

mkdir repo && cd repo
git init
git remote add origin /path/to/origin.git

Add the files I want

git add . 

Commit the changes

git commit -m "initial commit"

Then push them!

git push origin master


posted by qubix on April 1, 2018

install drush on cpanel based servers with EA4 per account

1) cpanel already ships composer if you use EA4: /opt/cpanel/composer/bin

2) enable jailed shell access to account

3) ssh to server with account creds or keys or root and then su to user

4) composer require drush/drush:7.*

5) go to .bashrc and make an alias

alias drush-master=~/.config/composer/vendor/drush/drush/drush

6) source .bashrc to make the changes immediately available

7) go to drush folder and do composer install to fetch dependencies

8) drush-master status to check it is working

9) change tmp to be inside USER home just in case

go to .bashrc and put the line


if you cannot edit .bashrc, write this on the cli: export TEMP=~/tmp

you can see the change in drush-master status
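Putting steps 5 and 9 together, the .bashrc additions end up something like this (the drush path matches the composer install above; ~/tmp is assumed to exist):

```shell
# ~/.bashrc additions for the per-account drush
alias drush-master=~/.config/composer/vendor/drush/drush/drush
export TEMP=~/tmp
```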

10) now go to ~/public_html/sites/default/

execute drush-master status and see that drush sees your website

posted by qubix on February 9, 2018

You want to flush your DNS cache in linux? Well it is very very easy :]

Let's say you use systemd-resolved:

$ systemd-resolve --flush-caches

Let's say you use anything else (nscd, dnsmasq, named etc)


just restart the service

$ systemctl restart nscd (or dnsmasq, named etc)

or if you do not use systemd, you could try

service nscd restart


PS: you probably need to have root privileges to execute the above commands
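All of the above fits in one small helper that prints the right command for whichever resolver you run (the service names are the common ones; adjust to your distro):

```shell
# Print the cache-flush command for a given resolver.
flush_cmd() {
    case "$1" in
        systemd-resolved)   echo "systemd-resolve --flush-caches" ;;
        nscd|dnsmasq|named) echo "systemctl restart $1" ;;
        *) echo "unknown resolver: $1" >&2; return 1 ;;
    esac
}

flush_cmd dnsmasq
# -> systemctl restart dnsmasq
```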