N.B. Parallels have since fixed this in the latest version of Parallels Desktop 15, so it's no longer an issue.
A little while back I tweeted Parallels via their @ParallelsCares Twitter account to find out whether they could provide support for Parallels Tools on CentOS 8.
RHEL 8 was released on 7th May 2019 and Parallels 15 on 13th August 2019, so since CentOS is a free rebuild of RHEL with the commercial, copyright-protected parts removed, I presumed Parallels 15 would support it. However, with CentOS 8 not released until 24th September 2019, all they did was respond to say they didn't support it, with a link to their KB article.
I went about looking for a fix, and it turns out it’s fairly easy. It needs two things:
Pretending to Parallels that what you’re installing is RHEL 8
The EPEL Repo enabled
Here's exactly what I did:
Download the CentOS DVD image for the version that you want here
In Parallels, Start the new VM Installation Assistant
Select “Install Windows or another OS from a DVD or image file”
Select “Choose Manually”, then “Select a file…”, and find the DVD image you’ve just downloaded
It will claim “Unable to detect operating system”, click “Continue”
When the Prompt comes up to select your operating system, select “More Linux” > “Red Hat Enterprise Linux”, then proceed through to creating and booting the VM
Proceed with the install
Ensure the network is enabled under “Network & Host Name”
Ensure the drive is checked in “Installation Destination”
To fix the error in “Installation Source”, select “On the network” > “http://”, and enter the following in the text box: mirror.centos.org/centos/8/BaseOS/x86_64/os
Once it's downloaded the metadata, select the options you want for your install under "Software Selection"
Proceed with the install, and make sure you enter a root password
Once rebooted, open a terminal and install EPEL using the instructions here. Ensure you run the command to enable the PowerTools repository too (roughly as sketched after these steps).
Start the “Install Parallels Tools” process, ensuring that any existing mounted CD/DVD is unmounted. This is needed to mount the Parallels Tools DVD image.
Run the following commands in a terminal:
mkdir /media/cdrom
mount /dev/cdrom /media/cdrom
cd /media/cdrom
./install
Proceed through the install process for Parallels Tools, and you should be done.
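For reference, the EPEL / PowerTools step boils down to something like this on CentOS 8 (the repository id may be lowercase, powertools, on later 8.x releases, so check which your release uses):

dnf install -y epel-release
dnf config-manager --set-enabled PowerTools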
If you want to do this on an existing VM, shut down the VM, go to Settings > General and change the type (above the VM Name) to Red Hat Enterprise Linux, then boot and perform steps 9-11 above.
This has been a slight frustration of mine. I generally keep a lot of tabs open, and twice now (once when fiddling with an extension destroyed my existing session, and once when transferring to a new computer at work) I've had to try to restore my Last Tabs / Last Session data, each time having to Google around and cobble together an answer.
Here it is for those of you searching for the same answer.
If you've opened Chrome since, you're going to need a backup of your old files (I've retrieved these from an Apple Time Machine backup, or simply from the old Mac profile's ~/Library folder).
If you're going the backup route, navigate to the relevant folder for your OS in your backup:
Mac – /Library/Application Support/Google/Chrome/Default
Windows – \AppData\Local\Google\Chrome\User Data\Default
And copy the Last Session and Last Tabs files to the same folder on your hard drive, so most likely:
Mac – /Users/<username>/Library/Application Support/Google/Chrome/Default
Windows – C:\Users\<username>\AppData\Local\Google\Chrome\User Data\Default
After which you simply need to launch Chrome with the --restore-last-session parameter, so:
Mac – Open a terminal and launch using: open /Applications/Google\ Chrome.app --args --restore-last-session
Windows – Edit the shortcut and make sure it's launching with something like "C:\Users\<username>\AppData\Local\Google\Chrome SxS\Application\chrome.exe" --restore-last-session
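As a rough sketch of the whole thing on a Mac (the backup path is just an example; point it at wherever your copy of the old profile lives, and make sure Chrome isn't running while you copy the files in):

BACKUP="/Volumes/TimeMachineBackup/Users/<username>/Library/Application Support/Google/Chrome/Default"
PROFILE="$HOME/Library/Application Support/Google/Chrome/Default"
cp "$BACKUP/Last Session" "$BACKUP/Last Tabs" "$PROFILE/"
open /Applications/Google\ Chrome.app --args --restore-last-session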
I've always compiled my software on servers from source. Call me a perfectionist / idiot, whatever, but I do it for several reasons, including making sure I've got the latest features and security patches, as well as being able to tailor the software I use on servers so each install is minimal while still providing what I require.
Anyway, I've just found a quirk with getting MariaDB to start up. As far as I could tell from the provided systemd .service file, it was being run as the user that owned the data directory I'd configured at compile time, that directory was specified correctly in my.cnf, and when I ran the service manually it worked, but something was blocking it when it came to running it via systemctl.
I was receiving the following messages:
Sep 25 22:37:58 centos7 systemd: Starting MariaDB database server...
Sep 25 22:37:58 centos7 mysqld: 2016-09-25 22:37:58 139993404594304 [Note] /usr/local/bin/mysqld (mysqld 10.1.17-MariaDB) starting as process 7978 ...
Sep 25 22:37:58 centos7 mysqld: 2016-09-25 22:37:58 139993404594304 [Warning] Can't create test file /usr/local/mariadb/data/centos7.lower-test
Sep 25 22:37:58 centos7 mysqld: 2016-09-25 22:37:58 139993404594304 [ERROR] mysqld: File './mysql-bin.index' not found (Errcode: 30 "Read-only file system")
Sep 25 22:37:58 centos7 mysqld: 2016-09-25 22:37:58 139993404594304 [ERROR] Aborting
Sep 25 22:37:58 centos7 systemd: mariadb.service: main process exited, code=exited, status=1/FAILURE
Sep 25 22:37:58 centos7 systemd: Failed to start MariaDB database server.
Sep 25 22:37:58 centos7 systemd: Unit mariadb.service entered failed state.
Sep 25 22:37:58 centos7 systemd: mariadb.service failed.
I was inspecting the provided mariadb.service file, trying to spot anything that might indicate why the user couldn’t write to a folder it owned, when I came across this:
# Prevent writes to /usr, /boot, and /etc
ProtectSystem=full
So there you have it. I'd configured the data directory under the MariaDB program directory under /usr, and MariaDB had let me configure a data directory at compile time that it knew the daemon wouldn't be able to write to on boot (if I used systemd, which, let's face it, is becoming the norm, at least on Red Hat based distros). There are a lot of guides on the net that, as an example, have the data directory created under /usr/local/mysql/data. That obviously isn't an excuse; to take JavaScript as an example, there are plenty of bad guides on the internet that produce quick hacks rather than quality long-term code.
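If you're stuck with the data directory under /usr, one way out short of recompiling is a systemd drop-in that relaxes the protection for just that path; a minimal sketch, using the data directory from the log above:

systemctl edit mariadb.service

and then in the drop-in:

[Service]
# ReadWriteDirectories= is the option name on CentOS 7-era systemd; newer releases call it ReadWritePaths=
ReadWriteDirectories=/usr/local/mariadb/data

The cleaner fix, of course, is to configure the data directory somewhere like /var/lib/mysql in the first place.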
Recently whilst writing some code I needed to pivot rows to columns in a MySQL result set. With some other database providers this is standard functionality: Oracle (which I have previous experience of implementing pivots with), PostgreSQL and MS SQL Server all provide it. Unfortunately MySQL, in its ultimate wisdom, has never implemented this feature. That left me with one of two options:
Implement two separate queries and combine the results together in a PHP foreach loop
A MySQL stored procedure, since MySQL does at least support those
The first option was obviously not preferable, as it's far slower than getting the database to do the work, which made the second my clear choice. My main experience of writing stored procedures and functions is in PL/SQL with an Oracle DB; I've generally used MySQL on much lighter systems, but the odd situation has required a stored procedure, so I thought why not. The solution was to use a cursor in the stored procedure, looping through it to generate a SQL statement, then executing that as a prepared statement to return the required result set. This is then called from PHP using MySQL's CALL functionality.
Here’s an example of the stored procedure I produced:
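As a minimal sketch of that cursor-plus-prepared-statement approach (the sales table and its product, sale_month and amount columns are hypothetical stand-ins for the real schema):

DELIMITER //
CREATE PROCEDURE pivot_sales()
BEGIN
  DECLARE done INT DEFAULT FALSE;
  DECLARE v_month VARCHAR(64);
  DECLARE v_cols TEXT DEFAULT '';
  -- one cursor row per value that should become a column
  DECLARE cur CURSOR FOR SELECT DISTINCT sale_month FROM sales ORDER BY sale_month;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO v_month;
    IF done THEN
      LEAVE read_loop;
    END IF;
    -- build a SUM(CASE ...) expression for each distinct value
    SET v_cols = CONCAT(v_cols, ', SUM(CASE WHEN sale_month = ''', v_month,
                        ''' THEN amount ELSE 0 END) AS `', v_month, '`');
  END LOOP;
  CLOSE cur;

  -- assemble the pivot query and run it as a prepared statement
  SET @sql = CONCAT('SELECT product', v_cols, ' FROM sales GROUP BY product');
  PREPARE stmt FROM @sql;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
END //
DELIMITER ;

From PHP it's then just CALL pivot_sales(); like any other query, and the result set comes back pivoted.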
Whilst running multiple queries in one call via the PHP MySQL PDO driver, I noticed that although PDO was behaving as if all the queries had executed successfully, not all of the entries that should have appeared in the database were there. Obviously something was happening that wasn't being reported; PDO was set up to throw exceptions on any errors when the SQL executed, so there was clearly a bug somewhere. That left me with a few options:
Manually split the queries into separate statements and replace the bind variables with their values (not feasible, as there were hundreds),
Use str_replace to replace the bind variables with their quoted values and run the query at the SQL prompt (not a real-world solution, due to null values etc.),
Write some code to split the statement into individual queries and execute said queries individually with the relevant bind variables.
The following is a full example of how to split a long string of multiple SQL queries, along with the relevant bind variables for each query, into separate statements and execute them one at a time:
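A minimal sketch of the idea, assuming the queries are separated by semicolons (none of which appear inside string literals), use positional ? placeholders, and that $params holds all the bind values in order (the table names and connection details here are just placeholders):

<?php
// Example combined statement and bind values (stand-ins for the real, much longer, ones)
$sql = "INSERT INTO users (name) VALUES (?); INSERT INTO logins (user_id, last_login) VALUES (?, ?)";
$params = array('Alice', 1, '2016-01-01 00:00:00');

$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Split the combined string into individual queries
$queries = array_filter(array_map('trim', explode(';', $sql)));

$offset = 0;
foreach ($queries as $query) {
    // Each query consumes as many values from $params as it has ? placeholders
    $count  = substr_count($query, '?');
    $values = array_slice($params, $offset, $count);
    $offset += $count;

    $stmt = $pdo->prepare($query);
    $stmt->execute($values);
}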
Recently I changed ISPs. This generally isn't a big deal, and it didn't cause any difference in speed or connection reliability, however my old ISP had, despite me not ordering one, given me a static IP address. I'd taken advantage of this and assigned the IP to a DNS record to give me easy-to-remember access to my files when I'm away from home. Anyway, when I changed ISPs I discovered my new one changes my IP address every few days, which is an unfortunate thing to discover when you're not at home.
I took a look at my router and found it only supported DynDNS, who, I then discovered, don't provide a free version of their service to the general public (apart from to some D-Link users, and I don't have a D-Link router).
The next option was a free alternative to DynDNS. There were several, but they all required running some kind of Linux daemon / cron script, and I would then have had to change my existing A record to a CNAME pointing at the new dynamic DNS record.
Before setting this up I thought I'd look into my existing DNS provider, just in case there was a way of updating DNS records on the fly without having to maintain two separate DNS providers. It turned out that CloudFlare has an API I could update records with, and the example provided used curl, which made me wonder: surely I could just write a bash script to update the record via a cron job? I didn't want to install any major scripting language on my minimal Linux distro, which ruled out the PHP, Perl etc. solutions I'd found that worked with CloudFlare.
I checked CloudFlare's API rate limiting and found it certainly wouldn't be a problem for what I was trying to do (it's approximately 1,200 requests every 5 minutes, if you're wondering). I then found MyExternalIP, which could provide my external IP address with an easy-to-handle rate limit of roughly one request per second, after which it was just a case of writing the bash script.
I worked out a script using curl requests to get the information I needed, including the specific ID of the DNS record I was trying to update, and then used sed to pull what I needed out of each request's response. I also made it mirror the CloudFlare API call I was making, so any parameter for that call can be passed as a command line argument to the script itself.
I added the resulting script to my crontab to run every half hour. Everything worked perfectly, so I created a gist on GitHub so others could take advantage of it:
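In spirit it does something like the following (this sketch is written against CloudFlare's current v4 API rather than the version available at the time, and the zone ID, record ID, token and hostname are placeholders to fill in yourself):

#!/bin/bash
# Placeholder values - swap in your own
ZONE_ID="your_zone_id"
RECORD_ID="your_dns_record_id"
API_TOKEN="your_api_token"
RECORD_NAME="home.example.com"

# Grab the current external IP address
IP=$(curl -s https://myexternalip.com/raw)

# Push the new address to the existing A record
curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
     -H "Authorization: Bearer ${API_TOKEN}" \
     -H "Content-Type: application/json" \
     --data "{\"type\":\"A\",\"name\":\"${RECORD_NAME}\",\"content\":\"${IP}\",\"ttl\":1}"

Dropped into the crontab as something like */30 * * * * /usr/local/bin/cloudflare-ddns.sh, that covers the every-half-hour update.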
Basically, I've been looking at upgrading a web server to the latest version of Fedora, 19, or to CentOS 7.0 when it's released later this year (providing the rebuild is straightforward for them once RHEL 7.0 is out). Knowing that iptables is becoming redundant in favour of firewalld in Fedora, I started looking at updating my web server install script to work with firewalld. Part of that script is Fail2Ban, which uses iptables, so my first port of call was finding a way of getting the two working together.
My obvious first search, for "firewalld fail2ban", returned nothing helpful whatsoever, just people wanting a conf file to get it working and no actually helpful responses. However, once I found that firewalld uses firewall-cmd on the command line to control its rules, I searched for that instead. This turned up a current bug on Red Hat's Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=979622 , where it turns out a very helpful soul, Edgar Hoch, has created an action.d conf file to get it all working: https://bugzilla.redhat.com/attachment.cgi?id=791126
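For what it's worth, Fail2Ban has since gained firewalld actions of its own (firewallcmd-ipset ships in its action.d these days), so on a newer Fedora the integration can be as simple as a jail.local along these lines:

[DEFAULT]
# Use the firewalld-backed action instead of the default iptables one
banaction = firewallcmd-ipset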
After upgrading to the latest version of Fedora a few months ago I was terribly unimpressed. The box in question had been upgraded every 6 months (-ish, thanks to Fedora 18) since Fedora 14 and I'd never had any issues, but then came Fedora 19.
To be fair it wasn't Fedora's fault per se, it was GNOME 3 and the open source nVidia graphics drivers. The desktop looked OK when you booted the box, but if you tried to use the Activities section none of the transparency worked, and a lot of the favourite icons in the dock had a luminous green behind them when you hovered over them. Worse still was trying to launch a non-favourite application: click to do that and you could see the first 6 frequently used ones, but no others, and none under the "All" tab. This made the whole experience pretty much unusable.
I went through the obvious investigations, straight away looking for some better nVidia graphics drivers. I didn't expect to find any official nVidia drivers after Linus' hilarious rant last year; however, it turned out there were. I first tried downloading them from nVidia, but their installers were less than helpful, and none wanted to install on my system regardless of my fulfilling their dependencies. I then tried looking elsewhere and remembered the trusty basic Linux guide site If !1 0. I found a guide on there for Fedora 18 and adapted it for Fedora 19, but unfortunately that wouldn't work due to a mass of package conflicts. I'd been meaning to wipe the system for a while and start again, so I backed up the /etc/ folder to another drive, wiped the partitions, installed Fedora 19 and used the guide again, and all was fine and dandy. The boot screen is the basic Plymouth one rather than the more graphical splash one, but apart from that everything works and I don't have awful unusable graphics anymore.
Well, just over a week ago I got back from attending my first BETT show, and I have to say it was an amazing experience.
It had everything: hearing about three seconds of, and catching a fleeting glimpse of, Brian Cox; providing my "technical advice" to customers when I could; spending hours packing and handing out bags; many trips to storage to retrieve stock; meeting hundreds of enthusiastic customers eager to get our bags or stress balls; and, clearly the best bit on the stand, quite a few of the staff doing "Double Dream Hands" on Saturday (I mean, why wouldn't we!!!).
Overall it felt quite similar to when I previously worked as both a waiter and a barman during my sixth form college and university years. The at-times very fast-paced work, and the constant interaction with the general public, are things I've always enjoyed and fed off (yep, call me a weirdo, but I enjoy running about at work every once in a while, and this was definitely one of those times). The atmosphere amongst the staff was also brilliant. Everyone pulled together, which kept everyone going and kept everything working like a well-oiled machine to give the customers everything they wanted, even when (most of the time) all of the demonstration pods were busy and our sales team couldn't give any more demonstrations due to the constant influx of enthused new and existing customers.
As purely a developer it was also a great experience. Seeing how enthusiastic customers are to get their hands on our new product, and hearing how much they feel it'll help their school in so many ways, makes you enthused, both as an individual and as part of the team, to carry on producing the already great product we create to make a massive difference to the education of children.
Finally, as I've already done on Twitter, I'd definitely like to thank both all the visitors to our stand and all my colleagues, both those I finally properly met at the event and those I'd already been working with: you all helped contribute to a thoroughly enjoyable working week.
Yep, I know what you're thinking: blocking by IP inside NginX, shouldn't you be doing that at the firewall level instead? Yes, if the traffic comes from the actual IP, you should. Programs such as Fail2ban provide the functionality to automatically block unscrupulous IPs via iptables, so the traffic never gets anywhere near your software.
The problem comes, however, when you're using a service such as Cloudflare. As far as the firewall is concerned the IP is Cloudflare's, not the actual end user's, so we have to fall back on the next layer, which in this case is NginX. Utilising the RealIP module we can set the user's actual IP address into the correct server variable, and once we've done this anything running behind NginX (PHP-FPM etc.) will see the remote address as the correct one rather than Cloudflare's IP.
Anyway, to the point: blocking the IPs you don't like. To do this you simply need NginX's geo module, with which you can compare the provided remote address to a list and set a variable inside the http definition in nginx.conf:
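A minimal sketch of that http-level config (the blocked addresses below are just documentation-range examples, and you'll want a set_real_ip_from line for each range in Cloudflare's published IP list):

# Restore the real client address from Cloudflare's header
set_real_ip_from 103.21.244.0/22;    # one line per published Cloudflare range
real_ip_header CF-Connecting-IP;

# Map unwanted client addresses to $ban_ip = 1
geo $ban_ip {
    default          0;
    192.0.2.15       1;     # a single address
    198.51.100.0/24  1;     # or a whole range
}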
Once you’ve set this up you can use that variable inside the individual server definitions to send different http response codes for those specific IP addresses:
if ($ban_ip) {
return 404;
}
or you can tell NginX to simply drop the connection,
if ($ban_ip) {
return 444;
}
which, if combined with Cloudflare, will show the user Cloudflare's cached version of your site, so the end user still gets your site without your server ever serving their client anything.
Thanks for this lovely snippet of information go to Alexander Azarov.