Oh xdebug, where art thou?

I don’t have a stable development environment. I change hardware and software frequently, so whenever my wife’s web site needs a revision I usually have to set everything up from scratch. I always develop on a Linux box (I am a Fedora user) and I’ll usually set up an Apache server with MySQL. For development I prefer NetBeans, but I’ll use Eclipse if I have to.
For debugging I use XDebug and this is where the fun starts…

It never seems to be a smooth ride getting Xdebug up and running, no matter what I do, and judging from the many desperate cries for help on various forum threads I am not alone.

This blog post will not solve all your problems with getting Xdebug working, but I hope it helps and gives you a better set of tools for tracking the problem down.

The “problem” here manifests itself in NetBeans or Eclipse failing to connect to the debugger.

Firstly, if you have installed Xdebug using any of the normal methods and phpinfo() shows an xdebug section then, at least in my experience, you’re good to go, and no amount of hacking and fiddling in ini files is likely to fix your problem.
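For reference, a plain-vanilla remote-debugging setup for the Xdebug 2 series needs little more than this in its ini file (the extension path below is illustrative only; it varies by distro and PHP version):

```ini
; illustrative path — check where your distro actually puts xdebug.so
zend_extension=/usr/lib64/php/modules/xdebug.so
xdebug.remote_enable=1
xdebug.remote_host=localhost
xdebug.remote_port=9000
```

If phpinfo() already shows the xdebug section then entries like these are in place, and further ini fiddling is unlikely to be the answer.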
It is much more likely that the problem is external. Here is my simple checklist of culprits that always seem to be behind my woes, either individually or together:

  • is your firewall allowing port 9000? (the default Xdebug port; yours might be different, but try not to fiddle with it too much)
  • is SELinux blocking you?

Run your firewall tool of choice (I just run system-config-firewall on Fedora) and check, or add, port 9000. Now try again… if it works, be happy; else…
Check if SELinux is barking about blocking a “name_connect”, and see if port 9000 is mentioned as well. If it is then you need to allow connections, obviously. Since I am not running my Linux box in an environment where I need to be too concerned about paranoid security, I just blanket-allow httpd (the Apache service I’m running on Fedora) to do whatever it wants:

# setsebool -P httpd_can_network_connect 1

If that doesn’t fix it then I sympathise, because nothing is quite as frustrating as struggling with tools when all you want is to get the job done. But, in my humble experience at least, it always seems to end up being something like the above, i.e. something external blocking Xdebug from working. My point is that, unless you have a very peculiar setup, a plain vanilla install of Xdebug with the standard ini file entries is fine, and you should avoid fiddling with it before you’ve checked the external conditions – even though forum threads out there are quick to suggest it.

Good luck and happy debugging!

0x800F0A12 error when installing Win 7 SP 1 on dual boot machine

Here’s what I’ve got
  • I have a machine with two hard disks; on one there’s a Fedora (15) install and on the other Windows 7
  • I’ve got Grub set up to allow me to boot from either
And here’s the problem
  • The other day I fired up my Windows 7 install for the first time in a long time and it wanted to install Service Pack 1 (SP1)
  • The install failed with nothing but a “0x800F0A12” error code… not very helpful
The fix
The problem (as I found out from here) is that the service pack install requires the active boot partition to be the one containing the Windows 7 install. In my setup that partition is where the Grub loader lives, so when the install checks the active partition it fails.
The article I’ve linked to outlines a solution involving the Disk Management tool and the DISKPART utility, but there’s a simpler way to do this (at least in my situation).
  • Open the Disk Management tool (Computer->Manage->Disk Management)
  • Select the Disk/Partition on which your Windows install resides
    • In my case this was on Disk 1; Disk 0 was where the Grub loader and the Fedora install lived.
    • NOTE: The Disk Management tool will show which partition is “Active” and, since you’ve got the error, that will not be the one where Windows 7 lives. Check this: if your Windows 7 partition is Active and you still get the error then there’s something else going on…
  • Mark it as active by right-clicking on it and selecting “Mark Partition as Active”
  • You’re done
You will now have two partitions marked as active: the one you had from before and the one where Windows 7 lives. It doesn’t matter that you’ve got two; all the “mark as active” operation does is inform the firmware that the partition can be booted from, not that it will be.
You can now proceed to install Service Pack 1.
In my case it all worked.

Stumbling through getting an OpenMP app compiling and linking with (and without) NetBeans on Fedora

System

Fedora 14
NetBeans 6.9, g++ 4.5.1 and MPICH2

Problem

I wanted to play around with OpenMP and C++ using the NetBeans IDE, but had no luck compiling and/or linking.

Naively I did this:

  • installed the MPICH2 packages using the package manager (I tried first with yum but that didn’t work at all… could be a red herring, but be warned; see end notes)
  • opened up NetBeans, created a C++ app project and wrote a little Hello World program with a simple #pragma omp parallel section
  • I hit compile and… naturally, it just compiled a standard single-threaded app (ignoring the unrecognized pragma)
  • So, I tried to compile the program on the command line using the mpic++ compiler/linker wrapper which is installed with openmpi-devel
    • It failed with errors about not finding -lmpichcxx, -lmpich and -lopa (again, see end notes)

Solution

  1. mpic++ for some reason or other produces the wrong (???) command line;
    1. it typically looks like this: 
      1. c++ -m32 -O2 -Wl,-z,noexecstack -I/usr/include/mpich2-i386 -L/usr/lib/mpich2/lib -L/usr/lib/mpich2/lib -lmpichcxx -lmpich -lopa -lpthread -lrt
    2. but it should look like this (the only changes: -fopenmp added at the front, and -lopa replaced by -lgomp):
      1. c++ -fopenmp -m32 -O2 -Wl,-z,noexecstack -I/usr/include/mpich2-i386 -L/usr/lib/mpich2/lib -L/usr/lib/mpich2/lib -lmpichcxx -lmpich -lgomp -lpthread -lrt
  2. Therefore, in the properties of your NetBeans project, under C++ Compiler::Additional Options you set the command line to
    1.  -fopenmp -m32 -O2 -Wl,-z,noexecstack -I/usr/include/mpich2-i386 -L/usr/lib/mpich2/lib -L/usr/lib/mpich2/lib -lmpichcxx -lmpich -lgomp -lpthread -lrt
    2. Alternatively you can of course use that as-is on the command line
  3. …and compile… and it runs, and according to my perf monitor it uses more than one thread. Perfect.

Notes

libgomp is the GNU OpenMP runtime library and it is part of the gcc 4.5.x install (and possibly earlier, but I haven’t checked/tested this). As for “libopa”: OpenPA is MPICH2’s portable atomics library, so -lopa is most likely that rather than a typo – but if anybody reading this can shed more light, please do.

I tried both c++ and g++ in the project settings in NetBeans; it doesn’t matter which one you use, as long as the command line is correct as in step 2 above.

About the issue alluded to above concerning installing OpenMPI using yum: I first did #yum install openmpi openmpi-devel, but it seems that this, although it installed the libraries, did not create the appropriate symlinks so that ld could find them (see the linker errors mentioned at the top of this post). I therefore created these manually, and that fixed the linking. However, as I subsequently installed MPICH2 using the package manager before I got the app running properly, I can’t verify whether this had a positive effect overall or was a red herring. If anybody can recreate this and confirm, that would be great.

Btw, here are some great links for some OpenMP examples and tutorials:

http://www.codeproject.com/KB/library/Parallel_Processing.aspx

http://bisqwit.iki.fi/story/howto/openmp/#ExampleCalculatingTheMandelbrotFractalInParallel

https://computing.llnl.gov/tutorials/openMP/#CFormat

Network problem when using VirtualBox HD’s between machines

Problem
– Setting up a new VirtualBox machine using a cloned VDI can (will?) cause problems with Linux guest OSes where the network device fails to initialize, resulting in no network connection. The problem is that VBox hard-codes the MAC address first assigned to the guest (when the VDI was first created) in “/etc/udev/rules.d/70-persistent-net.rules”.
When trying to run the guest in a new machine (from a different VBox instance, for example) the MAC address no longer matches and ethX fails to initialize. Trying to “ifup” it also fails.

Solution
– Take note of the MAC address first assigned by VBox when the VDI and machine are first created, and then, for each new machine using that VDI, go to Settings::Network::Advanced and type the MAC address directly into the MAC field.

UPDATE: If you’ve got a VDI that you move around between multiple machines (as I do) and you create new virtual machines to use it with, then of course you want to make sure that each of these new machines uses the correct MAC address as well – otherwise you have no network. Just create a new machine using said VDI and fire it up. If it’s a Windows machine you can use ipconfig /all to get the MAC address; for Linux you can use ifconfig, or alternatively just look at the MAC address setting for eth0 in the /etc/udev/rules.d/70-persistent-net.rules file (there will be lines in there that look something like this:
# PCI device 0x10b7:0x9200 (3c59x) (custom name provided by external tool)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="XX:YY:ZZ:UU:VV:WW", ATTR{type}=="1", NAME="eth0"

and it’s the ATTR{address} field that stores the MAC address.)
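Pulling the address out of that file can also be scripted; here is a small sketch (it writes a sample file under /tmp so it is self-contained — on the guest the real file is /etc/udev/rules.d/70-persistent-net.rules):

```shell
# Sample rules file standing in for /etc/udev/rules.d/70-persistent-net.rules
cat > /tmp/70-persistent-net.rules <<'EOF'
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:ab:cd:ef", ATTR{type}=="1", NAME="eth0"
EOF
# Grab whatever sits in ATTR{address} on the eth0 line.
mac=$(sed -n 's/.*ATTR{address}=="\([^"]*\)".*NAME="eth0".*/\1/p' /tmp/70-persistent-net.rules)
echo "eth0 MAC: $mac"
```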

Notes
– I ran into this because I share VDIs between different machines. I would bring the VDI (for an Ubuntu install, for example) to a different laptop and set up a new machine in VirtualBox, using the VDI which I store on a USB drive. It would run, but the network card would fail to initialize, and running ifconfig would just show the lo (loopback) device as active. Trying to ifup eth0 just threw “device not found” errors. At first I hand-edited the rules file, but then I realized that the simplest (and perhaps most obvious) solution was to just assign the same MAC address to all the new machines myself.

Growing a VirtualBox VDI is easy…

I use VirtualBox for lots of things, not least to be able to use Microsoft Office for Windows on my work laptop, which is a Mac. Bless Office for the Mac, but it sucks and I really need Excel to work with VB and the Data Analysis add-on…

However, I digress. As I installed more and more apps into my Win7 virtual machine I started running out of disk space, so here follows a brief explanation of what I did (and do) when I need it (the VDI…) to grow. (Note that I didn’t originally create it using the “dynamically expanding storage” option in VBox; if I had, I might not be in this predicament, but there you go.)

So;

  1. Create a new hard drive using the Virtual Media Manager in VBox and make sure it’s the size you want
    1. NOTE: Again, I didn’t create the new disk as dynamic but rather as static… I don’t know if the following steps would work if it was dynamic (somehow I doubt it, but if you try then please let me know how it went…)
  2. Release and remove your old hard drive from VBox’s grasp using the Virtual Media Manager;
    1. You have to do this, otherwise the next step will fail. All you need to do is release it, then remove it, and REMEMBER to “Keep” the hard disk image when you do that!
  3. Clone your old (and smaller) VDI into the new (and larger) one like so using the VBox command line tool:
VBoxManage clonehd --existing OLD.vdi NEW.vdi
  4. Now go back into Virtual Media Manager and add the NEW.vdi drive
  5. In the settings for your virtual machine (the one that previously used OLD.vdi), change it to use NEW.vdi
  6. Download systemrescuecd.iso (or any LiveCD with a Linux distro and GParted on it; the remaining steps assume you can run GParted)
  7. Attach the LiveCD to your virtual machine so that it will be booted when it starts
  8. Boot your virtual machine…
  9. Run GParted;
    1. The virtual machine’s hard disk will be allocated into the “old” partition (which is the smaller size) plus an extra, unallocated, area which is whatever extra space you now have on your new (and larger) hard disk
  10. Resize the smaller (old) partition to take up all of the (new) disk
    1. NOTE: this assumes your guest OS uses a file system that GParted understands. In my case this was NTFS (Windows), but if you are attempting this to grow something else, and utterly esoteric, I can’t guarantee it will work. However, it would have to be very esoteric…
  11. Shut down the virtual machine
  12. Release the ISO (systemrescuecd.iso in my example)
  13. Reboot…
  14. Presto, you’re done! The machine should boot up happily and Windows will tell you that the C: drive is whatever size NEW.vdi was created as

Certainly beats using CloneZilla to try and save off the old image and restore it. I tried that too and it didn’t work, but even if it had, this method seems simpler.

Installing msttcorefonts on Fedora 13 (fixing a Wine problem)

I had to install msttcorefonts (the Microsoft TTF fonts used by most Windows programs) to be able to run Wine on my Fedora 13 install. It was pretty clear that the fonts were missing; all Windows apps I tried to run, including the Wine config tool, were unusable, with all text garbled.
To fix this I had to install the core fonts and as this was a (somewhat) non-trivial task I have decided to document it here:
Fundamentally I followed the steps on Benprove.com but had to make some modifications to get it to work.
Firstly I ran “su -” to get root access.
Then;
  1. cd /tmp or somewhere else convenient…
  2. Download font spec file for the rpm build process (2.0-1 at the time of writing):
    1. wget http://corefonts.sourceforge.net/msttcorefonts-2.0-1.spec
  3. Install rpm-build and cabextract:
    1. yum install rpm-build cabextract
  4. Install ttmkfdir (required to build usable font files from TTF files):
    1. yum install ttmkfdir
  5. Now build the RPM package from the spec file:
    1. rpmbuild -ba msttcorefonts-2.0-1.spec
  6. Install chkfontpath (a util to configure X server font paths, apparently)… and xfs, which is a daemon that serves fonts to X server clients:
    1. Get chkfontpath (look for latest version if you do this):
      1. wget ftp://ftp.pbone.net/mirror/atrpms.net/f13-i386/atrpms/stable/chkfontpath-1.10.1-2.fc13.i686.rpm
    2. Get xfs:
      1. yum install xfs
    3. Install chkfontpath:
      1. rpm -ivh chkfontpath-1.10.1-2.fc13.i686.rpm
  7. Disable GPG (signature) checking in the yum config file: open /etc/yum.conf in your favourite editor, look for the line “gpgcheck=1” in the “[main]” section and change it to “gpgcheck=0”. Save the file.
  8. Now you can FINALLY install the fonts themselves from the RPM:
    1. yum localinstall /usr/src/redhat/RPMS/noarch/msttcorefonts-2.0-1.noarch.rpm
  9. Clean up by re-enabling gpg check in /etc/yum.conf (DON’T FORGET THIS!)
  10. Log out and log back in…
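The gpgcheck edit in step 7 (and its reversal in step 9) can also be done with sed; here is a sketch working on a throwaway copy so as not to touch the real /etc/yum.conf:

```shell
# Throwaway copy standing in for /etc/yum.conf
printf '[main]\ngpgcheck=1\nplugins=1\n' > /tmp/yum.conf
# Step 7: disable signature checking…
sed -i 's/^gpgcheck=1/gpgcheck=0/' /tmp/yum.conf
grep '^gpgcheck' /tmp/yum.conf
# …and step 9: turn it back on when you're done.
sed -i 's/^gpgcheck=0/gpgcheck=1/' /tmp/yum.conf
grep '^gpgcheck' /tmp/yum.conf
```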

And that was it; Wine is now usable.

Permission denied: rsync backup from my Fedora box to my DS207+ (DSM 2.3) NAS….

I’ve got a NAS that I’m very happy with: the DS207+ from Synology. I back up my wife’s massive Photoshop and Illustrator files from her Mac without problems (using SuperDuper). However, backing up stuff from my Linux box (Fedora) has turned out to be a little more tricky. I may have set up the DS207+ wrongly with respect to users, groups and privileges from the start (I’ve had the box longer than I’ve had Linux), but regardless of the reason, the practical problem is that trying to back up to the server – using DejaDup or grsync (both can be found in the standard repositories for Ubuntu and Fedora) – fails with “permission denied” errors.

I’ve tried to rsync using the server’s IP address:

192.168.0.193:/volume1/Jarl/backup

and I’ve tried to use a mount point after adding a cifs mount statement to my fstab file:

//192.168.0.193/Jarl /mnt/ds_jarl cifs credentials=/etc/.ds_credentials,_netdev 0 0

All to no avail: I still get “Permission denied (13)” errors from rsync when it tries to create (or delete) directories.

From what I’ve been able to glean from various fora, this seems to be an issue that quite a few people have been struggling with in one form or another. Whether it is related to some subtle difference in the use of CIFS or Samba I don’t know, but the bottom line is that it doesn’t work… for me at least.

Anyway, here be what I did to make this work for me. It might not work for you but hopefully it will give you a possible avenue to try out, should you have problems getting rsync to work with your DS207+.

  • Firstly I followed the instructions here to create an ssh key file. I subsequently copied one into my /home/jarl/.ssh folder (had to create it first) as per the instructions. I also copied it over to a new /homes/admin/.ssh folder on the DS207+ (simply using the File Station). Following that I could ssh in without entering a password, BUT, contrary to what the instructions on said web site say, I had to specify that I was the “admin” user for this to work. I believe this has something to do with the limitations on who can ssh into the DS207+. In short, the ssh line to log in had to be this:
    • ssh -l admin -i /home/jarl/.ssh/rsync-key 192.168.0.193
    • NOTE: the use of “-l admin” here!
  • Next I could compose the full rsync command line and instruct it to use ssh (as admin!) to connect to my NAS and do its thing:
    • rsync -rv -o -c -z -e "ssh -l admin -i /home/jarl/.ssh/rsync-key" /home/jarl/ 192.168.0.193:/volume1/Jarl/Backup
    • NOTE: I composed the whole command line using grsync and just added the -e ssh… part manually

This works fine now, although all the files sync’d up are owned by “admin” of course…

I will try to make this work for the standard user as well; it makes no sense that you should have to be admin to make this work!