Pacman trouble again. This time, glibc is the culprit.

glibc
[###########################################] 100%
warning: /etc/locale.gen installed as /etc/locale.gen.pacnew
error: extract: not overwriting dir with file lib
error: problem occurred while upgrading glibc
call to execv failed (No such file or directory)
error: command failed to execute correctly
error: could not commit transaction
error: failed to commit transaction (transaction aborted)
Errors occurred, no packages were upgraded.

Well….. this leaves me with a broken C library. When this happens, do not close any windows or programs, as without the C library in place, it’s very hard to open a new shell. So…. since I was running pacman to upgrade the system, which failed, I still had a root shell available, which is essential to fix this problem.

So, the situation: no program would start. It would just say “file not found”, but what file? Apparently the dynamic linker was broken. Running a program by invoking the dynamic linker manually did work. For example:
/lib64/ld-linux-x86-64.so.2 /bin/ls

So, what we have to do… replace the required files manually. So, where to get them? We have to know that pacman downloads its packages to /var/cache/pacman/pkg/. The file we are interested in is /var/cache/pacman/pkg/glibc-2.16.0-2-x86_64.pkg.tar.xz
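
If you want to check what is actually in the cache first, remember that at this point even ls needs the linker prefix; a quick sketch:

# list the cached packages; every dynamically linked binary needs the prefix now
/lib64/ld-linux-x86-64.so.2 /bin/ls /var/cache/pacman/pkg/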

So, we are going to create a directory and extract the files there. As we have a broken dynamic linker, we need to prefix every command with it manually:


/lib64/ld-linux-x86-64.so.2 /bin/mkdir blaat
cd blaat
/lib64/ld-linux-x86-64.so.2 /usr/bin/xz -d /var/cache/pacman/pkg/glibc-2.16.0-2-x86_64.pkg.tar.xz
/lib64/ld-linux-x86-64.so.2 /bin/tar -xvf /var/cache/pacman/pkg/glibc-2.16.0-2-x86_64.pkg.tar

Now, replacing the files…

/lib64/ld-linux-x86-64.so.2 /bin/cp lib64/* /lib64

Which leads to the next problem: we have mixed library versions now! We have replaced the /lib64/ld-linux-x86-64.so.2 file, but /usr/lib/libc.so.6 is still the old version. Now we cannot execute anything at all! Luckily for me, busybox provides statically linked binaries, which can solve this problem. Remember when I said not to close any window? I still had a browser open, so I could download busybox. Then, using busybox (no linker prefix needed, as it is statically linked), I copied the contents of usr/lib from the extracted package to /usr/lib, and finally lib to /lib, making the system work again.
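
For completeness, the recovery went roughly like this (a sketch from memory, run from the extraction directory; the assumption is that the statically linked busybox binary was saved there as ./busybox):

# chmod itself is dynamically linked, so it still needs the linker prefix
/lib64/ld-linux-x86-64.so.2 /bin/chmod +x ./busybox
# busybox is statically linked, so it runs fine without the broken libc
./busybox cp -a usr/lib/* /usr/lib/
./busybox cp -a lib/* /lib/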

This, my friends, was another day on ArchLinux. Feels like being on safari, doesn’t it?

As I mentioned yesterday, due to the spam prevention plugin (NotCaptcha), this site will place a PHP session cookie on your machine. Because of that, I have installed the Cookie Control plugin to notify you about this. This morning, when visiting my blog using Firefox, I noticed Ghostery giving me a blocking alert. I was like WTF?!?!? It appears this Cookie Control plugin uses something called geoPlugin.

I can choose to show the privacy notice only to visitors from specific countries. That’s probably the reason geoPlugin is used, as it is a plugin that provides geographic locations. However, I do not use this feature, and therefore it is not necessary for it to be loaded. I will see if I can remove it from Cookie Control. But that will have to wait… I’ve got plans for this afternoon.

My apologies for serving this tracker on my blog.

Some of you might know about the new EU cookie law. Basically, I have to ask for permission to store cookies on your computer. I am using WordPress on this blog and a couple of other websites. The problem is, I am using NotCaptcha as an anti-spam measure. This plugin, just like my own captcha implementation I have used a couple of times on custom websites, uses a PHP session to store the expected answer server-side. A PHP session sets a cookie to identify the session.

Therefore I have installed the Cookie Control plugin, to display a notification message about the cookie usage.

The problem is, with this plugin enabled, a PHP session is started every time a page on my blog loads, which is, strictly speaking, in conflict with the new European regulations, as this cookie is placed before you can click “I am happy with that” in the Cookie Control popup.
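
You can verify this yourself (a sketch; the URL is a placeholder for any page on this blog): the very first response already carries the session cookie.

# the headers of a first visit already contain a Set-Cookie: PHPSESSID=... line
curl -sI http://www.example.org/ | grep -i set-cookie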

The right to read

A while ago, I was browsing through gopherspace, a forgotten corner of the internet. I stumbled across a text named “The Right to Read”. It can be found at gopher://zaphod.661.org/0/text/right-to-read.txt. If you don’t have a gopher-capable client, you can use the Floodgap gopher-to-HTTP proxy: http://gopher.floodgap.com/gopher/gw?zaphod.661.org/0/text/right-to-read.txt.

Reading the articles about “Secure Boot” on OSNews, http://www.osnews.com/story/26086/Fedora_secure_boot_and_an_insecure_future and http://www.osnews.com/story/26106/UEFI_Secure_Boot_and_Ubuntu, and the comments on them, reminded me of this text, The Right to Read.

It was also possible to bypass the copyright monitors by installing a
modified system kernel. Dan would eventually find out about the free
kernels, even entire free operating systems, that had existed around the
turn of the century. But not only were they illegal, like debuggers–you
could not install one if you had one, without knowing your computer’s
root password. And neither the FBI nor Microsoft Support would tell you
that.

The Microsoft certificate required to boot a kernel in a “Secure Boot” environment and the idea sketched in this story, that one would require the “root password” of the computer to install an alternative OS, are more or less the same concept. One would require “permission” from Microsoft to use a non-Microsoft kernel or OS.

This, in combination with other forms of DRM, restricts not only the right to read texts but, in today’s perspective, also the right to listen to music, to watch a movie, or to enjoy any other digital content. Look at recent developments, such as the blocking of The Pirate Bay, or, less recent, things like data retention, which by the way was in the news again recently: the UK government wishes to extend data retention to a much larger scale. I am sorry, the article about this is in Dutch: http://tweakers.net/nieuws/82577/britse-regering-wil-vergaande-bewaarplicht.html.

I am concerned about the consequences of these and other recent developments. I am concerned that, in the near future, there will be no such thing as freedom. Our freedom taken away by governments and big companies, supposedly to protect us…. us? Or them?

I wish to point out that copyright was initially put in place as a way to control what was printed. It was a form of censorship, and it seems our leaders have not forgotten this fact; with the means of modern technology, they wish to enforce it upon the people once again.

Arrrrrr!!!!

Sorry for the delay. I have been planning to do this for a while, but here it is: a few instructions on how to set up a PirateBay proxy. I realise there are tons of these online already. However, some of them are outdated or incomplete. This guide explains how to configure an ArchLinux-based Apache installation to run a PirateBay proxy.

This site explains some theory about reverse proxies; however, it is outdated, and mod_xml2enc no longer seems to be required. I haven’t been able to find it anywhere anyway, so knowing you don’t have to look for it is already the first step.

This article explains a configuration, but is outdated (it uses the .org instead of the .se domain, for example). In the comments there is a link to a github project containing an up-to-date configuration. However, this configuration contains some debian-specific stuff, which has to be commented out in order to run on an ArchLinux installation. It also contains some authentication directives; see below.

Install mod_proxy_html from AUR

yaourt -S mod_proxy_html

During my earlier attempt, I confused it with mod_proxy_http, which was the reason it failed.

Use configuration from: https://github.com/4np/pirateBayProxy

  • Change the ServerAlias to the desired (sub)domain
  • Comment out
    Include /etc/apache2/sites-available/rule-*
    as this is specific to a debian configuration.
  • Comment out the Auth stuff to enable public access.

As the configuration is provided in a rather lengthy file, with some configuration options at the top,
you can save it to a separate file and include it from httpd-vhosts.conf. Please note the path
is relative to /etc/httpd and not to the directory of the current file, so it would become something like

Include conf/extra/httpd-thepiratebay.conf
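
Putting it in place then comes down to something like this (a sketch; the file name matches the Include line above, and the paths assume a default ArchLinux Apache layout):

# copy the configuration saved from the github project into Apache's extra directory
cp httpd-thepiratebay.conf /etc/httpd/conf/extra/
# check the syntax and reload Apache
apachectl configtest && apachectl restart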

Please note: this configuration is not perfect. There are small issues, such as the image on the front page
not being displayed correctly.

Arrrrrr!!!!

P.S. I am way behind on everything, including this blog: I still have to explain how to set up a git repository using gitosis, perhaps write something about gcj, and I am still trying to figure out eiskaltdcpp from AUR, specifically how to tweak the PKGBUILD to build the CLI and daemon correctly, and perhaps write something about R. Butttt I have been busy busy busy lately. Sorry to keep you waiting.

I wanted to configure a mailing list on my server. The best-known mailing list software is GNU Mailman. This software is of course available in the ArchLinux repository, but since I run a self-compiled exim (due to some specific compile-time options) and the repository package depends on an MTA from the repository, I’ve also compiled Mailman myself.

./configure --with-cgi-gid=http --with-python=/usr/bin/python2

The option --with-cgi-gid tells Mailman the webserver is running as group “http”, as the default configuration expects it to run as “nobody”; to keep the rest of my configuration intact, I decided to put this in Mailman’s compile-time configuration. The other option is the python path, as ArchLinux uses python2 instead of python as the executable name.
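
For reference, the whole build came down to something like this (a sketch from memory; the version number in the file name is an assumption, and as the installation manual explains, the install prefix has to be prepared first):

# the default prefix /usr/local/mailman must exist, setgid, group mailman (see the manual)
tar xzf mailman-2.1.15.tgz
cd mailman-2.1.15
./configure --with-cgi-gid=http --with-python=/usr/bin/python2
make
make install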

Apart from that, I have followed the installation manual, except for the SMTP callback part, as my exim thinks that rule is invalid.

Currently, this has been configured for one domain only. Although the installation guide suggests using multiple installations for multiple domains, I don’t see any problems if I wish to expand this single installation to multiple domains. Just a few tweaks to the configuration files should make that work.

The mailing lists are running on multiple domains now. The only restriction of a single installation is that a list name can be used only once across all domains. I have not verified this, but looking at the configuration, it only matches on the list name and ignores the domain part completely when receiving mail.
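
When creating a list, the web and mail domains can be passed explicitly (a sketch; the domain names are placeholders, and the command is run from the Mailman installation directory):

# bind a new list to a specific web host and mail domain
bin/newlist --urlhost=lists.example.org --emailhost=example.org mylist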

I am using the struction.de mail configuration, and these modifications to the exim config file don’t appear to cause any trouble.

When I tried to upgrade my system the other day, I got the message:

:: The following packages should be upgraded first :
pacman
:: Do you want to cancel the current operation
:: and upgrade these packages now? [Y/n]

This is nothing unusual per se, but what happened next was:

resolving dependencies…
looking for inter-conflicts…
error: failed to prepare transaction (could not satisfy dependencies)
:: gcc: requires gcc-libs=4.7.0-3

The solution is:

[root@hp ~]# pacman -S libtool gcc gcc-libs
:: The following packages should be upgraded first :
pacman
:: Do you want to cancel the current operation
:: and upgrade these packages now? [Y/n] n

So, you have to specify libtool, gcc and gcc-libs at the same time, else you will
run into a cross-dependency problem pacman seems unable to solve by itself.
When it asks to upgrade pacman first, you should say no.

After this, you can upgrade pacman, and the rest of your system without issues.
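
In other words, something like:

# first the postponed pacman upgrade, then the full system upgrade
pacman -S pacman
pacman -Syu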

I am required to use CCFinder for an assignment at my university.

As the download page, http://www.ccfinder.net/ccfinderxos.html, only offers
a binary for Windows, I must compile from source.

Please note: http://www.ccfinder.net/doc/10.2/en/install-ubuntui386.html
appears to indicate there used to be a binary for Linux available.

There is a source package for Win32, and an additional download for
Ubuntu 9.10 i386. However, my environment is ArchLinux x86_64.
From the blog http://nicolas-bettenburg.com/?p=290 it appears the
karmic.mk files are to be renamed to Makefile in order to compile,
and the dependency libraries, i.e. the boost library, must be installed.
However, just installing the dependencies did not make it compile.

The first problem I encountered was

In file included from ../common/bitvector.cpp:2:0:
../common/bitvector.h:8:2: error: ‘size_t’ does not name a type

and some more error messages related to size_t. This is caused
by a missing include of cstddef, so adding

#include <cstddef>

fixed this problem.

The next problem I ran into:

g++ -licule -licutu -licuio -licuuc -liculx -licudata -licui18n -lboost_thread-mt -o ../ubuntu32/ccfx base64encoder.o bitvector.o ccfx.o ccfxcommon.o prettyprintermain.o rawclonepairdata.o unportable.o utf8support.o ccfxconstants.o
/usr/bin/ld: cannot find -lboost_thread-mt

Solution: edit the Makefile. On the LIBS= line, replace -lboost_thread-mt with -lboost_thread.

As for the modules, I will not be describing every problem, only the solutions.


Compiling the picosel module

Copy karmic.mk to Makefile
No other problems encountered


Compiling pyeasytorq module

Copy karmic.mk to Makefile
Change
-I/usr/include/python2.6/
to
-I/usr/include/python2.7/

Add -fPIC to OPTS=
Change -lboost_thread-mt to -lboost_thread on the LIBS= line


Compiling the picosellib module

Note there is a typo in the guide pointed to above;
the correct spelling is with a double l.

Copy karmic.mk to Makefile
Edit the Makefile:

Changes to OPTS=
Replace
-I/usr/lib/jvm/java-6-openjdk/include
by
-I/usr/lib/jvm/java-7-openjdk/include/ -I/usr/lib/jvm/java-7-openjdk/include/linux
Add -fPIC

Changes to LIBS=
Replace -lboost_thread-mt with -lboost_thread


Compiling the CCFinderXLib module

Copy karmic.mk to Makefile
Edit the Makefile:
Changes to OPTS=
Replace
-I/usr/lib/jvm/java-6-openjdk/include
by
-I/usr/lib/jvm/java-7-openjdk/include/ -I/usr/lib/jvm/java-7-openjdk/include/linux
Add -fPIC

Changes to LIBS=
Replace -lboost_thread-mt with -lboost_thread
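
Since the boost, python and java tweaks recur in several of these Makefiles, they can also be applied in one go. A rough sketch, assuming it is run from the CCFinderX source root after the karmic.mk files have been renamed (the -fPIC additions are easiest done by hand):

# apply the recurring substitutions to every module Makefile
for mk in $(find . -name Makefile); do
    sed -i 's/-lboost_thread-mt/-lboost_thread/' "$mk"
    sed -i 's|python2\.6|python2.7|' "$mk"
    sed -i 's|/usr/lib/jvm/java-6-openjdk/include|/usr/lib/jvm/java-7-openjdk/include -I/usr/lib/jvm/java-7-openjdk/include/linux|' "$mk"
done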



Apart from some “personality problems” (the binary still believes it is for Ubuntu i386), this appears to have produced a working binary:

[andre@hp ubuntu32]$ ./ccfx
CCFinderX ver. 10.2.7.4 for Ubuntu i386 (C) 2009-2010 AIST
[andre@hp ubuntu32]$ file ccfx
ccfx: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=0x29bf88343a59e7c0f56c83115d704d139b066ff9, not stripped

However, the binary does not work as it should. The error message it produces is:

sh: /usr/bin/python: No such file or directory

This is due to the fact that the path to the Python 2 interpreter is hardcoded into the
program, in ccfx.cpp line 2567. As ArchLinux has already switched to Python 3, the
correct path to the Python 2 interpreter is /usr/bin/python2.
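
This can be patched before compiling; a sketch, assuming the path appears in ccfx.cpp as a quoted string literal (verify line 2567 afterwards):

# point the hardcoded interpreter path at python2
sed -i 's|"/usr/bin/python"|"/usr/bin/python2"|' ccfx.cpp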



In my opinion, it’s a bad habit to hard-code paths into a program that is
meant for public release. Open source projects usually distribute their
source code with a configure script included, which detects the correct
paths of includes et cetera. And for good reason, as editing all
these Makefiles manually is quite inconvenient.

Please note: the paths and versions mentioned in this post are specific to my ArchLinux installation, and another distribution might ship other versions of python and java.

Another note: the binaries obtained by this procedure pass the test described in Nicolas Bettenburg’s blog. Replacing boost_thread-mt with boost_thread might introduce multithreading-related issues; then again, a non-multithreaded version of a threading library doesn’t make much sense, so I don’t expect this to be an issue.

A few days ago, I received a complaint that the mail services on my server had stopped working. When the user attempted to send an email, they got the error message “temporary error, please try again later”. The reason for this error was that the anti-virus process had been killed, and my mail system refused to handle mail due to this problem.

The reason the anti virus process was killed is because the system ran out of memory:

Mar 23 20:36:09 stock kernel: Out of memory: Kill process 24582 (clamd) score 66 or sacrifice child

In recent months, the number of sites hosted on my server has increased. Also, due to the requirements of some new software on my server, I have enabled a number of PHP modules that hadn’t been enabled before (such as the PDO modules for both MySQL and PostgreSQL). These changes are, as far as I can tell, responsible for the growth in Apache’s memory usage.

I will still have to investigate how to reduce the memory usage: the server is running again, but the available memory is still rather low.
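
A starting point for that investigation (a sketch; httpd is the process name on ArchLinux, other distributions use apache2):

# sum the resident memory of all Apache workers, in MiB
ps -C httpd -o rss= | awk '{ sum += $1 } END { printf "%.1f MiB\n", sum / 1024 }'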

A look at Drupal

In the past months, I have been hosting several WordPress-based websites. These were my first experiences with the WordPress platform, and they were quite positive. Therefore I decided to use it for my personal blog as well.

However, I was also asked to host a website whose requirements go beyond the functionality provided by WordPress, and I was not aware of any out-of-the-box solution to fulfil these needs. Therefore I was thinking about developing this website myself.

In the past, I developed my own website, which used to run at this domain, so I am quite confident I could develop this one. I was planning to release it as an open source product and make it part of the BlaatSchaap Coding Projects. This would also mean I would work on my own website as well, and create a kind of framework which supports both projects.

However, a friend of mine suggested taking a look at Drupal. There is a so-called installation profile, Community Forge, that, to some extent, provides the functionality I am looking for. The “offers” and “wants” functionality looks like what I need, but it would require adding regional options: the site I am asked to make should indeed offer this functionality, but it should be a single site, running at the national level, in which the user can search within his own region.

Apart from that, it didn’t seem to function properly. Added items didn’t appear to be listed as they should. There was a message about selecting categories at the right, but they were missing. After disabling some modules, which provided functionality not needed, they suddenly appeared. When re-enabling those modules, the categories where still displayed.

Another problem with Community Forge is that it is based on Drupal version 6, while version 7 is the current version, and it appears to be no longer maintained. Also, the site feels rather slow, while other sites on my server don’t feel that slow, so I conclude it is this specific site being slow.

I have also set up a Drupal 7 installation on my server, which runs faster than the Community Forge Drupal 6 site. Drupal 7 offers some additional features, like being able to install a module by entering its download URL. In Drupal 6, one had to upload the files to the server and place them in the correct directory manually. In that respect, I find WordPress much better, as its plugin installation process is fully integrated and doesn’t require me to search for modules on another website.

The same goes for dependencies between modules, which have to be installed manually. In that respect, WordPress is more user-friendly. However… let’s take a closer look at Drupal. During my investigations I came to believe I might try using Drupal on my own website as well, as it offers some features which might be interesting for my current plans with the BlaatSchaap Project.

But let’s look at the plan for this website I am supposed to make. A desired feature is authentication through other websites, for example Facebook. Yeah…. big and evil Facebook. For this I was looking at OpenID Selector, which is a nice-looking interface for OpenID (as opposed to entering the OpenID URL manually), and which can, through an additional module, Facebook Connect, also authenticate using Facebook. At least, that’s what’s supposed to happen.

However, it doesn’t work. The Facebook Connect module keeps complaining it cannot find the Facebook API, even though I have downloaded and placed the required files in the specified directory. (Yes, something that had to be done manually.) The OpenID Selector also requires placing some files manually, and this one didn’t work either. A second installation attempt seemed to work better: it didn’t produce a screen full of error messages, and the login screen appears as it should; however, the login procedure fails. Another attempt to enable the Facebook Connect module doesn’t show the “API not found” error any more. This is really puzzling me, as I haven’t changed anything between attempts.

Another thing that looked interesting is OAuth Connector, since many sites, such as Facebook, Hyves and MySpace, offer OAuth. In that respect, OpenID and OAuth are the two major protocols for cross-site authentication, so a general OAuth plugin, not tied to any specific site, would offer much more functionality. However, it appears this connector only provides an API to perform the authentication; when I enabled it, I didn’t find any configuration for it (even though there should be one, as mentioned in this blog).

There are modules on the Drupal site that offer a different kind of solution: authentication through a third-party server. So… the user *and* my site will have to trust a third party, a man in the middle. I don’t like that idea much, thank you. One of these, Janrain Engage, even charges for its services, while Gigya is free for non-commercial use.

Seeing the categories in the Drupal 6 installation appear only after disabling some modules, and seeing Facebook Connect (on the Drupal 7 installation) complain about the missing library one time and work fine the next, gives me the feeling Drupal behaves rather unpredictably. I guess I will try some more, using a clean installation; perhaps I am overlooking something…

I might have found a possible explanation for this behaviour. It appears Drupal has a cron job enabled, which might have done something that made the things that were not working in previous attempts work later on. This might apply to the libraries I had to add manually: the cron job might have “found” them afterwards. I am just guessing.
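
If that guess is right, triggering the cron run by hand should show it (a sketch; the URL is a placeholder, and on Drupal 7 the cron key from the status report page has to be appended):

# manually trigger Drupal's cron run
curl http://www.example.org/cron.php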