Thursday, December 23, 2010

Tarfile with exclude: Linux vs. Solaris

There are differences, yes, and not only in tar syntax. But I'm going to stick to tar (for the moment).
I sometimes have to make a tar file excluding certain directories or files, and it's subtly different on Linux and on Solaris.
For example: we want to create a tar file from a directory called ./mydir, name the output file mydir.tar(.gz), and we put into ./mydir/Exclude a list of files to be excluded from the resulting tar file.
Solaris version:

tar cvfX mydir.tar ./mydir/Exclude ./mydir

will create a tar file called mydir.tar containing all subdirectories of ./mydir, and at extraction time it will recreate the ./mydir directory as well

Linux version:

tar -cvf mydir.tar --exclude-from=./mydir/Exclude ./mydir

or, if you directly want a gzip file:

tar -zcvf mydir.tar.gz --exclude-from=./mydir/Exclude ./mydir

(note: GNU tar's --exclude=PATTERN takes a single pattern; it's --exclude-from, short form -X, that reads the list of exclusions from a file)


where the contents of the exclude file ./mydir/Exclude are identical in both cases, for example like this (entries are assumed to be paths relative to ./mydir):

Exclude
foo
bar


So that's pretty much it. It's very easy, but I always get confused about it, and the differences between Linux and Solaris only make it all harder to remember.
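The GNU tar round trip can be tried in a scratch directory; here is a self-contained sketch (the file names keep.txt, foo and bar are just examples, not from the original post):

```shell
# Build a sample directory with an exclusion list (example names only)
mkdir -p mydir
echo "keep me" > mydir/keep.txt
touch mydir/foo mydir/bar
printf 'foo\nbar\nExclude\n' > mydir/Exclude   # the list excludes itself too

# GNU tar reads exclusion patterns from a file with --exclude-from (short: -X)
tar -czf mydir.tar.gz --exclude-from=./mydir/Exclude ./mydir

# List the archive: foo, bar and Exclude should be gone, keep.txt kept
tar -tzf mydir.tar.gz
```

GNU tar exclusion patterns are not anchored by default, so a bare entry like foo also matches foo in any subdirectory of ./mydir.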

Wednesday, December 15, 2010

Recursively rename files

I know, I know, that's a very easy one... but just when I should have known how to do it in one shot, I didn't. And of course there are a lot of references for such things out there, so I'm absolutely not claiming to be original, but sorry if I don't post any references.
So, here is the thing: imagine you want to rename all *.txt files in a certain directory to *.back, including those in any subdirectories:

find . -type f -name "*.txt" -print| rename -n 's/(.*)txt$/$1back/g'

Remarks:
- this works only on Linux (or wherever the 'rename' command is available)
- rename accepts Perl regexp syntax, hence the $1 for the substitution of the previous match (.*)
- tip: it's good to try rename with the -n option first, which does nothing except show what it would do without -n
- careful: if you fear that something nasty could happen with very deep directory trees, the find consuming lots of CPU, or for whatever reason, you can include a -maxdepth 10 (or any other reasonable number), like this:

find . -maxdepth 10  -type f -name "*.txt" -print| rename -n 's/(.*)txt$/$1back/g'


- another thing that confused me when I tried it out: don't include the 'usual' *.[file_type] (in my case *.txt) at the end, after the rename 's/.../.../' part; it will prevent find from really diving into the subdirectories
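On systems without the Perl rename (Solaris again...), roughly the same result can be had with a plain find + mv loop. A sketch that sets up its own demo directory (the names demo, a.txt, b.txt are examples):

```shell
# Demo tree (example names); the loop renames *.txt to *.back recursively
mkdir -p demo/sub
touch demo/a.txt demo/sub/b.txt

find demo -type f -name "*.txt" | while IFS= read -r f; do
  mv -- "$f" "${f%txt}back"    # strip the trailing 'txt', append 'back'
done

find demo -type f    # shows demo/a.back and demo/sub/b.back
```

Caveat: reading find output line by line breaks on file names containing newlines; for those, find -exec is safer.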

Wednesday, November 10, 2010

Real time audio and Ubuntu Studio

I have been trying to work with Linux audio recording and editing software for some time now, but it hasn't been a very satisfying experience. One of the problems I had and wasn't able to solve was the real-time kernel issue: for some things to work properly, the sound server jackd needs to run on a real-time-enabled kernel. And the thing is, I couldn't find the right information on the internet to help me out with this (in fact it's surprisingly easy).

So recently I read about Ubuntu Studio, a Linux distro obviously based on Ubuntu, which focuses mostly on audio and video recording and editing. I read about the kind of tuned kernel the distro ships with... and I decided to give it a try.
I downloaded the latest one (10.10) and, though it's not a live distro, which I didn't expect and I think is a pity (you can't try it without installing it), I nevertheless had a spare partition on which I could install it. It's a pretty normal Debian installer, quite straightforward, and … congratulations to the Ubuntu Studio guys, they've done a great job! There's lots of interesting software, some of which I had never heard (or read) of before.
The point of all this is: you don't really have to install Ubuntu Studio if you already have Ubuntu (or any other Linux) installed and you don't want to have more than one distro. Just install it to a virtual image (maybe there are already some virtual images available out there), try the software they ship with it, and copy their configuration settings.
And, as for my original problem, the real-time Linux kernel: just create the file below (the Ubuntu Studio guys named it audio.conf) in /etc/security/limits.d (if the limits.d directory doesn't already exist, just create it), and put these lines into it:
# Provided by the jackd package.
#
# Changes to this file will be preserved.
#
# If you want to enable/disable realtime permissions, run
#
# dpkg-reconfigure -p high jackd
@audio - rtprio 95
@audio - memlock unlimited
#@audio - nice -19

and, of course, check that your user (the user you start the audio programs with) belongs to the 'audio' group.
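Since writing under /etc needs root, one way is to stage the file locally first and copy it into place afterwards; a sketch (the install step is commented out because it needs sudo):

```shell
# Stage the limits file from the post in the current directory
cat > audio.conf <<'EOF'
# Provided by the jackd package.
@audio - rtprio 95
@audio - memlock unlimited
EOF

# Then, as root:
# sudo install -m 644 audio.conf /etc/security/limits.d/audio.conf

grep rtprio audio.conf    # quick sanity check of the staged file
```

Remember that both the limits and a new group membership only take effect at the next login.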

Actually, it's easier than that: on a Debian-like system, if jackd is already installed, and as the audio.conf file above suggests, just execute:
dpkg-reconfigure -p high jackd
Or choose 'yes' during a fresh jackd install.

Another possibility, if you already have Ubuntu up and running: install the Ubuntu Studio related packages, and you don't need the whole new Ubuntu Studio distro.

Thursday, April 1, 2010

Hardware acceleration again lost... and found!

On my laptop I mainly use Kubuntu (love KDE4) and I have a Radeon r600 chipset, which at the time of writing is not completely supported: (K)Ubuntu 9.10 uses kernel 2.6.31, and the experimental DRI drivers need kernel >= 2.6.32 for hardware acceleration to run.
So to get hardware acceleration with r600 chipset I:
1. installed a 2.6.32 ubuntu kernel
2. added the proper apt repositories (for ubuntu karmic):

### ati open source bleeding edge drivers
deb http://ppa.launchpad.net/xorg-edgers/drivers-only/ubuntu karmic main
deb-src http://ppa.launchpad.net/xorg-edgers/drivers-only/ubuntu karmic main
deb http://ppa.launchpad.net/xorg-edgers/ppa/ubuntu karmic main
deb-src http://ppa.launchpad.net/xorg-edgers/ppa/ubuntu karmic main

3. aptitude update && aptitude safe-upgrade (visual via kpackagetkit is also possible)
4. reboot with the new kernel
... and everything was fine: I could now enable compositing for kdm, the visual effects were running, great, I love it!

But recently it all got messed up by an innocent update: after a reboot, the gray KDE bar told me compositing was disabled. Xorg.0.log reported some ugly stuff about disabling DRI:

(EE)RADEON(0):
[dri] RADEONDRIGetVersion failed because of a version mismatch.

[dri] This chipset requires a kernel module version of 1.17.0,
[dri] but the kernel reports a version of 2.0.0.
[dri] If using legacy modesetting, upgrade your kernel.
[dri] If using kernel modesetting, make sure your module is
[dri] loaded prior to starting X, and that this driver was built
[dri] with support for KMS.
[dri] Disabling DRI


After some googling around I found this page, where the exact same message is reported, together with some possible solutions, which didn't work for me. But what became clear is that maybe the problem was, as suggested by the log message itself, that the kernel module was being loaded too late (somehow, after X was started).

The solution was quite simple. Just add these lines to the /etc/modules file:
drm
radeon modeset=1

Why? Kubuntu 9.10 uses upstart as its init daemon, and I found that /etc/init/module-init-tools.conf is the file responsible for loading all the kernel modules listed in /etc/modules. So if this is executed (or read) before the /etc/init/kdm.conf file, then it should work, which it actually does.
But, to be honest, I'm not quite sure it always would (run module-init-tools.conf before kdm.conf). So to be really sure, I modified my kdm.conf file, adding a line:
start on (filesystem
and started hal
and tty-device-added KERNEL=tty7
and started module-init-tools
and (graphics-device-added or stopped udevtrigger))

which now guarantees that kdm is not started until the module-init-tools job has run.

Monday, March 8, 2010

VirtualBox: network configuration

VirtualBox comes out of the box with Ubuntu and it's really very easy to use. Why not use VMware Server (free edition)? I used to use it, but it has some disadvantages compared to VirtualBox:
- if you have a slightly non-standard Linux distro, it doesn't install unless you compile the kernel modules yourself, and this is very likely to fail (at least in my experience)
- they moved to an application-server-based implementation (Tomcat) in version 2, which consumes far more resources (on a computer with only 512MB of RAM it is really not usable)

So I finally switched over to VirtualBox, but one of its disadvantages is the networking: it's much more transparent to me in VMware than in VirtualBox. I want to be able to connect:
host (Kubuntu) <---- > guest (Solaris 10)
guest -----> outside world
world -----> guest (this is optional to me, and still not implemented, but shouldn't be too difficult)

So, based on this wiki and a few other sites/tutorials, here is a summary of the steps I had to take to make it work:
Host:
1. enter these lines in my /etc/network/interfaces file (replace user_name with the user name you're going to run VirtualBox as):
iface br0 inet static
address 10.1.1.1
netmask 255.255.0.0
pre-up /usr/sbin/tunctl -t tap0 -u user_name
pre-up ip link set up dev tap0
pre-up brctl addbr br0
pre-up brctl addif br0 tap0
post-up ip route add 10.1.1.0/24 dev br0
post-up iptables -A FORWARD -i wlan0 -o br0 -j ACCEPT
post-up iptables -A FORWARD -i br0 -o wlan0 -j ACCEPT
post-up iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
post-up /bin/echo "1" > /proc/sys/net/ipv4/ip_forward
pre-down ip link set down tap0
pre-down /usr/sbin/tunctl -d tap0
post-down brctl delbr br0

2. run the command (as root): ifup br0
3. start the VirtualBox guest, choosing in the configuration GUI: net -> adapter1 -> attached to: select 'bridge' -> select 'tap0', which should appear if you have configured it as above

Guest:
Yeah, that really depends on which guest you're planning to run. In my case, Solaris 10, it basically comes down to (for a static IP configuration):
1. enter the static IP address in /etc/hosts, together with a loghost entry
2. create the /etc/hostname.<interface> file
3. add the default router in /etc/defaultrouter
4. add the routing table entries (in my case, 10.1.1.0 is the bridge/tap net on the host, and 192.168.0.0 is my real host OS subnet):
route -p add 10.1.1.0 -netmask 255.255.0.0 10.1.1.1
route add -net 192.168.0.0/16 10.1.1.1
(and maybe I'm leaving something out here....)
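Put together, the guest-side files end up looking something like this (the guest IP 10.1.1.2, the hostname solarisguest and the interface name e1000g0 are hypothetical examples, not from my actual setup):

```
# /etc/hostname.e1000g0
solarisguest

# /etc/hosts (excerpt)
127.0.0.1   localhost
10.1.1.2    solarisguest loghost

# /etc/defaultrouter
10.1.1.1
```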

Enjoy

Tuesday, February 9, 2010

Remote Desktop: nomachine

I know, nomachine is not exactly open source (they release lots of their stuff as open source, and there are open source packages based on their open source libraries), but hey, it's a great tool that works great out of the box. I use it to connect to my home box from work and it runs just great, even though my home internet connection sends data at only about 30KB/s. I love it.
So recently I needed to connect to a second home machine through the same router, and since ssh server port 22 was already in use, I had to configure the whole thing to work on a different port. It's really very easy, but you do it once and a few months later you can't remember how. So here it is for a Debian(-like) system:
A) client side:
1. download the nxclient package (from the nomachine site)
2. install it simply (as root or sudo su) with dpkg -i *.deb
that's it (it doesn't get installed in your default path, so you have to run /usr/NX/bin/nxclient from a terminal or, more easily, from the application launcher menu, usually in the 'Internet' section)

B) server side:
1. download the nxnode, nxserver and nxclient packages from the link above (according to the nomachine site's install instructions, they're all needed)
2. install them in this order: nxclient, nxnode, nxserver
3. now check that the nxserver is running with the command:
/usr/NX/bin/nxserver --status
and you should see something like:
NX> 900 Connecting to server ...
NX> 110 NX Server is running.
NX> 999 Bye.
(Hint: it should be running, but if it's not, try /usr/NX/bin/nxserver --start or --restart and check again)

Note: don't forget to configure your router, if you have one, to forward the ssh port to this new machine so you can connect at all!

The server is up and running and it should just work now, but here are some extra tweaks that could be useful:
1. Run the server on a different port:
- edit (on the server) the file /usr/NX/etc/server.cfg and search for SSHDPort; uncomment it and set the port you want (careful: there are 2 entries with SSHDPort: one for the server daemon and another one for authentication)
- and, I don't know if this is necessary, but I also edited the /usr/NX/etc/node.cfg file and uncommented its SSHDPort entry, changing it to my non-standard ssh server port
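For reference, the edited line ends up looking something like this (2222 is just an example port, and the exact Key = "value" quoting is from memory of the NX 3.x config format, so double-check it against the commented defaults already in your file):

```
# /usr/NX/etc/server.cfg (excerpt): port sshd listens on for NX sessions
SSHDPort = "2222"
```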

2. Run your nxclient with a different desktop environment: let's say lxde (which is really very light)
- in the session configuration window, go to the 'General' tab -> 'Desktop' section and choose 'Unix' and 'Custom'; now click on 'Settings', mark 'Run the following command' and write 'startlxde' (of course you must have all the necessary lxde packages installed).

Enjoy nomachine, it's really fast.