February 17, 2015

Toggling the background in Vim

I end up switching the background on my terminal a lot depending on where I am working and the ambient light conditions around. If you use vim, you know this can lead to some text becoming almost invisible (perhaps with the default colors only?) when you switch from a light background to a dark background or vice versa.
So, I ended up mapping a couple of simple shortcuts for setting the vim background to dark or light :
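For example, something along these lines in ~/.vimrc does the job (the key choices here are arbitrary examples, not necessarily the original ones) :

```vim
" Set the background explicitly (example keys, remap to taste)
nnoremap <F5> :set background=dark<CR>
nnoremap <F6> :set background=light<CR>

" Or a single key that toggles between the two
nnoremap <F7> :let &background = (&background == 'dark' ? 'light' : 'dark')<CR>
```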

October 21, 2014

Extending QCOW2 virtual hd on Mac using VBoxManage

If you are on a Linux box, you could just run qemu-img to resize the virtual hd file. But on Mac, at least, I could not get qemu to build. So, using the VirtualBox command line tool, I did the following :

VBoxManage clonehd original-hd-file.qcow vdi-copy-of-original-hd-file.vdi --format VDI

VBoxManage modifyhd vdi-copy-of-original-hd-file.vdi --resize [new size in MB]

VirtualBox does not have support for resizing QCOW files, so you have to convert the file into either the VDI or the VMDK format before you can resize it. If the file is not too big and you have ample space, this is a fairly quick solution. Also note that with modifyhd --resize you can only extend a file, you cannot shrink it.

October 15, 2014

The Narrows Hike - A Memoir

A very well written memoir by Viswa about the Narrows day hike we did recently. One of the best groups I have hiked with.

Where the hearts meet and the girls eye diamonds,
Not the typical gentlemen's clubs,
But for him, PRASAD, a place to spade out some bucks,
He's a gambler with lots of luck.

Walked in with style, she dealt the cards out,
Amazing ride, his sequence flushed out,
He called it his hand, cashed in his demand,
Ready to walk out, she can; but SRI KAN'T.

All for one, One for all,
Consumed with chores, once and for all,
He has you but mostly you have him,
Trust you won't be lost, he's a MESH around U.

As the ocean calls for drops
Or the river Virgin may be,
The joy ends up with the caller
But the burden's borne by the KOLLI.

Don't know much about knee pain,
Don't care much about need to train,
Can't help him watch sing along,
That will make you see A RUN along.

Hopping over the stones like a rabbit,
Makes you wonder how he got into this habit,
A cam and a camera in each hand,
The lenses were blessed to be NAVEEN'S friend.

Just when you thought you got me,
I tell mom nature, not a chance there be,
Not for a moment, please think I'd drop,
Up like a bouncy BALL, GEE I pop.

Slide to the left, right and front,
Wait up to catch up, behind that were left,
Point to the steps, so they step to the point,
They do what you say, NAIDU what you do.

You pin to his route, he figured it all out,
Follow his steps and you will hardly step out,
In the event you missed, his trek so swift,
To help he will turn like an EERPINI bend.

Life's a journey, a longer one for some,
Like a drop of a whiskey versus a bottle of rum,
Moon may be bright, but it's the Sun with the light,
Who knew otherwise until CHANDu was with us, right?

Open the gates, you see them nine,
Expend an hour or two, they were fine,
The two streams met, then there were five,
When the Sun has set, SATEESH helped survive!

October 12, 2014

Survival vs Existence.

The Narrows Hike - around 10 miles in

The last shuttle leaves the Temple of Sinawava in another five hours, at 1930 hrs. Tall canyons, millions of years of water and ice shaping up the stone, the process visible before your own eyes as water gushes down on the smooth and round stones. Your mind snapshots the beauty of the surroundings, but your body would rather get this over with. No one in sight, only the sound of water persistently eroding away the canyon walls. With every step, you slip, stumble, your brain corrects for the movements, and before you topple, you gain a tighter footing. The swollen knee cap from the earlier fall is feeling better in knee deep cold water, yet it could totally do without working against the steady current. Will I survive another 10 miles without injuries, will I make it before sunset, will it matter if I don't ? I am hungry, I have not had any food so far. Though there is no lack of energy, I can feel my stomach caving in. The solitude and force of nature around, strangely encouraging and replenishing. No tasks, no schedules. I am not waiting for someone to reply back, I am not thinking about dealing with the next shit storm. Just the plain primal goal of survival. Relieving.

The Regular Sunday Night
I have to get to work tomorrow. Preferably early in the morning, so that I can keep that sleep and food schedule going for the rest of the week. Have a few tasks that have rolled over from Friday and would possibly want to wrap them up before getting along with the day. Have to make sure I have breakfast in the morning. Sleeping now would be a good idea, but a better idea would be to turn away from the screen, read something for a while and then try to sleep. I had my flex armband turned off today, so I am probably going to figure in the bottom few on this week's leaderboards.

What ?
I rarely write about life, but when I do, it is pretty abstract. What I am comparing above are two self-centered memories. Questioning the flow of time around me and my buoyancy is the intent here, because it seems to be so very dependent on the layers of complexity you spin around yourself. The basic animal self never has to worry about anything more than the simple art of survival : food, sleep and sex.

January 29, 2013

git me and git wholog - useful git aliases for viewing commits

When using git, it is a common requirement to be able to get a log of all the commits that you have done or someone else has done. I find git log --committer a little long to type in and use every time; with a little bash-foo you can have simple aliases like "git me" and "git wholog" to do the same. Create a wrapper script, say ~/bin/git, with the following contents : 



#!/bin/bash
# Call the real git binary explicitly so the wrapper cannot call itself
# if ~/bin happens to be on the PATH.

if [ $# -gt 0 ]; then
        if [ "$1" == "me" ]; then
                exec /usr/bin/git log --committer='Satish'
        elif [ "$1" == "wholog" ]; then
                if [ $# -lt 2 ]; then
                        echo "usage: git wholog <committer>"
                        exit 1
                fi
                exec /usr/bin/git log --committer="$2"
        fi
fi
exec /usr/bin/git "$@"


Then, in your ~/.bashrc, alias git to the wrapper :

alias git='~/bin/git'

Make the script executable with "chmod +x ~/bin/git", run "source ~/.bashrc" and you are all set to go.

In retrospect, this could have been done much more neatly with "git config --global alias.xxxx".
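For the record, the git config equivalent looks like this (same alias names; the committer name is the one from the script above, and the shell-function wrapper for the argument is one common pattern) :

```shell
# "git me" expands to "git log --committer=Satish"
git config --global alias.me "log --committer=Satish"

# An alias starting with "!" runs through the shell, so it can take
# an argument: "git wholog Alice" runs "git log --committer=Alice"
git config --global alias.wholog '!f() { git log --committer="$1"; }; f'
```

No wrapper script or bash alias needed, and the aliases work from any shell.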

October 16, 2012



The setup :

Android phone, rooted.
Linux host.
USB 2.0 compliant connection between the above two devices.

The goal :

A client browser program running on the Android phone should be able to load the home page from the web server running on the Linux host.

More information coming soon.

May 09, 2012

Session logging in Linux

Most of you must be familiar with the "who" command on Linux. It lists the users currently logged in on the machine and shows a lot more information, such as where they are logged in from. The command supports quite a few more options, including the usual "who am i" and "who mom likes" variants.
What "who" basically does is parse a couple of files which are used for session logging; this post aims to explain these files and the structure of the records in them. Typically, on Linux systems, two files are used for session logging :

/var/run/utmp : Stores user session related information; a record is appended at each login and then wiped out on logout.
/var/log/wtmp : Stores a trail of the login/logout information and other such logging information for the system. The same record which is pushed to utmp is pushed here on login, and then the same record with the "user" field zeroed out is pushed on logout.

The paths to the files can change and are available more generically as _PATH_UTMP and _PATH_WTMP defined in /usr/include/paths.h. Applications should use these definitions rather than hard coding the paths.

For compatibility, Linux provides both the utmp and the utmpx APIs for dealing with these log files. The utmpx API is an extension (and a parallel API) of the utmp API and was created in System V Release 4. Linux, however, does not create parallel utmpx and wtmpx files; all the information is available in the two files mentioned above.

The records stored in each of the files are of the type "struct utmpx", defined in /usr/include/bits/utmpx.h.

Note that to get some of the types for the fields in the structure it is important to declare _GNU_SOURCE in your program.
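On glibc, the layout looks roughly like the following. The struct is reproduced here under a different name so the snippet stands alone; the array sizes are glibc's and the exact definition varies between systems, so treat this as a sketch and use the real header in actual code :

```c
#include <stdint.h>    /* int32_t */
#include <sys/time.h>  /* struct timeval */
#include <sys/types.h> /* pid_t */

/* Sketch of glibc's struct __exit_status */
struct exit_status_sketch {
    short e_termination;  /* process termination status */
    short e_exit;         /* process exit status */
};

/* Sketch of glibc's struct utmpx (see bits/utmpx.h for the real thing) */
struct utmpx_sketch {
    short  ut_type;        /* type of record, see the list below */
    pid_t  ut_pid;         /* PID of the login process */
    char   ut_line[32];    /* terminal device name */
    char   ut_id[4];       /* suffix from /etc/inittab, e.g. "ts/3" */
    char   ut_user[32];    /* user name */
    char   ut_host[256];   /* hostname for remote logins */
    struct exit_status_sketch ut_exit; /* exit status of a DEAD_PROCESS */
    long   ut_session;     /* session ID */
    struct timeval ut_tv;  /* time the entry was made */
    int32_t ut_addr_v6[4]; /* IP address of the remote host */
    char   reserved[20];   /* reserved for future use */
};
```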

Each of the "utmp" and "wtmp" files contains records of the above format. As I mentioned before, Linux supports both the old utmp API and the utmpx API; if the utmp API is used, the records are read in as "struct utmp", which is defined in /usr/include/bits/utmp.h. The main difference between the two APIs is that the former has re-entrant versions of some functions which the latter (utmpx) does not. The various function calls are discussed below.
One field which requires some attention is the type field. Records in the files can be of various types : 

EMPTY - Invalid accounting information
RUN_LVL - Indicates a change in run-level during boot or shutdown (requires definition of _GNU_SOURCE)
BOOT_TIME - Contains the system boot time in the ut_tv field.
NEW_TIME - Contains the new time after a system clock change, recorded in the ut_tv field.
OLD_TIME - Contains the old time before a system clock change, similar to the NEW_TIME type record.

INIT_PROCESS - Signifies a process which has been spawned by the INIT process.
LOGIN_PROCESS - Record for a process like "login"
USER_PROCESS - Signifies a user session process started by the login program.
DEAD_PROCESS - Identifies a process that has exited (occurs on logout)

RUN_LVL and BOOT_TIME type records are written by "init" and these records are written to both the "utmp" file and the "wtmp" file.

Using the utmpx API you can read and write entries to these files directly. The good thing is that you need not directly open/close these files : calling the function "setutxent()" opens the file, or rewinds the file pointer if it is already open. Similarly, when we are done reading/writing, we can close the file using "endutxent()".
The functions available in the API to deal with the "utmp" file can be divided into three categories :

1. Getting entire records, using the getutxent() function, which returns the entire record starting from the current file pointer location.
       struct utmpx * getutxent(void);
2. Searching for records based on given parameters; the parameters are passed in a struct utmpx pointer.
       struct utmpx * getutxid(const struct utmpx *ut);
       struct utmpx * getutxline(const struct utmpx *ut);
3. Putting a record to the file.
       struct utmpx * pututxline(const struct utmpx *ut);

Functions of the getutx* type return a pointer to the utmpx record read, or NULL if EOF is reached. The returned pointer points to a static area, which on certain systems can act as a cache : if the search parameters match the result returned by a previous call, the same contents might be returned again, so it is a good idea to zero the static area between calls. Also, the pututxline() function may internally call the getutx* functions, but this will not affect the contents of the static area that the getutx* functions return a pointer to. pututxline() returns a pointer to the record passed as the argument on success and NULL on failure.
By default all the getutx* functions work on the "utmp" file; we can change this (for example, to the "wtmp" file) using the following function :

int utmpxname (const char *file); 

Note that this function does not open the file and thus will not report any errors pertaining to invalid paths; such errors will show up only on subsequent calls to the other getutx* functions or to the setutxent() function.
Also, since updates to the "wtmp" file are always just simple appends (remember that records are never removed from this file), they can be performed using the following wrapper call, which opens the file, writes the record and then closes it :

void updwtmpx (char *wtmpx_file, struct utmpx *ut);

Some systems might instead provide the updwtmp(char *, struct utmp *) or the logwtmp (const char *, const char *, const char *) functions for updating the wtmp file.

There is another file on Linux (and some Unix implementations) which provides useful session information : "/var/log/lastlog", which records the time of each user's last login. The records in this file are of type "struct lastlog", defined in /usr/include/bits/utmp.h (search for lastlog). This information is what some systems display on login to show when you last logged into the machine.

Here is some sample code which reads the "utmp" file and lists the user name in each record. The code does not use the API calls but reads the records directly from the file (this is not the preferred way of doing it; the code is just a proof of concept).

Apart from "who", there are other commands like "last" which display information extracted from the utmp/wtmp files.

All the above information (and more to come) is a result of the time I have been spending with "The Linux Programming Interface". Chapter 40 (Login Accounting) in the book provides quite a lot more detail and sample code listings which use the utmpx API and do a lot more than the above sample code.

April 28, 2012

Memory at each node on a NUMA machine using libnuma (linux only)

For the past few days I have been occupied with trying to understand how NUMA works and what the related implications and challenges for system design are. On NUMA machines, Linux supports more than a few memory placement policies, along with a corresponding user space library for dealing with these policies, migration of memory, thread placement and other things. The library for the most part seems to get its information from the files in /proc/self and in /sys/devices/system/node; the latter location in the sys filesystem has a multitude of information pertaining to the memory on the system.
Here is a simple program I wrote today which prints the memory available at each "defined" memory node on the NUMA system; this program uses the numa library libnuma. It is a rather trivial piece of code. My next task at hand (once I am done with the semester finals) is to play around with my parallel implementation of QuickSort using pthreads and see if I can improve performance by manually playing with the locality of memory and thread execution.

I did hit upon one important piece of information which I did not know before : my laptop (DELL E6410) is NUMA compatible but has a single memory node with all 8 Gigs of memory. This may well be the reason why the parallel QuickSort worked faster on my laptop than on some of the department machines with distributed memory banks and faster, truly multithreaded cores.


If you have the numactl package installed, you can also list NUMA related hardware information using the following command :

numactl --hardware

Looking at the output, you can tell that it is the same information as what you would see under the various entries in the /sys/devices/system/node directory.

April 24, 2012

Quick everyday scripting in bash

From my experience, mostly in grading courses and everyday tasks on my linux box, the most common requirements for scripting deal with iterating over a given set of entities and performing some task, or iterating over a given folder's contents and performing some task. Here are some sample cases and solutions to performing the same quickly. (I use bash so I am sure all the following work in bash).

Case 1 : 

You have a program X and you want to verify that it runs correctly (let's assume running correctly means the same as the program X exiting with success) over, say, 100 iterations.


for i in `seq 100`; do
    ./X [ args for X]
    if [ $? -ne 0 ]; then
        echo "X failed in iteration $i"
        exit 1
    fi
done

The construct `...` runs the command inside and is replaced by the output of that command.
The special variable $? stores the exit code of the previous program that was run in the shell script; this is typically what you return at the end of the main() function in a C program. A return code of 0 stands for success.
Note that the above snippet will work assuming that you are in the directory which contains the file (program) X.

Case 2 : 

A folder X has contents a, b, c, d, e, f, g , each of which is a folder. You want to create a file [parent_folder_name].txt in each of those folders.

for file in X/*; do
    touch "$file/$(basename "$file").txt"
done

The construct $(...) runs the command inside and is replaced by the output of that command. The "basename" command gives you the last component of a path, so :

$ basename X/a
a
$ basename X/b
b

A slight modification to case 2, if X has both files and directories and you want to run the operation only for directories then you can modify the above snippet to check if the entry is a file/directory and act accordingly :

for file in X/*; do
    if [ -d "$file" ]; then
        touch "$file/$(basename "$file").txt"
    fi
done

Note that the above two snippets (for Case 2) will work assuming that your current working directory is the parent of directory X.

Case 3 :

Execute a command via ssh on a set of remote machines whose hostnames are in lexicographic order. For example, the CS department here at Purdue has the machines sslab01 through sslab21, and suppose I want to run the command "who" on each one of them.

for i in `seq 21`; do
    if [ $i -lt 10 ]; then
        echo "Running 'who' on sslab0$i"
        ssh sslab0$i "who"
    else
        echo "Running 'who' on sslab$i"
        ssh sslab$i "who"
    fi
done

The if condition in the above snippet is required because, by default, "seq" does not output numbers of equal width, that is, it does not pad them with leading zeros. However, if you pass the "-w" option to seq, you can simplify the above snippet to : 

for i in `seq -w 21`; do
    echo "Running 'who' on sslab$i"
    ssh sslab$i "who"
done

The "-w" option ensures that `seq` prints its output in fixed width, padding the numbers with zeros whenever necessary.

I will edit the post and add more common scenarios as I come across new ones. All the above snippets are short, and once you get the hang of things you can actually type them in at the prompt whenever you want, rather than storing them in a script file and running that file.

Adding Scripts to the PATH

However if you do want to run them as a script (from a file), it is a better option to do the following : 

1. if  ! [ -d ~/bin ]; then mkdir ~/bin; fi
2. touch ~/bin/script_I_run_often
3. Add the snippet to the script
4. chmod +x ~/bin/script_I_run_often
5. Add "export PATH=$PATH:~/bin" to ~/.bashrc

Now you can just run the script as : 

$ script_I_run_often

Other Special Variables

Some other special variables which are very useful in everyday scripting are : 

$# : the number of arguments passed to the script; you can check this variable to see if your script got the correct number of arguments
$1, $2, $3 ... : are the arguments passed to the script
$$ : Pid of the current process

NOTE : None of the information mentioned above is "new" or "novel", the post is just meant to serve as a compilation of some quick bash tricks. I am sure many other such compilations exist on the internet.

April 17, 2012

[Fixed] Blue Tint on Youtube videos (flash problem) on Centos 6.x

I have the adobe flash plugin from the adobe repository with the following version installed on my laptop running Centos  :

flash-plugin.x86_64              @adobe-linux-x86_64

Since the last update, the videos on Youtube have mostly had a blue tint and, less often, a pale orange shade. After a couple of days of not trying to fix it (thanks to the hectic semester), I guessed this was a problem with flash, or something to do with how flash uses the GPU, since those are the only two pieces of proprietary software on my laptop (and thus would be expected to get fixes rather slowly for a problem so evident).
Finally, a little googling around showed this post on the Arch Linux forums : https://bbs.archlinux.org/viewtopic.php?pid=1084648

I just followed whatever was mentioned there, and the flash plugin looks far more stable now. Here are the steps I followed on my Centos 6.x machine : 

1. Created the file /etc/adobe/mms.cfg and added the following contents : 

#Hardware video decoding

2. Added the following line at the beginning of  /etc/X11/xinit/xinitrc-common


You will need root permissions for both of the above; an alternative for the second step is adding the same line to the user specific xinitrc file (~/.xinitrc). I logged out and logged in again after the changes; the blue tint is gone and the flash plugin itself is far more stable than before. As suggested in the thread on the Arch Linux forums, it might be a good idea to try a free alternative to the adobe flash player plugin if possible.

PS : 
1. I run Centos 6.x on my Dell E6410, which has an NVIDIA NVS 3100M GPU.
2. It is just amazing that the thread which showed up was on the Arch Linux forums; Arch Linux continues to amaze me.