m: OMG Ubuntu! I can now play Netflix on my Ubuntu! Yay!!! http://bit.ly/12divkz

http://www.iheartubuntu.com/2012/11/ppa-for-netflix-desktop-app.html

See how Ubuntu 12.04 running on MacBook Pro is faster than Mac OS X on Apple's own hardware

I compared Mac OS X to Ubuntu 12.04 running as a dual-boot on the same hardware, as a VirtualBox guest on a Mac OS X host, and as a VMware guest on a Mac OS X host.

Results were consistent across multiple runs: Ubuntu came out on top on every single run, not just on the average of multiple runs.

Do remember to sort the results by the 'Name' column, and then compare the scores.

http://bit.ly/TNfUek

The software I was using offers only 32-bit tests in the free version (the 64-bit tests are paid), so these benchmarks are 32-bit only. Even so, Ubuntu won hands down on Apple's own hardware!

So much for the 'Walled Garden' experience being better :)

http://browser.primatelabs.com/user/48560

Install pg (Node.js module node-postgres) in Meteor

Make sure you have Postgres server installed, because node-postgres requires pg_config binary.
sudo apt-get install postgresql
Make your application directory:
mkdir my_meteor_app
cd my_meteor_app
Change to Meteor's internal directory, where it installs and uses Node.js:
cd .meteor/local/build/server/
And now install the pg module: (On Ubuntu 12.04 I had to use this trick to get it to install)
sudo PATH=${PATH} $(which npm) install pg


I had to use sudo because the .meteor/local/build/server/node_modules is a symbolic link to /usr/lib/meteor/lib/node_modules which is owned by root.

I had to use the PATH=${PATH} construct because sudo on Ubuntu is configured to reset the PATH to a small, restricted list, and hence my PATH, which contained the path to pg_config (required to build 'pg'), was lost. Prefixing PATH=${PATH} made sudo retain my PATH in the sudoed environment.

I used $(which npm) because npm is not installed system-wide on my machine; it is actually installed and managed by nvm (Node Version Manager) in my $HOME. The $(which npm) gave the exact path that sudo could use to execute the binary; without it, sudo can't find npm (even though PATH=${PATH} should've taken care of this).
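The PATH behavior can be sketched without sudo (a toy illustration; /opt/demo/bin is a made-up path): `env -i` starts a child with a scrubbed environment, much like sudo's env_reset does, and prefixing the command with a PATH assignment re-injects the value.

```shell
# env -i clears the environment (like sudo's env_reset); the PATH=...
# prefix puts our chosen PATH back into the child's environment.
env -i PATH="/opt/demo/bin:/usr/bin:/bin" sh -c 'echo "child PATH: $PATH"'
# prints: child PATH: /opt/demo/bin:/usr/bin:/bin
```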

Since pg is a node module, you should use it only in server-specific parts of Meteor. That is, either under the my_meteor_app/server/ directory, or wrapped in code like this:
if(Meteor.is_server) {
    var require = __meteor_bootstrap__.require,
        pg = require('pg');
}
PS: I got the hint from http://coderwall.com/p/2fveyq

How to install KVM with a working audio (on Ubuntu)

I used the instructions here [1] to install KVM on my Ubuntu 12.04 machine, and was even successful at creating and running another instance of Ubuntu 12.04 as a guest OS.

But I could not get any sound from the guest, which was important considering I am going to test some audio-related setups in these VMs.

I had to scour the interwebs and try different things to get the audio in the guest OS working, so I am documenting the steps here, in the hope that someone finds them helpful, and that I can refer back to them if needed in the future.

The things we are trying to fix are:
1) Make KVM run as a non-root user, specifically as your login user, so that it can share your ALSA audio.
2) Make KVM not disable audio.
3) Replace the KVM binary with a script that sets up the proper environment variable, so that KVM can use your ALSA audio driver.

Instructions follow:
cd /usr/bin
sudo mv kvm kvm.bin
sudo touch kvm
sudo chmod +x kvm
Edit /usr/bin/kvm and paste this text in it:
#!/bin/sh
QEMU_AUDIO_DRV=alsa /usr/bin/kvm.bin "$@"
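To see why the wrapper works, here is a throwaway sketch (fake kvm.bin, made-up arguments) showing that the environment variable reaches the real binary and that "$@" forwards every argument, spaces included, intact:

```shell
set -e
cd "$(mktemp -d)"
# Stand-in for kvm.bin that just reports what it received
printf '#!/bin/sh\necho "drv=$QEMU_AUDIO_DRV argc=$#"\n' > kvm.bin
chmod +x kvm.bin
# The wrapper, same shape as the /usr/bin/kvm script above
printf '#!/bin/sh\nQEMU_AUDIO_DRV=alsa ./kvm.bin "$@"\n' > kvm
chmod +x kvm
./kvm -m 1024 "two words"   # prints: drv=alsa argc=3
```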
Edit /etc/libvirt/qemu.conf and look for the following lines (remember, these lines are in different sections of the file):
# vnc_allow_host_audio = 0
# user = "root"
# group = "root"
Uncomment them, replace root with your local user name, and change the 0 to 1, for example:
vnc_allow_host_audio = 1
user = "gurjeet"
group = "gurjeet"
Now restart the libvirt service:
sudo service libvirt-bin restart

Now launch your guest OS and enjoy its sound.

[1] https://help.ubuntu.com/community/KVM
[2] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=649442

micro: Over 2 months (61 days) I lost 19 lbs (8.6 kg) on a mix of juice fasting and Subway veggie sandwiches. Charts in the blog body.

Weight record
BMI record
Average weight loss over 1, 2, 3, 4, 5, 6, and 7 days.

Windows/Linux to MacBook Pro transition (that failed). And a tip on getting better hardware for half the price.

What follows are the notes I started taking when I first started using the new MacBook Pro (provided by my company), in the hope that they will be useful to some other Linux user being forced to buy into Apple's walled garden.

My colleagues know that I have always had a bias against Apple products, but given that I didn't have a choice this time (company policy for any new laptop requires buying only MacBooks), I honestly wanted to start using the new MBP so that I could get back to work at full throttle. But as the notes below show, it ain't an easy transition from Linux/Windows to Mac OS.

And for the Apple fanboys: don't get me wrong. I love the hardware configuration of the MBP (except the keyboard layout and the lack of some keys). The MBPs are a great choice for some people, but I am not one of them. At the end of this post I compare the MBP hardware we got for $2000 with the slightly better hardware of a $1000 Samsung Series 7 laptop I saw at a BestBuy store. Look for the phrase "comparison of the MacBook Pro and the Samsung Series 7" down below.

This exercise was done 9 days after the laptop actually arrived; until then I had lugged it around, along with my older laptop, using it very sparingly.

Pardon the occasional profanity; it reflects the frustration I was feeling as I went through the process. Here it goes:

==================================


Fresh MacBook Pro

Hung on day 2. I had spent a total of no more than an hour since the time it arrived at the door.

Fucking hate the clamshell mode. And I hate the most the fact that there's no way one can disable it!

Found insomniaX to disable the clamshell mode.
Update: insomniaX is not reliable. Every time I run a VirtualBox virtual machine, the setting resets to default, and insomniaX doesn't even detect it, and keeps showing me that clamshell mode is off. I found NoSleep, and that has been working without a hitch.

Found 'MiddleClick' and used it to assign mouse-middle-click to three-finger-tap gesture. I absolutely need a middle-click (like on a 3-button mouse) to make the most of my Firefox experience (to open link in new background tab, and to close a tab).

I like that two-finger-swipe-right/left gesture is associated in browser with "Go back/forward in History".


If you minimize all windows of an app, say two windows of Firefox, the Dock will show 2 Firefox icons, and yet clicking either icon will bring back only one window. To see the other window, focus on the app's window that is visible, and use the four-finger-swipe-down gesture; the other windows appear in icon form just above the Dock. Clicking those icons will bring back the other windows.

For all practical purposes, Command key is the same as Ctrl key on Windows and Linux. Don't be fooled by the "control" key on MacBook.

cmd-x : cut
cmd-c : copy
cmd-v : paste
cmd-z : undo
cmd-y : redo
cmd-s : save


Firefox shortcuts:
cmd-L : Focus on address bar (Location bar); works on Windows and Linux too.
cmd-k : Focus on search bar.
cmd-option-left/right arrow : switch tabs ; Windows/Linux : Ctrl+PgUp/PgDn


Focus on spotlight : cmd+space
Show all windows of an app : cmd-` (but this won't show the windows that are minimized)

Go to beginning of line and end of line in an editor: (Windows/Linux : Home/End)
cmd+left-arrow
cmd+right-arrow
(But apparently these do not work in Blogspot.com post editor, where I'm taking these notes.)

Go to beginning of document and end of document in an editor: (Windows/Linux : Ctrl+Home/End)
cmd+up-arrow
cmd+down-arrow
(But apparently these do not work in Blogspot.com post editor)


The 'delete' key works like the backspace key on a regular keyboard; to get the actual behavior of a 'delete' key as on other keyboards, one has to use the fn+delete key combination.

Unanswered questions:
How to maximize/minimize/restore windows.
    I am addicted to the following combinations on Windows/Linux: 'alt+space X', 'alt+space N', 'alt+space C'

How to add more clocks/cities to the standard clock in the menu bar.
    At times I need to look up the times of different cities. I used to use Google searches like 'Delhi time' to find out, but later added all the clocks to my standard clock in Ubuntu. I now have the following clocks: New York, UTC, London, Istanbul, Delhi, Sydney. On Mac I don't see a way of doing that, except for adding widgets to the Dashboard.

How to make cmd-tab switch between all open windows, not just those that are un-minimized.

How to kill app automatically when all its tabs are closed.
    I expect the app to terminate itself when all its tabs are closed, but that clearly is not the case on the Mac. I have to hit an extra cmd+Q to make sure that the app is actually killed! This frustrates me in Firefox and Terminal, the applications I use the most.

How to make Terminal close the tab when the bash shell exits.
    After I hit the control+D, the tab simply sits there with a useless message "[Process completed]"!

As noted above, cmd key behaves like control key, but in Terminal, cmd+D splits the window. To exit the bash shell one has to actually use control-D; this is freaking confusing.

Unix command 'top' is different! It doesn't understand the same options that Linux's top does.

There's no Home/End key. No PageUp/PageDown key.
(You can scroll down in man pages on the Terminal by pressing space key, but I couldn't find a way to scroll-up one page at a time (shift-space is the same as just the space); PageUp used to help on Linux. And going to the beginning/end of the man page requires that you scroll through the whole document laboriously; Home/End keys on a standard PC made it so easy.)

There's no USB socket on the right side of the laptop.
I have a wired mouse, and I keep its wire tied up to keep it short and to keep it from messing up my desk. With the MacBook I will have to untie that wire to make it long enough to go around the back of the laptop to its left side. This is not a big deal for me.

Pressing control-left-arrow takes me to the previous workspace/launchpad/dock... I can't remember what they call it; the one that has these widgets (edit: Dashboard). All I wanted to do was go to the start of previous word while I'm typing. Apparently the shortcut is option-left-arrow. I have made this mistake numerous times and landed on the Dashboard, just while taking these notes.

Mac is fucking crippled; or maybe it thinks I am crippled. And I can't believe my company policy is forcing me to go through this. Spent $2000+ and I got just the hardware, since the software it comes with is useless for me. If I have to tweak every little shortcut to make it usable for me, then WhyTF am I not using Linux in the first place anyway.

Enough of this madness. Spent 4 hours of my Saturday morning figuring these things out. I have had this laptop for 9 days now, and never used it seriously, and today I wanted to dedicate time to learning it. Searching for keyboard shortcuts and configuring it to work my way was all going okay until I realized that utilities like top, less, etc. that I use so often don't work the same way as on Linux. This means I have to redo my .bashrc file [1], and that is something I want to avoid doing, because it has been customized over years with my little tweaks and I am pretty sure I'll have to discover the Mac versions of those tweaks all over again. To be fair, I was warned in advance that the /proc filesystem does not exist on MacOS, but I didn't realize that the command-line tools/utilities here are not GNU tools! And that GNU tools like top, iostat, etc. are impossible to compile on MacOS because they rely on /proc.

[1] https://github.com/gurjeet/home/blob/master/.bashrc

For anyone who hasn't yet had the pleasure of trying to talk me into using a MacBook, here are the gripes I threw at the people forcing me into using it; these were raised *before* I was handed this good-looking devil:

.) I spent 2-3 years, over multiple attempts, to move away from Windows to Linux.
    This involved first trying some Linux distros in VirtualBox on Vista, then dual-booting my laptop with a distro I liked. But I had to move back because I was too tied to Vista. I then moved to Linux again because the development environment was much faster under Linux.

.) The time I spent learning to do things in Linux over the last 3 years will have to be forgotten.
.) I will have to learn new stuff to do the same things. And as seen above, that's a lot of work.
.) I moved my development environment, and my whole mindset, to using Linux, and I will have to do it all again to get locked into another commercial system.
.) The things I would learn about MacOS, all the tunables etc., would be of no use when it comes to helping customers, none of whom runs Postgres (or any other service, for that matter) on MacOS.
.) If I ultimately decide to use Linux in a VirtualBox setup, my company spent $2000 on hardware of which I can use only a part, since VM performance will never be close to bare-metal performance.

On the plus side, I got to test Firefox's Sync feature, and was delighted to see that I could migrate all my addons, history, even open tabs (50+ of them) from my Linux machine. Firefox + LastPass plugin + TabMixPlus (with tab settings imported from my other laptop) made my switch almost unnoticeable, and I didn't even feel the difference between my old laptop and the new one, until I left Firefox to do other things :(

On to trying Ubuntu 12.04 in VirtualBox. But I don't have high hopes for that setup either, because of the various missing keys that I use on a minute-by-minute basis. I am not going to try dual-boot yet, because the setup procedure is pretty arcane and nothing is documented for MacBook Model Identifier 9,1. Compare that to any other laptop made out there; you don't need special instructions to install Linux for every different version of the hardware, but with MacBook Pros people have had to develop procedures for every new version of the hardware [2].

[2] https://help.ubuntu.com/community/MacBookPro

Okay, installed VirtualBox and Ubuntu, and oh ... my... god.... I feel at home. It has key-bindings already assigned so that I won't have to retrain much.

The command key is properly assigned to the Windows/Super key; alt and control keys do what they are supposed to do, so it is now control+c/v/x/z/y/... instead of the command+c/v/x/z/y/... madness. It has key combinations for PageUp, PageDown, Home, End (fn + up/down/left/right).

Assigned 7 CPUs (my MacBook Pro is quad-core, but I think HyperThreading is enabled by default, so it shows 8 CPUs to the system) and 96 MB of video memory, enabled smooth scrolling in Firefox, and voila, it's like I'm back on my old laptop, except it is now incredibly faster (boots up in under 20 seconds, and shuts down in under 5 seconds!)

I am now going to try and assign a raw disk partition for my $HOME directory, and use LVM for that, so that
1) I get to use the same partitions when/if I dual boot some day,
2) I get to use the same partition if I choose to use a different distro in VM (Fedora, CentOS, ...), and
3) I can expand my $HOME directory whenever I run out of disk space.

I liked the fact that MacOS allowed me to reduce the primary partition size from 496 GB to 100 GB on the fly, using the pre-installed Disk Utility. I don't think Linux would let me do that!

Sunday evening:
So I went hunting for a comparable hardware at the nearby BestBuy store, and found these beauties for much less than what we paid for the MacBook Pro: Samsung Series 7 NP700Z5C and Samsung Series 9 NP900X4C.

Here's the specification I gave to the guy at the store: Quad-core CPU, 8GB RAM, 500 GB Hard Disk. And the girl who was assigned to help me, after a little deliberation took me to these laptops. And boy was I surprised!

Here's a comparison of the MacBook Pro and the Samsung Series 7 NP700Z5C hardware:

MacBook Pro:

CPU:  2.3GHz quad-core Intel Core i7 Turbo Boost up to 3.3GHz
Display: 15" LED 1680x1050 pixel, anti-glare screen
Disk: 500 GB SATA 5400 RPM HDD
RAM: 8 GB 1600 MHz
Size: 0.95" x 14.5" x 9.82" (2.41 cm x 36.4 cm x 24.9 cm)
Weight: 5.6 lbs. (2.54 Kg)
Extra: Mini DisplayPort to VGA Adapter
Price $2028
(We paid $100 extra for the anti-glare screen (I don't want to use a glossy screen), and got the HiRes display as a result; otherwise I would have chosen the 1440x900 pixel display. Had to get the VGA adapter, else I wouldn't be able to use the projectors that most of the world uses.)

Samsung Series 7 NP700Z5C-S01UB: Samsung.com BestBuy.com  Amazon.com

CPU: Intel Core i7-3615QM Processor, 2.3GHz, 6MB Cache
Display: 15.6" LED 1600 x 900 anti-glare screen
Disk: 1 TB 5400 RPM, with 8 GB ExpressCache
RAM: 8 GB DDR3 1600 MHz
Size: 0.94" x 14.2" x 9.3"
Weight: 5.29 lbs.
Price: $999

(Apple doesn't say exactly which model of Intel CPU is in there, but from what I can tell, both have the same CPU.

Although I do not care much about the graphics card, here's what the Samsung machine uses: "NVIDIA® GeForce® GT 630 M, external, 512 MB Cache". Apple: NVIDIA GeForce GT 650M with 512MB of GDDR5 memory.)

I can't say if Apple and Samsung both employed the same benchmark, but MacBook's battery life is shown on the specs page to be 7 hours, and that of Series 7 is 9.2 hours.

Smaller, lighter, cheaper, more hard disk, and easily dual-bootable :) Which one would you have chosen?

Also, the Samsung Series 7 compared here has a metal body, like the MacBook, and has more vents for heat to exit, so I believe it must be cooler to the touch too, compared to the MacBook Pro, which has no visible vents dedicated to heat dissipation.

GeekSquad (BestBuy's partner!) sells a 1-year warranty for $169 and a 2-year warranty for $269. Even counting that in, the Series 7 laptop comes out way cheaper than the MacBook Pro, not even counting my time spent retraining :)

Git merge vs. Git rebase

One should always prefer a rebase to a merge, because it gives you a clean, linear history graph.

But you should NOT perform a rebase if somebody is following a branch that you are about to rebase. This is because the rebase operation rewrites history by changing the parent pointers of commits, and this can wreak havoc on the local branches of anybody who is pulling changes from your branches.

Rebase is perfectly safe for your local branches, because no one knows anything about your local branches.

To make your existing branches use this facility automatically, you can use this Git command (substitute your branch's name for <branch-name>):

git config branch.<branch-name>.rebase true

Alternatively, you can open a project's .git/config file and append the 'rebase = yes' line to every branch, like so:

[branch "Daas_1.1.1"]
    remote = origin
    merge = refs/heads/Daas_1.1.1
    rebase = yes

Editing the .git/config file may be preferable when you have a lot of branches that follow remote branches. This option applies only to branches that have the 'merge =' attribute set, because those are the only branches on which Git will try to perform a 'merge' when you do a 'git pull'.

To enable this attribute for any new branches that you may create in a repository, you can use this command when inside that repository:

git config branch.autosetuprebase always

And if you want this attribute to be set for every new branch in every Git repository on this machine, add the --global flag (this actually sets the attribute in your $HOME/.gitconfig file):

git config --global branch.autosetuprebase always
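A quick way to convince yourself the setting works (a disposable sandbox; repository and branch names are made up): clone a repo, enable autosetuprebase, and check that a newly created tracking branch picks up rebase = true.

```shell
set -e
cd "$(mktemp -d)"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git init -q upstream
git -C upstream commit -q --allow-empty -m "initial"
git clone -q upstream work
cd work
git config branch.autosetuprebase always
default=$(git symbolic-ref --short HEAD)        # the clone's default branch
git checkout -q -b feature "origin/$default"    # new branch tracking the remote
git config branch.feature.rebase                # prints: true
```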

What follows is an explanation of the difference between merge and rebase using ASCII art. I am not showing any Git commands, to keep it clean. For a better, graphical representation, including the Git commands to perform the actions, see the PDF file here: https://github.com/downloads/stevenharman/git-workflows/git-workflow-with-notes.pdf


Let's start with this branch. It has 3 commits: A, B, and C. This is what is visible to both developers, Dev1 and Dev2, since this is what is in the remote repository.

A ----> B ----> C

Dev1 starts working and performs a commit to the local repository. This is what Dev1's local repository looks like. Remember Dev1 hasn't pushed any commits to the remote repository yet.
                 
A ----> B ----> C ----> C`

Now Dev2 performs a commit to her repository.  This is what Dev2's local repository looks like. Remember Dev2 hasn't pushed any commits to the remote repository yet.

A ----> B ----> C ----> C``


If we hypothetically combine the two local repositories, this is what it'd look like. Even though neither of the two developers explicitly created any branches, Git treats any offshoot of a commit as a branch. So these are two branches, because they share a common parent (commit C) and yet their contents are different. Remember, this is a hypothetical combination of two local repositories, and Git on each machine knows nothing about the commits on the other machine.

                  ----> C`
                 /
A ----> B ----> C
                 \
                  --------> C``

At this point Dev2 decides to push her commit to the remote repository. So this is what the *remote* repository will look like, before and after the push:

Before:
A ----> B ----> C

After:
A ----> B ----> C ----> C``

Dev1 continues to work on his local repository and performs another commit. Here's his *local* repository. Remember, he hasn't performed any 'pull' operations since he started his work on commit C`.


A ----> B ----> C  ----> C` ----> C```

Now if he performs a 'fetch' operation, this is what he'd see in his local repository:

Note: By default, 'pull' == ('fetch' + 'merge')

                  ---> C` --------> C```
                 /
A ----> B ----> C --------> C``


And now Dev1 decides to perform a 'merge' operation. Git will try to merge the changes done on the two branches, and if it finds any merge conflicts, it will wait for the user to resolve those conflicts before it performs the commit. If no conflicts are found, Git performs the commit automatically.

D => The commit that represents a merge of two branches

                  ---> C` --------> C```
                 /                   \
A ----> B ----> C --------> C`` ------ D --->


This is called a merge bubble. The local branch can now be pushed to the remote repository, and this bubble will persist in the history forever.

Now let's assume that Dev1 decided to avoid the merge bubble; here's what he would do. Let's start after the 'fetch' operation.

                  ---> C` --------> C```
                 /
A ----> B ----> C --------> C``


Dev1 performs a 'rebase' so that C`'s parent commit is changed from C to C``. Even in the case of a rebase, Git has to perform a merge of the changes, to make sure that the two branches did not modify the same code in different ways. If there's a conflict, Git will prompt you to resolve it before it performs the commit.

                          ----> C` ----> C```
                         /
A ----> B ----> C ----> C``


Now the local branch can be pushed to the remote repository, and we end up with a nice, linear history. Do note that the C``` commit in this case has the same contents as the D commit in the 'merge' case.

A ----> B ----> C ----> C`` ----> C` ----> C```
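The whole story can be replayed with real commands in a scratch repository (a sketch; names and commit messages are made up). Dev1 commits locally, the shared repository gains Dev2's commit, and a `git pull --rebase` leaves Dev1 with a linear history and no merge bubble:

```shell
set -e
cd "$(mktemp -d)"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git init -q shared
echo base > shared/c.txt
git -C shared add c.txt && git -C shared commit -q -m "C"       # common history
git clone -q shared dev1
echo one > dev1/dev1.txt                                        # Dev1's local work
git -C dev1 add dev1.txt && git -C dev1 commit -q -m "C-prime"
echo two > shared/dev2.txt                                      # Dev2's commit, already pushed
git -C shared add dev2.txt && git -C shared commit -q -m "C-2prime"
git -C dev1 pull -q --rebase                                    # fetch + rebase
git -C dev1 log --format=%s                                     # C-prime, C-2prime, C
```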

My Google search stats


Above are some statistics on my Google searches of all time, as of today, and some conclusions jumped out at me.
  • I search most on Mondays.
  • I give it a rest (somewhat) on Saturdays.
  • My typical day starts at 8 AM and ends around 5 PM.
  • I become active again between 9 PM and 11 PM.
  • My average sleep/rest period is between 2 AM and 6 AM.
  • And most of all, it tells me that I am a search junkie!
Get yours at  https://www.google.com/history/trends

One World, One login

After setting up my personal website, one of my first objectives was to set up my own OpenID server, so that I could use one login for every new website I sign up for (provided they support the ubiquitous OpenID authentication).

A little research showed that one doesn't have to set up a personal OpenID server just to use a personal webpage as an OpenID.

OpenID has a delegation feature, explained here [1], which effectively allows you to use a personal URL as your OpenID while relying on a bigger OpenID provider who can guarantee better availability of the service.

And if you don't like your current OpenID provider, or if they go out of business, you can always switch to a new one without having to change your OpenID registered with various services.

So, I used my existing account at www.myopenid.com to set up delegation from gurjeet.singh.im to the myopenid.com hosted service.

I added these two lines to gurjeet.singh.im/index.html and was done!
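The two tags themselves appear to have been lost when this post was exported. For reference, standard OpenID 1.x delegation tags have this shape; the hrefs below are placeholders for the provider's endpoint and your per-user identity URL at that provider, not necessarily the exact ones I used:

```html
<!-- Inside <head> of index.html; the hrefs are placeholders -->
<link rel="openid.server"   href="https://www.myopenid.com/server">
<link rel="openid.delegate" href="https://yourname.myopenid.com/">
```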




Now my OpenID is gurjeet.singh.im, and I can use this as a login for any service that supports it.

OpenID++! Kudos to whoever first came up with the idea of OpenID!

Update June 15, 2012: Just 3 days after I implemented this, the technique has paid off! www.myopenid.com seems to be down today; I waited about 3 hours for it to come back online. Finally I gave up and decided to switch my OpenID provider. I replaced the above-mentioned 2 link tags with these two, to use www.blogspot.com as my OpenID provider:



... and my personal OpenID url gurjeet.singh.im is back online!

Busy weekend: www.singh.im, Redmine, Dream Studio Linux

Had a great, fruitful weekend.

Finally set up a server for www.singh.im, and set up a personal Redmine with Postgres as the backend.

Migrated from Linux Mint 10 to Dream Studio 12.04 and am loving it, so much so that I seeded a torrent for its ISO. It's 2.7 GB! Much larger than Ubuntu's 600 MB, but totally worth it, since it comes preinstalled with all kinds of media-related software, which was a pain to get working in LinuxMint/Ubuntu earlier.

With this distro (based on Ubuntu 12.04), I no longer feel ashamed of using a Linux desktop. It looks good, feels snappy, and is very keyboard-friendly. I used to miss Windows particularly for its ability to be a keyboard-friendly GUI. Yeah, I know, the Linux desktop from any distro can easily be configured to do what you want, but I want it out of the box in a distro.

Finally, I got hold of Sri Guru Granth Sahib's line-by-line English translation and transliteration in a .doc file (thanks to the leads from the announcer at Medford Gurudwara). I hope to start work on making that amazing work available in a single HTML file, with search functionality built-in.

Cover your tracks in Bash


unset HISTFILE

The Bash shell stores a history of the commands you execute, and you can inspect this history using the history command.

But if you do not want your session history to be saved (maybe you are doing something nasty that you don't want others to know about), you can use the above command to disable history logging for your session, and nobody will know what you did.
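A quick sanity check (a non-interactive sketch; interactive shells behave the same way on exit): once HISTFILE is unset, bash has nowhere to write the session history.

```shell
# Unsetting HISTFILE leaves bash with no file to append history to on exit
bash -c 'HISTFILE=~/.bash_history; unset HISTFILE; echo "HISTFILE=${HISTFILE:-unset}"'
# prints: HISTFILE=unset
```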

PS: There are other audit trails which can still incriminate you, so don't smugly assume that the above command makes you an invisible hacker.

Generating self-signed SSL certificates

Here is a set of commands to create self-signed certificates.
# Create a Certificate Signing Request
umask u=rw,go= && openssl req -new -text -nodes -subj '/C=US/ST=Massachusetts/L=Bedford/O=Personal/OU=Personal/emailAddress=example@example.com/CN=example-postgres-host.com' -keyout server.key -out server.csr

# Generate self-signed certificate
umask u=rw,go= && openssl req -x509 -text -in server.csr -key server.key -out server.crt

# Also use the server certificate as the root-CA certificate
umask u=rw,go= && cp server.crt root.crt

# Remove the now-redundant CSR
rm server.csr

# Generate client certificates to be used by clients/connections

# Create a Certificate Signing Request
umask u=rw,go= && openssl req -new -nodes -subj '/C=US/ST=Massachusetts/L=Bedford/O=Personal/OU=Personal/emailAddress=example@example.com/CN=example' -keyout client.key -out client.csr

# Create a signed certificate for the client using our root certificate.
umask u=rw,go= && openssl x509 -req  -CAcreateserial -in client.csr -CA root.crt -CAkey server.key -out client.crt

# Remove the now-redundant CSR
rm client.csr


I use them to create self-signed certificates for my Postgres installations.
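To sanity-check the output of the commands above, OpenSSL can verify that the client certificate chains back to root.crt. The condensed run below (a sketch: throwaway directory, 1-day validity, same subject strings) regenerates a minimal set and verifies it:

```shell
set -e
cd "$(mktemp -d)"
umask 077
# Server key + self-signed certificate in one step; it doubles as the root CA
openssl req -new -x509 -nodes -days 1 -subj '/CN=example-postgres-host.com' \
    -keyout server.key -out server.crt 2>/dev/null
cp server.crt root.crt
# Client key + CSR, then sign the CSR with the root
openssl req -new -nodes -subj '/CN=example' -keyout client.key -out client.csr 2>/dev/null
openssl x509 -req -CAcreateserial -in client.csr -CA root.crt -CAkey server.key \
    -out client.crt 2>/dev/null
openssl verify -CAfile root.crt client.crt   # prints: client.crt: OK
```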

For the purposes of Postgres connections, you need to replace CN=example with CN=actual-database-user-name in the command titled 'Create a signed certificate for the client'. Then place the server.* and root.* files in the Postgres data directory. Place the client.* and root.crt files on the client machine and use the following format to connect (here with the psql utility) to the database:

PGSSLMODE=verify-ca PGSSLCERT=client.crt PGSSLKEY=client.key PGSSLROOTCERT=root.crt psql -h postgres-server.com -p 5432 -U postgres -d postgres

Of course, you also need ssl = on in your postgresql.conf file.

How to change time in Linux from a network source

Short answer:
sudo /etc/init.d/ntpd stop
sudo ntpd -q -x -g
sudo /etc/init.d/ntpd start

One can always use the `date --set=` command to change system time, but that entails human error. I needed a command that would set my system time from a network source, which is more reliable than my wall clock.

ntpd does this automatically for you. But it does it in very small increments, so that the programs running on the system do not see a sudden huge change in time and go crazy; this makes sense most of the time.

But if you are running virtual machines that you suspend and wake up often, then you need a way to change the virtual machine's system time immediately after you resume it. ntpd does not work well in this case, since it will take forever to bring the system up to the current time.

A little digging brought up ntpdate, but since that is being deprecated in favour of ntpd, I used `ntpd -q -x -g` to bring the virtual machine's clock up to the current time.

If you are running half-decent virtual machine software, like VirtualBox, it comes with 'Guest Additions', which, when installed on the guest OS, will perform such chores for you after every wake-up.

Allowing root access in AMIs created/derived from Amazon Linux AMIs

Short answer: Edit /etc/cloud/cloud.cfg and set disable_root: 0, and in /etc/ssh/sshd_config set PermitRootLogin to without-password.

There are a lot of people asking on the AWS forums, and elsewhere, how to build AMIs derived from Amazon Linux AMIs such that users of the derived AMI can launch an instance and log in as root via SSH.

But the way Amazon Linux AMIs are configured, the root user is greeted with a message like 'Please login as ec2-user rather than root' and the connection is terminated after 10 seconds.

The reasoning behind such a setup is that allowing root login over SSH opens up the instance to vulnerabilities. And at the same time, the recommended solution is to log in as ec2-user and do a `sudo su -` to gain root access.

I find it bogus to disallow root access over SSH and then allow ec2-user to access the root account without any restrictions!!! And mind you, all access is via public/private keypairs, generated and supposedly handled carefully by the user.

If anything, they should document how to deny root access to the ec2-user, if so desired by the AMI creator.

Okay, now the technical guts of how to fix this situation.

The reason behind that message upon root SSH login is that the file /root/.ssh/authorized_keys contains a 'command' prefix on the authorized key, similar to:

command="echo Please login as ec2-user user rather than root; sleep 10; exit 0" ssh-rsa AAP...

Even after you remove the 'command' and everything before the 'ssh-rsa', the /etc/ssh/sshd_config has a setting that will disallow root login.

And even if you fix all this, you will discover that when you bundle up an AMI from your instance (which is created from an Amazon Linux AMI) and launch the instance from this derived AMI, you will be back to square one, since the /root/.ssh/authorized_keys will again contain the same 'command=' prefix!

So here's how to fix this:

Launch an instance from Amazon Linux AMI, and do whatever customization you want. When you are ready to create an AMI (derived AMI) from this instance, run the following 4 commands, and the instances created from your derived AMI will not have this problem:

$ sudo perl -i -pe 's/disable_root: 1/disable_root: 0/' /etc/cloud/cloud.cfg
$ sudo perl -i -pe 's/#PermitRootLogin .*/PermitRootLogin without-password/' /etc/ssh/sshd_config
$ sudo perl -i -pe 's/.*(ssh-rsa .*)/\1/' /root/.ssh/authorized_keys
$ sudo /etc/init.d/sshd reload # optional command

  1. Ask the EC2 node configuration scripts installed on the AMI to not disable root login.
  2. Ask sshd daemon to allow password-less (but public-key based) root logins.
  3. Strip the 'command=...' prefix from root user's authorized_keys.
  4. Reload the sshd config so the sshd_config changes take effect.

Commands 3 and 4 are only really necessary if you want to log into your current instance (created from the Amazon Linux AMI) as root. The first two commands are sufficient to allow SSH-based root login into instances of your derived AMI.
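Before bundling the derived AMI, it is worth verifying that the edits actually took effect. Here is a sketch of such a sanity check; the helpers take the file to inspect as an argument, and the three paths are the same ones used in the four commands above (run as root so /root/.ssh/authorized_keys is readable):

```shell
#!/bin/sh
# Sanity checks for the root-login fixes described above.

root_enabled()      { grep -q 'disable_root: 0' "$1"; }
root_login_ok()     { grep -Eq '^PermitRootLogin[[:space:]]+without-password' "$1"; }
no_forced_command() { ! grep -q '^command=' "$1"; }

root_enabled      /etc/cloud/cloud.cfg        || echo 'cloud.cfg still disables root'
root_login_ok     /etc/ssh/sshd_config       || echo 'sshd_config still blocks root login'
no_forced_command /root/.ssh/authorized_keys || echo 'authorized_keys still has command= prefix'
```

If any of the warnings print, re-run the corresponding command from the list above before creating the AMI.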

How to use MTP on Google's Galaxy Nexus in Linux

Short answer: You don't have to use MTP; you can use PTP.

By now it is no secret that the Galaxy Nexus does not mount as USB mass storage, owing to some wise technical decisions made by the Nexus developers. There are plenty of posts floating around on how to install software in Linux to enable MTP access so that you can reach the phone's files from Linux.

None of those worked for me, partly because I am running Linux Mint 10, and the libmtp that ships with this version is outdated. I could do some hacking to update my version of libmtp and hopefully get it to work. But, as much as I love Linux, and OSS in general, I hate to give out instructions that require compiling, editing config files, etc.

So the simplest solution that worked for me was to use PTP instead of MTP. Choosing this option causes my Nautilus file explorer to immediately identify the phone as a photo source. And voila, you can now open a file explorer and browse, add, and remove files on the phone.

Savor the screenshots to get an idea how easy it is to use PTP to access the files as compared to compiling code and what-not.




How to play a long MP3 on Android alarm

I recently bought a Samsung Galaxy Nexus, Google's flagship Android phone, sporting the latest Ice Cream Sandwich. Suffice it to say that I am simply in love with this device.

I wanted to play recordings of Japji Sahib and Rehraas Sahib every morning and evening, respectively. These are about 22 minutes long each, and they should be played exactly once, and the player should stop after the play.

If your MP3 file is shorter than 10 minutes, the standard 'Clock' app can play it without issues. You just have to create a folder named "alarm" on the storage card and place your MP3 files there, and they'll magically appear in the "ringtone" choices when creating an alarm.
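If you'd rather not shuffle files around on the phone itself, the folder can be created and populated over USB. A sketch, assuming adb (from the Android platform-tools) is installed and USB debugging is enabled; the file name is hypothetical, and storage paths vary by device, so adjust as needed:

```shell
# Push an MP3 into the folder the Clock app scans for alarm ringtones.
ALARM_DIR=/sdcard/alarm     # folder name as described above
MP3=JapjiSahib.mp3          # hypothetical file name; use your own

if command -v adb >/dev/null; then
    adb shell mkdir -p "$ALARM_DIR"
    adb push "$MP3" "$ALARM_DIR/"
    adb shell ls "$ALARM_DIR"   # confirm the file landed on the device
else
    echo "adb not found; install Android platform-tools first"
fi
```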

There's a lot of material already floating around the 'net for doing this: www.google.com/search?q=android+alarm+MP3

There are a few problems with everything I've seen so far:
  • The default Clock's alarm runs for only 10 minutes.
This is not ideal if you have a song/track longer than 10 minutes. And if it is shorter than 10 minutes, I think the song will be looped.
  • Any other alarm/clock app runs the MP3 in a loop
This is not ideal if you want to run your MP3 only once and then stop.
  • 'AlarmDroid' lets you stop the alarm (mp3 song) after a set number of minutes (Advanced > Ringer Duration).
The problem with AlarmDroid is that it hogs the screen, and won't let you use any other app until either you dismiss the alarm, or the alarm times out after the duration you've set. Yet another minor issue is that since the granularity of auto-dismiss is in minutes, the song loops for a few seconds after the first run, before the auto-dismiss actually stops the alarm. I used AlarmDroid until I found the perfect solution.

None of the alarm apps allowed me to do what I wanted. So I started looking for some customizable way of launching the Music app on my own, and finally I found AppAlarm.

AppAlarm lets you launch any app on an alarm. So I could create a playlist containing just one song, and then use AppAlarm to play that playlist using the Music app. By default the Music app will not loop the songs, but if you had previously configured it with the 'Repeat All' or 'Repeat One' setting, it will loop (repeat) the alarm song too.

So here's how to get it done. In the Music app, create a playlist with just the one song that you want played when the alarm goes off. Launch AppAlarm and 'Add New Alarm'; 'Enable' the alarm and tap 'App to Launch'; choose 'Create Shortcut', choose 'Music Playlist', and select the playlist you just created. On the resulting dialog box, click 'Select App' and scroll down to select the 'Music' app from the list. Choose the time, repeat schedule, and other settings to your taste, and you are done.