Wednesday, December 4, 2013

When it comes to MTP, Dolphin is a bitch! Konqueror to the rescue

KDE is a great desktop environment and I have used it every day since I was introduced to it in 2001. It's free, it's open source, it's highly configurable and it's really fun to use. Since version 4.10, KDE has built-in support for the MTP protocol, which most smartphones on the market nowadays use for connecting to a PC. Mass Storage has become a thing of the past, with MTP offering easier access, built-in support in Windows and Linux, and transfer speeds comparable to mass storage mode.

KDE on Linux offers MTP support through its robust, feature-rich and pluggable I/O layer, KIO. It uses a kio-slave called kio-mtp, which is now pretty standard in almost all current distributions. kio-mtp is supposed to provide complete and seamless integration of the MTP protocol in KDE, whatever file management tool you use. Dolphin is the default file manager these days on all distributions and it's a great piece of software for the most part.
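Because this support lives in KIO rather than in any one application, you can poke at an attached device straight through the mtp:/ URL from any KIO-aware tool. A minimal sketch, assuming your install ships the kioclient utility and kio-mtp is present (the device and folder names are placeholders):

# List the devices exposed by the MTP kio-slave
kioclient ls mtp:/

# Copy a file onto the device through KIO
kioclient copy ~/Music/song.mp3 "mtp:/MyPhone/Internal storage/Music/"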

Until it comes to MTP, that is. Unfortunately, in my experience, the MTP implementation in Dolphin is broken and has a multitude of issues. I mostly use distributions based on Ubuntu (Kubuntu, Linux Mint etc.) so this holds true only for those; I have no idea about distros like Fedora and openSUSE. Some of the issues I have faced are:

  • Not being able to connect at all.
  • Connecting, but waiting forever for the contents of the device to show up.
  • Getting stuck for a long time while copying files, followed by the error 'MTP Protocol has died unexpectedly'.
  • and some more...
If you go searching, you will find a lot of posts with people whining about these issues, and the suggestions given to them mostly include using a different version of libmtp or using a different access driver altogether. But it seems there is no luck if you want to use the plain-jane KIO-based integration.

Then today, I accidentally stumbled upon Konqueror and, just out of curiosity, switched it to the File Management profile. I tried accessing my MTP devices and voila! Everything worked as it should. I am able to copy to and from the device, both single files/folders as well as a bunch of them.

So I think the issue lies in the way Dolphin accesses MTP devices and not in the KIO layer itself. I have marked this as my solution for the time being. Just create a shortcut for Konqueror with the command line:

konqueror ~

You can replace ~ with any other path. Use this shortcut whenever you want to access MTP devices. If you would like to set Konqueror as your default file manager, you can do so from System Settings > Default Applications > File Manager and choosing Konqueror in the list.
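If you prefer a proper launcher over a manual shortcut, a small desktop entry does the same job. A minimal sketch, assuming a standard XDG setup (the file name and entry name are my own choices):

# Save as ~/.local/share/applications/konqueror-fm.desktop
[Desktop Entry]
Type=Application
Name=Konqueror File Manager
Exec=konqueror ~
Icon=konqueror

Once saved, the entry shows up in your application launcher like any other program.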

One last thing to keep in mind for Android device owners: the KIO-based MTP implementation is a bit quirky about USB Debugging, so turn that option off on your device when you are connecting for file management tasks.

Let me know in the comments whether these worked out for you or not.

Cheers!

Sunday, November 24, 2013

Solution to Linux NTFS performance woes: Bad performance / 100% CPU usage when using VirtualBox / VMware, and in general

Hey everybody!

On my dual-boot system, apart from the system partition for Windows and the root and swap partitions for Linux, I have one large partition of about 400 GB dedicated to storing my data, which includes my shared Thunderbird email data folders, my documents, my videos, my music and what not. For obvious reasons, this partition is formatted with the NTFS file system, for easy data sharing between Windows and Linux (BTW, if you really wanna know, I use Windows 8.1 Pro and Kubuntu 13.10, Kubuntu being my primary OS as of now :-) ). I was quite happy with my setup except for two very annoying problems (which used to happen in Linux Mint too, which I was using previously):

1. The Thunderbird installation used to lag too much on Linux, and sometimes on Windows too, while starting up, reading multiple mails or accessing folders.

2. While using any VirtualBox or VMware VM under Linux, I was getting pathetic performance and my host system used to hang a lot while reading or writing files inside the VMs. In the system monitor I could see a lot of processor time going to the process mount.ntfs. Also, in VirtualBox, whenever I tried merging a snapshot with the base image, the process used to hang and never complete.

High CPU usage by mount.ntfs
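If you want to confirm the same culprit on your own machine, a quick look from a terminal while a VM is grinding away is enough. A minimal sketch (on some setups the process shows up as ntfs-3g instead of mount.ntfs):

# Take one snapshot of running processes and look for the NTFS driver
top -b -n 1 | grep -E 'mount\.ntfs|ntfs-3g'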

The Thunderbird problem was not that severe, so my attention was solely on the VM problem, and for some time I thought this might be an issue with VMware and VirtualBox. But even after upgrading both pieces of software to their newest versions multiple times, the problem never went away. And besides, on giving it a well-deserved thought, I realized that the process mount.ntfs is not specific to VirtualBox or VMware.

So basically, this seemed to me like an issue with the file system driver itself, namely NTFS-3G. I searched the net a lot for a solution but didn't find any; there were only the same questions that I was also asking. Frustrated, I decided to look into the official specifications and FAQ section of the driver developer's website (www.tuxera.com) and voila! I found the answers to all my issues with the NTFS file system in Linux. Below are the exact steps with which you too can get great performance on NTFS drives under Linux:

1. Keep it Uncompressed! Period.

NTFS is a closed-source file system and the NTFS-3G driver was created using some very sophisticated reverse-engineering techniques. All the code revisions over the past few years have made it speedy and largely bug-free, but there are still some grey areas where it cannot compete with the native driver as far as performance is concerned (and it is also not expected to; remember, NTFS is not a preferred FS under Linux, it's there for compatibility with the Windows world).

Transparent compression is one such feature. Under Windows, you can compress a particular folder or even a whole drive using the native file system compression feature and it works great. Files are compressed and decompressed on the fly as you use them and you don't notice a thing; Microsoft has also done performance optimizations to make it work seamlessly. But when working in Linux, all is not so hunky-dory. While compressing and decompressing files, the NTFS-3G driver takes way too much CPU power, and being a file system driver with elevated privileges, it hogs system resources like a monster, uninterrupted for the most part. So the most basic thing you can do to get about 10 times more performance is to decompress the drives that you share between Windows and Linux. To do that, just right-click the drive in Windows Explorer, select Properties, uncheck the option “Compress drive to save disk space” and click Apply. In the next dialog that appears, choose “Apply changes to :\, subfolders and files” and click OK.

Remove drive NTFS compression under Windows

If you have lots of files on the drive, this process can take some time, so have some tea and snacks. This procedure will decompress the whole drive. If you don't want that, then at least decompress the performance-critical folders on your drive, like the ones where you keep your VM virtual hard disks (VDI, VMDK or VHD files). You can do that by right-clicking the folder, clicking the Advanced... button and unchecking the checkbox that says “Compress contents to save disk space”. This will improve performance a lot and you will notice the difference immediately, as soon as you boot into Linux.


Remove NTFS folder compression under Windows
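If you would rather script the decompression than click through Explorer, Windows ships a compact.exe command-line tool that does the same job. A minimal sketch, run from an elevated Command Prompt (D:\VMs is just an example path):

:: /U = uncompress, /S = recurse into the folder, /I = continue past errors, /Q = quiet
compact /U /S:D:\VMs /I /Q *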

2. Enable Big Writes Mode and Disable Last-Access Timestamp Updates:

The NTFS-3G driver supports a flag called big_writes, which you can set while configuring your file system in /etc/fstab or while mounting with the mount command. What this essentially does is instruct the driver to write data to disk in larger chunks instead of on every single write instruction it receives. This helps a lot with throughput while writing/copying/moving large files and is in general good for small files too.

Similarly, NTFS has a feature of recording the last access time of a file, and this is done every time the file is accessed, which adds to the total time it takes to read from or write to the file. This can be safely turned off (with the noatime option) without causing any harm to the data.

To configure these options, below are the settings I use in my /etc/fstab file. You can use the same flags as in the screenshot; other details will vary from system to system depending on how many drives you have and how you configure them. Basically, the highlighted items are the ones you wanna change in your config.

Configure big_writes and noatime mode in /etc/fstab
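For reference, here is a minimal sketch of such an entry; the UUID, mount point and ownership options are placeholders that you would adapt to your own setup:

# /etc/fstab: data partition shared with Windows
UUID=0123456789ABCDEF  /media/data  ntfs-3g  defaults,big_writes,noatime,uid=1000,gid=1000  0  0

After saving the file, unmount the partition and run sudo mount -a (or simply reboot) for the new options to take effect.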

3. Disable mlocate/locate indexing of NTFS drives

mlocate (or locate) is a standard program under Linux which can be used to quickly search the file system for a file or directory. It uses a high-performance index of the file system, generated and updated every day by the updatedb command. Usually, this is a scheduled activity by default on most systems.

The updatedb utility has some issues with the NTFS file system: even if a single file or folder changes on the file system, it considers all the files and folders changed and re-indexes everything on the drive. This obviously takes CPU resources, and if the drive is compressed, the situation becomes even more problematic because of the high CPU utilization of the compression/decompression routines. This doesn't seem to happen as much nowadays, probably due to updated versions of these two commands, but still, changing one little configuration option can give you much better results.

The trick is to disable index generation on NTFS file systems altogether. Indexing is usually not required on NTFS, and you can always search for items with your GUI file manager if you need to. To disable it, edit the file /etc/updatedb.conf and add the entries ntfs and ntfs-3g to the "PRUNEFS=" line, like in the example below. I am not sure whether ntfs-3g is needed or not, but there is no harm in adding it, so I add it nevertheless.
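A minimal sketch of the relevant line; the exact set of file systems already listed will differ from distro to distro, the point is simply to append the two NTFS entries at the end:

# /etc/updatedb.conf: file systems that updatedb should skip entirely
PRUNEFS="NFS nfs nfs4 rpc_pipefs afs binfmt_misc proc smbfs autofs iso9660 ncpfs coda devpts ftpfs devfs mfs shfs sysfs cifs lustre tmpfs usbfs udf ntfs ntfs-3g"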



After applying all these tricks, my system has become so fast and responsive that I can finally use it without a hitch as my production machine for all purposes. Try these out and let me know in the comments how they worked for you.


Cheers!

Tuesday, May 21, 2013

Truth About Internal Memory in Samsung Android Devices

Hey Everybody!

On the 15th of this month, I got my shiny new Samsung Galaxy Note II, GT-N7100, the international version, or to be precise, the Indian version, in the color of my choice, Titanium Grey. I moved to it from a 32 GB Galaxy S3, basically for the bigger screen, the much better battery, and the mighty S-Pen :) Needless to say, I have been on cloud nine since then :D

But as always, you don't get everything, and this phone is no exception. My biggest gripe is the 16 GB of memory touted by Samsung. It is not actually 16 GB. You only get 10.45 GB out of the box, and the rest is taken by:

  1. The calculation fiasco that virtually every company making storage devices on earth indulges in, where they count 1 GB as 1 billion bytes instead of 1073741824 bytes, and
  2. The memory eaten up by the Android OS and pre-installed software, basically your phone's ROM.

In addition, when you first boot your device, the initialization process also creates a few SQLite databases and other files in this memory, reducing it even further.
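To put rough numbers on point 1: 16 × 1,000,000,000 bytes divided by 1,073,741,824 bytes per binary GB comes to about 14.9 GB as the phone counts it, so the unit trick alone eats roughly 1.1 GB. The drop from there to the 10.45 GB you actually see, about 4.4 GB, is the ROM, the pre-installed software and those first-boot files.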

In newer Android devices, I think Samsung has stopped making separate physical partitions (or memory chips) for the ROM and the internal memory. Instead, what they do is divide the same physical memory chip into 2 or more logical partitions and then mount the partition with the "ROM" contents as read-only. If you are aware of the process of flashing a Samsung phone with ODIN, you might have often come across the term PIT file. The PIT file actually defines the partition layout. That's why it is said that during normal ROM flashing, the re-partition checkbox should not be checked; it can wreak havoc if not done properly.
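You can actually see this logical layout on a connected device. A minimal sketch using adb (USB Debugging must be enabled, and the exact partition names vary from model to model):

# List the block partitions the device's kernel sees
adb shell cat /proc/partitions

# Show what is mounted where, and which mounts are read-only (ro)
adb shell mount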

This layout gives them the flexibility that if the size of the ROM contents decreases or increases, they can simply re-partition the memory to adjust for it. This was a problem in previous devices, like the Galaxy S, which only got the Value Pack instead of an upgrade to Android 4.0 ICS, as its isolated ROM memory chip didn't have enough space. Last I checked, around 60-70 MB was free in /system on my Galaxy S i9003.

Samsung often doesn't release the 32/64 GB versions of its phones in countries like India. For example, in the Philippines, the 64 GB version of the Note 2 is readily available, but in India, only the 16 GB one is there. That's one of the reasons why people resort to rooting. Rooting actually provides a way to get around this limitation using a technique called Directory Binding, which I will talk about in my upcoming posts. But this is one thing that I don't like about you, Samsung! I hope you are listening!

Friday, May 3, 2013

Solution: Bluetooth not working after upgrading to Ubuntu 13.04 (Raring Ringtail)

If you are like me, there are good chances that you love Ubuntu as an OS, and there are even better chances that you have already upgraded to the latest and greatest flavor of it, version 13.04, code-named Raring Ringtail. Upgrading an OS is not the same as installing a fresh copy; we computer veterans all know that.


Upgrading almost always brings with it its fair share of problems. One such problem I recently had was non-working Bluetooth. In Ubuntu 13.04, the Bluetooth applet has been changed a little and it looks more sophisticated now. But somehow, on some computers, when you upgrade from 12.04 or 12.10, Bluetooth stops working. Whenever I tried to send any file to my Samsung Galaxy S3 smartphone, I was getting the following error:



Error : GDBus.Error.openobex.Error.Failed: Unable to request session

I researched a lot and couldn't find any proper solution for it. In the process, I came across a Launchpad bug entry: https://bugs.launchpad.net/ubuntu/+source/gnome-bluetooth/+bug/1148033. But here too, the bug is only listed; no official solution has been posted as of now. Going through the user comments, though, I found that a few people had suggested a workaround: use a piece of software called Blueman, a.k.a. Bluetooth Manager. It is available in the official repositories, so you can install it by simply issuing the following command:

sudo apt-get install blueman

I installed it and started it. Immediately, I saw another Bluetooth icon in my notification area, with options somewhat similar to the "official" icon. So I thought that was it and tried to send a file using this new icon's Send Files option. Turns out, it was able to pair with the phone but not able to send anything. It repeatedly got stuck and then reported that there was an error sending the file, without any more helpful message. BAM!

This was testing my patience now, and I cursed the Ubuntu team a little :) Then, out of the clear blue sky, I got an idea and tried the Send File option of the original Bluetooth icon. It worked!! But when I exited Blueman, the situation reverted to its previous state.

So basically, the solution was to keep Blueman running while using the original Bluetooth icon. It was less than perfect, as I now had two icons in my notification area, but it worked.
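If you want this workaround to survive reboots, the applet can be started automatically at login with a standard XDG autostart entry. A minimal sketch (the file name is my own choice, and your Blueman package may already install a similar one):

# Save as ~/.config/autostart/blueman-workaround.desktop
[Desktop Entry]
Type=Application
Name=Blueman Applet
Exec=blueman-applet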

But we tech guys do not stop until we get what we want, at least as far as computers are concerned. So I dug further and finally found an option to turn off the Blueman icon while still keeping it running. In the Blueman icon's menu, there is an option to access its Plugins. Use that, and disable the tray icon plugin. That will get rid of the extra icon. However, if you ever need to get the icon back, you will have to re-install the utility; I personally don't know how to re-enable it without that. If anybody has an idea, let me know in the comments.

Hope this will help some poor soul. :)

Happy Computing!