Saturday, August 22, 2009

My Android Phone and Switch From Sprint to T-Mobile

I really like my MyTouch. I knew I was going to get an Android phone and hoped that one would come out for Sprint. I kept waiting and waiting. I have been with Sprint for about ten years. Then I read an article about how the next Android phone would probably be for T-Mobile. The article quoted Sprint relics complaining about Google's vision and blaming their failure to produce an Android phone on the supposedly poor quality of Google's OS. I became angry and decided to go with T-Mobile.

For one thing, T-Mobile is German, which I love. Going with a non-US carrier was a plus. Also a plus was that they have been the most Google-friendly US carrier, and some suggested that they deserved some patronage because of that. I agreed. Once I switched, however, it became clear that some of the best reasons for switching were simply T-Mobile's superior online offerings. They have free online backup, a much better and more interactive website than Sprint's, and a user forum. These things lend themselves to a much more user-friendly and Googly environment than Sprint's. Sprint's website is crap. There is login after login that must be navigated in order to use picture mail. They charge for online backup. If your phone is screwed, so is your data. No SIM card for Sprint phones. Sorry! Want to get your personal data out of a Sprint phone? Go to the store.

I read that Sprint wanted to customize (read: ruin) Android and make it into some overly commercialized mess, just like their other interfaces. T-Mobile, on the other hand, has released honest-to-goodness Google phones, phones that are pleasant to use and flexible and customizable to a great degree.

So, that's my switch. Now the phone.

I had to mess with my phone a lot to learn its limitations. The first thing I did was power it on and download app after app from the Android Market. Then my phone became slow. I became depressed! So I deleted all of the apps. Then I reinstalled a bunch.

After repeating these steps many times in a very unscientific exercise, I finally started getting the hang of some things. Here are some bullet points:
  • GPS drains the battery.
  • Some apps stay resident even though you wouldn't think so. Install Task Manager, look at Process View, and see which ones aren't terminating when you close the interface.
  • There isn't a whole lot of RAM to play with, so a couple of unwanted resident apps using 15MB of RAM each can really slow things down.
App-wise:
The main things I learned are that it's possible to install a whole lot of apps on the MyTouch and still have a speedy, responsive interface. You just need to pay attention to which ones you install.

Task Manager also monitors the CPU of running processes. Some things, like the process monitors themselves, use a lot of CPU as well as a fair amount of RAM. I thought I could leave them running as services but decided that it wasn't worth it.

When I was running tests, comparing the interface speed while an app was installed with the speed after uninstalling it, I realized that some apps stay in memory even after you uninstall them.

Sorry for the scattered ideas. I'm not publishing this to a magazine or anything.

Wednesday, July 29, 2009

Starting Fresh With New Firefox Profile

It made quite a difference for me. I had been using the same profile and preferences from about Firefox 2 all the way through 3.5.

Also:
  • Personas for Firefox messed up my forward and backward buttons.
  • Enabling a master password for Firefox became a hassle when using TwitterFox. I had to enter my master password for every window that opened at startup.
  • Something about my profile prevented some pages on t-mobile.com from redirecting properly. Firefox kept telling me that it was redirecting in a way that would never resolve. This was after I disabled all extensions.


Saturday, June 27, 2009

Status

  1. Slowly getting through "Learning Perl".
  2. Slowly getting through the Catalyst tutorial on CPAN.
  3. Still getting the hang of Fink and Macports.
  4. Figuring out how I'm going to upgrade from Tiger to Snow Leopard in September.
  5. Figuring out what to do next about my seemingly dead Lacie external hard drive.
  6. Still procrastinating about putting my finances into Gnucash.
  7. Still procrastinating about updating my Flickr.
It looks like using Gnucash in conjunction with Dropbox is a great way to keep your finances safe and available from remote locations.

Tuesday, January 27, 2009

Tip: avoiding net overhead using git over sshfs | KernelTrap

Lovely idea!

The reason I was looking for this is I was getting:

[ajn26@ajn26-fedora-01 cbl]$ git init
error: could not commit config file /home/ajn26/mnt/www5/centers/csj/cbl/.git/config
error: could not commit config file /home/ajn26/mnt/www5/centers/csj/cbl/.git/config
error: could not commit config file /home/ajn26/mnt/www5/centers/csj/cbl/.git/config
error: could not commit config file /home/ajn26/mnt/www5/centers/csj/cbl/.git/config
Reinitialized existing Git repository in /home/ajn26/mnt/www5/centers/csj/cbl/.git/
Hopefully the "-o workaround=rename" part will fix that error.
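If it helps anyone else, the remount with that workaround would look something like this; the host and mount point below are placeholders, not my actual setup:

```shell
# Unmount the existing sshfs mount, then remount with the rename
# workaround so git's rename-over-existing-file updates to .git/config
# can succeed over SFTP.
fusermount -u ~/mnt/www5
sshfs ajn26@www5.example.edu: ~/mnt/www5 -o workaround=rename -o reconnect
```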

Monday, November 17, 2008

Cisco VPN Client for Linux!!!!

Hallelujah!

http://projects.tuxx-home.at/?id=cisco_vpn_client

I installed an Ubuntu 8.10 VMWare appliance at home on my Mac Pro and needed to connect to work. I wanted my VM to connect rather than OS X so I looked for a VPN client. Cisco didn't permit me to use their client so I found the one I'm using now at the URL above. My VMWare appliance is 32-bit so I tried installing an older, 32-bit Cisco client from that site, couldn't make it work, installed a bunch of patches, etc. Finally, I just tried installing the latest 64-bit version and it worked! I'm very pleased. Time to stay up even later working.

Thanks, not-Cisco!

Friday, July 18, 2008

SSHFS from ext3 to NTFS woes

This is a follow-up to http://blog.alijnabavi.info/2008/07/difficult-experience-trying-to-create.html.

I thought I had eliminated all of my problems, yet more remained: bytes of some of the files were being zeroed out. It took me a while to figure this out, unfortunately. Finally, though, here is the config that seems to work (in my .bash_profile):

echo 'me_password' | sshfs me_username@www.me.domain.com: ~/mnt/www.me.domain.com -o password_stdin -o reconnect -o allow_root -o workaround=all -o ro -o umask=222 -o use_ino


Some of that is probably unnecessary and uninteresting, but what solved my problem was the "-o use_ino".

I didn't want to blindly plug something in without understanding what it was doing . . . HOWEVER :-) I am not ready to learn about use_ino apparently because the results page that Google showed to me when I searched for it looked entirely unappealing, especially at 00:35 when I have to get up at 06:00. I'm just happy that my files are intact.
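One afterthought, since that line echoes the password in plain text from my .bash_profile: a key-based variant (assuming an SSH key is already set up for that host) would avoid that, with the rest of the options unchanged:

```shell
# Same mount as above, but authenticating with an SSH public key
# instead of piping the password on stdin.
sshfs me_username@www.me.domain.com: ~/mnt/www.me.domain.com \
    -o reconnect -o allow_root -o workaround=all \
    -o ro -o umask=222 -o use_ino
```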

Friday, July 11, 2008

Difficult experience trying to create decent Web environment

Goals:
  1. Put service configurations (Apache, JRun, etc.) from production Web servers into Subversion repositories.
  2. Duplicate the Subversion repositories at a remote location for data security purposes.
These are pretty modest goals, as I'm sure anyone would agree. However, there is a litany of obstacles.

Circumstances:
  1. I don't have shell access to the Web server OS.
  2. The OS is Windows 2000.
  3. I can RDC into the boxes but the only thing I can do is shut down or restart the machine.
  4. I have SFTP access.
  5. The SFTP server is VShell and appears to behave significantly differently from OpenSSH.
  6. The "server administrators" are unwilling to install a Subversion client on the remote machines. (Of course, since I can't access the OS anyway then I couldn't operate a SVN client even if they were to deign to install one.)
I decided that a solution would be to mirror the remote filesystem on my local machine, in a working copy, and then commit the changes to a Subversion repository on my machine. I would then regularly dump the repository to a remote server via SSH.
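The dump-and-ship step of that plan is basically a one-liner; something along these lines, with the repository path and backup host made up for illustration:

```shell
# Dump the local Subversion repository, compress it, and store it
# on a remote machine over SSH for off-site safekeeping.
svnadmin dump ~/repos/webconfig | gzip | \
    ssh backup@remote.example.com 'cat > webconfig-$(date +%F).svndump.gz'
```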

That turned out to be tricky. I used lftp (which is awesome) for mirroring the remote filesystems to my local machine, though I felt the filtering/matching mechanisms were not flexible enough for my purposes. I ended up using lftp because rsync over SSH to a Windows box running VanDyke's VShell SFTP server does not work: the "server engineers" disabled SSH on VShell. lftp operates over SFTP, as well as a zillion other protocols, so it worked perfectly.
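For anyone curious, the lftp mirror amounted to something like this (host and paths are hypothetical):

```shell
# Mirror the remote docroot into the local working copy over SFTP,
# deleting local files that vanished remotely but skipping .svn metadata.
lftp -u webuser sftp://www1.example.com -e \
    'mirror --delete --exclude-glob .svn* /inetpub/wwwroot ~/mirrors/www1; quit'
```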

That happened over the course of several months and was reasonably complete a few months ago. A couple weeks ago, I ended up using fuse and sshfs to mount all of the remote filesystems locally so I could use bash and GNU utilities on them. It dawned on me that this might allow me to use my local svn on the locally mounted remote Web servers!!! None of that mirroring crap!

Oh, were it so simple. I discovered that there is a problem with renaming files with svn over sshfs, and I just could not get it to work properly, even with the "-o workaround=all" option to sshfs. I thought, "Okay, whatever, I can try git." I had recently started using git for some local development and thought that perhaps the way it uses the filesystem would not be affected by the POSIX-versus-SFTP renaming problem. WELL . . . it was affected all right. Of course. That's when I realized that since the filesystems were locally mounted, I might be able to try rsync again!

rsync worked almost perfectly right away. The only thing was I discovered some corruption in the form of incomplete files. For example, I opened a file over sshfs with vi and the end of the file was truncated and filled with garbage characters. I opened the same file with jEdit and saw the same thing. I opened the same file with jEdit *over sftp rather than sshfs* and there was no corruption. I tried different sshfs and fuse options but it was when I disabled caching, "-o cache=no" that the corruption disappeared.
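For reference, the corruption-free mount ended up looking like this (placeholder host and paths):

```shell
# Mount with sshfs's caching disabled; with cache=no the truncated,
# garbage-filled reads described above stopped happening.
sshfs webuser@www1.example.com: ~/mnt/www1 -o cache=no -o workaround=all
```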

I just used some rsync filters to protect the .svn directories from deletion and the mirroring went fine. Previously, with lftp, I had to do some pretty convoluted things in bash to ensure that the local .svn directories were not deleted during mirroring.

So, it finally works, despite the best efforts of the "server engineers" and "server administrators" and "IT professionals".

I'm a Red Hat Certified Engineer and as of July 12, 2008 I'm still looking for a professional environment in which to work. Feel free to offer me a position with your organization that embraces best practices! :-)