Showing posts with label computers. Show all posts

Friday, November 20, 2009

My notes, screenshots and first impressions on Google Chromium OS on VMware!

I was eagerly awaiting the release of Google's ChromeOS (Chromium OS). Google opened up the source at about 10:30 AM today, and I have it compiled on my Ubuntu 9.04 machine and working in my VMware Workstation. Phew! The following are my notes, screenshots and first impressions of the whole experience.

Update: A few corrections based on comments by Ethan.
  
Update: I have uploaded my VMware disk (.vmdk) here. It's about 350 MB tar-gzipped; the MD5 checksum is 8b158acfff42572dce632fdcb0707009. To use this vmdk, you first need to create a virtual machine and give the path to this vmdk file as its logical disk. Note that this is NOT a .vmx but a .vmdk, so you cannot open the file in VMware directly; you must create a virtual machine around it.
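For anyone unsure how to wire the .vmdk into a new virtual machine, a minimal .vmx along these lines should work. This is only a sketch; the file name, memory size and guest OS type below are my assumptions, not part of the upload:

```
# chromiumos.vmx -- minimal sketch; adjust names and sizes to taste
config.version = "8"
virtualHW.version = "7"
displayName = "Chromium OS"
guestOS = "otherlinux"
memsize = "1024"
ide0:0.present = "TRUE"
ide0:0.fileName = "chromiumos.vmdk"
ethernet0.present = "TRUE"
```

Place it next to the .vmdk and open the .vmx from VMware.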

ChromeOS Getting Started Documentation 
The documentation is pretty neat and things worked out of the box for me. I did not have to hack even a line of script. I started by watching the videos and reading the documentation here.

Setup
My compile environment was an Ubuntu 9.04 ACER Aspire netbook. I actually wanted to get ChromeOS running on that same netbook, but the documentation warned that the install process will nuke the entire hard drive, so I opted for creating a VMware disk image instead.

Building the Image
The whole process, from reading the initial documentation to getting the VMware image up, took me about 5 hours, and most of that was spent creating the chroot environment and compiling the packages and the kernel. After that, building the image and creating the VMware disk was pretty quick.

Running ChromeOS on VMware

1. Bootup Time
Of course, running it in VMware meant that I could not test its claimed bootup speed! But the bootup definitely 'felt' faster relative to my other OS bootups in VMware. ChromeOS creates a file called /home/chronos/chrome_startup.log, which showed a bootup time of 47 seconds. I believe that is good for VMware.

2. Login Prompt

The login prompt is plain and simple blue, with two boxes for username and password. I noticed two things here:
1. The username/password can be your Gmail credentials. That means your Google account could act as a profile store. Does this mean someone can use a ChromeOS device only when online, or only with a Google account? I am not sure as of now.
2. It also accepts the username/password that I created while building the code. I think this option would be disabled for regular users.

3. Login Using Google Credentials

To begin with, I logged in with my Google credentials and was presented with the following error page saying that the security certificate for google.com was revoked, even though my login had succeeded.



This seems like a bug to me, but I will have to do some more trials before concluding that it is a real one.

4. Login Using Regular Credentials
I tried logging in with the testuser account that I had created earlier. That worked fine, and I was finally presented with a functioning Chrome browser.

I could log in to gmail.com, reader.google.com, docs.google.com etc. with my regular Gmail credentials and operate my account as usual. No problems. Things even seemed a tad faster in my slow VMware.

5. Some UI Features

From the above screenshot, it's clear that all the user sees on logging in is the Chrome browser interface. There is no desktop and there are no icons. The only icons I could spot are four on the top right: time, an inactive icon, networks and a drop-down menu. A single Chrome icon sits at the top left; clicking it takes you to Google Shortlinks, which I believe is Google's replacement for desktop icons, with links to Google products. Smell a monopoly in the making?
Update: Ethan points out that it will be far from a monopoly because anything web-based will be supported. I agree, but I would like to wait and watch, and would be happy to be wrong.



6. Task Manager and Resource Stats

Clicking the drop-down menu at the top gives an option to open the Task Manager, which looks as below. This is pretty much the standard task manager, except that we see far fewer tasks in it. It also hints at the multiprocess nature of the Chrome browser.


Clicking on 'Stats for nerds' shows an additional memory-usage view, equivalent to typing about:memory in the browser tab. I don't understand everything in the stats yet but will dig in later. For example, I don't understand what Proportional Memory is.



A minor point: the note in the above figure states that other browsers like IE and Firefox will also be shown here if they are running. This could simply be because the Chrome browser code base used here is the same one used for Chrome on desktops. Or maybe they really intend to support that in the future?

I couldn't navigate any further and could not find additional shortcuts or other interesting options and settings. I will need to dig more into the documentation to see if there are more interesting peeks here and there.

7. Browsing Files

The file browser is contained in the Chrome browser itself. Typing file:/// in the address bar shows the root file system, rendered the way a remote directory listing would be. Not the best way to navigate a local file system, I guess.



8. Shell and Command-Line Tools

To get to the command line, one has to press Ctrl+Alt+T. Frankly, I could not figure out how to navigate back to the GUI or to other open command lines, and I had to keep pressing Ctrl+D on the command line to get back to the GUI.
Update: Ethan points out that typing exit takes you back to the GUI. It is essentially equivalent to pressing Ctrl+D.



The most irritating aspect for me was that standard utilities like ifconfig, route etc. appeared to be missing.
Update: I missed this completely. You can access all of these commands by using sudo, as Ethan pointed out correctly. Thanks for the correction.

I could use vi, python and the standard shell built-in commands, as far as I tried. I also found apt-get and dpkg installed, but apt-get would not let me install any packages (the lock files were read-only). I am not sure if this is intentional or a bug.

That's all I could get my hands on today, but this is just the beginning and the exploration will continue. I will be digging into the documentation and source code and will keep reporting nuggets of information as I discover them.

Conclusion
ChromeOS is exciting and will get even more exciting in the coming months and years. I remember my professor telling us in class that systems should be like 'toasters', i.e. it must not be necessary to read a manual to operate them. ChromeOS is definitely a step in that direction. Also, the lean philosophy adopted by ChromeOS should reduce the burden on end users as far as managing and securing systems is concerned. Of course, there will be new challenges, but at least ChromeOS reduces the surface area of problems.

I think Google needs to watch out and not make ChromeOS a Google-centric product. That may not be well received by consumers already struggling to break free of existing monopolies.

Wednesday, October 28, 2009

Using Mendeley effectively on multiple systems using an external storage drive

If you do not already know, Mendeley is soon to become the de facto standard for storing, indexing, searching, ordering and sharing all your academic research papers. It's a free download and easy to set up. Check the website to get a feel for it. I will not describe Mendeley here but will get straight to my point.

WARNINGS

The solution presented below worked for me. It may NOT work for you. There is destruction of state involved, so please go through the complete post and decide for yourself. I am making certain assumptions about this solution, so make sure your assumptions match mine before trying it out.

Assumptions:
• My environment is Mendeley Desktop v0.9.4.1 running on Ubuntu 9.04.
• I assume good familiarity with Linux, especially Ubuntu.
• I assume you have working Mendeley setups on both systems. The solution will still work if you don't, but not all steps may apply.
• I assume that you have access to the original data that you indexed with Mendeley.
• The solution may break with future releases of Mendeley, especially if they change some of the document paths.
• This solution may not port easily to Windows/Mac, though I believe the line of reasoning should still apply.
• I have used Mendeley possibly since its first release and am quite familiar with its interface, settings etc.

My Problem

I have two Ubuntu 9.04 systems (home and work) that I use for my research, and I spend equal time on both of them. So I end up storing a lot of papers (my total collection is about 5000) on both. Once I discovered Mendeley (about 5-6 months back), I would index documents on each system separately and then sync the bibliographies (and corresponding metadata like notes) with Mendeley Web, the online component of the Mendeley system. Though this gave me access to the full bibliography on both systems, I could not access the corresponding documents on both, i.e. I could only open documents on the system on which they were indexed.

Mendeley currently gives an online account with 500 MB of storage and allows storing documents in it. My collection is somewhere around 5 GB, and it's not worth syncing that much anyway, especially since some of it is of a proprietary nature. But I desperately needed a solution, since I am so used to Mendeley now that I need it whenever I am doing my work.

So this is what I did:

Initial Setup
• I got a 120 GB external hard drive and formatted it with ext4 (the filesystem doesn't matter).
• Then I set the properties of the external drive to always mount as /media/extstor2. This ensures we always have a constant path prefix whenever the drive is attached. This can easily be done by opening the drive's Properties, selecting the Volume tab and fixing the mount point settings (works on GNOME).
• Created two folders called db and papers on the new drive:
mkdir /media/extstor2/db
mkdir /media/extstor2/papers
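If you prefer, the fixed mount point can also be pinned with an /etc/fstab entry instead of the GNOME Volume dialog. This is a sketch; the UUID below is a placeholder, so substitute the real one from sudo blkid:

```
# /etc/fstab -- pin the external drive to a constant path
# (UUID is a placeholder; get the real one with: sudo blkid)
UUID=0000-0000  /media/extstor2  ext4  defaults,noauto,user  0  2
```

Either way, the point is the same: /media/extstor2 must be identical on both machines.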
On my home system
• I first reset my complete database. WARNING: This will completely destroy the database (i.e. your bibliography and notes, but NOT your documents). I hate this step, but it is necessary because Mendeley stores absolute paths to all its documents (you can dig into its sqlite3 database and see for yourself), so if you just move the database to a new folder, all references to documents get messed up. Even Mendeley's repair option doesn't fix this. Relocatable paths would be a good feature for them to add soon.
mendeleydesktop --reset
• After this I logged into my online account and deleted the entire collection from there. Note that if you have any notes attached to documents, this is the time to save them; there is no easy way to do that except cut and paste into some text editor. This step is essential because I think Mendeley has a bug where, if it synchronizes with the online account after you have created a new local database, things get extremely messy and it crashes. Talking from experience here.
• Now come the tricks:
cd /media/extstor2/db
mkdir Mendeley\ Ltd.
cd ~/.local/share/data
rm -rf Mendeley\ Ltd.
ln -s /media/extstor2/db/Mendeley\ Ltd./ Mendeley\ Ltd.
• What I am doing above is essentially repointing the database location to the external drive. Note that this also means you will always need the external hard drive to use Mendeley.
• Opened up Mendeley again. This time it should start with nothing in it and offer to log you in to your online account. DO NOT LOG IN YET.
• Opened Tools -> Options and clicked on File Organizer.
• Enabled the 'Organize My Files' tab and set the path to /media/extstor2/papers.
• Enabled other options as desired.
• Now logged in to my online account and let it sync. Nothing should sync, as nothing exists yet, but it is good to let the desktop handshake with the web account.
• I then added all the folders where I keep my collection. Luckily I had not deleted them.
• After this I let Mendeley index my complete collection (this takes time proportional to your collection size and system speed). It took about 5-6 hours.
• Once done, I copied all the saved notes back to the corresponding papers. Had to do it manually :(.
• Then I synchronized again with the online account.
At this point we have a working database stored on the external hard drive. Now take the drive to your other system and proceed as follows:

On the work system
• Connected the drive to the system and mounted it as /media/extstor2 to begin with.
• Then set the properties of the external drive to always mount as /media/extstor2, exactly as on the home system, so the path prefix stays constant.
• Reset the current database:
mendeleydesktop --reset
• Tricks again:
cd ~/.local/share/data
rm -rf Mendeley\ Ltd.
ln -s /media/extstor2/db/Mendeley\ Ltd./ Mendeley\ Ltd.
• Opened up Mendeley.
• Opened Tools -> Options and clicked on File Organizer.
• Enabled the 'Organize My Files' tab and set the path to /media/extstor2/papers.
• Enabled other options as desired.
• Now logged in to my online account and let it sync. I saw everything exactly the same way as on my home system.
• Added the folders I wanted watched on my work system.

That's it! Problem solved. Enjoy, and let me know if this solution eases somebody else's pain.

Tuesday, October 13, 2009

A possible Twitter Worm or Scam!

I got 4 email messages today saying that 4 people (whom I do not know) are following me on Twitter. This looks to me like a possible Twitter scam, and possibly even a new worm.

These are my observations for now:
1. All four accounts belong to extremely good-looking girls.
2. All seem to be from Mumbai, India.
3. All have the same bio line, which says "I am smart and simple girl, wanting to make some good friends".
4. All have about 800 followers and are following about 400 people.
5. All have almost the same tweets, a combination of marketing website links and some mundane tweets, albeit in different order.
6. All accounts were created on Oct 6th.
7. The tweets are posted using a combination of API and Web, with lots of them being from the web. The API posts allude to a script doing the posting.

Questions I have:
1. How did they get so many people to follow their accounts in such a short time? It seems to me like some bug is being exploited. Or maybe people really fell into the beauty trap?

If anyone is interested, these are the four Twitter accounts:
• http://twitter.com/mariya_gonzales
• http://twitter.com/pari_choudhary
• http://twitter.com/mansi_joshi
• http://twitter.com/janhavi_agarwal

More updates as I debug this further!

Tuesday, November 18, 2008

Embracing New Technology: The Twitter Case

I am a technogeek and try to integrate new technology into my everyday life as much as possible. This post is about how I am using Twitter.

As most of you may know, Twitter is a short messaging service that people use to convey updates in real time. It has become the micro-blogging platform of choice and has many cool advantages, one among them being the ability to post live updates from a computer, a regular mobile or a smartphone. These updates can then be fetched via RSS feeds.

I use Twitter to tell people (whoever is interested) what I am currently up to. These updates are normally sent to my Twitter account as an SMS from my mobile. The current status then appears on my webpage.

So the next time you call me and I do not pick up the phone, please check the tweet on my webpage!

Sunday, November 16, 2008

IPv4 Countdown vs. State of IPv6

The Internet Assigned Numbers Authority (IANA) is the body that manages the unicast IPv4 address pool (i.e. from 0.0.0.0 to 223.255.255.255). IANA assigns blocks of this space to the 5 RIRs (Regional Internet Registries), i.e. AFRINIC, APNIC, ARIN, RIPE NCC and LACNIC. The RIRs use their distribution policies to further allocate addresses to local registries and ISPs, which propagate them to the end hosts.

Potaroo.net predicts the following dates for the exhaustion of the IPv4 address space:
Projected IANA Unallocated Address Pool Exhaustion: 04-Feb-2011
Projected RIR Unallocated Address Pool Exhaustion: 05-Mar-2012
A live counter of the number of days until we hit exhaustion of the IPv4 address space can be found here. The counter is generated using data from the potaroo.net report. The report is pretty detailed and explains the modelling used for predicting the dates. Please note that the modelling is based on the current address distribution policies used by the RIRs and on current consumption trends. The following graph (from the potaroo.net report) shows the current status of IPv4.


An explanation of the graph follows:

Note that there are 256 /8s, where each /8 is 16,777,216 addresses.

IETF_Reserved: Blocks reserved for special purposes. This consists of 16 /8 multicast blocks + 16 /8 reserved blocks + 1 /8 (0.0.0.0/8) for local identification + 1 /8 (127.0.0.0/8) for loopback + 1 /8 (10.0.0.0/8) for private use + 1 /8 (14.0.0.0/8) for public data networks.

IANA_Pool: /8s left with IANA for allocation to the RIRs.

Allocated: Allocated by IANA to the RIRs. This does not reflect current consumption, because the RIRs may hold pools of their own.
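The numbers above can be sanity-checked with a bit of shell arithmetic (per the IETF_Reserved list, the reserved set comes to 16 + 16 + 1 + 1 + 1 + 1 = 36 /8s):

```shell
# One /8 holds 2^24 addresses; 256 of them make up the full 32-bit space.
per_slash8=$((1 << 24))
total=$((256 * per_slash8))
reserved=$((36 * per_slash8))   # 36 /8s per the IETF_Reserved breakdown
echo "per /8:   $per_slash8"    # 16777216
echo "total:    $total"         # 4294967296
echo "reserved: $reserved"      # 603979776
```

So roughly 14% of the whole space is off the table before allocation even starts.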

So much for IPv4. Now let's look at IPv6.

In 2008, there have been at least 2 big studies on the state of IPv6.

The reports are detailed, but here are a few interesting points.

1) Arbor Networks' experiment measured the total amount of IPv6 traffic flowing in the backbone, and they note that

At its peak, IPv6 represented less than one hundredth of 1% of Internet traffic.

2) The biggest reason cited in the summary for the above observation is money.

Specifically, the Department of Commerce estimates it will cost $25 billion for ISPs to upgrade to native IPv6.
3) Google's effort measures the state of IPv6 from an end-node perspective, as opposed to the Arbor measurement. Their key observations are:

• 0.238% of users have useful IPv6 connectivity (and prefer IPv6).
• 0.09% of users have broken IPv6 connectivity.
• Probably a million distinct IPv6 hosts exist.
• Russia leads the chart in IPv6 penetration.
• IPv6 prevalence is low but increasing steadily by the week.
• IPv6-over-IPv4 tunnelling is the most common transition mechanism.
• Mac OS has better IPv6 penetration than Vista because of the OSes' default policies.

So, given the predictions about the end of IPv4 and the rate of adoption of IPv6, are we ready for migration? In Feb 2008, ICANN added IPv6 addresses for 6 of the 13 root DNS servers (news here), which is a step in the right direction, but is it enough to prod people to migrate?

I have the following concerns about the migration:

• What will dictate the migration: economics or a better future Internet?
• Will ISPs be willing to pay the price?
• Even if they are willing, can consumers and businesses transition to IPv6 seamlessly?
• Will security products continue working the same way?
• Are vendors testing their implementations with IPv6, so that a simple software update can fix the tons of software already out there?
• How will this migration differ in impact from the Y2K bug of the last century? Are the two comparable in any sense?

I have a feeling that economics will dominate this race more than anything else. If the migration is going to cost businesses a lot of money without any added value, then there is bound to be a huge pushback. Somehow the cost has to be justified to them to make this transition happen, and simply citing address space exhaustion may not strike a chord with every business.

Saturday, September 20, 2008

A Linux solution for copying and burning DVDs

The following are my experiences with copying and burning DVDs on Linux. To summarize the experience in a phrase: "It was a walk in the park".

Operating System: Ubuntu 8.04

Tools of the trade
• k9copy (for copying DVDs)
• brasero (for burning DVDs)
Installation
Installing the above packages in Ubuntu is as simple as:
$ sudo apt-get install k9copy
$ sudo apt-get install brasero

Procedure
• Insert the DVD into the tray and open k9copy.
• Choose File -> Open. This will load the DVD and show the chapters and titles as shown below. Select all the titles that you wish to copy.
• Select Action -> Copy. You will be prompted for a location where the final ISO file will be saved. Make sure that you have free disk space of at least twice the size of the DVD.
• Leave all the options in the pane below as-is unless you know what they mean.
• Once the copy starts, you will be able to view the progress in the right-side pane.
• The copy process creates a folder called dvd and an ISO image in the location specified earlier.
• You can remove the dvd folder, as it is not required during the burning process.
• Now, to burn the ISO image, open brasero and select the option for burning ISO images.
• Insert a blank DVD and start the burn process.
• Enjoy!
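The "twice the DVD size" rule of thumb can be checked up front. This is my own sketch (k9copy does not ship such a check); it assumes a single-layer 4.7 GB disc and that the ISO will land under $HOME:

```shell
# Check for ~2x a single-layer DVD (4.7 GB) of free space before copying.
dvd_kb=4700000                  # assumed disc size in KB
need_kb=$((2 * dvd_kb))
avail_kb=$(df -Pk "$HOME" | awk 'NR==2 {print $4}')
if [ "$avail_kb" -ge "$need_kb" ]; then
    echo "OK: ${avail_kb} KB free (need ${need_kb} KB)"
else
    echo "Low on space: ${avail_kb} KB free (need ${need_kb} KB)"
fi
```

Adjust dvd_kb (and the target path) for dual-layer discs or a different save location.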
In my experience, I have copied 4 DVDs and burnt around 12, and the whole process took slightly more than half a day. There were absolutely no errors, and the original DVD quality was maintained in all the copies.

Saturday, August 23, 2008

When will people learn?

Airtel (one of India's leading cell phone providers) has recently tied up with Apple to offer the iPhone 3G in the Indian market. Everything is good, but is the following sort of sales pitch really necessary to sell iPhones? Airtel is quoted here as saying:

"even the most deadly hackers on the planet won't be able to crack the
codes that support the iPhone's Airtel applications with rival company
SIMs."

My question is: WHY? Even if you really have provided tamper-proof security, throwing an open challenge to the highly skilled and distributed hacker workforce on the Internet is nothing short of the proverbial "hitting the axe on your own leg". Such stunts may be good for testing your products before entering the market, but not once the products are already out there. This stunt has surely attracted the bees, and it's just a matter of time before the bees sting.

Thursday, August 21, 2008

Return gifts from an internet cafe

Today I was at an internet cafe to get a printout, as my old printer had died a natural death. As usual, the cafe was running Windows XP machines in administrator mode. I never like the look of a Windows machine running in administrator mode in a public place, and I was quite sure it was already pwned. Nevertheless, I plugged in my USB drive, which contained just the file I wanted to print. After a few seconds, my drive was detected and I could print the file. All was well and good.

Then I took the drive home and plugged it back into my laptop, which fortunately runs Ubuntu. Lo and behold, my drive now had three return gifts from the internet cafe. A quick antivirus scan of the files revealed the following:

neoblitz@n30:/tmp$ clamscan /media/PKBACK#\ 001/*
/media/PKBACK# 001/1.jpg: OK
/media/PKBACK# 001/2.jpg: OK
/media/PKBACK# 001/autorun.inf: OK
/media/PKBACK# 001/New Folder .exe: Trojan.Autoit.gen FOUND
/media/PKBACK# 001/regsvr.exe: Trojan.Autoit.gen FOUND

----------- SCAN SUMMARY -----------
Known viruses: 396428
Engine version: 0.92.1
Scanned directories: 0
Scanned files: 6
Infected files: 2
Data scanned: 1.57 MB
Time: 6.231 sec (0 m 6 s)

As you can see, I had 2 trojan binaries and an autorun.inf pointing to those binaries. For those who didn't realize, this is a worm that uses an unsuspecting user to physically propagate it from machine to machine.

It makes me wonder how many unsuspecting folks have been infected by this. Also, the public machine itself is probably part of some botnet and has all types of exotic malware already installed, sniffing passwords and recording the transactions of unsuspecting users. Phew!

So the moral of the story is two-fold:
• Do NOT trust public machines. Avoid using them for electronic transactions with your credit card, for entering the username/password of your email accounts, and so on.
• If you run as administrator, then very likely you are not the only administrator :)
I will publish the results of my analysis of the binaries in the next post soon.

Sunday, July 27, 2008

Sound bytes could now play the devil's tune!

The next time you want to download your favorite song (illegally, of course :)) from a p2p network or some shady site, think twice. The latest in malware infection has just been found: according to this report from Kaspersky Lab, there is now a worm that infects your .mp3 files.

From the report, the workings of this worm are as follows:

The worm, which was named Worm.Win32.GetCodec.a, converts mp3 files to the Windows Media Audio (WMA) format (without changing the .mp3 extension) and adds a marker with a link to an infected web page to the converted files. The marker is activated automatically during file playback. It opens an infected page in Internet Explorer where the user is asked to download and install a file which, according to the website, is a codec. If the user agrees to install the file, a Trojan known as Trojan-Proxy.Win32.Agent.arp is downloaded to the computer, giving cybercriminals control of the victim PC.

You can get infected by the worm directly, or via an already-infected mp3 file downloaded from some malicious site or P2P share. The simple precautions against this type of infection are the age-old, time-tested ones:

1. Never run as administrator on your computer. I repeatedly keep hearing that it's insane not to be administrator of your own machine. Please note that if you run as administrator of your own machine, then there is probably another administrator of your machine :). This simple precaution can help mitigate tons of security issues and make attacking your system that much more difficult.

2. Do not install stuff from websites that you do not know or trust, i.e. do not randomly click install buttons unless you are absolutely sure.

3. If you are really crazy about p2p downloading, as if your life depends on it, then try downloading inside a VMware virtual machine. That way, in the event of a compromise, at least the critical data residing on your real system is protected.

Thursday, July 24, 2008

Testing your DNS servers for CERT VU#800113 (or Dan Kaminsky's bug)

Here are 2 pointers to diagnostic tools for testing your DNS server against Dan Kaminsky's vulnerability (CERT VU#800113). Please note that these tools use a very simple test and may not be enough to provide a foolproof assessment of the strength of your DNS server.

[Update] There is also a simple command-line test from the same folks at DNS-OARC. Just fire the following command from a command line (of course, you need to have dig installed):
dig +short porttest.dns-oarc.net TXT
The first tool gives really cool output; a sample is shown below. If you see GREAT in the output, it means you are OK against the bug (as far as the tool is concerned).
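If you want to script the dig check, something like this works. A sketch: it needs dig and network access, and I am assuming GOOD and POOR as the other verdict words the DNS-OARC service can return, with GREAT being the best:

```shell
# Run the DNS-OARC port test and grade the verdict word in the reply.
result=$(dig +short porttest.dns-oarc.net TXT)
case "$result" in
    *GREAT*) echo "source ports look well randomized" ;;
    *GOOD*)  echo "reasonable randomization" ;;
    *POOR*)  echo "predictable source ports - patch your resolver!" ;;
    *)       echo "inconclusive reply: $result" ;;
esac
```

Handy for checking a fleet of resolvers from cron rather than by hand.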


            "Kaminskys DNS bug" drama

            If you are reading this article and interested in computer security then you may already know about the blockbuster DNS bug and the whole drama around it. To put the long story short, a major bug in the DNS protocol was discovered around 6 months ago by Dan Kaminsky. To describe the bug simply, the bug makes DNS cache-poisoning attacks a walk in the park (details here). Kaminsky, being a white-hat researcher, responsibly informed all DNS vendors about the bug and wanted to make sure that all the DNS server implementations were patched before he disclosed the bug to the world at Blackhat'08. But, Kaminsky chose to address a press conference a month before the disclosure and hinted about the existence of the bug without giving all the details. This caused a furore in the community about Kaminsky's claims but he managed to convince people (THomas Ptacek of Matasano Chargen in particular) off-line about the importance of the bug by confiding in them with the details and encouraged ISPs to patch the bug as fast as they could before Blackhat. Thomas Ptacek, relaizing his mistake, immediately changed his pitch and started encouraging people to patch as soon as they can.

            All was supposedly well, until Halvar Flake made an almost right guess at the bug. This somehow caused panic at Matasano Chargen and they released the full details of the bug before hurriedly pulling back the post. Ofcourse, with feed readers around the world caching his blog, pulling back the post did not do any real good and in their own words "the cat was out of the bag" for sure now. Dan Kaminsky diplomatically handled the disclosures and is now encouraging people to patch the bug as fast as they can. Unfortunately, the disclosure of details has already resulted in sample exploits for Metasploit to come out within the end of the day. Thomas Ptacek, ofcourse has now posted an apology on his blog !

            Now, one can blame Halvar Flake and Thomas Ptacek for disclosing the bug. One can also blame Thomas Ptacek for breaking Dan Kaminskys trust. But if you are a security researcher, you know very well that you may not be the first one to discover the bug. Blackhats aka evil hackers, dont care about disclosing the bugs they find. They just use them !! So i believe that these discloures may atleast force lazy ISP's to patch their systems sooner. If that will really happen is something that remains to be seen. My guess is that inspite of all this drama, there will be DNS servers which will still never get patched and will be sitting ducks ready to be exploited. It will be interesting to see if hackers are able capitalize on this bug and use it for real damage.

If you want to test your DNS server's resilience to this attack, you can use the small "Check my DNS" widget on Dan Kaminsky's site (www.doxpara.com). It may be interesting to note that DJBDNS is not affected by this bug because of its sound design. Read Bruce Schneier's article for his praise of that design and of the importance of thinking about security during development rather than as an add-on.
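To get a feel for why the widely deployed patch (source-port randomization) helps, here is a rough back-of-the-envelope sketch. This is my own illustration, not taken from Kaminsky's material: an off-path attacker must guess the 16-bit DNS transaction ID in a spoofed reply, and randomizing the resolver's source port multiplies that search space.

```python
# Sketch: probability that an off-path attacker lands at least one spoofed
# DNS reply with the right (transaction ID, source port) combination.
# The entropy figures are illustrative assumptions, not measured values.

def guess_probability(attempts, txid_bits=16, port_entropy_bits=0):
    """Probability that at least one of `attempts` spoofed replies matches."""
    space = 2 ** (txid_bits + port_entropy_bits)
    return 1 - (1 - 1 / space) ** attempts

# Unpatched resolver: a fixed source port, so only the 16-bit TXID
# (65,536 possibilities) protects the cache.
p_unpatched = guess_probability(10_000)

# Patched resolver: TXID plus roughly 14 bits of source-port randomness,
# pushing the search space past a billion combinations.
p_patched = guess_probability(10_000, port_entropy_bits=14)
```

With 10,000 spoofed replies the unpatched case already gives the attacker a meaningful chance of success, while the patched case stays vanishingly small, which is why patching was so urgent.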

            Sunday, July 6, 2008

            A fun javascript flipbook

Getting bored with my work, I tried doing something completely useless this weekend :). I had some photos taken in burst mode during my trip to La Jolla in San Diego County, California, so I stitched them together using JavaScript to create a flip-book effect. You can check out the demo and the accompanying JavaScript on my website here. Enjoy!

Unfortunately, I could not put up the demo here, as Blogspot does not accept content within <script> tags.

            Saturday, June 21, 2008

            Ruby Command line for dhingana.com

I got bored of clicking links on dhingana and thought there should be an easier way to listen to songs. Being a command-line freak, I naturally thought of writing a command-line tool, so I wrote a small Ruby utility for downloading songs from dhingana.

            The tool can be found here. Please read the disclaimer before using the tool and feel free to drop me a line.

PS: The website is a work in progress and will be fully functional in 2-3 weeks.

            Thursday, June 19, 2008

            Surfing in a hostile world !

To get a hostile view of the world we surf in, here are a few statistics about the malware forms coexisting with us today.

A few highlights (as of today):
1. There are around 3,000 botnet command-and-control (C&C) servers active at any time of day.
2. There are around 100K bot machines (using a 30-day age value for each bot).
3. The US has around 4,500 bot C&Cs (the largest count in the world). It is interesting to see that China is way down the list with only 115.
4. There were around 3.5 million unique malware binaries seen in October 2007, with the number of unique binaries being at least 1 million every month since.
5. The 0-day detection stats for antivirus vendors are very interesting. Out of the 68,000 samples of new malware tested against well-known vendors in the last 24 hours, the really well-known ones like Kaspersky, McAfee etc. were able to detect only 70% of them, while AntiVir detected around 98%. Curiously, Symantec is not on the list.

These statistics are from ShadowServer, whose numbers are generally considered very reliable in the security community. It is not clear to me what fraction of the address space they monitor, but the stats are nevertheless very revealing.

            Saturday, May 24, 2008

            Security is all about breaking assumptions !


Anyone with even a slight understanding of security will appreciate that security is all about breaking assumptions. Any system is built with certain assumptions, because otherwise the system requirements would tend toward the infinite. Hackers always target those assumptions to break the system. It thus becomes very important for system and process designers to be very careful about the assumptions they make. I believe that the systems which stand the test of time are the ones that have their assumptions clearly laid out and which give their users a clear understanding of the system's strengths and weaknesses.

While one may say that the above is clearly very logical and there is nothing surprising about it, reality indicates that not many get this simple axiom right. But there seems to be a paradox here. I said that a system cannot be built without assumptions, and also that security is all about breaking assumptions. That would imply there is no such thing as 100% secure! And as it turns out, that is precisely the point.
Vendors who claim that their products provide 100% security, or are themselves 100% secure, are essentially trying to fool their customers, or maybe even themselves.

A case in point: there was a very recent incident in the US involving the company LifeLock (read this). LifeLock guarantees protection against identity theft. In fact, its CEO advertises his own Social Security Number on the website and claims that their service guarantees complete protection against identity theft. They do this by setting fraud alerts at the three major credit bureaus, namely Experian, TransUnion and Equifax, reasoning that anyone who tries to use an SSN not his own will get caught. But they made a very big assumption here: that outfits like credit-card companies, banks etc. will always run a credit check before activating services for an individual. Guess what? They were proved wrong in a really stupid way. Someone stole the CEO's own identity from his website and took out a $500 loan in the CEO's name. The fraud alerts did not trip because the loan company did not bother to run a credit check at all!

The take-home message from this post is thus two-fold:
1. If you are a customer, carefully evaluate the security assumptions yourself without getting sold on the vendor's advertising.
2. If you are a vendor, make sure that all your assumptions are clearly stated and avoid hidden ones.

            Saturday, May 3, 2008

            30th Anniversary of SPAM

As per this BBC news article, 3rd May 2008 is the 30th anniversary of email SPAM. The first spam message was sent to 400 users on the ARPANET by a DEC employee on 3rd May 1978. But this was not yet the beginning of the commercial SPAM era; it was only in April 1994 that a group of immigration lawyers sent the first commercial spam message to more than 6,000 USENET discussion groups, thus spawning a new rogue business model on the internet.

            Saturday, March 8, 2008

            EyeOS ! Is it anything more than EyeCandy?

EyeOS Professional Services has released a browser-based OS called eyeOS. Check out the demo at: http://demo.eyeos.org/


            From the site :

eyeOS was thought as a new definition of Operating System, where everything inside it can be accessed from everywhere in a Network. All you need to do is log in to your eyeOS server with a normal Internet Browser, and access your personal or corporate desktop, with your applications, documents and files... just like you left it last time.

            eyeOS comes with a preloaded suite of applications, some for private use, like the file manager, a word processor, a music player, calendar, notepad or contacts manager. There are also some groupware applications, such as a group manager, a file sharing application, a group board and many more.

I am finding it difficult to comprehend why anyone would want to use this. Wasn't Remote Desktop over VPN good enough?

This kind of technology opens a can of security worms. First of all, it is browser-based and anyone can log in from anywhere, so if a user's account is compromised because he was logging into his company's eyeOS server from a cybercafe, the company's data is at risk. Secondly, why would anyone want to leave Word and Excel and use the applications provided here? Or, for that matter, why would anyone using Google Docs want to use this technology? I find it difficult to see the true value proposition in this product.

            The idea is nevertheless neat.

            Saturday, February 23, 2008

            Google Scanner from cDc

The Cult of the Dead Cow (cDc) has released Goolag, a Google scanner for finding website vulnerabilities and other juicy information using Google. The scanner is based on the Google-hacking techniques developed by Johnny Long. The tool comes with its own dork database and speeds up scanning considerably.

As defined by Johnny Long in his hacking database, googledorks are "inept or foolish people as revealed by Google."

Technically, dorks are search patterns that reveal sites with potential vulnerabilities. Check the hacking database for an extensive list of dorks. These search patterns are not specific to Google; they are just more effective with Google because of its vast index.

An example dork from the hacking database is "intitle:admin intitle:login", which finds admin login pages. Now, the existence of such a page does not necessarily mean a server is vulnerable, but it sure is handy to let Google do the discovering for you, no? Let's face it: if you're trying to hack into a web server, this is one of the more obvious places to poke.
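Mechanically, a dork is just a search query handed to the engine with its operators URL-encoded. Here is a small sketch of my own (the function name and `site:` scoping are illustrative assumptions, not part of Goolag) showing how the example dork above turns into a search URL:

```python
# Sketch (hypothetical helper, not from the Goolag tool): turn a dork such as
# "intitle:admin intitle:login" into a Google search URL, optionally scoped
# to a single site with the site: operator.
from urllib.parse import urlencode

def dork_url(dork, site=None):
    """Build a search URL for a dork, optionally restricted to one site."""
    query = dork if site is None else f"site:{site} {dork}"
    # urlencode handles escaping the ':' and spaces in the operators.
    return "https://www.google.com/search?" + urlencode({"q": query})

url = dork_url("intitle:admin intitle:login", site="example.com")
```

A scanner like Goolag essentially iterates a database of such patterns against a target domain and inspects which queries return hits.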

            Microsoft opens up its Treasure Chest

Microsoft has finally opened up its "treasure" chest, starting a protocols program under which it is releasing loads and loads of documentation.

From their website:

            "The Microsoft Protocol Programs foster innovation and interoperability by offering partners access to Windows Vista, Windows Server 2008, Microsoft SQL Server 2008, the 2007 Microsoft Office release, Microsoft Exchange Server 2007, and Microsoft Office SharePoint Server 2007 protocols for use on any platform. These programs enable and encourage a vibrant development community and support it with customer service. The result will be smarter, interoperable products that can be released in coordination with Microsoft product releases."

            All the documents are available in PDF format.

The following document provides a roadmap for ploughing through them: [MS-DOCO]: Windows Protocols Documentation Roadmap

            Thursday, January 31, 2008

            MS08-001 Proof-Of-Concept Exploit


Check out this cool proof-of-concept exploit developed by ImmunitySec for the IGMPv3 vulnerability (MS08-001).

            http://immunityinc.com/documentation/ms08_001.html


            The tool being used is their flagship Canvas product.