This is part 1 about some old and some new issues related to security in technology.

I would have to start the story around 20 years back, when as a school-going kid I was introduced to computers. In those days, Windows 3.1 had just been introduced in India, and most of us, rather than learning DOS commands and using spreadsheets, were busy fragging on Wolfenstein and Doom. There was nothing like the Internet, and the only way to get new games was to go to a rich friend’s place, get on his 28.8k line, dial into the only BBS we knew at the time, Jabberwocky, and get our dose of games, music etc. Life was good, except for a crash now and then.

During those days, my understanding of computer systems was pretty limited (it still is). It was much, much later, after a series of crashes, viruses and worms, that I researched and came to know that when a system wakes up, the operating system runs its different components as something called ‘processes’. It took me my own time to understand how these processes functioned. Borrowing from biology, where cells multiply, divide and have different functions, a similar role is performed in both Unix and MS-Windows by the various programs of the respective operating systems. The only difference between biology and computer science is that each software process in the digital system has a unique process id, called a PID.
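To see this for yourself, here is a small sketch from the shell (assuming a Linux system with the usual /proc filesystem; nothing here is specific to any one distribution):

```shell
# Every process gets a unique PID. $$ expands to the PID of the
# current shell, and /proc/<pid>/ is the kernel's view of that process.
echo "my shell's PID is $$"
cat /proc/$$/comm    # the process's name, straight from the kernel
```

Run it twice in two different terminals and you will see two different PIDs, since each shell is its own process.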

Somewhere during those on-off periods of engaging with computers, the Internet came along and blew over all of us. Till then we were on protected networks such as Jabberwocky, or on floppynet and sneakernet, and viruses and trojans were few and far between. Once the Internet came in, though, viruses such as the CIH virus, the ILOVEYOU virus and various such trojans made my computing life a nightmare. Add to that that Adobe Reader and Adobe Flash used to get loads of viruses, and even Microsoft Word could and would be injected with macro viruses; it was (and probably is) easier to get infected on the web on an MS-Windows system. I left the MS-Windows world after a couple of years on Windows XP SP2 and hence have no idea of the state of affairs in the MS-world today.

I did find, in the hardest way possible, that in the MS-world no anti-virus is infallible, and when you had two or more anti-viruses they would at times negate each other’s efforts, make the system sluggish, and sometimes just crash the machine. The amount of time it used to take to rebuild the machine (software only) was atrocious, and I used to end up feeling frustrated as I didn’t really know who or what was at fault. Even applying an update didn’t give any confidence, as the updates themselves were cryptic and there was no way to raise it with the company if a patch or patches didn’t work. As a good citizen, I also used to update the anti-virus definitions, but still the scenario continued.

While dealing with the above frustrations, I came in contact with something called PCQ Linux, and hence over the years I have had my share of GNU/Linux distributions and BSDs. This was actually where my real education about computer systems started. One of the selling points of this new system was that there were no computer viruses. As people say, once bitten twice shy; I was skeptical, but it quickly became clear that once you got over the hurdles, this new system was unlike any other. The uptime of my system, which used to be a few hours, quickly became weeks. Uptime is the time a system has been running since it was last booted.

The following is from one of my systems, which is pretty much experimental in nature:

[$] uptime
23:39:05 up 2 days, 12 min, 8 users, load average: 0.82, 0.70, 0.52

As can be seen, it has been up for 2 days since the last boot. The only time this specific system goes down is when electricity is down for more than an hour. It is only in very rare situations that I have to shut the system down to troubleshoot it.
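For the curious, uptime itself gets this number from the kernel; a rough sketch of doing the same by hand (again assuming Linux’s /proc filesystem):

```shell
# /proc/uptime holds two numbers: seconds since boot and idle time.
read up idle < /proc/uptime
secs=${up%.*}                       # drop the fractional part
days=$(( secs / 86400 ))
hours=$(( (secs % 86400) / 3600 ))
mins=$(( (secs % 3600) / 60 ))
echo "up ${days} days, ${hours} hours, ${mins} minutes"
```

The `load average` part of uptime’s output comes from a different file, /proc/loadavg.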

Remember the bit about PIDs I shared previously? While in MS-Windows you could see each PID and how much memory it was consuming, there was nothing to tell you what the relationships between the processes were. In GNU/Linux all this changed.

[Image: pstree -p output, a snapshot showing the PID of each process and the parent–child relationships between them]

As can be seen, you could get a snapshot of how the system was laid out. Obviously this is just to give an idea; if I really wanted to see which process was using the most memory, I would have used htop in tree mode (run htop and press t while it is running).
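If pstree isn’t at hand, ps can sketch the same parent–child tree (the flags below are GNU procps options, so the BSDs may differ):

```shell
# Draw the process hierarchy as ASCII art; each child is indented
# under its parent, mirroring what pstree -p shows.
ps -eo pid,comm --forest | head -n 15
```

The indentation makes it easy to spot, say, which daemon spawned a runaway worker.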

A lot of learning happened when I started using GNU/Linux. While in MS-Windows people were and are blissfully unaware of whatever happens in the system, in GNU/Linux the tools, the understanding and the philosophy compel people to be aware of their system.

In GNU/Linux the worst things there are are what are called ‘binary blobs’. Let’s take an analogy: say you are given a suitcase but no key, and are told that you need to take it everywhere with you. You also cannot function without the ‘suitcase’, because then you won’t look like a businessman and things will be much slower for you, or won’t work at all. Now the ‘suitcase’ may have a listening device or a bomb; we won’t ever know, and till a suitcase is invented for which the businessman can also have the key, he has to make do with what is. This is where most GNU/Linux distributions find themselves.

Things were going ‘fine’ until a certain Edward Snowden made the world realize that the NSA, FBI and other actors had been monitoring us, the masses. While with MS-Windows and Macs it was evident that they would have backdoors and what-not (there was and is a huge history of them co-operating with the powers that be), freedom-loving GNU/Linux distributors also quickly became concerned. While it had long been spoken of in corners, it came to be known that weakened algorithms had been used to make cryptographic keys, which are the basis of many online security systems and encrypted messages. There was increased awareness and a healthy dose of paranoia, which led to the discovery of vulnerabilities such as Shellshock, Heartbleed and POODLE in open-source libraries, and to the creation of forked utilities. This has also led to a reawakening of attention not just to code quality but also to governance practices at various organizations, as well as how they take and give feedback.

Two instances relate to this story, one which is by and large probably finished, and one on-going. The finished one was that iceweasel (firefox) was downloading Cisco’s H.264 codec. You can read all about it in #769716; the bug itself is a huge read but comes down to a simple thing: should you trust some third party who says this black box/suitcase only has documents and not some bomb or listening device, when there is no way to x-ray it and verify whether what they say is really true? Another point to that bug/story is that H.264 is covered by patents, so it might have exposed Debian to a patent suit, and even if not, on a philosophical level Debian is against patents. So better not to use it rather than invite trouble. I haven’t read those patents, but they would probably bear more on the encoding side, as that is where the algorithms shine, and probably a bit less on the decoding side.

A similar issue has been playing out with chromium as well; see the temp tracker as well as the bug itself.

As shared, both of them downloaded binary blobs whose intent is suspect.

While these are the known ones, there are many which are unknown. The way forward for companies is to audit and test the code in as many ways as they can, document the methods as well as the results, and take the help of the community in supporting and improving both the testing infrastructure and the software itself.