This guide will act as the man with the short shorts and explorer hat for those of you moving to Linux, interested in Linux, or who have heard of Linux but can’t remember where – and who constantly wake up in cold sweats at night, dreaming of killer penguins stalking them from the cold, dark, snowy shadows.
For those of you who want to know everything, I suggest making a cup of tea and taking a few minutes to go through the Complicated Stuff section. For people of more general interest, the rest of this Linux compass will suit you quite well – namely the Desktop, Distribution and Software sections further down the page.
The Complicated Stuff
“Yes Officer, it’s just here, let me get it” AKA The Licence
To truly get a grasp of how Linux and everything around it works, perhaps it’s best to first start off with one of the fundamental things that makes it possible in the first place.
Most software is licensed. The license acts as the basis of an agreement of sorts between you and the vendor or producer of the software when you use, install or hand over money for it. Often, these licenses are restrictive: they can tell you that you cannot modify the software, that you shall not redistribute it, or that you cannot count how long it took for the bloody thing to load. Linux, however, uses a license known as the GPL, brought into being by Richard Stallman and aimed at giving people freedom with software. It states, basically, that you are allowed to redistribute, you are allowed to modify, and you must have access to the source code (as this is required for the previous two).
Linux was distributed under said license. The use of said license brought about several ramifications:
- That the original creator, Linus Torvalds, didn’t have express control over what others did with it, nor was there any attempt to exert control.
- That other people not only took it for themselves, but contributed back, sending what they thought might be improvements and advancements, and as required by the GPL, with the same rights as they were originally given.
- That other projects could also build on top of Linux, either for themselves or openly, without having to be directly involved or subject to the same restrictions as was usually imposed by other software or operating systems.
This produced the unique position of an operating system started by a hobbyist and progressively built by other hobbyists. Sometimes you’ll hear or see individuals post (crap) about how Linux will always be a hobbyist OS. This is essentially where that notion comes from – a meme taken from the roots of how Linux was first started, initially built and, to some extent, still largely built. This difference in license and development style has also led to people proclaiming that for Linux to “get anywhere” it needs to standardise or become one distribution. Doing so, however, would make Linux just another OS – far easier to ignore – and would wipe out Linux’s biggest advantages: its flexibility and adaptability, made possible by its license and accentuated by the way it is subsequently developed. To proclaim Linux should make itself and its ecosystem less diverse is to miss the point of Linux and the license that has enabled it.
“It’s What’s Underneath That Counts” AKA The Kernel
The above has affected the development of Linux itself significantly. What the GPL and Linux have done is introduce the concept of “peer review” into software development. What does this mean? In science, all knowledge is open and shared. Along with this comes peer review – the ability not just for the scientists who performed an experiment to judge the accuracy of their work, but also for the scientific community at large, allowing quicker and better review of theories and results, leading to more progress and stronger theories that guide our understanding of the world and allow us to predict and harness it. In Linux, thousands review the code, alongside which hundreds more contribute, change and improve based upon it.
To further understand part of what Linux is, you need to understand some of the building blocks of hardware and software. Linux is in itself a kernel, and just a kernel. The kernel is the layer of syrup that makes things run nice and sweetly. Its job is to handle requests for resources from other software installed on the system and allocate those resources appropriately and intelligently – the resources in this case being your processor, your RAM and so on: anything hardware related. This is also why operating systems can run on many pieces of hardware whilst the software on top still works – it’s the kernel that handles the messy job of communicating and working with all this hardware so the software doesn’t have to. This leads to the concept of drivers that all of us have come into contact with – the ability (or lack thereof) to use a piece of hardware because, apparently, a driver was missing. A driver is itself something that allows the kernel to understand and communicate with the hardware it is trying to manage.
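To make the resource-juggling idea concrete, here’s a toy sketch in Python of the kind of decision a kernel makes when sharing out CPU time between programs. The task names, time slices and round-robin policy are purely illustrative assumptions – a real kernel scheduler is vastly more sophisticated:

```python
from collections import deque

# Toy round-robin scheduler: each program gets the CPU for one fixed
# slice, then goes to the back of the queue. Names are hypothetical.
def round_robin(tasks, slice_ms, total_ms):
    """Give each task a fixed slice of CPU time in turn until time runs out."""
    queue = deque(tasks)
    timeline = []
    elapsed = 0
    while queue and elapsed < total_ms:
        task = queue.popleft()
        timeline.append(task)     # this task gets the CPU for one slice
        elapsed += slice_ms
        queue.append(task)        # then rejoins the back of the line
    return timeline

print(round_robin(["browser", "music-player", "editor"], 10, 50))
# -> ['browser', 'music-player', 'editor', 'browser', 'music-player']
```

The point isn’t the algorithm itself, but that no individual program has to think about sharing the hardware – the kernel does it for them.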
Also out of this has come the idea of distributed development. Similar to how peer-to-peer (P2P) distribution works, rarely is there a genuinely central point of distribution, control or, in this case, modification. Whilst there is technically a particular form of Linux that most distributions obtain, usually presented as if it were the definitive version, in reality there are many versions – hundreds if not thousands created every day. When a change to Linux is made, it is often initially specific to someone’s particular copy of the Linux source code on their computer. This is called a “fork” – a change made to the supposedly original form that may be kept separate, either as another form of Linux (one specifically targeted at desktop PCs, for example, as compared to one with changes aimed at high-end servers) or purely for temporary purposes in testing and reviewing. These changes may then be suggested back to the original “branch”. The main branch might then pull in these changes because they are deemed beneficial enough; effectively, the main kernel ends up nearly the same as the fork that was made. There is no true central point of development.
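The fork-and-merge cycle can be sketched in a few lines of Python. The “kernel” here is just a dictionary of made-up subsystem versions – an illustration of the flow of changes, not real kernel code:

```python
# Toy model of fork-and-merge development. Subsystem names and version
# strings are invented purely for illustration.
mainline = {"scheduler": "v1", "net": "v1", "fs": "v1"}

# A "fork": a contributor takes a full private copy they may change freely.
desktop_fork = dict(mainline)
desktop_fork["scheduler"] = "v2-low-latency"   # their desktop-focused tweak

# They suggest the change back; the maintainers deem it beneficial and
# pull in just the parts that differ from mainline.
changes = {k: v for k, v in desktop_fork.items() if mainline[k] != v}
mainline.update(changes)

print(mainline["scheduler"])   # -> v2-low-latency
```

Notice that the fork never needed anyone’s permission to exist, and the mainline only changed because the maintainers chose to pull the improvement in.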
The combination of the above ideas and methods then leads into something that some people find odd or confusing – that Linux, or at least many “distributions” of Linux, don’t all have the same desktop…
Linux itself is just the kernel. Because of this, both choice and competition have arisen in providing an end-user friendly desktop. The most common Linux desktops are called GNOME and KDE, each taking a slightly different approach but both with ease of use in mind. There is no reason to “standardise” Linux on either desktop project or, for that matter, any other. Instead, you should think of the competition between these desktops as you would any other form of competition, like the choice in MP3 players, phones, etc. It comes down to the personal preference of the user and the creators of a distribution.
The above mentioned approach does several things:
- More choice for the user. They can still use Linux, but rather than simply being tied into a whole package, the user themselves can decide what environment they want to use or work in.
- Creates variety not just for variety’s sake, but also produces very different ways of looking at how the typical interface can be used and changed for the better. These ideas can be changed, mashed together and improved upon, creating an environment perfect for rapidly adapting to new devices and form factors.
Using the two main desktop projects mentioned above as examples: KDE is leaning more and more toward a social desktop – features that allow integration with online services and social networks – along with its everything-is-a-widget approach, allowing for better and easier customisation.
GNOME is in the middle of a transition to GNOME 3.0, which takes a task-oriented view of running, using and displaying applications and files, whilst providing a much easier way to deal with the multiple-desktops feature that has long been a standard of most Linux desktops but has been hard to get across to people.
Both these approaches are different to what is currently considered the leader in interface design, Apple, and stretching farther away from what seems to me a more “me too” like approach from Windows over the past few years. Both have their merits, with distributions being able to pick the approach that best suits them and their users.
Also note how distributions often take what would basically end up being the same desktops – typically GNOME or KDE – yet still provide differentiation in look and features. Ubuntu in the past few years has been specifically focusing on this, with changes to the default GNOME interface via additions like the user switcher applet, OSD notifications and the Netbook Remix interfaces, all aiming to improve the experience of using Linux.
After Ubuntu gets their hands on GNOME
After openSUSE gets their hands on GNOME
Also note variations that may be specific to certain devices, like netbooks – a far cry from the one size fits all approach of Windows.
Ubuntu Netbook Remix – optimized for small screens
“If I want to crack a nut I’ll use my foot not some toy soldier” AKA The Software
A long time ago, Linux distributions were in a pickle. The lack of a decent, mainstream way for users to get software, combined with a problem known as “dependency hell”, made maintaining your desktop a pain. This is how “package management” systems were born. These work essentially like Apple’s App Store, but came many years earlier. They killed two birds with one stone: they provided a way for software to be easily found and installed, whilst also managing dependencies (typically smaller programs or “libraries” required to run another program), taking away large amounts of the work in using and maintaining someone’s setup. This essentially provided an update mechanism too – except now not only was your OS updated, so too were the other programs you had installed.
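To see what “managing dependencies” actually involves, here’s a toy resolver in Python. The package names are invented for illustration; real package managers do this same kind of ordering (plus version and conflict handling) across tens of thousands of packages:

```python
# Toy dependency table: each package lists what it needs installed first.
# All names here are hypothetical, not real distribution packages.
DEPS = {
    "music-player": ["audio-lib", "ui-toolkit"],
    "audio-lib": ["codec-lib"],
    "ui-toolkit": [],
    "codec-lib": [],
}

def install_order(pkg, deps, seen=None, order=None):
    """Depth-first walk: install each dependency before what depends on it."""
    if seen is None:
        seen, order = set(), []
    if pkg in seen:
        return order
    seen.add(pkg)
    for dep in deps.get(pkg, []):
        install_order(dep, deps, seen, order)
    order.append(pkg)          # only after all its dependencies
    return order

print(install_order("music-player", DEPS))
# -> ['codec-lib', 'audio-lib', 'ui-toolkit', 'music-player']
```

Doing this by hand for every program you install – and again for every update – was exactly the “dependency hell” that package managers put an end to.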
No longer did you have to tread the likes of Google, wondering whether a site and its software were legit – now, just about every distribution carries a “package manager” of some kind that doesn’t just contain the essential system software and updates, but also large collections of third-party software, thanks again in part to the licensing of much of the software in the Linux ecosystem. This also meant you were free of trial programs trying to lure you in.
There’s a lot of software out there too. For most things, you should be able to find an as-good or better alternative. Want to play music? How about Banshee or Rhythmbox. Video editing? How about Kdenlive or OpenShot. Whilst there are a few areas that are behind (the aforementioned video-editing alternatives are at the moment too unstable or too limited feature-wise), for the most part you should be set, providing you aren’t “locked in” to a particular piece of software through DRM (Digital Rights Management) or otherwise. Even then, you may be able to get around it – projects like WINE make it possible to run many Windows programs and games on Linux.
“We don’t just have fish and stale bread, there’s some steak and a few bowls of nibbles going round too” AKA The Distributions
All that was explained in The Complicated Stuff (for those that read it) feeds into how different “distributions” came about. Using the freedoms granted by the GPL and similar licenses to their fullest, distributions are particular configurations or arrangements of the Linux kernel itself (the underlying part of any operating system) and, usually, one of the desktop projects mentioned above – appropriately modified and branded – combined with a selection of default software and ways to install and manage that software easily. This flexibility and freedom means many distributions have come about, providing something to fit a need not met before, or simply offering their own take on how they think it should be done.
And yes, there are a lot of “distros”. However, not all of these will be suitable. Depending on your own specific requirements, a large number of distributions will probably not be worth a look. First, let’s start with the general distros. These are the ones that take an approach similar to Windows or Mac, in that they do their best to run on a wide variety of hardware, try to be good enough to cover most basic needs, and come equipped with a wide range of software. The top three you’re most likely to hear about are Ubuntu, Fedora and openSUSE.
Beyond that, there are distributions that cater to more specific needs in the hope that they can provide a more suitable way to perform a particular task. For example, for the musical or visually creative among you, there are several distributions specifically tailored to your creative side, like Ubuntu Studio; or for those of you who live your life on the web, gOS.
Some distributions are almost purely company/business supported, some are community supported, and some are a mixture of the two (this is the most common arrangement – it is how Ubuntu, Fedora and openSUSE are run, for example). These often focus on building communities around themselves rather than putting something out solely as an end product, including creating boards and similar structures and authorities within those communities to help harness outside help and contribution.
“Hey man, pass the $100 bills, I need to, like, snort some” AKA Money in Open Source/Free software
Some people claim there’s no way to make money on Linux: either the user base is too small, or everybody who uses Linux just wants things for free (and therefore will never pay for anything), or producing software that can be freely redistributed is quite simply suicide. This is not true. There is a large commercial environment around Linux and associated projects. Red Hat is currently the jewel in the crown, contributing large amounts of work to Linux itself, selling a paid-for support service around its business-focused distribution, Red Hat Enterprise Linux, and supporting the desktop, community-focused distribution, Fedora.
Canonical is the company that sponsors Ubuntu. They’re similar to Red Hat – they sell support services, hire developers to work on Ubuntu and the related projects along with the more recent Ubuntu One service they’ve started. Other companies of note are Novell, perhaps Linpus, Xandros, Mandriva amongst others.
Also note that others can make money in the Linux world. The developers of World of Goo produced a Linux version, DRM-free like all the other versions. It beat the previous best daily sales by 40%. More recently they held a pay-what-you-want sale – Linux users paid the most per copy by a fairly big margin, with the Linux version selling roughly as many copies as the Mac version.
Another indie developer was also surprised at just how well the Linux version of their game sold, making up roughly 30% of all sales and having the highest conversion rate of user to customer. That same developer went on to give five reasons why indies should support Linux (alongside Mac) more often, suggesting that those who don’t are missing the opportunity to be a “big fish in a little pond” and are simply leaving money on the table.
Note that the idea of Free software does not mean, nor was it ever intended to mean, non-commercial. The two are not one and the same.
“Free software” does not mean “non-commercial.” A free program must be available for commercial use, commercial development, and commercial distribution. Commercial development of free software is no longer unusual; such free commercial software is very important. You may have paid money to get copies of free software, or you may have obtained copies at no charge. But regardless of how you got your copies, you always have the freedom to copy and change the software, even to sell copies. — The Free Software Definition
“It’s too cold without Windows, how can you live like this?” AKA Common Linux Fallacies
I can’t use it so no one can!
This argument has been known to crop up from time to time, and quite simply, it’s poppycock. I myself used Windows all my life, from Windows 95 up to Windows XP – and at one point Windows 3.1 on an old PC someone gave to me. I only started using Linux – specifically, Ubuntu – a mere two or three years ago, a while after I had received a new PC as a pressie. I was not a particularly technical user either. I found I could comfortably use the interface after only a few minutes of simply exploring, rather than sitting there screaming in my head “This isn’t Windows! This isn’t Windows! OMG OMG OMG!!!!1111”.
This is not to dissuade genuine complaints or issues, but this argument is of the thinnest kind, and rarely has genuinely decent reasons behind it.
Linux isn’t ready for prime time because of x feature only I and other niche users need
Holding up what is often a very particular need as a genuine blockade for all users – forgetting that things never remain static and development never stops – is an enticing argument to make at first, but only so long as you ignore the wider, more important needs, which are often much simpler or already have a very simple solution. For example, users can often be pointed to perfectly good, if not better, stand-ins and replacements that work with the existing documents created by that program as well, averting worries about lost productivity.
Often a perfect argument to start up a pointless flame war that produces nothing of value in discussion or action.
But It Doesn’t Have Many Applications/Popular Windows Applications!
Did you read the Software section above? The Linux world has a wide variety of applications for many purposes; the trick is to – get this – be just the tiniest bit adventurous and actually attempt using an alternative with an open mind. This argument is more often than not based on an irrational fear of the unknown rather than a rational questioning of suitability. In any case, you may already be using many applications that run on Linux. Do you use Firefox? Thunderbird? Skype? GIMP? OpenOffice? Pidgin? Emesene? All of these run on Linux as well as Windows. For a more comprehensive set of alternatives, check OSalt.
If you feel genuinely tied down to particular pieces of software, then check out the WINE project, which allows you to run many Windows programs under Linux. Web applications are also becoming more powerful, with the advantage that they are often OS-agnostic.
This, for the most part, concludes my map to help you find your way through what may at first seem very treacherous and dangerous ground. Once you have the lay of the land, however, you often find navigation is much easier than you thought – especially with short shorts. So too, I hope you can now find your way much more easily in the Linux world. Penguins are nothing to be scared of!