I started to learn Linux because it is the operating system of the Raspberry Pi board. It was relatively easy to learn to the degree where I was able to do cross-development, in my case using Python 3: Windows 10 on my PC, Raspbian Jessie (the Linux distribution for the Raspberry Pi) on the boards, and the PyCharm IDE from JetBrains. I got to the point where I could open the desktop of each of my two Raspi boards, connecting to them via WLAN and keeping entries for both with a free DNS provider. You can then work on your Raspis from your PC as if screen, keyboard and mouse were connected directly to them.
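As a sketch of that remote-desktop setup (the hostname is a placeholder for whatever name you register with your free DNS provider; the commands assume Raspbian with its bundled RealVNC server and any VNC viewer on the PC):

```shell
# Enable the VNC server once on each Pi, non-interactively over ssh:
ssh pi@raspi1.example-ddns.net 'sudo raspi-config nonint do_vnc 0'

# Then open the Pi's desktop from the PC with a VNC viewer:
vncviewer raspi1.example-ddns.net
```

The same hostname also works for plain ssh sessions when the full desktop is not needed.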
But then, even with my OS up to date and powerful protection software running on my PC, it was successfully attacked via the Raspi boards. I could only get over this with a fresh installation of Windows 10.
I am sharing my own experience of learning Linux by running it on a Raspi. Security is key if you want to develop for something that will be connected to the Internet! So my focus moved from learning how to work with Linux to learning how to make it as secure as possible, or, as it is called these days, less vulnerable. I started this second step by focusing on how to set up Linux in such a way that an attack on my PC is less likely to succeed.
Virtualization is, in my personal opinion, one of the ways to reduce the vulnerability of both the target and the host environment when developing. It starts with learning how to define your workflow. So my first step was to move development work out of the native environment of my host machine and into a virtual machine on that host. The second step is to run the software on my target "connected device", here a Raspi, within a virtual environment on Raspbian. Whenever an attack on my setup succeeds, I revert to the previous state of my virtual machines.
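The revert workflow can be sketched with VirtualBox's command-line tool, VBoxManage; "dev-vm" is a placeholder name for the development VM:

```shell
# Record a clean state before starting to experiment:
VBoxManage snapshot "dev-vm" take "known-good" --description "clean state"

# ...develop, experiment, possibly get compromised...

# Throw the compromised state away and return to the clean one:
VBoxManage controlvm "dev-vm" poweroff
VBoxManage snapshot "dev-vm" restore "known-good"
```

The same snapshot/restore cycle is available through the VirtualBox GUI; the CLI form just makes the workflow explicit.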
Since, by law, providers of "connected devices" can be sued if their devices are used in a DDoS attack, it pays to start a design with its security concept from the very beginning:
In 4Q16 ARM announced two new Cortex-M devices, the Cortex-M23 and Cortex-M33, the first an ARM Cortex-M0/M0+ class device and the other an ARM Cortex-M3/M4 class device. Freescale (now part of NXP), for example, has announced that its implementation, the "i.MX8", will be available to the general public in late 1Q17. In the case of ARM, these new devices implement what is called "TrustZone", and you can find valuable learning material on ARM's website. In general this new TrustZone functionality is part of the ARMv8 architecture. I look forward to what I call the "Raspberry Pi 4", which will hopefully have a SoC with cores that fully implement the TrustZone concept and a version of Linux that uses this functionality.
This graphic shows how TrustZone is organized. What is really "new" is this kind of security feature and the adaptation of the mature technology of virtualization to what are called "deeply embedded systems": the kind of microcontroller boards that connect to the Internet, as we often use in electronic systems.
A key concept and tool used to create virtual machines is the "hypervisor". It exists in two types, as shown in the graphic above. A type 2 hypervisor is the kind of setup we can use on a host machine running Windows or Linux with, e.g., "VirtualBox": the OS runs on the physical hardware as we are used to, and the virtual machine runs as an application. A type 1 hypervisor runs directly on the hardware, called "bare metal", and the virtual machines run on top of it.
The kind of virtualization shown as "Hypervisor Type 1" is better suited for deeply embedded systems and is more likely to make real-time coding possible. So on a Raspi, a board running Linux, the easy way is to use type 2 tools for virtualization, as long as real time is not a requirement. But if you run a connected device more like an Arduino, where resources are scarce, the type 1 approach is the one to take, running on hardware that, in the case of ARM, implements the ARMv8 architecture.
Another technique that is starting to play an important role is containerization! The tool with the dominant market share here is "Docker"! The graphic compares virtualization and containerization. Again, the role of Linux is key, in my personal opinion!
With type 1 hypervisors you use the terms Domain0 and DomainU.
Domain0 is the privileged OS, Linux, and its job is to manage the type 1 hypervisor and the so-called "guest virtual machines", or DomainU, which are created from there. Each guest OS believes it is running alone on the hardware, because the type 1 hypervisor ensures all calls to the hardware are routed properly. Architectures like TrustZone offer multiple sets of registers involved in executing calls by the drivers of the guest OS to the hardware. To achieve this you have an additional privilege level that alone is entitled to use the TrustZone hardware resources, while the privileged Linux drivers operate at a privilege level between the one used for TrustZone and the non-privileged user applications.
Now, Docker and the container functionality it offers use a Linux kernel that is part of the Docker engine. A container is a structure that holds everything it needs to execute the application inside it, allowing you to move the application, within its container, to any environment that supports and runs Docker. Its isolation is more like the one we know for applications in an OS like Windows or Linux.
It is interesting to note that Docker exists natively only in the Pro and Enterprise versions of Windows 10. And I also found it interesting that Docker for Windows includes a Linux kernel! By the way, Intel also offers virtualization support in its hardware; as the dominant server CPU vendor over decades, its virtualization knowledge is very mature, as is Microsoft's.
Containers blend beautifully with the "microservices" software concept, where a container is the receptacle for one microservice. For those of us more interested in electronics, and so in deeply embedded systems, the key fact is that containers demand far fewer system resources than virtualization, as described above! Real-time behavior benefits from this as well. It takes just a couple of minutes after installing the Docker environment on your Windows host machine to create a container and have it run a "Hello World" application. That alone makes it worth looking into!
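That couple-of-minutes "Hello World" is literally two commands, assuming the Docker engine is installed and running on the host:

```shell
# Pull and run Docker's official test image; it prints a greeting
# and exits, confirming the engine works end to end:
docker run hello-world

# List all containers, including the one that just exited:
docker ps -a
```

The hello-world image is tiny, so this also gives a first feel for how lightweight a container is compared to a full virtual machine.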
So my personal path to the smallest possible vulnerability in my experiments is a "hybrid" approach; that is, by the way, what is currently pursued to combine the benefits of virtualization and containerization! Learning about containers, Linux and virtualization, it becomes evident that configuring the containers and virtual machines away from their default values achieves the lowest possible vulnerability for my experimental setup:
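As one sketch of what "away from the defaults" can mean for a container (not a complete hardening policy), Docker lets you drop all Linux capabilities, forbid privilege escalation, and make the container's filesystem read-only:

```shell
# Run a throwaway Alpine container with tighter-than-default settings:
docker run --rm \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --read-only \
  alpine echo "locked-down hello"
```

If the application inside needs a writable scratch area despite --read-only, a tmpfs mount can be added for just that path, keeping the rest of the filesystem immutable.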
I work on my host machine running Windows 10 Pro using a virtual machine in which I execute Docker with its Linux kernel. This host machine communicates via WLAN with my experimental setups, Raspis whose IP addresses are stored on a DNS server. As you can get Raspis at a two-digit cost, they do not increase my experiment costs in a critical way, but they let me benefit from the vulnerability-limiting effect of working in a Linux environment with the tools presented here. From each of the Raspis I communicate via GPIO with Arduino or similar controllers, like those from NXP or those integrated into the Trinamic IDE for stepper motors.
I hope I have been able to share with you that learning Linux is not difficult, but that using hardware with Linux connected to the Internet demands, in my personal opinion, having suffered serious consequences myself, learning how to set up and use Linux in as safe a way as possible. I hope I have also been able to present the two concepts which, combined, make an attack as expensive as possible for potential attackers. Since I have nothing economically valuable available, the effort to attack me successfully would be hard to justify! I plan to apply the topics addressed here to a model sailboat I am working on. The connectivity available today, and the revolutionary evolution of the related technologies driven by IoT, IIoT, autonomous cars and many other areas, have convinced me to invest in learning how to work with as little vulnerability as possible.