A network of virtual machines can be used for practical testing and research. Sandia National Laboratories recently announced that they have created a virtual network of three hundred thousand virtual machines, each running the Android operating system, on a single large supercomputer. Previously, Sandia researchers also built a network of one hundred thousand virtual machines running Microsoft Windows, and another network of one million virtual machines running the Linux operating system.
The researchers are using very large networks of virtual machines to investigate the behavior of malicious bot-nets and other cyber-threats to computer networks and to mobile networks. Using many virtual machines that each run a full operating system allows the researchers to model real-world behavior and identify unexpected behaviors in large bot-nets. This is an extreme example of using virtual machines to build useful test laboratories for research or functional testing. Smaller-scale projects, using tens or hundreds of virtual machines running on cheap hardware, would be easy and inexpensive for any interested individual or company to implement.
The basic concept of Sandia’s lab configuration can be applied to the much smaller-scale work I am doing, investigating open-source routing and network simulation. I will create a network of up to ten virtual machines on a single host, or more if the host computer can handle the load. Each virtual machine will run a fully-configured operating system with its own network routing software, so the network of virtual machines will model real-world behavior. This model offers the ability to investigate real-world functionality, but not necessarily with real-time performance. Like the researchers at Sandia National Laboratories, I am interested in investigating the functionality and behavior of Linux routing and switching software, so the lack of real-time performance is not an issue as long as performance is fast enough that the timers associated with the routing protocols running on the virtual machines do not start to expire.
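To give a sense of what such a small virtual router network looks like in practice, here is a minimal sketch that generates QEMU/KVM launch commands for a handful of virtual routers joined by Linux bridges. The topology, image paths, bridge names, and MAC address scheme are my own placeholders for illustration, not a tested configuration or the method used by any particular simulation tool.

```python
# Sketch: build qemu-system command lines for a small network of
# virtual routers. Each bridge in a router's link list becomes one
# virtual NIC attached to that Linux bridge. All names and paths
# below are hypothetical placeholders.

def qemu_command(idx, name, image, bridges, mem_mb=256):
    """Return the argv list to launch one virtual router under KVM."""
    cmd = ["qemu-system-x86_64", "-enable-kvm",
           "-name", name, "-m", str(mem_mb), "-hda", image]
    for i, br in enumerate(bridges):
        # Locally-administered MAC, unique per (router, interface).
        mac = f"52:54:00:00:{idx:02x}:{i:02x}"
        cmd += ["-netdev", f"bridge,id=net{i},br={br}",
                "-device", f"virtio-net-pci,netdev=net{i},mac={mac}"]
    return cmd

# Three routers in a chain: r1 -- br0 -- r2 -- br1 -- r3
topology = {
    "r1": ["br0"],
    "r2": ["br0", "br1"],
    "r3": ["br1"],
}

for idx, (router, links) in enumerate(topology.items()):
    print(" ".join(qemu_command(idx, router, f"/vm/{router}.qcow2", links)))
```

The middle router, r2, has one interface on each bridge, so once its guest OS enables IP forwarding and runs routing software, traffic between r1 and r3 must pass through it, which is exactly the kind of behavior this sort of lab lets you observe.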
Fortunately, I do not have to deal with the issues caused by having to start up, manage, and shut down a million virtual machines. I am using standard Linux tools that are managed by the open-source network simulation tools I am investigating. The specific methods the Sandia researchers used will probably not be applicable to my investigations, although when they publish more details about their work it will be interesting to see how they manage the networking between one million virtual machines.
One complication I introduced, different from what the Sandia researchers did, is to build this network of many virtual machines inside one virtual machine that runs on my computer’s native operating system. Some would say this is an unnecessary complication, but I think it offers benefits in small-scale simulations, such as a simple way to save the state of a project, that outweigh the costs, such as the slower overall performance I outlined in an earlier blog post.
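Running KVM guests inside another virtual machine depends on the host's KVM module exposing nested virtualization. As a quick sanity check, something like the following sketch reads the standard kernel parameter locations for Intel and AMD hosts; the paths are the usual ones on Linux, but treat this as an illustration rather than a definitive probe.

```python
# Sketch: check whether the host kernel's KVM module has nested
# virtualization enabled, which VMs-inside-a-VM need to perform well.
import os

# Standard sysfs locations for the kvm nested parameter on x86 hosts.
NESTED_PARAMS = [
    "/sys/module/kvm_intel/parameters/nested",
    "/sys/module/kvm_amd/parameters/nested",
]

def nested_enabled(value):
    """Interpret the kernel parameter text ('Y' or '1' mean enabled)."""
    return value.strip() in ("Y", "y", "1")

def host_supports_nesting(paths=NESTED_PARAMS):
    for path in paths:
        if os.path.exists(path):
            with open(path) as f:
                return nested_enabled(f.read())
    return False  # no KVM module loaded, or not an x86 KVM host

if __name__ == "__main__":
    state = "on" if host_supports_nesting() else "off"
    print("nested virtualization:", state)
```

If nesting is off, newer kernels accept `kvm_intel.nested=1` (or the AMD equivalent) as a module or boot parameter; without it, the inner virtual machines fall back to much slower software emulation.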
The research team at Sandia National Laboratories published an overview of their work in which they stated that they intend to release, as open-source, the tools they created to manage their extremely large network of virtual machines. These tools could provide another potentially useful network simulation environment for small-scale projects. However, I expect that the researchers’ focus on inter-machine messaging, large-scale management, fault tolerance, and other high-performance computing issues will mean the tools are too specialized for small-scale personal projects. Still, I look forward to the tools being released towards the end of 2012.
In their overview, the researchers state that they are working on booting up ten million Linux virtual machines. That will be something!