Computers and modern gadgets

Annotation: Information technologies have brought many useful and interesting things into the life of modern society. Every day, inventive and talented people come up with new applications for computers as effective tools for production, entertainment and collaboration. A wide variety of software, hardware, technologies and services keeps improving the convenience and speed of working with information. It is becoming ever harder to single out the truly useful technologies from the stream pouring down on us and to learn to use them with maximum benefit. This lecture will discuss another extremely promising and genuinely effective technology that is rapidly entering the world of computers - virtualization technology, which occupies a key place in the concept of cloud computing.

The purpose of this lecture is to introduce virtualization technologies: their terminology, types and main advantages; to review the main solutions of the leading IT vendors; and to examine the features of the Microsoft virtualization platform.

Virtualization technologies

According to statistics, the average level of processor utilization for servers running Windows does not exceed 10%; for Unix systems this figure is better, but on average still does not exceed 20%. The low efficiency of server utilization is explained by the “one application - one server” approach that has been widely used since the early 90s: each time a new application is deployed, the company purchases a new server. In practice this obviously means rapid growth of the server fleet and, as a consequence, rising costs of IT administration, energy consumption and cooling, as well as the need for additional premises to house ever more servers and for additional server OS licenses.
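
As a back-of-the-envelope illustration of this consolidation argument, here is a minimal sketch in which all figures (server count, utilization levels) are assumptions chosen to match the percentages quoted above, not measured data:

```python
# Back-of-the-envelope consolidation estimate; all figures are assumptions
# chosen to match the utilization levels quoted above, not measured data.
import math

servers = 10                # existing "one application - one server" machines
avg_utilization = 0.10      # ~10% average CPU utilization per server
target_utilization = 0.65   # load level at which a consolidated host is run

# Total useful work, expressed in "fully busy server" units
useful_work = servers * avg_utilization

# Number of consolidated virtualization hosts needed to carry that work
hosts_needed = math.ceil(useful_work / target_utilization)

print(f"{servers} lightly loaded servers -> {hosts_needed} virtualization host(s)")
# Output: 10 lightly loaded servers -> 2 virtualization host(s)
```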

Virtualization of physical server resources allows you to flexibly distribute them between applications, each of which “sees” only the resources allocated to it and “believes” that a separate server has been allocated to it; in other words, the “one server - several applications” approach is implemented without reducing the performance, availability or security of server applications. In addition, virtualization solutions make it possible to run different operating systems within such partitions by emulating their system calls to the server's hardware resources.


Fig. 2.1.

Virtualization is based on the ability of one computer to perform the work of several computers by distributing its resources across multiple environments. With virtual servers and virtual desktops, you can host multiple operating systems and multiple applications in a single location. Thus, physical and geographical restrictions cease to have any meaning. In addition to saving energy and reducing costs through more efficient use of hardware resources, virtual infrastructure provides high levels of resource availability, more efficient management, enhanced security, and improved disaster recovery.

In a broad sense, virtualization is the hiding of the real implementation of a process or object from the one who uses it, behind a more convenient representation. The product of virtualization is something convenient to use that in fact has a more complex, or completely different, structure from the one perceived when working with the object. In other words, representation is separated from implementation. Virtualization is designed to abstract software from the hardware.

In computer technology, the term “virtualization” usually refers to the abstraction of computing resources and the provision to the user of a system that “encapsulates” (hides) its own implementation. Simply put, the user works with a convenient representation of the object, and it does not matter to him how the object is structured in reality.

Nowadays, the ability to run multiple virtual machines on a single physical machine is of great interest among computer professionals, not only because it increases the flexibility of the IT infrastructure, but also because virtualization actually saves money.

The history of the development of virtualization technologies goes back more than forty years. IBM was the first to think about creating virtual environments for various user tasks, then still on mainframes. In the 60s of the last century, virtualization was of purely scientific interest and was an original solution for isolating computer systems within a single physical computer. After the advent of personal computers, interest in virtualization weakened somewhat due to the rapid development of operating systems, which placed adequate demands on the hardware of that time. However, the rapid growth of computer hardware power in the late nineties of the last century forced the IT community to once again recall the technologies of virtualization of software platforms.

In 1999, VMware introduced x86 system virtualization technology as an effective means of transforming x86-based systems into a shared, general-purpose hardware infrastructure that provides complete isolation, portability and a wide choice of operating systems for application environments. VMware was one of the first to make a serious bet exclusively on virtualization, and as time has shown, this turned out to be fully justified. Today VMware offers a comprehensive fourth-generation virtualization platform, VMware vSphere 4, that includes tools for both the individual PC and the data center. The key component of this software package is the VMware ESX Server hypervisor. Later, companies such as Parallels (formerly SWsoft), Oracle (Sun Microsystems) and Citrix Systems (XenSource) joined the “battle” for a place in this fashionable area of information technology.

Microsoft entered the virtualization market in 2003 with the acquisition of Connectix, releasing its first product, Virtual PC, for desktop PCs. Since then, it has steadily expanded its offerings in this area and has today almost completed the formation of a virtualization platform, which includes such solutions as Windows Server 2008 R2 with the Hyper-V component, Microsoft Application Virtualization (App-V), Microsoft Virtual Desktop Infrastructure (VDI), Remote Desktop Services and System Center Virtual Machine Manager.

Today, virtualization technology providers offer reliable and easy-to-manage platforms, and the market for these technologies is booming. According to leading experts, virtualization is now one of the three most promising computer technologies. Many experts predict that by 2015, about half of all computer systems will be virtual.

The increased interest in virtualization technologies today is not accidental. The computing power of current processors is growing rapidly, and the question is not so much what to spend this power on; rather, the current “fashion” for dual-core and multi-core systems, which has already reached personal computers (laptops and desktops), is perfectly suited to realizing the rich potential of virtualizing operating systems and applications, taking the ease of using a computer to a new qualitative level. Virtualization technologies are becoming one of the key components (including in marketing terms) of the newest and future processors from Intel and AMD, and of operating systems from Microsoft and a number of other companies.

Benefits of Virtualization

Here are the main advantages of virtualization technologies:

  1. Efficient use of computing resources. Instead of 3, or even 10, servers loaded at 5-20%, you can use one loaded at 50-70%. Among other things, this saves energy and significantly reduces financial investment: one high-end server is purchased to perform the functions of 5-10 servers. Virtualization achieves significantly more efficient resource utilization because it pools standard infrastructure resources and overcomes the limitations of the legacy one-application-per-server model.
  2. Reducing infrastructure costs: Virtualization reduces the number of servers and associated IT equipment in a data center. As a result, maintenance, power and cooling requirements for assets are reduced, and much less money is spent on IT.
  3. Reduced software costs. Some software manufacturers have introduced separate licensing schemes specifically for virtual environments. For example, by purchasing one license for Microsoft Windows Server 2008 Enterprise, you get the right to use it simultaneously on one physical server and four virtual ones (within the same server), while Windows Server 2008 Datacenter is licensed only by the number of processors and can be used simultaneously on an unlimited number of virtual servers.
  4. Increased flexibility and responsiveness of the system: Virtualization offers a new method for managing IT infrastructure and helps IT administrators spend less time on repetitive tasks such as provisioning, configuration, monitoring and maintenance. Many system administrators have run into trouble when a server crashes: you cannot simply take out the hard drive, move it to another server and start everything as before; instead there is reinstallation, searching for drivers, configuration, launching - all of which takes time and resources. When using a virtual server, instant launch is possible on any hardware, and if no suitable server is available, you can download a ready-made virtual machine with an installed and configured server from libraries maintained by the companies that develop hypervisors (virtualization programs).
  5. Incompatible applications may run on the same computer. When using virtualization on one server, it is possible to install Linux and Windows servers, gateways, databases and other applications that are completely incompatible within the same non-virtualized system.
  6. Increase application availability and ensure business continuity: With reliable backup and migration of entire virtual environments without service interruptions, you can reduce planned downtime and ensure rapid system recovery in critical situations. The “fall” of one virtual server does not lead to the loss of the remaining virtual servers. In addition, in the event of a failure of one physical server, it is possible to automatically replace it with a backup server. Moreover, this happens unnoticed by users without rebooting. This ensures business continuity.
  7. Easy archiving options. Since a virtual machine's hard drive is typically represented as a file of a specific format located on some physical medium, virtualization makes it possible to archive and back up the entire virtual machine simply by copying this file to backup media (a small copy-based backup sketch follows after this list). The ability to completely restore the server from the archive is another great feature; you can also bring the server up from the archive alongside the current one, without destroying it, and inspect its state for the past period.
  8. Increasing infrastructure manageability: the use of centralized management of virtual infrastructure allows you to reduce time for server administration, provides load balancing and “live” migration of virtual machines.
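
To illustrate point 7 above, here is a minimal, hedged sketch of the copy-based backup described there. The disk path, image format and backup location are illustrative assumptions, and the guest should be shut down (or snapshotted) before its disk file is copied so that the image is consistent.

```python
# Minimal sketch: archiving a virtual machine by copying its disk image file.
# Paths and the .qcow2 format are assumptions for illustration; shut the VM
# down (or take a snapshot) before copying so the image is consistent.
import shutil
from datetime import datetime
from pathlib import Path

vm_disk = Path("/var/lib/libvirt/images/app-server.qcow2")  # hypothetical disk image
backup_dir = Path("/mnt/backup/vm-archive")                 # hypothetical backup target

backup_dir.mkdir(parents=True, exist_ok=True)
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
target = backup_dir / f"{vm_disk.stem}-{stamp}{vm_disk.suffix}"

shutil.copy2(vm_disk, target)   # copy2 also preserves file metadata
print(f"Archived {vm_disk} -> {target}")
```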

We will call a virtual machine a software or hardware environment that hides the real implementation of a process or object behind its visible representation.

A virtual machine is a completely isolated software container that runs its own OS and applications, just like a physical computer. A virtual machine behaves just like a physical computer and contains its own virtual (i.e., software-defined) RAM, hard drive and network adapter.

The OS cannot tell a virtual machine from a physical one, and neither can applications or other computers on the network. Even the virtual machine itself considers itself a “real” computer. Nevertheless, virtual machines consist entirely of software components and include no hardware, which gives them a number of unique advantages over physical systems.


Fig. 2.2.

Let's look at the main features of virtual machines in more detail:

  1. Compatibility. Virtual machines are generally compatible with all standard computers. Like a physical computer, a virtual machine runs its own guest operating system and runs its own applications. It also contains all the components standard for a physical computer (motherboard, video card, network controller, etc.). Therefore, virtual machines are fully compatible with all standard operating systems, applications and device drivers. A virtual machine can be used to run any software suitable for the corresponding physical computer.
  2. Isolation. Virtual machines are completely isolated from each other, as if they were physical computers. Virtual machines can share the physical resources of a single computer and yet remain completely isolated from each other, as if they were separate physical machines. For example, if four virtual machines are running on one physical server and one of them fails, the availability of the remaining three machines is not affected. Isolation is an important reason why applications running in a virtual environment are much more available and secure than applications running on a standard, non-virtualized system.
  3. Encapsulation. Virtual machines completely encapsulate the computing environment. A virtual machine is a software container that bundles, or “encapsulates,” a complete set of virtual hardware resources, as well as the OS and all its applications, in a software package. Encapsulation makes virtual machines incredibly mobile and easy to manage. For example, a virtual machine can be moved or copied from one location to another just like any other program file. In addition, the virtual machine can be stored on any standard storage medium: from a compact USB flash memory card to enterprise storage networks.
  4. Hardware independence. Virtual machines are completely independent of the underlying physical hardware on which they run. For example, for a virtual machine with virtual components (CPU, network card, SCSI controller), you can configure settings that are completely different from the physical characteristics of the underlying hardware. Virtual machines can even run different operating systems (Windows, Linux, etc.) on the same physical server. Combined with the properties of encapsulation and compatibility, hardware independence provides the ability to freely move virtual machines from one x86-based computer to another without changing device drivers, OS, or applications. Hardware independence also makes it possible to run a combination of completely different operating systems and applications on one physical computer.

Let's look at the main types of virtualization, such as:

  • server virtualization (full virtualization and paravirtualization)
  • virtualization at the operating system level,
  • application virtualization,
  • presentation virtualization.

Specialists at Kosmonova work with various virtualization systems every day, both on the company's own cloud and in design projects. Over this time we have worked with a considerable number of virtualization systems and identified the strengths and weaknesses of each of them. In this article, we have collected our engineers' opinions on the most common virtualization systems and their brief characteristics. If you are thinking about building a private cloud and are considering various virtualization systems for this task, this article is for you.

First, let's figure out what a virtualization system is and why it is needed. Virtualization of physical machines (servers, PCs, etc.) allows you to divide the power of one physical device between several virtual machines. Thus, these virtual machines can have their own operating system and software, in no way dependent on neighboring virtual machines. Today there are many virtualization systems, each of them has its own characteristics, so let's look at each of them separately.

VMware vSphere - the flagship product of VMware, the undisputed leader in virtualization market share for many years in a row. It has wide functionality and is specially created for data centers providing cloud solutions and companies building private clouds of various sizes. It has a well-thought-out interface and a large amount of technical documentation. If you have little experience with virtualization, this system will be a good choice for you. Licensed by the number of physical processors in the cloud, regardless of the number of cores. Due to the extensive functionality and many modules, this system is quite demanding in terms of resources required for its operation.

VMware ESXi is a free counterpart of VMware vSphere. Since this hypervisor is free, its functionality is more modest, but it is quite sufficient for most typical tasks of virtualization and managing a private cloud. It is also quite easy to use.

Hyper-V is a Microsoft product developed as an addition to Windows Server, starting with the 2008 version. It also exists as a separate product, but relies on Windows Server to operate. This hypervisor is quite easy to configure and operate and, of course, supports all versions of Windows for guest machines, but the manufacturer does not guarantee the operation of many Linux distributions. Note that the hypervisor itself is distributed under a free license, but it requires a paid Windows OS to operate.

OpenVZ is a completely free virtualization system built on the Linux kernel. Like most Linux-based systems, it shows good performance and resource consumption and works perfectly with any Linux distribution as a guest OS. However, it does not support Windows, so this virtualization system cannot be considered universal.

KVM is a virtualization system also based on the Linux kernel and distributed under a free license. It has very good efficiency in terms of the resources it consumes, offers extensive functionality and is quite universal with respect to guest operating systems, supporting virtually all of them. Configuring and supporting it in its pure form requires certain knowledge and skills in working with Unix systems; however, many graphical front ends are available as add-ons to the hypervisor, with licensing options ranging from free to paid versions.

Xen is an open-source product originally developed at the University of Cambridge. Most components are moved outside the hypervisor, which allows good efficiency to be achieved. Along with hardware virtualization, it also supports a paravirtualization mode. Xen supports running most existing operating systems.

LXC is a fairly new operating-system-level virtualization system that allows you to run multiple isolated instances of a Linux environment on one physical machine. The peculiarity of this system is that it operates not with virtual servers but with applications that share a common OS kernel while remaining isolated from each other, which gives the highest efficiency in resource consumption.
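
Before the summary table, here is a minimal sketch of how several of the hypervisors just described (KVM, Xen and LXC among others) can be driven through a common management API, libvirt. The Python libvirt bindings and the connection URI used below ("qemu:///system", i.e. a local KVM/QEMU host) are assumptions for illustration, not a requirement of any particular product.

```python
# Minimal sketch: listing guests on a virtualization host via libvirt,
# a management API commonly used with KVM, Xen and LXC.
# Assumes the libvirt-python package and a running libvirt daemon.
import libvirt

conn = libvirt.open("qemu:///system")   # other drivers: e.g. "xen:///system", "lxc:///"
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "stopped"
        mem_kib = dom.maxMemory()       # configured maximum memory, in KiB
        print(f"{dom.name():20s} {state:8s} max mem: {mem_kib} KiB")
finally:
    conn.close()
```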

Virtualization system | Supported OS | Advantages | Flaws | License
--- | --- | --- | --- | ---
VMware vSphere | Win/Lin | Easy to use. Wide functionality | Resource consumption | Paid, by number of processors
VMware ESXi | Win/Lin | Easy to use | Not the widest functionality | Free
OpenVZ | Linux | Efficient resource consumption | Only Linux is supported | Free
Xen | Win/Lin | Efficient resource consumption. Supports all OS | — | Free
Hyper-V | Windows | Easy to use | Only Windows is supported. Resource consumption | Free, but runs on a paid OS
KVM | Win/Lin | High efficiency. Open source | Requires knowledge of Unix systems to configure and manage | Free
LXC | Linux | High efficiency | Does not support Windows | Free

Let us also remind you that in the Kosmonova cloud you can implement an infrastructure of any complexity with minimal time expenditure, without plunging into the intricacies of how virtualization systems and hardware operate. Both ready-made solutions and cloud servers are available in the Kosmonova cloud to solve your business problems.

The topic of virtualization is very broad, and there are many nuances in the operation of the listed virtualization systems, along with many variations in hardware design. In this article we do not favor any particular system, but present their general characteristics to help you select a suitable one at the initial stage.

The history of virtualization technologies goes back more than forty years. However, after a period of their triumphant use in the 70s and 80s of the last century, primarily on IBM mainframes, this concept faded into the background when creating corporate information systems. The fact is that the very concept of virtualization is associated with the creation of shared computing centers, with the need to use a single set of hardware to form several different logically independent systems. And since the mid-80s, a decentralized model of organizing information systems based on mini-computers and then x86 servers began to dominate in the computer industry.

Virtualization for x86 architecture

In the personal computers that appeared later, the problem of virtualizing hardware resources seemingly did not exist by definition, since each user had an entire computer with their own OS at their disposal. But as PC power increased and the scope of x86 systems expanded, the situation quickly changed. The “dialectical spiral” of development took its next turn, and at the turn of the century another cycle of strengthening centripetal forces began, concentrating computing resources. At the beginning of this decade, against the background of enterprises' growing interest in increasing the efficiency of their computing resources, a new stage in the development of virtualization technologies began, now mainly associated with the x86 architecture.

It should be immediately emphasized that although there seemed to be nothing previously unknown in the ideas of x86 virtualization in theoretical terms, we were talking about a qualitatively new phenomenon for the IT industry compared to the situation 20 years ago. The fact is that in the hardware and software architecture of mainframes and Unix computers, virtualization issues were immediately resolved at a basic level. The x86 system was not built with the expectation of working in data center mode, and its development in the direction of virtualization is a rather complex evolutionary process with many different options for solving the problem.

Another and perhaps even more important point is the fundamentally different business models for the development of mainframes and x86. In the first case, we are actually talking about a single-vendor software and hardware complex to support a generally rather limited range of application software for a not very wide range of large customers. In the second, we are dealing with a decentralized community of equipment manufacturers, basic software providers and a huge army of application software developers.

The use of x86 virtualization tools began in the late 90s with workstations: along with the growth in the number of client OS versions, the number of people (software developers, technical support specialists, software experts) who needed several copies of various OSes on one PC at the same time was constantly growing.

Virtualization for server infrastructure began to be used a little later, primarily in connection with consolidating computing resources. Here two independent directions formed immediately:

  • support for heterogeneous operating environments (including for running legacy applications). This case most often occurs within corporate information systems. Technically, the problem is solved by simultaneously running several virtual machines on one computer, each of which includes an instance of an operating system. This mode is implemented using two fundamentally different approaches: full virtualization and paravirtualization;
  • support for homogeneous computing environments, which is most typical for application hosting by service providers. Virtual machines can of course be used here as well, but it is much more effective to create isolated containers based on a single OS kernel.

The next stage in the life of x86 virtualization technologies started in 2004-2006 and was associated with the beginning of their mass use in corporate systems. Whereas developers had previously been mainly concerned with creating technologies for executing virtual environments, the tasks of managing these solutions and integrating them into the overall corporate IT infrastructure now began to come to the fore. At the same time, demand from personal users grew noticeably (in the 90s these were developers and testers; now we are talking about end users, both professional and home).

To summarize the above, we can highlight the following main scenarios for the use of virtualization technologies by customers:

  • software development and testing;
  • modeling the operation of real systems on research stands;
  • consolidation of servers in order to increase the efficiency of equipment use;
  • consolidation of servers to solve the problems of supporting legacy applications;
  • demonstration and study of new software;
  • deployment and updating of application software in the context of existing information systems;
  • work of end users (mainly home users) on PCs with heterogeneous operating environments.

Basic software virtualization options

We have already said earlier that the problems of developing virtualization technologies are largely related to overcoming the inherited features of the x86 software and hardware architecture. And there are several basic methods for this.

Full virtualization (Full, Native Virtualization). Unmodified instances of guest operating systems are used, and to support their operation a common emulation layer runs on top of the host operating system, which is a regular OS (Fig. 1). This technology is used, in particular, in VMware Workstation, VMware Server (formerly GSX Server), Parallels Desktop, Parallels Server, MS Virtual PC, MS Virtual Server and Virtual Iron. The advantages of this approach include relative ease of implementation, versatility and reliability of the solution; all management functions are taken over by the host OS. Its disadvantages are high additional overhead on the hardware resources used, no account taken of the specific features of the guest OS, and less flexibility in the use of hardware than needed.

Paravirtualization. The guest OS kernel is modified so that it includes a new set of APIs through which it can work directly with the hardware without conflicting with other virtual machines (VMs; Fig. 2). In this case there is no need for a full-fledged OS as the host software; its functions are performed by a special system called a hypervisor. This option is today the most topical direction in the development of server virtualization technologies and is used in VMware ESX Server, Xen (and solutions from other vendors based on this technology) and Microsoft Hyper-V. The advantages of this technology are that no host OS is required - VMs run practically on bare metal - and hardware resources are used efficiently. The disadvantages are the complexity of implementing the approach and the need to develop a specialized hypervisor OS.

Virtualization at the OS kernel level (operating-system-level virtualization). This option involves using a single host OS kernel to create independent parallel operating environments (Fig. 3). For the guest software, only its own network and hardware environment is created. This option is used in Virtuozzo (for Linux and Windows), OpenVZ (a free version of Virtuozzo) and Solaris Containers. Advantages: high efficiency in the use of hardware resources, low technical overhead, excellent manageability, and minimal license costs. Disadvantage: only homogeneous computing environments can be implemented.
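
The mechanism behind this option can be illustrated with the kernel facilities it relies on. The sketch below is not Virtuozzo or OpenVZ themselves; it only demonstrates, under the assumption of a Linux host with root privileges and the util-linux unshare tool, how a single kernel can host an environment with its own process list and hostname.

```python
# Illustration of the single-kernel isolation idea behind OS-level
# virtualization (not an OpenVZ/Virtuozzo implementation): run a shell in
# new PID, mount and UTS (hostname) namespaces. Requires Linux and root.
import subprocess

subprocess.run([
    "unshare", "--pid", "--fork", "--mount-proc", "--uts",
    "sh", "-c", "hostname container-demo; hostname; ps ax"
], check=True)
# Inside the namespaces, the changed hostname and the short process list
# are visible only to this environment, yet only one kernel is running.
```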

Application virtualization implies the use of a model of strong isolation of application programs with controlled interaction with the OS, in which each application instance and all its main components are virtualized: files (including system files), the registry, fonts, INI files, COM objects and services (Fig. 4). The application is executed without an installation procedure in its traditional sense and can be launched directly from external media (for example, from flash cards or from network folders). From an IT department's perspective, this approach has obvious benefits: it speeds up the deployment and management of desktop systems and minimizes not only conflicts between applications but also the need for application compatibility testing. This virtualization option is used in the Sun Java Virtual Machine, Microsoft Application Virtualization (formerly called SoftGrid), Thinstall (which became part of VMware in early 2008), and Symantec/Altiris.

Questions about choosing a virtualization solution

Saying that “product A is a software virtualization solution” is not nearly enough to understand the real capabilities of “A”. For that, you need to take a closer look at the various characteristics of the products offered.

The first of them is related to the support of various operating systems as host and guest systems, as well as the ability to run applications in virtual environments. When choosing a virtualization product, the customer must also keep in mind a wide range of technical characteristics: the level of application performance loss as a result of the appearance of a new operating layer, the need for additional computing resources to operate the virtualization mechanism, and the range of supported peripherals.

In addition to creating mechanisms for executing virtual environments, systems management tasks are coming to the fore today: converting physical environments into virtual ones and vice versa, restoring a system in case of failure, transferring virtual environments from one computer to another, deploying and administering software, ensuring security, etc.

Finally, the cost indicators of the virtualization infrastructure used are important. It should be borne in mind that the main item in the cost structure may be not so much the price of the virtualization tools themselves as the possibility of saving on licenses for the base OS or business applications.

Key players in the x86 virtualization market

The market for virtualization tools began to take shape less than ten years ago and has today acquired a fairly definite shape.

Created in 1998, VMware is one of the pioneers in the use of virtualization technologies for x86 computers and today occupies a leading position in this market (according to some estimates, its share is 70-80%). Since 2004 it has been a subsidiary of EMC Corporation, but it operates independently on the market under its own brand. According to EMC, VMware's workforce grew during this time from 300 to 3,000 people, and sales doubled annually. According to officially announced information, the company's annual income (from the sale of virtualization products and related services) is now approaching $1.5 billion. These figures reflect well the overall increase in market demand for virtualization tools.

Today, VMware offers a comprehensive third-generation virtualization platform, VMware Virtual Infrastructure 3, that includes tools for both the individual PC and the data center. The key component of this software package is the VMware ESX Server hypervisor. Companies can also take advantage of the free VMware Server product, which is available for pilot projects.

Parallels is the new (as of January 2008) name of SWsoft, which is also a veteran of the virtualization technology market. Its key product is Parallels Virtuozzo Containers, an OS-level virtualization solution that allows you to run multiple isolated containers (virtual servers) on a single Windows or Linux server. To automate the business processes of hosting providers, the Parallels Plesk Control Panel tool is offered. In recent years, the company has been actively developing desktop virtualization tools - Parallels Workstation (for Windows and Linux) and Parallels Desktop for Mac (for Mac OS on x86 computers). In 2008, it announced the release of a new product - Parallels Server, which supports the server mechanism of virtual machines using different operating systems (Windows, Linux, Mac OS).

Microsoft entered the virtualization market in 2003 with the acquisition of Connectix, releasing its first product, Virtual PC, for desktop PCs. Since then, it has consistently expanded its offerings in this area and today has almost completed the formation of a virtualization platform, which includes the following components:

  • Server virtualization. Two different technological approaches are offered here: Microsoft Virtual Server 2005 and the new Hyper-V Server solution (currently in beta).
  • Virtualization for PCs. Performed using the free Microsoft Virtual PC 2007 product.
  • Application virtualization. For such tasks, the Microsoft SoftGrid Application Virtualization system (later renamed Microsoft Application Virtualization) is proposed.
  • Presentation virtualization. Implemented using Microsoft Windows Server Terminal Services; in general it is the long-known terminal access mode.
  • Integrated management of virtual systems. System Center Virtual Machine Manager, released late last year, plays a key role in solving these problems.

Sun Microsystems offers a multi-tiered set of technologies: traditional OS, resource management, OS virtualization, virtual machines and hard partitions. This sequence is built on the principle of increasing the level of application isolation (but at the same time reducing the flexibility of the solution). All Sun virtualization technologies are implemented within the Solaris operating system. In hardware terms, there is support for x64 architecture everywhere, although UltraSPARC-based systems are initially better suited for these technologies. Other operating systems, including Windows and Linux, can be used as virtual machines.

Citrix Systems Corporation is a recognized leader in remote application access infrastructures. It seriously strengthened its position in the field of virtualization technologies in 2007 by purchasing XenSource, the developer of Xen, one of the leading operating system virtualization technologies, for $500 million. Shortly before this deal, XenSource introduced a new version of its flagship product, XenEnterprise, based on the Xen 4 kernel. The acquisition caused some confusion in the IT industry, since Xen is an open-source project whose technologies underlie commercial products from vendors such as Sun, Red Hat and Novell. There is still some uncertainty about Citrix's position in the future promotion of Xen, including in marketing terms. The company's first product based on Xen technology, Citrix XenDesktop (for PC virtualization), is scheduled for release in the first half of 2008, with an updated version of XenServer expected after that.

In November 2007, Oracle announced its entry into the virtualization market, introducing software called Oracle VM for virtualizing server applications from this corporation and other manufacturers. The new solution includes an open source server software component and an integrated browser-based management console for creating and managing virtual pools of servers running on systems based on x86 and x86-64 architectures. Experts saw this as Oracle's reluctance to support users who run its products in virtual environments from other manufacturers. It is known that the Oracle VM solution is implemented based on the Xen hypervisor. The uniqueness of this move by Oracle lies in the fact that this seems to be the first time in the history of computer virtualization that the technology is actually tailored not to the operating environment, but to specific applications.

The virtualization market through the eyes of IDC

The x86 architecture virtualization market is at a stage of rapid development, and its structure has not yet been established. This complicates the assessment of its absolute indicators and comparative analysis of the products presented here. This thesis is confirmed by the IDC report “Enterprise Virtualization Software: Customer Needs and Strategies” published in November last year. Of greatest interest in this document is the option for structuring server virtualization software, in which IDC identifies four main components (Fig. 5).

Virtualization platform. It is based on a hypervisor, as well as basic resource management elements and an application programming interface (API). Key characteristics include the number of sockets and number of processors supported by one virtual machine, the number of guests available under one license, and the range of supported operating systems.

Managing virtual machines. Includes tools for managing host software and virtual servers. Today this is where the most noticeable differences in vendor offerings lie, both in the set of functions and in scaling. But IDC is confident that the capabilities of the leading vendors' tools will quickly level out, and physical and virtual servers will be managed through a single interface.

Virtual machine infrastructure. A wide range of additional tools that perform tasks such as software migration, automatic restart, load balancing of virtual machines, etc. According to IDC, it is the capabilities of this software that will decisively influence the choice of suppliers by customers, and it is at the level of these tools that the battle will be fought between vendors.

Virtualization solutions. A set of products that enable the above-mentioned core technologies to be linked to specific types of applications and business processes.

In terms of a general analysis of the market situation, IDC identifies three camps of participants. The first divide is between those who virtualize above the OS (SWsoft and Sun) and those who virtualize below the OS (VMware, XenSource, Virtual Iron, Red Hat, Microsoft, Novell). The first option allows you to create the most efficient solutions in terms of performance and additional resource costs, but implements only homogeneous computing environments. The second makes it possible to run several operating systems of different types on one computer. Within the second group, IDC draws another line separating suppliers of standalone virtualization products (VMware, XenSource, Virtual Iron) from manufacturers of operating systems that include virtualization tools (Microsoft, Red Hat, Novell).

From our point of view, the market structuring proposed by IDC is not very accurate. Firstly, for some reason IDC does not highlight the presence of two fundamentally different types of virtual machines - using a host OS (VMware, Virtual Iron, Microsoft) and a hypervisor (VMware, XenSource, Red Hat, Microsoft, Novell). Secondly, if we talk about the hypervisor, it is useful to distinguish between those who use their own core technologies (VMware, XenSource, Virtual Iron, Microsoft) and those who license others (Red Hat, Novell). And finally, it must be said that SWsoft and Sun have in their arsenal not only virtualization technologies at the OS level, but also tools for supporting virtual machines.

The term “virtualization” itself appeared in computer technology in the sixties of the last century, together with the term “virtual machine”, meaning the product of virtualizing a software and hardware platform. At that time, virtualization was more an interesting technical discovery than a promising technology. Developments in the field of virtualization in the sixties and seventies were carried out only by IBM. With the advent of the experimental paging system on the IBM M44/44X computer, the term “virtual machine” was used for the first time, replacing the earlier term “pseudo machine”. Then, on the IBM System 360/370 series mainframes, virtual machines could be used to preserve previous versions of operating systems. Until the end of the nineties, no one except IBM made serious use of this original technology. In the nineties, however, the prospects of the virtualization approach became obvious: with the growth in hardware capacity of both personal computers and server solutions, it would soon be possible to run several virtual machines on one physical platform.

In 1997, Connectix released the first version of Virtual PC for the Macintosh platform, and in 1998, VMware patented its virtualization techniques. Connectix was subsequently acquired by Microsoft and VMware by EMC, and both companies are now the two main potential competitors in the virtualization technology market in the future. Potential - because now VMware is the undisputed leader in this market, but Microsoft, as always, has an ace up its sleeve.

Since their inception, the terms “virtualization” and “virtual machine” have acquired many different meanings and are used in different contexts. Let's try to understand what virtualization really is.

The concept of virtualization can be divided into two fundamentally different categories:

  • platform virtualization
    The product of this type of virtualization is virtual machines - certain software abstractions that run on the platform of real hardware and software systems.
  • resource virtualization
    This type of virtualization aims to combine or simplify the presentation of hardware resources for the user and obtain certain user abstractions of equipment, namespaces, networks, etc.

Platform virtualization

Platform virtualization refers to the creation of software systems based on existing hardware and software systems, dependent or independent of them. The system that provides the hardware resources and software is called the host, and the systems it simulates are called guests. In order for guest systems to function stably on the host system platform, it is necessary that the host software and hardware be sufficiently reliable and provide the necessary set of interfaces to access its resources. There are several types of platform virtualization, each of which has its own approach to the concept of “virtualization”. The types of platform virtualization depend on how fully the hardware is simulated. There is still no consensus on virtualization terms, so some of the types of virtualization listed below may differ from what other sources provide.

Types of platform virtualization:

  1. Full emulation (simulation).

    With this type of virtualization, the virtual machine completely virtualizes all the hardware while keeping the guest operating system unchanged. This approach allows you to emulate different hardware architectures; for example, you can run virtual machines with x86 guests on platforms with other architectures (for example, on Sun RISC servers). For a long time, this type of virtualization was used to develop software for new processors even before they were physically available. Such emulators are also used for low-level debugging of operating systems. The main disadvantage of this approach is that emulating the hardware slows the guest system down very significantly, which makes working with it quite inconvenient; therefore, apart from developing system software and educational purposes, this approach is rarely used.

    Examples of products for creating emulators: Bochs, PearPC, QEMU (without acceleration), Hercules Emulator.

  2. Partial emulation (native virtualization).

    In this case, the virtual machine virtualizes only the necessary amount of hardware so that it can be run in isolation. This approach allows you to run guest operating systems designed only for the same architecture as the host. This way, multiple guest instances can be running simultaneously. This type of virtualization can significantly increase the performance of guest systems compared to full emulation and is widely used today. In addition, in order to increase performance, virtualization platforms using this approach use a special “layer” between the guest operating system and the hardware (hypervisor), allowing the guest system to directly access hardware resources. The hypervisor, also called the Virtual Machine Monitor, is one of the key concepts in the world of virtualization. The use of a hypervisor, which is a link between guest systems and hardware, significantly increases the performance of the platform, bringing it closer to the performance of the physical platform.

    The disadvantages of this type of virtualization include the dependence of virtual machines on the architecture of the hardware platform.

    Examples of native virtualization products: VMware Workstation, VMware Server, VMware ESX Server, Virtual Iron, Virtual PC, VirtualBox, Parallels Desktop and others.

  3. Partial virtualization, as well as “address space virtualization”.

    With this approach, the virtual machine simulates several instances of the hardware environment (but not all), in particular, the address space. This type of virtualization allows you to share resources and isolate processes, but does not allow you to separate instances of guest operating systems. Strictly speaking, with this type of virtualization, virtual machines are not created by the user, but some processes are isolated at the operating system level. Currently, many of the well-known operating systems use this approach. An example is the use of UML (User-mode Linux), in which the “guest” kernel runs in the user space of the base kernel (in its context).

  4. Paravirtualization.

    When using paravirtualization, there is no need to simulate the hardware, but instead (or in addition to this), a special application programming interface (API) is used to interact with the guest operating system. This approach requires modification of the guest system code, which, from the point of view of the Open Source community, is not so critical. Systems for paravirtualization also have their own hypervisor, and API calls to the guest system are called “hypercalls”. Many doubt the prospects of this virtualization approach, since at the moment all hardware manufacturers' decisions regarding virtualization are aimed at systems with native virtualization, and para-virtualization support must be sought from operating system manufacturers who have little faith in the capabilities of the tool they offer. Currently, paravirtualization providers include XenSource and Virtual Iron, which claim that paravirtualization is faster.

  5. Operating system level virtualization.

    The essence of this type of virtualization is the virtualization of a physical server at the operating system level in order to create several secure virtualized servers on one physical one. The guest system, in this case, shares the use of one kernel of the host operating system with other guest systems. A virtual machine is an environment for applications that run in isolation. This type of virtualization is used when organizing hosting systems, when it is necessary to support several virtual client servers within one kernel instance.

    Examples of OS level virtualization: Linux-VServer, Virtuozzo, OpenVZ, Solaris Containers and FreeBSD Jails.

  6. Application Layer Virtualization.

    This type of virtualization is not like all the others: if in the previous cases virtual environments or virtual machines are created to isolate applications, then in this case the application itself is placed in a container with the elements necessary for its operation: registry files, configuration files, user and system objects. The result is an application that does not require installation on a similar platform. When such an application is transferred to another machine and launched, the virtual environment created for the program resolves conflicts between it and the operating system, as well as other applications. This method of virtualization is similar to the behavior of interpreters of various programming languages (it is not for nothing that the interpreter, the Java Virtual Machine (JVM), also falls into this category).

    Examples of this approach are: Thinstall, Altiris, Trigence, Softricity.

Resource virtualization

When describing platform virtualization, we considered the concept of virtualization in a narrow sense, mainly applying it to the process of creating virtual machines. However, if we consider virtualization in a broad sense, we can come to the concept of resource virtualization, which generalizes approaches to creating virtual systems. Resource virtualization allows you to concentrate, abstract, and simplify management of groups of resources such as networks, data stores, and namespaces.

Types of resource virtualization:

  1. Combination, aggregation and concentration of components.

    This type of resource virtualization refers to the organization of several physical or logical objects into resource pools (groups) that provide convenient interfaces to the user. Examples of this type of virtualization:

    • multiprocessor systems, which appear to us as one powerful system,
    • RAID arrays and volume management tools that combine multiple physical disks into one logical disk,
    • virtualization of storage systems used in the construction of SAN (Storage Area Network) storage networks,
    • virtual private networks (VPN) and network address translation (NAT), which allow the creation of virtual spaces of network addresses and names.
  2. Computer clustering and distributed computing (grid computing).

    This type of virtualization includes techniques used to combine many individual computers into global systems (metacomputers) that jointly solve a common problem.

  3. Partitioning.

    When dividing resources in the process of virtualization, any one large resource is divided into several objects of the same type that are convenient for use. In storage area networks, this is called resource zoning.

  4. Encapsulation.

    Many people know this word as meaning an object that hides its implementation within itself. In relation to virtualization, it is the process of creating a system that provides the user with a convenient interface for working with it while hiding the details and complexity of its implementation. For example, a CPU's use of cache to speed up calculations is not reflected in its external interfaces.

Resource virtualization, unlike platform virtualization, has a broader and vaguer meaning and represents a lot of different approaches aimed at improving the user experience of systems as a whole. Therefore, further we will rely mainly on the concept of platform virtualization, since the technologies associated with this concept are currently the most dynamically developing and effective.

Where is virtualization used?

Operating system virtualization has advanced very well over the past three to four years, both technologically and in a marketing sense. On the one hand, it has become much easier to use virtualization products, they have become more reliable and functional, and on the other hand, many new interesting applications have been found for virtual machines. The scope of application of virtualization can be defined as “the place where there are computers,” but at the moment the following options for using virtualization products can be identified:

  1. Server consolidation.

    At the moment, applications running on servers in companies' IT infrastructure create a small load on the server's hardware resources (on average 5-15 percent). Virtualization allows you to migrate from these physical servers to virtual ones and place them all on one physical server, increasing its load to 60-80 percent and thereby increasing the utilization of equipment, which allows you to significantly save on equipment, maintenance and electricity.

  2. Application development and testing.

    Many virtualization products allow you to run multiple different operating systems simultaneously, allowing developers and software testers to test their applications on different platforms and configurations. Convenient tools for creating “snapshots” of the current state of the system with one mouse click, and equally simple restoration from that state, make it possible to create test environments for various configurations, which significantly increases the speed and quality of development (a small snapshot/rollback sketch is given after this list).

  3. Business use.

    This use case for virtual machines is the most extensive and creative. It includes everything that may be needed in the daily handling of IT resources in business. For example, based on virtual machines, you can easily create backup copies of workstations and servers (by simply copying a folder), build systems that provide minimal recovery time after failures, etc. This group of use cases includes all those business solutions that use the basic advantages of virtual machines.

  4. Using virtual workstations.

    With the advent of the era of virtual machines, it makes little sense to tie a workstation to specific hardware. Once you have created a virtual machine with your work or home environment, you can use it on any other computer. You can also use ready-made virtual machine templates (Virtual Appliances) that solve a specific problem (for example, an application server). This concept of virtual workstations can be implemented on the basis of hosting servers that run roaming user desktops (something similar to mainframes), and in the future the user will be able to take such a desktop along without having to synchronize data with a laptop. This use case also provides the ability to create secure user workstations that can be used, for example, to demonstrate software capabilities to a customer. You can limit the time a virtual machine can be used, and after this time the virtual machine will no longer start. This use case has great potential.
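
As promised in item 2 of the list above, here is a minimal sketch of the snapshot/rollback workflow for a test environment. It assumes a libvirt/KVM host with the virsh command-line tool and an existing guest named "test-vm"; both are assumptions for illustration, not the only way to do this.

```python
# Minimal sketch: snapshot a test VM, run destructive tests, roll it back.
# Assumes a libvirt/KVM host with virsh installed and a guest named "test-vm".
import subprocess

def virsh(*args):
    subprocess.run(["virsh", *args], check=True)

vm = "test-vm"

virsh("snapshot-create-as", vm, "clean-state", "state before the test run")
# ... run destructive tests inside the guest here ...
virsh("snapshot-revert", vm, "clean-state")   # restore the saved state in seconds
virsh("snapshot-list", vm)                    # list the snapshots kept for this guest
```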

All of the listed use cases for virtual machines are actually just areas of their application at the moment; over time, undoubtedly, new ways to make virtual machines work in various IT industries will appear. But let's see how things stand with virtualization now.

How virtualization works today

Today, IT infrastructure virtualization projects are being actively implemented by many leading system integrators that are authorized partners of virtualization platform providers. In the process of virtualizing an IT infrastructure, a virtual infrastructure is created - a set of systems based on virtual machines that ensure the functioning of the entire IT infrastructure and offer many new capabilities while preserving the existing pattern of IT activity. Vendors of the various virtualization platforms can point to successful projects implementing virtual infrastructure in large banks, industrial companies, hospitals and educational institutions. The many benefits of virtualization allow companies to save on maintenance, personnel and hardware, while improving business continuity, data replication and disaster recovery. The virtualization market is also filling up with powerful tools for managing, migrating and supporting virtual infrastructures, allowing the benefits of virtualization to be used to the fullest. Let's look at exactly how virtualization allows companies that implement virtual infrastructure to save money.

10 reasons to use virtual machines

  1. Hardware savings with server consolidation.

    Placing several virtual production servers on one physical server yields significant savings on hardware purchases. Depending on the virtualization platform vendor, features such as workload balancing, control over allocated resources, migration between physical hosts and backup are available. All of this translates into real savings on the maintenance, management and administration of the server infrastructure.

  2. Ability to support older operating systems for compatibility purposes.

    When a new version of an operating system is released, the old version can continue to run in a virtual machine until the new OS has been fully tested. Conversely, you can bring up a new OS in a virtual machine and try it out without affecting the main system.

  3. Ability to isolate potentially dangerous environments.

    If an application or component raises doubts about its reliability or security, it can be used inside a virtual machine without the risk of damaging vital system components. Such an isolated environment is also called a sandbox. In addition, you can create virtual machines restricted by security policies (for example, a machine that stops starting after two weeks).

  4. Ability to create required hardware configurations.

    Sometimes it is necessary to test application performance under a specific hardware configuration (processor time, amount of allocated RAM and disk space). It is quite difficult to force a physical machine into such constraints; with a virtual machine it takes a couple of mouse clicks (a configuration sketch follows this list).

  5. Virtual machines can present devices that you do not actually have.

    For example, many virtualization systems allow you to create virtual SCSI disks, virtual multi-core processors, etc. This can be useful for creating various types of simulations.

  6. Several virtual machines can be running simultaneously on one host, united in a virtual network.

    This feature provides practically unlimited possibilities for building models of virtual networks between several systems on one physical computer. It is especially useful when you need to simulate a distributed system consisting of several machines. You can also create several isolated user environments (for work, entertainment, Internet surfing), launch them and switch between them as needed.

  7. Virtual machines provide excellent opportunities for learning operating systems.

    You can create a repository of ready-to-use virtual machines with different guest operating systems and run them as needed for training purposes. They can be subjected to all sorts of experiments with impunity, since if the system is damaged, restoring it from a saved state will take a couple of minutes.

  8. Virtual machines increase mobility.

    The folder with the virtual machine can be moved to another computer, and the virtual machine can be launched there immediately. There is no need to create any images for migration, and, moreover, the virtual machine is decoupled from specific hardware.

  9. Virtual machines can be organized into "application packages".

    You can create a virtual environment for a specific use case (for example, a designer's machine or a manager's machine), install all the required software in it, and then deploy such desktops as needed.

  10. Virtual machines are more manageable.

    Using virtual machines significantly improves manageability for backups, virtual machine snapshots, and disaster recovery.
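
To make items 4-6 more concrete, below is a hedged sketch of how a fixed hardware configuration and a connection to a host-internal virtual network might be described with the libvirt Python bindings. The domain name, disk path, memory size and CPU count are illustrative assumptions, not values from the text, and a real configuration would include more elements.

    import libvirt

    # A minimal libvirt domain description: 2 virtual CPUs, 2 GiB of RAM,
    # one virtio disk and a NIC attached to the host's "default" virtual network.
    DOMAIN_XML = """
    <domain type='kvm'>
      <name>perf-test-vm</name>
      <memory unit='MiB'>2048</memory>
      <vcpu>2</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/libvirt/images/perf-test-vm.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='default'/>
        </interface>
      </devices>
    </domain>
    """

    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(DOMAIN_XML)   # register the configuration with the hypervisor
    dom.create()                       # power the virtual machine on
    conn.close()

Changing the processor count or memory size is then literally an edit of two numbers, which is what "a couple of mouse clicks" amounts to under the hood, and several such machines attached to the same virtual network form the single-host network model described in item 6.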

The advantages of virtual machines do not end there; the list above is simply food for thought and for further exploration of their capabilities. At the same time, like any new and promising solution, virtual machines have their drawbacks:

  1. Inability to emulate all devices.

    At the moment, all the main devices of the major hardware platforms are supported by virtualization vendors, but if you rely on controllers or devices they do not support, you will have to forgo virtualizing that environment.

  2. Virtualization requires additional hardware resources.

    Modern virtualization techniques have brought the performance of virtual machines close to that of physical ones; nevertheless, a physical host needs a sufficient amount of hardware resources to run even a couple of virtual machines.

  3. Some virtualization platforms require specific hardware.

    In particular, VMware's excellent ESX Server platform would be an obvious choice if it did not impose such stringent hardware requirements.

  4. Good virtualization platforms cost good money.

    Sometimes the cost of deploying one virtual server is equal to the cost of another physical one, and under certain conditions this makes virtualization impractical. Fortunately, there are many free solutions, but they are aimed mainly at home users and small businesses.

Despite these shortcomings, all of which can be overcome, virtualization continues to gain momentum, and in 2007 a significant expansion is expected both of the market for virtualization platforms and of the market for virtual infrastructure management tools. Over the past few years, interest in virtualization has grown significantly, as Google Trends statistics show:

Virtualization trend statistics

However, due to the complexity and high cost of deploying and maintaining virtual infrastructure, as well as the difficulty of properly assessing the return on investment, many virtualization projects fail. According to a study conducted by Computer Associates among various companies that have attempted virtualization, 44 percent could not describe the result as successful. This circumstance holds back many companies planning virtualization projects. Another problem is the lack of truly competent specialists in this field.

What does the future hold for virtualization?

2006 was a key year for virtualization technologies: many new players entered the market, and the numerous releases of virtualization platforms and management tools, as well as a considerable number of partnership agreements and alliances, indicate that the technology will be in great demand in the future. The virtualization market is in the final stage of its formation. Many hardware manufacturers have announced support for virtualization technologies, which is a strong indicator of a new technology's prospects. Virtualization is also moving closer to ordinary users: interfaces for working with virtual machines are being simplified, informal conventions on the use of various tools and techniques are emerging (though not yet officially standardized), and migration from one virtual platform to another is becoming easier. Virtualization will certainly take its place among the essential technologies and tools used when designing enterprise IT infrastructure. Ordinary users will also find uses for virtual machines: as the performance of desktop hardware platforms grows, it will become possible to maintain several user environments on one machine and switch between them.

Hardware manufacturers are not standing still either: in addition to the existing hardware virtualization techniques, hardware systems will soon appear that natively support virtualization and provide convenient interfaces for the software built on top of them. This will make it possible to develop reliable and efficient virtualization platforms quickly. It is quite possible that any installed operating system will be virtualized immediately, and special low-level software, backed by hardware features, will switch between running operating systems without compromising performance.

The very idea behind virtualization technologies opens up wide possibilities for their use. After all, in the end everything is done for the user's convenience and to simplify the use of familiar things. Whether this will also bring significant savings, time will tell.

Question 56

OS virtualization systems. Basic concepts, paravirtualization, hardware virtualization, hypervisor. Application examples.

Virtualization is a technology that abstracts processes and their representation from the underlying computing resources. The concept of virtualization is far from new: it was introduced back in the 1960s by IBM.

The following types of virtualization can be distinguished:

    Server virtualization. Server virtualization involves running several virtual servers on one physical server. Virtual machines, or virtual servers, are applications running on a host operating system that emulate the physical hardware of a server. Each virtual machine can have its own operating system, on which applications and services can be installed. Typical representatives are VMware vSphere and Microsoft Hyper-V.

    Application virtualization. Application virtualization involves emulating operating system resources (the registry, files, and so on). This technology makes it possible to use several incompatible applications at the same time on one computer, or more precisely within the same operating system. Application virtualization is implemented, for example, in Microsoft Application Virtualization (App-V). App-V allows users to run the same pre-configured application, or group of applications, from a server. The applications run independently of each other and make no changes to the operating system, and all of this happens transparently to the user, as if he were working with an ordinary locally installed application.

    View virtualization. View virtualization involves emulating the user interface: the user sees the application and works with it on his terminal, although the application is actually running on a remote server and only an image of the remote application is transmitted to the user. Depending on the operating mode, the user can see either the remote desktop with the application running on it, or only the application window itself. This is implemented with Microsoft Terminal Services and with Citrix solutions.

    Operating system level virtualization. Operating system-level virtualization involves isolating services within a single instance of the operating system kernel. It is implemented, for example, in Parallels (SWsoft) Virtuozzo and is used most often by hosting companies.

What virtualization can do:

    Run multiple operating systems simultaneously.

    Guaranteed isolation of operating systems from each other.

    Possibility of flexible sharing of resources between machines.

Benefits of virtualization:

    Increased isolation.

        Confining a service, or a group of tightly coupled services, to its own virtual machine.

        Reducing the likelihood of failures caused by programs interfering with one another.

    Security.

        Distribution of administration tasks - the ability to limit each administrator's rights to only what is strictly necessary.

        Reducing the potential damage from a compromise of any of the services.

    Resource distribution - each machine receives as many resources as it needs, but no more.

        Prioritization of tasks.

        Memory allocation on demand.

        Flexible distribution of network traffic between machines.

        Allocation of disk resources.

    Continuous availability.

        Live migration of machines.

        Smooth upgrades of critical servers.

    Improved quality of administration.

        Ability to perform regression tests.

        Room for experimentation and exploration.

Principles and types of virtualization:

    Interpretation and dynamic recompilation - with interpretation, the emulator decodes and executes each guest instruction in software, one at a time; with dynamic recompilation, the emulator converts fragments of the executable program, while it is running, into code that can be executed directly on the host computer. The recompiler offers less compatibility than the interpreter, but it is faster (a toy interpreter illustrating the dispatch overhead follows the examples below).

Examples: Bochs, PearPC, QEMU, Microsoft Virtual PC for Mac.
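
To show why interpretation is the most compatible but slowest of these techniques, here is a deliberately simplified toy interpreter for a made-up "guest" instruction set (everything in it is illustrative). Every guest instruction passes through a dispatch step in the host language, and it is exactly this per-instruction overhead that dynamic recompilation removes by translating hot code fragments into native host code.

    # A toy interpreter for a made-up two-register "guest" instruction set.
    # Each guest instruction costs at least one dispatch step in the host
    # language - the overhead that dynamic recompilation tries to eliminate.

    def run(program):
        regs = {"r0": 0, "r1": 0}
        for op, *args in program:
            if op == "load":          # load <reg> <constant>
                regs[args[0]] = args[1]
            elif op == "add":         # add <dst> <src>
                regs[args[0]] += regs[args[1]]
            elif op == "print":       # print <reg>
                print(regs[args[0]])
            else:
                raise ValueError(f"unknown instruction: {op}")
        return regs

    run([("load", "r0", 2), ("load", "r1", 40), ("add", "r0", "r1"), ("print", "r0")])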

    Paravirtualization and porting - the guest OS kernel is modified so that it includes a new set of APIs through which it can work with the hardware without conflicting with other virtual machines. In this case there is no need to use a full-fledged OS as the host software; its functions are performed by a special system called a hypervisor, which works with the hardware directly.

Hypervisor (or Virtual Machine Monitor) - a program or hardware mechanism that enables the simultaneous, parallel execution of several or even many operating systems on the same host computer. The hypervisor also provides isolation of the operating systems from one another, protection and security, sharing of resources between the running OSes, and resource management.

The hypervisor may also (but is not obliged to) provide the operating systems running under its control on the same host computer with means of communication and interaction (for example, through file exchange or network connections), as if these operating systems were running on different physical computers.

The hypervisor itself is in a sense a minimal operating system (a microkernel or nanokernel). It provides the operating systems running under its control with a virtual machine service, virtualizing or emulating the real (physical) hardware of the particular machine, and it manages these virtual machines, allocating and releasing resources for them. The hypervisor allows any of the virtual machines running a particular OS to be independently "switched on", rebooted or "switched off". An operating system running in a virtual machine under a hypervisor may, but does not have to, "know" that it is running in a virtual machine rather than on real hardware. The word "hypervisor" itself appeared in an interesting way: long ago the operating system was called the "supervisor", and the software that sat above the supervisor came to be called the "hypervisor".
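
This management role - independently starting, rebooting and shutting down individual virtual machines - is exposed by most platforms through an API. Below is a hedged sketch using the libvirt Python bindings; the connection URI and the domain name "test-vm" are assumptions for illustration only.

    import libvirt

    conn = libvirt.open("qemu:///system")        # talk to the local hypervisor

    # Enumerate every virtual machine the hypervisor knows about.
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off"
        print(dom.name(), state)

    # "Switch on", reboot or "switch off" a single machine independently
    # of the others running on the same host.
    dom = conn.lookupByName("test-vm")           # hypothetical domain name
    if not dom.isActive():
        dom.create()                             # "switch on"
    # dom.reboot(0)                              # graceful reboot request
    # dom.shutdown()                             # graceful "switch off" request

    conn.close()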

Hypervisor types:

    Autonomous hypervisor (Type 1)

It has its own built-in device drivers, driver models and scheduler, and is therefore independent of the underlying OS. Because a standalone hypervisor runs directly on the hardware, it delivers higher performance.

Example: VMware ESX.

    Based on a host OS (Type 2, V)

This is a component that runs in the same ring as the host OS kernel (ring 0). Protection rings are an information security and fault-tolerance architecture that implements hardware separation of system and user privilege levels.

Guest code can run directly on the physical processor, but access to the computer's I/O devices from the guest OS is done through a second component, the host OS's normal user-level monitor process.

Examples: Microsoft Virtual PC, VMware Workstation, QEMU, Parallels, VirtualBox.

    Hybrid (Type 1+)

A hybrid hypervisor consists of two parts: a thin hypervisor that controls the processor and memory, as well as a special service OS running under its control in a lower-level ring. Through the service OS, guest OSes gain access to physical hardware.

Examples: Microsoft Virtual Server, Sun Logical Domains, Xen, Citrix XenServer, Microsoft Hyper-V

Paravirtualization is a virtualization technique in which guest operating systems are prepared for execution in a virtualized environment by slightly modifying their kernel. The operating system interacts with the hypervisor, which provides it with a guest API, rather than using resources such as the memory page table directly. The virtualization-related code is placed directly in the operating system. Paravirtualization therefore requires the guest operating system to be modified for the hypervisor, which is a drawback of the method, since such modification is possible only when the guest OS is open source and its license permits modification. In return, paravirtualization offers performance close to that of a real, non-virtualized system. As with full virtualization, many different operating systems can be supported simultaneously.

The goal of changing the interface is to reduce the share of guest execution time spent on operations that are substantially harder to perform in a virtual environment than in a non-virtual one. Paravirtualization provides specially installed handlers that allow the guest(s) and the host to request and acknowledge these tasks, which would otherwise have to be performed in the virtual domain, where performance is lower. Thus, a successful paravirtualization platform can make the virtual machine monitor (VMM) simpler (by moving critical tasks from the virtual domain to the host domain) and/or reduce the overall performance penalty of executing the machine inside the virtual guest (a toy model of this guest-hypervisor interaction is sketched below).
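
A purely illustrative toy model of this guest-hypervisor contract (it mirrors no real hypercall ABI, and all names are invented for the example) might look like the following: the "guest kernel" asks the hypervisor for memory through an explicit API call instead of manipulating page tables itself.

    # Toy model of paravirtualization: the guest calls an explicit hypervisor
    # API ("hypercall") instead of touching privileged hardware state directly.

    class ToyHypervisor:
        def __init__(self, total_pages):
            self.free_pages = list(range(total_pages))
            self.owner = {}                      # page number -> guest name

        def hypercall_map_page(self, guest):
            """Hand one machine page to the requesting guest."""
            page = self.free_pages.pop()
            self.owner[page] = guest
            return page

    class ParavirtGuestKernel:
        def __init__(self, name, hypervisor):
            self.name = name
            self.hv = hypervisor
            self.pages = []

        def allocate_memory(self):
            # A modified guest kernel asks the hypervisor instead of
            # writing to the page tables itself.
            self.pages.append(self.hv.hypercall_map_page(self.name))

    hv = ToyHypervisor(total_pages=8)
    guest_a = ParavirtGuestKernel("linux-guest", hv)
    guest_b = ParavirtGuestKernel("bsd-guest", hv)
    guest_a.allocate_memory()
    guest_b.allocate_memory()
    print(hv.owner)   # each page has exactly one owner - isolation is preserved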

The term first appeared in the Denali project, and after researchers from the University of Cambridge Computer Laboratory used the word in the Xen project, it became firmly established in the terminology. The prefix "para" in the word paravirtualization does not mean anything in particular; the authors of the idea simply needed a new term.

      Advantages: no host OS is needed. The virtual machine is installed, in effect, on bare metal, and hardware resources are used efficiently.

      Disadvantages: the complexity of implementing the approach and the need to create a specialized hypervisor OS.

Examples: Xen, UML, lguest, Microsoft Hyper-V, KVM, VMware ESX Server.

    Virtualization at the OS level - this approach uses a single host OS kernel to create independent, parallel operating environments. The kernel ensures complete isolation of the containers, so programs from different containers cannot affect one another (a namespace-based sketch follows the examples below).

      Advantages: high efficiency in the use of hardware resources, low technical overhead, excellent manageability, minimal license purchase costs.

      Disadvantages: only homogeneous computing environments can be implemented, since all containers share the host kernel.

Examples: FreeVPS, iCore Virtual Accounts, Linux-VServer, OpenVZ, Parallels Virtuozzo Containers, Zones, FreeBSD Jail, sysjail, WPARs, Solaris Containers.
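
As a rough illustration of isolating environments on a single kernel, the sketch below starts a shell in its own PID, mount, UTS, IPC and network namespaces using the util-linux unshare tool (wrapped in Python only for consistency with the other sketches). It requires root privileges and a pre-populated root filesystem at an assumed path; real container engines such as those listed above add resource limits, images and much more on top of this idea.

    import subprocess

    # Start a shell in fresh PID, mount, UTS, IPC and network namespaces.
    # All "containers" created this way still share the single host kernel,
    # which is both the strength and the limitation of OS-level virtualization.
    subprocess.run([
        "unshare",
        "--fork", "--pid", "--mount-proc",      # new PID namespace with its own /proc
        "--uts", "--ipc", "--net", "--mount",   # isolated hostname, IPC, network, mounts
        "chroot", "/srv/containers/demo-root",  # assumed, pre-populated root filesystem
        "/bin/sh",
    ], check=True)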

    Full virtualization - this approach uses unmodified copies of the guest operating systems; to support their operation, a common layer that emulates their execution runs on top of the host operating system, which is itself an ordinary operating system (a launch sketch follows the examples below).

      Advantages: relative ease of implementation, versatility and reliability of the solution; all management functions are taken over by the host OS.

      Disadvantages: high additional overhead on hardware resources, no account taken of the specific features of the guest OS, and less flexibility in the use of hardware than desired.

Examples: VMware Workstation, VMware Server, Parallels Desktop, Parallels Server, Microsoft Virtual PC, Microsoft Virtual Server, Microsoft Hyper-V, QEMU with the kqemu module, KVM, Virtual Iron.
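
For comparison with the container sketch above, the following hedged example boots an unmodified guest image under QEMU with KVM hardware acceleration, which is the full-virtualization path: the guest is unaware it is virtualized and sees ordinary emulated hardware. The image file name and the memory and CPU sizes are placeholder values.

    import subprocess

    # Boot an unmodified guest OS image under QEMU/KVM (full virtualization).
    subprocess.run([
        "qemu-system-x86_64",
        "-enable-kvm",                                        # use hardware virtualization support
        "-m", "2048",                                         # 2 GiB of guest RAM
        "-smp", "2",                                          # 2 virtual CPUs
        "-drive", "file=guest.qcow2,format=qcow2,if=virtio",  # assumed disk image
    ], check=True)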

    Compatibility Layer

A compatibility layer is a software product that makes it possible to run programs that were not written for the current working environment. For example, Wine allows Windows programs to run on the Linux operating system.

Examples: Cygwin, Wine.
