I was at a Hyper-V course (50286) last week for a couple days that were discussing Architecting solutions for a hyper-V implementation.
--Course Outline--
Design a server virtualization strategy, including planning for virtualization server workload capacities, licensing, and management.
- Install Server Core
- Install and use MAP 4.0
- Design a Server Virtualization Infrastructure with Hyper-V
- Design a Storage Infrastructure with Hyper-V
- Design Networking with Hyper-V
- Design a Snapshot Strategy with Hyper-V
- Design a High-Availability Server Environment
- Design a High-Availability Virtual Machine Environment
- Design Migration Options
- Configure Hyper-V Storage
- Configure Hyper-V Network
- Configure Hyper-V Snapshot Operations
- Configure 2-Node VM Failover Cluster
- Develop a Virtualization Infrastructure Management Framework with System Center Suite
- Deploy Virtual Machines
- Convert Virtual Machines
- Design an Administrative Strategy
- Design a Migration Strategy
- Design a Disaster Recovery Strategy
- Configure remote administration
- Export and Import virtual machines using SCVMM
Module 1: Designing a Virtualization Strategy
- Lab: Hyper-V Server 2008 R2 Installation
Module 2: Designing a Virtualization Platform Infrastructure and High-Availability Strategy
- Lab: Configuring and Using MAP 4.0
Module 3: Designing a Management Strategy
- Lab: Configuring and Using a VMM Self-Service Portal
- Lab: Performing a P2V Conversion using VMM
- Lab: Configuration and Use of the Remote Server Administration Toolkit
- Lab: Performing a Virtual Machine Export/Import using VMM
- Lab: Performing a Host-Level Backup Using DPM
The Basics
On the first day I was very skeptical during the discussions, but I now understand why there are performance gains for a Hyper-V implementation over a physical server implementation.
Firstly, the Hyper-V installation runs on a Windows Server 2008 R2 host, which already raises a performance question right from the start. The instructor discussed the four-ring protection model and binary translation, or ring compression ( http://en.wikipedia.org/wiki/Binary_translation ), in a discussion of where the hypervisor kernel lies within the emulation chain. See ( http://upload.wikimedia.org/wikipedia/commons/0/06/Hyper-V.png ). He described the virtualization mode as running at "Ring -1", that is, at a higher privilege than the running kernel of the host machine. (Personal note: this sounds fishy to me. If that is the case, I question why we need the host OS at all.)
Above the hypervisor layer, the host OS supplies the hardware drivers for any guest OS running on the Hyper-V server via the Virtualization Service Provider (VSP). The guest requires the VMBus and the Integration Services components running, which provide the Virtualization Service Client (VSC) (the equivalent of VMware Tools, for the VMware admins out there) to increase performance for the VM guests. This removes the hardware driver layer from the guest OS, reducing overhead for the guests. (Would this be a vulnerable area for the security of the guests if the host were compromised?)
So in concept Hyper-V could provide guest performance improvements.
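To make the VSP/VSC idea concrete, here is a minimal illustrative sketch (plain Python, not a real Hyper-V API; all function names are invented) of the enlightened I/O path described above: the guest's VSC holds no hardware driver and simply forwards requests over the VMBus to the VSP in the parent partition, which owns the real drivers.

```python
# Illustrative model of Hyper-V enlightened I/O (hypothetical names,
# not a real API): VSC (guest) -> VMBus -> VSP (parent partition).

def vsp_handle(request):
    """Parent partition: the VSP hands the request to the real hardware driver."""
    return f"driver completed: {request}"

def vmbus(request):
    """Shared channel between the child (guest) and parent partitions."""
    return vsp_handle(request)

def vsc_submit(request):
    """Guest OS: the VSC installed by Integration Services.

    Note there is no driver logic here at all; the guest just forwards
    the request to the parent partition, which is the point of the design.
    """
    return vmbus(request)

print(vsc_submit("disk read, block 42"))
# -> driver completed: disk read, block 42
```

This also illustrates the security question raised above: every guest's I/O funnels through the parent partition's VSP, so a compromised host sits directly on that path.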
Maximums
- Up to 384 running VM guests per Hyper-V server, or 512 total virtual processors
- Up to 4 virtual processors per guest
- Up to 8 synthetic network adapters and 4 legacy network adapters per VM guest
- At least 1 IDE device is required per VM guest (an IDE hard disk or an IDE CD-ROM)
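The guest and processor maximums above interact (you hit whichever ceiling comes first), so a quick capacity sanity check can be sketched as follows. This is my own back-of-the-envelope helper, not anything from the course; the limit constants are the figures listed above.

```python
# Back-of-the-envelope check of a planned host against the Hyper-V R2
# maximums quoted above. check_plan() is a hypothetical helper.

MAX_RUNNING_GUESTS = 384   # running VM guests per host
MAX_TOTAL_VCPUS = 512      # total virtual processors per host
MAX_VCPUS_PER_GUEST = 4    # virtual processors per guest

def check_plan(guest_vcpus):
    """guest_vcpus: list of vCPU counts, one entry per planned VM guest.

    Returns a list of limit violations (empty list means the plan fits).
    """
    problems = []
    if len(guest_vcpus) > MAX_RUNNING_GUESTS:
        problems.append(f"{len(guest_vcpus)} guests exceeds {MAX_RUNNING_GUESTS}")
    if sum(guest_vcpus) > MAX_TOTAL_VCPUS:
        problems.append(f"{sum(guest_vcpus)} total vCPUs exceeds {MAX_TOTAL_VCPUS}")
    for i, vcpus in enumerate(guest_vcpus):
        if vcpus > MAX_VCPUS_PER_GUEST:
            problems.append(f"guest {i} has {vcpus} vCPUs, max is {MAX_VCPUS_PER_GUEST}")
    return problems

print(check_plan([4] * 128))  # 128 guests x 4 vCPUs = 512 vCPUs -> []
print(check_plan([4] * 130))  # 520 vCPUs: flags the 512 vCPU ceiling
```

Note that the vCPU ceiling bites well before the guest-count ceiling: 128 four-vCPU guests already consume all 512 virtual processors, far short of 384 guests.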
Limitations
- Processor affinity is not available
- Limited processor compatibility across different processor versions; there is a check box that limits the processor features presented to all VM guests to the lowest common processor version
Features not available on VMware
- Core Parking: processor cores can be turned off to conserve power
More to come...