Linux Plumbers Conference 2012: Virtualization Track (http://summit.ubuntu.com/lpc-2012/)

Wednesday 14:00 - 14:45 PDT
Security and Storage
Virtualization Topics:
1. Virtualization Security Discussion
2. Storage Virtualization for KVM

=== Virtualization Security Discussion ===

This proposal is for a discussion of the threats facing Linux-based virtualization technologies and what can be done to help mitigate them. The focus will be on hypervisor-based virtualization, e.g. KVM, but container-based virtualization can also be discussed if there is sufficient interest among the attendees. Possible topics of discussion include:
* Confining malicious/exploited guests
* Validating host identity and integrity from the guest
* Enforcing network separation/virtualization

Topic Lead: Paul Moore
Paul has been involved in various Linux security efforts for the past eight years, with a strong focus on mandatory access control and network security. He has served as the Linux kernel's labeled networking maintainer since 2007. Paul has given a number of presentations over the years at Linux conferences on Linux security, SELinux, SELinux/MLS, and labeled networking.

=== Storage Virtualization for KVM ===

In the KVM-based virtualization ecosystem there are multiple choices of filesystem/storage and management tools. While this allows for different custom solutions, there is no single default storage solution that caters to the majority of use cases. In this presentation we will look at integrating individual projects such as QEMU, GlusterFS, oVirt/VDSM and libstoragemgmt to arrive at one filesystem/storage solution for KVM that works for most scenarios. Various aspects, such as making GlusterFS virtualization-ready and cross-vendor storage array integration, will be discussed. Finally, we will discuss how virtualization features such as VM migration and snapshots can be performed seamlessly in this storage solution using oVirt. Virtualization/data center administrators and users of KVM-based virtualization will benefit from attending this presentation.

Topic Lead: Bharata B Rao <email address hidden>
Bharata B Rao is part of the IBM Linux Technology Center, Bangalore. He is currently working in the area of virtualization. Earlier he worked in the areas of file systems, schedulers, debuggers, embedded Linux and Linux clusters. Bharata graduated from The National Institute of Engineering, Mysore in 1999 and completed his post-graduation (MS) at BITS, Pilani in 2003. In his spare time, the Sanskrit language, mountains and the Mridangam (an Indian percussion instrument) keep him engaged.

Topic Lead: Deepak C Shetty <email address hidden>
Deepak C Shetty works with IBM's Linux Technology Center (LTC), Bangalore in the area of open virtualization. Earlier he worked in the area of virtualization-aware file systems. Prior to joining the LTC, Deepak worked at IBM on platform management (the IBM Systems Director product) and hardware validation of IBM Power systems. Deepak holds a Bachelor of Engineering degree in Electronics from Pune University, India and a Diploma in Advanced Computing from C-DAC, Pune, India. His other areas of interest include software design and stamp collection (philately).

Topic Lead: M Mohan Kumar <email address hidden>
M. Mohan Kumar is an open source developer working at the IBM Linux Technology Center, Bangalore. He has contributed to various components of the Linux ecosystem, including kexec (fast boot), kdump (the kernel crash dump mechanism) for PowerPC, the 9p file system and QEMU. Prior to IBM he worked on various SCSI and Fibre Channel related Linux projects. Mohan obtained his Bachelor of Engineering in Computer Science and Engineering from Bharathiar University, Tamil Nadu, India. He has 11 years of experience in the Linux area.

Topic Lead: Balamurugan Aramugam <email address hidden>
Balamurugan works as a Principal Software Engineer at Red Hat. He is a contributor to the upstream VDSM project, focusing on adding Gluster support. He has been involved in the design and development of various Gluster products, FreeIPMI, etc. Balamurugan works out of the Red Hat office in Bengaluru and his topics of interest include cloud technologies, Big Data, kernel development and artificial intelligence.

Topic Lead: Shireesh Anjal <email address hidden>
Shireesh Anjal works as a Principal Software Engineer with Red Hat. He is a contributor to the upstream oVirt project, focusing on adding GlusterFS support. He has been involved in building scalable banking and eGovernance systems over the past 12 years. Shireesh works out of the Red Hat office in Bengaluru and his topics of interest include cloud technologies, Big Data and mobile computing.
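As a rough illustration of the kind of plumbing a storage/management stack like this drives underneath, the hedged C sketch below takes a snapshot of a running KVM guest through the public libvirt API, which oVirt/VDSM build on. The domain name ("guest01") and the snapshot XML are invented for the example and are not taken from the talk; error handling is intentionally minimal.

/* Hedged illustration only: snapshotting a KVM guest through libvirt,
 * the layer that management stacks such as oVirt/VDSM typically drive.
 * The domain name "guest01" and the snapshot XML are invented for this
 * sketch.
 *
 * Build (assuming the libvirt development headers are installed):
 *   gcc -o snap snap.c $(pkg-config --cflags --libs libvirt)
 */
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn;
    virDomainPtr dom;
    virDomainSnapshotPtr snap;
    int rc = 1;
    const char *snap_xml =
        "<domainsnapshot>"
        "  <name>pre-upgrade</name>"
        "  <description>Example snapshot taken via libvirt</description>"
        "</domainsnapshot>";

    conn = virConnectOpen("qemu:///system");   /* connect to the local QEMU/KVM driver */
    if (!conn) {
        fprintf(stderr, "failed to connect to qemu:///system\n");
        return 1;
    }

    dom = virDomainLookupByName(conn, "guest01");  /* hypothetical guest name */
    if (!dom) {
        fprintf(stderr, "domain not found\n");
        virConnectClose(conn);
        return 1;
    }

    snap = virDomainSnapshotCreateXML(dom, snap_xml, 0);  /* default flags */
    if (!snap) {
        fprintf(stderr, "snapshot creation failed\n");
    } else {
        printf("snapshot created\n");
        virDomainSnapshotFree(snap);
        rc = 0;
    }

    virDomainFree(dom);
    virConnectClose(conn);
    return rc;
}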

Participants:
attending amitshah (Amit Shah)
attending eblake (Eric Blake)
attending lpc-virt-lead (LPC Virtualization Lead)
attending paulmoore (Paul Moore)
attending stefano-stabellini (Stefano Stabellini)

Tracks:
  • Virtualization
Nautilus 2
Wednesday 15:45 - 16:30 PDT
x86 Virtualization
Virtualization Topics:
2) COLO: COarse-grain LOck-stepping Virtual Machines for Non-stop Service - Eddie Dong
3) Reviewing Unused and Up-coming Hardware Virtualization Features - Jun Nakajima

=== COLO: COarse-grain LOck-stepping Virtual Machines for Non-stop Service ===

Virtual machine (VM) replication (replicating the state of a primary VM running on a primary node to a secondary VM running on a secondary node) is a well-known technique for providing application-agnostic, non-stop service. Unfortunately, existing VM replication approaches suffer from excessive replication overhead, and for client-server systems there is no real need for the secondary VM to match the primary VM's machine state at all times. We propose COLO (COarse-grain LOck-stepping virtual machines for non-stop service), a generic and highly efficient non-stop service solution based on on-demand VM replication. COLO monitors the output responses of the primary and secondary VMs and considers the secondary VM a valid replica of the primary as long as the network responses generated by the secondary match those of the primary. The primary VM's state is propagated to the secondary VM if, and only if, the outputs of the two no longer match.

Topic Lead: Eddie Dong

=== Reviewing Unused and Up-coming Hardware Virtualization Features ===

We review unused and up-coming hardware virtualization (Intel VT) features and discuss how they can improve virtualization for open source. First, we review existing hardware features that are not used by KVM or Xen today, showing example use cases:
1) The descriptor-table exiting control should be useful for guest kernels or security agents to enhance security features.
2) The VMX-preemption timer allows the hypervisor to preempt guest VM execution after a specified amount of time, which is useful for implementing fair scheduling. The hardware can save the timer value on each successive VM exit after the initial VM quantum is set.
3) VMFUNC is an operation provided by the processor that can be invoked from VMX non-root operation without a VM exit. Today EPTP switching is available, and we discuss how that feature can be used.
Second, we talk about new hardware features, especially interrupt optimizations.

Topic Lead: Jun Nakajima
Jun Nakajima is a Principal Engineer leading open source virtualization projects, such as Xen and KVM, at the Intel Open Source Technology Center. He has presented a number of times at technical conferences, including Xen Summit, OLS, KVM Forum, and USENIX.
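To make the comparison-based replication policy described in the COLO abstract concrete, here is a purely conceptual C sketch of its decision loop. The packet sources, the release function and the checkpoint function are stubs invented for illustration; none of this is taken from the actual COLO code base.

/* Conceptual sketch only: the COLO control loop reduced to its decision
 * logic. Real COLO compares outbound network packets from the primary and
 * secondary VMs; here "packets" are strings produced by stub functions and
 * the checkpoint is a stub.
 */
#include <stdio.h>
#include <string.h>

/* Stubs standing in for per-VM packet capture and state synchronization. */
static const char *next_primary_output(int i)   { static const char *p[] = {"GET /a", "GET /b", "GET /c"}; return p[i]; }
static const char *next_secondary_output(int i) { static const char *s[] = {"GET /a", "GET /b", "GET /x"}; return s[i]; }

static void release_to_client(const char *pkt) { printf("release: %s\n", pkt); }
static void checkpoint_primary_to_secondary(void) { printf("outputs diverged: forcing VM checkpoint\n"); }

int main(void)
{
    for (int i = 0; i < 3; i++) {
        const char *p = next_primary_output(i);
        const char *s = next_secondary_output(i);

        if (strcmp(p, s) == 0) {
            /* Secondary is still a valid replica: let the response go out. */
            release_to_client(p);
        } else {
            /* On-demand replication: only now is primary state propagated. */
            checkpoint_primary_to_secondary();
            release_to_client(p);
        }
    }
    return 0;
}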

Participants:
attending amitshah (Amit Shah)
attending lpc-virt-lead (LPC Virtualization Lead)

Tracks:
  • Virtualization
Nautilus 2
Thursday 14:25 - 15:10 PDT
Virtualization Memory Management
Virtualization Topics:
1) NUMA and Virtualization, the case of Xen
2) Automatic NUMA CPU scheduling and memory migration
3) One balloon for all - towards a unified balloon driver

=== NUMA and Virtualization, the case of Xen ===

Having to deal with NUMA machines is becoming more and more common, and will likely continue to be so. Running typical virtualization workloads on such systems is particularly challenging, as virtual machines (VMs) are typically long-lived processes with large memory footprints. This means one might incur really bad performance if the specific characteristics of the platform are not properly accounted for. Basically, it would be ideal to always run a VM on the CPUs of the node that hosts its memory, or at least as close to it as possible. Unfortunately, that is all but easy, and involves reconsidering the current approaches to scheduling and memory allocation. Extensive benchmarks have been performed, running memory-intensive workloads inside Linux VMs hosted on NUMA hardware of different kinds and sizes. This has driven the design and development of a suite of new VM placement, scheduling and memory allocation policies for the Xen hypervisor and its toolstack. The implementation of these changes has been benchmarked against the baseline performance and proved effective in yielding improvements, which will be illustrated during the talk. Although some of the work is hypervisor specific, it covers issues of interest for the whole Linux virtualization community: whether and how to export NUMA topology information to guests, to give just one example. We believe that the solutions we are working on, the ideas behind them and the performance evaluation we conducted are something the community would enjoy hearing and talking about.

Topic Lead: Dario Faggioli
Dario interacted with the Linux kernel community in the domain of scheduling during his PhD on real-time systems. He now works for Citrix on the Xen open source project. He has spent the last months investigating and trying to improve the performance of virtualization workloads on NUMA systems.

=== Automatic NUMA CPU scheduling and memory migration ===

Topic Lead: Andrea Arcangeli

=== One balloon for all - towards a unified balloon driver ===

During Google Summer of Code 2010 (Migration from memory ballooning to memory hotplug in Xen) it was discovered that the mainline Linux kernel contains three balloon driver implementations for three virtualization platforms (KVM, Xen, VMware). It quickly became clear that they are almost identical, but of course they have different controls and APIs/ABIs. In view of, for example, the memory hotplug driver, which has a generic base (not tied to a specific hardware/software solution), this situation is not acceptable. The goal of this project is a generic balloon driver which could be placed in the MM subsystem and linked with as little platform-specific code as possible (placed, e.g., in the relevant arch directory). This solution would give a unified ABI (which could ease administration) and a unified API for developers (i.e., easier integration with, e.g., tmem, memory hotplug, etc.). Additionally, balloon driver behavior would be almost identical on all platforms. The discussion should outline the goals and key solutions for such a driver.

Topic Lead: Daniel Kiper
Daniel was a Google Summer of Code 2010 (memory hotplug/balloon driver) and Google Summer of Code 2011 (kexec/kdump) student. He has been involved in *NIX administration/development since 1994. Currently his work and interests focus on the kexec/kdump implementation for Xen.
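As a purely illustrative mock-up of the kind of split the "one balloon for all" abstract argues for (a generic core plus thin per-hypervisor callbacks), here is a small userspace C sketch. All names in it (balloon_ops, balloon_register, balloon_set_target, the fake Xen backend) are invented for this sketch and do not correspond to any driver in the mainline kernel.

/* Illustrative userspace mock-up, not kernel code: one generic balloon core
 * plus a thin set of per-hypervisor callbacks, so KVM/virtio, Xen and VMware
 * would share a single ABI and API. All identifiers are invented.
 */
#include <stdio.h>
#include <stddef.h>

/* Per-platform hooks: only the part that really differs between backends. */
struct balloon_ops {
    const char *name;
    /* Give pages back to the host (inflate) or reclaim them for the guest
     * (deflate); returns the number of pages actually handled. */
    long (*inflate)(long pages);
    long (*deflate)(long pages);
};

static const struct balloon_ops *active_ops;

/* Generic core: one interface towards the admin and towards callers such as
 * tmem or memory hotplug. */
static int balloon_register(const struct balloon_ops *ops)
{
    if (active_ops || !ops || !ops->inflate || !ops->deflate)
        return -1;
    active_ops = ops;
    printf("balloon core: registered %s backend\n", ops->name);
    return 0;
}

static long balloon_set_target(long delta_pages)
{
    if (!active_ops)
        return -1;
    return delta_pages >= 0 ? active_ops->inflate(delta_pages)
                            : -active_ops->deflate(-delta_pages);
}

/* Example backend: a fake "Xen" implementation that just reports success. */
static long fake_xen_inflate(long pages) { printf("xen: gave back %ld pages\n", pages); return pages; }
static long fake_xen_deflate(long pages) { printf("xen: reclaimed %ld pages\n", pages); return pages; }

static const struct balloon_ops fake_xen_ops = {
    .name = "xen (fake)",
    .inflate = fake_xen_inflate,
    .deflate = fake_xen_deflate,
};

int main(void)
{
    balloon_register(&fake_xen_ops);
    balloon_set_target(1024);    /* inflate the balloon by 1024 pages */
    balloon_set_target(-256);    /* deflate it by 256 pages */
    return 0;
}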

Participants:
attending amitshah (Amit Shah)
attending dkiper (Daniel Kiper)
attending lpc-virt-lead (LPC Virtualization Lead)
attending raistlin (Dario Faggioli)

Tracks:
  • Virtualization
Nautilus 2
Thursday 17:25 - 18:10 PDT
ARM Virtualization
Virtualization Topics:
1. Xen on ARM Cortex A15
2. Porting KVM to the ARM Architecture

=== Xen on ARM Cortex A15 ===

During the last few months of 2011 the Xen community started an effort to port Xen to ARMv7 with virtualization extensions, using the Cortex A15 processor as the reference platform. The new Xen port exploits this set of hardware capabilities to run guest VMs in the most efficient way possible, while keeping the ARM-specific changes to the hypervisor and the Linux kernel to a minimum. In developing the new port we took the chance to remove legacy concepts like PV or HVM guests and only support a single kind of guest, comparable to "PV on HVM" in the Xen x86 world. This talk will explain the reasons behind this and other design choices that we made during the early development process, and it will go through the main technical challenges that we had to solve in order to accomplish our goal. Notable examples are the way Linux guests issue hypercalls and receive event channel notifications from Xen. Is there anything that we could have done better? Is the architecture that we laid down in the Linux kernel generic enough to be reused by other hypervisors?

Topic Lead: Stefano Stabellini
Stefano is a Senior Software Engineer at Citrix, working on the open source Xen Platform team. He has been working on Xen since 2007, focusing on several different projects, spanning from QEMU to the Linux kernel. He currently maintains libxenlight, Xen support in QEMU and PV on HVM in the Linux kernel. Before joining Citrix he was a researcher at the Institute for Human and Machine Cognition, working on mobile ad hoc networks.

=== Porting KVM to the ARM Architecture ===

With the introduction of the Virtualization Extensions to the ARM architecture (as implemented in the Cortex A7 and A15 processors), it is possible to implement a hardware-assisted hypervisor. The KVM port to the ARM architecture, started by Christoffer Dall (Columbia University), is an example of such a hypervisor. Our proposal is to describe the current state of the project, explain how the various virtualization extensions (hypervisor mode, second stage translation, virtual interrupt controller, timers) are used, how the KVM implementation on ARM differs from other architectures, and what our plans are for upstreaming the code.

Topic Lead: Marc Zyngier <email address hidden>
Marc has been toying with the Linux kernel since 1993, has been involved over time with the RAID subsystem (MD) and all kinds of ancient architectures (by maintaining the EISA bus), has messed with consumer electronics, and now focuses on the ARM architecture.
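The Xen on ARM abstract mentions the way Linux guests issue hypercalls. As a hedged sketch of that mechanism: with the virtualization extensions a guest traps to the hypervisor with an HVC instruction, passing the hypercall number and arguments in registers. The register convention (arguments in r0..r4, hypercall number in r12) and the hypercall tag value (0xEA1) below are quoted from memory of the Xen ARM ABI and should be checked against xen/include/public/arch-arm.h; this is a code fragment for guest code, not a standalone program, and it only builds with an ARMv7 toolchain that accepts HVC.

/* Hedged fragment: issuing a two-argument Xen hypercall from an ARM guest.
 * Tag value and register convention are assumptions; verify against the
 * Xen public headers before relying on them.
 */
#define XEN_HYPERCALL_TAG "#0xEA1"   /* assumed Xen hypercall tag */

static inline long xen_hypercall_2(unsigned long nr,
                                   unsigned long a0, unsigned long a1)
{
    register unsigned long r0 asm("r0") = a0;    /* first argument  */
    register unsigned long r1 asm("r1") = a1;    /* second argument */
    register unsigned long r12 asm("r12") = nr;  /* hypercall number */

    asm volatile("hvc " XEN_HYPERCALL_TAG
                 : "+r" (r0)
                 : "r" (r1), "r" (r12)
                 : "memory");
    return r0;   /* result comes back in r0 */
}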

Participants:
attending amitshah (Amit Shah)
attending eblake (Eric Blake)
attending lpc-virt-lead (LPC Virtualization Lead)
attending marc-zyngier (Marc Zyngier)
attending srwarren (Stephen Warren)
attending stefano-stabellini (Stefano Stabellini)

Tracks:
  • Virtualization
Nautilus 5
Friday 11:00 - 11:45 PDT
Network Virtualization and Lightning Talks
1. VFIO - Are We There Yet?
2. KVM Network performance and scalability
3. Enabling overlays for Network Scaling
4. Marrying live migration and device assignment
5. Lightning Talks
   - QEMU disaggregation - Stefano Stabellini
   - Xenner - Alexander Graf
   - From Server to Mobile: Different Requirements/Different Solution - Eddie Dong, Jun Nakajima

=== VFIO - Are We There Yet? ===

VFIO is a new userspace driver interface intended to generically enable assignment of devices into QEMU virtual machines. VFIO has had a bumpy road upstream and is currently in its second redesign. In this talk we'll look at the new design, the status of the code, how to make use of it, and where it's going. We'll also look back at some of the previous designs to show how we got here. This talk is intended for developers and users interested in the evolution of device assignment in QEMU and KVM, as well as those interested in userspace drivers.

Topic Lead: Alex Williamson
Alex has been working on virtualization for over 5 years and concentrates on the I/O side of virtualization, especially assignment of physical devices to virtual machines. He is a member of the Red Hat Virtualization team.

=== KVM Network performance and scalability ===

In this presentation we will discuss ongoing work to improve KVM networking I/O performance and scalability. We will share performance numbers taken with both vertical (multiple interfaces) and horizontal (many VMs) scaling to highlight existing bottlenecks in the KVM stack, as well as improvements observed with pending changes. These experiments have shown that impressive gains can be obtained by using per-CPU vhost threads and leveraging hardware offloads, including flow steering and interrupt affinity. This presentation intends to highlight ongoing research from various groups working on the Linux kernel, KVM, and the upper-layer stack. Finally, we will propose a path to include these changes in the upstream projects. This should be of interest to KVM developers, kernel developers, and anyone using a virtualized environment.

Topic Lead: John Fastabend <email address hidden>
Required attendees: Vivek Kashyap, Shyam Iyer

=== Enabling overlays for Network Scaling ===

Server virtualization in the data center has increased the density of networking endpoints in a network. Together with the need to migrate VMs anywhere in the data center, this has surfaced network scalability limitations (layer 2, cross-IP-subnet migrations, network renumbering). The industry has turned its attention towards overlay networks to solve these scalability problems. The overlay network concept defines a domain connecting virtual machines belonging to a single tenant or organization. This virtual network may be built across the server hypervisors, which are connected over an arbitrary topology. This talk will give an overview of the problems sought to be solved through the use of overlay networks, and discusses active proposals such as VXLAN, NVGRE, and DOVE Network. We will further delve into options for implementing these solutions on Linux.

Topic Lead: Vivek Kashyap <email address hidden>
Vivek works in IBM's Linux Technology Center. Vivek has worked on Linux resource management, delay accounting, and energy and hardware management, has authored InfiniBand and IPoIB networking protocols, and worked on standardizing and implementing the IEEE 802.1Qbg protocol for network switching.

=== Marrying live migration and device assignment ===

Device assignment has been around for quite some time now in virtualization. It's a nice technique to squeeze as much performance out of your hardware as possible, and with the advent of SR-IOV it's even possible to pass a "virtualized" fraction of your real hardware to a VM, not the whole card. The problem, however, is that you lose a pretty substantial piece of functionality: live migration. The most common approach used to counter this for networking is to pass two NICs to the VM: one that's emulated in software and one that's the actual assigned device. It's the guest's responsibility to treat the two as a union, and the host needs to be configured in a way that allows packets to flow the same way through both paths. When migrating, the assigned device gets hot-unplugged and a new one goes back in on the new host. However, that means that we're exposing crucial implementation details of the VM to the guest: it knows when it gets migrated. Another approach is to do the above, but combine everything in a single guest driver, so it ends up invisible to the guest OS. That quickly becomes a nightmare too, because you need to reimplement network drivers for your specific guest driver infrastructure, at which point you're most likely violating the GPL anyway. So what if we restrict ourselves to a single NIC type? We could pass an emulated version of that NIC into our guest, or pass through an assigned device. They would behave the same. That also means that during live migration, we could switch between emulated and assigned modes without the guest even realizing it. But maybe others have more ideas on how to improve the situation? The less guest-intrusive it is, the better the solution usually becomes. And if it extends to storage, it's even better.

Required attendees: Peter Waskiewicz, Alex Williamson

Topic Lead: Alexander Graf <email address hidden>
Alexander has been a steady and long-time contributor to the QEMU and KVM projects. He maintains the PowerPC and s390x parts of QEMU as well as the PowerPC port of KVM. He tends to become active whenever areas seem weird enough for nobody else to touch them, such as nested virtualization, Mac OS virtualization or AHCI. Recently he has also been involved in kicking off openSUSE for ARM. His motto is なんとかなる ("it will work out somehow").
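For readers unfamiliar with the VFIO model discussed in the first talk above, here is a hedged C sketch of the basic userspace flow (container, then IOMMU group, then device), loosely following the pattern in the kernel's VFIO documentation. The IOMMU group number (26) and the PCI device address (0000:06:0d.0) are invented placeholders; on a real system they come from sysfs after binding the device to vfio-pci, and the program needs a kernel with VFIO support.

/* Hedged sketch of the basic VFIO userspace flow: open the container,
 * attach an IOMMU group, pick an IOMMU model, then get a device fd.
 * Group number and device address are placeholders.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

int main(void)
{
    int container, group, device;
    struct vfio_group_status group_status = { .argsz = sizeof(group_status) };

    /* 1. Open the container and sanity-check the API version. */
    container = open("/dev/vfio/vfio", O_RDWR);
    if (container < 0 || ioctl(container, VFIO_GET_API_VERSION) != VFIO_API_VERSION) {
        fprintf(stderr, "no usable VFIO container\n");
        return 1;
    }

    /* 2. Open the IOMMU group the device belongs to (placeholder group 26). */
    group = open("/dev/vfio/26", O_RDWR);
    if (group < 0) {
        fprintf(stderr, "cannot open IOMMU group\n");
        return 1;
    }
    ioctl(group, VFIO_GROUP_GET_STATUS, &group_status);
    if (!(group_status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
        fprintf(stderr, "group not viable (are all its devices bound to vfio-pci?)\n");
        return 1;
    }

    /* 3. Attach the group to the container and select an IOMMU model. */
    ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

    /* 4. Get a file descriptor for the device itself; region and interrupt
     *    information can then be queried through further device ioctls. */
    device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:06:0d.0");
    printf("device fd: %d\n", device);

    close(group);
    close(container);
    return device < 0;
}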

Participants:
attending alex-l-williamson (Alex Williamson)
attending amitshah (Amit Shah)
attending eblake (Eric Blake)
attending lpc-virt-lead (LPC Virtualization Lead)

Tracks:
  • Virtualization
Nautilus 5
Friday 11:55 - 12:40 PDT
Network Virtualization and Lightning Talks (continued)
Continuation of the previous session slot; the topics, abstracts and speakers are the same as those listed for the 11:00 - 11:45 slot above.

Participants:
attending alex-l-williamson (Alex Williamson)
attending amitshah (Amit Shah)
attending eblake (Eric Blake)
attending lpc-virt-lead (LPC Virtualization Lead)

Tracks:
  • Virtualization
Nautilus 5

PLEASE NOTE The Linux Plumbers Conference 2012 schedule is still in a draft format and is subject to changes at any time.