http://summit.ubuntu.com/lpc-2012/ < Friday >

08:15 - 09:15 PDT [PLENARY]
Breakfast
Grand Ballroom
09:10 - 09:55 PDT
Lubricating Kernel Plumbing with Social Engineering ( Refereed Presentations )
Abstract: One of the essential features of plumbing is that disparate groups of people have to work together to produce all of the pieces that fit together into the whole. Unfortunately, different groups tend to have different working methods, different procedures and even different ways of communicating. The result is that working together on several interlocking features can be much harder than you think. This talk will give an overview of some of the essential social aspects of working well with the kernel community: namely what a good change log is, how to write one, and how to convince a subsystem maintainer to trust you (and, by extension, your patches). We'll give a simple framework for interacting with the kernel community in ways that can increase your patch uptake, with actual proof points from experience at Parallels and some amusing anecdotes and examples of how and how not to go about this.

Audience: Almost anyone who needs to extend the kernel and therefore has to write a patch (or even those who just discover bugs during testing and want to submit patches to fix them).

Bio: James Bottomley is CTO of Server Virtualisation at Parallels and Linux kernel maintainer of the SCSI subsystem, PA-RISC Linux and the 53c700 set of drivers. He has made contributions in the areas of x86 architecture and SMP, filesystems, storage, and memory management and coherency. He is currently a Director on the Board of the Linux Foundation and Chair of its Technical Advisory Board. He was born and grew up in the United Kingdom. He went to university at Cambridge in 1985 for both his undergraduate and doctoral degrees. He joined AT&T Bell Labs in 1995 to work on Distributed Lock Manager technology for clustering. In 1997 he moved to the LifeKeeper High Availability project. In 2000 he helped found SteelEye Technology, Inc., as Software Architect and later as Vice President and CTO. He joined Novell in 2008 as a Distinguished Engineer at Novell's SUSE Labs, and Parallels in 2011.
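The mechanics behind the talk's framework can be tried locally. The following is a hedged sketch of producing a kernel-style patch with a self-describing change log and a Signed-off-by line; the subsystem prefix, author identity, and file are all hypothetical, not from the talk:

```shell
#!/bin/sh
# Create a throwaway repository and commit with a kernel-style change log.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.name "Jane Hacker"          # hypothetical author
git config user.email "jane@example.com"

echo 'int main(void) { return 0; }' > demo.c
git add demo.c

# -s appends the Signed-off-by trailer; the message body explains the
# problem and the reasoning, not just the mechanical change.
git commit -q -s -F - <<'EOF'
demo: fix hypothetical return-code handling

Describe the problem first: what breaks, under which workload, and why.
Then explain why this change is the right fix, not merely what it does.
Maintainers read hundreds of these; the log must stand on its own.
EOF

# Produce the mailable patch a subsystem maintainer would review.
git format-patch -1 -o "$repo" >/dev/null
ls "$repo"/0001-*.patch
```

The commit subject follows the "subsystem: summary" convention the talk alludes to; `git format-patch` emits the patch in the form expected on kernel mailing lists.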

Participants:
attending ezannoni-7 (LPC Submitter)

Tracks:
  • Refereed Presentations
Nautilus 1
Real Time Microconference ( Real Time )
http://wiki.linuxplumbersconf.org/2012:real-time
Schedule for this track:
  • Getting RCU further out of the way - Paul McKenney
  • Handling the thorns of mainline - Steven Rostedt
  • SCHED_DEADLINE: a new deadline based realtime scheduling policy - Peter Zijlstra
  • Review of the stable realtime release process - Frank Rowand
  • Lessons We Learned: Common mistakes while testing applications with RT - Luis Claudio Goncalves
  • State of RT/Thomas's thoughts - Thomas Gleixner

Participants:
attending paulmck (Paul McKenney)
attending tglx (tglx)

Tracks:
  • Real Time
Nautilus 2
Petitboot && /boot Unification ( Core OS )
=== Petitboot - A kexec based bootloader ===

Petitboot is a platform-independent bootloader based on the Linux kexec warm reboot mechanism. Petitboot supports loading kernel and initrd image files from any mountable Linux device, and can also load image files from the network using TFTP, NFS, HTTP, HTTPS, and SFTP. Petitboot can boot any operating system supported by kexec. In essence, petitboot is a user-friendly front end to the Linux kexec program. If installed as a standard user program, petitboot can be used as a convenient menu-based way to initiate a kexec system reboot. A petitboot package is already available for several Linux distributions.

Petitboot can also be used as a traditional 2nd-stage bootloader by including the petitboot program and necessary dependencies like busybox and kexec-tools in the embedded initramfs of a Linux kernel image, and converting that kernel image to a form that is loadable by the 1st-stage bootloader. The method of creating the initramfs, converting the Linux kernel image to a 2nd-stage bootloader image, and arranging for the petitboot program and its dependencies to be started on boot are all specific to the platform, the Linux distribution, and the 1st-stage bootloader. Discussions in this session can explore methods to prepare a petitboot 2nd-stage package for various distributions, requests for petitboot enhancements, etc.

Topic Lead: Geoff Levand <email address hidden>
Geoff is a Linux Architect at the Huawei R&D Center in San Jose, California. In his spare time he maintains the Petitboot bootloader, the TWIN windowing system, and Linux support for the Sony PlayStation 3 game console.

=== Peace, Love, and Unification in /boot ===

A simple filesystem layout for command line parameters, kernel and initramfs images. /boot might be managed by multiple distributions. These distributions fight over the boot loader configuration and don't know much about each other. In this session a proposal for a simple filesystem layout is presented, which can be used as the base for boot loaders without a special configuration file. This also removes the need to regenerate a configuration file (grub-mkconfig) after dropping in files via package managers.

Topic Lead: Harald Hoyer
Harald joined the Linux community in 1996. His first kernel patch was the module ip_masq_quake in 1997, followed by boot support for md raid devices. He joined Red Hat in July of 1999, working on projects ranging from udev, network daemons and CD recording packages to creating configuration tools, extending smolt and writing python interfaces. Lately he created a cross-distribution initramfs generator called dracut.
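The abstract does not spell the layout out; as one hedged illustration of a configuration-free scheme, boot entries could be keyed by machine ID and kernel version so a loader can enumerate them by walking directories. The directory and file names below are assumptions for illustration, not the actual proposal:

```shell
#!/bin/sh
# Build an illustrative /boot layout in a scratch directory.
boot=$(mktemp -d)
machine_id=0123456789abcdef0123456789abcdef   # stand-in for /etc/machine-id
version=3.6.0-1                               # hypothetical kernel version

mkdir -p "$boot/$machine_id/$version"
: > "$boot/$machine_id/$version/linux"        # kernel image would live here
: > "$boot/$machine_id/$version/initrd"       # initramfs image would live here
echo "quiet splash" > "$boot/$machine_id/$version/options"  # kernel command line

# A boot loader can discover every entry by walking the tree -- no
# generated configuration file (no grub-mkconfig step) is needed:
find "$boot" -mindepth 1 | sort
```

Package managers dropping a new kernel would simply create a new version directory; removing it removes the boot entry.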

Participants:
attending eblake (Eric Blake)
attending kaysievers (Kay Sievers)

Tracks:
  • Core OS
Nautilus 3
Android (tentative)
Nautilus 4
Classification/Shaping && HW Rate Limiting && Open-vswitch ( Networking )
Networking Bufferbloat Topics:
  1. Linux Traffic Classification and Shaping
  2. TC Interface to Hardware Rate Limiting
  3. Harmonizing Multiqueue, Vmdq, virtio-net, macvtap with open-vswitch

=== Linux Traffic Classification and Shaping ===

Linux provides advanced mechanisms for traffic classification and shaping. Central to this role is the queuing discipline. Recently we have done work allowing hardware to offload some of these traditionally CPU-intensive tasks, and have experimented with mechanisms to improve performance on many-core systems. Here we would like to highlight work such as the queuing scheduler 'mqprio' that has recently been accepted upstream, as well as share results from experimental work running lockless queuing disciplines and classifiers on many-core systems and fat pipes (10Gbps and greater).

Topic Lead: Tom Herbert

=== tc(?) interface to hardware transmit rate limiting ===

Intel 10 Gigabit hardware (and others) can provide transmit rate limiting. This presentation will discuss development of a new simple qdisc that can either provide all-software transmit rate limiting or, when installed over hardware that supports the capability, can directly configure the hardware's rate limiting. One problem that will need discussion is that the Intel hardware's rate limiting is per-queue. Another option besides a qdisc that could be discussed is direct ethtool control over the rate limiting.

Topic Lead: Jesse Brandeburg
Jesse is a senior Linux developer in the Intel LAN Access Division (Intel Ethernet). He has been with Intel since 1994, and has worked on the Linux e100, e1000, e1000e, igb, ixgb, and ixgbe drivers since 2002. His time is split between solving customer issues, performance tuning Intel's drivers, and working on bleeding-edge development for the Linux networking stack.

=== Harmonizing Multiqueue, Vmdq, virtio-net, macvtap with open-vswitch ===

Multiqueue virtio-net, macvtap and qemu are being worked on by Jason Wang and Krishna Kumar. Inspired by their work, I would like to extend it a step further and discuss introducing open-vswitch based flows for multiqueue-aware virtio-net queuing. This requires plumbing in openvswitch to utilize Linux tc to instantiate QoS flows per queue, in addition to the virtio-net multiqueue work. Open-vswitch also needs to incorporate support for opening tap fds multiple times so it can create a matching number of queues. To this end, openvswitch might want to become macvtap aware. There is a need to understand and discuss gaps in realizing openvswitch use cases in synchronization with features already implemented in macvtap and Linux tc. For instance, features like VEPA and VEB are implemented only in the macvtap/macvlan driver, but are useful for openvswitch based flows too. I would like to discuss features/gaps that require plumbing in these subsystems and related work.

*Required attendees (if present)*: Developers like Jason Wang, Krishna Kumar, Michael Tsirkin, Arnd Bergmann, Stephen Hemminger, Dave Miller, open-vswitch developers, netdev developers, libvirt developers, qemu developers

Topic Lead: Shyam Iyer <email address hidden>
Shyam Iyer is a senior software engineer in Dell's Operating Systems Advanced Engineering Group focused on Linux, with over 8 years of experience in developing Linux-based solutions. Apart from enabling Dell PowerEdge Servers and Storage for Enterprise Linux Operating Systems, he focuses on bridging new hardware technology use cases with emerging new Linux technologies. His interests encompass server hardware architecture, Linux kernel debugging, server platform bring-up, efficient storage, networking, virtualization architectures and performance tuning.
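As context for the qdisc discussion, existing tc syntax already expresses both shaping cases. The commands below are an illustrative configuration fragment (device name and rates are arbitrary; the two qdiscs are alternatives for the same device, not a pipeline, and both require root):

```shell
# All-software token-bucket shaping: what the proposed qdisc would do in
# software when the NIC cannot rate-limit in hardware.
tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms

# Alternatively, the recently merged mqprio scheduler: map the 16 skb
# priorities onto 4 traffic classes spread across 8 hardware tx queues
# (hw 0 keeps the mapping in software).
tc qdisc add dev eth0 root mqprio num_tc 4 \
    map 0 0 0 0 1 1 1 1 2 2 2 2 3 3 3 3 \
    queues 2@0 2@2 2@4 2@6 hw 0
```

The open question the session raises is which of these interfaces should grow a hardware-offload path, versus exposing the per-queue limits directly through ethtool.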

Participants:
attending therbert (Tom Herbert)

Tracks:
  • Networking
Nautilus 5
10:05 - 10:50 PDT
Real Time Microconference ( Real Time )
Continuation of the Real Time Microconference; see the 09:10 slot above for the track schedule.

Participants:
attending paulmck (Paul McKenney)
attending tglx (tglx)

Tracks:
  • Real Time
Nautilus 2
Android (tentative)
Nautilus 4
Classification/Shaping && HW Rate Limiting && Open-vswitch ( Networking )
Continuation of the Networking microconference; see the 09:10 slot above for the full topic descriptions.

Participants:
attending therbert (Tom Herbert)

Tracks:
  • Networking
Nautilus 5
Status of the ARM architecture ( Refereed Presentations )
The ARM architecture is one of the fastest-moving subsystems in the kernel at the moment, with over 400 individual contributors and close to 5000 changesets since Linux 3.0. This talk gives an update on the hot topics that are keeping everyone busy, and where we're headed in the future. The three most important technical problems we are working on are the conversion to device tree based booting, allowing multiple platforms to coexist in the same kernel, and moving code out of the architecture directory into new and existing subsystems that are maintained separately. At the same time as we are making these changes, many new SoCs based on ARM are being developed and submitted for inclusion into Linux. Aside from the technical work, the presentation will also describe the challenges of dealing with a subsystem of this scale, both in terms of working with a large number of people and of organizing the patches for upstream submission.

Arnd Bergmann co-maintains the arm-soc tree, together with Olof Johansson, which is where most of the ARM patches end up getting merged. He has been working for the IBM Linux Technology Center for ten years and is currently on assignment from IBM to the Linaro project. He is also the primary contact for new CPU architectures in the kernel and has contributed to almost every subsystem in the past.

Participants:
attending apm (Antti P Miettinen)
attending srwarren (Stephen Warren)

Tracks:
  • Refereed Presentations
Nautilus 1
11:00 - 11:45 PDT
Real Time Microconference ( Real Time )
Continuation of the Real Time Microconference; see the 09:10 slot above for the track schedule.

Participants:
attending paulmck (Paul McKenney)
attending tglx (tglx)

Tracks:
  • Real Time
Nautilus 2
Android (tentative)
Nautilus 4
Enhancing the Thermal Management Infrastructure in Linux ( Refereed Presentations )
Abstract: With the number of devices running Linux increasing day by day, the need for a robust thermal management infrastructure has become critical. Linux already has support for minimal thermal management, which often does not suffice for a complete thermal management solution. Recently, a lot of discussion has been happening on the mailing lists, and patches are being submitted, to enhance the existing thermal infrastructure in Linux. The intention of this talk is to discuss the thermal framework API/ABI changes, registration mechanisms for thermal and cooling drivers, methods to throttle devices, priority-based throttling, mechanisms to provide platform-specific data to the thermal framework, notification mechanisms (in-kernel and kernel-user space), ways to implement mapping between thermal zones and cooling devices, providing debugfs support for thermal statistics and data collection, etc. The target audience is Linux developers and users who face thermal issues and want to fix them (or are willing to help fix them).

Bio: Durga (R. Durgadoss) is a Software Engineer in the Intel Architecture Group at Intel Corporation. He has been working on thermal and burst current management for Intel Atom based SoC platforms. Durga has been actively involved in Linux kernel development since he joined Intel two years ago; he recently upstreamed thermal and current management drivers to the Linux kernel. He is currently working on platform-agnostic solutions to leverage the thermal framework inside the kernel.
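For orientation, the minimal support the abstract mentions is visible today through the thermal sysfs class. A small read-only sketch; zone names and availability vary by platform, and a machine (or container) may expose none:

```shell
#!/bin/sh
# Enumerate the thermal zones the kernel currently exposes via sysfs.
# Each zone reports a type (e.g. a sensor name) and a temperature in
# millidegrees Celsius.
zones=0
for zone in /sys/class/thermal/thermal_zone*; do
    [ -e "$zone/type" ] || continue
    zones=$((zones + 1))
    echo "$(basename "$zone"): type=$(cat "$zone/type") temp_mC=$(cat "$zone/temp")"
done
echo "thermal zones found: $zones"
```

The enhancements under discussion (priority-based throttling, zone-to-cooling-device mapping, notifications) would build on top of this existing ABI rather than replace it.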

Participants:
attending apm (Antti P Miettinen)
attending ezannoni-7 (LPC Submitter)

Tracks:
  • Refereed Presentations
Nautilus 1
PM constraints: Tegra ( Constraint Framework )
NVIDIA Tegra power management features DVFS, sleep states and switching between low-power and high-performance CPU clusters. In the current Linux kernel the central subsystems affecting the state of CPUs are cpuidle, cpufreq and CPU hotplug. We propose a new framework, cpuquiet, for coordinated control of CPU cores: allowing migration between CPU clusters, maximizing available performance in the presence of EDP and thermal constraints, and maximizing the utilization of low power states in the presence of CPU power state constraints. We will also discuss how PM QoS could be extended to allow more efficient power management, utilizing e.g. device context and application knowledge to guide the behavior of the different subsystem governors.

Topic Lead: Antti P Miettinen
Energy Efficiency Engineer at NVIDIA, working on the Tegra Linux kernel; previously a mobile device power management researcher at Nokia Research Center.

Topic Lead: Peter De Schrijver
NVIDIA Tegra Linux kernel engineer and Debian developer; previously worked on power management in Maemo for Nokia.
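The hotplug primitive that a core governor like the proposed cpuquiet would drive is already visible from userspace. A minimal read-only sketch (taking a core offline, shown as a comment, needs root and a hotplug-capable kernel):

```shell
#!/bin/sh
# Read the CPU-hotplug state that a core-management governor would manage.
if [ -r /sys/devices/system/cpu/online ]; then
    online=$(cat /sys/devices/system/cpu/online)
else
    online="(sysfs not available)"
fi
echo "online CPUs: $online"

# Taking a core offline is the existing primitive such a governor builds on:
#   echo 0 > /sys/devices/system/cpu/cpu1/online
```

cpuquiet's contribution would be deciding *when* to toggle cores, coordinating with cpuidle, cpufreq, and cluster-switch constraints instead of leaving that to ad hoc userspace scripts.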

Participants:
attending apm (Antti P Miettinen)
attending mark97229 (Mark Gross)
attending srwarren (Stephen Warren)

Tracks:
  • Constraint Framework
Nautilus 3
Network Virtualization and Lightning Talks ( Virtualization )
  1. VFIO - Are We There Yet?
  2. KVM Network performance and scalability
  3. Enabling overlays for Network Scaling
  4. Marrying live migration and device assignment
  5. Lightning Talks
     - QEMU disaggregation - Stefano Stabellini
     - Xenner - Alexander Graf
     - From Server to Mobile: Different Requirements/Different Solution - Eddie Dong, Jun Nakajima

=== VFIO - Are We There Yet? ===

VFIO is a new userspace driver interface intended to generically enable assignment of devices into qemu virtual machines. VFIO has had a bumpy road upstream and is currently in its second redesign. In this talk we'll look at the new design, the status of the code, how to make use of it, and where it's going. We'll also look back at some of the previous designs to show how we got here. This talk is intended for developers and users interested in the evolution of device assignment in qemu and kvm, as well as those interested in userspace drivers.

Topic Lead: Alex Williamson
Alex has been working on virtualization for over 5 years and concentrates on the I/O side of virtualization, especially assignment of physical devices to virtual machines. He is a member of the Red Hat Virtualization team.

=== KVM Network performance and scalability ===

In this presentation we will discuss ongoing work to improve KVM networking I/O performance and scalability. We will share performance numbers taken using both vertical (multiple interfaces) and horizontal (many VMs) scaling to highlight existing bottlenecks in the KVM stack, as well as improvements observed with pending changes. These experiments have shown that impressive gains can be obtained by using per-cpu vhost threads and leveraging hardware offloads. These offloads include flow steering and interrupt affinity. This presentation intends to highlight ongoing research from various groups working on the Linux kernel, KVM, and the upper-layer stack. Finally, we will propose a path to include these changes in the upstream projects. This should be of interest to KVM developers, kernel developers, and anyone using a virtualized environment.

Topic Lead: John Fastabend <email address hidden>
Required attendees: Vivek Kashyap, Shyam Iyer

=== Enabling overlays for Network Scaling ===

Server virtualization in the data center has increased the density of networking endpoints in a network. Together with the need to migrate VMs anywhere in the data center, this has surfaced network scalability limitations (layer 2, cross-subnet migrations, network renumbering). The industry has turned its attention towards overlay networks to solve these scalability problems. The overlay network concept defines a domain connecting virtual machines belonging to a single tenant or organization. This virtual network may be built across the server hypervisors, which are connected over an arbitrary topology. This talk will give an overview of the problems sought to be solved through the use of overlay networks, discuss the active proposals such as VXLAN, NVGRE, and DOVE Network, and delve into options for implementing the solutions on Linux.

Topic Lead: Vivek Kashyap <email address hidden>
Vivek works in IBM's Linux Technology Center. Vivek has worked on Linux resource management, delay accounting, and energy & hardware management, authored InfiniBand and IPoIB networking protocols, and worked on standardizing and implementing the IEEE 802.1Qbg protocol on network switching.

=== Marrying live migration and device assignment ===

Device assignment has been around for quite some time now in virtualization. It's a nice technique to squeeze as much performance out of your hardware as possible, and with the advent of SR-IOV it's even possible to pass a "virtualized" fraction of your real hardware to a VM, not the whole card. The problem, however, is that you lose a pretty substantial piece of functionality: live migration.

The most common approach used to counteract this for networking is to pass 2 NICs to the VM: one that's emulated in software and one that's the actual assigned device. It's the guest's responsibility to treat the two as a union, and the host needs to be configured in a way that allows packets to flow the same way through both paths. When migrating, the assigned device gets hot-unplugged and a new one goes back in on the new host. However, that means that we're exposing crucial implementation details of the VM to the guest: it knows when it gets migrated.

Another approach is to do the above, but combine everything in a single guest driver, so it ends up invisible to the guest OS. That quickly becomes a nightmare too, because you need to reimplement network drivers for your specific guest driver infrastructure, at which point you're most likely violating the GPL anyway.

So what if we restrict ourselves to a single NIC type? We could pass an emulated version of that NIC into our guest, or pass through an assigned device. They would behave the same. That also means that during live migration, we could switch between emulated and assigned modes without the guest even realizing it. But maybe others have more ideas on how to improve the situation? The less guest-intrusive it is, the better the solution usually becomes. And if it extends to storage, it's even better.

Required attendees: Peter Waskiewicz, Alex Williamson

Topic Lead: Alexander Graf <email address hidden>
Alexander has been a steady and long-time contributor to the QEMU and KVM projects. He maintains the PowerPC and s390x parts of QEMU as well as the PowerPC port of KVM. He tends to become active whenever areas seem weird enough for nobody else to touch them, such as nested virtualization, Mac OS virtualization or AHCI. Recently, he has also been involved in kicking off openSUSE for ARM. His motto is なんとかなる.
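The two-NIC union described above is typically realized inside the guest with the kernel's bonding driver in active-backup mode, so traffic fails over when the assigned device is hot-unplugged. An illustrative configuration fragment; the interface names are assumptions, and the commands require root inside the guest:

```shell
# Guest side: join the emulated NIC and the assigned VF under one bond.
ip link add bond0 type bond mode active-backup

ip link set eth0 down
ip link set eth0 master bond0   # emulated (e.g. virtio) NIC
ip link set eth1 down
ip link set eth1 master bond0   # assigned SR-IOV VF

ip link set bond0 up
```

This is exactly the setup the talk criticizes: it works, but the guest can observe the hot-unplug and therefore knows it is being migrated.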

Participants:
attending alex-l-williamson (Alex Williamson)
attending amitshah (Amit Shah)
attending eblake (Eric Blake)
attending lpc-virt-lead (LPC Virtualization Lead)

Tracks:
  • Virtualization
Nautilus 5
11:55 - 12:40 PDT
Network Virtualization and Lightning Talks ( Virtualization )
Continuation of the Network Virtualization microconference; see the 11:00 slot above for the full topic descriptions.

Participants:
attending alex-l-williamson (Alex Williamson)
attending amitshah (Amit Shah)
attending eblake (Eric Blake)
attending lpc-virt-lead (LPC Virtualization Lead)

Tracks:
  • Virtualization
Nautilus 5
Polyhedral optimizations for LLVM ( LLVM )
Polly is a polyhedral optimizer run as a plug-in to the LLVM compiler. Polly optimizes loops for data locality, auto-vectorization, and auto-parallelization. Although this tool is still under development, it has been gaining traction in the community over the last few months and its quality has increased significantly. This session will provide an overview of its current state and discuss the roadmap for production utilization.

Topic Lead: Zino Benaissa
Zino Benaissa is a senior staff engineer at QuIC, Qualcomm's open source subsidiary, responsible for developing back-end optimizations for the LLVM compiler. Before joining Qualcomm in 2011, he worked at Intel for 11 years, leading the effort to incorporate Intel micro-architectural tunings into the Microsoft Visual C++ compiler. In particular, he worked on support for Advanced Vector Extensions (AVX) and collaborated with Microsoft compiler architects and engineers to bring up the auto-vectorizer in the latest release of the Microsoft (Dev11) compiler.

Tracks:
  • LLVM
Nautilus 2
Not Attending CoreOS: Initramfs Systemd && libkmod ( Core OS )
=== Systemd in the Initramfs === Introduction of a systemd-based initramfs for booting systems that need an initramfs. Topic Lead: Harald Hoyer Harald joined the Linux community in 1996. His first kernel patch was the module ip_masq_quake in 1997, followed by boot support for md raid devices. He joined Red Hat in July of 1999, working on projects ranging from udev, network daemons and CD recording packages to creating configuration tools, extending smolt and writing Python interfaces. Lately he created a cross-distribution initramfs generator called dracut. === From libabc to libkmod: designing core libraries === At Kernel Summit last year, Kay and Lennart put together a wish list for Linux. From those discussions libabc was born, as a way to help people design core libraries and thereby help userspace make use of Linux features. Libkmod is the first library to use their library skeleton to implement one of the items on the wish list: create a library to manage kernel modules and refactor module-init-tools to use it. In this discussion we will share the experience gained with this task, how libabc helped kmod replace module-init-tools on all major distributions in less than half a year, and how other core developers could benefit from it. Topic Lead: Lucas De Marchi <email address hidden> Lucas started working with Linux at the University of São Paulo while doing his undergraduate course in computer engineering. He completed his master's degree at Politecnico di Milano in 2009. His research focused on optimizations to the real-time Linux scheduler on multi-core architectures. In 2010, Lucas joined ProFUSION Embedded Systems and continued to work with embedded systems, getting involved with several open source projects such as BlueZ, oFono, ConnMan, EFL, WebKit, systemd and others. Currently he's the lead developer of kmod, which is the subject of this talk.

Participants:
attending eblake (Eric Blake)
attending kaysievers (Kay Sievers)

Tracks:
  • Core OS
Nautilus 3
Not Attending An RCU-protected scalable trie for user-mode use ( Scaling )
In the past year, the RCU lock-free hash table has been polished and made production-ready within the Userspace RCU project. It performs and scales really well for updates, key lookups and traversals in no particular key order, but does not fulfill ordered-key-traversal use cases. This talk presents ongoing work on an ordered data structure that supports RCU reads: a cache-efficient, compact, fast, and scalable trie, inspired by Judy arrays. Topic Lead: Mathieu Desnoyers <email address hidden> Mathieu Desnoyers' main contributions are in the areas of tracing (monitoring/performance analysis/debugging) and scalability, both at the kernel and user-space levels. He is maintainer of the LTTng project and the Userspace RCU library. He works in close collaboration with the telecommunication industry, many Linux distributions, and with customers developing hardware scaling from small embedded devices to large-deployment servers. He is CEO and Senior Software Architect at EfficiOS.

Participants:
attending mathieu-desnoyers (Mathieu Desnoyers)
attending paulmck (Paul McKenney)

Tracks:
  • Scaling
Nautilus 4
12:40 - 14:10 PDT
Lunch
14:10 - 14:55 PDT
Not Attending Systemd for the User Session ( Core OS )
It's a little-known secret that systemd is capable of starting, controlling and regulating much more than just system services: it can easily start an entire desktop UI. Not many people have sat down, implemented this, and worked out the problems of starting an X server, a few UI components, the session bus and D-Bus services for normal users with the mechanisms that systemd provides. The benefits are obvious: systemd provides excellent service monitoring and restarting capabilities, provides socket and D-Bus activation for relevant services, and overall improves desktop startup by allowing user services to start well before core services like Xorg or Wayland. In effect, we're saying goodbye to XDG autostart entirely, and getting back reliability and scalability. We converted several desktop environments, including Tizen's Mobile UI, Xfce4, Enlightenment and more, to systemd user sessions. We "pop the hood" and take a look at the implications for startup, what's possible to further improve in session startup and where we can do better. Topic Lead: Auke Kok <email address hidden> Auke is a software engineer at Intel's Open Source Technology Center, and has been attempting to make Linux boot faster since 2007. In 2008, he co-presented the "5-second boot" with Arjan van de Ven at the first LPC. Since then, Auke has worked on further improving the Linux Core OS start sequence, first for Moblin and later for MeeGo, where he made the first switch to systemd. Auke now works on Tizen, which heavily integrates systemd in the Core OS.
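A user service under systemd looks just like a system one. A hypothetical unit sketch (the unit name, binary path and ordering target are made up for illustration), placed under ~/.config/systemd/user/, might be:

```ini
# Illustrative user unit: ~/.config/systemd/user/panel.service
# Restart= gives a desktop component the same supervision that
# system services already enjoy; default.target is what a systemd
# user instance brings up at session start.
[Unit]
Description=Desktop panel (illustrative example)

[Service]
ExecStart=/usr/bin/my-panel
Restart=on-failure

[Install]
WantedBy=default.target
```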

Participants:
attending eblake (Eric Blake)

Tracks:
  • Core OS
Nautilus 2
Not Attending UEFI Open Discussion

Participants:
attending srwarren (Stephen Warren)
Nautilus 3
Not Attending ASMP: Improving performance through dedication of OS tasks to specific processors ( Scaling )
These days processors increase performance by adding cores rather than raising clock speed, so algorithms in general need to be able to work in a distributed way. In the kernel we have tried to move to more fine-grained locking in order to increase performance. However, with that approach locking overhead grows when highly concurrent processing occurs in the kernel: synchronization becomes expensive. This session investigates how performance is affected if we do the opposite: use coarse-grained locking to perform large chunks of work on a single core, which means that locking overhead is reduced and the processor caches are fully available for a significant piece of work. Topic Lead: Christoph Lameter <email address hidden> Christoph has been contributing to various core kernel subsystems over the years and created much of the NUMA infrastructure in the Linux kernel while working as a Principal Engineer for Silicon Graphics on adapting Linux for use in supercomputers. Scaling Linux is a focus of his work, both in terms of performance for HPC (High Performance Computing) as well as for low latency in HFT (High Frequency Trading). Christoph maintains the slab allocators and the per-CPU subsystem in the Linux kernel and currently works as an architect for a leading HFT company.

Participants:
attending apm (Antti P Miettinen)
attending mathieu-desnoyers (Mathieu Desnoyers)
attending paulmck (Paul McKenney)

Tracks:
  • Scaling
Nautilus 4
15:05 - 15:50 PDT
Not Attending UEFI Open Discussion

Participants:
attending srwarren (Stephen Warren)
Nautilus 3
Not Attending Application of deadline scheduling for powersaving strategies ( Scheduler )
Power consumption is an important issue in the design of real-time embedded and general-purpose systems. How to reduce power consumption, enabling longer lifetime for battery-powered systems, is however still an open problem. A real-time scheduling algorithm based on the deadline concept (like SCHED_DEADLINE) can be used to provide power-aware scheduling strategies. In this talk I will first present work that has been done in the past on the subject. Then I will propose some simple ideas on how deadline scheduling could be effective in reducing power consumption. Topic Lead: Juri Lelli

Participants:
attending paulmck (Paul McKenney)
attending vincent-guittot (Vincent Guittot)

Tracks:
  • Scheduler
Nautilus 2
Not Attending Unavailable
Room Unavailable
Nautilus 4
Not Attending Unavailable
Room Unavailable
Nautilus 5
16:00 - 17:00 PDT [PLENARY]
Not Attending LPC Closing Discussion
Open session for readout from track leads, lightning talks and general discussion
Nautilus 4/5
18:00 - 21:00 PDT [PLENARY]
Not Attending Evening Event @ El Vitral
Offsite

PLEASE NOTE The Linux Plumbers Conference 2012 schedule is still in a draft format and is subject to changes at any time.