http://summit.ubuntu.com/lpc-2012/ Nautilus 4

Wednesday, 09:50 - 10:35 PDT
Unavailable
Room Unavailable
Nautilus 4
Wednesday, 10:45 - 11:30 PDT
PM Constraints: Intel ( Constraint Framework )
Power Management Constraints Topics: 1) Power vs. Performance 2) Priority-Based PM QoS Constraints Framework

=== Power vs. Performance ===
The existing PM QoS framework already addresses the need for a "minimum" quality of service for basic hardware such as the CPU, networking, and DMA. As the number of discrete hardware units on a SoC grows, the PM QoS layer should arguably scale with it. The second part of the problem is scaling "up" as well as "down": we can mirror PM QoS constraints with performance QoS constraints, allowing independent devices to scale the overall system up into a performance mode. Performance QoS can also be used by platform-specific constraints, such as thermal management throttling device/SoC features, or by accelerated workloads that need a minimum CPU/SoC operating point to scale SoC features. Examples include the memory-throughput issues seen on TI platforms, raising the CPU frequency as discussed on the mailing lists by NVIDIA developers, and imposing constraints on devices such as the CPU or LCD for thermal management. A minimal sketch of the existing kernel PM QoS request API that these proposals would extend follows this abstract.
Topic Lead: Sundar Iyer
Sundar has worked on power management, tunings, and efficiency for Linux at ST-Ericsson and Intel.

=== Priority-Based PM QoS Constraints Framework ===
Some workloads require particular CPU frequencies, and those workloads should be able to tell the system which frequency is optimal. At the same time there may be thermal constraints that must not be breached, requiring frequency throttling, and a user may want to throttle or boost the frequency manually. All of these frequency constraints become difficult to manage, because the system must decide which one to honor. To solve this problem, we propose a priority-based PM QoS for frequency management: high-priority constraints could be reserved for thermal throttling, medium priority for user throttling or boosting, and low priority for per-process PM QoS constraints.
Topic Lead: Illyas Mansoor
Illyas works in the power management domain at Intel and has contributed specifically to Intel SoC power management in the Linux kernel.
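The sketch below is not part of the session material; it is a minimal illustration, written as a kernel module, of how a driver uses the existing PM QoS request API that the proposals above would extend. The 20-microsecond latency bound and the driver names are illustrative assumptions only.

/*
 * Minimal sketch of the existing kernel PM QoS request API.
 * The 20 us bound is an arbitrary example value.
 */
#include <linux/module.h>
#include <linux/pm_qos.h>

static struct pm_qos_request my_latency_req;

static int __init my_driver_init(void)
{
        /* Ask the PM core to keep CPU/DMA wakeup latency at or below
         * 20 us, e.g. while a latency-sensitive transfer is in flight. */
        pm_qos_add_request(&my_latency_req, PM_QOS_CPU_DMA_LATENCY, 20);
        return 0;
}

static void __exit my_driver_exit(void)
{
        /* Drop the constraint so the CPU may enter deep idle states again. */
        pm_qos_remove_request(&my_latency_req);
}

module_init(my_driver_init);
module_exit(my_driver_exit);
MODULE_LICENSE("GPL");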

Participants:
attending apm (Antti P Miettinen)
attending lorenzo-pieralisi (Lorenzo Pieralisi)
attending mark97229 (Mark Gross)

Tracks:
  • Constraint Framework
Nautilus 4
Wednesday, 11:40 - 12:25 PDT
The 0.5 second software update ( Core OS )
Three years ago at the inaugural Linux Plumbers Conference, we showed that a Linux OS should boot to the UI in 5 seconds or less. Since then, much has happened to take this from prototype to generally deployed production technology. However, booting is only one of the operations whose performance matters to users. The performance and capabilities of software update and its associated tooling have an even bigger impact on the user experience than booting in 2 seconds does. In this presentation I'll show what a 0.5 second software update looks like. However, since it'd be boring to just show that you can be 300 times better than the yum tooling, I'll also show what options this capability opens up for those who develop operating systems to serve their user base better. The code and project that showcase these capabilities will be revealed at the LPC presentation. I'll likely have a few other surprises ready by LPC as well; there's still time left between now and LPC.
Topic Lead: Arjan van de Ven
Arjan van de Ven is a Sr. Principal Engineer at Intel's Open Source Technology Center, where he works on various things Linux, ranging from general kernel technology to power and performance tools and optimizations.

Participants:
attending eblake (Eric Blake)
attending kaysievers (Kay Sievers)

Tracks:
  • Core OS
Nautilus 4
Wednesday, 14:00 - 14:45 PDT
Containers Microconference ( Containers )
Containers Topics: 1) Status of cgroups 2) CRIU - Checkpoint/Restore in Userspace 3) Kernel Memory accounting - memcg 4) ploop - container in a file 5) /proc virtualization 6) Building application sandboxes on top of LXC and KVM with libvirt 7) Syslog Virtualization 8) Time Virtualization

=== Status of cgroups ===
cgroups are a crucial part of container technologies, providing resource isolation and allowing multiple independent tasks to run without harming each other (too much) when it comes to shared resource usage. It is, however, a quite controversial piece of the Linux kernel, with a lot of work being done lately to change that. We should discuss what kind of changes core cgroup users should expect and how to cope with them.

=== CRIU - Checkpoint/Restore in Userspace ===
Checkpoint/restore functionality has been proposed for merging into the kernel many times, all without much success. CRIU is an attempt to solve the problem in userspace, augmenting kernel functionality only where absolutely needed.

=== Kernel Memory Accounting - memcg ===
The cgroup memory controller is already well established in the kernel. Work is now under way to add kernel memory tracking to it, and patches exist for part of it. We could use this session to explore possibly uncovered areas and discuss the use cases.

=== ploop - Container in a File ===
When dealing with container filesystems, some problems arise. How do we limit the amount of data a container uses, given that container-aware quota solutions are not there yet? Also, when migrating a container to another destination, one usually remakes the filesystem by copying over the files (unless shared storage is assumed). This means that inode numbers are not preserved, and it also creates a bad I/O pattern of randomly copying over a zillion tiny files, which is much slower than copying a block-device image (sequential I/O). Another problem with all containers sharing the host filesystem is the non-scalable journal on the host filesystem: as soon as one container fills the journal, the others must wait for it to be freed.

=== /proc Virtualization ===
When we run a fully featured container, we need a containerized view of the proc filesystem. The current upstream kernel can do that for the process tree and a few other things, but that is just not enough. For running top, for instance, one needs to know not only that, but also how much CPU time each container used in total, how much of that is system time, for how long that container was off the CPU, etc. That information is available, or will be (some of it is in pending patches), in different cgroups. What are the best ways to achieve this? What problems will we face? Is there any value in continuing to push this functionality into the kernel, or would a userspace implementation suffice?

=== Building Application Sandboxes on Top of LXC and KVM with libvirt ===
This session will provide an overview of the recent virt-sandbox project, which aims to provide sandboxing of applications via the use of lightweight guests (both KVM and LXC). Discussion will cover techniques such as sVirt (use of SELinux labeling to prevent the sandboxed virtual machine/container from altering unauthorized resources in the host), filesystem sharing & isolation (to allow the guest to share a specified portion of the host file system), integration with systemd for containerized application management, and tools to facilitate setup of sandboxed application service environments.
Topic Lead: Daniel Berrange
Daniel has been the lead architect & developer of libvirt for more than 5 years, the original developer of the libvirt-sandbox, virt-manager & entangle applications, and a part-time hacker on QEMU, KVM, OpenStack, GTK-VNC, SPICE and more.

=== Syslog Virtualization ===
There is currently one buffer in the system that holds all messages generated by syslog. It would be ideal to have access to the syslog buffer in a per-container manner for isolation purposes.

=== Time Virtualization ===
Containers should have a stable notion of time that is independent of the host. This is especially important when live-migrating them. It should also be perfectly possible for a process calling gettimeofday() in a container to get a different value than the host system. We will discuss strategies to achieve that. A minimal sketch of the kernel namespace isolation these topics build on follows this abstract.
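The sketch below is not part of the session material; it is a minimal illustration, assuming a privileged (CAP_SYS_ADMIN) caller, of the clone()-based namespace isolation that LXC, libvirt-lxc, and the /proc and syslog virtualization topics above build on. Error handling is trimmed for brevity.

/*
 * Minimal sketch: run a shell in new PID, UTS and mount namespaces.
 * Requires root (CAP_SYS_ADMIN).
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];

static int child_main(void *arg)
{
        /* Inside the new namespaces: this process is PID 1 and has
         * its own hostname, invisible to the host's UTS namespace. */
        sethostname("sandbox", 7);
        printf("child sees itself as pid %d\n", (int)getpid());
        execlp("sh", "sh", (char *)NULL);
        return 1;
}

int main(void)
{
        pid_t pid = clone(child_main, child_stack + sizeof(child_stack),
                          CLONE_NEWPID | CLONE_NEWUTS | CLONE_NEWNS | SIGCHLD,
                          NULL);
        if (pid < 0) {
                perror("clone");
                return 1;
        }
        waitpid(pid, NULL, 0);
        return 0;
}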

Tracks:
  • Containers
Nautilus 4
Wednesday, 14:50 - 15:35 PDT
Containers Microconference ( Containers )
This slot continues the Containers Microconference; see the 14:00 session above for the full list of topics, abstracts, and topic leads.

Tracks:
  • Containers
Nautilus 4
Wednesday, 15:45 - 16:30 PDT
Containers Microconference ( Containers )
This slot continues the Containers Microconference; see the 14:00 session above for the full list of topics, abstracts, and topic leads.

Tracks:
  • Containers
Nautilus 4
Thursday, 08:30 - 09:25 PDT
Breakfast
Nautilus 4
Thursday, 09:30 - 10:15 PDT
System Storage Manager; A single tool to manage your storage ( File and Storage Systems )
In more sophisticated enterprise storage environments, management with Device Mapper (dm), Logical Volume Manager (LVM), or Multiple Devices (md) is becoming increasingly difficult. With file systems added to the mix, the number of tools needed to configure and manage storage has grown so large that it is simply not user friendly. With so many options for a system administrator to consider, the opportunity for errors and problems is large. The btrfs administration tools have shown us that storage management can be simplified, and we are working to bring that ease of use to Linux storage in general. I would like to introduce a new, easy-to-use command-line interface to manage your storage using various technologies like lvm, btrfs, crypt and more. System Storage Manager is currently under development, with lots of features already available and more to come. I will discuss those features and the problems we are facing in getting the project ready. I will also describe the scope of this project as well as where we see it in the future, and of course gather useful feedback from the audience.
Topic Lead: Lukas Czerner (<email address hidden>)
Lukas is one of the core ext4 developers employed by Red Hat, Inc., located in the Czech Republic. He has been involved in performance evaluation of Linux discard support and examined alternative approaches, which led to establishing the interface for filesystem batched discard support (aka FITRIM) as well as its implementation for the ext4/3 filesystems. He is actively working on improvements to the ext4 file system and its userspace utilities. He is currently working on simplifying the construction and administration of heterogeneous storage using various technologies like dm, md, file systems and btrfs, which resulted in the System Storage Manager tool.

Participants:
attending ricwheeler (Ric Wheeler)

Tracks:
  • File and Storage Systems
Nautilus 4
Thursday, 10:25 - 11:10 PDT
External Storage Array Management API ( File and Storage Systems )
The ability to manage external storage arrays and exploit the advanced features they provide in a programmatic way is an important capability for Linux. However, achieving storage array API nirvana isn't an easy thing to do, with numerous obstacles in its path (e.g. licensing, terminology, features). In this session I would like to give a brief introduction to the libStorageMgmt project I am working on, discuss the difficulties it has encountered, and have an open discussion on how it can best work with other storage management components.
Topic Lead: Tony Asleson
Tony has a long and varied history of working in the storage industry and with Linux. When he isn't sitting in front of his computer screen or spending time with his family, he can be found rock climbing and touring the back roads of MN on two wheels. He is currently a member of the Red Hat kernel storage team and lives in Rochester, MN.

Participants:
attending ricwheeler (Ric Wheeler)

Tracks:
  • File and Storage Systems
Nautilus 4
Thursday, 11:20 - 12:05 PDT
Anaconda, Snapper and Booting ( File and Storage Systems )
Topic Lead: Peter Jones
Topic Lead: Matthias G. Eckermann
Topic Lead: David Cantrell

Participants:
attending ricwheeler (Ric Wheeler)

Tracks:
  • File and Storage Systems
Nautilus 4
Thursday, 13:30 - 14:15 PDT
Configuration and Management Open Discussion ( File and Storage Systems )

Participants:
attending ricwheeler (Ric Wheeler)

Tracks:
  • File and Storage Systems
Nautilus 4
Thursday, 14:25 - 15:10 PDT
Local File Systems ( File and Storage Systems )
Topic Lead: Chris Mason
Topic Lead: Ric Wheeler

Participants:
attending ricwheeler (Ric Wheeler)

Tracks:
  • File and Storage Systems
Nautilus 4
Thursday, 15:20 - 16:05 PDT
Hinting vs Heuristics: Plumbing I/O Cache Hints Through the Linux Storage Stack ( File and Storage Systems )
The Linux storage and wider storage communities are actively investigating ways to express and leverage the varying performance characteristics of storage devices. A storage device may do a better job servicing the I/O stream if it can discern details deeper than just the currently requested block address range. The T10 committee is in the process of specifying a hinting scheme to classify the in-flight data in a SCSI request. Similarly, a filesystem can do a better job of allocation if it is given some explicit hints from the application about how a file will be used; ext4 is investigating an O_HOT/O_COLD hint that applications could use to express a coarse quality of service for a given file. At the same time, bcache has arrived as a stacking block device driver that uses heuristics to guide the decision of whether an I/O request should be cached in a high-performance device or passed on to the next tier in the storage hierarchy. This presentation investigates an approach to plumbing hints through the filesystem to be consumed by a modified bcache block device. The tradeoffs between hinting and heuristics, as well as a proposed mechanism for specifying cache policy in userspace, are explored. A minimal sketch of the closest hint mechanism applications can use today, posix_fadvise(), follows this abstract. The target audience is kernel filesystem/block developers and application developers who want to express caching or other policies to a storage configuration.
Topic Lead: Dan Williams (<email address hidden>)
Dan is a Linux storage developer at Intel. He contributed support for offloading raid5/6 calculations, developed BIOS-RAID support for md/mdadm, and currently maintains the libsas-based isci driver. He has presented at the Ottawa Linux Symposium and the Linux Storage Summit, and authored an article for LWN.net.
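The sketch below is not part of the session material; it shows the coarse per-file hints applications can already express today through posix_fadvise(). The proposed O_HOT/O_COLD open flags and the T10 hints discussed above are proposals and are not used here.

/*
 * Minimal sketch: advise the kernel about a file's access pattern.
 */
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        if (argc < 2) {
                fprintf(stderr, "usage: %s <file>\n", argv[0]);
                return 1;
        }

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* Tell the kernel we will read the whole file sequentially, so it
         * may read ahead aggressively ... */
        posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

        /* ... and that we do not expect to reuse the data, so the page
         * cache need not keep it around after it has been consumed. */
        posix_fadvise(fd, 0, 0, POSIX_FADV_NOREUSE);

        /* (read the file here) */

        close(fd);
        return 0;
}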

Participants:
attending ricwheeler (Ric Wheeler)

Tracks:
  • File and Storage Systems
Nautilus 4
Thursday, 16:30 - 17:15 PDT
NFS Advanced Projects ( File and Storage Systems )
Topic Lead: Jeff Layton
Jeff is a long-time Linux enthusiast. After working as a Unix system administrator for almost a decade, he joined the Red Hat kernel engineering team in 2007, focusing mainly on NFS and CIFS. He is also a member of the worldwide Samba team by virtue of his work on the Linux kernel CIFS filesystem.
Topic Lead: Bruce Fields
Bruce has worked on the Linux NFS code since 2002, first at the University of Michigan and then, since 2010, at Red Hat. He maintains the kernel's NFS server, contributes to the IETF's NFSv4 working group, and generally enjoys solving problems wherever they turn up.
Topic Lead: Chuck Lever

Participants:
attending ricwheeler (Ric Wheeler)

Tracks:
  • File and Storage Systems
Nautilus 4
Thursday, 17:25 - 18:10 PDT
NFS Advanced Projects ( File and Storage Systems )
This slot continues the NFS Advanced Projects session; see the 16:30 session above for the topic leads and their bios.

Participants:
attending ricwheeler (Ric Wheeler)

Tracks:
  • File and Storage Systems
Nautilus 4
Friday, 09:10 - 09:55 PDT
Android (tentative)
Nautilus 4
Friday, 10:05 - 10:50 PDT
Android (tentative)
Nautilus 4
Friday, 11:00 - 11:45 PDT
Android (tentative)
Nautilus 4
Friday, 11:55 - 12:40 PDT
An RCU-protected scalable trie for user-mode use ( Scaling )
In the past year, the RCU lock-free hash table has been polished and made production-ready within the Userspace RCU project. It performs and scales really well for updates, key lookups, and traversals in no particular key order, but it does not cover ordered-key-traversal use cases. This talk presents ongoing work on an ordered data structure that supports RCU reads: a cache-efficient, compact, fast, and scalable trie, inspired by Judy arrays. A minimal sketch of the userspace RCU reader/updater pattern this work builds on follows this abstract.
Topic Lead: Mathieu Desnoyers <email address hidden>
Mathieu Desnoyers' main contributions are in the areas of tracing (monitoring/performance analysis/debugging) and scalability, both at the kernel and user-space levels. He is the maintainer of the LTTng project and the Userspace RCU library. He works in close collaboration with the telecommunication industry, many Linux distributions, and customers developing hardware that scales from small embedded devices to large-deployment servers. He is CEO and Senior Software Architect at EfficiOS.
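The sketch below is not part of the session material; it is a minimal illustration of the basic userspace RCU reader/updater pattern (liburcu, default flavor, linked with -lurcu) that the lock-free hash table and the proposed trie build on. The config structure and values are illustrative assumptions; multi-threading is omitted for brevity.

/*
 * Minimal sketch: RCU-protected pointer read and replace with liburcu.
 */
#include <urcu.h>   /* liburcu, default flavor; link with -lurcu */
#include <stdio.h>
#include <stdlib.h>

struct config {
        int value;
};

static struct config *global_config;

static void reader(void)
{
        rcu_read_lock();
        /* rcu_dereference() pairs with rcu_assign_pointer() below. */
        struct config *c = rcu_dereference(global_config);
        if (c)
                printf("value = %d\n", c->value);
        rcu_read_unlock();
}

static void updater(int value)
{
        struct config *new = malloc(sizeof(*new));
        struct config *old = global_config;

        new->value = value;
        rcu_assign_pointer(global_config, new);
        /* Wait for pre-existing readers to finish before freeing. */
        synchronize_rcu();
        free(old);
}

int main(void)
{
        rcu_register_thread();  /* every thread using RCU must register */
        updater(42);
        reader();
        rcu_unregister_thread();
        return 0;
}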

Participants:
attending mathieu-desnoyers (Mathieu Desnoyers)
attending paulmck (Paul McKenney)

Tracks:
  • Scaling
Nautilus 4
Friday, 14:10 - 14:55 PDT
ASMP: Improving performance through dedication of OS tasks to specific processors ( Scaling )
Processors these days increase performance by adding cores rather than raising clock speed, so algorithms in general need to be able to work in a distributed way. In the kernel we have tried to move to more fine-grained locking in order to increase performance. However, with that approach locking overhead grows when highly concurrent processing occurs in the kernel, and synchronization becomes expensive. This session investigates how performance is affected if we do the opposite: use coarse-grained locking and perform large chunks of work on a single core, so that locking overhead is reduced and the processor caches are fully available for a significant piece of work. A minimal CPU-affinity sketch of dedicating work to a specific core follows this abstract.
Topic Lead: Christoph Lameter <email address hidden>
Christoph has been contributing to various core kernel subsystems over the years and created much of the NUMA infrastructure in the Linux kernel while working as a Principal Engineer for Silicon Graphics on adapting Linux for use in supercomputers. Scaling Linux is a focus of his work, both in terms of performance for HPC (High Performance Computing) and low latency in HFT (High Frequency Trading). Christoph maintains the slab allocators and the per-CPU subsystem in the Linux kernel and currently works as an architect for a leading HFT company.
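The sketch below is not part of the session material; it is a minimal illustration of the basic mechanism an ASMP-style split of work would rely on: pinning a task to a specific core with sched_setaffinity(). CPU 2 is an arbitrary example and must exist on the machine.

/*
 * Minimal sketch: dedicate the current task to CPU 2.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(2, &set);       /* run this task only on CPU 2 */

        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
                perror("sched_setaffinity");
                return 1;
        }

        /* From here on, all work in this process stays on CPU 2, keeping
         * its caches warm and avoiding cross-CPU lock contention. */
        printf("pinned to CPU %d\n", sched_getcpu());
        return 0;
}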

Participants:
attending apm (Antti P Miettinen)
attending mathieu-desnoyers (Mathieu Desnoyers)
attending paulmck (Paul McKenney)

Tracks:
  • Scaling
Nautilus 4
Friday, 15:05 - 15:50 PDT
Unavailable
Room Unavailable
Nautilus 4

PLEASE NOTE: The Linux Plumbers Conference 2012 schedule is still in draft form and is subject to change at any time.