http://summit.ubuntu.com/lpc-2012/
Nautilus 2

Wednesday, 09:50 - 10:35 PDT
Target CPU selection && Sharing Sched Info (Scheduler)
=== How to improve the selection of the target CPU in the timer and workqueue frameworks ===
It is sometimes difficult to move a timer/hrtimer onto a set of CPUs in order to follow the load-balancing policy of the scheduler. We have use cases where a timer that is not specifically pinned to a particular CPU nevertheless stays on that CPU (or set of CPUs) even though all task activity has moved to other ones. The timer/hrtimer frameworks currently call a scheduler function under some conditions, but there are only a limited number of cases in which the timer will actually migrate. The workqueue framework has no link with the scheduler at all when it looks for a CPU on which to run a work item. The goal of this talk is to describe the potential issues and to discuss possible solutions.

Topic Lead: Vincent Guittot
Vincent Guittot is an embedded Linux engineer at ST-Ericsson. Since 2005 he has focused on mobile phones running Linux and Android. In 2010 he joined the power management working group of Linaro.

=== Sharing information between scheduler and frameworks ===
The scheduler could take advantage of information from other frameworks when it selects a run queue for a task. This information is helpful not only for power consumption but also for performance, for example by providing a mask of preferred CPUs. As an example, the wake-up latency of a task is affected by the C-state of the selected CPU: choosing a CPU in a shallow C-state will reduce this latency. This talk will discuss the various inputs that could be shared with the scheduler.

Topic Lead: Vincent Guittot
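To make the two selection problems concrete, here is a toy model of picking a target CPU for a non-pinned timer while weighing both task load and C-state exit latency. This is pure illustration: the function, the load/latency tables and the policy are invented for this sketch, not kernel code.

```python
# Toy model: pick a target CPU for a non-pinned timer.
# Prefer a busy CPU (so the timer follows the task activity, as the
# load balancer would want), and among idle CPUs prefer the one whose
# C-state exit latency is lowest -- the kind of input the second talk
# proposes sharing with the scheduler.

def pick_timer_cpu(allowed, load, exit_latency_us):
    """allowed: iterable of CPU ids; load: cpu -> runnable tasks;
    exit_latency_us: cpu -> wakeup latency of its current C-state."""
    busy = [c for c in allowed if load[c] > 0]
    if busy:
        # Follow the activity: pick the busiest allowed CPU.
        return max(busy, key=lambda c: load[c])
    # All allowed CPUs idle: wake the one that is cheapest to wake.
    return min(allowed, key=lambda c: exit_latency_us[c])

load = {0: 0, 1: 3, 2: 1, 3: 0}
lat = {0: 10, 1: 0, 2: 0, 3: 150}
print(pick_timer_cpu([0, 1, 2, 3], load, lat))   # busiest CPU
print(pick_timer_cpu([0, 3], {0: 0, 3: 0}, lat)) # cheapest idle CPU
```

The real decision would of course also consider cache locality, pinning flags and the cost of cross-CPU wakeups; the point is only that both load and idle-state information have to reach the selection code.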

Participants:
attending apm (Antti P Miettinen)
attending paulmck (Paul McKenney)
attending vincent-guittot (Vincent Guittot)

Tracks:
  • Scheduler
Nautilus 2
Wednesday, 10:45 - 11:30 PDT
The Core OS (Core OS)
The term “Core OS” was recently coined in an LWN article to describe the core (userspace) bits of an operating system. In this talk we want to give a quick overview of what we believe the Linux “Core OS” consists of technically, draw the line where we believe the various projects (such as systemd, udev, dbus, …) belong in it and where they don't, and describe where we want to go with this in the future.

Topic Lead: Kay Sievers
Topic Lead: Lennart Poettering

Participants:
attending kaysievers (Kay Sievers)

Tracks:
  • Core OS
Nautilus 2
Wednesday, 11:40 - 12:25 PDT
Expose Routing && DSPs in ASoC (Audio)
=== Exposing Routing to the Application Layer ===
In order to allow nicer configuration UIs and easier diagnostics, it would be good if we could expose audio routing information to the application layer.

=== Representing DSPs in ASoC ===
As embedded systems continue to evolve, host-based processing is often offloaded to a co-processor dedicated to specific tasks in order to improve performance and power utilization. The audio subsystem is no exception: more and more vendors are trying to offload tasks to DSPs, whether embedded in SoCs or in audio codecs. This evolution requires changes in ASoC to represent all the routing and processing capabilities of these DSPs. Current efforts are based on two proposed methods. The first approach, developed by Liam Girdwood and TI and typically referred to as soc-dsp aka dynamic PCM, introduces the notion of dynamic PCM nodes, where audio front-ends (FEs, the classic PCM devices visible to the user) and audio back-ends (BEs, the SoC hardware interfaces) represent audio links. The second approach under discussion, proposed by Mark Brown, is referred to as the CODEC<-->CODEC model: the DSP is represented as another codec in the system and is linked to the real codec through the machine map. This talk looks at both approaches and their latest evolutions and future plans, and discusses the common infrastructure that needs to be considered to make this representation effective for SoC and codec vendors.

Topic Lead: Vinod Koul <email address hidden>
Topic Lead: Pierre-Louis Bossart <email address hidden>
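As a rough sketch of what "representing the DSP in the routing graph" means, the following toy graph connects a front-end to a codec through DSP nodes and checks that a complete path exists. The node names are invented for illustration; real ASoC models this with DAPM widgets and DAI links.

```python
# Toy audio routing graph: front-ends (FEs) reach back-ends (BEs)
# through DSP processing nodes, and a playable stream needs a
# complete path from FE to codec.
from collections import deque

ROUTES = {
    "pcm0-playback": ["dsp-mixer"],   # FE -> DSP (soc-dsp style link)
    "dsp-mixer": ["dsp-eq"],          # processing inside the DSP
    "dsp-eq": ["i2s0-be"],            # DSP -> BE
    "i2s0-be": ["codec-dac"],         # BE -> codec (codec<->codec style)
}

def has_path(src, dst):
    """Breadth-first search over the directed routing graph."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in ROUTES.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(has_path("pcm0-playback", "codec-dac"))  # True
print(has_path("codec-dac", "pcm0-playback"))  # False
```

Either proposal in the abstract amounts to deciding how the middle nodes of such a graph are named and owned: as dynamic PCM FE/BE links, or as an extra codec in the machine map.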

Participants:
attending broonie (Mark Brown)
attending diwic (David Henningsson)
attending srwarren (Stephen Warren)
attending tiwai (Takashi Iwai)
attending vinod-koul (Vinod Koul)

Tracks:
  • Audio
Nautilus 2
Wednesday, 14:00 - 14:45 PDT
Security and Storage (Virtualization)
Virtualization Topics:
1. Virtualization Security Discussion
2. Storage Virtualization for KVM

=== Virtualization Security Discussion ===
This proposal is for a discussion of the threats facing Linux-based virtualization technologies and what can be done to help mitigate them. The focus will be on hypervisor-based virtualization, e.g. KVM, but container-based virtualization can also be discussed if there is sufficient interest among the attendees. Possible topics of discussion include:
  • Confining malicious/exploited guests
  • Validating host identity and integrity from the guest
  • Enforcing network separation/virtualization

Topic Lead: Paul Moore
Paul has been involved in various Linux security efforts for the past eight years, with a strong focus on mandatory access control and network security. He has served as the Linux kernel's labeled networking maintainer since 2007. Paul has given a number of presentations over the years at Linux conferences on Linux security, SELinux, SELinux/MLS, and labeled networking.

=== Storage Virtualization for KVM ===
In the KVM-based virtualization ecosystem there are multiple choices of filesystem/storage and management tools. While this allows for different custom solutions, there is no single default storage solution that caters to the majority of use-case scenarios. In this presentation, we will look at integrating individual projects such as QEMU, GlusterFS, oVirt/VDSM and libstoragemgmt to arrive at one filesystem/storage solution for KVM that works for most scenarios. Various aspects, such as making GlusterFS virtualization-ready and cross-vendor storage array integration, will be discussed. We will finally discuss how virtualization features such as VM migration and taking snapshots can be performed seamlessly in this storage solution using oVirt. Virtualization/data center administrators and users of KVM-based virtualization will benefit from attending this presentation.
Topic Lead: Bharata B Rao <email address hidden>
Bharata B Rao is part of the IBM Linux Technology Center, Bangalore. He is currently working in the area of virtualization. Earlier he worked in the areas of file systems, schedulers, debuggers, embedded Linux and Linux clusters. Bharata graduated from The National Institute of Engineering, Mysore in 1999 and did his postgraduate degree (MS) at BITS, Pilani in 2003. In his spare time, the Sanskrit language, mountains and the mridangam (an Indian percussion instrument) keep him engaged.

Topic Lead: Deepak C Shetty <email address hidden>
Deepak C Shetty works with IBM's Linux Technology Center (LTC), Bangalore in the area of open virtualization. Earlier he worked in the area of virtualization-aware file systems. Prior to joining the LTC, Deepak worked at IBM on platform management (the IBM Systems Director product) and hardware validation of IBM Power systems. Deepak holds a Bachelor of Engineering degree in Electronics from Pune University, India and a Diploma in Advanced Computing from C-DAC, Pune, India. His other areas of interest include software design and stamp collecting (philately).

Topic Lead: M Mohan Kumar <email address hidden>
M. Mohan Kumar is an open source developer working at the IBM Linux Technology Center, Bangalore. He has contributed to various components of the Linux ecosystem, including kexec (fast boot), kdump (the kernel crash dump mechanism) for PowerPC, the 9p file system and QEMU. Prior to IBM he worked on various SCSI and Fibre Channel related Linux projects. Mohan obtained his Bachelor of Engineering in Computer Science and Engineering from Bharathiar University, Tamil Nadu, India. He has 11 years of experience in the Linux area.

Topic Lead: Balamurugan Aramugam <email address hidden>
Balamurugan works as a Principal Software Engineer at Red Hat. He is a contributor to the upstream VDSM project, focusing on adding Gluster support. He has been involved in the design and development of various Gluster products, FreeIPMI, etc. Balamurugan works out of the Red Hat office in Bengaluru and his topics of interest include cloud technologies, Big Data, kernel development, artificial intelligence, etc.

Topic Lead: Shireesh Anjal <email address hidden>
Shireesh Anjal works as a Principal Software Engineer with Red Hat. He is a contributor to the upstream oVirt project, focusing on adding GlusterFS support. He has been involved with building scalable banking and eGovernance systems over the past 12 years. Shireesh works out of the Red Hat office in Bengaluru and his topics of interest include cloud technologies, Big Data and mobile computing.

Participants:
attending amitshah (Amit Shah)
attending eblake (Eric Blake)
attending lpc-virt-lead (LPC Virtualization Lead)
attending paulmoore (Paul Moore)
attending stefano-stabellini (Stefano Stabellini)

Tracks:
  • Virtualization
Nautilus 2
Wednesday, 14:50 - 15:35 PDT
HD-audio cooking && QA/Testing (Audio)
=== HD-Audio Cooking Recipe ===
This session will cover debugging practices for typical problems seen with HD-audio, the standard audio component on all modern PCs, using the sysfs kernel interface and user-space helper programs. We will also discuss open problems: jack retasking, HDMI stream assignment, missing speaker/mic streams, and better interaction and organization with user-space applications like PulseAudio.

=== Bug Tracking and QA ===
Bug tracking in the sound subsystem is far from optimal. We'd like to discuss how to improve the situation, drawing on experiences from the current ALSA and distro bug trackers. We will also show automated testing of the HD-audio driver using the hda-emu emulator for error detection and regression coverage.

Topic Lead: Takashi Iwai (<email address hidden>)
Working as a member of the hardware-enablement team in SUSE Labs at SUSE Linux Products GmbH in Nuremberg, Germany, while playing the role of gatekeeper of the Linux sound subsystem tree over the years.

Topic Lead: David Henningsson (<email address hidden>)
Working for Canonical on audio hardware enablement, fixing audio bugs and maintaining the audio stack, and also part of PulseAudio's current development team.

Participants:
attending broonie (Mark Brown)
attending diwic (David Henningsson)
attending srwarren (Stephen Warren)
attending tiwai (Takashi Iwai)

Tracks:
  • Audio
Nautilus 2
Wednesday, 15:45 - 16:30 PDT
x86 Virtualization (Virtualization)
Virtualization Topics:
2) COLO: COarse-grain LOck-stepping Virtual Machines for Non-stop Service - Eddie Dong
3) Reviewing Unused and Up-coming Hardware Virtualization Features - Jun Nakajima

=== COLO: COarse-grain LOck-stepping Virtual Machines for Non-stop Service ===
Virtual machine (VM) replication (replicating the state of a primary VM running on a primary node to a secondary VM running on a secondary node) is a well-known technique for providing application-agnostic, non-stop service. Unfortunately, existing VM replication approaches suffer from excessive replication overhead, and for client-server systems there is really no need for the secondary VM to match the primary VM's machine state at all times. We propose COLO (COarse-grain LOck-stepping virtual machines for non-stop service), a generic and highly efficient non-stop service solution based on on-demand VM replication. COLO monitors the output responses of the primary and secondary VMs and considers the secondary a valid replica of the primary as long as the network responses it generates match those of the primary. The primary VM's state is propagated to the secondary if and only if the outputs of the two no longer match.

Topic Lead: Eddie Dong

=== Reviewing Unused and Up-coming Hardware Virtualization Features ===
We review unused and upcoming hardware virtualization (Intel VT) features and discuss how they can improve virtualization for open source. First, we review existing hardware features that are not used by KVM or Xen today, showing example use cases: 1) descriptor-table exiting should be useful for guest kernels or security agents to enhance security features; 2) the VMX-preemption timer allows the hypervisor to preempt guest VM execution after a specified amount of time, which is useful for implementing fair scheduling; the hardware can save the timer value on each successive VM exit after the initial VM quantum is set; 3) VMFUNC is an operation provided by the processor that can be invoked from VMX non-root operation without a VM exit; today EPTP switching is available, and we discuss how the feature can be used. Second, we talk about new hardware features, especially interrupt optimizations.

Topic Lead: Jun Nakajima
Jun Nakajima is a Principal Engineer leading open source virtualization projects, such as Xen and KVM, at the Intel Open Source Technology Center. He has presented a number of times at technical conferences, including Xen Summit, OLS, KVM Forum, and USENIX.
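COLO's decision rule, as stated in the abstract, can be sketched in a few lines. This is an illustrative model only: the real system compares outgoing network packets inside the virtualization stack, and the names here are invented.

```python
# Toy sketch of COLO's rule: the secondary VM is a valid replica as
# long as its network responses match the primary's; a (costly)
# state synchronization happens only when outputs diverge.

def colo_step(primary_resp, secondary_resp, syncs):
    """Compare one response pair; record a forced state sync on mismatch."""
    if primary_resp == secondary_resp:
        return primary_resp   # release the response, replica still valid
    syncs.append((primary_resp, secondary_resp))  # divergence: propagate
    return primary_resp       # primary state wins either way

syncs = []
pairs = [(b"OK", b"OK"), (b"OK", b"OK"), (b"200", b"500"), (b"OK", b"OK")]
for p, s in pairs:
    colo_step(p, s, syncs)
print(len(syncs))  # 1: only the diverging pair triggered replication
```

The efficiency claim in the abstract falls out of this structure: as long as responses match, no replication traffic is needed at all.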

Participants:
attending amitshah (Amit Shah)
attending lpc-virt-lead (LPC Virtualization Lead)

Tracks:
  • Virtualization
Nautilus 2
Thursday, 08:30 - 09:25 PDT
Breakfast
Nautilus 2
Thursday, 09:30 - 10:15 PDT
Simplify volume setting at startup/shutdown (Audio)
Currently, in a normal desktop session, volume is set four times on startup: initially by the kernel, then by alsactl, then by PulseAudio in the DM session, then by PulseAudio in the logged-in session. When shutting down, both PulseAudio and alsactl save volumes in order to restore them later. And then we also have suspend and hibernate to consider, and the fact that cards can be plugged in at any time. First, isn't this quite complex for something as simple as setting volumes? Second, can we facilitate new features such as 1) a "set this volume as default, for all users, on startup" button in the volume control, or 2) allowing the DM user to introspect different users' volumes?

Topic Lead: David Henningsson (<email address hidden>)
Working for Canonical on audio hardware enablement, fixing audio bugs and maintaining the audio stack, and also part of PulseAudio's current development team.

Participants:
attending broonie (Mark Brown)
attending tiwai (Takashi Iwai)

Tracks:
  • Audio
Nautilus 2
Thursday, 10:25 - 11:10 PDT
Multipath TCP && TCP Loss Probe && Client Congestion Manager (Networking)
Networking Topics:
1. Linux Kernel Implementation of Multipath TCP
2. TCP Loss Probe (TLP): fast recovery for tail losses
3. Client-based Congestion Manager for TCP

=== Linux Kernel Implementation of Multipath TCP ===
Multipath TCP (MPTCP for short) is an extension to TCP that allows a single TCP connection to be split among multiple interfaces while presenting a standard TCP socket API to applications. Splitting a data stream among different interfaces has multiple benefits: data-center hosts may increase their bandwidth; smartphones with WiFi/3G may seamlessly hand over traffic from 3G to WiFi; and so on. Multipath TCP works with unmodified applications over today's Internet with all its middleboxes and firewalls. A recent Google Tech Talk about Multipath TCP is available at [1]. In this talk I will first present the basics of Multipath TCP and how it works, and show some of the performance results we obtained with our Linux kernel implementation (freely available at [2]). Second, I will go into the details of our implementation in the Linux kernel and our plans for submitting the MPTCP patches to the upstream Linux kernel.
[1] http://www.youtube.com/watch?v=02nBaaIoFWU
[2] http://mptcp.info.ucl.ac.be

Topic Lead: Christoph Paasch <email address hidden>

=== TCP Loss Probe (TLP): fast recovery for tail losses ===
Fast recovery (FR) and retransmission timeouts (RTOs) are two mechanisms in TCP for detecting and recovering from packet losses. Fast recovery detects and repairs losses quicker than RTOs; however, it is only triggered when connections have a sufficiently large number of packets in transit. Short flows, such as the vast majority of Web transfers, are more likely to detect losses via RTOs, which are expensive in terms of latency. While a single packet loss in a 1000-packet flow can be repaired within a round-trip time (RTT) by FR, the same loss in a one-packet flow takes many RTTs to even detect. The problem is not limited to short flows: more generally, losses near the end of transfers, aka tail losses, can only be recovered via RTOs. In this talk, I will describe TCP Loss Probe (TLP), a mechanism that allows flows to detect and recover from tail losses much faster than an RTO, thereby speeding up short transfers. TLP also unifies loss recovery regardless of the "position" of a loss; e.g., a packet loss in the middle of a packet train as well as at the tail end will now trigger the same fast recovery mechanisms. I will also describe experimental results with TLP and its impact on Web transfer latency on live traffic.

Topic Lead: Nandita Dukkipati <email address hidden>
Nandita is a software engineer at Google working on making networking faster for Web traffic and data center applications. She is an active participant at the IETF and in networking research. Prior to Google she obtained a PhD in Electrical Engineering from Stanford University.

=== Client-based Congestion Manager for TCP ===
Today, one of the most effective ways to improve the performance of chatty applications is to keep TCP connections open as long as possible, to save the overhead of the SYN exchange and slow start on later requests. However, due to Web domain sharding, NAT boxes often run out of ports or other resources and resort to dropping connections in ways that make later connections even slower to start. A better solution would be to enable TCP to start a new connection as quickly as it restarts an idle one. The approach is to have a congestion manager (CM) on the client that constantly learns about the network and adds signaling information to requests from the client, indicating how the server can reply most quickly, for example by providing TCP metrics similar to today's destination cache. Such a CM could even indicate to the server what type of congestion control to use, such as a relentless congestion control algorithm, so that opening more connections does not gain an advantage in aggregate throughput. It also allows receiver-based congestion control, which opens new possibilities for controlling congestion. The Linux TCP metrics embody a similar concept, but there is a lot of room for improvement.

Topic Lead: Yuchung Cheng <email address hidden>
Yuchung Cheng is a software engineer at Google working on the Make-The-Web-Faster project. He works on the TCP protocol and the Linux TCP stack, focusing on latency. He has contributed the Fast Open, Proportional Rate Reduction and Early Retransmit implementations in the Linux kernel and has written several papers and IETF drafts about them. He has also contributed to rate-limiting of YouTube streaming and the cwnd-persist feature of the SPDY protocol.
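The core MPTCP idea, one data stream with a single data-sequence space carried over several subflows, can be illustrated with a short sketch. This is a deliberate simplification (real MPTCP negotiates TCP options and handles retransmission and scheduling across subflows); the functions here are invented for illustration.

```python
# Toy illustration of MPTCP: one byte stream is split across several
# subflows, and data sequence numbers let the receiver reassemble it
# regardless of which subflow carried which segment.

def split_stream(data, nflows, chunk=4):
    """Assign (data_seq, payload) segments to subflows round-robin."""
    flows = [[] for _ in range(nflows)]
    for i, off in enumerate(range(0, len(data), chunk)):
        flows[i % nflows].append((off, data[off:off + chunk]))
    return flows

def reassemble(flows):
    """Merge all subflows; sorting by data seq restores the stream."""
    segs = [seg for flow in flows for seg in flow]
    return b"".join(payload for _, payload in sorted(segs))

msg = b"multipath tcp splits one stream"
flows = split_stream(msg, nflows=2)
assert reassemble(flows) == msg   # order restored via data seq numbers
print(len(flows[0]), len(flows[1]))
```

Because the data sequence number travels with each segment, a subflow can disappear (e.g. the 3G link during a WiFi handover) and the remaining subflows can carry the missing byte ranges.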

Participants:
attending therbert (Tom Herbert)

Tracks:
  • Networking
Nautilus 2
Thursday, 11:20 - 12:05 PDT
Multipath TCP && TCP Loss Probe && Client Congestion Manager (Networking)
(Second slot of the session above; the topics and abstracts are the same.)

Participants:
attending therbert (Tom Herbert)

Tracks:
  • Networking
Nautilus 2
Thursday, 13:30 - 14:15 PDT
The Core OS Wish List (Core OS)
In the context of systemd we started collecting nice-to-have or that-should-just-work items that we wish the Linux kernel provided us with. The emails to LKML were titled "A Plumber's Wish List for Linux". We will give a quick update on which problems have been solved and what we still wish would work.

Topic Lead: Lennart Poettering
Topic Lead: Kay Sievers
Lennart Poettering and Kay Sievers are the maintainers of systemd and udev and spend almost their entire work time on building infrastructure for the Linux Core OS.

Participants:
attending eblake (Eric Blake)
attending kaysievers (Kay Sievers)

Tracks:
  • Core OS
Nautilus 2
Thursday, 14:25 - 15:10 PDT
Virtualization Memory Management (Virtualization)
Virtualization Topics:
1) NUMA and Virtualization, the case of Xen
2) Automatic NUMA CPU scheduling and memory migration
3) One balloon for all - towards a unified balloon driver

=== NUMA and Virtualization, the case of Xen ===
Having to deal with NUMA machines is becoming more and more common, and will likely continue to do so. Running typical virtualization workloads on such systems is particularly challenging, as virtual machines (VMs) are typically long-lived processes with large memory footprints. This means one might incur really bad performance if the specific characteristics of the platform are not properly accounted for. Basically, it would be ideal to always run a VM on the CPUs of the node that hosts its memory, or at least as close to it as possible. Unfortunately, that is all but easy, and involves reconsidering the current approaches to scheduling and memory allocation. Extensive benchmarks have been performed, running memory-intensive workloads inside Linux VMs hosted on NUMA hardware of different kinds and sizes. This has driven the design and development of a suite of new VM placement, scheduling and memory allocation policies for the Xen hypervisor and its toolstack. The implementation of these changes has been benchmarked against the baseline performance and proved effective in yielding improvements, which will be illustrated during the talk. Although some of the work is hypervisor specific, it covers issues of interest to the whole Linux virtualization community; whether and how to export NUMA topology information to guests, just to give an example. We believe that the solutions we are working on, the ideas behind them and the performance evaluation we conducted are something the community would enjoy hearing and talking about.

Topic Lead: Dario Faggioli
Dario interacted with the Linux kernel community in the domain of scheduling during his PhD on real-time systems. He now works for Citrix on the Xen open source project. He has spent the last months investigating and trying to improve the performance of virtualization workloads on NUMA systems.

=== Automatic NUMA CPU scheduling and memory migration ===
Topic Lead: Andrea Arcangeli

=== One balloon for all - towards a unified balloon driver ===
During Google Summer of Code 2010 (migration from memory ballooning to memory hotplug in Xen) it was discovered that the mainline Linux kernel contains three balloon driver implementations for three virtualization platforms (KVM, Xen, VMware). It quickly became clear that they are almost identical, but of course they have different controls and API/ABI. In view of, e.g., the memory hotplug driver, which has a generic base (not tied to a specific hardware/software solution), this situation is not acceptable. The goal of this project is a generic balloon driver that could be placed in the MM subsystem and linked with as little platform-specific code as possible (placed, e.g., in the relevant arch directory). This solution would give a unified ABI (which could ease administration) and a unified API for developers (i.e., easier integration with, e.g., tmem, memory hotplug, etc.). Additionally, balloon driver behavior would be almost identical on all platforms. The discussion should outline the goals and key solutions for such a driver.

Topic Lead: Daniel Kiper
Daniel was a Google Summer of Code 2010 (memory hotplug/balloon driver) and Google Summer of Code 2011 (kexec/kdump) student. He has been involved in *NIX administration and development since 1994. Currently his work and interests focus on a kexec/kdump implementation for Xen.
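The "generic core plus small platform backend" split that the unified balloon driver proposal describes might be sketched as follows. All names are invented for illustration; the real drivers are kernel C code, and the actual split would hang off the MM subsystem.

```python
# Sketch of a unified balloon driver: generic bookkeeping in a common
# core, with a thin platform backend (KVM/Xen/VMware would each
# supply one) that only knows how to hand pages to the hypervisor.

class BalloonBackend:
    """Platform hooks: notify the hypervisor about page transfers."""
    def release_to_host(self, pages): raise NotImplementedError
    def reclaim_from_host(self, pages): raise NotImplementedError

class GenericBalloon:
    """Platform-independent core: tracks the balloon size and turns
    target changes into inflate/deflate calls on the backend."""
    def __init__(self, backend, total_pages):
        self.backend = backend
        self.total = total_pages
        self.inflated = 0          # pages currently given to the host

    def set_target(self, target):
        delta = target - self.inflated
        if delta > 0:
            self.backend.release_to_host(delta)      # inflate
        elif delta < 0:
            self.backend.reclaim_from_host(-delta)   # deflate
        self.inflated = target

class LogBackend(BalloonBackend):
    """Stand-in backend that just records operations."""
    def __init__(self): self.ops = []
    def release_to_host(self, pages): self.ops.append(("inflate", pages))
    def reclaim_from_host(self, pages): self.ops.append(("deflate", pages))

b = GenericBalloon(LogBackend(), total_pages=1024)
b.set_target(256)
b.set_target(64)
print(b.inflated, b.backend.ops)
```

The unified ABI the abstract mentions corresponds to `set_target` being the single control surface, identical on every platform, while only the backend differs.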

Participants:
attending amitshah (Amit Shah)
attending dkiper (Daniel Kiper)
attending lpc-virt-lead (LPC Virtualization Lead)
attending raistlin (Dario Faggioli)

Tracks:
  • Virtualization
Nautilus 2
Thursday, 15:20 - 16:05 PDT
Atomic upgrades, booting, and package systems (Core OS)
Current major consumer operating systems like Microsoft Windows and the PlayStation 3 explicitly warn the user "don't turn off your computer" during upgrades. But the state of the art in many Linux-based distributions is to simply ignore this; if you happen to lose power or the kernel crashes, your system is quite likely toast and you need a recovery CD. This isn't acceptable. This presentation will discuss my research into the area, working prototype code, and the further work necessary in the core plumbing (particularly bootup and configuration management) to get fully atomic upgrades.

Topic Lead: Colin Walters
Colin has contributed to many different Free and Open Source software projects, including GNU Emacs, Debian, rpm, and dbus, but primarily works on GNOME for Red Hat, Inc.
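The single-file building block for this kind of crash safety is already available to userspace: write the new content to a temporary file, fsync it, then rename() it over the old one, so a crash leaves either the complete old version or the complete new one, never a torn mix. A minimal sketch (the challenge the talk addresses is doing this for a whole OS tree, not one file):

```python
import os
import tempfile

def atomic_replace(path, data):
    """Crash-safe file replacement: temp file + fsync + rename."""
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname)  # temp file on the same fs
    try:
        os.write(fd, data)
        os.fsync(fd)            # ensure data is on disk before the rename
    finally:
        os.close(fd)
    os.rename(tmp, path)        # atomic on POSIX filesystems

atomic_replace("/tmp/demo.conf", b"version=2\n")
print(open("/tmp/demo.conf", "rb").read())
```

Note the temp file must live on the same filesystem as the target, since rename() cannot atomically cross filesystem boundaries; scaling the same either-old-or-new guarantee up to kernels, bootloaders and package trees is exactly the plumbing gap the talk is about.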

Participants:
attending kaysievers (Kay Sievers)

Tracks:
  • Core OS
Nautilus 2
Thursday, 16:30 - 17:15 PDT
Time Alignment && PulseAudio on Android (Audio)
=== Time alignment in the Linux Audio Stack ===
The Linux audio stack provides very little support for precise timing, despite the availability of hardware audio wall clocks and the adoption of new protocols such as IEEE 1588 and Ethernet AVB, which align networked devices several orders of magnitude more precisely than NTP. In this presentation, we show how giving user-space applications access to the audio wall clock can improve audio rendering and capture for local and networked devices. In the local case, the resolution of the wall clock can help PulseAudio track more precisely the drift between system time and audio time. Likewise, for networked devices, the differences between audio wall clocks can help a server adjust asynchronous sample-rate conversion without large and frequent variations of the sample-rate ratio. We will present some ideas for modifications to the audio stack and data structures, and gather feedback from the open source community.

Topic Lead: Pierre Bossart

=== PulseAudio on Android ===
As part of our efforts to make 'standard' Linux components available in the Android world, we are working on porting PulseAudio to Android. In this session, we talk about the challenges in the initial porting effort, the approach we are taking to make PulseAudio an out-of-the-box replacement for the native system, and the advantages we hope to be able to provide with this work.

Topic Lead: Arun Raghavan <email address hidden>
Arun Raghavan is a long-time open source supporter and mainly hacks on the PulseAudio audio server at Collabora. He contributes to the GStreamer multimedia framework, and secretly is a developer on the Gentoo Linux distribution as well.
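The drift tracking described in the first abstract boils down to estimating the ratio between two clocks from paired readings. A toy version (illustrative only; this is not PulseAudio's actual estimator, and the function and sample data are invented):

```python
# Toy drift estimation between system time and an audio wall clock:
# fit the ratio audio_time / system_time from paired samples. This is
# the kind of tracking a sound server needs to follow a device clock
# and steer asynchronous sample-rate conversion smoothly.

def drift_ratio(samples):
    """samples: [(system_t, audio_t)]; least-squares slope through 0."""
    num = sum(s * a for s, a in samples)
    den = sum(s * s for s, _ in samples)
    return num / den

# Simulated audio clock running 0.1% fast relative to the system clock.
samples = [(t, t * 1.001) for t in (0.5, 1.0, 1.5, 2.0)]
r = drift_ratio(samples)
print(round((r - 1.0) * 1e6))  # drift in parts per million: 1000
```

A higher-resolution wall clock shrinks the noise on each `(system_t, audio_t)` pair, which is why the abstract argues exposing it lets the ratio converge without large, frequent corrections.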

Participants:
attending broonie (Mark Brown)
attending tiwai (Takashi Iwai)

Tracks:
  • Audio
Nautilus 2
Thursday, 17:25 - 18:10 PDT
ALSA channel-mapping API (Audio)
The functionality to query and/or set the PCM channel mapping is a long-standing missing feature in ALSA. The session will cover the requirements of actual hardware and discuss the pros and cons of the proposed implementations.

* REQUIRED AUDIENCE: ALSA devs, PulseAudio devs, GStreamer devs

Topic Lead: Takashi Iwai <email address hidden>

Takashi works as a member of the hardware-enablement team in SUSE Labs at SUSE Linux Products GmbH in Nuremberg, Germany, while playing the role of gatekeeper of the Linux sound subsystem tree over the years.

Participants:
attending broonie (Mark Brown)
attending diwic (David Henningsson)
attending srwarren (Stephen Warren)
attending tiwai (Takashi Iwai)

Tracks:
  • Audio
Nautilus 2
Friday, 09:10 - 09:55 PDT
Real Time Microconference ( Real Time )
http://wiki.linuxplumbersconf.org/2012:real-time

Schedule for this track:
  • Getting RCU further out of the way - Paul McKenney
  • Handling the thorns of mainline - Steven Rostedt
  • SCHED_DEADLINE: a new deadline based realtime scheduling policy - Peter Zijlstra
  • Review of the stable realtime release process - Frank Rowand
  • Lessons We Learned: Common mistakes while testing applications with RT - Luis Claudio Goncalves
  • State of RT/Thomas's thoughts - Thomas Gleixner

Participants:
attending paulmck (Paul McKenney)
attending tglx (tglx)

Tracks:
  • Real Time
Nautilus 2
Friday, 10:05 - 10:50 PDT
Real Time Microconference ( Real Time )
http://wiki.linuxplumbersconf.org/2012:real-time

Schedule for this track:
  • Getting RCU further out of the way - Paul McKenney
  • Handling the thorns of mainline - Steven Rostedt
  • SCHED_DEADLINE: a new deadline based realtime scheduling policy - Peter Zijlstra
  • Review of the stable realtime release process - Frank Rowand
  • Lessons We Learned: Common mistakes while testing applications with RT - Luis Claudio Goncalves
  • State of RT/Thomas's thoughts - Thomas Gleixner

Participants:
attending paulmck (Paul McKenney)
attending tglx (tglx)

Tracks:
  • Real Time
Nautilus 2
Friday, 11:00 - 11:45 PDT
Real Time Microconference ( Real Time )
http://wiki.linuxplumbersconf.org/2012:real-time

Schedule for this track:
  • Getting RCU further out of the way - Paul McKenney
  • Handling the thorns of mainline - Steven Rostedt
  • SCHED_DEADLINE: a new deadline based realtime scheduling policy - Peter Zijlstra
  • Review of the stable realtime release process - Frank Rowand
  • Lessons We Learned: Common mistakes while testing applications with RT - Luis Claudio Goncalves
  • State of RT/Thomas's thoughts - Thomas Gleixner

Participants:
attending paulmck (Paul McKenney)
attending tglx (tglx)

Tracks:
  • Real Time
Nautilus 2
Friday, 11:55 - 12:40 PDT
Polyhedral optimizations for LLVM ( LLVM )
Polly is a polyhedral optimizer that runs as a plug-in to the LLVM compiler. Polly optimizes loops for data locality, auto-vectorization, and auto-parallelization. Although the tool is still under development, it has been gaining traction in the community over the last few months and its quality has increased significantly. This session will provide an overview of its current state and discuss the roadmap toward production use.

Topic Lead: Zino Benaissa

Zino Benaissa is a senior staff engineer at QuIC, Qualcomm’s open source subsidiary, responsible for developing back-end optimizations for the LLVM compiler. Before joining Qualcomm in 2011, he worked at Intel for 11 years leading the effort to incorporate Intel micro-architectural tunings into the Microsoft Visual C++ compiler, in particular support for Advanced Vector Extensions (AVX), and collaborated with Microsoft compiler architects and engineers to bring up the auto-vectorizer in the latest release of the Microsoft (Dev11) compiler.

Tracks:
  • LLVM
Nautilus 2
Friday, 14:10 - 14:55 PDT
Systemd for the User Session ( Core OS )
It's a little-known secret that systemd is capable of starting, controlling, and regulating far more than just system services: it can easily start an entire desktop UI. Few people have sat down and worked out the problems of starting an X server, a few UI components, the session bus, and D-Bus services for normal users with the mechanisms that systemd provides. The benefits are obvious: systemd provides excellent service monitoring and restarting capabilities, provides socket and D-Bus activation for relevant services, and improves desktop startup overall by allowing user services to start well before core services like Xorg or Wayland. In effect, we're saying goodbye to XDG autostart entirely, and getting back reliability and scalability. We converted several desktop environments, including Tizen's Mobile UI, Xfce4, Enlightenment, and more, to systemd user sessions. We "pop the hood" and take a look at the implications for startup, what can be further improved in session startup, and where we can do better.

Topic Lead: Auke Kok <email address hidden>

Auke is a software engineer at Intel's Open Source Technology Center, and has been attempting to make Linux boot faster since 2007. In 2008, he co-presented the "5-second boot" with Arjan van de Ven at the first LPC. Since then, Auke has worked on further improving the Linux Core OS start sequence, first for Moblin and later with MeeGo, where he made the first switch to systemd. Auke now works on Tizen, which heavily integrates systemd in the Core OS.
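As a minimal sketch of the mechanism the abstract describes (the unit name and binary path are invented for illustration; this is not a unit shipped by any of the projects mentioned), a UI component run as a systemd user service might look like:

```ini
# ~/.config/systemd/user/example-panel.service  (hypothetical unit)
[Unit]
Description=Example desktop panel started as a user service

[Service]
ExecStart=/usr/bin/example-panel
# systemd restarts the component if it crashes -- the service
# monitoring benefit the abstract mentions.
Restart=on-failure

[Install]
# Pulled in when the user's default target is reached.
WantedBy=default.target
```

Such a unit would be enabled per-user with `systemctl --user enable example-panel.service`, replacing the XDG autostart entry the component would otherwise need.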

Participants:
attending eblake (Eric Blake)

Tracks:
  • Core OS
Nautilus 2
Friday, 15:05 - 15:50 PDT
Application of deadline scheduling for powersaving strategies ( Scheduler )
Power consumption is an important issue in the design of real-time embedded and general-purpose systems. How to reduce power consumption, enabling longer lifetimes for battery-powered systems, is still an open problem. A real-time scheduling algorithm based on the deadline concept (like SCHED_DEADLINE) can be used to provide power-aware scheduling strategies. In this talk I will first present past work on the subject, then propose some simple ideas on how deadline scheduling could be effective in reducing power consumption.

Topic Lead: Juri Lelli
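The link between deadline scheduling and power can be made concrete with a classic textbook sketch (an illustration of the general idea, not necessarily the approach the talk proposes): under EDF, a periodic task set with runtimes C_i and periods T_i is feasible as long as total utilization Σ C_i/T_i ≤ 1, so the processor can be slowed until that bound is just met.

```python
# Illustrative sketch (not SCHED_DEADLINE itself): compute the lowest
# normalized CPU speed at which an EDF task set stays schedulable.
# Running at speed s in (0, 1] stretches each runtime to C_i / s, so
# feasibility requires sum(C_i / T_i) / s <= 1.

def min_speed_factor(tasks):
    """tasks: list of (runtime, period) pairs measured at full speed.
    Returns the minimum speed in (0, 1] that keeps EDF feasible."""
    utilization = sum(c / t for c, t in tasks)
    if utilization > 1.0:
        raise ValueError("task set infeasible even at full speed")
    return utilization

# Two tasks using 30% and 20% of the CPU at full speed: half speed
# is enough, and the lower frequency/voltage saves power.
speed = min_speed_factor([(3, 10), (2, 10)])   # 0.5
```

The runtime and deadline parameters that SCHED_DEADLINE requires from each task are exactly what makes a utilization-driven frequency choice like this possible, which is the connection the talk explores.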

Participants:
attending paulmck (Paul McKenney)
attending vincent-guittot (Vincent Guittot)

Tracks:
  • Scheduler
Nautilus 2

PLEASE NOTE: The Linux Plumbers Conference 2012 schedule is still in draft form and is subject to change at any time.