The Enterprise End User Summit is an exclusive invitation-only event that brings together high performance end users with the highest level Linux community developers. CTOs, architects, senior IT representatives and kernel developers are able to connect directly to advance the features most critical to using Linux in the enterprise.
View the Conference Slides
Linux Foundation Executive Director Jim Zemlin opens the 2013 Enterprise End User Summit.
Major disruptions to industries occur only so often. The FinTech industry is facing a cascade of landscape-altering disruptions in data, programming, and business models. An Open Platform will help the industry adapt to these disruptions and take advantage of them to build new capabilities and profits. By harnessing the advantage of Open Platforms, the industry can return to the forefront of innovation and growth. This session will show how open minds can change attitudes, bringing about some of the greatest innovations in capital markets technologies still to come.
Frank Frankovsky will provide an update on the Open Compute Project, an initiative dedicated to reshaping the infrastructure hardware industry to be more open, more innovative, and more efficient, and will follow up with a panel discussion.
Linux supports high-speed interconnects, including the offload technology needed for extremely fast communication between data held in the memory of the nodes in a cluster. The panel will discuss the current state of the Linux RDMA subsystem, use cases, current issues, and future areas of development.
This is a panel discussion featuring Financial Sector customers, vendors, and KVM developers from IBM and Red Hat. Participants will discuss the unique value proposition of KVM from their point of view, including success stories and other experiences with KVM, as well as barriers to its adoption. You should learn how the panelists obtained the value from KVM they were expecting, whether it was lower costs, better technology, or a combination of both. The audience will be able to ask questions and provide feedback to the panel.
72% of the 21 million U.S. health care records that have been compromised since 2009 should have been trivially protected by comprehensive encryption of the data before it was written to disk (see: http://1.usa.gov/a2UGEG). A busy cloud compute node from OpenStack or Amazon EC2 might spin up thousands of instances per day. Ephemeral, block, and object storage: each and every one of these should always be encrypted before being written to the underlying physical media. The good news is that multiple excellent file and disk encryption solutions exist in Linux, such as eCryptfs and dm-crypt, making encrypting everything simple and cost-effective.
This session is designed for security professionals, IT decision makers and solution architects. We'll discuss encryption and key management best practices and examine use cases for companies that store sensitive and regulated data in the cloud.
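As a rough illustration of how simple this can be, the following minimal sketch (assuming a hypothetical ephemeral disk /dev/vdb, a pre-generated key file, root privileges, and the cryptsetup tool) wraps dm-crypt/LUKS around a scratch volume before a filesystem is created on it; eCryptfs would instead be stacked on top of an existing filesystem.

#!/usr/bin/env python3
# Minimal sketch: encrypt an ephemeral volume with dm-crypt/LUKS before use.
# Assumptions (hypothetical): /dev/vdb is a scratch disk handed to a cloud
# instance, /root/ephemeral.key holds a randomly generated key, and the
# script runs as root with cryptsetup installed.
import subprocess

DEVICE = "/dev/vdb"
MAPPING = "ephemeral_crypt"
KEY_FILE = "/root/ephemeral.key"

def run(*cmd):
    # Run a command and fail loudly if it does not succeed.
    subprocess.run(cmd, check=True)

# Format the device as a LUKS container keyed by the key file.
run("cryptsetup", "luksFormat", "--batch-mode", "--key-file", KEY_FILE, DEVICE)

# Open the container; plaintext is visible only through /dev/mapper/<MAPPING>.
run("cryptsetup", "luksOpen", "--key-file", KEY_FILE, DEVICE, MAPPING)

# Put a filesystem on the mapped device and mount it for instance scratch data.
run("mkfs.ext4", "/dev/mapper/" + MAPPING)
run("mkdir", "-p", "/mnt/ephemeral")
run("mount", "/dev/mapper/" + MAPPING, "/mnt/ephemeral")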
This presentation will discuss the technical features of KVM today and in the upcoming releases, with a focus on the requirements of the Financial Sector. The presentation will also cover relevant hardware roadmaps that will affect KVM's capabilities over the next couple of years, with a focus on high-performance, high-throughput, and high-memory workloads.
At the Enterprise End User Summit in 2010, Bhaskar presented results of benchmarking filesystems for database performance using the 3n+1 problem on the GT.M database engine. Since that time, xfs and ext4 have been sped up, and btrfs has become production grade. This presentation revisits the benchmark to provide updated results with a more recent Linux kernel, and with copy-on-write turned off for btrfs. Time permitting, results from the 3n+1 benchmark will also be compared with results from iothrash, a pure ANSI C / POSIX file system benchmark that allows comparison across systems, even those without a GT.M implementation.
The intended audience is technical: those developing filesystems and tuning filesystem performance, especially for update-intensive applications such as transaction-processing databases.
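For readers unfamiliar with the benchmark, the sketch below shows the core 3n+1 (Collatz) computation it is built on, in Python rather than M. In the real GT.M benchmark the memoized step counts live in the database and are updated concurrently, which is what generates the update-intensive filesystem load; here an in-memory dictionary merely stands in for that table.

# Minimal sketch of the 3n+1 computation underlying the benchmark (the cache
# dictionary stands in for the shared GT.M database table of known results).
def collatz_steps(n, cache):
    path = []
    while n != 1 and n not in cache:
        path.append(n)
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    steps = cache.get(n, 0)        # steps already known for the value we stopped at
    for value in reversed(path):   # record the newly discovered values
        steps += 1
        cache[value] = steps
    return steps

cache = {}
longest = max(range(1, 100000), key=lambda n: collatz_steps(n, cache))
print("longest chain below 100000 starts at", longest,
      "with", collatz_steps(longest, cache), "steps")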
This presentation, followed by a panel discussion, will focus on automated deployment of KVM and virtual machines, the scalability of automation tools, V2V migration tools, and the configuration and tuning of high-performance VMs in cloud data centers. It will also cover maintaining compute clusters and VM resource pools within a cloud, batch processing, and other complex configurations. The audience will be encouraged to ask questions and provide feedback.
This presentation will cover my experiences with tuning large NUMA systems such as the HP DL980 for high performance when used with SAN-attached storage. Tuning topics include areas such as BIOS settings, CPU power states, interrupt and process pinning, performance testing, and Oracle. The audience for this presentation is core developers.
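As a small taste of the process-pinning piece, the hedged sketch below reads a node's CPU list from sysfs and binds the current process to it; the choice of node 0 is an assumption, and real tuning would also cover IRQ affinity, memory policy, and the NUMA locality of the storage adapters.

# Minimal sketch: pin the current process to the CPUs of one NUMA node
# (node 0 is assumed; real tuning also sets IRQ affinity and memory policy).
import os

def cpus_of_node(node):
    # Parse /sys/devices/system/node/nodeN/cpulist, e.g. "0-7,16-23".
    cpus = set()
    with open("/sys/devices/system/node/node%d/cpulist" % node) as f:
        for part in f.read().strip().split(","):
            if "-" in part:
                lo, hi = part.split("-")
                cpus.update(range(int(lo), int(hi) + 1))
            else:
                cpus.add(int(part))
    return cpus

# Restrict this process (pid 0 means "self") to node 0's CPUs so the workload
# stays close to its local memory.
os.sched_setaffinity(0, cpus_of_node(0))
print("running on CPUs:", sorted(os.sched_getaffinity(0)))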
Linux on System z environments typically run virtualized under the z/VM hypervisor. This session addresses planning and deployment for such environments: how to use a common code base and effectively allocate memory and CPU for highly dynamic workloads. Come and see examples and use cases for your day-to-day operations in single-system or cloud-based environments.
Linux has already taken over the datacenter, smartphones, and many other leading consumer devices. As new trends disrupted markets, Linux and open source software helped shape and define the next generation of technology underpinning those trends. This same process is taking shape in the networking market. Recently, The Linux Foundation launched a Collaborative Project with networking industry leaders called OpenDaylight to build an open source Software-Defined Networking controller. The project was launched with open governance, an open source license and a technical meritocracy to enable building a vibrant, engaging ecosystem of developers, vendors and users. Join us in this session to find out more about OpenDaylight and how open source collaboration is leading software-defined networking.
Join us on the New York Stock Exchange trading floor for our evening reception.
A roundtable discussion on the Linux Kernel: Moderated by Linux Foundation Fellow, Greg Kroah-Hartman, the panel will address the technology, the process and the future of Linux.
Open Source communities have proven to be leaders in bringing about innovations that lead to staggering monetization potential. This session will show how Open Source can unite communities of competitors, partners and users to create speed-to-market, highly innovative, standards-driven solutions that achieve shared goals of innovation and opportunity for new services. Simultaneously, these same concepts enable significant cost reduction. Several case studies will be provided, including the evolution of the NYSE Technologies 'Open Platform'.
Open source cloud projects are now armed with tremendous industry backing, critical mass and huge momentum. With cloud computing having moved beyond compute, integrating technologies in storage, networking, security, management and automation, open source is ready to compete. Hear about the agility, scalability and redundancy these platforms deliver.
This talk is aimed at enterprise IT professionals (systems and network administrators) who are looking to improve their operational agility and adopt cloud computing technologies that can be integrated easily into their current infrastructure.
The growth of Cloud computing, Big Data, and newly emerging Hyperscale Computing applications will revolutionize the data center designs of tomorrow. Future data centers will deliver unparalleled scalability and flexibility, replacing traditional cabling with sophisticated fabric interconnect technologies and exploiting novel Server-on-Chip integration opportunities, as well as emerging alternative computer architectures (such as ARM), to build very high density solutions.
In this talk, two industry visionaries will demystify emerging hyperscale, fabric, and alternative-architecture technologies, describing their immediate and longer-term impact on the next generation of data centers.
Linux distributions are complex systems with many discrete parts, generally configured separately. System administrators are required to be deeply knowledgeable about many disparate systems, each with its own flavor of configuration language. The OpenLMI project (Open Linux Management Infrastructure) is an effort to unify all aspects of primary configuration into a single common API built atop industry-standard CIM and DMTF technologies.
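Because OpenLMI builds on CIM/WBEM, any standard CIM client can talk to the same interface. The hedged sketch below uses the generic pywbem library (OpenLMI also ships its own shell and scripts); the host name, credentials, and the standard CIM_ComputerSystem class queried are only placeholders.

# Minimal sketch: query a CIM/WBEM management interface of the kind OpenLMI
# exposes; host, credentials, and the class queried are placeholders.
import pywbem

conn = pywbem.WBEMConnection("https://managed-host:5989",
                             ("admin", "secret"),
                             default_namespace="root/cimv2")

# Enumerate a standard DMTF class through the same API that OpenLMI's
# Linux-specific providers (storage, networking, services) extend.
for system in conn.EnumerateInstances("CIM_ComputerSystem"):
    print(system["Name"])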
KVM users have been asking for a neutral forum where they can meet with developers, vendors and other users to identify common technical requirements, discuss best practices on KVM deployment, and raise issues and concerns. Building on feedback from the earlier KVM sessions as well as the experiences of individual participants, this highly interactive roundtable will lay the groundwork for a KVM End User Council and propose its scope, membership and initial focus areas.
Of all the different types of virtualisation technology, containers have been regarded either as a cheap way of packing a hosting environment or as a curiosity. Now, however, with the advent of the cloud revolution and the focus on elasticity and density within a lean data centre, containers are coming into their own as the densest and most elastic virtualisation technology for supporting cloud environments. This talk will cover the principles of containerisation, its similarities to and differences from traditional virtualisation technologies, why it achieves higher (3x) densities and far greater elasticity, and what's being done to advance it within Linux.
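To make the idea concrete, the hedged sketch below uses util-linux's unshare to drop a shell into its own PID and mount namespaces, the kernel primitives containers are built on; resource control via cgroups and a separate root filesystem, which real container stacks add, are left out.

# Minimal sketch of the namespace isolation behind containers (assumes root
# and util-linux's unshare; cgroups and a private rootfs are omitted).
import subprocess

# Start a shell in its own PID and mount namespaces with a private /proc,
# so it sees only its own processes.
subprocess.run(["unshare", "--fork", "--pid", "--mount-proc", "/bin/sh", "-c",
                "echo inside the namespaces:; ps ax"],
               check=True)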
The Linux kernel code is robust, but even the best kernel hackers are only human and make mistakes. So, while kernel crashes are rare, they can occur and are unpleasant events. Because mainframes traditionally run mission-critical workloads, there is a particularly big focus on the serviceability of the platform. Therefore, when Linux was ported to s390 at the beginning of 2000, IBM also provided a reliable kernel dump mechanism. This was about five years before kdump was integrated into the upstream Linux kernel. In some areas, kdump offers advantages over the traditional s390 dump methods. Therefore, in 2011, kdump was ported to s390. The code was enriched with specific s390 features and, as much as possible, kdump was integrated into the existing s390 dump infrastructure. This presentation reviews the traditional s390 Linux kernel dump methods and describes the specifics of the s390 kdump port.
Big Data - with its volume, velocity, and varied (or lack of) structure - eludes traditional business analytics and business intelligence approaches. Solutions are needed to monitor the dynamic flows of Big Data, finding, analyzing, and responding to insights and patterns as they occur. To create such a data-driven infrastructure, you need the right foundation: one capable of processing data-intensive workloads, with the power and flexibility to scale, yet with ease of deployment and management. And let's not forget the need to tune for performance gains, as well as fine-grained controls for I/O, CPU, memory, storage, and networking performance. Join us as we delve into the key points that should be considered for a Big Data foundation.
Persistent memory is a new class of hardware that has roughly the same cost, capacity and performance as DRAM, but keeps its state across power loss. In both the Linux community and in broader industry circles, these parts stress our current data path and present challenges to applications as well. This talk will give details on the current work in kernel space, look at different APIs that are being proposed for applications, and talk about how these parts might be used in systems.
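One application model under discussion is simply to memory-map files that live on persistent-memory-backed storage. The sketch below assumes a hypothetical /mnt/pmem mount point backed by such hardware and is only an illustration, since the pmem-aware APIs themselves are still being debated.

# Minimal sketch: treat a file on (assumed) persistent-memory-backed storage
# as directly addressable memory; /mnt/pmem and the file name are hypothetical.
import mmap
import os

PATH = "/mnt/pmem/log"
SIZE = 4096

fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)
buf = mmap.mmap(fd, SIZE)

# Update the structure in place, then flush so the stores leave volatile CPU
# caches and head toward the media.
buf[0:16] = b"record 00000001\n"
buf.flush()

buf.close()
os.close(fd)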
A constant refrain over the decades from database, high-performance computing (HPC), and real-time developers has been: "Can't you just get the kernel out of the way?" Recent developments in the Linux kernel are paving the way to just that ideal: Linux is there whenever you need it, but if you follow a few simple rules, it is completely out of your way when you don't need it.
This adaptive-idle approach will provide bare-metal multicore performance and scalability to databases as well as to HPC and real-time applications. However, it is at the same time able to improve energy efficiency for upcoming asymmetric multicore systems, allowing these systems to better support workloads with extreme peak-to-mean utilization ratios. This talk will describe how this feat is accomplished and how it may best be used.
The networking performance available to Virtual Machines (VMs) can be low due to the inefficiencies of transferring network packets between the host domain and guests. This can limit the application-level performance of VMs on a 10 Gb/s network. SR-IOV capable network devices offer the benefits of direct I/O throughput and reduced CPU utilization while greatly increasing the scalability and sharing capabilities of the device. SR-IOV allows the benefits of the paravirtualized driver's throughput increase and additional CPU usage reductions in HVMs (Hardware Virtual Machines). SR-IOV uses direct I/O assignment of a network device to multiple VMs, maximizing the potential for using the full bandwidth capabilities of the network device, as well as enabling unmodified guest OS based device drivers which will work for different underlying VMMs.
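On recent kernels, carving a physical NIC into virtual functions can be as simple as a sysfs write, as in the hedged sketch below; the interface name is hypothetical, and the sriov_totalvfs/sriov_numvfs attributes are only present when the kernel and the NIC driver support SR-IOV.

# Minimal sketch: create SR-IOV virtual functions via sysfs (eth4 is a
# hypothetical SR-IOV capable NIC; requires root and driver support).

IFACE = "eth4"
NUM_VFS = 4
base = "/sys/class/net/%s/device/" % IFACE

# How many VFs can this device expose at most?
with open(base + "sriov_totalvfs") as f:
    print("device supports up to", f.read().strip(), "VFs")

# Instantiate the VFs; each appears as its own PCI device that can be assigned
# directly to a guest, bypassing the host's software switching path.
with open(base + "sriov_numvfs", "w") as f:
    f.write(str(NUM_VFS))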