Our second annual Stacki user conference will be held at San Francisco’s iconic Clift Hotel on April 27, 2017. StackiFest 2017 will bring together Stacki users, partners, and others who want to learn more about modern bare metal provisioning and its application to data center automation for big data, container, and private cloud deployments.
The day will be filled with presentations from the core Stacki engineering team and Stacki users. Breakfast and lunch will be served, with a happy hour following the event. Space is limited, so register now to reserve your spot!
Join our Slack Team to stay up to date.
WANT TO SPONSOR?
Please join us as a sponsor to showcase your product and meet with members of the community. Having our partners there this year will make StackiFest complete. This is an excellent opportunity to build deep connections with industry experts over the course of an entire day.
Breakfast & Registration
Welcome & Opening Address
The Best ARM on the team is an Ace: Installing a Cluster of Raspberry Pis (with Contest to Win one with Stacki Ace Installed!)
Spanish Suite on the 15th Floor of the Clift Hotel
Intro & Demo of Stacki
What’s Stacki all about? Greg will briefly cover what Stacki is, the history of Stacki, some of its new features, and what’s coming in Stacki 4.0. He will also give a short demo of Stacki for all the newbies.
Automation of your OpenStack Infrastructure with Stacki
One of the primary purposes of CloudLabs within Flex is to provide optimized rack-level solutions while integrating third-party open source vendor products. With the fast-growing adoption of cloud computing, many companies are looking to shift their workloads into the cloud. CloudLabs’ established engineering and design services provide an environment where validation and performance testing can be done in a controlled manner on a variety of cloud platforms, including OpenStack. This talk will look at how CloudLabs uses the open source tools Stacki (a Linux provisioning tool), Ansible, and OpenStack to help validate hardware and rack-level solutions for cloud infrastructure.
Taking Stacki Past Servers: Configuration of NetApp Storage Arrays
The three pillars of the data center are “compute”, “network”, and “storage”. Stacki obviously has a strong heritage as a system for automating the deployment of compute nodes, but the configuration of networking and storage has been limited to the internals of the servers. With the Stacki Pallet for NetApp, we’ve now taken what we’ve learned from the development of Stacki and extended it to NetApp external storage arrays. This means all the same tricks you’ve learned for working with compute nodes in Stacki will work with storage arrays. During the talk, Bill will walk through the process of creating a configuration for a storage array, including networking and disks, to create LUNs accessible by Stacki backend nodes over iSCSI.
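On the Linux side of that workflow, attaching an iSCSI LUN from a backend node follows the standard open-iscsi steps, which the pallet would automate. A minimal manual sketch (the portal address below is a placeholder, not a real array):

```shell
# Discover iSCSI targets exported by the storage array
# (10.1.1.50 is a placeholder for the array's data interface)
iscsiadm -m discovery -t sendtargets -p 10.1.1.50

# Log in to the discovered target; the LUN appears as a new block device
iscsiadm -m node --login

# Confirm the new block device is visible
lsblk
```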
Customizing your Stacki Backend Nodes with Carts
By default, Stacki installs backend nodes with a very small software footprint; in Stacki parlance, the backend node is brought up to “a ping and a prompt.” To make a backend node more useful, other application software and services need to be installed and configured. Carts are created by end users to customize the configuration of the backend nodes in your cluster. In this presentation, Joe will teach you how to build a simple Cart in a few minutes.
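As a rough sketch of that workflow (the cart name, RPM file, and host name below are hypothetical; see the Stacki documentation for the authoritative steps), building and applying a minimal cart from the frontend looks something like:

```shell
# Create an empty cart skeleton under /export/stack/carts/
stack add cart devtools

# Copy application RPMs into the cart's package directory
cp myapp-1.0-1.x86_64.rpm /export/stack/carts/devtools/RPMS/

# Enable the cart so it is included in backend installs
stack enable cart devtools

# Set a backend node to reinstall on next boot, picking up the cart
stack set host boot backend-0-0 action=install
```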
Post-Kick Cluster Independence with Ansible
Stacki is a feature-rich configurator that presumes cluster dependence after OS bootstrapping is complete. While this is an elegant solution for many, you may need the ping and prompt without those dependencies because you have other tools for OS and application configuration and deployment (e.g., curated package repositories). We’ll walk through a simple PoC in which target metal is bootstrapped into a state that is independent of the cluster and homed on its permanent network.
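A minimal playbook in that spirit might point a freshly provisioned node at a curated repository and install from it, with no further dependence on the provisioning cluster (the host group, repo URL, and package name below are placeholders):

```yaml
# post-kick.yml -- sketch of cluster-independent configuration
- hosts: new_metal
  become: true
  tasks:
    - name: Point the node at a curated internal package repository
      yum_repository:
        name: curated
        description: Curated internal repository
        baseurl: http://repo.example.com/el7/
        gpgcheck: false

    - name: Install the application stack from the curated repo
      yum:
        name: myapp
        state: present
```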
Building a Hadoop Cluster with Stacki Workshop
Step 1 of every Hadoop vendor’s documentation reads something like this: “First, install a cluster.” Without a consistent group of installed machines, a Hadoop installation is prone to failure. Open source Stacki installs machines to a ping and a prompt, enabling the consistency and configuration required for a functioning Hadoop installation. At the beginning of 2017, StackIQ released a new open source Hortonworks bridge pallet that enables the installation of Hortonworks through the Ambari appliance. In this presentation, Joe will show you how to set up Stacki, the HDP bridge pallet, and Ambari, and then install Hadoop on a running cluster.
Provisioning Heterogeneous Bare Metal with Stacki
Stacki was used to upgrade a high-performance computing (HPC) cluster at the National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland. NIST is the United States’ federal metrology institute, performing research and creating standards for measurements and technology, including materials, data, and cybersecurity. A 1,200-node CentOS 5 Maui/Torque cluster was upgraded to CentOS 7 with a Slurm queuing system; at the same time, hundreds of servers were removed from and added to the cluster. This presentation will show the application of Stacki to this HPC cluster and contrast it with the provisioning methods used previously. Stacki carts and pallets are used to provision role-based servers, including GPU, high-memory, and multiple login servers. Ideas are also proposed for extending this approach to managing multiple clusters. Any mention of commercial products within this presentation, including Stacki, is for information purposes only; it does not imply recommendation or endorsement by NIST.
How Teradata uses Stacki
Teradata is the global leader in large-scale Linux data warehouse and data analytics applications. Teradata’s challenge has always been how to quickly complete a massively parallel system installation from bare metal components.
For many years, Teradata developed its own tools and methodology for system installation, but it became obvious that a different approach was needed to meet our customers’ demands and expectations. We have selected Stacki as Teradata’s tool for bare metal provisioning. This presentation shows the challenges Teradata faces today and how Teradata uses Stacki to address them.
Deploying Kubernetes on Bare Metal
StackIQ released an open source Stacki Pallet for Kubernetes in January 2017. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
In this presentation, Joe will demo how to build a Kubernetes cluster using the Stacki Pallet for Kubernetes. The Pallet runs on the Stacki framework giving you a functioning Kubernetes cluster with a kubernetes-dashboard deployment if you request it.
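In outline, the flow is to add and enable the pallet on the frontend, then reinstall the backends so they come up as Kubernetes members. A sketch (the ISO file name and host name are illustrative; the pallet’s own README is the authoritative reference):

```shell
# Add the Kubernetes pallet ISO to the frontend
stack add pallet kubernetes-stacki.iso

# Enable it so backend installs include it
stack enable pallet kubernetes

# Reinstall a backend node so it joins the Kubernetes cluster
stack set host boot backend-0-0 action=install
```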
The Best ARM on the team is an Ace: Installing a Cluster of Raspberry Pis (with Contest to Win one with Stacki Ace Installed!)
The Raspberry Pi was originally developed by the Raspberry Pi Foundation to promote the teaching of basic computer science in schools and developing countries. And although these little single-board computers have done just that, they have the ability to do so much more when paired with the correct tools. That’s why StackIQ ported Stacki (their original bare metal x86_64 server installer) to support Raspberry Pis, creating Stacki Ace: an open-source bare-metal installer for Raspberry Pis.
Greg will demo how to create a cluster of Raspberry Pis with Stacki Ace. At the end of the presentation, we will hold a contest where one attendee will win a Raspberry Pi with Stacki Ace already installed!
Ken Bingham – Senior Systems Engineer, Empowered Benefits
Ken is a Linux operations administrator and systems development engineer with entwined passions for future-forward devops tooling, information assurance, and team development.
Greg Bruno – VP Engineering/Co-Founder, StackIQ
Greg Bruno is the Vice President of Engineering and co-founder of StackIQ. Prior to joining StackIQ, he co-founded the open-source Rocks Cluster Group at the San Diego Supercomputer Center (SDSC) located on the campus of the University of California at San Diego (UCSD). He received his M.S. and Ph.D. in Computer Science from UCSD where he researched parallel file systems. Prior to the Rocks Cluster Group, he worked for 10 years at Teradata Systems where he developed cluster management software for the systems that supported the world’s largest databases. He spent the last 10 years helping to architect, design and implement the Rocks Cluster Distribution, an open-source software stack that enables domain-specific users to build and manage their own clusters.
Pervez Choudhry – CEO, StackIQ
Pervez joined StackIQ as Chief Executive Officer and a member of the Board in July 2016. He brings over twenty-five years of experience in product management, business development, and enterprise technology sales with a successful track record of combining business and technology knowledge to build high growth business-to-business venture-backed start-ups. Prior to joining StackIQ, Pervez served as Vice President of North America at Puppet Labs from 2013 to 2015 where he built major accounts from the ground up. Previously, he held multiple roles at Splunk from 2006 to 2013 where he worked closely with the founders and executive management in establishing best practices for high growth in enterprise markets. Earlier in his career, Pervez held various roles at Composite Software (acquired by Cisco), WebLogic/BEA (acquired by Oracle), Sybase (acquired by SAP), Versant (acquired by Actian Corporation) and IBM. Pervez received an MSc from Columbia University in New York and a BSc from Marist College in New York.
Anthony Chen – Platform SW Engineer, Teradata
Anthony Chen is a Platform Software Engineer at Teradata Corp. Anthony’s current focus is on leading the architecture and implementation of Teradata’s bare-metal system provisioning for manufacturing system staging, customer system installations and upgrades, and Teradata’s private cloud environment, in which Stacki is one of the major components.
Prior to his current assignment, Anthony worked at Teradata on OS-related projects for various Unix operating systems, including VxWorks, AT&T Unix, BSD Unix, and SUSE Linux. He has been involved in kernel problem analysis, performance analysis, kernel enhancement, and kernel customization for Teradata’s SUSE Linux.
Joe Kaiser – Director of Open Source Engineering, StackIQ
He’s back, and we still don’t know what to do with him. Joe Kaiser has aged significantly in the past year, but is none the wiser, not gotten any taller, is probably shorter, and no less wide. We made him the Director of Open Source Engineering in hopes he would acquire some tact but this didn’t seem to work. If anyone here irritates you, it’s likely going to be him. Next year we are going to invite his wife in the hopes he’ll behave. He mocks anyone suffering from the inevitable unrequited Apple love, sysadmins who think diskless is a viable computing strategy, clusters less than 500 machines, anyone willingly running Windows, and men with (hair) buns. He loves long walks on the beach as long as they’re only about 20ft, and puppies as long as they’re owned by someone else and he doesn’t have to see, hear, or touch them. He did some software-ish stuff to make installing Kubernetes and Docker on bare-metal easier than the crappy documentation would have you believe. He thinks K8s is a dumb nickname and still thinks the Cloud(s) will blow over. But he also continues to see The Lord of the Rings as an addiction allegory, so what does he know? He’s been doing Linux and big ass clusters for 20+ years and, someday, he’ll finally get something right and can then go be a greeter at Wal-Mart before a robot makes that job pointless – hopefully soon.
Mason Katz – Chief Architect/Co-Founder, StackIQ
Mason Katz is the Chief Technology Officer and co-founder of StackIQ. Prior to joining StackIQ, he co-founded the open-source Rocks Clusters Group at the San Diego Supercomputer Center (SDSC) located on the campus of the University of California at San Diego (UCSD). He has 20 years of experience in distributed and networked systems. His first job was as an 8-bit embedded software engineer, where he integrated first generation GPS units into a national grid of weather sensors. It was here that he discovered his passion for creating highly automated, maintainable, and supportable systems. He then took his embedded experience into the OS kernel arena and spent several years developing research-oriented network protocol stacks (x-kernel and IPSec) for Linux. For the last half of his career, his primary focus has been on the design of software to manage commodity Linux clusters. As a co-founder of the Rocks Cluster Group, he helped position Rocks as the de facto standard in Linux cluster management.
Hugh Ma – Cloud Validation Developer, Flex Ciii
Hugh Ma is a Cloud Validation Developer at Flex Ciii. He focuses on supporting various groups in developing automation processes for benchmarking and validation. He also works on OpenStack and DevOps. When he’s not writing scripts to make machines do his bidding, he is available at Hugh.Ma@flextronics.com. Otherwise, he’ll be on #Ansible IRC, or riding his motorcycle off into the sunset.
Tim McIntire – COO/CTO/Co-Founder, StackIQ
Tim McIntire co-founded StackIQ to bring the cluster computing sector’s thriving open source community into the enterprise. His vision led to StackIQ’s development of the commercial version of Rocks, which dramatically reduces the time and cost of setting up and managing clusters through intelligent multi-server software automation. This work resulted in partnerships with Amazon, HP, Dell, Intel (and others), which has enabled StackIQ to bootstrap through its first four years. McIntire’s previous work includes leading the development team at the Digital Image Analysis Lab at Scripps Institution of Oceanography. His research, which was primarily funded as part of NASA’s Direct Broadcast program, has been published in various IEEE and Elsevier Journals on topics ranging from remote sensing using neural networks to parallel image processing using HPC Clusters. He also has contributed to or been featured in IBM developerWorks, HPCwire, Bio IT World, Apple Developer Connection, and the computational science magazine, enVision. McIntire received a B.S. in Computer Science from the University of California, San Diego.
Bill Sanders – Software Engineer, StackIQ
Bill is a software engineer for StackIQ, makers of the world’s fastest open source bare metal Linux deployment tool: Stacki. Prior to working at StackIQ he worked in Teradata’s R&D organization on clustered storage projects and before that as a sysadmin at a public school. With a background as a sometimes sysadmin, sometimes developer, and full-time open source advocate he has a strong dedication to building open source tools to automate every corner of the stack. When not writing code, he enjoys mentoring college students, baking, gardening and yes, long walks on the beach.
Justin Senseney – Senior Computer Scientist, NIST
Justin Senseney is a Senior Computer Scientist at the National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland. He first worked on single-node image processing solutions at the National Institutes of Health. Then at Walter Reed Military Medical Center, he worked on an image processing pipeline to improve the understanding of traumatic brain injury (TBI) through magnetic resonance imaging and other imaging modalities. Patients at Walter Reed enroll in this research study to help provide a more objective and historical measurement of TBI. He then assisted with the upgrade of NIST’s CentOS 5 HPC cluster to CentOS 7 using Stacki. He develops and implements software solutions for various computing requirements at NIST.