
The New Stack Makers

Summary: The New Stack Makers is all about the developers, software engineers, and operations people who build at-scale architectures that change the way we develop and deploy software. For The New Stack Analysts podcast, please see https://soundcloud.com/thenewstackanalysts For The New Stack @ Scale podcast, please see https://soundcloud.com/thenewstackatscale For The New Stack Context podcast, please see https://soundcloud.com/thenewstackcontext Subscribe to TNS on YouTube at: https://www.youtube.com/c/TheNewStack


Podcasts:

 Helping Enterprises Move Faster With New Relic's Grace Andrews | File Type: audio/mpeg | Duration: 00:36:35

The biggest challenge for businesses today is developing software at speed and at scale. For startups, this may be second nature, but for large enterprises, it's the nature of the beast: the bigger the company, the slower it probably moves. For developers inside these enterprises, it can be easy to feel left behind. As the world moves on to cloud-based AI, IoT, serverless, and event-driven streaming, the typical enterprise developer with a monolith, a database, and an actual datacenter can easily feel old and boring. Grace Andrews, solutions engineer at New Relic, said that most enterprises have the same problem: speed. "The ecosystem is growing so much faster than it ever has before. People are always talking about Docker, containers, microservices, etc. But folks sometimes forget there's still roughly 70% to 80% of the industry that is not yet fully automated. You've got folks on this side having conversations about these containerized ecosystems and environments, and how they need to be able to have not only Kubernetes but also some other orchestration tool like a cloud platform... You have all of these things in motion, but you still have people who have physical servers. I think we're meeting this critical convergence point where, whether people are implementing the new tech in their environment or they're looking at it, everybody is worried about scale and speed. Not only are they worrying about scale and speed, they are worried about getting left behind," said Andrews.

 Adil Aijaz of Split Software Talks Agile Deployment | File Type: audio/mpeg | Duration: 00:19:39

Building software at scale and at velocity requires a great deal of infrastructure, process, and management. While some companies like Facebook and Google may make it seem like CI/CD is easy to build, in reality, both of these companies have spent billions of dollars optimizing their build pipelines and enabling developers to be more productive by removing barriers in the build/test/fix feedback loop. For the rest of us, there are many ways to help improve a CI/CD process that don't actually require writing your own build system, or spending $1 billion to make PHP compile to C++, as Facebook once did. Instead, there are numerous vendors, open source projects, and CI/CD gurus out there to help your team get from writing code to deploying it to production much faster. One company aimed at helping solve the CI/CD speed-up problem is Split Software. Adil Aijaz is the CEO and co-founder of Split, and he sat down with us to discuss the current state of agile deployment in the enterprise world.
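
Split's platform is built around feature flags, which let teams ship code to production dark and release it gradually, decoupling deployment from release. Below is a minimal sketch of that pattern in Python; the flag name, rollout percentage, and user IDs are hypothetical illustrations, not Split's actual SDK.

```python
import hashlib

# Hypothetical illustration of a feature flag gate, not Split's SDK.
# The flag decouples "code is deployed" from "feature is released":
# new code paths ship dark and are turned on for a growing percentage
# of users without another deploy.
ROLLOUT_PERCENT = {"new-checkout-flow": 10}  # flag name -> % of users enabled

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket a user into 0-99 and compare to the rollout."""
    if flag not in ROLLOUT_PERCENT:
        return False
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < ROLLOUT_PERCENT[flag]

# Usage: the old path remains the default until the flag is ramped to 100%.
if is_enabled("new-checkout-flow", "user-42"):
    pass  # run the new, still-dark code path
else:
    pass  # run the existing code path
```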

 Covalent Talks Cilium, and How it Brings BPF to Kubernetes | File Type: audio/mpeg | Duration: 00:20:56

The Berkeley Packet Filter is ancient history. It was created in 1992 at Lawrence Berkeley Labs as a way to better filter and sort network packets. In the early 2000s it was at the heart of the long-running SCO versus Linux lawsuit. Today, it's just another raw interface included with Linux. Recently, however, BPF has become a bit of an interesting topic, as it's become a popular replacement for iptables. Thomas Graf, CTO and co-founder at Covalent, is also the leader of the Cilium project. Cilium offers API-aware networking and security for Kubernetes users based on BPF. Graf said that the power of BPF can be tough to utilize in Kubernetes, and so the Cilium project is aimed at making that easier. "It's allowing you to translate declarative high-level intent such as policy, networking, load balancing, all of this high-level intent that is described with Kubernetes services. Cilium implements these high-level constructs with BPF in the most efficient and secure manner. It's bringing the power of BPF in an easily consumable way, and implementing known Kubernetes interfaces," said Graf.
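
To make the "declarative high-level intent" Graf describes concrete, the sketch below uses the official Kubernetes Python client to declare a plain NetworkPolicy; a CNI implementation such as Cilium is what actually enforces a policy like this (in Cilium's case, with BPF programs). The namespace, labels, and policy name are hypothetical.

```python
from kubernetes import client, config

# Hypothetical example: declare the intent "only frontend pods may reach
# backend pods on port 8080" and let the CNI plugin -- Cilium, via BPF --
# do the enforcement.
config.load_kube_config()

policy = client.V1NetworkPolicy(
    api_version="networking.k8s.io/v1",
    kind="NetworkPolicy",
    metadata=client.V1ObjectMeta(name="backend-allow-frontend", namespace="demo"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "backend"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"})
                )],
                ports=[client.V1NetworkPolicyPort(port=8080)],
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(namespace="demo", body=policy)
```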

 Neo4j’s Emil Eifrem on Graph Databases, Machine Learning, and More | File Type: audio/mpeg | Duration: 00:28:34

When it comes to machine learning and graph databases, “We’re in the first inning of a nine-inning game,” said Emil Eifrem, CEO and founder of Neo4j. “If you look at machine learning algorithms, they are written in graph language, or can be expressed in graph language.” Eifrem joins TC Currie in this episode of The New Stack Makers. Speaking from Fort Mason in San Francisco at the GraphTour event earlier this year, Eifrem talks about graph databases, how they are different from relational databases, and how this decade-old technology is keeping up with the new kids on the block.
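
To illustrate the difference Eifrem points to between graph and relational databases: relationships are stored and traversed directly rather than reconstructed through joins. The sketch below uses the official neo4j Python driver; the connection details and data are hypothetical.

```python
from neo4j import GraphDatabase

# Hypothetical connection details. In a graph database the relationship
# (:Person)-[:FRIEND]->(:Person) is a first-class record, so "friends of
# friends" is a path traversal rather than a self-join on a join table.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Create two people and a FRIEND relationship between them.
    session.run(
        "MERGE (a:Person {name: $a}) MERGE (b:Person {name: $b}) "
        "MERGE (a)-[:FRIEND]->(b)",
        a="Ada", b="Grace",
    )
    # Traverse one or two FRIEND hops out from Ada.
    result = session.run(
        "MATCH (p:Person {name: $name})-[:FRIEND*1..2]->(fof) "
        "RETURN DISTINCT fof.name AS name",
        name="Ada",
    )
    print([record["name"] for record in result])

driver.close()
```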

 Google's Melody Meckfessel and Sam Ramji Reveal the Secrets of DevOps | File Type: audio/mpeg | Duration: 00:19:44

At the JFrog SwampUp conference in Napa earlier this year, Google's Melody Meckfessel and former Google vice president Sam Ramji (now with Autodesk) were showing off the way Google makes its sausage. Unlike other, less well-curated development experiences, Google's process is worth examining and shouldn't leave anyone offended or covered in sausage leavings. For starters, Google's internal development processes and practices are immense. The numbers revealed by Meckfessel at the conference showed that over 500 million tests are run per day inside Google's systems. That's to accommodate over 4 million builds daily. Why so many builds? Because Google's Bazel build system allows for near-instant build processes, ensuring developers can quickly gain the feedback they need from their code.

 Talking Up Kubernetes with Rancher | File Type: audio/mpeg | Duration: 00:34:38

Shannon Williams has played a very hands-on role in launching software for over 20 years. After observing the power and potential of containerization, Williams co-founded Rancher Labs in 2014, following his success creating Cloud.com (now owned by Citrix Systems). Since 2014, things have changed. Kubernetes, of course, has emerged as the standard container orchestration platform, prompting Rancher to go “all Kubernetes” with the recent production release of its namesake product, Rancher 2.0. In this episode of The New Stack Makers podcast, Williams takes a step back to offer his perspective on Kubernetes and why and how Rancher made the shift to the platform.

 Talking Serverless with Oracle's Chad Arimura | File Type: audio/mpeg | Duration: 00:19:28

Just like Kung Fu in the '70s, serverless application development and deployment is hot. But just like Kung Fu, serverless is as much of a mindset as it is a platform. Amazon's Lambda really kicked off the excitement, but going even further back, the origins of this style of programming can be found in functional principles: those found in Erlang, Haskell, and Scala. Primarily, the idea of stateless computing and the goal of building discrete application functions drive this new paradigm of serverless. What's new about serverless is the fact that applications are offered up to the cloud to run in some unknown nebula managed by the cloud provider, with scaling needs completely abstracted away from the developer. That's the promise, anyway. For the older readers out there, this probably sounds a bit like an elaborate new form of application server. And you'd be completely right. To this end, both IBM's and Oracle's approaches to the serverless revolution have been to offer open source runtimes (OpenWhisk and the Fn Project, respectively) for anyone to run in their own cloud.
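
The stateless, discrete-function style described above is easiest to see in a handler: all input arrives in the event, all state lives outside the function, and the platform decides where and how many copies run. Here is a minimal sketch using the AWS Lambda-style Python handler signature; the event shape is hypothetical, and runtimes like OpenWhisk or Fn follow the same idea with their own signatures.

```python
import json

# Minimal sketch of a stateless serverless function (AWS Lambda-style
# signature). Nothing is kept in memory between invocations; any state
# would live in an external service, so the platform can scale copies
# of this function up and down freely.
def handler(event, context):
    order = json.loads(event.get("body", "{}"))  # hypothetical event shape
    total = sum(item["price"] * item["qty"] for item in order.get("items", []))
    # A real function would persist results to a managed datastore or API
    # here -- never to local memory or disk, which may vanish at any time.
    return {
        "statusCode": 200,
        "body": json.dumps({"orderId": order.get("id"), "total": total}),
    }
```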

 Paperspace Co-Founders Discuss TPUs and Cloud Deep Learning | File Type: audio/mpeg | Duration: 00:18:20

It's a crowded market if you're a machine learning company. Every vendor under the sun has integrated some new-fangled AI-driven service, making the real ROI tough to spot in the jungle of buzzwords and feature creep. Paperspace is hoping to make that journey a little easier for businesses by offering Gradient, an easily manageable infrastructure platform for deep learning. Under the hood, Paperspace is not just some AI startup: they're offering developers access to Google's Tensor Processing Units (TPUs), which are otherwise only available to research groups and other computer-sciencey types who've applied to Google to gain access. Paperspace is currently offering TPU access as well as GPU access in its deep learning platform. That alone made it worth sitting down with CEO Dillon Erb and CTO Tom Sanfilippo for a chat. These co-founders took time to discuss just what it's like to build deep learning applications with TPUs.
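
For a sense of what targeting a TPU looks like from code, here is a minimal TensorFlow 2.x sketch: resolve the TPU, initialize it, and build the model under a TPUStrategy scope. The TPU address is hypothetical, and a managed platform like Paperspace's Gradient would handle much of this setup on the user's behalf.

```python
import tensorflow as tf

# Hypothetical TPU address; on Cloud TPU this usually comes from the environment.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="grpc://10.0.0.2:8470")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

strategy = tf.distribute.TPUStrategy(resolver)

# Anything built inside the strategy scope is replicated across the TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
# model.fit(...) then proceeds as usual; the strategy shards batches over cores.
```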

 Discussing Real World Chef Usage With DevOps Experts | File Type: audio/mpeg | Duration: 00:28:06

There are two sides to every software and IT story: the story of the people doing the work, and the stories the vendors will tell you about how that work can be done more efficiently. Sometimes, the hardest part of the job can be reconciling these two often disparate views of the world, as vision meets pavement and tools meet unique problems. That's why it's so important to discuss the usage of IT tooling with real IT practitioners. Today, we've got two very smart IT folks packed into this episode of the Makers podcast, both here to discuss the usage of Chef in enterprise environments. First up is Stephen Figgins, Associate Director of Operations for Agile Technology Solutions at the University of Kansas. "We have been working with Chef for six years and have had a lot of success. Chef made it very easy for us to move to AWS. AWS had a lot of complexity to it, like learning how to set up security and networks and things like that. One thing we really didn't have to worry about was how to configure the EC2 nodes we were going to create, because we already knew how to take a blank EC2 node and make it run our application. We were able to focus not on lifting and shifting our applications from our non-cloud environment, but instead on how we could best leverage the cloud," said Figgins. Watch on YouTube: https://www.youtube.com/watch?v=nYpTuheMXE8

 At Scale Delivery and Deployment with Kenzan CTO Jon Stockdill | File Type: audio/mpeg | Duration: 00:32:44

There is a big difference between agile development and continuous delivery and deployment, but you probably can't get to the latter without having implemented the former. At the end of the day, every company wants to ship better code more often in order to innovate in its market, but actually turning your software development and IT teams into lean, mean, feature-shipping machines isn't as easy as taking a straight road to a clear goal ahead. Instead, the road to success is paved with DevOps tools, agile processes, and best practices. One of those tools is Spinnaker, the open source, multi-cloud continuous delivery tool that originated at Netflix. Jon Stockdill, CTO and co-founder of Kenzan, uses Spinnaker in his DevOps consulting engagements with clients. He said it's the best CI/CD solution out there at the moment.

 Chef Across The Enterprise | File Type: audio/mpeg | Duration: 00:24:53

Chef isn't just for standing up machines anymore. With so much riding on enterprise infrastructure, it's tough to move things around with confidence, and to remain certain that everything you removed, moved, or replaced is properly back online after such a shift. Chef has evolved to provide this sort of reassurance to IT administrators and developers, far beyond the original use case of provisioning and standing up single machines. Brittany Woods, automation engineer at CARFAX, said, "I'm responsible for making sure Chef works for us, and making sure people's lives are easier because of it. We are a Linux shop, primarily, and we use Chef exclusively for Linux. We build the systems to support the products, we use Chef to make that happen, and we use Chef to manage those systems throughout their entire lifecycle." "Chef is our way to fully manage that architecture from configuration--configuration specific to apps, configuration specific to tooling that we use--basically the entire build of the system outside of provisioning... Right now we are comprised of several different smaller teams that each maintain a different focus. Whatever products they support, they maintain cookbooks for those products. My role is to make them successful: to ensure they have the tools they need to be successful, and also to manage the Chef architecture," said Woods. Watch on YouTube: https://www.youtube.com/watch?v=jEWRCSeYzQ0

 Discussing DevOps, Data and Microservices with Vexata CTO Surya Varanasi | File Type: audio/mpeg | Duration: 00:16:11

Data in microservice-based environments can be difficult to manage at scale. When application servers scale to near infinity, the datastores can't necessarily expand to meet that demand; they can only be optimized to keep up, and perhaps sharded. Considering just how much enterprise information is stored in some of those large systems, it's a worrying proposition to be asked by management to increase application performance when much of it is tied to an Oracle or Microsoft database. Surya Varanasi, CTO of Vexata, has been dealing with large amounts of data for over a decade now. While he once worked around the hardware layer at Brocade, today he focuses very heavily on the enterprise databases that power businesses around the world. From Oracle to SAP to SAS and Microsoft, Vexata swims in a decidedly enterprise pool of customers.

 The State Of Building Images On Kubernetes | File Type: audio/mpeg | Duration: 00:24:53

At KubeCon in Copenhagen in May, many talks focused on the work required to build continuous integration and continuous deployment pipelines using containers. One of the major issues still remaining in the container world is specifically that last bit of the CI/CD pipeline: building, storing, and securing containers built for internal software projects. Steve Speicher, principal product manager on the Red Hat OpenShift team, spent a good deal of time at KubeCon looking into the solutions and remaining pain points that exist around dynamically building and managing containers within a more traditional agile development environment. "A lot of people want to leverage Kubernetes for the build in the pipeline itself, and so that's one of the things we're talking about here and learning more about: what people are interested in, to leverage the platform to do more CI/CD," said Speicher. "You have people who have built CI/CD farms and infrastructure and then their deployment platform. With Kubernetes and OpenShift you have the opportunity to put that all in one, so your cluster is your build platform, your test platform, and your deployment platform. It's easy to scale up multiple instances, to test them, run your builds there," said Ben Parees, principal engineer at Red Hat.
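
One common way to "leverage Kubernetes for the build in the pipeline itself" is to run the image build as a Kubernetes Job, for example with a rootless builder such as kaniko. The sketch below uses the Kubernetes Python client; the repository, registry destination, and namespace are hypothetical, and OpenShift's own build mechanisms (BuildConfigs, source-to-image) work differently.

```python
from kubernetes import client, config

# Hypothetical sketch: run an image build on the cluster itself as a Job,
# using the kaniko executor. Repo, registry, and namespace are placeholders;
# a real setup also mounts registry credentials for the push step.
config.load_kube_config()

build_container = client.V1Container(
    name="kaniko",
    image="gcr.io/kaniko-project/executor:latest",
    args=[
        "--dockerfile=Dockerfile",
        "--context=git://github.com/example-org/example-app.git",
        "--destination=registry.example.com/example-app:latest",
    ],
)

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="example-app-build", namespace="ci"),
    spec=client.V1JobSpec(
        backoff_limit=1,
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(restart_policy="Never", containers=[build_container]),
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="ci", body=job)
```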

 Optimizely's Claire Vo Talks Successful A/B Testing at Scale | File Type: audio/mpeg | Duration: 00:19:44

When building front-end software, it can be tricky to figure out just what works. As with any page layout endeavor, from the Web to the supermarket checkout line tabloids, there are plenty of nooks and crannies to explore with headlines, graphics, and colors. Any software shop earning money on the Web likely already knows about "A/B" testing: the practice of subtly changing your page design and gathering metrics on its effectiveness at converting visitors versus the existing version of the site. Now that such testing regimes are commonplace in the enterprise, it is inevitable that every team eventually encounters the egregious and exhausting existential crisis that is test management. Gathering metrics for a single test is one thing, but what happens when the entire enterprise is pushing tests across thousands of sites all the time? Claire Vo is a Silicon Valley success story: she sold her startup Experiment Engine to Optimizely in 2017. Her particular winning formula was to help solve this exact problem for enterprises: managing experiments at scale across thousands of sites, and measuring results in order to effect actionable changes overall.
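
At its core, the mechanism is straightforward: deterministically assign each visitor to a variant and count conversions per variant; the hard part Vo's product addresses is doing that across thousands of concurrent experiments and sites. Below is a minimal single-experiment sketch in Python, with hypothetical names and no relation to Optimizely's SDK.

```python
import hashlib
from collections import Counter

# Hypothetical single-experiment sketch, not Optimizely's SDK. Assignment is
# deterministic (the same visitor always sees the same variant) and conversions
# are tallied per variant so the two page designs can be compared.
VARIANTS = ["control", "new_headline"]
exposures, conversions = Counter(), Counter()

def assign(experiment: str, visitor_id: str) -> str:
    """Hash experiment + visitor into a stable variant choice."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

def record_exposure(experiment: str, visitor_id: str) -> str:
    variant = assign(experiment, visitor_id)
    exposures[variant] += 1
    return variant  # render the page for this variant

def record_conversion(experiment: str, visitor_id: str) -> None:
    conversions[assign(experiment, visitor_id)] += 1

# Conversion rate per variant is conversions[v] / exposures[v]; at enterprise
# scale the real work is managing thousands of these experiments at once.
```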

 Security for Kubernetes | File Type: audio/mpeg | Duration: 00:22:13

With all the excitement around containers and Kubernetes, it can be easy to forget that these systems still require the same types of help that older virtual machine and hardware-based systems needed. Chief among that list of needs is security. We sat down at KubeCon in Copenhagen to discuss this very topic with Liz Rice, technology evangelist at Aqua Security, and Justin Cappos, associate professor of computer science and engineering at the NYU Tandon School of Engineering. Cappos is one of the driving forces behind the TUF Project, which stands for "The Update Framework." "We had a pretty long history of going and doing a lot of work with folks at the Tor project and other large software distributions that had concerns about nation-state actors potentially stepping in. About three or four years ago the Docker community came together and built a really nice implementation of TUF called Notary, and as of about six months ago, both the Docker implementation, which is the cloud-focused implementation of TUF, and the TUF specification itself became CNCF projects," said Cappos. Watch on YouTube: https://youtu.be/mNFoqxnuecg
