BOSTON — Enterprise purchasing decisions for DevOps platforms now emphasize layers of infrastructure beyond Kubernetes, expanding opportunities but upping the ante for Red Hat OpenShift product development.
Red Hat faces competitive pressure from public cloud providers that continue to add developer-friendly features to managed Kubernetes services. But enterprise customers presenting at this week’s Red Hat Summit said they kept OpenShift as they added public cloud deployments in recent years because of its advanced security features and access to Red Hat professional services for refactoring legacy applications.
“[Azure Red Hat OpenShift] had the same look and feel as OpenShift on premises … so it was one of the ways that we were going to manage change for our [internal] customers,” said Chuck Uwanna, container platforms solutions architect at ExxonMobil, during a summit presentation. “[But] it allowed us to move our focus up the stack. … We were only focused on what was important to our customers, which was helping them manage the application lifecycle.”
Red Hat and Microsoft Azure professional services teams also helped convert some of ExxonMobil’s monolithic on-premises apps into microservices as it expanded into the public cloud in 2021, Uwanna said. Vendor support helped solve the complex networking problems that came with maintaining security and compliance when the energy company connected cloud-based resources to its on-premises data centers.
The DevOps platform battlefield has expanded to cover many fronts, from software supply chain security to cloud-native networking and observability, alongside multi-cluster and multi-cloud Kubernetes management. Now the question of how Red Hat will maintain both the product’s many moving parts and a cohesive strategy concerns some industry observers.
“I feel like they need to position themselves more clearly,” said Rick Rackow, senior site reliability engineer at geolocation tech company TomTom in Amsterdam, in an online interview this week. Rackow worked for Red Hat as a senior SRE from 2019 to 2021 and authored Operating OpenShift from O’Reilly Media in 2022. Rackow’s current employer is evaluating DevOps platforms now and considering OpenShift, among others.
“The option is totally there to be No. 1 in the platform engineering space, but they can’t have it all,” Rackow said. “Either they’re a complete platform now, which would be great, or they’re an improved Kubernetes version, which is sadly still what a lot of folks believe.”
OpenShift software supply chain security plays to its strengths
ExxonMobil wasn’t the only customer that cited both security concerns and application modernization challenges in sticking with OpenShift amid cloud expansion at Red Hat Summit.
Citizens Bank didn’t previously have OpenShift on premises but has based new cloud applications over the last three years on Red Hat OpenShift on AWS and Azure Red Hat OpenShift (ARO), mostly for security reasons, according to Krishna Mopati, director of software engineering at Citizens Bank, during a customer panel presentation.
“When we looked at a plain vanilla Kubernetes — we do have instances of that in the bank — we literally bring it up and have to lock it down,” he said. “Whereas you install OpenShift and have to punch holes to get out. Security-wise, it’s so much better — because we are not all Kubernetes experts — to work with something that’s locked down and then understand what you need to do rather than figuring out what you need to lock down before you proceed.”
Advance Auto Parts managed its own Kubernetes clusters on AWS but expanded to Azure using ARO to speed up its application delivery during the COVID-19 pandemic, said Sonia Pereira, senior director of enterprise architecture for the company, during the same customer panel.
“We were looking at, say, our website providing new features during the pandemic. And we rolled out things like same-day delivery — new features that we’d like to get out to our customers quickly at the time when it’s needed,” she said. “We also got a catalog out and updated our pricing services — all on our cloud deployment.”
Red Hat added OpenShift features this week that cater to this speed- and security-focused audience with Trusted Software Supply Chain, which consists of two new cloud services — Trusted Application Pipeline and Red Hat Trusted Content — alongside the existing Quay container registry and Advanced Cluster Security (ACS) cloud services.
Trusted Application Pipeline, available as a preview, is based on an upstream integration between Tekton CI/CD pipelines and Sigstore for software artifact provenance and signing. The product can import code from Git repositories, analyze it for security vulnerabilities and vulnerable dependencies, generate a software bill of materials with Sigstore attestation and promote container images through different stages of the deployment pipeline. It also contains an enterprise contract policy engine to confirm that container images are consistent with industry frameworks, such as Supply-chain Levels for Software Artifacts (SLSA).
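The policy-gate idea behind that enterprise contract can be sketched in miniature: given a parsed software bill of materials, refuse to promote an image whose components match a deny list of known-vulnerable packages. The data structures and function names below are invented for illustration; the actual product evaluates policies against Tekton pipeline results and Sigstore-signed attestations, not a hard-coded list.

```python
# Simplified, hypothetical sketch of an SBOM policy gate.
# A real enterprise contract check would verify signed attestations;
# this toy version only screens components against a deny list.

KNOWN_VULNERABLE = {("log4j-core", "2.14.1"), ("openssl", "1.0.2")}

def violations(sbom: dict) -> list:
    """Return the SBOM components that match the deny list."""
    return [
        c for c in sbom.get("components", [])
        if (c["name"], c["version"]) in KNOWN_VULNERABLE
    ]

def may_promote(sbom: dict) -> bool:
    """An image is promoted only when no component violates policy."""
    return not violations(sbom)

sbom = {
    "components": [
        {"name": "requests", "version": "2.31.0"},
        {"name": "log4j-core", "version": "2.14.1"},
    ]
}
print(may_promote(sbom))  # → False: the flagged dependency blocks promotion
```

In the product, a failed check would stop the image from advancing to the next stage of the deployment pipeline rather than simply returning a boolean.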
Rackow said the new software supply chain security features will be a key point in Red Hat’s favor during his company’s DevOps platform evaluations.
“If the supply chain security control works well, we’d even get rid of another vendor,” he said.
Kubernetes networking a tangled web
Elsewhere, industry experts said, Red Hat’s product strategy isn’t as clear or strong, especially in cloud-native networking, where the vendor muddied the waters further this week by releasing Red Hat Service Interconnect. The service is supported on Kubernetes clusters, virtual machines and bare metal hosts both inside and outside OpenShift. It adds a Layer 7 network overlay, based on the open source Skupper project, that developers can configure without having to address the underlying network infrastructure.
Red Hat officials acknowledged this week that the company still needs to clarify, from a platform engineering perspective, how Service Interconnect will fit with the other cloud-native networking projects already integrated with OpenShift. Advanced Cluster Management integrates with Submariner to handle multi-cluster network connections, while OpenShift Service Mesh is also meant to automate and abstract Kubernetes network connections for developers. Meanwhile, at least one GitHub issue remains open from an OpenShift Service Mesh user about incompatibility with Skupper.
“Service Interconnect is fundamentally about connectivity between different services [that are] not even necessarily on a Kubernetes cluster, far apart in different environments,” said Jamie Longmuir, principal product manager for OpenShift Service Mesh at Red Hat, during a Q&A presentation. “We’re still working on having a better integration story between [Service Interconnect and service mesh] … but it’s still fairly new.”
Submariner functions at a lower layer of the network stack than Service Interconnect, and service meshes were developed for use within Kubernetes clusters. However, multiple service mesh vendors, including Isovalent, Kong and Buoyant, now market tools to manage multi-cluster and multi-cloud environments as well as non-container resources.
One industry analyst said he remains confused about the value proposition of Service Interconnect, given service mesh vendors and projects, including Istio, have also touted the ability to hide Kubernetes network complexity from developers.
“Wasn’t this what service mesh was supposed to do?” said Gary Chen, an analyst at IDC, in an interview at the summit. “[It] seems like many of these things were originally built for [infrastructure and operations] people without as much thought for a good developer interface, and that is starting to come now.”
Red Hat’s overall cloud-native networking strategy has not been as strong as competitors such as VMware and Microsoft, added Brad Casemore, another IDC analyst.
“Red Hat just hasn’t been as active in networking as one might have expected,” he said. “It was passive when it came to network virtualization, where VMware took the lead. When disaggregation came to networking … many wondered why Red Hat didn’t assert itself. Now Isovalent has taken networking right into the Linux kernel — another realm where Red Hat might have been more aggressive.”
Multi-cluster complexity rises amid observability turnover
Meanwhile, Red Hat continues to execute the roadmap for high-scale multi-cluster and multi-cloud Kubernetes management under its Advanced Cluster Management (ACM) product that it disclosed at KubeCon + CloudNativeCon North America in October.
For example, ACM will soon be able to provision non-OpenShift Kubernetes clusters. Currently it can only manage existing clusters on Azure Kubernetes Service, Amazon Elastic Kubernetes Service and Google Kubernetes Engine, according to Kashif Islam, principal architect at Red Hat, during a Q&A presentation.
Other features coming this year for ACM include an ACM Global Hub, soon to be released in tech preview. It can manage multiple ACM hub management clusters and aggregate observability alerts among multiple Prometheus and Thanos instances.
“We’re really focused on that Global Hub as an observability point,” said Jeff Brent, director of product management for ACM, during a presentation. “Internally, we call it uber-Thanos. Eventually, we want to be able to aggregate multiple Thanos instances up to a point where we’re … centralizing alerts from multiple hubs … and able to drill down very quickly to get to troubleshooting points of contact.”
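The “uber-Thanos” aggregation Brent describes can be sketched in miniature: collect alert lists from several hub clusters, merge alerts that share identifying labels, and track which hubs fired each one so an operator can drill down to the right cluster. The names and structures here are hypothetical; ACM Global Hub’s actual implementation builds on Prometheus and Thanos rather than on anything like this toy aggregator.

```python
# Hypothetical sketch of centralizing alerts from multiple ACM hubs.
# Alerts with the same identifying labels are merged into one entry,
# and the source hubs are recorded for drill-down.

from collections import defaultdict

def aggregate_alerts(hubs: dict) -> dict:
    """Map each unique alert (by name and severity) to the hubs firing it."""
    merged = defaultdict(set)
    for hub_name, alerts in hubs.items():
        for alert in alerts:
            key = (alert["alertname"], alert["severity"])
            merged[key].add(hub_name)
    return dict(merged)

hubs = {
    "hub-east": [{"alertname": "KubeNodeNotReady", "severity": "warning"}],
    "hub-west": [
        {"alertname": "KubeNodeNotReady", "severity": "warning"},
        {"alertname": "EtcdHighLatency", "severity": "critical"},
    ],
}

summary = aggregate_alerts(hubs)
# One merged entry per distinct alert, with its firing hubs listed.
for (name, severity), sources in sorted(summary.items()):
    print(f"{name} [{severity}]: {', '.join(sorted(sources))}")
```

In a real deployment, the per-hub alert lists would come from each hub’s Prometheus or Thanos endpoints rather than an in-memory dictionary.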
Creating a common look and feel for cross-cluster network observability is also a priority for ACS, said Kirsten Newcomer, director of cloud and DevSecOps strategy at Red Hat, during the same roadmap presentation.
But Rackow said he’s concerned about a recent loss of observability talent from Red Hat’s engineering staff.
“There have been some tough losses in terms of engineering capabilities, be it at a very high level with someone like [former Red Hat hybrid cloud CTO] Clayton Coleman going to Google [in 2022] or a little further down with [several members of the] monitoring/observability team leaving Red Hat completely or switching teams,” he said.
Senior principal software engineer Frederic Branczyk left Red Hat in 2020 to found continuous profiling company Polar Signals; two other Red Hat senior software engineers with expertise in Prometheus and Thanos, Matthias Loibl and Kemal Akkoyun, followed in 2021. Prometheus core maintainer and former Red Hat principal software engineer Bartlomiej Plotka left for Google in January. Sergiusz Urbaniak, who contributed the Prometheus, Alertmanager and Thanos operators to OpenShift and served as OpenShift monitoring team lead from 2019 to 2021, remains with Red Hat but has since held different roles.
“Those were some of the best engineers in that space — Prometheus, Prometheus Operator, and Thanos creators and maintainers,” Rackow said. “It’s hard to keep the same level of quality for this specific part of OpenShift if a team dissolves like that.”
Red Hat remains confident in its engineering talent, including in observability, said chief product officer Ashesh Badani in an emailed statement to TechTarget Editorial this week.
“We’ve also had many talented engineers return to Red Hat after stints at other companies, from both direct competitors and household consumer tech brands,” Badani wrote. “Coupled with OpenShift just recently hitting $1 billion in annual run rate, we feel like we’re continuing to push the envelope as far as being Kubernetes innovators.”
Beth Pariseau, senior news writer at TechTarget, is an award-winning veteran of IT journalism. She can be reached at [email protected] or on Twitter @PariseauTT.