I enjoyed this sub-conference within VMworld, which is driven by GM of Cloud-Native Applications – Kit Colbert. Great job, Kit.
In October, I had the pleasure of introducing Illumio as it officially came out of stealth mode. Well, it’s certainly stealthy no more – what an amazing 5 months it has been, with substantial progress on several fronts! On the eve of the RSA security conference, I thought I’d take a few minutes to highlight where we are now and why I believe it’s significant.
First and foremost, the customer adoption of Illumio’s Adaptive Security Platform (ASP) has exceeded even our most optimistic goals, and it’s now providing fine-grained visibility and continuous protection to thousands of workloads across a wonderful mix of deployment types. The workloads run in both public and private clouds, and the Illumio solution is deployed as either SaaS or on-premises software. A modern security solution must enable these choices, and the hard work to enable them is paying off. And beyond providing this choice, the control afforded to Illumio-protected environments is recognized as critical. Our computing environments are more dynamic and more distributed than ever before, and IT departments require protection that embraces this distribution without slowing anything down. You’ll hear more and more exciting customer announcements in the coming weeks.
Secondly, the Illumio team has increased in size and talent, and is pushing out strong new capabilities through its bi-weekly sprints. Today we’re excited to announce a continuation of our mission to protect any workload wherever it may be running. The existing product provides protection to workloads running in virtual machines, on bare-metal, or in Amazon Web Services (AWS). As of today, it can also continuously deliver security policies and enforcement down to the process level within workloads. This is cool! Today’s state of the art is micro-segmentation, whereby each individual machine can effectively be given its own network segment. Illumio ASP has taken this to the next level (let’s call this “nano-segmentation”), isolating multiple processes or applications on a single host running bare-metal or virtualized. If any part of an application changes (such as scaling up with new web servers), Illumio ASP automatically adapts security policies on all impacted workloads or processes.
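To make the adaptive-policy idea concrete, here is a minimal sketch in Python. This is not Illumio’s implementation – the policy format, roles, and addresses are all invented for illustration – but it shows the core trick: declare policy once in terms of roles, and recompute the concrete per-workload allow-rules whenever the inventory changes (such as a new web server appearing).

```python
# Illustrative sketch only: role-level policy is declared once;
# concrete allow-rules are recomputed whenever workloads change.

POLICY = [
    # (source role, destination role, port)
    ("web", "app", 8080),
    ("app", "db", 5432),
]

def compute_rules(workloads):
    """Expand the role-level policy into per-workload allow-rules."""
    rules = []
    for src_role, dst_role, port in POLICY:
        for src in workloads:
            for dst in workloads:
                if src["role"] == src_role and dst["role"] == dst_role:
                    rules.append((src["ip"], dst["ip"], port))
    return rules

workloads = [
    {"ip": "10.0.0.1", "role": "web"},
    {"ip": "10.0.0.2", "role": "app"},
    {"ip": "10.0.0.3", "role": "db"},
]
print(len(compute_rules(workloads)))  # 2 rules: web->app, app->db

# Scale up: a new web server appears; the rules adapt automatically,
# with no human editing IP-based firewall entries.
workloads.append({"ip": "10.0.0.4", "role": "web"})
print(len(compute_rules(workloads)))  # 3 rules
```

The point of the sketch is that no rule ever names an IP address directly, so scaling up (or moving a workload) only changes the inventory, never the policy.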
Thirdly, there has always been a need to dynamically enforce network policy across a broader swath of the datacenter. Today Illumio is extending its adaptive policy enforcement to existing hardware and software solutions, announcing a partnership with F5. With this partnership, Illumio ASP now speaks to the F5 BIG-IP product family, integrating the industry-leading traffic management system as an additional enforcement point behind the perimeter. Stay tuned for more ecosystem-related announcements in the coming weeks.
And last but not least, there is growing recognition of just how important this new adaptive security approach is to the modern IT organization. Today we’re happily announcing that Illumio has raised an additional $100 million from outstanding investors to accelerate the delivery of new capabilities, new partnerships, and new levels of customer enablement.
Whew. It’s clearly been a very exciting 5 months in the public eye, and here’s to the excitement that the next 5 months will surely bring!
As CTO at VMware, I witnessed major changes to almost all aspects of IT – apps, compute, and networking. However, one critical aspect of IT has fallen farther and farther behind – security. As a technology investor at General Catalyst, I have made attacking this disparity my top focus. As such, I’m incredibly pleased to share news of the public arrival of Illumio and its first products.
What problem does Illumio address?
So many of the recent infrastructure advances have been driven by the need for speed. IT teams are constantly asked to move faster—to be able to respond to changes and push new applications out quicker than ever. But they’re also held accountable for security and governance. And with the nearly daily drumbeat of highly visible security breaches, the latter has become a top priority and even a board-level discussion.
Our industry has delivered outstanding new technologies – public clouds, containers, and virtualization for example – with the promise of lowering costs and increasing agility. But it can be a challenge to securely adopt these technologies. Today’s security approach remains strongly tied to legacy network infrastructure and to enforcing policies at the perimeter of a datacenter. This perimeter is dead — mobile devices wounded it and the cloud finished it off.
Even within a single datacenter, today’s infrastructure-centric security can’t keep up. It was designed for relatively static environments, while today’s data centers are far more dynamic and distributed. In an attempt to keep up, security teams must, at best, slow down the rest of IT. In the worst (and not uncommon) case, they end up omitting potential protections or misconfiguring a spaghetti bowl of legacy rules and disconnected security implementations.
I constantly see this challenge—everyone knows that there’s this bright computing future out there, but we have to find a way to secure it. What’s more, we’d like to secure it in a simple and consistent way across all deployment destinations. These challenges are the focus of Illumio, a previously stealth-mode company that I’m ecstatic to be involved with. Illumio unveiled today the first-ever software platform that provides granular visibility and security for all data center and cloud computing environments.
What is Illumio’s solution?
Illumio has taken a clean-sheet design to security with a very ambitious goal – provide outstanding, easy-to-manage security that moves at the speed of the cloud and applies consistently across today’s and tomorrow’s IT environments. The result of the multi-year effort is Illumio’s Adaptive Security Platform, which provides visibility, security, and encryption for applications, free from dependencies on the network and designed for today’s highly dynamic world.
The solution consists of two primary components:
- The first is called the Virtual Enforcement Node (VEN). This is a lightweight piece of software that lives with each workload. Its job is to provide visibility and then to enforce protection.
- The protection instructions come from the Policy Compute Engine (PCE), which constantly analyzes all the relationships between different applications and different nodes, dynamically calculating the security policy and pushing it out to wherever the workload currently resides. What’s more, these security policies are written in natural language rather than the fragile infrastructure-centric languages of today’s tools.
These components work together to create a protective bubble that surrounds an application, moving with it whenever and wherever it runs – whether on bare-metal or virtualization in a private datacenter or in public clouds provided by Amazon, Google, or Microsoft.
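The two-component model above can be sketched in a few lines of Python. Everything here is hypothetical – the class names, registration flow, and rule format are invented for illustration, not Illumio’s actual API – but it shows the division of labor: a central engine (PCE) that recomputes policy from role-level statements, and lightweight enforcement nodes (VENs) that receive and apply rules wherever their workload lives.

```python
# Hypothetical sketch of the PCE/VEN split described above.

class VEN:
    """Lightweight enforcement node living with one workload."""
    def __init__(self, workload_id):
        self.workload_id = workload_id
        self.rules = []

    def apply(self, rules):
        self.rules = rules  # in reality: program the host's firewall

    def allows(self, peer, port):
        return (peer, port) in self.rules

class PCE:
    """Central engine: recomputes and pushes policy on every change."""
    def __init__(self, policy):
        self.policy = policy  # role-level statements, not IP rules
        self.vens = {}        # workload_id -> (role, VEN)

    def register(self, workload_id, role):
        ven = VEN(workload_id)
        self.vens[workload_id] = (role, ven)
        self._push()  # any change triggers a full recompute
        return ven

    def _push(self):
        for wid, (role, ven) in self.vens.items():
            rules = [(peer, port)
                     for peer, (peer_role, _) in self.vens.items()
                     for src_role, dst_role, port in self.policy
                     if role == dst_role and peer_role == src_role]
            ven.apply(rules)

# "Web servers may talk to the database on 5432" – expressed as
# roles, never as IP addresses or subnets.
pce = PCE(policy=[("web", "db", 5432)])
web = pce.register("web-1", "web")
db = pce.register("db-1", "db")
```

After registration, `db.allows("web-1", 5432)` is true while any other flow is denied; because the policy is written in role terms, the same statement keeps working when a workload is re-registered from a new location.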
In addition to this protection, Illumio provides granular visibility into application composition and behavior. The company name itself highlights the fantastic “Illumination” that IT receives when it sees exactly how components of the application are talking to one another and to the outside world. The look I’ve seen on customers’ faces when they get their first glimpse of what’s truly going on in their environment is priceless.
Where do we go from here?
Today’s launch is substantial from a product and technology standpoint. Just as exciting is the great list of customers who have been actively involved with Illumio in the product design and implementation and who are actively using the product today. We’re seeing excitement and adoption across a variety of company sizes and industry verticals – a testament to just how critical of a problem this is to IT.
I’m very excited to be part of today’s Illumio launch and to support the company on their “IT Illumination” journey. Illumio has taken an aggressive clean-sheet design to security, unshackling it from static infrastructure and from the fallen perimeter. I believe the end result will be security that is the enabler – not the roadblock – to safer and more agile IT.
Congratulations to the entire Illumio team, and here’s to an outstanding launch!
I just caught up with an old friend and walked through what I’ve been up to in the (many) years since I departed Texas. This friend isn’t much of a techie, so I had to take a higher-level look at the various companies and projects I’ve worked on over the last [number redacted] years.
Halfway through the list I realized that almost every project revolved around some form of virtualization. And not just the “virtual machine” version of this term, but the more general English definition of “separating out a logical view from its physical implementation”. The list runs something like:
- MPEG hardware: My undergraduate research thesis focused on building hardware to accelerate the decoding of video streams (yes, the early MPEG-1 days!). The data stream always had to stay the same; it was up to the hardware to convert it into useful video more efficiently.
- Ada compiler: I also spent a summer at Convex Computers (now part of HP) working on a compiler that could unroll loops and optimize unmodified Ada code to utilize the company’s vector hardware. It sure would have been simpler if we were able to add hints to the source code, but that wasn’t allowed. The challenger (against Cray in this case) often doesn’t have the luxury of asking for changes specifically on their behalf.
- SimOS: My dissertation focused on a complete machine simulator capable of running an unmodified IRIX operating system and its binaries. This was a major pain to get right (and fast), but allowed us to study real-life applications and get previously unseen visibility into system performance.
- MIPS R10000: At the tail end of graduate school, I worked at SGI in the MIPS architecture group to help design their newest processor. While MIPS has one of the simplest instruction sets, backward compatibility was still a pain that restricted several possible optimizations.
- VMware: Don’t need to say much more here. Whether for servers, storage, networking, or desktops, the engineering obsession was always about allowing completely unmodified applications to work seamlessly in a more agile, portable, and efficient environment. Early attempts to simplify this challenge (paravirtualization, for example) sure sounded nice, but we knew that they created a barrier to adoption that would be hard to swallow early on.
- Recent Investments: And my early investments at General Catalyst have all focused on this as well. The two that most exemplify this passion are still in stealth mode, so stay tuned for a proper unveiling. Both of them work with existing workloads and user behavior, quietly doing things behind the scenes for dramatic improvements.
I meet so many startups that offer IT Nirvana if you just ignore existing hardware and software. At the end of the day, the requirement of working with existing applications, code, or environments is a pain. It’s always easier to have a completely “greenfield” and no compatibility requirements… which reminds me of this quotation of unknown origin:
“God created the world in seven days — because he had no legacy infrastructure”
But today’s businesses do have legacy infrastructure and a slew of existing applications, processes, and user behaviors. While always keeping an eye out for great clean-slate solutions, I suspect I’ll continually come back to those that also try to fit in!
Just a little retrospective navel-gazing for a sunny Tuesday…
I was talking with some startup folks last week and heard one of them ask “why doesn’t someone track and publish how all of the other web companies build their sites?”. I assumed this site was pretty well-known, but in case it isn’t, check it out: http://trends.builtwith.com/
Pretty nice way to track all sorts of interesting tool usage including:
They also break them down by different cohorts… such as YCombinator classes:
The above sort of data is free. They have a pro version with more reporting, lead generation, etc.
I have no ties to the site… I’ve just used it a lot in the past and hope it’s helpful to others.
Today I’m happy to announce our investment in Runscope, a developer-centric, API-focused company based in San Francisco. Co-founded by CEO John Sheehan (Twilio, IFTTT) and Frank Stratton (Twilio), Runscope creates tools that help app developers test, debug, support, and maintain their integrations with public and private APIs.
As first discussed in the “Time for Mobile First Infrastructure” blog, formal APIs are sprouting up everywhere. They are already the backbone of the cloud economy, and are increasingly marching into inter- and intra-enterprise use. In many enterprises that I speak with, formal APIs are often first launched to enable a company’s own mobile applications. From there they evolve to be the core plumbing for the web or thick-client versions of these apps. And the next step is often publishing the APIs for external use, enabling new sources of revenue, better customer support, or a previously non-existent partner ecosystem.
However, they also can be a challenge to work with, maintain, and support. That’s where Runscope comes in! This team knows developers as well as any team that I’ve met, and they’ve spent much of their lives helping companies deal with the challenges of APIs. As a result, the early feedback on their Runscope Radar, API Traffic Inspector, and Passageway tools often looks like this:
They are also supporters of several popular community projects (including hurl.it, which I personally love to kick around). And you can certainly imagine why I’m excited about their announcement today of Runscope Enterprise, extending these great capabilities behind the firewall.
To learn even more about Runscope and why I’m so excited about them, please read John’s post. So here’s to Runscope and their efforts to help developers in this brave new world of APIs. Or as Runscope proudly proclaims on their famous T-shirts:
Staying in Sync: The majority of enterprise mobile applications are required to keep data consistent across multiple instances. This includes synchronization between users collaborating on some project, between a user’s online and offline document stores, and between a company’s master data sources and the version available on a user’s mobile device. We see this capability in several SaaS/mobile offerings (Box, Dropbox, Google Docs, Quip) and it’s a core offering in many Mobile Backend-as-a-Service (MBaaS) offerings (e.g. Parse, StackMob, FeedHenry, and many others). I’d claim that mobile alerting and notification systems are a very specific instance of this general synchronization trend. And while these synchronization services are widely deployed in the consumer world, they must evolve to support the needs of the enterprise. This includes:
- integration with enterprise identity management solutions (individual- and group-based policies)
- fine-grained data control policies (what data can and can’t move to the mobile device, who can share with whom)
- auditing reports (tell me what data was accessed in certain places and by certain people)
- other data security offerings (data leakage prevention, encryption policies)
Lots of work to do, but it’s clear that enterprise-class synchronization capabilities will be a core capability of the mobile-first infrastructure headed our way.
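As a thought experiment, the enterprise-grade checks listed above can be layered on top of a basic sync service roughly like this. The sketch below is entirely hypothetical – the group names, classifications, and function are invented – but it illustrates two of the bullets at once: fine-grained, group-based data control (what may leave for a mobile device) and an audit trail of every decision.

```python
# Hypothetical sketch: group-based sync policy plus auditing,
# layered on top of a consumer-style sync service.

AUDIT_LOG = []

GROUP_POLICY = {
    # group -> data classifications allowed on mobile devices
    "engineering": {"public", "internal"},
    "finance": {"public", "internal", "confidential"},
}

def can_sync(user, group, classification):
    """Fine-grained control: may this document sync to a device?"""
    allowed = classification in GROUP_POLICY.get(group, {"public"})
    # Auditing: record every access decision, allowed or denied.
    AUDIT_LOG.append((user, group, classification, allowed))
    return allowed

print(can_sync("alice", "engineering", "internal"))      # True
print(can_sync("alice", "engineering", "confidential"))  # False
```

A real implementation would plug the group lookup into the enterprise identity system (the first bullet above) and add encryption and leakage-prevention hooks, but the shape – policy check plus audit record on every sync decision – stays the same.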
P.S. Kudos to Bret for calling out how we are having to return to many of the lessons taught in computer science departments. To summarize his argument, we have had 5-10 web-centric years where so many developers treated the always-on, high speed internet as the norm. Mobile devices have required today’s developers to dust off those lessons about coping with highly variable network speeds as well as times when the app is completely offline (gasp!).