Microsoft is poised to bring its market-leading cloud technology to private and hosted clouds with the upcoming Windows Server 2016. While the server product with all the planned additions is not available yet (a test build is due out later this summer), the company has disclosed a great deal about its cloud technologies over the last two months. Microsoft is also adding virtualized containers and a Nano Server mode to Windows Server 2016 in order to serve the next-generation cloud market in a highly competitive way. System Center 2016 will also be an important part of that effort, among others. See the whole Cloud Platform roadmap public preview site, which first went live on Jan 30, 2015 and has since offered a continuously updated “snapshot of what Microsoft is working on in the Cloud Platform business”.
I’ve been tracking this all along since then (but not posting about it here), and I’ve now come to the conclusion that this will revolutionize the whole cloud solution market, particularly the segment targeting network operators and telcos, who are in great need of proven solutions for their “Networked Society” efforts.
July 13, 2015: Partners for the journey to build the Intelligent Cloud by Scott Guthrie, EVP of Microsoft Cloud and Enterprise. Additional information:
– July 13, 2015: Announcing Cortana Analytics Suite and New Partner Investments at WPC 2015 by Takeshi Numoto, Microsoft CVP, Cloud + Enterprise, on Microsoft Azure Blog
Additional information about the 3 partners participating in Scott Guthrie’s keynote:
– July 13, 2015: Rackspace Unveils Fanatical Support for Microsoft Azure news release by the company
– July 13, 2015: Try DataStax Enterprise on the Microsoft Azure cloud for FREE! by DataStax
– July 15, 2015: Announcing new and improved DataStax Cluster deployment experience on Azure Marketplace By Senior Program Manager, Microsoft Azure
– July 13, 2015: VMob CEO Shares Partner Success Story at Microsoft WPC Vision Keynote 2015 press release by the company
July 13, 2015: Product innovation and channel evolution by John Case: He discusses the momentum around product innovation and channel evolution, and emphasizes the role of partners in the announcements made around the CSP (Cloud Solution Provider) program and Office 365.
Beginning today, Microsoft will expand CSP to additional markets, bringing the total number of markets in which CSP is available to 131. Additionally, Azure and CRM Online will join Office 365, Windows Intune and Enterprise Mobility Suite (EMS) as available services in the CSP.
Today we’ll share that we are introducing a new premium Office 365 enterprise suite called E5 before the end of this year. E5 will encompass the core value of Office 365 productivity and collaboration capabilities, as well as significant new innovations, including new Skype for Business services such as Cloud PBX and Meeting Broadcast; Power BI & analytics features, like Power BI Pro and Delve Organizational Analytics; and new advanced security features, such as eDiscovery, Customer Lockbox, and Advanced Threat Protection. The E5 suite will provide a significant new opportunity for partners to build new service offerings around real-time communication and analytics, and to reach new customers with important new security features.
The Business Development Side:
1. Microsoft CEO Satya Nadella Soundbites
July 13, 2015: Microsoft 2015 Worldwide Partner Conference Day 1: New technology innovations were showcased, including Project GigJam and the Cortana Analytics Suite, demonstrating Microsoft’s ambition to reinvent productivity and business processes, build the intelligent cloud and create more personal computing. There was another Windows Holographic demo showing universal apps running on the HoloLens.
The Business Execution (i.e. sales and marketing) side:
We have a growing opportunity!
July 15, 2015: Achieving more together by Kevin Turner, Microsoft COO: “The cloud is where the real growth is.” … “We have market momentum,” Turner continued, citing a survey done this past June. “Forty-six percent said such spending will increase with Microsoft.” … “Building the intelligent cloud, our strong foundation: …”
I. Hyper-scale Azure with host SDN*:
*Software Defined Networking
In order to understand the true impact of the upcoming Microsoft Cloud Platform, I will quote here from the Microsoft Gives Software Networking a Hardware Boost article of June 30 on LightReading, a leading community-driven media site for the communications industry:
To achieve scale, Microsoft had to use “hyperscale SDN,” breaking away from a proprietary appliance combining management, control and data plane and separating those functions. Now, the management plane exposes APIs, the control plane uses the APIs to create rules, and then passes those rules to switches.
The company has developed its own SmartNIC to offload networking processing loads from hosts, which can dedicate their processing power to application workloads.
“With this, we can offload functionality into the device, saving CPU,” says Mark Russinovich, Microsoft Azure CTO, speaking at the Open Networking Summit this month.
The SmartNIC improves performance to 30 Gbit/s on host networks using 40 Gbit/s NICs.
The SmartNIC uses a Field-Programmable Gate Array (FPGA) for reconfigurable functions. Microsoft already uses FPGAs in Catapult, an FPGA-based reconfigurable fabric for large-scale data centers developed jointly by Microsoft Research and Bing. Using programmable hardware allows Microsoft to update equipment as it does software.
Microsoft programs the hardware using Generic Flow Tables (GFT), which is the language used for programming SDN.
The SmartNIC can also do crypto, QoS, storage acceleration and more, says Russinovich.
But can we truly call it software-defined networking if it includes a custom hardware component? Does it matter? Microsoft has found a solution that works for it, and that other network operators might want to emulate.
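The plane separation described above—a management plane exposing APIs, a control plane compiling policy into rules, and a data plane (the switch) that merely applies them—can be sketched as a toy match-action model. All class and field names here are hypothetical illustrations, not Microsoft's actual GFT interfaces:

```python
class FlowSwitch:
    """Data plane: applies pre-compiled match-action rules to packets."""
    def __init__(self):
        self.rules = []  # list of (match_dict, action) pairs

    def install_rule(self, match, action):
        self.rules.append((match, action))

    def process(self, packet):
        for match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "drop"  # default action when no rule matches


class Controller:
    """Control plane: turns high-level policy into rules on switches."""
    def __init__(self, switches):
        self.switches = switches

    def allow_tenant(self, tenant_vnet, dst_port):
        for sw in self.switches:
            sw.install_rule({"vnet": tenant_vnet, "dst_port": dst_port},
                            "forward")


# Management plane: one API call ends up as rules on every switch.
sw = FlowSwitch()
ctrl = Controller([sw])
ctrl.allow_tenant(tenant_vnet="vnet-42", dst_port=443)

print(sw.process({"vnet": "vnet-42", "dst_port": 443}))  # forward
print(sw.process({"vnet": "vnet-7", "dst_port": 443}))   # drop
```

The point of the separation is visible even in the toy: the switch knows nothing about tenants or policy, so the data plane can be scaled or offloaded to hardware independently of the control logic.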
In addition to the following information, which represents the “State of the Stack v4” level of OpenStack technology development as of June 25, 2015, see my related, but more specific, update post:
– OpenStack adoption (by Q1 2016), ‘Experiencing the Cloud’, June 7, 2016
To understand the real significance of that statement by the author of the communications-industry article, we should briefly characterize the state of the art of the cloud technology that has been the focus of network operators and telcos so far. This is OpenStack, the only available open-source technology primarily deployed as an IaaS solution, upon which the rest of the cloud technologies are to be built. Its promise has been huge, but “OpenStack is heading to the Trough of Disillusionment on the Technology Adoption Curve”, as Randy Bias characterized it in his State of the Stack v4 address to the attendees of the OpenStack Summit held on May 20, 2015. For the continuation of my appraisal (highly recommended) of the OpenStack state of the art, go to my homepage here, scroll down to the above image, and read the information following it.
My conclusion at the end of that appraisal was that most of the network- and telecommunications-oriented contributions to the code will come in future OpenStack releases. My personal guess there was that about two more years will be needed for “telco/carrier grade” hardening of the OpenStack code, together with the necessary enhancements in functionality (see the May 12, 2014 “OpenStack as the Key Engine of NFV” story by Ericsson indicated in the appraisal earlier).
This represents a huge window of opportunity for the 2016 wave of Microsoft Cloud Platform products (Windows Server 2016 et al.) to penetrate the market of network operators and telcos, which is crucial for Microsoft’s survival. I will cover the upcoming Microsoft moves in that direction in future posts; here I just want to point it out.
II. IaaS 2.0 (INTRODUCTION):
Azure offers you great cloud solutions, built on virtual machines—based on the emulation of physical computer hardware—to enable agile movement of software deployments and dramatically better resource consolidation than physical hardware. In the past few years, largely thanks to the Docker approach to containers and the docker ecosystem, Linux container technology has dramatically expanded the ways you can develop and manage distributed software. Application code in a container is isolated from the host Azure VM as well as other containers on the same VM, which gives you more development and deployment agility at the application level—in addition to the agility that Azure VMs already give you.
But that’s old news. The new news is that Azure offers you even more Docker goodness:
- Many different ways to create Docker hosts for containers to suit your situation
- Azure Resource Manager and resource group templates to simplify deploying and updating complex distributed applications
- Integration with a large array of both proprietary and open-source configuration management tools
And because you can programmatically create VMs and Linux containers on Azure, you can also use VM and container orchestration tools to create groups of Virtual Machines (VMs) and to deploy applications inside both Linux containers and soon Windows Server Containers.
This article not only discusses these concepts at a high level, it also contains tons of links to more information, tutorials, and products related to container and cluster usage on Azure. If you know all this, and just want the links, they’re right here.
Virtual machines run inside an isolated hardware virtualization environment provided by a hypervisor. In Azure, the Virtual Machines service handles all that for you: You just create Virtual Machines by choosing the operating system and configuring it to run the way you want—or by uploading your own custom VM image. Virtual Machines are a time-tested, “battle-hardened” technology, and there are many tools available to manage operating systems and to configure the applications you install and run. Anything running in a virtual machine is hidden from the host operating system and, from the point of view of an application or user running inside a virtual machine, the virtual machine appears to be an autonomous physical computer.
Linux containers—which include those created and hosted using docker tools, though there are other approaches—do not require or use a hypervisor to provide isolation. Instead, the container host uses the process and file system isolation features of the Linux kernel to expose to the container (and its application) only certain kernel features and its own isolated file system (at a minimum). From the point of view of an application running inside a container, the container appears to be a unique operating system instance. A contained application cannot see processes or any other resources outside of its container.
Because in this isolation and execution model the kernel of the Docker host computer is shared, and because the disk requirements of the container now do not include an entire operating system, both the start-up time of the container and the required disk storage overhead are much, much smaller.
It’s pretty cool.
Windows Server Containers provide the same advantages as Linux containers for applications that run on Windows. Windows Server Containers support the docker image format and the docker API. As a result, an application using Windows Server Containers can be developed, published, retrieved, and deployed using similar commands to those on Mac and Linux. That’s in addition to having new docker support in Microsoft Visual Studio. The larger container ecosystem will give everyone tools to do the work they need to do with containers.
That’s pretty cool, too.
Well, yes—and no. Containers, like any other technology, do not magically wipe away all the hard work required by distributed applications. Yet at the same time, containers really do change:
- how fast application code can be developed and shared widely
- how fast and with what confidence it can be tested
- how fast and with what confidence it can be deployed
That said, remember that containers execute on a container host—an operating system—and in Azure that means an Azure Virtual Machine. Even if you already love the idea of containers, you’re still going to need a VM infrastructure to host them; the benefit is that containers do not care which VM they are running on (although whether the container wants a Linux or Windows execution environment certainly matters, for example).
They’re great for many things, but they encourage—as do Azure Cloud Services and Azure Service Fabric—the creation of single-service, microservice-oriented distributed applications, in which application design is based on many small, composable parts rather than on larger, more strongly coupled components.
This is especially true in public cloud environments like Azure, in which you rent VMs when and where you want them. Not only do you get isolation and rapid deployment and orchestration tools, but you can make more efficient application infrastructure decisions.
For example, you might currently have a deployment consisting of nine large Azure VMs for a highly available, distributed application. If the components of this application can be deployed in containers, you might be able to use only four VMs and deploy your application components inside 20 containers for redundancy and load balancing.
This is just an example, of course, but if you can do this in your scenario, you can adjust to usage spikes with more containers rather than more Azure VMs, and use the overall CPU capacity much more efficiently than before.
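The consolidation example above can be turned into back-of-the-envelope arithmetic. The per-instance disk overhead figures below are illustrative assumptions, not measurements:

```python
def vm_only_footprint(n_vms, os_disk_gb=30, app_disk_gb=2):
    # Every VM carries a full OS image plus the application.
    return n_vms * (os_disk_gb + app_disk_gb)

def containerized_footprint(n_host_vms, n_containers,
                            os_disk_gb=30, app_disk_gb=2):
    # Only the host VMs carry an OS; each container adds just the
    # application layers on top of the shared kernel and base image.
    return n_host_vms * os_disk_gb + n_containers * app_disk_gb

before = vm_only_footprint(9)            # 9 * (30 + 2) = 288 GB
after = containerized_footprint(4, 20)   # 4 * 30 + 20 * 2 = 160 GB
print(before, after)
```

Even with more application instances running (20 containers instead of 9 VM-hosted copies), the disk footprint drops, and scaling for a spike means starting another container in seconds rather than booting another VM.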
In addition, there are many scenarios that do not lend themselves to a microservices approach; you will know best whether microservices and containers will help you.
In general, it’s easy to see that container technology is a step forward, but there are more specific benefits as well. Let’s take the example of Docker containers. This topic will not dive deeply into Docker right now (read What is Docker? for that story, or Wikipedia), but Docker and its ecosystem offer tremendous benefits to both developers and IT professionals.
Developers take to Docker containers quickly, because above all it makes using Linux containers easy:
- They can use simple, incremental commands to create a fixed image that is easy to deploy and can automate building those images using a dockerfile
- They can share those images easily using simple, git-style push and pull commands to public or private docker registries
- They can think of isolated application components instead of computers
- They can use a large number of tools that understand docker containers and different base images
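The first two bullets—incremental image builds and git-style push/pull—can be sketched as a toy content-addressed layer model. This is an illustration of the idea only, not Docker's real on-disk image format:

```python
import hashlib

def layer_id(parent_id, instruction):
    """Each build step yields an immutable, content-addressed layer."""
    return hashlib.sha256((parent_id + instruction).encode()).hexdigest()[:12]

def build(instructions):
    """Build an image as a chain of layer ids from dockerfile-like steps."""
    layers, parent = [], ""
    for step in instructions:
        parent = layer_id(parent, step)
        layers.append(parent)
    return layers

registry = {}  # layer id -> stored flag (stands in for layer content)

def push(image_layers):
    """Transfer only the layers the registry does not already have."""
    missing = [l for l in image_layers if l not in registry]
    registry.update({l: True for l in missing})
    return len(missing)  # number of layers actually transferred

base = build(["FROM ubuntu:14.04", "RUN apt-get install -y python"])
app = build(["FROM ubuntu:14.04", "RUN apt-get install -y python",
             "COPY app.py /app/"])

print(push(base))  # 2: both layers transferred
print(push(app))   # 1: shared base layers are already in the registry
```

Because identical build steps yield identical layer ids, pushing the application image after the base image moves only the one new layer—which is why iterating on a dockerfile and sharing the result is so fast.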
IT and operations professionals also benefit from the combination of containers and virtual machines.
- Contained services are isolated from the VM host’s execution environment
- Contained code is verifiably identical
- Contained services can be started, stopped, and moved quickly between development, test, and production environments
Features like these—and there are more—excite established businesses, where professional information technology organizations have the job of fitting resources—including pure processing power—to the tasks required to not only stay in business, but increase customer satisfaction and reach. Small businesses, ISVs, and startups have exactly the same requirement, but they might describe it differently.
Virtual machines provide the backbone of cloud computing, and that doesn’t change. While virtual machines start more slowly, have a larger disk footprint, and do not map directly to a microservices architecture, they do have very important benefits:
- By default, they provide much more robust security protections for the host computer
- They support any major OS and application configurations
- They have longstanding tool ecosystems for command and control
- They provide the execution environment to host containers
The last item is important, because a contained application still requires a specific operating system and CPU type, depending upon the calls the application will make. It’s important to remember that you install containers on VMs because they contain the applications you want to deploy; containers are not replacements for VMs or operating systems.
The following table describes at a very high level the kind of feature differences that—without much extra work—exist between VMs and Linux containers. Note that some features may be more or less desirable depending upon your own application needs, and that, as with all software, extra work provides increased feature support, especially in the area of security.
| Feature | VMs | Containers |
| --- | --- | --- |
| “Default” security support | To a greater degree | To a slightly lesser degree |
| Memory on disk required | Complete OS plus apps | App requirements only |
| Time taken to start up | Substantially longer: boot of OS plus app loading | Substantially shorter: only apps need to start because kernel is already running |
| Portability | Portable with proper preparation | Portable within image format; typically smaller |
| Image automation | Varies widely depending on OS and apps | Docker registry; others |
At this point, any architect, developer, or IT operations specialist might be thinking, “I can automate ALL of this; this really IS Data-Center-As-A-Service!”.
You’re right, it can be, and there are any number of systems, many of which you may already use, that can either manage groups of Azure VMs and inject custom code using scripts, often with the Custom Script Extension for Windows or the Custom Script Extension for Linux. You can—and perhaps already have—automated your Azure deployments using PowerShell or Azure CLI scripts like this.
More recently, Azure released the Azure resource management REST API, and updated the PowerShell and Azure CLI tools to use it easily. You can deploy, modify, or redeploy entire application topologies with Azure Resource Manager templates via the Azure resource management API, using:
- the Azure preview portal using templates—hint, use the “DeployToAzure” button
- the Azure CLI
- the Azure PowerShell modules
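A Resource Manager template is at bottom a JSON document describing a group of resources and their parameters. The sketch below builds a minimal one in Python just to show the shape; the storage-account resource and its apiVersion are illustrative choices, and real templates are normally authored directly as JSON files:

```python
import json

# Minimal ARM-style template: schema header, one parameter, one resource.
template = {
    "$schema": "https://schema.management.azure.com/schemas/"
               "2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "storageName": {"type": "string"}
    },
    "resources": [
        {
            # An illustrative resource; a real template can declare an
            # entire application topology (VMs, NICs, load balancers...)
            # and deploy it as one unit into a resource group.
            "type": "Microsoft.Storage/storageAccounts",
            "name": "[parameters('storageName')]",
            "apiVersion": "2015-06-15",
            "location": "[resourceGroup().location]",
            "properties": {"accountType": "Standard_LRS"}
        }
    ]
}

# Serialize to the JSON you would hand to the portal, CLI, or PowerShell.
rendered = json.dumps(template, indent=2)
print(rendered.splitlines()[0])
```

Because the whole topology lives in one declarative document, redeploying or updating an application is a matter of re-submitting the template rather than re-running imperative scripts.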
There are several popular systems that can deploy entire groups of VMs and install Docker (or other Linux container host systems) on them as an automatable group. For direct links, see the containers and tools section, below. There are several systems that do this to a greater or lesser extent, and this list is not exhaustive. Depending upon your skill set and scenarios, they may or may not be useful.
Docker has its own set of VM-creation tools (docker-machine) and a load-balancing, docker-container cluster management tool (swarm). In addition, the Azure Docker VM Extension comes with default support for docker-compose, which can deploy configured applications across multiple containers.
In addition, you can try out Mesosphere’s Data Center Operating System (DCOS). DCOS is based on the open-source mesos “distributed systems kernel” that enables you to treat your datacenter as one addressable service. DCOS has built-in packages for several important systems such as Spark and Kafka (and others) as well as built-in services such as Marathon (a container control system) and Chronos (a distributed scheduler). Mesos was derived from lessons learned at Twitter, Airbnb, and other web-scale businesses.
Also, Kubernetes is an open-source system for VM and container group management derived from lessons learned at Google. You can even use Kubernetes with Weave to provide networking support.
Deis is an open source “Platform-as-a-Service” (PaaS) that makes it easy to deploy and manage applications on your own servers. Deis builds upon Docker and CoreOS to provide a lightweight PaaS with a Heroku-inspired workflow. You can easily create a 3-Node Azure VM group and install Deis on Azure and then install a Hello World Go application.
Ubuntu, another very popular Linux distribution, supports Docker very well, but also supports LXC-style Linux containers.
Working with containers and Azure VMs requires tools. This section lists only some of the most useful or important concepts and tools for containers, groups, and the larger configuration and orchestration systems used with them.
This area is changing amazingly rapidly, and while we will do our best to keep this topic and its links up to date, it might well be an impossible task. Make sure you search on interesting subjects to keep up to date!
Some Linux container technologies:
Windows Server Container links:
Visual Studio Docker links:
Docker on Microsoft Azure:
- Docker VM Extension for Linux on Azure
- Azure Docker VM Extension User Guide
- Using the Docker VM Extension from the Azure Command-line Interface (Azure CLI)
- Using the Docker VM Extension from the Azure Preview Portal
- Getting Started Quickly with Docker in the Azure Marketplace
- How to use docker-machine on Azure
- How to use docker with swarm on Azure
- Get Started with Docker and Compose on Azure
- Using an Azure resource group template to create a Docker host on Azure quickly
- The built-in support for docker-compose for contained applications
- Implement a Docker private registry on Azure
Linux distributions and Azure examples:
Configuration, cluster management, and container orchestration:
- Fleet on CoreOS
- Jenkins and Hudson
- Azure Automation
- PowerShell DSC for Linux
III. Hybrid flexibility and freedom of the Microsoft Cloud (INTRODUCTION):
I. Hyper-scale Azure with host SDN
Massive, distributed 40GbE network built on commodity hardware
- No hardware per-tenant ACLs
- No hardware NAT
- No hardware VPN / overlay
- No vendor-specific control, management, or data plane
This host-networking approach we’re taking to SDN has enabled us to scale these massive physical networks while still getting the agility we need, providing the abstractions our customers need from their APIs, and scaling out to these kinds of numbers.
- All policy is in software – and everything’s a VM
- Network services deployed like all other services
- Battle-tested solutions in Azure are coming to private cloud with Windows Server 2016
Building SDN for Hyperscale: Learnings
FOR MORE TECHNICAL INFORMATION WATCH THE FOLLOWING VIDEO:
June 17, 2015, Open Networking Summit: Achieving Hyper-Scale with Software Defined Networking By Mark Russinovich, CTO, Microsoft Azure in the Microsoft Azure Blog
Today, I am excited to deliver a keynote talk at the Open Networking Summit, where I’ll be talking about how Microsoft is leveraging software-defined networking to power one of the largest public clouds in the world – Microsoft Azure.
SDN is probably not a new term to you so what is the hype really about? To answer that question we need to take a step back and look at how the datacenter is evolving to meet the growing need for scalability, flexibility and reliability that many IT users need in this mobile-first, cloud-first world. Cloud-native apps and services are creating an unprecedented demand for scale and automation on IT infrastructure. Across the industry, this is driving the move of control systems from hardware devices into software in a trend called Software Defined Datacenter (SDDC), which means empowering customers to virtualize servers, storage and networking to optimize resources and apps with a single click.
With 22 hyper-scale regions around the world, Azure storage and compute usage doubling every six months, and 90,000 new Azure subscriptions a month, Azure has experienced exponential growth. In this environment, we’ve had to learn how to run a software-defined datacenter within our own infrastructure to deliver Azure services to a growing user base. Since the inception of SDDC, we have applied the principles of virtualized, scale-out, partitioned cloud design and central control to everything from the Azure compute plane implementation to cloud storage, and of course, to networking.
Leveraging SDN for Industry-Leading Virtual Networks
We are investing in bringing a cloud design pattern to networking to deliver scalability and flexibility to our customers consuming cloud services both from Azure and within their datacenters. How exactly are we doing this? For starters, we are delivering industry-leading virtual networks (Vnets), which are critical for any public cloud customer. Vnets are built using overlay and Network Functions Virtualization (NFV) technologies implemented in software running on commodity servers, on top of a shared physical network.
By abstracting the software from the hardware layer, we have developed Vnets that are both scalable and agile, but also secure and reliable. Through segmentation of subnets and security groups, traffic flow control with User Defined Routes, and ExpressRoute for private enterprise grade connectivity, we are able to mimic the feel of a physical network with these Vnets.
Elastic Scale through Disaggregating the Network
With the demands on Azure, Vnets must be able to scale up for very large workloads and back down for small workloads. By both separating the control plane and data plane, and centralizing the control plane, we enable networks that can be modified, scaled and programmed quickly. To give a concrete example of the kind of hyper-scale we can achieve in one region, we can scale the data plane to hundreds of thousands of servers by abstracting to hosts.
We use the Azure Virtual Filtering Platform (VFP) in the Hyper-V hosts to enable Azure’s data plane to act as a Hyper-V virtual network switch, enabling us to provide core SDN functionality for Azure networking services. VFP is a programmable switch that exposes an easy-to-program abstract interface to network agents that act on behalf of network controllers like the Vnet controller and our software load balancer controller. By leveraging host components and doing much of the packet processing on each host running in the datacenter, the Azure SDN data plane scales massively—both out across nodes and up from 1 Gbps to 40 Gbps, and growing.
Scaling up to 40 Gbps and beyond requires significant computation for packet processing. To help us scale up without consuming CPU cycles that can otherwise be made available for customer VMs, Microsoft is building network interface controller (NIC) offloads on Azure SmartNICs. With SmartNICs, Microsoft is bringing the flexibility and acceleration of Field Programmable Gate Arrays (FPGAs) into cloud servers. FPGAs have not yet been widely used as compute accelerators in servers, so Microsoft using them to enable rapid scale with the programmability of SDN and the performance of dedicated hardware is unique in the industry.
Network Security and Reliability with Azure Innovation
Security and reliability are paramount for us. On Azure, one of the ways we ensure a reliable, secure network is through partitioning Vnets with Azure Controllers, which are organized as a set of inter-connected services. Each service is partitioned to scale and runs protocols on multiple instances for high availability. A partition manager service is responsible for partitioning the load among these services based on subscriptions, while a gateway manager service routes requests to the appropriate partition by utilizing the partition service.
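The partitioning scheme described above—a partition manager that divides the load by subscription and a gateway manager that routes each request to the owning partition—can be sketched in a few lines. The hashing scheme and names below are illustrative stand-ins, not Azure's actual implementation:

```python
import hashlib

class PartitionManager:
    """Assigns each subscription to one controller partition."""
    def __init__(self, n_partitions):
        self.n_partitions = n_partitions

    def partition_for(self, subscription_id):
        # Deterministic hash so ownership is stable across requests.
        digest = hashlib.sha256(subscription_id.encode()).hexdigest()
        return int(digest, 16) % self.n_partitions


class GatewayManager:
    """Routes requests to the partition that owns the subscription."""
    def __init__(self, manager):
        self.manager = manager

    def route(self, subscription_id, request):
        idx = self.manager.partition_for(subscription_id)
        return (idx, request)


pm = PartitionManager(n_partitions=4)
gw = GatewayManager(pm)

# The same subscription is always served by the same partition, so each
# partition's controller instances can scale and fail over independently.
p1, _ = gw.route("sub-contoso", "create-vnet")
p2, _ = gw.route("sub-contoso", "update-vnet")
print(p1 == p2)  # True
```

Partitioning by subscription keeps each controller's working set bounded: adding capacity means adding partitions, not making any single controller bigger.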
Introduced at //Build, Azure Service Fabric is the platform we used to build our network controllers. With Service Fabric’s microservices-based architectural design, customers can update individual application components on a rolling basis without having to update the entire application—resulting in a more reliable service, faster updates and higher scalability for building mission-critical applications. Service Fabric powers a broad range of Microsoft hyper-scale services like Azure Data Factory, SQL Database, Bing Cortana, and Event Hubs.
Bringing Azure SDN Innovation to Our Customers’ Datacenters
Every day we learn from the hyper-scale deployments of Microsoft Azure. Those learnings enable us to bring new capabilities to your datacenter, functioning at a smaller scale to bring you cloud efficiency and reliability. Our strategy is to adapt the cloud design patterns, points of innovation and structural practices that make Azure a true enterprise grade offering. The capabilities for the on-premises components are the same, and they’re resident in technology currently in production in datacenters across the world.
We first released SDN technology in Windows Server 2012, including network virtualization, and subsequently enhanced this with the release of Windows Server 2012 R2 and System Center 2012 R2. SDN capabilities in Windows Server derive from the foundational networking technologies that underlie Azure. Moving forward, we will continue to enhance SDN capabilities with the release of Windows Server 2016 and Microsoft Azure Stack. New features include a data plane and programmable network controller based on Azure, as well as a load balancer that is proven at Azure scale.
To see more of what’s going on at ONS, check out the recording here.
June 17, 2015: Microsoft Azure Gives SDN a Hardware Assist:
… SmartNIC covers those functions that need a hardware boost, or that Microsoft would just prefer to offload from the CPU — the philosophy being that CPUs are better left running virtual machines to serve Azure customers, Russinovich said.
Encryption is a prime example of the “boost” case: Hardware will always be able to do it faster than software. It’s just a question of whether you need that much firepower. You often don’t. But as 100-Gb/s networking starts to become a reality in the data center, Microsoft is worried — rightfully so — about software’s ability to keep up.
So, the SmartNIC is going to be applied inline — meaning traffic flows through it — for functions including encryption, quality-of-service processing, and storage acceleration. “The sky’s the limit, really, with what we can do with an FPGA given its flexible programming,” Russinovich said.
Separately, Russinovich talked about Microsoft’s tiered system of SDN controllers — the tiering being necessary for controlling regions as large as 500,000 hosts apiece.
A regional controller oversees a region and delegates work to cluster controllers, which act as the proxies that talk to network switches.
The regional controller also keeps track of network state. If a cluster controller fails, its replacement can learn its state from the regional controller.
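The failover behavior just described—the regional controller holding authoritative network state so a replacement cluster controller can recover it—can be sketched as follows. Class and method names are hypothetical illustrations of the tiering, not Microsoft's actual controller code:

```python
class RegionalController:
    """Upper tier: keeps authoritative state per cluster controller."""
    def __init__(self):
        self.state = {}  # cluster_id -> network state snapshot

    def checkpoint(self, cluster_id, state):
        self.state[cluster_id] = dict(state)

    def recover(self, cluster_id):
        return dict(self.state.get(cluster_id, {}))


class ClusterController:
    """Lower tier: proxies to switches, checkpoints state upward."""
    def __init__(self, cluster_id, regional):
        self.cluster_id = cluster_id
        self.regional = regional
        # A freshly started controller learns its state from the region.
        self.state = regional.recover(cluster_id)

    def apply(self, switch, config):
        self.state[switch] = config
        self.regional.checkpoint(self.cluster_id, self.state)


regional = RegionalController()
c1 = ClusterController("cluster-1", regional)
c1.apply("tor-switch-7", {"vlan": 100})

# cluster-1 fails; its replacement recovers state from the regional tier.
replacement = ClusterController("cluster-1", regional)
print(replacement.state)  # {'tor-switch-7': {'vlan': 100}}
```

The tiering keeps the blast radius small: losing a cluster controller costs nothing but a state replay from the region, and the regional controller never has to talk to individual switches.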
The tiered approach looks like it’s going to be common in large networks. AT&T wants to use tiered SDN controllers as well. A controller based on OpenDaylight Project code would be responsible for a global view, overseeing local controllers based on either ONOS (for white box switches) or OpenContrail (for virtual routers and virtual switches).
II. IaaS 2.0 (DETAILS)
Virtual Machines service with Resource Manager: New scalable Resource Manager for IaaS
⇒ Compute resources: Virtual machines, VM extensions
⇒ Storage resources: Storage accounts (blobs)
⇒ Networking resources: Virtual networks, Network interface cards (NICs), Load balancers, IP addresses, Network Security Groups
Azure Resource Manager V2 – the new management model for IaaS
Source: Microsoft, May 2015
⇒ Faster Scalability, Larger overall deployments
⇒ Ability to make parallel configuration changes
⇒ Templates enable single-click deployments of complex applications into a resource group. A resource group is a container that holds all related elements of an application and can be managed as a single unit, providing granular access control via role-based access control (RBAC)
⇒ A single unified Azure Stack for the Microsoft Cloud (public cloud, private cloud and hosted cloud)
FOR MORE TECHNICAL INFORMATION WATCH THE FOLLOWING VIDEO:
May 5, 2015, Microsoft Ignite: Taking a Deep Dive into Microsoft Azure IaaS Capabilities
April 29, 2015: Azure peaks in the valley: New features and innovation By Vibhor Kapoor, Director, Product Marketing, Microsoft Azure, in the Microsoft Azure Blog
At the core of every Azure innovation is our focus on solving the needs of developers and ISVs. Today at //build, we announced exciting updates to Azure which enable developers of all types with the flexibility to build cloud apps and services across multiple devices and platforms. With the updates announced today, Microsoft has the most complete platform for predictive analytics and intelligent applications, empowering enterprises to realize the maximum value from their data.
SQL Database Enhancements
As Scott [Guthrie] shared on stage this morning, we made a number of updates and enhancements to SQL Database. Developers building software-as-a-service (SaaS) applications can leverage SQL Database to provide flexibility to support both explosive growth and profitable business models. ….
Azure Data Lake
As part of Microsoft’s big data and analytics portfolio of products, we pre-announced Azure Data Lake, a hyper scale repository for big data analytic workloads. … Azure Data Lake is a Hadoop File System compatible with HDFS that works with the Hadoop ecosystem providing integration with Azure HDInsight and will be integrated with Microsoft offerings such as Revolution-R Enterprise, industry standard distributions like Hortonworks and Cloudera, and individual Hadoop projects like Spark, Storm, Flume, Sqoop, Kafka, etc. …
Azure SQL Data Warehouse
We are also pleased to pre-announce Microsoft Azure SQL Data Warehouse. As part of Microsoft’s extension to Data Warehousing, Azure SQL Data Warehouse is an elastic data warehouse-as-a-service with enterprise-grade features based on SQL Server’s massively parallel processing architecture. It provides customers the ability to scale data, either on-premises or in our cloud. …
Azure Service Fabric
Today we are excited to make available the developer preview of Azure Service Fabric [a new PaaS platform announced on April 20th] – a high control platform that enables developers and ISVs to build cloud services with a high degree of scalability and customization. As we discussed last week, Service Fabric supports creating both stateless and stateful microservices – an architectural approach where complex applications are composed of small, independently versioned services – to power the most complex, low-latency, data-intensive scenarios and scale them into the cloud. [Azure Service Fabric is a mature technology that Microsoft is making available to customers for the first time, having powered Microsoft products and services for more than 5 years and being in development for the last 10 years.] …
Azure Resource Manager Support for VMs, Storage and Networking
Azure Resource Manager Support for Virtual Machines, Storage and Networking is now available in public preview. Azure Resource Manager templates enable single click deployments of complex applications into a resource group. A resource group can contain all elements of an application and can be managed as a single unit, providing granular access control via role-based access control (RBAC). Furthermore, you have the ability to tag resources so you can better manage them with a granular understanding of costs. We will also have a starting set of more than 80 templates available in GitHub at preview release.
As part of our Azure Resource Manager availability, we are announcing partnerships across a broad set of PaaS, orchestration and management partners building on the new scalable Resource Manager for IaaS, including Cloud Foundry, Mesosphere, Juju, Apprenda, Jelastic and Scalr. We will also make available templates for Apprenda and Mesosphere directly in GitHub. The initial set of templates will also include many open-source solutions from many sources, with templates for MySQL, Chef, ElasticSearch, Zookeeper, MongoDB, and PostgreSQL. For more information, please visit https://azure.microsoft.com/en-us/documentation/articles/resource-group-overview/
[⇒Azure Resource Manager Overview]
Applications are typically made up of many components – maybe a web app, database, database server, storage, and 3rd party services. You do not see these components as separate entities; instead you see them as related and interdependent parts of a single entity. You want to deploy, manage, and monitor them as a group. Azure Resource Manager enables you to work with the resources in your application as a group. You can deploy, update or delete all of the resources for your application in a single, coordinated operation. You use a template for deployment, and that template can work for different environments such as testing, staging and production. You can clarify billing for your organization by viewing the rolled-up costs for the entire group.
Azure Resource Manager natively integrates access control into the management platform so you can specify which actions a user in your organization can take for a resource group.
This topic describes resources, groups, and templates using the preview portal to demonstrate the concepts. However, you can also create, manage, and delete Azure resources using the Azure CLI for Mac, Linux, and Windows as well as PowerShell.
A resource group is a container that holds related resources for an application. The resource group could include all of the resources for an application, or only those resources that are logically grouped together. You can decide how you want to allocate resources to resource groups based on what makes the most sense for your organization.
There are some important factors to consider when defining your resource group:
- All of the resources in your group must share the same lifecycle. You will deploy, update and delete them together. If one resource, such as a database server, needs to exist on a different deployment cycle it should be in another resource group.
- Each resource can only exist in one resource group.
- You can add a resource to or remove it from a resource group at any time.
- A resource group can contain resources that reside in different regions.
- A resource group can be used to scope access control for administrative actions.
In the Azure preview portal, all new resources are created in a resource group. Even if you create just a single resource such as a web site, you must decide whether to add that resource to an existing group or create a new group for that resource.
The following image shows a resource group with a web site, a database, and Application Insights.
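The resource-group rules in the list above can be condensed into a small, hypothetical Python model (the group and resource names mirror the example just mentioned; the class is my own illustration, not an Azure SDK API):

```python
# Minimal sketch of the resource-group invariants described above:
# every resource lives in exactly one group at a time, and a single
# group may contain resources from different regions.
class ResourceGroups:
    def __init__(self):
        self.owner = {}   # resource name -> group name
        self.groups = {}  # group name -> {resource name: region}

    def add(self, group, resource, region):
        # Adding a resource to a new group implicitly removes it from
        # its old one, because a resource can exist in only one group.
        old = self.owner.get(resource)
        if old is not None:
            del self.groups[old][resource]
        self.owner[resource] = group
        self.groups.setdefault(group, {})[resource] = region

    def regions(self, group):
        return set(self.groups.get(group, {}).values())

rg = ResourceGroups()
rg.add("app-rg", "website", "West US")
rg.add("app-rg", "database", "North Europe")  # one group, two regions
rg.add("other-rg", "website", "West US")      # moves it out of app-rg
```

The lifecycle rule (deploy, update and delete together) is then simply "operate on one `groups[...]` entry at a time".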
To understand the significance of the Azure Resource Manager availability in terms of partnerships, I’ve selected the Jelastic case, which I consider one of the most important ones.
Jelastic support was announced as a unique “multilingual PaaS with a worldwide network of providers“. Two other popular 3rd-party PaaS solutions were announced for support at the same time: Cloud Foundry, “an open source PaaS platform”, and Apprenda, “an Enterprise Private PaaS offering”. These announcements were made a week after the Service Fabric application platform announcement (April 20, 2015) to show Microsoft’s “strong commitment to offering customers the most choice and flexibility on the Azure platform, and with this wide range of PaaS offerings, whether Microsoft or Partner delivered, meet all customer needs in development, deployment, and management.”
- Zero Code Change: deploy any new or legacy applications
- Easy Migration: no restrictions, back and forth – Jelastic’s unique feature
- Automatic horizontal scaling by triggers
- Automatic vertical scaling for legacy application systems
- Secure containers
- High availability on application and hardware levels
- Live migration
- Smart distribution of containers
- Hibernation of inactive containers
- SSH access and open API
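The "automatic horizontal scaling by triggers" item above amounts to threshold rules on load metrics. A minimal sketch, assuming CPU as the metric (the threshold values and limits are made-up illustrations, not Jelastic's actual defaults):

```python
# Hypothetical trigger-based horizontal scaler: add an instance when
# load crosses an upper trigger, remove one below a lower trigger,
# always staying within configured bounds.
def scale(instances, cpu_percent, add_above=70, remove_below=20,
          min_instances=1, max_instances=8):
    if cpu_percent > add_above and instances < max_instances:
        return instances + 1
    if cpu_percent < remove_below and instances > min_instances:
        return instances - 1
    return instances
```

Keeping the add and remove triggers well apart (70 vs. 20 here) avoids flapping, where a newly added instance immediately drops average load below the removal threshold.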
More details for the above are available from the Jelastic Slideshare site:
Jelastic is the first company that combined unlimited PaaS ease of use (developer productivity) and IaaS flexibility (agility) within a single platform. Jelastic is a venture-backed enterprise software company that enables hosting service providers, systems integrators, enterprises and OEMs to unleash the full potential of the cloud to generate superior ROI & efficiencies. Jelastic’s proven technology is deployed by over 150,000 developers and more than 50 private & public cloud providers around the world. Jelastic supports popular programming languages such as Java, PHP, Ruby, Node.js, Python & .NET.
Let’s first watch the Elastic Java, PHP, Ruby, Python and Node.js in the Cloud, by Jelastic video of January 4, 2015 on the JelasticCloud YouTube channel. It is only one and a half minutes long and very effectively presents the Jelastic value proposition as a solution:
Next watch John Derrick, CEO of Jelastic [till September 2014 when founder and then CTO, Ruslan Synytsky took back the CEO position] explaining in this SYS-CON.tv interview at the 14th International Cloud Expo® (http://www.CloudComputingExpo.com/), held June 10-12, 2014 that “Jelastic is focused on getting people to the cloud sooner, easier, without having to go to new APIs or different standards, to give them the full benefit of the cloud right away“.
Next is to understand the .NET/Windows Hosting Beta Support available since March 4, 2015:
Jelastic has begun beta testing of .NET/Windows hosting support. As part of it, the following servers are provided:
The detailed information on these servers can be found within the corresponding linked guides.
And now let’s reveal the required steps for providing .NET/Windows hosting at your Jelastic Platform:
- Hardware Nodes Requirements
- Required Licenses
- Enabling Jelastic .NET/Windows Hosting
- Windows JCA Settings
Hardware Nodes Requirements
If you’d like to get the ability to provide .NET/Windows hosting services for end-users, your Jelastic cluster needs to contain separate Windows-dedicated hardware node(s). The hardware requirements for such nodes are similar to the ones for Linux-based user nodes.
For the detailed requirements and assistance, please contact your dedicated Jelastic account manager.
… <read the details in the original place>
Enabling Jelastic .NET/Windows Hosting
… <read the details in the original place>
++ April 2, 2015: A New Resident at Jelastic Polyglot Platform: .NET/Windows Beta Hosting
++ April 4, 2015: Test-Drive the Strong Sides of Windows Cloud Hosting with Remote Desktop Access
++ April 21, 2015: From IDE to the Cloud within a Minute – Experience the Direct and Blazingly Fast .NET Projects Deployment
… Besides the set of well-known outstanding Jelastic features like auto scalability, live migration and high availability, the .NET hosting at our PaaS provides tight integration with Microsoft Visual Studio, the most popular IDE for .NET app development. Such a collaboration became possible thanks to the Web Deploy tool, which is integrated with both the IIS application server and Visual Studio, allowing you to host your projects inside the Cloud really easily and quickly.
Speaking more specifically, with Jelastic, you can benefit from the immediate deployment of your .NET projects directly inside your environment, in a matter of a few minutes. Nevertheless, you can still follow the more traditional deployment approach, i.e. package your project into a single archive and upload it to the dashboard, where you’ll be able to deploy it manually to any of your environments, at any time you need and without any additional software requirements. …
March 4, 2015: Windows Virtual Private Server (VPS) 2008/2012
Providing the beta support of .NET/Windows hosting, Jelastic allows developers and ISV companies to host web applications and services that run on the Windows operating system. Besides commonly popular software stacks like the IIS web server and the MSSQL database server, you can get a separate dedicated virtual private server with the appropriate OS running inside.
Windows VPS hosting at Jelastic provides all the functionality of a virtual private server with the availability of the cloud, backed by the strength of the Windows Operating System:
- Virtual Private Server (VPS) is a term used to refer to a virtual independent machine that operates with a separate OS copy. Although running on the same physical computer as other customers’ VMs, a virtual private server is in many respects functionally equivalent to a separate physical computer, since it’s dedicated to the individual customer’s needs, has the corresponding level of privacy, and can be configured to run server software.
- The Jelastic Cloud ensures that each account is fully isolated, so every VPS user works with their own Windows cloud server and gets deep configurability, high performance and security guarantees. And thanks to the advanced management possibilities provided, the required container can be configured via RDP, with the connection established using either the inbuilt Guacamole HTML5 tool or a local RD client.
- The Windows Server OS (2012 and optionally 2008 R2 versions are provided) allows you to run any Windows-based software on your virtual machine, with support for such widely used tools and technologies as ASP.NET, PHP, SQL, Visual Studio, Active Directory, etc. The inbuilt Server Manager will help you adjust your server the way you need by setting the appropriate roles and features.
Taking advantage of these combined solutions, you can easily get a highly available, always ready to automatically scale (based on your needs) bare virtual machine with the Windows Server OS installed – just follow a few simple steps below.
…<read the details in the original place>
Then read the Jelastic on Azure post of June 3, 2015 by Kundana Palagiri, Senior Program Manager, Microsoft Azure:
At //build this year, Azure announced support for Jelastic PaaS on Azure IaaS. This is yet another PaaS solution that can now run seamlessly on Azure! Today Jelastic is available in Azure in two modes – Jelastic Hybrid cloud and a Virtual Private cloud in Azure – along with deployment tutorials to get started. This blog provides an overview for getting started with Jelastic on Azure.
Jelastic is a multilingual PaaS solution that supports Java and other popular programming languages like PHP, Ruby, etc., and requires no code changes for cross-platform deployment. With this integration, the Jelastic platform is available for automatic and seamless installation on top of Azure, providing a fast and easy way to get into the cloud. Using Azure Marketplace, any ISV or enterprise customer can create the Jelastic Orchestrator in the Azure cloud. It’s provided via an Azure Marketplace image with ‘Bring your own License’ support. This image provides easy installation of a dedicated Jelastic Virtual Private Cloud on Azure with simple configuration steps.
To get started, find the Jelastic image in Azure marketplace and follow the deployment guide here.
Once the private cloud environment is set up, you can deploy and manage applications on Azure just like you would on any other cloud platform. After this you can deploy a Java, Ruby, Python or .NET application to Azure using the deployment tutorials that are available here.
In the second scenario you can extend your private cloud environment to Azure by creating Jelastic Hybrid PaaS on Azure using the deployment guide. This is ideal for burst to Azure or disaster recovery scenarios.
We also continue to work with our partners to bring to Azure solutions that offer more choice and flexibility to our customers and we’d love to hear from you about solutions that we can integrate with Azure for a seamless experience.
We hope you give this a try and leave us feedback below on how we can make this experience even better!
Also read this July 9, 2015 Guest Post: Why You Should Deploy Jelastic Hybrid Cloud on Azure By Tetiana Fydorenchyk, Director of Marketing at Jelastic on Microsoft Azure Partner Blog
Jelastic provides a cloud platform for container orchestration with entire freedom of choice from technology to vendor and an advanced level of DevOps workload mobility.
With multi-region support within different data centers and clouds, Jelastic offers a hybrid cloud solution with advanced automation for certified containers. The intended outcome is the ability to distribute various companies’ workloads in a variety of regions, within one hybrid cloud.
The multi-region feature makes cloud hosting universal by aggregating and orchestrating various types of hardware, IaaS and third-party cloud tools within a single Jelastic installation. Such an approach doubles efficiency by ensuring extra distribution possibilities for both hosting service providers/ISV companies and their customers. It allows the first group to grow locally and also conquer the remote market and gives the second group impressive flexibility in application life cycle management and smart organization of the dissemination policy.
Jelastic has already integrated with Microsoft Azure, enabling ISVs, hosting providers or enterprise customers to allocate extra regions using the Azure Marketplace. As a result, Jelastic Hybrid Cloud is available with the following benefits:
- Expand hosting business to more countries by selling resources from many data centers (19 compute regions of Azure).
- Burst to Azure in case of temporary applications’ load spikes or when additional computing power is needed.
- Disaster recovery using Azure.
- Backup to Azure.
- Allocate a dedicated region for a particular type of user (e.g. for enterprise clients separately).
- Ability to migrate the projects among different regions, depending on the current development stage.
To reveal in more detail how your desired hybrid cloud can be implemented, follow these step-by-step instructions to easily add extra Azure regions to your Jelastic cluster.
So it is time to understand Why Developers Choose Jelastic as explained by Dmitry Lazarenko, Director of Business Development, Jelastic in this September 22, 2014 video:
And here is the Jelastic Standard Edition on Microsoft Azure Marketplace page for an overview of Jelastic’s capabilities:
Jelastic provides a turnkey Private, Public and Hybrid Cloud management platform that brings together enterprise-grade PaaS and container-native IaaS. The platform supports the most popular programming languages such as Java, PHP, Ruby, Node.JS and Python and provides the maximum application density, the fastest deployment model without coding to proprietary APIs and the easiest management for cloud solutions, while driving down TCO and increasing agility. Jelastic features:
- User-friendly self-service portals for developers and IT-operations for full management of the apps and the cluster itself
- Full scaling automation of enterprise applications (not just horizontal but vertical as well)
- Support of popular programming languages and technologies (Java, Java EE, PHP, Ruby, Node.js, Python, Docker®, Rocket) for hosting any cloud application, without code changes (even legacy apps)
- Supported databases: MySQL, PostgreSQL, Cassandra, Redis, Neo4J, OrientDB, Memcached
- Smooth migration between dev-test-production stages of an application’s lifecycle
- Optimal continuous delivery process arrangement and management
- Support of both stateful and stateless architectures
- Automation of application packaging for SaaS-application’s delivery
Supported languages: Java, PHP, Node.JS, Ruby and Python
Supported application servers: Tomcat, JBoss, GlassFish, TomEE, Wildfly, Jetty, NGINX, Apache
Supported data services: MySQL, MariaDB, PostgreSQL, Cassandra, Redis, Neo4J, OrientDB, CouchDB
BASE OS: Linux
Finally, here is the Virtual Private Cloud for DevOps – Ruslan Synytsky, WHD[WorldHostingDays].usa 2015 post of June 4, 2015 on the Jelastic blog, which presents Jelastic as a “new solution … to provide cloud-in-a-box for DevOps enterprise teams“. Network operators and telcos alike should take note, I will specifically add, as specific companies in that space are mentioned as well.
Recently Ruslan Synytsky, Jelastic CEO [since September 2014 when he took back the CEO position from Derrick], visited WHD.usa and presented a session about Virtual Private Cloud for DevOps, with a market overview for hosting providers and new trends in cloud hosting. Below is a transcribed short version of the presentation and the full video of the session. Download the presentation
Today I will talk about new opportunities for hosting companies to attract enterprise customers. I believe you have all heard about the containers and Docker hype, and I would like to explain how hosting providers can make money on this DevOps trend.
First of all – waves. Waves are in our life everywhere and your business is like a wave, sometimes it’s surging higher and sometimes it’s receding. We need to think about the future – how to catch the wave. We were smart in 2011 and started to use containers at that time and our partners are already riding this wave, as they know this area very well.
And today I would like to speak about the next big thing, that is coming to the hosting industry.
What is Jelastic?
Jelastic is a complex solution. It provides Public, Private and Hybrid clouds.
We are working with hosting service providers, systems integrators and enterprise DevOps teams. Our ecosystem today is not really big – we have about 30 public cloud hosting service providers and about 10 private cloud customers, because we are still a startup. But we have been on the market for about 4 years, so we know this area very well.
This is a high-level overview of our solution. It can be installed on top of bare metal hardware or on top of any public cloud infrastructure. We support containers: Virtuozzo containers, Docker® containers and Rocket containers as well. And on top we have Jelastic smart management with three panels: one panel for Dev guys, one panel for Ops guys and one panel for small and medium businesses.
What is DevOps?
DevOps is a new approach to developing software. It helps to speed up time-to-market, automate the pipeline and reduce the number of failures in production. So, actually, it helps companies to save money.
Below is the main picture in DevOps. It is the Pipeline of the application delivery to production. As you can see, an application or template can be stored in some registry or hub registry. Dev guys pull it to a Dev environment, and after that they move it to a test environment, then to a stage environment, and after that to production. So automation of this Pipeline is very important, because it’s not really easy to build an automated Pipeline.
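That dev → test → stage → production pipeline can be sketched in a few lines of Python (the environment names come from the talk; the promotion logic itself is my own minimal illustration):

```python
# Sketch of the delivery pipeline described above: a template is
# pulled from a registry into dev and promoted one environment at a
# time until it reaches production.
PIPELINE = ["dev", "test", "stage", "production"]

def promote(current):
    """Return the next environment, or None once in production."""
    i = PIPELINE.index(current)
    return PIPELINE[i + 1] if i + 1 < len(PIPELINE) else None

def full_path(start="dev"):
    """Walk an application through every promotion step."""
    env, path = start, [start]
    while (nxt := promote(env)) is not None:
        env = nxt
        path.append(env)
    return path
```

Automating exactly this walk, with tests gating each `promote` call, is what the pipeline tooling the talk mentions is for.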
Containers and Docker®
Why did Docker® become so popular? Because they changed the perception of containers. They introduced containers for developers. In the past containers were used by hosting service providers, by many of them, but they were not very popular. Today enterprises are looking for solutions with containers.
Why is this important? Because it provides much better flexibility, higher density, elasticity and portability. So it’s easy to migrate an application from one cloud to another.
Containers are not a new technology. This is like an evolution path. And it’s not finished yet. Smart containers are coming.
Anyway, container orchestration is a challenge – it’s not easy to build smart container orchestration. And if you try to do it yourself, I believe you’ll spend a lot of time and effort. And you need to know which kind of solution is best to use so as not to make a mess out of this.
And today we see several solutions on the market: Tutum, a startup that does a really good job on container orchestration; Amazon, Google, IBM and Azure, which already offer container services; as well as CoreOS, who recently introduced Tectonic. As you can see, all the big players have implemented containers into their platforms.
And containers have been in use in Jelastic since 2011. We use Virtuozzo containers, which is a really good solution and much more secure than LXC, the default Docker® containers.
Virtual Private Cloud
Now about Virtual Private Cloud. What exactly can you sell to end customers, and where are the new markets?
In the image you can see a cloud with 3 regions. One region is for small, trial/beta users – you can put them all into a shared cloud, like shared hardware.
And big customers are looking for dedicated clouds. You can provide a dedicated cluster specifically for one customer, and you can create one region and then another region for another customer, because big customers will not be willing to put their solution inside a shared hardware region. Each region is a separate Virtual Private Cloud, and you can use a single orchestration platform to manage all of these regions.
Hybrid Virtual Private Cloud
Hybrid Virtual Private Cloud – what does it mean? Just imagine that you have enterprise customers that have sensitive data. They don’t want to put their data into your public cloud, but they can install Jelastic on top of their hardware on premises, use a region from your data center, and burst to your data center when they need more resources, to do some kind of testing, or to create some kind of backup solution. This is Hybrid Virtual Private Cloud.
Imagine you have several data centers where you can install Jelastic and provide services for enterprise customers, ensuring high availability across data centers because you can build a container highway between them.
For example – big projects and companies, like Liquid Robotics (the father of Java, James Gosling, is the CTO of this project). They were looking for a solution with multi-continent high availability, because they manage robots: if they lose their connection to a data center, they will lose their robots.
Who are the target customers?
Enterprises, like systems integrators, telcos, industrial companies, banks, retail companies.
DevOps and Developer teams – ISV’s, outsourcing teams, software development agencies, gambling, IT and consulting services.
Some of our partners are working with government organizations as well. They provide services for educational institutions, health-care, for financial IT departments and others.
- Support of Java, PHP, Ruby, Python, Node.js and .NET, as well as Docker®
- Zero Code Change, no lock-in, easy migration to and from the cloud
- Automatic horizontal scaling by triggers
- Automatic vertical scaling for legacy application systems
- Secure containers
- High availability on application and hardware levels
- Live migration
- Smart distribution of containers
- Hibernation of inactive containers
- SSH access and open API
As you can see, it is a new solution for enterprises. Before, enterprises were just buying dedicated hardware; now you can combine hardware together with software and provide cloud-in-a-box for DevOps enterprise teams.
Register to try out Jelastic Virtual Private Cloud – install in hours, test for weeks.
As additional information: Jelastic Trinity Release 3.3: Seamless Union of Hybrid Cloud with Public and Private Cloud Options
Jelastic introduces Hybrid Cloud with multi-region support for reaching workload mobility across different data centers and clouds, in one seamless multi-cloud solution
Malaga, Spain, June 9, 2015 – Jelastic, Inc., the cloud company that provides a platform for container orchestration with entire freedom of choice from technology to vendor and an advanced level of DevOps workload mobility, today launched the new 3.3 version of the platform, aptly named “Trinity” to represent the union of three cloud options supported from this release. With multi-region support within different data centers and clouds, the company now officially offers a Hybrid Cloud solution alongside Public and Private. The intended outcome is the ability to distribute various companies’ workloads in a variety of regions, within one Hybrid Cloud.
In response to the words of Trinity from The Matrix: “The answer is out there, Neo, and it’s looking for you, and it will find you if you want it to.” Ruslan Synytsky, Jelastic CEO, said, “We consider each of our customers to be the Chosen One, and provide them with answers to the issues they face while developing applications, offering cloud services and building businesses. And in this release, our clients and partners found the answer on how to use multiple data centers within just one multi-cloud platform. The hardware equipment as well as cloud resources may differ in parameters, or belong to another data center, located in another country, and that opens the doors to new markets and more business opportunities.”
Hybrid Cloud with Multiple Regions
The basic premise of the multi-region feature is the provided ability to use various types of hardware and any IaaS or cloud services, whether differing in parameters or in geographical location. It produces the biggest benefits for hosting service providers, systems integrators and enterprises by allowing them to:
- Conquer a new market or cooperate with various hosting providers from different countries by creating a region in a remote location
- Significantly improve the response time for users closer to the data center, providing geo distribution of applications/data
- Keep sensitive data on a more expensive private cloud, while insensitive data/apps can be stored on a lower-priced public cloud
- Use additional regions from external clouds in case of temporary burst, with no need to invest in hardware for variable loads
- Reach lower TCO and higher ROI by providing a combination of regions with hardware of different capacity – critical workloads can be hosted on stable, security-enhanced and as a result, more expensive clouds. Non critical workloads can be hosted on cheaper inferior clouds
- Gain disaster recovery and high availability across multiple data centers
- Implement complex access policies and manage hardware usage permissions, for example, providing separate regions for particular team groups like coders, QA, ops that are involved in DevOps workflows, as well as creating dedicated regions for important customers
- Provide the ability to add cloud resources from public clouds like Microsoft Azure, IBM SoftLayer, Google Cloud and AWS
Simultaneously, end users and developers will be able to benefit from the following possibilities:
- Choose between higher quality or more affordable hardware
- Easily relocate the projects to the superior hardware with the help of environment migration
- Host applications in a trusted local data center of the preferred hosting service provider
- Achieve higher availability through geo-distribution by locating the applications on several data centers around the world
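As a rough illustration of the placement policies in these two lists, here is a hedged sketch that routes sensitive or critical workloads to a private region and everything else to a cheaper public region near the user (the region names and the policy itself are hypothetical, not Jelastic's implementation):

```python
# Illustrative multi-region placement policy: sensitive/critical
# workloads go to the private cloud; everything else goes to a
# lower-priced public region chosen by the user's geography.
PUBLIC_REGIONS = {"eu": "azure-west-europe", "us": "azure-east-us"}
DEFAULT_PUBLIC = "azure-east-us"

def choose_region(sensitive, critical, user_geo="eu"):
    if sensitive or critical:
        return "private-dc"
    return PUBLIC_REGIONS.get(user_geo, DEFAULT_PUBLIC)
```

Replicating the same workload through `choose_region` with different `user_geo` values is then the geo-distribution/disaster-recovery case from the list above.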
Today, Jelastic is available as a public cloud solution via 36 public cloud hosting service providers across the world. A seamless Hybrid Cloud integration with the help of the multi-region feature is advantageous for these partners who have their own data centers, but who want to offer cloud services from several locations for their customers. It is also ideal if they have customers with temporary needs for a large burst during the load spikes and they do not want to invest in extra hardware.
“Jelastic Trinity is a milestone release, pulling the two public and private cloud threads of Jelastic’s offering together, forming that important third platform capability – hybrid cloud. Opening new markets and exciting collaboration opportunities for Layershift as the UK’s leading Jelastic partner,” said Damien Ransome, Service Director at Layershift. “I’m expecting to see greater collaboration between Jelastic hosting partners and also between hosting partners and ISVs. Jelastic Trinity will unlock the potential of each partner’s individual success story – through strengthening the value proposition of collaborating hosting partners, whilst ensuring that end users benefit from the increased flexibility and reduced lock-in uniquely offered by the Jelastic ecosystem.”
In addition, the multi-region feature benefits systems integrators and enterprises that actively use private clouds: by adding new regions in different clouds, they gain greater workload mobility, expand their existing options, and achieve even higher availability for their applications.
“We are very happy that Jelastic has released a hybrid cloud option with support for multiple regions,” said Miguel Hormigo, South Regional Manager at GMV. “GMV is currently using Jelastic private cloud for our internal development and testing processes and as an offering to our customers who are looking for a secure and cost efficient alternative. The biggest benefit that we foresee from the Trinity release is the ability to provide our clients with a hybrid cloud solution, gaining more control over portability of their applications and data.”
Such a hybrid cloud is an advanced extension of the private cloud-in-a-box solution that Jelastic actively offers to enterprise customers. As a result, they can seamlessly combine hardware from Dell, IBM, HP and other vendors within one cloud platform, using each region for the specific demands of their business.
“Jelastic’s unique private cloud-in-a-box solution, based on Dell’s PowerEdge VRTX, enables rapid application development and deployment through the cloud,” said Juergen Domnik, service provider director, Dell EMEA. “We’ve seen, and been surprised, by the different ways customers are getting value from the Dell PowerEdge VRTX. Its versatility means that small to medium businesses or franchises can optimise office IT environments, or in this instance serve as an ideal platform for DevOps. We’re delighted to provide this solution to Jelastic to better enable their customers.”
Jelastic in a Hybrid Cloud with Azure
As the first practical step in providing a hybrid cloud, Jelastic has already integrated with Azure, enabling ISVs, hosting providers, and enterprise customers to allocate extra regions through the Azure Marketplace. As a result, Jelastic Hybrid Cloud will be available to enterprises and existing Jelastic hosting providers with the following benefits:
- Expand a hosting business to more countries by selling resources from many data centers (19 Azure compute regions)
- Burst to Azure during temporary application load spikes or whenever additional computing power is needed
- Disaster recovery using Azure
- Backup to Azure
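The "burst to Azure" item above boils down to a capacity decision: serve demand from the local private region first, and spill the overflow to a public-cloud region. The toy function below sketches that idea only; the capacity figure and region label are invented for the example and are not part of Jelastic's or Azure's actual interfaces:

```python
# Illustrative cloudburst decision, not a real Jelastic/Azure API call:
# serve demand locally first, then burst the overflow to an Azure region.

LOCAL_CAPACITY = 100  # hypothetical compute units in the private region

def place_load(demand: int) -> dict:
    """Split demand between the local data center and an Azure burst region."""
    local = min(demand, LOCAL_CAPACITY)
    burst = max(0, demand - LOCAL_CAPACITY)
    return {"local": local, "azure-burst": burst}
```

Disaster recovery and backup follow the same pattern in reverse: the Azure region holds standby capacity and copies of data that the local region can fail over to.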
“We’re excited to be working closely with Jelastic to deliver their Hybrid Cloud to our customers,” said Corey Sanders, Director of Program Management at Microsoft Azure. “As strong hybrid cloud offerings are the cornerstone for Microsoft’s cloud differentiation, we are thrilled to bring additional cloud infrastructure options and offerings to our customers through Jelastic’s inclusion in the Azure marketplace.”
June 10, 2015: DevOps with Containers for Microservices (slides only at the moment) By Ruslan Synytsky at the DevOps Summit 2015 East on Cloud Expo
Containers have changed how IT thinks about DevOps. They enable developers to work with identical dev, test, staging, and production environments. Containers provide the right abstraction for microservices, and many cloud platforms have integrated them into deployment pipelines. Together, DevOps and containers help companies achieve their business goals faster and more effectively.
In his session at DevOps Summit, Ruslan Synytsky, CEO and Co-founder of Jelastic, reviewed the current landscape of DevOps with containers. In addition, he discussed known issues and solutions for enterprise applications in containers.
The Microservices slide from that deck:
Microservices: a software architecture design pattern in which complex applications are composed of small, independent processes, each with the following characteristics:
- responsibility for one functionality
- independence within microservices
- truly loosely coupled
- relative independence within different teams
- easier testing
- continuous delivery or deployment
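A minimal sketch of what "responsibility for one functionality" and loose coupling look like in practice: a tiny service that does exactly one job (price formatting) behind a JSON request handler, so other services depend only on its interface, not its internals. This example is illustrative and not taken from the slide deck:

```python
# Illustrative microservice core: one responsibility, exposed through a
# JSON handler so other services stay loosely coupled to it.

import json

def format_price(cents: int) -> str:
    """The service's single responsibility: format cents as dollars."""
    return f"${cents / 100:.2f}"

def handle_request(query: dict) -> bytes:
    """HTTP-style handler body: parse input, delegate, serialize JSON."""
    cents = int(query.get("cents", "0"))
    return json.dumps({"formatted": format_price(cents)}).encode()
```

Wrapped in any HTTP server and packaged in its own container, such a process can be tested, deployed, and scaled by its team independently of the rest of the application, which is the point of the characteristics listed above.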
Related blog posts by Jelastic:
– March 3, 2015: Webinar Roundup – Multi-Containers Orchestration with Live Migration and HA for Microservices
– March 16, 2015: Multi-Containers Orchestration with Live Migration and High-Availability for Microservices in Jelastic
– April 14, 2015: 5 Key Features to Make Containers Reliable for Production Applications
– April 27, 2015: Smart Container Orchestration within the Cloud Platform. Part1: Installation
– May 21, 2015: Smart Container Orchestration within the Cloud Platform. Part2: Configurations
More information from Jelastic:
– August 21, 2014: An In-depth Interview with Ruslan Synytsky, Jelastic Founder and CTO
– September 24, 2014: Centerprise PaaS Business Case (video)
– January 6, 2015: “Jelastic’s Mission is to Upgrade Companies to a New Level of Automation,” Ruslan Synytsky
– March 17, 2015: Jelastic 3.1 – Production Ready Docker Containers and Native .NET Based on Windows Containers
– April 27, 2015: THE DZONE GUIDE TO CLOUD DEVELOPMENT 2015 EDITION
– April 29, 2015: Jelastic Virtual Private Cloud and Hybrid Cloud are now Available in the Microsoft Azure Marketplace
– May 20, 2015: Jelastic Virtual Private Cloud on Azure. Deployment Manual
– July 8, 2015: How to Deploy Jelastic Hybrid Cloud on Azure
– June 10, 2015: Jelastic Virtual Private Cloud – Use Cases